Sample records for block-circulant deconvolution matrix

  1. Identification of methylation haplotype blocks aids in deconvolution of heterogeneous tissue samples and tumor tissue-of-origin mapping from plasma DNA.

    PubMed

    Guo, Shicheng; Diep, Dinh; Plongthongkum, Nongluk; Fung, Ho-Lim; Zhang, Kang; Zhang, Kun

    2017-04-01

    Adjacent CpG sites in mammalian genomes can be co-methylated owing to the processivity of methyltransferases or demethylases, yet discordant methylation patterns have also been observed, which are related to stochastic or uncoordinated molecular processes. We focused on a systematic search and investigation of regions in the full human genome that show highly coordinated methylation. We defined 147,888 blocks of tightly coupled CpG sites, called methylation haplotype blocks, after analysis of 61 whole-genome bisulfite sequencing data sets and validation with 101 reduced-representation bisulfite sequencing data sets and 637 methylation array data sets. Using a metric called methylation haplotype load, we performed tissue-specific methylation analysis at the block level. Subsets of informative blocks were further identified for deconvolution of heterogeneous samples. Finally, using methylation haplotypes we demonstrated quantitative estimation of tumor load and tissue-of-origin mapping in the circulating cell-free DNA of 59 patients with lung or colorectal cancer.
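
    The deconvolution step described here amounts to solving a constrained linear mixture model: an observed mixed profile is expressed as a non-negative combination of reference tissue profiles. Below is a minimal sketch of that idea, assuming a toy reference matrix of per-tissue methylation haplotype loads; the data, dimensions, and names are illustrative, not the authors' pipeline.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Toy reference matrix R: rows = informative methylation blocks,
    # columns = tissues; entries mimic per-tissue methylation haplotype loads.
    rng = np.random.default_rng(0)
    R = rng.uniform(0.0, 1.0, size=(200, 5))
    true_frac = np.array([0.70, 0.20, 0.05, 0.03, 0.02])
    y = R @ true_frac + rng.normal(0.0, 0.01, 200)   # observed mixed profile

    # Non-negative least squares gives raw tissue weights; renormalise them
    # to fractions, which play the role of tissue contributions.
    w, _ = nnls(R, y)
    fractions = w / w.sum()
    print(np.round(fractions, 3))                    # approximates true_frac
    ```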

  2. Encoders for block-circulant LDPC codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)

    2009-01-01

    Methods and apparatus to encode message input symbols in accordance with an accumulate-repeat-accumulate code with repetition three or four are disclosed. Block circulant matrices are used. A first method and apparatus make use of the block-circulant structure of the parity check matrix. A second method and apparatus use block-circulant generator matrices.

  3. Encoders for block-circulant LDPC codes

    NASA Technical Reports Server (NTRS)

    Andrews, Kenneth; Dolinar, Sam; Thorpe, Jeremy

    2005-01-01

    In this paper, we present two encoding methods for block-circulant LDPC codes. The first is an iterative encoding method based on the erasure decoding algorithm, and the computations required are well organized due to the block-circulant structure of the parity check matrix. The second method uses block-circulant generator matrices, and the encoders are very similar to those for recursive convolutional codes. Some encoders of the second type have been implemented in a small Field Programmable Gate Array (FPGA) and operate at 100 Msymbols/second.
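
    Since each circulant block is fully determined by its first row, a block-circulant generator matrix can be stored compactly and encoding reduces to cyclic shifts. A minimal GF(2) sketch of this second encoder type, with arbitrary toy dimensions rather than the ARA code structure of these papers:

    ```python
    import numpy as np

    def circulant_gf2(first_row):
        """Dense circulant matrix over GF(2) built from its first row."""
        n = len(first_row)
        return np.array([np.roll(first_row, k) for k in range(n)], dtype=np.uint8)

    # Toy block-circulant generator: a 2 x 3 grid of B x B circulant blocks,
    # each defined by its first row alone (the storage saving such encoders exploit).
    B = 8
    rng = np.random.default_rng(1)
    first_rows = rng.integers(0, 2, size=(2, 3, B), dtype=np.uint8)
    G = np.block([[circulant_gf2(first_rows[i, j]) for j in range(3)]
                  for i in range(2)])                 # (2B) x (3B) over GF(2)

    msg = rng.integers(0, 2, size=2 * B, dtype=np.uint8)
    codeword = (msg @ G) % 2                          # block-circulant encoding
    print(codeword)
    ```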

  4. Deconvolution using a neural network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehman, S.K.

    1990-11-15

    Viewing one dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and pseudo-inverse methods. This is largely an exercise in understanding how our neural network code works. 1 ref.
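
    For reference, the matrix-inversion view of 1-D deconvolution taken in this record can be sketched as follows. Only the pseudo-inverse baseline is shown (not the neural-network or LMS variants), and the kernel and signal are illustrative.

    ```python
    import numpy as np
    from scipy.linalg import convolution_matrix

    # Forward model: y = H @ x, with H the banded (Toeplitz) convolution
    # matrix of the blur kernel h; deconvolution is then a matrix inversion.
    h = np.array([0.5, 1.0, 0.5])
    x = np.zeros(50); x[[10, 25, 26, 40]] = [1.0, 2.0, 1.5, 1.0]
    H = convolution_matrix(h, 50, mode='same')         # 50 x 50
    y = H @ x + np.random.default_rng(2).normal(0, 0.01, 50)

    # Pseudo-inverse solution, the classical baseline in such comparisons.
    x_hat = np.linalg.pinv(H) @ y
    print(np.abs(x_hat - x).max())                     # small residual error
    ```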

  5. An adaptive sparse deconvolution method for distinguishing the overlapping echoes of ultrasonic guided waves for pipeline crack inspection

    NASA Astrophysics Data System (ADS)

    Chang, Yong; Zi, Yanyang; Zhao, Jiyuan; Yang, Zhe; He, Wangpeng; Sun, Hailiang

    2017-03-01

    In guided wave pipeline inspection, echoes reflected from closely spaced reflectors generally overlap, meaning useful information is lost. To solve the overlapping problem, sparse deconvolution methods have been developed in the past decade. However, conventional sparse deconvolution methods have limitations in handling guided wave signals, because the input signal is directly used as the prototype of the convolution matrix, without considering the waveform change caused by the dispersion properties of the guided wave. In this paper, an adaptive sparse deconvolution (ASD) method is proposed to overcome these limitations. First, the Gaussian echo model is employed to adaptively estimate the column prototype of the convolution matrix instead of directly using the input signal as the prototype. Second, the convolution matrix is constructed upon the estimated results. Third, the split augmented Lagrangian shrinkage (SALSA) algorithm is introduced to solve the deconvolution problem with high computational efficiency. To verify the effectiveness of the proposed method, guided wave signals obtained from pipeline inspection are investigated numerically and experimentally. Compared to conventional sparse deconvolution methods, e.g. the l1-norm deconvolution method, the proposed method shows better performance in handling the echo overlap problem in the guided wave signal.
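
    The construction described, a convolution matrix whose columns are shifted copies of an estimated echo prototype plus an l1-penalized solve, can be sketched compactly. ISTA is used below as a simpler stand-in for SALSA (both minimize the same sparse deconvolution objective), and the echo parameters are illustrative rather than estimated adaptively as in the paper.

    ```python
    import numpy as np
    from scipy.linalg import convolution_matrix

    def gaussian_echo(t, fc=5e6, bw=0.7e6):
        """Gaussian-envelope tone burst standing in for the estimated
        column prototype of the convolution matrix."""
        return np.exp(-(bw * t) ** 2) * np.cos(2.0 * np.pi * fc * t)

    fs = 50e6
    proto = gaussian_echo(np.arange(-64, 64) / fs)
    n = 600
    H = convolution_matrix(proto, n, mode='same')

    rng = np.random.default_rng(3)
    x = np.zeros(n); x[[200, 215, 400]] = [1.0, 0.8, 0.6]  # overlapping echoes
    y = H @ x + rng.normal(0.0, 0.02, n)

    # ISTA iterations for min ||y - H x||^2 + lam * ||x||_1.
    lam, L = 0.1, np.linalg.norm(H, 2) ** 2
    x_hat = np.zeros(n)
    for _ in range(300):
        g = x_hat - H.T @ (H @ x_hat - y) / L                       # gradient step
        x_hat = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft-threshold
    ```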

  6. The Twist Tensor Nuclear Norm for Video Completion.

    PubMed

    Hu, Wenrui; Tao, Dacheng; Zhang, Wensheng; Xie, Yuan; Yang, Yehui

    2017-12-01

    In this paper, we propose a new low-rank tensor model based on the circulant algebra, namely, twist tensor nuclear norm (t-TNN). The twist tensor denotes a three-way tensor representation to laterally store 2-D data slices in order. On one hand, t-TNN convexly relaxes the tensor multirank of the twist tensor in the Fourier domain, which allows an efficient computation using the fast Fourier transform. On the other hand, t-TNN equals the nuclear norm of the block circulant matricization of the twist tensor in the original domain, which extends the traditional matrix nuclear norm in a block circulant way. We test the t-TNN model on a video completion application that aims to fill missing values, and the experimental results validate its effectiveness, especially when dealing with video recorded by a nonstationary panning camera. The block circulant matricization of the twist tensor can be transformed into a circulant block representation with nuclear norm invariance. This representation, after transformation, exploits the horizontal translation relationship between the frames in a video, and endows the t-TNN model with a more powerful ability to reconstruct panning videos than the existing state-of-the-art low-rank models.
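
    The Fourier-domain quantity that t-TNN relaxes is directly computable: FFT along the third mode, then sum the nuclear norms of the frontal slices. A small sketch using one common normalisation and a toy tensor rather than video data:

    ```python
    import numpy as np

    def tensor_nuclear_norm(X):
        """Sum of nuclear norms of the frontal slices of fft(X, axis=2),
        equal (up to the 1/n3 normalisation used here) to the matrix nuclear
        norm of the block-circulant matricization of X."""
        Xf = np.fft.fft(X, axis=2)
        n3 = X.shape[2]
        return sum(np.linalg.norm(Xf[:, :, k], 'nuc') for k in range(n3)) / n3

    rng = np.random.default_rng(4)
    X = rng.normal(size=(30, 30, 8))     # toy third-order tensor
    print(tensor_nuclear_norm(X))
    ```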

  7. Extraction of near-surface properties for a lossy layered medium using the propagator matrix

    USGS Publications Warehouse

    Mehta, K.; Snieder, R.; Graizer, V.

    2007-01-01

    Near-surface properties play an important role in advancing earthquake hazard assessment. Other areas where near-surface properties are crucial include civil engineering and the detection and delineation of potable groundwater. From an exploration point of view, near-surface properties are needed for wavefield separation and correcting for the local near-receiver structure. It has been shown that these properties can be estimated for a lossless homogeneous medium using the propagator matrix. To estimate the near-surface properties, we apply deconvolution to passive borehole recordings of waves excited by an earthquake. Deconvolution of these incoherent waveforms recorded by the sensors at different depths in the borehole with the recording at the surface results in waves that propagate upwards and downwards along the array. These waves, obtained by deconvolution, can be used to estimate the P- and S-wave velocities near the surface. As opposed to waves obtained by cross-correlation, which represent a filtered version of the sum of the causal and acausal Green's function between the two receivers, the waves obtained by deconvolution represent the elements of the propagator matrix. Finally, we show analytically the extension of the propagator matrix analysis to a lossy layered medium for the special case of normal incidence. © 2007 The Authors. Journal compilation © 2007 RAS.
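
    The deconvolution of a borehole trace by the surface recording is classically stabilised by a water-level floor on the denominator spectrum. A minimal sketch of that operation; the traces, lag, and water-level value are illustrative, not the study's data.

    ```python
    import numpy as np

    def spectral_division(num, den, water=0.01):
        """Deconvolve `num` by `den` in the frequency domain, flooring the
        denominator spectrum to stabilise the classically ill-posed division."""
        n = len(num)
        N, D = np.fft.rfft(num, n), np.fft.rfft(den, n)
        floor = water * np.max(np.abs(D))
        Dsafe = np.where(np.abs(D) < floor, floor * np.exp(1j * np.angle(D)), D)
        return np.fft.irfft(N * np.conj(Dsafe) / np.abs(Dsafe) ** 2, n)

    # Toy example: the "borehole" trace is a delayed, scaled copy of the
    # surface trace, so deconvolution recovers a spike at the travel-time lag.
    rng = np.random.default_rng(5)
    surface = rng.normal(size=512)
    borehole = 0.8 * np.roll(surface, 30)
    d = spectral_division(borehole, surface)
    print(int(np.argmax(d)))             # ~30 samples
    ```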

  8. SU-E-I-08: Investigation of Deconvolution Methods for Blocker-Based CBCT Scatter Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, C; Jin, M; Ouyang, L

    2015-06-15

    Purpose: To investigate whether deconvolution methods can improve the scatter estimation under different blurring and noise conditions for blocker-based scatter correction methods for cone-beam X-ray computed tomography (CBCT). Methods: An "ideal" projection image with scatter was first simulated for blocker-based CBCT data acquisition by assuming no blurring effect and no noise. The ideal image was then convolved with long-tail point spread functions (PSF) of different widths to mimic the blurring effect from the finite focal spot and detector response. Different levels of noise were also added. Three deconvolution methods, 1) inverse filtering, 2) Wiener, and 3) Richardson-Lucy, were used to recover the scatter signal in the blocked region. The root mean square error (RMSE) of the estimated scatter serves as a quantitative measure for the performance of the different methods under different blurring and noise conditions. Results: Due to the blurring effect, the scatter signal in the blocked region is contaminated by the primary signal in the unblocked region. The direct use of the signal in the blocked region to estimate scatter (the "direct method") leads to large RMSE values, which increase with increased PSF width and increased noise. Inverse filtering is very sensitive to noise and practically useless. The Wiener and Richardson-Lucy deconvolution methods significantly improve scatter estimation compared to the direct method. For a typical medium PSF and medium noise condition, both methods (∼20 RMSE) achieve a 4-fold improvement over the direct method (∼80 RMSE). The Wiener method deals better with large noise and Richardson-Lucy works better on wide PSFs. Conclusion: We investigated several deconvolution methods to recover the scatter signal in the blocked region for blocker-based scatter correction for CBCT. Our simulation results demonstrate that Wiener and Richardson-Lucy deconvolution can significantly improve the scatter estimation compared to the direct method.
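
    For reference, the two better-performing estimators in this study can be sketched in 1-D with plain FFTs; the PSF, the noise level, the Wiener constant K, and the iteration count below are illustrative, not the abstract's settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n = 256
    psf = np.exp(-0.5 * ((np.arange(n) - n // 2) / 3.0) ** 2)
    psf /= psf.sum()                                # normalised Gaussian blur
    x = np.zeros(n); x[60:70] = 1.0; x[150] = 2.0   # toy "scatter" signal
    H = np.fft.fft(np.fft.ifftshift(psf))           # transfer function
    y = np.real(np.fft.ifft(np.fft.fft(x) * H)) + rng.normal(0, 0.01, n)

    # Wiener: frequency-domain filter with noise-to-signal constant K.
    K = 1e-3
    x_wiener = np.real(np.fft.ifft(np.fft.fft(y) * np.conj(H)
                                   / (np.abs(H) ** 2 + K)))

    # Richardson-Lucy: multiplicative updates preserving non-negativity.
    conv = lambda a, b: np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))
    psf_c = np.fft.ifftshift(psf)                   # kernel centred at index 0
    y_pos = np.maximum(y, 0.0)
    x_rl = np.full(n, max(float(y.mean()), 1e-6))
    for _ in range(50):
        ratio = y_pos / np.maximum(conv(x_rl, psf_c), 1e-12)
        x_rl *= conv(ratio, psf_c)                  # symmetric PSF: flip == itself
    ```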

  9. Novel image compression-encryption hybrid algorithm based on key-controlled measurement matrix in compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua

    2014-10-01

    The existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute, memorize or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed to realize compression and encryption simultaneously, where the key is easily distributed, stored or memorized. The input image is divided into 4 blocks to compress and encrypt, then the pixels of the two adjacent blocks are exchanged randomly by random matrices. The measurement matrices in compressive sensing are constructed by utilizing the circulant matrices and controlling the original row vectors of the circulant matrices with a logistic map. And the random matrices used in random pixel exchanging are bound with the measurement matrices. Simulation results verify the effectiveness and security of the proposed algorithm, as well as its acceptable compression performance.
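
    The key-controlled construction can be sketched compactly: a logistic-map seed (the key) generates the circulant's first row, and a partial circulant serves as the compressive sensing matrix. The block splitting and random pixel exchange of the full algorithm are not reproduced, and all parameters are illustrative.

    ```python
    import numpy as np

    def logistic_sequence(x0, n, mu=3.99):
        """Chaotic logistic-map sequence; the seed x0 acts as the key."""
        seq, x = np.empty(n), x0
        for i in range(n):
            x = mu * x * (1.0 - x)
            seq[i] = x
        return seq

    def circulant_measurement_matrix(key, m, n):
        """m x n partial circulant matrix whose generating row is keyed."""
        row = 2.0 * logistic_sequence(key, n) - 1.0    # zero-centred entries
        C = np.array([np.roll(row, k) for k in range(n)])
        return C[:m] / np.sqrt(m)                      # keep m rows for CS

    Phi = circulant_measurement_matrix(key=0.3412, m=64, n=256)
    x = np.zeros(256); x[[7, 91, 200]] = [1.0, -0.5, 0.8]  # sparse signal
    y = Phi @ x                                        # compressed measurements
    print(y.shape)
    ```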

  10. Multi-frame partially saturated images blind deconvolution

    NASA Astrophysics Data System (ADS)

    Ye, Pengzhao; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2016-12-01

    When blurred images have saturated or over-exposed pixels, conventional blind deconvolution approaches often fail to estimate an accurate point spread function (PSF) and will introduce local ringing artifacts. In this paper, we propose a method to deal with the problem under a modified multi-frame blind deconvolution framework. First, in the kernel estimation step, a light streak detection scheme using multi-frame blurred images is incorporated into the regularization constraint. Second, we deal with image regions affected by the saturated pixels separately by modeling a weighted matrix during each multi-frame deconvolution iteration. Both synthetic and real-world examples show that more accurate PSFs can be estimated and that the restored images have richer details and fewer negative effects compared to state-of-the-art methods.

  11. Image Processing Research

    DTIC Science & Technology

    1975-09-30

    ... systems a linear model results in an object f being mapped into an image g by a point spread function matrix H. Thus, with noise, g = Hf + n (1). The simplest ... linear models for imaging systems are given by space-invariant point spread functions (SIPSF), in which case H is block circulant. If the linear model is ... {I1, ..., IM-1} is a set of two-dimensional indices, each distinct and prior to k. Modeling procedure: to derive the linear predictor (block LP of figure ...

  12. Is There a Direct Correlation Between Microvascular Wall Structure and k-Trans Values Obtained From Perfusion CT Measurements in Lymphomas?

    PubMed

    Horger, Marius; Fallier-Becker, Petra; Thaiss, Wolfgang M; Sauter, Alexander; Bösmüller, Hans; Martella, Manuela; Preibsch, Heike; Fritz, Jan; Nikolaou, Konstantin; Kloth, Christopher

    2018-05-03

    This study aimed to test the hypothesis that ultrastructural wall abnormalities of lymphoma vessels correlate with perfusion computed tomography (PCT) kinetics. Our local institutional review board approved this prospective study. Between February 2013 and June 2016, we included 23 consecutive subjects with newly diagnosed lymphoma, who were referred for computed tomography-guided biopsy (6 women, 17 men; mean age, 60.61 ± 12.43 years; range, 28-74 years) and additionally agreed to undergo PCT of the target lymphoma tissues. PCT was obtained for 40 seconds using 80 kV, 120 mAs, 64 × 0.6-mm collimation, 6.9-cm z-axis coverage, and 26 volume measurements. Mean and maximum k-trans (mL/100 mL/min), blood flow (BF; mL/100 mL/min) and blood volume (BV) were quantified using the deconvolution and the maximum slope + Patlak calculation models. Immunohistochemical staining was performed for microvessel density quantification (vessels/m²), and electron microscopy was used to determine the presence or absence of tight junctions, endothelial fenestration, basement membrane, and pericytes, and to measure extracellular matrix thickness. Extracellular matrix thickness as well as the presence or absence of tight junctions, basal lamina, and pericytes did not correlate with computed tomography perfusion parameters. Endothelial fenestrations correlated significantly with mean BF deconvolution (P = .047, r = 0.418) and additionally were significantly associated with higher mean BV deconvolution (P < .005). Mean k-trans Patlak correlated strongly with mean k-trans deconvolution (r = 0.939, P = .001), and both correlated with mean BF deconvolution (P = .001, r = 0.748), max BF deconvolution (P = .028, r = 0.564), mean BV deconvolution (P = .001, r = 0.752), and max BV deconvolution (P = .001, r = 0.771). Microvessel density correlated with max k-trans deconvolution (r = 0.564, P = .023). Vascular endothelial growth factor receptor-3 expression (receptor specific for lymphatics) correlated significantly with max k-trans Patlak (P = .041, r = 0.686) and mean BF deconvolution (P = .038, r = 0.695). k-Trans values of PCT do not correlate with ultrastructural microvessel features, whereas endothelial fenestrations correlate with increased intra-tumoral BVs. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  13. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    NASA Astrophysics Data System (ADS)

    Raghunath, N.; Faber, T. L.; Suryanarayanan, S.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.

  14. Simultaneous Denoising, Deconvolution, and Demixing of Calcium Imaging Data

    PubMed Central

    Pnevmatikakis, Eftychios A.; Soudry, Daniel; Gao, Yuanjun; Machado, Timothy A.; Merel, Josh; Pfau, David; Reardon, Thomas; Mu, Yu; Lacefield, Clay; Yang, Weijian; Ahrens, Misha; Bruno, Randy; Jessell, Thomas M.; Peterka, Darcy S.; Yuste, Rafael; Paninski, Liam

    2016-01-01

    We present a modular approach for analyzing calcium imaging recordings of large neuronal ensembles. Our goal is to simultaneously identify the locations of the neurons, demix spatially overlapping components, and denoise and deconvolve the spiking activity from the slow dynamics of the calcium indicator. Our approach relies on a constrained nonnegative matrix factorization that expresses the spatiotemporal fluorescence activity as the product of a spatial matrix that encodes the spatial footprint of each neuron in the optical field and a temporal matrix that characterizes the calcium concentration of each neuron over time. This framework is combined with a novel constrained deconvolution approach that extracts estimates of neural activity from fluorescence traces, to create a spatiotemporal processing algorithm that requires minimal parameter tuning. We demonstrate the general applicability of our method by applying it to in vitro and in vivo multineuronal imaging data, whole-brain light-sheet imaging data, and dendritic imaging data. PMID:26774160

  15. Spectral identification of a 90Sr source in the presence of masking nuclides using Maximum-Likelihood deconvolution

    NASA Astrophysics Data System (ADS)

    Neuer, Marcus J.

    2013-11-01

    A technique for the spectral identification of strontium-90 is shown, utilising a Maximum-Likelihood deconvolution. Different deconvolution approaches are discussed and summarised. Based on the intensity distribution of the beta emission and Geant4 simulations, a combined response matrix is derived, tailored to the β- detection process in sodium iodide detectors. It includes scattering effects and attenuation by applying a base material decomposition extracted from Geant4 simulations with a CAD model for a realistic detector system. Inversion results of measurements show the agreement between deconvolution and reconstruction. A detailed investigation with additional masking sources like 40K, 226Ra and 131I shows that a contamination of strontium can be found in the presence of these nuisance sources. Identification algorithms for strontium are presented based on the derived technique. For the implementation of blind identification, an exemplary masking ratio is calculated.

  16. Unsupervised Learning for Monaural Source Separation Using Maximization–Minimization Algorithm with Time–Frequency Deconvolution

    PubMed Central

    Bouridane, Ahmed; Ling, Bingo Wing-Kuen

    2018-01-01

    This paper presents an unsupervised learning algorithm for sparse nonnegative matrix factor time–frequency deconvolution with optimized fractional β-divergence. The β-divergence is a group of cost functions parametrized by a single parameter β. The Itakura–Saito divergence, Kullback–Leibler divergence and Least Square distance are special cases that correspond to β=0, 1, 2, respectively. This paper presents a generalized algorithm that uses a flexible range of β that includes fractional values. It describes a maximization–minimization (MM) algorithm leading to the development of a fast convergence multiplicative update algorithm with guaranteed convergence. The proposed model operates in the time–frequency domain and decomposes an information-bearing matrix into two-dimensional deconvolution of factor matrices that represent the spectral dictionary and temporal codes. The deconvolution process has been optimized to yield sparse temporal codes through maximizing the likelihood of the observations. The paper also presents a method to estimate the fractional β value. The method is demonstrated on separating audio mixtures recorded from a single channel. The paper shows that the extraction of the spectral dictionary and temporal codes is significantly more efficient by using the proposed algorithm and subsequently leads to better source separation performance. Experimental tests and comparisons with other factorization methods have been conducted to verify its efficacy. PMID:29702629

  17. Application of Financial Risk-reward Theory to Link and Network Optimization

    DTIC Science & Technology

    2011-10-01

    In OFDM systems the matrices Vk and Uk are Fourier matrices which diagonalize a circulant or block-circulant matrix Hk [18]. In multi-antenna systems ... outage probability α = Pr(ηr ≤ ηt). Figure 13: Mean link spectral efficiency as a function of target link spectral efficiency ηt and outage probability ζ in a MIMO channel. Distribution A: Approved for public release; distribution is unlimited.

  18. Deconvolution of Energy Spectra in the ATIC Experiment

    NASA Technical Reports Server (NTRS)

    Batkov, K. E.; Panov, A. D.; Adams, J. H.; Ahn, H. S.; Bashindzhagyan, G. L.; Chang, J.; Christl, M.; Fazley, A. R.; Ganel, O.; Gunasigha, R. M.; et al.

    2005-01-01

    The Advanced Thin Ionization Calorimeter (ATIC) balloon-borne experiment is designed to perform cosmic-ray elemental spectra measurements from below 100 GeV up to tens of TeV for nuclei from hydrogen to iron. The instrument is composed of a silicon matrix detector followed by a carbon target, interleaved with scintillator tracking layers, and a segmented BGO calorimeter composed of 320 individual crystals totalling 18 radiation lengths, used to determine the particle energy. The technique for deconvolution of the energy spectra measured in the thin calorimeter is based on detailed simulations of the response of the ATIC instrument to different cosmic-ray nuclei over a wide energy range. The method of deconvolution is described and the energy spectrum of carbon obtained by this technique is presented.

  19. Least-squares deconvolution of evoked potentials and sequence optimization for multiple stimuli under low-jitter conditions.

    PubMed

    Bardy, Fabrice; Dillon, Harvey; Van Dun, Bram

    2014-04-01

    Rapid presentation of stimuli in an evoked response paradigm can lead to overlap of multiple responses and consequently to difficulties interpreting waveform morphology. This paper presents a deconvolution method allowing multiple overlapping responses to be disentangled. The deconvolution technique uses a least-squares error approach. A methodology is proposed to optimize the stimulus sequence associated with the deconvolution technique under low-jitter conditions. It controls the condition number of the matrices involved in recovering the responses. Simulations were performed using the proposed deconvolution technique. Multiple overlapping responses can be recovered perfectly in noiseless conditions. In the presence of noise, the amount of error introduced by the technique can be controlled a priori by the condition number of the matrix associated with the stimulus sequence used. The simulation results indicate the need for a minimum amount of jitter, as well as a sufficient number of overlap combinations, to obtain optimum results. An aperiodic model is recommended to improve reconstruction. We propose a deconvolution technique allowing multiple overlapping responses to be extracted and a method of choosing the stimulus sequence optimal for response recovery. This technique may allow audiologists, psychologists, and electrophysiologists to optimize their experimental designs involving rapidly presented stimuli, and to recover overlapping evoked responses. Copyright © 2013 International Federation of Clinical Neurophysiology. All rights reserved.
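
    The central objects here, a stimulus-onset convolution matrix and its condition number, are easy to sketch. Below is a single-response simplification (the paper handles several overlapping response types jointly) with illustrative timing parameters.

    ```python
    import numpy as np

    def onset_matrix(onsets, n_total, resp_len):
        """Binary convolution matrix mapping one response template to the
        recorded trace: each onset adds a shifted copy of the template."""
        A = np.zeros((int(n_total), resp_len))
        for t0 in onsets:
            for j in range(resp_len):
                if t0 + j < n_total:
                    A[t0 + j, j] += 1.0
        return A

    rng = np.random.default_rng(7)
    resp_len = 300
    onsets = np.cumsum(rng.integers(120, 180, size=40))   # jittered sequence
    A = onset_matrix(onsets, onsets[-1] + resp_len, resp_len)

    # The design criterion from the abstract: keep the condition number small
    # so the least-squares inversion does not amplify noise.
    print(np.linalg.cond(A))

    resp = np.hanning(resp_len)                     # toy evoked response
    trace = A @ resp + rng.normal(0, 0.05, A.shape[0])
    resp_hat, *_ = np.linalg.lstsq(A, trace, rcond=None)
    ```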

  20. Design, installation, and performance evaluation of a custom dye matrix standard for automated capillary electrophoresis.

    PubMed

    Cloete, Kevin Wesley; Ristow, Peter Gustav; Kasu, Mohaimin; D'Amato, Maria Eugenia

    2017-03-01

    CE equipment detects and deconvolutes mixtures containing up to six fluorescently labeled DNA fragments. This deconvolution is done by the collection software that requires a spectral calibration file. The calibration file is used to adjust for the overlap that occurs between the emission spectra of fluorescence dyes. All commercial genotyping and sequencing kits require the installation of a corresponding matrix standard to generate a calibration file. Due to the differences in emission spectrum overlap between fluorescent dyes, the application of existing commercial matrix standards to the electrophoretic separation of DNA labeled with other fluorescent dyes can yield undesirable results. Currently, the number of fluorescent dyes available for oligonucleotide labeling surpasses the availability of commercial matrix standards. Therefore, in this study we developed and evaluated a customized matrix standard using ATTO 633, ATTO 565, ATTO 550, ATTO Rho6G, and 6-FAM dyes for which no commercial matrix standard is available. We highlighted the potential genotyping errors of using an incorrect matrix standard by evaluating the relative performance of our custom dye set using six matrix standards. The specific performance of two genotyping kits (UniQTyper™ Y-10 version 1.0 and PowerPlex® Y23 System) was also evaluated using their specific matrix standards. The procedure we followed for the construction of our custom dye matrix standard can be extended to other fluorescent dyes. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  21. Monitoring of Time-Dependent System Profiles by Multiplex Gas Chromatography with Maximum Entropy Demodulation

    NASA Technical Reports Server (NTRS)

    Becker, Joseph F.; Valentin, Jose

    1996-01-01

    The maximum entropy technique was successfully applied to the deconvolution of overlapped chromatographic peaks. An algorithm was written in which the chromatogram was represented as a vector of sample concentrations multiplied by a peak shape matrix. Simulation results demonstrated that there is a trade-off between detector noise and peak resolution, in the sense that an increase in the noise level reduced the peak separation that could be recovered by the maximum entropy method. Real data originating from a sample storage column were also deconvoluted using maximum entropy. Deconvolution is useful in this type of system because the conservation of time-dependent profiles depends on the band-spreading processes in the chromatographic column, which might smooth out the finer details in the concentration profile. The method was also applied to the deconvolution of previously interpreted Pioneer Venus chromatograms. It was found in this case that the correct choice of peak shape function was critical to the sensitivity of maximum entropy in the reconstruction of these chromatograms.

  22. Kinetics of removal of intravenous testosterone pulses in normal men.

    PubMed

    Veldhuis, Johannes D; Keenan, Daniel M; Liu, Peter Y; Takahashi, Paul Y

    2010-04-01

    Testosterone is secreted into the bloodstream episodically, putatively distributing into total, bioavailable (bio; non-sex hormone-binding globulin-bound (nonSHBG-bound)), and free testosterone moieties. The kinetics of total, bio, and free testosterone pulses are unknown. Design: Adrenal and gonadal steroidogenesis was blocked pharmacologically, glucocorticoid was replaced, and testosterone was infused in pulses at four distinct doses in 14 healthy men under two different paradigms (a total of 220 testosterone pulses). Testosterone kinetics were assessed by deconvolution analysis of total, free, bioavailable, SHBG-bound, and albumin-bound testosterone concentration-time profiles. Independently of testosterone dose or paradigm, rapid-phase half-lives (min) of total, free, bioavailable, SHBG-bound, and albumin-bound testosterone were comparable at 1.4 ± 0.22 min (grand mean ± S.E.M. of geometric means). Slow-phase testosterone half-lives were highest for SHBG-bound testosterone (32 min) and total testosterone (27 min), with the former exceeding those of free testosterone (18 min), bioavailable testosterone (14 min), and albumin-bound testosterone (18 min; P<0.001). Collective outcomes indicate that i) the rapid phase of testosterone disappearance from point sampling in the circulation is not explained by testosterone dose; ii) SHBG-bound testosterone and total testosterone kinetics are prolonged; and iii) the half-lives of bioavailable, albumin-bound, and free testosterone are short. A frequent-sampling strategy comprising an experimental hormone clamp, estimation of hormone concentrations as bound and free moieties, mimicry of physiological pulses, and deconvolution analysis may have utility in estimating the in vivo kinetics of other hormones, substrates, and metabolites.

  23. Denoised Wigner distribution deconvolution via low-rank matrix completion

    DOE PAGES

    Lee, Justin; Barbastathis, George

    2016-08-23

    Wigner distribution deconvolution (WDD) is a decades-old method for recovering phase from intensity measurements. Although the technique offers an elegant linear solution to the quadratic phase retrieval problem, it has seen limited adoption due to its high computational/memory requirements and the fact that the technique often exhibits high noise sensitivity. Here, we propose a method for noise suppression in WDD via low-rank noisy matrix completion. Our technique exploits the redundancy of an object's phase space to denoise its WDD reconstruction. We show in model calculations that our technique outperforms other WDD algorithms as well as modern iterative methods for phase retrieval such as ptychography. Our results suggest that a class of phase retrieval techniques relying on regularized direct inversion of ptychographic datasets (instead of iterative reconstruction techniques) can provide accurate quantitative phase information in the presence of high levels of noise.
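
    The low-rank completion primitive behind this denoising step can be sketched with a simple shrinkage-projection iteration (not the authors' exact solver); the rank, the threshold tau, and the sampling rate are illustrative.

    ```python
    import numpy as np

    def svt_complete(M, mask, tau=5.0, n_iter=200):
        """Fill missing entries of a noisy low-rank matrix by alternating
        data consistency with singular-value soft-thresholding."""
        X = np.zeros_like(M)
        for _ in range(n_iter):
            X[mask] = M[mask]                        # enforce observed entries
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            X = (U * np.maximum(s - tau, 0.0)) @ Vt  # shrink singular values
        return X

    rng = np.random.default_rng(9)
    L = rng.normal(size=(60, 4)) @ rng.normal(size=(4, 60))  # rank-4 truth
    mask = rng.random((60, 60)) < 0.5                        # 50% observed
    M = L + 0.05 * rng.normal(size=(60, 60))
    X = svt_complete(M, mask)
    print(np.linalg.norm(X - L) / np.linalg.norm(L))         # relative error
    ```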

  24. Drug Delivery and Transport into the Central Circulation: An Example of Zero-Order In vivo Absorption of Rotigotine from a Transdermal Patch Formulation.

    PubMed

    Cawello, Willi; Braun, Marina; Andreas, Jens-Otto

    2018-01-13

    Pharmacokinetic studies using deconvolution methods and non-compartmental analysis to model clinical absorption of drugs are not well represented in the literature. The purpose of this research was (1) to define the system of equations for description of rotigotine (a dopamine receptor agonist delivered via a transdermal patch) absorption based on a pharmacokinetic model and (2) to describe the kinetics of rotigotine disposition after single and multiple dosing. The kinetics of drug disposition was evaluated based on rotigotine plasma concentration data from three phase 1 trials. In two trials, rotigotine was administered via a single patch over 24 h in healthy subjects. In a third trial, rotigotine was administered once daily over 1 month in subjects with early-stage Parkinson's disease (PD). A pharmacokinetic model utilizing deconvolution methods was developed to describe the relationship between drug release from the patch and plasma concentrations. Plasma concentration-time profiles were modeled based on a one-compartment model with a time lag, a zero-order input (describing constant absorption through the skin into the central circulation) and first-order elimination. Corresponding mathematical models for single- and multiple-dose administration were developed. After single-dose administration of rotigotine patches (using 2, 4 or 8 mg/day) in healthy subjects, constant in vivo absorption was present after a minor time lag (2-3 h). On days 27 and 30 of the multiple-dose study in patients with PD, absorption was constant during patch-on periods and resembled zero-order kinetics. Deconvolution based on rotigotine pharmacokinetic profiles after single- or multiple-dose administration of the once-daily patch demonstrated that in vivo absorption of rotigotine showed constant input through the skin into the central circulation (resembling zero-order kinetics). Continuous absorption through the skin is a basis for stable drug exposure.

  25. Plasma DNA tissue mapping by genome-wide methylation sequencing for noninvasive prenatal, cancer, and transplantation assessments

    PubMed Central

    Sun, Kun; Jiang, Peiyong; Chan, K. C. Allen; Wong, John; Cheng, Yvonne K. Y.; Liang, Raymond H. S.; Chan, Wai-kong; Ma, Edmond S. K.; Chan, Stephen L.; Cheng, Suk Hang; Chan, Rebecca W. Y.; Tong, Yu K.; Ng, Simon S. M.; Wong, Raymond S. M.; Hui, David S. C.; Leung, Tse Ngong; Leung, Tak Y.; Lai, Paul B. S.; Chiu, Rossa W. K.; Lo, Yuk Ming Dennis

    2015-01-01

    Plasma consists of DNA released from multiple tissues within the body. Using genome-wide bisulfite sequencing of plasma DNA and deconvolution of the sequencing data with reference to methylation profiles of different tissues, we developed a general approach for studying the major tissue contributors to the circulating DNA pool. We tested this method in pregnant women, patients with hepatocellular carcinoma, and subjects following bone marrow and liver transplantation. In most subjects, white blood cells were the predominant contributors to the circulating DNA pool. The placental contributions in the plasma of pregnant women correlated with the proportional contributions as revealed by fetal-specific genetic markers. The graft-derived contributions to the plasma in the transplant recipients correlated with those determined using donor-specific genetic markers. Patients with hepatocellular carcinoma showed elevated plasma DNA contributions from the liver, which correlated with measurements made using tumor-associated copy number aberrations. In hepatocellular carcinoma patients and in pregnant women exhibiting copy number aberrations in plasma, comparison of methylation deconvolution results using genomic regions with different copy number status pinpointed the tissue type responsible for the aberrations. In a pregnant woman diagnosed as having follicular lymphoma during pregnancy, methylation deconvolution indicated a grossly elevated contribution from B cells into the plasma DNA pool and localized B cells as the origin of the copy number aberrations observed in plasma. This method may serve as a powerful tool for assessing a wide range of physiological and pathological conditions based on the identification of perturbed proportional contributions of different tissues into plasma. PMID:26392541

  26. Symmetric convolution of asymmetric multidimensional sequences using discrete trigonometric transforms.

    PubMed

    Foltz, T M; Welsh, B M

    1999-01-01

    This paper uses the fact that the discrete Fourier transform diagonalizes a circulant matrix to provide an alternate derivation of the symmetric convolution-multiplication property for discrete trigonometric transforms. Derived in this manner, the symmetric convolution-multiplication property extends easily to multiple dimensions using the notion of block circulant matrices and generalizes to multidimensional asymmetric sequences. The symmetric convolution of multidimensional asymmetric sequences can then be accomplished by taking the product of the trigonometric transforms of the sequences and then applying an inverse trigonometric transform to the result. An example is given of how this theory can be used for applying a two-dimensional (2-D) finite impulse response (FIR) filter with nonlinear phase which models atmospheric turbulence.
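
    The fact this derivation starts from, that the DFT diagonalizes a circulant matrix, is easy to verify numerically:

    ```python
    import numpy as np
    from scipy.linalg import circulant, dft

    c = np.array([4.0, 1.0, 2.0, 3.0])
    C = circulant(c)                  # C[i, j] = c[(i - j) mod 4]

    F = dft(4) / 2.0                  # unitary 4-point DFT matrix
    D = F @ C @ F.conj().T            # similarity transform by the DFT
    print(np.round(D, 10))            # diagonal matrix ...
    print(np.fft.fft(c))              # ... whose diagonal is the DFT of c
    ```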

  27. Effects of block copolymer properties on nanocarrier protection from in vivo clearance

    PubMed Central

    D’Addio, Suzanne M.; Saad, Walid; Ansell, Steven M.; Squiers, John J.; Adamson, Douglas; Herrera-Alonso, Margarita; Wohl, Adam R.; Hoye, Thomas R.; Macosko, Christopher W.; Mayer, Lawrence D.; Vauthier, Christine; Prud’homme, Robert K.

    2012-01-01

    Drug nanocarrier clearance by the immune system must be minimized to achieve targeted delivery to pathological tissues. There is considerable interest in finding in vitro tests that can predict in vivo clearance outcomes. In this work, we produce nanocarriers with dense PEG layers resulting from block copolymer-directed assembly during rapid precipitation. Nanocarriers are formed using block copolymers with hydrophobic blocks of polystyrene (PS), poly-ε-caprolactone (PCL), poly-D,L-lactide (PLA), or poly-lactide-co-glycolide (PLGA), and hydrophilic blocks of polyethylene glycol (PEG) with molecular weights from 1.5 kg/mol to 9 kg/mol. Nanocarriers with paclitaxel prodrugs are evaluated in vivo in Foxn1nu mice to determine relative rates of clearance. The amount of nanocarrier in circulation after 4 h varies from 10% to 85% of initial dose, depending on the block copolymer. In vitro complement activation assays are conducted in an effort to correlate the protection of the nanocarrier surface from complement binding and activation and in vivo circulation. Guidelines for optimizing block copolymer structure to maximize circulation of nanocarriers formed by rapid precipitation and directed assembly are proposed, relating to the relative size of the hydrophilic and hydrophobic block, the hydrophobicity of the anchoring block, the absolute size of the PEG block, and polymer crystallinity. The in vitro results distinguish between the poorly circulating PEG5k-PCL9k and the better circulating nanocarriers, but could not rank the better circulating nanocarriers in order of circulation time. Analysis of PEG surface packing on monodisperse 200 nm latex spheres indicates that the sizes of the hydrophobic PCL, PS, and PLA blocks are correlated with the PEG blob size, and possibly the clearance from circulation. Suggestions for next step in vitro measurements are made. PMID:22732478

  28. cAMP-Signalling Regulates Gametocyte-Infected Erythrocyte Deformability Required for Malaria Parasite Transmission

    PubMed Central

    Thompson, Eloise; Breil, Florence; Lorthiois, Audrey; Dupuy, Florian; Cummings, Ross; Duffier, Yoann; Corbett, Yolanda; Mercereau-Puijalon, Odile; Vernick, Kenneth; Taramelli, Donatella; Baker, David A.; Langsley, Gordon; Lavazec, Catherine

    2015-01-01

    Blocking Plasmodium falciparum transmission to mosquitoes has been designated a strategic objective in the global agenda of malaria elimination. Transmission is ensured by gametocyte-infected erythrocytes (GIE) that sequester in the bone marrow and at maturation are released into peripheral blood from where they are taken up during a mosquito blood meal. Release into the blood circulation is accompanied by an increase in GIE deformability that allows them to pass through the spleen. Here, we used a microsphere matrix to mimic splenic filtration and investigated the role of cAMP-signalling in regulating GIE deformability. We demonstrated that mature GIE deformability is dependent on reduced cAMP-signalling and on increased phosphodiesterase expression in stage V gametocytes, and that parasite cAMP-dependent kinase activity contributes to the stiffness of immature gametocytes. Importantly, pharmacological agents that raise cAMP levels in transmissible stage V gametocytes render them less deformable and hence less likely to circulate through the spleen. Therefore, phosphodiesterase inhibitors that raise cAMP levels in P. falciparum infected erythrocytes, such as sildenafil, represent new candidate drugs to block transmission of malaria parasites. PMID:25951195

  29. Deblurring of Class-Averaged Images in Single-Particle Electron Microscopy.

    PubMed

    Park, Wooram; Madden, Dean R; Rockmore, Daniel N; Chirikjian, Gregory S

    2010-03-01

    This paper proposes a method for deblurring of class-averaged images in single-particle electron microscopy (EM). Since EM images of biological samples are very noisy, the images which are nominally identical projection images are often grouped, aligned and averaged in order to cancel or reduce the background noise. However, the noise in the individual EM images generates errors in the alignment process, which creates an inherent limit on the accuracy of the resulting class averages. This inaccurate class average due to the alignment errors can be viewed as the result of a convolution of an underlying clear image with a blurring function. In this work, we develop a deconvolution method that gives an estimate for the underlying clear image from a blurred class-averaged image using precomputed statistics of misalignment. Since this convolution is over the group of rigid body motions of the plane, SE(2), we use the Fourier transform for SE(2) in order to convert the convolution into a matrix multiplication in the corresponding Fourier space. For practical implementation we use a Hermite-function-based image modeling technique, because Hermite expansions enable lossless Cartesian-polar coordinate conversion using the Laguerre-Fourier expansions, and Hermite expansion and Laguerre-Fourier expansion retain their structures under the Fourier transform. Based on these mathematical properties, we can obtain the deconvolution of the blurred class average using simple matrix multiplication. Tests of the proposed deconvolution method using synthetic and experimental EM images confirm the performance of our method.

  30. Detecting the Spectrum of the Atlantic's Thermo-haline Circulation: Deconvolved Climate Proxies Show How Polar Climates Communicate

    NASA Astrophysics Data System (ADS)

    Reischmann, Elizabeth; Yang, Xiao; Rial, José

    2014-05-01

    Deconvolution is used in a wide variety of scientific fields, including significant use in seismology, as a tool to recover the real input from a system's impulse response and output. Our research uses spectral division deconvolution to study the impulse response of the possible relationship between the nonlinear climates of the Polar Regions, using select δ18O ice cores from both poles. This is feasible in spite of the fact that the records may be the result of nonlinear processes, because the two polar climates are synchronized for the period studied, forming a Hilbert transform pair. In order to perform this analysis, the age models of three Greenland and four Antarctica records have been matched using a Monte Carlo method, with the methane-matched pair GRIP and BYRD as a basis of calculations. For all twelve resulting pairs, various deconvolution schemes (Wiener, damped least squares, Tikhonov, truncated singular value decomposition) give consistent, quasi-periodic impulse responses of the system. Multitaper analysis then demonstrates strong, millennial-scale, quasi-periodic oscillations in these system responses with a range of 2,500 to 1,000 years. However, these results are directionally dependent, with the transfer function from north to south differing from that of south to north. High-amplitude power peaks at 5,000 to 1,700 years characterize the former, while the latter contains peaks at 2,500 to 1,700 years. These predominant periodicities are also found in the data, some of which have been identified as solar forcing, but others of which may indicate internal oscillations of the climate system (1.6-1.4 ky). The approximately 1,500 year period transfer function, which does not have a corresponding solar forcing, may indicate one of these internal periodicities of the system, perhaps even indicating the long-term presence of the Deep Water circulation, also known as the thermo-haline circulation (THC). Simplified models of the polar climate fluctuations are shown to support these findings.

  31. A deconvolution extraction method for 2D multi-object fibre spectroscopy based on the regularized least-squares QR-factorization algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Jian; Yin, Qian; Guo, Ping; Luo, A.-li

    2014-09-01

    This paper presents an efficient method for the extraction of astronomical spectra from two-dimensional (2D) multifibre spectrographs based on the regularized least-squares QR-factorization (LSQR) algorithm. We address two issues: we propose a modified Gaussian point spread function (PSF) for modelling the 2D PSF from multi-emission-line gas-discharge lamp images (arc images), and we develop an efficient deconvolution method to extract spectra in real circumstances. The proposed modified 2D Gaussian PSF model can fit various types of 2D PSFs, including different radial distortion angles and ellipticities. We adopt the regularized LSQR algorithm to solve the sparse linear equations constructed from the sparse convolution matrix, which we designate the deconvolution spectrum extraction method. Furthermore, we implement a parallelized LSQR algorithm based on graphics processing unit programming in the Compute Unified Device Architecture to accelerate the computational processing. Experimental results illustrate that the proposed extraction method can greatly reduce the computational cost and memory use of the deconvolution method and, consequently, increase its efficiency and practicability. In addition, the proposed extraction method has a stronger noise tolerance than other methods, such as the boxcar (aperture) extraction and profile extraction methods. Finally, we present an analysis of the sensitivity of the extraction results to the radius and full width at half-maximum of the 2D PSF.
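
    SciPy's CPU implementation of LSQR accepts exactly this kind of sparse banded convolution matrix, with the damping parameter supplying the regularisation. A minimal 1-D stand-in for the paper's 2-D PSF model (the GPU parallelisation is not reproduced):

    ```python
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import lsqr

    # Sparse convolution matrix for a narrow 1-D PSF: banded, so LSQR only
    # touches a few nonzeros per row, as with the paper's sparse 2-D system.
    n = 2000
    psf = np.array([0.05, 0.25, 0.40, 0.25, 0.05])
    offsets = [-2, -1, 0, 1, 2]
    A = diags([np.full(n - abs(k), psf[k + 2]) for k in offsets], offsets)

    rng = np.random.default_rng(10)
    x = np.zeros(n); x[rng.integers(0, n, 40)] = rng.uniform(0.5, 2.0, 40)
    b = A @ x + rng.normal(0, 0.01, n)

    # damp adds Tikhonov regularisation: min ||Ax - b||^2 + damp^2 ||x||^2.
    x_hat = lsqr(A, b, damp=0.05, iter_lim=200)[0]
    print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))
    ```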

  32. Model-free arterial spin labelling for cerebral blood flow quantification: introduction of regional arterial input functions identified by factor analysis.

    PubMed

    Knutsson, Linda; Bloch, Karin Markenroth; Holtås, Stig; Wirestam, Ronnie; Ståhlberg, Freddy

    2008-05-01

    To identify regional arterial input functions (AIFs) using factor analysis of dynamic studies (FADS) when quantification of perfusion is performed using model-free arterial spin labelling. Five healthy volunteers and one patient were examined on a 3-T Philips unit using quantitative STAR labelling of arterial regions (QUASAR). Two sets of images were retrieved, one where the arterial signal had been crushed and another where it was retained. FADS was applied to the arterial signal curves to acquire the AIFs. Perfusion maps were obtained using block-circulant SVD deconvolution and regional AIFs obtained by FADS. In the volunteers, the ASL experiment was repeated within 24 h. The patient was also examined using dynamic susceptibility contrast MRI. In the healthy volunteers, CBF was 64 ± 10 ml/[min 100 g] (mean ± S.D.) in GM and 24 ± 4 ml/[min 100 g] in WM, while the mean aBV was 0.94% in GM and 0.25% in WM. Good CBF image quality and reasonable quantitative CBF values were obtained using the combined QUASAR/FADS technique. We conclude that FADS may be a useful supplement in the evaluation of ASL data using QUASAR.
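
    The block-circulant SVD deconvolution used for the perfusion maps follows a standard construction: zero-pad the AIF, build the circulant convolution matrix, and truncate small singular values before inverting. A minimal sketch with synthetic curves; the FADS extraction of regional AIFs is not reproduced, and the threshold and curve shapes are illustrative.

    ```python
    import numpy as np
    from scipy.linalg import circulant

    def circ_svd_deconv(aif, tissue, dt, thresh=0.1):
        """Truncated circulant-SVD deconvolution of a tissue curve by an AIF;
        the maximum of the returned residue function estimates flow."""
        N = len(aif)
        a = np.r_[aif, np.zeros(N)]          # zero-pad to avoid wrap-around
        c = np.r_[tissue, np.zeros(N)]
        A = dt * circulant(a)                # circulant convolution matrix
        U, s, Vt = np.linalg.svd(A)
        s_inv = np.where(s > thresh * s.max(), 1.0 / s, 0.0)
        return Vt.T @ (s_inv * (U.T @ c))

    dt, N = 1.0, 60
    t = np.arange(N) * dt
    u = np.clip(t - 5.0, 0.0, None)
    aif = u ** 3 * np.exp(-u / 1.5)          # toy gamma-variate AIF
    resid = 0.6 * np.exp(-t / 4.0)           # flow-scaled residue function
    tissue = dt * np.convolve(aif, resid)[:N]
    r_hat = circ_svd_deconv(aif, tissue, dt)
    print(r_hat[:N].max())                   # ~0.6, the assumed flow value
    ```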

  33. Archaeometric Prospection Using Electrical Survey Predictive Deconvolution (ESPD)

    NASA Astrophysics Data System (ADS)

    Glover, P. W.

    2009-05-01

    Once upon a time, archaeological prospection was carried out mainly using electrical techniques. These days magnetic techniques and GPR are used by preference. However, we have shown that electrical surveying combined with the technique of predictive deconvolution is very effective at finding buried features where the shape of the feature can be predicted in advance. One such type of feature is the Grubenhaus (a sunken-featured or sunken-floored building, SFB). Grubenhäuser exist in the archaeological record as individual well-defined oblong pits that have been filled and buried with other material. Aerial photographs at New Bewick in Northumberland, northern England (UK grid reference NU061206) showed quasi-rectangular features similar to those on aerial photographs at the nearby Anglo-Saxon palace of Milfield (NT941339), which had been confirmed by excavation to be Grubenhäuser. Several electrical resistivity surveys were carried out over the area with an ABEM Mk II Terrameter and a multiplexing box serving 31 electrodes in line at any given time. Both double-dipole and Wenner configurations were used with an electrode spacing of 1 m. Data were acquired in blocks of 30 m by 30 m during a period of dry summer weather while the field was under young winter wheat. The Wenner array produces a characteristic 'M' or 'W' shaped response over filled-in excavations such as those expected to represent a Grubenhaus. While this seems a disadvantage in the first instance, it can be used to improve the data. Such anomalies were present in the raw New Bewick data. The resulting data were analysed using 1D and 2D predictive deconvolution in order to remove the Wenner response. The deconvolution was carried out using an inverse matrix element method. The filtered results indicated the presence of an anomaly that is consistent with a Grubenhaus measuring about 5 m by 4 m and with a pit depth of 0.6 m below 0.5 m of topsoil. The results also showed broader areas of increased resistivity which have been attributed to compaction resulting from human and animal movement. Following the geophysical study the site was excavated (T. Gates and C. O'Brien, "Cropmarks at Milfield and New Bewick and the Recognition of Grubenhäuser in Northumberland", Archaeologia Aeliana 5th series, Vol. XVI, 1988, 1-9) and a Grubenhaus was discovered at the site. The excavated Grubenhaus measured 4.7 m by 3.9 m with a pit depth of 0.5 m below the base of the topsoil. The deconvolved Wenner data performed better than the double-dipole resistivity survey but was marginally slower.

  34. Suspected-target pesticide screening using gas chromatography-quadrupole time-of-flight mass spectrometry with high resolution deconvolution and retention index/mass spectrum library.

    PubMed

    Zhang, Fang; Wang, Haoyang; Zhang, Li; Zhang, Jing; Fan, Ruojing; Yu, Chongtian; Wang, Wenwen; Guo, Yinlong

    2014-10-01

    A strategy for suspected-target screening of pesticide residues in complicated matrices was exploited using gas chromatography in combination with hybrid quadrupole time-of-flight mass spectrometry (GC-QTOF MS). The screening workflow followed three key steps: initial detection, preliminary identification, and final confirmation. The initial detection of components in a matrix was done by high-resolution mass spectrum deconvolution; the preliminary identification of suspected pesticides was based on a special retention index/mass spectrum (RI/MS) library that contained both the first-stage mass spectra (MS(1) spectra) and retention indices; and the final confirmation was accomplished by accurate mass measurements of representative ions with their response ratios from the MS(1) spectra or representative product ions from the second-stage mass spectra (MS(2) spectra). To evaluate the applicability of the workflow to real samples, three matrices of apple, spinach, and scallion, each spiked with 165 test pesticides at a set of concentrations, were selected as the models. The results showed that the use of high-resolution TOF enabled effective extraction of spectra from noisy chromatograms, based on a narrow mass window (5 mDa), with suspected-target compounds identified by similarity matching of deconvoluted full mass spectra and filtering of linear RIs. On average, over 74% of pesticides at 50 ng/mL could be identified using deconvolution and the RI/MS library. Over 80% of pesticides at 5 ng/mL or lower concentrations could be confirmed in each matrix using at least two representative ions with their response ratios from the MS(1) spectra. In addition, the application of product ion spectra was capable of confirming suspected pesticides with specificity for some pesticides in complicated matrices. In conclusion, GC-QTOF MS combined with the RI/MS library seems to be one of the most efficient tools for the analysis of suspected-target pesticide residues in complicated matrices. Copyright © 2014 Elsevier B.V. All rights reserved.

  35. Characterizing the inverses of block tridiagonal, block Toeplitz matrices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boffi, Nicholas M.; Hill, Judith C.; Reuter, Matthew G.

    2014-12-04

    We consider the inversion of block tridiagonal, block Toeplitz matrices and comment on the behaviour of these inverses as one moves away from the diagonal. Using matrix Möbius transformations, we first present an O(1) representation (with respect to the number of block rows and block columns) for the inverse matrix and subsequently use this representation to characterize the inverse matrix. There are four symmetry-distinct cases where the blocks of the inverse matrix (i) decay to zero on both sides of the diagonal, (ii) oscillate on both sides, (iii) decay on one side and oscillate on the other and (iv) decay on one side and grow on the other. This characterization exposes the necessary conditions for the inverse matrix to be numerically banded and may also aid in the design of preconditioners and fast algorithms. Finally, we present numerical examples of these matrix types.
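
    The decay behaviour characterized here is easy to observe numerically for a diagonally dominant example (case (i) of the four); the block size and matrix dimensions below are arbitrary.

    ```python
    import numpy as np

    # Block tridiagonal, block Toeplitz matrix with 2x2 blocks: A repeats on
    # the diagonal, B on the subdiagonal, C on the superdiagonal.
    b, nblocks = 2, 20
    A, B, C = 4.0 * np.eye(b), -1.0 * np.eye(b), -1.0 * np.eye(b)
    M = np.zeros((b * nblocks, b * nblocks))
    for i in range(nblocks):
        M[b*i:b*i+b, b*i:b*i+b] = A
        if i > 0:
            M[b*i:b*i+b, b*(i-1):b*(i-1)+b] = B
            M[b*(i-1):b*(i-1)+b, b*i:b*i+b] = C

    # Norms of the blocks in the first block row of the inverse decay
    # geometrically away from the diagonal for this diagonally dominant case.
    Minv = np.linalg.inv(M)
    norms = [np.linalg.norm(Minv[:b, b*j:b*j+b]) for j in range(nblocks)]
    print(np.round(norms, 8))
    ```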

  36. Multichannel myopic deconvolution in underwater acoustic channels via low-rank recovery

    PubMed Central

    Tian, Ning; Byun, Sung-Hoon; Sabra, Karim; Romberg, Justin

    2017-01-01

    This paper presents a technique for solving the multichannel blind deconvolution problem. The authors observe the convolution of a single (unknown) source with K different (unknown) channel responses; from these channel outputs, the authors want to estimate both the source and the channel responses. The authors show how this classical signal processing problem can be viewed as solving a system of bilinear equations, and in turn can be recast as recovering a rank-1 matrix from a set of linear observations. Results of prior studies in the area of low-rank matrix recovery have identified effective convex relaxations for problems of this type and efficient, scalable heuristic solvers that enable these techniques to work with thousands of unknown variables. The authors show how a priori information about the channels can be used to build a linear model for the channels, which in turn makes solving these systems of equations well-posed. This study demonstrates the robustness of this methodology to measurement noises and parametrization errors of the channel impulse responses with several stylized and shallow water acoustic channel simulations. The performance of this methodology is also verified experimentally using shipping noise recorded on short bottom-mounted vertical line arrays. PMID:28599565
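
    For two channels, the bilinear system the authors describe reduces to the classical cross-relation conv(y1, h2) = conv(y2, h1), linear in the stacked channel responses. A minimal noise-free sketch recovering the channels up to scale; the low-rank relaxation and channel parametrization of the paper are not reproduced.

    ```python
    import numpy as np
    from scipy.linalg import convolution_matrix

    rng = np.random.default_rng(11)
    m, n = 8, 200                          # channel length, source length
    h1, h2 = rng.normal(size=m), rng.normal(size=m)
    x = rng.normal(size=n)                 # unknown common source
    y1, y2 = np.convolve(x, h1), np.convolve(x, h2)

    # Cross-relation: conv(y1, h2) - conv(y2, h1) == 0, so [h2; h1] spans
    # the null space of the stacked convolution matrices.
    T1 = convolution_matrix(y1, m)         # T1 @ h2 == conv(y1, h2)
    T2 = convolution_matrix(y2, m)
    A = np.hstack([T1, -T2])
    _, _, Vt = np.linalg.svd(A)
    h2_hat, h1_hat = Vt[-1, :m], Vt[-1, m:]

    scale = h1[0] / h1_hat[0]              # resolve the scale ambiguity
    print(np.allclose(h1_hat * scale, h1, atol=1e-8),
          np.allclose(h2_hat * scale, h2, atol=1e-8))
    ```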

  18. Evaluation of Isoprene Chain Extension from PEO Macromolecular Chain Transfer Agents for the Preparation of Dual, Invertible Block Copolymer Nanoassemblies.

    PubMed

    Bartels, Jeremy W; Cauët, Solène I; Billings, Peter L; Lin, Lily Yun; Zhu, Jiahua; Fidge, Christopher; Pochan, Darrin J; Wooley, Karen L

    2010-09-14

    Two RAFT-capable PEO macro-CTAs, 2 and 5 kDa, were prepared and used for the polymerization of isoprene which yielded well-defined block copolymers of varied lengths and compositions. GPC analysis of the PEO macro-CTAs and block copolymers showed remaining unreacted PEO macro-CTA. Mathematical deconvolution of the GPC chromatograms allowed for the estimation of the blocking efficiency, about 50% for the 5 kDa PEO macro-CTA and 64% for the 2 kDa CTA. Self assembly of the block copolymers in both water and decane was investigated and the resulting regular and inverse assemblies, respectively, were analyzed with DLS, AFM, and TEM to ascertain their dimensions and properties. Assembly of PEO-b-PIp block copolymers in aqueous solution resulted in well-defined micelles of varying sizes while the assembly in hydrophobic, organic solvent resulted in the formation of different morphologies including large aggregates and well-defined cylindrical and spherical structures.
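
    The "mathematical deconvolution of the GPC chromatograms" can be imitated with a simple two-peak fit: model the chromatogram as a sum of two Gaussians (block copolymer plus residual macro-CTA) and take the ratio of fitted peak areas as the blocking efficiency. The data, peak positions, and widths below are hypothetical, and real chromatograms often require non-Gaussian peak shapes and baseline terms.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussian peaks: block copolymer + unreacted macro-CTA."""
    g1 = a1 * np.exp(-0.5 * ((t - mu1) / s1) ** 2)
    g2 = a2 * np.exp(-0.5 * ((t - mu2) / s2) ** 2)
    return g1 + g2

t = np.linspace(10, 20, 500)                          # elution time (min)
y = two_gaussians(t, 1.0, 13.0, 0.5, 0.6, 15.0, 0.4)  # synthetic chromatogram
y += np.random.default_rng(1).normal(0, 0.005, t.size)  # detector noise

p0 = [1, 13, 0.5, 0.5, 15, 0.5]                       # rough initial guesses
popt, _ = curve_fit(two_gaussians, t, y, p0=p0)
a1, s1, a2, s2 = popt[0], popt[2], popt[3], popt[5]
area1, area2 = a1 * s1, a2 * s2                       # areas up to sqrt(2*pi)
print(f"blocking efficiency ~ {area1 / (area1 + area2):.2f}")
```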

  19. Surface plasmon enhanced cell microscopy with blocked random spatial activation

    NASA Astrophysics Data System (ADS)

    Son, Taehwang; Oh, Youngjin; Lee, Wonju; Yang, Heejin; Kim, Donghyun

    2016-03-01

    We present surface plasmon enhanced fluorescence microscopy with random spatial sampling using patterned block of silver nanoislands. Rigorous coupled wave analysis was performed to confirm near-field localization on nanoislands. Random nanoislands were fabricated in silver by temperature annealing. By analyzing random near-field distribution, average size of localized fields was found to be on the order of 135 nm. Randomly localized near-fields were used to spatially sample F-actin of J774 cells (mouse macrophage cell-line). Image deconvolution algorithm based on linear imaging theory was established for stochastic estimation of fluorescent molecular distribution. The alignment between near-field distribution and raw image was performed by the patterned block. The achieved resolution is dependent upon factors including the size of localized fields and estimated to be 100-150 nm.

  20. Pulse-Inversion Subharmonic Ultrafast Active Cavitation Imaging in Tissue Using Fast Eigenspace-Based Adaptive Beamforming and Cavitation Deconvolution.

    PubMed

    Bai, Chen; Xu, Shanshan; Duan, Junbo; Jing, Bowen; Yang, Miao; Wan, Mingxi

    2017-08-01

    Pulse-inversion subharmonic (PISH) imaging can display information relating to pure cavitation bubbles while excluding that of tissue. Although plane-wave-based ultrafast active cavitation imaging (UACI) can monitor the transient activities of cavitation bubbles, its resolution and cavitation-to-tissue ratio (CTR) are barely satisfactory but can be significantly improved by introducing eigenspace-based (ESB) adaptive beamforming. PISH and UACI are a natural combination for imaging of pure cavitation activity in tissue; however, it raises two problems: 1) the ESB beamforming is hard to implement in real time due to the enormous amount of computation associated with the covariance matrix inversion and eigendecomposition and 2) the narrowband characteristic of the subharmonic filter will incur a drastic degradation in resolution. Thus, in order to jointly address these two problems, we propose a new PISH-UACI method using novel fast ESB (F-ESB) beamforming and cavitation deconvolution for nonlinear signals. This method greatly reduces the computational complexity by using F-ESB beamforming through dimensionality reduction based on principal component analysis, while maintaining the high quality of ESB beamforming. The degraded resolution is recovered using cavitation deconvolution through a modified convolution model and compressive deconvolution. Both simulations and in vitro experiments were performed to verify the effectiveness of the proposed method. Compared with the ESB-based PISH-UACI, the entire computation of our proposed approach was reduced by 99%, while the axial resolution gain and CTR were increased by 3 times and 2 dB, respectively, confirming that satisfactory performance can be obtained for monitoring pure cavitation bubbles in tissue erosion.

  1. Sparse matrix multiplications for linear scaling electronic structure calculations in an atom-centered basis set using multiatom blocks.

    PubMed

    Saravanan, Chandra; Shao, Yihan; Baer, Roi; Ross, Philip N; Head-Gordon, Martin

    2003-04-15

    A sparse matrix multiplication scheme with multiatom blocks is reported, a tool that can be very useful for developing linear-scaling methods with atom-centered basis functions. Compared to conventional element-by-element sparse matrix multiplication schemes, efficiency is gained by the use of the highly optimized basic linear algebra subroutines (BLAS). However, some sparsity is lost in the multiatom blocking scheme because these matrix blocks will in general contain negligible elements. As a result, an optimal block size that minimizes the CPU time by balancing these two effects is recovered. In calculations on linear alkanes, polyglycines, estane polymers, and water clusters the optimal block size is found to be between 40 and 100 basis functions, where about 55-75% of the machine peak performance was achieved on an IBM RS6000 workstation. In these calculations, the blocked sparse matrix multiplications can be 10 times faster than a standard element-by-element sparse matrix package. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 618-622, 2003
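
    SciPy's Block Sparse Row (BSR) format offers a readily available, fixed-block analogue of the multiatom blocking idea (the paper's blocks are variable-sized and atom-derived): each stored block is a small dense matrix, so block-level products can use dense BLAS-style kernels. A minimal sketch with an arbitrary block size:

```python
import numpy as np
from scipy.sparse import bsr_matrix, random as sprandom

# Random block sparsity pattern, expanded into dense (bs x bs) blocks.
n, bs = 1200, 40                     # matrix size and block size
pattern = sprandom(n // bs, n // bs, density=0.05, random_state=0).toarray()
A = bsr_matrix(np.kron(pattern, np.ones((bs, bs))), blocksize=(bs, bs))
B = bsr_matrix(np.kron(pattern.T, np.ones((bs, bs))), blocksize=(bs, bs))

C = A @ B                            # multiply proceeds block by block
print(type(C), C.shape, C.nnz)       # product of two blocked sparse matrices
```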

  2. Pre-processing liquid chromatography/high-resolution mass spectrometry data: extracting pure mass spectra by deconvolution from the invariance of isotopic distribution.

    PubMed

    Krishnan, Shaji; Verheij, Elwin E R; Bas, Richard C; Hendriks, Margriet W B; Hankemeier, Thomas; Thissen, Uwe; Coulier, Leon

    2013-05-15

    Mass spectra obtained by deconvolution of liquid chromatography/high-resolution mass spectrometry (LC/HRMS) data can be impaired by non-informative mass-over-charge (m/z) channels. This impairment of mass spectra can have a significant negative influence on further post-processing, like quantification and identification. A metric derived from the knowledge of errors in isotopic distribution patterns, and the quality of the signal within a pre-defined mass chromatogram block, has been developed to pre-select all informative m/z channels. This procedure results in the clean-up of deconvoluted mass spectra by maintaining the intensity counts from m/z channels that originate from a specific compound/molecular ion (for example, molecular ion, adducts, 13C-isotopes, and multiply charged ions) and removing all m/z channels that are not related to the specific peak. The methodology has been successfully demonstrated for two sets of high-resolution LC/MS data. The approach described is therefore thought to be a useful tool in the automatic processing of LC/HRMS data. It clearly shows the advantages compared to other approaches like peak picking and de-isotoping in the sense that all information is retained while non-informative data are removed automatically. Copyright © 2013 John Wiley & Sons, Ltd.

  3. Evidence for radical anion formation during liquid secondary ion mass spectrometry analysis of oligonucleotides and synthetic oligomeric analogues: a deconvolution algorithm for molecular ion region clusters.

    PubMed

    Laramée, J A; Arbogast, B; Deinzer, M L

    1989-10-01

    It is shown that one-electron reduction is a common process that occurs in negative ion liquid secondary ion mass spectrometry (LSIMS) of oligonucleotides and synthetic oligonucleosides and that this process is in competition with proton loss. Deconvolution of the molecular anion cluster reveals contributions from (M−2H)•−, (M−H)−, M•−, and (M+H)−. A model based on these ionic species gives excellent agreement with the experimental data. A correlation between the concentration of species arising via one-electron reduction [M•− and (M+H)−] and the electron affinity of the matrix has been demonstrated. The relative intensity of M•− is mass-dependent; this is rationalized on the basis of base-stacking. Base sequence ion formation is theorized to arise from the M•− radical anion among other possible pathways.
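
    The deconvolution described here amounts to a small linear inverse problem: the observed cluster is modeled as a nonnegative mixture of the shifted isotope envelopes of the four ionic species. A sketch with entirely hypothetical envelope values:

```python
import numpy as np
from scipy.optimize import nnls

# Columns: isotope envelopes of (M-2H), (M-H), M, and (M+H) species on a
# common m/z grid, each shifted by one mass unit (hypothetical numbers).
iso = np.array([0.60, 0.30, 0.08, 0.02])     # single-species envelope
n_mz = 8
A = np.zeros((n_mz, 4))
for k in range(4):
    A[k:k+4, k] = iso                         # shift envelope by k mass units

x_true = np.array([0.15, 0.55, 0.20, 0.10])   # species fractions
cluster = A @ x_true                          # observed cluster shape

x_est, _ = nnls(A, cluster)                   # non-negative least squares
print(np.round(x_est / x_est.sum(), 3))       # recovers the fractions
```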

  4. A Partitioning Algorithm for Block-Diagonal Matrices With Overlap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guy Antoine Atenekeng Kahou; Laura Grigori; Masha Sosonkina

    2008-02-02

    We present a graph partitioning algorithm that aims at partitioning a sparse matrix into a block-diagonal form, such that any two consecutive blocks overlap. We denote this form of the matrix as the overlapped block-diagonal matrix. The partitioned matrix is suitable for applying the explicit formulation of the Multiplicative Schwarz preconditioner (EFMS) described in [3]. The graph partitioning algorithm partitions the graph of the input matrix into K partitions, such that every partition Ω_i has at most two neighbors Ω_{i-1} and Ω_{i+1}. First, an ordering algorithm, such as the reverse Cuthill-McKee algorithm, that reduces the matrix profile is performed. An initial overlapped block-diagonal partition is obtained from the profile of the matrix. An iterative strategy is then used to further refine the partitioning by allowing nodes to be transferred between neighboring partitions. Experiments are performed on matrices arising from real-world applications to show the feasibility and usefulness of this approach.
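
    The profile-reducing first step is available directly in SciPy. A minimal sketch (random symmetric test matrix, not one from the paper's application set) showing the bandwidth reduction from which an initial overlapped block-diagonal partition could be cut:

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Random symmetric sparsity pattern; RCM reduces its profile/bandwidth.
A = sprandom(200, 200, density=0.02, random_state=0)
A = ((A + A.T) > 0).astype(float).tocsr()
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
B = A[perm][:, perm]                      # symmetrically permuted matrix

def bandwidth(M):
    r, c = M.nonzero()
    return int(np.max(np.abs(r - c)))

print(bandwidth(A), "->", bandwidth(B))   # bandwidth shrinks after RCM
```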

  5. A new estimation of equivalent matrix block sizes in fractured media with two-phase flow applications in dual porosity models

    NASA Astrophysics Data System (ADS)

    Jerbi, Chahir; Fourno, André; Noetinger, Benoit; Delay, Frederick

    2017-05-01

    Single and multiphase flows in fractured porous media at the scale of natural reservoirs are often handled by resorting to homogenized models that avoid the heavy computations associated with a complete discretization of both fractures and matrix blocks. For example, the two overlapping continua (fractures and matrix) of a dual porosity system are coupled by way of fluid flux exchanges that deeply condition flow at the large scale. This characteristic is key to realistic flow simulations, especially for multiphase flow, as capillary forces and contrasts of fluid mobility compete in the extraction of a fluid from a capacitive matrix that is then conveyed through the fractures. The exchange rate between fractures and matrix is conditioned by the so-called mean matrix block size, which can be viewed as the size of a single matrix block neighboring a single fracture within a mesh of a dual porosity model. We propose a new evaluation of this matrix block size based on the analysis of discrete fracture networks. The approach rests on establishing, at the scale of a fractured block, the equivalence between the actual fracture network and a Warren and Root network made only of three regularly spaced fracture families parallel to the facets of the fractured block. The resulting matrix block sizes are then compared, via geometrical considerations and two-phase flow simulations, to the few other available methods. It is shown that the new method is stable in the sense that it provides accurate sizes irrespective of the type of fracture network investigated. The method also yields two-phase flow simulations from dual porosity models that are very close to reference simulations calculated in finely discretized networks. Finally, calculations of matrix block sizes with this new technique prove very rapid, which opens the way to demanding applications such as preconditioning a dual porosity approach applied to regional fractured reservoirs.

  6. Strong Matrix & Weak Blocks: Evolutionary Inversion of Mélange Rheological Relationships During Subduction and Its Implications for Seismogenesis

    NASA Astrophysics Data System (ADS)

    Clarke, A. P.; Vannucchi, P.; Ougier-Simonin, A.; Morgan, J. P.

    2017-12-01

    Subduction zone interface layers are often conceived to be heterogeneous, polyrheological zones analogous to exhumed mélanges. Mélanges typically contain mechanically strong blocks within a weaker matrix. However, our geomechanical study of the Osa Mélange, SW Costa Rica shows that this mélange contains blocks of altered basalt which are now weaker in friction than their surrounding indurated volcanoclastic matrix. Triaxial deformation experiments were conducted on samples of both the altered basalt blocks and the indurated volcanoclastic matrix at confining pressures of 60 and 120 MPa. These revealed that the volcanoclastic matrix has a strength 7.5 times that of the altered basalt at 60 MPa and 4 times at 120 MPa, with the altered basalt experiencing multi-stage failure. The inverted strength relationship between weaker blocks and stronger matrix evolved during subduction and diagenesis of the mélange unit by dewatering, compaction and diagenesis of the matrix and cataclastic brecciation and hydrothermal alteration of the basalt blocks. During the evolution of this material, the matrix progressively indurated until its plastic yield stress became greater than the brittle yield stress of the blocks. At this point, the typical rheological relationship found within mélanges inverts and mélange blocks can fail seismically as the weakest links along the subduction plate interface. The Osa Mélange is currently in the forearc of the erosive Middle America Trench and is being incorporated into the subduction zone interface at the updip limit of seismogenesis. The presence of altered basalt blocks acting as weak inclusions within this rock unit weakens the mélange as a whole rock mass. Seismic fractures can nucleate at or within these weak inclusions and the size of the block may limit the size of initial microseismic rock failure. However, when fractures are able to bridge across the matrix between blocks, significantly larger rupture areas may be possible. While this mechanism is a promising candidate for the updip limit of the unusually shallow seismogenic zone beneath Osa, it remains to be seen whether analogous evolutionary strength-inversions control the updip limit of other subduction seismogenic zones.

  7. Learning Circulant Sensing Kernels

    DTIC Science & Technology

    2014-03-01

    Furthermore, we test learning the circulant sensing matrix/operator and the nonparametric dictionary altogether and obtain even better performance. ... matrices, Tropp et al. [28] describes a random filter for acquiring a signal x̄; Haupt et al. [12] describes a channel estimation problem to identify a ...
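
    A defining advantage of circulant sensing operators is that they can be applied in O(n log n) time via the FFT rather than stored as dense matrices. A minimal sketch of taking m subsampled circulant measurements of a sparse signal (the recovery step, e.g. l1 minimization or dictionary-based reconstruction, is omitted, and the filter here is random rather than learned):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 64
x = np.zeros(n)
x[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)  # sparse signal

h = rng.standard_normal(n)               # first column of the circulant C
rows = rng.choice(n, m, replace=False)   # random output subsampling

def circ_apply(h, x):
    """C @ x for the circulant C generated by h, via the FFT."""
    return np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)).real

y = circ_apply(h, x)[rows]               # m compressive measurements
print(y.shape)
```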

  8. Functional blockage of EMMPRIN ameliorates atherosclerosis in apolipoprotein E-deficient mice.

    PubMed

    Liu, Hong; Yang, Li-xia; Guo, Rui-wei; Zhu, Guo-Fu; Shi, Yan-Kun; Wang, Xian-mei; Qi, Feng; Guo, Chuan-ming; Ye, Jin-shan; Yang, Zhi-hua; Liang, Xing

    2013-10-09

    Extracellular matrix metalloproteinase inducer (EMMPRIN), a 58-kDa cell surface glycoprotein, has been identified as a key receptor for transmitting cellular signals mediating metalloproteinase activities, as well as inflammation and oxidative stress. Clinical evidence has revealed that EMMPRIN is expressed in human atherosclerotic plaque; however, the relationship between EMMPRIN and atherosclerosis is unclear. To evaluate the functional role of EMMPRIN in atherosclerosis, we treated apolipoprotein E-deficient (ApoE(-/-)) mice with an EMMPRIN function-blocking antibody. EMMPRIN was found to be up-regulated in ApoE(-/-) mice fed a 12-week high-fat diet in contrast to 12 weeks of normal diet. Administration of a function-blocking EMMPRIN antibody (100 μg, twice per week for 4 weeks) to ApoE(-/-) mice, starting after 12 weeks of high-fat diet feeding caused attenuated and more stable atherosclerotic lesions, less reactive oxygen stress generation on plaque, as well as down-regulation of circulating interleukin-6 and monocyte chemotactic protein-1 in ApoE(-/-) mice. The benefit of EMMPRIN functional blockage was associated with reduced metalloproteinases proteolytic activity, which delayed the circulating monocyte transmigrating into atherosclerotic lesions. EMMPRIN antibody intervention ameliorated atherosclerosis in ApoE(-/-) mice by the down-regulation of metalloproteinase activity, suggesting that EMMPRIN may be a viable therapeutic target in atherosclerosis. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  9. Block LU factorization

    NASA Technical Reports Server (NTRS)

    Demmel, James W.; Higham, Nicholas J.; Schreiber, Robert S.

    1992-01-01

    Many of the currently popular 'block algorithms' are scalar algorithms in which the operations have been grouped and reordered into matrix operations. One genuine block algorithm in practical use is block LU factorization, and this has recently been shown by Demmel and Higham to be unstable in general. It is shown here that block LU factorization is stable if A is block diagonally dominant by columns. Moreover, for a general matrix the level of instability in block LU factorization can be bounded in terms of the condition number κ(A) and the growth factor for Gaussian elimination without pivoting. A consequence is that block LU factorization is stable for a matrix A that is symmetric positive definite or point diagonally dominant by rows or columns as long as A is well-conditioned.
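
    The factorization in question is easy to state for a 2x2 block partition: eliminate the (2,1) block with the multiplier L21 = A21 A11^{-1}, leaving the Schur complement S = A22 - L21 A12. A sketch with hypothetical well-conditioned blocks (the diagonal shift plays the role of the dominance condition discussed above):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 3
A11 = rng.standard_normal((m, m)) + 5 * np.eye(m)
A12 = rng.standard_normal((m, m))
A21 = rng.standard_normal((m, m))
A22 = rng.standard_normal((m, m)) + 5 * np.eye(m)
A = np.block([[A11, A12], [A21, A22]])

# One step of block LU: A = [[I, 0], [L21, I]] @ [[A11, A12], [0, S]]
L21 = A21 @ np.linalg.inv(A11)           # multiplier block
S = A22 - L21 @ A12                      # Schur complement
L = np.block([[np.eye(m), np.zeros((m, m))], [L21, np.eye(m)]])
U = np.block([[A11, A12], [np.zeros((m, m)), S]])
print(np.allclose(L @ U, A))             # exact block factorization
```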

  10. A gene profiling deconvolution approach to estimating immune cell composition from complex tissues.

    PubMed

    Chen, Shu-Hwa; Kuo, Wen-Yu; Su, Sheng-Yao; Chung, Wei-Chun; Ho, Jen-Ming; Lu, Henry Horng-Shing; Lin, Chung-Yen

    2018-05-08

    An emerging cancer treatment utilizes the intrinsic immune surveillance mechanism that is silenced by malignant cells. Hence, studies of tumor-infiltrating lymphocyte populations (TILs) are key to the success of advanced treatments. In addition to laboratory methods such as immunohistochemistry and flow cytometry, in silico gene expression deconvolution methods are available for analyses of the relative proportions of immune cell types. Herein, we used microarray data from the public domain to profile the gene expression pattern of twenty-two immune cell types. Initially, outliers were detected based on the consistency of gene profiling clustering results and the original cell phenotype notation. Subsequently, we filtered out genes that are expressed in non-hematopoietic normal tissues and cancer cells. For every pair of immune cell types, we ran t-tests for each gene and defined differentially expressed genes (DEGs) from this comparison. Equal numbers of DEGs were then collected as candidate lists, and numbers of conditions and minimal values for building signature matrices were calculated. Finally, we used ν-Support Vector Regression to construct a deconvolution model. The performance of our system was evaluated using blood samples from 20 adults, in which 9 immune cell types were identified using flow cytometry. The present computations performed better than current state-of-the-art deconvolution methods. Finally, we implemented the proposed method in R and tested extensibility and usability on Windows, MacOS, and Linux operating systems. The method, MySort, is wrapped as a Galaxy platform pluggable tool, and usage details are available at https://testtoolshed.g2.bx.psu.edu/view/moneycat/mysort/e3afe097e80a .
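
    A minimal CIBERSORT-style sketch of the ν-SVR deconvolution step, assuming scikit-learn; the signature matrix, noise level, and post-processing (clipping and renormalizing the coefficients) are illustrative simplifications of the pipeline described above, not the published MySort code:

```python
import numpy as np
from sklearn.svm import NuSVR

rng = np.random.default_rng(0)
n_genes, n_types = 500, 5
S = rng.lognormal(size=(n_genes, n_types))       # signature matrix (hypothetical)
w_true = rng.dirichlet(np.ones(n_types))         # true cell-type fractions
b = S @ w_true + rng.normal(0, 0.05, n_genes)    # noisy bulk expression profile

# nu-SVR with a linear kernel: the fitted coefficients act as raw fractions.
model = NuSVR(nu=0.5, C=1.0, kernel="linear").fit(S, b)
w = np.clip(model.coef_.ravel(), 0, None)        # negative weights -> 0
w /= w.sum()                                     # normalize to proportions
print(np.round(w, 3), np.round(w_true, 3))
```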

  11. Synthesis and morphology of hydroxyapatite/polyethylene oxide nanocomposites with block copolymer compatibilized interfaces

    NASA Astrophysics Data System (ADS)

    Lee, Ji Hoon; Shofner, Meisha

    2012-02-01

    In order to exploit the promise of polymer nanocomposites, special consideration should be given to component interfaces during synthesis and processing. Previous results from this group have shown that nanoparticles clustered into larger structures consistent with their native shape when the polymer matrix crystallinity was high. Therefore in this research, the nanoparticles are disguised from a highly-crystalline polymer matrix by cloaking them with a matrix-compatible block copolymer. Specifically, spherical and needle-shaped hydroxyapatite nanoparticles were synthesized using a block copolymer templating method. The block copolymer used, polyethylene oxide-b-polymethacrylic acid, remained on the nanoparticle surface following synthesis with the polyethylene oxide block exposed. These nanoparticles were subsequently added to a polyethylene oxide matrix using solution processing. Characterization of the nanocomposites indicated that the copolymer coating prevented the nanoparticles from assembling into ordered clusters and that the matrix crystallinity was decreased at a nanoparticle spacing of approximately 100 nm.

  12. Block-circulant matrices with circulant blocks, Weil sums, and mutually unbiased bases. II. The prime power case

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Combescure, Monique

    2009-03-15

    In our previous paper [Combescure, M., 'Circulant matrices, Gauss sums and the mutually unbiased bases. I. The prime number case', Cubo A Mathematical Journal (unpublished)] we have shown that the theory of circulant matrices allows one to recover the result that there exist p+1 mutually unbiased bases in dimension p, p being an arbitrary prime number. Two orthonormal bases B, B′ of ℂ^d are said to be mutually unbiased if for all b ∈ B and all b′ ∈ B′ one has |b·b′| = 1/√d (b·b′ denoting the Hermitian scalar product in ℂ^d). In this paper we show that the theory of block-circulant matrices with circulant blocks allows one to show very simply the known result that if d = p^n (p a prime number and n any integer) there exist d+1 mutually unbiased bases in ℂ^d. Our result relies heavily on an idea of Klimov et al. ['Geometrical approach to the discrete Wigner function,' J. Phys. A 39, 14471 (2006)]. As a by-product we recover properties of quadratic Weil sums for p ≥ 3, which generalizes the fact that in the prime case the quadratic Gauss sum properties follow from our results.
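
    The computational appeal of block-circulant matrices with circulant blocks (BCCB) is that they are diagonalized by the 2D discrete Fourier transform, so applying such a matrix is a 2D circular convolution. A small numerical check of this standard fact (the matrix is generated from an arbitrary kernel, unrelated to the bases constructed in the paper):

```python
import numpy as np

p = q = 4
k = np.random.default_rng(1).standard_normal((p, q))   # generating kernel

# Assemble the (p*q) x (p*q) block-circulant matrix with circulant blocks:
# entry ((a,b),(r,s)) is k[(a-r) mod p, (b-s) mod q].
M = np.zeros((p * q, p * q))
for a in range(p):
    for b in range(q):
        for r in range(p):
            for s in range(q):
                M[a * q + b, r * q + s] = k[(a - r) % p, (b - s) % q]

x = np.random.default_rng(2).standard_normal((p, q))
y_mat = M @ x.ravel()                                   # direct matrix apply
y_fft = np.fft.ifft2(np.fft.fft2(k) * np.fft.fft2(x)).real.ravel()
print(np.allclose(y_mat, y_fft))   # True: the 2D DFT diagonalizes M
```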

  13. Safety of Epinephrine in Digital Nerve Blocks: A Literature Review.

    PubMed

    Ilicki, Jonathan

    2015-11-01

    Digital nerve blocks are commonly performed in emergency departments. Health care practitioners are often taught to avoid performing blocks with epinephrine due to a risk of digital necrosis. The objective was to review the literature on the safety of epinephrine 1:100,000-200,000 (5-10 μg/mL) with local anesthetics in digital nerve blocks in healthy patients and in patients at risk for poor peripheral circulation. PubMed, Web of Science, and the Cochrane Library were searched in June 2014 using the query "digital block AND epinephrine OR digital block AND adrenaline". The searches were performed without any limits. Sixty-three articles were identified, and 39 of these were found to be relevant. These include nine reviews, 12 randomized controlled trials, and 18 other articles. Most studies excluded patients at risk for poor peripheral circulation. Two studies described using epinephrine on patients with vascular comorbidities. No study reported digital necrosis or gangrene attributable to epinephrine, either in healthy patients or in patients at risk for poor peripheral circulation. In total, at least 2797 digital nerve blocks with epinephrine have been performed without any complications. Epinephrine 1:100,000-200,000 (5-10 μg/mL) is safe to use in digital nerve blocks in healthy patients. Physiological studies show epinephrine-induced vasoconstriction to be transient. There are no reported cases of epinephrine-induced harm to patients at risk for poor peripheral circulation despite a theoretical risk of harmful epinephrine-induced vasoconstriction. A lack of reported complications suggests that the risk of epinephrine-induced vasoconstriction to digits may be overstated. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. A general parallel sparse-blocked matrix multiply for linear scaling SCF theory

    NASA Astrophysics Data System (ADS)

    Challacombe, Matt

    2000-06-01

    A general approach to the parallel sparse-blocked matrix-matrix multiply is developed in the context of linear scaling self-consistent-field (SCF) theory. The data-parallel message passing method uses non-blocking communication to overlap computation and communication. The space filling curve heuristic is used to achieve data locality for sparse matrix elements that decay with "separation". Load balance is achieved by solving the bin packing problem for blocks with variable size. With this new method as the kernel, parallel performance of the simplified density matrix minimization (SDMM) for solution of the SCF equations is investigated for RHF/6-31G** water clusters and RHF/3-21G estane globules. Sustained rates above 5.7 GFLOPS for the SDMM have been achieved for (H2O)200 with 95 Origin 2000 processors. Scalability is found to be limited by load imbalance, which increases with decreasing granularity, due primarily to the inhomogeneous distribution of variable block sizes.

  15. Constructing LDPC Codes from Loop-Free Encoding Modules

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher; Thorpe, Jeremy; Andrews, Kenneth

    2009-01-01

    A method of constructing certain low-density parity-check (LDPC) codes by use of relatively simple loop-free coding modules has been developed. The subclasses of LDPC codes to which the method applies includes accumulate-repeat-accumulate (ARA) codes, accumulate-repeat-check-accumulate codes, and the codes described in Accumulate-Repeat-Accumulate-Accumulate Codes (NPO-41305), NASA Tech Briefs, Vol. 31, No. 9 (September 2007), page 90. All of the affected codes can be characterized as serial/parallel (hybrid) concatenations of such relatively simple modules as accumulators, repetition codes, differentiators, and punctured single-parity check codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. These codes can also be characterized as hybrid turbolike codes that have projected graph or protograph representations (for example see figure); these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The present method comprises two related submethods for constructing LDPC codes from simple loop-free modules with circulant permutations. The first submethod is an iterative encoding method based on the erasure-decoding algorithm. The computations required by this method are well organized because they involve a parity-check matrix having a block-circulant structure. The second submethod involves the use of block-circulant generator matrices. The encoders of this method are very similar to those of recursive convolutional codes. Some encoders according to this second submethod have been implemented in a small field-programmable gate array that operates at a speed of 100 megasymbols per second. By use of density evolution (a computational- simulation technique for analyzing performances of LDPC codes), it has been shown through some examples that as the block size goes to infinity, low iterative decoding thresholds close to channel capacity limits can be achieved for the codes of the type in question having low maximum variable node degrees. The decoding thresholds in these examples are lower than those of the best-known unstructured irregular LDPC codes constrained to have the same maximum node degrees. Furthermore, the present method enables the construction of codes of any desired rate with thresholds that stay uniformly close to their respective channel capacity thresholds.
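
    The building block of such parity-check matrices is the circulant permutation matrix: an identity matrix with its columns cyclically shifted. A sketch assembling a small block-circulant H from a hypothetical base matrix of shifts, with -1 marking all-zero blocks:

```python
import numpy as np

def circulant_perm(m, shift):
    """m x m circulant permutation matrix: identity cyclically shifted."""
    return np.roll(np.eye(m, dtype=int), shift, axis=1)

# Base matrix of circulant shifts (hypothetical protograph, not a code
# from the article); -1 denotes an all-zero block.
base = np.array([[0,  1, -1,  2],
                 [3, -1,  0,  1]])
m = 5
blocks = [[circulant_perm(m, s) if s >= 0 else np.zeros((m, m), dtype=int)
           for s in row] for row in base]
H = np.block(blocks)          # block-circulant parity-check matrix
print(H.shape)                # (2*m, 4*m)
```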

  16. Atmospheric Blocking and Atlantic Multi-Decadal Ocean Variability

    NASA Technical Reports Server (NTRS)

    Hakkinen, Sirpa; Rhines, Peter B.; Worthen, Denise L.

    2011-01-01

    Atmospheric blocking over the northern North Atlantic involves the isolation of large regions of air from the westerly circulation for 5-14 days or more. In a recent 20th century atmospheric reanalysis (1,2), winters with more frequent blocking persist over several decades and correspond to a warm North Atlantic Ocean, in phase with Atlantic multi-decadal ocean variability (AMV). Ocean circulation is forced by wind-stress curl and related air/sea heat exchange, and we find that their space-time structure is associated with dominant blocking patterns: weaker ocean gyres and weaker heat exchange contribute to the warm phase of AMV. Increased blocking activity extending from Greenland to the British Isles is evident when the winter blocking days of the cold years (1900-1929) are subtracted from those of the warm years (1939-1968).

  17. Dissecting the Impact of Matrix Anchorage and Elasticity in Cell Adhesion

    PubMed Central

    Pompe, Tilo; Glorius, Stefan; Bischoff, Thomas; Uhlmann, Ina; Kaufmann, Martin; Brenner, Sebastian; Werner, Carsten

    2009-01-01

    Extracellular matrices determine cellular fate decisions through the regulation of intracellular force and stress. Previous studies suggest that matrix stiffness and ligand anchorage cause distinct signaling effects. We show herein how defined noncovalent anchorage of adhesion ligands to elastic substrates allows for dissection of intracellular adhesion signaling pathways related to matrix stiffness and receptor forces. Quantitative analysis of the mechanical balance in cell adhesion using traction force microscopy revealed distinct scalings of the strain energy imparted by the cells on the substrates dependent either on matrix stiffness or on receptor force. Those scalings suggested the applicability of a linear elastic theoretical framework for the description of cell adhesion in a certain parameter range, which is cell-type-dependent. Besides the deconvolution of biophysical adhesion signaling, site-specific phosphorylation of focal adhesion kinase, dependent either on matrix stiffness or on receptor force, also demonstrated the dissection of biochemical signaling events in our approach. Moreover, the net contractile moment of the adherent cells and their strain energy exerted on the elastic substrate was found to be a robust measure of cell adhesion with a unifying power-law scaling exponent of 1.5 independent of matrix stiffness. PMID:19843448

  18. Reduced order feedback control equations for linear time and frequency domain analysis

    NASA Technical Reports Server (NTRS)

    Frisch, H. P.

    1981-01-01

    An algorithm was developed which can be used to obtain the reduced-order feedback control equations. In a more general context, the algorithm computes a real nonsingular similarity transformation matrix which reduces a real nonsymmetric matrix to block diagonal form, each block of which is a real quasi upper triangular matrix. The algorithm works with both defective and derogatory matrices, and when and if it fails, the resultant output can be used as a guide for the reformulation of the mathematical equations that lead up to the ill-conditioned matrix which could not be block diagonalized.
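
    One core step of such a reduction can be sketched with SciPy: compute the real Schur form, then annihilate an off-diagonal block by solving a Sylvester equation. This toy version assumes the split point does not cut a 2x2 eigenvalue block and that the two diagonal blocks have disjoint spectra; the hard defective/derogatory cases the algorithm above is designed to handle are exactly where such a solve degrades.

```python
import numpy as np
from scipy.linalg import schur, solve_sylvester

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
T, Z = schur(A, output="real")        # A = Z @ T @ Z.T, T quasi-triangular

k = 3
if not np.isclose(T[k, k - 1], 0.0):  # avoid splitting a 2x2 eigen-block
    k += 1
T11, T12, T22 = T[:k, :k], T[:k, k:], T[k:, k:]

# Find X with T11 @ X - X @ T22 = -T12, so that
# [[I, -X], [0, I]] @ T @ [[I, X], [0, I]] = blockdiag(T11, T22).
X = solve_sylvester(T11, -T22, -T12)
n = T.shape[0]
M = np.eye(n)
M[:k, k:] = X
D = np.linalg.solve(M, T @ M)         # similarity transform applied to T
print(np.allclose(D[:k, k:], 0, atol=1e-10))  # off-diagonal block removed
```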

  19. The Resolution Sensitivity of Northern Hemisphere Blocking in Four 25-km Atmospheric Global Circulation Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiemann, Reinhard; Demory, Marie-Estelle; Shaffrey, Len C.

    The aim of this study is to investigate if the representation of Northern Hemisphere blocking is sensitive to resolution in current-generation atmospheric global circulation models (AGCMs). An evaluation is thus conducted of how well atmospheric blocking is represented in four AGCMs whose horizontal resolution is increased from a grid spacing of more than 100 km to about 25 km. It is shown that Euro-Atlantic blocking is simulated overall more credibly at higher resolution (i.e., in better agreement with a 50-yr reference blocking climatology created from the reanalyses ERA-40 and ERA-Interim). The improvement seen with resolution depends on the season and to some extent on the model considered. Euro-Atlantic blocking is simulated more realistically at higher resolution in winter, spring, and autumn, and robustly so across the model ensemble. The improvement in spring is larger than that in winter and autumn. Summer blocking is found to be better simulated at higher resolution by one model only, with little change seen in the other three models. The representation of Pacific blocking is not found to systematically depend on resolution. Despite the improvements seen with resolution, the 25-km models still exhibit large biases in Euro-Atlantic blocking. For example, three of the four 25-km models underestimate winter northern European blocking frequency by about one-third. The resolution sensitivity and biases in the simulated blocking are shown to be in part associated with the mean-state biases in the models' midlatitude circulation.

  1. BCYCLIC: A parallel block tridiagonal matrix cyclic solver

    NASA Astrophysics Data System (ADS)

    Hirshman, S. P.; Perumalla, K. S.; Lynch, V. E.; Sanchez, R.

    2010-09-01

    A block tridiagonal matrix is factored with minimal fill-in using a cyclic reduction algorithm that is easily parallelized. Storage of the factored blocks allows the application of the inverse to multiple right-hand sides which may not be known at factorization time. Scalability with the number of block rows is achieved with cyclic reduction, while scalability with the block size is achieved using multithreaded routines (OpenMP, GotoBLAS) for block matrix manipulation. This dual scalability is a noteworthy feature of this new solver, as is its ability to efficiently handle arbitrary (non-power-of-2) block row and processor numbers. Comparison with a state-of-the-art parallel sparse solver is presented. It is expected that this new solver will allow many physical applications to optimally use the parallel resources on current supercomputers. Example usage of the solver in magnetohydrodynamic (MHD) three-dimensional equilibrium solvers for high-temperature fusion plasmas is cited.
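
    A serial sketch of block cyclic reduction is shown below: odd-indexed block unknowns are eliminated, the solver recurses on the half-size even-indexed system, and the odd unknowns are back-substituted. Explicit np.linalg.inv is used for clarity only; the data are random and diagonally dominant, and none of the paper's parallelism or threaded BLAS is attempted.

```python
import numpy as np

def solve_block_tridiag_cr(L, D, U, b):
    """Cyclic reduction for a block tridiagonal system.
    D[i]: diagonal blocks; L[i] couples x[i-1] (L[0] is None);
    U[i] couples x[i+1] (U[-1] is None); b[i]: right-hand sides."""
    n = len(D)
    if n == 1:
        return [np.linalg.solve(D[0], b[0])]
    Dr, Lr, Ur, br = [], [], [], []
    for j in range(0, n, 2):                 # eliminate odd-indexed unknowns
        Dj, bj, Lj, Uj = D[j].copy(), b[j].copy(), None, None
        if j - 1 >= 0:
            G = L[j] @ np.linalg.inv(D[j - 1])
            Dj -= G @ U[j - 1]
            bj -= G @ b[j - 1]
            if j - 2 >= 0:
                Lj = -G @ L[j - 1]
        if j + 1 <= n - 1:
            H = U[j] @ np.linalg.inv(D[j + 1])
            Dj -= H @ L[j + 1]
            bj -= H @ b[j + 1]
            if j + 2 <= n - 1:
                Uj = -H @ U[j + 1]
        Dr.append(Dj); Lr.append(Lj); Ur.append(Uj); br.append(bj)
    xe = solve_block_tridiag_cr(Lr, Dr, Ur, br)  # recurse on even system
    x = [None] * n
    x[0:n:2] = xe
    for i in range(1, n, 2):                     # back-substitute odd unknowns
        r = b[i] - L[i] @ x[i - 1]
        if i + 1 <= n - 1:
            r -= U[i] @ x[i + 1]
        x[i] = np.linalg.solve(D[i], r)
    return x

# Check against a dense solve on a random, diagonally dominant system.
rng = np.random.default_rng(0)
n, m = 7, 2
D = [rng.standard_normal((m, m)) + 4 * np.eye(m) for _ in range(n)]
L = [None] + [rng.standard_normal((m, m)) for _ in range(n - 1)]
U = [rng.standard_normal((m, m)) for _ in range(n - 1)] + [None]
b = [rng.standard_normal(m) for _ in range(n)]
x = np.concatenate(solve_block_tridiag_cr(L, D, U, b))
A = np.zeros((n * m, n * m))
for i in range(n):
    A[i*m:(i+1)*m, i*m:(i+1)*m] = D[i]
    if i > 0:
        A[i*m:(i+1)*m, (i-1)*m:i*m] = L[i]
    if i < n - 1:
        A[i*m:(i+1)*m, (i+1)*m:(i+2)*m] = U[i]
print(np.allclose(x, np.linalg.solve(A, np.concatenate(b))))  # True
```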

  2. Climate model biases in jet streams, blocking and storm tracks resulting from missing orographic drag

    NASA Astrophysics Data System (ADS)

    Pithan, Felix; Shepherd, Theodore G.; Zappa, Giuseppe; Sandu, Irina

    2016-07-01

    State-of-the-art climate models generally struggle to represent important features of the large-scale circulation. Common model deficiencies include an equatorward bias in the location of the midlatitude westerlies and an overly zonal orientation of the North Atlantic storm track. Orography is known to strongly affect the atmospheric circulation and is notoriously difficult to represent in coarse-resolution climate models. Yet how the representation of orography affects circulation biases in current climate models is not understood. Here we show that the effects of switching off the parameterization of drag from low-level orographic blocking in one climate model resemble the biases of the Coupled Model Intercomparison Project Phase 5 ensemble: an overly zonal wintertime North Atlantic storm track and fewer European blocking events, and an equatorward shift in the Southern Hemispheric jet and an increase in the Southern Annular Mode time scale. This suggests that typical circulation biases in coarse-resolution climate models may be alleviated by improved parameterizations of low-level drag.

  3. Fundamental Flux Equations for Fracture-Matrix Interactions with Linear Diffusion

    NASA Astrophysics Data System (ADS)

    Oldenburg, C. M.; Zhou, Q.; Rutqvist, J.; Birkholzer, J. T.

    2017-12-01

    The conventional dual-continuum models are only applicable for late-time behavior of pressure propagation in fractured rock, while discrete-fracture-network models may explicitly deal with matrix blocks at high computational expense. To address these issues, we developed a unified-form diffusive flux equation for 1D isotropic (spheres, cylinders, slabs) and 2D/3D rectangular matrix blocks (squares, cubes, rectangles, and rectangular parallelepipeds) by partitioning the entire dimensionless-time domain (Zhou et al., 2017a, b). For each matrix block, this flux equation consists of the early-time solution up until a switch-over time after which the late-time solution is applied to create continuity from early to late time. The early-time solutions are based on three-term polynomial functions in terms of square root of dimensionless time, with the coefficients dependent on dimensionless area-to-volume ratio and aspect ratios for rectangular blocks. For the late-time solutions, one exponential term is needed for isotropic blocks, while a few additional exponential terms are needed for highly anisotropic blocks. The time-partitioning method was also used for calculating pressure/concentration/temperature distribution within a matrix block. The approximate solution contains an error-function solution for early times and an exponential solution for late times, with relative errors less than 0.003. These solutions form the kernel of multirate and multidimensional hydraulic, solute and thermal diffusion in fractured reservoirs.

  4. Development and validation of a liquid chromatography isotope dilution mass spectrometry method for the reliable quantification of alkylphenols in environmental water samples by isotope pattern deconvolution.

    PubMed

    Fabregat-Cabello, Neus; Sancho, Juan V; Vidal, Andreu; González, Florenci V; Roig-Navarro, Antoni Francesc

    2014-02-07

    We present here a new measurement method for the rapid extraction and accurate quantification of technical nonylphenol (NP) and 4-t-octylphenol (OP) in complex-matrix water samples by UHPLC-ESI-MS/MS. The extraction of both compounds is achieved in 30 min by means of hollow fiber liquid phase microextraction (HF-LPME) using 1-octanol as the acceptor phase, which provides an enrichment (preconcentration) factor of 800. In addition, we have developed a quantification method based on isotope dilution mass spectrometry (IDMS) and singly 13C1-labeled compounds. To this end the minimally labeled 13C1-4-(3,6-dimethyl-3-heptyl)-phenol and 13C1-t-octylphenol isomers were synthesized; these coelute with the natural compounds and allow compensation of the matrix effect. The quantification was carried out using isotope pattern deconvolution (IPD), which permits obtaining the concentration of both compounds without the need to build any calibration graph, reducing the total analysis time. The combination of both extraction and determination techniques has allowed the validation, for the first time, of an HF-LPME methodology at the levels required by legislation, achieving limits of quantification of 0.1 ng mL⁻¹ and recoveries within 97-109%. Given the low cost and short total analysis time of HF-LPME, this methodology is ready for implementation in routine analytical laboratories. Copyright © 2013 Elsevier B.V. All rights reserved.
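
    Isotope pattern deconvolution reduces to a small linear system: the measured cluster abundances are a linear combination of the isotopic distributions of the natural and labeled isotopologues. A sketch with hypothetical distributions (a 50:50 mixture is recovered exactly):

```python
import numpy as np

# Hypothetical isotopic distributions (M, M+1, M+2) of the natural analyte
# and its 13C1-labeled analogue, as columns of the design matrix.
A = np.array([[0.85, 0.02],
              [0.12, 0.86],
              [0.03, 0.12]])
mix = np.array([0.435, 0.490, 0.075])   # measured cluster abundances

# Least-squares isotope pattern deconvolution: isotopologue fractions.
x, *_ = np.linalg.lstsq(A, mix, rcond=None)
print(np.round(x / x.sum(), 3))         # natural vs labeled molar fractions
```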

  5. A Perron-Frobenius theory for block matrices associated to a multiplex network

    NASA Astrophysics Data System (ADS)

    Romance, Miguel; Solá, Luis; Flores, Julio; García, Esther; García del Amo, Alejandro; Criado, Regino

    2015-03-01

    The uniqueness of the Perron vector of a nonnegative block matrix associated to a multiplex network is discussed. The conclusions follow from the relationships between the irreducibility of certain nonnegative block matrices associated to a multiplex network, the irreducibility of the corresponding matrices of each layer, and the irreducibility of the adjacency matrix of the projection network. In addition, the computation of that Perron vector in terms of the Perron vectors of the blocks is also addressed. Finally we present the precise relations that allow one to express the Perron eigenvector of the multiplex network in terms of the Perron eigenvectors of its layers.
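
    For a two-layer multiplex, the nonnegative block matrix in question is the supra-adjacency matrix: layer adjacency matrices on the diagonal and interlayer couplings (here, identities) off the diagonal. When this matrix is irreducible, power iteration converges to the Perron vector; a sketch with random layers:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A1 = (rng.random((n, n)) < 0.4).astype(float); A1 = np.maximum(A1, A1.T)
A2 = (rng.random((n, n)) < 0.4).astype(float); A2 = np.maximum(A2, A2.T)

# Supra-adjacency block matrix: layer adjacencies on the diagonal,
# identity coupling between the two replicas of each node.
S = np.block([[A1, np.eye(n)], [np.eye(n), A2]])

v = np.ones(2 * n)                    # power iteration for the Perron vector
for _ in range(500):
    v = S @ v
    v /= np.linalg.norm(v)
print(np.round(v, 3))                 # nonnegative dominant eigenvector
```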

  6. Verifying the Presence of Low Levels of Neptunium in a Uranium Matrix with Electron Energy-Loss Spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buck, Edgar C.; Douglas, Matthew; Wittman, Richard S.

    2010-01-01

    This paper examines the problems associated with the analysis of low levels of neptunium (Np) in a uranium (U) matrix with electron energy-loss spectroscopy (EELS) on the transmission electron microscope (TEM). The detection of Np in a matrix of U can be impeded by the occurrence of a plural scattering event from U (U-M5 + U-O4,5) that results in severe overlap on the Np-M5 edge at 3665 eV. Low levels (1600-6300 ppm) of Np can be detected in U solids by confirming that the energy gap between the Np-M5 and Np-M4 edges is 184 eV and showing that the M4/M5 ratio for Np is smaller than that for U. The Richardson-Lucy deconvolution method was applied to energy-loss spectral images and was shown to increase the signal to noise. This method also improves the limits of detection for Np in a U matrix.
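
    A 1D toy version of the Richardson-Lucy iteration used on the spectral images: the estimate u is updated multiplicatively by the correlation of the data/model ratio with the point spread function. The signal and PSF below are synthetic stand-ins for energy-loss spectra, not data from the study.

```python
import numpy as np

def richardson_lucy(d, psf, n_iter=50):
    """1D Richardson-Lucy deconvolution of data d blurred by psf."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    u = np.full_like(d, d.mean())             # flat initial estimate
    for _ in range(n_iter):
        conv = np.convolve(u, psf, mode="same")
        ratio = d / np.maximum(conv, 1e-12)   # avoid division by zero
        u *= np.convolve(ratio, psf_mirror, mode="same")
    return u

# Two overlapping synthetic peaks blurred by an instrument response.
x = np.linspace(-1, 1, 400)
truth = np.exp(-0.5*((x + 0.1)/0.02)**2) + 0.4*np.exp(-0.5*((x - 0.05)/0.02)**2)
psf = np.exp(-0.5*(np.linspace(-0.3, 0.3, 121)/0.06)**2)
data = np.convolve(truth, psf/psf.sum(), mode="same") + 1e-4
est = richardson_lucy(data, psf)
# The deconvolved main peak realigns with the true peak position.
print(x[data.argmax()], x[est.argmax()], x[truth.argmax()])
```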

  7. Estimation of neutron energy distributions from prompt gamma emissions

    NASA Astrophysics Data System (ADS)

    Panikkath, Priyada; Udupi, Ashwini; Sarkar, P. K.

    2017-11-01

    A technique for estimating the incident neutron energy distribution from the prompt gamma intensities emitted by a system exposed to neutrons is presented. The emitted prompt gamma intensities, or the measured photopeaks in a gamma detector, are related to the incident neutron energy distribution through a convolution with the response of the prompt-gamma-generating system to mono-energetic neutrons. In the present study, the system is a cylinder of high density polyethylene (HDPE) placed inside another cylinder of borated HDPE (BHDPE) with an outer Pb cover, exposed to neutrons. The five prompt gamma peaks emitted from hydrogen, boron, carbon and lead can be utilized to unfold the incident neutron energy distribution as an under-determined deconvolution problem. Such an under-determined set of equations is solved using the genetic-algorithm-based Monte Carlo deconvolution code GAMCD. The feasibility of the proposed technique is demonstrated theoretically using the Monte Carlo calculated response matrix and intensities of the prompt gammas emitted from the Pb-covered BHDPE-HDPE system for several incident neutron spectra spanning different energy ranges.
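
    GAMCD attacks the under-determined unfolding with a genetic-algorithm search; as a simpler illustration of the same inverse problem, the sketch below unfolds a hypothetical 5-peak/12-bin response system with non-negative least squares. With fewer equations than unknowns the solution is not unique, which is precisely why regularization or a stochastic search is needed in practice.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_peaks, n_bins = 5, 12                  # 5 gamma peaks, 12 neutron energy bins
R = rng.random((n_peaks, n_bins))        # hypothetical response matrix
phi_true = np.exp(-0.5 * ((np.arange(n_bins) - 6) / 2.0) ** 2)  # "true" spectrum
y = R @ phi_true                         # measured prompt-gamma intensities

# Under-determined (5 equations, 12 unknowns): NNLS picks one nonnegative
# solution consistent with the data; a GA search (as in GAMCD) refines it.
phi, _ = nnls(R, y)
print(np.round(phi, 2))
```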

  8. Application of an NLME-Stochastic Deconvolution Approach to Level A IVIVC Modeling.

    PubMed

    Kakhi, Maziar; Suarez-Sharp, Sandra; Shepard, Terry; Chittenden, Jason

    2017-07-01

    Stochastic deconvolution is a parameter estimation method that calculates drug absorption using a nonlinear mixed-effects model in which the random effects associated with absorption represent a Wiener process. The present work compares (1) stochastic deconvolution and (2) numerical deconvolution, using clinical pharmacokinetic (PK) data generated for an in vitro-in vivo correlation (IVIVC) study of extended release (ER) formulations of a Biopharmaceutics Classification System class III drug substance. The preliminary analysis found that numerical and stochastic deconvolution yielded superimposable fraction absorbed (Fabs) versus time profiles when supplied with exactly the same externally determined unit impulse response parameters. In a separate analysis, a full population-PK/stochastic deconvolution was applied to the clinical PK data. Scenarios were considered in which immediate release (IR) data were either retained or excluded to inform parameter estimation. The resulting Fabs profiles were then used to model level A IVIVCs. All the considered stochastic deconvolution scenarios, and numerical deconvolution, yielded on average similar results with respect to the IVIVC validation. These results could be achieved with stochastic deconvolution without recourse to IR data. Unlike numerical deconvolution, this also implies that in crossover studies where certain individuals do not receive an IR treatment, their ER data alone can still be included as part of the IVIVC analysis. Published by Elsevier Inc.

  9. Spectroscopic investigation of different concentrations of the vapour deposited copper phthalocyanine as a "guest" in polyimide matrix.

    PubMed

    Georgiev, Anton; Yordanov, Dancho; Dimov, Dean; Assa, Jacob; Spassova, Erinche; Danev, Gencho

    2015-04-05

    Nanocomposite layers (250 nm) of copper phthalocyanine/polyimide prepared by simultaneous vapour deposition from three different sources were studied. Different concentrations of copper phthalocyanine as a "guest" in the polyimide matrix, as a function of the preparation conditions, were determined by FTIR (Fourier Transform Infrared) and UV-VIS (Ultraviolet-Visible) spectroscopies. The aim was to assess the ability of the spectroscopic methods to quantitatively determine the "guest" and to compare this with the quality of the polyimide thin films in relation to the "guest" concentration. The band at 1334 cm⁻¹ was used for quantitative estimation of the "guest" in the polyimide matrix. Copper phthalocyanine concentrations below 20% require curve-fitting techniques with Fourier self-deconvolution. The relationship between "guest" concentration and degree of imidization, as well as the electronic UV-VIS spectra, are discussed in relation to the composition, imidization degree and the two crystallographic modifications of the embedded chromophore. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blum, Paul

    Cellulosic ethanol is an emerging biofuel that will make strong contributions to American domestic energy needs. In the US midwest the standard method for pretreatment of biomass uses hot acid to deconstruct lignocellulose. While other methods work, they are not in common use. Therefore it is necessary to work within this context to achieve process improvements and reductions in biofuel cost. Technology underlying this process could supplement and even replace commodity enzymes with engineered microbes to convert biomass-derived lignocellulose feedstocks into biofuels and value-added chemicals. The approach used here was based on consolidated bioprocessing. Thermoacidophilic microbes belonging to the Domain Archaea were evaluated and modified to promote deconvolution and saccharification of lignocellulose. Biomass pretreatment (hot acid) was combined with fermentation using an extremely thermoacidophilic microbial platform. The identity and fate of released sugars was controlled using metabolic blocks combined with added biochemical traits where needed. LC/MS analysis supported through the newly established Nebraska Bioenergy Facility provided general support for bioenergy researchers at the University of Nebraska. The primary project strategy was to use microbes that naturally flourish in hot acid (thermoacidophiles) with conventional biomass pretreatment that uses hot acid. The specific objectives were: to screen thermoacidophilic taxa for the ability to deconvolute lignocellulose and depolymerize associated carbohydrates; evaluate and respond to formation of "inhibitors" that arose during incubation of lignocellulose under heated acidic conditions; identify and engineer "sugar flux channeling and catabolic blocks" that redirect metabolic pathways to maximize sugar concentrations; expand the hydrolytic capacity of extremely thermoacidophilic microbes through the addition of deconvolution traits; and establish the Nebraska Bioenergy Facility (NBF) at the University of Nebraska-Lincoln.

  11. Electrospray Ionization with High-Resolution Mass Spectrometry as a Tool for Lignomics: Lignin Mass Spectrum Deconvolution

    NASA Astrophysics Data System (ADS)

    Andrianova, Anastasia A.; DiProspero, Thomas; Geib, Clayton; Smoliakova, Irina P.; Kozliak, Evguenii I.; Kubátová, Alena

    2018-05-01

    The capability to characterize lignin, lignocellulose, and their degradation products is essential for the development of new renewable feedstocks. An electrospray ionization high-resolution time-of-flight mass spectrometry (ESI-HR TOF-MS) method was developed, expanding the lignomics toolkit while targeting the simultaneous detection of low and high molecular weight (MW) lignin species. The effect of a broad range of electrolytes and various ionization conditions on ion formation and ionization effectiveness was studied using a suite of mono-, di-, and triarene lignin model compounds as well as kraft alkali lignin. Contrary to previous studies, the positive ionization mode was found to be more effective for methoxy-substituted arenes and polyphenols, i.e., species of a broadly varied MW structurally similar to native lignin. For the first time, we report an effective formation of multiply charged species of lignin with subsequent mass spectrum deconvolution in the presence of 100 mmol L-1 formic acid in the positive ESI mode. The developed method enabled the detection of lignin species with an MW between 150 and 9000 Da or higher, depending on the mass analyzer. The obtained Mn and Mw values of 1500 and 2500 Da, respectively, were in good agreement with those determined by gel permeation chromatography. Furthermore, the deconvoluted ESI mass spectrum was similar to that obtained with matrix-assisted laser desorption/ionization (MALDI)-HR TOF-MS, yet featuring a higher signal-to-noise ratio. The formation of multiply charged species was confirmed with ion mobility ESI-HR Q-TOF-MS.

  13. Retardation of mobile radionuclides in granitic rock fractures by matrix diffusion

    NASA Astrophysics Data System (ADS)

    Hölttä, P.; Poteri, A.; Siitari-Kauppi, M.; Huittinen, N.

    Transport of iodide and sodium has been studied by means of block fracture and core column experiments to evaluate the simplified radionuclide transport concept. The objectives were to examine the processes causing retention in solute transport, especially matrix diffusion, and to estimate their importance during transport at different scales and flow conditions. Block experiments were performed using a Kuru Grey granite block having a horizontally planar natural fracture. Core columns were constructed from cores drilled orthogonal to the fracture of the granite block. Several tracer tests were performed using uranine, 131I and 22Na as tracers at water flow rates of 0.7-50 μL min⁻¹. Transport of tracers was modelled by applying the advection-dispersion model based on the generalized Taylor dispersion, augmented with matrix diffusion. Scoping calculations were combined with experiments to test the model concepts. Two different experimental configurations could be modelled applying consistent transport processes and parameters. The processes, advection-dispersion and matrix diffusion, were conceptualized with sufficient accuracy to replicate the experimental results. The effects of matrix diffusion were demonstrated on the slightly sorbing sodium and mobile iodide breakthrough curves.

  14. A new lumped-parameter model for flow in unsaturated dual-porosity media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimmerman, Robert W.; Hadgu, Teklu; Bodvarsson, Gudmundur S.

    A new lumped-parameter approach to simulating unsaturated flow processes in dual-porosity media such as fractured rocks or aggregated soils is presented. Fluid flow between the fracture network and the matrix blocks is described by a non-linear equation that relates the imbibition rate to the local difference in liquid-phase pressure between the fractures and the matrix blocks. Unlike a Warren-Root-type equation, this equation is accurate in both the early and late time regimes. The fracture/matrix interflow equation has been incorporated into an existing unsaturated flow simulator, to serve as a source/sink term for fracture gridblocks. Flow processes are then simulated using only fracture gridblocks in the computational grid. This new lumped-parameter approach has been tested on two problems involving transient flow in fractured/porous media, and compared with simulations performed using explicit discretization of the matrix blocks. The new procedure seems to accurately simulate flow processes in unsaturated fractured rocks, and typically requires an order of magnitude less computational time than do simulations using fully-discretized matrix blocks. [References: 37]

  15. Color normalization of histology slides using graph regularized sparse NMF

    NASA Astrophysics Data System (ADS)

    Sha, Lingdao; Schonfeld, Dan; Sethi, Amit

    2017-03-01

    Computer-based automatic medical image processing and quantification are becoming popular in digital pathology. However, preparation of histology slides can vary widely due to differences in staining equipment, procedures and reagents, which can reduce the accuracy of algorithms that analyze their color and texture information. To reduce the unwanted color variations, various supervised and unsupervised color normalization methods have been proposed. Compared with supervised color normalization methods, unsupervised methods have the advantages of time and cost efficiency and universal applicability. Most of the unsupervised color normalization methods for histology are based on stain separation. Given that stain concentration cannot be negative and that different parts of the tissue absorb different stains, nonnegative matrix factorization (NMF), and in particular its sparse version (SNMF), are good candidates for stain separation. However, most existing unsupervised color normalization methods, such as PCA, ICA, NMF and SNMF, fail to consider the sparse manifold structure that the image pixels occupy, which can result in loss of texture information during color normalization. Manifold learning methods like the graph Laplacian have proven very effective in interpreting high-dimensional data. In this paper, we propose a novel unsupervised stain separation method called graph regularized sparse nonnegative matrix factorization (GSNMF). By considering the sparse prior of stain concentration together with manifold information from the high-dimensional image data, our method shows better performance in stain color deconvolution than existing unsupervised color deconvolution methods, especially in keeping connected texture information. To utilize the texture information, we construct a nearest neighbor graph between pixels within a spatial area of an image based on their distances using a heat kernel in lαβ space. The representation of a pixel in the stain density space is constrained to follow the feature distances of the pixel to the pixels in the neighborhood graph. Utilizing a color matrix transfer method with the stain concentrations found by our GSNMF method, the color normalization performance was also better than that of existing methods.
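
    The stain-separation step admits a compact numerical sketch. The following is a minimal illustration of graph-regularized sparse NMF with multiplicative updates, assuming the objective ||V - WH||_F^2 + λ·Tr(H L H^T) + β·||H||_1 with pixel-graph Laplacian L = D - A; the toy chain graph stands in for the paper's heat-kernel nearest-neighbor graph in lαβ space.

```python
# Minimal sketch of graph-regularized sparse NMF (GSNMF) multiplicative
# updates under the assumed objective
#   ||V - WH||_F^2 + lam * Tr(H L H^T) + beta * ||H||_1,  with L = D - A.
# The chain graph over pixels is a toy stand-in for a heat-kernel k-NN graph.
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 3, 100, 2             # channels, pixels, stains
V = rng.random((m, n))          # optical-density data, one column per pixel

A = np.zeros((n, n))            # toy pixel adjacency: a simple chain
i = np.arange(n - 1)
A[i, i + 1] = A[i + 1, i] = 1.0
D = np.diag(A.sum(axis=1))

W = rng.random((m, k))          # stain color basis
H = rng.random((k, n))          # stain concentrations
lam, beta, eps = 0.1, 0.01, 1e-9

for _ in range(200):
    W *= (V @ H.T) / (W @ H @ H.T + eps)
    H *= (W.T @ V + lam * H @ A) / (W.T @ W @ H + lam * H @ D + beta + eps)

print("residual:", np.linalg.norm(V - W @ H))
```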

  16. Parallel Gaussian elimination of a block tridiagonal matrix using multiple microcomputers

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.

    1989-01-01

    The solution of a block tridiagonal matrix using parallel processing is demonstrated. The multiprocessor system on which results were obtained and the software environment used to program that system are described. Theoretical partitioning and resource allocation for the Gaussian elimination method used to solve the matrix are discussed. The results obtained from running 1-, 2- and 3-processor versions of the block tridiagonal solver are presented. The PASCAL source code for these solvers is given in the appendix, and may be transportable to other shared-memory parallel processors provided that the synchronization routines are reproduced on the target system.
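
    For context, the serial core of such a solver is the block elimination and back-substitution below; this is a sketch of the block Thomas algorithm, not the paper's PASCAL multiprocessor implementation, whose partitioning across processors is not reproduced here.

```python
# Sketch of serial block tridiagonal Gaussian elimination (block Thomas
# algorithm). B[i] are diagonal blocks, A[i] sub-diagonal blocks (A[0] unused),
# C[i] super-diagonal blocks (C[-1] unused), d[i] right-hand-side blocks.
import numpy as np

def block_thomas(A, B, C, d):
    n = len(B)
    B = [b.astype(float).copy() for b in B]
    d = [v.astype(float).copy() for v in d]
    for i in range(1, n):                       # forward elimination
        m = A[i] @ np.linalg.inv(B[i - 1])
        B[i] = B[i] - m @ C[i - 1]
        d[i] = d[i] - m @ d[i - 1]
    x = [None] * n                              # back substitution
    x[-1] = np.linalg.solve(B[-1], d[-1])
    for i in range(n - 2, -1, -1):
        x[i] = np.linalg.solve(B[i], d[i] - C[i] @ x[i + 1])
    return np.concatenate(x)
```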

  17. Advanced Signal Processing Techniques Applied to Terahertz Inspections on Aerospace Foams

    NASA Technical Reports Server (NTRS)

    Trinh, Long Buu

    2009-01-01

    The space shuttle's external fuel tank is thermally insulated by closed-cell foams. However, natural voids composed of air and trapped gas are found as by-products when the foams are cured. Detection of foam voids and foam de-bonding is a formidable task owing to the small index-of-refraction contrast between foam and air (1.04:1). In the presence of a denser binding matrix agent that bonds two different foam materials, time-differentiation of filtered terahertz signals can be employed to magnify information arriving before the main substrate reflections. In the absence of a matrix binder, deconvolution of the filtered time-differential terahertz signals is performed to reduce the masking effects of antenna ringing. The goal is simply to increase the probability of void detection through image enhancement and to determine the depth of the void.

  18. Application of an improved minimum entropy deconvolution method for railway rolling element bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Cheng, Yao; Zhou, Ning; Zhang, Weihua; Wang, Zhiwei

    2018-07-01

    Minimum entropy deconvolution is a widely used tool in machinery fault diagnosis, because it enhances the impulsive component of the signal. The filter coefficients that largely determine the performance of minimum entropy deconvolution are calculated by an iterative procedure. This paper proposes an improved deconvolution method for the fault detection of rolling element bearings. The proposed method solves for the filter coefficients with the standard particle swarm optimization algorithm, assisted by a generalized spherical coordinate transformation. When optimizing the filter's performance for enhancing the impulses in fault diagnosis (namely, faulty rolling element bearings), the proposed method outperformed the classical minimum entropy deconvolution method. The proposed method was validated on simulated and experimental signals from railway bearings. In both the simulation and experimental studies, the proposed method delivered better deconvolution performance than the classical minimum entropy deconvolution method, especially in the case of low signal-to-noise ratio.
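
    The idea of choosing filter coefficients by optimization rather than by fixed-point iteration can be sketched as follows. Kurtosis maximization is the standard minimum-entropy criterion; here scipy's differential evolution is used only as a readily available stand-in for the paper's particle swarm optimizer with spherical coordinate transformation, and the signal is synthetic.

```python
# Sketch: deconvolution filter design as an optimization problem. The score
# is the kurtosis of the filtered signal (the minimum-entropy criterion);
# differential evolution stands in for the paper's particle swarm optimizer.
import numpy as np
from scipy.optimize import differential_evolution
from scipy.signal import lfilter

rng = np.random.default_rng(1)
impulses = np.zeros(2000)
impulses[::200] = 1.0                           # periodic fault impulses
x = lfilter([1.0], [1.0, -1.7, 0.8], impulses)  # transmission-path filtering
x += 0.2 * rng.standard_normal(x.size)          # measurement noise

def neg_kurtosis(taps):
    y = lfilter(taps, [1.0], x)
    y = y - y.mean()
    return -np.mean(y**4) / np.mean(y**2) ** 2

res = differential_evolution(neg_kurtosis, bounds=[(-1, 1)] * 12,
                             maxiter=50, polish=False, seed=0)
print("kurtosis of deconvolved signal:", -res.fun)
```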

  19. Partial Deconvolution with Inaccurate Blur Kernel.

    PubMed

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to an inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with an inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of the estimated blur kernel, and partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an EM algorithm is developed for estimating the partial map and recovering the latent sharp image alternately. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by an inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
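
    A minimal sketch of the core idea follows, under the assumption of a Wiener-style inverse and a simple magnitude threshold standing in for the paper's estimated partial map: deconvolve only the Fourier entries where the estimated kernel is reliable, and pass the observation through elsewhere.

```python
# Sketch of partial deconvolution: apply a Wiener-style inverse only on the
# Fourier entries where the estimated kernel is judged reliable (here by a
# simple magnitude threshold standing in for the paper's partial map).
import numpy as np

def partial_wiener(blurred, kernel_est, nsr=1e-2, tau=1e-3):
    K = np.fft.fft2(kernel_est, s=blurred.shape)
    B = np.fft.fft2(blurred)
    reliable = np.abs(K) > tau
    wiener = np.conj(K) / (np.abs(K) ** 2 + nsr)
    X = np.where(reliable, wiener * B, B)   # deconvolve only where reliable
    return np.real(np.fft.ifft2(X))
```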

  20. Multi-color incomplete Cholesky conjugate gradient methods for vector computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poole, E.L.

    1986-01-01

    This research is concerned with the solution on vector computers of linear systems of equations Ax = b, where A is a large, sparse, symmetric positive definite matrix with non-zero elements lying only along a few diagonals of the matrix. The system is solved using the incomplete Cholesky conjugate gradient method (ICCG). Multi-color orderings of the unknowns in the linear system are used to obtain p-color matrices for which a no-fill block ICCG method is implemented on the CYBER 205 with O(N/p)-length vector operations in both the decomposition of A and, more importantly, in the forward and back solves necessary at each iteration of the method (N is the number of unknowns and p is a small constant). A p-colored matrix is a matrix that can be partitioned into a p x p block matrix where the diagonal blocks are diagonal matrices. The matrix is stored by diagonals, and matrix multiplication by diagonals is used to carry out the decomposition of A and the forward and back solves. Additionally, if the vectors across adjacent blocks line up, then some of the overhead associated with vector startups can be eliminated in the matrix-vector multiplication necessary at each conjugate gradient iteration. Necessary and sufficient conditions are given to determine which multi-color orderings of the unknowns correspond to p-color matrices, and a process is indicated for choosing multi-color orderings.

  1. Deconvolution method for accurate determination of overlapping peak areas in chromatograms.

    PubMed

    Nelson, T J

    1991-12-20

    A method is described for deconvoluting chromatograms which contain overlapping peaks. Parameters can be selected to ensure that attenuation of peak areas is uniform over any desired range of peak widths. A simple extension of the method greatly reduces the negative overshoot frequently encountered with deconvolutions. The deconvoluted chromatograms are suitable for integration by conventional methods.

  2. Fast, exact k-space sample density compensation for trajectories composed of rotationally symmetric segments, and the SNR-optimized image reconstruction from non-Cartesian samples.

    PubMed

    Mitsouras, Dimitris; Mulkern, Robert V; Rybicki, Frank J

    2008-08-01

    A recently developed method for exact density compensation of nonuniformly arranged samples relies on the analytically known cross-correlations of Fourier basis functions corresponding to the traced k-space trajectory. This method produces a linear system whose solution represents compensated samples that normalize the contribution of each independent element of information that can be expressed by the underlying trajectory. Unfortunately, linear-system-based density compensation approaches quickly become computationally demanding with increasing number of samples (i.e., image resolution). Here, it is shown that when a trajectory is composed of rotationally symmetric interleaves, such as spiral and PROPELLER trajectories, this cross-correlation method leads to a highly simplified system of equations. Specifically, it is shown that the system matrix is circulant block-Toeplitz, so that the linear system is easily block-diagonalized. The method is described and demonstrated for 32-way interleaved spiral trajectories designed for 256 image matrices; samples are compensated noniteratively in a few seconds by solving the small independent block-diagonalized linear systems in parallel. Because the method is exact and considers all the interactions between all acquired samples, up to a 10% reduction in reconstruction error concurrently with an up to 30% increase in signal-to-noise ratio is achieved compared to standard density compensation methods.
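
    The computational payoff of such structure is visible already in the scalar analogue: a circulant system is diagonalized by the DFT and solves in O(N log N). The sketch below shows only that fact, not the paper's full block-diagonalization, which applies a DFT across the interleaf index to obtain small independent blocks.

```python
# Sketch: a circulant system C x = b solves in O(N log N) because the DFT
# diagonalizes C; its eigenvalues are the DFT of the first column.
import numpy as np

rng = np.random.default_rng(2)
c = rng.random(256)             # first column of the circulant matrix C
b = rng.random(256)

x = np.real(np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)))

C = np.column_stack([np.roll(c, k) for k in range(c.size)])  # dense check
print(np.allclose(C @ x, b))    # -> True
```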

  3. Atmospheric Blocking and Atlantic Multi-Decadal Ocean Variability

    NASA Technical Reports Server (NTRS)

    Haekkinen, Sirpa; Rhines, Peter B.; Worthlen, Denise L.

    2011-01-01

    Based on the 20th-century atmospheric reanalysis, winters with more frequent blocking, in a band of blocked latitudes from Greenland to Western Europe, are found to persist over several decades and correspond to a warm North Atlantic Ocean, in phase with Atlantic multi-decadal ocean variability. Atmospheric blocking over the northern North Atlantic, which involves isolation of large regions of air from the westerly circulation for 5 days or more, fundamentally influences the ocean circulation and upper-ocean properties by impacting wind patterns. Winters with clusters of more frequent blocking between Greenland and Western Europe correspond to a warmer, more saline subpolar ocean. The correspondence between blocked westerly winds and a warm ocean holds in recent decadal episodes (especially 1996-2010). It also describes much longer-timescale Atlantic multidecadal ocean variability (AMV), including the extreme, pre-greenhouse-gas northern warming of the 1930s-1960s. The space-time structure of the wind forcing associated with a blocked regime leads to weaker ocean gyres and weaker heat exchange, both of which contribute to the warm phase of AMV.

  4. Solving periodic block tridiagonal systems using the Sherman-Morrison-Woodbury formula

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice

    1989-01-01

    Many algorithms for solving the Navier-Stokes equations require the solution of periodic block tridiagonal systems of equations. By applying a splitting to the matrix representing this system of equations, it may first be reduced to a block tridiagonal matrix plus an outer product of two block vectors. The Sherman-Morrison-Woodbury formula is then applied. The algorithm thus reduces a periodic banded system to a non-periodic banded system with additional right-hand sides and is of higher efficiency than standard Thomas algorithm/LU decompositions.
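
    The same splitting can be sketched in the scalar case: write the periodic (cyclic) tridiagonal matrix as a tridiagonal matrix plus a rank-one outer product, perform two tridiagonal solves, and combine them with the Sherman-Morrison formula. The paper's block version replaces scalars with blocks via Sherman-Morrison-Woodbury; the code below is the standard scalar recipe, not the paper's implementation.

```python
# Sketch of the splitting in the scalar case: periodic tridiagonal matrix =
# tridiagonal + u v^T, solved with two banded solves plus Sherman-Morrison.
import numpy as np
from scipy.linalg import solve_banded

def cyclic_thomas(a, b, c, d):
    """Diagonal b, sub-diagonal a (a[0] is the corner A[0, n-1]),
    super-diagonal c (c[-1] is the corner A[n-1, 0])."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    c, d = np.asarray(c, float), np.asarray(d, float)
    n = b.size
    gamma = -b[0]
    bb = b.copy()
    bb[0] -= gamma
    bb[-1] -= a[0] * c[-1] / gamma
    ab = np.zeros((3, n))
    ab[0, 1:], ab[1], ab[2, :-1] = c[:-1], bb, a[1:]
    u = np.zeros(n); u[0], u[-1] = gamma, c[-1]
    y = solve_banded((1, 1), ab, d)
    z = solve_banded((1, 1), ab, u)
    fact = (y[0] + a[0] * y[-1] / gamma) / (1.0 + z[0] + a[0] * z[-1] / gamma)
    return y - fact * z

# Self-check on a small periodic system:
n = 6
a = np.ones(n); b = 4.0 * np.ones(n); c = np.ones(n)
A = np.diag(b) + np.diag(c[:-1], 1) + np.diag(a[1:], -1)
A[0, -1], A[-1, 0] = a[0], c[-1]
x_true = np.arange(1.0, n + 1)
print(np.allclose(cyclic_thomas(a, b, c, A @ x_true), x_true))  # -> True
```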

  5. Bayesian block-diagonal variable selection and model averaging

    PubMed Central

    Papaspiliopoulos, O.; Rossell, D.

    2018-01-01

    Summary We propose a scalable algorithmic framework for exact Bayesian variable selection and model averaging in linear models under the assumption that the Gram matrix is block-diagonal, and as a heuristic for exploring the model space for general designs. In block-diagonal designs our approach returns the most probable model of any given size without resorting to numerical integration. The algorithm also provides a novel and efficient solution to the frequentist best subset selection problem for block-diagonal designs. Posterior probabilities for any number of models are obtained by evaluating a single one-dimensional integral, and other quantities of interest such as variable inclusion probabilities and model-averaged regression estimates are obtained by an adaptive, deterministic one-dimensional numerical integration. The overall computational cost scales linearly with the number of blocks, which can be processed in parallel, and exponentially with the block size, rendering it most adequate in situations where predictors are organized in many moderately-sized blocks. For general designs, we approximate the Gram matrix by a block-diagonal matrix using spectral clustering and propose an iterative algorithm that capitalizes on the block-diagonal algorithms to explore efficiently the model space. All methods proposed in this paper are implemented in the R library mombf. PMID:29861501

  6. Self-assembly Morphology and Crystallinity Control of Di-block Copolymer Inspired by Spider Silk

    NASA Astrophysics Data System (ADS)

    Huang, Wenwen; Krishnaji, Sreevidhya; Kaplan, David; Cebe, Peggy

    2012-02-01

    To obtain a fuller understanding of the origin of self-assembly behavior, and thus be able to control the morphology of biomaterials with well-defined amino acid sequences for tissue regeneration and drug delivery, we created a family of synthetic silk-based block copolymers inspired by the genetic sequences found in spider dragline, HABn and HBAn (n = 1, 2, 3, 6), where B = hydrophilic block, A = hydrophobic block, and H is a histidine tag. We assessed the secondary structure of water-cast films by Fourier transform infrared spectroscopy (FTIR). The crystallinity was determined by Fourier self-deconvolution of the amide I spectra and confirmed by wide-angle X-ray diffraction (WAXD). Results indicate that we can control the self-assembled morphology and the crystallinity by varying the block length, and that a minimum of three A-blocks is required to form beta-sheet crystalline regions in water-cast spider silk block copolymers. The morphology and crystallinity can also be tuned by annealing. Thermal properties of water-cast films and films annealed at 120 °C were determined by differential scanning calorimetry and thermogravimetry. The sample films were also treated with 1,1,1,3,3,3-hexafluoro-2-propanol (HFIP) to obtain wholly amorphous samples, and crystallized by exposure to methanol. Using scanning and transmission electron microscopies, we observe that fibrillar networks and hollow micelles are formed in water-cast and methanol-cast samples, but not in samples cast from HFIP.

  7. Entanglement classification in the noninteracting Fermi gas

    NASA Astrophysics Data System (ADS)

    Jafarizadeh, M. A.; Eghbalifam, F.; Nami, S.; Yahyavi, M.

    In this paper, the classification of entanglement shared among the spins of localized fermions in the noninteracting Fermi gas is studied. It is proven that the Fermi gas density matrix is block diagonal in the basis of the projection operators onto the irreducible representations of the symmetric group Sn. Every block of the density matrix is in the form of the direct product of a matrix and an identity matrix, so it is convenient to study entanglement in every block of the density matrix separately. The bases of the corresponding Hilbert spaces are identified from the Schur-Weyl duality theorem. It can also be shown that the symmetric part of the density matrix is fully separable, and that the entanglement measure introduced in Eltschka et al. [New J. Phys. 10, 043104 (2008)] and Guhne et al. [New J. Phys. 7, 229 (2005)] is zero for the even-n qubit Fermi gas density matrix. Focusing on the three-spin reduced density matrix, the entanglement classes have been investigated. For three-qubit states there is an entanglement measure called the 3-tangle. It can be shown that the 3-tangle is zero for the three-qubit density matrix, yet the density matrix is not biseparable for all possible values of its parameters, and its eigenvectors are in the form of W-states. An entanglement witness for detecting non-separable states and an entanglement witness for detecting non-biseparable states have then been introduced for the three-qubit density matrix using a convex optimization problem. Finally, the four-spin reduced density matrix has been investigated by restricting the density matrix to the irreducible representations of Sn. The restrictions of the density matrix to the subspaces of the irreducible representations Ssym, S3,1 and S2,2 are denoted ρsym, ρ3,1 and ρ2,2, respectively. It has been shown that some highly entangled classes (using the results of Miyake [Phys. Rev. A 67, 012108 (2003)] for entanglement classification) do not exist in the blocks ρ3,1 and ρ2,2, so these classes do not exist in the total Fermi gas density matrix.

  8. Improvement of UV electroluminescence of n-ZnO/p-GaN heterojunction LED by ZnS interlayer.

    PubMed

    Zhang, Lichun; Li, Qingshan; Shang, Liang; Wang, Feifei; Qu, Chong; Zhao, Fengzhou

    2013-07-15

    n-ZnO/p-GaN heterojunction light-emitting diodes with different interfacial layers were fabricated by pulsed laser deposition. The electroluminescence (EL) spectra of the n-ZnO/p-GaN diodes display a broad blue-violet emission centered at 430 nm, whereas the n-ZnO/ZnS/p-GaN and n-ZnO/AlN/p-GaN devices exhibit ultraviolet (UV) emission. Compared with the AlN interlayer, which blocks both electrons and holes at the heterointerface, a ZnS intermediate layer lowers the barrier height for holes while maintaining effective electron blocking. Thus, an improved UV EL intensity and a low turn-on voltage (~5 V) were obtained. The results were studied by peak deconvolution with Gaussian functions and were discussed using the band diagram of the heterojunctions.

  9. Deconvolution of continuous paleomagnetic data from pass-through magnetometer: A new algorithm to restore geomagnetic and environmental information based on realistic optimization

    NASA Astrophysics Data System (ADS)

    Oda, Hirokuni; Xuan, Chuang

    2014-10-01

    The development of pass-through superconducting rock magnetometers (SRM) has greatly promoted the collection of paleomagnetic data from continuous long-core samples. The output of a pass-through measurement is smoothed and distorted due to convolution of the magnetization with the magnetometer sensor response. Although several studies could restore the high-resolution paleomagnetic signal through deconvolution of pass-through measurements, difficulties in accurately measuring the magnetometer sensor response have hindered the application of deconvolution. We acquired a reliable sensor response of an SRM at Oregon State University based on repeated measurements of a precisely fabricated magnetic point source. In addition, we present an improved deconvolution algorithm based on Akaike's Bayesian Information Criterion (ABIC) minimization, incorporating new parameters to account for errors in sample measurement position and length. The new algorithm was tested using synthetic data constructed by convolving a "true" paleomagnetic signal containing an "excursion" with the sensor response. Realistic noise was added to the synthetic measurement using a Monte Carlo method based on the measurement noise distribution acquired from 200 repeated measurements of a u-channel sample. Deconvolution of 1000 synthetic measurements with realistic noise closely resembles the "true" magnetization, and successfully restored fine-scale magnetization variations including the "excursion." Our analyses show that inaccuracy in sample measurement position and length significantly affects the deconvolution estimate, and can be resolved using the new deconvolution algorithm. Optimized deconvolution of 20 repeated measurements of a u-channel sample yielded highly consistent deconvolution results and estimates of error in sample measurement position and length, demonstrating the reliability of the new deconvolution algorithm for real pass-through measurements.
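
    The forward model here is a convolution of the magnetization with the sensor response, so a generic regularized least-squares inverse illustrates the setting. The Tikhonov sketch below, with a second-difference smoothness penalty and entirely synthetic signals, is a simple stand-in for the ABIC-optimized algorithm described above, not a reproduction of it.

```python
# Sketch of regularized deconvolution for a pass-through measurement:
# d = G m + noise, with G the convolution matrix of the sensor response.
# Tikhonov smoothing stands in for the ABIC-optimized machinery above.
import numpy as np
from scipy.linalg import convolution_matrix

n = 200
z = np.linspace(0.0, 1.0, n)
m_true = np.sin(8 * np.pi * z)
m_true[90:95] = -2.0                                   # a short "excursion"
resp = np.exp(-0.5 * (np.arange(-15, 16) / 4.0) ** 2)  # sensor response
resp /= resp.sum()

G = convolution_matrix(resp, n, mode='same')
d = G @ m_true + 0.01 * np.random.default_rng(3).standard_normal(n)

L = np.diff(np.eye(n), 2, axis=0)   # second-difference roughness penalty
lam = 0.05                          # data fit vs. smoothness trade-off
m_est = np.linalg.solve(G.T @ G + lam * L.T @ L, G.T @ d)
```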

  10. UDECON: deconvolution optimization software for restoring high-resolution records from pass-through paleomagnetic measurements

    NASA Astrophysics Data System (ADS)

    Xuan, Chuang; Oda, Hirokuni

    2015-11-01

    The rapid accumulation of continuous paleomagnetic and rock magnetic records acquired from pass-through measurements on superconducting rock magnetometers (SRM) has greatly contributed to our understanding of the paleomagnetic field and paleo-environment. Pass-through measurements are inevitably smoothed and altered by the convolution effect of the SRM sensor response, and deconvolution is needed to restore high-resolution paleomagnetic and environmental signals. Although various deconvolution algorithms have been developed, the lack of easy-to-use software has hindered the practical application of deconvolution. Here, we present the standalone graphical software UDECON as a convenient tool to perform optimized deconvolution for pass-through paleomagnetic measurements using the algorithm recently developed by Oda and Xuan (Geochem Geophys Geosyst 15:3907-3924, 2014). With the preparation of a format file, UDECON can directly read pass-through paleomagnetic measurement files collected at different laboratories. After the SRM sensor response is determined and loaded into the software, optimized deconvolution can be conducted using two different approaches (i.e., "Grid search" and "Simplex method") with adjustable initial values or ranges for smoothness, corrections of sample length, and shifts in measurement position. UDECON provides a suite of tools to conveniently view and check various types of original measurement and deconvolution data. Multiple steps of measurement and/or deconvolution data can be compared simultaneously to check consistency and to guide further deconvolution optimization. Deconvolved data, together with the loaded original measurement and SRM sensor response data, can be saved and reloaded for further treatment in UDECON. Users can also export the optimized deconvolution data to a text file for analysis in other software.

  11. A neural network approach for the blind deconvolution of turbulent flows

    NASA Astrophysics Data System (ADS)

    Maulik, R.; San, O.

    2017-11-01

    We present a single-layer feedforward artificial neural network architecture, trained through a supervised learning approach, for the deconvolution of flow variables from their coarse-grained computations such as those encountered in large eddy simulations. We stress that the deconvolution procedure proposed in this investigation is blind, i.e. the deconvolved field is computed without any pre-existing information about the filtering procedure or kernel. This may be conceptually contrasted with the celebrated approximate deconvolution approaches, where a filter shape is predefined for an iterative deconvolution process. We demonstrate that the proposed blind deconvolution network performs exceptionally well in a priori testing of both two-dimensional Kraichnan and three-dimensional Kolmogorov turbulence, and shows promise in forming the backbone of a physics-augmented data-driven closure for the Navier-Stokes equations.
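
    A toy one-dimensional version of this supervised setup can be written in a few lines: train a small network to map a stencil of filtered samples to the underlying unfiltered sample. The field, filter width, architecture, and training schedule below are illustrative choices, not the paper's.

```python
# Toy 1-D version of the supervised deconvolution setup: learn a map from a
# stencil of filtered samples to the unfiltered sample beneath it.
# Field, filter, architecture, and training schedule are illustrative only.
import numpy as np

rng = np.random.default_rng(5)
u = np.cumsum(rng.standard_normal(5000))            # "true" 1-D field
ub = np.convolve(u, np.ones(7) / 7.0, mode='same')  # coarse-grained field
mu, sd = ub.mean(), ub.std()
u, ub = (u - mu) / sd, (ub - mu) / sd               # normalize both fields

w = 9
X = np.lib.stride_tricks.sliding_window_view(ub, w)  # input stencils
y = u[w // 2 : w // 2 + X.shape[0]]                  # target samples

W1 = 0.1 * rng.standard_normal((w, 40)); b1 = np.zeros(40)
W2 = 0.1 * rng.standard_normal((40, 1)); b2 = np.zeros(1)
lr = 1e-2
for _ in range(2000):                                # plain gradient descent
    h = np.tanh(X @ W1 + b1)
    err = (h @ W2 + b2).ravel() - y
    gW2 = h.T @ err[:, None] / y.size
    gh = err[:, None] @ W2.T * (1.0 - h**2)
    W2 -= lr * gW2; b2 -= lr * err.mean(keepdims=True)
    W1 -= lr * X.T @ gh / y.size; b1 -= lr * gh.mean(axis=0)
print("training MSE:", np.mean(err**2))
```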

  12. A Geophysical Inversion Model Enhancement Technique Based on the Blind Deconvolution

    NASA Astrophysics Data System (ADS)

    Zuo, B.; Hu, X.; Li, H.

    2011-12-01

    A model-enhancement technique is proposed to sharpen the edges and details of geophysical inversion models without introducing any additional information. First, the theoretical correctness of the proposed technique is discussed: an inversion MRM (model resolution matrix) convolution approximating the PSF (point spread function) is designed to demonstrate the correctness of the deconvolution enhancement method. Then, a total-variation-regularized blind deconvolution enhancement algorithm for geophysical inversion models is proposed. In previous research, Oldenburg et al. demonstrated the connection between the PSF and the geophysical inverse solution, and Alumbaugh et al. proposed that more information could be provided by the PSF by returning to the idea of it behaving as an averaging or low-pass filter. We treat the PSF as a low-pass filter and enhance the inversion model based on the theory of the PSF convolution approximation. Both a 1D linear and a 2D magnetotelluric inversion example are used to analyze the validity of the theory and the algorithm. For the 1D linear inversion problem, the convolution approximation error is only 0.15%. A 2D synthetic model enhancement experiment is also presented: after the deconvolution enhancement, the edges of the conductive prism and the resistive host become sharper, the enhanced result is closer to the actual model than the original inversion model according to the numerical statistical analysis, and artifacts in the inversion model are suppressed. The overall precision of the model increases by 75%. All of the experiments show that the structural details and the numerical precision of the inversion model are significantly improved, especially in the anomalous region. The correlation coefficient between the enhanced inversion model and the actual model is shown in Fig. 1, which illustrates that more of the detailed structure of the actual model is recovered by the proposed enhancement algorithm. The proposed enhancement method can help us gain a clearer insight into inversion results and make better-informed decisions.

  13. Crowded field photometry with deconvolved images.

    NASA Astrophysics Data System (ADS)

    Linde, P.; Spännare, S.

    A local implementation of the Lucy-Richardson algorithm has been used to deconvolve a set of crowded stellar field images. The effects of deconvolution on detection limits as well as on photometric and astrometric properties have been investigated as a function of the number of deconvolution iterations. Results show that deconvolution improves detection of faint stars, although artifacts are also found. Deconvolution makes more stars measurable without significant degradation of positional accuracy. The photometric precision is affected by deconvolution in several ways: errors due to unresolved images are notably reduced, while flux redistribution between stars and background increases the errors.
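
    For reference, the Lucy-Richardson iteration itself is compact. This one-dimensional numpy version (the stellar-field case is the same with two-dimensional convolutions) assumes a PSF normalized to unit sum; the iteration count trades sharpening against the artifact growth noted above.

```python
# The Lucy-Richardson iteration in 1-D numpy; psf is assumed normalized to
# unit sum, and more iterations sharpen at the risk of amplifying artifacts.
import numpy as np

def richardson_lucy(image, psf, iterations=50):
    estimate = np.full_like(image, image.mean(), dtype=float)
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        conv = np.convolve(estimate, psf, mode='same')
        ratio = image / np.maximum(conv, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode='same')
    return estimate
```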

  14. Heat Shield Employing Cured Thermal Protection Material Blocks Bonded in a Large-Cell Honeycomb Matrix

    NASA Technical Reports Server (NTRS)

    Zell, Peter

    2012-01-01

    A document describes a new way to integrate thermal protection materials on external surfaces of vehicles that experience the severe heating environments of atmospheric entry from space. Cured blocks of thermal protection materials are bonded into a compatible, large-cell honeycomb matrix that can be applied on the external surfaces of the vehicles. The honeycomb matrix cell size, and corresponding thermal protection material block size, is envisioned to be between 1 and 4 in. (~2.5 and 10 cm) on a side, with a depth as required to protect the vehicle. The cell wall is thin, between 0.01 and 0.10 in. (~0.025 and 0.25 cm). A key feature is that the honeycomb matrix is attached to the vehicle's unprotected external surface prior to insertion of the thermal protection material blocks. The attachment integrity of the honeycomb can then be confirmed over the full range of temperature and loads that the vehicle will experience. Another key feature of the innovation is the use of uniform-sized thermal protection material blocks. This feature allows for the mass production of these blocks at a size that is convenient for quality-control inspection. The honeycomb that receives the blocks must have cells with a compatible set of internal dimensions. The innovation involves the use of a faceted subsurface under the honeycomb. This provides a predictable surface with perpendicular cell walls for the majority of the blocks. Some cells will have positive tapers to accommodate mitered joints between honeycomb panels on each facet of the subsurface. These tapered cells have dimensions that may fall within the boundaries of the uniform-sized blocks.

  15. Compressed sensing of hyperspectral images based on scrambled block Hadamard ensemble

    NASA Astrophysics Data System (ADS)

    Wang, Li; Feng, Yan

    2016-11-01

    A fast measurement matrix based on scrambled block Hadamard ensemble for compressed sensing (CS) of hyperspectral images (HSI) is investigated. The proposed measurement matrix offers several attractive features. First, the proposed measurement matrix possesses Gaussian behavior, which illustrates that the matrix is universal and requires a near-optimal number of samples for exact reconstruction. In addition, it could be easily implemented in the optical domain due to its integer-valued elements. More importantly, the measurement matrix only needs small memory for storage in the sampling process. Experimental results on HSIs reveal that the reconstruction performance of the proposed measurement matrix is comparable or better than Gaussian matrix and Bernoulli matrix using different reconstruction algorithms while consuming less computational time. The proposed matrix could be used in CS of HSI, which would save the storage memory on board, improve the sampling efficiency, and ameliorate the reconstruction quality.
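
    The measurement operator can be sketched directly: permute (scramble) the signal, apply a small Hadamard transform block-wise, and keep a random subset of the coefficients. The sizes below are illustrative, not taken from the paper's experiments.

```python
# Sketch of a scrambled block Hadamard measurement: permute the signal,
# apply a small Hadamard transform block-wise, keep a random subset of
# coefficients. The +/-1 blocks make hardware implementation and storage cheap.
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(4)
n, block, m = 1024, 32, 256           # signal length, block size, measurements
H = hadamard(block)

x = rng.standard_normal(n)            # e.g., one flattened HSI band
xp = x[rng.permutation(n)]            # scrambling
coeffs = (xp.reshape(-1, block) @ H.T).ravel()  # block-wise transform
y = coeffs[rng.choice(n, m, replace=False)]     # m compressive measurements
```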

  16. Micron-scale roughness of volcanic surfaces from thermal infrared spectroscopy and scanning electron microscopy

    NASA Astrophysics Data System (ADS)

    Carter, Adam J.; Ramsey, Michael S.; Durant, Adam J.; Skilling, Ian P.; Wolfe, Amy

    2009-02-01

    Textural characteristics of recently emplaced volcanic materials provide information on the degassing history, volatile content, and future explosive activity of volcanoes. Thermal infrared (TIR) remote sensing has been used to derive the micron-scale roughness (i.e., surface vesicularity) of lavas using a two-component (glass plus blackbody) spectral deconvolution model. We apply and test this approach on TIR data of pyroclastic flow (PF) deposits for the first time. Samples from two PF deposits (January 2005: block-rich and March 2000: ash-rich) were collected at Bezymianny Volcano (Russia) and analyzed using (1) TIR emission spectroscopy, (2) scanning electron microscope (SEM)-derived roughness (profiling), (3) SEM-derived surface vesicularity (imaging), and (4) thin section observations. Results from SEM roughness (0.9-2.8 μm) and SEM vesicularity (18-26%) showed a positive correlation. These were compared to the deconvolution results from the laboratory and spaceborne spectra, as well as to field-derived percentages of the block and ash. The spaceborne results were within 5% of the laboratory results and showed a positive correlation. However, a negative correlation between the SEM and spectral results was observed and was likely due to a combination of factors; an incorrect glass end-member, particle size effects, and subsequent weathering/reworking of the PF deposits. Despite these differences, this work shows that microscopic textural heterogeneities on PF deposits can be detected with TIR remote sensing using a technique similar to that used for lavas, but the results must be carefully interpreted. If applied correctly, it could be an important tool to map recent PF deposits and infer the causative eruption style/mechanism.
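
    The two-component deconvolution described above amounts to nonnegative linear unmixing of each measured spectrum against a glass end-member and a flat blackbody, with the blackbody fraction read as micron-scale surface vesicularity. In the sketch below, both end-members and the mixed spectrum are synthetic placeholders.

```python
# Sketch of the two-component spectral deconvolution: nonnegative unmixing of
# a measured emissivity spectrum into a glass end-member plus a blackbody;
# all spectra below are synthetic placeholders.
import numpy as np
from scipy.optimize import nnls

wl = np.linspace(8.0, 12.0, 80)                   # wavelength, micrometers
glass = 0.9 - 0.1 * np.exp(-((wl - 9.2) ** 2))    # toy glass end-member
blackbody = np.ones_like(wl)                      # flat unit emissivity

measured = 0.6 * glass + 0.4 * blackbody          # synthetic mixture
coeffs, _ = nnls(np.column_stack([glass, blackbody]), measured)
print("glass / blackbody (vesicularity) fractions:", coeffs)
```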

  17. Improving ground-penetrating radar data in sedimentary rocks using deterministic deconvolution

    USGS Publications Warehouse

    Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.; Byrnes, A.P.

    2003-01-01

    Resolution is key to confidently identifying unique geologic features using ground-penetrating radar (GPR) data. Source wavelet "ringing" (related to bandwidth) in a GPR section limits resolution because of wavelet interference, and can smear reflections in time and/or space. The resultant potential for misinterpretation limits the usefulness of GPR. Deconvolution offers the ability to compress the source wavelet and improve temporal resolution. Unlike statistical deconvolution, deterministic deconvolution is mathematically simple and stable while providing the highest possible resolution because it uses the source wavelet unique to the specific radar equipment. Source wavelets generated in, transmitted through and acquired from air allow successful application of deterministic approaches to wavelet suppression. We demonstrate the validity of using a source wavelet acquired in air as the operator for deterministic deconvolution in a field application using "400-MHz" antennas at a quarry site characterized by interbedded carbonates with shale partings. We collected GPR data on a bench adjacent to cleanly exposed quarry faces in which we placed conductive rods to provide conclusive groundtruth for this approach to deconvolution. The best deconvolution results, which are confirmed by the conductive rods for the 400-MHz antenna tests, were observed for wavelets acquired when the transmitter and receiver were separated by 0.3 m. Applying deterministic deconvolution to GPR data collected in sedimentary strata at our study site resulted in an improvement in resolution (50%) and improved spatial location (0.10-0.15 m) of geologic features compared to the same data processed without deterministic deconvolution. The effectiveness of deterministic deconvolution for increased resolution and spatial accuracy of specific geologic features is further demonstrated by comparing results of deconvolved data with nondeconvolved data acquired along a 30-m transect immediately adjacent to a fresh quarry face. The results at this site support using deterministic deconvolution, which incorporates the GPR instrument's unique source wavelet, as a standard part of routine GPR data processing. ?? 2003 Elsevier B.V. All rights reserved.
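
    Deterministic deconvolution reduces to spectral division of each trace by the measured (air-acquired) source wavelet. The water-level floor in the sketch below is a common stabilization assumed here for illustration, not a detail taken from the study.

```python
# Sketch of deterministic deconvolution: spectral division of each trace by
# the air-acquired source wavelet, stabilized with a water-level floor (the
# floor and its level are an assumed stabilization, not from the paper).
import numpy as np

def deterministic_deconv(trace, wavelet, water_level=0.05):
    n = len(trace)
    T = np.fft.rfft(trace, n)
    W = np.fft.rfft(wavelet, n)
    floor = water_level * np.abs(W).max()
    W = np.where(np.abs(W) < floor, floor * np.exp(1j * np.angle(W)), W)
    return np.fft.irfft(T / W, n)
```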

  18. Correction for frequency-dependent hydrophone response to nonlinear pressure waves using complex deconvolution and rarefactional filtering: application with fiber optic hydrophones.

    PubMed

    Wear, Keith; Liu, Yunbo; Gammell, Paul M; Maruvada, Subha; Harris, Gerald R

    2015-01-01

    Nonlinear acoustic signals contain significant energy at many harmonic frequencies. For many applications, the sensitivity (frequency response) of a hydrophone will not be uniform over such a broad spectrum. In a continuation of a previous investigation of deconvolution methodology, deconvolution (implemented in the frequency domain as an inverse filter computed from the frequency-dependent hydrophone sensitivity) was investigated for improving the accuracy and precision of nonlinear acoustic output measurements. Time-delay spectrometry was used to measure complex sensitivities for six fiber-optic hydrophones. The hydrophones were then used to measure a pressure wave with rich harmonic content. Spectral asymmetry between compressional and rarefactional segments was exploited to design filters used in conjunction with deconvolution. Complex deconvolution reduced the mean bias (for the six fiber-optic hydrophones) from 163% to 24% for peak compressional pressure (p+), from 113% to 15% for peak rarefactional pressure (p-), and from 126% to 29% for pulse intensity integral (PII). Complex deconvolution reduced the mean coefficient of variation (COV) from 18% to 11% (p+), 53% to 11% (p-), and 20% to 16% (PII). Deconvolution based on sensitivity magnitude or the minimum-phase model also resulted in significant reductions in mean bias and COV of acoustic output parameters, but was less effective than direct complex deconvolution for p+ and p-. Therefore, deconvolution with appropriate filtering facilitates reliable nonlinear acoustic output measurements using hydrophones with frequency-dependent sensitivity.

  19. A hybrid deconvolution approach for estimation of in vivo non-displaceable binding for brain PET targets without a reference region

    PubMed Central

    Mann, J. John; Ogden, R. Todd

    2017-01-01

    Background and aim Estimation of a PET tracer’s non-displaceable distribution volume (VND) is required for quantification of specific binding to its target of interest. VND is generally assumed to be comparable brain-wide and is determined either from a reference region devoid of the target, often not available for many tracers and targets, or by imaging each subject before and after blocking the target with another molecule that has high affinity for the target, which is cumbersome and involves additional radiation exposure. Here we propose, and validate for the tracers [11C]DASB and [11C]CUMI-101, a new data-driven hybrid deconvolution approach (HYDECA) that determines VND at the individual level without requiring either a reference region or a blocking study. Methods HYDECA requires the tracer metabolite-corrected concentration curve in blood plasma and uses a singular value decomposition to estimate the impulse response function across several brain regions from measured time activity curves. HYDECA decomposes each region’s impulse response function into the sum of a parametric non-displaceable component, which is a function of VND, assumed common across regions, and a nonparametric specific component. These two components differentially contribute to each impulse response function. Different regions show different contributions of the two components, and HYDECA examines data across regions to find a suitable common VND. HYDECA implementation requires determination of two tuning parameters, and we propose two strategies for objectively selecting these parameters for a given tracer: using data from blocking studies, and realistic simulations of the tracer. Using available test-retest data, we compare HYDECA estimates of VND and binding potentials to those obtained based on VND estimated using a purported reference region. Results For [11C]DASB and [11C]CUMI-101, we find that regardless of the strategy used to optimize the tuning parameters, HYDECA provides considerably less biased estimates of VND than those obtained, as is commonly done, using a non-ideal reference region. HYDECA test-retest reproducibility is comparable to that obtained using a VND determined from a non-ideal reference region, when considering the binding potentials BPP and BPND. Conclusions HYDECA can provide subject-specific estimates of VND without requiring a blocking study for tracers and targets for which a valid reference region does not exist. PMID:28459878

  20. Systematic sparse matrix error control for linear scaling electronic structure calculations.

    PubMed

    Rubensson, Emanuel H; Sałek, Paweł

    2005-11-30

    Efficient truncation criteria used in multiatom blocked sparse matrix operations for ab initio calculations are proposed. As system size increases, so does the need to stay on top of errors while still achieving high performance. A variant of blocked sparse matrix algebra that achieves strict error control with good performance is proposed. The idea presented is that the condition for dropping a certain submatrix should depend not only on the magnitude of that particular submatrix, but also on which other submatrices are dropped; the decision to remove a certain submatrix is based on the contribution the removal would make to the error in the chosen norm. We study the effect of an accumulated truncation error in iterative algorithms like trace-correcting density matrix purification, and present one way to reduce the initial exponential growth of this error. The presented error control for a sparse blocked matrix toolbox allows optimal performance to be achieved by performing only the operations needed to maintain the requested level of accuracy.
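
    The dropping rule can be sketched as follows, assuming the Frobenius norm and submatrices with disjoint support, so that the squared norms of dropped blocks add: drop the smallest blocks first, as long as the accumulated truncation error stays within the requested tolerance.

```python
# Sketch of norm-based truncation: drop the smallest submatrices first while
# the accumulated truncation error stays within the requested tolerance.
# Assumes the Frobenius norm, where squared norms of disjoint blocks add.
import numpy as np

def truncate_blocks(blocks, tol):
    """blocks: dict mapping (i, j) -> ndarray submatrix; returns kept blocks."""
    by_norm = sorted((np.linalg.norm(sub), key) for key, sub in blocks.items())
    dropped, acc2 = set(), 0.0
    for nrm, key in by_norm:
        if acc2 + nrm**2 > tol**2:      # total error = sqrt(sum of squares)
            break
        acc2 += nrm**2
        dropped.add(key)
    return {k: sub for k, sub in blocks.items() if k not in dropped}
```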

  1. Possible mechanisms for four regimes associated with cold events over East Asia

    NASA Astrophysics Data System (ADS)

    Yang, Zifan; Huang, Wenyu; Wang, Bin; Chen, Ruyan; Wright, Jonathon S.; Ma, Wenqian

    2017-09-01

    Circulation patterns associated with cold events over East Asia during the winter months of 1948-2014 are classified into four regimes by applying a k-means clustering method based on the area-weighted pattern correlation. The earliest precursor signals for two regimes are anticyclonic anomalies, which evolve into Ural and central Siberian blocking-like circulation patterns. The earliest precursor signals for the other two regimes are cyclonic anomalies, both of which evolve to amplify the East Asian trough (EAT). Both the blocking-like circulation patterns and amplified EAT favor the initialization of cold events. On average, the blocking-related regimes tend to last longer. The lead time of the earliest precursor signal for the central Siberian blocking-related regime is only 4 days, while those for the other regimes range from 16 to 18 days. The North Atlantic Oscillation plays essential roles both in triggering the precursor for the Ural blocking-related regime and in amplifying the precursors for all regimes. All regimes preferentially occur during the positive phase of the Eurasian teleconnection pattern and the negative phase of the El Niño-Southern Oscillation. For three regimes, surface cooling is primarily due to reduced downward infrared radiation and enhanced cold advection. For the remaining regime, which is associated with the southernmost cooling center, sensible and latent heat release and horizontal cold advection dominate the East Asian cooling.

  2. Fast analytical spectral filtering methods for magnetic resonance perfusion quantification.

    PubMed

    Reddy, Kasireddy V; Mitra, Abhishek; Yalavarthy, Phaneendra K

    2016-08-01

    The deconvolution in perfusion weighted imaging (PWI) plays an important role in quantifying the MR perfusion parameters, and the application of PWI to stroke and brain tumor studies has become standard clinical practice. The standard approaches for this deconvolution are oscillatory-limited singular value decomposition (oSVD) and frequency domain deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of MR perfusion data. In this work, two fast deconvolution methods (namely, analytical Fourier filtering and analytical Showalter spectral filtering) are proposed. Through systematic evaluation, the proposed methods are shown to be computationally efficient and quantitatively accurate compared to FDD and oSVD.
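
    The frequency-domain route can be sketched generically: deconvolve the tissue concentration curve by the arterial input function via FFT division with a stabilizer. The regularizer form below is an assumption for illustration, not the paper's analytical filter.

```python
# Generic frequency-domain deconvolution for perfusion: recover the
# flow-scaled residue function from tissue and arterial curves by FFT
# division with a stabilizer (the regularizer here is an assumed form).
import numpy as np

def fdd(tissue, aif, reg=0.1):
    n = len(tissue)
    A = np.fft.rfft(aif, n)
    C = np.fft.rfft(tissue, n)
    R = np.conj(A) * C / (np.abs(A) ** 2 + reg * np.abs(A).max() ** 2)
    return np.fft.irfft(R, n)   # peak value estimates CBF (up to scaling)
```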

  3. Optimized Deconvolution for Maximum Axial Resolution in Three-Dimensional Aberration-Corrected Scanning Transmission Electron Microscopy

    PubMed Central

    Ramachandra, Ranjan; de Jonge, Niels

    2012-01-01

    Three-dimensional (3D) data sets were recorded of gold nanoparticles placed on both sides of silicon nitride membranes using focal-series aberration-corrected scanning transmission electron microscopy (STEM). The deconvolution of the 3D data sets was optimized to obtain the highest possible axial resolution. The deconvolution involved two different point spread functions (PSFs), each calculated iteratively via blind deconvolution. Supporting membranes of different thicknesses were tested to study the effect of beam broadening on the deconvolution. It was found that several iterations of deconvolution were efficient in reducing the imaging noise. With an increasing number of iterations, the axial resolution increased, and most of the structural information was preserved. Additional iterations improved the axial resolution by at most a factor of 4 to 6, depending on the particular data set, down to 8 nm at best, but at the cost of a reduction in the lateral size of the nanoparticles in the image. Thus, the deconvolution procedure optimized for the highest axial resolution is best suited for applications where one is interested only in the 3D locations of nanoparticles.

  4. Nanocomposite synthesis and photoluminescence properties of MeV Au-ion beam modified Ni thin films

    NASA Astrophysics Data System (ADS)

    Siva, Vantari; Datta, Debi P.; Singh, Avanendra; Som, T.; Sahoo, Pratap K.

    2016-01-01

    We report on the synthesis and properties of nanocomposites formed from thin Ni films on a silica matrix using an Au-ion beam. When a 5 nm Ni film on silica is irradiated with 2.2 MeV Au ions, the surface morphology changes drastically with ion fluence. Within a fluence range of 5 × 10^14 to 1 × 10^16 ions/cm^2, a sharp increase in surface roughness follows an initial surface smoothening. The depth profiles extracted from Rutherford backscattering spectra demonstrate the diffusion of Ni and Au into the silica matrix. The photoluminescence spectra of the irradiated samples reveal the development of two bands centered at 3.3 eV and 2.66 eV, respectively. Deconvolution of those bands shows five different emission peaks, corresponding to different luminescence centers, which confirms the existence of Ni-Au nanocomposites in the silica matrix. The optical and structural modifications are understood in terms of ion-induced local heating and mass transport due to thermal spikes, which lead to nanocomposite formation in silica.

  5. Intelligent peak deconvolution through in-depth study of the data matrix from liquid chromatography coupled with a photo-diode array detector applied to pharmaceutical analysis.

    PubMed

    Arase, Shuntaro; Horie, Kanta; Kato, Takashi; Noda, Akira; Mito, Yasuhiro; Takahashi, Masatoshi; Yanagisawa, Toshinobu

    2016-10-21

    The multivariate curve resolution-alternating least squares (MCR-ALS) method was investigated for its potential to accelerate pharmaceutical research and development. The fast and efficient separation of complex mixtures consisting of multiple components, including impurities as well as major drug substances, remains a challenging application for liquid chromatography in the field of pharmaceutical analysis. In this paper we present an integrated analysis algorithm operating on the matrix of data generated by HPLC coupled with a photo-diode array detector (HPLC-PDA), consisting of the developed multivariate curve resolution method using an expectation maximization (EM) algorithm with a bidirectional exponentially modified Gaussian (BEMG) model function as a constraint for the chromatograms and the numerous PDA spectra aligned with the time axis. The algorithm produced less than ±1.0% error between true and separated peak-area values at a resolution (Rs) of 0.6, using simulated data for a three-component mixture with elution order a/b/c and peak-apex spectral similarities of (a/b) = 0.8410, (b/c) = 0.9123 and (a/c) = 0.9809. This software concept provides fast and robust separation analysis even when method development efforts fail to achieve complete separation of the target peaks. Additionally, this approach is potentially applicable to peak deconvolution allowing quantitative analysis of co-eluted compounds having exactly the same molecular weight, complementing the use of LC-MS, which quantifies co-eluted compounds using selected ions to differentiate the proportion of response attributable to each compound.
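
    The standard (one-sided) exponentially modified Gaussian that underlies such skewed-peak models is easy to state; the bidirectional EMG (BEMG) constraint is the authors' extension and is not reproduced here.

```python
# The standard exponentially modified Gaussian (EMG) peak: a Gaussian of
# center mu and width sigma convolved with an exponential tail of constant
# tau. The paper's bidirectional EMG extends this; it is not reproduced here.
import numpy as np
from scipy.special import erfc

def emg(t, area, mu, sigma, tau):
    z = (mu - t) / sigma + sigma / tau
    return (area / (2.0 * tau)) * np.exp(
        (mu - t) / tau + sigma**2 / (2.0 * tau**2)
    ) * erfc(z / np.sqrt(2.0))

t = np.linspace(0.0, 10.0, 500)
peak = emg(t, area=1.0, mu=3.0, sigma=0.2, tau=0.6)  # a tailing peak
```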

  6. Coherent Multidecadal Atmospheric and Oceanic Variability in the North Atlantic: Blocking Corresponds with Warm Subpolar Ocean

    NASA Technical Reports Server (NTRS)

    Hakkinen, Sirpa M.; Rhines, P. B.; Worthen, D. L.

    2012-01-01

    Winters with frequent atmospheric blocking, in a band of latitudes from Greenland to Western Europe, are found to persist over several decades and correspond to a warm North Atlantic Ocean. This is evident in atmospheric reanalysis data, both modern and for the full 20th century. Blocking is approximately in phase with Atlantic multidecadal ocean variability (AMV). Wintertime atmospheric blocking involves a highly distorted jetstream, isolating large regions of air from the westerly circulation. It influences the ocean through wind-stress curl and the associated air/sea heat flux. While blocking is a relatively high-frequency phenomenon, it is strongly modulated over decadal timescales. The blocked regime (weaker ocean gyres, weaker air-sea heat flux, paradoxically increased poleward transport of warm subtropical waters) contributes to the warm phase of AMV. Atmospheric blocking describes the early 20th-century warming and the 1996-2010 warm period better than does the NAO index. It has roots in the hemispheric circulation and jet stream dynamics. Subpolar Atlantic variability covaries with distant AMOC fields: both of these connections may express the global influence of the subpolar North Atlantic Ocean on the global climate system.

  7. Perceptually-Based Adaptive JPEG Coding

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatially adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which scales the quantization matrix, yielding the new matrix for that block. MPEG-1 and MPEG-2 use much the same scheme, except that there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon the DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yields maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
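
    The per-block mechanics are simple to sketch: the channel's base quantization matrix is scaled by the block's multiplier before quantizing that block's DCT coefficients. The perceptual-error model that selects the multipliers (contrast sensitivity, light adaptation, masking) is the paper's contribution and is omitted here.

```python
# Per-block mechanics of the adaptive scheme: the channel's base quantization
# matrix is scaled by the block's multiplier before quantizing its DCT
# coefficients. The perceptual model that picks multipliers is omitted.
import numpy as np
from scipy.fft import dctn, idctn

def code_block(block, qmatrix, multiplier):
    coeffs = dctn(block, norm='ortho')
    q = qmatrix * multiplier             # effective matrix for this block
    return idctn(np.round(coeffs / q) * q, norm='ortho')
```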

  8. Design and analysis of quantitative differential proteomics investigations using LC-MS technology.

    PubMed

    Bukhman, Yury V; Dharsee, Moyez; Ewing, Rob; Chu, Peter; Topaloglou, Thodoros; Le Bihan, Thierry; Goh, Theo; Duewel, Henry; Stewart, Ian I; Wisniewski, Jacek R; Ng, Nancy F

    2008-02-01

    Liquid chromatography-mass spectrometry (LC-MS)-based proteomics is becoming an increasingly important tool in characterizing the abundance of proteins in biological samples of various types and across conditions. Effects of disease or drug treatments on protein abundance are of particular interest for the characterization of biological processes and the identification of biomarkers. Although state-of-the-art instrumentation is available to make high-quality measurements and commercially available software is available to process the data, the complexity of the technology and data presents challenges for bioinformaticians and statisticians. Here, we describe a pipeline for the analysis of quantitative LC-MS data. Key components of this pipeline include experimental design (sample pooling, blocking, and randomization) as well as deconvolution and alignment of mass chromatograms to generate a matrix of molecular abundance profiles. An important challenge in LC-MS-based quantitation is to be able to accurately identify and assign abundance measurements to members of protein families. To address this issue, we implement a novel statistical method for inferring the relative abundance of related members of protein families from tryptic peptide intensities. This pipeline has been used to analyze quantitative LC-MS data from multiple biomarker discovery projects. We illustrate our pipeline here with examples from two of these studies, and show that the pipeline constitutes a complete workable framework for LC-MS-based differential quantitation. Supplementary material is available at http://iec01.mie.utoronto.ca/~thodoros/Bukhman/.

  9. A new lumped-parameter approach to simulating flow processes in unsaturated dual-porosity media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimmerman, R.W.; Hadgu, T.; Bodvarsson, G.S.

    We have developed a new lumped-parameter dual-porosity approach to simulating unsaturated flow processes in fractured rocks. Fluid flow between the fracture network and the matrix blocks is described by a nonlinear equation that relates the imbibition rate to the local difference in liquid-phase pressure between the fractures and the matrix blocks. This equation is a generalization of the Warren-Root equation but, unlike the Warren-Root equation, it is accurate in both the early and late time regimes. The fracture/matrix interflow equation has been incorporated into a computational module, compatible with the TOUGH simulator, to serve as a source/sink term for fracture elements. The new approach achieves accuracy comparable to simulations in which the matrix blocks are discretized, but typically requires an order of magnitude less computational time.

  10. Endocytosis of collagen by hepatic stellate cells regulates extracellular matrix dynamics

    PubMed Central

    Bi, Yan; Mukhopadhyay, Dhriti; Drinane, Mary; Ji, Baoan; Li, Xing; Cao, Sheng

    2014-01-01

    Hepatic stellate cells (HSCs) generate matrix, which in turn may also regulate HSCs function during liver fibrosis. We hypothesized that HSCs may endocytose matrix proteins to sense and respond to changes in microenvironment. Primary human HSCs, LX2, or mouse embryonic fibroblasts (MEFs) [wild-type; c-abl−/−; or Yes, Src, and Fyn knockout mice (YSF−/−)] were incubated with fluorescent-labeled collagen or gelatin. Fluorescence-activated cell sorting analysis and confocal microscopy were used for measuring cellular internalization of matrix proteins. Targeted PCR array and quantitative real-time PCR were used to evaluate gene expression changes. HSCs and LX2 cells endocytose collagens in a concentration- and time-dependent manner. Endocytosed collagen colocalized with Dextran 10K, a marker of macropinocytosis, and 5-ethylisopropyl amiloride, an inhibitor of macropinocytosis, reduced collagen internalization by 46%. Cytochalasin D and ML7 blocked collagen internalization by 47% and 45%, respectively, indicating that actin and myosin are critical for collagen endocytosis. Wortmannin and AKT inhibitor blocked collagen internalization by 70% and 89%, respectively, indicating that matrix macropinocytosis requires phosphoinositide-3-kinase (PI3K)/AKT signaling. Overexpression of dominant-negative dynamin-2 K44A blocked matrix internalization by 77%, indicating a role for dynamin-2 in matrix macropinocytosis. Whereas c-abl−/− MEF showed impaired matrix endocytosis, YSF−/− MEF surprisingly showed increased matrix endocytosis. It was also associated with complex gene regulations that related with matrix dynamics, including increased matrix metalloproteinase 9 (MMP-9) mRNA levels and zymographic activity. HSCs endocytose matrix proteins through macropinocytosis that requires a signaling network composed of PI3K/AKT, dynamin-2, and c-abl. Interaction with extracellular matrix regulates matrix dynamics through modulating multiple gene expressions including MMP-9. PMID:25080486

  11. Endocytosis of collagen by hepatic stellate cells regulates extracellular matrix dynamics.

    PubMed

    Bi, Yan; Mukhopadhyay, Dhriti; Drinane, Mary; Ji, Baoan; Li, Xing; Cao, Sheng; Shah, Vijay H

    2014-10-01

    Hepatic stellate cells (HSCs) generate matrix, which in turn may also regulate HSC function during liver fibrosis. We hypothesized that HSCs may endocytose matrix proteins to sense and respond to changes in the microenvironment. Primary human HSCs, LX2, or mouse embryonic fibroblasts (MEFs) [wild-type; c-abl(-/-); or Yes, Src, and Fyn knockout mice (YSF(-/-))] were incubated with fluorescent-labeled collagen or gelatin. Fluorescence-activated cell sorting analysis and confocal microscopy were used for measuring cellular internalization of matrix proteins. Targeted PCR array and quantitative real-time PCR were used to evaluate gene expression changes. HSCs and LX2 cells endocytose collagens in a concentration- and time-dependent manner. Endocytosed collagen colocalized with Dextran 10K, a marker of macropinocytosis, and 5-ethylisopropyl amiloride, an inhibitor of macropinocytosis, reduced collagen internalization by 46%. Cytochalasin D and ML7 blocked collagen internalization by 47% and 45%, respectively, indicating that actin and myosin are critical for collagen endocytosis. Wortmannin and AKT inhibitor blocked collagen internalization by 70% and 89%, respectively, indicating that matrix macropinocytosis requires phosphoinositide-3-kinase (PI3K)/AKT signaling. Overexpression of dominant-negative dynamin-2 K44A blocked matrix internalization by 77%, indicating a role for dynamin-2 in matrix macropinocytosis. Whereas c-abl(-/-) MEFs showed impaired matrix endocytosis, YSF(-/-) MEFs surprisingly showed increased matrix endocytosis. This was also associated with complex regulation of genes related to matrix dynamics, including increased matrix metalloproteinase 9 (MMP-9) mRNA levels and zymographic activity. HSCs endocytose matrix proteins through macropinocytosis, which requires a signaling network composed of PI3K/AKT, dynamin-2, and c-abl. Interaction with the extracellular matrix regulates matrix dynamics through modulation of multiple genes, including MMP-9. Copyright © 2014 the American Physiological Society.

  12. Increased Obesity-Associated Circulating Levels of the Extracellular Matrix Proteins Osteopontin, Chitinase-3 Like-1 and Tenascin C Are Associated with Colon Cancer

    PubMed Central

    Catalán, Victoria; Gómez-Ambrosi, Javier; Rodríguez, Amaia; Ramírez, Beatriz; Izaguirre, Maitane; Hernández-Lizoain, José Luis; Baixauli, Jorge; Martí, Pablo; Valentí, Víctor; Moncada, Rafael; Silva, Camilo; Salvador, Javier; Frühbeck, Gema

    2016-01-01

    Background Excess adipose tissue represents a major risk factor for the development of colon cancer, with inflammation and extracellular matrix (ECM) remodeling being proposed as plausible mechanisms. The aim of this study was to investigate whether obesity can influence circulating levels of inflammation-related extracellular matrix proteins in patients with colon cancer (CC), promoting a microenvironment favorable for tumor growth. Methods Serum samples obtained from 79 subjects [26 lean (LN) and 53 obese (OB)] were used in the study. Enrolled subjects were further subclassified according to the established diagnostic protocol for CC (44 without CC and 35 with CC). Anthropometric measurements as well as circulating metabolites and hormones were determined. Circulating concentrations of the ECM proteins osteopontin (OPN), chitinase-3-like protein 1 (YKL-40), tenascin C (TNC) and lipocalin-2 (LCN-2) were determined by ELISA. Results Significant differences in circulating OPN, YKL-40 and TNC concentrations between the experimental groups were observed, with levels significantly increased by obesity (P<0.01) and colon cancer (P<0.05). LCN-2 levels were affected by obesity (P<0.05), but no differences were detected with regard to the presence of CC. A positive association (P<0.05) with different inflammatory markers was also detected. Conclusions To our knowledge, we herein show for the first time that obese patients with CC exhibit increased circulating levels of OPN, YKL-40 and TNC, providing further evidence for the influence of obesity on CC development via ECM proteins, which represent promising diagnostic biomarkers or target molecules for therapeutics. PMID:27612200

  13. Shear damage mechanisms in a woven, Nicalon-reinforced ceramic-matrix composite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keith, W.P.; Kedward, K.T.

    The shear response of a Nicalon-reinforced ceramic-matrix composite was investigated using Iosipescu tests. Damage was characterized by X-ray, optical, and SEM techniques. The large inelastic strains which were observed were attributed to rigid body sliding of longitudinal blocks of material. These blocks are created by the development and extension of intralaminar cracks and ply delaminations. This research reveals that the debonding and sliding characteristics of the fiber-matrix interface control the shear strength, strain softening, and cyclic degradation of the material.

  14. Tectonic slicing and mixing processes along the subduction interface: The Sistan example (Eastern Iran)

    NASA Astrophysics Data System (ADS)

    Bonnet, G.; Agard, P.; Angiboust, S.; Monié, P.; Jentzer, M.; Omrani, J.; Whitechurch, H.; Fournier, M.

    2018-06-01

    Suture zones preserve metamorphosed relicts of subducted ocean floor later exhumed along the plate interface that can provide critical insights into subduction zone processes. Mélange-like units are exceptionally well-exposed in the Sistan suture (Eastern Iran), which results from the closure of a branch of the Neotethys between the Lut and Afghan continental blocks. High pressure rocks found in the inner part of the suture zone (i.e., Ratuk complex) around Gazik are herein compared to previously studied outcrops along the belt. Detailed field investigations and mapping allow the distinction of two kinds of subduction-related block-in-matrix units: a siliciclastic-matrix complex and a serpentinite-matrix complex. The siliciclastic-matrix complex includes barely metamorphosed blocks of serpentinized peridotite, radiolarite and basalt of maximum greenschist-facies grade (i.e., maximum temperature of 340 °C). The serpentinite-matrix complex includes blocks of various grades and lithologies: mafic eclogites, amphibolitized blueschists, blue-amphibole-bearing metacherts and aegirine-augite-albite rocks. Eclogites reached peak pressure conditions around 530 °C and 2.3 GPa and underwent isothermal retrogression down to 530 °C and 0.9 GPa. Estimates of peak P-T conditions for the other rocks are less well constrained but suggest equilibration at P < 1 GPa. Strikingly similar Ar-Ar ages of 86 ± 3 Ma, along 70 km, are obtained for phengite and amphibole from fourteen eclogite and amphibolitized blueschist blocks. Ages in Gazik are usually younger than further south (e.g., Sulabest), but there is little age difference between the various kinds of rocks. These results (radiometric ages, observed structures and rock types) support a tectonic origin of the serpentinite-matrix mélange and shed light on subduction zone dynamics, particularly on coeval detachment and exhumation mechanisms of slab-derived rocks.

  15. An efficient blocking M2L translation for low-frequency fast multipole method in three dimensions

    NASA Astrophysics Data System (ADS)

    Takahashi, Toru; Shimba, Yuta; Isakari, Hiroshi; Matsumoto, Toshiro

    2016-05-01

    We propose an efficient scheme to perform the multipole-to-local (M2L) translation in the three-dimensional low-frequency fast multipole method (LFFMM). Our strategy is to combine a group of matrix-vector products associated with M2L translation into a matrix-matrix product in order to diminish the memory traffic. For this purpose, we first developed a grouping method (termed internal blocking) based on the congruent transformations (rotational and reflectional symmetries) of M2L-translators for each target box in the FMM hierarchy (adaptive octree). Next, we considered another method of grouping (termed external blocking) that is able to handle M2L translations for multiple target boxes collectively by using the translational invariance of the M2L translation. By combining these internal and external blockings, the M2L translation can be performed efficiently whilst preserving the numerical accuracy exactly. We assessed the proposed blocking scheme numerically and applied it to the boundary integral equation method to solve electromagnetic scattering problems for a perfect electric conductor. From the numerical results, it was found that the proposed M2L scheme achieved a speedup of a few times over the non-blocking scheme.
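
    The core trick, batching many matrix-vector products that share one translator into a single matrix-matrix product, can be sketched in a few lines; the sizes and the random translator below are illustrative stand-ins, not the paper's operators:

        # Sketch: grouping M2L translations that share one translator matrix T.
        # Applying T to each multipole vector separately is memory-bound;
        # stacking the vectors as columns turns the work into one GEMM.
        import numpy as np

        rng = np.random.default_rng(0)
        p = 64                                       # multipole coefficients
        T = rng.standard_normal((p, p))              # one shared M2L translator
        multipoles = rng.standard_normal((p, 100))   # 100 source boxes sharing T

        locals_blocked = T @ multipoles              # blocked: one matrix-matrix product

        # non-blocked reference: one matrix-vector product per box
        locals_ref = np.column_stack([T @ multipoles[:, j] for j in range(100)])
        assert np.allclose(locals_blocked, locals_ref)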

  16. SU-F-T-478: Effect of Deconvolution in Analysis of Mega Voltage Photon Beam Profiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muthukumaran, M; Manigandan, D; Murali, V

    2016-06-15

    Purpose: To study and compare the penumbra of 6 MV and 15 MV photon beam profiles after deconvolving different volume ionization chambers. Methods: A 0.125 cc Semi-Flex chamber, a Markus chamber and a PTW Farmer chamber were used to measure the in-plane and cross-plane profiles at 5 cm depth for 6 MV and 15 MV photons. The profiles were measured for various field sizes from 2 × 2 cm to 30 × 30 cm. PTW TBA scan software was used for the measurements, and the "deconvolution" functionality in the software was used to remove the volume averaging effect due to the finite volume of the chamber along the lateral and longitudinal directions for all the ionization chambers. The predicted true profile was compared and the change in penumbra before and after deconvolution was studied. Results: After deconvolution, the penumbra decreased by 1 mm for field sizes from 2 × 2 cm to 20 × 20 cm, along both lateral and longitudinal directions. However, for field sizes from 20 × 20 cm to 30 × 30 cm, the difference in penumbra was around 1.2 to 1.8 mm. This was observed for both 6 MV and 15 MV photon beams. The penumbra was always smaller in the deconvolved profiles for all the ionization chambers in the study. The differences in penumbral values between the deconvolved profiles along the lateral and longitudinal directions were on the order of 0.1 to 0.3 mm for all the chambers. Deconvolution of the profiles along the longitudinal direction for the Farmer chamber was poor and not comparable with the other deconvolved profiles. Conclusion: The results of the deconvolved profiles for the 0.125 cc and Markus chambers were comparable, and the deconvolution functionality can be used to overcome the volume averaging effect.

  17. Matrix Management in Practice in Access Services at the NCSU Libraries

    ERIC Educational Resources Information Center

    Harris, Colleen S.

    2010-01-01

    The former Associate Head of Access and Delivery Services of the North Carolina State University Libraries reports on successful use of matrix management techniques for the Circulation and Reserves unit of the department. Despite their having fallen out of favor in much of the management literature, matrix management principles are useful for…

  18. NANOSTRUCTURED METAL OXIDE CATALYSTS VIA BUILDING BLOCK SYNTHESES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craig E. Barnes

    2013-03-05

    A broadly applicable methodology has been developed to prepare new single site catalysts on silica supports. This methodology requires three critical components: a rigid building block that will be the main structural and compositional component of the support matrix; a family of linking reagents that will be used to insert active metals into the matrix as well as cross link building blocks into a three dimensional matrix; and a clean coupling reaction that will connect building blocks and linking agents together in a controlled fashion. The final piece of conceptual strategy at the center of this methodology involves dosing the building block with known amounts of linking agents so that the targeted connectivity of a linking center to surrounding building blocks is obtained. Achieving targeted connectivities around catalytically active metals in these building block matrices is a critical element of the strategy by which single site catalysts are obtained. This methodology has been demonstrated with a model system involving only silicon and then with two metal-containing systems (titanium and vanadium). The effect that connectivity has on the reactivity of atomically dispersed titanium sites in silica building block matrices has been investigated in the selective oxidation of phenols to benzoquinones. 2-connected titanium sites are found to be five times as active (in terms of initial turnover frequencies) as 4-connected titanium sites (i.e. framework titanium sites).

  19. Hybrid sparse blind deconvolution: an implementation of SOOT algorithm to real data

    NASA Astrophysics Data System (ADS)

    Pakmanesh, Parvaneh; Goudarzi, Alireza; Kourki, Meisam

    2018-06-01

    Extracting information from seismic data depends on deconvolution as an important processing step; it provides the reflectivity series by signal compression. This compression can be obtained by removing the wavelet effects on the traces. Recently, blind deconvolution has provided reliable performance for sparse signal recovery. In this study, two deconvolution methods have been applied to seismic data; their combination provides a robust spiking deconvolution approach. This hybrid deconvolution is applied using the sparse deconvolution (MM algorithm) and the Smoothed One-Over-Two (SOOT) algorithm in a chain. The MM algorithm is based on the minimization of a cost function defined by the l1 and l2 norms. After applying the two algorithms to the seismic data, the SOOT algorithm provided well-compressed data with a higher resolution than the MM algorithm. The SOOT algorithm requires initial values when applied to real data, such as the wavelet coefficients and reflectivity series, which can be obtained through the MM algorithm. The computational cost of the hybrid method is high, and it should be implemented on post-stack or pre-stack seismic data from regions of complex structure.
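
    Neither the MM nor the SOOT solver is reproduced here, but the basic l1-penalized spiking deconvolution this family of methods builds on can be sketched with ISTA (a simple majorize-minimize scheme) on a synthetic trace; the wavelet, sizes, and penalty weight are illustrative:

        # Sketch: l1-penalized spiking deconvolution by ISTA; wavelet, trace
        # and penalty are synthetic, not the paper's field data.
        import numpy as np

        def ista_deconv(trace, wavelet, lam=0.1, n_iter=500):
            n = len(trace)
            H = np.zeros((n, n))                    # convolution matrix (small n)
            for i in range(n):
                for j in range(max(0, i - len(wavelet) + 1), i + 1):
                    H[i, j] = wavelet[i - j]
            step = 1.0 / np.linalg.norm(H, 2) ** 2  # 1/L for the data term
            x = np.zeros(n)
            for _ in range(n_iter):
                z = x - step * (H.T @ (H @ x - trace))   # gradient step
                x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrink
            return x

        rng = np.random.default_rng(1)
        wavelet = np.array([0.2, 1.0, 0.5, -0.3])
        refl = np.zeros(80)
        refl[[10, 35, 36, 60]] = [1.0, -0.8, 0.6, 0.9]
        trace = np.convolve(refl, wavelet)[:80] + 0.01 * rng.standard_normal(80)
        print(np.nonzero(np.abs(ista_deconv(trace, wavelet)) > 0.1)[0])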

  20. Efficient quantum circuits for dense circulant and circulant like operators

    PubMed Central

    Zhou, S. S.

    2017-01-01

    Circulant matrices are an important family of operators, which have a wide range of applications in science and engineering-related fields. They are, in general, non-sparse and non-unitary. In this paper, we present efficient quantum circuits to implement circulant operators using fewer resources and with lower complexity than existing methods. Moreover, our quantum circuits can be readily extended to the implementation of Toeplitz, Hankel and block circulant matrices. Efficient quantum algorithms to implement the inverses and products of circulant operators are also provided, and an example application in solving the equation of motion for cyclic systems is discussed. PMID:28572988
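
    The classical counterpart of the structure these circuits exploit is that a circulant matrix is diagonalized by the discrete Fourier transform, so products and inverses reduce to elementwise work in the Fourier domain. A minimal numpy sketch (classical, not a quantum circuit):

        # Sketch: a circulant matrix C with first column c satisfies
        # C = F^-1 diag(F c) F, so applying C or C^-1 costs O(n log n).
        import numpy as np

        def circulant_apply(c, x):
            """Multiply the circulant matrix with first column c by x."""
            return np.fft.ifft(np.fft.fft(c) * np.fft.fft(x))

        def circulant_solve(c, b):
            """Solve C x = b in the Fourier domain (assumes no zero eigenvalue)."""
            return np.fft.ifft(np.fft.fft(b) / np.fft.fft(c))

        c = np.array([4.0, 1.0, 0.0, 1.0])
        x = np.array([1.0, 2.0, 3.0, 4.0])
        b = circulant_apply(c, x)
        assert np.allclose(circulant_solve(c, b).real, x)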

  1. Unifying model for random matrix theory in arbitrary space dimensions

    NASA Astrophysics Data System (ADS)

    Cicuta, Giovanni M.; Krausser, Johannes; Milkus, Rico; Zaccone, Alessio

    2018-03-01

    A sparse random block matrix model suggested by the Hessian matrix used in the study of elastic vibrational modes of amorphous solids is presented and analyzed. By evaluating some moments, benchmarked against numerics, differences in the eigenvalue spectrum of this model in different limits of space dimension d, and for arbitrary values of the lattice coordination number Z, are shown and discussed. As a function of these two parameters (and their ratio Z/d), the most studied models in random matrix theory (Erdős-Rényi graphs, effective medium, and replicas) can be reproduced in the various limits of block dimensionality d. Remarkably, the Marchenko-Pastur spectral density (which is recovered by replica calculations for the Laplacian matrix) is reproduced exactly in the limit of infinite size of the blocks, or d → ∞, which clarifies the physical meaning of space dimension in these models. We feel that the approximate results for d = 3 provided by our method may have many potential applications in the future, from the vibrational spectrum of glasses and elastic networks to wave localization, disordered conductors, random resistor networks, and random walks.
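
    A numerical sketch of the kind of object being studied (not the paper's analytic calculation): sample a sparse random block Hessian for an Erdős-Rényi contact network in d = 3 with mean coordination Z and inspect its spectrum. All parameters are illustrative:

        # Sketch: sparse random block (Hessian-like) matrix for an Erdos-Renyi
        # contact network in d = 3; each bond contributes a rank-1 spring block.
        import numpy as np

        rng = np.random.default_rng(0)
        N, d, Z = 200, 3, 6              # particles, dimension, mean coordination
        H = np.zeros((N * d, N * d))
        p = Z / (N - 1)                  # edge probability giving coordination ~ Z
        for i in range(N):
            for j in range(i + 1, N):
                if rng.random() < p:
                    n = rng.standard_normal(d)
                    n /= np.linalg.norm(n)       # random bond direction
                    blk = np.outer(n, n)
                    H[i*d:(i+1)*d, j*d:(j+1)*d] -= blk
                    H[j*d:(j+1)*d, i*d:(i+1)*d] -= blk
                    H[i*d:(i+1)*d, i*d:(i+1)*d] += blk
                    H[j*d:(j+1)*d, j*d:(j+1)*d] += blk

        eigvals = np.linalg.eigvalsh(H)  # spectrum to compare with analytic limits
        print(eigvals.min(), eigvals.max())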

  2. What do you gain from deconvolution? - Observing faint galaxies with the Hubble Space Telescope Wide Field Camera

    NASA Technical Reports Server (NTRS)

    Schade, David J.; Elson, Rebecca A. W.

    1993-01-01

    We describe experiments with deconvolutions of simulations of deep HST Wide Field Camera images containing faint, compact galaxies to determine under what circumstances there is a quantitative advantage to image deconvolution, and explore whether it is (1) helpful for distinguishing between stars and compact galaxies, or between spiral and elliptical galaxies, and whether it (2) improves the accuracy with which characteristic radii and integrated magnitudes may be determined. The Maximum Entropy and Richardson-Lucy deconvolution algorithms give the same results. For medium and low S/N images, deconvolution does not significantly improve our ability to distinguish between faint stars and compact galaxies, nor between spiral and elliptical galaxies. Measurements from both raw and deconvolved images are biased and must be corrected; it is easier to quantify and remove the biases for cases that have not been deconvolved. We find no benefit from deconvolution for measuring luminosity profiles, but these results are limited to low S/N images of very compact (often undersampled) galaxies.

  3. Post-processing of adaptive optics images based on frame selection and multi-frame blind deconvolution

    NASA Astrophysics Data System (ADS)

    Tian, Yu; Rao, Changhui; Wei, Kai

    2008-07-01

    Adaptive optics can only partially compensate images blurred by atmospheric turbulence, owing to observing conditions and hardware restrictions. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed to improve images partially corrected by adaptive optics. Frames suitable for blind deconvolution are selected from the recorded AO closed-loop frame series by the frame selection technique, and multi-frame blind deconvolution is then performed. No a priori knowledge is used in the blind deconvolution except for a positivity constraint. The use of multiple frames improves the stability and convergence of the blind deconvolution algorithm. The method was applied to the restoration of images of celestial bodies observed with the 1.2 m telescope equipped with a 61-element adaptive optics system at Yunnan Observatory. The results show that the method can effectively improve images partially corrected by adaptive optics.

  4. Function of Matrix IGF-1 in Coupling Bone Resorption and Formation

    PubMed Central

    Crane, Janet L.; Cao, Xu

    2013-01-01

    Balancing bone resorption and formation is the quintessential component for the prevention of osteoporosis. Signals that determine the recruitment, replication, differentiation, function, and apoptosis of osteoblasts and osteoclasts direct bone remodeling and determine whether bone tissue is gained, lost, or balanced. Therefore understanding the signaling pathways involved in the coupling process will help develop further targets for osteoporosis therapy, by blocking bone resorption or enhancing bone formation in a space and time dependent manner. Insulin-like growth factor type 1 (IGF-1) has long been known to play a role in bone strength. It is one of the most abundant substances in the bone matrix, circulates systemically and is secreted locally, and has a direct relationship with bone mineral density. Recent data has helped further our understanding of the direct role of IGF-1 signaling in coupling bone remodeling which will be discussed in this review. The bone marrow microenvironment plays a critical role in the fate of MSCs and HSCs and thus how IGF-1 interacts with other factors in the microenvironment are equally important. While previous clinical trials with IGF-1 administration have been unsuccessful at enhancing bone formation, advances in basic science studies have provided insight into further mechanisms that should be considered for future trials. Additional basic science studies dissecting the regulation and the function of matrix IGF-1 in modeling and remodeling will continue to provide further insight for future directions for anabolic therapies for osteoporosis. PMID:24068256

  5. Function of matrix IGF-1 in coupling bone resorption and formation.

    PubMed

    Crane, Janet L; Cao, Xu

    2014-02-01

    Balancing bone resorption and formation is the quintessential component for the prevention of osteoporosis. Signals that determine the recruitment, replication, differentiation, function, and apoptosis of osteoblasts and osteoclasts direct bone remodeling and determine whether bone tissue is gained, lost, or balanced. Therefore, understanding the signaling pathways involved in the coupling process will help develop further targets for osteoporosis therapy, by blocking bone resorption or enhancing bone formation in a space- and time-dependent manner. Insulin-like growth factor type 1 (IGF-1) has long been known to play a role in bone strength. It is one of the most abundant substances in the bone matrix, circulates systemically and is secreted locally, and has a direct relationship with bone mineral density. Recent data has helped further our understanding of the direct role of IGF-1 signaling in coupling bone remodeling which will be discussed in this review. The bone marrow microenvironment plays a critical role in the fate of mesenchymal stem cells and hematopoietic stem cells and thus how IGF-1 interacts with other factors in the microenvironment are equally important. While previous clinical trials with IGF-1 administration have been unsuccessful at enhancing bone formation, advances in basic science studies have provided insight into further mechanisms that should be considered for future trials. Additional basic science studies dissecting the regulation and the function of matrix IGF-1 in modeling and remodeling will continue to provide further insight for future directions for anabolic therapies for osteoporosis.

  6. Blind source deconvolution for deep Earth seismology

    NASA Astrophysics Data System (ADS)

    Stefan, W.; Renaut, R.; Garnero, E. J.; Lay, T.

    2007-12-01

    We present an approach to automatically estimate an empirical source characterization of deep earthquakes recorded teleseismically and subsequently remove the source from the recordings by applying regularized deconvolution. A principal goal in this work is to effectively deblur the seismograms, resulting in more impulsive and narrower pulses, permitting better constraints in high resolution waveform analyses. Our method consists of two stages: (1) we first estimate the empirical source by automatically registering traces to their first principal component with a weighting scheme based on their deviation from this shape; we then use this shape as an estimate of the earthquake source. (2) We compare different deconvolution techniques to remove the source characteristic from the trace. In particular, Total Variation (TV) regularized deconvolution is used, which exploits the fact that most natural signals are sparse in an appropriate basis, in this case the impulsive onsets of seismic arrivals. We show several examples of deep focus Fiji-Tonga region earthquakes for the phases S and ScS, comparing source responses for the separate phases. TV deconvolution is compared to water level deconvolution, Tikhonov deconvolution, and L1 norm deconvolution, for both data and synthetics. This approach significantly improves our ability to study subtle waveform features that are commonly masked by either noise or the earthquake source. Eliminating source complexities improves our ability to resolve deep mantle triplications, waveform complexities associated with possible double crossings of the post-perovskite phase transition, as well as increasing stability in waveform analyses used for deep mantle anisotropy measurements.
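
    A minimal sketch of TV-regularized deconvolution, assuming a known wavelet and a smoothed TV penalty solved by plain gradient descent; the paper's solver and its water-level/Tikhonov/L1 comparisons are not reproduced, and step size and weights are illustrative:

        # Sketch: TV-regularized deconvolution with the smoothed penalty
        # sum sqrt(g^2 + eps), minimized by gradient descent. The step size
        # must stay below 2/L of the combined objective; the defaults here
        # are small enough for this example.
        import numpy as np

        def tv_deconv(y, h, lam=0.05, eps=1e-4, step=0.05, n_iter=2000):
            n = len(y)
            conv = lambda x: np.convolve(x, h)[:n]          # forward model
            conv_T = lambda r: np.correlate(                # its adjoint
                np.concatenate([r, np.zeros(len(h) - 1)]), h, mode="valid")
            x = np.zeros(n)
            for _ in range(n_iter):
                g = np.diff(x)
                w = g / np.sqrt(g * g + eps)                # d/dg of smoothed |g|
                grad_tv = np.concatenate([[-w[0]], w[:-1] - w[1:], [w[-1]]])
                x -= step * (conv_T(conv(x) - y) + lam * grad_tv)
            return x

        rng = np.random.default_rng(0)
        x_true = np.zeros(120)
        x_true[[20, 70, 71]] = [1.0, -0.7, 0.9]
        h = np.array([0.3, 1.0, 0.4])
        y = np.convolve(x_true, h)[:120] + 0.01 * rng.standard_normal(120)
        x_rec = tv_deconv(y, h)
        print(np.round(x_rec[[20, 70, 71]], 2))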

  7. Tectonics and crustal structure of the Saurashtra peninsula: based on Gravity and Magnetic data

    NASA Astrophysics Data System (ADS)

    Mishra, A. K.; Singh, A.; Singh, U. K.

    2016-12-01

    The Saurashtra peninsula is located at the northwestern margin of the Indian shield, where it occurs as a horst block between the Kachchh, Cambay and Narmada rifts. It is important because of the occurrence of moderate earthquakes and the presence of Mesozoic sediments below the Deccan trap. The Bouguer gravity anomaly and total intensity magnetic anomaly maps of Saurashtra have delineated six circular highs of 40-60 mGal and 800-1000 nT, respectively. In order to understand the location, structure and depth of the source body, methods such as the continuous wavelet transform (CWT), Euler deconvolution and power spectrum analysis have been applied to the potential field data. The CWT and Euler deconvolution give a 16-18 km average depth for the volcanic plugs in the Junagadh and Rajula regions. From the power spectrum analysis, it is found that the average Moho depth in Saurashtra is about 36-38 km. Keeping the constraints obtained from geophysical studies such as boreholes, deep seismic surveys, receiver function analysis and geological information, combined gravity and magnetic modeling has been performed. The detailed crustal structure of the Saurashtra region has been delineated along two profiles which pass through the prominent geological features, the Junagadh and Rajula volcanic plugs, respectively.
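
    Euler deconvolution itself reduces to a small linear least squares solve per data window. A minimal sketch, assuming the field T and its gradients Tx, Ty, Tz are already gridded and a structural index N has been chosen (N = 3 for a point/dipole source):

        # Sketch: classic Euler deconvolution on a single data window. Solves
        #   x0*Tx + y0*Ty + z0*Tz + N*B = x*Tx + y*Ty + z*Tz + N*T
        # in the least squares sense for the source position (x0, y0, z0)
        # and base level B.
        import numpy as np

        def euler_window(x, y, z, T, Tx, Ty, Tz, N=3.0):
            A = np.column_stack([Tx, Ty, Tz, N * np.ones_like(T)])
            rhs = x * Tx + y * Ty + z * Tz + N * T
            sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
            return sol          # x0, y0, z0, base level B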

  8. Detecting most influencing courses on students grades using block PCA

    NASA Astrophysics Data System (ADS)

    Othman, Osama H.; Gebril, Rami Salah

    2014-12-01

    One of the modern solutions adopted in dealing with the problem of a large number of variables in statistical analyses is Block Principal Component Analysis (Block PCA). This modified technique can be used to reduce the vertical dimension (variables) of the data matrix X (n × p) by selecting a smaller number of variables (say m) containing most of the statistical information. These selected variables can then be employed in further investigations and analyses. Block PCA is an adapted multistage technique of the original PCA. It involves the application of Cluster Analysis (CA) and variable selection through sub-principal component scores (PCs). The application of Block PCA in this paper is a modified version of the original work of Liu et al. (2002). The main objective was to apply PCA to each group of variables (established using cluster analysis) instead of involving the whole large set of variables, which has proved unreliable. In this work, Block PCA is used to reduce the size of a huge data matrix ((n = 41) × (p = 251)) consisting of the Grade Point Averages (GPAs) of students in 251 courses (variables) in the Faculty of Science at Benghazi University. In other words, we construct a smaller analytical data matrix of the students' GPAs with fewer variables that contains most of the variation (statistical information) in the original database. By applying Block PCA, 12 courses were found to 'absorb' most of the variation or influence in the original data matrix, and hence are worth keeping for future statistical exploration and analytical studies. In addition, the course Independent Study (Math.) was found to be the most influential course on students' GPA among the 12 selected courses.
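
    A compact sketch of the multistage idea (not the authors' exact procedure): cluster the variables by correlation, run PCA inside each cluster, and keep the variable with the largest loading on the leading component. The data, cluster count, and selection rule below are illustrative:

        # Sketch: Block PCA style variable selection. Variables are clustered
        # by correlation; inside each cluster the variable with the largest
        # loading on the first principal component is retained.
        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage

        rng = np.random.default_rng(0)
        X = rng.standard_normal((41, 60))        # rows = students, cols = courses
        Xc = X - X.mean(axis=0)

        corr = np.corrcoef(Xc, rowvar=False)
        dist = 1.0 - np.abs(corr)                 # dissimilarity between variables
        Z = linkage(dist[np.triu_indices_from(dist, k=1)], method="average")
        labels = fcluster(Z, t=8, criterion="maxclust")

        selected = []
        for g in np.unique(labels):
            cols = np.where(labels == g)[0]
            _, _, Vt = np.linalg.svd(Xc[:, cols], full_matrices=False)
            selected.append(cols[np.argmax(np.abs(Vt[0]))])   # top-loading variable
        print("selected course indices:", selected)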

  9. Influenza A virus strains that circulate in humans differ in the ability of their NS1 proteins to block the activation of IRF3 and interferon-β transcription.

    PubMed

    Kuo, Rei-Lin; Zhao, Chen; Malur, Meghana; Krug, Robert M

    2010-12-20

    We demonstrate that influenza A virus strains that circulate in humans differ markedly in the ability of their NS1 proteins to block the activation of IRF3 and interferon-β transcription. Strong activation occurs in cells infected with viruses expressing NS1 proteins of seasonal H3N2 and H2N2 viruses, whereas activation is blocked in cells infected with viruses expressing NS1 proteins of some, but not all seasonal H1N1 viruses. The NS1 proteins of the 2009 H1N1 and H5N1 viruses also block these activations. The difference in this NS1 function is mediated largely by the C-terminal region of the effector domain, which contains the only amino acid (K or E at position 196) that covaries with the functional difference. Further, we show that TRIM25 binds the NS1 protein whether or not IRF3 activation is blocked, demonstrating that binding of TRIM25 by the NS1 protein does not necessarily lead to the blocking of IRF3 activation. Copyright © 2010 Elsevier Inc. All rights reserved.

  10. Fast Algorithms for Structured Least Squares and Total Least Squares Problems

    PubMed Central

    Kalsi, Anoop; O’Leary, Dianne P.

    2006-01-01

    We consider the problem of solving least squares problems involving a matrix M of small displacement rank with respect to two matrices Z1 and Z2. We develop formulas for the generators of the matrix M^H M in terms of the generators of M and show that the Cholesky factorization of the matrix M^H M can be computed quickly if Z1 is close to unitary and Z2 is triangular and nilpotent. These conditions are satisfied for several classes of matrices, including Toeplitz, block Toeplitz, Hankel, and block Hankel, and for matrices whose blocks have such structure. Fast Cholesky factorization enables fast solution of least squares problems, total least squares problems, and regularized total least squares problems involving these classes of matrices. PMID:27274922

  11. Fast Algorithms for Structured Least Squares and Total Least Squares Problems.

    PubMed

    Kalsi, Anoop; O'Leary, Dianne P

    2006-01-01

    We consider the problem of solving least squares problems involving a matrix M of small displacement rank with respect to two matrices Z1 and Z2. We develop formulas for the generators of the matrix M^H M in terms of the generators of M and show that the Cholesky factorization of the matrix M^H M can be computed quickly if Z1 is close to unitary and Z2 is triangular and nilpotent. These conditions are satisfied for several classes of matrices, including Toeplitz, block Toeplitz, Hankel, and block Hankel, and for matrices whose blocks have such structure. Fast Cholesky factorization enables fast solution of least squares problems, total least squares problems, and regularized total least squares problems involving these classes of matrices.
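
    The practical payoff of such structure is that the matrix never needs to be formed densely. A sketch in the same spirit, using FFT-based Toeplitz products inside an iterative least squares solver rather than the authors' fast Cholesky of M^H M (requires SciPy 1.6+ for matmul_toeplitz):

        # Sketch: Toeplitz least squares without forming M. matvec/rmatvec use
        # FFT-based Toeplitz products; the adjoint of T(c, r) is T(conj(r), conj(c)).
        import numpy as np
        from scipy.linalg import matmul_toeplitz
        from scipy.sparse.linalg import LinearOperator, lsqr

        rng = np.random.default_rng(0)
        n = 512
        c = rng.standard_normal(n)                 # first column of M
        r = rng.standard_normal(n); r[0] = c[0]    # first row of M
        b = rng.standard_normal(n)

        M = LinearOperator(
            (n, n),
            matvec=lambda x: matmul_toeplitz((c, r), x),
            rmatvec=lambda y: matmul_toeplitz((np.conj(r), np.conj(c)), y),
        )
        x = lsqr(M, b)[0]        # least squares solution, no dense M needed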

  12. Liquid chromatography with diode array detection combined with spectral deconvolution for the analysis of some diterpene esters in Arabica coffee brew.

    PubMed

    Erny, Guillaume L; Moeenfard, Marzieh; Alves, Arminda

    2015-02-01

    In this manuscript, the separation of kahweol and cafestol esters from Arabica coffee brews was investigated using liquid chromatography with a diode array detector. When detected in conjunction, cafestol and kahweol esters eluted together, but, after optimization, the kahweol esters could be selectively detected by setting the wavelength at 290 nm to allow their quantification. Such an approach was not possible for the cafestol esters, and spectral deconvolution was used to obtain deconvoluted chromatograms. In each of those chromatograms, the four esters were baseline separated, allowing for the quantification of the eight targeted compounds. Because kahweol esters could be quantified either using the chromatogram obtained by setting the wavelength at 290 nm or using the deconvoluted chromatogram, those compounds were used to compare the analytical performances. Slightly better limits of detection were obtained using the deconvoluted chromatogram. Identical concentrations were found in a real sample with both approaches. The peak areas in the deconvoluted chromatograms were repeatable (intraday repeatability of 0.8%, interday repeatability of 1.0%). This work demonstrates the accuracy of spectral deconvolution when using liquid chromatography to mathematically separate coeluting compounds using the full spectra recorded by a diode array detector. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. From spinning conformal blocks to matrix Calogero-Sutherland models

    NASA Astrophysics Data System (ADS)

    Schomerus, Volker; Sobko, Evgeny

    2018-04-01

    In this paper we develop further the relation between conformal four-point blocks involving external spinning fields and Calogero-Sutherland quantum mechanics with matrix-valued potentials. To this end, the analysis of [1] is extended to arbitrary dimensions and to the case of boundary two-point functions. In particular, we construct the potential for any set of external tensor fields. Some of the resulting Schrödinger equations are mapped explicitly to the known Casimir equations for 4-dimensional seed conformal blocks. Our approach furnishes solutions of Casimir equations for external fields of arbitrary spin and dimension in terms of functions on the conformal group. This allows us to reinterpret standard operations on conformal blocks in terms of group-theoretic objects. In particular, we shall discuss the relation between the construction of spinning blocks in any dimension through differential operators acting on seed blocks and the action of left/right invariant vector fields on the conformal group.

  14. Calculating Path-Dependent Travel Time Prediction Variance and Covariance from a Global Tomographic P-Velocity Model

    NASA Astrophysics Data System (ADS)

    Ballard, S.; Hipp, J. R.; Encarnacao, A.; Young, C. J.; Begnaud, M. L.; Phillips, W. S.

    2012-12-01

    Seismic event locations can be made more accurate and precise by computing predictions of seismic travel time through high fidelity 3D models of the wave speed in the Earth's interior. Given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we describe a methodology for accomplishing this by exploiting the full model covariance matrix and show examples of path-dependent travel time prediction uncertainty computed from SALSA3D, our global, seamless 3D tomographic P-velocity model. Typical global 3D models have on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB storage for 1/2 of a symmetric matrix, necessitating an out-of-core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G, which includes Tikhonov regularization terms) is multiplied by its transpose (G^T G) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (G^T G)^-1 by assigning blocks to individual processing nodes for matrix decomposition update and scaling operations. We first find the Cholesky decomposition of G^T G, which is subsequently inverted. Next, we employ OOC matrix multiplication methods to calculate the model covariance matrix from (G^T G)^-1 and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by summing the model covariance along both ray paths. Setting the paths equal and taking the square root yields the travel time prediction uncertainty for a single path.
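
    The last step, turning a model covariance matrix into a path-dependent travel time uncertainty, is a pair of quadratic forms. A toy-scale sketch, where a small dense covariance stands in for the paper's out-of-core blocked matrix and the path sensitivity vectors are hypothetical:

        # Sketch: travel time prediction variance/covariance from a model
        # covariance matrix C and ray-path sensitivity (path-length) vectors.
        import numpy as np

        def travel_time_cov(C, g1, g2):
            """Covariance between the predicted travel times of two ray paths."""
            return g1 @ C @ g2

        rng = np.random.default_rng(0)
        m = 100                            # tiny stand-in for ~5e5 model nodes
        A = rng.standard_normal((m, m))
        C = A @ A.T / m                    # symmetric positive definite covariance
        g = np.zeros(m)
        g[[3, 17, 42]] = [5.0, 8.0, 2.5]   # path lengths of one ray in three nodes

        sigma = np.sqrt(travel_time_cov(C, g, g))   # single-path uncertainty
        print("travel time uncertainty:", sigma)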

  15. A novel optimised and validated method for analysis of multi-residues of pesticides in fruits and vegetables by microwave-assisted extraction (MAE)-dispersive solid-phase extraction (d-SPE)-retention time locked (RTL)-gas chromatography-mass spectrometry with Deconvolution reporting software (DRS).

    PubMed

    Satpathy, Gouri; Tyagi, Yogesh Kumar; Gupta, Rajinder Kumar

    2011-08-01

    A rapid, effective and ecofriendly method for the sensitive screening and quantification of 72 pesticide residues in fruits and vegetables, by microwave-assisted extraction (MAE) followed by dispersive solid-phase extraction (d-SPE) and retention time locked (RTL) capillary gas chromatographic separation with trace ion mode mass spectrometric determination, has been validated as per ISO/IEC 17025:2005. Identification and reporting with total and extracted ion chromatograms were facilitated to a great extent by Deconvolution reporting software (DRS). For all compounds, LODs were 0.002-0.02 mg/kg and LOQs were 0.025-0.100 mg/kg. Correlation coefficients of the calibration curves in the range of 0.025-0.50 mg/kg were >0.993. To validate matrix effects, repeatability, reproducibility, recovery and overall uncertainty were calculated for the 35 matrices at 0.025, 0.050 and 0.100 mg/kg. Recovery ranged between 72% and 114% with RSDs of <20% for repeatability and intermediate precision. The reproducibility of the method was evaluated by interlaboratory participation, and Z scores were obtained within ±2. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. A parallel computer implementation of fast low-rank QR approximation of the Biot-Savart law

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, D A; Fasenfest, B J; Stowell, M L

    2005-11-07

    In this paper we present a low-rank QR method for evaluating the discrete Biot-Savart law on parallel computers. It is assumed that the known current density and the unknown magnetic field are both expressed in a finite element expansion, and we wish to compute the degrees-of-freedom (DOF) in the basis function expansion of the magnetic field. The matrix that maps the current DOF to the field DOF is full, but if the spatial domain is properly partitioned the matrix can be written as a block matrix, with blocks representing distant interactions being low rank and having a compressed QR representation. The matrix partitioning is determined by the number of processors; the rank of each block (i.e. the compression) is determined by the specific geometry and is computed dynamically. In this paper we provide the algorithmic details and present computational results for large-scale computations.
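
    The block compression step can be sketched with a truncated column-pivoted QR: keep only the leading factors the pivoted factorization identifies, then apply the block in factored form. Sizes, tolerance, and the synthetic low-rank block below are illustrative:

        # Sketch: compress a far-field interaction block with truncated pivoted
        # QR, then apply it in low-rank form, O(k(m+n)) work instead of O(mn).
        import numpy as np
        from scipy.linalg import qr

        rng = np.random.default_rng(0)
        u = rng.standard_normal((300, 4)); v = rng.standard_normal((4, 200))
        B = u @ v + 1e-10 * rng.standard_normal((300, 200))  # numerically low rank

        Q, R, piv = qr(B, mode="economic", pivoting=True)    # B[:, piv] = Q R
        tol = 1e-8 * abs(R[0, 0])
        k = int(np.sum(np.abs(np.diag(R)) > tol))            # numerical rank
        Qk, Rk = Q[:, :k], R[:k, :]

        x = rng.standard_normal(200)
        y_lowrank = Qk @ (Rk @ x[piv])                       # apply compressed block
        assert np.allclose(y_lowrank, B @ x, atol=1e-6)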

  17. Effect of an alpha-blocker (Nicergoline) and of a beta-blocker (Acebutolol) on the in vitro biosynthesis of vascular extracellular matrix.

    PubMed

    Moczar, M; Robert, A M; Jacotot, B; Robert, L

    2001-05-01

    The effect of an alpha-blocking agent and of a beta-blocking agent on the biosynthesis of extracellular matrix macromolecules of the arterial wall was investigated. Rabbit aorta explants were cultured for up to 48 hours with radioactive proline, lysine or glucosamine. In the presence of these drugs, at a concentration shown to be effective for the inhibition of platelet-endothelial cell interactions (10^-7 M), the incorporation of 14C-proline in total macromolecular proline was higher than in macromolecular hydroxyproline, suggesting a relatively higher rate of biosynthesis of non-collagenous proteins as compared to collagens. The alpha-blocker increased the incorporation of 14C-proline in collagenous and non-collagenous proteins after 18 hours of incubation. The beta-blocker also increased the incorporation of proline in macromolecular proline and hydroxyproline as compared to control cultures. Both drugs increased the incorporation of 3H-glucosamine in newly synthesised glycosaminoglycans: the beta-blocker mainly increased the neosynthesis of heparan sulphate, the alpha-blocker that of hyaluronan. The incorporation of 14C-lysine in crosslinked, insoluble elastin was not modified. These experiments confirm that alpha- and beta-blocking agents can influence not only the tonus of aortic smooth muscle cells but also the relative rates of biosynthesis of extracellular matrix macromolecules. This effect should be taken into consideration for the evaluation of the long-range effects of alpha- and beta-blocking drugs on the vascular wall.

  18. Salvia miltiorrhiza extract inhibits TPA-induced MMP-9 expression and invasion through the MAPK/AP-1 signaling pathway in human breast cancer MCF-7 cells

    PubMed Central

    Kim, Jeong-Mi; Noh, Eun-Mi; Song, Hyun-Kyung; Lee, Minok; Lee, Soo Ho; Park, Sueng Hyuk; Ahn, Chan-Keun; Lee, Guem-San; Byun, Eui-Baek; Jang, Beom-Su; Kwon, Kang-Beom; Lee, Young-Rae

    2017-01-01

    Cancer cell invasion is crucial for metastasis. A major factor in the capacity of cancer cell invasion is the activation of matrix metalloproteinase-9 (MMP-9), which degrades the extracellular matrix. Salvia miltiorrhiza has been used to promote blood circulation and remove blood stasis. Numerous previous studies have demonstrated that S. miltiorrhiza extracts (SME) decrease lipid levels and inhibit inflammation. However, the mechanism behind the effect of SME on breast cancer invasion has not been identified. The inhibitory effects of SME on 12-O-tetradecanoylphorbol-13-acetate (TPA)-induced MMP-9 expression were assessed using western blotting, reverse transcription-quantitative polymerase chain reaction and zymography assays. MMP-9 upstream signal proteins, including mitogen-activated protein kinases and activator protein 1 (AP-1), were also investigated. Cell invasion was assessed using a matrigel invasion assay. The present study demonstrated the inhibitory effects of the SME ethanol solution on MMP-9 expression and cell invasion in TPA-treated MCF-7 breast cancer cells. SME suppressed TPA-induced MMP-9 expression and MCF-7 cell invasion by blocking the transcriptional activation of AP-1. SME may possess therapeutic potential for inhibiting breast cancer cell invasiveness. PMID:28927117

  19. Salvia miltiorrhiza extract inhibits TPA-induced MMP-9 expression and invasion through the MAPK/AP-1 signaling pathway in human breast cancer MCF-7 cells.

    PubMed

    Kim, Jeong-Mi; Noh, Eun-Mi; Song, Hyun-Kyung; Lee, Minok; Lee, Soo Ho; Park, Sueng Hyuk; Ahn, Chan-Keun; Lee, Guem-San; Byun, Eui-Baek; Jang, Beom-Su; Kwon, Kang-Beom; Lee, Young-Rae

    2017-09-01

    Cancer cell invasion is crucial for metastasis. A major factor in the capacity of cancer cell invasion is the activation of matrix metalloproteinase-9 (MMP-9), which degrades the extracellular matrix. Salvia miltiorrhiza has been used to promote blood circulation and remove blood stasis. Numerous previous studies have demonstrated that S. miltiorrhiza extracts (SME) decrease lipid levels and inhibit inflammation. However, the mechanism behind the effect of SME on breast cancer invasion has not been identified. The inhibitory effects of SME on 12-O-tetradecanoylphorbol-13-acetate (TPA)-induced MMP-9 expression were assessed using western blotting, reverse transcription-quantitative polymerase chain reaction and zymography assays. MMP-9 upstream signal proteins, including mitogen-activated protein kinases and activator protein 1 (AP-1), were also investigated. Cell invasion was assessed using a matrigel invasion assay. The present study demonstrated the inhibitory effects of the SME ethanol solution on MMP-9 expression and cell invasion in TPA-treated MCF-7 breast cancer cells. SME suppressed TPA-induced MMP-9 expression and MCF-7 cell invasion by blocking the transcriptional activation of AP-1. SME may possess therapeutic potential for inhibiting breast cancer cell invasiveness.

  20. Calibration of Wide-Field Deconvolution Microscopy for Quantitative Fluorescence Imaging

    PubMed Central

    Lee, Ji-Sook; Wee, Tse-Luen (Erika); Brown, Claire M.

    2014-01-01

    Deconvolution enhances contrast in fluorescence microscopy images, especially in low-contrast, high-background wide-field microscope images, improving characterization of features within the sample. Deconvolution can also be combined with other imaging modalities, such as confocal microscopy, and most software programs seek to improve resolution as well as contrast. Quantitative image analyses require instrument calibration and with deconvolution, necessitate that this process itself preserves the relative quantitative relationships between fluorescence intensities. To ensure that the quantitative nature of the data remains unaltered, deconvolution algorithms need to be tested thoroughly. This study investigated whether the deconvolution algorithms in AutoQuant X3 preserve relative quantitative intensity data. InSpeck Green calibration microspheres were prepared for imaging, z-stacks were collected using a wide-field microscope, and the images were deconvolved using the iterative deconvolution algorithms with default settings. Afterwards, the mean intensities and volumes of microspheres in the original and the deconvolved images were measured. Deconvolved data sets showed higher average microsphere intensities and smaller volumes than the original wide-field data sets. In original and deconvolved data sets, intensity means showed linear relationships with the relative microsphere intensities given by the manufacturer. Importantly, upon normalization, the trend lines were found to have similar slopes. In original and deconvolved images, the volumes of the microspheres were quite uniform for all relative microsphere intensities. We were able to show that AutoQuant X3 deconvolution software data are quantitative. In general, the protocol presented can be used to calibrate any fluorescence microscope or image processing and analysis procedure. PMID:24688321

  1. An integrated environmental tracer approach to characterizing groundwater circulation in a mountain block

    USGS Publications Warehouse

    Manning, Andrew H.; Solomon, D. Kip

    2005-01-01

    The subsurface transfer of water from a mountain block to an adjacent basin (mountain block recharge (MBR)) is a commonly invoked mechanism of recharge to intermountain basins. However, MBR estimates are highly uncertain. We present an approach to characterize bulk fluid circulation in a mountain block and thus MBR that utilizes environmental tracers from the basin aquifer. Noble gas recharge temperatures, groundwater ages, and temperature data combined with heat and fluid flow modeling are used to identify clearly improbable flow regimes in the southeastern Salt Lake Valley, Utah, and adjacent Wasatch Mountains. The range of possible MBR rates is reduced by 70%. Derived MBR rates (5.5–12.6 × 10^4 m^3 d^-1) are on the same order of magnitude as previous large estimates, indicating that significant MBR to intermountain basins is plausible. However, derived rates are 50–100% of the lowest previous estimate, meaning total recharge is probably less than previously thought.

  2. Gamma-Ray Simulated Spectrum Deconvolution of a LaBr₃ 1-in. x 1-in. Scintillator for Nondestructive ATR Fuel Burnup On-Site Predictions

    DOE PAGES

    Navarro, Jorge; Ring, Terry A.; Nigg, David W.

    2015-03-01

    A deconvolution method for a 1-in. x 1-in. LaBr₃ detector for nondestructive Advanced Test Reactor (ATR) fuel burnup applications was developed. The method consisted of obtaining the detector response function, applying a deconvolution algorithm to simulated 1-in. x 1-in. LaBr₃ data, and evaluating the effects that deconvolution has on nondestructively determining ATR fuel burnup. The simulated response function of the detector was obtained using MCNPX as well as from experimental data. The Maximum-Likelihood Expectation Maximization (MLEM) deconvolution algorithm was selected to enhance single-isotope source-simulated and fuel-simulated spectra. The final evaluation of the study consisted of measuring the performance of the fuel burnup calibration curve for the convolved and deconvolved cases. The methodology was developed to help design a reliable, high resolution, rugged and robust detection system for the ATR fuel canal capable of collecting high performance data for model validation, along with a system that can calculate burnup using experimental scintillator detector data.
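
    The MLEM update itself is compact: forward-project the current estimate through the detector response, compare with the measured spectrum, and back-project the ratio. A minimal sketch, assuming a known response matrix R (the actual MCNPX-derived response is not reproduced; the Gaussian blur below is a synthetic stand-in):

        # Sketch: MLEM deconvolution of a pulse-height spectrum. Column j of R
        # is the detector response to unit emission in energy bin j.
        import numpy as np

        def mlem(measured, R, n_iter=200):
            x = np.ones(R.shape[1])              # flat nonnegative start
            sens = R.sum(axis=0)                 # per-bin sensitivity
            for _ in range(n_iter):
                proj = R @ x
                ratio = np.where(proj > 0, measured / proj, 0.0)
                x *= (R.T @ ratio) / np.maximum(sens, 1e-12)
            return x

        E = np.arange(100)
        R = np.exp(-0.5 * ((E[:, None] - E[None, :]) / 3.0) ** 2)  # blur response
        truth = np.zeros(100); truth[[30, 60]] = [1000.0, 400.0]
        est = mlem(R @ truth, R)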

  3. Seismic interferometry by multidimensional deconvolution as a means to compensate for anisotropic illumination

    NASA Astrophysics Data System (ADS)

    Wapenaar, K.; van der Neut, J.; Ruigrok, E.; Draganov, D.; Hunziker, J.; Slob, E.; Thorbecke, J.; Snieder, R.

    2008-12-01

    It is well-known that under specific conditions the crosscorrelation of wavefields observed at two receivers yields the impulse response between these receivers. This principle is known as 'Green's function retrieval' or 'seismic interferometry'. Recently it has been recognized that in many situations it can be advantageous to replace the correlation process by deconvolution. One of the advantages is that deconvolution compensates for the waveform emitted by the source; another advantage is that it is not necessary to assume that the medium is lossless. The approaches that have been developed to date employ a 1D deconvolution process. We propose a method for seismic interferometry by multidimensional deconvolution and show that under specific circumstances the method compensates for irregularities in the source distribution. This is an important difference with crosscorrelation methods, which rely on the condition that waves are equipartitioned. This condition is for example fulfilled when the sources are regularly distributed along a closed surface and the power spectra of the sources are identical. The proposed multidimensional deconvolution method compensates for anisotropic illumination, without requiring knowledge about the positions and the spectra of the sources.
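
    In practice the multidimensional deconvolution is carried out per frequency as a regularized matrix inversion rather than the trace-by-trace 1D division of conventional schemes. A minimal sketch, with generic names (D for the observed wavefield, V for the reference wavefield) and Tikhonov damping as the assumed regularization:

        # Sketch: multidimensional deconvolution at one frequency, as a damped
        # least squares inversion X = D V^H (V V^H + eps*I)^(-1). D and V are
        # generic (receivers x sources) matrices at a single frequency.
        import numpy as np

        def mdd_per_frequency(D, V, eps=1e-3):
            VVh = V @ V.conj().T
            damp = eps * (np.trace(VVh).real / VVh.shape[0]) * np.eye(VVh.shape[0])
            return D @ V.conj().T @ np.linalg.inv(VVh + damp)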

  4. Reducing computational costs in large scale 3D EIT by using a sparse Jacobian matrix with block-wise CGLS reconstruction.

    PubMed

    Yang, C L; Wei, H Y; Adler, A; Soleimani, M

    2013-06-01

    Electrical impedance tomography (EIT) is a fast and cost-effective technique to provide a tomographic conductivity image of a subject from boundary current-voltage data. This paper proposes a time and memory efficient method for solving a large scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. A 3D EIT system with a large number of measurement data can produce a large Jacobian matrix; this can cause difficulties in computer storage and in the inversion process. One of the challenges in 3D EIT is to decrease reconstruction time and memory usage while retaining image quality. Firstly, a sparse matrix reduction technique is proposed that uses thresholding to set very small values of the Jacobian matrix to zero. By converting the Jacobian matrix into a sparse format, the zero elements are eliminated, which results in a saving of memory. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. A sparse Jacobian with block-wise CG enables the large scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of sparse matrix reduction on the reconstruction results.
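
    The thresholding step is straightforward to sketch; here the paper's block-wise parallel CG is replaced by an off-the-shelf damped least squares solver (LSQR), and all sizes and the threshold are illustrative:

        # Sketch: threshold a Jacobian into sparse form, then solve the damped
        # least squares update with LSQR (standing in for block-wise CG).
        import numpy as np
        from scipy.sparse import csr_matrix
        from scipy.sparse.linalg import lsqr

        rng = np.random.default_rng(0)
        J = rng.standard_normal((400, 1000)) * (rng.random((400, 1000)) < 0.02)
        J_sparse = csr_matrix(np.where(np.abs(J) > 1e-3, J, 0.0))  # drop tiny entries

        dv = rng.standard_normal(400)               # boundary voltage changes
        dsigma = lsqr(J_sparse, dv, damp=0.1)[0]    # conductivity update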

  5. Malvinas Current variations: dissipation of mesoscale activity over the Malvinas Plateau and recurrent blocking events in the Argentine Basin.

    NASA Astrophysics Data System (ADS)

    Provost, C.; Artana, C.; Ferrari, R.; Koenig, Z.; Saraceno, M.; Piola, A. R.

    2016-12-01

    The Malvinas Current (MC) is an offshoot of the Antarctic Circumpolar Current (ACC). Downstream of Drake Passage, the northern fronts of the ACC veer northward, cross over the North Scotia Ridge (NSR) and the Malvinas Plateau and enter the Argentine Basin. We investigate the variations of the MC circulation between the NSR and 41°S and their possible relations with the ACC circulation using data from Argo floats and satellite altimetry. The data depict meandering and eddy-shedding of the northern ACC jets as they cross the NSR. The satellite fields (altimetry and high resolution sea surface temperature images) show that these eddies are trapped, break down and dissipate over the Malvinas Plateau, suggesting that this region is a hot spot for dissipation of mesoscale variability. Variations of sea level anomalies (SLA) across the NSR do not impact the MC further north, except for intra-seasonal variability associated with coastal trapped waves. Altimetry and float trajectories show events during which a large fraction of the MC is cut off from the ACC. During these blocking events, the MC does not collapse as a robust cyclonic cell is established to the north of the cut-off. The MC becomes the western boundary current of the cell and small cyclonic eddies locally reinforce the circulation. Blocking events at around 48.5°S are a recurrent feature of the MC circulation. Over the 23 year altimetry record, we detected 26 events during which the MC surface transport at 48.5°S was reduced to less than half its long term mean. Blocking events last from 10 to 35 days and do not present any significant trend. These events were tracked back to positive SLA that built up over the Argentine Abyssal Plain.

  6. Iterative and function-continuation Fourier deconvolution methods for enhancing mass spectrometer resolution

    NASA Technical Reports Server (NTRS)

    Ioup, J. W.; Ioup, G. E.; Rayborn, G. H., Jr.; Wood, G. M., Jr.; Upchurch, B. T.

    1984-01-01

    Mass spectrometer data in the form of ion current versus mass-to-charge ratio often include overlapping mass peaks, especially in low- and medium-resolution instruments. Numerical deconvolution of such data effectively enhances the resolution by decreasing the overlap of mass peaks. In this paper two approaches to deconvolution are presented: a function-domain iterative technique and a Fourier transform method which uses transform-domain function-continuation. Both techniques include data smoothing to reduce the sensitivity of the deconvolution to noise. The efficacy of these methods is demonstrated through application to representative mass spectrometer data and the deconvolved results are discussed and compared to data obtained from a spectrometer with sufficient resolution to achieve separation of the mass peaks studied. A case for which the deconvolution is seriously affected by Gibbs oscillations is analyzed.
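
    Of the two approaches, the function-domain iterative technique is the easier to sketch; the classic Van Cittert iteration below, with a light pre-smoothing to reduce noise sensitivity and assuming a unit-sum peak-shape kernel h, is representative of the family rather than the authors' exact scheme:

        # Sketch: function-domain iterative (Van Cittert) deconvolution,
        #   x_{k+1} = x_k + relax * (y - h * x_k),
        # with pre-smoothing of the data to damp noise amplification.
        import numpy as np

        def van_cittert(y, h, n_iter=50, relax=1.0):
            y_s = np.convolve(y, [0.25, 0.5, 0.25], mode="same")  # smooth data
            x = y_s.copy()
            for _ in range(n_iter):
                x = x + relax * (y_s - np.convolve(x, h, mode="same"))
            return x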

  7. Eclogitic breccia from the Monviso ophiolite complex: new field and petrographic data

    NASA Astrophysics Data System (ADS)

    Locatelli, Michele; Verlaguet, Anne; Federico, Laura; Agard, Philippe

    2015-04-01

    The Monviso meta-ophiolite complex (Northern Italy, Western Alps) represents a coherent portion of oceanic lithosphere metamorphosed under eclogite facies conditions during the Alpine orogeny (2.6 GPa - 550°C, Lago Superiore Unit, Angiboust et al., 2011), and exhibits from bottom to top a thick serpentinite sole locally capped by metasediments, Mg-Al-rich metagabbros, then Fe-Ti-metagabbros capped by metabasalts. This section is disrupted by three main shear zones. Our study focusses on the Lower Shear Zone (LSZ), situated between the serpentinite sole (to the East) and the Mg-metagabbro bodies (to the West), and composed of blocks of both Fe-Ti and Mg-Al metagabbros embedded in a talc and tremolite-rich serpentinite matrix. Among these blocks, some were described as eclogitic breccias and interpreted as the result of a seismic rupture plane (Angiboust et al., 2012). These breccias correspond to blocks of Fe-Ti-metagabbros that were brecciated in eclogitic facies conditions (as attested by the omphacite + garnet ± lawsonite cement of the breccia) in a fluid-rich environment, as suggested by the abundance of lawsonite in the cement. Here we present new field data on the distribution and petrographic characterization of these eclogitic blocks in the LSZ. The aim of this work is twofold: (I) detailed mapping of the eclogitic block distribution along the LSZ, in order to determine precisely the extent and representativity of the breccias and (II) characterization of the brecciated blocks, at the outcrop scale, to explore the brecciation processes and structures. Between Pian del Re and Colle di Luca localities, the occurrence of eclogite blocks is uniform along the strike of the shear-zone, resulting in a 16 km-long belt of outcropping eclogitic bodies embedded in serpentinite matrix. The shear-zone width, by contrast, varies from 1.3 km to 0.8 km. Three types of eclogitic blocks can be distinguished: (1) intact (i.e., not brecciated) blocks of Fe-Ti-metagabbros restricted to the lower part of the shear zone, close to the serpentinite sole; (2) numerous brecciated Fe-Ti-metagabbros scattered in the intermediate to upper levels of the LSZ; (3) blocks showing compositional variations and complex structures, with boudins of intact Fe-Ti-metagabbros embedded in highly foliated and folded Mg-rich rocks bounded on one side by Fe-Ti-breccia planes. In some cases the full transition from intact to highly brecciated rock is recorded in the same block. Here, the contacts between intact metagabbros and breccia are characterized by about 1m-wide zones of non rotated clasts with diameter up to 80 cm, almost matrix-absent. The amount of matrix vs clast increases, associated with a reduction in the clast size and increasing clast rotation, over a few meters up to the end of the bodies. These particular blocks give us a unique opportunity to better characterize the brecciation processes. Different kinds of measurements were realized on the brecciated blocks: (1) block size, (2) clasts vs. matrix relative volumetric abundances, (3) dimension and shape ratio of clasts, and angle of misorientation between their elongation axis or internal foliations (for five selected blocks). Preliminary results show that the majority (82%) of mapped blocks have a diameter of less than 10 meters, with only 8% being larger than 20 meters. In the brecciated Fe-Ti gabbros the average content of matrix is 28%, while for blocks showing compositional variation it varies from zero to 30%. 
The angle of misorientation between clast foliations, by contrast, shows a chaotic distribution. Preliminary field data thus demonstrate that breccia blocks must be considered a persistent feature along the LSZ rather than an exception, and that further work is needed to determine whether they formed through pervasive brecciation (potentially in multiple events) or through a localized event and were later disrupted by ductile deformation along the LSZ.

  8. Novel mechanism of antibodies to hepatitis B virus in blocking viral particle release from cells.

    PubMed

    Neumann, Avidan U; Phillips, Sandra; Levine, Idit; Ijaz, Samreen; Dahari, Harel; Eren, Rachel; Dagan, Shlomo; Naoumov, Nikolai V

    2010-09-01

    Antibodies are thought to exert antiviral activities by blocking viral entry into cells and/or accelerating viral clearance from circulation. In particular, antibodies to hepatitis B virus (HBV) surface antigen (HBsAg) confer protection by binding circulating virus. Here, we used mathematical modeling to gain information about viral dynamics during and after single or multiple infusions of a combination of two human monoclonal anti-HBs (HepeX-B) antibodies in patients with chronic hepatitis B. The antibody HBV-17 recognizes a conformational epitope, whereas antibody HBV-19 recognizes a linear epitope on the HBsAg. The kinetic profiles of the decline of serum HBV DNA and HBsAg revealed partial blocking of virion release from infected cells as a new antiviral mechanism, in addition to acceleration of HBV clearance from the circulation. We then replicated this approach in vitro, using cells secreting HBsAg, and compared the results with the predictions of the mathematical modeling obtained from the in vivo kinetics. In vitro, HepeX-B treatment of HBsAg-producing cells showed cellular uptake of antibodies, resulting in intracellular accumulation of viral particles. Blocking of HBsAg secretion also continued after HepeX-B was removed from the cell culture supernatants. These results identify a novel antiviral mechanism of antibodies to HBsAg (anti-HBs), involving prolonged blocking of the release of HBV virions and HBsAg subviral particles from infected cells. This may have implications for designing new therapies for patients with chronic HBV infection and may also be relevant in other viral infections.
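
    The kinetic reasoning above can be made concrete with the standard viral-dynamics balance between production and clearance. The following is a minimal illustrative sketch, not the authors' model: eps stands for the fractional blocking of virion release, c for the serum clearance rate, and all names and values are assumptions for illustration.

        import numpy as np

        def serum_virus(t, v0, p, c, eps):
            """Closed-form solution of dV/dt = (1 - eps) * p - c * V.

            Blocking release (eps > 0) lowers the steady state (1 - eps) * p / c,
            while faster clearance (larger c) steepens the initial decline --
            the two mechanisms leave different kinetic signatures in V(t).
            """
            v_ss = (1.0 - eps) * p / c
            return v_ss + (v0 - v_ss) * np.exp(-c * t)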

  9. Parsimonious Charge Deconvolution for Native Mass Spectrometry

    PubMed Central

    2018-01-01

    Charge deconvolution infers the mass from mass over charge (m/z) measurements in electrospray ionization mass spectra. When applied over a wide input m/z or broad target mass range, charge-deconvolution algorithms can produce artifacts, such as false masses at one-half or one-third of the correct mass. Indeed, a maximum entropy term in the objective function of MaxEnt, the most commonly used charge deconvolution algorithm, favors a deconvolved spectrum with many peaks over one with fewer peaks. Here we describe a new “parsimonious” charge deconvolution algorithm that produces fewer artifacts. The algorithm is especially well-suited to high-resolution native mass spectrometry of intact glycoproteins and protein complexes. Deconvolution of native mass spectra poses special challenges due to salt and small molecule adducts, multimers, wide mass ranges, and fewer and lower charge states. We demonstrate the performance of the new deconvolution algorithm on a range of samples. On the heavily glycosylated plasma properdin glycoprotein, the new algorithm could deconvolve monomer and dimer simultaneously and, when focused on the m/z range of the monomer, gave accurate and interpretable masses for glycoforms that had previously been analyzed manually using m/z peaks rather than deconvolved masses. On therapeutic antibodies, the new algorithm facilitated the analysis of extensions, truncations, and Fab glycosylation. The algorithm facilitates the use of native mass spectrometry for the qualitative and quantitative analysis of protein and protein assemblies. PMID:29376659
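
    The elementary relation that any ESI charge-deconvolution algorithm inverts is the mapping from a neutral mass to its series of protonated m/z peaks. A minimal sketch (proton mass only; adducts, isotopes and the paper's parsimony objective are omitted):

        PROTON = 1.007276  # Da

        def neutral_mass(mz, z):
            """Neutral mass implied by an m/z peak carrying z protons."""
            return z * (mz - PROTON)

        def peak_series(mass, charges):
            """Predicted m/z peaks for a candidate neutral mass."""
            return [(mass + z * PROTON) / z for z in charges]

        # A parsimonious deconvolution searches for the smallest set of masses
        # whose combined peak series explains the observed spectrum, which is
        # what suppresses the half-mass and third-mass artifacts described above.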

  10. Broadband ion mobility deconvolution for rapid analysis of complex mixtures.

    PubMed

    Pettit, Michael E; Brantley, Matthew R; Donnarumma, Fabrizio; Murray, Kermit K; Solouki, Touradj

    2018-05-04

    High resolving power ion mobility (IM) allows for accurate characterization of complex mixtures in high-throughput IM mass spectrometry (IM-MS) experiments. We previously demonstrated that pure component IM-MS data can be extracted from IM unresolved post-IM/collision-induced dissociation (CID) MS data using automated ion mobility deconvolution (AIMD) software [Matthew Brantley, Behrooz Zekavat, Brett Harper, Rachel Mason, and Touradj Solouki, J. Am. Soc. Mass Spectrom., 2014, 25, 1810-1819]. In our previous reports, we utilized a quadrupole ion filter for m/z-isolation of IM unresolved monoisotopic species prior to post-IM/CID MS. Here, we utilize a broadband IM-MS deconvolution strategy to remove the m/z-isolation requirement for successful deconvolution of IM unresolved peaks. Broadband data collection has throughput and multiplexing advantages; hence, elimination of the ion isolation step reduces experimental run times and thus expands the applicability of AIMD to high-throughput bottom-up proteomics. We demonstrate broadband IM-MS deconvolution of two separate and unrelated pairs of IM unresolved isomers (viz., a pair of isomeric hexapeptides and a pair of isomeric trisaccharides) in a simulated complex mixture. Moreover, we show that broadband IM-MS deconvolution improves high-throughput bottom-up characterization of a proteolytic digest of rat brain tissue. To our knowledge, this manuscript is the first to report successful deconvolution of pure component IM and MS data from an IM-assisted data-independent analysis (DIA) or HDMSE dataset.

  11. Telemedicine optoelectronic biomedical data processing system

    NASA Astrophysics Data System (ADS)

    Prosolovska, Vita V.

    2010-08-01

    The telemedicine optoelectronic biomedical data processing system is designed to share medical information for health-status monitoring and to enable a timely, rapid response to crises. The system includes the following main blocks: a bioprocessor, analog-to-digital converters for biomedical images, an optoelectronic module for image processing, an optoelectronic module for parallel recording and storage of biomedical images, and a matrix screen for displaying biomedical images. The rated temporal characteristics of the blocks are defined by the triggering optoelectronic couple used in the analog-to-digital converters and by the imaging time of the matrix screen. The element base for the hardware implementation of the developed matrix screen consists of integrated optoelectronic couples produced by selective epitaxy.

  12. Identification of tectonically controlled serpentinite intrusion: Examples from Franciscan serpentinites, Gorda, California

    NASA Astrophysics Data System (ADS)

    Hirauchi, K.

    2006-12-01

    Serpentinite bodies that occur zonally as components of fault zones, without any association with ophiolitic rocks, may be mantle in origin, tectonically intruded from considerable depth. Typical occurrences of serpentinites that experienced an emplacement process distinct from that of the surrounding rocks are found at Sand Dollar Beach, Gorda, California. The serpentinite bodies crop out widely in the Franciscan Complex. All the serpentinites exhibit a block-in-matrix fabric, the blocks of which are classified into either massive or schistose types. The former retains relict minerals such as olivine, orthopyroxene, clinopyroxene and chromian spinel, and has serpentine minerals (lizardite and chrysotile) of mesh texture and bastite. The latter is characterized by ribbon textures, i.e., ductilely deformed mesh textures. The matrix is composed of aligned tabular lizardite penetrating into the interior core of the blocks. The schistosities in the blocks and the attitude of the foliated matrix are both consistent with the elongation direction of the larger serpentinite bodies. The massive mesh textures are converted into schistose ribbon textures by ductile deformation, and are further penetrated by the tabular lizardite of the matrix. This series of continuous deformation and recrystallization may have occurred along a deep regional fault zone, after partial serpentinization in the lower crust and upper mantle.

  13. Synthetic Division and Matrix Factorization

    ERIC Educational Resources Information Center

    Barabe, Samuel; Dubeau, Franc

    2007-01-01

    Synthetic division is viewed as a change of basis for polynomials written under the Newton form. Then, the transition matrices obtained from a sequence of changes of basis are used to factorize the inverse of a bidiagonal matrix or a block bidiagonal matrix.
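
    As a concrete reminder of the operation the article builds on, here is a minimal sketch of synthetic division (Horner's scheme) for dividing p(x) by (x - c); the example polynomial is illustrative:

        def synthetic_division(coeffs, c):
            """Divide p(x) by (x - c); coeffs are ordered from highest degree.

            Returns (quotient coefficients, remainder); the remainder equals
            p(c) by the remainder theorem.
            """
            out = [coeffs[0]]
            for a in coeffs[1:]:
                out.append(a + c * out[-1])
            return out[:-1], out[-1]

        # p(x) = x^3 - 6x^2 + 11x - 6 divided by (x - 1):
        q, r = synthetic_division([1, -6, 11, -6], 1)
        print(q, r)  # [1, -5, 6] 0, i.e. p(x) = (x - 1)(x^2 - 5x + 6)

    Each division by an (x - c) factor is one such change of basis; chaining several of them yields the transition matrices that the article assembles into the factorization of the inverse of a (block) bidiagonal matrix.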

  14. Using deconvolution to improve the metrological performance of the grid method

    NASA Astrophysics Data System (ADS)

    Grédiac, Michel; Sur, Frédéric; Badulescu, Claudiu; Mathias, Jean-Denis

    2013-06-01

    The use of various deconvolution techniques to enhance strain maps obtained with the grid method is addressed in this study. Since phase derivative maps obtained with the grid method can be approximated by their actual counterparts convolved by the envelope of the kernel used to extract phases and phase derivatives, non-blind restoration techniques can be used to perform deconvolution. Six deconvolution techniques are presented and employed to restore a synthetic phase derivative map, namely direct deconvolution, regularized deconvolution, the Richardson-Lucy algorithm and Wiener filtering, the last two with two variants concerning their practical implementations. Obtained results show that the noise that corrupts the grid images must be thoroughly taken into account to limit its effect on the deconvolved strain maps. The difficulty here is that the noise on the grid image yields a spatially correlated noise on the strain maps. In particular, numerical experiments on synthetic data show that direct and regularized deconvolutions are unstable when noisy data are processed. The same remark holds when Wiener filtering is employed without taking into account noise autocorrelation. On the other hand, the Richardson-Lucy algorithm and Wiener filtering with noise autocorrelation provide deconvolved maps where the impact of noise remains controlled within a certain limit. It is also observed that the last technique outperforms the Richardson-Lucy algorithm. Two short examples of actual strain fields restoration are finally shown. They deal with asphalt and shape memory alloy specimens. The benefits and limitations of deconvolution are presented and discussed in these two cases. The main conclusion is that strain maps are correctly deconvolved when the signal-to-noise ratio is high and that actual noise in the actual strain maps must be more specifically characterized than in the current study to address higher noise levels with Wiener filtering.
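
    Of the restoration techniques compared above, the Richardson-Lucy iteration is the easiest to sketch. The following illustrative 1-D version assumes a known, normalized kernel and a float-valued signal; it is not the authors' implementation, which also has to handle the spatially correlated noise of the strain maps:

        import numpy as np

        def richardson_lucy(observed, psf, n_iter=50):
            """Richardson-Lucy deconvolution of a 1-D signal by a known PSF."""
            psf_mirror = psf[::-1]
            estimate = np.full(observed.shape, observed.mean(), dtype=float)
            for _ in range(n_iter):
                blurred = np.convolve(estimate, psf, mode='same')
                ratio = observed / np.maximum(blurred, 1e-12)  # avoid /0
                estimate = estimate * np.convolve(ratio, psf_mirror, mode='same')
            return estimate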

  15. Least-squares (LS) deconvolution of a series of overlapping cortical auditory evoked potentials: a simulation and experimental study

    NASA Astrophysics Data System (ADS)

    Bardy, Fabrice; Van Dun, Bram; Dillon, Harvey; Cowan, Robert

    2014-08-01

    Objective. To evaluate the viability of disentangling a series of overlapping ‘cortical auditory evoked potentials’ (CAEPs) elicited by different stimuli using least-squares (LS) deconvolution, and to assess the adaptation of CAEPs for different stimulus onset-asynchronies (SOAs). Approach. Optimal aperiodic stimulus sequences were designed by controlling the condition number of the matrices associated with the LS deconvolution technique. First, theoretical considerations of LS deconvolution were assessed in simulations in which multiple artificial overlapping responses were recovered. Second, biological CAEPs were recorded in response to continuously repeated stimulus trains containing six different tone-bursts with frequencies of 8, 4, 2, 1, 0.5 and 0.25 kHz, separated by SOAs jittered around 150 (120-185), 250 (220-285) and 650 (620-685) ms. The control condition had a fixed SOA of 1175 ms. In a second condition, using the same SOAs, trains of six stimuli were separated by a silence gap of 1600 ms. Twenty-four adults with normal hearing (<20 dB HL) were assessed. Main results. Results showed disentangling of a series of overlapping responses using LS deconvolution on simulated waveforms as well as on real EEG data. The use of rapid presentation and LS deconvolution did not, however, allow the recovered CAEPs to have a higher signal-to-noise ratio than for slowly presented stimuli. The LS deconvolution technique enables the analysis of a series of overlapping responses in EEG. Significance. LS deconvolution is a useful technique for the study of the adaptation mechanisms of CAEPs for closely spaced stimuli whose characteristics change from stimulus to stimulus. High-rate presentation is necessary to develop an understanding of how the auditory system encodes natural speech or other intrinsically high-rate stimuli.
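
    The core of the LS technique is a tall convolution (design) matrix whose condition number is controlled by the stimulus sequence, exactly as described above. A minimal single-response sketch (the study recovers several distinct responses, i.e. one column block per stimulus type; onsets and lengths here are illustrative):

        import numpy as np

        n_samples, resp_len = 2000, 300
        onsets = [0, 150, 420, 600, 910, 1200]      # hypothetical jittered SOAs

        A = np.zeros((n_samples, resp_len))         # shifted-identity blocks
        for t0 in onsets:
            rows = np.arange(t0, min(t0 + resp_len, n_samples))
            A[rows, rows - t0] += 1.0

        print('condition number:', np.linalg.cond(A))  # sequence design sets this

        rng = np.random.default_rng(0)
        true_resp = np.sin(np.linspace(0.0, np.pi, resp_len))
        y = A @ true_resp + 0.05 * rng.standard_normal(n_samples)  # overlapped EEG
        recovered, *_ = np.linalg.lstsq(A, y, rcond=None)          # disentangled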

  16. Implementing the SU(2) Symmetry for the DMRG

    NASA Astrophysics Data System (ADS)

    Alvarez, Gonzalo

    2010-03-01

    In the Density Matrix Renormalization Group (DMRG) algorithm (White, 1992), Hamiltonian symmetries play an important role. Using symmetries, the matrix representation of the Hamiltonian can be blocked. Diagonalizing each matrix block is more efficient than diagonalizing the original matrix. This talk will explain how the DMRG++ code (arXiv:0902.3185; Computer Physics Communications 180 (2009) 1572-1578) has been extended to handle the non-local SU(2) symmetry in a model-independent way. Improvements in CPU times compared to runs with only local symmetries will be discussed for typical tight-binding models of strongly correlated electronic systems. The computational bottleneck of the algorithm, and the use of shared memory parallelization, will also be addressed. Finally, a roadmap for future work on DMRG++ will be presented.
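
    The cost argument above is easy to verify on a toy example: diagonalizing a symmetry-blocked matrix block by block reproduces the full spectrum at a cost of the sum of the block costs rather than the cube of the full dimension. A sketch with random symmetric blocks (sizes illustrative):

        import numpy as np
        from scipy.linalg import block_diag, eigh

        rng = np.random.default_rng(0)
        blocks = [rng.standard_normal((n, n)) for n in (3, 5, 4)]
        blocks = [0.5 * (b + b.T) for b in blocks]       # symmetric blocks
        H = block_diag(*blocks)                          # symmetry-adapted H

        full = np.sort(eigh(H, eigvals_only=True))
        per_block = np.sort(np.concatenate(
            [eigh(b, eigvals_only=True) for b in blocks]))
        print(np.allclose(full, per_block))              # True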

  17. Implementation of the SU(2) Hamiltonian Symmetry for the DMRG Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alvarez, Gonzalo

    2012-01-01

    In the Density Matrix Renormalization Group (DMRG) algorithm (White, 1992, 1993), Hamiltonian symmetries play an important role. Using symmetries, the matrix representation of the Hamiltonian can be blocked. Diagonalizing each matrix block is more efficient than diagonalizing the original matrix. This paper explains how the DMRG++ code (Alvarez, 2009) has been extended to handle the non-local SU(2) symmetry in a model-independent way. Improvements in CPU times compared to runs with only local symmetries are discussed for the one-orbital Hubbard model, and for a two-orbital Hubbard model for iron-based superconductors. The computational bottleneck of the algorithm and the use of shared memory parallelization are also addressed.

  18. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    PubMed

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; image degradation is thus simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted into calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of the restored images. To overcome the drawback of long computation times, graphics-processing-unit multithreading or an increased spacing of control points is adopted to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also provides a useful reference for the study of three-dimensional microscopic image deconvolution.
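
    The simplification exploited by the GRBF model is the closure of Gaussians under convolution: blurring a Gaussian basis function by a Gaussian PSF yields another Gaussian with the variances added. A quick numerical check (widths are arbitrary):

        import numpy as np

        def gaussian(x, s):
            return np.exp(-x**2 / (2.0 * s**2)) / (s * np.sqrt(2.0 * np.pi))

        x = np.linspace(-10.0, 10.0, 2001)
        dx = x[1] - x[0]
        conv = np.convolve(gaussian(x, 1.5), gaussian(x, 2.0), mode='same') * dx
        err = np.abs(conv - gaussian(x, np.hypot(1.5, 2.0))).max()
        print(err)   # tiny: only sampling/truncation error remains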

  19. Experimental pencil beam kernels derivation for 3D dose calculation in flattening filter free modulated fields

    NASA Astrophysics Data System (ADS)

    Diego Azcona, Juan; Barbés, Benigno; Wang, Lilie; Burguete, Javier

    2016-01-01

    This paper presents a method to obtain the pencil-beam kernels that characterize a megavoltage photon beam generated in a flattening filter free (FFF) linear accelerator (linac) by deconvolution from experimental measurements at different depths. The formalism is applied to perform independent dose calculations in modulated fields. In our previous work a formalism was developed for ideal flat fluences exiting the linac's head. That framework could not deal with spatially varying energy fluences, so any deviation from the ideal flat fluence was treated as a perturbation. The present work addresses the necessity of implementing an exact analysis where any spatially varying fluence can be used, such as those encountered in FFF beams. A major improvement introduced here is to handle the actual fluence in the deconvolution procedure. We studied the uncertainties associated with the kernel derivation with this method. Several Kodak EDR2 radiographic films were irradiated with a 10 MV FFF photon beam from two linacs from different vendors, at depths of 5, 10, 15, and 20 cm in polystyrene (RW3 water-equivalent phantom, PTW Freiburg, Germany). The irradiation field was a 50 mm diameter circular field, collimated with a lead block. The 3D kernel for a FFF beam was obtained by deconvolution using the Hankel transform. A correction on the low-dose part of the kernel was performed to reproduce accurately the experimental output factors. The uncertainty in the kernel derivation procedure was estimated to be within 0.2%. Eighteen modulated fields used clinically in different treatment localizations were irradiated at four measurement depths (a total of fifty-four film measurements). Comparison through the gamma-index to their corresponding calculated absolute dose distributions showed a number of passing points (3%, 3 mm) mostly above 99%. This new procedure is more reliable and robust than the previous one. Its ability to perform accurate independent dose calculations was demonstrated.

  20. Conjugate-gradient preconditioning methods for shift-variant PET image reconstruction.

    PubMed

    Fessler, J A; Booth, S D

    1999-01-01

    Gradient-based iterative methods often converge slowly for tomographic image reconstruction and image restoration problems, but can be accelerated by suitable preconditioners. Diagonal preconditioners offer some improvement in convergence rate, but do not incorporate the structure of the Hessian matrices in imaging problems. Circulant preconditioners can provide remarkable acceleration for inverse problems that are approximately shift-invariant, i.e., for those with approximately block-Toeplitz or block-circulant Hessians. However, in applications with nonuniform noise variance, such as arises from Poisson statistics in emission tomography and in quantum-limited optical imaging, the Hessian of the weighted least-squares objective function is quite shift-variant, and circulant preconditioners perform poorly. Additional shift-variance is caused by edge-preserving regularization methods based on nonquadratic penalty functions. This paper describes new preconditioners that approximate more accurately the Hessian matrices of shift-variant imaging problems. Compared to diagonal or circulant preconditioning, the new preconditioners lead to significantly faster convergence rates for the unconstrained conjugate-gradient (CG) iteration. We also propose a new efficient method for the line-search step required by CG methods. Applications to positron emission tomography (PET) illustrate the method.
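
    In the shift-invariant limit that the abstract takes as its starting point, a circulant preconditioner can be applied in O(n log n) time with the FFT, and is in fact exact, so the toy below converges in essentially one preconditioned CG step; it is in the shift-variant, weighted problems that this breaks down and the paper's preconditioners are needed. Kernel shape and regularization value are illustrative:

        import numpy as np

        n = 256
        kernel = np.exp(-0.5 * ((np.arange(n) - n // 2) / 2.0) ** 2)
        kernel /= kernel.sum()

        # Eigenvalues of the circulant Hessian A = H^T H + lambda I of a
        # periodic blur H, obtained directly from the FFT of the kernel.
        eig = np.abs(np.fft.fft(np.fft.ifftshift(kernel))) ** 2 + 1e-3

        def apply_a(x):
            return np.real(np.fft.ifft(eig * np.fft.fft(x)))

        def precond(r):                        # M^{-1} r, a circulant solve
            return np.real(np.fft.ifft(np.fft.fft(r) / eig))

        b = np.random.default_rng(1).standard_normal(n)
        x = np.zeros(n)
        r = b - apply_a(x)
        z = precond(r)
        p = z.copy()
        for it in range(50):                   # preconditioned CG
            ap = apply_a(p)
            alpha = (r @ z) / (p @ ap)
            x += alpha * p
            r_new = r - alpha * ap
            if np.linalg.norm(r_new) < 1e-10 * np.linalg.norm(b):
                break
            z_new = precond(r_new)
            beta = (r_new @ z_new) / (r @ z)
            p = z_new + beta * p
            r, z = r_new, z_new
        print('CG iterations:', it + 1)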

  1. Effects of nifedipine and captopril on vascular capacitance of ganglion-blocked anesthetized dogs.

    PubMed

    Ogilvie, R I; Zborowska-Sluis, D

    1990-03-01

    The hemodynamic effects of nifedipine and captopril at doses producing similar reductions in arterial pressure were studied in pentobarbital-anesthetized ventilated dogs after splenectomy during ganglion blockade with hexamethonium. Mean circulatory filling pressure (Pmcf) was determined during transient circulatory arrest induced by acetylcholine at baseline circulating blood volumes and after increases of 5 and 10 mL/kg. Central blood volumes (pulmonary artery to aortic root) were determined from transit times, and separately determined cardiac outputs (right atrium to pulmonary artery) were estimated by thermodilution. Nifedipine (n = 5) increased Pmcf at all circulating blood volumes and reduced total vascular capacitance without a change in total vascular compliance. Central blood volume, right atrial pressure, and cardiac output were increased with induced increases in circulating blood volume. In contrast, captopril (n = 5) did not alter total vascular capacitance, central blood volume, right atrial pressure, or cardiac output at baseline or with increased circulating volume. Thus, at doses producing similar reductions in arterial pressure, nifedipine but not captopril increased venous return and cardiac output in ganglion-blocked dogs.

  2. Sandstone provenance and tectonic evolution of the Xiukang Mélange from Neotethyan subduction to India-Asia collision (Yarlung-Zangbo suture, south Tibet)

    NASA Astrophysics Data System (ADS)

    An, Wei; Hu, Xiumian; Garzanti, Eduardo

    2016-04-01

    The Xiukang Mélange of the Yarlung-Zangbo suture zone in south Tibet documents low efficiency of accretion along the southern active margin of Asia during Cretaceous Neotethyan subduction, followed by final development during the early Paleogene stages of the India-Asia collision. Here we investigate four transverses in the Xigaze area (Jiding, Cuola Pass, Riwuqi and Saga), inquiry the composition in each transverse, and present integrated petrologic, U-Pb detrital-zircon geochronology and Hf isotope data on sandstone blocks. In fault contact with the Yarlung-Zangbo Ophiolite to the north and the Tethyan Himalaya to the south, the Xiukang mélange can be divided into three types: serpentinite-matrix mélange composed by broken Yarlung-Zangbo Ophiolite, thrust-sheets consisting mainly chert, quartzose or limestone sheets(>100m) with little intervening marix, and mudstone-matrix mélange displaying typical blocks-in-matrix texture. While serpentinite-matrix mélange is exposed adjacent to the ophiolite, distributions of thrust-sheets and blocks in mudstone-matrix mélange show along-strike diversities. For example, Jiding transverse is dominant by chert sheets and basalt blocks with scarcely sandstone blocks, while Cuola Pass and Saga transverses expose large amounts of limestone/quartzarenite sheets in the north and volcaniclastic blocks in the south. However, turbidite sheets and volcaniclastic blocks are outcropped in the north Riwuqi transverse with quartzarenite blocks preserved in the south. Three groups of sandstone blocks/sheets with different provenance and depositional setting are distinguished by their petrographic, geochronological and isotopic fingerprints. Sheets of turbiditic quartzarenite originally sourced from the Indian continent were deposited in pre-Cretaceous time on the northernmost edge of the Indian passive margin and eventually involved into the mélange at the early stage of the India-Asia collision. Two distinct groups of volcaniclastic-sandstone blocks were derived from the central Lhasa block and Gangdese magmatic arc. One group was deposited in the trench and/or on the trench slope of the Asian margin during the early Late Cretaceous, and the other group in a syn-collisional basin just after the onset of the India-Asia collision in the Early Eocene. The largely erosional character of the Asian active margin in the Late Cretaceous is indicated by the scarcity of off-scraped trench-fill deposits and the relatively small subduction complex developed during limited episodes of accretion. The Xiukang Mélange was finally structured in the Late Paleocene/Eocene, when sandstone of both Indian and Asian origin were progressively incorporated tectonically in the suture zone of the nascent Himalayan Orogen.

  3. Indomethacin Inhibits Circulating PGE2 and Reverses Postexercise Suppression of Natural Killer Cell Activity

    DTIC Science & Technology

    1999-01-01

    after the oral administration of a placebo, the PG inhibitor indomethacin (75 mg/day for 5 days), or naltrexone (reported elsewhere). Circulating...which blocks PGE2 biosynthesis via inhibition of cyclooxygenase activity (57). Maximal suppression of PG production occurs with doses between 50...and 150 mg (1). In addition to the independent effects of PGE2 on NKCA, low circulating levels of PGE2 can synergize with endogenous glucocorticoids

  4. Approximate solutions for diffusive fracture-matrix transfer: Application to storage of dissolved CO2 in fractured rocks

    DOE PAGES

    Zhou, Quanlin; Oldenburg, Curtis M.; Spangler, Lee H.; ...

    2017-01-05

    Analytical solutions with infinite exponential series are available to calculate the rate of diffusive transfer between low-permeability blocks and high-permeability zones in the subsurface. Truncation of these series is often employed by neglecting the early-time regime. In this paper, we present unified-form approximate solutions in which the early-time and the late-time solutions are continuous at a switchover time. The early-time solutions are based on three-term polynomial functions in terms of the square root of dimensionless time, with the first coefficient dependent only on the dimensionless area-to-volume ratio. The last two coefficients are either determined analytically for isotropic blocks (e.g., spheres and slabs) or obtained by fitting the exact solutions, and they depend solely on the aspect ratios for rectangular columns and parallelepipeds. For the late-time solutions, only the leading exponential term is needed for isotropic blocks, while a few additional exponential terms are needed for highly anisotropic rectangular blocks. The optimal switchover time is between 0.157 and 0.229, with the highest relative approximation error less than 0.2%. The solutions are used to demonstrate the storage of dissolved CO2 in fractured reservoirs with low-permeability matrix blocks of single and multiple shapes and sizes. These approximate solutions are building blocks for the development of analytical and numerical tools for hydraulic, solute, and thermal diffusion processes in low-permeability matrix blocks.
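
    For the classical sphere, the structure of these approximations can be written down directly from Crank's series; the sketch below uses the textbook sphere coefficients (not the paper's fitted ones) and shows that the early-time polynomial and the leading-exponential late-time form nearly coincide around the quoted switchover window of dimensionless time 0.157-0.229:

        import numpy as np

        def late_time(td):      # leading exponential term of Crank's series
            return 1.0 - (6.0 / np.pi**2) * np.exp(-np.pi**2 * td)

        def early_time(td):     # leading early-time terms in sqrt(td)
            return (6.0 / np.sqrt(np.pi)) * np.sqrt(td) - 3.0 * td

        for td in (0.05, 0.15, 0.20, 0.25):
            print(td, round(early_time(td), 4), round(late_time(td), 4))
        # near td = 0.2 both give ~0.91, so switching there keeps the error small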

  5. Minimum entropy deconvolution and blind equalisation

    NASA Technical Reports Server (NTRS)

    Satorius, E. H.; Mulligan, J. J.

    1992-01-01

    Relationships between minimum entropy deconvolution, developed primarily for geophysics applications, and blind equalization are pointed out. It is seen that a large class of existing blind equalization algorithms are directly related to the scale-invariant cost functions used in minimum entropy deconvolution. Thus the extensive analyses of these cost functions can be directly applied to blind equalization, including the important asymptotic results of Donoho.
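
    To make the connection concrete, the following is a hedged sketch of the classical Wiggins-style MED iteration: an FIR filter is updated to maximize a normalized fourth-power (varimax-type) criterion of its output, one of the scale-invariant cost functions referred to above. Filter length and iteration count are illustrative:

        import numpy as np
        from scipy.linalg import solve_toeplitz

        def med(x, filt_len=30, n_iter=30):
            """Minimum entropy deconvolution filter for a 1-D trace x."""
            f = np.zeros(filt_len)
            f[filt_len // 2] = 1.0                        # start from a spike
            # autocorrelation lags 0..filt_len-1 define the normal equations
            r = np.correlate(x, x, mode='full')[len(x) - 1:len(x) - 1 + filt_len]
            for _ in range(n_iter):
                y = np.convolve(x, f, mode='full')[:len(x)]
                g = y**3 * (np.sum(y**2) / np.sum(y**4))  # varimax weighting
                b = np.array([np.dot(g[k:], x[:len(x) - k])
                              for k in range(filt_len)])
                f = solve_toeplitz(r, b)
                f /= np.linalg.norm(f)
            return f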

  6. Scalar flux modeling in turbulent flames using iterative deconvolution

    NASA Astrophysics Data System (ADS)

    Nikolaou, Z. M.; Cant, R. S.; Vervisch, L.

    2018-04-01

    In the context of large eddy simulations, deconvolution is an attractive alternative for modeling the unclosed terms appearing in the filtered governing equations. Such methods have been used in a number of studies for non-reacting and incompressible flows; however, their application in reacting flows is limited in comparison. Deconvolution methods originate from clearly defined operations, and in theory they can be used in order to model any unclosed term in the filtered equations including the scalar flux. In this study, an iterative deconvolution algorithm is used in order to provide a closure for the scalar flux term in a turbulent premixed flame by explicitly filtering the deconvoluted fields. The assessment of the method is conducted a priori using a three-dimensional direct numerical simulation database of a turbulent freely propagating premixed flame in a canonical configuration. In contrast to most classical a priori studies, the assessment is more stringent as it is performed on a much coarser mesh which is constructed using the filtered fields as obtained from the direct simulations. For the conditions tested in this study, deconvolution is found to provide good estimates both of the scalar flux and of its divergence.
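
    The iterative deconvolution referred to here can be illustrated with a Van Cittert-type fixed-point update, a plausible minimal form of such an algorithm (the study works with 3-D DNS fields and its own filter; the 1-D kernel below is a stand-in):

        import numpy as np

        G = np.ones(5) / 5.0                   # illustrative box filter kernel

        def iterative_deconvolution(filtered, n_iter=5):
            """phi_{k+1} = phi_k + (phi_bar - G * phi_k), starting from phi_bar."""
            phi = filtered.copy()
            for _ in range(n_iter):
                phi = phi + (filtered - np.convolve(phi, G, mode='same'))
            return phi

        # The deconvoluted velocity u* and scalar c* are then explicitly
        # re-filtered to close the scalar flux:
        #   flux_model = filt(u* c*) - filt(u*) * filt(c*)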

  7. Rapid perfusion quantification using Welch-Satterthwaite approximation and analytical spectral filtering

    NASA Astrophysics Data System (ADS)

    Krishnan, Karthik; Reddy, Kasireddy V.; Ajani, Bhavya; Yalavarthy, Phaneendra K.

    2017-02-01

    CT and MR perfusion weighted imaging (PWI) enable quantification of perfusion parameters in stroke studies. These parameters are calculated from the residual impulse response function (IRF) based on a physiological model for tissue perfusion. The standard approach for estimating the IRF is deconvolution using oscillation-limited singular value decomposition (oSVD) or Frequency Domain Deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of CT Perfusion/MR PWI. In this work, three faster methods are proposed. The first is a direct (model based) crude approximation to the final perfusion quantities (blood flow, blood volume, mean transit time and delay) using the Welch-Satterthwaite approximation for gamma-fitted concentration time curves (CTC). The second is a fast, accurate deconvolution method that we call Analytical Fourier Filtering (AFF). The third is another fast, accurate deconvolution technique, based on Showalter's method, that we call Analytical Showalter's Spectral Filtering (ASSF). Through systematic evaluation on phantom and clinical data, the proposed methods are shown to be computationally more than twice as fast as FDD. The two deconvolution based methods, AFF and ASSF, are also shown to be quantitatively accurate compared to FDD and oSVD.
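
    The baseline these methods are compared against can be sketched compactly: standard SVD deconvolution builds a lower-triangular Toeplitz matrix from the arterial input function and inverts it with truncated singular values (oSVD adds oscillation control; FDD works in the frequency domain). All parameter values below are illustrative:

        import numpy as np

        def svd_deconvolution(aif, tissue, dt, thresh=0.2):
            """Estimate the impulse response function by truncated-SVD decon."""
            n = len(aif)
            A = dt * np.array([[aif[i - j] if i >= j else 0.0
                                for j in range(n)] for i in range(n)])
            U, s, Vt = np.linalg.svd(A)
            s_inv = np.where(s > thresh * s.max(), 1.0 / s, 0.0)  # truncation
            irf = Vt.T @ (s_inv * (U.T @ tissue))
            return irf, irf.max()       # residue function and its peak (flow)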

  8. How Properties of Kenaf Fibers from Burkina Faso Contribute to the Reinforcement of Earth Blocks

    PubMed Central

    Millogo, Younoussa; Aubert, Jean-Emmanuel; Hamard, Erwan; Morel, Jean-Claude

    2015-01-01

    Physicochemical characteristics of Hibiscus cannabinus (kenaf) fibers from Burkina Faso were studied using X-ray diffraction (XRD), infrared spectroscopy, thermal gravimetric analysis (TGA), chemical analysis and video microscopy. Kenaf fibers (3 cm long) were used to reinforce earth blocks, and the mechanical properties of reinforced blocks, with fiber contents ranging from 0.2 to 0.8 wt%, were investigated. The fibers were mainly composed of cellulose type I (70.4 wt%), hemicelluloses (18.9 wt%) and lignin (3 wt%) and were characterized by high tensile strength (1 ± 0.25 GPa) and Young’s modulus (136 ± 25 GPa), linked to their high cellulose content. The incorporation of short fibers of kenaf reduced the propagation of cracks in the blocks, through the good adherence of fibers to the clay matrix, and therefore improved their mechanical properties. Fiber incorporation was particularly beneficial for the bending strength of earth blocks because it reinforces these blocks after the failure of soil matrix observed for unreinforced blocks. Blocks reinforced with such fibers had a ductile tensile behavior that made them better building materials for masonry structures than unreinforced blocks.

  9. Compressed normalized block difference for object tracking

    NASA Astrophysics Data System (ADS)

    Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge

    2018-04-01

    Feature extraction is very important for robust and real-time tracking. Compressive sensing has provided technical support for real-time feature extraction. However, all existing compressive trackers have been based on the compressed Haar-like feature, and how to compress other, more powerful high-dimensional features remains worth researching. In this paper, a novel compressed normalized block difference (CNBD) feature is proposed. To resist noise effectively in the high-dimensional normalized pixel difference (NPD) feature, the normalized block difference feature extends the two pixels in the original NPD formula to two blocks. A CNBD feature is obtained by compressing a normalized block difference feature based on compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than the other trackers, in particular the FCT tracker based on the compressed Haar-like feature, in terms of AUC, SR and precision.
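
    A hedged sketch of the two steps the abstract describes, with invented box geometry and dimensions (the paper's feature layout and tracker are not reproduced): block means replace the two pixels of NPD, and a sparse random Gaussian matrix compresses the resulting high-dimensional feature, as in compressive sensing.

        import numpy as np

        rng = np.random.default_rng(0)

        def normalized_block_difference(img, box_pairs):
            """(mean(B1) - mean(B2)) / (mean(B1) + mean(B2)) per block pair."""
            feats = []
            for (r1, c1, h1, w1), (r2, c2, h2, w2) in box_pairs:
                a = img[r1:r1 + h1, c1:c1 + w1].mean()
                b = img[r2:r2 + h2, c2:c2 + w2].mean()
                feats.append(0.0 if a + b == 0 else (a - b) / (a + b))
            return np.array(feats)

        d_high, d_low, density = 5000, 50, 0.05    # illustrative dimensions
        mask = rng.random((d_low, d_high)) < density
        Phi = np.where(mask, rng.standard_normal((d_low, d_high)), 0.0)
        # compressed feature for a candidate patch: v = Phi @ nbd_features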

  10. Impact of atmospheric blocking events on the decrease of precipitation in the Selenga River basin

    NASA Astrophysics Data System (ADS)

    Antokhina, O.; Antokhin, P.; Devyatova, E.; Vladimir, M.

    2017-12-01

    The periods of prolonged deficiency of the hydropower potential (HP) of the Angara cascade of hydroelectric plants, related to low inflow in the Baikal and Angara basins, threaten the energy sector of Siberia. Five such periods have been recorded since 1901. The last period began in 1996 and continues today. It attracts special attention because it is the longest and coincides with the observed climate change. In our previous work we found that the reason for the observed decrease of HP is the low water content of the Selenga River (the main river in the Baikal basin). We also found that the variations of the Selenga water content depend almost entirely on summer atmospheric precipitation. The most dramatic decrease of summer precipitation is observed in July. In turn, precipitation in July depends on the location and intensity of the atmospheric frontal zone which separates the mid-latitude circulation from the East Asian monsoon system. Recently this frontal zone has weakened and the intensity of the East Asian summer monsoon has decreased, and the reasons for these changes need to be understood. In the present work we investigate the influence of atmospheric blocking over Asia on the East Asian summer monsoon circulation in the period of its maximum (July). Based on the analysis of a large number of blocking events, we identified the main mechanisms by which blocking influences the monsoon and studied the properties of cyclones formed by the interaction of air masses from mid-latitudes and the tropics. It turned out that atmospheric blocking plays a fundamental role in the formation of the East Asian monsoon moisture transport and in the redistribution of precipitation anomalies. In the absence of blocking over Asia, East Asian monsoon moisture does not extend to the north, whereas in the presence of blocking its spatial configuration and localization completely determine the configuration of precipitation anomalies in the northern part of East Asia. We also found that the weakening of the monsoon circulation in East Asia is associated with a decrease in the frequency of atmospheric blocking events in a longitudinal sector about 30° wide centered on Lake Baikal. The study was supported by the Russian Scientific Foundation, Project No. 17-77-10035.

  11. Block matrix based LU decomposition to analyze kinetic damping in active plasma resonance spectroscopy

    NASA Astrophysics Data System (ADS)

    Roehl, Jan Hendrik; Oberrath, Jens

    2016-09-01

    ``Active plasma resonance spectroscopy'' (APRS) is a widely used diagnostic method to measure plasma parameters such as the electron density. Measurements with APRS probes in plasmas of a few Pa typically show a broadening of the spectrum due to kinetic effects. To analyze the broadening, a general kinetic model in electrostatic approximation based on functional analytic methods has been presented [ 1 ] . One of the main results is that the system response function Y(ω) is given in terms of the matrix elements of the resolvent of the dynamic operator evaluated for values on the imaginary axis. To determine the response function of a specific probe, the resolvent has to be approximated by a huge matrix with a banded block structure. Due to this structure, a block-based LU decomposition can be implemented. It leads to a solution for Y(ω) that involves only products of matrices of the inner block size. This LU decomposition makes it possible to analyze the influence of kinetic effects on the broadening, and it saves memory and computation time. Gratitude is expressed for the internal funding of Leuphana University.
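
    A banded block matrix of this kind is solved by a block LU (block Thomas) recursion in which only inner-block-sized products and inverses appear, which is the property exploited above. A generic sketch (A, B, C are the lists of diagonal, sub- and super-diagonal blocks; this is not code from the cited model):

        import numpy as np

        def block_tridiag_solve(A, B, C, d):
            """Solve a block-tridiagonal system by block LU factorization."""
            n = len(A)
            D = [None] * n          # modified diagonal blocks
            g = [None] * n          # modified right-hand sides
            D[0], g[0] = A[0], d[0]
            for k in range(1, n):                     # forward elimination
                L = B[k - 1] @ np.linalg.inv(D[k - 1])
                D[k] = A[k] - L @ C[k - 1]
                g[k] = d[k] - L @ g[k - 1]
            x = [None] * n
            x[-1] = np.linalg.solve(D[-1], g[-1])
            for k in range(n - 2, -1, -1):            # back substitution
                x[k] = np.linalg.solve(D[k], g[k] - C[k] @ x[k + 1])
            return x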

  12. Performance of low-rank QR approximation of the finite element Biot-Savart law

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, D; Fasenfest, B

    2006-10-16

    In this paper we present a low-rank QR method for evaluating the discrete Biot-Savart law. Our goal is to develop an algorithm that is easily implemented on parallel computers. It is assumed that the known current density and the unknown magnetic field are both expressed in a finite element expansion, and we wish to compute the degrees-of-freedom (DOF) in the basis function expansion of the magnetic field. The matrix that maps the current DOF to the field DOF is full, but if the spatial domain is properly partitioned the matrix can be written as a block matrix, with blocks representing distant interactions being low rank and having a compressed QR representation. While an octree partitioning of the matrix may be ideal, for ease of parallel implementation we employ a partitioning based on the number of processors. The rank of each block (i.e. the compression) is determined by the specific geometry and is computed dynamically. In this paper we provide the algorithmic details and present computational results for large-scale computations.
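
    The compressed QR representation of a far-interaction block can be sketched with a rank-revealing (column-pivoted) QR; the tolerance and permutation handling below are illustrative, not the paper's implementation:

        import numpy as np
        from scipy.linalg import qr

        def compress_block(B, tol=1e-6):
            """Low-rank QR compression: B ~= Q_k @ R_k with storage O((m+n)k)."""
            Q, R, piv = qr(B, mode='economic', pivoting=True)
            k = int(np.sum(np.abs(np.diag(R)) > tol * abs(R[0, 0])))
            R_k = np.zeros((k, B.shape[1]))
            R_k[:, piv] = R[:k]             # undo the column permutation
            return Q[:, :k], R_k

        # Applying the compressed block costs two skinny products:
        # y = Q_k @ (R_k @ x)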

  13. Dynamic mechanical properties of hydroxyapatite/polyethylene oxide nanocomposites: characterizing isotropic and post-processing microstructures

    NASA Astrophysics Data System (ADS)

    Shofner, Meisha; Lee, Ji Hoon

    2012-02-01

    Compatible component interfaces in polymer nanocomposites can be used to facilitate a dispersed morphology and improved physical properties as has been shown extensively in experimental results concerning amorphous matrix nanocomposites. In this research, a block copolymer compatibilized interface is employed in a semi-crystalline matrix to prevent large scale nanoparticle clustering and enable microstructure construction with post-processing drawing. The specific materials used are hydroxyapatite nanoparticles coated with a polyethylene oxide-b-polymethacrylic acid block copolymer and a polyethylene oxide matrix. Two particle shapes are used: spherical and needle-shaped. Characterization of the dynamic mechanical properties indicated that the two nanoparticle systems provided similar levels of reinforcement to the matrix. For the needle-shaped nanoparticles, the post-processing step increased matrix crystallinity and changed the thermomechanical reinforcement trends. These results will be used to further refine the post-processing parameters to achieve a nanocomposite microstructure with triangulated arrays of nanoparticles.

  14. Direct Solve of Electrically Large Integral Equations for Problem Sizes to 1M Unknowns

    NASA Technical Reports Server (NTRS)

    Shaeffer, John

    2008-01-01

    Matrix methods for solving integral equations via direct-solve LU factorization are presently limited to weeks to months of very expensive supercomputer time for problem sizes of several hundred thousand unknowns. This report presents matrix LU factor solutions for electromagnetic scattering problems for problem sizes up to one million unknowns with thousands of right-hand sides that run in mere days on PC-level hardware. This EM solution is accomplished by exploiting the numerically low-rank nature of spatially blocked unknowns, using the Adaptive Cross Approximation to compress the rank-deficient blocks of the system Z matrix, the L and U factors, the right-hand side forcing function and the final current solution. This compressed matrix solution is applied to a frequency-domain EM solution of Maxwell's equations using a standard Method of Moments approach. The compressed matrix storage and operation counts lead to orders-of-magnitude reductions in memory and run time.
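
    The Adaptive Cross Approximation named above builds a compressed representation of a block from a few of its rows and columns, so the rank-deficient block never has to be formed in full. A hedged sketch of the partially pivoted variant (the row/column callbacks and tolerances are illustrative):

        import numpy as np

        def aca(get_row, get_col, tol=1e-6, max_rank=50):
            """Low-rank factors U, V with Z ~= U @ V, built row/column-wise."""
            U, V = [], []
            i = 0
            for _ in range(max_rank):
                row = get_row(i) - sum(uk[i] * vk for uk, vk in zip(U, V))
                j = int(np.argmax(np.abs(row)))
                if abs(row[j]) < 1e-14:
                    break
                v_new = row / row[j]
                col = get_col(j) - sum(uk * vk[j] for uk, vk in zip(U, V))
                U.append(col)
                V.append(v_new)
                if np.linalg.norm(col) * np.linalg.norm(v_new) < tol:
                    break
                i = int(np.argmax(np.abs(col)))       # next pivot row
            return np.array(U).T, np.array(V)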

  15. Variability simulations with a steady, linearized primitive equations model

    NASA Technical Reports Server (NTRS)

    Kinter, J. L., III; Nigam, S.

    1985-01-01

    Solutions of the steady primitive equations on a sphere, linearized about a zonally symmetric basic state, are computed for the purpose of simulating monthly mean variability in the troposphere. The basic states are observed winter monthly mean, zonal means of zonal and meridional velocities, temperatures and surface pressures computed from the 15-year NMC time series. A least-squares fit to a series of Legendre polynomials is used to compute the basic states between 20 H and the equator, and the hemispheres are assumed symmetric. The model is spectral in the zonal direction, and centered differences are employed in the meridional and vertical directions. Since the model is steady and linear, the solution is obtained by inversion of a block penta-diagonal matrix. The model simulates the climatology of the GFDL nine-level spectral general circulation model quite closely, particularly in middle latitudes above the boundary layer. This experiment is an extension of that simulation to examine the variability of the steady, linear solution.

  16. Seismic interferometry by crosscorrelation and by multidimensional deconvolution: a systematic comparison

    NASA Astrophysics Data System (ADS)

    Wapenaar, Kees; van der Neut, Joost; Ruigrok, Elmer; Draganov, Deyan; Hunziker, Juerg; Slob, Evert; Thorbecke, Jan; Snieder, Roel

    2010-05-01

    In recent years, seismic interferometry (or Green's function retrieval) has led to many applications in seismology (exploration, regional and global), underwater acoustics and ultrasonics. One of the explanations for this broad interest lies in the simplicity of the methodology. In passive data applications a simple crosscorrelation of responses at two receivers gives the impulse response (Green's function) at one receiver as if there were a source at the position of the other. In controlled-source applications the procedure is similar, except that it involves in addition a summation along the sources. It has also been recognized that the simple crosscorrelation approach has its limitations. From the various theoretical models it follows that there are a number of underlying assumptions for retrieving the Green's function by crosscorrelation. The most important assumptions are that the medium is lossless and that the waves are equipartitioned. In heuristic terms the latter condition means that the receivers are illuminated isotropically from all directions, which is for example achieved when the sources are regularly distributed along a closed surface, the sources are mutually uncorrelated and their power spectra are identical. Despite the fact that in practical situations these conditions are at most only partly fulfilled, the results of seismic interferometry are generally quite robust, but the retrieved amplitudes are unreliable and the results are often blurred by artifacts. Several researchers have proposed to address some of the shortcomings by replacing the correlation process by deconvolution. In most cases the employed deconvolution procedure is essentially 1-D (i.e., trace-by-trace deconvolution). This compensates the anelastic losses, but it does not account for the anisotropic illumination of the receivers. To obtain more accurate results, seismic interferometry by deconvolution should acknowledge the 3-D nature of the seismic wave field. Hence, from a theoretical point of view, the trace-by-trace process should be replaced by a full 3-D wave field deconvolution process. Interferometry by multidimensional deconvolution is more accurate than the trace-by-trace correlation and deconvolution approaches but the processing is more involved. In the presentation we will give a systematic analysis of seismic interferometry by crosscorrelation versus multi-dimensional deconvolution and discuss applications of both approaches.
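
    The contrast drawn above between crosscorrelation and trace-by-trace deconvolution reduces, per receiver pair, to two frequency-domain one-liners; multidimensional deconvolution replaces the scalar division below by the inversion of a matrix over all receivers. A sketch for even-length traces (the water-level eps is an assumed stabilization):

        import numpy as np

        def retrieve_by_correlation(u1, u2):
            """Circular crosscorrelation of two receiver traces."""
            U1, U2 = np.fft.rfft(u1), np.fft.rfft(u2)
            return np.fft.irfft(U2 * np.conj(U1))

        def retrieve_by_deconvolution(u1, u2, eps=1e-3):
            """Trace-by-trace (1-D) deconvolution with water-level damping."""
            U1, U2 = np.fft.rfft(u1), np.fft.rfft(u2)
            return np.fft.irfft(U2 * np.conj(U1) / (np.abs(U1)**2 + eps))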

  17. Calculating Path-Dependent Travel Time Prediction Variance and Covariance for the SALSA3D Global Tomographic P-Velocity Model with a Distributed Parallel Multi-Core Computer

    NASA Astrophysics Data System (ADS)

    Hipp, J. R.; Encarnacao, A.; Ballard, S.; Young, C. J.; Phillips, W. S.; Begnaud, M. L.

    2011-12-01

    Recently our combined SNL-LANL research team has succeeded in developing a global, seamless 3D tomographic P-velocity model (SALSA3D) that provides superior first P travel time predictions at both regional and teleseismic distances. However, given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we show a methodology for accomplishing this by exploiting the full model covariance matrix. Our model has on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB storage for 1/2 of a symmetric matrix, necessitating an Out-Of-Core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G which includes Tikhonov regularization terms) is multiplied by its transpose (GTG) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (GTG)-1 by assigning blocks to individual processing nodes for matrix decomposition update and scaling operations. We first find the Cholesky decomposition of GTG which is subsequently inverted. Next, we employ OOC matrix multiply methods to calculate the model covariance matrix from (GTG)-1 and an assumed data covariance matrix. Given the model covariance matrix we solve for the travel-time covariance associated with arbitrary ray-paths by integrating the model covariance along both ray paths. Setting the paths equal gives variance for that path. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  18. Path-Dependent Travel Time Prediction Variance and Covariance for a Global Tomographic P- and S-Velocity Model

    NASA Astrophysics Data System (ADS)

    Hipp, J. R.; Ballard, S.; Begnaud, M. L.; Encarnacao, A. V.; Young, C. J.; Phillips, W. S.

    2015-12-01

    Recently our combined SNL-LANL research team has succeeded in developing a global, seamless 3D tomographic P- and S-velocity model (SALSA3D) that provides superior first P and first S travel time predictions at both regional and teleseismic distances. However, given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we describe a methodology for accomplishing this by exploiting the full model covariance matrix and show examples of path-dependent travel time prediction uncertainty computed from our latest tomographic model. Typical global 3D SALSA3D models have on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB storage for 1/2 of a symmetric matrix, necessitating an Out-Of-Core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G which includes a prior model covariance constraint) is multiplied by its transpose (GTG) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (GTG)-1 by assigning blocks to individual processing nodes for matrix decomposition update and scaling operations. We first find the Cholesky decomposition of GTG which is subsequently inverted. Next, we employ OOC matrix multiplication methods to calculate the model covariance matrix from (GTG)-1 and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray-paths by summing the model covariance along both ray paths. Setting the paths equal and taking the square root yields the travel prediction uncertainty for the single path.
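
    Once the model covariance matrix is in hand, the path-dependent quantities reduce to quadratic forms in the rays' tomographic sensitivity vectors (path length per model node); a minimal sketch of that final step:

        import numpy as np

        def travel_time_covariance(g1, g2, C):
            """Covariance of two predicted travel times; g1 = g2 gives variance."""
            return g1 @ C @ g2

        # prediction uncertainty of a single path with sensitivity vector g:
        # sigma_t = np.sqrt(travel_time_covariance(g, g, C))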

  19. Early disrupted neurovascular coupling and changed event level hemodynamic response function in type 2 diabetes: an fMRI study.

    PubMed

    Duarte, João V; Pereira, João M S; Quendera, Bruno; Raimundo, Miguel; Moreno, Carolina; Gomes, Leonor; Carrilho, Francisco; Castelo-Branco, Miguel

    2015-10-01

    Type 2 diabetes (T2DM) patients develop vascular complications and have increased risk for neurophysiological impairment. Vascular pathophysiology may alter the blood flow regulation in cerebral microvasculature, affecting neurovascular coupling. Reduced fMRI signal can result from decreased neuronal activation or disrupted neurovascular coupling. The uncertainty about pathophysiological mechanisms (neurodegenerative, vascular, or both) underlying brain function impairments remains. In this cross-sectional study, we investigated if the hemodynamic response function (HRF) in lesion-free brains of patients is altered by measuring BOLD (Blood Oxygenation Level-Dependent) response to visual motion stimuli. We used a standard block design to examine the BOLD response and an event-related deconvolution approach. Importantly, the latter allowed for the first time to directly extract the true shape of HRF without any assumption and probe neurovascular coupling, using performance-matched stimuli. We discovered a change in HRF in early stages of diabetes. T2DM patients show significantly different fMRI response profiles. Our visual paradigm therefore demonstrated impaired neurovascular coupling in intact brain tissue. This implies that functional studies in T2DM require the definition of HRF, only achievable with deconvolution in event-related experiments. Further investigation of the mechanisms underlying impaired neurovascular coupling is needed to understand and potentially prevent the progression of brain function decrements in diabetes.

  20. Processes and mechanisms of persistent extreme precipitation events in East China

    NASA Astrophysics Data System (ADS)

    Zhai, Panmao; Chen, Yang

    2014-11-01

    This study presents recent progress on persistent extreme precipitation events (PEPEs) in East China. A definition addressing both the persistence and the extremity of daily precipitation is first proposed, and an identification method for quasi-stationary regional PEPEs is then designed. Using the identified PEPEs in East China, typical circulation configurations from the lower to the upper troposphere are confirmed, followed by investigations of synoptic precursors for key components with lead times of 1-2 weeks. Two characteristic circulation patterns responsible for PEPEs in East China are identified: a double-blocking-high type and a single-blocking-high type. Together they may account for the occurrence of nearly 80% of PEPEs during the last 60 years. For the double-blocking-high type, about two weeks prior to PEPEs, two blocking highs developed and progressed towards the Ural Mountains and the Sea of Okhotsk, respectively. A northwestward-progressing anomalous anticyclone conveying abundant moisture, and an eastward-extended South Asia High favoring divergence, can be detected about one week in advance. A dominant summertime teleconnection over East Asia, the East Asia/Pacific (EAP) pattern, is deemed another typical regime inducing PEPEs in East China. Key elements of the EAP pattern began moving westward about one week prior to PEPEs. Eastward energy dispersion and poleward energy dispersion contributed to the early development and the subsequent maintenance of this teleconnection pattern, respectively. These typical circulation patterns and significant precursors may offer local forecasters useful clues for identifying and predicting such high-impact precipitation events about 1-2 weeks in advance.

  1. Bundle block adjustment of large-scale remote sensing data with Block-based Sparse Matrix Compression combined with Preconditioned Conjugate Gradient

    NASA Astrophysics Data System (ADS)

    Zheng, Maoteng; Zhang, Yongjun; Zhou, Shunping; Zhu, Junfeng; Xiong, Xiaodong

    2016-07-01

    In recent years, new platforms and sensors in the photogrammetry, remote sensing and computer vision areas have become available, such as Unmanned Aircraft Vehicles (UAV), oblique camera systems, common digital cameras and even mobile phone cameras. Images collected by all these kinds of sensors can be used as remote sensing data sources. These sensors can obtain large-scale remote sensing data consisting of a great number of images. Bundle block adjustment of large-scale data with the conventional algorithm is very time- and memory-consuming due to the very large normal matrix arising from such data. In this paper, an efficient Block-based Sparse Matrix Compression (BSMC) method combined with the Preconditioned Conjugate Gradient (PCG) algorithm is chosen to develop a stable and efficient bundle block adjustment system to deal with large-scale remote sensing data. The main contribution of this work is the BSMC-based PCG algorithm, which is more efficient in time and memory than the traditional algorithm without compromising accuracy. In total, 8 real datasets are used to test the proposed method. Preliminary results show that the BSMC method can efficiently decrease the time and memory requirements of large-scale data.
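
    The PCG part of such a system can be sketched with SciPy's sparse machinery; the block-Jacobi preconditioner and the 3x3 block size below are illustrative stand-ins for the paper's BSMC layout, not its implementation:

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import LinearOperator, cg

        def block_jacobi_preconditioner(N, bs=3):
            """Inverse of the diagonal blocks of the sparse normal matrix N."""
            n = N.shape[0]
            inv_blocks = [np.linalg.inv(N[i:i + bs, i:i + bs].toarray())
                          for i in range(0, n, bs)]
            M_inv = sp.block_diag(inv_blocks, format='csr')
            return LinearOperator(N.shape, matvec=lambda r: M_inv @ r)

        # solve the normal equations N x = b of the adjustment:
        # x, info = cg(N, b, M=block_jacobi_preconditioner(N))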

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glicken, H.

    Large volcanic debris avalanches are among the world's largest mass movements. The rockslide-debris avalanche of the May 18, 1980, eruption of Mount St. Helens produced a 2.8 km³ deposit and is the largest historic mass movement. A Pleistocene debris avalanche at Mount Shasta produced a 26 km³ deposit that may be the largest Quaternary mass movement. The hummocky deposits at both volcanoes consist of rubble divided into (1) block facies, which comprises unconsolidated pieces of the old edifice transported relatively intact, and (2) matrix facies, which comprises a mixture of rocks from the old mountain and material picked up from the surrounding terrain. At Mount St. Helens, the juvenile dacite is found in the matrix facies, indicating that matrix facies formed from explosions of the erupting magma as well as from disaggregation and mixing of blocks. The block facies forms both hummocks and interhummock areas in the proximal part of the St. Helens avalanche deposit. At Mount St. Helens, the density of the old cone is 21% greater than the density of the avalanche deposit. Block size decreases with distance. Clast size, measured in the field and by sieving, converges about a mean with distance, which suggests that blocks disaggregated and mixed together during transport.

  3. Mesofauna Influence on Humification Process of Vegetable Oddments with Participation Microarthropod

    ERIC Educational Resources Information Center

    Simonov, Yuriy V.; Svetkina, Irina A.; Kryuchkov, Konstantin V.

    2016-01-01

    The relevance of the studied problem stems from the fact that the stability of natural ecosystems strongly depends on the functioning of their destructive block, which closes the biological cycle. The organisms that ensure the functioning of the destructive block are very diverse and numerous. All of them partly supplement, partly duplicate the functions of…

  4. ALLocator: an interactive web platform for the analysis of metabolomic LC-ESI-MS datasets, enabling semi-automated, user-revised compound annotation and mass isotopomer ratio analysis.

    PubMed

    Kessler, Nikolas; Walter, Frederik; Persicke, Marcus; Albaum, Stefan P; Kalinowski, Jörn; Goesmann, Alexander; Niehaus, Karsten; Nattkemper, Tim W

    2014-01-01

    Adduct formation, fragmentation events and matrix effects impose special challenges to the identification and quantitation of metabolites in LC-ESI-MS datasets. An important step in compound identification is the deconvolution of mass signals. During this processing step, peaks representing adducts, fragments, and isotopologues of the same analyte are allocated to a distinct group, in order to separate peaks from coeluting compounds. From these peak groups, neutral masses and pseudo spectra are derived and used for metabolite identification via mass decomposition and database matching. Quantitation of metabolites is hampered by matrix effects and nonlinear responses in LC-ESI-MS measurements. A common approach to correct for these effects is the addition of a U-13C-labeled internal standard and the calculation of mass isotopomer ratios for each metabolite. Here we present a new web-platform for the analysis of LC-ESI-MS experiments. ALLocator covers the workflow from raw data processing to metabolite identification and mass isotopomer ratio analysis. The integrated processing pipeline for spectra deconvolution "ALLocatorSD" generates pseudo spectra and automatically identifies peaks emerging from the U-13C-labeled internal standard. Information from the latter improves mass decomposition and annotation of neutral losses. ALLocator provides an interactive and dynamic interface to explore and enhance the results in depth. Pseudo spectra of identified metabolites can be stored in user- and method-specific reference lists that can be applied on succeeding datasets. The potential of the software is exemplified in an experiment, in which abundance fold-changes of metabolites of the l-arginine biosynthesis in C. glutamicum type strain ATCC 13032 and l-arginine producing strain ATCC 21831 are compared. Furthermore, the capability for detection and annotation of uncommon large neutral losses is shown by the identification of (γ-)glutamyl dipeptides in the same strains. ALLocator is available online at: https://allocator.cebitec.uni-bielefeld.de. A login is required, but freely available.

  5. 3D model of a matrix source of negative ions: RF driving by a large area planar coil

    NASA Astrophysics Data System (ADS)

    Demerdzhiev, A.; Lishev, St.; Tarnev, Kh.; Shivarova, A.

    2015-04-01

    Based on three-dimensional (3D) modeling, different ways of driving a plasma source, configured as a matrix of small-radius hydrogen discharges, with planar-coil inductive coupling are studied, with the aim of choosing an efficient and uniform rf power deposition into the separate discharges of the matrix. Two cases are considered: driving the whole matrix by a single coil, and splitting it into blocks of discharge tubes with a single coil driving each block. The results from the self-consistent model presented for a block of discharge tubes show its reliability in ensuring the same spatial distribution of the plasma parameters in the discharges composing the block. Since, with regard to the construction of the matrix, driving it as a whole by a single coil is the most practical choice, three modifications of the coil design have been tested: two zigzag coils with straight conductors passing, respectively, between and through the bottoms of the discharge tubes, and a coil with an "omega"-shaped conductor on the bottom of each tube. Among these three configurations, the latter ‒ a coil with an Ω-shaped conductor on the bottom of each tube ‒ yields the highest rf efficiency of inductive discharge driving, as shown by results for the rf current induced in the discharges obtained from an electrodynamical description. In all the cases considered, the spatial distribution of the induced current density is analysed based on the manner in which the wave field sustaining the inductive discharges penetrates into the plasma.

  6. An optimized algorithm for multiscale wideband deconvolution of radio astronomical images

    NASA Astrophysics Data System (ADS)

    Offringa, A. R.; Smirnov, O.

    2017-10-01

    We describe a new multiscale deconvolution algorithm that can also be used in a multifrequency mode. The algorithm only affects the minor clean loop. In single-frequency mode, the minor loop of our improved multiscale algorithm is over an order of magnitude faster than the casa multiscale algorithm, and produces results of similar quality. For multifrequency deconvolution, a technique named joined-channel cleaning is used. In this mode, the minor loop of our algorithm is two to three orders of magnitude faster than casa msmfs. We extend the multiscale mode with automated scale-dependent masking, which allows structures to be cleaned below the noise. We describe a new scale-bias function for use in multiscale cleaning. We test a second deconvolution method that is a variant of the moresane deconvolution technique and uses a convex optimization technique with isotropic undecimated wavelets as its dictionary. On simple, well-calibrated data, the convex optimization algorithm produces visually more representative models. On complex or imperfect data, the convex optimization algorithm has stability issues.

  7. New regularization scheme for blind color image deconvolution

    NASA Astrophysics Data System (ADS)

    Chen, Li; He, Yu; Yap, Kim-Hui

    2011-01-01

    This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle each color channel independently, thereby ignoring the interchannel correlation present in color images. In view of this, a unified regularization scheme is developed to recover the edges of color images and reduce color artifacts. In addition, by using color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term into blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of various parametric blur structures, and this information is integrated into the deconvolution scheme. An optimization procedure called alternating minimization is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method is able to achieve satisfactory restored color images under different blurring conditions.
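
    To make the alternating minimization concrete, here is a hedged single-channel sketch with simple quadratic (Tikhonov) stand-ins for the paper's image- and blur-domain regularizers; periodic boundaries and an identity-blur initialization are assumed, and the interchannel coupling of the actual method is omitted.

```python
import numpy as np

def tikhonov_update(Y, H, lam):
    """Closed-form minimizer of ||H.Z - Y||^2 + lam*||Z||^2, per Fourier coefficient."""
    return np.conj(H) * Y / (np.abs(H) ** 2 + lam)

def blind_deconv(y, iters=20, lam_img=1e-2, lam_ker=1e-1):
    Y = np.fft.fft2(y)
    k = np.zeros_like(y, dtype=float); k[0, 0] = 1.0   # start from an identity blur
    for _ in range(iters):
        K = np.fft.fft2(k)
        X = tikhonov_update(Y, K, lam_img)    # image step with the blur held fixed
        K = tikhonov_update(Y, X, lam_ker)    # blur step with the image held fixed
        k = np.real(np.fft.ifft2(K))
        k = np.clip(k, 0.0, None)             # enforce a non-negative blur kernel...
        k /= max(k.sum(), 1e-12)              # ...with unit mass
    return np.real(np.fft.ifft2(X)), k
```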

  8. Spheroid Culture of Head and Neck Cancer Cells Reveals an Important Role of EGFR Signalling in Anchorage Independent Survival.

    PubMed

    Braunholz, Diana; Saki, Mohammad; Niehr, Franziska; Öztürk, Merve; Borràs Puértolas, Berta; Konschak, Robert; Budach, Volker; Tinhofer, Ingeborg

    2016-01-01

    In solid tumours, millions of cells are shed into the blood circulation each day. Only a subset of these circulating tumour cells (CTCs) survive, many of them presumably because of their potential to form multi-cellular clusters, also named spheroids. Tumour cells within these spheroids are protected from anoikis, which allows them to metastasize to distant organs or re-seed at the primary site. We used spheroid cultures of head and neck squamous cell carcinoma (HNSCC) cell lines as a model for such CTC clusters to determine the role of the epidermal growth factor receptor (EGFR) in cluster formation ability and cell survival after detachment from the extra-cellular matrix. The HNSCC cell lines FaDu, SCC-9 and UT-SCC-9 (UT-SCC-9P) as well as its cetuximab (CTX)-resistant sub-clone (UT-SCC-9R) were forced to grow in an anchorage-independent manner by coating culture dishes with the anti-adhesive polymer poly-2-hydroxyethylmethacrylate (poly-HEMA). The extent of apoptosis, clonogenic survival and EGFR signalling under such culture conditions was evaluated. The potential for spheroid formation in suspension culture was found to be positively correlated with the proliferation rate of HNSCC cell lines as well as their basal EGFR expression levels. CTX and gefitinib blocked, whereas the addition of EGFR ligands promoted, anchorage-independent cell survival and spheroid formation. Increased spheroid formation and growth were associated with persistent activation of EGFR and its downstream signalling component (MAPK/ERK). Importantly, HNSCC cells derived from spheroid cultures retained their clonogenic potential in the absence of cell-matrix contact. Addition of CTX under these conditions strongly inhibited colony formation in CTX-sensitive cell lines but not in their resistant subclones. Altogether, EGFR activation was identified as a crucial factor for anchorage-independent survival of HNSCC cells. Targeting EGFR in CTC cluster formation might represent an attractive anti-metastatic treatment approach in HNSCC.

  9. METHOD OF AND APPARATUS FOR WITHDRAWING LIGHT ISOTOPIC PRODUCT FROM A LIQUID THERMAL DIFFUSION PLANT

    DOEpatents

    Dole, M.

    1959-09-22

    An improved process and apparatus are described for removing enriched product from the columns of a thermal diffusion plant for separation of isotopes. In the removal cycle, light product at the top of the diffusion columns is circulated through the column tops and a shipping cylinder connected thereto until the concentration of enriched product in the cylinder reaches the desired point. During the removal, circulation through the bottoms is blocked by freezing. In the diffusion cycle, the bottom portion is unfrozen, fresh feed is distributed to the bottoms of the columns, and heavy product is withdrawn from the bottoms, while the tops of the columns are blocked by freezing.

  10. Rice Protein Matrix Enhances Circulating Levels of Xanthohumol Following Acute Oral Intake of Spent Hops in Humans.

    PubMed

    O'Connor, Annalouise; Konda, Veera; Reed, Ralph L; Christensen, J Mark; Stevens, Jan F; Contractor, Nikhat

    2018-03-01

    Xanthohumol (XN), a prenylated flavonoid found in hops, exhibits anti-inflammatory and antioxidant properties. However, poor bioavailability may limit therapeutic applications. As food components are known to modulate polyphenol absorption, the objective is to determine whether a protein matrix could enhance the bioavailability of XN after oral consumption in humans. This is a randomized, double-blind, crossover study in healthy participants (n = 6) evaluating XN and its major metabolites (isoxanthohumol [IX], 6- and 8-prenylnaringenin [6-PN, 8-PN]) for 6 h following consumption of 12.4 mg of XN delivered via a spent hops-rice protein matrix preparation or a control spent hops preparation. Plasma XN and metabolites are measured by LC-MS/MS. Cmax, Tmax, and area-under-the-curve (AUC) values were determined. Circulating XN and metabolite response to each treatment was not bioequivalent. Plasma concentrations of XN and XN + metabolites (AUC) are greater with consumption of the spent hops-rice protein matrix preparation. Compared to a standard spent hops powder, a protein-rich spent hops matrix demonstrates enhanced plasma levels of XN and metabolites following acute oral intake. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Methods and Apparatus for Reducing Multipath Signal Error Using Deconvolution

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor); Lau, Kenneth H. (Inventor)

    1999-01-01

    A deconvolution approach to adaptive signal processing has been applied to the elimination of signal multipath errors, as embodied in one preferred embodiment in a global positioning system receiver. The method and receiver of the present invention estimate and then compensate for multipath effects in a comprehensive manner. Application of deconvolution, along with other adaptive identification and estimation techniques, results in a completely novel GPS (Global Positioning System) receiver architecture.

  12. Modeling CO2 Storage in Fractured Reservoirs: Fracture-Matrix Interactions of Free-Phase and Dissolved CO2

    NASA Astrophysics Data System (ADS)

    Oldenburg, C. M.; Zhou, Q.; Birkholzer, J. T.

    2017-12-01

    The injection of supercritical CO2 (scCO2) into fractured reservoirs has been conducted at several storage sites. However, no site-specific dual-continuum modeling for fractured reservoirs has been reported, and modeling studies have generally underestimated the fracture-matrix interactions. We developed a conceptual model for enhanced CO2 storage that takes into account global scCO2 migration in the fracture continuum, local storage of scCO2 and dissolved CO2 (dsCO2) in the matrix continuum, and driving forces for scCO2 invasion and dsCO2 diffusion from fractures. High-resolution discrete fracture-matrix models were developed for a column of idealized matrix blocks bounded by vertical and horizontal fractures and for a km-scale fractured reservoir. The column-scale simulation results show that equilibrium storage efficiency strongly depends on matrix entry capillary pressure and matrix-matrix connectivity, while the time scale to reach equilibrium is sensitive to fracture spacing and matrix flow properties. The reservoir-scale modeling results show that the preferential migration of scCO2 through fractures is coupled with bulk storage in the rock matrix, which in turn retards the fracture scCO2 plume. We also developed unified-form diffusive flux equations to account for dsCO2 storage in brine-filled matrix blocks and found that solubility trapping is significant in fractured reservoirs with a low-permeability matrix.

  13. Improving space debris detection in GEO ring using image deconvolution

    NASA Astrophysics Data System (ADS)

    Núñez, Jorge; Núñez, Anna; Montojo, Francisco Javier; Condominas, Marta

    2015-07-01

    In this paper we present a method based on image deconvolution to improve the detection of space debris, mainly in the geostationary ring. Among the deconvolution methods we chose the iterative Richardson-Lucy (R-L) method, as the one that achieves the best results with a reasonable amount of computation. For this work, we used two sets of real 4096 × 4096 pixel test images obtained with the Telescope Fabra-ROA at Montsec (TFRM). Using the first set of data, we establish the optimal number of iterations at 7, and applying the R-L method with 7 iterations to the images, we show that the astrometric accuracy does not vary significantly while the limiting magnitude of the deconvolved images increases significantly compared to the original ones. The increase is on average about 1.0 magnitude, which means that objects up to 2.5 times fainter can be detected after deconvolution. The application of the method to the second set of test images, which includes several faint objects, shows that, after deconvolution, up to four previously undetected faint objects are detected in a single frame. Finally, we carried out a study of some economic aspects of applying the deconvolution method, showing that a significant economic benefit can be envisaged.
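
    For reference, the Richardson-Lucy iteration used here is compact enough to sketch directly; the flat initialization and FFT-based convolution with a known, normalized PSF are assumptions of this illustration.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iters=7):
    """Classic R-L update: x <- x * correlate(image / convolve(x, psf), psf)."""
    x = np.full(image.shape, float(image.mean()))   # flat initial estimate
    psf_flip = psf[::-1, ::-1]                      # correlation via flipped kernel
    for _ in range(iters):
        denom = fftconvolve(x, psf, mode="same")
        x *= fftconvolve(image / np.maximum(denom, 1e-12), psf_flip, mode="same")
    return x
```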

  14. Richardson-Lucy deconvolution as a general tool for combining images with complementary strengths.

    PubMed

    Ingaramo, Maria; York, Andrew G; Hoogendoorn, Eelco; Postma, Marten; Shroff, Hari; Patterson, George H

    2014-03-17

    We use Richardson-Lucy (RL) deconvolution to combine multiple images of a simulated object into a single image in the context of modern fluorescence microscopy techniques. RL deconvolution can merge images with very different point-spread functions, such as in multiview light-sheet microscopes, while preserving the best resolution information present in each image. We show that RL deconvolution is also easily applied to merge high-resolution, high-noise images with low-resolution, low-noise images, relevant when complementing conventional microscopy with localization microscopy. We also use RL deconvolution to merge images produced by different simulated illumination patterns, relevant to structured illumination microscopy (SIM) and image scanning microscopy (ISM). The quality of our ISM reconstructions is at least as good as reconstructions using standard inversion algorithms for ISM data, but our method follows a simpler recipe that requires no mathematical insight. Finally, we apply RL deconvolution to merge a series of ten images with varying signal and resolution levels. This combination is relevant to gated stimulated-emission depletion (STED) microscopy, and shows that merges of high-quality images are possible even in cases for which a non-iterative inversion algorithm is unknown. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
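
    A hedged sketch of the multi-image idea: each view contributes its own PSF, and the multiplicative R-L updates from all views are applied in turn within one iteration. The update ordering and the absence of noise weighting are simplifying assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def rl_fuse(images, psfs, iters=20):
    """Fuse several views of the same object, one R-L update per view per iteration."""
    x = np.full(images[0].shape, float(np.mean(images[0])))
    for _ in range(iters):
        for y, psf in zip(images, psfs):
            denom = fftconvolve(x, psf, mode="same")
            x *= fftconvolve(y / np.maximum(denom, 1e-12),
                             psf[::-1, ::-1], mode="same")
    return x
```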

  15. Dense deconvolution net: Multi path fusion and dense deconvolution for high resolution skin lesion segmentation.

    PubMed

    He, Xinzi; Yu, Zhen; Wang, Tianfu; Lei, Baiying; Shi, Yiyan

    2018-01-01

    Dermoscopy imaging has been a routine examination approach for skin lesion diagnosis. Accurate segmentation is the first step in automatic dermoscopy image assessment. The main challenges for skin lesion segmentation are the numerous variations in viewpoint and scale of the skin lesion region. To handle these challenges, we propose a novel skin lesion segmentation network based on a very deep dense deconvolution network applied to dermoscopic images. Specifically, deep dense layers and the generic multi-path Deep RefineNet are combined to improve the segmentation performance. The deep representation of all available layers is aggregated to form global feature maps using skip connections. Also, the dense deconvolution layer is leveraged to capture diverse appearance features via contextual information. Finally, we apply the dense deconvolution layer to smooth segmentation maps and obtain the final high-resolution output. Our proposed method shows superiority over state-of-the-art approaches on the publicly available 2016 and 2017 skin lesion challenge datasets, achieving accuracies of 96.0% and 93.9%, a 6.0% and 1.2% improvement over the traditional method, respectively. Using the Dense Deconvolution Net, the average time for processing one test image with our proposed framework was 0.253 s.

  16. An accelerated non-Gaussianity based multichannel predictive deconvolution method with the limited supporting region of filters

    NASA Astrophysics Data System (ADS)

    Li, Zhong-xiao; Li, Zhen-chun

    2016-09-01

    The multichannel predictive deconvolution can be conducted in overlapping temporal and spatial data windows to solve for the 2D predictive filter for multiple removal. Generally, the 2D predictive filter can better remove multiples, at the cost of more computation time, compared with the 1D predictive filter. In this paper we first use the cross-correlation strategy to determine the limited supporting region of filters, i.e. the region of the filter coefficient space where the coefficients play a major role in multiple removal. To solve for the 2D predictive filter, the traditional multichannel predictive deconvolution uses the least squares (LS) algorithm, which requires that primaries and multiples be orthogonal. To relax the orthogonality assumption, the iterative reweighted least squares (IRLS) algorithm and the fast iterative shrinkage thresholding (FIST) algorithm have been used to solve the 2D predictive filter in the multichannel predictive deconvolution with the non-Gaussian maximization (L1-norm minimization) constraint on primaries. The FIST algorithm has been demonstrated to be a faster alternative to the IRLS algorithm. In this paper we introduce the FIST algorithm to solve for the filter coefficients in the limited supporting region of filters. Compared with FIST-based multichannel predictive deconvolution without the limited supporting region of filters, the proposed method can reduce the computational burden effectively while achieving similar accuracy. Additionally, the proposed method can better balance multiple removal and primary preservation than the traditional LS-based multichannel predictive deconvolution and FIST-based single-channel predictive deconvolution. Synthetic and field data sets demonstrate the effectiveness of the proposed method.
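
    As a reference for the optimization machinery, here is a generic FISTA sketch for an L1-regularized filter estimate; the plain lasso objective below is a stand-in for the paper's non-Gaussianity constraint on primaries, so the data matrix M and target d are illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(M, d, lam, iters=200):
    """Solve min_f 0.5*||d - M f||^2 + lam*||f||_1 with Nesterov acceleration."""
    L = np.linalg.norm(M, 2) ** 2          # Lipschitz constant of the gradient
    f = np.zeros(M.shape[1]); z = f.copy(); t = 1.0
    for _ in range(iters):
        grad = M.T @ (M @ z - d)
        f_new = soft_threshold(z - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = f_new + ((t - 1) / t_new) * (f_new - f)   # momentum step
        f, t = f_new, t_new
    return f
```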

  17. Studying Regional Wave Source Time Functions Using a Massive Automated EGF Deconvolution Procedure

    NASA Astrophysics Data System (ADS)

    Xie, J.; Schaff, D. P.

    2010-12-01

    Reliably estimated source time functions (STF) from high-frequency regional waveforms, such as Lg, Pn and Pg, provide important input for seismic source studies, explosion detection, and minimization of parameter trade-offs in attenuation studies. The empirical Green's function (EGF) method can be used for estimating the STF, but it requires strict recording conditions. Waveforms from pairs of events that are similar in focal mechanism but different in magnitude must be recorded on-scale at the same stations for the method to work. Searching for such waveforms can be very time consuming, particularly for regional waves that contain complex path effects and have reduced S/N ratios due to attenuation. We have developed a massive, automated procedure to conduct inter-event waveform deconvolution calculations from many candidate event pairs. The procedure automatically evaluates the "spikiness" of the deconvolutions by calculating their "sdc", which is defined as the peak divided by the background value. The background value is calculated as the mean absolute value of the deconvolution, excluding 10 s around the source time function. When the sdc values are about 10 or higher, the deconvolutions are found to be sufficiently spiky (pulse-like), indicating similar path Green's functions and good estimates of the STF. We have applied this automated procedure to Lg waves and full regional wavetrains from 989 M ≥ 5 events in and around China, calculating about a million deconvolutions. Of these we found about 2700 deconvolutions with sdc greater than 9, which, if they have a sufficiently broad frequency band, can be used to estimate the STFs of the larger events. We are currently refining our procedure, as well as the estimated STFs. We will infer source scaling using the STFs. We will also explore the possibility that the deconvolution procedure could complement cross-correlation in a real-time event-screening process.
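
    The sdc measure lends itself to a direct implementation; the sketch below follows the definition above (peak over mean absolute background, excluding 10 s around the source time function), with the sampling rate and the centring of the exclusion window on the peak as assumptions.

```python
import numpy as np

def sdc(decon, fs, exclude_s=10.0):
    """Peak over mean-absolute background, excluding exclude_s seconds around the peak."""
    ipeak = int(np.argmax(np.abs(decon)))
    half = int(exclude_s * fs / 2)
    mask = np.ones(decon.size, dtype=bool)
    mask[max(0, ipeak - half):ipeak + half + 1] = False
    background = np.mean(np.abs(decon[mask]))
    return np.abs(decon[ipeak]) / background

# Deconvolutions with sdc of roughly 10 or higher are treated as spiky enough
# to give a usable source-time-function estimate.
```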

  18. A novel partial volume effects correction technique integrating deconvolution associated with denoising within an iterative PET image reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merlin, Thibaut, E-mail: thibaut.merlin@telecom-bretagne.eu; Visvikis, Dimitris; Fernandez, Philippe

    2015-02-15

    Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied to PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly, with the Lucy–Richardson deconvolution algorithm applied to the current estimate of the image at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background, estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step limited the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with wavelet-based denoising into the reconstruction process to better correct for PVE. Future work includes further evaluations of the proposed method on clinical datasets and the use of improved PSF models.

  19. Multiscale Architecture of a Subduction Complex and Insight into Large-scale Material Movement in Subduction Systems

    NASA Astrophysics Data System (ADS)

    Wakabayashi, J.

    2014-12-01

    The >1000 km by >100 km Franciscan complex of California records >100 Ma of subduction history that terminated with conversion to a transform margin. It affords an ideal natural laboratory to study the rock record of subduction-interface and related processes exhumed from 10-70 km. The Franciscan comprises coherent and block-in-matrix (mélange) units forming a nappe stack that youngs structurally downward in accretion age, indicating progressive subduction accretion. Gaps in accretion ages indicate periods of non-accretion or subduction erosion. The Franciscan comprises siliciclastic trench fill rocks, with lesser volcanic and pelagic rocks and serpentinite derived from the downgoing plate, as well as serpentinite and felsic-intermediate igneous blocks derived as detritus from the upper plate. The Franciscan records subduction, accretion, and metamorphism (including HP), spanning an extended period of subduction, rather than a single event superimposed on pre-formed stratigraphy. Melanges (serpentinite and siliciclastic matrix) with exotic blocks, that include high-grade metamorphic blocks, and felsic-intermediate igneous blocks from the upper plate, are mostly/entirely of sedimentary origin, whereas block-in-matrix rocks formed by tectonism lack exotic blocks and comprise disrupted ocean plate stratigraphy. Mélanges with exotic blocks are interbedded with coherent sandstones. Many blocks-in-melange record two HP burial events followed by surface exposure, and some record three. Paleomegathrust horizons, separating nappes accreted at different times, appear restricted to narrow fault zones of <100's of m thickness, and <50 m in best constrained cases; these zones lack exotic blocks. Large-scale displacements, whether paleomegathrust horizons, shortening within accreted nappes, or exhumation structures, are accommodated by discrete faults or narrow shear zones, rather than by significant penetrative strain. Exhumation of Franciscan HP units, both coherent and mélange, was accommodated by significant extension of the overlying plate, and possibly extension within the subduction complex, with cross-sectional extrusion, and like subduction burial, took place at different times.

  20. Nano-structured polymer composites and process for preparing same

    DOEpatents

    Hillmyer, Marc; Chen, Liang

    2013-04-16

    A process for preparing a polymer composite that includes reacting (a) a multi-functional monomer and (b) a block copolymer comprising (i) a first block and (ii) a second block that includes a functional group capable of reacting with the multi-functional monomer, to form a crosslinked, nano-structured, bi-continuous composite. The composite includes a continuous matrix phase and a second continuous phase comprising the first block of the block copolymer.

  1. New self-assembly strategies for next generation lithography

    NASA Astrophysics Data System (ADS)

    Schwartz, Evan L.; Bosworth, Joan K.; Paik, Marvin Y.; Ober, Christopher K.

    2010-04-01

    Future demands of the semiconductor industry call for robust patterning strategies for critical dimensions below twenty nanometers. The self-assembly of block copolymers stands out as a promising, potentially lower-cost alternative to other technologies such as e-beam or nanoimprint lithography. One approach is to use block copolymers that can be lithographically patterned by incorporating a negative-tone photoresist as the majority (matrix) phase of the block copolymer, paired with a photoacid generator and a crosslinker moiety. In this system, poly(α-methylstyrene-block-hydroxystyrene) (PαMS-b-PHOST), the block copolymer is spin-coated as a thin film, processed to a desired microdomain orientation with long-range order, and then photopatterned. Therefore, self-assembly of the block copolymer only occurs in select areas due to the crosslinking of the matrix phase, and the minority-phase polymer can be removed to produce a nanoporous template. Using bulk TEM analysis, we demonstrate that the critical dimension of this block copolymer scales with polymer molecular weight according to a simple power law. Enthalpic interactions such as hydrogen bonding are used to blend in inorganic additives in order to enhance the etch resistance of the PHOST block. We demonstrate how lithographically patternable block copolymers might fit into future processing strategies to produce etch-resistant self-assembled features at length scales impossible with conventional lithography.

  2. Data enhancement and analysis through mathematical deconvolution of signals from scientific measuring instruments

    NASA Technical Reports Server (NTRS)

    Wood, G. M.; Rayborn, G. H.; Ioup, J. W.; Ioup, G. E.; Upchurch, B. T.; Howard, S. J.

    1981-01-01

    Mathematical deconvolution of digitized analog signals from scientific measuring instruments is shown to be a means of extracting important information which is otherwise hidden due to time-constant and other broadening or distortion effects caused by the experiment. Three different approaches to deconvolution and their subsequent application to recorded data from three analytical instruments are considered. To demonstrate the efficacy of deconvolution, the use of these approaches to solve the convolution integral for the gas chromatograph, magnetic mass spectrometer, and the time-of-flight mass spectrometer are described. Other possible applications of these types of numerical treatment of data to yield superior results from analog signals of the physical parameters normally measured in aerospace simulation facilities are suggested and briefly discussed.

  3. Parallelization of a blind deconvolution algorithm

    NASA Astrophysics Data System (ADS)

    Matson, Charles L.; Borelli, Kathy J.

    2006-09-01

    Often it is of interest to deblur imagery in order to obtain higher-resolution images. Deblurring requires knowledge of the blurring function - information that is often not available separately from the blurred imagery. Blind deconvolution algorithms overcome this problem by jointly estimating both the high-resolution image and the blurring function from the blurred imagery. Because blind deconvolution algorithms are iterative in nature, they can take minutes to days to deblur an image, depending on how many frames of data are used for the deblurring and the platforms on which the algorithms are executed. Here we present our progress in parallelizing a blind deconvolution algorithm to increase its execution speed. This progress includes sub-frame parallelization and a code structure that is not specialized to a specific computer hardware architecture.

  4. Improved deconvolution of very weak confocal signals.

    PubMed

    Day, Kasey J; La Rivière, Patrick J; Chandler, Talon; Bindokas, Vytas P; Ferrier, Nicola J; Glick, Benjamin S

    2017-01-01

    Deconvolution is typically used to sharpen fluorescence images, but when the signal-to-noise ratio is low, the primary benefit is reduced noise and a smoother appearance of the fluorescent structures. 3D time-lapse (4D) confocal image sets can be improved by deconvolution. However, when the confocal signals are very weak, the popular Huygens deconvolution software erases fluorescent structures that are clearly visible in the raw data. We find that this problem can be avoided by prefiltering the optical sections with a Gaussian blur. Analysis of real and simulated data indicates that the Gaussian blur prefilter preserves meaningful signals while enabling removal of background noise. This approach is very simple, and it allows Huygens to be used with 4D imaging conditions that minimize photodamage.
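
    A minimal sketch of the prefiltering idea: blur each volume with a Gaussian before deconvolving so that very weak structures are not discarded as noise. The sigma value and the plain Richardson-Lucy step standing in for the Huygens software are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve

def prefiltered_deconv(stack, psf, sigma=1.0, iters=10):
    """Gaussian-blur a 3D stack, then run a plain Richardson-Lucy deconvolution."""
    smoothed = gaussian_filter(stack.astype(float), sigma=sigma)
    x = np.full(smoothed.shape, smoothed.mean())
    flip = psf[::-1, ::-1, ::-1]            # 3D PSF assumed
    for _ in range(iters):
        denom = fftconvolve(x, psf, mode="same")
        x *= fftconvolve(smoothed / np.maximum(denom, 1e-12), flip, mode="same")
    return x
```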

  5. New Approaches to the Parameterization of Gravity-Wave and Flow-Blocking Drag due to Unresolved Mesoscale Orography Guided by Mesoscale Model Predictability Research

    DTIC Science & Technology

    2012-09-30

    …semiannual oscillation (SAO) and quasi-biennial oscillation (QBO) of stratospheric equatorial winds in long-term (10-year) nature runs. The ability of these new schemes to generate and maintain tropical SAO and QBO circulations in Navy models for the first time is an important breakthrough, since these circulations…

  6. Nanocomposites based on self-assembly poly(hydroxypropyl methacrylate)-block-poly(N-phenylmaleimide) and Fe3O4-NPs. Thermal stability, morphological characterization and optical properties

    NASA Astrophysics Data System (ADS)

    Pizarro, Guadalupe del C.; Marambio, Oscar G.; Jeria-Orell, Manuel; Sánchez, Julio; Oyarzún, Diego P.

    2018-02-01

    The current work presents the synthesis, characterization and preparation of organic-inorganic hybrid polymer films that contain inorganic magnetic nanoparticles (NPs). The block copolymer, prepared by Atom-Transfer Radical Polymerization (ATRP), was used as a nanoreactor for iron oxide NPs. The NPs were embedded in a poly(hydroxypropyl methacrylate)-block-poly(N-phenylmaleimide) matrix. The following topographical modifications of the surface of the film were specifically analyzed: control of pore features and changes in surface roughness. Finally, the functionality of the NPs inside the polymer matrix and how it may affect the thermal and optical properties of the films were assessed.

  7. Feasibility Study to Evaluate Candidate Materials of Nanofilled Block Copolymers for Use in Ultra High Density Pulsed Power Capacitors

    DTIC Science & Technology

    2015-10-26

    The strategy of grafting block copolymer (BCP) chains to nanoparticles (BCP-g-NPs) in order to chemically match the corona of the nanoparticles with the BCP matrix has resulted in a highly dispersed BCP… fast energy storage and discharge capabilities. However, the energy storage density of these capacitors is limited by the dielectric properties of…

  8. Using redundancy of round-trip ultrasound signal for non-continuous arrays: Application to gap and blockage compensation.

    PubMed

    Robert, Jean-Luc; Erkamp, Ramon; Korukonda, Sanghamithra; Vignon, François; Radulescu, Emil

    2015-11-01

    In ultrasound imaging, an array of elements is used to image a medium. If part of the array is blocked by an obstacle, or if the array is made from several sub-arrays separated by a gap, grating lobes appear and the image is degraded. The grating lobes are caused by missing spatial frequencies, corresponding to the blocked or non-existing elements. However, in an active imaging system, where elements are used both for transmitting and receiving, the round trip signal is redundant: different pairs of transmit and receive elements carry similar information. It is shown here that, if the gaps are smaller than the active sub-apertures, this redundancy can be used to compensate for the missing signals and recover full resolution. Three algorithms are proposed: one is based on a synthetic aperture method, a second one uses dual-apodization beamforming, and the third one is a radio frequency (RF) data based deconvolution. The algorithms are evaluated on simulated and experimental data sets. An application could be imaging through ribs with a large aperture.

  9. GASPACHO: a generic automatic solver using proximal algorithms for convex huge optimization problems

    NASA Astrophysics Data System (ADS)

    Goossens, Bart; Luong, Hiêp; Philips, Wilfried

    2017-08-01

    Many inverse problems (e.g., demosaicking, deblurring, denoising, image fusion, HDR synthesis) share various similarities: degradation operators are often modeled by a specific data fitting function while image prior knowledge (e.g., sparsity) is incorporated by additional regularization terms. In this paper, we investigate automatic algorithmic techniques for evaluating proximal operators. These algorithmic techniques also enable efficient calculation of adjoints from linear operators in a general matrix-free setting. In particular, we study the simultaneous-direction method of multipliers (SDMM) and the parallel proximal algorithm (PPXA) solvers and show that the automatically derived implementations are well suited for both single-GPU and multi-GPU processing. We demonstrate this approach for an Electron Microscopy (EM) deconvolution problem.

  10. Inverting pump-probe spectroscopy for state tomography of excitonic systems.

    PubMed

    Hoyer, Stephan; Whaley, K Birgitta

    2013-04-28

    We propose a two-step protocol for inverting ultrafast spectroscopy experiments on a molecular aggregate to extract the time-evolution of the excited state density matrix. The first step is a deconvolution of the experimental signal to determine a pump-dependent response function. The second step inverts this response function to obtain the quantum state of the system, given a model for how the system evolves following the probe interaction. We demonstrate this inversion analytically and numerically for a dimer model system, and evaluate the feasibility of scaling it to larger molecular aggregates such as photosynthetic protein-pigment complexes. Our scheme provides a direct alternative to the approach of determining all Hamiltonian parameters and then simulating excited state dynamics.

  11. ERK-regulated αB-crystallin induction by matrix detachment inhibits anoikis and promotes lung metastasis in vivo.

    PubMed

    Malin, D; Strekalova, E; Petrovic, V; Rajanala, H; Sharma, B; Ugolkov, A; Gradishar, W J; Cryns, V L

    2015-11-05

    Evasion of extracellular matrix detachment-induced apoptosis ('anoikis') is a defining characteristic of metastatic tumor cells. The ability of metastatic carcinoma cells to survive matrix detachment and escape anoikis enables them to disseminate as viable circulating tumor cells and seed distant organs. Here we report that αB-crystallin, an antiapoptotic molecular chaperone implicated in the pathogenesis of diverse poor-prognosis solid tumors, is induced by matrix detachment and confers anoikis resistance. Specifically, we demonstrate that matrix detachment downregulates extracellular signal-regulated kinase (ERK) activity and increases αB-crystallin protein and messenger RNA (mRNA) levels. Moreover, we show that ERK inhibition in adherent cancer cells mimics matrix detachment by increasing αB-crystallin protein and mRNA levels, whereas constitutive ERK activation suppresses αB-crystallin induction during matrix detachment. These findings indicate that ERK inhibition is both necessary and sufficient for αB-crystallin induction by matrix detachment. To examine the functional consequences of αB-crystallin induction in anoikis, we stably silenced αB-crystallin in two different metastatic carcinoma cell lines. Strikingly, silencing αB-crystallin increased matrix detachment-induced caspase activation and apoptosis but did not affect cell viability of adherent cancer cells. In addition, silencing αB-crystallin in metastatic carcinoma cells reduced the number of viable circulating tumor cells and inhibited lung metastasis in two orthotopic models, but had little or no effect on primary tumor growth. Taken together, our findings point to αB-crystallin as a novel regulator of anoikis resistance that is induced by matrix detachment-mediated suppression of ERK signaling and promotes lung metastasis. Our results also suggest that αB-crystallin represents a promising molecular target for antimetastatic therapies.

  12. Septal penetration correction in I-131 imaging following thyroid cancer treatment

    NASA Astrophysics Data System (ADS)

    Barrack, Fiona; Scuffham, James; McQuaid, Sarah

    2018-04-01

    Whole-body gamma camera images acquired after I-131 treatment for thyroid cancer can suffer from collimator septal penetration artefacts because of the high energy of the gamma photons. This results in the appearance of 'spoke' artefacts, emanating from regions of high activity concentration, caused by the non-isotropic attenuation of the collimator. Deconvolution has the potential to reduce such artefacts by taking into account the non-Gaussian point-spread-function (PSF) of the system. A Richardson–Lucy deconvolution algorithm, with and without prior scatter-correction, was tested as a method of reducing septal penetration in planar gamma camera images. Phantom images (hot spheres within a warm background) were acquired and deconvolution using a measured PSF was applied. The results were evaluated through region-of-interest and line profile analysis to determine the success of artefact reduction and the optimal number of deconvolution iterations and damping parameter (λ). Without scatter-correction, the optimal results were obtained with 15 iterations and λ = 0.01, with the counts in the spokes reduced to 20% of the original value, indicating a substantial decrease in their prominence. When a triple-energy-window scatter-correction was applied prior to deconvolution, the optimal results were obtained with six iterations and λ = 0.02, which reduced the spoke counts to 3% of the original value. The prior application of scatter-correction therefore produced the best results, with a marked change in the appearance of the images. The optimal settings were then applied to six patient datasets to demonstrate utility in the clinical setting. In all datasets, spoke artefacts were substantially reduced after the application of scatter-correction and deconvolution, with the mean spoke count reduced to 10% of the original value. This indicates that deconvolution is a promising technique for septal penetration artefact reduction that could potentially improve the diagnostic accuracy of I-131 imaging. Novelty and significance: This work has demonstrated that scatter correction combined with deconvolution can substantially reduce the appearance of septal penetration artefacts in I-131 phantom and patient planar gamma camera images, enabling improved visualisation of the I-131 distribution. Deconvolution with a symmetric PSF has previously been used to reduce artefacts in gamma camera images; however, this work details the novel use of an asymmetric PSF to remove the angularly dependent septal penetration artefacts.
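
    The triple-energy-window correction applied before deconvolution follows a standard trapezoidal estimate; a minimal per-pixel sketch is given below, where the window widths are placeholders rather than the acquisition settings used in this work.

```python
import numpy as np

def tew_scatter_correct(peak, lower, upper, w_peak, w_lower, w_upper):
    """Standard TEW: trapezoidal scatter estimate subtracted from the photopeak."""
    scatter = (lower / w_lower + upper / w_upper) * w_peak / 2.0
    return np.clip(peak - scatter, 0.0, None)   # clamp negative counts to zero

# Illustrative call; the window widths (keV) are placeholders, not the study's values:
# corrected = tew_scatter_correct(peak_img, low_img, up_img,
#                                 w_peak=72.8, w_lower=12.0, w_upper=12.0)
# The corrected image would then be passed to the damped Richardson-Lucy step.
```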

  13. Arikan and Alamouti matrices based on fast block-wise inverse Jacket transform

    NASA Astrophysics Data System (ADS)

    Lee, Moon Ho; Khan, Md Hashem Ali; Kim, Kyeong Jin

    2013-12-01

    Recently, Lee and Hou (IEEE Signal Process Lett 13: 461-464, 2006) proposed one-dimensional and two-dimensional fast algorithms for block-wise inverse Jacket transforms (BIJTs). Their BIJTs are not true inverse Jacket transforms from a mathematical point of view, because their inverses do not satisfy the usual condition, i.e., the multiplication of a matrix with its inverse matrix does not equal the identity matrix. Therefore, we mathematically propose a fast block-wise inverse Jacket transform of orders N = 2^k, 3^k, 5^k, and 6^k, where k is a positive integer. Based on the Kronecker product of the successive lower-order Jacket matrices and the basis matrix, fast algorithms for realizing these transforms are obtained. Due to the simple inverses and fast algorithms of the Arikan polar binary and Alamouti multiple-input multiple-output (MIMO) non-binary matrices obtained from BIJTs, they can be applied in areas such as permutation matrix design for the 3GPP physical layer for ultra mobile broadband, first-order q-ary Reed-Muller code design, diagonal channel design, diagonal subchannel decomposition for interference alignment, and Alamouti precoding design for 4G MIMO long-term evolution.

  14. Beyond Low Rank + Sparse: Multi-scale Low Rank Matrix Decomposition

    PubMed Central

    Ong, Frank; Lustig, Michael

    2016-01-01

    We present a natural generalization of the recent low rank + sparse matrix decomposition and consider the decomposition of matrices into components of multiple scales. Such decomposition is well motivated in practice as data matrices often exhibit local correlations in multiple scales. Concretely, we propose a multi-scale low rank modeling that represents a data matrix as a sum of block-wise low rank matrices with increasing scales of block sizes. We then consider the inverse problem of decomposing the data matrix into its multi-scale low rank components and approach the problem via a convex formulation. Theoretically, we show that under various incoherence conditions, the convex program recovers the multi-scale low rank components either exactly or approximately. Practically, we provide guidance on selecting the regularization parameters and incorporate cycle spinning to reduce blocking artifacts. Experimentally, we show that the multi-scale low rank decomposition provides a more intuitive decomposition than conventional low rank methods and demonstrate its effectiveness in four applications, including illumination normalization for face images, motion separation for surveillance videos, multi-scale modeling of the dynamic contrast enhanced magnetic resonance imaging and collaborative filtering exploiting age information. PMID:28450978
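
    One ingredient of the decomposition is easy to sketch: block-wise singular-value thresholding at a given block size. The full method solves a joint convex program over all scales, so treating a single scale in isolation as below is a simplifying assumption.

```python
import numpy as np

def blockwise_svt(X, block, tau):
    """Threshold singular values of each non-overlapping block x block patch."""
    out = np.zeros(X.shape)
    for i in range(0, X.shape[0], block):
        for j in range(0, X.shape[1], block):
            patch = X[i:i+block, j:j+block]
            U, s, Vt = np.linalg.svd(patch, full_matrices=False)
            out[i:i+block, j:j+block] = U @ np.diag(np.maximum(s - tau, 0)) @ Vt
    return out
```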

  15. Fast space-varying convolution using matrix source coding with applications to camera stray light reduction.

    PubMed

    Wei, Jianing; Bouman, Charles A; Allebach, Jan P

    2014-05-01

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.

  16. Source Pulse Estimation of Mine Shock by Blind Deconvolution

    NASA Astrophysics Data System (ADS)

    Makowski, R.

    The objective of seismic signal deconvolution is to extract from the signal information concerning the rockmass or the signal at the source of the shock. In the case of blind deconvolution, we have to extract information regarding both quantities. Many deconvolution methods used in prospecting seismology were found to be of minor utility when applied to shock-induced signals recorded in the mines of the Lubin Copper District. The lack of effectiveness should be attributed to the inadequacy of the model on which the methods are based with respect to the propagation conditions for that type of signal. Each of the blind deconvolution methods involves a number of assumptions; hence, reliable results may be expected only if these assumptions are fulfilled. Consequently, we had to formulate a different model for the signals recorded in the copper mines of the Lubin District. The model is based on the following assumptions: (1) the signal emitted by the shock source is a short-term signal; (2) the signal-transmitting system (rockmass) constitutes a parallel connection of elementary systems; (3) the elementary systems are of resonant type. Such a model seems to be justified by the geological structure as well as by the positions of the shock foci and seismometers. The results of time-frequency transformation also support the dominance of resonant-type propagation. Making use of this model, a new method for the blind deconvolution of seismic signals has been proposed. The adequacy of the new model, as well as the efficiency of the proposed method, has been confirmed by the results of blind deconvolution. The slight approximation errors obtained with a small number of approximating elements additionally corroborate the adequacy of the model.

  17. Multipoint Optimal Minimum Entropy Deconvolution and Convolution Fix: Application to vibration fault detection

    NASA Astrophysics Data System (ADS)

    McDonald, Geoff L.; Zhao, Qing

    2017-01-01

    Minimum Entropy Deconvolution (MED) has been applied successfully to rotating machine fault detection from vibration data; however, the method has limitations. A convolution adjustment to the MED definition and solution is proposed in this paper to address the discontinuity at the start of the signal, which in some cases causes spurious impulses to be erroneously deconvolved. A problem with the MED solution is that it is an iterative selection process and will not necessarily design an optimal filter for the posed problem. Additionally, the MED problem goal prefers to deconvolve a single impulse, while in rotating machine faults we expect one impulse-like vibration source per rotational period of the faulty element. Maximum Correlated Kurtosis Deconvolution was proposed to address some of these problems, and although it solves the target goal of multiple periodic impulses, it is still an iterative, non-optimal solution to the posed problem and only solves for a limited set of impulses in a row. Ideally, the problem goal should target an impulse train as the output, and should solve for the optimal filter directly in a non-iterative manner. To meet these goals, we propose a non-iterative deconvolution approach called Multipoint Optimal Minimum Entropy Deconvolution Adjusted (MOMEDA). MOMEDA poses a deconvolution problem with an infinite impulse train as the goal, for which the optimal filter can be solved directly. Using experimental data from a gearbox with and without a gear tooth chip, we show that MOMEDA and its deconvolution spectra, computed according to the period between the impulses, can be used to detect faults and study the health of rotating machine elements effectively.
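
    The following least-squares sketch captures the non-iterative spirit of the approach: pick an FIR filter whose output best matches a periodic impulse train, solved directly via the normal equations. It is a hedged illustration, not a transcription of MOMEDA's exact multipoint D-norm solution; the filter length, period and ridge term are assumptions.

```python
import numpy as np

def periodic_impulse_filter(x, L, period):
    """Design an L-tap filter whose output approximates impulses every `period` samples."""
    N = len(x) - L + 1
    # Rows are delayed copies of the signal, one per filter tap (valid convolution).
    X = np.stack([x[L - 1 - k : L - 1 - k + N] for k in range(L)])
    t = np.zeros(N)
    t[::period] = 1.0                                        # target impulse train
    f = np.linalg.solve(X @ X.T + 1e-9 * np.eye(L), X @ t)   # normal equations
    return f, f @ X                                          # filter taps and output
```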

  18. Improving Range Estimation of a 3-Dimensional Flash Ladar via Blind Deconvolution

    DTIC Science & Technology

    2010-09-01

    [Report front-matter fragment: 2.1.4 Optical Imaging as a Linear and Nonlinear System; 2.1.5 Coherence Theory and Laser Light Statistics; 2.2 Deconvolution.] …rather than deconvolution. 2.1.5 Coherence Theory and Laser Light Statistics. Using [24] and [25], this section serves as background on coherence theory… the laser light incident on the detector surface. The image intensity related to different types of coherence is governed by the laser light’s spatial…

  19. Fast iterative solution of the Bethe-Salpeter eigenvalue problem using low-rank and QTT tensor approximation

    NASA Astrophysics Data System (ADS)

    Benner, Peter; Dolgov, Sergey; Khoromskaia, Venera; Khoromskij, Boris N.

    2017-04-01

    In this paper, we propose and study two approaches to approximate the solution of the Bethe-Salpeter equation (BSE) by using structured iterative eigenvalue solvers. Both approaches are based on the reduced basis method and low-rank factorizations of the generating matrices. We also propose to represent the static screen interaction part in the BSE matrix by a small active sub-block, with a size balancing the storage for rank-structured representations of other matrix blocks. We demonstrate by various numerical tests that the combination of the diagonal plus low-rank plus reduced-block approximation exhibits higher precision with low numerical cost, providing as well a distinct two-sided error estimate for the smallest eigenvalues of the Bethe-Salpeter operator. The complexity is reduced to O(Nb²) in the size of the atomic orbital basis set, Nb, instead of the practically intractable O(Nb⁶) scaling for the direct diagonalization. In the second approach, we apply the quantized-TT (QTT) tensor representation to both the long eigenvectors and the column vectors in the rank-structured BSE matrix blocks, and combine this with the ALS-type iteration in block QTT format. The QTT-rank of the matrix entities possesses almost the same magnitude as the number of occupied orbitals in the molecular systems, No

  20. Evaluation of deconvolution modelling applied to numerical combustion

    NASA Astrophysics Data System (ADS)

    Mehl, Cédric; Idier, Jérôme; Fiorina, Benoît

    2018-01-01

    A possible modelling approach in the large eddy simulation (LES) of reactive flows is to deconvolve resolved scalars. Indeed, by inverting the LES filter, scalars such as mass fractions are reconstructed. This information can be used to close budget terms of filtered species balance equations, such as the filtered reaction rate. Being ill-posed in the mathematical sense, the problem is very sensitive to any numerical perturbation. The objective of the present study is to assess the ability of this kind of methodology to capture the chemical structure of premixed flames. For that purpose, three deconvolution methods are tested on a one-dimensional filtered laminar premixed flame configuration: the approximate deconvolution method based on Van Cittert iterative deconvolution, a Taylor decomposition-based method, and the regularised deconvolution method based on the minimisation of a quadratic criterion. These methods are then extended to the reconstruction of subgrid scale profiles. Two methodologies are proposed: the first relies on subgrid scale interpolation of deconvolved profiles, and the second uses parametric functions to describe small scales. The tests conducted analyse the ability of the method to capture the chemical filtered flame structure and front propagation speed. Results show that the deconvolution model should include information about small scales in order to regularise the filter inversion. A priori and a posteriori tests showed that the filtered flame propagation speed and structure cannot be captured if the filter size is too large.
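
    As a concrete reference point, the Van Cittert iteration underlying the approximate deconvolution method can be written in a few lines; the 1D signal and the simple three-point filter stand-in below are illustrative assumptions.

```python
import numpy as np

def apply_filter(u, kernel):
    """Stand-in for the LES filter G, here a 1D discrete convolution."""
    return np.convolve(u, kernel, mode="same")

def van_cittert(phi_bar, kernel, iters=5):
    """Van Cittert: phi_{n+1} = phi_n + (phi_bar - G * phi_n), starting from phi_bar."""
    phi = phi_bar.copy()
    for _ in range(iters):
        phi = phi + (phi_bar - apply_filter(phi, kernel))
    return phi

# kernel = np.array([0.25, 0.5, 0.25])   # simple filter stand-in
# phi_deconv = van_cittert(phi_filtered, kernel, iters=5)
```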

  1. Faceting for direction-dependent spectral deconvolution

    NASA Astrophysics Data System (ADS)

    Tasse, C.; Hugo, B.; Mirmont, M.; Smirnov, O.; Atemkeng, M.; Bester, L.; Hardcastle, M. J.; Lakhoo, R.; Perkins, S.; Shimwell, T.

    2018-04-01

    The new generation of radio interferometers is characterized by high sensitivity, wide fields of view and large fractional bandwidths. To synthesize the deepest images enabled by the high dynamic range of these instruments requires us to take into account the direction-dependent Jones matrices, while estimating the spectral properties of the sky in the imaging and deconvolution algorithms. In this paper we discuss and implement a wideband wide-field spectral deconvolution framework (DDFacet) based on image-plane faceting that takes into account generic direction-dependent effects. Specifically, we present a wide-field co-planar faceting scheme and discuss the various effects that need to be taken into account to solve the deconvolution problem (image-plane normalization, position-dependent point spread function, etc.). We discuss two wideband spectral deconvolution algorithms, based on hybrid matching pursuit and sub-space optimisation, respectively. A few interesting technical features incorporated in our imager are discussed, including baseline-dependent averaging, which has the effect of improving computing efficiency. The version of DDFacet presented here can account for any externally defined Jones matrices and/or beam patterns.

  2. Intrinsic fluorescence spectroscopy of glutamate dehydrogenase: Integrated behavior and deconvolution analysis

    NASA Astrophysics Data System (ADS)

    Pompa, P. P.; Cingolani, R.; Rinaldi, R.

    2003-07-01

    In this paper, we present a deconvolution method aimed at spectrally resolving the broad fluorescence spectra of proteins, namely, of the enzyme bovine liver glutamate dehydrogenase (GDH). The analytical procedure is based on the deconvolution of the emission spectra into three distinct Gaussian fluorescing bands G_j. The relative changes of the G_j parameters are directly related to the conformational changes of the enzyme and provide interesting information about the fluorescence dynamics of the individual emitting contributions. Our deconvolution method yields an excellent fit to all the spectra obtained with GDH under a number of experimental conditions (various conformational states of the protein) and describes very well the dynamics of a variety of phenomena, such as the dependence of hexamer association on protein concentration, the dynamics of thermal denaturation, and the interaction process between the enzyme and external quenchers. The investigation was carried out by means of different optical experiments, i.e., native enzyme fluorescence, thermal-induced unfolding, and fluorescence quenching studies, utilizing both the analysis of the “average” behavior of the enzyme and the proposed deconvolution approach.
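
    In its simplest form, such a deconvolution amounts to fitting a sum of three Gaussians to each emission spectrum. The sketch below is a generic stand-in for the authors' procedure using scipy.optimize.curve_fit; the wavelength grid, band parameters and initial guesses are all illustrative.

      import numpy as np
      from scipy.optimize import curve_fit

      def three_gaussians(x, *p):
          # p holds (amplitude, center, width) for each of the three bands G_j.
          return sum(p[3*j] * np.exp(-0.5 * ((x - p[3*j+1]) / p[3*j+2]) ** 2)
                     for j in range(3))

      rng = np.random.default_rng(0)
      wavelengths = np.linspace(300, 450, 301)                # nm, illustrative
      spectrum = three_gaussians(wavelengths, 1.0, 330, 12,
                                 0.8, 345, 15, 0.5, 365, 20)
      spectrum += rng.normal(0, 0.01, wavelengths.size)       # synthetic noise

      p0 = [1.0, 330, 10, 0.8, 345, 10, 0.5, 365, 10]         # initial guesses
      popt, _ = curve_fit(three_gaussians, wavelengths, spectrum, p0=p0)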

  3. Proposed Standard For Variable Format Picture Processing And A Codec Approach To Match Diverse Imaging Devices

    NASA Astrophysics Data System (ADS)

    Wendler, Th.; Meyer-Ebrecht, D.

    1982-01-01

    Picture archiving and communication systems, especially those for medical applications, will offer the potential to integrate various image sources of different nature. A major problem, however, is the incompatibility of the different matrix sizes and data formats. This may be overcome by a novel hierarchical coding process, which could lead to a unified picture format standard. A picture coding scheme is described which decomposes a given (2^n)^2 picture matrix into a basic (2^m)^2 coarse information matrix (representing lower spatial frequencies) and a set of n-m detail matrices containing information of increasing spatial resolution. Thus, the picture is described by an ordered set of data blocks rather than by a full-resolution matrix of pixels. The blocks of data are transferred and stored using data formats which have to be standardized throughout the system. Picture sources that produce pictures of different resolution will provide the coarse-matrix data block and additionally only those detail matrices that correspond to their required resolution. Correspondingly, only those detail-matrix blocks that are actually required for softcopy or hardcopy output need to be retrieved from the picture base. Thus, picture sources and retrieval terminals of diverse nature, and retrieval processes for diverse purposes, are easily made compatible. Furthermore, this approach will yield an economic use of storage space and transmission capacity: in contrast to fixed formats, redundant data blocks are always skipped. The user will get a coarse representation even of a high-resolution picture almost instantaneously, with gradually added details, and may abort transmission at any desired detail level. The coding scheme applies the S-transform, which is a simple add/subtract algorithm basically derived from the Hadamard transform. Thus, an additional data compression can easily be achieved, especially for high-resolution pictures, by applying appropriate non-linear and/or adaptive quantizing.
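
    As a rough illustration of the add/subtract idea, the sketch below performs one decomposition level, splitting an image into a coarse 2x2-block average plus difference matrices. It is a simplified, non-lossless variant (a practical S-transform tracks the rounding remainders so the full-resolution picture can be restored exactly), and the function name is ours.

      import numpy as np

      def s_transform_level(img):
          # One decomposition level (assumes even dimensions): the average of
          # each 2x2 block forms the coarse matrix; pairwise differences carry
          # the detail information for the next resolution step.
          a = img[0::2, 0::2].astype(int)
          b = img[0::2, 1::2].astype(int)
          c = img[1::2, 0::2].astype(int)
          d = img[1::2, 1::2].astype(int)
          coarse = (a + b + c + d) // 4
          details = (a - b, a - c, a - d)
          return coarse, details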

  4. Multi-color incomplete Cholesky conjugate gradient methods for vector computers. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Poole, E. L.

    1986-01-01

    In this research, we are concerned with the solution on vector computers of linear systems of equations, Ax = b, where A is a large, sparse, symmetric positive definite matrix. We solve the system using an iterative method, the incomplete Cholesky conjugate gradient method (ICCG). We apply a multi-color strategy to obtain p-color matrices for which a block-oriented ICCG method is implemented on the CYBER 205. (A p-colored matrix is a matrix which can be partitioned into a p×p block matrix where the diagonal blocks are diagonal matrices.) This algorithm, which is based on a no-fill strategy, achieves O(N/p) length vector operations in both the decomposition of A and in the forward and back solves necessary at each iteration of the method. We discuss the natural ordering of the unknowns as an ordering that minimizes the number of diagonals in the matrix and define multi-color orderings in terms of disjoint sets of the unknowns. We give necessary and sufficient conditions to determine which multi-color orderings of the unknowns correspond to p-color matrices. A performance model is given which is used both to predict execution time for ICCG methods and also to compare an ICCG method to conjugate gradient without preconditioning or to another ICCG method. Results are given from runs on the CYBER 205 at NASA's Langley Research Center for four model problems.
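
    The simplest multi-color ordering is the two-color (red-black) case. The toy check below, not the thesis code, permutes a one-dimensional Poisson matrix so that even-numbered unknowns precede odd-numbered ones and verifies that both diagonal blocks of the resulting 2x2 block matrix are diagonal, which is what enables the long vector operations.

      import numpy as np

      n = 8
      A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1-D Poisson matrix

      # Red-black (p = 2) ordering: even unknowns first, then odd unknowns.
      perm = np.r_[np.arange(0, n, 2), np.arange(1, n, 2)]
      P = np.eye(n)[perm]
      B = P @ A @ P.T

      half = (n + 1) // 2
      assert np.allclose(B[:half, :half], np.diag(np.diag(B[:half, :half])))
      assert np.allclose(B[half:, half:], np.diag(np.diag(B[half:, half:])))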

  5. A novel image encryption algorithm based on the chaotic system and DNA computing

    NASA Astrophysics Data System (ADS)

    Chai, Xiuli; Gan, Zhihua; Lu, Yang; Chen, Yiran; Han, Daojun

    A novel image encryption algorithm using the chaotic system and deoxyribonucleic acid (DNA) computing is presented. Unlike traditional encryption methods, the permutation and diffusion of our method operate on a 3D DNA matrix. Firstly, a 3D DNA matrix is obtained through bit-plane splitting, bit-plane recombination, and DNA encoding of the plain image. Secondly, 3D DNA level permutation based on position sequence group (3DDNALPBPSG) is introduced, and chaotic sequences generated from the chaotic system are employed to permutate the positions of the elements of the 3D DNA matrix. Thirdly, 3D DNA level diffusion (3DDNALD) is given: the confused 3D DNA matrix is split into sub-blocks, and a block-wise XOR operation is applied between each sub-DNA matrix and the key DNA matrix from the chaotic system. Finally, by decoding the diffused DNA matrix, we get the cipher image. The SHA-256 hash of the plain image is employed to calculate the initial values of the chaotic system to avoid chosen-plaintext attacks. Experimental results and security analyses show that our scheme is secure against several known attacks and can effectively protect the security of the images.
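
    As a small illustration of DNA-level operations (not the authors' full scheme), the sketch below encodes 8-bit pixels into 2-bit DNA symbols under one assumed coding rule and applies an XOR-style diffusion to the symbol values; in the actual algorithm the key stream would be generated by the chaotic system.

      import numpy as np

      # One of the eight admissible DNA coding rules (assumed here):
      # 00 -> A, 01 -> C, 10 -> G, 11 -> T.
      BASES = np.array(list("ACGT"))

      def dna_encode(img):
          # Split each 8-bit pixel into four 2-bit symbols (values 0..3);
          # BASES[symbols] gives the letter view of the DNA matrix.
          bits = np.unpackbits(np.asarray(img, dtype=np.uint8).ravel())
          return bits.reshape(-1, 2) @ np.array([2, 1])

      def dna_xor_diffuse(symbols, key_symbols):
          # Under this rule, DNA XOR coincides with XOR of the symbol values.
          return np.bitwise_xor(symbols, key_symbols)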

  6. 4Pi microscopy deconvolution with a variable point-spread function.

    PubMed

    Baddeley, David; Carl, Christian; Cremer, Christoph

    2006-09-20

    To remove the axial sidelobes from 4Pi images, deconvolution forms an integral part of 4Pi microscopy. As a result of its high axial resolution, the 4Pi point spread function (PSF) is particularly susceptible to imperfect optical conditions within the sample. This is typically observed as a shift in the position of the maxima under the PSF envelope. A significantly varying phase shift renders deconvolution procedures based on a spatially invariant PSF essentially useless. We present a technique for computing the forward transformation in the case of a varying phase at a computational expense of the same order of magnitude as that of the shift invariant case, a method for the estimation of PSF phase from an acquired image, and a deconvolution procedure built on these techniques.

  7. Improved deconvolution of very weak confocal signals

    PubMed Central

    Day, Kasey J.; La Rivière, Patrick J.; Chandler, Talon; Bindokas, Vytas P.; Ferrier, Nicola J.; Glick, Benjamin S.

    2017-01-01

    Deconvolution is typically used to sharpen fluorescence images, but when the signal-to-noise ratio is low, the primary benefit is reduced noise and a smoother appearance of the fluorescent structures. 3D time-lapse (4D) confocal image sets can be improved by deconvolution. However, when the confocal signals are very weak, the popular Huygens deconvolution software erases fluorescent structures that are clearly visible in the raw data. We find that this problem can be avoided by prefiltering the optical sections with a Gaussian blur. Analysis of real and simulated data indicates that the Gaussian blur prefilter preserves meaningful signals while enabling removal of background noise. This approach is very simple, and it allows Huygens to be used with 4D imaging conditions that minimize photodamage. PMID:28868135
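
    The prefilter itself is essentially a one-line operation. A minimal sketch follows, with illustrative sigma values rather than the authors' recommended settings.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def prefilter_stack(stack, sigma_z=0.5, sigma_xy=0.7):
          # Mild 3-D Gaussian blur of a (z, y, x) confocal stack before
          # deconvolution; sigmas well below the PSF width mainly suppress
          # noise rather than real structure.
          return gaussian_filter(np.asarray(stack, dtype=float),
                                 sigma=(sigma_z, sigma_xy, sigma_xy))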

  8. Improved deconvolution of very weak confocal signals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Day, Kasey J.; La Riviere, Patrick J.; Chandler, Talon

    Deconvolution is typically used to sharpen fluorescence images, but when the signal-to-noise ratio is low, the primary benefit is reduced noise and a smoother appearance of the fluorescent structures. 3D time-lapse (4D) confocal image sets can be improved by deconvolution. However, when the confocal signals are very weak, the popular Huygens deconvolution software erases fluorescent structures that are clearly visible in the raw data. We find that this problem can be avoided by prefiltering the optical sections with a Gaussian blur. Analysis of real and simulated data indicates that the Gaussian blur prefilter preserves meaningful signals while enabling removal of background noise. This approach is very simple, and it allows Huygens to be used with 4D imaging conditions that minimize photodamage.

  9. Improved deconvolution of very weak confocal signals

    DOE PAGES

    Day, Kasey J.; La Riviere, Patrick J.; Chandler, Talon; ...

    2017-06-06

    Deconvolution is typically used to sharpen fluorescence images, but when the signal-to-noise ratio is low, the primary benefit is reduced noise and a smoother appearance of the fluorescent structures. 3D time-lapse (4D) confocal image sets can be improved by deconvolution. However, when the confocal signals are very weak, the popular Huygens deconvolution software erases fluorescent structures that are clearly visible in the raw data. We find that this problem can be avoided by prefiltering the optical sections with a Gaussian blur. Analysis of real and simulated data indicates that the Gaussian blur prefilter preserves meaningful signals while enabling removal of background noise. This approach is very simple, and it allows Huygens to be used with 4D imaging conditions that minimize photodamage.

  10. Multi-population Genomic Relationships for Estimating Current Genetic Variances Within and Genetic Correlations Between Populations.

    PubMed

    Wientjes, Yvonne C J; Bijma, Piter; Vandenplas, Jérémie; Calus, Mario P L

    2017-10-01

    Different methods are available to calculate multi-population genomic relationship matrices. Since those matrices differ in base population, it is anticipated that the method used to calculate genomic relationships affects the estimates of genetic variances, covariances, and correlations. The aim of this article is to define the multi-population genomic relationship matrix to estimate current genetic variances within and genetic correlations between populations. The genomic relationship matrix containing two populations consists of four blocks: one block for population 1, one block for population 2, and two blocks for relationships between the populations. It is known from the literature that by using current allele frequencies to calculate genomic relationships within a population, current genetic variances are estimated. In this article, we theoretically derived the properties of the genomic relationship matrix required to estimate genetic correlations between populations and validated them using simulations. When the scaling factor of across-population genomic relationships is equal to the product of the square roots of the scaling factors for within-population genomic relationships, the genetic correlation is estimated without bias even though the estimated genetic variances do not necessarily refer to the current population. When this property is not met, the correlation based on estimated variances should be multiplied by a correction factor based on the scaling factors. In this study, we present a genomic relationship matrix which directly estimates current genetic variances as well as genetic correlations between populations. Copyright © 2017 by the Genetics Society of America.

  11. Interannual variability of cut-off low systems over the European sector: The role of blocking and the Northern Hemisphere circulation modes

    NASA Astrophysics Data System (ADS)

    Nieto, R.; Gimeno, L.; de La Torre, L.; Ribera, P.; Barriopedro, D.; García-Herrera, R.; Serrano, A.; Gordillo, A.; Redaño, A.; Lorente, J.

    2007-04-01

    A previously developed multidecadal database of Northern Hemisphere cut-off low systems (COLs), covering a 41-year period (1958 to 1998), is used to study COL interannual variability in the European sector (25°-47.5° N, 50° W-40° E) and the major factors controlling it. The study focuses on the influence on COL interannual variability of larger-scale phenomena such as blocking events and other main circulation modes defined over the Euro-Atlantic region. It is shown that there is very large interannual variability in COL occurrence at the annual and seasonal scales, although without significant trends. The influence of larger-scale phenomena is seasonally dependent, with the positive phase of the NAO favoring autumn COL development, while winter COL occurrence is mostly related to blocking events. During summer, the season when most COLs occur, no significant influences were found.

  12. Linking Low-Frequency Large-Scale Circulation Patterns to Cold Air Outbreak Formation in the Northeastern North Atlantic

    NASA Astrophysics Data System (ADS)

    Papritz, L.; Grams, C. M.

    2018-03-01

    The regional variability of wintertime marine cold air outbreaks (CAOs) in the northeastern North Atlantic is studied focusing on the role of weather regimes in modulating the large-scale circulation. Each regime is characterized by a typical CAO frequency anomaly pattern and a corresponding imprint in air-sea heat fluxes. Cyclonically dominated regimes, Greenland blocking and the Atlantic ridge regime are found to provide favorable conditions for CAO formation in at least one major sea of the study region; CAO occurrence is suppressed, however, by blocked regimes whose associated anticyclones are centered over northern Europe (European / Scandinavian blocking). Kinematic trajectories reveal that strength and location of the storm tracks are closely linked to the pathways of CAO air masses and, thus, CAO occurrence. Finally, CAO frequencies are also linked to the strength of the stratospheric polar vortex, which is understood in terms of associated variations in the frequency of weather regimes.

  13. Matrix metalloproteinase-9 and tissue inhibitor of metalloproteinase-1 in hypertension and their relationship to cardiovascular risk and treatment: a substudy of the Anglo-Scandinavian Cardiac Outcomes Trial (ASCOT).

    PubMed

    Tayebjee, Muzahir H; Nadar, Sunil; Blann, Andrew D; Gareth Beevers, D; MacFadyen, Robert J; Lip, Gregory Y H

    2004-09-01

    Hypertension results in structural changes to the cardiac and vascular extracellular matrix (ECM). Matrix metalloproteinases (MMP) and their inhibitors (TIMP) may play a central role in the modulation of this matrix. We hypothesized that both MMP-9 and TIMP-1 would be abnormal in hypertension, reflecting alterations in ECM turnover, and that their circulating levels should be linked to cardiovascular (CHD) and stroke (CVA) risk scores using the Framingham equation. Second, we hypothesized that treatment would result in changes in ECM indices. Plasma MMP-9 and TIMP-1 were measured before and after treatment (median 3 years) from 96 patients with uncontrolled hypertension participating in the Anglo-Scandinavian Cardiac Outcomes Trial (ASCOT). Pretreatment values were compared to circulating MMP-9 and TIMP-1 levels in 45 age- and sex-matched healthy controls. Circulating pretreatment MMP-9 and TIMP-1 levels were significantly higher in patients with hypertension than in the normotensive controls (P =.0041 and P =.0166, respectively). Plasma MMP-9 levels decreased, and TIMP-1 levels increased after treatment (P =.035 and P =.005, respectively). Levels of MMP-9 correlated with CHD risk (r = 0.317, P =.007) and HDL cholesterol (r = -0.237, P =.022), but not CVA risk. There were no significant correlations between TIMP-1 and CVA or CHD scores. Increased circulating MMP-9 and TIMP-1 at baseline in patients with hypertension could reflect an increased deposition and retention of type I collagen at the expense of other components of ECM within the cardiac and vascular ECM. After cardiovascular risk management, MMP-9 levels decreased and TIMP-1 levels increased. Elevated levels of MMP-9 also appeared to be associated with higher Framingham cardiovascular risk scores. Our observations suggest a possible role for these surrogate markers of tissue ECM composition and the prognosis of cardiovascular events in hypertension. Copyright 2004 American Journal of Hypertension, Ltd.

  14. Blind deconvolution post-processing of images corrected by adaptive optics

    NASA Astrophysics Data System (ADS)

    Christou, Julian C.

    1995-08-01

    Experience with the adaptive optics system at the Starfire Optical Range has shown that the point spread function is non-uniform and varies both spatially and temporally, as well as being object dependent. Because of this, standard linear and non-linear deconvolution algorithms, which assume a known point spread function, have difficulty removing it from the data. In this paper we demonstrate the application of a blind deconvolution algorithm to adaptive optics compensated data, for which a separately measured point spread function is not needed.
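
    The abstract does not spell out the algorithm; one common formulation of blind deconvolution, alternating Richardson-Lucy updates between the image and the PSF in the spirit of Fish et al. (1995), is sketched below. The PSF is kept at the full image size, and the initial guesses, support size and iteration counts are illustrative.

      import numpy as np
      from scipy.signal import fftconvolve

      def rl_update(x, k, y, eps=1e-12):
          # One multiplicative Richardson-Lucy update of x with kernel k fixed.
          ratio = y / (fftconvolve(x, k, mode="same") + eps)
          return x * fftconvolve(ratio, k[::-1, ::-1], mode="same")

      def blind_rl(observed, n_outer=20, n_inner=3):
          image = np.full_like(observed, observed.mean(), dtype=float)
          psf = np.zeros_like(observed, dtype=float)
          cy, cx = (s // 2 for s in observed.shape)
          psf[cy-7:cy+8, cx-7:cx+8] = 1.0 / 225.0   # flat 15x15 initial guess
          for _ in range(n_outer):
              for _ in range(n_inner):
                  psf = rl_update(psf, image, observed)
              psf /= psf.sum()
              for _ in range(n_inner):
                  image = rl_update(image, psf, observed)
          return image, psf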

  15. Computerised curve deconvolution of TL/OSL curves using a popular spreadsheet program.

    PubMed

    Afouxenidis, D; Polymeris, G S; Tsirliganis, N C; Kitis, G

    2012-05-01

    This paper exploits the possibility of using commercial software for thermoluminescence and optically stimulated luminescence curve deconvolution analysis. The widely used software package Microsoft Excel, with its Solver utility, has been used to perform deconvolution analysis on both experimental and reference glow curves resulting from the GLOw Curve ANalysis INtercomparison project. The simple interface of this programme, combined with the powerful Solver utility, allows complex stimulated luminescence curves to be resolved into their components and the associated luminescence parameters to be evaluated.

  16. Deconvolution of noisy transient signals: a Kalman filtering application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candy, J.V.; Zicker, J.E.

    The deconvolution of transient signals from noisy measurements is a common problem occurring in various tests at Lawrence Livermore National Laboratory. The transient deconvolution problem places atypical constraints on presently available algorithms. The Schmidt-Kalman filter, a time-varying, tunable predictor, is designed using a piecewise constant model of the transient input signal. A simulation is developed to test the algorithm for various input signal bandwidths and different signal-to-noise ratios for the input and output sequences. The algorithm's performance is reasonable.

  17. Linkages between atmospheric blocking, sea ice export through Fram Strait and the Atlantic Meridional Overturning Circulation

    PubMed Central

    Ionita, M.; Scholz, P.; Lohmann, G.; Dima, M.; Prange, M.

    2016-01-01

    As a key persistent component of the atmospheric dynamics, the North Atlantic blocking activity has been linked to extreme climatic phenomena in the European sector. It has also been linked to Atlantic multidecadal ocean variability, but its potential links to rapid oceanic changes have not been investigated. Using a global ocean-sea ice model forced with atmospheric reanalysis data, here it is shown that the 1962–1966 period of enhanced blocking activity over Greenland resulted in anomalous sea ice accumulation in the Arctic and ended with a sea ice flush from the Arctic into the North Atlantic Ocean through Fram Strait. This event induced a significant decrease of Labrador Sea water surface salinity and an abrupt weakening of the Atlantic Meridional Overturning Circulation (AMOC) during the 1970s. These results have implications for the prediction of rapid AMOC changes and indicate that an important part of the atmosphere-ocean dynamics at mid- and high latitudes requires a proper representation of the Fram Strait sea ice transport and of the synoptic scale variability such as atmospheric blocking, which is a challenge for current coupled climate models. PMID:27619955

  18. Linkages between atmospheric blocking, sea ice export through Fram Strait and the Atlantic Meridional Overturning Circulation.

    PubMed

    Ionita, M; Scholz, P; Lohmann, G; Dima, M; Prange, M

    2016-09-13

    As a key persistent component of the atmospheric dynamics, the North Atlantic blocking activity has been linked to extreme climatic phenomena in the European sector. It has also been linked to Atlantic multidecadal ocean variability, but its potential links to rapid oceanic changes have not been investigated. Using a global ocean-sea ice model forced with atmospheric reanalysis data, here it is shown that the 1962-1966 period of enhanced blocking activity over Greenland resulted in anomalous sea ice accumulation in the Arctic and ended with a sea ice flush from the Arctic into the North Atlantic Ocean through Fram Strait. This event induced a significant decrease of Labrador Sea water surface salinity and an abrupt weakening of the Atlantic Meridional Overturning Circulation (AMOC) during the 1970s. These results have implications for the prediction of rapid AMOC changes and indicate that an important part of the atmosphere-ocean dynamics at mid- and high latitudes requires a proper representation of the Fram Strait sea ice transport and of the synoptic scale variability such as atmospheric blocking, which is a challenge for current coupled climate models.

  19. Compressed multi-block local binary pattern for object tracking

    NASA Astrophysics Data System (ADS)

    Li, Tianwen; Gao, Yun; Zhao, Lei; Zhou, Hao

    2018-04-01

    Both robustness and real-time performance are very important for object tracking in real environments. Trackers based on deep learning have difficulty meeting real-time requirements, whereas compressive sensing provides technical support for real-time tracking. In this paper, an object is tracked via a multi-block local binary pattern feature. The feature vector extracted from the multi-block local binary pattern is compressed via a sparse random Gaussian matrix used as the measurement matrix. The experiments showed that the proposed tracker runs in real time and outperforms existing compressive trackers based on Haar-like features on many challenging video sequences in terms of accuracy and robustness.
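
    The compression step is a random projection. A minimal sketch is given below; the dimensions and density are illustrative, and the feature vector stands in for the extracted multi-block LBP histogram.

      import numpy as np

      rng = np.random.default_rng(0)

      def sparse_gaussian_matrix(m, n, density=0.1):
          # Sparse random Gaussian measurement matrix: most entries are zero,
          # the rest are drawn from N(0, 1).
          mask = rng.random((m, n)) < density
          return np.where(mask, rng.standard_normal((m, n)), 0.0)

      n_feature, n_compressed = 4096, 50
      R = sparse_gaussian_matrix(n_compressed, n_feature)
      v = rng.random(n_feature)      # stand-in for the MB-LBP feature vector
      v_compressed = R @ v           # low-dimensional feature used for tracking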

  20. Preliminary results in implementing a model of the world economy on the CYBER 205: A case of large sparse nonsymmetric linear equations

    NASA Technical Reports Server (NTRS)

    Szyld, D. B.

    1984-01-01

    A brief description of the Model of the World Economy implemented at the Institute for Economic Analysis is presented, together with our experience in converting the software to vector code. For each time period, the model is reduced to a linear system of over 2000 variables. The matrix of coefficients has a bordered block diagonal structure, and we show how some of the matrix operations can be carried out on all diagonal blocks at once.

  1. Range resolution improvement in passive bistatic radars using nested FM channels and least squares approach

    NASA Astrophysics Data System (ADS)

    Arslan, Musa T.; Tofighi, Mohammad; Sevimli, Rasim A.; Çetin, Ahmet E.

    2015-05-01

    One of the main disadvantages of using commercial broadcasts in a Passive Bistatic Radar (PBR) system is the limited range resolution. Using multiple broadcast channels to improve radar performance has been offered as a solution to this problem. However, detection performance then suffers from the side-lobes that the matched filter creates when multiple channels are used. In this article, we introduce a deconvolution algorithm to suppress these side-lobes. The two-dimensional matched filter output of a PBR is further analyzed as a deconvolution problem. The deconvolution algorithm is based on making successive projections onto the hyperplanes representing the time delay of a target. The resulting iterative deconvolution algorithm is globally convergent because all constraint sets are closed and convex. Simulation results in an FM-based PBR system are presented.
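
    Successive projection onto hyperplanes is the classical Kaczmarz iteration. A generic sketch follows; in the paper the rows a_i encode the matched-filter responses of candidate target delays, whereas here A is simply an arbitrary linear model.

      import numpy as np

      def pocs_deconvolve(A, y, n_sweeps=50):
          # Project the estimate successively onto each hyperplane
          # {x : a_i . x = y_i}; with closed convex constraint sets the
          # iteration is globally convergent.
          x = np.zeros(A.shape[1])
          for _ in range(n_sweeps):
              for a, yi in zip(A, y):
                  x += (yi - a @ x) / (a @ a) * a
          return x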

  2. Simulation Study of Effects of the Blind Deconvolution on Ultrasound Image

    NASA Astrophysics Data System (ADS)

    He, Xingwu; You, Junchen

    2018-03-01

    Ultrasonic image restoration is an essential subject in medical ultrasound imaging. However, without sufficient and precise system knowledge, traditional image restoration methods based on prior knowledge of the system often fail to improve image quality. In this paper, we use simulated ultrasound images to assess the effectiveness of the blind deconvolution method for ultrasound image restoration. Experimental results demonstrate that blind deconvolution can be applied to ultrasound image restoration and achieves satisfactory results without precise prior knowledge, compared with traditional image restoration methods. Even with an inaccurate small initial PSF, blind deconvolution improves the overall image quality of ultrasound images, with much better SNR and image resolution; measurements of the time consumption of these methods show no significant increase on a GPU platform.

  3. Imaging resolution and properties analysis of super resolution microscopy with parallel detection under different noise, detector and image restoration conditions

    NASA Astrophysics Data System (ADS)

    Yu, Zhongzhi; Liu, Shaocong; Sun, Shiyi; Kuang, Cuifang; Liu, Xu

    2018-06-01

    Parallel detection, which can use the additional information of a pinhole-plane image taken at every excitation scan position, is an efficient method to enhance the resolution of a confocal laser scanning microscope. In this paper, we discuss images obtained under different conditions and with different image restoration methods applied to parallel detection, in order to quantitatively compare imaging quality. The conditions include different noise levels and different detector array settings. The image restoration methods include linear deconvolution and pixel reassignment combined with either Richardson-Lucy deconvolution or maximum-likelihood estimation deconvolution. The results show that linear deconvolution is highly efficient and gives the best performance under all conditions, and it is therefore expected to be of use for future routine biomedical research.
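
    Of the restoration methods compared, linear deconvolution is the simplest to state. A minimal Wiener-style sketch is shown below, assuming the PSF is centered and has the same shape as the image; the noise-to-signal parameter is illustrative.

      import numpy as np

      def linear_wiener_deconvolve(observed, psf, nsr=0.01):
          # Linear deconvolution via a Wiener filter: divide by the optical
          # transfer function, rolled off where the noise-to-signal ratio
          # (nsr) dominates.
          shape = observed.shape
          otf = np.fft.rfft2(np.fft.ifftshift(psf), s=shape)
          H = np.conj(otf) / (np.abs(otf) ** 2 + nsr)
          return np.fft.irfft2(np.fft.rfft2(observed) * H, s=shape)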

  4. The Sleep-inducing Lipid Oleamide Deconvolutes Gap Junction Communication and Calcium Wave Transmission in Glial Cells

    PubMed Central

    Guan, Xiaojun; Cravatt, Benjamin F.; Ehring, George R.; Hall, James E.; Boger, Dale L.; Lerner, Richard A.; Gilula, Norton B.

    1997-01-01

    Oleamide is a sleep-inducing lipid originally isolated from the cerebrospinal fluid of sleep-deprived cats. Oleamide was found to potently and selectively inactivate gap junction–mediated communication between rat glial cells. In contrast, oleamide had no effect on mechanically stimulated calcium wave transmission in this same cell type. Other chemical compounds traditionally used as inhibitors of gap junctional communication, like heptanol and 18β-glycyrrhetinic acid, blocked not only gap junctional communication but also intercellular calcium signaling. Given the central role for intercellular small molecule and electrical signaling in central nervous system function, oleamide- induced inactivation of glial cell gap junction channels may serve to regulate communication between brain cells, and in doing so, may influence higher order neuronal events like sleep induction. PMID:9412472

  5. Long-term magnetic field stability of Vega

    NASA Astrophysics Data System (ADS)

    Alina, D.; Petit, P.; Lignières, F.; Wade, G. A.; Fares, R.; Aurière, M.; Böhm, T.; Carfantan, H.

    2012-05-01

    We present new spectropolarimetric observations of the normal A-type star Vega, obtained during the summer of 2010 with NARVAL at Télescope Bernard Lyot (Pic du Midi Observatory). This new time series consists of 615 spectra collected over 6 different nights. We use the Least-Squares Deconvolution technique to compute, from each spectrum, a mean line profile with a signal-to-noise ratio close to 20,000. After averaging all 615 polarized observations, we detect a circularly polarized Zeeman signature consistent in shape and amplitude with the signatures previously reported from our observations of 2008 and 2009. The surface magnetic geometry of the star, reconstructed using the technique of Zeeman-Doppler Imaging, agrees with the maps obtained in 2008 and 2009, showing that most recognizable features of the photospheric field of Vega are only weakly distorted by large-scale surface flows (differential rotation or meridional circulation).

  6. Mechanical design

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Design concepts for a 1000-MW (thermal) stationary power plant employing the UF6-fueled gas core breeder reactor are examined. Three design combinations were considered: a gaseous UF6 core with a solid matrix blanket, a gaseous UF6 core with a liquid blanket, and a gaseous UF6 core with a circulating blanket. Results show the gaseous UF6 core with a circulating blanket was best suited to the power plant concept.

  7. A mélange of subduction temperatures: Evidence from Zr-in-rutile thermometry for strengthening of the subduction interface

    NASA Astrophysics Data System (ADS)

    Penniston-Dorland, Sarah C.; Kohn, Matthew J.; Piccoli, Philip M.

    2018-01-01

    The Catalina Schist contains a spectacular, km-scale amphibolite facies mélange zone, thought to be part of a Cretaceous convergent margin plate interface. In this setting, blocks ranging from centimeters up to ≥100 m in diameter are surrounded by finer-grained matrix that is derived from the blocks. Blocks throughout the mélange represent a diversity of protoliths derived from basalts, cherts and other sediments, and hydrated mantle, but all contain assemblages consistent with upper amphibolite-facies conditions, suggesting a relatively restricted range of depths and temperatures over which material within the mélange was metamorphosed. This apparent uniformity of metamorphic grade contrasts with other mélanges, such as the Franciscan Complex, where coexisting rocks with highly variable peak metamorphic grade suggest extensive mixing of materials along the subduction interface. This mixing has been ascribed to flow of material within relatively low viscosity matrix. The Zr content of rutile in samples from across the amphibolite facies mélange of the Catalina Schist was measured to determine peak metamorphic temperatures, identify whether these temperatures were different among blocks, and whether the spatial distribution of temperatures throughout the mélange was systematic or random. Resolvably different Zr contents, between 290 and 720 (±10-40) ppm, are found among the blocks, corresponding to different peak metamorphic temperatures of 650 to 730 (±2-16) °C at an assumed pressure of 1 GPa. These results are broadly consistent with previous thermobarometric estimates. No systematic distribution of temperatures was found, however. Like other mélange zones, material flow within the Catalina Schist mélange was likely chaotic, but appears to have occurred on a more restricted scale compared to some other localities. Progressive metamorphism of mélange matrix is expected to produce rheologically stiffer matrix minerals (such as amphiboles and pyroxenes) at the expense of weaker matrix minerals (sheet silicates), affecting the overall rheological behavior of the mélange, and dictating the scale of flow. The Catalina Schist amphibolite facies mélange matrix appears to provide a snapshot of hotter, stiffer portions of a subduction interface, perhaps more representative of rheological behavior at depths approaching the subarc than is found in some other exhumed mélange zones.

  8. An alternative method for sampling and petrographically characterizing an Eocene coal bed, southeast Kalimantan, Indonesia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, T.A.

    1990-01-01

    A study undertaken on an Eocene age coal bed in southeast Kalimantan, Indonesia determined that there was a relationship between megascopically determined coal types and the kinds and sizes of organic components. The study also concluded that the most efficient way to characterize the seam was from collection of two 3 cm blocks from each layer or bench defined by megascopic character, and that a maximum of 125 point counts was needed on each block. Microscopic examination of uncrushed block samples showed the coal to be composed of plant parts and tissues set in a matrix of both fine-grained and amorphous material. The particulate matrix is composed of cell wall and liptinite fragments, resins, spores, algae, and fungal material. The amorphous matrix consists of unstructured (at 400x) huminite and liptinite. Size measurements showed that each particulate component possessed its own size distribution, which approached normality when transformed to a log_2 (phi) scale. Degradation of the plant material during peat accumulation probably controlled grain size in the coal types. This notion is further supported by the increased concentration of decay-resistant resin and cell fillings in the nonbanded and dull coal types. In the sampling design experiment, two blocks from each layer and two layers from each coal type were collected. On each block, 2 to 4 traverses totaling 500 point counts per block were performed to test the minimum number of points needed to characterize a block. A hierarchical analysis of variance showed that most of the petrographic variation occurred between coal types. The results from these analyses also indicated that, within a coal type, sampling should concentrate on the layer level and that only 250 point counts, split between two blocks, were needed to characterize a layer.

  9. A receiver function investigation of the Lithosphere beneath Southern California using Wavefield Iterative Deconvolution(WID)

    NASA Astrophysics Data System (ADS)

    Ainiwaer, A.; Gurrola, H.

    2017-12-01

    In traditional Ps receiver function (RF) imaging, PPs and PSs phases from shallow layers (the near surface and crust) can be incorrectly stacked as Ps phases or interfere with deeper Ps phases. To overcome interference between phases, we developed a method to produce phase-specific Ps, PPs and PSs receiver functions (wavefield iterative deconvolution, or WID). Rather than performing a separate deconvolution of each seismogram recorded at a station, WID processes all the seismograms from a seismic station in a single run. Each iteration of WID identifies the most prominent phase remaining in the data set, based on the shape of its wavefield (or moveout curve), and then places this phase on the appropriate phase-specific RF. As a result, we produce PsRFs that are free of PPs and PSs phases and reverberations thereof. We also produce phase-specific PPsRFs and PSsRFs, but the moveout curves for these phases and their higher-order reverberations are not as distinct from one another, so the PPsRFs and PSsRFs are not as clean as the PsRFs. These phase-specific RFs can be stacked to image 2-D or 3-D Earth structure using common conversion point (CCP) stacking or migration. We applied WID to 524 Southern California seismic stations to construct a 3-D PsRF image of the lithosphere beneath southern California. These CCP images exhibit Ps phases from the Moho and the lithosphere-asthenosphere boundary (LAB) that are free of interference from crustal reverberations. The Moho and LAB were found to be deepest beneath the Sierra Nevada, Transverse Ranges and Peninsular Ranges. A shallow Moho and LAB are apparent beneath the Inner Borderland and Salton Trough. The LAB depth that we estimate is in close agreement with recently published results that used Sp imaging (Lekic et al., 2011). We also found complicated structure beneath the Mojave Block, where mid-crustal features are apparent and anomalous Ps phases at 60 km depth are observed beneath the western Mojave Desert.

  10. Application of deconvolution interferometry with both Hi-net and KiK-net data

    NASA Astrophysics Data System (ADS)

    Nakata, N.

    2013-12-01

    Application of deconvolution interferometry to wavefields observed by KiK-net, a strong-motion recording network in Japan, is useful for estimating wave velocities and S-wave splitting in the near surface. Using this technique, for example, Nakata and Snieder (2011, 2012) found changes in velocity caused by the Tohoku-Oki earthquake in Japan. At the location of the borehole accelerometer of each KiK-net station, a velocity sensor is also installed as part of a high-sensitivity seismograph network (Hi-net). I present a technique that uses both Hi-net and KiK-net records for computing deconvolution interferometry. The deconvolved waveform obtained from the combination of Hi-net and KiK-net data is similar to the waveform computed from KiK-net data only, which indicates that one can use Hi-net wavefields for deconvolution interferometry. Because Hi-net records have a high signal-to-noise ratio (S/N) and high dynamic resolution, the S/N and the quality of the amplitude and phase of the deconvolved waveforms can be improved with Hi-net data. These advantages are especially important for short-time moving-window seismic interferometry and deconvolution interferometry using later coda waves.
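
    In its simplest form, deconvolution interferometry is a regularized spectral division of one record by another. A water-level sketch is given below; the parameter value is illustrative and not necessarily the one used by Nakata and Snieder.

      import numpy as np

      def deconvolution_interferometry(surface, borehole, wl=0.01):
          # Deconvolve the borehole record from the surface record; wl sets
          # a water level as a fraction of the peak power of the borehole
          # spectrum to stabilize the division.
          n = len(surface)
          S = np.fft.rfft(surface, n)
          B = np.fft.rfft(borehole, n)
          denom = np.maximum(np.abs(B) ** 2, wl * np.max(np.abs(B) ** 2))
          # Band-limited impulse response between the two sensors.
          return np.fft.irfft(S * np.conj(B) / denom, n)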

  11. The Small-scale Structure of Photospheric Convection Retrieved by a Deconvolution Technique Applied to Hinode/SP Data

    NASA Astrophysics Data System (ADS)

    Oba, T.; Riethmüller, T. L.; Solanki, S. K.; Iida, Y.; Quintero Noda, C.; Shimizu, T.

    2017-11-01

    Solar granules are bright patterns surrounded by dark channels, called intergranular lanes, in the solar photosphere and are a manifestation of overshooting convection. Observational studies generally find stronger upflows in granules and weaker downflows in intergranular lanes. This trend is, however, inconsistent with the results of numerical simulations, in which downflows are stronger than upflows through the joint action of gravitational acceleration/deceleration and pressure gradients. One cause of this discrepancy is the image degradation caused by optical distortion and the light diffraction and scattering that take place in an imaging instrument. We apply a deconvolution technique to Hinode/SP data in an attempt to recover the original solar scene. Our results show a significant enhancement in both the convective upflows and downflows, but particularly for the latter. After deconvolution, the up- and downflows reach maximum amplitudes of -3.0 km s^-1 and +3.0 km s^-1, respectively, at an average geometrical height of roughly 50 km. We found that the velocity distributions after deconvolution match those derived from numerical simulations. After deconvolution, the net LOS velocity averaged over the whole field of view lies close to zero, as expected in a rough sense from mass balance.

  12. Application of deterministic deconvolution of ground-penetrating radar data in a study of carbonate strata

    USGS Publications Warehouse

    Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.

    2004-01-01

    We successfully applied deterministic deconvolution to real ground-penetrating radar (GPR) data by using, as the operator, the source wavelet that was generated in and transmitted through air. The GPR data were collected with 400-MHz antennas on a bench adjacent to a cleanly exposed quarry face. The quarry site is characterized by horizontally bedded carbonate strata with shale partings. In order to provide ground truth for this deconvolution approach, 23 conductive rods were drilled into the quarry face at key locations. The steel rods provided critical information for: (1) correlation between reflections on GPR data and geologic features exposed in the quarry face, (2) GPR resolution limits, (3) accuracy of velocities calculated from common midpoint data, and (4) identifying any multiples. Comparing the results of deconvolved data with non-deconvolved data demonstrates the effectiveness of deterministic deconvolution in low dielectric-loss media: increased accuracy of velocity models (improved at least 10-15% in our study after deterministic deconvolution), increased vertical and horizontal resolution of specific geologic features, and more accurate representation of geologic features as confirmed from detailed study of the adjacent quarry wall. © 2004 Elsevier B.V. All rights reserved.
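
    Deterministic deconvolution with a measured source wavelet reduces to a damped spectral division; a minimal per-trace sketch follows (the damping fraction is illustrative).

      import numpy as np

      def deterministic_deconvolve(trace, wavelet, damp=1e-3):
          # Divide the trace spectrum by the spectrum of the source wavelet
          # recorded in air; the damping term stabilizes frequencies where
          # the wavelet carries little energy.
          n = len(trace)
          T = np.fft.rfft(trace, n)
          W = np.fft.rfft(wavelet, n)
          D = T * np.conj(W) / (np.abs(W) ** 2 + damp * np.max(np.abs(W) ** 2))
          return np.fft.irfft(D, n)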

  13. Peptide de novo sequencing of mixture tandem mass spectra

    PubMed Central

    Hotta, Stéphanie Yuki Kolbeck; Verano‐Braga, Thiago; Kjeldsen, Frank

    2016-01-01

    The impact of mixture spectra deconvolution on the performance of four popular de novo sequencing programs was tested using artificially constructed mixture spectra as well as experimental proteomics data. Mixture fragmentation spectra are recognized as a limitation in proteomics because they decrease the identification performance using database search engines. De novo sequencing approaches are expected to be even more sensitive to the reduction in mass spectrum quality resulting from peptide precursor co‐isolation and thus prone to false identifications. The deconvolution approach matched complementary b‐, y‐ions to each precursor peptide mass, which allowed the creation of virtual spectra containing sequence specific fragment ions of each co‐isolated peptide. Deconvolution processing resulted in equally efficient identification rates but increased the absolute number of correctly sequenced peptides. The improvement was in the range of 20–35% additional peptide identifications for a HeLa lysate sample. Some correct sequences were identified only using unprocessed spectra; however, the number of these was lower than those where improvement was obtained by mass spectral deconvolution. Tight candidate peptide score distribution and high sensitivity to small changes in the mass spectrum introduced by the employed deconvolution method could explain some of the missing peptide identifications. PMID:27329701

  14. Estimation of geopotential from satellite-to-satellite range rate data: Numerical results

    NASA Technical Reports Server (NTRS)

    Thobe, Glenn E.; Bose, Sam C.

    1987-01-01

    A technique for high-resolution geopotential field estimation by recovering the harmonic coefficients from satellite-to-satellite range rate data is presented and tested against both a controlled analytical simulation of a one-day satellite mission (maximum degree and order 8) and then against a Cowell method simulation of a 32-day mission (maximum degree and order 180). Innovations include: (1) a new frequency-domain observation equation based on kinetic energy perturbations which avoids much of the complication of the usual Keplerian element perturbation approaches; (2) a new method for computing the normalized inclination functions which unlike previous methods is both efficient and numerically stable even for large harmonic degrees and orders; (3) the application of a mass storage FFT to the entire mission range rate history; (4) the exploitation of newly discovered symmetries in the block diagonal observation matrix which reduce each block to the product of (a) a real diagonal matrix factor, (b) a real trapezoidal factor with half the number of rows as before, and (c) a complex diagonal factor; (5) a block-by-block least-squares solution of the observation equation by means of a custom-designed Givens orthogonal rotation method which is both numerically stable and tailored to the trapezoidal matrix structure for fast execution.

  15. Exact solution of corner-modified banded block-Toeplitz eigensystems

    NASA Astrophysics Data System (ADS)

    Cobanera, Emilio; Alase, Abhijeet; Ortiz, Gerardo; Viola, Lorenza

    2017-05-01

    Motivated by the challenge of seeking a rigorous foundation for the bulk-boundary correspondence for free fermions, we introduce an algorithm for determining exactly the spectrum and a generalized-eigenvector basis of a class of banded block quasi-Toeplitz matrices that we call corner-modified. Corner modifications of otherwise arbitrary banded block-Toeplitz matrices capture the effect of boundary conditions and the associated breakdown of translational invariance. Our algorithm leverages the interplay between a non-standard, projector-based method of kernel determination (physically, a bulk-boundary separation) and families of linear representations of the algebra of matrix Laurent polynomials. Thanks to the fact that these representations act on infinite-dimensional carrier spaces in which translation symmetry is restored, it becomes possible to determine the eigensystem of an auxiliary projected block-Laurent matrix. This results in an analytic eigenvector Ansatz, independent of the system size, which we prove is guaranteed to contain the full solution of the original finite-dimensional problem. The actual solution is then obtained by imposing compatibility with a boundary matrix, whose shape is also independent of system size. As an application, we show analytically that eigenvectors of short-ranged fermionic tight-binding models may display power-law corrections to exponential behavior, and demonstrate the phenomenon for the paradigmatic Majorana chain of Kitaev.

  16. A Comparative Study of Charges Made Through an Automated Circulation System in the Colorado State University Libraries.

    ERIC Educational Resources Information Center

    Burns, Robert W.

    To determine use of portions of the collections at Colorado State University libraries and to identify heavily used sections, the collections were divided into 204 blocks according to Library of Congress classification letters. The number of charges made in each block was counted during a 1975 quarter for patrons, charges made to the reserve desk,…

  17. Deconvolution of gas chromatographic data

    NASA Technical Reports Server (NTRS)

    Howard, S.; Rayborn, G. H.

    1980-01-01

    The use of deconvolution methods on gas chromatographic data to obtain an accurate determination of the relative amounts of each material present by mathematically separating the merged peaks is discussed. Data were obtained on a gas chromatograph with a flame ionization detector. Chromatograms of five xylenes with differing degrees of separation were generated by varying the column temperature at selected rates. The merged peaks were then successfully separated by deconvolution. The concept of function continuation in the frequency domain was introduced in striving to reach the theoretical limit of accuracy, but proved to be only partially successful.

  18. Detailed interpretation of aeromagnetic data from the Patagonia Mountains area, southeastern Arizona

    USGS Publications Warehouse

    Bultman, Mark W.

    2015-01-01

    Euler deconvolution depth estimates derived from aeromagnetic data with a structural index of 0 show that mapped faults on the northern margin of the Patagonia Mountains generally agree with the depth estimates in the new geologic model. The deconvolution depth estimates also show that the concealed Patagonia Fault southwest of the Patagonia Mountains is more complex than recent geologic mapping represents. Additionally, Euler deconvolution depth estimates with a structural index of 2 locate many potential intrusive bodies that might be associated with known and unknown mineralization.
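
    For reference, Euler deconvolution solves the homogeneity equation (x - x0) dT/dx + (y - y0) dT/dy + (z - z0) dT/dz = N (B - T) over a moving data window by least squares. The generic single-window sketch below assumes the field gradients Tx, Ty, Tz have already been computed.

      import numpy as np

      def euler_solve(x, y, z, T, Tx, Ty, Tz, N):
          # Rearranged linear system for one window:
          # x0*Tx + y0*Ty + z0*Tz + N*B = x*Tx + y*Ty + z*Tz + N*T,
          # where N is the structural index and B the regional field level.
          A = np.column_stack([Tx, Ty, Tz, N * np.ones_like(T)])
          b = x * Tx + y * Ty + z * Tz + N * T
          (x0, y0, z0, B), *_ = np.linalg.lstsq(A, b, rcond=None)
          return x0, y0, z0, B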

  19. A Background Noise Reduction Technique Using Adaptive Noise Cancellation for Microphone Arrays

    NASA Technical Reports Server (NTRS)

    Spalt, Taylor B.; Fuller, Christopher R.; Brooks, Thomas F.; Humphreys, William M., Jr.

    2011-01-01

    Background noise in wind tunnel environments poses a challenge to acoustic measurements due to the possibly low or negative Signal-to-Noise Ratios (SNRs) present in the testing environment. This paper overviews the application of time-domain Adaptive Noise Cancellation (ANC) to microphone array signals, with an intended application of background noise reduction in wind tunnels. An experiment was conducted to simulate background noise from a wind tunnel circuit measured by an out-of-flow microphone array in the tunnel test section. A reference microphone was used to acquire a background noise signal which interfered with the desired primary noise source signal at the array. The technique's efficacy was investigated using frequency spectra from the array microphones, array beamforming of the point source region, and subsequent deconvolution using the Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) algorithm. Comparisons were made with the conventional techniques for improving SNR: spectral and Cross-Spectral Matrix subtraction. The method was seen to recover the primary signal level at SNRs as low as -29 dB and to outperform the conventional methods. A second processing approach, using the center array microphone as the noise reference, was investigated for more general applicability of the ANC technique. It outperformed the conventional methods at the -29 dB SNR but yielded less accurate results when coherence over the array dropped. This approach could improve conventional testing methodology but must be investigated further under more realistic testing conditions.
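
    A minimal time-domain LMS sketch of the reference-microphone idea is given below; the tap count and step size are illustrative, and the full processing chain includes the beamforming and DAMAS stages described above.

      import numpy as np

      def lms_anc(primary, reference, n_taps=64, mu=1e-3):
          # An FIR filter driven by the reference (background-noise) channel
          # is adapted by LMS to match the noise in the primary channel; the
          # running error e is the noise-cancelled signal.
          w = np.zeros(n_taps)
          cleaned = np.zeros(len(primary))
          for i in range(n_taps, len(primary)):
              x = reference[i - n_taps:i][::-1]
              e = primary[i] - w @ x
              w += mu * e * x
              cleaned[i] = e
          return cleaned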

  20. Hydrogen analysis depth calibration by CORTEO Monte-Carlo simulation

    NASA Astrophysics Data System (ADS)

    Moser, M.; Reichart, P.; Bergmaier, A.; Greubel, C.; Schiettekatte, F.; Dollinger, G.

    2016-03-01

    Hydrogen imaging with sub-μm lateral resolution and sub-ppm sensitivity has become possible with coincident proton-proton (pp) scattering analysis (Reichart et al., 2004). Depth information is evaluated from the energy sum signal with respect to the energy loss of both protons on their path through the sample. To first order, there is no angular dependence due to elastic scattering. To second order, a path-length effect due to different energy losses on the paths of the two protons causes an angular dependence of the energy sum. Therefore, the energy sum signal has to be deconvolved depending on the matrix composition, i.e. mainly the atomic number Z, in order to get a depth-calibrated hydrogen profile. Although the path effect can be calculated analytically to first order, multiple scattering effects lead to significant deviations in the depth profile. Hence, in our new approach, we use the CORTEO Monte-Carlo code (Schiettekatte, 2008) to calculate the depth of a coincidence event depending on the scattering angle. The code takes the individual detector geometry into account. In this paper we show that the code correctly reproduces measured pp-scattering energy spectra, with roughness effects considered. With more than 100 μm thick Mylar-sandwich targets (Si, Fe, Ge) we demonstrate the deconvolution of the energy spectra on our current multistrip detector at the microprobe SNAKE at the Munich tandem accelerator lab. As a result, hydrogen profiles can be evaluated with a depth accuracy of about 1% of the sample thickness.

  1. Automated detection of arterial input function in DSC perfusion MRI in a stroke rat model

    NASA Astrophysics Data System (ADS)

    Yeh, M.-Y.; Lee, T.-H.; Yang, S.-T.; Kuo, H.-H.; Chyi, T.-K.; Liu, H.-L.

    2009-05-01

    Quantitative cerebral blood flow (CBF) estimation requires deconvolution of the tissue concentration time curves with an arterial input function (AIF). However, image-based determination of the AIF in rodents is challenging due to limited spatial resolution. We evaluated the feasibility of quantitative analysis using automated AIF detection and compared the results with the commonly applied semi-quantitative analysis. Permanent occlusion of the bilateral or unilateral common carotid artery was used to induce cerebral ischemia in rats. Imaging using the dynamic susceptibility contrast method was performed on a 3-T magnetic resonance scanner with a spin-echo echo-planar-imaging sequence (TR/TE = 700/80 ms, FOV = 41 mm, matrix = 64, 3 slices, SW = 2 mm), starting from 7 s prior to contrast injection (1.2 ml/kg), at four different time points. For quantitative analysis, CBF was calculated by deconvolution with an AIF obtained from the 10 voxels with the greatest contrast enhancement. For semi-quantitative analysis, relative CBF was estimated as the integral divided by the first moment of the relaxivity time curves. We observed that if the AIFs obtained in the three different ROIs (whole brain, hemisphere without lesion, and hemisphere with lesion) were similar, the CBF ratios (lesion/normal) between quantitative and semi-quantitative analyses showed a similar trend at the different operative time points; if the AIFs were different, the CBF ratios differed as well. We concluded that, using local maxima, one can define a proper AIF without knowing the anatomical location of the arteries in a stroke rat model.
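
    Quantitative CBF estimation by deconvolution is commonly carried out with a truncated singular value decomposition. The generic sketch below (threshold and discretization simplified, and not necessarily the authors' exact pipeline) inverts the convolution C(t) = CBF (AIF * R)(t) and reads CBF off the peak of the scaled residue function.

      import numpy as np
      from scipy.linalg import toeplitz

      def svd_deconvolve_cbf(aif, tissue, dt, threshold=0.2):
          # Lower-triangular Toeplitz discretization of convolution with the
          # AIF, inverted with a truncated SVD.
          A = dt * toeplitz(aif, np.zeros_like(aif))
          U, s, Vt = np.linalg.svd(A)
          s_inv = np.where(s > threshold * s.max(), 1.0 / s, 0.0)
          scaled_residue = Vt.T @ (s_inv * (U.T @ tissue))   # CBF * R(t)
          return scaled_residue.max()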

  2. Measurement of Diffusion in Entangled Rod-Coil Triblock Copolymers

    NASA Astrophysics Data System (ADS)

    Olsen, B. D.; Wang, M.

    2012-02-01

    Although rod-coil block copolymers have attracted increasing attention for functional nanomaterials, their dynamics relevant to self-assembly and processing have not been widely investigated. Because the rod and coil blocks have different reptation behavior and persistence lengths, the mechanism by which rod-coil block copolymers diffuse is unclear. In order to understand the effect of the rigid block on reptation, the tracer diffusion of a coil-rod-coil block copolymer through an entangled coil polymer matrix was experimentally measured. A monodisperse, high molecular weight coil-rod-coil triblock was synthesized using artificial protein engineering to prepare the helical rod, followed by bioconjugation of poly(ethylene glycol) coils to produce the final triblock. Diffusion measurements were performed using Forced Rayleigh Scattering (FRS) at varying ratios of the rod length to the entanglement length, where genetic engineering is used to control the protein rod length and the polymer matrix concentration controls the entanglement length. Compared to PEO homopolymer tracers, the coil-rod-coil triblocks show markedly slower diffusion, suggesting that the mismatch between rod and coil reptation mechanisms results in hindered diffusion of these molecules in the entangled state.

  3. Recovering hidden diagonal structures via non-negative matrix factorization with multiple constraints.

    PubMed

    Yang, Xi; Han, Guoqiang; Cai, Hongmin; Song, Yan

    2017-03-31

    Revealing data with intrinsically diagonal block structures is particularly useful for analyzing groups of highly correlated variables. Earlier research based on non-negative matrix factorization (NMF) has shown it to be effective in representing such data by decomposing the observed data into two factors, where one factor is considered to be the feature matrix and the other the expansion loading, from a linear algebra perspective. If the data are sampled from multiple independent subspaces, the loading factor will possess a diagonal structure under an ideal matrix decomposition. However, the standard NMF method and its variants have not been reported to exploit this type of data via direct estimation. To address this issue, a non-negative matrix factorization model with multiple constraints is proposed in this paper. The constraints include a sparsity norm on the feature matrix and a total variation norm on each column of the loading matrix. The proposed model is shown to be capable of efficiently recovering diagonal block structures hidden in observed samples. An efficient numerical algorithm using the alternating direction method of multipliers is proposed for optimizing the new model. Compared with several benchmark models, the proposed method performs robustly and effectively on simulated and real biological data.
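
    As a point of reference, plain multiplicative-update NMF with an L1 sparsity penalty on the feature matrix is sketched below; the paper's actual solver uses the alternating direction method of multipliers and adds a total variation term on each column of the loading matrix, which this simplified stand-in omits.

      import numpy as np

      def sparse_nmf(V, rank, n_iter=200, lam=0.1, eps=1e-9):
          # Factor a non-negative matrix V into W (features) and H (loadings)
          # with multiplicative updates; lam penalizes the L1 norm of W.
          rng = np.random.default_rng(0)
          m, n = V.shape
          W = rng.random((m, rank))
          H = rng.random((rank, n))
          for _ in range(n_iter):
              W *= (V @ H.T) / (W @ H @ H.T + lam + eps)
              H *= (W.T @ V) / (W.T @ W @ H + eps)
          return W, H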

  4. SU-G-IeP3-08: Image Reconstruction for Scanning Imaging System Based On Shape-Modulated Point Spreading Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Ruixing; Yang, LV; Xu, Kele

    Purpose: Deconvolution is a widely used tool in image reconstruction when a linear imaging system has been blurred by an imperfect system transfer function. However, due to the Gaussian-like distribution of the point spread function (PSF), image components with coherent high frequency are hard to restore in most previous scanning imaging systems, even when a relatively accurate PSF is available. We propose a novel method for deconvolution of images obtained using a shape-modulated PSF. Methods: We use two different types of PSF, Gaussian shaped and donut shaped, to convolve the original image in order to simulate the scanning imaging process. By deconvolving the two images with the corresponding given priors, the image quality of the deblurred images is compared. We then find the critical size of the donut shape that gives deconvolution results similar to the Gaussian shape. Calculation of the tight-focusing process using a radially polarized beam shows that such a donut size is achievable under the same conditions. Results: The effects of different relative sizes of the donut and Gaussian shapes are investigated. When the full width at half maximum (FWHM) ratio of the donut and Gaussian shapes is set to about 1.83, similar resolution results are obtained through our deconvolution method. Decreasing the size of the donut further favors the deconvolution method. A mask with both amplitude and phase modulation is used to create a donut-shaped PSF for comparison with the non-modulated Gaussian PSF; a donut smaller than the critical size is obtained. Conclusion: Donut-shaped PSFs are shown to be useful and achievable in imaging and deconvolution processing, and are expected to have practical applications in high-resolution imaging of biological samples.
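
    The simulated comparison can be mocked up as follows. This Python sketch (assuming scipy, with an r²-weighted Gaussian as a stand-in donut profile and a hand-rolled Richardson-Lucy loop) blurs a two-point scene with each PSF and restores it using the known PSF as prior:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def gaussian_psf(size=33, sigma=3.0):
        y, x = np.mgrid[:size, :size] - size // 2
        g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        return g / g.sum()

    def donut_psf(size=33, sigma=3.0):
        # A simple ring profile (r^2 times a Gaussian envelope) standing in
        # for the donut PSF of a radially polarized, phase-modulated beam.
        y, x = np.mgrid[:size, :size] - size // 2
        d = (x**2 + y**2) * np.exp(-(x**2 + y**2) / (2 * sigma**2))
        return d / d.sum()

    def richardson_lucy(blurred, psf, n_iter=50):
        # Minimal Richardson-Lucy deconvolution with the known prior PSF.
        est = np.full_like(blurred, 0.5)
        for _ in range(n_iter):
            conv = fftconvolve(est, psf, mode='same') + 1e-12
            est *= fftconvolve(blurred / conv, psf[::-1, ::-1], mode='same')
        return est

    scene = np.zeros((128, 128))
    scene[64, 60] = scene[64, 68] = 1.0          # two nearby point sources
    for psf in (gaussian_psf(), donut_psf()):    # simulate scan, then restore
        restored = richardson_lucy(fftconvolve(scene, psf, mode='same'), psf)
    ```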

  5. Locating and Quantifying Broadband Fan Sources Using In-Duct Microphones

    NASA Technical Reports Server (NTRS)

    Dougherty, Robert P.; Walker, Bruce E.; Sutliff, Daniel L.

    2010-01-01

    In-duct beamforming techniques have been developed for locating broadband noise sources on a low-speed fan and quantifying the acoustic power in the inlet and aft fan ducts. The NASA Glenn Research Center's Advanced Noise Control Fan was used as a test bed. Several of the blades were modified to provide a broadband source to evaluate the efficacy of the in-duct beamforming technique. Phased arrays consisting of rings and line arrays of microphones were employed. For the imaging, the data were mathematically resampled in the frame of reference of the rotating fan. For both the imaging and power measurement steps, array steering vectors were computed using annular duct modal expansions, selected subsets of the cross spectral matrix elements were used, and the DAMAS and CLEAN-SC deconvolution algorithms were applied.
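
    For orientation, the conventional beamforming step from a cross-spectral matrix can be written compactly. The sketch below (plain Python/numpy) computes the source power map from given steering vectors; the annular duct modal expansions, rotating-frame resampling, and DAMAS/CLEAN-SC deconvolution used in the paper are all outside this sketch:

    ```python
    import numpy as np

    def beamform_map(C, G):
        # Conventional frequency-domain beamformer: for a cross-spectral
        # matrix C (n_mic x n_mic) and steering matrix G (n_mic x n_grid),
        # the power at grid point k is g_k^H C g_k / (g_k^H g_k)^2.
        num = np.einsum('ik,ij,jk->k', G.conj(), C, G)
        den = np.sum(np.abs(G) ** 2, axis=0) ** 2
        return np.real(num) / den
    ```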

  6. Bilinear Inverse Problems: Theory, Algorithms, and Applications

    NASA Astrophysics Data System (ADS)

    Ling, Shuyang

    We will discuss how several important real-world signal processing problems, such as self-calibration and blind deconvolution, can be modeled as bilinear inverse problems and solved by convex and nonconvex optimization approaches. In Chapter 2, we bring together three seemingly unrelated concepts: self-calibration, compressive sensing and biconvex optimization. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations y = DAx, where the diagonal matrix D (which models the calibration error) is unknown and x is an unknown sparse signal. By "lifting" this biconvex inverse problem and exploiting sparsity in this model, we derive explicit theoretical guarantees under which both x and D can be recovered exactly, robustly, and numerically efficiently. In Chapter 3, we study the question of joint blind deconvolution and blind demixing, i.e., extracting a sequence of functions [special characters omitted] from observing only the sum of their convolutions [special characters omitted]. In particular, for the special case s = 1, it becomes the well-known blind deconvolution problem. We present a non-convex algorithm which guarantees exact recovery under conditions that are competitive with convex optimization methods, with the additional advantage of being computationally much more efficient. We discuss several applications of the proposed framework in image processing and wireless communications in connection with the Internet-of-Things. In Chapter 4, we consider three different self-calibration models of practical relevance. We show how their corresponding bilinear inverse problems can be solved by both the simple linear least squares approach and the SVD-based approach. As a consequence, the proposed algorithms are numerically extremely efficient, thus allowing for real-time deployment. Explicit theoretical guarantees and stability theory are derived, and the sampling complexity is nearly optimal (up to a poly-log factor). Applications in imaging sciences and signal processing are discussed and numerical simulations are presented to demonstrate the effectiveness and efficiency of our approach.
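
    The "simple linear least squares approach" mentioned for Chapter 4 can be illustrated on a toy gain-calibration model. In the sketch below (an assumption-laden toy in which the inverse gains lie in a known low-dimensional subspace, so the bilinear problem becomes linear in the joint unknowns), both factors are recovered, up to scale, from the null space of a single matrix:

    ```python
    import numpy as np

    # Toy self-calibration y = diag(1/g) @ A @ x with unknown gains and x.
    # Assuming the inverse gains g lie in a known subspace (g = Phi @ w),
    # each row gives y_i * (Phi @ w)_i - (A @ x)_i = 0: linear in (w, x).
    rng = np.random.default_rng(1)
    m, n, k = 60, 10, 5
    A = rng.standard_normal((m, n))
    Phi = np.hstack([np.ones((m, 1)), 0.2 * rng.standard_normal((m, k - 1))])
    w = np.r_[1.0, rng.standard_normal(k - 1)]
    g = Phi @ w                                # inverse gains
    x = rng.standard_normal(n)
    y = (A @ x) / g                            # observed data

    B = np.hstack([y[:, None] * Phi, -A])      # B @ [w; x] = 0
    z = np.linalg.svd(B)[2][-1]                # null-space vector (up to scale)
    x_hat = z[k:]
    scale = (x_hat @ x) / (x_hat @ x_hat)
    print(np.linalg.norm(scale * x_hat - x) / np.linalg.norm(x))  # ~0
    ```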

  7. A Distributed-Memory Package for Dense Hierarchically Semi-Separable Matrix Computations Using Randomization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rouet, François-Henry; Li, Xiaoye S.; Ghysels, Pieter

    In this paper, we present a distributed-memory library for computations with dense structured matrices. A matrix is considered structured if its off-diagonal blocks can be approximated by a rank-deficient matrix with low numerical rank. Here, we use Hierarchically Semi-Separable (HSS) representations. Such matrices appear in many applications, for example, finite-element methods, boundary element methods, and so on. Exploiting this structure allows for fast solution of linear systems and/or fast computation of matrix-vector products, which are the two main building blocks of matrix computations. The compression algorithm that we use, which computes the HSS form of an input dense matrix, relies on randomized sampling with a novel adaptive sampling mechanism. We discuss the parallelization of this algorithm and also present the parallelization of structured matrix-vector product, structured factorization, and solution routines. The efficiency of the approach is demonstrated on large problems from different academic and industrial applications, on up to 8,000 cores. Finally, this work is part of a more global effort, the STRUctured Matrices PACKage (STRUMPACK) software package for computations with sparse and dense structured matrices. Hence, although useful in their own right, the routines also represent a step in the direction of a distributed-memory sparse solver.
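
    The basic compression step, approximating an off-diagonal block by a low-rank factorization from random samples, can be sketched as follows (a generic randomized range finder in Python/numpy, not STRUMPACK's adaptive sampling mechanism):

    ```python
    import numpy as np

    def randomized_lowrank(Ablock, rank, oversample=10):
        # Randomized range finder: sample the column space of an
        # off-diagonal block with Gaussian test vectors, orthonormalize,
        # and project, giving Ablock ~ Q @ B with ~rank columns in Q.
        Omega = np.random.default_rng(0).standard_normal(
            (Ablock.shape[1], rank + oversample))
        Q, _ = np.linalg.qr(Ablock @ Omega)
        B = Q.T @ Ablock
        return Q[:, :rank], B[:rank]
    ```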

  8. A Distributed-Memory Package for Dense Hierarchically Semi-Separable Matrix Computations Using Randomization

    DOE PAGES

    Rouet, François-Henry; Li, Xiaoye S.; Ghysels, Pieter; ...

    2016-06-30

    In this paper, we present a distributed-memory library for computations with dense structured matrices. A matrix is considered structured if its off-diagonal blocks can be approximated by a rank-deficient matrix with low numerical rank. Here, we use Hierarchically Semi-Separable (HSS) representations. Such matrices appear in many applications, for example, finite-element methods, boundary element methods, and so on. Exploiting this structure allows for fast solution of linear systems and/or fast computation of matrix-vector products, which are the two main building blocks of matrix computations. The compression algorithm that we use, which computes the HSS form of an input dense matrix, relies on randomized sampling with a novel adaptive sampling mechanism. We discuss the parallelization of this algorithm and also present the parallelization of structured matrix-vector product, structured factorization, and solution routines. The efficiency of the approach is demonstrated on large problems from different academic and industrial applications, on up to 8,000 cores. Finally, this work is part of a more global effort, the STRUctured Matrices PACKage (STRUMPACK) software package for computations with sparse and dense structured matrices. Hence, although useful in their own right, the routines also represent a step in the direction of a distributed-memory sparse solver.

  9. Solid oxide fuel cell matrix and modules

    DOEpatents

    Riley, B.

    1988-04-22

    Porous refractory ceramic blocks arranged in an abutting, stacked configuration and forming a three-dimensional array provide a support structure and coupling means for a plurality of solid oxide fuel cells (SOFCs). The stack of ceramic blocks is self-supporting, with a plurality of such stacked arrays forming a matrix enclosed in an insulating refractory brick structure having an outer steel layer. The necessary connections for air, fuel, burnt gas, and anode and cathode connections are provided through the brick and steel outer shell. The ceramic blocks are designed with respect to the strings of modules such that, by simple and logical design, a failed string can be replaced by hot reloading. The hot reloading concept has not been included in any previous designs. 11 figs.

  10. A frequency-domain seismic blind deconvolution based on Gini correlations

    NASA Astrophysics Data System (ADS)

    Wang, Zhiguo; Zhang, Bing; Gao, Jinghuai; Huo Liu, Qing

    2018-02-01

    In reflection seismic processing, seismic blind deconvolution is a challenging problem, especially when the signal-to-noise ratio (SNR) of the seismic record is low and the length of the seismic record is short. As a solution to this ill-posed inverse problem, we assume that the reflectivity sequence is independent and identically distributed (i.i.d.). To infer the i.i.d. relationships from seismic data, we first introduce the Gini correlations (GCs) to construct a new criterion for seismic blind deconvolution in the frequency domain. Owing to a unique feature, the GCs are robust, with a higher tolerance of low-SNR data and less dependence on record length. Applications of the seismic blind deconvolution based on the GCs demonstrate its capacity to estimate the unknown seismic wavelet and the reflectivity sequence for both synthetic traces and field data, even with low SNR and short records.
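
    The GC statistic itself is simple to compute. A plain Python sketch of one standard covariance-of-ranks form is given below; the paper's frequency-domain deconvolution criterion built on it is not reproduced here:

    ```python
    import numpy as np
    from scipy.stats import rankdata

    def gini_correlation(x, y):
        # Gini correlation of x with respect to y, in its covariance form:
        # cov(x, rank(y)) / cov(x, rank(x)). Note it is asymmetric in x, y.
        rx, ry = rankdata(x), rankdata(y)
        return np.cov(x, ry)[0, 1] / np.cov(x, rx)[0, 1]
    ```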

  11. Circulating matrix metalloproteinase-9 and tissue inhibitors of metalloproteinases-1 and -2 levels in gestational hypertension.

    PubMed

    Tayebjee, Muzahir H; Karalis, Ioannis; Nadar, Sunil K; Beevers, D Gareth; MacFadyen, Robert J; Lip, Gregory Y H

    2005-03-01

    Gestational hypertension (GH) is dangerous to both mother and child. Arterial invasiveness and growth are dependent on successful extracellular matrix (ECM) breakdown, which may be abnormal in GH. We hypothesized abnormalities in circulating matrix metalloproteinase-9 (MMP-9) and tissue inhibitors of metalloproteinases-1 and -2 (TIMP-1 and TIMP-2, respectively) in patients with GH, when compared with normotensive women with normal pregnancies and healthy nonpregnant control subjects. Plasma MMP-9, TIMP-1, and TIMP-2 were measured by ELISA in 23 women with GH, 30 normotensive pregnant women, and 28 nonpregnant women who were matched for age, gestational age, and parity. Levels of circulating MMP-9, TIMP-1 and TIMP-2, and the MMP-9/TIMP-1 and MMP-9/TIMP-2 ratios were significantly different among the three groups (P = .026, P = .006, P = .007, P = .001 and P = .008 respectively). Within the GH group, MMP-9 and the MMP-9/TIMP-1 ratio correlated negatively with age (r = -0.581, P = .004 and r = -0.563, P = .005, respectively) and levels of diastolic blood pressure (r = -0.432, P = .040 and r = -0.461, P = .027, respectively). With multiple regression analysis, only age independently correlated with circulating levels of MMP-9 (P = .010); neither age nor levels of diastolic blood pressure had any effect on the MMP-9/TIMP-1 ratio. We have demonstrated altered MMP/TIMP ratios in maternal blood during GH. These observations suggest pregnancy-related changes in ECM breakdown and turnover. Given the importance of changes in ECM composition to vascular and cardiac structure in hypertension, we suggest that these observations may be related to the pathophysiology of human GH.

  12. Visualizing Matrix Multiplication

    ERIC Educational Resources Information Center

    Daugulis, Peteris; Sondore, Anita

    2018-01-01

    Efficient visualizations of computational algorithms are important tools for students, educators, and researchers. In this article, we point out an innovative visualization technique for matrix multiplication. This method differs from the standard, formal approach by using block matrices to make computations more visual. We find this method a…

  13. Uncarboxylated matrix Gla protein (ucMGP) is associated with coronary artery calcification in haemodialysis patients.

    PubMed

    Cranenburg, Ellen C M; Brandenburg, Vincent M; Vermeer, Cees; Stenger, Melanie; Mühlenbruch, Georg; Mahnken, Andreas H; Gladziwa, Ulrich; Ketteler, Markus; Schurgers, Leon J

    2009-02-01

    Matrix gamma-carboxyglutamate (Gla) protein (MGP) is a potent local inhibitor of cardiovascular calcification and accumulates at areas of calcification in its uncarboxylated form (ucMGP). We previously found significantly lower circulating ucMGP levels in patients with a high vascular calcification burden. Here we report on the potential of circulating ucMGP to serve as a biomarker for vascular calcification in haemodialysis (HD) patients. Circulating ucMGP levels were measured with an ELISA-based assay in 40 HD patients who underwent multi-slice computed tomography (MSCT) scanning to quantify the extent of coronary artery calcification (CAC). The mean ucMGP level in HD patients (193 +/- 65 nM) was significantly lower as compared to apparently healthy subjects of the same age (441 +/- 97 nM; p < 0.001) and patients with rheumatoid arthritis (RA) without CAC (560 +/- 140 nM; p < 0.001). Additionally, ucMGP levels correlated inversely with CAC scores (r = -0.41; p = 0.009), and this correlation persisted after adjustment for age, dialysis vintage and high-sensitivity C-reactive protein (hs-CRP). Since circulating ucMGP levels are significantly and inversely correlated with the extent of CAC in HD patients, ucMGP may become a tool for identifying HD patients with a high probability of cardiovascular calcification.

  14. Processing strategy for water-gun seismic data from the Gulf of Mexico

    USGS Publications Warehouse

    Lee, Myung W.; Hart, Patrick E.; Agena, Warren F.

    2000-01-01

    In order to study the regional distribution of gas hydrates and their potential relationship to large-scale sea-floor failures, more than 1,300 km of near-vertical-incidence seismic profiles were acquired using a 15-in³ water gun across the upper- and middle-continental slope in the Garden Banks and Green Canyon regions of the Gulf of Mexico. Because of the highly mixed-phase water-gun signature, caused mainly by a precursor of the source arriving about 18 ms ahead of the main pulse, a conventional processing scheme based on the minimum-phase assumption is not suitable for this data set. A conventional processing scheme suppresses the reverberations and compresses the main pulse, but the failure to suppress precursors results in complex interference between the precursors and primary reflections, thus obscuring true reflections. To clearly image the subsurface without interference from the precursors, a wavelet deconvolution based on the mixed-phase assumption using a variable norm is attempted. This non-minimum-phase wavelet deconvolution compresses the long-wave-train water-gun signature into a simple zero-phase wavelet. A second-zero-crossing predictive deconvolution followed by a wavelet deconvolution suppressed variable ghost arrivals attributed to the variable depths of receivers. The processing strategy of using wavelet deconvolution followed by a second-zero-crossing deconvolution resulted in a sharp and simple wavelet and a better definition of the polarity of reflections. Also, the application of dip-moveout correction enhanced the lateral resolution of reflections and substantially suppressed coherent noise.
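
    One ingredient of such a flow, textbook gap (predictive) deconvolution, can be sketched in a few lines of Python (assuming scipy; the filter length, gap, and prewhitening values are illustrative, and this is not the variable-norm wavelet deconvolution itself):

    ```python
    import numpy as np
    from scipy.linalg import solve_toeplitz

    def predictive_decon(trace, gap, nfilt, eps=1e-3):
        # Gap deconvolution: design a Wiener prediction filter from the
        # trace autocorrelation (Toeplitz system, solved here by Levinson
        # recursion) and subtract the predictable part, i.e. the
        # reverberations. Assumes gap + nfilt <= len(trace).
        ac = np.correlate(trace, trace, mode='full')[len(trace) - 1:]
        r = ac[:nfilt].copy()
        r[0] *= 1.0 + eps                      # prewhitening for stability
        f = solve_toeplitz(r, ac[gap:gap + nfilt])
        pred = np.convolve(trace, f)[:len(trace)]
        out = trace.copy()
        out[gap:] -= pred[:-gap] if gap else pred
        return out
    ```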

  15. Dereplication of Natural Products Using GC-TOF Mass Spectrometry: Improved Metabolite Identification by Spectral Deconvolution Ratio Analysis.

    PubMed

    Carnevale Neto, Fausto; Pilon, Alan C; Selegato, Denise M; Freire, Rafael T; Gu, Haiwei; Raftery, Daniel; Lopes, Norberto P; Castro-Gamboa, Ian

    2016-01-01

    Dereplication based on hyphenated techniques has been extensively applied in plant metabolomics, thereby avoiding re-isolation of known natural products. However, due to the complex nature of biological samples and their large concentration range, dereplication requires the use of chemometric tools to comprehensively extract information from the acquired data. In this work we developed a reliable GC-MS-based method for the identification of non-targeted plant metabolites by combining the Ratio Analysis of Mass Spectrometry deconvolution tool (RAMSY) with the Automated Mass Spectral Deconvolution and Identification System software (AMDIS). Plant species from Solanaceae, Chrysobalanaceae and Euphorbiaceae were selected as model systems due to their molecular diversity, ethnopharmacological potential, and economic value. The samples were analyzed by GC-MS after methoximation and silylation reactions. Dereplication was initiated with the use of a factorial design of experiments to determine the best AMDIS configuration for each sample, considering linear retention indices and mass spectral data. A heuristic factor (CDF, compound detection factor) was developed and applied to the AMDIS results in order to decrease the false-positive rates. Despite the enhancement in deconvolution and peak identification, the empirical AMDIS method was not able to fully deconvolute all GC peaks, leading to low MF values and/or missing metabolites. RAMSY was applied as a deconvolution method complementary to AMDIS for peaks exhibiting substantial overlap, resulting in recovery of low-intensity co-eluted ions. The results from this combination of optimized AMDIS with RAMSY attest to the ability of this approach to serve as an improved dereplication method for complex biological samples such as plant extracts.

  16. Dereplication of Natural Products Using GC-TOF Mass Spectrometry: Improved Metabolite Identification by Spectral Deconvolution Ratio Analysis

    PubMed Central

    Carnevale Neto, Fausto; Pilon, Alan C.; Selegato, Denise M.; Freire, Rafael T.; Gu, Haiwei; Raftery, Daniel; Lopes, Norberto P.; Castro-Gamboa, Ian

    2016-01-01

    Dereplication based on hyphenated techniques has been extensively applied in plant metabolomics, thereby avoiding re-isolation of known natural products. However, due to the complex nature of biological samples and their large concentration range, dereplication requires the use of chemometric tools to comprehensively extract information from the acquired data. In this work we developed a reliable GC-MS-based method for the identification of non-targeted plant metabolites by combining the Ratio Analysis of Mass Spectrometry deconvolution tool (RAMSY) with the Automated Mass Spectral Deconvolution and Identification System software (AMDIS). Plant species from Solanaceae, Chrysobalanaceae and Euphorbiaceae were selected as model systems due to their molecular diversity, ethnopharmacological potential, and economic value. The samples were analyzed by GC-MS after methoximation and silylation reactions. Dereplication was initiated with the use of a factorial design of experiments to determine the best AMDIS configuration for each sample, considering linear retention indices and mass spectral data. A heuristic factor (CDF, compound detection factor) was developed and applied to the AMDIS results in order to decrease the false-positive rates. Despite the enhancement in deconvolution and peak identification, the empirical AMDIS method was not able to fully deconvolute all GC peaks, leading to low MF values and/or missing metabolites. RAMSY was applied as a deconvolution method complementary to AMDIS for peaks exhibiting substantial overlap, resulting in recovery of low-intensity co-eluted ions. The results from this combination of optimized AMDIS with RAMSY attest to the ability of this approach to serve as an improved dereplication method for complex biological samples such as plant extracts. PMID:27747213

  17. A method of PSF generation for 3D brightfield deconvolution.

    PubMed

    Tadrous, P J

    2010-02-01

    This paper addresses the problem of 3D deconvolution of through focus widefield microscope datasets (Z-stacks). One of the most difficult stages in brightfield deconvolution is finding the point spread function. A theoretically calculated point spread function (called a 'synthetic PSF' in this paper) requires foreknowledge of many system parameters and still gives only approximate results. A point spread function measured from a sub-resolution bead suffers from low signal-to-noise ratio, compounded in the brightfield setting (by contrast to fluorescence) by absorptive, refractive and dispersal effects. This paper describes a method of point spread function estimation based on measurements of a Z-stack through a thin sample. This Z-stack is deconvolved by an idealized point spread function derived from the same Z-stack to yield a point spread function of high signal-to-noise ratio that is also inherently tailored to the imaging system. The theory is validated by a practical experiment comparing the non-blind 3D deconvolution of the yeast Saccharomyces cerevisiae with the point spread function generated using the method presented in this paper (called the 'extracted PSF') to a synthetic point spread function. Restoration of both high- and low-contrast brightfield structures is achieved with fewer artefacts using the extracted point spread function obtained with this method. Furthermore the deconvolution progresses further (more iterations are allowed before the error function reaches its nadir) with the extracted point spread function compared to the synthetic point spread function indicating that the extracted point spread function is a better fit to the brightfield deconvolution model than the synthetic point spread function.

  18. Grape seed extracts inhibit dentin matrix degradation by MMP-3

    PubMed Central

    Khaddam, Mayssam; Salmon, Benjamin; Le Denmat, Dominique; Tjaderhane, Leo; Menashi, Suzanne; Chaussain, Catherine; Rochefort, Gaël Y.; Boukpessi, Tchilalo

    2014-01-01

    Since matrix metalloproteinases (MMPs) have been suggested to contribute to dentin caries progression, the hypothesis that MMP inhibition would affect the progression of dentin caries is clinically relevant. Grape seed extracts (GSE) have previously been reported to be natural inhibitors of MMPs. Objective: To evaluate the capacity of a GSE mouthrinse to prevent the degradation of demineralized dentin matrix by MMP-3 (stromelysin-1). Materials and Methods: Standardized blocks of dentin obtained from sound permanent teeth extracted for orthodontic reasons were demineralized with ethylenediaminetetraacetic acid (EDTA) and pretreated with either (A) GSE (0.2% w/v), (B) amine fluoride (AmF) (20% w/v), (C) a mouthrinse which contains both, (D) placebo, (E) sodium fluoride (0.15 mg·ml⁻¹), (F) PBS, (G) chlorhexidine digluconate (CHX), or (H) zinc chloride (ZnCl2). The dentin blocks were then incubated with activated recombinant MMP-3. The supernatants were analyzed by Western blot for several dentin matrix proteins known to be MMP-3 substrates. In parallel, scanning electron microscopy (SEM) was performed on resin replicas of the dentin blocks. Results: Western blot analysis of the supernatants revealed that MMP-3 released the small proteoglycans (decorin and biglycan) and dentin sialoprotein (DSP) from the dentin matrix in the AmF, sodium fluoride, PBS and placebo pretreated groups, but not in the GSE and mouthrinse pretreated groups. SEM examination of resin replicas showed that the mouthrinse and its active components not only had an anti-MMP action but also modified the dentin surface accessibility. Conclusion: This study shows that GSE, either alone or combined with AmF as in the evaluated mouthrinse, limits dentin matrix degradation. This association may be promising for preventing the progression of caries within dentin. However, the procedure should be adapted to clinically relevant durations. PMID:25400590

  19. Pushing Memory Bandwidth Limitations Through Efficient Implementations of Block-Krylov Space Solvers on GPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, M. A.; Strelchenko, Alexei; Vaquero, Alejandro

    Lattice quantum chromodynamics simulations in nuclear physics have benefited from a tremendous number of algorithmic advances such as multigrid and eigenvector deflation. These improve the time to solution but do not alleviate the intrinsic memory-bandwidth constraints of the matrix-vector operation dominating iterative solvers. Batching this operation for multiple vectors and exploiting cache and register blocking can yield a super-linear speed-up. Block-Krylov solvers can naturally take advantage of such batched matrix-vector operations, further reducing the iterations to solution by sharing the Krylov space between solves. However, practical implementations typically suffer from the quadratic scaling in the number of vector-vector operations. Using the QUDA library, we present an implementation of a block-CG solver on NVIDIA GPUs which reduces the memory-bandwidth complexity of vector-vector operations from quadratic to linear. We present results for the HISQ discretization, showing a 5x speedup compared to highly-optimized independent Krylov solves on NVIDIA's SaturnV cluster.

  20. Inboard seal mounting

    DOEpatents

    Hayes, John R.

    1983-01-01

    A regenerator assembly for a gas turbine engine has a hot side seal assembly formed in part by a cast metal engine block having a seal recess formed therein that is configured to supportingly receive ceramic support blocks including an inboard face thereon having a regenerator seal face bonded thereto. A pressurized leaf seal is interposed between the ceramic support block and the cast metal engine block to bias the seal wear face into sealing engagement with a hot side surface of a rotary regenerator matrix.

  1. Fast live cell imaging at nanometer scale using annihilating filter-based low-rank Hankel matrix approach

    NASA Astrophysics Data System (ADS)

    Min, Junhong; Carlini, Lina; Unser, Michael; Manley, Suliana; Ye, Jong Chul

    2015-09-01

    Localization microscopy such as STORM/PALM can achieve nanometer-scale spatial resolution by iteratively localizing fluorescent molecules. It has been shown that imaging of densely activated molecules can improve temporal resolution, which has been considered a major limitation of localization microscopy. However, such high-density imaging requires advanced localization algorithms to deal with overlapping point spread functions (PSFs). To address this technical challenge, we previously developed a localization algorithm called FALCON [1, 2] using a quasi-continuous localization model with a sparsity prior on image space, and demonstrated it in both 2D and 3D live cell imaging; however, it left several aspects to be improved. Here, we propose a new localization algorithm using an annihilating filter-based low-rank Hankel structured matrix approach (ALOHA). According to the ALOHA principle, sparsity in the image domain implies the existence of a rank-deficient Hankel structured matrix in Fourier space. Thanks to this fundamental duality, the new algorithm performs data-adaptive PSF estimation and deconvolution of the Fourier spectrum, followed by truly grid-free localization using a spectral estimation technique. Furthermore, all of these optimizations are conducted in Fourier space only. We validated the performance of the new method with numerical experiments and a live cell imaging experiment. The results confirm higher localization performance in both experiments in terms of accuracy and detection rate.
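
    The duality that ALOHA exploits is easy to verify numerically: a k-sparse signal yields a rank-k Hankel matrix built from its Fourier samples. A minimal Python check with illustrative sizes and tolerance:

    ```python
    import numpy as np
    from scipy.linalg import hankel

    n, k = 64, 3
    x = np.zeros(n)
    x[[5, 19, 40]] = [1.0, 0.7, 0.4]              # a k-sparse "image"
    xhat = np.fft.fft(x)                          # its Fourier samples
    H = hankel(xhat[:n // 2], xhat[n // 2 - 1:])  # Hankel matrix in Fourier space
    print(np.linalg.matrix_rank(H, tol=1e-8))     # -> 3, i.e. rank equals sparsity
    ```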

  2. Fast methodology for the reliable determination of nonylphenol in water samples by minimal labeling isotope dilution mass spectrometry.

    PubMed

    Fabregat-Cabello, Neus; Castillo, Ángel; Sancho, Juan V; González, Florenci V; Roig-Navarro, Antoni Francesc

    2013-08-02

    In this work we have developed and validated an accurate and fast methodology for the determination of 4-nonylphenol (technical mixture) in complex-matrix water samples by UHPLC-ESI-MS/MS. The procedure is based on isotope dilution mass spectrometry (IDMS) in combination with isotope pattern deconvolution (IPD), which provides the concentration of the analyte directly from the spiked sample without requiring any methodological calibration graph. To avoid any possible isotopic effect during the analytical procedure, the in-house synthesized (13)C1-4-(3,6-dimethyl-3-heptyl)phenol was used as the labeled compound. This proposed surrogate was able to compensate for the matrix effect even in wastewater samples. A SPE pre-concentration step, together with exhaustive efforts to avoid contamination, was included to reach the signal-to-noise ratio necessary to detect the endogenous concentrations present in environmental samples. Calculations were performed acquiring only three transitions, achieving limits of detection lower than 100 ng/g for all water matrices assayed. Recoveries within 83-108% and coefficients of variation ranging from 1.5% to 9% were obtained. In contrast, a considerable overestimation was obtained with the most common classical calibration procedure using 4-n-nonylphenol as internal standard, demonstrating the suitability of the minimal labeling approach. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Effects of partitioning and scheduling sparse matrix factorization on communication and load balance

    NASA Technical Reports Server (NTRS)

    Venugopal, Sesh; Naik, Vijay K.

    1991-01-01

    A block-based, automatic partitioning and scheduling methodology is presented for sparse matrix factorization on distributed memory systems. Using experimental results, this technique is analyzed for communication and load-imbalance overhead. To study the performance effects, these overheads were compared with those obtained from a straightforward 'wrap-mapped' column assignment scheme. All experimental results were obtained using test sparse matrices from the Harwell-Boeing data set. The results show that there is a communication and load balance tradeoff. The block-based method results in lower communication cost whereas the wrap-mapped scheme gives better load balance.

  4. Density matrix renormalization group for a highly degenerate quantum system: Sliding environment block approach

    NASA Astrophysics Data System (ADS)

    Schmitteckert, Peter

    2018-04-01

    We present an infinite lattice density matrix renormalization group sweeping procedure which can be used as a replacement for the standard infinite lattice blocking schemes. Although the scheme is generally applicable to any system, its main advantages are the correct representation of commensurability issues and the treatment of degenerate systems. As an example we apply the method to a spin chain featuring a highly degenerate ground-state space where the new sweeping scheme provides an increase in performance as well as accuracy by many orders of magnitude compared to a recently published work.

  5. Sedimentological and geochemical features of chaotic deposits in the Ventimiglia Flysch (Roya-Argentina valley, NW Italy)

    NASA Astrophysics Data System (ADS)

    Perotti, Elena; Bertok, Carlo; D'Atri, Anna; Martire, Luca; Musso, Alessia; Piana, Fabrizio; Varrone, Dario

    2010-05-01

    The Ventimiglia Flysch is an Upper Eocene turbidite succession deposited in the SE part of the Eocene Alpine foreland basin, truncated at the top by the basal thrust of the Helminthoides Flysch, a Ligurian tectonic unit that presently covers part of the Dauphinois and Briançonnais successions of the Western Ligurian Alps. The Ventimiglia Flysch is made of alternating sandstones and shales, and its upper part is characterized by chaotic deposits. These chaotic deposits consist of km- to hm-sized intrabasinal blocks (Ventimiglia Flysch) and extrabasinal blocks (Cretaceous sediments of the Dauphinois Domain, Nummulite Limestone of the Alpine foreland basin, and Helminthoides Flysch), together with conglomerates with block-in-matrix fabric interpreted as debris flow deposits; the latter occur as m-thick beds interbedded with the normal turbidite succession or locally as the matrix of the larger blocks. Debris flow clasts show different sizes, ranging from metres to centimetres; different shapes, from rounded to subangular; and different lithologies, such as fine-grained quartz-arenites, marls, dark shales and fine-grained calcisiltites. They may be referred to both coeval, intrabasinal lithologies (Ventimiglia Flysch) and extrabasinal formations (Nummulite Limestone, Globigerina Marl and Helminthoides Flysch). The clasts are disposed randomly in a chaotic matrix that consists of a dark mudstone containing submillimetre- to millimetre-sized lithic grains with the same compositions as the larger clasts. Locally the matrix consists of sandstones with quartz and feldspar grains and fragments of nummulitids, suggesting reworking of unlithified Eocene sediments. Cathodoluminescence observations allow the distinction of two kinds of clasts: dull clasts that underwent cementation before the formation of the conglomerates, and clasts with the same orange luminescence as the matrix, which may be interpreted as soft mud clasts cemented together with the matrix. The debris flow deposits are cross-cut by a network of crumpled and broken veins, 10s of mm to cm in size, filled with orange-luminescing calcite and locally with quartz. Their complex cross-cutting relationships with clasts and matrix show that several systems of veins are present, which may be referred to different fracturing events. Some clasts are crossed or bordered by veins that end at the edge of the clasts; these veins show the same features as those that crosscut the whole rock, indicating reworking by mass gravity flows of plastic sediments crossed by calcite-filled veins. Polyphase debris flow processes, proceeding along with fluid expulsion and veining, are thus documented. Ellipsoidal, dm-large concretions of cemented pelites also occur; they represent an earlier phase of concretionary growth within homogeneous pelites subsequently involved in the mass gravity flow. Stable O and C isotope analyses, performed on matrix, clasts, concretions and veins, show δ13C close to normal marine values (-3 to 0 ‰ PDB) and markedly negative δ18O (-9 to -7 ‰ PDB), which could be related to precipitation from relatively hot waters (60-70 °C). The block-in-matrix fabric and the variable composition and size of the blocks show that these sediments are a sedimentary mélange related to mass wasting processes involving both extrabasinal and intrabasinal sediments. These gravitational movements took place along slopes of submarine tectonic ridges created by transpressional faults (Piana et al., 2009) that juxtaposed tectonic slices of different paleogeographic domains (Dauphinois, Briançonnais, Ligurian Units) in Late Eocene times, and involved both rock-fall processes of huge blocks of lithified older formations and debris flows of unlithified intrabasinal sediment. Faults also acted as conduits for an upward flow of hot fluids supersaturated in calcium carbonate. These fluids crossed unlithified sediments close to the sea floor, resulting in localized concretionary cementation and formation of vein swarms within unlithified sediments prone to subsequent mass wasting.

  6. Percutaneous transluminal coronary angioplasty (PTCA)

    MedlinePlus Videos and Cool Tools

    ... angioplasty (PTCA) is a minimally invasive procedure to open up blocked coronary arteries, allowing blood to circulate ... within the coronary artery to keep the vessel open. Once the compression has been performed, contrast media ...

  7. Switching Matrix For Optical Signals

    NASA Technical Reports Server (NTRS)

    Grove, Charles H.

    1990-01-01

    Proposed matrix of electronically controlled shutters switches signals in optical fibers between multiple input and output channels. Size, weight, and power consumption reduced. Device serves as building block for small, low-power, broad-band television- and data-signal-switching systems providing high isolation between nominally disconnected channels.

  8. Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun

    1996-01-01

    In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P_b ≈ (d_H/N)P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit error probability are discussed.
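
    The quoted approximation can be checked by simulation. The sketch below (Python; a systematic (7,4) Hamming code under soft-decision ML decoding of BPSK over AWGN stands in for the paper's randomly generated codes) compares the measured P_b against (d_H/N)P_s with d_H = 3 and N = 7:

    ```python
    import numpy as np

    G = np.array([[1,0,0,0,1,1,0],
                  [0,1,0,0,0,1,1],
                  [0,0,1,0,1,1,1],
                  [0,0,0,1,1,0,1]])              # systematic (7,4) Hamming code
    msgs = np.array([[(i >> j) & 1 for j in range(4)] for i in range(16)])
    book = 1 - 2.0 * (msgs @ G % 2)              # BPSK-mapped codebook
    rng = np.random.default_rng(0)
    sigma = 1 / np.sqrt(2 * (4 / 7) * 10 ** (5 / 10))   # Eb/N0 = 5 dB
    trials, bit_err, blk_err = 100000, 0, 0
    for _ in range(trials):
        i = rng.integers(16)
        r = book[i] + sigma * rng.standard_normal(7)
        ml = np.argmin(np.sum((book - r) ** 2, axis=1))  # soft-decision ML
        blk_err += ml != i
        bit_err += np.sum(msgs[ml] != msgs[i])   # info-bit errors (systematic)
    print(bit_err / (4 * trials), (3 / 7) * blk_err / trials)  # P_b vs (d_H/N)*P_s
    ```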

  9. Compressive Properties of Metal Matrix Syntactic Foams in Free and Constrained Compression

    NASA Astrophysics Data System (ADS)

    Orbulov, Imre Norbert; Májlinger, Kornél

    2014-06-01

    Metal matrix syntactic foam (MMSF) blocks were produced by an inert gas-assisted pressure infiltration technique. MMSFs are advanced hollow sphere reinforced-composite materials having promising application in the fields of aviation, transport, and automotive engineering, as well as in civil engineering. The produced blocks were investigated in free and constrained compression modes, and besides the characteristic mechanical properties, their deformation mechanisms and failure modes were studied. In the tests, the chemical composition of the matrix material, the size of the reinforcing ceramic hollow spheres, the applied heat treatment, and the compression mode were considered as investigation parameters. The monitored mechanical properties were the compressive strength, the fracture strain, the structural stiffness, the fracture energy, and the overall absorbed energy. These characteristics were strongly influenced by the test parameters. By the proper selection of the matrix and the reinforcement and by proper design, the mechanical properties of the MMSFs can be effectively tailored for specific and given applications.

  10. A digital algorithm for spectral deconvolution with noise filtering and peak picking: NOFIPP-DECON

    NASA Technical Reports Server (NTRS)

    Edwards, T. R.; Settle, G. L.; Knight, R. D.

    1975-01-01

    Noise-filtering, peak-picking deconvolution software incorporates multiple convoluted convolute integers and multiparameter optimization pattern search. The two theories are described and three aspects of the software package are discussed in detail. Noise-filtering deconvolution was applied to a number of experimental cases ranging from noisy, nondispersive X-ray analyzer data to very noisy photoelectric polarimeter data. Comparisons were made with published infrared data, and a man-machine interactive language has evolved for assisting in very difficult cases. A modified version of the program is being used for routine preprocessing of mass spectral and gas chromatographic data.

  11. The Small-scale Structure of Photospheric Convection Retrieved by a Deconvolution Technique Applied to Hinode/SP Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oba, T.; Riethmüller, T. L.; Solanki, S. K.

    Solar granules are bright patterns surrounded by dark channels, called intergranular lanes, in the solar photosphere and are a manifestation of overshooting convection. Observational studies generally find stronger upflows in granules and weaker downflows in intergranular lanes. This trend is, however, inconsistent with the results of numerical simulations in which downflows are stronger than upflows through the joint action of gravitational acceleration/deceleration and pressure gradients. One cause of this discrepancy is the image degradation caused by optical distortion and light diffraction and scattering that takes place in an imaging instrument. We apply a deconvolution technique to Hinode/SP data in an attempt to recover the original solar scene. Our results show a significant enhancement in both the convective upflows and downflows, but particularly for the latter. After deconvolution, the up- and downflows reach maximum amplitudes of −3.0 km s⁻¹ and +3.0 km s⁻¹ at an average geometrical height of roughly 50 km, respectively. We found that the velocity distributions after deconvolution match those derived from numerical simulations. After deconvolution, the net LOS velocity averaged over the whole field of view lies close to zero, as expected in a rough sense from mass balance.

  12. Toxoplasma Modulates Signature Pathways of Human Epilepsy, Neurodegeneration & Cancer.

    PubMed

    Ngô, Huân M; Zhou, Ying; Lorenzi, Hernan; Wang, Kai; Kim, Taek-Kyun; Zhou, Yong; El Bissati, Kamal; Mui, Ernest; Fraczek, Laura; Rajagopala, Seesandra V; Roberts, Craig W; Henriquez, Fiona L; Montpetit, Alexandre; Blackwell, Jenefer M; Jamieson, Sarra E; Wheeler, Kelsey; Begeman, Ian J; Naranjo-Galvis, Carlos; Alliey-Rodriguez, Ney; Davis, Roderick G; Soroceanu, Liliana; Cobbs, Charles; Steindler, Dennis A; Boyer, Kenneth; Noble, A Gwendolyn; Swisher, Charles N; Heydemann, Peter T; Rabiah, Peter; Withers, Shawn; Soteropoulos, Patricia; Hood, Leroy; McLeod, Rima

    2017-09-13

    One third of humans are infected lifelong with the brain-dwelling, protozoan parasite, Toxoplasma gondii. Approximately fifteen million of these have congenital toxoplasmosis. Although neurobehavioral disease is associated with seropositivity, causality is unproven. To better understand what this parasite does to human brains, we performed a comprehensive systems analysis of the infected brain: We identified susceptibility genes for congenital toxoplasmosis in our cohort of infected humans and found these genes are expressed in human brain. Transcriptomic and quantitative proteomic analyses of infected human, primary, neuronal stem and monocytic cells revealed effects on neurodevelopment and plasticity in neural, immune, and endocrine networks. These findings were supported by identification of protein and miRNA biomarkers in sera of ill children reflecting brain damage and T. gondii infection. These data were deconvoluted using three systems biology approaches: "Orbital-deconvolution" elucidated upstream, regulatory pathways interconnecting human susceptibility genes, biomarkers, proteomes, and transcriptomes. "Cluster-deconvolution" revealed visual protein-protein interaction clusters involved in processes affecting brain functions and circuitry, including lipid metabolism, leukocyte migration and olfaction. Finally, "disease-deconvolution" identified associations between the parasite-brain interactions and epilepsy, movement disorders, Alzheimer's disease, and cancer. This "reconstruction-deconvolution" logic provides templates of progenitor cells' potentiating effects, and components affecting human brain parasitism and diseases.

  13. Peptide de novo sequencing of mixture tandem mass spectra.

    PubMed

    Gorshkov, Vladimir; Hotta, Stéphanie Yuki Kolbeck; Verano-Braga, Thiago; Kjeldsen, Frank

    2016-09-01

    The impact of mixture spectra deconvolution on the performance of four popular de novo sequencing programs was tested using artificially constructed mixture spectra as well as experimental proteomics data. Mixture fragmentation spectra are recognized as a limitation in proteomics because they decrease identification performance when using database search engines. De novo sequencing approaches are expected to be even more sensitive to the reduction in mass spectrum quality resulting from peptide precursor co-isolation, and thus prone to false identifications. The deconvolution approach matched complementary b- and y-ions to each precursor peptide mass, which allowed the creation of virtual spectra containing sequence-specific fragment ions of each co-isolated peptide. Deconvolution processing resulted in equally efficient identification rates but increased the absolute number of correctly sequenced peptides. The improvement was in the range of 20-35% additional peptide identifications for a HeLa lysate sample. Some correct sequences were identified only using unprocessed spectra; however, the number of these was lower than the number of cases where improvement was obtained by mass spectral deconvolution. A tight candidate-peptide score distribution and high sensitivity to the small changes in the mass spectrum introduced by the employed deconvolution method could explain some of the missing peptide identifications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
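
    The complementary-ion matching at the heart of the deconvolution can be sketched directly: for singly charged b/y fragments of a peptide of neutral mass M, paired fragments satisfy mz(b_i) + mz(y_{n-i}) ≈ M + 2 × (proton mass). A minimal Python sketch with an illustrative tolerance:

    ```python
    PROTON = 1.007276  # Da

    def pair_by_ions(frag_mz, precursor_mass, tol=0.02):
        # For singly charged fragments of a peptide of neutral mass M,
        # complementary pairs satisfy mz(b_i) + mz(y_{n-i}) ~ M + 2*PROTON.
        target = precursor_mass + 2 * PROTON
        pairs = []
        for i, a in enumerate(frag_mz):
            for b in frag_mz[i + 1:]:
                if abs(a + b - target) < tol:
                    pairs.append((a, b))
        return pairs
    ```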

  14. A primitive study on unsupervised anomaly detection with an autoencoder in emergency head CT volumes

    NASA Astrophysics Data System (ADS)

    Sato, Daisuke; Hanaoka, Shouhei; Nomura, Yukihiro; Takenaga, Tomomi; Miki, Soichiro; Yoshikawa, Takeharu; Hayashi, Naoto; Abe, Osamu

    2018-02-01

    Purpose: The target disorders of emergency head CT are wide-ranging. Therefore, people working in an emergency department desire a computer-aided detection system for general disorders. In this study, we proposed an unsupervised anomaly detection method for emergency head CT using an autoencoder and evaluated its anomaly detection performance. Methods: We used a 3D convolutional autoencoder (3D-CAE), which contains 11 layers in the convolution block and 6 layers in the deconvolution block. In the training phase, we trained the 3D-CAE using 10,000 3D patches extracted from 50 normal cases. In the test phase, we calculated the abnormality of each voxel in 38 emergency head CT volumes (22 abnormal cases and 16 normal cases) and evaluated the likelihood of lesion existence. Results: Our method achieved a sensitivity of 68% and a specificity of 88%, with an area under the receiver operating characteristic curve of 0.87, showing that it has moderate accuracy in distinguishing abnormal CT cases from normal ones. Conclusion: Our method has potential for anomaly detection in emergency head CT.

  15. Optimization of Single- and Dual-Color Immunofluorescence Protocols for Formalin-Fixed, Paraffin-Embedded Archival Tissues.

    PubMed

    Kajimura, Junko; Ito, Reiko; Manley, Nancy R; Hale, Laura P

    2016-02-01

    Performance of immunofluorescence staining on archival formalin-fixed paraffin-embedded human tissues is generally not considered to be feasible, primarily due to problems with tissue quality and autofluorescence. We report the development and application of procedures that allowed for the study of a unique archive of thymus tissues derived from autopsies of individuals exposed to atomic bomb radiation in Hiroshima, Japan in 1945. Multiple independent treatments were used to minimize autofluorescence and maximize fluorescent antibody signals. Treatments with NH3/EtOH and Sudan Black B were particularly useful in decreasing autofluorescent moieties present in the tissue. Deconvolution microscopy was used to further enhance the signal-to-noise ratios. Together, these techniques provide high-quality single- and dual-color fluorescent images with low background and high contrast from paraffin blocks of thymus tissue that were prepared up to 60 years ago. The resulting high-quality images allow the application of a variety of image analyses to thymus tissues that previously were not accessible. Whereas the procedures presented remain to be tested for other tissue types and archival conditions, the approach described may facilitate greater utilization of older paraffin block archives for modern immunofluorescence studies. © 2016 The Histochemical Society.

  16. Strategies to avoid false negative findings in residue analysis using liquid chromatography coupled to time-of-flight mass spectrometry.

    PubMed

    Kaufmann, Anton; Butcher, Patrick

    2006-01-01

    Liquid chromatography coupled to orthogonal acceleration time-of-flight mass spectrometry (LC/TOF) provides an attractive alternative to liquid chromatography coupled to triple quadrupole mass spectrometry (LC/MS/MS) in the field of multiresidue analysis. The sensitivity and selectivity of LC/TOF approach those of LC/MS/MS. TOF provides accurate mass information and a significantly higher mass resolution than quadrupole analyzers. The available mass resolution of commercial TOF instruments, ranging from 10 000 to 18 000 full width at half maximum (FWHM), is not, however, sufficient to completely exclude the problem of isobaric interferences (co-elution of analyte ions with matrix compounds of very similar mass). Due to the required data storage capacity, TOF raw data is commonly centroided before being electronically stored. However, centroiding can lead to a loss of data quality. The co-elution of a low intensity analyte peak with an isobaric, high intensity matrix compound can cause problems. Some centroiding algorithms might not be capable of deconvoluting such partially merged signals, leading to incorrect centroids. Co-elution of isobaric compounds has been deliberately simulated by injecting diluted binary mixtures of isobaric model substances at various relative intensities. Depending on the mass differences between the two isobaric compounds and the resolution provided by the TOF instrument, significant deviations in exact mass measurements and signal intensities were observed. The extraction of a reconstructed ion chromatogram based on very narrow mass windows can even result in the complete loss of the analyte signal. Guidelines have been proposed to avoid such problems. The use of sub-2 μm HPLC packing materials is recommended to improve chromatographic resolution and to reduce the risk of co-elution. The width of the extraction mass windows for reconstructed ion chromatograms should be defined according to the resolution of the TOF instrument. Alternative approaches include the spiking of the sample with appropriate analyte concentrations. Furthermore, enhanced software, capable of deconvoluting partially merged mass peaks, may become available. Copyright (c) 2006 John Wiley & Sons, Ltd.

  17. A Parallel Framework with Block Matrices of a Discrete Fourier Transform for Vector-Valued Discrete-Time Signals.

    PubMed

    Soto-Quiros, Pablo

    2015-01-01

    This paper presents a parallel implementation of a kind of discrete Fourier transform (DFT): the vector-valued DFT. The vector-valued DFT is a novel tool to analyze the spectra of vector-valued discrete-time signals. This parallel implementation is developed in terms of a mathematical framework with a set of block matrix operations. These block matrix operations contribute to the analysis, design, and implementation of parallel algorithms on multicore processors. In this work, an implementation and experimental investigation of the mathematical framework are performed using MATLAB with the Parallel Computing Toolbox. We found that there is an advantage to using multicore processors and a parallel computing environment to reduce the high execution time. Additionally, the speedup increases as the number of logical processors and the length of the signal increase.

  18. Achieving Continuous Anion Transport Domains Using Block Copolymers Containing Phosphonium Cations

    DOE PAGES

    Zhang, Wenxu; Liu, Ye; Jackson, Aaron C.; ...

    2016-06-22

    Triblock and diblock copolymers based on isoprene (Ip) and chloromethylstyrene (CMS) were synthesized in this work by sequential polymerization using reversible addition–fragmentation chain transfer (RAFT) radical polymerization. The block copolymers were quaternized with tris(2,4,6-trimethoxyphenyl)phosphine (Ar3P) to prepare soluble ionomers. The ionomers were cast from chloroform to form anion exchange membranes (AEMs) with highly ordered morphologies. At low volume fractions of ionic blocks, the ionomers formed lamellar morphologies, while at moderate volume fractions (≥30% for triblock and ≥22% for diblock copolymers) hexagonal phases with an ionic matrix were observed. Ion conductivities were higher through the hexagonal phase matrix than in the lamellar phases. Finally, promising chloride conductivities (20 mS/cm) were achieved at elevated temperatures and humidified conditions.

  19. Novel protective role of kallistatin in obesity by limiting adipose tissue low grade inflammation and oxidative stress.

    PubMed

    Frühbeck, Gema; Gómez-Ambrosi, Javier; Rodríguez, Amaia; Ramírez, Beatriz; Valentí, Víctor; Moncada, Rafael; Becerril, Sara; Unamuno, Xabier; Silva, Camilo; Salvador, Javier; Catalán, Victoria

    2018-04-18

    Kallistatin plays an important role in the inhibition of inflammation, oxidative stress, fibrosis and angiogenesis. We aimed to determine the impact of kallistatin on obesity and its associated metabolic alterations as well as its role in adipocyte inflammation and oxidative stress. Samples obtained from 95 subjects were used in a case-control study. Circulating concentrations and expression levels of kallistatin as well as key inflammation, oxidative stress and extracellular matrix remodelling-related genes were analyzed. Circulating kallistatin concentrations were measured before and after weight loss achieved by Roux-en-Y gastric bypass (RYGB). The impact of kallistatin on lipopolysaccharide (LPS)- and tumour necrosis factor (TNF)-α-mediated inflammatory as well as oxidative stress signalling pathways was evaluated. We show that the reduced (P < 0.00001) circulating levels of kallistatin in obese patients increased (P < 0.00001) after RYGB. Moreover, gene expression levels of SERPINA4, the gene coding for kallistatin, were downregulated (P < 0.01) in the liver from obese subjects with non-alcoholic fatty liver disease. Additionally, we revealed that kallistatin reduced (P < 0.05) the expression of inflammation-related genes (CCL2, IL1B, IL6, IL8, TNFA, TGFB) and, conversely, upregulated (P < 0.05) mRNA levels of ADIPOQ and KLF4 in human adipocytes in culture. Kallistatin inhibited (P < 0.05) LPS- and TNF-α-induced inflammation in human adipocytes via downregulating the expression and secretion of key inflammatory markers. Furthermore, kallistatin also blocked (P < 0.05) TNF-α-mediated lipid peroxidation as well as NOX2 and HIF1A expression while stimulating (P < 0.05) the expression of SIRT1 and FOXO1. These findings provide, for the first time, evidence of a novel role of kallistatin in obesity and its associated comorbidities by limiting adipose tissue inflammation and oxidative stress. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Quanlin; Oldenburg, Curtis M.; Spangler, Lee H.

    Analytical solutions with infinite exponential series are available to calculate the rate of diffusive transfer between low-permeability blocks and high-permeability zones in the subsurface. Truncation of these series is often employed by neglecting the early-time regime. Here we present unified-form approximate solutions in which the early-time and the late-time solutions are continuous at a switchover time. The early-time solutions are based on three-term polynomial functions in terms of the square root of dimensionless time, with the first coefficient dependent only on the dimensionless area-to-volume ratio. The last two coefficients are either determined analytically for isotropic blocks (e.g., spheres and slabs) or obtained by fitting the exact solutions, and they depend solely on the aspect ratios for rectangular columns and parallelepipeds. For the late-time solutions, only the leading exponential term is needed for isotropic blocks, while a few additional exponential terms are needed for highly anisotropic rectangular blocks. The optimal switchover time is between 0.157 and 0.229, with a maximum relative approximation error of less than 0.2%. The solutions are used to demonstrate the storage of dissolved CO2 in fractured reservoirs with low-permeability matrix blocks of single and multiple shapes and sizes. These approximate solutions are building blocks for the development of analytical and numerical tools for hydraulic, solute, and thermal diffusion processes in low-permeability matrix blocks.
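
    For a sphere, a unified early/late-time form of the kind described here looks roughly as follows (Python; the classical two-term early-time expansion and the leading exponential are used with a 0.2 switchover, as illustrative stand-ins for the paper's fitted three-term coefficients and optimal switchover times):

    ```python
    import numpy as np

    def sphere_uptake(t, t_switch=0.2):
        # Unified early/late-time approximation of fractional diffusive
        # uptake for a sphere, with t = D*t/a^2 (dimensionless). Early
        # time: classical expansion 6*sqrt(t/pi) - 3*t; late time:
        # leading exponential of the exact infinite series.
        t = np.asarray(t, dtype=float)
        early = 6.0 * np.sqrt(t / np.pi) - 3.0 * t
        late = 1.0 - (6.0 / np.pi ** 2) * np.exp(-np.pi ** 2 * t)
        return np.where(t < t_switch, early, late)

    # The two branches nearly meet at the switchover (~0.914 vs ~0.916 here).
    print(sphere_uptake(0.1999), sphere_uptake(0.2001))
    ```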

  1. Scalable Nonparametric Low-Rank Kernel Learning Using Block Coordinate Descent.

    PubMed

    Hu, En-Liang; Kwok, James T

    2015-09-01

    Nonparametric kernel learning (NPKL) is a flexible approach to learn the kernel matrix directly without assuming any parametric form. It can be naturally formulated as a semidefinite program (SDP), which, however, is not very scalable. To address this problem, we propose the combined use of low-rank approximation and block coordinate descent (BCD). Low-rank approximation avoids the expensive positive semidefinite constraint in the SDP by replacing the kernel matrix variable with VᵀV, where V is a low-rank matrix. The resultant nonlinear optimization problem is then solved by BCD, which optimizes each column of V sequentially. It can be shown that the proposed algorithm has nice convergence properties and low computational complexity. Experiments on a number of real-world data sets show that the proposed algorithm outperforms state-of-the-art NPKL solvers.
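
    As a toy illustration of the column-wise BCD idea (not the authors' NPKL formulation, which also incorporates side-information constraints), the sketch below fits a low-rank factor V to a given kernel matrix by cycling over columns of V; each block update is an exact least-squares solve with the other columns held fixed, ignoring the diagonal residual term for simplicity.

```python
import numpy as np

def bcd_lowrank_kernel(K, r, sweeps=10, seed=0):
    """Block coordinate descent on min_V sum_{i != j} (K_ij - v_i^T v_j)^2:
    each block is one column of the r x n factor V, updated by an exact
    least-squares solve with all other columns held fixed. (A hedged sketch
    of the BCD idea only; the diagonal residual term is dropped and the
    paper's NPKL constraints are omitted.)"""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    V = 0.1 * rng.standard_normal((r, n))
    for _ in range(sweeps):
        for i in range(n):
            mask = np.arange(n) != i
            Vo = V[:, mask]                        # other columns, held fixed
            A = Vo @ Vo.T + 1e-8 * np.eye(r)       # small ridge for stability
            V[:, i] = np.linalg.solve(A, Vo @ K[mask, i])
    return V

# usage: fit a rank-5 factor to an RBF kernel on 40 random points
X = np.random.default_rng(1).standard_normal((40, 3))
K = np.exp(-0.5 * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2))
V = bcd_lowrank_kernel(K, r=5)
print(np.linalg.norm(K - V.T @ V) / np.linalg.norm(K))   # relative fit error
```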

  2. Convergence to Diagonal Form of Block Jacobi-type Processes

    NASA Astrophysics Data System (ADS)

    Hari, Vjeran

    2008-09-01

    The main result of recent research on convergence to diagonal form of block Jacobi-type processes is presented. For this purpose, all notions needed to describe the result are introduced. In particular, elementary block transformation matrices, simple and non-simple algorithms, block pivot strategies together with the appropriate equivalence relations are defined. The general block Jacobi-type process considered here can be specialized to take the form of almost any known Jacobi-type method for solving the ordinary or the generalized matrix eigenvalue and singular value problems. The assumptions used in the result are satisfied by many concrete methods.

  3. Recovering of images degraded by atmosphere

    NASA Astrophysics Data System (ADS)

    Lin, Guang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2017-08-01

    Remote sensing images are seriously degraded by multiple scattering and bad weather. Through analysis of the radiative transfer process in the atmosphere, an atmospheric image degradation model that accounts for atmospheric absorption, multiple scattering and non-uniform distribution is proposed in this paper. Based on the proposed model, a novel recovery method is presented to eliminate atmospheric degradation. Mean-shift image segmentation and block-wise deconvolution are used to reduce the time cost while retaining good results. The recovery results indicate that the proposed method can significantly remove atmospheric degradation and effectively improve contrast compared with other removal methods. The results also illustrate that our method is suitable for various degraded remote sensing images, including images with a large field of view (FOV), images taken in side-glance situations, images degraded by non-uniform atmospheric distribution and images with various forms of clouds.

  4. Influence of additional heat exchanger block on directional solidification system for growing multi-crystalline silicon ingot - A simulation investigation

    NASA Astrophysics Data System (ADS)

    Nagarajan, S. G.; Srinivasan, M.; Aravinth, K.; Ramasamy, P.

    2018-04-01

    Transient simulations have been carried out to analyze the heat transfer properties of a Directional Solidification (DS) furnace. The simulation results revealed that an additional heat exchanger block under the bottom insulation of the DS furnace enhances control of the solidification of the silicon melt. A controlled heat extraction rate during solidification of the silicon melt is a prerequisite for growing good-quality ingots, and this has been achieved with the additional heat exchanger block. As the additional heat exchanger block, a water-circulating plate was placed under the bottom insulation. The heat flux analysis of the DS system and the temperature distribution studies of the grown ingot confirm that the added heat exchanger block provides an additional benefit for the mc-Si ingot.

  5. Single-cell mRNA profiling reveals transcriptional heterogeneity among pancreatic circulating tumour cells.

    PubMed

    Lapin, Morten; Tjensvoll, Kjersti; Oltedal, Satu; Javle, Milind; Smaaland, Rune; Gilje, Bjørnar; Nordgård, Oddmund

    2017-05-31

    Single-cell mRNA profiling of circulating tumour cells may contribute to a better understanding of the biology of these cells and their role in the metastatic process. In addition, such analyses may reveal new knowledge about the mechanisms underlying chemotherapy resistance and tumour progression in patients with cancer. Single circulating tumour cells were isolated from patients with locally advanced or metastatic pancreatic cancer with immuno-magnetic depletion and immuno-fluorescence microscopy. mRNA expression was analysed with single-cell multiplex RT-qPCR. Hierarchical clustering and principal component analysis were performed to identify expression patterns. Circulating tumour cells were detected in 33 of 56 (59%) examined blood samples. Single-cell mRNA profiling of intact isolated circulating tumour cells revealed both epithelial-like and mesenchymal-like subpopulations, which were distinct from leucocytes. The profiled circulating tumour cells also expressed elevated levels of stem cell markers, and the extracellular matrix protein, SPARC. The expression of SPARC might correspond to an epithelial-mesenchymal transition in pancreatic circulating tumour cells. The analysis of single pancreatic circulating tumour cells identified distinct subpopulations and revealed elevated expression of transcripts relevant to the dissemination of circulating tumour cells to distant organ sites.

  6. Discovery of the early Jurassic Gajia mélange in the Bangong-Nujiang suture zone: Southward subduction of the Bangong-Nujiang Ocean?

    NASA Astrophysics Data System (ADS)

    Lai, Wen; Hu, Xiumian; Zhu, Dicheng; An, Wei; Ma, Anlin

    2017-06-01

    Mélange records a series of geological processes associated with oceanic subduction and continental collision. This paper reports for the first time the presence of an Early Jurassic mélange from NW Nagqu, in the southern margin of the Bangong-Nujiang suture zone, termed the Gajia mélange. It shows a typical block-in-matrix structure, with a matrix of black shale and siliceous mudstone and blocks of sandstone, silicalite, limestone and basalt ranging from several centimeters to several meters in size. The sandstone blocks consist of homologous sandstone and two types of exotic sandstone with different modal compositions. Group 1, exotic sandstone blocks, consists mainly of feldspar and quartz, whereas Group 2 is rich in volcanic detritus. Group 3, the homologous sandstone blocks, is rich in feldspar and volcanic detritus with a rare occurrence of quartz. U-Pb age data and in situ Hf isotopic compositions of detrital zircons from the sandstone blocks are similar to those from the Lhasa terrane, suggesting that the sandstone blocks in the Gajia mélange most probably came from the Lhasa terrane. The YC1σ(2+) age of the homologous sandstone blocks is 177 ± 2.4 Ma, suggesting an Early Jurassic depositional age for the sandstones within the Gajia mélange. The Gajia mélange likely records the southward subduction of the Bangong-Nujiang Ocean during the Early Jurassic.

  7. Solid oxide fuel cell matrix and modules

    DOEpatents

    Riley, Brian

    1990-01-01

    Porous refractory ceramic blocks arranged in an abutting, stacked configuration and forming a three dimensional array provide a support structure and coupling means for a plurality of solid oxide fuel cells (SOFCs). Each of the blocks includes a square center channel which forms a vertical shaft when the blocks are arranged in a stacked array. Positioned within the channel is a SOFC unit cell such that a plurality of such SOFC units disposed within a vertical shaft form a string of SOFC units coupled in series. A first pair of facing inner walls of each of the blocks each include an interconnecting channel hole cut horizontally and vertically into the block walls to form gas exit channels. A second pair of facing lateral walls of each block further include a pair of inner half circular grooves which form sleeves to accommodate anode fuel and cathode air tubes. The stack of ceramic blocks is self-supporting, with a plurality of such stacked arrays forming a matrix enclosed in an insulating refractory brick structure having an outer steel layer. The necessary connections for air, fuel, burnt gas, and anode and cathode connections are provided through the brick and steel outer shell. The ceramic blocks are so designed with respect to the strings of modules that by simple and logical design the strings could be replaced by hot reloading if one should fail. The hot reloading concept has not been included in any previous designs.

  8. Simulation of naturally fractured reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saidi, A.M.

    1983-11-01

    A three-dimensional, three-phase reservoir simulator was developed to study the behavior of fully or partially fractured reservoirs. It is also demonstrated that when a fractured reservoir is subject to a relatively large rate of pressure drop and/or is composed of relatively large blocks, the pseudo-steady-state pressure concept gives large errors compared with the transient formulation. In addition, when the gravity drainage and imbibition processes, which are the most important mechanisms in fractured reservoirs, are represented by a "lumped parameter", even larger errors can be produced in the exchange flow between matrix and fractures. For these reasons, the matrix blocks are gridded and the transfer between matrix and fractures is calculated using a transient pressure and diffusion concept. In this way the gravity drainage is also calculated accurately. As the matrix-fracture exchange flow depends on the location of each matrix grid relative to the GOC and/or WOC in the fracture, the exchange flow equations are derived and given for each possible case. The differential equations describing the flow of water, oil, and gas within the matrix and fracture system, each of which may contain six unknowns, are presented. The two sets of equations are solved implicitly for pressure and for water and gas saturation in both matrix and fractures. The first twenty-two years of the history of the Haft Kel field were successfully matched with this model and the results are included.

  9. Integrated Circuit For Simulation Of Neural Network

    NASA Technical Reports Server (NTRS)

    Thakoor, Anilkumar P.; Moopenn, Alexander W.; Khanna, Satish K.

    1988-01-01

    Ballast resistors deposited on top of circuit structure. Cascadable, programmable binary connection matrix fabricated in VLSI form as basic building block for assembly of like units into content-addressable electronic memory matrices operating somewhat like networks of neurons. Connections formed during storage of data, and data recalled from memory by prompting matrix with approximate or partly erroneous signals. Redundancy in pattern of connections causes matrix to respond with correct stored data.

  10. A tight and explicit representation of Q in sparse QR factorization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ng, E.G.; Peyton, B.W.

    1992-05-01

    In QR factorization of a sparse m×n matrix A (m ≥ n), the orthogonal factor Q is often stored implicitly as a lower trapezoidal matrix H known as the Householder matrix. This paper presents a simple characterization of the row structure of Q, which could be used as the basis for a sparse data structure that can store Q explicitly. The new characterization is a simple extension of a well-known row-oriented characterization of the structure of H. Hare, Johnson, Olesky, and van den Driessche have recently provided a complete sparsity analysis of the QR factorization. Let U be the matrix consisting of the first n columns of Q. Using their results, we show that the data structures for H and U resulting from our characterizations are tight when A is a strong Hall matrix. We also show that H and the lower trapezoidal part of U have the same sparsity characterization when A is strong Hall. We then show that this characterization can be extended to any weak Hall matrix that has been permuted into block upper triangular form. Finally, we show that permuting to block triangular form never increases the fill incurred during the factorization.

  11. Fibronectin matrix assembly is essential for cell condensation during chondrogenesis

    PubMed Central

    Singh, Purva; Schwarzbauer, Jean E.

    2014-01-01

    Mesenchymal cell condensation is the initiating event in endochondral bone formation. Cell condensation is followed by differentiation into chondrocytes, which is accompanied by induction of chondrogenic gene expression. Gene mutations involved in chondrogenesis cause chondrodysplasias and other skeletal defects. Using mesenchymal stem cells (MSCs) in an in vitro chondrogenesis assay, we found that knockdown of the diastrophic dysplasia (DTD) sulfate transporter (DTDST, also known as SLC26A2), which is required for normal cartilage development, blocked cell condensation and caused a significant reduction in fibronectin matrix. Knockdown of fibronectin with small interfering RNAs (siRNAs) also blocked condensation. Fibrillar fibronectin matrix was detected prior to cell condensation, and its levels increased during and after condensation. Inhibition of fibronectin matrix assembly by use of the functional upstream domain (FUD) of adhesin F1 from Streptococcus pyogenes prevented cell condensation by MSCs and also by the chondrogenic cell line ATDC5. Our data show that cell condensation and induction of chondrogenesis depend on fibronectin matrix assembly and DTDST, and indicate that this transporter is required earlier in chondrogenesis than previously appreciated. They also raise the possibility that certain of the skeletal defects in DTD patients might derive from the link between DTDST, fibronectin matrix and condensation. PMID:25146392

  12. Fibronectin matrix assembly is essential for cell condensation during chondrogenesis.

    PubMed

    Singh, Purva; Schwarzbauer, Jean E

    2014-10-15

    Mesenchymal cell condensation is the initiating event in endochondral bone formation. Cell condensation is followed by differentiation into chondrocytes, which is accompanied by induction of chondrogenic gene expression. Gene mutations involved in chondrogenesis cause chondrodysplasias and other skeletal defects. Using mesenchymal stem cells (MSCs) in an in vitro chondrogenesis assay, we found that knockdown of the diastrophic dysplasia (DTD) sulfate transporter (DTDST, also known as SLC26A2), which is required for normal cartilage development, blocked cell condensation and caused a significant reduction in fibronectin matrix. Knockdown of fibronectin with small interfering RNAs (siRNAs) also blocked condensation. Fibrillar fibronectin matrix was detected prior to cell condensation, and its levels increased during and after condensation. Inhibition of fibronectin matrix assembly by use of the functional upstream domain (FUD) of adhesin F1 from Streptococcus pyogenes prevented cell condensation by MSCs and also by the chondrogenic cell line ATDC5. Our data show that cell condensation and induction of chondrogenesis depend on fibronectin matrix assembly and DTDST, and indicate that this transporter is required earlier in chondrogenesis than previously appreciated. They also raise the possibility that certain of the skeletal defects in DTD patients might derive from the link between DTDST, fibronectin matrix and condensation. © 2014. Published by The Company of Biologists Ltd.

  13. Towards robust deconvolution of low-dose perfusion CT: sparse perfusion deconvolution using online dictionary learning.

    PubMed

    Fang, Ruogu; Chen, Tsuhan; Sanelli, Pina C

    2013-05-01

    Computed tomography perfusion (CTP) is an important functional imaging modality in the evaluation of cerebrovascular diseases, particularly in acute stroke and vasospasm. However, the post-processed parametric maps of blood flow tend to be noisy, especially in low-dose CTP, due to the noisy contrast enhancement profile and the oscillatory nature of the results generated by the current computational methods. In this paper, we propose a robust sparse perfusion deconvolution method (SPD) to estimate cerebral blood flow in CTP performed at low radiation dose. We first build a dictionary from high-dose perfusion maps using online dictionary learning and then perform deconvolution-based hemodynamic parameter estimation on the low-dose CTP data. Our method is validated on clinical data of patients with normal and pathological CBF maps. The results show that we achieve superior performance compared with existing methods, and potentially improve the differentiation between normal and ischemic tissue in the brain. Copyright © 2013 Elsevier B.V. All rights reserved.
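
    For readers unfamiliar with the dictionary-learning half of the pipeline, the sketch below learns a patch dictionary from a clean image and uses it to denoise a noisy one with scikit-learn. It stands in for the paper's high-dose/low-dose setup only and omits the coupling of the dictionary prior with the perfusion deconvolution itself; array sizes and parameters are illustrative.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

rng = np.random.default_rng(0)
clean = rng.random((64, 64))                          # stand-in for a high-dose map
noisy = clean + 0.3 * rng.standard_normal((64, 64))   # stand-in for low-dose data

# learn a patch dictionary from the "high-dose" image
P = extract_patches_2d(clean, (8, 8), max_patches=500, random_state=0)
P = P.reshape(len(P), -1)
P -= P.mean(axis=1, keepdims=True)
dico = MiniBatchDictionaryLearning(n_components=50, alpha=1.0,
                                   batch_size=32, random_state=0).fit(P)

# sparse-code the "low-dose" patches against the dictionary and reconstruct
Q = extract_patches_2d(noisy, (8, 8))
shape = Q.shape
Q = Q.reshape(len(Q), -1)
means = Q.mean(axis=1, keepdims=True)
recon = dico.transform(Q - means) @ dico.components_ + means
denoised = reconstruct_from_patches_2d(recon.reshape(shape), (64, 64))
```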

  14. Deconvolution of azimuthal mode detection measurements

    NASA Astrophysics Data System (ADS)

    Sijtsma, Pieter; Brouwer, Harry

    2018-05-01

    Unequally spaced transducer rings make it possible to extend the range of detectable azimuthal modes. The disadvantage is that the response of the mode detection algorithm to a single mode is distributed over all detectable modes, similarly to the Point Spread Function of Conventional Beamforming with microphone arrays. With multiple modes the response patterns interfere, leading to a relatively high "noise floor" of spurious modes in the detected mode spectrum, in other words, to a low dynamic range. In this paper a deconvolution strategy is proposed for increasing this dynamic range. It starts with separating the measured sound into shaft tones and broadband noise. For broadband noise modes, a standard Non-Negative Least Squares solver appeared to be a perfect deconvolution tool. For shaft tones a Matching Pursuit approach is proposed, taking advantage of the sparsity of dominant modes. The deconvolution methods were applied to mode detection measurements in a fan rig. An increase in dynamic range of typically 10-15 dB was found.
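
    The broadband part of the strategy, posing the spurious-mode removal as a non-negative least-squares problem, can be sketched in a few lines with SciPy; the response matrix and mode counts below are invented for illustration, not taken from the fan-rig measurements.

```python
import numpy as np
from scipy.optimize import nnls

# Toy deconvolution of an azimuthal mode spectrum: b = A @ x_true, where
# column m of A is the detection algorithm's response to a unit mode m --
# the modal analogue of a point spread function. Sizes are illustrative.
rng = np.random.default_rng(0)
n_modes = 21
A = np.eye(n_modes) + 0.1 * rng.random((n_modes, n_modes))   # spurious side levels
x_true = np.zeros(n_modes)
x_true[[5, 12]] = [1.0, 0.5]                                 # two dominant modes
b = A @ x_true + 0.01 * rng.random(n_modes)

x_hat, residual = nnls(A, b)   # non-negative least-squares deconvolution
print(np.round(x_hat, 2))      # energy re-concentrates in modes 5 and 12
```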

  15. Determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution

    NASA Technical Reports Server (NTRS)

    Ioup, George E.; Ioup, Juliette W.

    1991-01-01

    The final report for work on the determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution is presented. Papers and theses prepared during the reporting period are included. The central result is a methodology for determining design and operation parameters that minimize error when deconvolution is included in the data analysis. An error surface is plotted versus the signal-to-noise ratio (SNR) and all parameters of interest. Instrumental characteristics determine a curve in this space. The SNR and parameter values which give the projection from the curve to the surface corresponding to the smallest error are the optimum values. These values are constrained by the curve and so will not necessarily correspond to an absolute minimum of the error surface.

  16. Joint deconvolution and classification with applications to passive acoustic underwater multipath.

    PubMed

    Anderson, Hyrum S; Gupta, Maya R

    2008-11-01

    This paper addresses the problem of classifying signals that have been corrupted by noise and unknown linear time-invariant (LTI) filtering such as multipath, given labeled uncorrupted training signals. A maximum a posteriori approach to the deconvolution and classification is considered, which produces estimates of the desired signal, the unknown channel, and the class label. For cases in which only a class label is needed, the classification accuracy can be improved by not committing to an estimate of the channel or signal. A variant of the quadratic discriminant analysis (QDA) classifier is proposed that probabilistically accounts for the unknown LTI filtering, and which avoids deconvolution. The proposed QDA classifier can work either directly on the signal or on features whose transformation by LTI filtering can be analyzed; as an example a classifier for subband-power features is derived. Results on simulated data and real Bowhead whale vocalizations show that jointly considering deconvolution with classification can dramatically improve classification performance over traditional methods over a range of signal-to-noise ratios.
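
    As context, a plain QDA baseline (the classifier the paper's variant extends) can be set up with scikit-learn as below: trained on clean signals and evaluated on LTI-filtered ones, it exhibits the train/test mismatch the proposed method is designed to absorb. The data, filter, and sizes are invented for illustration, and the paper's filtering-aware variant is not reproduced here.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(0)
n, d = 200, 32
t = np.linspace(0, 4, d)
X = np.vstack([rng.standard_normal((n, d)) + np.sin(t),    # class 0 template
               rng.standard_normal((n, d)) + np.cos(t)])   # class 1 template
y = np.r_[np.zeros(n), np.ones(n)]

h = np.array([1.0, 0.6, 0.3])   # unknown multipath-like channel (illustrative)
X_filtered = np.apply_along_axis(lambda s: np.convolve(s, h)[:d], 1, X)

qda = QuadraticDiscriminantAnalysis(reg_param=0.1).fit(X, y)  # trained on clean data
print("accuracy on filtered signals:", qda.score(X_filtered, y))
```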

  17. Application of Fourier-wavelet regularized deconvolution for improving image quality of free space propagation x-ray phase contrast imaging.

    PubMed

    Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin

    2012-11-21

    New x-ray phase contrast imaging techniques that do not use synchrotron radiation confront a common problem: the negative effects of finite source size and limited spatial resolution. These effects swamp the fine phase contrast fringes and make them almost undetectable. To alleviate this problem, deconvolution procedures should be applied to the blurred x-ray phase contrast images. In this study, three different deconvolution techniques, including Wiener filtering, Tikhonov regularization and Fourier-wavelet regularized deconvolution (ForWaRD), were applied to simulated and experimental free space propagation x-ray phase contrast images of simple geometric phantoms. These algorithms were evaluated in terms of phase contrast improvement and signal-to-noise ratio. The results demonstrate that the ForWaRD algorithm is the most appropriate for phase contrast image restoration among the methods considered; it can effectively restore the lost information of the phase contrast fringes while reducing the noise amplified during Fourier regularization.
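
    Of the three methods compared, Wiener filtering is the simplest to sketch; the snippet below implements the standard frequency-domain form with a constant noise-to-signal ratio, a hand-tuned stand-in for the spectral estimates a full Wiener or ForWaRD implementation would use. All data are synthetic.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Frequency-domain Wiener deconvolution with a constant
    noise-to-signal power ratio `nsr` (a tuning constant here)."""
    H = np.fft.fft2(psf, s=blurred.shape)        # transfer function of the PSF
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)      # Wiener filter
    return np.real(np.fft.ifft2(W * np.fft.fft2(blurred)))

# usage on synthetic data: blur a square, add noise, restore
rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[20:40, 20:40] = 1.0
g = np.exp(-0.5 * ((np.arange(9) - 4) / 1.5) ** 2)
psf = np.outer(g, g); psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
restored = wiener_deconvolve(blurred + 0.01 * rng.standard_normal(img.shape), psf)
```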

  18. A new scoring function for top-down spectral deconvolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kou, Qiang; Wu, Si; Liu, Xiaowen

    2014-12-18

    Background: Top-down mass spectrometry plays an important role in intact protein identification and characterization. Top-down mass spectra are more complex than bottom-up mass spectra because they often contain many isotopomer envelopes from highly charged ions, which may overlap with one another. As a result, spectral deconvolution, which converts a complex top-down mass spectrum into a monoisotopic mass list, is a key step in top-down spectral interpretation. Results: In this paper, we propose a new scoring function, L-score, for evaluating isotopomer envelopes. By combining L-score with MS-Deconv, a new software tool, MS-Deconv+, was developed for top-down spectral deconvolution. Experimental results showed that MS-Deconv+ outperformed existing software tools in top-down spectral deconvolution. Conclusions: L-score shows high discriminative ability in identification of isotopomer envelopes. Using L-score, MS-Deconv+ reports many correct monoisotopic masses missed by other software tools, which are valuable for proteoform identification and characterization.

  19. Bayesian Deconvolution for Angular Super-Resolution in Forward-Looking Scanning Radar

    PubMed Central

    Zha, Yuebo; Huang, Yulin; Sun, Zhichao; Wang, Yue; Yang, Jianyu

    2015-01-01

    Scanning radar is of notable importance for ground surveillance, terrain mapping and disaster rescue. However, the angular resolution of a scanning radar image is poor compared to the achievable range resolution. This paper presents a deconvolution algorithm for angular super-resolution in scanning radar based on Bayesian theory, in which angular super-resolution is realized by solving the corresponding deconvolution problem under the maximum a posteriori (MAP) criterion. The algorithm considers the noise to be composed of two mutually independent parts, i.e., a Gaussian signal-independent component and a Poisson signal-dependent component. In addition, the Laplace distribution is used to represent the prior information about the targets, under the assumption that the radar image of interest can be represented by the dominant scatterers in the scene. Experimental results demonstrate that the proposed deconvolution algorithm achieves higher precision for angular super-resolution than conventional algorithms, such as the Tikhonov regularization algorithm, the Wiener filter and the Richardson–Lucy algorithm. PMID:25806871
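
    For orientation, the Richardson–Lucy baseline that the proposed MAP algorithm is compared against takes only a few lines; this 1-D version treats the radar's azimuth beam pattern as the convolution kernel. The beam shape and scene are illustrative, not from the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(y, psf, iters=30):
    """Classical 1-D Richardson-Lucy deconvolution (one of the baselines
    the paper compares against, not its MAP method)."""
    x = np.full_like(y, max(y.mean(), 1e-6))
    psf_T = psf[::-1]                    # flipped kernel acts as the adjoint
    for _ in range(iters):
        est = fftconvolve(x, psf, mode="same")
        x *= fftconvolve(y / np.maximum(est, 1e-12), psf_T, mode="same")
    return x

# usage: sharpen an angular profile smeared by a Gaussian beam pattern
angle = np.linspace(-1.0, 1.0, 201)
beam = np.exp(-0.5 * (angle / 0.08) ** 2); beam /= beam.sum()
scene = np.zeros_like(angle); scene[[60, 120, 150]] = [1.0, 0.8, 0.6]
measured = fftconvolve(scene, beam, mode="same")
sharp = richardson_lucy(measured, beam)   # echoes re-concentrate near 60, 120, 150
```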

  20. Towards robust deconvolution of low-dose perfusion CT: Sparse perfusion deconvolution using online dictionary learning

    PubMed Central

    Fang, Ruogu; Chen, Tsuhan; Sanelli, Pina C.

    2014-01-01

    Computed tomography perfusion (CTP) is an important functional imaging modality in the evaluation of cerebrovascular diseases, particularly in acute stroke and vasospasm. However, the post-processed parametric maps of blood flow tend to be noisy, especially in low-dose CTP, due to the noisy contrast enhancement profile and the oscillatory nature of the results generated by the current computational methods. In this paper, we propose a robust sparse perfusion deconvolution method (SPD) to estimate cerebral blood flow in CTP performed at low radiation dose. We first build a dictionary from high-dose perfusion maps using online dictionary learning and then perform deconvolution-based hemodynamic parameter estimation on the low-dose CTP data. Our method is validated on clinical data of patients with normal and pathological CBF maps. The results show that we achieve superior performance compared with existing methods, and potentially improve the differentiation between normal and ischemic tissue in the brain. PMID:23542422

  1. Task Parallel Incomplete Cholesky Factorization using 2D Partitioned-Block Layout

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kyungjoo; Rajamanickam, Sivasankaran; Stelle, George Widgery

    We introduce a task-parallel algorithm for sparse incomplete Cholesky factorization that utilizes a 2D sparse partitioned-block layout of a matrix. Our factorization algorithm follows the idea of algorithms-by-blocks by using the block layout. The algorithms-by-blocks approach induces a task graph for the factorization. These tasks are interrelated through their data dependences in the factorization algorithm. To process the tasks on various manycore architectures in a portable manner, we also present a portable tasking API that incorporates different tasking backends and device-specific features using an open-source framework for manycore platforms, i.e., Kokkos. A performance evaluation is presented on both Intel Sandybridge and Xeon Phi platforms for matrices from the University of Florida sparse matrix collection to illustrate the merits of the proposed task-based factorization. Experimental results demonstrate that our task-parallel implementation delivers about 26.6x speedup (geometric mean) over single-threaded incomplete Cholesky-by-blocks and 19.2x speedup over a serial Cholesky implementation that carries no tasking overhead, using 56 threads on the Intel Xeon Phi processor, for sparse matrices arising from various application problems.
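
    To illustrate what "algorithms-by-blocks" means concretely, here is a sequential, dense Cholesky-by-blocks sketch: each tile-level operation (factor, triangular solve, trailing update) corresponds to one task node in the paper's task graph. The incomplete (sparse) variant, the 2D partitioned layout, and the Kokkos-based parallel execution are all omitted.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def cholesky_by_blocks(A, nb):
    """Right-looking dense Cholesky on nb x nb tiles; each tile operation
    would be one task in a task-parallel runtime."""
    A = A.copy()
    n = A.shape[0]
    for k in range(0, n, nb):
        e = min(k + nb, n)
        A[k:e, k:e] = cholesky(A[k:e, k:e], lower=True)          # POTRF task
        if e < n:
            A[e:, k:e] = solve_triangular(A[k:e, k:e], A[e:, k:e].T,
                                          lower=True).T          # TRSM task
            A[e:, e:] -= A[e:, k:e] @ A[e:, k:e].T               # SYRK/GEMM tasks
    return np.tril(A)

# usage: verify against NumPy on a random SPD matrix
M = np.random.default_rng(0).standard_normal((12, 12))
A = M @ M.T + 12 * np.eye(12)
print(np.allclose(cholesky_by_blocks(A, 4), np.linalg.cholesky(A)))
```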

  2. Distribution and character of upper Mesozoic subduction complexes along the west coast of North America

    USGS Publications Warehouse

    Jones, D.L.; Blake, M.C.; Bailey, E.H.; McLaughlin, R.J.

    1978-01-01

    Structurally complex sequences of sedimentary, volcanic, and intrusive igneous rocks characterize a nearly continuous narrow band along the Pacific coast of North America from Baja California, Mexico to southern Alaska. They occur in two modes: (1) as complexly folded but coherent sequences of graywacke and argillite that locally exhibit blueschist-grade metamorphism, and (2) as melanges containing large blocks of graywacke, chert, volcanic and plutonic rocks, high-grade schist, and limestone in a highly sheared pelitic, cherty, or sandstone matrix. Fossils from the coherent graywacke sequences range in age from late Jurassic to Eocene; fossils from limestone blocks in the melanges range in age from mid-Paleozoic to middle Cretaceous. Fossils from the matrix surrounding the blocks, however, are of Jurassic, Cretaceous, and rarely, Tertiary age, indicating that fossils from the blocks cannot be used to date the time of formation of the melanges. Both the deformation of the graywacke, with accompanying blueschist metamorphism, as well as the formation of the melanges, are believed to be the result of late Mesozoic and early Tertiary subduction. The origin of the melanges, particularly the emplacement of exotic tectonic blocks, is not understood. © 1978.

  3. Development of Ordered, Porous (Sub-25 nm Dimensions) Surface Membrane Structures Using a Block Copolymer Approach.

    PubMed

    Ghoshal, Tandra; Holmes, Justin D; Morris, Michael A

    2018-05-08

    In an effort to develop block copolymer lithography for creating high-aspect-ratio vertical pore arrangements in a substrate surface, we have used a microphase-separated poly(ethylene oxide)-b-polystyrene (PEO-b-PS) block copolymer (BCP) thin film in which (most unusually) PS, not PEO, is the cylinder-forming phase and PEO is the majority block. Compared to previous work, we can amplify etch contrast by the inclusion of hard mask material into the matrix block, allowing the cylinder polymer to be removed and the exposed substrate to be subjected to deep etching, thereby generating uniform, well-arranged, sub-25 nm cylindrical nanopore arrays. Briefly, selective metal ion inclusion into the PEO matrix and subsequent processing (etching/modification) was applied to create iron oxide nanohole arrays. The oxide nanoholes (22 nm diameter) were cylindrical, of uniform diameter, and mimicked the original BCP nanopatterns. The oxide nanohole network is demonstrated as an etch-resistant mask for fabricating ultra-dense, well-ordered silicon nanopore arrays with good sidewall profiles on the substrate surface through a pattern-transfer approach. The Si nanopores have uniform diameter and smooth sidewalls throughout their depth. The depth of the porous structure can be controlled via the etch process.

  4. Waveform LiDAR processing: comparison of classic approaches and optimized Gold deconvolution to characterize vegetation structure and terrain elevation

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Popescu, S. C.; Krause, K.

    2016-12-01

    Waveform Light Detection and Ranging (LiDAR) data have advantages over discrete-return LiDAR data in accurately characterizing vegetation structure. However, we lack a comprehensive understanding of waveform data processing approaches under different topography and vegetation conditions. The objective of this paper is to highlight a novel deconvolution algorithm, the Gold algorithm, for processing waveform LiDAR data with optimal deconvolution parameters. Further, we present a comparative study of waveform processing methods to provide insight into selecting an approach for a given combination of vegetation and terrain characteristics. We employed two waveform processing methods: 1) direct decomposition, and 2) deconvolution followed by decomposition. In the second method, we utilized two deconvolution algorithms - the Richardson-Lucy (RL) algorithm and the Gold algorithm. Comprehensive and quantitative comparisons were conducted in terms of the number of detected echoes, position accuracy, the bias of the end products (such as the digital terrain model (DTM) and canopy height model (CHM)) relative to discrete-return LiDAR data, and parameter uncertainty for these end products obtained from the different methods. This study was conducted at three study sites that include diverse ecological regions and vegetation and elevation gradients. Results demonstrate that both deconvolution algorithms are sensitive to the pre-processing steps of the input data. The deconvolution and decomposition method is more capable of detecting hidden echoes with a lower false echo detection rate, especially the Gold algorithm. Compared to the reference data, all approaches generate satisfactory accuracy with small mean spatial differences (<1.22 m for DTMs, <0.77 m for CHMs) and root mean square error (RMSE) (<1.26 m for DTMs, <1.93 m for CHMs). More specifically, the Gold algorithm is superior to the others with a smaller RMSE (<1.01 m), while the direct decomposition approach works better in terms of the percentage of spatial differences within 0.5 and 1 m. The parameter uncertainty analysis demonstrates that the Gold algorithm outperforms the other approaches in dense vegetation areas, with the smallest RMSE, and the RL algorithm performs better in sparse vegetation areas in terms of RMSE.
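
    The Gold algorithm highlighted here is a multiplicative, non-negativity-preserving iteration; a minimal dense-matrix version is sketched below on a synthetic pair of overlapping echoes. The waveform pre-processing and parameter optimization the paper emphasizes are not shown, and the system response matrix is invented for illustration.

```python
import numpy as np

def gold_deconvolution(y, A, iters=200):
    """Gold's iterative deconvolution for y = A @ x: a multiplicative
    update that keeps x non-negative (a sketch of the algorithm family
    named in the abstract, without the authors' tuning)."""
    ATy = A.T @ y
    ATA = A.T @ A
    x = np.full(A.shape[1], y.mean())
    for _ in range(iters):
        x *= ATy / np.maximum(ATA @ x, 1e-12)
    return x

# usage: recover two overlapping echoes from a Gaussian system response
t = np.arange(200)
pulse = np.exp(-0.5 * ((t - 100) / 6.0) ** 2)
A = np.array([np.roll(pulse, k - 100) for k in t]).T   # columns = shifted pulses
x_true = np.zeros(200); x_true[[80, 120]] = [1.0, 0.7]
y = A @ x_true
x_hat = gold_deconvolution(y, A)
print(np.argmax(x_hat))   # strongest recovered echo -- should land near index 80
```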

  5. Advanced Background Subtraction Applied to Aeroacoustic Wind Tunnel Testing

    NASA Technical Reports Server (NTRS)

    Bahr, Christopher J.; Horne, William C.

    2015-01-01

    An advanced form of background subtraction is presented and applied to aeroacoustic wind tunnel data. A variant of this method has seen use in other fields such as climatology and medical imaging. The technique, based on an eigenvalue decomposition of the background noise cross-spectral matrix, is robust against situations where isolated background auto-spectral levels are measured to be higher than levels of combined source and background signals. It also provides an alternate estimate of the cross-spectrum, which previously might have poor definition for low signal-to-noise ratio measurements. Simulated results indicate similar performance to conventional background subtraction when the subtracted spectra are weaker than the true contaminating background levels. Superior performance is observed when the subtracted spectra are stronger than the true contaminating background levels. Experimental results show limited success in recovering signal behavior for data where conventional background subtraction fails. They also demonstrate the new subtraction technique's ability to maintain a proper coherence relationship in the modified cross-spectral matrix. Beamforming and deconvolution results indicate the method can successfully separate sources. Results also show a reduced need for the use of diagonal removal in phased array processing, at least for the limited data sets considered.
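
    A hedged sketch of one core ingredient, subtracting a background cross-spectral matrix while keeping the result positive semidefinite via its eigendecomposition, is given below. This is a plausible reading of the eigenvalue-based step, not the paper's exact algorithm.

```python
import numpy as np

def subtract_background_csm(C_total, C_bg):
    """Subtract a background cross-spectral matrix and project the result
    back onto the positive semidefinite cone, so over-subtracted (negative)
    eigenvalues cannot corrupt downstream beamforming. A sketch of one
    plausible eigenvalue-based formulation only."""
    D = C_total - C_bg
    w, V = np.linalg.eigh(D)            # Hermitian eigendecomposition
    w = np.maximum(w, 0.0)              # clip negative eigenvalues
    return (V * w) @ V.conj().T         # reassemble a valid PSD matrix
```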

  6. Properties Of Carbon/Carbon and Carbon/Phenolic Composites

    NASA Technical Reports Server (NTRS)

    Mathis, John R.; Canfield, A. R.

    1993-01-01

    Report presents data on physical properties of carbon-fiber-reinforced carbon-matrix and phenolic-matrix composite materials. Based on tests conducted on panels, cylinders, blocks, and formed parts. Data used by designers to analyze thermal-response and stress levels and develop structural systems ensuring high reliability at minimum weight.

  7. Processing of single channel air and water gun data for imaging an impact structure at the Chesapeake Bay

    USGS Publications Warehouse

    Lee, Myung W.

    1999-01-01

    Processing of 20 seismic profiles acquired in the Chesapeake Bay area aided in analysis of the details of an impact structure and allowed more accurate mapping of the depression caused by a bolide impact. Particular emphasis was placed on enhancement of seismic reflections from the basement. Application of wavelet deconvolution after a second zero-crossing predictive deconvolution improved the resolution of shallow reflections, and application of a match filter enhanced the basement reflections. The use of deconvolution and match filtering with a two-dimensional signal enhancement technique (F-X filtering) significantly improved the interpretability of seismic sections.

  8. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel: III. Convolution and deconvolution.

    PubMed

    Langenbucher, Frieder

    2003-11-01

    Convolution and deconvolution are the classical in-vitro-in-vivo correlation tools to describe the relationship between input and weighting/response in a linear system, where the input represents the drug release in vitro and the weighting/response any body response in vivo. While functional treatment, e.g. in terms of a polyexponential or Weibull distribution, is more appropriate for general survey or prediction, numerical algorithms are useful for treating actual experimental data. Deconvolution is not considered an algorithm of its own, but the inversion of a corresponding convolution. MS Excel is shown to be a useful tool for all these applications.
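
    The article's central point, that numerical deconvolution is simply the inversion of the corresponding convolution, can be made concrete outside Excel in a few lines: discrete convolution with a known input u is a lower-triangular Toeplitz system, so the weighting function is recovered by forward substitution. The sample numbers below are invented.

```python
import numpy as np

def deconvolve_discrete(r, u):
    """Recover w from r = numpy.convolve(u, w) by forward substitution on
    the lower-triangular Toeplitz system; requires u[0] != 0."""
    n = len(r) - len(u) + 1
    w = np.zeros(n)
    for k in range(n):
        acc = sum(u[k - j] * w[j] for j in range(max(0, k - len(u) + 1), k))
        w[k] = (r[k] - acc) / u[0]
    return w

u = np.array([0.3, 0.5, 0.2])                  # "input" (e.g., in vitro release)
w_true = np.array([1.0, 0.8, 0.6, 0.4, 0.2])   # "weighting" to be recovered
r = np.convolve(u, w_true)                     # simulated "response"
print(np.round(deconvolve_discrete(r, u), 3))  # -> [1.  0.8 0.6 0.4 0.2]
```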

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khin, J.A.

    Since reopening to foreign operators in 1989, companies have secured concessions and begun active exploration programs. This paper reports on their progress: Yukong Oil (Block C) spudded well Indaw YK-1 last December and continued drilling below 8,500 ft; the well encountered frequent gas-cut mud as well as lost circulation. BHP (Block H) spudded the Kawliya-1 in March this year and drilled to 6,500 ft; the well was dry and abandoned, and BHP plans to drill another well this year. Unocal (Block F) spudded its first well, the Kandaw-1, in May and plans to drill to 14,500 ft. Shell (Block G) began its first well in June; Shell's drilling program will consist of four to six wells. Idemitsu (Block D) also spudded its first well in June. PetroCanada (Block E) plans to spud a well by December, with a target depth of 12,000 ft.

  10. GPU-accelerated algorithms for compressed signals recovery with application to astronomical imagery deblurring

    NASA Astrophysics Data System (ADS)

    Fiandrotti, Attilio; Fosson, Sophie M.; Ravazzi, Chiara; Magli, Enrico

    2018-04-01

    Compressive sensing promises to enable bandwidth-efficient on-board compression of astronomical data by lifting the encoding complexity from the source to the receiver. The signal is recovered off-line, exploiting the parallel computation capabilities of GPUs to speed up the reconstruction process. However, inherent GPU hardware constraints limit the size of the recoverable signal and the speedup practically achievable. In this work, we design parallel algorithms that exploit the properties of circulant matrices for efficient GPU-accelerated sparse signal recovery. Our approach reduces the memory requirements, allowing us to recover very large signals with limited memory. In addition, it achieves a tenfold signal recovery speedup thanks to ad-hoc parallelization of matrix-vector multiplications and matrix inversions. Finally, we practically demonstrate our algorithms in a typical application of circulant matrices: deblurring a sparse astronomical image in the compressed domain.
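
    The key property being exploited is that a circulant matrix is diagonalized by the DFT, so products and (regularized) inverses never require forming the matrix; a minimal NumPy sketch of both operations follows. This shows the mathematical trick only; the paper's contribution is its GPU parallelization and memory management.

```python
import numpy as np

def circulant_matvec(c, x):
    """y = C x for the circulant matrix C with first column c, computed in
    O(n log n) via the FFT diagonalization -- the full matrix is never formed."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

def circulant_solve(c, y, eps=1e-3):
    """Regularized inverse (Tikhonov in the Fourier domain): a minimal
    sketch of deblurring with a circulant blur operator."""
    C = np.fft.fft(c)
    X = np.conj(C) * np.fft.fft(y) / (np.abs(C) ** 2 + eps)
    return np.real(np.fft.ifft(X))

# usage: blur a sparse signal with a circulant Gaussian kernel, then deblur
n = 256
g = np.exp(-0.5 * ((np.arange(n) - n // 2) / 3.0) ** 2)
c = np.roll(g / g.sum(), -(n // 2))        # first column of the blur circulant
x = np.zeros(n); x[[40, 49]] = [1.0, 0.6]
x_hat = circulant_solve(c, circulant_matvec(c, x))
print(np.argmax(x_hat))   # strongest spike recovered near index 40
```

    The same diagonalization extends to block-circulant matrices with circulant blocks via the 2-D FFT, which is what makes circulant structure attractive for image deblurring.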

  11. Application of blocking diagnosis methods to General Circulation Models. Part I: a novel detection scheme

    NASA Astrophysics Data System (ADS)

    Barriopedro, D.; García-Herrera, R.; Trigo, R. M.

    2010-12-01

    This paper aims to provide a new blocking definition with applicability to observations and model simulations. An updated review of previous blocking detection indices is provided and some of their implications and caveats discussed. A novel blocking index is proposed by reconciling two traditional approaches based on anomaly and absolute flows. Blocks are considered from a complementary perspective as a signature in the anomalous height field capable of reversing the meridional jet-based height gradient in the total flow. The method succeeds in identifying 2-D persistent anomalies associated with a weather regime in the total flow with blockage of the westerlies. The new index accounts for the duration, intensity, extension, propagation, and spatial structure of a blocking event. In spite of its increased complexity, the detection efficiency of the method is improved without hampering the computational time. Furthermore, some misleading identification problems and artificial assumptions resulting from previous single blocking indices are avoided with the new approach. The characteristics of blocking for 40 years of reanalysis (1950-1989) over the Northern Hemisphere are described from the perspective of the new definition and compared to those resulting from two standard blocking indices and different critical thresholds. As compared to single approaches, the novel index shows a better agreement with reported proxies of blocking activity, namely climatological regions of simultaneous wave amplification and maximum band-pass filtered height standard deviation. An additional asset of the method is its adaptability to different data sets. As critical thresholds are specific to the data set employed, the method is useful for observations and model simulations of different resolutions, temporal lengths and time-variant basic states, optimizing its value as a tool for model validation. Special attention has been paid to the design of an objective scheme easily applicable to General Circulation Models, where observational thresholds may be unsuitable due to the presence of model bias. Part II of this study deals with a specific implementation of this novel method to simulations of the ECHO-G global climate model.

  12. Dynamic deformation image de-blurring and image processing for digital imaging correlation measurement

    NASA Astrophysics Data System (ADS)

    Guo, X.; Li, Y.; Suo, T.; Liu, H.; Zhang, C.

    2017-11-01

    This paper proposes a method for de-blurring images captured during the dynamic deformation of materials. De-blurring is achieved with a dynamics-based approach that estimates the Point Spread Function (PSF) over the camera exposure window. The deconvolution process, which involves iterative matrix calculations over pixels, is performed on the GPU to decrease the time cost. Compared to the Gauss method and the Lucy-Richardson method, it gives the best image restoration result. The proposed method has been evaluated using the Hopkinson bar loading system. In comparison to the blurry image, the proposed method successfully restores the image. Image processing applications also demonstrate that the de-blurring method can improve the accuracy and stability of the digital imaging correlation measurement.

  13. An Approximate Approach to Automatic Kernel Selection.

    PubMed

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
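
    In the single-level case the construction is easy to demonstrate: a stationary kernel on an evenly spaced grid gives a Toeplitz matrix, and wrapping the lags periodically yields a circulant approximation whose entire spectrum is one FFT of its first column. The kernel and grid below are illustrative; the paper's multilevel construction and selection criteria go beyond this sketch.

```python
import numpy as np

n = 128
lags = np.minimum(np.arange(n), n - np.arange(n))    # periodic (wrapped) distances
c = np.exp(-0.5 * (lags / 5.0) ** 2)                 # first column of circulant kernel
eigvals = np.real(np.fft.fft(c))                     # all n eigenvalues in O(n log n)

# compare with the exact (Toeplitz) kernel matrix spectrum
K = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 5.0) ** 2)
print(np.sort(eigvals)[-3:], np.sort(np.linalg.eigvalsh(K))[-3:])
```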

  14. Harmony of spinning conformal blocks

    NASA Astrophysics Data System (ADS)

    Schomerus, Volker; Sobko, Evgeny; Isachenkov, Mikhail

    2017-03-01

    Conformal blocks for correlation functions of tensor operators play an increasingly important role for the conformal bootstrap programme. We develop a universal approach to such spinning blocks through the harmonic analysis of certain bundles over a coset of the conformal group. The resulting Casimir equations are given by a matrix version of the Calogero-Sutherland Hamiltonian that describes the scattering of interacting spinning particles in a 1-dimensional external potential. The approach is illustrated in several examples including fermionic seed blocks in 3D CFT where they take a very simple form.

  15. Direction of Arrival Estimation for MIMO Radar via Unitary Nuclear Norm Minimization

    PubMed Central

    Wang, Xianpeng; Huang, Mengxing; Wu, Xiaoqin; Bi, Guoan

    2017-01-01

    In this paper, we consider the direction of arrival (DOA) estimation issue of noncircular (NC) sources in multiple-input multiple-output (MIMO) radar and propose a novel unitary nuclear norm minimization (UNNM) algorithm. In the proposed method, the noncircular properties of signals are used to double the virtual array aperture, and real-valued data are obtained by utilizing a unitary transformation. A real-valued block sparse model is then established based on a novel over-complete dictionary, and a UNNM algorithm is formulated for recovering the block-sparse matrix. In addition, the real-valued NC-MUSIC spectrum is used to design a weight matrix for reweighting the nuclear norm minimization to achieve enhanced sparsity of the solutions. Finally, the DOA is estimated by searching the non-zero blocks of the recovered matrix. Because it uses the noncircular properties of the signals to extend the virtual array aperture and an additional real structure to suppress the noise, the proposed method provides better performance compared with conventional sparse-recovery-based algorithms. Furthermore, the proposed method can handle the case of underdetermined DOA estimation. Simulation results show the effectiveness and advantages of the proposed method. PMID:28441770

  16. High quality image-pair-based deblurring method using edge mask and improved residual deconvolution

    NASA Astrophysics Data System (ADS)

    Cui, Guangmang; Zhao, Jufeng; Gao, Xiumin; Feng, Huajun; Chen, Yueting

    2017-04-01

    Image deconvolution is a challenging task in the field of image processing. Using an image pair can provide a better restored image than deblurring from a single blurred image. In this paper, a high-quality image-pair-based deblurring method is presented using an improved RL algorithm and a gain-controlled residual deconvolution technique. The input image pair includes a non-blurred noisy image and a blurred image captured of the same scene. With the estimated blur kernel, an improved RL deblurring method based on an edge mask is introduced to obtain a preliminary deblurring result with effective ringing suppression and detail preservation. The preliminary deblurring result then serves as the basic latent image, and gain-controlled residual deconvolution is utilized to recover the residual image. A saliency weight map is computed as the gain map to further control the ringing effects around edge areas in the residual deconvolution process. The final deblurring result is obtained by adding the preliminary deblurring result to the recovered residual image. An optical experimental vibration platform was set up to verify the applicability and performance of the proposed algorithm. Experimental results demonstrate that the proposed deblurring framework achieves superior performance in both subjective and objective assessments and is widely applicable to image deblurring.

  17. Windprofiler optimization using digital deconvolution procedures

    NASA Astrophysics Data System (ADS)

    Hocking, W. K.; Hocking, A.; Hocking, D. G.; Garbanzo-Salas, M.

    2014-10-01

    Digital improvements to data acquisition procedures used for windprofiler radars have the potential for improving the height coverage at optimum resolution, and permit improved height resolution. A few newer systems already use this capability. Real-time deconvolution procedures offer even further optimization, and this has not been effectively employed in recent years. In this paper we demonstrate the advantages of combining these features, with particular emphasis on the advantages of real-time deconvolution. Using several multi-core CPUs, we have been able to achieve speeds of up to 40 GHz from a standard commercial motherboard, allowing data to be digitized and processed without the need for any type of hardware except for a transmitter (and associated drivers), a receiver and a digitizer. No Digital Signal Processor chips are needed, allowing great flexibility with analysis algorithms. By using deconvolution procedures, we have then been able to not only optimize height resolution, but also have been able to make advances in dealing with spectral contaminants like ground echoes and other near-zero-Hz spectral contamination. Our results also demonstrate the ability to produce fine-resolution measurements, revealing small-scale structures within the backscattered echoes that were previously not possible to see. Resolutions of 30 m are possible for VHF radars. Furthermore, our deconvolution technique allows the removal of range-aliasing effects in real time, a major bonus in many instances. Results are shown using new radars in Canada and Costa Rica.

  18. Commencement Bay Studies Phase II, Environmental Impacts Assessment.

    DTIC Science & Technology

    1983-10-01

    Approved for public release; distribution unlimited. Keywords: salmonids, wetlands, aesthetics, City of Tacoma, marine water quality, land and water use, Port of Tacoma. [The report documentation page is garbled in the source; the abstract begins "Dames and Moore assessed the..." and is truncated.]

  19. A New Cell Block Method for Multiple Immunohistochemical Analysis of Circulating Tumor Cells in Patients with Liver Cancer.

    PubMed

    Nam, Soo Jeong; Yeo, Hyun Yang; Chang, Hee Jin; Kim, Bo Hyun; Hong, Eun Kyung; Park, Joong-Won

    2016-10-01

    We developed a new method of detecting circulating tumor cells (CTCs) in liver cancer patients by constructing cell blocks from peripheral blood cells, including CTCs, followed by multiple immunohistochemical analysis. Cell blocks were constructed from the nucleated cell pellets of peripheral blood after removal of red blood cells. The blood cell blocks were obtained from 29 patients with liver cancer, and from healthy donor blood spiked with seven cell lines. The cell blocks and corresponding tumor tissues were immunostained with antibodies to seven markers: cytokeratin (CK), epithelial cell adhesion molecule (EpCAM), epithelial membrane antigen (EMA), CK18, α-fetoprotein (AFP), Glypican3, and HepPar1. The average recovery rate of spiked SW620 cells from blood cell blocks was 91%. CTCs were detected in 14 out of 29 patients (48.3%): 11/23 hepatocellular carcinomas (HCC), 1/2 cholangiocarcinomas (CC), 1/1 combined HCC-CC, and 1/3 metastatic cancers. CTCs from the 14 patients were positive for EpCAM (57.1%), EMA (42.9%), AFP (21.4%), CK18 (14.3%), Glypican3 and CK (7.1% each), and HepPar1 (0%). Patients with HCC expressed EpCAM, EMA, CK18, and AFP in tissue and/or CTCs, whereas CK, HepPar1, and Glypican3 were expressed only in tissue. Only EMA showed a significant association between expression in CTCs and in tissue. CTC detection was associated with higher T stage and portal vein invasion in HCC patients. This cell block method allows cytologic detection and multiple immunohistochemical analysis of CTCs. Our results show that tissue biomarkers of HCC may not be useful for the detection of CTCs. EpCAM could be a candidate marker for CTCs in patients with HCC.

  20. Robust MR-based approaches to quantifying white matter structure and structure/function alterations in Huntington's disease

    PubMed Central

    Steventon, Jessica J.; Trueman, Rebecca C.; Rosser, Anne E.; Jones, Derek K.

    2016-01-01

    Background: Huge advances have been made in understanding and addressing confounds in diffusion MRI data to quantify white matter microstructure. However, there has been a lag in applying these advances in clinical research. Some confounds are more pronounced in HD which impedes data quality and interpretability of patient-control differences. This study presents an optimised analysis pipeline and addresses specific confounds in a HD patient cohort. Method: 15 HD gene-positive and 13 matched control participants were scanned on a 3T MRI system with two diffusion MRI sequences. An optimised post processing pipeline included motion, eddy current and EPI correction, rotation of the B matrix, free water elimination (FWE) and tractography analysis using an algorithm capable of reconstructing crossing fibres. The corpus callosum was examined using both a region-of-interest and a deterministic tractography approach, using both conventional diffusion tensor imaging (DTI)-based and spherical deconvolution analyses. Results: Correcting for CSF contamination significantly altered microstructural metrics and the detection of group differences. Reconstructing the corpus callosum using spherical deconvolution produced a more complete reconstruction with greater sensitivity to group differences, compared to DTI-based tractography. Tissue volume fraction (TVF) was reduced in HD participants and was more sensitive to disease burden compared to DTI metrics. Conclusion: Addressing confounds in diffusion MR data results in more valid, anatomically faithful white matter tract reconstructions with reduced within-group variance. TVF is recommended as a complementary metric, providing insight into the relationship with clinical symptoms in HD not fully captured by conventional DTI metrics. PMID:26335798

  1. Robust MR-based approaches to quantifying white matter structure and structure/function alterations in Huntington's disease.

    PubMed

    Steventon, Jessica J; Trueman, Rebecca C; Rosser, Anne E; Jones, Derek K

    2016-05-30

    Huge advances have been made in understanding and addressing confounds in diffusion MRI data to quantify white matter microstructure. However, there has been a lag in applying these advances in clinical research. Some confounds are more pronounced in HD which impedes data quality and interpretability of patient-control differences. This study presents an optimised analysis pipeline and addresses specific confounds in a HD patient cohort. 15 HD gene-positive and 13 matched control participants were scanned on a 3T MRI system with two diffusion MRI sequences. An optimised post processing pipeline included motion, eddy current and EPI correction, rotation of the B matrix, free water elimination (FWE) and tractography analysis using an algorithm capable of reconstructing crossing fibres. The corpus callosum was examined using both a region-of-interest and a deterministic tractography approach, using both conventional diffusion tensor imaging (DTI)-based and spherical deconvolution analyses. Correcting for CSF contamination significantly altered microstructural metrics and the detection of group differences. Reconstructing the corpus callosum using spherical deconvolution produced a more complete reconstruction with greater sensitivity to group differences, compared to DTI-based tractography. Tissue volume fraction (TVF) was reduced in HD participants and was more sensitive to disease burden compared to DTI metrics. Addressing confounds in diffusion MR data results in more valid, anatomically faithful white matter tract reconstructions with reduced within-group variance. TVF is recommended as a complementary metric, providing insight into the relationship with clinical symptoms in HD not fully captured by conventional DTI metrics. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  2. Combining Biomimetic Block Copolymer Worms with an Ice-Inhibiting Polymer for the Solvent-Free Cryopreservation of Red Blood Cells.

    PubMed

    Mitchell, Daniel E; Lovett, Joseph R; Armes, Steven P; Gibson, Matthew I

    2016-02-18

    The first fully synthetic polymer-based approach for red-blood-cell cryopreservation without the need for any (toxic) organic solvents is reported. Highly hydroxylated block copolymer worms are shown to be a suitable replacement for hydroxyethyl starch as an extracellular matrix for red blood cells. When used alone, the worms are not a particularly effective preservative. However, when combined with poly(vinyl alcohol), a known ice-recrystallization inhibitor, a remarkable additive cryopreservative effect is observed that matches the performance of hydroxyethyl starch. Moreover, these block copolymer worms enable post-thaw gelation by simply warming to 20 °C. This approach offers a new solution for both the storage and transport of red blood cells and also a convenient matrix for subsequent 3D cell cultures. © 2016 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  3. Wolbachia Blocks Currently Circulating Zika Virus Isolates in Brazilian Aedes aegypti Mosquitoes.

    PubMed

    Dutra, Heverton Leandro Carneiro; Rocha, Marcele Neves; Dias, Fernando Braga Stehling; Mansur, Simone Brutman; Caragata, Eric Pearce; Moreira, Luciano Andrade

    2016-06-08

    The recent association of Zika virus with cases of microcephaly has sparked a global health crisis and highlighted the need for mechanisms to combat the Zika vector, Aedes aegypti mosquitoes. Wolbachia pipientis, a bacterial endosymbiont of insects, has recently garnered attention as a mechanism for arbovirus control. Here we report that Aedes aegypti harboring Wolbachia are highly resistant to infection with two currently circulating Zika virus isolates from the recent Brazilian epidemic. Wolbachia-harboring mosquitoes displayed lower viral prevalence and intensity and decreased disseminated infection and, critically, did not carry infectious virus in the saliva, suggesting that viral transmission was blocked. Our data indicate that the use of Wolbachia-harboring mosquitoes could represent an effective mechanism to reduce Zika virus transmission and should be included as part of Zika control strategies. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  4. Factors affecting the stability of drug-loaded polymeric micelles and strategies for improvement

    NASA Astrophysics Data System (ADS)

    Zhou, Weisai; Li, Caibin; Wang, Zhiyu; Zhang, Wenli; Liu, Jianping

    2016-09-01

    Polymeric micelles (PMs) self-assembled from amphiphilic block copolymers have been used as promising nanocarriers for tumor-targeted delivery due to their favorable properties, such as excellent biocompatibility, prolonged circulation time, favorable particle sizes (10-100 nm) for exploiting the enhanced permeability and retention effect, and the possibility of functionalization. However, PMs can be easily destroyed by dilution in body fluids and the absorption of proteins in the systemic circulation, which may induce drug leakage from these micelles before they reach the target sites and compromise the therapeutic effect. This paper reviews the factors that influence the stability of micelles in terms of thermodynamics and kinetics, including the critical micelle concentration of the block copolymers, the glass transition temperature of the hydrophobic segments, and polymer-polymer and polymer-cargo interactions. In addition, some effective strategies to improve the stability of micelles are also summarized.

  5. A study of the parallel algorithm for large-scale DC simulation of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Cortés Udave, Diego Ernesto; Ogrodzki, Jan; Gutiérrez de Anda, Miguel Angel

    Newton-Raphson DC analysis of large-scale nonlinear circuits may be an extremely time-consuming process even if sparse matrix techniques and bypassing of nonlinear model calculations are used. A slight decrease in the time required for this task may be achieved on multi-core, multithreaded computers if the calculation of the mathematical models for the nonlinear elements, as well as the stamp management of the sparse matrix entries, is handled through concurrent processes. This numerical complexity can be further reduced via circuit decomposition and parallel solution of the blocks, taking the bordered-block-diagonal (BBD) matrix structure as a departure point. This block-parallel approach may give a considerable profit, though it is strongly dependent on the system topology and, of course, on the processor type. This contribution presents an easily parallelizable decomposition-based algorithm for DC simulation and provides a detailed study of its effectiveness.
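
    The BBD structure referenced above lends itself to a Schur-complement solve in which each diagonal block is factorized independently. The following minimal Python sketch is our own illustration, not the paper's algorithm: the names, the toy partitioning, and the serial loop are assumptions, and in a real implementation the per-block solves would run concurrently.

```python
import numpy as np

def solve_bbd(blocks, right_borders, bottom_borders, corner, rhs_blocks, rhs_corner):
    """Solve a bordered-block-diagonal system
        [A_1          B_1] [x_1]   [f_1]
        [     ...     ...] [...] = [...]
        [         A_k B_k] [x_k]   [f_k]
        [C_1 ...  C_k  D ] [ y ]   [ g ]
    via the Schur complement S = D - sum_i C_i A_i^{-1} B_i."""
    S, g, cache = corner.copy(), rhs_corner.copy(), []
    for A, B, C, f in zip(blocks, right_borders, bottom_borders, rhs_blocks):
        AinvB = np.linalg.solve(A, B)   # the k block solves are independent,
        Ainvf = np.linalg.solve(A, f)   # hence parallelizable across cores
        S -= C @ AinvB
        g -= C @ Ainvf
        cache.append((AinvB, Ainvf))
    y = np.linalg.solve(S, g)           # small coupled "border" system
    xs = [Ainvf - AinvB @ y for AinvB, Ainvf in cache]
    return xs, y
```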

  6. Treatment of toxic metal aqueous solutions: encapsulation in a phosphate-calcium aluminate matrix.

    PubMed

    Fernández, J M; Navarro-Blasco, I; Duran, A; Sirera, R; Alvarez, J I

    2014-07-01

    Polyphosphate-modified calcium aluminate cement matrices were prepared by using aqueous solutions polluted with toxic metals as mixing water to obtain waste-containing solid blocks with improved management and disposal. Synthetically contaminated waters containing either Pb or Cu or Zn were incorporated into phosphoaluminate cement mortars and the effects of the metal's presence on setting time and mechanical performance were assessed. Sorption and leaching tests were also executed and both retention and release patterns were investigated. For all three metals, high uptake capacities as well as percentages of retention larger than 99.9% were measured. Both Pb and Cu were seen to be largely compatible with this cementitious matrix, rendering the obtained blocks suitable for landfilling or for building purposes. However, Zn spoilt the compressive strength values because of its reaction with hydrogen phosphate anions, hindering the development of the binding matrix. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. On the Dynamics of Austral Heat Waves

    NASA Astrophysics Data System (ADS)

    Risbey, James S.; O'Kane, Terence J.; Monselesan, Didier P.; Franzke, Christian L. E.; Horenko, Illia

    2018-01-01

    This work examines summer heat wave events in four different regions of Australia (southwest, central, southeast, and northeast) to assess similarities and differences in the circulations that precede, accompany, and follow the heat wave events. A series of circulation composites are constructed for days from 10 days prior to 5 days following onset of each heat wave event. The composites of geopotential height anomalies and wave activity flux vectors show that heat waves in southwest and southeast Australia are preceded by coherent wave train structures in the Indian Ocean region, accompanied by blocking in the Australian region (as an amplified node of the wave train structure), and followed by coherent responses of wave train patterns in the Pacific and South America regions. The heat wave blocking high is maintained by convergence of wave activity in a well-defined wave channel. The concentration of wave activity in the block is aided by the formation of a subtropical jet branch and wave barrier on the equatorward side of the block. Heat waves in central and northeast Australia show similar wave train life cycle responses, but with a proximate ridge in the midtroposphere and a trough in the nearby waveguide region. Heat waves in Australia can be viewed as an element of successive expression of the planetary waveguide modes in the Southern Hemisphere and serve as signifiers of organized, active phases of these modes.

  8. Malvinas Current variability from Argo floats and satellite altimetry

    NASA Astrophysics Data System (ADS)

    Artana, Camila; Ferrari, Ramiro; Koenig, Zoé; Saraceno, Martin; Piola, Alberto R.; Provost, Christine

    2016-07-01

    The Malvinas Current (MC) is an offshoot of the Antarctic Circumpolar Current (ACC). Downstream of Drake Passage, the northern fronts of the ACC veer northward, cross over the North Scotia Ridge (NSR) and the Malvinas Plateau, and enter the Argentine Basin. We investigate the variations of the MC circulation between the NSR and 41°S and their possible relations with the ACC circulation using data from Argo floats and satellite altimetry. The data depict meandering and eddy shedding of the northern ACC jets as they cross the NSR. The altimetry fields show that these eddies are trapped, break down, and dissipate over the Malvinas Plateau, suggesting that this region is a hot spot for dissipation of mesoscale variability. Variations of sea level anomalies (SLA) across the NSR do not impact the MC further north, except for intra-seasonal variability associated with coastal trapped waves. Altimetry and float trajectories show events during which a large fraction of the MC is cut off from the ACC. Blocking events at around 48.5°S are a recurrent feature of the MC circulation. Over the 23 year altimetry record, we detected 26 events during which the MC surface transport at 48.5°S was reduced to less than half its long-term mean. Blocking events last from 10 to 35 days and do not present any significant trend. These events were tracked back to positive SLA that built up over the Argentine Abyssal Plain. Future work is needed to understand the processes responsible for these blocking events.

  9. Block recursive LU preconditioners for the thermally coupled incompressible inductionless MHD problem

    NASA Astrophysics Data System (ADS)

    Badia, Santiago; Martín, Alberto F.; Planas, Ramon

    2014-10-01

    The thermally coupled incompressible inductionless magnetohydrodynamics (MHD) problem models the flow of an electrically charged fluid under the influence of an external electromagnetic field with thermal coupling. This system of partial differential equations is strongly coupled and highly nonlinear for real cases of interest. Therefore, fully implicit time integration schemes are very desirable in order to capture the different physical scales of the problem at hand. However, solving the multiphysics linear systems of equations resulting from such algorithms is a very challenging task which requires efficient and scalable preconditioners. In this work, a new family of recursive block LU preconditioners is designed and tested for solving the thermally coupled inductionless MHD equations. These preconditioners are obtained after splitting the fully coupled matrix into one-physics problems for every variable (velocity, pressure, current density, electric potential and temperature) that can be optimally solved, e.g., using preconditioned domain decomposition algorithms. The main idea is to arrange the original matrix into an (arbitrary) 2 × 2 block matrix, and consider an LU preconditioner obtained by approximating the corresponding Schur complement. For every one of the diagonal blocks in the LU preconditioner, if it involves more than one type of unknowns, we proceed the same way in a recursive fashion. This approach is stated in an abstract way, and can be straightforwardly applied to other multiphysics problems. Further, we precisely explain a flexible and general software design for the code implementation of this type of preconditioners.
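
    To make the 2 x 2 block LU construction concrete, here is a minimal Python sketch of one non-recursive level of such a preconditioner; the diagonal lumping used to approximate the Schur complement is our placeholder choice, not the approximation proposed in the paper.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, spsolve

def block_lu_preconditioner(A, B, C, D):
    """Preconditioner for K = [[A, B], [C, D]] built on the factorization
    K = [[A, 0], [C, S]] @ [[I, A^{-1}B], [0, I]] with S = D - C A^{-1} B.
    S is approximated here by lumping A to its diagonal (cheap placeholder)."""
    n, m = A.shape[0], D.shape[0]
    Ainv_diag = sp.diags(1.0 / A.diagonal())
    S_hat = (D - C @ Ainv_diag @ B).tocsc()
    A = A.tocsc()

    def apply(r):
        z1 = spsolve(A, r[:n])               # lower block-triangular solve
        z2 = spsolve(S_hat, r[n:] - C @ z1)  # approximate Schur solve
        return np.concatenate([z1 - spsolve(A, B @ z2), z2])  # upper solve

    return LinearOperator((n + m, n + m), matvec=apply)
```

    The resulting operator can be passed as the M argument of scipy.sparse.linalg.gmres; the recursion described in the paper would replace the direct solves with A and S_hat by further 2 x 2 block preconditioners.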

  10. Strehl-constrained iterative blind deconvolution for post-adaptive-optics data

    NASA Astrophysics Data System (ADS)

    Desiderà, G.; Carbillet, M.

    2009-12-01

    Aims: We aim to improve blind deconvolution applied to post-adaptive-optics (AO) data by taking into account one of their basic characteristics, resulting from the necessarily partial AO correction: the Strehl ratio. Methods: We apply a Strehl constraint in the framework of iterative blind deconvolution (IBD) of post-AO near-infrared images simulated in a detailed end-to-end manner and considering a case that is as realistic as possible. Results: The results obtained clearly show the advantage of using such a constraint, from the point of view of both performance and stability, especially for poorly AO-corrected data. The proposed algorithm has been implemented in the freely-distributed and CAOS-based Software Package AIRY.

  11. Calibration of a polarimetric imaging SAR

    NASA Technical Reports Server (NTRS)

    Sarabandi, K.; Pierce, L. E.; Ulaby, F. T.

    1991-01-01

    Calibration of polarimetric imaging Synthetic Aperture Radars (SAR's) using point calibration targets is discussed. The four-port network calibration technique is used to describe the radar error model. The polarimetric ambiguity function of the SAR is then found using a single point target, namely a trihedral corner reflector. Based on this, an estimate for the backscattering coefficient of the terrain is found by a deconvolution process. A radar image taken by the JPL Airborne SAR (AIRSAR) is used for verification of the deconvolution calibration method. The calibrated responses of point targets in the image are compared both with theory and with the POLCAL technique. Also, the responses of a distributed target are compared using the deconvolution and POLCAL techniques.

  12. Self-assembly of silk-elastinlike protein polymers into three-dimensional scaffolds for biomedical applications

    NASA Astrophysics Data System (ADS)

    Zeng, Like

    Production of brand-new protein-based materials with precise control over the amino acid sequence at the single-residue level has been made possible by genetic engineering, through which artificial genes can be developed that encode protein-based materials with desired features. As an example, silk-elastinlike protein polymers (SELPs), composed of tandem repeats of amino acid sequence motifs from Bombyx mori (silkworm) silk and mammalian elastin, have been produced with this approach. SELPs have been studied extensively in the past two decades; however, the fundamental mechanism governing the self-assembly process still remains largely unresolved. Further, despite their unprecedented success in areas including drug delivery, gene therapy, and tissue augmentation, SELP scaffolds as a three-dimensional cell culture model system are complicated by the inability of SELPs to provide the embedded tissue cells with appropriate biochemical stimuli essential for cell survival and function. In this dissertation, it is reported that the self-assembly of silk-elastinlike protein polymers (SELPs) into nanofibers in aqueous solutions can be modulated by tuning the curing temperature, the size of the silk blocks, and the charge of the elastin blocks. A core-sheath model was proposed for nanofiber formation, with the silk blocks in the cores and the hydrated elastin blocks in the sheaths. The folding of the silk blocks into stable cores, which is affected by the size of the silk blocks and the charge of the elastin blocks, plays a critical role in the assembly of silk-elastin nanofibers. The assembled nanofibers further form nanofiber clusters on the microscale, and the nanofiber clusters then coalesce into nanofiber micro-assemblies, interconnection of which eventually leads to the formation of three-dimensional scaffolds with distinct nanoscale and microscale features. SELP-Collagen hybrid scaffolds were also fabricated to enable independent control over the scaffolds' biochemical input and matrix stiffness. It is reported herein that in the hybrid scaffolds, collagen provides the essential biochemical cues needed to promote cell attachment and function while SELP imparts matrix stiffness tunability. To obtain tissue-specificity in matrix stiffness spanning several orders of magnitude, from soft brain to stiff cartilage, the hybrid SELP-Collagen scaffolds were crosslinked by transglutaminase under physiological conditions compatible with simultaneous cell encapsulation. The effect of the increase in matrix stiffness induced by such enzymatic crosslinking on cellular viability and proliferation was also evaluated using in vitro cell assays.

  13. Circulating Estradiol Regulates Brain-Derived Estradiol via Actions at GnRH Receptors to Impact Memory in Ovariectomized Rats.

    PubMed

    Nelson, Britta S; Black, Katelyn L; Daniel, Jill M

    2016-01-01

    Systemic estradiol treatment enhances hippocampus-dependent memory in ovariectomized rats. Although these enhancements are traditionally thought to be due to circulating estradiol, recent data suggest these changes are brought on by hippocampus-derived estradiol, the synthesis of which depends on gonadotropin-releasing hormone (GnRH) activity. The goal of the current work is to test the hypothesis that peripheral estradiol affects hippocampus-dependent memory through brain-derived estradiol regulated via hippocampal GnRH receptor activity. In the first experiment, intracerebroventricular infusion of letrozole, which prevents the synthesis of estradiol, blocked the ability of peripheral estradiol administration in ovariectomized rats to enhance hippocampus-dependent memory in a radial-maze task. In the second experiment, hippocampal infusion of antide, a long-lasting GnRH receptor antagonist, blocked the ability of peripheral estradiol administration in ovariectomized rats to enhance hippocampus-dependent memory. In the third experiment, hippocampal infusion of GnRH enhanced hippocampus-dependent memory, the effects of which were blocked by letrozole infusion. Results indicate that peripheral estradiol-induced enhancement of cognition is mediated by brain-derived estradiol via hippocampal GnRH receptor activity.

  14. A robust variant of block Jacobi-Davidson for extracting a large number of eigenpairs: Application to grid-based real-space density functional theory

    NASA Astrophysics Data System (ADS)

    Lee, M.; Leiter, K.; Eisner, C.; Breuer, A.; Wang, X.

    2017-09-01

    In this work, we investigate a block Jacobi-Davidson (J-D) variant suitable for sparse symmetric eigenproblems where a substantial number of extremal eigenvalues are desired (e.g., ground-state real-space quantum chemistry). Most J-D algorithm variations tend to slow down as the number of desired eigenpairs increases due to frequent orthogonalization against a growing list of solved eigenvectors. In our specification of block J-D, all of the steps of the algorithm are performed in clusters, including the linear solves, which allows us to greatly reduce computational effort with blocked matrix-vector multiplies. In addition, we move orthogonalization against locked eigenvectors and working eigenvectors outside of the inner loop but retain the single Ritz vector projection corresponding to the index of the correction vector. Furthermore, we minimize the computational effort by constraining the working subspace to the current vectors being updated and the latest set of corresponding correction vectors. Finally, we incorporate accuracy thresholds based on the precision required by the Fermi-Dirac distribution. The net result is a significant reduction in the computational effort against most previous block J-D implementations, especially as the number of wanted eigenpairs grows. We compare our approach with another robust implementation of block J-D (JDQMR) and the state-of-the-art Chebyshev filter subspace (CheFSI) method for various real-space density functional theory systems. Versus CheFSI, for first-row elements, our method yields competitive timings for valence-only systems and 4-6× speedups for all-electron systems with up to 10× reduced matrix-vector multiplies. For all-electron calculations on larger elements (e.g., gold) where the wanted spectrum is quite narrow compared to the full spectrum, we observe 60× speedup with 200× fewer matrix-vector multiplies vs. CheFSI.

  15. A robust variant of block Jacobi-Davidson for extracting a large number of eigenpairs: Application to grid-based real-space density functional theory.

    PubMed

    Lee, M; Leiter, K; Eisner, C; Breuer, A; Wang, X

    2017-09-21

    In this work, we investigate a block Jacobi-Davidson (J-D) variant suitable for sparse symmetric eigenproblems where a substantial number of extremal eigenvalues are desired (e.g., ground-state real-space quantum chemistry). Most J-D algorithm variations tend to slow down as the number of desired eigenpairs increases due to frequent orthogonalization against a growing list of solved eigenvectors. In our specification of block J-D, all of the steps of the algorithm are performed in clusters, including the linear solves, which allows us to greatly reduce computational effort with blocked matrix-vector multiplies. In addition, we move orthogonalization against locked eigenvectors and working eigenvectors outside of the inner loop but retain the single Ritz vector projection corresponding to the index of the correction vector. Furthermore, we minimize the computational effort by constraining the working subspace to the current vectors being updated and the latest set of corresponding correction vectors. Finally, we incorporate accuracy thresholds based on the precision required by the Fermi-Dirac distribution. The net result is a significant reduction in the computational effort against most previous block J-D implementations, especially as the number of wanted eigenpairs grows. We compare our approach with another robust implementation of block J-D (JDQMR) and the state-of-the-art Chebyshev filter subspace (CheFSI) method for various real-space density functional theory systems. Versus CheFSI, for first-row elements, our method yields competitive timings for valence-only systems and 4-6× speedups for all-electron systems with up to 10× reduced matrix-vector multiplies. For all-electron calculations on larger elements (e.g., gold) where the wanted spectrum is quite narrow compared to the full spectrum, we observe 60× speedup with 200× fewer matrix-vector multiplies vs. CheFSI.

  16. Variational optimization algorithms for uniform matrix product states

    NASA Astrophysics Data System (ADS)

    Zauner-Stauber, V.; Vanderstraeten, L.; Fishman, M. T.; Verstraete, F.; Haegeman, J.

    2018-01-01

    We combine the density matrix renormalization group (DMRG) with matrix product state tangent space concepts to construct a variational algorithm for finding ground states of one-dimensional quantum lattices in the thermodynamic limit. A careful comparison of this variational uniform matrix product state algorithm (VUMPS) with infinite density matrix renormalization group (IDMRG) and with infinite time evolving block decimation (ITEBD) reveals substantial gains in convergence speed and precision. We also demonstrate that VUMPS works very efficiently for Hamiltonians with long-range interactions and also for the simulation of two-dimensional models on infinite cylinders. The new algorithm can be conveniently implemented as an extension of an already existing DMRG implementation.

  17. Rac1-Regulated Endothelial Radiation Response Stimulates Extravasation and Metastasis That Can Be Blocked by HMG-CoA Reductase Inhibitors

    PubMed Central

    Hamalukic, Melanie; Huelsenbeck, Johannes; Schad, Arno; Wirtz, Stefan; Kaina, Bernd; Fritz, Gerhard

    2011-01-01

    Radiotherapy (RT) plays a key role in cancer treatment. Although the benefit of ionizing radiation (IR) is well established, some findings raise the possibility that irradiation of the primary tumor not only triggers a killing response but also increases the metastatic potential of surviving tumor cells. Here we addressed the question of whether irradiation of normal cells outside of the primary tumor augments metastasis by stimulating the extravasation of circulating tumor cells. We show that IR exposure of human endothelial cells (EC), tumor cells (TC) or both increases TC-EC adhesion in vitro. IR-stimulated TC-EC adhesion was blocked by the HMG-CoA reductase inhibitor lovastatin. Glycyrrhizic acid from liquorice root, which acts as a Sialyl-Lewis X mimetic drug, and the Rac1 inhibitor NSC23766 also reduced TC-EC adhesion. To examine the in vivo relevance of these findings, tumorigenic cells were injected into the tail vein of immunodeficient mice followed by total body irradiation (TBI). The data obtained show that TBI dramatically enhances tumor cell extravasation and lung metastasis. This pro-metastatic radiation effect was blocked by pre-treating mice with lovastatin, glycyrrhizic acid or NSC23766. TBI of mice prior to tumor cell transplantation also stimulated metastasis, which was again blocked by lovastatin. The data point to a pro-metastatic trans-effect of RT, which likely rests on the endothelial radiation response promoting the extravasation of circulating tumor cells. Administration of the widely used lipid-lowering drug lovastatin prior to irradiation counteracts this process, likely by suppressing Rac1-regulated E-selectin expression following irradiation. The data support the concern that radiation exposure might increase the extravasation of circulating tumor cells and recommend co-administration of lipid-lowering drugs to avoid this adverse effect of ionizing radiation. PMID:22039482

  18. Protograph LDPC Codes Over Burst Erasure Channels

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    In this paper we design high-rate protograph-based LDPC codes suitable for binary erasure channels. To simplify the encoder and decoder implementation for high-data-rate transmission, the structure of the codes is based on protographs and circulants. These LDPC codes can improve data link and network layer protocols in support of communication networks. Two classes of codes were designed. One class is designed for large block sizes with an iterative decoding threshold that approaches the capacity of binary erasure channels. The other class is designed for short block sizes based on maximizing the minimum stopping set size. For high code rates and short blocks, the second class outperforms the first.
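
    Since this entry, like several others in this collection, relies on block-circulant structure, a small Python sketch may help make the construction concrete: a protograph is "lifted" by replacing each edge with a circulant permutation matrix. The shift values below are arbitrary toy choices, not from the paper; real designs pick them to maximize girth and stopping-set size.

```python
import numpy as np

def circulant_perm(size, shift):
    """size x size identity with columns cyclically shifted by `shift`."""
    return np.roll(np.eye(size, dtype=np.uint8), shift, axis=1)

def lift_protograph(shifts, size):
    """Lift a protograph to a block-circulant parity-check matrix H.
    shifts[i][j] is a circulant shift, or None for an all-zero block."""
    rows = []
    for row in shifts:
        blocks = [np.zeros((size, size), np.uint8) if s is None
                  else circulant_perm(size, s) for s in row]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

# a 2 x 4 protograph lifted with 8 x 8 circulants -> a 16 x 32 parity-check H
H = lift_protograph([[0, 1, None, 3],
                     [2, None, 5, 7]], size=8)
```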

  19. Role of Mas receptor antagonist (A779) in renal hemodynamics in condition of blocked angiotensin II receptors in rats.

    PubMed

    Mansoori, A; Oryan, S; Nematbakhsh, M

    2016-03-01

    The vasodilatory effect of angiotensin 1-7 (Ang 1-7) is exerted in the vascular bed via the Mas receptor (MasR) in a gender-dependent manner. However, crosstalk between MasR and the angiotensin II (Ang II) type 1 and 2 receptors (AT1R and AT2R) may change some actions of Ang 1-7 in the renal circulation. In this study, the role of MasR in kidney hemodynamics was characterized while AT1R and AT2R were blocked. In anaesthetized male and female Wistar rats, the effects of saline as vehicle and of MasR blockade (A779) were tested on mean arterial pressure (MAP), renal perfusion pressure (RPP), renal blood flow (RBF), and renal vascular resistance (RVR) when both AT1R and AT2R were blocked by losartan and PD123319, respectively. In male rats, when AT1R and AT2R were blocked, RBF normalized to wet kidney tissue weight (RBF/KW) tended to be elevated by A779 as compared with the vehicle (P=0.08); this was not the case in female rats. The impact of MasR on renal hemodynamics does not appear to be sexually dimorphic when Ang II receptors are blocked. It seems that co-blockade of AT1R, AT2R, and MasR may alter RBF/KW in male more than in female rats. These findings support a crosstalk between MasR and Ang II receptors in the renal circulation.

  20. Histogram deconvolution - An aid to automated classifiers

    NASA Technical Reports Server (NTRS)

    Lorre, J. J.

    1983-01-01

    It is shown that N-dimensional histograms are convolved by the addition of noise in the picture domain. Three methods are described which provide the ability to deconvolve such noise-affected histograms. The purpose of the deconvolution is to provide automated classifiers with a higher quality N-dimensional histogram from which to obtain classification statistics.
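
    The premise that additive picture-domain noise convolves the histogram is easy to verify numerically. The following toy Python illustration is our own (the paper's three deconvolution methods are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
pixels = rng.choice([2.0, 5.0], size=100_000)    # two-class "image" values
noise = rng.normal(0.0, 0.4, size=pixels.size)   # additive sensor noise

bins = np.linspace(0.0, 8.0, 161)                # 0.05-wide value bins
h_clean, _ = np.histogram(pixels, bins)
h_noisy, _ = np.histogram(pixels + noise, bins)

# histogram of the noise alone, on bins of the same width
h_noise, _ = np.histogram(noise, np.linspace(-2.0, 2.0, 81))
kernel = h_noise / h_noise.sum()

# h_noisy matches (up to binning and edge effects) h_clean convolved with
# kernel, so deconvolving h_noisy by kernel recovers the class structure
predicted = np.convolve(h_clean, kernel, mode="same")
```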

  1. Study of one- and two-dimensional filtering and deconvolution algorithms for a streaming array computer

    NASA Technical Reports Server (NTRS)

    Ioup, G. E.

    1985-01-01

    Appendix 5 of the Study of One- and Two-Dimensional Filtering and Deconvolution Algorithms for a Streaming Array Computer includes a resume of the professional background of the Principal Investigator on the project, lists of his publications and research papers, graduate theses supervised, and grants received.

  2. Blind Deconvolution for Distributed Parameter Systems with Unbounded Input and Output and Determining Blood Alcohol Concentration from Transdermal Biosensor Data.

    PubMed

    Rosen, I G; Luczak, Susan E; Weiss, Jordan

    2014-03-15

    We develop a blind deconvolution scheme for input-output systems described by distributed parameter systems with boundary input and output. An abstract functional analytic theory based on results for the linear quadratic control of infinite dimensional systems with unbounded input and output operators is presented. The blind deconvolution problem is then reformulated as a series of constrained linear and nonlinear optimization problems involving infinite dimensional dynamical systems. A finite dimensional approximation and convergence theory is developed. The theory is applied to the problem of estimating blood or breath alcohol concentration (respectively, BAC or BrAC) from biosensor-measured transdermal alcohol concentration (TAC) in the field. A distributed parameter model with boundary input and output is proposed for the transdermal transport of ethanol from the blood through the skin to the sensor. The problem of estimating BAC or BrAC from the TAC data is formulated as a blind deconvolution problem. A scheme to identify distinct drinking episodes in TAC data based on a Hodrick-Prescott filter is discussed. Numerical results involving actual patient data are presented.

  3. Calculation of the static in-flight telescope-detector response by deconvolution applied to point-spread function for the geostationary earth radiation budget experiment.

    PubMed

    Matthews, Grant

    2004-12-01

    The Geostationary Earth Radiation Budget (GERB) experiment is a broadband satellite radiometer instrument program intended to resolve remaining uncertainties surrounding the effect of cloud radiative feedback on future climate change. By use of a custom-designed diffraction-aberration telescope model, the GERB detector spatial response is recovered by deconvolution applied to the ground calibration point-spread function (PSF) measurements. An ensemble of randomly generated white-noise test scenes, combined with the measured telescope transfer function, results in the effect of noise on the deconvolution being significantly reduced. With the recovered detector response as a base, the same model is applied in construction of the predicted in-flight field-of-view response of each GERB pixel to both short- and long-wave Earth radiance. The results of this study can now be used to simulate and investigate the instantaneous sampling errors incurred by GERB. Also, the developed deconvolution method may be highly applicable in enhancing images or PSF data for any telescope system for which a wave-front error measurement is available.
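
    As an illustration of this kind of transfer-function division, here is a minimal frequency-domain (Wiener-regularized) deconvolution in Python; the scalar SNR regularizer stands in for the paper's ensemble-of-white-noise-scenes noise suppression and is our simplification.

```python
import numpy as np

def wiener_deconvolve(measured_psf, telescope_psf, snr=100.0):
    """Recover a detector spatial response from a measured PSF, given a
    modelled telescope PSF.  Both arrays are assumed origin-centred for the
    FFT (apply np.fft.ifftshift first if they are centred in the array)."""
    M = np.fft.fft2(measured_psf)
    H = np.fft.fft2(telescope_psf)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # Wiener-regularized 1/H
    return np.real(np.fft.ifft2(M * W))
```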

  4. Point spread functions and deconvolution of ultrasonic images.

    PubMed

    Dalitz, Christoph; Pohle-Fröhlich, Regina; Michalk, Thorsten

    2015-03-01

    This article investigates the restoration of ultrasonic pulse-echo C-scan images by means of deconvolution with a point spread function (PSF). The deconvolution concept from linear system theory (LST) is linked to the wave equation formulation of the imaging process, and an analytic formula for the PSF of planar transducers is derived. For this analytic expression, different numerical and analytic approximation schemes for evaluating the PSF are presented. By comparing simulated images with measured C-scan images, we demonstrate that the assumptions of LST in combination with our formula for the PSF are a good model for the pulse-echo imaging process. To reconstruct the object from a C-scan image, we compare different deconvolution schemes: the Wiener filter, the ForWaRD algorithm, and the Richardson-Lucy algorithm. The best results are obtained with the Richardson-Lucy algorithm with total variation regularization. For distances greater than or equal to twice the near-field distance, our experiments show that the numerically computed PSF can be replaced with a simple closed analytic term based on a far-field approximation.
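
    Since the Richardson-Lucy algorithm is singled out above, a minimal Python sketch of its plain form may be useful; the total-variation-regularized variant the authors favor adds a TV term to the multiplicative update, which is omitted here.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
    """Plain Richardson-Lucy deconvolution of a nonnegative image."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full(image.shape, image.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, eps)    # data-fidelity ratio
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```

    A tested implementation of the same iteration is available as skimage.restoration.richardson_lucy.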

  5. Nimbus 7 earth radiation budget wide field of view climate data set improvement. I - The earth albedo from deconvolution of shortwave measurements

    NASA Technical Reports Server (NTRS)

    Hucek, Richard R.; Ardanuy, Philip E.; Kyle, H. Lee

    1987-01-01

    A deconvolution method for extracting the top of the atmosphere (TOA) mean, daily albedo field from a set of wide-FOV (WFOV) shortwave radiometer measurements is proposed. The method is based on constructing a synthetic measurement for each satellite observation. The albedo field is represented as a truncated series of spherical harmonic functions, and these linear equations are presented. Simulation studies were conducted to determine the sensitivity of the method. It is observed that a maximum of about 289 pieces of data can be extracted from a set of Nimbus 7 WFOV satellite measurements. The albedos derived using the deconvolution method are compared with albedos derived using the WFOV archival method; the developed albedo field achieved a 20 percent reduction in the global rms regional reflected flux density errors. The deconvolution method is applied to estimate the mean, daily average TOA albedo field for January 1983. A strong and extensive albedo maximum (0.42), which corresponds to the El Nino/Southern Oscillation event of 1982-1983, is detected over the south central Pacific Ocean.

  6. Impact of polymers on the crystallization and phase transition kinetics of amorphous nifedipine during dissolution in aqueous media.

    PubMed

    Raina, Shweta A; Alonzo, David E; Zhang, Geoff G Z; Gao, Yi; Taylor, Lynne S

    2014-10-06

    The commercial and clinical success of amorphous solid dispersions (ASD) in overcoming the low bioavailability of poorly soluble molecules has generated momentum among pharmaceutical scientists to advance the fundamental understanding of these complex systems. A major limitation of these formulations stems from the propensity of amorphous solids to crystallize upon exposure to aqueous media. This study was specifically focused on developing analytical techniques to evaluate the impact of polymers on the crystallization behavior during dissolution, which is critical in designing effective amorphous formulations. In the study, the crystallization and polymorphic conversions of a model compound, nifedipine, were explored in the absence and presence of polyvinylpyrrolidone (PVP), hydroxypropylmethyl cellulose (HPMC), and HPMC-acetate succinate (HPMC-AS). A combination of analytical approaches including Raman spectroscopy, polarized light microscopy, and chemometric techniques such as multivariate curve resolution (MCR) were used to evaluate the kinetics of crystallization and polymorphic transitions as well as to identify the primary route of crystallization, i.e., whether crystallization took place in the dissolving solid matrix or from the supersaturated solutions generated during dissolution. Pure amorphous nifedipine, when exposed to aqueous media, was found to crystallize rapidly from the amorphous matrix, even when polymers were present in the dissolution medium. Matrix crystallization was avoided when amorphous solid dispersions were prepared; however, crystallization from the solution phase was rapid. MCR was found to be an excellent data processing technique to deconvolute the complex phase transition behavior of nifedipine.

  7. Long lifetime near-infrared-emitting quantum dots for time-gated in vivo imaging of rare circulating cells (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Fragola, Alexandra; Bouccara, Sophie; Pezet, Sophie; Lequeux, Nicolas; Loriette, Vincent; Pons, Thomas

    2017-02-01

    The in vivo detection of rare circulating cells using non-invasive fluorescence imaging would provide a key tool to study the migration of, e.g., tumoral or immunological cells. Fluorescence detection is, however, currently limited by a lack of contrast between the small emission of isolated, fast-circulating cells and the strong autofluorescence background of the surrounding tissues. We present the development of near-infrared-emitting quantum dots (NIR-QDs) with long fluorescence lifetime for sensitive time-gated in vivo imaging of circulating cells. These QDs are composed of low-toxicity ZnCuInSe/ZnS materials and made biocompatible using a novel multidentate imidazole zwitterionic block copolymer, ensuring their long-term intracellular stability. Cells of interest can thus be labeled ex vivo with QDs, injected intravenously and imaged in the near-infrared range. Excitation using a pulsed laser coupled to time-gated detection enables the efficient rejection of the short-lifetime (≈ ns) autofluorescence background and detection of the long-lifetime (≈ 150 ns) fluorescence from QD-labeled cells. We demonstrate efficient in vivo imaging of single fast-flowing cells, which opens opportunities for future biological studies. [1] M. Tasso et al, "Sulfobetaine-Vinylimidazole block copolymers: a robust quantum dot surface chemistry expanding bioimaging's horizons", ACS Nano, 9(11), 2015 [2] S. Bouccara et al, "Time-gated cell imaging using long lifetime near-infrared-emitting quantum dots for autofluorescence rejection", J Biomed Opt, 19(5), 2014

  8. Mixture-based combinatorial libraries from small individual peptide libraries: a case study on α1-antitrypsin deficiency.

    PubMed

    Chang, Yi-Pin; Chu, Yen-Ho

    2014-05-16

    The design, synthesis and screening of diversity-oriented peptide libraries using a "libraries from libraries" strategy for the development of inhibitors of α1-antitrypsin deficiency are described. The major buttress of the biochemical approach presented here is the use of the well-established solid-phase split-and-mix method for the generation of mixture-based libraries. The combinatorial technique of iterative deconvolution was employed for library screening. While molecular diversity is the general consideration in combinatorial libraries, exquisite design through systematic screening of small individual libraries is a prerequisite for effective library screening and can avoid potential problems in some cases. This review will also illustrate how large peptide libraries were designed, as well as how a conformation-sensitive assay was developed based on the mechanism of the conformational disease. Finally, the combinatorially selected peptide inhibitor capable of blocking abnormal protein aggregation will be characterized by biophysical, cellular and computational methods.

  9. Deconvoluting physical and chemical heat: Temperature and spiciness influence flavor differently.

    PubMed

    Kapaun, Camille L; Dando, Robin

    2017-03-01

    Flavor is an essential, rich and rewarding part of human life. We refer to both physical and chemical heat in similar terms; elevated temperature and capsaicin are both termed hot. Both influence our perception of flavor, however little research exists into the possibly divergent effect of chemical and physical heat on flavor. A human sensory panel was recruited to determine the equivalent level of capsaicin to match the heat of several physical temperatures. In a subsequent session, the intensities of multiple concentrations of tastant solutions were scaled by the same panel. Finally, panelists evaluated tastants plus equivalent chemical or physical "heat". All basic tastes aside from umami were influenced by heat, capsaicin, or both. Interestingly, capsaicin blocked bitter taste input much more powerfully than elevated temperature. This suggests that despite converging percepts, chemical and physical heat have a fundamentally different effect on the perception of flavor. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Deconvolution of astronomical images using SOR with adaptive relaxation.

    PubMed

    Vorontsov, S V; Strakhov, V N; Jefferies, S M; Borelli, K J

    2011-07-04

    We address the potential performance of the successive overrelaxation technique (SOR) in image deconvolution, focusing our attention on the restoration of astronomical images distorted by atmospheric turbulence. SOR is the classical Gauss-Seidel iteration, supplemented with relaxation. As indicated by earlier work, the convergence properties of SOR, and its ultimate performance in the deconvolution of blurred and noisy images, can be made competitive with other iterative techniques, including conjugate gradients, by a proper choice of the relaxation parameter. The question of how to choose the relaxation parameter, however, remained open, and in practical work one had to rely on experimentation. In this paper, using constructive (rather than exact) arguments, we suggest a simple strategy for choosing the relaxation parameter and for updating its value in consecutive iterations to optimize the performance of the SOR algorithm (and its positivity-constrained version, +SOR) at finite iteration counts. We suggest an extension of the algorithm to the notoriously difficult problem of "blind" deconvolution, where both the true object and the point-spread function have to be recovered from the blurred image. We report the results of numerical inversions with artificial and real data, where the algorithm is compared with techniques based on conjugate gradients. In all of our experiments +SOR provides the highest-quality results. In addition, +SOR is found to be able to detect moderately small changes in the true object between separate data frames: an important quality for multi-frame blind deconvolution where stationarity of the object is a necessity.
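
    For reference, here is a minimal Python sketch of the underlying SOR iteration applied to the regularized normal equations of a deconvolution problem. The fixed omega below is a placeholder: the paper's contribution is precisely an adaptive rule for updating it, which is not reproduced, and the positivity clipping is our reading of the +SOR constraint.

```python
import numpy as np

def sor_solve(A, b, omega=1.5, n_iter=200, positivity=False):
    """SOR for A x = b with A symmetric positive definite; omega = 1 is
    Gauss-Seidel.  positivity=True mimics a projected +SOR variant by
    clipping each component update at zero."""
    x = np.zeros_like(b, dtype=float)
    d = np.diag(A)
    for _ in range(n_iter):
        for i in range(len(b)):
            sigma = A[i] @ x - d[i] * x[i]           # off-diagonal row sum
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / d[i]
            if positivity:
                x[i] = max(x[i], 0.0)
    return x

# deconvolution usage: solve (H.T @ H + alpha * I) x = H.T @ y, where H is
# the blurring operator and alpha is a small Tikhonov regularization term
```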

  11. Gaussian and linear deconvolution of LC-MS/MS chromatograms of the eight aminobutyric acid isomers

    PubMed Central

    Vemula, Harika; Kitase, Yukiko; Ayon, Navid J.; Bonewald, Lynda; Gutheil, William G.

    2016-01-01

    Isomeric molecules present a challenge for analytical resolution and quantification, even with MS-based detection. The eight aminobutyric acid (ABA) isomers are of interest for their various biological activities, particularly γ-aminobutyric acid (GABA) and the d- and l-isomers of β-aminoisobutyric acid (β-AIBA; BAIBA). This study aimed to investigate the LC-MS/MS-based resolution of these ABA isomers as their Marfey's (Mar) reagent derivatives. HPLC was able to completely separate three Mar-ABA isomers, l-β-ABA (l-BABA) and l- and d-α-ABA (AABA), leaving three isomers (GABA and d/l-BAIBA) in one chromatographic cluster and two isomers (α-AIBA (AAIBA) and d-BABA) in a second cluster. Partially separated cluster components were deconvoluted using Gaussian peak fitting, except for GABA and d-BAIBA. MS/MS detection of the Marfey's-derivatized ABA isomers provided six MS/MS fragments with substantially different intensity profiles between structural isomers, which allowed linear deconvolution of the ABA isomer peaks. Combining HPLC separation with linear and Gaussian deconvolution allowed resolution of all eight ABA isomers. Application to human serum found a substantial level of l-AABA (13 μM), an intermediate level of l-BAIBA (0.8 μM), and low but detectable levels (<0.2 μM) of GABA, l-BABA, AAIBA, d-BAIBA, and d-AABA. This approach should be useful for LC-MS/MS deconvolution of other challenging groups of isomeric molecules. PMID:27771391
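
    The Gaussian deconvolution step amounts to fitting an overlapping peak cluster with a sum of Gaussians. A self-contained Python sketch follows; the retention times, widths, and amplitudes are invented for illustration, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, mu1, s1, a2, mu2, s2):
    g = lambda a, mu, s: a * np.exp(-0.5 * ((t - mu) / s) ** 2)
    return g(a1, mu1, s1) + g(a2, mu2, s2)

# synthetic chromatogram of two co-eluting peaks plus noise
t = np.linspace(0.0, 10.0, 500)
rng = np.random.default_rng(1)
y = two_gaussians(t, 1.0, 4.6, 0.30, 0.6, 5.2, 0.35) + rng.normal(0, 0.01, t.size)

popt, _ = curve_fit(two_gaussians, t, y, p0=(1.0, 4.5, 0.3, 0.5, 5.3, 0.3))
a1, _, s1, a2, _, s2 = popt
areas = (a1 * s1 * np.sqrt(2 * np.pi), a2 * s2 * np.sqrt(2 * np.pi))
```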

  12. Deconvolution of ferredoxin, plastocyanin, and P700 transmittance changes in intact leaves with a new type of kinetic LED array spectrophotometer.

    PubMed

    Klughammer, Christof; Schreiber, Ulrich

    2016-05-01

    A newly developed compact measuring system for assessment of transmittance changes in the near-infrared spectral region is described; it allows deconvolution of redox changes due to ferredoxin (Fd), P700, and plastocyanin (PC) in intact leaves. In addition, it can also simultaneously measure chlorophyll fluorescence. The major opto-electronic components as well as the principles of data acquisition and signal deconvolution are outlined. Four original pulse-modulated dual-wavelength difference signals are measured (785-840 nm, 810-870 nm, 870-970 nm, and 795-970 nm). Deconvolution is based on specific spectral information presented graphically in the form of 'Differential Model Plots' (DMP) of Fd, P700, and PC that are derived empirically from selective changes of these three components under appropriately chosen physiological conditions. Whereas information on maximal changes of Fd is obtained upon illumination after dark-acclimation, maximal changes of P700 and PC can be readily induced by saturating light pulses in the presence of far-red light. Using the information of DMP and maximal changes, the new measuring system enables on-line deconvolution of Fd, P700, and PC. The performance of the new device is demonstrated by some examples of practical applications, including fast measurements of flash relaxation kinetics and of the Fd, P700, and PC changes paralleling the polyphasic fluorescence rise upon application of a 300-ms pulse of saturating light.
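
    The deconvolution described here is, at each time point, a small linear unmixing problem: four measured dual-wavelength signals against three component signatures. A Python sketch with placeholder coefficients follows; the real coefficients are the empirically measured Differential Model Plots, which we do not have.

```python
import numpy as np

# rows: the four dual-wavelength difference signals; columns: Fd, P700, PC.
# All numbers below are placeholders, not the instrument's calibration.
D = np.array([[0.9, 0.3, 0.2],    # 785-840 nm
              [0.2, 1.0, 0.4],    # 810-870 nm
              [0.1, 0.5, 1.0],    # 870-970 nm
              [0.4, 0.8, 0.7]])   # 795-970 nm

m = np.array([0.35, 0.62, 0.55, 0.58])         # one time point's four signals
redox, *_ = np.linalg.lstsq(D, m, rcond=None)  # -> [Fd, P700, PC] changes
```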

  13. Second-order standard addition for deconvolution and quantification of fatty acids of fish oil using GC-MS.

    PubMed

    Vosough, Maryam; Salemi, Amir

    2007-08-15

    In the present work, two second-order calibration methods, the generalized rank annihilation method (GRAM) and multivariate curve resolution-alternating least squares (MCR-ALS), have been applied to standard addition data matrices obtained by gas chromatography-mass spectrometry (GC-MS) to characterize and quantify four unsaturated fatty acids, cis-9-hexadecenoic acid (C16:1omega7c), cis-9-octadecenoic acid (C18:1omega9c), cis-11-eicosenoic acid (C20:1omega9) and cis-13-docosenoic acid (C22:1omega9), in fish oil in the presence of matrix interferences. With these methods, the peak area does not need to be measured directly and predictions are more accurate. Because GC-MS data matrices are not trilinear, MCR-ALS and GRAM were first applied to uncorrected data matrices. In comparison to MCR-ALS, biased and imprecise concentrations (%R.S.D.=27.3) were obtained using GRAM without correcting the retention-time shift. As trilinearity is the essential requirement for implementing GRAM, the data need to be corrected. Multivariate rank alignment objectively corrects the run-to-run retention time variations between a sample GC-MS data matrix and a standard addition GC-MS data matrix. The two second-order algorithms were then compared with each other; they provided similar mean predictions, pure concentrations and spectral profiles. The results were validated using standard mass spectra of the target compounds. In addition, some of the quantification results were compared with the concentration values obtained using the selected mass chromatograms. Because the classical univariate method of determining analyte peak areas fails in cases of strong peak overlap and matrix effects, the "second-order advantage" solved this problem successfully.

  14. Straightening: existence, uniqueness and stability

    PubMed Central

    Destrade, M.; Ogden, R. W.; Sgura, I.; Vergori, L.

    2014-01-01

    One of the least studied universal deformations of incompressible nonlinear elasticity, namely the straightening of a sector of a circular cylinder into a rectangular block, is revisited here and, in particular, issues of existence and stability are addressed. Particular attention is paid to the system of forces required to sustain the large static deformation, including by the application of end couples. The influence of geometric parameters and constitutive models on the appearance of wrinkles on the compressed face of the block is also studied. Different numerical methods for solving the incremental stability problem are compared and it is found that the impedance matrix method, based on the resolution of a matrix Riccati differential equation, is the more precise. PMID:24711723

  15. Low-Dielectric Constant Polyimide Nanoporous Films: Synthesis and Properties

    NASA Astrophysics Data System (ADS)

    Mehdipour-Ataei, S.; Rahimi, A.; Saidi, S.

    2007-08-01

    Synthesis of high temperature polyimide foams with pore sizes in the nanometer range was developed. Foams were prepared by casting graft copolymers comprising a thermally stable block as the matrix and a thermally labile material as the dispersed phase. Polyimides derived from pyromellitic dianhydride with new diamines (4BAP and BAN) were used as the matrix material and functionalized poly(propylene glycol) oligomers were used as a thermally labile constituent. Upon thermal treatment the labile blocks were subsequently removed leaving pores with the size and shape of the original copolymer morphology. The polyimides and foamed polyimides were characterized by some conventional methods including FTIR, H-NMR, DSC, TGA, SEM, TEM, and dielectric constant.

  16. Gold - A novel deconvolution algorithm with optimization for waveform LiDAR processing

    NASA Astrophysics Data System (ADS)

    Zhou, Tan; Popescu, Sorin C.; Krause, Keith; Sheridan, Ryan D.; Putman, Eric

    2017-07-01

    Waveform Light Detection and Ranging (LiDAR) data have advantages over discrete-return LiDAR data in accurately characterizing vegetation structure. However, we lack a comprehensive understanding of waveform data processing approaches under different topography and vegetation conditions. The objective of this paper is to highlight a novel deconvolution algorithm, the Gold algorithm, for processing waveform LiDAR data with optimal deconvolution parameters. Further, we present a comparative study of waveform processing methods to provide insight into selecting an approach for a given combination of vegetation and terrain characteristics. We employed two waveform processing methods: (1) direct decomposition, (2) deconvolution and decomposition. In method two, we utilized two deconvolution algorithms - the Richardson-Lucy (RL) algorithm and the Gold algorithm. The comprehensive and quantitative comparisons were conducted in terms of the number of detected echoes, position accuracy, the bias of the end products (such as digital terrain model (DTM) and canopy height model (CHM)) from the corresponding reference data, along with parameter uncertainty for these end products obtained from different methods. This study was conducted at three study sites that include diverse ecological regions, vegetation and elevation gradients. Results demonstrate that two deconvolution algorithms are sensitive to the pre-processing steps of input data. The deconvolution and decomposition method is more capable of detecting hidden echoes with a lower false echo detection rate, especially for the Gold algorithm. Compared to the reference data, all approaches generate satisfactory accuracy assessment results with small mean spatial difference (<1.22 m for DTMs, <0.77 m for CHMs) and root mean square error (RMSE) (<1.26 m for DTMs, <1.93 m for CHMs). More specifically, the Gold algorithm is superior to others with smaller root mean square error (RMSE) (<1.01 m), while the direct decomposition approach works better in terms of the percentage of spatial difference within 0.5 and 1 m. The parameter uncertainty analysis demonstrates that the Gold algorithm outperforms other approaches in dense vegetation areas, with the smallest RMSE, and the RL algorithm performs better in sparse vegetation areas in terms of RMSE. Additionally, the high level of uncertainty occurs more on areas with high slope and high vegetation. This study provides an alternative and innovative approach for waveform processing that will benefit high fidelity processing of waveform LiDAR data to characterize vegetation structures.
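
    For readers unfamiliar with it, the Gold algorithm is a multiplicative, nonnegativity-preserving ratio iteration on the normal equations. A minimal 1D Python sketch follows; the paper's pre-processing and optimal parameter selection are omitted.

```python
import numpy as np

def gold_deconvolve(y, h, n_iter=500, eps=1e-12):
    """Gold's ratio iteration for y = h * x with x >= 0.
    With b = H^T y and A = H^T H, iterate x_i <- x_i * b_i / (A x)_i.
    h is assumed odd-length so 'same'-mode correlation is the exact adjoint."""
    hm = h[::-1]
    b = np.convolve(y, hm, mode="same")            # H^T y
    x = np.ones_like(y, dtype=float)
    for _ in range(n_iter):
        Hx = np.convolve(x, h, mode="same")
        Ax = np.convolve(Hx, hm, mode="same")      # H^T H x
        x *= b / np.maximum(Ax, eps)
    return x
```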

  17. Improving cell mixture deconvolution by identifying optimal DNA methylation libraries (IDOL).

    PubMed

    Koestler, Devin C; Jones, Meaghan J; Usset, Joseph; Christensen, Brock C; Butler, Rondi A; Kobor, Michael S; Wiencke, John K; Kelsey, Karl T

    2016-03-08

    Confounding due to cellular heterogeneity represents one of the foremost challenges currently facing Epigenome-Wide Association Studies (EWAS). Statistical methods leveraging the tissue-specificity of DNA methylation for deconvoluting the cellular mixture of heterogenous biospecimens offer a promising solution, however the performance of such methods depends entirely on the library of methylation markers being used for deconvolution. Here, we introduce a novel algorithm for Identifying Optimal Libraries (IDOL) that dynamically scans a candidate set of cell-specific methylation markers to find libraries that optimize the accuracy of cell fraction estimates obtained from cell mixture deconvolution. Application of IDOL to a training set consisting of samples with both whole-blood DNA methylation data (Illumina HumanMethylation450 BeadArray (HM450)) and flow cytometry measurements of cell composition revealed an optimized library comprising 300 CpG sites. When compared to existing libraries, the library identified by IDOL demonstrated significantly better overall discrimination of the entire immune cell landscape (p = 0.038), and resulted in improved discrimination of 14 out of the 15 pairs of leukocyte subtypes. Estimates of cell composition across the samples in the training set using the IDOL library were highly correlated with their respective flow cytometry measurements, with all cell-specific R² > 0.99 and root mean square errors (RMSEs) ranging from 0.97% to 1.33% across leukocyte subtypes. Independent validation of the optimized IDOL library using two additional HM450 data sets showed similarly strong prediction performance, with all cell-specific R² > 0.90 and RMSE < 4.00%. In simulation studies, adjustments for cell composition using the IDOL library resulted in uniformly lower false positive rates compared to competing libraries, while also demonstrating an improved capacity to explain epigenome-wide variation in DNA methylation within two large publicly available HM450 data sets. Despite consisting of half as many CpGs as existing libraries for whole-blood mixture deconvolution, the optimized IDOL library identified herein resulted in outstanding prediction performance across all considered data sets and demonstrated potential to improve the operating characteristics of EWAS involving adjustments for cell distribution. In addition to providing the EWAS community with an optimized library for whole-blood mixture deconvolution, our work establishes a systematic and generalizable framework for the assembly of libraries that improve the accuracy of cell mixture deconvolution.
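
    Downstream of library selection, the deconvolution itself is a constrained regression of a sample's methylation profile onto cell-type reference profiles. A minimal Python sketch of that step follows; the specific projection method used in the EWAS literature and the IDOL selection procedure itself are not reproduced.

```python
import numpy as np
from scipy.optimize import nnls

def estimate_cell_fractions(beta_sample, ref_library):
    """ref_library: CpGs x cell-types matrix of reference beta values,
    restricted to an optimized marker library (e.g., IDOL's 300 CpGs);
    beta_sample: the same CpGs measured in a heterogeneous sample."""
    fractions, _ = nnls(ref_library, beta_sample)  # least squares with f >= 0
    return fractions / fractions.sum()             # normalize to proportions
```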

  18. SU-C-9A-03: Simultaneous Deconvolution and Segmentation for PET Tumor Delineation Using a Variational Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, L; Tan, S; Lu, W

    2014-06-01

    Purpose: To implement a new method that integrates deconvolution with segmentation under the variational framework for PET tumor delineation. Methods: Deconvolution and segmentation are both challenging problems in image processing. The partial volume effect (PVE) blurs tumor boundaries in PET images, which affects the accuracy of tumor segmentation. Deconvolution aims to obtain a PVE-free image, which can help to improve segmentation accuracy. Conversely, a correct localization of the object boundaries helps to estimate the blur kernel, and thus assists the deconvolution. In this study, we proposed to solve the two problems simultaneously using a variational method so that they can benefit each other. The energy functional consists of a fidelity term and a regularization term, and the blur kernel was limited to be an isotropic Gaussian kernel. We minimized the energy functional by solving the associated Euler-Lagrange equations and taking the derivative with respect to the parameters of the kernel function. An alternate minimization method was used to iterate between segmentation, deconvolution and blur-kernel recovery. The performance of the proposed method was tested on clinical PET images of patients with non-Hodgkin's lymphoma, and compared with seven other segmentation methods using the Dice similarity index (DSI) and volume error (VE). Results: Among all segmentation methods, the proposed one (DSI=0.81, VE=0.05) has the highest accuracy, followed by active contours without edges (DSI=0.81, VE=0.25), while other methods, including the Graph Cut and the Mumford-Shah (MS) methods, have lower accuracy. A visual inspection shows that the proposed method localizes the real tumor contour very well. Conclusion: The results showed that deconvolution and segmentation can contribute to each other. The proposed variational method solves the two problems simultaneously and leads to high performance for tumor segmentation in PET. This work was supported in part by the National Natural Science Foundation of China (NNSFC), under Grant Nos. 60971112 and 61375018, and the Fundamental Research Funds for the Central Universities, under Grant No. 2012QN086. Wei Lu was supported in part by the National Institutes of Health (NIH) Grant No. R01 CA172638.

  19. Reconstruction of 2D PET data with Monte Carlo generated system matrix for generalized natural pixels

    NASA Astrophysics Data System (ADS)

    Vandenberghe, Stefaan; Staelens, Steven; Byrne, Charles L.; Soares, Edward J.; Lemahieu, Ignace; Glick, Stephen J.

    2006-06-01

    In discrete detector PET, natural pixels are image basis functions calculated from the response of detector pairs. By using reconstruction with natural pixel basis functions, the discretization of the object into a predefined grid can be avoided. Here, we propose to use generalized natural pixel reconstruction. Using this approach, the basis functions are not the detector sensitivity functions as in the natural pixel case but uniform parallel strips. The backprojection of the strip coefficients results in the reconstructed image. This paper proposes an easy and efficient way to generate the matrix M directly by Monte Carlo simulation. Elements of the generalized natural pixel system matrix are formed by calculating the intersection of a parallel strip with the detector sensitivity function. These generalized natural pixels are easier to use than conventional natural pixels because the final step from solution to a square pixel representation is done by simple backprojection. Owing to rotational symmetry in the PET scanner, the matrix M is block circulant and only the first block-row needs to be stored. Data were generated using a fast Monte Carlo simulator based on ray tracing. The proposed method was compared to a listmode MLEM algorithm, which used ray tracing for forward and backprojection. Comparison of the algorithms with different phantoms showed that an improved resolution can be obtained using generalized natural pixel reconstruction with accurate system modelling. In addition, it was noted that for the same resolution a lower noise level is present in this reconstruction. A numerical observer study showed that the proposed method exhibited increased performance as compared to a standard listmode EM algorithm. In another study, more realistic data were generated using the GATE Monte Carlo simulator. For these data, a more uniform contrast recovery and a better contrast-to-noise performance were observed. It was observed that major improvements in contrast recovery were obtained with MLEM when the correct system matrix was used instead of simple ray tracing. The correct modelling was the major cause of improved contrast for the same background noise. Less important factors were the choice of the algorithm (MLEM performed better than ART) and the basis functions (generalized natural pixels gave better results than pixels).
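
    The storage saving comes from the standard block-diagonalization of block-circulant matrices by the DFT: a product with M needs only the stored generating blocks plus FFTs along the circulant (rotation) index. A minimal sketch, written for blocks supplied as the first block-column (the first block-row case of the paper differs only by a reindexing of the same blocks):

```python
import numpy as np

def blockcirc_matvec(blocks, x):
    """y = M x for block-circulant M, storing only its generating blocks.

    blocks : (K, m, n) array; M[i, j] = blocks[(i - j) % K], i.e. `blocks`
             is the first block-column of M.
    x      : (K, n) input vector split into K blocks.
    """
    Fb = np.fft.fft(blocks, axis=0)        # DFT along the circulant index
    Fx = np.fft.fft(x, axis=0)
    Fy = np.einsum('kmn,kn->km', Fb, Fx)   # independent m-by-n matvec per mode
    return np.fft.ifft(Fy, axis=0).real    # real output for real M and x
```

    This turns one (Km x Kn) dense product into K small matvecs plus FFTs, which is precisely what the scanner's rotational symmetry buys.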

  20. Matrix multiplication on the Intel Touchstone Delta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huss-Lederman, S.; Jacobson, E.M.; Tsao, A.

    1993-12-31

    Matrix multiplication is a key primitive in block matrix algorithms such as those found in LAPACK. We present results from our study of matrix multiplication algorithms on the Intel Touchstone Delta, a distributed memory message-passing architecture with a two-dimensional mesh topology. We obtain an implementation that uses communication primitives highly suited to the Delta and exploits the single node assembly-coded matrix multiplication. Our algorithm is completely general, able to deal with arbitrary mesh aspect ratios and matrix dimensions, and has achieved parallel efficiency of 86% with overall peak performance in excess of 8 Gflops on 256 nodes for an 8800 × 8800 matrix. We describe our algorithm design and implementation, and present performance results that demonstrate scalability and robust behavior over varying mesh topologies.
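
    The core pattern in distributed block matrix multiplication of this kind is panel-wise accumulation: the product is built as a sum of outer products of a block column of A with a block row of B, where each panel would be broadcast along mesh rows and columns before a local GEMM. A serial skeleton of that accumulation, with the Delta-specific communication and assembly-coded GEMM elided; names and the panel width are illustrative:

```python
import numpy as np

def panel_matmul(A, B, panel=64):
    """C = A @ B accumulated panel by panel; the serial skeleton of a
    mesh-distributed block matrix multiply.

    On a 2D mesh, each step would broadcast A[:, s:s+panel] along processor
    rows and B[s:s+panel, :] along columns, then run a local GEMM per node.
    """
    M, K = A.shape
    N = B.shape[1]
    C = np.zeros((M, N))
    for s in range(0, K, panel):
        C += A[:, s:s+panel] @ B[s:s+panel, :]   # local GEMM + accumulate
    return C
```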

  1. Stochastic simulation of human pulmonary blood flow and transit time frequency distribution based on anatomic and elasticity data.

    PubMed

    Huang, Wei; Shi, Jun; Yen, R T

    2012-12-01

    The objective of our study was to develop a computer program for calculating the transit-time frequency distributions of red blood cells in the human pulmonary circulation, based on our anatomic and elasticity data for blood vessels in the human lung. A stochastic simulation model was introduced to simulate blood flow in the human pulmonary circulation. In this model, the connectivity data of pulmonary blood vessels in the human lung were converted into a probability matrix. Based on this model, the transit time of red blood cells in the human pulmonary circulation and the output blood pressure were studied. Additionally, the stochastic simulation model can be used to predict changes of blood flow in the human pulmonary circulation, with the advantages of lower computing cost and higher flexibility. In conclusion, a stochastic simulation approach was introduced to simulate blood flow in the hierarchical structure of the pulmonary circulation system, and to calculate the transit-time distributions and blood pressure outputs.
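
    Reading the connectivity data as a row-stochastic transition matrix makes the simulation an absorbing random walk: each cell hops from vessel segment to vessel segment with the tabulated probabilities, accumulating a per-segment transit time until it reaches an outlet. A toy sketch; the matrix, time, and index names are assumptions, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_transit_times(P, seg_time, inlet, outlets, n_cells=10_000):
    """Absorbing random walk of red cells through a vessel network.

    P        : (n, n) row-stochastic matrix; P[i, j] is the probability
               that a cell in segment i moves next to segment j.
    seg_time : (n,) mean transit time spent in each vessel segment.
    inlet    : index of the arterial inlet; outlets: set of venous outlets.
    """
    times = np.empty(n_cells)
    for c in range(n_cells):
        i, t = inlet, 0.0
        while i not in outlets:
            t += seg_time[i]
            i = rng.choice(len(P), p=P[i])
        times[c] = t + seg_time[i]
    return times   # a histogram of `times` is the transit-time distribution
```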

  2. Faces of matrix models

    NASA Astrophysics Data System (ADS)

    Morozov, A.

    2012-08-01

    Partition functions of eigenvalue matrix models possess a number of very different descriptions: as matrix integrals, as solutions to linear and nonlinear equations, as τ-functions of integrable hierarchies and as special-geometry prepotentials, as the result of the action of W-operators and of various recursions on elementary input data, and as the gluing of certain elementary building blocks. All this explains the central role of such matrix models in modern mathematical physics: they provide the basic "special functions" to express the answers and relations between them, and they serve as a dream model of what one should try to achieve in any other field.

  3. Spectrophotometric Determination of the Dissociation Constant of an Acid-Base Indicator Using a Mathematical Deconvolution Technique

    ERIC Educational Resources Information Center

    Alter, Krystyn P.; Molloy, John L.; Niemeyer, Emily D.

    2005-01-01

    A laboratory experiment reinforces the concept of acid-base equilibria while introducing a common application of spectrophotometry and can easily be completed within a standard four-hour laboratory period. It provides students with an opportunity to use advanced data analysis techniques like data smoothing and spectral deconvolution to…

  4. Sequential deconvolution from wave-front sensing using bivariate simplex splines

    NASA Astrophysics Data System (ADS)

    Guo, Shiping; Zhang, Rongzhi; Li, Jisheng; Zou, Jianhua; Xu, Rong; Liu, Changhai

    2015-05-01

    Deconvolution from wave-front sensing (DWFS) is an imaging compensation technique for turbulence-degraded images based on simultaneous recording of short-exposure images and wave-front sensor data. This paper employs the multivariate splines method for sequential DWFS: a bivariate simplex splines based average-slopes measurement model is first built for the Shack-Hartmann wave-front sensor; next, a well-conditioned least squares estimator for the spline coefficients is constructed using multiple Shack-Hartmann measurements; then, the distorted wave-front is uniquely determined by the estimated spline coefficients; the object image is finally obtained by non-blind deconvolution processing. Simulated experiments at different turbulence strengths show that our method delivers superior image restoration and noise rejection, especially when extracting the multidirectional phase derivatives.

  5. SOURCE PULSE ENHANCEMENT BY DECONVOLUTION OF AN EMPIRICAL GREEN'S FUNCTION.

    USGS Publications Warehouse

    Mueller, Charles S.

    1985-01-01

    Observations of the earthquake source-time function are enhanced if path, recording-site, and instrument complexities can be removed from seismograms. Assuming that a small earthquake has a simple source, its seismogram can be treated as an empirical Green's function and deconvolved from the seismogram of a larger and/or more complex earthquake by spectral division. When the deconvolution is well posed, the quotient spectrum represents the apparent source-time function of the larger event. This study shows that with high-quality locally recorded earthquake data it is feasible to Fourier transform the quotient and obtain a useful result in the time domain. In practice, the deconvolution can be stabilized by one of several simple techniques. Application of the method is given.
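
    One of those simple stabilization techniques is the classic water-level method: the denominator spectrum is floored at a fraction of its peak power before division, so frequencies where the small event radiates little energy do not blow up the quotient. A hedged sketch; the parameter names are illustrative:

```python
import numpy as np

def waterlevel_deconvolve(big, small, level=0.01):
    """Apparent source-time function of the large event by spectral division.

    big, small : equal-length seismograms of the larger event and the
                 empirical Green's function (smaller event).
    level      : water level, as a fraction of the peak denominator power.
    """
    B = np.fft.rfft(big)
    S = np.fft.rfft(small)
    power = (S * np.conj(S)).real
    denom = np.maximum(power, level * power.max())   # fill spectral holes
    Q = B * np.conj(S) / denom
    return np.fft.irfft(Q, n=len(big))
```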

  6. Deconvolution of time series in the laboratory

    NASA Astrophysics Data System (ADS)

    John, Thomas; Pietschmann, Dirk; Becker, Volker; Wagner, Christian

    2016-10-01

    In this study, we present two practical applications of the deconvolution of time series in Fourier space. First, we reconstruct a filtered input signal of sound cards that has been heavily distorted by a built-in high-pass filter using a software approach. Using deconvolution, we can partially bypass the filter and extend the dynamic frequency range by two orders of magnitude. Second, we construct required input signals for a mechanical shaker in order to obtain arbitrary acceleration waveforms, referred to as feedforward control. For both situations, experimental and theoretical approaches are discussed to determine the system-dependent frequency response. Moreover, for the shaker, we propose a simple feedback loop as an extension to the feedforward control in order to handle nonlinearities of the system.
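
    The feedforward step amounts to an inverse filter: divide the spectrum of the desired output by the measured frequency response, with a small regularization floor so that frequencies the system barely passes are not over-driven. A minimal sketch under those assumptions; eps and the names are illustrative:

```python
import numpy as np

def feedforward_input(target, H, eps=1e-3):
    """Drive signal whose system output approximates `target`.

    H   : complex frequency response of the system (sound card or shaker),
          sampled at the np.fft.rfftfreq bins of `target`.
    eps : regularization floor relative to the peak |H|^2.
    """
    T = np.fft.rfft(target)
    X = T * np.conj(H) / (np.abs(H)**2 + eps * np.abs(H).max()**2)
    return np.fft.irfft(X, n=len(target))
```

    The proposed feedback loop then corrects the residual error left by nonlinearities that this linear pre-compensation cannot model.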

  7. Structured block copolymer thin film composites for ultra-high energy density capacitors

    NASA Astrophysics Data System (ADS)

    Samant, Saumil; Hailu, Shimelis; Grabowski, Christopher; Durstock, Michael; Raghavan, Dharmaraj; Karim, Alamgir

    2014-03-01

    Development of high energy density capacitors is essential for future applications like hybrid vehicles and directed energy weaponry. Fundamentally, energy density is governed by the product of dielectric permittivity ɛ and breakdown strength Vbd. Hence, improvements in energy density are greatly reliant on improving either ɛ or Vbd or a combination of both. Polymer films are widely used in capacitors due to high Vbd and low loss, but they suffer from very low permittivities. Composite dielectrics offer a unique opportunity to combine the high ɛ of inorganic fillers with the high Vbd of a polymer matrix. For enhancement of dielectric properties, it is essential to improve matrix-filler interaction and control the spatial distribution of fillers, for which nanostructured block copolymers (BCP) act as ideal templates. We use directed self-assembly of block copolymers to rapidly fabricate highly aligned BCP-TiO2 composite nanostructures in thin films under a dynamic thermal gradient field to synergistically combine the high ɛ of functionalized TiO2 and the high Vbd of the BCP matrix. The impact of BCP morphology, processing conditions and concentration of TiO2 on capacitor performance will be reported. U.S. Air Force Office of Scientific Research under contract FA9550-12-1-0306.

  8. A novel neutron energy spectrum unfolding code using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Shahabinejad, H.; Sohrabpour, M.

    2017-07-01

    A novel neutron Spectrum Deconvolution using Particle Swarm Optimization (SDPSO) code has been developed to unfold the neutron spectrum from a pulse-height distribution and a response matrix. Particle Swarm Optimization (PSO) imitates the social behavior of bird flocks to solve complex optimization problems. The results of the SDPSO code have been compared with the standard spectra and with the recently published Two-steps Genetic Algorithm Spectrum Unfolding (TGASU) code. The TGASU code has previously been compared with other codes such as MAXED, GRAVEL, FERDOR and GAMCD and shown to be more accurate than those codes. The results of the SDPSO code have been demonstrated to match well with those of the TGASU code for both under-determined and over-determined problems. In addition, the SDPSO code has been shown to be nearly two times faster than the TGASU code.
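
    Stripped to its core, PSO unfolding treats a candidate spectrum as a particle position and the residual of the folded spectrum against the measured pulse-height data as the fitness. A toy unfolder under those assumptions; this is not the SDPSO code itself, and all names and hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_unfold(R, y, n_particles=50, n_iter=2000, w=0.7, c1=1.5, c2=1.5):
    """Find a non-negative spectrum x minimizing ||R @ x - y||.

    R : (m, n) detector response matrix; y : (m,) pulse-height data.
    """
    n = R.shape[1]
    cost = lambda p: np.linalg.norm(R @ p - y)
    x = rng.random((n_particles, n)) * y.max()      # particle positions
    v = np.zeros_like(x)                            # particle velocities
    pbest = x.copy()
    pcost = np.array([cost(p) for p in x])
    gbest = pbest[pcost.argmin()]
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0.0, None)               # keep spectra non-negative
        c = np.array([cost(p) for p in x])
        improved = c < pcost
        pbest[improved], pcost[improved] = x[improved], c[improved]
        gbest = pbest[pcost.argmin()]
    return gbest
```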

  9. Positron Annihilation Studies of High-Tc Superconductors

    NASA Astrophysics Data System (ADS)

    Peter, M.; Manuel, A. A.

    1989-01-01

    First we present the principles involved in the study of the two-photon momentum distribution: the method requires deconvolution of the positron wavefunction and the estimation of matrix-element effects. Single-crystal samples must be of sufficient quality to avoid positron trapping (tested by positron lifetime measurements). In ordinary metals (alkalis, transition and rare earth metals and compounds) two-photon momentum distribution studies have given results in close agreement with relevant band structure calculations. Discrepancies have been successfully described as enhancement effects due to correlations. In the superconducting oxides, measurements are more difficult because there are fewer conduction electrons and more trapping. Correlation effects of a different nature are expected to be important and might render the band picture inappropriate. Two-photon momentum distribution measurements have now been made by several groups, but have been interpreted in different ways. We relate the current state of affairs, and our present interpretation, to the latest available results.

  10. Fire blocking systems for aircraft seat cushions

    NASA Technical Reports Server (NTRS)

    Parker, J. A.; Kourtides, D. A. (Inventor)

    1984-01-01

    A configuration and method for reducing the flammability of bodies of organic materials that thermally decompose to give flammable gases comprises covering the body with a flexible matrix that catalytically cracks the flammable gases to less flammable species. Optionally, the matrix is covered with a gas impermeable outer layer. In a preferred embodiment, the invention takes the form of an aircraft seat in which the body is a poly(urethane) seat cushion, the matrix is an aramid fabric or felt and the outer layer is an aluminum film.

  11. Second level semi-degenerate fields in W_3 Toda theory: matrix element and differential equation

    NASA Astrophysics Data System (ADS)

    Belavin, Vladimir; Cao, Xiangyu; Estienne, Benoit; Santachiara, Raoul

    2017-03-01

    In a recent study we considered W_3 Toda 4-point functions that involve matrix elements of a primary field with the highest-weight in the adjoint representation of sl_3 . We generalize this result by considering a semi-degenerate primary field, which has one null vector at level two. We obtain a sixth-order Fuchsian differential equation for the conformal blocks. We discuss the presence of multiplicities, the matrix elements and the fusion rules.

  12. Controlled Breast Cancer Microarrays for the Deconvolution of Cellular Multilayering and Density Effects upon Drug Responses

    PubMed Central

    Håkanson, Maria; Kobel, Stefan; Lutolf, Matthias P.; Textor, Marcus; Cukierman, Edna; Charnley, Mirren

    2012-01-01

    Background Increasing evidence shows that the cancer microenvironment affects both tumorigenesis and the response of cancer to drug treatment. Therefore in vitro models that selectively reflect characteristics of the in vivo environment are greatly needed. Current methods allow us to screen the effect of extrinsic parameters such as matrix composition and to model the complex and three-dimensional (3D) cancer environment. However, 3D models that reflect characteristics of the in vivo environment are typically too complex and do not allow the separation of discrete extrinsic parameters. Methodology/Principal Findings In this study we used a poly(ethylene glycol) (PEG) hydrogel-based microwell array to model breast cancer cell behavior in multilayer cell clusters that allows a rigorous control of the environment. The innovative array fabrication enables different matrix proteins to be integrated into the bottom surface of microwells. Thereby, extrinsic parameters including dimensionality, type of matrix coating and the extent of cell-cell adhesion could be independently studied. Our results suggest that cell to matrix interactions and increased cell-cell adhesion, at high cell density, induce independent effects on the response to Taxol in multilayer breast cancer cell clusters. In addition, comparing the levels of apoptosis and proliferation revealed that drug resistance mediated by cell-cell adhesion can be related to altered cell cycle regulation. Conversely, the matrix-dependent response to Taxol did not correlate with proliferation changes suggesting that cell death inhibition may be responsible for this effect. Conclusions/Significance The application of the PEG hydrogel platform provided novel insight into the independent role of extrinsic parameters controlling drug response. The presented platform may not only become a useful tool for basic research related to the role of the cancer microenvironment but could also serve as a complementary platform for in vitro drug development. PMID:22792141

  13. Ejecta distribution patterns at Meteor Crater, Arizona: On the applicability of lithologic end-member deconvolution for spaceborne thermal infrared data of Earth and Mars

    NASA Astrophysics Data System (ADS)

    Ramsey, Michael S.

    2002-08-01

    A spectral deconvolution using a constrained least squares approach was applied to airborne thermal infrared multispectral scanner (TIMS) data of Meteor Crater, Arizona. The three principal sedimentary units sampled by the impact were chosen as end-members, and their spectra were derived from the emissivity images. To validate previous estimates of the erosion of the near-rim ejecta, the model was used to identify the areal extent of the reworked material. The outputs of the algorithm revealed subtle mixing patterns in the ejecta, identified larger ejecta blocks, and were used to further constrain the volume of Coconino Sandstone present in the vicinity of the crater. The availability of the multialtitude data set also provided a means to examine the effects of resolution degradation and quantify the subsequent errors on the model. These data served as a test case for the use of image-derived lithologic end-members at various scales, which is critical for examining thermal infrared data of planetary surfaces. The model results indicate that the Coconino Ss. reworked ejecta is detectable over 3 km from the crater. This was confirmed by field sampling within the primary ejecta field and wind streak. The areal distribution patterns of this unit imply past erosion and subsequent sediment transport that was low to moderate compared with early studies and therefore place further constraints on the ejecta degradation of Meteor Crater. It also provides an important example of the analysis that can be performed on thermal infrared data currently being returned from Earth orbit and expected from Mars in 2002.
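
    Lithologic end-member deconvolution of this kind is linear spectral unmixing: each pixel's emissivity spectrum is modeled as a non-negative, approximately sum-to-one combination of the end-member spectra, fitted by constrained least squares. A minimal per-pixel sketch; the end-member matrix and names are illustrative:

```python
import numpy as np
from scipy.optimize import lsq_linear

def unmix_pixel(spectrum, endmembers):
    """Constrained least squares unmixing of one emissivity spectrum.

    spectrum   : (n_bands,) measured pixel emissivity.
    endmembers : (n_bands, n_units) image-derived end-member spectra.
    Returns areal fractions in [0, 1], renormalized toward sum-to-one.
    """
    res = lsq_linear(endmembers, spectrum, bounds=(0.0, 1.0))
    f = res.x
    return f / f.sum() if f.sum() > 0 else f
```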

  14. Adsorption of copolymers at polymer/air and polymer/solid interfaces

    NASA Astrophysics Data System (ADS)

    Oslanec, Robert

    Using mainly low-energy forward recoil spectrometry (LE-FRES) and neutron reflectivity (NR), copolymer behavior at polymer/air and polymer/solid interfaces is investigated. For a miscible blend of poly(styrene-ran-acrylonitrile) copolymers, the volume fraction profile of the copolymer with lower acrylonitrile content is flat near the surface, in contrast to mean field predictions. Including copolymer polydispersity in a self-consistent mean field (SCMF) model does not account for this profile shape. LE-FRES and NR are also used to study poly(deuterated styrene-block-methyl-methacrylate) (dPS-b-PMMA) adsorption from a polymer matrix to a silicon oxide substrate. The interfacial excess, z*, layer thickness, L, and layer-matrix width, w, depend strongly on the number of matrix segments, P, for P < 2N; for P > 2N, the matrix chains are repelled from the adsorbed layer and the layer characteristics become independent of P. An SCMF model of block copolymer adsorption is developed. SCMF predictions are in qualitative agreement with the experimental behavior of z*, L, and w as a function of P. Using this model, the interaction energy of the MMA block with the oxide substrate is found to be -8 k_BT. In a subsequent experiment, the matrix/dPS interaction is made increasingly unfavorable by increasing the 4-bromostyrene mole fraction, x, in a poly(styrene-ran-4-bromostyrene) (PBr_xS) matrix. Whereas experiments show that z* slightly decreases as x increases, the SCMF model predicts that z* should increase as the matrix becomes more unfavorable. Upon including a small matrix attraction for the substrate, the SCMF model shows that z* decreases with x because of competition between PBr_xS and dPS-b-PMMA for adsorbing sites. In thin film dewetting experiments on silicon oxide, the addition of dPS-b-PMMA to PS coatings acts to slow hole growth and prevent holes from impinging. Dewetting studies show that longer dPS-b-PMMA chains are more effective stabilizing agents than shorter ones and that 3 volume percent dPS-b-PMMA is the optimum additive concentration for this system. For a dPS-b-PMMA:PS blend, atomic force microscopy of the hole floor reveals mounds of residual polymer and a modulated contact line where the rim meets the substrate.

  15. Interactive Block Games for Assessing Children's Cognitive Skills: Design and Preliminary Evaluation.

    PubMed

    Lee, Kiju; Jeong, Donghwa; Schindler, Rachael C; Hlavaty, Laura E; Gross, Susan I; Short, Elizabeth J

    2018-01-01

    Background: This paper presents design and results from preliminary evaluation of Tangible Geometric Games (TAG-Games) for cognitive assessment in young children. The TAG-Games technology employs a set of sensor-integrated cube blocks, called SIG-Blocks, and graphical user interfaces for test administration and real-time performance monitoring. TAG-Games were administered to children from 4 to 8 years of age for evaluating preliminary efficacy of this new technology-based approach. Methods: Five different sets of SIG-Blocks, comprising geometric shapes, segmented human faces, segmented animal faces, emoticons, and colors, were used for three types of TAG-Games, including Assembly, Shape Matching, and Sequence Memory. Computational task difficulty measures were defined for each game and used to generate items with varying difficulty. For preliminary evaluation, TAG-Games were tested on 40 children. To explore the clinical utility of the information assessed by TAG-Games, three subtests of the age-appropriate Wechsler tests (i.e., Block Design, Matrix Reasoning, and Picture Concept) were also administered. Results: Internal consistency of TAG-Games was evaluated by the split-half reliability test. Weak to moderate correlations between Assembly and Block Design, Shape Matching and Matrix Reasoning, and Sequence Memory and Picture Concept were found. The computational measure of task complexity for each TAG-Game showed a significant correlation with participants' performance. In addition, age-correlations on TAG-Game scores were found, implying their potential use for assessing children's cognitive skills autonomously.

  17. Efficient volumetric estimation from plenoptic data

    NASA Astrophysics Data System (ADS)

    Anglin, Paul; Reeves, Stanley J.; Thurow, Brian S.

    2013-03-01

    The commercial release of the Lytro camera, and greater availability of plenoptic imaging systems in general, have given the image processing community cost-effective tools for light-field imaging. While this data is most commonly used to generate planar images at arbitrary focal depths, reconstruction of volumetric fields is also possible. Similarly, deconvolution is a technique that is conventionally used in planar image reconstruction, or deblurring, algorithms. However, when leveraged with the ability of a light-field camera to quickly reproduce multiple focal planes within an imaged volume, deconvolution offers a computationally efficient method of volumetric reconstruction. Related research has shown that light-field imaging systems in conjunction with tomographic reconstruction techniques are also capable of estimating the imaged volume and have been successfully applied to particle image velocimetry (PIV). However, while tomographic volumetric estimation through algorithms such as the multiplicative algebraic reconstruction technique (MART) has proven to be highly accurate, it is computationally intensive. In this paper, the reconstruction problem is shown to be solvable by deconvolution. Deconvolution offers significant improvement in computational efficiency through the use of fast Fourier transforms (FFTs) when compared to other tomographic methods. This work describes a deconvolution algorithm designed to reconstruct a 3-D particle field from simulated plenoptic data. A 3-D extension of existing 2-D FFT-based refocusing techniques is presented to further improve efficiency when computing object focal stacks and system point spread functions (PSF). Reconstruction artifacts are identified; their underlying source and methods of mitigation are explored where possible, and reconstructions of simulated particle fields are provided.

  18. Fast Fourier-based deconvolution for three-dimensional acoustic source identification with solid spherical arrays

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Chu, Zhigang; Shen, Linbang; Ping, Guoli; Xu, Zhongming

    2018-07-01

    Fourier-based deconvolution can quickly sharpen acoustic source identification results, and it has accordingly been studied and applied widely for delay-and-sum (DAS) beamforming with two-dimensional (2D) planar arrays. It has, however, so far not been developed in the context of spherical harmonics beamforming (SHB) with three-dimensional (3D) solid spherical arrays. This paper is motivated to settle this problem. Firstly, for the purpose of determining the effective identification region, the premise of deconvolution, a shift-invariant point spread function (PSF), is analyzed with simulations. To satisfy this premise approximately, the opening angle in the elevation dimension of the surface of interest should be small, while no restriction is imposed on the azimuth dimension. Then, two kinds of deconvolution theories are built for SHB using the zero and the periodic boundary conditions, respectively. Both simulations and experiments demonstrate that the periodic boundary condition is superior to the zero one and fits the 3D acoustic source identification with solid spherical arrays better. Finally, four periodic boundary condition based deconvolution methods are formulated, and their performance is disclosed both with simulations and experimentally. All four methods offer enhanced spatial resolution and reduced sidelobe contamination over SHB. The recovered source strength approximates the exact one multiplied by a coefficient that is the square of the focus distance divided by the distance from the source to the array center, while the recovered pressure contribution is scarcely affected by the focus distance, always approximating the exact one.

  19. Detection of increased vasa vasorum in artery walls: improving CT number accuracy using image deconvolution

    NASA Astrophysics Data System (ADS)

    Rajendran, Kishore; Leng, Shuai; Jorgensen, Steven M.; Abdurakhimova, Dilbar; Ritman, Erik L.; McCollough, Cynthia H.

    2017-03-01

    Changes in arterial wall perfusion are an indicator of early atherosclerosis. This is characterized by an increased spatial density of vasa vasorum (VV), the micro-vessels that supply oxygen and nutrients to the arterial wall. Detection of increased VV during contrast-enhanced computed tomography (CT) imaging is limited due to contamination from the blooming effect of the contrast-enhanced lumen. We report the application of an image deconvolution technique, using a measured system point-spread function, on CT data obtained from a photon-counting CT system to reduce blooming and to improve the CT number accuracy of the arterial wall, which enhances detection of increased VV. A phantom study was performed to assess the accuracy of the deconvolution technique. A porcine model was created with enhanced VV in one carotid artery; the other carotid artery served as a control. CT images at an energy range of 25-120 keV were reconstructed. CT numbers were measured at multiple locations in the carotid walls and at multiple time points, pre- and post-contrast injection. The mean CT number in the carotid wall was compared between the left (increased VV) and right (control) carotid arteries. Prior to deconvolution, results showed similar mean CT numbers in the left and right carotid walls due to contamination from the blooming effect, limiting the detection of increased VV in the left carotid artery. After deconvolution, the mean CT number difference between the left and right carotid arteries was substantially increased at all time points, enabling detection of the increased VV in the artery wall.

  20. Preconditioned conjugate gradient wave-front reconstructors for multiconjugate adaptive optics

    NASA Astrophysics Data System (ADS)

    Gilles, Luc; Ellerbroek, Brent L.; Vogel, Curtis R.

    2003-09-01

    Multiconjugate adaptive optics (MCAO) systems with 10^4-10^5 degrees of freedom have been proposed for future giant telescopes. Using standard matrix methods to compute, optimize, and implement wave-front control algorithms for these systems is impractical, since the number of calculations required to compute and apply the reconstruction matrix scales respectively with the cube and the square of the number of adaptive optics degrees of freedom. We develop scalable open-loop iterative sparse matrix implementations of minimum variance wave-front reconstruction for telescope diameters up to 32 m with more than 10^4 actuators. The basic approach is the preconditioned conjugate gradient method with an efficient preconditioner, whose block structure is defined by the atmospheric turbulent layers, very much like the layer-oriented MCAO algorithms of current interest. Two cost-effective preconditioners are investigated: a multigrid solver and a simpler block symmetric Gauss-Seidel (BSGS) sweep. Both options require off-line sparse Cholesky factorizations of the diagonal blocks of the matrix system. The cost to precompute these factors scales approximately as the three-halves power of the number of estimated phase grid points per atmospheric layer, and their average update rate is typically of the order of 10^-2 Hz, i.e., 4-5 orders of magnitude lower than the typical 10^3 Hz temporal sampling rate. All other computations scale almost linearly with the total number of estimated phase grid points. We present numerical simulation results to illustrate algorithm convergence. Convergence rates of both preconditioners are similar, regardless of measurement noise level, indicating that the layer-oriented BSGS sweep is as effective as the more elaborate multiresolution preconditioner.
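
    The driver behind both preconditioner options is the standard preconditioned conjugate gradient iteration, which needs only a matrix-vector product with the sparse system and one preconditioner solve per iteration. A generic sketch, where the callables stand in for the sparse operator and for a BSGS sweep or multigrid V-cycle:

```python
import numpy as np

def pcg(A, b, M_solve, x0=None, tol=1e-6, maxiter=200):
    """Preconditioned conjugate gradients for a symmetric positive definite A.

    A       : callable applying the (sparse) matrix-vector product.
    M_solve : callable applying the preconditioner inverse, e.g. one block
              symmetric Gauss-Seidel sweep or one multigrid V-cycle.
    """
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A(x)
    z = M_solve(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = M_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```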

  1. New algorithms for field-theoretic block copolymer simulations: Progress on using adaptive-mesh refinement and sparse matrix solvers in SCFT calculations

    NASA Astrophysics Data System (ADS)

    Sides, Scott; Jamroz, Ben; Crockett, Robert; Pletzer, Alexander

    2012-02-01

    Self-consistent field theory (SCFT) for dense polymer melts has been highly successful in describing complex morphologies in block copolymers. Field-theoretic simulations such as these are able to access large length and time scales that are difficult or impossible for particle-based simulations such as molecular dynamics. The modified diffusion equations that arise as a consequence of the coarse-graining procedure in the SCF theory can be efficiently solved with a pseudo-spectral (PS) method that uses fast Fourier transforms on uniform Cartesian grids. However, PS methods can be difficult to apply in many block copolymer SCFT simulations (e.g., confinement, interface adsorption) in which small spatial regions might require finer resolution than most of the simulation grid. Progress on using new solver algorithms to address these problems will be presented. The Tech-X Chompst project aims at marrying the best of adaptive mesh refinement with linear matrix solver algorithms. The Tech-X code PolySwift++ is an SCFT simulation platform that leverages ongoing development in coupling Chombo, a package for solving PDEs via block-structured AMR calculations and embedded boundaries, with PETSc, a toolkit that includes a large assortment of sparse linear solvers.
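
    For reference, the uniform-grid pseudo-spectral update the abstract alludes to is an operator-splitting scheme for the modified diffusion equation dq/ds = ∇²q - w q: half a potential step in real space, a full Laplacian step in Fourier space, then the second potential half-step. A minimal periodic-grid sketch; the grid and field names are assumptions:

```python
import numpy as np

def diffusion_step(q, w, ds, k2):
    """One pseudo-spectral operator-splitting step of dq/ds = lap(q) - w*q.

    q, w : (Nx, Ny, Nz) chain propagator and self-consistent field
    k2   : squared wavenumbers on the same grid, e.g. built from
           2 * np.pi * np.fft.fftfreq(N, dx) along each axis
    """
    q = np.exp(-0.5 * ds * w) * q                             # real-space half-step
    q = np.fft.ifftn(np.exp(-ds * k2) * np.fft.fftn(q)).real  # Fourier step
    return np.exp(-0.5 * ds * w) * q                          # second half-step
```

    The reliance on FFTs is exactly why this scheme wants a uniform periodic grid, and why AMR-plus-sparse-solver alternatives are attractive for confinement and adsorption problems.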

  2. Ultrasound-Mediated Tumor Imaging and Nanotherapy using Drug Loaded, Block Copolymer Stabilized Perfluorocarbon Nanoemulsions

    PubMed Central

    Rapoport, Natalya; Nam, Kweon-Ho; Gupta, Roohi; Gao, Zhongao; Mohan, Praveena; Payne, Allison; Todd, Nick; Liu, Xin; Kim, Taeho; Shea, Jill; Scaife, Courtney; Parker, Dennis L.; Jeong, Eun-Kee; Kennedy, Anne M.

    2011-01-01

    Perfluorocarbon nanoemulsions can deliver lipophilic therapeutic agents to solid tumors and simultaneously provide for monitoring nanocarrier biodistribution via ultrasonography and/or 19F MRI. In the first generation of block copolymer stabilized perfluorocarbon nanoemulsions, perfluoropentane (PFP) was used as the droplet-forming compound. Although manifesting excellent therapeutic and ultrasound imaging properties, PFP nanoemulsions were unstable in storage, difficult to handle, and underwent a hard-to-control, irreversible droplet-to-bubble transition upon injection. To solve the above problems, perfluoro-15-crown-5-ether (PFCE) was used as the core-forming compound in the second generation of block copolymer stabilized perfluorocarbon nanoemulsions. PFCE nanodroplets manifest both ultrasound and fluorine (19F) MR contrast properties, which allows the use of multimodal imaging and 19F MR spectroscopy for monitoring nanodroplet pharmacokinetics and biodistribution. In the present paper, acoustic, imaging, and therapeutic properties of unloaded and paclitaxel (PTX) loaded PFCE nanoemulsions are reported. As shown by 19F MR spectroscopy, PFCE nanodroplets are long circulating, with about 50% of the injected dose remaining in circulation two hours after systemic injection. Sonication with 1-MHz therapeutic ultrasound triggered a reversible droplet-to-bubble transition in PFCE nanoemulsions. Microbubbles formed by acoustic vaporization of nanodroplets underwent stable cavitation. The nanodroplet size (200 nm to 350 nm, depending on the type of shell and conditions of emulsification) as well as long residence in circulation favored their passive accumulation in tumor tissue, which was confirmed by ultrasonography. In breast and pancreatic cancer animal models, ultrasound-mediated therapy with paclitaxel-loaded PFCE nanoemulsions showed excellent therapeutic properties characterized by tumor regression and suppression of metastasis. Anticipated mechanisms of the observed effects are discussed. PMID:21277919

  3. High-flexibility combinatorial peptide synthesis with laser-based transfer of monomers in solid matrix material.

    PubMed

    Loeffler, Felix F; Foertsch, Tobias C; Popov, Roman; Mattes, Daniela S; Schlageter, Martin; Sedlmayr, Martyna; Ridder, Barbara; Dang, Florian-Xuan; von Bojničić-Kninski, Clemens; Weber, Laura K; Fischer, Andrea; Greifenstein, Juliane; Bykovskaya, Valentina; Buliev, Ivan; Bischoff, F Ralf; Hahn, Lothar; Meier, Michael A R; Bräse, Stefan; Powell, Annie K; Balaban, Teodor Silviu; Breitling, Frank; Nesterov-Mueller, Alexander

    2016-06-14

    Laser writing is used to structure surfaces in many different ways in materials and life sciences. However, combinatorial patterning applications are still limited. Here we present a method for cost-efficient combinatorial synthesis of very-high-density peptide arrays with natural and synthetic monomers. A laser automatically transfers nanometre-thin solid material spots from different donor slides to an acceptor. Each donor bears a thin polymer film, embedding one type of monomer. Coupling occurs in a separate heating step, where the matrix becomes viscous and building blocks diffuse and couple to the acceptor surface. Furthermore, we can consecutively deposit two material layers of activation reagents and amino acids. Subsequent heat-induced mixing facilitates an in situ activation and coupling of the monomers. This allows us to incorporate building blocks with click chemistry compatibility or a large variety of commercially available non-activated, for example, posttranslationally modified building blocks into the array's peptides with >17,000 spots per cm².

  4. VizieR Online Data Catalog: Spatial deconvolution code (Quintero Noda+, 2015)

    NASA Astrophysics Data System (ADS)

    Quintero Noda, C.; Asensio Ramos, A.; Orozco Suarez, D.; Ruiz Cobo, B.

    2015-05-01

    This deconvolution method follows the scheme presented in Ruiz Cobo & Asensio Ramos (2013A&A...549L...4R). The Stokes parameters are projected onto a few spectral eigenvectors and the ensuing maps of coefficients are deconvolved using a standard Lucy-Richardson algorithm. This introduces a stabilization because the PCA filtering reduces the amount of noise. (1 data file).
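
    The workhorse here is the standard Richardson-Lucy iteration, applied to each PCA coefficient map rather than to every wavelength image separately. A minimal 2-D sketch; the PSF, iteration count, and projection comments are assumptions about usage, not the catalog code itself:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30):
    """Plain Richardson-Lucy deconvolution of a non-negative 2-D map."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    est = np.full(image.shape, image.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(est, psf, mode='same')
        est *= fftconvolve(image / (blurred + 1e-12), psf_mirror, mode='same')
    return est

# Scheme sketch: coeff_map[k] = sum over wavelength of stokes * eigvec[k];
# deconvolve each coefficient map with the telescope PSF, then re-project
# onto the eigenbasis to recover the stabilized Stokes profiles.
```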

  5. 3D image restoration for confocal microscopy: toward a wavelet deconvolution for the study of complex biological structures

    NASA Astrophysics Data System (ADS)

    Boutet de Monvel, Jacques; Le Calvez, Sophie; Ulfendahl, Mats

    2000-05-01

    Image restoration algorithms provide efficient tools for recovering part of the information lost in the imaging process of a microscope. We describe recent progress in the application of deconvolution to confocal microscopy. The point spread function of a Biorad-MRC1024 confocal microscope was measured under various imaging conditions, and used to process 3D confocal images acquired in an intact preparation of the inner ear developed at Karolinska Institutet. Using these experiments we investigate the application of denoising methods based on wavelet analysis as a natural regularization of the deconvolution process. Within the Bayesian approach to image restoration, we compare wavelet denoising with the use of a maximum entropy constraint as another natural regularization method. Numerical experiments performed with test images show a clear advantage of the wavelet denoising approach, allowing one to 'cool down' the image with respect to the signal, while suppressing much of the fine-scale artifacts that appear during deconvolution due to the presence of noise, incomplete knowledge of the point spread function, or undersampling problems. We further describe a natural development of this approach, which consists of performing the Bayesian inference directly in the wavelet domain.

  6. A method to measure the presampling MTF in digital radiography using Wiener deconvolution

    NASA Astrophysics Data System (ADS)

    Zhou, Zhongxing; Zhu, Qingzhen; Gao, Feng; Zhao, Huijuan; Zhang, Lixin; Li, Guohui

    2013-03-01

    We developed a novel method for determining the presampling modulation transfer function (MTF) of digital radiography systems from slanted edge images based on Wiener deconvolution. The degraded supersampled edge spread function (ESF) was obtained from simulated slanted edge images with known MTF in the presence of Poisson noise, and its corresponding ideal ESF without degradation was constructed according to its central edge position. To meet the absolute integrability condition of the Fourier transform, the original ESFs were mirrored to construct a symmetric pattern of ESFs. Then, based on the Wiener deconvolution technique, the supersampled line spread function (LSF) could be acquired from the symmetric pattern of degraded supersampled ESFs given the ideal symmetric ESFs and the system noise. The MTF is then the normalized magnitude of the Fourier transform of the LSF. The determined MTF showed strong agreement with the theoretical true MTF when an appropriate Wiener parameter was chosen. The effects of the Wiener parameter value and the width of the square-like wave peak in the symmetric ESFs are illustrated and discussed. In conclusion, an accurate and simple method to measure the presampling MTF was established using the Wiener deconvolution technique with slanted edge images.
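
    In one dimension the estimation step reduces to a few lines: Wiener-deconvolve the degraded ESF by the ideal ESF to recover the LSF, then normalize the magnitude of its Fourier transform. A sketch under those assumptions, where k plays the role of the Wiener parameter and the names are illustrative:

```python
import numpy as np

def wiener_lsf_mtf(degraded_esf, ideal_esf, k=1e-2):
    """Estimate the LSF blurring `ideal_esf` into `degraded_esf`, then the MTF.

    Both inputs are the symmetrized, supersampled ESFs described above;
    `k` stands in for the noise-to-signal power ratio.
    """
    D = np.fft.rfft(degraded_esf)
    I = np.fft.rfft(ideal_esf)
    H = D * np.conj(I) / (np.abs(I)**2 + k)     # Wiener transfer estimate
    lsf = np.fft.irfft(H, n=len(degraded_esf))
    mtf = np.abs(np.fft.rfft(lsf))
    return lsf, mtf / mtf[0]                    # normalized presampling MTF
```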

  7. Deconvolution of interferometric data using interior point iterative algorithms

    NASA Astrophysics Data System (ADS)

    Theys, C.; Lantéri, H.; Aime, C.

    2016-09-01

    We address the problem of deconvolution of astronomical images that could be obtained with future large interferometers in space. The presentation is made in two complementary parts. The first part gives an introduction to image deconvolution with linear and nonlinear algorithms. The emphasis is on nonlinear iterative algorithms that verify the constraints of non-negativity and constant flux. The Richardson-Lucy algorithm appears there as a special case for photon counting conditions. More generally, the algorithm published recently by Lanteri et al. (2015) is based on scale invariant divergences without assumption on the statistical model of the data. The two proposed algorithms are interior-point algorithms, the latter being more efficient in terms of speed of calculation. These algorithms are applied to the deconvolution of simulated images corresponding to an interferometric system of 16 diluted telescopes in space. Two non-redundant configurations, one disposed around a circle and the other on a hexagonal lattice, are compared for their effectiveness on a simple astronomical object. The comparison is made in the direct and Fourier spaces. Raw "dirty" images have many artifacts due to replicas of the original object. Linear methods cannot remove these replicas while iterative methods clearly show their efficacy in these examples.

  8. Single-Ion Deconvolution of Mass Peak Overlaps for Atom Probe Microscopy.

    PubMed

    London, Andrew J; Haley, Daniel; Moody, Michael P

    2017-04-01

    Due to the intrinsic evaporation properties of the material studied, insufficient mass-resolving power and lack of knowledge of the kinetic energy of incident ions, peaks in the atom probe mass-to-charge spectrum can overlap and result in incorrect composition measurements. Contributions to these peak overlaps can be deconvoluted globally, by simply examining adjacent peaks combined with knowledge of natural isotopic abundances. However, this strategy does not account for the fact that the relative contributions to this convoluted signal can often vary significantly in different regions of the analysis volume; e.g., across interfaces and within clusters. Some progress has been made with spatially localized deconvolution in cases where the discrete microstructural regions can be easily identified within the reconstruction, but this means no further point cloud analyses are possible. Hence, we present an ion-by-ion methodology where the identity of each ion, normally obscured by peak overlap, is resolved by examining the isotopic abundance of their immediate surroundings. The resulting peak-deconvoluted data are a point cloud and can be analyzed with any existing tools. We present two detailed case studies and discussion of the limitations of this new technique.

  9. Image deblurring by motion estimation for remote sensing

    NASA Astrophysics Data System (ADS)

    Chen, Yueting; Wu, Jiagu; Xu, Zhihai; Li, Qi; Feng, Huajun

    2010-08-01

    The imagery resolution of imaging systems for remote sensing is often limited by image degradation resulting from unwanted motion disturbances of the platform during image exposures. Since the form of the platform vibration can be arbitrary, the lack of a priori knowledge about the motion function (the PSF) suggests blind restoration approaches. A deblurring method that combines motion estimation and image deconvolution, for both area-array and TDI remote sensing, is proposed in this paper. The image motion estimation is accomplished by an auxiliary high-speed detector and a sub-pixel correlation algorithm. The PSF is then reconstructed from the estimated image motion vectors. Eventually, the clear image can be recovered by the Richardson-Lucy (RL) iterative deconvolution algorithm from the blurred image of the prime camera with the constructed PSF. The image deconvolution for the area-array detector is direct, while for the TDI CCD detector an integral distortion compensation step and a row-by-row deconvolution scheme are applied. Theoretical analyses and experimental results show that the performance of the proposed concept is convincing. Blurred and distorted images can be properly recovered, not only for visual observation but also with significant objective evaluation increment.

  10. Comparison of active-set method deconvolution and matched-filtering for derivation of an ultrasound transit time spectrum.

    PubMed

    Wille, M-L; Zapf, M; Ruiter, N V; Gemmeke, H; Langton, C M

    2015-06-21

    The quality of ultrasound computed tomography imaging is primarily determined by the accuracy of ultrasound transit time measurement. A major problem in analysis is the overlap of signals making it difficult to detect the correct transit time. The current standard is to apply a matched-filtering approach to the input and output signals. This study compares the matched-filtering technique with active set deconvolution to derive a transit time spectrum from a coded excitation chirp signal and the measured output signal. The ultrasound wave travels in a direct and a reflected path to the receiver, resulting in an overlap in the recorded output signal. The matched-filtering and deconvolution techniques were applied to determine the transit times associated with the two signal paths. Both techniques were able to detect the two different transit times; while matched-filtering has a better accuracy (0.13 μs versus 0.18 μs standard deviations), deconvolution has a 3.5 times improved side-lobe to main-lobe ratio. A higher side-lobe suppression is important to further improve image fidelity. These results suggest that a future combination of both techniques would provide improved signal detection and hence improved image fidelity.
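
    The matched-filtering baseline is compact enough to sketch: cross-correlate the received signal with the transmitted chirp and read the direct and reflected transit times off the two dominant correlation peaks. The threshold, peak spacing, and names below are illustrative:

```python
import numpy as np
from scipy.signal import correlate, find_peaks

def transit_times(received, chirp, fs, n_paths=2):
    """Matched-filter transit-time estimates (seconds) for n_paths arrivals."""
    xc = np.abs(correlate(received, chirp, mode='full'))
    lags = np.arange(-len(chirp) + 1, len(received)) / fs   # lag axis, seconds
    peaks, props = find_peaks(xc, height=0.2 * xc.max(),
                              distance=max(1, len(chirp) // 4))
    top = np.argsort(props['peak_heights'])[::-1][:n_paths]
    return np.sort(lags[peaks[top]])
```

    Deconvolution replaces the correlation by an inverse problem for a full transit time spectrum, which is what buys the reported 3.5-fold side-lobe suppression at a modest cost in timing accuracy.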

  11. Chemometric Data Analysis for Deconvolution of Overlapped Ion Mobility Profiles

    NASA Astrophysics Data System (ADS)

    Zekavat, Behrooz; Solouki, Touradj

    2012-11-01

    We present the details of a data analysis approach for deconvolution of overlapped or unresolved ion mobility (IM) species. This approach takes advantage of the ion fragmentation variations as a function of the IM arrival time. The data analysis involves the use of an in-house developed data preprocessing platform for the conversion of the original post-IM/collision-induced dissociation mass spectrometry (post-IM/CID MS) data to a Matlab compatible format for chemometric analysis. We show that principal component analysis (PCA) can be used to examine the post-IM/CID MS profiles for the presence of mobility-overlapped species. Subsequently, using an interactive self-modeling mixture analysis technique, we show how to calculate the total IM spectrum (TIMS) and CID mass spectrum for each component of the IM overlapped mixtures. Moreover, we show that PCA and IM deconvolution techniques provide complementary results to evaluate the validity of the calculated TIMS profiles. We use two binary mixtures with overlapping IM profiles, including (1) a mixture of two non-isobaric peptides (neurotensin (RRPYIL) and a hexapeptide (WHWLQL)), and (2) an isobaric sugar isomer mixture of raffinose and maltotriose, to demonstrate the applicability of the IM deconvolution.

  12. Evaluation of uncertainty for regularized deconvolution: A case study in hydrophone measurements.

    PubMed

    Eichstädt, S; Wilkens, V

    2017-06-01

    An estimation of the measurand in dynamic metrology usually requires a deconvolution based on a dynamic calibration of the measuring system. Since deconvolution is, mathematically speaking, an ill-posed inverse problem, some kind of regularization is required to render the problem stable and obtain usable results. Many approaches to regularized deconvolution exist in the literature, but the corresponding evaluation of measurement uncertainties is, in general, an unsolved issue. In particular, the uncertainty contribution of the regularization itself is a topic of great importance, because it has a significant impact on the estimation result. Here, a versatile approach is proposed to express prior knowledge about the measurand based on a flexible, low-dimensional modeling of an upper bound on the magnitude spectrum of the measurand. This upper bound allows the derivation of an uncertainty associated with the regularization method in line with the guidelines in metrology. As a case study for the proposed method, hydrophone measurements in medical ultrasound with an acoustic working frequency of up to 7.5 MHz are considered, but the approach is applicable for all kinds of estimation methods in dynamic metrology, where regularization is required and which can be expressed as a multiplication in the frequency domain.

  13. Designing a stable feedback control system for blind image deconvolution.

    PubMed

    Cheng, Shichao; Liu, Risheng; Fan, Xin; Luo, Zhongxuan

    2018-05-01

    Blind image deconvolution is one of the main low-level vision problems with wide applications. Many previous works manually design regularization to simultaneously estimate the latent sharp image and the blur kernel under a maximum a posteriori framework. However, it has been demonstrated that such joint estimation strategies may lead to the undesired trivial solution. In this paper, we present a novel perspective, using a stable feedback control system, to simulate the latent sharp image propagation. The controller of our system consists of regularization and guidance, which decide the sparsity and sharp features of the latent image, respectively. Furthermore, the formation model of the blurred image is introduced into the feedback process to prevent the image restoration from deviating from the stable point. The stability analysis of the system indicates that the latent image propagation in the blind deconvolution task can be efficiently estimated and controlled by cues and priors. Thus the kernel estimation used for image restoration becomes more precise. Experimental results show that our system is effective on image propagation, and can perform favorably against the state-of-the-art blind image deconvolution methods on different benchmark image sets and special blurred images.

  14. A computational method for solving stochastic Itô–Volterra integral equations based on stochastic operational matrix for generalized hat basis functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heydari, M.H., E-mail: heydari@stu.yazd.ac.ir; The Laboratory of Quantum Information Processing, Yazd University, Yazd; Hooshmandasl, M.R., E-mail: hooshmandasl@yazd.ac.ir

    2014-08-01

    In this paper, a new computational method based on the generalized hat basis functions is proposed for solving stochastic Itô–Volterra integral equations. In this way, a new stochastic operational matrix for generalized hat functions on the finite interval [0,T] is obtained. By using these basis functions and their stochastic operational matrix, such problems can be transformed into linear lower triangular systems of algebraic equations which can be directly solved by forward substitution. Also, the rate of convergence of the proposed method is considered and it has been shown that it is O(1/n²). Further, in order to show the accuracy and reliability of the proposed method, the new approach is compared with the block pulse functions method by some examples. The obtained results reveal that the proposed method is more accurate and efficient in comparison with the block pulse functions method.
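
    The payoff of the lower-triangular structure is that the whole transformed system is solved in one forward-substitution sweep, at O(n²) cost with no factorization. A generic sketch, equivalent to scipy.linalg.solve_triangular(L, b, lower=True):

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L x = b for lower-triangular L, as produced by the
    hat-function operational-matrix formulation."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x
```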

  15. Free energy calculations: an efficient adaptive biasing potential method.

    PubMed

    Dickson, Bradley M; Legoll, Frédéric; Lelièvre, Tony; Stoltz, Gabriel; Fleurat-Lessard, Paul

    2010-05-06

    We develop an efficient sampling and free energy calculation technique within the adaptive biasing potential (ABP) framework. By mollifying the density of states we obtain an approximate free energy and an adaptive bias potential that is computed directly from the population along the coordinates of the free energy. Because of the mollifier, the bias potential is "nonlocal", and its gradient admits a simple analytic expression. A single observation of the reaction coordinate can thus be used to update the approximate free energy at every point within a neighborhood of the observation. This greatly reduces the equilibration time of the adaptive bias potential. This approximation introduces two parameters: strength of mollification and the zero of energy of the bias potential. While we observe that the approximate free energy is a very good estimate of the actual free energy for a large range of mollification strength, we demonstrate that the errors associated with the mollification may be removed via deconvolution. The zero of energy of the bias potential, which is easy to choose, influences the speed of convergence but not the limiting accuracy. This method is simple to apply to free energy or mean force computation in multiple dimensions and does not involve second derivatives of the reaction coordinates, matrix manipulations nor on-the-fly adaptation of parameters. For the alanine dipeptide test case, the new method is found to gain as much as a factor of 10 in efficiency as compared to two basic implementations of the adaptive biasing force methods, and it is shown to be as efficient as well-tempered metadynamics with the postprocess deconvolution giving a clear advantage to the mollified density of states method.

  16. Receiver function deconvolution using transdimensional hierarchical Bayesian inference

    NASA Astrophysics Data System (ADS)

    Kolb, J. M.; Lekić, V.

    2014-06-01

    Teleseismic waves can convert from shear to compressional (Sp) or compressional to shear (Ps) across impedance contrasts in the subsurface. Deconvolving the parent waveforms (P for Ps or S for Sp) from the daughter waveforms (S for Ps or P for Sp) generates receiver functions which can be used to analyse velocity structure beneath the receiver. Though a variety of deconvolution techniques have been developed, they are all adversely affected by background and signal-generated noise. In order to take into account the unknown noise characteristics, we propose a method based on transdimensional hierarchical Bayesian inference in which both the noise magnitude and noise spectral character are parameters in calculating the likelihood probability distribution. We use a reversible-jump implementation of a Markov chain Monte Carlo algorithm to find an ensemble of receiver functions whose relative fits to the data have been calculated while simultaneously inferring the values of the noise parameters. Our noise parametrization is determined from pre-event noise so that it approximates observed noise characteristics. We test the algorithm on synthetic waveforms contaminated with noise generated from a covariance matrix obtained from observed noise. We show that the method retrieves easily interpretable receiver functions even in the presence of high noise levels. We also show that we can obtain useful estimates of noise amplitude and frequency content. Analysis of the ensemble solutions produced by our method can be used to quantify the uncertainties associated with individual receiver functions as well as with individual features within them, providing an objective way for deciding which features warrant geological interpretation. This method should make possible more robust inferences on subsurface structure using receiver function analysis, especially in areas of poor data coverage or under noisy station conditions.

  17. Computed inverse resonance imaging for magnetic susceptibility map reconstruction.

    PubMed

    Chen, Zikuan; Calhoun, Vince

    2012-01-01

    This article reports a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a 2-step computational approach. The forward T2*-weighted MRI (T2*MRI) process is broken down into 2 steps: (1) from magnetic susceptibility source to field map establishment via magnetization in the main field and (2) from field map to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes 2 inverse steps to reverse the T2*MRI procedure: field map calculation from MR-phase image and susceptibility source calculation from the field map. The inverse step from field map to susceptibility map is a 3-dimensional ill-posed deconvolution problem, which can be solved with 3 kinds of approaches: the Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR-phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by 2 computational steps: calculating the field map from the phase image and reconstructing the susceptibility map from the field map. The crux of CIMRI lies in an ill-posed 3-dimensional deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm.
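
    Of the three inverse approaches, the Tikhonov-regularized matrix inverse is the simplest to sketch; the version below assumes the standard k-space dipole kernel with the main field along z (the split Bregman TV solver favored in the article is considerably more involved):

        import numpy as np

        def dipole_kernel(shape):
            """k-space dipole kernel D(k) = 1/3 - kz^2/|k|^2, main field
            along z; the DC term is set arbitrarily since D is undefined there."""
            kx, ky, kz = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
            k2 = kx**2 + ky**2 + kz**2
            k2[0, 0, 0] = 1.0
            return 1.0 / 3.0 - kz**2 / k2

        def tikhonov_susceptibility(fieldmap, lam=1e-2):
            """Tikhonov-regularized inversion of the fieldmap-to-susceptibility
            deconvolution; lam trades noise amplification against fidelity."""
            D = dipole_kernel(fieldmap.shape)
            B = np.fft.fftn(fieldmap)
            chi = np.conj(D) * B / (np.abs(D) ** 2 + lam)
            return np.real(np.fft.ifftn(chi))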

  18. Computed inverse MRI for magnetic susceptibility map reconstruction

    PubMed Central

    Chen, Zikuan; Calhoun, Vince

    2015-01-01

    Objective This paper reports on a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a two-step computational approach. Methods The forward T2*-weighted MRI (T2*MRI) process is decomposed into two steps: 1) from magnetic susceptibility source to fieldmap establishment via magnetization in a main field, and 2) from fieldmap to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes two inverse steps to reverse the T2*MRI procedure: fieldmap calculation from the MR phase image and susceptibility source calculation from the fieldmap. The inverse step from fieldmap to susceptibility map is a 3D ill-posed deconvolution problem, which can be solved by three kinds of approaches: Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Results Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. Conclusions The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by two computational steps: calculating the fieldmap from the phase image and reconstructing the susceptibility map from the fieldmap. The crux of CIMRI lies in an ill-posed 3D deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm. PMID:22446372

  19. Six-color intravital two-photon imaging of brain tumors and their dynamic microenvironment.

    PubMed

    Ricard, Clément; Debarbieux, Franck Christian

    2014-01-01

    The majority of intravital studies of brain tumors in living animals have so far relied on dual-color imaging. We describe here a multiphoton imaging protocol to dynamically characterize the interactions between six cellular components in a living mouse. We applied this methodology to a clinically relevant glioblastoma multiforme (GBM) model designed in reporter mice with targeted cell populations labeled by fluorescent proteins of different colors. This model permitted us to make non-invasive longitudinal and multi-scale observations of cell-to-cell interactions. We provide examples of such 5D (x,y,z,t,color) images acquired on a daily basis from volumes of interest, covering most of the mouse parietal cortex at subcellular resolution. Spectral deconvolution allowed us to accurately separate each cell population as well as some components of the extracellular matrix. The technique represents a powerful tool for investigating how tumor progression is influenced by the interactions of tumor cells with host cells and the extracellular matrix microenvironment. It will be especially valuable for evaluating neuro-oncological drug efficacy and target specificity. The imaging protocol provided here can be easily translated to other mouse models of neuropathologies, and should also be of fundamental interest for investigations in other areas of systems biology.
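
    The spectral separation step is, at its core, linear unmixing; a minimal sketch assuming reference emission spectra for each label are available as columns of a matrix (all names illustrative):

        import numpy as np

        def unmix(stack, endmembers):
            """Linear spectral unmixing: each pixel's spectrum (a column of
            `stack`, shape channels x pixels) is modeled as a mix of the
            reference spectra (columns of `endmembers`); all pixels are
            solved at once and unphysical negative abundances are clipped."""
            abundances, *_ = np.linalg.lstsq(endmembers, stack, rcond=None)
            return np.clip(abundances, 0.0, None)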

  20. Improved method for peak picking in matrix-assisted laser desorption/ionization time-of-flight mass spectrometry.

    PubMed

    Kempka, Martin; Sjödahl, Johan; Björk, Anders; Roeraade, Johan

    2004-01-01

    A method for peak picking in matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOFMS) is described. The method is based on the assumption that two sets of ions, each with a Gaussian distribution but a different velocity profile, are formed during the ionization stage. This gives rise to a certain degree of peak skewness. Our algorithm deconvolutes the peak and utilizes the fast-velocity bulk ion distribution for peak picking. The performance of the new method was evaluated using peptide peaks from a bovine serum albumin (BSA) digest and compared with the commercial peak-picking algorithms Centroid and SNAP. With the new two-Gaussian algorithm, the mass accuracy for strong signals was equal to or marginally better than that obtained with the commercial algorithms. However, for weak, distorted peaks, considerable improvement in both mass accuracy and precision was obtained. This improvement should be particularly useful in proteomics, where a lack of signal strength is often encountered when dealing with weakly expressed proteins. Finally, since the new peak-picking method uses information from the entire signal, no adjustments of parameters related to peak height have to be made, which simplifies its practical use. Copyright 2004 John Wiley & Sons, Ltd.
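
    A hedged sketch of the two-Gaussian idea: fit the skewed peak as a sum of two Gaussians and pick the centroid of the faster (earlier-arriving) bulk component. Parameter names and the starting guess p0 are assumptions, not the published implementation:

        import numpy as np
        from scipy.optimize import curve_fit

        def two_gaussians(t, a1, mu1, s1, a2, mu2, s2):
            """Sum of two Gaussian ion populations; the skewed TOF peak is
            modeled as a fast bulk distribution plus a slower minor one."""
            g = lambda a, mu, s: a * np.exp(-0.5 * ((t - mu) / s) ** 2)
            return g(a1, mu1, s1) + g(a2, mu2, s2)

        def pick_peak(t, intensity, p0):
            """Fit the skewed peak and return the centroid of the faster
            (earlier-arriving) bulk component as the picked position."""
            popt, _ = curve_fit(two_gaussians, t, intensity, p0=p0)
            return min(popt[1], popt[4])   # faster ions arrive earlier in the TOF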

  1. Wear behavioral study of as cast and 7 hr homogenized Al25Mg2Si2Cu4Ni alloy at constant load

    NASA Astrophysics Data System (ADS)

    Harlapur, M. D.; Sondur, D. G.; Akkimardi, V. G.; Mallapur, D. G.

    2018-04-01

    In the current study, the wear behavior of as-cast and 7 hr homogenized Al25Mg2Si2Cu4Ni alloy has been investigated. Microstructure, SEM and EDS results confirm the presence of different intermetallics and their effects on the wear properties of the Al25Mg2Si2Cu4Ni alloy in the as-cast as well as the aged condition. The main alloying elements, Si, Cu, Mg and Ni, partly dissolve in the primary α-Al matrix and are to some extent present in the form of intermetallic phases. The SEM structure of the as-cast alloy shows blocks of Mg2Si that are randomly distributed in the aluminium matrix. Precipitates of Al2Cu in the form of Chinese script are also observed, and the 'Q' phase (Al-Si-Cu-Mg) is distributed uniformly in the aluminium matrix. A few coarsened platelets of Ni are seen. In the case of the 7 hr homogenized samples, the blocks of Mg2Si become rounded at the corners, and the Ni platelets are fragmented and distributed uniformly in the aluminium matrix. Results show improved volumetric wear resistance and a reduced coefficient of friction after the homogenizing heat treatment.

  2. A Novel Approach for Constructing One-Way Hash Function Based on a Message Block Controlled 8D Hyperchaotic Map

    NASA Astrophysics Data System (ADS)

    Lin, Zhuosheng; Yu, Simin; Lü, Jinhu

    2017-06-01

    In this paper, a novel approach for constructing a one-way hash function based on an 8D hyperchaotic map is presented. First, two nominal matrices, one with constant and one with variable parameters, are adopted for designing 8D discrete-time hyperchaotic systems. Then each input plaintext message block is transformed into an 8 × 8 matrix following the order of left to right and top to bottom, which is used as a control matrix for switching between the nominal matrix elements with constant parameters and those with variable parameters. Through this switching control, a new nominal matrix mixing the constant and variable parameters is obtained for the 8D hyperchaotic map. Finally, the hash function is constructed from the low 8 bits of multiple iterative outputs of the hyperchaotic system after rounding down, and its security analysis results are also given, validating the feasibility and reliability of the proposed approach. Compared with existing schemes, the main feature of the proposed method is its large number of key parameters with avalanche effect, making it difficult to estimate or predict the key parameters via various attacks.
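
    The switching-control idea can be illustrated with a deliberately simplified toy (this is not the paper's 8D hyperchaotic system and makes no security claims): each 8-byte message block is unpacked into an 8 × 8 bit matrix that selects between constant- and variable-parameter entries, and the digest collects the low 8 bits of the iterated state:

        import numpy as np

        def toy_chaotic_hash(message: bytes, rounds: int = 64) -> bytes:
            """Toy illustration only: the message block acts as an 8x8
            control matrix switching between two nominal parameter
            matrices; the bounded map below stands in for the real
            hyperchaotic iteration."""
            A_const = np.full((8, 8), 0.31)          # constant-parameter matrix
            A_var = np.full((8, 8), 0.47)            # variable-parameter matrix
            x = np.linspace(0.1, 0.8, 8)             # initial state
            digest = bytearray()
            padded = message + bytes(-len(message) % 8)
            for blk in range(0, len(padded), 8):
                bits = np.unpackbits(np.frombuffer(padded[blk:blk + 8], np.uint8))
                mask = bits.reshape(8, 8).astype(bool)   # 8x8 control matrix
                A = np.where(mask, A_var, A_const)       # switched nominal matrix
                for _ in range(rounds):
                    x = (A @ np.sin(np.pi * x)) % 1.0    # bounded chaotic-style update
                digest += bytes(int(v * 256) & 0xFF for v in x)  # low 8 bits
            return bytes(digest)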

  3. Fundamental characteristics of degradation-recoverable solid-state DFB polymer laser.

    PubMed

    Yoshioka, Hiroaki; Yang, Yu; Watanabe, Hirofumi; Oki, Yuji

    2012-02-13

    A novel solid-state dye laser with degradation recovery was proposed and demonstrated. Polydimethylsiloxane was used as a nanoporous solid matrix to enable the internal circulation of dye molecules in the solid state. An internal circulation model for the dye molecules was also proposed and verified numerically by assuming molecular mobility and using a proposed diffusion equation. The durability of the laser was increased 20.5-fold compared with that of a conventional polymethylmethacrylate laser. This novel laser solves the low-durability problem of dye-doped polymer lasers.

  4. Serum tissue inhibitor of matrix metalloproteinase-1 levels are associated with mortality in patients with malignant middle cerebral artery infarction.

    PubMed

    Lorente, Leonardo; Martín, María M; Ramos, Luis; Cáceres, Juan J; Solé-Violán, Jordi; Argueso, Mónica; Jiménez, Alejandro; Borreguero-León, Juan M; Orbe, Josune; Rodríguez, José A; Páramo, José A

    2015-07-11

    In recent years, circulating matrix metalloproteinase (MMP)-9 levels have been associated with functional outcome in ischemic stroke patients. However, the prognostic value of circulating levels of tissue inhibitor of matrix metalloproteinases (TIMP)-1 and MMP-10 for the functional outcome of ischemic stroke patients has scarcely been studied. In addition, to our knowledge, serum MMP-9, MMP-10 and TIMP-1 levels in patients with malignant middle cerebral artery infarction (MMCAI) have not been studied for mortality prediction, and these were the objectives of this study. This was a multicenter, observational and prospective study carried out in six Spanish Intensive Care Units. We included patients with severe MMCAI, defined as a Glasgow Coma Scale (GCS) score lower than 9. We measured circulating levels of MMP-9, MMP-10 and TIMP-1 in 50 patients with severe MMCAI at diagnosis and in 50 healthy subjects. The endpoint was 30-day mortality. Patients with severe MMCAI showed higher serum levels of MMP-9 (p = 0.001), MMP-10 (p < 0.001), and TIMP-1 (p = 0.02) than healthy subjects. Non-surviving MMCAI patients (n = 26) compared to survivors (n = 24) showed higher circulating levels of TIMP-1 (p < 0.001), MMP-10 (p = 0.02) and PAI-1 (p = 0.02), and lower MMP-9 levels (p = 0.04). Multiple binomial logistic regression analysis showed that serum TIMP-1 levels > 239 ng/mL are associated with 30-day mortality (OR = 5.82; 95% CI = 1.37-24.73; P = 0.02) controlling for GCS and age. The area under the curve for TIMP-1 as a predictor of 30-day mortality was 0.81 (95% CI = 0.67-0.91; P < 0.001). We found an association between circulating levels of TIMP-1 and MMP-10 (rho = 0.45; P = 0.001), plasminogen activator inhibitor (PAI)-1 (rho = 0.53; P < 0.001), and tumor necrosis factor (TNF)-alpha (rho = 0.70; P < 0.001). The most relevant and novel findings of our study were that serum TIMP-1 levels in MMCAI patients were associated with mortality and could be used as a prognostic biomarker of mortality in MMCAI patients.

  5. Multi-limit unsymmetrical MLIBD image restoration algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Cheng, Yiping; Chen, Zai-wang; Bo, Chen

    2012-11-01

    A novel multi-limit unsymmetrical iterative blind deconvolution (MLIBD) algorithm is presented to enhance the performance of adaptive optics image restoration. The algorithm enhances the reliability of iterative blind deconvolution by introducing a bandwidth limit into the frequency domain of the point spread function (PSF), and adopts dynamic estimation of the PSF support region to improve the convergence speed. The unsymmetrical factor is computed automatically to improve its adaptivity. Image deconvolution experiments comparing Richardson-Lucy IBD with MLIBD were carried out, and the results indicate that the iteration number is reduced by 22.4% and the peak signal-to-noise ratio is improved by 10.18 dB with the MLIBD method. The MLIBD algorithm performs outstandingly in restoring the FK5-857 adaptive optics images and double-star adaptive optics images.
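
    A sketch of the alternating (iterative blind deconvolution) structure with a bandwidth limit on the PSF; plain Richardson-Lucy updates stand in for the paper's multi-limit scheme, the object and PSF share one grid for simplicity, and the cutoff fraction fc is an assumed knob:

        import numpy as np

        def rl_step(estimate, other, data):
            """One Richardson-Lucy multiplicative update of `estimate`,
            holding the other factor (object or PSF) fixed."""
            conv = np.convolve(estimate, other, mode="same")
            ratio = data / np.maximum(conv, 1e-12)
            return estimate * np.convolve(ratio, other[::-1], mode="same")

        def band_limit(psf, fc=0.25):
            """Hard frequency-domain cutoff on the PSF (illustrative stand-in
            for MLIBD's bandwidth limit); fc is a fraction of Nyquist."""
            F = np.fft.rfft(psf)
            F[int(fc * F.size):] = 0.0
            return np.maximum(np.fft.irfft(F, n=psf.size), 0.0)

        def blind_deconvolve(data, psf0, n_outer=30, n_inner=3):
            """Alternating blind deconvolution; both factors stay nonnegative."""
            obj, psf = data.copy(), psf0.copy()
            for _ in range(n_outer):
                for _ in range(n_inner):
                    psf = rl_step(psf, obj, data)
                psf = band_limit(psf)
                psf /= psf.sum() + 1e-12        # keep the PSF normalized
                for _ in range(n_inner):
                    obj = rl_step(obj, psf, data)
            return obj, psf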

  6. On the Reliability of Source Time Functions Estimated Using Empirical Green's Function Methods

    NASA Astrophysics Data System (ADS)

    Gallegos, A. C.; Xie, J.; Suarez Salas, L.

    2017-12-01

    The Empirical Green's Function (EGF) method (Hartzell, 1978) has been widely used to extract source time functions (STFs). In this method, seismograms generated by collocated events with different magnitudes are deconvolved. Under the fundamental assumption that the STF of the small event is a delta function, the deconvolved Relative Source Time Function (RSTF) yields the large event's STF. While this assumption can be empirically justified by examining differences in event size and in the frequency content of the seismograms, rigorous justification is often lacking. In practice, the small event may have a finite duration, in which case the retrieved RSTF is interpreted as the large event's STF with a bias. In this study, we rigorously analyze this bias using synthetic waveforms generated by convolving a realistic Green's function waveform with pairs of finite-duration triangular or parabolic STFs. The RSTFs are found using a time-domain matrix deconvolution. We find that when the STFs of the smaller events are finite, the RSTFs are a series of narrow non-physical spikes. Interpreting these RSTFs as a series of high-frequency source radiations would be very misleading. The only reliable and unambiguous information we can retrieve from these RSTFs is the difference in durations and the moment ratio of the two STFs. We can apply a Tikhonov smoothing to obtain a single-pulse RSTF, but its duration depends on the choice of weighting, which may be subjective. We then test the Multi-Channel Deconvolution (MCD) method (Plourde & Bostock, 2017), which assumes that both STFs have finite durations to be solved for. A concern about the MCD method is that the number of unknown parameters is larger, which would tend to make the problem rank-deficient. Because the kernel matrix depends on the STFs to be solved for under a positivity constraint, we can only estimate the rank-deficiency with a semi-empirical approach. Based on the results so far, we find that the rank-deficiency makes it impractical to solve for both STFs. To solve for the larger STF we need to assume the shape of the small STF to be known a priori. Thus, the reliability of the estimated large STF depends on the difference between the assumed and true shapes of the small STF. We will show how the reliability varies with realistic scenarios.
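
    A minimal sketch of a time-domain matrix deconvolution of the kind described, assuming the parent waveform is no longer than the daughter record: build the convolution (Toeplitz) matrix, append a second-difference Tikhonov smoothing block, and solve under a positivity constraint:

        import numpy as np
        from scipy.linalg import toeplitz
        from scipy.optimize import nnls

        def deconvolve_rstf(daughter, parent, alpha=0.1):
            """Solve G @ rstf ~ daughter, where G is the Toeplitz convolution
            matrix of the parent waveform; the second-difference block L,
            weighted by alpha, enforces smoothness and nnls enforces positivity
            (the subjectivity of the weighting noted in the abstract is alpha)."""
            n = len(daughter)
            col = np.r_[parent, np.zeros(n - len(parent))]
            G = toeplitz(col, np.zeros(n))
            L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
            A = np.vstack([G, alpha * L])
            b = np.r_[daughter, np.zeros(n)]
            rstf, _ = nnls(A, b)
            return rstf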

  7. Characterization of PD-1 upregulation on tumor-infiltrating lymphocytes in human and murine gliomas and preclinical therapeutic blockade.

    PubMed

    Dejaegher, Joost; Verschuere, Tina; Vercalsteren, Ellen; Boon, Louis; Cremer, Jonathan; Sciot, Raf; Van Gool, Stefaan W; De Vleeschouwer, Steven

    2017-11-01

    Blockade of the immune checkpoint molecule programmed-cell-death-protein-1 (PD-1) has yielded promising results in several cancers. To understand the therapeutic potential in human gliomas, quantitative data describing the expression of PD-1 are essential. Moreover, due to the immune-specialized region of the brain in which gliomas arise, differences between tumor-infiltrating and circulating lymphocytes should be acknowledged. In this study we used flow cytometry to quantify PD-1 expression on tumor-infiltrating T cells of 25 freshly resected glioma cell suspensions (10 newly diagnosed and 5 relapsed glioblastomas, 10 lower grade gliomas) and on simultaneously isolated circulating T cells. A strong upregulation of PD-1 expression in the tumor microenvironment compared to the blood circulation was seen in all glioma patients. Additionally, circulating T cells were isolated from 15 age-matched healthy volunteers, but no differences in PD-1 expression were found compared to glioma patients. In the murine GL261 malignant glioma model, there was a similar upregulation of PD-1 on brain-infiltrating lymphocytes. Using a monoclonal PD-1 blocking antibody, we found markedly prolonged survival, with 55% of mice reaching long-term survival. Analysis of brain-infiltrating cells 21 days after GL261 tumor implantation showed a shift in infiltrating lymphocyte subgroups, with increased CD8+ T cells and decreased regulatory T cells. Together, our results suggest an important role of PD-1 in glioma-induced immune escape, and provide translational evidence for the use of PD-1 blocking antibodies in human malignant gliomas. © 2017 UICC.

  8. Extreme cyclone events in the Arctic: Wintertime variability and trends

    NASA Astrophysics Data System (ADS)

    Rinke, A.; Maturilli, M.; Graham, R. M.; Matthes, H.; Handorf, D.; Cohen, L.; Hudson, S. R.; Moore, J. C.

    2017-09-01

    Typically 20-40 extreme cyclone events (sometimes called ‘weather bombs’) occur in the Arctic North Atlantic per winter season, with an increasing trend of 6 events/decade over 1979-2015, according to 6-hourly station data from Ny-Ålesund. This increased frequency of extreme cyclones is consistent with observed significant winter warming, indicating that the meridional heat and moisture transport they bring is a factor in rising temperatures in the region. The winter trend in extreme cyclones is dominated by a positive monthly trend of about 3-4 events/decade in November-December, due mainly to an increasing persistence of extreme cyclone events. A negative trend in January opposes this, while there is no significant trend in February. We relate the regional patterns of the trend in extreme cyclones to anomalously low sea-ice conditions in recent years, together with associated large-scale atmospheric circulation changes such as ‘blocking-like’ circulation patterns (e.g. Scandinavian blocking in December and Ural blocking during January-February).

  9. Rockslide-debris avalanche of May 18, 1980, Mount St. Helens Volcano, Washington

    USGS Publications Warehouse

    Glicken, Harry

    1996-01-01

    This report provides a detailed picture of the rockslide-debris avalanche of the May 18, 1980, eruption of Mount St. Helens volcano. It provides a characterization of the deposit, a reinterpretation of the details of the first minutes of the eruption of May 18, and insight into the transport mechanism of the mass movement. Details of the rockslide event, as revealed by eyewitness photographs, are correlated with features of the deposit. The photographs show three slide blocks in the rockslide movement. Slide block I was triggered by a magnitude 5.1 earthquake at 8:32 a.m. Pacific Daylight Time (P.D.T.). An exploding cryptodome burst through slide block II to produce the 'blast surge.' Slide block III consisted of many discrete failures that were carried out in continuing pyroclastic currents generated from the exploding cryptodome. The cryptodome continued to depressurize after slide block III, producing a blast deposit that rests on top of the debris-avalanche deposit. The hummocky 2.5 cubic kilometer debris-avalanche deposit consists of block facies (pieces of the pre-eruption Mount St. Helens transported relatively intact) and matrix facies (a mixture of rocks from the old mountain and cryptodome dacite). Block facies is divided into five lithologic units. Matrix facies was derived from the explosively generated current of slide block III as well as from disaggregation and mixing of debris-avalanche blocks. The mean density of the old cone was measured to be about 20 percent greater than the mean density of the avalanche deposit. Density in the deposit does not decrease with distance, which suggests that debris-avalanche blocks were dilated at the mountain rather than during transport. Grain-size parameters showing that clast size converges toward a mean with distance suggest mixing during transport. The debris-avalanche flow can be considered a grain flow, in which particles, either debris-avalanche blocks or the clasts within them, collided and created dispersive stress normal to the movement of material. The dispersive stress preserved the dilation of the material and allowed it to flow.

  10. Class I HDACs Regulate Angiotensin II-Dependent Cardiac Fibrosis via Fibroblasts and Circulating Fibrocytes

    PubMed Central

    Williams, Sarah M.; Golden-Mason, Lucy; Ferguson, Bradley S.; Douglas, Katherine B.; Cavasin, Maria A.; Demos-Davies, Kim; Yeager, Michael E.; Stenmark, Kurt R.; McKinsey, Timothy A.

    2014-01-01

    Fibrosis, which is defined as excessive accumulation of fibrous connective tissue, contributes to the pathogenesis of numerous diseases involving diverse organ systems. Cardiac fibrosis predisposes individuals to myocardial ischemia, arrhythmias and sudden death, and is commonly associated with diastolic dysfunction. Histone deacetylase (HDAC) inhibitors block cardiac fibrosis in pre-clinical models of heart failure. However, which HDAC isoforms govern cardiac fibrosis, and the mechanisms by which they do so, remains unclear. Here, we show that selective inhibition of class I HDACs potently suppresses angiotensin II (Ang II)-mediated cardiac fibrosis by targeting two key effector cell populations, cardiac fibroblasts and bone marrow-derived fibrocytes. Class I HDAC inhibition blocks cardiac fibroblast cell cycle progression through derepression of the genes encoding the cyclin-dependent kinase (CDK) inhibitors, p15 and p57. In contrast, class I HDAC inhibitors block agonist-dependent differentiation of fibrocytes through a mechanism involving repression of ERK1/2 signaling. These findings define novel roles for class I HDACs in the control of pathological cardiac fibrosis. Furthermore, since fibrocytes have been implicated in the pathogenesis of a variety of human diseases, including heart, lung and kidney failure, our results suggest broad utility for isoform-selective HDAC inhibitors as anti-fibrotic agents that function, in part, by targeting these circulating mesenchymal cells. PMID:24374140

  11. Parallel block schemes for large scale least squares computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Golub, G.H.; Plemmons, R.J.; Sameh, A.

    1986-04-01

    Large scale least squares computations arise in a variety of scientific and engineering problems, including geodetic adjustments and surveys, medical image analysis, molecular structures, partial differential equations and substructuring methods in structural engineering. In each of these problems, matrices often arise which possess a block structure reflecting the local connection nature of the underlying physical problem. For example, such super-large nonlinear least squares computations arise in geodesy, where the coordinates of positions are calculated by iteratively solving overdetermined systems of nonlinear equations by the Gauss-Newton method. The US National Geodetic Survey will complete this year (1986) the readjustment of the North American Datum, a problem which involves over 540 thousand unknowns and over 6.5 million observations (equations). The observation matrix for these least squares computations has a block angular form with 161 diagonal blocks, each containing 3 to 4 thousand unknowns. In this paper parallel schemes are suggested for the orthogonal factorization of matrices in block angular form and for the associated backsubstitution phase of the least squares computations. In addition, a parallel scheme for the calculation of certain elements of the covariance matrix for such problems is described. It is shown that these algorithms are ideally suited for multiprocessors with three levels of parallelism such as the Cedar system at the University of Illinois. 20 refs., 7 figs.
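
    The two-stage structure that makes block angular least squares parallel can be sketched in dense linear algebra (the production codes use sparse orthogonal factorizations); each block row [A_i | B_i] is factored independently before a small coupled solve:

        import numpy as np

        def block_angular_lstsq(diag_blocks, coupling_blocks, rhs_blocks):
            """Two-stage solve of min || [A_i 0 ... B_i][x_i; y] - b_i ||:
            each block row [A_i | B_i] is QR-factored independently (the
            parallelizable stage), the reduced rows give a small least
            squares problem for the coupling unknowns y, and each x_i is
            recovered by back-substitution. A_i is assumed to have full
            column rank."""
            stage1, reduced_rows, reduced_rhs = [], [], []
            for A, B, b in zip(diag_blocks, coupling_blocks, rhs_blocks):
                n = A.shape[1]
                Q, R = np.linalg.qr(np.hstack([A, B]), mode="complete")
                qtb = Q.T @ b
                stage1.append((R[:n, :n], R[:n, n:], qtb[:n]))
                reduced_rows.append(R[n:, n:])   # rows independent of x_i
                reduced_rhs.append(qtb[n:])
            y, *_ = np.linalg.lstsq(np.vstack(reduced_rows),
                                    np.concatenate(reduced_rhs), rcond=None)
            xs = [np.linalg.solve(R11, c1 - R12 @ y) for R11, R12, c1 in stage1]
            return xs, y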

  12. A feasibility and optimization study to determine cooling time and burnup of advanced test reactor fuels using a nondestructive technique

    NASA Astrophysics Data System (ADS)

    Navarro, Jorge

    The goal of the study presented here is to determine the best available nondestructive technique for collecting validation data and for determining the burnup and cooling time of fuel elements on-site at the Advanced Test Reactor (ATR) canal. This study assesses the viability of implementing a permanent fuel scanning system at the ATR canal and leads to the full design of a permanent fuel scan system. The study first determined whether it was possible, and which equipment was necessary, to collect useful spectra from ATR fuel elements at the canal adjacent to the reactor. Once it was established that useful spectra could be obtained at the ATR canal, the next step was to determine which detector and which configuration were best suited to predict burnup and cooling time of fuel elements nondestructively. Three detectors, High Purity Germanium (HPGe), Lanthanum Bromide (LaBr3), and High Pressure Xenon (HPXe), were used during the study in two system configurations, above and below the water pool. The data collected and analyzed were used to create burnup and cooling time calibration prediction curves for ATR fuel. The next stage of the study was to determine which of the three detectors tested was best suited for the permanent system. From the spectra taken and the calibration curves obtained, it was determined that although the HPGe detector yielded better results, a detector that could better withstand the harsh environment of the ATR canal was needed. The in-situ nature of the measurements required a rugged, low-maintenance, and easy-to-control fuel scanning system. Based on the ATR canal feasibility measurements and calibration results, it was determined that the LaBr3 detector was the best alternative for canal in-situ measurements; however, in order to enhance the quality of the spectra collected using this scintillator, a deconvolution method was developed. Following the development of the deconvolution method for ATR applications, the technique was tested using one-isotope, multi-isotope, and simulated fuel sources. Burnup calibrations were performed using convoluted and deconvoluted data. The calibration results showed that burnup prediction by this method improves with deconvolution. The final stage of the deconvolution method development was to perform an irradiation experiment in order to create a surrogate fuel source for testing the deconvolution method with experimental data. A conceptual design of the fuel scan system is proposed as the path forward, using the rugged LaBr3 detector in an above-the-water configuration together with deconvolution algorithms.

  13. A Shifted Block Lanczos Algorithm 1: The Block Recurrence

    NASA Technical Reports Server (NTRS)

    Grimes, Roger G.; Lewis, John G.; Simon, Horst D.

    1990-01-01

    In this paper we describe a block Lanczos algorithm that is used as the key building block of a software package for the extraction of eigenvalues and eigenvectors of large sparse symmetric generalized eigenproblems. The software package comprises a version of the block Lanczos algorithm specialized for spectrally transformed eigenproblems; an adaptive strategy for choosing shifts; and efficient codes for factoring large sparse symmetric indefinite matrices. This paper describes the algorithmic details of our block Lanczos recurrence, which uses a novel combination of block generalizations of several features that have previously been investigated only independently. In particular, new forms of partial reorthogonalization, selective reorthogonalization and local reorthogonalization are used, as is a new algorithm for obtaining the M-orthogonal factorization of a matrix. The heuristic shifting strategy, the integration with sparse linear equation solvers and numerical experience with the code are described in a companion paper.
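
    The bare three-term block recurrence at the heart of such a package can be sketched as follows, assuming a symmetric matrix A and omitting the paper's refinements (the reorthogonalization variants, shifting strategy, and M-inner products):

        import numpy as np

        def block_lanczos(A, Q0, steps):
            """Plain block three-term recurrence for symmetric A:
            A Q_j = Q_{j-1} B_{j-1}^T + Q_j A_j + Q_{j+1} B_j, with the next
            block Q_{j+1} and the subdiagonal B_j taken from a QR
            factorization. Breakdown handling is omitted."""
            Q_prev = np.zeros_like(Q0)
            Q, _ = np.linalg.qr(Q0)
            B_prev = np.zeros((Q0.shape[1], Q0.shape[1]))
            alphas, betas, basis = [], [], [Q]
            for _ in range(steps):
                W = A @ Q - Q_prev @ B_prev.T
                Aj = Q.T @ W                  # block diagonal coefficient
                W = W - Q @ Aj
                Q_next, Bj = np.linalg.qr(W)  # next basis block and B_j
                alphas.append(Aj)
                betas.append(Bj)
                Q_prev, Q, B_prev = Q, Q_next, Bj
                basis.append(Q)
            return alphas, betas, basis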

  14. A Single Sphingomyelin Species Promotes Exosomal Release of Endoglin into the Maternal Circulation in Preeclampsia.

    PubMed

    Ermini, Leonardo; Ausman, Jonathan; Melland-Smith, Megan; Yeganeh, Behzad; Rolfo, Alessandro; Litvack, Michael L; Todros, Tullia; Letarte, Michelle; Post, Martin; Caniggia, Isabella

    2017-09-22

    Preeclampsia (PE), a hypertensive disorder of pregnancy, exhibits increased circulating levels of a short form of the auxiliary TGF-beta (TGFB) receptor endoglin (sENG). Until now, its release and functionality in PE have remained poorly understood. Here we show that ENG selectively interacts with sphingomyelin (SM)-18:0, which promotes its clustering with metalloproteinase 14 (MMP14) in SM-18:0-enriched lipid rafts of the apical syncytial membranes from PE placenta, where ENG is cleaved by MMP14 into sENG. The SM-18:0-enriched lipid rafts also contain type 1 and type 2 TGFB receptors (TGFBR1 and TGFBR2), but not soluble fms-like tyrosine kinase 1 (sFLT1), another protein secreted in excess into the circulation of women with PE. The truncated ENG is then released into the maternal circulation via SM-18:0-enriched exosomes together with TGFBR1 and TGFBR2. Such an exosomal TGFB receptor complex could be functionally active and block the vascular effects of TGFB in the circulation of PE women.

  15. A MAP blind image deconvolution algorithm with bandwidth over-constrained

    NASA Astrophysics Data System (ADS)

    Ren, Zhilei; Liu, Jin; Liang, Yonghui; He, Yulong

    2018-03-01

    We demonstrate a maximum a posteriori (MAP) blind image deconvolution algorithm with an over-constrained bandwidth and total variation (TV) regularization to recover a clear image from AO-corrected images. The point spread functions (PSFs) are estimated with their bandwidth limited to less than the cutoff frequency of the optical system. Our algorithm performs well in avoiding noise magnification. The performance is demonstrated on simulated data.

  16. Successive Over-Relaxation Technique for High-Performance Blind Image Deconvolution

    DTIC Science & Technology

    2015-06-08

    Keywords: deconvolution, space surveillance, Gauss-Seidel iteration. … sensible approximate solutions to the ill-posed nonlinear inverse problem. These solutions are addressed as fixed points of an iteration consisting of alternating approximations (AA) for the object and for the PSF, performed with a prescribed number of inner iterative descents from trivial (zero

  17. Toward Overcoming the Local Minimum Trap in MFBD

    DTIC Science & Technology

    2015-07-14

    Publications during the first two years of this grant: A. Cornelio, E. Loli Piccolomini, and J. G. Nagy, Constrained Variable Projection Method for Blind Deconvolution; A. Cornelio, E. Loli Piccolomini, and J. G. Nagy, Constrained Numerical Optimization Methods for Blind Deconvolution, Numerical Algorithms, volume 65, issue 1.

  18. Deconvolution for three-dimensional acoustic source identification based on spherical harmonics beamforming

    NASA Astrophysics Data System (ADS)

    Chu, Zhigang; Yang, Yang; He, Yansong

    2015-05-01

    Spherical Harmonics Beamforming (SHB) with solid spherical arrays has become a particularly attractive tool for acoustic source identification in cabin environments. However, it presents some intrinsic limitations, specifically poor spatial resolution and severe sidelobe contamination. This paper focuses on overcoming these limitations effectively by deconvolution. First and foremost, a new formulation is proposed which expresses SHB's output as a convolution of the true source strength distribution and the point spread function (PSF), defined as SHB's response to a unit-strength point source. Additionally, the typical deconvolution methods initially suggested for planar arrays, namely the deconvolution approach for the mapping of acoustic sources (DAMAS), nonnegative least-squares (NNLS), Richardson-Lucy (RL) and CLEAN, are successfully adapted to SHB, yielding highly resolved and deblurred maps. Finally, the merits of the deconvolution methods are validated, and the relationships of the source strength and pressure contribution reconstructed by the deconvolution methods vs. focus distance are explored both with computer simulations and experimentally. Several interesting results have emerged from this study: (1) compared with SHB, DAMAS, NNLS, RL and CLEAN all not only improve the spatial resolution dramatically but also reduce or even eliminate the sidelobes effectively, allowing clear and unambiguous identification of a single source or of incoherent sources. (2) RL is best suited to coherent sources, followed by DAMAS and NNLS, while CLEAN is least suitable owing to its failure to suppress sidelobes. (3) The previous two results hold whether or not the real distance from the source to the array center equals the assumed one, referred to as the focus distance. (4) The true source strength can be recovered by dividing the reconstructed one by a coefficient that is the square of the focus distance divided by the real distance from the source to the array center. (5) The reconstructed pressure contribution is almost unaffected by the focus distance, always approximating the true one. This study will be of great significance to the accurate localization and quantification of acoustic sources in cabin environments.
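
    The deconvolution step shared by the DAMAS-like and NNLS approaches reduces to a nonnegative linear inverse problem once the PSF matrix is formed; a minimal sketch, where column j of the (assumed precomputed) PSF matrix holds the beamformer response to a unit-strength source at grid point j:

        import numpy as np
        from scipy.optimize import nnls

        def deconvolve_map(beam_map, psf_matrix):
            """NNLS-style step: model the vectorized SHB output as
            psf_matrix @ q, with the focus grid matching the map grid, and
            solve for the nonnegative source strength distribution q."""
            q, residual = nnls(psf_matrix, beam_map.ravel())
            return q.reshape(beam_map.shape), residual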

  19. Poly-N-acetylglucosamine matrix polysaccharide impedes fluid convection and transport of the cationic surfactant cetylpyridinium chloride through bacterial biofilms.

    PubMed

    Ganeshnarayan, Krishnaraj; Shah, Suhagi M; Libera, Matthew R; Santostefano, Anthony; Kaplan, Jeffrey B

    2009-03-01

    Biofilms are composed of bacterial cells encased in a self-synthesized, extracellular polymeric matrix. Poly-beta(1,6)-N-acetyl-d-glucosamine (PNAG) is a major biofilm matrix component in phylogenetically diverse bacteria. In this study we investigated the physical and chemical properties of the PNAG matrix in biofilms produced in vitro by the gram-negative porcine respiratory pathogen Actinobacillus pleuropneumoniae and the gram-positive device-associated pathogen Staphylococcus epidermidis. The effect of PNAG on bulk fluid flow was determined by measuring the rate of fluid convection through biofilms cultured in centrifugal filter devices. The rate of fluid convection was significantly higher in biofilms cultured in the presence of the PNAG-degrading enzyme dispersin B than in biofilms cultured without the enzyme, indicating that PNAG decreases bulk fluid flow. PNAG also blocked transport of the quaternary ammonium compound cetylpyridinium chloride (CPC) through the biofilms. Binding of CPC to biofilms further impeded fluid convection and blocked transport of the azo dye Allura red. Bioactive CPC was efficiently eluted from biofilms by treatment with 1 M sodium chloride. Taken together, these findings suggest that CPC reacts directly with the PNAG matrix and alters its physical and chemical properties. Our results indicate that PNAG plays an important role in controlling the physiological state of biofilms and may contribute to additional biofilm-associated processes such as biocide resistance.

  20. Active tissue stiffness modulation controls valve interstitial cell phenotype and osteogenic potential in 3D culture.

    PubMed

    Duan, Bin; Yin, Ziying; Hockaday Kang, Laura; Magin, Richard L; Butcher, Jonathan T

    2016-05-01

    Calcific aortic valve disease (CAVD) progression is a highly dynamic process whereby normally fibroblastic valve interstitial cells (VIC) undergo osteogenic differentiation, maladaptive extracellular matrix (ECM) composition, structural remodeling, and tissue matrix stiffening. However, how VIC with different phenotypes dynamically affect matrix properties, and how the altered matrix in turn affects VIC phenotypes under physiological and pathological conditions, has not yet been determined. In this study, we develop 3D hydrogels with tunable matrix stiffness to investigate the dynamic interplay between VIC phenotypes and matrix biomechanics. We find that VIC populated within hydrogels of valve-leaflet-like stiffness differentiate towards myofibroblasts in osteogenic media, but surprisingly undergo osteogenic differentiation when cultured within hydrogels of lower initial stiffness. VIC differentiation progressively stiffens the hydrogel microenvironment, which further upregulates both early and late osteogenic markers. These findings identify a dynamic positive feedback loop that governs acceleration of VIC calcification. Temporal stiffening of a pathologically lower stiffness matrix back to the normal level, or blocking the mechanosensitive RhoA/ROCK signaling pathway, delays the osteogenic differentiation process. Therefore, direct ECM biomechanical modulation can steer VIC phenotypes towards or away from osteogenic differentiation in 3D culture. These findings highlight the importance of the homeostatic maintenance of matrix stiffness to restrict pathological VIC differentiation. We implement 3D hydrogels with tunable matrix stiffness to investigate the dynamic interaction between valve interstitial cells (VIC, the major cell population in the heart valve) and matrix biomechanics. This work focuses on how human VIC respond to changing 3D culture environments. Our findings identify a dynamic positive feedback loop that governs acceleration of VIC calcification, which is the hallmark of calcific aortic valve disease. Temporal stiffening of a pathologically lower stiffness matrix back to the normal level, or blocking the mechanosensitive signaling pathway, delays VIC osteogenic differentiation. Our findings provide an improved understanding of VIC-matrix interactions to aid in the interpretation of VIC calcification studies in vitro, and suggest that ECM disruption resulting in local decreases in tissue stiffness may promote calcific aortic valve disease. Copyright © 2016 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  1. Developmental exposure to perchlorate alters synaptic transmission in hippocampus of the adult rat: in vivo studies.

    EPA Science Inventory

    Perchlorate, a contaminant found in food and water supplies throughout the USA, blocks iodine uptake into the thyroid gland to reduce circulating levels of thyroid hormone. Neurological function accompanying developmental exposure to perchlorate was evaluated in the present study...

  2. Genetics coupled to quantitative intact proteomics links heritable aphid and endosymbiont protein expression to circulative polerovirus transmission.

    PubMed

    Cilia, M; Tamborindeguy, C; Fish, T; Howe, K; Thannhauser, T W; Gray, S

    2011-03-01

    Yellow dwarf viruses in the family Luteoviridae, which are the causal agents of yellow dwarf disease in cereal crops, are each transmitted most efficiently by different species of aphids in a circulative manner that requires the virus to interact with a multitude of aphid proteins. Aphid proteins differentially expressed in F2 Schizaphis graminum genotypes segregating for the ability to transmit Cereal yellow dwarf virus-RPV (CYDV-RPV) were identified using two-dimensional difference gel electrophoresis (DIGE) coupled to either matrix-assisted laser desorption ionization-tandem mass spectrometry or online nanoscale liquid chromatography coupled to electrospray tandem mass spectrometry. A total of 50 protein spots, containing aphid proteins and proteins from the aphid's obligate and maternally inherited bacterial endosymbiont, Buchnera, were identified as differentially expressed between transmission-competent and refractive aphids. Surprisingly, in virus transmission-competent F2 genotypes, the isoelectric points of the Buchnera proteins did not match those in the maternal Buchnera proteome as expected, but instead they aligned with the Buchnera proteome of the transmission-competent paternal parent. Among the aphid proteins identified, many were involved in energy metabolism, membrane trafficking, lipid signaling, and the cytoskeleton. At least eight aphid proteins were expressed as heritable, isoelectric point isoform pairs, one derived from each parental lineage. In the F2 genotypes, the expression of aphid protein isoforms derived from the competent parental lineage aligned with the virus transmission phenotype with high precision. Thus, these isoforms are candidate biomarkers for CYDV-RPV transmission in S. graminum. Our combined genetic and DIGE approach also made it possible to predict where several of the proteins may be expressed in refractive aphids with different barriers to transmission. Twelve proteins were predicted to act in the hindgut of the aphid, while six proteins were predicted to be associated with the accessory salivary glands or hemolymph. Knowledge of the proteins that regulate virus transmission and their predicted locations will aid in understanding the biochemical mechanisms regulating circulative virus transmission in aphids, as well as in identifying new targets to block transmission.

  3. Genetics Coupled to Quantitative Intact Proteomics Links Heritable Aphid and Endosymbiont Protein Expression to Circulative Polerovirus Transmission

    PubMed Central

    Cilia, M.; Tamborindeguy, C.; Fish, T.; Howe, K.; Thannhauser, T. W.; Gray, S.

    2011-01-01

    Yellow dwarf viruses in the family Luteoviridae, which are the causal agents of yellow dwarf disease in cereal crops, are each transmitted most efficiently by different species of aphids in a circulative manner that requires the virus to interact with a multitude of aphid proteins. Aphid proteins differentially expressed in F2 Schizaphis graminum genotypes segregating for the ability to transmit Cereal yellow dwarf virus-RPV (CYDV-RPV) were identified using two-dimensional difference gel electrophoresis (DIGE) coupled to either matrix-assisted laser desorption ionization-tandem mass spectrometry or online nanoscale liquid chromatography coupled to electrospray tandem mass spectrometry. A total of 50 protein spots, containing aphid proteins and proteins from the aphid's obligate and maternally inherited bacterial endosymbiont, Buchnera, were identified as differentially expressed between transmission-competent and refractive aphids. Surprisingly, in virus transmission-competent F2 genotypes, the isoelectric points of the Buchnera proteins did not match those in the maternal Buchnera proteome as expected, but instead they aligned with the Buchnera proteome of the transmission-competent paternal parent. Among the aphid proteins identified, many were involved in energy metabolism, membrane trafficking, lipid signaling, and the cytoskeleton. At least eight aphid proteins were expressed as heritable, isoelectric point isoform pairs, one derived from each parental lineage. In the F2 genotypes, the expression of aphid protein isoforms derived from the competent parental lineage aligned with the virus transmission phenotype with high precision. Thus, these isoforms are candidate biomarkers for CYDV-RPV transmission in S. graminum. Our combined genetic and DIGE approach also made it possible to predict where several of the proteins may be expressed in refractive aphids with different barriers to transmission. Twelve proteins were predicted to act in the hindgut of the aphid, while six proteins were predicted to be associated with the accessory salivary glands or hemolymph. Knowledge of the proteins that regulate virus transmission and their predicted locations will aid in understanding the biochemical mechanisms regulating circulative virus transmission in aphids, as well as in identifying new targets to block transmission. PMID:21159868

  4. Extracellular matrix mineralization in periodontal tissues: Noncollagenous matrix proteins, enzymes, and relationship to hypophosphatasia and X-linked hypophosphatemia

    PubMed Central

    McKee, Marc D.; Hoac, Betty; Addison, William N.; Barros, Nilana M.T.; Millán, José Luis; Chaussain, Catherine

    2013-01-01

    As broadly demonstrated for the formation of a functional skeleton, proper mineralization of periodontal alveolar bone and teeth – where calcium phosphate crystals are deposited and grow within an extracellular matrix – is essential to dental function. Mineralization defects in tooth dentin and cementum of the periodontium invariably lead to a weak (soft or brittle) dentition such that teeth become loose and prone to infection and are lost prematurely. Mineralization of the extremities of periodontal ligament fibres (Sharpey's fibres) where they insert into tooth cementum and alveolar bone is also essential for the function of the tooth suspensory apparatus in occlusion and mastication. Molecular determinants of mineralization in these tissues include mineral ion concentrations (phosphate and calcium), pyrophosphate, small integrin-binding ligand N-linked glycoproteins (SIBLINGs), and matrix vesicles. Amongst the enzymes important in regulating these mineralization determinants, two are discussed at length here with clinical examples given, namely tissue-nonspecific alkaline phosphatase (TNAP) and phosphate-regulating gene with homologies to endopeptidases on the X chromosome (PHEX). Inactivating mutations in these enzymes in humans and in mouse models lead to the soft bones and teeth characteristic of hypophosphatasia (HPP) and X-linked hypophosphatemia (XLH), respectively, where levels of local and systemic circulating mineralization determinants are perturbed. In XLH, in addition to renal phosphate wasting causing low circulating phosphate levels, phosphorylated mineralization-regulating SIBLING proteins such as matrix extracellular phosphoglycoprotein (MEPE) and osteopontin (OPN), and the phosphorylated peptides proteolytically released from them such as the acidic serine- and aspartate-rich motif (ASARM) peptide, may accumulate locally to impair mineralization in this disease. PMID:23931057

  5. Determination of ion mobility collision cross sections for unresolved isomeric mixtures using tandem mass spectrometry and chemometric deconvolution.

    PubMed

    Harper, Brett; Neumann, Elizabeth K; Stow, Sarah M; May, Jody C; McLean, John A; Solouki, Touradj

    2016-10-05

    Ion mobility (IM) is an important analytical technique for determining ion collision cross section (CCS) values in the gas phase and gaining insight into molecular structures and conformations. However, limited instrument resolving powers for IM may restrict adequate characterization of conformationally similar ions, such as structural isomers, and reduce the accuracy of IM-based CCS calculations. Recently, we introduced an automated technique for extracting "pure" IM and collision-induced dissociation (CID) mass spectra of IM-overlapping species using chemometric deconvolution of post-IM/CID mass spectrometry (MS) data [J. Am. Soc. Mass Spectrom., 2014, 25, 1810-1819]. Here we extend those capabilities to demonstrate how extracted IM profiles can be used to calculate accurate CCS values of peptide isomer ions which are not fully resolved by IM. We show that CCS values obtained from deconvoluted IM spectra match the CCS values measured from the individually analyzed corresponding peptides on uniform-field IM instrumentation. We introduce an approach that utilizes experimentally determined IM arrival time (AT) "shift factors" to compensate for ion acceleration variations during post-IM/CID and significantly improve the accuracy of the calculated CCS values. Also, we discuss details of this IM deconvolution approach and compare empirical CCS values from traveling wave (TW)IM-MS and drift tube (DT)IM-MS with theoretically calculated CCS values using the projected superposition approximation (PSA). For example, experimentally measured deconvoluted TWIM-MS mean CCS values for doubly protonated RYGGFM, RMFGYG, MFRYGG, and FRMYGG peptide isomers were 288.8 Å², 295.1 Å², 296.8 Å², and 300.1 Å²; all four of these CCS values were within 1.5% of independently measured DTIM-MS values. Copyright © 2016 Elsevier B.V. All rights reserved.
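
    The final conversion from a measured mobility to a CCS value follows the Mason-Schamp relation; a sketch assuming a reduced mobility K0 in m²/V/s and a nitrogen drift gas (the AT shift-factor correction described in the abstract would be applied upstream of this step):

        import numpy as np

        E = 1.602176634e-19          # elementary charge, C
        KB = 1.380649e-23            # Boltzmann constant, J/K
        N0 = 2.6867811e25            # gas number density at standard conditions, m^-3
        DA = 1.66053906660e-27       # kg per dalton

        def mason_schamp_ccs(K0, z, m_ion, m_gas=28.0134, T=300.0):
            """Mason-Schamp relation: convert a reduced mobility K0 (m^2/V/s)
            into a collision cross section in m^2 (multiply by 1e20 for
            Angstrom^2); masses are in daltons, N2 drift gas by default."""
            mu = m_ion * m_gas / (m_ion + m_gas) * DA    # reduced mass, kg
            return (3.0 * z * E) / (16.0 * N0 * K0) * np.sqrt(2.0 * np.pi / (mu * KB * T))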

  6. Image restoration and superresolution as probes of small scale far-IR structure in star forming regions

    NASA Technical Reports Server (NTRS)

    Lester, D. F.; Harvey, P. M.; Joy, M.; Ellis, H. B., Jr.

    1986-01-01

    Far-infrared continuum studies from the Kuiper Airborne Observatory are described that are designed to fully exploit the small-scale spatial information that this facility can provide. This work gives the clearest picture to date of the structure of galactic and extragalactic star forming regions in the far infrared. Work is presently being done with slit scans taken simultaneously at 50 and 100 microns, yielding one-dimensional data. Scans of sources in different directions have been used to obtain some information on two-dimensional structure. Planned work with linear arrays will allow us to generalize our techniques to two-dimensional image restoration. For faint sources, spatial information at the diffraction limit of the telescope is obtained, while for brighter sources, nonlinear deconvolution techniques have allowed us to improve on the diffraction limit by as much as a factor of four. Information on the details of the color temperature distribution is derived as well. This is made possible by the accuracy with which the instrumental point-source profile (PSP) is determined at both wavelengths. While these two PSPs are different, data at different wavelengths can be compared by proper spatial filtering. Considerable effort has been devoted to implementing deconvolution algorithms. Nonlinear deconvolution methods offer the potential of superresolution -- that is, inference of power at spatial frequencies that exceed the D/lambda cutoff. This potential arises from the algorithm's implicit assumption of positivity of the deconvolved data, a universally justifiable constraint for photon processes. We have tested two nonlinear deconvolution algorithms on our data: the Richardson-Lucy (R-L) method and the Maximum Entropy Method (MEM). The limits of image deconvolution techniques for achieving spatial resolution are addressed.
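
    One plausible reading of the "proper spatial filtering" used to compare the two wavelengths can be sketched directly: convolving each scan with the other wavelength's point-source profile gives both maps the same effective beam before color temperatures are formed (an illustrative interpretation, not the authors' code):

        import numpy as np

        def match_beams(scan50, psp50, scan100, psp100):
            """Cross-convolve each 1-D scan with the other wavelength's
            point-source profile so both share the same effective beam;
            ratios of the returned profiles can then be converted into
            color temperatures point by point."""
            return (np.convolve(scan50, psp100, mode="same"),
                    np.convolve(scan100, psp50, mode="same"))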

  7. Real-Time Demonstration of Split Skin Graft Inosculation and Integra Dermal Matrix Neovascularization Using Confocal Laser Scanning Microscopy

    PubMed Central

    Greenwood, John; Amjadi, Mahyar; Dearman, Bronwyn; Mackie, Ian

    2009-01-01

    Objectives: During the first 48 hours after placement, an autograft “drinks” nutrients and dissolved oxygen from fluid exuding from the underlying recipient bed (“plasmatic imbibition”). The theory of inosculation (that skin grafts subsequently obtain nourishment via blood vessel “anastomosis” between new vessels invading from the wound bed and existing graft vessels) was hotly debated from the late 19th to mid-20th century. This study aimed to noninvasively observe blood flow in split skin grafts and Integra™ dermal regeneration matrix to provide further proof of inosculation and to contrast the structure of vascularization in both materials, reflecting mechanism. Methods: Observations were made both clinically and using confocal microscopy on normal skin, split skin graft, and Integra™. The VivaScope™ allows noninvasive, real-time, in vivo images of tissue to be obtained. Results: Observations of blood flow and tissue architecture in autologous skin graft and Integra™ suggest that 2 very different processes are occurring in the establishment of circulation in each case. Inosculation provides rapid circulatory return to skin grafts whereas slower neovascularization creates an unusual initial Integra™ circulation. Conclusions: The advent of confocal laser microscopy like the VivaScope 1500™, together with “virtual” journals such as ePlasty, enables us to provide exciting images and distribute them widely to a “reading” audience. The development of the early Integra™ vasculature by neovascularization results in a large-vessel, high-volume, rapid flow circulation contrasting markedly from the inosculatory process in skin grafts and the capillary circulation in normal skin and merits further (planned) investigation. PMID:19787028

  8. Biomarkers of the extracellular matrix and of collagen fragments.

    PubMed

    Chalikias, Georgios K; Tziakas, Dimitrios N

    2015-03-30

    A great body of evidence has shown that extracellular matrix (ECM) alterations are present in the major types of cardiac diseases: ischemic heart disease, heart disease associated with pressure overload, heart disease associated with volume overload, and intrinsic myocardial disease or cardiomyopathy. Collagen, type I and III, is the principal structural protein found in the myocardium and its pro- or telopeptides are released into the circulation during the course of cardiovascular diseases. Therefore, these peptides may reflect collagen synthesis and break-down and also represent a much more useful tool to address ECM changes from a distance. Clinical trials have been performed during recent years to examine the usage of these peptides as diagnostic or prognostic biomarkers in heart failure (HF) patients. This review aims to summarize published data concerning cardiac ECM and its circulating biomarkers. Studies that focused on collagen metabolism related biomarkers in patients with HF are analyzed. Finally, limitations associated with the clinical use of the aforementioned biomarkers are also discussed. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. Long range forecasts of the Northern Hemisphere anomalies with antecedent sea surface temperature patterns

    NASA Technical Reports Server (NTRS)

    Kung, Ernest C.

    1994-01-01

    The contract research has been conducted in the following three major areas: analysis of numerical simulations and parallel observations of atmospheric blocking, diagnosis of the lower boundary heating and the response of the atmospheric circulation, and comprehensive assessment of long-range forecasting with numerical and regression methods. The essential scientific and developmental purpose of this contract research is to extend our capability of numerical weather forecasting by the comprehensive general circulation model. The systematic work as listed above is thus geared to developing a technological basis for future NASA long-range forecasting.

  10. The rapid manufacture of uniform composite multicellular-biomaterial micropellets, their assembly into macroscopic organized tissues, and potential applications in cartilage tissue engineering.

    PubMed

    Babur, Betul Kul; Kabiri, Mahboubeh; Klein, Travis Jacob; Lott, William B; Doran, Michael Robert

    2015-01-01

    We and others have published on the rapid manufacture of micropellet tissues, typically formed from 100-500 cells each. The micropellet geometry enhances cellular biological properties, and in many cases the micropellets can subsequently be utilized as building blocks to assemble complex macrotissues. Generally, micropellets are formed from cells alone; however, when replicating matrix-rich tissues such as cartilage, it would be ideal if matrix or biomaterial supplements could be incorporated directly into the micropellet during the manufacturing process. Herein we describe a method to efficiently incorporate donor cartilage matrix into tissue-engineered cartilage micropellets. We lyophilized bovine cartilage matrix and then shattered it into microscopic pieces having average dimensions < 10 μm in diameter; we termed this microscopic donor matrix "cartilage dust (CD)". Using a microwell platform, we show that ~0.83 μg CD can be rapidly and efficiently incorporated into single multicellular aggregates formed from 180 bone marrow mesenchymal stem/stromal cells (MSC) each. The microwell platform enabled the rapid manufacture of thousands of replica composite micropellets, with each micropellet having a material/CD core and a cellular surface. This micropellet organization enabled the rapid bulking up of the micropellet core matrix content and left an adhesive cellular outer surface. This morphological organization enabled the ready assembly of the composite micropellets into macroscopic tissues. Generically, this is a versatile method that enables the rapid and uniform integration of biomaterials into multicellular micropellets that can then be used as tissue building blocks. In this study, the addition of CD resulted in an approximately 8-fold volume increase in the micropellets, with the donor matrix contributing to an increase in total cartilage matrix content. Composite micropellets were readily assembled into macroscopic cartilage tissues; the incorporation of CD enhanced tissue size and matrix content but did not enhance chondrogenic gene expression.

  11. Effect of endodontic irrigants on biofilm matrix polysaccharides.

    PubMed

    Tawakoli, P N; Ragnarsson, K T; Rechenberg, D K; Mohn, D; Zehnder, M

    2017-02-01

    To specifically investigate the effect of endodontic irrigants at their clinical concentration on matrix polysaccharides of cultured biofilms. Saccharolytic effects of 3% H₂O₂, 2% chlorhexidine (CHX), 17% EDTA, 5% NaOCl and 0.9% saline (control) were tested using agarose (α-1,3 and β-1,4 glycosidic bonds) blocks (n = 3) in a weight assay. The irrigants were also applied to three-species biofilms (Streptococcus mutans UAB 159, Streptococcus oralis OMZ 607 and Actinomyces oris OMZ 745) grown anaerobically on hydroxyapatite discs (n = 6). Glycoconjugates in the matrix and total bacterial cell volumes were determined using combined Concanavalin A-/Syto 59-staining and confocal laser-scanning microscopy. Volumes of each scanned area (triplicates/sample) were calculated using Imaris software. Data were compared between groups using one-way ANOVA/Tukey HSD, α = 0.05. The weight assay revealed that NaOCl was the only irrigant under investigation capable of dissolving the agarose blocks. NaOCl eradicated stainable matrix and bacteria in cultured biofilms after 1 min of exposure (P < 0.05 compared to all groups; volumes in means ± standard deviation, 10⁻³ mm³ per 0.6 mm² disc; NaOCl matrix: 0.10 ± 0.08, bacteria: 0.03 ± 0.06; saline control matrix: 4.01 ± 1.14, bacteria: 11.56 ± 3.02). EDTA also appeared to have some effect on the biofilm matrix (EDTA matrix: 1.90 ± 0.33, bacteria: 9.26 ± 2.21), whilst H₂O₂ and CHX merely reduced bacterial cell volumes. Sodium hypochlorite can break glycosidic bonds. It dissolves glycoconjugates in the biofilm matrix. It also lyses bacterial cells. © 2015 International Endodontic Journal. Published by John Wiley & Sons Ltd.

  12. The Rapid Manufacture of Uniform Composite Multicellular-Biomaterial Micropellets, Their Assembly into Macroscopic Organized Tissues, and Potential Applications in Cartilage Tissue Engineering

    PubMed Central

    Kul Babur, Betul; Kabiri, Mahboubeh; Klein, Travis Jacob; Lott, William B.; Doran, Michael Robert

    2015-01-01

    We and others have published on the rapid manufacture of micropellet tissues, typically formed from 100–500 cells each. The micropellet geometry enhances cellular biological properties, and in many cases the micropellets can subsequently be utilized as building blocks to assemble complex macrotissues. Generally, micropellets are formed from cells alone; however, when replicating matrix-rich tissues such as cartilage, it would be ideal if matrix or biomaterial supplements could be incorporated directly into the micropellet during the manufacturing process. Herein we describe a method to efficiently incorporate donor cartilage matrix into tissue-engineered cartilage micropellets. We lyophilized bovine cartilage matrix and then shattered it into microscopic pieces having average dimensions < 10 μm in diameter; we termed this microscopic donor matrix “cartilage dust (CD)”. Using a microwell platform, we show that ~0.83 μg CD can be rapidly and efficiently incorporated into single multicellular aggregates formed from 180 bone marrow mesenchymal stem/stromal cells (MSC) each. The microwell platform enabled the rapid manufacture of thousands of replica composite micropellets, with each micropellet having a material/CD core and a cellular surface. This micropellet organization enabled the rapid bulking up of the micropellet core matrix content and left an adhesive cellular outer surface. This morphological organization enabled the ready assembly of the composite micropellets into macroscopic tissues. Generically, this is a versatile method that enables the rapid and uniform integration of biomaterials into multicellular micropellets that can then be used as tissue building blocks. In this study, the addition of CD resulted in an approximately 8-fold volume increase in the micropellets, with the donor matrix contributing to an increase in total cartilage matrix content. Composite micropellets were readily assembled into macroscopic cartilage tissues; the incorporation of CD enhanced tissue size and matrix content but did not enhance chondrogenic gene expression. PMID:26020956

  13. Restoring defect structures in 3C-SiC/Si (001) from spherical aberration-corrected high-resolution transmission electron microscope images by means of deconvolution processing.

    PubMed

    Wen, C; Wan, W; Li, F H; Tang, D

    2015-04-01

    The [110] cross-sectional samples of 3C-SiC/Si (001) were observed with a spherical aberration-corrected 300 kV high-resolution transmission electron microscope. Two images, taken away from the Scherzer focus condition and therefore not representing the projected structures intuitively, were utilized for performing the deconvolution. The principle and procedure of image deconvolution and atom-species recognition are summarized. The restoration of defect structures, together with the recognition of Si and C atoms from the experimental images, is illustrated. Structure maps of an intrinsic stacking fault in the SiC region, and of Lomer and 60° shuffle dislocations at the interface, have been obtained at the atomic level. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Sheet-scanned dual-axis confocal microscopy using Richardson-Lucy deconvolution.

    PubMed

    Wang, D; Meza, D; Wang, Y; Gao, L; Liu, J T C

    2014-09-15

    We have previously developed a line-scanned dual-axis confocal (LS-DAC) microscope with subcellular resolution suitable for high-frame-rate diagnostic imaging at shallow depths. Due to the loss of confocality along one dimension, the contrast (signal-to-background ratio) of an LS-DAC microscope is degraded compared with that of a point-scanned DAC microscope. However, by using an sCMOS camera for detection, a short oblique light sheet is imaged at each scanned position. Therefore, by scanning the light sheet in only one dimension, a thin 3D volume is imaged. Both sequential two-dimensional deconvolution and three-dimensional deconvolution are performed on the thin image volume to improve the resolution and contrast of one en face confocal image section at the center of the volume, a technique we call sheet-scanned dual-axis confocal (SS-DAC) microscopy.

  15. Computerized glow curve deconvolution of thermoluminescent emission from polyminerals of Jamaica Mexican flower

    NASA Astrophysics Data System (ADS)

    Favalli, A.; Furetta, C.; Zaragoza, E. Cruz; Reyes, A.

    The aim of this work is to study the main thermoluminescence (TL) characteristics of the inorganic polyminerals extracted from dehydrated Jamaica flower or roselle (Hibiscus sabdariffa L.), a member of the Malvaceae family, of Mexican origin. TL emission properties of the polymineral fraction in powder were studied using the initial rise (IR) method. The complex structure and kinetic parameters of the glow curves have been analysed accurately using computerized glow curve deconvolution (CGCD) assuming an exponential distribution of trapping levels. The extension of the IR method to the case of a continuous and exponential distribution of traps is reported, together with the derivation of the TL glow curve deconvolution functions for a continuous trap distribution. CGCD is performed both for the case of a temperature-independent frequency factor, s, and for the case in which s is a function of temperature.
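
    For readers who want to see the initial rise (IR) method in action, the sketch below recovers a trap activation energy from the low-temperature tail of a glow peak, where I(T) ≈ C·exp(−E/kT), so the slope of ln I versus 1/T is −E/k. This is a minimal illustration on synthetic data, not the authors' CGCD code.

        import numpy as np

        K_B = 8.617e-5  # Boltzmann constant in eV/K

        def initial_rise_energy(T, I):
            """Estimate the trap activation energy E (eV) from the initial-rise
            region of a TL glow curve, where I(T) ~ exp(-E / (k_B * T))."""
            slope, _ = np.polyfit(1.0 / T, np.log(I), 1)  # ln I = const - (E/k_B)(1/T)
            return -slope * K_B

        # Synthetic initial-rise data for a trap with E = 1.0 eV (illustrative only).
        T = np.linspace(350.0, 400.0, 50)        # temperatures in the low-T tail, K
        I = 1e12 * np.exp(-1.0 / (K_B * T))      # idealized initial-rise intensity
        print(f"recovered E = {initial_rise_energy(T, I):.3f} eV")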

  16. Punch stretching process monitoring using acoustic emission signal analysis. II - Application of frequency domain deconvolution

    NASA Technical Reports Server (NTRS)

    Liang, Steven Y.; Dornfeld, David A.; Nickerson, Jackson A.

    1987-01-01

    The coloring effect on the acoustic emission (AE) signal due to the frequency response of the data acquisition/processing instrumentation may bias the interpretation of AE signal characteristics. In this paper, a frequency domain deconvolution technique, which involves the identification of the instrumentation transfer functions and multiplication of the AE signal spectrum by the inverse of these system functions, has been carried out. In this way, changes in AE signal characteristics can be better interpreted as the result of changes in the state of the process alone. The punch stretching process was used as an example to demonstrate the application of the technique. Results showed that, through the deconvolution, the frequency characteristics of AE signals generated during stretching became more distinctive and can be used more effectively as tools for process monitoring.
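
    The deconvolution step described above amounts to dividing the measured spectrum by the instrumentation transfer function. A minimal numpy sketch follows; the Wiener-style stabilizer eps is an addition that keeps the inverse filter from blowing up near zeros of H(f), and is not necessarily part of the original procedure.

        import numpy as np

        def frequency_domain_deconvolution(y, h, eps=1e-3):
            """Recover X(f) = Y(f) / H(f) from a measured signal y and the
            instrumentation impulse response h; eps stabilizes the division
            where |H(f)| is close to zero."""
            n = len(y)
            Y = np.fft.rfft(y, n)
            H = np.fft.rfft(h, n)
            X = Y * np.conj(H) / (np.abs(H) ** 2 + eps)   # regularized inverse filter
            return np.fft.irfft(X, n)

        # Illustrative check: blur a spike train with a known response, then invert.
        rng = np.random.default_rng(0)
        x = np.zeros(256); x[[40, 90, 180]] = 1.0
        h = np.exp(-np.arange(256) / 5.0)                 # toy instrument response
        y = np.convolve(x, h)[:256] + 0.01 * rng.standard_normal(256)
        x_hat = frequency_domain_deconvolution(y, h)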

  17. Improving the Ability of Image Sensors to Detect Faint Stars and Moving Objects Using Image Deconvolution Techniques

    PubMed Central

    Fors, Octavi; Núñez, Jorge; Otazu, Xavier; Prades, Albert; Cardinal, Robert D.

    2010-01-01

    In this paper we show how image deconvolution techniques can increase the ability of image sensors, for example CCD imagers, to detect faint stars or faint orbital objects (small satellites and space debris). In the case of faint stars, we show that this benefit is equivalent to doubling the quantum efficiency of the image sensor used, or to increasing the effective telescope aperture by more than 30%, without decreasing the astrometric precision or introducing artificial bias. In the case of orbital objects, the deconvolution technique can double the signal-to-noise ratio of the image, which helps to discover and control dangerous objects such as space debris or lost satellites. The benefits obtained using CCD detectors can be extrapolated to any kind of image sensor. PMID:22294896
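
    For readers who want to experiment with this kind of restoration, scikit-image ships a Richardson-Lucy deconvolution routine (recent versions name the iteration argument num_iter). The toy star field and Gaussian PSF below are illustrative, not the authors' pipeline.

        import numpy as np
        from scipy.signal import fftconvolve
        from skimage import restoration

        rng = np.random.default_rng(1)

        # Toy star field: two point sources blurred by a normalized Gaussian PSF.
        image = np.zeros((64, 64)); image[20, 20] = image[40, 45] = 1.0
        yy, xx = np.mgrid[-7:8, -7:8]
        psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2)); psf /= psf.sum()
        blurred = fftconvolve(image, psf, mode="same") + 1e-3 * rng.standard_normal((64, 64))

        # Richardson-Lucy restoration; num_iter trades resolution against noise.
        deconvolved = restoration.richardson_lucy(np.clip(blurred, 0, None), psf, num_iter=30)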

  18. Improving the ability of image sensors to detect faint stars and moving objects using image deconvolution techniques.

    PubMed

    Fors, Octavi; Núñez, Jorge; Otazu, Xavier; Prades, Albert; Cardinal, Robert D

    2010-01-01

    In this paper we show how image deconvolution techniques can increase the ability of image sensors, for example CCD imagers, to detect faint stars or faint orbital objects (small satellites and space debris). In the case of faint stars, we show that this benefit is equivalent to doubling the quantum efficiency of the image sensor used, or to increasing the effective telescope aperture by more than 30%, without decreasing the astrometric precision or introducing artificial bias. In the case of orbital objects, the deconvolution technique can double the signal-to-noise ratio of the image, which helps to discover and control dangerous objects such as space debris or lost satellites. The benefits obtained using CCD detectors can be extrapolated to any kind of image sensor.

  19. Regression-assisted deconvolution.

    PubMed

    McIntyre, Julie; Stefanski, Leonard A

    2011-06-30

    We present a semi-parametric deconvolution estimator for the density function of a random variable X that is measured with error, a common challenge in many epidemiological studies. Traditional deconvolution estimators rely only on assumptions about the distribution of X and the error in its measurement, and ignore information available in auxiliary variables. Our method assumes the availability of a covariate vector statistically related to X by a mean-variance function regression model, where regression errors are normally distributed and independent of the measurement errors. Simulations suggest that the estimator achieves a much lower integrated squared error than the observed-data kernel density estimator when models are correctly specified and the assumption of normal regression errors is met. We illustrate the method using anthropometric measurements of newborns to estimate the density function of newborn length. Copyright © 2011 John Wiley & Sons, Ltd.
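
    The traditional estimator this method improves upon can be sketched compactly: the classical Fourier deconvoluting kernel density estimator for W = X + U with known normal error, using a sinc (flat-top) kernel. The bandwidth and error scale below are illustrative; this is not the authors' regression-assisted estimator.

        import numpy as np

        def deconvolution_kde(w, sigma_u, x_grid, h):
            """Deconvoluting KDE of the density of X from W = X + U with
            U ~ N(0, sigma_u^2), using a sinc kernel whose Fourier transform
            is 1 on |t| <= 1/h and 0 outside (so the integral is truncated)."""
            t = np.linspace(-1.0 / h, 1.0 / h, 512)
            phi_W = np.exp(1j * np.outer(t, w)).mean(axis=1)   # empirical char. function
            phi_U = np.exp(-0.5 * (sigma_u * t) ** 2)          # known error char. function
            ratio = phi_W / phi_U                              # deconvolution in Fourier space
            f = np.array([np.trapz(np.exp(-1j * t * x) * ratio, t).real
                          for x in x_grid]) / (2.0 * np.pi)
            return np.maximum(f, 0.0)

        rng = np.random.default_rng(2)
        x = rng.normal(0.0, 1.0, 500)
        w = x + rng.normal(0.0, 0.5, 500)                      # error-contaminated sample
        f_hat = deconvolution_kde(w, 0.5, np.linspace(-4, 4, 81), h=0.4)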

  20. Transforming growth factor-β signalling controls human breast cancer metastasis in a zebrafish xenograft model.

    PubMed

    Drabsch, Yvette; He, Shuning; Zhang, Long; Snaar-Jagalska, B Ewa; ten Dijke, Peter

    2013-11-07

    The transforming growth factor beta (TGF-β) signalling pathway is known to control human breast cancer invasion and metastasis. We demonstrate that the zebrafish xenograft assay is a robust and dependable animal model for examining the role of pharmacological modulators and genetic perturbation of TGF-β signalling in human breast tumour cells. We injected cancer cells into the embryonic circulation (duct of Cuvier) and examined their invasion and metastasis into the avascular collagenous tail. Various aspects of the TGF-β signalling pathway were blocked by chemical inhibition, small interfering RNA (siRNA), or small hairpin RNA (shRNA). Analysis was conducted using fluorescence microscopy. Breast cancer cells with different levels of malignancy, according to in vitro and in vivo mouse studies, demonstrated invasive and metastatic properties within the embryonic zebrafish model that nicely correlated with their differential tumourigenicity in mouse models. Interestingly, MCF10A M2 and M4 cells invaded into the caudal hematopoietic tissue and were visible as a cluster of cells, whereas MDA MB 231 cells invaded into the tail fin and were visible as individual cells. Pharmacological inhibition with TGF-β receptor kinase inhibitors or tumour-specific Smad4 knockdown disturbed invasion and metastasis in the zebrafish xenograft model and closely mimicked the results we obtained with these cells in a mouse metastasis model. Inhibition of matrix metalloproteinases, which are induced by TGF-β in breast cancer cells, blocked invasion and metastasis of breast cancer cells. The zebrafish-embryonic breast cancer xenograft model is applicable for the mechanistic understanding, screening and development of anti-TGF-β drugs for the treatment of metastatic breast cancer in a timely and cost-effective manner.

  1. The effect of Fisher information matrix approximation methods in population optimal design calculations.

    PubMed

    Strömberg, Eric A; Nyberg, Joakim; Hooker, Andrew C

    2016-12-01

    With the increasing popularity of optimal design in drug development, it is important to understand how the approximations and implementations of the Fisher information matrix (FIM) affect the resulting optimal designs. The aim of this work was to investigate the impact on design performance when using two common approximations to the population model and the full or block-diagonal FIM implementations for optimization of sampling points. Sampling schedules for two example experiments based on population models were optimized using the FO and FOCE approximations and the full and block-diagonal FIM implementations. The number of support points was compared between the designs for each example experiment. The performance of these designs, based on simulations and estimations, was investigated by computing the bias of the parameters as well as through the use of an empirical D-criterion confidence interval. Simulations were performed when the design was computed with the true parameter values as well as with misspecified parameter values. The FOCE approximation and the full FIM implementation yielded designs with more support points and less clustering of sample points than designs optimized with the FO approximation and the block-diagonal implementation. The D-criterion confidence intervals showed no performance differences between the full and block-diagonal FIM optimal designs when assuming true parameter values. However, the FO-approximated block-diagonal FIM designs had higher bias than the other designs. When assuming parameter misspecification in the design evaluation, the FO full-FIM optimal design was superior to the FO block-diagonal FIM design in both examples.
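
    As a concrete, much-simplified illustration of evaluating a design's FIM (a fixed-effects sketch, not the population FO/FOCE machinery studied in the paper): for y = f(t, θ) + ε with ε ~ N(0, σ²), the FIM of a schedule t₁…tₙ is JᵀJ/σ² with J the sensitivity matrix, and the D-criterion is its log-determinant.

        import numpy as np

        def fim_exponential(times, theta, sigma=0.1):
            """FIM = J^T J / sigma^2 for y = A * exp(-k t) + eps at the given
            sampling times; J holds the sensitivities dy/dA and dy/dk
            (illustrative model, not one of the paper's examples)."""
            A, k = theta
            t = np.asarray(times, dtype=float)
            e = np.exp(-k * t)
            J = np.column_stack([e, -A * t * e])
            return J.T @ J / sigma**2

        theta = (10.0, 0.5)
        for schedule in ([0.5, 1.0, 1.5], [0.1, 2.0, 6.0]):   # two candidate designs
            _, logdet = np.linalg.slogdet(fim_exponential(schedule, theta))
            print(schedule, "log det FIM =", round(logdet, 2))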

  2. Linked Extreme Weather Events during Winter 2009-2010 and 2010-2011 in the Context of Northern Hemisphere Circulation Anomalies

    NASA Astrophysics Data System (ADS)

    Bosart, L. F.; Archambault, H. M.; Cordeira, J. M.

    2011-12-01

    The Northern Hemisphere (NH) planetary-scale circulation during winter 2009-2010 was characterized by an unusual combination of persistent high-latitude blocking and southward-displaced storm tracks, manifested by a strongly negative Arctic Oscillation (AO), in conjunction with a moderate El Niño event. The high-latitude blocking activity and southward-displaced storm tracks supported episodic cold-air outbreaks and enhanced storminess over parts of midlatitude eastern Asia, eastern North America, and western Europe, as well as anomalous warmth over northeastern Canada and Greenland that delayed sea ice formation and ice thickening in these areas during winter 2009-2010. Although somewhat less extreme than winter 2009-2010, the first half of winter 2010-2011 was also characterized by high-latitude blocking and southward-displaced storm tracks (manifested by negative values of the AO), while the Pacific-North American (PNA) pattern, initially negative, became neutral in late December and for most of January. Winter 2010-2011 was characterized by moderate La Niña conditions, in contrast to the moderate El Niño conditions that prevailed during winter 2009-2010. Despite the reversal of the ENSO phase from winter 2009-2010 to winter 2010-2011, high-latitude blocking activity and the associated southward-displaced storm tracks again allowed for episodic cold-air outbreaks and enhanced storminess over parts of midlatitude eastern Asia, central and eastern North America, and western Europe, with delayed sea ice formation and thickening over the Davis Strait and adjacent regions during the first half of winter 2010-2011. Beginning in late January and continuing through early February 2011, the phases of the AO and the PNA reversed, with the AO becoming positive and the PNA negative. This linked AO/PNA phase transition was associated with an extreme weather event that brought severe and record-setting cold to parts of the U.S. and Mexico, a powerful snow and ice storm in the central U.S., and a subsequent and spectacular warm-up east of the Rockies. The purpose of this presentation is to present an overview of the structure and evolution of the large-scale NH circulation anomalies during the 2009-2010 and 2010-2011 winters. Emphasis will be placed on showing how individual synoptic-scale weather events (e.g., recurving and transitioning western Pacific tropical cyclones, diabatically driven upper-level outflow from organized deep convection associated with the Madden-Julian Oscillation, and western North Atlantic storminess) contributed to the formation of significant and persistent large-scale circulation anomalies, and how these large-scale circulation anomalies in turn impacted the storm tracks, regional temperature and precipitation anomalies, and the associated extreme weather.

  3. Preconditioned conjugate gradient wave-front reconstructors for multiconjugate adaptive optics.

    PubMed

    Gilles, Luc; Ellerbroek, Brent L; Vogel, Curtis R

    2003-09-10

    Multiconjugate adaptive optics (MCAO) systems with 10⁴-10⁵ degrees of freedom have been proposed for future giant telescopes. Using standard matrix methods to compute, optimize, and implement wavefront control algorithms for these systems is impractical, since the number of calculations required to compute and apply the reconstruction matrix scales respectively with the cube and the square of the number of adaptive optics degrees of freedom. We develop scalable open-loop iterative sparse matrix implementations of minimum variance wave-front reconstruction for telescope diameters up to 32 m with more than 10⁴ actuators. The basic approach is the preconditioned conjugate gradient method with an efficient preconditioner, whose block structure is defined by the atmospheric turbulent layers, very much like the layer-oriented MCAO algorithms of current interest. Two cost-effective preconditioners are investigated: a multigrid solver and a simpler block symmetric Gauss-Seidel (BSGS) sweep. Both options require off-line sparse Cholesky factorizations of the diagonal blocks of the matrix system. The cost to precompute these factors scales approximately as the three-halves power of the number of estimated phase grid points per atmospheric layer, and their average update rate is typically of the order of 10⁻² Hz, i.e., 4-5 orders of magnitude lower than the typical 10³ Hz temporal sampling rate. All other computations scale almost linearly with the total number of estimated phase grid points. We present numerical simulation results to illustrate algorithm convergence. Convergence rates of both preconditioners are similar, regardless of measurement noise level, indicating that the layer-oriented BSGS sweep is as effective as the more elaborate multiresolution preconditioner.
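
    The general pattern, conjugate gradients accelerated by a preconditioner assembled from pre-factored diagonal blocks, can be sketched with SciPy. The toy block-Jacobi preconditioner below is only an analogue of the block-structured ideas above, not the authors' MCAO reconstructor.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        # Toy SPD system with a natural two-block structure (e.g., two "layers").
        n = 200
        A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
        b = np.ones(n)

        # Block-Jacobi preconditioner: factor each diagonal block off-line, once.
        blocks = [slice(0, n // 2), slice(n // 2, n)]
        factors = [spla.splu(A[s, s].tocsc()) for s in blocks]

        def apply_precond(r):
            z = np.empty_like(r)
            for s, f in zip(blocks, factors):
                z[s] = f.solve(r[s])       # cheap solve with the pre-factored block
            return z

        M = spla.LinearOperator(A.shape, matvec=apply_precond)
        x, info = spla.cg(A, b, M=M)       # info == 0 signals convergence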

  4. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling

    NASA Astrophysics Data System (ADS)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-08-01

    XDGMM uses Gaussian mixtures to perform density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if the values of some parameters are known.
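
    A short usage sketch follows. The call names reflect the scikit-learn-style interface described above, but the exact signatures are assumptions; consult the XDGMM documentation before relying on them.

        import numpy as np
        # NOTE: the import path and method signatures below are assumptions based
        # on the package description; verify against the XDGMM documentation.
        from xdgmm import XDGMM

        rng = np.random.default_rng(3)
        X = rng.normal(size=(500, 2))                    # noisy observations
        Xerr = np.tile(0.2**2 * np.eye(2), (500, 1, 1))  # per-point error covariances

        xd = XDGMM(n_components=2)   # scikit-learn style estimator per the abstract
        xd.fit(X, Xerr)              # extreme deconvolution of the error-free density
        samples = xd.sample(100)     # draw from the deconvolved mixture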

  5. Two-Dimensional Signal Processing and Storage and Theory and Applications of Electromagnetic Measurements.

    DTIC Science & Technology

    1983-06-01

    system provides a convenient, low-noise, fully parallel method of improving contrast and enhancing structural detail in an image prior to input to a ... directed towards problems in deconvolution, reconstruction from projections, bandlimited extrapolation, and shift-varying deblurring of images ... deconvolution algorithm has been studied with promising results [1] for simulated motion blurs. Future work will focus on noise effects and the extension

  6. Chemometric Deconvolution of Continuous Electrokinetic Injection Micellar Electrokinetic Chromatography Data for the Quantitation of Trinitrotoluene in Mixtures of Other Nitroaromatic Compounds

    DTIC Science & Technology

    2014-02-24

    Report NRL/MR/6110--14-9521, Naval Research Laboratory, Washington, DC 20375-5320. Approved for public release; distribution is unlimited. Science & Engineering Apprenticeship Program, American Society for Engineering Education, Washington, DC; Kevin Johnson, Navy Technology Center for Safety and ...

  7. Enhanced Seismic Imaging of Turbidite Deposits in Chicontepec Basin, Mexico

    NASA Astrophysics Data System (ADS)

    Chavez-Perez, S.; Vargas-Meleza, L.

    2007-05-01

    We test, as postprocessing tools, a combination of migration deconvolution and geometric attributes to attack the complex problems of reflector resolution and detection in migrated seismic volumes. Migration deconvolution has been empirically shown to be an effective approach for enhancing the illumination of migrated images, which are blurred versions of the subsurface reflectivity distribution, by decreasing imaging artifacts, improving spatial resolution, and alleviating acquisition footprint problems. We utilize migration deconvolution as a means to improve the quality and resolution of 3D prestack time migrated results from Chicontepec basin, Mexico, a very relevant portion of the producing onshore sector of Pemex, the Mexican petroleum company. Seismic data covers the Agua Fria, Coapechaca, and Tajin fields. It exhibits acquisition footprint problems, migration artifacts and a severe lack of resolution in the target area, where turbidite deposits need to be characterized between major erosional surfaces. Vertical resolution is about 35 m and the main hydrocarbon plays are turbidite beds no more than 60 m thick. We also employ geometric attributes (e.g., coherent energy and curvature), computed after migration deconvolution, to detect and map out depositional features, and help design development wells in the area. Results of this workflow show imaging enhancement and allow us to identify meandering channels and individual sand bodies, previously undistinguishable in the original seismic migrated images.

  8. Dependence of quantitative accuracy of CT perfusion imaging on system parameters

    NASA Astrophysics Data System (ADS)

    Li, Ke; Chen, Guang-Hong

    2017-03-01

    Deconvolution is a popular method to calculate parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is squeezed into three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress noise associated with the process. These additional complexities confound the understanding of deconvolution-based CTP imaging systems and of how their quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need to answer this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly in emergent clinical situations (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters to the CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on the cascaded systems analysis, the quantitative relationship between regularization strength, source image noise, arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide developments of CTP imaging technology for better quantification accuracy and lower radiation dose.
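
    The deconvolution operation at the heart of such analyses can be made concrete: build a convolution matrix from the arterial input function, invert it with a truncated SVD (the kind of regularization whose strength the paper analyzes), and read perfusion off the recovered residue function. A schematic sketch, not the authors' implementation:

        import numpy as np

        def ctp_deconvolve(aif, tac, dt, lam=0.1):
            """Recover the scaled residue function k(t) = CBF * R(t) from a
            tissue curve tac, where tac = dt * (A @ k) and A is the
            lower-triangular convolution matrix built from the arterial input
            function aif.  Truncated SVD supplies the regularization."""
            n = len(aif)
            A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                               for i in range(n)])
            U, s, Vt = np.linalg.svd(A)
            s_inv = np.where(s > lam * s[0], 1.0 / s, 0.0)  # drop small singular values
            return Vt.T @ (s_inv * (U.T @ tac))             # CBF ~ k.max()

        t = np.arange(40.0)                                  # s, illustrative sampling
        aif = np.exp(-((t - 10.0) / 4.0) ** 2)               # toy arterial input function
        tac = np.convolve(aif, 0.6 * np.exp(-t / 8.0))[:40]  # toy tissue curve
        k = ctp_deconvolve(aif, tac, dt=1.0)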

  9. Data Dependent Peak Model Based Spectrum Deconvolution for Analysis of High Resolution LC-MS Data

    PubMed Central

    2015-01-01

    A data-dependent peak model (DDPM) based spectrum deconvolution method was developed for the analysis of high resolution LC-MS data. To construct the selected ion chromatograms (XICs), a clustering method, density based spatial clustering of applications with noise (DBSCAN), is applied to all m/z values of an LC-MS data set to group the m/z values into each XIC. DBSCAN constructs XICs without the need for a user-defined m/z variation window. After the XIC construction, the peaks of molecular ions in each XIC are detected using both the first and the second derivative tests, followed by an optimized chromatographic peak model selection method for peak deconvolution. A total of six chromatographic peak models are considered, including Gaussian, log-normal, Poisson, gamma, exponentially modified Gaussian, and a hybrid of exponential and Gaussian models. The abundant non-overlapping peaks are chosen to find the optimal peak models, which are both data- and retention-time-dependent. Analysis of 18 spiked-in LC-MS data sets demonstrates that the proposed DDPM spectrum deconvolution method outperforms the traditional method. On average, the DDPM approach not only detected 58 more chromatographic peaks from each of the test LC-MS data sets, but also improved the retention time and peak area estimates by 3% and 6%, respectively. PMID:24533635
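
    The XIC-construction step maps directly onto scikit-learn's DBSCAN; a minimal sketch follows (eps and min_samples are placeholders, not the paper's settings).

        import numpy as np
        from sklearn.cluster import DBSCAN

        # m/z values pooled across all scans of an LC-MS run (toy values).
        mz = np.array([150.0512, 150.0514, 150.0511, 212.1180, 212.1183, 212.1179])

        # One-dimensional clustering: each cluster's members form one XIC.
        labels = DBSCAN(eps=0.002, min_samples=2).fit(mz.reshape(-1, 1)).labels_
        for lab in sorted(set(labels) - {-1}):
            print("XIC", lab, "->", mz[labels == lab])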

  10. Motion correction of PET brain images through deconvolution: I. Theoretical development and analysis in software simulations

    NASA Astrophysics Data System (ADS)

    Faber, T. L.; Raghunath, N.; Tudorascu, D.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices either require multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image via maximum-likelihood expectation maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion-blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion-blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used for correction of motion blur when subject motion is known.
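
    At its core, such a correction is an iterative MLEM (Richardson-Lucy) deblurring with the known motion kernel, x ← x · Hᵀ(y / Hx). A 1-D schematic follows; it is not the authors' 3-D PET code.

        import numpy as np

        def mlem_deblur(y, psf, n_iter=50, eps=1e-12):
            """MLEM / Richardson-Lucy iteration for y = psf * x with x, y >= 0;
            psf is assumed normalized to unit sum."""
            x = np.full_like(y, y.mean())
            psf_flip = psf[::-1]                              # adjoint of the blur
            for _ in range(n_iter):
                est = np.convolve(x, psf, mode="same")
                x = x * np.convolve(y / (est + eps), psf_flip, mode="same")
            return x

        # Motion blur of a 1-D "phantom" by a uniform (box) kernel, then restoration.
        x_true = np.zeros(128); x_true[50:60] = 1.0
        psf = np.ones(9) / 9.0
        y = np.convolve(x_true, psf, mode="same")
        x_hat = mlem_deblur(y, psf)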

  11. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, such traditional regularization methods, for example Tikhonov regularization and truncated singular value decomposition, commonly fail to solve large-scale ill-posed inverse problems at moderate computational cost. In this paper, taking into account the sparse characteristic of impact forces, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction, and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, covering small- or medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust, both in single impact force reconstruction and in consecutive impact force reconstruction.
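
    To make the sparse deconvolution model concrete, the sketch below solves min ½‖Ax − y‖² + λ‖x‖₁ with the simple ISTA iteration rather than the paper's PDIPM; the circulant toy convolution matrix and parameter values are illustrative.

        import numpy as np

        def ista_deconvolve(A, y, lam, n_iter=500):
            """ISTA for the sparse model min 0.5*||A x - y||^2 + lam*||x||_1."""
            L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                z = x - A.T @ (A @ x - y) / L        # gradient step
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
            return x

        # Toy impact-force setup: circulant convolution matrix from a response h.
        rng = np.random.default_rng(4)
        h = np.exp(-np.arange(60) / 10.0) * np.sin(np.arange(60) / 3.0)
        A = np.column_stack([np.roll(np.pad(h, (0, 140)), k) for k in range(200)])
        x_true = np.zeros(200); x_true[[50, 120]] = [1.0, 0.7]   # two sparse impacts
        y = A @ x_true + 0.01 * rng.standard_normal(200)
        x_hat = ista_deconvolve(A, y, lam=0.05)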

  12. Partitioning of nitroxides in dispersed systems investigated by ultrafiltration, EPR and NMR spectroscopy.

    PubMed

    Krudopp, Heimke; Sönnichsen, Frank D; Steffen-Heins, Anja

    2015-08-15

    The partitioning behavior of paramagnetic nitroxides in dispersed systems can be determined by deconvolution of electron paramagnetic resonance (EPR) spectra, giving results equivalent to those of the validated methods of ultrafiltration (UF) and pulsed-field gradient nuclear magnetic resonance spectroscopy (PFG-NMR). The partitioning behavior of nitroxides of increasing lipophilicity was investigated in anionic, cationic and nonionic micellar systems and in 10 wt% o/w emulsions. Apart from EPR spectrum deconvolution, PFG-NMR was used in micellar solutions as a non-destructive approach, while UF is based on separation of a very small volume of the aqueous phase. As a function of their substituents and lipophilicity, the proportions of nitroxides solubilized in the micellar or emulsion interface increased with increasing nitroxide lipophilicity for all emulsifiers used. Comparing the different approaches, EPR deconvolution and UF revealed comparable proportions of nitroxides solubilized in the interfaces; these proportions were higher than those found with PFG-NMR. For the PFG-NMR self-diffusion experiments the reduced nitroxides were used, revealing high dynamics of the hydroxylamines and emulsifiers. Deconvolution of EPR spectra turned out to be the preferred method for measuring the partitioning behavior of paramagnetic molecules, as it enables distinguishing between several populations at their individual solubilization sites. Copyright © 2015 Elsevier Inc. All rights reserved.
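
    Conceptually, the spectral deconvolution used here reduces to non-negative least squares of the composite spectrum against pure-site reference spectra. A schematic sketch with simulated reference spectra (real ones would come from calibration samples):

        import numpy as np
        from scipy.optimize import nnls

        # Reference spectra of the probe in water and at the micellar interface
        # (simulated here; in practice they are measured on calibration samples).
        field = np.linspace(-1.0, 1.0, 400)
        s_aq = np.gradient(np.exp(-(field / 0.05) ** 2))   # narrow line (toy shape)
        s_mic = np.gradient(np.exp(-(field / 0.15) ** 2))  # broadened line (toy shape)

        rng = np.random.default_rng(5)
        measured = 0.3 * s_aq + 0.7 * s_mic + 1e-3 * rng.standard_normal(400)

        coeffs, _ = nnls(np.column_stack([s_aq, s_mic]), measured)
        fractions = coeffs / coeffs.sum()   # relative weights of the two populations
        print(fractions)                    # ~ [0.3, 0.7]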

  13. Resolving complex fibre architecture by means of sparse spherical deconvolution in the presence of isotropic diffusion

    NASA Astrophysics Data System (ADS)

    Zhou, Q.; Michailovich, O.; Rathi, Y.

    2014-03-01

    High angular resolution diffusion imaging (HARDI) improves upon more traditional diffusion tensor imaging (DTI) in its ability to resolve the orientations of crossing and branching neural fibre tracts. The HARDI signals are measured over a spherical shell in q-space, and are usually used as an input to q-ball imaging (QBI), which allows estimation of the diffusion orientation distribution functions (ODFs) associated with a given region of interest. Unfortunately, the partial nature of single-shell sampling imposes limits on the estimation accuracy. As a result, the recovered ODFs may not possess sufficient resolution to reveal the orientations of fibre tracts which cross each other at acute angles. A possible solution to the problem of limited resolution of QBI is provided by means of spherical deconvolution, a particular instance of which is sparse deconvolution. However, while capable of yielding high-resolution reconstructions over spatial locations corresponding to white matter, such methods tend to become unstable when applied to anatomical regions with a substantial content of isotropic diffusion. To resolve this problem, a new deconvolution approach is proposed in this paper. Apart from being uniformly stable across the whole brain, the proposed method allows one to quantify the isotropic component of cerebral diffusion, which is known to be a useful diagnostic measure by itself.

  14. Convex blind image deconvolution with inverse filtering

    NASA Astrophysics Data System (ADS)

    Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong

    2018-03-01

    Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.

  15. Model-free quantification of dynamic PET data using nonparametric deconvolution

    PubMed Central

    Zanderigo, Francesca; Parsey, Ramin V; Todd Ogden, R

    2015-01-01

    Dynamic positron emission tomography (PET) data are usually quantified using compartment models (CMs) or derived graphical approaches. Often, however, CMs either do not properly describe the tracer kinetics, or are not identifiable, leading to nonphysiologic estimates of the tracer binding. The PET data are modeled as the convolution of the metabolite-corrected input function and the tracer impulse response function (IRF) in the tissue. Using nonparametric deconvolution methods, it is possible to obtain model-free estimates of the IRF, from which functionals related to tracer volume of distribution and binding may be computed, but this approach has rarely been applied in PET. Here, we apply nonparametric deconvolution using singular value decomposition to simulated and test–retest clinical PET data with four reversible tracers well characterized by CMs ([11C]CUMI-101, [11C]DASB, [11C]PE2I, and [11C]WAY-100635), and systematically compare reproducibility, reliability, and identifiability of various IRF-derived functionals with that of traditional CMs outcomes. Results show that nonparametric deconvolution, completely free of any model assumptions, allows for estimates of tracer volume of distribution and binding that are very close to the estimates obtained with CMs and, in some cases, show better test–retest performance than CMs outcomes. PMID:25873427

  16. Communication-avoiding symmetric-indefinite factorization

    DOE PAGES

    Ballard, Grey Malone; Becker, Dulcenia; Demmel, James; ...

    2014-11-13

    We describe and analyze a novel symmetric triangular factorization algorithm. The algorithm is essentially a block version of Aasen's triangular tridiagonalization. It factors a dense symmetric matrix A as the product A = PLTLᵀPᵀ, where P is a permutation matrix, L is lower triangular, and T is block tridiagonal and banded. The algorithm is the first symmetric-indefinite communication-avoiding factorization: it performs an asymptotically optimal amount of communication in a two-level memory hierarchy for almost any cache-line size. Adaptations of the algorithm to parallel computers are likely to be communication efficient as well; one such adaptation has been recently published. As a result, the current paper describes the algorithm, proves that it is numerically stable, and proves that it is communication optimal.
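
    For orientation, SciPy exposes a conventional (non-communication-avoiding) symmetric-indefinite factorization of the same LDLᵀ family via Bunch-Kaufman pivoting; the paper's block-Aasen algorithm itself is not in SciPy, so the sketch below only illustrates the shape of such a factorization.

        import numpy as np
        from scipy.linalg import ldl

        rng = np.random.default_rng(6)
        B = rng.standard_normal((5, 5))
        A = B + B.T                          # symmetric and, in general, indefinite

        # Bunch-Kaufman LDL^T with pivoting (not the paper's block-Aasen algorithm).
        lu, d, perm = ldl(A, lower=True)
        print(np.allclose(lu @ d @ lu.T, A))             # factorization reconstructs A
        print(np.allclose(np.tril(lu[perm]), lu[perm]))  # lu[perm] is lower triangular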

  17. Communication-avoiding symmetric-indefinite factorization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballard, Grey Malone; Becker, Dulcenia; Demmel, James

    We describe and analyze a novel symmetric triangular factorization algorithm. The algorithm is essentially a block version of Aasen's triangular tridiagonalization. It factors a dense symmetric matrix A as the product A = PLTLᵀPᵀ, where P is a permutation matrix, L is lower triangular, and T is block tridiagonal and banded. The algorithm is the first symmetric-indefinite communication-avoiding factorization: it performs an asymptotically optimal amount of communication in a two-level memory hierarchy for almost any cache-line size. Adaptations of the algorithm to parallel computers are likely to be communication efficient as well; one such adaptation has been recently published. As a result, the current paper describes the algorithm, proves that it is numerically stable, and proves that it is communication optimal.

  18. On structure-exploiting trust-region regularized nonlinear least squares algorithms for neural-network learning.

    PubMed

    Mizutani, Eiji; Demmel, James W

    2003-01-01

    This paper briefly introduces our numerical linear algebra approaches for solving structured nonlinear least squares problems arising from 'multiple-output' neural-network (NN) models. Our algorithms feature trust-region regularization, and exploit sparsity of either the 'block-angular' residual Jacobian matrix or the 'block-arrow' Gauss-Newton Hessian (or Fisher information matrix in statistical sense) depending on problem scale so as to render a large class of NN-learning algorithms 'efficient' in both memory and operation costs. Using a relatively large real-world nonlinear regression application, we shall explain algorithmic strengths and weaknesses, analyzing simulation results obtained by both direct and iterative trust-region algorithms with two distinct NN models: 'multilayer perceptrons' (MLP) and 'complementary mixtures of MLP-experts' (or neuro-fuzzy modular networks).

  19. The life cycles of intense cyclonic and anticyclonic circulation systems observed over oceans

    NASA Technical Reports Server (NTRS)

    Smith, Phillip J.

    1994-01-01

    The work over the past six months has focused on the October/November 1985 blocking case study noted in the last progress report. A summary of the results of this effort is contained in the attached preprint papers for the Symposium on the Life Cycles of Extratropical Cyclones. Using this case study as a model, Ph.D. student Anthony Lupo is now initiating the multiple-case diagnosis by first examining two more fall 1985 blocking episodes. In addition, two secondary efforts have been completed, as summarized in the attached M.S. thesis abstracts. Both studies, which were primarily funded by a fellowship and a teaching assistantship, complement the objectives of this study by providing diagnoses of additional cyclone cases to serve as a comparative base for the pre-blocking cyclones to be studied in the multiple-case blocking diagnosis.

  20. Planning and Analysis of Fractured Rock Injection Tests in the Cerro Brillador Underground Laboratory, Northern Chile

    NASA Astrophysics Data System (ADS)

    Fairley, J. P., Jr.; Oyarzún L, R.; Villegas, G.

    2015-12-01

    Early theories of fluid migration in unsaturated fractured rock hypothesized that matrix suction would dominate flow up to the point of matrix saturation. However, experiments in underground laboratories such as the ESF (Yucca Mountain, NV) have demonstrated that liquid water can migrate significant distances through fractures in an unsaturated porous medium, suggesting limited interaction between fractures and unsaturated matrix blocks and potentially rapid transmission of recharge to the saturated zone. Determining the conditions under which this rapid recharge may take place is an important factor in understanding deep percolation processes in arid areas with thick unsaturated zones. As part of an on-going, Fondecyt-funded project (award 11150587) to study mountain block hydrological processes in arid regions, we are planning a series of in-situ fracture flow injection tests in the Cerro Brillador/Mina Escuela, an underground laboratory and teaching facility belonging to the Universidad la Serena, Chile. Planning for the tests is based on an analytical model and curve-matching method, originally developed to evaluate data from injection tests at Yucca Mountain (Fairley, J.P., 2010, WRR 46:W08542), that uses a known rate of liquid injection to a fracture (for example, from a packed-off section of borehole) and the observed rate of seepage discharging from the fracture to estimate effective fracture aperture, matrix sorptivity, fracture/matrix flow partitioning, and the wetted fracture/matrix interaction area between the injection and recovery points. We briefly review the analytical approach and its application to test planning and analysis, and describe the proposed tests and their goals.

  1. Extracellular Matrix Induced Integrin Signal Transduction and Breast Cancer Invasion.

    DTIC Science & Technology

    1995-10-01

    Keywords: metalloproteinase, breast, mammary, integrin, collagen, RGDS, matrilysin, breast cancer. ... areas of necrosis in the center of the tumor; a portion of the mammary gland can be seen in the lower right. The matrilysin in situ showed ...

  2. Protein-resistant polymer coatings obtained by matrix assisted pulsed laser evaporation

    NASA Astrophysics Data System (ADS)

    Rusen, L.; Mustaciosu, C.; Mitu, B.; Filipescu, M.; Dinescu, M.; Dinca, V.

    2013-08-01

    Adsorption of proteins and polysaccharides is known to facilitate microbial attachment and subsequent formation of biofilm on surfaces, which ultimately results in biofouling. Therefore, protein-repellent modified surfaces are necessary to block the irreversible attachment of microorganisms. Within this context, the feasibility of using the Poly(ethylene glycol)-block-poly(ɛ-caprolactone) methyl ether (PEG-block-PCL Me) copolymer as a potential protein-resistant coating was explored in this work. The films were deposited using Matrix Assisted Pulsed Laser Evaporation (MAPLE), a technique that allows good control of composition, thickness and homogeneity. The chemical and morphological characteristics of the films were examined using Fourier Transform Infrared Spectroscopy (FTIR), contact angle measurements and Atomic Force Microscopy (AFM). The FTIR data demonstrate that the functional groups in the MAPLE-deposited films remain intact, especially for fluences below 0.5 J cm⁻². Optical microscopy and AFM images show that the homogeneity and the roughness of the coatings are related to both the laser parameters (fluence, number of pulses) and the target composition. Protein adsorption tests were performed on the PEG-block-PCL Me copolymer coated glass and on a bare glass surface as a control. The results show that the presence of the copolymer coating significantly reduces the adsorption of proteins.

  3. Extensions of output variance constrained controllers to hard constraints

    NASA Technical Reports Server (NTRS)

    Skelton, R.; Zhu, G.

    1989-01-01

    Covariance controllers assign specified matrix values to the state covariance. A number of robustness results are directly related to the covariance matrix. The conservatism in known upper bounds on the H∞, L∞, and L₂ norms for stability and disturbance robustness of linear uncertain systems using covariance controllers is illustrated with examples. These results are illustrated for both continuous- and discrete-time systems.

  4. An investigation of anticyclonic circulation in the southern Gulf of Riga during the spring period

    NASA Astrophysics Data System (ADS)

    Soosaar, Edith; Maljutenko, Ilja; Raudsepp, Urmas; Elken, Jüri

    2014-04-01

    Previous studies of the gulf-type Region of Freshwater Influence (ROFI) have shown that circulation near the area of freshwater inflow sometimes becomes anticyclonic. Such a circulation is different from basic coastal ocean buoyancy-driven circulation where an anticyclonic bulge develops near the source and a coastal current is established along the right hand coast (in the northern hemisphere), resulting in the general cyclonic circulation. The spring (from March to June) circulation and spreading of river discharge water in the southern Gulf of Riga (GoR) in the Baltic Sea was analyzed based on the results of a 10-year simulation (1997-2006) using the General Estuarine Transport Model (GETM). Monthly mean currents in the upper layer of the GoR revealed a double gyre structure dominated either by an anticyclonic or cyclonic gyre in the near-head southeastern part and corresponding cyclonic/anticyclonic gyre in the near-mouth northwestern part of the gulf. Time series analysis of PCA and vorticity, calculated from velocity data and model sensitivity tests, showed that in spring the anticyclonic circulation in the upper layer of the southern GoR is driven primarily by the estuarine type density field. This anticyclonic circulation is enhanced by easterly winds but blocked or even reversed by westerly winds. The estuarine type density field is maintained by salt flux in the northwestern connection to the Baltic Proper and river discharge in the southern GoR.

  5. An investigation of anticyclonic circulation in the southern Gulf of Riga during the spring period

    NASA Astrophysics Data System (ADS)

    Soosaar, Edith; Maljutenko, Ilja; Raudsepp, Urmas; Elken, Jüri

    2015-04-01

    Previous studies of the gulf-type Region of Freshwater Influence (ROFI) have shown that circulation near the area of freshwater inflow sometimes becomes anticyclonic. Such a circulation is different from basic coastal ocean buoyancy-driven circulation where an anticyclonic bulge develops near the source and a coastal current is established along the right hand coast (in the northern hemisphere), resulting in the general cyclonic circulation. The spring (from March to June) circulation and spreading of river discharge water in the southern Gulf of Riga (GoR) in the Baltic Sea was analyzed based on the results of a 10-year simulation (1997-2006) using the General Estuarine Transport Model (GETM). Monthly mean currents in the upper layer of the GoR revealed a double gyre structure dominated either by an anticyclonic or cyclonic gyre in the near-head southeastern part and corresponding cyclonic/anticyclonic gyre in the near-mouth northwestern part of the gulf. Time series analysis of PCA and vorticity, calculated from velocity data and model sensitivity tests, showed that in spring the anticyclonic circulation in the upper layer of the southern GoR is driven primarily by the estuarine type density field. This anticyclonic circulation is enhanced by easterly winds but blocked or even reversed by westerly winds. The estuarine type density field is maintained by salt flux in the northwestern connection to the Baltic Proper and river discharge in the southern GoR.

  6. Biophysical properties of dermal building-blocks affects extra cellular matrix assembly in 3D endogenous macrotissue.

    PubMed

    Urciuolo, F; Garziano, A; Imparato, G; Panzetta, V; Fusco, S; Casale, C; Netti, P A

    2016-01-29

    The fabrication of functional tissue units is one of the major challenges in tissue engineering, owing to their in vitro use in tissue-on-chip systems as well as in modular tissue engineering for the construction of macrotissue analogs. In this work, we engineer dermal tissue micromodules by culturing human dermal fibroblasts in porous gelatine microscaffolds. We proved that such stromal cells coupled with gelatine microscaffolds are able to synthesize and assemble an endogenous extracellular matrix (ECM), resulting in tissue micromodules whose biophysical features evolve over time. In particular, we found a time-dependent variation of the oxygen consumption kinetic parameters, of the stiffness of the newly formed ECM, and of the self-aggregation properties of the micromodules. As a consequence, when the micromodules are used as building blocks to fabricate larger tissues, their initial state strongly affects ECM organization and maturation in the final macrotissue. These results highlight the role of micromodule properties in controlling the formation of three-dimensional macrotissue in vitro, defining an innovative design criterion for selecting tissue building blocks for modular tissue engineering.

  7. A copyright protection scheme for digital images based on shuffled singular value decomposition and visual cryptography.

    PubMed

    Devi, B Pushpa; Singh, Kh Manglem; Roy, Sudipta

    2016-01-01

    This paper proposes a new watermarking algorithm based on the shuffled singular value decomposition and visual cryptography for copyright protection of digital images. It generates the ownership and identification shares of the image based on visual cryptography. It decomposes the image into low and high frequency sub-bands. The low frequency sub-band is shuffled and then divided into blocks of the same size, and the singular value decomposition is applied to each randomly selected block. Shares are generated by comparing one of the elements in the first column of the left orthogonal matrix with its corresponding element in the right orthogonal matrix of the singular value decomposition of the block of the low frequency sub-band. The experimental results show that the proposed scheme clearly verifies the copyright of the digital images and is robust against several image processing attacks. Comparison with other related visual cryptography-based algorithms reveals that the proposed method gives better performance. The proposed method is especially resilient against the rotation attack.
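
    A greatly simplified schematic of the share-generation step (one block, no wavelet decomposition or shuffling; the specific element and comparison rule are assumptions chosen to illustrate the abstract's wording):

        import numpy as np

        def share_bit(block):
            """One share bit from an image block: compare an element of the left
            singular matrix U with its counterpart in V.  The element indices and
            comparison rule here are assumptions for illustration only."""
            U, s, Vt = np.linalg.svd(block.astype(float))
            return int(U[1, 0] >= Vt.T[1, 0])

        rng = np.random.default_rng(7)
        subband = rng.integers(0, 256, size=(32, 32))   # stands in for the LL sub-band
        bits = [share_bit(subband[i:i + 8, j:j + 8])    # one bit per 8x8 block
                for i in range(0, 32, 8) for j in range(0, 32, 8)]
        print(bits)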

  8. Self-organization processes in polysiloxane block copolymers, initiated by modifying fullerene additives

    NASA Astrophysics Data System (ADS)

    Voznyakovskii, A. P.; Kudoyarova, V. Kh.; Kudoyarov, M. F.; Patrova, M. Ya.

    2017-08-01

    Thin films of a polyblock polysiloxane copolymer and of its composites with a modifying fullerene C60 additive are studied by atomic force microscopy, Rutherford backscattering, and neutron scattering. The atomic force microscopy data show that when fullerene is added to the bulk of the polymer matrix, the initial relief of the film surface is leveled out more strongly the larger the additive content. This trend is associated with self-organization of the rigid block sequences, initiated by the field effect of the surface of the fullerene aggregates, which leads to an increase in the number of their domains in the bulk of the polymer matrix. The Rutherford backscattering and neutron scattering data indicate the formation of additional structures with a radius of 60 nm only in films containing fullerene, and their fraction increases with increasing fullerene concentration. A comparative analysis of the data from these methods shows that such structures are in fact domains of the rigid block and are not formed by individual fullerene aggregates. The interrelation between the structure and the mechanical properties of the polymer films is considered.

  9. Expansion and improvements of the FORMA system for response and load analysis. Volume 1: Programming manual

    NASA Technical Reports Server (NTRS)

    Wohlen, R. L.

    1976-01-01

    Techniques are presented for the solution of structural dynamic systems on an electronic digital computer using FORMA (FORTRAN Matrix Analysis). FORMA is a library of subroutines coded in FORTRAN 4 for the efficient solution of structural dynamics problems. These subroutines are in the form of building blocks that can be put together to solve a large variety of structural dynamics problems. The obvious advantage of the building block approach is that programming and checkout time are limited to that required for putting the blocks together in the proper order.

  10. Gyroid structure via highly asymmetric ABC and AB blends

    NASA Astrophysics Data System (ADS)

    Ahn, Seonghyeon; Kwak, Jongheon; Choi, Chungryong; Kim, Jin Kon

    Gyroid structures are very important because of their co-continuous network morphology. However, a neat block copolymer shows gyroid structures only near a 35% volume fraction of one block. In this study, we designed an ABC/AB blend system in which B (polystyrene (PS)) is the matrix, while A (polyisoprene (PI)) and C (poly(2-vinylpyridine) (P2VP)) form the core. This blend shows gyroid structures at a 20% volume fraction, which is smaller than that observed for the diblock copolymer. The morphologies of the neat block copolymers and the blends were characterized by TEM and small-angle X-ray scattering.

  11. Parabiosis and single-cell RNA sequencing reveal a limited contribution of monocytes to myofibroblasts in kidney fibrosis.

    PubMed

    Kramann, Rafael; Machado, Flavia; Wu, Haojia; Kusaba, Tetsuro; Hoeft, Konrad; Schneider, Rebekka K; Humphreys, Benjamin D

    2018-05-03

    Fibrosis is the common final pathway of virtually all chronic injury to the kidney. While it is well accepted that myofibroblasts are the scar-producing cells in the kidney, their cellular origin is still hotly debated. The relative contribution of proximal tubular epithelium and circulating cells, including mesenchymal stem cells, macrophages, and fibrocytes, to the myofibroblast pool remains highly controversial. Using inducible genetic fate tracing of proximal tubular epithelium, we confirm that the proximal tubule does not contribute to the myofibroblast pool. However, in parabiosis models in which one parabiont is genetically labeled and the other is unlabeled and undergoes kidney fibrosis, we demonstrate that a small fraction of genetically labeled renal myofibroblasts derive from the circulation. Single-cell RNA sequencing confirms this finding but indicates that these cells are circulating monocytes, express few extracellular matrix or other myofibroblast genes, and express many proinflammatory cytokines. We conclude that this small circulating myofibroblast progenitor population contributes to renal fibrosis by paracrine rather than direct mechanisms.

  12. Identification of mineral composition and weathering product of tuff using reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Hyun, C.; Park, H.

    2009-12-01

    Tuff is an intricate mixture of various types of rock blocks and ash matrices formed during volcanic processes. Qualitative identification and quantitative assessment of the mineral composition of tuff have usually been done by naked-eye inspection and various chemical analyses. These conventional methods are destructive, time consuming, and sometimes yield biased results owing to subjective decision making. To overcome these limits, an assessment technique using reflectance spectroscopy was applied to tuff specimens. Reflectance spectroscopy measures the electromagnetic reflectance of a rock surface; diagnostic absorption features originating from the chemical composition and crystal structure of the constituents can be extracted from the reflectance curve, so mineral species can be discriminated qualitatively. The intrinsic absorption feature of a particular mineral can be converted to an absorption depth, representing the relative coverage of that mineral in the measurement area, by removing the delineated convex hull from the raw reflectance curve. The spectral measurements were performed with the ASD Inc. FieldSpec®3 field spectrometer over the wavelength range from 350 nm to 2500 nm. Three types of tuff blocks, ash tuff, green lapilli tuff, and red lapilli tuff, were sampled from Hwasun County in Korea; the green and red lapilli tuffs differ in the color of their matrices. Ash tuff consists of feldspars and quartz with small amounts of chalcedony, calcite, dolomite, epidote, and basalt fragments. Green lapilli tuff consists of feldspar, quartz, and muscovite with small amounts of calcite, chalcedony, sericite, chlorite, quartzite, and basalt fragments. Red lapilli tuff consists of feldspar, quartz, and muscovite with small amounts of calcite, chalcedony, limonite, zircon, chlorite, quartzite, and basalt fragments. The tuff rocks were coarsely crushed, and the blocks and matrices were separated to measure the standard spectral reflectance of each constituent. Unmixing of the mineral composition and weathering products of the blocks and matrices was conducted, and the ratio of mineral composition was calculated for each specimen. This study was supported by the National Research Institute of Cultural Heritage (project title: Development on Evaluation Technology for Weathering Degree of Stone Cultural Properties, project no.: 09B011Y-00150-2009).
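
    The convex-hull (continuum-removal) step lends itself to a compact sketch. The fragment below, a minimal illustration rather than the study's actual processing chain, computes the upper convex hull of a reflectance spectrum, divides the spectrum by it, and reports the resulting band depth.

        import numpy as np

        def upper_hull_indices(x, y):
            # Andrew's monotone chain, upper hull only; x must be increasing.
            hull = []
            for i in range(len(x)):
                while len(hull) >= 2:
                    a, b = hull[-2], hull[-1]
                    cross = (x[b] - x[a]) * (y[i] - y[a]) - (y[b] - y[a]) * (x[i] - x[a])
                    if cross >= 0:   # middle point lies on or below the chord: drop it
                        hull.pop()
                    else:
                        break
                hull.append(i)
            return hull

        def continuum_removed(wavelength, reflectance):
            # Interpolate the hull across all bands, then divide it out.
            hull = upper_hull_indices(wavelength, reflectance)
            continuum = np.interp(wavelength, wavelength[hull], reflectance[hull])
            removed = reflectance / continuum
            return removed, 1.0 - removed.min()   # spectrum and absorption depth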

  13. Can preferred atmospheric circulation patterns over the North-Atlantic-Eurasian region be associated with arctic sea ice loss?

    NASA Astrophysics Data System (ADS)

    Crasemann, Berit; Handorf, Dörthe; Jaiser, Ralf; Dethloff, Klaus; Nakamura, Tetsu; Ukita, Jinro; Yamazaki, Koji

    2017-12-01

    In the framework of atmospheric circulation regimes, we study whether the recent Arctic sea ice loss and Arctic Amplification are associated with changes in the frequency of occurrence of preferred atmospheric circulation patterns during the extended winter season from December to March. To determine the regimes, we applied a cluster analysis to sea-level pressure fields from reanalysis data and from the output of an atmospheric general circulation model. The specific setup of the two analyzed model simulations, for low and high ice conditions, allows differences between the simulations to be attributed to the prescribed sea ice changes only. The reanalysis data revealed two circulation patterns that occur more frequently under low Arctic sea ice conditions: a Scandinavian blocking in December and January and a negative North Atlantic Oscillation pattern in February and March. An analysis of the related patterns of synoptic-scale activity and 2 m temperatures provides a synoptic interpretation of the corresponding large-scale regimes. The regimes that occur more frequently under low sea ice conditions are reproduced reasonably well by the model simulations. Based on these results, we conclude that the detected changes in the frequency of occurrence of large-scale circulation patterns can be associated with changes in Arctic sea ice conditions.
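
    The abstract does not name the specific clustering algorithm; a common choice for circulation-regime detection is k-means applied to the leading principal components of the sea-level pressure anomaly fields. The hedged sketch below illustrates that choice on random stand-in data; the field dimensions, number of EOFs, and number of regimes are all illustrative assumptions.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        # Hypothetical input: daily winter SLP anomaly maps,
        # flattened to shape (n_days, n_lat * n_lon).
        rng = np.random.default_rng(0)
        slp_anom = rng.standard_normal((3000, 2048))   # stand-in for reanalysis

        # Common practice: reduce to leading EOFs/PCs before clustering.
        pcs = PCA(n_components=20).fit_transform(slp_anom)

        # Partition days into k circulation regimes; the cluster centroids,
        # mapped back to physical space, are the regime patterns
        # (e.g. Scandinavian blocking, negative NAO).
        k = 5
        km = KMeans(n_clusters=k, n_init=20, random_state=0).fit(pcs)
        freq = np.bincount(km.labels_, minlength=k) / len(km.labels_)
        print("regime occupation frequencies:", freq)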

  14. Quantum Correlation Properties in Composite Parity-Conserved Matrix Product States

    NASA Astrophysics Data System (ADS)

    Zhu, Jing-Min

    2016-09-01

    We propose a new approach to constructing long-range quantum correlation in quantum many-body systems. The proposed composite parity-conserved matrix product state exhibits long-range quantum correlation only between two spin blocks whose block length is larger than 1, whereas any single subsystem shows only short-range quantum correlation; we investigate how the quantum correlation of two spin blocks varies with the environment parameter and the number of spacing spins. We also find that the geometric quantum discord of two nearest-neighbor and two next-nearest-neighbor spin blocks becomes smaller, while in other configurations it becomes larger, than that of any subcomponent. That is, the increase or production of long-range quantum correlation comes at the cost of reducing short-range quantum correlation, whereas the corresponding classical and total correlations show no such regularity. For nearest-neighbor and next-nearest-neighbor blocks all the correlations take their maximal values at the same points, while in other configurations, whether for the same or different numbers of spacing spins, the correlations take their maximal values at different but closely spaced points. We believe this work is helpful for a comprehensive and deep understanding of the organization and structure of quantum correlation, especially long-range quantum correlation, in quantum many-body systems, and further for its classification, description, and measurement.
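
    As a generic illustration of how such block correlations are evaluated (not the authors' composite parity-conserved construction), the sketch below computes a two-point <Z_i Z_j> correlator for a uniform, real-valued matrix product state via transfer matrices; the bond dimension, tensors, and boundary vectors are arbitrary stand-ins.

        import numpy as np

        D = 2
        rng = np.random.default_rng(1)
        A = rng.standard_normal((2, D, D))   # site tensor, physical index s = 0, 1
        l = rng.standard_normal(D)           # left boundary vector
        r = rng.standard_normal(D)           # right boundary vector

        Z = np.diag([1.0, -1.0])             # Pauli-Z (diagonal operator)

        # Transfer matrices; conjugation is omitted because A is real here.
        E  = sum(np.kron(A[s], A[s]) for s in range(2))
        EZ = sum(Z[s, s] * np.kron(A[s], A[s]) for s in range(2))

        L = np.kron(l, l)
        R = np.kron(r, r)

        def expect_zz(i, j, N):
            # <Z_i Z_j> on an N-site chain (1-based sites, i < j).
            P = np.linalg.matrix_power
            num  = L @ P(E, i - 1) @ EZ @ P(E, j - i - 1) @ EZ @ P(E, N - j) @ R
            norm = L @ P(E, N) @ R
            return num / norm

        print(expect_zz(2, 5, N=10))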

  15. Mass transport-related stratal disruption and sedimentary products

    NASA Astrophysics Data System (ADS)

    Ogata, Kei; Mutti, Emiliano; Tinterri, Roberto

    2010-05-01

    From an outcrop perspective, mass transport deposits are commonly represented by "chaotic" units, characterized by dismembered and internally deformed slide blocks of different sizes and shapes, embedded in a more or less abundant fine-grained matrix. The large amount of data derived from geophysical investigations of modern continental margins has permitted the characterization of the overall geometry of many of these deposits, which, however, still remain relatively poorly described in outcrop studies of collisional basins. The results of this work show that in mass-transport deposits an unsorted, strongly mixed, relatively fine-grained clastic matrix almost invariably occurs in irregularly interconnected patches and pseudo-veins, infilling the space between large clasts and blocks. We interpret the appearance of this matrix as typical of a liquefied mixture of water and sediment, characterized by extremely high mobility due to overpressured conditions, as evidenced by both lateral and vertical injections. On a much larger scale this kind of matrix is probably represented by the seismically "transparent" facies separating slide blocks in many mass-transport deposits observed in seismic-reflection profiles. The inferred mechanism of matrix production suggests progressive soft-sediment deformation, linked to the different phases of submarine landslide evolution (i.e., triggering, translation, accumulation, and post-depositional stages), leading to almost complete stratal disruption within the chaotic units. From our data we suggest that most submarine landslides move because of the development of ductile shear zones marked by the presence of "overpressured" matrix, both internally and along the basal surface. The matrix acts as a lubricating medium, accommodating friction forces and deformation, thus permitting the differential movement of discrete internal portions and enhancing submarine slide mobility. Based on our experience, we suggest that this kind of deposit is quite common in the sedimentary record, though still poorly reported and understood. Mutti and Carminatti (oral presentation; Mutti et al., 2006) have suggested calling these deposits "blocky-flow deposits", i.e., the deposits of a complex flow that is similar to a debris flow, or hyper-concentrated flow, except that it also carries outsize coherent and internally deformed blocks (meters to hundreds of meters across), usually arranged in isolated slump folds. The origin of blocky flows is difficult to understand on presently available data, particularly because it involves the contemporaneous origin of coherent slide blocks and a plastic flow that carries them as floating elements over considerable run-out distances. The recognition of the above-mentioned characteristics should be a powerful tool to discriminate between sedimentary and tectonic "chaotic" units within accretionary systems, and to distinguish submarine landslide deposits transported as catastrophic blocky flows (and therefore part of the broad family of sediment gravity flows) from those in which transport took place primarily along shear planes (i.e., slumps, coherent slides), also highlighting a possible continuum from slides to turbidity currents. The discussed examples fall into a broad category of submarine slide deposits ranging from laterally extensive carbonate megabreccias (the lower-middle Eocene "megaturbidites" of the south-central Pyrenees) to mass transport deposits with a very complex internal geometry developed in a highly tectonically mobile basin (the upper Eocene - lower Oligocene Ranzano Sandstone, northern Apennines). References: Mutti, E., Carminatti, M., Moreira, J.L.P. & Grassi, A.A. (2006) - Chaotic Deposits: examples from the Brazilian offshore and from outcrop studies in the Spanish Pyrenees and Northern Apennines, Italy. A.A.P.G. Annual Meeting, April 9-12, Houston, Texas.

  16. Test-retest and between-site reliability in a multicenter fMRI study.

    PubMed

    Friedman, Lee; Stern, Hal; Brown, Gregory G; Mathalon, Daniel H; Turner, Jessica; Glover, Gary H; Gollub, Randy L; Lauriello, John; Lim, Kelvin O; Cannon, Tyrone; Greve, Douglas N; Bockholt, Henry Jeremy; Belger, Aysenil; Mueller, Bryon; Doty, Michael J; He, Jianchun; Wells, William; Smyth, Padhraic; Pieper, Steve; Kim, Seyoung; Kubicki, Marek; Vangel, Mark; Potkin, Steven G

    2008-08-01

    In the present report, estimates of the test-retest and between-site reliability of fMRI assessments were produced in the context of a multicenter fMRI reliability study (FBIRN Phase 1, www.nbirn.net). Five subjects were scanned on 10 MRI scanners on two occasions. The fMRI task was a simple block-design sensorimotor task. The impulse response functions to the stimulation block were derived using an FIR-deconvolution analysis with FMRISTAT. Six functionally derived ROIs covering the visual, auditory, and motor cortices, created from a prior analysis, were used. Two dependent variables were compared: percent signal change and contrast-to-noise ratio. Reliability was assessed with intraclass correlation coefficients derived from a variance components analysis. Test-retest reliability was high, but between-site reliability was initially low, indicating a strong contribution from site and site-by-subject variance. However, a number of factors that can markedly improve between-site reliability were uncovered, including increasing the size of the ROIs, adjusting for smoothness differences, and including additional runs. By employing multiple steps, between-site reliability for 3T scanners was increased by 123%. Dropping one site at a time and assessing reliability can be a useful method of assessing the sensitivity of the results to particular sites. These findings should provide guidance to others on best practices for future multicenter studies.
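
    As a simplified stand-in for the variance-components analysis, the sketch below computes ICC(2,1) from a subjects-by-sites table of one dependent variable (for example, percent signal change). The full FBIRN model includes more factors (runs, occasions) than this two-way toy, and all numbers here are synthetic.

        import numpy as np

        def icc_2_1(y):
            # ICC(2,1) from an (n_subjects x k_sites) table via two-way
            # ANOVA mean squares (Shrout-Fleiss formulation).
            n, k = y.shape
            grand = y.mean()
            ms_rows = k * ((y.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
            ms_cols = n * ((y.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # sites
            resid = y - y.mean(axis=1, keepdims=True) - y.mean(axis=0) + grand
            ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
            return (ms_rows - ms_err) / (
                ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

        rng = np.random.default_rng(0)
        subject_effect = rng.standard_normal((5, 1))        # 5 subjects
        site_effect = 0.3 * rng.standard_normal((1, 10))    # 10 scanners
        y = 1.0 + subject_effect + site_effect + 0.2 * rng.standard_normal((5, 10))
        print(icc_2_1(y))   # high when subject variance dominates site variance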

  17. Image restoration using aberration taken by a Hartmann wavefront sensor on extended object, towards real-time deconvolution

    NASA Astrophysics Data System (ADS)

    Darudi, Ahmad; Bakhshi, Hadi; Asgari, Reza

    2015-05-01

    In this paper we present the results of image restoration using data taken by a Hartmann sensor. The aberration is measured by a Hartmann sensor in which the object itself is used as the reference. The point spread function (PSF) is then simulated and used for image reconstruction by the Lucy-Richardson technique. A method is also presented for quantitative evaluation of the Lucy-Richardson deconvolution.
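
    A minimal Richardson-Lucy (Lucy-Richardson) iteration is easy to state. The sketch below assumes a known, normalized PSF, such as one simulated from the Hartmann-sensor wavefront, and is not tied to the authors' implementation; scikit-image also ships a ready-made richardson_lucy routine.

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(observed, psf, n_iter=30):
            # Classic multiplicative RL update:
            # estimate <- estimate * [ (observed / (estimate * psf)) * psf_mirror ]
            psf = psf / psf.sum()
            psf_mirror = psf[::-1, ::-1]
            estimate = np.full(observed.shape, observed.mean(), dtype=float)
            for _ in range(n_iter):
                blurred = fftconvolve(estimate, psf, mode="same")
                ratio = observed / np.maximum(blurred, 1e-12)  # avoid divide-by-zero
                estimate *= fftconvolve(ratio, psf_mirror, mode="same")
            return estimate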

  18. Novel Image Quality Control Systems (Add-On). Innovative Computational Methods for Inverse Problems in Optical and SAR Imaging

    DTIC Science & Technology

    2007-02-28

    Z. Mu, R. Plemmons, and P. Santago, "Iterative Ultrasonic Signal and Image Deconvolution for Estimation of the Complex Medium Response," International Journal of Imaging Systems and Technology, 1767-1782, 2006. The project comprised rigorous mathematical and computational research on inverse problems in optical imaging of direct interest to the Army and also the intelligence agencies.

  19. Adaptive Optics Image Restoration Based on Frame Selection and Multi-frame Blind Deconvolution

    NASA Astrophysics Data System (ADS)

    Tian, Yu; Rao, Chang-hui; Wei, Kai

    Restricted by observational conditions and hardware, adaptive optics can only partially correct optical images blurred by atmospheric turbulence. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed for the restoration of high-resolution adaptive optics images. By frame selection we mean that a subset of the degraded (blurred) images is first selected to take part in the iterative blind-deconvolution calculation, which requires no a priori knowledge beyond a positivity constraint. This method has been applied to the restoration of stellar images observed with the 61-element adaptive optics system installed on the Yunnan Observatory 1.2 m telescope. The experimental results indicate that the method can effectively compensate for the residual errors that the adaptive optics system leaves in the image, and the restored image can reach diffraction-limited quality.
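
    The frame-selection step can be prototyped with a no-reference sharpness score. Gradient energy, used in the hedged sketch below, is one common criterion; the abstract does not commit to a specific metric, so treat this as an illustrative choice.

        import numpy as np

        def select_frames(frames, keep_fraction=0.3):
            # Rank short-exposure frames by gradient energy and keep the
            # sharpest ones for the multi-frame blind deconvolution stage.
            def sharpness(img):
                gy, gx = np.gradient(img.astype(float))
                return np.mean(gx ** 2 + gy ** 2)
            scores = np.array([sharpness(f) for f in frames])
            n_keep = max(1, int(keep_fraction * len(frames)))
            best = np.argsort(scores)[::-1][:n_keep]
            return [frames[i] for i in best]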

  20. Forward Looking Radar Imaging by Truncated Singular Value Decomposition and Its Application for Adverse Weather Aircraft Landing.

    PubMed

    Huang, Yulin; Zha, Yuebo; Wang, Yue; Yang, Jianyu

    2015-06-18

    The forward looking radar imaging task is a practical and challenging problem for the adverse-weather aircraft landing industry. Deconvolution methods can realize forward looking imaging, but they often amplify noise in the radar image. In this paper, a forward looking radar imaging method based on deconvolution is presented for adverse-weather aircraft landing. We first present the theoretical background of the forward looking radar imaging task and its application to aircraft landing. We then convert the forward looking radar imaging task into a corresponding deconvolution problem, which is solved in the framework of algebraic theory using the truncated singular value decomposition (TSVD) method. The key issue of selecting the truncation parameter is addressed using a generalized cross validation (GCV) approach. Simulation and experimental results demonstrate that the proposed method is effective in achieving angular resolution enhancement while suppressing noise amplification in forward looking radar imaging.
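
    A one-dimensional toy version of this pipeline is sketched below: model the angular blur as a circulant convolution matrix, invert it by truncated SVD, and choose the truncation level k by minimizing a GCV function. The Gaussian "antenna pattern", the problem sizes, and the scene are illustrative assumptions; the paper itself works on 2-D forward looking radar data.

        import numpy as np
        from scipy.linalg import circulant, svd

        n = 128
        t = np.arange(n)
        psf = np.exp(-0.5 * ((t - n // 2) / 3.0) ** 2)       # antenna-pattern stand-in
        A = circulant(np.roll(psf / psf.sum(), -(n // 2)))   # convolution matrix

        rng = np.random.default_rng(0)
        x_true = np.zeros(n)
        x_true[[30, 60, 62, 95]] = [1.0, 0.8, 0.9, 0.5]      # point scatterers
        b = A @ x_true + 0.01 * rng.standard_normal(n)       # noisy measurement

        U, s, Vt = svd(A)
        coeffs = U.T @ b

        def gcv(k):
            # GCV(k) = ||A x_k - b||^2 / (n - k)^2 for the k-term TSVD solution;
            # the residual equals the energy in the discarded coefficients.
            return np.sum(coeffs[k:] ** 2) / (n - k) ** 2

        k_best = min(range(1, n - 1), key=gcv)
        x_tsvd = Vt[:k_best].T @ (coeffs[:k_best] / s[:k_best])
        print("chosen truncation level:", k_best)
        print("reconstruction error:", np.linalg.norm(x_tsvd - x_true))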
