Low-rank matrix decomposition and spatio-temporal sparse recovery for STAP radar
Sen, Satyabrata
2015-08-04
We develop space-time adaptive processing (STAP) methods by leveraging the advantages of sparse signal processing techniques in order to detect a slowly-moving target. We observe that the inherent sparse characteristics of a STAP problem can be formulated as the low-rankness of clutter covariance matrix when compared to the total adaptive degrees-of-freedom, and also as the sparse interference spectrum on the spatio-temporal domain. By exploiting these sparse properties, we propose two approaches for estimating the interference covariance matrix. In the first approach, we consider a constrained matrix rank minimization problem (RMP) to decompose the sample covariance matrix into a low-rank positivemore » semidefinite and a diagonal matrix. The solution of RMP is obtained by applying the trace minimization technique and the singular value decomposition with matrix shrinkage operator. Our second approach deals with the atomic norm minimization problem to recover the clutter response-vector that has a sparse support on the spatio-temporal plane. We use convex relaxation based standard sparse-recovery techniques to find the solutions. With extensive numerical examples, we demonstrate the performances of proposed STAP approaches with respect to both the ideal and practical scenarios, involving Doppler-ambiguous clutter ridges, spatial and temporal decorrelation effects. As a result, the low-rank matrix decomposition based solution requires secondary measurements as many as twice the clutter rank to attain a near-ideal STAP performance; whereas the spatio-temporal sparsity based approach needs a considerably small number of secondary data.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, Darin P.; Badea, Cristian T., E-mail: cristian.badea@duke.edu; Lee, Chang-Lung
Purpose: X-ray computed tomography (CT) is widely used, both clinically and preclinically, for fast, high-resolution anatomic imaging; however, compelling opportunities exist to expand its use in functional imaging applications. For instance, spectral information combined with nanoparticle contrast agents enables quantification of tissue perfusion levels, while temporal information details cardiac and respiratory dynamics. The authors propose and demonstrate a projection acquisition and reconstruction strategy for 5D CT (3D + dual energy + time) which recovers spectral and temporal information without substantially increasing radiation dose or sampling time relative to anatomic imaging protocols. Methods: The authors approach the 5D reconstruction problem withinmore » the framework of low-rank and sparse matrix decomposition. Unlike previous work on rank-sparsity constrained CT reconstruction, the authors establish an explicit rank-sparse signal model to describe the spectral and temporal dimensions. The spectral dimension is represented as a well-sampled time and energy averaged image plus regularly undersampled principal components describing the spectral contrast. The temporal dimension is represented as the same time and energy averaged reconstruction plus contiguous, spatially sparse, and irregularly sampled temporal contrast images. Using a nonlinear, image domain filtration approach, the authors refer to as rank-sparse kernel regression, the authors transfer image structure from the well-sampled time and energy averaged reconstruction to the spectral and temporal contrast images. This regularization strategy strictly constrains the reconstruction problem while approximately separating the temporal and spectral dimensions. Separability results in a highly compressed representation for the 5D data in which projections are shared between the temporal and spectral reconstruction subproblems, enabling substantial undersampling. The authors solved the 5D reconstruction problem using the split Bregman method and GPU-based implementations of backprojection, reprojection, and kernel regression. Using a preclinical mouse model, the authors apply the proposed algorithm to study myocardial injury following radiation treatment of breast cancer. Results: Quantitative 5D simulations are performed using the MOBY mouse phantom. Twenty data sets (ten cardiac phases, two energies) are reconstructed with 88 μm, isotropic voxels from 450 total projections acquired over a single 360° rotation. In vivo 5D myocardial injury data sets acquired in two mice injected with gold and iodine nanoparticles are also reconstructed with 20 data sets per mouse using the same acquisition parameters (dose: ∼60 mGy). For both the simulations and the in vivo data, the reconstruction quality is sufficient to perform material decomposition into gold and iodine maps to localize the extent of myocardial injury (gold accumulation) and to measure cardiac functional metrics (vascular iodine). Their 5D CT imaging protocol represents a 95% reduction in radiation dose per cardiac phase and energy and a 40-fold decrease in projection sampling time relative to their standard imaging protocol. Conclusions: Their 5D CT data acquisition and reconstruction protocol efficiently exploits the rank-sparse nature of spectral and temporal CT data to provide high-fidelity reconstruction results without increased radiation dose or sampling time.« less
Spectrotemporal CT data acquisition and reconstruction at low dose
Clark, Darin P.; Lee, Chang-Lung; Kirsch, David G.; Badea, Cristian T.
2015-01-01
Purpose: X-ray computed tomography (CT) is widely used, both clinically and preclinically, for fast, high-resolution anatomic imaging; however, compelling opportunities exist to expand its use in functional imaging applications. For instance, spectral information combined with nanoparticle contrast agents enables quantification of tissue perfusion levels, while temporal information details cardiac and respiratory dynamics. The authors propose and demonstrate a projection acquisition and reconstruction strategy for 5D CT (3D + dual energy + time) which recovers spectral and temporal information without substantially increasing radiation dose or sampling time relative to anatomic imaging protocols. Methods: The authors approach the 5D reconstruction problem within the framework of low-rank and sparse matrix decomposition. Unlike previous work on rank-sparsity constrained CT reconstruction, the authors establish an explicit rank-sparse signal model to describe the spectral and temporal dimensions. The spectral dimension is represented as a well-sampled time and energy averaged image plus regularly undersampled principal components describing the spectral contrast. The temporal dimension is represented as the same time and energy averaged reconstruction plus contiguous, spatially sparse, and irregularly sampled temporal contrast images. Using a nonlinear, image domain filtration approach, the authors refer to as rank-sparse kernel regression, the authors transfer image structure from the well-sampled time and energy averaged reconstruction to the spectral and temporal contrast images. This regularization strategy strictly constrains the reconstruction problem while approximately separating the temporal and spectral dimensions. Separability results in a highly compressed representation for the 5D data in which projections are shared between the temporal and spectral reconstruction subproblems, enabling substantial undersampling. The authors solved the 5D reconstruction problem using the split Bregman method and GPU-based implementations of backprojection, reprojection, and kernel regression. Using a preclinical mouse model, the authors apply the proposed algorithm to study myocardial injury following radiation treatment of breast cancer. Results: Quantitative 5D simulations are performed using the MOBY mouse phantom. Twenty data sets (ten cardiac phases, two energies) are reconstructed with 88 μm, isotropic voxels from 450 total projections acquired over a single 360° rotation. In vivo 5D myocardial injury data sets acquired in two mice injected with gold and iodine nanoparticles are also reconstructed with 20 data sets per mouse using the same acquisition parameters (dose: ∼60 mGy). For both the simulations and the in vivo data, the reconstruction quality is sufficient to perform material decomposition into gold and iodine maps to localize the extent of myocardial injury (gold accumulation) and to measure cardiac functional metrics (vascular iodine). Their 5D CT imaging protocol represents a 95% reduction in radiation dose per cardiac phase and energy and a 40-fold decrease in projection sampling time relative to their standard imaging protocol. Conclusions: Their 5D CT data acquisition and reconstruction protocol efficiently exploits the rank-sparse nature of spectral and temporal CT data to provide high-fidelity reconstruction results without increased radiation dose or sampling time. PMID:26520724
Sparse magnetic resonance imaging reconstruction using the bregman iteration
NASA Astrophysics Data System (ADS)
Lee, Dong-Hoon; Hong, Cheol-Pyo; Lee, Man-Woo
2013-01-01
Magnetic resonance imaging (MRI) reconstruction needs many samples that are sequentially sampled by using phase encoding gradients in a MRI system. It is directly connected to the scan time for the MRI system and takes a long time. Therefore, many researchers have studied ways to reduce the scan time, especially, compressed sensing (CS), which is used for sparse images and reconstruction for fewer sampling datasets when the k-space is not fully sampled. Recently, an iterative technique based on the bregman method was developed for denoising. The bregman iteration method improves on total variation (TV) regularization by gradually recovering the fine-scale structures that are usually lost in TV regularization. In this study, we studied sparse sampling image reconstruction using the bregman iteration for a low-field MRI system to improve its temporal resolution and to validate its usefulness. The image was obtained with a 0.32 T MRI scanner (Magfinder II, SCIMEDIX, Korea) with a phantom and an in-vivo human brain in a head coil. We applied random k-space sampling, and we determined the sampling ratios by using half the fully sampled k-space. The bregman iteration was used to generate the final images based on the reduced data. We also calculated the root-mean-square-error (RMSE) values from error images that were obtained using various numbers of bregman iterations. Our reconstructed images using the bregman iteration for sparse sampling images showed good results compared with the original images. Moreover, the RMSE values showed that the sparse reconstructed phantom and the human images converged to the original images. We confirmed the feasibility of sparse sampling image reconstruction methods using the bregman iteration with a low-field MRI system and obtained good results. Although our results used half the sampling ratio, this method will be helpful in increasing the temporal resolution at low-field MRI systems.
Natural image sequences constrain dynamic receptive fields and imply a sparse code.
Häusler, Chris; Susemihl, Alex; Nawrot, Martin P
2013-11-06
In their natural environment, animals experience a complex and dynamic visual scenery. Under such natural stimulus conditions, neurons in the visual cortex employ a spatially and temporally sparse code. For the input scenario of natural still images, previous work demonstrated that unsupervised feature learning combined with the constraint of sparse coding can predict physiologically measured receptive fields of simple cells in the primary visual cortex. This convincingly indicated that the mammalian visual system is adapted to the natural spatial input statistics. Here, we extend this approach to the time domain in order to predict dynamic receptive fields that can account for both spatial and temporal sparse activation in biological neurons. We rely on temporal restricted Boltzmann machines and suggest a novel temporal autoencoding training procedure. When tested on a dynamic multi-variate benchmark dataset this method outperformed existing models of this class. Learning features on a large dataset of natural movies allowed us to model spatio-temporal receptive fields for single neurons. They resemble temporally smooth transformations of previously obtained static receptive fields and are thus consistent with existing theories. A neuronal spike response model demonstrates how the dynamic receptive field facilitates temporal and population sparseness. We discuss the potential mechanisms and benefits of a spatially and temporally sparse representation of natural visual input. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
An algorithm for extraction of periodic signals from sparse, irregularly sampled data
NASA Technical Reports Server (NTRS)
Wilcox, J. Z.
1994-01-01
Temporal gaps in discrete sampling sequences produce spurious Fourier components at the intermodulation frequencies of an oscillatory signal and the temporal gaps, thus significantly complicating spectral analysis of such sparsely sampled data. A new fast Fourier transform (FFT)-based algorithm has been developed, suitable for spectral analysis of sparsely sampled data with a relatively small number of oscillatory components buried in background noise. The algorithm's principal idea has its origin in the so-called 'clean' algorithm used to sharpen images of scenes corrupted by atmospheric and sensor aperture effects. It identifies as the signal's 'true' frequency that oscillatory component which, when passed through the same sampling sequence as the original data, produces a Fourier image that is the best match to the original Fourier space. The algorithm has generally met with succession trials with simulated data with a low signal-to-noise ratio, including those of a type similar to hourly residuals for Earth orientation parameters extracted from VLBI data. For eight oscillatory components in the diurnal and semidiurnal bands, all components with an amplitude-noise ratio greater than 0.2 were successfully extracted for all sequences and duty cycles (greater than 0.1) tested; the amplitude-noise ratios of the extracted signals were as low as 0.05 for high duty cycles and long sampling sequences. When, in addition to these high frequencies, strong low-frequency components are present in the data, the low-frequency components are generally eliminated first, by employing a version of the algorithm that searches for non-integer multiples of the discrete FET minimum frequency.
Sparse Representation with Spatio-Temporal Online Dictionary Learning for Efficient Video Coding.
Dai, Wenrui; Shen, Yangmei; Tang, Xin; Zou, Junni; Xiong, Hongkai; Chen, Chang Wen
2016-07-27
Classical dictionary learning methods for video coding suer from high computational complexity and interfered coding eciency by disregarding its underlying distribution. This paper proposes a spatio-temporal online dictionary learning (STOL) algorithm to speed up the convergence rate of dictionary learning with a guarantee of approximation error. The proposed algorithm incorporates stochastic gradient descents to form a dictionary of pairs of 3-D low-frequency and highfrequency spatio-temporal volumes. In each iteration of the learning process, it randomly selects one sample volume and updates the atoms of dictionary by minimizing the expected cost, rather than optimizes empirical cost over the complete training data like batch learning methods, e.g. K-SVD. Since the selected volumes are supposed to be i.i.d. samples from the underlying distribution, decomposition coecients attained from the trained dictionary are desirable for sparse representation. Theoretically, it is proved that the proposed STOL could achieve better approximation for sparse representation than K-SVD and maintain both structured sparsity and hierarchical sparsity. It is shown to outperform batch gradient descent methods (K-SVD) in the sense of convergence speed and computational complexity, and its upper bound for prediction error is asymptotically equal to the training error. With lower computational complexity, extensive experiments validate that the STOL based coding scheme achieves performance improvements than H.264/AVC or HEVC as well as existing super-resolution based methods in ratedistortion performance and visual quality.
Tipton, John; Hooten, Mevin B.; Goring, Simon
2017-01-01
Scientific records of temperature and precipitation have been kept for several hundred years, but for many areas, only a shorter record exists. To understand climate change, there is a need for rigorous statistical reconstructions of the paleoclimate using proxy data. Paleoclimate proxy data are often sparse, noisy, indirect measurements of the climate process of interest, making each proxy uniquely challenging to model statistically. We reconstruct spatially explicit temperature surfaces from sparse and noisy measurements recorded at historical United States military forts and other observer stations from 1820 to 1894. One common method for reconstructing the paleoclimate from proxy data is principal component regression (PCR). With PCR, one learns a statistical relationship between the paleoclimate proxy data and a set of climate observations that are used as patterns for potential reconstruction scenarios. We explore PCR in a Bayesian hierarchical framework, extending classical PCR in a variety of ways. First, we model the latent principal components probabilistically, accounting for measurement error in the observational data. Next, we extend our method to better accommodate outliers that occur in the proxy data. Finally, we explore alternatives to the truncation of lower-order principal components using different regularization techniques. One fundamental challenge in paleoclimate reconstruction efforts is the lack of out-of-sample data for predictive validation. Cross-validation is of potential value, but is computationally expensive and potentially sensitive to outliers in sparse data scenarios. To overcome the limitations that a lack of out-of-sample records presents, we test our methods using a simulation study, applying proper scoring rules including a computationally efficient approximation to leave-one-out cross-validation using the log score to validate model performance. The result of our analysis is a spatially explicit reconstruction of spatio-temporal temperature from a very sparse historical record.
Alpha Matting with KL-Divergence Based Sparse Sampling.
Karacan, Levent; Erdem, Aykut; Erdem, Erkut
2017-06-22
In this paper, we present a new sampling-based alpha matting approach for the accurate estimation of foreground and background layers of an image. Previous sampling-based methods typically rely on certain heuristics in collecting representative samples from known regions, and thus their performance deteriorates if the underlying assumptions are not satisfied. To alleviate this, we take an entirely new approach and formulate sampling as a sparse subset selection problem where we propose to pick a small set of candidate samples that best explains the unknown pixels. Moreover, we describe a new dissimilarity measure for comparing two samples which is based on KLdivergence between the distributions of features extracted in the vicinity of the samples. The proposed framework is general and could be easily extended to video matting by additionally taking temporal information into account in the sampling process. Evaluation on standard benchmark datasets for image and video matting demonstrates that our approach provides more accurate results compared to the state-of-the-art methods.
High-frame-rate full-vocal-tract 3D dynamic speech imaging.
Fu, Maojing; Barlaz, Marissa S; Holtrop, Joseph L; Perry, Jamie L; Kuehn, David P; Shosted, Ryan K; Liang, Zhi-Pei; Sutton, Bradley P
2017-04-01
To achieve high temporal frame rate, high spatial resolution and full-vocal-tract coverage for three-dimensional dynamic speech MRI by using low-rank modeling and sparse sampling. Three-dimensional dynamic speech MRI is enabled by integrating a novel data acquisition strategy and an image reconstruction method with the partial separability model: (a) a self-navigated sparse sampling strategy that accelerates data acquisition by collecting high-nominal-frame-rate cone navigator sand imaging data within a single repetition time, and (b) are construction method that recovers high-quality speech dynamics from sparse (k,t)-space data by enforcing joint low-rank and spatiotemporal total variation constraints. The proposed method has been evaluated through in vivo experiments. A nominal temporal frame rate of 166 frames per second (defined based on a repetition time of 5.99 ms) was achieved for an imaging volume covering the entire vocal tract with a spatial resolution of 2.2 × 2.2 × 5.0 mm 3 . Practical utility of the proposed method was demonstrated via both validation experiments and a phonetics investigation. Three-dimensional dynamic speech imaging is possible with full-vocal-tract coverage, high spatial resolution and high nominal frame rate to provide dynamic speech data useful for phonetic studies. Magn Reson Med 77:1619-1629, 2017. © 2016 International Society for Magnetic Resonance in Medicine. © 2016 International Society for Magnetic Resonance in Medicine.
Spatio-temporal Event Classification using Time-series Kernel based Structured Sparsity
Jeni, László A.; Lőrincz, András; Szabó, Zoltán; Cohn, Jeffrey F.; Kanade, Takeo
2016-01-01
In many behavioral domains, such as facial expression and gesture, sparse structure is prevalent. This sparsity would be well suited for event detection but for one problem. Features typically are confounded by alignment error in space and time. As a consequence, high-dimensional representations such as SIFT and Gabor features have been favored despite their much greater computational cost and potential loss of information. We propose a Kernel Structured Sparsity (KSS) method that can handle both the temporal alignment problem and the structured sparse reconstruction within a common framework, and it can rely on simple features. We characterize spatio-temporal events as time-series of motion patterns and by utilizing time-series kernels we apply standard structured-sparse coding techniques to tackle this important problem. We evaluated the KSS method using both gesture and facial expression datasets that include spontaneous behavior and differ in degree of difficulty and type of ground truth coding. KSS outperformed both sparse and non-sparse methods that utilize complex image features and their temporal extensions. In the case of early facial event classification KSS had 10% higher accuracy as measured by F1 score over kernel SVM methods1. PMID:27830214
Decoding memory features from hippocampal spiking activities using sparse classification models.
Dong Song; Hampson, Robert E; Robinson, Brian S; Marmarelis, Vasilis Z; Deadwyler, Sam A; Berger, Theodore W
2016-08-01
To understand how memory information is encoded in the hippocampus, we build classification models to decode memory features from hippocampal CA3 and CA1 spatio-temporal patterns of spikes recorded from epilepsy patients performing a memory-dependent delayed match-to-sample task. The classification model consists of a set of B-spline basis functions for extracting memory features from the spike patterns, and a sparse logistic regression classifier for generating binary categorical output of memory features. Results show that classification models can extract significant amount of memory information with respects to types of memory tasks and categories of sample images used in the task, despite the high level of variability in prediction accuracy due to the small sample size. These results support the hypothesis that memories are encoded in the hippocampal activities and have important implication to the development of hippocampal memory prostheses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, W; Yin, F; Wang, C
Purpose: To develop a technique to estimate on-board VC-MRI using multi-slice sparsely-sampled cine images, patient prior 4D-MRI, motion-modeling and free-form deformation for real-time 3D target verification of lung radiotherapy. Methods: A previous method has been developed to generate on-board VC-MRI by deforming prior MRI images based on a motion model(MM) extracted from prior 4D-MRI and a single-slice on-board 2D-cine image. In this study, free-form deformation(FD) was introduced to correct for errors in the MM when large anatomical changes exist. Multiple-slice sparsely-sampled on-board 2D-cine images located within the target are used to improve both the estimation accuracy and temporal resolution ofmore » VC-MRI. The on-board 2D-cine MRIs are acquired at 20–30frames/s by sampling only 10% of the k-space on Cartesian grid, with 85% of that taken at the central k-space. The method was evaluated using XCAT(computerized patient model) simulation of lung cancer patients with various anatomical and respirational changes from prior 4D-MRI to onboard volume. The accuracy was evaluated using Volume-Percent-Difference(VPD) and Center-of-Mass-Shift(COMS) of the estimated tumor volume. Effects of region-of-interest(ROI) selection, 2D-cine slice orientation, slice number and slice location on the estimation accuracy were evaluated. Results: VCMRI estimated using 10 sparsely-sampled sagittal 2D-cine MRIs achieved VPD/COMS of 9.07±3.54%/0.45±0.53mm among all scenarios based on estimation with ROI-MM-ROI-FD. The FD optimization improved estimation significantly for scenarios with anatomical changes. Using ROI-FD achieved better estimation than global-FD. Changing the multi-slice orientation to axial, coronal, and axial/sagittal orthogonal reduced the accuracy of VCMRI to VPD/COMS of 19.47±15.74%/1.57±2.54mm, 20.70±9.97%/2.34±0.92mm, and 16.02±13.79%/0.60±0.82mm, respectively. Reducing the number of cines to 8 enhanced temporal resolution of VC-MRI by 25% while maintaining the estimation accuracy. Estimation using slices sampled uniformly through the tumor achieved better accuracy than slices sampled non-uniformly. Conclusions: Preliminary studies showed that it is feasible to generate VC-MRI from multi-slice sparsely-sampled 2D-cine images for real-time 3D-target verification. This work was supported by the National Institutes of Health under Grant No. R01-CA184173 and a research grant from Varian Medical Systems.« less
Petrov, Andrii Y; Herbst, Michael; Andrew Stenger, V
2017-08-15
Rapid whole-brain dynamic Magnetic Resonance Imaging (MRI) is of particular interest in Blood Oxygen Level Dependent (BOLD) functional MRI (fMRI). Faster acquisitions with higher temporal sampling of the BOLD time-course provide several advantages including increased sensitivity in detecting functional activation, the possibility of filtering out physiological noise for improving temporal SNR, and freezing out head motion. Generally, faster acquisitions require undersampling of the data which results in aliasing artifacts in the object domain. A recently developed low-rank (L) plus sparse (S) matrix decomposition model (L+S) is one of the methods that has been introduced to reconstruct images from undersampled dynamic MRI data. The L+S approach assumes that the dynamic MRI data, represented as a space-time matrix M, is a linear superposition of L and S components, where L represents highly spatially and temporally correlated elements, such as the image background, while S captures dynamic information that is sparse in an appropriate transform domain. This suggests that L+S might be suited for undersampled task or slow event-related fMRI acquisitions because the periodic nature of the BOLD signal is sparse in the temporal Fourier transform domain and slowly varying low-rank brain background signals, such as physiological noise and drift, will be predominantly low-rank. In this work, as a proof of concept, we exploit the L+S method for accelerating block-design fMRI using a 3D stack of spirals (SoS) acquisition where undersampling is performed in the k z -t domain. We examined the feasibility of the L+S method to accurately separate temporally correlated brain background information in the L component while capturing periodic BOLD signals in the S component. We present results acquired in control human volunteers at 3T for both retrospective and prospectively acquired fMRI data for a visual activation block-design task. We show that a SoS fMRI acquisition with an acceleration of four and L+S reconstruction can achieve a brain coverage of 40 slices at 2mm isotropic resolution and 64 x 64 matrix size every 500ms. Copyright © 2017 Elsevier Inc. All rights reserved.
Temporal flicker reduction and denoising in video using sparse directional transforms
NASA Astrophysics Data System (ADS)
Kanumuri, Sandeep; Guleryuz, Onur G.; Civanlar, M. Reha; Fujibayashi, Akira; Boon, Choong S.
2008-08-01
The bulk of the video content available today over the Internet and over mobile networks suffers from many imperfections caused during acquisition and transmission. In the case of user-generated content, which is typically produced with inexpensive equipment, these imperfections manifest in various ways through noise, temporal flicker and blurring, just to name a few. Imperfections caused by compression noise and temporal flicker are present in both studio-produced and user-generated video content transmitted at low bit-rates. In this paper, we introduce an algorithm designed to reduce temporal flicker and noise in video sequences. The algorithm takes advantage of the sparse nature of video signals in an appropriate transform domain that is chosen adaptively based on local signal statistics. When the signal corresponds to a sparse representation in this transform domain, flicker and noise, which are spread over the entire domain, can be reduced easily by enforcing sparsity. Our results show that the proposed algorithm reduces flicker and noise significantly and enables better presentation of compressed videos.
NASA Astrophysics Data System (ADS)
Gong, Maoguo; Yang, Hailun; Zhang, Puzhao
2017-07-01
Ternary change detection aims to detect changes and group the changes into positive change and negative change. It is of great significance in the joint interpretation of spatial-temporal synthetic aperture radar images. In this study, sparse autoencoder, convolutional neural networks (CNN) and unsupervised clustering are combined to solve ternary change detection problem without any supervison. Firstly, sparse autoencoder is used to transform log-ratio difference image into a suitable feature space for extracting key changes and suppressing outliers and noise. And then the learned features are clustered into three classes, which are taken as the pseudo labels for training a CNN model as change feature classifier. The reliable training samples for CNN are selected from the feature maps learned by sparse autoencoder with certain selection rules. Having training samples and the corresponding pseudo labels, the CNN model can be trained by using back propagation with stochastic gradient descent. During its training procedure, CNN is driven to learn the concept of change, and more powerful model is established to distinguish different types of changes. Unlike the traditional methods, the proposed framework integrates the merits of sparse autoencoder and CNN to learn more robust difference representations and the concept of change for ternary change detection. Experimental results on real datasets validate the effectiveness and superiority of the proposed framework.
Angulo-Garcia, David; Berke, Joshua D; Torcini, Alessandro
2016-02-01
Striatal projection neurons form a sparsely-connected inhibitory network, and this arrangement may be essential for the appropriate temporal organization of behavior. Here we show that a simplified, sparse inhibitory network of Leaky-Integrate-and-Fire neurons can reproduce some key features of striatal population activity, as observed in brain slices. In particular we develop a new metric to determine the conditions under which sparse inhibitory networks form anti-correlated cell assemblies with time-varying activity of individual cells. We find that under these conditions the network displays an input-specific sequence of cell assembly switching, that effectively discriminates similar inputs. Our results support the proposal that GABAergic connections between striatal projection neurons allow stimulus-selective, temporally-extended sequential activation of cell assemblies. Furthermore, we help to show how altered intrastriatal GABAergic signaling may produce aberrant network-level information processing in disorders such as Parkinson's and Huntington's diseases.
Variational Assimilation of Sparse and Uncertain Satellite Data For 1D Saint-Venant River Models
NASA Astrophysics Data System (ADS)
Garambois, P. A.; Brisset, P.; Monnier, J.; Roux, H.
2016-12-01
Profusion of satellites are providing increasingly accurate measurements of continental water cyle, and water bodies variations while in situ observability is declining. The future Surface Water and Ocean Topography (SWOT) mission will provide maps of river surface elevations widths and slopes with an almost global coverage and temporal revisits. This will offer the possibility to address a larger variety of inverse problems in surface hydrology. Data assimilation techniques, that are broadly used in several scientific fields, aim to optimally combine models, system observations and prior information. Variational assimilation consists in iterative minimization of a discrepency measure between model outputs and observations, here for retrieving boundary conditions and parameters of a 1D Saint Venant model. Nevertheless, inferring river discharge and hydraulic parameters thanks to the observation of river surface is not straightforward. This is particularly true in the case of sparse and uncertain observations of flow state variables since they are governed by nonlinear physical processes. This paper investigates the identifiability of hydraulic controls given sparse and uncertain satellite observations of a river. The identifiability of river discharge alone and with roughness is tested for several spatio temporal patterns of river observations, including SWOT like observations. A new 1D Shallow water model with variational data assimilation, within the DassFlow chain is presented as well as postprocessing and observation operator dedicated to the future SWOT and SWOT simulator data. In view to decrease inverse problem dimensionality discharge is represented in a reduced basis. Moreover we introduce an original and reduced parametrization of the flow resistance that can account for various flow regimes along with a cross section design dedicated to remote sensing. We show which discharge temporal frequencies can be identified w.r.t observation ones and at which accuracy. Eventually the important question of the discharge identifiability potential between observation times and depending on the spatio-temporal sampling is adressed with respect to the wave lengths of the hydrological signals.
LESS: Link Estimation with Sparse Sampling in Intertidal WSNs
Ji, Xiaoyu; Chen, Yi-chao; Li, Xiaopeng; Xu, Wenyuan
2018-01-01
Deploying wireless sensor networks (WSN) in the intertidal area is an effective approach for environmental monitoring. To sustain reliable data delivery in such a dynamic environment, a link quality estimation mechanism is crucial. However, our observations in two real WSN systems deployed in the intertidal areas reveal that link update in routing protocols often suffers from energy and bandwidth waste due to the frequent link quality measurement and updates. In this paper, we carefully investigate the network dynamics using real-world sensor network data and find it feasible to achieve accurate estimation of link quality using sparse sampling. We design and implement a compressive-sensing-based link quality estimation protocol, LESS, which incorporates both spatial and temporal characteristics of the system to aid the link update in routing protocols. We evaluate LESS in both real WSN systems and a large-scale simulation, and the results show that LESS can reduce energy and bandwidth consumption by up to 50% while still achieving more than 90% link quality estimation accuracy. PMID:29494557
Lee, Young-Beom; Lee, Jeonghyeon; Tak, Sungho; Lee, Kangjoo; Na, Duk L; Seo, Sang Won; Jeong, Yong; Ye, Jong Chul
2016-01-15
Recent studies of functional connectivity MR imaging have revealed that the default-mode network activity is disrupted in diseases such as Alzheimer's disease (AD). However, there is not yet a consensus on the preferred method for resting-state analysis. Because the brain is reported to have complex interconnected networks according to graph theoretical analysis, the independency assumption, as in the popular independent component analysis (ICA) approach, often does not hold. Here, rather than using the independency assumption, we present a new statistical parameter mapping (SPM)-type analysis method based on a sparse graph model where temporal dynamics at each voxel position are described as a sparse combination of global brain dynamics. In particular, a new concept of a spatially adaptive design matrix has been proposed to represent local connectivity that shares the same temporal dynamics. If we further assume that local network structures within a group are similar, the estimation problem of global and local dynamics can be solved using sparse dictionary learning for the concatenated temporal data across subjects. Moreover, under the homoscedasticity variance assumption across subjects and groups that is often used in SPM analysis, the aforementioned individual and group analyses using sparse dictionary learning can be accurately modeled by a mixed-effect model, which also facilitates a standard SPM-type group-level inference using summary statistics. Using an extensive resting fMRI data set obtained from normal, mild cognitive impairment (MCI), and Alzheimer's disease patient groups, we demonstrated that the changes in the default mode network extracted by the proposed method are more closely correlated with the progression of Alzheimer's disease. Copyright © 2015 Elsevier Inc. All rights reserved.
Adaptive regulation of sparseness by feedforward inhibition
Assisi, Collins; Stopfer, Mark; Laurent, Gilles; Bazhenov, Maxim
2014-01-01
In the mushroom body of insects, odors are represented by very few spikes in a small number of neurons, a highly efficient strategy known as sparse coding. Physiological studies of these neurons have shown that sparseness is maintained across thousand-fold changes in odor concentration. Using a realistic computational model, we propose that sparseness in the olfactory system is regulated by adaptive feedforward inhibition. When odor concentration changes, feedforward inhibition modulates the duration of the temporal window over which the mushroom body neurons may integrate excitatory presynaptic input. This simple adaptive mechanism could maintain the sparseness of sensory representations across wide ranges of stimulus conditions. PMID:17660812
Piao, Xinglin; Zhang, Yong; Li, Tingshu; Hu, Yongli; Liu, Hao; Zhang, Ke; Ge, Yun
2016-01-01
The Received Signal Strength (RSS) fingerprint-based indoor localization is an important research topic in wireless network communications. Most current RSS fingerprint-based indoor localization methods do not explore and utilize the spatial or temporal correlation existing in fingerprint data and measurement data, which is helpful for improving localization accuracy. In this paper, we propose an RSS fingerprint-based indoor localization method by integrating the spatio-temporal constraints into the sparse representation model. The proposed model utilizes the inherent spatial correlation of fingerprint data in the fingerprint matching and uses the temporal continuity of the RSS measurement data in the localization phase. Experiments on the simulated data and the localization tests in the real scenes show that the proposed method improves the localization accuracy and stability effectively compared with state-of-the-art indoor localization methods. PMID:27827882
Efficient space-time sampling with pixel-wise coded exposure for high-speed imaging.
Liu, Dengyu; Gu, Jinwei; Hitomi, Yasunobu; Gupta, Mohit; Mitsunaga, Tomoo; Nayar, Shree K
2014-02-01
Cameras face a fundamental trade-off between spatial and temporal resolution. Digital still cameras can capture images with high spatial resolution, but most high-speed video cameras have relatively low spatial resolution. It is hard to overcome this trade-off without incurring a significant increase in hardware costs. In this paper, we propose techniques for sampling, representing, and reconstructing the space-time volume to overcome this trade-off. Our approach has two important distinctions compared to previous works: 1) We achieve sparse representation of videos by learning an overcomplete dictionary on video patches, and 2) we adhere to practical hardware constraints on sampling schemes imposed by architectures of current image sensors, which means that our sampling function can be implemented on CMOS image sensors with modified control units in the future. We evaluate components of our approach, sampling function and sparse representation, by comparing them to several existing approaches. We also implement a prototype imaging system with pixel-wise coded exposure control using a liquid crystal on silicon device. System characteristics such as field of view and modulation transfer function are evaluated for our imaging system. Both simulations and experiments on a wide range of scenes show that our method can effectively reconstruct a video from a single coded image while maintaining high spatial resolution.
Piano Transcription with Convolutional Sparse Lateral Inhibition
Cogliati, Andrea; Duan, Zhiyao; Wohlberg, Brendt Egon
2017-02-08
This paper extends our prior work on contextdependent piano transcription to estimate the length of the notes in addition to their pitch and onset. This approach employs convolutional sparse coding along with lateral inhibition constraints to approximate a musical signal as the sum of piano note waveforms (dictionary elements) convolved with their temporal activations. The waveforms are pre-recorded for the specific piano to be transcribed in the specific environment. A dictionary containing multiple waveforms per pitch is generated by truncating a long waveform for each pitch to different lengths. During transcription, the dictionary elements are fixed and their temporal activationsmore » are estimated and post-processed to obtain the pitch, onset and note length estimation. A sparsity penalty promotes globally sparse activations of the dictionary elements, and a lateral inhibition term penalizes concurrent activations of different waveforms corresponding to the same pitch within a temporal neighborhood, to achieve note length estimation. Experiments on the MAPS dataset show that the proposed approach significantly outperforms a state-of-the-art music transcription method trained in the same context-dependent setting in transcription accuracy.« less
Piano Transcription with Convolutional Sparse Lateral Inhibition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cogliati, Andrea; Duan, Zhiyao; Wohlberg, Brendt Egon
This paper extends our prior work on contextdependent piano transcription to estimate the length of the notes in addition to their pitch and onset. This approach employs convolutional sparse coding along with lateral inhibition constraints to approximate a musical signal as the sum of piano note waveforms (dictionary elements) convolved with their temporal activations. The waveforms are pre-recorded for the specific piano to be transcribed in the specific environment. A dictionary containing multiple waveforms per pitch is generated by truncating a long waveform for each pitch to different lengths. During transcription, the dictionary elements are fixed and their temporal activationsmore » are estimated and post-processed to obtain the pitch, onset and note length estimation. A sparsity penalty promotes globally sparse activations of the dictionary elements, and a lateral inhibition term penalizes concurrent activations of different waveforms corresponding to the same pitch within a temporal neighborhood, to achieve note length estimation. Experiments on the MAPS dataset show that the proposed approach significantly outperforms a state-of-the-art music transcription method trained in the same context-dependent setting in transcription accuracy.« less
Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.
Omer, Travis; Intes, Xavier; Hahn, Juergen
2015-01-01
Fluorescence lifetime imaging (FLIM) when paired with Förster resonance energy transfer (FLIM-FRET) enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches is dependent on multiple factors such as signal-to-noise ratio and number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a determined number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), with in silico and in vivo experiment validations. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications which would be infeasible if the entire number of time sampling points were used.
Action Recognition Using Nonnegative Action Component Representation and Sparse Basis Selection.
Wang, Haoran; Yuan, Chunfeng; Hu, Weiming; Ling, Haibin; Yang, Wankou; Sun, Changyin
2014-02-01
In this paper, we propose using high-level action units to represent human actions in videos and, based on such units, a novel sparse model is developed for human action recognition. There are three interconnected components in our approach. First, we propose a new context-aware spatial-temporal descriptor, named locally weighted word context, to improve the discriminability of the traditionally used local spatial-temporal descriptors. Second, from the statistics of the context-aware descriptors, we learn action units using the graph regularized nonnegative matrix factorization, which leads to a part-based representation and encodes the geometrical information. These units effectively bridge the semantic gap in action recognition. Third, we propose a sparse model based on a joint l2,1-norm to preserve the representative items and suppress noise in the action units. Intuitively, when learning the dictionary for action representation, the sparse model captures the fact that actions from the same class share similar units. The proposed approach is evaluated on several publicly available data sets. The experimental results and analysis clearly demonstrate the effectiveness of the proposed approach.
Temporally-Constrained Group Sparse Learning for Longitudinal Data Analysis in Alzheimer’s Disease
Jie, Biao; Liu, Mingxia; Liu, Jun
2016-01-01
Sparse learning has been widely investigated for analysis of brain images to assist the diagnosis of Alzheimer’s disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). However, most existing sparse learning-based studies only adopt cross-sectional analysis methods, where the sparse model is learned using data from a single time-point. Actually, multiple time-points of data are often available in brain imaging applications, which can be used in some longitudinal analysis methods to better uncover the disease progression patterns. Accordingly, in this paper we propose a novel temporally-constrained group sparse learning method aiming for longitudinal analysis with multiple time-points of data. Specifically, we learn a sparse linear regression model by using the imaging data from multiple time-points, where a group regularization term is first employed to group the weights for the same brain region across different time-points together. Furthermore, to reflect the smooth changes between data derived from adjacent time-points, we incorporate two smoothness regularization terms into the objective function, i.e., one fused smoothness term which requires that the differences between two successive weight vectors from adjacent time-points should be small, and another output smoothness term which requires the differences between outputs of two successive models from adjacent time-points should also be small. We develop an efficient optimization algorithm to solve the proposed objective function. Experimental results on ADNI database demonstrate that, compared with conventional sparse learning-based methods, our proposed method can achieve improved regression performance and also help in discovering disease-related biomarkers. PMID:27093313
NASA Astrophysics Data System (ADS)
Magyar, Andrew
The recent discovery of cells that respond to purely conceptual features of the environment (particular people, landmarks, objects, etc) in the human medial temporal lobe (MTL), has raised many questions about the nature of the neural code in humans. The goal of this dissertation is to develop a novel statistical method based upon maximum likelihood regression which will then be applied to these experiments in order to produce a quantitative description of the coding properties of the human MTL. In general, the method is applicable to any experiments in which a sequence of stimuli are presented to an organism while the binary responses of a large number of cells are recorded in parallel. The central concept underlying the approach is the total probability that a neuron responds to a random stimulus, called the neuronal sparsity. The model then estimates the distribution of response probabilities across the population of cells. Applying the method to single-unit recordings from the human medial temporal lobe, estimates of the sparsity distributions are acquired in four regions: the hippocampus, the entorhinal cortex, the amygdala, and the parahippocampal cortex. The resulting distributions are found to be sparse (large fraction of cells with a low response probability) and highly non-uniform, with a large proportion of ultra-sparse neurons that possess a very low response probability, and a smaller population of cells which respond much more frequently. Rammifications of the results are discussed in relation to the sparse coding hypothesis, and comparisons are made between the statistics of the human medial temporal lobe cells and place cells observed in the rodent hippocampus.
Miao, Minmin; Zeng, Hong; Wang, Aimin; Zhao, Changsen; Liu, Feixiang
2017-02-15
Common spatial pattern (CSP) is most widely used in motor imagery based brain-computer interface (BCI) systems. In conventional CSP algorithm, pairs of the eigenvectors corresponding to both extreme eigenvalues are selected to construct the optimal spatial filter. In addition, an appropriate selection of subject-specific time segments and frequency bands plays an important role in its successful application. This study proposes to optimize spatial-frequency-temporal patterns for discriminative feature extraction. Spatial optimization is implemented by channel selection and finding discriminative spatial filters adaptively on each time-frequency segment. A novel Discernibility of Feature Sets (DFS) criteria is designed for spatial filter optimization. Besides, discriminative features located in multiple time-frequency segments are selected automatically by the proposed sparse time-frequency segment common spatial pattern (STFSCSP) method which exploits sparse regression for significant features selection. Finally, a weight determined by the sparse coefficient is assigned for each selected CSP feature and we propose a Weighted Naïve Bayesian Classifier (WNBC) for classification. Experimental results on two public EEG datasets demonstrate that optimizing spatial-frequency-temporal patterns in a data-driven manner for discriminative feature extraction greatly improves the classification performance. The proposed method gives significantly better classification accuracies in comparison with several competing methods in the literature. The proposed approach is a promising candidate for future BCI systems. Copyright © 2016 Elsevier B.V. All rights reserved.
Van der Merwe, Deon; Price, Kevin P
2015-03-27
Harmful algal blooms (HABs) degrade water quality and produce toxins. The spatial distribution of HAbs may change rapidly due to variations wind, water currents, and population dynamics. Risk assessments, based on traditional sampling methods, are hampered by the sparseness of water sample data points, and delays between sampling and the availability of results. There is a need for local risk assessment and risk management at the spatial and temporal resolution relevant to local human and animal interactions at specific sites and times. Small, unmanned aircraft systems can gather color-infrared reflectance data at appropriate spatial and temporal resolutions, with full control over data collection timing, and short intervals between data gathering and result availability. Data can be interpreted qualitatively, or by generating a blue normalized difference vegetation index (BNDVI) that is correlated with cyanobacterial biomass densities at the water surface, as estimated using a buoyant packed cell volume (BPCV). Correlations between BNDVI and BPCV follow a logarithmic model, with r(2)-values under field conditions from 0.77 to 0.87. These methods provide valuable information that is complimentary to risk assessment data derived from traditional risk assessment methods, and could help to improve risk management at the local level.
Van der Merwe, Deon; Price, Kevin P.
2015-01-01
Harmful algal blooms (HABs) degrade water quality and produce toxins. The spatial distribution of HAbs may change rapidly due to variations wind, water currents, and population dynamics. Risk assessments, based on traditional sampling methods, are hampered by the sparseness of water sample data points, and delays between sampling and the availability of results. There is a need for local risk assessment and risk management at the spatial and temporal resolution relevant to local human and animal interactions at specific sites and times. Small, unmanned aircraft systems can gather color-infrared reflectance data at appropriate spatial and temporal resolutions, with full control over data collection timing, and short intervals between data gathering and result availability. Data can be interpreted qualitatively, or by generating a blue normalized difference vegetation index (BNDVI) that is correlated with cyanobacterial biomass densities at the water surface, as estimated using a buoyant packed cell volume (BPCV). Correlations between BNDVI and BPCV follow a logarithmic model, with r2-values under field conditions from 0.77 to 0.87. These methods provide valuable information that is complimentary to risk assessment data derived from traditional risk assessment methods, and could help to improve risk management at the local level. PMID:25826055
Optimized Design and Analysis of Sparse-Sampling fMRI Experiments
Perrachione, Tyler K.; Ghosh, Satrajit S.
2013-01-01
Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional time series. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Despite being in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest the employment of three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) Sparse analyses should utilize a physiologically informed model that incorporates hemodynamic response convolution to reduce model error. (2) The design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size. (3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to increase the number of samples and improve statistical power. PMID:23616742
Optimized design and analysis of sparse-sampling FMRI experiments.
Perrachione, Tyler K; Ghosh, Satrajit S
2013-01-01
Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional time series. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Despite being in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest the employment of three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) Sparse analyses should utilize a physiologically informed model that incorporates hemodynamic response convolution to reduce model error. (2) The design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size. (3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to increase the number of samples and improve statistical power.
Sparse Coding of Natural Human Motion Yields Eigenmotions Consistent Across People
NASA Astrophysics Data System (ADS)
Thomik, Andreas; Faisal, A. Aldo
2015-03-01
Providing a precise mathematical description of the structure of natural human movement is a challenging problem. We use a data-driven approach to seek a generative model of movement capturing the underlying simplicity of spatial and temporal structure of behaviour observed in daily life. In perception, the analysis of natural scenes has shown that sparse codes of such scenes are information theoretic efficient descriptors with direct neuronal correlates. Translating from perception to action, we identify a generative model of movement generation by the human motor system. Using wearable full-hand motion capture, we measure the digit movement of the human hand in daily life. We learn a dictionary of ``eigenmotions'' which we use for sparse encoding of the movement data. We show that the dictionaries are generally well preserved across subjects with small deviations accounting for individuality of the person and variability in tasks. Further, the dictionary elements represent motions which can naturally describe hand movements. Our findings suggest the motor system can compose complex movement behaviours out of the spatially and temporally sparse activation of ``eigenmotion'' neurons, and is consistent with data on grasp-type specificity of specialised neurons in the premotor cortex. Andreas is supported by the Luxemburg Research Fund (1229297).
NASA Technical Reports Server (NTRS)
Damadeo, R. P.; Zawodny, J. M.; Thomason, L. W.
2014-01-01
This paper details a new method of regression for sparsely sampled data sets for use with time-series analysis, in particular the Stratospheric Aerosol and Gas Experiment (SAGE) II ozone data set. Non-uniform spatial, temporal, and diurnal sampling present in the data set result in biased values for the long-term trend if not accounted for. This new method is performed close to the native resolution of measurements and is a simultaneous temporal and spatial analysis that accounts for potential diurnal ozone variation. Results show biases, introduced by the way data is prepared for use with traditional methods, can be as high as 10%. Derived long-term changes show declines in ozone similar to other studies but very different trends in the presumed recovery period, with differences up to 2% per decade. The regression model allows for a variable turnaround time and reveals a hemispheric asymmetry in derived trends in the middle to upper stratosphere. Similar methodology is also applied to SAGE II aerosol optical depth data to create a new volcanic proxy that covers the SAGE II mission period. Ultimately this technique may be extensible towards the inclusion of multiple data sets without the need for homogenization.
Motion vector field upsampling for improved 4D cone-beam CT motion compensation of the thorax
NASA Astrophysics Data System (ADS)
Sauppe, Sebastian; Rank, Christopher M.; Brehm, Marcus; Paysan, Pascal; Seghers, Dieter; Kachelrieß, Marc
2017-03-01
To improve the accuracy of motion vector fields (MVFs) required for respiratory motion compensated (MoCo) CT image reconstruction without increasing the computational complexity of the MVF estimation approach, we propose a MVF upsampling method that is able to reduce the motion blurring in reconstructed 4D images. While respiratory gating improves the temporal resolution, it leads to sparse view sampling artifacts. MoCo image reconstruction has the potential to remove all motion artifacts while simultaneously making use of 100% of the rawdata. However the MVF accuracy is still below the temporal resolution of the CBCT data acquisition. Increasing the number of motion bins would increase reconstruction time and amplify sparse view artifacts, but not necessarily the accuracy of MVF. Therefore we propose a new method to upsample estimated MVFs and use those for MoCo. To estimate the MVFs, a modified version of the Demons algorithm is used. Our proposed method is able to interpolate the original MVFs up to a factor that each projection has its own individual MVF. To validate the method we use an artificially deformed clinical CT scan, with a breathing pattern of a real patient, and patient data acquired with a TrueBeamTM4D CBCT system (Varian Medical Systems). We evaluate our method for different numbers of respiratory bins, each again with different upsampling factors. Employing our upsampling method, motion blurring in the reconstructed 4D images, induced by irregular breathing and the limited temporal resolution of phase-correlated images, is substantially reduced.
Context-Dependent Piano Music Transcription With Convolutional Sparse Coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cogliati, Andrea; Duan, Zhiyao; Wohlberg, Brendt
This study presents a novel approach to automatic transcription of piano music in a context-dependent setting. This approach employs convolutional sparse coding to approximate the music waveform as the summation of piano note waveforms (dictionary elements) convolved with their temporal activations (onset transcription). The piano note waveforms are pre-recorded for the specific piano to be transcribed in the specific environment. During transcription, the note waveforms are fixed and their temporal activations are estimated and post-processed to obtain the pitch and onset transcription. This approach works in the time domain, models temporal evolution of piano notes, and estimates pitches and onsetsmore » simultaneously in the same framework. Finally, experiments show that it significantly outperforms a state-of-the-art music transcription method trained in the same context-dependent setting, in both transcription accuracy and time precision, in various scenarios including synthetic, anechoic, noisy, and reverberant environments.« less
Context-Dependent Piano Music Transcription With Convolutional Sparse Coding
Cogliati, Andrea; Duan, Zhiyao; Wohlberg, Brendt
2016-08-04
This study presents a novel approach to automatic transcription of piano music in a context-dependent setting. This approach employs convolutional sparse coding to approximate the music waveform as the summation of piano note waveforms (dictionary elements) convolved with their temporal activations (onset transcription). The piano note waveforms are pre-recorded for the specific piano to be transcribed in the specific environment. During transcription, the note waveforms are fixed and their temporal activations are estimated and post-processed to obtain the pitch and onset transcription. This approach works in the time domain, models temporal evolution of piano notes, and estimates pitches and onsetsmore » simultaneously in the same framework. Finally, experiments show that it significantly outperforms a state-of-the-art music transcription method trained in the same context-dependent setting, in both transcription accuracy and time precision, in various scenarios including synthetic, anechoic, noisy, and reverberant environments.« less
C-FSCV: Compressive Fast-Scan Cyclic Voltammetry for Brain Dopamine Recording.
Zamani, Hossein; Bahrami, Hamid Reza; Chalwadi, Preeti; Garris, Paul A; Mohseni, Pedram
2018-01-01
This paper presents a novel compressive sensing framework for recording brain dopamine levels with fast-scan cyclic voltammetry (FSCV) at a carbon-fiber microelectrode. Termed compressive FSCV (C-FSCV), this approach compressively samples the measured total current in each FSCV scan and performs basic FSCV processing steps, e.g., background current averaging and subtraction, directly with compressed measurements. The resulting background-subtracted faradaic currents, which are shown to have a block-sparse representation in the discrete cosine transform domain, are next reconstructed from their compressively sampled counterparts with the block sparse Bayesian learning algorithm. Using a previously recorded dopamine dataset, consisting of electrically evoked signals recorded in the dorsal striatum of an anesthetized rat, the C-FSCV framework is shown to be efficacious in compressing and reconstructing brain dopamine dynamics and associated voltammograms with high fidelity (correlation coefficient, ), while achieving compression ratio, CR, values as high as ~ 5. Moreover, using another set of dopamine data recorded 5 minutes after administration of amphetamine (AMPH) to an ambulatory rat, C-FSCV once again compresses (CR = 5) and reconstructs the temporal pattern of dopamine release with high fidelity ( ), leading to a true-positive rate of 96.4% in detecting AMPH-induced dopamine transients.
An ultra-sparse code underliesthe generation of neural sequences in a songbird
NASA Astrophysics Data System (ADS)
Hahnloser, Richard H. R.; Kozhevnikov, Alexay A.; Fee, Michale S.
2002-09-01
Sequences of motor activity are encoded in many vertebrate brains by complex spatio-temporal patterns of neural activity; however, the neural circuit mechanisms underlying the generation of these pre-motor patterns are poorly understood. In songbirds, one prominent site of pre-motor activity is the forebrain robust nucleus of the archistriatum (RA), which generates stereotyped sequences of spike bursts during song and recapitulates these sequences during sleep. We show that the stereotyped sequences in RA are driven from nucleus HVC (high vocal centre), the principal pre-motor input to RA. Recordings of identified HVC neurons in sleeping and singing birds show that individual HVC neurons projecting onto RA neurons produce bursts sparsely, at a single, precise time during the RA sequence. These HVC neurons burst sequentially with respect to one another. We suggest that at each time in the RA sequence, the ensemble of active RA neurons is driven by a subpopulation of RA-projecting HVC neurons that is active only at that time. As a population, these HVC neurons may form an explicit representation of time in the sequence. Such a sparse representation, a temporal analogue of the `grandmother cell' concept for object recognition, eliminates the problem of temporal interference during sequence generation and learning attributed to more distributed representations.
Determining Greenland Ice Sheet Accumulation Rates from Radar Remote Sensing
NASA Technical Reports Server (NTRS)
Jezek, Kenneth C.
2001-01-01
An important component of NASA's Program for Arctic Regional Climate Assessment (PARCA) is a mass balance investigation of the Greenland Ice Sheet. The mass balance is calculated by taking the difference between the snow accumulation and the ice discharge of the ice sheet. Uncertainties in this calculation include the snow accumulation rate, which has traditionally been determined by interpolating data from ice core samples taken throughout the ice sheet. The sparse data associated with ice cores, coupled with the high spatial and temporal resolution provided by remote sensing, have motivated scientists to investigate relationships between accumulation rate and microwave observations.
NASA Astrophysics Data System (ADS)
Lundquist, K. A.; Jensen, D. D.; Lucas, D. D.
2017-12-01
Atmospheric source reconstruction allows for the probabilistic estimate of source characteristics of an atmospheric release using observations of the release. Performance of the inversion depends partially on the temporal frequency and spatial scale of the observations. The objective of this study is to quantify the sensitivity of the source reconstruction method to sparse spatial and temporal observations. To this end, simulations of atmospheric transport of noble gasses are created for the 2006 nuclear test at the Punggye-ri nuclear test site. Synthetic observations are collected from the simulation, and are taken as "ground truth". Data denial techniques are used to progressively coarsen the temporal and spatial resolution of the synthetic observations, while the source reconstruction model seeks to recover the true input parameters from the synthetic observations. Reconstructed parameters considered here are source location, source timing and source quantity. Reconstruction is achieved by running an ensemble of thousands of dispersion model runs that sample from a uniform distribution of the input parameters. Machine learning is used to train a computationally-efficient surrogate model from the ensemble simulations. Monte Carlo sampling and Bayesian inversion are then used in conjunction with the surrogate model to quantify the posterior probability density functions of source input parameters. This research seeks to inform decision makers of the tradeoffs between more expensive, high frequency observations and less expensive, low frequency observations.
Synthesizing spatiotemporally sparse smartphone sensor data for bridge modal identification
NASA Astrophysics Data System (ADS)
Ozer, Ekin; Feng, Maria Q.
2016-08-01
Smartphones as vibration measurement instruments form a large-scale, citizen-induced, and mobile wireless sensor network (WSN) for system identification and structural health monitoring (SHM) applications. Crowdsourcing-based SHM is possible with a decentralized system granting citizens with operational responsibility and control. Yet, citizen initiatives introduce device mobility, drastically changing SHM results due to uncertainties in the time and the space domains. This paper proposes a modal identification strategy that fuses spatiotemporally sparse SHM data collected by smartphone-based WSNs. Multichannel data sampled with the time and the space independence is used to compose the modal identification parameters such as frequencies and mode shapes. Structural response time history can be gathered by smartphone accelerometers and converted into Fourier spectra by the processor units. Timestamp, data length, energy to power conversion address temporal variation, whereas spatial uncertainties are reduced by geolocation services or determining node identity via QR code labels. Then, parameters collected from each distributed network component can be extended to global behavior to deduce modal parameters without the need of a centralized and synchronous data acquisition system. The proposed method is tested on a pedestrian bridge and compared with a conventional reference monitoring system. The results show that the spatiotemporally sparse mobile WSN data can be used to infer modal parameters despite non-overlapping sensor operation schedule.
Sparse Modeling of Human Actions from Motion Imagery
2011-09-02
is here developed. Spatio-temporal features that char- acterize local changes in the image are rst extracted. This is followed by the learning of a...video comes from the optimal sparse linear com- bination of the learned basis vectors (action primitives) representing the actions. A low...computational cost deep-layer model learning the inter- class correlations of the data is added for increasing discriminative power. In spite of its simplicity
Dynamic Controllability and Dispatchability Relationships
NASA Technical Reports Server (NTRS)
Morris, Paul Henry
2014-01-01
An important issue for temporal planners is the ability to handle temporal uncertainty. Recent papers have addressed the question of how to tell whether a temporal network is Dynamically Controllable, i.e., whether the temporal requirements are feasible in the light of uncertain durations of some processes. We present a fast algorithm for Dynamic Controllability. We also note a correspondence between the reduction steps in the algorithm and the operations involved in converting the projections to dispatchable form. This has implications for the complexity for sparse networks.
Practical Sub-Nyquist Sampling via Array-Based Compressed Sensing Receiver Architecture
2016-07-10
different array ele- ments at different sub-Nyquist sampling rates. Signal processing inspired by the sparse fast Fourier transform allows for signal...reconstruction algorithms can be computationally demanding (REF). The related sparse Fourier transform algorithms aim to reduce the processing time nec- essary to...compute the DFT of frequency-sparse signals [7]. In particular, the sparse fast Fourier transform (sFFT) achieves processing time better than the
Sparsely sampling the sky: Regular vs. random sampling
NASA Astrophysics Data System (ADS)
Paykari, P.; Pires, S.; Starck, J.-L.; Jaffe, A. H.
2015-09-01
Aims: The next generation of galaxy surveys, aiming to observe millions of galaxies, are expensive both in time and money. This raises questions regarding the optimal investment of this time and money for future surveys. In a previous work, we have shown that a sparse sampling strategy could be a powerful substitute for the - usually favoured - contiguous observation of the sky. In our previous paper, regular sparse sampling was investigated, where the sparse observed patches were regularly distributed on the sky. The regularity of the mask introduces a periodic pattern in the window function, which induces periodic correlations at specific scales. Methods: In this paper, we use a Bayesian experimental design to investigate a "random" sparse sampling approach, where the observed patches are randomly distributed over the total sparsely sampled area. Results: We find that in this setting, the induced correlation is evenly distributed amongst all scales as there is no preferred scale in the window function. Conclusions: This is desirable when we are interested in any specific scale in the galaxy power spectrum, such as the matter-radiation equality scale. As the figure of merit shows, however, there is no preference between regular and random sampling to constrain the overall galaxy power spectrum and the cosmological parameters.
Holton, Chase; Luo, Hong; Dahlen, Paul; Gorder, Kyle; Dettenmaier, Erik; Johnson, Paul C
2013-01-01
Current vapor intrusion (VI) pathway assessment heavily weights concentrations from infrequent (monthly-seasonal) 24 h indoor air samples. This study collected a long-term and high-frequency data set that can be used to assess indoor air sampling strategies for answering key pathway assessment questions like: "Is VI occurring?", and "Will VI impacts exceed thresholds of concern?". Indoor air sampling was conducted for 2.5 years at 2-4 h intervals in a house overlying a dilute chlorinated solvent plume (10-50 μg/L TCE). Indoor air concentrations varied by 3 orders of magnitude (<0.01-10 ppbv TCE) with two recurring behaviors. The VI-active behavior, which was prevalent in fall, winter, and spring involved time-varying impacts intermixed with sporadic periods of inactivity; the VI-dormant behavior, which was prevalent in the summer, involved long periods of inactivity with sporadic VI impacts. These data were used to study outcomes of three simple sparse data sampling plans; the probabilities of false-negative and false-positive decisions were dependent on the ratio of the (action level/true mean of the data), the number of exceedances needed, and the sampling strategy. The analysis also suggested a significant potential for poor characterization of long-term mean concentrations with sparse sampling plans. The results point to a need for additional dense data sets and further investigation into the robustness of possible VI assessment paradigms. As this is the first data set of its kind, it is unknown if the results are representative of other VI-sites.
Adaptive OFDM Waveform Design for Spatio-Temporal-Sparsity Exploited STAP Radar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Satyabrata
In this chapter, we describe a sparsity-based space-time adaptive processing (STAP) algorithm to detect a slowly moving target using an orthogonal frequency division multiplexing (OFDM) radar. The motivation of employing an OFDM signal is that it improves the target-detectability from the interfering signals by increasing the frequency diversity of the system. However, due to the addition of one extra dimension in terms of frequency, the adaptive degrees-of-freedom in an OFDM-STAP also increases. Therefore, to avoid the construction a fully adaptive OFDM-STAP, we develop a sparsity-based STAP algorithm. We observe that the interference spectrum is inherently sparse in the spatio-temporal domain,more » as the clutter responses occupy only a diagonal ridge on the spatio-temporal plane and the jammer signals interfere only from a few spatial directions. Hence, we exploit that sparsity to develop an efficient STAP technique that utilizes considerably lesser number of secondary data compared to the other existing STAP techniques, and produces nearly optimum STAP performance. In addition to designing the STAP filter, we optimally design the transmit OFDM signals by maximizing the output signal-to-interference-plus-noise ratio (SINR) in order to improve the STAP performance. The computation of output SINR depends on the estimated value of the interference covariance matrix, which we obtain by applying the sparse recovery algorithm. Therefore, we analytically assess the effects of the synthesized OFDM coefficients on the sparse recovery of the interference covariance matrix by computing the coherence measure of the sparse measurement matrix. Our numerical examples demonstrate the achieved STAP-performance due to sparsity-based technique and adaptive waveform design.« less
A feasibility study for compressed sensing combined phase contrast MR angiography reconstruction
NASA Astrophysics Data System (ADS)
Lee, Dong-Hoon; Hong, Cheol-Pyo; Lee, Man-Woo; Han, Bong-Soo
2012-02-01
Phase contrast magnetic resonance angiography (PC MRA) is a technique for flow velocity measurement and vessels visualization, simultaneously. The PC MRA takes long scan time because each flow encoding gradients which are composed bipolar gradient type need to reconstruct the angiography image. Moreover, it takes more image acquisition time when we use the PC MRA at the low-tesla MRI system. In this study, we studied and evaluation of feasibility for CS MRI reconstruction combined PC MRA which data acquired by low-tesla MRI system. We used non-linear reconstruction algorithm which named Bregman iteration for CS image reconstruction and validate the usefulness of CS combined PC MRA reconstruction technique. The results of CS reconstructed PC MRA images provide similar level of image quality between fully sampled reconstruction data and sparse sampled reconstruction using CS technique. Although our results used half of sampling ratio and do not used specification hardware device or performance which are improving the temporal resolution of MR image acquisition such as parallel imaging reconstruction using phased array coil or non-cartesian trajectory, we think that CS combined PC MRA technique will be helpful to increase the temporal resolution and at low-tesla MRI system.
Signal Separation of Helicopter Radar Returns Using Wavelet-Based Sparse Signal Optimisation
2016-10-01
RR–0436 ABSTRACT A novel wavelet-based sparse signal representation technique is used to separate the main and tail rotor blade components of a...helicopter from the composite radar returns. The received signal consists of returns from the rotating main and tail rotor blades , the helicopter body...component signal com- prising of returns from the main body, the main and tail rotor hubs and blades . Temporal and Doppler characteristics of these
Sikka, Ritu; Cuddy, Lola L.; Johnsrude, Ingrid S.; Vanstone, Ashley D.
2015-01-01
Several studies of semantic memory in non-musical domains involving recognition of items from long-term memory have shown an age-related shift from the medial temporal lobe structures to the frontal lobe. However, the effects of aging on musical semantic memory remain unexamined. We compared activation associated with recognition of familiar melodies in younger and older adults. Recognition follows successful retrieval from the musical lexicon that comprises a lifetime of learned musical phrases. We used the sparse-sampling technique in fMRI to determine the neural correlates of melody recognition by comparing activation when listening to familiar vs. unfamiliar melodies, and to identify age differences. Recognition-related cortical activation was detected in the right superior temporal, bilateral inferior and superior frontal, left middle orbitofrontal, bilateral precentral, and left supramarginal gyri. Region-of-interest analysis showed greater activation for younger adults in the left superior temporal gyrus and for older adults in the left superior frontal, left angular, and bilateral superior parietal regions. Our study provides powerful evidence for these musical memory networks due to a large sample (N = 40) that includes older adults. This study is the first to investigate the neural basis of melody recognition in older adults and to compare the findings to younger adults. PMID:26500480
NASA Astrophysics Data System (ADS)
Patej, A.; Eisenstein, D. J.
2018-07-01
We develop a formalism for measuring the cosmological distance scale from baryon acoustic oscillations (BAO) using the cross-correlation of a sparse redshift survey with a denser photometric sample. This reduces the shot noise that would otherwise affect the autocorrelation of the sparse spectroscopic map. As a proof of principle, we make the first on-sky application of this method to a sparse sample defined as the z > 0.6 tail of the Sloan Digital Sky Survey's (SDSS) BOSS/CMASS sample of galaxies and a dense photometric sample from SDSS DR9. We find a 2.8σ preference for the BAO peak in the cross-correlation at an effective z = 0.64, from which we measure the angular diameter distance DM(z = 0.64) = (2418 ± 73 Mpc)(rs/rs, fid). Accordingly, we expect that using this method to combine sparse spectroscopy with the deep, high-quality imaging that is just now becoming available will enable higher precision BAO measurements than possible with the spectroscopy alone.
NASA Astrophysics Data System (ADS)
Patej, Anna; Eisenstein, Daniel J.
2018-04-01
We develop a formalism for measuring the cosmological distance scale from baryon acoustic oscillations (BAO) using the cross-correlation of a sparse redshift survey with a denser photometric sample. This reduces the shot noise that would otherwise affect the auto-correlation of the sparse spectroscopic map. As a proof of principle, we make the first on-sky application of this method to a sparse sample defined as the z > 0.6 tail of the Sloan Digital Sky Survey's (SDSS) BOSS/CMASS sample of galaxies and a dense photometric sample from SDSS DR9. We find a 2.8σ preference for the BAO peak in the cross-correlation at an effective z = 0.64, from which we measure the angular diameter distance DM(z = 0.64) = (2418 ± 73 Mpc)(rs/rs, fid). Accordingly, we expect that using this method to combine sparse spectroscopy with the deep, high quality imaging that is just now becoming available will enable higher precision BAO measurements than possible with the spectroscopy alone.
Dual-wavelength OR-PAM with compressed sensing for cell tracking in a 3D cell culture system
NASA Astrophysics Data System (ADS)
Huang, Rou-Xuan; Fu, Ying; Liu, Wang; Ma, Yu-Ting; Hsieh, Bao-Yu; Chen, Shu-Ching; Sun, Mingjian; Li, Pai-Chi
2018-02-01
Monitoring dynamic interactions of T cells migrating toward tumor is beneficial to understand how cancer immunotherapy works. Optical-resolution photoacoustic microscope (OR-PAM) can provide not only high spatial resolution but also deeper penetration than conventional optical microscopy. With the aid of exogenous contrast agents, the dual-wavelength OR-PAM can be applied to map the distribution of CD8+ cytotoxic T lymphocytes (CTLs) with gold nanospheres (AuNS) under 523nm laser irradiation and Hepta1-6 tumor spheres with indocyanine green (ICG) under 800nm irradiation. However, at 1K laser PRF, it takes approximately 20 minutes to obtain a full sample volume of 160 × 160 × 150 μm3 . To increase the imaging rate, we propose a random non-uniform sparse sampling mechanism to achieve fast sparse photoacoustic data acquisition. The image recovery process is formulated as a low-rank matrix recovery (LRMR) based on compressed sensing (CS) theory. We show that it could be stably recovered via nuclear-norm minimization optimization problem to maintain image quality from a significantly fewer measurement. In this study, we use the dual-wavelength OR-PAM with CS to visualize T cell trafficking in a 3D culture system with higher temporal resolution. Data acquisition time is reduced by 40% in such sample volume where sampling density is 0.5. The imaging system reveals the potential to understand the dynamic cellular process for preclinical screening of anti-cancer drugs.
Extending Geographic Weights of Evidence Models for Use in Location Based Services
ERIC Educational Resources Information Center
Sonwalkar, Mukul Dinkar
2012-01-01
This dissertation addresses the use and modeling of spatio-temporal data for the purposes of providing applications for location based services. One of the major issues in dealing with spatio-temporal data for location based services is the availability and sparseness of such data. Other than the hardware costs associated with collecting movement…
NASA Astrophysics Data System (ADS)
Xiao, Sa; Deng, He; Duan, Caohui; Xie, Junshuai; Zhang, Huiting; Sun, Xianping; Ye, Chaohui; Zhou, Xin
2018-05-01
Dynamic hyperpolarized (HP) 129Xe MRI is able to visualize the process of lung ventilation, which potentially provides unique information about lung physiology and pathophysiology. However, the longitudinal magnetization of HP 129Xe is nonrenewable, making it difficult to achieve high image quality while maintaining high temporal-spatial resolution in the pulmonary dynamic MRI. In this paper, we propose a new accelerated dynamic HP 129Xe MRI scheme incorporating the low-rank, sparse and gas-inflow effects (L + S + G) constraints. According to the gas-inflow effects of HP gas during the lung inspiratory process, a variable-flip-angle (VFA) strategy is designed to compensate for the rapid attenuation of the magnetization. After undersampling k-space data, an effective reconstruction algorithm considering the low-rank, sparse and gas-inflow effects constraints is developed to reconstruct dynamic MR images. In this way, the temporal and spatial resolution of dynamic MR images is improved and the artifacts are lessened. Simulation and in vivo experiments implemented on the phantom and healthy volunteers demonstrate that the proposed method is not only feasible and effective to compensate for the decay of the magnetization, but also has a significant improvement compared with the conventional reconstruction algorithms (P-values are less than 0.05). This confirms the superior performance of the proposed designs and their ability to maintain high quality and temporal-spatial resolution.
Deploying temporary networks for upscaling of sparse network stations
NASA Astrophysics Data System (ADS)
Coopersmith, Evan J.; Cosh, Michael H.; Bell, Jesse E.; Kelly, Victoria; Hall, Mark; Palecki, Michael A.; Temimi, Marouane
2016-10-01
Soil observations networks at the national scale play an integral role in hydrologic modeling, drought assessment, agricultural decision support, and our ability to understand climate change. Understanding soil moisture variability is necessary to apply these measurements to model calibration, business and consumer applications, or even human health issues. The installation of soil moisture sensors as sparse, national networks is necessitated by limited financial resources. However, this results in the incomplete sampling of the local heterogeneity of soil type, vegetation cover, topography, and the fine spatial distribution of precipitation events. To this end, temporary networks can be installed in the areas surrounding a permanent installation within a sparse network. The temporary networks deployed in this study provide a more representative average at the 3 km and 9 km scales, localized about the permanent gauge. The value of such temporary networks is demonstrated at test sites in Millbrook, New York and Crossville, Tennessee. The capacity of a single U.S. Climate Reference Network (USCRN) sensor set to approximate the average of a temporary network at the 3 km and 9 km scales using a simple linear scaling function is tested. The capacity of a temporary network to provide reliable estimates with diminishing numbers of sensors, the temporal stability of those networks, and ultimately, the relationship of the variability of those networks to soil moisture conditions at the permanent sensor are investigated. In this manner, this work demonstrates the single-season installation of a temporary network as a mechanism to characterize the soil moisture variability at a permanent gauge within a sparse network.
Measuring suspended sediment: Chapter 10
Gray, J.R.; Landers, M.N.
2013-01-01
Suspended sediment in streams and rivers can be measured using traditional instruments and techniques and (or) surrogate technologies. The former, as described herein, consists primarily of both manually deployed isokinetic samplers and their deployment protocols developed by the Federal Interagency Sedimentation Project. They are used on all continents other than Antarctica. The reliability of the typically spatially rich but temporally sparse data produced by traditional means is supported by a broad base of scientific literature since 1940. However, the suspended sediment surrogate technologies described herein – based on hydroacoustic, nephelometric, laser, and pressure difference principles – tend to produce temporally rich but in some cases spatially sparse datasets. The value of temporally rich data in the accuracy of continuous sediment-discharge records is hard to overstate, in part because such data can often overcome the shortcomings of poor spatial coverage. Coupled with calibration data produced by traditional means, surrogate technologies show considerable promise toward providing the fluvial sediment data needed to increase and bring more consistency to sediment-discharge measurements worldwide.
Sparse representation based SAR vehicle recognition along with aspect angle.
Xing, Xiangwei; Ji, Kefeng; Zou, Huanxin; Sun, Jixiang
2014-01-01
As a method of representing the test sample with few training samples from an overcomplete dictionary, sparse representation classification (SRC) has attracted much attention in synthetic aperture radar (SAR) automatic target recognition (ATR) recently. In this paper, we develop a novel SAR vehicle recognition method based on sparse representation classification along with aspect information (SRCA), in which the correlation between the vehicle's aspect angle and the sparse representation vector is exploited. The detailed procedure presented in this paper can be summarized as follows. Initially, the sparse representation vector of a test sample is solved by sparse representation algorithm with a principle component analysis (PCA) feature-based dictionary. Then, the coefficient vector is projected onto a sparser one within a certain range of the vehicle's aspect angle. Finally, the vehicle is classified into a certain category that minimizes the reconstruction error with the novel sparse representation vector. Extensive experiments are conducted on the moving and stationary target acquisition and recognition (MSTAR) dataset and the results demonstrate that the proposed method performs robustly under the variations of depression angle and target configurations, as well as incomplete observation.
NASA Astrophysics Data System (ADS)
Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Nagarajaiah, Satish; Kenyon, Garrett; Farrar, Charles; Mascareñas, David
2017-03-01
Enhancing the spatial and temporal resolution of vibration measurements and modal analysis could significantly benefit dynamic modelling, analysis, and health monitoring of structures. For example, spatially high-density mode shapes are critical for accurate vibration-based damage localization. In experimental or operational modal analysis, higher (frequency) modes, which may be outside the frequency range of the measurement, contain local structural features that can improve damage localization as well as the construction and updating of the modal-based dynamic model of the structure. In general, the resolution of vibration measurements can be increased by enhanced hardware. Traditional vibration measurement sensors such as accelerometers have high-frequency sampling capacity; however, they are discrete point-wise sensors only providing sparse, low spatial sensing resolution measurements, while dense deployment to achieve high spatial resolution is expensive and results in the mass-loading effect and modification of structure's surface. Non-contact measurement methods such as scanning laser vibrometers provide high spatial and temporal resolution sensing capacity; however, they make measurements sequentially that requires considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high spatial resolution, simultaneous, measurements. Combined with vision based algorithms (e.g., image correlation or template matching, optical flow, etc.), video camera based measurements have been successfully used for experimental and operational vibration measurement and subsequent modal analysis. However, the sampling frequency of most affordable digital cameras is limited to 30-60 Hz, while high-speed cameras for higher frequency vibration measurements are extremely costly. This work develops a computational algorithm capable of performing vibration measurement at a uniform sampling frequency lower than what is required by the Shannon-Nyquist sampling theorem for output-only modal analysis. In particular, the spatio-temporal uncoupling property of the modal expansion of structural vibration responses enables a direct modal decoupling of the temporally-aliased vibration measurements by existing output-only modal analysis methods, yielding (full-field) mode shapes estimation directly. Then the signal aliasing properties in modal analysis is exploited to estimate the modal frequencies and damping ratios. The proposed method is validated by laboratory experiments where output-only modal identification is conducted on temporally-aliased acceleration responses and particularly the temporally-aliased video measurements of bench-scale structures, including a three-story building structure and a cantilever beam.
Comparison of Co-Temporal Modeling Algorithms on Sparse Experimental Time Series Data Sets.
Allen, Edward E; Norris, James L; John, David J; Thomas, Stan J; Turkett, William H; Fetrow, Jacquelyn S
2010-01-01
Multiple approaches for reverse-engineering biological networks from time-series data have been proposed in the computational biology literature. These approaches can be classified by their underlying mathematical algorithms, such as Bayesian or algebraic techniques, as well as by their time paradigm, which includes next-state and co-temporal modeling. The types of biological relationships, such as parent-child or siblings, discovered by these algorithms are quite varied. It is important to understand the strengths and weaknesses of the various algorithms and time paradigms on actual experimental data. We assess how well the co-temporal implementations of three algorithms, continuous Bayesian, discrete Bayesian, and computational algebraic, can 1) identify two types of entity relationships, parent and sibling, between biological entities, 2) deal with experimental sparse time course data, and 3) handle experimental noise seen in replicate data sets. These algorithms are evaluated, using the shuffle index metric, for how well the resulting models match literature models in terms of siblings and parent relationships. Results indicate that all three co-temporal algorithms perform well, at a statistically significant level, at finding sibling relationships, but perform relatively poorly in finding parent relationships.
Discriminant WSRC for Large-Scale Plant Species Recognition.
Zhang, Shanwen; Zhang, Chuanlei; Zhu, Yihai; You, Zhuhong
2017-01-01
In sparse representation based classification (SRC) and weighted SRC (WSRC), it is time-consuming to solve the global sparse representation problem. A discriminant WSRC (DWSRC) is proposed for large-scale plant species recognition, including two stages. Firstly, several subdictionaries are constructed by dividing the dataset into several similar classes, and a subdictionary is chosen by the maximum similarity between the test sample and the typical sample of each similar class. Secondly, the weighted sparse representation of the test image is calculated with respect to the chosen subdictionary, and then the leaf category is assigned through the minimum reconstruction error. Different from the traditional SRC and its improved approaches, we sparsely represent the test sample on a subdictionary whose base elements are the training samples of the selected similar class, instead of using the generic overcomplete dictionary on the entire training samples. Thus, the complexity to solving the sparse representation problem is reduced. Moreover, DWSRC is adapted to newly added leaf species without rebuilding the dictionary. Experimental results on the ICL plant leaf database show that the method has low computational complexity and high recognition rate and can be clearly interpreted.
Group-sparse representation with dictionary learning for medical image denoising and fusion.
Li, Shutao; Yin, Haitao; Fang, Leyuan
2012-12-01
Recently, sparse representation has attracted a lot of interest in various areas. However, the standard sparse representation does not consider the intrinsic structure, i.e., the nonzero elements occur in clusters, called group sparsity. Furthermore, there is no dictionary learning method for group sparse representation considering the geometrical structure of space spanned by atoms. In this paper, we propose a novel dictionary learning method, called Dictionary Learning with Group Sparsity and Graph Regularization (DL-GSGR). First, the geometrical structure of atoms is modeled as the graph regularization. Then, combining group sparsity and graph regularization, the DL-GSGR is presented, which is solved by alternating the group sparse coding and dictionary updating. In this way, the group coherence of learned dictionary can be enforced small enough such that any signal can be group sparse coded effectively. Finally, group sparse representation with DL-GSGR is applied to 3-D medical image denoising and image fusion. Specifically, in 3-D medical image denoising, a 3-D processing mechanism (using the similarity among nearby slices) and temporal regularization (to perverse the correlations across nearby slices) are exploited. The experimental results on 3-D image denoising and image fusion demonstrate the superiority of our proposed denoising and fusion approaches.
Signal Sampling for Efficient Sparse Representation of Resting State FMRI Data
Ge, Bao; Makkie, Milad; Wang, Jin; Zhao, Shijie; Jiang, Xi; Li, Xiang; Lv, Jinglei; Zhang, Shu; Zhang, Wei; Han, Junwei; Guo, Lei; Liu, Tianming
2015-01-01
As the size of brain imaging data such as fMRI grows explosively, it provides us with unprecedented and abundant information about the brain. How to reduce the size of fMRI data but not lose much information becomes a more and more pressing issue. Recent literature studies tried to deal with it by dictionary learning and sparse representation methods, however, their computation complexities are still high, which hampers the wider application of sparse representation method to large scale fMRI datasets. To effectively address this problem, this work proposes to represent resting state fMRI (rs-fMRI) signals of a whole brain via a statistical sampling based sparse representation. First we sampled the whole brain’s signals via different sampling methods, then the sampled signals were aggregate into an input data matrix to learn a dictionary, finally this dictionary was used to sparsely represent the whole brain’s signals and identify the resting state networks. Comparative experiments demonstrate that the proposed signal sampling framework can speed-up by ten times in reconstructing concurrent brain networks without losing much information. The experiments on the 1000 Functional Connectomes Project further demonstrate its effectiveness and superiority. PMID:26646924
Visual Tracking Based on Extreme Learning Machine and Sparse Representation
Wang, Baoxian; Tang, Linbo; Yang, Jinglin; Zhao, Baojun; Wang, Shuigen
2015-01-01
The existing sparse representation-based visual trackers mostly suffer from both being time consuming and having poor robustness problems. To address these issues, a novel tracking method is presented via combining sparse representation and an emerging learning technique, namely extreme learning machine (ELM). Specifically, visual tracking can be divided into two consecutive processes. Firstly, ELM is utilized to find the optimal separate hyperplane between the target observations and background ones. Thus, the trained ELM classification function is able to remove most of the candidate samples related to background contents efficiently, thereby reducing the total computational cost of the following sparse representation. Secondly, to further combine ELM and sparse representation, the resultant confidence values (i.e., probabilities to be a target) of samples on the ELM classification function are used to construct a new manifold learning constraint term of the sparse representation framework, which tends to achieve robuster results. Moreover, the accelerated proximal gradient method is used for deriving the optimal solution (in matrix form) of the constrained sparse tracking model. Additionally, the matrix form solution allows the candidate samples to be calculated in parallel, thereby leading to a higher efficiency. Experiments demonstrate the effectiveness of the proposed tracker. PMID:26506359
Identifying sighting clusters of endangered taxa with historical records.
Duffy, Karl J
2011-04-01
The probability and time of extinction of taxa is often inferred from statistical analyses of historical records. Many of these analyses require the exclusion of multiple records within a unit of time (i.e., a month or a year). Nevertheless, spatially explicit, temporally aggregated data may be useful for identifying clusters of sightings (i.e., sighting clusters) in space and time. Identification of sighting clusters highlights changes in the historical recording of endangered taxa. I used two methods to identify sighting clusters in historical records: the Ederer-Myers-Mantel (EMM) test and the space-time permutation scan (STPS). I applied these methods to the spatially explicit sighting records of three species of orchids that are listed as endangered in the Republic of Ireland under the Wildlife Act (1976): Cephalanthera longifolia, Hammarbya paludosa, and Pseudorchis albida. Results with the EMM test were strongly affected by the choice of the time interval, and thus the number of temporal samples, used to examine the records. For example, sightings of P. albida clustered when the records were partitioned into 20-year temporal samples, but not when they were partitioned into 22-year temporal samples. Because the statistical power of EMM was low, it will not be useful when data are sparse. Nevertheless, the STPS identified regions that contained sighting clusters because it uses a flexible scanning window (defined by cylinders of varying size that move over the study area and evaluate the likelihood of clustering) to detect them, and it identified regions with high and regions with low rates of orchid sightings. The STPS analyses can be used to detect sighting clusters of endangered species that may be related to regions of extirpation and may assist in the categorization of threat status. ©2010 Society for Conservation Biology.
Sparse distributed memory overview
NASA Technical Reports Server (NTRS)
Raugh, Mike
1990-01-01
The Sparse Distributed Memory (SDM) project is investigating the theory and applications of massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered in studies of the memory itself and in the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.
Non-uniform sampling: post-Fourier era of NMR data collection and processing.
Kazimierczuk, Krzysztof; Orekhov, Vladislav
2015-11-01
The invention of multidimensional techniques in the 1970s revolutionized NMR, making it the general tool of structural analysis of molecules and materials. In the most straightforward approach, the signal sampling in the indirect dimensions of a multidimensional experiment is performed in the same manner as in the direct dimension, i.e. with a grid of equally spaced points. This results in lengthy experiments with a resolution often far from optimum. To circumvent this problem, numerous sparse-sampling techniques have been developed in the last three decades, including two traditionally distinct approaches: the radial sampling and non-uniform sampling. This mini review discusses the sparse signal sampling and reconstruction techniques from the point of view of an underdetermined linear algebra problem that arises when a full, equally spaced set of sampled points is replaced with sparse sampling. Additional assumptions that are introduced to solve the problem, as well as the shape of the undersampled Fourier transform operator (visualized as so-called point spread function), are shown to be the main differences between various sparse-sampling methods. Copyright © 2015 John Wiley & Sons, Ltd.
ERIC Educational Resources Information Center
Mayberry, Emily J.; Sage, Karen; Ehsan, Sheeba; Ralph, Matthew A. Lambon
2011-01-01
When relearning words, patients with semantic dementia (SD) exhibit a characteristic rigidity, including a failure to generalise names to untrained exemplars of trained concepts. This has been attributed to an over-reliance on the medial temporal region which captures information in sparse, non-overlapping and therefore rigid representations. The…
The impact of nonuniform sampling on stratospheric ozone trends derived from occultation instruments
NASA Astrophysics Data System (ADS)
Damadeo, Robert P.; Zawodny, Joseph M.; Remsberg, Ellis E.; Walker, Kaley A.
2018-01-01
This paper applies a recently developed technique for deriving long-term trends in ozone from sparsely sampled data sets to multiple occultation instruments simultaneously without the need for homogenization. The technique can compensate for the nonuniform temporal, spatial, and diurnal sampling of the different instruments and can also be used to account for biases and drifts between instruments. These problems have been noted in recent international assessments as being a primary source of uncertainty that clouds the significance of derived trends. Results show potential recovery
trends of ˜ 2-3 % decade-1 in the upper stratosphere at midlatitudes, which are similar to other studies, and also how sampling biases present in these data sets can create differences in derived recovery trends of up to ˜ 1 % decade-1 if not properly accounted for. Limitations inherent to all techniques (e.g., relative instrument drifts) and their impacts (e.g., trend differences up to ˜ 2 % decade-1) are also described and a potential path forward towards resolution is presented.
Scarpino, Samuel V.; Jansen, Patrick A.; Garzon-Lopez, Carol X.; Winkelhagen, Annemarie J. S.; Bohlman, Stephanie A.; Walsh, Peter D.
2010-01-01
Background The movement patterns of wild animals depend crucially on the spatial and temporal availability of resources in their habitat. To date, most attempts to model this relationship were forced to rely on simplified assumptions about the spatiotemporal distribution of food resources. Here we demonstrate how advances in statistics permit the combination of sparse ground sampling with remote sensing imagery to generate biological relevant, spatially and temporally explicit distributions of food resources. We illustrate our procedure by creating a detailed simulation model of fruit production patterns for Dipteryx oleifera, a keystone tree species, on Barro Colorado Island (BCI), Panama. Methodology and Principal Findings Aerial photographs providing GPS positions for large, canopy trees, the complete census of a 50-ha and 25-ha area, diameter at breast height data from haphazardly sampled trees and long-term phenology data from six trees were used to fit 1) a point process model of tree spatial distribution and 2) a generalized linear mixed-effect model of temporal variation of fruit production. The fitted parameters from these models are then used to create a stochastic simulation model which incorporates spatio-temporal variations of D. oleifera fruit availability on BCI. Conclusions and Significance We present a framework that can provide a statistical characterization of the habitat that can be included in agent-based models of animal movements. When environmental heterogeneity cannot be exhaustively mapped, this approach can be a powerful alternative. The results of our model on the spatio-temporal variation in D. oleifera fruit availability will be used to understand behavioral and movement patterns of several species on BCI. PMID:21124927
Face recognition via sparse representation of SIFT feature on hexagonal-sampling image
NASA Astrophysics Data System (ADS)
Zhang, Daming; Zhang, Xueyong; Li, Lu; Liu, Huayong
2018-04-01
This paper investigates a face recognition approach based on Scale Invariant Feature Transform (SIFT) feature and sparse representation. The approach takes advantage of SIFT which is local feature other than holistic feature in classical Sparse Representation based Classification (SRC) algorithm and possesses strong robustness to expression, pose and illumination variations. Since hexagonal image has more inherit merits than square image to make recognition process more efficient, we extract SIFT keypoint in hexagonal-sampling image. Instead of matching SIFT feature, firstly the sparse representation of each SIFT keypoint is given according the constructed dictionary; secondly these sparse vectors are quantized according dictionary; finally each face image is represented by a histogram and these so-called Bag-of-Words vectors are classified by SVM. Due to use of local feature, the proposed method achieves better result even when the number of training sample is small. In the experiments, the proposed method gave higher face recognition rather than other methods in ORL and Yale B face databases; also, the effectiveness of the hexagonal-sampling in the proposed method is verified.
Lee, Wang Wei; Kukreja, Sunil L.; Thakor, Nitish V.
2017-01-01
This paper presents a neuromorphic tactile encoding methodology that utilizes a temporally precise event-based representation of sensory signals. We introduce a novel concept where touch signals are characterized as patterns of millisecond precise binary events to denote pressure changes. This approach is amenable to a sparse signal representation and enables the extraction of relevant features from thousands of sensing elements with sub-millisecond temporal precision. We also proposed measures adopted from computational neuroscience to study the information content within the spiking representations of artificial tactile signals. Implemented on a state-of-the-art 4096 element tactile sensor array with 5.2 kHz sampling frequency, we demonstrate the classification of transient impact events while utilizing 20 times less communication bandwidth compared to frame based representations. Spiking sensor responses to a large library of contact conditions were also synthesized using finite element simulations, illustrating an 8-fold improvement in information content and a 4-fold reduction in classification latency when millisecond-precise temporal structures are available. Our research represents a significant advance, demonstrating that a neuromorphic spatiotemporal representation of touch is well suited to rapid identification of critical contact events, making it suitable for dynamic tactile sensing in robotic and prosthetic applications. PMID:28197065
Zhang, L; Liu, X J
2016-06-03
With the rapid development of next-generation high-throughput sequencing technology, RNA-seq has become a standard and important technique for transcriptome analysis. For multi-sample RNA-seq data, the existing expression estimation methods usually deal with each single-RNA-seq sample, and ignore that the read distributions are consistent across multiple samples. In the current study, we propose a structured sparse regression method, SSRSeq, to estimate isoform expression using multi-sample RNA-seq data. SSRSeq uses a non-parameter model to capture the general tendency of non-uniformity read distribution for all genes across multiple samples. Additionally, our method adds a structured sparse regularization, which not only incorporates the sparse specificity between a gene and its corresponding isoform expression levels, but also reduces the effects of noisy reads, especially for lowly expressed genes and isoforms. Four real datasets were used to evaluate our method on isoform expression estimation. Compared with other popular methods, SSRSeq reduced the variance between multiple samples, and produced more accurate isoform expression estimations, and thus more meaningful biological interpretations.
Estimation of river and stream temperature trends under haphazard sampling
Gray, Brian R.; Lyubchich, Vyacheslav; Gel, Yulia R.; Rogala, James T.; Robertson, Dale M.; Wei, Xiaoqiao
2015-01-01
Long-term temporal trends in water temperature in rivers and streams are typically estimated under the assumption of evenly-spaced space-time measurements. However, sampling times and dates associated with historical water temperature datasets and some sampling designs may be haphazard. As a result, trends in temperature may be confounded with trends in time or space of sampling which, in turn, may yield biased trend estimators and thus unreliable conclusions. We address this concern using multilevel (hierarchical) linear models, where time effects are allowed to vary randomly by day and date effects by year. We evaluate the proposed approach by Monte Carlo simulations with imbalance, sparse data and confounding by trend in time and date of sampling. Simulation results indicate unbiased trend estimators while results from a case study of temperature data from the Illinois River, USA conform to river thermal assumptions. We also propose a new nonparametric bootstrap inference on multilevel models that allows for a relatively flexible and distribution-free quantification of uncertainties. The proposed multilevel modeling approach may be elaborated to accommodate nonlinearities within days and years when sampling times or dates typically span temperature extremes.
Multilevel sparse functional principal component analysis.
Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S
2014-01-29
We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both between and within subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations the proposed method is able to discover dominating modes of variations and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.
Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization
Zhu, Qingxin; Niu, Xinzheng
2016-01-01
By combining with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain a better generalization ability. However, the previous kernel-based LSTD algorithms do not consider regularization and their sparsification processes are batch or offline, which hinder their widespread applications in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning, (ii) L 2 and L 1 regularization, which can avoid overfitting and eliminate the influence of noise, (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity, (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost, and (v) the fixed-point subiteration and online pruning, which can make L 1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms. PMID:27436996
Zhang, Chunyuan; Zhu, Qingxin; Niu, Xinzheng
2016-01-01
By combining with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain a better generalization ability. However, the previous kernel-based LSTD algorithms do not consider regularization and their sparsification processes are batch or offline, which hinder their widespread applications in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning, (ii) L 2 and L 1 regularization, which can avoid overfitting and eliminate the influence of noise, (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity, (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost, and (v) the fixed-point subiteration and online pruning, which can make L 1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms.
NASA Astrophysics Data System (ADS)
Taubmann, O.; Haase, V.; Lauritsch, G.; Zheng, Y.; Krings, G.; Hornegger, J.; Maier, A.
2017-04-01
Time-resolved tomographic cardiac imaging using an angiographic C-arm device may support clinicians during minimally invasive therapy by enabling a thorough analysis of the heart function directly in the catheter laboratory. However, clinically feasible acquisition protocols entail a highly challenging reconstruction problem which suffers from sparse angular sampling of the trajectory. Compressed sensing theory promises that useful images can be recovered despite massive undersampling by means of sparsity-based regularization. For a multitude of reasons—most notably the desired reduction of scan time, dose and contrast agent required—it is of great interest to know just how little data is actually sufficient for a certain task. In this work, we apply a convex optimization approach based on primal-dual splitting to 4D cardiac C-arm computed tomography. We examine how the quality of spatially and temporally total-variation-regularized reconstruction degrades when using as few as 6.9+/- 1.2 projection views per heart phase. First, feasible regularization weights are determined in a numerical phantom study, demonstrating the individual benefits of both regularizers. Secondly, a task-based evaluation is performed in eight clinical patients. Semi-automatic segmentation-based volume measurements of the left ventricular blood pool performed on strongly undersampled images show a correlation of close to 99% with measurements obtained from less sparsely sampled data.
New methods for sampling sparse populations
Anna Ringvall
2007-01-01
To improve surveys of sparse objects, methods that use auxiliary information have been suggested. Guided transect sampling uses prior information, e.g., from aerial photographs, for the layout of survey strips. Instead of being laid out straight, the strips will wind between potentially more interesting areas. 3P sampling (probability proportional to prediction) uses...
Yang, Su; Shi, Shixiong; Hu, Xiaobing; Wang, Minjie
2015-01-01
Spatial-temporal correlations among the data play an important role in traffic flow prediction. Correspondingly, traffic modeling and prediction based on big data analytics emerges due to the city-scale interactions among traffic flows. A new methodology based on sparse representation is proposed to reveal the spatial-temporal dependencies among traffic flows so as to simplify the correlations among traffic data for the prediction task at a given sensor. Three important findings are observed in the experiments: (1) Only traffic flows immediately prior to the present time affect the formation of current traffic flows, which implies the possibility to reduce the traditional high-order predictors into an 1-order model. (2) The spatial context relevant to a given prediction task is more complex than what is assumed to exist locally and can spread out to the whole city. (3) The spatial context varies with the target sensor undergoing prediction and enlarges with the increment of time lag for prediction. Because the scope of human mobility is subject to travel time, identifying the varying spatial context against time lag is crucial for prediction. Since sparse representation can capture the varying spatial context to adapt to the prediction task, it outperforms the traditional methods the inputs of which are confined as the data from a fixed number of nearby sensors. As the spatial-temporal context for any prediction task is fully detected from the traffic data in an automated manner, where no additional information regarding network topology is needed, it has good scalability to be applicable to large-scale networks.
Yang, Su; Shi, Shixiong; Hu, Xiaobing; Wang, Minjie
2015-01-01
Spatial-temporal correlations among the data play an important role in traffic flow prediction. Correspondingly, traffic modeling and prediction based on big data analytics emerges due to the city-scale interactions among traffic flows. A new methodology based on sparse representation is proposed to reveal the spatial-temporal dependencies among traffic flows so as to simplify the correlations among traffic data for the prediction task at a given sensor. Three important findings are observed in the experiments: (1) Only traffic flows immediately prior to the present time affect the formation of current traffic flows, which implies the possibility to reduce the traditional high-order predictors into an 1-order model. (2) The spatial context relevant to a given prediction task is more complex than what is assumed to exist locally and can spread out to the whole city. (3) The spatial context varies with the target sensor undergoing prediction and enlarges with the increment of time lag for prediction. Because the scope of human mobility is subject to travel time, identifying the varying spatial context against time lag is crucial for prediction. Since sparse representation can capture the varying spatial context to adapt to the prediction task, it outperforms the traditional methods the inputs of which are confined as the data from a fixed number of nearby sensors. As the spatial-temporal context for any prediction task is fully detected from the traffic data in an automated manner, where no additional information regarding network topology is needed, it has good scalability to be applicable to large-scale networks. PMID:26496370
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Yongchao; Dorn, Charles; Mancini, Tyler
Enhancing the spatial and temporal resolution of vibration measurements and modal analysis could significantly benefit dynamic modelling, analysis, and health monitoring of structures. For example, spatially high-density mode shapes are critical for accurate vibration-based damage localization. In experimental or operational modal analysis, higher (frequency) modes, which may be outside the frequency range of the measurement, contain local structural features that can improve damage localization as well as the construction and updating of the modal-based dynamic model of the structure. In general, the resolution of vibration measurements can be increased by enhanced hardware. Traditional vibration measurement sensors such as accelerometers havemore » high-frequency sampling capacity; however, they are discrete point-wise sensors only providing sparse, low spatial sensing resolution measurements, while dense deployment to achieve high spatial resolution is expensive and results in the mass-loading effect and modification of structure's surface. Non-contact measurement methods such as scanning laser vibrometers provide high spatial and temporal resolution sensing capacity; however, they make measurements sequentially that requires considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high spatial resolution, simultaneous, measurements. Combined with vision based algorithms (e.g., image correlation or template matching, optical flow, etc.), video camera based measurements have been successfully used for experimental and operational vibration measurement and subsequent modal analysis. However, the sampling frequency of most affordable digital cameras is limited to 30–60 Hz, while high-speed cameras for higher frequency vibration measurements are extremely costly. This work develops a computational algorithm capable of performing vibration measurement at a uniform sampling frequency lower than what is required by the Shannon-Nyquist sampling theorem for output-only modal analysis. In particular, the spatio-temporal uncoupling property of the modal expansion of structural vibration responses enables a direct modal decoupling of the temporally-aliased vibration measurements by existing output-only modal analysis methods, yielding (full-field) mode shapes estimation directly. Then the signal aliasing properties in modal analysis is exploited to estimate the modal frequencies and damping ratios. Furthermore, the proposed method is validated by laboratory experiments where output-only modal identification is conducted on temporally-aliased acceleration responses and particularly the temporally-aliased video measurements of bench-scale structures, including a three-story building structure and a cantilever beam.« less
Yang, Yongchao; Dorn, Charles; Mancini, Tyler; ...
2016-12-05
Enhancing the spatial and temporal resolution of vibration measurements and modal analysis could significantly benefit dynamic modelling, analysis, and health monitoring of structures. For example, spatially high-density mode shapes are critical for accurate vibration-based damage localization. In experimental or operational modal analysis, higher (frequency) modes, which may be outside the frequency range of the measurement, contain local structural features that can improve damage localization as well as the construction and updating of the modal-based dynamic model of the structure. In general, the resolution of vibration measurements can be increased by enhanced hardware. Traditional vibration measurement sensors such as accelerometers havemore » high-frequency sampling capacity; however, they are discrete point-wise sensors only providing sparse, low spatial sensing resolution measurements, while dense deployment to achieve high spatial resolution is expensive and results in the mass-loading effect and modification of structure's surface. Non-contact measurement methods such as scanning laser vibrometers provide high spatial and temporal resolution sensing capacity; however, they make measurements sequentially that requires considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high spatial resolution, simultaneous, measurements. Combined with vision based algorithms (e.g., image correlation or template matching, optical flow, etc.), video camera based measurements have been successfully used for experimental and operational vibration measurement and subsequent modal analysis. However, the sampling frequency of most affordable digital cameras is limited to 30–60 Hz, while high-speed cameras for higher frequency vibration measurements are extremely costly. This work develops a computational algorithm capable of performing vibration measurement at a uniform sampling frequency lower than what is required by the Shannon-Nyquist sampling theorem for output-only modal analysis. In particular, the spatio-temporal uncoupling property of the modal expansion of structural vibration responses enables a direct modal decoupling of the temporally-aliased vibration measurements by existing output-only modal analysis methods, yielding (full-field) mode shapes estimation directly. Then the signal aliasing properties in modal analysis is exploited to estimate the modal frequencies and damping ratios. Furthermore, the proposed method is validated by laboratory experiments where output-only modal identification is conducted on temporally-aliased acceleration responses and particularly the temporally-aliased video measurements of bench-scale structures, including a three-story building structure and a cantilever beam.« less
How Does the Sparse Memory “Engram” Neurons Encode the Memory of a Spatial–Temporal Event?
Guan, Ji-Song; Jiang, Jun; Xie, Hong; Liu, Kai-Yuan
2016-01-01
Episodic memory in human brain is not a fixed 2-D picture but a highly dynamic movie serial, integrating information at both the temporal and the spatial domains. Recent studies in neuroscience reveal that memory storage and recall are closely related to the activities in discrete memory engram (trace) neurons within the dentate gyrus region of hippocampus and the layer 2/3 of neocortex. More strikingly, optogenetic reactivation of those memory trace neurons is able to trigger the recall of naturally encoded memory. It is still unknown how the discrete memory traces encode and reactivate the memory. Considering a particular memory normally represents a natural event, which consists of information at both the temporal and spatial domains, it is unknown how the discrete trace neurons could reconstitute such enriched information in the brain. Furthermore, as the optogenetic-stimuli induced recall of memory did not depend on firing pattern of the memory traces, it is most likely that the spatial activation pattern, but not the temporal activation pattern of the discrete memory trace neurons encodes the memory in the brain. How does the neural circuit convert the activities in the spatial domain into the temporal domain to reconstitute memory of a natural event? By reviewing the literature, here we present how the memory engram (trace) neurons are selected and consolidated in the brain. Then, we will discuss the main challenges in the memory trace theory. In the end, we will provide a plausible model of memory trace cell network, underlying the conversion of neural activities between the spatial domain and the temporal domain. We will also discuss on how the activation of sparse memory trace neurons might trigger the replay of neural activities in specific temporal patterns. PMID:27601979
How Does the Sparse Memory "Engram" Neurons Encode the Memory of a Spatial-Temporal Event?
Guan, Ji-Song; Jiang, Jun; Xie, Hong; Liu, Kai-Yuan
2016-01-01
Episodic memory in human brain is not a fixed 2-D picture but a highly dynamic movie serial, integrating information at both the temporal and the spatial domains. Recent studies in neuroscience reveal that memory storage and recall are closely related to the activities in discrete memory engram (trace) neurons within the dentate gyrus region of hippocampus and the layer 2/3 of neocortex. More strikingly, optogenetic reactivation of those memory trace neurons is able to trigger the recall of naturally encoded memory. It is still unknown how the discrete memory traces encode and reactivate the memory. Considering a particular memory normally represents a natural event, which consists of information at both the temporal and spatial domains, it is unknown how the discrete trace neurons could reconstitute such enriched information in the brain. Furthermore, as the optogenetic-stimuli induced recall of memory did not depend on firing pattern of the memory traces, it is most likely that the spatial activation pattern, but not the temporal activation pattern of the discrete memory trace neurons encodes the memory in the brain. How does the neural circuit convert the activities in the spatial domain into the temporal domain to reconstitute memory of a natural event? By reviewing the literature, here we present how the memory engram (trace) neurons are selected and consolidated in the brain. Then, we will discuss the main challenges in the memory trace theory. In the end, we will provide a plausible model of memory trace cell network, underlying the conversion of neural activities between the spatial domain and the temporal domain. We will also discuss on how the activation of sparse memory trace neurons might trigger the replay of neural activities in specific temporal patterns.
Improved FastICA algorithm in fMRI data analysis using the sparsity property of the sources.
Ge, Ruiyang; Wang, Yubao; Zhang, Jipeng; Yao, Li; Zhang, Hang; Long, Zhiying
2016-04-01
As a blind source separation technique, independent component analysis (ICA) has many applications in functional magnetic resonance imaging (fMRI). Although either temporal or spatial prior information has been introduced into the constrained ICA and semi-blind ICA methods to improve the performance of ICA in fMRI data analysis, certain types of additional prior information, such as the sparsity, has seldom been added to the ICA algorithms as constraints. In this study, we proposed a SparseFastICA method by adding the source sparsity as a constraint to the FastICA algorithm to improve the performance of the widely used FastICA. The source sparsity is estimated through a smoothed ℓ0 norm method. We performed experimental tests on both simulated data and real fMRI data to investigate the feasibility and robustness of SparseFastICA and made a performance comparison between SparseFastICA, FastICA and Infomax ICA. Results of the simulated and real fMRI data demonstrated the feasibility and robustness of SparseFastICA for the source separation in fMRI data. Both the simulated and real fMRI experimental results showed that SparseFastICA has better robustness to noise and better spatial detection power than FastICA. Although the spatial detection power of SparseFastICA and Infomax did not show significant difference, SparseFastICA had faster computation speed than Infomax. SparseFastICA was comparable to the Infomax algorithm with a faster computation speed. More importantly, SparseFastICA outperformed FastICA in robustness and spatial detection power and can be used to identify more accurate brain networks than FastICA algorithm. Copyright © 2016 Elsevier B.V. All rights reserved.
Dense encoding of natural odorants by ensembles of sparsely activated neurons in the olfactory bulb
Gschwend, Olivier; Beroud, Jonathan; Vincis, Roberto; Rodriguez, Ivan; Carleton, Alan
2016-01-01
Sensory information undergoes substantial transformation along sensory pathways, usually encompassing sparsening of activity. In the olfactory bulb, though natural odorants evoke dense glomerular input maps, mitral and tufted (M/T) cells tuning is considered to be sparse because of highly odor-specific firing rate change. However, experiments used to draw this conclusion were either based on recordings performed in anesthetized preparations or used monomolecular odorants presented at arbitrary concentrations. In this study, we evaluated the lifetime and population sparseness evoked by natural odorants by capturing spike temporal patterning of neuronal assemblies instead of individual M/T tonic activity. Using functional imaging and tetrode recordings in awake mice, we show that natural odorants at their native concentrations are encoded by broad assemblies of M/T cells. While reducing odorant concentrations, we observed a reduced number of activated glomeruli representations and consequently a narrowing of M/T tuning curves. We conclude that natural odorants at their native concentrations recruit M/T cells with phasic rather than tonic activity. When encoding odorants in assemblies, M/T cells carry information about a vast number of odorants (lifetime sparseness). In addition, each natural odorant activates a broad M/T cell assembly (population sparseness). PMID:27824096
Inferring Biological Structures from Super-Resolution Single Molecule Images Using Generative Models
Maji, Suvrajit; Bruchez, Marcel P.
2012-01-01
Localization-based super resolution imaging is presently limited by sampling requirements for dynamic measurements of biological structures. Generating an image requires serial acquisition of individual molecular positions at sufficient density to define a biological structure, increasing the acquisition time. Efficient analysis of biological structures from sparse localization data could substantially improve the dynamic imaging capabilities of these methods. Using a feature extraction technique called the Hough Transform simple biological structures are identified from both simulated and real localization data. We demonstrate that these generative models can efficiently infer biological structures in the data from far fewer localizations than are required for complete spatial sampling. Analysis at partial data densities revealed efficient recovery of clathrin vesicle size distributions and microtubule orientation angles with as little as 10% of the localization data. This approach significantly increases the temporal resolution for dynamic imaging and provides quantitatively useful biological information. PMID:22629348
Uncovering representations of sleep-associated hippocampal ensemble spike activity
NASA Astrophysics Data System (ADS)
Chen, Zhe; Grosmark, Andres D.; Penagos, Hector; Wilson, Matthew A.
2016-08-01
Pyramidal neurons in the rodent hippocampus exhibit spatial tuning during spatial navigation, and they are reactivated in specific temporal order during sharp-wave ripples observed in quiet wakefulness or slow wave sleep. However, analyzing representations of sleep-associated hippocampal ensemble spike activity remains a great challenge. In contrast to wake, during sleep there is a complete absence of animal behavior, and the ensemble spike activity is sparse (low occurrence) and fragmental in time. To examine important issues encountered in sleep data analysis, we constructed synthetic sleep-like hippocampal spike data (short epochs, sparse and sporadic firing, compressed timescale) for detailed investigations. Based upon two Bayesian population-decoding methods (one receptive field-based, and the other not), we systematically investigated their representation power and detection reliability. Notably, the receptive-field-free decoding method was found to be well-tuned for hippocampal ensemble spike data in slow wave sleep (SWS), even in the absence of prior behavioral measure or ground truth. Our results showed that in addition to the sample length, bin size, and firing rate, number of active hippocampal pyramidal neurons are critical for reliable representation of the space as well as for detection of spatiotemporal reactivated patterns in SWS or quiet wakefulness.
NASA Astrophysics Data System (ADS)
Orović, Irena; Stanković, Srdjan; Amin, Moeness
2013-05-01
A modified robust two-dimensional compressive sensing algorithm for reconstruction of sparse time-frequency representation (TFR) is proposed. The ambiguity function domain is assumed to be the domain of observations. The two-dimensional Fourier bases are used to linearly relate the observations to the sparse TFR, in lieu of the Wigner distribution. We assume that a set of available samples in the ambiguity domain is heavily corrupted by an impulsive type of noise. Consequently, the problem of sparse TFR reconstruction cannot be tackled using standard compressive sensing optimization algorithms. We introduce a two-dimensional L-statistics based modification into the transform domain representation. It provides suitable initial conditions that will produce efficient convergence of the reconstruction algorithm. This approach applies sorting and weighting operations to discard an expected amount of samples corrupted by noise. The remaining samples serve as observations used in sparse reconstruction of the time-frequency signal representation. The efficiency of the proposed approach is demonstrated on numerical examples that comprise both cases of monocomponent and multicomponent signals.
Hierarchical Bayesian sparse image reconstruction with application to MRFM.
Dobigeon, Nicolas; Hero, Alfred O; Tourneret, Jean-Yves
2009-09-01
This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g., by maximizing the estimated posterior distribution. In our fully Bayesian approach, the posteriors of all the parameters are available. Thus, our algorithm provides more information than other previously proposed sparse reconstruction methods that only give a point estimate. The performance of the proposed hierarchical Bayesian sparse reconstruction method is illustrated on synthetic data and real data collected from a tobacco virus sample using a prototype MRFM instrument.
Human motion tracking by temporal-spatial local gaussian process experts.
Zhao, Xu; Fu, Yun; Liu, Yuncai
2011-04-01
Human pose estimation via motion tracking systems can be considered as a regression problem within a discriminative framework. It is always a challenging task to model the mapping from observation space to state space because of the high-dimensional characteristic in the multimodal conditional distribution. In order to build the mapping, existing techniques usually involve a large set of training samples in the learning process which are limited in their capability to deal with multimodality. We propose, in this work, a novel online sparse Gaussian Process (GP) regression model to recover 3-D human motion in monocular videos. Particularly, we investigate the fact that for a given test input, its output is mainly determined by the training samples potentially residing in its local neighborhood and defined in the unified input-output space. This leads to a local mixture GP experts system composed of different local GP experts, each of which dominates a mapping behavior with the specific covariance function adapting to a local region. To handle the multimodality, we combine both temporal and spatial information therefore to obtain two categories of local experts. The temporal and spatial experts are integrated into a seamless hybrid system, which is automatically self-initialized and robust for visual tracking of nonlinear human motion. Learning and inference are extremely efficient as all the local experts are defined online within very small neighborhoods. Extensive experiments on two real-world databases, HumanEva and PEAR, demonstrate the effectiveness of our proposed model, which significantly improve the performance of existing models.
Southern Ocean Seasonal Net Production from Satellite, Atmosphere, and Ocean Data Sets
NASA Technical Reports Server (NTRS)
Keeling, Ralph F.; Campbell, J. (Technical Monitor)
2002-01-01
A new climatology of monthly air-sea O2 flux was developed using the net air-sea heat flux as a template for spatial and temporal interpolation of sparse hydrographic data. The climatology improves upon the previous climatology of Najjar and Keeling in the Southern Hemisphere, where the heat-based approach helps to overcome limitations due to sparse data coverage. The climatology is used to make comparisons with productivity derived from CZCS images. The climatology is also used in support of an investigation of the plausible impact of recent global warming an oceanic O2 inventories.
Intensity correlation imaging with sunlight-like source
NASA Astrophysics Data System (ADS)
Wang, Wentao; Tang, Zhiguo; Zheng, Huaibin; Chen, Hui; Yuan, Yuan; Liu, Jinbin; Liu, Yanyan; Xu, Zhuo
2018-05-01
We show a method of intensity correlation imaging of targets illuminated by a sunlight-like source both theoretically and experimentally. With a Faraday anomalous dispersion optical filter (FADOF), we have modulated the coherence time of a thermal source up to 0.167 ns. And we carried out measurements of temporal and spatial correlations, respectively, with an intensity interferometer setup. By skillfully using the even Fourier fitting on the very sparse sampling data, the images of targets are successfully reconstructed from the low signal-noise-ratio(SNR) interference pattern by applying an iterative phase retrieval algorithm. The resulting imaging quality is as well as the one obtained by the theoretical fitting. The realization of such a case will bring this technique closer to geostationary satellite imaging illuminated by sunlight.
Magnetic Resonance Super-resolution Imaging Measurement with Dictionary-optimized Sparse Learning
NASA Astrophysics Data System (ADS)
Li, Jun-Bao; Liu, Jing; Pan, Jeng-Shyang; Yao, Hongxun
2017-06-01
Magnetic Resonance Super-resolution Imaging Measurement (MRIM) is an effective way of measuring materials. MRIM has wide applications in physics, chemistry, biology, geology, medical and material science, especially in medical diagnosis. It is feasible to improve the resolution of MR imaging through increasing radiation intensity, but the high radiation intensity and the longtime of magnetic field harm the human body. Thus, in the practical applications the resolution of hardware imaging reaches the limitation of resolution. Software-based super-resolution technology is effective to improve the resolution of image. This work proposes a framework of dictionary-optimized sparse learning based MR super-resolution method. The framework is to solve the problem of sample selection for dictionary learning of sparse reconstruction. The textural complexity-based image quality representation is proposed to choose the optimal samples for dictionary learning. Comprehensive experiments show that the dictionary-optimized sparse learning improves the performance of sparse representation.
An Improved Sparse Representation over Learned Dictionary Method for Seizure Detection.
Li, Junhui; Zhou, Weidong; Yuan, Shasha; Zhang, Yanli; Li, Chengcheng; Wu, Qi
2016-02-01
Automatic seizure detection has played an important role in the monitoring, diagnosis and treatment of epilepsy. In this paper, a patient specific method is proposed for seizure detection in the long-term intracranial electroencephalogram (EEG) recordings. This seizure detection method is based on sparse representation with online dictionary learning and elastic net constraint. The online learned dictionary could sparsely represent the testing samples more accurately, and the elastic net constraint which combines the 11-norm and 12-norm not only makes the coefficients sparse but also avoids over-fitting problem. First, the EEG signals are preprocessed using wavelet filtering and differential filtering, and the kernel function is applied to make the samples closer to linearly separable. Then the dictionaries of seizure and nonseizure are respectively learned from original ictal and interictal training samples with online dictionary optimization algorithm to compose the training dictionary. After that, the test samples are sparsely coded over the learned dictionary and the residuals associated with ictal and interictal sub-dictionary are calculated, respectively. Eventually, the test samples are classified as two distinct categories, seizure or nonseizure, by comparing the reconstructed residuals. The average segment-based sensitivity of 95.45%, specificity of 99.08%, and event-based sensitivity of 94.44% with false detection rate of 0.23/h and average latency of -5.14 s have been achieved with our proposed method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kovarik, Libor; Stevens, Andrew J.; Liyu, Andrey V.
Aberration correction for scanning transmission electron microscopes (STEM) has dramatically increased spatial image resolution for beam-stable materials, but it is the sample stability rather than the microscope that often limits the practical resolution of STEM images. To extract physical information from images of beam sensitive materials it is becoming clear that there is a critical dose/dose-rate below which the images can be interpreted as representative of the pristine material, while above it the observation is dominated by beam effects. Here we describe an experimental approach for sparse sampling in the STEM and in-painting image reconstruction in order to reduce themore » electron dose/dose-rate to the sample during imaging. By characterizing the induction limited rise-time and hysteresis in scan coils, we show that sparse line-hopping approach to scan randomization can be implemented that optimizes both the speed of the scan and the amount of the sample that needs to be illuminated by the beam. The dose and acquisition time for the sparse sampling is shown to be effectively decreased by factor of 5x relative to conventional acquisition, permitting imaging of beam sensitive materials to be obtained without changing the microscope operating parameters. As a result, the use of sparse line-hopping scan to acquire STEM images is demonstrated with atomic resolution aberration corrected Z-contrast images of CaCO 3, a material that is traditionally difficult to image by TEM/STEM because of dose issues.« less
Kovarik, Libor; Stevens, Andrew J.; Liyu, Andrey V.; ...
2016-10-17
Aberration correction for scanning transmission electron microscopes (STEM) has dramatically increased spatial image resolution for beam-stable materials, but it is the sample stability rather than the microscope that often limits the practical resolution of STEM images. To extract physical information from images of beam sensitive materials it is becoming clear that there is a critical dose/dose-rate below which the images can be interpreted as representative of the pristine material, while above it the observation is dominated by beam effects. Here we describe an experimental approach for sparse sampling in the STEM and in-painting image reconstruction in order to reduce themore » electron dose/dose-rate to the sample during imaging. By characterizing the induction limited rise-time and hysteresis in scan coils, we show that sparse line-hopping approach to scan randomization can be implemented that optimizes both the speed of the scan and the amount of the sample that needs to be illuminated by the beam. The dose and acquisition time for the sparse sampling is shown to be effectively decreased by factor of 5x relative to conventional acquisition, permitting imaging of beam sensitive materials to be obtained without changing the microscope operating parameters. The use of sparse line-hopping scan to acquire STEM images is demonstrated with atomic resolution aberration corrected Z-contrast images of CaCO3, a material that is traditionally difficult to image by TEM/STEM because of dose issues.« less
Semi-implicit integration factor methods on sparse grids for high-dimensional systems
NASA Astrophysics Data System (ADS)
Wang, Dongyong; Chen, Weitao; Nie, Qing
2015-07-01
Numerical methods for partial differential equations in high-dimensional spaces are often limited by the curse of dimensionality. Though the sparse grid technique, based on a one-dimensional hierarchical basis through tensor products, is popular for handling challenges such as those associated with spatial discretization, the stability conditions on time step size due to temporal discretization, such as those associated with high-order derivatives in space and stiff reactions, remain. Here, we incorporate the sparse grids with the implicit integration factor method (IIF) that is advantageous in terms of stability conditions for systems containing stiff reactions and diffusions. We combine IIF, in which the reaction is treated implicitly and the diffusion is treated explicitly and exactly, with various sparse grid techniques based on the finite element and finite difference methods and a multi-level combination approach. The overall method is found to be efficient in terms of both storage and computational time for solving a wide range of PDEs in high dimensions. In particular, the IIF with the sparse grid combination technique is flexible and effective in solving systems that may include cross-derivatives and non-constant diffusion coefficients. Extensive numerical simulations in both linear and nonlinear systems in high dimensions, along with applications of diffusive logistic equations and Fokker-Planck equations, demonstrate the accuracy, efficiency, and robustness of the new methods, indicating potential broad applications of the sparse grid-based integration factor method.
Sparse representation of whole-brain fMRI signals for identification of functional networks.
Lv, Jinglei; Jiang, Xi; Li, Xiang; Zhu, Dajiang; Chen, Hanbo; Zhang, Tuo; Zhang, Shu; Hu, Xintao; Han, Junwei; Huang, Heng; Zhang, Jing; Guo, Lei; Liu, Tianming
2015-02-01
There have been several recent studies that used sparse representation for fMRI signal analysis and activation detection based on the assumption that each voxel's fMRI signal is linearly composed of sparse components. Previous studies have employed sparse coding to model functional networks in various modalities and scales. These prior contributions inspired the exploration of whether/how sparse representation can be used to identify functional networks in a voxel-wise way and on the whole brain scale. This paper presents a novel, alternative methodology of identifying multiple functional networks via sparse representation of whole-brain task-based fMRI signals. Our basic idea is that all fMRI signals within the whole brain of one subject are aggregated into a big data matrix, which is then factorized into an over-complete dictionary basis matrix and a reference weight matrix via an effective online dictionary learning algorithm. Our extensive experimental results have shown that this novel methodology can uncover multiple functional networks that can be well characterized and interpreted in spatial, temporal and frequency domains based on current brain science knowledge. Importantly, these well-characterized functional network components are quite reproducible in different brains. In general, our methods offer a novel, effective and unified solution to multiple fMRI data analysis tasks including activation detection, de-activation detection, and functional network identification. Copyright © 2014 Elsevier B.V. All rights reserved.
Mapping visual stimuli to perceptual decisions via sparse decoding of mesoscopic neural activity.
Sajda, Paul
2010-01-01
In this talk I will describe our work investigating sparse decoding of neural activity, given a realistic mapping of the visual scene to neuronal spike trains generated by a model of primary visual cortex (V1). We use a linear decoder which imposes sparsity via an L1 norm. The decoder can be viewed as a decoding neuron (linear summation followed by a sigmoidal nonlinearity) in which there are relatively few non-zero synaptic weights. We find: (1) the best decoding performance is for a representation that is sparse in both space and time, (2) decoding of a temporal code results in better performance than a rate code and is also a better fit to the psychophysical data, (3) the number of neurons required for decoding increases monotonically as signal-to-noise in the stimulus decreases, with as little as 1% of the neurons required for decoding at the highest signal-to-noise levels, and (4) sparse decoding results in a more accurate decoding of the stimulus and is a better fit to psychophysical performance than a distributed decoding, for example one imposed by an L2 norm. We conclude that sparse coding is well-justified from a decoding perspective in that it results in a minimum number of neurons and maximum accuracy when sparse representations can be decoded from the neural dynamics.
Fast and low-dose computed laminography using compressive sensing based technique
NASA Astrophysics Data System (ADS)
Abbas, Sajid; Park, Miran; Cho, Seungryong
2015-03-01
Computed laminography (CL) is well known for inspecting microstructures in the materials, weldments and soldering defects in high density packed components or multilayer printed circuit boards. The overload problem on x-ray tube and gross failure of the radio-sensitive electronics devices during a scan are among important issues in CL which needs to be addressed. The sparse-view CL can be one of the viable option to overcome such issues. In this work a numerical aluminum welding phantom was simulated to collect sparsely sampled projection data at only 40 views using a conventional CL scanning scheme i.e. oblique scan. A compressive-sensing inspired total-variation (TV) minimization algorithm was utilized to reconstruct the images. It is found that the images reconstructed using sparse view data are visually comparable with the images reconstructed using full scan data set i.e. at 360 views on regular interval. We have quantitatively confirmed that tiny structures such as copper and tungsten slags, and copper flakes in the reconstructed images from sparsely sampled data are comparable with the corresponding structure present in the fully sampled data case. A blurring effect can be seen near the edges of few pores at the bottom of the reconstructed images from sparsely sampled data, despite the overall image quality is reasonable for fast and low-dose NDT.
Tanaka, Takuma; Aoyagi, Toshio; Kaneko, Takeshi
2012-10-01
We propose a new principle for replicating receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network, which maintains a low firing rate for the output neurons (resulting in temporal sparseness) and allows only a small subset of the neurons in the network to fire at any given time (resulting in population sparseness). Our learning rule also sets the firing rates of the output neurons at each time step to near-maximum or near-minimum levels, resulting in neuronal reliability. The learning rule is simple enough to be written in spatially and temporally local forms. After the learning stage is performed using input image patches of natural scenes, output neurons in the model network are found to exhibit simple-cell-like receptive field properties. When the output of these simple-cell-like neurons are input to another model layer using the same learning rule, the second-layer output neurons after learning become less sensitive to the phase of gratings than the simple-cell-like input neurons. In particular, some of the second-layer output neurons become completely phase invariant, owing to the convergence of the connections from first-layer neurons with similar orientation selectivity to second-layer neurons in the model network. We examine the parameter dependencies of the receptive field properties of the model neurons after learning and discuss their biological implications. We also show that the localized learning rule is consistent with experimental results concerning neuronal plasticity and can replicate the receptive fields of simple and complex cells.
NASA Astrophysics Data System (ADS)
Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi
2017-02-01
A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod-Patterson-Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.
A Flexible Spatio-Temporal Model for Air Pollution with Spatial and Spatio-Temporal Covariates.
Lindström, Johan; Szpiro, Adam A; Sampson, Paul D; Oron, Assaf P; Richards, Mark; Larson, Tim V; Sheppard, Lianne
2014-09-01
The development of models that provide accurate spatio-temporal predictions of ambient air pollution at small spatial scales is of great importance for the assessment of potential health effects of air pollution. Here we present a spatio-temporal framework that predicts ambient air pollution by combining data from several different monitoring networks and deterministic air pollution model(s) with geographic information system (GIS) covariates. The model presented in this paper has been implemented in an R package, SpatioTemporal, available on CRAN. The model is used by the EPA funded Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air) to produce estimates of ambient air pollution; MESA Air uses the estimates to investigate the relationship between chronic exposure to air pollution and cardiovascular disease. In this paper we use the model to predict long-term average concentrations of NO x in the Los Angeles area during a ten year period. Predictions are based on measurements from the EPA Air Quality System, MESA Air specific monitoring, and output from a source dispersion model for traffic related air pollution (Caline3QHCR). Accuracy in predicting long-term average concentrations is evaluated using an elaborate cross-validation setup that accounts for a sparse spatio-temporal sampling pattern in the data, and adjusts for temporal effects. The predictive ability of the model is good with cross-validated R 2 of approximately 0.7 at subject sites. Replacing four geographic covariate indicators of traffic density with the Caline3QHCR dispersion model output resulted in very similar prediction accuracy from a more parsimonious and more interpretable model. Adding traffic-related geographic covariates to the model that included Caline3QHCR did not further improve the prediction accuracy.
Sparse dictionary for synthetic transmit aperture medical ultrasound imaging.
Wang, Ping; Jiang, Jin-Yang; Li, Na; Luo, Han-Wu; Li, Fang; Cui, Shi-Gang
2017-07-01
It is possible to recover a signal below the Nyquist sampling limit using a compressive sensing technique in ultrasound imaging. However, the reconstruction enabled by common sparse transform approaches does not achieve satisfactory results. Considering the ultrasound echo signal's features of attenuation, repetition, and superposition, a sparse dictionary with the emission pulse signal is proposed. Sparse coefficients in the proposed dictionary have high sparsity. Images reconstructed with this dictionary were compared with those obtained with the three other common transforms, namely, discrete Fourier transform, discrete cosine transform, and discrete wavelet transform. The performance of the proposed dictionary was analyzed via a simulation and experimental data. The mean absolute error (MAE) was used to quantify the quality of the reconstructions. Experimental results indicate that the MAE associated with the proposed dictionary was always the smallest, the reconstruction time required was the shortest, and the lateral resolution and contrast of the reconstructed images were also the closest to the original images. The proposed sparse dictionary performed better than the other three sparse transforms. With the same sampling rate, the proposed dictionary achieved excellent reconstruction quality.
Robust Pedestrian Tracking and Recognition from FLIR Video: A Unified Approach via Sparse Coding
Li, Xin; Guo, Rui; Chen, Chao
2014-01-01
Sparse coding is an emerging method that has been successfully applied to both robust object tracking and recognition in the vision literature. In this paper, we propose to explore a sparse coding-based approach toward joint object tracking-and-recognition and explore its potential in the analysis of forward-looking infrared (FLIR) video to support nighttime machine vision systems. A key technical contribution of this work is to unify existing sparse coding-based approaches toward tracking and recognition under the same framework, so that they can benefit from each other in a closed-loop. On the one hand, tracking the same object through temporal frames allows us to achieve improved recognition performance through dynamical updating of template/dictionary and combining multiple recognition results; on the other hand, the recognition of individual objects facilitates the tracking of multiple objects (i.e., walking pedestrians), especially in the presence of occlusion within a crowded environment. We report experimental results on both the CASIAPedestrian Database and our own collected FLIR video database to demonstrate the effectiveness of the proposed joint tracking-and-recognition approach. PMID:24961216
The HTM Spatial Pooler-A Neocortical Algorithm for Online Sparse Distributed Coding.
Cui, Yuwei; Ahmad, Subutai; Hawkins, Jeff
2017-01-01
Hierarchical temporal memory (HTM) provides a theoretical framework that models several key computational principles of the neocortex. In this paper, we analyze an important component of HTM, the HTM spatial pooler (SP). The SP models how neurons learn feedforward connections and form efficient representations of the input. It converts arbitrary binary input patterns into sparse distributed representations (SDRs) using a combination of competitive Hebbian learning rules and homeostatic excitability control. We describe a number of key properties of the SP, including fast adaptation to changing input statistics, improved noise robustness through learning, efficient use of cells, and robustness to cell death. In order to quantify these properties we develop a set of metrics that can be directly computed from the SP outputs. We show how the properties are met using these metrics and targeted artificial simulations. We then demonstrate the value of the SP in a complete end-to-end real-world HTM system. We discuss the relationship with neuroscience and previous studies of sparse coding. The HTM spatial pooler represents a neurally inspired algorithm for learning sparse representations from noisy data streams in an online fashion.
Statistical Deconvolution for Superresolution Fluorescence Microscopy
Mukamel, Eran A.; Babcock, Hazen; Zhuang, Xiaowei
2012-01-01
Superresolution microscopy techniques based on the sequential activation of fluorophores can achieve image resolution of ∼10 nm but require a sparse distribution of simultaneously activated fluorophores in the field of view. Image analysis procedures for this approach typically discard data from crowded molecules with overlapping images, wasting valuable image information that is only partly degraded by overlap. A data analysis method that exploits all available fluorescence data, regardless of overlap, could increase the number of molecules processed per frame and thereby accelerate superresolution imaging speed, enabling the study of fast, dynamic biological processes. Here, we present a computational method, referred to as deconvolution-STORM (deconSTORM), which uses iterative image deconvolution in place of single- or multiemitter localization to estimate the sample. DeconSTORM approximates the maximum likelihood sample estimate under a realistic statistical model of fluorescence microscopy movies comprising numerous frames. The model incorporates Poisson-distributed photon-detection noise, the sparse spatial distribution of activated fluorophores, and temporal correlations between consecutive movie frames arising from intermittent fluorophore activation. We first quantitatively validated this approach with simulated fluorescence data and showed that deconSTORM accurately estimates superresolution images even at high densities of activated fluorophores where analysis by single- or multiemitter localization methods fails. We then applied the method to experimental data of cellular structures and demonstrated that deconSTORM enables an approximately fivefold or greater increase in imaging speed by allowing a higher density of activated fluorophores/frame. PMID:22677393
NASA Astrophysics Data System (ADS)
Shoupeng, Song; Zhou, Jiang
2017-03-01
Converting ultrasonic signal to ultrasonic pulse stream is the key step of finite rate of innovation (FRI) sparse sampling. At present, ultrasonic pulse-stream-forming techniques are mainly based on digital algorithms. No hardware circuit that can achieve it has been reported. This paper proposes a new quadrature demodulation (QD) based circuit implementation method for forming an ultrasonic pulse stream. Elaborating on FRI sparse sampling theory, the process of ultrasonic signal is explained, followed by a discussion and analysis of ultrasonic pulse-stream-forming methods. In contrast to ultrasonic signal envelope extracting techniques, a quadrature demodulation method (QDM) is proposed. Simulation experiments were performed to determine its performance at various signal-to-noise ratios (SNRs). The circuit was then designed, with mixing module, oscillator, low pass filter (LPF), and root of square sum module. Finally, application experiments were carried out on pipeline sample ultrasonic flaw testing. The experimental results indicate that the QDM can accurately convert ultrasonic signal to ultrasonic pulse stream, and reverse the original signal information, such as pulse width, amplitude, and time of arrival. This technique lays the foundation for ultrasonic signal FRI sparse sampling directly with hardware circuitry.
Sparsely sampling the sky: a Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Paykari, P.; Jaffe, A. H.
2013-08-01
The next generation of galaxy surveys will observe millions of galaxies over large volumes of the Universe. These surveys are expensive both in time and cost, raising questions regarding the optimal investment of this time and money. In this work, we investigate criteria for selecting amongst observing strategies for constraining the galaxy power spectrum and a set of cosmological parameters. Depending on the parameters of interest, it may be more efficient to observe a larger, but sparsely sampled, area of sky instead of a smaller contiguous area. In this work, by making use of the principles of Bayesian experimental design, we will investigate the advantages and disadvantages of the sparse sampling of the sky and discuss the circumstances in which a sparse survey is indeed the most efficient strategy. For the Dark Energy Survey (DES), we find that by sparsely observing the same area in a smaller amount of time, we only increase the errors on the parameters by a maximum of 0.45 per cent. Conversely, investing the same amount of time as the original DES to observe a sparser but larger area of sky, we can in fact constrain the parameters with errors reduced by 28 per cent.
Robust Small Target Co-Detection from Airborne Infrared Image Sequences.
Gao, Jingli; Wen, Chenglin; Liu, Meiqin
2017-09-29
In this paper, a novel infrared target co-detection model combining the self-correlation features of backgrounds and the commonality features of targets in the spatio-temporal domain is proposed to detect small targets in a sequence of infrared images with complex backgrounds. Firstly, a dense target extraction model based on nonlinear weights is proposed, which can better suppress background of images and enhance small targets than weights of singular values. Secondly, a sparse target extraction model based on entry-wise weighted robust principal component analysis is proposed. The entry-wise weight adaptively incorporates structural prior in terms of local weighted entropy, thus, it can extract real targets accurately and suppress background clutters efficiently. Finally, the commonality of targets in the spatio-temporal domain are used to construct target refinement model for false alarms suppression and target confirmation. Since real targets could appear in both of the dense and sparse reconstruction maps of a single frame, and form trajectories after tracklet association of consecutive frames, the location correlation of the dense and sparse reconstruction maps for a single frame and tracklet association of the location correlation maps for successive frames have strong ability to discriminate between small targets and background clutters. Experimental results demonstrate that the proposed small target co-detection method can not only suppress background clutters effectively, but also detect targets accurately even if with target-like interference.
Correlated activity supports efficient cortical processing
Hung, Chou P.; Cui, Ding; Chen, Yueh-peng; Lin, Chia-pei; Levine, Matthew R.
2015-01-01
Visual recognition is a computational challenge that is thought to occur via efficient coding. An important concept is sparseness, a measure of coding efficiency. The prevailing view is that sparseness supports efficiency by minimizing redundancy and correlations in spiking populations. Yet, we recently reported that “choristers”, neurons that behave more similarly (have correlated stimulus preferences and spontaneous coincident spiking), carry more generalizable object information than uncorrelated neurons (“soloists”) in macaque inferior temporal (IT) cortex. The rarity of choristers (as low as 6% of IT neurons) indicates that they were likely missed in previous studies. Here, we report that correlation strength is distinct from sparseness (choristers are not simply broadly tuned neurons), that choristers are located in non-granular output layers, and that correlated activity predicts human visual search efficiency. These counterintuitive results suggest that a redundant correlational structure supports efficient processing and behavior. PMID:25610392
A novel sub-shot segmentation method for user-generated video
NASA Astrophysics Data System (ADS)
Lei, Zhuo; Zhang, Qian; Zheng, Chi; Qiu, Guoping
2018-04-01
With the proliferation of the user-generated videos, temporal segmentation is becoming a challengeable problem. Traditional video temporal segmentation methods like shot detection are not able to work on unedited user-generated videos, since they often only contain one single long shot. We propose a novel temporal segmentation framework for user-generated video. It finds similar frames with a tree partitioning min-Hash technique, constructs sparse temporal constrained affinity sub-graphs, and finally divides the video into sub-shot-level segments with a dense-neighbor-based clustering method. Experimental results show that our approach outperforms all the other related works. Furthermore, it is indicated that the proposed approach is able to segment user-generated videos at an average human level.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Qinzhuo, E-mail: liaoqz@pku.edu.cn; Zhang, Dongxiao; Tchelepi, Hamdi
A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod–Patterson–Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate accuracy and efficiencymore » of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.« less
Dynamic mode decomposition for plasma diagnostics and validation.
Taylor, Roy; Kutz, J Nathan; Morgan, Kyle; Nelson, Brian A
2018-05-01
We demonstrate the application of the Dynamic Mode Decomposition (DMD) for the diagnostic analysis of the nonlinear dynamics of a magnetized plasma in resistive magnetohydrodynamics. The DMD method is an ideal spatio-temporal matrix decomposition that correlates spatial features of computational or experimental data while simultaneously associating the spatial activity with periodic temporal behavior. DMD can produce low-rank, reduced order surrogate models that can be used to reconstruct the state of the system with high fidelity. This allows for a reduction in the computational cost and, at the same time, accurate approximations of the problem, even if the data are sparsely sampled. We demonstrate the use of the method on both numerical and experimental data, showing that it is a successful mathematical architecture for characterizing the helicity injected torus with steady inductive (HIT-SI) magnetohydrodynamics. Importantly, the DMD produces interpretable, dominant mode structures, including a stationary mode consistent with our understanding of a HIT-SI spheromak accompanied by a pair of injector-driven modes. In combination, the 3-mode DMD model produces excellent dynamic reconstructions across the domain of analyzed data.
Dynamic mode decomposition for plasma diagnostics and validation
NASA Astrophysics Data System (ADS)
Taylor, Roy; Kutz, J. Nathan; Morgan, Kyle; Nelson, Brian A.
2018-05-01
We demonstrate the application of the Dynamic Mode Decomposition (DMD) for the diagnostic analysis of the nonlinear dynamics of a magnetized plasma in resistive magnetohydrodynamics. The DMD method is an ideal spatio-temporal matrix decomposition that correlates spatial features of computational or experimental data while simultaneously associating the spatial activity with periodic temporal behavior. DMD can produce low-rank, reduced order surrogate models that can be used to reconstruct the state of the system with high fidelity. This allows for a reduction in the computational cost and, at the same time, accurate approximations of the problem, even if the data are sparsely sampled. We demonstrate the use of the method on both numerical and experimental data, showing that it is a successful mathematical architecture for characterizing the helicity injected torus with steady inductive (HIT-SI) magnetohydrodynamics. Importantly, the DMD produces interpretable, dominant mode structures, including a stationary mode consistent with our understanding of a HIT-SI spheromak accompanied by a pair of injector-driven modes. In combination, the 3-mode DMD model produces excellent dynamic reconstructions across the domain of analyzed data.
Fast and low-dose computed laminography using compressive sensing based technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abbas, Sajid, E-mail: scho@kaist.ac.kr; Park, Miran, E-mail: scho@kaist.ac.kr; Cho, Seungryong, E-mail: scho@kaist.ac.kr
2015-03-31
Computed laminography (CL) is well known for inspecting microstructures in the materials, weldments and soldering defects in high density packed components or multilayer printed circuit boards. The overload problem on x-ray tube and gross failure of the radio-sensitive electronics devices during a scan are among important issues in CL which needs to be addressed. The sparse-view CL can be one of the viable option to overcome such issues. In this work a numerical aluminum welding phantom was simulated to collect sparsely sampled projection data at only 40 views using a conventional CL scanning scheme i.e. oblique scan. A compressive-sensing inspiredmore » total-variation (TV) minimization algorithm was utilized to reconstruct the images. It is found that the images reconstructed using sparse view data are visually comparable with the images reconstructed using full scan data set i.e. at 360 views on regular interval. We have quantitatively confirmed that tiny structures such as copper and tungsten slags, and copper flakes in the reconstructed images from sparsely sampled data are comparable with the corresponding structure present in the fully sampled data case. A blurring effect can be seen near the edges of few pores at the bottom of the reconstructed images from sparsely sampled data, despite the overall image quality is reasonable for fast and low-dose NDT.« less
Volumetric CT with sparse detector arrays (and application to Si-strip photon counters).
Sisniega, A; Zbijewski, W; Stayman, J W; Xu, J; Taguchi, K; Fredenberg, E; Lundqvist, Mats; Siewerdsen, J H
2016-01-07
Novel x-ray medical imaging sensors, such as photon counting detectors (PCDs) and large area CCD and CMOS cameras can involve irregular and/or sparse sampling of the detector plane. Application of such detectors to CT involves undersampling that is markedly different from the commonly considered case of sparse angular sampling. This work investigates volumetric sampling in CT systems incorporating sparsely sampled detectors with axial and helical scan orbits and evaluates performance of model-based image reconstruction (MBIR) with spatially varying regularization in mitigating artifacts due to sparse detector sampling. Volumetric metrics of sampling density and uniformity were introduced. Penalized-likelihood MBIR with a spatially varying penalty that homogenized resolution by accounting for variations in local sampling density (i.e. detector gaps) was evaluated. The proposed methodology was tested in simulations and on an imaging bench based on a Si-strip PCD (total area 5 cm × 25 cm) consisting of an arrangement of line sensors separated by gaps of up to 2.5 mm. The bench was equipped with translation/rotation stages allowing a variety of scanning trajectories, ranging from a simple axial acquisition to helical scans with variable pitch. Statistical (spherical clutter) and anthropomorphic (hand) phantoms were considered. Image quality was compared to that obtained with a conventional uniform penalty in terms of structural similarity index (SSIM), image uniformity, spatial resolution, contrast, and noise. Scan trajectories with intermediate helical width (~10 mm longitudinal distance per 360° rotation) demonstrated optimal tradeoff between the average sampling density and the homogeneity of sampling throughout the volume. For a scan trajectory with 10.8 mm helical width, the spatially varying penalty resulted in significant visual reduction of sampling artifacts, confirmed by a 10% reduction in minimum SSIM (from 0.88 to 0.8) and a 40% reduction in the dispersion of SSIM in the volume compared to the constant penalty (both penalties applied at optimal regularization strength). Images of the spherical clutter and wrist phantoms confirmed the advantages of the spatially varying penalty, showing a 25% improvement in image uniformity and 1.8 × higher CNR (at matched spatial resolution) compared to the constant penalty. The studies elucidate the relationship between sampling in the detector plane, acquisition orbit, sampling of the reconstructed volume, and the resulting image quality. They also demonstrate the benefit of spatially varying regularization in MBIR for scenarios with irregular sampling patterns. Such findings are important and integral to the incorporation of a sparsely sampled Si-strip PCD in CT imaging.
Volumetric CT with sparse detector arrays (and application to Si-strip photon counters)
NASA Astrophysics Data System (ADS)
Sisniega, A.; Zbijewski, W.; Stayman, J. W.; Xu, J.; Taguchi, K.; Fredenberg, E.; Lundqvist, Mats; Siewerdsen, J. H.
2016-01-01
Novel x-ray medical imaging sensors, such as photon counting detectors (PCDs) and large area CCD and CMOS cameras can involve irregular and/or sparse sampling of the detector plane. Application of such detectors to CT involves undersampling that is markedly different from the commonly considered case of sparse angular sampling. This work investigates volumetric sampling in CT systems incorporating sparsely sampled detectors with axial and helical scan orbits and evaluates performance of model-based image reconstruction (MBIR) with spatially varying regularization in mitigating artifacts due to sparse detector sampling. Volumetric metrics of sampling density and uniformity were introduced. Penalized-likelihood MBIR with a spatially varying penalty that homogenized resolution by accounting for variations in local sampling density (i.e. detector gaps) was evaluated. The proposed methodology was tested in simulations and on an imaging bench based on a Si-strip PCD (total area 5 cm × 25 cm) consisting of an arrangement of line sensors separated by gaps of up to 2.5 mm. The bench was equipped with translation/rotation stages allowing a variety of scanning trajectories, ranging from a simple axial acquisition to helical scans with variable pitch. Statistical (spherical clutter) and anthropomorphic (hand) phantoms were considered. Image quality was compared to that obtained with a conventional uniform penalty in terms of structural similarity index (SSIM), image uniformity, spatial resolution, contrast, and noise. Scan trajectories with intermediate helical width (~10 mm longitudinal distance per 360° rotation) demonstrated optimal tradeoff between the average sampling density and the homogeneity of sampling throughout the volume. For a scan trajectory with 10.8 mm helical width, the spatially varying penalty resulted in significant visual reduction of sampling artifacts, confirmed by a 10% reduction in minimum SSIM (from 0.88 to 0.8) and a 40% reduction in the dispersion of SSIM in the volume compared to the constant penalty (both penalties applied at optimal regularization strength). Images of the spherical clutter and wrist phantoms confirmed the advantages of the spatially varying penalty, showing a 25% improvement in image uniformity and 1.8 × higher CNR (at matched spatial resolution) compared to the constant penalty. The studies elucidate the relationship between sampling in the detector plane, acquisition orbit, sampling of the reconstructed volume, and the resulting image quality. They also demonstrate the benefit of spatially varying regularization in MBIR for scenarios with irregular sampling patterns. Such findings are important and integral to the incorporation of a sparsely sampled Si-strip PCD in CT imaging.
Volumetric CT with sparse detector arrays (and application to Si-strip photon counters)
Sisniega, A; Zbijewski, W; Stayman, J W; Xu, J; Taguchi, K; Fredenberg, E; Lundqvist, Mats; Siewerdsen, J H
2016-01-01
Novel x-ray medical imaging sensors, such as photon counting detectors (PCDs) and large area CCD and CMOS cameras can involve irregular and/or sparse sampling of the detector plane. Application of such detectors to CT involves undersampling that is markedly different from the commonly considered case of sparse angular sampling. This work investigates volumetric sampling in CT systems incorporating sparsely sampled detectors with axial and helical scan orbits and evaluates performance of model-based image reconstruction (MBIR) with spatially varying regularization in mitigating artifacts due to sparse detector sampling. Volumetric metrics of sampling density and uniformity were introduced. Penalized-likelihood MBIR with a spatially varying penalty that homogenized resolution by accounting for variations in local sampling density (i.e. detector gaps) was evaluated. The proposed methodology was tested in simulations and on an imaging bench based on a Si-strip PCD (total area 5 cm × 25 cm) consisting of an arrangement of line sensors separated by gaps of up to 2.5 mm. The bench was equipped with translation/rotation stages allowing a variety of scanning trajectories, ranging from a simple axial acquisition to helical scans with variable pitch. Statistical (spherical clutter) and anthropomorphic (hand) phantoms were considered. Image quality was compared to that obtained with a conventional uniform penalty in terms of structural similarity index (SSIM), image uniformity, spatial resolution, contrast, and noise. Scan trajectories with intermediate helical width (~10 mm longitudinal distance per 360° rotation) demonstrated optimal tradeoff between the average sampling density and the homogeneity of sampling throughout the volume. For a scan trajectory with 10.8 mm helical width, the spatially varying penalty resulted in significant visual reduction of sampling artifacts, confirmed by a 10% reduction in minimum SSIM (from 0.88 to 0.8) and a 40% reduction in the dispersion of SSIM in the volume compared to the constant penalty (both penalties applied at optimal regularization strength). Images of the spherical clutter and wrist phantoms confirmed the advantages of the spatially varying penalty, showing a 25% improvement in image uniformity and 1.8 × higher CNR (at matched spatial resolution) compared to the constant penalty. The studies elucidate the relationship between sampling in the detector plane, acquisition orbit, sampling of the reconstructed volume, and the resulting image quality. They also demonstrate the benefit of spatially varying regularization in MBIR for scenarios with irregular sampling patterns. Such findings are important and integral to the incorporation of a sparsely sampled Si-strip PCD in CT imaging. PMID:26611740
The Joker: A Custom Monte Carlo Sampler for Binary-star and Exoplanet Radial Velocity Data
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.; Hogg, David W.; Foreman-Mackey, Daniel; Rix, Hans-Walter
2017-03-01
Given sparse or low-quality radial velocity measurements of a star, there are often many qualitatively different stellar or exoplanet companion orbit models that are consistent with the data. The consequent multimodality of the likelihood function leads to extremely challenging search, optimization, and Markov chain Monte Carlo (MCMC) posterior sampling over the orbital parameters. Here we create a custom Monte Carlo sampler for sparse or noisy radial velocity measurements of two-body systems that can produce posterior samples for orbital parameters even when the likelihood function is poorly behaved. The six standard orbital parameters for a binary system can be split into four nonlinear parameters (period, eccentricity, argument of pericenter, phase) and two linear parameters (velocity amplitude, barycenter velocity). We capitalize on this by building a sampling method in which we densely sample the prior probability density function (pdf) in the nonlinear parameters and perform rejection sampling using a likelihood function marginalized over the linear parameters. With sparse or uninformative data, the sampling obtained by this rejection sampling is generally multimodal and dense. With informative data, the sampling becomes effectively unimodal but too sparse: in these cases we follow the rejection sampling with standard MCMC. The method produces correct samplings in orbital parameters for data that include as few as three epochs. The Joker can therefore be used to produce proper samplings of multimodal pdfs, which are still informative and can be used in hierarchical (population) modeling. We give some examples that show how the posterior pdf depends sensitively on the number and time coverage of the observations and their uncertainties.
View-interpolation of sparsely sampled sinogram using convolutional neural network
NASA Astrophysics Data System (ADS)
Lee, Hoyeon; Lee, Jongha; Cho, Suengryong
2017-02-01
Spare-view sampling and its associated iterative image reconstruction in computed tomography have actively investigated. Sparse-view CT technique is a viable option to low-dose CT, particularly in cone-beam CT (CBCT) applications, with advanced iterative image reconstructions with varying degrees of image artifacts. One of the artifacts that may occur in sparse-view CT is the streak artifact in the reconstructed images. Another approach has been investigated for sparse-view CT imaging by use of the interpolation methods to fill in the missing view data and that reconstructs the image by an analytic reconstruction algorithm. In this study, we developed an interpolation method using convolutional neural network (CNN), which is one of the widely used deep-learning methods, to find missing projection data and compared its performances with the other interpolation techniques.
Pisharady, Pramod Kumar; Sotiropoulos, Stamatios N; Duarte-Carvajalino, Julio M; Sapiro, Guillermo; Lenglet, Christophe
2018-02-15
We present a sparse Bayesian unmixing algorithm BusineX: Bayesian Unmixing for Sparse Inference-based Estimation of Fiber Crossings (X), for estimation of white matter fiber parameters from compressed (under-sampled) diffusion MRI (dMRI) data. BusineX combines compressive sensing with linear unmixing and introduces sparsity to the previously proposed multiresolution data fusion algorithm RubiX, resulting in a method for improved reconstruction, especially from data with lower number of diffusion gradients. We formulate the estimation of fiber parameters as a sparse signal recovery problem and propose a linear unmixing framework with sparse Bayesian learning for the recovery of sparse signals, the fiber orientations and volume fractions. The data is modeled using a parametric spherical deconvolution approach and represented using a dictionary created with the exponential decay components along different possible diffusion directions. Volume fractions of fibers along these directions define the dictionary weights. The proposed sparse inference, which is based on the dictionary representation, considers the sparsity of fiber populations and exploits the spatial redundancy in data representation, thereby facilitating inference from under-sampled q-space. The algorithm improves parameter estimation from dMRI through data-dependent local learning of hyperparameters, at each voxel and for each possible fiber orientation, that moderate the strength of priors governing the parameter variances. Experimental results on synthetic and in-vivo data show improved accuracy with a lower uncertainty in fiber parameter estimates. BusineX resolves a higher number of second and third fiber crossings. For under-sampled data, the algorithm is also shown to produce more reliable estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakeman, J.D., E-mail: jdjakem@sandia.gov; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchicalmore » surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.« less
Jaccard distance based weighted sparse representation for coarse-to-fine plant species recognition.
Zhang, Shanwen; Wu, Xiaowei; You, Zhuhong
2017-01-01
Leaf based plant species recognition plays an important role in ecological protection, however its application to large and modern leaf databases has been a long-standing obstacle due to the computational cost and feasibility. Recognizing such limitations, we propose a Jaccard distance based sparse representation (JDSR) method which adopts a two-stage, coarse to fine strategy for plant species recognition. In the first stage, we use the Jaccard distance between the test sample and each training sample to coarsely determine the candidate classes of the test sample. The second stage includes a Jaccard distance based weighted sparse representation based classification(WSRC), which aims to approximately represent the test sample in the training space, and classify it by the approximation residuals. Since the training model of our JDSR method involves much fewer but more informative representatives, this method is expected to overcome the limitation of high computational and memory costs in traditional sparse representation based classification. Comparative experimental results on a public leaf image database demonstrate that the proposed method outperforms other existing feature extraction and SRC based plant recognition methods in terms of both accuracy and computational speed.
Accelerated High-Dimensional MR Imaging with Sparse Sampling Using Low-Rank Tensors
He, Jingfei; Liu, Qiegen; Christodoulou, Anthony G.; Ma, Chao; Lam, Fan
2017-01-01
High-dimensional MR imaging often requires long data acquisition time, thereby limiting its practical applications. This paper presents a low-rank tensor based method for accelerated high-dimensional MR imaging using sparse sampling. This method represents high-dimensional images as low-rank tensors (or partially separable functions) and uses this mathematical structure for sparse sampling of the data space and for image reconstruction from highly undersampled data. More specifically, the proposed method acquires two datasets with complementary sampling patterns, one for subspace estimation and the other for image reconstruction; image reconstruction from highly undersampled data is accomplished by fitting the measured data with a sparsity constraint on the core tensor and a group sparsity constraint on the spatial coefficients jointly using the alternating direction method of multipliers. The usefulness of the proposed method is demonstrated in MRI applications; it may also have applications beyond MRI. PMID:27093543
Spatio-temporal features for tracking and quadruped/biped discrimination
NASA Astrophysics Data System (ADS)
Rickman, Rick; Copsey, Keith; Bamber, David C.; Page, Scott F.
2012-05-01
Techniques such as SIFT and SURF facilitate efficient and robust image processing operations through the use of sparse and compact spatial feature descriptors and show much potential for defence and security applications. This paper considers the extension of such techniques to include information from the temporal domain, to improve utility in applications involving moving imagery within video data. In particular, the paper demonstrates how spatio-temporal descriptors can be used very effectively as the basis of a target tracking system and as target discriminators which can distinguish between bipeds and quadrupeds. Results using sequences of video imagery of walking humans and dogs are presented, and the relative merits of the approach are discussed.
Hardware Acceleration of Sparse Cognitive Algorithms
2016-05-01
Processor in Memory (PiM) extensions and a 65 nm ASIC version. They were compared against a 28 nm GPU baseline using the KTH video action recognition...30 Table 17. Memory Requirement of Proposed ASIC...for improvement of performance per unit of power for customized implementations of the Sparsey and Numenta Hierarchical Temporal Memory (HTM
Competing for Consciousness: Prolonged Mask Exposure Reduces Object Substitution Masking
ERIC Educational Resources Information Center
Goodhew, Stephanie C.; Visser, Troy A. W.; Lipp, Ottmar V.; Dux, Paul E.
2011-01-01
In object substitution masking (OSM) a sparse, temporally trailing 4-dot mask impairs target identification, even though it has different contours from, and does not spatially overlap with the target. Here, we demonstrate a previously unknown characteristic of OSM: Observers show reduced masking at prolonged (e.g., 640 ms) relative to intermediate…
Measuring Meaning: Searching for and Making Sense of Spousal Loss in Late-Life
ERIC Educational Resources Information Center
Coleman, Rachel A.; Neimeyer, Robert A.
2010-01-01
Despite much recent theorizing, evidence regarding the temporal relationship of sense-making to adjustment following bereavement remains relatively sparse. This study examined the role of searching for and making sense of loss in late-life spousal bereavement, using prospective, longitudinal data from the Changing Lives of Older Couples (CLOC)…
Self-expressive Dictionary Learning for Dynamic 3D Reconstruction.
Zheng, Enliang; Ji, Dinghuang; Dunn, Enrique; Frahm, Jan-Michael
2017-08-22
We target the problem of sparse 3D reconstruction of dynamic objects observed by multiple unsynchronized video cameras with unknown temporal overlap. To this end, we develop a framework to recover the unknown structure without sequencing information across video sequences. Our proposed compressed sensing framework poses the estimation of 3D structure as the problem of dictionary learning, where the dictionary is defined as an aggregation of the temporally varying 3D structures. Given the smooth motion of dynamic objects, we observe any element in the dictionary can be well approximated by a sparse linear combination of other elements in the same dictionary (i.e. self-expression). Our formulation optimizes a biconvex cost function that leverages a compressed sensing formulation and enforces both structural dependency coherence across video streams, as well as motion smoothness across estimates from common video sources. We further analyze the reconstructability of our approach under different capture scenarios, and its comparison and relation to existing methods. Experimental results on large amounts of synthetic data as well as real imagery demonstrate the effectiveness of our approach.
Jakeman, J. D.; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this papermore » we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.« less
Towards sparse characterisation of on-body ultra-wideband wireless channels.
Yang, Xiaodong; Ren, Aifeng; Zhang, Zhiya; Ur Rehman, Masood; Abbasi, Qammer Hussain; Alomainy, Akram
2015-06-01
With the aim of reducing cost and power consumption of the receiving terminal, compressive sensing (CS) framework is applied to on-body ultra-wideband (UWB) channel estimation. It is demonstrated in this Letter that the sparse on-body UWB channel impulse response recovered by the CS framework fits the original sparse channel well; thus, on-body channel estimation can be achieved using low-speed sampling devices.
Towards sparse characterisation of on-body ultra-wideband wireless channels
Ren, Aifeng; Zhang, Zhiya; Ur Rehman, Masood; Abbasi, Qammer Hussain; Alomainy, Akram
2015-01-01
With the aim of reducing cost and power consumption of the receiving terminal, compressive sensing (CS) framework is applied to on-body ultra-wideband (UWB) channel estimation. It is demonstrated in this Letter that the sparse on-body UWB channel impulse response recovered by the CS framework fits the original sparse channel well; thus, on-body channel estimation can be achieved using low-speed sampling devices. PMID:26609409
Lugauer, Felix; Wetzl, Jens; Forman, Christoph; Schneider, Manuel; Kiefer, Berthold; Hornegger, Joachim; Nickel, Dominik; Maier, Andreas
2018-06-01
Our aim was to develop and validate a 3D Cartesian Look-Locker [Formula: see text] mapping technique that achieves high accuracy and whole-liver coverage within a single breath-hold. The proposed method combines sparse Cartesian sampling based on a spatiotemporally incoherent Poisson pattern and k-space segmentation, dedicated for high-temporal-resolution imaging. This combination allows capturing tissue with short relaxation times with volumetric coverage. A joint reconstruction of the 3D + inversion time (TI) data via compressed sensing exploits the spatiotemporal sparsity and ensures consistent quality for the subsequent multistep [Formula: see text] mapping. Data from the National Institute of Standards and Technology (NIST) phantom and 11 volunteers, along with reference 2D Look-Locker acquisitions, are used for validation. 2D and 3D methods are compared based on [Formula: see text] values in different abdominal tissues at 1.5 and 3 T. [Formula: see text] maps obtained from the proposed 3D method compare favorably with those from the 2D reference and additionally allow for reformatting or volumetric analysis. Excellent agreement is shown in phantom [bias[Formula: see text] < 2%, bias[Formula: see text] < 5% for (120; 2000) ms] and volunteer data (3D and 2D deviation < 4% for liver, muscle, and spleen) for clinically acceptable scan (20 s) and reconstruction times (< 4 min). Whole-liver [Formula: see text] mapping with high accuracy and precision is feasible in one breath-hold using spatiotemporally incoherent, sparse 3D Cartesian sampling.
SU-G-IeP1-13: Sub-Nyquist Dynamic MRI Via Prior Rank, Intensity and Sparsity Model (PRISM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, B; Gao, H
Purpose: Accelerated dynamic MRI is important for MRI guided radiotherapy. Inspired by compressive sensing (CS), sub-Nyquist dynamic MRI has been an active research area, i.e., sparse sampling in k-t space for accelerated dynamic MRI. This work is to investigate sub-Nyquist dynamic MRI via a previously developed CS model, namely Prior Rank, Intensity and Sparsity Model (PRISM). Methods: The proposed method utilizes PRISM with rank minimization and incoherent sampling patterns for sub-Nyquist reconstruction. In PRISM, the low-rank background image, which is automatically calculated by rank minimization, is excluded from the L1 minimization step of the CS reconstruction to further sparsify themore » residual image, thus allowing for higher acceleration rates. Furthermore, the sampling pattern in k-t space is made more incoherent by sampling a different set of k-space points at different temporal frames. Results: Reconstruction results from L1-sparsity method and PRISM method with 30% undersampled data and 15% undersampled data are compared to demonstrate the power of PRISM for dynamic MRI. Conclusion: A sub- Nyquist MRI reconstruction method based on PRISM is developed with improved image quality from the L1-sparsity method.« less
Accelerating free breathing myocardial perfusion MRI using multi coil radial k - t SLR
NASA Astrophysics Data System (ADS)
Goud Lingala, Sajan; DiBella, Edward; Adluru, Ganesh; McGann, Christopher; Jacob, Mathews
2013-10-01
The clinical utility of myocardial perfusion MR imaging (MPI) is often restricted by the inability of current acquisition schemes to simultaneously achieve high spatio-temporal resolution, good volume coverage, and high signal to noise ratio. Moreover, many subjects often find it difficult to hold their breath for sufficiently long durations making it difficult to obtain reliable MPI data. Accelerated acquisition of free breathing MPI data can overcome some of these challenges. Recently, an algorithm termed as k - t SLR has been proposed to accelerate dynamic MRI by exploiting sparsity and low rank properties of dynamic MRI data. The main focus of this paper is to further improve k - t SLR and demonstrate its utility in considerably accelerating free breathing MPI. We extend its previous implementation to account for multi-coil radial MPI acquisitions. We perform k - t sampling experiments to compare different radial trajectories and determine the best sampling pattern. We also introduce a novel augmented Lagrangian framework to considerably improve the algorithm’s convergence rate. The proposed algorithm is validated using free breathing rest and stress radial perfusion data sets from two normal subjects and one patient with ischemia. k - t SLR was observed to provide faithful reconstructions at high acceleration levels with minimal artifacts compared to existing MPI acceleration schemes such as spatio-temporal constrained reconstruction and k - t SPARSE/SENSE.
Compressed sensing reconstruction of cardiac cine MRI using golden angle spiral trajectories
NASA Astrophysics Data System (ADS)
Tolouee, Azar; Alirezaie, Javad; Babyn, Paul
2015-11-01
In dynamic cardiac cine Magnetic Resonance Imaging (MRI), the spatiotemporal resolution is limited by the low imaging speed. Compressed sensing (CS) theory has been applied to improve the imaging speed and thus the spatiotemporal resolution. The purpose of this paper is to improve CS reconstruction of under sampled data by exploiting spatiotemporal sparsity and efficient spiral trajectories. We extend k-t sparse algorithm to spiral trajectories to achieve high spatio temporal resolutions in cardiac cine imaging. We have exploited spatiotemporal sparsity of cardiac cine MRI by applying a 2D + time wavelet-Fourier transform. For efficient coverage of k-space, we have used a modified version of multi shot (interleaved) spirals trajectories. In order to reduce incoherent aliasing artifact, we use different random undersampling pattern for each temporal frame. Finally, we have used nonuniform fast Fourier transform (NUFFT) algorithm to reconstruct the image from the non-uniformly acquired samples. The proposed approach was tested in simulated and cardiac cine MRI data. Results show that higher acceleration factors with improved image quality can be obtained with the proposed approach in comparison to the existing state-of-the-art method. The flexibility of the introduced method should allow it to be used not only for the challenging case of cardiac imaging, but also for other patient motion where the patient moves or breathes during acquisition.
Accelerating free breathing myocardial perfusion MRI using multi coil radial k-t SLR
Lingala, Sajan Goud; DiBella, Edward; Adluru, Ganesh; McGann, Christopher; Jacob, Mathews
2013-01-01
The clinical utility of myocardial perfusion MR imaging (MPI) is often restricted by the inability of current acquisition schemes to simultaneously achieve high spatio-temporal resolution, good volume coverage, and high signal to noise ratio. Moreover, many subjects often find it difficult to hold their breath for sufficiently long durations making it difficult to obtain reliable MPI data. Accelerated acquisition of free breathing MPI data can overcome some of these challenges. Recently, an algorithm termed as k − t SLR has been proposed to accelerate dynamic MRI by exploiting sparsity and low rank properties of dynamic MRI data. The main focus of this paper is to further improve k − t SLR and demonstrate its utility in considerably accelerating free breathing MPI. We extend its previous implementation to account for multi-coil radial MPI acquisitions. We perform k − t sampling experiments to compare different radial trajectories and determine the best sampling pattern. We also introduce a novel augmented Lagrangian framework to considerably improve the algorithm's convergence rate. The proposed algorithm is validated using free breathing rest and stress radial perfusion data sets from two normal subjects and one patient with ischemia. k − t SLR was observed to provide faithful reconstructions at high acceleration levels with minimal artifacts compared to existing MPI acceleration schemes such as spatio-temporal constrained reconstruction (STCR) and k − t SPARSE/SENSE. PMID:24077063
Mei, Kai; Kopp, Felix K; Bippus, Rolf; Köhler, Thomas; Schwaiger, Benedikt J; Gersing, Alexandra S; Fehringer, Andreas; Sauter, Andreas; Münzel, Daniela; Pfeiffer, Franz; Rummeny, Ernst J; Kirschke, Jan S; Noël, Peter B; Baum, Thomas
2017-12-01
Osteoporosis diagnosis using multidetector CT (MDCT) is limited to relatively high radiation exposure. We investigated the effect of simulated ultra-low-dose protocols on in-vivo bone mineral density (BMD) and quantitative trabecular bone assessment. Institutional review board approval was obtained. Twelve subjects with osteoporotic vertebral fractures and 12 age- and gender-matched controls undergoing routine thoracic and abdominal MDCT were included (average effective dose: 10 mSv). Ultra-low radiation examinations were achieved by simulating lower tube currents and sparse samplings at 50%, 25% and 10% of the original dose. BMD and trabecular bone parameters were extracted in T10-L5. Except for BMD measurements in sparse sampling data, absolute values of all parameters derived from ultra-low-dose data were significantly different from those derived from original dose images (p<0.05). BMD, apparent bone fraction and trabecular thickness were still consistently lower in subjects with than in those without fractures (p<0.05). In ultra-low-dose scans, BMD and microstructure parameters were able to differentiate subjects with and without vertebral fractures, suggesting osteoporosis diagnosis is feasible. However, absolute values differed from original values. BMD from sparse sampling appeared to be more robust. This dose-dependency of parameters should be considered for future clinical use. • BMD and quantitative bone parameters are assessable in ultra-low-dose in vivo MDCT scans. • Bone mineral density does not change significantly when sparse sampling is applied. • Quantitative trabecular bone microstructure measurements are sensitive to dose reduction. • Osteoporosis subjects could be differentiated even at 10% of original dose. • Radiation exposure should be considered when comparing quantitative bone parameters.
Robust representation and recognition of facial emotions using extreme sparse learning.
Shojaeilangari, Seyedehsamaneh; Yau, Wei-Yun; Nandakumar, Karthik; Li, Jun; Teoh, Eam Khwang
2015-07-01
Recognition of natural emotions from human faces is an interesting topic with a wide range of potential applications, such as human-computer interaction, automated tutoring systems, image and video retrieval, smart environments, and driver warning systems. Traditionally, facial emotion recognition systems have been evaluated on laboratory controlled data, which is not representative of the environment faced in real-world applications. To robustly recognize the facial emotions in real-world natural situations, this paper proposes an approach called extreme sparse learning, which has the ability to jointly learn a dictionary (set of basis) and a nonlinear classification model. The proposed approach combines the discriminative power of extreme learning machine with the reconstruction property of sparse representation to enable accurate classification when presented with noisy signals and imperfect data recorded in natural settings. In addition, this paper presents a new local spatio-temporal descriptor that is distinctive and pose-invariant. The proposed framework is able to achieve the state-of-the-art recognition accuracy on both acted and spontaneous facial emotion databases.
NASA Astrophysics Data System (ADS)
Tamamitsu, Miu; Zhang, Yibo; Wang, Hongda; Wu, Yichen; Ozcan, Aydogan
2018-02-01
The Sparsity of the Gradient (SoG) is a robust autofocusing criterion for holography, where the gradient modulus of the complex refocused hologram is calculated, on which a sparsity metric is applied. Here, we compare two different choices of sparsity metrics used in SoG, specifically, the Gini index (GI) and the Tamura coefficient (TC), for holographic autofocusing on dense/connected or sparse samples. We provide a theoretical analysis predicting that for uniformly distributed image data, TC and GI exhibit similar behavior, while for naturally sparse images containing few high-valued signal entries and many low-valued noisy background pixels, TC is more sensitive to distribution changes in the signal and more resistive to background noise. These predictions are also confirmed by experimental results using SoG-based holographic autofocusing on dense and connected samples (such as stained breast tissue sections) as well as highly sparse samples (such as isolated Giardia lamblia cysts). Through these experiments, we found that ToG and GoG offer almost identical autofocusing performance on dense and connected samples, whereas for naturally sparse samples, GoG should be calculated on a relatively small region of interest (ROI) closely surrounding the object, while ToG offers more flexibility in choosing a larger ROI containing more background pixels.
Bouridane, Ahmed; Ling, Bingo Wing-Kuen
2018-01-01
This paper presents an unsupervised learning algorithm for sparse nonnegative matrix factor time–frequency deconvolution with optimized fractional β-divergence. The β-divergence is a group of cost functions parametrized by a single parameter β. The Itakura–Saito divergence, Kullback–Leibler divergence and Least Square distance are special cases that correspond to β=0, 1, 2, respectively. This paper presents a generalized algorithm that uses a flexible range of β that includes fractional values. It describes a maximization–minimization (MM) algorithm leading to the development of a fast convergence multiplicative update algorithm with guaranteed convergence. The proposed model operates in the time–frequency domain and decomposes an information-bearing matrix into two-dimensional deconvolution of factor matrices that represent the spectral dictionary and temporal codes. The deconvolution process has been optimized to yield sparse temporal codes through maximizing the likelihood of the observations. The paper also presents a method to estimate the fractional β value. The method is demonstrated on separating audio mixtures recorded from a single channel. The paper shows that the extraction of the spectral dictionary and temporal codes is significantly more efficient by using the proposed algorithm and subsequently leads to better source separation performance. Experimental tests and comparisons with other factorization methods have been conducted to verify its efficacy. PMID:29702629
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakeman, John D.; Narayan, Akil; Zhou, Tao
We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solves a preconditionedmore » $$\\ell^1$$-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. In conclusion, numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.« less
Jakeman, John D.; Narayan, Akil; Zhou, Tao
2017-06-22
We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solves a preconditionedmore » $$\\ell^1$$-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. In conclusion, numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.« less
The brain dynamics of rapid perceptual adaptation to adverse listening conditions.
Erb, Julia; Henry, Molly J; Eisner, Frank; Obleser, Jonas
2013-06-26
Listeners show a remarkable ability to quickly adjust to degraded speech input. Here, we aimed to identify the neural mechanisms of such short-term perceptual adaptation. In a sparse-sampling, cardiac-gated functional magnetic resonance imaging (fMRI) acquisition, human listeners heard and repeated back 4-band-vocoded sentences (in which the temporal envelope of the acoustic signal is preserved, while spectral information is highly degraded). Clear-speech trials were included as baseline. An additional fMRI experiment on amplitude modulation rate discrimination quantified the convergence of neural mechanisms that subserve coping with challenging listening conditions for speech and non-speech. First, the degraded speech task revealed an "executive" network (comprising the anterior insula and anterior cingulate cortex), parts of which were also activated in the non-speech discrimination task. Second, trial-by-trial fluctuations in successful comprehension of degraded speech drove hemodynamic signal change in classic "language" areas (bilateral temporal cortices). Third, as listeners perceptually adapted to degraded speech, downregulation in a cortico-striato-thalamo-cortical circuit was observable. The present data highlight differential upregulation and downregulation in auditory-language and executive networks, respectively, with important subcortical contributions when successfully adapting to a challenging listening situation.
A Space-Time-Frequency Dictionary for Sparse Cortical Source Localization.
Korats, Gundars; Le Cam, Steven; Ranta, Radu; Louis-Dorr, Valerie
2016-09-01
Cortical source imaging aims at identifying activated cortical areas on the surface of the cortex from the raw electroencephalogram (EEG) data. This problem is ill posed, the number of channels being very low compared to the number of possible source positions. In some realistic physiological situations, the active areas are sparse in space and of short time durations, and the amount of spatio-temporal data to carry the inversion is then limited. In this study, we propose an original data driven space-time-frequency (STF) dictionary which takes into account simultaneously both spatial and time-frequency sparseness while preserving smoothness in the time frequency (i.e., nonstationary smooth time courses in sparse locations). Based on these assumptions, we take benefit of the matching pursuit (MP) framework for selecting the most relevant atoms in this highly redundant dictionary. We apply two recent MP algorithms, single best replacement (SBR) and source deflated matching pursuit, and we compare the results using a spatial dictionary and the proposed STF dictionary to demonstrate the improvements of our multidimensional approach. We also provide comparison using well-established inversion methods, FOCUSS and RAP-MUSIC, analyzing performances under different degrees of nonstationarity and signal to noise ratio. Our STF dictionary combined with the SBR approach provides robust performances on realistic simulations. From a computational point of view, the algorithm is embedded in the wavelet domain, ensuring high efficiency in term of computation time. The proposed approach ensures fast and accurate sparse cortical localizations on highly nonstationary and noisy data.
NASA Technical Reports Server (NTRS)
Keeler, James D.
1988-01-01
The information capacity of Kanerva's Sparse Distributed Memory (SDM) and Hopfield-type neural networks is investigated. Under the approximations used here, it is shown that the total information stored in these systems is proportional to the number connections in the network. The proportionality constant is the same for the SDM and Hopfield-type models independent of the particular model, or the order of the model. The approximations are checked numerically. This same analysis can be used to show that the SDM can store sequences of spatiotemporal patterns, and the addition of time-delayed connections allows the retrieval of context dependent temporal patterns. A minor modification of the SDM can be used to store correlated patterns.
Estimation of signal-dependent noise level function in transform domain via a sparse recovery model.
Yang, Jingyu; Gan, Ziqiao; Wu, Zhaoyang; Hou, Chunping
2015-05-01
This paper proposes a novel algorithm to estimate the noise level function (NLF) of signal-dependent noise (SDN) from a single image based on the sparse representation of NLFs. Noise level samples are estimated from the high-frequency discrete cosine transform (DCT) coefficients of nonlocal-grouped low-variation image patches. Then, an NLF recovery model based on the sparse representation of NLFs under a trained basis is constructed to recover NLF from the incomplete noise level samples. Confidence levels of the NLF samples are incorporated into the proposed model to promote reliable samples and weaken unreliable ones. We investigate the behavior of the estimation performance with respect to the block size, sampling rate, and confidence weighting. Simulation results on synthetic noisy images show that our method outperforms existing state-of-the-art schemes. The proposed method is evaluated on real noisy images captured by three types of commodity imaging devices, and shows consistently excellent SDN estimation performance. The estimated NLFs are incorporated into two well-known denoising schemes, nonlocal means and BM3D, and show significant improvements in denoising SDN-polluted images.
NASA Astrophysics Data System (ADS)
Lai, Chunren; Guo, Shengwen; Cheng, Lina; Wang, Wensheng; Wu, Kai
2017-02-01
It's very important to differentiate the temporal lobe epilepsy (TLE) patients from healthy people and localize the abnormal brain regions of the TLE patients. The cortical features and changes can reveal the unique anatomical patterns of brain regions from the structural MR images. In this study, structural MR images from 28 normal controls (NC), 18 left TLE (LTLE), and 21 right TLE (RTLE) were acquired, and four types of cortical feature, namely cortical thickness (CTh), cortical surface area (CSA), gray matter volume (GMV), and mean curvature (MCu), were explored for discriminative analysis. Three feature selection methods, the independent sample t-test filtering, the sparse-constrained dimensionality reduction model (SCDRM), and the support vector machine-recursive feature elimination (SVM-RFE), were investigated to extract dominant regions with significant differences among the compared groups for classification using the SVM classifier. The results showed that the SVM-REF achieved the highest performance (most classifications with more than 92% accuracy), followed by the SCDRM, and the t-test. Especially, the surface area and gray volume matter exhibited prominent discriminative ability, and the performance of the SVM was improved significantly when the four cortical features were combined. Additionally, the dominant regions with higher classification weights were mainly located in temporal and frontal lobe, including the inferior temporal, entorhinal cortex, fusiform, parahippocampal cortex, middle frontal and frontal pole. It was demonstrated that the cortical features provided effective information to determine the abnormal anatomical pattern and the proposed method has the potential to improve the clinical diagnosis of the TLE.
Ran, Bin; Song, Li; Cheng, Yang; Tan, Huachun
2016-01-01
Traffic state estimation from the floating car system is a challenging problem. The low penetration rate and random distribution make available floating car samples usually cover part space and time points of the road networks. To obtain a wide range of traffic state from the floating car system, many methods have been proposed to estimate the traffic state for the uncovered links. However, these methods cannot provide traffic state of the entire road networks. In this paper, the traffic state estimation is transformed to solve a missing data imputation problem, and the tensor completion framework is proposed to estimate missing traffic state. A tensor is constructed to model traffic state in which observed entries are directly derived from floating car system and unobserved traffic states are modeled as missing entries of constructed tensor. The constructed traffic state tensor can represent spatial and temporal correlations of traffic data and encode the multi-way properties of traffic state. The advantage of the proposed approach is that it can fully mine and utilize the multi-dimensional inherent correlations of traffic state. We tested the proposed approach on a well calibrated simulation network. Experimental results demonstrated that the proposed approach yield reliable traffic state estimation from very sparse floating car data, particularly when dealing with the floating car penetration rate is below 1%. PMID:27448326
NASA Astrophysics Data System (ADS)
Chevallier, Frédéric; Broquet, Grégoire; Pierangelo, Clémence; Crisp, David
2017-07-01
The column-average dry air-mole fraction of carbon dioxide in the atmosphere (XCO2) is measured by scattered satellite measurements like those from the Orbiting Carbon Observatory (OCO-2). We show that global continuous maps of XCO2 (corresponding to level 3 of the satellite data) at daily or coarser temporal resolution can be inferred from these data with a Kalman filter built on a model of persistence. Our application of this approach on 2 years of OCO-2 retrievals indicates that the filter provides better information than a climatology of XCO2 at both daily and monthly scales. Provided that the assigned observation uncertainty statistics are tuned in each grid cell of the XCO2 maps from an objective method (based on consistency diagnostics), the errors predicted by the filter at daily and monthly scales represent the true error statistics reasonably well, except for a bias in the high latitudes of the winter hemisphere and a lack of resolution (i.e., a too small discrimination skill) of the predicted error standard deviations. Due to the sparse satellite sampling, the broad-scale patterns of XCO2 described by the filter seem to lag behind the real signals by a few weeks. Finally, the filter offers interesting insights into the quality of the retrievals, both in terms of random and systematic errors.
Ran, Bin; Song, Li; Zhang, Jian; Cheng, Yang; Tan, Huachun
2016-01-01
Traffic state estimation from the floating car system is a challenging problem. The low penetration rate and random distribution make available floating car samples usually cover part space and time points of the road networks. To obtain a wide range of traffic state from the floating car system, many methods have been proposed to estimate the traffic state for the uncovered links. However, these methods cannot provide traffic state of the entire road networks. In this paper, the traffic state estimation is transformed to solve a missing data imputation problem, and the tensor completion framework is proposed to estimate missing traffic state. A tensor is constructed to model traffic state in which observed entries are directly derived from floating car system and unobserved traffic states are modeled as missing entries of constructed tensor. The constructed traffic state tensor can represent spatial and temporal correlations of traffic data and encode the multi-way properties of traffic state. The advantage of the proposed approach is that it can fully mine and utilize the multi-dimensional inherent correlations of traffic state. We tested the proposed approach on a well calibrated simulation network. Experimental results demonstrated that the proposed approach yield reliable traffic state estimation from very sparse floating car data, particularly when dealing with the floating car penetration rate is below 1%.
Statistical power calculations for mixed pharmacokinetic study designs using a population approach.
Kloprogge, Frank; Simpson, Julie A; Day, Nicholas P J; White, Nicholas J; Tarning, Joel
2014-09-01
Simultaneous modelling of dense and sparse pharmacokinetic data is possible with a population approach. To determine the number of individuals required to detect the effect of a covariate, simulation-based power calculation methodologies can be employed. The Monte Carlo Mapped Power method (a simulation-based power calculation methodology using the likelihood ratio test) was extended in the current study to perform sample size calculations for mixed pharmacokinetic studies (i.e. both sparse and dense data collection). A workflow guiding an easy and straightforward pharmacokinetic study design, considering also the cost-effectiveness of alternative study designs, was used in this analysis. Initially, data were simulated for a hypothetical drug and then for the anti-malarial drug, dihydroartemisinin. Two datasets (sampling design A: dense; sampling design B: sparse) were simulated using a pharmacokinetic model that included a binary covariate effect and subsequently re-estimated using (1) the same model and (2) a model not including the covariate effect in NONMEM 7.2. Power calculations were performed for varying numbers of patients with sampling designs A and B. Study designs with statistical power >80% were selected and further evaluated for cost-effectiveness. The simulation studies of the hypothetical drug and the anti-malarial drug dihydroartemisinin demonstrated that the simulation-based power calculation methodology, based on the Monte Carlo Mapped Power method, can be utilised to evaluate and determine the sample size of mixed (part sparsely and part densely sampled) study designs. The developed method can contribute to the design of robust and efficient pharmacokinetic studies.
Exhaustive Search for Sparse Variable Selection in Linear Regression
NASA Astrophysics Data System (ADS)
Igarashi, Yasuhiko; Takenaka, Hikaru; Nakanishi-Ohno, Yoshinori; Uemura, Makoto; Ikeda, Shiro; Okada, Masato
2018-04-01
We propose a K-sparse exhaustive search (ES-K) method and a K-sparse approximate exhaustive search method (AES-K) for selecting variables in linear regression. With these methods, K-sparse combinations of variables are tested exhaustively assuming that the optimal combination of explanatory variables is K-sparse. By collecting the results of exhaustively computing ES-K, various approximate methods for selecting sparse variables can be summarized as density of states. With this density of states, we can compare different methods for selecting sparse variables such as relaxation and sampling. For large problems where the combinatorial explosion of explanatory variables is crucial, the AES-K method enables density of states to be effectively reconstructed by using the replica-exchange Monte Carlo method and the multiple histogram method. Applying the ES-K and AES-K methods to type Ia supernova data, we confirmed the conventional understanding in astronomy when an appropriate K is given beforehand. However, we found the difficulty to determine K from the data. Using virtual measurement and analysis, we argue that this is caused by data shortage.
Spatial, Temporal and Spectral Satellite Image Fusion via Sparse Representation
NASA Astrophysics Data System (ADS)
Song, Huihui
Remote sensing provides good measurements for monitoring and further analyzing the climate change, dynamics of ecosystem, and human activities in global or regional scales. Over the past two decades, the number of launched satellite sensors has been increasing with the development of aerospace technologies and the growing requirements on remote sensing data in a vast amount of application fields. However, a key technological challenge confronting these sensors is that they tradeoff between spatial resolution and other properties, including temporal resolution, spectral resolution, swath width, etc., due to the limitations of hardware technology and budget constraints. To increase the spatial resolution of data with other good properties, one possible cost-effective solution is to explore data integration methods that can fuse multi-resolution data from multiple sensors, thereby enhancing the application capabilities of available remote sensing data. In this thesis, we propose to fuse the spatial resolution with temporal resolution and spectral resolution, respectively, based on sparse representation theory. Taking the study case of Landsat ETM+ (with spatial resolution of 30m and temporal resolution of 16 days) and MODIS (with spatial resolution of 250m ~ 1km and daily temporal resolution) reflectance, we propose two spatial-temporal fusion methods to combine the fine spatial information of Landsat image and the daily temporal resolution of MODIS image. Motivated by that the images from these two sensors are comparable on corresponding bands, we propose to link their spatial information on available Landsat- MODIS image pair (captured on prior date) and then predict the Landsat image from the MODIS counterpart on prediction date. To well-learn the spatial details from the prior images, we use a redundant dictionary to extract the basic representation atoms for both Landsat and MODIS images based on sparse representation. Under the scenario of two prior Landsat-MODIS image pairs, we build the corresponding relationship between the difference images of MODIS and ETM+ by training a low- and high-resolution dictionary pair from the given prior image pairs. In the second scenario, i.e., only one Landsat- MODIS image pair being available, we directly correlate MODIS and ETM+ data through an image degradation model. Then, the fusion stage is achieved by super-resolving the MODIS image combining the high-pass modulation in a two-layer fusion framework. Remarkably, the proposed spatial-temporal fusion methods form a unified framework for blending remote sensing images with phenology change or land-cover-type change. Based on the proposed spatial-temporal fusion models, we propose to monitor the land use/land cover changes in Shenzhen, China. As a fast-growing city, Shenzhen faces the problem of detecting the rapid changes for both rational city planning and sustainable development. However, the cloudy and rainy weather in region Shenzhen located makes the capturing circle of high-quality satellite images longer than their normal revisit periods. Spatial-temporal fusion methods are capable to tackle this problem by improving the spatial resolution of images with coarse spatial resolution but frequent temporal coverage, thereby making the detection of rapid changes possible. On two Landsat-MODIS datasets with annual and monthly changes, respectively, we apply the proposed spatial-temporal fusion methods to the task of multiple change detection. Afterward, we propose a novel spatial and spectral fusion method for satellite multispectral and hyperspectral (or high-spectral) images based on dictionary-pair learning and sparse non-negative matrix factorization. By combining the spectral information from hyperspectral image, which is characterized by low spatial resolution but high spectral resolution and abbreviated as LSHS, and the spatial information from multispectral image, which is featured by high spatial resolution but low spectral resolution and abbreviated as HSLS, this method aims to generate the fused data with both high spatial and high spectral resolutions. Motivated by the observation that each hyperspectral pixel can be represented by a linear combination of a few endmembers, this method first extracts the spectral bases of LSHS and HSLS images by making full use of the rich spectral information in LSHS data. The spectral bases of these two categories data then formulate a dictionary-pair due to their correspondence in representing each pixel spectra of LSHS data and HSLS data, respectively. Subsequently, the LSHS image is spatially unmixed by representing the HSLS image with respect to the corresponding learned dictionary to derive its representation coefficients. Combining the spectral bases of LSHS data and the representation coefficients of HSLS data, we finally derive the fused data characterized by the spectral resolution of LSHS data and the spatial resolution of HSLS data.
Knogler, Laura D; Markov, Daniil A; Dragomir, Elena I; Štih, Vilim; Portugues, Ruben
2017-05-08
A fundamental question in neurobiology is how animals integrate external sensory information from their environment with self-generated motor and sensory signals in order to guide motor behavior and adaptation. The cerebellum is a vertebrate hindbrain region where all of these signals converge and that has been implicated in the acquisition, coordination, and calibration of motor activity. Theories of cerebellar function postulate that granule cells encode a variety of sensorimotor signals in the cerebellar input layer. These models suggest that representations should be high-dimensional, sparse, and temporally patterned. However, in vivo physiological recordings addressing these points have been limited and in particular have been unable to measure the spatiotemporal dynamics of population-wide activity. In this study, we use both calcium imaging and electrophysiology in the awake larval zebrafish to investigate how cerebellar granule cells encode three types of sensory stimuli as well as stimulus-evoked motor behaviors. We find that a large fraction of all granule cells are active in response to these stimuli, such that representations are not sparse at the population level. We find instead that most responses belong to only one of a small number of distinct activity profiles, which are temporally homogeneous and anatomically clustered. We furthermore identify granule cells that are active during swimming behaviors and others that are multimodal for sensory and motor variables. When we pharmacologically change the threshold of a stimulus-evoked behavior, we observe correlated changes in these representations. Finally, electrophysiological data show no evidence for temporal patterning in the coding of different stimulus durations. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.
NASA Astrophysics Data System (ADS)
Cheng, M.; Jin, J.
2017-12-01
Vegetation phenology is one of the most sensitive bio-indicators of climate change, and it has received increasing interests in the context of global warming. As one of the most sensitive areas to global change, the Tibetan Plateau is a unique region to study the trends in vegetation phenology in response to climate change because of its unique vegetation composition, climate features and low-level human disturbance. Although some studies have aroused wide controversies about the actual plant phenology patterns in the Tibetan Plateau, yet the reasons remain unclear. In particular, the phenology characteristics of sparse herbaceous or sparse shrub and evergreen forest that are mostly located in the northwest and southeast of the Tibetan Plateau remain less studied. In this study, the spatio-temporal patterns of the start (SOS), end (EOS) and length (LOS) of the vegetation growing season for six vegetation types in the Tibetan Plateau, including evergreen broadleaf forests, evergreen coniferous forests, evergreen shrub, meadow, steppe and sparse herbaceous or sparse shrub, were quantified from 1982 to 2014 using NOAA/AVHRR NDVI data set at a spatial resolution of 0.05°×0.05° and 7-day intervals using NDVI relative change rate threshold and sixth order polynomial fit models. Assisted with the monthly precipitation and temperature data, the relative effects of changing climates on the variability of phenology were also examined. Diverse phenological changes were observed for different land cover types, with an advancing start of growing season (SOS), delaying end of growing season (EOS) and increasing length of growing season (LOS) in the eastern Tibetan Plateau where meadow was the dominant vegetation type, but with the opposite changes in the steppe and sparse herbaceous or sparse shrub regions which are mostly located in the northwestern and western edges of the Tibetan Plateau. Correlation analysis indicated that sufficient preseason precipitation may delay the SOS of evergreen forests in the southeastern Plateau and advance the SOS of steppe and sparse herbaceous or sparse shrub in relatively arid areas, while the advance of SOS in meadow areas could be related to higher preseason temperature.
NASA Astrophysics Data System (ADS)
Yu, Haiqing; Chen, Shuhang; Chen, Yunmei; Liu, Huafeng
2017-05-01
Dynamic positron emission tomography (PET) is capable of providing both spatial and temporal information of radio tracers in vivo. In this paper, we present a novel joint estimation framework to reconstruct temporal sequences of dynamic PET images and the coefficients characterizing the system impulse response function, from which the associated parametric images of the system macro parameters for tracer kinetics can be estimated. The proposed algorithm, which combines statistical data measurement and tracer kinetic models, integrates a dictionary sparse coding (DSC) into a total variational minimization based algorithm for simultaneous reconstruction of the activity distribution and parametric map from measured emission sinograms. DSC, based on the compartmental theory, provides biologically meaningful regularization, and total variation regularization is incorporated to provide edge-preserving guidance. We rely on techniques from minimization algorithms (the alternating direction method of multipliers) to first generate the estimated activity distributions with sub-optimal kinetic parameter estimates, and then recover the parametric maps given these activity estimates. These coupled iterative steps are repeated as necessary until convergence. Experiments with synthetic, Monte Carlo generated data, and real patient data have been conducted, and the results are very promising.
Temporal motifs reveal homophily, gender-specific patterns, and group talk in call sequences.
Kovanen, Lauri; Kaski, Kimmo; Kertész, János; Saramäki, Jari
2013-11-05
Recent studies on electronic communication records have shown that human communication has complex temporal structure. We study how communication patterns that involve multiple individuals are affected by attributes such as sex and age. To this end, we represent the communication records as a colored temporal network where node color is used to represent individuals' attributes, and identify patterns known as temporal motifs. We then construct a null model for the occurrence of temporal motifs that takes into account the interaction frequencies and connectivity between nodes of different colors. This null model allows us to detect significant patterns in call sequences that cannot be observed in a static network that uses interaction frequencies as link weights. We find sex-related differences in communication patterns in a large dataset of mobile phone records and show the existence of temporal homophily, the tendency of similar individuals to participate in communication patterns beyond what would be expected on the basis of their average interaction frequencies. We also show that temporal patterns differ between dense and sparse neighborhoods in the network. Because also this result is independent of interaction frequencies, it can be seen as an extension of Granovetter's hypothesis to temporal networks.
Temporal motifs reveal homophily, gender-specific patterns, and group talk in call sequences
Kovanen, Lauri; Kaski, Kimmo; Kertész, János; Saramäki, Jari
2013-01-01
Recent studies on electronic communication records have shown that human communication has complex temporal structure. We study how communication patterns that involve multiple individuals are affected by attributes such as sex and age. To this end, we represent the communication records as a colored temporal network where node color is used to represent individuals’ attributes, and identify patterns known as temporal motifs. We then construct a null model for the occurrence of temporal motifs that takes into account the interaction frequencies and connectivity between nodes of different colors. This null model allows us to detect significant patterns in call sequences that cannot be observed in a static network that uses interaction frequencies as link weights. We find sex-related differences in communication patterns in a large dataset of mobile phone records and show the existence of temporal homophily, the tendency of similar individuals to participate in communication patterns beyond what would be expected on the basis of their average interaction frequencies. We also show that temporal patterns differ between dense and sparse neighborhoods in the network. Because also this result is independent of interaction frequencies, it can be seen as an extension of Granovetter’s hypothesis to temporal networks. PMID:24145424
LiDAR point classification based on sparse representation
NASA Astrophysics Data System (ADS)
Li, Nan; Pfeifer, Norbert; Liu, Chun
2017-04-01
In order to combine the initial spatial structure and features of LiDAR data for accurate classification. The LiDAR data is represented as a 4-order tensor. Sparse representation for classification(SRC) method is used for LiDAR tensor classification. It turns out SRC need only a few of training samples from each class, meanwhile can achieve good classification result. Multiple features are extracted from raw LiDAR points to generate a high-dimensional vector at each point. Then the LiDAR tensor is built by the spatial distribution and feature vectors of the point neighborhood. The entries of LiDAR tensor are accessed via four indexes. Each index is called mode: three spatial modes in direction X ,Y ,Z and one feature mode. Sparse representation for classification(SRC) method is proposed in this paper. The sparsity algorithm is to find the best represent the test sample by sparse linear combination of training samples from a dictionary. To explore the sparsity of LiDAR tensor, the tucker decomposition is used. It decomposes a tensor into a core tensor multiplied by a matrix along each mode. Those matrices could be considered as the principal components in each mode. The entries of core tensor show the level of interaction between the different components. Therefore, the LiDAR tensor can be approximately represented by a sparse tensor multiplied by a matrix selected from a dictionary along each mode. The matrices decomposed from training samples are arranged as initial elements in the dictionary. By dictionary learning, a reconstructive and discriminative structure dictionary along each mode is built. The overall structure dictionary composes of class-specified sub-dictionaries. Then the sparse core tensor is calculated by tensor OMP(Orthogonal Matching Pursuit) method based on dictionaries along each mode. It is expected that original tensor should be well recovered by sub-dictionary associated with relevant class, while entries in the sparse tensor associated with other classed should be nearly zero. Therefore, SRC use the reconstruction error associated with each class to do data classification. A section of airborne LiDAR points of Vienna city is used and classified into 6classes: ground, roofs, vegetation, covered ground, walls and other points. Only 6 training samples from each class are taken. For the final classification result, ground and covered ground are merged into one same class(ground). The classification accuracy for ground is 94.60%, roof is 95.47%, vegetation is 85.55%, wall is 76.17%, other object is 20.39%.
High-resolution dynamic 31 P-MRSI using a low-rank tensor model.
Ma, Chao; Clifford, Bryan; Liu, Yuchi; Gu, Yuning; Lam, Fan; Yu, Xin; Liang, Zhi-Pei
2017-08-01
To develop a rapid 31 P-MRSI method with high spatiospectral resolution using low-rank tensor-based data acquisition and image reconstruction. The multidimensional image function of 31 P-MRSI is represented by a low-rank tensor to capture the spatial-spectral-temporal correlations of data. A hybrid data acquisition scheme is used for sparse sampling, which consists of a set of "training" data with limited k-space coverage to capture the subspace structure of the image function, and a set of sparsely sampled "imaging" data for high-resolution image reconstruction. An explicit subspace pursuit approach is used for image reconstruction, which estimates the bases of the subspace from the "training" data and then reconstructs a high-resolution image function from the "imaging" data. We have validated the feasibility of the proposed method using phantom and in vivo studies on a 3T whole-body scanner and a 9.4T preclinical scanner. The proposed method produced high-resolution static 31 P-MRSI images (i.e., 6.9 × 6.9 × 10 mm 3 nominal resolution in a 15-min acquisition at 3T) and high-resolution, high-frame-rate dynamic 31 P-MRSI images (i.e., 1.5 × 1.5 × 1.6 mm 3 nominal resolution, 30 s/frame at 9.4T). Dynamic spatiospectral variations of 31 P-MRSI signals can be efficiently represented by a low-rank tensor. Exploiting this mathematical structure for data acquisition and image reconstruction can lead to fast 31 P-MRSI with high resolution, frame-rate, and SNR. Magn Reson Med 78:419-428, 2017. © 2017 International Society for Magnetic Resonance in Medicine. © 2017 International Society for Magnetic Resonance in Medicine.
Quresh S. Latif; Martha M. Ellis; Victoria A. Saab; Kim Mellen-McLean
2017-01-01
Sparsely distributed species attract conservation concern, but insufficient information on population trends challenges conservation and funding prioritization. Occupancy-based monitoring is attractive for these species, but appropriate sampling design and inference depend on particulars of the study system. We employed spatially explicit simulations to identify...
Robust Methods for Sensing and Reconstructing Sparse Signals
ERIC Educational Resources Information Center
Carrillo, Rafael E.
2012-01-01
Compressed sensing (CS) is an emerging signal acquisition framework that goes against the traditional Nyquist sampling paradigm. CS demonstrates that a sparse, or compressible, signal can be acquired using a low rate acquisition process. Since noise is always present in practical data acquisition systems, sensing and reconstruction methods are…
High-resolution wavefront reconstruction using the frozen flow hypothesis
NASA Astrophysics Data System (ADS)
Liu, Xuewen; Liang, Yonghui; Liu, Jin; Xu, Jieping
2017-10-01
This paper describes an approach to reconstructing wavefronts on finer grid using the frozen flow hypothesis (FFH), which exploits spatial and temporal correlations between consecutive wavefront sensor (WFS) frames. Under the assumption of FFH, slope data from WFS can be connected to a finer, composite slope grid using translation and down sampling, and elements in transformation matrices are determined by wind information. Frames of slopes are then combined and slopes on finer grid are reconstructed by solving a sparse, large-scale, ill-posed least squares problem. By using reconstructed finer slope data and adopting Fried geometry of WFS, high-resolution wavefronts are then reconstructed. The results show that this method is robust even with detector noise and wind information inaccuracy, and under bad seeing conditions, high-frequency information in wavefronts can be recovered more accurately compared with when correlations in WFS frames are ignored.
Video-rate volumetric functional imaging of the brain at synaptic resolution.
Lu, Rongwen; Sun, Wenzhi; Liang, Yajie; Kerlin, Aaron; Bierfeld, Jens; Seelig, Johannes D; Wilson, Daniel E; Scholl, Benjamin; Mohar, Boaz; Tanimoto, Masashi; Koyama, Minoru; Fitzpatrick, David; Orger, Michael B; Ji, Na
2017-04-01
Neurons and neural networks often extend hundreds of micrometers in three dimensions. Capturing the calcium transients associated with their activity requires volume imaging methods with subsecond temporal resolution. Such speed is a challenge for conventional two-photon laser-scanning microscopy, because it depends on serial focal scanning in 3D and indicators with limited brightness. Here we present an optical module that is easily integrated into standard two-photon laser-scanning microscopes to generate an axially elongated Bessel focus, which when scanned in 2D turns frame rate into volume rate. We demonstrated the power of this approach in enabling discoveries for neurobiology by imaging the calcium dynamics of volumes of neurons and synapses in fruit flies, zebrafish larvae, mice and ferrets in vivo. Calcium signals in objects as small as dendritic spines could be resolved at video rates, provided that the samples were sparsely labeled to limit overlap in their axially projected images.
Motion-adaptive spatio-temporal regularization for accelerated dynamic MRI.
Asif, M Salman; Hamilton, Lei; Brummer, Marijn; Romberg, Justin
2013-09-01
Accelerated magnetic resonance imaging techniques reduce signal acquisition time by undersampling k-space. A fundamental problem in accelerated magnetic resonance imaging is the recovery of quality images from undersampled k-space data. Current state-of-the-art recovery algorithms exploit the spatial and temporal structures in underlying images to improve the reconstruction quality. In recent years, compressed sensing theory has helped formulate mathematical principles and conditions that ensure recovery of (structured) sparse signals from undersampled, incoherent measurements. In this article, a new recovery algorithm, motion-adaptive spatio-temporal regularization, is presented that uses spatial and temporal structured sparsity of MR images in the compressed sensing framework to recover dynamic MR images from highly undersampled k-space data. In contrast to existing algorithms, our proposed algorithm models temporal sparsity using motion-adaptive linear transformations between neighboring images. The efficiency of motion-adaptive spatio-temporal regularization is demonstrated with experiments on cardiac magnetic resonance imaging for a range of reduction factors. Results are also compared with k-t FOCUSS with motion estimation and compensation-another recently proposed recovery algorithm for dynamic magnetic resonance imaging. . Copyright © 2012 Wiley Periodicals, Inc.
Search for evidence of low energy protons in solar flares
NASA Technical Reports Server (NTRS)
Metcalf, Thomas R.; Wuelser, Jean-Pierre; Canfield, Richard C.; Hudson, Hugh S.
1992-01-01
We searched for linear polarization in the H alpha line using the Stokes Polarimeter at Mees Solar Observatory and present observations of a flare from NOAA active region 6659 which began at 01:30 UT on 14 Jun. 1991. Our dataset also includes H alpha spectra from the Mees charge coupled device (MCCD) imaging spectrograph as well as hard x ray observations from the Burst and Transient Source Experiment (BATSE) instrument on board the Gamma Ray Observatory (GRO). The polarimeter scanned a 40 x 40 inch field of view using 16 raster points in a 4 x 4 grid. Each scan took about 30 seconds with 2 seconds at each raster point. The polarimeter stopped 8.5 inches between raster points and each point covered a 6 inch region. This sparse sampling increased the total field of view without reducing the temporal cadence. At each raster point, an H alpha spectrum with 20 mA spectral sampling is obtained covering 2.6 A centered on H alpha line center. The preliminary conclusions from the research are presented.
Gamma-Ray Bursts and Cosmology
NASA Technical Reports Server (NTRS)
Norris, Jay P.
2003-01-01
The unrivalled, extreme luminosities of gamma-ray bursts (GRBs) make them the favored beacons for sampling the high redshift Universe. To employ GRBs to study the cosmic terrain -- e.g., star and galaxy formation history -- GRB luminosities must be calibrated, and the luminosity function versus redshift must be measured or inferred. Several nascent relationships between gamma-ray temporal or spectral indicators and luminosity or total energy have been reported. These measures promise to further our understanding of GRBs once the connections between the luminosity indicators and GRB jets and emission mechanisms are better elucidated. The current distribution of 33 redshifts determined from host galaxies and afterglows peaks near z $\\sim$ 1, whereas for the full BATSE sample of long bursts, the lag-luminosity relation predicts a broad peak z $\\sim$ 1--4 with a tail to z $\\sim$ 20, in rough agreement with theoretical models based on star formation considerations. For some GRB subclasses and apparently related phenomena -- short bursts, long-lag bursts, and X-ray flashes -- the present information on their redshift distributions is sparse or entirely lacking, and progress is expected in Swift era when prompt alerts become numerous.
Pan, Minghao; Yang, Yongmin; Guan, Fengjiao; Hu, Haifeng; Xu, Hailong
2017-01-01
The accurate monitoring of blade vibration under operating conditions is essential in turbo-machinery testing. Blade tip timing (BTT) is a promising non-contact technique for the measurement of blade vibrations. However, the BTT sampling data are inherently under-sampled and contaminated with several measurement uncertainties. How to recover frequency spectra of blade vibrations though processing these under-sampled biased signals is a bottleneck problem. A novel method of BTT signal processing for alleviating measurement uncertainties in recovery of multi-mode blade vibration frequency spectrum is proposed in this paper. The method can be divided into four phases. First, a single measurement vector model is built by exploiting that the blade vibration signals are sparse in frequency spectra. Secondly, the uniqueness of the nonnegative sparse solution is studied to achieve the vibration frequency spectrum. Thirdly, typical sources of BTT measurement uncertainties are quantitatively analyzed. Finally, an improved vibration frequency spectra recovery method is proposed to get a guaranteed level of sparse solution when measurement results are biased. Simulations and experiments are performed to prove the feasibility of the proposed method. The most outstanding advantage is that this method can prevent the recovered multi-mode vibration spectra from being affected by BTT measurement uncertainties without increasing the probe number. PMID:28758952
Szyda, Joanna; Liu, Zengting; Zatoń-Dobrowolska, Magdalena; Wierzbicki, Heliodor; Rzasa, Anna
2008-01-01
We analysed data from a selective DNA pooling experiment with 130 individuals of the arctic fox (Alopex lagopus), which originated from 2 different types regarding body size. The association between alleles of 6 selected unlinked molecular markers and body size was tested by using univariate and multinomial logistic regression models, applying odds ratio and test statistics from the power divergence family. Due to the small sample size and the resulting sparseness of the data table, in hypothesis testing we could not rely on the asymptotic distributions of the tests. Instead, we tried to account for data sparseness by (i) modifying confidence intervals of odds ratio; (ii) using a normal approximation of the asymptotic distribution of the power divergence tests with different approaches for calculating moments of the statistics; and (iii) assessing P values empirically, based on bootstrap samples. As a result, a significant association was observed for 3 markers. Furthermore, we used simulations to assess the validity of the normal approximation of the asymptotic distribution of the test statistics under the conditions of small and sparse samples.
Demitri, Nevine; Zoubir, Abdelhak M
2017-01-01
Glucometers present an important self-monitoring tool for diabetes patients and, therefore, must exhibit high accuracy as well as good usability features. Based on an invasive photometric measurement principle that drastically reduces the volume of the blood sample needed from the patient, we present a framework that is capable of dealing with small blood samples, while maintaining the required accuracy. The framework consists of two major parts: 1) image segmentation; and 2) convergence detection. Step 1 is based on iterative mode-seeking methods to estimate the intensity value of the region of interest. We present several variations of these methods and give theoretical proofs of their convergence. Our approach is able to deal with changes in the number and position of clusters without any prior knowledge. Furthermore, we propose a method based on sparse approximation to decrease the computational load, while maintaining accuracy. Step 2 is achieved by employing temporal tracking and prediction, herewith decreasing the measurement time, and, thus, improving usability. Our framework is tested on several real datasets with different characteristics. We show that we are able to estimate the underlying glucose concentration from much smaller blood samples than is currently state of the art with sufficient accuracy according to the most recent ISO standards and reduce measurement time significantly compared to state-of-the-art methods.
Dictionary learning and time sparsity in dynamic MRI.
Caballero, Jose; Rueckert, Daniel; Hajnal, Joseph V
2012-01-01
Sparse representation methods have been shown to tackle adequately the inherent speed limits of magnetic resonance imaging (MRI) acquisition. Recently, learning-based techniques have been used to further accelerate the acquisition of 2D MRI. The extension of such algorithms to dynamic MRI (dMRI) requires careful examination of the signal sparsity distribution among the different dimensions of the data. Notably, the potential of temporal gradient (TG) sparsity in dMRI has not yet been explored. In this paper, a novel method for the acceleration of cardiac dMRI is presented which investigates the potential benefits of enforcing sparsity constraints on patch-based learned dictionaries and TG at the same time. We show that an algorithm exploiting sparsity on these two domains can outperform previous sparse reconstruction techniques.
Empirical study of the role of the topology in spreading on communication networks
NASA Astrophysics Data System (ADS)
Medvedev, Alexey; Kertesz, Janos
2017-03-01
Topological aspects, like community structure, and temporal activity patterns, like burstiness, have been shown to severely influence the speed of spreading in temporal networks. We study the influence of the topology on the susceptible-infected (SI) spreading on time stamped communication networks, as obtained from a dataset of mobile phone records. We consider city level networks with intra- and inter-city connections. The networks using only intra-city links are usually sparse, where the spreading depends mainly on the average degree. The inter-city links serve as bridges in spreading, speeding up considerably the process. We demonstrate the effect also on model simulations.
Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering.
Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus
2014-12-01
This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.
Galaxy redshift surveys with sparse sampling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, Chi-Ting; Wullstein, Philipp; Komatsu, Eiichiro
2013-12-01
Survey observations of the three-dimensional locations of galaxies are a powerful approach to measure the distribution of matter in the universe, which can be used to learn about the nature of dark energy, physics of inflation, neutrino masses, etc. A competitive survey, however, requires a large volume (e.g., V{sub survey} ∼ 10Gpc{sup 3}) to be covered, and thus tends to be expensive. A ''sparse sampling'' method offers a more affordable solution to this problem: within a survey footprint covering a given survey volume, V{sub survey}, we observe only a fraction of the volume. The distribution of observed regions should bemore » chosen such that their separation is smaller than the length scale corresponding to the wavenumber of interest. Then one can recover the power spectrum of galaxies with precision expected for a survey covering a volume of V{sub survey} (rather than the volume of the sum of observed regions) with the number density of galaxies given by the total number of observed galaxies divided by V{sub survey} (rather than the number density of galaxies within an observed region). We find that regularly-spaced sampling yields an unbiased power spectrum with no window function effect, and deviations from regularly-spaced sampling, which are unavoidable in realistic surveys, introduce calculable window function effects and increase the uncertainties of the recovered power spectrum. On the other hand, we show that the two-point correlation function (pair counting) is not affected by sparse sampling. While we discuss the sparse sampling method within the context of the forthcoming Hobby-Eberly Telescope Dark Energy Experiment, the method is general and can be applied to other galaxy surveys.« less
NASA Astrophysics Data System (ADS)
Hwang, Sunghwan; Han, Chang Wan; Venkatakrishnan, Singanallur V.; Bouman, Charles A.; Ortalan, Volkan
2017-04-01
Scanning transmission electron microscopy (STEM) has been successfully utilized to investigate atomic structure and chemistry of materials with atomic resolution. However, STEM’s focused electron probe with a high current density causes the electron beam damages including radiolysis and knock-on damage when the focused probe is exposed onto the electron-beam sensitive materials. Therefore, it is highly desirable to decrease the electron dose used in STEM for the investigation of biological/organic molecules, soft materials and nanomaterials in general. With the recent emergence of novel sparse signal processing theories, such as compressive sensing and model-based iterative reconstruction, possibilities of operating STEM under a sparse acquisition scheme to reduce the electron dose have been opened up. In this paper, we report our recent approach to implement a sparse acquisition in STEM mode executed by a random sparse-scan and a signal processing algorithm called model-based iterative reconstruction (MBIR). In this method, a small portion, such as 5% of randomly chosen unit sampling areas (i.e. electron probe positions), which corresponds to pixels of a STEM image, within the region of interest (ROI) of the specimen are scanned with an electron probe to obtain a sparse image. Sparse images are then reconstructed using the MBIR inpainting algorithm to produce an image of the specimen at the original resolution that is consistent with an image obtained using conventional scanning methods. Experimental results for down to 5% sampling show consistency with the full STEM image acquired by the conventional scanning method. Although, practical limitations of the conventional STEM instruments, such as internal delays of the STEM control electronics and the continuous electron gun emission, currently hinder to achieve the full potential of the sparse acquisition STEM in realizing the low dose imaging condition required for the investigation of beam-sensitive materials, the results obtained in our experiments demonstrate the sparse acquisition STEM imaging is potentially capable of reducing the electron dose by at least 20 times expanding the frontiers of our characterization capabilities for investigation of biological/organic molecules, polymers, soft materials and nanostructures in general.
Sample-Starved Large Scale Network Analysis
2016-05-05
As reported in our journal publication (G. Marjanovic and A. O. Hero, ”l0 Sparse Inverse Covariance Estimation,” IEEE Trans on Signal Processing, vol... Marjanovic and A. O. Hero, ”l0 Sparse Inverse Covariance Estimation,” in IEEE Trans on Signal Processing, vol. 63, no. 12, pp. 3218-3231, May 2015. 6. G
Two-dimensional sparse wavenumber recovery for guided wavefields
NASA Astrophysics Data System (ADS)
Sabeti, Soroosh; Harley, Joel B.
2018-04-01
The multi-modal and dispersive behavior of guided waves is often characterized by their dispersion curves, which describe their frequency-wavenumber behavior. In prior work, compressive sensing based techniques, such as sparse wavenumber analysis (SWA), have been capable of recovering dispersion curves from limited data samples. A major limitation of SWA, however, is the assumption that the structure is isotropic. As a result, SWA fails when applied to composites and other anisotropic structures. There have been efforts to address this issue in the literature, but they either are not easily generalizable or do not sufficiently express the data. In this paper, we enhance the existing approaches by employing a two-dimensional wavenumber model to account for direction-dependent velocities in anisotropic media. We integrate this model with tools from compressive sensing to reconstruct a wavefield from incomplete data. Specifically, we create a modified two-dimensional orthogonal matching pursuit algorithm that takes an undersampled wavefield image, with specified unknown elements, and determines its sparse wavenumber characteristics. We then recover the entire wavefield from the sparse representations obtained with our small number of data samples.
Sparsely-distributed organization of face and limb activations in human ventral temporal cortex
Weiner, Kevin S.; Grill-Spector, Kalanit
2011-01-01
Functional magnetic resonance imaging (fMRI) has identified face- and body part-selective regions, as well as distributed activation patterns for object categories across human ventral temporal cortex (VTC), eliciting a debate regarding functional organization in VTC and neural coding of object categories. Using high-resolution fMRI, we illustrate that face- and limb-selective activations alternate in a series of largely nonoverlapping clusters in lateral VTC along the inferior occipital gyrus (IOG), fusiform gyrus (FG), and occipitotemporal sulcus (OTS). Both general linear model (GLM) and multivoxel pattern (MVP) analyses show that face- and limb-selective activations minimally overlap and that this organization is consistent across experiments and days. We provide a reliable method to separate two face-selective clusters on the middle and posterior FG (mFus and pFus), and another on the IOG using their spatial relation to limb-selective activations and retinotopic areas hV4, VO-1/2, and hMT+. Furthermore, these activations show a gradient of increasing face selectivity and decreasing limb selectivity from the IOG to the mFus. Finally, MVP analyses indicate that there is differential information for faces in lateral VTC (containing weakly- and highly-selective voxels) relative to non-selective voxels in medial VTC. These findings suggest a sparsely-distributed organization where sparseness refers to the presence of several face- and limb-selective clusters in VTC, and distributed refers to the presence of different amounts of information in highly-, weakly-, and non-selective voxels. Consequently, theories of object recognition should consider the functional and spatial constraints of neural coding across a series of nonoverlapping category-selective clusters that are themselves distributed. PMID:20457261
Arvind, Hemamalini; Klistorner, Alexander; Graham, Stuart L; Grigg, John R
2006-05-01
Multifocal visual evoked potentials (mfVEPs) have demonstrated good diagnostic capabilities in glaucoma and optic neuritis. This study aimed at evaluating the possibility of simultaneously recording mfVEP for both eyes with dichoptic stimulation using virtual reality goggles and also to determine the stimulus characteristics that yield maximum amplitude. ten healthy volunteers were recruited and temporally sparse pattern pulse stimuli were presented dichoptically using virtual reality goggles. Experiment 1 involved recording responses to dichoptically presented checkerboard stimuli and also confirming true topographic representation by switching off specific segments. Experiment 2 involved monocular stimulation and comparison of amplitude with Experiment 1. In Experiment 3, orthogonally oriented gratings were dichoptically presented. Experiment 4 involved dichoptic presentation of checkerboard stimuli at different levels of sparseness (5.0 times/s, 2.5 times/s, 1.66 times/s and 1.25 times/s), where stimulation of corresponding segments of two eyes were separated by 16.7, 66.7,116.7 & 166.7 ms respectively. Experiment 1 demonstrated good traces in all regions and confirmed topographic representation. However, there was suppression of amplitude of responses to dichoptic stimulation by 17.9+/-5.4% compared to monocular stimulation. Experiment 3 demonstrated similar suppression between orthogonal and checkerboard stimuli (p = 0.08). Experiment 4 demonstrated maximum amplitude and least suppression (4.8%) with stimulation at 1.25 times/s with 166.7 ms separation between eyes. It is possible to record mfVEP for both eyes during dichoptic stimulation using virtual reality goggles, which present binocular simultaneous patterns driven by independent sequences. Interocular suppression can be almost eliminated by using a temporally sparse stimulus of 1.25 times/s with a separation of 166.7 ms between stimulation of corresponding segments of the two eyes.
A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem
Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.
2013-01-01
Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy, and robustness. PMID:24055554
Dictionary learning-based spatiotemporal regularization for 3D dense speckle tracking
NASA Astrophysics Data System (ADS)
Lu, Allen; Zontak, Maria; Parajuli, Nripesh; Stendahl, John C.; Boutagy, Nabil; Eberle, Melissa; O'Donnell, Matthew; Sinusas, Albert J.; Duncan, James S.
2017-03-01
Speckle tracking is a common method for non-rigid tissue motion analysis in 3D echocardiography, where unique texture patterns are tracked through the cardiac cycle. However, poor tracking often occurs due to inherent ultrasound issues, such as image artifacts and speckle decorrelation; thus regularization is required. Various methods, such as optical flow, elastic registration, and block matching techniques have been proposed to track speckle motion. Such methods typically apply spatial and temporal regularization in a separate manner. In this paper, we propose a joint spatiotemporal regularization method based on an adaptive dictionary representation of the dense 3D+time Lagrangian motion field. Sparse dictionaries have good signal adaptive and noise-reduction properties; however, they are prone to quantization errors. Our method takes advantage of the desirable noise suppression, while avoiding the undesirable quantization error. The idea is to enforce regularization only on the poorly tracked trajectories. Specifically, our method 1.) builds data-driven 4-dimensional dictionary of Lagrangian displacements using sparse learning, 2.) automatically identifies poorly tracked trajectories (outliers) based on sparse reconstruction errors, and 3.) performs sparse reconstruction of the outliers only. Our approach can be applied on dense Lagrangian motion fields calculated by any method. We demonstrate the effectiveness of our approach on a baseline block matching speckle tracking and evaluate performance of the proposed algorithm using tracking and strain accuracy analysis.
Rust, Nicole C.; DiCarlo, James J.
2012-01-01
While popular accounts suggest that neurons along the ventral visual processing stream become increasingly selective for particular objects, this appears at odds with the fact that inferior temporal cortical (IT) neurons are broadly tuned. To explore this apparent contradiction, we compared processing in two ventral stream stages (V4 and IT) in the rhesus macaque monkey. We confirmed that IT neurons are indeed more selective for conjunctions of visual features than V4 neurons, and that this increase in feature conjunction selectivity is accompanied by an increase in tolerance (“invariance”) to identity-preserving transformations (e.g. shifting, scaling) of those features. We report here that V4 and IT neurons are, on average, tightly matched in their tuning breadth for natural images (“sparseness”), and that the average V4 or IT neuron will produce a robust firing rate response (over 50% of its peak observed firing rate) to ~10% of all natural images. We also observed that sparseness was positively correlated with conjunction selectivity and negatively correlated with tolerance within both V4 and IT, consistent with selectivity-building and invariance-building computations that offset one another to produce sparseness. Our results imply that the conjunction-selectivity-building and invariance-building computations necessary to support object recognition are implemented in a balanced fashion to maintain sparseness at each stage of processing. PMID:22836252
Dynamic Textures Modeling via Joint Video Dictionary Learning.
Wei, Xian; Li, Yuanxiang; Shen, Hao; Chen, Fang; Kleinsteuber, Martin; Wang, Zhongfeng
2017-04-06
Video representation is an important and challenging task in the computer vision community. In this paper, we consider the problem of modeling and classifying video sequences of dynamic scenes which could be modeled in a dynamic textures (DT) framework. At first, we assume that image frames of a moving scene can be modeled as a Markov random process. We propose a sparse coding framework, named joint video dictionary learning (JVDL), to model a video adaptively. By treating the sparse coefficients of image frames over a learned dictionary as the underlying "states", we learn an efficient and robust linear transition matrix between two adjacent frames of sparse events in time series. Hence, a dynamic scene sequence is represented by an appropriate transition matrix associated with a dictionary. In order to ensure the stability of JVDL, we impose several constraints on such transition matrix and dictionary. The developed framework is able to capture the dynamics of a moving scene by exploring both sparse properties and the temporal correlations of consecutive video frames. Moreover, such learned JVDL parameters can be used for various DT applications, such as DT synthesis and recognition. Experimental results demonstrate the strong competitiveness of the proposed JVDL approach in comparison with state-of-the-art video representation methods. Especially, it performs significantly better in dealing with DT synthesis and recognition on heavily corrupted data.
Zeng, Dong; Xie, Qi; Cao, Wenfei; Lin, Jiahui; Zhang, Hao; Zhang, Shanli; Huang, Jing; Bian, Zhaoying; Meng, Deyu; Xu, Zongben; Liang, Zhengrong; Chen, Wufan
2017-01-01
Dynamic cerebral perfusion computed tomography (DCPCT) has the ability to evaluate the hemodynamic information throughout the brain. However, due to multiple 3-D image volume acquisitions protocol, DCPCT scanning imposes high radiation dose on the patients with growing concerns. To address this issue, in this paper, based on the robust principal component analysis (RPCA, or equivalently the low-rank and sparsity decomposition) model and the DCPCT imaging procedure, we propose a new DCPCT image reconstruction algorithm to improve low dose DCPCT and perfusion maps quality via using a powerful measure, called Kronecker-basis-representation tensor sparsity regularization, for measuring low-rankness extent of a tensor. For simplicity, the first proposed model is termed tensor-based RPCA (T-RPCA). Specifically, the T-RPCA model views the DCPCT sequential images as a mixture of low-rank, sparse, and noise components to describe the maximum temporal coherence of spatial structure among phases in a tensor framework intrinsically. Moreover, the low-rank component corresponds to the “background” part with spatial–temporal correlations, e.g., static anatomical contribution, which is stationary over time about structure, and the sparse component represents the time-varying component with spatial–temporal continuity, e.g., dynamic perfusion enhanced information, which is approximately sparse over time. Furthermore, an improved nonlocal patch-based T-RPCA (NL-T-RPCA) model which describes the 3-D block groups of the “background” in a tensor is also proposed. The NL-T-RPCA model utilizes the intrinsic characteristics underlying the DCPCT images, i.e., nonlocal self-similarity and global correlation. Two efficient algorithms using alternating direction method of multipliers are developed to solve the proposed T-RPCA and NL-T-RPCA models, respectively. Extensive experiments with a digital brain perfusion phantom, preclinical monkey data, and clinical patient data clearly demonstrate that the two proposed models can achieve more gains than the existing popular algorithms in terms of both quantitative and visual quality evaluations from low-dose acquisitions, especially as low as 20 mAs. PMID:28880164
Validation Of TRMM For Hazard Assessment In The Remote Context Of Tropical Africa
NASA Astrophysics Data System (ADS)
Monsieurs, E.; Kirschbaum, D.; Tan, J.; Jacobs, L.; Kervyn, M.; Demoulin, A.; Dewitte, O.
2017-12-01
Accurate rainfall data is fundamental for understanding and mitigating the disastrous effects of many rainfall-triggered hazards, especially when one considers the challenges arising from climate change and rainfall variability. In tropical Africa in particular, the sparse operational rainfall gauging network hampers the ability to understand these hazards. Satellite rainfall estimates (SRE) can therefore be of great value. Yet, rigorous validation is required to identify the uncertainties when using SRE for hazard applications. We evaluated the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) 3B42 Research Derived Daily Product from 1998 to 2017, at 0.25° x 0.25° spatial and 24 h temporal resolution. The validation was done over the western branch of the East African Rift, with the perspective of regional landslide hazard assessment in mind. Even though we collected an unprecedented dataset of 47 gauges with a minimum temporal resolution of 24 h, the sparse and heterogeneous temporal coverage in a region with high rainfall variability poses challenges for validation. In addition, the discrepancy between local-scale gauge data and spatially averaged ( 775 km²) TMPA data in the context of local convective storms and orographic rainfall is a crucial source of uncertainty. We adopted a flexible framework for SRE validation that fosters explorative research in a remote context. Results show that TMPA performs reasonably well during the rainy seasons for rainfall intensities <20 mm/day. TMPA systematically underestimates rainfall, but most problematic is the decreasing probability of detection of high intensity rainfalls. We suggest that landslide hazard might be efficiently assessed if we take account of the systematic biases in TMPA data and determine rainfall thresholds modulated by controls on, and uncertainties of, TMPA revealed in this study. Moreover, it is found relevant in mapping regional-scale rainfall-triggered hazards that are in any case poorly covered by the sparse available gauges. We anticipate validation of TMPA's successor (Integrated Multi-satellitE Retrievals for Global Precipitation Measurement; 10 km × 10 km, half-hourly) using the proposed framework, as soon as this product will be available in early 2018 for the 1998-present period.
Model's sparse representation based on reduced mixed GMsFE basis methods
NASA Astrophysics Data System (ADS)
Jiang, Lijian; Li, Qiuqi
2017-06-01
In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. Mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem in a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by the random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number degree of freedoms for the mixed GMsFEM and substantially impacts on the computation efficiency. In order to overcome the difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full order model, the online computation still depends on the number of coarse degree of freedoms. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.
Model's sparse representation based on reduced mixed GMsFE basis methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Qiuqi, E-mail: qiuqili@hnu.edu.cn
2017-06-01
In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. Mixed generalized multiscale finite element method (GMsFEM) is one of the accurate and efficient approaches to solve the flow problem in a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by the random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a largemore » number degree of freedoms for the mixed GMsFEM and substantially impacts on the computation efficiency. In order to overcome the difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full order model, the online computation still depends on the number of coarse degree of freedoms. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.« less
Observing System Simulations for Small Satellite Formations Estimating Bidirectional Reflectance
NASA Technical Reports Server (NTRS)
Nag, Sreeja; Gatebe, Charles K.; de Weck, Olivier
2015-01-01
The bidirectional reflectance distribution function (BRDF) gives the reflectance of a target as a function of illumination geometry and viewing geometry, hence carries information about the anisotropy of the surface. BRDF is needed in remote sensing for the correction of view and illumination angle effects (for example in image standardization and mosaicing), for deriving albedo, for land cover classification, for cloud detection, for atmospheric correction, and other applications. However, current spaceborne instruments provide sparse angular sampling of BRDF and airborne instruments are limited in the spatial and temporal coverage. To fill the gaps in angular coverage within spatial, spectral and temporal requirements, we propose a new measurement technique: Use of small satellites in formation flight, each satellite with a VNIR (visible and near infrared) imaging spectrometer, to make multi-spectral, near-simultaneous measurements of every ground spot in the swath at multiple angles. This paper describes an observing system simulation experiment (OSSE) to evaluate the proposed concept and select the optimal formation architecture that minimizes BRDF uncertainties. The variables of the OSSE are identified; number of satellites, measurement spread in the view zenith and relative azimuth with respect to solar plane, solar zenith angle, BRDF models and wavelength of reflection. Analyzing the sensitivity of BRDF estimation errors to the variables allow simplification of the OSSE, to enable its use to rapidly evaluate formation architectures. A 6-satellite formation is shown to produce lower BRDF estimation errors, purely in terms of angular sampling as evaluated by the OSSE, than a single spacecraft with 9 forward-aft sensors. We demonstrate the ability to use OSSEs to design small satellite formations as complements to flagship mission data. The formations can fill angular sampling gaps and enable better BRDF products than currently possible.
Observing system simulations for small satellite formations estimating bidirectional reflectance
NASA Astrophysics Data System (ADS)
Nag, Sreeja; Gatebe, Charles K.; Weck, Olivier de
2015-12-01
The bidirectional reflectance distribution function (BRDF) gives the reflectance of a target as a function of illumination geometry and viewing geometry, hence carries information about the anisotropy of the surface. BRDF is needed in remote sensing for the correction of view and illumination angle effects (for example in image standardization and mosaicing), for deriving albedo, for land cover classification, for cloud detection, for atmospheric correction, and other applications. However, current spaceborne instruments provide sparse angular sampling of BRDF and airborne instruments are limited in the spatial and temporal coverage. To fill the gaps in angular coverage within spatial, spectral and temporal requirements, we propose a new measurement technique: use of small satellites in formation flight, each satellite with a VNIR (visible and near infrared) imaging spectrometer, to make multi-spectral, near-simultaneous measurements of every ground spot in the swath at multiple angles. This paper describes an observing system simulation experiment (OSSE) to evaluate the proposed concept and select the optimal formation architecture that minimizes BRDF uncertainties. The variables of the OSSE are identified; number of satellites, measurement spread in the view zenith and relative azimuth with respect to solar plane, solar zenith angle, BRDF models and wavelength of reflection. Analyzing the sensitivity of BRDF estimation errors to the variables allow simplification of the OSSE, to enable its use to rapidly evaluate formation architectures. A 6-satellite formation is shown to produce lower BRDF estimation errors, purely in terms of angular sampling as evaluated by the OSSE, than a single spacecraft with 9 forward-aft sensors. We demonstrate the ability to use OSSEs to design small satellite formations as complements to flagship mission data. The formations can fill angular sampling gaps and enable better BRDF products than currently possible.
Open-target sparse sensing of biological agents using DNA microarray
2011-01-01
Background Current biosensors are designed to target and react to specific nucleic acid sequences or structural epitopes. These 'target-specific' platforms require creation of new physical capture reagents when new organisms are targeted. An 'open-target' approach to DNA microarray biosensing is proposed and substantiated using laboratory generated data. The microarray consisted of 12,900 25 bp oligonucleotide capture probes derived from a statistical model trained on randomly selected genomic segments of pathogenic prokaryotic organisms. Open-target detection of organisms was accomplished using a reference library of hybridization patterns for three test organisms whose DNA sequences were not included in the design of the microarray probes. Results A multivariate mathematical model based on the partial least squares regression (PLSR) was developed to detect the presence of three test organisms in mixed samples. When all 12,900 probes were used, the model correctly detected the signature of three test organisms in all mixed samples (mean(R2)) = 0.76, CI = 0.95), with a 6% false positive rate. A sampling algorithm was then developed to sparsely sample the probe space for a minimal number of probes required to capture the hybridization imprints of the test organisms. The PLSR detection model was capable of correctly identifying the presence of the three test organisms in all mixed samples using only 47 probes (mean(R2)) = 0.77, CI = 0.95) with nearly 100% specificity. Conclusions We conceived an 'open-target' approach to biosensing, and hypothesized that a relatively small, non-specifically designed, DNA microarray is capable of identifying the presence of multiple organisms in mixed samples. Coupled with a mathematical model applied to laboratory generated data, and sparse sampling of capture probes, the prototype microarray platform was able to capture the signature of each organism in all mixed samples with high sensitivity and specificity. It was demonstrated that this new approach to biosensing closely follows the principles of sparse sensing. PMID:21801424
NASA Astrophysics Data System (ADS)
Wang, Yihan; Lu, Tong; Wan, Wenbo; Liu, Lingling; Zhang, Songhe; Li, Jiao; Zhao, Huijuan; Gao, Feng
2018-02-01
To fully realize the potential of photoacoustic tomography (PAT) in preclinical and clinical applications, rapid measurements and robust reconstructions are needed. Sparse-view measurements have been adopted effectively to accelerate the data acquisition. However, since the reconstruction from the sparse-view sampling data is challenging, both of the effective measurement and the appropriate reconstruction should be taken into account. In this study, we present an iterative sparse-view PAT reconstruction scheme where a virtual parallel-projection concept matching for the proposed measurement condition is introduced to help to achieve the "compressive sensing" procedure of the reconstruction, and meanwhile the spatially adaptive filtering fully considering the a priori information of the mutually similar blocks existing in natural images is introduced to effectively recover the partial unknown coefficients in the transformed domain. Therefore, the sparse-view PAT images can be reconstructed with higher quality compared with the results obtained by the universal back-projection (UBP) algorithm in the same sparse-view cases. The proposed approach has been validated by simulation experiments, which exhibits desirable performances in image fidelity even from a small number of measuring positions.
Visual saliency detection based on in-depth analysis of sparse representation
NASA Astrophysics Data System (ADS)
Wang, Xin; Shen, Siqiu; Ning, Chen
2018-03-01
Visual saliency detection has been receiving great attention in recent years since it can facilitate a wide range of applications in computer vision. A variety of saliency models have been proposed based on different assumptions within which saliency detection via sparse representation is one of the newly arisen approaches. However, most existing sparse representation-based saliency detection methods utilize partial characteristics of sparse representation, lacking of in-depth analysis. Thus, they may have limited detection performance. Motivated by this, this paper proposes an algorithm for detecting visual saliency based on in-depth analysis of sparse representation. A number of discriminative dictionaries are first learned with randomly sampled image patches by means of inner product-based dictionary atom classification. Then, the input image is partitioned into many image patches, and these patches are classified into salient and nonsalient ones based on the in-depth analysis of sparse coding coefficients. Afterward, sparse reconstruction errors are calculated for the salient and nonsalient patch sets. By investigating the sparse reconstruction errors, the most salient atoms, which tend to be from the most salient region, are screened out and taken away from the discriminative dictionaries. Finally, an effective method is exploited for saliency map generation with the reduced dictionaries. Comprehensive evaluations on publicly available datasets and comparisons with some state-of-the-art approaches demonstrate the effectiveness of the proposed algorithm.
NASA Astrophysics Data System (ADS)
Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.
2011-12-01
Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of parameters. Recently, sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation or integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using the sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. Then, the desired probability density function of each prediction is approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions but avoids all disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling. The results show that our method attains the same accuracy much more efficiently than traditional MCMC.
Friedel, Michael J.
2011-01-01
Few studies attempt to model the range of possible post-fire hydrologic and geomorphic hazards because of the sparseness of data and the coupled, nonlinear, spatial, and temporal relationships among landscape variables. In this study, a type of unsupervised artificial neural network, called a self-organized map (SOM), is trained using data from 540 burned basins in the western United States. The sparsely populated data set includes variables from independent numerical landscape categories (climate, land surface form, geologic texture, and post-fire condition), independent landscape classes (bedrock geology and state), and dependent initiation processes (runoff, landslide, and runoff and landslide combination) and responses (debris flows, floods, and no events). Pattern analysis of the SOM-based component planes is used to identify and interpret relations among the variables. Application of the Davies-Bouldin criteria following k-means clustering of the SOM neurons identified eight conceptual regional models for focusing future research and empirical model development. A split-sample validation on 60 independent basins (not included in the training) indicates that simultaneous predictions of initiation process and response types are at least 78% accurate. As climate shifts from wet to dry conditions, forecasts across the burned landscape reveal a decreasing trend in the total number of debris flow, flood, and runoff events with considerable variability among individual basins. These findings suggest the SOM may be useful in forecasting real-time post-fire hazards, and long-term post-recovery processes and effects of climate change scenarios.
Motion-compensated compressed sensing for dynamic imaging
NASA Astrophysics Data System (ADS)
Sundaresan, Rajagopalan; Kim, Yookyung; Nadar, Mariappan S.; Bilgin, Ali
2010-08-01
The recently introduced Compressed Sensing (CS) theory explains how sparse or compressible signals can be reconstructed from far fewer samples than what was previously believed possible. The CS theory has attracted significant attention for applications such as Magnetic Resonance Imaging (MRI) where long acquisition times have been problematic. This is especially true for dynamic MRI applications where high spatio-temporal resolution is needed. For example, in cardiac cine MRI, it is desirable to acquire the whole cardiac volume within a single breath-hold in order to avoid artifacts due to respiratory motion. Conventional MRI techniques do not allow reconstruction of high resolution image sequences from such limited amount of data. Vaswani et al. recently proposed an extension of the CS framework to problems with partially known support (i.e. sparsity pattern). In their work, the problem of recursive reconstruction of time sequences of sparse signals was considered. Under the assumption that the support of the signal changes slowly over time, they proposed using the support of the previous frame as the "known" part of the support for the current frame. While this approach works well for image sequences with little or no motion, motion causes significant change in support between adjacent frames. In this paper, we illustrate how motion estimation and compensation techniques can be used to reconstruct more accurate estimates of support for image sequences with substantial motion (such as cardiac MRI). Experimental results using phantoms as well as real MRI data sets illustrate the improved performance of the proposed technique.
Morphological Constraints on Cerebellar Granule Cell Combinatorial Diversity.
Gilmer, Jesse I; Person, Abigail L
2017-12-13
Combinatorial expansion by the cerebellar granule cell layer (GCL) is fundamental to theories of cerebellar contributions to motor control and learning. Granule cells (GrCs) sample approximately four mossy fiber inputs and are thought to form a combinatorial code useful for pattern separation and learning. We constructed a spatially realistic model of the cerebellar GCL and examined how GCL architecture contributes to GrC combinatorial diversity. We found that GrC combinatorial diversity saturates quickly as mossy fiber input diversity increases, and that this saturation is in part a consequence of short dendrites, which limit access to diverse inputs and favor dense sampling of local inputs. This local sampling also produced GrCs that were combinatorially redundant, even when input diversity was extremely high. In addition, we found that mossy fiber clustering, which is a common anatomical pattern, also led to increased redundancy of GrC input combinations. We related this redundancy to hypothesized roles of temporal expansion of GrC information encoding in service of learned timing, and we show that GCL architecture produces GrC populations that support both temporal and combinatorial expansion. Finally, we used novel anatomical measurements from mice of either sex to inform modeling of sparse and filopodia-bearing mossy fibers, finding that these circuit features uniquely contribute to enhancing GrC diversification and redundancy. Our results complement information theoretic studies of granule layer structure and provide insight into the contributions of granule layer anatomical features to afferent mixing. SIGNIFICANCE STATEMENT Cerebellar granule cells are among the simplest neurons, with tiny somata and, on average, just four dendrites. These characteristics, along with their dense organization, inspired influential theoretical work on the granule cell layer as a combinatorial expander, where each granule cell represents a unique combination of inputs. Despite the centrality of these theories to cerebellar physiology, the degree of expansion supported by anatomically realistic patterns of inputs is unknown. Using modeling and anatomy, we show that realistic input patterns constrain combinatorial diversity by producing redundant combinations, which nevertheless could support temporal diversification of like combinations, suitable for learned timing. Our study suggests a neural substrate for producing high levels of both combinatorial and temporal diversity in the granule cell layer. Copyright © 2017 the authors 0270-6474/17/3712153-14$15.00/0.
Compressive sampling by artificial neural networks for video
NASA Astrophysics Data System (ADS)
Szu, Harold; Hsu, Charles; Jenkins, Jeffrey; Reinhardt, Kitt
2011-06-01
We describe a smart surveillance strategy for handling novelty changes. Current sensors seem to keep all, redundant or not. The Human Visual System's Hubel-Wiesel (wavelet) edge detection mechanism pays attention to changes in movement, which naturally produce organized sparseness because a stagnant edge is not reported to the brain's visual cortex by retinal neurons. Sparseness is defined as an ordered set of ones (movement or not) relative to zeros that could be pseudo-orthogonal among themselves; then suited for fault tolerant storage and retrieval by means of Associative Memory (AM). The firing is sparse at the change locations. Unlike purely random sparse masks adopted in medical Compressive Sensing, these organized ones have an additional benefit of using the image changes to make retrievable graphical indexes. We coined this organized sparseness as Compressive Sampling; sensing but skipping over redundancy without altering the original image. Thus, we turn illustrate with video the survival tactics which animals that roam the Earth use daily. They acquire nothing but the space-time changes that are important to satisfy specific prey-predator relationships. We have noticed a similarity between the mathematical Compressive Sensing and this biological mechanism used for survival. We have designed a hardware implementation of the Human Visual System's Compressive Sampling scheme. To speed up further, our mixedsignal circuit design of frame differencing is built in on-chip processing hardware. A CMOS trans-conductance amplifier is designed here to generate a linear current output using a pair of differential input voltages from 2 photon detectors for change detection---one for the previous value and the other the subsequent value, ("write" synaptic weight by Hebbian outer products; "read" by inner product & pt. NL threshold) to localize and track the threat targets.
Rusu, Cristian; Morisi, Rita; Boschetto, Davide; Dharmakumar, Rohan; Tsaftaris, Sotirios A.
2014-01-01
This paper aims to identify approaches that generate appropriate synthetic data (computer generated) for Cardiac Phase-resolved Blood-Oxygen-Level-Dependent (CP–BOLD) MRI. CP–BOLD MRI is a new contrast agent- and stress-free approach for examining changes in myocardial oxygenation in response to coronary artery disease. However, since signal intensity changes are subtle, rapid visualization is not possible with the naked eye. Quantifying and visualizing the extent of disease relies on myocardial segmentation and registration to isolate the myocardium and establish temporal correspondences and ischemia detection algorithms to identify temporal differences in BOLD signal intensity patterns. If transmurality of the defect is of interest pixel-level analysis is necessary and thus a higher precision in registration is required. Such precision is currently not available affecting the design and performance of the ischemia detection algorithms. In this work, to enable algorithmic developments of ischemia detection irrespective to registration accuracy, we propose an approach that generates synthetic pixel-level myocardial time series. We do this by (a) modeling the temporal changes in BOLD signal intensity based on sparse multi-component dictionary learning, whereby segmentally derived myocardial time series are extracted from canine experimental data to learn the model; and (b) demonstrating the resemblance between real and synthetic time series for validation purposes. We envision that the proposed approach has the capacity to accelerate development of tools for ischemia detection while markedly reducing experimental costs so that cardiac BOLD MRI can be rapidly translated into the clinical arena for the noninvasive assessment of ischemic heart disease. PMID:24691119
Rusu, Cristian; Morisi, Rita; Boschetto, Davide; Dharmakumar, Rohan; Tsaftaris, Sotirios A
2014-07-01
This paper aims to identify approaches that generate appropriate synthetic data (computer generated) for cardiac phase-resolved blood-oxygen-level-dependent (CP-BOLD) MRI. CP-BOLD MRI is a new contrast agent- and stress-free approach for examining changes in myocardial oxygenation in response to coronary artery disease. However, since signal intensity changes are subtle, rapid visualization is not possible with the naked eye. Quantifying and visualizing the extent of disease relies on myocardial segmentation and registration to isolate the myocardium and establish temporal correspondences and ischemia detection algorithms to identify temporal differences in BOLD signal intensity patterns. If transmurality of the defect is of interest pixel-level analysis is necessary and thus a higher precision in registration is required. Such precision is currently not available affecting the design and performance of the ischemia detection algorithms. In this work, to enable algorithmic developments of ischemia detection irrespective to registration accuracy, we propose an approach that generates synthetic pixel-level myocardial time series. We do this by 1) modeling the temporal changes in BOLD signal intensity based on sparse multi-component dictionary learning, whereby segmentally derived myocardial time series are extracted from canine experimental data to learn the model; and 2) demonstrating the resemblance between real and synthetic time series for validation purposes. We envision that the proposed approach has the capacity to accelerate development of tools for ischemia detection while markedly reducing experimental costs so that cardiac BOLD MRI can be rapidly translated into the clinical arena for the noninvasive assessment of ischemic heart disease.
NASA Astrophysics Data System (ADS)
De Ridder, Simon; Vandermarliere, Benjamin; Ryckebusch, Jan
2016-11-01
A framework based on generalized hierarchical random graphs (GHRGs) for the detection of change points in the structure of temporal networks has recently been developed by Peel and Clauset (2015 Proc. 29th AAAI Conf. on Artificial Intelligence). We build on this methodology and extend it to also include the versatile stochastic block models (SBMs) as a parametric family for reconstructing the empirical networks. We use five different techniques for change point detection on prototypical temporal networks, including empirical and synthetic ones. We find that none of the considered methods can consistently outperform the others when it comes to detecting and locating the expected change points in empirical temporal networks. With respect to the precision and the recall of the results of the change points, we find that the method based on a degree-corrected SBM has better recall properties than other dedicated methods, especially for sparse networks and smaller sliding time window widths.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arai, Tatsuya J.; Nofiele, Joris; Yuan, Qing
Purpose: Sparse-sampling and reconstruction techniques represent an attractive strategy to achieve faster image acquisition speeds, while maintaining adequate spatial resolution and signal-to-noise ratio in rapid magnetic resonance imaging (MRI). The authors investigate the use of one such sequence, broad-use linear acquisition speed-up technique (k-t BLAST) in monitoring tumor motion for thoracic and abdominal radiotherapy and examine the potential trade-off between increased sparsification (to increase imaging speed) and the potential loss of “true” information due to greater reliance on a priori information. Methods: Lung tumor motion trajectories in the superior–inferior direction, previously recorded from ten lung cancer patients, were replayed usingmore » a motion phantom module driven by an MRI-compatible motion platform. Eppendorf test tubes filled with water which serve as fiducial markers were placed in the phantom. The modeled rigid and deformable motions were collected in a coronal image slice using balanced fast field echo in conjunction with k-t BLAST. Root mean square (RMS) error was used as a metric of spatial accuracy as measured trajectories were compared to input data. The loss of spatial information was characterized for progressively increasing acceleration factor from 1 to 16; the resultant sampling frequency was increased approximately from 2.5 to 19 Hz when the principal direction of the motion was set along frequency encoding direction. In addition to the phantom study, respiration-induced tumor motions were captured from two patients (kidney tumor and lung tumor) at 13 Hz over 49 s to demonstrate the impact of high speed motion monitoring over multiple breathing cycles. For each subject, the authors compared the tumor centroid trajectory as well as the deformable motion during free breathing. Results: In the rigid and deformable phantom studies, the RMS error of target tracking at the acquisition speed of 19 Hz was approximately 0.3–0.4 mm, which was smaller than the reconstructed pixel resolution of 0.67 mm. In the patient study, the dynamic 2D MRI enabled the monitoring of cycle-to-cycle respiratory variability present in the tumor position. It was seen that the range of centroid motion as well as the area covered due to target motion during each individual respiratory cycle was underestimated compared to the entire motion range observed over multiple breathing cycles. Conclusions: The authors’ initial results demonstrate that sparse-sampling- and reconstruction-based dynamic MRI can be used to achieve adequate image acquisition speeds without significant information loss for the task of radiotherapy guidance. Such monitoring can yield spatial and temporal information superior to conventional offline and online motion capture methods used in thoracic and abdominal radiotherapy.« less
Arai, Tatsuya J; Nofiele, Joris; Madhuranthakam, Ananth J; Yuan, Qing; Pedrosa, Ivan; Chopra, Rajiv; Sawant, Amit
2016-06-01
Sparse-sampling and reconstruction techniques represent an attractive strategy to achieve faster image acquisition speeds, while maintaining adequate spatial resolution and signal-to-noise ratio in rapid magnetic resonance imaging (MRI). The authors investigate the use of one such sequence, broad-use linear acquisition speed-up technique (k-t BLAST) in monitoring tumor motion for thoracic and abdominal radiotherapy and examine the potential trade-off between increased sparsification (to increase imaging speed) and the potential loss of "true" information due to greater reliance on a priori information. Lung tumor motion trajectories in the superior-inferior direction, previously recorded from ten lung cancer patients, were replayed using a motion phantom module driven by an MRI-compatible motion platform. Eppendorf test tubes filled with water which serve as fiducial markers were placed in the phantom. The modeled rigid and deformable motions were collected in a coronal image slice using balanced fast field echo in conjunction with k-t BLAST. Root mean square (RMS) error was used as a metric of spatial accuracy as measured trajectories were compared to input data. The loss of spatial information was characterized for progressively increasing acceleration factor from 1 to 16; the resultant sampling frequency was increased approximately from 2.5 to 19 Hz when the principal direction of the motion was set along frequency encoding direction. In addition to the phantom study, respiration-induced tumor motions were captured from two patients (kidney tumor and lung tumor) at 13 Hz over 49 s to demonstrate the impact of high speed motion monitoring over multiple breathing cycles. For each subject, the authors compared the tumor centroid trajectory as well as the deformable motion during free breathing. In the rigid and deformable phantom studies, the RMS error of target tracking at the acquisition speed of 19 Hz was approximately 0.3-0.4 mm, which was smaller than the reconstructed pixel resolution of 0.67 mm. In the patient study, the dynamic 2D MRI enabled the monitoring of cycle-to-cycle respiratory variability present in the tumor position. It was seen that the range of centroid motion as well as the area covered due to target motion during each individual respiratory cycle was underestimated compared to the entire motion range observed over multiple breathing cycles. The authors' initial results demonstrate that sparse-sampling- and reconstruction-based dynamic MRI can be used to achieve adequate image acquisition speeds without significant information loss for the task of radiotherapy guidance. Such monitoring can yield spatial and temporal information superior to conventional offline and online motion capture methods used in thoracic and abdominal radiotherapy.
Arai, Tatsuya J.; Nofiele, Joris; Madhuranthakam, Ananth J.; Yuan, Qing; Pedrosa, Ivan; Chopra, Rajiv; Sawant, Amit
2016-01-01
Purpose: Sparse-sampling and reconstruction techniques represent an attractive strategy to achieve faster image acquisition speeds, while maintaining adequate spatial resolution and signal-to-noise ratio in rapid magnetic resonance imaging (MRI). The authors investigate the use of one such sequence, broad-use linear acquisition speed-up technique (k-t BLAST) in monitoring tumor motion for thoracic and abdominal radiotherapy and examine the potential trade-off between increased sparsification (to increase imaging speed) and the potential loss of “true” information due to greater reliance on a priori information. Methods: Lung tumor motion trajectories in the superior–inferior direction, previously recorded from ten lung cancer patients, were replayed using a motion phantom module driven by an MRI-compatible motion platform. Eppendorf test tubes filled with water which serve as fiducial markers were placed in the phantom. The modeled rigid and deformable motions were collected in a coronal image slice using balanced fast field echo in conjunction with k-t BLAST. Root mean square (RMS) error was used as a metric of spatial accuracy as measured trajectories were compared to input data. The loss of spatial information was characterized for progressively increasing acceleration factor from 1 to 16; the resultant sampling frequency was increased approximately from 2.5 to 19 Hz when the principal direction of the motion was set along frequency encoding direction. In addition to the phantom study, respiration-induced tumor motions were captured from two patients (kidney tumor and lung tumor) at 13 Hz over 49 s to demonstrate the impact of high speed motion monitoring over multiple breathing cycles. For each subject, the authors compared the tumor centroid trajectory as well as the deformable motion during free breathing. Results: In the rigid and deformable phantom studies, the RMS error of target tracking at the acquisition speed of 19 Hz was approximately 0.3–0.4 mm, which was smaller than the reconstructed pixel resolution of 0.67 mm. In the patient study, the dynamic 2D MRI enabled the monitoring of cycle-to-cycle respiratory variability present in the tumor position. It was seen that the range of centroid motion as well as the area covered due to target motion during each individual respiratory cycle was underestimated compared to the entire motion range observed over multiple breathing cycles. Conclusions: The authors’ initial results demonstrate that sparse-sampling- and reconstruction-based dynamic MRI can be used to achieve adequate image acquisition speeds without significant information loss for the task of radiotherapy guidance. Such monitoring can yield spatial and temporal information superior to conventional offline and online motion capture methods used in thoracic and abdominal radiotherapy. PMID:27277029
NASA Astrophysics Data System (ADS)
Kucera, P. A.; Steinson, M.
2016-12-01
Accurate and reliable real-time monitoring and dissemination of observations of precipitation and surface weather conditions in general is critical for a variety of research studies and applications. Surface precipitation observations provide important reference information for evaluating satellite (e.g., GPM) precipitation estimates. High quality surface observations of precipitation, temperature, moisture, and winds are important for applications such as agriculture, water resource monitoring, health, and hazardous weather early warning systems. In many regions of the World, surface weather station and precipitation gauge networks are sparsely located and/or of poor quality. Existing stations have often been sited incorrectly, not well-maintained, and have limited communications established at the site for real-time monitoring. The University Corporation for Atmospheric Research (UCAR)/National Center for Atmospheric Research (NCAR), with support from USAID, has started an initiative to develop and deploy low-cost weather instrumentation including tipping bucket and weighing-type precipitation gauges in sparsely observed regions of the world. The goal is to improve the number of observations (temporally and spatially) for the evaluation of satellite precipitation estimates in data-sparse regions and to improve the quality of applications for environmental monitoring and early warning alert systems on a regional to global scale. One important aspect of this initiative is to make the data open to the community. The weather station instrumentation have been developed using innovative new technologies such as 3D printers, Raspberry Pi computing systems, and wireless communications. An initial pilot project have been implemented in the country of Zambia. This effort could be expanded to other data sparse regions around the globe. The presentation will provide an overview and demonstration of 3D printed weather station development and initial evaluation of observed precipitation datasets.
Sparse-sampling with time-encoded (TICO) stimulated Raman scattering for fast image acquisition
NASA Astrophysics Data System (ADS)
Hakert, Hubertus; Eibl, Matthias; Karpf, Sebastian; Huber, Robert
2017-07-01
Modern biomedical imaging modalities aim to provide researchers a multimodal contrast for a deeper insight into a specimen under investigation. A very promising technique is stimulated Raman scattering (SRS) microscopy, which can unveil the chemical composition of a sample with a very high specificity. Although the signal intensities are enhanced manifold to achieve a faster acquisition of images if compared to standard Raman microscopy, there is a trade-off between specificity and acquisition speed. Commonly used SRS concepts either probe only very few Raman transitions as the tuning of the applied laser sources is complicated or record whole spectra with a spectrometer based setup. While the first approach is fast, it reduces the specificity and the spectrometer approach records whole spectra -with energy differences where no Raman information is present-, which limits the acquisition speed. Therefore, we present a new approach based on the TICO-Raman concept, which we call sparse-sampling. The TICO-sparse-sampling setup is fully electronically controllable and allows probing of only the characteristic peaks of a Raman spectrum instead of always acquiring a whole spectrum. By reducing the spectral points to the relevant peaks, the acquisition time can be greatly reduced compared to a uniformly, equidistantly sampled Raman spectrum while the specificity and the signal to noise ratio (SNR) are maintained. Furthermore, all laser sources are completely fiber based. The synchronized detection enables a full resolution of the Raman signal, whereas the analogue and digital balancing allows shot noise limited detection. First imaging results with polystyrene (PS) and polymethylmethacrylate (PMMA) beads confirm the advantages of TICO sparse-sampling. We achieved a pixel dwell time as low as 35 μs for an image differentiating both species. The mechanical properties of the applied voice coil stage for scanning the sample currently limits even faster acquisition.
Nonlinear Estimation With Sparse Temporal Measurements
2016-09-01
Kalman filter , the extended Kalman filter (EKF) and unscented Kalman filter (UKF) are commonly used in practical application. The Kalman filter is an...optimal estimator for linear systems; the EKF and UKF are sub-optimal approximations of the Kalman filter . The EKF uses a first-order Taylor series...propagated covariance is compared for similarity with a Monte Carlo propagation. The similarity of the covariance matrices is shown to predict filter
Random On-Board Pixel Sampling (ROPS) X-Ray Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhehui; Iaroshenko, O.; Li, S.
Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustratemore » the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques, is expected to facilitate the development and applications of high-speed X-ray camera technology.« less
The dark matter of galaxy voids
NASA Astrophysics Data System (ADS)
Sutter, P. M.; Lavaux, Guilhem; Wandelt, Benjamin D.; Weinberg, David H.; Warren, Michael S.
2014-03-01
How do observed voids relate to the underlying dark matter distribution? To examine the spatial distribution of dark matter contained within voids identified in galaxy surveys, we apply Halo Occupation Distribution models representing sparsely and densely sampled galaxy surveys to a high-resolution N-body simulation. We compare these galaxy voids to voids found in the halo distribution, low-resolution dark matter and high-resolution dark matter. We find that voids at all scales in densely sampled surveys - and medium- to large-scale voids in sparse surveys - trace the same underdensities as dark matter, but they are larger in radius by ˜20 per cent, they have somewhat shallower density profiles and they have centres offset by ˜ 0.4Rv rms. However, in void-to-void comparison we find that shape estimators are less robust to sampling, and the largest voids in sparsely sampled surveys suffer fragmentation at their edges. We find that voids in galaxy surveys always correspond to underdensities in the dark matter, though the centres may be offset. When this offset is taken into account, we recover almost identical radial density profiles between galaxies and dark matter. All mock catalogues used in this work are available at http://www.cosmicvoids.net.
NASA Astrophysics Data System (ADS)
Wunch, D.; Toon, G. C.; Hedelius, J.; Vizenor, N.; Roehl, C. M.; Saad, K.; Blavier, J. F.; Blake, D. R.; Wennberg, P. O.
2016-12-01
In California's South Coast Air Basin (SoCAB), the methane emissions inferred from atmospheric measurements exceed estimates based on inventories. We seek to provide insight into the sources of the discrepancy with two records of atmospheric trace gas total column abundances in the SoCAB: one temporally sparse dataset that began in the late 1980s, and a temporally dense dataset that began in 2012. We use their measurements of ethane and methane to partition the sources of the excess methane. The early few years of the sparse record show a rapid decline in ethane emissions at a much faster rate than decreasing vehicle exhaust or natural gas and crude oil production can explain. Between 2010 and 2015, ethane emissions have grown gradually, which is in contrast to the steady production of natural gas liquids over that time. Since 2012, ethane to methane ratios in the natural gas withdrawn from a storage facility within the SoCAB have been increasing; these ratios are tracked in our atmospheric measurements with about half of the rate of increase. From this, we infer that about half of the excess methane in the SoCAB between 2012-2015 is attributable to losses from the natural gas infrastructure.
Robust registration of sparsely sectioned histology to ex-vivo MRI of temporal lobe resections
NASA Astrophysics Data System (ADS)
Goubran, Maged; Khan, Ali R.; Crukley, Cathie; Buchanan, Susan; Santyr, Brendan; deRibaupierre, Sandrine; Peters, Terry M.
2012-02-01
Surgical resection of epileptic foci is a typical treatment for drug-resistant epilepsy, however, accurate preoperative localization is challenging and often requires invasive sub-dural or intra-cranial electrode placement. The presence of cellular abnormalities in the resected tissue can be used to validate the effectiveness of multispectralMagnetic Resonance Imaging (MRI) in pre-operative foci localization and surgical planning. If successful, these techniques can lead to improved surgical outcomes and less invasive procedures. Towards this goal, a novel pipeline is presented here for post-operative imaging of temporal lobe specimens involving MRI and digital histology, and present and evaluate methods for bringing these images into spatial correspondence. The sparsely-sectioned histology images of resected tissue represents a challenge for 3D reconstruction which we address with a combined 3D and 2D rigid registration algorithm that alternates between slice-based and volume-based registration with the ex-vivo MRI. We also evaluate four methods for non-rigid within-plane registration using both images and fiducials, with the top performing method resulting in a target registration error of 0.87 mm. This work allows for the spatially-local comparison of histology with post-operative MRI and paves the way for eventual registration with pre-operative MRI images.
Locality-preserving sparse representation-based classification in hyperspectral imagery
NASA Astrophysics Data System (ADS)
Gao, Lianru; Yu, Haoyang; Zhang, Bing; Li, Qingting
2016-10-01
This paper proposes to combine locality-preserving projections (LPP) and sparse representation (SR) for hyperspectral image classification. The LPP is first used to reduce the dimensionality of all the training and testing data by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold, where the high-dimensional data lies. Then, SR codes the projected testing pixels as sparse linear combinations of all the training samples to classify the testing pixels by evaluating which class leads to the minimum approximation error. The integration of LPP and SR represents an innovative contribution to the literature. The proposed approach, called locality-preserving SR-based classification, addresses the imbalance between high dimensionality of hyperspectral data and the limited number of training samples. Experimental results on three real hyperspectral data sets demonstrate that the proposed approach outperforms the original counterpart, i.e., SR-based classification.
What can neuromorphic event-driven precise timing add to spike-based pattern recognition?
Akolkar, Himanshu; Meyer, Cedric; Clady, Zavier; Marre, Olivier; Bartolozzi, Chiara; Panzeri, Stefano; Benosman, Ryad
2015-03-01
This letter introduces a study to precisely measure what an increase in spike timing precision can add to spike-driven pattern recognition algorithms. The concept of generating spikes from images by converting gray levels into spike timings is currently at the basis of almost every spike-based modeling of biological visual systems. The use of images naturally leads to generating incorrect artificial and redundant spike timings and, more important, also contradicts biological findings indicating that visual processing is massively parallel, asynchronous with high temporal resolution. A new concept for acquiring visual information through pixel-individual asynchronous level-crossing sampling has been proposed in a recent generation of asynchronous neuromorphic visual sensors. Unlike conventional cameras, these sensors acquire data not at fixed points in time for the entire array but at fixed amplitude changes of their input, resulting optimally sparse in space and time-pixel individually and precisely timed only if new, (previously unknown) information is available (event based). This letter uses the high temporal resolution spiking output of neuromorphic event-based visual sensors to show that lowering time precision degrades performance on several recognition tasks specifically when reaching the conventional range of machine vision acquisition frequencies (30-60 Hz). The use of information theory to characterize separability between classes for each temporal resolution shows that high temporal acquisition provides up to 70% more information that conventional spikes generated from frame-based acquisition as used in standard artificial vision, thus drastically increasing the separability between classes of objects. Experiments on real data show that the amount of information loss is correlated with temporal precision. Our information-theoretic study highlights the potentials of neuromorphic asynchronous visual sensors for both practical applications and theoretical investigations. Moreover, it suggests that representing visual information as a precise sequence of spike times as reported in the retina offers considerable advantages for neuro-inspired visual computations.
Hari, Riitta
2017-06-07
Experimental data about brain function accumulate faster than does our understanding of how the brain works. To tackle some general principles at the grain level of behavior, I start from the omnipresent brain-environment connection that forces regularities of the physical world to shape the brain. Based on top-down processing, added by sparse sensory information, people are able to form individual "caricature worlds," which are similar enough to be shared among other people and which allow quick and purposeful reactions to abrupt changes. Temporal dynamics and social interaction in natural environments serve as further essential organizing principles of human brain function. Copyright © 2017 Elsevier Inc. All rights reserved.
Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering
Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus
2015-01-01
This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs. PMID:26146475
Tang, Xin; Feng, Guo-Can; Li, Xiao-Xin; Cai, Jia-Xin
2015-01-01
Face recognition is challenging especially when the images from different persons are similar to each other due to variations in illumination, expression, and occlusion. If we have sufficient training images of each person which can span the facial variations of that person under testing conditions, sparse representation based classification (SRC) achieves very promising results. However, in many applications, face recognition often encounters the small sample size problem arising from the small number of available training images for each person. In this paper, we present a novel face recognition framework by utilizing low-rank and sparse error matrix decomposition, and sparse coding techniques (LRSE+SC). Firstly, the low-rank matrix recovery technique is applied to decompose the face images per class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual is a class-specific dictionary and it captures the discriminative feature of this individual. The sparse error matrix represents the intra-class variations, such as illumination, expression changes. Secondly, we combine the low-rank part (representative basis) of each person into a supervised dictionary and integrate all the sparse error matrix of each individual into a within-individual variant dictionary which can be applied to represent the possible variations between the testing and training images. Then these two dictionaries are used to code the query image. The within-individual variant dictionary can be shared by all the subjects and only contribute to explain the lighting conditions, expressions, and occlusions of the query image rather than discrimination. At last, a reconstruction-based scheme is adopted for face recognition. Since the within-individual dictionary is introduced, LRSE+SC can handle the problem of the corrupted training data and the situation that not all subjects have enough samples for training. Experimental results show that our method achieves the state-of-the-art results on AR, FERET, FRGC and LFW databases.
Tang, Xin; Feng, Guo-can; Li, Xiao-xin; Cai, Jia-xin
2015-01-01
Face recognition is challenging especially when the images from different persons are similar to each other due to variations in illumination, expression, and occlusion. If we have sufficient training images of each person which can span the facial variations of that person under testing conditions, sparse representation based classification (SRC) achieves very promising results. However, in many applications, face recognition often encounters the small sample size problem arising from the small number of available training images for each person. In this paper, we present a novel face recognition framework by utilizing low-rank and sparse error matrix decomposition, and sparse coding techniques (LRSE+SC). Firstly, the low-rank matrix recovery technique is applied to decompose the face images per class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual is a class-specific dictionary and it captures the discriminative feature of this individual. The sparse error matrix represents the intra-class variations, such as illumination, expression changes. Secondly, we combine the low-rank part (representative basis) of each person into a supervised dictionary and integrate all the sparse error matrix of each individual into a within-individual variant dictionary which can be applied to represent the possible variations between the testing and training images. Then these two dictionaries are used to code the query image. The within-individual variant dictionary can be shared by all the subjects and only contribute to explain the lighting conditions, expressions, and occlusions of the query image rather than discrimination. At last, a reconstruction-based scheme is adopted for face recognition. Since the within-individual dictionary is introduced, LRSE+SC can handle the problem of the corrupted training data and the situation that not all subjects have enough samples for training. Experimental results show that our method achieves the state-of-the-art results on AR, FERET, FRGC and LFW databases. PMID:26571112
A coarse-to-fine approach for medical hyperspectral image classification with sparse representation
NASA Astrophysics Data System (ADS)
Chang, Lan; Zhang, Mengmeng; Li, Wei
2017-10-01
A coarse-to-fine approach with sparse representation is proposed for medical hyperspectral image classification in this work. Segmentation technique with different scales is employed to exploit edges of the input image, where coarse super-pixel patches provide global classification information while fine ones further provide detail information. Different from common RGB image, hyperspectral image has multi bands to adjust the cluster center with more high precision. After segmentation, each super pixel is classified by recently-developed sparse representation-based classification (SRC), which assigns label for testing samples in one local patch by means of sparse linear combination of all the training samples. Furthermore, segmentation with multiple scales is employed because single scale is not suitable for complicate distribution of medical hyperspectral imagery. Finally, classification results for different sizes of super pixel are fused by some fusion strategy, offering at least two benefits: (1) the final result is obviously superior to that of segmentation with single scale, and (2) the fusion process significantly simplifies the choice of scales. Experimental results using real medical hyperspectral images demonstrate that the proposed method outperforms the state-of-the-art SRC.
Wen, Zaidao; Hou, Zaidao; Jiao, Licheng
2017-11-01
Discriminative dictionary learning (DDL) framework has been widely used in image classification which aims to learn some class-specific feature vectors as well as a representative dictionary according to a set of labeled training samples. However, interclass similarities and intraclass variances among input samples and learned features will generally weaken the representability of dictionary and the discrimination of feature vectors so as to degrade the classification performance. Therefore, how to explicitly represent them becomes an important issue. In this paper, we present a novel DDL framework with two-level low rank and group sparse decomposition model. In the first level, we learn a class-shared and several class-specific dictionaries, where a low rank and a group sparse regularization are, respectively, imposed on the corresponding feature matrices. In the second level, the class-specific feature matrix will be further decomposed into a low rank and a sparse matrix so that intraclass variances can be separated to concentrate the corresponding feature vectors. Extensive experimental results demonstrate the effectiveness of our model. Compared with the other state-of-the-arts on several popular image databases, our model can achieve a competitive or better performance in terms of the classification accuracy.
Robust Spectral Unmixing of Sparse Multispectral Lidar Waveforms using Gamma Markov Random Fields
Altmann, Yoann; Maccarone, Aurora; McCarthy, Aongus; ...
2017-05-10
Here, this paper presents a new Bayesian spectral un-mixing algorithm to analyse remote scenes sensed via sparse multispectral Lidar measurements. To a first approximation, in the presence of a target, each Lidar waveform consists of a main peak, whose position depends on the target distance and whose amplitude depends on the wavelength of the laser source considered (i.e, on the target reflectivity). Besides, these temporal responses are usually assumed to be corrupted by Poisson noise in the low photon count regime. When considering multiple wavelengths, it becomes possible to use spectral information in order to identify and quantify the mainmore » materials in the scene, in addition to estimation of the Lidar-based range profiles. Due to its anomaly detection capability, the proposed hierarchical Bayesian model, coupled with an efficient Markov chain Monte Carlo algorithm, allows robust estimation of depth images together with abundance and outlier maps associated with the observed 3D scene. The proposed methodology is illustrated via experiments conducted with real multispectral Lidar data acquired in a controlled environment. The results demonstrate the possibility to unmix spectral responses constructed from extremely sparse photon counts (less than 10 photons per pixel and band).« less
Wang, Li-wen; Wei, Ya-xing; Niu, Zheng
2008-06-01
1 km MODIS NDVI time series data combining with decision tree classification, supervised classification and unsupervised classification was used to classify land cover type of Qinghai Province into 14 classes. In our classification system, sparse grassland and sparse shrub were emphasized, and their spatial distribution locations were labeled. From digital elevation model (DEM) of Qinghai Province, five elevation belts were achieved, and we utilized geographic information system (GIS) software to analyze vegetation cover variation on different elevation belts. Our research result shows that vegetation cover in Qinghai Province has been improved in recent five years. Vegetation cover area increases from 370047 km2 in 2001 to 374576 km2 in 2006, and vegetation cover rate increases by 0.63%. Among five grade elevation belts, vegetation cover ratio of high mountain belt is the highest (67.92%). The area of middle density grassland in high mountain belt is the largest, of which area is 94 003 km2. Increased area of dense grassland in high mountain belt is the greatest (1280 km2). During five years, the biggest variation is the conversion from sparse grassland to middle density grassland in high mountain belt, of which area is 15931 km2.
A compressed sensing X-ray camera with a multilayer architecture
NASA Astrophysics Data System (ADS)
Wang, Zhehui; Iaroshenko, O.; Li, S.; Liu, T.; Parab, N.; Chen, W. W.; Chu, P.; Kenyon, G. T.; Lipton, R.; Sun, K.-X.
2018-01-01
Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques, is expected to facilitate the development and applications of high-speed X-ray camera technology.
SD-SEM: sparse-dense correspondence for 3D reconstruction of microscopic samples.
Baghaie, Ahmadreza; Tafti, Ahmad P; Owen, Heather A; D'Souza, Roshan M; Yu, Zeyun
2017-06-01
Scanning electron microscopy (SEM) imaging has been a principal component of many studies in biomedical, mechanical, and materials sciences since its emergence. Despite the high resolution of captured images, they remain two-dimensional (2D). In this work, a novel framework using sparse-dense correspondence is introduced and investigated for 3D reconstruction of stereo SEM images. SEM micrographs from microscopic samples are captured by tilting the specimen stage by a known angle. The pair of SEM micrographs is then rectified using sparse scale invariant feature transform (SIFT) features/descriptors and a contrario RANSAC for matching outlier removal to ensure a gross horizontal displacement between corresponding points. This is followed by dense correspondence estimation using dense SIFT descriptors and employing a factor graph representation of the energy minimization functional and loopy belief propagation (LBP) as means of optimization. Given the pixel-by-pixel correspondence and the tilt angle of the specimen stage during the acquisition of micrographs, depth can be recovered. Extensive tests reveal the strength of the proposed method for high-quality reconstruction of microscopic samples. Copyright © 2017 Elsevier Ltd. All rights reserved.
Efficient Computation of Anharmonic Force Constants via q-space, with Application to Graphene
NASA Astrophysics Data System (ADS)
Kornbluth, Mordechai; Marianetti, Chris
We present a new approach for extracting anharmonic force constants from a sparse sampling of the anharmonic dynamical tensor. We calculate the derivative of the energy with respect to q-space displacements (phonons) and strain, which guarantees the absence of supercell image errors. Central finite differences provide a well-converged quadratic error tail for each derivative, separating the contribution of each anharmonic order. These derivatives populate the anharmonic dynamical tensor in a sparse mesh that bounds the Brillouin Zone, which ensures comprehensive sampling of q-space while exploiting small-cell calculations for efficient, high-throughput computation. This produces a well-converged and precisely-defined dataset, suitable for big-data approaches. We transform this sparsely-sampled anharmonic dynamical tensor to real-space anharmonic force constants that obey full space-group symmetries by construction. Machine-learning techniques identify the range of real-space interactions. We show the entire process executed for graphene, up to and including the fifth-order anharmonic force constants. This method successfully calculates strain-based phonon renormalization in graphene, even under large strains, which solves a major shortcoming of previous potentials.
Sparse imaging for fast electron microscopy
NASA Astrophysics Data System (ADS)
Anderson, Hyrum S.; Ilic-Helms, Jovana; Rohrer, Brandon; Wheeler, Jason; Larson, Kurt
2013-02-01
Scanning electron microscopes (SEMs) are used in neuroscience and materials science to image centimeters of sample area at nanometer scales. Since imaging rates are in large part SNR-limited, large collections can lead to weeks of around-the-clock imaging time. To increase data collection speed, we propose and demonstrate on an operational SEM a fast method to sparsely sample and reconstruct smooth images. To accurately localize the electron probe position at fast scan rates, we model the dynamics of the scan coils, and use the model to rapidly and accurately visit a randomly selected subset of pixel locations. Images are reconstructed from the undersampled data by compressed sensing inversion using image smoothness as a prior. We report image fidelity as a function of acquisition speed by comparing traditional raster to sparse imaging modes. Our approach is equally applicable to other domains of nanometer microscopy in which the time to position a probe is a limiting factor (e.g., atomic force microscopy), or in which excessive electron doses might otherwise alter the sample being observed (e.g., scanning transmission electron microscopy).
Tensor-guided fitting of subduction slab depths
Bazargani, Farhad; Hayes, Gavin P.
2013-01-01
Geophysical measurements are often acquired at scattered locations in space. Therefore, interpolating or fitting the sparsely sampled data as a uniform function of space (a procedure commonly known as gridding) is a ubiquitous problem in geophysics. Most gridding methods require a model of spatial correlation for data. This spatial correlation model can often be inferred from some sort of secondary information, which may also be sparsely sampled in space. In this paper, we present a new method to model the geometry of a subducting slab in which we use a data‐fitting approach to address the problem. Earthquakes and active‐source seismic surveys provide estimates of depths of subducting slabs but only at scattered locations. In addition to estimates of depths from earthquake locations, focal mechanisms of subduction zone earthquakes also provide estimates of the strikes of the subducting slab on which they occur. We use these spatially sparse strike samples and the Earth’s curved surface geometry to infer a model for spatial correlation that guides a blended neighbor interpolation of slab depths. We then modify the interpolation method to account for the uncertainties associated with the depth estimates.
Holcomb, David A; Messier, Kyle P; Serre, Marc L; Rowny, Jakob G; Stewart, Jill R
2018-06-25
Predictive modeling is promising as an inexpensive tool to assess water quality. We developed geostatistical predictive models of microbial water quality that empirically modeled spatiotemporal autocorrelation in measured fecal coliform (FC) bacteria concentrations to improve prediction. We compared five geostatistical models featuring different autocorrelation structures, fit to 676 observations from 19 locations in North Carolina's Jordan Lake watershed using meteorological and land cover predictor variables. Though stream distance metrics (with and without flow-weighting) failed to improve prediction over the Euclidean distance metric, incorporating temporal autocorrelation substantially improved prediction over the space-only models. We predicted FC throughout the stream network daily for one year, designating locations "impaired", "unimpaired", or "unassessed" if the probability of exceeding the state standard was ≥90%, ≤10%, or >10% but <90%, respectively. We could assign impairment status to more of the stream network on days any FC were measured, suggesting frequent sample-based monitoring remains necessary, though implementing spatiotemporal predictive models may reduce the number of concurrent sampling locations required to adequately assess water quality. Together, these results suggest that prioritizing sampling at different times and conditions using geographically sparse monitoring networks is adequate to build robust and informative geostatistical models of water quality impairment.
Low-rank and Adaptive Sparse Signal (LASSI) Models for Highly Accelerated Dynamic Imaging
Ravishankar, Saiprasad; Moore, Brian E.; Nadakuditi, Raj Rao; Fessler, Jeffrey A.
2017-01-01
Sparsity-based approaches have been popular in many applications in image processing and imaging. Compressed sensing exploits the sparsity of images in a transform domain or dictionary to improve image recovery from undersampled measurements. In the context of inverse problems in dynamic imaging, recent research has demonstrated the promise of sparsity and low-rank techniques. For example, the patches of the underlying data are modeled as sparse in an adaptive dictionary domain, and the resulting image and dictionary estimation from undersampled measurements is called dictionary-blind compressed sensing, or the dynamic image sequence is modeled as a sum of low-rank and sparse (in some transform domain) components (L+S model) that are estimated from limited measurements. In this work, we investigate a data-adaptive extension of the L+S model, dubbed LASSI, where the temporal image sequence is decomposed into a low-rank component and a component whose spatiotemporal (3D) patches are sparse in some adaptive dictionary domain. We investigate various formulations and efficient methods for jointly estimating the underlying dynamic signal components and the spatiotemporal dictionary from limited measurements. We also obtain efficient sparsity penalized dictionary-blind compressed sensing methods as special cases of our LASSI approaches. Our numerical experiments demonstrate the promising performance of LASSI schemes for dynamic magnetic resonance image reconstruction from limited k-t space data compared to recent methods such as k-t SLR and L+S, and compared to the proposed dictionary-blind compressed sensing method. PMID:28092528
Low-Rank and Adaptive Sparse Signal (LASSI) Models for Highly Accelerated Dynamic Imaging.
Ravishankar, Saiprasad; Moore, Brian E; Nadakuditi, Raj Rao; Fessler, Jeffrey A
2017-05-01
Sparsity-based approaches have been popular in many applications in image processing and imaging. Compressed sensing exploits the sparsity of images in a transform domain or dictionary to improve image recovery fromundersampledmeasurements. In the context of inverse problems in dynamic imaging, recent research has demonstrated the promise of sparsity and low-rank techniques. For example, the patches of the underlying data are modeled as sparse in an adaptive dictionary domain, and the resulting image and dictionary estimation from undersampled measurements is called dictionary-blind compressed sensing, or the dynamic image sequence is modeled as a sum of low-rank and sparse (in some transform domain) components (L+S model) that are estimated from limited measurements. In this work, we investigate a data-adaptive extension of the L+S model, dubbed LASSI, where the temporal image sequence is decomposed into a low-rank component and a component whose spatiotemporal (3D) patches are sparse in some adaptive dictionary domain. We investigate various formulations and efficient methods for jointly estimating the underlying dynamic signal components and the spatiotemporal dictionary from limited measurements. We also obtain efficient sparsity penalized dictionary-blind compressed sensing methods as special cases of our LASSI approaches. Our numerical experiments demonstrate the promising performance of LASSI schemes for dynamicmagnetic resonance image reconstruction from limited k-t space data compared to recent methods such as k-t SLR and L+S, and compared to the proposed dictionary-blind compressed sensing method.
X-ray computed tomography using curvelet sparse regularization.
Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias
2015-04-01
Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
The Joker: A custom Monte Carlo sampler for binary-star and exoplanet radial velocity data
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.; Hogg, David W.; Foreman-Mackey, Daniel; Rix, Hans-Walter
2017-01-01
Given sparse or low-quality radial-velocity measurements of a star, there are often many qualitatively different stellar or exoplanet companion orbit models that are consistent with the data. The consequent multimodality of the likelihood function leads to extremely challenging search, optimization, and MCMC posterior sampling over the orbital parameters. The Joker is a custom-built Monte Carlo sampler that can produce a posterior sampling for orbital parameters given sparse or noisy radial-velocity measurements, even when the likelihood function is poorly behaved. The method produces correct samplings in orbital parameters for data that include as few as three epochs. The Joker can therefore be used to produce proper samplings of multimodal pdfs, which are still highly informative and can be used in hierarchical (population) modeling.
Sparse-View Ultrasound Diffraction Tomography Using Compressed Sensing with Nonuniform FFT
2014-01-01
Accurate reconstruction of the object from sparse-view sampling data is an appealing issue for ultrasound diffraction tomography (UDT). In this paper, we present a reconstruction method based on compressed sensing framework for sparse-view UDT. Due to the piecewise uniform characteristics of anatomy structures, the total variation is introduced into the cost function to find a more faithful sparse representation of the object. The inverse problem of UDT is iteratively resolved by conjugate gradient with nonuniform fast Fourier transform. Simulation results show the effectiveness of the proposed method that the main characteristics of the object can be properly presented with only 16 views. Compared to interpolation and multiband method, the proposed method can provide higher resolution and lower artifacts with the same view number. The robustness to noise and the computation complexity are also discussed. PMID:24868241
Effective dimension reduction for sparse functional data
YAO, F.; LEI, E.; WU, Y.
2015-01-01
Summary We propose a method of effective dimension reduction for functional data, emphasizing the sparse design where one observes only a few noisy and irregular measurements for some or all of the subjects. The proposed method borrows strength across the entire sample and provides a way to characterize the effective dimension reduction space, via functional cumulative slicing. Our theoretical study reveals a bias-variance trade-off associated with the regularizing truncation and decaying structures of the predictor process and the effective dimension reduction space. A simulation study and an application illustrate the superior finite-sample performance of the method. PMID:26566293
NASA Astrophysics Data System (ADS)
Su, Wei; Zhou, Ti; Zhang, Peng; Zhou, Hong; Li, Hui
2018-01-01
Some biological surfaces were proved to have excellent anti-wear performance. Being inspired, Nd:YAG pulsed laser was used to create striated biomimetic laser hardening tracks on medium carbon steel samples. Dry sliding wear tests biomimetic samples were performed to investigate specific influence of distribution of laser hardening tracks on sliding wear resistance of biomimetic samples. After comparing wear weight loss of biomimetic samples, quenched sample and untreated sample, it can be suggested that the sample covered with dense laser tracks (3.5 mm spacing) has lower wear weight loss than the one covered with sparse laser tracks (4.5 mm spacing); samples distributed with only dense laser tracks or sparse laser tracks (even distribution) were proved to have better wear resistance than samples distributed with both dense and sparse tracks (uneven distribution). Wear mechanisms indicate that laser track and exposed substrate of biomimetic sample can be regarded as hard zone and soft zone respectively. Inconsecutive striated hard regions, on the one hand, can disperse load into small branches, on the other hand, will hinder sliding abrasives during wear. Soft regions with small range are beneficial in consuming mechanical energy and storing lubricative oxides, however, soft zone with large width (>0.5 mm) will be harmful to abrasion resistance of biomimetic sample because damages and material loss are more obvious on surface of soft phase. As for the reason why samples with even distributed bionic laser tracks have better wear resistance, it can be explained by the fact that even distributed laser hardening tracks can inhibit severe worn of local regions, thus sliding process can be more stable and wear extent can be alleviated as well.
Ream, Justin M; Doshi, Ankur; Lala, Shailee V; Kim, Sooah; Rusinek, Henry; Chandarana, Hersh
2015-06-01
The purpose of this article was to assess the feasibility of golden-angle radial acquisition with compress sensing reconstruction (Golden-angle RAdial Sparse Parallel [GRASP]) for acquiring high temporal resolution data for pharmacokinetic modeling while maintaining high image quality in patients with Crohn disease terminal ileitis. Fourteen patients with biopsy-proven Crohn terminal ileitis were scanned using both contrast-enhanced GRASP and Cartesian breath-hold (volume-interpolated breath-hold examination [VIBE]) acquisitions. GRASP data were reconstructed with 2.4-second temporal resolution and fitted to the generalized kinetic model using an individualized arterial input function to derive the volume transfer coefficient (K(trans)) and interstitial volume (v(e)). Reconstructions, including data from the entire GRASP acquisition and Cartesian VIBE acquisitions, were rated for image quality, artifact, and detection of typical Crohn ileitis features. Inflamed loops of ileum had significantly higher K(trans) (3.36 ± 2.49 vs 0.86 ± 0.49 min(-1), p < 0.005) and v(e) (0.53 ± 0.15 vs 0.20 ± 0.11, p < 0.005) compared with normal bowel loops. There were no significant differences between GRASP and Cartesian VIBE for overall image quality (p = 0.180) or detection of Crohn ileitis features, although streak artifact was worse with the GRASP acquisition (p = 0.001). High temporal resolution data for pharmacokinetic modeling and high spatial resolution data for morphologic image analysis can be achieved in the same acquisition using GRASP.
NASA Astrophysics Data System (ADS)
Milshteyn, Eugene; von Morze, Cornelius; Reed, Galen D.; Shang, Hong; Shin, Peter J.; Larson, Peder E. Z.; Vigneron, Daniel B.
2018-05-01
Acceleration of dynamic 2D (T2 Mapping) and 3D hyperpolarized 13C MRI acquisitions using the balanced steady-state free precession sequence was achieved with a specialized reconstruction method, based on the combination of low rank plus sparse and local low rank reconstructions. Methods were validated using both retrospectively and prospectively undersampled in vivo data from normal rats and tumor-bearing mice. Four-fold acceleration of 1-2 mm isotropic 3D dynamic acquisitions with 2-5 s temporal resolution and two-fold acceleration of 0.25-1 mm2 2D dynamic acquisitions was achieved. This enabled visualization of the biodistribution of [2-13C]pyruvate, [1-13C]lactate, [13C, 15N2]urea, and HP001 within heart, kidneys, vasculature, and tumor, as well as calculation of high resolution T2 maps.
Nonparametric estimation of stochastic differential equations with sparse Gaussian processes.
García, Constantino A; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G
2017-08-01
The application of stochastic differential equations (SDEs) to the analysis of temporal data has attracted increasing attention, due to their ability to describe complex dynamics with physically interpretable equations. In this paper, we introduce a nonparametric method for estimating the drift and diffusion terms of SDEs from a densely observed discrete time series. The use of Gaussian processes as priors permits working directly in a function-space view and thus the inference takes place directly in this space. To cope with the computational complexity that requires the use of Gaussian processes, a sparse Gaussian process approximation is provided. This approximation permits the efficient computation of predictions for the drift and diffusion terms by using a distribution over a small subset of pseudosamples. The proposed method has been validated using both simulated data and real data from economy and paleoclimatology. The application of the method to real data demonstrates its ability to capture the behavior of complex systems.
A guided wave dispersion compensation method based on compressed sensing
NASA Astrophysics Data System (ADS)
Xu, Cai-bin; Yang, Zhi-bo; Chen, Xue-feng; Tian, Shao-hua; Xie, Yong
2018-03-01
The ultrasonic guided wave has emerged as a promising tool for structural health monitoring (SHM) and nondestructive testing (NDT) due to their capability to propagate over long distances with minimal loss and sensitivity to both surface and subsurface defects. The dispersion effect degrades the temporal and spatial resolution of guided waves. A novel ultrasonic guided wave processing method for both single mode and multi-mode guided waves dispersion compensation is proposed in this work based on compressed sensing, in which a dispersion signal dictionary is built by utilizing the dispersion curves of the guided wave modes in order to sparsely decompose the recorded dispersive guided waves. Dispersion-compensated guided waves are obtained by utilizing a non-dispersion signal dictionary and the results of sparse decomposition. Numerical simulations and experiments are implemented to verify the effectiveness of the developed method for both single mode and multi-mode guided waves.
Newmark-Beta-FDTD method for super-resolution analysis of time reversal waves
NASA Astrophysics Data System (ADS)
Shi, Sheng-Bing; Shao, Wei; Ma, Jing; Jin, Congjun; Wang, Xiao-Hua
2017-09-01
In this work, a new unconditionally stable finite-difference time-domain (FDTD) method with the split-field perfectly matched layer (PML) is proposed for the analysis of time reversal (TR) waves. The proposed method is very suitable for multiscale problems involving microstructures. The spatial and temporal derivatives in this method are discretized by the central difference technique and Newmark-Beta algorithm, respectively, and the derivation results in the calculation of a banded-sparse matrix equation. Since the coefficient matrix keeps unchanged during the whole simulation process, the lower-upper (LU) decomposition of the matrix needs to be performed only once at the beginning of the calculation. Moreover, the reverse Cuthill-Mckee (RCM) technique, an effective preprocessing technique in bandwidth compression of sparse matrices, is used to improve computational efficiency. The super-resolution focusing of TR wave propagation in two- and three-dimensional spaces is included to validate the accuracy and efficiency of the proposed method.
Chu, Hui-May; Ette, Ene I
2005-09-02
his study was performed to develop a new nonparametric approach for the estimation of robust tissue-to-plasma ratio from extremely sparsely sampled paired data (ie, one sample each from plasma and tissue per subject). Tissue-to-plasma ratio was estimated from paired/unpaired experimental data using independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling based approaches (eg, the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to 2 concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naive data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.
NASA Astrophysics Data System (ADS)
Garry, Freya; McDonagh, Elaine; Blaker, Adam; Roberts, Chris; Desbruyères, Damien; King, Brian
2017-04-01
Estimates of heat content change in the deep oceans (below 2000 m) over the last thirty years are obtained from temperature measurements made by hydrographic survey ships. Cruises occupy the same tracks across an ocean basin approximately every 5+ years. Measurements may not be sufficiently frequent in time or space to allow accurate evaluation of total ocean heat content (OHC) and its rate of change. It is widely thought that additional deep ocean sampling will also aid understanding of the mechanisms for OHC change on annual to decadal timescales, including how OHC varies regionally under natural and anthropogenically forced climate change. Here a 0.25˚ ocean model is used to investigate the magnitude of uncertainties and biases that exist in estimates of deep ocean temperature change from hydrographic sections due to their infrequent timing and sparse spatial distribution during 1990 - 2010. Biases in the observational data may be due to lack of spatial coverage (not enough sections covering the basin), lack of data between occupations (typically 5-10 years apart) and due to occupations not closely spanning the time period of interest. Between 1990 - 2010, the modelled biases globally are comparatively small in the abyssal ocean below 3500 m although regionally certain biases in heat flux into the 4000 - 6000 m layer can be up to 0.05 Wm-2. Biases in the heat flux into the deep 2000 - 4000 m layer due to either temporal or spatial sampling uncertainties are typically much larger and can be over 0.1 Wm-2 across an ocean. Overall, 82% of the warming trend below 2000 m is captured by observational-style sampling in the model. However, at 2500 m (too deep for additional temperature information to be inferred from upper ocean Argo) less than two thirds of the magnitude of the global warming trend is obtained, and regionally large biases exist in the Atlantic, Southern and Indian Oceans, highlighting the need for widespread improved deep ocean temperature sampling. In addition to bias due to infrequent sampling, moving the timings of occupations by a few months generates relatively large uncertainty due to intra-annual variability in deep ocean model temperature, further strengthening the case for high temporal frequency observations in the deep ocean (as could be achieved using deep ocean autonomous float technologies). Biases due to different uncertainties can have opposing signs and differ in relative importance both regionally and with depth revealing the importance of reducing all uncertainties (both spatial and temporal) simultaneously in future deep ocean observing design.
Sequential time interleaved random equivalent sampling for repetitive signal.
Zhao, Yijiu; Liu, Jingjing
2016-12-01
Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they are also incorporated into non-uniform sampling signal reconstruction to improve the efficiency, such as random equivalent sampling (RES). However, in CS based RES, only one sample of each acquisition is considered in the signal reconstruction stage, and it will result in more acquisition runs and longer sampling time. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using a Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-cores analog-to-digital converter (ADC), whose ADC cores are time interleaved. A prototype realization of this proposed CS based sequential random equivalent sampling method has been developed. It is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while sampled at 1 GHz physically. Experiments indicate that, for a sparse signal, the proposed CS based sequential random equivalent sampling exhibits high efficiency.
Functional Additive Mixed Models
Scheipl, Fabian; Staicu, Ana-Maria; Greven, Sonja
2014-01-01
We propose an extensive framework for additive regression models for correlated functional responses, allowing for multiple partially nested or crossed functional random effects with flexible correlation structures for, e.g., spatial, temporal, or longitudinal functional data. Additionally, our framework includes linear and nonlinear effects of functional and scalar covariates that may vary smoothly over the index of the functional response. It accommodates densely or sparsely observed functional responses and predictors which may be observed with additional error and includes both spline-based and functional principal component-based terms. Estimation and inference in this framework is based on standard additive mixed models, allowing us to take advantage of established methods and robust, flexible algorithms. We provide easy-to-use open source software in the pffr() function for the R-package refund. Simulations show that the proposed method recovers relevant effects reliably, handles small sample sizes well and also scales to larger data sets. Applications with spatially and longitudinally observed functional data demonstrate the flexibility in modeling and interpretability of results of our approach. PMID:26347592
Functional Additive Mixed Models.
Scheipl, Fabian; Staicu, Ana-Maria; Greven, Sonja
2015-04-01
We propose an extensive framework for additive regression models for correlated functional responses, allowing for multiple partially nested or crossed functional random effects with flexible correlation structures for, e.g., spatial, temporal, or longitudinal functional data. Additionally, our framework includes linear and nonlinear effects of functional and scalar covariates that may vary smoothly over the index of the functional response. It accommodates densely or sparsely observed functional responses and predictors which may be observed with additional error and includes both spline-based and functional principal component-based terms. Estimation and inference in this framework is based on standard additive mixed models, allowing us to take advantage of established methods and robust, flexible algorithms. We provide easy-to-use open source software in the pffr() function for the R-package refund. Simulations show that the proposed method recovers relevant effects reliably, handles small sample sizes well and also scales to larger data sets. Applications with spatially and longitudinally observed functional data demonstrate the flexibility in modeling and interpretability of results of our approach.
Fiber-bundle-basis sparse reconstruction for high resolution wide-field microendoscopy.
Mekhail, Simon Peter; Abudukeyoumu, Nilupaer; Ward, Jonathan; Arbuthnott, Gordon; Chormaic, Síle Nic
2018-04-01
In order to observe deep regions of the brain, we propose the use of a fiber bundle for microendoscopy. Fiber bundles allow for the excitation and collection of fluorescence as well as wide field imaging while remaining largely impervious to image distortions brought on by bending. Furthermore, their thin diameter, from 200-500 µ m, means their impact on living tissue, though not absent, is minimal. Although wide field imaging with a bundle allows for a high temporal resolution since no scanning is involved, the largest criticism of bundle imaging is the drastically lowered spatial resolution. In this paper, we make use of sparsity in the object being imaged to up sample the low resolution images from the fiber bundle with compressive sensing. We take each image in a single shot by using a measurement basis dictated by the quasi-crystalline arrangement of the bundle's cores. We find that this technique allows us to increase the resolution of a typical image taken through a fiber bundle.
Randomized subspace-based robust principal component analysis for hyperspectral anomaly detection
NASA Astrophysics Data System (ADS)
Sun, Weiwei; Yang, Gang; Li, Jialin; Zhang, Dianfa
2018-01-01
A randomized subspace-based robust principal component analysis (RSRPCA) method for anomaly detection in hyperspectral imagery (HSI) is proposed. The RSRPCA combines advantages of randomized column subspace and robust principal component analysis (RPCA). It assumes that the background has low-rank properties, and the anomalies are sparse and do not lie in the column subspace of the background. First, RSRPCA implements random sampling to sketch the original HSI dataset from columns and to construct a randomized column subspace of the background. Structured random projections are also adopted to sketch the HSI dataset from rows. Sketching from columns and rows could greatly reduce the computational requirements of RSRPCA. Second, the RSRPCA adopts the columnwise RPCA (CWRPCA) to eliminate negative effects of sampled anomaly pixels and that purifies the previous randomized column subspace by removing sampled anomaly columns. The CWRPCA decomposes the submatrix of the HSI data into a low-rank matrix (i.e., background component), a noisy matrix (i.e., noise component), and a sparse anomaly matrix (i.e., anomaly component) with only a small proportion of nonzero columns. The algorithm of inexact augmented Lagrange multiplier is utilized to optimize the CWRPCA problem and estimate the sparse matrix. Nonzero columns of the sparse anomaly matrix point to sampled anomaly columns in the submatrix. Third, all the pixels are projected onto the complemental subspace of the purified randomized column subspace of the background and the anomaly pixels in the original HSI data are finally exactly located. Several experiments on three real hyperspectral images are carefully designed to investigate the detection performance of RSRPCA, and the results are compared with four state-of-the-art methods. Experimental results show that the proposed RSRPCA outperforms four comparison methods both in detection performance and in computational time.
Yan, Yiming; Tan, Zhichao; Su, Nan; Zhao, Chunhui
2017-08-24
In this paper, a building extraction method is proposed based on a stacked sparse autoencoder with an optimized structure and training samples. Building extraction plays an important role in urban construction and planning. However, some negative effects will reduce the accuracy of extraction, such as exceeding resolution, bad correction and terrain influence. Data collected by multiple sensors, as light detection and ranging (LIDAR), optical sensor etc., are used to improve the extraction. Using digital surface model (DSM) obtained from LIDAR data and optical images, traditional method can improve the extraction effect to a certain extent, but there are some defects in feature extraction. Since stacked sparse autoencoder (SSAE) neural network can learn the essential characteristics of the data in depth, SSAE was employed to extract buildings from the combined DSM data and optical image. A better setting strategy of SSAE network structure is given, and an idea of setting the number and proportion of training samples for better training of SSAE was presented. The optical data and DSM were combined as input of the optimized SSAE, and after training by an optimized samples, the appropriate network structure can extract buildings with great accuracy and has good robustness.
Jenison, Rick L.; Reale, Richard A.; Armstrong, Amanda L.; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A.
2015-01-01
Spectro-Temporal Receptive Fields (STRFs) were estimated from both multi-unit sorted clusters and high-gamma power responses in human auditory cortex. Intracranial electrophysiological recordings were used to measure responses to a random chord sequence of Gammatone stimuli. Traditional methods for estimating STRFs from single-unit recordings, such as spike-triggered-averages, tend to be noisy and are less robust to other response signals such as local field potentials. We present an extension to recently advanced methods for estimating STRFs from generalized linear models (GLM). A new variant of regression using regularization that penalizes non-zero coefficients is described, which results in a sparse solution. The frequency-time structure of the STRF tends toward grouping in different areas of frequency-time and we demonstrate that group sparsity-inducing penalties applied to GLM estimates of STRFs reduces the background noise while preserving the complex internal structure. The contribution of local spiking activity to the high-gamma power signal was factored out of the STRF using the GLM method, and this contribution was significant in 85 percent of the cases. Although the GLM methods have been used to estimate STRFs in animals, this study examines the detailed structure directly from auditory cortex in the awake human brain. We used this approach to identify an abrupt change in the best frequency of estimated STRFs along posteromedial-to-anterolateral recording locations along the long axis of Heschl’s gyrus. This change correlates well with a proposed transition from core to non-core auditory fields previously identified using the temporal response properties of Heschl’s gyrus recordings elicited by click-train stimuli. PMID:26367010
Kandler, Anne; Shennan, Stephen
2015-12-06
Cultural change can be quantified by temporal changes in frequency of different cultural artefacts and it is a central question to identify what underlying cultural transmission processes could have caused the observed frequency changes. Observed changes, however, often describe the dynamics in samples of the population of artefacts, whereas transmission processes act on the whole population. Here we develop a modelling framework aimed at addressing this inference problem. To do so, we firstly generate population structures from which the observed sample could have been drawn randomly and then determine theoretical samples at a later time t2 produced under the assumption that changes in frequencies are caused by a specific transmission process. Thereby we also account for the potential effect of time-averaging processes in the generation of the observed sample. Subsequent statistical comparisons (e.g. using Bayesian inference) of the theoretical and observed samples at t2 can establish which processes could have produced the observed frequency data. In this way, we infer underlying transmission processes directly from available data without any equilibrium assumption. We apply this framework to a dataset describing pottery from settlements of some of the first farmers in Europe (the LBK culture) and conclude that the observed frequency dynamic of different types of decorated pottery is consistent with age-dependent selection, a preference for 'young' pottery types which is potentially indicative of fashion trends. © 2015 The Author(s).
Deep ensemble learning of sparse regression models for brain disease diagnosis.
Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang
2017-04-01
Recent studies on brain imaging analysis witnessed the core roles of machine learning techniques in computer-assisted intervention for brain disease diagnosis. Of various machine-learning techniques, sparse regression models have proved their effectiveness in handling high-dimensional data but with a small number of training samples, especially in medical problems. In the meantime, deep learning methods have been making great successes by outperforming the state-of-the-art performances in various applications. In this paper, we propose a novel framework that combines the two conceptually different methods of sparse regression and deep learning for Alzheimer's disease/mild cognitive impairment diagnosis and prognosis. Specifically, we first train multiple sparse regression models, each of which is trained with different values of a regularization control parameter. Thus, our multiple sparse regression models potentially select different feature subsets from the original feature set; thereby they have different powers to predict the response values, i.e., clinical label and clinical scores in our work. By regarding the response values from our sparse regression models as target-level representations, we then build a deep convolutional neural network for clinical decision making, which thus we call 'Deep Ensemble Sparse Regression Network.' To our best knowledge, this is the first work that combines sparse regression models with deep neural network. In our experiments with the ADNI cohort, we validated the effectiveness of the proposed method by achieving the highest diagnostic accuracies in three classification tasks. We also rigorously analyzed our results and compared with the previous studies on the ADNI cohort in the literature. Copyright © 2017 Elsevier B.V. All rights reserved.
Image super-resolution via sparse representation.
Yang, Jianchao; Wright, John; Huang, Thomas S; Ma, Yi
2010-11-01
This paper presents a new approach to single-image super-resolution, based on sparse signal representation. Research on image statistics suggests that image patches can be well-represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low resolution and high resolution image patch pair with respect to their own dictionaries. Therefore, the sparse representation of a low resolution image patch can be applied with the high resolution image patch dictionary to generate a high resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large amount of image patch pairs, reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle super-resolution with noisy inputs in a more unified framework.
Deep ensemble learning of sparse regression models for brain disease diagnosis
Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang
2018-01-01
Recent studies on brain imaging analysis witnessed the core roles of machine learning techniques in computer-assisted intervention for brain disease diagnosis. Of various machine-learning techniques, sparse regression models have proved their effectiveness in handling high-dimensional data but with a small number of training samples, especially in medical problems. In the meantime, deep learning methods have been making great successes by outperforming the state-of-the-art performances in various applications. In this paper, we propose a novel framework that combines the two conceptually different methods of sparse regression and deep learning for Alzheimer’s disease/mild cognitive impairment diagnosis and prognosis. Specifically, we first train multiple sparse regression models, each of which is trained with different values of a regularization control parameter. Thus, our multiple sparse regression models potentially select different feature subsets from the original feature set; thereby they have different powers to predict the response values, i.e., clinical label and clinical scores in our work. By regarding the response values from our sparse regression models as target-level representations, we then build a deep convolutional neural network for clinical decision making, which thus we call ‘ Deep Ensemble Sparse Regression Network.’ To our best knowledge, this is the first work that combines sparse regression models with deep neural network. In our experiments with the ADNI cohort, we validated the effectiveness of the proposed method by achieving the highest diagnostic accuracies in three classification tasks. We also rigorously analyzed our results and compared with the previous studies on the ADNI cohort in the literature. PMID:28167394
Statistical characterization of global Sea Surface Salinity for SMOS level 3 and 4 products
NASA Astrophysics Data System (ADS)
Gourrion, J.; Aretxabaleta, A. L.; Ballabrera, J.; Mourre, B.
2009-04-01
The Soil Moisture and Ocean Salinity (SMOS) mission of the European Space Agency will soon provide sea surface salinity (SSS) estimates to the scientific community. Because of the numerous geophysical contamination sources and the instrument complexity, the salinity products will have a low signal to noise ratio at level 2 (individual estimates??) that is expected to increase up to mission requirements (0.1 psu) at level 3 (global maps with regular distribution) after spatio-temporal accumulation of the observations. Geostatistical methods such as Optimal Interpolation are being implemented at the level 3/4 production centers to operate this noise reduction step. The methodologies require auxiliary information about SSS statistics that, under Gaussian assumption, consist in the mean field and the covariance of the departures from it. The present study is a contribution to the definition of the best estimates for mean field and covariances to be used in the near-future SMOS level 3 and 4 products. We use complementary information from sparse in-situ observations and imperfect outputs from state-of-art model simulations. Various estimates of the mean field are compared. An alternative is the use of a SSS climatology such as the one provided by the World Ocean Atlas 2005. An historical SSS dataset from the World Ocean Database 2005 is reanalyzed and combined with the recent global observations obtained by the Array for Real-Time Geostrophic Oceanography (ARGO). Regional tendencies in the long-term temporal evolution of the near-surface ocean salinity are evident, suggesting that the use of a SSS climatology to describe the current mean field may introduce biases of magnitude similar to the precision goal. Consequently, a recent SSS dataset may be preferred to define the mean field needed for SMOS level 3 and 4 production. The in-situ observation network allows a global mapping of the low frequency component of the variability, i.e. decadal, interannual and seasonal scales. Unfortunately, its sparse spatio-temporal sampling allows only an incomplete description of higher frequency variability. At this point, hindcasts from operational ocean prediction systems appear as a potential source for the characterization of high frequency SSS variance and spatial correlations. Preliminary validation of model outputs is performed. This work is part of the effort conducted at the SMOS Barcelona Expert Center (http://www.smos-bec.icm.csic.es) aiming at contributing to the ground segment of the SMOS mission.
Bayesian Modeling of Temporal Coherence in Videos for Entity Discovery and Summarization.
Mitra, Adway; Biswas, Soma; Bhattacharyya, Chiranjib
2017-03-01
A video is understood by users in terms of entities present in it. Entity Discovery is the task of building appearance model for each entity (e.g., a person), and finding all its occurrences in the video. We represent a video as a sequence of tracklets, each spanning 10-20 frames, and associated with one entity. We pose Entity Discovery as tracklet clustering, and approach it by leveraging Temporal Coherence (TC): the property that temporally neighboring tracklets are likely to be associated with the same entity. Our major contributions are the first Bayesian nonparametric models for TC at tracklet-level. We extend Chinese Restaurant Process (CRP) to TC-CRP, and further to Temporally Coherent Chinese Restaurant Franchise (TC-CRF) to jointly model entities and temporal segments using mixture components and sparse distributions. For discovering persons in TV serial videos without meta-data like scripts, these methods show considerable improvement over state-of-the-art approaches to tracklet clustering in terms of clustering accuracy, cluster purity and entity coverage. The proposed methods can perform online tracklet clustering on streaming videos unlike existing approaches, and can automatically reject false tracklets. Finally we discuss entity-driven video summarization- where temporal segments of the video are selected based on the discovered entities, to create a semantically meaningful summary.
Revealing the Hidden Water Budget of an Alpine Volcanic Watershed Using a Bayesian Mixing Model
NASA Astrophysics Data System (ADS)
Markovich, K. H.; Arumi, J. L.; Dahlke, H. E.; Fogg, G. E.
2017-12-01
Climate change is altering alpine water budgets in observable ways, such as snow melting sooner or falling as rain, but also in hidden ways, such as shifting recharge timing and increased evapotranspiration demand leading to diminished summer low flows. The combination of complex hydrogeology and sparse availability of data make it difficult to predict the direction or magnitude of shifts in alpine water budgets, and thus difficult to inform decision-making. We present a data sparse watershed in the Andes Mountains of central Chile in which complex geology, interbasin flows, and surface water-groundwater interactions impede our ability to fully describe the water budget. We collected water samples for stable isotopes and major anions and cations, over the course of water year 2016-17 to characterize the spatial and temporal variability in endmember signatures (snow, rain, and groundwater). We use a Bayesian Hierarchical Model (BHM) to explicitly incorporate uncertainty and prior information into a mixing model, and predict the proportional contribution of snow, rain, and groundwater to streamflow throughout the year for the full catchment as well as its two sub-catchments. Preliminary results suggest that streamflow is likely more rainfall-dominated than previously thought, which not only alters our projections of climate change impacts, but make this watershed a potential example for other watersheds undergoing a snow to rain transition. Understanding how these proportions vary in space and time will help us elucidate key information on stores, fluxes, and timescales of water flow for improved current and future water resource management.
Preconditioned conjugate gradient wave-front reconstructors for multiconjugate adaptive optics
NASA Astrophysics Data System (ADS)
Gilles, Luc; Ellerbroek, Brent L.; Vogel, Curtis R.
2003-09-01
Multiconjugate adaptive optics (MCAO) systems with 104-105 degrees of freedom have been proposed for future giant telescopes. Using standard matrix methods to compute, optimize, and implement wave-front control algorithms for these systems is impractical, since the number of calculations required to compute and apply the reconstruction matrix scales respectively with the cube and the square of the number of adaptive optics degrees of freedom. We develop scalable open-loop iterative sparse matrix implementations of minimum variance wave-front reconstruction for telescope diameters up to 32 m with more than 104 actuators. The basic approach is the preconditioned conjugate gradient method with an efficient preconditioner, whose block structure is defined by the atmospheric turbulent layers very much like the layer-oriented MCAO algorithms of current interest. Two cost-effective preconditioners are investigated: a multigrid solver and a simpler block symmetric Gauss-Seidel (BSGS) sweep. Both options require off-line sparse Cholesky factorizations of the diagonal blocks of the matrix system. The cost to precompute these factors scales approximately as the three-halves power of the number of estimated phase grid points per atmospheric layer, and their average update rate is typically of the order of 10-2 Hz, i.e., 4-5 orders of magnitude lower than the typical 103 Hz temporal sampling rate. All other computations scale almost linearly with the total number of estimated phase grid points. We present numerical simulation results to illustrate algorithm convergence. Convergence rates of both preconditioners are similar, regardless of measurement noise level, indicating that the layer-oriented BSGS sweep is as effective as the more elaborated multiresolution preconditioner.
A compressed sensing X-ray camera with a multilayer architecture
Wang, Zhehui; Laroshenko, O.; Li, S.; ...
2018-01-25
Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. In this work, wemore » first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques, is expected to facilitate the development and applications of high-speed X-ray camera technology.« less
A compressed sensing X-ray camera with a multilayer architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhehui; Laroshenko, O.; Li, S.
Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. In this work, wemore » first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques, is expected to facilitate the development and applications of high-speed X-ray camera technology.« less
Immunological memory is associative
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, D.J.; Forrest, S.; Perelson, A.S.
1996-12-31
The purpose of this paper is to show that immunological memory is an associative and robust memory that belongs to the class of sparse distributed memories. This class of memories derives its associative and robust nature by sparsely sampling the input space and distributing the data among many independent agents. Other members of this class include a model of the cerebellar cortex and Sparse Distributed Memory (SDM). First we present a simplified account of the immune response and immunological memory. Next we present SDM, and then we show the correlations between immunological memory and SDM. Finally, we show how associativemore » recall in the immune response can be both beneficial and detrimental to the fitness of an individual.« less
Supervised Learning Based on Temporal Coding in Spiking Neural Networks.
Mostafa, Hesham
2017-08-01
Gradient descent training techniques are remarkably successful in training analog-valued artificial neural networks (ANNs). Such training techniques, however, do not transfer easily to spiking networks due to the spike generation hard nonlinearity and the discrete nature of spike communication. We show that in a feedforward spiking network that uses a temporal coding scheme where information is encoded in spike times instead of spike rates, the network input-output relation is differentiable almost everywhere. Moreover, this relation is piecewise linear after a transformation of variables. Methods for training ANNs thus carry directly to the training of such spiking networks as we show when training on the permutation invariant MNIST task. In contrast to rate-based spiking networks that are often used to approximate the behavior of ANNs, the networks we present spike much more sparsely and their behavior cannot be directly approximated by conventional ANNs. Our results highlight a new approach for controlling the behavior of spiking networks with realistic temporal dynamics, opening up the potential for using these networks to process spike patterns with complex temporal information.
Virtual mission stage I: Implications of a spaceborne surface water mission
NASA Astrophysics Data System (ADS)
Clark, E. A.; Alsdorf, D. E.; Bates, P.; Wilson, M. D.; Lettenmaier, D. P.
2004-12-01
The interannual and interseasonal variability of the land surface water cycle depend on the distribution of surface water in lakes, wetlands, reservoirs, and river systems; however, measurements of hydrologic variables are sparsely distributed, even in industrialized nations. Moreover, the spatial extent and storage variations of lakes, reservoirs, and wetlands are poorly known. We are developing a virtual mission to demonstrate the feasibility of observing surface water extent and variations from a spaceborne platform. In the first stage of the virtual mission, on which we report here, surface water area and fluxes are emulated using simulation modeling over three continental scale river basins, including the Ohio River, the Amazon River and an Arctic river. The Variable Infiltration Capacity (VIC) macroscale hydrologic model is used to simulate evapotranspiration, soil moisture, snow accumulation and ablation, and runoff and streamflow over each basin at one-eighth degree resolution. The runoff from this model is routed using a linear transfer model to provide input to a much more detailed flow hydraulics model. The flow hydraulics model then routes runoff through various channel and floodplain morphologies at a 250 m spatial and 20 second temporal resolution over a 100 km by 500 km domain. This information is used to evaluate trade-offs between spatial and temporal resolutions of a hypothetical high resolution spaceborne altimeter by synthetically sampling the resultant model-predicted water surface elevations.
Complex surface deformation of Akutan volcano, Alaska revealed from InSAR time series
NASA Astrophysics Data System (ADS)
Wang, Teng; DeGrandpre, Kimberly; Lu, Zhong; Freymueller, Jeffrey T.
2018-02-01
Akutan volcano is one of the most active volcanoes in the Aleutian arc. An intense swarm of volcano-tectonic earthquakes occurred across the island in 1996. Surface deformation after the 1996 earthquake sequence has been studied using Interferometric Synthetic Aperture Radar (InSAR), yet it is hard to determine the detailed temporal behavior and spatial extent of the deformation due to decorrelation and the sparse temporal sampling of SAR data. Atmospheric delay anomalies over Akutan volcano are also strong, bringing additional technical challenges. Here we present a time series InSAR analysis from 2003 to 2016 to reveal the surface deformation in more detail. Four tracks of Envisat data acquired from 2003 to 2010 and one track of TerraSAR-X data acquired from 2010 to 2016 are processed to produce high-resolution surface deformation, with a focus on studying two transient episodes of inflation in 2008 and 2014. For the TerraSAR-X data, the atmospheric delay is estimated and removed using the common-master stacking method. These derived deformation maps show a consistently uplifting area on the northeastern flank of the volcano. From the TerraSAR-X data, we quantify the velocity of the subsidence inside the caldera to be as high as 10 mm/year, and identify another subsidence area near the ground cracks created during the 1996 swarm.
Category-Specific Comparison of Univariate Alerting Methods for Biosurveillance Decision Support
Elbert, Yevgeniy; Hung, Vivian; Burkom, Howard
2013-01-01
Objective For a multi-source decision support application, we sought to match univariate alerting algorithms to surveillance data types to optimize detection performance. Introduction Temporal alerting algorithms commonly used in syndromic surveillance systems are often adjusted for data features such as cyclic behavior but are subject to overfitting or misspecification errors when applied indiscriminately. In a project for the Armed Forces Health Surveillance Center to enable multivariate decision support, we obtained 4.5 years of out-patient, prescription and laboratory test records from all US military treatment facilities. A proof-of-concept project phase produced 16 events with multiple evidence corroboration for comparison of alerting algorithms for detection performance. We used the representative streams from each data source to compare sensitivity of 6 algorithms to injected spikes, and we used all data streams from 16 known events to compare them for detection timeliness. Methods The six methods compared were: Holt-Winters generalized exponential smoothing method (1)automated choice between daily methods, regression and an exponential weighted moving average (2)adaptive daily Shewhart-type chartadaptive one-sided daily CUSUMEWMA applied to 7-day means with a trend correction; and7-day temporal scan statistic Sensitivity testing: We conducted comparative sensitivity testing for categories of time series with similar scales and seasonal behavior. We added multiples of the standard deviation of each time series as single-day injects in separate algorithm runs. For each candidate method, we then used as a sensitivity measure the proportion of these runs for which the output of each algorithm was below alerting thresholds estimated empirically for each algorithm using simulated data streams. We identified the algorithm(s) whose sensitivity was most consistently high for each data category. For each syndromic query applied to each data source (outpatient, lab test orders, and prescriptions), 502 authentic time series were derived, one for each reporting treatment facility. Data categories were selected in order to group time series with similar expected algorithm performance: Median > 100 < Median ≤ 10Median = 0Lag 7 Autocorrelation Coefficient ≥ 0.2Lag 7 Autocorrelation Coefficient < 0.2 Timeliness testing: For the timeliness testing, we avoided artificiality of simulated signals by measuring alerting detection delays in the 16 corroborated outbreaks. The multiple time series from these events gave a total of 141 time series with outbreak intervals for timeliness testing. The following measures were computed to quantify timeliness of detection: Median Detection Delay – median number of days to detect the outbreak.Penalized Mean Detection Delay –mean number of days to detect the outbreak with outbreak misses penalized as 1 day plus the maximum detection time. Results Based on the injection results, the Holt-Winters algorithm was most sensitive among time series with positive medians. The adaptive CUSUM and the Shewhart methods were most sensitive for data streams with median zero. Table 1 provides timeliness results using the 141 outbreak-associated streams on sparse (Median=0) and non-sparse data categories. [Insert table #1 here] Data median Detection Delay, days Holt-winters Regression EWMA Adaptive Shewhart Adaptive CUSUM 7-day Trend-adj. EWMA 7-day Temporal Scan Median 0 Median 3 2 4 2 4.5 2 Penalized Mean 7.2 7 6.6 6.2 7.3 7.6 Median >0 Median 2 2 2.5 2 6 4 Penalized Mean 6.1 7 7.2 7.1 7.7 6.6 The gray shading in the table 1 indicates methods with shortest detection delays for sparse and non-sparse data streams. The Holt-Winters method was again superior for non-sparse data. For data with median=0, the adaptive CUSUM was superior for a daily false alarm probability of 0.01, but the Shewhart method was timelier for more liberal thresholds. Conclusions Both kinds of detection performance analysis showed the method based on Holt-Winters exponential smoothing superior on non-sparse time series with day-of-week effects. The adaptive CUSUM and She-whart methods proved optimal on sparse data and data without weekly patterns.
Kim, Steve M; Ganguli, Surya; Frank, Loren M
2012-08-22
Hippocampal place cells convey spatial information through a combination of spatially selective firing and theta phase precession. The way in which this information influences regions like the subiculum that receive input from the hippocampus remains unclear. The subiculum receives direct inputs from area CA1 of the hippocampus and sends divergent output projections to many other parts of the brain, so we examined the firing patterns of rat subicular neurons. We found a substantial transformation in the subicular code for space from sparse to dense firing rate representations along a proximal-distal anatomical gradient: neurons in the proximal subiculum are more similar to canonical, sparsely firing hippocampal place cells, whereas neurons in the distal subiculum have higher firing rates and more distributed spatial firing patterns. Using information theory, we found that the more distributed spatial representation in the subiculum carries, on average, more information about spatial location and context than the sparse spatial representation in CA1. Remarkably, despite the disparate firing rate properties of subicular neurons, we found that neurons at all proximal-distal locations exhibit robust theta phase precession, with similar spiking oscillation frequencies as neurons in area CA1. Our findings suggest that the subiculum is specialized to compress sparse hippocampal spatial codes into highly informative distributed codes suitable for efficient communication to other brain regions. Moreover, despite this substantial compression, the subiculum maintains finer scale temporal properties that may allow it to participate in oscillatory phase coding and spike timing-dependent plasticity in coordination with other regions of the hippocampal circuit.
Ramkumar, Barathram; Sabarimalai Manikandan, M.
2017-01-01
Automatic electrocardiogram (ECG) signal enhancement has become a crucial pre-processing step in most ECG signal analysis applications. In this Letter, the authors propose an automated noise-aware dictionary learning-based generalised ECG signal enhancement framework which can automatically learn the dictionaries based on the ECG noise type for effective representation of ECG signal and noises, and can reduce the computational load of sparse representation-based ECG enhancement system. The proposed framework consists of noise detection and identification, noise-aware dictionary learning, sparse signal decomposition and reconstruction. The noise detection and identification is performed based on the moving average filter, first-order difference, and temporal features such as number of turning points, maximum absolute amplitude, zerocrossings, and autocorrelation features. The representation dictionary is learned based on the type of noise identified in the previous stage. The proposed framework is evaluated using noise-free and noisy ECG signals. Results demonstrate that the proposed method can significantly reduce computational load as compared with conventional dictionary learning-based ECG denoising approaches. Further, comparative results show that the method outperforms existing methods in automatically removing noises such as baseline wanders, power-line interference, muscle artefacts and their combinations without distorting the morphological content of local waves of ECG signal. PMID:28529758
Kremkow, Jens; Perrinet, Laurent U.; Monier, Cyril; Alonso, Jose-Manuel; Aertsen, Ad; Frégnac, Yves; Masson, Guillaume S.
2016-01-01
Neurons in the primary visual cortex are known for responding vigorously but with high variability to classical stimuli such as drifting bars or gratings. By contrast, natural scenes are encoded more efficiently by sparse and temporal precise spiking responses. We used a conductance-based model of the visual system in higher mammals to investigate how two specific features of the thalamo-cortical pathway, namely push-pull receptive field organization and fast synaptic depression, can contribute to this contextual reshaping of V1 responses. By comparing cortical dynamics evoked respectively by natural vs. artificial stimuli in a comprehensive parametric space analysis, we demonstrate that the reliability and sparseness of the spiking responses during natural vision is not a mere consequence of the increased bandwidth in the sensory input spectrum. Rather, it results from the combined impacts of fast synaptic depression and push-pull inhibition, the later acting for natural scenes as a form of “effective” feed-forward inhibition as demonstrated in other sensory systems. Thus, the combination of feedforward-like inhibition with fast thalamo-cortical synaptic depression by simple cells receiving a direct structured input from thalamus composes a generic computational mechanism for generating a sparse and reliable encoding of natural sensory events. PMID:27242445
Satija, Udit; Ramkumar, Barathram; Sabarimalai Manikandan, M
2017-02-01
Automatic electrocardiogram (ECG) signal enhancement has become a crucial pre-processing step in most ECG signal analysis applications. In this Letter, the authors propose an automated noise-aware dictionary learning-based generalised ECG signal enhancement framework which can automatically learn the dictionaries based on the ECG noise type for effective representation of ECG signal and noises, and can reduce the computational load of sparse representation-based ECG enhancement system. The proposed framework consists of noise detection and identification, noise-aware dictionary learning, sparse signal decomposition and reconstruction. The noise detection and identification is performed based on the moving average filter, first-order difference, and temporal features such as number of turning points, maximum absolute amplitude, zerocrossings, and autocorrelation features. The representation dictionary is learned based on the type of noise identified in the previous stage. The proposed framework is evaluated using noise-free and noisy ECG signals. Results demonstrate that the proposed method can significantly reduce computational load as compared with conventional dictionary learning-based ECG denoising approaches. Further, comparative results show that the method outperforms existing methods in automatically removing noises such as baseline wanders, power-line interference, muscle artefacts and their combinations without distorting the morphological content of local waves of ECG signal.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero, Vicente; Bonney, Matthew; Schroeder, Benjamin
When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a classmore » of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics use d to assess method performance for conservative estimation of two representative quantities: central 95% of response; and 10 -4 probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depends on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large data base and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.« less
NASA Astrophysics Data System (ADS)
Hoffman, F. M.; Kumar, J.; Maddalena, D. M.; Langford, Z.; Hargrove, W. W.
2014-12-01
Disparate in situ and remote sensing time series data are being collected to understand the structure and function of ecosystems and how they may be affected by climate change. However, resource and logistical constraints limit the frequency and extent of observations, particularly in the harsh environments of the arctic and the tropics, necessitating the development of a systematic sampling strategy to maximize coverage and objectively represent variability at desired scales. These regions host large areas of potentially vulnerable ecosystems that are poorly represented in Earth system models (ESMs), motivating two new field campaigns, called Next Generation Ecosystem Experiments (NGEE) for the Arctic and Tropics, funded by the U.S. Department of Energy. Multivariate Spatio-Temporal Clustering (MSTC) provides a quantitative methodology for stratifying sampling domains, informing site selection, and determining the representativeness of measurement sites and networks. We applied MSTC to down-scaled general circulation model results and data for the State of Alaska at a 4 km2 resolution to define maps of ecoregions for the present (2000-2009) and future (2090-2099), showing how combinations of 37 bioclimatic characteristics are distributed and how they may shift in the future. Optimal representative sampling locations were identified on present and future ecoregion maps, and representativeness maps for candidate sampling locations were produced. We also applied MSTC to remotely sensed LiDAR measurements and multi-spectral imagery from the WorldView-2 satellite at a resolution of about 5 m2 within the Barrow Environmental Observatory (BEO) in Alaska. At this resolution, polygonal ground features—such as centers, edges, rims, and troughs—can be distinguished. Using these remote sensing data, we up-scaled vegetation distribution data collected on these polygonal ground features to a large area of the BEO to provide distributions of plant functional types that can be used to parameterize ESMs. In addition, we applied MSTC to 4 km2 global bioclimate data to define global ecoregions and understand the representativeness of CTFS-ForestGEO, Fluxnet, and RAINFOR sampling networks. These maps identify tropical forests underrepresented in existing observations of individual and combined networks.
NASA Astrophysics Data System (ADS)
Manago, K. F.; Hogue, T. S.; Hering, A. S.
2014-12-01
In the City of Los Angeles, groundwater accounts for 11% of the total water supply on average, and 30% during drought years. Due to ongoing drought in California, increased reliance on local water supply highlights the need for better understanding of regional groundwater dynamics and estimating sustainable groundwater supply. However, in an urban setting, such as Los Angeles, understanding or modeling groundwater levels is extremely complicated due to various anthropogenic influences such as groundwater pumping, artificial recharge, landscape irrigation, leaking infrastructure, seawater intrusion, and extensive impervious surfaces. This study analyzes anthropogenic effects on groundwater levels using groundwater monitoring well data from the County of Los Angeles Department of Public Works. The groundwater data is irregularly sampled with large gaps between samples, resulting in a sparsely populated dataset. A multiple imputation method is used to fill the missing data, allowing for multiple ensembles and improved error estimates. The filled data is interpolated to create spatial groundwater maps utilizing information from all wells. The groundwater data is evaluated at a monthly time step over the last several decades to analyze the effect of land cover and identify other influencing factors on groundwater levels spatially and temporally. Preliminary results show irrigated parks have the largest influence on groundwater fluctuations, resulting in large seasonal changes, exceeding changes in spreading grounds. It is assumed that these fluctuations are caused by watering practices required to sustain non-native vegetation. Conversely, high intensity urbanized areas resulted in muted groundwater fluctuations and behavior decoupling from climate patterns. Results provides improved understanding of anthropogenic effects on groundwater levels in addition to providing high quality datasets for validation of regional groundwater models.
Gratz, Marcel; Schlamann, Marc; Goericke, Sophia; Maderwald, Stefan; Quick, Harald H
2017-03-01
To assess the image quality of sparsely sampled contrast-enhanced MR angiography (sparse CE-MRA) providing high spatial resolution and whole-head coverage. Twenty-three patients scheduled for contrast-enhanced MR imaging of the head, (N = 19 with intracranial pathologies, N = 9 with vascular diseases), were included. Sparse CE-MRA at 3 Tesla was conducted using a single dose of contrast agent. Two neuroradiologists independently evaluated the data regarding vascular visibility and diagnostic value of overall 24 parameters and vascular segments on a 5-point ordinary scale (5 = very good, 1 = insufficient vascular visibility). Contrast bolus timing and the resulting arterio-venous overlap was also evaluated. Where available (N = 9), sparse CE-MRA was compared to intracranial Time-of-Flight MRA. The overall rating across all patients for sparse CE-MRA was 3.50 ± 1.07. Direct influence of the contrast bolus timing on the resulting image quality was observed. Overall mean vascular visibility and image quality across different features was rated good to intermediate (3.56 ± 0.95). The average performance of intracranial Time-of-Flight was rated 3.84 ± 0.87 across all patients and 3.54 ± 0.62 across all features. Sparse CE-MRA provides high-quality 3D MRA with high spatial resolution and whole-head coverage within short acquisition time. Accurate contrast bolus timing is mandatory. • Sparse CE-MRA enables fast vascular imaging with full brain coverage. • Volumes with sub-millimetre resolution can be acquired within 10 seconds. • Reader's ratings are good to intermediate and dependent on contrast bolus timing. • The method provides an excellent overview and allows screening for vascular pathologies.
NASA Astrophysics Data System (ADS)
Ashe, E.; Kopp, R. E.; Khan, N.; Horton, B.; Engelhart, S. E.
2016-12-01
Sea level varies over of both space and time. Prior to the instrumental period, the sea-level record depends upon geological reconstructions that contain vertical and temporal uncertainty. Spatio-temporal statistical models enable the interpretation of RSL and rates of change as well as the reconstruction of the entire sea-level field from such noisy data. Hierarchical models explicitly distinguish between a process level, which characterizes the spatio-temporal field, and a data level, by which sparse proxy data and its noise is recorded. A hyperparameter level depicts prior expectations about the structure of variability in the spatio-temporal field. Spatio-temporal hierarchical models are amenable to several analysis approaches, with tradeoffs regarding computational efficiency and comprehensiveness of uncertainty characterization. A fully-Bayesian hierarchical model (BHM), which places prior probability distributions upon the hyperparameters, is more computationally intensive than an empirical hierarchical model (EHM), which uses point estimates of hyperparameters, derived from the data [1]. Here, we assess the sensitivity of posterior estimates of relative sea level (RSL) and rates to different statistical approaches by varying prior assumptions about the spatial and temporal structure of sea-level variability and applying multiple analytical approaches to Holocene sea-level proxies along the Atlantic coast of North American and the Caribbean [2]. References: 1. N Cressie, Wikle CK (2011) Statistics for spatio-temporal data (John Wiley & Sons). 2. Kahn N et al. (2016). Quaternary Science Reviews (in revision).
Temporal overlap of humans and giant lizards (Varanidae; Squamata) in Pleistocene Australia
NASA Astrophysics Data System (ADS)
Price, Gilbert J.; Louys, Julien; Cramb, Jonathan; Feng, Yue-xing; Zhao, Jian-xin; Hocknull, Scott A.; Webb, Gregory E.; Nguyen, Ai Duc; Joannes-Boyau, Renaud
2015-10-01
An obvious but key prerequisite to testing hypotheses concerning the role of humans in the extinction of late Quaternary 'megafauna' is demonstrating that humans and the extinct taxa overlapped, both temporally and spatially. In many regions, a paucity of reliably dated fossil occurrences of megafauna makes it challenging, if not impossible, to test many of the leading extinction hypotheses. The giant monitor lizards of Australia are a case in point. Despite commonly being argued to have suffered extinction at the hands of the first human colonisers (who arrived by 50 ka), it has never been reliably demonstrated that giant monitors and humans temporally overlapped in Australia. Here we present the results of an integrated U-Th and 14C dating study of a late Pleistocene fossil deposit that has yielded the youngest dated remains of giant monitor lizards in Australia. The site, Colosseum Chamber, is a cave deposit in the Mt Etna region, central eastern Australia. Sixteen new dates were generated and demonstrate that the bulk of the material in the deposit accumulated since ca. 50 ka. The new monitor fossil is, minimally, 30 ky younger than the previous youngest reliably dated record for giant lizards in Australia and for the first time, demonstrates that on a continental scale, humans and giant lizards overlapped in time. The new record brings the existing geochronological dataset for Australian giant monitor lizards to seven dated occurrences. With such sparse data, we are hesitant to argue that our new date represents the time of their extinction from the continent. Rather, we suspect that future fossil collecting will yield new samples both older and younger than 50 ka. Nevertheless, we unequivocally demonstrate that humans and giant monitor lizards overlapped temporally in Australia, and thus, humans can only now be considered potential drivers for their extinction.
Asynchronous signal-dependent non-uniform sampler
NASA Astrophysics Data System (ADS)
Can-Cimino, Azime; Chaparro, Luis F.; Sejdić, Ervin
2014-05-01
Analog sparse signals resulting from biomedical and sensing network applications are typically non-stationary with frequency-varying spectra. By ignoring that the maximum frequency of their spectra is changing, uniform sampling of sparse signals collects unnecessary samples in quiescent segments of the signal. A more appropriate sampling approach would be signal-dependent. Moreover, in many of these applications power consumption and analog processing are issues of great importance that need to be considered. In this paper we present a signal dependent non-uniform sampler that uses a Modified Asynchronous Sigma Delta Modulator which consumes low-power and can be processed using analog procedures. Using Prolate Spheroidal Wave Functions (PSWF) interpolation of the original signal is performed, thus giving an asynchronous analog to digital and digital to analog conversion. Stable solutions are obtained by using modulated PSWFs functions. The advantage of the adapted asynchronous sampler is that range of frequencies of the sparse signal is taken into account avoiding aliasing. Moreover, it requires saving only the zero-crossing times of the non-uniform samples, or their differences, and the reconstruction can be done using their quantized values and a PSWF-based interpolation. The range of frequencies analyzed can be changed and the sampler can be implemented as a bank of filters for unknown range of frequencies. The performance of the proposed algorithm is illustrated with an electroencephalogram (EEG) signal.
U-Th-Pb zircon dating of the 13.8-Ma dacite volcanic dome at Cerro Rico de Potosi, Bolivia
Zartman, R.E.; Cunningham, C.G.
1995-01-01
The temporal relationship between the extrusion of the Miocene dacite volcanic dome at Cerro Rico de Potasi, Bolivia, and the associated Ag-Sn mineralization has an important bearing on the heat and metal sources for this world class mineral deposit. The present study uses U-Th-Pb dating of sparse zircon contained in the dacite to demonstrate that, at most, only several hundred thousand years separate dome emplacement from main stage mineralization. -from Authors
JESTR: Jupiter Exploration Science in the Time Regime
NASA Technical Reports Server (NTRS)
Noll, Keith S.; Simon-Miller, A. A.; Wong, M. H.; Choi, D. S.
2012-01-01
Solar system objects are inherently time-varying with changes that occur on timescales ranging from seconds to years. For all planets other than the Earth, temporal coverage of atmospheric phenomena is limited and sparse. Many important atmospheric phenomena, especially those related to atmospheric dynamics, can be studied in only very limited ways with current data. JESTR is a mission concept that would remedy this gap in our exploration of the solar system by ncar-continuous imaging and spectral monitoring of Jupiter over a multi-year mission lifetime.
NASA Astrophysics Data System (ADS)
Han, Hao; Gao, Hao; Xing, Lei
2017-08-01
Excessive radiation exposure is still a major concern in 4D cone-beam computed tomography (4D-CBCT) due to its prolonged scanning duration. Radiation dose can be effectively reduced by either under-sampling the x-ray projections or reducing the x-ray flux. However, 4D-CBCT reconstruction under such low-dose protocols is prone to image artifacts and noise. In this work, we propose a novel joint regularization-based iterative reconstruction method for low-dose 4D-CBCT. To tackle the under-sampling problem, we employ spatiotemporal tensor framelet (STF) regularization to take advantage of the spatiotemporal coherence of the patient anatomy in 4D images. To simultaneously suppress the image noise caused by photon starvation, we also incorporate spatiotemporal nonlocal total variation (SNTV) regularization to make use of the nonlocal self-recursiveness of anatomical structures in the spatial and temporal domains. Under the joint STF-SNTV regularization, the proposed iterative reconstruction approach is evaluated first using two digital phantoms and then using physical experiment data in the low-dose context of both under-sampled and noisy projections. Compared with existing approaches via either STF or SNTV regularization alone, the presented hybrid approach achieves improved image quality, and is particularly effective for the reconstruction of low-dose 4D-CBCT data that are not only sparse but noisy.
Monitoring global vegetation using Nimbus-7 37 GHz data - Some empirical relations
NASA Technical Reports Server (NTRS)
Choudhury, B. J.; Tucker, C. J.
1987-01-01
The difference of the vertically and horizontally polarized brightness temperatures observed by the 37 GHz channel of the SMMR on board the Nimbus-7 satellite are correlated temporally with three indicators of vegetation density, namely the temporal variation of the atmospheric CO2 concentration at Mauna Loa (Hawaii), rainfall over the Sahel and the normalized difference vegetation index derived from the AVHRR on board the NOAA-7 satellite. SMMR 37 GHz and AVHRR provide complementary data sets for monitoring global vegetation, the 37 GHz data being more suitable for arid and semiarid regions as these data are more sensitive to changes in sparse vegetation. The 37-GHz data might be useful for understanding desertification and indexing Co2 exchange between the biosphere and the atmosphere.
HTM Spatial Pooler With Memristor Crossbar Circuits for Sparse Biometric Recognition.
James, Alex Pappachen; Fedorova, Irina; Ibrayev, Timur; Kudithipudi, Dhireesha
2017-06-01
Hierarchical Temporal Memory (HTM) is an online machine learning algorithm that emulates the neo-cortex. The development of a scalable on-chip HTM architecture is an open research area. The two core substructures of HTM are spatial pooler and temporal memory. In this work, we propose a new Spatial Pooler circuit design with parallel memristive crossbar arrays for the 2D columns. The proposed design was validated on two different benchmark datasets, face recognition, and speech recognition. The circuits are simulated and analyzed using a practical memristor device model and 0.18 μm IBM CMOS technology model. The databases AR, YALE, ORL, and UFI, are used to test the performance of the design in face recognition. TIMIT dataset is used for the speech recognition.
Sparse PCA with Oracle Property.
Gu, Quanquan; Wang, Zhaoran; Liu, Han
In this paper, we study the estimation of the k -dimensional sparse principal subspace of covariance matrix Σ in the high-dimensional setting. We aim to recover the oracle principal subspace solution, i.e., the principal subspace estimator obtained assuming the true support is known a priori. To this end, we propose a family of estimators based on the semidefinite relaxation of sparse PCA with novel regularizations. In particular, under a weak assumption on the magnitude of the population projection matrix, one estimator within this family exactly recovers the true support with high probability, has exact rank- k , and attains a [Formula: see text] statistical rate of convergence with s being the subspace sparsity level and n the sample size. Compared to existing support recovery results for sparse PCA, our approach does not hinge on the spiked covariance model or the limited correlation condition. As a complement to the first estimator that enjoys the oracle property, we prove that, another estimator within the family achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA, even when the previous assumption on the magnitude of the projection matrix is violated. We validate the theoretical results by numerical experiments on synthetic datasets.
Sparse PCA with Oracle Property
Gu, Quanquan; Wang, Zhaoran; Liu, Han
2014-01-01
In this paper, we study the estimation of the k-dimensional sparse principal subspace of covariance matrix Σ in the high-dimensional setting. We aim to recover the oracle principal subspace solution, i.e., the principal subspace estimator obtained assuming the true support is known a priori. To this end, we propose a family of estimators based on the semidefinite relaxation of sparse PCA with novel regularizations. In particular, under a weak assumption on the magnitude of the population projection matrix, one estimator within this family exactly recovers the true support with high probability, has exact rank-k, and attains a s/n statistical rate of convergence with s being the subspace sparsity level and n the sample size. Compared to existing support recovery results for sparse PCA, our approach does not hinge on the spiked covariance model or the limited correlation condition. As a complement to the first estimator that enjoys the oracle property, we prove that, another estimator within the family achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA, even when the previous assumption on the magnitude of the projection matrix is violated. We validate the theoretical results by numerical experiments on synthetic datasets. PMID:25684971
Improved analysis of SP and CoSaMP under total perturbations
NASA Astrophysics Data System (ADS)
Li, Haifeng
2016-12-01
Practically, in the underdetermined model y= A x, where x is a K sparse vector (i.e., it has no more than K nonzero entries), both y and A could be totally perturbed. A more relaxed condition means less number of measurements are needed to ensure the sparse recovery from theoretical aspect. In this paper, based on restricted isometry property (RIP), for subspace pursuit (SP) and compressed sampling matching pursuit (CoSaMP), two relaxed sufficient conditions are presented under total perturbations to guarantee that the sparse vector x is recovered. Taking random matrix as measurement matrix, we also discuss the advantage of our condition. Numerical experiments validate that SP and CoSaMP can provide oracle-order recovery performance.
Multidimensional Compressed Sensing MRI Using Tensor Decomposition-Based Sparsifying Transform
Yu, Yeyang; Jin, Jin; Liu, Feng; Crozier, Stuart
2014-01-01
Compressed Sensing (CS) has been applied in dynamic Magnetic Resonance Imaging (MRI) to accelerate the data acquisition without noticeably degrading the spatial-temporal resolution. A suitable sparsity basis is one of the key components to successful CS applications. Conventionally, a multidimensional dataset in dynamic MRI is treated as a series of two-dimensional matrices, and then various matrix/vector transforms are used to explore the image sparsity. Traditional methods typically sparsify the spatial and temporal information independently. In this work, we propose a novel concept of tensor sparsity for the application of CS in dynamic MRI, and present the Higher-order Singular Value Decomposition (HOSVD) as a practical example. Applications presented in the three- and four-dimensional MRI data demonstrate that HOSVD simultaneously exploited the correlations within spatial and temporal dimensions. Validations based on cardiac datasets indicate that the proposed method achieved comparable reconstruction accuracy with the low-rank matrix recovery methods and, outperformed the conventional sparse recovery methods. PMID:24901331
Face recognition based on two-dimensional discriminant sparse preserving projection
NASA Astrophysics Data System (ADS)
Zhang, Dawei; Zhu, Shanan
2018-04-01
In this paper, a supervised dimensionality reduction algorithm named two-dimensional discriminant sparse preserving projection (2DDSPP) is proposed for face recognition. In order to accurately model manifold structure of data, 2DDSPP constructs within-class affinity graph and between-class affinity graph by the constrained least squares (LS) and l1 norm minimization problem, respectively. Based on directly operating on image matrix, 2DDSPP integrates graph embedding (GE) with Fisher criterion. The obtained projection subspace preserves within-class neighborhood geometry structure of samples, while keeping away samples from different classes. The experimental results on the PIE and AR face databases show that 2DDSPP can achieve better recognition performance.
Weiss, Christian; Zoubir, Abdelhak M
2017-05-01
We propose a compressed sampling and dictionary learning framework for fiber-optic sensing using wavelength-tunable lasers. A redundant dictionary is generated from a model for the reflected sensor signal. Imperfect prior knowledge is considered in terms of uncertain local and global parameters. To estimate a sparse representation and the dictionary parameters, we present an alternating minimization algorithm that is equipped with a preprocessing routine to handle dictionary coherence. The support of the obtained sparse signal indicates the reflection delays, which can be used to measure impairments along the sensing fiber. The performance is evaluated by simulations and experimental data for a fiber sensor system with common core architecture.
Stalder, Aurelien F; Schmidt, Michaela; Quick, Harald H; Schlamann, Marc; Maderwald, Stefan; Schmitt, Peter; Wang, Qiu; Nadar, Mariappan S; Zenge, Michael O
2015-12-01
To integrate, optimize, and evaluate a three-dimensional (3D) contrast-enhanced sparse MRA technique with iterative reconstruction on a standard clinical MR system. Data were acquired using a highly undersampled Cartesian spiral phyllotaxis sampling pattern and reconstructed directly on the MR system with an iterative SENSE technique. Undersampling, regularization, and number of iterations of the reconstruction were optimized and validated based on phantom experiments and patient data. Sparse MRA of the whole head (field of view: 265 × 232 × 179 mm(3) ) was investigated in 10 patient examinations. High-quality images with 30-fold undersampling, resulting in 0.7 mm isotropic resolution within 10 s acquisition, were obtained. After optimization of the regularization factor and of the number of iterations of the reconstruction, it was possible to reconstruct images with excellent quality within six minutes per 3D volume. Initial results of sparse contrast-enhanced MRA (CEMRA) in 10 patients demonstrated high-quality whole-head first-pass MRA for both the arterial and venous contrast phases. While sparse MRI techniques have not yet reached clinical routine, this study demonstrates the technical feasibility of high-quality sparse CEMRA of the whole head in a clinical setting. Sparse CEMRA has the potential to become a viable alternative where conventional CEMRA is too slow or does not provide sufficient spatial resolution. © 2014 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Ahmed, H. O. A.; Wong, M. L. D.; Nandi, A. K.
2018-01-01
Condition classification of rolling element bearings in rotating machines is important to prevent the breakdown of industrial machinery. A considerable amount of literature has been published on bearing faults classification. These studies aim to determine automatically the current status of a roller element bearing. Of these studies, methods based on compressed sensing (CS) have received some attention recently due to their ability to allow one to sample below the Nyquist sampling rate. This technology has many possible uses in machine condition monitoring and has been investigated as a possible approach for fault detection and classification in the compressed domain, i.e., without reconstructing the original signal. However, previous CS based methods have been found to be too weak for highly compressed data. The present paper explores computationally, for the first time, the effects of sparse autoencoder based over-complete sparse representations on the classification performance of highly compressed measurements of bearing vibration signals. For this study, the CS method was used to produce highly compressed measurements of the original bearing dataset. Then, an effective deep neural network (DNN) with unsupervised feature learning algorithm based on sparse autoencoder is used for learning over-complete sparse representations of these compressed datasets. Finally, the fault classification is achieved using two stages, namely, pre-training classification based on stacked autoencoder and softmax regression layer form the deep net stage (the first stage), and re-training classification based on backpropagation (BP) algorithm forms the fine-tuning stage (the second stage). The experimental results show that the proposed method is able to achieve high levels of accuracy even with extremely compressed measurements compared with the existing techniques.
Statistical regularities of art images and natural scenes: spectra, sparseness and nonlinearities.
Graham, Daniel J; Field, David J
2007-01-01
Paintings are the product of a process that begins with ordinary vision in the natural world and ends with manipulation of pigments on canvas. Because artists must produce images that can be seen by a visual system that is thought to take advantage of statistical regularities in natural scenes, artists are likely to replicate many of these regularities in their painted art. We have tested this notion by computing basic statistical properties and modeled cell response properties for a large set of digitized paintings and natural scenes. We find that both representational and non-representational (abstract) paintings from our sample (124 images) show basic similarities to a sample of natural scenes in terms of their spatial frequency amplitude spectra, but the paintings and natural scenes show significantly different mean amplitude spectrum slopes. We also find that the intensity distributions of paintings show a lower skewness and sparseness than natural scenes. We account for this by considering the range of luminances found in the environment compared to the range available in the medium of paint. A painting's range is limited by the reflective properties of its materials. We argue that artists do not simply scale the intensity range down but use a compressive nonlinearity. In our studies, modeled retinal and cortical filter responses to the images were less sparse for the paintings than for the natural scenes. But when a compressive nonlinearity was applied to the images, both the paintings' sparseness and the modeled responses to the paintings showed the same or greater sparseness compared to the natural scenes. This suggests that artists achieve some degree of nonlinear compression in their paintings. Because paintings have captivated humans for millennia, finding basic statistical regularities in paintings' spatial structure could grant insights into the range of spatial patterns that humans find compelling.
Sparse feature learning for instrument identification: Effects of sampling and pooling methods.
Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu
2016-05-01
Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on the sparse feature learning algorithm for musical instrument identification, and in particular, focuses on the effects of the frame sampling techniques for dictionary learning and the pooling methods for feature aggregation. To this end, two frame sampling techniques are examined that are fixed and proportional random sampling. Furthermore, the effect of using onset frame was analyzed for both of proposed sampling methods. Regarding summarization of the feature activation, a standard deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47 000 recordings of 24 instruments from various performers, playing styles, and dynamics, a number of tuning parameters are experimented including the analysis frame size, the dictionary size, and the type of frequency scaling as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard deviation pooling achieve the best overall performance of 95.62% while the optimal parameter set varies among the instrument classes.
Optimal sparse approximation with integrate and fire neurons.
Shapero, Samuel; Zhu, Mengchen; Hasler, Jennifer; Rozell, Christopher
2014-08-01
Sparse approximation is a hypothesized coding strategy where a population of sensory neurons (e.g. V1) encodes a stimulus using as few active neurons as possible. We present the Spiking LCA (locally competitive algorithm), a rate encoded Spiking Neural Network (SNN) of integrate and fire neurons that calculate sparse approximations. The Spiking LCA is designed to be equivalent to the nonspiking LCA, an analog dynamical system that converges on a ℓ(1)-norm sparse approximations exponentially. We show that the firing rate of the Spiking LCA converges on the same solution as the analog LCA, with an error inversely proportional to the sampling time. We simulate in NEURON a network of 128 neuron pairs that encode 8 × 8 pixel image patches, demonstrating that the network converges to nearly optimal encodings within 20 ms of biological time. We also show that when using more biophysically realistic parameters in the neurons, the gain function encourages additional ℓ(0)-norm sparsity in the encoding, relative both to ideal neurons and digital solvers.
Efficient Sparse Signal Transmission over a Lossy Link Using Compressive Sensing
Wu, Liantao; Yu, Kai; Cao, Dongyu; Hu, Yuhen; Wang, Zhi
2015-01-01
Reliable data transmission over lossy communication link is expensive due to overheads for error protection. For signals that have inherent sparse structures, compressive sensing (CS) is applied to facilitate efficient sparse signal transmissions over lossy communication links without data compression or error protection. The natural packet loss in the lossy link is modeled as a random sampling process of the transmitted data, and the original signal will be reconstructed from the lossy transmission results using the CS-based reconstruction method at the receiving end. The impacts of packet lengths on transmission efficiency under different channel conditions have been discussed, and interleaving is incorporated to mitigate the impact of burst data loss. Extensive simulations and experiments have been conducted and compared to the traditional automatic repeat request (ARQ) interpolation technique, and very favorable results have been observed in terms of both accuracy of the reconstructed signals and the transmission energy consumption. Furthermore, the packet length effect provides useful insights for using compressed sensing for efficient sparse signal transmission via lossy links. PMID:26287195
NASA Astrophysics Data System (ADS)
Marra, Francesco; Morin, Efrat
2017-04-01
Forecasting the occurrence of flash floods and debris flows is fundamental to save lives and protect infrastructures and properties. These natural hazards are generated by high-intensity convective storms, on space-time scales that cannot be properly monitored by conventional instrumentation. Consequently, a number of early-warning systems are nowadays based on remote sensing precipitation observations, e.g. from weather radars or satellites, that proved effective in a wide range of situations. However, the uncertainty affecting rainfall estimates represents an important issue undermining the operational use of early-warning systems. The uncertainty related to remote sensing estimates results from (a) an instrumental component, intrinsic of the measurement operation, and (b) a discretization component, caused by the discretization of the continuous rainfall process. Improved understanding on these sources of uncertainty will provide crucial information to modelers and decision makers. This study aims at advancing knowledge on the (b) discretization component. To do so, we take advantage of an extremely-high resolution X-Band weather radar (60 m, 1 min) recently installed in the Eastern Mediterranean. The instrument monitors a semiarid to arid transition area also covered by an accurate C-Band weather radar and by a relatively sparse rain gauge network ( 1 gauge/ 450 km2). Radar quantitative precipitation estimation includes corrections reducing the errors due to ground echoes, orographic beam blockage and attenuation of the signal in heavy rain. Intense, convection-rich, flooding events recently occurred in the area serve as study cases. We (i) describe with very high detail the spatiotemporal characteristics of the convective cores, and (ii) quantify the uncertainty due to spatial aggregation (spatial discretization) and temporal sampling (temporal discretization) operated by coarser resolution remote sensing instruments. We show that instantaneous rain intensity decreases very steeply with the distance from the core of convection with intensity observed at 1 km (2 km) being 10-40% (1-20%) of the core value. The use of coarser temporal resolutions leads to gaps in the observed rainfall and even relatively high resolutions (5 min) can be affected by the problem. We conclude providing to the final user indications about the effects of the discretization component of estimation uncertainty and suggesting viable ways to decrease them.
Observability of global rivers with future SWOT observations
NASA Astrophysics Data System (ADS)
Fisher, Colby; Pan, Ming; Wood, Eric
2017-04-01
The Surface Water and Ocean Topography (SWOT) mission is designed to provide global observations of water surface elevation and slope from which river discharge can be estimated using a data assimilation system. This mission will provide increased spatial and temporal coverage compared to current altimeters, with an expected accuracy for water level elevations of 10 cm on rivers greater than 100 m wide. Within the 21-day repeat cycle, a river reach will be observed 2-4 times on average. Due to the relationship between the basin orientation and the orbit, these observations are not evenly distributed in time, which will impact the derived discharge values. There is, then, a need for a better understanding of how the mission will observe global river basins. In this study, we investigate how SWOT will observe global river basins and how the temporal and spatial sampling impacts the discharge estimated from assimilation. SWOT observations can be assimilated using the Inverse Streamflow Routing (ISR) model of Pan and Wood [2013] with a fixed interval Kalman smoother. Previous work has shown that the ISR assimilation method can be used to reproduce the spatial and temporal dynamics of discharge within many global basins: however, this performance was strongly impacted by the spatial and temporal availability of discharge observations. In this study, we apply the ISR method to 32 global basins with different geometries and crossing patterns for the future orbit, assimilating theoretical SWOT-retrieved "gauges". Results show that the model performance varies significantly across basins and is driven by the orientation, flow distance, and travel time in each. Based on these properties, we quantify the "observability" of each basin and relate this to the performance of the assimilation. Applying this metric globally to a large variety of basins we can gain a better understanding of the impact that SWOT observations may have across basin scales. By determining the availability of SWOT observations in this manner, hydrologic data assimilation approaches like ISR can be optimized to provide useful discharge estimates in sparsely gauged regions where spatially and temporally consistent discharge records are most valuable. Pan, M; Wood, E F 2013 Inverse streamflow routing, HESS 17(11):4577-4588
NASA Astrophysics Data System (ADS)
Mei, Kai; Kopp, Felix K.; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Kirschke, Jan S.; Noël, Peter B.; Baum, Thomas
2017-03-01
The trabecular bone microstructure is a key to the early diagnosis and advanced therapy monitoring of osteoporosis. Regularly measuring bone microstructure with conventional multi-detector computer tomography (MDCT) would expose patients with a relatively high radiation dose. One possible solution to reduce exposure to patients is sampling fewer projection angles. This approach can be supported by advanced reconstruction algorithms, with their ability to achieve better image quality under reduced projection angles or high levels of noise. In this work, we investigated the performance of iterative reconstruction from sparse sampled projection data on trabecular bone microstructure in in-vivo MDCT scans of human spines. The computed MDCT images were evaluated by calculating bone microstructure parameters. We demonstrated that bone microstructure parameters were still computationally distinguishable when half or less of the radiation dose was employed.
Sparse representation based image interpolation with nonlocal autoregressive modeling.
Dong, Weisheng; Zhang, Lei; Lukac, Rastislav; Shi, Guangming
2013-04-01
Sparse representation is proven to be a promising approach to image super-resolution, where the low-resolution (LR) image is usually modeled as the down-sampled version of its high-resolution (HR) counterpart after blurring. When the blurring kernel is the Dirac delta function, i.e., the LR image is directly down-sampled from its HR counterpart without blurring, the super-resolution problem becomes an image interpolation problem. In such cases, however, the conventional sparse representation models (SRM) become less effective, because the data fidelity term fails to constrain the image local structures. In natural images, fortunately, many nonlocal similar patches to a given patch could provide nonlocal constraint to the local structure. In this paper, we incorporate the image nonlocal self-similarity into SRM for image interpolation. More specifically, a nonlocal autoregressive model (NARM) is proposed and taken as the data fidelity term in SRM. We show that the NARM-induced sampling matrix is less coherent with the representation dictionary, and consequently makes SRM more effective for image interpolation. Our extensive experimental results demonstrate that the proposed NARM-based image interpolation method can effectively reconstruct the edge structures and suppress the jaggy/ringing artifacts, achieving the best image interpolation results so far in terms of PSNR as well as perceptual quality metrics such as SSIM and FSIM.
Epileptic Seizure Detection with Log-Euclidean Gaussian Kernel-Based Sparse Representation.
Yuan, Shasha; Zhou, Weidong; Wu, Qi; Zhang, Yanli
2016-05-01
Epileptic seizure detection plays an important role in the diagnosis of epilepsy and reducing the massive workload of reviewing electroencephalography (EEG) recordings. In this work, a novel algorithm is developed to detect seizures employing log-Euclidean Gaussian kernel-based sparse representation (SR) in long-term EEG recordings. Unlike the traditional SR for vector data in Euclidean space, the log-Euclidean Gaussian kernel-based SR framework is proposed for seizure detection in the space of the symmetric positive definite (SPD) matrices, which form a Riemannian manifold. Since the Riemannian manifold is nonlinear, the log-Euclidean Gaussian kernel function is applied to embed it into a reproducing kernel Hilbert space (RKHS) for performing SR. The EEG signals of all channels are divided into epochs and the SPD matrices representing EEG epochs are generated by covariance descriptors. Then, the testing samples are sparsely coded over the dictionary composed by training samples utilizing log-Euclidean Gaussian kernel-based SR. The classification of testing samples is achieved by computing the minimal reconstructed residuals. The proposed method is evaluated on the Freiburg EEG dataset of 21 patients and shows its notable performance on both epoch-based and event-based assessments. Moreover, this method handles multiple channels of EEG recordings synchronously which is more speedy and efficient than traditional seizure detection methods.
Concurrent Tumor Segmentation and Registration with Uncertainty-based Sparse non-Uniform Graphs
Parisot, Sarah; Wells, William; Chemouny, Stéphane; Duffau, Hugues; Paragios, Nikos
2014-01-01
In this paper, we present a graph-based concurrent brain tumor segmentation and atlas to diseased patient registration framework. Both segmentation and registration problems are modeled using a unified pairwise discrete Markov Random Field model on a sparse grid superimposed to the image domain. Segmentation is addressed based on pattern classification techniques, while registration is performed by maximizing the similarity between volumes and is modular with respect to the matching criterion. The two problems are coupled by relaxing the registration term in the tumor area, corresponding to areas of high classification score and high dissimilarity between volumes. In order to overcome the main shortcomings of discrete approaches regarding appropriate sampling of the solution space as well as important memory requirements, content driven samplings of the discrete displacement set and the sparse grid are considered, based on the local segmentation and registration uncertainties recovered by the min marginal energies. State of the art results on a substantial low-grade glioma database demonstrate the potential of our method, while our proposed approach shows maintained performance and strongly reduced complexity of the model. PMID:24717540
Ye, Qing; Pan, Hao; Liu, Changhua
2015-01-01
This research proposes a novel framework of final drive simultaneous failure diagnosis containing feature extraction, training paired diagnostic models, generating decision threshold, and recognizing simultaneous failure modes. In feature extraction module, adopt wavelet package transform and fuzzy entropy to reduce noise interference and extract representative features of failure mode. Use single failure sample to construct probability classifiers based on paired sparse Bayesian extreme learning machine which is trained only by single failure modes and have high generalization and sparsity of sparse Bayesian learning approach. To generate optimal decision threshold which can convert probability output obtained from classifiers into final simultaneous failure modes, this research proposes using samples containing both single and simultaneous failure modes and Grid search method which is superior to traditional techniques in global optimization. Compared with other frequently used diagnostic approaches based on support vector machine and probability neural networks, experiment results based on F 1-measure value verify that the diagnostic accuracy and efficiency of the proposed framework which are crucial for simultaneous failure diagnosis are superior to the existing approach. PMID:25722717
Robustness-Based Design Optimization Under Data Uncertainty
NASA Technical Reports Server (NTRS)
Zaman, Kais; McDonald, Mark; Mahadevan, Sankaran; Green, Lawrence
2010-01-01
This paper proposes formulations and algorithms for design optimization under both aleatory (i.e., natural or physical variability) and epistemic uncertainty (i.e., imprecise probabilistic information), from the perspective of system robustness. The proposed formulations deal with epistemic uncertainty arising from both sparse and interval data without any assumption about the probability distributions of the random variables. A decoupled approach is proposed in this paper to un-nest the robustness-based design from the analysis of non-design epistemic variables to achieve computational efficiency. The proposed methods are illustrated for the upper stage design problem of a two-stage-to-orbit (TSTO) vehicle, where the information on the random design inputs are only available as sparse point and/or interval data. As collecting more data reduces uncertainty but increases cost, the effect of sample size on the optimality and robustness of the solution is also studied. A method is developed to determine the optimal sample size for sparse point data that leads to the solutions of the design problem that are least sensitive to variations in the input random variables.
Gallé, Róbert; Urák, István; Nikolett, Gallé-Szpisjak; Hartel, Tibor
2017-01-01
The integration of food production and biodiversity conservation represents a key challenge for sustainability. Several studies suggest that even small structural elements in the landscape can make a substantial contribution to the overall biodiversity value of the agricultural landscapes. Pastures can have high biodiversity potential. However, their intensive and monofunctional use typically erodes its natural capital, including biodiversity. Here we address the ecological value of fine scale structural elements represented by sparsely scattered trees and shrubs for the spider communities in a moderately intensively grazed pasture in Transylvania, Eastern Europe. The pasture was grazed with sheep, cattle and buffalo (ca 1 Livestock Unit ha-1) and no chemical fertilizers were applied. Sampling sites covered the open pasture as well as the existing fine-scale heterogeneity created by scattered trees and shrub. 40 sampling locations each being represented by three 1 m2 quadrats were situated in a stratified design while assuring spatial independency of sampling locations. We identified 140 species of spiders, out of which 18 were red listed and four were new for the Romanian fauna. Spider species assemblages of open pasture, scattered trees, trees and shrubs and the forest edge were statistically distinct. Our study shows that sparsely scattered mature woody vegetation and shrubs substantially increases the ecological value of managed pastures. The structural complexity provided by scattered trees and shrubs makes possible the co-occurrence of high spider diversity with a moderately high intensity grazing possible in this wood-pasture. Our results are in line with recent empirical research showing that sparse trees and shrubs increases the biodiversity potential of pastures managed for commodity production.
Nikolett, Gallé-Szpisjak; Hartel, Tibor
2017-01-01
The integration of food production and biodiversity conservation represents a key challenge for sustainability. Several studies suggest that even small structural elements in the landscape can make a substantial contribution to the overall biodiversity value of the agricultural landscapes. Pastures can have high biodiversity potential. However, their intensive and monofunctional use typically erodes its natural capital, including biodiversity. Here we address the ecological value of fine scale structural elements represented by sparsely scattered trees and shrubs for the spider communities in a moderately intensively grazed pasture in Transylvania, Eastern Europe. The pasture was grazed with sheep, cattle and buffalo (ca 1 Livestock Unit ha-1) and no chemical fertilizers were applied. Sampling sites covered the open pasture as well as the existing fine-scale heterogeneity created by scattered trees and shrub. 40 sampling locations each being represented by three 1 m2 quadrats were situated in a stratified design while assuring spatial independency of sampling locations. We identified 140 species of spiders, out of which 18 were red listed and four were new for the Romanian fauna. Spider species assemblages of open pasture, scattered trees, trees and shrubs and the forest edge were statistically distinct. Our study shows that sparsely scattered mature woody vegetation and shrubs substantially increases the ecological value of managed pastures. The structural complexity provided by scattered trees and shrubs makes possible the co-occurrence of high spider diversity with a moderately high intensity grazing possible in this wood-pasture. Our results are in line with recent empirical research showing that sparse trees and shrubs increases the biodiversity potential of pastures managed for commodity production. PMID:28886058
Milshteyn, Eugene; von Morze, Cornelius; Reed, Galen D; Shang, Hong; Shin, Peter J; Larson, Peder E Z; Vigneron, Daniel B
2018-05-01
Acceleration of dynamic 2D (T 2 Mapping) and 3D hyperpolarized 13 C MRI acquisitions using the balanced steady-state free precession sequence was achieved with a specialized reconstruction method, based on the combination of low rank plus sparse and local low rank reconstructions. Methods were validated using both retrospectively and prospectively undersampled in vivo data from normal rats and tumor-bearing mice. Four-fold acceleration of 1-2 mm isotropic 3D dynamic acquisitions with 2-5 s temporal resolution and two-fold acceleration of 0.25-1 mm 2 2D dynamic acquisitions was achieved. This enabled visualization of the biodistribution of [2- 13 C]pyruvate, [1- 13 C]lactate, [ 13 C, 15 N 2 ]urea, and HP001 within heart, kidneys, vasculature, and tumor, as well as calculation of high resolution T 2 maps. Copyright © 2018 Elsevier Inc. All rights reserved.
This is SPIRAL-TAP: Sparse Poisson Intensity Reconstruction ALgorithms--theory and practice.
Harmany, Zachary T; Marcia, Roummel F; Willett, Rebecca M
2012-03-01
Observations in many applications consist of counts of discrete events, such as photons hitting a detector, which cannot be effectively modeled using an additive bounded or Gaussian noise model, and instead require a Poisson noise model. As a result, accurate reconstruction of a spatially or temporally distributed phenomenon (f*) from Poisson data (y) cannot be effectively accomplished by minimizing a conventional penalized least-squares objective function. The problem addressed in this paper is the estimation of f* from y in an inverse problem setting, where the number of unknowns may potentially be larger than the number of observations and f* admits sparse approximation. The optimization formulation considered in this paper uses a penalized negative Poisson log-likelihood objective function with nonnegativity constraints (since Poisson intensities are naturally nonnegative). In particular, the proposed approach incorporates key ideas of using separable quadratic approximations to the objective function at each iteration and penalization terms related to l1 norms of coefficient vectors, total variation seminorms, and partition-based multiscale estimation methods.
NASA Astrophysics Data System (ADS)
Park, Suhyung; Park, Jaeseok
2015-05-01
Accelerated dynamic MRI, which exploits spatiotemporal redundancies in k - t space and coil dimension, has been widely used to reduce the number of signal encoding and thus increase imaging efficiency with minimal loss of image quality. Nonetheless, particularly in cardiac MRI it still suffers from artifacts and amplified noise in the presence of time-drifting coil sensitivity due to relative motion between coil and subject (e.g. free breathing). Furthermore, a substantial number of additional calibrating signals is to be acquired to warrant accurate calibration of coil sensitivity. In this work, we propose a novel, accelerated dynamic cardiac MRI with sparse-Kalman-smoother self-calibration and reconstruction (k - t SPARKS), which is robust to time-varying coil sensitivity even with a small number of calibrating signals. The proposed k - t SPARKS incorporates Kalman-smoother self-calibration in k - t space and sparse signal recovery in x - f space into a single optimization problem, leading to iterative, joint estimation of time-varying convolution kernels and missing signals in k - t space. In the Kalman-smoother calibration, motion-induced uncertainties over the entire time frames were included in modeling state transition while a coil-dependent noise statistic in describing measurement process. The sparse signal recovery iteratively alternates with the self-calibration to tackle the ill-conditioning problem potentially resulting from insufficient calibrating signals. Simulations and experiments were performed using both the proposed and conventional methods for comparison, revealing that the proposed k - t SPARKS yields higher signal-to-error ratio and superior temporal fidelity in both breath-hold and free-breathing cardiac applications over all reduction factors.
Park, Suhyung; Park, Jaeseok
2015-05-07
Accelerated dynamic MRI, which exploits spatiotemporal redundancies in k - t space and coil dimension, has been widely used to reduce the number of signal encoding and thus increase imaging efficiency with minimal loss of image quality. Nonetheless, particularly in cardiac MRI it still suffers from artifacts and amplified noise in the presence of time-drifting coil sensitivity due to relative motion between coil and subject (e.g. free breathing). Furthermore, a substantial number of additional calibrating signals is to be acquired to warrant accurate calibration of coil sensitivity. In this work, we propose a novel, accelerated dynamic cardiac MRI with sparse-Kalman-smoother self-calibration and reconstruction (k - t SPARKS), which is robust to time-varying coil sensitivity even with a small number of calibrating signals. The proposed k - t SPARKS incorporates Kalman-smoother self-calibration in k - t space and sparse signal recovery in x - f space into a single optimization problem, leading to iterative, joint estimation of time-varying convolution kernels and missing signals in k - t space. In the Kalman-smoother calibration, motion-induced uncertainties over the entire time frames were included in modeling state transition while a coil-dependent noise statistic in describing measurement process. The sparse signal recovery iteratively alternates with the self-calibration to tackle the ill-conditioning problem potentially resulting from insufficient calibrating signals. Simulations and experiments were performed using both the proposed and conventional methods for comparison, revealing that the proposed k - t SPARKS yields higher signal-to-error ratio and superior temporal fidelity in both breath-hold and free-breathing cardiac applications over all reduction factors.
Augmented l1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm. Revision 1
2012-10-17
nonzero and sampled from the standard Gaussian distribution (for Figure 2) or the Bernoulli distribution (for Figure 3). Both tests had the same sensing...dual variable y(k) Figure 3: Convergence of primal and dual variables of three algorithms on Bernoulli sparse x0 was the slowest. Besides the obvious...slower convergence than the final stage. Comparing the results of two tests, the convergence was faster on the Bernoulli sparse signal than the
APPLYING MULTIMETRIC INDICES AT HIGH RESOLUTION ...
Like many inland waters worldwide, streams and rivers of the Western U.S. are faced with a multitude of challenges stemming from past land use practices and changing future conditions. To address these issues, the USEPA has developed empirical tools for evaluating instream conditions and monitoring the status of our freshwater resources over time. These efforts have made substantial progress in integrating quantitative methods into multimetric indices (MMIs) used for national and regional assessments and have provided an enhanced understanding of condition patterns across the broader landscape. To examine the extent of spatial and temporal variability not captured by the sparse distribution of sample sites used in these large-scale assessments, we applied two existing MMIs to inter-seasonal fish and macroinvertebrate data from the Calapooia Basin in Oregon’s Willamette Valley. Our chosen indices revealed a high degree of variation in biotic condition within our study area. With notable exceptions, indices were seasonally robust, indicating potential flexibility for scheduling sampling. An increased understanding of condition patterns occurring at fine spatial scales and the natural and anthropogenic effects influencing them can help guide and prioritize restoration and management. Multimetric indices (MMIs) that incorporate data on the biological and physical characteristics of freshwater systems and provide meaningful indicators of instream conditions
NASA Astrophysics Data System (ADS)
Fenty, I. G.; Willis, J. K.; Rignot, E. J.
2016-12-01
Motivated by the need to understand the connection between the warming North Atlantic Ocean and increasing ice mass loss from the Greenland Ice Sheet, in 2015 we initiated "Oceans Melting Greenland" (OMG), a 5-year NASA sub-orbital mission. One component of OMG is a once-yearly sampling of full-depth vertical profiles of ocean temperature and salinity around Greenland's continental shelf at 250 locations. These measurements have the potential to provide an unprecedented view of ocean properties around Greenland, especially the warm, salty subsurface Atlantic Waters that have been implicated in tidewater glacier retreat, acceleration, and thinning. However, OMG'S ocean measurements are essentially large-scale synoptic snapshots of an ocean state whose characteristic scales of temporal and spatial variability around Greenland are largely unknown. In this talk we discuss how high-resolution numerical ocean modelling is being employed to quantitatively estimate the region's natural hydrographic variability for the dual purposes of (1) informing our pan-Greenland ocean sampling strategy and (2) informing our interpretation of temperature trends in the data. OMG hydrographic shelf data collected in ship-based CTDs (2015, 2016) and Airborne eXpendable CTDs (2016) will be examined in the context of this estimated ocean variability.
Ptychographic imaging with partially coherent plasma EUV sources
NASA Astrophysics Data System (ADS)
Bußmann, Jan; Odstrčil, Michal; Teramoto, Yusuke; Juschkin, Larissa
2017-12-01
We report on high-resolution lens-less imaging experiments based on ptychographic scanning coherent diffractive imaging (CDI) method employing compact plasma sources developed for extreme ultraviolet (EUV) lithography applications. Two kinds of discharge sources were used in our experiments: a hollow-cathode-triggered pinch plasma source operated with oxygen and for the first time a laser-assisted discharge EUV source with a liquid tin target. Ptychographic reconstructions of different samples were achieved by applying constraint relaxation to the algorithm. Our ptychography algorithms can handle low spatial coherence and broadband illumination as well as compensate for the residual background due to plasma radiation in the visible spectral range. Image resolution down to 100 nm is demonstrated even for sparse objects, and it is limited presently by the sample structure contrast and the available coherent photon flux. We could extract material properties by the reconstruction of the complex exit-wave field, gaining additional information compared to electron microscopy or CDI with longer-wavelength high harmonic laser sources. Our results show that compact plasma-based EUV light sources of only partial spatial and temporal coherence can be effectively used for lens-less imaging applications. The reported methods may be applied in combination with reflectometry and scatterometry for high-resolution EUV metrology.
A practical modification of horizontal line sampling for snag and cavity tree inventory
M. J. Ducey; G. J. Jordan; J. H. Gove; H. T. Valentine
2002-01-01
Snags and cavity trees are important structural features in forests, but they are often sparsely distributed, making efficient inventories problematic. We present a straightforward modification of horizontal line sampling designed to facilitate inventory of these features while remaining compatible with commonly employed sampling methods for the living overstory. The...
Miniature Laboratory for Detecting Sparse Biomolecules
NASA Technical Reports Server (NTRS)
Lin, Ying; Yu, Nan
2005-01-01
A miniature laboratory system has been proposed for use in the field to detect sparsely distributed biomolecules. By emphasizing concentration and sorting of specimens prior to detection, the underlying system concept would make it possible to attain high detection sensitivities without the need to develop ever more sensitive biosensors. The original purpose of the proposal is to aid the search for signs of life on a remote planet by enabling the detection of specimens as sparse as a few molecules or microbes in a large amount of soil, dust, rocks, water/ice, or other raw sample material. Some version of the system could prove useful on Earth for remote sensing of biological contamination, including agents of biological warfare. Processing in this system would begin with dissolution of the raw sample material in a sample-separation vessel. The solution in the vessel would contain floating microscopic magnetic beads coated with substances that could engage in chemical reactions with various target functional groups that are parts of target molecules. The chemical reactions would cause the targeted molecules to be captured on the surfaces of the beads. By use of a controlled magnetic field, the beads would be concentrated in a specified location in the vessel. Once the beads were thus concentrated, the rest of the solution would be discarded. This procedure would obviate the filtration steps and thereby also eliminate the filter-clogging difficulties of typical prior sample-concentration schemes. For ferrous dust/soil samples, the dissolution would be done first in a separate vessel before the solution is transferred to the microbead-containing vessel.
Energy Efficient GNSS Signal Acquisition Using Singular Value Decomposition (SVD).
Bermúdez Ordoñez, Juan Carlos; Arnaldo Valdés, Rosa María; Gómez Comendador, Fernando
2018-05-16
A significant challenge in global navigation satellite system (GNSS) signal processing is a requirement for a very high sampling rate. The recently-emerging compressed sensing (CS) theory makes processing GNSS signals at a low sampling rate possible if the signal has a sparse representation in a certain space. Based on CS and SVD theories, an algorithm for sampling GNSS signals at a rate much lower than the Nyquist rate and reconstructing the compressed signal is proposed in this research, which is validated after the output from that process still performs signal detection using the standard fast Fourier transform (FFT) parallel frequency space search acquisition. The sparse representation of the GNSS signal is the most important precondition for CS, by constructing a rectangular Toeplitz matrix (TZ) of the transmitted signal, calculating the left singular vectors using SVD from the TZ, to achieve sparse signal representation. Next, obtaining the M-dimensional observation vectors based on the left singular vectors of the SVD, which are equivalent to the sampler operator in standard compressive sensing theory, the signal can be sampled below the Nyquist rate, and can still be reconstructed via ℓ 1 minimization with accuracy using convex optimization. As an added value, there is a GNSS signal acquisition enhancement effect by retaining the useful signal and filtering out noise by projecting the signal into the most significant proper orthogonal modes (PODs) which are the optimal distributions of signal power. The algorithm is validated with real recorded signals, and the results show that the proposed method is effective for sampling, reconstructing intermediate frequency (IF) GNSS signals in the time discrete domain.
Energy Efficient GNSS Signal Acquisition Using Singular Value Decomposition (SVD)
Arnaldo Valdés, Rosa María; Gómez Comendador, Fernando
2018-01-01
A significant challenge in global navigation satellite system (GNSS) signal processing is a requirement for a very high sampling rate. The recently-emerging compressed sensing (CS) theory makes processing GNSS signals at a low sampling rate possible if the signal has a sparse representation in a certain space. Based on CS and SVD theories, an algorithm for sampling GNSS signals at a rate much lower than the Nyquist rate and reconstructing the compressed signal is proposed in this research, which is validated after the output from that process still performs signal detection using the standard fast Fourier transform (FFT) parallel frequency space search acquisition. The sparse representation of the GNSS signal is the most important precondition for CS, by constructing a rectangular Toeplitz matrix (TZ) of the transmitted signal, calculating the left singular vectors using SVD from the TZ, to achieve sparse signal representation. Next, obtaining the M-dimensional observation vectors based on the left singular vectors of the SVD, which are equivalent to the sampler operator in standard compressive sensing theory, the signal can be sampled below the Nyquist rate, and can still be reconstructed via ℓ1 minimization with accuracy using convex optimization. As an added value, there is a GNSS signal acquisition enhancement effect by retaining the useful signal and filtering out noise by projecting the signal into the most significant proper orthogonal modes (PODs) which are the optimal distributions of signal power. The algorithm is validated with real recorded signals, and the results show that the proposed method is effective for sampling, reconstructing intermediate frequency (IF) GNSS signals in the time discrete domain. PMID:29772731
Preconditioned conjugate gradient wave-front reconstructors for multiconjugate adaptive optics.
Gilles, Luc; Ellerbroek, Brent L; Vogel, Curtis R
2003-09-10
Multiconjugate adaptive optics (MCAO) systems with 10(4)-10(5) degrees of freedom have been proposed for future giant telescopes. Using standard matrix methods to compute, optimize, and implement wavefront control algorithms for these systems is impractical, since the number of calculations required to compute and apply the reconstruction matrix scales respectively with the cube and the square of the number of adaptive optics degrees of freedom. We develop scalable open-loop iterative sparse matrix implementations of minimum variance wave-front reconstruction for telescope diameters up to 32 m with more than 10(4) actuators. The basic approach is the preconditioned conjugate gradient method with an efficient preconditioner, whose block structure is defined by the atmospheric turbulent layers very much like the layer-oriented MCAO algorithms of current interest. Two cost-effective preconditioners are investigated: a multigrid solver and a simpler block symmetric Gauss-Seidel (BSGS) sweep. Both options require off-line sparse Cholesky factorizations of the diagonal blocks of the matrix system. The cost to precompute these factors scales approximately as the three-halves power of the number of estimated phase grid points per atmospheric layer, and their average update rate is typically of the order of 10(-2) Hz, i.e., 4-5 orders of magnitude lower than the typical 10(3) Hz temporal sampling rate. All other computations scale almost linearly with the total number of estimated phase grid points. We present numerical simulation results to illustrate algorithm convergence. Convergence rates of both preconditioners are similar, regardless of measurement noise level, indicating that the layer-oriented BSGS sweep is as effective as the more elaborated multiresolution preconditioner.
Thermal infrared remote sensing of water temperature in riverine landscapes
Handcock, Rebecca N.; Torgersen, Christian E.; Cherkauer, Keith A.; Gillespie, Alan R.; Klement, Tockner; Faux, Russell N.; Tan, Jing; Carbonneau, Patrice E.; Piégay, Hervé
2012-01-01
Water temperature in riverine landscapes is an important regional indicator of water quality that is influenced by both ground- and surface-water inputs, and indirectly by land use in the surrounding watershed (Brown and Krygier, 1970; Beschta et al., 1987; Chen et al., 1998; Poole and Berman, 2001).Coldwater fishes such as salmon and trout are sensitive to elevated water temperature; therefore, water temperature must meet management guidelines and quality standards, which aim to create a healthy environment for endangered populations (McCullough et al., 2009). For example, in the USA, the Environmental Protection Agency (EPA) has established water quality standards to identify specific temperature criteria to protect coldwater fishes (Environmental Protection Agency, 2003). Trout and salmon can survive in cool-water refugia even when temperatures at other measurement locations are at or above the recommended maximums (Ebersole et al., 2001; Baird and Krueger, 2003; High et al., 2006). Spatially extensive measurements of water temperature are necessary to locate these refugia, to identify the location of ground- and surface-water inputs to the river channel, and to identify thermal pollution sources. Regional assessment of water temperature in streams and rivers has been limited by sparse sampling in both space and time. Water temperature has typically been measured using a network of widely distributed instream gages, which record the temporal change of the bulk, or kinetic, temperature of the water (Tk) at specific locations. For example, the State of Washington (USA) recorded water quality conditions at 76 stations within the Puget Lowlands eco region, which contains 12,721 km of streams and rivers (Washington Department of Ecology, 1998). Such gages are sparsely distributed, are typically located only in larger streams and rivers, and give limited information about the spatial distribution of water temperature.
Thermal infrared remote sensing of water temperature in riverine landscapes: Chapter 5
Carbonneau, Rebecca N.; Piégay, Hervé; Handcock, R.N; Torgersen, Christian E.; Cherkauer, K.A; Gillespie, A.R; Tockner, K; Faux, R. N.; Tan, Jing
2012-01-01
Water temperature in riverine landscapes is an important regional indicator of water quality that is influenced by both ground- and surface-water inputs, and indirectly by land use in the surrounding watershed (Brown and Krygier, 1970; Beschta et al., 1987; Chen et al., 1998; Poole and Berman, 2001). Coldwater fishes such as salmon and trout are sensitive to elevated water temperature; therefore, water temperature must meet management guidelines and quality standards, which aim to create a healthy environment for endangered populations (McCullough et al., 2009). For example, in the USA, the Environmental Protection Agency (EPA) has established water quality standards to identify specific temperature criteria to protect coldwater fishes (Environmental Protection Agency, 2003). Trout and salmon can survive in cool-water refugia even when temperatures at other measurement locations are at or above the recommended maximums (Ebersole et al., 2001; Baird and Krueger, 2003; High et al., 2006). Spatially extensive measurements of water temperature are necessary to locate these refugia, to identify the location of ground- and surface-water inputs to the river channel, and to identify thermal pollution sources. Regional assessment of water temperature in streams and rivers has been limited by sparse sampling in both space and time. Water temperature has typically been measured using a network of widely distributed instream gages, which record the temporal change of the bulk, or kinetic, temperature of the water (Tk) at specific locations. For example, the State of Washington (USA) recorded water quality conditions at 76 stations within the Puget Lowlands eco region, which contains 12,721 km of streams and rivers (Washington Department of Ecology, 1998). Such gages are sparsely distributed, are typically located only in larger streams and rivers, and give limited information about the spatial distribution of water temperature (Cherkauer et al., 2005).
Semi-blind sparse image reconstruction with application to MRFM.
Park, Se Un; Dobigeon, Nicolas; Hero, Alfred O
2012-09-01
We propose a solution to the image deconvolution problem where the convolution kernel or point spread function (PSF) is assumed to be only partially known. Small perturbations generated from the model are exploited to produce a few principal components explaining the PSF uncertainty in a high-dimensional space. Unlike recent developments on blind deconvolution of natural images, we assume the image is sparse in the pixel basis, a natural sparsity arising in magnetic resonance force microscopy (MRFM). Our approach adopts a Bayesian Metropolis-within-Gibbs sampling framework. The performance of our Bayesian semi-blind algorithm for sparse images is superior to previously proposed semi-blind algorithms such as the alternating minimization algorithm and blind algorithms developed for natural images. We illustrate our myopic algorithm on real MRFM tobacco virus data.
Pisharady, Pramod Kumar; Sotiropoulos, Stamatios N; Sapiro, Guillermo; Lenglet, Christophe
2017-09-01
We propose a sparse Bayesian learning algorithm for improved estimation of white matter fiber parameters from compressed (under-sampled q-space) multi-shell diffusion MRI data. The multi-shell data is represented in a dictionary form using a non-monoexponential decay model of diffusion, based on continuous gamma distribution of diffusivities. The fiber volume fractions with predefined orientations, which are the unknown parameters, form the dictionary weights. These unknown parameters are estimated with a linear un-mixing framework, using a sparse Bayesian learning algorithm. A localized learning of hyperparameters at each voxel and for each possible fiber orientations improves the parameter estimation. Our experiments using synthetic data from the ISBI 2012 HARDI reconstruction challenge and in-vivo data from the Human Connectome Project demonstrate the improvements.
Sequential Dictionary Learning From Correlated Data: Application to fMRI Data Analysis.
Seghouane, Abd-Krim; Iqbal, Asif
2017-03-22
Sequential dictionary learning via the K-SVD algorithm has been revealed as a successful alternative to conventional data driven methods such as independent component analysis (ICA) for functional magnetic resonance imaging (fMRI) data analysis. fMRI datasets are however structured data matrices with notions of spatio-temporal correlation and temporal smoothness. This prior information has not been included in the K-SVD algorithm when applied to fMRI data analysis. In this paper we propose three variants of the K-SVD algorithm dedicated to fMRI data analysis by accounting for this prior information. The proposed algorithms differ from the K-SVD in their sparse coding and dictionary update stages. The first two algorithms account for the known correlation structure in the fMRI data by using the squared Q, R-norm instead of the Frobenius norm for matrix approximation. The third and last algorithm account for both the known correlation structure in the fMRI data and the temporal smoothness. The temporal smoothness is incorporated in the dictionary update stage via regularization of the dictionary atoms obtained with penalization. The performance of the proposed dictionary learning algorithms are illustrated through simulations and applications on real fMRI data.
Kim, Yong-Hwan; Kim, Junghoe; Lee, Jong-Hwan
2012-12-01
This study proposes an iterative dual-regression (DR) approach with sparse prior regularization to better estimate an individual's neuronal activation using the results of an independent component analysis (ICA) method applied to a temporally concatenated group of functional magnetic resonance imaging (fMRI) data (i.e., Tc-GICA method). An ordinary DR approach estimates the spatial patterns (SPs) of neuronal activation and corresponding time courses (TCs) specific to each individual's fMRI data with two steps involving least-squares (LS) solutions. Our proposed approach employs iterative LS solutions to refine both the individual SPs and TCs with an additional a priori assumption of sparseness in the SPs (i.e., minimally overlapping SPs) based on L(1)-norm minimization. To quantitatively evaluate the performance of this approach, semi-artificial fMRI data were created from resting-state fMRI data with the following considerations: (1) an artificially designed spatial layout of neuronal activation patterns with varying overlap sizes across subjects and (2) a BOLD time series (TS) with variable parameters such as onset time, duration, and maximum BOLD levels. To systematically control the spatial layout variability of neuronal activation patterns across the "subjects" (n=12), the degree of spatial overlap across all subjects was varied from a minimum of 1 voxel (i.e., 0.5-voxel cubic radius) to a maximum of 81 voxels (i.e., 2.5-voxel radius) across the task-related SPs with a size of 100 voxels for both the block-based and event-related task paradigms. In addition, several levels of maximum percentage BOLD intensity (i.e., 0.5, 1.0, 2.0, and 3.0%) were used for each degree of spatial overlap size. From the results, the estimated individual SPs of neuronal activation obtained from the proposed iterative DR approach with a sparse prior showed an enhanced true positive rate and reduced false positive rate compared to the ordinary DR approach. The estimated TCs of the task-related SPs from our proposed approach showed greater temporal correlation coefficients with a reference hemodynamic response function than those of the ordinary DR approach. Moreover, the efficacy of the proposed DR approach was also successfully demonstrated by the results of real fMRI data acquired from left-/right-hand clenching tasks in both block-based and event-related task paradigms. Copyright © 2012 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Casson, David; Werner, Micha; Weerts, Albrecht; Schellekens, Jaap; Solomatine, Dimitri
2017-04-01
Hydrological modelling in the Canadian Sub-Arctic is hindered by the limited spatial and temporal coverage of local meteorological data. Local watershed modelling often relies on data from a sparse network of meteorological stations with a rough density of 3 active stations per 100,000 km2. Global datasets hold great promise for application due to more comprehensive spatial and extended temporal coverage. A key objective of this study is to demonstrate the application of global datasets and data assimilation techniques for hydrological modelling of a data sparse, Sub-Arctic watershed. Application of available datasets and modelling techniques is currently limited in practice due to a lack of local capacity and understanding of available tools. Due to the importance of snow processes in the region, this study also aims to evaluate the performance of global SWE products for snowpack modelling. The Snare Watershed is a 13,300 km2 snowmelt driven sub-basin of the Mackenzie River Basin, Northwest Territories, Canada. The Snare watershed is data sparse in terms of meteorological data, but is well gauged with consistent discharge records since the late 1970s. End of winter snowpack surveys have been conducted every year from 1978-present. The application of global re-analysis datasets from the EU FP7 eartH2Observe project are investigated in this study. Precipitation data are taken from Multi-Source Weighted-Ensemble Precipitation (MSWEP) and temperature data from Watch Forcing Data applied to European Reanalysis (ERA)-Interim data (WFDEI). GlobSnow-2 is a global Snow Water Equivalent (SWE) measurement product funded by the European Space Agency (ESA) and is also evaluated over the local watershed. Downscaled precipitation, temperature and potential evaporation datasets are used as forcing data in a distributed version of the HBV model implemented in the WFLOW framework. Results demonstrate the successful application of global datasets in local watershed modelling, but that validation of actual frozen precipitation and snowpack conditions is very difficult. The distributed hydrological model shows good streamflow simulation performance based on statistical model evaluation techniques. Results are also promising for inter-annual variability, spring snowmelt onset and time to peak flows. It is expected that data assimilation of stream flow using an Ensemble Kalman Filter will further improve model performance. This study shows that global re-analysis datasets hold great potential for understanding the hydrology and snowpack dynamics of the expansive and data sparse sub-Arctic. However, global SWE products will require further validation and algorithm improvements, particularly over boreal forest and lake-rich regions.
Uniform Recovery Bounds for Structured Random Matrices in Corrupted Compressed Sensing
NASA Astrophysics Data System (ADS)
Zhang, Peng; Gan, Lu; Ling, Cong; Sun, Sumei
2018-04-01
We study the problem of recovering an $s$-sparse signal $\\mathbf{x}^{\\star}\\in\\mathbb{C}^n$ from corrupted measurements $\\mathbf{y} = \\mathbf{A}\\mathbf{x}^{\\star}+\\mathbf{z}^{\\star}+\\mathbf{w}$, where $\\mathbf{z}^{\\star}\\in\\mathbb{C}^m$ is a $k$-sparse corruption vector whose nonzero entries may be arbitrarily large and $\\mathbf{w}\\in\\mathbb{C}^m$ is a dense noise with bounded energy. The aim is to exactly and stably recover the sparse signal with tractable optimization programs. In this paper, we prove the uniform recovery guarantee of this problem for two classes of structured sensing matrices. The first class can be expressed as the product of a unit-norm tight frame (UTF), a random diagonal matrix and a bounded columnwise orthonormal matrix (e.g., partial random circulant matrix). When the UTF is bounded (i.e. $\\mu(\\mathbf{U})\\sim1/\\sqrt{m}$), we prove that with high probability, one can recover an $s$-sparse signal exactly and stably by $l_1$ minimization programs even if the measurements are corrupted by a sparse vector, provided $m = \\mathcal{O}(s \\log^2 s \\log^2 n)$ and the sparsity level $k$ of the corruption is a constant fraction of the total number of measurements. The second class considers randomly sub-sampled orthogonal matrix (e.g., random Fourier matrix). We prove the uniform recovery guarantee provided that the corruption is sparse on certain sparsifying domain. Numerous simulation results are also presented to verify and complement the theoretical results.
Adaptive low-rank subspace learning with online optimization for robust visual tracking.
Liu, Risheng; Wang, Di; Han, Yuzhuo; Fan, Xin; Luo, Zhongxuan
2017-04-01
In recent years, sparse and low-rank models have been widely used to formulate appearance subspace for visual tracking. However, most existing methods only consider the sparsity or low-rankness of the coefficients, which is not sufficient enough for appearance subspace learning on complex video sequences. Moreover, as both the low-rank and the column sparse measures are tightly related to all the samples in the sequences, it is challenging to incrementally solve optimization problems with both nuclear norm and column sparse norm on sequentially obtained video data. To address above limitations, this paper develops a novel low-rank subspace learning with adaptive penalization (LSAP) framework for subspace based robust visual tracking. Different from previous work, which often simply decomposes observations as low-rank features and sparse errors, LSAP simultaneously learns the subspace basis, low-rank coefficients and column sparse errors to formulate appearance subspace. Within LSAP framework, we introduce a Hadamard production based regularization to incorporate rich generative/discriminative structure constraints to adaptively penalize the coefficients for subspace learning. It is shown that such adaptive penalization can significantly improve the robustness of LSAP on severely corrupted dataset. To utilize LSAP for online visual tracking, we also develop an efficient incremental optimization scheme for nuclear norm and column sparse norm minimizations. Experiments on 50 challenging video sequences demonstrate that our tracker outperforms other state-of-the-art methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
Application of a sparseness constraint in multivariate curve resolution - Alternating least squares.
Hugelier, Siewert; Piqueras, Sara; Bedia, Carmen; de Juan, Anna; Ruckebusch, Cyril
2018-02-13
The use of sparseness in chemometrics is a concept that has increased in popularity. The advantage is, above all, a better interpretability of the results obtained. In this work, sparseness is implemented as a constraint in multivariate curve resolution - alternating least squares (MCR-ALS), which aims at reproducing raw (mixed) data by a bilinear model of chemically meaningful profiles. In many cases, the mixed raw data analyzed are not sparse by nature, but their decomposition profiles can be, as it is the case in some instrumental responses, such as mass spectra, or in concentration profiles linked to scattered distribution maps of powdered samples in hyperspectral images. To induce sparseness in the constrained profiles, one-dimensional and/or two-dimensional numerical arrays can be fitted using a basis of Gaussian functions with a penalty on the coefficients. In this work, a least squares regression framework with L 0 -norm penalty is applied. This L 0 -norm penalty constrains the number of non-null coefficients in the fit of the array constrained without having an a priori on the number and their positions. It has been shown that the sparseness constraint induces the suppression of values linked to uninformative channels and noise in MS spectra and improves the location of scattered compounds in distribution maps, resulting in a better interpretability of the constrained profiles. An additional benefit of the sparseness constraint is a lower ambiguity in the bilinear model, since the major presence of null coefficients in the constrained profiles also helps to limit the solutions for the profiles in the counterpart matrix of the MCR bilinear model. Copyright © 2017 Elsevier B.V. All rights reserved.
FALCON: fast and unbiased reconstruction of high-density super-resolution microscopy data
NASA Astrophysics Data System (ADS)
Min, Junhong; Vonesch, Cédric; Kirshner, Hagai; Carlini, Lina; Olivier, Nicolas; Holden, Seamus; Manley, Suliana; Ye, Jong Chul; Unser, Michael
2014-04-01
Super resolution microscopy such as STORM and (F)PALM is now a well known method for biological studies at the nanometer scale. However, conventional imaging schemes based on sparse activation of photo-switchable fluorescent probes have inherently slow temporal resolution which is a serious limitation when investigating live-cell dynamics. Here, we present an algorithm for high-density super-resolution microscopy which combines a sparsity-promoting formulation with a Taylor series approximation of the PSF. Our algorithm is designed to provide unbiased localization on continuous space and high recall rates for high-density imaging, and to have orders-of-magnitude shorter run times compared to previous high-density algorithms. We validated our algorithm on both simulated and experimental data, and demonstrated live-cell imaging with temporal resolution of 2.5 seconds by recovering fast ER dynamics.
FALCON: fast and unbiased reconstruction of high-density super-resolution microscopy data
Min, Junhong; Vonesch, Cédric; Kirshner, Hagai; Carlini, Lina; Olivier, Nicolas; Holden, Seamus; Manley, Suliana; Ye, Jong Chul; Unser, Michael
2014-01-01
Super resolution microscopy such as STORM and (F)PALM is now a well known method for biological studies at the nanometer scale. However, conventional imaging schemes based on sparse activation of photo-switchable fluorescent probes have inherently slow temporal resolution which is a serious limitation when investigating live-cell dynamics. Here, we present an algorithm for high-density super-resolution microscopy which combines a sparsity-promoting formulation with a Taylor series approximation of the PSF. Our algorithm is designed to provide unbiased localization on continuous space and high recall rates for high-density imaging, and to have orders-of-magnitude shorter run times compared to previous high-density algorithms. We validated our algorithm on both simulated and experimental data, and demonstrated live-cell imaging with temporal resolution of 2.5 seconds by recovering fast ER dynamics. PMID:24694686
Temporal variability of the Atlantic meridional overturning circulation at 26.5 degrees N.
Cunningham, Stuart A; Kanzow, Torsten; Rayner, Darren; Baringer, Molly O; Johns, William E; Marotzke, Jochem; Longworth, Hannah R; Grant, Elizabeth M; Hirschi, Joël J-M; Beal, Lisa M; Meinen, Christopher S; Bryden, Harry L
2007-08-17
The vigor of Atlantic meridional overturning circulation (MOC) is thought to be vulnerable to global warming, but its short-term temporal variability is unknown so changes inferred from sparse observations on the decadal time scale of recent climate change are uncertain. We combine continuous measurements of the MOC (beginning in 2004) using the purposefully designed transatlantic Rapid Climate Change array of moored instruments deployed along 26.5 degrees N, with time series of Gulf Stream transport and surface-layer Ekman transport to quantify its intra-annual variability. The year-long average overturning is 18.7 +/- 5.6 sverdrups (Sv) (range: 4.0 to 34.9 Sv, where 1 Sv = a flow of ocean water of 10(6) cubic meters per second). Interannual changes in the overturning can be monitored with a resolution of 1.5 Sv.
Tensor-based Dictionary Learning for Dynamic Tomographic Reconstruction
Tan, Shengqi; Zhang, Yanbo; Wang, Ge; Mou, Xuanqin; Cao, Guohua; Wu, Zhifang; Yu, Hengyong
2015-01-01
In dynamic computed tomography (CT) reconstruction, the data acquisition speed limits the spatio-temporal resolution. Recently, compressed sensing theory has been instrumental in improving CT reconstruction from far few-view projections. In this paper, we present an adaptive method to train a tensor-based spatio-temporal dictionary for sparse representation of an image sequence during the reconstruction process. The correlations among atoms and across phases are considered to capture the characteristics of an object. The reconstruction problem is solved by the alternating direction method of multipliers. To recover fine or sharp structures such as edges, the nonlocal total variation is incorporated into the algorithmic framework. Preclinical examples including a sheep lung perfusion study and a dynamic mouse cardiac imaging demonstrate that the proposed approach outperforms the vectorized dictionary-based CT reconstruction in the case of few-view reconstruction. PMID:25779991
Sparse Image Reconstruction on the Sphere: Analysis and Synthesis.
Wallis, Christopher G R; Wiaux, Yves; McEwen, Jason D
2017-11-01
We develop techniques to solve ill-posed inverse problems on the sphere by sparse regularization, exploiting sparsity in both axisymmetric and directional scale-discretized wavelet space. Denoising, inpainting, and deconvolution problems and combinations thereof, are considered as examples. Inverse problems are solved in both the analysis and synthesis settings, with a number of different sampling schemes. The most effective approach is that with the most restricted solution-space, which depends on the interplay between the adopted sampling scheme, the selection of the analysis/synthesis problem, and any weighting of the l 1 norm appearing in the regularization problem. More efficient sampling schemes on the sphere improve reconstruction fidelity by restricting the solution-space and also by improving sparsity in wavelet space. We apply the technique to denoise Planck 353-GHz observations, improving the ability to extract the structure of Galactic dust emission, which is important for studying Galactic magnetism.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.
A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction ofmore » sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. As a result, numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.« less
Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.
2017-04-12
A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction ofmore » sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. As a result, numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.« less
HYPOTHESIS TESTING FOR HIGH-DIMENSIONAL SPARSE BINARY REGRESSION
Mukherjee, Rajarshi; Pillai, Natesh S.; Lin, Xihong
2015-01-01
In this paper, we study the detection boundary for minimax hypothesis testing in the context of high-dimensional, sparse binary regression models. Motivated by genetic sequencing association studies for rare variant effects, we investigate the complexity of the hypothesis testing problem when the design matrix is sparse. We observe a new phenomenon in the behavior of detection boundary which does not occur in the case of Gaussian linear regression. We derive the detection boundary as a function of two components: a design matrix sparsity index and signal strength, each of which is a function of the sparsity of the alternative. For any alternative, if the design matrix sparsity index is too high, any test is asymptotically powerless irrespective of the magnitude of signal strength. For binary design matrices with the sparsity index that is not too high, our results are parallel to those in the Gaussian case. In this context, we derive detection boundaries for both dense and sparse regimes. For the dense regime, we show that the generalized likelihood ratio is rate optimal; for the sparse regime, we propose an extended Higher Criticism Test and show it is rate optimal and sharp. We illustrate the finite sample properties of the theoretical results using simulation studies. PMID:26246645
On the sparseness of 1-norm support vector machines.
Zhang, Li; Zhou, Weida
2010-04-01
There is some empirical evidence available showing that 1-norm Support Vector Machines (1-norm SVMs) have good sparseness; however, both how good sparseness 1-norm SVMs can reach and whether they have a sparser representation than that of standard SVMs are not clear. In this paper we take into account the sparseness of 1-norm SVMs. Two upper bounds on the number of nonzero coefficients in the decision function of 1-norm SVMs are presented. First, the number of nonzero coefficients in 1-norm SVMs is at most equal to the number of only the exact support vectors lying on the +1 and -1 discriminating surfaces, while that in standard SVMs is equal to the number of support vectors, which implies that 1-norm SVMs have better sparseness than that of standard SVMs. Second, the number of nonzero coefficients is at most equal to the rank of the sample matrix. A brief review of the geometry of linear programming and the primal steepest edge pricing simplex method are given, which allows us to provide the proof of the two upper bounds and evaluate their tightness by experiments. Experimental results on toy data sets and the UCI data sets illustrate our analysis. Copyright 2009 Elsevier Ltd. All rights reserved.
Yang, C L; Wei, H Y; Adler, A; Soleimani, M
2013-06-01
Electrical impedance tomography (EIT) is a fast and cost-effective technique to provide a tomographic conductivity image of a subject from boundary current-voltage data. This paper proposes a time and memory efficient method for solving a large scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. The 3D EIT system with a large number of measurement data can produce a large size of Jacobian matrix; this could cause difficulties in computer storage and the inversion process. One of challenges in 3D EIT is to decrease the reconstruction time and memory usage, at the same time retaining the image quality. Firstly, a sparse matrix reduction technique is proposed using thresholding to set very small values of the Jacobian matrix to zero. By adjusting the Jacobian matrix into a sparse format, the element with zeros would be eliminated, which results in a saving of memory requirement. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. Sparse Jacobian with a block-wise CG enables the large scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of sparse matrix reduction in reconstruction results.
Spatio-Temporal Variability of Groundwater Storage in India
NASA Technical Reports Server (NTRS)
Bhanja, Soumendra; Rodell, Matthew; Li, Bailing; Mukherjee, Abhijit
2016-01-01
Groundwater level measurements from 3907 monitoring wells, distributed within 22 major river basins of India, are assessed to characterize their spatial and temporal variability. Ground water storage (GWS) anomalies (relative to the long-term mean) exhibit strong seasonality, with annual maxima observed during the monsoon season and minima during pre-monsoon season. Spatial variability of GWS anomalies increases with the extent of measurements, following the power law relationship, i.e., log-(spatial variability) is linearly dependent on log-(spatial extent).In addition, the impact of well spacing on spatial variability and the power law relationship is investigated. We found that the mean GWS anomaly sampled at a 0.25 degree grid scale closes to unweighted average over all wells. The absolute error corresponding to each basin grows with increasing scale, i.e., from 0.25 degree to 1 degree. It was observed that small changes in extent could create very large changes in spatial variability at large grid scales. Spatial variability of GWS anomaly has been found to vary with climatic conditions. To our knowledge, this is the first study of the effects of well spacing on groundwater spatial variability. The results may be useful for interpreting large scale groundwater variations from unevenly spaced or sparse groundwater well observations or for siting and prioritizing wells in a network for groundwater management. The output of this study could be used to maintain a cost effective groundwater monitoring network in the study region and the approach can also be used in other parts of the globe.
Spatio-temporal variability of groundwater storage in India.
Bhanja, Soumendra N; Rodell, Matthew; Li, Bailing; Mukherjee, Abhijit
2017-01-01
Groundwater level measurements from 3907 monitoring wells, distributed within 22 major river basins of India, are assessed to characterize their spatial and temporal variability. Groundwater storage (GWS) anomalies (relative to the long-term mean) exhibit strong seasonality, with annual maxima observed during the monsoon season and minima during pre-monsoon season. Spatial variability of GWS anomalies increases with the extent of measurements, following the power law relationship, i.e., log-(spatial variability) is linearly dependent on log-(spatial extent). In addition, the impact of well spacing on spatial variability and the power law relationship is investigated. We found that the mean GWS anomaly sampled at a 0.25 degree grid scale closes to unweighted average over all wells. The absolute error corresponding to each basin grows with increasing scale, i.e., from 0.25 degree to 1 degree. It was observed that small changes in extent could create very large changes in spatial variability at large grid scales. Spatial variability of GWS anomaly has been found to vary with climatic conditions. To our knowledge, this is the first study of the effects of well spacing on groundwater spatial variability. The results may be useful for interpreting large scale groundwater variations from unevenly spaced or sparse groundwater well observations or for siting and prioritizing wells in a network for groundwater management. The output of this study could be used to maintain a cost effective groundwater monitoring network in the study region and the approach can also be used in other parts of the globe.
Takeshima, T; Takahashi, T; Yamashita, J; Okada, Y; Watanabe, S
2018-05-25
Multi-emitter fitting algorithms have been developed to improve the temporal resolution of single-molecule switching nanoscopy, but the molecular density range they can analyse is narrow and the computation required is intensive, significantly limiting their practical application. Here, we propose a computationally fast method, wedged template matching (WTM), an algorithm that uses a template matching technique to localise molecules at any overlapping molecular density from sparse to ultrahigh density with subdiffraction resolution. WTM achieves the localization of overlapping molecules at densities up to 600 molecules μm -2 with a high detection sensitivity and fast computational speed. WTM also shows localization precision comparable with that of DAOSTORM (an algorithm for high-density super-resolution microscopy), at densities up to 20 molecules μm -2 , and better than DAOSTORM at higher molecular densities. The application of WTM to a high-density biological sample image demonstrated that it resolved protein dynamics from live cell images with subdiffraction resolution and a temporal resolution of several hundred milliseconds or less through a significant reduction in the number of camera images required for a high-density reconstruction. WTM algorithm is a computationally fast, multi-emitter fitting algorithm that can analyse over a wide range of molecular densities. The algorithm is available through the website. https://doi.org/10.17632/bf3z6xpn5j.1. © 2018 The Authors. Journal of Microscopy published by JohnWiley & Sons Ltd on behalf of Royal Microscopical Society.
NASA Astrophysics Data System (ADS)
De Vleeschouwer, N.; Verhoest, N.; Pauwels, V. R. N.
2015-12-01
The continuous monitoring of soil moisture in a permanent network can yield an interesting data product for use in hydrological data assimilation. Major advantages of in situ observations compared to remote sensing products are the potential vertical extent of the measurements, the finer temporal resolution of the observation time series, the smaller impact of land cover variability on the observation bias, etc. However, two major disadvantages are the typical small integration volume of in situ measurements and the often large spacing between monitoring locations. This causes only a small part of the modelling domain to be directly observed. Furthermore, the spatial configuration of the monitoring network is typically temporally non-dynamic. Therefore two questions can be raised. Do spatially sparse in situ soil moisture observations contain a sufficient data representativeness to successfully assimilate them into the largely unobserved spatial extent of a distributed hydrological model? And if so, how is this assimilation best performed? Consequently two important factors that can influence the success of assimilating in situ monitored soil moisture are the spatial configuration of the monitoring network and the applied assimilation algorithm. In this research the influence of those factors is examined by means of synthetic data-assimilation experiments. The study area is the ± 100 km² catchment of the Bellebeek in Flanders, Belgium. The influence of the spatial configuration is examined by varying the amount of locations and their position in the landscape. The latter is performed using several techniques including temporal stability analysis and clustering. Furthermore the observation depth is considered by comparing assimilation of surface layer (5 cm) and deeper layer (50 cm) observations. The impact of the assimilation algorithm is assessed by comparing the performance obtained with two well-known algorithms: Newtonian nudging and the Ensemble Kalman Filter.
Thakur, Anil S.; Robin, Gautier; Guncar, Gregor; Saunders, Neil F. W.; Newman, Janet; Martin, Jennifer L.; Kobe, Bostjan
2007-01-01
Background Crystallization is a major bottleneck in the process of macromolecular structure determination by X-ray crystallography. Successful crystallization requires the formation of nuclei and their subsequent growth to crystals of suitable size. Crystal growth generally occurs spontaneously in a supersaturated solution as a result of homogenous nucleation. However, in a typical sparse matrix screening experiment, precipitant and protein concentration are not sampled extensively, and supersaturation conditions suitable for nucleation are often missed. Methodology/Principal Findings We tested the effect of nine potential heterogenous nucleating agents on crystallization of ten test proteins in a sparse matrix screen. Several nucleating agents induced crystal formation under conditions where no crystallization occurred in the absence of the nucleating agent. Four nucleating agents: dried seaweed; horse hair; cellulose and hydroxyapatite, had a considerable overall positive effect on crystallization success. This effect was further enhanced when these nucleating agents were used in combination with each other. Conclusions/Significance Our results suggest that the addition of heterogeneous nucleating agents increases the chances of crystal formation when using sparse matrix screens. PMID:17971854
Discriminative Transfer Subspace Learning via Low-Rank and Sparse Representation.
Xu, Yong; Fang, Xiaozhao; Wu, Jian; Li, Xuelong; Zhang, David
2016-02-01
In this paper, we address the problem of unsupervised domain transfer learning in which no labels are available in the target domain. We use a transformation matrix to transfer both the source and target data to a common subspace, where each target sample can be represented by a combination of source samples such that the samples from different domains can be well interlaced. In this way, the discrepancy of the source and target domains is reduced. By imposing joint low-rank and sparse constraints on the reconstruction coefficient matrix, the global and local structures of data can be preserved. To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative label relaxation matrix that allows the strict binary label matrix to relax into a slack variable matrix. Our method can avoid a potentially negative transfer by using a sparse matrix to model the noise and, thus, is more robust to different types of noise. We formulate our problem as a constrained low-rankness and sparsity minimization problem and solve it by the inexact augmented Lagrange multiplier method. Extensive experiments on various visual domain adaptation tasks show the superiority of the proposed method over the state-of-the art methods. The MATLAB code of our method will be publicly available at http://www.yongxu.org/lunwen.html.
Non-convex Statistical Optimization for Sparse Tensor Graphical Model
Sun, Wei; Wang, Zhaoran; Liu, Han; Cheng, Guang
2016-01-01
We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation of the precision matrix corresponding to each way of the tensor, we assume the data follow a tensor normal distribution whose covariance has a Kronecker product structure. The penalized maximum likelihood estimation of this model involves minimizing a non-convex objective function. In spite of the non-convexity of this estimation problem, we prove that an alternating minimization algorithm, which iteratively estimates each sparse precision matrix while fixing the others, attains an estimator with the optimal statistical rate of convergence as well as consistent graph recovery. Notably, such an estimator achieves estimation consistency with only one tensor sample, which is unobserved in previous work. Our theoretical results are backed by thorough numerical studies. PMID:28316459
A Modified Sparse Representation Method for Facial Expression Recognition.
Wang, Wei; Xu, LiHong
2016-01-01
In this paper, we carry on research on a facial expression recognition method, which is based on modified sparse representation recognition (MSRR) method. On the first stage, we use Haar-like+LPP to extract feature and reduce dimension. On the second stage, we adopt LC-K-SVD (Label Consistent K-SVD) method to train the dictionary, instead of adopting directly the dictionary from samples, and add block dictionary training into the training process. On the third stage, stOMP (stagewise orthogonal matching pursuit) method is used to speed up the convergence of OMP (orthogonal matching pursuit). Besides, a dynamic regularization factor is added to iteration process to suppress noises and enhance accuracy. We verify the proposed method from the aspect of training samples, dimension, feature extraction and dimension reduction methods and noises in self-built database and Japan's JAFFE and CMU's CK database. Further, we compare this sparse method with classic SVM and RVM and analyze the recognition effect and time efficiency. The result of simulation experiment has shown that the coefficient of MSRR method contains classifying information, which is capable of improving the computing speed and achieving a satisfying recognition result.
Concurrent tumor segmentation and registration with uncertainty-based sparse non-uniform graphs.
Parisot, Sarah; Wells, William; Chemouny, Stéphane; Duffau, Hugues; Paragios, Nikos
2014-05-01
In this paper, we present a graph-based concurrent brain tumor segmentation and atlas to diseased patient registration framework. Both segmentation and registration problems are modeled using a unified pairwise discrete Markov Random Field model on a sparse grid superimposed to the image domain. Segmentation is addressed based on pattern classification techniques, while registration is performed by maximizing the similarity between volumes and is modular with respect to the matching criterion. The two problems are coupled by relaxing the registration term in the tumor area, corresponding to areas of high classification score and high dissimilarity between volumes. In order to overcome the main shortcomings of discrete approaches regarding appropriate sampling of the solution space as well as important memory requirements, content driven samplings of the discrete displacement set and the sparse grid are considered, based on the local segmentation and registration uncertainties recovered by the min marginal energies. State of the art results on a substantial low-grade glioma database demonstrate the potential of our method, while our proposed approach shows maintained performance and strongly reduced complexity of the model. Copyright © 2014 Elsevier B.V. All rights reserved.
A Modified Sparse Representation Method for Facial Expression Recognition
Wang, Wei; Xu, LiHong
2016-01-01
In this paper, we carry on research on a facial expression recognition method, which is based on modified sparse representation recognition (MSRR) method. On the first stage, we use Haar-like+LPP to extract feature and reduce dimension. On the second stage, we adopt LC-K-SVD (Label Consistent K-SVD) method to train the dictionary, instead of adopting directly the dictionary from samples, and add block dictionary training into the training process. On the third stage, stOMP (stagewise orthogonal matching pursuit) method is used to speed up the convergence of OMP (orthogonal matching pursuit). Besides, a dynamic regularization factor is added to iteration process to suppress noises and enhance accuracy. We verify the proposed method from the aspect of training samples, dimension, feature extraction and dimension reduction methods and noises in self-built database and Japan's JAFFE and CMU's CK database. Further, we compare this sparse method with classic SVM and RVM and analyze the recognition effect and time efficiency. The result of simulation experiment has shown that the coefficient of MSRR method contains classifying information, which is capable of improving the computing speed and achieving a satisfying recognition result. PMID:26880878
NASA Astrophysics Data System (ADS)
Pohle, Ina; Glendell, Miriam; Stutter, Marc I.; Helliwell, Rachel C.
2017-04-01
An understanding of catchment response to climate and land use change at a regional scale is necessary for the assessment of mitigation and adaptation options addressing diffuse nutrient pollution. It is well documented that the physicochemical properties of a river ecosystem respond to change in a non-linear fashion. This is particularly important when threshold water concentrations, relevant to national and EU legislation, are exceeded. Large scale (regional) model assessments required for regulatory purposes must represent the key processes and mechanisms that are more readily understood in catchments with water quantity and water quality data monitored at high spatial and temporal resolution. While daily discharge data are available for most catchments in Scotland, nitrate and phosphorus are mostly available on a monthly basis only, as typified by regulatory monitoring. However, high resolution (hourly to daily) water quantity and water quality data exist for a limited number of research catchments. To successfully implement adaptation measures across Scotland, an upscaling from data-rich to data-sparse catchments is required. In addition, the widespread availability of spatial datasets affecting hydrological and biogeochemical responses (e.g. soils, topography/geomorphology, land use, vegetation etc.) provide an opportunity to transfer predictions between data-rich and data-sparse areas by linking processes and responses to catchment attributes. Here, we develop a framework of catchment typologies as a prerequisite for transferring information from data-rich to data-sparse catchments by focusing on how hydrological catchment similarity can be used as an indicator of grouped behaviours in water quality response. As indicators of hydrological catchment similarity we use flow indices derived from observed discharge data across Scotland as well as hydrological model parameters. For the latter, we calibrated the lumped rainfall-runoff model TUWModel using multiple objective functions. The relationships between indicators of hydrological catchment similarity, physical catchment characteristics and nitrate and phosphorus concentrations in rivers are then investigated using multivariate statistics. This understanding of the relationship between catchment characteristics, hydrological processes and water quality will allow us to implement more efficient regulatory water quality monitoring strategies, to improve existing water quality models and to model mitigation and adaptation scenarios to global change in data-sparse catchments.
NASA Astrophysics Data System (ADS)
Saadi, Sameh; Boulet, Gilles; Bahir, Malik; Brut, Aurore; Delogu, Émilie; Fanise, Pascal; Mougenot, Bernard; Simonneaux, Vincent; Lili Chabaane, Zohra
2018-04-01
In semiarid areas, agricultural production is restricted by water availability; hence, efficient agricultural water management is a major issue. The design of tools providing regional estimates of evapotranspiration (ET), one of the most relevant water balance fluxes, may help the sustainable management of water resources. Remote sensing provides periodic data about actual vegetation temporal dynamics (through the normalized difference vegetation index, NDVI) and water availability under water stress (through the surface temperature Tsurf), which are crucial factors controlling ET. In this study, spatially distributed estimates of ET (or its energy equivalent, the latent heat flux LE) in the Kairouan plain (central Tunisia) were computed by applying the Soil Plant Atmosphere and Remote Sensing Evapotranspiration (SPARSE) model fed by low-resolution remote sensing data (Terra and Aqua MODIS). The work's goal was to assess the operational use of the SPARSE model and the accuracy of the modeled (i) sensible heat flux (H) and (ii) daily ET over a heterogeneous semiarid landscape with complex land cover (i.e., trees, winter cereals, summer vegetables). SPARSE was run to compute instantaneous estimates of H and LE fluxes at the satellite overpass times. The good correspondence (R2 = 0.60 and 0.63 and RMSE = 57.89 and 53.85 W m-2 for Terra and Aqua, respectively) between instantaneous H estimates and large aperture scintillometer (XLAS) H measurements along a path length of 4 km over the study area showed that the SPARSE model presents satisfactory accuracy. Results showed that, despite the fairly large scatter, the instantaneous LE can be suitably estimated at large scales (RMSE = 47.20 and 43.20 W m-2 for Terra and Aqua, respectively, and R2 = 0.55 for both satellites). Additionally, water stress was investigated by comparing modeled (SPARSE) and observed (XLAS) water stress values; we found that most points were located within a 0.2 confidence interval, thus the general tendencies are well reproduced. Even though extrapolation of instantaneous latent heat flux values to daily totals was less obvious, daily ET estimates are deemed acceptable.
NASA Astrophysics Data System (ADS)
Switzman, Harris; Coulibaly, Paulin; Adeel, Zafar
2015-01-01
Demand for freshwater in many dryland environments is exerting negative impacts on the quality and availability of groundwater resources, particularly in areas where demand is high due to irrigation or industrial water requirements to support dryland agricultural reclamation. Often however, information available to diagnose the drivers of groundwater degradation and assess management options through modeling is sparse, particularly in low and middle-income countries. This study presents an approach for generating transient groundwater model inputs to assess the long-term impacts of dryland agricultural land reclamation on groundwater resources in a highly data-sparse context. The approach was applied to the area of Wadi El Natrun in Northern Egypt, where dryland reclamation and the associated water use has been aggressive since the 1960s. Statistical distributions of water use information were constructed from a variety of sparse field and literature estimates and then combined with remote sensing data in spatio-temporal infilling model to produce the groundwater model inputs of well-pumping and surface recharge. An ensemble of groundwater model inputs were generated and used in a 3D groundwater flow (MODFLOW) of Wadi El Natrun's multi-layer aquifer system to analyze trends in water levels and water budgets over time. Validation of results against monitoring records, and model performance statistics demonstrated that despite the extremely sparse data, the approach used in this study was capable of simulating the cumulative impacts of agricultural land reclamation reasonably well. The uncertainty associated with the groundwater model itself was greater than that associated with the ensemble of well-pumping and surface recharge estimates. Water budget analysis of the groundwater model output revealed that groundwater recharge has not changed significantly over time, while pumping has. As a result of these trends, groundwater was estimated to be in a deficit of approximately 24 billion m3 (±15%) in 2011, compared to 1957. A significant trend in water level declines beginning in the 1990s that has been observed in monitoring records was evident in the model results and is directly attributed to abstraction.
Th-230 - U-238 series disequilibrium of the Olkaria basalts Gregory Rift Valley, Kenya
NASA Technical Reports Server (NTRS)
Black, S.; Macdonald, R.; Kelly, M.
1993-01-01
U-Th disequilibrium analyses of the Naivasha basalts show a very small (U-238/Th-230) ratios which are lower than any previously analyzed basalts. The broadly positive internal isochron trend from one sample indicates that the basalts may have source heterogeneities, this is supported by earlier work. The Naivasha complex comprises a bimodal suite of basalts and rhyolites. The basalts are divided into two stratigraphic groups each of a transitional nature. The early basalt series (EBS) which were erupted prior to the Group 1 comendites and, the late basalt series (LBS) which erupted temporally between the Broad Acres and the Ololbutot centers. The basalts represent a very small percentage of the overall eruptive volume of material at Naivasha (less than 2 percent). The analyzed samples come from four stratigraphic units in close proximity around Ndabibi, Hell's Gate and Akira areas. The earliest units occur as vesicular flows from the Ndabibi plain. These basalts are olivine-plagioclase phyric with the associated hawaiites being sparsely plagioclase phyric. An absolute age of 0.5Ma was estimated for these basalts. The next youngest basalts flows occur as younger tuft cones in the Ndabibi area and are mainly olivine-plagioclase-clinopyroxcene phyric with one purely plagioclase phyric sample. The final phase of activity at Ndabibi resulted in much younger tuft cones consisting of air fall ashes and lapilli tufts. Many of these contain resorbed plagioclase phenocrysts with sample number 120c also being clinopyroxene phyric. The isotopic evidence for the basalt formation is summarized.
A weighted ℓ{sub 1}-minimization approach for sparse polynomial chaos expansions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, Ji; Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu
2014-06-15
This work proposes a method for sparse polynomial chaos (PC) approximation of high-dimensional stochastic functions based on non-adapted random sampling. We modify the standard ℓ{sub 1}-minimization algorithm, originally proposed in the context of compressive sampling, using a priori information about the decay of the PC coefficients, when available, and refer to the resulting algorithm as weightedℓ{sub 1}-minimization. We provide conditions under which we may guarantee recovery using this weighted scheme. Numerical tests are used to compare the weighted and non-weighted methods for the recovery of solutions to two differential equations with high-dimensional random inputs: a boundary value problem with amore » random elliptic operator and a 2-D thermally driven cavity flow with random boundary condition.« less
Fast generation of sparse random kernel graphs
Hagberg, Aric; Lemons, Nathan; Du, Wen -Bo
2015-09-10
The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in timemore » at most ο(n(logn)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.« less
Qi, Jin; Yang, Zhiyong
2014-01-01
Real-time human activity recognition is essential for human-robot interactions for assisted healthy independent living. Most previous work in this area is performed on traditional two-dimensional (2D) videos and both global and local methods have been used. Since 2D videos are sensitive to changes of lighting condition, view angle, and scale, researchers begun to explore applications of 3D information in human activity understanding in recently years. Unfortunately, features that work well on 2D videos usually don't perform well on 3D videos and there is no consensus on what 3D features should be used. Here we propose a model of human activity recognition based on 3D movements of body joints. Our method has three steps, learning dictionaries of sparse codes of 3D movements of joints, sparse coding, and classification. In the first step, space-time volumes of 3D movements of body joints are obtained via dense sampling and independent component analysis is then performed to construct a dictionary of sparse codes for each activity. In the second step, the space-time volumes are projected to the dictionaries and a set of sparse histograms of the projection coefficients are constructed as feature representations of the activities. Finally, the sparse histograms are used as inputs to a support vector machine to recognize human activities. We tested this model on three databases of human activities and found that it outperforms the state-of-the-art algorithms. Thus, this model can be used for real-time human activity recognition in many applications.
A Dictionary Learning Approach for Signal Sampling in Task-Based fMRI for Reduction of Big Data
Ge, Bao; Li, Xiang; Jiang, Xi; Sun, Yifei; Liu, Tianming
2018-01-01
The exponential growth of fMRI big data offers researchers an unprecedented opportunity to explore functional brain networks. However, this opportunity has not been fully explored yet due to the lack of effective and efficient tools for handling such fMRI big data. One major challenge is that computing capabilities still lag behind the growth of large-scale fMRI databases, e.g., it takes many days to perform dictionary learning and sparse coding of whole-brain fMRI data for an fMRI database of average size. Therefore, how to reduce the data size but without losing important information becomes a more and more pressing issue. To address this problem, we propose a signal sampling approach for significant fMRI data reduction before performing structurally-guided dictionary learning and sparse coding of whole brain's fMRI data. We compared the proposed structurally guided sampling method with no sampling, random sampling and uniform sampling schemes, and experiments on the Human Connectome Project (HCP) task fMRI data demonstrated that the proposed method can achieve more than 15 times speed-up without sacrificing the accuracy in identifying task-evoked functional brain networks. PMID:29706880
A Dictionary Learning Approach for Signal Sampling in Task-Based fMRI for Reduction of Big Data.
Ge, Bao; Li, Xiang; Jiang, Xi; Sun, Yifei; Liu, Tianming
2018-01-01
The exponential growth of fMRI big data offers researchers an unprecedented opportunity to explore functional brain networks. However, this opportunity has not been fully explored yet due to the lack of effective and efficient tools for handling such fMRI big data. One major challenge is that computing capabilities still lag behind the growth of large-scale fMRI databases, e.g., it takes many days to perform dictionary learning and sparse coding of whole-brain fMRI data for an fMRI database of average size. Therefore, how to reduce the data size but without losing important information becomes a more and more pressing issue. To address this problem, we propose a signal sampling approach for significant fMRI data reduction before performing structurally-guided dictionary learning and sparse coding of whole brain's fMRI data. We compared the proposed structurally guided sampling method with no sampling, random sampling and uniform sampling schemes, and experiments on the Human Connectome Project (HCP) task fMRI data demonstrated that the proposed method can achieve more than 15 times speed-up without sacrificing the accuracy in identifying task-evoked functional brain networks.
NASA Astrophysics Data System (ADS)
Cattaneo, Alessandro; Park, Gyuhae; Farrar, Charles; Mascareñas, David
2012-04-01
The acoustic emission (AE) phenomena generated by a rapid release in the internal stress of a material represent a promising technique for structural health monitoring (SHM) applications. AE events typically result in a discrete number of short-time, transient signals. The challenge associated with capturing these events using classical techniques is that very high sampling rates must be used over extended periods of time. The result is that a very large amount of data is collected to capture a phenomenon that rarely occurs. Furthermore, the high energy consumption associated with the required high sampling rates makes the implementation of high-endurance, low-power, embedded AE sensor nodes difficult to achieve. The relatively rare occurrence of AE events over long time scales implies that these measurements are inherently sparse in the spike domain. The sparse nature of AE measurements makes them an attractive candidate for the application of compressed sampling techniques. Collecting compressed measurements of sparse AE signals will relax the requirements on the sampling rate and memory demands. The focus of this work is to investigate the suitability of compressed sensing techniques for AE-based SHM. The work explores estimating AE signal statistics in the compressed domain for low-power classification applications. In the event compressed classification finds an event of interest, ι1 norm minimization will be used to reconstruct the measurement for further analysis. The impact of structured noise on compressive measurements is specifically addressed. The suitability of a particular algorithm, called Justice Pursuit, to increase robustness to a small amount of arbitrary measurement corruption is investigated.
Spatiotemporal predictions of soil properties and states in variably saturated landscapes
NASA Astrophysics Data System (ADS)
Franz, Trenton E.; Loecke, Terrance D.; Burgin, Amy J.; Zhou, Yuzhen; Le, Tri; Moscicki, David
2017-07-01
Understanding greenhouse gas (GHG) fluxes from landscapes with variably saturated soil conditions is challenging given the highly dynamic nature of GHG fluxes in both space and time, dubbed hot spots, and hot moments. On one hand, our ability to directly monitor these processes is limited by sparse in situ and surface chamber observational networks. On the other hand, remote sensing approaches provide spatial data sets but are limited by infrequent imaging over time. We use a robust statistical framework to merge sparse sensor network observations with reconnaissance style hydrogeophysical mapping at a well-characterized site in Ohio. We find that combining time-lapse electromagnetic induction surveys with empirical orthogonal functions provides additional environmental covariates related to soil properties and states at high spatial resolutions ( 5 m). A cross-validation experiment using eight different spatial interpolation methods versus 120 in situ soil cores indicated an 30% reduction in root-mean-square error for soil properties (clay weight percent and total soil carbon weight percent) using hydrogeophysical derived environmental covariates with regression kriging. In addition, the hydrogeophysical derived environmental covariates were found to be good predictors of soil states (soil temperature, soil water content, and soil oxygen). The presented framework allows for temporal gap filling of individual sensor data sets as well as provides flexible geometric interpolation to complex areas/volumes. We anticipate that the framework, with its flexible temporal and spatial monitoring options, will be useful in designing future monitoring networks as well as support the next generation of hyper-resolution hydrologic and biogeochemical models.
High-resolution depth profiling using a range-gated CMOS SPAD quanta image sensor.
Ren, Ximing; Connolly, Peter W R; Halimi, Abderrahim; Altmann, Yoann; McLaughlin, Stephen; Gyongy, Istvan; Henderson, Robert K; Buller, Gerald S
2018-03-05
A CMOS single-photon avalanche diode (SPAD) quanta image sensor is used to reconstruct depth and intensity profiles when operating in a range-gated mode used in conjunction with pulsed laser illumination. By designing the CMOS SPAD array to acquire photons within a pre-determined temporal gate, the need for timing circuitry was avoided and it was therefore possible to have an enhanced fill factor (61% in this case) and a frame rate (100,000 frames per second) that is more difficult to achieve in a SPAD array which uses time-correlated single-photon counting. When coupled with appropriate image reconstruction algorithms, millimeter resolution depth profiles were achieved by iterating through a sequence of temporal delay steps in synchronization with laser illumination pulses. For photon data with high signal-to-noise ratios, depth images with millimeter scale depth uncertainty can be estimated using a standard cross-correlation approach. To enhance the estimation of depth and intensity images in the sparse photon regime, we used a bespoke clustering-based image restoration strategy, taking into account the binomial statistics of the photon data and non-local spatial correlations within the scene. For sparse photon data with total exposure times of 75 ms or less, the bespoke algorithm can reconstruct depth images with millimeter scale depth uncertainty at a stand-off distance of approximately 2 meters. We demonstrate a new approach to single-photon depth and intensity profiling using different target scenes, taking full advantage of the high fill-factor, high frame rate and large array format of this range-gated CMOS SPAD array.
Pareeth, Sajid; Bresciani, Mariano; Buzzi, Fabio; Leoni, Barbara; Lepori, Fabio; Ludovisi, Alessandro; Morabito, Giuseppe; Adrian, Rita; Neteler, Markus; Salmaso, Nico
2017-02-01
The availability of more than thirty years of historical satellite data is a valuable source which could be used as an alternative to the sparse in-situ data. We developed a new homogenised time series of daily day time Lake Surface Water Temperature (LSWT) over the last thirty years (1986-2015) at a spatial resolution of 1km from thirteen polar orbiting satellites. The new homogenisation procedure implemented in this study corrects for the different acquisition times of the satellites standardizing the derived LSWT to 12:00 UTC. In this study, we developed new time series of LSWT for five large lakes in Italy and evaluated the product with in-situ data from the respective lakes. Furthermore, we estimated the long-term annual and summer trends, the temporal coherence of mean LSWT between the lakes, and studied the intra-annual variations and long-term trends from the newly developed LSWT time series. We found a regional warming trend at a rate of 0.017°Cyr -1 annually and 0.032°Cyr -1 during summer. Mean annual and summer LSWT temporal patterns in these lakes were found to be highly coherent. Amidst the reported rapid warming of lakes globally, it is important to understand the long-term variations of surface temperature at a regional scale. This study contributes a new method to derive long-term accurate LSWT for lakes with sparse in-situ data thereby facilitating understanding of regional level changes in lake's surface temperature. Copyright © 2016 Elsevier B.V. All rights reserved.
Ponzi, Adam; Wickens, Jeff
2010-04-28
The striatum is composed of GABAergic medium spiny neurons with inhibitory collaterals forming a sparse random asymmetric network and receiving an excitatory glutamatergic cortical projection. Because the inhibitory collaterals are sparse and weak, their role in striatal network dynamics is puzzling. However, here we show by simulation of a striatal inhibitory network model composed of spiking neurons that cells form assemblies that fire in sequential coherent episodes and display complex identity-temporal spiking patterns even when cortical excitation is simply constant or fluctuating noisily. Strongly correlated large-scale firing rate fluctuations on slow behaviorally relevant timescales of hundreds of milliseconds are shown by members of the same assembly whereas members of different assemblies show strong negative correlation, and we show how randomly connected spiking networks can generate this activity. Cells display highly irregular spiking with high coefficients of variation, broadly distributed low firing rates, and interspike interval distributions that are consistent with exponentially tailed power laws. Although firing rates vary coherently on slow timescales, precise spiking synchronization is absent in general. Our model only requires the minimal but striatally realistic assumptions of sparse to intermediate random connectivity, weak inhibitory synapses, and sufficient cortical excitation so that some cells are depolarized above the firing threshold during up states. Our results are in good qualitative agreement with experimental studies, consistent with recently determined striatal anatomy and physiology, and support a new view of endogenously generated metastable state switching dynamics of the striatal network underlying its information processing operations.
Sparse Multivariate Autoregressive Modeling for Mild Cognitive Impairment Classification
Li, Yang; Wee, Chong-Yaw; Jie, Biao; Peng, Ziwen
2014-01-01
Brain connectivity network derived from functional magnetic resonance imaging (fMRI) is becoming increasingly prevalent in the researches related to cognitive and perceptual processes. The capability to detect causal or effective connectivity is highly desirable for understanding the cooperative nature of brain network, particularly when the ultimate goal is to obtain good performance of control-patient classification with biological meaningful interpretations. Understanding directed functional interactions between brain regions via brain connectivity network is a challenging task. Since many genetic and biomedical networks are intrinsically sparse, incorporating sparsity property into connectivity modeling can make the derived models more biologically plausible. Accordingly, we propose an effective connectivity modeling of resting-state fMRI data based on the multivariate autoregressive (MAR) modeling technique, which is widely used to characterize temporal information of dynamic systems. This MAR modeling technique allows for the identification of effective connectivity using the Granger causality concept and reducing the spurious causality connectivity in assessment of directed functional interaction from fMRI data. A forward orthogonal least squares (OLS) regression algorithm is further used to construct a sparse MAR model. By applying the proposed modeling to mild cognitive impairment (MCI) classification, we identify several most discriminative regions, including middle cingulate gyrus, posterior cingulate gyrus, lingual gyrus and caudate regions, in line with results reported in previous findings. A relatively high classification accuracy of 91.89 % is also achieved, with an increment of 5.4 % compared to the fully-connected, non-directional Pearson-correlation-based functional connectivity approach. PMID:24595922
Real-Space x-ray tomographic reconstruction of randomly oriented objects with sparse data frames.
Ayyer, Kartik; Philipp, Hugh T; Tate, Mark W; Elser, Veit; Gruner, Sol M
2014-02-10
Schemes for X-ray imaging single protein molecules using new x-ray sources, like x-ray free electron lasers (XFELs), require processing many frames of data that are obtained by taking temporally short snapshots of identical molecules, each with a random and unknown orientation. Due to the small size of the molecules and short exposure times, average signal levels of much less than 1 photon/pixel/frame are expected, much too low to be processed using standard methods. One approach to process the data is to use statistical methods developed in the EMC algorithm (Loh & Elser, Phys. Rev. E, 2009) which processes the data set as a whole. In this paper we apply this method to a real-space tomographic reconstruction using sparse frames of data (below 10(-2) photons/pixel/frame) obtained by performing x-ray transmission measurements of a low-contrast, randomly-oriented object. This extends the work by Philipp et al. (Optics Express, 2012) to three dimensions and is one step closer to the single molecule reconstruction problem.
Spatial-temporal variation of marginal land suitable for energy plants from 1990 to 2010 in China
NASA Astrophysics Data System (ADS)
Jiang, Dong; Hao, Mengmeng; Fu, Jingying; Zhuang, Dafang; Huang, Yaohuan
2014-07-01
Energy plants are the main source of bioenergy which will play an increasingly important role in future energy supplies. With limited cultivated land resources in China, the development of energy plants may primarily rely on the marginal land. In this study, based on the land use data from 1990 to 2010(every 5 years is a period) and other auxiliary data, the distribution of marginal land suitable for energy plants was determined using multi-factors integrated assessment method. The variation of land use type and spatial distribution of marginal land suitable for energy plants of different decades were analyzed. The results indicate that the total amount of marginal land suitable for energy plants decreased from 136.501 million ha to 114.225 million ha from 1990 to 2010. The reduced land use types are primarily shrub land, sparse forest land, moderate dense grassland and sparse grassland, and large variation areas are located in Guangxi, Tibet, Heilongjiang, Xinjiang and Inner Mongolia. The results of this study will provide more effective data reference and decision making support for the long-term planning of bioenergy resources.
Wen, Dong; Jia, Peilei; Lian, Qiusheng; Zhou, Yanhong; Lu, Chengbiao
2016-01-01
At present, the sparse representation-based classification (SRC) has become an important approach in electroencephalograph (EEG) signal analysis, by which the data is sparsely represented on the basis of a fixed dictionary or learned dictionary and classified based on the reconstruction criteria. SRC methods have been used to analyze the EEG signals of epilepsy, cognitive impairment and brain computer interface (BCI), which made rapid progress including the improvement in computational accuracy, efficiency and robustness. However, these methods have deficiencies in real-time performance, generalization ability and the dependence of labeled sample in the analysis of the EEG signals. This mini review described the advantages and disadvantages of the SRC methods in the EEG signal analysis with the expectation that these methods can provide the better tools for analyzing EEG signals. PMID:27458376
Distribution of model uncertainty across multiple data streams
NASA Astrophysics Data System (ADS)
Wutzler, Thomas
2014-05-01
When confronting biogeochemical models with a diversity of observational data streams, we are faced with the problem of weighing the data streams. Without weighing or multiple blocked cost functions, model uncertainty is allocated to the sparse data streams and possible bias in processes that are strongly constraint is exported to processes that are constrained by sparse data streams only. In this study we propose an approach that aims at making model uncertainty a factor of observations uncertainty, that is constant over all data streams. Further we propose an implementation based on Monte-Carlo Markov chain sampling combined with simulated annealing that is able to determine this variance factor. The method is exemplified both with very simple models, artificial data and with an inversion of the DALEC ecosystem carbon model against multiple observations of Howland forest. We argue that the presented approach is able to help and maybe resolve the problem of bias export to sparse data streams.
Sampling schemes and parameter estimation for nonlinear Bernoulli-Gaussian sparse models
NASA Astrophysics Data System (ADS)
Boudineau, Mégane; Carfantan, Hervé; Bourguignon, Sébastien; Bazot, Michael
2016-06-01
We address the sparse approximation problem in the case where the data are approximated by the linear combination of a small number of elementary signals, each of these signals depending non-linearly on additional parameters. Sparsity is explicitly expressed through a Bernoulli-Gaussian hierarchical model in a Bayesian framework. Posterior mean estimates are computed using Markov Chain Monte-Carlo algorithms. We generalize the partially marginalized Gibbs sampler proposed in the linear case in [1], and build an hybrid Hastings-within-Gibbs algorithm in order to account for the nonlinear parameters. All model parameters are then estimated in an unsupervised procedure. The resulting method is evaluated on a sparse spectral analysis problem. It is shown to converge more efficiently than the classical joint estimation procedure, with only a slight increase of the computational cost per iteration, consequently reducing the global cost of the estimation procedure.
A sparse equivalent source method for near-field acoustic holography.
Fernandez-Grande, Efren; Xenaki, Angeliki; Gerstoft, Peter
2017-01-01
This study examines a near-field acoustic holography method consisting of a sparse formulation of the equivalent source method, based on the compressive sensing (CS) framework. The method, denoted Compressive-Equivalent Source Method (C-ESM), encourages spatially sparse solutions (based on the superposition of few waves) that are accurate when the acoustic sources are spatially localized. The importance of obtaining a non-redundant representation, i.e., a sensing matrix with low column coherence, and the inherent ill-conditioning of near-field reconstruction problems is addressed. Numerical and experimental results on a classical guitar and on a highly reactive dipole-like source are presented. C-ESM is valid beyond the conventional sampling limits, making wide-band reconstruction possible. Spatially extended sources can also be addressed with C-ESM, although in this case the obtained solution does not recover the spatial extent of the source.
A modeling approach for aerosol optical depth analysis during forest fire events
NASA Astrophysics Data System (ADS)
Aube, Martin P.; O'Neill, Normand T.; Royer, Alain; Lavoue, David
2004-10-01
Measurements of aerosol optical depth (AOD) are important indicators of aerosol particle behavior. Up to now the two standard techniques used for retrieving AOD are; (i) sun photometry which provides measurements of high temporal frequency and sparse spatial frequency, and (ii) satellite based approaches such as DDV (Dense Dark Vegetation) based inversion algorithms which yield AOD over dark targets in remotely sensed imagery. Although the latter techniques allow AOD retrieval over appreciable spatial domains, the irregular spatial pattern of dark targets and the typically low repeat frequencies of imaging satellites exclude the acquisition of AOD databases on a continuous spatio-temporal basis. We attempt to fill gaps in spatio-temporal AOD measurements using a new assimilation methodology that links AOD measurements and the predictions of a particulate matter Transport Model. This modelling package (AODSEM V2.0 for Aerosol Optical Depth Spatio-temporal Evolution Model) uses a size and aerosol type segregated semi-Lagrangian trajectory algorithm driven by analysed meteorological data. Its novelty resides in the fact that the model evolution may be tied to both ground based and satellite level AOD measurement and all physical processes have been optimized to track this important and robust parameter. We applied this methodology to a significant smoke event that occurred over the eastern part of North America in July 2002.
Sparse modeling applied to patient identification for safety in medical physics applications
NASA Astrophysics Data System (ADS)
Lewkowitz, Stephanie
Every scheduled treatment at a radiation therapy clinic involves a series of safety protocol to ensure the utmost patient care. Despite safety protocol, on a rare occasion an entirely preventable medical event, an accident, may occur. Delivering a treatment plan to the wrong patient is preventable, yet still is a clinically documented error. This research describes a computational method to identify patients with a novel machine learning technique to combat misadministration. The patient identification program stores face and fingerprint data for each patient. New, unlabeled data from those patients are categorized according to the library. The categorization of data by this face-fingerprint detector is accomplished with new machine learning algorithms based on Sparse Modeling that have already begun transforming the foundation of Computer Vision. Previous patient recognition software required special subroutines for faces and different tailored subroutines for fingerprints. In this research, the same exact model is used for both fingerprints and faces, without any additional subroutines and even without adjusting the two hyperparameters. Sparse modeling is a powerful tool, already shown utility in the areas of super-resolution, denoising, inpainting, demosaicing, and sub-nyquist sampling, i.e. compressed sensing. Sparse Modeling is possible because natural images are inherently sparse in some bases, due to their inherent structure. This research chooses datasets of face and fingerprint images to test the patient identification model. The model stores the images of each dataset as a basis (library). One image at a time is removed from the library, and is classified by a sparse code in terms of the remaining library. The Locally Competitive Algorithm, a truly neural inspired Artificial Neural Network, solves the computationally difficult task of finding the sparse code for the test image. The components of the sparse representation vector are summed by ℓ1 pooling, and correct patient identification is consistently achieved 100% over 1000 trials, when either the face data or fingerprint data are implemented as a classification basis. The algorithm gets 100% classification when faces and fingerprints are concatenated into multimodal datasets. This suggests that 100% patient identification will be achievable in the clinal setting.
Agnan, Y; Séjalon-Delmas, N; Claustres, A; Probst, A
2015-10-01
Lichens and mosses were used as biomonitors to assess the atmospheric deposition of metals in forested ecosystems in various regions of France. The concentrations of 17 metals/metalloids (Al, As, Cd, Co, Cr, Cs, Cu, Fe, Mn, Ni, Pb, Sb, Sn, Sr, Ti, V, and Zn) indicated overall low atmospheric contamination in these forested environments, but a regionalism emerged from local contributions (anthropogenic activities, as well as local lithology). Taking into account the geochemical background and comparing to Italian data, the elements from both natural and anthropogenic activities, such as Cd, Pb, or Zn, did not show any obvious anomalies. However, elements mainly originating from lithogenic dust (e.g., Al, Fe, Ti) were more prevalent in sparse forests and in the Southern regions of France, whereas samples from dense forests showed an accumulation of elements from biological recycling (Mn and Zn). The combination of enrichment factors and Pb isotope ratios between current and herbarium samples indicated the historical evolution of metal atmospheric contamination: the high contribution of coal combustion beginning 150 years ago decreased at the end of the 20th century, and the influence of car traffic during the latter observed period decreased in the last few decades. In the South of France, obvious local influences were well preserved during the last century. Copyright © 2015 Elsevier B.V. All rights reserved.
"Submesoscale Soup" Vorticity and Tracer Statistics During the Lateral Mixing Experiment
NASA Astrophysics Data System (ADS)
Shcherbina, A.; D'Asaro, E. A.; Lee, C. M.; Molemaker, J.; McWilliams, J. C.
2012-12-01
A detailed view of upper-ocean velocity, vorticity, and tracer statistics was obtained by a unique synchronized two-vessel survey in the North Atlantic in winter 2012. In winter, North Atlantic Mode water region south of the Gulf Stream is filled with an energetic, homogeneous, and well-developed submesoscale turbulence field - the "submesoscale soup". Turbulence in the soup is produced by frontogenesis and the surface layer instability of mesoscale eddy flows in the vicinity of the Gulf Stream. This region is a convenient representation of the inertial range of the geophysical turbulence forward cascade spanning scales of o(1-100km). During the Lateral Mixing Experiment in February-March 2012, R/Vs Atlantis and Knorr were run on parallel tracks 1 km apart for 500 km in the submesoscale soup region. Synchronous ADCP sampling provided the first in-situ estimates of full 3-D vorticity and divergence without the usual mix of spatial and temporal aliasing. Tracer distributions were also simultaneously sampled by both vessels using the underway and towed instrumentation. Observed vorticity distribution in the mixed layer was markedly asymmetric, with sparse strands of strong anticyclonic vorticity embedded in a weak, predominantly cyclonic background. While the mean vorticity was close to zero, distribution skewness exceeded 2. These observations confirm theoretical and numerical model predictions for an active submesoscale turbulence field. Submesoscale vorticity spectra also agreed well with the model prediction.
Explore Efficient Local Features from RGB-D Data for One-Shot Learning Gesture Recognition.
Wan, Jun; Guo, Guodong; Li, Stan Z
2016-08-01
Availability of handy RGB-D sensors has brought about a surge of gesture recognition research and applications. Among various approaches, one shot learning approach is advantageous because it requires minimum amount of data. Here, we provide a thorough review about one-shot learning gesture recognition from RGB-D data and propose a novel spatiotemporal feature extracted from RGB-D data, namely mixed features around sparse keypoints (MFSK). In the review, we analyze the challenges that we are facing, and point out some future research directions which may enlighten researchers in this field. The proposed MFSK feature is robust and invariant to scale, rotation and partial occlusions. To alleviate the insufficiency of one shot training samples, we augment the training samples by artificially synthesizing versions of various temporal scales, which is beneficial for coping with gestures performed at varying speed. We evaluate the proposed method on the Chalearn gesture dataset (CGD). The results show that our approach outperforms all currently published approaches on the challenging data of CGD, such as translated, scaled and occluded subsets. When applied to the RGB-D datasets that are not one-shot (e.g., the Cornell Activity Dataset-60 and MSR Daily Activity 3D dataset), the proposed feature also produces very promising results under leave-one-out cross validation or one-shot learning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakeman, John D.; Narayan, Akil; Zhou, Tao
We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solves a preconditionedmore » $$\\ell^1$$-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. In conclusion, numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.« less
Porosity estimation by semi-supervised learning with sparsely available labeled samples
NASA Astrophysics Data System (ADS)
Lima, Luiz Alberto; Görnitz, Nico; Varella, Luiz Eduardo; Vellasco, Marley; Müller, Klaus-Robert; Nakajima, Shinichi
2017-09-01
This paper addresses the porosity estimation problem from seismic impedance volumes and porosity samples located in a small group of exploratory wells. Regression methods, trained on the impedance as inputs and the porosity as output labels, generally suffer from extremely expensive (and hence sparsely available) porosity samples. To optimally make use of the valuable porosity data, a semi-supervised machine learning method was proposed, Transductive Conditional Random Field Regression (TCRFR), showing good performance (Görnitz et al., 2017). TCRFR, however, still requires more labeled data than those usually available, which creates a gap when applying the method to the porosity estimation problem in realistic situations. In this paper, we aim to fill this gap by introducing two graph-based preprocessing techniques, which adapt the original TCRFR for extremely weakly supervised scenarios. Our new method outperforms the previous automatic estimation methods on synthetic data and provides a comparable result to the manual labored, time-consuming geostatistics approach on real data, proving its potential as a practical industrial tool.
Distributed fiber sparse-wideband vibration sensing by sub-Nyquist additive random sampling
NASA Astrophysics Data System (ADS)
Zhang, Jingdong; Zheng, Hua; Zhu, Tao; Yin, Guolu; Liu, Min; Bai, Yongzhong; Qu, Dingrong; Qiu, Feng; Huang, Xianbing
2018-05-01
The round trip time of the light pulse limits the maximum detectable vibration frequency response range of phase-sensitive optical time domain reflectometry ({\\phi}-OTDR). Unlike the uniform laser pulse interval in conventional {\\phi}-OTDR, we randomly modulate the pulse interval, so that an equivalent sub-Nyquist additive random sampling (sNARS) is realized for every sensing point of the long interrogation fiber. For an {\\phi}-OTDR system with 10 km sensing length, the sNARS method is optimized by theoretical analysis and Monte Carlo simulation, and the experimental results verify that a wide-band spars signal can be identified and reconstructed. Such a method can broaden the vibration frequency response range of {\\phi}-OTDR, which is of great significance in sparse-wideband-frequency vibration signal detection, such as rail track monitoring and metal defect detection.
Visual Tracking via Sparse and Local Linear Coding.
Wang, Guofeng; Qin, Xueying; Zhong, Fan; Liu, Yue; Li, Hongbo; Peng, Qunsheng; Yang, Ming-Hsuan
2015-11-01
The state search is an important component of any object tracking algorithm. Numerous algorithms have been proposed, but stochastic sampling methods (e.g., particle filters) are arguably one of the most effective approaches. However, the discretization of the state space complicates the search for the precise object location. In this paper, we propose a novel tracking algorithm that extends the state space of particle observations from discrete to continuous. The solution is determined accurately via iterative linear coding between two convex hulls. The algorithm is modeled by an optimal function, which can be efficiently solved by either convex sparse coding or locality constrained linear coding. The algorithm is also very flexible and can be combined with many generic object representations. Thus, we first use sparse representation to achieve an efficient searching mechanism of the algorithm and demonstrate its accuracy. Next, two other object representation models, i.e., least soft-threshold squares and adaptive structural local sparse appearance, are implemented with improved accuracy to demonstrate the flexibility of our algorithm. Qualitative and quantitative experimental results demonstrate that the proposed tracking algorithm performs favorably against the state-of-the-art methods in dynamic scenes.
Artificial neural network does better spatiotemporal compressive sampling
NASA Astrophysics Data System (ADS)
Lee, Soo-Young; Hsu, Charles; Szu, Harold
2012-06-01
Spatiotemporal sparseness is generated naturally by human visual system based on artificial neural network modeling of associative memory. Sparseness means nothing more and nothing less than the compressive sensing achieves merely the information concentration. To concentrate the information, one uses the spatial correlation or spatial FFT or DWT or the best of all adaptive wavelet transform (cf. NUS, Shen Shawei). However, higher dimensional spatiotemporal information concentration, the mathematics can not do as flexible as a living human sensory system. The reason is obviously for survival reasons. The rest of the story is given in the paper.
Zhang, Chuncheng; Song, Sutao; Wen, Xiaotong; Yao, Li; Long, Zhiying
2015-04-30
Feature selection plays an important role in improving the classification accuracy of multivariate classification techniques in the context of fMRI-based decoding due to the "few samples and large features" nature of functional magnetic resonance imaging (fMRI) data. Recently, several sparse representation methods have been applied to the voxel selection of fMRI data. Despite the low computational efficiency of the sparse representation methods, they still displayed promise for applications that select features from fMRI data. In this study, we proposed the Laplacian smoothed L0 norm (LSL0) approach for feature selection of fMRI data. Based on the fast sparse decomposition using smoothed L0 norm (SL0) (Mohimani, 2007), the LSL0 method used the Laplacian function to approximate the L0 norm of sources. Results of the simulated and real fMRI data demonstrated the feasibility and robustness of LSL0 for the sparse source estimation and feature selection. Simulated results indicated that LSL0 produced more accurate source estimation than SL0 at high noise levels. The classification accuracy using voxels that were selected by LSL0 was higher than that by SL0 in both simulated and real fMRI experiment. Moreover, both LSL0 and SL0 showed higher classification accuracy and required less time than ICA and t-test for the fMRI decoding. LSL0 outperformed SL0 in sparse source estimation at high noise level and in feature selection. Moreover, LSL0 and SL0 showed better performance than ICA and t-test for feature selection. Copyright © 2015 Elsevier B.V. All rights reserved.
Iwagami, Sho; Onda, Yuichi; Tsujimura, Maki; Hada, Manami; Pun, Ishwar
2017-11-01
Radiocesium ( 137 Cs) migration from headwater forested areas to downstream rivers has been investigated in many studies since the Fukushima Dai-ichi Nuclear Power Plant (FDNPP) accident, which was triggered by a catastrophic earthquake and tsunami on 11 March 2011. The accident resulted in the release of a huge amount of radioactivity and its subsequent deposition in the environment. A large part of the radiocesium released has been shown to remain in the forest. The dissolved 137 Cs concentration and its temporal dynamics in river water, stream water, and groundwater have been reported, but reports of dissolved 137 Cs concentration in soil water remain sparse. In this study, soil water was sampled, and the dissolved 137 Cs concentrations were measured at five locations with different land-use types (mature/young cedar forest, broadleaf forest, meadow land, and pasture land) in Yamakiya District, located 35 km northwest of FDNPP from July 2011 to October 2012. Soil water samples were collected by suction lysimeters installed at three different depths at each site. Dissolved 137 Cs concentrations were analyzed using a germanium gamma ray detector. The dissolved 137 Cs concentrations in soil water were high, with a maximum value of 2.5 Bq/L in July 2011, and declined to less than 0.32 Bq/L by 2012. The declining trend of dissolved 137 Cs concentrations in soil water was fitted to a two-component exponential model. The rate of decline in dissolved 137 Cs concentrations in soil water (k 1 ) showed a good correlation with the radiocesium interception potential (RIP) of topsoil (0-5 cm) at the same site. Accounting for the difference of 137 Cs deposition density, we found that normalized dissolved 137 Cs concentrations of soil water in forest (mature/young cedar forest and broadleaf forest) were higher than those in grassland (meadow land and pasture land). Copyright © 2017 Elsevier Ltd. All rights reserved.
Massive land system changes impact water quality of the Jhelum River in Kashmir Himalaya.
Rather, Mohmmad Irshad; Rashid, Irfan; Shahi, Nuzhat; Murtaza, Khalid Omar; Hassan, Khalida; Yousuf, Abdul Rehman; Romshoo, Shakil Ahmad; Shah, Irfan Yousuf
2016-03-01
The pristine aquatic ecosystems in the Himalayas are facing an ever increasing threat from various anthropogenic pressures which necessitate better understanding of the spatial and temporal variability of pollutants, their sources, and possible remedies. This study demonstrates the multi-disciplinary approach utilizing the multivariate statistical techniques, data from remote sensing, lab, and field-based observations for assessing the impact of massive land system changes on water quality of the river Jhelum. Land system changes over a period of 38 years have been quantified using multi-spectral satellite data to delineate the extent of different anthropogenically driven land use types that are the main non-point sources of pollution. Fifteen water quality parameters, at 12 sampling sites distributed uniformly along the length of the Jhelum, have been assessed to identify the possible sources of pollution. Our analysis indicated that 18% of the forested area has degraded into sparse forest or scrublands from 1972 to 2010, and the areas under croplands have decreased by 24% as people shifted from irrigation-intensive agriculture to orchard farming while as settlements showed a 397% increase during the observation period. One-way ANOVA revealed that all the water quality parameters had significant spatio-temporal differences (p < 0.01). Cluster analysis (CA) helped us to classify all the sampling sites into three groups. Factor analysis revealed that 91.84% of the total variance was mainly explained by five factors. Drastic changes in water quality of the Jhelum since the past three decades are manifested by increases in nitrate-nitrogen, TDS, and electric conductivity. The especially high levels of nitrogen (858 ± 405 μgL(-1)) and phosphorus (273 ± 18 μgL(-1)) in the Jhelum could be attributed to the reckless application of fertilizers, pesticides, and unplanned urbanization in the area.
Molecular cancer classification using a meta-sample-based regularized robust coding method.
Wang, Shu-Lin; Sun, Liuchao; Fang, Jianwen
2014-01-01
Previous studies have demonstrated that machine learning based molecular cancer classification using gene expression profiling (GEP) data is promising for the clinic diagnosis and treatment of cancer. Novel classification methods with high efficiency and prediction accuracy are still needed to deal with high dimensionality and small sample size of typical GEP data. Recently the sparse representation (SR) method has been successfully applied to the cancer classification. Nevertheless, its efficiency needs to be improved when analyzing large-scale GEP data. In this paper we present the meta-sample-based regularized robust coding classification (MRRCC), a novel effective cancer classification technique that combines the idea of meta-sample-based cluster method with regularized robust coding (RRC) method. It assumes that the coding residual and the coding coefficient are respectively independent and identically distributed. Similar to meta-sample-based SR classification (MSRC), MRRCC extracts a set of meta-samples from the training samples, and then encodes a testing sample as the sparse linear combination of these meta-samples. The representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Extensive experiments on publicly available GEP datasets demonstrate that the proposed method is more efficient while its prediction accuracy is equivalent to existing MSRC-based methods and better than other state-of-the-art dimension reduction based methods.
Lung dynamic MRI deblurring using low-rank decomposition and dictionary learning.
Gou, Shuiping; Wang, Yueyue; Wu, Jiaolong; Lee, Percy; Sheng, Ke
2015-04-01
Lung dynamic MRI (dMRI) has emerged to be an appealing tool to quantify lung motion for both planning and treatment guidance purposes. However, this modality can result in blurry images due to intrinsically low signal-to-noise ratio in the lung and spatial/temporal interpolation. The image blurring could adversely affect the image processing that depends on the availability of fine landmarks. The purpose of this study is to reduce dMRI blurring using image postprocessing. To enhance the image quality and exploit the spatiotemporal continuity of dMRI sequences, a low-rank decomposition and dictionary learning (LDDL) method was employed to deblur lung dMRI and enhance the conspicuity of lung blood vessels. Fifty frames of continuous 2D coronal dMRI frames using a steady state free precession sequence were obtained from five subjects including two healthy volunteer and three lung cancer patients. In LDDL, the lung dMRI was decomposed into sparse and low-rank components. Dictionary learning was employed to estimate the blurring kernel based on the whole image, low-rank or sparse component of the first image in the lung MRI sequence. Deblurring was performed on the whole image sequences using deconvolution based on the estimated blur kernel. The deblurring results were quantified using an automated blood vessel extraction method based on the classification of Hessian matrix filtered images. Accuracy of automated extraction was calculated using manual segmentation of the blood vessels as the ground truth. In the pilot study, LDDL based on the blurring kernel estimated from the sparse component led to performance superior to the other ways of kernel estimation. LDDL consistently improved image contrast and fine feature conspicuity of the original MRI without introducing artifacts. The accuracy of automated blood vessel extraction was on average increased by 16% using manual segmentation as the ground truth. Image blurring in dMRI images can be effectively reduced using a low-rank decomposition and dictionary learning method using kernels estimated by the sparse component.
NREPS Applications for Water Supply and Management in California and Tennessee
NASA Technical Reports Server (NTRS)
Gatlin, P.; Scott, M.; Carery, L. D.; Petersen, W. A.
2011-01-01
Management of water resources is a balancing act between temporally and spatially limited sources and competitive needs which can often exceed the supply. In order to manage water resources over a region such as the San Joaquin Valley or the Tennessee River Valley, it is pertinent to know the amount of water that has fallen in the watershed and where the water is going within it. Since rain gauge networks are typically sparsely spaced, it is typical that the majority of rainfall on the region may not be measured. To mitigate this under-sampling of rainfall, weather radar has long been employed to provide areal rainfall estimates. The Next-Generation Weather Radars (NEXRAD) make it possible to estimate rainfall over the majority of the conterminous United States. The NEXRAD Rainfall Estimation Processing System (NREPS) was developed specifically for the purpose of using weather radar to estimate rainfall for water resources management. The NREPS is tailored to meet customer needs on spatial and temporal scales relevant to the hydrologic or land-surface models of the end-user. It utilizes several techniques to mitigate artifacts in the NEXRAD data from contaminating the rainfall field. These techniques include clutter filtering, correction for occultation by topography as well as accounting for the vertical profile of reflectivity. This presentation will focus on improvements made to the NREPS system to map rainfall in the San Joaquin Valley for NASA s Water Supply and Management Project in California, but also ongoing rainfall mapping work in the Tennessee River watershed for the Tennessee Valley Authority and possible future applications in other areas of the continent.
Li, Juanhua; Wu, Chao; Zheng, Yingjun; Li, Ruikeng; Li, Xuanzi; She, Shenglin; Wu, Haibo; Peng, Hongjun; Ning, Yuping; Li, Liang
2017-09-17
The superior temporal gyrus (STG) is involved in speech recognition against informational masking under cocktail-party-listening conditions. Compared to healthy listeners, people with schizophrenia perform worse in speech recognition under informational speech-on-speech masking conditions. It is not clear whether the schizophrenia-related vulnerability to informational masking is associated with certain changes in FC of the STG with some critical brain regions. Using sparse-sampling fMRI design, this study investigated the differences between people with schizophrenia and healthy controls in FC of the STG for target-speech listening against informational speech-on-speech masking, when a listening condition with either perceived spatial separation (PSS, with a spatial release of informational masking) or perceived spatial co-location (PSC, without the spatial release) between target speech and masking speech was introduced. The results showed that in healthy participants, but not participants with schizophrenia, the contrast of either the PSS or PSC condition against the masker-only condition induced an enhancement of functional connectivity (FC) of the STG with the left superior parietal lobule and the right precuneus. Compared to healthy participants, participants with schizophrenia showed declined FC of the STG with the bilateral precuneus, right SPL, and right supplementary motor area. Thus, FC of the STG with the parietal areas is normally involved in speech listening against informational masking under either the PSS or PSC conditions, and declined FC of the STG in people with schizophrenia with the parietal areas may be associated with the increased vulnerability to informational masking. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
Phillips, Karran A; Epstein, David H; Preston, Kenzie L
2013-10-01
Real-time monitoring of behavior using Ecological Momentary Assessment (EMA) has provided detailed data about daily temporal patterns of craving and use in cigarette smokers. We have collected similar data from a sample of cocaine and heroin users. Here we analyzed it in the context of its relationship with a societal construct of daily temporal organization: 9-to-5 business hours. In a 28-week prospective study, 112 methadone-maintained polydrug-abusing individuals initiated an electronic-diary entry and provided data each time they used cocaine, heroin, or both during weeks 4 to 28. EMA data were collected for 10,781 person-days and included: 663 cocaine-craving events, 710 cocaine-use events, 288 heroin-craving events, 66 heroin-use events, 630 craving-both-drugs events, and 282 use-of-both-drugs events. At baseline, 34% of the participants reported full-time employment in the preceding 3-year period. Most participants' current employment status fluctuated throughout the study. In a generalized linear mixed model (SAS Proc Glimmix), cocaine use varied by time of day relative to business hours (p<0.0001) and there was a significant interaction between Day of the Week and Time Relative to Business Hours (p<0.002) regardless of current work status. Cocaine craving also varied by time of day relative to business hours (p<0.0001), however, there was no significant interaction between Day of the Week and Time Relative to Business Hours (p=.57). Heroin craving and use were mostly reported during business hours, but data were sparse. Cocaine craving is most frequent during business hours while cocaine use is more frequent after business hours. Cocaine use during business hours, but not craving, seems suppressed on most weekdays, but not weekends, suggesting that societal conventions reflected in business hours influence drug-use patterns even in individuals whose daily schedules are not necessarily dictated by employment during conventional business hours. Published by Elsevier Ltd.
Compressive Sensing with Cross-Validation and Stop-Sampling for Sparse Polynomial Chaos Expansions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik
Compressive sensing is a powerful technique for recovering sparse solutions of underdetermined linear systems, which is often encountered in uncertainty quanti cation analysis of expensive and high-dimensional physical models. We perform numerical investigations employing several com- pressive sensing solvers that target the unconstrained LASSO formulation, with a focus on linear systems that arise in the construction of polynomial chaos expansions. With core solvers of l1 ls, SpaRSA, CGIST, FPC AS, and ADMM, we develop techniques to mitigate over tting through an automated selection of regularization constant based on cross-validation, and a heuristic strategy to guide the stop-sampling decision. Practical recommendationsmore » on parameter settings for these tech- niques are provided and discussed. The overall method is applied to a series of numerical examples of increasing complexity, including large eddy simulations of supersonic turbulent jet-in-cross flow involving a 24-dimensional input. Through empirical phase-transition diagrams and convergence plots, we illustrate sparse recovery performance under structures induced by polynomial chaos, accuracy and computational tradeoffs between polynomial bases of different degrees, and practi- cability of conducting compressive sensing for a realistic, high-dimensional physical application. Across test cases studied in this paper, we find ADMM to have demonstrated empirical advantages through consistent lower errors and faster computational times.« less
Dai, Hongying; Wu, Guodong; Wu, Michael; Zhi, Degui
2016-01-01
Next-generation sequencing data pose a severe curse of dimensionality, complicating traditional "single marker-single trait" analysis. We propose a two-stage combined p-value method for pathway analysis. The first stage is at the gene level, where we integrate effects within a gene using the Sequence Kernel Association Test (SKAT). The second stage is at the pathway level, where we perform a correlated Lancaster procedure to detect joint effects from multiple genes within a pathway. We show that the Lancaster procedure is optimal in Bahadur efficiency among all combined p-value methods. The Bahadur efficiency,[Formula: see text], compares sample sizes among different statistical tests when signals become sparse in sequencing data, i.e. ε →0. The optimal Bahadur efficiency ensures that the Lancaster procedure asymptotically requires a minimal sample size to detect sparse signals ([Formula: see text]). The Lancaster procedure can also be applied to meta-analysis. Extensive empirical assessments of exome sequencing data show that the proposed method outperforms Gene Set Enrichment Analysis (GSEA). We applied the competitive Lancaster procedure to meta-analysis data generated by the Global Lipids Genetics Consortium to identify pathways significantly associated with high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, triglycerides, and total cholesterol.
4D Infant Cortical Surface Atlas Construction using Spherical Patch-based Sparse Representation.
Wu, Zhengwang; Li, Gang; Meng, Yu; Wang, Li; Lin, Weili; Shen, Dinggang
2017-09-01
The 4D infant cortical surface atlas with densely sampled time points is highly needed for neuroimaging analysis of early brain development. In this paper, we build the 4D infant cortical surface atlas firstly covering 6 postnatal years with 11 time points (i.e., 1, 3, 6, 9, 12, 18, 24, 36, 48, 60, and 72 months), based on 339 longitudinal MRI scans from 50 healthy infants. To build the 4D cortical surface atlas, first , we adopt a two-stage groupwise surface registration strategy to ensure both longitudinal consistency and unbiasedness. Second , instead of simply averaging over the co-registered surfaces, a spherical patch-based sparse representation is developed to overcome possible surface registration errors across different subjects. The central idea is that, for each local spherical patch in the atlas space, we build a dictionary, which includes the samples of current local patches and their spatially-neighboring patches of all co-registered surfaces, and then the current local patch in the atlas is sparsely represented using the built dictionary. Compared to the atlas built with the conventional methods, the 4D infant cortical surface atlas constructed by our method preserves more details of cortical folding patterns, thus leading to boosted accuracy in registration of new infant cortical surfaces.
Khana, Diba; Rossen, Lauren M; Hedegaard, Holly; Warner, Margaret
2018-01-01
Hierarchical Bayes models have been used in disease mapping to examine small scale geographic variation. State level geographic variation for less common causes of mortality outcomes have been reported however county level variation is rarely examined. Due to concerns about statistical reliability and confidentiality, county-level mortality rates based on fewer than 20 deaths are suppressed based on Division of Vital Statistics, National Center for Health Statistics (NCHS) statistical reliability criteria, precluding an examination of spatio-temporal variation in less common causes of mortality outcomes such as suicide rates (SRs) at the county level using direct estimates. Existing Bayesian spatio-temporal modeling strategies can be applied via Integrated Nested Laplace Approximation (INLA) in R to a large number of rare causes of mortality outcomes to enable examination of spatio-temporal variations on smaller geographic scales such as counties. This method allows examination of spatiotemporal variation across the entire U.S., even where the data are sparse. We used mortality data from 2005-2015 to explore spatiotemporal variation in SRs, as one particular application of the Bayesian spatio-temporal modeling strategy in R-INLA to predict year and county-specific SRs. Specifically, hierarchical Bayesian spatio-temporal models were implemented with spatially structured and unstructured random effects, correlated time effects, time varying confounders and space-time interaction terms in the software R-INLA, borrowing strength across both counties and years to produce smoothed county level SRs. Model-based estimates of SRs were mapped to explore geographic variation.
Machine-learned Identification of RR Lyrae Stars from Sparse, Multi-band Data: The PS1 Sample
NASA Astrophysics Data System (ADS)
Sesar, Branimir; Hernitschek, Nina; Mitrović, Sandra; Ivezić, Željko; Rix, Hans-Walter; Cohen, Judith G.; Bernard, Edouard J.; Grebel, Eva K.; Martin, Nicolas F.; Schlafly, Edward F.; Burgett, William S.; Draper, Peter W.; Flewelling, Heather; Kaiser, Nick; Kudritzki, Rolf P.; Magnier, Eugene A.; Metcalfe, Nigel; Tonry, John L.; Waters, Christopher
2017-05-01
RR Lyrae stars may be the best practical tracers of Galactic halo (sub-)structure and kinematics. The PanSTARRS1 (PS1) 3π survey offers multi-band, multi-epoch, precise photometry across much of the sky, but a robust identification of RR Lyrae stars in this data set poses a challenge, given PS1's sparse, asynchronous multi-band light curves (≲ 12 epochs in each of five bands, taken over a 4.5 year period). We present a novel template fitting technique that uses well-defined and physically motivated multi-band light curves of RR Lyrae stars, and demonstrate that we get accurate period estimates, precise to 2 s in > 80 % of cases. We augment these light-curve fits with other features from photometric time-series and provide them to progressively more detailed machine-learned classification models. From these models, we are able to select the widest (three-fourths of the sky) and deepest (reaching 120 kpc) sample of RR Lyrae stars to date. The PS1 sample of ˜45,000 RRab stars is pure (90%) and complete (80% at 80 kpc) at high galactic latitudes. It also provides distances that are precise to 3%, measured with newly derived period-luminosity relations for optical/near-infrared PS1 bands. With the addition of proper motions from Gaia and radial velocity measurements from multi-object spectroscopic surveys, we expect the PS1 sample of RR Lyrae stars to become the premier source for studying the structure, kinematics, and the gravitational potential of the Galactic halo. The techniques presented in this study should translate well to other sparse, multi-band data sets, such as those produced by the Dark Energy Survey and the upcoming Large Synoptic Survey Telescope Galactic plane sub-survey.
Droplet Image Super Resolution Based on Sparse Representation and Kernel Regression
NASA Astrophysics Data System (ADS)
Zou, Zhenzhen; Luo, Xinghong; Yu, Qiang
2018-02-01
Microgravity and containerless conditions, which are produced via electrostatic levitation combined with a drop tube, are important when studying the intrinsic properties of new metastable materials. Generally, temperature and image sensors can be used to measure the changes of sample temperature, morphology and volume. Then, the specific heat, surface tension, viscosity changes and sample density can be obtained. Considering that the falling speed of the material sample droplet is approximately 31.3 m/s when it reaches the bottom of a 50-meter-high drop tube, a high-speed camera with a collection rate of up to 106 frames/s is required to image the falling droplet. However, at the high-speed mode, very few pixels, approximately 48-120, will be obtained in each exposure time, which results in low image quality. Super-resolution image reconstruction is an algorithm that provides finer details than the sampling grid of a given imaging device by increasing the number of pixels per unit area in the image. In this work, we demonstrate the application of single image-resolution reconstruction in the microgravity and electrostatic levitation for the first time. Here, using the image super-resolution method based on sparse representation, a low-resolution droplet image can be reconstructed. Employed Yang's related dictionary model, high- and low-resolution image patches were combined with dictionary training, and high- and low-resolution-related dictionaries were obtained. The online double-sparse dictionary training algorithm was used in the study of related dictionaries and overcome the shortcomings of the traditional training algorithm with small image patch. During the stage of image reconstruction, the algorithm of kernel regression is added, which effectively overcomes the shortcomings of the Yang image's edge blurs.
Droplet Image Super Resolution Based on Sparse Representation and Kernel Regression
NASA Astrophysics Data System (ADS)
Zou, Zhenzhen; Luo, Xinghong; Yu, Qiang
2018-05-01
Microgravity and containerless conditions, which are produced via electrostatic levitation combined with a drop tube, are important when studying the intrinsic properties of new metastable materials. Generally, temperature and image sensors can be used to measure the changes of sample temperature, morphology and volume. Then, the specific heat, surface tension, viscosity changes and sample density can be obtained. Considering that the falling speed of the material sample droplet is approximately 31.3 m/s when it reaches the bottom of a 50-meter-high drop tube, a high-speed camera with a collection rate of up to 106 frames/s is required to image the falling droplet. However, at the high-speed mode, very few pixels, approximately 48-120, will be obtained in each exposure time, which results in low image quality. Super-resolution image reconstruction is an algorithm that provides finer details than the sampling grid of a given imaging device by increasing the number of pixels per unit area in the image. In this work, we demonstrate the application of single image-resolution reconstruction in the microgravity and electrostatic levitation for the first time. Here, using the image super-resolution method based on sparse representation, a low-resolution droplet image can be reconstructed. Employed Yang's related dictionary model, high- and low-resolution image patches were combined with dictionary training, and high- and low-resolution-related dictionaries were obtained. The online double-sparse dictionary training algorithm was used in the study of related dictionaries and overcome the shortcomings of the traditional training algorithm with small image patch. During the stage of image reconstruction, the algorithm of kernel regression is added, which effectively overcomes the shortcomings of the Yang image's edge blurs.
Dada, Nsa; Jumas-Bilak, Estelle; Manguin, Sylvie; Seidu, Razak; Stenström, Thor-Axel; Overgaard, Hans J
2014-08-24
Domestic water storage containers constitute major Aedes aegypti breeding sites. We present for the first time a comparative analysis of the bacterial communities associated with Ae. aegypti larvae and water from domestic water containers. The 16S rRNA-temporal temperature gradient gel electrophoresis (TTGE) was used to identify and compare bacterial communities in fourth-instar Ae. aegypti larvae and water from larvae positive and negative domestic containers in a rural village in northeastern Thailand. Water samples were cultured for enteric bacteria in addition to TTGE. Sequences obtained from TTGE and bacterial cultures were clustered into operational taxonomic units (OTUs) for analyses. Significantly lower OTU abundance was found in fourth-instar Ae. aegypti larvae compared to mosquito positive water samples. There was no significant difference in OTU abundance between larvae and mosquito negative water samples or between mosquito positive and negative water samples. Larval samples had significantly different OTU diversity compared to mosquito positive and negative water samples, with no significant difference between mosquito positive and negative water samples. The TTGE identified 24 bacterial taxa, belonging to the phyla Proteobacteria, Firmicutes, Actinobacteria, Bacteroidetes and TM7 (candidate phylum). Seven of these taxa were identified in larval samples, 16 in mosquito positive and 13 in mosquito negative water samples. Only two taxa, belonging to the phyla Firmicutes and Actinobacteria, were common to both larvae and water samples. Bacilli was the most abundant bacterial class identified from Ae. aegypti larvae, Gammaproteobacteria from mosquito positive water samples, and Flavobacteria from mosquito negative water samples. Enteric bacteria belonging to the class Gammaproteobacteria were sparsely represented in TTGE, but were isolated from both mosquito positive and negative water samples by selective culture. Few bacteria from water samples were identified in fourth-instar Ae. aegypti larvae, suggesting that established larval bacteria, most likely acquired at earlier stages of development, control the larval microbiota. Further studies at all larval stages are needed to fully understand the dynamics involved. Isolation of enteric bacteria from water samples supports earlier outcomes of E. coli contamination in Ae. aegypti infested domestic containers, suggesting the need to further explore the role of enteric bacteria in Ae. aegypti infestation.
Chen, Bo; Chen, Minhua; Paisley, John; Zaas, Aimee; Woods, Christopher; Ginsburg, Geoffrey S; Hero, Alfred; Lucas, Joseph; Dunson, David; Carin, Lawrence
2010-11-09
Nonparametric Bayesian techniques have been developed recently to extend the sophistication of factor models, allowing one to infer the number of appropriate factors from the observed data. We consider such techniques for sparse factor analysis, with application to gene-expression data from three virus challenge studies. Particular attention is placed on employing the Beta Process (BP), the Indian Buffet Process (IBP), and related sparseness-promoting techniques to infer a proper number of factors. The posterior density function on the model parameters is computed using Gibbs sampling and variational Bayesian (VB) analysis. Time-evolving gene-expression data are considered for respiratory syncytial virus (RSV), Rhino virus, and influenza, using blood samples from healthy human subjects. These data were acquired in three challenge studies, each executed after receiving institutional review board (IRB) approval from Duke University. Comparisons are made between several alternative means of per-forming nonparametric factor analysis on these data, with comparisons as well to sparse-PCA and Penalized Matrix Decomposition (PMD), closely related non-Bayesian approaches. Applying the Beta Process to the factor scores, or to the singular values of a pseudo-SVD construction, the proposed algorithms infer the number of factors in gene-expression data. For real data the "true" number of factors is unknown; in our simulations we consider a range of noise variances, and the proposed Bayesian models inferred the number of factors accurately relative to other methods in the literature, such as sparse-PCA and PMD. We have also identified a "pan-viral" factor of importance for each of the three viruses considered in this study. We have identified a set of genes associated with this pan-viral factor, of interest for early detection of such viruses based upon the host response, as quantified via gene-expression data.
NASA Astrophysics Data System (ADS)
Yang, Yongchao; Nagarajaiah, Satish
2016-06-01
Randomly missing data of structural vibration responses time history often occurs in structural dynamics and health monitoring. For example, structural vibration responses are often corrupted by outliers or erroneous measurements due to sensor malfunction; in wireless sensing platforms, data loss during wireless communication is a common issue. Besides, to alleviate the wireless data sampling or communication burden, certain accounts of data are often discarded during sampling or before transmission. In these and other applications, recovery of the randomly missing structural vibration responses from the available, incomplete data, is essential for system identification and structural health monitoring; it is an ill-posed inverse problem, however. This paper explicitly harnesses the data structure itself-of the structural vibration responses-to address this (inverse) problem. What is relevant is an empirical, but often practically true, observation, that is, typically there are only few modes active in the structural vibration responses; hence a sparse representation (in frequency domain) of the single-channel data vector, or, a low-rank structure (by singular value decomposition) of the multi-channel data matrix. Exploiting such prior knowledge of data structure (intra-channel sparse or inter-channel low-rank), the new theories of ℓ1-minimization sparse recovery and nuclear-norm-minimization low-rank matrix completion enable recovery of the randomly missing or corrupted structural vibration response data. The performance of these two alternatives, in terms of recovery accuracy and computational time under different data missing rates, is investigated on a few structural vibration response data sets-the seismic responses of the super high-rise Canton Tower and the structural health monitoring accelerations of a real large-scale cable-stayed bridge. Encouraging results are obtained and the applicability and limitation of the presented methods are discussed.
Spatio-temporal reconstruction of brain dynamics from EEG with a Markov prior.
Hansen, Sofie Therese; Hansen, Lars Kai
2017-03-01
Electroencephalography (EEG) can capture brain dynamics in high temporal resolution. By projecting the scalp EEG signal back to its origin in the brain also high spatial resolution can be achieved. Source localized EEG therefore has potential to be a very powerful tool for understanding the functional dynamics of the brain. Solving the inverse problem of EEG is however highly ill-posed as there are many more potential locations of the EEG generators than EEG measurement points. Several well-known properties of brain dynamics can be exploited to alleviate this problem. More short ranging connections exist in the brain than long ranging, arguing for spatially focal sources. Additionally, recent work (Delorme et al., 2012) argues that EEG can be decomposed into components having sparse source distributions. On the temporal side both short and long term stationarity of brain activation are seen. We summarize these insights in an inverse solver, the so-called "Variational Garrote" (Kappen and Gómez, 2013). Using a Markov prior we can incorporate flexible degrees of temporal stationarity. Through spatial basis functions spatially smooth distributions are obtained. Sparsity of these are inherent to the Variational Garrote solver. We name our method the MarkoVG and demonstrate its ability to adapt to the temporal smoothness and spatial sparsity in simulated EEG data. Finally a benchmark EEG dataset is used to demonstrate MarkoVG's ability to recover non-stationary brain dynamics. Copyright © 2016 Elsevier Inc. All rights reserved.
Enumerating Sparse Organisms in Ships’ Ballast Water: Why Counting to 10 Is Not So Easy
2011-01-01
To reduce ballast water-borne aquatic invasions worldwide, the International Maritime Organization and United States Coast Guard have each proposed discharge standards specifying maximum concentrations of living biota that may be released in ships’ ballast water (BW), but these regulations still lack guidance for standardized type approval and compliance testing of treatment systems. Verifying whether BW meets a discharge standard poses significant challenges. Properly treated BW will contain extremely sparse numbers of live organisms, and robust estimates of rare events require extensive sampling efforts. A balance of analytical rigor and practicality is essential to determine the volume of BW that can be reasonably sampled and processed, yet yield accurate live counts. We applied statistical modeling to a range of sample volumes, plankton concentrations, and regulatory scenarios (i.e., levels of type I and type II errors), and calculated the statistical power of each combination to detect noncompliant discharge concentrations. The model expressly addresses the roles of sampling error, BW volume, and burden of proof on the detection of noncompliant discharges in order to establish a rigorous lower limit of sampling volume. The potential effects of recovery errors (i.e., incomplete recovery and detection of live biota) in relation to sample volume are also discussed. PMID:21434685
Enumerating sparse organisms in ships' ballast water: why counting to 10 is not so easy.
Miller, A Whitman; Frazier, Melanie; Smith, George E; Perry, Elgin S; Ruiz, Gregory M; Tamburri, Mario N
2011-04-15
To reduce ballast water-borne aquatic invasions worldwide, the International Maritime Organization and United States Coast Guard have each proposed discharge standards specifying maximum concentrations of living biota that may be released in ships' ballast water (BW), but these regulations still lack guidance for standardized type approval and compliance testing of treatment systems. Verifying whether BW meets a discharge standard poses significant challenges. Properly treated BW will contain extremely sparse numbers of live organisms, and robust estimates of rare events require extensive sampling efforts. A balance of analytical rigor and practicality is essential to determine the volume of BW that can be reasonably sampled and processed, yet yield accurate live counts. We applied statistical modeling to a range of sample volumes, plankton concentrations, and regulatory scenarios (i.e., levels of type I and type II errors), and calculated the statistical power of each combination to detect noncompliant discharge concentrations. The model expressly addresses the roles of sampling error, BW volume, and burden of proof on the detection of noncompliant discharges in order to establish a rigorous lower limit of sampling volume. The potential effects of recovery errors (i.e., incomplete recovery and detection of live biota) in relation to sample volume are also discussed.
Pires, Mateus Marques; Kotzian, Carla Bender; Spies, Marcia Regina
2014-01-01
Abstract Farm ponds help maintain diversity in altered landscapes. However, studies on the features that drive this type of property in the Neotropics are still lacking, especially for the insect fauna. We analyzed the spatial and temporal distribution of odonate larval assemblages in farm ponds. Odonates were sampled monthly at four farm ponds from March 2008 to February 2009 in a temperate montane region of southern Brazil. A small number of genera were frequent and accounted for most of the dominant fauna. The dominant genera composition differed among ponds. Local spatial drivers such as area, hydroperiod, and margin vegetation structure likely explain these results more than spatial predictors due to the small size of the study area. Circular analysis detected seasonal effect on assemblage abundance but not on richness. Seasonality in abundance was related to the life cycles of a few dominant genera. This result was explained by temperature and not rainfall due to the temperate climate of the region studied. The persistence of dominant genera and the sparse occurrence of many taxa over time probably led to a lack in a seasonal pattern in assemblage richness. PMID:25527585
Miniaturized Monitors for Assessment of Exposure to Air Pollutants: A Review.
Borghi, Francesca; Spinazzè, Andrea; Rovelli, Sabrina; Campagnolo, Davide; Del Buono, Luca; Cattaneo, Andrea; Cavallo, Domenico M
2017-08-12
Air quality has a huge impact on different aspects of life quality, and for this reason, air quality monitoring is required by national and international regulations. Technical and procedural limitations of traditional fixed-site stations for monitoring or sampling of air pollutants are also well-known. Recently, a different type of miniaturized monitors has been developed. These monitors, due to their characteristics (e.g., low cost, small size, high portability) are becoming increasingly important for individual exposure assessment, especially since this kind of instrument can provide measurements at high spatial and temporal resolution, which is a notable advantage when approaching assessment of exposure to environmental contaminants. The aim of this study is indeed to provide information regarding current knowledge regarding the use of miniaturized air pollutant sensors. A systematic review was performed to identify original articles: a literature search was carried out using an appropriate query for the search of papers across three different databases, and the papers were selected using inclusion/exclusion criteria. The reviewed articles showed that miniaturized sensors are particularly versatile and could be applied in studies with different experimental designs, helping to provide a significant enhancement to exposure assessment, even though studies regarding their performance are still sparse.
Scanning silence: mental imagery of complex sounds.
Bunzeck, Nico; Wuestenberg, Torsten; Lutz, Kai; Heinze, Hans-Jochen; Jancke, Lutz
2005-07-15
In this functional magnetic resonance imaging (fMRI) study, we investigated the neural basis of mental auditory imagery of familiar complex sounds that did not contain language or music. In the first condition (perception), the subjects watched familiar scenes and listened to the corresponding sounds that were presented simultaneously. In the second condition (imagery), the same scenes were presented silently and the subjects had to mentally imagine the appropriate sounds. During the third condition (control), the participants watched a scrambled version of the scenes without sound. To overcome the disadvantages of the stray acoustic scanner noise in auditory fMRI experiments, we applied sparse temporal sampling technique with five functional clusters that were acquired at the end of each movie presentation. Compared to the control condition, we found bilateral activations in the primary and secondary auditory cortices (including Heschl's gyrus and planum temporale) during perception of complex sounds. In contrast, the imagery condition elicited bilateral hemodynamic responses only in the secondary auditory cortex (including the planum temporale). No significant activity was observed in the primary auditory cortex. The results show that imagery and perception of complex sounds that do not contain language or music rely on overlapping neural correlates of the secondary but not primary auditory cortex.
Sampling of temporal networks: Methods and biases
NASA Astrophysics Data System (ADS)
Rocha, Luis E. C.; Masuda, Naoki; Holme, Petter
2017-11-01
Temporal networks have been increasingly used to model a diversity of systems that evolve in time; for example, human contact structures over which dynamic processes such as epidemics take place. A fundamental aspect of real-life networks is that they are sampled within temporal and spatial frames. Furthermore, one might wish to subsample networks to reduce their size for better visualization or to perform computationally intensive simulations. The sampling method may affect the network structure and thus caution is necessary to generalize results based on samples. In this paper, we study four sampling strategies applied to a variety of real-life temporal networks. We quantify the biases generated by each sampling strategy on a number of relevant statistics such as link activity, temporal paths and epidemic spread. We find that some biases are common in a variety of networks and statistics, but one strategy, uniform sampling of nodes, shows improved performance in most scenarios. Given the particularities of temporal network data and the variety of network structures, we recommend that the choice of sampling methods be problem oriented to minimize the potential biases for the specific research questions on hand. Our results help researchers to better design network data collection protocols and to understand the limitations of sampled temporal network data.
Directional view interpolation for compensation of sparse angular sampling in cone-beam CT.
Bertram, Matthias; Wiegert, Jens; Schafer, Dirk; Aach, Til; Rose, Georg
2009-07-01
In flat detector cone-beam computed tomography and related applications, sparse angular sampling frequently leads to characteristic streak artifacts. To overcome this problem, it has been suggested to generate additional views by means of interpolation. The practicality of this approach is investigated in combination with a dedicated method for angular interpolation of 3-D sinogram data. For this purpose, a novel dedicated shape-driven directional interpolation algorithm based on a structure tensor approach is developed. Quantitative evaluation shows that this method clearly outperforms conventional scene-based interpolation schemes. Furthermore, the image quality trade-offs associated with the use of interpolated intermediate views are systematically evaluated for simulated and clinical cone-beam computed tomography data sets of the human head. It is found that utilization of directionally interpolated views significantly reduces streak artifacts and noise, at the expense of small introduced image blur.
Label-free optical imaging of membrane patches for atomic force microscopy
Churnside, Allison B.; King, Gavin M.; Perkins, Thomas T.
2010-01-01
In atomic force microscopy (AFM), finding sparsely distributed regions of interest can be difficult and time-consuming. Typically, the tip is scanned until the desired object is located. This process can mechanically or chemically degrade the tip, as well as damage fragile biological samples. Protein assemblies can be detected using the back-scattered light from a focused laser beam. We previously used back-scattered light from a pair of laser foci to stabilize an AFM. In the present work, we integrate these techniques to optically image patches of purple membranes prior to AFM investigation. These rapidly acquired optical images were aligned to the subsequent AFM images to ~40 nm, since the tip position was aligned to the optical axis of the imaging laser. Thus, this label-free imaging efficiently locates sparsely distributed protein assemblies for subsequent AFM study while simultaneously minimizing degradation of the tip and the sample. PMID:21164738
Input dependent cell assembly dynamics in a model of the striatal medium spiny neuron network.
Ponzi, Adam; Wickens, Jeff
2012-01-01
The striatal medium spiny neuron (MSN) network is sparsely connected with fairly weak GABAergic collaterals receiving an excitatory glutamatergic cortical projection. Peri-stimulus time histograms (PSTH) of MSN population response investigated in various experimental studies display strong firing rate modulations distributed throughout behavioral task epochs. In previous work we have shown by numerical simulation that sparse random networks of inhibitory spiking neurons with characteristics appropriate for UP state MSNs form cell assemblies which fire together coherently in sequences on long behaviorally relevant timescales when the network receives a fixed pattern of constant input excitation. Here we first extend that model to the case where cortical excitation is composed of many independent noisy Poisson processes and demonstrate that cell assembly dynamics is still observed when the input is sufficiently weak. However if cortical excitation strength is increased more regularly firing and completely quiescent cells are found, which depend on the cortical stimulation. Subsequently we further extend previous work to consider what happens when the excitatory input varies as it would when the animal is engaged in behavior. We investigate how sudden switches in excitation interact with network generated patterned activity. We show that sequences of cell assembly activations can be locked to the excitatory input sequence and outline the range of parameters where this behavior is shown. Model cell population PSTH display both stimulus and temporal specificity, with large population firing rate modulations locked to elapsed time from task events. Thus the random network can generate a large diversity of temporally evolving stimulus dependent responses even though the input is fixed between switches. We suggest the MSN network is well suited to the generation of such slow coherent task dependent response which could be utilized by the animal in behavior.
Input Dependent Cell Assembly Dynamics in a Model of the Striatal Medium Spiny Neuron Network
Ponzi, Adam; Wickens, Jeff
2012-01-01
The striatal medium spiny neuron (MSN) network is sparsely connected with fairly weak GABAergic collaterals receiving an excitatory glutamatergic cortical projection. Peri-stimulus time histograms (PSTH) of MSN population response investigated in various experimental studies display strong firing rate modulations distributed throughout behavioral task epochs. In previous work we have shown by numerical simulation that sparse random networks of inhibitory spiking neurons with characteristics appropriate for UP state MSNs form cell assemblies which fire together coherently in sequences on long behaviorally relevant timescales when the network receives a fixed pattern of constant input excitation. Here we first extend that model to the case where cortical excitation is composed of many independent noisy Poisson processes and demonstrate that cell assembly dynamics is still observed when the input is sufficiently weak. However if cortical excitation strength is increased more regularly firing and completely quiescent cells are found, which depend on the cortical stimulation. Subsequently we further extend previous work to consider what happens when the excitatory input varies as it would when the animal is engaged in behavior. We investigate how sudden switches in excitation interact with network generated patterned activity. We show that sequences of cell assembly activations can be locked to the excitatory input sequence and outline the range of parameters where this behavior is shown. Model cell population PSTH display both stimulus and temporal specificity, with large population firing rate modulations locked to elapsed time from task events. Thus the random network can generate a large diversity of temporally evolving stimulus dependent responses even though the input is fixed between switches. We suggest the MSN network is well suited to the generation of such slow coherent task dependent response which could be utilized by the animal in behavior. PMID:22438838
Estimation of Dynamic Sparse Connectivity Patterns From Resting State fMRI.
Cai, Biao; Zille, Pascal; Stephen, Julia M; Wilson, Tony W; Calhoun, Vince D; Wang, Yu Ping
2018-05-01
Functional connectivity (FC) estimated from functional magnetic resonance imaging (fMRI) time series, especially during resting state periods, provides a powerful tool to assess human brain functional architecture in health, disease, and developmental states. Recently, the focus of connectivity analysis has shifted toward the subnetworks of the brain, which reveals co-activating patterns over time. Most prior works produced a dense set of high-dimensional vectors, which are hard to interpret. In addition, their estimations to a large extent were based on an implicit assumption of spatial and temporal stationarity throughout the fMRI scanning session. In this paper, we propose an approach called dynamic sparse connectivity patterns (dSCPs), which takes advantage of both matrix factorization and time-varying fMRI time series to improve the estimation power of FC. The feasibility of analyzing dynamic FC with our model is first validated through simulated experiments. Then, we use our framework to measure the difference between young adults and children with real fMRI data set from the Philadelphia Neurodevelopmental Cohort (PNC). The results from the PNC data set showed significant FC differences between young adults and children in four different states. For instance, young adults had reduced connectivity between the default mode network and other subnetworks, as well as hyperconnectivity within the visual system in states 1 and 3, and hypoconnectivity in state 2. Meanwhile, they exhibited temporal correlation patterns that changed over time within functional subnetworks. In addition, the dSCPs model indicated that older people tend to spend more time within a relatively connected FC pattern. Overall, the proposed method provides a valid means to assess dynamic FC, which could facilitate the study of brain networks.
The impact of the resolution of meteorological datasets on catchment-scale drought studies
NASA Astrophysics Data System (ADS)
Hellwig, Jost; Stahl, Kerstin
2017-04-01
Gridded meteorological datasets provide the basis to study drought at a range of scales, including catchment scale drought studies in hydrology. They are readily available to study past weather conditions and often serve real time monitoring as well. As these datasets differ in spatial/temporal coverage and spatial/temporal resolution, for most studies there is a tradeoff between these features. Our investigation examines whether biases occur when studying drought on catchment scale with low resolution input data. For that, a comparison among the datasets HYRAS (covering Central Europe, 1x1 km grid, daily data, 1951 - 2005), E-OBS (Europe, 0.25° grid, daily data, 1950-2015) and GPCC (whole world, 0.5° grid, monthly data, 1901 - 2013) is carried out. Generally, biases in precipitation increase with decreasing resolution. Most important variations are found during summer. In low mountain range of Central Europe the datasets of sparse resolution (E-OBS, GPCC) overestimate dry days and underestimate total precipitation since they are not able to describe high spatial variability. However, relative measures like the correlation coefficient reveal good consistencies of dry and wet periods, both for absolute precipitation values and standardized indices like the Standardized Precipitation Index (SPI) or Standardized Precipitation Evaporation Index (SPEI). Particularly the most severe droughts derived from the different datasets match very well. These results indicate that absolute values of sparse resolution datasets applied to catchment scale might be critical to use for an assessment of the hydrological drought at catchment scale, whereas relative measures for determining periods of drought are more trustworthy. Therefore, studies on drought, that downscale meteorological data, should carefully consider their data needs and focus on relative measures for dry periods if sufficient for the task.
Decorrelation scales for Arctic Ocean hydrography - Part I: Amerasian Basin
NASA Astrophysics Data System (ADS)
Sumata, Hiroshi; Kauker, Frank; Karcher, Michael; Rabe, Benjamin; Timmermans, Mary-Louise; Behrendt, Axel; Gerdes, Rüdiger; Schauer, Ursula; Shimada, Koji; Cho, Kyoung-Ho; Kikuchi, Takashi
2018-03-01
Any use of observational data for data assimilation requires adequate information of their representativeness in space and time. This is particularly important for sparse, non-synoptic data, which comprise the bulk of oceanic in situ observations in the Arctic. To quantify spatial and temporal scales of temperature and salinity variations, we estimate the autocorrelation function and associated decorrelation scales for the Amerasian Basin of the Arctic Ocean. For this purpose, we compile historical measurements from 1980 to 2015. Assuming spatial and temporal homogeneity of the decorrelation scale in the basin interior (abyssal plain area), we calculate autocorrelations as a function of spatial distance and temporal lag. The examination of the functional form of autocorrelation in each depth range reveals that the autocorrelation is well described by a Gaussian function in space and time. We derive decorrelation scales of 150-200 km in space and 100-300 days in time. These scales are directly applicable to quantify the representation error, which is essential for use of ocean in situ measurements in data assimilation. We also describe how the estimated autocorrelation function and decorrelation scale should be applied for cost function calculation in a data assimilation system.
Monitoring air quality in mountains: Designing an effective network
Peterson, D.L.
2000-01-01
A quantitatively robust yet parsimonious air-quality monitoring network in mountainous regions requires special attention to relevant spatial and temporal scales of measurement and inference. The design of monitoring networks should focus on the objectives required by public agencies, namely: 1) determine if some threshold has been exceeded (e.g., for regulatory purposes), and 2) identify spatial patterns and temporal trends (e.g., to protect natural resources). A short-term, multi-scale assessment to quantify spatial variability in air quality is a valuable asset in designing a network, in conjunction with an evaluation of existing data and simulation-model output. A recent assessment in Washington state (USA) quantified spatial variability in tropospheric ozone distribution ranging from a single watershed to the western third of the state. Spatial and temporal coherence in ozone exposure modified by predictable elevational relationships ( 1.3 ppbv ozone per 100 m elevation gain) extends from urban areas to the crest of the Cascade Range. This suggests that a sparse network of permanent analyzers is sufficient at all spatial scales, with the option of periodic intensive measurements to validate network design. It is imperative that agencies cooperate in the design of monitoring networks in mountainous regions to optimize data collection and financial efficiencies.
Achieving Consistent Doppler Measurements from SDO/HMI Vector Field Inversions
NASA Technical Reports Server (NTRS)
Schuck, Peter W.; Antiochos, S. K.; Leka, K. D.; Barnes, Graham
2016-01-01
NASA's Solar Dynamics Observatory is delivering vector magnetic field observations of the full solar disk with unprecedented temporal and spatial resolution; however, the satellite is in a highly inclined geosynchronous orbit. The relative spacecraft-Sun velocity varies by +/-3 kms-1 over a day, which introduces major orbital artifacts in the Helioseismic Magnetic Imager (HMI) data. We demonstrate that the orbital artifacts contaminate all spatial and temporal scales in the data. We describe a newly developed three-stage procedure for mitigating these artifacts in the Doppler data obtained from the Milne-Eddington inversions in the HMI pipeline. The procedure ultimately uses 32 velocity-dependent coefficients to adjust 10 million pixels-a remarkably sparse correction model given the complexity of the orbital artifacts. This procedure was applied to full-disk images of AR 11084 to produce consistent Dopplergrams. The data adjustments reduce the power in the orbital artifacts by 31 dB. Furthermore, we analyze in detail the corrected images and show that our procedure greatly improves the temporal and spectral properties of the data without adding any new artifacts. We conclude that this new procedure makes a dramatic improvement in the consistency of the HMI data and in its usefulness for precision scientific studies.
Blind compressive sensing dynamic MRI
Lingala, Sajan Goud; Jacob, Mathews
2013-01-01
We propose a novel blind compressive sensing (BCS) frame work to recover dynamic magnetic resonance images from undersampled measurements. This scheme models the dynamic signal as a sparse linear combination of temporal basis functions, chosen from a large dictionary. In contrast to classical compressed sensing, the BCS scheme simultaneously estimates the dictionary and the sparse coefficients from the undersampled measurements. Apart from the sparsity of the coefficients, the key difference of the BCS scheme with current low rank methods is the non-orthogonal nature of the dictionary basis functions. Since the number of degrees of freedom of the BCS model is smaller than that of the low-rank methods, it provides improved reconstructions at high acceleration rates. We formulate the reconstruction as a constrained optimization problem; the objective function is the linear combination of a data consistency term and sparsity promoting ℓ1 prior of the coefficients. The Frobenius norm dictionary constraint is used to avoid scale ambiguity. We introduce a simple and efficient majorize-minimize algorithm, which decouples the original criterion into three simpler sub problems. An alternating minimization strategy is used, where we cycle through the minimization of three simpler problems. This algorithm is seen to be considerably faster than approaches that alternates between sparse coding and dictionary estimation, as well as the extension of K-SVD dictionary learning scheme. The use of the ℓ1 penalty and Frobenius norm dictionary constraint enables the attenuation of insignificant basis functions compared to the ℓ0 norm and column norm constraint assumed in most dictionary learning algorithms; this is especially important since the number of basis functions that can be reliably estimated is restricted by the available measurements. We also observe that the proposed scheme is more robust to local minima compared to K-SVD method, which relies on greedy sparse coding. Our phase transition experiments demonstrate that the BCS scheme provides much better recovery rates than classical Fourier-based CS schemes, while being only marginally worse than the dictionary aware setting. Since the overhead in additionally estimating the dictionary is low, this method can be very useful in dynamic MRI applications, where the signal is not sparse in known dictionaries. We demonstrate the utility of the BCS scheme in accelerating contrast enhanced dynamic data. We observe superior reconstruction performance with the BCS scheme in comparison to existing low rank and compressed sensing schemes. PMID:23542951
Positive health practices and temporal perspective in low-income adults.
Thompson, Cheryl W; Fitzpatrick, Joyce J
2008-07-01
The purpose of this study was to describe health-promoting behaviours and temporal perspective in low-income adults. Positive health practices represent a broad range of health-promoting behaviours. The ability to adopt positive health practices may be influenced by many factors, one of which is temporal perspective, the perceived relationship between past, present and future times. This exploratory study was conducted in a south central Pennsylvania community with a convenience sample of individuals who were eligible for a subsidized low-income housing programme. Positive health practices were measured using the Personal Lifestyle Questionnaire. Temporal perspective was measured with the Circles Test. The sample consisted of 75 subjects, 61 women (81%) and 14 men (19%). Positive health practices were relatively high (mean = 70 out of a possible score of 96). Forty three per cent of the subjects expressed future temporal dominance and 80% of the subjects in this study expressed non-continuous temporal relatedness. Health-promoting behaviours in this low-income sample were similar to those reported in other samples in middle-class adult samples. The percentage of subjects who expressed future dominance was similar to findings in other samples. The percentage of subjects who expressed non-continuous temporal relatedness was different from findings reported in other samples, suggesting that perception of the relationship of past, present and future times is different in this low-income sample compared with other samples. The temporal relatedness findings suggest that low-income individuals may not believe that adopting positive health practices will influence future health.
Functional fixedness in a technologically sparse culture.
German, Tim P; Barrett, H Clark
2005-01-01
Problem solving can be inefficient when the solution requires subjects to generate an atypical function for an object and the object's typical function has been primed. Subjects become "fixed" on the design function of the object, and problem solving suffers relative to control conditions in which the object's function is not demonstrated. In the current study, such functional fixedness was demonstrated in a sample of adolescents (mean age of 16 years) among the Shuar of Ecuadorian Amazonia, whose technologically sparse culture provides limited access to large numbers of artifacts with highly specialized functions. This result suggests that design function may universally be the core property of artifact concepts in human semantic memory.
Joint fMRI analysis and subject clustering using sparse dictionary learning
NASA Astrophysics Data System (ADS)
Kim, Seung-Jun; Dontaraju, Krishna K.
2017-08-01
Multi-subject fMRI data analysis methods based on sparse dictionary learning are proposed. In addition to identifying the component spatial maps by exploiting the sparsity of the maps, clusters of the subjects are learned by postulating that the fMRI volumes admit a subspace clustering structure. Furthermore, in order to tune the associated hyper-parameters systematically, a cross-validation strategy is developed based on entry-wise sampling of the fMRI dataset. Efficient algorithms for solving the proposed constrained dictionary learning formulations are developed. Numerical tests performed on synthetic fMRI data show promising results and provides insights into the proposed technique.
Moving Beam-Blocker-Based Low-Dose Cone-Beam CT
NASA Astrophysics Data System (ADS)
Lee, Taewon; Lee, Changwoo; Baek, Jongduk; Cho, Seungryong
2016-10-01
This paper experimentally demonstrates a feasibility of moving beam-blocker-based low-dose cone-beam CT (CBCT) and exploits the beam-blocking configurations to reach an optimal one that leads to the highest contrast-to-noise ratio (CNR). Sparse-view CT takes projections at sparse view angles and provides a viable option to reducing dose. We have earlier proposed a many-view under-sampling (MVUS) technique as an alternative to sparse-view CT. Instead of switching the x-ray tube power, one can place a reciprocating multi-slit beam-blocker between the x-ray tube and the patient to partially block the x-ray beam. We used a bench-top circular cone-beam CT system with a lab-made moving beam-blocker. For image reconstruction, we used a modified total-variation minimization (TV) algorithm that masks the blocked data in the back-projection step leaving only the measured data through the slits to be used in the computation. The number of slits and the reciprocation frequency have been varied and the effects of them on the image quality were investigated. For image quality assessment, we used CNR and the detectability. We also analyzed the sampling efficiency in the context of compressive sensing: the sampling density and data incoherence in each case. We tested three sets of slits with their number of 6, 12 and 18, each at reciprocation frequencies of 10, 30, 50 and 70 Hz/rot. The optimum condition out of the tested sets was found to be using 12 slits at 30 Hz/rot.
2014-01-01
and proportional correctors. The weighting function evaluates nearby data samples to determine the utility of each correction style , eliminating the...sparse methods may be of use. As for other multi-fidelity techniques, true cokriging in the style described by geo-statisticians[93] is beyond the...sampling style between sampling points predicted to fall near the contour and sampling points predicted to be farther from the contour but with
Distant failure prediction for early stage NSCLC by analyzing PET with sparse representation
NASA Astrophysics Data System (ADS)
Hao, Hongxia; Zhou, Zhiguo; Wang, Jing
2017-03-01
Positron emission tomography (PET) imaging has been widely explored for treatment outcome prediction. Radiomicsdriven methods provide a new insight to quantitatively explore underlying information from PET images. However, it is still a challenging problem to automatically extract clinically meaningful features for prognosis. In this work, we develop a PET-guided distant failure predictive model for early stage non-small cell lung cancer (NSCLC) patients after stereotactic ablative radiotherapy (SABR) by using sparse representation. The proposed method does not need precalculated features and can learn intrinsically distinctive features contributing to classification of patients with distant failure. The proposed framework includes two main parts: 1) intra-tumor heterogeneity description; and 2) dictionary pair learning based sparse representation. Tumor heterogeneity is initially captured through anisotropic kernel and represented as a set of concatenated vectors, which forms the sample gallery. Then, given a test tumor image, its identity (i.e., distant failure or not) is classified by applying the dictionary pair learning based sparse representation. We evaluate the proposed approach on 48 NSCLC patients treated by SABR at our institute. Experimental results show that the proposed approach can achieve an area under the characteristic curve (AUC) of 0.70 with a sensitivity of 69.87% and a specificity of 69.51% using a five-fold cross validation.
NASA Astrophysics Data System (ADS)
Miorelli, Roberto; Reboud, Christophe
2018-04-01
Pulsed Eddy Current Testing (PECT) is a popular NonDestructive Testing (NDT) technique for some applications like corrosion monitoring in the oil and gas industry, or rivet inspection in the aeronautic area. Its particularity is to use a transient excitation, which allows to retrieve more information from the piece than conventional harmonic ECT, in a simpler and cheaper way than multi-frequency ECT setups. Efficient modeling tools prove, as usual, very useful to optimize experimental sensors and devices or evaluate their performance, for instance. This paper proposes an efficient simulation of PECT signals based on standard time harmonic solvers and use of an Adaptive Sparse Grid (ASG) algorithm. An adaptive sampling of the ECT signal spectrum is performed with this algorithm, then the complete spectrum is interpolated from this sparse representation and PECT signals are finally synthesized by means of inverse Fourier transform. Simulation results corresponding to existing industrial configurations are presented and the performance of the strategy is discussed by comparison to reference results.
Blind compressed sensing image reconstruction based on alternating direction method
NASA Astrophysics Data System (ADS)
Liu, Qinan; Guo, Shuxu
2018-04-01
In order to solve the problem of how to reconstruct the original image under the condition of unknown sparse basis, this paper proposes an image reconstruction method based on blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on the existing blind compressed sensing theory, the optimal solution is solved by the alternative minimization method. The proposed method solves the problem that the sparse basis in compressed sensing is difficult to represent, which restrains the noise and improves the quality of reconstructed image. This method ensures that the blind compressed sensing theory has a unique solution and can recover the reconstructed original image signal from a complex environment with a stronger self-adaptability. The experimental results show that the image reconstruction algorithm based on blind compressed sensing proposed in this paper can recover high quality image signals under the condition of under-sampling.
ERIC Educational Resources Information Center
Vista, Alvin; Care, Esther
2011-01-01
Background: Research on gender differences in intelligence has focused mostly on samples from Western countries and empirical evidence on gender differences from Southeast Asia is relatively sparse. Aims: This article presents results on gender differences in variance and means on a non-verbal intelligence test using a national sample of public…
NASA Astrophysics Data System (ADS)
Heim, B.; Beamish, A. L.; Walker, D. A.; Epstein, H. E.; Sachs, T.; Chabrillat, S.; Buchhorn, M.; Prakash, A.
2016-12-01
Ground data for the validation of satellite-derived terrestrial Essential Climate Variables (ECVs) at high latitudes are sparse. Also for regional model evaluation (e.g. climate models, land surface models, permafrost models), we lack accurate ranges of terrestrial ground data and face the problem of a large mismatch in scale. Within the German research programs `Regional Climate Change' (REKLIM) and the Environmental Mapping and Analysis Program (EnMAP), we conducted a study on ground data representativeness for vegetation-related variables within a monitoring grid at the Toolik Lake Long-Term Ecological Research station; the Toolik Lake station lies in the Kuparuk River watershed on the North Slope of the Brooks Mountain Range in Alaska. The Toolik Lake grid covers an area of 1 km2 containing Eight five grid points spaced 100 meters apart. Moist acidic tussock tundra is the most dominant vegetation type within the grid. Eight five permanent 1 m2 plots were also established to be representative of the individual gridpoints. Researchers from the University of Alaska Fairbanks have undertaken assessments at these plots, including Leaf Area Index (LAI) and field spectrometry to derive the Normalized Difference Vegetation Index (NDVI). During summer 2016, we conducted field spectrometry and LAI measurements at selected plots during early, peak and late summer. We experimentally measured LAI on more spatially extensive Elementary Sampling Units (ESUs) to investigate the spatial representativeness of the permanent 1 m2 plots and to map ESUs for various tundra types. LAI measurements are potentially influenced by landscape-inherent microtopography, sparse vascular plant cover, and dead woody matter. From field spectrometer measurements, we derived a clear-sky mid-day Fraction of Absorbed Photosynthetically Active Radiation (FAPAR). We will present the first data analyses comparing FAPAR and LAI, and maps of biophysically-focused ESUs for evaluation of the use of remote sensing data to estimate these ecosystem properties.
Multiclass classification of microarray data samples with a reduced number of genes
2011-01-01
Background Multiclass classification of microarray data samples with a reduced number of genes is a rich and challenging problem in Bioinformatics research. The problem gets harder as the number of classes is increased. In addition, the performance of most classifiers is tightly linked to the effectiveness of mandatory gene selection methods. Critical to gene selection is the availability of estimates about the maximum number of genes that can be handled by any classification algorithm. Lack of such estimates may lead to either computationally demanding explorations of a search space with thousands of dimensions or classification models based on gene sets of unrestricted size. In the former case, unbiased but possibly overfitted classification models may arise. In the latter case, biased classification models unable to support statistically significant findings may be obtained. Results A novel bound on the maximum number of genes that can be handled by binary classifiers in binary mediated multiclass classification algorithms of microarray data samples is presented. The bound suggests that high-dimensional binary output domains might favor the existence of accurate and sparse binary mediated multiclass classifiers for microarray data samples. Conclusions A comprehensive experimental work shows that the bound is indeed useful to induce accurate and sparse multiclass classifiers for microarray data samples. PMID:21342522
Self-Taught Learning Based on Sparse Autoencoder for E-Nose in Wound Infection Detection
He, Peilin; Jia, Pengfei; Qiao, Siqi; Duan, Shukai
2017-01-01
For an electronic nose (E-nose) in wound infection distinguishing, traditional learning methods have always needed large quantities of labeled wound infection samples, which are both limited and expensive; thus, we introduce self-taught learning combined with sparse autoencoder and radial basis function (RBF) into the field. Self-taught learning is a kind of transfer learning that can transfer knowledge from other fields to target fields, can solve such problems that labeled data (target fields) and unlabeled data (other fields) do not share the same class labels, even if they are from entirely different distribution. In our paper, we obtain numerous cheap unlabeled pollutant gas samples (benzene, formaldehyde, acetone and ethylalcohol); however, labeled wound infection samples are hard to gain. Thus, we pose self-taught learning to utilize these gas samples, obtaining a basis vector θ. Then, using the basis vector θ, we reconstruct the new representation of wound infection samples under sparsity constraint, which is the input of classifiers. We compare RBF with partial least squares discriminant analysis (PLSDA), and reach a conclusion that the performance of RBF is superior to others. We also change the dimension of our data set and the quantity of unlabeled data to search the input matrix that produces the highest accuracy. PMID:28991154
Triffo, W. J.; Palsdottir, H.; McDonald, K. L.; Lee, J. K.; Inman, J. L.; Bissell, M. J.; Raphael, R. M.; Auer, M.
2009-01-01
Summary High-pressure freezing is the preferred method to prepare thick biological specimens for ultrastructural studies. However, the advantages obtained by this method often prove unattainable for samples that are difficult to handle during the freezing and substitution protocols. Delicate and sparse samples are difficult to manipulate and maintain intact throughout the sequence of freezing, infiltration, embedding and final orientation for sectioning and subsequent transmission electron microscopy. An established approach to surmount these difficulties is the use of cellulose microdialysis tubing to transport the sample. With an inner diameter of 200 µm, the tubing protects small and fragile samples within the thickness constraints of high-pressure freezing, and the tube ends can be sealed to avoid loss of sample. Importantly, the transparency of the tubing allows optical study of the specimen at different steps in the process. Here, we describe the use of a micromanipulator and microinjection apparatus to handle and position delicate specimens within the tubing. We report two biologically significant examples that benefit from this approach, 3D cultures of mammary epithelial cells and cochlear outer hair cells. We illustrate the potential for correlative light and electron microscopy as well as electron tomography. PMID:18445158
Relationships between milk culture results and milk yield in Norwegian dairy cattle.
Reksen, O; Sølverød, L; Østerås, O
2007-10-01
Associations between test-day milk yield and positive milk cultures for Staphylococcus aureus, Streptococcus spp., and other mastitis pathogens or a negative milk culture for mastitis pathogens were assessed in quarter milk samples from randomly sampled cows selected without regard to current or previous udder health status. Staphylococcus aureus was dichotomized according to sparse (< or =1,500 cfu/mL of milk) or rich (>1,500 cfu/mL of milk) growth of the bacteria. Quarter milk samples were obtained on 1 to 4 occasions from 2,740 cows in 354 Norwegian dairy herds, resulting in a total of 3,430 samplings. Measures of test-day milk yield were obtained monthly and related to 3,547 microbiological diagnoses at the cow level. Mixed model linear regression models incorporating an autoregressive covariance structure accounting for repeated test-day milk yields within cow and random effects at the herd and sample level were used to quantify the effect of positive milk cultures on test-day milk yields. Identical models were run separately for first-parity, second-parity, and third-parity or older cows. Fixed effects were days in milk, the natural logarithm of days in milk, sparse and rich growth of Staph. aureus (1/0), Streptococcus spp. (1/0), other mastitis pathogens (1/0), calving season, time of test-day milk yields relative to time of microbiological diagnosis (test day relative to time of diagnosis), and the interaction terms between microbiological diagnosis and test day relative to time of diagnosis. The models were run with the logarithmically transformed composite milk somatic cell count excluded and included. Rich growth of Staph. aureus was associated with decreased production levels in first-parity cows. An interaction between rich growth of Staph. aureus and test day relative to time of diagnosis also predicted a decline in milk production in third-parity or older cows. Interaction between sparse growth of Staph. aureus and test day relative to time of diagnosis predicted declining test-day milk yields in first-parity cows. Sparse growth of Staph. aureus was associated with high milk yields in third-parity or older cows after including the logarithmically transformed composite milk somatic cell count in the model, which illustrates that lower production levels are related to elevated somatic cell counts in high-producing cows. The same association with test-day milk yield was found among Streptococcus spp.-positive pluriparous cows.
Period Estimation for Sparsely-sampled Quasi-periodic Light Curves Applied to Miras
NASA Astrophysics Data System (ADS)
He, Shiyuan; Yuan, Wenlong; Huang, Jianhua Z.; Long, James; Macri, Lucas M.
2016-12-01
We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal for period, we implement a hybrid method that applies the quasi-Newton algorithm for Gaussian process parameters and search the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period-luminosity relations.
Hall, Amee J; Brown, Trecia A; Grahn, Jessica A; Gati, Joseph S; Nixon, Pam L; Hughes, Sarah M; Menon, Ravi S; Lomber, Stephen G
2014-03-15
When conducting auditory investigations using functional magnetic resonance imaging (fMRI), there are inherent potential confounds that need to be considered. Traditional continuous fMRI acquisition methods produce sounds >90 dB which compete with stimuli or produce neural activation masking evoked activity. Sparse scanning methods insert a period of reduced MRI-related noise, between image acquisitions, in which a stimulus can be presented without competition. In this study, we compared sparse and continuous scanning methods to identify the optimal approach to investigate acoustically evoked cortical, thalamic and midbrain activity in the cat. Using a 7 T magnet, we presented broadband noise, 10 kHz tones, or 0.5 kHz tones in a block design, interleaved with blocks in which no stimulus was presented. Continuous scanning resulted in larger clusters of activation and more peak voxels within the auditory cortex. However, no significant activation was observed within the thalamus. Also, there was no significant difference found, between continuous or sparse scanning, in activations of midbrain structures. Higher magnitude activations were identified in auditory cortex compared to the midbrain using both continuous and sparse scanning. These results indicate that continuous scanning is the preferred method for investigations of auditory cortex in the cat using fMRI. Also, choice of method for future investigations of midbrain activity should be driven by other experimental factors, such as stimulus intensity and task performance during scanning. Copyright © 2014 Elsevier B.V. All rights reserved.
High Accuracy Human Activity Recognition Based on Sparse Locality Preserving Projections.
Zhu, Xiangbin; Qiu, Huiling
2016-01-01
Human activity recognition(HAR) from the temporal streams of sensory data has been applied to many fields, such as healthcare services, intelligent environments and cyber security. However, the classification accuracy of most existed methods is not enough in some applications, especially for healthcare services. In order to improving accuracy, it is necessary to develop a novel method which will take full account of the intrinsic sequential characteristics for time-series sensory data. Moreover, each human activity may has correlated feature relationship at different levels. Therefore, in this paper, we propose a three-stage continuous hidden Markov model (TSCHMM) approach to recognize human activities. The proposed method contains coarse, fine and accurate classification. The feature reduction is an important step in classification processing. In this paper, sparse locality preserving projections (SpLPP) is exploited to determine the optimal feature subsets for accurate classification of the stationary-activity data. It can extract more discriminative activities features from the sensor data compared with locality preserving projections. Furthermore, all of the gyro-based features are used for accurate classification of the moving data. Compared with other methods, our method uses significantly less number of features, and the over-all accuracy has been obviously improved.
High Accuracy Human Activity Recognition Based on Sparse Locality Preserving Projections
2016-01-01
Human activity recognition(HAR) from the temporal streams of sensory data has been applied to many fields, such as healthcare services, intelligent environments and cyber security. However, the classification accuracy of most existed methods is not enough in some applications, especially for healthcare services. In order to improving accuracy, it is necessary to develop a novel method which will take full account of the intrinsic sequential characteristics for time-series sensory data. Moreover, each human activity may has correlated feature relationship at different levels. Therefore, in this paper, we propose a three-stage continuous hidden Markov model (TSCHMM) approach to recognize human activities. The proposed method contains coarse, fine and accurate classification. The feature reduction is an important step in classification processing. In this paper, sparse locality preserving projections (SpLPP) is exploited to determine the optimal feature subsets for accurate classification of the stationary-activity data. It can extract more discriminative activities features from the sensor data compared with locality preserving projections. Furthermore, all of the gyro-based features are used for accurate classification of the moving data. Compared with other methods, our method uses significantly less number of features, and the over-all accuracy has been obviously improved. PMID:27893761
NASA Astrophysics Data System (ADS)
Chen, Duxin; Xu, Bowen; Zhu, Tao; Zhou, Tao; Zhang, Hai-Tao
2017-08-01
Coordination shall be deemed to the result of interindividual interaction among natural gregarious animal groups. However, revealing the underlying interaction rules and decision-making strategies governing highly coordinated motion in bird flocks is still a long-standing challenge. Based on analysis of high spatial-temporal resolution GPS data of three pigeon flocks, we extract the hidden interaction principle by using a newly emerging machine learning method, namely the sparse Bayesian learning. It is observed that the interaction probability has an inflection point at pairwise distance of 3-4 m closer than the average maximum interindividual distance, after which it decays strictly with rising pairwise metric distances. Significantly, the density of spatial neighbor distribution is strongly anisotropic, with an evident lack of interactions along individual velocity. Thus, it is found that in small-sized bird flocks, individuals reciprocally cooperate with a variational number of neighbors in metric space and tend to interact with closer time-varying neighbors, rather than interacting with a fixed number of topological ones. Finally, extensive numerical investigation is conducted to verify both the revealed interaction and decision-making principle during circular flights of pigeon flocks.
An approximation method for improving dynamic network model fitting.
Carnegie, Nicole Bohme; Krivitsky, Pavel N; Hunter, David R; Goodreau, Steven M
There has been a great deal of interest recently in the modeling and simulation of dynamic networks, i.e., networks that change over time. One promising model is the separable temporal exponential-family random graph model (ERGM) of Krivitsky and Handcock, which treats the formation and dissolution of ties in parallel at each time step as independent ERGMs. However, the computational cost of fitting these models can be substantial, particularly for large, sparse networks. Fitting cross-sectional models for observations of a network at a single point in time, while still a non-negligible computational burden, is much easier. This paper examines model fitting when the available data consist of independent measures of cross-sectional network structure and the duration of relationships under the assumption of stationarity. We introduce a simple approximation to the dynamic parameters for sparse networks with relationships of moderate or long duration and show that the approximation method works best in precisely those cases where parameter estimation is most likely to fail-networks with very little change at each time step. We consider a variety of cases: Bernoulli formation and dissolution of ties, independent-tie formation and Bernoulli dissolution, independent-tie formation and dissolution, and dependent-tie formation models.
Spatial-temporal variation of marginal land suitable for energy plants from 1990 to 2010 in China
Jiang, Dong; Hao, Mengmeng; Fu, Jingying; Zhuang, Dafang; Huang, Yaohuan
2014-01-01
Energy plants are the main source of bioenergy which will play an increasingly important role in future energy supplies. With limited cultivated land resources in China, the development of energy plants may primarily rely on the marginal land. In this study, based on the land use data from 1990 to 2010(every 5 years is a period) and other auxiliary data, the distribution of marginal land suitable for energy plants was determined using multi-factors integrated assessment method. The variation of land use type and spatial distribution of marginal land suitable for energy plants of different decades were analyzed. The results indicate that the total amount of marginal land suitable for energy plants decreased from 136.501 million ha to 114.225 million ha from 1990 to 2010. The reduced land use types are primarily shrub land, sparse forest land, moderate dense grassland and sparse grassland, and large variation areas are located in Guangxi, Tibet, Heilongjiang, Xinjiang and Inner Mongolia. The results of this study will provide more effective data reference and decision making support for the long-term planning of bioenergy resources. PMID:25056520
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altmann, Yoann; Maccarone, Aurora; McCarthy, Aongus
Here, this paper presents a new Bayesian spectral un-mixing algorithm to analyse remote scenes sensed via sparse multispectral Lidar measurements. To a first approximation, in the presence of a target, each Lidar waveform consists of a main peak, whose position depends on the target distance and whose amplitude depends on the wavelength of the laser source considered (i.e, on the target reflectivity). Besides, these temporal responses are usually assumed to be corrupted by Poisson noise in the low photon count regime. When considering multiple wavelengths, it becomes possible to use spectral information in order to identify and quantify the mainmore » materials in the scene, in addition to estimation of the Lidar-based range profiles. Due to its anomaly detection capability, the proposed hierarchical Bayesian model, coupled with an efficient Markov chain Monte Carlo algorithm, allows robust estimation of depth images together with abundance and outlier maps associated with the observed 3D scene. The proposed methodology is illustrated via experiments conducted with real multispectral Lidar data acquired in a controlled environment. The results demonstrate the possibility to unmix spectral responses constructed from extremely sparse photon counts (less than 10 photons per pixel and band).« less
Oweiss, Karim G
2006-07-01
This paper suggests a new approach for data compression during extracutaneous transmission of neural signals recorded by high-density microelectrode array in the cortex. The approach is based on exploiting the temporal and spatial characteristics of the neural recordings in order to strip the redundancy and infer the useful information early in the data stream. The proposed signal processing algorithms augment current filtering and amplification capability and may be a viable replacement to on chip spike detection and sorting currently employed to remedy the bandwidth limitations. Temporal processing is devised by exploiting the sparseness capabilities of the discrete wavelet transform, while spatial processing exploits the reduction in the number of physical channels through quasi-periodic eigendecomposition of the data covariance matrix. Our results demonstrate that substantial improvements are obtained in terms of lower transmission bandwidth, reduced latency and optimized processor utilization. We also demonstrate the improvements qualitatively in terms of superior denoising capabilities and higher fidelity of the obtained signals.
Bayesian inversion analysis of nonlinear dynamics in surface heterogeneous reactions.
Omori, Toshiaki; Kuwatani, Tatsu; Okamoto, Atsushi; Hukushima, Koji
2016-09-01
It is essential to extract nonlinear dynamics from time-series data as an inverse problem in natural sciences. We propose a Bayesian statistical framework for extracting nonlinear dynamics of surface heterogeneous reactions from sparse and noisy observable data. Surface heterogeneous reactions are chemical reactions with conjugation of multiple phases, and they have the intrinsic nonlinearity of their dynamics caused by the effect of surface-area between different phases. We adapt a belief propagation method and an expectation-maximization (EM) algorithm to partial observation problem, in order to simultaneously estimate the time course of hidden variables and the kinetic parameters underlying dynamics. The proposed belief propagation method is performed by using sequential Monte Carlo algorithm in order to estimate nonlinear dynamical system. Using our proposed method, we show that the rate constants of dissolution and precipitation reactions, which are typical examples of surface heterogeneous reactions, as well as the temporal changes of solid reactants and products, were successfully estimated only from the observable temporal changes in the concentration of the dissolved intermediate product.
Characteristics of voxel prediction power in full-brain Granger causality analysis of fMRI data
NASA Astrophysics Data System (ADS)
Garg, Rahul; Cecchi, Guillermo A.; Rao, A. Ravishankar
2011-03-01
Functional neuroimaging research is moving from the study of "activations" to the study of "interactions" among brain regions. Granger causality analysis provides a powerful technique to model spatio-temporal interactions among brain regions. We apply this technique to full-brain fMRI data without aggregating any voxel data into regions of interest (ROIs). We circumvent the problem of dimensionality using sparse regression from machine learning. On a simple finger-tapping experiment we found that (1) a small number of voxels in the brain have very high prediction power, explaining the future time course of other voxels in the brain; (2) these voxels occur in small sized clusters (of size 1-4 voxels) distributed throughout the brain; (3) albeit small, these clusters overlap with most of the clusters identified with the non-temporal General Linear Model (GLM); and (4) the method identifies clusters which, while not determined by the task and not detectable by GLM, still influence brain activity.
Particle Filter Based Tracking in a Detection Sparse Discrete Event Simulation Environment
2007-03-01
obtained by disqualifying a large number of particles. 52 (a) (b) ( c ) Figure 31. Particle Disqualification via Sanitization b...1 B. RESEARCH APPROACH..............................................................................5 C . THESIS ORGANIZATION...38 b. Detection Distribution Sampling............................................43 c . Estimated Position Calculation
Comparison of temporal trends in VOCs as measured with PDB samplers and low-flow sampling methods
Harte, P.T.
2002-01-01
Analysis of temporal trends in tetrachloroethylene (PCE) concentration determined by two sample techniques showed that passive diffusion bag (pdb) samplers adequately sample the large variation in PCE concentrations at the site. The slopes of the temporal trends in concentrations were comparable between the two techniques, and the pdb sample concentration generally reflected the instantaneous concentration sampled by the low-flow technique. Thus, the pdb samplers provided an appropriate sampling technique for PCE at these wells. One or two wells did not make the case for widespread application of pdb samples at all sites. However, application of pdb samples in some circumstances was appropriate for evaluating temporal and spatial variations in VOC concentrations, thus, should be considered as a useful tool in hydrogeology.
Edgelist phase unwrapping algorithm for time series InSAR analysis.
Shanker, A Piyush; Zebker, Howard
2010-03-01
We present here a new integer programming formulation for phase unwrapping of multidimensional data. Phase unwrapping is a key problem in many coherent imaging systems, including time series synthetic aperture radar interferometry (InSAR), with two spatial and one temporal data dimensions. The minimum cost flow (MCF) [IEEE Trans. Geosci. Remote Sens. 36, 813 (1998)] phase unwrapping algorithm describes a global cost minimization problem involving flow between phase residues computed over closed loops. Here we replace closed loops by reliable edges as the basic construct, thus leading to the name "edgelist." Our algorithm has several advantages over current methods-it simplifies the representation of multidimensional phase unwrapping, it incorporates data from external sources, such as GPS, where available to better constrain the unwrapped solution, and it treats regularly sampled or sparsely sampled data alike. It thus is particularly applicable to time series InSAR, where data are often irregularly spaced in time and individual interferograms can be corrupted with large decorrelated regions. We show that, similar to the MCF network problem, the edgelist formulation also exhibits total unimodularity, which enables us to solve the integer program problem by using efficient linear programming tools. We apply our method to a persistent scatterer-InSAR data set from the creeping section of the Central San Andreas Fault and find that the average creep rate of 22 mm/Yr is constant within 3 mm/Yr over 1992-2004 but varies systematically with ground location, with a slightly higher rate in 1992-1998 than in 1999-2003.
Heard, Matthew; Van Rijn, Jason A.; Reina, Richard D.; Huveneers, Charlie
2014-01-01
Research on physiological stress and post-capture mortality of threatened species caught as bycatch is critical for the management of fisheries. The present study used laboratory simulations to examine the physiological stress response of sparsely spotted stingarees (Urolophus paucimaculatus) subjected to one of four different trawl treatments, including two different trawl durations as well as ancillary stressors of either air exposure or crowding. Physiological indicators (plasma lactate, urea, potassium and glucose) and changes in white blood cell counts were measured from blood samples taken throughout a 48 h recovery period. Mortality was low throughout this study (15% overall) and occurred only after >48 h following air exposure, crowding and 3 h trawl simulations. Plasma lactate, glucose and urea concentrations were identified as potential indicators of physiological stress, while plasma potassium and white blood cell counts were too variable to identify changes that would be expected to have biological consequences for stingarees. The characterization of the temporal profiles of physiological indicators facilitates a more accurate assessment of secondary stressors by identifying the best timing to sample stingaree blood when investigating post-capture stress physiology. High levels of lactate, increasing glucose and depressed urea were all recorded in response to air exposure following trawling, indicating that this is the primary source of stress in stingarees caught in trawling operations. These findings highlight the importance of improving bycatch sorting procedures to reduce the time out of the water for trawl-caught stingarees. PMID:27293661
How do auditory cortex neurons represent communication sounds?
Gaucher, Quentin; Huetz, Chloé; Gourévitch, Boris; Laudanski, Jonathan; Occelli, Florian; Edeline, Jean-Marc
2013-11-01
A major goal in auditory neuroscience is to characterize how communication sounds are represented at the cortical level. The present review aims at investigating the role of auditory cortex in the processing of speech, bird songs and other vocalizations, which all are spectrally and temporally highly structured sounds. Whereas earlier studies have simply looked for neurons exhibiting higher firing rates to particular conspecific vocalizations over their modified, artificially synthesized versions, more recent studies determined the coding capacity of temporal spike patterns, which are prominent in primary and non-primary areas (and also in non-auditory cortical areas). In several cases, this information seems to be correlated with the behavioral performance of human or animal subjects, suggesting that spike-timing based coding strategies might set the foundations of our perceptive abilities. Also, it is now clear that the responses of auditory cortex neurons are highly nonlinear and that their responses to natural stimuli cannot be predicted from their responses to artificial stimuli such as moving ripples and broadband noises. Since auditory cortex neurons cannot follow rapid fluctuations of the vocalizations envelope, they only respond at specific time points during communication sounds, which can serve as temporal markers for integrating the temporal and spectral processing taking place at subcortical relays. Thus, the temporal sparse code of auditory cortex neurons can be considered as a first step for generating high level representations of communication sounds independent of the acoustic characteristic of these sounds. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.
Yin, X X; Ng, B W-H; Ramamohanarao, K; Baghai-Wadji, A; Abbott, D
2012-09-01
It has been shown that, magnetic resonance images (MRIs) with sparsity representation in a transformed domain, e.g. spatial finite-differences (FD), or discrete cosine transform (DCT), can be restored from undersampled k-space via applying current compressive sampling theory. The paper presents a model-based method for the restoration of MRIs. The reduced-order model, in which a full-system-response is projected onto a subspace of lower dimensionality, has been used to accelerate image reconstruction by reducing the size of the involved linear system. In this paper, the singular value threshold (SVT) technique is applied as a denoising scheme to reduce and select the model order of the inverse Fourier transform image, and to restore multi-slice breast MRIs that have been compressively sampled in k-space. The restored MRIs with SVT for denoising show reduced sampling errors compared to the direct MRI restoration methods via spatial FD, or DCT. Compressive sampling is a technique for finding sparse solutions to underdetermined linear systems. The sparsity that is implicit in MRIs is to explore the solution to MRI reconstruction after transformation from significantly undersampled k-space. The challenge, however, is that, since some incoherent artifacts result from the random undersampling, noise-like interference is added to the image with sparse representation. These recovery algorithms in the literature are not capable of fully removing the artifacts. It is necessary to introduce a denoising procedure to improve the quality of image recovery. This paper applies a singular value threshold algorithm to reduce the model order of image basis functions, which allows further improvement of the quality of image reconstruction with removal of noise artifacts. The principle of the denoising scheme is to reconstruct the sparse MRI matrices optimally with a lower rank via selecting smaller number of dominant singular values. The singular value threshold algorithm is performed by minimizing the nuclear norm of difference between the sampled image and the recovered image. It has been illustrated that this algorithm improves the ability of previous image reconstruction algorithms to remove noise artifacts while significantly improving the quality of MRI recovery.
A Stimulus-Locked Vector Autoregressive Model for Slow Event-Related fMRI Designs
Siegle, Greg
2009-01-01
Summary Neuroscientists have become increasingly interested in exploring dynamic relationships among brain regions. Such a relationship, when directed from one region toward another, is denoted by “effective connectivity.” An fMRI experimental paradigm which is well-suited for examination of effective connectivity is the slow event-related design. This design presents stimuli at sufficient temporal spacing for determining within-trial trajectories of BOLD activation, allowing for the analysis of stimulus-locked temporal covariation of brain responses in multiple regions. This may be especially important for emotional stimuli processing, which can evolve over the course of several seconds, if not longer. However, while several methods have been devised for determining fMRI effective connectivity, few are adapted to event-related designs, which include non-stationary BOLD responses and multiple levels of nesting. We propose a model tailored for exploring effective connectivity of multiple brain regions in event-related fMRI designs - a semi-parametric adaptation of vector autoregressive (VAR) models, termed “stimulus-locked VAR” (SloVAR). Connectivity coefficients vary as a function of time relative to stimulus onset, are regularized via basis expansions, and vary randomly across subjects. SloVAR obtains flexible, data-driven estimates of effective connectivity and hence is useful for building connectivity models when prior information on dynamic regional relationships is sparse. Indices derived from the coefficient estimates can also be used to relate effective connectivity estimates to behavioral or clinical measures. We demonstrate the SloVAR model on a sample of clinically depressed and normal controls, showing that early but not late cortico-amygdala connectivity appears crucial to emotional control and early but not late cortico-cortico connectivity predicts depression severity in the depressed group, relationships that would have been missed in a more traditional VAR analysis. PMID:19236927
A deep learning approach for real time prostate segmentation in freehand ultrasound guided biopsy.
Anas, Emran Mohammad Abu; Mousavi, Parvin; Abolmaesumi, Purang
2018-06-01
Targeted prostate biopsy, incorporating multi-parametric magnetic resonance imaging (mp-MRI) and its registration with ultrasound, is currently the state-of-the-art in prostate cancer diagnosis. The registration process in most targeted biopsy systems today relies heavily on accurate segmentation of ultrasound images. Automatic or semi-automatic segmentation is typically performed offline prior to the start of the biopsy procedure. In this paper, we present a deep neural network based real-time prostate segmentation technique during the biopsy procedure, hence paving the way for dynamic registration of mp-MRI and ultrasound data. In addition to using convolutional networks for extracting spatial features, the proposed approach employs recurrent networks to exploit the temporal information among a series of ultrasound images. One of the key contributions in the architecture is to use residual convolution in the recurrent networks to improve optimization. We also exploit recurrent connections within and across different layers of the deep networks to maximize the utilization of the temporal information. Furthermore, we perform dense and sparse sampling of the input ultrasound sequence to make the network robust to ultrasound artifacts. Our architecture is trained on 2,238 labeled transrectal ultrasound images, with an additional 637 and 1,017 unseen images used for validation and testing, respectively. We obtain a mean Dice similarity coefficient of 93%, a mean surface distance error of 1.10 mm and a mean Hausdorff distance error of 3.0 mm. A comparison of the reported results with those of a state-of-the-art technique indicates statistically significant improvement achieved by the proposed approach. Copyright © 2018 Elsevier B.V. All rights reserved.
Rosenfeld, Adar; Dorman, Michael; Schwartz, Joel; Novack, Victor; Just, Allan C; Kloog, Itai
2017-11-01
Meteorological stations measure air temperature (Ta) accurately with high temporal resolution, but usually suffer from limited spatial resolution due to their sparse distribution across rural, undeveloped or less populated areas. Remote sensing satellite-based measurements provide daily surface temperature (Ts) data in high spatial and temporal resolution and can improve the estimation of daily Ta. In this study we developed spatiotemporally resolved models which allow us to predict three daily parameters: Ta Max (day time), 24h mean, and Ta Min (night time) on a fine 1km grid across the state of Israel. We used and compared both the Aqua and Terra MODIS satellites. We used linear mixed effect models, IDW (inverse distance weighted) interpolations and thin plate splines (using a smooth nonparametric function of longitude and latitude) to first calibrate between Ts and Ta in those locations where we have available data for both and used that calibration to fill in neighboring cells without surface monitors or missing Ts. Out-of-sample ten-fold cross validation (CV) was used to quantify the accuracy of our predictions. Our model performance was excellent for both days with and without available Ts observations for both Aqua and Terra (CV Aqua R 2 results for min 0.966, mean 0.986, and max 0.967; CV Terra R 2 results for min 0.965, mean 0.987, and max 0.968). Our research shows that daily min, mean and max Ta can be reliably predicted using daily MODIS Ts data even across Israel, with high accuracy even for days without Ta or Ts data. These predictions can be used as three separate Ta exposures in epidemiology studies for better diurnal exposure assessment. Copyright © 2017 Elsevier Inc. All rights reserved.
Geremew, Addisie; Stiers, Iris; Sierens, Tim; Kefalew, Alemayehu; Triest, Ludwig
2018-01-01
Land degradation and soil erosion in the upper catchments of tropical lakes fringed by papyrus vegetation can result in a sediment load gradient from land to lakeward. Understanding the dynamics of clonal modules (ramets and genets) and growth strategies of plants on such a gradient in both space and time is critical for exploring a species adaptation and processes regulating population structure and differentiation. We assessed the spatial and temporal dynamics in clonal growth, diversity, and structure of an emergent macrophyte, Cyperus papyrus (papyrus), in response to two contrasting sedimentation regimes by combining morphological traits and genotype data using 20 microsatellite markers. A total of 636 ramets from six permanent plots (18 x 30 m) in three Ethiopian papyrus swamps, each with discrete sedimentation regimes (high vs. low) were sampled for two years. We found that ramets under the high sedimentation regime (HSR) were significantly clumped and denser than the sparse and spreading ramets under the low sedimentation regime (LSR). The HSR resulted in significantly different ramets with short culm height and girth diameter as compared to the LSR. These results indicated that C. papyrus ameliorates the effect of sedimentation by shifting clonal growth strategy from guerrilla (in LSR) to phalanx (in HSR). Clonal richness, size, dominance, and clonal subrange differed significantly between sediment regimes and studied time periods. Each swamp under HSR revealed a significantly high clonal richness (R = 0.80) as compared to the LSR (R = 0.48). Such discrepancy in clonal richness reflected the occurrence of initial and repeated seedling recruitment strategies as a response to different sedimentation regimes. Overall, our spatial and short-term temporal observations highlighted that HSR enhances clonal richness and decreases clonal subrange owing to repeated seedling recruitment and genets turnover.
Stiers, Iris; Sierens, Tim; Kefalew, Alemayehu; Triest, Ludwig
2018-01-01
Land degradation and soil erosion in the upper catchments of tropical lakes fringed by papyrus vegetation can result in a sediment load gradient from land to lakeward. Understanding the dynamics of clonal modules (ramets and genets) and growth strategies of plants on such a gradient in both space and time is critical for exploring a species adaptation and processes regulating population structure and differentiation. We assessed the spatial and temporal dynamics in clonal growth, diversity, and structure of an emergent macrophyte, Cyperus papyrus (papyrus), in response to two contrasting sedimentation regimes by combining morphological traits and genotype data using 20 microsatellite markers. A total of 636 ramets from six permanent plots (18 x 30 m) in three Ethiopian papyrus swamps, each with discrete sedimentation regimes (high vs. low) were sampled for two years. We found that ramets under the high sedimentation regime (HSR) were significantly clumped and denser than the sparse and spreading ramets under the low sedimentation regime (LSR). The HSR resulted in significantly different ramets with short culm height and girth diameter as compared to the LSR. These results indicated that C. papyrus ameliorates the effect of sedimentation by shifting clonal growth strategy from guerrilla (in LSR) to phalanx (in HSR). Clonal richness, size, dominance, and clonal subrange differed significantly between sediment regimes and studied time periods. Each swamp under HSR revealed a significantly high clonal richness (R = 0.80) as compared to the LSR (R = 0.48). Such discrepancy in clonal richness reflected the occurrence of initial and repeated seedling recruitment strategies as a response to different sedimentation regimes. Overall, our spatial and short-term temporal observations highlighted that HSR enhances clonal richness and decreases clonal subrange owing to repeated seedling recruitment and genets turnover. PMID:29338034
Efficient ICCG on a shared memory multiprocessor
NASA Technical Reports Server (NTRS)
Hammond, Steven W.; Schreiber, Robert
1989-01-01
Different approaches are discussed for exploiting parallelism in the ICCG (Incomplete Cholesky Conjugate Gradient) method for solving large sparse symmetric positive definite systems of equations on a shared memory parallel computer. Techniques for efficiently solving triangular systems and computing sparse matrix-vector products are explored. Three methods for scheduling the tasks in solving triangular systems are implemented on the Sequent Balance 21000. Sample problems that are representative of a large class of problems solved using iterative methods are used. We show that a static analysis to determine data dependences in the triangular solve can greatly improve its parallel efficiency. We also show that ignoring symmetry and storing the whole matrix can reduce solution time substantially.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arai, T; Nofiele, J; Sawant, A
2015-06-15
Purpose: Rapid MRI is an attractive, non-ionizing tool for soft-tissue-based monitoring of respiratory motion in thoracic and abdominal radiotherapy. One big challenge is to achieve high temporal resolution while maintaining adequate spatial resolution. K-t BLAST, sparse-sampling and reconstruction sequence based on a-priori information represents a potential solution. In this work, we investigated how much “true” motion information is lost as a-priori information is progressively added for faster imaging. Methods: Lung tumor motions in superior-inferior direction obtained from ten individuals were replayed into an in-house, MRI-compatible, programmable motion platform (50Hz refresh and 100microns precision). Six water-filled 1.5ml tubes were placed onmore » it as fiducial markers. Dynamic marker motion within a coronal slice (FOV: 32×32cm{sup 2}, resolution: 0.67×0.67mm{sup 2}, slice-thickness: 5mm) was collected on 3.0T body scanner (Ingenia, Philips). Balanced-FFE (TE/TR: 1.3ms/2.5ms, flip-angle: 40degrees) was used in conjunction with k-t BLAST. Each motion was repeated four times as four k-t acceleration factors 1, 2, 5, and 16 (corresponding frame rates were 2.5, 4.7, 9.8, and 19.1Hz, respectively) were compared. For each image set, one average motion trajectory was computed from six marker displacements. Root mean square error (RMS) was used as a metric of spatial accuracy where measured trajectories were compared to original data. Results: Tumor motion was approximately 10mm. The mean(standard deviation) of respiratory rates over ten patients was 0.28(0.06)Hz. Cumulative distributions of tumor motion frequency spectra (0–25Hz) obtained from the patients showed that 90% of motion fell on 3.88Hz or less. Therefore, the frame rate must be a double or higher for accurate monitoring. The RMS errors over patients for k-t factors of 1, 2, 5, and 16 were.10(.04),.17(.04), .21(.06) and.26(.06)mm, respectively. Conclusions: K-t factor of 5 or higher can cover the high frequency component of tumor respiratory motion, while the estimated error of spatial accuracy was approximately.2mm.« less
NASA Astrophysics Data System (ADS)
Ota, Junko; Umehara, Kensuke; Ishimaru, Naoki; Ohno, Shunsuke; Okamoto, Kentaro; Suzuki, Takanori; Shirai, Naoki; Ishida, Takayuki
2017-02-01
As the capability of high-resolution displays grows, high-resolution images are often required in Computed Tomography (CT). However, acquiring high-resolution images takes a higher radiation dose and a longer scanning time. In this study, we applied the Sparse-coding-based Super-Resolution (ScSR) method to generate high-resolution images without increasing the radiation dose. We prepared the over-complete dictionary learned the mapping between low- and highresolution patches and seek a sparse representation of each patch of the low-resolution input. These coefficients were used to generate the high-resolution output. For evaluation, 44 CT cases were used as the test dataset. We up-sampled images up to 2 or 4 times and compared the image quality of the ScSR scheme and bilinear and bicubic interpolations, which are the traditional interpolation schemes. We also compared the image quality of three learning datasets. A total of 45 CT images, 91 non-medical images, and 93 chest radiographs were used for dictionary preparation respectively. The image quality was evaluated by measuring peak signal-to-noise ratio (PSNR) and structure similarity (SSIM). The differences of PSNRs and SSIMs between the ScSR method and interpolation methods were statistically significant. Visual assessment confirmed that the ScSR method generated a high-resolution image with sharpness, whereas conventional interpolation methods generated over-smoothed images. To compare three different training datasets, there were no significance between the CT, the CXR and non-medical datasets. These results suggest that the ScSR provides a robust approach for application of up-sampling CT images and yields substantial high image quality of extended images in CT.
SparseBeads data: benchmarking sparsity-regularized computed tomography
NASA Astrophysics Data System (ADS)
Jørgensen, Jakob S.; Coban, Sophia B.; Lionheart, William R. B.; McDonald, Samuel A.; Withers, Philip J.
2017-12-01
Sparsity regularization (SR) such as total variation (TV) minimization allows accurate image reconstruction in x-ray computed tomography (CT) from fewer projections than analytical methods. Exactly how few projections suffice and how this number may depend on the image remain poorly understood. Compressive sensing connects the critical number of projections to the image sparsity, but does not cover CT, however empirical results suggest a similar connection. The present work establishes for real CT data a connection between gradient sparsity and the sufficient number of projections for accurate TV-regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes as well as mixtures were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels, number of projections and noise levels to allow the systematic assessment of parameters affecting performance of SR reconstruction algorithms6. Using the SparseBeads data, TV-regularized reconstruction quality was assessed as a function of numbers of projections and gradient sparsity. The critical number of projections for satisfactory TV-regularized reconstruction increased almost linearly with the gradient sparsity. This establishes a quantitative guideline from which one may predict how few projections to acquire based on expected sample sparsity level as an aid in planning of dose- or time-critical experiments. The results are expected to hold for samples of similar characteristics, i.e. consisting of few, distinct phases with relatively simple structure. Such cases are plentiful in porous media, composite materials, foams, as well as non-destructive testing and metrology. For samples of other characteristics the proposed methodology may be used to investigate similar relations.
Harada, Ryuhei; Nakamura, Tomotake; Shigeta, Yasuteru
2016-03-30
As an extension of the Outlier FLOODing (OFLOOD) method [Harada et al., J. Comput. Chem. 2015, 36, 763], the sparsity of the outliers defined by a hierarchical clustering algorithm, FlexDice, was considered to achieve an efficient conformational search as sparsity-weighted "OFLOOD." In OFLOOD, FlexDice detects areas of sparse distribution as outliers. The outliers are regarded as candidates that have high potential to promote conformational transitions and are employed as initial structures for conformational resampling by restarting molecular dynamics simulations. When detecting outliers, FlexDice defines a rank in the hierarchy for each outlier, which relates to sparsity in the distribution. In this study, we define a lower rank (first ranked), a medium rank (second ranked), and the highest rank (third ranked) outliers, respectively. For instance, the first-ranked outliers are located in a given conformational space away from the clusters (highly sparse distribution), whereas those with the third-ranked outliers are nearby the clusters (a moderately sparse distribution). To achieve the conformational search efficiently, resampling from the outliers with a given rank is performed. As demonstrations, this method was applied to several model systems: Alanine dipeptide, Met-enkephalin, Trp-cage, T4 lysozyme, and glutamine binding protein. In each demonstration, the present method successfully reproduced transitions among metastable states. In particular, the first-ranked OFLOOD highly accelerated the exploration of conformational space by expanding the edges. In contrast, the third-ranked OFLOOD reproduced local transitions among neighboring metastable states intensively. For quantitatively evaluations of sampled snapshots, free energy calculations were performed with a combination of umbrella samplings, providing rigorous landscapes of the biomolecules. © 2015 Wiley Periodicals, Inc.
Tensor-based dynamic reconstruction method for electrical capacitance tomography
NASA Astrophysics Data System (ADS)
Lei, J.; Mu, H. P.; Liu, Q. B.; Li, Z. H.; Liu, S.; Wang, X. Y.
2017-03-01
Electrical capacitance tomography (ECT) is an attractive visualization measurement method, in which the acquisition of high-quality images is beneficial for the understanding of the underlying physical or chemical mechanisms of the dynamic behaviors of the measurement objects. In real-world measurement environments, imaging objects are often in a dynamic process, and the exploitation of the spatial-temporal correlations related to the dynamic nature will contribute to improving the imaging quality. Different from existing imaging methods that are often used in ECT measurements, in this paper a dynamic image sequence is stacked into a third-order tensor that consists of a low rank tensor and a sparse tensor within the framework of the multiple measurement vectors model and the multi-way data analysis method. The low rank tensor models the similar spatial distribution information among frames, which is slowly changing over time, and the sparse tensor captures the perturbations or differences introduced in each frame, which is rapidly changing over time. With the assistance of the Tikhonov regularization theory and the tensor-based multi-way data analysis method, a new cost function, with the considerations of the multi-frames measurement data, the dynamic evolution information of a time-varying imaging object and the characteristics of the low rank tensor and the sparse tensor, is proposed to convert the imaging task in the ECT measurement into a reconstruction problem of a third-order image tensor. An effective algorithm is developed to search for the optimal solution of the proposed cost function, and the images are reconstructed via a batching pattern. The feasibility and effectiveness of the developed reconstruction method are numerically validated.
Das, Anup; Sampson, Aaron L.; Lainscsek, Claudia; Muller, Lyle; Lin, Wutu; Doyle, John C.; Cash, Sydney S.; Halgren, Eric; Sejnowski, Terrence J.
2017-01-01
The correlation method from brain imaging has been used to estimate functional connectivity in the human brain. However, brain regions might show very high correlation even when the two regions are not directly connected due to the strong interaction of the two regions with common input from a third region. One previously proposed solution to this problem is to use a sparse regularized inverse covariance matrix or precision matrix (SRPM) assuming that the connectivity structure is sparse. This method yields partial correlations to measure strong direct interactions between pairs of regions while simultaneously removing the influence of the rest of the regions, thus identifying regions that are conditionally independent. To test our methods, we first demonstrated conditions under which the SRPM method could indeed find the true physical connection between a pair of nodes for a spring-mass example and an RC circuit example. The recovery of the connectivity structure using the SRPM method can be explained by energy models using the Boltzmann distribution. We then demonstrated the application of the SRPM method for estimating brain connectivity during stage 2 sleep spindles from human electrocorticography (ECoG) recordings using an 8 × 8 electrode array. The ECoG recordings that we analyzed were from a 32-year-old male patient with long-standing pharmaco-resistant left temporal lobe complex partial epilepsy. Sleep spindles were automatically detected using delay differential analysis and then analyzed with SRPM and the Louvain method for community detection. We found spatially localized brain networks within and between neighboring cortical areas during spindles, in contrast to the case when sleep spindles were not present. PMID:28095202
Large-region acoustic source mapping using a movable array and sparse covariance fitting.
Zhao, Shengkui; Tuna, Cagdas; Nguyen, Thi Ngoc Tho; Jones, Douglas L
2017-01-01
Large-region acoustic source mapping is important for city-scale noise monitoring. Approaches using a single-position measurement scheme to scan large regions using small arrays cannot provide clean acoustic source maps, while deploying large arrays spanning the entire region of interest is prohibitively expensive. A multiple-position measurement scheme is applied to scan large regions at multiple spatial positions using a movable array of small size. Based on the multiple-position measurement scheme, a sparse-constrained multiple-position vectorized covariance matrix fitting approach is presented. In the proposed approach, the overall sample covariance matrix of the incoherent virtual array is first estimated using the multiple-position array data and then vectorized using the Khatri-Rao (KR) product. A linear model is then constructed for fitting the vectorized covariance matrix and a sparse-constrained reconstruction algorithm is proposed for recovering source powers from the model. The user parameter settings are discussed. The proposed approach is tested on a 30 m × 40 m region and a 60 m × 40 m region using simulated and measured data. Much cleaner acoustic source maps and lower sound pressure level errors are obtained compared to the beamforming approaches and the previous sparse approach [Zhao, Tuna, Nguyen, and Jones, Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Processing (ICASSP) (2016)].
Markov Chain Monte Carlo Inference of Parametric Dictionaries for Sparse Bayesian Approximations
Chaspari, Theodora; Tsiartas, Andreas; Tsilifis, Panagiotis; Narayanan, Shrikanth
2016-01-01
Parametric dictionaries can increase the ability of sparse representations to meaningfully capture and interpret the underlying signal information, such as encountered in biomedical problems. Given a mapping function from the atom parameter space to the actual atoms, we propose a sparse Bayesian framework for learning the atom parameters, because of its ability to provide full posterior estimates, take uncertainty into account and generalize on unseen data. Inference is performed with Markov Chain Monte Carlo, that uses block sampling to generate the variables of the Bayesian problem. Since the parameterization of dictionary atoms results in posteriors that cannot be analytically computed, we use a Metropolis-Hastings-within-Gibbs framework, according to which variables with closed-form posteriors are generated with the Gibbs sampler, while the remaining ones with the Metropolis Hastings from appropriate candidate-generating densities. We further show that the corresponding Markov Chain is uniformly ergodic ensuring its convergence to a stationary distribution independently of the initial state. Results on synthetic data and real biomedical signals indicate that our approach offers advantages in terms of signal reconstruction compared to previously proposed Steepest Descent and Equiangular Tight Frame methods. This paper demonstrates the ability of Bayesian learning to generate parametric dictionaries that can reliably represent the exemplar data and provides the foundation towards inferring the entire variable set of the sparse approximation problem for signal denoising, adaptation and other applications. PMID:28649173
NASA Astrophysics Data System (ADS)
Song, Xiaoning; Feng, Zhen-Hua; Hu, Guosheng; Yang, Xibei; Yang, Jingyu; Qi, Yunsong
2015-09-01
This paper proposes a progressive sparse representation-based classification algorithm using local discrete cosine transform (DCT) evaluation to perform face recognition. Specifically, the sum of the contributions of all training samples of each subject is first taken as the contribution of this subject, then the redundant subject with the smallest contribution to the test sample is iteratively eliminated. Second, the progressive method aims at representing the test sample as a linear combination of all the remaining training samples, by which the representation capability of each training sample is exploited to determine the optimal "nearest neighbors" for the test sample. Third, the transformed DCT evaluation is constructed to measure the similarity between the test sample and each local training sample using cosine distance metrics in the DCT domain. The final goal of the proposed method is to determine an optimal weighted sum of nearest neighbors that are obtained under the local correlative degree evaluation, which is approximately equal to the test sample, and we can use this weighted linear combination to perform robust classification. Experimental results conducted on the ORL database of faces (created by the Olivetti Research Laboratory in Cambridge), the FERET face database (managed by the Defense Advanced Research Projects Agency and the National Institute of Standards and Technology), AR face database (created by Aleix Martinez and Robert Benavente in the Computer Vision Center at U.A.B), and USPS handwritten digit database (gathered at the Center of Excellence in Document Analysis and Recognition at SUNY Buffalo) demonstrate the effectiveness of the proposed method.
2014-06-17
100 0 2 4 Wigner distribution 0 50 100 0 0.5 1 Auto-correlation function 0 50 100 0 2 4 L- Wigner distribution 0 50 100 0 0.5 1 Auto-correlation function ...bilinear or higher order autocorrelation functions will increase the number of missing samples, the analysis shows that accurate instantaneous...frequency estimation can be achieved even if we deal with only few samples, as long as the auto-correlation function is properly chosen to coincide with
NASA Astrophysics Data System (ADS)
Yenier, E.; Baturan, D.; Karimi, S.
2016-12-01
Monitoring of seismicity related to oil and gas operations is routinely performed nowadays using a number of different surface and downhole seismic array configurations and technologies. Here, we provide a hydraulic fracture (HF) monitoring case study that compares the data set generated by a sparse local surface network of broadband seismometers to a data set generated by a single downhole geophone string. Our data was collected during a 5-day single-well HF operation, by a temporary surface network consisting of 10 stations deployed within 5 km of the production well. The downhole data was recorded by a 20 geophone string deployed in an observation well located 15 m from the production well. Surface network data processing included standard STA/LTA event triggering enhanced by template-matching subspace detection, grid search locations which was improved using the double-differencing re-location technique, as well as Richter (ML) and moment (Mw) magnitude computations for all detected events. In addition, moment tensors were computed from first motion polarities and amplitudes for the subset of highest SNR events. The resulting surface event catalog shows a very weak spatio-temporal correlation to HF operations with only 43% of recorded seismicity occurring during HF stages times. This along with source mechanisms shows that the surface-recorded seismicity delineates the activation of several pre-existing structures striking NNE-SSW and consistent with regional stress conditions as indicated by the orientation of SHmax. Comparison of the sparse-surface and single downhole string datasets allows us to perform a cost-benefit analysis of the two monitoring methods. Our findings show that although the downhole array recorded ten times as many events, the surface network provides a more coherent delineation of the underlying structure and more accurate magnitudes for larger magnitude events. We attribute this to the enhanced focal coverage provided by the surface network and the use of broadband instrumentation. The results indicate that sparse surface networks of high quality instruments can provide rich and reliable datasets for evaluation of the impact and effectiveness of hydraulic fracture operations in regions with favorable surface noise, local stress and attenuation characteristics.
Behar, Vera; Adam, Dan
2005-12-01
An effective aperture approach is used for optimization of a sparse synthetic transmit aperture (STA) imaging system with coded excitation and frequency division. A new two-stage algorithm is proposed for optimization of both the positions of the transmit elements and the weights of the receive elements. In order to increase the signal-to-noise ratio in a synthetic aperture system, temporal encoding of the excitation signals is employed. When comparing the excitation by linear frequency modulation (LFM) signals and phase shift key modulation (PSKM) signals, the analysis shows that chirps are better for excitation, since at the output of a compression filter the sidelobes generated are much smaller than those produced by the binary PSKM signals. Here, an implementation of a fast STA imaging is studied by spatial encoding with frequency division of the LFM signals. The proposed system employs a 64-element array with only four active elements used during transmit. The two-dimensional point spread function (PSF) produced by such a sparse STA system is compared to the PSF produced by an equivalent phased array system, using the Field II simulation program. The analysis demonstrates the superiority of the new sparse STA imaging system while using coded excitation and frequency division. Compared to a conventional phased array imaging system, this system acquires images of equivalent quality 60 times faster, when the transmit elements are fired in pairs consecutively and the power level used during transmit is very low. The fastest acquisition time is achieved when all transmit elements are fired simultaneously, which improves detectability, but at the cost of a slight degradation of the axial resolution. In real-time implementation, however, it must be borne in mind that the frame rate of a STA imaging system depends not only on the acquisition time of the data but also on the processing time needed for image reconstruction. Comparing to phased array imaging, a significant increase in the frame rate of a STA imaging system is possible if and only if an equivalent time efficient algorithm is used for image reconstruction.
NASA Astrophysics Data System (ADS)
Kibler, K. M.; Alipour, M.
2017-12-01
Diversion hydropower has been shown to significantly alter river flow regimes by dewatering diversion bypass reaches. Data scarcity is one of the foremost challenges to establishing environmental flow regimes below diversion hydropower dams, especially in regions of sparse hydro-meteorological observation. Herein, we test two prediction strategies for generating daily flows in rivers developed with diversion hydropower: a catchment similarity model, and a rainfall-runoff model selected by multi-objective optimization based on soft data. While both methods are designed for ungauged rivers embedded within large regions of sparse hydrologic observation, one is more complex and computationally-intensive. The objective of this study is to assess the benefit of using complex modeling tools in data-sparse landscapes to support design of environmental flow regimes. Models were tested in gauged catchments and then used to simulate a 28-year record of daily flows in 32 ungauged rivers. After perturbing flows with the hydropower diversion, we detect alteration using Indicators of Hydrologic Alteration (IHA) metrics and compare outcomes of the two modeling approaches. The catchment similarity model simulates low flows well (Nash-Sutcliff efficiency (NSE) = 0.91), but poorly represents moderate to high flows (overall NSE = 0.25). The multi-objective rainfall-runoff model performs well overall (NSE = 0.72). Both models agree that flow magnitudes and variability consistently decrease following diversion as temporally-dynamic flows are replaced by static minimal flows. Mean duration of events sustained below the pre-diversion Q75 and mean hydrograph rise and fall rates increase. While we see broad areas of agreement, significant effects and thresholds vary between models, particularly in the representation of moderate flows. Thus, use of simplified streamflow models may bias detected alterations or inadequately characterize pre-regulation flow regimes, providing inaccurate information as a basis for flow regime design. As an alternative, the multi-objective framework can be applied globally, and is robust to common challenges of flow prediction in ungauged rivers, such as equifinality and hydrologic dissimilarity of reference catchments.
NASA Astrophysics Data System (ADS)
Bilionis, I.; Koutsourelakis, P. S.
2012-05-01
The present paper proposes an adaptive biasing potential technique for the computation of free energy landscapes. It is motivated by statistical learning arguments and unifies the tasks of biasing the molecular dynamics to escape free energy wells and estimating the free energy function, under the same objective of minimizing the Kullback-Leibler divergence between appropriately selected densities. It offers rigorous convergence diagnostics even though history dependent, non-Markovian dynamics are employed. It makes use of a greedy optimization scheme in order to obtain sparse representations of the free energy function which can be particularly useful in multidimensional cases. It employs embarrassingly parallelizable sampling schemes that are based on adaptive Sequential Monte Carlo and can be readily coupled with legacy molecular dynamics simulators. The sequential nature of the learning and sampling scheme enables the efficient calculation of free energy functions parametrized by the temperature. The characteristics and capabilities of the proposed method are demonstrated in three numerical examples.
Array signal recovery algorithm for a single-RF-channel DBF array
NASA Astrophysics Data System (ADS)
Zhang, Duo; Wu, Wen; Fang, Da Gang
2016-12-01
An array signal recovery algorithm based on sparse signal reconstruction theory is proposed for a single-RF-channel digital beamforming (DBF) array. A single-RF-channel antenna array is a low-cost antenna array in which signals are obtained from all antenna elements by only one microwave digital receiver. The spatially parallel array signals are converted into time-sequence signals, which are then sampled by the system. The proposed algorithm uses these time-sequence samples to recover the original parallel array signals by exploiting the second-order sparse structure of the array signals. Additionally, an optimization method based on the artificial bee colony (ABC) algorithm is proposed to improve the reconstruction performance. Using the proposed algorithm, the motion compensation problem for the single-RF-channel DBF array can be solved effectively, and the angle and Doppler information for the target can be simultaneously estimated. The effectiveness of the proposed algorithms is demonstrated by the results of numerical simulations.
Recording 13C-15N HMQC 2D sparse spectra in solids in 30 s
NASA Astrophysics Data System (ADS)
Kupče, Ēriks; Trébosc, Julien; Perrone, Barbara; Lafon, Olivier; Amoureux, Jean-Paul
2018-03-01
We propose a dipolar HMQC Hadamard-encoded (D-HMQC-Hn) experiment for fast 2D correlations of abundant nuclei in solids. The main limitation of the Hadamard methods resides in the length of the encoding pulses, which results from a compromise between the selectivity and the sensitivity due to losses. For this reason, these methods should mainly be used with sparse spectra, and they profit from the increased separation of the resonances at high magnetic fields. In the case of the D-HMQC-Hn experiments, we give a simple rule that allows directly setting the optimum length of the selective pulses, versus the minimum separation of the resonances in the indirect dimension. The demonstration has been performed on a fully 13C,15N labelled f-MLF sample, and it allowed recording the build-up curves of the 13C-15N cross-peaks within 10 min. However, the method could also be used in the case of less sensitive samples, but with more accumulations.
Tomescu, M I; Rihs, T A; Rochas, V; Hardmeier, M; Britz, J; Allali, G; Fuhr, P; Eliez, S; Michel, C M
2018-06-01
While many insights on brain development and aging have been gained by studying resting-state networks with fMRI, relating these changes to cognitive functions is limited by the temporal resolution of fMRI. In order to better grasp short-lasting and dynamically changing mental activities, an increasing number of studies utilize EEG to define resting-state networks, thereby often using the concept of EEG microstates. These are brief (around 100 ms) periods of stable scalp potential fields that are influenced by cognitive states and are sensitive to neuropsychiatric diseases. Despite the rising popularity of the EEG microstate approach, information about age changes is sparse and nothing is known about sex differences. Here we investigated age and sex related changes of the temporal dynamics of EEG microstates in 179 healthy individuals (6-87 years old, 90 females, 204-channel EEG). We show strong sex-specific changes in microstate dynamics during adolescence as well as at older age. In addition, males and females differ in the duration and occurrence of specific microstates. These results are of relevance for the comparison of studies in populations of different age and sex and for the understanding of the changes in neuropsychiatric diseases. Copyright © 2018 The Authors. Published by Elsevier Ltd.. All rights reserved.
ACHIEVING CONSISTENT DOPPLER MEASUREMENTS FROM SDO /HMI VECTOR FIELD INVERSIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schuck, Peter W.; Antiochos, S. K.; Leka, K. D.
NASA’s Solar Dynamics Observatory is delivering vector magnetic field observations of the full solar disk with unprecedented temporal and spatial resolution; however, the satellite is in a highly inclined geosynchronous orbit. The relative spacecraft–Sun velocity varies by ±3 km s{sup −1} over a day, which introduces major orbital artifacts in the Helioseismic Magnetic Imager (HMI) data. We demonstrate that the orbital artifacts contaminate all spatial and temporal scales in the data. We describe a newly developed three-stage procedure for mitigating these artifacts in the Doppler data obtained from the Milne–Eddington inversions in the HMI pipeline. The procedure ultimately uses 32more » velocity-dependent coefficients to adjust 10 million pixels—a remarkably sparse correction model given the complexity of the orbital artifacts. This procedure was applied to full-disk images of AR 11084 to produce consistent Dopplergrams. The data adjustments reduce the power in the orbital artifacts by 31 dB. Furthermore, we analyze in detail the corrected images and show that our procedure greatly improves the temporal and spectral properties of the data without adding any new artifacts. We conclude that this new procedure makes a dramatic improvement in the consistency of the HMI data and in its usefulness for precision scientific studies.« less
Temporal Restricted Visual Tracking Via Reverse-Low-Rank Sparse Learning.
Yang, Yehui; Hu, Wenrui; Xie, Yuan; Zhang, Wensheng; Zhang, Tianzhu
2017-02-01
An effective representation model, which aims to mine the most meaningful information in the data, plays an important role in visual tracking. Some recent particle-filter-based trackers achieve promising results by introducing the low-rank assumption into the representation model. However, their assumed low-rank structure of candidates limits the robustness when facing severe challenges such as abrupt motion. To avoid the above limitation, we propose a temporal restricted reverse-low-rank learning algorithm for visual tracking with the following advantages: 1) the reverse-low-rank model jointly represents target and background templates via candidates, which exploits the low-rank structure among consecutive target observations and enforces the temporal consistency of target in a global level; 2) the appearance consistency may be broken when target suffers from sudden changes. To overcome this issue, we propose a local constraint via l 1,2 mixed-norm, which can not only ensures the local consistency of target appearance, but also tolerates the sudden changes between two adjacent frames; and 3) to alleviate the inference of unreasonable representation values due to outlier candidates, an adaptive weighted scheme is designed to improve the robustness of the tracker. By evaluating on 26 challenge video sequences, the experiments show the effectiveness and favorable performance of the proposed algorithm against 12 state-of-the-art visual trackers.
2010-01-01
Background Nonparametric Bayesian techniques have been developed recently to extend the sophistication of factor models, allowing one to infer the number of appropriate factors from the observed data. We consider such techniques for sparse factor analysis, with application to gene-expression data from three virus challenge studies. Particular attention is placed on employing the Beta Process (BP), the Indian Buffet Process (IBP), and related sparseness-promoting techniques to infer a proper number of factors. The posterior density function on the model parameters is computed using Gibbs sampling and variational Bayesian (VB) analysis. Results Time-evolving gene-expression data are considered for respiratory syncytial virus (RSV), Rhino virus, and influenza, using blood samples from healthy human subjects. These data were acquired in three challenge studies, each executed after receiving institutional review board (IRB) approval from Duke University. Comparisons are made between several alternative means of per-forming nonparametric factor analysis on these data, with comparisons as well to sparse-PCA and Penalized Matrix Decomposition (PMD), closely related non-Bayesian approaches. Conclusions Applying the Beta Process to the factor scores, or to the singular values of a pseudo-SVD construction, the proposed algorithms infer the number of factors in gene-expression data. For real data the "true" number of factors is unknown; in our simulations we consider a range of noise variances, and the proposed Bayesian models inferred the number of factors accurately relative to other methods in the literature, such as sparse-PCA and PMD. We have also identified a "pan-viral" factor of importance for each of the three viruses considered in this study. We have identified a set of genes associated with this pan-viral factor, of interest for early detection of such viruses based upon the host response, as quantified via gene-expression data. PMID:21062443
Jiang, Xi; Li, Xiang; Lv, Jinglei; Zhao, Shijie; Zhang, Shu; Zhang, Wei; Zhang, Tuo; Han, Junwei; Guo, Lei; Liu, Tianming
2018-06-01
Various studies in the brain mapping field have demonstrated that there exist multiple concurrent functional networks that are spatially overlapped and interacting with each other during specific task performance to jointly realize the total brain function. Assessing such spatial overlap patterns of functional networks (SOPFNs) based on functional magnetic resonance imaging (fMRI) has thus received increasing interest for brain function studies. However, there are still two crucial issues to be addressed. First, the SOPFNs are assessed over the entire fMRI scan assuming the temporal stationarity, while possibly time-dependent dynamics of the SOPFNs is not sufficiently explored. Second, the SOPFNs are assessed within individual subjects, while group-wise consistency of the SOPFNs is largely unknown. To address the two issues, we propose a novel computational framework of group-wise sparse representation of whole-brain fMRI temporal segments to assess the temporal dynamic spatial patterns of SOPFNs that are consistent across different subjects. Experimental results based on the recently publicly released Human Connectome Project grayordinate task fMRI data demonstrate that meaningful SOPFNs exhibiting dynamic spatial patterns across different time periods are effectively and robustly identified based on the reconstructed concurrent functional networks via the proposed framework. Specifically, those SOPFNs locate significantly more on gyral regions than on sulcal regions across different time periods. These results reveal novel functional architecture of cortical gyri and sulci. Moreover, these results help better understand functional dynamics mechanisms of cerebral cortex in the future.
Modelling daily PM2.5 concentrations at high spatio-temporal resolution across Switzerland.
de Hoogh, Kees; Héritier, Harris; Stafoggia, Massimo; Künzli, Nino; Kloog, Itai
2018-02-01
Spatiotemporal resolved models were developed predicting daily fine particulate matter (PM 2.5 ) concentrations across Switzerland from 2003 to 2013. Relatively sparse PM 2.5 monitoring data was supplemented by imputing PM 2.5 concentrations at PM 10 sites, using PM 2.5 /PM 10 ratios at co-located sites. Daily PM 2.5 concentrations were first estimated at a 1 × 1km resolution across Switzerland, using Multiangle Implementation of Atmospheric Correction (MAIAC) spectral aerosol optical depth (AOD) data in combination with spatiotemporal predictor data in a four stage approach. Mixed effect models (1) were used to predict PM 2.5 in cells with AOD but without PM 2.5 measurements (2). A generalized additive mixed model with spatial smoothing was applied to generate grid cell predictions for those grid cells where AOD was missing (3). Finally, local PM 2.5 predictions were estimated at each monitoring site by regressing the residuals from the 1 × 1km estimate against local spatial and temporal variables using machine learning techniques (4) and adding them to the stage 3 global estimates. The global (1 km) and local (100 m) models explained on average 73% of the total,71% of the spatial and 75% of the temporal variation (all cross validated) globally and on average 89% (total) 95% (spatial) and 88% (temporal) of the variation locally in measured PM 2.5 concentrations. Copyright © 2017 Elsevier Ltd. All rights reserved.
Lan, Ti-Yen; Wierman, Jennifer L.; Tate, Mark W.; Philipp, Hugh T.; Elser, Veit
2017-01-01
Recently, there has been a growing interest in adapting serial microcrystallography (SMX) experiments to existing storage ring (SR) sources. For very small crystals, however, radiation damage occurs before sufficient numbers of photons are diffracted to determine the orientation of the crystal. The challenge is to merge data from a large number of such ‘sparse’ frames in order to measure the full reciprocal space intensity. To simulate sparse frames, a dataset was collected from a large lysozyme crystal illuminated by a dim X-ray source. The crystal was continuously rotated about two orthogonal axes to sample a subset of the rotation space. With the EMC algorithm [expand–maximize–compress; Loh & Elser (2009). Phys. Rev. E, 80, 026705], it is shown that the diffracted intensity of the crystal can still be reconstructed even without knowledge of the orientation of the crystal in any sparse frame. Moreover, parallel computation implementations were designed to considerably improve the time and memory scaling of the algorithm. The results show that EMC-based SMX experiments should be feasible at SR sources. PMID:28808431
NASA Astrophysics Data System (ADS)
Bentaieb, Samia; Ouamri, Abdelaziz; Nait-Ali, Amine; Keche, Mokhtar
2018-01-01
We propose and evaluate a three-dimensional (3D) face recognition approach that applies the speeded up robust feature (SURF) algorithm to the depth representation of shape index map, under real-world conditions, using only a single gallery sample for each subject. First, the 3D scans are preprocessed, then SURF is applied on the shape index map to find interest points and their descriptors. Each 3D face scan is represented by keypoints descriptors, and a large dictionary is built from all the gallery descriptors. At the recognition step, descriptors of a probe face scan are sparsely represented by the dictionary. A multitask sparse representation classification is used to determine the identity of each probe face. The feasibility of the approach that uses the SURF algorithm on the shape index map for face identification/authentication is checked through an experimental investigation conducted on Bosphorus, University of Milano Bicocca, and CASIA 3D datasets. It achieves an overall rank one recognition rate of 97.75%, 80.85%, and 95.12%, respectively, on these datasets.
Discriminative Bayesian Dictionary Learning for Classification.
Akhtar, Naveed; Shafait, Faisal; Mian, Ajmal
2016-12-01
We propose a Bayesian approach to learn discriminative dictionaries for sparse representation of data. The proposed approach infers probability distributions over the atoms of a discriminative dictionary using a finite approximation of Beta Process. It also computes sets of Bernoulli distributions that associate class labels to the learned dictionary atoms. This association signifies the selection probabilities of the dictionary atoms in the expansion of class-specific data. Furthermore, the non-parametric character of the proposed approach allows it to infer the correct size of the dictionary. We exploit the aforementioned Bernoulli distributions in separately learning a linear classifier. The classifier uses the same hierarchical Bayesian model as the dictionary, which we present along the analytical inference solution for Gibbs sampling. For classification, a test instance is first sparsely encoded over the learned dictionary and the codes are fed to the classifier. We performed experiments for face and action recognition; and object and scene-category classification using five public datasets and compared the results with state-of-the-art discriminative sparse representation approaches. Experiments show that the proposed Bayesian approach consistently outperforms the existing approaches.
McCoy, Alene T; Bartels, Michael J; Rick, David L; Saghir, Shakil A
2012-07-01
TK Modeler 1.0 is a Microsoft® Excel®-based pharmacokinetic (PK) modeling program created to aid in the design of toxicokinetic (TK) studies. TK Modeler 1.0 predicts the diurnal blood/plasma concentrations of a test material after single, multiple bolus or dietary dosing using known PK information. Fluctuations in blood/plasma concentrations based on test material kinetics are calculated using one- or two-compartment PK model equations and the principle of superposition. This information can be utilized for the determination of appropriate dosing regimens based on reaching a specific desired C(max), maintaining steady-state blood/plasma concentrations, or other exposure target. This program can also aid in the selection of sampling times for accurate calculation of AUC(24h) (diurnal area under the blood concentration time curve) using sparse-sampling methodologies (one, two or three samples). This paper describes the construction, use and validation of TK Modeler. TK Modeler accurately predicted blood/plasma concentrations of test materials and provided optimal sampling times for the calculation of AUC(24h) with improved accuracy using sparse-sampling methods. TK Modeler is therefore a validated, unique and simple modeling program that can aid in the design of toxicokinetic studies. Copyright © 2012 Elsevier Inc. All rights reserved.
Structure-Preserving Color Normalization and Sparse Stain Separation for Histological Images.
Vahadane, Abhishek; Peng, Tingying; Sethi, Amit; Albarqouni, Shadi; Wang, Lichao; Baust, Maximilian; Steiger, Katja; Schlitter, Anna Melissa; Esposito, Irene; Navab, Nassir
2016-08-01
Staining and scanning of tissue samples for microscopic examination is fraught with undesirable color variations arising from differences in raw materials and manufacturing techniques of stain vendors, staining protocols of labs, and color responses of digital scanners. When comparing tissue samples, color normalization and stain separation of the tissue images can be helpful for both pathologists and software. Techniques that are used for natural images fail to utilize structural properties of stained tissue samples and produce undesirable color distortions. The stain concentration cannot be negative. Tissue samples are stained with only a few stains and most tissue regions are characterized by at most one effective stain. We model these physical phenomena that define the tissue structure by first decomposing images in an unsupervised manner into stain density maps that are sparse and non-negative. For a given image, we combine its stain density maps with stain color basis of a pathologist-preferred target image, thus altering only its color while preserving its structure described by the maps. Stain density correlation with ground truth and preference by pathologists were higher for images normalized using our method when compared to other alternatives. We also propose a computationally faster extension of this technique for large whole-slide images that selects an appropriate patch sample instead of using the entire image to compute the stain color basis.
Okimoto, Gordon; Zeinalzadeh, Ashkan; Wenska, Tom; Loomis, Michael; Nation, James B; Fabre, Tiphaine; Tiirikainen, Maarit; Hernandez, Brenda; Chan, Owen; Wong, Linda; Kwee, Sandi
2016-01-01
Technological advances enable the cost-effective acquisition of Multi-Modal Data Sets (MMDS) composed of measurements for multiple, high-dimensional data types obtained from a common set of bio-samples. The joint analysis of the data matrices associated with the different data types of a MMDS should provide a more focused view of the biology underlying complex diseases such as cancer that would not be apparent from the analysis of a single data type alone. As multi-modal data rapidly accumulate in research laboratories and public databases such as The Cancer Genome Atlas (TCGA), the translation of such data into clinically actionable knowledge has been slowed by the lack of computational tools capable of analyzing MMDSs. Here, we describe the Joint Analysis of Many Matrices by ITeration (JAMMIT) algorithm that jointly analyzes the data matrices of a MMDS using sparse matrix approximations of rank-1. The JAMMIT algorithm jointly approximates an arbitrary number of data matrices by rank-1 outer-products composed of "sparse" left-singular vectors (eigen-arrays) that are unique to each matrix and a right-singular vector (eigen-signal) that is common to all the matrices. The non-zero coefficients of the eigen-arrays identify small subsets of variables for each data type (i.e., signatures) that in aggregate, or individually, best explain a dominant eigen-signal defined on the columns of the data matrices. The approximation is specified by a single "sparsity" parameter that is selected based on false discovery rate estimated by permutation testing. Multiple signals of interest in a given MDDS are sequentially detected and modeled by iterating JAMMIT on "residual" data matrices that result from a given sparse approximation. We show that JAMMIT outperforms other joint analysis algorithms in the detection of multiple signatures embedded in simulated MDDS. On real multimodal data for ovarian and liver cancer we show that JAMMIT identified multi-modal signatures that were clinically informative and enriched for cancer-related biology. Sparse matrix approximations of rank-1 provide a simple yet effective means of jointly reducing multiple, big data types to a small subset of variables that characterize important clinical and/or biological attributes of the bio-samples from which the data were acquired.
Local sparse bump hunting reveals molecular heterogeneity of colon tumors‡
Dazard, Jean-Eudes; Rao, J. Sunil; Markowitz, Sanford
2013-01-01
The question of molecular heterogeneity and of tumoral phenotype in cancer remains unresolved. To understand the underlying molecular basis of this phenomenon, we analyzed genome-wide expression data of colon cancer metastasis samples, as these tumors are the most advanced and hence would be anticipated to be the most likely heterogeneous group of tumors, potentially exhibiting the maximum amount of genetic heterogeneity. Casting a statistical net around such a complex problem proves difficult because of the high dimensionality and multi-collinearity of the gene expression space, combined with the fact that genes act in concert with one another and that not all genes surveyed might be involved. We devise a strategy to identify distinct subgroups of samples and determine the genetic/molecular signature that defines them. This involves use of the local sparse bump hunting algorithm, which provides a much more optimal and biologically faithful transformed space within which to search for bumps. In addition, thanks to the variable selection feature of the algorithm, we derived a novel sparse gene expression signature, which appears to divide all colon cancer patients into two populations: a population whose expression pattern can be molecularly encompassed within the bump and an outlier population that cannot be. Although all patients within any given stage of the disease, including the metastatic group, appear clinically homogeneous, our procedure revealed two subgroups in each stage with distinct genetic/molecular profiles. We also discuss implications of such a finding in terms of early detection, diagnosis and prognosis. PMID:22052459
Vegetation dynamics and responses to climate change and human activities in Central Asia.
Jiang, Liangliang; Guli Jiapaer; Bao, Anming; Guo, Hao; Ndayisaba, Felix
2017-12-01
Knowledge of the current changes and dynamics of different types of vegetation in relation to climatic changes and anthropogenic activities is critical for developing adaptation strategies to address the challenges posed by climate change and human activities for ecosystems. Based on a regression analysis and the Hurst exponent index method, this research investigated the spatial and temporal characteristics and relationships between vegetation greenness and climatic factors in Central Asia using the Normalized Difference Vegetation Index (NDVI) and gridded high-resolution station (land) data for the period 1984-2013. Further analysis distinguished between the effects of climatic change and those of human activities on vegetation dynamics by means of a residual analysis trend method. The results show that vegetation pixels significantly decreased for shrubs and sparse vegetation compared with those for the other vegetation types and that the degradation of sparse vegetation was more serious in the Karakum and Kyzylkum Deserts, the Ustyurt Plateau and the wetland delta of the Large Aral Sea than in other regions. The Hurst exponent results indicated that forests are more sustainable than grasslands, shrubs and sparse vegetation. Precipitation is the main factor affecting vegetation growth in the Kazakhskiy Melkosopochnik. Moreover, temperature is a controlling factor that influences the seasonal variation of vegetation greenness in the mountains and the Aral Sea basin. Drought is the main factor affecting vegetation degradation as a result of both increased temperature and decreased precipitation in the Kyzylkum Desert and the northern Ustyurt Plateau. The residual analysis highlighted that sparse vegetation and the degradation of some shrubs in the southern part of the Karakum Desert, the southern Ustyurt Plateau and the wetland delta of the Large Aral Sea were mainly triggered by human activities: the excessive exploitation of water resources in the upstream areas of the Amu Darya basin and oil and natural gas extraction in the southern part of the Karakum Desert and the southern Ustyurt Plateau. The results also indicated that after the collapse of the Soviet Union, abandoned pastures gave rise to increased vegetation in eastern Kazakhstan, Kyrgyzstan and Tajikistan, and abandoned croplands reverted to grasslands in northern Kazakhstan, leading to a decrease in cropland greenness. Shrubs and sparse vegetation were extremely sensitive to short-term climatic variations, and our results demonstrated that these vegetation types were the most seriously degraded by human activities. Therefore, regional governments should strive to restore vegetation to sustain this fragile arid ecological environment. Copyright © 2017 Elsevier B.V. All rights reserved.
Lesot, Philippe; Kazimierczuk, Krzysztof; Trébosc, Julien; Amoureux, Jean-Paul; Lafon, Olivier
2015-11-01
Unique information about the atom-level structure and dynamics of solids and mesophases can be obtained by the use of multidimensional nuclear magnetic resonance (NMR) experiments. Nevertheless, the acquisition of these experiments often requires long acquisition times. We review here alternative sampling methods, which have been proposed to circumvent this issue in the case of solids and mesophases. Compared to the spectra of solutions, those of solids and mesophases present some specificities because they usually display lower signal-to-noise ratios, non-Lorentzian line shapes, lower spectral resolutions and wider spectral widths. We highlight herein the advantages and limitations of these alternative sampling methods. A first route to accelerate the acquisition time of multidimensional NMR spectra consists in the use of sparse sampling schemes, such as truncated, radial or random sampling ones. These sparsely sampled datasets are generally processed by reconstruction methods differing from the Discrete Fourier Transform (DFT). A host of non-DFT methods have been applied for solids and mesophases, including the G-matrix Fourier transform, the linear least-square procedures, the covariance transform, the maximum entropy and the compressed sensing. A second class of alternative sampling consists in departing from the Jeener paradigm for multidimensional NMR experiments. These non-Jeener methods include Hadamard spectroscopy as well as spatial or orientational encoding of the evolution frequencies. The increasing number of high field NMR magnets and the development of techniques to enhance NMR sensitivity will contribute to widen the use of these alternative sampling methods for the study of solids and mesophases in the coming years. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Walker, Bruce E.; Panda, Jayanta; Sutliff, Daniel L.
2008-01-01
External Tank Cable Tray vibration data for three successive Space Shuttle flights were analyzed to assess response to buffet and the effect of removal of the Protuberance Air Loads (PAL) ramp. Waveform integration, spectral analysis, cross-correlation analysis and wavelet analysis were employed to estimate vibration modes and temporal development of vibration motion from a sparse array of accelerometers and an on-board system that acquired 16 channels of data for approximately the first 2 min of each flight. The flight data indicated that PAL ramp removal had minimal effect on the fluctuating loads on the cable tray. The measured vibration frequencies and modes agreed well with predicted structural response.
NASA Technical Reports Server (NTRS)
Walker, B. E.; Panda, B. E.; Sutliff, D. L.
2008-01-01
External Tank Cable Tray vibration data for three successive Space Shuttle flights were analyzed to assess response to buffet and the effect of removal of the Protuberance Air Loads (PAL) ramp. Waveform integration, spectral analysis, cross-correlation analysis and wavelet analysis were employed to estimate vibration modes and temporal development of vibration motion from a sparse array of accelerometers and an on-board system that acquired 16 channels of data for approximately the first two minutes of each flight. The flight data indicated that PAL ramp removal had minimal effect on the fluctuating loads on the cable tray. The measured vibration frequencies and modes agreed well with predicted structural response.
NASA Astrophysics Data System (ADS)
Saha, Abhijit; Vivas, A. Katherina
2017-12-01
Ongoing and future surveys with repeat imaging in multiple bands are producing (or will produce) time-spaced measurements of brightness, resulting in the identification of large numbers of variable sources in the sky. A large fraction of these are periodic variables: compilations of these are of scientific interest for a variety of purposes. Unavoidably, the data sets from many such surveys not only have sparse sampling, but also have embedded frequencies in the observing cadence that beat against the natural periodicities of any object under investigation. Such limitations can make period determination ambiguous and uncertain. For multiband data sets with asynchronous measurements in multiple passbands, we wish to maximally use the information on periodicity in a manner that is agnostic of differences in the light-curve shapes across the different channels. Given large volumes of data, computational efficiency is also at a premium. This paper develops and presents a computationally economic method for determining periodicity that combines the results from two different classes of period-determination algorithms. The underlying principles are illustrated through examples. The effectiveness of this approach for combining asynchronously sampled measurements in multiple observables that share an underlying fundamental frequency is also demonstrated.
MRM-Lasso: A Sparse Multiview Feature Selection Method via Low-Rank Analysis.
Yang, Wanqi; Gao, Yang; Shi, Yinghuan; Cao, Longbing
2015-11-01
Learning about multiview data involves many applications, such as video understanding, image classification, and social media. However, when the data dimension increases dramatically, it is important but very challenging to remove redundant features in multiview feature selection. In this paper, we propose a novel feature selection algorithm, multiview rank minimization-based Lasso (MRM-Lasso), which jointly utilizes Lasso for sparse feature selection and rank minimization for learning relevant patterns across views. Instead of simply integrating multiple Lasso from view level, we focus on the performance of sample-level (sample significance) and introduce pattern-specific weights into MRM-Lasso. The weights are utilized to measure the contribution of each sample to the labels in the current view. In addition, the latent correlation across different views is successfully captured by learning a low-rank matrix consisting of pattern-specific weights. The alternating direction method of multipliers is applied to optimize the proposed MRM-Lasso. Experiments on four real-life data sets show that features selected by MRM-Lasso have better multiview classification performance than the baselines. Moreover, pattern-specific weights are demonstrated to be significant for learning about multiview data, compared with view-specific weights.
Pisharady, Pramod Kumar; Duarte-Carvajalino, Julio M; Sotiropoulos, Stamatios N; Sapiro, Guillermo; Lenglet, Christophe
2017-01-01
The RubiX [1] algorithm combines high SNR characteristics of low resolution data with high spacial specificity of high resolution data, to extract microstructural tissue parameters from diffusion MRI. In this paper we focus on estimating crossing fiber orientations and introduce sparsity to the RubiX algorithm, making it suitable for reconstruction from compressed (under-sampled) data. We propose a sparse Bayesian algorithm for estimation of fiber orientations and volume fractions from compressed diffusion MRI. The data at high resolution is modeled using a parametric spherical deconvolution approach and represented using a dictionary created with the exponential decay components along different possible directions. Volume fractions of fibers along these orientations define the dictionary weights. The data at low resolution is modeled using a spatial partial volume representation. The proposed dictionary representation and sparsity priors consider the dependence between fiber orientations and the spatial redundancy in data representation. Our method exploits the sparsity of fiber orientations, therefore facilitating inference from under-sampled data. Experimental results show improved accuracy and decreased uncertainty in fiber orientation estimates. For under-sampled data, the proposed method is also shown to produce more robust estimates of fiber orientations. PMID:28845484
PERIOD ESTIMATION FOR SPARSELY SAMPLED QUASI-PERIODIC LIGHT CURVES APPLIED TO MIRAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Shiyuan; Huang, Jianhua Z.; Long, James
2016-12-01
We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal for period, we implement a hybrid method that applies the quasi-Newton algorithm for Gaussian process parameters and search the period/frequencymore » parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period–luminosity relations.« less
Pisharady, Pramod Kumar; Duarte-Carvajalino, Julio M; Sotiropoulos, Stamatios N; Sapiro, Guillermo; Lenglet, Christophe
2015-10-01
The RubiX [1] algorithm combines high SNR characteristics of low resolution data with high spacial specificity of high resolution data, to extract microstructural tissue parameters from diffusion MRI. In this paper we focus on estimating crossing fiber orientations and introduce sparsity to the RubiX algorithm, making it suitable for reconstruction from compressed (under-sampled) data. We propose a sparse Bayesian algorithm for estimation of fiber orientations and volume fractions from compressed diffusion MRI. The data at high resolution is modeled using a parametric spherical deconvolution approach and represented using a dictionary created with the exponential decay components along different possible directions. Volume fractions of fibers along these orientations define the dictionary weights. The data at low resolution is modeled using a spatial partial volume representation. The proposed dictionary representation and sparsity priors consider the dependence between fiber orientations and the spatial redundancy in data representation. Our method exploits the sparsity of fiber orientations, therefore facilitating inference from under-sampled data. Experimental results show improved accuracy and decreased uncertainty in fiber orientation estimates. For under-sampled data, the proposed method is also shown to produce more robust estimates of fiber orientations.
Rank preserving sparse learning for Kinect based scene classification.
Tao, Dapeng; Jin, Lianwen; Yang, Zhao; Li, Xuelong
2013-10-01
With the rapid development of the RGB-D sensors and the promptly growing population of the low-cost Microsoft Kinect sensor, scene classification, which is a hard, yet important, problem in computer vision, has gained a resurgence of interest recently. That is because the depth of information provided by the Kinect sensor opens an effective and innovative way for scene classification. In this paper, we propose a new scheme for scene classification, which applies locality-constrained linear coding (LLC) to local SIFT features for representing the RGB-D samples and classifies scenes through the cooperation between a new rank preserving sparse learning (RPSL) based dimension reduction and a simple classification method. RPSL considers four aspects: 1) it preserves the rank order information of the within-class samples in a local patch; 2) it maximizes the margin between the between-class samples on the local patch; 3) the L1-norm penalty is introduced to obtain the parsimony property; and 4) it models the classification error minimization by utilizing the least-squares error minimization. Experiments are conducted on the NYU Depth V1 dataset and demonstrate the robustness and effectiveness of RPSL for scene classification.
The potential for geostationary remote sensing of NO2 to improve weather prediction
NASA Astrophysics Data System (ADS)
Liu, X.; Mizzi, A. P.; Anderson, J. L.; Fung, I. Y.; Cohen, R. C.
2017-12-01
Observations of surface winds remain sparse making it challenging to simulate and predict the weather in circumstances of light winds that are most important for poor air quality. Direct measurements of short-lived chemicals from space might be a solution to this challenge. Here we investigate the application of data assimilation of NO2 columns as will be observed from geostationary orbit to improve predictions and retrospective analysis of surface wind fields. Specifically, synthetic NO2 observations are sampled from a "nature run (NR)" regarded as the true atmosphere. Then NO2 observations are assimilated using EAKF methods into a "control run (CR)" which differs from the NR in the wind field. Wind errors are generated by introducing (1) errors in the initial conditions, (2) creating a model error by using two different formulations for the planetary boundary layer, (3) and by combining both of these effects. Assimilation of NO2 column observations succeeds in reducing wind errors, indicating the prospects for future geostationary atmospheric composition measurements to improve weather forecasting are substantial. We find that due to the temporal heterogeneity of wind errors, the success of this application favors chemical observations of high frequency, such as those from geostationary platform. We also show the potential to improve soil moisture field by assimilating NO2 columns.
The use of resighting data to estimate the rate of population growth of the snail kite in Florida
Dreitz, V.J.; Nichols, J.D.; Hines, J.E.; Bennetts, R.E.; Kitchens, W.M.; DeAngelis, D.L.
2002-01-01
The rate of population growth (lambda) is an important demographic parameter used to assess the viability of a population and to develop management and conservation agendas. We examined the use of resighting data to estimate lambda for the snail kite population in Florida from 1997-2000. The analyses consisted of (1) a robust design approach that derives an estimate of lambda from estimates of population size and (2) the Pradel (1996) temporal symmetry (TSM) approach that directly estimates lambda using an open-population capture-recapture model. Besides resighting data, both approaches required information on the number of unmarked individuals that were sighted during the sampling periods. The point estimates of lambda differed between the robust design and TSM approaches, but the 95% confidence intervals overlapped substantially. We believe the differences may be the result of sparse data and do not indicate the inappropriateness of either modelling technique. We focused on the results of the robust design because this approach provided estimates for all study years. Variation among these estimates was smaller than levels of variation among ad hoc estimates based on previously reported index statistics. We recommend that lambda of snail kites be estimated using capture-resighting methods rather than ad hoc counts.
Jones, Benjamin A; Stanton, Timothy K; Colosi, John A; Gauss, Roger C; Fialkowski, Joseph M; Michael Jech, J
2017-06-01
For horizontal-looking sonar systems operating at mid-frequencies (1-10 kHz), scattering by fish with resonant gas-filled swimbladders can dominate seafloor and surface reverberation at long-ranges (i.e., distances much greater than the water depth). This source of scattering, which can be difficult to distinguish from other sources of scattering in the water column or at the boundaries, can add spatio-temporal variability to an already complex acoustic record. Sparsely distributed, spatially compact fish aggregations were measured in the Gulf of Maine using a long-range broadband sonar with continuous spectral coverage from 1.5 to 5 kHz. Observed echoes, that are at least 15 decibels above background levels in the horizontal-looking sonar data, are classified spectrally by the resonance features as due to swimbladder-bearing fish. Contemporaneous multi-frequency echosounder measurements (18, 38, and 120 kHz) and net samples are used in conjunction with physics-based acoustic models to validate this approach. Furthermore, the fish aggregations are statistically characterized in the long-range data by highly non-Rayleigh distributions of the echo magnitudes. These distributions are accurately predicted by a computationally efficient, physics-based model. The model accounts for beam-pattern and waveguide effects as well as the scattering response of aggregations of fish.
Chang, Li; He, Yuanqing; Yang, Taibao; Du, Jiankuo; Niu, Hewen; Pu, Tao
2014-01-01
Ecological succession itself could be a theoretical reference for ecosystem restoration and reconstruction. Glacier forelands are ideal places for investigating plant succession because there are representative ecological succession records at long temporal scales. Based on field observations and experimental data on the foreland of Baishui number 1 Glacier on Mt. Yulong, the succession and dispersal mechanisms of dominant plant species were examined by using numerical classification and ordination methods. Fifty samples were first classified into nine community types and then into three succession stages. The three succession stages occurred about 9-13, 13-102, and 110-400 years ago, respectively. The earliest succession stage contained the association of Arenaria delavayi + Meconopsis horridula. The middle stage contained the associations of Arenaria delavayi + Kobresia fragilis, Carex capilliformis + Polygonum macrophyllum, Carex kansuensis, and also Pedicularis rupicola. The last stage included the associations of Kobresia fragilis + Carex capilliformis, Kobresia fragilis, Kobresia fragilis + Ligusticum rechingerana, and Kobresia fragilis + Ligusticum sikiangense. The tendency of the succession was from bare land to sparse vegetation and then to alpine meadow. In addition, three modes of dispersal were observed, namely, anemochory, mammalichory, and myrmecochory. The dispersal modes of dominant species in plant succession process were evolved from anemochory to zoochory.
He, Yuanqing; Yang, Taibao; Du, Jiankuo; Niu, Hewen; Pu, Tao
2014-01-01
Ecological succession itself could be a theoretical reference for ecosystem restoration and reconstruction. Glacier forelands are ideal places for investigating plant succession because there are representative ecological succession records at long temporal scales. Based on field observations and experimental data on the foreland of Baishui number 1 Glacier on Mt. Yulong, the succession and dispersal mechanisms of dominant plant species were examined by using numerical classification and ordination methods. Fifty samples were first classified into nine community types and then into three succession stages. The three succession stages occurred about 9–13, 13–102, and 110–400 years ago, respectively. The earliest succession stage contained the association of Arenaria delavayi + Meconopsis horridula. The middle stage contained the associations of Arenaria delavayi + Kobresia fragilis, Carex capilliformis + Polygonum macrophyllum, Carex kansuensis, and also Pedicularis rupicola. The last stage included the associations of Kobresia fragilis + Carex capilliformis, Kobresia fragilis, Kobresia fragilis + Ligusticum rechingerana, and Kobresia fragilis + Ligusticum sikiangense. The tendency of the succession was from bare land to sparse vegetation and then to alpine meadow. In addition, three modes of dispersal were observed, namely, anemochory, mammalichory, and myrmecochory. The dispersal modes of dominant species in plant succession process were evolved from anemochory to zoochory. PMID:25401125
Higher-order neural processing tunes motion neurons to visual ecology in three species of hawkmoths.
Stöckl, A L; O'Carroll, D; Warrant, E J
2017-06-28
To sample information optimally, sensory systems must adapt to the ecological demands of each animal species. These adaptations can occur peripherally, in the anatomical structures of sensory organs and their receptors; and centrally, as higher-order neural processing in the brain. While a rich body of investigations has focused on peripheral adaptations, our understanding is sparse when it comes to central mechanisms. We quantified how peripheral adaptations in the eyes, and central adaptations in the wide-field motion vision system, set the trade-off between resolution and sensitivity in three species of hawkmoths active at very different light levels: nocturnal Deilephila elpenor, crepuscular Manduca sexta , and diurnal Macroglossum stellatarum. Using optical measurements and physiological recordings from the photoreceptors and wide-field motion neurons in the lobula complex, we demonstrate that all three species use spatial and temporal summation to improve visual performance in dim light. The diurnal Macroglossum relies least on summation, but can only see at brighter intensities. Manduca, with large sensitive eyes, relies less on neural summation than the smaller eyed Deilephila , but both species attain similar visual performance at nocturnal light levels. Our results reveal how the visual systems of these three hawkmoth species are intimately matched to their visual ecologies. © 2017 The Author(s).
Chima, Charles C; Salemi, Jason L; Wang, Miranda; Mejia de Grubb, Maria C; Gonzalez, Sandra J; Zoorob, Roger J
2017-11-01
Information on the burden and risk factors for diabetes-depression comorbidity in the US is sparse. We used data from the largest all-payer, nationally-representative inpatient database in the US to estimate the prevalence, temporal trends, and risk factors for comorbid depression among adult diabetic inpatients. We conducted a retrospective analysis using the 2002-2014 Nationwide Inpatient Sample databases. Depression and other comorbidities were identified using ICD-9-CM codes. Logistic regression was used to investigate the association between patient characteristics and depression. The rate of depression among patients with type 2 diabetes increased from 7.6% in 2002 to 15.4% in 2014, while for type 1 diabetes the rate increased from 8.7% in 2002 to 19.6% in 2014. The highest rates of depression were observed among females, non-Hispanic whites, younger patients, and patients with five or more chronic comorbidities. The prevalence of comorbid depression among diabetic inpatients in the US is increasing rapidly. Although some portion of this increase could be explained by the rising prevalence of multimorbidity, increased awareness and likelihood of diagnosis of comorbid depression by physicians and better documentation as a result of the increased adoption of electronic health records likely contributed to this trend. Copyright © 2017 Elsevier Inc. All rights reserved.
Overland Flow Analysis Using Time Series of Suas-Derived Elevation Models
NASA Astrophysics Data System (ADS)
Jeziorska, J.; Mitasova, H.; Petrasova, A.; Petras, V.; Divakaran, D.; Zajkowski, T.
2016-06-01
With the advent of the innovative techniques for generating high temporal and spatial resolution terrain models from Unmanned Aerial Systems (UAS) imagery, it has become possible to precisely map overland flow patterns. Furthermore, the process has become more affordable and efficient through the coupling of small UAS (sUAS) that are easily deployed with Structure from Motion (SfM) algorithms that can efficiently derive 3D data from RGB imagery captured with consumer grade cameras. We propose applying the robust overland flow algorithm based on the path sampling technique for mapping flow paths in the arable land on a small test site in Raleigh, North Carolina. By comparing a time series of five flights in 2015 with the results of a simulation based on the most recent lidar derived DEM (2013), we show that the sUAS based data is suitable for overland flow predictions and has several advantages over the lidar data. The sUAS based data captures preferential flow along tillage and more accurately represents gullies. Furthermore the simulated water flow patterns over the sUAS based terrain models are consistent throughout the year. When terrain models are reconstructed only from sUAS captured RGB imagery, however, water flow modeling is only appropriate in areas with sparse or no vegetation cover.