Sparsely-synchronized brain rhythm in a small-world neural network
NASA Astrophysics Data System (ADS)
Kim, Sang-Yoon; Lim, Woochang
2013-07-01
Sparsely-synchronized cortical rhythms, associated with diverse cognitive functions, have been observed in electric recordings of brain activity. At the population level, cortical rhythms exhibit small-amplitude fast oscillations, while at the cellular level, individual neurons fire stochastically and sparsely at a much lower rate than the population rate. We study the effect of network architecture on sparse synchronization in an inhibitory population of subthreshold Morris-Lecar neurons (which cannot fire spontaneously without noise). Previously, sparse synchronization was found to occur for cases of both global coupling (i.e., regular all-to-all coupling) and random coupling. However, a real neural network is known to be neither regular nor random. Here, we consider sparse Watts-Strogatz small-world networks which interpolate between a regular lattice and a random graph via rewiring. We start from a regular lattice with only short-range connections and then investigate the emergence of sparse synchronization by increasing the rewiring probability p for the short-range connections. For p = 0, the average synaptic path length between pairs of neurons is long; hence, only an unsynchronized population state exists because the global efficiency of information transfer is low. However, as p is increased, long-range connections begin to appear, and globally effective communication between distant neurons becomes available via shorter synaptic paths. Consequently, as p passes a threshold p_th (≈ 0.044), sparsely-synchronized population rhythms emerge. However, with increasing p, longer axon wirings become expensive because of their material and energy costs. At an optimal value p*_DE (≈ 0.24) of the rewiring probability, the ratio of the synchrony degree to the wiring cost is found to become maximal.
In this way, an optimal sparse synchronization is found to occur at a minimal wiring cost in an economic small-world network through trade-off between synchrony and wiring cost.
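The rewiring mechanism described above is easy to sketch. The following toy Watts-Strogatz construction (standard library only; the network size, degree, and rewiring probabilities are illustrative, not the authors' Morris-Lecar setup) shows how a modest rewiring probability collapses the average synaptic path length relative to the p = 0 lattice:

```python
import random
from collections import deque

def watts_strogatz(n, k, p, seed=0):
    """Ring lattice of n nodes with k//2 neighbours on each side, rewired with prob. p."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):
        for j in range(1, k // 2 + 1):
            if rng.random() < p:
                old = (i + j) % n
                new = rng.randrange(n)
                if new != i and new not in adj[i]:
                    # replace the short-range edge by a random long-range shortcut
                    adj[i].discard(old)
                    adj[old].discard(i)
                    adj[i].add(new)
                    adj[new].add(i)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length over reachable node pairs (BFS from every node)."""
    total = cnt = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        cnt += len(dist) - 1
    return total / cnt

L0 = avg_path_length(watts_strogatz(200, 6, 0.0))   # regular lattice: long paths
L1 = avg_path_length(watts_strogatz(200, 6, 0.2))   # a few shortcuts: much shorter paths
```

The drop from L0 to L1 is the small-world effect the abstract invokes: long-range shortcuts make global communication between distant neurons cheap.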
Sparse bursts optimize information transmission in a multiplexed neural code.
Naud, Richard; Sprekeler, Henning
2018-06-22
Many cortical neurons combine the information ascending and descending the cortical hierarchy. In the classical view, this information is combined nonlinearly to give rise to a single firing-rate output, which collapses all input streams into one. We analyze the extent to which neurons can simultaneously represent multiple input streams by using a code that distinguishes spike timing patterns at the level of a neural ensemble. Using computational simulations constrained by experimental data, we show that cortical neurons are well suited to generate such multiplexing. Interestingly, this neural code maximizes information for short and sparse bursts, a regime consistent with in vivo recordings. Neurons can also demultiplex this information, using specific connectivity patterns. The anatomy of the adult mammalian cortex suggests that these connectivity patterns are used by the nervous system to maintain sparse bursting and optimal multiplexing. Contrary to firing-rate coding, our findings indicate that the physiology and anatomy of the cortex may be interpreted as optimizing the transmission of multiple independent signals to different targets. Copyright © 2018 the Author(s). Published by PNAS.
Yu, Renping; Zhang, Han; An, Le; Chen, Xiaobo; Wei, Zhihui; Shen, Dinggang
2017-01-01
Brain functional network analysis has shown great potential in understanding brain functions and also in identifying biomarkers for brain diseases, such as Alzheimer's disease (AD) and its early stage, mild cognitive impairment (MCI). In these applications, accurate construction of a biologically meaningful brain network is critical. Sparse learning has been widely used for brain network construction; however, its l1-norm penalty penalizes each edge of a brain network equally, without considering the original connectivity strength, one of the most important inherent linkwise characteristics. Besides, based on the similarity of the linkwise connectivity, a brain network shows prominent group structure (i.e., sets of edges sharing similar attributes). In this article, we propose a novel brain functional network modeling framework with a "connectivity strength-weighted sparse group constraint." In particular, the network modeling can be optimized by considering both raw connectivity strength and its group structure, without losing the merit of sparsity. Our proposed method is applied to MCI classification, a challenging task for early AD diagnosis. Experimental results based on resting-state functional MRI, from 50 MCI patients and 49 healthy controls, show that our proposed method is more effective (i.e., achieving a significantly higher classification accuracy, 84.8%) than other competing methods (e.g., sparse representation, accuracy = 65.6%). Post hoc inspection of the informative features further shows more biologically meaningful brain functional connectivities obtained by our proposed method. Hum Brain Mapp 38:2370-2383, 2017. © 2017 Wiley Periodicals, Inc. PMID:28150897
The storage capacity of Potts models for semantic memory retrieval
NASA Astrophysics Data System (ADS)
Kropff, Emilio; Treves, Alessandro
2005-08-01
We introduce and analyse a minimal network model of semantic memory in the human brain. The model is a global associative memory structured as a collection of N local modules, each coding a feature which can take S possible values, with a global sparseness a (the average fraction of features describing a concept). We show that, under optimal conditions, the number c_M of modules connected on average to a module can range widely between very sparse connectivity (high dilution, c_M/N → 0) and full connectivity (c_M → N), maintaining a global network storage capacity (the maximum number p_c of stored and retrievable concepts) that scales like p_c ~ c_M S^2 / a, with logarithmic corrections consistent with the constraint that each synapse may store up to a fraction of a bit.
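The scaling law p_c ~ c_M S^2 / a can be exercised with a back-of-the-envelope helper. The prefactor below is an arbitrary O(1) assumption and the logarithmic corrections are ignored; the point is only the relative behaviour under dilution and sparseness:

```python
def potts_capacity(c_m, s, a, coeff=1.0):
    """Estimated storage capacity p_c ~ coeff * c_M * S^2 / a.
    coeff is an unspecified O(1) prefactor (assumption); the paper's
    logarithmic corrections are ignored in this sketch."""
    return coeff * c_m * s ** 2 / a

# 10x fewer connections per module -> ~10x fewer retrievable concepts,
# while sparser coding (smaller a) raises capacity
dense_c = potts_capacity(c_m=1000, s=7, a=0.25)
dilute_c = potts_capacity(c_m=100, s=7, a=0.25)
sparse_a = potts_capacity(c_m=100, s=7, a=0.05)
```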
Short-term memory capacity in networks via the restricted isometry property.
Charles, Adam S; Yap, Han Lun; Rozell, Christopher J
2014-06-01
Cortical networks are hypothesized to rely on transient network activity to support short-term memory (STM). In this letter, we study the capacity of randomly connected recurrent linear networks for performing STM when the input signals are approximately sparse in some basis. We leverage results from compressed sensing to provide rigorous nonasymptotic recovery guarantees, quantifying the impact of the input sparsity level, the input sparsity basis, and the network characteristics on the system capacity. Our analysis demonstrates that network memory capacities can scale superlinearly with the number of nodes and in some situations can achieve STM capacities that are much larger than the network size. We provide perfect recovery guarantees for finite sequences and recovery bounds for infinite sequences. The latter analysis predicts that network STM systems may have an optimal recovery length that balances errors due to omission and recall mistakes. Furthermore, we show that the conditions yielding optimal STM capacity can be embodied in several network topologies, including networks with sparse or dense connectivities.
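The compressed-sensing intuition behind this result, that a sparse input can be decoded exactly from far fewer measurements than signal dimensions, can be illustrated with a generic orthogonal matching pursuit decoder. This is a standard CS toy (numpy; the random matrix stands in for the network's measurement operator), not the letter's recurrent-network model:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 50, 3                      # signal length, measurements, sparsity

x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = 3.0                          # a k-sparse "input history"

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x                                      # compressed "network state"

# Orthogonal matching pursuit: greedily pick the column best aligned with the residual,
# then re-fit all selected columns by least squares
idx, residual = [], y.copy()
for _ in range(k):
    idx.append(int(np.argmax(np.abs(A.T @ residual))))
    sub = A[:, idx]
    coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
    residual = y - sub @ coef

x_hat = np.zeros(n)
x_hat[idx] = coef                          # exact recovery (with high probability)
```

With m = 50 measurements of an n = 100 dimensional but 3-sparse signal, the decoder recovers the input, mirroring how the letter's STM capacity can exceed the network size for sparse inputs.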
Node Deployment Algorithm Based on Connected Tree for Underwater Sensor Networks
Jiang, Peng; Wang, Xingmin; Jiang, Lurong
2015-01-01
Designing an efficient deployment method to guarantee optimal monitoring quality is one of the key topics in underwater sensor networks. At present, a realistic approach to deployment involves adjusting the depths of nodes in water. One of the typical algorithms used in such a process is the self-deployment depth adjustment algorithm (SDDA). This algorithm mainly focuses on maximizing network coverage by constantly adjusting node depths to reduce coverage overlaps between neighboring nodes, and thus achieves good performance. However, the connectivity of SDDA is not guaranteed. In this paper, we propose a depth adjustment algorithm based on connected tree (CTDA). In CTDA, the sink node is used as the first root node to start building a connected tree. Finally, the network can be organized as a forest to maintain network connectivity. Coverage overlaps between the parent node and the child node are then reduced within each sub-tree to optimize coverage. A hierarchical strategy is used to adjust the distance between the parent node and the child node to reduce node movement. Furthermore, a silent mode is adopted to reduce communication cost. Simulations show that, compared with SDDA, CTDA can achieve high connectivity with various communication ranges and different numbers of nodes. Moreover, it can realize coverage as high as that of SDDA with various sensing ranges and numbers of nodes, but with less energy consumption. Simulations under sparse environments show that the connectivity and energy consumption of CTDA are considerably better than those of SDDA. Meanwhile, the connectivity and coverage of CTDA are close to those of the depth adjustment algorithm based on connected dominating set (CDA), which is similar to CTDA. However, the energy consumption of CTDA is less than that of CDA, particularly in sparse underwater environments. PMID:26184209
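The core connected-tree idea can be sketched as a BFS from the sink in which each reachable node attaches to the parent through which it was first discovered. This is a minimal sketch only (random positions and a made-up communication range; the full CTDA with depth adjustment, hierarchical movement, and silent mode is not reproduced):

```python
import random
from collections import deque

random.seed(7)
COMM_RANGE = 30.0

# Random node positions in a 100x100x100 volume; node 0 plays the surface sink
nodes = [(random.uniform(0, 100), random.uniform(0, 100), random.uniform(0, 100))
         for _ in range(60)]
nodes[0] = (50.0, 50.0, 0.0)

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def build_tree(nodes, comm_range):
    """BFS from the sink: every node discovered within comm_range of an already
    attached node gets that node as parent, yielding a connected routing tree."""
    parent = {0: None}
    q = deque([0])
    while q:
        u = q.popleft()
        for v in range(len(nodes)):
            if v not in parent and dist(nodes[u], nodes[v]) <= comm_range:
                parent[v] = u
                q.append(v)
    return parent

tree = build_tree(nodes, COMM_RANGE)
coverage = len(tree) / len(nodes)   # fraction of nodes connected to the sink
```

Every tree edge respects the communication range by construction, which is the connectivity guarantee SDDA lacks.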
Thermodynamic characterization of synchronization-optimized oscillator networks
NASA Astrophysics Data System (ADS)
Yanagita, Tatsuo; Ichinomiya, Takashi
2014-12-01
We consider a canonical ensemble of synchronization-optimized networks of identical oscillators under external noise. By performing a Markov chain Monte Carlo simulation using the Kirchhoff index, i.e., the sum of the inverse eigenvalues of the Laplacian matrix, as a graph Hamiltonian of the network, we construct more than 1000 different synchronization-optimized networks. We then show that the transition from star to core-periphery structure depends on the connectivity of the network and is characterized by the node-degree variance of the synchronization-optimized ensemble. We find that thermodynamic properties such as heat capacity show anomalies for sparse networks.
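The graph Hamiltonian used above is straightforward to compute. The sketch below takes the plain sum of inverse nonzero Laplacian eigenvalues (some definitions of the Kirchhoff index multiply by N; that constant does not matter when comparing equal-size graphs in a Metropolis step), evaluated on two 5-node test graphs:

```python
import numpy as np

def kirchhoff_hamiltonian(adj):
    """Sum of inverse nonzero Laplacian eigenvalues, used as a graph 'Hamiltonian'.
    (Convention assumed here: without the factor N some definitions include.)"""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    eig = np.linalg.eigvalsh(L)
    return float(np.sum(1.0 / eig[1:]))   # eig[0] ~ 0 is the trivial constant mode

# Two 5-node test graphs: a star and a ring
star = np.zeros((5, 5))
star[0, 1:] = star[1:, 0] = 1.0
ring = np.roll(np.eye(5), 1, axis=1) + np.roll(np.eye(5), -1, axis=1)

kf_star = kirchhoff_hamiltonian(star)   # eigenvalues 0,1,1,1,5 -> 3 + 1/5 = 3.2
kf_ring = kirchhoff_hamiltonian(ring)   # ring identity: (n^2 - 1)/12 = 2.0
```

In a Metropolis simulation of the kind described, one would propose an edge rewiring and accept it with probability min(1, exp(-ΔH/T)), with H this Kirchhoff Hamiltonian.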
NASA Technical Reports Server (NTRS)
Rogers, David
1988-01-01
The advent of the Connection Machine profoundly changes the world of supercomputers. The highly nontraditional architecture makes possible the exploration of algorithms that were impractical for standard Von Neumann architectures. Sparse distributed memory (SDM) is an example of such an algorithm. Sparse distributed memory is a particularly simple and elegant formulation for an associative memory. The foundations for sparse distributed memory are described, and some simple examples of using the memory are presented. The relationship of sparse distributed memory to three important computational systems is shown: random-access memory, neural networks, and the cerebellum of the brain. Finally, the implementation of the algorithm for sparse distributed memory on the Connection Machine is discussed.
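A toy Kanerva-style sparse distributed memory makes the formulation concrete. The sizes, activation radius, and single-pattern demo below are assumptions chosen for illustration, not the Connection Machine implementation described above:

```python
import numpy as np

rng = np.random.default_rng(6)
n_bits, n_hard, radius = 256, 2000, 112   # address width, hard locations, Hamming radius

hard_addr = rng.integers(0, 2, (n_hard, n_bits))   # fixed random hard locations
counters = np.zeros((n_hard, n_bits))              # one up/down counter per bit

def active(addr):
    """Boolean mask of hard locations within the Hamming radius of addr."""
    return np.sum(hard_addr != addr, axis=1) <= radius

def write(addr, data):
    """Distribute data into every active location: +1 for a 1-bit, -1 for a 0-bit."""
    counters[active(addr)] += np.where(data == 1, 1, -1)

def read(addr):
    """Pool the active locations and take a per-bit majority vote."""
    s = counters[active(addr)].sum(axis=0)
    return (s > 0).astype(int)

pattern = rng.integers(0, 2, n_bits)
write(pattern, pattern)                    # autoassociative store
noisy = pattern.copy()
flip = rng.choice(n_bits, 20, replace=False)
noisy[flip] ^= 1                           # corrupt 20 of 256 bits
recalled = read(noisy)                     # the majority vote cleans up the noise
```

The associative property shows up directly: reading from a corrupted address still pools enough of the locations used at write time to reconstruct the stored pattern.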
Lottman, Kristin K; Kraguljac, Nina V; White, David M; Morgan, Charity J; Calhoun, Vince D; Butt, Allison; Lahti, Adrienne C
2017-01-01
Resting-state functional connectivity studies in schizophrenia evaluating average connectivity over the entire experiment have reported aberrant network integration, but findings are variable. Examining time-varying (dynamic) functional connectivity may help explain some inconsistencies. We assessed dynamic network connectivity using resting-state functional MRI in patients with schizophrenia while unmedicated (n = 34), after 1 week (n = 29) and 6 weeks (n = 24) of treatment with risperidone, as well as in matched controls at baseline (n = 35) and after 6 weeks (n = 19). After identifying 41 independent components (ICs) comprising resting-state networks, sliding window analysis was performed on IC timecourses using an optimal window size validated with linear support vector machines. Windowed correlation matrices were then clustered into three discrete connectivity states (a relatively sparsely connected state, a relatively abundantly connected state, and an intermediately connected state). In unmedicated patients, static connectivity was increased between five pairs of ICs and decreased between two pairs of ICs compared to controls; dynamic connectivity showed increased connectivity between the thalamus and the somatomotor network in one of the three states. State statistics indicated that, in comparison to controls, unmedicated patients had shorter mean dwell times in, and spent a smaller fraction of time in, the sparsely connected state, and had longer dwell times in, and spent a larger fraction of time in, the intermediately connected state. Risperidone appeared to normalize mean dwell times after 6 weeks, but not fractions of time. Results suggest that static connectivity abnormalities in schizophrenia may partly be related to altered brain network temporal dynamics rather than consistent dysconnectivity within and between functional networks, and demonstrate the importance of implementing complementary data analysis techniques.
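The sliding-window step at the heart of this analysis is simple to sketch. The toy below (two synthetic signals, coupled in the first half of the scan and uncoupled in the second; window and step sizes are arbitrary) shows windowed Pearson correlation tracking a connectivity change that a whole-scan (static) correlation would average away. The state-clustering stage is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
T, step, win = 400, 10, 60

# Strongly coupled in the first half of the "scan", independent in the second half
shared = rng.standard_normal(T // 2)
x = np.concatenate([shared, rng.standard_normal(T // 2)])
y = np.concatenate([shared + 0.1 * rng.standard_normal(T // 2),
                    rng.standard_normal(T // 2)])

def sliding_corr(a, b, win, step):
    """Pearson correlation of a and b in sliding windows of length win."""
    out = []
    for s in range(0, len(a) - win + 1, step):
        out.append(float(np.corrcoef(a[s:s + win], b[s:s + win])[0, 1]))
    return np.array(out)

r = sliding_corr(x, y, win, step)
# r starts near 1 (coupled regime) and falls toward 0 (uncoupled regime)
```

In the study above, such windowed correlation matrices over all IC pairs are what get clustered (by k-means) into the three discrete connectivity states.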
Cerebellar Functional Parcellation Using Sparse Dictionary Learning Clustering.
Wang, Changqing; Kipping, Judy; Bao, Chenglong; Ji, Hui; Qiu, Anqi
2016-01-01
The human cerebellum has recently been discovered to contribute to cognition and emotion beyond the planning and execution of movement, suggesting its functional heterogeneity. We aimed to identify the functional parcellation of the cerebellum using information from resting-state functional magnetic resonance imaging (rs-fMRI). For this, we introduced a new data-driven decomposition-based functional parcellation algorithm, called Sparse Dictionary Learning Clustering (SDLC). SDLC integrates dictionary learning, sparse representation of rs-fMRI, and k-means clustering into one optimization problem. The dictionary comprises an over-complete set of time course signals, with which a sparse representation of rs-fMRI signals can be constructed. Cerebellar functional regions were then identified using k-means clustering based on the sparse representation of rs-fMRI signals. We solved SDLC using a multi-block hybrid proximal alternating method that guarantees strong convergence. We evaluated the reliability of SDLC and benchmarked its classification accuracy against other clustering techniques using simulated data. We then demonstrated that SDLC can identify biologically reasonable functional regions of the cerebellum as estimated by their cerebello-cortical functional connectivity. We further provided new insights into the cerebello-cortical functional organization in children.
Attractor neural networks with resource-efficient synaptic connectivity
NASA Astrophysics Data System (ADS)
Pehlevan, Cengiz; Sengupta, Anirvan
Memories are thought to be stored in the attractor states of recurrent neural networks. Here we explore how resource constraints interplay with memory storage function to shape the synaptic connectivity of attractor networks. We propose that, given a set of memories in the form of population activity patterns, the neural circuit chooses a synaptic connectivity configuration that minimizes a resource usage cost. We argue that the total synaptic weight (l1-norm) in the network measures the resource cost, because synaptic weight is correlated with synaptic volume, which is a limited resource, and is proportional to neurotransmitter release and post-synaptic current, both of which cost energy. Using numerical simulations and replica theory, we characterize optimal connectivity profiles in resource-efficient attractor networks. Our theory explains several experimental observations on cortical connectivity profiles: (1) connectivity is sparse, because synapses are costly; (2) bidirectional connections are overrepresented; and (3) bidirectional connections are stronger, because attractor states need strong recurrence.
Optimal degrees of synaptic connectivity
Litwin-Kumar, Ashok; Harris, Kameron Decker; Axel, Richard; Sompolinsky, Haim; Abbott, L. F.
2017-01-01
Synaptic connectivity varies widely across neuronal types. Cerebellar granule cells receive five orders of magnitude fewer inputs than the Purkinje cells they innervate, and cerebellum-like circuits including the insect mushroom body also exhibit large divergences in connectivity. In contrast, the number of inputs per neuron in cerebral cortex is more uniform and large. We investigate how the dimension of a representation formed by a population of neurons depends on how many inputs they each receive and what this implies for learning associations. Our theory predicts that the dimensions of the cerebellar granule-cell and Drosophila Kenyon-cell representations are maximized at degrees of synaptic connectivity that match those observed anatomically, showing that sparse connectivity is sometimes superior to dense connectivity. When input synapses are subject to supervised plasticity, however, dense wiring becomes advantageous, suggesting that the type of plasticity exhibited by a set of synapses is a major determinant of connection density. PMID:28215558
Design of a MIMD neural network processor
NASA Astrophysics Data System (ADS)
Saeks, Richard E.; Priddy, Kevin L.; Pap, Robert M.; Stowell, S.
1994-03-01
The Accurate Automation Corporation (AAC) neural network processor (NNP) module is a fully programmable multiple instruction multiple data (MIMD) parallel processor optimized for the implementation of neural networks. The AAC NNP design fully exploits the intrinsic sparseness of neural network topologies. Moreover, by using a MIMD parallel processing architecture, one can update multiple neurons in parallel with efficiency approaching 100 percent as the size of the network increases. Each AAC NNP module has 8K neurons and 32K interconnections and achieves 140 million connections per second; an eight-processor array is capable of over one billion connections per second.
Sparse dictionary learning of resting state fMRI networks.
Eavani, Harini; Filipovych, Roman; Davatzikos, Christos; Satterthwaite, Theodore D; Gur, Raquel E; Gur, Ruben C
2012-07-02
Research in resting state fMRI (rsfMRI) has revealed the presence of stable, anti-correlated functional subnetworks in the brain. Task-positive networks are active during a cognitive process and are anti-correlated with task-negative networks, which are active during rest. In this paper, based on the assumption that the structure of the resting state functional brain connectivity is sparse, we utilize sparse dictionary modeling to identify distinct functional sub-networks. We propose two ways of formulating the sparse functional network learning problem that characterize the underlying functional connectivity from different perspectives. Our results show that the whole-brain functional connectivity can be concisely represented with highly modular, overlapping task-positive/negative pairs of sub-networks.
Kazmier, Kelli; Alexander, Nathan S.; Meiler, Jens; Mchaourab, Hassane S.
2010-01-01
A hybrid protein structure determination approach combining sparse Electron Paramagnetic Resonance (EPR) distance restraints and Rosetta de novo protein folding has previously been demonstrated to yield high quality models (Alexander et al., 2008). However, widespread application of this methodology to proteins of unknown structure is hindered by the lack of a general strategy to place spin label pairs in the primary sequence. In this work, we report the development of an algorithm that optimally selects spin labeling positions for the purpose of distance measurements by EPR. For the α-helical subdomain of T4 lysozyme (T4L), simulated restraints that maximize sequence separation between the two spin labels while simultaneously ensuring pairwise connectivity of secondary structure elements yielded vastly improved models by Rosetta folding. Half of these models have the correct fold, compared to only 21% and 8% correctly folded models when randomly placed restraints or no restraints are used, respectively. Moreover, the improvements in model quality require a limited number of optimized restraints, the number of which is determined by the pairwise connectivities of T4L α-helices. The predicted improvement in Rosetta model quality was verified by experimental determination of distances between spin label pairs selected by the algorithm. Overall, our results reinforce the rationale for the combined use of sparse EPR distance restraints and de novo folding. By alleviating the experimental bottleneck associated with restraint selection, this algorithm sets the stage for extending computational structure determination to larger, traditionally elusive protein topologies of critical structural and biochemical importance. PMID:21074624
Strahl, Stefan; Mertins, Alfred
2008-07-18
Evidence that neurosensory systems use sparse signal representations, together with the improved performance of signal processing algorithms using sparse signal models, has raised interest in sparse signal coding in recent years. For natural audio signals like speech and environmental sounds, gammatone atoms have been derived as expansion functions that generate a nearly optimal sparse signal model (Smith, E., Lewicki, M., 2006. Efficient auditory coding. Nature 439, 978-982). Furthermore, gammatone functions are established models for the human auditory filters. Thus far, practical application of a sparse gammatone signal model has been prevented by the fact that deriving the sparsest representation is, in general, computationally intractable. In this paper, we applied an accelerated version of the matching pursuit algorithm for gammatone dictionaries, allowing real-time and large-data-set applications. We show that a sparse signal model in general has advantages in audio coding and that a sparse gammatone signal model encodes speech more efficiently, in terms of sparseness, than a sparse modified discrete cosine transform (MDCT) signal model. We also show that the optimal gammatone parameters derived for English speech do not match the human auditory filters, suggesting that signal processing applications should derive the parameters individually for each applied signal class instead of using psychometrically derived parameters. For brain research, this means that care should be taken when directly transferring findings of optimality from technical to biological systems.
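Plain (unaccelerated) matching pursuit over a gammatone dictionary is compact enough to sketch. The atom parameters, sampling rate, and four-atom dictionary below are illustrative assumptions (the paper's accelerated variant and realistic dictionary sizes are not reproduced), and the gammatone form is the standard simplified envelope-times-carrier expression:

```python
import numpy as np

def gammatone(t, f, n=4, b=100.0):
    """Simplified gammatone atom t^(n-1) exp(-2*pi*b*t) cos(2*pi*f*t), unit-normalized."""
    g = t ** (n - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * f * t)
    return g / np.linalg.norm(g)

sr = 8000
t = np.arange(0, 0.05, 1.0 / sr)
freqs = [200.0, 450.0, 800.0, 1500.0]
D = np.stack([gammatone(t, f) for f in freqs])   # toy dictionary, atoms as rows

signal = 2.0 * D[1] + 0.5 * D[3]                 # a 2-sparse signal in the dictionary

# Matching pursuit: repeatedly subtract the best-matching atom from the residual
residual, atoms = signal.copy(), []
for _ in range(2):
    scores = D @ residual
    j = int(np.argmax(np.abs(scores)))
    atoms.append(j)
    residual = residual - scores[j] * D[j]
```

Two greedy steps identify exactly the two atoms used to build the signal and leave a small residual, which is the sparse-coding behaviour the abstract exploits for audio.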
Sparse Learning with Stochastic Composite Optimization.
Zhang, Weizhong; Zhang, Lijun; Jin, Zhongming; Jin, Rong; Cai, Deng; Li, Xuelong; Liang, Ronghua; He, Xiaofei
2017-06-01
In this paper, we study Stochastic Composite Optimization (SCO) for sparse learning, which aims to learn a sparse solution from a composite function. Most recent SCO algorithms already reach the optimal expected convergence rate O(1/λT), but they often fail to deliver sparse solutions in the end, either due to limited sparsity regularization during stochastic optimization (SO) or due to limitations in online-to-batch conversion. Even when the objective function is strongly convex, their high-probability bounds can only attain O(√(log(1/δ)/T)), where δ is the failure probability, which is much worse than the expected convergence rate. To address these limitations, we propose a simple yet effective two-phase Stochastic Composite Optimization scheme that adds a novel, powerful sparse online-to-batch conversion to general Stochastic Optimization algorithms. We further develop three concrete algorithms, OptimalSL, LastSL and AverageSL, directly under our scheme to demonstrate its effectiveness. Both the theoretical analysis and the experimental results show that our methods outperform existing methods in sparse learning ability, while improving the high-probability bound to approximately O(log(log(T)/δ)/λT).
A Novel Characterization of Amalgamated Networks in Natural Systems
Barranca, Victor J.; Zhou, Douglas; Cai, David
2015-01-01
Densely-connected networks are prominent among natural systems, exhibiting structural characteristics often optimized for biological function. To reveal such features in highly-connected networks, we introduce a new network characterization determined by a decomposition of network-connectivity into low-rank and sparse components. Based on these components, we discover a new class of networks we define as amalgamated networks, which exhibit large functional groups and dense connectivity. Analyzing recent experimental findings on cerebral cortex, food-web, and gene regulatory networks, we establish the unique importance of amalgamated networks in fostering biologically advantageous properties, including rapid communication among nodes, structural stability under attacks, and separation of network activity into distinct functional modules. We further observe that our network characterization is scalable with network size and connectivity, thereby identifying robust features significant to diverse physical systems, which are typically undetectable by conventional characterizations of connectivity. We expect that studying the amalgamation properties of biological networks may offer new insights into understanding their structure-function relationships. PMID:26035066
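The two building blocks of a low-rank plus sparse decomposition are the proximal operators of the nuclear norm and the l1-norm. The sketch below implements both and runs a few alternating shrinkage steps on a toy "amalgamated" matrix. This crude alternation is a stand-in for principal component pursuit, not the paper's characterization procedure, and the matrix and threshold are made up:

```python
import numpy as np

def soft(x, tau):
    """Entrywise soft-thresholding: proximal operator of tau * ||.||_1 (sparse part)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def svt(M, tau):
    """Singular-value thresholding: proximal operator of tau * nuclear norm (low-rank part)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(soft(s, tau)) @ Vt

# Toy 'amalgamated' connectivity: a dense rank-1 community plus a few strong extra links
n = 20
Lr = np.ones((n, n))                              # low-rank (rank-1) dense component
S = np.zeros((n, n))
S[[2, 7, 11], [5, 13, 4]] = 8.0                   # sparse component
M = Lr + S

# A few alternating shrinkage steps (crude sketch, fixed threshold)
L_hat, S_hat = np.zeros_like(M), np.zeros_like(M)
for _ in range(25):
    L_hat = svt(M - S_hat, tau=2.0)
    S_hat = soft(M - L_hat, tau=2.0)
```

By construction, the final split leaves an entrywise residual no larger than the threshold, and the two operators are exactly the shrinkages a principled solver would iterate.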
Magnetic Resonance Super-resolution Imaging Measurement with Dictionary-optimized Sparse Learning
NASA Astrophysics Data System (ADS)
Li, Jun-Bao; Liu, Jing; Pan, Jeng-Shyang; Yao, Hongxun
2017-06-01
Magnetic Resonance Super-resolution Imaging Measurement (MRIM) is an effective way of measuring materials. MRIM has wide applications in physics, chemistry, biology, geology, medical and material science, especially in medical diagnosis. It is feasible to improve the resolution of MR imaging through increasing radiation intensity, but the high radiation intensity and the longtime of magnetic field harm the human body. Thus, in the practical applications the resolution of hardware imaging reaches the limitation of resolution. Software-based super-resolution technology is effective to improve the resolution of image. This work proposes a framework of dictionary-optimized sparse learning based MR super-resolution method. The framework is to solve the problem of sample selection for dictionary learning of sparse reconstruction. The textural complexity-based image quality representation is proposed to choose the optimal samples for dictionary learning. Comprehensive experiments show that the dictionary-optimized sparse learning improves the performance of sparse representation.
Angulo-Garcia, David; Berke, Joshua D; Torcini, Alessandro
2016-02-01
Striatal projection neurons form a sparsely-connected inhibitory network, and this arrangement may be essential for the appropriate temporal organization of behavior. Here we show that a simplified, sparse inhibitory network of Leaky-Integrate-and-Fire neurons can reproduce some key features of striatal population activity, as observed in brain slices. In particular we develop a new metric to determine the conditions under which sparse inhibitory networks form anti-correlated cell assemblies with time-varying activity of individual cells. We find that under these conditions the network displays an input-specific sequence of cell assembly switching, that effectively discriminates similar inputs. Our results support the proposal that GABAergic connections between striatal projection neurons allow stimulus-selective, temporally-extended sequential activation of cell assemblies. Furthermore, we help to show how altered intrastriatal GABAergic signaling may produce aberrant network-level information processing in disorders such as Parkinson's and Huntington's diseases.
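A minimal sparse inhibitory LIF network in this spirit fits in a short script. Network size, connection probability, drive, and inhibitory strength below are assumptions for illustration, not the paper's fitted striatal parameters, and the assembly-switching analysis is omitted:

```python
import numpy as np

rng = np.random.default_rng(3)
N, p_conn = 50, 0.1                      # neurons, connection probability (sparse)
T, dt, tau = 4000, 0.1, 20.0             # steps, ms per step, membrane time constant
v_th, v_reset = 1.0, 0.0
g_inh, drive = 0.4, 1.3                  # inhibitory pulse strength, suprathreshold drive

W = (rng.random((N, N)) < p_conn).astype(float)   # sparse random inhibitory coupling
np.fill_diagonal(W, 0.0)                          # no self-inhibition

v = rng.random(N)                                 # random initial potentials
spikes = np.zeros((T, N), dtype=bool)
for t in range(T):
    fired = v >= v_th
    spikes[t] = fired
    v = np.where(fired, v_reset, v)
    # leaky integration toward the drive, minus delta-pulse inhibition from this step's spikes
    v = v + (dt / tau) * (drive - v) - g_inh * (W @ fired.astype(float))

mean_rate = spikes.mean() / (dt / 1000.0)         # mean firing rate in Hz
```

Because every neuron is driven above threshold but inhibited by its sparse presynaptic pool, firing is sustained yet far from saturation, the regime in which such networks can form anti-correlated, slowly switching cell assemblies.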
Energy Efficient Sparse Connectivity from Imbalanced Synaptic Plasticity Rules
Sacramento, João; Wichert, Andreas; van Rossum, Mark C. W.
2015-01-01
It is believed that energy efficiency is an important constraint in brain evolution. As synaptic transmission dominates energy consumption, energy can be saved by ensuring that only a few synapses are active. It is therefore likely that the formation of sparse codes and sparse connectivity are fundamental objectives of synaptic plasticity. In this work we study how sparse connectivity can result from a synaptic learning rule of excitatory synapses. Information is maximised when potentiation and depression are balanced according to the mean presynaptic activity level, and the resulting fraction of zero-weight synapses is around 50%. However, an imbalance towards depression increases the fraction of zero-weight synapses without significantly affecting performance. We show that imbalanced plasticity corresponds to imposing a regularising constraint on the L1-norm of the synaptic weight vector, a procedure that is well known to induce sparseness. Imbalanced plasticity is biophysically plausible and leads to more efficient synaptic configurations than a previously suggested approach that prunes synapses after learning. Our framework offers a novel interpretation of the high fraction of silent synapses found in brain regions like the cerebellum. PMID:26046817
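The equivalence invoked above, extra uniform depression plus a zero floor acting as an L1 penalty, is just soft-thresholding of non-negative weights. A minimal sketch (the weight distribution and depression amount are arbitrary assumptions, not the paper's learned weights):

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in for weights learned by a balanced rule (all positive, no zeros)
w_balanced = np.abs(rng.standard_normal(200)) * 0.5

def imbalanced(w, depression):
    """Uniform extra depression followed by clipping at zero: exactly the proximal
    (soft-thresholding) step of an L1 penalty on non-negative weights."""
    return np.maximum(w - depression, 0.0)

w_sparse = imbalanced(w_balanced, depression=0.3)
frac_silent = float(np.mean(w_sparse == 0.0))   # fraction of silent (zero-weight) synapses
```

The surviving weights are shifted down slightly while a large fraction is silenced outright, which is the sparsification-with-little-performance-cost effect the abstract reports.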
Li, Weikai; Wang, Zhengxia; Zhang, Limei; Qiao, Lishan; Shen, Dinggang
2017-01-01
The functional brain network (FBN) has become an increasingly important way to model the statistical dependence among neural time courses of the brain, and provides effective imaging biomarkers for the diagnosis of some neurological or psychological disorders. Currently, Pearson's correlation (PC) is the simplest and most widely used method for constructing FBNs. Despite its advantages in statistical meaning and computational performance, PC tends to result in an FBN with dense connections. Therefore, in practice, the PC-based FBN needs to be sparsified by removing weak (potentially noisy) connections. However, such a scheme depends on a hard threshold without enough flexibility. Departing from this traditional strategy, in this paper we propose a new approach for estimating FBNs by remodeling PC as an optimization problem, which provides a way to incorporate biological/physical priors into the FBNs. In particular, we introduce an L1-norm regularizer into the optimization model to obtain a sparse solution. Compared with the hard-threshold scheme, the proposed framework gives an elegant mathematical formulation for sparsifying PC-based networks. More importantly, it provides a platform to encode other biological/physical priors into PC-based FBNs. To further illustrate the flexibility of the proposed method, we extend the model to a weighted counterpart for learning both sparse and scale-free networks, and then conduct experiments to identify autism spectrum disorder (ASD) patients from normal controls (NC) based on the constructed FBNs. Consequently, we achieve an 81.52% classification accuracy, which outperforms the baseline and state-of-the-art methods.
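One simple instance of an L1-regularized PC model has a closed form: minimizing ||W - C||_F^2 + 2λ||W||_1 over networks W, with C the Pearson correlation matrix, yields entrywise soft-thresholding of C. This sketch (synthetic time courses, arbitrary λ; not necessarily the authors' exact formulation) contrasts it with the dense raw PC network:

```python
import numpy as np

rng = np.random.default_rng(5)
T, n = 120, 6
ts = rng.standard_normal((T, n))         # synthetic regional time courses
ts[:, 1] += 0.8 * ts[:, 0]               # one genuinely coupled pair of regions
C = np.corrcoef(ts, rowvar=False)        # dense Pearson-correlation network

lam = 0.3
# Soft-thresholding: closed-form minimizer of ||W - C||_F^2 + 2*lam*||W||_1,
# shrinking all edges and zeroing the weak ones (unlike a hard cut-off,
# surviving edges are also shrunk toward zero)
W = np.sign(C) * np.maximum(np.abs(C) - lam, 0.0)
np.fill_diagonal(W, 0.0)                 # ignore self-connections

n_edges = int(np.count_nonzero(np.triu(W, 1)))
```

The genuinely coupled pair survives the shrinkage while most spurious sample correlations are zeroed, giving a sparse FBN without a hand-picked hard threshold.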
NASA Astrophysics Data System (ADS)
Kim, Sang-Yoon; Lim, Woochang
2015-11-01
We consider a clustered network with small-world subnetworks of inhibitory fast spiking interneurons and investigate the effect of intermodular connection on the emergence of fast sparsely synchronized rhythms by varying both the intermodular coupling strength Jinter and the average number of intermodular links per interneuron Msyn(inter). In contrast to the case of nonclustered networks, two kinds of sparsely synchronized states, modular and global synchronization, are found. In the case of modular sparse synchronization, the population behavior reveals the modular structure, because the intramodular dynamics of the subnetworks are somewhat mismatched. On the other hand, in the case of global sparse synchronization, the population behavior is globally identical, independently of the cluster structure, because the intramodular dynamics of the subnetworks are perfectly matched. We introduce a realistic cross-correlation modularity measure, representing the matching degree between the instantaneous subpopulation spike rates of the subnetworks, and examine whether the sparse synchronization is global or modular. Depending on its magnitude, the intermodular coupling strength Jinter seems to play "dual" roles for the pacing between spikes in each subnetwork. For large Jinter, due to strong inhibition it plays a destructive role to "spoil" the pacing between spikes, while for small Jinter it plays a constructive role to "favor" the pacing between spikes. Through competition between the constructive and the destructive roles of Jinter, there exists an intermediate optimal Jinter at which the pacing degree between spikes becomes maximal. In contrast, the average number of intermodular links per interneuron Msyn(inter) seems to play a role only to favor the pacing between spikes. With increasing Msyn(inter), the pacing degree between spikes increases monotonically thanks to the increased effectiveness of global communication between spikes.
Furthermore, we employ the realistic sub- and whole-population order parameters, based on the instantaneous sub- and whole-population spike rates, to determine the threshold values for the synchronization-unsynchronization transition in the sub- and whole populations, and the degrees of global and modular sparse synchronization are also measured in terms of the realistic sub- and whole-population statistical-mechanical spiking measures defined by considering both the occupation and the pacing degrees of spikes. It is expected that our results could have implications for the role of brain plasticity in some functional behaviors associated with population synchronization.
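As a rough illustration of a matching measure between two subpopulation rate signals, the zero-lag normalized cross-correlation below is the generic form such a modularity measure could take; the paper's precise definition (lag structure, normalization) may differ, so treat this only as a sketch. Values near 1 would indicate global synchronization, clearly smaller values a modular state.

```python
# Hedged sketch: zero-lag normalized cross-correlation between the
# instantaneous spike rates of two subnetworks. Returns a value in [-1, 1];
# 1 means the two rate signals are perfectly matched.
import math

def zero_lag_correlation(rate_a, rate_b):
    n = len(rate_a)
    ma, mb = sum(rate_a) / n, sum(rate_b) / n
    da = [a - ma for a in rate_a]
    db = [b - mb for b in rate_b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den
```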
Normalization for sparse encoding of odors by a wide-field interneuron.
Papadopoulou, Maria; Cassenaer, Stijn; Nowotny, Thomas; Laurent, Gilles
2011-05-06
Sparse coding presents practical advantages for sensory representations and memory storage. In the insect olfactory system, the representation of general odors is dense in the antennal lobes but sparse in the mushroom bodies, only one synapse downstream. In locusts, this transformation relies on the oscillatory structure of antennal lobe output, feed-forward inhibitory circuits, intrinsic properties of mushroom body neurons, and connectivity between antennal lobe and mushroom bodies. Here we show the existence of a normalizing negative-feedback loop within the mushroom body to maintain sparse output over a wide range of input conditions. This loop consists of an identifiable "giant" nonspiking inhibitory interneuron with ubiquitous connectivity and graded release properties.
Population coding in sparsely connected networks of noisy neurons.
Tripp, Bryan P; Orchard, Jeff
2012-01-01
This study examines the relationship between population coding and spatial connection statistics in networks of noisy neurons. Encoding of sensory information in the neocortex is thought to require coordinated neural populations, because individual cortical neurons respond to a wide range of stimuli, and exhibit highly variable spiking in response to repeated stimuli. Population coding is rooted in network structure, because cortical neurons receive information only from other neurons, and because the information they encode must be decoded by other neurons, if it is to affect behavior. However, population coding theory has often ignored network structure, or assumed discrete, fully connected populations (in contrast with the sparsely connected, continuous sheet of the cortex). In this study, we modeled a sheet of cortical neurons with sparse, primarily local connections, and found that a network with this structure could encode multiple internal state variables with high signal-to-noise ratio. However, we were unable to create high-fidelity networks by instantiating connections at random according to spatial connection probabilities. In our models, high-fidelity networks required additional structure, with higher cluster factors and correlations between the inputs to nearby neurons.
Sparse and optimal acquisition design for diffusion MRI and beyond
Koay, Cheng Guan; Özarslan, Evren; Johnson, Kevin M.; Meyerand, M. Elizabeth
2012-01-01
Purpose: Diffusion magnetic resonance imaging (MRI) in combination with functional MRI promises a whole new vista for scientists to investigate noninvasively the structural and functional connectivity of the human brain—the human connectome, which had heretofore been out of reach. As with other imaging modalities, diffusion MRI data are inherently noisy and their acquisition time-consuming. Further, a faithful representation of the human connectome that can serve as a predictive model requires a robust and accurate data-analytic pipeline. The focus of this paper is on one of the key segments of this pipeline—in particular, the development of a sparse and optimal acquisition (SOA) design for diffusion MRI multiple-shell acquisition and beyond. Methods: The authors propose a novel optimality criterion for sparse multiple-shell acquisition and quasimultiple-shell designs in diffusion MRI and a novel and effective semistochastic and moderately greedy combinatorial search strategy with simulated annealing to locate the optimum design or configuration. The goal of the optimality criteria is threefold: first, to maximize uniformity of the diffusion measurements in each shell, which is equivalent to maximal incoherence in angular measurements; second, to maximize coverage of the diffusion measurements around each radial line to achieve maximal incoherence in radial measurements for multiple-shell acquisition; and finally, to ensure maximum uniformity of diffusion measurement directions in the limiting case when all the shells are coincidental as in the case of a single-shell acquisition. The approach taken in evaluating the stability of various acquisition designs is based on the condition number and the A-optimal measure of the design matrix. 
Results: Even though the number of distinct configurations for a given set of diffusion gradient directions is very large in general (e.g., on the order of 10^232 for a set of 144 diffusion gradient directions), the proposed search strategy was found to be effective in finding the optimum configuration. It was found that the square design is the most robust (i.e., with stable condition numbers and A-optimal measures under varying experimental conditions) among many other possible designs of the same sample size. Under the same performance evaluation, the square design was found to be more robust than the widely used sampling schemes similar to those of 3D radial MRI and of diffusion spectrum imaging (DSI). Conclusions: A novel optimality criterion for sparse multiple-shell acquisition and quasimultiple-shell designs in diffusion MRI and an effective search strategy for finding the best configuration have been developed. The results are very promising, interesting, and practical for diffusion MRI acquisitions. PMID:22559620
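The stability evaluation mentioned above can be sketched with the standard second-order (diffusion tensor) design matrix and its condition number; the paper's exact design matrix and A-optimal measure are not reproduced here, so treat this as a generic illustration of scoring a direction set.

```python
# Hedged sketch: score a set of unit gradient directions by the condition
# number of the conventional 6-column DTI design matrix. Each row encodes
# one direction g as [gx^2, gy^2, gz^2, 2*gx*gy, 2*gx*gz, 2*gy*gz].
import numpy as np

def design_matrix(directions):
    """directions: (N, 3) array-like of unit gradient vectors."""
    g = np.asarray(directions, dtype=float)
    gx, gy, gz = g[:, 0], g[:, 1], g[:, 2]
    return np.column_stack([gx * gx, gy * gy, gz * gz,
                            2 * gx * gy, 2 * gx * gz, 2 * gy * gz])

def design_condition_number(directions):
    """Lower condition numbers indicate a more stable acquisition design."""
    return np.linalg.cond(design_matrix(directions))
```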
Sparse network-based models for patient classification using fMRI
Rosa, Maria J.; Portugal, Liana; Hahn, Tim; Fallgatter, Andreas J.; Garrido, Marta I.; Shawe-Taylor, John; Mourao-Miranda, Janaina
2015-01-01
Pattern recognition applied to whole-brain neuroimaging data, such as functional Magnetic Resonance Imaging (fMRI), has proved successful at discriminating psychiatric patients from healthy participants. However, predictive patterns obtained from whole-brain voxel-based features are difficult to interpret in terms of the underlying neurobiology. Many psychiatric disorders, such as depression and schizophrenia, are thought to be brain connectivity disorders. Therefore, pattern recognition based on network models might provide deeper insights and potentially more powerful predictions than whole-brain voxel-based approaches. Here, we build a novel sparse network-based discriminative modeling framework, based on Gaussian graphical models and L1-norm regularized linear Support Vector Machines (SVM). In addition, the proposed framework is optimized in terms of both predictive power and reproducibility/stability of the patterns. Our approach aims to provide better pattern interpretation than voxel-based whole-brain approaches by yielding stable brain connectivity patterns that underlie discriminative changes in brain function between the groups. We illustrate our technique by classifying patients with major depressive disorder (MDD) and healthy participants, in two (event- and block-related) fMRI datasets acquired while participants performed a gender discrimination task and an emotional task, respectively, during the visualization of emotionally valent faces. PMID:25463459
Decentralized state estimation for a large-scale spatially interconnected system.
Liu, Huabo; Yu, Haisheng
2018-03-01
A decentralized state estimator is derived for the spatially interconnected systems composed of many subsystems with arbitrary connection relations. An optimization problem on the basis of linear matrix inequality (LMI) is constructed for the computations of improved subsystem parameter matrices. Several computationally effective approaches are derived which efficiently utilize the block-diagonal characteristic of system parameter matrices and the sparseness of subsystem connection matrix. Moreover, this decentralized state estimator is proved to converge to a stable system and obtain a bounded covariance matrix of estimation errors under certain conditions. Numerical simulations show that the obtained decentralized state estimator is attractive in the synthesis of a large-scale networked system. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Effect of dilution in asymmetric recurrent neural networks.
Folli, Viola; Gosti, Giorgio; Leonetti, Marco; Ruocco, Giancarlo
2018-04-16
We study with numerical simulation the possible limit behaviors of synchronous discrete-time deterministic recurrent neural networks composed of N binary neurons as a function of a network's level of dilution and asymmetry. The network dilution measures the fraction of neuron couples that are connected, and the network asymmetry measures to what extent the underlying connectivity matrix is asymmetric. For each given neural network, we study the dynamical evolution of all the different initial conditions, thus characterizing the full dynamical landscape without imposing any learning rule. Because of the deterministic dynamics, each trajectory converges to an attractor, which can be either a fixed point or a limit cycle. These attractors form the set of all the possible limit behaviors of the neural network. For each network we then determine the convergence times, the limit cycles' lengths, the number of attractors, and the sizes of the attractors' basins. We show that there are two network structures that maximize the number of possible limit behaviors. The first optimal network structure is fully-connected and symmetric. On the contrary, the second optimal network structure is highly sparse and asymmetric. The latter optimum is similar to what is observed in various biological neuronal circuits. These observations lead us to hypothesize that, independently of any given learning model, an efficient and effective biological network that stores a number of limit behaviors close to its maximum capacity tends to develop a connectivity structure similar to one of the optimal networks we found. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.
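The exhaustive attractor search described above can be sketched for a tiny network. The update rule assumed here (synchronous sign-threshold dynamics on +/-1 neurons) is a common convention and may differ in detail from the paper's model:

```python
# Hedged sketch: enumerate all attractors (fixed points and limit cycles)
# of a small deterministic binary network by iterating every initial
# condition until the trajectory revisits a state.
from itertools import product

def step(state, weights):
    """Synchronous update: x_i <- sign(sum_j J_ij x_j), with sign(0) = +1."""
    n = len(state)
    return tuple(1 if sum(weights[i][j] * state[j] for j in range(n)) >= 0
                 else -1 for i in range(n))

def attractors(weights):
    n = len(weights)
    found = set()
    for init in product((-1, 1), repeat=n):
        seen, s, t = {}, init, 0
        while s not in seen:
            seen[s] = t
            s = step(s, weights)
            t += 1
        # states visited at or after the first revisit form the limit cycle
        cycle_start = seen[s]
        found.add(frozenset(x for x, when in seen.items()
                            if when >= cycle_start))
    return found
```

For two mutually excitatory neurons this finds two fixed points and one 2-cycle, i.e. three limit behaviors.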
Mocanu, Decebal Constantin; Mocanu, Elena; Stone, Peter; Nguyen, Phuong H; Gibescu, Madeleine; Liotta, Antonio
2018-06-19
Through the success of deep learning in various domains, artificial neural networks are currently among the most used artificial intelligence methods. Taking inspiration from the network properties of biological neural networks (e.g. sparsity, scale-freeness), we argue that (contrary to general practice) artificial neural networks, too, should not have fully-connected layers. Here we propose sparse evolutionary training of artificial neural networks, an algorithm which evolves an initial sparse topology (Erdős-Rényi random graph) of two consecutive layers of neurons into a scale-free topology during learning. Our method replaces artificial neural networks' fully-connected layers with sparse ones before training, quadratically reducing the number of parameters, with no decrease in accuracy. We demonstrate our claims on restricted Boltzmann machines, multi-layer perceptrons, and convolutional neural networks for unsupervised and supervised learning on 15 datasets. Our approach has the potential to enable artificial neural networks to scale up beyond what is currently possible.
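The initial step of this scheme, replacing a dense layer with an Erdős-Rényi random sparse mask, can be sketched as follows. The density formula eps*(n+m)/(n*m) is our reading of the sparse-evolutionary-training setup; treat the constant and the exact formula as assumptions:

```python
# Hedged sketch: build an Erdos-Renyi binary connectivity mask for a layer
# with n_in inputs and n_out outputs. The expected number of connections is
# roughly eps*(n_in + n_out) instead of n_in*n_out for a dense layer.
import random

def erdos_renyi_mask(n_in, n_out, eps=10, rng=random):
    p = min(1.0, eps * (n_in + n_out) / (n_in * n_out))
    return [[1 if rng.random() < p else 0 for _ in range(n_out)]
            for _ in range(n_in)]
```

During training the algorithm would then repeatedly prune the smallest-magnitude connections and regrow new random ones, which is what drives the topology toward scale-freeness.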
Zhang, Shang; Dong, Yuhan; Fu, Hongyan; Huang, Shao-Lun; Zhang, Lin
2018-02-22
The miniaturization of spectrometers can broaden the application area of spectrometry, which has huge academic and industrial value. Among various miniaturization approaches, filter-based miniaturization is a promising implementation that utilizes broadband filters with distinct transmission functions. Mathematically, filter-based spectral reconstruction can be modeled as solving a system of linear equations. In this paper, we propose an algorithm for spectral reconstruction based on sparse optimization and dictionary learning. To verify the feasibility of the reconstruction algorithm, we design and implement a simple prototype of a filter-based miniature spectrometer. The experimental results demonstrate that sparse optimization is well applicable to spectral reconstruction whether the spectra are directly sparse or not. As for non-directly sparse spectra, their sparsity can be enhanced by dictionary learning. In conclusion, the proposed approach has a bright application prospect for fabricating a practical miniature spectrometer.
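The linear-system view of filter-based reconstruction can be sketched as a sparse inverse problem. The paper's actual solver and its dictionary-learning stage are not reproduced here; the sketch below just solves min (1/2)||Ax - b||^2 + lam*||x||_1 with plain iterative soft-thresholding (ISTA), where A would hold the filter transmission functions and b the filter readings:

```python
# Hedged sketch: ISTA for sparse spectral reconstruction.
# A: (n_filters, n_wavelengths) transmission matrix, b: measured responses.
import numpy as np

def ista(A, b, lam=0.1, n_iter=500):
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    step = 1.0 / np.linalg.norm(A.T @ A, 2)  # 1/L, L = largest eigenvalue
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * (A.T @ (A @ x - b))          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrink
    return x
```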
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Huiqiao; Yang, Yi; Tang, Xiangyang
2015-06-15
Purpose: Optimization-based reconstruction has been proposed and investigated for reconstructing CT images from sparse views, such that the radiation dose can be substantially reduced while maintaining acceptable image quality. The investigation has so far focused on reconstruction from evenly distributed sparse views. Recognizing the clinical situations wherein only unevenly distributed sparse views are available, e.g., image guided radiation therapy, CT perfusion and multi-cycle cardiovascular imaging, we investigate the performance of optimization-based image reconstruction from unevenly distributed sparse projection views in this work. Methods: The investigation is carried out using the FORBILD and an anthropomorphic head phantom. In the study, 82 views, which are evenly sorted out from a full (360°) axial CT scan consisting of 984 views, form sub-scan I. Another 82 views are sorted out in a similar manner to form sub-scan II. As such, a CT scan with sparse (164) views at a 1:6 ratio is formed. By shifting the two sub-scans relative to each other in view angulation, a CT scan with unevenly distributed sparse (164) views at a 1:6 ratio is formed. An optimization-based method is implemented to reconstruct images from the unevenly distributed views. Taking the FBP reconstruction from the full scan (984 views) as the reference, the root mean square (RMS) error between the reference and the optimization-based reconstruction is used to evaluate the performance quantitatively. Results: On visual inspection, the optimization-based method substantially outperforms the FBP in reconstruction from unevenly distributed views, which is quantitatively verified by the RMS gauged globally and in ROIs in both the FORBILD and anthropomorphic head phantoms. The RMS increases with increasing severity of the uneven angular distribution, especially in the case of the anthropomorphic head phantom. 
Conclusion: The optimization-based image reconstruction can save radiation dose up to 12-fold while providing acceptable image quality for advanced clinical applications wherein only unevenly distributed sparse views are available. Research Grants: W81XWH-12-1-0138 (DoD), Sinovision Technologies.
Wang, Yubo; Veluvolu, Kalyana C
2017-06-14
It is often difficult to analyze biological signals because of their nonlinear and non-stationary characteristics. This necessitates the usage of time-frequency decomposition methods for analyzing the subtle changes in these signals that are often connected to an underlying phenomenon. This paper presents a new approach to analyze the time-varying characteristics of such signals by employing a simple truncated Fourier series model, namely the band-limited multiple Fourier linear combiner (BMFLC). In contrast to earlier designs, we first identify the sparsity imposed on the signal model in order to reformulate the model as a sparse linear regression model. The coefficients of the proposed model are then estimated by a convex optimization algorithm. The performance of the proposed method was analyzed with benchmark test signals. An energy ratio metric is employed to quantify the spectral performance, and the results show that the proposed Sparse-BMFLC method has a high mean energy ratio (0.9976) and outperforms existing methods such as the short-time Fourier transform (STFT), the continuous wavelet transform (CWT) and the BMFLC Kalman smoother. Furthermore, the proposed method provides an overall 6.22% improvement in reconstruction error.
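The truncated Fourier series model underlying BMFLC is linear in its coefficients, which is what makes the sparse-regression reformulation possible. The sketch below builds one row of the resulting design matrix; the frequency band and spacing are illustrative assumptions, not the paper's values:

```python
# Hedged sketch: one row of a band-limited Fourier design matrix,
# y(t) ~ sum_k a_k sin(2*pi*f_k*t) + b_k cos(2*pi*f_k*t) over a fixed band.
# Stacking rows over time samples gives the linear model a sparse solver
# (e.g. L1-regularized regression) can then be applied to.
import math

def bmflc_design_row(t, f_lo=2.0, f_hi=5.0, df=0.5):
    freqs = []
    f = f_lo
    while f <= f_hi + 1e-9:
        freqs.append(f)
        f += df
    return ([math.sin(2 * math.pi * fk * t) for fk in freqs] +
            [math.cos(2 * math.pi * fk * t) for fk in freqs])
```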
Colclough, Giles L; Woolrich, Mark W; Harrison, Samuel J; Rojas López, Pedro A; Valdes-Sosa, Pedro A; Smith, Stephen M
2018-05-07
A Bayesian model for sparse, hierarchical, inverse-covariance estimation is presented, and applied to multi-subject functional connectivity estimation in the human brain. It enables simultaneous inference of the strength of connectivity between brain regions at both subject and population level, and is applicable to fMRI, MEG and EEG data. Two versions of the model can encourage sparse connectivity, either using continuous priors to suppress irrelevant connections, or using an explicit description of the network structure to estimate the connection probability between each pair of regions. A large evaluation of this model, and thirteen methods that represent the state of the art of inverse covariance modelling, is conducted using both simulated and resting-state functional imaging datasets. Our novel Bayesian approach has similar performance to the best extant alternative, Ng et al.'s Sparse Group Gaussian Graphical Model algorithm, which is also based on a hierarchical structure. Using data from the Human Connectome Project, we show that these hierarchical models are able to reduce the measurement error in MEG beta-band functional networks by 10%, producing concomitant increases in estimates of the genetic influence on functional connectivity. Copyright © 2018. Published by Elsevier Inc.
Optimal Couple Projections for Domain Adaptive Sparse Representation-based Classification.
Zhang, Guoqing; Sun, Huaijiang; Porikli, Fatih; Liu, Yazhou; Sun, Quansen
2017-08-29
In recent years, sparse representation based classification (SRC) has been one of the most successful methods and has shown impressive performance in various classification tasks. However, when the training data have a different distribution than the testing data, the learned sparse representation may not be optimal, and the performance of SRC will be degraded significantly. To address this problem, in this paper we propose an optimal couple projections for domain-adaptive sparse representation-based classification (OCPD-SRC) method, in which the discriminative features of data in the two domains are learned simultaneously with a dictionary that can succinctly represent the training and testing data in the projected space. OCPD-SRC is designed based on the decision rule of SRC, with the objective to learn coupled projection matrices and a common discriminative dictionary such that the between-class sparse reconstruction residuals of data from both domains are maximized, and the within-class sparse reconstruction residuals of data are minimized in the projected low-dimensional space. Thus, the resulting representations can well fit SRC and simultaneously have better discriminant ability. In addition, our method can be easily extended to multiple domains and can be kernelized to deal with the nonlinear structure of data. The optimal solution for the proposed method can be efficiently obtained by alternating optimization. Extensive experimental results on a series of benchmark databases show that our method is better than or comparable to many state-of-the-art methods.
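The reconstruction-residual decision rule that SRC builds on can be sketched in a simplified form. For brevity this stand-in codes the test sample against each class's training columns with ordinary least squares rather than a global L1 sparse code, so it is closer to linear-regression classification; it only illustrates the "smallest class-wise residual wins" rule, not OCPD-SRC itself:

```python
# Hedged, simplified stand-in for the SRC decision rule: assign the label
# whose training samples reconstruct the test vector with smallest residual.
import numpy as np

def residual_classify(class_dicts, y):
    """class_dicts: {label: (d, n_label) array of training columns}; y: (d,)."""
    best, best_r = None, float("inf")
    for label, D in class_dicts.items():
        coef, *_ = np.linalg.lstsq(D, y, rcond=None)
        r = float(np.linalg.norm(D @ coef - y))
        if r < best_r:
            best, best_r = label, r
    return best
```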
NASA Astrophysics Data System (ADS)
Gao, Wei; Zhu, Linli; Wang, Kaiyun
2015-12-01
Ontology, a model of knowledge representation and storage, has had extensive applications in pharmaceutics, social science, chemistry and biology. In the age of “big data”, the constructed concepts are often represented as higher-dimensional data by scholars, and thus the sparse learning techniques are introduced into ontology algorithms. In this paper, based on the alternating direction augmented Lagrangian method, we present an ontology optimization algorithm for ontological sparse vector learning, and a fast version of such ontology technologies. The optimal sparse vector is obtained by an iterative procedure, and the ontology function is then obtained from the sparse vector. Four simulation experiments show that our ontological sparse vector learning model has a higher precision ratio on plant ontology, humanoid robotics ontology, biology ontology and physics education ontology data for similarity measuring and ontology mapping applications.
Scenario generation for stochastic optimization problems via the sparse grid method
Chen, Michael; Mehrotra, Sanjay; Papp, David
2015-04-19
We study the use of sparse grids in the scenario generation (or discretization) problem in stochastic programming problems where the uncertainty is modeled using a continuous multivariate distribution. We show that, under a regularity assumption on the random function involved, the sequence of optimal objective function values of the sparse grid approximations converges to the true optimal objective function values as the number of scenarios increases. The rate of convergence is also established. We treat separately the special case when the underlying distribution is an affine transform of a product of univariate distributions, and show how the sparse grid method can be adapted to the distribution by the use of quadrature formulas tailored to the distribution. We numerically compare the performance of the sparse grid method using different quadrature rules with classic quasi-Monte Carlo (QMC) methods, optimal rank-one lattice rules, and Monte Carlo (MC) scenario generation, using a series of utility maximization problems with up to 160 random variables. The results show that the sparse grid method is very efficient, especially if the integrand is sufficiently smooth. In such problems the sparse grid scenario generation method is found to need several orders of magnitude fewer scenarios than MC and QMC scenario generation to achieve the same accuracy. As a result, it is indicated that the method scales well with the dimension of the distribution--especially when the underlying distribution is an affine transform of a product of univariate distributions, in which case the method appears scalable to thousands of random variables.
A Topological Criterion for Filtering Information in Complex Brain Networks
Latora, Vito; Chavez, Mario
2017-01-01
In many biological systems, the network of interactions between the elements can only be inferred from experimental measurements. In neuroscience, non-invasive imaging tools are extensively used to derive either structural or functional brain networks in-vivo. As a result of the inference process, we obtain a matrix of values corresponding to a fully connected and weighted network. To turn this into a useful sparse network, thresholding is typically adopted to cancel a percentage of the weakest connections. The structural properties of the resulting network depend on how much of the inferred connectivity is eventually retained. However, how to objectively fix this threshold is still an open issue. We introduce a criterion, the efficiency cost optimization (ECO), to select a threshold based on the optimization of the trade-off between the efficiency of a network and its wiring cost. We prove analytically and we confirm through numerical simulations that the connection density maximizing this trade-off emphasizes the intrinsic properties of a given network, while preserving its sparsity. Moreover, this density threshold can be determined a-priori, since the number of connections to filter only depends on the network size according to a power-law. We validate this result on several brain networks, from micro- to macro-scales, obtained with different imaging modalities. Finally, we test the potential of ECO in discriminating brain states with respect to alternative filtering methods. ECO advances our ability to analyze and compare biological networks, inferred from experimental data, in a fast and principled way. PMID:28076353
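The ECO-style filtering step described above can be sketched as keeping only the strongest connections until a prescribed density is reached. The paper derives the target density from the efficiency/wiring-cost trade-off; the average-degree constant used below is our reading of that result and should be treated as an assumption:

```python
# Hedged sketch of density-based filtering: keep the k strongest edges of a
# symmetric weighted matrix, where k is fixed by a target average node degree
# (the a-priori density threshold ECO argues for). The constant avg_degree=3
# is an assumption, not taken from the paper verbatim.
def eco_filter(weights, avg_degree=3):
    """weights: symmetric n x n list of lists; returns a thresholded copy."""
    n = len(weights)
    k = (avg_degree * n) // 2  # number of undirected edges to keep
    edges = sorted(((weights[i][j], i, j)
                    for i in range(n) for j in range(i + 1, n)),
                   reverse=True)[:k]
    out = [[0.0] * n for _ in range(n)]
    for w, i, j in edges:
        out[i][j] = out[j][i] = w
    return out
```

Because k depends only on n, the filtering level is fixed before looking at the data, which is the point of an a-priori criterion.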
Sparse Multivariate Autoregressive Modeling for Mild Cognitive Impairment Classification
Li, Yang; Wee, Chong-Yaw; Jie, Biao; Peng, Ziwen
2014-01-01
Brain connectivity networks derived from functional magnetic resonance imaging (fMRI) are becoming increasingly prevalent in research related to cognitive and perceptual processes. The capability to detect causal or effective connectivity is highly desirable for understanding the cooperative nature of the brain network, particularly when the ultimate goal is to obtain good performance in control-patient classification with biologically meaningful interpretations. Understanding directed functional interactions between brain regions via a brain connectivity network is a challenging task. Since many genetic and biomedical networks are intrinsically sparse, incorporating the sparsity property into connectivity modeling can make the derived models more biologically plausible. Accordingly, we propose an effective connectivity modeling of resting-state fMRI data based on the multivariate autoregressive (MAR) modeling technique, which is widely used to characterize temporal information of dynamic systems. This MAR modeling technique allows for the identification of effective connectivity using the Granger causality concept and reduces spurious causal connectivity in the assessment of directed functional interaction from fMRI data. A forward orthogonal least squares (OLS) regression algorithm is further used to construct a sparse MAR model. By applying the proposed modeling to mild cognitive impairment (MCI) classification, we identify several highly discriminative regions, including the middle cingulate gyrus, posterior cingulate gyrus, lingual gyrus and caudate regions, in line with results reported in previous findings. A relatively high classification accuracy of 91.89% is also achieved, an improvement of 5.4% over the fully-connected, non-directional Pearson-correlation-based functional connectivity approach. PMID:24595922
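The forward-selection idea behind sparse MAR fitting can be sketched in simplified form: greedily add the lagged regressor that most reduces the residual, up to a fixed number of terms. This omits the orthogonalization bookkeeping of true forward OLS, so treat it as an illustration of the idea rather than the paper's exact algorithm:

```python
# Hedged sketch: greedy forward selection of regressors. X would hold the
# lagged time courses of all candidate regions; the returned indices are the
# selected (nonzero) entries of one row of the sparse MAR coefficient matrix.
import numpy as np

def forward_ols(X, y, max_terms):
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    chosen, remaining = [], list(range(X.shape[1]))
    for _ in range(max_terms):
        best_j, best_r = None, float("inf")
        for j in remaining:
            cols = chosen + [j]
            coef, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            r = float(np.linalg.norm(X[:, cols] @ coef - y))
            if r < best_r:
                best_j, best_r = j, r
        chosen.append(best_j)
        remaining.remove(best_j)
    return chosen
```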
Language Recognition via Sparse Coding
2016-09-08
a posteriori (MAP) adaptation scheme that further optimizes the discriminative quality of sparse-coded speech features. We empirically validate the...significantly improve the discriminative quality of sparse-coded speech features. In Section 4, we evaluate the proposed approaches against an i-vector
Boguslawski, Bartosz; Gripon, Vincent; Seguin, Fabrice; Heitzmann, Frédéric
2016-02-01
Associative memories are data structures that allow retrieval of previously stored messages given part of their content. They thus behave similarly to the human brain's memory, which is capable, for instance, of retrieving the end of a song given its beginning. Among different families of associative memories, sparse ones are known to provide the best efficiency (ratio of the number of bits stored to that of the bits used). Recently, a new family of sparse associative memories achieving almost optimal efficiency has been proposed. Their structure, relying on binary connections and neurons, induces a direct mapping between input messages and stored patterns. Nevertheless, it is well known that nonuniformity of the stored messages can lead to a dramatic decrease in performance. In this paper, we show the impact of nonuniformity on the performance of this recent model, and we exploit the structure of the model to improve its performance in practical applications, where data are not necessarily uniform. In order to approach the performance of networks with uniformly distributed messages presented in theoretical studies, twin neurons are introduced. To assess the adapted model, twin neurons are used with real-world data to optimize power consumption of electronic circuits in practical test cases.
JiTTree: A Just-in-Time Compiled Sparse GPU Volume Data Structure.
Labschütz, Matthias; Bruckner, Stefan; Gröller, M Eduard; Hadwiger, Markus; Rautek, Peter
2016-01-01
Sparse volume data structures enable the efficient representation of large but sparse volumes in GPU memory for computation and visualization. However, the choice of a specific data structure for a given data set depends on several factors, such as the memory budget, the sparsity of the data, and data access patterns. In general, there is no single optimal sparse data structure, but a set of several candidates with individual strengths and drawbacks. One solution to this problem is hybrid data structures, which locally adapt themselves to the sparsity. However, they typically suffer from increased traversal overhead, which limits their utility in many applications. This paper presents JiTTree, a novel sparse hybrid volume data structure that uses just-in-time compilation to overcome these problems. By combining multiple sparse data structures and reducing traversal overhead, we leverage their individual advantages. We demonstrate that hybrid data structures adapt well to a large range of data sets. They are especially superior to other sparse data structures for data sets that locally vary in sparsity. Possible optimization criteria are memory, performance, and a combination thereof. Through just-in-time (JIT) compilation, JiTTree reduces the traversal overhead of the resulting optimal data structure. As a result, our hybrid volume data structure enables efficient computations on the GPU while being superior in terms of memory usage when compared to non-hybrid data structures.
Generative models for discovering sparse distributed representations.
Hinton, G E; Ghahramani, Z
1997-01-01
We describe a hierarchical, generative model that can be viewed as a nonlinear generalization of factor analysis and can be implemented in a neural network. The model uses bottom-up, top-down and lateral connections to perform Bayesian perceptual inference correctly. Once perceptual inference has been performed the connection strengths can be updated using a very simple learning rule that only requires locally available information. We demonstrate that the network learns to extract sparse, distributed, hierarchical representations. PMID:9304685
Saravanan, Chandra; Shao, Yihan; Baer, Roi; Ross, Philip N; Head-Gordon, Martin
2003-04-15
A sparse matrix multiplication scheme with multiatom blocks is reported, a tool that can be very useful for developing linear-scaling methods with atom-centered basis functions. Compared to conventional element-by-element sparse matrix multiplication schemes, efficiency is gained by the use of the highly optimized basic linear algebra subroutines (BLAS). However, some sparsity is lost in the multiatom blocking scheme because these matrix blocks will in general contain negligible elements. As a result, there is an optimal block size that minimizes the CPU time by balancing these two effects. In calculations on linear alkanes, polyglycines, estane polymers, and water clusters the optimal block size is found to be between 40 and 100 basis functions, where about 55-75% of the machine peak performance was achieved on an IBM RS6000 workstation. In these calculations, the blocked sparse matrix multiplications can be 10 times faster than a standard element-by-element sparse matrix package. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 618-622, 2003
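The blocking idea can be sketched with NumPy's dense matmul standing in for a BLAS GEMM: store only the blocks that contain nonzeros and multiply block-by-block. The block size and drop tolerance below are illustrative, not the paper's tuned values.

```python
import numpy as np

def to_blocks(A, bs):
    """Keep only blocks with at least one non-negligible element."""
    n = A.shape[0]
    blocks = {}
    for i in range(0, n, bs):
        for j in range(0, n, bs):
            B = A[i:i + bs, j:j + bs]
            if np.abs(B).max() > 1e-12:
                blocks[(i // bs, j // bs)] = B.copy()
    return blocks

def block_matmul(Ab, Bb, n, bs):
    """Multiply two block-sparse matrices; each block product is a dense
    GEMM, the operation BLAS accelerates."""
    C = np.zeros((n, n))
    for (i, k), Ablk in Ab.items():
        for (k2, j), Bblk in Bb.items():
            if k == k2:
                C[i * bs:(i + 1) * bs, j * bs:(j + 1) * bs] += Ablk @ Bblk
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)); A[0:4, 4:8] = 0.0   # block-sparse inputs
B = rng.standard_normal((8, 8)); B[4:8, 0:4] = 0.0
C = block_matmul(to_blocks(A, 2), to_blocks(B, 2), 8, 2)
```

The trade-off in the abstract is visible here: larger `bs` means fewer, bigger GEMMs (better machine utilization) but more zeros carried inside stored blocks.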
A generative model of whole-brain effective connectivity.
Frässle, Stefan; Lomakina, Ekaterina I; Kasper, Lars; Manjaly, Zina M; Leff, Alex; Pruessmann, Klaas P; Buhmann, Joachim M; Stephan, Klaas E
2018-05-25
The development of whole-brain models that can infer effective (directed) connection strengths from fMRI data represents a central challenge for computational neuroimaging. A recently introduced generative model of fMRI data, regression dynamic causal modeling (rDCM), moves towards this goal as it scales gracefully to very large networks. However, large-scale networks with thousands of connections are difficult to interpret; additionally, one typically lacks information (data points per free parameter) for precise estimation of all model parameters. This paper introduces sparsity constraints to the variational Bayesian framework of rDCM as a solution to these problems in the domain of task-based fMRI. This sparse rDCM approach enables highly efficient effective connectivity analyses in whole-brain networks and does not require a priori assumptions about the network's connectivity structure but prunes fully (all-to-all) connected networks as part of model inversion. Following the derivation of the variational Bayesian update equations for sparse rDCM, we use both simulated and empirical data to assess the face validity of the model. In particular, we show that it is feasible to infer effective connection strengths from fMRI data using a network with more than 100 regions and 10,000 connections. This demonstrates the feasibility of whole-brain inference on effective connectivity from fMRI data - in single subjects and with a run-time below 1 min when using parallelized code. We anticipate that sparse rDCM may find useful application in connectomics and clinical neuromodeling - for example, for phenotyping individual patients in terms of whole-brain network structure. Copyright © 2018. Published by Elsevier Inc.
Lee, Young-Beom; Lee, Jeonghyeon; Tak, Sungho; Lee, Kangjoo; Na, Duk L; Seo, Sang Won; Jeong, Yong; Ye, Jong Chul
2016-01-15
Recent studies of functional connectivity MR imaging have revealed that the default-mode network activity is disrupted in diseases such as Alzheimer's disease (AD). However, there is not yet a consensus on the preferred method for resting-state analysis. Because the brain is reported to have complex interconnected networks according to graph theoretical analysis, the independency assumption, as in the popular independent component analysis (ICA) approach, often does not hold. Here, rather than using the independency assumption, we present a new statistical parameter mapping (SPM)-type analysis method based on a sparse graph model where temporal dynamics at each voxel position are described as a sparse combination of global brain dynamics. In particular, a new concept of a spatially adaptive design matrix has been proposed to represent local connectivity that shares the same temporal dynamics. If we further assume that local network structures within a group are similar, the estimation problem of global and local dynamics can be solved using sparse dictionary learning for the concatenated temporal data across subjects. Moreover, under the homoscedasticity variance assumption across subjects and groups that is often used in SPM analysis, the aforementioned individual and group analyses using sparse dictionary learning can be accurately modeled by a mixed-effect model, which also facilitates a standard SPM-type group-level inference using summary statistics. Using an extensive resting fMRI data set obtained from normal, mild cognitive impairment (MCI), and Alzheimer's disease patient groups, we demonstrated that the changes in the default mode network extracted by the proposed method are more closely correlated with the progression of Alzheimer's disease. Copyright © 2015 Elsevier Inc. All rights reserved.
Highly parallel sparse Cholesky factorization
NASA Technical Reports Server (NTRS)
Gilbert, John R.; Schreiber, Robert
1990-01-01
Several fine grained parallel algorithms were developed and compared to compute the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special purpose algorithms in which the matrix structure conforms to the connection structure of the machine, the focus is on matrices with arbitrary sparsity structure. The most promising algorithm is one whose inner loop performs several dense factorizations simultaneously on a 2-D grid of processors. Virtually any massively parallel dense factorization algorithm can be used as the key subroutine. The sparse code attains execution rates comparable to those of the dense subroutine. Although at present architectural limitations prevent the dense factorization from realizing its potential efficiency, it is concluded that a regular data parallel architecture can be used efficiently to solve arbitrarily structured sparse problems. A performance model is also presented and it is used to analyze the algorithms.
Low-rank network decomposition reveals structural characteristics of small-world networks
NASA Astrophysics Data System (ADS)
Barranca, Victor J.; Zhou, Douglas; Cai, David
2015-12-01
Small-world networks occur naturally throughout biological, technological, and social systems. With their prevalence, it is particularly important to prudently identify small-world networks and further characterize their unique connection structure with respect to network function. In this work we develop a formalism for classifying networks and identifying small-world structure using a decomposition of network connectivity matrices into low-rank and sparse components, corresponding to connections within clusters of highly connected nodes and sparse interconnections between clusters, respectively. We show that the network decomposition is independent of node indexing and define associated bounded measures of connectivity structure, which provide insight into the clustering and regularity of network connections. While many existing network characterizations rely on constructing benchmark networks for comparison or fail to describe the structural properties of relatively densely connected networks, our classification relies only on the intrinsic network structure and is quite robust with respect to changes in connection density, producing stable results across network realizations. Using this framework, we analyze several real-world networks and reveal new structural properties, which are often indiscernible by previously established characterizations of network connectivity.
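The low-rank-plus-sparse split of a connectivity matrix can be illustrated with a naive alternating scheme built from the two proximal operators involved: singular value thresholding for the low-rank part and soft thresholding for the sparse part. This is a sketch, not the paper's formalism; the matrix sizes and the thresholds tau and lam are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical connectivity matrix: a rank-1 "cluster" part plus sparse
# inter-cluster links.
n = 20
u = rng.standard_normal(n)
L0 = np.outer(u, u) / n * 10.0            # low-rank component
S0 = np.zeros((n, n))
S0[rng.random((n, n)) < 0.05] = 3.0       # sparse interconnections
M = L0 + S0

def svt(A, tau):
    """Singular value thresholding: prox of the nuclear norm."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(A, lam):
    """Elementwise soft thresholding: prox of the l1 norm."""
    return np.sign(A) * np.maximum(np.abs(A) - lam, 0.0)

# Naive alternating scheme, not the full principal-component-pursuit ADMM,
# but it conveys the low-rank + sparse decomposition.
Lr, S = np.zeros_like(M), np.zeros_like(M)
for _ in range(100):
    Lr = svt(M - S, tau=1.0)
    S = soft(M - Lr, lam=0.3)
```

After the loop, `Lr` is low-rank (its small singular values are thresholded away) and `S` collects the sparse residual links, the two components the decomposition above assigns to intra-cluster and inter-cluster structure.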
Nonconvex Sparse Logistic Regression With Weakly Convex Regularization
NASA Astrophysics Data System (ADS)
Shen, Xinyue; Gu, Yuantao
2018-06-01
In this work we propose to fit a sparse logistic regression model by a weakly convex regularized nonconvex optimization problem. The idea is based on the finding that a weakly convex function as an approximation of the $\ell_0$ pseudo norm is able to better induce sparsity than the commonly used $\ell_1$ norm. For a class of weakly convex sparsity inducing functions, we prove the nonconvexity of the corresponding sparse logistic regression problem, and study its local optimality conditions and the choice of the regularization parameter to exclude trivial solutions. Despite the nonconvexity, a method based on proximal gradient descent is used to solve the general weakly convex sparse logistic regression, and its convergence behavior is studied theoretically. Then the general framework is applied to a specific weakly convex function, and a necessary and sufficient local optimality condition is provided. The solution method is instantiated in this case as an iterative firm-shrinkage algorithm, and its effectiveness is demonstrated in numerical experiments by both randomly generated and real datasets.
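The iterative firm-shrinkage algorithm mentioned above can be sketched directly: proximal gradient descent on the logistic loss, with the firm-shrinkage operator as the prox. The synthetic data, step size, and the (alpha, mu) parameters below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(5)

def firm(x, t, mu):
    """Firm shrinkage (prox of an MCP-like weakly convex penalty):
    zero below t, identity above mu, linear in between."""
    ax = np.abs(x)
    mid = np.sign(x) * mu * (ax - t) / (mu - t)
    return np.where(ax <= t, 0.0, np.where(ax <= mu, mid, x))

# Synthetic sparse logistic model: only the first two features matter.
n, d = 400, 8
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[0], w_true[1] = 1.5, -1.5
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

# Proximal gradient descent with the firm-shrinkage prox.
w = np.zeros(d)
step, alpha, mu = 0.5, 0.1, 0.5          # mu > step*alpha is required
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))     # logistic predictions
    grad = X.T @ (p - y) / n             # gradient of the average loss
    w = firm(w - step * grad, step * alpha, mu)
```

Unlike soft thresholding, coefficients above `mu` are left unshrunk, which is how the weakly convex penalty avoids the $\ell_1$ bias on the two large coefficients while still zeroing the noise features.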
Visual Tracking Based on Extreme Learning Machine and Sparse Representation
Wang, Baoxian; Tang, Linbo; Yang, Jinglin; Zhao, Baojun; Wang, Shuigen
2015-01-01
Existing sparse-representation-based visual trackers mostly suffer from being time-consuming and from poor robustness. To address these issues, a novel tracking method is presented that combines sparse representation with an emerging learning technique, namely the extreme learning machine (ELM). Specifically, visual tracking is divided into two consecutive processes. First, ELM is utilized to find the optimal separating hyperplane between the target observations and background ones. The trained ELM classification function is thus able to efficiently remove most of the candidate samples related to background contents, thereby reducing the total computational cost of the subsequent sparse representation. Second, to further combine ELM and sparse representation, the resulting confidence values (i.e., probabilities of being the target) of samples under the ELM classification function are used to construct a new manifold learning constraint term for the sparse representation framework, which tends to achieve more robust results. Moreover, the accelerated proximal gradient method is used to derive the optimal solution (in matrix form) of the constrained sparse tracking model. Additionally, the matrix-form solution allows the candidate samples to be calculated in parallel, leading to higher efficiency. Experiments demonstrate the effectiveness of the proposed tracker. PMID:26506359
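The ELM half of this pipeline is easy to sketch: a random, untrained hidden layer followed by closed-form least-squares output weights. The two-blob data and hidden-layer size below are stand-ins for the tracker's target/background observations, chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical classes of observations (target vs. background).
n = 200
X0 = rng.standard_normal((n, 2)) + np.array([-2.0, -2.0])
X1 = rng.standard_normal((n, 2)) + np.array([2.0, 2.0])
X = np.vstack([X0, X1])
y = np.r_[-np.ones(n), np.ones(n)]

# ELM: the hidden layer is random and never trained; only the output
# weights are learned, in closed form, by least squares.
hidden = 50
W = rng.standard_normal((2, hidden))
b = rng.standard_normal(hidden)
H = np.tanh(X @ W + b)                        # random feature map
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights

scores = np.tanh(X @ W + b) @ beta            # confidence values
acc = np.mean(np.sign(scores) == y)
```

Because training reduces to one `lstsq`, the classifier is cheap to refit per frame, which is what makes it attractive as a pre-filter before the more expensive sparse coding step.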
Stalder, Aurelien F; Schmidt, Michaela; Quick, Harald H; Schlamann, Marc; Maderwald, Stefan; Schmitt, Peter; Wang, Qiu; Nadar, Mariappan S; Zenge, Michael O
2015-12-01
To integrate, optimize, and evaluate a three-dimensional (3D) contrast-enhanced sparse MRA technique with iterative reconstruction on a standard clinical MR system. Data were acquired using a highly undersampled Cartesian spiral phyllotaxis sampling pattern and reconstructed directly on the MR system with an iterative SENSE technique. Undersampling, regularization, and number of iterations of the reconstruction were optimized and validated based on phantom experiments and patient data. Sparse MRA of the whole head (field of view: 265 × 232 × 179 mm(3) ) was investigated in 10 patient examinations. High-quality images with 30-fold undersampling, resulting in 0.7 mm isotropic resolution within 10 s acquisition, were obtained. After optimization of the regularization factor and of the number of iterations of the reconstruction, it was possible to reconstruct images with excellent quality within six minutes per 3D volume. Initial results of sparse contrast-enhanced MRA (CEMRA) in 10 patients demonstrated high-quality whole-head first-pass MRA for both the arterial and venous contrast phases. While sparse MRI techniques have not yet reached clinical routine, this study demonstrates the technical feasibility of high-quality sparse CEMRA of the whole head in a clinical setting. Sparse CEMRA has the potential to become a viable alternative where conventional CEMRA is too slow or does not provide sufficient spatial resolution. © 2014 Wiley Periodicals, Inc.
The application of nonlinear programming and collocation to optimal aeroassisted orbital transfers
NASA Astrophysics Data System (ADS)
Shi, Y. Y.; Nelson, R. L.; Young, D. H.; Gill, P. E.; Murray, W.; Saunders, M. A.
1992-01-01
Sequential quadratic programming (SQP) and collocation of the differential equations of motion were applied to optimal aeroassisted orbital transfers. The Optimal Trajectory by Implicit Simulation (OTIS) computer program codes with updated nonlinear programming code (NZSOL) were used as a testbed for the SQP nonlinear programming (NLP) algorithms. The state-of-the-art sparse SQP method is considered to be effective for solving large problems with a sparse matrix. Sparse optimizers are characterized in terms of memory requirements and computational efficiency. For the OTIS problems, less than 10 percent of the Jacobian matrix elements are nonzero. The SQP method encompasses two phases: finding an initial feasible point by minimizing the sum of infeasibilities and minimizing the quadratic objective function within the feasible region. The orbital transfer problem under consideration involves the transfer from a high energy orbit to a low energy orbit.
STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION.
Fan, Jianqing; Xue, Lingzhou; Zou, Hui
2014-06-01
Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions, and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue remains: it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap, open for over a decade, we provide a unified theory showing explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges; namely, it produces the same estimator in the next iteration. The general theory is demonstrated using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation, and sparse quantile regression. PMID:25598560
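The one-step local linear approximation (LLA) recipe can be sketched for the sparse linear regression case: fit a lasso initializer, reweight each coordinate by the SCAD penalty derivative at the initial estimate, and solve one weighted lasso. The coordinate-descent solver, problem sizes, and the value of lam are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def weighted_lasso(X, y, w, sweeps=200):
    """Coordinate descent for (1/2n)||y - X b||^2 + sum_j w_j |b_j|."""
    n, d = X.shape
    beta = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)
    r = y.copy()                       # residual y - X @ beta
    for _ in range(sweeps):
        for j in range(d):
            r += X[:, j] * beta[j]     # put coordinate j back into r
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - n * w[j], 0.0) / col_sq[j]
            r -= X[:, j] * beta[j]
    return beta

def scad_deriv(t, lam, a=3.7):
    """SCAD penalty derivative, used as the LLA coordinate weight."""
    return np.where(t <= lam, lam, np.maximum(a * lam - t, 0.0) / (a - 1))

# Sparse linear model: two active coefficients out of ten.
n, d = 200, 10
X = rng.standard_normal((n, d))
beta_true = np.zeros(d)
beta_true[0], beta_true[1] = 3.0, -2.0
y = X @ beta_true + 0.5 * rng.standard_normal(n)

lam = 0.2
beta_init = weighted_lasso(X, y, np.full(d, lam))                 # lasso start
beta_lla = weighted_lasso(X, y, scad_deriv(np.abs(beta_init), lam))  # one LLA step
```

Large initial coefficients receive zero weight (SCAD is flat beyond `a*lam`), so the LLA step removes the lasso's shrinkage bias on them while keeping the true zeros at zero, which is the mechanism behind the one-step oracle result.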
Signal processing using sparse derivatives with applications to chromatograms and ECG
NASA Astrophysics Data System (ADS)
Ning, Xiaoran
In this thesis, we investigate sparsity in the derivative domain. In particular, we focus on signals that possess sparse derivatives up to order M (M > 0). We formulate penalty functions and optimization problems that capture the properties of sparse derivatives and develop fast, computationally efficient solvers. These algorithms are then applied to two real-world applications. In the first application, we provide an algorithm that jointly addresses the problems of chromatogram baseline correction and noise reduction. The series of chromatogram peaks is modeled as sparse with sparse derivatives, and the baseline is modeled as a low-pass signal. A convex optimization problem is formulated so as to encapsulate these non-parametric models. To account for the positivity of chromatogram peaks, an asymmetric penalty function is utilized alongside symmetric penalty functions. A robust, computationally efficient, iterative algorithm is developed that is guaranteed to converge to the unique optimal solution. The approach, termed Baseline Estimation And Denoising with Sparsity (BEADS), is evaluated and compared with two state-of-the-art methods using both simulated and real chromatogram data. Promising results are obtained. In the second application, a novel electrocardiography (ECG) enhancement algorithm is designed, also based on sparse derivatives. In real medical environments, ECG signals are often contaminated by various kinds of noise or artifacts, for example, morphological changes due to motion artifact and non-stationary noise due to muscular contraction (EMG). Some of these contaminations severely affect the usefulness of ECG signals, especially when computer-aided algorithms are utilized.
Artifacts are reduced by solving the proposed convex l1 optimization problem, which models the clean ECG signal as a sum of two signals whose second- and third-order derivatives (differences), respectively, are sparse. Finally, the algorithm is applied to a QRS detection system and validated using the MIT-BIH Arrhythmia database (109,452 annotations), resulting in a sensitivity of Se = 99.87% and a positive predictivity of +P = 99.88%.
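A simplified version of the sparse-derivative idea, assuming a single component with a sparse second difference rather than the thesis's two-component model: minimize a least-squares data term plus an l1 penalty on the second difference, solved by an MM loop (iteratively reweighted least squares). The signal, noise level, and lam are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def sparse_deriv_denoise(y, lam=1.0, order=2, iters=30, eps=1e-6):
    """Solve min_x 0.5*||y - x||^2 + lam*||D x||_1, with D the order-th
    difference operator, via a quadratic MM majorizer of the l1 term."""
    n = len(y)
    D = np.diff(np.eye(n), n=order, axis=0)   # order-th difference matrix
    x = y.copy()
    for _ in range(iters):
        w = 1.0 / (np.abs(D @ x) + eps)       # MM reweighting
        x = np.linalg.solve(np.eye(n) + lam * D.T @ (w[:, None] * D), y)
    return x

# Piecewise-linear test signal: its second difference is sparse (one kink).
t = np.linspace(0, 1, 100)
clean = np.minimum(t, 1 - t)
noisy = clean + 0.05 * rng.standard_normal(100)
denoised = sparse_deriv_denoise(noisy)
```

Each MM iteration majorizes `|u|` by a quadratic at the current iterate, so the nonsmooth l1 problem reduces to a sequence of banded linear solves, the same structural trick that makes BEADS-style algorithms fast.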
Label consistent K-SVD: learning a discriminative dictionary for recognition.
Jiang, Zhuolin; Lin, Zhe; Davis, Larry S
2013-11-01
A label consistent K-SVD (LC-KSVD) algorithm to learn a discriminative dictionary for sparse coding is presented. In addition to using class labels of training data, we also associate label information with each dictionary item (columns of the dictionary matrix) to enforce discriminability in the sparse codes during the dictionary learning process. More specifically, we introduce a new label consistency constraint called "discriminative sparse-code error" and combine it with the reconstruction error and the classification error to form a unified objective function. The optimal solution is efficiently obtained using the K-SVD algorithm. Our algorithm jointly learns a single overcomplete dictionary and an optimal linear classifier. An incremental dictionary learning algorithm is also presented for situations with limited memory resources. The approach yields dictionaries such that feature points with the same class labels have similar sparse codes. Experimental results demonstrate that our algorithm outperforms many recently proposed sparse-coding techniques for face, action, scene, and object category recognition under the same learning conditions.
Ortiz, Andrés; Munilla, Jorge; Álvarez-Illán, Ignacio; Górriz, Juan M; Ramírez, Javier
2015-01-01
Alzheimer's Disease (AD) is the most common neurodegenerative disease in elderly people. Its development has been shown to be closely related to changes in the brain connectivity network and in brain activation patterns, along with structural changes caused by the neurodegenerative process. Methods to infer dependence between brain regions are usually derived from the analysis of covariance between activation levels in the different areas. However, these covariance-based methods are not able to estimate conditional independence between variables to factor out the influence of other regions. Conversely, models based on the inverse covariance, or precision matrix, such as Sparse Gaussian Graphical Models, allow revealing conditional independence between regions by estimating the covariance between two variables given the rest held constant. This paper uses Sparse Inverse Covariance Estimation (SICE) methods to learn undirected graphs in order to derive functional and structural connectivity patterns from Fludeoxyglucose (18F-FDG) Positron Emission Tomography (PET) data and segmented Magnetic Resonance images (MRI), drawn from the ADNI database, for Control, MCI (Mild Cognitive Impairment), and AD subjects. Sparse computation fits perfectly here, as brain regions usually interact with only a few other areas. The models clearly show different metabolic covariation patterns between subject groups, revealing the loss of strong connections in AD and MCI subjects when compared to Controls. Similarly, the variance between GM (Gray Matter) densities of different regions reveals different structural covariation patterns between the groups. Thus, the different connectivity patterns for Controls and AD are used in this paper to select regions of interest in PET and GM images with discriminative power for early AD diagnosis. Finally, functional and structural models are combined to improve classification accuracy.
The results obtained in this work show the usefulness of Sparse Gaussian Graphical models for revealing functional and structural connectivity patterns. The information provided by the sparse inverse covariance matrices is used not only in an exploratory way; we also propose a method to use it in a discriminative way: regression coefficients are used to compute reconstruction errors for the different classes, which are then fed into an SVM for classification. Classification experiments performed using 68 Control, 70 AD, and 111 MCI images and assessed by cross-validation show the effectiveness of the proposed method.
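The role of the precision matrix here can be illustrated on a toy chain graph, where only conditionally dependent "regions" have nonzero entries in the inverse covariance. For brevity this sketch thresholds the empirical precision matrix; a full SICE would instead solve the l1-penalized log-likelihood (graphical lasso). Sizes and thresholds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Ground-truth precision matrix: a chain of 5 "regions", so each region is
# conditionally dependent only on its neighbors.
P = np.eye(5) * 2.0
for i in range(4):
    P[i, i + 1] = P[i + 1, i] = -0.8

cov = np.linalg.inv(P)
X = rng.multivariate_normal(np.zeros(5), cov, size=5000)

# Empirical precision matrix and crude support recovery by thresholding;
# graphical lasso would impose the sparsity during estimation instead.
emp_prec = np.linalg.inv(np.cov(X.T))
support = np.abs(emp_prec) > 0.3
```

Note that the covariance itself is dense here (every pair of chain nodes is marginally correlated); it is the *inverse* covariance that exposes the sparse conditional-independence graph, which is exactly why SICE, rather than covariance analysis, is used above.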
Shen, Hong-Bin
2011-01-01
Modern network science has brought significant advances to our understanding of complex systems biology. As a representative model in systems biology, Protein Interaction Networks (PINs) are characterized by a remarkable modular structure, reflecting functional associations between their components. Many methods have been proposed to capture cohesive modules, in which there is a higher density of edges within modules than across them. Recent studies reveal that cohesively interacting modules of proteins are not a universal organizing principle in PINs, which has opened up new avenues for revisiting functional modules in PINs. In this paper, functional clusters in PINs are found to be able to form unorthodox structures defined as bi-sparse modules. In contrast to the traditional cohesive module, the nodes in a bi-sparse module are sparsely connected internally and densely connected with other bi-sparse or cohesive modules. We present a novel protocol called BinTree Seeking (BTS) for mining both bi-sparse and cohesive modules in PINs based on the Edge Density of Module (EDM) and matrix theory. BTS detects modules by depicting links and nodes rather than nodes alone, and its derivation procedure is performed entirely on the adjacency matrix of the network. The number of modules in a PIN can be determined automatically in the proposed BTS approach. BTS is tested on three real PINs and the results demonstrate that functional modules in PINs are not dominantly cohesive but can be sparse. BTS software and the supporting information are available at: www.csbio.sjtu.edu.cn/bioinf/BTS/. PMID:22140454
A Sparsity-Promoted Method Based on Majorization-Minimization for Weak Fault Feature Enhancement
Hao, Yansong; Song, Liuyang; Tang, Gang; Yuan, Hongfang
2018-01-01
Fault transient impulses induced by faulty components in rotating machinery usually contain substantial interference. Fault features are comparatively weak in the initial fault stage, which renders fault diagnosis more difficult. In this case, a sparse representation method based on the Majorization-Minimization (MM) algorithm is proposed to enhance weak fault features and extract them from strong background noise. However, the traditional MM algorithm suffers from two issues: the choice of sparse basis and complicated calculations. To address these challenges, a modified MM algorithm is proposed in which a sparse optimization objective function is first designed. Inspired by the Basis Pursuit (BP) model, the optimization function integrates an impulsive feature-preserving factor and a penalty function factor. Second, a modified majorization iterative method is applied to address the convex optimization problem of the designed function. A series of sparse coefficients, containing only transient components, is obtained through iteration. It is noteworthy that there is no need to select the sparse basis in the proposed iterative method because it is fixed as a unit matrix. The reconstruction step is then omitted, which can significantly increase detection efficiency. Eventually, envelope analysis of the sparse coefficients is performed to extract weak fault features. Simulated and experimental signals including bearings and gearboxes are employed to validate the effectiveness of the proposed method. In addition, comparisons are made to prove that the proposed method outperforms the traditional MM algorithm in terms of detection results and efficiency. PMID:29597280
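Because the sparse basis is fixed to the identity, the penalized problem the MM iteration solves, min_x 0.5*||y - x||^2 + lam*||x||_1, has a closed-form soft-threshold solution, which can be sketched on a simulated impulse train. The fault period, impulse amplitude, and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated fault signal: periodic transient impulses buried in noise.
n = 1000
y = 0.3 * rng.standard_normal(n)
impulses = np.arange(50, n, 200)     # hypothetical fault-impact locations
y[impulses] += 3.0

# With the sparse basis fixed to the identity, the l1-penalized problem
# has the closed-form soft-threshold solution the MM iteration converges to.
lam = 1.0
x = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

# Surviving large coefficients mark the transient components.
detected = np.nonzero(np.abs(x) > 1.0)[0]
```

The sparse coefficient vector `x` retains only the transients, after which an envelope spectrum of `x` would reveal the fault repetition frequency, as described above.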
A Sparsity-Promoted Method Based on Majorization-Minimization for Weak Fault Feature Enhancement.
Ren, Bangyue; Hao, Yansong; Wang, Huaqing; Song, Liuyang; Tang, Gang; Yuan, Hongfang
2018-03-28
Fault transient impulses induced by faulty components in rotating machinery usually contain substantial interference. Fault features are comparatively weak in the initial fault stage, which renders fault diagnosis more difficult. In this case, a sparse representation method based on the Majorization-Minimization (MM) algorithm is proposed to enhance weak fault features and extract them from strong background noise. However, the traditional MM algorithm suffers from two issues: the choice of sparse basis and complicated calculations. To address these challenges, a modified MM algorithm is proposed in which a sparse optimization objective function is first designed. Inspired by the Basis Pursuit (BP) model, the optimization function integrates an impulsive feature-preserving factor and a penalty function factor. Second, a modified majorization iterative method is applied to address the convex optimization problem of the designed function. A series of sparse coefficients, containing only transient components, is obtained through iteration. It is noteworthy that there is no need to select the sparse basis in the proposed iterative method because it is fixed as a unit matrix. The reconstruction step is then omitted, which can significantly increase detection efficiency. Eventually, envelope analysis of the sparse coefficients is performed to extract weak fault features. Simulated and experimental signals including bearings and gearboxes are employed to validate the effectiveness of the proposed method. In addition, comparisons are made to prove that the proposed method outperforms the traditional MM algorithm in terms of detection results and efficiency.
Adaptive low-rank subspace learning with online optimization for robust visual tracking.
Liu, Risheng; Wang, Di; Han, Yuzhuo; Fan, Xin; Luo, Zhongxuan
2017-04-01
In recent years, sparse and low-rank models have been widely used to formulate the appearance subspace for visual tracking. However, most existing methods only consider the sparsity or low-rankness of the coefficients, which is not sufficient for appearance subspace learning on complex video sequences. Moreover, as both the low-rank and the column-sparse measures are tightly related to all the samples in the sequences, it is challenging to incrementally solve optimization problems with both nuclear norm and column-sparse norm on sequentially obtained video data. To address the above limitations, this paper develops a novel low-rank subspace learning with adaptive penalization (LSAP) framework for subspace-based robust visual tracking. Different from previous work, which often simply decomposes observations as low-rank features and sparse errors, LSAP simultaneously learns the subspace basis, low-rank coefficients, and column-sparse errors to formulate the appearance subspace. Within the LSAP framework, we introduce a Hadamard-product-based regularization to incorporate rich generative/discriminative structure constraints to adaptively penalize the coefficients for subspace learning. It is shown that such adaptive penalization can significantly improve the robustness of LSAP on severely corrupted datasets. To utilize LSAP for online visual tracking, we also develop an efficient incremental optimization scheme for nuclear norm and column-sparse norm minimizations. Experiments on 50 challenging video sequences demonstrate that our tracker outperforms other state-of-the-art methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
Graph cuts via l1 norm minimization.
Bhusnurmath, Arvind; Taylor, Camillo J
2008-10-01
Graph cuts have become an increasingly important tool for solving a number of energy minimization problems in computer vision and other fields. In this paper, the graph cut problem is reformulated as an unconstrained l1 norm minimization that can be solved effectively using interior point methods. This reformulation exposes connections between graph cuts and other related continuous optimization problems. Ultimately, the problem reduces to solving a sequence of sparse linear systems involving the Laplacian of the underlying graph. The proposed procedure exploits the structure of these linear systems in a manner that is easily amenable to parallel implementations. Experimental results obtained by applying the procedure to graphs derived from image processing problems are provided.
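The reduction to sparse Laplacian systems can be illustrated with an off-the-shelf sparse direct solve; the path-graph Laplacian, right-hand side, and regularization below are hypothetical stand-ins for the weighted systems an interior-point iteration would produce.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def grid_laplacian(n):
    """Combinatorial Laplacian L = D - A of an n-node path graph (1-D grid)."""
    main = np.full(n, 2.0)
    main[0] = main[-1] = 1.0          # endpoints have degree 1
    off = -np.ones(n - 1)
    return sp.diags([off, main, off], [-1, 0, 1], format="csc")

# Each interior-point step reduces to a sparse Laplacian solve; here a
# single regularized example system with a unit source and sink:
n = 100
L = grid_laplacian(n)
b = np.zeros(n)
b[0], b[-1] = 1.0, -1.0
x = spsolve(L + 1e-6 * sp.eye(n, format="csc"), b)
```

Because the system is tridiagonal here, the direct factorization is linear-time; for image-derived grid graphs the same call exploits the 2-D sparsity pattern.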
NASA Astrophysics Data System (ADS)
Tamamitsu, Miu; Zhang, Yibo; Wang, Hongda; Wu, Yichen; Ozcan, Aydogan
2018-02-01
The Sparsity of the Gradient (SoG) is a robust autofocusing criterion for holography, where the gradient modulus of the complex refocused hologram is calculated, on which a sparsity metric is applied. Here, we compare two different choices of sparsity metrics used in SoG, specifically, the Gini index (GI) and the Tamura coefficient (TC), for holographic autofocusing on dense/connected or sparse samples. We provide a theoretical analysis predicting that for uniformly distributed image data, TC and GI exhibit similar behavior, while for naturally sparse images containing few high-valued signal entries and many low-valued noisy background pixels, TC is more sensitive to distribution changes in the signal and more resistant to background noise. These predictions are confirmed by experimental results using SoG-based holographic autofocusing on dense and connected samples (such as stained breast tissue sections) as well as highly sparse samples (such as isolated Giardia lamblia cysts). Through these experiments, we found that SoG with TC (ToG) and SoG with GI (GoG) offer almost identical autofocusing performance on dense and connected samples, whereas for naturally sparse samples, GoG should be calculated on a relatively small region of interest (ROI) closely surrounding the object, while ToG offers more flexibility in choosing a larger ROI containing more background pixels.
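The two sparsity metrics can be written down directly. The Gini-index form below follows the common Hurley-Rickard definition and the Tamura coefficient is taken as sqrt(std/mean); normalization conventions vary across papers, and the toy arrays are illustrative, not hologram data.

```python
import numpy as np

def tamura_coefficient(x):
    # TC = sqrt(std / mean) of the (nonnegative) gradient modulus.
    x = np.abs(x).ravel()
    return np.sqrt(x.std() / x.mean())

def gini_index(x):
    # Hurley-Rickard Gini index: 0 for uniform data, approaching 1 for
    # data dominated by a few large entries.
    a = np.sort(np.abs(x).ravel())
    n = a.size
    k = np.arange(1, n + 1)
    return 1.0 - 2.0 * np.sum((a / a.sum()) * (n - k + 0.5) / n)

sparse = np.array([0.0, 0.0, 0.0, 1.0])   # few high-valued signal entries
dense = np.array([0.5, 0.5, 0.5, 0.5])    # uniformly distributed data
```

Both metrics rank the sparse array above the uniform one; their differing sensitivity to low-valued background pixels is what drives the ROI recommendations in the abstract.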
Multi-objective based spectral unmixing for hyperspectral images
NASA Astrophysics Data System (ADS)
Xu, Xia; Shi, Zhenwei
2017-02-01
Sparse hyperspectral unmixing assumes that each observed pixel can be expressed as a linear combination of a few pure spectra from a spectral library known a priori. Sparse unmixing is challenging, since it is usually cast as an NP-hard l0 norm based optimization problem. Existing methods usually relax the original l0 norm; however, the relaxation may introduce sensitive weighting parameters and additional calculation error. In this paper, we propose a novel multi-objective based algorithm to solve the sparse unmixing problem without any relaxation. We transform sparse unmixing into a multi-objective optimization problem with two correlative objectives: minimizing the reconstruction error and controlling the endmember sparsity. To improve the efficiency of the multi-objective optimization, a population-based random flipping strategy is designed. Moreover, we theoretically prove that the proposed method is able to recover a guaranteed approximate solution from the spectral library within a limited number of iterations. The proposed method can deal with the l0 norm directly via binary coding of the spectral signatures in the library. Experiments on both synthetic and real hyperspectral datasets demonstrate the effectiveness of the proposed method.
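The binary-coding idea can be sketched as a toy random-flip search over endmember supports that keeps a Pareto archive of the two objectives. The library, mixture, and flip schedule below are synthetic stand-ins, and this single-bit-flip loop is a much simplified version of the paper's population-based strategy.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((20, 8))                 # hypothetical library: 20 bands, 8 spectra
x_true = np.zeros(8)
x_true[[1, 5]] = [0.6, 0.4]
y = A @ x_true                          # observed pixel: mixture of 2 endmembers

def objectives(support):
    """The two correlative objectives: reconstruction error and l0 sparsity."""
    if not support.any():
        return float(np.linalg.norm(y)), 0
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    return float(np.linalg.norm(A[:, support] @ coef - y)), int(support.sum())

def dominates(f, g):
    # f Pareto-dominates g: no worse in both objectives, strictly better in one.
    return f[0] <= g[0] and f[1] <= g[1] and f != g

# Random bit-flipping over binary endmember codes, keeping a Pareto archive.
archive = [np.zeros(8, dtype=bool)]
for _ in range(500):
    s = archive[rng.integers(len(archive))].copy()
    s[rng.integers(8)] ^= True          # flip one endmember in or out of the support
    f = objectives(s)
    if not any(dominates(objectives(t), f) for t in archive):
        archive = [t for t in archive if not dominates(f, objectives(t))] + [s]
```

The archive approximates the error-versus-sparsity trade-off front; the true two-atom support reconstructs the pixel exactly and so sits at the front's corner.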
Das, Anup; Sampson, Aaron L.; Lainscsek, Claudia; Muller, Lyle; Lin, Wutu; Doyle, John C.; Cash, Sydney S.; Halgren, Eric; Sejnowski, Terrence J.
2017-01-01
The correlation method from brain imaging has been used to estimate functional connectivity in the human brain. However, two brain regions may show very high correlation even when they are not directly connected, because both interact strongly with common input from a third region. One previously proposed solution to this problem is to use a sparse regularized inverse covariance matrix or precision matrix (SRPM), assuming that the connectivity structure is sparse. This method yields partial correlations to measure strong direct interactions between pairs of regions while simultaneously removing the influence of the rest of the regions, thus identifying regions that are conditionally independent. To test our methods, we first demonstrated conditions under which the SRPM method could indeed find the true physical connection between a pair of nodes for a spring-mass example and an RC circuit example. The recovery of the connectivity structure using the SRPM method can be explained by energy models using the Boltzmann distribution. We then demonstrated the application of the SRPM method for estimating brain connectivity during stage 2 sleep spindles from human electrocorticography (ECoG) recordings using an 8 × 8 electrode array. The ECoG recordings that we analyzed were from a 32-year-old male patient with long-standing pharmaco-resistant left temporal lobe complex partial epilepsy. Sleep spindles were automatically detected using delay differential analysis and then analyzed with SRPM and the Louvain method for community detection. We found spatially localized brain networks within and between neighboring cortical areas during spindles, in contrast to the case when sleep spindles were not present. PMID:28095202
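The core idea, partial correlation via the inverse covariance matrix, can be shown without the sparsity penalty. SRPM additionally applies l1 regularization (graphical-lasso style) to this inverse, which the unregularized numpy sketch below omits; the three-region chain is a synthetic stand-in for common-input confounding.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Chain: region A drives B, and B drives C; A and C are not directly connected.
a = rng.standard_normal(n)
b = a + 0.5 * rng.standard_normal(n)
c = b + 0.5 * rng.standard_normal(n)
X = np.column_stack([a, b, c])

corr = np.corrcoef(X, rowvar=False)            # marginal correlations
prec = np.linalg.inv(np.cov(X, rowvar=False))  # precision matrix
# Partial correlations from the precision matrix (off-diagonal entries):
d = np.sqrt(np.diag(prec))
partial = -prec / np.outer(d, d)
```

A and C correlate strongly through B, yet their partial correlation (conditioning on B) is near zero, which is exactly the indirect-connection artifact the precision matrix removes.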
Embedded sparse representation of fMRI data via group-wise dictionary optimization
NASA Astrophysics Data System (ADS)
Zhu, Dajiang; Lin, Binbin; Faskowitz, Joshua; Ye, Jieping; Thompson, Paul M.
2016-03-01
Sparse learning enables dimension reduction and efficient modeling of high dimensional signals and images, but it may need to be tailored to best suit specific applications and datasets. Here we use sparse learning to efficiently represent functional magnetic resonance imaging (fMRI) data from the human brain. We propose a novel embedded sparse representation (ESR) to identify the most consistent dictionary atoms across different brain datasets via an iterative group-wise dictionary optimization procedure. In this framework, we introduce additional criteria to make the learned dictionary atoms more consistent across different subjects. We successfully identified four common dictionary atoms that follow the external task stimuli with very high accuracy. After projecting the corresponding coefficient vectors back into the 3-D brain volume space, the spatial patterns are also consistent with traditional fMRI analysis results. Our framework reveals common features of brain activation in a population, as a new, efficient fMRI analysis method.
Miao, Minmin; Zeng, Hong; Wang, Aimin; Zhao, Changsen; Liu, Feixiang
2017-02-15
Common spatial pattern (CSP) is the most widely used method in motor imagery based brain-computer interface (BCI) systems. In the conventional CSP algorithm, pairs of eigenvectors corresponding to both extreme eigenvalues are selected to construct the optimal spatial filter. In addition, an appropriate selection of subject-specific time segments and frequency bands plays an important role in its successful application. This study proposes to optimize spatial-frequency-temporal patterns for discriminative feature extraction. Spatial optimization is implemented by channel selection and by finding discriminative spatial filters adaptively on each time-frequency segment. A novel Discernibility of Feature Sets (DFS) criterion is designed for spatial filter optimization. Besides, discriminative features located in multiple time-frequency segments are selected automatically by the proposed sparse time-frequency segment common spatial pattern (STFSCSP) method, which exploits sparse regression for significant feature selection. Finally, a weight determined by the sparse coefficient is assigned to each selected CSP feature, and we propose a Weighted Naïve Bayesian Classifier (WNBC) for classification. Experimental results on two public EEG datasets demonstrate that optimizing spatial-frequency-temporal patterns in a data-driven manner for discriminative feature extraction greatly improves the classification performance. The proposed method gives significantly better classification accuracies in comparison with several competing methods in the literature. The proposed approach is a promising candidate for future BCI systems. Copyright © 2016 Elsevier B.V. All rights reserved.
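Conventional CSP's extreme-eigenvector selection can be sketched as a generalized eigendecomposition of the two class covariances; the four-channel toy data, variance levels, and trial counts below are hypothetical, not the EEG datasets used in the paper.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Two classes of 4-channel trials: class 1 has high variance on channel 0,
# class 2 on channel 3 (a toy stand-in for two motor-imagery conditions).
def trials(active_channel, n=50):
    data = []
    for _ in range(n):
        x = rng.standard_normal((4, 200))
        x[active_channel] *= 5.0
        data.append(x)
    return data

def mean_cov(data):
    # Trace-normalized spatial covariance, averaged over trials.
    return np.mean([x @ x.T / np.trace(x @ x.T) for x in data], axis=0)

C1, C2 = mean_cov(trials(0)), mean_cov(trials(3))

# CSP: generalized eigendecomposition of (C1, C1 + C2). Eigenvectors with
# extreme eigenvalues maximize variance for one class and minimize it
# for the other; the pair of extremes forms the spatial filter.
w, V = eigh(C1, C1 + C2)
filters = V[:, [0, -1]].T
```

The eigenvalues lie in (0, 1); values near 1 (or 0) mark filters whose output variance is dominated by class 1 (or class 2), which is the discriminability that log-variance CSP features then exploit.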
Tattini, Lorenzo; Olmi, Simona; Torcini, Alessandro
2012-06-01
In this article, we investigate the role of connectivity in promoting coherent activity in excitatory neural networks. In particular, we would like to understand if the onset of collective oscillations can be related to a minimal average connectivity and how this critical connectivity depends on the number of neurons in the networks. For these purposes, we consider an excitatory random network of leaky integrate-and-fire pulse coupled neurons. The neurons are connected as in a directed Erdös-Renyi graph with average connectivity
A Method for Optimizing Non-Axisymmetric Liners for Multimodal Sound Sources
NASA Technical Reports Server (NTRS)
Watson, W. R.; Jones, M. G.; Parrott, T. L.; Sobieski, J.
2002-01-01
Central processor unit times and memory requirements for a commonly used solver are compared with those of a state-of-the-art, parallel, sparse solver. The sparse solver is then used in conjunction with three constrained optimization methodologies to assess the relative merits of non-axisymmetric versus axisymmetric liner concepts for improving liner acoustic suppression. This assessment is performed with a multimodal noise source (with equal mode amplitudes and phases) in a finite-length rectangular duct without flow. The sparse solver is found to reduce memory requirements by a factor of five and central processing time by a factor of eleven when compared with the commonly used solver. Results show that the optimum impedance of the uniform liner is dominated by the least attenuated mode, whose attenuation is maximized by the Cremer optimum impedance. An optimized, four-segmented liner with impedance segments in a checkerboard arrangement is found to be inferior to an optimized spanwise segmented liner. This optimized spanwise segmented liner is shown to attenuate substantially more sound than the optimized uniform liner and tends to be more effective at the higher frequencies. The most important result of this study is the discovery that, when optimized, a spanwise segmented liner with two segments gives attenuations equal to or substantially greater than an optimized axially segmented liner with the same number of segments.
Robustness-Based Design Optimization Under Data Uncertainty
NASA Technical Reports Server (NTRS)
Zaman, Kais; McDonald, Mark; Mahadevan, Sankaran; Green, Lawrence
2010-01-01
This paper proposes formulations and algorithms for design optimization under both aleatory uncertainty (i.e., natural or physical variability) and epistemic uncertainty (i.e., imprecise probabilistic information), from the perspective of system robustness. The proposed formulations deal with epistemic uncertainty arising from both sparse and interval data without any assumption about the probability distributions of the random variables. A decoupled approach is proposed in this paper to un-nest the robustness-based design from the analysis of non-design epistemic variables to achieve computational efficiency. The proposed methods are illustrated for the upper stage design problem of a two-stage-to-orbit (TSTO) vehicle, where the information on the random design inputs is only available as sparse point and/or interval data. As collecting more data reduces uncertainty but increases cost, the effect of sample size on the optimality and robustness of the solution is also studied. A method is developed to determine the optimal sample size for sparse point data that leads to solutions of the design problem that are least sensitive to variations in the input random variables.
A denoising algorithm for CT image using low-rank sparse coding
NASA Astrophysics Data System (ADS)
Lei, Yang; Xu, Dong; Zhou, Zhengyang; Wang, Tonghe; Dong, Xue; Liu, Tian; Dhabaan, Anees; Curran, Walter J.; Yang, Xiaofeng
2018-03-01
We propose a denoising method for CT images based on low-rank sparse coding. The proposed method constructs an adaptive dictionary of image patches and estimates the sparse coding regularization parameters using a Bayesian interpretation. A low-rank approximation approach is used to simultaneously construct the dictionary and achieve sparse representation through clustering similar image patches. A variable-splitting scheme and a quadratic optimization are used to reconstruct the CT image from the achieved sparse coefficients. We tested this denoising method using phantom, brain, and abdominal CT images. The experimental results showed that the proposed method delivers state-of-the-art denoising performance, both in terms of objective criteria and visual quality.
Brief announcement: Hypergraph partitioning for parallel sparse matrix-matrix multiplication
Ballard, Grey; Druinsky, Alex; Knight, Nicholas; ...
2015-01-01
The performance of parallel algorithms for sparse matrix-matrix multiplication is typically determined by the amount of interprocessor communication performed, which in turn depends on the nonzero structure of the input matrices. In this paper, we characterize the communication cost of a sparse matrix-matrix multiplication algorithm in terms of the size of a cut of an associated hypergraph that encodes the computation for a given input nonzero structure. Obtaining an optimal algorithm corresponds to solving a hypergraph partitioning problem. Furthermore, our hypergraph model generalizes several existing models for sparse matrix-vector multiplication, and we can leverage hypergraph partitioners developed for that computation to improve application-specific algorithms for multiplying sparse matrices.
Sparse Substring Pattern Set Discovery Using Linear Programming Boosting
NASA Astrophysics Data System (ADS)
Kashihara, Kazuaki; Hatano, Kohei; Bannai, Hideo; Takeda, Masayuki
In this paper, we consider finding a small set of substring patterns which classifies the given documents well. We formulate the problem as a 1-norm soft margin optimization problem where each dimension corresponds to a substring pattern. We then solve this problem using LPBoost and an optimal substring discovery algorithm. Since the problem is a linear program, the resulting solution is likely to be sparse, which is useful for feature selection. We evaluate the proposed method on real data such as movie reviews.
Yao, Jincao; Yu, Huimin; Hu, Roland
2017-01-01
This paper introduces a new implicit-kernel-sparse-shape-representation-based object segmentation framework. Given an input object whose shape is similar to some of the elements in the training set, the proposed model can automatically find a cluster of implicit kernel sparse neighbors to approximately represent the input shape and guide the segmentation. A distance-constrained probabilistic definition together with a dualization energy term is developed to connect high-level shape representation and low-level image information. We theoretically prove that our model not only derives from two projected convex sets but is also equivalent to a sparse-reconstruction-error-based representation in the Hilbert space. Finally, a "wake-sleep"-based segmentation framework is applied to drive the evolutionary curve to recover the original shape of the object. We test our model on two public datasets. Numerical experiments on both synthetic images and real applications show the superior capabilities of the proposed framework.
Active learning of cortical connectivity from two-photon imaging data.
Bertrán, Martín A; Martínez, Natalia L; Wang, Ye; Dunson, David; Sapiro, Guillermo; Ringach, Dario
2018-01-01
Understanding how groups of neurons interact within a network is a fundamental question in system neuroscience. Instead of passively observing the ongoing activity of a network, we can typically perturb its activity, either by external sensory stimulation or directly via techniques such as two-photon optogenetics. A natural question is how to use such perturbations to identify the connectivity of the network efficiently. Here we introduce a method to infer sparse connectivity graphs from in-vivo, two-photon imaging of population activity in response to external stimuli. A novel aspect of the work is the introduction of a recommended distribution, incrementally learned from the data, to optimally refine the inferred network. Unlike existing system identification techniques, this "active learning" method automatically focuses its attention on key undiscovered areas of the network, instead of targeting global uncertainty indicators like parameter variance. We show how active learning leads to faster inference while at the same time providing confidence intervals for the network parameters. We present simulations on artificial small-world networks to validate the methods and apply the method to real data. Analysis of the frequency of recovered motifs shows that cortical networks are consistent with a small-world topology model.
DANoC: An Efficient Algorithm and Hardware Codesign of Deep Neural Networks on Chip.
Zhou, Xichuan; Li, Shengli; Tang, Fang; Hu, Shengdong; Lin, Zhi; Zhang, Lei
2017-07-18
Deep neural networks (NNs) are the state-of-the-art models for understanding the content of images and videos. However, implementing deep NNs in embedded systems is a challenging task, e.g., a typical deep belief network could exhaust gigabytes of memory and result in bandwidth and computational bottlenecks. To address this challenge, this paper presents an algorithm and hardware codesign for efficient deep neural computation. A hardware-oriented deep learning algorithm, named the deep adaptive network, is proposed to explore the sparsity of neural connections. By adaptively removing the majority of neural connections and robustly representing the reserved connections using binary integers, the proposed algorithm could save up to 99.9% memory utility and computational resources without undermining classification accuracy. An efficient sparse-mapping-memory-based hardware architecture is proposed to fully take advantage of the algorithmic optimization. Different from traditional Von Neumann architecture, the deep-adaptive network on chip (DANoC) brings communication and computation in close proximity to avoid power-hungry parameter transfers between on-board memory and on-chip computational units. Experiments over different image classification benchmarks show that the DANoC system achieves competitively high accuracy and efficiency comparing with the state-of-the-art approaches.
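The prune-then-quantize idea behind the deep adaptive network can be mimicked in a few lines; the keep fraction, int8 representation, and random weights below are illustrative stand-ins, not the DANoC scheme itself.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))   # toy dense layer weights

# 1) Adaptively remove the majority of connections by magnitude.
keep = np.abs(W) >= np.quantile(np.abs(W), 0.99)   # retain only ~1% of weights

# 2) Represent the surviving connections with small integers
#    (one shared scale factor per layer).
scale = np.abs(W[keep]).max() / 127.0
W_q = np.where(keep, np.round(W / scale), 0).astype(np.int8)

# Inference then needs only the sparse integer weights plus one float scale.
W_hat = W_q.astype(float) * scale
```

Storing ~1% of the weights at 8 bits instead of 100% at 32 bits is where memory savings on the order of 99% come from; the on-chip architecture then exploits the sparse mapping to skip the zeroed connections.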
Optimal sparse approximation with integrate and fire neurons.
Shapero, Samuel; Zhu, Mengchen; Hasler, Jennifer; Rozell, Christopher
2014-08-01
Sparse approximation is a hypothesized coding strategy where a population of sensory neurons (e.g. V1) encodes a stimulus using as few active neurons as possible. We present the Spiking LCA (locally competitive algorithm), a rate-encoded Spiking Neural Network (SNN) of integrate and fire neurons that calculates sparse approximations. The Spiking LCA is designed to be equivalent to the nonspiking LCA, an analog dynamical system that converges exponentially fast on l1-norm sparse approximations. We show that the firing rate of the Spiking LCA converges on the same solution as the analog LCA, with an error inversely proportional to the sampling time. We simulate in NEURON a network of 128 neuron pairs that encode 8 × 8 pixel image patches, demonstrating that the network converges to nearly optimal encodings within 20 ms of biological time. We also show that when using more biophysically realistic parameters in the neurons, the gain function encourages additional l0-norm sparsity in the encoding, relative both to ideal neurons and digital solvers.
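The analog (nonspiking) LCA dynamics that the spiking network emulates can be integrated directly with Euler steps. The dictionary, threshold, and step sizes below are illustrative; for an orthonormal dictionary the network should settle on the soft-thresholded coefficients.

```python
import numpy as np

def lca(Phi, s, lam=0.2, dt=0.01, n_steps=3000):
    """Analog locally competitive algorithm for
    min_a 0.5*||s - Phi a||^2 + lam*||a||_1.
    Active nodes inhibit each other through the Gram matrix Phi^T Phi."""
    b = Phi.T @ s                              # driving input
    G = Phi.T @ Phi - np.eye(Phi.shape[1])     # lateral inhibition weights
    u = np.zeros(Phi.shape[1])                 # membrane potentials
    for _ in range(n_steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # thresholded rates
        u += dt * (b - u - G @ a)              # Euler step of the LCA ODE
    return a

# Orthonormal dictionary: lateral inhibition vanishes and the LCA settles
# on the soft-thresholded solution.
theta = 0.7
Phi = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
s = Phi @ np.array([1.0, 0.0])   # signal built from one dictionary atom
a = lca(Phi, s, lam=0.2)
# a converges to [0.8, 0.0]: the active coefficient shrunk by lam
```

The spiking version of the abstract replaces the continuous rates `a` with integrate-and-fire spike trains whose firing rates converge on the same fixed point.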
Insights from Classifying Visual Concepts with Multiple Kernel Learning
Binder, Alexander; Nakajima, Shinichi; Kloft, Marius; Müller, Christina; Samek, Wojciech; Brefeld, Ulf; Müller, Klaus-Robert; Kawanabe, Motoaki
2012-01-01
Combining information from various image features has become a standard technique in concept recognition tasks. However, the optimal way of fusing the resulting kernel functions is usually unknown in practical applications. Multiple kernel learning (MKL) techniques allow one to determine an optimal linear combination of such similarity matrices. Classical approaches to MKL promote sparse mixtures. Unfortunately, 1-norm regularized MKL variants are often observed to be outperformed by an unweighted sum kernel. The main contributions of this paper are the following: we apply a recently developed non-sparse MKL variant to state-of-the-art concept recognition tasks from the application domain of computer vision. We provide insights on the benefits and limits of non-sparse MKL and compare it against its direct competitors, the sum-kernel SVM and sparse MKL. We report empirical results for the PASCAL VOC 2009 Classification and ImageCLEF2010 Photo Annotation challenge data sets. Data sets (kernel matrices) as well as further information are available at http://doc.ml.tu-berlin.de/image_mkl/ (Accessed 2012 Jun 25). PMID:22936970
Sparse Reconstruction Techniques in MRI: Methods, Applications, and Challenges to Clinical Adoption
Yang, Alice Chieh-Yu; Kretzler, Madison; Sudarski, Sonja; Gulani, Vikas; Seiberlich, Nicole
2016-01-01
The family of sparse reconstruction techniques, including the recently introduced compressed sensing framework, has been extensively explored to reduce scan times in Magnetic Resonance Imaging (MRI). While there are many different methods that fall under the general umbrella of sparse reconstruction, they all rely on the idea that a priori information about the sparsity of MR images can be employed to reconstruct full images from undersampled data. This review describes the basic ideas behind sparse reconstruction techniques, how they could be applied to improve MR imaging, and the open challenges to their general adoption in a clinical setting. The fundamental principles underlying different classes of sparse reconstruction techniques are examined, and the requirements that each makes on the undersampled data are outlined. Applications that could potentially benefit from the accelerations that sparse reconstruction could provide are described, and clinical studies using sparse reconstructions are reviewed. Lastly, technical and clinical challenges to widespread implementation of sparse reconstruction techniques, including optimization, reconstruction times, artifact appearance, and comparison with current gold standards, are discussed. PMID:27003227
Non-identical multiplexing promotes chimera states
NASA Astrophysics Data System (ADS)
Ghosh, Saptarshi; Zakharova, Anna; Jalan, Sarika
2018-01-01
We present the emergence of chimeras, a state referring to coexistence of partly coherent, partly incoherent dynamics in networks of identical oscillators, in a multiplex network consisting of two non-identical layers which are interconnected. We demonstrate that the parameter range displaying the chimera state in the homogeneous first layer of the multiplex networks can be tuned by changing the link density or connection architecture of the same nodes in the second layer. We focus on the impact of the interconnected second layer on the enlargement or shrinking of the coupling regime for which chimeras are displayed in the homogeneous first layer. We find that a denser homogeneous second layer promotes chimera in a sparse first layer, where chimeras do not occur in isolation. Furthermore, while a dense connection density is required for the second layer if it is homogeneous, this is not true if the second layer is inhomogeneous. We demonstrate that a sparse inhomogeneous second layer which is common in real-world complex systems can promote chimera states in a sparse homogeneous first layer.
Newton-based optimization for Kullback-Leibler nonnegative tensor factorizations
Plantenga, Todd; Kolda, Tamara G.; Hansen, Samantha
2015-04-30
Tensor factorizations with nonnegativity constraints have found application in analysing data from cyber traffic, social networks, and other areas. We consider application data best described as being generated by a Poisson process (e.g. count data), which leads to sparse tensors that can be modelled by sparse factor matrices. In this paper, we investigate efficient techniques for computing an appropriate canonical polyadic tensor factorization based on the Kullback–Leibler divergence function. We propose novel subproblem solvers within the standard alternating block variable approach. Our new methods exploit structure and reformulate the optimization problem as small independent subproblems. We employ bound-constrained Newton and quasi-Newton methods. Finally, we compare our algorithms against other codes, demonstrating superior speed for high accuracy results and the ability to quickly find sparse solutions.
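For context, the KL-divergence objective can be illustrated on a matrix (rather than tensor) with the classical Lee-Seung multiplicative updates, which serve as the simple baseline that Newton-type block solvers aim to beat; the toy count matrix and iteration count are illustrative.

```python
import numpy as np

def nmf_kl(V, rank, n_iter=300, seed=0):
    """Lee-Seung multiplicative updates for KL-divergence NMF:
    min_{W,H >= 0}  sum_ij V_ij*log(V_ij/(WH)_ij) - V_ij + (WH)_ij.
    A matrix analogue of the Poisson/KL tensor objective in the abstract."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 0.1
    H = rng.random((rank, n)) + 0.1
    for _ in range(n_iter):
        WH = W @ H + 1e-12
        W *= ((V / WH) @ H.T) / H.sum(axis=1)          # update W, H fixed
        WH = W @ H + 1e-12
        H *= (W.T @ (V / WH)) / W.sum(axis=0)[:, None]  # update H, W fixed
    return W, H

V = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 0.0],
              [0.0, 0.0, 3.0]])   # rank-2, count-like data with exact zeros
W, H = nmf_kl(V, rank=2)
```

Each multiplicative step is itself a majorization-minimization move on one block; the paper's bound-constrained Newton solvers replace these slow element-wise updates with second-order subproblem solves.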
SPARSE: quadratic time simultaneous alignment and folding of RNAs without sequence-based heuristics
Will, Sebastian; Otto, Christina; Miladi, Milad; Möhl, Mathias; Backofen, Rolf
2015-01-01
Motivation: RNA-Seq experiments have revealed a multitude of novel ncRNAs. The gold standard for their analysis based on simultaneous alignment and folding suffers from extreme time complexity of O(n6). Subsequently, numerous faster ‘Sankoff-style’ approaches have been suggested. Commonly, the performance of such methods relies on sequence-based heuristics that restrict the search space to optimal or near-optimal sequence alignments; however, the accuracy of sequence-based methods breaks down for RNAs with sequence identities below 60%. Alignment approaches like LocARNA that do not require sequence-based heuristics have been limited to high complexity (≥ quartic time). Results: Breaking this barrier, we introduce the novel Sankoff-style algorithm ‘sparsified prediction and alignment of RNAs based on their structure ensembles (SPARSE)’, which runs in quadratic time without sequence-based heuristics. To achieve this low complexity, on par with sequence alignment algorithms, SPARSE features strong sparsification based on structural properties of the RNA ensembles. Following PMcomp, SPARSE gains further speed-up from lightweight energy computation. Although all existing lightweight Sankoff-style methods restrict Sankoff’s original model by disallowing loop deletions and insertions, SPARSE transfers the Sankoff algorithm to the lightweight energy model completely for the first time. Compared with LocARNA, SPARSE achieves similar alignment and better folding quality in significantly less time (speedup: 3.7). At similar run-time, it aligns low sequence identity instances substantially more accurately than RAF, which uses sequence-based heuristics. Availability and implementation: SPARSE is freely available at http://www.bioinf.uni-freiburg.de/Software/SPARSE. Contact: backofen@informatik.uni-freiburg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25838465
Solving very large, sparse linear systems on mesh-connected parallel computers
NASA Technical Reports Server (NTRS)
Opsahl, Torstein; Reif, John
1987-01-01
The implementation of Pan and Reif's Parallel Nested Dissection (PND) algorithm on mesh-connected parallel computers is described. This is the first known algorithm that allows very large, sparse linear systems of equations to be solved efficiently in polylog time using a small number of processors. How the processor bound of PND can be matched to the number of processors available on a given parallel computer, by slowing down the algorithm by constant factors, is described. Also, for the important class of problems where G(A) is a grid graph, a unique memory mapping that reduces the inter-processor communication requirements of PND to those that can be executed on mesh-connected parallel machines is detailed. A description of an implementation on the Goodyear Massively Parallel Processor (MPP), located at Goddard, is given, together with a detailed discussion of data mappings and performance issues.
Global Parameter Optimization of CLM4.5 Using Sparse-Grid Based Surrogates
NASA Astrophysics Data System (ADS)
Lu, D.; Ricciuto, D. M.; Gu, L.
2016-12-01
Calibration of the Community Land Model (CLM) is challenging because of its model complexity, large parameter sets, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time. The goal of this study is to calibrate some of the CLM parameters in order to improve model projection of carbon fluxes. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first use advanced sparse grid (SG) interpolation to construct a surrogate system of the actual CLM model, and then we calibrate the surrogate model in the optimization process. As the surrogate model is a polynomial whose evaluation is fast, it can be evaluated efficiently a sufficiently large number of times during optimization, which facilitates the global search. We calibrate five parameters against 12 months of GPP, NEP, and TLAI data from the U.S. Missouri Ozark (US-MOz) tower. The results indicate that an accurate surrogate model can be created for the CLM4.5 with a relatively small number of SG points (i.e., CLM4.5 simulations), and the application of the optimized parameters leads to a higher predictive capacity than the default parameter values in the CLM4.5 for the US-MOz site.
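The sample-fit-optimize workflow can be sketched in one dimension with a polynomial surrogate; the stand-in "expensive model," node count, and search grid below are illustrative, not CLM4.5 or a true sparse-grid rule.

```python
import numpy as np

def expensive_model(p):
    # Stand-in for a costly CLM run: a smooth response surface in one parameter.
    return (p - 0.3) ** 2 + 1.0

# 1) Sample the expensive model at a small number of interpolation nodes.
nodes = np.linspace(0.0, 1.0, 7)
values = np.array([expensive_model(p) for p in nodes])

# 2) Fit a cheap polynomial surrogate to the samples.
coeffs = np.polyfit(nodes, values, deg=2)

# 3) Calibrate against the surrogate: a dense global search is affordable
#    because each surrogate evaluation is just a polynomial evaluation.
candidates = np.linspace(0.0, 1.0, 10001)
best = candidates[np.argmin(np.polyval(coeffs, candidates))]
```

Only step 1 touches the expensive model (7 runs here); the global search in step 3 costs thousands of evaluations but runs against the polynomial, which is the efficiency argument made in the abstract. In higher dimensions, sparse grids keep the number of nodes in step 1 tractable.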
Task-based data-acquisition optimization for sparse image reconstruction systems
NASA Astrophysics Data System (ADS)
Chen, Yujia; Lou, Yang; Kupinski, Matthew A.; Anastasio, Mark A.
2017-03-01
Conventional wisdom dictates that imaging hardware should be optimized by use of an ideal observer (IO) that exploits full statistical knowledge of the class of objects to be imaged, without consideration of the reconstruction method to be employed. However, accurate and tractable models of the complete object statistics are often difficult to determine in practice. Moreover, in imaging systems that employ compressive sensing concepts, imaging hardware and (sparse) image reconstruction are innately coupled technologies. We have previously proposed a sparsity-driven ideal observer (SDIO) that can be employed to optimize hardware by use of a stochastic object model that describes object sparsity. The SDIO and sparse reconstruction method can therefore be "matched" in the sense that they both utilize the same statistical information regarding the class of objects to be imaged. To efficiently compute SDIO performance, the posterior distribution is estimated by use of computational tools developed recently for variational Bayesian inference. Subsequently, the SDIO test statistic can be computed semi-analytically. The advantages of employing the SDIO instead of a Hotelling observer are systematically demonstrated in case studies in which magnetic resonance imaging (MRI) data acquisition schemes are optimized for signal detection tasks.
BIRD: A general interface for sparse distributed memory simulators
NASA Technical Reports Server (NTRS)
Rogers, David
1990-01-01
Kanerva's sparse distributed memory (SDM) has now been implemented for at least six different computers, including SUN3 workstations, the Apple Macintosh, and the Connection Machine. A common interface for input of commands would both aid testing of programs on a broad range of computer architectures and assist users in transferring results from research environments to applications. A common interface also allows secondary programs to generate command sequences for a sparse distributed memory, which may then be executed on the appropriate hardware. The BIRD program is an attempt to create such an interface. Simplifying access to different simulators should assist developers in finding appropriate uses for SDM.
Stochastic blockmodeling of the modules and core of the Caenorhabditis elegans connectome.
Pavlovic, Dragana M; Vértes, Petra E; Bullmore, Edward T; Schafer, William R; Nichols, Thomas E
2014-01-01
Recently, there has been much interest in the community structure or mesoscale organization of complex networks. This structure is characterised either as a set of sparsely inter-connected modules or as a highly connected core with a sparsely connected periphery. However, it is often difficult to disambiguate these two types of mesoscale structure or, indeed, to summarise the full network in terms of the relationships between its mesoscale constituents. Here, we estimate a community structure with a stochastic blockmodel approach, the Erdős-Rényi Mixture Model, and compare it to the much more widely used deterministic methods, such as the Louvain and Spectral algorithms. We used the Caenorhabditis elegans (C. elegans) nervous system (connectome) as a model system in which biological knowledge about each node or neuron can be used to validate the functional relevance of the communities obtained. The deterministic algorithms derived communities with 4-5 modules, defined by sparse inter-connectivity between all modules. In contrast, the stochastic Erdős-Rényi Mixture Model estimated a community structure with 9 blocks or groups, which comprised a similar set of modules but also included a clearly defined core made of 2 small groups. We show that the "core-in-modules" decomposition of the worm brain network, estimated by the Erdős-Rényi Mixture Model, is more compatible with prior biological knowledge about the C. elegans nervous system than the purely modular decomposition defined deterministically. We also show that the blockmodel can be used both to generate stochastic realisations (simulations) of the biological connectome and to compress the network into a small number of super-nodes and their connectivity. We expect that the Erdős-Rényi Mixture Model may be useful for investigating the complex community structures in other (nervous) systems.
Inverse lithography using sparse mask representations
NASA Astrophysics Data System (ADS)
Ionescu, Radu C.; Hurley, Paul; Apostol, Stefan
2015-03-01
We present a novel optimization algorithm for inverse lithography based on optimizing the mask derivative, a representation that is inherently sparse and, for rectilinear polygons, invertible. The method is first developed assuming a point light source and then extended to general incoherent sources. The result is a fast algorithm that produces manufacturable masks (the search space is constrained to rectilinear polygons) and is flexible (specific constraints such as minimal line widths can be imposed). One key trick is to treat polygons as continuous entities, which makes aerial image calculation extremely fast and accurate. Requirements for mask manufacturability can be integrated into the optimization without much added complexity. We also explain how to extend the scheme to phase-shifting mask optimization.
Arbitrary norm support vector machines.
Huang, Kaizhu; Zheng, Danian; King, Irwin; Lyu, Michael R
2009-02-01
Support vector machines (SVMs) are state-of-the-art classifiers. Typically the L2-norm or L1-norm is adopted as a regularization term in SVMs, while other norm-based SVMs, for example the L0-norm SVM or even the L(infinity)-norm SVM, are rarely seen in the literature. The major reason is that the L0-norm describes a discontinuous and nonconvex term, leading to a combinatorially NP-hard optimization problem. In this letter, motivated by Bayesian learning, we propose a novel framework that can implement arbitrary norm-based SVMs in polynomial time. One significant feature of this framework is that only a sequence of sequential minimal optimization problems needs to be solved, making it practical in many real applications. The proposed framework is important in the sense that Bayesian priors can be efficiently plugged into most learning methods without knowing their explicit form; hence, it builds a connection between Bayesian learning and kernel machines. We derive the theoretical framework, demonstrate how our approach works on the L0-norm SVM as a typical example, and perform a series of experiments to validate its advantages. Experimental results on nine benchmark data sets are very encouraging. The implemented L0-norm SVM is competitive with or even better than the standard L2-norm SVM in terms of accuracy, but with a reduced number of support vectors (9.46% of the number on average). When compared with another sparse model, the relevance vector machine, our proposed algorithm also demonstrates better sparsity, with a training speed over seven times faster.
Non-convex Statistical Optimization for Sparse Tensor Graphical Model
Sun, Wei; Wang, Zhaoran; Liu, Han; Cheng, Guang
2016-01-01
We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation of the precision matrix corresponding to each way of the tensor, we assume the data follow a tensor normal distribution whose covariance has a Kronecker product structure. The penalized maximum likelihood estimation of this model involves minimizing a non-convex objective function. In spite of the non-convexity of this estimation problem, we prove that an alternating minimization algorithm, which iteratively estimates each sparse precision matrix while fixing the others, attains an estimator with the optimal statistical rate of convergence as well as consistent graph recovery. Notably, such an estimator achieves estimation consistency with only one tensor sample, a guarantee not established in previous work. Our theoretical results are backed by thorough numerical studies. PMID:28316459
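To illustrate the alternating-minimization idea, here is a minimal "flip-flop" estimator for an unpenalized matrix-normal model (the paper penalizes each factor for sparsity; the problem sizes and covariances below are invented for the demo, and the inherent Kronecker scale ambiguity is removed by trace normalization before comparison):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 2 x 3 matrix-valued samples X with Kronecker
# covariance cov(vec(X)) = B (x) A.
p, q, n = 2, 3, 5000
A = np.array([[1.0, 0.5], [0.5, 1.0]])           # row covariance, trace p
B = np.array([[1.0, -0.3, 0.0],
              [-0.3, 1.0, 0.2],
              [0.0, 0.2, 1.0]])                   # column covariance, trace q
LA, LB = np.linalg.cholesky(A), np.linalg.cholesky(B)
X = np.array([LA @ rng.standard_normal((p, q)) @ LB.T for _ in range(n)])

# Flip-flop alternating MLE: update each factor while fixing the other.
A_hat, B_hat = np.eye(p), np.eye(q)
for _ in range(10):
    Binv = np.linalg.inv(B_hat)
    A_hat = np.einsum('nij,jk,nlk->il', X, Binv, X) / (n * q)  # sum X Binv X^T
    Ainv = np.linalg.inv(A_hat)
    B_hat = np.einsum('nji,jk,nkl->il', X, Ainv, X) / (n * p)  # sum X^T Ainv X

# Fix the scale ambiguity (cA) (x) (B/c) before comparing to the truth.
A_hat *= p / np.trace(A_hat)
B_hat *= q / np.trace(B_hat)
```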
Paiton, Dylan M.; Kenyon, Garrett T.; Brumby, Steven P.; Schultz, Peter F.; George, John S.
2015-07-28
An approach to detecting objects in an image dataset may combine texture/color detection, shape/contour detection, and/or motion detection using sparse, generative, hierarchical models with lateral and top-down connections. A first independent representation of objects in an image dataset may be produced using a color/texture detection algorithm. A second independent representation of objects in the image dataset may be produced using a shape/contour detection algorithm. A third independent representation of objects in the image dataset may be produced using a motion detection algorithm. The first, second, and third independent representations may then be combined into a single coherent output using a combinatorial algorithm.
NASA Technical Reports Server (NTRS)
Keeler, James D.
1988-01-01
The information capacity of Kanerva's Sparse Distributed Memory (SDM) and Hopfield-type neural networks is investigated. Under the approximations used here, it is shown that the total information stored in these systems is proportional to the number of connections in the network. The proportionality constant is the same for the SDM and Hopfield-type models, independent of the particular model or its order. The approximations are checked numerically. This same analysis can be used to show that the SDM can store sequences of spatiotemporal patterns, and the addition of time-delayed connections allows the retrieval of context-dependent temporal patterns. A minor modification of the SDM can be used to store correlated patterns.
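A minimal Hopfield-type demonstration of Hebbian storage and associative recall, with the network size and pattern count chosen well below the ~0.14N capacity regime the proportionality result implies (the numbers are illustrative, not from the paper's analysis):

```python
import numpy as np

rng = np.random.default_rng(1)

# Store M random bipolar patterns in an N-neuron Hopfield network.
# The N^2 weights store O(N) patterns, i.e., information scales with
# the number of connections.
N, M = 200, 10
patterns = rng.choice([-1, 1], size=(M, N))
W = (patterns.T @ patterns) / N              # Hebbian outer-product rule
np.fill_diagonal(W, 0)                       # no self-connections

def recall(state, steps=10):
    # Synchronous threshold updates until (typically) a fixed point.
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

# Recall from a corrupted cue: flip 10% of one stored pattern's bits.
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
cue[flip] *= -1
recovered = recall(cue)
```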
HYPOTHESIS TESTING FOR HIGH-DIMENSIONAL SPARSE BINARY REGRESSION
Mukherjee, Rajarshi; Pillai, Natesh S.; Lin, Xihong
2015-01-01
In this paper, we study the detection boundary for minimax hypothesis testing in the context of high-dimensional, sparse binary regression models. Motivated by genetic sequencing association studies for rare variant effects, we investigate the complexity of the hypothesis testing problem when the design matrix is sparse. We observe a new phenomenon in the behavior of the detection boundary which does not occur in the case of Gaussian linear regression. We derive the detection boundary as a function of two components: a design matrix sparsity index and signal strength, each of which is a function of the sparsity of the alternative. For any alternative, if the design matrix sparsity index is too high, any test is asymptotically powerless irrespective of the magnitude of signal strength. For binary design matrices with a sparsity index that is not too high, our results parallel those in the Gaussian case. In this context, we derive detection boundaries for both dense and sparse regimes. For the dense regime, we show that the generalized likelihood ratio test is rate optimal; for the sparse regime, we propose an extended Higher Criticism test and show it is rate optimal and sharp. We illustrate the finite sample properties of the theoretical results using simulation studies. PMID:26246645
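For readers unfamiliar with the test, the standard Donoho-Jin form of the Higher Criticism statistic (the paper's extended variant differs in details) can be computed as below; the sparse-alternative simulation is illustrative only:

```python
import numpy as np

def higher_criticism(pvals):
    # Standardized deviation of the empirical p-value CDF from uniform,
    # maximized over the first half of the order statistics.
    n = len(pvals)
    p = np.sort(pvals)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    return hc[: n // 2].max()

rng = np.random.default_rng(0)

# Global null: all p-values uniform.
null = higher_criticism(rng.uniform(size=1000))

# Sparse alternative: a few p-values pushed toward zero.
pv = rng.uniform(size=1000)
pv[:20] = rng.uniform(0, 1e-4, size=20)
alt = higher_criticism(pv)
```

Under the null the statistic stays small (growing only like sqrt(2 log log n)), while even 2% of strongly non-null p-values makes it blow up, which is what makes HC well suited to sparse alternatives.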
Sparseness- and continuity-constrained seismic imaging
NASA Astrophysics Data System (ADS)
Herrmann, Felix J.
2005-04-01
Non-linear solution strategies to the least-squares seismic inverse-scattering problem with sparseness and continuity constraints are proposed. Our approach is designed to (i) deal with substantial amounts of additive noise (SNR < 0 dB); (ii) use the sparseness and locality (both in position and angle) of directional basis functions (such as curvelets and contourlets) on the model: the reflectivity; and (iii) exploit the near invariance of these basis functions under the normal operator, i.e., the scattering-followed-by-imaging operator. Signal-to-noise ratio and the continuity along the imaged reflectors are significantly enhanced by formulating the solution of the seismic inverse problem in terms of an optimization problem. During the optimization, sparseness on the basis and continuity along the reflectors are imposed by jointly minimizing the l1- and anisotropic diffusion/total-variation norms on the coefficients and reflectivity, respectively. [Joint work with Peyman P. Moghaddam was carried out as part of the SINBAD project, with financial support secured through ITF (the Industry Technology Facilitator) from the following organizations: BG Group, BP, ExxonMobil, and SHELL. Additional funding came from NSERC Discovery Grant 22R81254.]
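The l1-minimization step at the heart of such formulations can be sketched with ISTA (iterative soft-thresholding) on a toy sparse-recovery problem; this is a generic stand-in, not the paper's curvelet-domain solver, and all sizes and values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: observe y = A x + noise, recover sparse x by minimizing
# 0.5 * ||A x - y||^2 + lam * ||x||_1 with ISTA.
m, n = 80, 200
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 50, 120]] = [1.5, -2.0, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L with L = ||A||_2^2
x = np.zeros(n)
for _ in range(500):
    g = x - step * A.T @ (A @ x - y)          # gradient step on the quadratic
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft-threshold
```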
Identifying disease-related subnetwork connectome biomarkers by sparse hypergraph learning.
Zu, Chen; Gao, Yue; Munsell, Brent; Kim, Minjeong; Peng, Ziwen; Cohen, Jessica R; Zhang, Daoqiang; Wu, Guorong
2018-06-14
The functional brain network has gained increased attention in the neuroscience community because of its ability to reveal the underlying architecture of the human brain. In general, the majority of work on functional network connectivity is based on correlations between discrete time-series signals that link only two different brain regions. However, these simple region-to-region connectivity models do not capture complex connectivity patterns among three or more brain regions that form a connectivity subnetwork, or subnetwork for short. To overcome this limitation, a hypergraph learning-based method is proposed to identify subnetwork differences between two cohorts. To achieve our goal, a hypergraph is constructed in which each vertex represents a subject and each hyperedge encodes a subnetwork with similar functional connectivity patterns across subjects. Unlike previous learning-based methods, our approach is designed to jointly optimize the weights for all hyperedges such that the learned representation is in consensus with the distribution of the phenotype data, i.e., clinical labels. In order to suppress spurious subnetwork biomarkers, we further enforce a sparsity constraint on the hyperedge weights, where a larger hyperedge weight indicates a subnetwork with greater capability of identifying the disorder condition. We apply our hypergraph learning-based method to identify subnetwork biomarkers in Autism Spectrum Disorder (ASD) and Attention Deficit Hyperactivity Disorder (ADHD). A comprehensive quantitative and qualitative analysis is performed, and the results show that our approach can correctly classify ASD and ADHD subjects from normal controls with 87.65% and 65.08% accuracy, respectively.
SPARSE: quadratic time simultaneous alignment and folding of RNAs without sequence-based heuristics.
Will, Sebastian; Otto, Christina; Miladi, Milad; Möhl, Mathias; Backofen, Rolf
2015-08-01
RNA-Seq experiments have revealed a multitude of novel ncRNAs. The gold standard for their analysis, based on simultaneous alignment and folding, suffers from extreme time complexity of O(n^6). Subsequently, numerous faster 'Sankoff-style' approaches have been suggested. Commonly, the performance of such methods relies on sequence-based heuristics that restrict the search space to optimal or near-optimal sequence alignments; however, the accuracy of sequence-based methods breaks down for RNAs with sequence identities below 60%. Alignment approaches like LocARNA that do not require sequence-based heuristics have been limited to high complexity (quartic time, O(n^4)). Breaking this barrier, we introduce the novel Sankoff-style algorithm 'sparsified prediction and alignment of RNAs based on their structure ensembles (SPARSE)', which runs in quadratic time without sequence-based heuristics. To achieve this low complexity, on par with sequence alignment algorithms, SPARSE features strong sparsification based on structural properties of the RNA ensembles. Following PMcomp, SPARSE gains further speed-up from lightweight energy computation. Although all existing lightweight Sankoff-style methods restrict Sankoff's original model by disallowing loop deletions and insertions, SPARSE transfers the Sankoff algorithm to the lightweight energy model completely for the first time. Compared with LocARNA, SPARSE achieves similar alignment and better folding quality in significantly less time (speedup: 3.7). At similar run-time, it aligns low sequence identity instances substantially more accurately than RAF, which uses sequence-based heuristics. © The Author 2015. Published by Oxford University Press.
Model's sparse representation based on reduced mixed GMsFE basis methods
NASA Astrophysics Data System (ADS)
Jiang, Lijian; Li, Qiuqi
2017-06-01
In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for such elliptic PDEs is flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is an accurate and efficient approach to solve the flow problem on a coarse grid and obtain the velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome this difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed from the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than that of the original full-order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of the parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs.
In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.
Compressed sensing for high-resolution nonlipid suppressed 1 H FID MRSI of the human brain at 9.4T.
Nassirpour, Sahar; Chang, Paul; Avdievitch, Nikolai; Henning, Anke
2018-04-29
The aim of this study was to apply compressed sensing to accelerate the acquisition of high-resolution metabolite maps of the human brain using a nonlipid-suppressed ultra-short TR and TE 1 H FID MRSI sequence at 9.4T. X-t sparse compressed sensing reconstruction was optimized for nonlipid-suppressed 1 H FID MRSI data. Coil-by-coil x-t sparse reconstruction was compared with SENSE x-t sparse and low-rank reconstruction. The effect of matrix size and spatial resolution on the achievable acceleration factor was studied. Finally, in vivo metabolite maps with acceleration factors of 2, 4, 5, and 10 were acquired and compared. Coil-by-coil x-t sparse compressed sensing reconstruction was not able to reliably recover the nonlipid-suppressed data; rather, a combination of parallel and sparse reconstruction (SENSE x-t sparse) was necessary. For acceleration factors of up to 5, both the low-rank and the compressed sensing methods were able to reconstruct the data comparably well (root mean squared errors [RMSEs] ≤ 10.5% for Cre). However, the reconstruction time of the low-rank algorithm was drastically longer than that of compressed sensing. Using the optimized compressed sensing reconstruction, acceleration factors of 4 or 5 could be reached for the MRSI data with a matrix size of 64 × 64. For lower spatial resolutions, an acceleration factor of up to R ∼ 4 was successfully achieved. By tailoring the reconstruction scheme to the nonlipid-suppressed data through parameter optimization and performance evaluation, we present high-resolution (97 µL voxel size) accelerated in vivo metabolite maps of the human brain acquired at 9.4T within scan times of 3 to 3.75 min. © 2018 International Society for Magnetic Resonance in Medicine.
Wen, Quan; Stepanyants, Armen; Elston, Guy N.; Grosberg, Alexander Y.; Chklovskii, Dmitri B.
2009-01-01
The shapes of dendritic arbors are fascinating and important, yet the principles underlying these complex and diverse structures remain unclear. Here, we analyzed basal dendritic arbors of 2,171 pyramidal neurons sampled from mammalian brains and discovered 3 statistical properties: the dendritic arbor size scales with the total dendritic length, the spatial correlation of dendritic branches within an arbor has a universal functional form, and small parts of an arbor are self-similar. We proposed that these properties result from maximizing the repertoire of possible connectivity patterns between dendrites and surrounding axons while keeping the cost of dendrites low. We solved this optimization problem by drawing an analogy with maximization of the entropy for a given energy in statistical physics. The solution is consistent with the above observations and predicts scaling relations that can be tested experimentally. In addition, our theory explains why dendritic branches of pyramidal cells are distributed more sparsely than those of Purkinje cells. Our results represent a step toward a unifying view of the relationship between neuronal morphology and function. PMID:19622738
Structured networks support sparse traveling waves in rodent somatosensory cortex.
Moldakarimov, Samat; Bazhenov, Maxim; Feldman, Daniel E; Sejnowski, Terrence J
2018-05-15
Neurons responding to different whiskers are spatially intermixed in the superficial layer 2/3 (L2/3) of the rodent barrel cortex, where a single whisker deflection activates a sparse, distributed neuronal population that spans multiple cortical columns. How the superficial layer of the rodent barrel cortex is organized to support such distributed sensory representations is not clear. In a computer model, we tested the hypothesis that sensory representations in L2/3 of the rodent barrel cortex are formed by activity propagation horizontally within L2/3 from a site of initial activation. The model explained the observed properties of L2/3 neurons, including the low average response probability in the majority of responding L2/3 neurons, and the existence of a small subset of reliably responding L2/3 neurons. Sparsely propagating traveling waves similar to those observed in L2/3 of the rodent barrel cortex occurred in the model only when a subnetwork of strongly connected neurons was immersed in a much larger network of weakly connected neurons.
Barron, Martin; Zhang, Siyuan
2018-01-01
Cell types in cell populations change as conditions change: some cell types die out, new cell types may emerge, and surviving cell types evolve to adapt to the new condition. Using single-cell RNA-sequencing data that measure the gene expression of cells before and after the condition change, we propose an algorithm, SparseDC, which identifies cell types, traces their changes across conditions, and identifies genes which are marker genes for these changes. By solving a unified optimization problem, SparseDC completes all three tasks simultaneously. SparseDC is highly computationally efficient and demonstrates its accuracy on both simulated and real data. PMID:29140455
NASA Astrophysics Data System (ADS)
Mahrooghy, Majid; Ashraf, Ahmed B.; Daye, Dania; Mies, Carolyn; Rosen, Mark; Feldman, Michael; Kontos, Despina
2014-03-01
We evaluate the prognostic value of sparse representation-based features by applying the K-SVD algorithm to multiparametric kinetic, textural, and morphologic features in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). K-SVD is an iterative dimensionality reduction method that optimally reduces the initial feature space by updating the dictionary columns jointly with the sparse representation coefficients. Therefore, by using K-SVD, we not only provide a sparse representation of the features and condense the information into a few coefficients, but we also reduce the dimensionality. The extracted K-SVD features are evaluated by a machine learning pipeline including a logistic regression classifier for the task of classifying high versus low breast cancer recurrence risk as determined by a validated gene expression assay. The features are evaluated using ROC curve analysis and leave-one-out cross-validation for different sparse representation and dimensionality reduction numbers. Optimal sparse representation is obtained when the number of dictionary elements is 4 (K=4) and the maximum number of non-zero coefficients is 2 (L=2). We compare K-SVD with ANOVA-based feature selection for the same prognostic features. The ROC results show that the AUCs of the K-SVD-based (K=4, L=2), the ANOVA-based, and the original features (i.e., no dimensionality reduction) are 0.78, 0.71, and 0.68, respectively. From these results, it can be inferred that by using a sparse representation of the originally extracted multiparametric, high-dimensional data, we can condense the information into a few coefficients with the highest predictive value. In addition, the dimensionality reduction introduced by K-SVD can prevent models from over-fitting.
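The sparse-coding stage that K-SVD alternates with its dictionary update is typically Orthogonal Matching Pursuit; a minimal sketch on synthetic data (dictionary size, support, and coefficients all invented for illustration, not taken from the study) is:

```python
import numpy as np

def omp(D, y, k):
    # Orthogonal Matching Pursuit: greedily select k dictionary atoms,
    # re-fitting the coefficients by least squares after each selection.
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((100, 120))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
x_true = np.zeros(120)
x_true[[3, 17]] = [2.0, -1.5]             # 2-sparse code (L=2)
y = D @ x_true                            # noiseless signal
x_hat = omp(D, y, k=2)
```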
Compressive sensing using optimized sensing matrix for face verification
NASA Astrophysics Data System (ADS)
Oey, Endra; Jeffry; Wongso, Kelvin; Tommy
2017-12-01
Biometrics offers a solution to common problems with password-based data access, such as forgotten passwords and the difficulty of recalling many different ones. With biometrics, physical characteristics of a person can be captured and used in the identification process. In this research, facial biometrics is used in the verification process to determine whether the user has the authority to access the data or not. Facial biometrics is chosen for its low-cost implementation and reasonably accurate user identification. The face verification system adopted in this research uses the compressive sensing (CS) technique, which reduces the data dimension and encrypts the facial test image by representing it as a sparse signal. The encrypted data can be reconstructed using a sparse coding algorithm. Two sparse coding algorithms, Orthogonal Matching Pursuit (OMP) and Iteratively Reweighted Least Squares-ℓp (IRLS-ℓp), are compared. The reconstructed sparse signals are then compared, via the Euclidean norm, with the user's sparse signal previously stored in the system to determine the validity of the facial test image. With a non-optimized sensing matrix, the system accuracy obtained in this research is 99% for IRLS with a face verification response time of 4.917 seconds, and 96.33% for OMP with a response time of 0.4046 seconds; with an optimized sensing matrix, accuracy is 99% for IRLS with a response time of 13.4791 seconds, and 98.33% for OMP with a response time of 3.1571 seconds.
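A generic IRLS-ℓp solver of the kind named above can be sketched as follows; the annealing schedule, p value, and problem sizes are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def irls_lp(A, y, p=0.8, eps=1.0):
    # IRLS for (approximately) min ||x||_p subject to A x = y:
    # repeatedly solve a weighted least-norm problem
    #   x = W A^T (A W A^T)^{-1} y,  W = diag((x^2 + eps)^(1 - p/2)),
    # while annealing the smoothing parameter eps toward zero.
    x = A.T @ np.linalg.solve(A @ A.T, y)   # minimum-l2-norm start
    for _ in range(60):
        w = (x * x + eps) ** (1 - p / 2.0)  # weights ~ |x|^(2-p)
        Aw = A * w                          # A @ diag(w)
        x = w * (A.T @ np.linalg.solve(Aw @ A.T, y))
        eps = max(eps * 0.7, 1e-10)         # anneal the smoothing
    return x

rng = np.random.default_rng(0)
m, n = 40, 100
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[7, 42, 90]] = [1.0, -2.0, 1.5]      # sparse "face code" stand-in
y = A @ x_true
x_hat = irls_lp(A, y, p=0.8)
```

In a verification setting, the recovered code `x_hat` would then be compared with the enrolled user's code via the Euclidean norm, as the abstract describes.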
Sparse brain network using penalized linear regression
NASA Astrophysics Data System (ADS)
Lee, Hyekyoung; Lee, Dong Soo; Kang, Hyejin; Kim, Boong-Nyun; Chung, Moo K.
2011-03-01
Sparse partial correlation is a useful connectivity measure for brain networks when it is difficult to compute the exact partial correlation in the small-n, large-p setting. In this paper, we formulate the problem of estimating partial correlation as a sparse linear regression with an l1-norm penalty. The method is applied to a brain network consisting of parcellated regions of interest (ROIs), which are obtained from FDG-PET images of children with autism spectrum disorder (ASD) and pediatric control (PedCon) subjects. To validate the results, we check the reproducibility of the obtained brain networks by leave-one-out cross-validation and compare the clustered structures derived from the brain networks of the ASD and PedCon groups.
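The l1-penalized regression at the core of this approach (neighborhood selection: regress each ROI's signal on all the others, with nonzero coefficients marking nonzero partial correlations) can be sketched with coordinate-descent lasso; the three-region toy data below are invented for illustration:

```python
import numpy as np

def lasso_cd(X, y, lam, iters=200):
    # Coordinate-descent lasso: min 0.5*||y - X b||^2 / n + lam*||b||_1.
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(iters):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]          # residual excluding j
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

rng = np.random.default_rng(0)
n = 500
z = rng.standard_normal(n)                          # shared latent signal
data = np.column_stack([
    z + 0.1 * rng.standard_normal(n),               # region 0
    z + 0.1 * rng.standard_normal(n),               # region 1 (tied to 0)
    rng.standard_normal(n),                         # region 2 (independent)
])
data -= data.mean(axis=0)

# Neighborhood of region 0: regress it on regions 1 and 2.
b0 = lasso_cd(data[:, 1:], data[:, 0], lam=0.1)
```

The penalty sets the coefficient of the conditionally independent region exactly to zero, which is what makes the estimated network sparse.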
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bergen, Benjamin Karl
2016-08-03
These slides are part of the ASC L2 Milestone Review. The following topics are covered: Legion Backend, Distributed-Memory Partitioning, Sparse Data Representations, and MPI-Legion Interoperability.
A Fast Gradient Method for Nonnegative Sparse Regression With Self-Dictionary
NASA Astrophysics Data System (ADS)
Gillis, Nicolas; Luce, Robert
2018-01-01
A nonnegative matrix factorization (NMF) can be computed efficiently under the separability assumption, which asserts that all the columns of the given input data matrix belong to the cone generated by a (small) subset of them. The provably most robust methods to identify these conic basis columns are based on nonnegative sparse regression and self dictionaries, and require the solution of large-scale convex optimization problems. In this paper we study a particular nonnegative sparse regression model with self dictionary. As opposed to previously proposed models, this model yields a smooth optimization problem where the sparsity is enforced through linear constraints. We show that the Euclidean projection on the polyhedron defined by these constraints can be computed efficiently, and propose a fast gradient method to solve our model. We compare our algorithm with several state-of-the-art methods on synthetic data sets and real-world hyperspectral images.
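As a point of comparison, the classical successive projection algorithm (SPA) identifies the conic basis columns greedily; this is a standard baseline for the separability setting, not the sparse-regression model or fast gradient method of the paper, and the synthetic data are invented:

```python
import numpy as np

def spa(M, r):
    # Successive Projection Algorithm: repeatedly pick the column with
    # the largest residual norm, then project that direction out.
    R = M.astype(float).copy()
    idx = []
    for _ in range(r):
        j = int(np.argmax((R * R).sum(axis=0)))
        idx.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(u, u @ R)
    return idx

rng = np.random.default_rng(0)
# Separable data: M = W H with H containing the identity, so the first
# 3 columns of M are the true basis ("pure") columns.
W = rng.random((10, 3))
H = np.hstack([np.eye(3), rng.dirichlet(np.ones(3), size=20).T])
M = W @ H
basis = sorted(spa(M, 3))
```

Since the norm is convex, each argmax lands on a vertex of the columns' convex hull, i.e., on a pure column, which is why SPA recovers the basis in the noiseless separable case.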
Optshrink LR + S: accelerated fMRI reconstruction using non-convex optimal singular value shrinkage.
Aggarwal, Priya; Shrivastava, Parth; Kabra, Tanay; Gupta, Anubha
2017-03-01
This paper presents a new accelerated fMRI reconstruction method, namely OptShrink LR+S, which reconstructs undersampled fMRI data as a linear combination of low-rank and sparse components. The low-rank component is estimated using a non-convex optimal singular value shrinkage algorithm, while the sparse component is estimated using convex l1 minimization. The performance of the proposed method is compared with existing state-of-the-art algorithms on a real fMRI dataset. The proposed OptShrink LR+S method yields good qualitative and quantitative results.
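A simplified convex sketch of the L + S idea replaces the paper's non-convex optimal shrinker with plain singular value soft-thresholding, alternating exact minimization over the two components (all thresholds and the synthetic data are illustrative):

```python
import numpy as np

def svt(X, tau):
    # Singular value soft-thresholding (proximal step for the nuclear norm).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def lr_plus_s(Y, tau=1.0, lam=0.1, iters=100):
    # Block minimization of 0.5*||Y - L - S||^2 + tau*||L||_* + lam*||S||_1:
    # SVT for the low-rank part, entrywise soft-threshold for the sparse part.
    L = np.zeros_like(Y)
    S = np.zeros_like(Y)
    for _ in range(iters):
        L = svt(Y - S, tau)
        S = np.sign(Y - L) * np.maximum(np.abs(Y - L) - lam, 0.0)
    return L, S

rng = np.random.default_rng(0)
# Synthetic "fMRI-like" matrix: rank-1 background plus sparse activity.
u, v = rng.standard_normal(30), rng.standard_normal(40)
L_true = np.outer(u, v)
S_true = np.zeros((30, 40))
S_true[rng.random((30, 40)) < 0.05] = 5.0
Y = L_true + S_true
L, S = lr_plus_s(Y)
```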
Minati, Ludovico; Cercignani, Mara; Chan, Dennis
2013-10-01
Graph theory-based analyses of brain network topology can be used to model the spatiotemporal correlations in neural activity detected through fMRI, and such approaches have wide-ranging potential, from detection of alterations in preclinical Alzheimer's disease through to command identification in brain-machine interfaces. However, due to prohibitive computational costs, graph-based analyses to date have principally focused on measuring connection density rather than mapping the topological architecture in full by exhaustive shortest-path determination. This paper outlines a solution to this problem through parallel implementation of Dijkstra's algorithm in programmable logic. The processor design is optimized for large, sparse graphs and provided in full as synthesizable VHDL code. An acceleration factor between 15 and 18 is obtained on a representative resting-state fMRI dataset, and maps of Euclidean path length reveal the anticipated heterogeneous cortical involvement in long-range integrative processing. These results enable high-resolution geodesic connectivity mapping for resting-state fMRI in patient populations and real-time geodesic mapping to support identification of imagined actions for fMRI-based brain-machine interfaces. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
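The sequential baseline that such hardware designs parallelize is Dijkstra's algorithm with a binary heap over sparse adjacency lists, O((V + E) log V) per source; the toy graph below is invented for illustration:

```python
import heapq

def dijkstra(adj, src):
    # Single-source shortest paths with a lazy-deletion binary heap.
    dist = {v: float("inf") for v in adj}
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale heap entry, skip
        for v, w in adj[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy sparse graph as adjacency lists of (neighbor, weight) pairs.
adj = {
    "A": [("B", 1.0), ("C", 4.0)],
    "B": [("A", 1.0), ("C", 2.0), ("D", 6.0)],
    "C": [("A", 4.0), ("B", 2.0), ("D", 3.0)],
    "D": [("B", 6.0), ("C", 3.0)],
}
dist = dijkstra(adj, "A")
```

Exhaustive all-pairs mapping simply repeats this per source, which is exactly the workload that motivates the parallel programmable-logic implementation described above.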
Huang, Wentao; Sun, Hongjian; Wang, Weijie
2017-06-03
Mechanical equipment is the heart of industry, and mechanical fault diagnosis has therefore drawn considerable attention. Given the rich information hidden in fault vibration signals, techniques for processing and analyzing vibration signals have become a crucial research issue in mechanical fault diagnosis. Based on the theory of sparse decomposition, Selesnick proposed a novel nonlinear signal processing method: resonance-based sparse signal decomposition (RSSD). Since being put forward, RSSD has become widely recognized, and many RSSD-based methods have been developed to guide mechanical fault diagnosis. This paper summarizes and reviews the theoretical developments and application advances of RSSD in mechanical fault diagnosis, providing a comprehensive reference for those interested in the field. Following a brief introduction of RSSD's theoretical foundation, applications of RSSD in mechanical fault diagnosis are categorized, according to their optimization direction, into five aspects: original RSSD, parameter-optimized RSSD, subband-optimized RSSD, integrated-optimized RSSD, and RSSD combined with other methods. On this basis, outstanding issues in current RSSD research are pointed out, together with corresponding instructive solutions. We hope this review will provide an insightful reference for researchers and readers interested in RSSD and mechanical fault diagnosis.
Yan, Yiming; Tan, Zhichao; Su, Nan; Zhao, Chunhui
2017-08-24
In this paper, a building extraction method is proposed based on a stacked sparse autoencoder with an optimized structure and training samples. Building extraction plays an important role in urban construction and planning. However, factors such as resolution limitations, poor geometric correction, and terrain influence reduce the accuracy of extraction. Data collected by multiple sensors, such as light detection and ranging (LIDAR) and optical sensors, are used to improve the extraction. Using a digital surface model (DSM) obtained from LIDAR data together with optical images, traditional methods can improve the extraction to a certain extent, but they have shortcomings in feature extraction. Since a stacked sparse autoencoder (SSAE) neural network can learn the essential characteristics of the data in depth, SSAE was employed to extract buildings from the combined DSM data and optical imagery. A strategy for setting the SSAE network structure is given, along with an approach for choosing the number and proportion of training samples for better training of the SSAE. The optical data and DSM were combined as input to the optimized SSAE; after training on the optimized samples, the resulting network extracts buildings with high accuracy and good robustness.
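The sparsity in an SSAE is commonly enforced by a KL-divergence penalty that pulls each hidden unit's mean activation toward a small target value; a generic sketch (the target value 0.05 is an assumed illustration, not a figure from the paper):

```python
import numpy as np

def kl_sparsity_penalty(activations, rho=0.05):
    """KL-divergence sparsity penalty used in sparse autoencoders.

    activations: (n_samples, n_hidden) sigmoid outputs in (0, 1).
    rho: target mean activation per hidden unit (assumed value).
    """
    rho_hat = np.clip(activations.mean(axis=0), 1e-8, 1 - 1e-8)
    kl = rho * np.log(rho / rho_hat) \
        + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))
    return kl.sum()
```

The penalty is zero when every unit's mean activation equals the target and grows as units become over- or under-active, which is what drives the learned representation toward sparsity.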
Exhaustive Search for Sparse Variable Selection in Linear Regression
NASA Astrophysics Data System (ADS)
Igarashi, Yasuhiko; Takenaka, Hikaru; Nakanishi-Ohno, Yoshinori; Uemura, Makoto; Ikeda, Shiro; Okada, Masato
2018-04-01
We propose a K-sparse exhaustive search (ES-K) method and a K-sparse approximate exhaustive search (AES-K) method for selecting variables in linear regression. With these methods, K-sparse combinations of variables are tested exhaustively, assuming that the optimal combination of explanatory variables is K-sparse. By collecting the results of the exhaustive ES-K computation, various approximate methods for selecting sparse variables can be summarized as a density of states. With this density of states, we can compare different methods for selecting sparse variables, such as relaxation and sampling. For large problems, where the combinatorial explosion of explanatory variables is crucial, the AES-K method enables the density of states to be reconstructed effectively by using the replica-exchange Monte Carlo method and the multiple histogram method. Applying the ES-K and AES-K methods to type Ia supernova data, we confirmed the conventional understanding in astronomy when an appropriate K is given beforehand. However, we found it difficult to determine K from the data alone. Using virtual measurement and analysis, we argue that this difficulty is caused by data shortage.
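The core ES-K idea, stripped of the density-of-states machinery, is a brute-force sweep over all K-subsets of columns, keeping the support with the smallest least-squares residual; a minimal sketch (variable names are illustrative):

```python
import numpy as np
from itertools import combinations

def es_k(X, y, K):
    """Exhaustively test every K-sparse combination of columns of X and
    return the support with the smallest least-squares residual.
    Illustrative sketch of the exhaustive-search idea only."""
    best_support, best_rss = None, np.inf
    for support in combinations(range(X.shape[1]), K):
        cols = list(support)
        coef, rss, _, _ = np.linalg.lstsq(X[:, cols], y, rcond=None)
        # lstsq returns an empty residual array in the rank-deficient case
        r = rss[0] if rss.size else np.sum((y - X[:, cols] @ coef) ** 2)
        if r < best_rss:
            best_support, best_rss = support, r
    return best_support, best_rss
```

The cost grows combinatorially in the number of candidate variables, which is exactly why the paper introduces the approximate (AES-K) variant for large problems.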
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paiton, Dylan M.; Kenyon, Garrett T.; Brumby, Steven P.
An approach to detecting objects in an image dataset may combine texture/color detection, shape/contour detection, and/or motion detection using sparse, generative, hierarchical models with lateral and top-down connections. A first independent representation of objects in an image dataset may be produced using a color/texture detection algorithm. A second independent representation of objects in the image dataset may be produced using a shape/contour detection algorithm. A third independent representation of objects in the image dataset may be produced using a motion detection algorithm. The first, second, and third independent representations may then be combined into a single coherent output using a combinatorial algorithm.
Single and Multiple Object Tracking Using a Multi-Feature Joint Sparse Representation.
Hu, Weiming; Li, Wei; Zhang, Xiaoqin; Maybank, Stephen
2015-04-01
In this paper, we propose a tracking algorithm based on a multi-feature joint sparse representation. The templates for the sparse representation can include pixel values, textures, and edges. In the multi-feature joint optimization, noise or occlusion is dealt with using a set of trivial templates. A sparse weight constraint is introduced to dynamically select the relevant templates from the full set of templates. A variance ratio measure is adopted to adaptively adjust the weights of different features. The multi-feature template set is updated adaptively. We further propose an algorithm for tracking multiple objects with occlusion handling, based on the multi-feature joint sparse reconstruction. The observation model based on sparse reconstruction automatically focuses on the visible parts of an occluded object by using the information in the trivial templates. The multi-object tracking then reduces to a joint Bayesian inference. The experimental results show the superiority of our algorithm over several state-of-the-art tracking algorithms.
Development of a neuromorphic control system for a lightweight humanoid robot
NASA Astrophysics Data System (ADS)
Folgheraiter, Michele; Keldibek, Amina; Aubakir, Bauyrzhan; Salakchinov, Shyngys; Gini, Giuseppina; Mauro Franchi, Alessio; Bana, Matteo
2017-03-01
A neuromorphic control system for a lightweight middle-size humanoid biped robot built using 3D printing techniques is proposed. The control architecture consists of different modules capable of learning and autonomously reproducing complex periodic trajectories. Each module is represented by a chaotic Recurrent Neural Network (RNN) with a core of dynamic neurons randomly and sparsely connected with fixed synapses. A set of read-out units with adaptable synapses realizes a linear combination of the neuron outputs in order to reproduce the target signals. Different experiments were conducted to find the optimal initialization for the RNN's parameters. Simulation results, using normalized signals obtained from the robot model, show that all instances of the control module can learn and reproduce the target trajectories with an average RMS error of 1.63 and variance of 0.74.
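A minimal echo-state-style sketch of the fixed-random-reservoir / trained-readout idea described above (network sizes, sparsity, and signals are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def reservoir_readout_demo(n_res=100, T=300, seed=0):
    """Echo-state-style sketch: a fixed, sparsely connected random
    recurrent core; only the linear readout is trained (least squares)."""
    rng = np.random.default_rng(seed)
    # Sparse random recurrent weights (about 10% connectivity),
    # rescaled to a modest spectral radius for stable dynamics.
    W = rng.standard_normal((n_res, n_res)) * (rng.random((n_res, n_res)) < 0.1)
    W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()
    W_in = rng.standard_normal((n_res, 1))
    u = np.sin(np.linspace(0, 8 * np.pi, T))              # driving input
    target = np.sin(np.linspace(0, 8 * np.pi, T) + 0.5)   # phase-shifted target
    x = np.zeros(n_res)
    states = np.empty((T, n_res))
    for t in range(T):
        x = np.tanh(W @ x + W_in[:, 0] * u[t])  # fixed-synapse core update
        states[t] = x
    # Train the readout only: least squares with a bias column.
    Phi = np.hstack([states, np.ones((T, 1))])
    w_out, *_ = np.linalg.lstsq(Phi, target, rcond=None)
    pred = Phi @ w_out
    rmse = np.sqrt(np.mean((pred - target) ** 2))
    return rmse, np.std(target)
```

Only the readout weights are adapted; the recurrent core stays fixed, mirroring the fixed-synapse / adaptable-readout split in the abstract.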
Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering.
Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus
2014-12-01
This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.
Chimera-like states in structured heterogeneous networks
NASA Astrophysics Data System (ADS)
Li, Bo; Saad, David
2017-04-01
Chimera-like states are manifested through the coexistence of synchronous and asynchronous dynamics and have been observed in various systems. To analyze the role of network topology in giving rise to chimera-like states, we study a heterogeneous network model comprising two groups of nodes, of high and low degrees of connectivity. The architecture facilitates the analysis of the system, which separates into a densely connected coherent group of nodes, perturbed by their sparsely connected drifting neighbors. The analysis describes the synchronous behavior of the densely connected group and the scaling properties of the induced perturbations.
Bilinear Inverse Problems: Theory, Algorithms, and Applications
NASA Astrophysics Data System (ADS)
Ling, Shuyang
We will discuss how several important real-world signal processing problems, such as self-calibration and blind deconvolution, can be modeled as bilinear inverse problems and solved by convex and nonconvex optimization approaches. In Chapter 2, we bring together three seemingly unrelated concepts, self-calibration, compressive sensing and biconvex optimization. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations y = DAx, where the diagonal matrix D (which models the calibration error) is unknown and x is an unknown sparse signal. By "lifting" this biconvex inverse problem and exploiting sparsity in this model, we derive explicit theoretical guarantees under which both x and D can be recovered exactly, robustly, and numerically efficiently. In Chapter 3, we study the question of the joint blind deconvolution and blind demixing, i.e., extracting a sequence of functions [special characters omitted] from observing only the sum of their convolutions [special characters omitted]. In particular, for the special case s = 1, it becomes the well-known blind deconvolution problem. We present a non-convex algorithm which guarantees exact recovery under conditions that are competitive with convex optimization methods, with the additional advantage of being computationally much more efficient. We discuss several applications of the proposed framework in image processing and wireless communications in connection with the Internet-of-Things. In Chapter 4, we consider three different self-calibration models of practical relevance. We show how their corresponding bilinear inverse problems can be solved by both the simple linear least squares approach and the SVD-based approach. As a consequence, the proposed algorithms are numerically extremely efficient, thus allowing for real-time deployment. 
Explicit theoretical guarantees and stability theory are derived, and the sampling complexity is shown to be nearly optimal (up to a poly-log factor). Applications in imaging sciences and signal processing are discussed, and numerical simulations are presented to demonstrate the effectiveness and efficiency of our approach.
Wang, Chang; Qi, Fei; Shi, Guangming; Wang, Xiaotian
2013-01-01
Deployment is a critical issue affecting the quality of service of camera networks. Deployment aims at covering the whole scene, which may contain obstacles that occlude the line of sight, with the least number of cameras while meeting an expected observation quality. This is generally formulated as a non-convex optimization problem, which is hard to solve in polynomial time. In this paper, we propose an efficient convex solution for deployment that optimizes the observation quality based on a novel anisotropic sensing model of cameras, which provides a reliable measurement of the observation quality. The deployment is formulated as the selection of a subset of nodes from a redundant initial deployment with numerous cameras, which is an ℓ0 minimization problem. We then relax this non-convex optimization to a convex ℓ1 minimization employing the sparse representation. Therefore, a high-quality deployment is efficiently obtained via convex optimization. Simulation results confirm the effectiveness of the proposed camera deployment algorithms. PMID:23989826
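The ℓ0-to-ℓ1 relaxation for subset selection can be sketched as a linear program over a relaxed 0/1 camera-selection vector; the binary coverage matrix below is a toy stand-in for the paper's anisotropic sensing model:

```python
import numpy as np
from scipy.optimize import linprog

def select_cameras(cover):
    """cover[i, j] = 1 if candidate camera j sees scene point i.

    The l0 objective (fewest cameras) is relaxed to l1:
    minimize sum(x) subject to cover @ x >= 1 and 0 <= x <= 1.
    """
    n_points, n_cams = cover.shape
    res = linprog(
        c=np.ones(n_cams),          # l1 objective on the selection vector
        A_ub=-cover,                # cover @ x >= 1  ->  -cover @ x <= -1
        b_ub=-np.ones(n_points),
        bounds=[(0, 1)] * n_cams,
    )
    return res.x, res.fun
```

On instances where each point is seen by few candidates, the LP solution is often integral, directly giving the camera subset.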
Communication requirements of sparse Cholesky factorization with nested dissection ordering
NASA Technical Reports Server (NTRS)
Naik, Vijay K.; Patrick, Merrell L.
1989-01-01
Load distribution schemes for minimizing the communication requirements of the Cholesky factorization of dense and sparse, symmetric, positive definite matrices on multiprocessor systems are presented. The total data traffic in factoring an n x n sparse symmetric positive definite matrix representing an n-vertex regular two-dimensional grid graph using n^α processors, α ≤ 1, is shown to be O(n^(1 + α/2)). It is O(n) when n^α processors, α ≥ 1, are used. Under the conditions of uniform load distribution, these results are shown to be asymptotically optimal.
Zhang, Yu; Zhou, Guoxu; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej
2015-11-30
Common spatial pattern (CSP) is one of the most popular methods for motor-imagery (MI) feature extraction and classification in brain-computer interface (BCI) applications. Successful application of CSP depends to a large degree on the filter band selection. However, the most appropriate band is typically subject-specific and can hardly be determined manually. This study proposes a sparse filter band common spatial pattern (SFBCSP) for optimizing the spatial patterns. SFBCSP estimates CSP features on multiple signals that are filtered from raw EEG data at a set of overlapping bands. The filter bands that yield significant CSP features are then selected in a supervised way by exploiting sparse regression. A support vector machine (SVM) is applied to the selected features for MI classification. Two public EEG datasets (BCI Competition III dataset IVa and BCI Competition IV dataset IIb) are used to validate the proposed SFBCSP method. Experimental results demonstrate that SFBCSP helps improve the classification performance of MI. The spatial patterns optimized by SFBCSP give overall better MI classification accuracy than several competing methods. The proposed SFBCSP is thus a promising method for improving the performance of MI-based BCIs. Copyright © 2015 Elsevier B.V. All rights reserved.
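The CSP step at the heart of SFBCSP reduces to a generalized eigendecomposition of the two class-covariance matrices; a minimal sketch (the band filtering and sparse-regression selection steps are omitted):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(C1, C2):
    """Common spatial patterns from two class-covariance matrices.

    Solves the generalized eigenproblem C1 w = lambda (C1 + C2) w; the
    eigenvectors with extreme eigenvalues maximize the variance ratio
    between the two classes. In SFBCSP this would be computed per filter
    band, with sparse regression scoring the resulting band features.
    """
    eigvals, W = eigh(C1, C1 + C2)  # eigenvalues in ascending order
    return eigvals, W
```

The first and last columns of `W` are the most discriminative spatial filters; log-variances of the filtered signals are the usual features fed to the SVM.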
Stochastic Blockmodeling of the Modules and Core of the Caenorhabditis elegans Connectome
Pavlovic, Dragana M.; Vértes, Petra E.; Bullmore, Edward T.; Schafer, William R.; Nichols, Thomas E.
2014-01-01
Recently, there has been much interest in the community structure or mesoscale organization of complex networks. This structure is characterised either as a set of sparsely inter-connected modules or as a highly connected core with a sparsely connected periphery. However, it is often difficult to disambiguate these two types of mesoscale structure or, indeed, to summarise the full network in terms of the relationships between its mesoscale constituents. Here, we estimate a community structure with a stochastic blockmodel approach, the Erdős-Rényi Mixture Model, and compare it to the much more widely used deterministic methods, such as the Louvain and Spectral algorithms. We used the Caenorhabditis elegans (C. elegans) nervous system (connectome) as a model system in which biological knowledge about each node or neuron can be used to validate the functional relevance of the communities obtained. The deterministic algorithms derived communities with 4–5 modules, defined by sparse inter-connectivity between all modules. In contrast, the stochastic Erdős-Rényi Mixture Model estimated a community with 9 blocks or groups, which comprised a similar set of modules but also included a clearly defined core made of 2 small groups. We show that the “core-in-modules” decomposition of the worm brain network, estimated by the Erdős-Rényi Mixture Model, is more compatible with prior biological knowledge about the C. elegans nervous system than the purely modular decomposition defined deterministically. We also show that the blockmodel can be used both to generate stochastic realisations (simulations) of the biological connectome, and to compress the network into a small number of super-nodes and their connectivity. We expect that the Erdős-Rényi Mixture Model may be useful for investigating the complex community structures in other (nervous) systems. PMID:24988196
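One of the uses noted above, generating stochastic realisations from a fitted blockmodel, amounts to sampling edges from a block-probability matrix; a minimal sketch (of the sampling step only, not the Erdős-Rényi Mixture Model fitting procedure):

```python
import numpy as np

def sample_sbm(block_sizes, P, rng):
    """Sample an undirected adjacency matrix from a stochastic blockmodel.

    P[a, b] is the edge probability between blocks a and b;
    block_sizes gives the number of nodes in each block.
    """
    labels = np.repeat(np.arange(len(block_sizes)), block_sizes)
    n = labels.size
    prob = P[labels[:, None], labels[None, :]]       # per-pair edge probability
    upper = np.triu(rng.random((n, n)) < prob, k=1)  # sample upper triangle
    return (upper | upper.T).astype(int), labels     # symmetrize, no self-loops
```

The same `P` and `block_sizes` also serve as the compressed "super-node" summary of the network: one node per block plus the block-to-block connection probabilities.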
Evidence for sparse synergies in grasping actions.
Prevete, Roberto; Donnarumma, Francesco; d'Avella, Andrea; Pezzulo, Giovanni
2018-01-12
Converging evidence shows that hand-actions are controlled at the level of synergies and not single muscles. One intriguing aspect of synergy-based action-representation is that it may be intrinsically sparse and the same synergies can be shared across several distinct types of hand-actions. Here, adopting a normative angle, we consider three hypotheses for hand-action optimal-control: sparse-combination hypothesis (SC) - sparsity in the mapping between synergies and actions - i.e., actions implemented using a sparse combination of synergies; sparse-elements hypothesis (SE) - sparsity in synergy representation - i.e., the mapping between degrees-of-freedom (DoF) and synergies is sparse; double-sparsity hypothesis (DS) - a novel view combining both SC and SE - i.e., both the mapping between DoF and synergies and between synergies and actions are sparse, each action implementing a sparse combination of synergies (as in SC), each using a limited set of DoFs (as in SE). We evaluate these hypotheses using hand kinematic data from six human subjects performing nine different types of reach-to-grasp actions. Our results support DS, suggesting that the best action representation is based on a relatively large set of synergies, each involving a reduced number of degrees-of-freedom, and that distinct sets of synergies may be involved in distinct tasks.
Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks
Chen, Jianhui; Liu, Ji; Ye, Jieping
2013-01-01
We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multi-task learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is non-convex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose to employ the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is non-differentiable and the feasible domain is non-trivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of the projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained via solving an unconstrained optimization subproblem and a Euclidean projection subproblem. We also present two projected gradient algorithms and analyze their rates of convergence in detail. In addition, we illustrate the use of the presented projected gradient algorithms for the proposed multi-task learning formulation using the least squares loss. Experimental results on a collection of real-world data sets demonstrate the effectiveness of the proposed multi-task learning formulation and the efficiency of the proposed projected gradient algorithms. PMID:24077658
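The projected gradient scheme's core loop, a gradient step followed by a Euclidean projection onto the feasible set, can be sketched on a toy feasible set (the nonnegative orthant; the paper's actual projection handles its trace-norm and cardinality structure):

```python
import numpy as np

def projected_gradient_nnls(A, b, n_iter=500):
    """Minimize ||Ax - b||^2 subject to x >= 0 by projected gradient.

    Each iteration: a gradient step with step size 1/L (L = Lipschitz
    constant of the gradient), then a Euclidean projection, which for
    the nonnegative orthant is just clipping at zero."""
    L = 2.0 * np.linalg.norm(A.T @ A, 2)  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ x - b)
        x = np.maximum(x - grad / L, 0.0)  # Euclidean projection onto x >= 0
    return x
```

Swapping in a different projection operator (e.g. onto a trace-norm ball) changes the feasible set without altering the loop, which is what makes the scheme general.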
Deformable segmentation via sparse representation and dictionary learning.
Zhang, Shaoting; Zhan, Yiqiang; Metaxas, Dimitris N
2012-10-01
"Shape" and "appearance", the two pillars of a deformable model, complement each other in object segmentation. In many medical imaging applications, where the low-level appearance information is weak or misleading, shape priors play a more important role in guiding a correct segmentation, thanks to the strong shape characteristics of biological structures. Recently a novel shape prior modeling method has been proposed based on sparse learning theory. Instead of learning a generative shape model, shape priors are incorporated on-the-fly through the sparse shape composition (SSC). SSC is robust to non-Gaussian errors and still preserves individual shape characteristics even when such characteristics are not statistically significant. Although it seems straightforward to incorporate SSC into a deformable segmentation framework as shape priors, the large-scale sparse optimization of SSC has low runtime efficiency, which cannot satisfy clinical requirements. In this paper, we design two strategies to decrease the computational complexity of SSC, yielding a robust, accurate, and efficient deformable segmentation system. (1) When the shape repository contains a large number of instances, which is often the case in 2D problems, K-SVD is used to learn a more compact but still informative shape dictionary. (2) If the derived shape instance has a large number of vertices, which often appears in 3D problems, an affinity propagation method is used to partition the surface into small sub-regions, on which the sparse shape composition is performed locally. Both strategies dramatically decrease the scale of the sparse optimization problem and hence speed up the algorithm. Our method is applied on a diverse set of biomedical image analysis problems. Compared to the original SSC, these two newly-proposed modules not only significantly reduce the computational complexity but also improve the overall accuracy. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Karimi, Davood; Ward, Rabab K.
2016-03-01
Sparse representation of signals in learned overcomplete dictionaries has proven to be a powerful tool with applications in denoising, restoration, compression, reconstruction, and more. Recent research has shown that learned overcomplete dictionaries can lead to better results than analytical dictionaries such as wavelets in almost all image processing applications. However, a major disadvantage of these dictionaries is that their learning and usage is very computationally intensive. In particular, finding the sparse representation of a signal in these dictionaries requires solving an optimization problem that leads to very long computational times, especially in 3D image processing. Moreover, the sparse representation found by greedy algorithms is usually sub-optimal. In this paper, we propose a novel two-level dictionary structure that improves the performance and the speed of standard greedy sparse coding methods. The first (i.e., the top) level in our dictionary is a fixed orthonormal basis, whereas the second level includes the atoms that are learned from the training data. We explain how such a dictionary can be learned from the training data and how the sparse representation of a new signal in this dictionary can be computed. As an application, we use the proposed dictionary structure for removing the noise and artifacts in 3D computed tomography (CT) images. Our experiments with real CT images show that the proposed method achieves results that are comparable with standard dictionary-based methods while substantially reducing the computational time.
de Vos, Stijn; Wardenaar, Klaas J; Bos, Elisabeth H; Wit, Ernst C; Bouwmans, Mara E J; de Jonge, Peter
2017-01-01
Differences in within-person emotion dynamics may be an important source of heterogeneity in depression. To investigate these dynamics, researchers have previously combined multilevel regression analyses with network representations. However, sparse network methods, specifically developed for longitudinal network analyses, have not been applied. Therefore, this study used this approach to investigate population-level and individual-level emotion dynamics in healthy and depressed persons and compared this method with the multilevel approach. Time-series data were collected in pair-matched healthy persons and major depressive disorder (MDD) patients (n = 54). Seven positive affect (PA) and seven negative affect (NA) items were administered electronically 90 times (30 days; thrice per day). The population-level (healthy vs. MDD) and individual-level time series were analyzed using a sparse longitudinal network model based on vector autoregression. The population-level model was also estimated with a multilevel approach. Effects of different preprocessing steps were evaluated as well. The characteristics of the longitudinal networks were investigated to gain insight into the emotion dynamics. In the population-level networks, longitudinal network connectivity was strongest in the healthy group, with nodes showing more and stronger longitudinal associations with each other. Individually estimated networks varied strongly across individuals. Individual variations in network connectivity were unrelated to baseline characteristics (depression status, neuroticism, severity). A multilevel approach applied to the same data showed higher connectivity in the MDD group, which seemed partly related to the preprocessing approach. The sparse network approach can be useful for the estimation of networks with multiple nodes, where overparameterization is an issue, and for individual-level networks.
However, its current inability to model random effects makes it less useful as a population-level approach in case of large heterogeneity. Different preprocessing strategies appeared to strongly influence the results, complicating inferences about network density.
Topological properties of robust biological and computational networks
Navlakha, Saket; He, Xin; Faloutsos, Christos; Bar-Joseph, Ziv
2014-01-01
Network robustness is an important principle in biology and engineering. Previous studies of global networks have identified both redundancy and sparseness as topological properties used by robust networks. By focusing on molecular subnetworks, or modules, we show that module topology is tightly linked to the level of environmental variability (noise) the module expects to encounter. Modules internal to the cell that are less exposed to environmental noise are more connected and less robust than external modules. A similar design principle is used by several other biological networks. We propose a simple change to the evolutionary gene duplication model which gives rise to the rich range of module topologies observed within real networks. We apply these observations to evaluate and design communication networks that are specifically optimized for noisy or malicious environments. Combined, joint analysis of biological and computational networks leads to novel algorithms and insights benefiting both fields. PMID:24789562
Moving target detection for frequency agility radar by sparse reconstruction
NASA Astrophysics Data System (ADS)
Quan, Yinghui; Li, YaChao; Wu, Yaojun; Ran, Lei; Xing, Mengdao; Liu, Mengqi
2016-09-01
Frequency agility radar, with a carrier frequency that varies randomly from pulse to pulse, exhibits superior performance against electromagnetic interference compared to conventional fixed-carrier-frequency pulse-Doppler radar. A novel moving target detection (MTD) method is proposed that estimates the target's velocity for frequency agility radar from the pulses within a coherent processing interval by using sparse reconstruction. A hardware implementation of the orthogonal matching pursuit algorithm is executed on a Xilinx Virtex-7 Field-Programmable Gate Array (FPGA) to perform the sparse optimization. Finally, a series of experiments is performed to evaluate the performance of the proposed MTD method for frequency agility radar systems.
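The orthogonal matching pursuit loop that the FPGA implements in hardware can be sketched in software as follows (the dictionary and sparsity level are illustrative, not the radar's measurement model):

```python
import numpy as np

def omp(D, y, K):
    """Orthogonal matching pursuit: greedily pick the atom most
    correlated with the residual, then re-fit by least squares on the
    selected set.

    D: dictionary with (ideally unit-norm) columns; K: sparsity level."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(K):
        # Atom most correlated with the current residual.
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        # Least-squares re-fit on all selected atoms.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support, coef, residual
```

Each iteration is one correlation sweep plus one small least-squares solve, a structure that maps naturally onto parallel hardware.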
Sparse Modeling of Human Actions from Motion Imagery
2011-09-02
is here developed. Spatio-temporal features that characterize local changes in the image are first extracted. This is followed by the learning of a...video comes from the optimal sparse linear combination of the learned basis vectors (action primitives) representing the actions. A low...computational cost deep-layer model learning the inter-class correlations of the data is added for increasing discriminative power. In spite of its simplicity
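The classification rule sketched in the abstract — label a clip by which class's learned action primitives best reconstruct it — can be illustrated with a toy residual-based classifier. The bases, class names, and feature dimensions below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical learned "action primitive" bases: one orthonormal basis
# of 5 primitives per action class, in a 20-dimensional feature space.
bases = {c: np.linalg.qr(rng.standard_normal((20, 5)))[0]
         for c in ("walk", "wave")}

def classify(y, bases):
    """Label y by the class whose primitives reconstruct it best."""
    def residual(B):
        coef, *_ = np.linalg.lstsq(B, y, rcond=None)
        return float(np.linalg.norm(y - B @ coef))
    return min(bases, key=lambda c: residual(bases[c]))

# A feature vector synthesized from the "walk" primitives is labeled "walk".
y = bases["walk"] @ rng.standard_normal(5)
print(classify(y, bases))
```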
Robust Multi Sensor Classification via Jointly Sparse Representation
2016-03-14
rank, sensor network, dictionary learning ...with ultrafast laser pulses, Optics Express, (04 2015): 10521. Xiaoxia Sun, Nasser M. Nasrabadi, Trac D. Tran. Task-Driven Dictionary Learning...in dictionary design, compressed sensors design, and optimization in sparse recovery also helps. We are able to advance the state of the art
Tang, Shiming; Zhang, Yimeng; Li, Zhihao; Li, Ming; Liu, Fang; Jiang, Hongfei; Lee, Tai Sing
2018-04-26
One general principle of sensory information processing is that the brain must optimize efficiency by reducing the number of neurons that process the same information. The sparseness of the sensory representations in a population of neurons reflects the efficiency of the neural code. Here, we employ large-scale two-photon calcium imaging to examine the responses of a large population of neurons within the superficial layers of area V1 with single-cell resolution, while simultaneously presenting a large set of natural visual stimuli, to provide the first direct measure of the population sparseness in awake primates. The results show that only 0.5% of neurons respond strongly to any given natural image - indicating a ten-fold increase in the inferred sparseness over previous measurements. These population activities are nevertheless necessary and sufficient to discriminate visual stimuli with high accuracy, suggesting that the neural code in the primary visual cortex is both super-sparse and highly efficient. © 2018, Tang et al.
Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering
Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus
2015-01-01
This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs. PMID:26146475
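The key operation the paper describes — classifying a pdf by applying the transfer function to the whole intensity distribution rather than to a single down-sampled intensity — can be sketched in one dimension. The mixture parameters and ramp transfer function below are invented for illustration; the point is that the expectation of the transfer function under the pdf differs from the transfer function evaluated at the mean intensity:

```python
import numpy as np

# Sparse pdf of one low-resolution voxel: a two-component Gaussian mixture
# over the intensity range (parameters invented for the example).
weights, means, sigmas = [0.7, 0.3], [0.2, 0.8], [0.05, 0.05]

v = np.linspace(0.0, 1.0, 1001)          # 1D intensity domain
dv = v[1] - v[0]
pdf = sum(w * np.exp(-(v - m) ** 2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
          for w, m, s in zip(weights, means, sigmas))

# Hypothetical transfer function: opacity ramps linearly above 0.5.
tf = np.clip((v - 0.5) / 0.5, 0.0, 1.0)

# Consistent classification: expectation of the TF under the pdf ...
expected = float(np.sum(tf * pdf) * dv)
# ... versus applying the TF to a single averaged (down-sampled) intensity.
mean_intensity = float(np.sum(v * pdf) * dv)
naive = float(np.interp(mean_intensity, v, tf))
print(expected, naive)
```

Here the averaged intensity (about 0.38) falls below the opacity ramp, so the naive classification is fully transparent, while the pdf-based expectation retains the contribution of the bright mixture component.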
NASA Astrophysics Data System (ADS)
Liu, Hongcheng; Dong, Peng; Xing, Lei
2017-08-01
ℓ2,1-minimization-based sparse optimization was employed to solve the beam angle optimization (BAO) problem in intensity-modulated radiation therapy (IMRT) planning. The technique approximates the exact BAO formulation with efficiently computable convex surrogates, leading to plans that are inferior to those attainable with recently proposed gradient-based greedy schemes. In this paper, we alleviate the nontrivial inconsistencies between the ℓ2,1-based formulations and the exact BAO model by proposing a new sparse optimization framework based on the most recent developments in group variable selection. We propose the incorporation of the group-folded concave penalty (gFCP) as a substitute for the ℓ2,1-minimization framework. The new formulation is then solved by a variation of an existing gradient method. The performance of the proposed scheme is evaluated, in terms of both plan quality and computational efficiency, using three IMRT cases: a coplanar prostate case, a coplanar head-and-neck case, and a noncoplanar liver case. Included in the evaluation are two alternative schemes: the ℓ2,1-minimization approach and the gradient norm method (GNM). The gFCP-based scheme outperforms both counterpart approaches. In particular, gFCP generates better plans than those obtained using ℓ2,1-minimization for all three cases with a comparable computation time. Compared to the GNM, gFCP improves both plan quality and computational efficiency. The proposed gFCP-based scheme provides a promising framework for BAO and promises to improve both planning time and plan quality.
Valentin, J; Sprenger, M; Pflüger, D; Röhrle, O
2018-05-01
Investigating the interplay between muscular activity and motion is the basis for improving our understanding of healthy and diseased musculoskeletal systems. Computational models are used to analyze these systems. Despite some severe modeling assumptions, almost all existing musculoskeletal system simulations appeal to multibody simulation frameworks. Although continuum-mechanical musculoskeletal system models can compensate for some of these limitations, they are rarely considered because of their computational complexity and cost. The proposed framework is the first activation-driven musculoskeletal system model in which the exerted skeletal muscle forces are computed using 3-dimensional, continuum-mechanical skeletal muscle models and in which muscle activations are determined based on a constrained optimization problem. Numerical feasibility is achieved by computing sparse grid surrogates with hierarchical B-splines, and adaptive sparse grid refinement further reduces the computational effort. The choice of B-splines allows the use of all existing gradient-based optimization techniques without further numerical approximation. This paper demonstrates that the resulting surrogates have low relative errors (less than 0.76%) and can be used within forward simulations that are subject to constrained optimization. To demonstrate this, we set up several test scenarios in which an upper limb model consisting of the elbow joint, the biceps and triceps brachii, and an external load is subjected to different optimization criteria. Even though this novel method has only been demonstrated for a 2-muscle system, it can easily be extended to musculoskeletal systems with 3 or more muscles. Copyright © 2018 John Wiley & Sons, Ltd.
A Multiobjective Sparse Feature Learning Model for Deep Neural Networks.
Gong, Maoguo; Liu, Jia; Li, Hao; Cai, Qing; Su, Linzhi
2015-12-01
Hierarchical deep neural networks are currently popular learning models that imitate the hierarchical architecture of the human brain. Single-layer feature extractors are the bricks used to build deep networks. Sparse feature learning models are popular models that can learn useful representations, but most of them need a user-defined constant to control the sparsity of the representations. In this paper, we propose a multiobjective sparse feature learning model based on the autoencoder. The parameters of the model are learned by optimizing two objectives, reconstruction error and the sparsity of the hidden units, simultaneously, to find a reasonable compromise between them automatically. We design a multiobjective induced learning procedure for this model based on a multiobjective evolutionary algorithm. In the experiments, we demonstrate that the learning procedure is effective and that the proposed multiobjective model can learn useful sparse features.
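The compromise the model seeks — trading reconstruction error against sparsity without a user-defined weighting — is the standard notion of Pareto optimality that multiobjective evolutionary algorithms operate on. A minimal sketch of dominance and non-dominated filtering, with invented objective values (both to be minimized):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the points not dominated by any other candidate."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Invented (reconstruction error, sparsity penalty) pairs for candidate
# autoencoder parameter vectors; the evolutionary algorithm would keep
# the non-dominated set and evolve it further.
candidates = [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9), (0.6, 0.6), (0.8, 0.8)]
print(pareto_front(candidates))
```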
Joint sparse representation for robust multimodal biometrics recognition.
Shekhar, Sumit; Patel, Vishal M; Nasrabadi, Nasser M; Chellappa, Rama
2014-01-01
Traditional biometric recognition systems rely on a single biometric signature for authentication. While the advantage of using multiple sources of information for establishing identity has been widely recognized, computational models for multimodal biometric recognition have only recently received attention. We propose a multimodal sparse representation method, which represents the test data by a sparse linear combination of training data, while constraining the observations from different modalities of the test subject to share their sparse representations. Thus, we simultaneously take into account correlations as well as coupling information among biometric modalities. A multimodal quality measure is also proposed to weight each modality as it is fused. Furthermore, we kernelize the algorithm to handle nonlinearity in the data. The optimization problem is solved using an efficient alternating direction method. Various experiments show that the proposed method compares favorably with competing fusion-based methods.
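The constraint that all modalities of one subject share their sparse supports is commonly written as an ℓ2,1 (row-sparsity) penalty on the coefficient matrix. A hedged sketch, using a plain ISTA iteration rather than the paper's alternating direction solver, and a random dictionary in place of training data:

```python
import numpy as np

def row_shrink(X, t):
    """Proximal operator of t * (sum of row 2-norms): row-wise soft threshold."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)

def joint_sparse_code(D, Y, lam, iters=1000):
    """ISTA for 0.5*||Y - D X||_F^2 + lam*||X||_{2,1}: the modalities
    (columns of Y) are coupled through the shared row support of X."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant
    X = np.zeros((D.shape[1], Y.shape[1]))
    for _ in range(iters):
        X = row_shrink(X - step * (D.T @ (D @ X - Y)), step * lam)
    return X

rng = np.random.default_rng(2)
D = rng.standard_normal((30, 60))
D /= np.linalg.norm(D, axis=0)
X_true = np.zeros((60, 2))                   # two modalities, shared support
X_true[5] = [1.5, -1.0]
X_true[17] = [1.0, 2.0]
X_hat = joint_sparse_code(D, D @ X_true, lam=0.05)
rows = np.where(np.linalg.norm(X_hat, axis=1) > 0.3)[0]
print(rows)
```

The row-wise shrinkage either keeps or kills an entire row of coefficients, which is exactly what forces the modalities to agree on which training atoms are used.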
Sparse 4D TomoSAR imaging in the presence of non-linear deformation
NASA Astrophysics Data System (ADS)
Khwaja, Ahmed Shaharyar; Çetin, Müjdat
2018-04-01
In this paper, we present a sparse four-dimensional tomographic synthetic aperture radar (4D TomoSAR) imaging scheme that can estimate elevation and linear as well as non-linear seasonal deformation rates of scatterers using the interferometric phase. Unlike existing sparse processing techniques that use fixed dictionaries based on a linear deformation model, we use a variable dictionary for the non-linear deformation in the form of seasonal sinusoidal deformation, in addition to the fixed dictionary for the linear deformation. We estimate the amplitude of the sinusoidal deformation using an optimization method and create the variable dictionary using the estimated amplitude. We show preliminary results using simulated data that demonstrate the soundness of our proposed technique for sparse 4D TomoSAR imaging in the presence of non-linear deformation.
Inference of the sparse kinetic Ising model using the decimation method
NASA Astrophysics Data System (ADS)
Decelle, Aurélien; Zhang, Pan
2015-05-01
In this paper we study the inference of the kinetic Ising model on sparse graphs by the decimation method. The decimation method, which was first proposed in Decelle and Ricci-Tersenghi [Phys. Rev. Lett. 112, 070603 (2014), 10.1103/PhysRevLett.112.070603] for the static inverse Ising problem, tries to recover the topology of the inferred system by setting the weakest couplings to zero iteratively. During the decimation process the likelihood function is maximized over the remaining couplings. Unlike the ℓ1-optimization-based methods, the decimation method does not use the Laplace distribution as a heuristic choice of prior to select a sparse solution. In our case, the whole process can be done automatically without fixing any parameters by hand. We show that in the dynamical inference problem, where the task is to reconstruct the couplings of an Ising model given the data, the decimation process can be incorporated naturally into a maximum-likelihood optimization algorithm, as opposed to the static case, where the pseudolikelihood method needs to be adopted. We also use extensive numerical studies to validate the accuracy of our methods in dynamical inference problems. Our results illustrate that, on various topologies and with different distributions of couplings, the decimation method outperforms the widely used ℓ1-optimization-based methods.
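A minimal version of the procedure for a single spin with a synthetic sparse coupling vector: fit the kinetic-Ising couplings by maximizing the exact likelihood, then repeatedly zero out the weakest remaining coupling and refit. For brevity this toy stops at a fixed support size instead of monitoring the likelihood drop, as the full method does:

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 8, 10000
J_true = np.zeros(N)                      # incoming couplings of one spin
J_true[[1, 4]] = [1.0, -0.8]              # sparse: only two are nonzero

# Kinetic Ising dynamics: P(s'(t+1) = +1 | s(t)) = 1 / (1 + exp(-2 h)),
# with field h = J . s(t). Inputs s(t) are drawn i.i.d. for simplicity.
S = rng.choice([-1.0, 1.0], size=(T, N))
h = S @ J_true
s_next = np.where(rng.random(T) < 1.0 / (1.0 + np.exp(-2.0 * h)), 1.0, -1.0)

def fit(active, iters=200, lr=0.2):
    """Gradient ascent on the exact kinetic-Ising log-likelihood,
    restricted to the currently active couplings."""
    J = np.zeros(N)
    for _ in range(iters):
        grad = S.T @ (s_next - np.tanh(S @ J)) / T
        J[active] += lr * grad[active]
    return J

# Decimation: refit, then zero the weakest remaining coupling, repeatedly.
active = list(range(N))
while len(active) > 2:
    J = fit(active)
    active.remove(min(active, key=lambda i: abs(J[i])))
print(sorted(active))
```

With enough data the spurious couplings estimate near zero, so the iterative removal of the weakest coupling strips exactly them and leaves the true sparse support.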
Optimization-based image reconstruction from sparse-view data in offset-detector CBCT
NASA Astrophysics Data System (ADS)
Bian, Junguo; Wang, Jiong; Han, Xiao; Sidky, Emil Y.; Shao, Lingxiong; Pan, Xiaochuan
2013-01-01
The field of view (FOV) of a cone-beam computed tomography (CBCT) unit in a single-photon emission computed tomography (SPECT)/CBCT system can be increased by offsetting the CBCT detector. Analytic-based algorithms have been developed for image reconstruction from data collected at a large number of densely sampled views in offset-detector CBCT. However, the radiation dose involved in a large number of projections can be of health concern to the imaged subject. CBCT imaging dose can be reduced by lowering the number of projections. As analytic-based algorithms are unlikely to reconstruct accurate images from sparse-view data, we investigate and characterize in this work optimization-based algorithms, including the adaptive steepest descent-weighted projection onto convex sets (ASD-WPOCS) algorithm, for image reconstruction from sparse-view data collected in offset-detector CBCT. Using simulated data and real data collected from a physical pelvis phantom and a patient, we verify and characterize properties of the algorithms under study. Results of our study suggest that optimization-based algorithms such as ASD-WPOCS may be developed to yield images of potential utility from a number of projections substantially smaller than those used currently in clinical SPECT/CBCT imaging, thus leading to a dose reduction in CBCT imaging.
Dai, Hongying; Wu, Guodong; Wu, Michael; Zhi, Degui
2016-01-01
Next-generation sequencing data pose a severe curse of dimensionality, complicating traditional "single marker-single trait" analysis. We propose a two-stage combined p-value method for pathway analysis. The first stage is at the gene level, where we integrate effects within a gene using the Sequence Kernel Association Test (SKAT). The second stage is at the pathway level, where we perform a correlated Lancaster procedure to detect joint effects from multiple genes within a pathway. We show that the Lancaster procedure is optimal in Bahadur efficiency among all combined p-value methods. The Bahadur efficiency,[Formula: see text], compares sample sizes among different statistical tests when signals become sparse in sequencing data, i.e. ε →0. The optimal Bahadur efficiency ensures that the Lancaster procedure asymptotically requires a minimal sample size to detect sparse signals ([Formula: see text]). The Lancaster procedure can also be applied to meta-analysis. Extensive empirical assessments of exome sequencing data show that the proposed method outperforms Gene Set Enrichment Analysis (GSEA). We applied the competitive Lancaster procedure to meta-analysis data generated by the Global Lipids Genetics Consortium to identify pathways significantly associated with high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, triglycerides, and total cholesterol.
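The Lancaster procedure is a weighted generalization of Fisher's method: each gene-level p-value is passed through an inverse chi-square transform with its own degrees of freedom before summing, and the sum is referred to a chi-square with the total degrees of freedom. A small sketch (equal weights of 2 reduce it to Fisher's method, which makes a convenient check):

```python
import numpy as np
from scipy import stats

def lancaster(pvals, dofs):
    """Lancaster's combined p-value: each p_i enters through an inverse
    chi-square transform with its own degrees of freedom d_i, and the sum
    is referred to a chi-square with sum(d_i) degrees of freedom."""
    t = sum(stats.chi2.isf(p, df=d) for p, d in zip(pvals, dofs))
    return float(stats.chi2.sf(t, df=sum(dofs)))

# With all degrees of freedom equal to 2 this is exactly Fisher's method,
# since the inverse chi-square transform with df=2 is -2*log(p).
pvals = [0.01, 0.20, 0.50]
fisher = float(stats.chi2.sf(-2.0 * np.log(np.prod(pvals)), df=2 * len(pvals)))
print(lancaster(pvals, [2, 2, 2]), fisher)
```

The degrees of freedom act as the per-gene weights: genes expected to carry more signal can be given larger df, which is the freedom Fisher's method lacks.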
L2-norm multiple kernel learning and its application to biomedical data fusion
2010-01-01
Background This paper introduces the notion of optimizing different norms in the dual problem of support vector machines with multiple kernels. The selection of norms yields different extensions of multiple kernel learning (MKL) such as L∞, L1, and L2 MKL. In particular, L2 MKL is a novel method that leads to non-sparse optimal kernel coefficients, which is different from the sparse kernel coefficients optimized by the existing L∞ MKL method. In real biomedical applications, L2 MKL may have advantages over sparse integration methods for thoroughly combining complementary information in heterogeneous data sources. Results We provide a theoretical analysis of the relationship between the L2 optimization of kernels in the dual problem and the L2 coefficient regularization in the primal problem. Understanding the dual L2 problem grants a unified view on MKL and enables us to extend the L2 method to a wide range of machine learning problems. We implement L2 MKL for ranking and classification problems and compare its performance with the sparse L∞ and the averaging L1 MKL methods. The experiments are carried out on six real biomedical data sets and two large scale UCI data sets. L2 MKL yields better performance on most of the benchmark data sets. In particular, we propose a novel L2 MKL least squares support vector machine (LSSVM) algorithm, which is shown to be an efficient and promising classifier for large-scale data set processing. Conclusions This paper extends the statistical framework of genomic data fusion based on MKL. Allowing non-sparse weights on the data sources is an attractive option in settings where we believe most data sources to be relevant to the problem at hand and want to avoid a "winner-takes-all" effect seen in L∞ MKL, which can be detrimental to performance in prospective studies. The notion of optimizing L2 kernels can be straightforwardly extended to ranking, classification, regression, and clustering algorithms. 
To tackle the computational burden of MKL, this paper proposes several novel LSSVM based MKL algorithms. Systematic comparison on real data sets shows that LSSVM MKL has comparable performance as the conventional SVM MKL algorithms. Moreover, large scale numerical experiments indicate that when cast as semi-infinite programming, LSSVM MKL can be solved more efficiently than SVM MKL. Availability The MATLAB code of algorithms implemented in this paper is downloadable from http://homes.esat.kuleuven.be/~sistawww/bioi/syu/l2lssvm.html. PMID:20529363
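The sparse-versus-non-sparse contrast at the heart of the paper can be sketched with a simplified kernel-weighting step. The per-kernel quality scores and the closed-form L2 weighting below are illustrative stand-ins for the actual dual optimization, not the paper's algorithm: under a constraint ||μ||₂ ≤ 1 the score-maximizing weights are simply the normalized scores (Cauchy-Schwarz), so every informative kernel retains nonzero weight, whereas an L∞-style solution concentrates on one kernel:

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.standard_normal((40, 3))
kernels = [np.outer(X[:, j], X[:, j]) for j in range(3)]  # three linear kernels
scores = np.array([3.0, 2.5, 0.3])       # hypothetical per-kernel quality scores

# Sparse (winner-takes-all) weighting vs. non-sparse L2-constrained weighting.
mu_sparse = (scores == scores.max()).astype(float)
mu_l2 = scores / np.linalg.norm(scores)  # argmax of mu.scores s.t. ||mu||_2 <= 1

K_sparse = sum(m * K for m, K in zip(mu_sparse, kernels))
K_l2 = sum(m * K for m, K in zip(mu_l2, kernels))
print(mu_sparse, mu_l2.round(3))
```

The combined kernel K_l2 keeps a contribution from every data source, which is the behavior the paper argues for when most sources are believed relevant.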
Sparse approximation problem: how rapid simulated annealing succeeds and fails
NASA Astrophysics Data System (ADS)
Obuchi, Tomoyuki; Kabashima, Yoshiyuki
2016-03-01
Information processing techniques based on sparseness have been actively studied in several disciplines. Among them, a mathematical framework to approximately express a given dataset by a combination of a small number of basis vectors of an overcomplete basis is termed the sparse approximation. In this paper, we apply simulated annealing, a metaheuristic algorithm for general optimization problems, to sparse approximation in the situation where the given data have a planted sparse representation and noise is present. The result in the noiseless case shows that our simulated annealing works well in a reasonable parameter region: the planted solution is found fairly rapidly. This is true even in the case where a common relaxation of the sparse approximation problem, the G-relaxation, is ineffective. On the other hand, when the dimensionality of the data is close to the number of non-zero components, another metastable state emerges, and our algorithm fails to find the planted solution. This phenomenon is associated with a first-order phase transition. In the case of very strong noise, it is no longer meaningful to search for the planted solution. In this situation, our algorithm determines a solution with close-to-minimum distortion fairly quickly.
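The setting the paper studies — a planted k-sparse representation searched for by annealing — can be reproduced at toy scale. The sketch below anneals over which dictionary columns are active, scoring each support by its least-squares residual; the dimensions and cooling schedule are arbitrary choices for the demo:

```python
import numpy as np

rng = np.random.default_rng(4)
M, N, K = 15, 20, 2
A = rng.standard_normal((M, N))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(N)
x_true[[3, 11]] = [1.0, -0.8]            # planted K-sparse representation
y = A @ x_true                           # noiseless case

def energy(support):
    """Squared residual of the best fit restricted to the given support."""
    cols = sorted(support)
    coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
    return float(np.sum((y - A[:, cols] @ coef) ** 2))

support = set(int(i) for i in rng.choice(N, K, replace=False))
E = energy(support)
for step in range(2000):
    beta = 0.5 * 1.01 ** step            # rapid geometric annealing
    swap_out = rng.choice(sorted(support))
    swap_in = int(rng.integers(N))
    if swap_in in support:
        continue
    cand = (support - {swap_out}) | {swap_in}
    E_cand = energy(cand)
    # Metropolis rule: always accept improvements, sometimes accept worse.
    if E_cand < E or rng.random() < np.exp(-beta * (E_cand - E)):
        support, E = cand, E_cand
print(sorted(support), E)
```

In this easy regime the planted support is found well before the temperature bottoms out; the failure mode the paper analyzes appears when the number of nonzeros approaches the data dimension and a metastable state traps the walker.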
Low-count PET image restoration using sparse representation
NASA Astrophysics Data System (ADS)
Li, Tao; Jiang, Changhui; Gao, Juan; Yang, Yongfeng; Liang, Dong; Liu, Xin; Zheng, Hairong; Hu, Zhanli
2018-04-01
In the field of positron emission tomography (PET), reconstructed images are often blurry and contain noise. These problems are primarily caused by the low resolution of the projection data. Solving this problem by improving hardware is an expensive solution, and therefore, we attempted to develop a solution based on optimizing several related algorithms in both the reconstruction and image post-processing domains. As sparse technology is widely used, sparse prediction is increasingly applied to solve this problem. In this paper, we propose a new sparse method to process low-resolution PET images. Two dictionaries (D1 for low-resolution PET images and D2 for high-resolution PET images) are learned from a group of real PET image data sets. Of these two dictionaries, D1 is used to obtain a sparse representation for each patch of the input PET image. Then, a high-resolution PET image is generated from this sparse representation using D2. Experimental results indicate that the proposed method exhibits a stable and superior ability to enhance image resolution and recover image details. Quantitatively, this method achieves better performance than traditional methods. The proposed strategy is a new and efficient approach for improving the quality of PET images.
Margin based ontology sparse vector learning algorithm and applied in biology science.
Gao, Wei; Qudair Baig, Abdul; Ali, Haidar; Sajjad, Wasim; Reza Farahani, Mohammad
2017-01-01
In the biology field, ontology applications involve large amounts of genetic information and chemical information about molecular structure, so the ontology concepts convey a great deal of information. Therefore, in mathematical notation, the dimension of the vector that corresponds to an ontology concept is often very large, which places higher requirements on the ontology algorithm. Against this background, we consider the design of an ontology sparse vector algorithm and its application in biology. In this paper, using knowledge of the marginal likelihood and marginal distribution, an optimization strategy for a margin-based ontology sparse vector learning algorithm is presented. Finally, the new algorithm is applied to the gene ontology and the plant ontology to verify its efficiency.
Learning partial differential equations via data discovery and sparse optimization
NASA Astrophysics Data System (ADS)
Schaeffer, Hayden
2017-01-01
We investigate the problem of learning an evolution equation directly from some given data. This work develops a learning algorithm to identify the terms in the underlying partial differential equations and to approximate the coefficients of the terms only using data. The algorithm uses sparse optimization in order to perform feature selection and parameter estimation. The features are data driven in the sense that they are constructed using nonlinear algebraic equations on the spatial derivatives of the data. Several numerical experiments show the proposed method's robustness to data noise and size, its ability to capture the true features of the data, and its capability of performing additional analytics. Examples include shock equations, pattern formation, fluid flow and turbulence, and oscillatory convection.
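A common concrete instance of this recipe is sequential thresholded least squares over a library of candidate terms. The toy below recovers the hypothetical dynamics u_t = u - u^2 from sampled data, with the derivative supplied analytically to keep the example short (in practice it is estimated numerically from the data):

```python
import numpy as np

# Samples of u and its time derivative for the hypothetical dynamics
# u_t = u - u^2.
u = np.linspace(0.1, 2.0, 100)
ut = u - u**2

# Candidate library built algebraically from the data.
library = np.column_stack([np.ones_like(u), u, u**2, u**3])
names = ["1", "u", "u^2", "u^3"]

# Sequential thresholded least squares: fit, zero small coefficients,
# refit on the surviving terms (the sparsity-promoting step).
coef, *_ = np.linalg.lstsq(library, ut, rcond=None)
for _ in range(5):
    coef[np.abs(coef) < 0.1] = 0.0
    active = coef != 0.0
    coef[active], *_ = np.linalg.lstsq(library[:, active], ut, rcond=None)
print(dict(zip(names, np.round(coef, 6))))
```

The thresholding performs the feature selection (only u and u^2 survive) while the refits perform the parameter estimation, mirroring the two roles the abstract assigns to the sparse optimization.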
Optimized Design and Analysis of Sparse-Sampling fMRI Experiments
Perrachione, Tyler K.; Ghosh, Satrajit S.
2013-01-01
Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional time series. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Despite being in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest the employment of three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) Sparse analyses should utilize a physiologically informed model that incorporates hemodynamic response convolution to reduce model error. (2) The design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size. 
(3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to increase the number of samples and improve statistical power. PMID:23616742
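Recommendation (1) — modeling the sparse samples through hemodynamic convolution rather than treating them as raw responses — can be sketched by convolving a stimulus train with a canonical double-gamma HRF and reading off the values at the sparse acquisition times. The stimulus timing, TR, and HRF parameters below are illustrative, not taken from the paper:

```python
import numpy as np
from math import gamma

def hrf(t):
    """Canonical double-gamma hemodynamic response (SPM-like shape)."""
    peak = t**5 * np.exp(-t) / gamma(6)
    undershoot = t**15 * np.exp(-t) / gamma(16)
    return peak - undershoot / 6.0

dt = 0.1
t = np.arange(0.0, 30.0, dt)

# Hypothetical design: a 2 s auditory stimulus at the start of each 10 s TR.
stimulus = np.zeros_like(t)
stimulus[(t % 10.0) < 2.0] = 1.0

# Physiologically informed prediction: stimulus convolved with the HRF.
bold = np.convolve(stimulus, hrf(t))[:len(t)] * dt

# Sparse sampling: one volume per TR, acquired late in the silent gap.
acq_times = np.arange(8.0, 30.0, 10.0)
samples = bold[np.searchsorted(t, acq_times)]
print(np.round(samples, 3))
```

Fitting the sparse samples against this convolved regressor, instead of against the raw stimulus boxcar, is what reduces the model error the first recommendation targets.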
Ye, Qing; Pan, Hao; Liu, Changhua
2015-01-01
This research proposes a novel framework for simultaneous failure diagnosis of final drives, comprising feature extraction, training of paired diagnostic models, decision-threshold generation, and recognition of simultaneous failure modes. In the feature extraction module, wavelet packet transform and fuzzy entropy are adopted to reduce noise interference and extract representative features of each failure mode. Single failure samples are used to construct probability classifiers based on the paired sparse Bayesian extreme learning machine, which is trained only on single failure modes and inherits the high generalization and sparsity of the sparse Bayesian learning approach. To generate the optimal decision threshold that converts the probability outputs of the classifiers into final simultaneous failure modes, this research uses samples containing both single and simultaneous failure modes together with the grid search method, which is superior to traditional techniques in global optimization. Compared with other frequently used diagnostic approaches based on support vector machines and probability neural networks, experimental results based on the F1-measure verify that the diagnostic accuracy and efficiency of the proposed framework, which are crucial for simultaneous failure diagnosis, are superior to those of the existing approaches. PMID:25722717
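The threshold-generation step — searching a grid for the probability cutoff that best converts classifier outputs into failure labels — can be sketched directly. The probabilities, labels, and scoring function below are invented stand-ins for the paper's validation set and F1-measure criterion:

```python
import numpy as np

def f1(y_true, y_pred):
    """F1-measure from binary arrays (the evaluation criterion used here)."""
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

# Hypothetical probabilistic outputs of one paired classifier on a
# validation set containing single and simultaneous failure modes.
probs = np.array([0.95, 0.90, 0.70, 0.60, 0.40, 0.30, 0.20, 0.10])
labels = np.array([1, 1, 1, 0, 1, 0, 0, 0])

# Grid search for the decision threshold converting probabilities to modes.
grid = np.linspace(0.05, 0.95, 19)
best = max(grid, key=lambda th: f1(labels, (probs >= th).astype(int)))
print(round(float(best), 2))
```

With a per-failure-mode threshold chosen this way, any test sample whose probabilities exceed several thresholds at once is recognized as a simultaneous failure.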
The HTM Spatial Pooler-A Neocortical Algorithm for Online Sparse Distributed Coding.
Cui, Yuwei; Ahmad, Subutai; Hawkins, Jeff
2017-01-01
Hierarchical temporal memory (HTM) provides a theoretical framework that models several key computational principles of the neocortex. In this paper, we analyze an important component of HTM, the HTM spatial pooler (SP). The SP models how neurons learn feedforward connections and form efficient representations of the input. It converts arbitrary binary input patterns into sparse distributed representations (SDRs) using a combination of competitive Hebbian learning rules and homeostatic excitability control. We describe a number of key properties of the SP, including fast adaptation to changing input statistics, improved noise robustness through learning, efficient use of cells, and robustness to cell death. In order to quantify these properties we develop a set of metrics that can be directly computed from the SP outputs. We show how the properties are met using these metrics and targeted artificial simulations. We then demonstrate the value of the SP in a complete end-to-end real-world HTM system. We discuss the relationship with neuroscience and previous studies of sparse coding. The HTM spatial pooler represents a neurally inspired algorithm for learning sparse representations from noisy data streams in an online fashion.
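The core of the spatial pooler — boosted feedforward overlap followed by a k-winners-take-all selection that yields a sparse distributed representation — can be sketched as below; the Hebbian learning and homeostatic boost updates are omitted, and the sizes and sparsity levels are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
n_inputs, n_columns, k = 100, 50, 5      # 10% of columns active -> an SDR

# Each column's random feedforward (potential) connections to the input.
connections = (rng.random((n_columns, n_inputs)) < 0.3).astype(float)
boost = np.ones(n_columns)               # homeostatic excitability factors

def spatial_pooler(x):
    """k-winners-take-all over boosted feedforward overlap scores."""
    overlap = boost * (connections @ x)
    sdr = np.zeros(n_columns)
    sdr[np.argsort(overlap)[-k:]] = 1.0  # the k most-overlapping columns win
    return sdr

x = (rng.random(n_inputs) < 0.2).astype(float)   # a binary input pattern
sdr = spatial_pooler(x)
print(int(sdr.sum()))
```

In the full SP, the winning columns then strengthen their connections to active inputs and the boost factors are adjusted so rarely winning columns become more excitable; the fixed-k competition alone already guarantees the output sparsity.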
Unified commutation-pruning technique for efficient computation of composite DFTs
NASA Astrophysics Data System (ADS)
Castro-Palazuelos, David E.; Medina-Melendrez, Modesto Gpe.; Torres-Roman, Deni L.; Shkvarko, Yuriy V.
2015-12-01
An efficient computation of a composite-length discrete Fourier transform (DFT), as well as a fast Fourier transform (FFT), of both time and space data sequences in uncertain (non-sparse or sparse) computational scenarios requires specific processing algorithms. Traditional algorithms typically employ some pruning methods without any commutations, which prevents them from attaining the potential computational efficiency. In this paper, we propose an alternative unified approach with automatic commutations between three computational modalities aimed at efficient computation of the pruned DFTs adapted for variable composite lengths of the non-sparse input-output data. The first modality is an implementation of the direct computation of a composite-length DFT, the second employs the second-order recursive filtering method, and the third performs the new pruned decomposed transform. The pruned decomposed transform algorithm performs decimation in the time or space (DIT) data acquisition domain and, then, decimation in frequency (DIF). The unified combination of these three algorithms is addressed as the DFTCOMM technique. Based on the treatment of the combinatorial hypothesis-testing optimization problem of preferable allocations among all feasible commuting-pruning modalities, we have found the global optimal solution to the pruning problem, which always requires fewer or, at most, the same number of arithmetic operations as any other feasible modality. In this sense, the DFTCOMM method outperforms the existing competing pruning techniques in the attainable savings in the number of required arithmetic operations. Finally, we provide a comparison of the DFTCOMM with the recently developed sparse fast Fourier transform (SFFT) algorithmic family.
We show that, in sensing scenarios with a sparse or non-sparse data Fourier spectrum, the DFTCOMM technique is robust against such model uncertainties in the sense of insensitivity to sparsity/non-sparsity restrictions and to the variability of the operating parameters.
NASA Astrophysics Data System (ADS)
Orović, Irena; Stanković, Srdjan; Amin, Moeness
2013-05-01
A modified robust two-dimensional compressive sensing algorithm for reconstruction of sparse time-frequency representation (TFR) is proposed. The ambiguity function domain is assumed to be the domain of observations. The two-dimensional Fourier bases are used to linearly relate the observations to the sparse TFR, in lieu of the Wigner distribution. We assume that a set of available samples in the ambiguity domain is heavily corrupted by an impulsive type of noise. Consequently, the problem of sparse TFR reconstruction cannot be tackled using standard compressive sensing optimization algorithms. We introduce a two-dimensional L-statistics based modification into the transform domain representation. It provides suitable initial conditions that will produce efficient convergence of the reconstruction algorithm. This approach applies sorting and weighting operations to discard an expected amount of samples corrupted by noise. The remaining samples serve as observations used in sparse reconstruction of the time-frequency signal representation. The efficiency of the proposed approach is demonstrated on numerical examples that comprise both cases of monocomponent and multicomponent signals.
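The sorting-and-discarding step of the L-statistics modification can be illustrated as follows. This is a deliberately simplified sketch (magnitude-based trimming of a 1-D toy signal), not the paper's full two-dimensional procedure in the ambiguity domain:

```python
import numpy as np

rng = np.random.default_rng(2)

def l_statistics_trim(samples, discard_frac=0.2):
    """Sort observations by magnitude and discard the largest fraction,
    which are the ones most likely corrupted by impulsive noise; the
    remaining samples serve as observations for sparse reconstruction."""
    order = np.argsort(np.abs(samples))
    keep = order[: int(len(samples) * (1 - discard_frac))]
    return np.sort(keep)      # indices of retained (presumed clean) samples

n = 100
clean = np.cos(2 * np.pi * 5 * np.arange(n) / n)
noisy = clean.copy()
hit = rng.choice(n, size=10, replace=False)
noisy[hit] += 50 * rng.standard_normal(10)   # impulsive corruption
kept = l_statistics_trim(noisy)              # observation set after trimming
```

The retained index set would then define the measurement operator rows used in the compressive-sensing reconstruction of the TFR.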
Optimal Chebyshev polynomials on ellipses in the complex plane
NASA Technical Reports Server (NTRS)
Fischer, Bernd; Freund, Roland
1989-01-01
The design of iterative schemes for sparse matrix computations often leads to constrained polynomial approximation problems on sets in the complex plane. For the case of ellipses, we introduce a new class of complex polynomials which are in general very good approximations to the best polynomials and even optimal in most cases.
SparseBeads data: benchmarking sparsity-regularized computed tomography
NASA Astrophysics Data System (ADS)
Jørgensen, Jakob S.; Coban, Sophia B.; Lionheart, William R. B.; McDonald, Samuel A.; Withers, Philip J.
2017-12-01
Sparsity regularization (SR) such as total variation (TV) minimization allows accurate image reconstruction in x-ray computed tomography (CT) from fewer projections than analytical methods. Exactly how few projections suffice and how this number may depend on the image remain poorly understood. Compressive sensing connects the critical number of projections to the image sparsity but does not cover CT; however, empirical results suggest a similar connection. The present work establishes for real CT data a connection between gradient sparsity and the sufficient number of projections for accurate TV-regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes as well as mixtures were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels, number of projections and noise levels to allow the systematic assessment of parameters affecting the performance of SR reconstruction algorithms. Using the SparseBeads data, TV-regularized reconstruction quality was assessed as a function of the number of projections and gradient sparsity. The critical number of projections for satisfactory TV-regularized reconstruction increased almost linearly with the gradient sparsity. This establishes a quantitative guideline from which one may predict how few projections to acquire based on the expected sample sparsity level, as an aid in planning dose- or time-critical experiments. The results are expected to hold for samples of similar characteristics, i.e. consisting of few, distinct phases with relatively simple structure. Such cases are plentiful in porous media, composite materials, foams, as well as non-destructive testing and metrology. For samples of other characteristics the proposed methodology may be used to investigate similar relations.
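The image property that the study correlates with the critical number of projections, gradient sparsity, is straightforward to compute. In the sketch below the linear rule of thumb's slope and offset are placeholders, not the fitted SparseBeads values:

```python
import numpy as np

def gradient_sparsity(img, tol=1e-8):
    """Fraction of pixels with nonzero spatial gradient magnitude."""
    gx = np.diff(img, axis=0, append=img[-1:, :])
    gy = np.diff(img, axis=1, append=img[:, -1:])
    return np.mean(np.hypot(gx, gy) > tol)

def predicted_projections(img, slope=1.0, offset=0.0):
    """Hypothetical linear guideline N_proj ~ slope * s + offset, where s
    is the number of gradient nonzeros; slope and offset are placeholder
    values, not the ones fitted in the SparseBeads experiments."""
    s = gradient_sparsity(img) * img.size
    return slope * s + offset

# A piecewise-constant "bead" phantom has a sparse gradient: only the
# bead boundaries contribute nonzero gradient pixels.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
```

For such piecewise-constant samples, doubling the amount of boundary structure roughly doubles the predicted projection requirement under the near-linear relation reported.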
Sambo, Francesco; de Oca, Marco A Montes; Di Camillo, Barbara; Toffolo, Gianna; Stützle, Thomas
2012-01-01
Reverse engineering is the problem of inferring the structure of a network of interactions between biological variables from a set of observations. In this paper, we propose an optimization algorithm, called MORE, for the reverse engineering of biological networks from time series data. The model inferred by MORE is a sparse system of nonlinear differential equations, complex enough to realistically describe the dynamics of a biological system. MORE tackles separately the discrete component of the problem, the determination of the biological network topology, and the continuous component of the problem, the strength of the interactions. This approach allows us both to enforce system sparsity, by globally constraining the number of edges, and to integrate a priori information about the structure of the underlying interaction network. Experimental results on simulated and real-world networks show that the mixed discrete/continuous optimization approach of MORE significantly outperforms standard continuous optimization and that MORE is competitive with the state of the art in terms of accuracy of the inferred networks.
A high-capacity model for one shot association learning in the brain
Einarsson, Hafsteinn; Lengler, Johannes; Steger, Angelika
2014-01-01
We present a high-capacity model for one-shot association learning (hetero-associative memory) in sparse networks. We assume that basic patterns are pre-learned in networks and associations between two patterns are presented only once and have to be learned immediately. The model is a combination of an Amit-Fusi like network sparsely connected to a Willshaw type network. The learning procedure is palimpsest and comes from earlier work on one-shot pattern learning. However, in our setup we can enhance the capacity of the network by iterative retrieval. This yields a model for sparse brain-like networks in which populations of a few thousand neurons are capable of learning hundreds of associations even if they are presented only once. The analysis of the model is based on a novel result by Janson et al. on bootstrap percolation in random graphs. PMID:25426060
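The Willshaw-type component of the model, one-shot clipped-Hebbian storage with threshold retrieval, can be sketched as a minimal hetero-associative memory. This omits the Amit-Fusi network and the iterative-retrieval enhancement described above; sizes and sparsity are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def willshaw_store(W, pre, post):
    """One-shot Willshaw learning: set to 1 every synapse between an
    active bit of the cue pattern and an active bit of the target."""
    W[np.ix_(post, pre)] = 1
    return W

def willshaw_recall(W, cue_bits, k):
    """Retrieve the k units with the highest dendritic sum for a cue."""
    sums = W[:, cue_bits].sum(axis=1)
    return set(np.argsort(sums)[-k:])

n, k = 1000, 20                      # population size, pattern sparsity
W = np.zeros((n, n), dtype=np.uint8)
pairs = [(rng.choice(n, k, replace=False), rng.choice(n, k, replace=False))
         for _ in range(30)]
for pre, post in pairs:              # each association presented only once
    willshaw_store(W, pre, post)
```

With sparse patterns the binary matrix stays mostly empty, so tens of associations are recalled essentially without error even though each was seen a single time.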
Cortical rewiring and information storage
NASA Astrophysics Data System (ADS)
Chklovskii, D. B.; Mel, B. W.; Svoboda, K.
2004-10-01
Current thinking about long-term memory in the cortex is focused on changes in the strengths of connections between neurons. But ongoing structural plasticity in the adult brain, including synapse formation/elimination and remodelling of axons and dendrites, suggests that memory could also depend on learning-induced changes in the cortical `wiring diagram'. Given that the cortex is sparsely connected, wiring plasticity could provide a substantial boost in storage capacity, although at a cost of more elaborate biological machinery and slower learning.
Modular, Hierarchical Learning By Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Baldi, Pierre F.; Toomarian, Nikzad
1996-01-01
A modular and hierarchical approach to supervised learning by artificial neural networks leads to networks that are more structured than those in which all neurons are fully interconnected. These networks utilize a general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, the sparsity of modular units and connections, and the fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning is streamlined by imitating some aspects of biological neural networks.
NASA Astrophysics Data System (ADS)
Zhu, Hao
Sparsity plays an instrumental role in a plethora of scientific fields, including statistical inference for variable selection, parsimonious signal representations, and solving under-determined systems of linear equations, which has led to the ground-breaking result of compressive sampling (CS). This Thesis leverages exciting ideas of sparse signal reconstruction to develop sparsity-cognizant algorithms, and analyze their performance. The vision is to devise tools exploiting the 'right' form of sparsity for the 'right' application domain of multiuser communication systems, array signal processing systems, and the emerging challenges in the smart power grid. Two important power system monitoring tasks are addressed first by capitalizing on the hidden sparsity. To robustify power system state estimation, a sparse outlier model is leveraged to capture the possible corruption in every datum, while the problem nonconvexity due to nonlinear measurements is handled using the semidefinite relaxation technique. Different from existing iterative methods, the proposed algorithm approximates well the global optimum regardless of the initialization. In addition, for enhanced situational awareness, a novel sparse overcomplete representation is introduced to capture (possibly multiple) line outages, and real-time algorithms are developed for solving the combinatorially complex identification problem. The proposed algorithms exhibit near-optimal performance while incurring only linear complexity in the number of lines, which makes it possible to quickly bring contingencies to attention. This Thesis also accounts for two basic issues in CS, namely fully-perturbed models and the finite alphabet property. The sparse total least-squares (S-TLS) approach is proposed to furnish CS algorithms for fully-perturbed linear models, leading to statistically optimal and computationally efficient solvers.
The S-TLS framework is well motivated for grid-based sensing applications and exhibits higher accuracy than existing sparse algorithms. On the other hand, exploiting the finite alphabet of unknown signals emerges naturally in communication systems, along with sparsity coming from the low activity of each user. Compared to approaches only accounting for either one of the two, joint exploitation of both leads to statistically optimal detectors with improved error performance.
Communication Optimal Parallel Multiplication of Sparse Random Matrices
2013-02-21
Definition 2.1), and (2) the algorithm is sparsity-independent, where the computation is statically partitioned to processors independent of the sparsity...structure of the input matrices (see Definition 2.5). The second assumption applies to nearly all existing algorithms for general sparse matrix-matrix...where A and B are n × n ER(d) matrices: Definition 2.1 An ER(d) matrix is an adjacency matrix of an Erdős-Rényi graph with parameters n and d/n. That
Sparse aperiodic arrays for optical beam forming and LIDAR.
Komljenovic, Tin; Helkey, Roger; Coldren, Larry; Bowers, John E
2017-02-06
We analyze optical phased arrays with aperiodic pitch and element-to-element spacing greater than one wavelength at channel counts exceeding hundreds of elements. We optimize the spacing between waveguides for highest side-mode suppression providing grating lobe free steering in full visible space while preserving the narrow beamwidth. Optimum waveguide placement strategies are derived and design guidelines for sparse (> 1.5 λ and > 3 λ average element spacing) optical phased arrays are given. Scaling to larger array areas by means of tiling is considered.
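The effect described, aperiodic spacing suppressing grating lobes at greater-than-wavelength mean pitch, can be checked with a toy 1-D array-factor computation. The parameters below are illustrative, not the paper's optimized waveguide placements:

```python
import numpy as np

rng = np.random.default_rng(4)

def array_factor(positions, theta, steer=0.0):
    """Normalized far-field array factor of a 1-D phased array with
    element positions in wavelengths, phased to steer toward `steer`."""
    k = 2 * np.pi                        # wavenumber in units of 1/lambda
    phase = k * positions[:, None] * (np.sin(theta)[None, :] - np.sin(steer))
    return np.abs(np.exp(1j * phase).sum(axis=0)) / len(positions)

# Aperiodic sparse array (mean pitch ~1.6 lambda, random perturbations)
# versus a periodic array of the same mean pitch, which has grating lobes.
n = 64
periodic = 1.6 * np.arange(n)
aperiodic = periodic + rng.uniform(-0.5, 0.5, n)
theta = np.linspace(-np.pi / 2, np.pi / 2, 4001)
af_per = array_factor(periodic, theta)
af_aper = array_factor(aperiodic, theta)
```

For the periodic array a full-strength grating lobe appears where the pitch phases realign (sin θ = 1/1.6 here); randomizing the spacing breaks that alignment and lowers the side-mode level while the main beam is preserved.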
Sparsely sampling the sky: Regular vs. random sampling
NASA Astrophysics Data System (ADS)
Paykari, P.; Pires, S.; Starck, J.-L.; Jaffe, A. H.
2015-09-01
Aims: The next generation of galaxy surveys, aiming to observe millions of galaxies, are expensive both in time and money. This raises questions regarding the optimal investment of this time and money for future surveys. In a previous work, we have shown that a sparse sampling strategy could be a powerful substitute for the - usually favoured - contiguous observation of the sky. In our previous paper, regular sparse sampling was investigated, where the sparse observed patches were regularly distributed on the sky. The regularity of the mask introduces a periodic pattern in the window function, which induces periodic correlations at specific scales. Methods: In this paper, we use a Bayesian experimental design to investigate a "random" sparse sampling approach, where the observed patches are randomly distributed over the total sparsely sampled area. Results: We find that in this setting, the induced correlation is evenly distributed amongst all scales as there is no preferred scale in the window function. Conclusions: This is desirable when we are interested in any specific scale in the galaxy power spectrum, such as the matter-radiation equality scale. As the figure of merit shows, however, there is no preference between regular and random sampling to constrain the overall galaxy power spectrum and the cosmological parameters.
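The contrast between regular and random sparse sampling can be illustrated with a 1-D toy survey mask: the periodic mask concentrates window-function power at harmonics of the patch spacing, while the random mask spreads it over all scales. This is only a sketch of the effect, not the Bayesian experimental design used in the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

n, n_patches, patch = 1024, 32, 8   # survey length, patches, patch size

# Regular mask: patches evenly spaced; random mask: patches at random spots
regular = np.zeros(n)
for s in range(0, n, n // n_patches):
    regular[s:s + patch] = 1.0
random_mask = np.zeros(n)
for s in rng.choice(n - patch, n_patches, replace=False):
    random_mask[s:s + patch] = 1.0

def window_power(mask):
    """Normalized power spectrum of the survey window function (mean
    removed so the DC term does not dominate)."""
    p = np.abs(np.fft.rfft(mask - mask.mean())) ** 2
    return p / p.sum()

wp_reg = window_power(regular)
wp_rand = window_power(random_mask)
```

The periodic mask's power sits at multiples of the patch-spacing harmonic, which is exactly the scale-specific correlation the regular strategy induces in the estimated power spectrum.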
Total variation-based method for radar coincidence imaging with model mismatch for extended target
NASA Astrophysics Data System (ADS)
Cao, Kaicheng; Zhou, Xiaoli; Cheng, Yongqiang; Fan, Bo; Qin, Yuliang
2017-11-01
Originating from traditional optical coincidence imaging, radar coincidence imaging (RCI) is a staring/forward-looking imaging technique. In RCI, the reference matrix must be computed precisely to reconstruct the image as desired; unfortunately, such precision is almost impossible in practice because of model mismatch. Although some conventional sparse recovery algorithms have been proposed to solve the model-mismatch problem, they are inapplicable to nonsparse targets. We therefore derive the signal model of RCI with model mismatch by replacing the sparsity constraint with total variation (TV) regularization in the sparse total least squares optimization problem; in this manner, we obtain the objective function of RCI with model mismatch for an extended target. A more robust and efficient algorithm called TV-TLS is proposed, in which the objective function is divided into two parts and the perturbation matrix and scattering coefficients are updated alternately. Moreover, owing to the ability of TV regularization to recover a sparse signal or an image with a sparse gradient, the TV-TLS method is also applicable to sparse recovery. Results of numerical experiments demonstrate that, for uniform extended targets, sparse targets, and real extended targets, the algorithm achieves preferred imaging performance both in suppressing noise and in adapting to model mismatch.
Disrupted Small-World Networks in Schizophrenia
ERIC Educational Resources Information Center
Liu, Yong; Liang, Meng; Zhou, Yuan; He, Yong; Hao, Yihui; Song, Ming; Yu, Chunshui; Liu, Haihong; Liu, Zhening; Jiang, Tianzi
2008-01-01
The human brain has been described as a large, sparse, complex network characterized by efficient small-world properties, which assure that the brain generates and integrates information with high efficiency. Many previous neuroimaging studies have provided consistent evidence of "dysfunctional connectivity" among the brain regions in…
Target detection in GPR data using joint low-rank and sparsity constraints
NASA Astrophysics Data System (ADS)
Bouzerdoum, Abdesselam; Tivive, Fok Hing Chi; Abeynayake, Canicious
2016-05-01
In ground penetrating radars, background clutter, which comprises the signals backscattered from the rough, uneven ground surface and the background noise, impairs the visualization of buried objects and subsurface inspections. In this paper, a clutter mitigation method is proposed for target detection. The removal of background clutter is formulated as a constrained optimization problem to obtain a low-rank matrix and a sparse matrix. The low-rank matrix captures the ground surface reflections and the background noise, whereas the sparse matrix contains the target reflections. An optimization method based on split-Bregman algorithm is developed to estimate these two matrices from the input GPR data. Evaluated on real radar data, the proposed method achieves promising results in removing the background clutter and enhancing the target signature.
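The low-rank-plus-sparse split can be sketched with a simple alternating proximal scheme. The paper develops a split-Bregman solver, so the stand-in below only illustrates the decomposition itself, on a synthetic B-scan with invented values:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal step for the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def soft(M, tau):
    """Soft thresholding: proximal step for the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0)

def lowrank_sparse_split(D, lam=0.1, tau=6.0, iters=30):
    """Alternate between a low-rank estimate L (ground-surface clutter
    plus background) and a sparse estimate S (target reflections)."""
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(iters):
        L = svt(D - S, tau)      # clutter: smooth ground bounce
        S = soft(D - L, lam)     # sparse target signatures
    return L, S

# Synthetic B-scan: rank-1 ground reflection plus two strong target pixels
ground = np.outer(np.sin(np.linspace(0, 3, 60)), np.ones(40))
targets = np.zeros((60, 40))
targets[30, 10] = 5.0
targets[45, 25] = -4.0
D = ground + targets
L, S = lowrank_sparse_split(D)
```

After the split, thresholding |S| gives candidate target locations with the ground bounce already removed.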
NASA Astrophysics Data System (ADS)
Li, Yung-Hui; Zheng, Bo-Ren; Ji, Dai-Yan; Tien, Chung-Hao; Liu, Po-Tsun
2014-09-01
Cross-sensor iris matching may seriously degrade recognition performance because of the sensor mismatch between iris images acquired at the enrollment and test stages. In this paper, we propose two novel patch-based heterogeneous dictionary learning methods to attack this problem. The first method applies the latest sparse representation theory, while the second learns the correspondence relationship through PCA in a heterogeneous patch space. Both methods learn the basic atoms of iris textures across different image sensors and build connections between them. Once such connections are built, it is possible at the test stage to hallucinate (synthesize) iris images across different sensors. By matching training images with hallucinated images, the recognition rate can be successfully enhanced. The experimental results are satisfactory both visually and in terms of recognition rate. Experimenting with an iris database consisting of 3015 images, we show that the proposed method decreases the EER by 39.4% in relative terms.
Balance of Interactions Determines Optimal Survival in Multi-Species Communities.
Choudhary, Anshul; Sinha, Sudeshna
2015-01-01
We consider a multi-species community modelled as a complex network of populations, where the links are given by a random asymmetric connectivity matrix J, with fraction 1 - C of zero entries, where C reflects the overall connectivity of the system. The non-zero elements of J are drawn from a Gaussian distribution with mean μ and standard deviation σ. The signs of the elements Jij reflect the nature of density-dependent interactions, such as predator-prey, mutualism or competition, and their magnitudes reflect the strength of the interaction. In this study we try to uncover the broad features of the inter-species interactions that determine the global robustness of this network, as indicated by the average number of active nodes (i.e. non-extinct species) in the network, and the total population, reflecting the biomass yield. We find that the network transitions from a completely extinct system to one where all nodes are active, as the mean interaction strength goes from negative to positive, with the transition getting sharper for increasing C and decreasing σ. We also find that the total population displays distinct non-monotonic scaling behaviour with respect to the product μC, implying that survival is dependent not merely on the number of links, but rather on the combination of the sparseness of the connectivity matrix and the net interaction strength. Interestingly, in an intermediate window of positive μC, the total population is maximal, indicating that too little or too much positive interaction is detrimental to survival. Rather, the total population levels are optimal when the network has intermediate net positive connection strengths. At the local level we observe marked qualitative changes in dynamical patterns, ranging from anti-phase clusters of period 2 cycles and chaotic bands, to fixed points, under the variation of the mean μ of the interaction strengths.
We also study the correlation between synchronization and survival, and find that synchronization does not necessarily lead to extinction. Lastly, we propose an effective low dimensional map to capture the behavior of the entire network, and this provides a broad understanding of the interplay of the local dynamical patterns and the global robustness trends in the network.
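A minimal simulation in the spirit of the model above; the paper's exact dynamical equations may differ, so this uses a generalized Lotka-Volterra form with an extinction threshold, and all constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_community(n=50, C=0.2, mu=0.05, sigma=0.1, steps=2000, dt=0.01):
    """Sparse asymmetric Gaussian interaction matrix J (fraction 1-C of
    entries zero) driving logistic populations; species dropping below a
    small threshold are treated as extinct."""
    J = rng.normal(mu, sigma, (n, n)) * (rng.random((n, n)) < C)
    np.fill_diagonal(J, 0.0)
    x = rng.random(n)
    for _ in range(steps):
        x += dt * x * (1.0 - x + J @ x)   # logistic growth + interactions
        x[x < 1e-6] = 0.0                 # extinction threshold
    return x

x = simulate_community()
active = int(np.sum(x > 0))     # surviving (non-extinct) species
biomass = float(x.sum())        # total population
```

Sweeping mu and C in this sketch and recording `active` and `biomass` reproduces the kind of robustness curves the study analyzes (extinction for sufficiently negative mean interaction, full survival for positive).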
A Robust Feedforward Model of the Olfactory System
NASA Astrophysics Data System (ADS)
Zhang, Yilun; Sharpee, Tatyana
Most natural odors have sparse molecular composition. This makes the principles of compressed sensing potentially relevant to the structure of the olfactory code. Yet, the largely feedforward organization of the olfactory system precludes reconstruction using standard compressed sensing algorithms. To resolve this problem, recent theoretical work has proposed that signal reconstruction could take place as a result of a low dimensional dynamical system converging to one of its attractor states. The dynamical aspects of optimization, however, would slow down odor recognition and were also found to be susceptible to noise. Here we describe a feedforward model of the olfactory system that achieves both strong compression and fast reconstruction that is also robust to noise. A key feature of the proposed model is a specific relationship between how odors are represented at the glomeruli stage, which corresponds to a compression, and the connections from glomeruli to Kenyon cells, which in the model correspond to reconstruction. We show that provided this specific relationship holds true, the reconstruction will be both fast and robust to noise, and in particular to failure of glomeruli. The predicted connectivity rate from glomeruli to the Kenyon cells can be tested experimentally. This research was supported by the James S. McDonnell Foundation, NSF CAREER award IIS-1254123, and NSF Ideas Lab Collaborative Research IOS 1556388.
NASA Astrophysics Data System (ADS)
Parekh, Ankit
Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. 
The second part of this thesis proposes a non-linear signal decomposition technique for an important biomedical signal processing problem: the detection of sleep spindles and K-complexes in human sleep electroencephalography (EEG). We propose a non-linear model for the EEG consisting of three components: (1) a transient (sparse piecewise constant) component, (2) a low-frequency component, and (3) an oscillatory component. The oscillatory component admits a sparse time-frequency representation. Using a convex objective function, we propose a fast non-linear optimization algorithm to estimate the three components in the proposed signal model. The low-frequency and oscillatory components are then used to estimate the K-complexes and sleep spindles respectively. The proposed detection method is shown to outperform several state-of-the-art automated sleep spindles detection methods.
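The first part's strategy, non-convex penalties parameterized so that the overall objective stays strictly convex, can be illustrated in the scalar case by firm (MC-type) thresholding versus soft thresholding. This is a textbook sketch, not the thesis's exact formulation; for the scalar data fidelity 0.5(y - x)^2 the combined objective remains strictly convex whenever the non-convexity parameter a exceeds 1:

```python
import numpy as np

def soft_threshold(y, lam):
    """Proximal operator of the convex l1 penalty; it biases (shrinks)
    large coefficients by lam."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def firm_threshold(y, lam, a):
    """Proximal operator of a parameterized non-convex (MC-type) penalty:
    zero below lam, linear ramp up to a*lam, identity above, so large
    coefficients pass through unbiased."""
    z = np.abs(y)
    out = np.where(z <= lam, 0.0,
          np.where(z <= a * lam, a * (z - lam) / (a - 1.0), z))
    return np.sign(y) * out

y = np.array([0.5, 1.5, 10.0])
lam, a = 1.0, 2.0
x_soft = soft_threshold(y, lam)
x_firm = firm_threshold(y, lam, a)
```

Both operators zero out the small coefficient, but only the soft threshold shrinks the large one (10 becomes 9), which is exactly the underestimation of non-zero values that motivates the convexity-preserving non-convex regularizers.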
Optimization of sparse matrix-vector multiplication on emerging multicore platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Samuel; Oliker, Leonid; Vuduc, Richard
2007-01-01
We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore specific optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD dual-core and Intel quad-core designs, the heterogeneous STI Cell, as well as the first scientific study of the highly multithreaded Sun Niagara2. We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural tradeoffs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.
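The kernel under study is easy to state in its baseline form; the optimizations explored in the paper (register and cache blocking, prefetching, architecture-specific layouts) all start from a reference compressed sparse row (CSR) loop like the plain Python sketch below, shown for clarity rather than performance:

```python
import numpy as np

def dense_to_csr(A):
    """Build compressed sparse row (CSR) arrays from a dense matrix:
    indptr marks each row's span in the indices/data arrays."""
    indptr = [0]; indices = []; data = []
    for row in A:
        nz = np.nonzero(row)[0]
        indices.extend(nz)
        data.extend(row[nz])
        indptr.append(len(indices))
    return np.array(indptr), np.array(indices), np.array(data)

def spmv_csr(indptr, indices, data, x):
    """Baseline CSR sparse matrix-vector multiply: for each row, gather
    the needed entries of x and take a dot product with the row's data.
    The irregular gather on x is what makes SpMV memory-bound."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = data[lo:hi] @ x[indices[lo:hi]]
    return y

A = np.array([[4.0, 0, 1], [0, 0, 2], [3, 0, 0]])
x = np.array([1.0, 2.0, 3.0])
y = spmv_csr(*dense_to_csr(A), x)   # equals A @ x
```

Because each nonzero is touched once while x is accessed indirectly, performance is governed by memory traffic, which is why the paper's tuning targets data layout rather than arithmetic.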
Small-Tip-Angle Spokes Pulse Design Using Interleaved Greedy and Local Optimization Methods
Grissom, William A.; Khalighi, Mohammad-Mehdi; Sacolick, Laura I.; Rutt, Brian K.; Vogel, Mika W.
2013-01-01
Current spokes pulse design methods can be grouped into methods based either on sparse approximation or on iterative local (gradient descent-based) optimization of the transverse-plane spatial frequency locations visited by the spokes. These two classes of methods have complementary strengths and weaknesses: sparse approximation-based methods perform an efficient search over a large swath of candidate spatial frequency locations but most are incompatible with off-resonance compensation, multifrequency designs, and target phase relaxation, while local methods can accommodate off-resonance and target phase relaxation but are sensitive to initialization and suboptimal local cost function minima. This article introduces a method that interleaves local iterations, which optimize the radiofrequency pulses, target phase patterns, and spatial frequency locations, with a greedy method to choose new locations. Simulations and experiments at 3 and 7 T show that the method consistently produces single- and multifrequency spokes pulses with lower flip angle inhomogeneity compared to current methods. PMID:22392822
NASA Technical Reports Server (NTRS)
Bless, Robert R.
1991-01-01
A time-domain finite element method is developed for optimal control problems. The theory derived is general enough to handle a large class of problems including optimal control problems that are continuous in the states and controls, problems with discontinuities in the states and/or system equations, problems with control inequality constraints, problems with state inequality constraints, or problems involving any combination of the above. The theory is developed in such a way that no numerical quadrature is necessary regardless of the degree of nonlinearity in the equations. Also, the same shape functions may be employed for every problem because all strong boundary conditions are transformed into natural or weak boundary conditions. In addition, the resulting nonlinear algebraic equations are very sparse. Use of sparse matrix solvers allows for the rapid and accurate solution of very difficult optimization problems. The formulation is applied to launch-vehicle trajectory optimization problems, and results show that real-time optimal guidance is realizable with this method. Finally, a general problem solving environment is created for solving a large class of optimal control problems. The algorithm uses both FORTRAN and a symbolic computation program to solve problems with a minimum of user interaction. The use of symbolic computation eliminates the need for user-written subroutines which greatly reduces the setup time for solving problems.
Estimation of Dynamic Sparse Connectivity Patterns From Resting State fMRI.
Cai, Biao; Zille, Pascal; Stephen, Julia M; Wilson, Tony W; Calhoun, Vince D; Wang, Yu Ping
2018-05-01
Functional connectivity (FC) estimated from functional magnetic resonance imaging (fMRI) time series, especially during resting state periods, provides a powerful tool to assess human brain functional architecture in health, disease, and developmental states. Recently, the focus of connectivity analysis has shifted toward the subnetworks of the brain, which reveals co-activating patterns over time. Most prior works produced a dense set of high-dimensional vectors, which are hard to interpret. In addition, their estimations to a large extent were based on an implicit assumption of spatial and temporal stationarity throughout the fMRI scanning session. In this paper, we propose an approach called dynamic sparse connectivity patterns (dSCPs), which takes advantage of both matrix factorization and time-varying fMRI time series to improve the estimation power of FC. The feasibility of analyzing dynamic FC with our model is first validated through simulated experiments. Then, we use our framework to measure the difference between young adults and children with real fMRI data set from the Philadelphia Neurodevelopmental Cohort (PNC). The results from the PNC data set showed significant FC differences between young adults and children in four different states. For instance, young adults had reduced connectivity between the default mode network and other subnetworks, as well as hyperconnectivity within the visual system in states 1 and 3, and hypoconnectivity in state 2. Meanwhile, they exhibited temporal correlation patterns that changed over time within functional subnetworks. In addition, the dSCPs model indicated that older people tend to spend more time within a relatively connected FC pattern. Overall, the proposed method provides a valid means to assess dynamic FC, which could facilitate the study of brain networks.
Research on connection structure of aluminum-body bus using multi-objective topology optimization
NASA Astrophysics Data System (ADS)
Peng, Q.; Ni, X.; Han, F.; Rhaman, K.; Ulianov, C.; Fang, X.
2018-01-01
Failure frequently occurs at the connections between the aluminum alloy components of an aluminum-alloy bus body, so a new aluminum alloy connection structure is designed based on a multi-objective topology optimization method. The shape of the outer contour of the connection structure is determined with topography optimization; a topology optimization model of the connection is established based on the SIMP density interpolation method; multi-objective topology optimization is carried out; and the design of the connecting piece is improved according to the optimization results. The results show that the mass of the aluminum alloy connector after topology optimization is reduced by 18%, the first six natural frequencies are improved, and the strength and stiffness performance are obviously improved.
Median prior constrained TV algorithm for sparse view low-dose CT reconstruction.
Liu, Yi; Shangguan, Hong; Zhang, Quan; Zhu, Hongqing; Shu, Huazhong; Gui, Zhiguo
2015-05-01
It is known that lowering the X-ray tube current (mAs) or tube voltage (kVp) and simultaneously reducing the total number of X-ray views (sparse view) is an effective means to achieve low dose in computed tomography (CT) scans. However, the image quality associated with conventional filtered back-projection (FBP) usually degrades due to excessive quantum noise. Although sparse-view CT reconstruction via total variation (TV), under scanning protocols with reduced X-ray tube current, has been demonstrated to achieve significant radiation dose reduction while maintaining image quality, noticeable patchy artifacts still exist in reconstructed images. In this study, to address the problem of patchy artifacts, we propose a median prior constrained TV regularization that retains image quality by introducing an auxiliary vector m in register with the object. Specifically, the approximate action of m is to draw, in each iteration, an object voxel toward its own local median, aiming to improve low-dose image quality with sparse-view projection measurements. Subsequently, an alternating optimization algorithm is adopted to optimize the associated objective function. We refer to the median prior constrained TV regularization as "TV_MP" for simplicity. Experimental results on digital phantoms and a clinical phantom demonstrate that the proposed TV_MP with appropriate control parameters can ensure not only a higher signal-to-noise ratio (SNR) of the reconstructed image but also better resolution compared with the original TV method. Copyright © 2015 Elsevier Ltd. All rights reserved.
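The median-pull step described above can be sketched as follows; this is a hedged illustration of the auxiliary vector m drawing each voxel toward its local median, not the authors' full TV_MP reconstruction (function names and the blending weight are assumptions):

```python
import numpy as np

def local_median(x, size=3):
    """Median of each pixel's size x size neighborhood (edge-padded)."""
    pad = size // 2
    xp = np.pad(x, pad, mode='edge')
    out = np.empty_like(x, dtype=float)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.median(xp[i:i + size, j:j + size])
    return out

def median_prior_step(x, weight=0.5, size=3):
    """One iteration of the median-prior pull: blend each voxel of the
    image x toward the median of its local neighborhood (the role of
    the auxiliary vector m in the abstract)."""
    m = local_median(x, size)
    return (1.0 - weight) * x + weight * m
```

In the full method this pull alternates with TV-regularized data-fidelity updates against the sparse-view projections.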
Thresholding functional connectomes by means of mixture modeling.
Bielczyk, Natalia Z; Walocha, Fabian; Ebel, Patrick W; Haak, Koen V; Llera, Alberto; Buitelaar, Jan K; Glennon, Jeffrey C; Beckmann, Christian F
2018-05-01
Functional connectivity has been shown to be a very promising tool for studying the large-scale functional architecture of the human brain. In network research in fMRI, functional connectivity is considered as a set of pair-wise interactions between the nodes of the network. These interactions are typically operationalized through the full or partial correlation between all pairs of regional time series. Estimating the structure of the latent underlying functional connectome from the set of pair-wise partial correlations remains an open research problem though. Typically, this thresholding problem is approached by proportional thresholding, or by means of parametric or non-parametric permutation testing across a cohort of subjects at each possible connection. As an alternative, we propose a data-driven thresholding approach for network matrices on the basis of mixture modeling. This approach allows for creating subject-specific sparse connectomes by modeling the full set of partial correlations as a mixture of low correlation values associated with weak or unreliable edges in the connectome and a sparse set of reliable connections. Consequently, we propose an alternative thresholding strategy based on the model fit, using pseudo-False Discovery Rates derived on the basis of the empirical null estimated as part of the mixture distribution. We evaluate the method on synthetic benchmark fMRI datasets where the underlying network structure is known, and demonstrate that it gives improved performance with respect to the alternative methods for thresholding connectomes, given the canonical thresholding levels. We also demonstrate that mixture modeling gives highly reproducible results when applied to the functional connectomes of the visual system derived from the n-back Working Memory task in the Human Connectome Project.
The sparse connectomes obtained from mixture modeling are further discussed in the light of the previous knowledge of the functional architecture of the visual system in humans. We also demonstrate that with use of our method, we are able to extract similar information on the group level as can be achieved with permutation testing even though these two methods are not equivalent. We demonstrate that with both of these methods, we obtain functional decoupling between the two hemispheres in the higher order areas of the visual cortex during visual stimulation as compared to the resting state, which is in line with previous studies suggesting lateralization in the visual processing. However, as opposed to permutation testing, our approach does not require inference at the cohort level and can be used for creating sparse connectomes at the level of a single subject. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
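The mixture idea can be sketched with a tiny two-component EM on the edge values: a null component of weak correlations around zero and a signal component of reliable edges, with thresholding by posterior probability. This is an illustrative stand-in for the authors' pseudo-FDR procedure (all names and cutoffs are assumptions):

```python
import numpy as np

def two_gaussian_em(z, iters=200):
    """Tiny EM for a 1-D two-component Gaussian mixture: a null
    component around zero (weak edges) and a shifted signal component
    (reliable edges). Returns weights, means, std devs, responsibilities."""
    mu = np.array([0.0, z.max()])
    sd = np.array([z.std(), z.std()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each edge value
        d = np.exp(-0.5 * ((z[:, None] - mu) / sd) ** 2) / sd * w
        r = d / d.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, standard deviations
        w = r.mean(axis=0)
        mu = (r * z[:, None]).sum(axis=0) / r.sum(axis=0)
        sd = np.sqrt((r * (z[:, None] - mu) ** 2).sum(axis=0) / r.sum(axis=0))
    return w, mu, sd, r

def threshold_edges(z, post_cut=0.9):
    """Keep the edges whose posterior of belonging to the signal
    (larger-mean) component exceeds post_cut."""
    _, mu, _, r = two_gaussian_em(z)
    sig = int(np.argmax(mu))
    return r[:, sig] > post_cut
```

The subject-specific sparse connectome is then the set of kept edges, with no cohort-level permutation test required.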
Sparse regularization for force identification using dictionaries
NASA Astrophysics Data System (ADS)
Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng
2016-04-01
The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparse convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct harmonic forces, including sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
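A minimal sketch of the l1-norm regularization at the heart of this approach, using plain iterative soft-thresholding (ISTA); SpaRSA, used in the paper, is a faster relative of this proximal scheme (matrix sizes and the regularization weight are illustrative):

```python
import numpy as np

def ista(A, y, lam=0.1, iters=2000):
    """Minimize 0.5*||Ax - y||_2^2 + lam*||x||_1 by iterative
    soft-thresholding. A maps basis-function coefficients x to the
    measured response y; the solution is sparse in the dictionary."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L      # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x
```

The l1 penalty drives most coefficients exactly to zero, so the number of active basis functions is determined by the optimization itself rather than chosen in advance.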
Robust Joint Graph Sparse Coding for Unsupervised Spectral Feature Selection.
Zhu, Xiaofeng; Li, Xuelong; Zhang, Shichao; Ju, Chunhua; Wu, Xindong
2017-06-01
In this paper, we propose a new unsupervised spectral feature selection model by embedding a graph regularizer into the framework of joint sparse regression for preserving the local structures of data. To do this, we first extract the bases of training data by previous dictionary learning methods and, then, map original data into the basis space to generate their new representations, by proposing a novel joint graph sparse coding (JGSC) model. In JGSC, we first formulate its objective function by simultaneously taking subspace learning and joint sparse regression into account, then, design a new optimization solution to solve the resulting objective function, and further prove the convergence of the proposed solution. Furthermore, we extend JGSC to a robust JGSC (RJGSC) via replacing the least square loss function with a robust loss function, for achieving the same goals and also avoiding the impact of outliers. Finally, experimental results on real data sets showed that both JGSC and RJGSC outperformed the state-of-the-art algorithms in terms of k -nearest neighbor classification performance.
Systematic sparse matrix error control for linear scaling electronic structure calculations.
Rubensson, Emanuel H; Sałek, Paweł
2005-11-30
Efficient truncation criteria for multiatom blocked sparse matrix operations in ab initio calculations are proposed. As system size increases, so does the need to stay on top of errors while still achieving high performance. A variant of a blocked sparse matrix algebra that achieves strict error control with good performance is proposed. The idea presented is that the condition to drop a certain submatrix should depend not only on the magnitude of that particular submatrix, but also on which other submatrices are dropped. The decision to remove a certain submatrix is based on the contribution the removal would make to the error in the chosen norm. We study the effect of an accumulated truncation error in iterative algorithms like trace-correcting density matrix purification. One way to reduce the initial exponential growth of this error is presented. The presented error control for a sparse blocked matrix toolbox allows optimal performance to be achieved by performing only the operations needed to maintain the requested level of accuracy. Copyright 2005 Wiley Periodicals, Inc.
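The drop criterion can be sketched as follows, assuming the Frobenius norm (the paper leaves the norm as a choice): submatrices are removed smallest-first only while the accumulated error of everything dropped stays below a tolerance, so each decision depends on the other dropped blocks:

```python
import numpy as np

def truncate_blocks(blocks, tol):
    """Drop submatrices smallest-first while the accumulated truncation
    error stays below tol. For disjoint blocks, squared Frobenius norms
    add, so the total error of the dropped set is tracked in squares.
    Returns the dict of kept blocks."""
    norms = {k: np.linalg.norm(b) for k, b in blocks.items()}
    order = sorted(norms, key=norms.get)        # smallest blocks first
    dropped_sq = 0.0
    kept = dict(blocks)
    for k in order:
        if dropped_sq + norms[k] ** 2 <= tol ** 2:
            dropped_sq += norms[k] ** 2         # error this removal adds
            del kept[k]
        else:
            break
    return kept
```

This is the essential contrast with a per-block magnitude cutoff: a block small enough to drop on its own may be kept because other removals have already used up the error budget.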
NASA Astrophysics Data System (ADS)
Han-Ming, Zhang; Lin-Yuan, Wang; Lei, Li; Bin, Yan; Ai-Long, Cai; Guo-En, Hu
2016-07-01
The additional sparse prior of images has been the subject of much research in problems of sparse-view computed tomography (CT) reconstruction. A method employing image gradient sparsity is often used to reduce the sampling rate and is shown to remove unwanted artifacts while preserving sharp edges, but may cause blocky or patchy artifacts. To eliminate this drawback, we propose a novel sparsity exploitation-based model for CT image reconstruction. In the presented model, the sparse representation and sparsity exploitation of both the gradient and the nonlocal gradient are investigated. The new model is shown to offer the potential for better results by introducing a similarity prior on the image structure. Then, an effective alternating direction minimization algorithm is developed to optimize the objective function with a robust convergence result. Qualitative and quantitative evaluations have been carried out on both simulated and real data in terms of accuracy and resolution properties. The results indicate that the proposed method can achieve better image quality with the theoretically expected preservation of detailed features. Project supported by the National Natural Science Foundation of China (Grant No. 61372172).
Regulatory networks and connected components of the neutral space. A look at functional islands
NASA Astrophysics Data System (ADS)
Boldhaus, G.; Klemm, K.
2010-09-01
The functioning of a living cell is largely determined by the structure of its regulatory network, comprising non-linear interactions between regulatory genes. An important factor for the stability and evolvability of such regulatory systems is neutrality - typically a large number of alternative network structures give rise to the necessary dynamics. Here we study the discretized regulatory dynamics of the yeast cell cycle [Li et al., PNAS, 2004] and the set of networks capable of reproducing it, which we call functional. Among these, the empirical yeast wildtype network is close to optimal with respect to sparse wiring. Under point mutations, which establish or delete single interactions, the neutral space of functional networks is fragmented into ≈ 4.7 × 10^8 components. One of the smaller ones contains the wildtype network. On average, functional networks reachable from the wildtype by mutations are sparser, have higher noise resilience and fewer fixed point attractors as compared with networks outside of this wildtype component.
Sparsely sampling the sky: a Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Paykari, P.; Jaffe, A. H.
2013-08-01
The next generation of galaxy surveys will observe millions of galaxies over large volumes of the Universe. These surveys are expensive both in time and cost, raising questions regarding the optimal investment of this time and money. In this work, we investigate criteria for selecting amongst observing strategies for constraining the galaxy power spectrum and a set of cosmological parameters. Depending on the parameters of interest, it may be more efficient to observe a larger, but sparsely sampled, area of sky instead of a smaller contiguous area. In this work, by making use of the principles of Bayesian experimental design, we will investigate the advantages and disadvantages of the sparse sampling of the sky and discuss the circumstances in which a sparse survey is indeed the most efficient strategy. For the Dark Energy Survey (DES), we find that by sparsely observing the same area in a smaller amount of time, we only increase the errors on the parameters by a maximum of 0.45 per cent. Conversely, investing the same amount of time as the original DES to observe a sparser but larger area of sky, we can in fact constrain the parameters with errors reduced by 28 per cent.
Elastic-Waveform Inversion with Compressive Sensing for Sparse Seismic Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Youzuo; Huang, Lianjie
2015-01-28
Accurate velocity models of compressional- and shear-waves are essential for geothermal reservoir characterization and microseismic imaging. Elastic-waveform inversion of multi-component seismic data can provide high-resolution inversion results of subsurface geophysical properties. However, the method requires seismic data acquired using dense source and receiver arrays. In practice, seismic sources and/or geophones are often sparsely distributed on the surface and/or in a borehole, such as in 3D vertical seismic profiling (VSP) surveys. We develop a novel elastic-waveform inversion method with compressive sensing for inversion of sparse seismic data. We employ an alternating-minimization algorithm to solve the optimization problem of our new waveform inversion method. We validate our new method using synthetic VSP data for a geophysical model built using geologic features found at the Raft River enhanced-geothermal-system (EGS) field. We apply our method to synthetic VSP data with a sparse source array and compare the results with those obtained with a dense source array. Our numerical results demonstrate that the velocity models produced with our new method using a sparse source array are almost as accurate as those obtained using a dense source array.
NASA Astrophysics Data System (ADS)
Jia, Xiaodong; Zhao, Ming; Di, Yuan; Li, Pin; Lee, Jay
2018-03-01
Sparsity has recently become a more and more important topic in the areas of machine learning and signal processing. One big family of sparse measures in the current literature is the generalized lp/lq norm, which is scale invariant and is widely regarded as a normalized lp norm. However, the characteristics of the generalized lp/lq norm are still little discussed, and its application to the condition monitoring of rotating devices has remained unexplored. In this study, we first discuss the characteristics of the generalized lp/lq norm for sparse optimization and then propose a method of sparse filtering with the generalized lp/lq norm for the purpose of impulsive signature enhancement. Further driven by the trend of industrial big data and the need to reduce maintenance costs for industrial equipment, the proposed sparse filter is customized for vibration signal processing and implemented on a bearing and a gearbox for the purpose of condition monitoring. Based on the results of the industrial implementations in this paper, the proposed method is found to be a promising tool for impulsive feature enhancement, and the superiority of the proposed method over previous methods is also demonstrated.
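The sparse measure itself is simple to state; a sketch with the illustrative defaults p = 1, q = 2 (the classical l1/l2 sparsity measure):

```python
import numpy as np

def lp_over_lq(x, p=1, q=2):
    """Generalized lp/lq norm, ||x||_p / ||x||_q. For p < q the value
    is smaller for sparser (more impulsive) signals, and rescaling x
    leaves it unchanged (scale invariance)."""
    x = np.asarray(x, dtype=float)
    return (np.abs(x) ** p).sum() ** (1.0 / p) / (np.abs(x) ** q).sum() ** (1.0 / q)
```

A sparse filter of the kind described would then adapt its coefficients to minimize (or maximize, depending on convention) this measure on the filtered vibration signal, enhancing fault impulses.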
NASA Astrophysics Data System (ADS)
Zhao, Fengjun; Liu, Junting; Qu, Xiaochao; Xu, Xianhui; Chen, Xueli; Yang, Xiang; Cao, Feng; Liang, Jimin; Tian, Jie
2014-12-01
To solve the multicollinearity issue and the unequal contribution of vascular parameters in the quantification of angiogenesis, we developed a quantitative evaluation method of vascular parameters for angiogenesis based on in vivo micro-CT imaging of hindlimb ischemic model mice. Taking vascular volume as the ground truth parameter, nine vascular parameters were first assembled into sparse principal components (PCs) to reduce the multicollinearity issue. Aggregated boosted trees (ABTs) were then employed to analyze the importance of vascular parameters for the quantification of angiogenesis via the loadings of the sparse PCs. The results demonstrated that vascular volume was mainly characterized by vascular area, vascular junction, connectivity density, segment number and vascular length, which indicates that they were the key vascular parameters for the quantification of angiogenesis. The proposed quantitative evaluation method was compared with both the ABTs directly using the nine vascular parameters and Pearson correlation, and the results were consistent. In contrast to the ABTs directly using the vascular parameters, the proposed method can select all the key vascular parameters simultaneously, because all the key vascular parameters were assembled into the sparse PCs with the highest relative importance.
Mandala Networks: ultra-small-world and highly sparse graphs
Sampaio Filho, Cesar I. N.; Moreira, André A.; Andrade, Roberto F. S.; Herrmann, Hans J.; Andrade, José S.
2015-01-01
The increasing demands on the security and reliability of infrastructures call for the optimal design of their embedded complex network topologies. The following question then arises: what is the optimal layout that best fulfills all these demands? Here we present a general solution for this problem with scale-free networks, like the Internet and airline networks. Precisely, we disclose a way to systematically construct networks which are robust against random failures. Furthermore, as the size of the network increases, its shortest path becomes asymptotically invariant and the density of links goes to zero, making it ultra-small world and highly sparse, respectively. The first property is ideal for communication and navigation purposes, while the second is interesting economically. Finally, we show that some simple changes to the original network formulation can lead to an improved topology against malicious attacks. PMID:25765450
Living on the Edge: A Geometric Theory of Phase Transitions in Convex Optimization
2013-03-24
framework for constructing a regularizer f that promotes a specified type of structure, as well as many additional examples. We say that the... R^d that promote the structures we expect to find in x0 and y0. Then we can frame the convex... signal x0 is sparse in the standard basis, and the second signal U y0 is sparse in a known basis U. In this case, we can use l1 norms to promote
Object-Oriented Design for Sparse Direct Solvers
NASA Technical Reports Server (NTRS)
Dobrian, Florin; Kumfert, Gary; Pothen, Alex
1999-01-01
We discuss the object-oriented design of a software package for solving sparse, symmetric systems of equations (positive definite and indefinite) by direct methods. At the highest layers, we decouple data structure classes from algorithmic classes for flexibility. We describe the important structural and algorithmic classes in our design, and discuss the trade-offs we made for high performance. The kernels at the lower layers were optimized by hand. Our results show no performance loss from our object-oriented design, while providing flexibility, ease of use, and extensibility over solvers using procedural design.
Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook
2015-01-01
Image super-resolution (SR) plays a vital role in medical imaging, allowing a more efficient and effective diagnosis process. Usually, diagnosing is difficult and inaccurate from low-resolution (LR) and noisy images. Resolution enhancement through conventional interpolation methods strongly affects the precision of subsequent processing steps, such as segmentation and registration. Therefore, we propose an efficient sparse coded image SR reconstruction technique using a trained dictionary. We apply a simple and efficient regularized version of orthogonal matching pursuit (ROMP) to seek the coefficients of the sparse representation. ROMP has the transparency and greediness of OMP and the robustness of L1-minimization, which enhance the dictionary learning process to capture feature descriptors such as oriented edges and contours from complex images like brain MRIs. The sparse coding part of the K-SVD dictionary training procedure is modified by substituting ROMP for OMP. The dictionary update stage allows simultaneously updating an arbitrary number of atoms and vectors of sparse coefficients. In SR reconstruction, ROMP is used to determine the vector of sparse coefficients for the underlying patch. The recovered representations are then applied to the trained dictionary, and finally, an optimization step yields a high-quality, high-resolution output. Experimental results demonstrate that the super-resolution reconstruction quality of the proposed scheme is comparatively better than that of other state-of-the-art schemes.
Structured illumination diffuse optical tomography for noninvasive functional neuroimaging in mice.
Reisman, Matthew D; Markow, Zachary E; Bauer, Adam Q; Culver, Joseph P
2017-04-01
Optical intrinsic signal (OIS) imaging has been a powerful tool for capturing functional brain hemodynamics in rodents. Recent wide field-of-view implementations of OIS have provided efficient maps of functional connectivity from spontaneous brain activity in mice. However, OIS requires scalp retraction and is limited to superficial cortical tissues. Diffuse optical tomography (DOT) techniques provide noninvasive imaging, but previous DOT systems for rodent neuroimaging have been limited either by sparse spatial sampling or by slow speed. Here, we develop a DOT system with asymmetric source-detector sampling that combines the high-density spatial sampling (0.4 mm) detection of a scientific complementary metal-oxide-semiconductor camera with the rapid (2 Hz) imaging of a few ([Formula: see text]) structured illumination (SI) patterns. Analysis techniques are developed to take advantage of the system's flexibility and optimize trade-offs among spatial sampling, imaging speed, and signal-to-noise ratio. An effective source-detector separation for the SI patterns was developed and compared with light intensity for a quantitative assessment of data quality. The light fall-off versus effective distance was also used for in situ empirical optimization of our light model. We demonstrated the feasibility of this technique by noninvasively mapping the functional response in the somatosensory cortex of the mouse following electrical stimulation of the forepaw.
Kashefolgheta, Sadra; Vila Verde, Ana
2017-08-09
We present a set of Lennard-Jones parameters for classical, all-atom models of acetate and various alkylated and non-alkylated forms of sulfate, sulfonate and phosphate ions, optimized to reproduce their interactions with water and with the physiologically relevant sodium, ammonium and methylammonium cations. The parameters are internally consistent and are fully compatible with the Generalized Amber Force Field (GAFF), the AMBER force field for proteins, the accompanying TIP3P water model and the sodium model of Joung and Cheatham. The parameters were developed primarily relying on experimental information - hydration free energies and solution activity derivatives at 0.5 m concentration - with ab initio, gas phase calculations being used for the cases where experimental information is missing. The ab initio parameterization scheme presented here is distinct from other approaches because it explicitly connects gas phase binding energies to intermolecular interactions in solution. We demonstrate that the original GAFF/AMBER parameters often overestimate anion-cation interactions, leading to an excessive number of contact ion pairs in solutions of carboxylate ions, and to aggregation in solutions of divalent ions. GAFF/AMBER parameters lead to excessive numbers of salt bridges in proteins and of contact ion pairs between sodium and acidic protein groups, issues that are resolved by using the optimized parameters presented here.
Yao, Kuang-Ta; Chen, Chen-Sheng; Cheng, Cheng-Kung; Fang, Hsu-Wei; Huang, Chang-Hung; Kao, Hung-Chan; Hsu, Ming-Lun
2018-02-01
Conical implant-abutment connections are popular for their excellent connection stability, which is attributable to frictional resistance in the connection. However, conical angles, the inherent design parameter of conical connections, exert opposing effects on 2 influencing factors of the connection stability: frictional resistance and abutment rigidity. This pilot study employed an optimization approach through the finite element method to obtain an optimal conical angle for the highest connection stability in an Ankylos-based conical connection system. A nonlinear 3-dimensional finite element parametric model was developed according to the geometry of the Ankylos system (conical half angle = 5.7°) by using the ANSYS 11.0 software. Optimization algorithms were conducted to obtain the optimal conical half angle and achieve the minimal value of maximum von Mises stress in the abutment, which represents the highest connection stability. The optimal conical half angle obtained was 10.1°. Compared with the original design (5.7°), the optimal design demonstrated an increased rigidity of abutment (36.4%) and implant (25.5%), a decreased microgap at the implant-abutment interface (62.3%), a decreased contact pressure (37.9%) with a more uniform stress distribution in the connection, and a decreased stress in the cortical bone (4.5%). In conclusion, the methodology of design optimization to determine the optimal conical angle of the Ankylos-based system is feasible. Because of the heterogeneity of different systems, more studies should be conducted to define the optimal conical angle in various conical connection designs.
Data Mining and Optimization Tools for Developing Engine Parameters Tools
NASA Technical Reports Server (NTRS)
Dhawan, Atam P.
1998-01-01
This project was awarded for understanding the problem and developing a plan for data mining tools for use in designing and implementing an Engine Condition Monitoring System. Tricia Erhardt and I studied the problem domain for developing an Engine Condition Monitoring system using the sparse and non-standardized datasets to be made available through a consortium at NASA Lewis Research Center. We visited NASA three times to discuss additional issues related to the dataset, which was not made available to us. We discussed and developed a general framework of data mining and optimization tools to extract useful information from sparse and non-standard datasets. These discussions led to the training of Tricia Erhardt to develop genetic algorithm (GA) based search programs, which were written in C++ and used to demonstrate the capability of the GA in searching for an optimal solution in noisy datasets. From the study and discussions with NASA LeRC personnel, we then prepared a proposal, which is being submitted to NASA, for future work on the development of data mining algorithms for engine condition monitoring. The proposed set of algorithms uses wavelet processing to create a multi-resolution pyramid of the data for GA-based multi-resolution optimal search.
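A minimal GA of the kind described, maximizing a fitness function over bit strings; the operators (tournament selection, one-point crossover, bit-flip mutation) are generic textbook choices, not details taken from the project:

```python
import random

def genetic_search(fitness, nbits=16, pop=40, gens=60, pmut=0.05, seed=0):
    """Minimal genetic algorithm: evolve a population of bit strings
    toward high fitness via tournament selection, one-point crossover,
    and per-bit mutation. Returns the best individual found."""
    rng = random.Random(seed)
    P = [[rng.randint(0, 1) for _ in range(nbits)] for _ in range(pop)]
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop:
            # tournament selection of two parents (best of 3 each)
            a, b = (max(rng.sample(P, 3), key=fitness) for _ in range(2))
            cut = rng.randrange(1, nbits)          # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (rng.random() < pmut) for g in child]  # mutation
            nxt.append(child)
        P = nxt
    return max(P, key=fitness)
```

For engine-condition search, the fitness would score a candidate (e.g. a feature subset or parameter encoding) against the noisy dataset.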
Bilevel Model-Based Discriminative Dictionary Learning for Recognition.
Zhou, Pan; Zhang, Chao; Lin, Zhouchen
2017-03-01
Most supervised dictionary learning methods optimize the combinations of reconstruction error, sparsity prior, and discriminative terms. Thus, the learnt dictionaries may not be optimal for recognition tasks. Also, the sparse codes learning models in the training and the testing phases are inconsistent. Besides, without utilizing the intrinsic data structure, many dictionary learning methods only employ the l0 or l1 norm to encode each datum independently, limiting the performance of the learnt dictionaries. We present a novel bilevel model-based discriminative dictionary learning method for recognition tasks. The upper level directly minimizes the classification error, while the lower level uses the sparsity term and the Laplacian term to characterize the intrinsic data structure. The lower level is subordinate to the upper level. Therefore, our model achieves an overall optimality for recognition in that the learnt dictionary is directly tailored for recognition. Moreover, the sparse codes learning models in the training and the testing phases can be the same. We further propose a novel method to solve our bilevel optimization problem. It first replaces the lower level with its Karush-Kuhn-Tucker conditions and then applies the alternating direction method of multipliers to solve the equivalent problem. Extensive experiments demonstrate the effectiveness and robustness of our method.
Roux, Emmanuel; Ramalli, Alessandro; Tortoli, Piero; Cachard, Christian; Robini, Marc C; Liebgott, Herve
2016-12-01
Full matrix arrays are excellent tools for 3-D ultrasound imaging, but the required number of active elements is too high to be individually controlled by an equal number of scanner channels. The number of active elements is significantly reduced by the sparse array techniques, but the position of the remaining elements must be carefully optimized. This issue is faced here by introducing novel energy functions in the simulated annealing (SA) algorithm. At each iteration step of the optimization process, one element is freely translated and the associated radiated pattern is simulated. To control the pressure field behavior at multiple depths, three energy functions inspired by the pressure field radiated by a Blackman-tapered spiral array are introduced. Such energy functions aim at limiting the main lobe width while lowering the side lobe and grating lobe levels at multiple depths. Numerical optimization results illustrate the influence of the number of iterations, pressure measurement points, and depths, as well as the influence of the energy function definition on the optimized layout. It is also shown that performance close to or even better than the one provided by a spiral array, here assumed as reference, may be obtained. The finite-time convergence properties of SA allow the duration of the optimization process to be set in advance.
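A generic SA loop of the kind the abstract plugs its pressure-field energy functions into; the cooling schedule and the Metropolis acceptance rule are standard choices, and the one-element-translation move enters through the user-supplied neighbor function (all names here are illustrative):

```python
import math, random

def anneal(energy, state, neighbor, t0=1.0, t_end=1e-3, steps=5000, seed=0):
    """Generic simulated annealing: at each step propose a neighboring
    state (e.g. translate one array element), accept downhill moves
    always and uphill moves with the Metropolis probability, and cool
    the temperature geometrically. Returns the best state and energy."""
    rng = random.Random(seed)
    t = t0
    cool = (t_end / t0) ** (1.0 / steps)
    e = energy(state)
    best, e_best = state, e
    for _ in range(steps):
        cand = neighbor(state, rng)
        e_cand = energy(cand)
        if e_cand < e or rng.random() < math.exp((e - e_cand) / t):
            state, e = cand, e_cand
            if e < e_best:
                best, e_best = state, e
        t *= cool
    return best, e_best
```

In the paper's setting, `energy` would simulate the radiated pressure field of the candidate layout and penalize main-lobe width and side/grating-lobe levels at several depths.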
Fast Solution in Sparse LDA for Binary Classification
NASA Technical Reports Server (NTRS)
Moghaddam, Baback
2010-01-01
An algorithm that performs sparse linear discriminant analysis (Sparse-LDA) finds near-optimal solutions in far less time than the prior art when specialized to binary classification (of 2 classes). Sparse-LDA is a type of feature- or variable-selection problem with numerous applications in statistics, machine learning, computer vision, computational finance, operations research, and bio-informatics. Because of their combinatorial nature, feature- or variable-selection problems are NP-hard or computationally intractable in cases involving more than 30 variables or features. Therefore, one typically seeks approximate solutions by means of greedy search algorithms. The prior Sparse-LDA algorithm was a greedy algorithm that considered the best variable or feature to add to or delete from its subsets in order to maximally discriminate between multiple classes of data. The present algorithm is designed for the special but prevalent case of 2-class or binary classification (e.g. 1 vs. 0, functioning vs. malfunctioning, or change vs. no change). The present algorithm provides near-optimal solutions on large real-world datasets having hundreds or even thousands of variables or features (e.g. selecting the fewest wavelength bands in a hyperspectral sensor to do terrain classification) and does so in typical computation times of minutes, as compared to the days or weeks taken by the prior art. Sparse-LDA requires solving generalized eigenvalue problems for a large number of variable subsets (represented by the submatrices of the input within-class and between-class covariance matrices). In the general (full-rank) case, the amount of computation scales at least cubically with the number of variables, and thus the size of the problems that can be solved is limited accordingly. However, in binary classification, the principal eigenvalues can be found using a special analytic formula, without resorting to costly iterative techniques.
The present algorithm exploits this analytic form along with the inherent sequential nature of greedy search itself. Together these enable the use of highly efficient partitioned-matrix-inverse techniques that yield large computational speedups in both the forward-selection and backward-elimination stages of greedy algorithms in general.
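The greedy selection described above can be illustrated with a simplified sketch. The code below is a toy Python illustration, not the reported implementation: it forward-selects features by maximizing the two-class Fisher discriminant value (mu1 - mu0)^T Sw^{-1} (mu1 - mu0), and for clarity recomputes each score directly rather than using the analytic eigenvalue formula and partitioned-matrix-inverse updates that give the reported speedups. All names and parameters are hypothetical.

```python
import numpy as np

def fisher_score(X1, X0, idx):
    """Two-class discriminant value (mu1 - mu0)^T Sw^{-1} (mu1 - mu0)
    on the feature subset idx (within-class scatter Sw, lightly regularized)."""
    A, B = X1[:, idx], X0[:, idx]
    d = A.mean(axis=0) - B.mean(axis=0)
    Sw = np.atleast_2d(np.cov(A, rowvar=False) + np.cov(B, rowvar=False))
    Sw = Sw + 1e-8 * np.eye(len(idx))
    return float(d @ np.linalg.solve(Sw, d))

def greedy_sparse_lda(X1, X0, k):
    """Forward-select k features that maximize the binary Fisher criterion."""
    selected = []
    remaining = list(range(X1.shape[1]))
    for _ in range(k):
        best = max(remaining, key=lambda j: fisher_score(X1, X0, selected + [j]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Synthetic data: only feature 3 separates the two classes.
rng = np.random.default_rng(0)
X0 = rng.normal(size=(200, 10))
X1 = rng.normal(size=(200, 10))
X1[:, 3] += 3.0
picked = greedy_sparse_lda(X1, X0, k=2)
```

A backward-elimination pass would work the same way, dropping at each step the feature whose removal decreases the score least.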
Memory replay in balanced recurrent networks
Chenkov, Nikolay; Sprekeler, Henning; Kempter, Richard
2017-01-01
Complex patterns of neural activity appear during up-states in the neocortex and sharp waves in the hippocampus, including sequences that resemble those during prior behavioral experience. The mechanisms underlying this replay are not well understood. How can small synaptic footprints engraved by experience control large-scale network activity during memory retrieval and consolidation? We hypothesize that sparse and weak synaptic connectivity between Hebbian assemblies is boosted by pre-existing recurrent connectivity within them. To investigate this idea, we connect sequences of assemblies in randomly connected spiking neuronal networks with a balance of excitation and inhibition. Simulations and analytical calculations show that recurrent connections within assemblies allow for a fast amplification of signals that indeed reduces the required number of inter-assembly connections. Replay can be evoked by small sensory-like cues or emerge spontaneously from activity fluctuations. Global, potentially neuromodulatory, alterations of neuronal excitability can switch between network states that favor retrieval and consolidation. PMID:28135266
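The amplification idea can be conveyed by a toy linear rate model (a sketch only; the study itself uses balanced spiking networks): within-assembly recurrence w boosts the response to a weak inter-assembly connection eps by a factor of roughly 1/(1 - w), so far fewer inter-assembly synapses are needed to propagate a replay signal. All parameter values here are illustrative assumptions.

```python
def run(w, eps, T=200, dt=0.1, tau=10.0):
    """Two assemblies with linear rate dynamics: a brief cue drives assembly 1,
    which drives assembly 2 through a weak connection eps; within-assembly
    recurrence w amplifies both responses by roughly 1/(1 - w)."""
    r1 = r2 = 0.0
    for step in range(T):
        cue = 1.0 if step < 20 else 0.0            # brief sensory-like cue
        r1 += dt / tau * (-(1.0 - w) * r1 + cue)   # leaky rate + recurrence
        r2 += dt / tau * (-(1.0 - w) * r2 + eps * r1)
    return r2

weak = run(w=0.0, eps=0.05)      # no recurrent boost
boosted = run(w=0.8, eps=0.05)   # same weak inter-assembly connection, with boost
```

With identical inter-assembly strength, the recurrently boosted network passes a several-fold larger signal to the downstream assembly.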
An efficient optical architecture for sparsely connected neural networks
NASA Technical Reports Server (NTRS)
Hine, Butler P., III; Downie, John D.; Reid, Max B.
1990-01-01
An architecture for a general-purpose optical neural network processor is presented in which the interconnections and weights are formed by directing coherent beams holographically, thereby using the space-bandwidth product of the recording medium for sparsely interconnected networks more efficiently than the commonly used vector-matrix multiplier does, since all of the hologram area is in use. An investigation is made of the use of computer-generated holograms recorded on updatable media, such as thermoplastic materials, to define the interconnections and weights of a neural network processor; attention is given to the limits on interconnection densities, diffraction efficiencies, and weighting accuracies possible with such an updatable thin-film holographic device.
Ponzi, Adam; Wickens, Jeff
2010-04-28
The striatum is composed of GABAergic medium spiny neurons with inhibitory collaterals forming a sparse random asymmetric network and receiving an excitatory glutamatergic cortical projection. Because the inhibitory collaterals are sparse and weak, their role in striatal network dynamics is puzzling. However, here we show by simulation of a striatal inhibitory network model composed of spiking neurons that cells form assemblies that fire in sequential coherent episodes and display complex identity-temporal spiking patterns even when cortical excitation is simply constant or fluctuating noisily. Strongly correlated large-scale firing rate fluctuations on slow behaviorally relevant timescales of hundreds of milliseconds are shown by members of the same assembly whereas members of different assemblies show strong negative correlation, and we show how randomly connected spiking networks can generate this activity. Cells display highly irregular spiking with high coefficients of variation, broadly distributed low firing rates, and interspike interval distributions that are consistent with exponentially tailed power laws. Although firing rates vary coherently on slow timescales, precise spiking synchronization is absent in general. Our model only requires the minimal but striatally realistic assumptions of sparse to intermediate random connectivity, weak inhibitory synapses, and sufficient cortical excitation so that some cells are depolarized above the firing threshold during up states. Our results are in good qualitative agreement with experimental studies, consistent with recently determined striatal anatomy and physiology, and support a new view of endogenously generated metastable state switching dynamics of the striatal network underlying its information processing operations.
Wan, Xiaoqing; Zhao, Chunhui
2017-06-01
As a competitive machine learning algorithm, the stacked sparse autoencoder (SSA) has achieved outstanding popularity in exploiting high-level features for classification of hyperspectral images (HSIs). In the standard SSA architecture, the nodes between adjacent layers are fully connected and must be iteratively fine-tuned during the pretraining stage; however, nodes of previous layers that are farther away are less likely to be densely correlated with a given node of subsequent layers. Therefore, to reduce the classification error and increase the learning rate, this paper proposes a general framework of locally connected SSA; that is, a biologically inspired local receptive field (LRF) constrained SSA architecture is employed to simultaneously characterize the local correlations of spectral features and extract high-level feature representations of hyperspectral data. In addition, the appropriate receptive field constraint is concurrently updated by measuring the spatial distances from the neighboring nodes to the corresponding node. Finally, an efficient random forest classifier is cascaded to the last hidden layer of the SSA architecture as a benchmark classifier. Experimental results on two real HSI datasets demonstrate that the proposed hierarchical LRF-constrained stacked sparse autoencoder and random forest (SSARF) provides encouraging results compared with other competing methods, for instance, improvements in overall accuracy ranging from 0.72% to 10.87% for the Indian Pines dataset and from 0.74% to 7.90% for the Kennedy Space Center dataset; moreover, it requires less running time than comparable SSA-based methods.
Temporally-Constrained Group Sparse Learning for Longitudinal Data Analysis in Alzheimer’s Disease
Jie, Biao; Liu, Mingxia; Liu, Jun
2016-01-01
Sparse learning has been widely investigated for analysis of brain images to assist the diagnosis of Alzheimer's disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). However, most existing sparse learning-based studies only adopt cross-sectional analysis methods, where the sparse model is learned using data from a single time-point. In practice, multiple time-points of data are often available in brain imaging applications, which can be used in longitudinal analysis methods to better uncover disease progression patterns. Accordingly, in this paper we propose a novel temporally-constrained group sparse learning method for longitudinal analysis with multiple time-points of data. Specifically, we learn a sparse linear regression model using the imaging data from multiple time-points, where a group regularization term is employed to group together the weights for the same brain region across different time-points. Furthermore, to reflect the smooth changes between data from adjacent time-points, we incorporate two smoothness regularization terms into the objective function: a fused smoothness term, which requires that the differences between two successive weight vectors from adjacent time-points be small, and an output smoothness term, which requires that the differences between the outputs of two successive models from adjacent time-points also be small. We develop an efficient optimization algorithm to solve the proposed objective function. Experimental results on the ADNI database demonstrate that, compared with conventional sparse learning-based methods, our proposed method achieves improved regression performance and also helps in discovering disease-related biomarkers. PMID:27093313
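As a rough sketch of the objective described above (not the authors' code; the squared form of the two smoothness terms, the choice of design matrix for the output term, and all parameter names are assumptions), the four terms can be written down directly:

```python
import numpy as np

def tgsl_objective(W, Xs, ys, lam_g, lam_f, lam_o):
    """Temporally-constrained group sparse objective (sketch).
    W: (T, p) array, one weight vector per time-point; Xs, ys: per-time-point data."""
    fit = sum(np.sum((X @ w - y) ** 2) for X, w, y in zip(Xs, W, ys))
    group = np.sum(np.linalg.norm(W, axis=0))       # L2,1: couple each feature across time
    fused = np.sum((W[1:] - W[:-1]) ** 2)           # fused smoothness on successive weights
    out = sum(np.sum((X @ (w2 - w1)) ** 2)          # output smoothness on successive models
              for X, w1, w2 in zip(Xs, W[:-1], W[1:]))
    return fit + lam_g * group + lam_f * fused + lam_o * out

# Tiny deterministic check of the terms: fit = 2, fused = 4, others zero.
W = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
Xs = [np.eye(2)] * 3
ys = [np.zeros(2)] * 3
val = tgsl_objective(W, Xs, ys, lam_g=0.0, lam_f=1.0, lam_o=0.0)
```

The paper's dedicated solver is not reproduced; in practice this objective would be minimized with a proximal-gradient style algorithm handling the non-smooth group term.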
NASA Astrophysics Data System (ADS)
Xu, Xia; Shi, Zhenwei; Pan, Bin
2018-07-01
Sparse unmixing aims at recovering pure materials from hyperspectral images and estimating their abundance fractions. Sparse unmixing is in fact an ℓ0 problem, which is NP-hard, so a relaxation is often used instead. In this paper, we attempt to deal with the ℓ0 problem directly via a multi-objective based method, which is a non-convex approach. The characteristics of hyperspectral images are integrated into the proposed method, which leads to a new spectral and multi-objective based sparse unmixing method (SMoSU). In order to solve the ℓ0 norm optimization problem, the spectral library is encoded in a binary vector, and a bit-wise flipping strategy is used to generate new individuals in the evolution process. However, a multi-objective method usually produces a number of non-dominated solutions, while sparse unmixing requires a single solution. How to make the final decision for sparse unmixing is challenging. To handle this problem, we integrate the spectral characteristics of hyperspectral images into SMoSU. By considering the spectral correlation in hyperspectral data, we improve the Tchebycheff decomposition function in SMoSU via a new regularization term. This regularization term enforces individual divergence in the evolution process of SMoSU. In this way, the diversity and convergence of the population are further balanced, which is beneficial to the concentration of individuals. In the experiments, three synthetic datasets and one real-world dataset are used to analyse the effectiveness of SMoSU, and several state-of-the-art sparse unmixing algorithms are compared.
Modeling Alzheimer's disease cognitive scores using multi-task sparse group lasso.
Liu, Xiaoli; Goncalves, André R; Cao, Peng; Zhao, Dazhe; Banerjee, Arindam
2018-06-01
Alzheimer's disease (AD) is a severe neurodegenerative disorder characterized by loss of memory and reduction in cognitive functions due to progressive degeneration of neurons and their connections, eventually leading to death. In this paper, we consider the problem of simultaneously predicting several different cognitive scores associated with categorizing subjects as normal, mild cognitive impairment (MCI), or Alzheimer's disease (AD) in a multi-task learning framework, using features extracted from brain images obtained from ADNI (Alzheimer's Disease Neuroimaging Initiative). To solve the problem, we present a multi-task sparse group lasso (MT-SGL) framework, which estimates sparse features coupled across tasks, and can work with loss functions associated with any Generalized Linear Model. Through comparisons with a variety of baseline models using multiple evaluation metrics, we illustrate the promising predictive performance of MT-SGL on ADNI, along with its ability to identify brain regions more likely to help characterize Alzheimer's disease progression. Copyright © 2017 Elsevier Ltd. All rights reserved.
Signal-Preserving Erratic Noise Attenuation via Iterative Robust Sparsity-Promoting Filter
Zhao, Qiang; Du, Qizhen; Gong, Xufei; ...
2018-04-06
Thresholding filters operating in a sparse domain are highly effective at removing Gaussian random noise under a Gaussian distribution assumption. Erratic noise, which designates non-Gaussian noise consisting of large isolated events with known or unknown distribution, also needs to be explicitly taken into account. However, conventional sparse-domain thresholding filters based on the least-squares (LS) criterion are severely sensitive to data with high-amplitude, non-Gaussian noise, i.e., erratic noise, which makes the suppression of this type of noise extremely challenging. In this paper, we present a robust sparsity-promoting denoising model in which the LS criterion is replaced by the Huber criterion to weaken the effects of erratic noise. Random and erratic noise are distinguished by a data-adaptive parameter in the presented method: random noise is described by mean square, while erratic noise is downweighted through a damped weight. Unlike conventional sparse-domain thresholding filters, defining the misfit between noisy data and recovered signal via the Huber criterion results in a nonlinear optimization problem. With the help of theoretical pseudoseismic data, an iterative robust sparsity-promoting filter is proposed to transform the nonlinear optimization problem into a linear LS problem through an iterative procedure. The main advantage of this transformation is that the nonlinear denoising filter can be solved by conventional LS solvers. Finally, tests with several data sets demonstrate that the proposed denoising filter successfully attenuates erratic noise without damaging useful signal when compared with conventional denoising approaches based on the LS criterion.
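The core idea, replacing the LS misfit with the Huber criterion and solving it as a sequence of linear LS problems, can be sketched with a generic iteratively reweighted least-squares loop. This is a simplified stand-in for the paper's filter: the sparse transform is omitted, and the model, threshold, and data are illustrative assumptions.

```python
import numpy as np

def huber_irls(A, y, delta=1.0, n_iter=30):
    """Minimize sum of huber_delta(y - A x) by iteratively reweighted least squares.
    Residuals beyond delta receive damped weights delta/|r|, so isolated large
    (erratic) errors barely influence the fit; each iteration is a linear LS solve."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - A @ x
        w = np.minimum(1.0, delta / np.maximum(np.abs(r), 1e-12))  # Huber weights
        Aw = A * w[:, None]
        x = np.linalg.solve(A.T @ Aw, Aw.T @ y)   # weighted normal equations
    return x

# Clean linear data plus a few large spikes of "erratic" noise.
rng = np.random.default_rng(1)
A = rng.normal(size=(100, 3))
x_true = np.array([1.0, -2.0, 0.5])
y = A @ x_true + 0.01 * rng.normal(size=100)
y[::10] += 20.0                                   # isolated high-amplitude events
x_ls = np.linalg.lstsq(A, y, rcond=None)[0]       # ordinary LS: badly biased
x_rob = huber_irls(A, y, delta=0.1)               # Huber IRLS: nearly unaffected
```

Each pass re-solves an ordinary weighted LS problem, which is exactly why, as the abstract notes, conventional LS solvers suffice.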
Dose-shaping using targeted sparse optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sayre, George A.; Ruan, Dan
2013-07-15
Purpose: Dose volume histograms (DVHs) are common tools in radiation therapy treatment planning to characterize plan quality. As statistical metrics, DVHs provide a compact summary of the underlying plan at the cost of losing spatial information: the same or similar dose-volume histograms can arise from substantially different spatial dose maps. This is exactly why physicians and physicists scrutinize dose maps even after they satisfy all DVH endpoints numerically. However, up to this point, little has been done to control spatial phenomena, such as the spatial distribution of hot spots, which have significant clinical implications. To this end, the authors propose a novel objective function that enables a more direct tradeoff between target coverage, organ-sparing, and planning target volume (PTV) homogeneity, and present findings from four prostate cases, a pancreas case, and a head-and-neck case to illustrate the advantages and general applicability of the method. Methods: In designing the energy minimization objective (E_tot^sparse), the authors utilized the following robust cost functions: (1) an asymmetric linear well function to allow differential penalties for underdose, relaxation of prescription dose, and overdose in the PTV; (2) a two-piece linear function to heavily penalize high dose and mildly penalize low and intermediate dose in organs-at-risk (OARs); and (3) a total variation energy, i.e., the L1 norm applied to the first-order approximation of the dose gradient in the PTV. By minimizing a weighted sum of these robust costs, general conformity to the dose prescription and dose-gradient prescription is achieved while encouraging prescription violations to follow a Laplace distribution. In contrast, conventional quadratic objectives are associated with a Gaussian distribution of violations, which is less forgiving of large violations of prescription than the Laplace distribution.
As a result, the proposed objective E_tot^sparse improves the tradeoff between planning goals by 'sacrificing' voxels that have already been violated in order to improve PTV coverage, PTV homogeneity, and/or OAR-sparing. In doing so, overall plan quality is increased, since these large violations arise only if a net reduction in E_tot^sparse occurs as a result. For example, large violations of the dose prescription in the PTV in E_tot^sparse-optimized plans naturally localize to voxels in and around PTV-OAR overlaps, where OAR-sparing may be increased without compromising target coverage. The authors compared the results of the method and the corresponding clinical plans using analyses of DVH plots, dose maps, and two quantitative metrics that quantify PTV homogeneity and overdose. These metrics do not penalize underdose, since E_tot^sparse-optimized plans were planned such that their target coverage was similar to or better than that of the clinical plans. Finally, plan deliverability was assessed with the 2D modulation index. Results: The proposed method was implemented using IBM's CPLEX optimization package (ILOG CPLEX, Sunnyvale, CA) and required 1-4 min to solve with a 12-core Intel i7 processor. In the testing procedure, the authors optimized for several points on the Pareto surface of four 7-field 6 MV prostate cases that were optimized for different levels of PTV homogeneity and OAR-sparing. The generated results were compared against each other and the clinical plan by analyzing their DVH plots and dose maps. After developing intuition by planning the four prostate cases, which had relatively few tradeoffs, the authors applied the method to a 7-field 6 MV pancreas case and a 9-field 6 MV head-and-neck case to test its potential impact on more challenging cases.
The authors found that the formulation: (1) provided excellent flexibility for balancing OAR-sparing with PTV homogeneity; and (2) permitted the dose planner more control over the evolution of the PTV's spatial dose distribution than conventional objective functions. In particular, E_tot^sparse-optimized plans for the pancreas case and head-and-neck case exhibited substantially improved sparing of the spinal cord and parotid glands, respectively, while maintaining or improving sparing of other OARs and markedly improving PTV homogeneity. Plan deliverability for E_tot^sparse-optimized plans was shown to be better than for the associated clinical plans, according to the two-dimensional modulation index. Conclusions: These results suggest that the formulation may be used to improve dose-shaping and OAR-sparing for complicated disease sites, such as the pancreas or head and neck. Furthermore, the objective function and constraints are linear and constitute a linear program, which converges to the global minimum quickly and can be easily implemented in treatment planning software. Thus, the authors expect fast translation of the method to the clinic, where it may have a positive impact on plan quality for challenging disease sites.
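The robust cost terms (1) and (2) can be sketched as simple piecewise-linear penalties. This is illustrative Python with hypothetical weights and dose levels, not the clinical objective: the total-variation term (3) and the actual linear-programming solve are omitted.

```python
import numpy as np

def ptv_well(d, lo, hi, under=10.0, over=1.0):
    """(1) Asymmetric linear well for the PTV: heavy penalty for underdose
    below lo, zero penalty inside [lo, hi], milder penalty for overdose above hi."""
    return under * np.maximum(lo - d, 0.0) + over * np.maximum(d - hi, 0.0)

def oar_twopiece(d, knee, mild=0.1, steep=5.0):
    """(2) Two-piece linear OAR cost: mild slope below the knee dose, steep above."""
    return np.where(d <= knee, mild * d, mild * knee + steep * (d - knee))

doses = np.array([50.0, 62.0, 70.0])            # hypothetical voxel doses (Gy)
ptv_cost = ptv_well(doses, lo=60.0, hi=65.0)    # [100, 0, 5]
oar_cost = oar_twopiece(doses, knee=20.0)
```

Because both penalties are piecewise linear, a weighted sum over voxels can be expressed with auxiliary variables as a linear program, matching the convergence and implementability claims above.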
Structured sparse linear graph embedding.
Wang, Haixian
2012-03-01
Subspace learning is a core issue in pattern recognition and machine learning. Linear graph embedding (LGE) is a general framework for subspace learning. In this paper, we propose a structured sparse extension to LGE (SSLGE) by introducing a structured sparsity-inducing norm into LGE. Specifically, SSLGE casts the learning of the projection bases as a regression-type optimization problem, and then applies structured sparsity regularization to the regression coefficients. The regularization selects a subset of features and meanwhile encodes high-order information reflecting a priori structural information about the data. The SSLGE technique provides a unified framework for discovering structured sparse subspaces. Computationally, by using a variational equality and the Procrustes transformation, SSLGE is efficiently solved with closed-form updates. Experimental results on face images show the effectiveness of the proposed method. Copyright © 2011 Elsevier Ltd. All rights reserved.
Huo, Zhiguang; Tseng, George
2017-01-01
Cancer subtype discovery is the first step toward delivering personalized medicine to cancer patients. With the accumulation of massive multi-level omics datasets and established biological knowledge databases, omics data integration with incorporation of rich existing biological knowledge is essential for deciphering the biological mechanisms behind complex diseases. In this manuscript, we propose an integrative sparse K-means (is-Kmeans) approach to discover disease subtypes with the guidance of prior biological knowledge via sparse overlapping group lasso. An algorithm using the alternating direction method of multipliers (ADMM) is applied for fast optimization. Simulations and three real applications in breast cancer and leukemia are used to compare is-Kmeans with existing methods and demonstrate its superior clustering accuracy, feature selection, functional annotation of detected molecular features, and computing efficiency. PMID:28959370
Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons.
Burbank, Kendra S
2015-12-01
The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field's Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks.
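The mirrored rule can be illustrated with a toy STDP window (parameter values are assumptions, not the paper's): for a reciprocally connected pair of neurons, the feedforward synapse sees the spike-time difference dt while the feedback synapse sees -dt, so applying the temporally reversed window at feedback connections makes both synapses receive identical updates, which is what autoencoder learning requires.

```python
import numpy as np

def stdp(dt, a_plus=0.1, a_minus=0.1, tau=20.0):
    """Standard STDP window: pre-before-post (dt > 0) potentiates,
    post-before-pre (dt < 0) depresses; exponential decay with time constant tau."""
    return np.where(dt > 0, a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

def mirrored_stdp(dt, **kw):
    """mSTDP at feedback connections: the temporally reversed window."""
    return stdp(-dt, **kw)

# For a reciprocal pair, the feedforward synapse sees dt = t_post - t_pre
# and the feedback synapse sees the negated interval; with the mirrored
# window both receive the same weight change, keeping the weights matched.
dt_pairs = np.array([5.0, -12.0, 3.0, 30.0])
dw_ff = stdp(dt_pairs)
dw_fb = mirrored_stdp(-dt_pairs)
```

The identity dw_ff == dw_fb is the "identical changes at feedforward and feedback connections" requirement stated in the abstract.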
Liu, Hongcheng; Yao, Tao; Li, Runze; Ye, Yinyu
2017-11-01
This paper concerns folded concave penalized sparse linear regression (FCPSLR), a class of popular sparse recovery methods. Although FCPSLR yields desirable recovery performance when solved globally, computing a global solution is NP-complete. Despite some existing statistical performance analyses of local minimizers or of specific FCPSLR-based learning algorithms, it remains an open question whether local solutions that are known to admit fully polynomial-time approximation schemes (FPTAS) may already be sufficient to ensure statistical performance, and whether that performance can be non-contingent on the specific design of the computing procedure. To address these questions, this paper presents the following threefold results: (i) any local solution (stationary point) is a sparse estimator, under some conditions on the parameters of the folded concave penalties. (ii) Perhaps more importantly, any local solution satisfying a significant subspace second-order necessary condition (S3ONC), which is weaker than the second-order KKT condition, yields a bounded error in approximating the true parameter with high probability. In addition, if the minimal signal strength is sufficient, the S3ONC solution likely recovers the oracle solution. This result also explicates that the goal of improving the statistical performance is consistent with the optimization criterion of minimizing the suboptimality gap in solving the non-convex programming formulation of FCPSLR. (iii) We apply (ii) to the special case of FCPSLR with minimax concave penalty (MCP) and show that under the restricted eigenvalue condition, any S3ONC solution with a better objective value than the Lasso solution entails the strong oracle property. In addition, such a solution generates a model error (ME) comparable to the optimal but exponential-time sparse estimator given a sufficient sample size, while the worst-case ME is comparable to the Lasso in general. Furthermore, finding a solution that satisfies the S3ONC admits an FPTAS.
Friday Letters: Connecting Students, Teachers, and Families through Writing
ERIC Educational Resources Information Center
Newman, Terry H.; Bizzarri, Sarah A.
2012-01-01
An important part of student success in school is the involvement of families. However, the communication between students and families regarding school is often sparse at best and caregivers can feel left out as to what is happening. Friday letters improve communication between students and families and also provide a myriad of instructional…
Optimization of Sparse Matrix-Vector Multiplication on Emerging Multicore Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Samuel; Oliker, Leonid; Vuduc, Richard
2008-10-16
We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore-specific optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV), one of the most heavily used kernels in scientific computing, across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD quad-core, AMD dual-core, and Intel quad-core designs, the heterogeneous STI Cell, as well as one of the first scientific studies of the highly multithreaded Sun Victoria Falls (a Niagara2 SMP). We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural trade-offs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.
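For reference, the baseline scalar SpMV kernel that such optimizations (register blocking, SIMD, prefetching, NUMA-aware allocation) accelerate can be sketched in a few lines for the compressed sparse row (CSR) format. This is illustrative Python; production kernels are written in C and tuned per platform.

```python
import numpy as np

def spmv_csr(indptr, indices, data, x):
    """y = A @ x with A in compressed sparse row (CSR) form:
    row i's nonzeros live in data[indptr[i]:indptr[i+1]],
    with column indices in the matching slice of indices."""
    n = len(indptr) - 1
    y = np.zeros(n)
    for i in range(n):                       # one sparse dot product per row
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = data[lo:hi] @ x[indices[lo:hi]]
    return y

# A = [[4, 0, 1],
#      [0, 0, 2],
#      [3, 0, 0]]
indptr = np.array([0, 2, 3, 4])
indices = np.array([0, 2, 2, 0])
data = np.array([4.0, 1.0, 2.0, 3.0])
x = np.array([1.0, 2.0, 3.0])
y = spmv_csr(indptr, indices, data, x)
```

The irregular, indirect access to x (through indices) is what makes SpMV memory-bound and so sensitive to the architectural trade-offs studied above.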
Optimal Sparse Upstream Sensor Placement for Hydrokinetic Turbines
NASA Astrophysics Data System (ADS)
Cavagnaro, Robert; Strom, Benjamin; Ross, Hannah; Hill, Craig; Polagye, Brian
2016-11-01
Accurate measurement of the flow field incident upon a hydrokinetic turbine is critical for performance evaluation during testing and setting boundary conditions in simulation. Additionally, turbine controllers may leverage real-time flow measurements. Particle image velocimetry (PIV) is capable of rendering a flow field over a wide spatial domain in a controlled, laboratory environment. However, PIV's lack of suitability for natural marine environments, high cost, and intensive post-processing diminish its potential for control applications. Conversely, sensors such as acoustic Doppler velocimeters (ADVs), are designed for field deployment and real-time measurement, but over a small spatial domain. Sparsity-promoting regression analysis such as LASSO is utilized to improve the efficacy of point measurements for real-time applications by determining optimal spatial placement for a small number of ADVs using a training set of PIV velocity fields and turbine data. The study is conducted in a flume (0.8 m2 cross-sectional area, 1 m/s flow) with laboratory-scale axial and cross-flow turbines. Predicted turbine performance utilizing the optimal sparse sensor network and associated regression model is compared to actual performance with corresponding PIV measurements.
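A minimal sketch of LASSO-based sensor placement (not the study's code; the data, penalty, and number of candidate sites are invented for illustration): fit an L1-penalized regression from candidate point measurements to the turbine quantity of interest, then keep the locations with nonzero weights. The L1 penalty drives most weights exactly to zero, which is what makes the selected sensor network sparse.

```python
import numpy as np

def lasso_ista(A, y, lam, n_iter=500):
    """Minimize 0.5*||A w - y||^2 + lam*||w||_1 by proximal gradient (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ w - y)
        z = w - g / L
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return w

# Rows: training snapshots of velocity at candidate point-measurement locations
# (PIV-derived in the study); y: the turbine quantity to predict.
rng = np.random.default_rng(2)
snapshots = rng.normal(size=(120, 20))       # 20 candidate ADV locations
y = 2.0 * snapshots[:, 4] - 1.5 * snapshots[:, 11]   # only two sites informative
w = lasso_ista(snapshots, y, lam=5.0)
sites = np.flatnonzero(np.abs(w) > 1e-3)     # proposed sensor placements
```

The surviving indices indicate where to place the small number of field-deployable ADVs; the fitted weights then serve as the real-time regression model.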
NASA Astrophysics Data System (ADS)
Saadat, S. A.; Safari, A.; Needell, D.
2016-06-01
The main role of gravity field recovery is the study of dynamic processes in the interior of the Earth, especially in exploration geophysics. In this paper, the Stabilized Orthogonal Matching Pursuit (SOMP) algorithm is introduced for sparse reconstruction of regional gravity signals of the Earth. In practical applications, ill-posed problems may be encountered in which the unknown parameters are sensitive to data perturbations. Therefore, an appropriate regularization method needs to be applied to find a stabilized solution. The SOMP algorithm regularizes the norm of the solution vector while also minimizing the norm of the corresponding residual vector. In this procedure, a convergence point of the algorithm, which specifies the optimal sparsity level of the problem, is determined. The results show that the SOMP algorithm finds a stabilized solution for the ill-posed problem at the optimal sparsity level, improving upon existing sparsity-based approaches.
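SOMP adds a stabilization step, but its greedy core is standard Orthogonal Matching Pursuit, which can be sketched as follows (synthetic data; the paper's stabilization and regularization are not reproduced here):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms of A, refitting
    the coefficients by least squares after each selection."""
    residual = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 120))
A /= np.linalg.norm(A, axis=0)          # unit-norm dictionary columns
x_true = np.zeros(120); x_true[[5, 50, 99]] = [2.0, -1.5, 1.0]
y = A @ x_true                          # noiseless 3-sparse measurements
x_hat = omp(A, y, k=3)
```

In the noiseless 3-sparse setting above, OMP recovers the signal exactly; SOMP's contribution is keeping such recovery stable when the problem is ill-posed.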
NASA Technical Reports Server (NTRS)
Rajkumar, T.; Aragon, Cecilia; Bardina, Jorge; Britten, Roy
2002-01-01
A fast, reliable way of predicting aerodynamic coefficients is produced using a neural network optimized by a genetic algorithm. Basic aerodynamic coefficients (e.g. lift, drag, pitching moment) are modelled as functions of angle of attack and Mach number. The neural network is first trained on a relatively rich set of data from wind tunnel tests or numerical simulations to learn an overall model. Most of the aerodynamic parameters can be well fitted using polynomial functions. A new set of data, which can be relatively sparse, is then supplied to the network to produce a new model consistent with the previous model and the new data. Because the new model interpolates realistically between the sparse test data points, it is suitable for use in piloted simulations. The genetic algorithm is used to choose a neural network architecture that gives the best results, avoiding over- and under-fitting of the test data.
NASA Astrophysics Data System (ADS)
Noh, Hae Young; Kiremidjian, Anne S.
2011-04-01
This paper introduces a data compression method using the K-SVD algorithm and its application to experimental ambient vibration data for structural health monitoring purposes. Because many damage diagnosis algorithms that use system identification require vibration measurements of multiple locations, it is necessary to transmit long threads of data. In wireless sensor networks for structural health monitoring, however, data transmission is often a major source of battery consumption. Therefore, reducing the amount of data to transmit can significantly lengthen the battery life and reduce maintenance cost. The K-SVD algorithm was originally developed in information theory for sparse signal representation. This algorithm creates an optimal over-complete set of bases, referred to as a dictionary, using singular value decomposition (SVD) and represents the data as sparse linear combinations of these bases using the orthogonal matching pursuit (OMP) algorithm. Since ambient vibration data are stationary, we can segment them and represent each segment sparsely. Then only the dictionary and the sparse vectors of the coefficients need to be transmitted wirelessly for restoration of the original data. We applied this method to ambient vibration data measured from a four-story steel moment resisting frame. The results show that the method can compress the data efficiently and restore the data with very little error.
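The distinctive step of K-SVD is the dictionary update: for each atom, the residual restricted to the signals that actually use that atom is approximated by a rank-1 SVD, refitting the atom and its coefficients jointly. A minimal sketch of that single step, on toy signals rather than the paper's vibration data:

```python
import numpy as np

def ksvd_atom_update(D, X, Y, j):
    """One K-SVD dictionary-update step: replace atom j (and its coefficients)
    with the best rank-1 approximation of the residual over the signals
    that use atom j."""
    users = np.flatnonzero(X[j])
    if users.size == 0:
        return D, X
    # residual with atom j's contribution added back in
    E = Y[:, users] - D @ X[:, users] + np.outer(D[:, j], X[j, users])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, j] = U[:, 0]
    X[j, users] = s[0] * Vt[0]
    return D, X

# toy setup: signals synthesized from a known dictionary, one atom corrupted
rng = np.random.default_rng(3)
D_true = rng.standard_normal((8, 4))
D_true /= np.linalg.norm(D_true, axis=0)
X = np.zeros((4, 50))
for i in range(50):                       # 2-sparse codes
    idx = rng.choice(4, size=2, replace=False)
    X[idx, i] = rng.standard_normal(2)
Y = D_true @ X
D = D_true.copy()
D[:, 0] = rng.standard_normal(8); D[:, 0] /= np.linalg.norm(D[:, 0])

err_before = np.linalg.norm(Y - D @ X)
D, X = ksvd_atom_update(D, X, Y, 0)
err_after = np.linalg.norm(Y - D @ X)
```

Because the rank-1 SVD is the optimal refit given the other atoms, each update can only decrease the reconstruction error — the property that makes the learned dictionary compress the segments so well.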
Hiratani, Naoki; Fukai, Tomoki
2016-01-01
In the adult mammalian cortex, a small fraction of spines are created and eliminated every day, and the resultant synaptic connection structure is highly nonrandom, even in local circuits. However, it remains unknown whether a particular synaptic connection structure is functionally advantageous in local circuits, and why creation and elimination of synaptic connections is necessary in addition to rich synaptic weight plasticity. To answer these questions, we studied an inference task model through theoretical and numerical analyses. We demonstrate that a robustly beneficial network structure naturally emerges by combining Hebbian-type synaptic weight plasticity and wiring plasticity. Especially in a sparsely connected network, wiring plasticity achieves reliable computation by enabling efficient information transmission. Furthermore, the proposed rule reproduces experimentally observed correlations between spine dynamics and task performance. PMID:27303271
SPReM: Sparse Projection Regression Model for High-dimensional Linear Regression
Sun, Qiang; Zhu, Hongtu; Liu, Yufeng; Ibrahim, Joseph G.
2014-01-01
The aim of this paper is to develop a sparse projection regression modeling (SPReM) framework to perform multivariate regression modeling with a large number of responses and a multivariate covariate of interest. We propose two novel heritability ratios to simultaneously perform dimension reduction, response selection, estimation, and testing, while explicitly accounting for correlations among multivariate responses. Our SPReM is devised to specifically address the low statistical power issue of many standard statistical approaches, such as Hotelling's T² test statistic or a mass univariate analysis, for high-dimensional data. We formulate the estimation problem of SPReM as a novel sparse unit rank projection (SURP) problem and propose a fast optimization algorithm for SURP. Furthermore, we extend SURP to the sparse multi-rank projection (SMURP) by adopting a sequential SURP approximation. Theoretically, we have systematically investigated the convergence properties of SURP and the convergence rate of SURP estimates. Our simulation results and real data analysis have shown that SPReM outperforms other state-of-the-art methods. PMID:26527844
Smoothed low rank and sparse matrix recovery by iteratively reweighted least squares minimization.
Lu, Canyi; Lin, Zhouchen; Yan, Shuicheng
2015-02-01
This paper presents a general framework for solving low-rank and/or sparse matrix minimization problems, which may involve multiple nonsmooth terms. The iteratively reweighted least squares (IRLS) method is a fast solver, which smooths the objective function and minimizes it by alternately updating the variables and their weights. However, traditional IRLS can only solve a sparse-only or low-rank-only minimization problem with squared loss or an affine constraint. This paper generalizes IRLS to solve joint/mixed low-rank and sparse minimization problems, which are essential formulations for many tasks. As a concrete example, we solve the Schatten-p norm and l2,q-norm regularized low-rank representation problem by IRLS, and theoretically prove that the derived solution is a stationary point (globally optimal if p,q ≥ 1). Our convergence proof of IRLS is more general than previous ones that depend on the special properties of the Schatten-p norm and l2,q-norm. Extensive experiments on both synthetic and real data sets demonstrate that our IRLS is much more efficient.
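The flavor of IRLS on the simplest member of this family — l1 minimization under an affine constraint — can be sketched in a few lines; each step solves a weighted least squares problem whose weights come from the previous iterate. This is an illustrative sketch with a fixed smoothing schedule, not the paper's joint low-rank-plus-sparse algorithm:

```python
import numpy as np

def irls_l1(A, y, n_iter=60):
    """Smoothed IRLS for min ||x||_1 s.t. Ax = y: repeatedly solve a
    weighted least squares problem, with weights |x| + eps from the
    previous iterate and a decreasing smoothing parameter eps."""
    x = A.T @ np.linalg.solve(A @ A.T, y)     # minimum-energy start
    eps = 1.0
    for _ in range(n_iter):
        w = np.abs(x) + eps                   # smoothing avoids division by 0
        AW = A * w                            # A @ diag(w)
        x = w * (A.T @ np.linalg.solve(AW @ A.T, y))
        eps = max(eps / 10.0, 1e-9)
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((60, 120))
x_true = np.zeros(120); x_true[[7, 33, 90]] = [2.0, -1.5, 1.0]
y = A @ x_true
x_hat = irls_l1(A, y)
```

Every iterate stays feasible (Ax = y) while the reweighting progressively concentrates the energy onto a sparse support.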
Configurable hardware integrate and fire neurons for sparse approximation.
Shapero, Samuel; Rozell, Christopher; Hasler, Paul
2013-09-01
Sparse approximation is an important optimization problem in signal and image processing applications. A Hopfield-network-like system of integrate and fire (IF) neurons is proposed as a solution, using the Locally Competitive Algorithm (LCA) to solve an overcomplete L1 sparse approximation problem. A scalable system architecture is described, including IF neurons with a nonlinear firing function and current-based synapses to provide linear computation. A network of 18 neurons with 12 inputs is implemented on the RASP 2.9v chip, a Field Programmable Analog Array (FPAA) with directly programmable floating gate elements. The system uses over 1400 floating gates, making it the largest system programmed on an FPAA to date. The circuit successfully reproduced the outputs of a digital optimization program, converging to within 4.8% RMS and an objective cost only 1.7% higher on average. The active circuit consumed 559 μA of current at 2.4 V and converges on solutions in 25 μs, with measurement of the converged spike rate taking an additional 1 ms. Extrapolating the scaling trends to an N=1000 node system, the spiking LCA compares favorably with state-of-the-art digital solutions and with analog solutions using a non-spiking approach.
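The dynamics the chip implements can be sketched in discrete time as a non-spiking rate version of the LCA; the fixed point of the leaky-integrator competition solves the L1 sparse approximation problem. The paper's actual contribution — spiking IF neurons on analog hardware — is not reproduced in this sketch:

```python
import numpy as np

def lca(Phi, y, lam=0.05, dt_over_tau=0.1, n_steps=3000):
    """Non-spiking rate sketch of the Locally Competitive Algorithm. The
    fixed point solves min_a 0.5*||y - Phi a||^2 + lam*||a||_1."""
    b = Phi.T @ y                               # feed-forward drive
    G = Phi.T @ Phi - np.eye(Phi.shape[1])      # lateral inhibition
    u = np.zeros(Phi.shape[1])                  # membrane potentials
    for _ in range(n_steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # thresholded rates
        u += dt_over_tau * (b - u - G @ a)      # leaky integration
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

rng = np.random.default_rng(5)
Phi = rng.standard_normal((30, 60))
Phi /= np.linalg.norm(Phi, axis=0)
a_true = np.zeros(60); a_true[[2, 20, 44]] = [1.2, -1.0, 0.8]
y = Phi @ a_true
a_hat = lca(Phi, y)
```

Only neurons whose potentials exceed the threshold inhibit the others — the "local competition" that drives most coefficients exactly to zero.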
NASA Astrophysics Data System (ADS)
Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Li, Xiang; Yan, Ruqiang
2016-04-01
Fault information of aero-engine bearings presents two particular phenomena, i.e., waveform distortion and impulsive feature frequency band dispersion, which lead to a challenging problem for current techniques of bearing fault diagnosis. Moreover, although much progress in sparse representation theory has been made in feature extraction of fault information, the theory also confronts inevitable performance degradation because relatively weak fault information does not have sufficiently prominent and sparse representations. Therefore, a novel nonlocal sparse model (coined NLSM) and its algorithm framework are proposed in this paper, which go beyond simple sparsity by introducing more intrinsic structures of feature information. This work exploits the underlying prior that feature information exhibits nonlocal self-similarity, by clustering similar signal fragments and stacking them together into groups. Within this framework, the prior information is transformed into a regularization term, and a sparse optimization problem, which can be solved by the block coordinate descent (BCD) method, is formulated. Additionally, an adaptive structural clustering sparse dictionary learning technique, which utilizes k-Nearest-Neighbor (kNN) clustering and principal component analysis (PCA) learning, is adopted to further enable sufficient sparsity of feature information. Moreover, the selection rule for the regularization parameter and the computational complexity are described in detail. The performance of the proposed framework is evaluated through numerical experiments, and its superiority with respect to the state-of-the-art method in the field is demonstrated on vibration signals from an experimental aircraft engine bearing rig.
Uniform Recovery Bounds for Structured Random Matrices in Corrupted Compressed Sensing
NASA Astrophysics Data System (ADS)
Zhang, Peng; Gan, Lu; Ling, Cong; Sun, Sumei
2018-04-01
We study the problem of recovering an $s$-sparse signal $\mathbf{x}^{\star}\in\mathbb{C}^n$ from corrupted measurements $\mathbf{y} = \mathbf{A}\mathbf{x}^{\star}+\mathbf{z}^{\star}+\mathbf{w}$, where $\mathbf{z}^{\star}\in\mathbb{C}^m$ is a $k$-sparse corruption vector whose nonzero entries may be arbitrarily large and $\mathbf{w}\in\mathbb{C}^m$ is a dense noise with bounded energy. The aim is to exactly and stably recover the sparse signal with tractable optimization programs. In this paper, we prove the uniform recovery guarantee of this problem for two classes of structured sensing matrices. The first class can be expressed as the product of a unit-norm tight frame (UTF), a random diagonal matrix, and a bounded columnwise orthonormal matrix (e.g., a partial random circulant matrix). When the UTF is bounded (i.e., $\mu(\mathbf{U})\sim 1/\sqrt{m}$), we prove that with high probability one can recover an $s$-sparse signal exactly and stably by $l_1$ minimization programs even if the measurements are corrupted by a sparse vector, provided $m = \mathcal{O}(s \log^2 s \log^2 n)$ and the sparsity level $k$ of the corruption is a constant fraction of the total number of measurements. The second class considers a randomly sub-sampled orthogonal matrix (e.g., a random Fourier matrix). We prove the uniform recovery guarantee provided that the corruption is sparse in a certain sparsifying domain. Numerous simulation results are also presented to verify and complement the theoretical results.
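A simple way to see the recovery program at work: stack the identity next to A so the corruption gets its own sparse coefficients, then solve the resulting l1 problem. The sketch below uses plain ISTA on a small real-valued instance — an illustrative solver, not the exact l1 programs the paper analyzes:

```python
import numpy as np

def ista_corrupted(A, y, lam=0.02, n_iter=4000):
    """Jointly estimate a sparse signal x and sparse corruption z by running
    ISTA on the stacked model y = [A  I] [x; z], with an l1 penalty on both."""
    m, n = A.shape
    B = np.hstack([A, np.eye(m)])             # corruption gets identity atoms
    step = 1.0 / np.linalg.norm(B, 2) ** 2
    w = np.zeros(n + m)
    for _ in range(n_iter):
        w -= step * (B.T @ (B @ w - y))
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)
    return w[:n], w[n:]

rng = np.random.default_rng(6)
A = rng.standard_normal((80, 160)) / np.sqrt(80)   # ~unit-norm columns
x_true = np.zeros(160); x_true[[10, 70, 130]] = [3.0, -2.0, 2.5]
z_true = np.zeros(80); z_true[[4, 40, 77]] = [5.0, -6.0, 4.0]
y = A @ x_true + z_true                             # sparsely corrupted data
x_hat, z_hat = ista_corrupted(A, y)
```

Even with arbitrarily large corruption entries, the identity atoms absorb the outliers while the signal support is recovered.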
C++ Tensor Toolbox user manual.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plantenga, Todd D.; Kolda, Tamara Gibson
2012-04-01
The C++ Tensor Toolbox is a software package for computing tensor decompositions. It is based on the Matlab Tensor Toolbox, and is particularly optimized for sparse data sets. This user manual briefly overviews tensor decomposition mathematics, software capabilities, and installation of the package. Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to network analysis. The Tensor Toolbox provides classes for manipulating dense, sparse, and structured tensors in C++. The Toolbox compiles into libraries and is intended for use with custom applications written by users.
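The coordinate (COO) storage scheme such toolboxes use for sparse N-way arrays is easy to illustrate; the class below is a hypothetical Python sketch of the idea, not the Toolbox's C++ API:

```python
import numpy as np

class SparseTensor:
    """Minimal coordinate-format (COO) sparse N-way array: store only the
    indices and values of the nonzero entries."""
    def __init__(self, coords, values, shape):
        self.coords = np.asarray(coords, dtype=int)    # (nnz, N) indices
        self.values = np.asarray(values, dtype=float)  # (nnz,) entries
        self.shape = tuple(shape)

    def to_dense(self):
        T = np.zeros(self.shape)
        T[tuple(self.coords.T)] = self.values
        return T

    def unfold(self, mode):
        """Mode-n matricization, the basic operation behind CP/Tucker fits."""
        return np.moveaxis(self.to_dense(), mode, 0).reshape(self.shape[mode], -1)

# a 2 x 2 x 3 tensor with two nonzero entries
T = SparseTensor(coords=[[0, 1, 2], [1, 0, 0]], values=[3.0, -1.0],
                 shape=(2, 2, 3))
dense = T.to_dense()
```

For data sets where almost all entries are zero, this representation is what makes decomposition algorithms tractable in memory and time.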
Wang, Yikai; Kang, Jian; Kemmer, Phebe B.; Guo, Ying
2016-01-01
Currently, network-oriented analysis of fMRI data has become an important tool for understanding brain organization and brain networks. Among the range of network modeling methods, partial correlation has shown great promises in accurately detecting true brain network connections. However, the application of partial correlation in investigating brain connectivity, especially in large-scale brain networks, has been limited so far due to the technical challenges in its estimation. In this paper, we propose an efficient and reliable statistical method for estimating partial correlation in large-scale brain network modeling. Our method derives partial correlation based on the precision matrix estimated via Constrained L1-minimization Approach (CLIME), which is a recently developed statistical method that is more efficient and demonstrates better performance than the existing methods. To help select an appropriate tuning parameter for sparsity control in the network estimation, we propose a new Dens-based selection method that provides a more informative and flexible tool to allow the users to select the tuning parameter based on the desired sparsity level. Another appealing feature of the Dens-based method is that it is much faster than the existing methods, which provides an important advantage in neuroimaging applications. Simulation studies show that the Dens-based method demonstrates comparable or better performance with respect to the existing methods in network estimation. We applied the proposed partial correlation method to investigate resting state functional connectivity using rs-fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC) study. Our results show that partial correlation analysis removed considerable between-module marginal connections identified by full correlation analysis, suggesting these connections were likely caused by global effects or common connection to other nodes. 
Based on partial correlation, we find that the most significant direct connections are between homologous brain locations in the left and right hemisphere. When comparing partial correlation derived under different sparse tuning parameters, an important finding is that the sparse regularization has more shrinkage effects on negative functional connections than on positive connections, which supports previous findings that many of the negative brain connections are due to non-neurophysiological effects. An R package “DensParcorr” can be downloaded from CRAN for implementing the proposed statistical methods. PMID:27242395
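The mapping from a precision matrix to partial correlations, the quantity at the center of this method, is a one-liner. The toy below uses a known 3-node chain precision matrix in place of a CLIME estimate to show why partial correlation removes indirect connections:

```python
import numpy as np

def partial_corr(Theta):
    """Partial correlations from a precision matrix Theta:
    rho_ij = -Theta_ij / sqrt(Theta_ii * Theta_jj)."""
    d = np.sqrt(np.diag(Theta))
    R = -Theta / np.outer(d, d)
    np.fill_diagonal(R, 1.0)
    return R

# 3-node chain: nodes 0 and 2 interact only through node 1
Theta = np.array([[1.0, 0.4, 0.0],
                  [0.4, 1.0, 0.4],
                  [0.0, 0.4, 1.0]])
R = partial_corr(Theta)
Sigma = np.linalg.inv(Theta)               # implied covariance
marginal_02 = Sigma[0, 2] / np.sqrt(Sigma[0, 0] * Sigma[2, 2])
# R[0, 2] is exactly 0 although the marginal correlation between 0 and 2 is not
```

This is precisely the effect reported above: full (marginal) correlation shows a 0-2 connection induced by the shared neighbor, while partial correlation removes it.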
Locality preserving non-negative basis learning with graph embedding.
Ghanbari, Yasser; Herrington, John; Gur, Ruben C; Schultz, Robert T; Verma, Ragini
2013-01-01
The high dimensionality of connectivity networks necessitates the development of methods identifying the connectivity building blocks that not only characterize the patterns of brain pathology but also reveal representative population patterns. In this paper, we present a non-negative component analysis framework for learning localized and sparse sub-network patterns of connectivity matrices by decomposing them into two sets of discriminative and reconstructive bases. In order to obtain components that are designed towards extracting population differences, we exploit the geometry of the population by using a graph-theoretical scheme that imposes locality-preserving properties as well as maintaining the underlying distance between distant nodes in the original and the projected space. The effectiveness of the proposed framework is demonstrated by applying it to two clinical studies using connectivity matrices derived from DTI to study a population of subjects with ASD, as well as a developmental study of structural brain connectivity that extracts gender differences.
Locality-preserving sparse representation-based classification in hyperspectral imagery
NASA Astrophysics Data System (ADS)
Gao, Lianru; Yu, Haoyang; Zhang, Bing; Li, Qingting
2016-10-01
This paper proposes to combine locality-preserving projections (LPP) and sparse representation (SR) for hyperspectral image classification. The LPP is first used to reduce the dimensionality of all the training and testing data by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold, where the high-dimensional data lies. Then, SR codes the projected testing pixels as sparse linear combinations of all the training samples to classify the testing pixels by evaluating which class leads to the minimum approximation error. The integration of LPP and SR represents an innovative contribution to the literature. The proposed approach, called locality-preserving SR-based classification, addresses the imbalance between high dimensionality of hyperspectral data and the limited number of training samples. Experimental results on three real hyperspectral data sets demonstrate that the proposed approach outperforms the original counterpart, i.e., SR-based classification.
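The classification rule — code a test pixel over all training samples, then assign the class whose atoms reconstruct it with minimum residual — can be sketched as below. Ridge-regularized coding stands in for the l1 sparse coding step (a common simplification, not the paper's exact SR solver), and toy subspace data replaces the hyperspectral pixels:

```python
import numpy as np

def src_classify(D, labels, x, lam=0.01):
    """Code x over the whole training dictionary D (ridge coding as a
    stand-in for l1 coding), then pick the class whose own atoms give the
    smallest reconstruction residual."""
    n = D.shape[1]
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ x)
    best, best_err = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        err = np.linalg.norm(x - D[:, mask] @ alpha[mask])
        if err < best_err:
            best, best_err = c, err
    return best

# toy "pixels": two classes lying near different 2-D subspaces of R^10
rng = np.random.default_rng(2)
B0, B1 = rng.standard_normal((10, 2)), rng.standard_normal((10, 2))
D = np.hstack([B0 @ rng.standard_normal((2, 20)),
               B1 @ rng.standard_normal((2, 20))])
D /= np.linalg.norm(D, axis=0)
labels = np.array([0] * 20 + [1] * 20)
x = B0 @ np.array([1.0, -0.5])          # a class-0 test pixel
pred = src_classify(D, labels, x)
```

In the paper, the dictionary columns are LPP-projected training pixels, so the same residual comparison runs in the reduced-dimension space.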
Efficient image enhancement using sparse source separation in the Retinex theory
NASA Astrophysics Data System (ADS)
Yoon, Jongsu; Choi, Jangwon; Choe, Yoonsik
2017-11-01
Color constancy is the feature of the human vision system (HVS) that ensures the relative constancy of the perceived color of objects under varying illumination conditions. The Retinex theory of machine vision systems is based on the HVS. Among Retinex algorithms, the physics-based algorithms are efficient; however, they generally do not satisfy the local characteristics of the original Retinex theory because they eliminate global illumination from their optimization. We apply the sparse source separation technique to the Retinex theory to present a physics-based algorithm that satisfies the locality characteristic of the original Retinex theory. Previous Retinex algorithms have limited use in image enhancement because the total variation Retinex results in an overly enhanced image and the sparse source separation Retinex cannot completely restore the original image. In contrast, our proposed method preserves the image edge and can very nearly replicate the original image without any special operation.
Adaptive Sparse Representation for Source Localization with Gain/Phase Errors
Sun, Ke; Liu, Yimin; Meng, Huadong; Wang, Xiqin
2011-01-01
Sparse representation (SR) algorithms can be implemented for high-resolution direction of arrival (DOA) estimation. Additionally, SR can effectively separate coherent signal sources because the spectrum estimation is based on an optimization technique, such as L1 norm minimization, rather than on subspace orthogonality. However, in an actual source localization scenario, an unknown gain/phase error between the array sensors is inevitable. Due to this nonideal factor, the predefined overcomplete basis mismatches the actual array manifold, so the estimation performance of SR is degraded. In this paper, an adaptive SR algorithm is proposed to improve robustness with respect to the gain/phase error, where the overcomplete basis is dynamically adjusted using multiple snapshots and the sparse solution is adaptively acquired to match the actual scenario. The simulation results demonstrate the estimation robustness to the gain/phase error using the proposed method. PMID:22163875
Zhao, Tuo; Liu, Han
2016-01-01
We propose an accelerated path-following iterative shrinkage thresholding algorithm (APISTA) for solving high dimensional sparse nonconvex learning problems. The main difference between APISTA and the path-following iterative shrinkage thresholding algorithm (PISTA) is that APISTA exploits an additional coordinate descent subroutine to boost the computational performance. Such a modification, though simple, has profound impact: APISTA not only enjoys the same theoretical guarantee as that of PISTA, i.e., APISTA attains a linear rate of convergence to a unique sparse local optimum with good statistical properties, but also significantly outperforms PISTA in empirical benchmarks. As an application, we apply APISTA to solve a family of nonconvex optimization problems motivated by estimating sparse semiparametric graphical models. APISTA allows us to obtain new statistical recovery results which do not exist in the existing literature. Thorough numerical results are provided to back up our theory. PMID:28133430
Sparse Covariance Matrix Estimation by DCA-Based Algorithms.
Phan, Duy Nhat; Le Thi, Hoai An; Dinh, Tao Pham
2017-11-01
This letter proposes a novel approach using l0-norm regularization for the sparse covariance matrix estimation (SCME) problem. The objective function of the SCME problem is composed of a nonconvex part and the l0 term, which is discontinuous and difficult to tackle. Appropriate DC (difference of convex functions) approximations of the l0-norm are used that result in approximate SCME problems that are still nonconvex. DC programming and DCA (DC algorithm), powerful tools in the nonconvex programming framework, are investigated. Two DC formulations are proposed and corresponding DCA schemes developed. Two applications of the SCME problem are considered: classification via sparse quadratic discriminant analysis and portfolio optimization. A careful empirical experiment is performed on simulated and real data sets to study the performance of the proposed algorithms. Numerical results show their efficiency and their superiority compared with seven state-of-the-art methods.
Nonlinear Estimation With Sparse Temporal Measurements
2016-09-01
The Kalman filter, the extended Kalman filter (EKF), and the unscented Kalman filter (UKF) are commonly used in practical applications. The Kalman filter is an optimal estimator for linear systems; the EKF and UKF are sub-optimal approximations of the Kalman filter. The EKF uses a first-order Taylor series ... The propagated covariance is compared for similarity with a Monte Carlo propagation, and the similarity of the covariance matrices is shown to predict filter ...
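For reference, the linear Kalman filter that the EKF and UKF approximate is a two-step predict/update recursion. The sketch below tracks a constant-velocity target from noisy position samples (an illustrative toy, not the report's sparse-measurement setting):

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the linear Kalman filter."""
    x = F @ x                         # state prediction
    P = F @ P @ F.T + Q               # covariance prediction
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z - H @ x)           # measurement update
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# constant-velocity target observed through noisy position measurements
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
H = np.array([[1.0, 0.0]])              # measure position only
Q = 1e-4 * np.eye(2)
R = np.array([[0.01]])
rng = np.random.default_rng(7)
x_est, P = np.zeros(2), 10.0 * np.eye(2)
true_pos, true_vel = 0.0, 1.0
for _ in range(50):
    true_pos += true_vel * dt
    z = np.array([true_pos + 0.1 * rng.standard_normal()])
    x_est, P = kalman_step(x_est, P, z, F, H, Q, R)
```

The EKF replaces F and H with Jacobians of nonlinear models; the UKF instead propagates a set of sigma points through the nonlinearities.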
Robust Group Sparse Beamforming for Multicast Green Cloud-RAN With Imperfect CSI
NASA Astrophysics Data System (ADS)
Shi, Yuanming; Zhang, Jun; Letaief, Khaled B.
2015-09-01
In this paper, we investigate the network power minimization problem for the multicast cloud radio access network (Cloud-RAN) with imperfect channel state information (CSI). The key observation is that network power minimization can be achieved by adaptively selecting active remote radio heads (RRHs) via controlling the group-sparsity structure of the beamforming vector. However, this yields a non-convex combinatorial optimization problem, for which we propose a three-stage robust group sparse beamforming algorithm. In the first stage, a quadratic variational formulation of the weighted mixed l1/l2-norm is proposed to induce the group-sparsity structure in the aggregated beamforming vector, which indicates those RRHs that can be switched off. A perturbed alternating optimization algorithm is then proposed to solve the resultant non-convex group-sparsity inducing optimization problem by exploiting its convex substructures. In the second stage, we propose a PhaseLift-based algorithm to solve the feasibility problem with a given active RRH set, which helps determine the active RRHs. Finally, the semidefinite relaxation (SDR) technique is adopted to determine the robust multicast beamformers. Simulation results demonstrate the convergence of the perturbed alternating optimization algorithm, as well as the effectiveness of the proposed algorithm in minimizing the network power consumption for multicast Cloud-RAN.
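The group-sparsity mechanism behind the first stage rests on the proximal operator of the mixed l1/l2 norm, which shrinks each RRH's block of beamforming coefficients and switches weak blocks off entirely. A minimal sketch of that operator (not the paper's full three-stage algorithm):

```python
import numpy as np

def group_soft_threshold(v, groups, lam):
    """Proximal operator of the mixed l1/l2 (group lasso) norm: shrink each
    group's l2 norm by lam, zeroing whole groups whose norm falls below it.
    This is the mechanism that lets group-sparse beamforming deactivate
    entire RRHs rather than individual coefficients."""
    out = np.zeros_like(v)
    for g in groups:
        nrm = np.linalg.norm(v[g])
        if nrm > lam:
            out[g] = (1.0 - lam / nrm) * v[g]
    return out

# two "RRHs" with two beamforming coefficients each
v = np.array([3.0, 4.0, 0.1, -0.2])
groups = [[0, 1], [2, 3]]
w = group_soft_threshold(v, groups, lam=1.0)
# first block (norm 5) is shrunk; second block (norm ~0.22) is zeroed,
# i.e., that RRH would be switched off
```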
Li, Xiang; Yang, Zhibo; Chen, Xuefeng
2014-01-01
The active structural health monitoring (SHM) approach for the complex composite laminate structures of wind turbine blades (WTBs) addresses the important and complicated problem of signal noise. After illustrating the wind energy industry's development perspectives and its crucial requirement for SHM, an improved redundant second generation wavelet transform (IRSGWT) pre-processing algorithm based on neighboring coefficients is introduced for denoising feeble signals. The method avoids the drawbacks of conventional wavelet methods, which lose information in transforms, and the shortcomings of redundant second generation wavelet (RSGWT) denoising, which can lead to error propagation. For large-scale WTB composites, how to minimize the number of sensors while ensuring accuracy is also a key issue. A sparse sensor array optimization of composites for WTB applications is proposed that can reduce the number of transducers that must be used. Compared to a full sixteen-transducer array, the optimized eight-transducer configuration displays better accuracy than a non-optimized array in identifying the correct position of simulated damage (a mass load) on composite laminates with anisotropic characteristics. It can help to guarantee more flexible and qualified monitoring of the areas that more frequently suffer damage. The proposed methods are verified experimentally on specimens of carbon fiber reinforced resin composite laminates. PMID:24763210
Learning Efficient Sparse and Low Rank Models.
Sprechmann, P; Bronstein, A M; Sapiro, G
2015-09-01
Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the difficulty of their inclusion in discriminative learning scenarios. In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit process is used in lieu of iterative optimization. We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low rank models, derived from the iteration of proximal descent algorithms. These architectures learn to approximate the exact parsimonious representation at a fraction of the complexity of the standard optimization methods. We also show that appropriate training regimes allow parsimonious models to be naturally extended to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing, with several orders of magnitude speed-up compared to the exact optimization algorithms.
Chapter 2: Effects of fire on nonnative invasive plants and invasibility of wildland ecosystems
Kristin Zouhar; Jane Kapler Smith; Steve Sutherland
2008-01-01
Considerable experimental and theoretical work has been done on general concepts regarding nonnative species and disturbance, but experimental research on the effects of fire on nonnative invasive species is sparse. We begin this chapter by connecting fundamental concepts from the literature of invasion ecology to fire. Then we examine fire behavior characteristics,...
ERIC Educational Resources Information Center
Walford, Nigel
2007-01-01
Exchanges of population between supposedly "urban" and "rural" spaces have occurred throughout history as people migrate between areas with relatively densely and sparsely settled populations. However, comparatively little is known about whether the same small areas persistently contribute to the flow and what types of…
Service-Learning: Implications for Empathy and Community Engagement in Elementary School Children
ERIC Educational Resources Information Center
Scott, Katharine E.; Graham, James A.
2015-01-01
The literature on service-learning outcomes in pre-adolescent children is relatively sparse. Empathy (i.e., overall, cognitive, affective) and community engagement (i.e., connection to the community, civic awareness, civic efficacy) were assessed in 155 first, second, and fifth graders (n = 79 males; n = 76 females) using a pre/post design for a…
Anticipation-related brain connectivity in bipolar and unipolar depression: a graph theory approach
Manelis, Anna; Almeida, Jorge R. C.; Stiffler, Richelle; Lockovich, Jeanette C.; Aslam, Haris A.; Phillips, Mary L.
2016-01-01
Bipolar disorder is often misdiagnosed as major depressive disorder, which leads to inadequate treatment. Depressed individuals, compared with healthy control subjects, show increased expectation of negative outcomes. Due to increased impulsivity and risk for mania, however, depressed individuals with bipolar disorder may differ from those with major depressive disorder in neural mechanisms underlying anticipation processes. Graph theory methods for neuroimaging data analysis allow the identification of connectivity between multiple brain regions without prior model specification, and may help to identify neurobiological markers differentiating these disorders, thereby facilitating development of better therapeutic interventions. This study aimed to compare brain connectivity among regions involved in win/loss anticipation in depressed individuals with bipolar disorder (BDD) versus depressed individuals with major depressive disorder (MDD) versus healthy control subjects using graph theory methods. The study was conducted at the University of Pittsburgh Medical Center and included 31 BDD, 39 MDD, and 36 healthy control subjects. Participants were scanned while performing a number guessing reward task that included the periods of win and loss anticipation. We first identified the anticipatory network across all 106 participants by contrasting brain activation during all anticipation periods (win anticipation + loss anticipation) versus baseline, and win anticipation versus loss anticipation. Brain connectivity within the identified network was determined using the Independent Multiple sample Greedy Equivalence Search (IMaGES) and Linear non-Gaussian Orientation, Fixed Structure (LOFS) algorithms. Density of connections (the number of connections in the network), path length, and the global connectivity direction (‘top-down’ versus ‘bottom-up’) were compared across groups (BDD/MDD/healthy control subjects) and conditions (win/loss anticipation).
These analyses showed that loss anticipation was characterized by denser top-down fronto-striatal and fronto-parietal connectivity in healthy control subjects, by bottom-up striatal-frontal connectivity in MDD, and by sparse connectivity lacking fronto-striatal connections in BDD. Win anticipation was characterized by dense connectivity of medial frontal with striatal and lateral frontal cortical regions in BDD, by sparser bottom-up striatum-medial frontal cortex connectivity in MDD, and by sparse connectivity in healthy control subjects. In summary, this is the first study to demonstrate that BDD and MDD with comparable levels of current depression differed from each other and healthy control subjects in density of connections, connectivity path length, and connectivity direction as a function of win or loss anticipation. These findings suggest that different neurobiological mechanisms may underlie aberrant anticipation processes in BDD and MDD, and that distinct therapeutic strategies may be required for these individuals to improve coping strategies during expectation of positive and negative outcomes. PMID:27368345
Calibration of an agricultural-hydrological model (RZWQM2) using surrogate global optimization
Xi, Maolong; Lu, Dan; Gui, Dongwei; ...
2016-11-27
Robust calibration of an agricultural-hydrological model is critical for simulating crop yield and water quality and making reasonable agricultural management decisions. However, calibration of the agricultural-hydrological system models is challenging because of model complexity, the existence of strong parameter correlation, and significant computational requirements. Therefore, only a limited number of simulations can be allowed in any attempt to find a near-optimal solution within an affordable time, which greatly restricts the successful application of the model. The goal of this study is to locate the optimal solution of the Root Zone Water Quality Model (RZWQM2) given a limited simulation time, so as to improve the model simulation and help make rational and effective agricultural-hydrological decisions. To this end, we propose a computationally efficient global optimization procedure using sparse-grid based surrogates. We first used advanced sparse grid (SG) interpolation to construct a surrogate system of the actual RZWQM2, and then we calibrate the surrogate model using the global optimization algorithm, Quantum-behaved Particle Swarm Optimization (QPSO). As the surrogate model is a polynomial with fast evaluation, it can be efficiently evaluated a sufficiently large number of times during the optimization, which facilitates the global search. We calibrate seven model parameters against five years of yield, drain flow, and NO3-N loss data from a subsurface-drained corn-soybean field in Iowa. Results indicate that an accurate surrogate model can be created for the RZWQM2 with a relatively small number of SG points (i.e., RZWQM2 runs). Compared to the conventional QPSO algorithm, our surrogate-based optimization method can achieve a smaller objective function value and better calibration performance using fewer expensive RZWQM2 executions, which greatly improves computational efficiency.
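The surrogate-based calibration loop can be illustrated generically: sample the expensive model at a small design, fit a cheap surrogate, then spend the optimization budget on the surrogate. The sketch below substitutes a least-squares polynomial fit for the paper's sparse-grid interpolation and a dense random search for QPSO; the toy objective `expensive_model` is hypothetical, standing in for an RZWQM2 run.

```python
import numpy as np

def expensive_model(p):
    """Stand-in for a costly simulator (hypothetical, not RZWQM2)."""
    return (p[0] - 1.5) ** 2 + (p[1] + 0.5) ** 2 + 0.1 * np.sin(5 * p[0])

# 1. Evaluate the expensive model at a small design of sample points.
rng = np.random.default_rng(1)
samples = rng.uniform(-3, 3, size=(60, 2))
values = np.array([expensive_model(p) for p in samples])

# 2. Fit a cheap surrogate by least squares on a fixed feature basis
#    (the paper uses sparse-grid interpolation instead).
def features(p):
    x, y = p
    return np.array([1.0, x, y, x * y, x * x, y * y, np.sin(5 * x)])

A = np.array([features(p) for p in samples])
coef, *_ = np.linalg.lstsq(A, values, rcond=None)
surrogate = lambda p: features(p) @ coef

# 3. Optimize the surrogate with many cheap evaluations (dense random
#    search here; the paper uses quantum-behaved particle swarm optimization).
candidates = rng.uniform(-3, 3, size=(20000, 2))
best = candidates[np.argmin([surrogate(p) for p in candidates])]
```

The expensive model is only called 60 times; the 20,000 search evaluations all hit the polynomial surrogate, which is the cost structure the abstract exploits.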
Behar, Vera; Adam, Dan
2005-12-01
An effective aperture approach is used for optimization of a sparse synthetic transmit aperture (STA) imaging system with coded excitation and frequency division. A new two-stage algorithm is proposed for optimization of both the positions of the transmit elements and the weights of the receive elements. In order to increase the signal-to-noise ratio in a synthetic aperture system, temporal encoding of the excitation signals is employed. When comparing the excitation by linear frequency modulation (LFM) signals and phase shift key modulation (PSKM) signals, the analysis shows that chirps are better for excitation, since at the output of a compression filter the sidelobes generated are much smaller than those produced by the binary PSKM signals. Here, an implementation of fast STA imaging is studied, using spatial encoding with frequency division of the LFM signals. The proposed system employs a 64-element array with only four active elements used during transmit. The two-dimensional point spread function (PSF) produced by such a sparse STA system is compared to the PSF produced by an equivalent phased array system, using the Field II simulation program. The analysis demonstrates the superiority of the new sparse STA imaging system while using coded excitation and frequency division. Compared to a conventional phased array imaging system, this system acquires images of equivalent quality 60 times faster, when the transmit elements are fired in pairs consecutively and the power level used during transmit is very low. The fastest acquisition time is achieved when all transmit elements are fired simultaneously, which improves detectability, but at the cost of a slight degradation of the axial resolution. In real-time implementation, however, it must be borne in mind that the frame rate of an STA imaging system depends not only on the acquisition time of the data but also on the processing time needed for image reconstruction.
Compared with phased array imaging, a significant increase in the frame rate of an STA imaging system is possible only if an equally time-efficient algorithm is used for image reconstruction.
An Improved Sparse Representation over Learned Dictionary Method for Seizure Detection.
Li, Junhui; Zhou, Weidong; Yuan, Shasha; Zhang, Yanli; Li, Chengcheng; Wu, Qi
2016-02-01
Automatic seizure detection has played an important role in the monitoring, diagnosis and treatment of epilepsy. In this paper, a patient-specific method is proposed for seizure detection in long-term intracranial electroencephalogram (EEG) recordings. This seizure detection method is based on sparse representation with online dictionary learning and an elastic net constraint. The online learned dictionary can sparsely represent the testing samples more accurately, and the elastic net constraint, which combines the l1-norm and l2-norm, not only makes the coefficients sparse but also avoids the over-fitting problem. First, the EEG signals are preprocessed using wavelet filtering and differential filtering, and the kernel function is applied to make the samples closer to linearly separable. Then the dictionaries of seizure and nonseizure are respectively learned from original ictal and interictal training samples with the online dictionary optimization algorithm to compose the training dictionary. After that, the test samples are sparsely coded over the learned dictionary and the residuals associated with the ictal and interictal sub-dictionaries are calculated, respectively. Eventually, the test samples are classified into two distinct categories, seizure or nonseizure, by comparing the reconstruction residuals. An average segment-based sensitivity of 95.45%, specificity of 99.08%, and event-based sensitivity of 94.44%, with a false detection rate of 0.23/h and an average latency of -5.14 s, have been achieved with our proposed method.
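The residual-comparison step of such sparse-representation classifiers is easy to sketch. Below, two fixed random dictionaries stand in for the learned ictal/interictal dictionaries, and a plain ISTA loop stands in for the online-dictionary and elastic-net machinery of the paper; a synthetic test sample built from the "seizure" dictionary is then labeled by whichever sub-dictionary reconstructs it with the smaller residual.

```python
import numpy as np

def sparse_code(D, y, lam=0.1, n_iter=200):
    """ISTA-style sparse coding of y over dictionary D (an illustrative
    stand-in for the paper's online-dictionary / elastic-net solver)."""
    L = np.linalg.norm(D, 2) ** 2
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = x - (D.T @ (D @ x - y)) / L           # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(2)
D_seizure = rng.standard_normal((30, 15))         # 'ictal' sub-dictionary
D_normal = rng.standard_normal((30, 15))          # 'interictal' sub-dictionary
y = D_seizure[:, :3] @ np.array([1.0, -0.5, 2.0]) # synthetic ictal sample

# Classify by comparing reconstruction residuals over the two sub-dictionaries.
residuals = {}
for label, D in [("seizure", D_seizure), ("nonseizure", D_normal)]:
    x = sparse_code(D, y)
    residuals[label] = np.linalg.norm(y - D @ x)
prediction = min(residuals, key=residuals.get)
```

Since the sample lies in the span of a few "seizure" atoms, that sub-dictionary yields a much smaller residual, which is exactly the decision rule described in the abstract.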
Collaborative sparse priors for multi-view ATR
NASA Astrophysics Data System (ADS)
Li, Xuelu; Monga, Vishal
2018-04-01
Recent work has seen a surge of sparse representation based classification (SRC) methods applied to automatic target recognition problems. While traditional SRC approaches use the l0 or l1 norm to quantify sparsity, spike and slab priors have established themselves as the gold standard for providing general tunable sparse structures on vectors. In this work, we employ collaborative spike and slab priors that can be applied to matrices to encourage sparsity for the problem of multi-view ATR. That is, target images captured from multiple views are expanded in terms of a training dictionary multiplied with a coefficient matrix. Ideally, for a test image set comprising multiple views of a target, coefficients corresponding to its identifying class are expected to be active, while others should be zero, i.e. the coefficient matrix is naturally sparse. We develop a new approach to solve the optimization problem that estimates the sparse coefficient matrix jointly with the sparsity-inducing parameters in the collaborative prior. ATR problems are investigated on the mid-wave infrared (MWIR) database made available by the US Army Night Vision and Electronic Sensors Directorate, which has a rich collection of views. Experimental results show that the proposed joint prior and coefficient estimation method (JPCEM) can (1) improve accuracy when multiple views rather than a single view are invoked, and (2) outperform state-of-the-art alternatives, particularly when training imagery is limited.
Effects of topology on network evolution
NASA Astrophysics Data System (ADS)
Oikonomou, Panos; Cluzel, Philippe
2006-08-01
The ubiquity of scale-free topology in nature raises the question of whether this particular network design confers an evolutionary advantage. A series of studies has identified key principles controlling the growth and the dynamics of scale-free networks. Here, we use neuron-based networks of boolean components as a framework for modelling a large class of dynamical behaviours in both natural and artificial systems. Applying a training algorithm, we characterize how networks with distinct topologies evolve towards a pre-established target function through a process of random mutations and selection. We find that homogeneous random networks and scale-free networks exhibit drastically different evolutionary paths. Whereas homogeneous random networks accumulate neutral mutations and evolve by sparse punctuated steps, scale-free networks evolve rapidly and continuously. Remarkably, this latter property is robust to variations of the degree exponent. In contrast, homogeneous random networks require a specific tuning of their connectivity to optimize their ability to evolve. These results highlight an organizing principle that governs the evolution of complex networks and that can improve the design of engineered systems.
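The evolutionary procedure described above (random mutations filtered by selection toward a target function) can be sketched for a toy Boolean network. The topology here is homogeneous random with fixed in-degree, and the network sizes, update counts, and the hill-climbing acceptance rule are illustrative simplifications of the paper's training algorithm.

```python
import numpy as np

rng = np.random.default_rng(6)
N, K = 16, 3   # nodes and inputs per node; sizes are illustrative

def random_network():
    """Random Boolean network: each node reads K inputs via a truth table."""
    inputs = np.array([rng.choice(N, K, replace=False) for _ in range(N)])
    tables = rng.integers(0, 2, size=(N, 2 ** K))
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous update: each node looks up its truth-table entry."""
    idx = (state[inputs] * (2 ** np.arange(K))).sum(axis=1)
    return tables[np.arange(N), idx]

def fitness(net, target):
    """Fraction of bits matching the target after a few steps from a fixed start."""
    state = np.zeros(N, dtype=int); state[0] = 1
    for _ in range(5):
        state = step(state, *net)
    return (state == target).mean()

# Evolve by single-bit truth-table mutations, keeping any non-worse mutant.
target = rng.integers(0, 2, N)
net = random_network()
best = fitness(net, target)
for _ in range(300):
    inputs, tables = net
    new_tables = tables.copy()
    new_tables[rng.integers(N), rng.integers(2 ** K)] ^= 1  # flip one entry
    cand = (inputs, new_tables)
    f = fitness(cand, target)
    if f >= best:
        net, best = cand, f
```

Repeating the same loop over ensembles of scale-free versus homogeneous topologies is, in spirit, the comparison the paper performs.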
A combinatorial model for dentate gyrus sparse coding
Severa, William; Parekh, Ojas; James, Conrad D.; ...
2016-12-29
The dentate gyrus forms a critical link between the entorhinal cortex and CA3 by providing a sparse version of the signal. Concurrent with this increase in sparsity, a widely accepted theory suggests the dentate gyrus performs pattern separation—similar inputs yield decorrelated outputs. Although an active region of study and theory, few logically rigorous arguments detail the dentate gyrus’s (DG) coding. We suggest a theoretically tractable, combinatorial model for this action. The model provides formal methods for a highly redundant, arbitrarily sparse, and decorrelated output signal. To explore the value of this model framework, we assess how suitable it is for two notable aspects of DG coding: how it can handle the highly structured grid cell representation in the input entorhinal cortex region and the presence of adult neurogenesis, which has been proposed to produce a heterogeneous code in the DG. We find tailoring the model to grid cell input yields expansion parameters consistent with the literature. In addition, the heterogeneous coding reflects activity gradation observed experimentally. Lastly, we connect this approach with more conventional binary threshold neural circuit models via a formal embedding.
Blind Compressed Image Watermarking for Noisy Communication Channels
2015-10-26
Visual Tracking via Sparse and Local Linear Coding.
Wang, Guofeng; Qin, Xueying; Zhong, Fan; Liu, Yue; Li, Hongbo; Peng, Qunsheng; Yang, Ming-Hsuan
2015-11-01
The state search is an important component of any object tracking algorithm. Numerous algorithms have been proposed, but stochastic sampling methods (e.g., particle filters) are arguably one of the most effective approaches. However, the discretization of the state space complicates the search for the precise object location. In this paper, we propose a novel tracking algorithm that extends the state space of particle observations from discrete to continuous. The solution is determined accurately via iterative linear coding between two convex hulls. The algorithm is modeled by an optimal function, which can be efficiently solved by either convex sparse coding or locality constrained linear coding. The algorithm is also very flexible and can be combined with many generic object representations. Thus, we first use sparse representation to achieve an efficient searching mechanism of the algorithm and demonstrate its accuracy. Next, two other object representation models, i.e., least soft-threshold squares and adaptive structural local sparse appearance, are implemented with improved accuracy to demonstrate the flexibility of our algorithm. Qualitative and quantitative experimental results demonstrate that the proposed tracking algorithm performs favorably against the state-of-the-art methods in dynamic scenes.
Classification of Clouds in Satellite Imagery Using Adaptive Fuzzy Sparse Representation.
Jin, Wei; Gong, Fei; Zeng, Xingbin; Fu, Randi
2016-12-16
Automatic cloud detection and classification using satellite cloud imagery have various meteorological applications such as weather forecasting and climate monitoring. Cloud pattern analysis is one of the research hotspots recently. Since satellites sense the clouds remotely from space, and different cloud types often overlap and convert into each other, there must be some fuzziness and uncertainty in satellite cloud imagery. Satellite observation is susceptible to noises, while traditional cloud classification methods are sensitive to noises and outliers; it is hard for traditional cloud classification methods to achieve reliable results. To deal with these problems, a satellite cloud classification method using adaptive fuzzy sparse representation-based classification (AFSRC) is proposed. Firstly, by defining adaptive parameters related to attenuation rate and critical membership, an improved fuzzy membership is introduced to accommodate the fuzziness and uncertainty of satellite cloud imagery; secondly, by effective combination of the improved fuzzy membership function and sparse representation-based classification (SRC), atoms in training dictionary are optimized; finally, an adaptive fuzzy sparse representation classifier for cloud classification is proposed. Experiment results on FY-2G satellite cloud image show that, the proposed method not only improves the accuracy of cloud classification, but also has strong stability and adaptability with high computational efficiency.
An efficient dictionary learning algorithm and its application to 3-D medical image denoising.
Li, Shutao; Fang, Leyuan; Yin, Haitao
2012-02-01
In this paper, we propose an efficient dictionary learning algorithm for sparse representation of given data and suggest a way to apply this algorithm to 3-D medical image denoising. Our learning approach is composed of two main parts: sparse coding and dictionary updating. On the sparse coding stage, an efficient algorithm named multiple clusters pursuit (MCP) is proposed. The MCP first applies a dictionary structuring strategy to cluster the atoms with high coherence together, and then employs a multiple-selection strategy to select several competitive atoms at each iteration. These two strategies can greatly reduce the computation complexity of the MCP and help it obtain a better sparse solution. On the dictionary updating stage, an alternating optimization that efficiently approximates the singular value decomposition is introduced. Furthermore, in the 3-D medical image denoising application, a joint 3-D operation is proposed that exploits the learning capability of the presented algorithm to simultaneously capture the correlations within each slice and the correlations across the nearby slices, thereby obtaining better denoising results. The experiments on both synthetically generated data and real 3-D medical images demonstrate that the proposed approach has superior performance compared to some well-known methods. © 2011 IEEE
NASA Astrophysics Data System (ADS)
Kaporin, I. E.
2012-02-01
In order to precondition a sparse symmetric positive definite matrix, its approximate inverse is examined, which is represented as the product of two sparse mutually adjoint triangular matrices. In this way, the solution of the corresponding system of linear algebraic equations (SLAE) by applying the preconditioned conjugate gradient method (CGM) is reduced to performing only elementary vector operations and calculating sparse matrix-vector products. A method for constructing the above preconditioner is described and analyzed. The triangular factor has a fixed sparsity pattern and is optimal in the sense that the preconditioned matrix has a minimum K-condition number. The use of polynomial preconditioning based on Chebyshev polynomials makes it possible to considerably reduce the amount of scalar product operations (at the cost of an insignificant increase in the total number of arithmetic operations). The possibility of an efficient massively parallel implementation of the resulting method for solving SLAEs is discussed. For a sequential version of this method, the results obtained by solving 56 test problems from the Florida sparse matrix collection (which are large-scale and ill-conditioned) are presented. These results show that the method is highly reliable and has low computational costs.
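The solver structure described above reduces to the standard preconditioned conjugate gradient iteration, where each step needs only matrix-vector products and vector updates. The sketch below uses a simple Jacobi (diagonal) preconditioner in place of the factorized approximate inverse; with the paper's preconditioner, `M_inv @ r` would instead be two sparse triangular matrix-vector products.

```python
import numpy as np

def preconditioned_cg(A, b, M_inv, tol=1e-8, max_iter=500):
    """Conjugate gradients with a preconditioner applied as M_inv @ r.

    M_inv is a plain matrix here; in the paper it is the product of two
    sparse mutually adjoint triangular factors, so applying it costs only
    sparse mat-vecs.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            return x
        z_new = M_inv @ r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

rng = np.random.default_rng(3)
B = rng.standard_normal((40, 40))
A = B @ B.T + 40 * np.eye(40)          # symmetric positive definite test matrix
b = rng.standard_normal(40)
M_inv = np.diag(1.0 / np.diag(A))      # Jacobi preconditioner as a stand-in
x = preconditioned_cg(A, b, M_inv)
```

A better preconditioner (smaller K-condition number of `M_inv @ A`) reduces the iteration count, which is what the fixed-sparsity triangular factor in the abstract is optimized for.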
Nonlocal low-rank and sparse matrix decomposition for spectral CT reconstruction
NASA Astrophysics Data System (ADS)
Niu, Shanzhou; Yu, Gaohang; Ma, Jianhua; Wang, Jing
2018-02-01
Spectral computed tomography (CT) has been a promising technique in research and clinics because of its ability to produce improved energy resolution images with narrow energy bins. However, the narrow energy bin image is often affected by serious quantum noise because of the limited number of photons used in the corresponding energy bin. To address this problem, we present an iterative reconstruction method for spectral CT using nonlocal low-rank and sparse matrix decomposition (NLSMD), which exploits the self-similarity of patches that are collected in multi-energy images. Specifically, each set of patches can be decomposed into a low-rank component and a sparse component, and the low-rank component represents the stationary background over different energy bins, while the sparse component represents the rest of the different spectral features in individual energy bins. Subsequently, an effective alternating optimization algorithm was developed to minimize the associated objective function. To validate and evaluate the NLSMD method, qualitative and quantitative studies were conducted by using simulated and real spectral CT data. Experimental results show that the NLSMD method improves spectral CT images in terms of noise reduction, artifact suppression and resolution preservation.
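The patch decomposition at the heart of the method, each patch set split into a low-rank background plus a sparse component, can be imitated with a generic alternating scheme: a truncated SVD for the low-rank part and elementwise soft thresholding for the sparse part. This is an illustrative stand-in, not the NLSMD objective or solver, and the data and parameters are synthetic.

```python
import numpy as np

def lowrank_sparse_split(Y, lam=0.1, rank=2, n_iter=50):
    """Alternating decomposition Y ~ L + S: truncated SVD for the low-rank
    part, soft thresholding for the sparse part (an illustrative stand-in
    for the paper's NLSMD objective and solver)."""
    S = np.zeros_like(Y)
    for _ in range(n_iter):
        U, sv, Vt = np.linalg.svd(Y - S, full_matrices=False)
        L = (U[:, :rank] * sv[:rank]) @ Vt[:rank]    # rank-truncated background
        R = Y - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)  # sparse residual
    return L, S

rng = np.random.default_rng(5)
base = np.outer(rng.standard_normal(30), rng.standard_normal(20))  # rank-1 'background'
spikes = np.zeros((30, 20))
spikes[rng.integers(0, 30, 10), rng.integers(0, 20, 10)] = 5.0     # sparse 'features'
Y = base + spikes
L, S = lowrank_sparse_split(Y, lam=0.5, rank=1)
```

In the spectral CT setting, the low-rank part plays the role of the stationary background shared across energy bins and the sparse part the bin-specific spectral features.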
Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods
Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev
2013-01-01
Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, a numerical optimization method for analyzing LR-NMR data by including non-negativity constraints and L1 regularization and by applying a convex optimization solver PDCO, a primal-dual interior method for convex objectives, that allows general linear constraints to be treated as linear operators is presented. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72–88, 2013. PMID:23847452
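The inversion setup can be sketched end-to-end: discretize the Laplace kernel on a relaxation-time grid, then recover a non-negative, sparse amplitude distribution from the decay signal. The solver below is a plain projected ISTA loop with a non-negative soft threshold, a simple stand-in for the PDCO interior-point solver used in the article; the grid sizes and regularization weight are illustrative.

```python
import numpy as np

t = np.linspace(0.01, 3.0, 200)              # acquisition times
T2 = np.logspace(-2, 1, 80)                  # relaxation-time grid
K = np.exp(-np.outer(t, 1.0 / T2))           # discretized Laplace kernel

f_true = np.zeros(80)
f_true[[20, 55]] = [1.0, 0.6]                # two relaxation components
y = K @ f_true                               # simulated decay signal

# Non-negative l1-regularized inversion via projected ISTA (a simple
# stand-in for the PDCO solver used in the article).
lam = 1e-4
L = np.linalg.norm(K, 2) ** 2                # Lipschitz constant
f = np.zeros(80)
for _ in range(20000):
    f = f - (K.T @ (K @ f - y)) / L          # gradient step on the data fit
    f = np.maximum(f - lam / L, 0.0)         # non-negative soft threshold
```

The non-negativity constraint and l1 term together favor a few well-separated peaks in `f`, which is the behavior the article contrasts with the broader solutions of L2-norm regularization.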
Gong, Pinghua; Zhang, Changshui; Lu, Zhaosong; Huang, Jianhua Z; Ye, Jieping
2013-01-01
Non-convex sparsity-inducing penalties have recently received considerable attention in sparse learning. Recent theoretical investigations have demonstrated their superiority over the convex counterparts in several sparse learning settings. However, solving the non-convex optimization problems associated with non-convex penalties remains a big challenge. A commonly used approach is the Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems. This approach is usually not very practical for large-scale problems because its computational cost is a multiple of solving a single convex problem. In this paper, we propose a General Iterative Shrinkage and Thresholding (GIST) algorithm to solve the non-convex optimization problem for a large class of non-convex penalties. The GIST algorithm iteratively solves a proximal operator problem, which in turn has a closed-form solution for many commonly used penalties. At each outer iteration of the algorithm, we use a line search initialized by the Barzilai-Borwein (BB) rule that allows finding an appropriate step size quickly. The paper also presents a detailed convergence analysis of the GIST algorithm. The efficiency of the proposed algorithm is demonstrated by extensive experiments on large-scale data sets.
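The GIST loop the abstract describes can be made concrete for one non-convex penalty. The sketch below is an illustrative reconstruction, not the authors' code: it uses the L0 counting penalty, whose proximal operator has a closed-form hard-threshold solution, with a Barzilai-Borwein step initialization and a monotone backtracking line search:

```python
import numpy as np

def gist_l0(A, b, lam, iters=100):
    """GIST-style proximal loop for minimize 0.5*||A x - b||^2 + lam*||x||_0.
    The prox of t*lam*||.||_0 is hard thresholding at sqrt(2*t*lam).
    Step sizes are initialized by the Barzilai-Borwein rule, then backtracked
    until the (monotone) objective decreases."""
    def objective(x):
        r = A @ x - b
        return 0.5 * (r @ r) + lam * np.count_nonzero(x)
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - b)
    t = 1.0
    for _ in range(iters):
        x_old, g_old = x, g
        for _ in range(30):                        # backtracking line search
            thr = np.sqrt(2.0 * lam * t)
            z = x_old - t * g_old
            x = np.where(np.abs(z) > thr, z, 0.0)  # hard-threshold prox
            if objective(x) <= objective(x_old) - 1e-12:
                break
            t *= 0.5
        g = A.T @ (A @ x - b)
        s, y = x - x_old, g - g_old
        sy = s @ y
        if sy > 1e-12:                             # BB step for next iteration
            t = (s @ s) / sy
    return x

# Toy sparse recovery problem (synthetic data, for illustration only)
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20); x_true[2] = 3.0; x_true[7] = -2.0
b = A @ x_true
x_hat = gist_l0(A, b, lam=0.05)
```

The line search guarantees monotone descent even though the penalty is non-convex, which is the property the paper's convergence analysis builds on.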
ERIC Educational Resources Information Center
Grant, Cosette M.
2012-01-01
There is a sparse literature base which currently informs our knowledge about the connection between mentoring African-American female doctoral students in educational leadership programs at predominantly white institutions (PWIs) interested in faculty careers and their matriculation into the academy. This has left a void in our understanding of…
Archaeology and Developmental Psychology: A Brief Survey of Ancient Athenian Toys
ERIC Educational Resources Information Center
Sommer, Maria; Sommer, Dion
2017-01-01
The authors note that ancient Athens, in important ways, connected children, toys, and play. But they also find the scholarship of toys sparse and scattered. They discuss obstacles that can skew our modern view of the Greek mind, and they caution that modern eyes should not see play where the Greeks saw ritual and religious devotion. With these…
The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth
ERIC Educational Resources Information Center
Steyvers, Mark; Tenenbaum, Joshua B.
2005-01-01
We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small-world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of…
Sequencing Computer-Assisted Learning of Transformations of Trigonometric Functions
ERIC Educational Resources Information Center
Ross, John A.; Bruce, Catherine D.; Sibbald, Timothy M.
2011-01-01
Studies incorporating technology into the teaching of trigonometry, although sparse, have demonstrated positive effects on student achievement. The optimal sequence for integrating technology with teacher-led mathematics instruction has not been determined. Our research investigated whether technology has a greater impact on student achievement…
Group sparse multiview patch alignment framework with view consistency for image classification.
Gui, Jie; Tao, Dacheng; Sun, Zhenan; Luo, Yong; You, Xinge; Tang, Yuan Yan
2014-07-01
No single feature can satisfactorily characterize the semantic concepts of an image. Multiview learning aims to unify different kinds of features to produce a consensual and efficient representation. This paper redefines part optimization in the patch alignment framework (PAF) and develops a group sparse multiview patch alignment framework (GSM-PAF). The new part optimization considers not only the complementary properties of different views, but also view consistency. In particular, view consistency models the correlations between all possible combinations of any two kinds of view. In contrast to conventional dimensionality reduction algorithms that perform feature extraction and feature selection independently, GSM-PAF enjoys joint feature extraction and feature selection by exploiting the l2,1-norm on the projection matrix to achieve row sparsity, which leads to the simultaneous selection of relevant features and learning of the transformation, and thus makes the algorithm more discriminative. Experiments on two real-world image data sets demonstrate the effectiveness of GSM-PAF for image classification.
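The row-sparsity mechanism the abstract attributes to the l2,1-norm can be illustrated directly. A minimal sketch (not the GSM-PAF implementation): the l2,1-norm of a projection matrix and its proximal operator, which shrinks rows as groups and zeroes out weak ones, so the corresponding features are discarded jointly across all views:

```python
import numpy as np

def l21_norm(W):
    """l2,1 norm: sum of the l2 norms of the rows of W. Penalizing it
    drives entire rows of the projection matrix to zero, which selects
    or discards each feature jointly across all output dimensions."""
    return np.sum(np.linalg.norm(W, axis=1))

def prox_l21(W, t):
    """Proximal operator of t*||.||_{2,1}: shorten each row's norm by t,
    zeroing rows whose norm falls below t (row-wise group soft threshold)."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return W * scale

W = np.array([[3.0, 4.0],    # row norm 5 -> shrunk to 4
              [0.3, 0.4],    # row norm 0.5 -> zeroed out
              [0.0, 0.0]])   # already zero, stays zero
P = prox_l21(W, 1.0)
```

Rows survive or vanish as units: this is why the l2,1 penalty performs feature selection rather than merely zeroing scattered entries the way a plain l1 penalty would.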
Detection of faults in rotating machinery using periodic time-frequency sparsity
NASA Astrophysics Data System (ADS)
Ding, Yin; He, Wangpeng; Chen, Binqiang; Zi, Yanyang; Selesnick, Ivan W.
2016-11-01
This paper addresses the problem of extracting periodic oscillatory features in vibration signals for detecting faults in rotating machinery. To extract the feature, we propose an approach in the short-time Fourier transform (STFT) domain where the periodic oscillatory feature manifests itself as a relatively sparse grid. To estimate the sparse grid, we formulate an optimization problem using customized binary weights in the regularizer, where the weights are formulated to promote periodicity. In order to solve the proposed optimization problem, we develop an algorithm called the augmented Lagrangian majorization-minimization algorithm, which combines the split augmented Lagrangian shrinkage algorithm (SALSA) with majorization-minimization (MM) and is guaranteed to converge for both convex and non-convex formulations. The proposed approach is applied to simulated data and, for real data, used as a tool for diagnosing faults in bearings and gearboxes, and it is compared to some state-of-the-art methods. The results show that the proposed approach can effectively detect and extract the periodic oscillatory features.
Hamaguchi, Kosuke; Riehle, Alexa; Brunel, Nicolas
2011-01-01
High firing irregularity is a hallmark of cortical neurons in vivo, and modeling studies suggest a balance of excitation and inhibition is necessary to explain this high irregularity. Such a balance must be generated, at least partly, from local interconnected networks of excitatory and inhibitory neurons, but the details of the local network structure are largely unknown. The dynamics of the neural activity depends on the local network structure; this in turn suggests the possibility of estimating network structure from the dynamics of the firing statistics. Here we report a new method to estimate properties of the local cortical network from the instantaneous firing rate and irregularity (CV2) under the assumption that recorded neurons are part of a randomly connected sparse network. The firing irregularity, measured in monkey motor cortex, exhibits two features: many neurons show relatively stable firing irregularity in time and across different task conditions, and the time-averaged CV2 is widely distributed from quasi-regular to irregular (CV2 = 0.3-1.0). For each recorded neuron, we estimate the three parameters of a local network [balance of local excitation-inhibition, number of recurrent connections per neuron, and excitatory postsynaptic potential (EPSP) size] that best describe the dynamics of the measured firing rates and irregularities. Our analysis shows that optimal parameter sets form a two-dimensional manifold in the three-dimensional parameter space that is confined, for most of the neurons, to the inhibition-dominated region. High-irregularity neurons tend to be more strongly connected to the local network, either in terms of larger EPSP and inhibitory PSP sizes or a larger number of recurrent connections, compared with low-irregularity neurons, for a given excitatory/inhibitory balance. 
Incorporating either synaptic short-term depression or conductance-based synapses leads many low-CV2 neurons to move to the excitation-dominated region, as well as to an increase of EPSP size.
Stevens, Alexander A.; Tappon, Sarah C.; Garg, Arun; Fair, Damien A.
2012-01-01
Background Cognitive abilities, such as working memory, differ among people; however, individuals also vary in their own day-to-day cognitive performance. One potential source of cognitive variability may be fluctuations in the functional organization of neural systems. The degree to which the organization of these functional networks is optimized may relate to the effective cognitive functioning of the individual. Here we specifically examine how changes in the organization of large-scale networks measured via resting state functional connectivity MRI and graph theory track changes in working memory capacity. Methodology/Principal Findings Twenty-two participants performed a test of working memory capacity and then underwent resting-state fMRI. Seventeen subjects repeated the protocol three weeks later. We applied graph theoretic techniques to measure network organization on 34 brain regions of interest (ROI). Network modularity, which measures the level of integration and segregation across sub-networks, and small-worldness, which measures global network connection efficiency, both predicted individual differences in memory capacity; however, only modularity predicted intra-individual variation across the two sessions. Partial correlations controlling for the component of working memory that was stable across sessions revealed that modularity was almost entirely associated with the variability of working memory at each session. Analyses of specific sub-networks and individual circuits were unable to consistently account for working memory capacity variability. Conclusions/Significance The results suggest that the intrinsic functional organization of an a priori defined cognitive control network measured at rest provides substantial information about actual cognitive performance. 
The association of network modularity to the variability in an individual's working memory capacity suggests that the organization of this network into high connectivity within modules and sparse connections between modules may reflect effective signaling across brain regions, perhaps through the modulation of signal or the suppression of the propagation of noise. PMID:22276205
NASA Astrophysics Data System (ADS)
Horesh, L.; Haber, E.
2009-09-01
The l1 minimization problem has been studied extensively in the past few years. Recently, there has been a growing interest in its application for inverse problems. Most studies have concentrated on devising ways for sparse representation of a solution using a given prototype dictionary. Very few studies have addressed the more challenging problem of optimal dictionary construction, and even these were primarily devoted to the simplistic sparse coding application. In this paper, sensitivity analysis of the inverse solution with respect to the dictionary is presented. This analysis reveals some of the salient features and intrinsic difficulties which are associated with the dictionary design problem. Equipped with these insights, we propose an optimization strategy that alleviates these hurdles while utilizing the derived sensitivity relations for the design of a locally optimal dictionary. Our optimality criterion is based on local minimization of the Bayesian risk, given a set of training models. We present a mathematical formulation and an algorithmic framework to achieve this goal. The proposed framework offers the design of dictionaries for inverse problems that incorporate non-trivial, non-injective observation operators, where the data and the recovered parameters may reside in different spaces. We test our algorithm and show that it yields improved dictionaries for a diverse set of inverse problems in geophysics and medical imaging.
Sparse magnetic resonance imaging reconstruction using the Bregman iteration
NASA Astrophysics Data System (ADS)
Lee, Dong-Hoon; Hong, Cheol-Pyo; Lee, Man-Woo
2013-01-01
Magnetic resonance imaging (MRI) reconstruction needs many samples, which are acquired sequentially by using phase-encoding gradients in an MRI system. The number of samples directly determines the scan time, which can be long. Therefore, many researchers have studied ways to reduce the scan time, especially compressed sensing (CS), which reconstructs sparse images from fewer samples when the k-space is not fully sampled. Recently, an iterative technique based on the Bregman method was developed for denoising. The Bregman iteration method improves on total variation (TV) regularization by gradually recovering the fine-scale structures that are usually lost in TV regularization. In this study, we studied sparse sampling image reconstruction using the Bregman iteration for a low-field MRI system to improve its temporal resolution and to validate its usefulness. The image was obtained with a 0.32 T MRI scanner (Magfinder II, SCIMEDIX, Korea) with a phantom and an in-vivo human brain in a head coil. We applied random k-space sampling, and we determined the sampling ratios by using half the fully sampled k-space. The Bregman iteration was used to generate the final images based on the reduced data. We also calculated the root-mean-square error (RMSE) values from error images that were obtained using various numbers of Bregman iterations. Our reconstructed images using the Bregman iteration for sparse sampling images showed good results compared with the original images. Moreover, the RMSE values showed that the sparse reconstructed phantom and the human images converged to the original images. We confirmed the feasibility of sparse sampling image reconstruction methods using the Bregman iteration with a low-field MRI system and obtained good results. Although our results used half the sampling ratio, this method will be helpful in increasing the temporal resolution of low-field MRI systems.
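The "recover the fine scales" behavior of the Bregman iteration can be seen in a stripped-down setting. The sketch below is a hedged illustration for an l1-regularized denoising subproblem, not the paper's TV-regularized MRI reconstruction: each Bregman step solves a shrinkage subproblem and adds the residual back into the data, gradually restoring the amplitudes that a single shrinkage pass underestimates:

```python
import numpy as np

def soft(x, lam):
    """Soft-threshold operator: the solution of the l1 shrinkage subproblem."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def bregman_l1_denoise(b, lam, iters=10):
    """Bregman iteration for min ||u||_1 + (1/(2*lam))*||u - b||^2.
    Adding the residual back into the data at each step restores the
    signal magnitude that one-shot shrinkage biases toward zero."""
    f = np.zeros_like(b)
    u = np.zeros_like(b)
    for _ in range(iters):
        f = f + (b - u)          # add back the residual
        u = soft(f, lam)         # shrinkage subproblem
    return u

b = np.array([3.0, 0.05, -2.0, 0.0])
u1 = soft(b, 1.0)                 # one pass: [2, 0, -1, 0], amplitudes biased
uB = bregman_l1_denoise(b, 1.0)   # Bregman:  [3, 0, -2, 0], fully recovered
```

The large entries come back to their exact values while the small (noise-like) entry stays suppressed, which is the same qualitative effect the abstract describes for fine-scale structures under TV regularization.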
Computational Methods for Sparse Solution of Linear Inverse Problems
2009-03-01
this approach is that the algorithms take advantage of fast matrix-vector multiplications. An implementation is available as pdco and SolveBP in the... M. A. Saunders, "PDCO: primal-dual interior-point method for convex objectives," Systems Optimization Laboratory, Stanford University, Tech. Rep
PRP: peripheral routing protocol for WSN realistic marginal mobility model
NASA Astrophysics Data System (ADS)
Tudorache, I. G.; Popescu, A. M.; Kemp, A. H.
2017-02-01
This article proposes a new routing protocol called the Peripheral Routing Protocol (PRP) for the scenario where a mobile destination (D) moves at the wireless sensor network (WSN) periphery to gather data. From a connectivity point of view, when D follows the marginal mobility model (MMM), the WSN becomes a hybrid network: a sparse network, because of the interrupted connectivity between D and the rest of the nodes, and a well-connected network, because of the connectivity between all the other nodes of the WSN except D. MATLAB simulations show that, for a military application scenario where D's connectivity to the WSN varies between 10% and 95%, compared with the 100% case, PRP outperforms routing protocols recommended for Mobile Ad-hoc Networks (MANET) in three ways: it maintains an average Packet Delivery Ratio (PDR) above 90%, and it limits the increases in Average End-to-End Delay (AETED) and energy per transmitted packet to below 10% and 5%, respectively.
The Impact of Structural Heterogeneity on Excitation-Inhibition Balance in Cortical Networks.
Landau, Itamar D; Egger, Robert; Dercksen, Vincent J; Oberlaender, Marcel; Sompolinsky, Haim
2016-12-07
Models of cortical dynamics often assume a homogeneous connectivity structure. However, we show that heterogeneous input connectivity can prevent the dynamic balance between excitation and inhibition, a hallmark of cortical dynamics, and yield unrealistically sparse and temporally regular firing. Anatomically based estimates of the connectivity of layer 4 (L4) rat barrel cortex and numerical simulations of this circuit indicate that the local network possesses substantial heterogeneity in input connectivity, sufficient to disrupt excitation-inhibition balance. We show that homeostatic plasticity in inhibitory synapses can align the functional connectivity to compensate for structural heterogeneity. Alternatively, spike-frequency adaptation can give rise to a novel state in which local firing rates adjust dynamically so that adaptation currents and synaptic inputs are balanced. This theory is supported by simulations of L4 barrel cortex during spontaneous and stimulus-evoked conditions. Our study shows how synaptic and cellular mechanisms yield fluctuation-driven dynamics despite structural heterogeneity in cortical circuits. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.
Structural connectivity of the developing human amygdala.
Saygin, Zeynep M; Osher, David E; Koldewyn, Kami; Martin, Rebecca E; Finn, Amy; Saxe, Rebecca; Gabrieli, John D E; Sheridan, Margaret
2015-01-01
A large corpus of research suggests that there are changes in the manner and degree to which the amygdala supports cognitive and emotional function across development. One possible basis for these developmental differences could be the maturation of amygdalar connections with the rest of the brain. Recent functional connectivity studies support this conclusion, but the structural connectivity of the developing amygdala and its different nuclei remains largely unstudied. We examined age-related changes in the DWI connectivity fingerprints of the amygdala to the rest of the brain in 166 individuals of ages 5-30. We also developed a model to predict age based on individual-subject amygdala connectivity, and identified the connections that were most predictive of age. Finally, we segmented the amygdala into its four main nucleus groups and examined the developmental changes in connectivity for each nucleus. We observed that with age, amygdalar connectivity becomes increasingly sparse and localized. Age-related changes were largely localized to the subregions of the amygdala that are implicated in social inference and contextual memory (the basal and lateral nuclei). The central nucleus's connectivity also showed differences with age, but these differences affected fewer target regions than the basal and lateral nuclei. The medial nucleus did not exhibit any age-related changes. These findings demonstrate increasing specificity in the connectivity patterns of amygdalar nuclei across age.
Association of childhood abuse with homeless women's social networks.
Green, Harold D; Tucker, Joan S; Wenzel, Suzanne L; Golinelli, Daniela; Kennedy, David P; Ryan, Gery W; Zhou, Annie J
2012-01-01
Childhood abuse has been linked to negative sequelae for women later in life including drug and alcohol use and violence as victim or perpetrator and may also affect the development of women's social networks. Childhood abuse is prevalent among at-risk populations of women (such as the homeless) and thus may have a stronger impact on their social networks. We conducted a study to: (a) develop a typology of sheltered homeless women's social networks; (b) determine whether childhood abuse was associated with the social networks of sheltered homeless women; and (c) determine whether those associations remained after accounting for past-year substance abuse and recent intimate partner abuse. A probability sample of 428 homeless women from temporary shelter settings in Los Angeles County completed a personal network survey that provided respondent information as well as information about their network members' demographics and level of interaction with each other. Cluster analyses identified groups of women who shared specific social network characteristics. Multinomial logistic regressions revealed variables associated with group membership. We identified three groups of women with differing social network characteristics: low-risk networks, densely connected risky networks (dense, risky), and sparsely connected risky networks (sparse, risky). Multinomial logistic regressions indicated that membership in the sparse, risky network group, when compared to the low-risk group, was associated with history of childhood physical abuse (but not sexual or emotional abuse). Recent drug abuse was associated with membership in both risky network groups; however, the association of childhood physical abuse with sparse, risky network group membership remained. Although these findings support theories proposing that the experience of childhood abuse can shape women's social networks, they suggest that it may be childhood physical abuse that has the most impact among homeless women. 
The effects of childhood physical abuse should be more actively investigated in clinical settings, especially those frequented by homeless women, particularly with respect to the formation of social networks in social contexts that may expose these women to greater risks. Copyright © 2012. Published by Elsevier Ltd.
Nonlinear spike-and-slab sparse coding for interpretable image encoding.
Shelton, Jacquelyn A; Sheikh, Abdul-Saboor; Bornschein, Jörg; Sterne, Philip; Lücke, Jörg
2015-01-01
Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and nonlinear combination of components. With the prior, our model can easily represent exact zeros (e.g., the absence of an image component, such as an edge) and a distribution over non-zero pixel intensities. With the nonlinearity (the nonlinear max combination rule), the idea is to target occlusions; dictionary elements correspond to image components that can occlude each other. There are major consequences of the model assumptions made by both (non)linear approaches, thus the main goal of this paper is to isolate and highlight differences between them. Parameter optimization is analytically and computationally intractable in our model, thus as a main contribution we design an exact Gibbs sampler for efficient inference which we can apply to higher dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components with any level of sparsity. This suggests that our model can adaptively well-approximate and characterize the meaningful generation process.
Recurrent Coupling Improves Discrimination of Temporal Spike Patterns
Yuan, Chun-Wei; Leibold, Christian
2012-01-01
Despite the ubiquitous presence of recurrent synaptic connections in sensory neuronal systems, their general functional purpose is not well understood. A recent conceptual advance has been achieved by theories of reservoir computing in which recurrent networks have been proposed to generate short-term memory as well as to improve neuronal representation of the sensory input for subsequent computations. Here, we present a numerical study on the distinct effects of inhibitory and excitatory recurrence in a canonical linear classification task. It is found that both types of coupling improve the ability to discriminate temporal spike patterns as compared to a purely feed-forward system, although in different ways. For a large class of inhibitory networks, the network’s performance is optimal as long as a fraction of roughly 50% of neurons per stimulus is active in the resulting population code. Thereby the contribution of inactive neurons to the neural code is found to be even more informative than that of the active neurons, generating an inherent robustness of classification performance against temporal jitter of the input spikes. Excitatory couplings are found to not only produce a short-term memory buffer but also to improve linear separability of the population patterns by evoking more irregular firing as compared to the purely inhibitory case. As the excitatory connectivity becomes more sparse, firing becomes more variable, and pattern separability improves. We argue that the proposed paradigm is particularly well-suited as a conceptual framework for processing of sensory information in the auditory pathway. PMID:22586392
Enhancing quantum annealing performance for the molecular similarity problem
NASA Astrophysics Data System (ADS)
Hernandez, Maritza; Aramon, Maliheh
2017-05-01
Quantum annealing is a promising technique which leverages quantum mechanics to solve hard optimization problems. Considerable progress has been made in the development of a physical quantum annealer, motivating the study of methods to enhance the efficiency of such a solver. In this work, we present a quantum annealing approach to measure similarity among molecular structures. Implementing real-world problems on a quantum annealer is challenging due to hardware limitations such as sparse connectivity, intrinsic control error, and limited precision. In order to overcome the limited connectivity, a problem must be reformulated using minor-embedding techniques. Using a real data set, we investigate the performance of a quantum annealer in solving the molecular similarity problem. We provide experimental evidence that common practices for embedding can be replaced by new alternatives which mitigate some of the hardware limitations and enhance its performance. Common practices for embedding include minimizing either the number of qubits or the chain length and determining the strength of ferromagnetic couplers empirically. We show that current criteria for selecting an embedding do not improve the hardware's performance for the molecular similarity problem. Furthermore, we use a theoretical approach to determine the strength of ferromagnetic couplers. Such an approach removes the computational burden of the current empirical approaches and also results in hardware solutions that can benefit from simple local classical improvement. Although our results are limited to the problems considered here, they can be generalized to guide future benchmarking studies.
NASA Astrophysics Data System (ADS)
Miorelli, Roberto; Reboud, Christophe
2018-04-01
Pulsed Eddy Current Testing (PECT) is a popular Non-Destructive Testing (NDT) technique for applications such as corrosion monitoring in the oil and gas industry or rivet inspection in the aeronautic area. Its particularity is the use of a transient excitation, which makes it possible to retrieve more information from the piece than conventional harmonic ECT, in a simpler and cheaper way than multi-frequency ECT setups. Efficient modeling tools are, as usual, very useful for optimizing experimental sensors and devices or evaluating their performance, for instance. This paper proposes an efficient simulation of PECT signals based on standard time-harmonic solvers and an Adaptive Sparse Grid (ASG) algorithm. The algorithm performs an adaptive sampling of the ECT signal spectrum; the complete spectrum is then interpolated from this sparse representation, and PECT signals are finally synthesized by means of an inverse Fourier transform. Simulation results corresponding to existing industrial configurations are presented, and the performance of the strategy is discussed by comparison to reference results.
Migliore, Michele; Hines, Michael L.; Shepherd, Gordon M.
2014-01-01
The precise mechanism by which synaptic excitation and inhibition interact with each other in odor coding through the unique dendrodendritic synaptic microcircuits present in olfactory bulb is unknown. Here a scaled-up model of the mitral–granule cell network in the rodent olfactory bulb is used to analyze dendrodendritic processing of experimentally determined odor patterns. We found that the interaction between excitation and inhibition is responsible for two fundamental computational mechanisms: (1) a balanced excitation/inhibition in strongly activated mitral cells, leading to a sparse representation of odorant input, and (2) an unbalanced excitation/inhibition (inhibition dominated) in surrounding weakly activated mitral cells, leading to lateral inhibition. These results suggest how both mechanisms can carry information about the input patterns, with optimal level of synaptic excitation and inhibition producing the highest level of sparseness and decorrelation in the network response. The results suggest how the learning process, through the emergent development of these mechanisms, can enhance odor representation of olfactory bulb. PMID:25297097
Multivariate quadrature for representing cloud condensation nuclei activity of aerosol populations
Fierce, Laura; McGraw, Robert L.
2017-07-26
Sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the quadrature method of moments. Given a set of moment constraints, we show how linear programming, combined with an entropy-inspired cost function, can be used to construct optimized quadrature representations of aerosol distributions. The sparse representations derived from this approach accurately reproduce cloud condensation nuclei (CCN) activity for realistically complex distributions simulated by a particle-resolved model. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy approach described here is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a particle-based aerosol scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.
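The moment-matching step can be sketched with an ordinary linear program. The example below is an illustrative reconstruction, not the authors' code: it uses a plain linear cost as a stand-in for their entropy-inspired cost, and a fixed size grid chosen for the toy data; `scipy.optimize.linprog` then finds a sparse, non-negative weight vector reproducing the first four sample moments:

```python
import numpy as np
from scipy.optimize import linprog

# Target: first four moments of a synthetic lognormal particle-size sample
rng = np.random.default_rng(0)
sizes = rng.lognormal(mean=0.0, sigma=0.5, size=10000)
moments = np.array([np.mean(sizes ** k) for k in range(4)])

# Candidate abscissas: a fixed grid of sizes; unknowns: non-negative weights w
grid = np.linspace(0.05, 6.0, 80)
V = np.vander(grid, 4, increasing=True).T      # V[k, j] = grid[j] ** k
c = np.ones_like(grid)                          # stand-in for the entropy cost

res = linprog(c, A_eq=V, b_eq=moments, bounds=(0, None), method="highs")
w = res.x                                       # sparse quadrature weights
```

Because an LP solver returns a vertex of the feasible polytope, at most as many weights as there are moment constraints are nonzero, which is where the sparsity of the quadrature representation comes from.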
Multivariate quadrature for representing cloud condensation nuclei activity of aerosol populations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fierce, Laura; McGraw, Robert L.
Here, sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the quadrature method of moments. Given a set of moment constraints, we show how linear programming, combined with an entropy-inspired cost function, can be used to construct optimized quadrature representations of aerosol distributions. The sparse representations derived from this approach accurately reproduce cloud condensation nuclei (CCN) activity for realistically complex distributions simulated by a particleresolved model. Additionally, the linear programming techniques described in this study can be used to bound key aerosolmore » properties, such as the number concentration of CCN. Unlike the commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy approach described here is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a particle-based aerosol scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.« less
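The linear-programming construction described in the abstract can be sketched in a few lines: given low-order moments of a densely resolved size distribution, a nonnegative LP with a linear surrogate for the entropy-inspired cost returns a basic (hence sparse) set of quadrature weights that reproduces the moments. All specifics below (the diameter grid, lognormal shape, and cost function) are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical setup: a densely resolved aerosol number distribution on a
# diameter grid, to be compressed to a few quadrature points.
x = np.geomspace(0.01, 1.0, 200)                 # particle diameters (µm)
w_true = np.exp(-0.5 * ((np.log(x) - np.log(0.1)) / 0.5) ** 2)
w_true /= w_true.sum()                           # dense number fractions

orders = np.arange(4)                            # moment orders 0..3
A_eq = np.vstack([x**k for k in orders])
m_target = A_eq @ w_true                         # moment constraints

# Linear surrogate for the paper's entropy-inspired cost (an assumption of
# this sketch). Simplex-type solvers return basic solutions, so only a
# handful of weights come back nonzero: a sparse quadrature representation.
c = np.abs(np.log(x) - np.log(0.1))
res = linprog(c, A_eq=A_eq, b_eq=m_target, bounds=(0, None), method="highs")
w = res.x

print("nonzero weights:", int(np.count_nonzero(w > 1e-9)))
print("max moment error:", float(np.max(np.abs(A_eq @ w - m_target))))
```

The same machinery, with the objective flipped to maximize or minimize a property such as CCN number, yields the bounds mentioned in the abstract.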
Learning Collaborative Sparse Representation for Grayscale-Thermal Tracking.
Li, Chenglong; Cheng, Hui; Hu, Shiyi; Liu, Xiaobai; Tang, Jin; Lin, Liang
2016-09-27
Integrating multiple different yet complementary feature representations has proven to be an effective way of boosting tracking performance. This paper investigates how to perform robust object tracking in challenging scenarios by adaptively incorporating information from grayscale and thermal videos, and proposes a novel collaborative algorithm for online tracking. In particular, an adaptive fusion scheme is proposed based on collaborative sparse representation in a Bayesian filtering framework. We jointly optimize sparse codes and the reliability weights of the different modalities in an online way. In addition, this work contributes a comprehensive video benchmark, which includes 50 grayscale-thermal sequences and their ground-truth annotations for tracking purposes. The videos are highly diverse, and the annotations were completed by a single person to guarantee consistency. Extensive experiments against other state-of-the-art trackers with both grayscale and grayscale-thermal inputs demonstrate the effectiveness of the proposed tracking approach. Through analyzing quantitative results, we also provide basic insights and potential future research directions in grayscale-thermal tracking.
Data traffic reduction schemes for sparse Cholesky factorizations
NASA Technical Reports Server (NTRS)
Naik, Vijay K.; Patrick, Merrell L.
1988-01-01
Load distribution schemes are presented which minimize the total data traffic in the Cholesky factorization of dense and sparse, symmetric, positive definite matrices on multiprocessor systems with local and shared memory. The total data traffic in factoring an n x n sparse, symmetric, positive definite matrix representing an n-vertex regular 2-D grid graph using n^alpha (alpha <= 1) processors is shown to be O(n^(1 + alpha/2)); it is O(n^(3/2)) when n^alpha (alpha >= 1) processors are used. Under the conditions of uniform load distribution, these results are shown to be asymptotically optimal. The schemes allow efficient use of up to O(n) processors before the total data traffic reaches its maximum value of O(n^(3/2)). The partitioning employed within the scheme allows better utilization of the data accessed from shared memory than previously published methods.
Inference for High-dimensional Differential Correlation Matrices.
Cai, T Tony; Zhang, Anru
2016-01-01
Motivated by differential co-expression analysis in genomics, we consider in this paper estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. The minimax rate of convergence is established and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, a subset of which has been previously verified, are associated with breast cancer. Hypothesis testing on the differential correlation matrices is also considered. A test, particularly well suited for testing against sparse alternatives, is introduced. In addition, related problems, including estimation of a single sparse correlation matrix, estimation of differential covariance matrices, and estimation of differential cross-correlation matrices, are also discussed.
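A minimal sketch of the thresholding idea behind such estimators (the paper's procedure is adaptive and entrywise data-driven; here a single universal threshold with a hand-picked constant stands in):

```python
import numpy as np

def differential_correlation(X1, X2, c=2.5):
    """Entrywise-thresholded estimate of R1 - R2. The universal threshold
    level c * sqrt(log p * (1/n1 + 1/n2)) is an illustrative choice, not the
    paper's adaptive rule."""
    n1, p = X1.shape
    n2, _ = X2.shape
    R1 = np.corrcoef(X1, rowvar=False)
    R2 = np.corrcoef(X2, rowvar=False)
    D = R1 - R2
    lam = c * np.sqrt(np.log(p) * (1.0 / n1 + 1.0 / n2))
    return np.where(np.abs(D) > lam, D, 0.0)

# Toy two-group data with a single differential correlation entry.
rng = np.random.default_rng(1)
p, n = 30, 2000
Sigma1 = np.eye(p)
Sigma2 = np.eye(p)
Sigma2[0, 1] = Sigma2[1, 0] = 0.6          # the only differential edge
X1 = rng.multivariate_normal(np.zeros(p), Sigma1, size=n)
X2 = rng.multivariate_normal(np.zeros(p), Sigma2, size=n)

D_hat = differential_correlation(X1, X2)
print("surviving entries:", np.argwhere(np.abs(D_hat) > 0))
```

The thresholding zeroes the noise entries while retaining the planted (0, 1) difference, which is the behavior the adaptive estimator formalizes with guarantees.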
Method and apparatus for optimized processing of sparse matrices
Taylor, Valerie E.
1993-01-01
A computer architecture for processing a sparse matrix is disclosed. The apparatus stores a value-row vector corresponding to the nonzero values of a sparse matrix. Each of the nonzero values is located at a defined row and column position in the matrix. The value-row vector includes a first vector containing the nonzero values and delimiting characters indicating a transition from one column to another. The value-row vector also includes a second vector which defines the row position values in the matrix corresponding to the nonzero values in the first vector, and the column position values in the matrix corresponding to the column positions of the nonzero values in the first vector. The architecture also includes a circuit for detecting a special character within the value-row vector. Matrix-vector multiplication is executed on the value-row vector: each indexed value of the first vector is multiplied by a column value from a second matrix to form a matrix-vector product, which is added to the previous matrix-vector product.
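A software sketch of the value-row representation and its matrix-vector product, assuming a Python sentinel plays the role of the patent's delimiting character (in the actual apparatus these vectors live in hardware and a dedicated circuit detects the delimiter):

```python
import numpy as np

COL_END = None  # sentinel standing in for the patent's delimiting character

def build_value_row(A):
    """Pack a dense matrix into value-row form: nonzero values with a
    per-column delimiter, plus a parallel vector of row indices."""
    values, rows = [], []
    for j in range(A.shape[1]):
        for i in range(A.shape[0]):
            if A[i, j] != 0:
                values.append(A[i, j])
                rows.append(i)
        values.append(COL_END)   # delimiter: transition to the next column
        rows.append(-1)          # placeholder row index for the delimiter
    return values, rows

def value_row_matvec(values, rows, v):
    """y = A @ v using only the packed representation."""
    y = np.zeros(max(r for r in rows if r >= 0) + 1)
    col = 0
    for val, r in zip(values, rows):
        if val is COL_END:       # "circuit" detecting the special character
            col += 1
        else:
            y[r] += val * v[col]
    return y

A = np.array([[0.0, 2.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 3.0, 4.0]])
vals, rows = build_value_row(A)
v = np.array([1.0, 2.0, 3.0])
print(value_row_matvec(vals, rows, v))
```

Traversal is column by column, so each entry of `v` is loaded once per column segment, which mirrors the data-reuse motivation behind the format.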
Blind compressed sensing image reconstruction based on alternating direction method
NASA Astrophysics Data System (ADS)
Liu, Qinan; Guo, Shuxu
2018-04-01
In order to solve the problem of how to reconstruct the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is obtained by an alternating minimization method. The proposed method addresses the difficulty of representing the sparse basis in compressed sensing, suppresses noise, and improves the quality of the reconstructed image. This method ensures that the blind compressed sensing formulation has a unique solution and can recover the original image signal from a complex environment with stronger adaptability. The experimental results show that the proposed image reconstruction algorithm based on blind compressed sensing can recover high-quality image signals under under-sampling conditions.
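The alternating minimization can be sketched with a toy generative model Y = Phi @ D @ C (an assumption standing in for the paper's sampling operator): an ISTA step updates the sparse codes, and a Lipschitz-stepped gradient step updates the dictionary. With these step sizes the penalized objective is monotonically non-increasing, which is what makes the alternation well behaved.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, K, L, lam = 32, 20, 8, 100, 0.05

# Synthetic ground truth (hypothetical setup): signals X = D_true @ C_true,
# observed only through a random compressive operator Phi.
D_true = rng.standard_normal((n, K))
C_true = np.where(rng.random((K, L)) < 0.2, rng.standard_normal((K, L)), 0.0)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
Y = Phi @ D_true @ C_true                           # undersampled data

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def objective(D, C):
    return 0.5 * np.linalg.norm(Phi @ D @ C - Y) ** 2 + lam * np.abs(C).sum()

D = rng.standard_normal((n, K))                     # unknown dictionary
C = np.zeros((K, L))
obj0 = objective(D, C)

for _ in range(100):
    # C-step: one ISTA step (monotone for step size 1/Lipschitz).
    A = Phi @ D
    step_c = 1.0 / (np.linalg.norm(A, 2) ** 2 + 1e-12)
    C = soft(C - step_c * A.T @ (A @ C - Y), lam * step_c)
    # D-step: gradient step on the data-fit term (the dictionary scale
    # ambiguity is left unresolved in this sketch).
    G = Phi.T @ (Phi @ D @ C - Y) @ C.T
    step_d = 1.0 / (np.linalg.norm(Phi, 2) ** 2 * np.linalg.norm(C, 2) ** 2 + 1e-12)
    D -= step_d * G

print("objective:", obj0, "->", objective(D, C))
```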
ERIC Educational Resources Information Center
Mallette, Leo A.
2012-01-01
There are many modes of information flow in the sciences: books, journals, conferences, research and development, acquisition of companies, co-workers, students, and professors in schools of higher learning. In the sciences, dissertation students learn from their dissertation advisor (or chairperson or mentor) and the other dissertation committee…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kovarik, Libor; Stevens, Andrew J.; Liyu, Andrey V.
2016-10-17
Aberration correction for scanning transmission electron microscopes (STEM) has dramatically increased spatial image resolution for beam-stable materials, but it is the sample stability rather than the microscope that often limits the practical resolution of STEM images. To extract physical information from images of beam-sensitive materials it is becoming clear that there is a critical dose/dose-rate below which the images can be interpreted as representative of the pristine material, while above it the observation is dominated by beam effects. Here we describe an experimental approach for sparse sampling in the STEM and in-painting image reconstruction in order to reduce the electron dose/dose-rate to the sample during imaging. By characterizing the induction-limited rise time and hysteresis in the scan coils, we show that a sparse line-hopping approach to scan randomization can be implemented that optimizes both the speed of the scan and the amount of the sample that needs to be illuminated by the beam. The dose and acquisition time for sparse sampling are shown to be effectively decreased by a factor of 5x relative to conventional acquisition, permitting imaging of beam-sensitive materials without changing the microscope operating parameters. As a result, the use of a sparse line-hopping scan to acquire STEM images is demonstrated with atomic-resolution aberration-corrected Z-contrast images of CaCO3, a material that is traditionally difficult to image by TEM/STEM because of dose issues.
Stability and stabilisation of a class of networked dynamic systems
NASA Astrophysics Data System (ADS)
Liu, H. B.; Wang, D. Q.
2018-04-01
We investigate the stability and stabilisation of a linear time-invariant networked heterogeneous system with arbitrarily connected subsystems. A new linear matrix inequality based necessary and sufficient condition for stability is derived, based on which the stabilisation is provided. The obtained conditions efficiently utilise the block-diagonal structure of the system parameter matrices and the sparseness of the subsystem connection matrix. Moreover, a sufficient condition dependent only on each individual subsystem is also presented for the stabilisation of large-scale networked systems. Numerical simulations show that these conditions are computationally effective in the analysis and synthesis of a large-scale networked system.
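The structure the conditions exploit can be illustrated on a small instance: a block-diagonal matrix of subsystem dynamics plus a sparse interconnection matrix. The sketch below checks stability by brute force via the spectral abscissa rather than the paper's LMI condition; symmetric subsystem blocks are an assumption made here so that weak coupling provably cannot destabilize the network.

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(2)

# Three heterogeneous stable subsystems (symmetric blocks by assumption,
# with eigenvalues shifted to lie at or below -1).
A_blocks = []
for _ in range(3):
    M = rng.standard_normal((2, 2))
    S = M + M.T                                   # symmetric subsystem block
    shift = np.max(np.linalg.eigvalsh(S)) + 1.0
    A_blocks.append(S - shift * np.eye(2))

A = block_diag(*A_blocks)                         # block-diagonal part
coupling = np.zeros((6, 6))
coupling[0, 3] = coupling[4, 1] = 0.05            # sparse, weak interconnection
A_net = A + coupling                              # networked system matrix

# Brute-force certificate for this small instance: spectral abscissa < 0.
alpha = float(np.max(np.real(np.linalg.eigvals(A_net))))
print("spectral abscissa:", alpha)
```

For large networks this eigenvalue check becomes expensive, which is precisely why conditions that decompose along the block-diagonal-plus-sparse structure matter.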
ESTIMATION OF FUNCTIONALS OF SPARSE COVARIANCE MATRICES
Fan, Jianqing; Rigollet, Philippe; Wang, Weichen
2016-01-01
High-dimensional statistical tests often ignore correlations to gain simplicity and stability, leading to null distributions that depend on functionals of correlation matrices such as their Frobenius norm and other ℓ_r norms. Motivated by the computation of critical values of such tests, we investigate the difficulty of estimating functionals of sparse correlation matrices. Specifically, we show that simple plug-in procedures based on thresholded estimators of correlation matrices are sparsity-adaptive and minimax optimal over a large class of correlation matrices. Akin to previous results on functional estimation, the minimax rates exhibit an elbow phenomenon. Our results are further illustrated on simulated data as well as in an empirical study of data arising in financial econometrics. PMID:26806986
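The plug-in idea is simple enough to sketch directly: threshold the sample correlation matrix entrywise and evaluate the functional on the result. The constant and the sqrt(log p / n) threshold level below are illustrative choices, not the paper's calibrated procedure.

```python
import numpy as np

def frobenius_offdiag(R):
    """Frobenius norm of the off-diagonal part of a correlation matrix."""
    return np.linalg.norm(R - np.diag(np.diag(R)))

def plugin_estimate(X, c=2.5):
    """Plug-in estimate of the off-diagonal Frobenius norm of the population
    correlation matrix via entrywise thresholding (a simplified sketch)."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    R_thr = np.where(np.abs(R) > c * np.sqrt(np.log(p) / n), R, 0.0)
    np.fill_diagonal(R_thr, 1.0)
    return frobenius_offdiag(R_thr)

# Toy sparse correlation structure: one nonzero off-diagonal pair.
rng = np.random.default_rng(3)
p, n = 40, 4000
R_pop = np.eye(p)
R_pop[2, 7] = R_pop[7, 2] = 0.5
X = rng.multivariate_normal(np.zeros(p), R_pop, size=n)

est = plugin_estimate(X)
truth = frobenius_offdiag(R_pop)         # sqrt(2) * 0.5
print(f"estimate {est:.3f} vs truth {truth:.3f}")
```

Naively summing all squared sample correlations would accumulate O(p^2 / n) noise; thresholding is what makes the plug-in adapt to sparsity.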
Optimal Dictionaries for Sparse Solutions of Multi-frame Blind Deconvolution
2014-09-01
object is the Hubble Space Telescope (HST). As stated above, the dictionary training used the first 100 of the total of the simulated PSFs. The second set...diffraction-limited Hubble image and HubbleRE is the reconstructed image from the 100 simulated atmospheric turbulence degraded images of the HST
Finite element solution of optimal control problems with state-control inequality constraints
NASA Technical Reports Server (NTRS)
Bless, Robert R.; Hodges, Dewey H.
1992-01-01
It is demonstrated that the weak Hamiltonian finite-element formulation is amenable to the solution of optimal control problems with inequality constraints which are functions of both state and control variables. Difficult problems can be treated on account of the ease with which algebraic equations can be generated before having to specify the problem. These equations yield very accurate solutions. Owing to the sparse structure of the resulting Jacobian, computer solutions can be obtained quickly when the sparsity is exploited.
Statistical Mechanics of Combinatorial Auctions
NASA Astrophysics Data System (ADS)
Galla, Tobias; Leone, Michele; Marsili, Matteo; Sellitto, Mauro; Weigt, Martin; Zecchina, Riccardo
2006-09-01
Combinatorial auctions are formulated as frustrated lattice gases on sparse random graphs, allowing the determination of the optimal revenue by methods of statistical physics. Transitions between computationally easy and hard regimes are found and interpreted in terms of the geometric structure of the space of solutions. We introduce an iterative algorithm to solve intermediate and large instances, and discuss competing states of optimal revenue and maximal number of satisfied bidders. The algorithm can be generalized to the hard phase and to more sophisticated auction protocols.
Volumetric CT with sparse detector arrays (and application to Si-strip photon counters).
Sisniega, A; Zbijewski, W; Stayman, J W; Xu, J; Taguchi, K; Fredenberg, E; Lundqvist, Mats; Siewerdsen, J H
2016-01-07
Novel x-ray medical imaging sensors, such as photon counting detectors (PCDs) and large area CCD and CMOS cameras can involve irregular and/or sparse sampling of the detector plane. Application of such detectors to CT involves undersampling that is markedly different from the commonly considered case of sparse angular sampling. This work investigates volumetric sampling in CT systems incorporating sparsely sampled detectors with axial and helical scan orbits and evaluates performance of model-based image reconstruction (MBIR) with spatially varying regularization in mitigating artifacts due to sparse detector sampling. Volumetric metrics of sampling density and uniformity were introduced. Penalized-likelihood MBIR with a spatially varying penalty that homogenized resolution by accounting for variations in local sampling density (i.e. detector gaps) was evaluated. The proposed methodology was tested in simulations and on an imaging bench based on a Si-strip PCD (total area 5 cm × 25 cm) consisting of an arrangement of line sensors separated by gaps of up to 2.5 mm. The bench was equipped with translation/rotation stages allowing a variety of scanning trajectories, ranging from a simple axial acquisition to helical scans with variable pitch. Statistical (spherical clutter) and anthropomorphic (hand) phantoms were considered. Image quality was compared to that obtained with a conventional uniform penalty in terms of structural similarity index (SSIM), image uniformity, spatial resolution, contrast, and noise. Scan trajectories with intermediate helical width (~10 mm longitudinal distance per 360° rotation) demonstrated optimal tradeoff between the average sampling density and the homogeneity of sampling throughout the volume. 
For a scan trajectory with 10.8 mm helical width, the spatially varying penalty resulted in significant visual reduction of sampling artifacts, confirmed by a 10% reduction in minimum SSIM (from 0.88 to 0.8) and a 40% reduction in the dispersion of SSIM in the volume compared to the constant penalty (both penalties applied at optimal regularization strength). Images of the spherical clutter and wrist phantoms confirmed the advantages of the spatially varying penalty, showing a 25% improvement in image uniformity and 1.8 × higher CNR (at matched spatial resolution) compared to the constant penalty. The studies elucidate the relationship between sampling in the detector plane, acquisition orbit, sampling of the reconstructed volume, and the resulting image quality. They also demonstrate the benefit of spatially varying regularization in MBIR for scenarios with irregular sampling patterns. Such findings are important and integral to the incorporation of a sparsely sampled Si-strip PCD in CT imaging.
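The spatially varying penalty can be sketched as follows: compute a local sampling-density map from the detector mask, then scale the regularization strength with it. The mask geometry and the linear density-to-penalty rule below are assumptions of this sketch, not the paper's exact construction.

```python
import numpy as np

def local_sampling_density(mask, radius=2):
    """Fraction of active detector cells in a (2r+1) x (2r+1) neighborhood,
    computed with a simple box filter."""
    p = np.pad(mask.astype(float), radius, mode="edge")
    h, w = mask.shape
    out = np.zeros((h, w))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += p[radius + dy : radius + dy + h,
                     radius + dx : radius + dx + w]
    return out / (2 * radius + 1) ** 2

# Hypothetical Si-strip-like mask: line sensors separated by gaps.
mask = np.ones((16, 32), dtype=bool)
mask[:, 10:13] = False                       # a detector gap
mask[:, 24:26] = False

density = local_sampling_density(mask)
# One way to homogenize resolution (a sketch, not the paper's exact rule):
# scale the penalty with local density, so sparsely sampled regions are
# regularized less aggressively and retain spatial resolution.
beta = 0.1 * density
print("penalty range:", beta.min(), beta.max())
```

In a full penalized-likelihood MBIR pipeline, `beta` would weight the roughness penalty voxel by voxel after forward-projecting the detector gaps into the volume.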
NASA Technical Reports Server (NTRS)
Denning, Peter J.
1989-01-01
Sparse distributed memory was proposed be Pentti Kanerva as a realizable architecture that could store large patterns and retrieve them based on partial matches with patterns representing current sensory inputs. This memory exhibits behaviors, both in theory and in experiment, that resemble those previously unapproached by machines - e.g., rapid recognition of faces or odors, discovery of new connections between seemingly unrelated ideas, continuation of a sequence of events when given a cue from the middle, knowing that one doesn't know, or getting stuck with an answer on the tip of one's tongue. These behaviors are now within reach of machines that can be incorporated into the computing systems of robots capable of seeing, talking, and manipulating. Kanerva's theory is a break with the Western rationalistic tradition, allowing a new interpretation of learning and cognition that respects biology and the mysteries of individual human beings.
Wu, Jibing; Meng, Qinggang; Deng, Su; Huang, Hongbin; Wu, Yahui; Badii, Atta
2017-01-01
Heterogeneous information networks (e.g. bibliographic networks and social media networks) that consist of multiple interconnected objects are ubiquitous. Clustering analysis is an effective method for understanding the semantic information and interpretable structure of heterogeneous information networks, and it has attracted the attention of many researchers in recent years. However, most studies assume that heterogeneous information networks follow simple schemas, such as bi-typed networks or the star network schema, and can only cluster one type of object in the network at a time. In this paper, a novel clustering framework is proposed based on sparse tensor factorization for heterogeneous information networks, which can cluster multiple types of objects simultaneously in a single pass without any network schema information. The types of objects and the relations between them in the heterogeneous information network are modeled as a sparse tensor. The clustering issue is modeled as an optimization problem similar to the well-known Tucker decomposition. Then, an Alternating Least Squares (ALS) algorithm and a feasible initialization method are proposed to solve the optimization problem. Based on the tensor factorization, we simultaneously partition the different types of objects into different clusters. Experimental results on both synthetic and real-world datasets demonstrate that our proposed clustering framework, STFClus, can model heterogeneous information networks efficiently and can outperform state-of-the-art clustering algorithms as a generally applicable, network-schema-agnostic single-pass clustering method for heterogeneous networks. PMID:28245222
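The Tucker-style alternating optimization can be illustrated with higher-order orthogonal iteration (HOOI) on a small dense toy tensor. This is a stand-in for STFClus, not its implementation: the planted-cluster setup and the argmax labeling rule are assumptions of this sketch, and the paper's tensor is sparse.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hooi(T, ranks, iters=20):
    """Tucker factor matrices via higher-order orthogonal iteration, an
    ALS-style scheme: each factor is refit while the others are fixed."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    for _ in range(iters):
        for n in range(3):
            W = T
            for m in range(3):
                if m != n:   # project all modes except n onto their factors
                    W = np.moveaxis(np.tensordot(W, U[m], axes=(m, 0)), -1, m)
            U[n] = np.linalg.svd(unfold(W, n),
                                 full_matrices=False)[0][:, :ranks[n]]
    return U

# Toy heterogeneous network: two planted co-clusters across three object
# types, with unequal strengths so the factors separate cleanly.
rng = np.random.default_rng(4)
T = 0.05 * rng.random((8, 8, 8))
T[:4, :4, :4] += 1.5                 # co-cluster 1
T[4:, 4:, 4:] += 0.8                 # co-cluster 2

U = hooi(T, ranks=(2, 2, 2))
# Cluster labels per object type from the dominant factor pattern.
labels = [np.argmax(np.abs(u), axis=1) for u in U]
print(labels[0])
```

Each factor matrix clusters one object type, and all three are obtained in the same decomposition, mirroring the single-pass, schema-free aspect of the framework.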
Acceleration of GPU-based Krylov solvers via data transfer reduction
Anzt, Hartwig; Tomov, Stanimire; Luszczek, Piotr; ...
2015-04-08
Krylov subspace iterative solvers are often the method of choice when solving large sparse linear systems. At the same time, hardware accelerators such as graphics processing units continue to offer significant floating point performance gains for matrix and vector computations through easy-to-use libraries of computational kernels. However, as these libraries are usually composed of a well optimized but limited set of linear algebra operations, applications that use them often fail to reduce certain data communications, and hence fail to leverage the full potential of the accelerator. In this study, we target the acceleration of Krylov subspace iterative methods for graphics processing units, and in particular the Biconjugate Gradient Stabilized solver. We show that significant improvement can be achieved by reformulating the method to reduce data communications through application-specific kernels instead of using the generic BLAS kernels, e.g. as provided by NVIDIA's cuBLAS library, and by designing a GPU-specific sparse matrix-vector product kernel that is able to more efficiently use the graphics processing unit's computing power. Furthermore, we derive a model estimating the performance improvement, and use experimental data to validate the expected runtime savings. Finally, considering that the derived implementation achieves significantly higher performance, we assert that similar optimizations addressing algorithm structure, as well as the sparse matrix-vector product, are crucial for the subsequent development of high-performance GPU-accelerated Krylov subspace iterative methods.
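To make the optimization target concrete, here is a textbook BiCGStab in NumPy (dense and CPU-bound, purely illustrative). Each iteration chains several vector updates and dot products around two matrix-vector products; it is exactly this sequence of BLAS-1 calls and SpMVs whose fusion into application-specific GPU kernels yields the reported speedups, without changing the mathematics.

```python
import numpy as np

def bicgstab(A, b, tol=1e-10, maxiter=500):
    """Textbook Biconjugate Gradient Stabilized method. Every line in the
    loop corresponds to a kernel launch in a naive GPU port, which is the
    communication overhead kernel fusion removes."""
    x = np.zeros_like(b)
    r = b - A @ x
    r_hat = r.copy()                     # fixed shadow residual
    rho = alpha = omega = 1.0
    p = np.zeros_like(b)
    v = np.zeros_like(b)
    for _ in range(maxiter):
        rho_new = r_hat @ r
        beta = (rho_new / rho) * (alpha / omega)
        rho = rho_new
        p = r + beta * (p - omega * v)
        v = A @ p                        # SpMV #1
        alpha = rho / (r_hat @ v)
        s = r - alpha * v
        t = A @ s                        # SpMV #2
        omega = (t @ s) / (t @ t)
        x = x + alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
    return x

# Well-conditioned test system (diagonally dominant by construction).
rng = np.random.default_rng(5)
n = 100
A = 4.0 * np.eye(n) + 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)
x = bicgstab(A, b)
print("residual:", np.linalg.norm(A @ x - b))
```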
Classification of Clouds in Satellite Imagery Using Adaptive Fuzzy Sparse Representation
Jin, Wei; Gong, Fei; Zeng, Xingbin; Fu, Randi
2016-01-01
Automatic cloud detection and classification using satellite cloud imagery have various meteorological applications such as weather forecasting and climate monitoring. Cloud pattern analysis has recently become a research hotspot. Since satellites sense clouds remotely from space, and different cloud types often overlap and convert into each other, there is inherent fuzziness and uncertainty in satellite cloud imagery. Satellite observation is susceptible to noise, and traditional cloud classification methods are sensitive to noise and outliers, so it is hard for them to achieve reliable results. To deal with these problems, a satellite cloud classification method using adaptive fuzzy sparse representation-based classification (AFSRC) is proposed. First, by defining adaptive parameters related to attenuation rate and critical membership, an improved fuzzy membership is introduced to accommodate the fuzziness and uncertainty of satellite cloud imagery; second, by an effective combination of the improved fuzzy membership function and sparse representation-based classification (SRC), atoms in the training dictionary are optimized; finally, an adaptive fuzzy sparse representation classifier for cloud classification is proposed. Experimental results on FY-2G satellite cloud images show that the proposed method not only improves the accuracy of cloud classification, but also has strong stability and adaptability with high computational efficiency. PMID:27999261
NASA Astrophysics Data System (ADS)
Cao, Faxian; Yang, Zhijing; Ren, Jinchang; Ling, Wing-Kuen; Zhao, Huimin; Marshall, Stephen
2017-12-01
Although the sparse multinomial logistic regression (SMLR) has provided a useful tool for sparse classification, it suffers from inefficacy in dealing with high-dimensional features and manually set initial regressor values. This has significantly constrained its applications for hyperspectral image (HSI) classification. In order to tackle these two drawbacks, an extreme sparse multinomial logistic regression (ESMLR) is proposed for effective classification of HSI. First, the HSI dataset is projected to a new feature space with randomly generated weights and biases. Second, an optimization model is established by the Lagrange multiplier method and the dual principle to automatically determine a good initial regressor for SMLR by minimizing the training error and the regressor value. Furthermore, the extended multi-attribute profiles (EMAPs) are utilized to extract both spectral and spatial features. A combinational linear multiple features learning (MFL) method is proposed to further enhance the features extracted by ESMLR and EMAPs. Finally, logistic regression via variable splitting and augmented Lagrangian (LORSAL) is adopted in the proposed framework to reduce the computational time. Experiments conducted on two well-known HSI datasets, namely the Indian Pines dataset and the Pavia University dataset, demonstrate the fast and robust performance of the proposed ESMLR framework.
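The first step, projecting the data into a new feature space with randomly generated weights and bias, can be sketched as an extreme-learning-machine-style random feature map (the sigmoid activation and dimensions below are assumptions for illustration, not the paper's exact choices):

```python
import numpy as np

def random_feature_map(X, n_hidden, seed=0):
    """Project rows of X through a fixed random linear layer plus sigmoid.
    The weights W and bias b are drawn once and never trained."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))
```

The downstream sparse classifier then operates on these transformed features instead of the raw spectra.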
Huang, Jinhong; Guo, Li; Feng, Qianjin; Chen, Wufan; Feng, Yanqiu
2015-07-21
Image reconstruction from undersampled k-space data accelerates magnetic resonance imaging (MRI) by exploiting image sparseness in certain transform domains. Employing image patch representation over a learned dictionary has the advantage of being adaptive to local image structures and thus can better sparsify images than using fixed transforms (e.g. wavelets and total variations). Dictionary learning methods have recently been introduced to MRI reconstruction, and these methods demonstrate significantly reduced reconstruction errors compared to sparse MRI reconstruction using fixed transforms. However, the synthesis sparse coding problem in dictionary learning is NP-hard and computationally expensive. In this paper, we present a novel sparsity-promoting orthogonal dictionary updating method for efficient image reconstruction from highly undersampled MRI data. The orthogonality imposed on the learned dictionary enables the minimization problem in the reconstruction to be solved by an efficient optimization algorithm which alternately updates representation coefficients, orthogonal dictionary, and missing k-space data. Moreover, both sparsity level and sparse representation contribution using updated dictionaries gradually increase during iterations to recover more details, assuming the progressively improved quality of the dictionary. Simulation and real data experimental results both demonstrate that the proposed method is approximately 10 to 100 times faster than the K-SVD-based dictionary learning MRI method and simultaneously improves reconstruction accuracy.
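The orthogonality constraint is what makes the dictionary-update step cheap: with an orthogonal dictionary, the update reduces to a closed-form orthogonal Procrustes problem. A minimal sketch, assuming a square dictionary D minimizing ||X - DC||_F over orthogonal matrices (one standard way to realize such an update; details may differ from the paper's algorithm):

```python
import numpy as np

def orthogonal_dict_update(X, C):
    """Orthogonal Procrustes update: the orthogonal D minimizing
    ||X - D C||_F is U @ Vt, where U, S, Vt = svd(X @ C.T)."""
    U, _, Vt = np.linalg.svd(X @ C.T)
    return U @ Vt
```

Because the solution is a single SVD of a small matrix, this avoids the NP-hard synthesis sparse coding that makes K-SVD-style learning expensive.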
Robust Single Image Super-Resolution via Deep Networks With Sparse Prior.
Liu, Ding; Wang, Zhaowen; Wen, Bihan; Yang, Jianchao; Han, Wei; Huang, Thomas S
2016-07-01
Single image super-resolution (SR) is an ill-posed problem, which tries to recover a high-resolution image from its low-resolution observation. To regularize the solution of the problem, previous methods have focused on designing good priors for natural images, such as sparse representation, or directly learning the priors from a large data set with models, such as deep neural networks. In this paper, we argue that domain expertise from the conventional sparse coding model can be combined with the key ingredients of deep learning to achieve further improved results. We demonstrate that a sparse coding model particularly designed for SR can be incarnated as a neural network with the merit of end-to-end optimization over training data. The network has a cascaded structure, which boosts the SR performance for both fixed and incremental scaling factors. The proposed training and testing schemes can be extended for robust handling of images with additional degradation, such as noise and blurring. A subjective assessment is conducted and analyzed in order to thoroughly evaluate various SR techniques. Our proposed model is tested on a wide range of images, and it significantly outperforms the existing state-of-the-art methods for various scaling factors both quantitatively and perceptually.
Optimal design in pediatric pharmacokinetic and pharmacodynamic clinical studies.
Roberts, Jessica K; Stockmann, Chris; Balch, Alfred; Yu, Tian; Ward, Robert M; Spigarelli, Michael G; Sherwin, Catherine M T
2015-03-01
It is not trivial to conduct clinical trials with pediatric participants. Ethical, logistical, and financial considerations add to the complexity of pediatric studies. Optimal design theory allows investigators the opportunity to apply mathematical optimization algorithms to define how to structure their data collection to answer focused research questions. These techniques can be used to determine an optimal sample size, optimal sample times, and the number of samples required for pharmacokinetic and pharmacodynamic studies. The aim of this review is to demonstrate how to determine optimal sample size, optimal sample times, and the number of samples required from each patient by presenting specific examples using optimal design tools. Additionally, this review aims to discuss the relative usefulness of sparse vs. rich data. This review is intended to educate clinicians, as well as basic research scientists, who plan to conduct a pharmacokinetic/pharmacodynamic clinical trial in pediatric patients. © 2015 John Wiley & Sons Ltd.
The topography of the environment alters the optimal search strategy for active particles
Volpe, Giovanni
2017-01-01
In environments with scarce resources, adopting the right search strategy can make the difference between succeeding and failing, even between life and death. At different scales, this applies to molecular encounters in the cell cytoplasm, to animals looking for food or mates in natural landscapes, to rescuers during search and rescue operations in disaster zones, and to genetic computer algorithms exploring parameter spaces. When looking for sparse targets in a homogeneous environment, a combination of ballistic and diffusive steps is considered optimal; in particular, more ballistic Lévy flights with exponent α≤1 are generally believed to optimize the search process. However, most search spaces present complex topographies. What is the best search strategy in these more realistic scenarios? Here, we show that the topography of the environment significantly alters the optimal search strategy toward less ballistic and more Brownian strategies. We consider an active particle performing a blind cruise search for nonregenerating sparse targets in a 2D space with steps drawn from a Lévy distribution with the exponent varying from α=1 to α=2 (Brownian). We show that, when boundaries, barriers, and obstacles are present, the optimal search strategy depends on the topography of the environment, with α assuming intermediate values in the whole range under consideration. We interpret these findings using simple scaling arguments and discuss their robustness to varying searcher’s size. Our results are relevant for search problems at different length scales from animal and human foraging to microswimmers’ taxis to biochemical rates of reaction. PMID:29073055
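The heavy-tailed steps of such a Lévy search can be generated by inverse-transform sampling of a power-law step-length distribution, a common textbook construction (the exact step rule used in the study may differ):

```python
import numpy as np

def levy_steps(alpha, n, l_min=1.0, seed=0):
    """Step lengths with power-law tail p(l) ~ l^(-(alpha+1)) for l >= l_min,
    drawn via the inverse transform l = l_min * u^(-1/alpha)."""
    rng = np.random.default_rng(seed)
    u = 1.0 - rng.random(n)  # uniform in (0, 1], avoids division by zero
    return l_min * u ** (-1.0 / alpha)
```

Pairing each length with a uniformly random direction yields the 2D blind cruise search; alpha near 1 gives rare, very long ballistic flights, while alpha = 2 approaches Brownian-like behavior.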
The topography of the environment alters the optimal search strategy for active particles
NASA Astrophysics Data System (ADS)
Volpe, Giorgio; Volpe, Giovanni
2017-10-01
In environments with scarce resources, adopting the right search strategy can make the difference between succeeding and failing, even between life and death. At different scales, this applies to molecular encounters in the cell cytoplasm, to animals looking for food or mates in natural landscapes, to rescuers during search and rescue operations in disaster zones, and to genetic computer algorithms exploring parameter spaces. When looking for sparse targets in a homogeneous environment, a combination of ballistic and diffusive steps is considered optimal; in particular, more ballistic Lévy flights with exponent α≤1 are generally believed to optimize the search process. However, most search spaces present complex topographies. What is the best search strategy in these more realistic scenarios? Here, we show that the topography of the environment significantly alters the optimal search strategy toward less ballistic and more Brownian strategies. We consider an active particle performing a blind cruise search for nonregenerating sparse targets in a 2D space with steps drawn from a Lévy distribution with the exponent varying from α=1 to α=2 (Brownian). We show that, when boundaries, barriers, and obstacles are present, the optimal search strategy depends on the topography of the environment, with α assuming intermediate values in the whole range under consideration. We interpret these findings using simple scaling arguments and discuss their robustness to varying searcher's size. Our results are relevant for search problems at different length scales from animal and human foraging to microswimmers' taxis to biochemical rates of reaction.
Liu, Zhenqiu; Sun, Fengzhu; McGovern, Dermot P
2017-01-01
Feature selection and prediction are the most important tasks for big data mining. The common strategies for feature selection in big data mining are L1, SCAD, and MC+. However, none of the existing algorithms optimizes L0, which penalizes the number of nonzero features directly. In this paper, we develop a novel sparse generalized linear model (GLM) with L0 approximation for feature selection and prediction with big omics data. The proposed approach approximates the L0 optimization directly. Even though the original L0 problem is non-convex, it is approximated by sequential convex optimizations with the proposed algorithm. The proposed method is easy to implement with only several lines of code. Novel adaptive ridge algorithms (L0ADRIDGE) for L0-penalized GLM with ultra-high-dimensional big data are developed. The proposed approach outperforms other cutting-edge regularization methods, including SCAD and MC+, in simulations. When applied to integrated analysis of mRNA, microRNA, and methylation data from TCGA ovarian cancer, multilevel gene signatures associated with suboptimal debulking are identified simultaneously. The biological significance and potential clinical importance of those genes are further explored. The developed software, L0ADRIDGE, in MATLAB is available at https://github.com/liuzqx/L0adridge.
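The adaptive ridge idea can be sketched as an iteratively reweighted ridge regression in which each coefficient's penalty weight 1/(w_j^2 + eps) approximates the L0 penalty, driving small coefficients toward zero while leaving large ones nearly unpenalized. This sketch covers only the Gaussian/linear case, with illustrative constants; the paper's GLM generalization and stopping rule are omitted:

```python
import numpy as np

def l0_adaptive_ridge(X, y, lam=0.1, eps=1e-6, n_iter=50):
    """Approximate L0-penalized regression via iteratively reweighted ridge:
    each pass solves (X'X + diag(lam / (w_j^2 + eps))) w = X'y, so
    coefficients already near zero receive a huge penalty next round."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary LS start
    for _ in range(n_iter):
        W = np.diag(lam / (w ** 2 + eps))
        w = np.linalg.solve(X.T @ X + W, X.T @ y)
    return w
```

Each inner solve is convex, giving the "sequential convex optimizations" character described in the abstract.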
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Li; Gao, Yaozong; Shi, Feng
Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segment CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject, and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. 
Conclusions: The authors have proposed a new CBCT segmentation method by using patch-based sparse representation and convex optimization, which achieves considerably accurate segmentation results on the 15-patient CBCT dataset.
Curvelet-based compressive sensing for InSAR raw data
NASA Astrophysics Data System (ADS)
Costa, Marcello G.; da Silva Pinho, Marcelo; Fernandes, David
2015-10-01
The aim of this work is to evaluate the compression performance of SAR raw data for interferometry applications, collected by airborne platforms from BRADAR (the Brazilian SAR system operating in X and P bands), using a new approach based on compressive sensing (CS) to achieve effective recovery with good phase preservation. Real-time capability is desirable for this framework, where the collected data can be compressed to reduce onboard storage and the bandwidth required for transmission. In CS theory, a sparse unknown signal can be recovered from a small number of random or pseudo-random measurements by sparsity-promoting nonlinear recovery algorithms; the volume of data representing the original signal can therefore be significantly reduced. To achieve a sparse representation of the SAR signal, a curvelet transform was applied. The curvelets constitute a directional frame, which allows an optimal sparse representation of objects with discontinuities along smooth curves, as observed in raw data, and provides advanced denoising optimization. For the tests, a scene of 8192 x 2048 samples in range and azimuth was available in X-band with 2 m resolution. The sparse representation was compressed using low-dimension measurement matrices in each curvelet subband. An iterative CS reconstruction method based on IST (iterative soft/shrinkage thresholding) was then adjusted to recover the curvelet coefficients and, from them, the original signal. To evaluate the compression performance, the compression ratio (CR) and signal-to-noise ratio (SNR) were computed; because interferometry applications require higher reconstruction accuracy, phase parameters such as the standard deviation of the phase (PSD) and the mean phase error (MPE) were also computed. Moreover, in the image domain, a single-look complex image was generated to evaluate the compression effects. 
All results were analyzed in terms of sparsity to verify that the method provides efficient compression and recovery quality appropriate for InSAR applications, demonstrating the feasibility of applying compressive sensing.
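The IST reconstruction referred to above is, at its core, iterative soft-thresholding for the sparse recovery problem min_x (1/2)||y - Ax||^2 + lam*||x||_1. A generic sketch follows (not the curvelet-domain implementation; the step size and threshold are standard defaults, not the paper's tuning):

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ist_recover(A, y, lam=0.05, n_iter=3000):
    """Iterative soft-thresholding: gradient step on the data-fit term,
    then shrinkage. Step size 1/||A||_2^2 ensures convergence."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x + step * (A.T @ (y - A @ x)), step * lam)
    return x
```

In the paper's setting, A would combine the per-subband measurement matrix with the curvelet synthesis operator, and x holds curvelet coefficients.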
Kurtosis based weighted sparse model with convex optimization technique for bearing fault diagnosis
NASA Astrophysics Data System (ADS)
Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Yan, Ruqiang
2016-12-01
The bearing failure, generating harmful vibrations, is one of the most frequent reasons for machine breakdowns. Thus, performing bearing fault diagnosis is an essential procedure to improve the reliability of the mechanical system and reduce its operating expenses. Most previous studies on rolling bearing fault diagnosis can be categorized into two main families: kurtosis-based filter methods and wavelet-based shrinkage methods. Although tremendous progress has been made, their effectiveness suffers from three potential drawbacks: firstly, fault information is often decomposed into proximal frequency bands, resulting in the impulsive feature frequency band splitting (IFFBS) phenomenon, which significantly degrades the performance of capturing the optimal information band; secondly, noise energy spreads throughout all frequency bins and contaminates fault information in the information band, especially under heavy noise; thirdly, wavelet coefficients are shrunk equally to satisfy the sparsity constraints, so most of the feature information energy is eliminated unreasonably. Therefore, exploiting two pieces of prior information (i.e., that the coefficient sequence of fault information in the wavelet basis is sparse, and that the kurtosis of the envelope spectrum can accurately evaluate the information capacity of rolling bearing faults), a novel weighted sparse model and its corresponding framework for bearing fault diagnosis, coined KurWSD, are proposed in this paper. KurWSD formulates the prior information into weighted sparse regularization terms and then obtains a nonsmooth convex optimization problem. The alternating direction method of multipliers (ADMM) is sequentially employed to solve this problem and the fault information is extracted through the estimated wavelet coefficients. 
Compared with state-of-the-art methods, KurWSD overcomes the three drawbacks and combines the advantages of both families of tools. KurWSD has three main advantages: firstly, all the characteristic information scattered in proximal sub-bands is gathered by synthesizing the impulsive dominant sub-band signals, eliminating the dilemma of the IFFBS phenomenon. Secondly, noise in the focused sub-bands can be alleviated efficiently by shrinking or removing the dense wavelet coefficients of Gaussian noise. Lastly, wavelet coefficients carrying fault information are reliably detected and preserved by manipulating wavelet coefficients discriminatively according to their contribution to the impulsive components. Moreover, the reliability and effectiveness of KurWSD are demonstrated through simulated and experimental signals.
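The discriminative shrinkage idea, shrinking coefficients less when they are believed to carry fault information, can be sketched as a weighted soft-threshold operator (the kurtosis-derived weighting and the full ADMM solver are omitted; weights here are supplied directly, as an illustration):

```python
import numpy as np

def weighted_soft_threshold(coeffs, weights, lam):
    """Per-coefficient shrinkage: coefficient i gets threshold lam*weights[i].
    Small weights (likely fault content) are barely shrunk; large weights
    (likely noise) are shrunk toward or to zero."""
    t = lam * weights
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```

This is the elementwise operation a weighted-L1 ADMM step would apply; the weights would come from the envelope-spectrum kurtosis of each sub-band.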
Li, Wenjing; Li, Jianhong; Wang, Zhenchang; Li, Yong; Liu, Zhaohui; Yan, Fei; Xian, Junfang; He, Huiguang
2015-01-01
Previous studies have shown brain reorganization after early deprivation of auditory sensory input. However, changes of grey matter connectivity have not yet been investigated in prelingually deaf adolescents. In the present study, we aimed to investigate changes of grey matter connectivity within and between auditory, language and visual systems in prelingually deaf adolescents. We recruited 16 prelingually deaf adolescents and 16 age- and gender-matched normal controls, and extracted the grey matter volume as the structural characteristic from 14 regions of interest involved in auditory, language or visual processing to investigate the changes of grey matter connectivity within and between auditory, language and visual systems. Sparse inverse covariance estimation (SICE) was utilized to construct grey matter connectivity between these brain regions. The results show that prelingually deaf adolescents present weaker grey matter connectivity within auditory and visual systems, and reduced connectivity between language and visual systems. Notably, significantly increased connectivity was found between auditory and visual systems in prelingually deaf adolescents. Our results indicate "cross-modal" plasticity after deprivation of the auditory input in prelingually deaf adolescents, especially between auditory and visual systems. Besides, auditory deprivation and visual deficits might affect the connectivity pattern within language and visual systems in prelingually deaf adolescents.
From the Rendering Equation to Stratified Light Transport Inversion
2010-12-09
iteratively. These approaches relate closely to the radiosity method for diffuse global illumination in forward rendering (Hanrahan et al., 1991; Gortler et … currently simply use sparse matrices to represent T, we are also interested in exploring connections with hierarchical and wavelet radiosity as in … Seidel iterative methods used in radiosity. 2.4 Inverse Light Transport Previous work on inverse rendering has considered inversion of the direct
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, D; Ruan, D; Low, D
2015-06-15
Purpose: Existing efforts to replace the complex multileaf collimator (MLC) with simple jaws for intensity modulated radiation therapy (IMRT) resulted in unacceptable compromises in plan quality and delivery efficiency. We introduce a novel fluence map segmentation method based on compressed sensing for plan delivery using a simplified sparse orthogonal collimator (SOC) on the 4π non-coplanar radiotherapy platform. Methods: 4π plans with varying prescription doses were first created by automatically selecting and optimizing 20 non-coplanar beams for 2 GBM, 2 head & neck, and 2 lung patients. To create deliverable 4π plans using SOC, which are two pairs of orthogonal collimators with 1 to 4 leaves in each collimator bank, a Haar Fluence Optimization (HFO) method was used to regulate the number of Haar wavelet coefficients while maximizing the dose fidelity to the ideal prescription. The plans were directly stratified utilizing the optimized Haar wavelet rectangular basis. A matching number of deliverable segments were stratified for the MLC-based plans. Results: Compared to the MLC-based 4π plans, the SOC-based 4π plans increased the average PTV dose homogeneity from 0.811 to 0.913. PTV D98 and D99 were improved by 3.53% and 5.60% of the corresponding prescription doses. The average mean and maximal OAR doses slightly increased by 0.57% and 2.57% of the prescription doses. The average number of segments ranged between 5 and 30 per beam. The collimator travel time to create the segments decreased with increasing leaf numbers in the SOC. The two- and four-leaf designs were 1.71 and 1.93 times more efficient, on average, than the single-leaf design. Conclusion: The innovative dose domain optimization based on compressed sensing enables uncompromised 4π non-coplanar IMRT dose delivery using simple rectangular segments that are deliverable using a sparse orthogonal collimator, which only requires 8 to 16 leaves yet is unlimited in modulation resolution. 
This work is supported in part by Varian Medical Systems, Inc. and NIH R43 CA18339.
Robust quantum optimizer with full connectivity.
Nigg, Simon E; Lörch, Niels; Tiwari, Rakesh P
2017-04-01
Quantum phenomena have the potential to speed up the solution of hard optimization problems. For example, quantum annealing, based on the quantum tunneling effect, has recently been shown to scale exponentially better with system size than classical simulated annealing. However, current realizations of quantum annealers with superconducting qubits face two major challenges. First, the connectivity between the qubits is limited, excluding many optimization problems from a direct implementation. Second, decoherence degrades the success probability of the optimization. We address both of these shortcomings and propose an architecture in which the qubits are robustly encoded in continuous variable degrees of freedom. By leveraging the phenomenon of flux quantization, all-to-all connectivity with sufficient tunability to implement many relevant optimization problems is obtained without overhead. Furthermore, we demonstrate the robustness of this architecture by simulating the optimal solution of a small instance of the nondeterministic polynomial-time hard (NP-hard) and fully connected number partitioning problem in the presence of dissipation.
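The number partitioning problem whose optimal solution is simulated above asks for a split of a set of numbers into two subsets with sums as close as possible. A brute-force classical baseline, exponential in the input size (which is precisely what makes quantum annealing approaches interesting), can be written as:

```python
from itertools import combinations

def best_partition_gap(nums):
    """Exhaustively minimize |sum(S) - sum(complement)| over all subsets S.
    If a subset has sum s, the gap is |total - 2*s|."""
    total = sum(nums)
    best = total
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            best = min(best, abs(total - 2 * sum(subset)))
    return best
```

A gap of 0 means a perfect partition exists; an annealer encodes the same objective in an Ising Hamiltonian whose ground state realizes the optimal split.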
An Interactive Simulation Program for Exploring Computational Models of Auto-Associative Memory.
Fink, Christian G
2017-01-01
While neuroscience students typically learn about activity-dependent plasticity early in their education, they often struggle to conceptually connect modification at the synaptic scale with network-level neuronal dynamics, not to mention with their own everyday experience of recalling a memory. We have developed an interactive simulation program (based on the Hopfield model of auto-associative memory) that enables the user to visualize the connections generated by any pattern of neural activity, as well as to simulate the network dynamics resulting from such connectivity. An accompanying set of student exercises introduces the concepts of pattern completion, pattern separation, and sparse versus distributed neural representations. Results from a conceptual assessment administered before and after students worked through these exercises indicate that the simulation program is a useful pedagogical tool for illustrating fundamental concepts of computational models of memory.
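The Hopfield model underlying such a simulation program can be sketched in a few lines: Hebbian storage of ±1 patterns, then iterated thresholded updates for recall. The pattern size and the synchronous update rule here are illustrative choices, not necessarily those of the program described:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: W = (1/n) * sum of outer products, zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / n

def recall(W, state, n_steps=10):
    """Synchronous sign updates; a cue close to a stored pattern
    settles into that pattern (pattern completion)."""
    for _ in range(n_steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state
```

Corrupting a stored pattern and watching the dynamics restore it is exactly the pattern-completion exercise the abstract describes.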
Lallouette, Jules; De Pittà, Maurizio; Ben-Jacob, Eshel; Berry, Hugues
2014-01-01
Traditionally, astrocytes have been considered to couple via gap-junctions into a syncytium with only rudimentary spatial organization. However, this view is challenged by growing experimental evidence that astrocytes organize as a proper gap-junction mediated network with more complex region-dependent properties. On the other hand, the propagation range of intercellular calcium waves (ICW) within astrocyte populations is as well highly variable, depending on the brain region considered. This suggests that the variability of the topology of gap-junction couplings could play a role in the variability of the ICW propagation range. Since this hypothesis is very difficult to investigate with current experimental approaches, we explore it here using a biophysically realistic model of three-dimensional astrocyte networks in which we varied the topology of the astrocyte network, while keeping intracellular properties and spatial cell distribution and density constant. Computer simulations of the model suggest that changing the topology of the network is indeed sufficient to reproduce the distinct ranges of ICW propagation reported experimentally. Unexpectedly, our simulations also predict that sparse connectivity and restriction of gap-junction couplings to short distances should favor propagation while long-distance or dense connectivity should impair it. Altogether, our results provide support to recent experimental findings that point toward a significant functional role of the organization of gap-junction couplings into proper astroglial networks. Dynamic control of this topology by neurons and signaling molecules could thus constitute a new type of regulation of neuron-glia and glia-glia interactions. PMID:24795613
Craft, David
2010-10-01
A discrete set of points and their convex combinations can serve as a sparse representation of the Pareto surface in multiple objective convex optimization. We develop a method to evaluate the quality of such a representation, and show by example that in multiple objective radiotherapy planning, the number of Pareto optimal solutions needed to represent Pareto surfaces of up to five dimensions grows at most linearly with the number of objectives. The method described is also applicable to the representation of convex sets. Copyright © 2009 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Giving Eyesight to the Blind: Towards Attention-Aware AIED
ERIC Educational Resources Information Center
D'Mello, Sidney K.
2016-01-01
There is an inextricable link between attention and learning, yet AIED systems in 2015 are largely blind to learners' attentional states. We argue that next-generation AIED systems should have the ability to monitor and dynamically (re)direct attention in order to optimize allocation of sparse attentional resources. We present some initial ideas…
Abivardi, Aslan; Bach, Dominik R
2017-08-01
Structural alterations in long-range amygdala connections are proposed to crucially underlie several neuropsychiatric disorders. While progress has been made in elucidating the function of these connections, our understanding of their structure in humans remains sparse and non-systematic. Harnessing diffusion-weighted imaging and probabilistic tractography in humans, we investigate connections between two main amygdala nucleus groups, thalamic nuclei, and cortex. We first parcellated amygdala into deep (basolateral) and superficial (centrocortical) nucleus groups, and thalamus into six subregions, using previously established protocols based on connectivity. Cortex was parcellated based on T1-weighted images. We found substantial amygdala connections to thalamus, with different patterns for the two amygdala nuclei. Crucially, we describe direct subcortical connections between amygdala and paraventricular thalamus. Different from rodents but similar to non-human primates, these are more pronounced for basolateral than centrocortical amygdala. Substantial white-matter connectivity between amygdala and visual pulvinar is also more pronounced for basolateral amygdala. Furthermore, we establish detailed connectivity profiles for basolateral and centrocortical amygdala to cortical regions. These exhibit cascadic connections with sensory cortices as suggested previously based on tracer methods in non-human animals. We propose that the quantitative connectivity profiles provided here may guide future work on normal and pathological function of human amygdala. Hum Brain Mapp 38:3927-3940, 2017. © 2017 Wiley Periodicals, Inc. © 2017 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
2017-01-01
Abstract Structural alterations in long‐range amygdala connections are proposed to crucially underlie several neuropsychiatric disorders. While progress has been made in elucidating the function of these connections, our understanding of their structure in humans remains sparse and non‐systematic. Harnessing diffusion‐weighted imaging and probabilistic tractography in humans, we investigate connections between two main amygdala nucleus groups, thalamic nuclei, and cortex. We first parcellated amygdala into deep (basolateral) and superficial (centrocortical) nucleus groups, and thalamus into six subregions, using previously established protocols based on connectivity. Cortex was parcellated based on T1‐weighted images. We found substantial amygdala connections to thalamus, with different patterns for the two amygdala nuclei. Crucially, we describe direct subcortical connections between amygdala and paraventricular thalamus. Different from rodents but similar to non‐human primates, these are more pronounced for basolateral than centrocortical amygdala. Substantial white‐matter connectivity between amygdala and visual pulvinar is also more pronounced for basolateral amygdala. Furthermore, we establish detailed connectivity profiles for basolateral and centrocortical amygdala to cortical regions. These exhibit cascadic connections with sensory cortices as suggested previously based on tracer methods in non‐human animals. We propose that the quantitative connectivity profiles provided here may guide future work on normal and pathological function of human amygdala. Hum Brain Mapp 38:3927–3940, 2017. © 2017 Wiley Periodicals, Inc. PMID:28512761
Structural Connectivity of the Developing Human Amygdala
Saygin, Zeynep M.; Osher, David E.; Koldewyn, Kami; Martin, Rebecca E.; Finn, Amy; Saxe, Rebecca; Gabrieli, John D.E.; Sheridan, Margaret
2015-01-01
A large corpus of research suggests that there are changes in the manner and degree to which the amygdala supports cognitive and emotional function across development. One possible basis for these developmental differences could be the maturation of amygdalar connections with the rest of the brain. Recent functional connectivity studies support this conclusion, but the structural connectivity of the developing amygdala and its different nuclei remains largely unstudied. We examined age related changes in the DWI connectivity fingerprints of the amygdala to the rest of the brain in 166 individuals of ages 5-30. We also developed a model to predict age based on individual-subject amygdala connectivity, and identified the connections that were most predictive of age. Finally, we segmented the amygdala into its four main nucleus groups, and examined the developmental changes in connectivity for each nucleus. We observed that with age, amygdalar connectivity becomes increasingly sparse and localized. Age related changes were largely localized to the subregions of the amygdala that are implicated in social inference and contextual memory (the basal and lateral nuclei). The central nucleus’ connectivity also showed differences with age but these differences affected fewer target regions than the basal and lateral nuclei. The medial nucleus did not exhibit any age related changes. These findings demonstrate increasing specificity in the connectivity patterns of amygdalar nuclei across age. PMID:25875758
Yuan, Yuan; Lin, Jianzhe; Wang, Qi
2016-12-01
Hyperspectral image (HSI) classification is a crucial issue in remote sensing. Accurate classification benefits a large number of applications such as land use analysis and marine resource utilization. But high data correlation brings difficulty to reliable classification, especially for HSI with abundant spectral information. Furthermore, the traditional methods often fail to adequately consider the spatial coherency of HSI, which also limits the classification performance. To address these inherent obstacles, a novel spectral-spatial classification scheme is proposed in this paper. The proposed method mainly focuses on multitask joint sparse representation (MJSR) and a stepwise Markov random field framework, which constitute the two main contributions of this work. First, the MJSR not only reduces the spectral redundancy, but also retains necessary correlation in the spectral field during classification. Second, the stepwise optimization further exploits the spatial correlation, which significantly enhances the classification accuracy and robustness. In terms of several universal quality evaluation indexes, the experimental results on Indian Pines and Pavia University demonstrate the superiority of our method compared with the state-of-the-art competitors.
NASA Astrophysics Data System (ADS)
Friedel, Michael; Buscema, Massimo
2016-04-01
Aquatic ecosystem models can potentially be used to understand the influence of stresses on catchment resource quality. Given that catchment responses are functions of natural and anthropogenic stresses reflected in sparse and spatiotemporal biological, physical, and chemical measurements, an ecosystem is difficult to model using statistical or numerical methods. We propose an artificial adaptive systems approach to model ecosystems. First, an unsupervised machine-learning (ML) network is trained using the set of available sparse and disparate data variables. Second, an evolutionary algorithm with genetic doping is applied to reduce the number of ecosystem variables to an optimal set. Third, the optimal set of ecosystem variables is used to retrain the ML network. Fourth, a stochastic cross-validation approach is applied to quantify and compare the nonlinear uncertainty in selected predictions of the original and reduced models. Results are presented for aquatic ecosystems (tens of thousands of square kilometers) undergoing landscape change: the Upper Illinois River Basin and the Central Colorado Assessment Project Area in the USA, and the Southland region of New Zealand.
Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices
Monajemi, Hatef; Jafarpour, Sina; Gavish, Matan; Donoho, David L.; Ambikasaran, Sivaram; Bacallado, Sergio; Bharadia, Dinesh; Chen, Yuxin; Choi, Young; Chowdhury, Mainak; Chowdhury, Soham; Damle, Anil; Fithian, Will; Goetz, Georges; Grosenick, Logan; Gross, Sam; Hills, Gage; Hornstein, Michael; Lakkam, Milinda; Lee, Jason; Li, Jian; Liu, Linxi; Sing-Long, Carlos; Marx, Mike; Mittal, Akshay; Monajemi, Hatef; No, Albert; Omrani, Reza; Pekelis, Leonid; Qin, Junjie; Raines, Kevin; Ryu, Ernest; Saxe, Andrew; Shi, Dai; Siilats, Keith; Strauss, David; Tang, Gary; Wang, Chaojun; Zhou, Zoey; Zhu, Zhen
2013-01-01
In compressed sensing, one takes n samples of an N-dimensional vector x0 using an n x N matrix A, obtaining undersampled measurements y = A x0. For random matrices with independent standard Gaussian entries, it is known that, when x0 is k-sparse, there is a precisely determined phase transition: for a certain region in the (delta, rho)-phase diagram, with undersampling fraction delta = n/N and sparsity fraction rho = k/n, convex optimization (minimize ||x||_1 subject to Ax = y) typically finds the sparsest solution, whereas outside that region, it typically fails. It has been shown empirically that the same property—with the same phase transition location—holds for a wide range of non-Gaussian random matrix ensembles. We report extensive experiments showing that the Gaussian phase transition also describes numerous deterministic matrices, including Spikes and Sines, Spikes and Noiselets, Paley Frames, Delsarte-Goethals Frames, Chirp Sensing Matrices, and Grassmannian Frames. Namely, for each of these deterministic matrices in turn, for a typical k-sparse object, we observe that convex optimization is successful over a region of the phase diagram that coincides with the region known for Gaussian random matrices. Our experiments considered coefficients constrained to a set X for four different sets X, and the results establish our finding for each of the four associated phase transitions. PMID:23277588
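The convex optimization this abstract refers to is l1-minimization (basis pursuit). A minimal sketch, not from the paper, of exact recovery of a k-sparse vector from Gaussian measurements, posing basis pursuit as a linear program via the split x = u - v with u, v >= 0 (dimensions and seed are illustrative choices):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, n, k = 60, 30, 3                    # ambient dimension, measurements, sparsity
A = rng.standard_normal((n, N))        # Gaussian sensing matrix
x0 = np.zeros(N)
x0[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x0                             # undersampled measurements y = A x0

# Basis pursuit: minimize ||x||_1 subject to Ax = y, as an LP in (u, v)
c = np.ones(2 * N)                     # ||x||_1 = sum(u) + sum(v) at the optimum
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]
print("max recovery error:", np.max(np.abs(x_hat - x0)))
```

With delta = 0.5 and rho = 0.1, this instance lies well inside the Gaussian phase-transition region, so the LP recovers x0 essentially exactly.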
Task-driven dictionary learning.
Mairal, Julien; Bach, Francis; Ponce, Jean
2012-04-01
Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience, and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as regression tasks for data that admit sparse representations.
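The sparse decomposition step underlying these dictionary models can be illustrated with Orthogonal Matching Pursuit, a standard greedy solver; this sketch is not the paper's task-driven algorithm, and the dictionary and coefficients are synthetic:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily pick k atoms of D to represent y."""
    residual = y.astype(float).copy()
    support, coef = [], np.array([])
    for _ in range(k):
        # atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # least-squares refit on the selected atoms, then update the residual
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
m, p = 50, 80
D = rng.standard_normal((m, p))
D /= np.linalg.norm(D, axis=0)         # unit-norm atoms
x_true = np.zeros(p)
x_true[[5, 17, 42]] = [1.5, -2.0, 0.8]
y = D @ x_true                         # signal with a 3-sparse representation
x_hat = omp(D, y, k=3)
```

For a well-conditioned random dictionary at this sparsity level, OMP typically recovers the exact support and coefficients.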
Shape prior modeling using sparse representation and online dictionary learning.
Zhang, Shaoting; Zhan, Yiqiang; Zhou, Yan; Uzunbas, Mustafa; Metaxas, Dimitris N
2012-01-01
The recently proposed sparse shape composition (SSC) opens a new avenue for shape prior modeling. Instead of assuming any parametric model of shape statistics, SSC incorporates shape priors on-the-fly by approximating a shape instance (usually derived from appearance cues) by a sparse combination of shapes in a training repository. Theoretically, one can increase the modeling capability of SSC by including as many training shapes as possible in the repository. However, this strategy confronts two limitations in practice. First, since SSC involves an iterative sparse optimization at run-time, the more shape instances the repository contains, the less run-time efficiency SSC has. Therefore, a compact and informative shape dictionary is preferred to a large shape repository. Second, in medical imaging applications, training shapes seldom come in one batch. It is very time consuming and sometimes infeasible to reconstruct the shape dictionary every time new training shapes appear. In this paper, we propose an online learning method to address these two limitations. Our method starts from constructing an initial shape dictionary using the K-SVD algorithm. When new training shapes come, instead of reconstructing the dictionary from the ground up, we update the existing one using a block-coordinate descent approach. Using the dynamically updated dictionary, sparse shape composition can be gracefully scaled up to model shape priors from a large number of training shapes without sacrificing run-time efficiency. Our method is validated on lung localization in X-ray and cardiac segmentation in MRI time series. Compared to the original SSC, it shows comparable performance while being significantly more efficient.
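The block-coordinate descent dictionary update mentioned here can be sketched generically: given fixed sparse codes C for data X, each atom is updated in turn from the sufficient statistics A = C C^T and B = X C^T, then projected onto the unit ball. This is a minimal stand-in, not the authors' code, and all sizes and names are illustrative:

```python
import numpy as np

def dictionary_update(D, A, B, n_sweeps=10):
    """Block-coordinate descent over atoms: decrease ||X - D C||_F^2 over D
    with atom norms <= 1, where A = C C^T and B = X C^T."""
    D = D.copy()
    for _ in range(n_sweeps):
        for j in range(D.shape[1]):
            if A[j, j] < 1e-12:
                continue                       # atom unused by the current codes
            # exact minimizer over column j (the Hessian is A[j, j] * I) ...
            u = D[:, j] + (B[:, j] - D @ A[:, j]) / A[j, j]
            # ... projected onto the unit ball
            D[:, j] = u / max(1.0, np.linalg.norm(u))
    return D

rng = np.random.default_rng(0)
m, p, n = 20, 15, 300
D_true = rng.standard_normal((m, p))
D_true /= np.linalg.norm(D_true, axis=0)
C = rng.standard_normal((p, n)) * (rng.random((p, n)) < 0.2)   # sparse codes
X = D_true @ C
D0 = rng.standard_normal((m, p))
D0 /= np.linalg.norm(D0, axis=0)               # random initial dictionary
D1 = dictionary_update(D0, C @ C.T, X @ C.T)
err0 = np.linalg.norm(X - D0 @ C)
err1 = np.linalg.norm(X - D1 @ C)
```

Because each column update is an exact constrained minimization, the fit error is guaranteed not to increase from sweep to sweep.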
Xuan, Junyu; Lu, Jie; Zhang, Guangquan; Xu, Richard Yi Da; Luo, Xiangfeng
2018-05-01
Sparse nonnegative matrix factorization (SNMF) aims to factorize a data matrix into two optimized nonnegative sparse factor matrices, which could benefit many tasks, such as document-word co-clustering. However, the traditional SNMF typically assumes the number of latent factors (i.e., the dimensionality of the factor matrices) to be fixed. This assumption makes it inflexible in practice. In this paper, we propose a doubly sparse nonparametric NMF framework to mitigate this issue by using dependent Indian buffet processes (dIBP). We apply a correlation function for the generation of two stick weights associated with each column pair of the factor matrices while still maintaining their respective marginal distributions specified by the IBP. As a consequence, the generation of the two factor matrices will be columnwise correlated. Under this framework, two classes of correlation function are proposed: 1) using the bivariate Beta distribution and 2) using a Copula function. Compared with single IBP-based NMF, the proposed framework jointly makes the two factor matrices nonparametric and sparse, so it can be applied to broader scenarios, such as co-clustering. It is also much more flexible than Gaussian process-based and hierarchical Beta process-based dIBPs in terms of allowing the two corresponding binary matrix columns to have greater variation in their nonzero entries. Our experiments on synthetic data show the merits of the proposed framework compared with state-of-the-art models in terms of factorization efficiency, sparsity, and flexibility. Experiments on real-world data sets demonstrate its efficiency in document-word co-clustering tasks.
Hall, Amee J; Brown, Trecia A; Grahn, Jessica A; Gati, Joseph S; Nixon, Pam L; Hughes, Sarah M; Menon, Ravi S; Lomber, Stephen G
2014-03-15
When conducting auditory investigations using functional magnetic resonance imaging (fMRI), there are inherent potential confounds that need to be considered. Traditional continuous fMRI acquisition methods produce sounds >90 dB that compete with stimuli or elicit neural activation that masks evoked activity. Sparse scanning methods insert a period of reduced MRI-related noise between image acquisitions, in which a stimulus can be presented without competition. In this study, we compared sparse and continuous scanning methods to identify the optimal approach for investigating acoustically evoked cortical, thalamic, and midbrain activity in the cat. Using a 7 T magnet, we presented broadband noise, 10 kHz tones, or 0.5 kHz tones in a block design, interleaved with blocks in which no stimulus was presented. Continuous scanning resulted in larger clusters of activation and more peak voxels within the auditory cortex. However, no significant activation was observed within the thalamus. Also, there was no significant difference found, between continuous and sparse scanning, in activations of midbrain structures. Higher magnitude activations were identified in auditory cortex than in the midbrain using both continuous and sparse scanning. These results indicate that continuous scanning is the preferred method for investigations of auditory cortex in the cat using fMRI. Also, the choice of method for future investigations of midbrain activity should be driven by other experimental factors, such as stimulus intensity and task performance during scanning. Copyright © 2014 Elsevier B.V. All rights reserved.
Sparse reconstruction of liver cirrhosis from monocular mini-laparoscopic sequences
NASA Astrophysics Data System (ADS)
Marcinczak, Jan Marek; Painer, Sven; Grigat, Rolf-Rainer
2015-03-01
Mini-laparoscopy is a technique which is used by clinicians to inspect the liver surface with ultra-thin laparoscopes. However, no quantitative measurements based on mini-laparoscopic sequences have so far been possible. This paper presents a Structure from Motion (SfM) based methodology for 3D reconstruction of liver cirrhosis from mini-laparoscopic videos. The approach combines state-of-the-art tracking, pose estimation, outlier rejection, and global optimization to obtain a sparse reconstruction of the cirrhotic liver surface. Specular reflection segmentation is included in the reconstruction framework to increase the robustness of the reconstruction. The presented approach is evaluated on 15 endoscopic sequences using three cirrhotic liver phantoms. The median reconstruction accuracy ranges from 0.3 mm to 1 mm.
Modified PTS-based PAPR Reduction for FBMC-OQAM Systems
NASA Astrophysics Data System (ADS)
Deng, Honggui; Ren, Shuang; Liu, Yan; Tang, Chengying
2017-10-01
The filter bank multicarrier with offset quadrature amplitude modulation (FBMC-OQAM) has raised great concern in 5G communication research. However, FBMC-OQAM also has the inherent drawback of a high peak-to-average power ratio (PAPR) that must be addressed. Due to the overlapping structure of FBMC-OQAM signals, directly applying the conventional partial transmit sequence (PTS) scheme proposed for OFDM to FBMC-OQAM is proven to be ineffective. In this paper, we propose a modified PTS-based scheme that employs phase rotation factors to optimize only the phase of the sparse peak signals, called the sparse PTS (S-PTS) scheme. Theoretical analysis and simulation results show that the proposed S-PTS scheme provides significant PAPR reduction with lower computational complexity.
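For readers unfamiliar with PTS, the core idea can be sketched on a toy OFDM-style signal (not the paper's FBMC-OQAM scheme, and the block count, phase set, and seed are illustrative): partition the subcarriers into sub-blocks, transform each separately, and search phase factors that minimize the PAPR of the recombined signal.

```python
import numpy as np
from itertools import product

def papr_db(x):
    """Peak-to-average power ratio of a discrete-time signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

rng = np.random.default_rng(1)
N, M = 64, 4                                  # subcarriers, sub-blocks
X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)   # QPSK symbols

# Interleaved partition into M disjoint sub-blocks, transformed separately
blocks = np.zeros((M, N), dtype=complex)
for m in range(M):
    blocks[m, m::M] = X[m::M]
time_blocks = np.fft.ifft(blocks, axis=1)

# Exhaustive search over +/-1 phase factors per sub-block
best_papr, best_phases = np.inf, None
for phases in product([1, -1], repeat=M):
    candidate = np.asarray(phases) @ time_blocks   # phase-weighted recombination
    p = papr_db(candidate)
    if p < best_papr:
        best_papr, best_phases = p, phases

orig_papr = papr_db(np.fft.ifft(X))
print(f"original PAPR {orig_papr:.2f} dB, PTS PAPR {best_papr:.2f} dB")
```

Since the all-ones phase vector is in the search set, the optimized PAPR can never exceed the original; the gain comes at the cost of searching 2^M candidates (and, in a real system, transmitting the chosen phases as side information).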
Hari, Riitta
2017-06-07
Experimental data about brain function accumulate faster than does our understanding of how the brain works. To tackle some general principles at the grain level of behavior, I start from the omnipresent brain-environment connection that forces regularities of the physical world to shape the brain. Based on top-down processing, aided by sparse sensory information, people are able to form individual "caricature worlds," which are similar enough to be shared with other people and which allow quick and purposeful reactions to abrupt changes. Temporal dynamics and social interaction in natural environments serve as further essential organizing principles of human brain function. Copyright © 2017 Elsevier Inc. All rights reserved.
Rossi Espagnet, M C; Bangiyev, L; Haber, M; Block, K T; Babb, J; Ruggiero, V; Boada, F; Gonen, O; Fatterpekar, G M
2015-08-01
The pituitary gland is located outside of the blood-brain barrier. A dynamic T1-weighted contrast-enhanced sequence is considered the gold standard for evaluating this region. However, it does not allow assessment of the intrinsic permeability properties of the gland. Our aim was to demonstrate the utility of radial volumetric interpolated brain examination with the golden-angle radial sparse parallel technique to evaluate permeability characteristics of the individual components (anterior and posterior gland and the median eminence) of the pituitary gland and areas of differential enhancement, and to optimize the study acquisition time. A retrospective study was performed in 52 patients (group 1, 25 patients with normal pituitary glands; and group 2, 27 patients with a known diagnosis of microadenoma). Radial volumetric interpolated brain examination sequences with the golden-angle radial sparse parallel technique were evaluated with an ROI-based method to obtain signal-time curves and permeability measures of individual normal structures within the pituitary gland and areas of differential enhancement. Statistical analyses were performed to assess differences in the permeability parameters of these individual regions and optimize the study acquisition time. Signal-time curves from the posterior pituitary gland and median eminence demonstrated a faster wash-in and time of maximum enhancement with a lower peak of enhancement compared with the anterior pituitary gland (P < .005). Time-optimization analysis demonstrated that 120 seconds is ideal for dynamic pituitary gland evaluation. In the absence of a clinical history, differences in the signal-time curves allow easy distinction between a simple cyst and a microadenoma. 
This retrospective study confirms the ability of the golden-angle radial sparse parallel technique to evaluate the permeability characteristics of the pituitary gland and establishes 120 seconds as the ideal acquisition time for dynamic pituitary gland imaging. © 2015 by American Journal of Neuroradiology.
Relaxations to Sparse Optimization Problems and Applications
NASA Astrophysics Data System (ADS)
Skau, Erik West
Parsimony is a fundamental property that is applied to many characteristics in a variety of fields. Of particular interest are optimization problems that apply rank, dimensionality, or support in a parsimonious manner. In this thesis we study some optimization problems and their relaxations, and focus on properties and qualities of the solutions of these problems. The Gramian tensor decomposition problem attempts to decompose a symmetric tensor as a sum of rank-one tensors. We approach the Gramian tensor decomposition problem with a relaxation to a semidefinite program. We study conditions which ensure that the solution of the relaxed semidefinite problem gives the minimal Gramian rank decomposition. Sparse representations with learned dictionaries are one of the leading image modeling techniques for image restoration. When learning these dictionaries from a set of training images, the sparsity parameter of the dictionary learning algorithm strongly influences the content of the dictionary atoms. We describe geometrically the content of trained dictionaries and how it changes with the sparsity parameter. We use statistical analysis to characterize how the different content is used in sparse representations. Finally, a method to control the structure of the dictionaries is demonstrated, allowing us to learn a dictionary which can later be tailored for specific applications. Variations of dictionary learning can be broadly applied to a variety of applications. We explore a pansharpening problem with a triple factorization variant of coupled dictionary learning. Another application of dictionary learning is computer vision. Computer vision relies heavily on object detection, which we explore with a hierarchical convolutional dictionary learning model. Data fusion of disparate modalities is a growing topic of interest. We do a case study to demonstrate the benefit of using social media data with satellite imagery to estimate hazard extents. 
In this case study analysis we apply a maximum entropy model, guided by the social media data, to estimate the flooded regions during a 2013 flood in Boulder, CO and show that the results are comparable to those obtained using expert information.
Salvi, M; Velluti, C; Misasi, M; Bartolozzi, P; Quacci, D; Dell'Orbo, C
1991-04-01
Light- and electron-microscopic investigations were performed on two failed Dacron ligaments that had been removed from 2 patients shortly after failure of the implant 2-3 years after reconstruction of the anterior cruciate ligament. Two different cell populations and matrices were correlated with closeness to the Dacron threads. Fibroblasts surrounded by connective tissue with collagen fibrils were located far from the Dacron threads. Roundish cells, appearing to be myofibroblasts surrounded by a more lax connective tissue and elastic fibers, were found close to the Dacron threads. The presence of myofibroblasts and the matrix differentiation could be attributed to the different mechanical forces acting on the Dacron and on the connective tissue because of their different coefficients of elasticity. The sparse occurrence of inflammatory cells in the synovial membrane and in the connective tissue surrounding the Dacron supports the biologic inertness of this artificial material. However, the repair tissue was not structured to resist tension stresses.
Interaction of multiarmed spirals in bistable media.
He, Ya-feng; Ai, Bao-quan; Liu, Fu-cheng
2013-05-01
We study the interaction of both dense and sparse multiarmed spirals in bistable media modeled by equations of the FitzHugh-Nagumo type. A dense one-armed spiral is characterized by its fixed tip. For dense multiarmed spirals, when the initial distance between tips is less than a critical value, the arms collide, connect, and disconnect continuously as the spirals rotate. The continuous reconstruction between the front and the back drives the tips to corotate along a rough circle and to meander zigzaggedly. The rotation frequency of tip, the frequency of zigzagged displacement, the frequency of spiral, the oscillation frequency of media, and the number of arms satisfy certain relations as long as the control parameters of the model are fixed. When the initial distance between tips is larger than the critical value, the behaviors of individual arms within either dense or sparse multiarmed spirals are identical to that of corresponding one-armed spirals.
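The spiral interactions studied here take place in a 2-D spatially extended medium; as a minimal illustration of the bistability underlying such media, the space-clamped FitzHugh-Nagumo kinetics alone can be integrated. The parameter values (eps = 0.08, b = 2, no recovery offset) are an illustrative choice producing two stable states, not taken from the paper:

```python
import numpy as np

def fhn_step(u, v, dt=0.05, eps=0.08, b=2.0, I=0.0):
    """One forward-Euler step of the space-free FitzHugh-Nagumo kinetics."""
    du = u - u**3 / 3 - v + I          # fast activator
    dv = eps * (u - b * v)             # slow recovery variable
    return u + dt * du, v + dt * dv

def settle(u0, v0=0.0, steps=20000):
    """Integrate from (u0, v0) long enough to reach a stable equilibrium."""
    u, v = u0, v0
    for _ in range(steps):
        u, v = fhn_step(u, v)
    return u

# The nullclines v = u - u^3/3 and v = u/2 intersect at u = 0 (saddle)
# and u = +/- sqrt(3/2) (stable): a bistable medium in the space-free limit.
u_plus = settle(2.0)
u_minus = settle(-2.0)
```

Initial conditions on either side of the saddle relax to the two different stable rest states, the defining property of the bistable media in which these dense and sparse spirals rotate.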
3D Reconstruction of human bones based on dictionary learning.
Zhang, Binkai; Wang, Xiang; Liang, Xiao; Zheng, Jinjin
2017-11-01
An effective method for reconstructing a 3D model of human bones from computed tomography (CT) image data based on dictionary learning is proposed. In this study, the dictionary comprises the vertices of triangular meshes, and the sparse coefficient matrix indicates the connectivity information. For better reconstruction performance, we proposed a balance coefficient between the approximation and regularisation terms and a method for optimisation. Moreover, we applied a local updating strategy and a mesh-optimisation method to update the dictionary and the sparse matrix, respectively. The two updating steps are iterated alternately until the objective function converges. Thus, a reconstructed mesh could be obtained with high accuracy and regularisation. The experimental results show that the proposed method has the potential to obtain high precision and high-quality triangular meshes for rapid prototyping, medical diagnosis, and tissue engineering. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
Dispersal capacity and genetic relatedness in Acropora cervicornis on the Florida Reef Tract
NASA Astrophysics Data System (ADS)
Drury, Crawford; Paris, Claire B.; Kourafalou, Vassiliki H.; Lirman, Diego
2018-06-01
Sexual reproduction in scleractinian corals is a critical component of species recovery, fostering population connectivity and enhancing genetic diversity. The relative contribution of sexual reproduction to both connectivity and diversity in Acropora cervicornis may be variable due to this species' capacity to reproduce effectively by fragmentation. Using a biophysical model and genomic data in this threatened species, we construct potential connectivity pathways on the Florida Reef Tract (FRT) and compare them to inferred migration rates derived from next-generation sequencing, using a link- and node-based approach. Larval connectivity on the FRT can be divided into two zones: the northern region, where most transport is unidirectional to the north with the Florida Current, and the southern region, which is more dynamic and exhibits complex spatial patterns. These biophysical linkages are poorly correlated with genetic connectivity patterns, which resolve many reciprocal connections and suggest a less sparse network. These results are difficult to reconcile with genetic data, which indicate that individual reefs are diverse, suggesting important contributions of sexual reproduction and recruitment. Larval connectivity models highlight potential resources for recovery, such as areas with high larval export like the Lower Keys, or areas that are well connected to most other regions on the FRT, such as the Dry Tortugas.
Bouillon-Pichault, Marion; Jullien, Vincent; Bazzoli, Caroline; Pons, Gérard; Tod, Michel
2011-02-01
The aim of this work was to determine whether optimizing the study design in terms of ages and sampling times for a drug eliminated solely via cytochrome P450 3A4 (CYP3A4) would allow us to accurately estimate the pharmacokinetic parameters throughout the entire childhood timespan, while taking into account age- and weight-related changes. A linear monocompartmental model with first-order absorption was used successively with three different residual error models and previously published pharmacokinetic parameters ("true values"). The optimal ages were established by D-optimization using the CYP3A4 maturation function to create "optimized demographic databases." The post-dose times for each previously selected age were determined by D-optimization using the pharmacokinetic model to create "optimized sparse sampling databases." We simulated concentrations by applying the population pharmacokinetic model to the optimized sparse sampling databases to create optimized concentration databases. The latter were modeled to estimate population pharmacokinetic parameters. We then compared true and estimated parameter values. The established optimal design comprised four age ranges: 0.008 years old (i.e., around 3 days), 0.192 years old (i.e., around 2 months), 1.325 years old, and adults, with the same number of subjects per group and three or four samples per subject, in accordance with the error model. The population pharmacokinetic parameters that we estimated with this design were precise and unbiased (root mean square error [RMSE] and mean prediction error [MPE] less than 11% for clearance and distribution volume and less than 18% for k(a)), whereas the maturation parameters were unbiased but less precise (MPE < 6% and RMSE < 37%). Based on our results, taking growth and maturation into account a priori in a pediatric pharmacokinetic study is theoretically feasible. 
However, it requires that very early ages be included in studies, which may present an obstacle to the use of this approach. First-pass effects, alternative elimination routes, and combined elimination pathways should also be investigated.
Inference for High-dimensional Differential Correlation Matrices *
Cai, T. Tony; Zhang, Anru
2015-01-01
Motivated by differential co-expression analysis in genomics, we consider in this paper the estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. The minimax rate of convergence is established, and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods that are based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, of which a subset has been previously verified, are associated with breast cancer. Hypothesis testing on the differential correlation matrices is also considered. A test, which is particularly well suited for testing against sparse alternatives, is introduced. In addition, other related problems, including estimation of a single sparse correlation matrix, estimation of the differential covariance matrices, and estimation of the differential cross-correlation matrices, are also discussed. PMID:26500380
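The thresholding idea can be sketched on synthetic data. Note the paper's procedure uses entry-adaptive thresholds; the simpler universal entrywise threshold below (on the order of the sampling noise sqrt(log p (1/n1 + 1/n2))) is only a stand-in, and the constant and data sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 10, 200
X1 = rng.standard_normal((n, p))                     # group 1: independent variables
X2 = rng.standard_normal((n, p))
X2[:, 1] = X2[:, 0] + 0.3 * rng.standard_normal(n)   # group 2: variables 0, 1 co-vary

# Difference of the two sample correlation matrices, then entrywise thresholding
D = np.corrcoef(X1, rowvar=False) - np.corrcoef(X2, rowvar=False)
t = 2.0 * np.sqrt(np.log(p) * (1.0 / n + 1.0 / n))   # universal (non-adaptive) threshold
D_hat = np.where(np.abs(D) > t, D, 0.0)
```

Thresholding kills the small entries driven purely by sampling noise, leaving the genuinely differential correlation between variables 0 and 1, analogous to detecting differentially co-expressed gene pairs.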
SD-SEM: sparse-dense correspondence for 3D reconstruction of microscopic samples.
Baghaie, Ahmadreza; Tafti, Ahmad P; Owen, Heather A; D'Souza, Roshan M; Yu, Zeyun
2017-06-01
Scanning electron microscopy (SEM) imaging has been a principal component of many studies in biomedical, mechanical, and materials sciences since its emergence. Despite the high resolution of captured images, they remain two-dimensional (2D). In this work, a novel framework using sparse-dense correspondence is introduced and investigated for 3D reconstruction of stereo SEM images. SEM micrographs from microscopic samples are captured by tilting the specimen stage by a known angle. The pair of SEM micrographs is then rectified using sparse scale invariant feature transform (SIFT) features/descriptors and a contrario RANSAC for matching outlier removal to ensure a gross horizontal displacement between corresponding points. This is followed by dense correspondence estimation using dense SIFT descriptors and employing a factor graph representation of the energy minimization functional and loopy belief propagation (LBP) as means of optimization. Given the pixel-by-pixel correspondence and the tilt angle of the specimen stage during the acquisition of micrographs, depth can be recovered. Extensive tests reveal the strength of the proposed method for high-quality reconstruction of microscopic samples. Copyright © 2017 Elsevier Ltd. All rights reserved.
Parallel Finite Element Domain Decomposition for Structural/Acoustic Analysis
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.; Tungkahotara, Siroj; Watson, Willie R.; Rajan, Subramaniam D.
2005-01-01
A domain decomposition (DD) formulation for solving sparse linear systems of equations resulting from finite element analysis is presented. The formulation incorporates mixed direct and iterative equation-solving strategies and other novel algorithmic ideas that are optimized to take advantage of sparsity and exploit modern computer architecture, such as memory and parallel computing. The most time-consuming part of the formulation is identified, and the critical roles of direct sparse and iterative solvers within the framework of the formulation are discussed. Experiments on several computer platforms using several complex test matrices are conducted using software based on the formulation. Small-scale structural examples are used to validate the steps in the formulation, and large-scale (1,000,000+ unknowns) duct acoustic examples are used to evaluate the formulation on SGI Origin 2000 processors and a cluster of 6 PCs (running under the Windows environment). Statistics show that the formulation is efficient in both sequential and parallel computing environments and that it is significantly faster and consumes less memory than a formulation based on one of the best available commercial parallel sparse solvers.
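The interplay of direct and iterative sparse solvers can be illustrated on a tridiagonal model problem: the Thomas algorithm (a sparse LU factorization with no fill-in) against a matrix-free conjugate gradient iteration. This is only a toy stand-in for the abstract's much more general DD formulation; the 1-D Poisson system and sizes are illustrative:

```python
import numpy as np

def thomas(sub, diag, sup, rhs):
    """Direct solver for a tridiagonal system: forward elimination, back substitution."""
    n = len(diag)
    cp, dp = np.zeros(n - 1), np.zeros(n)
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i - 1] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def cg(matvec, b, tol=1e-10, maxiter=5000):
    """Matrix-free conjugate gradient for symmetric positive definite systems."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# 1-D Poisson system: tridiagonal(-1, 2, -1), symmetric positive definite
n = 100
b = np.ones(n)

def poisson_apply(x):              # y = A x without ever storing A
    y = 2.0 * x
    y[:-1] -= x[1:]
    y[1:] -= x[:-1]
    return y

x_direct = thomas(-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1), b)
x_iter = cg(poisson_apply, b)
```

The direct solve is exact in one pass but generalizes to fill-in and memory growth on large 3-D meshes; the iterative solve needs only matrix-vector products, which is the trade-off a mixed direct/iterative DD strategy exploits.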
Trade-offs and efficiencies in optimal budget-constrained multispecies corridor networks.
Dilkina, Bistra; Houtman, Rachel; Gomes, Carla P; Montgomery, Claire A; McKelvey, Kevin S; Kendall, Katherine; Graves, Tabitha A; Bernstein, Richard; Schwartz, Michael K
2017-02-01
Conservation biologists recognize that a system of isolated protected areas will be necessary but insufficient to meet biodiversity objectives. Current approaches to connecting core conservation areas through corridors consider optimal corridor placement based on a single optimization goal: commonly, maximizing the movement for a target species across a network of protected areas. We show that designing corridors for single species based on purely ecological criteria leads to extremely expensive linkages that are suboptimal for multispecies connectivity objectives. Similarly, acquiring the least-expensive linkages leads to ecologically poor solutions. We developed algorithms for optimizing corridors for multispecies use given a specific budget. We applied our approach in western Montana to demonstrate how the solutions may be used to evaluate trade-offs in connectivity for 2 species with different habitat requirements, different core areas, and different conservation values under different budgets. We evaluated corridors that were optimal for each species individually and for both species jointly. Incorporating a budget constraint and jointly optimizing for both species resulted in corridors that were close to the individual species movement-potential optima but with substantial cost savings. Our approach produced corridors that were within 14% and 11% of the best possible corridor connectivity for grizzly bears (Ursus arctos) and wolverines (Gulo gulo), respectively, and saved 75% of the cost. Similarly, joint optimization under a combined budget resulted in improved connectivity for both species relative to splitting the budget in 2 to optimize for each species individually. Our results demonstrate economies of scale and complementarities conservation planners can achieve by optimizing corridor designs for financial costs and for multiple species connectivity jointly. 
We believe that our approach will facilitate corridor conservation by reducing acquisition costs and by allowing derived corridors to more closely reflect conservation priorities. © 2016 Society for Conservation Biology.
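A toy brute-force sketch of the budget-constrained trade-off the abstract describes, with hypothetical parcel costs and per-species utilities (the paper uses dedicated corridor-optimization algorithms, not exhaustive search):

```python
from itertools import combinations

# Hypothetical land parcels: (cost, utility for species A, utility for species B).
parcels = [(4, 5.0, 1.0), (4, 1.0, 5.0), (3, 3.0, 3.0),
           (2, 2.5, 0.5), (2, 0.5, 2.5), (5, 4.0, 4.0)]
budget = 10

def best(parcel_ids, budget, score):
    """Exhaustively search parcel subsets within budget, maximizing `score`."""
    top, top_set = 0.0, ()
    for r in range(len(list(parcel_ids)) + 1):
        for subset in combinations(parcel_ids, r):
            if sum(parcels[i][0] for i in subset) <= budget:
                s = score(subset)
                if s > top:
                    top, top_set = s, subset
    return top, top_set

ids = list(range(len(parcels)))
joint_score = lambda s: sum(parcels[i][1] + parcels[i][2] for i in s)
a_score = lambda s: sum(parcels[i][1] for i in s)
b_score = lambda s: sum(parcels[i][2] for i in s)

# Joint optimization under the full budget...
joint, _ = best(ids, budget, joint_score)
# ...versus splitting the budget in two and optimizing per species.
a_half, sa = best(ids, budget // 2, a_score)
b_half, sb = best(ids, budget // 2, b_score)
split = sum(parcels[i][1] + parcels[i][2] for i in set(sa) | set(sb))
```

Even on this tiny instance, joint optimization under the combined budget beats splitting the budget between the species, mirroring the economies of scale reported above.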
DuFour, Mark R.; Mayer, Christine M.; Kocovsky, Patrick; Qian, Song; Warner, David M.; Kraus, Richard T.; Vandergoot, Christopher
2017-01-01
Hydroacoustic sampling of low-density fish in shallow water can lead to low sample sizes of naturally variable target strength (TS) estimates, resulting in both sparse and variable data. Increasing maximum beam compensation (BC) beyond conventional values (i.e., 3 dB beam width) can recover more targets during data analysis; however, data quality decreases near the acoustic beam edges. We identified the optimal balance between data quantity and quality with increasing BC using a standard sphere calibration, and we quantified the effect of BC on fish track variability, size structure, and density estimates of Lake Erie walleye (Sander vitreus). Standard sphere mean TS estimates were consistent with theoretical values (−39.6 dB) up to 18-dB BC, while estimates decreased at greater BC values. Natural sources (i.e., residual and mean TS) dominated total fish track variation, while contributions from measurement-related error (i.e., number of single echo detections (SEDs) and BC) were proportionally low. Increasing BC led to more fish encounters and SEDs per fish, while stability in size structure and density were observed at intermediate values (e.g., 18 dB). Detection of medium to large fish (i.e., age-2+ walleye) benefited most from increasing BC, as proportional changes in size structure and density were greatest in these size categories. Therefore, when TS data are sparse and variable, increasing BC to an optimal value (here 18 dB) will maximize the TS data quantity while limiting lower-quality data near the beam edges.
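A small sketch of the two processing conventions the study relies on: filtering single-echo detections by a maximum beam-compensation value, and averaging TS in the linear (intensity) domain before converting back to dB. The sample values are invented for illustration:

```python
import numpy as np

def mean_ts(ts_db):
    """Average target strength in the linear (intensity) domain, then
    convert back to dB, the convention for combining TS samples."""
    return 10.0 * np.log10(np.mean(10.0 ** (np.asarray(ts_db) / 10.0)))

def filter_by_compensation(ts_db, comp_db, max_comp_db):
    """Keep single-echo detections whose beam-compensation value is at
    or below the chosen maximum (e.g., the 18 dB optimum in the study)."""
    ts_db, comp_db = np.asarray(ts_db), np.asarray(comp_db)
    return ts_db[comp_db <= max_comp_db]

# Toy detections: TS samples with the off-axis compensation applied to each.
ts = [-39.0, -40.0, -41.0, -55.0]
comp = [1.0, 6.0, 17.0, 25.0]  # dB of beam compensation per detection
kept = filter_by_compensation(ts, comp, max_comp_db=18.0)
m = mean_ts(kept)
```

Raising `max_comp_db` recovers more detections (larger `kept`) at the price of admitting lower-quality echoes from near the beam edges.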
Lai, Zongying; Zhang, Xinlin; Guo, Di; Du, Xiaofeng; Yang, Yonggui; Guo, Gang; Chen, Zhong; Qu, Xiaobo
2018-05-03
Multi-contrast images in magnetic resonance imaging (MRI) provide abundant contrast information reflecting the characteristics of the internal tissues of the human body, and thus have been widely utilized in clinical diagnosis. However, long acquisition times limit the application of multi-contrast MRI. One efficient way to accelerate data acquisition is to under-sample the k-space data and then reconstruct images with a sparsity constraint. However, images are compromised at high acceleration factors if they are reconstructed individually. We aim to improve the images with a jointly sparse reconstruction and a graph-based redundant wavelet transform (GBRWT). First, a sparsifying transform, the GBRWT, is trained to reflect the similarity of tissue structures in multi-contrast images. Second, joint multi-contrast image reconstruction is formulated as an ℓ2,1-norm optimization problem under GBRWT representations. Third, the optimization problem is numerically solved using a derived alternating direction method. Experimental results on synthetic and in vivo MRI data demonstrate that the proposed joint reconstruction method can achieve lower reconstruction errors and better preserve image structures than the compared joint reconstruction methods. Moreover, the proposed method outperforms single-image reconstruction with a joint sparsity constraint of multi-contrast images. The proposed method explores the joint sparsity of multi-contrast MRI images under the graph-based redundant wavelet transform and realizes joint sparse reconstruction of multi-contrast images. Experiments demonstrate that the proposed method outperforms the compared joint reconstruction methods as well as individual reconstructions. With this high-quality image reconstruction method, it is possible to achieve higher acceleration factors by exploiting the complementary information provided by multi-contrast MRI.
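The workhorse of ℓ2,1-norm joint reconstruction is row-wise soft thresholding, the proximal operator of the ℓ2,1 norm. A minimal sketch (one row = one transform coefficient across all contrasts; not the paper's full alternating-direction solver):

```python
import numpy as np

def prox_l21(X, tau):
    """Proximal operator of tau * ||X||_{2,1}: row-wise soft thresholding.

    Each row is shrunk toward zero jointly, so a coefficient is kept or
    discarded for all contrasts at once, which is the mechanism behind
    joint sparse reconstruction.
    """
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * X

# Rows with small joint energy vanish entirely; large rows merely shrink.
X = np.array([[3.0, 4.0],    # norm 5 -> scaled by (1 - 1/5)
              [0.3, 0.4]])   # norm 0.5 < tau -> zeroed
Y = prox_l21(X, tau=1.0)
```

Inside an alternating-direction scheme this operator alternates with a data-consistency step on the under-sampled k-space measurements.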
NASA Astrophysics Data System (ADS)
Zhang, Wenkun; Zhang, Hanming; Wang, Linyuan; Cai, Ailong; Li, Lei; Yan, Bin
2018-02-01
Limited-angle computed tomography (CT) reconstruction is widely performed in medical diagnosis and industrial testing because of the size of objects, engine/armor inspection requirements, and limited scan flexibility. Limited-angle reconstruction necessitates the use of optimization-based methods that utilize additional sparse priors. However, most conventional methods solely exploit sparsity priors of the spatial domain. When the CT projection suffers from serious data deficiency or various noises, obtaining reconstructed images that meet quality requirements becomes difficult and challenging. To solve this problem, this paper develops an adaptive reconstruction method for the limited-angle CT problem. The proposed method simultaneously uses a spatial- and Radon-domain regularization model based on total variation (TV) and a data-driven tight frame. The data-driven tight frame, derived from wavelet transformation, aims at exploiting sparsity priors of the sinogram in the Radon domain. Unlike existing works that utilize a pre-constructed sparse transformation, the framelets of the data-driven regularization model can be adaptively learned from the latest projection data during iterative reconstruction to provide optimal sparse approximations for a given sinogram. At the same time, an effective alternating direction method is designed to solve the simultaneous spatial- and Radon-domain regularization model. Experiments on both simulated and real data demonstrate that the proposed algorithm performs better in artifact suppression and detail preservation than algorithms using a spatial-domain regularization model alone. Quantitative evaluations of the results also indicate that the proposed algorithm with the learning strategy performs better than dual-domain algorithms without a learned regularization model.
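One ingredient of the model above, the spatial-domain TV prior, in its simplest anisotropic form (sum of absolute finite differences); the full dual-domain, learned tight-frame model is not reproduced here:

```python
import numpy as np

def tv_aniso(u):
    """Anisotropic total variation: sum of absolute finite differences
    along both image axes, the spatial-domain sparsity prior."""
    return (np.abs(np.diff(u, axis=0)).sum()
            + np.abs(np.diff(u, axis=1)).sum())

flat = np.ones((8, 8))                       # piecewise-constant: TV = 0
step = np.zeros((8, 8)); step[:, 4:] = 1.0   # one vertical edge
tv_flat = tv_aniso(flat)
tv_step = tv_aniso(step)
```

TV vanishes on constant regions and grows with edge length, which is why minimizing it suppresses streak artifacts while keeping sharp boundaries.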
Energy Efficient GNSS Signal Acquisition Using Singular Value Decomposition (SVD).
Bermúdez Ordoñez, Juan Carlos; Arnaldo Valdés, Rosa María; Gómez Comendador, Fernando
2018-05-16
A significant challenge in global navigation satellite system (GNSS) signal processing is the requirement for a very high sampling rate. The recently emerging compressed sensing (CS) theory makes processing GNSS signals at a low sampling rate possible if the signal has a sparse representation in a certain space. Based on CS and SVD theories, an algorithm for sampling GNSS signals at a rate much lower than the Nyquist rate and reconstructing the compressed signal is proposed in this research; it is validated by confirming that the reconstructed output still supports signal detection using the standard fast Fourier transform (FFT) parallel frequency-space search acquisition. Sparse representation of the GNSS signal is the most important precondition for CS; it is achieved by constructing a rectangular Toeplitz matrix (TZ) from the transmitted signal and calculating its left singular vectors using the SVD. Next, M-dimensional observation vectors are obtained from the left singular vectors, which are equivalent to the sampling operator in standard compressive sensing theory; the signal can thus be sampled below the Nyquist rate and still be reconstructed accurately via ℓ1 minimization using convex optimization. As an added value, there is a GNSS signal acquisition enhancement effect: projecting the signal onto the most significant proper orthogonal decomposition (POD) modes, which are the optimal distributions of signal power, retains the useful signal and filters out noise. The algorithm is validated with real recorded signals, and the results show that the proposed method is effective for sampling and reconstructing intermediate frequency (IF) GNSS signals in the discrete time domain.
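A generic sketch of the sub-Nyquist CS step described here, using a random Gaussian sampler and greedy orthogonal matching pursuit (OMP) in place of the paper's SVD-derived sampler and ℓ1 convex program:

```python
import numpy as np

rng = np.random.default_rng(1)

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedy sparse recovery, a stand-in
    for the l1-minimization recovery in the abstract."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

# A k-sparse vector measured far below the ambient dimension (M << N),
# mimicking sub-Nyquist sampling of a signal sparse in some basis.
N, M, k = 100, 40, 4
x_true = np.zeros(N)
x_true[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random sampler stand-in
y = Phi @ x_true                                  # M compressed measurements
x_hat = omp(Phi, y, k)
```

With 40 measurements of a 100-dimensional, 4-sparse vector, the greedy solver recovers the signal essentially exactly, the same phenomenon the SVD-based sampler exploits.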
Infrared moving small target detection based on saliency extraction and image sparse representation
NASA Astrophysics Data System (ADS)
Zhang, Xiaomin; Ren, Kan; Gao, Jin; Li, Chaowei; Gu, Guohua; Wan, Minjie
2016-10-01
Moving small target detection in infrared images is a crucial technique of infrared search and tracking systems. This paper presents a novel small target detection technique based on frequency-domain saliency extraction and image sparse representation. First, we exploit the features of the Fourier spectrum image and the magnitude spectrum of the Fourier transform to make a rough extraction of salient regions, and use threshold segmentation to separate the regions that look salient from the background, yielding a binary image. Second, a new patch-image model and an over-complete dictionary are introduced into the detection system; infrared small target detection is thereby converted into an optimization problem of reconstructing patch-image information under a sparse representation. More specifically, the test image and the binary image are decomposed into image patches following certain rules. We select potential target areas according to the binary patch-image, which contains salient-region information, and then exploit the over-complete infrared small-target dictionary to reconstruct the test-image blocks that may contain targets; the coefficients of a target image patch satisfy the sparsity condition. Finally, for image sequences, the Euclidean distance is used to reduce the false-alarm ratio and increase the detection accuracy of moving small targets in infrared images, owing to the correlation of target positions between frames.
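A sketch of frequency-domain saliency extraction in the spirit of the first stage, using the spectral-residual idea (smooth the log-amplitude spectrum, keep the phase, transform back); the exact spectral features and the sparse-representation stage of the paper are not reproduced:

```python
import numpy as np

def spectral_residual_saliency(img, avg=3):
    """Frequency-domain saliency: subtract a box-filtered log-amplitude
    spectrum from the original, keep the phase, and invert. Compact
    bright targets stand out in the resulting map."""
    F = np.fft.fft2(img)
    log_amp = np.log(np.abs(F) + 1e-3)
    phase = np.angle(F)
    pad = avg // 2
    padded = np.pad(log_amp, pad, mode='edge')
    smooth = np.zeros_like(log_amp)
    H, W = log_amp.shape
    for i in range(H):                       # box-filter the log spectrum
        for j in range(W):
            smooth[i, j] = padded[i:i + avg, j:j + avg].mean()
    residual = log_amp - smooth
    return np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2

# Synthetic frame: flat background with one dim 2x2 target.
img = np.full((32, 32), 0.2)
img[10:12, 20:22] += 1.0
sal = spectral_residual_saliency(img)
peak = np.unravel_index(np.argmax(sal), sal.shape)
```

Thresholding such a map is what yields the binary image that guides patch selection in the second stage.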
Mdluli, Thembi; Buzzard, Gregery T; Rundell, Ann E
2015-09-01
This model-based design of experiments (MBDOE) method determines the input magnitudes of experimental stimuli to apply and the associated measurements that should be taken to optimally constrain the uncertain dynamics of a biological system under study. The ideal global solution for this experiment design problem is generally computationally intractable because of parametric uncertainties in the mathematical model of the biological system. Others have addressed this issue by limiting the solution to a local estimate of the model parameters. Here we present an approach that is independent of the local parameter constraint. This approach is made computationally efficient and tractable by the use of: (1) sparse grid interpolation that approximates the biological system dynamics, (2) representative parameters that uniformly represent the data-consistent dynamical space, and (3) probability weights of the represented experimentally distinguishable dynamics. Our approach identifies data-consistent representative parameters using sparse grid interpolants, constructs the optimal input sequence from a greedy search, and defines the associated optimal measurements using a scenario tree. We explore the optimality of this MBDOE algorithm using a 3-dimensional Hes1 model and a 19-dimensional T-cell receptor model. The 19-dimensional T-cell model also demonstrates the MBDOE algorithm's scalability to higher dimensions. In both cases, the dynamical uncertainty region that bounds the trajectories of the target system states was reduced by as much as 86% and 99%, respectively, after completing the designed experiments in silico. Our results suggest that for resolving dynamical uncertainty, the ability to design an input sequence paired with its associated measurements is particularly important when limited by the number of measurements.
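A toy sketch of the greedy design principle, choosing the measurement where an ensemble of data-consistent dynamics disagrees most. The exponential-decay model and the plain parameter ensemble are hypothetical stand-ins for the paper's models and sparse-grid surrogate:

```python
import numpy as np

rng = np.random.default_rng(5)

# Ensemble of data-consistent representative parameters for a toy
# exponential-decay model y(t) = exp(-k t).
ks = rng.uniform(0.2, 1.0, size=200)
candidate_times = np.linspace(0.5, 5.0, 10)

def predicted(t):
    """Ensemble predictions at measurement times t (shape: 200 x len(t))."""
    return np.exp(-np.outer(ks, t))

# Greedy design: measure where the experimentally distinguishable
# dynamics spread the most, since that measurement discriminates best
# among data-consistent parameters.
spreads = predicted(candidate_times).std(axis=0)
best_t = candidate_times[np.argmax(spreads)]
```

Very early times (all trajectories near 1) and very late times (all near 0) are uninformative; the spread, and hence the chosen measurement, peaks in between.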
Design and optimization of color lookup tables on a simplex topology.
Monga, Vishal; Bala, Raja; Mo, Xuan
2012-04-01
An important computational problem in color imaging is the design of color transforms that map color between devices or from a device-dependent space (e.g., RGB/CMYK) to a device-independent space (e.g., CIELAB) and vice versa. Real-time processing constraints entail that such nonlinear color transforms be implemented using multidimensional lookup tables (LUTs). Furthermore, relatively sparse LUTs (with efficient interpolation) are employed in practice because of storage and memory constraints. This paper presents a principled design methodology rooted in constrained convex optimization to design color LUTs on a simplex topology. The use of n-simplexes, i.e., simplexes in n dimensions, as opposed to traditional lattices, has recently been of great interest in color LUT design because simplex topologies allow both more analytically tractable formulations and greater efficiency in the LUT. In this framework of n-simplex interpolation, our central contribution is to develop an elegant iterative algorithm that jointly optimizes the placement of nodes of the color LUT and the output values at those nodes to minimize interpolation error in an expected sense. This is in contrast to existing work, which exclusively designs either node locations or the output values. We also develop new analytical results for the problem of node location optimization, which reduces to constrained optimization of a large but sparse interpolation matrix in our framework. We evaluate our n-simplex color LUTs against the state-of-the-art lattice (e.g., International Color Consortium profiles) and simplex-based techniques for approximating two representative multidimensional color transforms that characterize a CMYK xerographic printer and an RGB scanner, respectively. The results show that color LUTs designed on simplexes offer very significant benefits over traditional lattice-based alternatives in improving color transform accuracy even with a much smaller number of nodes.
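The interpolation primitive underlying such LUTs, linear (barycentric) interpolation on a 2-simplex, in a minimal sketch; the paper's contribution is the joint optimization of node placement and outputs, not this primitive:

```python
import numpy as np

def simplex_interp(tri, vals, p):
    """Barycentric interpolation on a 2-simplex: solve for the barycentric
    coordinates of p and blend the node output values."""
    T = np.column_stack([tri[1] - tri[0], tri[2] - tri[0]])
    lam12 = np.linalg.solve(T, p - tri[0])
    lam = np.array([1.0 - lam12.sum(), *lam12])
    return lam @ vals, lam

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # LUT node positions
vals = np.array([0.0, 2.0, 4.0])   # node outputs for one output channel
v_vert, _ = simplex_interp(tri, vals, np.array([1.0, 0.0]))
v_mid, lam_mid = simplex_interp(tri, vals, np.array([1/3, 1/3]))
```

Interpolation at a vertex returns that node's value exactly, and any affine transform is reproduced exactly; interpolation error therefore comes only from the transform's nonlinearity between nodes, which is what node placement optimization targets.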
Robust quantum optimizer with full connectivity
Nigg, Simon E.; Lörch, Niels; Tiwari, Rakesh P.
2017-01-01
Quantum phenomena have the potential to speed up the solution of hard optimization problems. For example, quantum annealing, based on the quantum tunneling effect, has recently been shown to scale exponentially better with system size than classical simulated annealing. However, current realizations of quantum annealers with superconducting qubits face two major challenges. First, the connectivity between the qubits is limited, excluding many optimization problems from a direct implementation. Second, decoherence degrades the success probability of the optimization. We address both of these shortcomings and propose an architecture in which the qubits are robustly encoded in continuous variable degrees of freedom. By leveraging the phenomenon of flux quantization, all-to-all connectivity with sufficient tunability to implement many relevant optimization problems is obtained without overhead. Furthermore, we demonstrate the robustness of this architecture by simulating the optimal solution of a small instance of the nondeterministic polynomial-time hard (NP-hard) and fully connected number partitioning problem in the presence of dissipation. PMID:28435880
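For reference, the classical problem the architecture is simulated on: number partitioning is equivalent to a fully connected Ising model, minimizing the squared signed sum over spin assignments. A brute-force sketch on a tiny hypothetical instance:

```python
from itertools import product

# Number partitioning: split a set into two groups with equal sums,
# i.e., find spins s_i in {+1, -1} minimizing (sum_i s_i * a_i)^2,
# a fully connected Ising problem (hence the need for all-to-all coupling).
a = [4, 7, 1, 5, 3, 2]
best = min(abs(sum(s * x for s, x in zip(signs, a)))
           for signs in product([1, -1], repeat=len(a)))
```

This instance admits a perfect partition (4 + 7 = 1 + 5 + 3 + 2), so the optimal Ising energy is zero; brute force scales as 2^n, which is why quantum annealers are of interest.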
ERIC Educational Resources Information Center
Selhorst, Adam L.; Klein, Eric; Harrison, Justin
2017-01-01
Research addressing the effects of cohort size on student success in asynchronous online discussions is sparse. As such, the following study attempted to determine an optimal student cohort size to enhance success and engagement within online discussions in general education courses at a large post-secondary university consisting of predominately…
Cityscape genetics: structural vs. functional connectivity of an urban lizard population.
Beninde, Joscha; Feldmeier, Stephan; Werner, Maike; Peroverde, Daniel; Schulte, Ulrich; Hochkirch, Axel; Veith, Michael
2016-10-01
Functional connectivity is essential for the long-term persistence of populations. However, many studies assess connectivity with a focus on structural connectivity only. Cityscapes, namely urban landscapes, are particularly dynamic and include numerous potential anthropogenic barriers to animal movements, such as roads, traffic or buildings. To assess and compare structural connectivity of habitats and functional connectivity of gene flow of an urban lizard, we here combined species distribution models (SDMs) with an individual-based landscape genetic optimization procedure. The most important environmental factors of the SDMs are structural diversity and substrate type, with high and medium levels of structural diversity as well as open and rocky/gravel substrates contributing most to structural connectivity. By contrast, water cover was the best model of all environmental factors following landscape genetic optimization. The river is thus a major barrier to gene flow, while of the typical anthropogenic factors only buildings showed an effect. Nonetheless, using SDMs as a basis for landscape genetic optimization provided the highest ranked model for functional connectivity. Optimizing SDMs in this way can provide a sound basis for models of gene flow of the cityscape, and elsewhere, while presence-only and presence-absence modelling approaches showed differences in performance. Additionally, interpretation of results based on SDM factor importance can be misleading, dictating more thorough analyses following optimization of SDMs. Such approaches can be adopted for management strategies, for example aiming to connect native common wall lizard populations or disconnect them from non-native introduced populations, which are currently spreading in many cities in Central Europe. © 2016 John Wiley & Sons Ltd.
Split Bregman's optimization method for image construction in compressive sensing
NASA Astrophysics Data System (ADS)
Skinner, D.; Foo, S.; Meyer-Bäse, A.
2014-05-01
The theory of compressive sampling (CS) was reintroduced by Candes, Romberg and Tao, and D. Donoho in 2006. Using a priori knowledge that a signal is sparse, it has been mathematically proven that CS can defy the Nyquist sampling theorem. Theoretically, reconstruction of a CS image relies on minimization and optimization techniques to solve this complex, almost NP-complete problem. There are many paths to consider when compressing and reconstructing an image, but these methods have remained untested and unclear on natural images, such as underwater sonar images. The goal of this research is to perfectly reconstruct the original sonar image from a sparse signal while maintaining pertinent information, such as mine-like objects, in side-scan sonar (SSS) images. Goldstein and Osher have shown how to use an iterative method to reconstruct the original image through a method called Split Bregman iteration. This method "decouples" the energies using portions of the energy from both the ℓ1 and ℓ2 norms. Once the energies are split, Bregman iteration is used to solve the unconstrained optimization problem by recursively solving the subproblems simultaneously. The faster these two steps can be solved, the faster the overall method becomes. While the majority of CS research is still focused on the medical field, this paper demonstrates the effectiveness of Split Bregman methods on sonar images.
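A minimal 1D sketch of the Split Bregman decoupling described above, applied to TV denoising: the ℓ1 term is split off via an auxiliary variable d = Du, then a linear solve, a shrinkage step, and a Bregman update alternate. The 2D sonar setting is not reproduced:

```python
import numpy as np

def split_bregman_tv1d(f, lam=1.0, mu=5.0, iters=100):
    """Split Bregman for 1D TV denoising:
        min_u 0.5 * ||u - f||^2 + lam * ||D u||_1
    The l1 and l2 energies are decoupled via d = D u, then solved by
    alternating a quadratic solve, shrinkage, and a Bregman update."""
    n = len(f)
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]     # forward differences
    A = np.eye(n) + mu * D.T @ D                 # u-subproblem matrix
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    u = f.copy()
    for _ in range(iters):
        u = np.linalg.solve(A, f + mu * D.T @ (d - b))          # l2 step
        w = D @ u + b
        d = np.sign(w) * np.maximum(np.abs(w) - lam / mu, 0.0)  # l1 shrink
        b = w - d                                               # Bregman update
    return u

rng = np.random.default_rng(2)
clean = np.concatenate([np.zeros(30), 4.0 * np.ones(30)])
noisy = clean + 0.5 * rng.standard_normal(60)
denoised = split_bregman_tv1d(noisy)
err_noisy = np.linalg.norm(noisy - clean)
err_den = np.linalg.norm(denoised - clean)
```

Because each subproblem is cheap (one sparse linear solve and one pointwise shrinkage), the overall iteration is fast, which is the appeal of the method for large images.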
Network architecture of the cerebral nuclei (basal ganglia) association and commissural connectome.
Swanson, Larry W; Sporns, Olaf; Hahn, Joel D
2016-10-04
The cerebral nuclei form the ventral division of the cerebral hemisphere and are thought to play an important role in neural systems controlling somatic movement and motivation. Network analysis was used to define global architectural features of intrinsic cerebral nuclei circuitry in one hemisphere (association connections) and between hemispheres (commissural connections). The analysis was based on more than 4,000 reports of histologically defined axonal connections involving all 45 gray matter regions of the rat cerebral nuclei and revealed the existence of four asymmetrically interconnected modules. The modules form four topographically distinct longitudinal columns that only partly correspond to previous interpretations of cerebral nuclei structure-function organization. The network of connections within and between modules in one hemisphere or the other is quite dense (about 40% of all possible connections), whereas the network of connections between hemispheres is weak and sparse (only about 5% of all possible connections). Particularly highly interconnected regions (rich club and hubs within it) form a topologically continuous band extending through two of the modules. Connection path lengths among numerous pairs of regions, and among some of the network's modules, are relatively long, thus accounting for low global efficiency in network communication. These results provide a starting point for reexamining the connectional organization of the cerebral hemispheres as a whole (right and left cerebral cortex and cerebral nuclei together) and their relation to the rest of the nervous system.
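A minimal sketch of two of the network measures used above, connection density and global efficiency (mean inverse shortest-path length), on a toy directed graph; the study's 45-region connectome is not reproduced:

```python
import numpy as np
from collections import deque

def density_and_efficiency(adj):
    """Connection density and global efficiency of a directed graph
    given as a boolean adjacency matrix (BFS shortest paths)."""
    n = len(adj)
    density = adj.sum() / (n * (n - 1))
    inv_d = 0.0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.flatnonzero(adj[u]):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        inv_d += sum(1.0 / d for t, d in dist.items() if t != s)
    return density, inv_d / (n * (n - 1))

# Toy 4-node directed cycle: sparse, with long paths between some pairs,
# hence low global efficiency despite full reachability.
adj = np.array([[0, 1, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1],
                [1, 0, 0, 0]], dtype=bool)
dens, eff = density_and_efficiency(adj)
```

Long path lengths between region pairs drive the efficiency term down, which is the quantitative sense in which the commissural network's communication is inefficient.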
Stochastic simulation and robust design optimization of integrated photonic filters
NASA Astrophysics Data System (ADS)
Weng, Tsui-Wei; Melati, Daniele; Melloni, Andrea; Daniel, Luca
2017-01-01
Manufacturing variations are becoming an unavoidable issue in modern fabrication processes; therefore, it is crucial to be able to include stochastic uncertainties in the design phase. In this paper, integrated photonic coupled ring resonator filters are considered as an example of significant interest. The sparsity structure in photonic circuits is exploited to construct a sparse combined generalized polynomial chaos model, which is then used to analyze related statistics and perform robust design optimization. Simulation results show that the optimized circuits are more robust to fabrication process variations and achieve a reduction of 11%-35% in the mean square errors of the 3 dB bandwidth compared to unoptimized nominal designs.
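A 1D sketch of the polynomial chaos machinery behind such stochastic models: project a smooth response of a standard normal parameter onto probabilists' Hermite polynomials via Gauss-Hermite quadrature, then read mean and variance off the coefficients. The quadratic response is a hypothetical stand-in for a photonic circuit quantity such as the 3 dB bandwidth:

```python
import math
import numpy as np

# Gauss-Hermite (probabilists') quadrature for a standard normal input.
nodes, weights = np.polynomial.hermite_e.hermegauss(20)
weights = weights / weights.sum()        # normalize to a probability measure

def pce_coeffs(f, order):
    """Project f(x), x ~ N(0,1), onto probabilists' Hermite polynomials:
    c_k = E[f(x) He_k(x)] / k!   (He_k has squared norm k! under N(0,1))."""
    cs = []
    for k in range(order + 1):
        He_k = np.polynomial.hermite_e.hermeval(nodes, [0.0] * k + [1.0])
        cs.append(np.sum(weights * f(nodes) * He_k) / math.factorial(k))
    return np.array(cs)

# Hypothetical smooth response vs. a normalized process parameter.
f = lambda x: 1.0 + 0.5 * x + 0.25 * x**2
c = pce_coeffs(f, order=4)
mean = c[0]                              # E[f] is the zeroth coefficient
variance = sum(math.factorial(k) * c[k] ** 2 for k in range(1, 5))
```

The generalized, sparse, multi-parameter version used in the paper follows the same projection principle, with sparsity exploited to keep the number of coefficients tractable.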
Color normalization of histology slides using graph regularized sparse NMF
NASA Astrophysics Data System (ADS)
Sha, Lingdao; Schonfeld, Dan; Sethi, Amit
2017-03-01
Computer-based automatic medical image processing and quantification are becoming popular in digital pathology. However, the preparation of histology slides can vary widely due to differences in staining equipment, procedures, and reagents, which can reduce the accuracy of algorithms that analyze their color and texture information. To reduce these unwanted color variations, various supervised and unsupervised color normalization methods have been proposed. Compared with supervised methods, unsupervised color normalization methods have the advantages of time and cost efficiency and universal applicability. Most unsupervised color normalization methods for histology are based on stain separation. Because stain concentrations cannot be negative and different parts of the tissue absorb different stains, nonnegative matrix factorization (NMF), and in particular its sparse version (SNMF), are good candidates for stain separation. However, most existing unsupervised color normalization methods, such as PCA, ICA, NMF, and SNMF, fail to consider the sparse manifolds that image pixels occupy, which can result in a loss of texture information during color normalization. Manifold learning methods such as the graph Laplacian have proven very effective in interpreting high-dimensional data. In this paper, we propose a novel unsupervised stain separation method called graph regularized sparse nonnegative matrix factorization (GSNMF). By considering the sparse prior on stain concentration together with manifold information from the high-dimensional image data, our method achieves better stain color deconvolution than existing unsupervised color deconvolution methods, especially in preserving connected texture information. To utilize the texture information, we construct a nearest-neighbor graph between pixels within a spatial area of an image based on their distances, using a heat kernel in lαβ color space. 
The representation of a pixel in the stain density space is constrained to follow the feature distances from that pixel to the pixels in its neighborhood graph. Combining a color matrix transfer method with the stain concentrations found by GSNMF, our method also outperforms existing methods in color normalization.
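A minimal numpy sketch of the core idea, not the authors' implementation: multiplicative NMF updates with an added L1 term (sparsity) and a graph Laplacian term (manifold smoothness) on the stain-concentration matrix H. The toy data, graph, and parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def gsnmf(V, k, A, lam=0.1, mu=0.1, iters=200, eps=1e-9):
    """Toy graph-regularized sparse NMF: V ~ W @ H with an L1 penalty on H
    and a graph penalty tr(H @ L @ H.T), where L = D - A is the Laplacian
    of a pixel-affinity graph A."""
    m, n = V.shape
    W = rng.random((m, k)) + 0.1
    H = rng.random((k, n)) + 0.1
    D = np.diag(A.sum(axis=1))  # degree matrix of the pixel graph
    for _ in range(iters):
        W *= (V @ H.T) / (W @ (H @ H.T) + eps)
        # The graph term enters the multiplicative H-update split into its
        # positive (H @ A) and negative (H @ D) parts; lam drives sparsity.
        H *= (W.T @ V + mu * H @ A) / ((W.T @ W) @ H + mu * H @ D + lam + eps)
    return W, H

# Toy "image": 4 color features x 5 pixels, fully connected pixel graph.
V = rng.random((4, 5)) + 0.1
A = np.ones((5, 5)) - np.eye(5)
W, H = gsnmf(V, k=2, A=A)
err = np.linalg.norm(V - W @ H)
```

In the stain-separation reading, the columns of W play the role of stain colors and the rows of H the per-pixel stain concentrations.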
NASA Astrophysics Data System (ADS)
Bouchet, L.; Amestoy, P.; Buttari, A.; Rouet, F.-H.; Chauvin, M.
2013-02-01
Nowadays, analyzing and reducing the ever larger astronomical datasets is becoming a crucial challenge, especially for long cumulated observation times. The INTEGRAL/SPI X/γ-ray spectrometer is an instrument for which it is essential to process many exposures at the same time in order to increase the low signal-to-noise ratio of the weakest sources. In this context, the conventional methods for data reduction are inefficient and sometimes not feasible at all. Processing several years of data simultaneously requires computing not only the solution of a large system of equations, but also the associated uncertainties. We aim at reducing the computation time and the memory usage. Since the SPI transfer function is sparse, we have used some popular methods for the solution of large sparse linear systems; we briefly review these methods. We use the Multifrontal Massively Parallel Solver (MUMPS) to compute the solution of the system of equations. We also need to compute the variance of the solution, which amounts to computing selected entries of the inverse of the sparse matrix corresponding to our linear system. This can be achieved through one of the latest features of the MUMPS software that has been partly motivated by this work. In this paper we provide a brief presentation of this feature and evaluate its effectiveness on astrophysical problems requiring the processing of large datasets simultaneously, such as the study of the entire emission of the Galaxy. We used these algorithms to solve the large sparse systems arising from SPI data processing and to obtain both their solutions and the associated variances. In conclusion, thanks to these newly developed tools, processing large datasets arising from SPI is now feasible with both a reasonable execution time and a low memory usage.
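The computational pattern can be sketched with scipy's sparse LU standing in for MUMPS on a toy 3x3 system (the real SPI systems are far larger): solve A x = b once, then obtain the variances of the solution as diagonal entries of A^{-1}. Forming those entries column-by-column, as below, is exactly the cost that the selected-inverse feature of MUMPS avoids.

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Toy symmetric positive-definite sparse system A x = b, standing in for
# the large linear systems arising from SPI data processing.
A = csc_matrix(np.array([[4., 1., 0.],
                         [1., 3., 1.],
                         [0., 1., 2.]]))
b = np.array([1., 2., 0.])

lu = splu(A)      # sparse LU factorization (MUMPS plays this role at scale)
x = lu.solve(b)   # solution of the linear system

# Variances of the solution are diagonal entries of A^{-1}.  The naive
# route below solves one system per entry; a selected-inverse computation
# extracts only the wanted entries from the factors instead.
n = A.shape[0]
var = np.array([lu.solve(np.eye(n)[:, j])[j] for j in range(n)])
```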
Zhang, Guoqing; Sun, Huaijiang; Xia, Guiyu; Sun, Quansen
2016-07-07
Sparse representation based classification (SRC) has been developed and has shown great potential for real-world applications. Based on SRC, Yang et al. [10] devised an SRC-steered discriminative projection (SRC-DP) method. However, as a linear algorithm, SRC-DP cannot handle data with highly nonlinear distributions. The kernel sparse representation-based classifier (KSRC) is a nonlinear extension of SRC that can remedy this drawback, but KSRC requires a predetermined kernel function, and selecting the kernel function and its parameters is difficult. Recently, multiple kernel learning for SRC (MKL-SRC) [22] has been proposed to learn a kernel from a set of base kernels. However, when learning the kernel weights, MKL-SRC considers only the within-class reconstruction residual and ignores the between-class relationship. In this paper, we propose a novel multiple kernel sparse representation-based classifier (MKSRC), and then use it as a criterion to design a multiple kernel sparse representation based orthogonal discriminative projection method (MK-SR-ODP). The proposed algorithm aims to learn a projection matrix and a corresponding kernel from the given base kernels such that, in the low-dimensional subspace, the between-class reconstruction residual is maximized and the within-class reconstruction residual is minimized. Furthermore, to achieve a minimum overall loss when performing recognition in the learned low-dimensional subspace, we introduce cost information into the dimensionality reduction method. The solution to the proposed method can be found efficiently using the trace ratio optimization method [33]. Extensive experimental results demonstrate the superiority of the proposed algorithm over state-of-the-art methods.
The Joker: A custom Monte Carlo sampler for binary-star and exoplanet radial velocity data
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.; Hogg, David W.; Foreman-Mackey, Daniel; Rix, Hans-Walter
2017-01-01
Given sparse or low-quality radial-velocity measurements of a star, there are often many qualitatively different stellar or exoplanet companion orbit models that are consistent with the data. The consequent multimodality of the likelihood function leads to extremely challenging search, optimization, and MCMC posterior sampling over the orbital parameters. The Joker is a custom-built Monte Carlo sampler that can produce a posterior sampling for orbital parameters given sparse or noisy radial-velocity measurements, even when the likelihood function is poorly behaved. The method produces correct samplings in orbital parameters for data that include as few as three epochs. The Joker can therefore be used to produce proper samplings of multimodal pdfs, which are still highly informative and can be used in hierarchical (population) modeling.
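A heavily simplified sketch of the rejection-sampling idea, assuming a circular-orbit model and made-up epochs (the actual The Joker handles full Keplerian orbits and marginalizes the linear parameters analytically): draw orbital parameters from the prior and keep each draw with probability proportional to its likelihood, so multimodal posteriors are represented faithfully.

```python
import numpy as np

rng = np.random.default_rng(42)

# Three sparse radial-velocity epochs (days, m/s); values are invented.
t = np.array([0.0, 11.3, 27.8])
rv = np.array([8.2, -4.5, 6.9])
sigma = 1.0

# Prior draws over the nonlinear parameters of a circular-orbit model:
# period P (log-uniform over 1-100 d), phase, and semi-amplitude K.
n = 50_000
P = 10.0 ** rng.uniform(0.0, 2.0, n)
phi = rng.uniform(0.0, 2.0 * np.pi, n)
K = rng.uniform(0.0, 20.0, n)

model = K[:, None] * np.sin(2.0 * np.pi * t[None, :] / P[:, None] + phi[:, None])
loglike = -0.5 * np.sum(((rv[None, :] - model) / sigma) ** 2, axis=1)

# Rejection step: keep each prior draw with probability L / L_max, which
# turns prior samples into (possibly multimodal) posterior samples.
keep = rng.uniform(size=n) < np.exp(loglike - loglike.max())
posterior_P = P[keep]
```

Because acceptance depends only on the likelihood ratio, disjoint period modes survive in `posterior_P` rather than being collapsed by a local optimizer.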
Sparse and stable Markowitz portfolios.
Brodie, Joshua; Daubechies, Ingrid; De Mol, Christine; Giannone, Domenico; Loris, Ignace
2009-07-28
We consider the problem of portfolio selection within the classical Markowitz mean-variance framework, reformulated as a constrained least-squares regression problem. We propose to add to the objective function a penalty proportional to the sum of the absolute values of the portfolio weights. This penalty regularizes (stabilizes) the optimization problem, encourages sparse portfolios (i.e., portfolios with only a few active positions), and allows accounting for transaction costs. Our approach recovers the no-short-positions portfolio as a special case but also allows for a limited number of short positions. We implement this methodology on two benchmark data sets constructed by Fama and French. Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by the Sharpe ratio, is consistently and significantly better than that of the naïve evenly weighted portfolio.
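The penalized least-squares core of the approach can be sketched with proximal gradient descent (ISTA) on synthetic returns; the data, target, and penalty weight below are invented, and the actual method adds budget and target-return constraints.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy returns: T days x N assets; y is the per-period target return,
# mirroring the least-squares reformulation of the Markowitz problem.
T, N = 200, 10
X = rng.normal(0.001, 0.02, (T, N))
y = np.full(T, 0.001)

def ista(X, y, lam, iters=500):
    """Proximal gradient (ISTA) for 0.5*||y - Xw||^2 + lam*||w||_1."""
    L = np.linalg.norm(X, 2) ** 2  # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        g = X.T @ (X @ w - y)
        z = w - g / L
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return w

w_dense = ista(X, y, lam=0.0)    # plain least-squares limit
w_sparse = ista(X, y, lam=0.05)  # L1 penalty switches positions off
active = np.sum(np.abs(w_sparse) > 1e-8)
```

The soft-thresholding step is what drives weights exactly to zero, producing portfolios with only a few active positions.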
NASA Astrophysics Data System (ADS)
Santarius, John; Navarro, Marcos; Michalak, Matthew; Fancher, Aaron; Kulcinski, Gerald; Bonomo, Richard
2016-10-01
A newly initiated research project will be described that investigates methods for detecting shielded special nuclear materials by combining multi-dimensional neutron sources, forward/adjoint calculations modeling neutron and gamma transport, and sparse data analysis of detector signals. The key tasks for this project are: (1) developing a radiation transport capability for use in optimizing adaptive-geometry, inertial-electrostatic confinement (IEC) neutron source/detector configurations for neutron pulses distributed in space and/or phased in time; (2) creating distributed-geometry, gas-target, IEC fusion neutron sources; (3) applying sparse data and noise reduction algorithms, such as principal component analysis (PCA) and wavelet transform analysis, to enhance detection fidelity; and (4) educating graduate and undergraduate students. Funded by DHS DNDO Project 2015-DN-077-ARI095.
The evolutionary origins of Lévy walk foraging
Wosniack, Marina E.
2017-01-01
We study through a reaction-diffusion algorithm the influence of landscape diversity on the efficiency of search dynamics. Remarkably, the same optimal search strategy arises in a wide variety of environments, provided the target density is sparse and the searcher's information is restricted to its close vicinity. Our results strongly impact the current debate on the emergentist vs. evolutionary origins of animal foraging. The inherent character of the optimal solution (i.e., independent of the landscape for the broad scenarios assumed here) suggests an interpretation favoring the evolutionary view, as originally implied by the Lévy flight foraging hypothesis. The latter states that, under conditions of scarce information and sparse resources, some organisms must have evolved to exploit optimal strategies characterized by heavy-tailed truncated power-law distributions of move lengths. These results strongly suggest that Lévy strategies, and hence the selection pressure for the relevant adaptations, are robust with respect to large changes in habitat. In contrast, the usual emergentist explanation seems unable to explain how very similar Lévy walks could emerge from all the distinct non-Lévy foraging strategies that would be needed for the observed large variety of specific environments. We also report that deviations from Lévy behavior can take place in plentiful ecosystems, where truncation of locomotion is very frequent due to high encounter rates. In this case, normal diffusion strategies, performing as effectively as the optimal one, can naturally emerge from Lévy strategies. Our results constitute the strongest theoretical evidence to date supporting the evolutionary origins of experimentally observed Lévy walks. PMID:28972973
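The move lengths invoked by the Lévy flight foraging hypothesis can be drawn by inverse-CDF sampling from a truncated power law; a small sketch with an illustrative exponent and cutoffs:

```python
import numpy as np

rng = np.random.default_rng(7)

def levy_steps(n, mu=2.0, lmin=1.0, lmax=1000.0):
    """Inverse-CDF samples from a truncated power law p(l) ~ l^(-mu)
    on [lmin, lmax]; mu = 2 is the classic optimal Levy-walk exponent."""
    u = rng.uniform(size=n)
    a, b = lmin ** (1.0 - mu), lmax ** (1.0 - mu)
    return (a + u * (b - a)) ** (1.0 / (1.0 - mu))

steps = levy_steps(10_000)
```

The upper cutoff `lmax` plays the role of the locomotion truncation discussed in the abstract: lowering it (frequent encounters in plentiful ecosystems) pushes the step distribution toward normal diffusion.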
Data Mining and Optimization Tools for Developing Engine Parameters Tools
NASA Technical Reports Server (NTRS)
Dhawan, Atam P.
1998-01-01
This project was awarded for understanding the problem and developing a plan for data mining tools for use in designing and implementing an Engine Condition Monitoring System. With the total budget of $5,000, Tricia and I studied the problem domain of developing an engine condition monitoring system using the sparse and non-standardized datasets to be made available through a consortium at NASA Lewis Research Center. We visited NASA three times to discuss additional issues related to the dataset, which was not made available to us. We discussed and developed a general framework of data mining and optimization tools to extract useful information from sparse and non-standard datasets. These discussions led to the training of Tricia Erhardt to develop genetic algorithm (GA) based search programs, written in C++, which were used to demonstrate the capability of a GA to search for an optimal solution in noisy datasets. From the study and discussions with NASA LERC personnel, we then prepared a proposal, which is being submitted to NASA, for future work on the development of data mining algorithms for engine condition monitoring. The proposed set of algorithms uses wavelet processing to create a multi-resolution pyramid of the data for GA-based multi-resolution optimal search. Wavelet processing is proposed to create a coarse-resolution representation of the data, providing two advantages in a GA-based search: 1. We begin with less data from which to form search subspaces. 2. The search gains robustness against noise, because each level of wavelet-based decomposition separates the signal into low-pass and high-pass components.
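A toy genetic algorithm of the kind described, searching a noisy one-dimensional objective; everything here, the objective included, is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def fitness(x):
    # Noisy objective standing in for a score mined from engine data:
    # a quadratic peak at x = 3 plus measurement noise.
    return -(x - 3.0) ** 2 + rng.normal(0.0, 0.01, np.shape(x))

def ga(pop_size=40, gens=60, lo=-10.0, hi=10.0):
    pop = rng.uniform(lo, hi, pop_size)
    for _ in range(gens):
        f = fitness(pop)
        # Tournament selection: pairwise winners become parents.
        idx = rng.integers(0, pop_size, (pop_size, 2))
        parents = np.where(f[idx[:, 0]] > f[idx[:, 1]],
                           pop[idx[:, 0]], pop[idx[:, 1]])
        # Blend crossover with a shuffled mate, then Gaussian mutation.
        mates = rng.permutation(parents)
        pop = 0.5 * (parents + mates) + rng.normal(0.0, 0.1, pop_size)
        pop = np.clip(pop, lo, hi)
    return pop

final = ga()
best = final[np.argmax(fitness(final))]
```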
Optimal interpolation and the Kalman filter. [for analysis of numerical weather predictions]
NASA Technical Reports Server (NTRS)
Cohn, S.; Isaacson, E.; Ghil, M.
1981-01-01
The estimation theory of stochastic-dynamic systems is described and used in a numerical study of optimal interpolation. The general form of data assimilation methods is reviewed. The Kalman-Bucy (KB) filter and optimal interpolation (OI) filters are examined for their effectiveness as gain matrices using a one-dimensional form of the shallow-water equations. Control runs in the numerical analyses were performed for a ten-day forecast in concert with the OI method. The effects of optimality, initialization, and assimilation were studied. It was found that correct initialization is necessary in order to localize errors, especially near boundary points. Also, the use of small forecast error growth rates over data-sparse areas was found to offset inaccurate modeling of correlation functions near boundaries.
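A scalar sketch of the filter's forecast/gain/update cycle on a toy linear system (all parameters invented; the study itself uses the shallow-water equations and matrix-valued gains):

```python
import numpy as np

rng = np.random.default_rng(5)

# Scalar toy: state evolves as x_k = a*x_{k-1} + w, observed as y_k = x_k + v.
a, q, r = 0.95, 0.01, 0.25  # dynamics, process-noise var, obs-noise var

truth, obs = [1.0], []
for _ in range(100):
    truth.append(a * truth[-1] + rng.normal(0.0, q ** 0.5))
    obs.append(truth[-1] + rng.normal(0.0, r ** 0.5))

x, P = 0.0, 1.0  # initial estimate and its error variance
est = []
for y in obs:
    x, P = a * x, a * a * P + q           # forecast step
    K = P / (P + r)                       # Kalman gain
    x, P = x + K * (y - x), (1 - K) * P   # analysis (update) step
    est.append(x)

rmse_filter = np.sqrt(np.mean((np.array(est) - np.array(truth[1:])) ** 2))
rmse_obs = np.sqrt(np.mean((np.array(obs) - np.array(truth[1:])) ** 2))
```

Optimal interpolation corresponds to freezing the gain `K` at a prescribed value instead of evolving `P` each step; the filter above recomputes it from the forecast error variance.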
Multiple crack detection in 3D using a stable XFEM and global optimization
NASA Astrophysics Data System (ADS)
Agathos, Konstantinos; Chatzi, Eleni; Bordas, Stéphane P. A.
2018-02-01
A numerical scheme is proposed for the detection of multiple cracks in three dimensional (3D) structures. The scheme is based on a variant of the extended finite element method (XFEM) and a hybrid optimizer solution. The proposed XFEM variant is particularly well-suited for the simulation of 3D fracture problems, and as such serves as an efficient solution to the so-called forward problem. A set of heuristic optimization algorithms are recombined into a multiscale optimization scheme. The introduced approach proves effective in tackling the complex inverse problem involved, where identification of multiple flaws is sought on the basis of sparse measurements collected near the structural boundary. The potential of the scheme is demonstrated through a set of numerical case studies of varying complexity.
Capacity for patterns and sequences in Kanerva's SDM as compared to other associative memory models
NASA Technical Reports Server (NTRS)
Keeler, James D.
1987-01-01
The information capacity of Kanerva's Sparse Distributed Memory (SDM) and of Hopfield-type neural networks is investigated. Under the approximations used, it is shown that the total information stored in these systems is proportional to the number of connections in the network. The proportionality constant is the same for the SDM and Hopfield-type models, independent of the particular model or its order. The approximations are checked numerically. The same analysis can be used to show that the SDM can store sequences of spatiotemporal patterns, and that the addition of time-delayed connections allows the retrieval of context-dependent temporal patterns. A minor modification of the SDM can be used to store correlated patterns.
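A small Hopfield-type demonstration of outer-product (Hebbian) storage, where the stored information resides in the N x N connection matrix; sizes and corruption level are illustrative, with the load M/N kept well below the classic 0.14 capacity limit.

```python
import numpy as np

rng = np.random.default_rng(11)
N, M = 200, 10  # neurons and stored patterns (load M/N = 0.05)
patterns = rng.choice([-1, 1], (M, N))

# Hebbian outer-product storage: information lives in the N*N connections.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

def recall(x, steps=10):
    """Synchronous sign-threshold dynamics until (hopefully) a fixed point."""
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1
    return x

# Corrupt 10% of a stored pattern and check retrieval.
probe = patterns[0].copy()
flip = rng.choice(N, N // 10, replace=False)
probe[flip] *= -1
overlap = np.mean(recall(probe) == patterns[0])
```

Pushing M toward 0.14 N makes retrieval degrade sharply, which is the capacity behavior the abstract quantifies in information-theoretic terms.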
NASA Astrophysics Data System (ADS)
Kalantari, Z.
2015-12-01
In Sweden, spatially explicit approaches have been applied in various disciplines, such as landslide modelling based on soil type data and flood risk modelling for large rivers. Regarding flood mapping, most previous studies have focused on complex hydrological modelling at a small scale, whereas only a few studies have used a robust GIS-based approach integrating most physical catchment descriptor (PCD) aspects on a larger scale. This study was built on a conceptual framework that considers the SedInConnect model, topography, land use, soil data, other PCDs, and climate change in an integrated way, to pave the way for more integrated policy making. The aim of the present study was to develop a methodology for predicting the spatial probability of flooding at a general large scale. This framework can provide a region with an effective tool to inform a broad range of watershed planning activities. Regional planners, decision-makers, and others can use this tool to identify the most vulnerable points in a watershed and along roads, and to plan interventions and actions that mitigate the impacts of high flows and other extreme weather events on road construction. Applying the model over a large scale can give a realistic spatial characterization of sediment connectivity for the optimal management of debris flows near road structures. The ability of the model to capture flooding probability was determined for different watersheds in central Sweden. Using data from this initial investigation, a method was developed to extract spatial data for multiple catchments and to produce soft data for statistical analysis. It allowed flood probability to be predicted from spatially sparse data without compromising the significant hydrological features of the landscape. This in turn allowed objective quantification of the probability of floods at the field scale for future model development and watershed management.
Study on transfer optimization of urban rail transit and conventional public transport
NASA Astrophysics Data System (ADS)
Wang, Jie; Sun, Quan Xin; Mao, Bao Hua
2018-04-01
This paper studies the optimization of feeder-connection timing between rail transit and conventional bus services at a shopping center. To connect with rail transportation effectively and optimize the transfer between the two modes, the departure intervals must be optimized, passenger transfer times shortened, and the service level of public transit improved. A departure-time optimization model for the connecting bus service is established with the objective of minimizing the total passenger waiting time and the number of bus runs, subject to constraints on transfer time, load factor, and the spacing of the connecting public transport network. The model is solved using a genetic algorithm.
Mizuno, Kana; Dong, Min; Fukuda, Tsuyoshi; Chandra, Sharat; Mehta, Parinda A; McConnell, Scott; Anaissie, Elias J; Vinks, Alexander A
2018-05-01
High-dose melphalan is an important component of conditioning regimens for patients undergoing hematopoietic stem cell transplantation. The current dosing strategy based on body surface area results in a high incidence of oral mucositis and gastrointestinal and liver toxicity. Pharmacokinetically guided dosing will individualize exposure and help minimize overexposure-related toxicity. The purpose of this study was to develop a population pharmacokinetic model and optimal sampling strategy. A population pharmacokinetic model was developed with NONMEM using 98 observations collected from 15 adult patients given the standard dose of 140 or 200 mg/m² by intravenous infusion. The determinant-optimal sampling strategy was explored with PopED software. Individual area under the curve estimates were generated by Bayesian estimation using full and the proposed sparse sampling data. The predictive performance of the optimal sampling strategy was evaluated based on bias and precision estimates. The feasibility of the optimal sampling strategy was tested using pharmacokinetic data from five pediatric patients. A two-compartment model best described the data. The final model included body weight and creatinine clearance as predictors of clearance. The determinant-optimal sampling strategies (and windows) were identified at 0.08 (0.08-0.19), 0.61 (0.33-0.90), 2.0 (1.3-2.7), and 4.0 (3.6-4.0) h post-infusion. An excellent correlation was observed between area under the curve estimates obtained with the full and the proposed four-sample strategy (R² = 0.98; p < 0.01), with a mean bias of -2.2% and precision of 9.4%. A similar relationship was observed in children (R² = 0.99; p < 0.01). The developed pharmacokinetic model-based sparse sampling strategy promises to achieve the target area under the curve as part of precision dosing.
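To illustrate why four well-placed samples can suffice, here is a toy biexponential (two-compartment) concentration profile with invented coefficients, comparing trapezoidal AUC from dense sampling against the four reported optimal sampling times. The study itself uses Bayesian estimation with the population model rather than trapezoids.

```python
import numpy as np

# Biexponential disposition with made-up coefficients:
# C(t) = A*exp(-alpha*t) + B*exp(-beta*t).
A, alpha, B, beta = 8.0, 2.5, 2.0, 0.35

def conc(t):
    return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

def trapz(y, x):
    # simple trapezoidal rule (avoids version-specific numpy aliases)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Dense reference sampling vs the four-point design from the abstract
# (0.08, 0.61, 2.0, 4.0 h post-infusion).
t_full = np.linspace(0.0, 4.0, 2000)
t_sparse = np.array([0.08, 0.61, 2.0, 4.0])

auc_full = trapz(conc(t_full), t_full)
auc_sparse = trapz(conc(t_sparse), t_sparse)
rel_err = abs(auc_sparse - auc_full) / auc_full
```

Even a naive trapezoid over the four design points tracks the dense AUC closely, because the optimal times bracket both the fast and slow disposition phases.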
NASA Technical Reports Server (NTRS)
Zubair, Mohammad; Nielsen, Eric; Luitjens, Justin; Hammond, Dana
2016-01-01
In the field of computational fluid dynamics, the Navier-Stokes equations are often solved using an unstructured-grid approach to accommodate geometric complexity. Implicit solution methodologies for such spatial discretizations generally require frequent solution of large tightly-coupled systems of block-sparse linear equations. The multicolor point-implicit solver used in the current work typically requires a significant fraction of the overall application run time. In this work, an efficient implementation of the solver for graphics processing units is proposed. Several factors present unique challenges to achieving an efficient implementation in this environment. These include the variable amount of parallelism available in different kernel calls, indirect memory access patterns, low arithmetic intensity, and the requirement to support variable block sizes. In this work, the solver is reformulated to use standard sparse and dense Basic Linear Algebra Subprograms (BLAS) functions. However, numerical experiments show that the performance of the BLAS functions available in existing CUDA libraries is suboptimal for matrices representative of those encountered in actual simulations. Instead, optimized versions of these functions are developed. Depending on block size, the new implementations show performance gains of up to 7x over the existing CUDA library functions.
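A serial numpy sketch of a multicolor point-implicit sweep on a toy 1D chain of 2x2 blocks: rows of the same color have no direct coupling, so on a GPU they could all be updated in parallel. Matrix values and sizes are invented; the real systems come from implicit CFD discretizations.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy block-sparse system on a 1D chain: strong 2x2 diagonal blocks with
# weak off-diagonal coupling.  Red-black coloring decouples same-color rows.
n, bs = 8, 2
diag = [np.eye(bs) * 4 + rng.normal(0, 0.1, (bs, bs)) for _ in range(n)]
off = [np.eye(bs) * 0.5 for _ in range(n - 1)]
b = rng.normal(size=(n, bs))

x = np.zeros((n, bs))
colors = [range(0, n, 2), range(1, n, 2)]  # red rows, then black rows
for _ in range(50):
    for color in colors:
        for i in color:  # rows of one color: independent, parallelizable
            r = b[i].copy()
            if i > 0:
                r -= off[i - 1] @ x[i - 1]
            if i < n - 1:
                r -= off[i] @ x[i + 1]
            x[i] = np.linalg.solve(diag[i], r)  # small dense block solve

# Residual check against the assembled block-tridiagonal system.
res = max(np.linalg.norm(b[i] - (diag[i] @ x[i]
          + (off[i - 1] @ x[i - 1] if i > 0 else 0)
          + (off[i] @ x[i + 1] if i < n - 1 else 0))) for i in range(n))
```

The inner block solve is the dense-BLAS kernel the paper optimizes; the per-color loops map to the variable-parallelism kernel launches it describes.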
NASA Astrophysics Data System (ADS)
Cai, Ailong; Li, Lei; Zheng, Zhizhong; Zhang, Hanming; Wang, Linyuan; Hu, Guoen; Yan, Bin
2018-02-01
In medical imaging, many conventional regularization methods, such as total variation or total generalized variation, impose strong prior assumptions that can account for only very limited classes of images. A more suitable sparse representation framework for images is still badly needed. Visually understandable images contain meaningful patterns, and combinations or collections of these patterns can be used to form sparse and redundant representations that facilitate image reconstruction. In this work, we propose and study block matching sparsity regularization (BMSR) and devise an optimization program using BMSR for computed tomography (CT) image reconstruction from an incomplete projection set. The program is built as a constrained optimization that minimizes the L1-norm of the coefficients of the image in the transformed domain, subject to data observation and positivity of the image itself. To solve the program efficiently, a practical method based on the proximal point algorithm is developed and analyzed. To accelerate convergence, a practical strategy for tuning the BMSR parameter is proposed and applied. Experimental results for various settings, including real CT scans, verify that the proposed reconstruction method offers promising capabilities over conventional regularization.
Convex Banding of the Covariance Matrix
Bien, Jacob; Bunea, Florentina; Xiao, Luo
2016-01-01
We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings. PMID:28042189
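A simplified stand-in for the estimator's effect: taper a sample covariance by a fixed Toeplitz banded weight matrix (the actual method chooses the weights adaptively by solving a convex program). Truth, sample size, and bandwidth below are invented.

```python
import numpy as np

rng = np.random.default_rng(13)

# Truth with a known variable ordering: AR(1)-style covariance that
# decays away from the diagonal.
p, n = 20, 50
Sigma = 0.7 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
X = rng.multivariate_normal(np.zeros(p), Sigma, n)
S = np.cov(X, rowvar=False)

def band_taper(S, bandwidth):
    """Taper the sample covariance by a fixed Toeplitz banded weight
    matrix: a non-adaptive sketch of what convex banding achieves."""
    d = np.abs(np.subtract.outer(np.arange(len(S)), np.arange(len(S))))
    w = np.clip(1.0 - d / (bandwidth + 1.0), 0.0, 1.0)
    return S * w

est = band_taper(S, bandwidth=5)
err_banded = np.linalg.norm(est - Sigma)
err_sample = np.linalg.norm(S - Sigma)
```

Zeroing the noisy far-from-diagonal entries trades a small bias for a large variance reduction, which is why the banded estimate beats the raw sample covariance when the truth is (near) banded.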
The association between resting functional connectivity and dispositional optimism.
Ran, Qian; Yang, Junyi; Yang, Wenjing; Wei, Dongtao; Qiu, Jiang; Zhang, Dong
2017-01-01
Dispositional optimism is an individual characteristic that plays an important role in human experience. Optimists are people who tend to hold positive expectations for their future. Previous studies have focused on the neural basis of optimism, such as task-related neural activity and brain structure volume. However, the functional connectivity between brain regions in dispositional optimists is poorly understood. A previous study suggested that the ventromedial prefrontal cortex (vmPFC) is associated with individual differences in dispositional optimism, but it is unclear whether other brain regions combine with the vmPFC to contribute to dispositional optimism. Thus, the present study used the resting-state functional connectivity (RSFC) approach, with the vmPFC as the seed region, to examine whether differences in functional connectivity between the vmPFC and other brain regions are associated with individual differences in dispositional optimism. The results showed that dispositional optimism was significantly positively correlated with the strength of the RSFC between the vmPFC and the middle temporal gyrus (mTG), and negatively correlated with the RSFC between the vmPFC and the inferior frontal gyrus (IFG). These findings suggest that the mTG and IFG, which are associated with emotion processing and emotion regulation, also play an important role in dispositional optimism.
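The seed-based analysis pipeline can be sketched on synthetic data in which, purely by construction, vmPFC-to-mTG coupling grows with an invented optimism score; per subject the RSFC value is the Pearson correlation between the seed and target time series, and the group-level test correlates those values with the trait scores.

```python
import numpy as np

rng = np.random.default_rng(17)

# Synthetic subjects: optimism scores and a coupling that (by design)
# increases with optimism.  All numbers are illustrative.
n_sub, n_tp = 30, 120
optimism = rng.uniform(30.0, 70.0, n_sub)
coupling = (optimism - 30.0) / 50.0

fc = np.empty(n_sub)
for i in range(n_sub):
    vmpfc = rng.normal(size=n_tp)                      # seed time series
    mtg = coupling[i] * vmpfc + rng.normal(size=n_tp)  # target region
    fc[i] = np.corrcoef(vmpfc, mtg)[0, 1]              # seed-based RSFC

# Group-level question: does connectivity strength track the trait score?
r_group = np.corrcoef(fc, optimism)[0, 1]
```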
Optimal Control of Micro Grid Operation Mode Seamless Switching Based on Radau Allocation Method
NASA Astrophysics Data System (ADS)
Chen, Xiaomin; Wang, Gang
2017-05-01
The seamless switching of micro grid operation modes directly affects the safety and stability of micro grid operation. For the switching process from island mode to grid-connected mode, we establish a dynamic optimization model based on two grid-connected inverters. We use the Radau allocation method to discretize the model and the Newton iteration method to obtain the optimal solution. Finally, we implement the optimization model in MATLAB and obtain the optimal control trajectories of the inverters.
Statistical inference methods for sparse biological time series data.
Ndukum, Juliet; Fonseca, Luís L; Santos, Helena; Voit, Eberhard O; Datta, Susmita
2011-04-25
Comparing metabolic profiles under different biological perturbations has become a powerful approach to investigating the functioning of cells. The profiles can be taken as single snapshots of a system, but more information is gained if they are measured longitudinally over time. The results are short time series consisting of relatively sparse data that cannot be analyzed effectively with standard time series techniques, such as autocorrelation and frequency domain methods. In this work, we study longitudinal time series profiles of glucose consumption in the yeast Saccharomyces cerevisiae under different temperatures and preconditioning regimens, which we obtained with methods of in vivo nuclear magnetic resonance (NMR) spectroscopy. For the statistical analysis, we first fit several nonlinear mixed-effects regression models to the longitudinal profiles and then used an ANOVA likelihood ratio method to test for significant differences between the profiles. The proposed methods are capable of distinguishing metabolic time trends resulting from different treatments and of associating significance levels with these differences. Among the several nonlinear mixed-effects regression models tested, a three-parameter logistic function represents the data with the highest accuracy. ANOVA and likelihood ratio tests suggest that there are significant differences between the glucose consumption rate profiles for cells that had, or had not, been preconditioned by heat during growth. Furthermore, pairwise t-tests reveal significant differences in the longitudinal profiles of glucose consumption rates between optimal conditions and heat stress, optimal and recovery conditions, and heat stress and recovery conditions (p-values < 0.0001). We have developed a nonlinear mixed-effects model that is appropriate for the analysis of sparse metabolic and physiological time profiles. 
The model permits sound statistical inference procedures, based on ANOVA likelihood ratio tests, for testing the significance of differences between short time course data under different biological perturbations.
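A minimal version of the model-fitting step: fitting the three-parameter logistic to one synthetic sparse profile with scipy (the full analysis fits mixed-effects models across replicates and compares them with likelihood ratio tests). Data values and starting guesses are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

# Three-parameter logistic y(t) = a / (1 + exp(-b*(t - c))): the family
# the abstract found to represent the profiles best.
def logistic3(t, a, b, c):
    return a / (1.0 + np.exp(-b * (t - c)))

rng = np.random.default_rng(21)
t = np.linspace(0.0, 10.0, 9)  # short, sparse time series (9 points)
y = logistic3(t, 5.0, 1.2, 4.0) + rng.normal(0.0, 0.1, t.size)

# Nonlinear least squares with a rough starting guess.
popt, _ = curve_fit(logistic3, t, y, p0=[4.0, 1.0, 5.0])
a_hat, b_hat, c_hat = popt
```

Here `a` is the plateau, `b` the growth rate, and `c` the midpoint; in the mixed-effects setting these gain subject-level random effects.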
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kersaudy, Pierric, E-mail: pierric.kersaudy@orange.com; Whist Lab, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux; ESYCOM, Université Paris-Est Marne-la-Vallée, 5 boulevard Descartes, 77700 Marne-la-Vallée
2015-04-01
In numerical dosimetry, the recent advances in high performance computing led to a strong reduction of the required computational time to assess the specific absorption rate (SAR) characterizing the human exposure to electromagnetic waves. However, this procedure remains time-consuming and a single simulation can request several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to build such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansions using least-angle regression as a selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for the universal Kriging model. The leave-one-out cross validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performances of the LARS-Kriging-PC are compared to an ordinary Kriging model and to a classical sparse polynomial chaos expansion. The LARS-Kriging-PC appears to have better performances than the two other approaches. A significant accuracy improvement is observed compared to the ordinary Kriging or to the sparse polynomial chaos depending on the studied case. This approach seems to be an optimal solution between the two other classical approaches. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.
Interaction mining and skill-dependent recommendations for multi-objective team composition
Dorn, Christoph; Skopik, Florian; Schall, Daniel; Dustdar, Schahram
2011-01-01
Web-based collaboration and virtual environments supported by various Web 2.0 concepts enable the application of numerous monitoring, mining and analysis tools to study human interactions and team formation processes. The composition of an effective team requires a balance between adequate skill fulfillment and sufficient team connectivity. The underlying interaction structure reflects social behavior and relations of individuals and determines to a large degree how well people can be expected to collaborate. In this paper we address an extended team formation problem that not only requires direct interactions to determine team connectivity but additionally uses implicit recommendations of collaboration partners to support even sparsely connected networks. We provide two heuristics based on Genetic Algorithms and Simulated Annealing for discovering efficient team configurations that yield the best trade-off between skill coverage and team connectivity. Our self-adjusting mechanism aims to discover the best combination of direct interactions and recommendations when deriving connectivity. We evaluate our approach based on multiple configurations of a simulated collaboration network that features close resemblance to real world expert networks. We demonstrate that our algorithm successfully identifies efficient team configurations even when removing up to 40% of experts from various social network configurations. PMID:22298939
Determining biosonar images using sparse representations.
Fontaine, Bertrand; Peremans, Herbert
2009-05-01
Echolocating bats are thought to be able to create an image of their environment by emitting pulses and analyzing the reflected echoes. In this paper, the theory of sparse representations and its more recent further development into compressed sensing are applied to this biosonar image formation task. Considering the target image representation as sparse allows formulation of this inverse problem as a convex optimization problem for which well defined and efficient solution methods have been established. The resulting technique, referred to as L1-minimization, is applied to simulated data to analyze its performance relative to delay accuracy and delay resolution experiments. This method performs comparably to the coherent receiver for the delay accuracy experiments, is quite robust to noise, and can reconstruct complex target impulse responses as generated by many closely spaced reflectors with different reflection strengths. This same technique, in addition to reconstructing biosonar target images, can be used to simultaneously localize these complex targets by interpreting location cues induced by the bat's head related transfer function. Finally, a tentative explanation is proposed for specific bat behavioral experiments in terms of the properties of target images as reconstructed by the L1-minimization method.
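The L1-minimization idea can be sketched with an ISTA (iterative soft-thresholding) solver on a toy delay dictionary; the pulse shape, sizes, reflector positions, and regularization weight are all illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 128
k = np.arange(-8, 9)
pulse = np.exp(-0.5 * (k / 2.0) ** 2)        # toy emitted pulse

# Dictionary: column d is the pulse delayed by d samples.
A = np.zeros((n, n))
for d in range(n):
    for i, p in enumerate(pulse):
        t = d + i - 8
        if 0 <= t < n:
            A[t, d] = p

# Sparse target impulse response: two close reflectors, different strengths.
x_true = np.zeros(n)
x_true[40], x_true[46] = 1.0, 0.6
y = A @ x_true + 0.01 * rng.standard_normal(n)

# ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1
lam = 0.05
L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    x = x - A.T @ (A @ x - y) / L                            # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)    # soft threshold
support = np.flatnonzero(np.abs(x) > 0.1)
```

The recovered support should sit at (or within a sample of) the two reflector delays, illustrating how closely spaced reflectors with different strengths can be resolved.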
Compressive hyperspectral and multispectral imaging fusion
NASA Astrophysics Data System (ADS)
Espitia, Óscar; Castillo, Sergio; Arguello, Henry
2016-05-01
Image fusion is a valuable framework which combines two or more images of the same scene from one or multiple sensors, making it possible to improve the resolution of the images and increase the interpretable content. In remote sensing, a common fusion problem consists of merging hyperspectral (HS) and multispectral (MS) images that involve large amounts of redundant data, ignoring the highly correlated structure of the datacube along the spatial and spectral dimensions. Compressive HS and MS systems compress the spectral data in the acquisition step, reducing the data redundancy by using different sampling patterns. This work presents a compressed HS and MS image fusion approach, which uses a high dimensional joint sparse model. The joint sparse model is formulated by combining HS and MS compressive acquisition models. The high spectral and spatial resolution image is reconstructed by using sparse optimization algorithms. Different fusion spectral image scenarios are used to explore the performance of the proposed scheme. Several simulations with synthetic and real datacubes show promising results, as the reliable reconstruction of a high spectral and spatial resolution image can be achieved by using as little as 50% of the datacube.
Technical note: an R package for fitting sparse neural networks with application in animal breeding.
Wang, Yangfan; Mi, Xue; Rosa, Guilherme J M; Chen, Zhihui; Lin, Ping; Wang, Shi; Bao, Zhenmin
2018-05-04
Neural networks (NNs) have emerged as a new tool for genomic selection (GS) in animal breeding. However, the properties of NNs used in GS for the prediction of phenotypic outcomes are not well characterized, due to the over-parameterization of NNs and the difficulty of using whole-genome marker sets as high-dimensional NN input. In this note, we have developed an R package called snnR that finds an optimal sparse structure of a NN by minimizing the squared error subject to a penalty on the L1-norm of the parameters (weights and biases), thereby solving the problem of over-parameterization in NNs. We have also tested some models fitted in the snnR package to demonstrate their feasibility and effectiveness in several example cases. In comparison with the R package brnn (Bayesian regularized single-layer NNs), with both using the entries of a genotype matrix or a genomic relationship matrix as inputs, snnR greatly improves the computational efficiency and the prediction ability for GS in animal breeding, because snnR implements a sparse NN with many hidden layers.
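The basic recipe, least squares plus an L1 penalty that prunes network parameters, can be imitated with proximal gradient descent on a tiny single-hidden-layer network (a NumPy sketch, not the snnR implementation; the data, sizes, and hyperparameters are made up, and only the weights are penalized here for brevity):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "genotype" matrix: 200 individuals x 10 markers; only markers 0 and 3
# actually influence the phenotype.
X = rng.standard_normal((200, 10))
y = np.tanh(X[:, 0]) + 0.5 * X[:, 3] + 0.05 * rng.standard_normal(200)

h = 5
W1 = 0.1 * rng.standard_normal((10, h))
b1 = np.zeros(h)
W2 = 0.1 * rng.standard_normal(h)
b2 = 0.0

lr, lam = 0.1, 1e-3                     # step size and L1 strength
for _ in range(2000):
    Z = np.tanh(X @ W1 + b1)
    pred = Z @ W2 + b2
    err = pred - y                      # gradient of 0.5 * mean squared error
    gW2 = Z.T @ err / len(y)
    gb2 = err.mean()
    dZ = np.outer(err, W2) * (1 - Z ** 2)
    gW1 = X.T @ dZ / len(y)
    gb1 = dZ.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
    # proximal step: soft-threshold the weights toward an L1-sparse structure
    W1 = np.sign(W1) * np.maximum(np.abs(W1) - lr * lam, 0.0)
    W2 = np.sign(W2) * np.maximum(np.abs(W2) - lr * lam, 0.0)

input_importance = np.abs(W1).sum(axis=1)
```

The soft-thresholding step drives weights on uninformative inputs toward exactly zero, so the surviving structure concentrates on the informative markers.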
Joint seismic data denoising and interpolation with double-sparsity dictionary learning
NASA Astrophysics Data System (ADS)
Zhu, Lingchen; Liu, Entao; McClellan, James H.
2017-08-01
Seismic data quality is vital to geophysical applications, so that methods of data recovery, including denoising and interpolation, are common initial steps in the seismic data processing flow. We present a method to perform simultaneous interpolation and denoising, which is based on double-sparsity dictionary learning. This extends previous work that was for denoising only. The original double-sparsity dictionary learning algorithm is modified to track the traces with missing data by defining a masking operator that is integrated into the sparse representation of the dictionary. A weighted low-rank approximation algorithm is adopted to handle the dictionary updating as a sparse recovery optimization problem constrained by the masking operator. Compared to traditional sparse transforms with fixed dictionaries that lack the ability to adapt to complex data structures, the double-sparsity dictionary learning method learns the signal adaptively from selected patches of the corrupted seismic data, while preserving compact forward and inverse transform operators. Numerical experiments on synthetic seismic data indicate that this new method preserves more subtle features in the data set without introducing pseudo-Gibbs artifacts when compared to other directional multi-scale transform methods such as curvelets.
Computational efficiency improvements for image colorization
NASA Astrophysics Data System (ADS)
Yu, Chao; Sharma, Gaurav; Aly, Hussein
2013-03-01
We propose an efficient algorithm for colorization of greyscale images. As in prior work, colorization is posed as an optimization problem: a user specifies the color for a few scribbles drawn on the greyscale image and the color image is obtained by propagating color information from the scribbles to surrounding regions, while maximizing the local smoothness of colors. In this formulation, colorization is obtained by solving a large sparse linear system, which normally requires substantial computation and memory resources. Our algorithm improves the computational performance through three innovations over prior colorization implementations. First, the linear system is solved iteratively without explicitly constructing the sparse matrix, which significantly reduces the required memory. Second, we formulate each iteration in terms of integral images obtained by dynamic programming, reducing repetitive computation. Third, we use a coarse-to-fine framework, where a lower resolution subsampled image is first colorized and this low resolution color image is upsampled to initialize the colorization process for the fine level. The improvements we develop provide significant speedup and memory savings compared to the conventional approach of solving the linear system directly using off-the-shelf sparse solvers, and allow us to colorize images with typical sizes encountered in realistic applications on typical commodity computing platforms.
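The matrix-free iteration can be illustrated with a miniature version of the scribble-propagation solve: Jacobi sweeps that only touch neighbouring pixels, so the sparse system is never assembled. Uniform neighbour weights stand in for the intensity-based smoothness weights of the full method, and the grid size and scribble placement are illustrative:

```python
import numpy as np

# Toy single-channel "chrominance" propagation on a 16x16 image.
n = 16
u = np.zeros((n, n))
known = np.zeros((n, n), dtype=bool)
u[0, :], known[0, :] = 1.0, True        # scribble: top row pinned to 1
u[-1, :], known[-1, :] = 0.0, True      # scribble: bottom row pinned to 0

# Matrix-free Jacobi iterations: each pixel relaxes toward the average of
# its valid 4-neighbours; scribbled pixels stay fixed.
for _ in range(2000):
    avg = np.zeros_like(u)
    cnt = np.zeros_like(u)
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        src = np.roll(u, (dy, dx), axis=(0, 1))
        ok = np.ones((n, n), dtype=bool)
        # mask out wrap-around neighbours at the image borders
        if dy == 1: ok[0, :] = False
        if dy == -1: ok[-1, :] = False
        if dx == 1: ok[:, 0] = False
        if dx == -1: ok[:, -1] = False
        avg += np.where(ok, src, 0.0)
        cnt += ok
    u = np.where(known, u, avg / cnt)

mid = u[n // 2, n // 2]
```

With these boundary conditions the converged solution is a linear ramp between the two scribbles, so the centre pixel lands between the pinned values.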
Artificial neural networks as quantum associative memory
NASA Astrophysics Data System (ADS)
Hamilton, Kathleen; Schrock, Jonathan; Imam, Neena; Humble, Travis
We present results related to the recall accuracy and capacity of Hopfield networks implemented on commercially available quantum annealers. The use of Hopfield networks and artificial neural networks as content-addressable memories offers robust storage and retrieval of classical information; however, implementation of these models using currently available quantum annealers faces several challenges: the limits of precision when setting synaptic weights, the effects of spurious spin-glass states, and minor embedding of densely connected graphs into fixed-connectivity hardware. We consider neural networks which are less than fully connected, and also neural networks which contain multiple sparsely connected clusters. We discuss the effect of weak edge dilution on the accuracy of memory recall, and how the multiple-clique structure affects the storage capacity. Our work focuses on storage of patterns which can be embedded into physical hardware containing n < 1000 qubits. This work was supported by the United States Department of Defense and used resources of the Computational Research and Development Programs at Oak Ridge National Laboratory under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy.
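The effect of weak edge dilution on recall can be sketched with a classical (non-quantum) Hopfield simulation; the network size, number of patterns, dilution rate, and cue noise below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

n, n_patterns = 120, 5
patterns = rng.choice([-1, 1], size=(n_patterns, n))

# Hebbian weights, then "weak edge dilution": drop ~20% of edges at random,
# keeping the weight matrix symmetric.
W = patterns.T @ patterns / n
np.fill_diagonal(W, 0.0)
keep = np.triu(rng.random((n, n)) < 0.8, 1)
keep = keep | keep.T
W = W * keep

# Cue: the first stored pattern with 15% of its bits flipped.
state = patterns[0].copy()
flip = rng.choice(n, size=int(0.15 * n), replace=False)
state[flip] *= -1

# Asynchronous recall sweeps.
for _ in range(10):
    for i in rng.permutation(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

overlap = (state @ patterns[0]) / n
```

At this low memory load, recall survives the dilution: the final overlap with the stored pattern stays close to 1.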
Lin, Tiger W.; Das, Anup; Krishnan, Giri P.; Bazhenov, Maxim; Sejnowski, Terrence J.
2017-01-01
With our ability to record more neurons simultaneously, making sense of these data is a challenge. Functional connectivity is one popular way to study the relationship of multiple neural signals. Correlation-based methods are a set of currently well-used techniques for functional connectivity estimation. However, due to explaining away and unobserved common inputs (Stevenson, Rebesco, Miller, & Körding, 2008), they produce spurious connections. The general linear model (GLM), which models spike trains as Poisson processes (Okatan, Wilson, & Brown, 2005; Truccolo, Eden, Fellows, Donoghue, & Brown, 2005; Pillow et al., 2008), avoids these confounds. We develop here a new class of methods by using differential signals based on simulated intracellular voltage recordings. It is equivalent to a regularized AR(2) model. We also expand the method to simulated local field potential recordings and calcium imaging. In all of our simulated data, the differential covariance-based methods achieved performance better than or similar to the GLM method and required fewer data samples. This new class of methods provides alternative ways to analyze neural signals. PMID:28777719
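The core differential-covariance idea, correlating the derivative of each signal with the signals themselves, can be sketched on a simulated linear system (a toy stand-in for the paper's intracellular-voltage simulations; the dynamics, sizes, and the inverse-covariance "partialization" step below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Ground-truth linear dynamics dx/dt = A x + noise for three "neurons":
# neuron 0 excites 1, neuron 1 inhibits 2, with a leak on the diagonal.
A = np.array([[-1.0,  0.0,  0.0],
              [ 0.8, -1.0,  0.0],
              [ 0.0, -0.7, -1.0]])
dt, T = 0.01, 100_000
x = np.zeros((T, 3))
for t in range(T - 1):
    x[t + 1] = x[t] + dt * (A @ x[t]) + np.sqrt(dt) * rng.standard_normal(3)

dxdt = (x[1:] - x[:-1]) / dt
xs = x[:-1]

# Differential covariance Cov(dx/dt, x); multiplying by the inverse
# covariance of x removes confounds from indirect and common inputs,
# since Cov(dx/dt, x) = A Cov(x, x) for this model.
C_diff = (dxdt - dxdt.mean(0)).T @ (xs - xs.mean(0)) / len(xs)
A_hat = C_diff @ np.linalg.inv(np.cov(xs.T))
```

The recovered matrix approximates the true coupling, including the signs of the excitatory and inhibitory connections.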
Exploring the Function Space of Deep-Learning Machines
NASA Astrophysics Data System (ADS)
Li, Bo; Saad, David
2018-06-01
The function space of deep-learning machines is investigated by studying the growth in the entropy of functions at a given error with respect to a reference function realized by a deep-learning machine. Using physics-inspired methods, we study both sparsely and densely connected architectures to discover a layerwise convergence of candidate functions, marked by a corresponding reduction in entropy when approaching the reference function; we gain insight into the importance of having a large number of layers, and we observe phase transitions as the error increases.
Dual pore-connectivity and flow-paths affect shale hydrocarbon production
NASA Astrophysics Data System (ADS)
Hayman, N. W.; Daigle, H.; Kelly, E. D.; Milliken, K. L.; Jiang, H.
2016-12-01
Aided by integrated characterization approaches of droplet contact angle measurement, mercury intrusion capillary pressure, low-pressure gas physisorption, scanning electron microscopy, and small angle neutron scattering, we have systematically studied how pore connectivity and wettability are associated with mineral and organic matter phases of shales (Barnett, Bakken, Eagle Ford), as well as their influence on macroscopic fluid flow and hydrocarbon movement, from the following complementary tests: vacuum saturation with vacuum-pulling on dry shale followed by tracer introduction and high-pressure intrusion; tracer diffusion into fluid-saturated shale; fluid and tracer imbibition into partially-saturated shale; and Wood's metal intrusion followed by imaging and elemental mapping. The first three tests use tracer-bearing fluids (hydrophilic API brine and hydrophobic n-decane) with a suite of wettability tracers of different sizes and reactivities developed in our laboratory. These innovative and integrated approaches indicate Dalmatian wettability behavior at a scale of microns, limited connectivity of shale pores (<500 microns from the shale sample edge), and a disparity between a well-connected hydrophobic pore network ( 10 nm) and sparsely connected hydrophilic pore systems (>50-100 nm), which is linked to the steep initial decline and low overall recovery because of the limited connection of hydrocarbon molecules in the shale matrix to the stimulated fracture network.
Ghanbari, Yasser; Smith, Alex R.; Schultz, Robert T.; Verma, Ragini
2014-01-01
Diffusion tensor imaging (DTI) offers rich insights into the physical characteristics of white matter (WM) fiber tracts and their development in the brain, facilitating a network representation of brain’s traffic pathways. Such a network representation of brain connectivity has provided a novel means of investigating brain changes arising from pathology, development or aging. The high dimensionality of these connectivity networks necessitates the development of methods that identify the connectivity building blocks or sub-network components that characterize the underlying variation in the population. In addition, the projection of the subject networks into the basis set provides a low dimensional representation of it, that teases apart different sources of variation in the sample, facilitating variation-specific statistical analysis. We propose a unified framework of non-negative matrix factorization and graph embedding for learning sub-network patterns of connectivity by their projective non-negative decomposition into a reconstructive basis set, as well as, additional basis sets representing variational sources in the population like age and pathology. The proposed framework is applied to a study of diffusion-based connectivity in subjects with autism that shows localized sparse sub-networks which mostly capture the changes related to pathology and developmental variations. PMID:25037933
Prediction of vein connectivity using the percolation approach: model test with field data
NASA Astrophysics Data System (ADS)
Belayneh, M.; Masihi, M.; Matthäi, S. K.; King, P. R.
2006-09-01
Evaluating the uncertainty in fracture connectivity and its effect on the flow behaviour of natural fracture networks formed under in situ conditions is an extremely difficult task. One widely used probabilistic approach is to use percolation theory, which is well adapted to estimate the connectivity and conductivity of geometrical objects near the percolation threshold. In this paper, we apply scaling laws from percolation theory to predict the connectivity of vein sets exposed on the southern margin of the Bristol Channel Basin. Two vein sets in a limestone bed interbedded with shales on the limb of a rollover fold were analysed for length, spacing and aperture distributions. Eight scan lines, low-level aerial photographs and mosaics of photographs taken with a tripod were used. The analysed veins formed contemporaneously with the rollover fold during basin subsidence on the hanging wall of a listric normal fault. The first vein set, V1, is fold axis-parallel (i.e. striking ~100°) and normal to bedding. The second vein set, V2, strikes 140° and crosscuts V1. We find a close agreement in connectivity between our predictions using the percolation approach and the field data. The implication is that reasonable predictions of vein connectivity can be made from sparse data obtained from boreholes or (limited) sporadic outcrop.
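Connectivity near a percolation threshold can be illustrated generically with site percolation on a square lattice and a union-find spanning check (the paper's veins are stick-like objects rather than lattice sites, so this is a structural analogy only, with illustrative occupation probabilities):

```python
import numpy as np

def percolates(grid):
    """Check top-to-bottom connectivity of occupied sites (4-neighbour)."""
    n, m = grid.shape
    parent = list(range(n * m + 2))      # +2 virtual top/bottom nodes
    TOP, BOT = n * m, n * m + 1

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for i in range(n):
        for j in range(m):
            if not grid[i, j]:
                continue
            idx = i * m + j
            if i == 0: union(idx, TOP)
            if i == n - 1: union(idx, BOT)
            if i + 1 < n and grid[i + 1, j]: union(idx, idx + m)
            if j + 1 < m and grid[i, j + 1]: union(idx, idx + 1)
    return find(TOP) == find(BOT)

rng = np.random.default_rng(5)
low = percolates(rng.random((50, 50)) < 0.3)   # well below p_c (~0.593)
high = percolates(rng.random((50, 50)) < 0.8)  # well above p_c
```

Below the threshold a spanning cluster is essentially never present; above it, it almost always is, which is the sharp transition the scaling laws describe.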
Large-scale DCMs for resting-state fMRI.
Razi, Adeel; Seghier, Mohamed L; Zhou, Yuan; McColgan, Peter; Zeidman, Peter; Park, Hae-Jeong; Sporns, Olaf; Rees, Geraint; Friston, Karl J
2017-01-01
This paper considers the identification of large directed graphs for resting-state brain networks based on biophysical models of distributed neuronal activity, that is, effective connectivity. This identification can be contrasted with functional connectivity methods based on symmetric correlations that are ubiquitous in resting-state functional MRI (fMRI). We use spectral dynamic causal modeling (DCM) to invert large graphs comprising dozens of nodes or regions. The ensuing graphs are directed and weighted, hence providing a neurobiologically plausible characterization of connectivity in terms of excitatory and inhibitory coupling. Furthermore, we show that discovering the most likely sparse graph (or model) from a parent (e.g., fully connected) graph eschews the arbitrary thresholding often applied to large symmetric (functional connectivity) graphs. Using empirical fMRI data, we show that spectral DCM furnishes connectivity estimates on large graphs that correlate strongly with the estimates provided by stochastic DCM. Furthermore, we increase the efficiency of model inversion using functional connectivity modes to place prior constraints on effective connectivity. In other words, we use a small number of modes to finesse the potentially redundant parameterization of large DCMs. We show that spectral DCM, with functional connectivity priors, is ideally suited for directed graph-theoretic analyses of resting-state fMRI. We envision that directed graphs will prove useful in understanding the psychopathology and pathophysiology of neurodegenerative and neurodevelopmental disorders. We will demonstrate the utility of large directed graphs in clinical populations in subsequent reports, using the procedures described in this paper.
Network architecture of the cerebral nuclei (basal ganglia) association and commissural connectome
Swanson, Larry W.; Sporns, Olaf; Hahn, Joel D.
2016-01-01
The cerebral nuclei form the ventral division of the cerebral hemisphere and are thought to play an important role in neural systems controlling somatic movement and motivation. Network analysis was used to define global architectural features of intrinsic cerebral nuclei circuitry in one hemisphere (association connections) and between hemispheres (commissural connections). The analysis was based on more than 4,000 reports of histologically defined axonal connections involving all 45 gray matter regions of the rat cerebral nuclei and revealed the existence of four asymmetrically interconnected modules. The modules form four topographically distinct longitudinal columns that only partly correspond to previous interpretations of cerebral nuclei structure–function organization. The network of connections within and between modules in one hemisphere or the other is quite dense (about 40% of all possible connections), whereas the network of connections between hemispheres is weak and sparse (only about 5% of all possible connections). Particularly highly interconnected regions (rich club and hubs within it) form a topologically continuous band extending through two of the modules. Connection path lengths among numerous pairs of regions, and among some of the network’s modules, are relatively long, thus accounting for low global efficiency in network communication. These results provide a starting point for reexamining the connectional organization of the cerebral hemispheres as a whole (right and left cerebral cortex and cerebral nuclei together) and their relation to the rest of the nervous system. PMID:27647882
Intrapartum fetal heart rate classification from trajectory in Sparse SVM feature space.
Spilka, J; Frecon, J; Leonarduzzi, R; Pustelnik, N; Abry, P; Doret, M
2015-01-01
Intrapartum fetal heart rate (FHR) constitutes a prominent source of information for the assessment of fetal reactions to stress events during delivery. Yet, early detection of fetal acidosis remains a challenging signal processing task. The originality of the present contribution is threefold: multiscale representations and wavelet-leader based multifractal analysis are used to quantify FHR variability; supervised classification is achieved by means of sparse SVMs that jointly aim to achieve optimal detection performance and to select relevant features in a multivariate setting; and trajectories in the feature space, accounting for the evolution of features over time as labor progresses, are involved in the construction of indices quantifying fetal health. The classification performance permitted by this combination of tools is quantified on a large intrapartum FHR database (≃ 1250 subjects) collected at a French academic public hospital.
A dictionary learning approach for Poisson image deblurring.
Ma, Liyan; Moisan, Lionel; Yu, Jian; Zeng, Tieyong
2013-07-01
The restoration of images corrupted by blur and Poisson noise is a key issue in medical and biological image processing. While most existing methods are based on variational models, generally derived from a maximum a posteriori (MAP) formulation, sparse representations of images have recently been shown to be efficient approaches for image recovery. Following this idea, we propose in this paper a model containing three terms: a patch-based sparse representation prior over a learned dictionary, a pixel-based total variation regularization term, and a data-fidelity term capturing the statistics of Poisson noise. The resulting optimization problem can be solved by an alternating minimization technique combined with variable splitting. Extensive experimental results suggest that, in terms of visual quality, peak signal-to-noise ratio, and method noise, the proposed algorithm outperforms state-of-the-art methods.
Sparse distributed memory: understanding the speed and robustness of expert memory
Brogliato, Marcelo S.; Chada, Daniel M.; Linhares, Alexandre
2014-01-01
How can experts, sometimes in exacting detail, almost immediately and very precisely recall memory items from a vast repertoire? The problem in which we will be interested concerns models of theoretical neuroscience that could explain the speed and robustness of an expert's recollection. The approach is based on Sparse Distributed Memory, which has been shown to be plausible, both in a neuroscientific and in a psychological manner, in a number of ways. A crucial characteristic concerns the limits of human recollection, the “tip-of-the-tongue” memory event—which is found at a non-linearity in the model. We expand the theoretical framework, deriving an optimization formula to solve this non-linearity. Numerical results demonstrate how the higher frequency of rehearsal, through work or study, immediately increases the robustness and speed associated with expert memory. PMID:24808842
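Kanerva-style Sparse Distributed Memory, on which the model builds, can be sketched in a few lines (the address length, number of hard locations, activation radius, and noise level are illustrative choices, not the paper's parameters):

```python
import numpy as np

rng = np.random.default_rng(6)

N, M = 256, 2000        # address length in bits, number of hard locations
R = 112                 # Hamming activation radius

hard_addr = rng.integers(0, 2, size=(M, N))
counters = np.zeros((M, N), dtype=int)

def activated(addr):
    """Hard locations within Hamming distance R of the address."""
    return (hard_addr != addr).sum(axis=1) <= R

def write(addr, data):
    counters[activated(addr)] += 2 * data - 1   # +1 for bit 1, -1 for bit 0

def read(addr):
    return (counters[activated(addr)].sum(axis=0) >= 0).astype(int)

# Autoassociative storage of 10 random patterns.
patterns = rng.integers(0, 2, size=(10, N))
for p in patterns:
    write(p, p)

# Recall from a cue with 10% of its bits flipped.
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
cue[flip] ^= 1
accuracy = (read(cue) == patterns[0]).mean()
```

Because the cue lies well inside the critical distance for this load, a single read already recovers the stored pattern almost perfectly.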
Semantic Edge Based Disparity Estimation Using Adaptive Dynamic Programming for Binocular Sensors
Zhu, Dongchen; Li, Jiamao; Wang, Xianshun; Peng, Jingquan; Shi, Wenjun; Zhang, Xiaolin
2018-01-01
Disparity calculation is crucial for binocular sensor ranging. The disparity estimation based on edges is an important branch in the research of sparse stereo matching and plays an important role in visual navigation. In this paper, we propose a robust sparse stereo matching method based on semantic edges. Some simple matching costs are used first, and then a novel adaptive dynamic programming algorithm is proposed to obtain optimal solutions. This algorithm makes use of the disparity or semantic consistency constraint between the stereo images to adaptively search parameters, which can improve the robustness of our method. The proposed method is compared quantitatively and qualitatively with the traditional dynamic programming method, some dense stereo matching methods, and the advanced edge-based method, respectively. Experiments show that our method provides superior performance in these comparisons. PMID:29614028
Distributed fiber sparse-wideband vibration sensing by sub-Nyquist additive random sampling
NASA Astrophysics Data System (ADS)
Zhang, Jingdong; Zheng, Hua; Zhu, Tao; Yin, Guolu; Liu, Min; Bai, Yongzhong; Qu, Dingrong; Qiu, Feng; Huang, Xianbing
2018-05-01
The round-trip time of the light pulse limits the maximum detectable vibration frequency response range of phase-sensitive optical time domain reflectometry (φ-OTDR). Unlike the uniform laser pulse interval in conventional φ-OTDR, we randomly modulate the pulse interval, so that an equivalent sub-Nyquist additive random sampling (sNARS) is realized for every sensing point of the long interrogation fiber. For a φ-OTDR system with 10 km sensing length, the sNARS method is optimized by theoretical analysis and Monte Carlo simulation, and the experimental results verify that a wide-band sparse signal can be identified and reconstructed. Such a method can broaden the vibration frequency response range of φ-OTDR, which is of great significance in sparse-wideband-frequency vibration signal detection, such as rail track monitoring and metal defect detection.
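The anti-aliasing argument behind sNARS can be demonstrated with a toy experiment: a 0.9 Hz tone sampled at uniform 1 s intervals is indistinguishable from its 0.1 Hz alias, while randomized intervals break the ambiguity (all frequencies and interval distributions here are illustrative, not the system's parameters):

```python
import numpy as np

rng = np.random.default_rng(8)

f_true = 0.9    # Hz, above the 0.5 Hz Nyquist limit of 1 s uniform sampling

# Additive random sampling: pulse intervals jittered around 1 s.
t_rand = np.cumsum(0.5 + rng.random(400))   # intervals uniform in [0.5, 1.5] s
t_unif = np.arange(400) * 1.0

def residual(t, f_candidate):
    """Least-squares residual of fitting a sinusoid at a trial frequency."""
    x = np.cos(2 * np.pi * f_true * t)      # measured vibration signal
    B = np.column_stack([np.cos(2 * np.pi * f_candidate * t),
                         np.sin(2 * np.pi * f_candidate * t)])
    coef, *_ = np.linalg.lstsq(B, x, rcond=None)
    return np.linalg.norm(x - B @ coef)

# Uniform 1 s sampling: 0.9 Hz and its alias 0.1 Hz fit equally well.
alias_gap_unif = abs(residual(t_unif, 0.9) - residual(t_unif, 0.1))
# Random sampling: only the true frequency fits well.
r_true, r_alias = residual(t_rand, 0.9), residual(t_rand, 0.1)
```

Under uniform sampling both candidate frequencies explain the data perfectly, while under random sampling the alias leaves a large residual, so the true frequency can be identified beyond the nominal Nyquist limit.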
Sparse and stable Markowitz portfolios
Brodie, Joshua; Daubechies, Ingrid; De Mol, Christine; Giannone, Domenico; Loris, Ignace
2009-01-01
We consider the problem of portfolio selection within the classical Markowitz mean-variance framework, reformulated as a constrained least-squares regression problem. We propose to add to the objective function a penalty proportional to the sum of the absolute values of the portfolio weights. This penalty regularizes (stabilizes) the optimization problem, encourages sparse portfolios (i.e., portfolios with only few active positions), and allows accounting for transaction costs. Our approach recovers as special cases the no-short-positions portfolios, but does allow for short positions in limited number. We implement this methodology on two benchmark data sets constructed by Fama and French. Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by Sharpe ratio, is consistently and significantly better than that of the naïve evenly weighted portfolio. PMID:19617537
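The penalized regression reformulation can be sketched with an off-the-shelf lasso solver on synthetic returns (the paper enforces the budget constraint inside the optimization and tunes the penalty to the data; here we simply renormalize afterwards, and all data and parameters are made up):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)

# Synthetic returns: 400 periods x 15 assets with modest positive means.
T, p = 400, 15
mu = rng.uniform(0.01, 0.05, size=p)
R = mu + 0.05 * rng.standard_normal((T, p))

rho = 0.03                      # target per-period portfolio return
y = np.full(T, rho)

# Penalized least-squares reformulation: ||y - R w||^2 + tau * ||w||_1.
# The L1 penalty stabilizes the problem and zeroes out most positions.
model = Lasso(alpha=1e-3, fit_intercept=False, max_iter=100000).fit(R, y)
w = model.coef_ / model.coef_.sum()     # renormalize so weights sum to 1

n_active = int((np.abs(w) > 1e-10).sum())
```

The resulting portfolio holds only a handful of active positions, which is the sparsity (and implicit transaction-cost control) the penalty is designed to produce.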
Maximizing algebraic connectivity in air transportation networks
NASA Astrophysics Data System (ADS)
Wei, Peng
In air transportation networks the robustness of a network with respect to node and link failures is a key design factor. An experiment based on a real air transportation network is performed to show that the algebraic connectivity is a good measure of network robustness. Three optimization problems of algebraic connectivity maximization are then formulated in order to find the most robust network design under different constraints. The algebraic connectivity maximization problem with flight route addition or deletion is formulated first. Three methods to optimize and analyze the network algebraic connectivity are proposed. The Modified Greedy Perturbation Algorithm (MGP) provides a sub-optimal solution in a fast iterative manner. The Weighted Tabu Search (WTS) is designed to offer a near-optimal solution with longer running time. The relaxed semi-definite programming (SDP) is used to set a performance upper bound, and three rounding techniques are discussed to find the feasible solution. The simulation results present the trade-off among the three methods. The case studies on the air transportation networks of Virgin America and Southwest Airlines show that the developed methods can be applied to real-world large-scale networks. The algebraic connectivity maximization problem is extended by adding a leg number constraint, which accounts for the traveler's tolerance for the total number of connecting stops. The Binary Semi-Definite Programming (BSDP) with cutting plane method provides the optimal solution. The tabu search and 2-opt search heuristics can find the optimal solution in small-scale networks and a near-optimal solution in large-scale networks. The third algebraic connectivity maximization problem, with an operating cost constraint, is then formulated. When the total operating cost budget is given, the number of edges to be added is not fixed, and each edge weight needs to be calculated instead of being pre-determined.
It is shown that edge addition and weight assignment cannot be studied separately for the problem with the operating cost constraint; therefore, a relaxed SDP method with golden section search is developed to solve both simultaneously. Cluster decomposition is utilized to handle large-scale networks.
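Algebraic connectivity, the quantity being maximized in this record, can be computed directly as the second-smallest eigenvalue of the graph Laplacian. The sketch below is a minimal illustration only (it implements none of the MGP, WTS, or SDP methods described above); the function name and toy graphs are ours.

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    adj = np.asarray(adj, dtype=float)
    laplacian = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(laplacian))[1]

# Toy example: adding the edge 0-2 to the path 0-1-2 (forming a triangle)
# raises the algebraic connectivity from 1 to 3, i.e., the graph becomes
# more robust to node and link failures.
path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
```

Greedy edge-addition heuristics such as MGP evaluate exactly this quantity (or a perturbation approximation of it) for each candidate edge.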
Holographic implementation of a binary associative memory for improved recognition
NASA Astrophysics Data System (ADS)
Bandyopadhyay, Somnath; Ghosh, Ajay; Datta, Asit K.
1998-03-01
Neural network associative memory has found wide application in pattern recognition techniques. We propose an associative memory model for binary character recognition. The interconnection strengths of the memory are binary valued. The concept of sparse coding is used to enhance the storage efficiency of the model. The problem of imposed preconditioning of pattern vectors, which is inherent in a sparsely coded conventional memory, is eliminated by using a multistep correlation technique, and the ability of correct association is enhanced in real-time application. A potential optoelectronic implementation of the proposed associative memory is also described. Learning and recall are possible using digital optical matrix-vector multiplication, where full use is made of the parallelism and connectivity of optics. A hologram is used in the experiment as a long-term memory (LTM) for storing all input information. The short-term memory, or the interconnection weight matrix required during the recall process, is configured by retrieving the necessary information from the holographic LTM.
Competition in high dimensional spaces using a sparse approximation of neural fields.
Quinton, Jean-Charles; Girau, Bernard; Lefort, Mathieu
2011-01-01
The Continuum Neural Field Theory implements competition within topologically organized neural networks with lateral inhibitory connections. However, due to the polynomial complexity of matrix-based implementations, updating dense representations of the activity becomes computationally intractable when an adaptive resolution or an arbitrary number of input dimensions is required. This paper proposes an alternative to self-organizing maps: a sparse implementation based on Gaussian mixture models, which trades some redundancy for higher computational efficiency and alleviates constraints on the underlying substrate. This version reproduces the emergent attentional properties of the original equations by directly applying them within a continuous approximation of a high-dimensional neural field. The model is compatible with preprocessed sensory flows but can also be interfaced with artificial systems. This is particularly important for sensorimotor systems, where decisions and motor actions must be taken and updated in real time. Preliminary tests are performed on a reactive color-tracking application, using spatially distributed color features.
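The representational idea, approximating a neural field by a sparse set of weighted Gaussians rather than a dense activity grid, can be sketched in a few lines. This is a minimal illustration under an isotropic-kernel assumption; the paper's competition dynamics and update equations are not reproduced, and all names here are ours.

```python
import numpy as np

def field_activity(points, centers, weights, sigma=1.0):
    """Evaluate a neural field approximated as a sparse Gaussian mixture:
    activity(x) = sum_i w_i * exp(-||x - c_i||^2 / (2 * sigma^2)).
    Cost scales with the number of mixture components, not with the
    resolution of a dense activity grid."""
    points = np.atleast_2d(points)
    centers = np.atleast_2d(centers)
    sq_dist = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return (np.asarray(weights) * np.exp(-sq_dist / (2 * sigma ** 2))).sum(axis=1)
```

Because the field is stored as (center, weight) pairs, the input dimensionality affects only the distance computation, which is what keeps high-dimensional fields tractable.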
Huang, Shuai; Li, Jing; Ye, Jieping; Fleisher, Adam; Chen, Kewei; Wu, Teresa; Reiman, Eric
2013-06-01
Structure learning of Bayesian Networks (BNs) is an important topic in machine learning. Driven by modern applications in genetics and brain sciences, accurate and efficient learning of large-scale BN structures from high-dimensional data becomes a challenging problem. To tackle this challenge, we propose a Sparse Bayesian Network (SBN) structure learning algorithm that employs a novel formulation involving one L1-norm penalty term to impose sparsity and another penalty term to ensure that the learned BN is a Directed Acyclic Graph--a required property of BNs. Through both theoretical analysis and extensive experiments on 11 moderate and large benchmark networks with various sample sizes, we show that SBN leads to improved learning accuracy, scalability, and efficiency as compared with 10 existing popular BN learning algorithms. We apply SBN to a real-world application of brain connectivity modeling for Alzheimer's disease (AD) and reveal findings that could lead to advancements in AD research.
Huang, Shuai; Li, Jing; Ye, Jieping; Fleisher, Adam; Chen, Kewei; Wu, Teresa; Reiman, Eric
2014-01-01
Structure learning of Bayesian Networks (BNs) is an important topic in machine learning. Driven by modern applications in genetics and brain sciences, accurate and efficient learning of large-scale BN structures from high-dimensional data becomes a challenging problem. To tackle this challenge, we propose a Sparse Bayesian Network (SBN) structure learning algorithm that employs a novel formulation involving one L1-norm penalty term to impose sparsity and another penalty term to ensure that the learned BN is a Directed Acyclic Graph (DAG)—a required property of BNs. Through both theoretical analysis and extensive experiments on 11 moderate and large benchmark networks with various sample sizes, we show that SBN leads to improved learning accuracy, scalability, and efficiency as compared with 10 existing popular BN learning algorithms. We apply SBN to a real-world application of brain connectivity modeling for Alzheimer’s disease (AD) and reveal findings that could lead to advancements in AD research. PMID:22665720
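The way an L1-norm penalty imposes sparsity can be made concrete with its proximal operator, soft-thresholding, which proximal-gradient solvers for L1-penalized problems apply to candidate edge weights at every iteration. This is a generic sketch of the mechanism, not the SBN algorithm itself.

```python
import numpy as np

def soft_threshold(weights, lam):
    """Proximal operator of lam * ||w||_1: shrink every entry toward zero
    and zero out those whose magnitude falls below lam. Repeatedly applying
    this after a gradient step drives weak candidate edges exactly to zero,
    which is how an L1 penalty produces a sparse network estimate."""
    weights = np.asarray(weights, dtype=float)
    return np.sign(weights) * np.maximum(np.abs(weights) - lam, 0.0)
```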
Efficient transformation of an auditory population code in a small sensory system.
Clemens, Jan; Kutzki, Olaf; Ronacher, Bernhard; Schreiber, Susanne; Wohlgemuth, Sandra
2011-08-16
Optimal coding principles are implemented in many large sensory systems. They include the systematic transformation of external stimuli into a sparse and decorrelated neuronal representation, enabling a flexible readout of stimulus properties. Are these principles also applicable to size-constrained systems, which have to rely on a limited number of neurons and may only have to fulfill specific and restricted tasks? We studied this question in an insect system--the early auditory pathway of grasshoppers. Grasshoppers use genetically fixed songs to recognize mates. The first steps of neural processing of songs take place in a small three-layer feed-forward network comprising only a few dozen neurons. We analyzed the transformation of the neural code within this network. Indeed, grasshoppers create a decorrelated and sparse representation, in accordance with optimal coding theory. Whereas the neuronal input layer is best read out as a summed population, a labeled-line population code for temporal features of the song is established after only two processing steps. At this stage, information about song identity is maximal for a population decoder that preserves neuronal identity. We conclude that optimal coding principles do apply to the early auditory system of the grasshopper, despite its size constraints. The inputs, however, are not encoded in a systematic, map-like fashion as in many larger sensory systems. Already at its periphery, part of the grasshopper auditory system seems to focus on behaviorally relevant features, and is in this property more reminiscent of higher sensory areas in vertebrates.
Optimizing Sparse Representations of Kinetic Distributions via Information Theory
2017-07-31
Are V1 Simple Cells Optimized for Visual Occlusions? A Comparative Study
Bornschein, Jörg; Henniges, Marc; Lücke, Jörg
2013-01-01
Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the linear model investigated here with optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study therefore suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex. PMID:23754938
Maximizing algebraic connectivity in interconnected networks.
Shakeri, Heman; Albin, Nathan; Darabi Sahneh, Faryad; Poggi-Corradini, Pietro; Scoglio, Caterina
2016-03-01
Algebraic connectivity, the second-smallest eigenvalue of the Laplacian matrix, is a measure of node and link connectivity on networks. When studying interconnected networks, it is useful to consider a multiplex model, where the component networks operate together with interlayer links among them. In order to have a well-connected multilayer structure, it is necessary to optimally design these interlayer links considering realistic constraints. In this work, we solve the problem of finding an optimal weight distribution for one-to-one interlayer links under a budget constraint. We show that for the special multiplex configurations with identical layers, the uniform weight distribution is always optimal. On the other hand, when the two layers are arbitrary, increasing the budget reveals the existence of two different regimes. Up to a certain threshold budget, the second eigenvalue of the supra-Laplacian is simple, the optimal weight distribution is uniform, and the Fiedler vector is constant on each layer. Increasing the budget past the threshold, the optimal weight distribution can be nonuniform. The interesting consequence of this result is that there is no need to solve the optimization problem when the available budget is less than the threshold, which can be easily found analytically.
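For the identical-layer case, the threshold behavior described above is easy to verify numerically. The sketch below builds the supra-Laplacian of a two-layer multiplex with uniform one-to-one interlayer weight p; function names are ours, and the general (non-identical) case requires the full optimization the paper describes.

```python
import numpy as np

def supra_laplacian(lap_a, lap_b, p):
    """Supra-Laplacian of a two-layer multiplex whose n one-to-one
    interlayer links all carry the same weight p."""
    n = lap_a.shape[0]
    eye = np.eye(n)
    return np.block([[lap_a + p * eye, -p * eye],
                     [-p * eye, lap_b + p * eye]])

def lambda2(mat):
    """Second-smallest eigenvalue (algebraic connectivity)."""
    return np.sort(np.linalg.eigvalsh(mat))[1]
```

For identical layers the spectrum splits into {mu_i} and {mu_i + 2p}, so lambda2 grows as 2p until it reaches the layer's own algebraic connectivity and then saturates, matching the two budget regimes described in the abstract.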
Optimizing Barrier Removal to Restore Connectivity in Utah's Weber Basin
NASA Astrophysics Data System (ADS)
Kraft, M.; Null, S. E.
2016-12-01
In-stream barriers, such as dams, culverts and diversions, are economically important for water supply but negatively affect river ecosystems and disrupt hydrologic processes. Removal of uneconomical and aging in-stream barriers is increasingly used to restore river connectivity and improve habitat. Most past barrier removal projects focused on individual barriers using a score-and-rank technique, ignoring the cumulative change from multiple, spatially connected barrier removals. Similarly, most water supply models optimize either human water use or aquatic connectivity, failing to holistically represent human and environmental benefits. In this study, a dual-objective optimization model identifies in-stream barriers that impede aquatic habitat connectivity for trout, using streamflow, temperature, and channel gradient as indicators of aquatic habitat suitability. Water scarcity costs are minimized using agricultural and urban economic penalty functions to incorporate water supply benefits, and a budget monetizes the costs of removing small barriers like culverts and road crossings. The optimization model is applied to a case study in Utah's Weber basin to prioritize removal of the most environmentally harmful barriers while maintaining human water uses. The dual-objective solution basis was developed to quantify and graphically visualize tradeoffs between connected quality-weighted habitat for Bonneville cutthroat trout and economic water uses. Modeled results include a spectrum of barrier removal alternatives based on budget and quality-weighted reconnected habitat that can be communicated to local stakeholders. This research will help prioritize barrier removals and future restoration decisions. The modeling approach expands current barrier removal optimization methods by explicitly including economic and environmental water uses.
Visual recognition and inference using dynamic overcomplete sparse learning.
Murray, Joseph F; Kreutz-Delgado, Kenneth
2007-09-01
We present a hierarchical architecture and learning algorithm for visual recognition and other visual inference tasks such as imagination, reconstruction of occluded images, and expectation-driven segmentation. Using properties of biological vision for guidance, we posit a stochastic generative world model and from it develop a simplified world model (SWM) based on a tractable variational approximation that is designed to enforce sparse coding. Recent developments in computational methods for learning overcomplete representations (Lewicki & Sejnowski, 2000; Teh, Welling, Osindero, & Hinton, 2003) suggest that overcompleteness can be useful for visual tasks, and we use an overcomplete dictionary learning algorithm (Kreutz-Delgado, et al., 2003) as a preprocessing stage to produce accurate, sparse codings of images. Inference is performed by constructing a dynamic multilayer network with feedforward, feedback, and lateral connections, which is trained to approximate the SWM. Learning is done with a variant of the back-propagation-through-time algorithm, which encourages convergence to desired states within a fixed number of iterations. Vision tasks require large networks, and to make learning efficient, we take advantage of the sparsity of each layer to update only a small subset of elements in a large weight matrix at each iteration. Experiments on a set of rotated objects demonstrate various types of visual inference and show that increasing the degree of overcompleteness improves recognition performance in difficult scenes with occluded objects in clutter.
Smith, Jason F.; Pillai, Ajay; Chen, Kewei; Horwitz, Barry
2012-01-01
Analysis of directionally specific or causal interactions between regions in functional magnetic resonance imaging (fMRI) data has proliferated. Here we identify six issues with existing effective connectivity methods that need to be addressed. The issues are discussed within the framework of linear dynamic systems for fMRI (LDSf). The first concerns the use of deterministic models to identify inter-regional effective connectivity. We show that deterministic dynamics are incapable of identifying the trial-to-trial variability typically investigated as the marker of connectivity, while stochastic models can capture this variability. The second concerns the simplistic (constant) connectivity modeled by most methods. Connectivity parameters of the LDSf model can vary at the same timescale as the input data. Further, extending LDSf to mixtures of multiple models provides more robust connectivity variation. The third concerns the correct identification of the network itself, including the number and anatomical origin of the network nodes. Augmentation of the LDSf state space can identify additional nodes of a network. The fourth concerns the locus of the signal used as a “node” in a network. A novel extension of LDSf incorporating sparse canonical correlations can select the most relevant voxels from an anatomically defined region based on connectivity. The fifth concerns connection interpretation. Individual parameter differences have received the most attention. We present alternative network descriptors of connectivity changes which consider the whole network. The sixth concerns the temporal resolution of fMRI data relative to the timescale of the inter-regional interactions in the brain. LDSf includes an “instantaneous” connection term to capture connectivity occurring at timescales faster than the data resolution. The LDS framework can also be extended to statistically combine fMRI and EEG data. The LDSf framework is a promising foundation for effective connectivity analysis.
PMID:22279430
Reconstruction of networks from one-step data by matching positions
NASA Astrophysics Data System (ADS)
Wu, Jianshe; Dang, Ni; Jiao, Yang
2018-05-01
Estimating the topology of a network from short time-series data is a challenge. In this paper, matching positions is developed to reconstruct the topology of a network from only one-step data. We consider a general network model of coupled agents, in which the phase transformation of each node is determined by its neighbors. From the phase transformation information from one step to the next, the connections of the tail vertices are first reconstructed by matching positions. By removing the already reconstructed vertices and repeatedly reconstructing the connections of the new tail vertices, the topology of the entire network is recovered. For sparse scale-free networks with more than ten thousand nodes, we almost obtain the actual topology using only the one-step data in simulations.
Information jet: Handling noisy big data from weakly disconnected network
NASA Astrophysics Data System (ADS)
Aurongzeb, Deeder
Sudden aggregation (an information jet) of large amounts of data is ubiquitous around connected social networks, driven by sudden interacting and non-interacting events, network security attacks, online sales channels, etc. Clustering of information jets based on time-series analysis and graph theory is not new, but little work has been done to connect them with particle jet statistics. We show that context-based pre-clustering can yield a soft network, or network of information, which is critical for minimizing the time to compute results from noisy big data. We show the difference between stochastic gradient boosting and time-series graph clustering. For disconnected higher-dimensional information jets, we use the Kallenberg representation theorem (Kallenberg, 2005, arXiv:1401.1137) to identify and eliminate jet similarities from dense or sparse graphs.
Sankaran, Sethuraman; Humphrey, Jay D.; Marsden, Alison L.
2013-01-01
Computational models for vascular growth and remodeling (G&R) are used to predict the long-term response of vessels to changes in pressure, flow, and other mechanical loading conditions. Accurate predictions of these responses are essential for understanding numerous disease processes. Such models require reliable inputs of numerous parameters, including material properties and growth rates, which are often experimentally derived, and inherently uncertain. While earlier methods have used a brute force approach, systematic uncertainty quantification in G&R models promises to provide much better information. In this work, we introduce an efficient framework for uncertainty quantification and optimal parameter selection, and illustrate it via several examples. First, an adaptive sparse grid stochastic collocation scheme is implemented in an established G&R solver to quantify parameter sensitivities, and near-linear scaling with the number of parameters is demonstrated. This non-intrusive and parallelizable algorithm is compared with standard sampling algorithms such as Monte-Carlo. Second, we determine optimal arterial wall material properties by applying robust optimization. We couple the G&R simulator with an adaptive sparse grid collocation approach and a derivative-free optimization algorithm. We show that an artery can achieve optimal homeostatic conditions over a range of alterations in pressure and flow; robustness of the solution is enforced by including uncertainty in loading conditions in the objective function. We then show that homeostatic intramural and wall shear stress is maintained for a wide range of material properties, though the time it takes to achieve this state varies. We also show that the intramural stress is robust and lies within 5% of its mean value for realistic variability of the material parameters. 
We observe that the prestretches of elastin and collagen are most critical to maintaining homeostasis, while the values of the material properties are most critical in determining response time. Finally, we outline several challenges to the G&R community for future work. We suggest that these tools provide the first systematic and efficient framework to quantify uncertainties and optimally identify G&R model parameters. PMID:23626380
A Novel Locally Linear KNN Method With Applications to Visual Recognition.
Liu, Qingfeng; Liu, Chengjun
2017-09-01
A locally linear K Nearest Neighbor (LLK) method is presented in this paper with applications to robust visual recognition. Specifically, the concept of an ideal representation is first presented, which improves upon the traditional sparse representation in many ways. The objective function based on a host of criteria for sparsity, locality, and reconstruction is then optimized to derive a novel representation, which is an approximation to the ideal representation. The novel representation is further processed by two classifiers, namely, an LLK-based classifier and a locally linear nearest mean-based classifier, for visual recognition. The proposed classifiers are shown to connect to the Bayes decision rule for minimum error. Additional new theoretical analysis is presented, such as the nonnegative constraint, the group regularization, and the computational efficiency of the proposed LLK method. New methods such as a shifted power transformation for improving reliability, a coefficient truncation method for enhancing generalization, and an improved marginal Fisher analysis method for feature extraction are proposed to further improve visual recognition performance. Extensive experiments are conducted to evaluate the proposed LLK method for robust visual recognition. In particular, eight representative data sets are applied for assessing the performance of the LLK method for various visual recognition applications, such as action recognition, scene recognition, object recognition, and face recognition.
Sensor-Oriented Path Planning for Multiregion Surveillance with a Single Lightweight UAV SAR
Li, Jincheng; Chen, Jie; Wang, Pengbo; Li, Chunsheng
2018-01-01
In the surveillance of regions of interest by an unmanned aerial vehicle (UAV), system performance relies greatly on the motion control strategy of the UAV and the operation characteristics of the onboard sensors. This paper investigates the 2D path planning problem for the lightweight UAV synthetic aperture radar (SAR) system in an environment of multiple regions of interest (ROIs), the sizes of which are comparable to the radar swath width. Taking into account the special requirements of the SAR system on the motion of the platform, we model path planning for UAV SAR as a constrained multiobjective optimization problem (MOP). Based on the fact that the UAV route can be designed in the map image, an image-based path planner is proposed in this paper. First, the neighboring ROIs are merged by the morphological operation. Then, the parts of routes for data collection of the ROIs can be located according to the geometric features of the ROIs and the observation geometry of UAV SAR. Lastly, the route segments for ROI surveillance are connected by a path planning algorithm named the sampling-based sparse A* search (SSAS) algorithm. Simulation experiments in real scenarios demonstrate that the proposed sensor-oriented path planner can improve the reconnaissance performance of lightweight UAV SAR greatly compared with the conventional zigzag path planner. PMID:29439447
Sensor-Oriented Path Planning for Multiregion Surveillance with a Single Lightweight UAV SAR.
Li, Jincheng; Chen, Jie; Wang, Pengbo; Li, Chunsheng
2018-02-11
In the surveillance of regions of interest by an unmanned aerial vehicle (UAV), system performance relies greatly on the motion control strategy of the UAV and the operation characteristics of the onboard sensors. This paper investigates the 2D path planning problem for the lightweight UAV synthetic aperture radar (SAR) system in an environment of multiple regions of interest (ROIs), the sizes of which are comparable to the radar swath width. Taking into account the special requirements of the SAR system on the motion of the platform, we model path planning for UAV SAR as a constrained multiobjective optimization problem (MOP). Based on the fact that the UAV route can be designed in the map image, an image-based path planner is proposed in this paper. First, the neighboring ROIs are merged by the morphological operation. Then, the parts of routes for data collection of the ROIs can be located according to the geometric features of the ROIs and the observation geometry of UAV SAR. Lastly, the route segments for ROI surveillance are connected by a path planning algorithm named the sampling-based sparse A* search (SSAS) algorithm. Simulation experiments in real scenarios demonstrate that the proposed sensor-oriented path planner can improve the reconnaissance performance of lightweight UAV SAR greatly compared with the conventional zigzag path planner.
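The abstracts do not spell out the sampling rules of the SSAS planner, but the A* core it builds on is standard. Below is a plain A* on a 4-connected occupancy grid with a Manhattan-distance heuristic, as a hedged stand-in; the grid encoding and names are ours.

```python
import heapq

def a_star(grid, start, goal):
    """Plain A* on a 4-connected grid (0 = free cell, 1 = obstacle).
    Returns a cheapest path as a list of (row, col) cells, or None.
    Sparse A* variants additionally prune the successor set, e.g. by
    sampling only headings compatible with platform motion constraints."""
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(heuristic(start), 0, start, [start])]
    visited = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in visited):
                heapq.heappush(frontier,
                               (cost + 1 + heuristic(nxt), cost + 1,
                                nxt, path + [nxt]))
    return None
```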
Cholestasis‐induced adaptive remodeling of interlobular bile ducts
Damle‐Vartak, Amruta; Richter, Beate; Dirsch, Olaf; Dahmen, Uta; Hammad, Seddik
2016-01-01
Cholestasis is a common complication in liver diseases that triggers a proliferative response of the biliary tree. Bile duct ligation (BDL) is a frequently used model of cholestasis in rodents. To determine which changes occur in the three‐dimensional (3D) architecture of the interlobular bile duct during cholestasis, we used 3D confocal imaging, surface reconstructions, and automated image quantification covering a period up to 28 days after BDL. We show a highly reproducible sequence of interlobular duct remodeling, where cholangiocyte proliferation initially causes corrugation of the luminal duct surface, leading to an approximately five‐fold increase in surface area. This is analogous to the function of villi in the intestine or sulci in the brain, where an expansion of area is achieved within a restricted volume. The increase in surface area is further enhanced by duct branching, branch elongation, and loop formation through self‐joining, whereby an initially relatively sparse mesh surrounding the portal vein becomes five‐fold denser through elongation, corrugation, and ramification. The number of connections between the bile duct and the lobular bile canalicular network by the canals of Hering decreases proportionally to the increase in bile duct length, suggesting that no novel connections are established. The diameter of the interlobular bile duct remains constant after BDL, a response that is qualitatively distinct from that of large bile ducts, which tend to enlarge their diameters. Therefore, volume enhancement is only due to net elongation of the ducts. Because curvature and tortuosity of the bile duct are unaltered, this enlargement of the biliary tree is caused by branching and not by convolution. Conclusion: BDL causes adaptive remodeling that aims at optimizing the intraluminal surface area by way of corrugation and branching. (Hepatology 2016;63:951–964) PMID:26610202
Towards Optimal Connectivity on Multi-layered Networks.
Chen, Chen; He, Jingrui; Bliss, Nadya; Tong, Hanghang
2017-10-01
Networks are prevalent in many high impact domains. Moreover, cross-domain interactions are frequently observed in many applications, which naturally form the dependencies between different networks. Such highly coupled network systems are referred to as multi-layered networks, and have been used to characterize various complex systems, including critical infrastructure networks, cyber-physical systems, collaboration platforms, biological systems and many more. Different from single-layered networks, where the functionality of their nodes is mainly affected by within-layer connections, multi-layered networks are more vulnerable to disturbance, as the impact can be amplified through cross-layer dependencies, leading to cascading failures across the entire system. To manipulate the connectivity in multi-layered networks, some recent methods have been proposed based on two-layered networks with specific types of connectivity measures. In this paper, we address the above challenges in multiple dimensions. First, we propose a family of connectivity measures (SUBLINE) that unifies a wide range of classic network connectivity measures. Second, we reveal that the connectivity measures in the SUBLINE family enjoy the diminishing returns property, which guarantees a near-optimal solution with linear complexity for the connectivity optimization problem. Finally, we evaluate our proposed algorithm on real data sets to demonstrate its effectiveness and efficiency.
Phenotypic Graphs and Evolution Unfold the Standard Genetic Code as the Optimal
NASA Astrophysics Data System (ADS)
Zamudio, Gabriel S.; José, Marco V.
2018-03-01
In this work, we explicitly consider the evolution of the Standard Genetic Code (SGC) by assuming two evolutionary stages, to wit, the primeval RNY code and two intermediate codes in between. We used network theory and graph theory to measure the connectivity of each phenotypic graph. The connectivity values are compared to the values of the codes under different randomization scenarios. An error-correcting optimal code is one in which the algebraic connectivity is minimized. We show that the SGC is optimal in regard to its robustness and error-tolerance when compared to all random codes under different assumptions.
Influence of topology in the mobility enhancement of pulse-coupled oscillator synchronization
NASA Astrophysics Data System (ADS)
Beardo, A.; Prignano, L.; Sagarra, O.; Díaz-Guilera, A.
2017-12-01
In this work we revisit the nonmonotonic behavior (NMB) of synchronization time with velocity reported for systems of mobile pulse-coupled oscillators (PCOs). We devise a control parameter that predicts the range of velocities in which NMB may occur, and we uncover the conditions under which the emergence of NMB can be established from specific features of the connectivity rule. Specifically, our results show that if the connectivity rule is such that the interaction patterns are sparse and, more importantly, include a large fraction of nonreciprocal interactions, then the system will display NMB. We furthermore provide a microscopic explanation relating the presence of such features of the connectivity patterns to the existence of local clusters unable to synchronize, termed frustrated clusters, for which we also give a precise definition in terms of simple graph concepts. We conclude that, if the probability of finding a frustrated cluster in a system of moving PCOs is high enough, NMB occurs in a predictable range of velocities.
A neural network with modular hierarchical learning
NASA Technical Reports Server (NTRS)
Baldi, Pierre F. (Inventor); Toomarian, Nikzad (Inventor)
1994-01-01
This invention provides a new hierarchical approach for supervised neural learning of time-dependent trajectories. The modular hierarchical methodology leads to architectures that are more structured than fully interconnected networks. The networks utilize a general feedforward flow of information and sparse recurrent connections to achieve dynamic effects. The advantages include the sparsity of units and connections and the modular organization. A further advantage is that learning is much more circumscribed than in fully interconnected systems. The present invention is embodied by a neural network including a plurality of neural modules, each having a pre-established performance capability, wherein each neural module has an output outputting present results of the performance capability and an input for changing the present results of the performance capability. For pattern recognition applications, the performance capability may be an oscillation capability producing a repeating wave pattern as the present results. In the preferred embodiment, each of the plurality of neural modules includes a pre-established capability portion and a performance adjustment portion connected to control the pre-established capability portion.
Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.
Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne
2018-01-01
State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems. PMID:29503613
Yger, Pierre; El Boustani, Sami; Destexhe, Alain; Frégnac, Yves
2011-10-01
The relationship between the dynamics of neural networks and their patterns of connectivity is far from clear, despite its importance for understanding functional properties. Here, we have studied sparsely-connected networks of conductance-based integrate-and-fire (IF) neurons with balanced excitatory and inhibitory connections and with finite axonal propagation speed. We focused on the genesis of states with highly irregular spiking activity and synchronous firing patterns at low rates, called slow Synchronous Irregular (SI) states. In such balanced networks, we examined the "macroscopic" properties of the spiking activity, such as ensemble correlations and mean firing rates, for different intracortical connectivity profiles ranging from randomly connected networks to networks with Gaussian-distributed local connectivity. We systematically computed the distance-dependent correlations at the extracellular (spiking) and intracellular (membrane potential) levels between randomly assigned pairs of neurons. The main finding is that such properties, when they are averaged at a macroscopic scale, are invariant with respect to the different connectivity patterns, provided the excitatory-inhibitory balance is the same. In particular, the same correlation structure holds for different connectivity profiles. In addition, we examined the response of such networks to external input, and found that the correlation landscape can be modulated by the mean level of synchrony imposed by the external drive. This modulation was found again to be independent of the external connectivity profile. We conclude that first and second-order "mean-field" statistics of such networks do not depend on the details of the connectivity at a microscopic scale. This study is an encouraging step toward a mean-field description of topological neuronal networks.
Viscoelastic properties of bovine orbital connective tissue and fat: constitutive models
Yoo, Lawrence; Gupta, Vijay; Lee, Choongyeop; Kavehpour, Pirouz
2012-01-01
Reported mechanical properties of orbital connective tissue and fat have been too sparse to model strain–stress relationships underlying biomechanical interactions in strabismus. We performed rheological tests to develop a multi-mode upper convected Maxwell (UCM) model of these tissues under shear loading. From 20 fresh bovine orbits, 30 samples of connective tissue were taken from rectus pulley regions and 30 samples of fatty tissues from the posterior orbit. Additional samples were defatted to determine connective tissue weight proportion, which was verified histologically. Mechanical testing in shear employed a triborheometer to perform: strain sweeps at 0.5–2.0 Hz; shear stress relaxation with 1% strain; viscometry at 0.01–0.5 s−1 strain rate; and shear oscillation at 1% strain. Average connective tissue weight proportion was 98% for predominantly connective tissue and 76% for fatty tissue. Connective tissue specimens reached a long-term relaxation modulus of 668 Pa after 1,500 s, while corresponding values for fatty tissue specimens were 290 Pa and 1,100 s. Shear stress magnitude for connective tissue exceeded that of fatty tissue by five-fold. Based on these data, we developed a multimode UCM model with variable viscosities and time constants, and a damped hyperelastic response that accurately described measured properties of both connective and fatty tissues. Model parameters differed significantly between the two tissues. Viscoelastic properties of predominantly connective orbital tissues under shear loading differ markedly from properties of orbital fat, but both are accurately reflected using UCM models. These viscoelastic models will facilitate realistic global modeling of EOM behavior in binocular alignment and strabismus. PMID:21207094
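The multi-mode relaxation behavior described above can be illustrated with a minimal sketch of a generalized-Maxwell relaxation modulus. The mode strengths and time constants below are hypothetical, loosely scaled to the reported ~668 Pa long-term modulus of connective tissue; this is not the fitted UCM model from the study.

```python
import numpy as np

def relaxation_modulus(t, g_inf, modes):
    """Multi-mode relaxation modulus G(t) = G_inf + sum_i G_i * exp(-t / tau_i)."""
    return g_inf + sum(g * np.exp(-t / tau) for g, tau in modes)

# hypothetical (G_i [Pa], tau_i [s]) pairs for three relaxation modes
modes = [(900.0, 10.0), (400.0, 120.0), (150.0, 900.0)]
t = np.linspace(0.0, 1500.0, 4)
G = relaxation_modulus(t, 668.0, modes)   # decays toward the long-term modulus
```

The long-term modulus `g_inf` is the plateau that the shear stress relaxation test approaches after ~1,500 s.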
Contextual Modulation is Related to Efficiency in a Spiking Network Model of Visual Cortex.
Sharifian, Fariba; Heikkinen, Hanna; Vigário, Ricardo; Vanni, Simo
2015-01-01
In the visual cortex, stimuli outside the classical receptive field (CRF) modulate the neural firing rate, without driving the neuron by themselves. In the primary visual cortex (V1), such contextual modulation can be parametrized with an area summation function (ASF): increasing stimulus size causes first an increase and then a decrease of firing rate before reaching an asymptote. Earlier work has reported an increase in sparseness when CRF stimulation is extended to its surroundings; however, no clear connection has been established between the ASF and network efficiency. Here we aimed to investigate a possible link between the two. In this study, we simulated the responses of a biomimetic spiking neural network model of the visual cortex to a set of natural images. We varied the network parameters and compared the V1 excitatory neuron spike responses to the corresponding responses predicted from earlier single-neuron data from primate visual cortex. The network efficiency was quantified with firing rate (which is directly associated with neural energy consumption), entropy per spike, and population sparseness. All three measures together provided a clear association between the network efficiency and the ASF. The association was clear when varying the horizontal connectivity within V1, which influenced both the efficiency and the distance to the ASF, D_AS. Given the limitations of our biophysical model, this association is qualitative, but it nevertheless suggests that an ASF-like receptive field structure can support an efficient population response.
Atta Mills, Ebenezer Fiifi Emire; Yan, Dawen; Yu, Bo; Wei, Xinyuan
2016-01-01
We propose a consolidated risk measure based on variance and the safety-first principle in a mean-risk portfolio optimization framework, modifying and improving the safety-first approach to financial portfolio selection. Our proposed models are subjected to norm regularization to seek near-optimal, stable, and sparse portfolios. We compare the cumulative wealth of our preferred proposed model to a benchmark, the S&P 500 index, over the same period. Our proposed portfolio strategies have better out-of-sample performance than the selected alternative portfolio rules in the literature and control the downside risk of the portfolio returns.
Fuel-optimal low-thrust formation reconfiguration via Radau pseudospectral method
NASA Astrophysics Data System (ADS)
Li, Jing
2016-07-01
This paper investigates fuel-optimal low-thrust formation reconfiguration near circular orbit. Based on the Clohessy-Wiltshire equations, first-order necessary optimality conditions are derived from Pontryagin's maximum principle. The fuel-optimal impulsive solution is utilized to divide the low-thrust trajectory into thrust and coast arcs. By introducing the switching times as optimization variables, the fuel-optimal low-thrust formation reconfiguration is posed as a nonlinear programming problem (NLP) via direct transcription using the multiple-phase Radau pseudospectral method (RPM), which is then solved by the sparse nonlinear optimization software SNOPT. To facilitate optimality verification and, if necessary, further refinement of the optimized solution of the NLP, formulas for mass costate estimation and initial costate scaling are presented. Numerical examples are given to show the application of the proposed optimization method. To simplify the problem, generic fuel-optimal low-thrust formation reconfiguration can be reduced to reconfiguration without any initial and terminal coast arcs, whose optimal solutions can be efficiently obtained from the multiple-phase RPM at the cost of a slight fuel increment. Finally, the influence of the specific impulse and maximum thrust magnitude on the fuel-optimal low-thrust formation reconfiguration is analyzed. Numerical results show the links and differences between the fuel-optimal impulsive and low-thrust solutions.
NASA Technical Reports Server (NTRS)
Hrinda, Glenn A.; Nguyen, Duc T.
2008-01-01
A technique for the optimization of stability constrained geometrically nonlinear shallow trusses with snap through behavior is demonstrated using the arc length method and a strain energy density approach within a discrete finite element formulation. The optimization method uses an iterative scheme that evaluates the design variables' performance and then updates them according to a recursive formula controlled by the arc length method. A minimum weight design is achieved when a uniform nonlinear strain energy density is found in all members. This minimal condition places the design load just below the critical limit load causing snap through of the structure. The optimization scheme is programmed into a nonlinear finite element algorithm to find the large strain energy at critical limit loads. Examples of highly nonlinear trusses found in literature are presented to verify the method.
NASA Astrophysics Data System (ADS)
Liu, Wei; Ma, Shunjian; Sun, Mingwei; Yi, Haidong; Wang, Zenghui; Chen, Zengqiang
2016-08-01
Path planning plays an important role in aircraft guided systems. Multiple no-fly zones in the flight area make path planning a constrained nonlinear optimization problem. It is necessary to obtain a feasible optimal solution in real time. In this article, the flight path is specified to be composed of alternate line segments and circular arcs, in order to reformulate the problem into a static optimization one in terms of the waypoints. For the commonly used circular and polygonal no-fly zones, geometric conditions are established to determine whether or not the path intersects with them, and these can be readily programmed. Then, the original problem is transformed into a form that can be solved by the sequential quadratic programming method. The solution can be obtained quickly using the Sparse Nonlinear OPTimizer (SNOPT) package. Mathematical simulations are used to verify the effectiveness and rapidity of the proposed algorithm.
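The geometric check for a circular no-fly zone reduces to testing the distance from the zone's center to a flight-path segment. A minimal sketch of that test follows; the function and variable names are illustrative, not taken from the paper.

```python
import math

def segment_hits_circle(p1, p2, c, r):
    """True if the line segment p1-p2 passes through the circular no-fly zone (c, r)."""
    (x1, y1), (x2, y2), (cx, cy) = p1, p2, c
    dx, dy = x2 - x1, y2 - y1
    # parameter of the closest point on the infinite line, clamped to the segment
    t = ((cx - x1) * dx + (cy - y1) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    qx, qy = x1 + t * dx, y1 + t * dy          # closest point on the segment
    return math.hypot(qx - cx, qy - cy) < r

# a segment skimming past one zone and clearing another
hit = segment_hits_circle((0.0, 0.0), (10.0, 0.0), (5.0, 1.0), 2.0)
miss = segment_hits_circle((0.0, 0.0), (10.0, 0.0), (5.0, 5.0), 2.0)
```

Polygonal zones can be handled analogously by testing the segment against each polygon edge.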
Sparsistency and Rates of Convergence in Large Covariance Matrix Estimation.
Lam, Clifford; Fan, Jianqing
2009-01-01
This paper studies the sparsistency and rates of convergence for estimating sparse covariance and precision matrices based on penalized likelihood with nonconvex penalty functions. Here, sparsistency refers to the property that all parameters that are zero are actually estimated as zero with probability tending to one. Depending on the application, sparsity may occur a priori on the covariance matrix, its inverse, or its Cholesky decomposition. We study these three sparsity exploration problems under a unified framework with a general penalty function. We show that the rates of convergence for these problems under the Frobenius norm are of order (s_n log p_n / n)^{1/2}, where s_n is the number of nonzero elements, p_n is the size of the covariance matrix, and n is the sample size. This explicitly spells out that the contribution of high dimensionality is merely a logarithmic factor. The conditions on the rate with which the tuning parameter λ_n goes to 0 have been made explicit and compared under different penalties. As a result, for the L_1 penalty, to guarantee sparsistency and the optimal rate of convergence, the number of nonzero elements should be small: s_n' = O(p_n) at most, among O(p_n^2) parameters, for estimating a sparse covariance or correlation matrix, sparse precision or inverse correlation matrix, or sparse Cholesky factor, where s_n' is the number of nonzero off-diagonal elements. On the other hand, using the SCAD or hard-thresholding penalty functions, there is no such restriction.
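In display form, writing Ω for whichever object is being estimated (covariance matrix, precision matrix, or Cholesky factor), the stated Frobenius-norm rate reads:

```latex
\left\| \hat{\Omega}_n - \Omega_0 \right\|_F
  = O_P\!\left( \sqrt{\frac{s_n \log p_n}{n}} \right)
```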
The Joker: A Custom Monte Carlo Sampler for Binary-star and Exoplanet Radial Velocity Data
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.; Hogg, David W.; Foreman-Mackey, Daniel; Rix, Hans-Walter
2017-03-01
Given sparse or low-quality radial velocity measurements of a star, there are often many qualitatively different stellar or exoplanet companion orbit models that are consistent with the data. The consequent multimodality of the likelihood function leads to extremely challenging search, optimization, and Markov chain Monte Carlo (MCMC) posterior sampling over the orbital parameters. Here we create a custom Monte Carlo sampler for sparse or noisy radial velocity measurements of two-body systems that can produce posterior samples for orbital parameters even when the likelihood function is poorly behaved. The six standard orbital parameters for a binary system can be split into four nonlinear parameters (period, eccentricity, argument of pericenter, phase) and two linear parameters (velocity amplitude, barycenter velocity). We capitalize on this by building a sampling method in which we densely sample the prior probability density function (pdf) in the nonlinear parameters and perform rejection sampling using a likelihood function marginalized over the linear parameters. With sparse or uninformative data, the sampling obtained by this rejection sampling is generally multimodal and dense. With informative data, the sampling becomes effectively unimodal but too sparse: in these cases we follow the rejection sampling with standard MCMC. The method produces correct samplings in orbital parameters for data that include as few as three epochs. The Joker can therefore be used to produce proper samplings of multimodal pdfs, which are still informative and can be used in hierarchical (population) modeling. We give some examples that show how the posterior pdf depends sensitively on the number and time coverage of the observations and their uncertainties.
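The core prior-sampling-plus-rejection step can be sketched in a toy setting. Here a one-dimensional Gaussian log-likelihood stands in for The Joker's likelihood marginalized over the linear parameters; the "period" value, prior range, and sample counts are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def rejection_sample(log_like, prior_draws):
    """Keep each prior draw with probability exp(logL - max logL)."""
    ll = np.array([log_like(th) for th in prior_draws])
    keep = np.log(rng.uniform(size=len(prior_draws))) < ll - ll.max()
    return prior_draws[keep]

# toy stand-in for the marginalized radial-velocity likelihood:
# a Gaussian bump at "period" 3.0 with width 0.2
log_like = lambda th: -0.5 * ((th - 3.0) / 0.2) ** 2

prior = rng.uniform(0.1, 10.0, size=20000)   # densely sample the prior pdf
post = rejection_sample(log_like, prior)     # surviving draws sample the posterior
```

With an uninformative likelihood nearly all draws survive (the posterior follows the prior); with an informative one the survivors may be too sparse, which is where the paper follows up with standard MCMC.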
A High Performance Block Eigensolver for Nuclear Configuration Interaction Calculations
Aktulga, Hasan Metin; Afibuzzaman, Md.; Williams, Samuel; ...
2017-06-01
As on-node parallelism increases and the performance gap between the processor and the memory system widens, achieving high performance in large-scale scientific applications requires an architecture-aware design of algorithms and solvers. We focus on the eigenvalue problem arising in nuclear Configuration Interaction (CI) calculations, where a few extreme eigenpairs of a sparse symmetric matrix are needed. Here, we consider a block iterative eigensolver whose main computational kernels are the multiplication of a sparse matrix with multiple vectors (SpMM), and tall-skinny matrix operations. We then present techniques to significantly improve the SpMM and the transpose operation SpMM^T by using the compressed sparse blocks (CSB) format. We achieve 3-4× speedup on the requisite operations over good implementations with the commonly used compressed sparse row (CSR) format. We develop a performance model that allows us to correctly estimate the performance of our SpMM kernel implementations, and we identify cache bandwidth as a potential performance bottleneck beyond DRAM. We also analyze and optimize the performance of LOBPCG kernels (inner product and linear combinations on multiple vectors) and show up to 15× speedup over using high performance BLAS libraries for these operations. The resulting high performance LOBPCG solver achieves 1.4× to 1.8× speedup over the existing Lanczos solver on a series of CI computations on high-end multicore architectures (Intel Xeons). We also analyze the performance of our techniques on an Intel Xeon Phi Knights Corner (KNC) processor.
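For context, the baseline CSR SpMM kernel that the paper's CSB format is compared against can be sketched in a few lines. This is a plain reference implementation to show the data layout, not the optimized kernel from the paper.

```python
import numpy as np

def spmm_csr(data, indices, indptr, X):
    """Y = A @ X for A stored in compressed sparse row (CSR) form, X a dense block."""
    n_rows = len(indptr) - 1
    Y = np.zeros((n_rows, X.shape[1]))
    for i in range(n_rows):                       # one row of A at a time
        for k in range(indptr[i], indptr[i + 1]):  # nonzeros of that row
            Y[i] += data[k] * X[indices[k]]
    return Y

# the 3x3 matrix [[1,0,2],[0,3,0],[4,0,5]] in CSR form
data    = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
indices = np.array([0, 2, 1, 0, 2])
indptr  = np.array([0, 2, 3, 5])
Y = spmm_csr(data, indices, indptr, np.eye(3))   # recovers the dense matrix
```

The transpose kernel SpMM^T is awkward in CSR because it scatters into Y column-by-column; CSB stores square blocks so both A and A^T traversals have the same access pattern.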
With the development of Connected Vehicle Technology that facilitates wireless communication among vehicles and road-side infrastructure, the Advanced Driver Assistance Systems (ADAS) can be adopted as an effective tool for accelerating traffic safety and mobility optimization at...
Organization of an optimal adaptive immune system
NASA Astrophysics Data System (ADS)
Walczak, Aleksandra; Mayer, Andreas; Balasubramanian, Vijay; Mora, Thierry
The repertoire of lymphocyte receptors in the adaptive immune system protects organisms from a diverse set of pathogens. A well-adapted repertoire should be tuned to the pathogenic environment to reduce the cost of infections. I will discuss a general framework for predicting the optimal repertoire that minimizes the cost of infections contracted from a given distribution of pathogens. The theory predicts that the immune system will have more receptors for rare antigens than expected from the frequency of encounters and individuals exposed to the same infections will have sparse repertoires that are largely different, but nevertheless exploit cross-reactivity to provide the same coverage of antigens. I will show that the optimal repertoires can be reached by dynamics that describes the competitive binding of antigens by receptors, and selective amplification of stimulated receptors.
Compressed learning and its applications to subcellular localization.
Zheng, Zhong-Long; Guo, Li; Jia, Jiong; Xie, Chen-Mao; Zeng, Wen-Cai; Yang, Jie
2011-09-01
One of the main challenges faced by biological applications is to predict protein subcellular localization automatically and accurately. To achieve this, a wide variety of machine learning methods have been proposed in recent years. Most of them focus on finding the optimal classification scheme, and fewer take simplifying the complexity of biological systems into account. Traditionally, such bio-data are analyzed by first performing feature selection before classification. Motivated by CS (Compressed Sensing) theory, we propose a methodology that performs compressed learning with a sparseness criterion such that feature selection and dimension reduction are merged into one analysis. The proposed methodology decreases the complexity of the biological system while increasing protein subcellular localization accuracy. Experimental results are quite encouraging, indicating that the aforementioned sparse methods are quite promising in dealing with complicated biological problems, such as predicting the subcellular localization of Gram-negative bacterial proteins.
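The "merged" dimension-reduction step can be sketched with a random Gaussian projection, the standard CS-style measurement operator. This is a simplified stand-in for the paper's sparseness-criterion learning; the feature counts are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def compress(X, m):
    """Project d-dimensional feature vectors to m dimensions with a random
    Gaussian map, approximately preserving pairwise distances."""
    d = X.shape[1]
    R = rng.normal(size=(d, m)) / np.sqrt(m)   # scaled for norm preservation
    return X @ R

X = rng.normal(size=(20, 500))   # 20 proteins, 500 hypothetical sequence features
Z = compress(X, 200)             # compressed representation fed to the classifier
```

A classifier trained on `Z` then works in the compressed domain, so explicit feature selection is no longer a separate preprocessing stage.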
Non-Convex Sparse and Low-Rank Based Robust Subspace Segmentation for Data Mining.
Cheng, Wenlong; Zhao, Mingbo; Xiong, Naixue; Chui, Kwok Tai
2017-07-15
Parsimony, including sparsity and low-rank, has shown great importance for data mining in social networks, particularly in tasks such as segmentation and recognition. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with convex l1-norm or nuclear-norm constraints. However, the results obtained by convex optimization are usually suboptimal to solutions of the original sparse or low-rank problems. In this paper, a novel robust subspace segmentation algorithm is proposed by integrating lp-norm and Schatten p-norm constraints. The so-obtained affinity graph can better capture the local geometrical structure and the global information of the data. As a consequence, our algorithm is more generative, discriminative, and robust. An efficient linearized alternating direction method is derived to realize our model. Extensive segmentation experiments are conducted on public datasets. The proposed algorithm is revealed to be more effective and robust compared to five existing algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Ray-Bing; Wang, Weichung; Jeff Wu, C. F.
A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. As a result, numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.
Characteristic investigation of Golay9 multiple mirror telescope with a spherical primary mirror
NASA Astrophysics Data System (ADS)
Wu, Feng; Wu, Quanying; Zhu, Xifang; Xiang, Ruxi; Qian, Lin
2017-10-01
The sparse aperture provides a novel solution to the manufacturing difficulties of modern super-large telescopes. Golay configurations are optimal in the sparse aperture family. Characteristics of the Golay9 multiple mirror telescope (MMT) having a spherical primary mirror are investigated. The arrangement of the nine sub-mirrors is discussed after the planar Golay9 configuration is analyzed. The characteristics of the entrance pupil are derived by analyzing the sub-aperture shapes with different relative apertures and sub-mirror sizes. Formulas for the fill factor and the overlay factor are deduced, and their maximal values are presented based on the derived tangency condition. Formulas for the point spread function (PSF) and the modulation transfer function (MTF) of the Golay9 MMT are also deduced. Two Golay9 MMTs have been developed in Zemax simulation. Their PSFs, MTFs, fill factors, and overlay factors prove that our theoretical results are consistent with the practical simulation ones.
Higher criticism approach to detect rare variants using whole genome sequencing data
2014-01-01
Because of low statistical power of single-variant tests for whole genome sequencing (WGS) data, the association test for variant groups is a key approach for genetic mapping. To address the features of sparse and weak genetic effects to be detected, the higher criticism (HC) approach has been proposed and theoretically has proven optimal for detecting sparse and weak genetic effects. Here we develop a strategy to apply the HC approach to WGS data that contains rare variants as the majority. By using Genetic Analysis Workshop 18 "dose" genetic data with simulated phenotypes, we assess the performance of HC under a variety of strategies for grouping variants and collapsing rare variants. The HC approach is compared with the minimal p-value method and the sequence kernel association test. The results show that the HC approach is preferred for detecting weak genetic effects. PMID:25519367
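The HC statistic itself is simple to compute from the sorted per-variant p-values: it is the maximum standardized excess of small p-values over what a uniform null would produce. A minimal sketch follows (the paper's variant-grouping and rare-variant collapsing strategies are not shown):

```python
import numpy as np

def higher_criticism(pvals):
    """HC statistic: max over i of sqrt(n) * (i/n - p_(i)) / sqrt(p_(i)(1 - p_(i)))."""
    p = np.sort(np.asarray(pvals, dtype=float))
    n = len(p)
    i = np.arange(1, n + 1)
    return float(np.max(np.sqrt(n) * (i / n - p) / np.sqrt(p * (1.0 - p))))

# under the null, p-values look uniform; a few sparse, strong signals push HC up
pv_null = np.linspace(0.01, 0.99, 100)
pv_alt = pv_null.copy()
pv_alt[:5] = 1e-5                # a handful of rare variants with tiny p-values
```

Some formulations restrict the maximum to a sub-range of the order statistics (e.g. p_(i) > 1/n) for stability; the sketch above uses the unrestricted form.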
Multivariable frequency domain identification via 2-norm minimization
NASA Technical Reports Server (NTRS)
Bayard, David S.
1992-01-01
The author develops a computational approach to multivariable frequency domain identification, based on 2-norm minimization. In particular, a Gauss-Newton (GN) iteration is developed to minimize the 2-norm of the error between frequency domain data and a matrix fraction transfer function estimate. To improve the global performance of the optimization algorithm, the GN iteration is initialized using the solution to a particular sequentially reweighted least squares problem, denoted as the SK iteration. The least squares problems which arise from both the SK and GN iterations are shown to involve sparse matrices with identical block structure. A sparse matrix QR factorization method is developed to exploit the special block structure, and to efficiently compute the least squares solution. A numerical example involving the identification of a multiple-input multiple-output (MIMO) plant having 286 unknown parameters is given to illustrate the effectiveness of the algorithm.
A fast time-difference inverse solver for 3D EIT with application to lung imaging.
Javaherian, Ashkan; Soleimani, Manuchehr; Moeller, Knut
2016-08-01
A class of sparse optimization techniques that require solely matrix-vector products, rather than explicit access to the forward matrix and its transpose, has received much attention in the recent decade for dealing with large-scale inverse problems. This study tailors the application of the so-called Gradient Projection for Sparse Reconstruction (GPSR) to large-scale time-difference three-dimensional electrical impedance tomography (3D EIT). 3D EIT typically suffers from the need for a large number of voxels to cover the whole domain, so its application to real-time imaging, for example monitoring of lung function, remains scarce, since the large number of degrees of freedom of the problem greatly increases storage space and reconstruction time. This study shows the great potential of GPSR for large-size time-difference 3D EIT. Further studies are needed to improve its accuracy for imaging small-size anomalies.
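GPSR targets the l1-regularized least-squares problem min_x 0.5*||Ax - y||^2 + tau*||x||_1 using only products with A and its transpose. The sketch below solves the same objective with plain proximal-gradient (ISTA) iterations rather than GPSR's split-variable gradient projection; the problem sizes, sparsity pattern, and tau are illustrative, not EIT data.

```python
import numpy as np

def ista(A, y, tau, steps=500):
    """Proximal gradient for 0.5*||Ax - y||^2 + tau*||x||_1,
    touching A only through A @ x and A.T @ r products."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - y) / L             # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - tau / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(60, 120)) / np.sqrt(60)      # underdetermined forward model
x_true = np.zeros(120)
x_true[[5, 40, 77]] = [1.0, -1.0, 0.5]            # sparse difference image
y = A @ x_true
x_hat = ista(A, y, tau=0.01)
```

The matrix-free structure is the point: for time-difference 3D EIT, `A @ x` and `A.T @ r` can be implemented without ever storing the full dense Jacobian.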
Research on sparse feature matching of improved RANSAC algorithm
NASA Astrophysics Data System (ADS)
Kong, Xiangsi; Zhao, Xian
2018-04-01
In this paper, a sparse feature matching method based on a modified RANSAC algorithm is proposed to improve precision and speed. Firstly, the feature points of the images are extracted using the SIFT algorithm. Then, the image pair is matched roughly by generating SIFT feature descriptors. At last, the precision of image matching is optimized by the modified RANSAC algorithm. The RANSAC algorithm is improved in three aspects: instead of the homography matrix, this paper uses the fundamental matrix generated by the 8-point algorithm as the model; the sample is selected by a random block selecting method, which ensures uniform distribution and accuracy; and a sequential probability ratio test (SPRT) is added on the basis of standard RANSAC, which cuts down the overall running time of the algorithm. The experimental results show that this method can not only achieve higher matching accuracy, but also greatly reduce the computation and improve the matching speed.
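The standard RANSAC loop that the paper modifies follows the same hypothesize-and-verify pattern regardless of the model. A minimal sketch using a 2D line as the model (a stand-in for the 8-point fundamental-matrix estimate, which would draw 8-correspondence samples instead of pairs):

```python
import numpy as np

rng = np.random.default_rng(1)

def ransac_line(points, iters=100, thresh=0.1):
    """Fit a line y = a*x + b from minimal samples, keep the hypothesis
    with the most inliers (residual below thresh)."""
    best_model, best_inliers = None, 0
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:                        # degenerate minimal sample
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        resid = np.abs(points[:, 1] - (a * points[:, 0] + b))
        n_in = int(np.sum(resid < thresh))
        if n_in > best_inliers:
            best_model, best_inliers = (a, b), n_in
    return best_model, best_inliers

xs = np.linspace(0.0, 10.0, 30)
inliers = np.column_stack([xs, 2.0 * xs + 1.0])   # correspondences on y = 2x + 1
outliers = rng.uniform(0.0, 10.0, size=(10, 2))   # gross mismatches
model, n_in = ransac_line(np.vstack([inliers, outliers]))
```

The paper's SPRT modification would bail out of the residual check early when a hypothesis is statistically unlikely to beat the current best, which is where the runtime saving comes from.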
Parametric Human Body Reconstruction Based on Sparse Key Points.
Cheng, Ke-Li; Tong, Ruo-Feng; Tang, Min; Qian, Jing-Ye; Sarkis, Michel
2016-11-01
We propose an automatic parametric human body reconstruction algorithm which can efficiently construct a model using a single Kinect sensor. A user needs to stand still in front of the sensor for a couple of seconds to measure the range data. The user's body shape and pose will then be automatically constructed in several seconds. Traditional methods optimize dense correspondences between range data and meshes. In contrast, our proposed scheme relies on sparse key points for the reconstruction. It employs regression to find the corresponding key points between the scanned range data and some annotated training data. We design two kinds of feature descriptors as well as corresponding regression stages to make the regression robust and accurate. Our scheme follows with dense refinement where a pre-factorization method is applied to improve the computational efficiency. Compared with other methods, our scheme achieves similar reconstruction accuracy but significantly reduces runtime.
Compressive Information Extraction: A Dynamical Systems Approach
2016-01-24
sparsely encoded in very large data streams. (a) Target tracking in an urban canyon; (b) and (c) sample frames showing contextually abnormal events: onset... extraction to identify contextually abnormal sequences (see section 2.2.3). Formally, the problem of interest can be stated as establishing whether a noisy... relaxations with optimality guarantees can be obtained using tools from semi-algebraic geometry. 2.2 Application: Detecting Contextually Abnormal Events
Optimal test selection for prediction uncertainty reduction
Mullins, Joshua; Mahadevan, Sankaran; Urbina, Angel
2016-12-02
Economic factors and experimental limitations often lead to sparse and/or imprecise data used for the calibration and validation of computational models. This paper addresses resource allocation for calibration and validation experiments, in order to maximize their effectiveness within given resource constraints. When observation data are used for model calibration, the quality of the inferred parameter descriptions is directly affected by the quality and quantity of the data. This paper characterizes parameter uncertainty within a probabilistic framework, which enables the uncertainty to be systematically reduced with additional data. The validation assessment is also uncertain in the presence of sparse and imprecise data; therefore, this paper proposes an approach for quantifying the resulting validation uncertainty. Since calibration and validation uncertainty affect the prediction of interest, the proposed framework explores the decision of cost versus importance of data in terms of the impact on the prediction uncertainty. Often, calibration and validation tests may be performed for different input scenarios, and this paper shows how the calibration and validation results from different conditions may be integrated into the prediction. Then, a constrained discrete optimization formulation that selects the number of tests of each type (calibration or validation at given input conditions) is proposed. Furthermore, the proposed test selection methodology is demonstrated on a microelectromechanical system (MEMS) example.
Li, Kaiming; Guo, Lei; Zhu, Dajiang; Hu, Xintao; Han, Junwei; Liu, Tianming
2013-01-01
Studying connectivities among functional brain regions and the functional dynamics on brain networks has drawn increasing interest. A fundamental issue that affects functional connectivity and dynamics studies is how to determine the best possible functional brain regions or ROIs (regions of interest) for a group of individuals, since the connectivity measurements are heavily dependent on ROI locations. Essentially, identification of accurate, reliable and consistent corresponding ROIs is challenging due to the unclear boundaries between brain regions, variability across individuals, and nonlinearity of the ROIs. In response to these challenges, this paper presents a novel methodology to computationally optimize ROI locations derived from task-based fMRI data for individuals so that the optimized ROIs are more consistent, reproducible and predictable across brains. Our computational strategy is to formulate the individual ROI location optimization as a group variance minimization problem, in which group-wise consistencies in functional/structural connectivity patterns and anatomic profiles are defined as optimization constraints. Our experimental results from multimodal fMRI and DTI data show that the optimized ROIs have significantly improved consistency in structural and functional profiles across individuals. These functional ROIs with improved consistency could contribute to further study of functional interaction and dynamics in the human brain. PMID:22281931
Ihme, Matthias; Marsden, Alison L; Pitsch, Heinz
2008-02-01
A pattern search optimization method is applied to the generation of optimal artificial neural networks (ANNs). Optimization is performed using a mixed variable extension to the generalized pattern search method. This method offers the advantage that categorical variables, such as neural transfer functions and nodal connectivities, can be used as parameters in optimization. When used together with a surrogate, the resulting algorithm is highly efficient for expensive objective functions. Results demonstrate the effectiveness of this method in optimizing an ANN for the number of neurons, the type of transfer function, and the connectivity among neurons. The optimization method is applied to a chemistry approximation of practical relevance. In this application, temperature and a chemical source term are approximated as functions of two independent parameters using optimal ANNs. Comparison of the performance of optimal ANNs with conventional tabulation methods demonstrates equivalent accuracy with considerable savings in memory storage. The architecture of the optimal ANN for the approximation of the chemical source term consists of a fully connected feedforward network having four nonlinear hidden layers and 117 synaptic weights. An equivalent representation of the chemical source term using tabulation techniques would require a 500 x 500 grid point discretization of the parameter space.
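The quoted memory comparison is easy to make concrete. A back-of-envelope count (assuming one stored scalar per grid point, which the abstract does not state explicitly):

```python
# Storage comparison between the tabulation and the optimal ANN reported above.
# Assumption: the tabulation stores one scalar value per grid point.
table_values = 500 * 500          # 500 x 500 grid point discretization
ann_weights = 117                 # synaptic weights in the optimal ANN
compression = table_values / ann_weights
print(f"tabulation: {table_values} values, ANN: {ann_weights} weights, "
      f"~{compression:.0f}x fewer parameters")
```

Even ignoring the activation computations the ANN requires at lookup time, the parameter count differs by more than three orders of magnitude.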
Bui Quoc, Emmanuel; Ribot, Jérôme; Quenech’Du, Nicole; Doutremer, Suzette; Lebas, Nicolas; Grantyn, Alexej; Aushana, Yonane; Milleret, Chantal
2011-01-01
In the mammalian primary visual cortex, the corpus callosum contributes to the unification of the visual hemifields that project to the two hemispheres. Its development depends on visual experience. When this is abnormal, callosal connections must undergo dramatic anatomical and physiological changes. However, data concerning these changes are sparse and incomplete. Thus, little is known about the impact of abnormal postnatal visual experience on the development of callosal connections and their role in unifying representation of the two hemifields. Here, the effects of early unilateral convergent strabismus (a model of abnormal visual experience) were fully characterized with respect to the development of the callosal connections in cat visual cortex, an experimental model for humans. Electrophysiological responses and 3D reconstruction of single callosal axons show that abnormally asymmetrical callosal connections develop after unilateral convergent strabismus, resulting from an extension of axonal branches of specific orders in the hemisphere ipsilateral to the deviated eye and a decreased number of nodes and terminals in the other (ipsilateral to the non-deviated eye). Furthermore this asymmetrical organization prevents the establishment of a unifying representation of the two visual hemifields. As a general rule, we suggest that crossed and uncrossed retino-geniculo-cortical pathways contribute successively to the development of the callosal maps in visual cortex. PMID:22275883
Predicting impacts of climate change on habitat connectivity of Kalopanax septemlobus in South Korea
NASA Astrophysics Data System (ADS)
Kang, Wanmo; Minor, Emily S.; Lee, Dowon; Park, Chan-Ryul
2016-02-01
Understanding the drivers of habitat distribution patterns and assessing habitat connectivity are crucial for conservation in the face of climate change. In this study, we examined a sparsely distributed tree species, Kalopanax septemlobus (Araliaceae), which has been heavily disturbed by human use in temperate forests of South Korea. We used maximum entropy distribution modeling (MaxEnt) to identify the climatic and topographic factors driving the distribution of the species. Then, we constructed habitat models under current and projected climate conditions for the year 2050 and evaluated changes in the extent and connectivity of the K. septemlobus habitat. Annual mean temperature and terrain slope were the two most important predictors of species distribution. Our models predicted the range shift of K. septemlobus toward higher elevations under medium-low and high emissions scenarios for 2050, with dramatic reductions in suitable habitat (51% and 85%, respectively). In addition, connectivity analysis indicated that climate change is expected to reduce future levels of habitat connectivity. Even under the Representative Concentration Pathway (RCP) 4.5 medium-low warming scenario, the projected climate conditions will decrease habitat connectivity by 78%. Overall, suitable habitats for K. septemlobus populations will likely become more isolated depending on the severity of global warming. The approach presented here can be used to efficiently assess species and habitat vulnerability to climate change.
The Effect of DEM Source and Grid Size on the Index of Connectivity in Savanna Catchments
NASA Astrophysics Data System (ADS)
Jarihani, Ben; Sidle, Roy; Bartley, Rebecca; Roth, Christian
2017-04-01
The term "hydrological connectivity" is increasingly used instead of sediment delivery ratio to describe the linkage between the sources of water and sediment within a catchment to the catchment outlet. Sediment delivery ratio is an empirical parameter that is highly site-specific and tends to lump all processes, whilst hydrological connectivity focuses on the spatially-explicit hydrologic drivers of surficial processes. Detailed topographic information plays a fundamental role in geomorphological interpretations as well as quantitative modelling of sediment fluxes and connectivity. Geomorphometric analysis permits a detailed characterization of drainage area and drainage pattern together with the possibility of characterizing surface roughness. High resolution topographic data (i.e., LiDAR) are not available for all areas; however, remotely sensed topographic data from multiple sources with different grid sizes are used to undertake geomorphologic analysis in data-sparse regions. The Index of Connectivity (IC), a geomorphometric model based only on DEM data, is applied in two small savanna catchments in Queensland, Australia. The influence of the scale of the topographic data is explored by using DEMs from LiDAR (~1 m), WorldDEM (~10 m), and raw SRTM and hydrologically corrected SRTM-derived data (~30 m) to calculate the index of connectivity. The effect of the grid size is also investigated by resampling the high resolution LiDAR DEM to multiple grid sizes (e.g. 5, 10, 20 m) and comparing the extracted IC.
Contour Tracking in Echocardiographic Sequences via Sparse Representation and Dictionary Learning
Huang, Xiaojie; Dione, Donald P.; Compas, Colin B.; Papademetris, Xenophon; Lin, Ben A.; Bregasi, Alda; Sinusas, Albert J.; Staib, Lawrence H.; Duncan, James S.
2013-01-01
This paper presents a dynamical appearance model based on sparse representation and dictionary learning for tracking both endocardial and epicardial contours of the left ventricle in echocardiographic sequences. Instead of learning offline spatiotemporal priors from databases, we exploit the inherent spatiotemporal coherence of individual data to constrain cardiac contour estimation. The contour tracker is initialized with a manual tracing of the first frame. It employs multiscale sparse representation of local image appearance and learns online multiscale appearance dictionaries in a boosting framework as the image sequence is segmented frame-by-frame sequentially. The weights of multiscale appearance dictionaries are optimized automatically. Our region-based level set segmentation integrates a spectrum of complementary multilevel information including intensity, multiscale local appearance, and dynamical shape prediction. The approach is validated on twenty-six 4D canine echocardiographic images acquired from both healthy and post-infarct canines. The segmentation results agree well with expert manual tracings. The ejection fraction estimates also show good agreement with manual results. Advantages of our approach are demonstrated by comparisons with a conventional pure intensity model, a registration-based contour tracker, and a state-of-the-art database-dependent offline dynamical shape model. We also demonstrate the feasibility of clinical application by applying the method to four 4D human data sets. PMID:24292554
Real-time model learning using Incremental Sparse Spectrum Gaussian Process Regression.
Gijsberts, Arjan; Metta, Giorgio
2013-05-01
Novel applications in unstructured and non-stationary human environments require robots that learn from experience and adapt autonomously to changing conditions. Predictive models therefore not only need to be accurate, but should also be updated incrementally in real-time and require minimal human intervention. Incremental Sparse Spectrum Gaussian Process Regression is an algorithm that is targeted specifically for use in this context. Rather than developing a novel algorithm from the ground up, the method is based on the thoroughly studied Gaussian Process Regression algorithm, therefore ensuring a solid theoretical foundation. Non-linearity and a bounded update complexity are achieved simultaneously by means of a finite dimensional random feature mapping that approximates a kernel function. As a result, the computational cost for each update remains constant over time. Finally, algorithmic simplicity and support for automated hyperparameter optimization ensures convenience when employed in practice. Empirical validation on a number of synthetic and real-life learning problems confirms that the performance of Incremental Sparse Spectrum Gaussian Process Regression is superior with respect to the popular Locally Weighted Projection Regression, while computational requirements are found to be significantly lower. The method is therefore particularly suited for learning with real-time constraints or when computational resources are limited. Copyright © 2012 Elsevier Ltd. All rights reserved.
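The finite-dimensional random feature mapping at the heart of the method above can be sketched as follows. This is a generic random-Fourier-feature approximation of an RBF kernel; the kernel choice, lengthscale, and feature count are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def random_fourier_features(X, n_features, lengthscale, rng):
    """Map inputs to a finite random feature space whose inner product
    approximates an RBF kernel (the core trick behind sparse-spectrum GPs)."""
    d = X.shape[1]
    # frequencies drawn from the kernel's spectral density, random phases
    W = rng.normal(scale=1.0 / lengthscale, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
Z = random_fourier_features(X, n_features=2000, lengthscale=1.0, rng=rng)
K_approx = Z @ Z.T                       # finite-dimensional kernel estimate

# exact RBF kernel for comparison
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-0.5 * sq)
err = np.max(np.abs(K_approx - K_exact))
```

Because the feature dimension is fixed, a linear regression on `Z` can be updated with constant cost per new sample (e.g. via a rank-one update of its normal equations), which is what gives the incremental algorithm its bounded update complexity.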
Efficient large-scale graph data optimization for intelligent video surveillance
NASA Astrophysics Data System (ADS)
Shang, Quanhong; Zhang, Shujun; Wang, Yanbo; Sun, Chen; Wang, Zepeng; Zhang, Luming
2017-08-01
Society is rapidly adopting cameras in a wide variety of locations and applications: traffic monitoring, parking lot surveillance, and smart spaces. These cameras provide data every day that must be analyzed in an effective way. Recent advances in sensor manufacturing, communication, and computing are stimulating the development of new applications that transform the traditional vision system into a pervasive smart camera network. The analysis of visual cues in such multi-camera networks enables a wide range of applications, from smart home and office automation to large-area and traffic surveillance. Dense camera networks, in which most cameras have large overlapping fields of view, are already well studied; here we focus instead on sparse camera networks. A sparse camera network covers a large surveillance area with as few cameras as possible, so that most cameras do not overlap each other's field of view. This setting is challenging because of the lack of knowledge of the network topology, the changes in appearance and motion of targets across different views, and the difficulty of understanding complex events in the network. In this review, we present a comprehensive survey of recent results on topology learning, object appearance modeling, and global activity understanding in sparse camera networks. In addition, several open research issues are discussed.
Meng, Fan; Yang, Xiaomei; Zhou, Chenghu; Li, Zhi
2017-01-01
Cloud cover is inevitable in optical remote sensing (RS) imagery on account of the influence of observation conditions, which limits the availability of RS data. Therefore, it is of great significance to be able to reconstruct the cloud-contaminated ground information. This paper presents a sparse dictionary learning-based image inpainting method for adaptively recovering the missing information corrupted by thick clouds patch-by-patch. A feature dictionary was learned from exemplars in the cloud-free regions, which was later utilized to infer the missing patches via sparse representation. To maintain the coherence of structures, structure sparsity was brought in to encourage first filling-in of missing patches on image structures. The optimization model of patch inpainting was formulated under the adaptive neighborhood-consistency constraint, which was solved by a modified orthogonal matching pursuit (OMP) algorithm. In light of these ideas, the thick-cloud removal scheme was designed and applied to images with simulated and true clouds. Comparisons and experiments show that our method can not only keep structures and textures consistent with the surrounding ground information, but also yield rare smoothing effect and block effect, which is more suitable for the removal of clouds from high-spatial resolution RS imagery with salient structures and abundant textured features. PMID:28914787
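The sparse-coding step named above relies on orthogonal matching pursuit. A minimal OMP sketch follows; it uses a toy orthonormal dictionary (so recovery is deterministic) rather than a learned feature dictionary, and omits the structure-sparsity and neighborhood-consistency terms of the paper's modified OMP.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Greedy orthogonal matching pursuit: approximate y as a sparse
    combination of dictionary atoms (the columns of D)."""
    residual = y.copy()
    support = []
    for _ in range(n_nonzero):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit on the chosen support, then update the residual
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coef = np.zeros(D.shape[1])
    coef[support] = sol
    return coef

# toy check: y is built from atoms 0 and 2, so OMP should recover exactly those
rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.normal(size=(20, 6)))   # orthonormal toy dictionary
y = 2.0 * D[:, 0] - 1.5 * D[:, 2]
x = omp(D, y, n_nonzero=2)
```

In the inpainting setting, `y` would be the observed pixels of a partially missing patch, `D` the learned feature dictionary restricted to those pixels, and the recovered coefficients would be used to synthesize the missing pixels.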
Rosa, Maria J; Mehta, Mitul A; Pich, Emilio M; Risterucci, Celine; Zelaya, Fernando; Reinders, Antje A T S; Williams, Steve C R; Dazzan, Paola; Doyle, Orla M; Marquand, Andre F
2015-01-01
An increasing number of neuroimaging studies are based on either combining more than one data modality (inter-modal) or combining more than one measurement from the same modality (intra-modal). To date, most intra-modal studies using multivariate statistics have focused on differences between datasets, for instance relying on classifiers to differentiate between effects in the data. However, to fully characterize these effects, multivariate methods able to measure similarities between datasets are needed. One classical technique for estimating the relationship between two datasets is canonical correlation analysis (CCA). However, in the context of high-dimensional data the application of CCA is extremely challenging. A recent extension of CCA, sparse CCA (SCCA), overcomes this limitation, by regularizing the model parameters while yielding a sparse solution. In this work, we modify SCCA with the aim of facilitating its application to high-dimensional neuroimaging data and finding meaningful multivariate image-to-image correspondences in intra-modal studies. In particular, we show how the optimal subset of variables can be estimated independently and we look at the information encoded in more than one set of SCCA transformations. We illustrate our framework using Arterial Spin Labeling data to investigate multivariate similarities between the effects of two antipsychotic drugs on cerebral blood flow.
Kim, Sang-Yoon; Lim, Woochang
2017-09-01
We consider an inhomogeneous small-world network (SWN) composed of inhibitory short-range (SR) and long-range (LR) interneurons, and investigate the effect of network architecture on the emergence of synchronized brain rhythms by varying the fraction of LR interneurons p_long. The betweenness centralities of the LR and SR interneurons (characterizing their potential for controlling communication between other interneurons) are distinctly different. Hence, in view of the betweenness, the SWNs we consider are inhomogeneous, unlike the "canonical" Watts-Strogatz SWN with nearly the same betweenness centralities. For small p_long, the load of communication traffic is heavily concentrated on a few LR interneurons. However, as p_long is increased, the number of LR connections (coming from LR interneurons) increases, and the load of communication traffic becomes less concentrated on the LR interneurons, which leads to better efficiency of global communication between interneurons. Sparsely synchronized rhythms are thus found to emerge upon passing a small critical value p_long^(c) (≃ 0.16). The population frequency of the sparsely synchronized rhythm is ultrafast (higher than 100 Hz), while the mean firing rate of individual interneurons is much lower (∼30 Hz) due to stochastic and intermittent neural discharges. These dynamical behaviors in the inhomogeneous SWN are also compared with those in the homogeneous Watts-Strogatz SWN, in connection with their network topologies. In particular, we note that the main difference between the two types of SWNs lies in the distribution of betweenness centralities. Unlike the case of the Watts-Strogatz SWN, dynamical responses to external stimuli vary depending on the type of stimulated interneurons in the inhomogeneous SWN. We consider two cases of external time-periodic stimuli applied to sub-populations of the LR and SR interneurons, respectively.
Dynamical responses (such as synchronization suppression and enhancement) to these two cases of stimuli are studied and discussed in relation to the betweenness centralities of stimulated interneurons, representing the effectiveness for transfer of stimulation effect in the whole network. Copyright © 2017 Elsevier Ltd. All rights reserved.
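The betweenness centrality used throughout the analysis above can be computed with Brandes' algorithm. A minimal pure-Python sketch on a toy path graph (not the interneuron SWN itself) follows; the middle node of a path carries the most shortest-path traffic, just as the LR interneurons do for small p_long.

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm: unnormalized node betweenness centrality for an
    unweighted, undirected graph given as {node: set(neighbors)}."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s, counting numbers of shortest paths (sigma)
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order = []
        q = deque([s])
        while q:
            v = q.popleft(); order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        # back-propagate shortest-path dependencies
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # each undirected pair is counted from both endpoints
    return {v: b / 2 for v, b in bc.items()}

# toy graph: a path 0-1-2-3-4; the middle node carries the most traffic
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
bc = betweenness(path)
```

On the path graph, node 2 lies on the shortest paths of four node pairs, node 1 on three, and the endpoints on none, mirroring how a few high-betweenness hubs concentrate communication load.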
Automated map sharpening by maximization of detail and connectivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terwilliger, Thomas C.; Sobolev, Oleg V.; Afonine, Pavel V.
2018-05-18
An algorithm for automatic map sharpening is presented that is based on optimization of the detail and connectivity of the sharpened map. The detail in the map is reflected in the surface area of an iso-contour surface that contains a fixed fraction of the volume of the map, where a map with a high level of detail has a high surface area. The connectivity of the sharpened map is reflected in the number of connected regions defined by the same iso-contour surfaces, where a map with high connectivity has a small number of connected regions. By combining these two measures in a metric termed the 'adjusted surface area', map quality can be evaluated in an automated fashion. This metric was used to choose optimal map-sharpening parameters without reference to a model or other interpretations of the map. Map sharpening by optimization of the adjusted surface area can be carried out for a map as a whole or it can be carried out locally, yielding a locally sharpened map. To evaluate the performance of various approaches, a simple metric based on map–model correlation that can reproduce visual choices of optimally sharpened maps was used. The map–model correlation is calculated using a model with B factors (atomic displacement parameters; ADPs) set to zero. Finally, this model-based metric was used to evaluate map sharpening and map-sharpening approaches, and it was found that optimization of the adjusted surface area can be an effective tool for map sharpening.
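One of the two ingredients of the adjusted surface area, the number of connected regions above an iso-contour level, can be sketched directly. The toy 8x8x8 map and fixed threshold below are illustrative assumptions; the algorithm instead chooses the level so the enclosed volume is a fixed fraction of the map.

```python
import numpy as np
from collections import deque

def count_regions(mask):
    """Count 6-connected regions of True voxels in a 3-D boolean mask."""
    visited = np.zeros_like(mask, dtype=bool)
    n = 0
    for start in zip(*np.nonzero(mask)):
        if visited[start]:
            continue
        n += 1                      # found a new region; flood-fill it
        visited[start] = True
        q = deque([start])
        while q:
            x, y, z = q.popleft()
            for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0),
                               (0,-1,0), (0,0,1), (0,0,-1)):
                p = (x + dx, y + dy, z + dz)
                if all(0 <= c < s for c, s in zip(p, mask.shape)) \
                        and mask[p] and not visited[p]:
                    visited[p] = True
                    q.append(p)
    return n

# toy "map": two separate blobs of density on an 8^3 grid
rho = np.zeros((8, 8, 8))
rho[2, 2, 2] = rho[2, 2, 3] = 1.0   # blob 1 (two adjacent voxels)
rho[6, 6, 6] = 1.0                  # blob 2
mask = rho > 0.5                    # iso-contour level (illustrative)
regions = count_regions(mask)
```

A well-sharpened map keeps `regions` small (high connectivity) while the same iso-contour encloses a large surface area (high detail); the adjusted surface area trades these off.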
Dynamic graph system for a semantic database
Mizell, David
2016-04-12
A method and system in a computer system for dynamically providing a graphical representation of a data store of entries via a matrix interface is disclosed. A dynamic graph system provides a matrix interface that exposes to an application program a graphical representation of data stored in a data store such as a semantic database storing triples. To the application program, the matrix interface represents the graph as a sparse adjacency matrix that is stored in compressed form. Each entry of the data store is considered to represent a link between nodes of the graph. Each entry has a first field and a second field identifying the nodes connected by the link and a third field with a value for the link that connects the identified nodes. The first, second, and third fields represent the rows, columns, and elements of the adjacency matrix.
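The field-to-matrix mapping described above can be sketched in plain Python with a coordinate-format (COO-like) sparse matrix; the triples and node names are invented for illustration.

```python
# Triples (subject, predicate, object) viewed as a sparse adjacency matrix:
# first field -> row, second field -> column, third field -> element value.
triples = [
    ("alice", "knows", "bob"),       # hypothetical example data
    ("bob", "knows", "carol"),
    ("alice", "worksAt", "acme"),
]
nodes = sorted({t[0] for t in triples} | {t[2] for t in triples})
index = {n: i for i, n in enumerate(nodes)}

rows, cols, data = [], [], []        # coordinate-format sparse storage
for subj, pred, obj in triples:
    rows.append(index[subj])         # first field  -> row
    cols.append(index[obj])          # second field -> column
    data.append(pred)                # third field  -> element value

def link(subj, obj):
    """Return the link value between two nodes, or None if absent."""
    for r, c, v in zip(rows, cols, data):
        if (r, c) == (index[subj], index[obj]):
            return v
    return None
```

Only the occupied entries are stored, which is the point of the compressed representation: a semantic database with millions of nodes has an adjacency matrix that is overwhelmingly empty.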
Accounting for connectivity and spatial correlation in the optimal placement of wildlife habitat
John Hof; Curtis H. Flather
1996-01-01
This paper investigates optimization approaches to simultaneously modelling habitat fragmentation and spatial correlation between patch populations. The problem is formulated with habitat connectivity affecting population means and variances, with spatial correlations accounted for in covariance calculations. Population with a pre-specified confidence level is then...
Dausey, David J; Chandra, Anita; Schaefer, Agnes G; Bahney, Ben; Haviland, Amelia; Zakowski, Sarah; Lurie, Nicole
2008-09-01
We tested telephone-based disease surveillance systems in local health departments to identify system characteristics associated with consistent and timely responses to urgent case reports. We identified a stratified random sample of 74 health departments and conducted a series of unannounced tests of their telephone-based surveillance systems. We used regression analyses to identify system characteristics that predicted fast connection with an action officer (an appropriate public health professional). Optimal performance in consistently connecting callers with an action officer in 30 minutes or less was achieved by 31% of participating health departments. Reaching a live person upon dialing, regardless of who that person was, was the strongest predictor of optimal performance both in being connected with an action officer and in consistency of connection times. Health departments can achieve optimal performance in consistently connecting a caller with an action officer in 30 minutes or less and may improve performance by using a telephone-based disease surveillance system in which the phone is answered by a live person at all times.
Optimization of thrie beam terminal end shoe connection.
DOT National Transportation Integrated Search
2017-04-01
Terminal thrie end shoes connect nested thrie beams to parapets or other bridge rail structures to provide robust connectivity between a transition section and a rigid railing section. When connecting the terminal end shoe to thrie beam transitions, the...
Energetics in a model of prebiotic evolution
NASA Astrophysics Data System (ADS)
Intoy, B. F.; Halley, J. W.
2017-12-01
Previously we reported [A. Wynveen et al., Phys. Rev. E 89, 022725 (2014), 10.1103/PhysRevE.89.022725] that requiring that the systems regarded as lifelike be out of chemical equilibrium in a model of abstracted polymers undergoing ligation and scission first introduced by Kauffman [S. A. Kauffman, The Origins of Order (Oxford University Press, New York, 1993), Chap. 7] implied that lifelike systems were most probable when the reaction network was sparse. The model was entirely statistical and took no account of the bond energies or other energetic constraints. Here we report results of an extension of the model to include effects of a finite bonding energy. We studied two conditions: (1) a food set is continuously replenished and the total polymer population is constrained but the system is otherwise isolated; and (2) in addition to the constraints in (1), the system is in contact with a finite-temperature heat bath. In each case, detailed balance in the dynamics is guaranteed during the computations by continuous recomputation of a temperature [in case (1)] and of the chemical potential (in both cases) toward which the system is driven by the dynamics. In the isolated case, the probability of reaching a metastable nonequilibrium state in this model depends significantly on the composition of the food set, and the nonequilibrium states satisfying the lifelike condition turn out to be at energies and particle numbers consistent with an equilibrium state at high negative temperature. As a function of the sparseness of the reaction network, the lifelike probability is nonmonotonic, as in our previous model, but the maximum probability occurs when the network is less sparse.
In the case of contact with a thermal bath at a positive ambient temperature, we identify two types of metastable nonequilibrium states, termed locally and thermally alive, and locally dead and thermally alive, and evaluate their likelihood of appearance, finding maxima at an optimal temperature and an optimal degree of sparseness in the network. We use a Euclidean metric in the space of polymer populations to distinguish these states from one another and from fully equilibrated states. The metric can be used to characterize the degree and type of chemical equilibrium in observed systems, as we illustrate for the proteome of the ribosome.
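The Euclidean metric on the space of polymer populations mentioned above is straightforward to write down; the species counts below are hypothetical, chosen only to illustrate the distance between a driven state and its equilibrium reference.

```python
import math

def population_distance(p, q):
    """Euclidean distance between two polymer-population vectors
    (one count per polymer species), used to tell nonequilibrium
    states apart from fully equilibrated ones."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

equilibrium = [10.0, 5.0, 2.0, 1.0]   # hypothetical equilibrium counts
observed = [14.0, 2.0, 2.0, 1.0]      # hypothetical driven (lifelike) state
d = population_distance(observed, equilibrium)
```

A state far from every equilibrium reference under this metric is a candidate nonequilibrium (lifelike) state, while a distance near zero indicates full equilibration.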
Quantum-assisted learning of graphical models with arbitrary pairwise connectivity
NASA Astrophysics Data System (ADS)
Realpe-Gómez, John; Benedetti, Marcello; Biswas, Rupak; Perdomo-Ortiz, Alejandro
Mainstream machine learning techniques rely heavily on sampling from generally intractable probability distributions. There is increasing interest in the potential advantages of using quantum computing technologies as sampling engines to speed up these tasks. However, some pressing challenges in state-of-the-art quantum annealers have to be overcome before we can assess their actual performance. The sparse connectivity, resulting from the local interaction between quantum bits in physical hardware implementations, is considered the most severe limitation on the quality of the machine learning models that can be constructed. Here we show how to surpass this `curse of limited connectivity' bottleneck and illustrate our findings by training probabilistic generative models with arbitrary pairwise connectivity on a real dataset of handwritten digits and two synthetic datasets in experiments with up to 940 quantum bits. Our model can be trained in quantum hardware without full knowledge of the effective parameters specifying the corresponding Boltzmann-like distribution. Therefore, the need to infer the effective temperature at each iteration is avoided, speeding up learning, and the effect of noise in the control parameters is mitigated, improving accuracy. This work was supported in part by NASA, AFRL, ODNI, and IARPA.
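The sample-driven update at the heart of such training can be sketched as follows (a schematic Python illustration of a fully visible Boltzmann-machine gradient step, not the authors' gray-box algorithm):

```python
import numpy as np

def bm_gradient_step(W, data, model_samples, lr=0.05):
    """One weight update for a fully visible Boltzmann-like machine.

    The log-likelihood gradient is the difference between data and
    model pairwise correlations.  Crucially, the model correlations
    can come from *any* sampler (e.g., a quantum annealer), so an
    update driven purely by sample statistics never needs an explicit
    estimate of the sampler's effective temperature.
    """
    data_corr = data.T @ data / len(data)
    model_corr = model_samples.T @ model_samples / len(model_samples)
    return W + lr * (data_corr - model_corr)
```

If the sampler's statistics already match the data, the update leaves the weights unchanged, which is the fixed point the training seeks.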
NASA Astrophysics Data System (ADS)
Li, Fengxian; Yi, Jianhong; Eckert, Jürgen
2017-12-01
Powder forged connecting rods have the problem of non-uniform density distributions because of their complex geometric shape. The densification behaviors of powder metallurgy (PM) connecting rod preforms during hot forging processes play a significant role in optimizing the connecting rod quality. The deformation behaviors of a connecting rod preform, a Fe-3Cu-0.5C (wt pct) alloy compacted and sintered by the powder metallurgy route (PM Fe-Cu-C), were investigated using the finite element method, while damage and friction behaviors of the material were considered in the complicated forging process. The calculated results agree well with the experimental results. The relationship between the processing parameters of hot forging and the relative density of the connecting rod was revealed. The results showed that the relative density of the hot forged connecting rod at the central shank changed significantly compared with the relative density at the big end and at the small end. Moreover, the relative density of the connecting rod was sensitive to the processing parameters such as the forging velocity and the initial density of the preform. The optimum forging processing parameters were determined and presented by using an orthogonal design method. This work suggests that the processing parameters can be optimized to prepare a connecting rod with uniform density distribution and can help to better meet the requirements of the connecting rod industry.
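The final step of reducing an orthogonal-design experiment to optimum processing parameters can be sketched as a main-effects analysis (illustrative Python; the factor names and responses are hypothetical, not the paper's actual forging data):

```python
import numpy as np

def orthogonal_main_effects(levels, response):
    """Main-effects analysis for an orthogonal-array experiment.

    levels: (n_runs, n_factors) integer array of factor levels per run
    (e.g., forging velocity, initial preform density).
    response: quality measure per run (e.g., density uniformity).
    Returns, for each factor, the level with the highest mean response,
    which is the standard way an orthogonal (Taguchi-style) design is
    collapsed into a set of optimum parameters.
    """
    levels = np.asarray(levels)
    response = np.asarray(response, dtype=float)
    best = []
    for j in range(levels.shape[1]):
        lv = np.unique(levels[:, j])
        means = [response[levels[:, j] == v].mean() for v in lv]
        best.append(lv[int(np.argmax(means))])
    return best
```

Because the array is orthogonal, each factor's level means average evenly over the other factors, so the per-factor optimum can be read off independently.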
A Stimulus-Locked Vector Autoregressive Model for Slow Event-Related fMRI Designs
Siegle, Greg
2009-01-01
Neuroscientists have become increasingly interested in exploring dynamic relationships among brain regions. Such a relationship, when directed from one region toward another, is denoted by “effective connectivity.” An fMRI experimental paradigm which is well-suited for examination of effective connectivity is the slow event-related design. This design presents stimuli at sufficient temporal spacing for determining within-trial trajectories of BOLD activation, allowing for the analysis of stimulus-locked temporal covariation of brain responses in multiple regions. This may be especially important for the processing of emotional stimuli, which can evolve over the course of several seconds, if not longer. However, while several methods have been devised for determining fMRI effective connectivity, few are adapted to event-related designs, which include non-stationary BOLD responses and multiple levels of nesting. We propose a model tailored for exploring effective connectivity of multiple brain regions in event-related fMRI designs - a semi-parametric adaptation of vector autoregressive (VAR) models, termed “stimulus-locked VAR” (SloVAR). Connectivity coefficients vary as a function of time relative to stimulus onset, are regularized via basis expansions, and vary randomly across subjects. SloVAR obtains flexible, data-driven estimates of effective connectivity and hence is useful for building connectivity models when prior information on dynamic regional relationships is sparse. Indices derived from the coefficient estimates can also be used to relate effective connectivity estimates to behavioral or clinical measures.
We demonstrate the SloVAR model on a sample of clinically depressed and normal controls, showing that early but not late cortico-amygdala connectivity appears crucial to emotional control and early but not late cortico-cortico connectivity predicts depression severity in the depressed group, relationships that would have been missed in a more traditional VAR analysis. PMID:19236927
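The core idea of basis-expanded, time-varying VAR coefficients can be sketched as follows (a minimal least-squares Python illustration with a polynomial basis; the actual SloVAR model adds random subject effects and other machinery):

```python
import numpy as np

def fit_slovar(y, n_basis=3):
    """Least-squares sketch of a stimulus-locked VAR(1).

    y: array (n_trials, n_time, n_regions) of trial-locked series.
    The lag-1 coefficient matrix A(t) is expanded in a polynomial
    basis in time-from-stimulus, A(t) = sum_k C_k * t**k, and the C_k
    are fit jointly by ordinary least squares pooled across trials.
    Returns a function giving the estimated A at each within-trial
    time index.
    """
    n_trials, n_time, p = y.shape
    t = np.linspace(0.0, 1.0, n_time - 1)                  # times of lag-1 targets
    basis = np.stack([t**k for k in range(n_basis)], axis=1)  # (T-1, K)
    # regressors: the previous state scaled by each basis function
    X = np.concatenate([y[:, :-1, :] * basis[None, :, k:k + 1]
                        for k in range(n_basis)], axis=2)
    X = X.reshape(-1, n_basis * p)
    Y = y[:, 1:, :].reshape(-1, p)
    C, *_ = np.linalg.lstsq(X, Y, rcond=None)              # (K*p, p)

    def A_of_t(ti):
        """Estimated connectivity matrix at within-trial time index ti."""
        return sum(basis[ti, k] * C[k * p:(k + 1) * p].T
                   for k in range(n_basis))
    return A_of_t
```

When the true connectivity is constant over the trial, the fitted A(t) recovers that constant matrix; time-varying early-versus-late connectivity, as in the depressed-group analysis, shows up as variation of A(t) with the time index.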
Viscoelastic properties of bovine orbital connective tissue and fat: constitutive models.
Yoo, Lawrence; Gupta, Vijay; Lee, Choongyeop; Kavehpour, Pirouz; Demer, Joseph L
2011-12-01
Reported mechanical properties of orbital connective tissue and fat have been too sparse to model strain-stress relationships underlying biomechanical interactions in strabismus. We performed rheological tests to develop a multi-mode upper convected Maxwell (UCM) model of these tissues under shear loading. From 20 fresh bovine orbits, 30 samples of connective tissue were taken from rectus pulley regions and 30 samples of fatty tissues from the posterior orbit. Additional samples were defatted to determine connective tissue weight proportion, which was verified histologically. Mechanical testing in shear employed a triborheometer to perform: strain sweeps at 0.5-2.0 Hz; shear stress relaxation with 1% strain; viscometry at 0.01-0.5 s(-1) strain rate; and shear oscillation at 1% strain. Average connective tissue weight proportion was 98% for predominantly connective tissue and 76% for fatty tissue. Connective tissue specimens reached a long-term relaxation modulus of 668 Pa after 1,500 s, while corresponding values for fatty tissue specimens were 290 Pa and 1,100 s. Shear stress magnitude for connective tissue exceeded that of fatty tissue by five-fold. Based on these data, we developed a multi-mode UCM model with variable viscosities and time constants, and a damped hyperelastic response that accurately described measured properties of both connective and fatty tissues. Model parameters differed significantly between the two tissues. Viscoelastic properties of predominantly connective orbital tissues under shear loading differ markedly from properties of orbital fat, but both are accurately reflected using UCM models. These viscoelastic models will facilitate realistic global modeling of EOM behavior in binocular alignment and strabismus.
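The multi-mode Maxwell relaxation behavior described above follows the standard form G(t) = G_inf + sum_i g_i exp(-t/tau_i), which can be sketched directly (illustrative Python; the mode values below are invented for demonstration, only the ~668 Pa long-term modulus comes from the abstract):

```python
import numpy as np

def relaxation_modulus(t, g_inf, modes):
    """Shear relaxation modulus G(t) of a multi-mode Maxwell model.

    g_inf: long-term (equilibrium) modulus, e.g. ~668 Pa for the
    connective-tissue specimens in the abstract.
    modes: list of (g_i, tau_i) pairs, one per Maxwell mode; each mode
    contributes g_i * exp(-t / tau_i) to the modulus.
    """
    t = np.asarray(t, dtype=float)
    G = np.full_like(t, g_inf)
    for g_i, tau_i in modes:
        G = G + g_i * np.exp(-t / tau_i)
    return G
```

At t = 0 the modulus is the sum of all mode strengths plus the equilibrium modulus; as t grows past the longest time constant, G(t) decays to g_inf, matching the long-term plateau the rheometry measured.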
Phase Helps Find Geometrically Optimal Gaits
NASA Astrophysics Data System (ADS)
Revzen, Shai; Hatton, Ross
Geometric motion planning describes motions of animals and machines governed by ġ = g A(q) q̇, where the connection A(·) relates the shape q and shape velocity q̇ to the body-frame velocity g⁻¹ġ ∈ se(3). Measuring the entire connection over a multidimensional q is often infeasible with current experimental methods. We show how a phase estimator makes it tractable to measure the local structure of the connection surrounding a periodic motion q(φ) driven by a phase φ ∈ S¹. This approach reduces the complexity of the estimation problem by a factor of dim q. The results suggest that phase estimation can be combined with geometric optimization into an iterative gait optimization algorithm usable on experimental systems, or, alternatively, can allow the geometric optimality of an observed gait to be detected. ARO W911NF-14-1-0573, NSF 1462555.
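One common way to estimate the phase φ of a roughly periodic signal is via the analytic signal; the abstract does not specify the authors' estimator, so the following is only a generic FFT-based Hilbert-transform sketch in Python:

```python
import numpy as np

def estimate_phase(x):
    """Phase estimate of a roughly periodic scalar signal.

    Minimal FFT-based Hilbert transform: zero the negative
    frequencies, double the positive ones, and take the angle of the
    resulting analytic signal.  For a sinusoid this returns the
    (wrapped) instantaneous phase.
    """
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(X * h)
    return np.angle(analytic)
```

Applied to one tracked shape variable of a gait, the unwrapped phase advances by 2π per cycle, giving the S¹ coordinate against which the local structure of the connection can be organized.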
Recovery of Sparse Positive Signals on the Sphere from Low Resolution Measurements
NASA Astrophysics Data System (ADS)
Bendory, Tamir; Eldar, Yonina C.
2015-12-01
This letter considers the problem of recovering a positive stream of Diracs on a sphere from its projection onto the space of low-degree spherical harmonics, namely, from its low-resolution version. We suggest recovering the Diracs via a tractable convex optimization problem. The resulting recovery error is proportional to the noise level and depends on the density of the Diracs. We validate the theory by numerical experiments.
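A 1-D toy analogue of this recovery problem can be sketched with nonnegativity-constrained least squares on a grid (illustrative Python; the letter works on the sphere with a different convex program, and positivity alone serves as the regularizer here):

```python
import numpy as np
from scipy.optimize import nnls

def recover_positive_spikes(y, freqs, grid):
    """Recover a nonnegative spike train from low-frequency samples.

    y: measurements [cos, sin] Fourier coefficients for each f in
    freqs, of a positive stream of Diracs on [0, 1).
    grid: candidate spike locations.  Solves min ||A x - y|| s.t.
    x >= 0; for well-separated positive spikes and enough low-order
    coefficients, the nonnegative solution concentrates on the true
    support.
    """
    grid = np.asarray(grid, dtype=float)
    rows = []
    for f in freqs:
        rows.append(np.cos(2 * np.pi * f * grid))
        rows.append(np.sin(2 * np.pi * f * grid))
    A = np.array(rows)                       # (2*len(freqs), len(grid))
    x, _ = nnls(A, np.asarray(y, dtype=float))
    return x
```

As in the letter, the resolution limit is set by the highest retained frequency, and denser spike configurations degrade the recovery.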
Million, Lynn; Donaldson, Sarah S.
2012-01-01
It remains unclear which children and adolescents with resected nonrhabdomyosarcoma soft tissue sarcoma (NRSTS) benefit from radiation therapy, as well as the optimal dose, volume, and timing of radiotherapy when used with primary surgical resection. This paper reviews the sparse literature from clinical trials and retrospective studies of resected pediatric NRSTS to discern local recurrence rates in relation to the use of radiation therapy. PMID:22523704
Optimizing Dynamical Network Structure for Pinning Control
NASA Astrophysics Data System (ADS)
Orouskhani, Yasin; Jalili, Mahdi; Yu, Xinghuo
2016-04-01
Controlling the dynamics of a network from any initial state to a final desired state has many applications in disciplines ranging from engineering to biology and the social sciences. In this work, we optimize the network structure for pinning control. The problem is formulated as four optimization tasks: i) optimizing the locations of driver nodes, ii) optimizing the feedback gains, iii) optimizing simultaneously the locations of driver nodes and feedback gains, and iv) optimizing the connection weights. A newly developed population-based optimization technique (cat swarm optimization) is used as the optimization method. In order to verify the methods, we use both real-world networks and model scale-free and small-world networks. Extensive simulation results show that the optimal placement of driver nodes significantly outperforms heuristic methods, including placing drivers based on various centrality measures (degree, betweenness, closeness and clustering coefficient). The pinning controllability is further improved by optimizing the feedback gains. We also show that one can significantly improve the controllability by optimizing the connection weights.
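A standard scalar measure of pinning controllability is the smallest eigenvalue of the Laplacian augmented with the pinning gains; driver placement then becomes a search over that measure. The sketch below (illustrative Python) uses exhaustive search as a toy stand-in for the paper's cat swarm optimizer:

```python
from itertools import combinations

import numpy as np

def pinning_controllability(L, drivers, gain=10.0):
    """Smallest eigenvalue of L + diag(gains), a common measure of
    pinning controllability (larger is better).  L is the symmetric
    graph Laplacian; the listed driver nodes receive feedback gain."""
    D = np.zeros(L.shape[0])
    D[list(drivers)] = gain
    return np.linalg.eigvalsh(L + np.diag(D))[0]

def best_driver_set(L, k, gain=10.0):
    """Brute-force search over k-node driver placements (a toy
    stand-in for the population-based optimizer used in the paper)."""
    n = L.shape[0]
    return max(combinations(range(n), k),
               key=lambda s: pinning_controllability(L, s, gain))
```

Even on a three-node path graph the optimizer prefers the central node over either endpoint, illustrating why optimized placement beats simple heuristics on larger networks.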
Optimal Power Flow in Multiphase Radial Networks with Delta Connections: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Changhong; Dall'Anese, Emiliano; Low, Steven H.
This paper focuses on multiphase radial distribution networks with mixed wye and delta connections, and proposes a semidefinite relaxation of the AC optimal power flow (OPF) problem. Two multiphase power-flow models are developed to facilitate the integration of delta-connected generation units/loads in the OPF problem. The first model extends traditional branch flow models and is referred to as the extended branch flow model (EBFM). The second model leverages a linear relationship between per-phase power injections and delta connections, which holds under a balanced voltage approximation (BVA). Based on these models, pertinent OPF problems are formulated and relaxed to semidefinite programs (SDPs). Numerical studies on IEEE test feeders show that the SDP relaxations can be solved efficiently by a generic optimization solver. Numerical evidence indicates that solving the resultant SDP under BVA is faster than under EBFM. Moreover, both SDP solutions are numerically exact with respect to voltages and branch flows. It is also shown that the SDP solution under BVA has a small optimality gap, while the BVA model is accurate in the sense that it reflects actual system voltages.
Finite element method for optimal guidance of an advanced launch vehicle
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Bless, Robert R.; Calise, Anthony J.; Leung, Martin
1992-01-01
A temporal finite element based on a mixed form of Hamilton's weak principle is summarized for optimal control problems. The resulting weak Hamiltonian finite element method is extended to allow for discontinuities in the states and/or discontinuities in the system equations. An extension of the formulation to allow for control inequality constraints is also presented. The formulation does not require element quadrature, and it produces a sparse system of nonlinear algebraic equations. To evaluate its feasibility for real-time guidance applications, this approach is applied to the trajectory optimization of a four-state, two-stage model with inequality constraints for an advanced launch vehicle. Numerical results for this model are presented and compared to results from a multiple-shooting code. The results show the accuracy and computational efficiency of the finite element method.