NASA Astrophysics Data System (ADS)
Li, Xianye; Meng, Xiangfeng; Yang, Xiulun; Wang, Yurong; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi
2018-03-01
A multiple-image encryption method via lifting wavelet transform (LWT) and XOR operation is proposed, which is based on a row scanning compressive ghost imaging scheme. In the encryption process, the scrambling operation is implemented for the sparse images transformed by LWT, then the XOR operation is performed on the scrambled images, and the resulting XOR images are compressed in the row scanning compressive ghost imaging, through which the ciphertext images can be detected by bucket detector arrays. During decryption, the participant who possesses his/her correct key-group can successfully reconstruct the corresponding plaintext image by measurement key regeneration, compression algorithm reconstruction, XOR operation, sparse image recovery, and inverse LWT (iLWT). Theoretical analysis and numerical simulations validate the feasibility of the proposed method.
Computing sparse derivatives and consecutive zeros problem
NASA Astrophysics Data System (ADS)
Chandra, B. V. Ravi; Hossain, Shahadat
2013-02-01
We describe a substitution-based sparse Jacobian matrix determination method using algorithmic differentiation. Utilizing the a priori known sparsity pattern, a compression scheme is determined using graph coloring. The "compressed pattern" of the Jacobian matrix is then reordered into a form suitable for computation by substitution. We show that the column reordering of the compressed pattern matrix (so as to align the zero entries into consecutive locations in each row) can be viewed as a variant of the traveling salesman problem. Preliminary computational results show that on the test problems the performance of nearest-neighbor-type heuristic algorithms is highly encouraging.
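The compression step described above can be illustrated with the classic structural-orthogonality idea: columns of the Jacobian that share no nonzero row can be grouped under one color and evaluated together. The following sketch is a hypothetical, greedy illustration of that coloring step only, not the substitution-based scheme of the paper.

```python
# Hypothetical sketch: greedily color Jacobian columns so that columns
# sharing a color are structurally orthogonal (no common nonzero row).
# This is the classic compression idea underlying graph-coloring-based
# Jacobian determination, shown here for intuition only.

def color_columns(pattern):
    """pattern: list of sets; pattern[j] = set of rows where column j is nonzero."""
    n = len(pattern)
    colors = [-1] * n
    for j in range(n):
        # Colors already taken by columns that conflict with column j.
        used = {colors[k] for k in range(j) if pattern[j] & pattern[k]}
        c = 0
        while c in used:
            c += 1
        colors[j] = c
    return colors

# Arrowhead-like 4x4 pattern: column 0 touches every row, so it needs its
# own color; columns 1-3 touch disjoint rows and can share one color.
pattern = [{0, 1, 2, 3}, {1}, {2}, {3}]
print(color_columns(pattern))  # -> [0, 1, 1, 1]
```

Two colors here means the full Jacobian can be recovered from just two compressed directional derivatives instead of four.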
NASA Astrophysics Data System (ADS)
He, Xingyu; Tong, Ningning; Hu, Xiaowei
2018-01-01
Compressive sensing has been successfully applied to inverse synthetic aperture radar (ISAR) imaging of moving targets. By exploiting the block sparse structure of the target image, sparse solution for multiple measurement vectors (MMV) can be applied in ISAR imaging and a substantial performance improvement can be achieved. As an effective sparse recovery method, sparse Bayesian learning (SBL) for MMV involves a matrix inverse at each iteration. Its associated computational complexity grows significantly with the problem size. To address this problem, we develop a fast inverse-free (IF) SBL method for MMV. A relaxed evidence lower bound (ELBO), which is computationally more amenable than the traditional ELBO used by SBL, is obtained by invoking a fundamental property of smooth functions. A variational expectation-maximization scheme is then employed to maximize the relaxed ELBO, and a computationally efficient IF-MSBL algorithm is proposed. Numerical results based on simulated and real data show that the proposed method can reconstruct row-sparse signals accurately and obtain clear superresolution ISAR images. Moreover, the running time and computational complexity are reduced to a great extent compared with traditional SBL methods.
Method and apparatus for optimized processing of sparse matrices
Taylor, Valerie E.
1993-01-01
A computer architecture for processing a sparse matrix is disclosed. The apparatus stores a value-row vector corresponding to nonzero values of a sparse matrix. Each of the nonzero values is located at a defined row and column position in the matrix. The value-row vector includes a first vector including nonzero values and delimiting characters indicating a transition from one column to another. The value-row vector also includes a second vector which defines row position values in the matrix corresponding to the nonzero values in the first vector and column position values in the matrix corresponding to the column position of the nonzero values in the first vector. The architecture also includes a circuit for detecting a special character within the value-row vector. Matrix-vector multiplication is executed on the value-row vector. This multiplication is performed by multiplying an index value of the first vector value by a column value from a second matrix to form a matrix-vector product which is added to a previous matrix-vector product.
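The patent's value-row vector is a delimiter-based variant of the widely used compressed sparse row (CSR) layout. For comparison, here is a minimal sketch of sparse matrix-vector multiplication over a standard CSR representation; the delimiter-character layout described above differs in detail but serves the same purpose of skipping zero entries.

```python
# Minimal CSR (compressed sparse row) matrix-vector product, shown for
# comparison with the value-row vector format described in the patent.

def csr_matvec(values, col_idx, row_ptr, x):
    """Return y = A @ x for A stored in CSR form: values holds the nonzeros,
    col_idx their column positions, and row_ptr the start of each row."""
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# A = [[1, 0, 2],
#      [0, 3, 0],
#      [4, 0, 5]]
values  = [1.0, 2.0, 3.0, 4.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
print(csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # -> [3.0, 3.0, 9.0]
```

Only the five nonzeros are ever touched, which is the property both layouts exploit.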
Solving large tomographic linear systems: size reduction and error estimation
NASA Astrophysics Data System (ADS)
Voronin, Sergey; Mikesell, Dylan; Slezak, Inna; Nolet, Guust
2014-10-01
We present a new approach to reduce a sparse, linear system of equations associated with tomographic inverse problems. We begin by making a modification to the commonly used compressed sparse-row format, whereby our format is tailored to the sparse structure of finite-frequency (volume) sensitivity kernels in seismic tomography. Next, we cluster the sparse matrix rows to divide a large matrix into smaller subsets representing ray paths that are geographically close. Singular value decomposition of each subset allows us to project the data onto a subspace associated with the largest eigenvalues of the subset. After projection we reject those data that have a signal-to-noise ratio (SNR) below a chosen threshold. Clustering in this way assures that the sparse nature of the system is minimally affected by the projection. Moreover, our approach allows for a precise estimation of the noise affecting the data while also giving us the ability to identify outliers. We illustrate the method by reducing large matrices computed for global tomographic systems with cross-correlation body wave delays, as well as with surface wave phase velocity anomalies. For a massive matrix computed for 3.7 million Rayleigh wave phase velocity measurements, imposing a threshold of 1 for the SNR, we condensed the matrix size from 1103 to 63 Gbyte. For a global data set of multiple-frequency P wave delays from 60 well-distributed deep earthquakes we obtain a reduction to 5.9 per cent. This type of reduction allows one to avoid loss of information due to underparametrizing models. Alternatively, if data have to be rejected to fit the system into computer memory, it assures that the most important data are preserved.
GPU-accelerated element-free reverse-time migration with Gauss points partition
NASA Astrophysics Data System (ADS)
Zhou, Zhen; Jia, Xiaofeng; Qiang, Xiaodong
2018-06-01
An element-free method (EFM) has been demonstrated successfully in elasticity, heat conduction and fatigue crack growth problems. We present the theory of EFM and its numerical applications in seismic modelling and reverse time migration (RTM). Compared with the finite difference method and the finite element method, the EFM has unique advantages: (1) independence of grids in computation and (2) lower expense and more flexibility (because only the information of the nodes and the boundary of the concerned area is required). However, in EFM, due to improper computation and storage of some large sparse matrices, such as the mass matrix and the stiffness matrix, the method is difficult to apply to seismic modelling and RTM for a large velocity model. To solve the problem of storage and computational efficiency, we propose a concept of Gauss points partition and utilise the graphics processing unit to improve the computational efficiency. We employ the compressed sparse row format to compress the intermediate large sparse matrices and attempt to simplify the operations by solving the linear equations with the CULA solver. To improve the computational efficiency further, we introduce the concept of the lumped mass matrix. Numerical experiments indicate that the proposed method is accurate and more efficient than the regular EFM.
Computing row and column counts for sparse QR and LU factorization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, John R.; Li, Xiaoye S.; Ng, Esmond G.
2001-01-01
We present algorithms to determine the number of nonzeros in each row and column of the factors of a sparse matrix, for both the QR factorization and the LU factorization with partial pivoting. The algorithms use only the nonzero structure of the input matrix, and run in time nearly linear in the number of nonzeros in that matrix. They may be used to set up data structures or schedule parallel operations in advance of the numerical factorization. The row and column counts we compute are upper bounds on the actual counts. If the input matrix is strong Hall and there is no coincidental numerical cancellation, the counts are exact for QR factorization and are the tightest bounds possible for LU factorization. These algorithms are based on our earlier work on computing row and column counts for sparse Cholesky factorization, plus an efficient method to compute the column elimination tree of a sparse matrix without explicitly forming the product of the matrix and its transpose.
Memory hierarchy using row-based compression
Loh, Gabriel H.; O'Connor, James M.
2016-10-25
A system includes a first memory and a device coupleable to the first memory. The device includes a second memory to cache data from the first memory. The second memory includes a plurality of rows, each row including a corresponding set of compressed data blocks of non-uniform sizes and a corresponding set of tag blocks. Each tag block represents a corresponding compressed data block of the row. The device further includes decompression logic to decompress data blocks accessed from the second memory. The device further includes compression logic to compress data blocks to be stored in the second memory.
A High Performance Block Eigensolver for Nuclear Configuration Interaction Calculations
Aktulga, Hasan Metin; Afibuzzaman, Md.; Williams, Samuel; ...
2017-06-01
As on-node parallelism increases and the performance gap between the processor and the memory system widens, achieving high performance in large-scale scientific applications requires an architecture-aware design of algorithms and solvers. We focus on the eigenvalue problem arising in nuclear Configuration Interaction (CI) calculations, where a few extreme eigenpairs of a sparse symmetric matrix are needed. Here, we consider a block iterative eigensolver whose main computational kernels are the multiplication of a sparse matrix with multiple vectors (SpMM), and tall-skinny matrix operations. We then present techniques to significantly improve the SpMM and the transpose operation SpMM T by using the compressed sparse blocks (CSB) format. We achieve 3-4× speedup on the requisite operations over good implementations with the commonly used compressed sparse row (CSR) format. We develop a performance model that allows us to correctly estimate the performance of our SpMM kernel implementations, and we identify cache bandwidth as a potential performance bottleneck beyond DRAM. We also analyze and optimize the performance of LOBPCG kernels (inner product and linear combinations on multiple vectors) and show up to 15× speedup over using high performance BLAS libraries for these operations. The resulting high performance LOBPCG solver achieves 1.4× to 1.8× speedup over the existing Lanczos solver on a series of CI computations on high-end multicore architectures (Intel Xeons). We also analyze the performance of our techniques on an Intel Xeon Phi Knights Corner (KNC) processor.
Biclustering sparse binary genomic data.
van Uitert, Miranda; Meuleman, Wouter; Wessels, Lodewyk
2008-12-01
Genomic datasets often consist of large, binary, sparse data matrices. In such a dataset, one is often interested in finding contiguous blocks that (mostly) contain ones. This is a biclustering problem, and while many algorithms have been proposed to deal with gene expression data, only two algorithms have been proposed that specifically deal with binary matrices. None of the gene expression biclustering algorithms can handle the large number of zeros in sparse binary matrices. The two proposed binary algorithms failed to produce meaningful results. In this article, we present a new algorithm that is able to extract biclusters from sparse, binary datasets. A powerful feature is that biclusters with different numbers of rows and columns can be detected, ranging from many rows and few columns to few rows and many columns. It allows the user to guide the search towards biclusters of specific dimensions. When applying our algorithm to an input matrix derived from TRANSFAC, we find transcription factors with distinctly dissimilar binding motifs, but a clear set of common targets that are significantly enriched for GO categories.
Multi-energy CT based on a prior rank, intensity and sparsity model (PRISM).
Gao, Hao; Yu, Hengyong; Osher, Stanley; Wang, Ge
2011-11-01
We propose a compressive sensing approach for multi-energy computed tomography (CT), namely the prior rank, intensity and sparsity model (PRISM). To further compress the multi-energy image for allowing the reconstruction with fewer CT data and less radiation dose, the PRISM models a multi-energy image as the superposition of a low-rank matrix and a sparse matrix (with row dimension in space and column dimension in energy), where the low-rank matrix corresponds to the stationary background over energy that has a low matrix rank, and the sparse matrix represents the rest of distinct spectral features that are often sparse. Distinct from previous methods, the PRISM utilizes the generalized rank, e.g., the matrix rank of tight-frame transform of a multi-energy image, which offers a way to characterize the multi-level and multi-filtered image coherence across the energy spectrum. Besides, the energy-dependent intensity information can be incorporated into the PRISM in terms of the spectral curves for base materials, with which the restoration of the multi-energy image becomes the reconstruction of the energy-independent material composition matrix. In other words, the PRISM utilizes prior knowledge on the generalized rank and sparsity of a multi-energy image, and intensity/spectral characteristics of base materials. Furthermore, we develop an accurate and fast split Bregman method for the PRISM and demonstrate the superior performance of the PRISM relative to several competing methods in simulations.
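The image model underlying PRISM can be made concrete with a toy example. The sketch below only illustrates the decomposition itself, with hypothetical data: a pixels-by-energies matrix built as a low-rank background plus a sparse feature matrix; it does not implement the reconstruction or the generalized (tight-frame) rank used in the paper.

```python
import numpy as np

# Toy illustration of the PRISM image model (not the reconstruction):
# a multi-energy image flattened to a matrix with rows = pixels and
# columns = energy bins, modeled as low-rank background + sparse features.
rng = np.random.default_rng(0)

n_pixels, n_energies = 100, 8

# Rank-1 background: one spatial profile scaled across the energy bins,
# standing in for the stationary background over energy.
background = rng.random((n_pixels, 1)) @ rng.random((1, n_energies))

# Sparse features: a handful of distinct spectral features at random entries.
features = np.zeros((n_pixels, n_energies))
features[rng.integers(0, n_pixels, 10), rng.integers(0, n_energies, 10)] = 1.0

image = background + features
print(np.linalg.matrix_rank(background))  # -> 1
print(int((features != 0).sum()))         # at most 10 nonzeros out of 800
```

Recovering `background` and `features` from undersampled measurements of `image` is exactly the inverse problem the split Bregman method in the paper addresses.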
FPGA implementation of sparse matrix algorithm for information retrieval
NASA Astrophysics Data System (ADS)
Bojanic, Slobodan; Jevtic, Ruzica; Nieto-Taladriz, Octavio
2005-06-01
Information text data retrieval requires a tremendous amount of processing time because of the size of the data and the complexity of information retrieval algorithms. In this paper the solution to this problem is proposed via hardware supported information retrieval algorithms. Reconfigurable computing accommodates frequent hardware modifications through its tailorable hardware and exploits parallelism for a given application through reconfigurable and flexible hardware units. The degree of parallelism can be tuned to the data. In this work we implemented the standard BLAS (basic linear algebra subprogram) sparse matrix algorithm named Compressed Sparse Row (CSR), which is shown to be more efficient in terms of storage space requirement and query-processing timing than the other sparse matrix algorithms for information retrieval applications. Although the inverted index algorithm has been treated as the de facto standard for information retrieval for years, an alternative approach that stores the index of a text collection in a sparse matrix structure is gaining more attention. This approach performs query processing using sparse matrix-vector multiplication and, due to parallelization, achieves a substantial efficiency over the sequential inverted index. The parallel implementations of the information retrieval kernel are presented in this work targeting the Virtex II Field Programmable Gate Arrays (FPGAs) board from Xilinx. A recent development in scientific applications is the use of FPGAs to achieve high performance results. Computational results are compared to implementations on other platforms. The design achieves a high level of parallelism for the overall function while retaining highly optimised hardware within the processing unit.
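The two index representations contrasted above can be sketched side by side on a toy corpus. This is a hypothetical illustration, not the paper's FPGA implementation: an inverted index maps each term to the documents containing it, which is equivalent to storing the columns of a sparse term-document matrix.

```python
# Toy corpus: three tiny "documents".
docs = ["sparse matrix retrieval", "matrix vector product", "retrieval engine"]

# Inverted index: term -> sorted list of document ids containing it.
inverted = {}
for d, text in enumerate(docs):
    for term in text.split():
        inverted.setdefault(term, []).append(d)

print(inverted["matrix"])     # -> [0, 1]
print(inverted["retrieval"])  # -> [0, 2]

# Query processing by merging posting lists: count how many query terms
# hit each document (the same result a sparse matrix-vector product with
# a binary term-document matrix would give).
query = ["matrix", "retrieval"]
scores = [0] * len(docs)
for term in query:
    for d in inverted.get(term, []):
        scores[d] += 1
print(scores)  # -> [2, 1, 1]
```

The sparse-matrix view simply batches the per-term loops into one multiplication, which is what makes the workload amenable to the hardware parallelism described above.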
Mihata, Teruhisa; Watanabe, Chisato; Fukunishi, Kunimoto; Ohue, Mutsumi; Tsujimura, Tomoyuki; Fujiwara, Kenta; Kinoshita, Mitsuo
2011-10-01
Although previous biomechanical research has demonstrated the superiority of the suture-bridge rotator cuff repair over double-row repair from a mechanical point of view, no articles have described the structural and functional outcomes of this type of procedure. The structural and functional outcomes after arthroscopic rotator cuff repair may be different between the single-row, double-row, and combined double-row and suture-bridge (compression double-row) techniques. Cohort study; Level of evidence, 3. There were 206 shoulders in 201 patients with full-thickness rotator cuff tears that underwent arthroscopic rotator cuff repair. Eleven patients were lost to follow-up. Sixty-five shoulders were repaired using the single-row, 23 shoulders using the double-row, and 107 shoulders using the compression double-row techniques. Clinical outcomes were evaluated at an average of 38.5 months (range, 24-74 months) after rotator cuff repair. Postoperative cuff integrity was determined using Sugaya's classification of magnetic resonance imaging (MRI). The retear rates after arthroscopic rotator cuff repair were 10.8%, 26.1%, and 4.7%, respectively, for the single-row, double-row, and compression double-row techniques. In the subcategory of large and massive rotator cuff tears, the retear rate in the compression double-row group (3 of 40 shoulders, 7.5%) was significantly less than those in the single-row group (5 of 8 shoulders, 62.5%, P < .001) and the double-row group (5 of 12 shoulders, 41.7%, P < .01). Postoperative clinical outcomes in patients with a retear were significantly lower than those in patients without a retear for all 3 techniques. The additional suture bridges decreased the retear rate for large and massive tears. 
The combination of the double-row and suture-bridge techniques, which had the lowest rate of postoperative retear, is an effective option for arthroscopic repair of the rotator cuff tendons because the postoperative functional outcome in patients with a retear is inferior to that without retear.
Bridging suture makes consistent and secure fixation in double-row rotator cuff repair.
Fukuhara, Tetsutaro; Mihata, Teruhisa; Jun, Bong Jae; Neo, Masashi
2017-09-01
Inconsistent tension distribution may decrease the biomechanical properties of the rotator cuff tendon after double-row repair, resulting in repair failure. The purpose of this study was to compare the tension distribution along the repaired rotator cuff tendon among three double-row repair techniques. In each of 42 fresh-frozen porcine shoulders, a simulated infraspinatus tendon tear was repaired by using 1 of 3 double-row techniques: (1) conventional double-row repair (no bridging suture); (2) transosseous-equivalent repair (bridging suture alone); and (3) compression double-row repair (which combined conventional double-row and bridging sutures). Each specimen underwent cyclic testing at a simulated shoulder abduction angle of 0° or 40° on a material-testing machine. Gap formation and tendon strain were measured during the 1st and 30th cycles. To evaluate tension distribution after cuff repair, difference in gap and tendon strain between the superior and inferior fixations was compared among three double-row techniques. At an abduction angle of 0°, gap formation after either transosseous-equivalent or compression double-row repair was significantly less than that after conventional double-row repair (p < 0.01). During the 30th cycle, both transosseous-equivalent repair (p = 0.02) and compression double-row repair (p = 0.01) at 0° abduction had significantly less difference in gap formation between the superior and inferior fixations than did conventional double-row repair. After the 30th cycle, the difference in longitudinal strain between the superior and inferior fixations at 0° abduction was significantly less with compression double-row repair (2.7% ± 2.4%) than with conventional double-row repair (8.6% ± 5.5%, p = 0.03). 
Bridging sutures facilitate consistent and secure fixation in double-row rotator cuff repairs, suggesting that bridging sutures may be beneficial for distributing tension equally among all sutures during double-row repair of rotator cuff tears. Copyright © 2017 The Japanese Orthopaedic Association. Published by Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pinski, Peter; Riplinger, Christoph; Valeev, Edward F.; Neese, Frank
2015-07-21
In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals.
While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in quantum chemistry and beyond.
Mihata, Teruhisa; Fukuhara, Tetsutaro; Jun, Bong Jae; Watanabe, Chisato; Kinoshita, Mitsuo
2011-03-01
After rotator cuff repair, the shoulder is immobilized in various abduction positions. However, there is no consensus on the proper abduction angle. To assess the effect of shoulder abduction angle on the biomechanical properties of the repaired rotator cuff tendons among 3 types of double-row techniques. Controlled laboratory study. Thirty-two fresh-frozen porcine shoulders were used. A simulated rotator cuff tear was repaired by 1 of 3 double-row techniques: conventional double-row repair, transosseous-equivalent repair, and a combination of conventional double-row and bridging sutures (compression double-row repair). Each specimen underwent cyclic testing followed by tensile testing to failure at a simulated shoulder abduction angle of 0° or 40° on a material testing machine. Gap formation and failure loads were measured. Gap formation in conventional double-row repair at 0° (1.2 ± 0.5 mm) was significantly greater than that at 40° (0.5 ± 0.3 mm, P = .01). The yield and ultimate failure loads for conventional double-row repair at 40° were significantly larger than those at 0° (P < .01), whereas those for transosseous-equivalent repair (P < .01) and compression double-row repair (P < .0001) at 0° were significantly larger than those at 40°. The failure load for compression double-row repair was the greatest among the 3 double-row techniques at both 0° and 40° of abduction. Bridging sutures have a greater effect on the biomechanical properties of the repaired rotator cuff tendon at a low abduction angle, and the conventional double-row technique has a greater effect at a high abduction angle. Proper abduction position after rotator cuff repair differs between conventional double-row repair and transosseous-equivalent repair. The authors recommend the use of the combined technique of conventional double-row and bridging sutures to obtain better biomechanical properties at both low and high abduction angles.
NASA Astrophysics Data System (ADS)
Liu, Jianming; Grant, Steven L.; Benesty, Jacob
2015-12-01
A new reweighted proportionate affine projection algorithm (RPAPA) with memory and row action projection (MRAP) is proposed in this paper. The reweighted PAPA is derived from a family of sparseness measures, which demonstrate performance similar to mu-law and the l0-norm PAPA but with lower computational complexity. The sparseness of the channel is taken into account to improve the performance for dispersive system identification. Meanwhile, the memory of the filter's coefficients is combined with row action projections (RAP) to significantly reduce computational complexity. Simulation results demonstrate that the proposed RPAPA MRAP algorithm outperforms both the affine projection algorithm (APA) and PAPA, and has performance similar to l0 PAPA and mu-law PAPA, in terms of convergence speed and tracking ability. Meanwhile, the proposed RPAPA MRAP has much lower computational complexity than PAPA, mu-law PAPA, and l0 PAPA, etc., which makes it very appealing for real-time implementation.
Casing for a gas turbine engine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiebe, David J.; Little, David A.; Charron, Richard C.
2016-07-12
A casing for a can annular gas turbine engine, including: a compressed air section (40) spanning between a last row of compressor blades (26) and a first row of turbine blades (28), the compressed air section (40) having a plurality of openings (50) there through, wherein a single combustor/advanced duct assembly (64) extends through each opening (50); and one top hat (68) associated with each opening (50) configured to enclose the associated combustor/advanced duct assembly (64) and seal the opening (50). A volume enclosed by the compressed air section (40) is not greater than a volume of a frustum (54) defined at an upstream end (56) by an inner diameter of the casing at the last row of compressor blades (26) and at a downstream end (60) by an inner diameter of the casing at the first row of turbine blades (28).
Zhang, Li; Zhou, WeiDa
2013-12-01
This paper deals with fast methods for training a 1-norm support vector machine (SVM). First, we define a specific class of linear programming with many sparse constraints, i.e., row-column sparse constraint linear programming (RCSC-LP). In nature, the 1-norm SVM is a sort of RCSC-LP. In order to construct subproblems for RCSC-LP and solve them, a family of row-column generation (RCG) methods is introduced. RCG methods belong to a category of decomposition techniques, and perform row and column generations in a parallel fashion. Specifically, for the 1-norm SVM, the maximum size of subproblems of RCG is identical with the number of support vectors (SVs). We also introduce a semi-deleting rule for RCG methods and prove the convergence of RCG methods when using the semi-deleting rule. Experimental results on toy data and real-world datasets illustrate that it is efficient to use RCG to train the 1-norm SVM, especially in the case of a small number of SVs. Copyright © 2013 Elsevier Ltd. All rights reserved.
Sparse electrocardiogram signals recovery based on solving a row echelon-like form of system.
Cai, Pingmei; Wang, Guinan; Yu, Shiwei; Zhang, Hongjuan; Ding, Shuxue; Wu, Zikai
2016-02-01
The study of biology and medicine in a noise environment is an evolving direction in biological data analysis. Among these studies, analysis of electrocardiogram (ECG) signals in a noise environment is a challenging direction in personalized medicine. Due to their periodic characteristic, ECG signals can be roughly regarded as sparse biomedical signals. This study proposes a two-stage recovery algorithm for sparse biomedical signals in the time domain. In the first stage, the concentration subspaces are found in advance; by exploiting these subspaces, the mixing matrix is estimated accurately. In the second stage, based on the number of active sources at each time point, the time points are divided into different layers. Next, by constructing certain transformation matrices, these time points form a row echelon-like system. After that, the sources at each layer can be solved explicitly by the corresponding matrix operations. It is worth noting that all these operations are conducted under a weak sparsity condition, namely that the number of active sources is less than the number of observations. Experimental results show that the proposed method performs better on the sparse ECG signal recovery problem.
Dynamic graph system for a semantic database
Mizell, David
2016-04-12
A method and system in a computer system for dynamically providing a graphical representation of a data store of entries via a matrix interface is disclosed. A dynamic graph system provides a matrix interface that exposes to an application program a graphical representation of data stored in a data store such as a semantic database storing triples. To the application program, the matrix interface represents the graph as a sparse adjacency matrix that is stored in compressed form. Each entry of the data store is considered to represent a link between nodes of the graph. Each entry has a first field and a second field identifying the nodes connected by the link and a third field with a value for the link that connects the identified nodes. The first, second, and third fields represent the rows, columns, and elements of the adjacency matrix.
Dynamic graph system for a semantic database
Mizell, David
2015-01-27
A method and system in a computer system for dynamically providing a graphical representation of a data store of entries via a matrix interface is disclosed. A dynamic graph system provides a matrix interface that exposes to an application program a graphical representation of data stored in a data store such as a semantic database storing triples. To the application program, the matrix interface represents the graph as a sparse adjacency matrix that is stored in compressed form. Each entry of the data store is considered to represent a link between nodes of the graph. Each entry has a first field and a second field identifying the nodes connected by the link and a third field with a value for the link that connects the identified nodes. The first, second, and third fields represent the rows, columns, and elements of the adjacency matrix.
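The compressed adjacency-matrix representation described in both records can be sketched as a CSR (compressed sparse row) structure built from triples; this is a generic illustration, not the patented system:

```python
def triples_to_csr(triples, num_nodes):
    """Build a CSR-form sparse adjacency matrix from (row, col, value) triples.

    Returns (indptr, indices, values): row i's links occupy
    indices[indptr[i]:indptr[i+1]] with the matching link values.
    """
    by_row = [[] for _ in range(num_nodes)]
    for r, c, v in triples:
        by_row[r].append((c, v))
    indptr, indices, values = [0], [], []
    for row in by_row:
        for c, v in sorted(row):   # keep column indices ordered within a row
            indices.append(c)
            values.append(v)
        indptr.append(len(indices))
    return indptr, indices, values
```

Only the nonzero links are stored, which is what makes the representation practical for the very sparse adjacency matrices that semantic triple stores produce.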
On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.
Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi
2018-02-01
On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power and area efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, the state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs on both power and area that could offset the benefits from the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that could lead to efficient hardware implementation. Specifically, two different approaches for the construction of sparse measurement matrices, i.e., the deterministic quasi-cyclic array code (QCAC) matrix and the sparse random binary matrix (SRBM), are exploited. We demonstrate that the proposed CS encoders lead to comparable recovery performance, and efficient VLSI architecture designs are proposed for the QCAC-CS and SRBM encoders with reduced area and total power consumption.
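A sparse random binary measurement matrix of the kind discussed here can be sketched as follows, assuming (as an illustration, not from the paper) a fixed number s of ones per column:

```python
import numpy as np

def sparse_binary_matrix(m, n, s, rng=None):
    """Random binary measurement matrix with exactly s ones per column.

    With s much smaller than m, computing y = phi @ x needs only s
    additions per input sample, which is the hardware advantage over
    dense binary measurement matrices.
    """
    rng = np.random.default_rng(rng)
    phi = np.zeros((m, n), dtype=np.uint8)
    for j in range(n):
        phi[rng.choice(m, size=s, replace=False), j] = 1
    return phi
```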
Archer, A.W.; Maples, C.G.
1989-01-01
Numerous departures from ideal relationships are revealed by Monte Carlo simulations of widely accepted binomial coefficients. For example, simulations incorporating varying levels of matrix sparseness (presence of zeros indicating lack of data) and computation of expected values reveal that not only are all common coefficients influenced by zero data, but also that some coefficients do not discriminate between sparse or dense matrices (few zero data). Such coefficients computationally merge mutually shared and mutually absent information and do not exploit all the information incorporated within the standard 2 x 2 contingency table; therefore, the commonly used formulae for such coefficients are more complicated than the actual range of values produced. Other coefficients do differentiate between mutual presences and absences; however, a number of these coefficients do not demonstrate a linear relationship to matrix sparseness. Finally, simulations using nonrandom matrices with known degrees of row-by-row similarities signify that several coefficients either do not display a reasonable range of values or are nonlinear with respect to known relationships within the data. Analyses with nonrandom matrices yield clues as to the utility of certain coefficients for specific applications. For example, coefficients such as Jaccard, Dice, and Baroni-Urbani and Buser are useful if correction of sparseness is desired, whereas the Russell-Rao coefficient is useful when sparseness correction is not desired.
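For reference, the Jaccard and Dice coefficients named above are computed from the 2 x 2 contingency table of two presence/absence rows; note that both ignore mutual absences (the d cell), which is why they behave differently from coefficients that merge shared and absent information:

```python
def binary_coefficients(x, y):
    """Jaccard and Dice similarity from two presence/absence rows."""
    a = sum(1 for xi, yi in zip(x, y) if xi and yi)      # mutual presences
    b = sum(1 for xi, yi in zip(x, y) if xi and not yi)  # present in x only
    c = sum(1 for xi, yi in zip(x, y) if yi and not xi)  # present in y only
    denom = a + b + c
    jaccard = a / denom if denom else 0.0
    dice = 2 * a / (2 * a + b + c) if denom else 0.0
    return jaccard, dice
```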
Using Perturbed QR Factorizations To Solve Linear Least-Squares Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avron, Haim; Ng, Esmond G.; Toledo, Sivan
2008-03-21
We propose and analyze a new tool to help solve sparse linear least-squares problems min_x ||Ax - b||_2. Our method is based on a sparse QR factorization of a low-rank perturbation Â of A. More precisely, we show that the R factor of Â is an effective preconditioner for the least-squares problem min_x ||Ax - b||_2 when solved using LSQR. We propose applications for the new technique. When A is rank deficient we can add rows to ensure that the preconditioner is well-conditioned without column pivoting. When A is sparse except for a few dense rows we can drop these dense rows from A to obtain Â. Another application is solving an updated or downdated problem. If R is a good preconditioner for the original problem A, it is a good preconditioner for the updated/downdated problem Â. We can also solve what-if scenarios, where we want to find the solution if a column of the original matrix is changed/removed. We present a spectral theory that analyzes the generalized spectrum of the pencil (A*A, R*R) and analyze the applications.
Low Complexity Compression and Speed Enhancement for Optical Scanning Holography
Tsang, P. W. M.; Poon, T.-C.; Liu, J.-P.; Kim, T.; Kim, Y. S.
2016-01-01
In this paper we report a low complexity compression method that is suitable for compact optical scanning holography (OSH) systems with different optical settings. Our proposed method can be divided into two major parts. First, an automatic decision maker is applied to select the rows of holographic pixels to be scanned. This process enhances the speed of acquiring a hologram, and also lowers the data rate. Second, each row of down-sampled pixels is converted into a one-bit representation with delta modulation (DM). Existing DM-based hologram compression techniques suffer from the disadvantage that a core parameter, commonly known as the step size, has to be determined in advance. However, the correct value of the step size for compressing each row of the hologram depends on the dynamic range of the pixels, which can deviate significantly with the object scene, as well as with OSH systems with different optical settings. We have overcome this problem by incorporating a dynamic step-size adjustment scheme. The proposed method is applied in the compression of holograms that are acquired with two different OSH systems, demonstrating a compression ratio of over two orders of magnitude, while preserving favorable fidelity on the reconstructed images. PMID:27708410
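Plain delta modulation with a fixed step size, the baseline that the paper improves on with its dynamic step-size adjustment, can be sketched in a few lines (illustrative code, not the authors'):

```python
def delta_modulate(row, step):
    """One-bit delta modulation of a row of pixel values (fixed step size).

    Each output bit says whether the running approximation should step
    up (1) or down (0) to track the next sample.
    """
    bits, approx = [], 0.0
    for sample in row:
        bit = 1 if sample >= approx else 0
        bits.append(bit)
        approx += step if bit else -step
    return bits

def delta_demodulate(bits, step):
    """Rebuild the staircase approximation from the bit stream."""
    approx, out = 0.0, []
    for bit in bits:
        approx += step if bit else -step
        out.append(approx)
    return out
```

A fixed step that is too small cannot track steep pixel gradients and one that is too large adds granular noise, which is exactly why the step size must be matched to each row's dynamic range.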
Bitshuffle: Filter for improving compression of typed binary data
NASA Astrophysics Data System (ADS)
Masui, Kiyoshi
2017-12-01
Bitshuffle rearranges typed, binary data for improving compression; the algorithm is implemented in a python/C package within the Numpy framework. The library can be used alongside HDF5 to compress and decompress datasets and is integrated through the dynamically loaded filters framework. Algorithmically, Bitshuffle is closely related to HDF5's Shuffle filter except it operates at the bit level instead of the byte level. Arranging a typed data array into a matrix with the elements as the rows and the bits within the elements as the columns, Bitshuffle "transposes" the matrix, such that all the least-significant bits end up in one row, the next-significant bits in the next row, and so on. This transposition is performed within blocks of data roughly 8 kB long; this does not in itself compress data, but rearranges it for more efficient compression. A compression library is necessary to perform the actual compression. This scheme has been used for compression of radio data in high performance computing.
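A minimal numpy sketch of the bit-level transpose follows. This operates on the raw byte stream, so the bit "columns" follow memory order rather than strict numeric significance; the real library additionally blocks the data and is written in C for speed:

```python
import numpy as np

def bitshuffle(block):
    """Bit-level transpose of a 1-D array of fixed-width integers.

    Each element is expanded into its bits; transposing the resulting
    (elements x bits) matrix groups same-position bits together, which
    typically produces long runs that compress far better.
    """
    nbits = block.dtype.itemsize * 8
    bits = np.unpackbits(block.view(np.uint8)).reshape(-1, nbits)
    return np.packbits(bits.T)

def bitunshuffle(shuffled, dtype, count):
    """Invert bitshuffle for `count` elements of the given dtype."""
    nbits = np.dtype(dtype).itemsize * 8
    bits = np.unpackbits(shuffled)[: nbits * count].reshape(nbits, count)
    return np.packbits(bits.T).view(dtype)
```

The shuffled buffer is the same size as the input; only a downstream codec (e.g. LZ4 or zlib) realizes the gain.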
Designing for Compressive Sensing: Compressive Art, Camouflage, Fonts, and Quick Response Codes
2018-01-01
The report includes an example in which the signal is non-sparse in the standard basis but sparse in the discrete cosine basis: the signal from the previous example is reused as a set of sparse discrete cosine transform (DCT) coefficients, and the accompanying plots contrast the sparse DCT representation with the resulting non-sparse signal in the standard basis.
Distributed Compressive Sensing
2009-01-01
For example, smooth signals are sparse in the Fourier basis, and piecewise smooth signals are sparse in a wavelet basis; transform families including wavelets, Gabor bases, and curvelets are widely used for the representation and compression of natural signals and images, and commercial coding standards such as MP3 build on them. Signals can also be sparsely represented in frames or unions of bases, such as spikes together with the sine waves of a Fourier basis, or the Fourier basis together with wavelets.
NASA Astrophysics Data System (ADS)
Ahmed, H. O. A.; Wong, M. L. D.; Nandi, A. K.
2018-01-01
Condition classification of rolling element bearings in rotating machines is important to prevent the breakdown of industrial machinery. A considerable amount of literature has been published on bearing fault classification. These studies aim to determine automatically the current status of a rolling element bearing. Of these studies, methods based on compressed sensing (CS) have received some attention recently due to their ability to sample below the Nyquist rate. This technology has many possible uses in machine condition monitoring and has been investigated as a possible approach for fault detection and classification in the compressed domain, i.e., without reconstructing the original signal. However, previous CS based methods have been found to be too weak for highly compressed data. The present paper explores computationally, for the first time, the effects of sparse autoencoder based over-complete sparse representations on the classification performance of highly compressed measurements of bearing vibration signals. For this study, the CS method was used to produce highly compressed measurements of the original bearing dataset. Then, an effective deep neural network (DNN) with an unsupervised feature learning algorithm based on a sparse autoencoder is used for learning over-complete sparse representations of these compressed datasets. Finally, fault classification is achieved in two stages: pre-training classification based on the stacked autoencoder and a softmax regression layer forms the deep-net stage (the first stage), and re-training classification based on the backpropagation (BP) algorithm forms the fine-tuning stage (the second stage). The experimental results show that the proposed method is able to achieve high levels of accuracy even with extremely compressed measurements compared with the existing techniques.
Some aspects of adaptive transform coding of multispectral data
NASA Technical Reports Server (NTRS)
Ahmed, N.; Natarajan, T.
1977-01-01
This paper concerns a data compression study pertaining to multispectral scanner (MSS) data. The motivation for this undertaking is the need for securing data compression of images obtained in connection with the Landsat Follow-On Mission, where a compression of at least 6:1 is required. The MSS data used in this study consisted of four scenes: (1) Tristate, consisting of 256 pels per row and a total of 512 rows, i.e., (256x512); (2) Sacramento (256x512); (3) Portland (256x512); and (4) Bald Knob (200x256). All these scenes were on digital tape at 6 bits/pel. The corresponding reconstructed scenes at 1 bit/pel (i.e., a 6:1 compression) are included.
Efficient Sparse Signal Transmission over a Lossy Link Using Compressive Sensing
Wu, Liantao; Yu, Kai; Cao, Dongyu; Hu, Yuhen; Wang, Zhi
2015-01-01
Reliable data transmission over lossy communication link is expensive due to overheads for error protection. For signals that have inherent sparse structures, compressive sensing (CS) is applied to facilitate efficient sparse signal transmissions over lossy communication links without data compression or error protection. The natural packet loss in the lossy link is modeled as a random sampling process of the transmitted data, and the original signal will be reconstructed from the lossy transmission results using the CS-based reconstruction method at the receiving end. The impacts of packet lengths on transmission efficiency under different channel conditions have been discussed, and interleaving is incorporated to mitigate the impact of burst data loss. Extensive simulations and experiments have been conducted and compared to the traditional automatic repeat request (ARQ) interpolation technique, and very favorable results have been observed in terms of both accuracy of the reconstructed signals and the transmission energy consumption. Furthermore, the packet length effect provides useful insights for using compressed sensing for efficient sparse signal transmission via lossy links. PMID:26287195
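The interleaving step used to mitigate burst loss can be sketched as a block interleaver (an illustrative implementation, not the authors' code):

```python
def interleave(samples, depth):
    """Block interleaver: write samples row-wise into a depth-wide matrix
    and read it out column-wise. A burst of consecutive packet losses in
    the interleaved stream then maps to isolated, widely spaced losses
    after de-interleaving, which random-sampling CS recovery tolerates well.
    """
    rows = [samples[i:i + depth] for i in range(0, len(samples), depth)]
    return [row[j] for j in range(depth) for row in rows if j < len(row)]

def deinterleave(samples, depth):
    """Invert `interleave` when len(samples) is a multiple of depth."""
    n_rows = len(samples) // depth
    cols = [samples[i:i + n_rows] for i in range(0, len(samples), n_rows)]
    return [col[j] for j in range(n_rows) for col in cols]
```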
NASA Astrophysics Data System (ADS)
Vishnukumar, S.; Wilscy, M.
2017-12-01
In this paper, we propose a single image Super-Resolution (SR) method based on Compressive Sensing (CS) and Improved Total Variation (TV) Minimization Sparse Recovery. In the CS framework, the low-resolution (LR) image is treated as the compressed version of the high-resolution (HR) image. Dictionary training and sparse recovery are the two phases of the method. The K-Singular Value Decomposition (K-SVD) method is used for dictionary training, and the dictionary represents HR image patches in a sparse manner. Here, only the interpolated version of the LR image is used for training purposes, thereby exploiting the structural self-similarity inherent in the LR image. In the sparse recovery phase, the sparse representation coefficients of the LR image patches with respect to the trained dictionary are derived using the Improved TV Minimization method. The HR image can be reconstructed as the linear combination of the dictionary and the sparse coefficients. The experimental results show that the proposed method gives better results quantitatively as well as qualitatively on both natural and remote sensing images. The reconstructed images have better visual quality since edges and other sharp details are preserved.
Compressed sensing for high-resolution nonlipid suppressed 1 H FID MRSI of the human brain at 9.4T.
Nassirpour, Sahar; Chang, Paul; Avdievitch, Nikolai; Henning, Anke
2018-04-29
The aim of this study was to apply compressed sensing to accelerate the acquisition of high resolution metabolite maps of the human brain using a nonlipid suppressed ultra-short TR and TE 1 H FID MRSI sequence at 9.4T. X-t sparse compressed sensing reconstruction was optimized for nonlipid suppressed 1 H FID MRSI data. Coil-by-coil x-t sparse reconstruction was compared with SENSE x-t sparse and low-rank reconstruction. The effect of matrix size and spatial resolution on the achievable acceleration factor was studied. Finally, in vivo metabolite maps with acceleration factors of 2, 4, 5, and 10 were acquired and compared. Coil-by-coil x-t sparse compressed sensing reconstruction was not able to reliably recover the nonlipid suppressed data; rather, a combination of parallel and sparse reconstruction was necessary (SENSE x-t sparse). For acceleration factors of up to 5, both the low-rank and the compressed sensing methods were able to reconstruct the data comparably well (root mean squared errors [RMSEs] ≤ 10.5% for Cre). However, the reconstruction time of the low-rank algorithm was drastically longer than that of compressed sensing. Using the optimized compressed sensing reconstruction, acceleration factors of 4 or 5 could be reached for the MRSI data with a matrix size of 64 × 64. For lower spatial resolutions, an acceleration factor of up to R∼4 was successfully achieved. By tailoring the reconstruction scheme to the nonlipid suppressed data through parameter optimization and performance evaluation, we present high resolution (97 µL voxel size) accelerated in vivo metabolite maps of the human brain acquired at 9.4T within scan times of 3 to 3.75 min. © 2018 International Society for Magnetic Resonance in Medicine.
Blind compressed sensing image reconstruction based on alternating direction method
NASA Astrophysics Data System (ADS)
Liu, Qinan; Guo, Shuxu
2018-04-01
In order to solve the problem of how to reconstruct the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on the existing blind compressed sensing theory, the optimal solution is obtained by an alternating minimization method. The proposed method addresses the difficulty of representing the sparse basis in compressed sensing, suppresses noise, and improves the quality of the reconstructed image. The method ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with stronger self-adaptability. The experimental results show that the image reconstruction algorithm based on blind compressed sensing proposed in this paper can recover high quality image signals under the condition of under-sampling.
Smith, Geoffrey C S; Lam, Patrick H
2018-06-20
The self-reinforcement mechanism after double row suturebridge rotator cuff repair generates increasing compressive forces at the tendon footprint with increasing tendon load. Passive range of motion is usually allowed after rotator cuff repair, but the mechanism of self-reinforcement could be adversely affected by shoulder abduction. Rotator cuff tears were created ex vivo in nine pairs of ovine shoulders, and two different repair techniques were used. One group was repaired using a double row suturebridge construct with tied horizontal medial row mattress sutures (knotted repair group). The other group was repaired identically except that the medial row knots were not tied (knotless repair group). Footprint compression was measured at varying angles of abduction under tendon loads of 0, 10, 20, 30, 40, 50 and 60 N. The rate of increase of contact pressure (degree of self-reinforcement) was calculated for each abduction angle. Abduction diminishes footprint contact pressure in both knotted and knotless double row suturebridge constructs. Progressive abduction from 0 to 40 degrees in the knotless group and from 0 to 30 degrees in the knotted group results in a decrease in self-reinforcement; abduction beyond this does not cause a further decrease. There was no difference in the rate of increase of footprint contact pressure at each angle of abduction when comparing the knotted and knotless groups. In the post-operative period, high tendon load combined with minimal abduction would be expected to generate the greatest footprint compression, which may improve tendon healing. Therefore, to maximize footprint compression, the use of abduction pillows should be avoided while early isometric strengthening should be used.
Robust Methods for Sensing and Reconstructing Sparse Signals
ERIC Educational Resources Information Center
Carrillo, Rafael E.
2012-01-01
Compressed sensing (CS) is an emerging signal acquisition framework that goes against the traditional Nyquist sampling paradigm. CS demonstrates that a sparse, or compressible, signal can be acquired using a low rate acquisition process. Since noise is always present in practical data acquisition systems, sensing and reconstruction methods are…
Liao, Ke; Zhu, Min; Ding, Lei
2013-08-01
The present study investigated the use of transform sparseness of cortical current density on the human brain surface to improve electroencephalography/magnetoencephalography (EEG/MEG) inverse solutions. Transform sparseness was assessed by evaluating the compressibility of cortical current densities in transform domains. To do that, a structure compression method from computer graphics was first adopted to compress cortical surface structure, either regular or irregular, into hierarchical multi-resolution meshes. Then, a new face-based wavelet method based on the generated multi-resolution meshes was proposed to compress current density functions defined on cortical surfaces. Twelve cortical surface models were built by three EEG/MEG software packages, and their structural compressibility was evaluated and compared by the proposed method. Monte Carlo simulations were implemented to evaluate the performance of the proposed wavelet method in compressing various cortical current density distributions, compared to two other available vertex-based wavelet methods. The present results indicate that the face-based wavelet method can achieve higher transform sparseness than vertex-based wavelet methods. Furthermore, basis functions from the face-based wavelet method have lower coherence against typical EEG and MEG measurement systems than vertex-based wavelet methods. Both high transform sparseness and low-coherence measurements suggest that the proposed face-based wavelet method can improve the performance of L1-norm regularized EEG/MEG inverse solutions, which was further demonstrated in simulations and experimental setups using MEG data. Thus, this new transform on complicated cortical structure is promising to significantly advance EEG/MEG inverse source imaging technologies.
NASA Astrophysics Data System (ADS)
Li, Xianye; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi
2017-09-01
A multiple-image encryption method is proposed that is based on row scanning compressive ghost imaging, (t, n) threshold secret sharing, and phase retrieval in the Fresnel domain. In the encryption process, after wavelet transform and Arnold transform of the target image, the ciphertext matrix can be first detected using a bucket detector. Based on a (t, n) threshold secret sharing algorithm, the measurement key used in the row scanning compressive ghost imaging can be decomposed and shared into two pairs of sub-keys, which are then reconstructed using two phase-only mask (POM) keys with fixed pixel values, placed in the input plane and transform plane 2 of the phase retrieval scheme, respectively; and the other POM key in the transform plane 1 can be generated and updated by the iterative encoding of each plaintext image. In each iteration, the target image acts as the input amplitude constraint in the input plane. During decryption, each plaintext image possessing all the correct keys can be successfully decrypted by measurement key regeneration, compression algorithm reconstruction, inverse wavelet transformation, and Fresnel transformation. Theoretical analysis and numerical simulations both verify the feasibility of the proposed method.
NASA Astrophysics Data System (ADS)
Orović, Irena; Stanković, Srdjan; Amin, Moeness
2013-05-01
A modified robust two-dimensional compressive sensing algorithm for reconstruction of sparse time-frequency representation (TFR) is proposed. The ambiguity function domain is assumed to be the domain of observations. The two-dimensional Fourier bases are used to linearly relate the observations to the sparse TFR, in lieu of the Wigner distribution. We assume that a set of available samples in the ambiguity domain is heavily corrupted by an impulsive type of noise. Consequently, the problem of sparse TFR reconstruction cannot be tackled using standard compressive sensing optimization algorithms. We introduce a two-dimensional L-statistics based modification into the transform domain representation. It provides suitable initial conditions that will produce efficient convergence of the reconstruction algorithm. This approach applies sorting and weighting operations to discard an expected amount of samples corrupted by noise. The remaining samples serve as observations used in sparse reconstruction of the time-frequency signal representation. The efficiency of the proposed approach is demonstrated on numerical examples that comprise both cases of monocomponent and multicomponent signals.
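The sorting-and-discarding step of the L-statistics modification can be sketched as follows, assuming (illustratively, not from the paper) that a fixed fraction alpha of the largest-magnitude samples is discarded as impulse-noise suspects:

```python
import numpy as np

def l_trim_indices(samples, alpha):
    """Sort observations by magnitude and keep the smallest (1 - alpha)
    fraction. Under an impulsive noise model, the large-magnitude samples
    are the most likely to be corrupted, so they are discarded and the
    survivors serve as observations for sparse reconstruction.
    """
    order = np.argsort(np.abs(samples))
    n_keep = int(len(samples) * (1 - alpha))
    return np.sort(order[:n_keep])
```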
Bayesian sparse channel estimation
NASA Astrophysics Data System (ADS)
Chen, Chulong; Zoltowski, Michael D.
2012-05-01
In Orthogonal Frequency Division Multiplexing (OFDM) systems, the technique used to estimate and track the time-varying multipath channel is critical to ensure reliable, high data rate communications. It is recognized that wireless channels often exhibit a sparse structure, especially for wideband and ultra-wideband systems. In order to exploit this sparse structure to reduce the number of pilot tones and increase the channel estimation quality, the application of compressed sensing to channel estimation is proposed. In this article, to make the compressed channel estimation more feasible for practical applications, it is investigated from a perspective of Bayesian learning. Under the Bayesian learning framework, the large-scale compressed sensing problem, as well as large time delay for the estimation of the doubly selective channel over multiple consecutive OFDM symbols, can be avoided. Simulation studies show a significant improvement in channel estimation MSE and less computing time compared to the conventional compressed channel estimation techniques.
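As a point of comparison for the compressed channel estimation discussed here, a conventional greedy sparse recovery routine such as Orthogonal Matching Pursuit (not the article's Bayesian learning method) looks like this:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = A x.

    In channel estimation, A would hold pilot-tone responses and x the
    sparse tap vector; the Bayesian approach in the article replaces the
    fixed sparsity level k with learned priors.
    """
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit all selected columns by least squares, then update residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```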
Fpack and Funpack Utilities for FITS Image Compression and Uncompression
NASA Technical Reports Server (NTRS)
Pence, W.
2008-01-01
Fpack is a utility program for optimally compressing images in the FITS (Flexible Image Transport System) data format (see http://fits.gsfc.nasa.gov). The associated funpack program restores the compressed image file back to its original state (as long as a lossless compression algorithm is used). These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs except that they are optimized for FITS format images and offer a wider choice of compression algorithms. Fpack stores the compressed image using the FITS tiled image compression convention (see http://fits.gsfc.nasa.gov/fits_registry.html). Under this convention, the image is first divided into a user-configurable grid of rectangular tiles, and then each tile is individually compressed and stored in a variable-length array column in a FITS binary table. By default, fpack usually adopts a row-by-row tiling pattern. The FITS image header keywords remain uncompressed for fast access by FITS reading and writing software. The tiled image compression convention can in principle support any number of different compression algorithms. The fpack and funpack utilities call on routines in the CFITSIO library (http://heasarc.gsfc.nasa.gov/fitsio) to perform the actual compression and uncompression of the FITS images, which currently supports the GZIP, Rice, H-compress, and PLIO IRAF pixel list compression algorithms.
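The tiled compression convention can be sketched with row-by-row tiles, using zlib as a stand-in for the GZIP/Rice/H-compress codecs (an illustration of the idea, not CFITSIO's implementation):

```python
import zlib

def tile_compress(image_rows):
    """Tile an image row-by-row and compress each tile independently,
    so any single row can later be restored without touching the others."""
    return [zlib.compress(bytes(row)) for row in image_rows]

def tile_decompress(tiles):
    """Restore every row from its independently compressed tile."""
    return [list(zlib.decompress(tile)) for tile in tiles]
```

Per-tile independence is the point of the convention: readers can seek to and decompress only the image region they need.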
Curvelet-based compressive sensing for InSAR raw data
NASA Astrophysics Data System (ADS)
Costa, Marcello G.; da Silva Pinho, Marcelo; Fernandes, David
2015-10-01
The aim of this work is to evaluate the compression performance of SAR raw data for interferometry applications, collected by airborne platform from BRADAR (the Brazilian SAR system operating in X and P bands), using a new approach based on compressive sensing (CS) to achieve effective recovery with good phase preservation. For this framework a real-time capability is desirable, where the collected data can be compressed to reduce onboard storage and the bandwidth required for transmission. In CS theory, a sparse unknown signal can be recovered from a small number of random or pseudo-random measurements by sparsity-promoting nonlinear recovery algorithms; the volume of the original signal can therefore be significantly reduced. To achieve a sparse representation of the SAR signal, a curvelet transform was applied. The curvelets constitute a directional frame, which allows an optimal sparse representation of objects with discontinuities along smooth curves, as observed in raw data, and provides advanced denoising optimization. For the tests, a scene of 8192 x 2048 samples in range and azimuth was made available in X-band with 2 m resolution. The sparse representation was compressed using low-dimension measurement matrices in each curvelet subband. Then, an iterative CS reconstruction method based on IST (iterative soft/shrinkage thresholding) was adjusted to recover the curvelet coefficients and hence the original signal. To evaluate the compression performance, the compression ratio (CR) and signal-to-noise ratio (SNR) were computed; because interferometry applications require greater reconstruction accuracy, phase parameters such as the standard deviation of the phase (PSD) and the mean phase error (MPE) were also computed. Moreover, in the image domain, a single-look complex image was generated to evaluate the compression effects.
All results were evaluated in terms of sparsity analysis, demonstrating efficient compression and recovery quality appropriate for InSAR applications, and hence the feasibility of applying compressive sensing.
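The IST reconstruction step referred to above is, in its generic form, iterative soft thresholding for an L1-regularized least-squares problem; a minimal sketch (not the adjusted solver used in the paper) is:

```python
import numpy as np

def ista(A, y, lam, n_iter=200):
    """Iterative soft-thresholding for min 0.5*||A x - y||^2 + lam*||x||_1.

    In the paper's setting, A would combine the measurement matrix with
    the curvelet synthesis operator and x the curvelet coefficients.
    """
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L    # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x
```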
Zhang, Zhilin; Jung, Tzyy-Ping; Makeig, Scott; Rao, Bhaskar D
2013-02-01
Fetal ECG (FECG) telemonitoring is an important branch in telemedicine. The design of a telemonitoring system via a wireless body area network with low energy consumption for ambulatory use is highly desirable. As an emerging technique, compressed sensing (CS) shows great promise in compressing/reconstructing data with low energy consumption. However, due to some specific characteristics of raw FECG recordings such as nonsparsity and strong noise contamination, current CS algorithms generally fail in this application. This paper proposes to use the block sparse Bayesian learning framework to compress/reconstruct nonsparse raw FECG recordings. Experimental results show that the framework can reconstruct the raw recordings with high quality. In particular, the reconstruction does not destroy the interdependence relation among the multichannel recordings, which ensures that the independent component analysis decomposition of the reconstructed recordings has high fidelity. Furthermore, the framework allows the use of a sparse binary sensing matrix with far fewer nonzero entries to compress recordings; each column of the matrix can contain as few as two nonzero entries. This shows that the framework, compared to other algorithms such as current CS algorithms and wavelet algorithms, can greatly reduce CPU code execution in the data compression stage.
Knopman, Debra S.; Voss, Clifford I.; Garabedian, Stephen P.
1991-01-01
Tests of a one-dimensional sampling design methodology on measurements of bromide concentration collected during the natural gradient tracer test conducted by the U.S. Geological Survey on Cape Cod, Massachusetts, demonstrate its efficacy for field studies of solute transport in groundwater and the utility of one-dimensional analysis. The methodology was applied to design of sparse two-dimensional networks of fully screened wells typical of those often used in engineering practice. In one-dimensional analysis, designs consist of the downstream distances to rows of wells oriented perpendicular to the groundwater flow direction and the timing of sampling to be carried out on each row. The power of a sampling design is measured by its effectiveness in simultaneously meeting objectives of model discrimination, parameter estimation, and cost minimization. One-dimensional models of solute transport, differing in processes affecting the solute and assumptions about the structure of the flow field, were considered for description of tracer cloud migration. When fitting each model using nonlinear regression, additive and multiplicative error forms were allowed for the residuals which consist of both random and model errors. The one-dimensional single-layer model of a nonreactive solute with multiplicative error was judged to be the best of those tested. Results show the efficacy of the methodology in designing sparse but powerful sampling networks. Designs that sample five rows of wells at five or fewer times in any given row performed as well for model discrimination as the full set of samples taken up to eight times in a given row from as many as 89 rows. Also, designs for parameter estimation judged to be good by the methodology were as effective in reducing the variance of parameter estimates as arbitrary designs with many more samples. 
Results further showed that estimates of velocity and longitudinal dispersivity in one-dimensional models based on data from only five rows of fully screened wells each sampled five or fewer times were practically equivalent to values determined from moments analysis of the complete three-dimensional set of 29,285 samples taken during 16 sampling times.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Xiaodong; Xia, Yidong; Luo, Hong
2016-10-05
A comparative study of two classes of third-order implicit time integration schemes is presented for a third-order hierarchical WENO reconstructed discontinuous Galerkin (rDG) method to solve the 3D unsteady compressible Navier-Stokes equations: 1) the explicit first stage, single diagonally implicit Runge-Kutta (ESDIRK3) scheme, and 2) the Rosenbrock-Wanner (ROW) schemes based on differential algebraic equations (DAEs) of index 2. Compared with the ESDIRK3 scheme, a remarkable feature of the ROW schemes is that they require only one approximate Jacobian matrix calculation per time step, considerably reducing the overall computational cost. A variety of test cases, ranging from inviscid flows to DNS of turbulent flows, are presented to assess the performance of these schemes. The numerical experiments demonstrate that the third-order ROW scheme for the DAEs of index 2 not only achieves the designed formal order of temporal convergence accuracy in a benchmark test, but also requires significantly less computing time than its ESDIRK3 counterpart to converge to the same level of discretization errors in all of the flow simulations in this study, indicating that the ROW methods provide an attractive alternative for the higher-order time-accurate integration of the unsteady compressible Navier-Stokes equations.
GPU-accelerated Modeling and Element-free Reverse-time Migration with Gauss Points Partition
NASA Astrophysics Data System (ADS)
Zhen, Z.; Jia, X.
2014-12-01
Element-free method (EFM) has been applied to seismic modeling and migration. Compared with the finite element method (FEM) and the finite difference method (FDM), it is much cheaper and more flexible because only the information on the nodes and the boundary of the study area is required in computation. In the EFM, the number of Gauss points should be consistent with the number of model nodes; otherwise the accuracy of the intermediate coefficient matrices would be harmed. Thus, when we increase the nodes of the velocity model in order to obtain higher resolution, the size of the computer's memory becomes a bottleneck. The original EFM can deal with at most 81×81 nodes in the case of 2 GB of memory, as tested by Jia and Hu (2006). In order to solve the problems of storage and computational efficiency, we propose a concept of Gauss points partition (GPP) and utilize GPUs to improve the computational efficiency. Considering the characteristics of the Gauss points, the GPP method does not influence the propagation of seismic waves in the velocity model. To overcome the time-consuming computation of the stiffness matrix (K) and the mass matrix (M), we also use GPUs in our computation program. We employ the compressed sparse row (CSR) format to compress the intermediate sparse matrices and simplify the operations by solving the linear equations with the CULA Sparse Conjugate Gradient (CG) solver instead of the linear sparse solver PARDISO. It is observed that our strategy can significantly reduce the computational time of K and M compared with the CPU-based algorithm. The model tested is the Marmousi model. The length of the model is 7425 m and the depth is 2990 m. We discretize the model with 595×298 nodes, 300×300 Gauss cells, and 3×3 Gauss points in each cell. In contrast to the computational time of the conventional EFM, the GPUs-GPP approach can substantially improve the efficiency.
The speedup ratio for computing K and M is 120, and the speedup ratio for the RTM computation is 11.5. At the same time, the accuracy of imaging is not harmed. Another advantage of the GPUs-GPP method is that it is easily applied to other numerical methods such as the FEM. Finally, in the GPUs-GPP method, the arrays require quite limited memory storage, which makes the method promising for large-scale 3D problems.
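The compressed sparse row (CSR) format used above stores a sparse matrix as three flat arrays: the nonzero values, their column indices, and row pointers marking where each row starts. A minimal sketch with a toy matrix (not the paper's stiffness or mass matrices):

```python
import numpy as np

def to_csr(dense):
    """Convert a dense 2D array to CSR arrays (values, col_idx, row_ptr)."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))   # cumulative nonzero count closes each row
    return values, col_idx, row_ptr

def csr_matvec(values, col_idx, row_ptr, x):
    """y = M @ x using only the CSR arrays."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

M = np.array([[4.0, 0.0, 0.0],
              [0.0, 0.0, 5.0],
              [1.0, 2.0, 0.0]])
vals, cols, ptrs = to_csr(M)
```

Libraries such as `scipy.sparse.csr_matrix` provide the same layout with optimized kernels; the point here is only the three-array structure.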
Compressive sampling by artificial neural networks for video
NASA Astrophysics Data System (ADS)
Szu, Harold; Hsu, Charles; Jenkins, Jeffrey; Reinhardt, Kitt
2011-06-01
We describe a smart surveillance strategy for handling novelty changes. Current sensors tend to keep all data, redundant or not. The Human Visual System's Hubel-Wiesel (wavelet) edge detection mechanism pays attention to changes in movement, which naturally produce organized sparseness because a stagnant edge is not reported to the brain's visual cortex by retinal neurons. Sparseness is defined as an ordered set of ones (movement or not) relative to zeros that can be pseudo-orthogonal among themselves, and thus suited for fault-tolerant storage and retrieval by means of Associative Memory (AM). The firing is sparse at the change locations. Unlike the purely random sparse masks adopted in medical Compressive Sensing, these organized ones have the additional benefit of using the image changes to make retrievable graphical indexes. We coined this organized sparseness Compressive Sampling: sensing, but skipping over redundancy without altering the original image. We illustrate with video the survival tactics that animals roaming the Earth use daily: they acquire nothing but the space-time changes that are important to satisfy specific prey-predator relationships. We have noticed a similarity between mathematical Compressive Sensing and this biological mechanism used for survival. We have designed a hardware implementation of the Human Visual System's Compressive Sampling scheme. To speed up further, our mixed-signal circuit design of frame differencing is built into on-chip processing hardware. A CMOS trans-conductance amplifier is designed here to generate a linear current output using a pair of differential input voltages from two photon detectors for change detection, one for the previous value and the other for the subsequent value ("write" synaptic weight by Hebbian outer products; "read" by inner product and point nonlinear threshold), to localize and track the threat targets.
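The frame-differencing idea behind the abstract's "organized sparseness" can be sketched in software: only pixels that changed between consecutive frames are flagged, so the resulting mask is sparse wherever the scene is stagnant. The tiny frames and threshold below are illustrative assumptions, not the authors' hardware design:

```python
import numpy as np

# Two consecutive frames; a single "moving edge" appears at one pixel.
prev = np.zeros((4, 4))
curr = prev.copy()
curr[1, 2] = 1.0

# Frame differencing: report only the locations that changed.
change_mask = np.abs(curr - prev) > 0.5
```

The mask is sparse (one nonzero here), mimicking the retinal mechanism in which an unchanged edge generates no firing.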
A tight and explicit representation of Q in sparse QR factorization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, E.G.; Peyton, B.W.
1992-05-01
In QR factorization of a sparse m × n matrix A (m ≥ n), the orthogonal factor Q is often stored implicitly as a lower trapezoidal matrix H known as the Householder matrix. This paper presents a simple characterization of the row structure of Q, which could be used as the basis for a sparse data structure that can store Q explicitly. The new characterization is a simple extension of a well-known row-oriented characterization of the structure of H. Hare, Johnson, Olesky, and van den Driessche have recently provided a complete sparsity analysis of the QR factorization. Let U be the matrix consisting of the first n columns of Q. Using these results, we show that the data structures for H and U resulting from our characterizations are tight when A is a strong Hall matrix. We also show that H and the lower trapezoidal part of U have the same sparsity characterization when A is strong Hall. We then show that this characterization can be extended to any weak Hall matrix that has been permuted into block upper triangular form. Finally, we show that permuting to block triangular form never increases the fill incurred during the factorization.
Compressive hyperspectral and multispectral imaging fusion
NASA Astrophysics Data System (ADS)
Espitia, Óscar; Castillo, Sergio; Arguello, Henry
2016-05-01
Image fusion is a valuable framework that combines two or more images of the same scene from one or multiple sensors, improving the resolution of the images and increasing the interpretable content. In remote sensing, a common fusion problem consists of merging hyperspectral (HS) and multispectral (MS) images that involve large amounts of redundant data, ignoring the highly correlated structure of the datacube along the spatial and spectral dimensions. Compressive HS and MS systems compress the spectral data in the acquisition step, reducing the data redundancy by using different sampling patterns. This work presents a compressed HS and MS image fusion approach that uses a high-dimensional joint sparse model. The joint sparse model is formulated by combining the HS and MS compressive acquisition models. The high spectral and spatial resolution image is reconstructed by using sparse optimization algorithms. Different fusion spectral image scenarios are used to explore the performance of the proposed scheme. Several simulations with synthetic and real datacubes show promising results, as a reliable reconstruction of a high spectral and spatial resolution image can be achieved by using as little as 50% of the datacube.
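The joint sparse model can be sketched as a stacked linear system: the HS and MS acquisition operators are concatenated so a single sparse code explains both measurement sets. The names, sizes, and random operators below are illustrative assumptions standing in for the actual compressive acquisition models:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 64                                   # length of the joint sparse code (toy size)
H_hs = rng.standard_normal((20, n))      # stand-in HS compressive acquisition operator
H_ms = rng.standard_normal((30, n))      # stand-in MS compressive acquisition operator

theta = np.zeros(n)                      # joint sparse code shared by both systems
theta[[2, 17, 40]] = [1.0, -2.0, 0.5]

A = np.vstack([H_hs, H_ms])              # combined acquisition model
y = A @ theta                            # stacked HS + MS measurements
```

Reconstruction would then solve a sparse optimization problem (e.g., l1-regularized least squares) over `theta` using the stacked `A` and `y`.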
High-dimensional statistical inference: From vector to matrix
NASA Astrophysics Data System (ADS)
Zhang, Anru
Statistical inference for sparse signals or low-rank matrices in high-dimensional settings is of significant interest in a range of contemporary applications and has attracted considerable recent attention in many fields, including statistics, applied mathematics, and electrical engineering. In this thesis, we consider several problems, including sparse signal recovery (compressed sensing under restricted isometry) and low-rank matrix recovery (matrix recovery via rank-one projections and structured matrix completion). The first part of the thesis discusses compressed sensing and affine rank minimization in both the noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool that represents points in a polytope by convex combinations of sparse vectors. The technique is elementary yet leads to sharp results. It is shown that, in compressed sensing, δ_k^A < 1/3, δ_k^A + θ_{k,k}^A < 1, or δ_{tk}^A < √((t−1)/t) for any given constant t ≥ 4/3 guarantees the exact recovery of all k-sparse signals in the noiseless case through constrained ℓ1 minimization; similarly, in affine rank minimization, δ_r^M < 1/3, δ_r^M + θ_{r,r}^M < 1, or δ_{tr}^M < √((t−1)/t) ensures the exact reconstruction of all matrices with rank at most r in the noiseless case via constrained nuclear norm minimization. Moreover, for any ε > 0, δ_k^A < 1/3 + ε, δ_k^A + θ_{k,k}^A < 1 + ε, or δ_{tk}^A < √((t−1)/t) + ε is not sufficient to guarantee the exact recovery of all k-sparse signals for large k. A similar result also holds for matrix recovery. In addition, the conditions δ_k^A < 1/3, δ_k^A + θ_{k,k}^A < 1, δ_{tk}^A < √((t−1)/t) and δ_r^M < 1/3, δ_r^M + θ_{r,r}^M < 1, δ_{tr}^M < √((t−1)/t) are also shown to be sufficient, respectively, for stable recovery of approximately sparse signals and low-rank matrices in the noisy case.
For the second part of the thesis, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained. The proposed estimator is shown to be rate-optimal under certain conditions. The estimator is easy to implement via convex programming and performs well numerically. The techniques and main results developed in the chapter also have implications for other related statistical problems. An application to estimation of spiked covariance matrices from one-dimensional random projections is considered. The results demonstrate that it is still possible to accurately estimate the covariance matrix of a high-dimensional distribution based only on one-dimensional projections. For the third part of the thesis, we consider another setting of low-rank matrix completion. The current literature on matrix completion focuses primarily on independent sampling models under which the individual observed entries are sampled independently. Motivated by applications in genomic data integration, we propose a new framework of structured matrix completion (SMC) to treat structured missingness by design. Specifically, our proposed method aims at efficient matrix recovery when a subset of the rows and columns of an approximately low-rank matrix are observed. We provide theoretical justification for the proposed SMC method and derive lower bounds for the estimation errors, which together establish the optimal rate of recovery over certain classes of approximately low-rank matrices. Simulation studies show that the method performs well in finite samples under a variety of configurations.
The method is applied to integrate several ovarian cancer genomic studies with different extent of genomic measurements, which enables us to construct more accurate prediction rules for ovarian cancer survival.
NASA Astrophysics Data System (ADS)
Noh, Hae Young; Kiremidjian, Anne S.
2011-04-01
This paper introduces a data compression method using the K-SVD algorithm and its application to experimental ambient vibration data for structural health monitoring purposes. Because many damage diagnosis algorithms that use system identification require vibration measurements of multiple locations, it is necessary to transmit long threads of data. In wireless sensor networks for structural health monitoring, however, data transmission is often a major source of battery consumption. Therefore, reducing the amount of data to transmit can significantly lengthen the battery life and reduce maintenance cost. The K-SVD algorithm was originally developed in information theory for sparse signal representation. This algorithm creates an optimal over-complete set of bases, referred to as a dictionary, using singular value decomposition (SVD) and represents the data as sparse linear combinations of these bases using the orthogonal matching pursuit (OMP) algorithm. Since ambient vibration data are stationary, we can segment them and represent each segment sparsely. Then only the dictionary and the sparse vectors of the coefficients need to be transmitted wirelessly for restoration of the original data. We applied this method to ambient vibration data measured from a four-story steel moment resisting frame. The results show that the method can compress the data efficiently and restore the data with very little error.
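The orthogonal matching pursuit (OMP) step used above to compute the sparse coefficient vectors can be sketched as follows. This is an illustrative sketch, not the authors' pipeline: K-SVD learns an over-complete dictionary, but for clarity the demo uses an orthonormal dictionary, where OMP recovery of an exactly sparse segment is exact.

```python
import numpy as np

def omp(D, s, k):
    """Orthogonal matching pursuit: greedily select k dictionary atoms
    (columns of D) and least-squares fit s on the selected support."""
    residual, support = s.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))   # best-matching atom
        coef, *_ = np.linalg.lstsq(D[:, support], s, rcond=None)  # refit on support
        residual = s - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Demo: an orthonormal dictionary and an exactly 2-sparse "data segment".
rng = np.random.default_rng(2)
D, _ = np.linalg.qr(rng.standard_normal((64, 64)))
x_true = np.zeros(64)
x_true[[3, 40]] = [1.5, -2.0]
s = D @ x_true
x_hat = omp(D, s, k=2)
```

Only the dictionary and the few `(index, coefficient)` pairs per segment need to be transmitted, which is the source of the compression the abstract describes.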
High-performance sparse matrix-matrix products on Intel KNL and multicore architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagasaka, Y; Matsuoka, S; Azad, A
Sparse matrix-matrix multiplication (SpGEMM) is a computational primitive that is widely used in areas ranging from traditional numerical applications to recent big data analysis and machine learning. Although many SpGEMM algorithms have been proposed, hardware-specific optimizations for multi- and many-core processors are lacking, and a detailed analysis of their performance under various use cases and matrices is not available. We first identify and mitigate multiple bottlenecks with memory management and thread scheduling on Intel Xeon Phi (Knights Landing or KNL). Specifically targeting multi- and many-core processors, we develop a hash-table-based algorithm and optimize a heap-based shared-memory SpGEMM algorithm. We examine their performance together with other publicly available codes. Unlike the existing literature, our evaluation also includes use cases representative of real graph algorithms, such as multi-source breadth-first search and triangle counting. Our hash-table- and heap-based algorithms show significant speedups over existing libraries in the majority of cases, while different algorithms dominate the other scenarios depending on matrix size, sparsity, compression factor, and operation type. We distill these in-depth evaluation results into a recipe for choosing the best SpGEMM algorithm for a target scenario. A critical finding is that hash-table-based SpGEMM gets a significant performance boost if the nonzeros are not required to be sorted within each row of the output matrix.
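The core idea of hash-table-based SpGEMM can be sketched row by row: for C = A @ B, each output row is built by accumulating scaled rows of B into a hash map keyed by column index, so nonzeros need not be kept sorted. The dict-of-rows representation below is a stand-in for a real CSR layout:

```python
def spgemm_row(A_rows, B_rows):
    """Row-wise SpGEMM with a hash-map accumulator.
    A_rows / B_rows: one {col: value} dict per row of the sparse matrix."""
    C_rows = []
    for a_row in A_rows:
        acc = {}                                    # hash accumulator for one C row
        for k, a_val in a_row.items():              # nonzeros a_ik in this A row
            for j, b_val in B_rows[k].items():      # nonzeros b_kj in B's row k
                acc[j] = acc.get(j, 0.0) + a_val * b_val
        C_rows.append(acc)
    return C_rows

# 2x2 example: A = [[1, 2], [0, 3]], B = [[0, 4], [5, 0]]
A_rows = [{0: 1.0, 1: 2.0}, {1: 3.0}]
B_rows = [{1: 4.0}, {0: 5.0}]
C_rows = spgemm_row(A_rows, B_rows)
```

Because the accumulator is a hash table, column indices come out unsorted, which is exactly the situation in which the abstract reports the performance boost.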
Compressive sensing for single-shot two-dimensional coherent spectroscopy
NASA Astrophysics Data System (ADS)
Harel, E.; Spencer, A.; Spokoyny, B.
2017-02-01
In this work, we explore the use of compressive sensing for the rapid acquisition of two-dimensional optical spectra that encode the electronic structure and ultrafast dynamics of condensed-phase molecular species. Specifically, we have developed a means to combine multiplexed single-element detection with single-shot, phase-resolved two-dimensional coherent spectroscopy. The method described, which we call Single Point Array Reconstruction by Spatial Encoding (SPARSE), eliminates the need for costly array detectors while speeding up acquisition by several orders of magnitude compared to scanning methods. Physical implementation of SPARSE is facilitated by combining spatiotemporal encoding of the nonlinear optical response and signal modulation by a high-speed digital micromirror device. We demonstrate the approach by investigating a well-characterized cyanine molecule and a photosynthetic pigment-protein complex. Hadamard and compressive sensing algorithms are demonstrated, with the latter achieving compression factors as high as ten. Both show good agreement with directly detected spectra. We envision a myriad of applications in nonlinear spectroscopy using SPARSE with broadband femtosecond light sources in so-far unexplored regions of the electromagnetic spectrum.
Sparse-View Ultrasound Diffraction Tomography Using Compressed Sensing with Nonuniform FFT
2014-01-01
Accurate reconstruction of the object from sparse-view sampling data is a challenging problem for ultrasound diffraction tomography (UDT). In this paper, we present a reconstruction method based on the compressed sensing framework for sparse-view UDT. Due to the piecewise-uniform characteristics of anatomical structures, the total variation is introduced into the cost function to find a more faithful sparse representation of the object. The inverse problem of UDT is solved iteratively by conjugate gradient with the nonuniform fast Fourier transform. Simulation results show the effectiveness of the proposed method: the main characteristics of the object can be properly represented with only 16 views. Compared to interpolation and multiband methods, the proposed method can provide higher resolution and lower artifacts with the same number of views. The robustness to noise and the computational complexity are also discussed. PMID:24868241
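The total variation (TV) penalty used in the cost function rewards piecewise-constant structure: it sums the absolute differences between neighboring samples, so flat anatomical regions contribute nothing. A one-dimensional sketch (the 2D image case sums differences along both axes):

```python
import numpy as np

def tv1d(x):
    """Discrete 1D total variation: sum of |x[i+1] - x[i]|."""
    return float(np.sum(np.abs(np.diff(x))))

# A piecewise-constant "anatomy" profile: TV counts only the two jumps
# (0 -> 3 and 3 -> 1), not the length of the flat plateaus.
x = np.array([0.0, 0.0, 3.0, 3.0, 3.0, 1.0])
tv = tv1d(x)
```

In the reconstruction, a term like `lam * tv(image)` is added to the data-fidelity cost, steering the iterates toward piecewise-uniform solutions.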
Practical Sub-Nyquist Sampling via Array-Based Compressed Sensing Receiver Architecture
2016-07-10
different array elements at different sub-Nyquist sampling rates. Signal processing inspired by the sparse fast Fourier transform allows for signal...reconstruction algorithms can be computationally demanding (REF). The related sparse Fourier transform algorithms aim to reduce the processing time necessary to...compute the DFT of frequency-sparse signals [7]. In particular, the sparse fast Fourier transform (sFFT) achieves processing time better than the
Spectrum recovery method based on sparse representation for segmented multi-Gaussian model
NASA Astrophysics Data System (ADS)
Teng, Yidan; Zhang, Ye; Ti, Chunli; Su, Nan
2016-09-01
Hyperspectral images (HSIs) offer excellent feature discriminability by supplying diagnostic characteristics with high spectral resolution. However, various degradations can negatively affect the spectral information, including water absorption and band-continuous noise. On the other hand, the huge data volume and strong redundancy among spectra produce an intense demand for compressing HSIs in the spectral dimension, which also leads to loss of spectral information. The reconstruction of spectral diagnostic characteristics has irreplaceable significance for the subsequent application of HSIs. This paper introduces a spectrum restoration method for HSIs making use of a segmented multi-Gaussian model (SMGM) and sparse representation. An SMGM is established to describe the asymmetric spectral absorption and reflection characteristics; meanwhile, its rationality and sparsity are discussed. With the application of compressed sensing (CS) theory, we apply sparse representation to the SMGM. Then, the degraded and compressed HSIs can be reconstructed utilizing the uninjured or key bands. Finally, we use the low-rank matrix recovery (LRMR) algorithm for post-processing to restore the spatial details. The proposed method was tested on spectral data captured on the ground under an artificial water absorption condition and on an AVIRIS HSI data set. The experimental results, in terms of qualitative and quantitative assessments, demonstrate the effectiveness of recovering the spectral information from both degradation and lossy compression. The spectral diagnostic characteristics and the spatial geometry features are well preserved.
Pisharady, Pramod Kumar; Sotiropoulos, Stamatios N; Duarte-Carvajalino, Julio M; Sapiro, Guillermo; Lenglet, Christophe
2018-02-15
We present a sparse Bayesian unmixing algorithm BusineX: Bayesian Unmixing for Sparse Inference-based Estimation of Fiber Crossings (X), for estimation of white matter fiber parameters from compressed (under-sampled) diffusion MRI (dMRI) data. BusineX combines compressive sensing with linear unmixing and introduces sparsity to the previously proposed multiresolution data fusion algorithm RubiX, resulting in a method for improved reconstruction, especially from data with a lower number of diffusion gradients. We formulate the estimation of fiber parameters as a sparse signal recovery problem and propose a linear unmixing framework with sparse Bayesian learning for the recovery of sparse signals, the fiber orientations and volume fractions. The data is modeled using a parametric spherical deconvolution approach and represented using a dictionary created with the exponential decay components along different possible diffusion directions. Volume fractions of fibers along these directions define the dictionary weights. The proposed sparse inference, which is based on the dictionary representation, considers the sparsity of fiber populations and exploits the spatial redundancy in data representation, thereby facilitating inference from under-sampled q-space. The algorithm improves parameter estimation from dMRI through data-dependent local learning of hyperparameters, at each voxel and for each possible fiber orientation, that moderate the strength of priors governing the parameter variances. Experimental results on synthetic and in vivo data show improved accuracy with lower uncertainty in fiber parameter estimates. BusineX resolves a higher number of second and third fiber crossings. For under-sampled data, the algorithm is also shown to produce more reliable estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
Mean and Turbulent Flow Statistics in a Trellised Agricultural Canopy
NASA Astrophysics Data System (ADS)
Miller, Nathan E.; Stoll, Rob; Mahaffee, Walter F.; Pardyjak, Eric R.
2017-10-01
Flow physics is investigated in a two-dimensional trellised agricultural canopy to examine that architecture's unique signature on turbulent transport. Analysis of meteorological data from an Oregon vineyard demonstrates that the canopy strongly influences the flow by channelling the mean flow into the vine-row direction regardless of the above-canopy wind direction. Additionally, other flow statistics in the canopy sub-layer show a dependence on the difference between the above-canopy wind direction and the vine-row direction. This includes an increase in the canopy displacement height and a decrease in the canopy-top shear length scale as the above-canopy flow rotates from row-parallel towards row-orthogonal. Distinct wind-direction-based variations are also observed in the components of the stress tensor, turbulent kinetic energy budget, and the energy spectra. Although spectral results suggest that sonic anemometry is insufficient for resolving all of the important scales of motion within the canopy, the energy spectra peaks still exhibit dependencies on the canopy and the wind direction. These variations demonstrate that the trellised canopy's effect on the flow during periods when the flow is row-aligned is similar to that seen in sparse canopies, and during periods when the flow is row-orthogonal, the effect is similar to that seen in dense canopies.
The fast algorithm of spark in compressive sensing
NASA Astrophysics Data System (ADS)
Xie, Meihua; Yan, Fengxia
2017-01-01
Compressed sensing (CS) is an advanced theory of signal sampling and reconstruction. In CS theory, the condition under which a signal can be reconstructed is an important theoretical problem, and the spark is a useful index for studying it. But computing the spark is NP-hard. In this paper, we study the problem of computing the spark. For some special matrices, for example, Gaussian random matrices and 0-1 random matrices, we obtain some conclusions. Furthermore, for a Gaussian random matrix with fewer rows than columns, we prove that its spark equals the number of its rows plus one with probability 1. For a general matrix, two methods are given to compute its spark: direct search and dual-tree search. By simulating 24 Gaussian random matrices and 18 0-1 random matrices, we tested the computation time of these two methods. Numerical results showed that the dual-tree search method is more efficient than direct search, especially for matrices with roughly as many rows as columns.
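The direct-search approach can be sketched as a brute-force check of column subsets of growing size: the spark is the smallest number of linearly dependent columns. This illustrates the definition and the Gaussian result quoted above; it is exponential in the worst case, so it is only practical for tiny matrices:

```python
import numpy as np
from itertools import combinations

def spark(A, tol=1e-10):
    """Smallest k such that some k columns of A are linearly dependent
    (direct-search variant; exponential cost, small matrices only)."""
    m, n = A.shape
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < k:
                return k
    return n + 1   # all columns independent (possible only when n <= m)

# A 3x6 Gaussian random matrix: spark = rows + 1 = 4 with probability 1,
# matching the result stated in the abstract.
rng = np.random.default_rng(3)
A = rng.standard_normal((3, 6))

# A matrix with an explicit dependency: column 2 = column 0 + column 1.
B = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
```

The dual-tree search the paper proposes prunes this subset enumeration; the brute-force version above is only the baseline it improves on.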
A Comparison of Compressed Sensing and Sparse Recovery Algorithms Applied to Simulation Data
Fan, Ya Ju; Kamath, Chandrika
2016-09-01
The move toward exascale computing for scientific simulations is placing new demands on compression techniques. It is expected that the I/O system will not be able to support the volume of data that is expected to be written out. To enable quantitative analysis and scientific discovery, we are interested in techniques that compress high-dimensional simulation data and can provide perfect or near-perfect reconstruction. In this paper, we explore the use of compressed sensing (CS) techniques to reduce the size of the data before they are written out. Using large-scale simulation data, we investigate how the sufficient sparsity condition and the contrast in the data affect the quality of reconstruction and the degree of compression. Also, we provide suggestions for the practical implementation of CS techniques and compare them with other sparse recovery methods. Finally, our results show that despite longer times for reconstruction, compressed sensing techniques can provide near perfect reconstruction over a range of data with varying sparsity.
C-FSCV: Compressive Fast-Scan Cyclic Voltammetry for Brain Dopamine Recording.
Zamani, Hossein; Bahrami, Hamid Reza; Chalwadi, Preeti; Garris, Paul A; Mohseni, Pedram
2018-01-01
This paper presents a novel compressive sensing framework for recording brain dopamine levels with fast-scan cyclic voltammetry (FSCV) at a carbon-fiber microelectrode. Termed compressive FSCV (C-FSCV), this approach compressively samples the measured total current in each FSCV scan and performs basic FSCV processing steps, e.g., background current averaging and subtraction, directly with compressed measurements. The resulting background-subtracted faradaic currents, which are shown to have a block-sparse representation in the discrete cosine transform domain, are next reconstructed from their compressively sampled counterparts with the block sparse Bayesian learning algorithm. Using a previously recorded dopamine dataset, consisting of electrically evoked signals recorded in the dorsal striatum of an anesthetized rat, the C-FSCV framework is shown to be efficacious in compressing and reconstructing brain dopamine dynamics and associated voltammograms with high fidelity (correlation coefficient, ), while achieving compression ratio, CR, values as high as ~ 5. Moreover, using another set of dopamine data recorded 5 minutes after administration of amphetamine (AMPH) to an ambulatory rat, C-FSCV once again compresses (CR = 5) and reconstructs the temporal pattern of dopamine release with high fidelity ( ), leading to a true-positive rate of 96.4% in detecting AMPH-induced dopamine transients.
McDermott, Danielle; Olson Reichhardt, Cynthia J; Reichhardt, Charles
2016-11-28
Using numerical simulations, we study the dynamical evolution of particles interacting via competing long-range repulsion and short-range attraction in two dimensions. The particles are compressed using a time-dependent quasi-one dimensional trough potential that controls the local density, causing the system to undergo a series of structural phase transitions from a low density clump lattice to stripes, voids, and a high density uniform state. The compression proceeds via slow elastic motion that is interrupted with avalanche-like bursts of activity as the system collapses to progressively higher densities via plastic rearrangements. The plastic events vary in magnitude from small rearrangements of particles, including the formation of quadrupole-like defects, to large-scale vorticity and structural phase transitions. In the dense uniform phase, the system compresses through row reduction transitions mediated by a disorder-order process. We characterize the rearrangement events by measuring changes in the potential energy, the fraction of sixfold coordinated particles, the local density, and the velocity distribution. At high confinements, we find power law scaling of the velocity distribution during row reduction transitions. We observe hysteresis under a reversal of the compression when relatively few plastic rearrangements occur. The decompressing system exhibits distinct phase morphologies, and the phase transitions occur at lower compression forces as the system expands compared to when it is compressed.
Structural transitions and hysteresis in clump- and stripe-forming systems under dynamic compression
McDermott, Danielle; Olson Reichhardt, Cynthia J.; Reichhardt, Charles
2016-11-11
Using numerical simulations, we study the dynamical evolution of particles interacting via competing long-range repulsion and short-range attraction in two dimensions. The particles are compressed using a time-dependent quasi-one dimensional trough potential that controls the local density, causing the system to undergo a series of structural phase transitions from a low density clump lattice to stripes, voids, and a high density uniform state. The compression proceeds via slow elastic motion that is interrupted with avalanche-like bursts of activity as the system collapses to progressively higher densities via plastic rearrangements. The plastic events vary in magnitude from small rearrangements of particles, including the formation of quadrupole-like defects, to large-scale vorticity and structural phase transitions. In the dense uniform phase, the system compresses through row reduction transitions mediated by a disorder-order process. We also characterize the rearrangement events by measuring changes in the potential energy, the fraction of sixfold coordinated particles, the local density, and the velocity distribution. At high confinements, we find power law scaling of the velocity distribution during row reduction transitions. We observe hysteresis under a reversal of the compression when relatively few plastic rearrangements occur. The decompressing system exhibits distinct phase morphologies, and the phase transitions occur at lower compression forces as the system expands compared to when it is compressed.
NASA Technical Reports Server (NTRS)
Korde-Patel, Asmita (Inventor); Barry, Richard K.; Mohsenin, Tinoosh
2016-01-01
Compressive Sensing is a technique for simultaneous acquisition and compression of data that is sparse or can be made sparse in some domain. It is currently under intense development and has been profitably employed for industrial and medical applications. We here describe the use of this technique for the processing of astronomical data. We outline the procedure as applied to exoplanet gravitational microlensing and analyze measurement results and uncertainty values. We describe implications for on-spacecraft data processing for space observatories. Our findings suggest that application of these techniques may yield significant, enabling benefits especially for power and volume-limited space applications such as miniaturized or micro-constellation satellites.
RZA-NLMF algorithm-based adaptive sparse sensing for realizing compressive sensing
NASA Astrophysics Data System (ADS)
Gui, Guan; Xu, Li; Adachi, Fumiyuki
2014-12-01
Nonlinear sparse sensing (NSS) techniques have been adopted for realizing compressive sensing in many applications such as radar imaging. Unlike the NSS, in this paper, we propose an adaptive sparse sensing (ASS) approach using the reweighted zero-attracting normalized least mean fourth (RZA-NLMF) algorithm, which depends on several given parameters, i.e., the reweighted factor, the regularization parameter, and the initial step size. First, based on the independence assumption, the Cramer-Rao lower bound (CRLB) is derived for performance comparison. In addition, a reweighted factor selection method is proposed for achieving robust estimation performance. Finally, to verify the algorithm, Monte Carlo-based computer simulations are given to show that the ASS achieves much better mean square error (MSE) performance than the NSS.
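The RZA-NLMF update can be sketched as follows. This is one common reading of the recipe (a normalized least mean fourth step plus a reweighted zero attractor); the exact normalization and the parameter values in the paper may differ, so treat all constants here as illustrative.

```python
import numpy as np

def rza_nlmf(x, d, n_taps, mu=0.5, rho=5e-4, q=10.0, eps=1e-8):
    """Reweighted zero-attracting normalized LMF adaptive filter (one common variant).

    The term -rho*sign(w)/(1 + q|w|) attracts small taps toward zero,
    promoting sparse filter estimates."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # tap-input vector [x_n, x_{n-1}, ...]
        e = d[n] - w @ u                    # a priori error
        w += mu * e**3 * u / (u @ u * e * e + eps) \
             - rho * np.sign(w) / (1.0 + q * np.abs(w))
    return w

rng = np.random.default_rng(0)
h = np.zeros(16)
h[[2, 9]] = [1.0, -0.5]                     # sparse channel to identify
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)]              # noiseless observations
w = rza_nlmf(x, d, n_taps=16)               # w should approach h, with zeros shrunk
```

With noiseless data the estimate converges close to the true sparse channel; the zero attractor introduces only a small bias on the active taps.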
Sparse signals recovered by non-convex penalty in quasi-linear systems.
Cui, Angang; Li, Haiyang; Wen, Meng; Peng, Jigen
2018-01-01
The goal of compressed sensing is to reconstruct a sparse signal from a few linear measurements, far fewer than the dimension of the ambient space of the signal. However, many real-life applications in physics and biomedical sciences carry strongly nonlinear structures, and the linear model is no longer suitable. Compared with compressed sensing in the linear setting, this nonlinear compressed sensing is much more difficult; it is in fact an NP-hard combinatorial problem, because of the discrete and discontinuous nature of the [Formula: see text]-norm and the nonlinearity. To facilitate sparse signal recovery, we assume in this paper that the nonlinear models have a smooth quasi-linear nature, and study a non-convex fraction function [Formula: see text] in this quasi-linear compressed sensing. We propose an iterative fraction thresholding algorithm to solve the regularization problem [Formula: see text] for all [Formula: see text]. With the change of parameter [Formula: see text], our algorithm obtains promising results, which is one of its advantages compared with some state-of-the-art algorithms. Numerical experiments show that our method performs much better than some state-of-the-art methods.
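The paper's fraction-function thresholding is specific to its non-convex penalty and is not reproduced here; as a hedged stand-in, the following shows the standard iterative soft-thresholding algorithm (ISTA) for the convex ℓ1 analogue, which has the same proximal-gradient structure of "gradient step, then threshold".

```python
import numpy as np

def ista(A, y, lam, n_iter=500):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the data-term gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
m, n, k = 100, 256, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = ista(A, A @ x_true, lam=0.01)          # small lam keeps the l1 bias small
```

A non-convex penalty such as the fraction function replaces the soft-threshold step with its own proximal map, which reduces the bias that ℓ1 shrinkage introduces on large coefficients.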
NASA Astrophysics Data System (ADS)
Cattaneo, Alessandro; Park, Gyuhae; Farrar, Charles; Mascareñas, David
2012-04-01
The acoustic emission (AE) phenomena generated by a rapid release in the internal stress of a material represent a promising technique for structural health monitoring (SHM) applications. AE events typically result in a discrete number of short-time, transient signals. The challenge associated with capturing these events using classical techniques is that very high sampling rates must be used over extended periods of time. The result is that a very large amount of data is collected to capture a phenomenon that rarely occurs. Furthermore, the high energy consumption associated with the required high sampling rates makes the implementation of high-endurance, low-power, embedded AE sensor nodes difficult to achieve. The relatively rare occurrence of AE events over long time scales implies that these measurements are inherently sparse in the spike domain. The sparse nature of AE measurements makes them an attractive candidate for the application of compressed sampling techniques. Collecting compressed measurements of sparse AE signals will relax the requirements on the sampling rate and memory demands. The focus of this work is to investigate the suitability of compressed sensing techniques for AE-based SHM. The work explores estimating AE signal statistics in the compressed domain for low-power classification applications. In the event compressed classification finds an event of interest, ℓ1-norm minimization will be used to reconstruct the measurement for further analysis. The impact of structured noise on compressive measurements is specifically addressed. The suitability of a particular algorithm, called Justice Pursuit, to increase robustness to a small amount of arbitrary measurement corruption is investigated.
Low-rank and Adaptive Sparse Signal (LASSI) Models for Highly Accelerated Dynamic Imaging
Ravishankar, Saiprasad; Moore, Brian E.; Nadakuditi, Raj Rao; Fessler, Jeffrey A.
2017-01-01
Sparsity-based approaches have been popular in many applications in image processing and imaging. Compressed sensing exploits the sparsity of images in a transform domain or dictionary to improve image recovery from undersampled measurements. In the context of inverse problems in dynamic imaging, recent research has demonstrated the promise of sparsity and low-rank techniques. For example, the patches of the underlying data are modeled as sparse in an adaptive dictionary domain, and the resulting image and dictionary estimation from undersampled measurements is called dictionary-blind compressed sensing, or the dynamic image sequence is modeled as a sum of low-rank and sparse (in some transform domain) components (L+S model) that are estimated from limited measurements. In this work, we investigate a data-adaptive extension of the L+S model, dubbed LASSI, where the temporal image sequence is decomposed into a low-rank component and a component whose spatiotemporal (3D) patches are sparse in some adaptive dictionary domain. We investigate various formulations and efficient methods for jointly estimating the underlying dynamic signal components and the spatiotemporal dictionary from limited measurements. We also obtain efficient sparsity penalized dictionary-blind compressed sensing methods as special cases of our LASSI approaches. Our numerical experiments demonstrate the promising performance of LASSI schemes for dynamic magnetic resonance image reconstruction from limited k-t space data compared to recent methods such as k-t SLR and L+S, and compared to the proposed dictionary-blind compressed sensing method. PMID:28092528
Low-Rank and Adaptive Sparse Signal (LASSI) Models for Highly Accelerated Dynamic Imaging.
Ravishankar, Saiprasad; Moore, Brian E; Nadakuditi, Raj Rao; Fessler, Jeffrey A
2017-05-01
Sparsity-based approaches have been popular in many applications in image processing and imaging. Compressed sensing exploits the sparsity of images in a transform domain or dictionary to improve image recovery from undersampled measurements. In the context of inverse problems in dynamic imaging, recent research has demonstrated the promise of sparsity and low-rank techniques. For example, the patches of the underlying data are modeled as sparse in an adaptive dictionary domain, and the resulting image and dictionary estimation from undersampled measurements is called dictionary-blind compressed sensing, or the dynamic image sequence is modeled as a sum of low-rank and sparse (in some transform domain) components (L+S model) that are estimated from limited measurements. In this work, we investigate a data-adaptive extension of the L+S model, dubbed LASSI, where the temporal image sequence is decomposed into a low-rank component and a component whose spatiotemporal (3D) patches are sparse in some adaptive dictionary domain. We investigate various formulations and efficient methods for jointly estimating the underlying dynamic signal components and the spatiotemporal dictionary from limited measurements. We also obtain efficient sparsity penalized dictionary-blind compressed sensing methods as special cases of our LASSI approaches. Our numerical experiments demonstrate the promising performance of LASSI schemes for dynamic magnetic resonance image reconstruction from limited k-t space data compared to recent methods such as k-t SLR and L+S, and compared to the proposed dictionary-blind compressed sensing method.
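The low-rank-plus-sparse split that the L+S baseline builds on can be illustrated, under simplifying assumptions (identity sparsifying transform, fully sampled data, no dictionary learning), by a generic robust-PCA decomposition via the inexact augmented Lagrange multiplier method. This is a textbook sketch, not the LASSI authors' solver.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rpca_ialm(M, n_iter=30, rho=1.5):
    """Decompose M = L (low rank) + S (sparse) via inexact ALM iterations."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))                # standard sparsity weight
    norm2 = np.linalg.norm(M, 2)
    mu = 1.25 / norm2
    Y = M / max(norm2, np.max(np.abs(M)) / lam)   # common dual initialization
    S = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)         # low-rank update
        r = M - L + Y / mu
        S = np.sign(r) * np.maximum(np.abs(r) - lam / mu, 0.0)  # sparse update
        Y = Y + mu * (M - L - S)                  # dual ascent
        mu *= rho
    return L, S

rng = np.random.default_rng(0)
L0 = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 50))  # rank-2 component
S0 = np.zeros((60, 50))
mask = rng.random((60, 50)) < 0.03                # ~3% sparse outliers
S0[mask] = 10 * rng.standard_normal(int(mask.sum()))
L, S = rpca_ialm(L0 + S0)
```

In dynamic MRI the columns of M would be vectorized time frames; LASSI additionally replaces the fixed sparsity model on S with learned spatiotemporal dictionaries and works from undersampled k-t data.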
NASA Astrophysics Data System (ADS)
Fiandrotti, Attilio; Fosson, Sophie M.; Ravazzi, Chiara; Magli, Enrico
2018-04-01
Compressive sensing promises to enable bandwidth-efficient on-board compression of astronomical data by lifting the encoding complexity from the source to the receiver. The signal is recovered off-line, exploiting GPUs parallel computation capabilities to speedup the reconstruction process. However, inherent GPU hardware constraints limit the size of the recoverable signal and the speedup practically achievable. In this work, we design parallel algorithms that exploit the properties of circulant matrices for efficient GPU-accelerated sparse signals recovery. Our approach reduces the memory requirements, allowing us to recover very large signals with limited memory. In addition, it achieves a tenfold signal recovery speedup thanks to ad-hoc parallelization of matrix-vector multiplications and matrix inversions. Finally, we practically demonstrate our algorithms in a typical application of circulant matrices: deblurring a sparse astronomical image in the compressed domain.
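The key property this paper exploits, namely that circulant matrices are diagonalized by the DFT so matrix-vector products cost O(n log n), can be checked in a few lines. This is plain NumPy on the CPU, not the paper's GPU implementation.

```python
import numpy as np

def circulant_matvec(c, x):
    """y = C x where C is circulant with first column c, via the FFT.

    A circulant matrix is diagonalized by the DFT, so C x is the
    circular convolution of c and x, computed in O(n log n)."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

# Check against an explicitly assembled circulant matrix.
rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)
x = rng.standard_normal(n)
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
print(np.allclose(C @ x, circulant_matvec(c, x)))  # → True
```

The same FFT trick applies to the transpose (conjugate the spectrum), which is why circulant sensing matrices never need to be stored explicitly, the memory saving the paper relies on.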
Randomized subspace-based robust principal component analysis for hyperspectral anomaly detection
NASA Astrophysics Data System (ADS)
Sun, Weiwei; Yang, Gang; Li, Jialin; Zhang, Dianfa
2018-01-01
A randomized subspace-based robust principal component analysis (RSRPCA) method for anomaly detection in hyperspectral imagery (HSI) is proposed. The RSRPCA combines advantages of randomized column subspace and robust principal component analysis (RPCA). It assumes that the background has low-rank properties, and the anomalies are sparse and do not lie in the column subspace of the background. First, RSRPCA implements random sampling to sketch the original HSI dataset from columns and to construct a randomized column subspace of the background. Structured random projections are also adopted to sketch the HSI dataset from rows. Sketching from columns and rows could greatly reduce the computational requirements of RSRPCA. Second, the RSRPCA adopts the columnwise RPCA (CWRPCA) to eliminate the negative effects of sampled anomaly pixels and to purify the previously constructed randomized column subspace by removing sampled anomaly columns. The CWRPCA decomposes the submatrix of the HSI data into a low-rank matrix (i.e., background component), a noisy matrix (i.e., noise component), and a sparse anomaly matrix (i.e., anomaly component) with only a small proportion of nonzero columns. The algorithm of inexact augmented Lagrange multipliers is utilized to optimize the CWRPCA problem and estimate the sparse matrix. Nonzero columns of the sparse anomaly matrix point to sampled anomaly columns in the submatrix. Third, all the pixels are projected onto the complementary subspace of the purified randomized column subspace of the background, and the anomaly pixels in the original HSI data are finally exactly located. Several experiments on three real hyperspectral images are carefully designed to investigate the detection performance of RSRPCA, and the results are compared with four state-of-the-art methods. Experimental results show that the proposed RSRPCA outperforms four comparison methods both in detection performance and in computational time.
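A toy version of the randomized column-subspace scoring step can be sketched as below. For simplicity the sampled columns here are known-background pixels; the paper's CWRPCA purification step exists precisely because random sampling may also pick anomaly columns. All names, sizes, and data are illustrative.

```python
import numpy as np

def residual_scores(X, basis_cols):
    """Score each pixel (column of X) by its residual norm outside the
    subspace spanned by the sampled background columns."""
    Q, _ = np.linalg.qr(X[:, basis_cols])   # orthonormal basis of sampled columns
    R = X - Q @ (Q.T @ X)                   # project onto the orthogonal complement
    return np.linalg.norm(R, axis=0)

rng = np.random.default_rng(0)
bands, pixels = 50, 500
background = rng.standard_normal((bands, 3)) @ rng.standard_normal((3, pixels))
X = background.copy()
X[:, :5] += 5 * rng.standard_normal((bands, 5))   # implant 5 anomalous pixels

idx = rng.choice(np.arange(5, pixels), 20, replace=False)  # background-only sample
scores = residual_scores(X, idx)
print(np.sort(np.argsort(scores)[-5:]))  # → [0 1 2 3 4]
```

Background pixels lie in the sampled subspace and score near zero, while the implanted anomalies keep a large residual, so the top-scoring pixels are exactly the anomalies.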
A CFD analysis of blade row interactions within a high-speed axial compressor
NASA Astrophysics Data System (ADS)
Richman, Michael Scott
Aircraft engine design presents many technical and financial hurdles. In an effort to streamline the design process, save money, and improve reliability and performance, many manufacturers are relying on computational fluid dynamic simulations. An overarching goal of the design process for military aircraft engines is to reduce size and weight while maintaining (or improving) reliability. Designers often turn to the compression system to accomplish this goal. As pressure ratios increase and the number of compression stages decreases, many problems arise; for example, stability and high cycle fatigue (HCF) become significant concerns as individual stage loading increases. CFD simulations have recently been employed to assist in the understanding of these aeroelastic problems. For accurate multistage blade row HCF prediction, it is imperative that advanced three-dimensional blade row unsteady aerodynamic interaction codes be validated with appropriate benchmark data. This research addresses this required validation process for TURBO, an advanced three-dimensional multi-blade row turbomachinery CFD code. The solution/prediction accuracy is characterized, identifying key flow field parameters driving the inlet guide vane (IGV) and stator response to the rotor-generated forcing functions. The result is a quantified evaluation of the ability of TURBO to predict not only the fundamental flow field characteristics but also the three-dimensional blade loading.
SparseCT: interrupted-beam acquisition and sparse reconstruction for radiation dose reduction
NASA Astrophysics Data System (ADS)
Koesters, Thomas; Knoll, Florian; Sodickson, Aaron; Sodickson, Daniel K.; Otazo, Ricardo
2017-03-01
State-of-the-art low-dose CT methods reduce the x-ray tube current and use iterative reconstruction methods to denoise the resulting images. However, due to compromises between denoising and image quality, only moderate dose reductions up to 30-40% are accepted in clinical practice. An alternative approach is to reduce the number of x-ray projections and use compressed sensing to reconstruct the full-tube-current undersampled data. This idea was recognized in the early days of compressed sensing and proposals for CT dose reduction appeared soon afterwards. However, no practical means of undersampling has yet been demonstrated in the challenging environment of a rapidly rotating CT gantry. In this work, we propose a moving multislit collimator as a practical incoherent undersampling scheme for compressed sensing CT and evaluate its application for radiation dose reduction. The proposed collimator is composed of narrow slits and moves linearly along the slice dimension (z), to interrupt the incident beam in different slices for each x-ray tube angle (θ). The reduced projection dataset is then reconstructed using a sparse approach, where 3D image gradients are employed to enforce sparsity. The effects of the collimator slits on the beam profile were measured and represented as a continuous slice profile. SparseCT was tested using retrospective undersampling and compared against commercial current-reduction techniques on phantoms and in vivo studies. Initial results suggest that SparseCT may enable higher performance than current-reduction, particularly for high dose reduction factors.
Compressed sparse tensor based quadrature for vibrational quantum mechanics integrals
Rai, Prashant; Sargsyan, Khachik; Najm, Habib N.
2018-03-20
A new method for fast evaluation of high dimensional integrals arising in quantum mechanics is proposed. Here, the method is based on sparse approximation of a high dimensional function followed by a low-rank compression. In the first step, we interpret the high dimensional integrand as a tensor in a suitable tensor product space and determine its entries by a compressed sensing based algorithm using only a few function evaluations. Secondly, we implement a rank reduction strategy to compress this tensor in a suitable low-rank tensor format using standard tensor compression tools. This allows representing a high dimensional integrand function as a small sum of products of low dimensional functions. Finally, a low dimensional Gauss–Hermite quadrature rule is used to integrate this low-rank representation, thus alleviating the curse of dimensionality. Numerical tests on synthetic functions, as well as on energy correction integrals for water and formaldehyde molecules, demonstrate the efficiency of this method using very few function evaluations as compared to other integration strategies.
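The final step, one-dimensional Gauss-Hermite quadrature applied factor-by-factor to a low-rank (separable) representation, can be checked directly with NumPy's built-in rule. The integrands below are illustrative monomials, not the paper's potential-energy surfaces.

```python
import numpy as np

# 10-point Gauss-Hermite rule: ∫ f(x) e^{-x^2} dx ≈ Σ w_i f(x_i),
# exact for polynomial f up to degree 19.
x, w = np.polynomial.hermite.hermgauss(10)

one_d = np.sum(w * x**2)                      # ∫ x^2 e^{-x^2} dx = √π/2
print(np.isclose(one_d, np.sqrt(np.pi) / 2))  # → True

# For a separable (rank-1) term g(x)h(y), the 2-D integral factors into a
# product of 1-D quadratures — the structure the low-rank format exploits.
two_d = np.sum(w * x**2) * np.sum(w * x**4)   # ∫∫ x^2 y^4 e^{-x^2-y^2} dx dy
print(np.isclose(two_d, (np.sqrt(np.pi) / 2) * (3 * np.sqrt(np.pi) / 4)))  # → True
```

A rank-r representation in d dimensions thus costs r·d one-dimensional quadratures instead of an n^d tensor-grid sum, which is how the curse of dimensionality is alleviated.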
Compressed digital holography: from micro towards macro
NASA Astrophysics Data System (ADS)
Schretter, Colas; Bettens, Stijn; Blinder, David; Pesquet-Popescu, Béatrice; Cagnazzo, Marco; Dufaux, Frédéric; Schelkens, Peter
2016-09-01
signal processing methods from software-driven computer engineering and applied mathematics. The compressed sensing theory in particular established a practical framework for reconstructing the scene content using few linear combinations of complex measurements and a sparse prior for regularizing the solution. Compressed sensing found direct applications in digital holography for microscopy. Indeed, the wave propagation phenomenon in free space mixes in a natural way the spatial distribution of point sources from the 3-dimensional scene. As the 3-dimensional scene is mapped to a 2-dimensional hologram, the hologram samples form a compressed representation of the scene as well. This overview paper discusses contributions in the field of compressed digital holography at the micro scale. Then, an outreach on future extensions towards the real-size macro scale is discussed. Thanks to advances in sensor technologies, increasing computing power and the recent improvements in sparse digital signal processing, holographic modalities are on the verge of practical high-quality visualization at a macroscopic scale where much higher resolution holograms must be acquired and processed on the computer.
Pisharady, Pramod Kumar; Sotiropoulos, Stamatios N; Sapiro, Guillermo; Lenglet, Christophe
2017-09-01
We propose a sparse Bayesian learning algorithm for improved estimation of white matter fiber parameters from compressed (under-sampled q-space) multi-shell diffusion MRI data. The multi-shell data is represented in a dictionary form using a non-monoexponential decay model of diffusion, based on continuous gamma distribution of diffusivities. The fiber volume fractions with predefined orientations, which are the unknown parameters, form the dictionary weights. These unknown parameters are estimated with a linear un-mixing framework, using a sparse Bayesian learning algorithm. A localized learning of hyperparameters at each voxel and for each possible fiber orientations improves the parameter estimation. Our experiments using synthetic data from the ISBI 2012 HARDI reconstruction challenge and in-vivo data from the Human Connectome Project demonstrate the improvements.
Towards sparse characterisation of on-body ultra-wideband wireless channels.
Yang, Xiaodong; Ren, Aifeng; Zhang, Zhiya; Ur Rehman, Masood; Abbasi, Qammer Hussain; Alomainy, Akram
2015-06-01
With the aim of reducing cost and power consumption of the receiving terminal, compressive sensing (CS) framework is applied to on-body ultra-wideband (UWB) channel estimation. It is demonstrated in this Letter that the sparse on-body UWB channel impulse response recovered by the CS framework fits the original sparse channel well; thus, on-body channel estimation can be achieved using low-speed sampling devices.
Bolted joints in graphite-epoxy composites
NASA Technical Reports Server (NTRS)
Hart-Smith, L. J.
1976-01-01
All-graphite/epoxy laminates and hybrid graphite-glass/epoxy laminates were tested. The tests encompassed a range of geometries for each laminate pattern to cover the three basic failure modes - net section tension failure through the bolt hole, bearing and shearout. Static tensile and compressive loads were applied. A constant bolt diameter of 6.35 mm (0.25 in.) was used in the tests. The interaction of stress concentrations associated with multi-row bolted joints was investigated by testing single- and double-row bolted joints and open-hole specimens in tension. For tension loading, linear interaction was found to exist between the bearing stress reacted at a given bolt hole and the remaining tension stress running by that hole to be reacted elsewhere. The interaction under compressive loading was found to be non-linear. Comparative tests were run using single-lap bolted joints and double-lap joints with pin connection. Both of these joint types exhibited lower strengths than were demonstrated by the corresponding double-lap joints. The analysis methods developed here for single bolt joints are shown to be capable of predicting the behavior of multi-row joints.
Extended frequency turbofan model
NASA Technical Reports Server (NTRS)
Mason, J. R.; Park, J. W.; Jaekel, R. F.
1980-01-01
The fan model was developed using two dimensional modeling techniques to add dynamic radial coupling between the core stream and the bypass stream of the fan. When incorporated into a complete TF-30 engine simulation, the fan model greatly improved compression system frequency response to planar inlet pressure disturbances up to 100 Hz. The improved simulation also matched engine stability limits at 15 Hz, whereas the one dimensional fan model required twice the inlet pressure amplitude to stall the simulation. With verification of the two dimensional fan model, this program formulated a high frequency F-100(3) engine simulation using row by row compression system characteristics. In addition to the F-100(3) remote splitter fan, the program modified the model fan characteristics to simulate a proximate splitter version of the F-100(3) engine.
Elastic-Waveform Inversion with Compressive Sensing for Sparse Seismic Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Youzuo; Huang, Lianjie
2015-01-28
Accurate velocity models of compressional- and shear-waves are essential for geothermal reservoir characterization and microseismic imaging. Elastic-waveform inversion of multi-component seismic data can provide high-resolution inversion results of subsurface geophysical properties. However, the method requires seismic data acquired using dense source and receiver arrays. In practice, seismic sources and/or geophones are often sparsely distributed on the surface and/or in a borehole, such as 3D vertical seismic profiling (VSP) surveys. We develop a novel elastic-waveform inversion method with compressive sensing for inversion of sparse seismic data. We employ an alternating-minimization algorithm to solve the optimization problem of our new waveform inversion method. We validate our new method using synthetic VSP data for a geophysical model built using geologic features found at the Raft River enhanced-geothermal-system (EGS) field. We apply our method to synthetic VSP data with a sparse source array and compare the results with those obtained with a dense source array. Our numerical results demonstrate that the velocity models produced with our new method using a sparse source array are almost as accurate as those obtained using a dense source array.
Eastern Tent Caterpillar (Pest Alert)
Robert Rabaglia; Daniel Twardus
1990-01-01
The eastern tent caterpillar is often mistaken for the gypsy moth. Though they are similar in appearance, they differ in habits. The fully grown eastern tent caterpillar is about 2 inches long, black with a white stripe along the middle of the back and a row of pale blue oval spots on each side. It is sparsely covered with fine light brown hairs. The gypsy moth...
Artificial neural network does better spatiotemporal compressive sampling
NASA Astrophysics Data System (ADS)
Lee, Soo-Young; Hsu, Charles; Szu, Harold
2012-06-01
Spatiotemporal sparseness is generated naturally by the human visual system, modeled here with an artificial neural network associative memory. Sparseness means nothing more and nothing less than that compressive sensing achieves information concentration. To concentrate the information, one uses spatial correlation, the spatial FFT, the DWT, or, best of all, an adaptive wavelet transform (cf. NUS, Shen Shawei). However, for higher dimensional spatiotemporal information concentration, mathematics cannot match the flexibility of a living human sensory system, for obvious survival reasons. The rest of the story is given in the paper.
Subspace aware recovery of low rank and jointly sparse signals
Biswas, Sampurna; Dasgupta, Soura; Mudumbai, Raghuraman; Jacob, Mathews
2017-01-01
We consider the recovery of a matrix X, which is simultaneously low rank and jointly sparse, from few measurements of its columns using a two-step algorithm. Each column of X is measured using a combination of two measurement matrices; one is the same for every column, while the second measurement matrix varies from column to column. The recovery proceeds by first estimating the row subspace vectors from the measurements corresponding to the common matrix. The estimated row subspace vectors are then used to recover X from all the measurements using a convex program of joint sparsity minimization. Our main contribution is to provide sufficient conditions on the measurement matrices that guarantee the recovery of such a matrix using the above two-step algorithm. The results demonstrate quite significant savings in number of measurements when compared to the standard multiple measurement vector (MMV) scheme, which assumes the same time-invariant measurement pattern for all the time frames. We illustrate the impact of the sampling pattern on reconstruction quality using breath held cardiac cine MRI and cardiac perfusion MRI data, while the utility of the algorithm to accelerate the acquisition is demonstrated on MR parameter mapping. PMID:28630889
Intelligent Data Granulation on Load: Improving Infobright's Knowledge Grid
NASA Astrophysics Data System (ADS)
Ślęzak, Dominik; Kowalski, Marcin
One of the major aspects of Infobright's relational database technology is the automatic decomposition of each data table into Rough Rows, each consisting of 64K original rows. Rough Rows are automatically annotated by Knowledge Nodes that represent compact information about the rows' values. Query performance depends on the quality of the Knowledge Nodes, i.e., their efficiency in minimizing access to the compressed portions of data stored on disk, according to the specific query optimization procedures. We show how to implement a mechanism that organizes the incoming data into Rough Rows that maximize the quality of the corresponding Knowledge Nodes. Given clear business-driven requirements, the implemented mechanism needs to be fully integrated with the data load process, causing no decrease in data load speed. The performance gain resulting from better data organization is illustrated by tests over our benchmark data. The differences between the proposed mechanism and some well-known procedures of database clustering or partitioning are discussed. The paper is a continuation of our patent application [22].
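The pack-level metadata idea described above can be illustrated with a toy sketch (this is not Infobright's actual code; the tiny pack size and the min/max-only Knowledge Node are simplifying assumptions):

```python
import numpy as np

PACK_SIZE = 4  # Infobright uses 64K rows per Rough Row; kept tiny for illustration

def build_knowledge_nodes(column):
    """Annotate each pack of rows with its (min, max) -- a minimal Knowledge Node."""
    packs = [column[i:i + PACK_SIZE] for i in range(0, len(column), PACK_SIZE)]
    return [(p.min(), p.max()) for p in packs]

def packs_to_scan(nodes, lo, hi):
    """Indices of packs whose [min, max] range overlaps the query range [lo, hi];
    all other packs can be skipped without decompressing them."""
    return [i for i, (mn, mx) in enumerate(nodes) if mx >= lo and mn <= hi]

# Well-organized (here: sorted) data lets the metadata exclude most packs.
column = np.arange(16)
nodes = build_knowledge_nodes(column)
print(packs_to_scan(nodes, 5, 6))
```

With sorted data only one pack overlaps the range [5, 6]; with randomly shuffled data every pack typically would, which is exactly why load-time data organization improves Knowledge Node quality.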
Algorithms for solving large sparse systems of simultaneous linear equations on vector processors
NASA Technical Reports Server (NTRS)
David, R. E.
1984-01-01
Very efficient algorithms for solving large sparse systems of simultaneous linear equations have been developed for serial processing computers. These involve a reordering of matrix rows and columns in order to obtain a near triangular pattern of nonzero elements. Then an LU factorization is developed to represent the matrix inverse in terms of a sequence of elementary Gaussian eliminations, or pivots. In this paper it is shown how these algorithms are adapted for efficient implementation on vector processors. Results obtained on the CYBER 200 Model 205 are presented for a series of large test problems which show the comparative advantages of the triangularization and vector processing algorithms.
Three-Dimensional Inverse Transport Solver Based on Compressive Sensing Technique
NASA Astrophysics Data System (ADS)
Cheng, Yuxiong; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi
2013-09-01
Based on direct exposure measurements from flash radiographic images, a compressive sensing-based method for the three-dimensional inverse transport problem is presented. The linear absorption coefficients and interface locations of objects are reconstructed directly at the same time. It is always very expensive to obtain enough measurements. With limited measurements, the compressive sensing sparse reconstruction technique orthogonal matching pursuit is applied to obtain the sparse coefficients by solving an optimization problem. A three-dimensional inverse transport solver is developed based on this technique. The solver has three features: (1) AutoCAD is employed as a geometry preprocessor due to its powerful graphics capability. (2) The forward projection matrix, rather than a Gauss matrix, is constructed by the visualization tool generator. (3) The Fourier transform and the Daubechies wavelet transform are adopted to convert an underdetermined system into a well-posed system in the algorithm. Simulations are performed, and the numerical results for a pseudo-sine absorption problem, a two-cube problem, and a two-cylinder problem obtained with the compressive sensing-based solver agree well with the reference values.
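The orthogonal matching pursuit step mentioned above can be sketched generically (a minimal textbook OMP in NumPy on synthetic data, not the authors' solver):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k atoms, then re-fit
    the coefficients on the selected support by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 60))
A /= np.linalg.norm(A, axis=0)               # unit-norm columns
x_true = np.zeros(60)
x_true[[5, 17, 40]] = [2.0, -1.5, 1.0]       # a 3-sparse signal
x_hat = omp(A, A @ x_true, k=3)
print(np.flatnonzero(x_hat))
```

In the noiseless case OMP recovers the sparse coefficient vector exactly from the underdetermined system, which is the role it plays in the solver above.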
Two-dimensional sparse wavenumber recovery for guided wavefields
NASA Astrophysics Data System (ADS)
Sabeti, Soroosh; Harley, Joel B.
2018-04-01
The multi-modal and dispersive behavior of guided waves is often characterized by their dispersion curves, which describe their frequency-wavenumber behavior. In prior work, compressive sensing based techniques, such as sparse wavenumber analysis (SWA), have been capable of recovering dispersion curves from limited data samples. A major limitation of SWA, however, is the assumption that the structure is isotropic. As a result, SWA fails when applied to composites and other anisotropic structures. There have been efforts to address this issue in the literature, but they either are not easily generalizable or do not sufficiently express the data. In this paper, we enhance the existing approaches by employing a two-dimensional wavenumber model to account for direction-dependent velocities in anisotropic media. We integrate this model with tools from compressive sensing to reconstruct a wavefield from incomplete data. Specifically, we create a modified two-dimensional orthogonal matching pursuit algorithm that takes an undersampled wavefield image, with specified unknown elements, and determines its sparse wavenumber characteristics. We then recover the entire wavefield from the sparse representations obtained with our small number of data samples.
Matched field localization based on CS-MUSIC algorithm
NASA Astrophysics Data System (ADS)
Guo, Shuangle; Tang, Ruichun; Peng, Linhui; Ji, Xiaopeng
2016-04-01
The problems caused by too few or too many snapshots, and by coherent sources, in underwater acoustic positioning are considered. A matched field localization algorithm based on CS-MUSIC (Compressive Sensing Multiple Signal Classification) is proposed, built on a sparse mathematical model of underwater positioning. The signal matrix is calculated through the SVD (Singular Value Decomposition) of the observation matrix. The observation matrix in the sparse model is replaced by the signal matrix, yielding a new, more concise sparse model in which both the scale of the localization problem and the noise level are reduced. The new sparse model is then solved by the CS-MUSIC algorithm, a combination of the CS (Compressive Sensing) and MUSIC (Multiple Signal Classification) methods. The proposed algorithm effectively overcomes the difficulties caused by correlated sources and a shortage of snapshots, and, when the number of snapshots is large, the SVD of the observation matrix also reduces the time complexity and noise level of the localization problem, as proved in this paper.
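The SVD-based replacement of the observation matrix by a smaller signal matrix can be sketched as follows (synthetic data; the array geometry and matched-field model are omitted, and the sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_snapshots, n_sources = 8, 200, 2

# Synthetic observations: 2 sources mixed into 8 sensors, plus weak noise.
mixing = rng.standard_normal((n_sensors, n_sources))
sources = rng.standard_normal((n_sources, n_snapshots))
Y = mixing @ sources + 0.01 * rng.standard_normal((n_sensors, n_snapshots))

# Keep the dominant left singular directions, scaled by their singular
# values. This "signal matrix" replaces Y in the sparse model, shrinking
# the problem from 200 snapshot columns to 2 and discarding most of the
# noise subspace.
U, s, _ = np.linalg.svd(Y, full_matrices=False)
Y_signal = U[:, :n_sources] * s[:n_sources]
print(Y_signal.shape)
```

The downstream sparse solver then works on an 8 x 2 matrix instead of an 8 x 200 one, which is where the reduction in scale and noise level comes from.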
NASA Astrophysics Data System (ADS)
Chen, Yong-fei; Gao, Hong-xia; Wu, Zi-ling; Kang, Hui
2018-01-01
Compressed sensing (CS) has achieved great success in removing a single type of noise. However, it cannot efficiently restore images contaminated with mixed noise. This paper introduces nonlocal similarity and cosparsity, inspired by compressed sensing, to overcome the difficulties in mixed noise removal: nonlocal similarity explores the signal sparsity from similar patches, while cosparsity assumes that the signal is sparse after a possibly redundant transform. Meanwhile, an adaptive scheme is designed to keep the balance between mixed noise removal and detail preservation based on local variance. Finally, IRLSM and RACoSaMP are adopted to solve the objective function. Experimental results demonstrate that the proposed method is superior to conventional CS methods, such as K-SVD, and to the state-of-the-art method of nonlocally centralized sparse representation (NCSR), in terms of both visual results and quantitative measures.
A sparse equivalent source method for near-field acoustic holography.
Fernandez-Grande, Efren; Xenaki, Angeliki; Gerstoft, Peter
2017-01-01
This study examines a near-field acoustic holography method consisting of a sparse formulation of the equivalent source method, based on the compressive sensing (CS) framework. The method, denoted Compressive-Equivalent Source Method (C-ESM), encourages spatially sparse solutions (based on the superposition of few waves) that are accurate when the acoustic sources are spatially localized. The importance of obtaining a non-redundant representation, i.e., a sensing matrix with low column coherence, and the inherent ill-conditioning of near-field reconstruction problems is addressed. Numerical and experimental results on a classical guitar and on a highly reactive dipole-like source are presented. C-ESM is valid beyond the conventional sampling limits, making wide-band reconstruction possible. Spatially extended sources can also be addressed with C-ESM, although in this case the obtained solution does not recover the spatial extent of the source.
A method of vehicle license plate recognition based on PCANet and compressive sensing
NASA Astrophysics Data System (ADS)
Ye, Xianyi; Min, Feng
2018-03-01
The manual feature extraction of traditional methods for vehicle license plates is not robust to diverse variations. Moreover, the high feature dimension extracted with a Principal Component Analysis Network (PCANet) leads to low classification efficiency. To solve these problems, a method of vehicle license plate recognition based on PCANet and compressive sensing is proposed. First, PCANet is used to extract features from the images of characters. Then, a sparse measurement matrix, which is a very sparse matrix consistent with the Restricted Isometry Property (RIP) condition of compressed sensing, is used to reduce the dimensions of the extracted features. Finally, a Support Vector Machine (SVM) is used to train on and recognize the dimension-reduced features. Experimental results demonstrate that the proposed method outperforms a Convolutional Neural Network (CNN) in both recognition accuracy and speed. Compared with the variant without compressive sensing, the proposed method has a lower feature dimension, which increases efficiency.
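The dimension-reduction step can be sketched with a very sparse random projection of the Achlioptas/Li type (an assumed stand-in for the paper's sparse measurement matrix; all sizes are illustrative):

```python
import numpy as np

def sparse_measurement_matrix(m, n, s=3, seed=0):
    """Very sparse random projection: entries are +sqrt(s/m), 0, -sqrt(s/m)
    with probabilities 1/(2s), 1 - 1/s, 1/(2s), so most entries are zero."""
    rng = np.random.default_rng(seed)
    vals = rng.choice([1.0, 0.0, -1.0], size=(m, n),
                      p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])
    return np.sqrt(s / m) * vals

# Stand-in for PCANet features: 100 samples, 1024 dimensions each.
features = np.random.default_rng(2).standard_normal((100, 1024))
Phi = sparse_measurement_matrix(64, 1024)
reduced = features @ Phi.T    # 1024-dim features -> 64-dim, then train an SVM
print(reduced.shape)
```

Because the matrix is mostly zeros, the projection is cheap to store and apply, while distances between feature vectors are approximately preserved for the classifier.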
Statistical regularities of art images and natural scenes: spectra, sparseness and nonlinearities.
Graham, Daniel J; Field, David J
2007-01-01
Paintings are the product of a process that begins with ordinary vision in the natural world and ends with manipulation of pigments on canvas. Because artists must produce images that can be seen by a visual system that is thought to take advantage of statistical regularities in natural scenes, artists are likely to replicate many of these regularities in their painted art. We have tested this notion by computing basic statistical properties and modeled cell response properties for a large set of digitized paintings and natural scenes. We find that both representational and non-representational (abstract) paintings from our sample (124 images) show basic similarities to a sample of natural scenes in terms of their spatial frequency amplitude spectra, but the paintings and natural scenes show significantly different mean amplitude spectrum slopes. We also find that the intensity distributions of paintings show a lower skewness and sparseness than natural scenes. We account for this by considering the range of luminances found in the environment compared to the range available in the medium of paint. A painting's range is limited by the reflective properties of its materials. We argue that artists do not simply scale the intensity range down but use a compressive nonlinearity. In our studies, modeled retinal and cortical filter responses to the images were less sparse for the paintings than for the natural scenes. But when a compressive nonlinearity was applied to the images, both the paintings' sparseness and the modeled responses to the paintings showed the same or greater sparseness compared to the natural scenes. This suggests that artists achieve some degree of nonlinear compression in their paintings. Because paintings have captivated humans for millennia, finding basic statistical regularities in paintings' spatial structure could grant insights into the range of spatial patterns that humans find compelling.
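The effect of a compressive nonlinearity on a heavy-tailed intensity distribution can be illustrated with a toy calculation (excess kurtosis as the sparseness measure, a synthetic lognormal "luminance" distribution, and an assumed cube-root nonlinearity; none of these specific choices are taken from the paper):

```python
import numpy as np

def sparseness(x):
    """Excess kurtosis of a flattened distribution: higher values mean
    sparser, heavier-tailed responses."""
    z = (x.ravel() - x.mean()) / x.std()
    return np.mean(z ** 4) - 3.0

rng = np.random.default_rng(3)
# Heavy-tailed synthetic luminances, mimicking the broad range of natural scenes.
luminance = np.exp(rng.standard_normal(10_000))
compressed = luminance ** (1 / 3)   # a compressive (power-law) nonlinearity

print(sparseness(luminance), sparseness(compressed))
```

Compression pulls in the long tail, lowering the skewness and kurtosis of the intensity distribution, which is the direction of the difference reported between paintings and natural scenes.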
Hybrid sparse blind deconvolution: an implementation of SOOT algorithm to real data
NASA Astrophysics Data System (ADS)
Pakmanesh, Parvaneh; Goudarzi, Alireza; Kourki, Meisam
2018-06-01
Extracting information from seismic data depends on deconvolution as an important processing step; it provides the reflectivity series by signal compression. This compression can be obtained by removing the wavelet effects from the traces. Recently, blind deconvolution has provided reliable performance for sparse signal recovery. In this study, two deconvolution methods have been applied to seismic data; their combination provides a robust spiking deconvolution approach. This hybrid deconvolution chains the sparse deconvolution (MM algorithm) and the Smoothed One-Over-Two (SOOT) algorithm. The MM algorithm is based on the minimization of a cost function defined by the l1 and l2 norms. After applying the two algorithms to the seismic data, the SOOT algorithm provided well-compressed data with a higher resolution than the MM algorithm. For real data, the SOOT algorithm requires initial values, such as the wavelet coefficients and reflectivity series, which can be obtained through the MM algorithm. The computational cost of the hybrid method is high, and it should be applied to post-stack or pre-stack seismic data from regions of complex structure.
Fast and low-dose computed laminography using compressive sensing based technique
NASA Astrophysics Data System (ADS)
Abbas, Sajid; Park, Miran; Cho, Seungryong
2015-03-01
Computed laminography (CL) is well known for inspecting microstructures in materials, weldments, and soldering defects in high-density packed components or multilayer printed circuit boards. The overload problem on the x-ray tube and gross failure of radio-sensitive electronic devices during a scan are among the important issues in CL that need to be addressed. Sparse-view CL can be one of the viable options to overcome such issues. In this work, a numerical aluminum welding phantom was simulated to collect sparsely sampled projection data at only 40 views using a conventional CL scanning scheme, i.e., an oblique scan. A compressive-sensing-inspired total-variation (TV) minimization algorithm was utilized to reconstruct the images. It is found that the images reconstructed using sparse-view data are visually comparable with the images reconstructed using the full scan data set, i.e., 360 views at regular intervals. We have quantitatively confirmed that tiny structures such as copper and tungsten slags, and copper flakes, in the reconstructed images from sparsely sampled data are comparable with the corresponding structures in the fully sampled case. A blurring effect can be seen near the edges of a few pores at the bottom of the images reconstructed from sparsely sampled data, although the overall image quality is reasonable for fast and low-dose NDT.
Compressed learning and its applications to subcellular localization.
Zheng, Zhong-Long; Guo, Li; Jia, Jiong; Xie, Chen-Mao; Zeng, Wen-Cai; Yang, Jie
2011-09-01
One of the main challenges faced by biological applications is to predict protein subcellular localization automatically and accurately. To achieve this, a wide variety of machine learning methods have been proposed in recent years. Most of them focus on finding the optimal classification scheme, and fewer take simplifying the complexity of biological systems into account. Traditionally, such bio-data are analyzed by first performing feature selection before classification. Motivated by CS (Compressed Sensing) theory, we propose a methodology that performs compressed learning with a sparseness criterion such that feature selection and dimension reduction are merged into one analysis. The proposed methodology decreases the complexity of the biological system while increasing protein subcellular localization accuracy. Experimental results are quite encouraging, indicating that the aforementioned sparse methods are quite promising for dealing with complicated biological problems, such as predicting the subcellular localization of Gram-negative bacterial proteins.
An Online Dictionary Learning-Based Compressive Data Gathering Algorithm in Wireless Sensor Networks
Wang, Donghao; Wan, Jiangwen; Chen, Junying; Zhang, Qiang
2016-01-01
To adapt to sense signals of enormous diversities and dynamics, and to decrease the reconstruction errors caused by ambient noise, a novel online dictionary learning method-based compressive data gathering (ODL-CDG) algorithm is proposed. The proposed dictionary is learned from a two-stage iterative procedure, alternately changing between a sparse coding step and a dictionary update step. The self-coherence of the learned dictionary is introduced as a penalty term during the dictionary update procedure. The dictionary is also constrained with sparse structure. It’s theoretically demonstrated that the sensing matrix satisfies the restricted isometry property (RIP) with high probability. In addition, the lower bound of necessary number of measurements for compressive sensing (CS) reconstruction is given. Simulation results show that the proposed ODL-CDG algorithm can enhance the recovery accuracy in the presence of noise, and reduce the energy consumption in comparison with other dictionary based data gathering methods. PMID:27669250
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua
2014-10-01
Existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute, memorize, or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed that realizes compression and encryption simultaneously, with a key that is easily distributed, stored, or memorized. The input image is divided into four blocks to compress and encrypt; then the pixels of adjacent blocks are exchanged randomly by random matrices. The measurement matrices in compressive sensing are constructed by utilizing circulant matrices and controlling their original row vectors with the logistic map. The random matrices used in the random pixel exchange are bound to the measurement matrices. Simulation results verify the effectiveness and security of the proposed algorithm, as well as its acceptable compression performance.
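The key idea of regenerating a circulant measurement matrix from a short logistic-map key can be sketched as follows (the map parameter, seed, and matrix sizes are illustrative, not the paper's):

```python
import numpy as np
from scipy.linalg import circulant

def logistic_sequence(x0, n, mu=3.99, burn_in=100):
    """Chaotic logistic map x <- mu*x*(1-x); the key is just (x0, mu)."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = mu * x * (1 - x)
    seq = np.empty(n)
    for i in range(n):
        x = mu * x * (1 - x)
        seq[i] = x
    return seq

n, m = 16, 8                                # signal length, number of measurements
row = 2 * logistic_sequence(0.37, n) - 1    # map the chaotic values to [-1, 1]
Phi = circulant(row)[:m] / np.sqrt(m)       # keep m rows as the measurement matrix
print(Phi.shape)
```

Anyone holding the small key (x0, mu) can regenerate the full m x n measurement matrix exactly, which is why the key stays small enough to distribute and memorize.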
Sun, Jiedi; Yu, Yang; Wen, Jiangtao
2017-01-01
Remote monitoring of bearing conditions using wireless sensor networks (WSNs) is a developing trend in the industrial field. In complicated industrial environments, WSNs face three main constraints: low energy, limited memory, and low computational capability. Conventional data-compression methods, which concentrate on data compression only, cannot overcome these limitations. Aiming at these problems, this paper proposes a compressed data acquisition and reconstruction scheme based on Compressed Sensing (CS), a novel signal-processing technique, and applies it to bearing condition monitoring via a WSN. The compressed data acquisition is realized by projection transformation and can greatly reduce the data volume that the nodes need to process and transmit. The reconstruction of the original signals is achieved in the host computer by more complex algorithms. Bearing vibration signals not only exhibit sparsity, but also have specific structures. This paper introduces the block sparse Bayesian learning (BSBL) algorithm, which utilizes the block property and inherent structures of signals to reconstruct the CS sparsity coefficients of transform domains and further recover the original signals. By using BSBL, CS reconstruction can be improved remarkably. Experiments and analyses showed that the BSBL method has good performance and is suitable for practical bearing condition monitoring. PMID:28635623
Sparse Reconstruction Techniques in MRI: Methods, Applications, and Challenges to Clinical Adoption
Yang, Alice Chieh-Yu; Kretzler, Madison; Sudarski, Sonja; Gulani, Vikas; Seiberlich, Nicole
2016-01-01
The family of sparse reconstruction techniques, including the recently introduced compressed sensing framework, has been extensively explored to reduce scan times in Magnetic Resonance Imaging (MRI). While there are many different methods that fall under the general umbrella of sparse reconstructions, they all rely on the idea that a priori information about the sparsity of MR images can be employed to reconstruct full images from undersampled data. This review describes the basic ideas behind sparse reconstruction techniques, how they can be applied to improve MR imaging, and the open challenges to their general adoption in a clinical setting. The fundamental principles underlying different classes of sparse reconstruction techniques are examined, and the requirements that each makes on the undersampled data are outlined. Applications that could potentially benefit from the accelerations that sparse reconstructions provide are described, and clinical studies using sparse reconstructions are reviewed. Lastly, technical and clinical challenges to the widespread implementation of sparse reconstruction techniques, including optimization, reconstruction times, artifact appearance, and comparison with current gold standards, are discussed. PMID:27003227
Reconstruction of Complex Network based on the Noise via QR Decomposition and Compressed Sensing.
Li, Lixiang; Xu, Dafei; Peng, Haipeng; Kurths, Jürgen; Yang, Yixian
2017-11-08
It is generally known that the states of network nodes are stable and have strong correlations in a linear network system. We find that, without a control input, compressed sensing cannot succeed in reconstructing complex networks in which the states of nodes are generated through the linear network system. However, noise can drive the dynamics between nodes to break the stability of the system state. Therefore, a new method integrating QR decomposition and compressed sensing is proposed to solve the reconstruction problem of complex networks with the assistance of input noise. The state matrix of the system is decomposed by QR decomposition. We construct the measurement matrix with the aid of Gaussian noise so that the sparse input matrix can be reconstructed by compressed sensing. We also discover that noise can build a bridge between the dynamics and the topological structure. Experiments show that the proposed method is more accurate and more efficient than compressed sensing alone in reconstructing four model networks and six real networks. In addition, the proposed method can reconstruct not only sparse complex networks, but also dense complex networks.
Weiss, Christian; Zoubir, Abdelhak M
2017-05-01
We propose a compressed sampling and dictionary learning framework for fiber-optic sensing using wavelength-tunable lasers. A redundant dictionary is generated from a model for the reflected sensor signal. Imperfect prior knowledge is considered in terms of uncertain local and global parameters. To estimate a sparse representation and the dictionary parameters, we present an alternating minimization algorithm that is equipped with a preprocessing routine to handle dictionary coherence. The support of the obtained sparse signal indicates the reflection delays, which can be used to measure impairments along the sensing fiber. The performance is evaluated by simulations and experimental data for a fiber sensor system with common core architecture.
Analog system for computing sparse codes
Rozell, Christopher John; Johnson, Don Herrick; Baraniuk, Richard Gordon; Olshausen, Bruno A.; Ortman, Robert Lowell
2010-08-24
A parallel dynamical system for computing sparse representations of data, i.e., where the data can be fully represented in terms of a small number of non-zero code elements, and for reconstructing compressively sensed images. The system is based on the principles of thresholding and local competition, and solves a family of sparse approximation problems corresponding to various sparsity metrics. The system utilizes Locally Competitive Algorithms (LCAs), in which nodes in a population continually compete with neighboring units using (usually one-way) lateral inhibition to calculate coefficients representing an input in an overcomplete dictionary.
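A discrete-time sketch of the LCA dynamics (soft thresholding, corresponding to the l1 sparsity metric; the step size, dictionary, and signal are all illustrative assumptions, and the patented system is an analog circuit rather than this iteration):

```python
import numpy as np

def lca(D, y, lam=0.1, tau=0.1, n_iter=200):
    """Discrete-time Locally Competitive Algorithm for
    min_a 0.5*||y - D a||^2 + lam*||a||_1."""
    u = np.zeros(D.shape[1])              # internal node states
    G = D.T @ D - np.eye(D.shape[1])      # lateral inhibition weights
    b = D.T @ y                           # feed-forward drive
    soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    for _ in range(n_iter):
        a = soft(u)                       # thresholded node outputs
        u = u + tau * (b - u - G @ a)     # leaky integration + competition
    return soft(u)

rng = np.random.default_rng(4)
D = rng.standard_normal((20, 40))
D /= np.linalg.norm(D, axis=0)            # unit-norm dictionary atoms
a_true = np.zeros(40)
a_true[[3, 30]] = [1.5, -2.0]
a_hat = lca(D, D @ a_true)
print(np.flatnonzero(np.abs(a_hat) > 0.3))
```

Only nodes whose internal state exceeds the threshold produce output and inhibit their neighbors, so the population settles onto a small active set, which is the "local competition" the abstract describes.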
Gartsman, Gary M; Drake, Gregory; Edwards, T Bradley; Elkousy, Hussein A; Hammerman, Steven M; O'Connor, Daniel P; Press, Cyrus M
2013-11-01
The purpose of this study was to compare the structural outcomes of a single-row rotator cuff repair and double-row suture bridge fixation after arthroscopic repair of a full-thickness supraspinatus rotator cuff tear. We evaluated with diagnostic ultrasound a consecutive series of ninety shoulders in ninety patients with full-thickness supraspinatus tears at an average of 10 months (range, 6-12) after operation. A single surgeon at a single hospital performed the repairs. Inclusion criteria were full-thickness supraspinatus tears less than 25 mm in their anterior to posterior dimension. Exclusion criteria were prior operations on the shoulder, partial thickness tears, subscapularis tears, infraspinatus tears, combined supraspinatus and infraspinatus repairs and irreparable supraspinatus tears. Forty-three shoulders were repaired with single-row technique and 47 shoulders with double-row suture bridge technique. Postoperative rehabilitation was identical for both groups. Ultrasound criteria for healed repair included visualization of a tendon with normal thickness and length, and a negative compression test. Eighty-three patients were available for ultrasound examination (40 single-row and 43 suture-bridge). Thirty of 40 patients (75%) with single-row repair demonstrated a healed rotator cuff repair compared to 40/43 (93%) patients with suture-bridge repair (P = .024). Arthroscopic double-row suture bridge repair (transosseous equivalent) of an isolated supraspinatus rotator cuff tear resulted in a significantly higher tendon healing rate (as determined by ultrasound examination) when compared to arthroscopic single-row repair. Copyright © 2013 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.
Enhancement of snow cover change detection with sparse representation and dictionary learning
NASA Astrophysics Data System (ADS)
Varade, D.; Dikshit, O.
2014-11-01
Sparse representation and decoding are often used for denoising images and compressing images with respect to inherent features. In this paper, we adopt a methodology incorporating sparse representation of a snow cover change map using a K-SVD-trained dictionary and sparse decoding to enhance the change map. Pixels falsely characterized as "changes" are eliminated using this approach. The preliminary change map was generated using differenced NDSI or S3 maps in the case of Resourcesat-2 and Landsat 8 OLI imagery, respectively. These maps are extracted into patches for compressed sensing using the Discrete Cosine Transform (DCT) to generate an initial dictionary, which is trained by the K-SVD approach. The trained dictionary is used for sparse coding of the change map using the Orthogonal Matching Pursuit (OMP) algorithm. The reconstructed change map incorporates a greater degree of smoothing and represents the features (snow cover changes) with better accuracy. The enhanced change map is segmented using k-means to discriminate between the changed and non-changed pixels. The segmented enhanced change map is compared, firstly, with the difference of Support Vector Machine (SVM) classified NDSI maps and, secondly, with reference data generated as a mask by visual interpretation of the two input images. The methodology is evaluated using multi-spectral datasets from Resourcesat-2 and Landsat-8. The k-hat statistic is computed to determine the accuracy of the proposed approach.
Compressive Sensing with Cross-Validation and Stop-Sampling for Sparse Polynomial Chaos Expansions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik
Compressive sensing is a powerful technique for recovering sparse solutions of underdetermined linear systems, which are often encountered in uncertainty quantification analysis of expensive and high-dimensional physical models. We perform numerical investigations employing several compressive sensing solvers that target the unconstrained LASSO formulation, with a focus on linear systems that arise in the construction of polynomial chaos expansions. With core solvers of l1_ls, SpaRSA, CGIST, FPC_AS, and ADMM, we develop techniques to mitigate overfitting through an automated selection of the regularization constant based on cross-validation, and a heuristic strategy to guide the stop-sampling decision. Practical recommendations on parameter settings for these techniques are provided and discussed. The overall method is applied to a series of numerical examples of increasing complexity, including large eddy simulations of a supersonic turbulent jet-in-crossflow involving a 24-dimensional input. Through empirical phase-transition diagrams and convergence plots, we illustrate sparse recovery performance under structures induced by polynomial chaos, accuracy and computational tradeoffs between polynomial bases of different degrees, and the practicability of conducting compressive sensing for a realistic, high-dimensional physical application. Across the test cases studied in this paper, we find ADMM to demonstrate empirical advantages through consistently lower errors and faster computational times.
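The cross-validated choice of the LASSO regularization constant can be sketched with a plain ISTA solver standing in for the solvers named above (the holdout split, lambda grid, and problem sizes are all illustrative assumptions):

```python
import numpy as np

def ista(A, y, lam, n_iter=500):
    """Iterative soft thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L      # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # shrinkage
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((40, 80))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(80)
x_true[[2, 11, 50]] = [1.0, -2.0, 1.5]
y = A @ x_true + 0.05 * rng.standard_normal(40)

# Hold out a quarter of the measurements; pick the regularization constant
# with the smallest validation residual, mimicking the automated selection.
train, val = np.arange(30), np.arange(30, 40)
lams = [1.0, 0.3, 0.1, 0.03, 0.01]
errs = [np.linalg.norm(A[val] @ ista(A[train], y[train], lam) - y[val])
        for lam in lams]
best = lams[int(np.argmin(errs))]
print(best)
```

Too large a lambda over-shrinks the solution and too small a lambda overfits the noise; the held-out residual exposes both failure modes without touching the training measurements.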
Temporal flicker reduction and denoising in video using sparse directional transforms
NASA Astrophysics Data System (ADS)
Kanumuri, Sandeep; Guleryuz, Onur G.; Civanlar, M. Reha; Fujibayashi, Akira; Boon, Choong S.
2008-08-01
The bulk of the video content available today over the Internet and over mobile networks suffers from many imperfections caused during acquisition and transmission. In the case of user-generated content, which is typically produced with inexpensive equipment, these imperfections manifest in various ways through noise, temporal flicker and blurring, just to name a few. Imperfections caused by compression noise and temporal flicker are present in both studio-produced and user-generated video content transmitted at low bit-rates. In this paper, we introduce an algorithm designed to reduce temporal flicker and noise in video sequences. The algorithm takes advantage of the sparse nature of video signals in an appropriate transform domain that is chosen adaptively based on local signal statistics. When the signal corresponds to a sparse representation in this transform domain, flicker and noise, which are spread over the entire domain, can be reduced easily by enforcing sparsity. Our results show that the proposed algorithm reduces flicker and noise significantly and enables better presentation of compressed videos.
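The core mechanism in the abstract above, suppressing noise by enforcing sparsity in a transform domain, can be sketched as follows. This is illustrative only: a fixed orthonormal DCT stands in for the paper's adaptively chosen directional transforms, and a 1-D signal stands in for video data.

```python
# Illustrative sketch: when the clean signal is sparse in a transform domain,
# noise (spread over all coefficients) is reduced by hard thresholding there.
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: row k, column t."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= np.sqrt(1.0 / n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

def denoise(signal, thresh):
    C = dct_matrix(len(signal))
    coeffs = C @ signal                        # forward transform
    coeffs[np.abs(coeffs) < thresh] = 0.0      # enforce sparsity
    return C.T @ coeffs                        # inverse (C is orthonormal)

rng = np.random.default_rng(0)
n = 256
t = np.arange(n)
clean = np.cos(2 * np.pi * 4 * t / n) + 0.5 * np.cos(2 * np.pi * 9 * t / n)
noisy = clean + 0.3 * rng.standard_normal(n)
denoised = denoise(noisy, thresh=2.0)
```

The two large signal coefficients survive the threshold while the noise coefficients, each of small magnitude, are zeroed; the real algorithm additionally adapts the transform to local statistics.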
NASA Technical Reports Server (NTRS)
Tesch, W. A.; Moszee, R. H.; Steenken, W. G.
1976-01-01
NASA-developed stability and frequency response analysis techniques were applied to a dynamic blade-row compression-component stability model to provide a more economical approach to surge-line and frequency-response determination than that provided by time-dependent methods. This blade-row model was linearized and the Jacobian matrix was formed. The clean-inlet-flow stability characteristics of the compressors of two J85-13 engines were predicted by applying the alternate Routh-Hurwitz stability criterion to the Jacobian matrix. The predicted surge line agreed closely with the clean-inlet-flow surge line predicted by the time-dependent method, except for one engine at 94% corrected speed; no satisfactory explanation of this discrepancy was found. The frequency response of the linearized system was determined by evaluating its Laplace transfer function. The results of the linearized frequency-response analysis agree with the time-dependent results when the time-dependent inlet total-pressure and exit-flow-function amplitude boundary conditions are less than 1 percent and 3 percent, respectively. The stability analysis technique was extended to a two-sector parallel compressor model with and without interstage crossflow, and predictions were carried out for total-pressure distortion extents of 180, 90, 60, and 30 deg.
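The linearized-stability idea above (form the Jacobian of a dynamic model at an operating point, then test it) can be sketched generically. This is a hedged toy: a damped oscillator stands in for the blade-row compression model, and an eigenvalue test stands in for the Routh-Hurwitz criterion (the two agree on stability for linear systems).

```python
# Minimal sketch: numerical Jacobian of dx/dt = f(x) at an equilibrium,
# then a linear stability test (all eigenvalues must have negative real part).
import numpy as np

def numerical_jacobian(f, x0, eps=1e-6):
    """Forward finite-difference Jacobian of f at x0."""
    n = len(x0)
    J = np.zeros((n, n))
    f0 = f(x0)
    for j in range(n):
        xp = x0.copy()
        xp[j] += eps
        J[:, j] = (f(xp) - f0) / eps
    return J

def is_stable(J):
    """Linearly stable iff every eigenvalue of the Jacobian has Re < 0."""
    return bool(np.all(np.linalg.eigvals(J).real < 0))

# Toy system: damped oscillator, stable equilibrium at the origin.
f = lambda x: np.array([x[1], -2.0 * x[0] - 0.5 * x[1]])
J = numerical_jacobian(f, np.zeros(2))
stable = is_stable(J)
```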
Work Function of Oxide Ultrathin Films on the Ag(100) Surface.
Sementa, Luca; Barcaro, Giovanni; Negreiros, Fabio R; Thomas, Iorwerth O; Netzer, Falko P; Ferrari, Anna Maria; Fortunelli, Alessandro
2012-02-14
Theoretical calculations of the work function of monolayer (ML) and bilayer (BL) oxide films on the Ag(100) surface are reported and analyzed as a function of the nature of the oxide for first-row transition metals. The contributions due to charge compression, charge transfer and rumpling are singled out. It is found that the presence of empty d-orbitals in the oxide metal can entail a charge flow from the Ag(100) surface to the oxide film which counteracts the decrease in the work function due to charge compression. This flow can also depend on the thickness of the film and be reduced in passing from ML to BL systems. A regular trend is observed along first-row transition metals, exhibiting a maximum for CuO, in which the charge flow to the oxide is so strong as to reverse the direction of rumpling. A simple protocol to estimate separately the contribution due to charge compression is discussed, and the difference between the work function of the bare metal surface and a Pauling-like electronegativity of the free oxide slabs is used as a descriptor quantity to predict the direction of charge transfer.
Sparse Representation for Color Image Restoration (PREPRINT)
2006-10-01
as a universal denoiser of images, which learns the posterior from the given image in a way inspired by the Lempel-Ziv universal compression ...such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such sources of data. In...describe the data source. Such a model becomes paramount when developing algorithms for processing these signals. In this context, Markov-Random-Field
2015-06-01
of uniform- versus nonuniform-pattern reconstruction, of transform function used, and of minimum randomly distributed measurements needed to...the radiation-frequency pattern's reconstruction using uniform and nonuniform randomly distributed samples even though the pattern error manifests...Fig. 3 The nonuniform compressive-sensing reconstruction of the radiation
Leveraging EAP-Sparsity for Compressed Sensing of MS-HARDI in (k, q)-Space.
Sun, Jiaqi; Sakhaee, Elham; Entezari, Alireza; Vemuri, Baba C
2015-01-01
Compressed Sensing (CS) for the acceleration of MR scans has been widely investigated in the past decade. Lately, considerable progress has been made in achieving similar speed-ups in acquiring multi-shell high angular resolution diffusion imaging (MS-HARDI) scans. Existing approaches in this context were primarily concerned with sparse reconstruction of the diffusion MR signal S(q) in the q-space. More recently, methods have been developed to apply the compressed sensing framework to the 6-dimensional joint (k, q)-space, thereby exploiting the redundancy in this 6D space. To guarantee accurate reconstruction from partial MS-HARDI data, the key ingredients of compressed sensing that need to be brought together are: (1) the function to be reconstructed needs to have a sparse representation, (2) the data for reconstruction ought to be acquired in the dual domain (i.e., incoherent sensing), and (3) the reconstruction process involves a (convex) optimization. In this paper, we present a novel approach that uses partial Fourier sensing in the 6D space of (k, q) for the reconstruction of P(x, r). The distinct feature of our approach is a sparsity model that leverages surfacelets in conjunction with total variation for the joint sparse representation of P(x, r). Thus, our method stands to benefit from the practical guarantees for accurate reconstruction from partial (k, q)-space data. Further, we demonstrate significant savings in acquisition time over diffusion spectral imaging (DSI), which is commonly used as the benchmark for comparisons in the reported literature. To demonstrate the benefits of this approach, we present several synthetic and real data examples.
Ice Loads and Ship Response to Ice. Summer 1982/Winter 1983 Test Program
1984-12-01
approximately 100 ft2 (9.2 m2) was instrumented to measure ice pressures by measuring compressive strains in the webs of transverse frames. The panel...compressive strains in the webs of transverse frames. The panel was divided into 60 sub-panel areas, six rows of ten frames, over which uniform pressures...the Web and the Selection of Gage Spacing...Across the Frame Influence on Strain...Construction of the Data
NASA Astrophysics Data System (ADS)
Hu, Guiqiang; Xiao, Di; Wang, Yong; Xiang, Tao; Zhou, Qing
2017-11-01
Recently, a new kind of image encryption approach using compressive sensing (CS) and double random phase encoding has received much attention due to advantages such as compressibility and robustness. However, this approach is found to be vulnerable to chosen-plaintext attack (CPA) if the CS measurement matrix is re-used. Therefore, designing an efficient measurement matrix updating mechanism that ensures resistance to CPA is of practical significance. In this paper, we provide a novel solution to update the CS measurement matrix by altering the secret sparse basis with the help of counter-mode operation. In particular, the secret sparse basis is implemented by a reality-preserving fractional cosine transform matrix. Compared with the conventional CS-based cryptosystem that regenerates all the random entries of the measurement matrix, our scheme has an efficiency advantage while guaranteeing resistance to CPA. Experimental and analysis results show that the proposed scheme has good security performance and is robust against noise and occlusion.
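The counter-mode updating idea above can be sketched abstractly: derive a fresh secret transformation from a key and an incrementing counter, so the effective measurement matrix differs on every use while both sides can regenerate it. This is a hypothetical illustration; the paper's reality-preserving fractional cosine transform is not modeled, and a keyed permutation stands in for the secret sparse basis.

```python
# Hedged sketch of counter-mode key material: SHA-256(key || counter) drives a
# deterministic Fisher-Yates shuffle, giving a per-use secret permutation.
import hashlib

def keyed_stream(key: bytes, counter: int, n: int) -> bytes:
    """Deterministic byte stream from SHA-256(key || counter || block index)."""
    out = b""
    block = 0
    while len(out) < n:
        msg = key + counter.to_bytes(8, "big") + block.to_bytes(4, "big")
        out += hashlib.sha256(msg).digest()
        block += 1
    return out[:n]

def keyed_permutation(key: bytes, counter: int, n: int):
    """Fisher-Yates shuffle driven by the keyed stream."""
    stream = keyed_stream(key, counter, 4 * n)
    perm = list(range(n))
    for i in range(n - 1, 0, -1):
        r = int.from_bytes(stream[4 * i:4 * i + 4], "big")
        j = r % (i + 1)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

key = b"secret-key"
p0 = keyed_permutation(key, 0, 16)        # first encryption
p1 = keyed_permutation(key, 1, 16)        # counter incremented: new basis
p0_again = keyed_permutation(key, 0, 16)  # receiver regenerates the same basis
```

Regeneration from (key, counter) is what avoids transmitting or storing a full fresh random matrix per use, which is the efficiency point the abstract makes.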
Blind compressive sensing dynamic MRI
Lingala, Sajan Goud; Jacob, Mathews
2013-01-01
We propose a novel blind compressive sensing (BCS) framework to recover dynamic magnetic resonance images from undersampled measurements. This scheme models the dynamic signal as a sparse linear combination of temporal basis functions, chosen from a large dictionary. In contrast to classical compressed sensing, the BCS scheme simultaneously estimates the dictionary and the sparse coefficients from the undersampled measurements. Apart from the sparsity of the coefficients, the key difference of the BCS scheme from current low-rank methods is the non-orthogonal nature of the dictionary basis functions. Since the number of degrees of freedom of the BCS model is smaller than that of the low-rank methods, it provides improved reconstructions at high acceleration rates. We formulate the reconstruction as a constrained optimization problem; the objective function is the linear combination of a data consistency term and a sparsity-promoting ℓ1 prior on the coefficients. A Frobenius norm constraint on the dictionary is used to avoid scale ambiguity. We introduce a simple and efficient majorize-minimize algorithm, which decouples the original criterion into three simpler subproblems. An alternating minimization strategy is used, where we cycle through the minimization of the three simpler problems. This algorithm is considerably faster than approaches that alternate between sparse coding and dictionary estimation, as well as the extension of the K-SVD dictionary learning scheme. The use of the ℓ1 penalty and the Frobenius norm dictionary constraint enables the attenuation of insignificant basis functions, compared to the ℓ0 norm and column norm constraint assumed in most dictionary learning algorithms; this is especially important since the number of basis functions that can be reliably estimated is restricted by the available measurements. We also observe that the proposed scheme is more robust to local minima than the K-SVD method, which relies on greedy sparse coding.
Our phase transition experiments demonstrate that the BCS scheme provides much better recovery rates than classical Fourier-based CS schemes, while being only marginally worse than the dictionary aware setting. Since the overhead in additionally estimating the dictionary is low, this method can be very useful in dynamic MRI applications, where the signal is not sparse in known dictionaries. We demonstrate the utility of the BCS scheme in accelerating contrast enhanced dynamic data. We observe superior reconstruction performance with the BCS scheme in comparison to existing low rank and compressed sensing schemes. PMID:23542951
Robust Multi Sensor Classification via Jointly Sparse Representation
2016-03-14
rank, sensor network, dictionary learning ...with ultrafast laser pulses, Optics Express, (04 2015): 10521. Xiaoxia Sun, Nasser M. Nasrabadi, Trac D. Tran. Task-Driven Dictionary Learning...in dictionary design, compressed sensor design, and optimization in sparse recovery also helps. We are able to advance the state of the art
A New Species of Culex (melanoconion) from Southern South America (Diptera: Culicidae)
1984-01-01
length about 1.7 mm. Proboscis with false joint about 0.6 from base. Maxillary palpus entirely dark; length about 2.3 mm, exceeding proboscis...rows of small setae extending from base to level of subapical lobe, lateral surface with patch of short sparse setae (lsp, Fig. 3) at level of...anterior margin thickened, dorsal end narrowly fused to base of lateral plate; distal part of lateral plate with apical, ventral and lateral
Reconstructing Interlaced High-Dynamic-Range Video Using Joint Learning.
Inchang Choi; Seung-Hwan Baek; Kim, Min H
2017-11-01
For extending the dynamic range of video, it is common practice to capture multiple frames sequentially with different exposures and combine them to extend the dynamic range of each video frame. However, this approach results in the typical ghosting artifacts caused by fast and complex motion. As an alternative, video imaging with interlaced exposures has been introduced to extend the dynamic range. However, the interlaced approach has been hindered by jaggy artifacts and sensor noise, leading to concerns over image quality. In this paper, we propose a data-driven approach for jointly solving the two specific problems of deinterlacing and denoising that arise in interlaced video imaging with different exposures. First, we solve the deinterlacing problem using joint dictionary learning via sparse coding. Since partial detail information in differently exposed rows is often available via interlacing, we make use of this information to reconstruct details of the extended dynamic range from the interlaced video input. Second, we jointly solve the denoising problem by tailoring sparse coding to better handle additive noise in low-/high-exposure rows, and we also apply multiscale homography flow to temporal sequences for denoising. We anticipate that the proposed method will allow for concurrent capture of higher-dynamic-range video frames without suffering from ghosting artifacts. We demonstrate the advantages of our interlaced video imaging compared with state-of-the-art high-dynamic-range video methods.
Experimental scheme and restoration algorithm of block compression sensing
NASA Astrophysics Data System (ADS)
Zhang, Linxia; Zhou, Qun; Ke, Jun
2018-01-01
Compressed sensing (CS) can use the sparseness of a target to obtain its image with much less data than that defined by the Nyquist sampling theorem. In this paper, we study the hardware implementation of a block compressed sensing system and its reconstruction algorithms. Different block sizes are used. Two algorithms, orthogonal matching pursuit (OMP) and total variation (TV) minimization, are used to obtain good reconstructions. The influence of block size on reconstruction is also discussed.
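Of the two reconstruction algorithms named above, OMP is compact enough to sketch. This is a generic, hedged illustration of greedy sparse recovery from y = A x; the block-based measurement hardware studied in the paper is not modeled.

```python
# Minimal orthogonal matching pursuit: greedily pick the dictionary column most
# correlated with the residual, then re-fit by least squares on the support.
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x from noiseless measurements y = A x."""
    residual = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # orthogonalized residual
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
A /= np.linalg.norm(A, axis=0)                       # unit-norm columns
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [3.0, -2.5, 2.0]               # 3-sparse ground truth
y = A @ x_true
x_hat = omp(A, y, k=3)
```

With 50 random measurements of a 3-sparse, 100-dimensional signal, the greedy selection finds the true support and the least-squares step recovers the amplitudes exactly (up to floating point).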
A Digital Compressed Sensing-Based Energy-Efficient Single-Spot Bluetooth ECG Node
Luo, Kan; Cai, Zhipeng; Du, Keqin; Zou, Fumin; Zhang, Xiangyu; Li, Jianqing
2018-01-01
Energy efficiency is still the main obstacle to long-term, real-time wireless ECG monitoring. In this paper, a digital compressed sensing- (CS-) based single-spot Bluetooth ECG node is proposed to deal with this challenge in wireless ECG application. A periodic sleep/wake-up scheme and a CS-based compression algorithm are implemented in a node, which consists of an ultra-low-power analog front-end, microcontroller, Bluetooth 4.0 communication module, and so forth. The efficiency improvement and the node's specifics are evidenced by experiments using ECG signals sampled by the proposed node under the daily activities of lying, sitting, standing, walking, and running. Using a sparse binary matrix (SBM), the block sparse Bayesian learning (BSBL) method, and a discrete cosine transform (DCT) basis, all ECG signals were recovered essentially undistorted, with percentage root-mean-square differences (PRDs) of less than 6%. The proposed sleep/wake-up scheme and data compression reduce the airtime over energy-hungry wireless links; the energy consumption of the proposed node is 6.53 mJ, and the energy consumption of the radio decreases by 77.37%. Moreover, the energy consumption increase caused by CS code execution is negligible, at 1.3% of the total energy consumption. PMID:29599945
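The acquisition side of the node above can be sketched: a sparse binary matrix (SBM) compresses each signal window using only a handful of additions per sample, which is what makes CS attractive on a low-power node. This is a hedged illustration with made-up dimensions; the BSBL recovery stage and the real ECG data are not reproduced.

```python
# Sketch: sparse binary measurement matrix with d ones per column, so computing
# y = Phi @ x costs just d additions per input sample (no multiplies).
import numpy as np

def sparse_binary_matrix(m, n, d, seed=0):
    """m x n measurement matrix with exactly d ones in each column."""
    rng = np.random.default_rng(seed)
    Phi = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=d, replace=False)
        Phi[rows, j] = 1.0
    return Phi

n, m, d = 512, 128, 4                  # 4x compression, 4 additions per sample
Phi = sparse_binary_matrix(m, n, d)
t = np.arange(n)
ecg_like = np.sin(2 * np.pi * 3 * t / n) * np.exp(-((t - 256) / 40.0) ** 2)
y = Phi @ ecg_like                     # compressed block sent over the radio
ratio = n / m
```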
Predefined Redundant Dictionary for Effective Depth Maps Representation
NASA Astrophysics Data System (ADS)
Sebai, Dorsaf; Chaieb, Faten; Ghorbel, Faouzi
2016-01-01
The multi-view video plus depth (MVD) video format consists of two components: texture and depth map, where a combination of these components enables a receiver to generate arbitrary virtual views. However, MVD is a very voluminous video format that requires compression for storage and especially for transmission. Conventional codecs are efficient for texture image compression but are not matched to the intrinsic properties of depth maps. Depth images are indeed characterized by areas of smoothly varying grey levels separated by sharp discontinuities at object boundaries. Preserving these characteristics is important to enable high-quality view synthesis at the receiver side. In this paper, sparse representation of depth maps is discussed. It is shown that a significant gain in sparsity is achieved when particular mixed dictionaries are used for approximating these types of images with greedy selection strategies. Experiments are conducted to confirm the effectiveness at producing sparse representations, and competitiveness with respect to candidate state-of-the-art dictionaries. Finally, the resulting method is shown to be effective for depth map compression and to represent an advantage over the ongoing 3D high efficiency video coding compression standard, particularly at medium and high bitrates.
Simulation of 3-D viscous compressible flow in multistage turbomachinery by finite element methods
NASA Astrophysics Data System (ADS)
Sleiman, Mohamad
1999-11-01
The flow in a multistage turbomachinery blade row is compressible, viscous, and unsteady. Complex flow features such as boundary layers, wake migration from upstream blade rows, shocks, tip leakage jets, and vortices interact as the flow convects through the stages. These interactions contribute significantly to the aerodynamic losses of the system and degrade the performance of the machine. The unsteadiness also leads to blade vibration and a shortening of blade life. It is therefore difficult to optimize the design of a blade row, whether aerodynamically or structurally, in isolation, without accounting for the effects of the upstream and downstream rows. The effects of axial spacing, blade count, clocking (the relative position of follow-up rotors with respect to wakes shed by upstream ones), and levels of unsteadiness may have a significant impact on performance and durability. In this Thesis, finite element formulations for the simulation of multistage turbomachinery are presented in terms of the Reynolds-averaged Navier-Stokes equations for three-dimensional steady or unsteady, viscous, compressible, turbulent flows. Three methodologies are presented and compared. First, a steady multistage analysis using a mixing-plane model was implemented and validated against engine data. For axial machines, the mixing-plane simulations match the experimental data very well. However, the results for a centrifugal stage, consisting of an impeller followed by a vaned diffuser of equal pitch, are flagrantly inconsistent with engine performance data, indicating that the mixing-plane method is inappropriate for centrifugal machines. Following these findings, a more complete unsteady multistage model was devised for a configuration with equal numbers of rotor and stator blades (equal pitches).
Non-matching grids are used at the rotor-stator interface, and an implicit interpolation procedure is devised to ensure continuity of fluxes across it. This permits the rotor and stator equations to be solved in a fully coupled manner, allowing larger time steps in attaining a time-periodic solution. This equal-pitch approach has been validated on the complex geometry of a centrifugal stage. Finally, for a stage configuration with unequal pitches, the time-inclined method, developed by Giles (1991) for 2-D viscous compressible flow, has been extended to 3-D and formulated in terms of the physical solution vector U, rather than the non-physical vector Q. The method has been evaluated for unsteady flow through a rotor blade passage of the power turbine of a turboprop.
A compressed sensing X-ray camera with a multilayer architecture
NASA Astrophysics Data System (ADS)
Wang, Zhehui; Iaroshenko, O.; Li, S.; Liu, T.; Parab, N.; Chen, W. W.; Chu, P.; Kenyon, G. T.; Lipton, R.; Sun, K.-X.
2018-01-01
Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, a signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following holds: (a) the number of pixels with true X-ray hits is much smaller than the total number of pixels; (b) the X-ray information is redundant; or (c) some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion of signal-to-noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational imaging techniques is expected to facilitate the development and application of high-speed X-ray camera technology.
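The ROPS idea above, reading out only a random subset of pixel addresses when hits are sparse, can be sketched in software. This is an illustration only; the on-chip circuitry and the reconstruction stage are not modeled, and the image and sampling fraction are made up.

```python
# Sketch of random on-board pixel sampling (ROPS): record (address, value)
# pairs for a random subset of pixels instead of reading the full frame.
import numpy as np

def rops_sample(image, fraction, seed=0):
    """Return (addresses, values) for a random subset of the pixels."""
    rng = np.random.default_rng(seed)
    n = image.size
    k = int(fraction * n)
    addr = rng.choice(n, size=k, replace=False)   # distinct pixel addresses
    return addr, image.ravel()[addr]

image = np.zeros((64, 64))
image[30:34, 30:34] = 100.0          # a small, sparse X-ray "hit"
addr, vals = rops_sample(image, fraction=0.25)   # read out 25% of the pixels
```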
NASA Technical Reports Server (NTRS)
Vandermey, Nancy E.; Morris, Don H.; Masters, John E.
1991-01-01
Damage initiation and growth under compression-compression fatigue loading were investigated for a stitched uniweave material system with an underlying AS4/3501-6 quasi-isotropic layup. The performance of unnotched specimens having stitch rows at either 0 degrees or 90 degrees to the loading direction was compared. Special attention was given to the effects of stitching-related manufacturing defects. Damage evaluation techniques included edge replication, stiffness monitoring, x-ray radiography, residual compressive strength, and laminate sectioning. It was found that the manufacturing defect of inclined stitches had the greatest adverse effect on material performance. The 0-degree and 90-degree specimen performances were generally the same. While the stitches were the source of damage initiation, they also slowed damage propagation both along the length and across the width, and affected through-the-thickness damage growth. A pinched layer zone formed by the stitches particularly affected damage initiation and growth. The compressive failure mode was transverse shear for all specimens, in both static compression and fatigue cycling.
Smith, Geoffrey C S; Bouwmeester, Theresia M; Lam, Patrick H
2017-12-01
In double-row SutureBridge (Arthrex, Naples, FL, USA) rotator cuff repairs, increasing tendon load may generate progressively greater compression forces at the repair footprint (self-reinforcement). SutureBridge rotator cuff repairs using tied horizontal mattress sutures medially may limit this effect compared with a knotless construct. Rotator cuff repairs were performed in 9 pairs of ovine shoulders. One group underwent repair with a double-row SutureBridge construct with tied horizontal medial-row mattress sutures. The other group underwent repair in an identical fashion except that medial-row knots were not tied. Footprint contact pressure was measured at 0° and 20° of abduction under loads of 0 to 60 N. Pull-to-failure tests were then performed. In both repair constructs, each 10-N increase in rotator cuff tensile load led to a significant increase in footprint contact pressure (P < .0001). The rate of increase in footprint contact pressure was greater in the knotless construct (P < .00022; ratio, 1.69). The yield point approached the ultimate load to failure more closely in the knotless model than in the knotted construct (P = .00094). There was no difference in stiffness, ultimate failure load, or total energy to failure between the knotless and knotted techniques. In rotator cuff repair with a double-row SutureBridge configuration, self-reinforcement is seen in repairs with and without medial-row knots. Self-reinforcement is greater with the knotless technique. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
BCYCLIC: A parallel block tridiagonal matrix cyclic solver
NASA Astrophysics Data System (ADS)
Hirshman, S. P.; Perumalla, K. S.; Lynch, V. E.; Sanchez, R.
2010-09-01
A block tridiagonal matrix is factored with minimal fill-in using a cyclic reduction algorithm that is easily parallelized. Storage of the factored blocks allows the application of the inverse to multiple right-hand sides which may not be known at factorization time. Scalability with the number of block rows is achieved with cyclic reduction, while scalability with the block size is achieved using multithreaded routines (OpenMP, GotoBLAS) for block matrix manipulation. This dual scalability is a noteworthy feature of this new solver, as well as its ability to efficiently handle arbitrary (non-powers-of-2) block row and processor numbers. Comparison with a state-of-the art parallel sparse solver is presented. It is expected that this new solver will allow many physical applications to optimally use the parallel resources on current supercomputers. Example usage of the solver in magneto-hydrodynamic (MHD), three-dimensional equilibrium solvers for high-temperature fusion plasmas is cited.
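The cyclic (odd-even) reduction at the heart of the solver above can be sketched in its scalar form: eliminate the odd-indexed unknowns, recurse on the half-size even-indexed system, then back-substitute. This is a hedged serial illustration; BCYCLIC's block arithmetic, parallel distribution, and multithreaded BLAS are not reproduced.

```python
# Scalar cyclic reduction for a tridiagonal system T x = d. Each level halves
# the system size; BCYCLIC does the same with dense blocks in parallel.
import numpy as np

def cr_solve(a, b, c, d):
    """a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal (c[-1] unused)."""
    a, b, c, d = [np.asarray(v, dtype=float) for v in (a, b, c, d)]
    n = len(b)
    if n == 1:
        return np.array([d[0] / b[0]])
    na, nb, nc, nd = [], [], [], []      # reduced system on the even indices
    for i in range(0, n, 2):
        alpha = a[i] / b[i - 1] if i - 1 >= 0 else 0.0
        beta = c[i] / b[i + 1] if i + 1 < n else 0.0
        na.append(-alpha * a[i - 1] if i - 1 >= 1 else 0.0)
        nc.append(-beta * c[i + 1] if i + 1 <= n - 2 else 0.0)
        nb.append(b[i]
                  - (alpha * c[i - 1] if i - 1 >= 0 else 0.0)
                  - (beta * a[i + 1] if i + 1 < n else 0.0))
        nd.append(d[i]
                  - (alpha * d[i - 1] if i - 1 >= 0 else 0.0)
                  - (beta * d[i + 1] if i + 1 < n else 0.0))
    x = np.zeros(n)
    x[0::2] = cr_solve(na, nb, nc, nd)   # recurse on the half-size system
    for i in range(1, n, 2):             # back-substitute the odd unknowns
        upper = c[i] * x[i + 1] if i + 1 < n else 0.0
        x[i] = (d[i] - a[i] * x[i - 1] - upper) / b[i]
    return x

rng = np.random.default_rng(0)
n = 9                                    # arbitrary (non-power-of-2) size
b = 4.0 + rng.random(n)                  # diagonally dominant, well conditioned
a, c, d = rng.random(n), rng.random(n), rng.random(n)
x = cr_solve(a, b, c, d)
T = np.diag(b) + np.diag(a[1:], k=-1) + np.diag(c[:-1], k=1)
residual = float(np.linalg.norm(T @ x - d))
```

Note the arbitrary system size, echoing the paper's point that the method need not be restricted to power-of-2 row counts.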
2008-06-01
rhizoids (hair-like filaments) at the base, all along the stem, or as clusters, and the rhizoids may be dense or sparse, colored or colorless (appearing...colorless rhizoids. They will only be present on the ventral side. The leaf arrangement is called succubous when the forward edge of a leaf (as viewed...camouflaged by rhizoids. Leaves of Blepharostoma trichophyllum. The leaves of Blepharostoma are very small and in three rows, and they look like
Compressed normalized block difference for object tracking
NASA Astrophysics Data System (ADS)
Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge
2018-04-01
Feature extraction is very important for robust and real-time tracking. Compressive sensing provides technical support for real-time feature extraction. However, existing compressive trackers have been based on compressed Haar-like features, and how to compress many more excellent high-dimensional features is worth researching. In this paper, a novel compressed normalized block difference (CNBD) feature is proposed. To resist noise effectively in a high-dimensional normalized pixel difference (NPD) feature, the normalized block difference feature extends the two pixels in the original NPD formula to two blocks. A CNBD feature is obtained by compressing a normalized block difference feature based on compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than the other trackers, especially the FCT tracker based on compressed Haar-like features, in terms of AUC, SR, and precision.
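The feature pipeline described above can be sketched in two steps: block-difference features over a patch, then compression by a random projection. This is a hedged approximation; the exact block layout, the sparse structure of the paper's measurement matrix, and the tracker itself are not reproduced, and a dense Gaussian matrix stands in for the sparse one.

```python
# Sketch: normalized differences between block means (the NPD formula
# (a - b) / (a + b) lifted from pixels to blocks), then a random projection.
import numpy as np

def nbd_features(patch, block=4):
    """Normalized differences between mean intensities of all block pairs."""
    h, w = patch.shape
    means = patch.reshape(h // block, block, w // block, block).mean(axis=(1, 3)).ravel()
    feats = []
    for i in range(len(means)):
        for j in range(i + 1, len(means)):
            s = means[i] + means[j]
            feats.append((means[i] - means[j]) / s if s > 0 else 0.0)
    return np.array(feats)

rng = np.random.default_rng(0)
patch = rng.random((16, 16))
f = nbd_features(patch)          # 16 blocks -> C(16, 2) = 120 pairwise features
m = 20
Phi = rng.standard_normal((m, len(f))) / np.sqrt(m)   # measurement matrix
compressed = Phi @ f             # 120-dim feature compressed to 20 dims
```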
NASA Astrophysics Data System (ADS)
Li, Jun; Song, Minghui; Peng, Yuanxi
2018-03-01
Current infrared and visible image fusion methods do not achieve adequate information extraction, i.e., they cannot extract the target information from infrared images while retaining the background information from visible images. Moreover, most of them have high complexity and are time-consuming. This paper proposes an efficient image fusion framework for infrared and visible images on the basis of robust principal component analysis (RPCA) and compressed sensing (CS). The novel framework consists of three phases. First, RPCA decomposition is applied to the infrared and visible images to obtain their sparse and low-rank components, which represent the salient features and background information of the images, respectively. Second, the sparse and low-rank coefficients are fused by different strategies. On the one hand, measurements of the sparse coefficients are obtained with a random Gaussian matrix and fused by a standard deviation (SD) based fusion rule; the fused sparse component is then obtained by reconstructing the result of the fused measurement using the fast continuous linearized augmented Lagrangian algorithm (FCLALM). On the other hand, the low-rank coefficients are fused using the max-absolute rule. Subsequently, the fused image is obtained by superposing the fused sparse and low-rank components. For comparison, several popular fusion algorithms are tested experimentally. By comparing the fused results subjectively and objectively, we find that the proposed framework can extract the infrared targets while retaining the background information in the visible images. Thus, it exhibits state-of-the-art performance in terms of both fusion effects and timeliness.
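The two fusion rules named above can be sketched on toy coefficient arrays. This is a hedged illustration of the rules only; the RPCA decomposition, the CS measurement step, and the FCLALM reconstruction are not reproduced, and all arrays are made-up examples.

```python
# Sketch: an SD-weighted rule (sources with more variation contribute more)
# and a max-absolute rule (keep the stronger coefficient, element-wise).
import numpy as np

def fuse_sd(c1, c2):
    """Weight each source by the standard deviation of its coefficients."""
    s1, s2 = np.std(c1), np.std(c2)
    return (s1 * c1 + s2 * c2) / (s1 + s2)

def fuse_max_abs(c1, c2):
    """Keep, per coefficient, whichever source has the larger magnitude."""
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)

sparse_ir = np.array([0.0, 5.0, 0.0, -3.0])   # salient infrared detail
sparse_vis = np.array([1.0, 0.0, 0.5, 0.0])
low_ir = np.array([0.2, 0.1, 0.3, 0.2])       # background components
low_vis = np.array([0.5, 0.4, 0.1, 0.6])

fused_sparse = fuse_sd(sparse_ir, sparse_vis)
fused_low = fuse_max_abs(low_ir, low_vis)
```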
Mechanical stability of ordered droplet packings in microfluidic channels
NASA Astrophysics Data System (ADS)
Fleury, Jean-Baptiste; Claussen, Ohle; Herminghaus, Stephan; Brinkmann, Martin; Seemann, Ralf
2011-12-01
The mechanical response and stability of one- and two-row packings of monodisperse emulsion droplets are studied in quasi-2D microchannels under longitudinal compression. Depending on the choice of parameters, a given droplet arrangement is either transformed continuously into another packing under longitudinal compression or becomes mechanically unstable and segregates into domains of higher and lower packing fraction. Our experimental results are compared to analytical calculations for 2D droplet arrangements with good quantitative agreement. This study also predicts important consequences for the stability of droplet arrangements in flowing systems.
Gas turbine engine with radial diffuser and shortened mid section
Charron, Richard C.; Montgomery, Matthew D.
2015-09-08
An industrial gas turbine engine (10), including: a can annular combustion assembly (80), having a plurality of discrete flow ducts configured to receive combustion gas from respective combustors (82) and deliver the combustion gas along a straight flow path at a speed and orientation appropriate for delivery directly onto the first row (56) of turbine blades (62); and a compressor diffuser (32) having a redirecting surface (130, 140) configured to receive an axial flow of compressed air and redirect the axial flow of compressed air radially outward.
Compressed sensing system considerations for ECG and EMG wireless biosensors.
Dixon, Anna M R; Allstot, Emily G; Gangopadhyay, Daibashish; Allstot, David J
2012-04-01
Compressed sensing (CS) is an emerging signal processing paradigm that enables sub-Nyquist processing of sparse signals such as electrocardiogram (ECG) and electromyogram (EMG) biosignals. Consequently, it can be applied to biosignal acquisition systems to reduce the data rate to realize ultra-low-power performance. CS is compared to conventional and adaptive sampling techniques and several system-level design considerations are presented for CS acquisition systems including sparsity and compression limits, thresholding techniques, encoder bit-precision requirements, and signal recovery algorithms. Simulation studies show that compression factors greater than 16X are achievable for ECG and EMG signals with signal-to-quantization noise ratios greater than 60 dB.
Block sparsity-based joint compressed sensing recovery of multi-channel ECG signals.
Singh, Anurag; Dandapat, Samarendra
2017-04-01
In recent years, compressed sensing (CS) has emerged as an effective alternative to conventional wavelet based data compression techniques. This is due to its simple and energy-efficient data reduction procedure, which makes it suitable for resource-constrained wireless body area network (WBAN)-enabled electrocardiogram (ECG) telemonitoring applications. Both spatial and temporal correlations exist simultaneously in multi-channel ECG (MECG) signals. Exploitation of both types of correlations is very important in CS-based ECG telemonitoring systems for better performance. However, most of the existing CS-based works exploit only one of the correlations, which results in suboptimal performance. In this work, within a CS framework, the authors propose to exploit both types of correlations simultaneously using a sparse Bayesian learning-based approach. A spatiotemporal sparse model is employed for joint compression/reconstruction of MECG signals. Discrete wavelet transform-domain block sparsity of MECG signals is exploited for simultaneous reconstruction of all the channels. Performance evaluations using the Physikalisch-Technische Bundesanstalt MECG diagnostic database show a significant gain in the diagnostic reconstruction quality of the MECG signals compared with state-of-the-art techniques at a reduced number of measurements. The low measurement requirement may lead to significant savings in the energy cost of existing CS-based WBAN systems.
Air-propelled abrasive grit for postemergence in-row weed control in field corn
USDA-ARS?s Scientific Manuscript database
Organic growers need additional tools for weed control. A new technique involving abrasive grit propelled by compressed air was tested in field plots. Grit derived from corn cobs was directed at seedlings of summer annual weeds growing at the bases of corn plants when the corn was at differing early...
NASA Astrophysics Data System (ADS)
Wang, Yihan; Lu, Tong; Wan, Wenbo; Liu, Lingling; Zhang, Songhe; Li, Jiao; Zhao, Huijuan; Gao, Feng
2018-02-01
To fully realize the potential of photoacoustic tomography (PAT) in preclinical and clinical applications, rapid measurements and robust reconstructions are needed. Sparse-view measurements have been adopted effectively to accelerate data acquisition. However, since reconstruction from sparse-view sampling data is challenging, both the measurement scheme and the reconstruction algorithm must be considered together. In this study, we present an iterative sparse-view PAT reconstruction scheme in which a virtual parallel-projection concept matched to the proposed measurement condition helps realize the "compressive sensing" procedure of the reconstruction, while spatially adaptive filtering, which fully exploits the a priori information of mutually similar blocks in natural images, effectively recovers the partially unknown coefficients in the transformed domain. The sparse-view PAT images can therefore be reconstructed with higher quality than the results obtained by the universal back-projection (UBP) algorithm in the same sparse-view cases. The proposed approach has been validated by simulation experiments and exhibits desirable image fidelity even from a small number of measuring positions.
A modified sparse reconstruction method for three-dimensional synthetic aperture radar image
NASA Astrophysics Data System (ADS)
Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin
2018-03-01
There is increasing interest in three-dimensional synthetic aperture radar (3-D SAR) imaging from observed sparse scattering data. However, existing 3-D sparse imaging methods require long computation times and large storage capacity. In this paper, we propose a modified method for sparse 3-D SAR imaging. The method processes the collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. First, the 3-D sparse reconstruction problem is transformed into a series of 2-D slice reconstruction problems by range compression. The slices are then reconstructed by a modified SL0 (smoothed l0 norm) reconstruction algorithm. The improved algorithm uses the hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm, and the Newton direction instead of the steepest descent direction, which speeds up the convergence of the SL0 algorithm. Finally, numerical simulation results demonstrate the effectiveness of the proposed algorithm: compared with the existing 3-D sparse imaging method, it performs better in both reconstruction quality and reconstruction time.
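The smoothed-l0 idea above can be sketched compactly. The following is an illustrative implementation under stated assumptions: it uses the tanh surrogate for the l0 norm as described, but a plain gradient step rather than the paper's Newton direction, and it is not the authors' code.

```python
import numpy as np

def sl0_tanh(A, y, sigma_min=1e-3, sigma_scale=0.6, mu=1.0, inner=5):
    """SL0-style sparse recovery with a tanh surrogate for the l0 norm.

    F_sigma(x) = sum_i tanh(x_i**2 / (2 * sigma**2)) approaches ||x||_0 as
    sigma -> 0. Each outer pass shrinks sigma; each inner pass takes a
    gradient step on F_sigma and projects back onto the constraint A x = y.
    """
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                        # minimum-l2-norm feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner):
            u = x**2 / (2.0 * sigma**2)
            grad = (x / sigma**2) * (1.0 - np.tanh(u)**2)   # dF/dx
            x = x - mu * sigma**2 * grad                    # shrink small entries
            x = x - A_pinv @ (A @ x - y)                    # project onto A x = y
        sigma *= sigma_scale
    return x
```

On a small Gaussian test problem this continuation scheme typically recovers a sparse vector from far fewer measurements than unknowns; the surrogate leaves entries much larger than sigma untouched while driving small (null-space) components toward zero.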
Sparse dictionary for synthetic transmit aperture medical ultrasound imaging.
Wang, Ping; Jiang, Jin-Yang; Li, Na; Luo, Han-Wu; Li, Fang; Cui, Shi-Gang
2017-07-01
It is possible to recover a signal sampled below the Nyquist limit using a compressive sensing technique in ultrasound imaging. However, reconstruction based on common sparse transforms does not achieve satisfactory results. Considering the attenuation, repetition, and superposition features of the ultrasound echo signal, a sparse dictionary built from the emission pulse signal is proposed. Sparse coefficients in the proposed dictionary have high sparsity. Images reconstructed with this dictionary were compared with those obtained with three other common transforms, namely, the discrete Fourier transform, discrete cosine transform, and discrete wavelet transform. The performance of the proposed dictionary was analyzed using simulations and experimental data, with the mean absolute error (MAE) quantifying the quality of the reconstructions. Experimental results indicate that the MAE associated with the proposed dictionary was always the smallest, the required reconstruction time was the shortest, and the lateral resolution and contrast of the reconstructed images were the closest to the original images. The proposed sparse dictionary thus performed better than the other three sparse transforms, achieving excellent reconstruction quality at the same sampling rate.
Pant, Jeevan K; Krishnan, Sridhar
2016-07-01
A new signal reconstruction algorithm for compressive sensing is proposed, based on the minimization of a pseudonorm that promotes block-sparse structure in the first-order difference of the signal. The optimization is carried out using a sequential version of Fletcher-Reeves' conjugate-gradient algorithm, with a line search based on Banach's fixed-point theorem. The algorithm is suitable for the reconstruction of foot gait signals, which admit block-sparse structure in the first-order difference. An additional algorithm is proposed for estimating stride-interval, swing-interval, and stance-interval time series from the reconstructed foot gait signals; it finds the zero-crossing indices of the foot gait signal and uses the resulting indices to compute the time series. Extensive simulation results demonstrate that the proposed reconstruction algorithm yields improved signal-to-noise ratio and requires significantly less computational effort than several competing algorithms over a wide range of compression ratios. For compression ratios in the range from 88% to 94%, the proposed algorithm offers improved accuracy in estimating clinically relevant time-series parameters, namely, the mean value, variance, and spectral index of the stride-interval, stance-interval, and swing-interval time series, relative to its nearest competitor. The improvement in performance at compression ratios as high as 94% indicates that the proposed algorithms would be useful for designing compressive sensing-based systems for long-term telemonitoring of human gait signals.
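The zero-crossing step that underlies the interval estimation can be sketched as follows (a hypothetical helper illustrating the idea, not the authors' implementation):

```python
import numpy as np

def zero_crossing_indices(x):
    """Return the indices at which the signal changes sign.

    Exact zeros are treated as positive so that each crossing is reported
    once. Differences between successive crossing indices delimit the
    intervals from which stride/stance/swing time series can be derived.
    """
    s = np.sign(np.asarray(x, dtype=float))
    s[s == 0] = 1.0
    return np.flatnonzero(np.diff(s)) + 1
```

Given crossing indices and the sampling period, the interval time series is simply the scaled first difference of the crossing index sequence.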
A compressed sensing X-ray camera with a multilayer architecture
Wang, Zhehui; Laroshenko, O.; Li, S.; ...
2018-01-25
Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a) the number of pixels with true X-ray hits is much smaller than the total number of pixels; (b) the X-ray information is redundant; or (c) some prior knowledge about the X-ray images exists, sparse sampling may be allowed. In this work, we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques is expected to facilitate the development and applications of high-speed X-ray camera technology.
Multimode waveguide speckle patterns for compressive sensing.
Valley, George C; Sefler, George A; Justin Shaw, T
2016-06-01
Compressive sensing (CS) of sparse gigahertz-band RF signals using microwave photonics may achieve better performance with smaller size, weight, and power than electronic CS or conventional Nyquist-rate sampling. The critical element in a CS system is the device that produces the CS measurement matrix (MM). We show that passive speckle patterns in multimode waveguides potentially provide excellent MMs for CS. We measure and calculate the MM for a multimode fiber and perform simulations using this MM in a CS system. We show that the speckle MM exhibits the sharp phase transition and coherence properties needed for CS and that these properties are similar to those of a sub-Gaussian MM with the same mean and standard deviation. We calculate the MM for a multimode planar waveguide and find dimensions of the planar guide that give a speckle MM with a performance similar to that of the multimode fiber. The CS simulations show that all measured and calculated speckle MMs exhibit a robust performance with equal amplitude signals that are sparse in time, in frequency, and in wavelets (Haar wavelet transform). The planar waveguide results indicate a path to a microwave photonic integrated circuit for measuring sparse gigahertz-band RF signals using CS.
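The coherence property referred to above can be checked numerically. A minimal sketch using a Gaussian matrix of hypothetical dimensions as the baseline (the measured speckle MMs themselves are not reproduced here):

```python
import numpy as np

def mutual_coherence(M):
    """Maximum absolute normalized inner product between distinct columns.

    Low mutual coherence is one of the standard properties a good
    compressive sensing measurement matrix (MM) should exhibit.
    """
    Mn = M / np.linalg.norm(M, axis=0, keepdims=True)
    G = np.abs(Mn.T @ Mn)
    np.fill_diagonal(G, 0.0)
    return float(G.max())

rng = np.random.default_rng(1)
gaussian_mm = rng.standard_normal((64, 256))   # 64 measurements of 256-sample signals
print(mutual_coherence(gaussian_mm))
```

A measured speckle MM whose coherence is close to that of a Gaussian matrix of the same shape is, by this metric, similarly well suited to CS recovery.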
Sugimoto, Motokazu; Gotohda, Naoto; Kato, Yuichiro; Takahashi, Shinichiro; Kinoshita, Takahiro; Shibasaki, Hidehito; Nomura, Shogo; Konishi, Masaru; Kaneko, Hironori
2013-06-01
Postoperative pancreatic fistula (POPF) is a major, intractable complication after distal pancreatectomy (DP). Risk factor evaluation and prevention of this complication are important tasks for pancreatic surgeons. One hundred and six patients who underwent DP using a stapler for pancreatic division were retrospectively investigated. The relationship between clinicopathological factors and the incidence of POPF was statistically analyzed. Clinically relevant, Grade B or C POPF by International Study Group of Pancreatic Fistula criteria occurred in 52 patients (49.1 %). Age, American Society of Anesthesiologists score, body mass index, and concomitant gastrointestinal tract resection did not influence the incidence of POPF. Use of a double-row stapler and a thick pancreatic stump were significant risk factors for POPF in multivariate analysis. Compression index was also shown to be an important factor in cases in which the pancreas was divided by a stapler. The most important risk factor for POPF after DP was suggested to be the thickness of the pancreatic stump, reflecting the volume of remnant pancreas. A triple-row stapler seemed to be superior to a double-row stapler in preventing POPF. However, use of a triple-row stapler on a thick pancreas remains a problem to be solved.
Compressive sensing using optimized sensing matrix for face verification
NASA Astrophysics Data System (ADS)
Oey, Endra; Jeffry; Wongso, Kelvin; Tommy
2017-12-01
Biometrics offers a solution to common problems with password-based data access, such as forgotten passwords and the difficulty of recalling many different ones. With biometrics, the physical characteristics of a person can be captured and used in the identification process. In this research, facial biometrics is used in the verification process to determine whether the user has the authority to access the data. Facial biometrics is chosen for its low implementation cost and reasonably accurate identification results. The face verification system adopted in this research uses the compressive sensing (CS) technique, which aims to reduce dimensionality and encrypt the facial test image by representing it as a sparse signal. The encrypted data can be reconstructed using a sparse coding algorithm. Two types of sparse coding, namely Orthogonal Matching Pursuit (OMP) and Iteratively Reweighted Least Squares-ℓp (IRLS-ℓp), are compared in this face verification research. The reconstructed sparse signals are then compared, via the Euclidean norm, with the sparse signal of the user previously stored in the system to determine the validity of the facial test image. The system accuracies obtained in this research are 99% for IRLS with a face verification response time of 4.917 seconds and 96.33% for OMP with a response time of 0.4046 seconds using a non-optimized sensing matrix, versus 99% for IRLS with a response time of 13.4791 seconds and 98.33% for OMP with a response time of 3.1571 seconds using the optimized sensing matrix.
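OMP, one of the two sparse-coding algorithms compared above, can be sketched as follows (an illustrative textbook implementation, not the code used in the study):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select the dictionary atom most
    correlated with the residual, then re-fit all selected atoms by least
    squares before updating the residual."""
    residual = y.astype(float).copy()
    support = []
    coef = np.array([])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```

In the verification setting described above, the recovered sparse code would then be compared, via the Euclidean norm, against the enrolled user's stored code.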
NASA Technical Reports Server (NTRS)
Jones, H. W.; Hein, D. N.; Knauer, S. C.
1978-01-01
A general class of even/odd transforms is presented that includes the Karhunen-Loeve transform, the discrete cosine transform, the Walsh-Hadamard transform, and other familiar transforms. The more complex even/odd transforms can be computed by combining a simpler even/odd transform with a sparse matrix multiplication. A theoretical performance measure is computed for some even/odd transforms, and two image compression experiments are reported.
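One concrete member of this family can be written down directly. The following is a standard textbook construction of the orthonormal DCT-II (illustrative, not code from the report); the even/odd symmetry of its rows is what places it in the class:

```python
import numpy as np

def dct_ii_matrix(n):
    """Orthonormal DCT-II matrix. Its rows alternate between even-symmetric
    (even k) and odd-symmetric (odd k) basis vectors, which is what makes
    the DCT a member of the even/odd transform class."""
    k = np.arange(n)[:, None]          # frequency index (rows)
    m = np.arange(n)[None, :]          # sample index (columns)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0, :] /= np.sqrt(2.0)            # rescale the DC row for orthonormality
    return C

C = dct_ii_matrix(8)
# Orthonormal: C @ C.T is the identity, so the transform preserves energy.
```

Because the matrix is orthonormal, the inverse transform is simply `C.T`, and a transform coder can quantize the coefficients `C @ x` directly.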
A guided wave dispersion compensation method based on compressed sensing
NASA Astrophysics Data System (ADS)
Xu, Cai-bin; Yang, Zhi-bo; Chen, Xue-feng; Tian, Shao-hua; Xie, Yong
2018-03-01
The ultrasonic guided wave has emerged as a promising tool for structural health monitoring (SHM) and nondestructive testing (NDT) due to its capability to propagate over long distances with minimal loss and its sensitivity to both surface and subsurface defects. The dispersion effect, however, degrades the temporal and spatial resolution of guided waves. A novel ultrasonic guided wave processing method for dispersion compensation of both single-mode and multi-mode guided waves is proposed in this work based on compressed sensing: a dispersion signal dictionary is built from the dispersion curves of the guided wave modes in order to sparsely decompose the recorded dispersive guided waves, and dispersion-compensated guided waves are then obtained using a non-dispersion signal dictionary together with the results of the sparse decomposition. Numerical simulations and experiments verify the effectiveness of the developed method for both single-mode and multi-mode guided waves.
Chemically bonded phospho-silicate ceramics
Wagh, Arun S.; Jeong, Seung Y.; Lohan, Dirk; Elizabeth, Anne
2003-01-01
A chemically bonded phospho-silicate ceramic formed by chemically reacting a monovalent alkali metal phosphate (or ammonium hydrogen phosphate) and a sparsely soluble oxide with a sparsely soluble silicate in an aqueous solution. The monovalent alkali metal phosphate (or ammonium hydrogen phosphate) and sparsely soluble oxide are both in powder form and combined in a stoichiometric molar ratio range of (0.5-1.5):1 to form a binder powder. Similarly, the sparsely soluble silicate is also in powder form and is mixed with the binder powder to form a mixture. Water is added to the mixture to form a slurry; the water comprises 50% by weight of the powder mixture in said slurry. The slurry is allowed to harden. The resulting chemically bonded phospho-silicate ceramic exhibits high flexural strength, high compression strength, and low porosity and permeability to water; has a definable and bio-compatible chemical composition; and is readily and easily colored to almost any desired shade or hue.
Marcondes, Freddy Beretta; de Vasconcelos, Rodrigo Antunes; Marchetto, Adriano; de Andrade, André Luis Lugnani; Filho, Américo Zoppi; Etchebehere, Maurício
2015-01-01
Objective: The aim of this study was to translate and culturally adapt the modified Rowe score for overhead athletes. Methods: The translation and cultural adaptation process initially involved the stages of translation, synthesis, back-translation, and revision by the Translation Group. A pre-final version of the questionnaire was then created; the domains "function" and "pain" were applied to 20 athletes who perform overhead movements and who had suffered SLAP lesions in the dominant shoulder, and the domains "active compression test and anterior apprehension test" and "motion" were applied to 15 health professionals. Results: During the translation process, minor modifications were made to the questionnaire in order to adapt it to Brazilian culture, without changing the semantics or the idiomatic concepts originally described. Conclusion: The questionnaire was easily understood by the subjects of the study, making it possible to obtain the Brazilian version of the modified Rowe score for overhead athletes who underwent surgical treatment of a SLAP lesion. PMID:27047903
Energy Efficient GNSS Signal Acquisition Using Singular Value Decomposition (SVD).
Bermúdez Ordoñez, Juan Carlos; Arnaldo Valdés, Rosa María; Gómez Comendador, Fernando
2018-05-16
A significant challenge in global navigation satellite system (GNSS) signal processing is the requirement for a very high sampling rate. The recently-emerging compressed sensing (CS) theory makes processing GNSS signals at a low sampling rate possible if the signal has a sparse representation in a certain space. Based on CS and SVD theories, an algorithm for sampling GNSS signals at a rate much lower than the Nyquist rate and reconstructing the compressed signal is proposed in this research; it is validated by confirming that the output of that process still supports signal detection using the standard fast Fourier transform (FFT) parallel frequency-space search acquisition. The sparse representation of the GNSS signal is the most important precondition for CS; it is achieved by constructing a rectangular Toeplitz matrix (TZ) from the transmitted signal and calculating its left singular vectors using the SVD. Next, M-dimensional observation vectors are obtained from the left singular vectors of the SVD, which are equivalent to the sampling operator in standard compressive sensing theory; the signal can thus be sampled below the Nyquist rate and still be reconstructed accurately via ℓ1 minimization using convex optimization. As an added value, there is a GNSS signal acquisition enhancement effect: the useful signal is retained and noise is filtered out by projecting the signal onto the most significant proper orthogonal modes (PODs), which are the optimal distributions of signal power. The algorithm is validated with real recorded signals, and the results show that the proposed method is effective for sampling and reconstructing intermediate frequency (IF) GNSS signals in the discrete-time domain.
Improved analysis of SP and CoSaMP under total perturbations
NASA Astrophysics Data System (ADS)
Li, Haifeng
2016-12-01
Practically, in the underdetermined model y = Ax, where x is a K-sparse vector (i.e., it has no more than K nonzero entries), both y and A can be totally perturbed. A more relaxed condition means that fewer measurements are needed to ensure sparse recovery from a theoretical standpoint. In this paper, based on the restricted isometry property (RIP), two relaxed sufficient conditions under total perturbations are presented for subspace pursuit (SP) and compressive sampling matching pursuit (CoSaMP) to guarantee that the sparse vector x is recovered. Taking a random matrix as the measurement matrix, we also discuss the advantage of our condition. Numerical experiments validate that SP and CoSaMP can provide oracle-order recovery performance.
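For reference, the CoSaMP recursion analysed above follows a fixed template. A minimal sketch of the standard (unperturbed) algorithm, not the paper's analysis code:

```python
import numpy as np

def cosamp(A, y, K, iters=30, tol=1e-10):
    """CoSaMP: form a proxy from the residual, merge its 2K largest entries
    with the current support, solve least squares on the merged support,
    then prune back to the K largest coefficients."""
    n = A.shape[1]
    x = np.zeros(n)
    residual = y.copy()
    for _ in range(iters):
        proxy = A.T @ residual
        omega = np.argsort(np.abs(proxy))[-2 * K:]         # 2K candidates
        support = np.union1d(omega, np.flatnonzero(x)).astype(int)
        b, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        top = np.argsort(np.abs(b))[-K:]                   # prune to K largest
        x[support[top]] = b[top]
        residual = y - A @ x
        if np.linalg.norm(residual) < tol:
            break
    return x
```

Under RIP-type conditions of the kind discussed in the abstract, this iteration recovers a K-sparse x exactly from noiseless measurements, and stably under (total) perturbations.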
Accurate sparse-projection image reconstruction via nonlocal TV regularization.
Zhang, Yi; Zhang, Weihua; Zhou, Jiliu
2014-01-01
Sparse-projection image reconstruction is a useful approach to lowering the radiation dose; however, the incompleteness of the projection data degrades imaging quality. As a typical compressive sensing method, total variation has attracted great attention for this problem. Owing to its theoretical limitations, however, total variation produces blocky artifacts in smooth regions and blurs edges. To overcome this problem, in this paper we introduce nonlocal total variation into sparse-projection image reconstruction and formulate the minimization problem with a new nonlocal total variation norm. Qualitative and quantitative analyses of numerical as well as clinical results demonstrate the validity of the proposed method. Compared with other existing methods, our method more efficiently suppresses artifacts caused by low-rank reconstruction and better preserves structural information.
Vera, José Fernando; de Rooij, Mark; Heiser, Willem J
2014-11-01
In this paper we propose a latent class distance association model for clustering in the predictor space of large contingency tables with a categorical response variable. The rows of such a table are characterized as profiles of a set of explanatory variables, while the columns represent a single outcome variable. In many cases such tables are sparse, with many zero entries, which makes traditional models problematic. By clustering the row profiles into a few specific classes and representing these together with the categories of the response variable in a low-dimensional Euclidean space using a distance association model, a parsimonious prediction model can be obtained. A generalized EM algorithm is proposed to estimate the model parameters and the adjusted Bayesian information criterion statistic is employed to test the number of mixture components and the dimensionality of the representation. An empirical example highlighting the advantages of the new approach and comparing it with traditional approaches is presented. © 2014 The British Psychological Society.
Pisharady, Pramod Kumar; Duarte-Carvajalino, Julio M; Sotiropoulos, Stamatios N; Sapiro, Guillermo; Lenglet, Christophe
2017-01-01
The RubiX [1] algorithm combines high SNR characteristics of low resolution data with high spatial specificity of high resolution data, to extract microstructural tissue parameters from diffusion MRI. In this paper we focus on estimating crossing fiber orientations and introduce sparsity to the RubiX algorithm, making it suitable for reconstruction from compressed (under-sampled) data. We propose a sparse Bayesian algorithm for estimation of fiber orientations and volume fractions from compressed diffusion MRI. The data at high resolution is modeled using a parametric spherical deconvolution approach and represented using a dictionary created with the exponential decay components along different possible directions. Volume fractions of fibers along these orientations define the dictionary weights. The data at low resolution is modeled using a spatial partial volume representation. The proposed dictionary representation and sparsity priors consider the dependence between fiber orientations and the spatial redundancy in data representation. Our method exploits the sparsity of fiber orientations, therefore facilitating inference from under-sampled data. Experimental results show improved accuracy and decreased uncertainty in fiber orientation estimates. For under-sampled data, the proposed method is also shown to produce more robust estimates of fiber orientations. PMID:28845484
Yu, Kai; Yin, Ming; Luo, Ji-An; Wang, Yingguan; Bao, Ming; Hu, Yu-Hen; Wang, Zhi
2016-05-23
A compressive sensing joint sparse representation direction of arrival estimation (CSJSR-DoA) approach is proposed for wireless sensor array networks (WSAN). By exploiting the joint spatial and spectral correlations of acoustic sensor array data, the CSJSR-DoA approach provides reliable DoA estimation using randomly-sampled acoustic sensor data. Since random sampling is performed at remote sensor arrays, less data need to be transmitted over lossy wireless channels to the fusion center (FC), and the expensive source coding operation at sensor nodes can be avoided. To investigate the spatial sparsity, an upper bound of the coherence of incoming sensor signals is derived assuming a linear sensor array configuration. This bound provides a theoretical constraint on the angular separation of acoustic sources to ensure the spatial sparsity of the received acoustic sensor array signals. The Cram e ´ r-Rao bound of the CSJSR-DoA estimator that quantifies the theoretical DoA estimation performance is also derived. The potential performance of the CSJSR-DoA approach is validated using both simulations and field experiments on a prototype WSAN platform. Compared to existing compressive sensing-based DoA estimation methods, the CSJSR-DoA approach shows significant performance improvement.
2016-05-23
A compressive sensing joint sparse representation direction of arrival estimation (CSJSR-DoA) approach is proposed for wireless sensor array networks (WSAN). By exploiting the joint spatial and spectral correlations of acoustic sensor array data, the CSJSR-DoA approach provides reliable DoA estimation using randomly-sampled acoustic sensor data. Since random sampling is performed at remote sensor arrays, less data need to be transmitted over lossy wireless channels to the fusion center (FC), and the expensive source coding operation at sensor nodes can be avoided. To investigate the spatial sparsity, an upper bound of the coherence of incoming sensor signals is derived assuming a linear sensor array configuration. This bound provides a theoretical constraint on the angular separation of acoustic sources to ensure the spatial sparsity of the received acoustic sensor array signals. The Cramér-Rao bound of the CSJSR-DoA estimator that quantifies the theoretical DoA estimation performance is also derived. The potential performance of the CSJSR-DoA approach is validated using both simulations and field experiments on a prototype WSAN platform. Compared to existing compressive sensing-based DoA estimation methods, the CSJSR-DoA approach shows significant performance improvement.
Compression of high-density EMG signals for trapezius and gastrocnemius muscles.
Itiki, Cinthia; Furuie, Sergio S; Merletti, Roberto
2014-03-10
New technologies for data transmission and multi-electrode arrays have increased the demand for compressing high-density electromyography (HD EMG) signals. This article addresses the compression of HD EMG signals recorded by two-dimensional electrode matrices at different muscle-contraction forces, and presents methodological aspects of compressing HD EMG signals for non-pinnate (upper trapezius) and pinnate (medial gastrocnemius) muscles using image compression techniques. HD EMG signals were placed in image rows according to two distinct electrode orders: parallel and perpendicular to the muscle longitudinal axis. For the lossless case, the images obtained from single-differential signals, as well as their differences in time, were compressed. For the lossy algorithm, the images associated with the recorded monopolar or single-differential signals were compressed at different compression levels. Lossless compression provided up to 59.3% file-size reduction (FSR), with lower contraction forces associated with higher FSR. For lossy compression, a 90.8% reduction in file size was attained while keeping the signal-to-noise ratio (SNR) at 21.19 dB; for a similar FSR, higher contraction forces corresponded to a higher SNR. In conclusion, the computation of signal differences in time improves the performance of lossless compression, while the selection of signals in the transversal order improves the lossy compression of HD EMG, for both pinnate and non-pinnate muscles.
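The benefit of taking time differences before lossless coding can be reproduced with a generic entropy coder. This sketch uses zlib as a stand-in for the study's image-compression codecs, and a synthetic slowly-varying waveform as a stand-in for an EMG channel:

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(5000)
# synthetic slowly-varying 16-bit waveform standing in for one EMG channel
signal = (1000 * np.sin(t / 50.0) + rng.normal(0, 5, t.size)).astype(np.int16)

raw_bytes = signal.tobytes()
# first-order difference in time; prepending the first sample keeps the length
diff_bytes = np.diff(signal, prepend=signal[:1]).tobytes()

raw_size = len(zlib.compress(raw_bytes))
diff_size = len(zlib.compress(diff_bytes))
print(raw_size, diff_size)
```

Because successive samples are correlated, the differences concentrate near zero and carry lower entropy than the raw samples, so the lossless coder produces a smaller file from the differenced stream.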
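The time-differencing result above is easy to reproduce in miniature. The sketch below uses synthetic data, not the article's recordings, and zlib stands in for the study's image codecs; it only illustrates the principle that first differences in time of a smooth EMG-like int16 trace compress better losslessly than the raw samples:

```python
import zlib

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for one row of an HD EMG "image": a smooth
# burst-like waveform plus a little noise, stored as int16 samples.
t = np.arange(4096)
signal = (1000 * np.sin(2 * np.pi * t / 512)
          + rng.normal(0, 5, t.size)).astype(np.int16)

raw_bytes = signal.tobytes()
# First differences in time, as the article suggests for lossless coding.
diff_bytes = np.diff(signal, prepend=signal[:1]).astype(np.int16).tobytes()

size_raw = len(zlib.compress(raw_bytes, 9))
size_diff = len(zlib.compress(diff_bytes, 9))

fsr_raw = 100.0 * (1 - size_raw / len(raw_bytes))
fsr_diff = 100.0 * (1 - size_diff / len(raw_bytes))
print(f"FSR without differencing: {fsr_raw:.1f}%, with differencing: {fsr_diff:.1f}%")
```

Differencing shrinks the dynamic range of the samples, so the entropy coder needs fewer bits per sample, which is the same effect the study exploits.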
2011-01-01
Background During circulatory arrest, effective external chest compression (ECC) is a key element for patient survival. In 2005, international emergency medical organisations changed their recommended compression-ventilation ratio (CVR) from 15:2 to 30:2 to acknowledge the vital importance of ECC. We hypothesised that physical fitness, biometric data and gender can influence the quality of ECC. Furthermore, we aimed to determine objective parameters of physical fitness that can reliably predict the quality of ECC. Methods The physical fitness of 30 male and 10 female healthcare professionals was assessed by cycling and rowing ergometry (focussing on lower and upper body, respectively). During ergometry, continuous breath-by-breath ergospirometric measurements and heart rate (HR) were recorded. All participants performed two nine-minute sequences of ECC on a manikin using CVRs of 30:2 and 15:2. We measured the compression and decompression depths, compression rates and assessed the participants' perception of exhaustion and comfort. The median body mass index (BMI; male 25.4 kg/m2 and female 20.4 kg/m2) was used as the threshold for subgroup analyses of participants with higher and lower BMI. Results HR during rowing ergometry at 75 watts (HR75) correlated best with the quality of ECC (r = -0.57, p < 0.05). Participants with a higher BMI and better physical fitness performed better and showed less fatigue during ECC. These results are valid for the entire cohort, as well as for the gender-based subgroups. The compressions of female participants were too shallow and more rapid (mean compression depth was 32 mm and rate was 117/min with a CVR of 30:2). For participants with a lower BMI and higher HR75, the compression depth decreased over time, beginning after four minutes for the 15:2 CVR and after three minutes for the 30:2 CVR. Although found to be more exhausting, a CVR of 30:2 was rated as being more comfortable. 
Conclusion The quality of the ECC and fatigue can both be predicted by BMI and physical fitness. An evaluation focussing on the upper body may be a more valid predictor of ECC quality than cycling-based tests. Our data strongly support the recommendation to relieve ECC providers after two minutes. PMID:22053981
Design of Restoration Method Based on Compressed Sensing and TwIST Algorithm
NASA Astrophysics Data System (ADS)
Zhang, Fei; Piao, Yan
2018-04-01
In order to improve the subjective and objective quality of degraded images at low sampling rates, save storage space and reduce computational complexity at the same time, this paper proposes a joint restoration algorithm combining compressed sensing and two-step iterative shrinkage/thresholding (TwIST). The algorithm applies the TwIST algorithm, which is used in image restoration, to compressed sensing theory. Then, a small amount of sparse high-frequency information is obtained in the frequency domain. The TwIST algorithm based on compressed sensing theory is used to accurately reconstruct the high-frequency image. The experimental results show that the proposed algorithm achieves better subjective visual effects and objective quality while accurately restoring degraded images.
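TwIST is a two-step refinement of the basic iterative shrinkage/thresholding (ISTA) scheme. As a hedged illustration of the shrinkage idea only — plain ISTA on a synthetic compressed-sensing problem, not the authors' joint restoration algorithm:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, iters):
    """Minimise 0.5*||Ax - y||^2 + lam*||x||_1 by iterative shrinkage."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(1)
n, m, k = 256, 96, 8                         # signal length, measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)     # random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x_true                               # compressed measurements

x_hat = ista(A, y, lam=0.02, iters=1000)
rel = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

TwIST replaces the single-step update with a two-step recursion over the last two iterates, which converges faster on ill-conditioned problems; the shrinkage operator itself is the same.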
Compressed multi-block local binary pattern for object tracking
NASA Astrophysics Data System (ADS)
Li, Tianwen; Gao, Yun; Zhao, Lei; Zhou, Hao
2018-04-01
Both robustness and real-time performance are very important for object tracking in real environments. Trackers based on deep learning have difficulty satisfying the real-time requirement of tracking. Compressive sensing provides technical support for real-time tracking. In this paper, an object is tracked via a multi-block local binary pattern feature. The extracted feature vector was compressed via a sparse random Gaussian matrix used as the measurement matrix. The experiments showed that the proposed tracker ran in real time and outperformed existing compressive trackers based on Haar-like features on many challenging video sequences in terms of accuracy and robustness.
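The compression step described above can be sketched with a very sparse random projection. Note this sketch uses an Achlioptas-style ternary matrix in place of the paper's sparse random Gaussian matrix, and random vectors in place of real multi-block LBP features; the point is only that pairwise distances survive the compression:

```python
import numpy as np

rng = np.random.default_rng(2)
d, m = 10000, 400                 # original and compressed feature dimensions
s = 3                             # sparsity factor: about 1/3 of entries nonzero

# Achlioptas-style very sparse random projection with entries in {-1, 0, +1};
# the scaling keeps Euclidean distances preserved in expectation.
R = rng.choice([-1.0, 0.0, 1.0], size=(m, d),
               p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])
R *= np.sqrt(s / m)

u = rng.normal(size=d)            # two hypothetical high-dimensional feature vectors
v = rng.normal(size=d)
ratio = np.linalg.norm(R @ u - R @ v) / np.linalg.norm(u - v)
print(f"distance ratio after compression: {ratio:.3f}")
```

Because most entries of R are zero, the projection costs only a fraction of a dense matrix-vector product, which is what makes this practical inside a real-time tracking loop.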
Adaptive compressive ghost imaging based on wavelet trees and sparse representation.
Yu, Wen-Kai; Li, Ming-Fei; Yao, Xu-Ri; Liu, Xue-Feng; Wu, Ling-An; Zhai, Guang-Jie
2014-03-24
Compressed sensing is a theory which can reconstruct an image almost perfectly with only a few measurements by finding its sparsest representation. However, the computation time consumed for large images may be a few hours or more. In this work, we both theoretically and experimentally demonstrate a method that combines the advantages of both adaptive computational ghost imaging and compressed sensing, which we call adaptive compressive ghost imaging, whereby both the reconstruction time and measurements required for any image size can be significantly reduced. The technique can be used to improve the performance of all computational ghost imaging protocols, especially when measuring ultra-weak or noisy signals, and can be extended to imaging applications at any wavelength.
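For context, the non-compressive baseline that such schemes accelerate is the classical correlation estimate from bucket measurements. A minimal sketch with a hypothetical binary object and uniform random patterns, rather than the paper's adaptive wavelet-tree scheme:

```python
import numpy as np

rng = np.random.default_rng(3)
H = W = 16
obj = np.zeros((H, W))
obj[4:12, 6:10] = 1.0                         # hypothetical binary object

M = 8000                                      # number of random patterns
patterns = rng.random((M, H, W))              # speckle-like illumination
bucket = (patterns * obj).sum(axis=(1, 2))    # bucket-detector signal per pattern

# Conventional (non-compressive) ghost-imaging estimate: correlate the
# fluctuations of the bucket signal with the illumination patterns,
# i.e. <(y - <y>) I> over the pattern ensemble.
recon = np.tensordot(bucket - bucket.mean(), patterns, axes=1) / M
```

The weakness this exposes is exactly the paper's motivation: the correlation estimate needs thousands of patterns for a 16x16 image, whereas compressed sensing and adaptive sampling cut the number of measurements drastically.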
Hyperspherical Sparse Approximation Techniques for High-Dimensional Discontinuity Detection
Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max; ...
2016-08-04
This work proposes a hyperspherical sparse approximation framework for detecting jump discontinuities in functions in high-dimensional spaces. The need for a novel approach results from the theoretical and computational inefficiencies of well-known approaches, such as adaptive sparse grids, for discontinuity detection. Our approach constructs the hyperspherical coordinate representation of the discontinuity surface of a function. Then sparse approximations of the transformed function are built in the hyperspherical coordinate system, with values at each point estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. Several approaches are used to approximate the transformed discontinuity surface in the hyperspherical system, including adaptive sparse grid and radial basis function interpolation, discrete least squares projection, and compressed sensing approximation. Moreover, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. In conclusion, rigorous complexity analyses of the new methods are provided, as are several numerical examples that illustrate the effectiveness of our approach.
NASA Astrophysics Data System (ADS)
Hollingsworth, Kieren Grant
2015-11-01
MRI is often the most sensitive or appropriate technique for important measurements in clinical diagnosis and research, but lengthy acquisition times limit its use due to cost and considerations of patient comfort and compliance. Once an image field of view and resolution are chosen, the minimum scan acquisition time is normally fixed by the amount of raw data that must be acquired to meet the Nyquist criterion. Recently, there has been research interest in using the theory of compressed sensing (CS) in MR imaging to reduce scan acquisition times. The theory argues that if our target MR image is sparse, having signal information in only a small proportion of pixels (like an angiogram), or if the image can be mathematically transformed to be sparse, then it is possible to use that sparsity to recover a high definition image from substantially less acquired data. This review starts by considering methods of k-space undersampling which have already been incorporated into routine clinical imaging (partial Fourier imaging and parallel imaging), and then explains the basis of using compressed sensing in MRI. The practical considerations of applying CS to MRI acquisitions are discussed, such as designing k-space undersampling schemes, optimizing adjustable parameters in reconstructions and exploiting the power of combined compressed sensing and parallel imaging (CS-PI). A selection of clinical applications that have used CS and CS-PI prospectively are considered. The review concludes by signposting other imaging acceleration techniques under present development before concluding with a consideration of the potential impact and obstacles to bringing compressed sensing into routine use in clinical MRI.
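The recover-from-undersampled-k-space idea can be sketched in one dimension. The code below is a simplified iterative hard-thresholding loop, assuming the sparsity level is known and using a uniformly random (not variable-density) sampling mask; it is an illustration of the principle, not a clinical CS-MRI reconstruction:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 256, 10
img = np.zeros(n)
img[rng.choice(n, k, replace=False)] = 1.0 + rng.random(k)  # sparse 1-D "angiogram"

mask = np.zeros(n, dtype=bool)
mask[rng.choice(n, n // 2, replace=False)] = True           # keep 50% of k-space
y = np.fft.fft(img)[mask]                                   # undersampled k-space data

# Iterative hard thresholding with data consistency in k-space.
x = np.zeros(n, dtype=complex)
for _ in range(200):
    kspace = np.fft.fft(x)
    kspace[mask] = y                      # enforce consistency with measured samples
    x = np.fft.ifft(kspace)
    keep = np.argsort(np.abs(x))[-k:]     # keep the k largest image-domain entries
    x_new = np.zeros_like(x)
    x_new[keep] = x[keep]
    x = x_new

err = np.linalg.norm(x.real - img) / np.linalg.norm(img)
```

A zero-filled inverse FFT of the same data would leave incoherent aliasing everywhere; alternating data consistency with a sparsity projection is what removes it, which is the core mechanism the review describes.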
Applications of compressed sensing image reconstruction to sparse view phase tomography
NASA Astrophysics Data System (ADS)
Ueda, Ryosuke; Kudo, Hiroyuki; Dong, Jian
2017-10-01
X-ray phase CT has a potential to give higher contrast in soft tissue observations. To shorten the measurement time, sparse-view CT data acquisition has been attracting attention. This paper applies two major compressed sensing (CS) approaches to image reconstruction in x-ray sparse-view phase tomography. The first CS approach is the standard Total Variation (TV) regularization. The major drawbacks of TV regularization are a patchy artifact and loss in smooth intensity changes due to the piecewise constant nature of the image model. The second CS method is a relatively new approach which uses a nonlinear smoothing filter to design the regularization term. The nonlinear filter based CS is expected to reduce the major artifact in the TV regularization. Both cost functions can be minimized by a very fast iterative reconstruction method. However, past research has not clearly demonstrated how much image-quality difference occurs between the TV regularization and the nonlinear filter based CS in x-ray phase CT applications. We clarify the issue by applying the two CS approaches to the case of x-ray phase tomography. We provide results with numerically simulated data, which demonstrate that the nonlinear filter based CS outperforms the TV regularization in terms of textures and smooth intensity changes.
Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A
2017-12-01
The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction.
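The sum-of-sparse-rank-one idea can be illustrated with a single sparse rank-one factor. A minimal sketch on synthetic data (not the authors' full dictionary-learning algorithm), showing that both block-coordinate updates have closed-form solutions:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(5)
m, n = 64, 200
# Synthetic data that really is a sparse rank-one matrix plus small noise.
d_true = rng.normal(size=m)
d_true /= np.linalg.norm(d_true)
c_true = np.zeros(n)
c_true[rng.choice(n, 15, replace=False)] = 3.0 * rng.normal(size=15)
Y = np.outer(d_true, c_true) + 0.01 * rng.normal(size=(m, n))

# Block coordinate descent for min ||Y - d c^T||_F^2 + 2*lam*||c||_1, ||d||_2 = 1.
# Initialise the atom from the strongest data column.
d = Y[:, int(np.argmax(np.linalg.norm(Y, axis=0)))].copy()
d /= np.linalg.norm(d)
lam = 0.05
for _ in range(50):
    c = soft(Y.T @ d, lam)        # sparse coefficient row: closed-form update
    v = Y @ c
    d = v / np.linalg.norm(v)     # unit-norm atom: closed-form update

rel = np.linalg.norm(Y - np.outer(d, c)) / np.linalg.norm(Y)
```

The full method approximates the data with a *sum* of such sparse outer products, cycling the same two closed-form updates over all rank-one factors, which is why no NP-hard sparse coding step is needed.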
Compressed modes for variational problems in mathematics and physics.
Ozolins, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley
2013-11-12
This article describes a general formalism for obtaining spatially localized ("sparse") solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger's equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support ("compressed modes"). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size.
Kim, Steve M; Ganguli, Surya; Frank, Loren M
2012-08-22
Hippocampal place cells convey spatial information through a combination of spatially selective firing and theta phase precession. The way in which this information influences regions like the subiculum that receive input from the hippocampus remains unclear. The subiculum receives direct inputs from area CA1 of the hippocampus and sends divergent output projections to many other parts of the brain, so we examined the firing patterns of rat subicular neurons. We found a substantial transformation in the subicular code for space from sparse to dense firing rate representations along a proximal-distal anatomical gradient: neurons in the proximal subiculum are more similar to canonical, sparsely firing hippocampal place cells, whereas neurons in the distal subiculum have higher firing rates and more distributed spatial firing patterns. Using information theory, we found that the more distributed spatial representation in the subiculum carries, on average, more information about spatial location and context than the sparse spatial representation in CA1. Remarkably, despite the disparate firing rate properties of subicular neurons, we found that neurons at all proximal-distal locations exhibit robust theta phase precession, with similar spiking oscillation frequencies as neurons in area CA1. Our findings suggest that the subiculum is specialized to compress sparse hippocampal spatial codes into highly informative distributed codes suitable for efficient communication to other brain regions. Moreover, despite this substantial compression, the subiculum maintains finer scale temporal properties that may allow it to participate in oscillatory phase coding and spike timing-dependent plasticity in coordination with other regions of the hippocampal circuit.
Momentum and particle transport in a nonhomogenous canopy
NASA Astrophysics Data System (ADS)
Gould, Andrew W.
Turbulent particle transport through the air plays an important role in the life cycle of many plant pathogens. In this study, data from a field experiment were analyzed to explore momentum and particle transport within a grape vineyard. The overall goal of these experiments was to understand how the architecture of a sparse agricultural canopy interacts with turbulent flow and ultimately determines the dispersion of airborne fungal plant pathogens. Turbulence in the vineyard canopy was measured using an array of four sonic anemometers deployed at heights z/H = 0.4, 0.9, 1.45, and 1.95, where z is the height of each sonic and H is the canopy height. In addition to turbulence measurements from the sonic anemometers, particle dispersion was measured using inert particles with the approximate size and density of powdery mildew spores and a roto-rod impaction trap array. Measurements from the sonic anemometers demonstrate that first and second order statistics of the wind field are dependent on wind direction orientation with respect to vineyard row direction. This dependence is a result of wind channeling which transfers energy between the velocity components when the wind direction is not aligned with the rows. Although the winds have a strong directional dependence, spectral analysis indicates that the structure of the turbulent flow is not fundamentally altered by the interaction between wind direction and row direction. Examination of a limited number of particle release events indicates that the wind turning and channeling observed in the momentum field impacts particle dispersion. For row-aligned flow, particle dispersion in the direction normal to the flow is decreased relative to the plume spread predicted by a standard Gaussian plume model. For flow that is not aligned with the row direction, the plume is found to rotate in the same manner as the momentum field.
Novel Spectral Representations and Sparsity-Driven Algorithms for Shape Modeling and Analysis
NASA Astrophysics Data System (ADS)
Zhong, Ming
In this dissertation, we focus on extending classical spectral shape analysis by incorporating spectral graph wavelets and sparsity-seeking algorithms. Defined with the graph Laplacian eigenbasis, the spectral graph wavelets (SGWs) are localized both in the vertex domain and the graph spectral domain, and thus are very effective in describing local geometry. With a rich dictionary of elementary vectors and forcing certain sparsity constraints, a real life signal can often be well approximated by a very sparse coefficient representation. The many successful applications of sparse signal representation in computer vision and image processing inspire us to explore the idea of employing sparse modeling techniques with dictionaries of spectral bases to solve various shape modeling problems. Conventional spectral mesh compression uses the eigenfunctions of the mesh Laplacian as shape bases, which are highly inefficient in representing local geometry. To ameliorate this, we advocate an innovative approach to 3D mesh compression using spectral graph wavelets as a dictionary to encode mesh geometry. The spectral graph wavelets are locally defined at individual vertices and can better capture local shape information than the Laplacian eigenbasis. The multi-scale SGWs form a redundant dictionary as shape basis, so we formulate the compression of 3D shape as a sparse approximation problem that can be readily handled by greedy pursuit algorithms. Surface inpainting refers to the completion or recovery of missing shape geometry based on the shape information that is currently available. We devise a new surface inpainting algorithm founded upon the theory and techniques of sparse signal recovery. Instead of estimating the missing geometry directly, our novel method finds a low-dimensional representation that describes the entire original shape.
More specifically, we find that, for many shapes, the vertex coordinate function can be well approximated by a very sparse coefficient representation with respect to the dictionary comprising its Laplacian eigenbasis, and it is then possible to recover this sparse representation from partial measurements of the original shape. Taking advantage of the sparsity cue, we advocate a novel variational approach for surface inpainting, integrating data fidelity constraints on the shape domain with coefficient sparsity constraints on the transformed domain. Because of the powerful properties of Laplacian eigenbasis, the inpainting results of our method tend to be globally coherent with the remaining shape. Informative and discriminative feature descriptors are vital in qualitative and quantitative shape analysis for a large variety of graphics applications. We advocate novel strategies to define generalized, user-specified features on shapes. Our new region descriptors are primarily built upon the coefficients of spectral graph wavelets that are both multi-scale and multi-level in nature, consisting of both local and global information. Based on our novel spectral feature descriptor, we developed a user-specified feature detection framework and a tensor-based shape matching algorithm. Through various experiments, we demonstrate the competitive performance of our proposed methods and the great potential of spectral basis and sparsity-driven methods for shape modeling.
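The greedy pursuit step mentioned above can be illustrated with orthogonal matching pursuit over a redundant dictionary. This sketch uses a random Gaussian dictionary rather than spectral graph wavelets, and a synthetic coefficient vector rather than mesh coordinates:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms of D to approximate y."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # atom most correlated with residual
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef          # re-fit on the whole support
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(6)
m, n, k = 64, 256, 6
D = rng.normal(size=(m, n))
D /= np.linalg.norm(D, axis=0)                       # redundant, unit-norm dictionary
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = D @ x_true                                       # "geometry" signal to compress

x_hat = omp(D, y, k)
rel = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Storing only the k selected atom indices and coefficients, instead of all n entries, is the compression: the redundancy of the dictionary is what lets so few atoms capture the signal.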
NASA Astrophysics Data System (ADS)
López-González, Pablo J.; Gili, Josep-Maria
2008-12-01
A new soft coral species of the genus Nidalia, from seamounts to the south of the Azores Archipelago, is described. The main features of Nidalia aurantia n. sp. are as follows: colony torch-like, a capitulum light orange in colour, not laterally flattened, dome-shaped and not distinctly projecting beyond the stalk, an introvert with sparse sclerites transversally placed, and an anthocodial crown with 13-17 sclerite rows. The new species is compared with its closest congeners. This is the first time that a species of Nidalia has been located in the Mid-Atlantic Ocean.
Texture Studies and Compression Behaviour of Apple Flesh
NASA Astrophysics Data System (ADS)
James, Bryony; Fonseca, Celia
Compressive behavior of fruit flesh has been studied using mechanical tests and microstructural analysis. Apple flesh from two cultivars (Braeburn and Cox's Orange Pippin) was investigated to represent the extremes in a spectrum of fruit flesh types, hard and juicy (Braeburn) and soft and mealy (Cox's). Force-deformation curves produced during compression of unconstrained discs of apple flesh followed trends predicted from the literature for each of the "juicy" and "mealy" types. The curves display the rupture point and, in some cases, a point of inflection that may be related to the point of incipient juice release. During compression these discs of flesh generally failed along the centre line, perpendicular to the direction of loading, through a barrelling mechanism. Cryo-Scanning Electron Microscopy (cryo-SEM) was used to examine the behavior of the parenchyma cells during fracture and compression using a purpose designed sample holder and compression tester. Fracture behavior reinforced the difference in mechanical properties between crisp and mealy fruit flesh. During compression testing prior to cryo-SEM imaging the apple flesh was constrained perpendicular to the direction of loading. Microstructural analysis suggests that, in this arrangement, the material fails along a compression front ahead of the compressing plate. Failure progresses by whole lines of parenchyma cells collapsing, or rupturing, with juice filling intercellular spaces, before the compression force is transferred to the next row of cells.
Group sparse multiview patch alignment framework with view consistency for image classification.
Gui, Jie; Tao, Dacheng; Sun, Zhenan; Luo, Yong; You, Xinge; Tang, Yuan Yan
2014-07-01
No single feature can satisfactorily characterize the semantic concepts of an image. Multiview learning aims to unify different kinds of features to produce a consensual and efficient representation. This paper redefines part optimization in the patch alignment framework (PAF) and develops a group sparse multiview patch alignment framework (GSM-PAF). The new part optimization considers not only the complementary properties of different views, but also view consistency. In particular, view consistency models the correlations between all possible combinations of any two kinds of view. In contrast to conventional dimensionality reduction algorithms that perform feature extraction and feature selection independently, GSM-PAF enjoys joint feature extraction and feature selection by exploiting l(2,1)-norm on the projection matrix to achieve row sparsity, which leads to the simultaneous selection of relevant features and learning transformation, and thus makes the algorithm more discriminative. Experiments on two real-world image data sets demonstrate the effectiveness of GSM-PAF for image classification.
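The l(2,1)-norm row-sparsity mechanism used by GSM-PAF has a simple closed-form proximal operator: rows of the projection matrix whose norm falls below the threshold are zeroed entirely, which is what couples the selection of a feature across all views. A small sketch with a random matrix (illustrative values only):

```python
import numpy as np

def l21_norm(W):
    """Sum of the Euclidean norms of the rows of W."""
    return float(np.sum(np.linalg.norm(W, axis=1)))

def prox_l21(W, t):
    """Row-wise shrinkage: the proximal operator of t * ||.||_{2,1}.
    Rows with norm below t are zeroed, giving row sparsity
    (simultaneous selection or rejection of a whole feature)."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return W * scale

rng = np.random.default_rng(7)
W = rng.normal(size=(10, 5))        # 10 features projected to 5 dimensions
W_sparse = prox_l21(W, 2.0)
kept = int(np.count_nonzero(np.linalg.norm(W_sparse, axis=1)))
print(f"rows (features) kept: {kept} / 10")
```

Inside a proximal-gradient solver, applying this operator after each gradient step drives whole rows to zero, so feature extraction and feature selection happen jointly, as the abstract describes.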
Sparse dictionary learning for resting-state fMRI analysis
NASA Astrophysics Data System (ADS)
Lee, Kangjoo; Han, Paul Kyu; Ye, Jong Chul
2011-09-01
Recently, there has been increased interest in the use of neuroimaging techniques to investigate what happens in the brain at rest. Functional imaging studies have revealed that default-mode network activity is disrupted in Alzheimer's disease (AD). However, there is no consensus, as yet, on the choice of analysis method for the application of resting-state analysis for disease classification. This paper proposes a novel compressed sensing based resting-state fMRI analysis tool called Sparse-SPM. As the brain's functional systems have been shown to have features of complex networks according to graph-theoretical analysis, we apply a graph model to represent a sparse combination of information flows from a complex network perspective. In particular, a new concept of a spatially adaptive design matrix is proposed, implemented via sparse dictionary learning. The proposed approach shows better performance compared to other conventional methods, such as independent component analysis (ICA) and the seed-based approach, in classifying AD patients from normal controls using resting-state analysis.
Compression-based integral curve data reuse framework for flow visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Fan; Bi, Chongke; Guo, Hanqi
Currently, by default, integral curves are repeatedly re-computed in different flow visualization applications, such as FTLE field computation, source-destination queries, etc., leading to unnecessary resource cost. We present a compression-based data reuse framework for integral curves, to greatly reduce their retrieval cost, especially in a resource-limited environment. In our design, a hierarchical and hybrid compression scheme is proposed to balance three objectives, including high compression ratio, controllable error, and low decompression cost. Specifically, we use and combine digitized curve sparse representation, floating-point data compression, and octree space partitioning to adaptively achieve the objectives. Results have shown that our data reuse framework can achieve tens of times acceleration in the resource-limited environment compared to on-the-fly particle tracing, while keeping information loss controllable. Moreover, our method can provide fast integral curve retrieval for more complex data, such as unstructured mesh data.
NASA Astrophysics Data System (ADS)
Li, Gongxin; Li, Peng; Wang, Yuechao; Wang, Wenxue; Xi, Ning; Liu, Lianqing
2014-07-01
Scanning Ion Conductance Microscopy (SICM) is one kind of Scanning Probe Microscopy (SPM), and it is widely used in imaging soft samples due to its many distinctive advantages. However, the scanning speed of SICM is much slower than that of other SPMs. Compressive sensing (CS) can improve scanning speed tremendously by breaking through the Shannon sampling theorem, but it still requires too much time for image reconstruction. Block compressive sensing can be applied to SICM imaging to further reduce the reconstruction time of sparse signals, and it has the unique additional benefit of enabling real-time image display during SICM imaging. In this article, a new method of dividing blocks and a new matrix arithmetic operation are proposed to build the block compressive sensing model, and several experiments were carried out to verify the superiority of block compressive sensing in reducing imaging time and enabling real-time display in SICM imaging.
The effects of wavelet compression on Digital Elevation Models (DEMs)
Oimoen, M.J.
2004-01-01
This paper investigates the effects of lossy compression on floating-point digital elevation models using the discrete wavelet transform. The compression of elevation data poses a different set of problems and concerns than does the compression of images. Most notably, the usefulness of DEMs depends largely on the quality of their derivatives, such as slope and aspect. Three areas extracted from the U.S. Geological Survey's National Elevation Dataset were transformed to the wavelet domain using the third-order filters of the Daubechies family (DAUB6), and were made sparse by setting 95 percent of the smallest wavelet coefficients to zero. The resulting raster is compressible to a corresponding degree. The effects of the nulled coefficients on the reconstructed DEM are noted as residuals in elevation, derived slope and aspect, and delineation of drainage basins and streamlines. A simple masking technique is also presented that maintains the integrity and flatness of water bodies in the reconstructed DEM.
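The coefficient-nulling experiment is easy to reproduce at small scale. The sketch below uses a hand-rolled orthonormal Haar transform and a synthetic smooth surface instead of NED data and DAUB6 filters, so the numbers are illustrative only:

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar wavelet matrix for n a power of two."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                  # averaging (scaling) rows
    bottom = np.kron(np.eye(n // 2), [1.0, -1.0]) # finest-scale difference rows
    return np.vstack([top, bottom]) / np.sqrt(2.0)

n = 64
# Synthetic smooth "DEM": a tilted plane plus gentle hills (hypothetical data).
xx, yy = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
dem = 500 + 200 * xx + 80 * np.sin(4 * xx) * np.cos(3 * yy)

Hm = haar_matrix(n)
coeffs = Hm @ dem @ Hm.T                      # separable 2-D wavelet transform

# Zero the 95 percent smallest-magnitude coefficients, as in the study.
thresh = np.quantile(np.abs(coeffs), 0.95)
sparse = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)

dem_rec = Hm.T @ sparse @ Hm                  # inverse transform (Hm is orthonormal)
rmse = np.sqrt(np.mean((dem_rec - dem) ** 2))
print(f"elevation RMSE after zeroing 95% of coefficients: {rmse:.2f} m")
```

Because smooth terrain concentrates its energy in a few coarse-scale coefficients, discarding the smallest 95 percent leaves small elevation residuals; as the paper stresses, the residuals in slope and aspect derived from `dem_rec` are the more demanding quality test.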
NASA Astrophysics Data System (ADS)
Ju, Yuman; Song, Na; Chen, Guobao; Sun, Dianrong; Han, Zhiqiang; Gao, Tianxiang
2017-06-01
A new record ponyfish, Deveximentum megalolepis Mochizuki and Hayashi, 1989, was documented based on its morphological characteristics and DNA barcode. Fifty specimens were collected from the Beibu Gulf of China and identified as D. megalolepis by morphological characterization. The coloration, meristic traits, and morphometric measurements were consistent with previously published records. In general, it is a silver-white, laterally compressed and deep-bodied ponyfish with 6-9 rows of scales on the cheek; scale rows above lateral line 6-8; scale rows below lateral line 14-17. A mitochondrial cytochrome c oxidase subunit I (COI) gene fragment was sequenced for phylogenetic analysis. There was no sequence variation in the COI gene among the specimens collected in this study. The genetic distances between D. megalolepis and other congeneric species range from 3.6% to 14.0%, which are greater than the threshold for fish species delimitation. The COI sequence analysis also supported the validity of D. megalolepis at the genetic level. However, the genetic distance between Chinese and Philippine individuals was about 1.2% and they formed two lineages in the gene tree, which may be caused by the geographical distance.
Zhang, Kaihua; Zhang, Lei; Yang, Ming-Hsuan
2014-10-01
It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there does not exist a sufficient amount of data for online algorithms to learn from at the outset. Second, online tracking algorithms often encounter drift problems. As a result of self-taught learning, misaligned samples are likely to be added and degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with a data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as a binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity in the detection procedure. The proposed compressive tracking algorithm runs in real-time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy and robustness.
Sparse dynamics for partial differential equations
Schaeffer, Hayden; Caflisch, Russel; Hauck, Cory D.; Osher, Stanley
2013-01-01
We investigate the approximate dynamics of several differential equations when the solutions are restricted to a sparse subset of a given basis. The restriction is enforced at every time step by simply applying soft thresholding to the coefficients of the basis approximation. By reducing or compressing the information needed to represent the solution at every step, only the essential dynamics are represented. In many cases, there are natural bases derived from the differential equations, which promote sparsity. We find that our method successfully reduces the dynamics of convection equations, diffusion equations, weak shocks, and vorticity equations with high-frequency source terms. PMID:23533273
Solution of plane cascade flow using improved surface singularity methods
NASA Technical Reports Server (NTRS)
Mcfarland, E. R.
1981-01-01
A solution method has been developed for calculating compressible inviscid flow through a linear cascade of arbitrary blade shapes. The method uses advanced surface singularity formulations which were adapted from those found in current external flow analyses. The resulting solution technique provides a fast flexible calculation for flows through turbomachinery blade rows. The solution method and some examples of the method's capabilities are presented.
Long term mechanical properties of alkali activated slag
NASA Astrophysics Data System (ADS)
Zhu, J.; Zheng, W. Z.; Xu, Z. Z.; Leng, Y. F.; Qin, C. Z.
2018-01-01
This article reports a study on the microstructural and long-term mechanical properties of alkali activated slag up to 180 days, with cement paste studied as a comparison. The mechanical properties analyzed include compressive strength, flexural strength, axial tensile strength and splitting tensile strength. The results showed that the alkali activated slag had higher compressive and tensile strength than the cement paste. The slag is activated by potassium silicate (K2SiO3) and sodium hydroxide (NaOH) solutions to attain a silicate modulus of 1, using 12% potassium silicate and 5.35% sodium hydroxide; the volume dosage of water is 35% and 42%. The results indicate that alkali activated slag is a rapid-hardening, early-strength cementitious material with excellent long-term mechanical properties. The compressive strengths of the single-row-of-holes block, the single-hole block and the standard solid brick basically meet engineering requirements. The microstructure of alkali activated slag is studied by X-ray diffraction (XRD). The hydration products of alkali-activated slag are identified as hydrated calcium silicate and hydrated calcium aluminate.
Forest structure of oak plantations after silvicultural treatment to enhance habitat for wildlife
Twedt, Daniel J.; Phillip, Cherrie-Lee P.; Guilfoyle, Michael P.; Wilson, R. Randy; Schweitzer, Callie Jo; Clatterbuck, Wayne K.; Oswalt, Christopher M.
2016-01-01
During the past 30 years, thousands of hectares of oak-dominated bottomland hardwood plantations have been planted on agricultural fields in the Mississippi Alluvial Valley. Many of these plantations now have closed canopies and sparse understories. Silvicultural treatments could create a more heterogeneous forest structure, with canopy gaps and increased understory vegetation for wildlife. Lack of volume sufficient for commercial harvest in hardwood plantations has impeded treatments, but demand for woody biomass for energy production may provide a viable means to introduce disturbance beneficial for wildlife. We assessed forest structure in response to prescribed pre-commercial perturbations in hardwood plantations resulting from silvicultural treatments: 1) row thinning by felling every fourth planted row; 2) multiple patch cuts with canopy gaps of 0.25 - 2 ha; and 3) tree removal on intersecting corridors diagonal to planted rows. These 3 treatments, and an untreated control, were applied to oak plantations (20 - 30 years post-planting) on three National Wildlife Refuges (Cache River, AR; Grand Cote, LA; and Yazoo, MS) during summer 2010. We sampled habitat using fixed-radius plots in 2009 (pre-treatment) and in 2012 (post-treatment) at random locations. Retained basal area was least in diagonal corridor treatments but had greater variance in patch-cut treatments. All treatments increased canopy openness and the volume of coarse woody debris. Occurrence of birds using early successional habitats was greater on sites treated with patch cuts and diagonal intersections. Canopy openings in row-thinned stands are being filled by lateral crown growth of retained trees, whereas patch cut and diagonal intersection gaps appear likely to be filled by regenerating saplings.
Compressed Sensing for Metrics Development
NASA Astrophysics Data System (ADS)
McGraw, R. L.; Giangrande, S. E.; Liu, Y.
2012-12-01
Models by their very nature tend to be sparse in the sense that they are designed, with a few optimally selected key parameters, to provide simple yet faithful representations of a complex observational dataset or computer simulation output. This paper seeks to apply methods from compressed sensing (CS), a new area of applied mathematics currently undergoing very rapid development (see, for example, Candes et al., 2006), to FASTER needs for new approaches to model evaluation and metrics development. The CS approach is illustrated for a time series generated using a few-parameter (i.e. sparse) model. A seemingly incomplete set of measurements, taken at just a few random sampling times, is then used to recover the hidden model parameters. Remarkably, there is a sharp transition in the number of required measurements, beyond which both the model parameters and the time series are recovered exactly. Applications to data compression, data sampling/collection strategies, and the development of metrics for model evaluation by comparison with observation (e.g. evaluation of model predictions of cloud fraction using cloud radar observations) are presented and discussed in the context of the CS approach. Cited reference: Candes, E. J., Romberg, J., and Tao, T. (2006), Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Transactions on Information Theory, 52, 489-509.
An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).
Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling
2018-04-17
Aimed at the low energy consumption of the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have a low computational complexity, so that they consume only a small amount of energy. Experimental results show that the proposed scheme not only has low encoding and decoding complexity when compared with traditional methods, but also provides good objective and subjective reconstruction quality. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for the Green IoT.
Multiresolution representation and numerical algorithms: A brief review
NASA Technical Reports Server (NTRS)
Harten, Amiram
1994-01-01
In this paper we review recent developments in techniques to represent data in terms of its local scale components. These techniques enable us to obtain data compression by eliminating scale coefficients that are sufficiently small. This capability for data compression can be used to reduce the cost of many numerical solution algorithms, either by applying it to the numerical solution operator to obtain an approximate sparse representation, or by applying it to the numerical solution itself to reduce the number of quantities that need to be computed.
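The compression mechanism summarized above (eliminate scale coefficients that are sufficiently small) can be sketched with a minimal orthonormal Haar transform; this is an illustrative stand-in, not Harten's actual multiresolution scheme:

```python
import numpy as np

def haar_forward(x):
    """Multi-level orthonormal Haar transform of a length-2^k signal.

    Returns a list of detail (scale) coefficient arrays, finest first,
    with the final coarse value appended last.
    """
    coeffs = []
    while len(x) > 1:
        a = (x[0::2] + x[1::2]) / np.sqrt(2)   # coarse averages
        d = (x[0::2] - x[1::2]) / np.sqrt(2)   # scale (detail) coefficients
        coeffs.append(d)
        x = a
    coeffs.append(x)                           # final coarse value
    return coeffs

def haar_inverse(coeffs):
    """Invert haar_forward exactly (the transform is orthonormal)."""
    x = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        up = np.empty(2 * len(d))
        up[0::2] = (x + d) / np.sqrt(2)
        up[1::2] = (x - d) / np.sqrt(2)
        x = up
    return x

signal = np.arange(1.0, 9.0)                   # smooth ramp, length 8
coeffs = haar_forward(signal)
# Data compression: zero out scale coefficients below a threshold.
compressed = [np.where(np.abs(c) < 1.0, 0.0, c) for c in coeffs]
approx = haar_inverse(compressed)
rel_err = np.linalg.norm(approx - signal) / np.linalg.norm(signal)
# For this ramp, all finest-scale details fall below the threshold, so
# half the coefficients are dropped at roughly 10% relative error.
```

Because the transform is orthonormal, the reconstruction error equals the energy of the discarded coefficients, which is what makes "drop the small ones" a controlled form of compression.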
Russell, Barry C; Golani, Daniel; Tikochinski, Yaron
2015-05-12
Saurida lessepsianus n. sp., a lizardfish (Aulopiformes: Synodontidae) from the Red Sea and Mediterranean Sea, previously misidentified as S. undosquamis (Richardson) and more recently as S. macrolepis Tanaka, is described as a new species. It is characterised by the following combination of characters: dorsal fin with 11-12 rays; pectoral fins with 13-15 rays; lateral-line scales 47-51; transverse scale rows above lateral line 4½, below lateral line 5½; pectoral fins moderately long (extending to between just before or just beyond a line from origin of pelvic fins to origin of dorsal fin); 2 rows of teeth on outer palatines; 0-2 teeth on vomer; tongue with 3-6 rows of teeth posteriorly; caudal peduncle slightly compressed (depth a little more than width); upper margin of caudal fin with row of 3-8 (usually 6 or 7) small black spots; stomach pale grey to blackish anteriorly; intestine whitish. The species is common in the Red Sea and as a result of Lessepsian migration through the Suez Canal, it is now widely distributed in the eastern Mediterranean. The taxonomic status of two other Red Sea nominal species, Saurus badimottah Rüppell [= Saurida tumbil (Bloch)] and Saurida sinaitica Dollfus in Gruvel (a nomen nudum), is clarified. A key is provided for the species of Saurida in the Red Sea.
Multispectral Image Compression Based on DSC Combined with CCSDS-IDC
Li, Jin; Xing, Fei; Sun, Ting; You, Zheng
2014-01-01
A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually works on a satellite where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged into the DSC strategy of Slepian-Wolf (SW) coding based on QC-LDPC in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches. PMID:25110741
Two-level image authentication by two-step phase-shifting interferometry and compressive sensing
NASA Astrophysics Data System (ADS)
Zhang, Xue; Meng, Xiangfeng; Yin, Yongkai; Yang, Xiulun; Wang, Yurong; Li, Xianye; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi
2018-01-01
A two-level image authentication method is proposed; the method is based on two-step phase-shifting interferometry, double random phase encoding, and compressive sensing (CS) theory, by which the certification image can be encoded into two interferograms. Through discrete wavelet transform (DWT), sparseness processing, Arnold transform, and data compression, two compressed signals can be generated and delivered to two different participants of the authentication system. Only the participant who possesses the first compressed signal can attempt to pass the low-level authentication. The application of Orthogonal Matching Pursuit CS algorithm reconstruction, inverse Arnold transform, inverse DWT, two-step phase-shifting wavefront reconstruction, and inverse Fresnel transform can result in the output of a remarkable peak in the central location of the nonlinear correlation coefficient distributions of the recovered image and the standard certification image. Then, the other participant, who possesses the second compressed signal, is authorized to carry out the high-level authentication. Therefore, both compressed signals are collected to reconstruct the original meaningful certification image with a high correlation coefficient. Theoretical analysis and numerical simulations verify the feasibility of the proposed method.
Image super-resolution via sparse representation.
Yang, Jianchao; Wright, John; Huang, Thomas S; Ma, Yi
2010-11-01
This paper presents a new approach to single-image super-resolution, based on sparse signal representation. Research on image statistics suggests that image patches can be well-represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low-resolution and high-resolution image patch pair with respect to their own dictionaries. Therefore, the sparse representation of a low-resolution image patch can be applied with the high-resolution image patch dictionary to generate a high-resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large number of image patch pairs, reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle super-resolution with noisy inputs in a more unified framework.
Computation of rotor-stator interaction using the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Whitfield, David L.; Chen, Jen-Ping
1995-01-01
The numerical scheme presented belongs to a family of codes known as UNCLE (UNsteady Computation of fieLd Equations), as reported by Whitfield (1995), which is being used to solve problems in a variety of areas including compressible and incompressible flows. This derivation is specifically developed for general unsteady multi-blade-row turbomachinery problems. The scheme solves the Reynolds-averaged Navier-Stokes equations with the Baldwin-Lomax turbulence model.
Noniterative MAP reconstruction using sparse matrix representations.
Cao, Guangzhi; Bouman, Charles A; Webb, Kevin J
2009-09-01
We present a method for noniterative maximum a posteriori (MAP) tomographic reconstruction which is based on the use of sparse matrix representations. Our approach is to precompute and store the inverse matrix required for MAP reconstruction. This approach has generally not been used in the past because the inverse matrix is typically large and fully populated (i.e., not sparse). In order to overcome this problem, we introduce two new ideas. The first idea is a novel theory for the lossy source coding of matrix transformations which we refer to as matrix source coding. This theory is based on a distortion metric that reflects the distortions produced in the final matrix-vector product, rather than the distortions in the coded matrix itself. The resulting algorithms are shown to require orthonormal transformations of both the measurement data and the matrix rows and columns before quantization and coding. The second idea is a method for efficiently storing and computing the required orthonormal transformations, which we call a sparse-matrix transform (SMT). The SMT is a generalization of the classical FFT in that it uses butterflies to compute an orthonormal transform; but unlike an FFT, the SMT uses the butterflies in an irregular pattern, and is numerically designed to best approximate the desired transforms. We demonstrate the potential of the noniterative MAP reconstruction with examples from optical tomography. The method requires offline computation to encode the inverse transform. However, once these offline computations are completed, the noniterative MAP algorithm is shown to reduce both storage and computation by well over two orders of magnitude, compared to linear iterative reconstruction methods.
Fast Boundary Element Method for acoustics with the Sparse Cardinal Sine Decomposition
NASA Astrophysics Data System (ADS)
Alouges, François; Aussal, Matthieu; Parolin, Emile
2017-07-01
This paper presents the newly proposed Sparse Cardinal Sine Decomposition method, which allows fast convolution on unstructured grids. We focus on its use when coupled with finite element techniques to solve acoustic problems with the (compressed) Boundary Element Method. In addition, we also compare the computational performance of two equivalent Matlab® and Python implementations of the method. We show validation test cases in order to assess the precision of the approach. Finally, the performance of the method is illustrated by the computation of the acoustic target strength of a realistic submarine from the Benchmark Target Strength Simulation international workshop.
An infrared-visible image fusion scheme based on NSCT and compressed sensing
NASA Astrophysics Data System (ADS)
Zhang, Qiong; Maldague, Xavier
2015-05-01
Image fusion, a current research hotspot in the field of infrared computer vision, has been developed using a wide variety of methods. Traditional image fusion algorithms tend to introduce problems such as data storage shortages and increased computational complexity. Compressed sensing (CS) uses sparse sampling without prior knowledge and reconstructs the image well, which reduces the cost and complexity of image processing. In this paper, an advanced compressed sensing image fusion algorithm based on the non-subsampled contourlet transform (NSCT) is proposed. NSCT provides better sparsity than the wavelet transform in image representation. Through the NSCT decomposition, the low-frequency and high-frequency coefficients can be obtained respectively. For the fusion of the low-frequency coefficients of the infrared and visible images, the adaptive regional energy weighting rule is utilized; thus only the high-frequency coefficients are specially measured. Here we use sparse representation and random projection to obtain the required values of the high-frequency coefficients; afterwards, the coefficients of each image block can be fused via the absolute maximum selection rule and/or the regional standard deviation rule. In the reconstruction of the compressive sampling results, a gradient-based iterative algorithm and the total variation (TV) method are employed to recover the high-frequency coefficients. Finally, the fused image is recovered by the inverse NSCT. Both the visual effects and the numerical results of our experiments indicate that the presented approach achieves much higher image fusion quality, accelerates the calculations, enhances various targets and extracts more useful information.
Single-pixel imaging based on compressive sensing with spectral-domain optical mixing
NASA Astrophysics Data System (ADS)
Zhu, Zhijing; Chi, Hao; Jin, Tao; Zheng, Shilie; Jin, Xiaofeng; Zhang, Xianmin
2017-11-01
In this letter a single-pixel imaging structure is proposed based on compressive sensing using a spatial light modulator (SLM)-based spectrum shaper. In the approach, an SLM-based spectrum shaper, the pattern of which is a predetermined pseudorandom bit sequence (PRBS), spectrally codes the optical pulse carrying image information. The energy of the spectrally mixed pulse is detected by a single-pixel photodiode and the measurement results are used to reconstruct the image via a sparse recovery algorithm. As the mixing of the image signal and the PRBS is performed in the spectral domain, optical pulse stretching, modulation, compression and synchronization in the time domain are avoided. Experiments are implemented to verify the feasibility of the approach.
Perceptually controlled doping for audio source separation
NASA Astrophysics Data System (ADS)
Mahé, Gaël; Nadalin, Everton Z.; Suyama, Ricardo; Romano, João MT
2014-12-01
The separation of an underdetermined audio mixture can be performed through sparse component analysis (SCA), which relies however on the strong hypothesis that the source signals are sparse in some domain. To overcome this difficulty in the case where the original sources are available before the mixing process, informed source separation (ISS) embeds a watermark in the mixture whose information can help a later separation. Though powerful, this technique is generally specific to a particular mixing setup and may be compromised by an additional bitrate compression stage. Thus, instead of watermarking, we propose a `doping' method that makes the time-frequency representation of each source more sparse, while preserving its audio quality. This method is based on an iterative decrease of the distance between the distribution of the signal and a target sparse distribution, under a perceptual constraint. We aim to show that the proposed approach is robust to audio coding and that the use of the sparsified signals improves the source separation, in comparison with the original sources. In this work, the analysis is restricted to instantaneous mixtures and focused on voice sources.
Exact recovery of sparse multiple measurement vectors by ℓ2,p-minimization.
Wang, Changlong; Peng, Jigen
2018-01-01
The joint sparse recovery problem is a generalization of the single measurement vector problem widely studied in compressed sensing. It aims to recover a set of jointly sparse vectors, i.e., those that have nonzero entries concentrated at a common location. Meanwhile, ℓ2,p-minimization (0 < p ≤ 1) subject to matrix constraints, i.e., minimizing the mixed ℓ2,p quasi-norm of X subject to AX = B, is widely used in a large number of algorithms designed for this problem. The main contribution of this paper is two theoretical results about this technique. The first proves that for every multiple system of linear equations there exists a constant p(A, B) > 0 such that the original unique sparse solution can also be recovered by minimization in the ℓ2,p quasi-norm whenever 0 < p < p(A, B). The second gives an analytic expression for such a p(A, B). Finally, we display the results of one example to confirm the validity of our conclusions, and we use numerical experiments to show that our results increase the efficiency of algorithms designed for ℓ2,p-minimization.
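A minimal sketch of the mixed ℓ2,p quasi-norm commonly used in joint sparse recovery may help make the abstract concrete; the definition below is the standard one (assumed here, since the record's formulas are redacted in the source), and the key property is that as p decreases toward 0 the quasi-norm approaches the count of nonzero rows, which is why minimizing it promotes row sparsity:

```python
import numpy as np

def l2p_quasinorm_p(X, p):
    """||X||_{2,p}^p = sum_i ||row_i||_2^p, the row-wise mixed quasi-norm
    used to promote joint (row) sparsity, for 0 < p <= 1."""
    row_norms = np.linalg.norm(X, axis=1)
    return float(np.sum(row_norms ** p))

# A jointly sparse matrix: nonzero entries concentrated in rows 0 and 2.
X = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [1.0, 0.0]])

print(l2p_quasinorm_p(X, 1.0))   # row norms are 5, 0, 1 -> prints 6.0
# As p -> 0 the value approaches the number of nonzero rows (here, 2).
print(round(l2p_quasinorm_p(X, 1e-6), 3))
```

The p = 1 case is the convex ℓ2,1 relaxation used by most MMV algorithms; smaller p enforces sparsity more aggressively at the cost of convexity, which is what makes the existence of a recovery threshold in p a useful result.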
Use of general purpose graphics processing units with MODFLOW
Hughes, Joseph D.; White, Jeremy T.
2013-01-01
To evaluate the use of general-purpose graphics processing units (GPGPUs) to improve the performance of MODFLOW, an unstructured preconditioned conjugate gradient (UPCG) solver has been developed. The UPCG solver uses a compressed sparse row storage scheme and includes Jacobi, zero fill-in incomplete and modified-incomplete lower-upper (LU) factorization, and generalized least-squares polynomial preconditioners. The UPCG solver also includes options for sequential and parallel solution on the central processing unit (CPU) using OpenMP. For simulations utilizing the GPGPU, all basic linear algebra operations are performed on the GPGPU; memory copies between the CPU and GPGPU occur prior to the first iteration of the UPCG solver and after satisfying head and flow criteria or exceeding a maximum number of iterations. The efficiency of the UPCG solver for GPGPU and CPU solutions is benchmarked using simulations of a synthetic, heterogeneous unconfined aquifer with tens of thousands to millions of active grid cells. Testing indicates GPGPU speedups on the order of 2 to 8, relative to the standard MODFLOW preconditioned conjugate gradient (PCG) solver, can be achieved when (1) memory copies between the CPU and GPGPU are optimized, (2) the percentage of time performing memory copies between the CPU and GPGPU is small relative to the calculation time, (3) high-performance GPGPU cards are utilized, and (4) CPU-GPGPU combinations are used to execute sequential operations that are difficult to parallelize. Furthermore, UPCG solver testing indicates GPGPU speedups exceed parallel CPU speedups achieved using OpenMP on multicore CPUs for preconditioners that can be easily parallelized.
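The compressed sparse row (CSR) scheme mentioned above stores only the nonzero entries plus per-row pointers; a generic sketch of the format and its matrix-vector product follows (an illustration of CSR itself, not the MODFLOW/UPCG implementation):

```python
import numpy as np

def dense_to_csr(A):
    """Convert a dense matrix to CSR arrays: nonzero values, their column
    indices, and row pointers (indptr[i]:indptr[i+1] spans row i)."""
    data, indices, indptr = [], [], [0]
    for row in A:
        for j, v in enumerate(row):
            if v != 0.0:
                data.append(v)
                indices.append(j)
        indptr.append(len(data))
    return np.array(data), np.array(indices), np.array(indptr)

def csr_matvec(data, indices, indptr, x):
    """y = A @ x touching only the stored nonzeros."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# A small symmetric positive-definite stencil-like matrix, the kind of
# structure a groundwater-flow discretization produces.
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
data, indices, indptr = dense_to_csr(A)
x = np.array([1.0, 2.0, 3.0])
y = csr_matvec(data, indices, indptr, x)   # matches A @ x
```

In a GPU setting, the outer loop over rows is the natural unit of parallelism, and the CSR layout keeps memory traffic proportional to the number of nonzeros rather than to the full grid size.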
Weninger, Patrick; Dall'Ara, Enrico; Drobetz, Herwig; Nemec, Wolfgang; Figl, Markus; Redl, Heinz; Hertz, Harald; Zysset, Philippe
2011-01-01
Volar fixed-angle plating is a popular treatment for unstable distal radius fractures. Despite the availability of plating systems for treating distal radius fractures, little is known about the mechanical properties of multidirectional fixed-angle plates. The aim of this study was to compare the primary fixation stability of three possible screw configurations in a distal extra-articular fracture model using a multidirectional fixed-angle plate with metaphyseal cancellous screws distally. Eighteen Sawbones radii (Sawbones, Sweden, model# 1027) were used to simulate an extra-articular distal radius fracture according to AO/OTA 23 A3. Plates were fixed to the shaft with one non-locking screw in the oval hole and two locking screws as recommended by the manufacturer. Three groups (n = 6) were defined by screw configuration in the distal metaphyseal fragment: Group 1: distal row of screws only; Group 2: 2 rows of screws, parallel insertion; Group 3: 2 rows of screws, proximal screws inserted with 30° of inclination. Specimens underwent mechanical testing under axial compression within the elastic range and load controlled between 20 N and 200 N at a rate of 40 N/s. Axial stiffness and type of construct failure were recorded. There was no difference regarding axial stiffness between the three groups. In every specimen, failure of the Sawbone-implant-construct occurred as plastic bending of the volar titanium plate when the dorsal wedge was closed. Considering the limitations of the study, the recommendation to use two rows of screws or to place screws in the proximal metaphyseal row with inclination cannot be supported by our mechanical data.
Adaptive compressed sensing of remote-sensing imaging based on the sparsity prediction
NASA Astrophysics Data System (ADS)
Yang, Senlin; Li, Xilong; Chong, Xin
2017-10-01
Conventional compressive sensing works with non-adaptive linear projections, and the number of measurements is usually set empirically; as a result, the quality of image reconstruction is often affected. First, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is described. Then an estimation method for the sparsity of an image is proposed based on the two-dimensional discrete cosine transform (2D DCT). With an energy threshold given beforehand, the DCT coefficients are processed with both energy normalization and sorting in descending order, and the sparsity of the image can be estimated from the proportion of dominant coefficients. Finally, simulation results show that the method can estimate the sparsity of an image effectively, and provides a practical basis for selecting the number of compressive measurements. The results also show that, since the number of measurements is chosen according to the sparsity estimated with the given energy threshold, the proposed method can ensure the quality of image reconstruction.
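The estimation step described above (normalize the 2D DCT energy, sort in descending order, and count the dominant coefficients needed to reach the energy threshold) can be sketched as follows; this is a generic re-implementation of the idea, and the function names and test images are illustrative rather than taken from the paper:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (m + 0.5) * k / n)
    C[0, :] = np.sqrt(1.0 / n)
    return C

def estimate_sparsity(img, energy_threshold=0.95):
    """Count the dominant 2D DCT coefficients that hold the given share
    of the total energy, a proxy for the image's sparse degree."""
    C = dct_matrix(img.shape[0])
    D = dct_matrix(img.shape[1])
    coeffs = C @ img @ D.T                       # separable 2D DCT
    energy = np.sort((coeffs ** 2).ravel())[::-1]
    energy = energy / energy.sum()               # energy normalization
    return int(np.searchsorted(np.cumsum(energy), energy_threshold) + 1)

flat = np.ones((8, 8))        # constant image: all energy in the DC term
print(estimate_sparsity(flat))   # estimated sparsity is 1
```

A constant image concentrates all of its energy in the DC coefficient, so its estimated sparsity is 1; natural images yield larger counts, which would then drive the choice of the number of BCS measurements.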
Application of the Envelope Difference Index to Spectrally Sparse Speech
ERIC Educational Resources Information Center
Souza, Pamela; Hoover, Eric; Gallun, Frederick
2012-01-01
Purpose: Amplitude compression is a common hearing aid processing strategy that can improve speech audibility and loudness comfort but also has the potential to alter important cues carried by the speech envelope. In previous work, a measure of envelope change, the Envelope Difference Index (EDI; Fortune, Woodruff, & Preves, 1994), was moderately…
Subspace Compressive Detection for Sparse Signals
2008-04-01
GPU-Accelerated Hybrid Algorithm for 3D Localization of Fluorescent Emitters in Dense Clusters
NASA Astrophysics Data System (ADS)
Jung, Yoon; Barsic, Anthony; Piestun, Rafael; Fakhri, Nikta
In stochastic switching-based super-resolution imaging, a random subset of fluorescent emitters is imaged and localized in each frame to construct a single high-resolution image. However, the condition of non-overlapping point spread functions (PSFs) imposes constraints on experimental parameters. Recent developments in post-processing methods, such as dictionary-based sparse support recovery using compressive sensing, have shown up to an order of magnitude higher recall rates than single-emitter fitting methods. However, the computational complexity of this approach scales poorly with the grid size and requires long runtimes. Here, we introduce a fast and accurate compressive sensing algorithm for localizing fluorescent emitters at high density in 3D, namely sparse support recovery using Orthogonal Matching Pursuit (OMP) and the L1-Homotopy algorithm for reconstructing STORM images (SOLAR STORM). SOLAR STORM combines OMP with L1-Homotopy to reduce computational complexity, which is further accelerated by parallel implementation on GPUs. This method can be used in a variety of experimental conditions for both in vitro and live-cell fluorescence imaging.
Sparse radar imaging using 2D compressed sensing
NASA Astrophysics Data System (ADS)
Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying
2014-10-01
Radar imaging is an ill-posed linear inverse problem, and compressed sensing (CS) has proved to have tremendous potential in this field. This paper surveys the theory of radar imaging and concludes that ISAR imaging can be stated mathematically as a problem of 2D sparse decomposition. Based on CS, we propose a novel measurement strategy for ISAR imaging radar that uses random sub-sampling in both the range and azimuth dimensions, which reduces the amount of sampled data tremendously. The usual way to handle the 2D reconstruction problem is to convert it to 1D via the Kronecker product, which sharply increases the dictionary size and computational cost. In this paper, we instead introduce the 2D-SL0 algorithm for image reconstruction. It is shown that 2D-SL0 achieves results equivalent to 1D reconstruction methods while reducing computational complexity and memory usage significantly. Simulation results demonstrate the effectiveness and feasibility of our method.
Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices
Monajemi, Hatef; Jafarpour, Sina; Gavish, Matan; Donoho, David L.; Ambikasaran, Sivaram; Bacallado, Sergio; Bharadia, Dinesh; Chen, Yuxin; Choi, Young; Chowdhury, Mainak; Chowdhury, Soham; Damle, Anil; Fithian, Will; Goetz, Georges; Grosenick, Logan; Gross, Sam; Hills, Gage; Hornstein, Michael; Lakkam, Milinda; Lee, Jason; Li, Jian; Liu, Linxi; Sing-Long, Carlos; Marx, Mike; Mittal, Akshay; Monajemi, Hatef; No, Albert; Omrani, Reza; Pekelis, Leonid; Qin, Junjie; Raines, Kevin; Ryu, Ernest; Saxe, Andrew; Shi, Dai; Siilats, Keith; Strauss, David; Tang, Gary; Wang, Chaojun; Zhou, Zoey; Zhu, Zhen
2013-01-01
In compressed sensing, one takes n < N samples of an N-dimensional vector x0 using an n × N matrix A, obtaining undersampled measurements y = Ax0. For random matrices with independent standard Gaussian entries, it is known that, when x0 is k-sparse, there is a precisely determined phase transition: for a certain region in the (k/n, n/N)-phase diagram, convex optimization typically finds the sparsest solution, whereas outside that region, it typically fails. It has been shown empirically that the same property—with the same phase transition location—holds for a wide range of non-Gaussian random matrix ensembles. We report extensive experiments showing that the Gaussian phase transition also describes numerous deterministic matrices, including Spikes and Sines, Spikes and Noiselets, Paley Frames, Delsarte-Goethals Frames, Chirp Sensing Matrices, and Grassmannian Frames. Namely, for each of these deterministic matrices in turn, for a typical k-sparse object, we observe that convex optimization is successful over a region of the phase diagram that coincides with the region known for Gaussian random matrices. Our experiments considered coefficients constrained to a set X for four different sets X, and the results establish our finding for each of the four associated phase transitions. PMID:23277588
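The success/failure experiment behind such phase diagrams can be reproduced in miniature with off-the-shelf tools. A hedged sketch (not the authors' code; the problem sizes, seed, and use of the HiGHS LP solver are our own choices): solve basis pursuit as a linear program and check recovery for a Gaussian matrix well inside the success region.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||x||_1 subject to A x = y, via the standard LP split x = u - v."""
    n, N = A.shape
    c = np.ones(2 * N)                # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])         # A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * N), method='highs')
    u, v = res.x[:N], res.x[N:]
    return u - v

# A Gaussian sensing matrix deep inside the success region of the phase
# diagram (n = 40, N = 80, k = 4) typically recovers x0 exactly.
rng = np.random.default_rng(0)
n, N, k = 40, 80, 4
A = rng.standard_normal((n, N)) / np.sqrt(n)
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
x_hat = basis_pursuit(A, A @ x0)
```

Sweeping k/n and n/N over a grid and recording the success rate of this routine reproduces the empirical phase-transition curve.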
Determining building interior structures using compressive sensing
NASA Astrophysics Data System (ADS)
Lagunas, Eva; Amin, Moeness G.; Ahmad, Fauzia; Nájar, Montse
2013-04-01
We consider imaging of building interior structures using compressive sensing (CS), with applications to through-the-wall imaging and urban sensing. We consider a monostatic synthetic aperture radar imaging system employing a stepped-frequency waveform. The proposed approach exploits prior information on building construction practices to form an appropriate sparse representation of the building interior layout. We devise a dictionary of possible wall locations, which is consistent with the fact that interior walls are typically parallel or perpendicular to the front wall. The dictionary accounts for the dominant normal-angle reflections from exterior and interior walls for the monostatic imaging system. CS is applied to a reduced set of observations to recover the true positions of the walls. Additional information about interior walls can be obtained using a dictionary of possible corner reflectors, which model the response of the junction of two walls. Supporting results based on simulation and laboratory experiments are provided. It is shown that the proposed sparsifying basis outperforms the conventional through-the-wall CS model, the wavelet sparsifying basis, and the block sparse model for building interior layout detection.
Self-expressive Dictionary Learning for Dynamic 3D Reconstruction.
Zheng, Enliang; Ji, Dinghuang; Dunn, Enrique; Frahm, Jan-Michael
2017-08-22
We target the problem of sparse 3D reconstruction of dynamic objects observed by multiple unsynchronized video cameras with unknown temporal overlap. To this end, we develop a framework to recover the unknown structure without sequencing information across video sequences. Our proposed compressed sensing framework poses the estimation of 3D structure as a dictionary learning problem, where the dictionary is defined as an aggregation of the temporally varying 3D structures. Given the smooth motion of dynamic objects, we observe that any element in the dictionary can be well approximated by a sparse linear combination of other elements in the same dictionary (i.e., self-expression). Our formulation optimizes a biconvex cost function that leverages a compressed sensing formulation and enforces both structural dependency coherence across video streams and motion smoothness across estimates from common video sources. We further analyze the reconstructability of our approach under different capture scenarios and compare and relate it to existing methods. Experimental results on large amounts of synthetic data as well as real imagery demonstrate the effectiveness of our approach.
Split Bregman's optimization method for image construction in compressive sensing
NASA Astrophysics Data System (ADS)
Skinner, D.; Foo, S.; Meyer-Bäse, A.
2014-05-01
The theory of compressive sampling (CS) was introduced by Candes, Romberg, and Tao, and by Donoho in 2006. Using a priori knowledge that a signal is sparse, it has been mathematically proven that CS can defy the Nyquist sampling theorem. Theoretically, reconstruction of a CS image relies on minimization and optimization techniques to solve this complex, nearly NP-complete problem. There are many paths to consider when compressing and reconstructing an image, but these methods have remained untested and unclear on natural images, such as underwater sonar images. The goal of this research is to perfectly reconstruct the original sonar image from a sparse signal while maintaining pertinent information, such as mine-like objects, in side-scan sonar (SSS) images. Goldstein and Osher have shown how to reconstruct the original image through an iterative method called Split Bregman iteration. This method "decouples" the energies using portions of the energy from both the ℓ1 and ℓ2 norms. Once the energies are split, Bregman iteration is used to solve the unconstrained optimization problem by recursively solving the subproblems simultaneously. The faster these two steps can be solved, the faster the overall method becomes. While the majority of CS research is still focused on the medical field, this paper demonstrates the effectiveness of the Split Bregman method on sonar images.
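The decoupling described above can be sketched on the generic ℓ1-regularized least-squares problem rather than the paper's sonar pipeline. This is a minimal illustration under our own assumptions (parameter values mu, lam, and the iteration count are illustrative, not the authors' settings): the auxiliary variable d carries the ℓ1 energy, the quadratic x-update carries the ℓ2 energy, and the Bregman variable b ties them together.

```python
import numpy as np

def shrink(z, t):
    """Soft-thresholding: the closed-form l1 proximal step."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def split_bregman_l1(A, y, mu=10.0, lam=1.0, n_iter=200):
    """Split Bregman sketch for  min_x  mu/2 ||Ax - y||^2 + ||x||_1.
    d decouples the l2 data term from the l1 term; b enforces d = x."""
    N = A.shape[1]
    x = np.zeros(N); d = np.zeros(N); b = np.zeros(N)
    H = mu * A.T @ A + lam * np.eye(N)   # fixed quadratic; could factor once
    rhs0 = mu * A.T @ y
    for _ in range(n_iter):
        x = np.linalg.solve(H, rhs0 + lam * (d - b))  # l2 step (linear solve)
        d = shrink(x + b, 1.0 / lam)                  # l1 step (shrinkage)
        b = b + x - d                                 # Bregman update
    return x
```

Both sub-steps are cheap (one linear solve with a reusable factorization, one shrinkage), which is the source of the method's speed.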
Improving M-SBL for Joint Sparse Recovery Using a Subspace Penalty
NASA Astrophysics Data System (ADS)
Ye, Jong Chul; Kim, Jong Min; Bresler, Yoram
2015-12-01
The multiple measurement vector problem (MMV) is a generalization of the compressed sensing problem that addresses the recovery of a set of jointly sparse signal vectors. One of the important contributions of this paper is to reveal that the seemingly least related state-of-art MMV joint sparse recovery algorithms - M-SBL (multiple sparse Bayesian learning) and subspace-based hybrid greedy algorithms - have a very important link. More specifically, we show that replacing the $\\log\\det(\\cdot)$ term in M-SBL by a rank proxy that exploits the spark reduction property discovered in subspace-based joint sparse recovery algorithms, provides significant improvements. In particular, if we use the Schatten-$p$ quasi-norm as the corresponding rank proxy, the global minimiser of the proposed algorithm becomes identical to the true solution as $p \\rightarrow 0$. Furthermore, under the same regularity conditions, we show that the convergence to a local minimiser is guaranteed using an alternating minimization algorithm that has closed form expressions for each of the minimization steps, which are convex. Numerical simulations under a variety of scenarios in terms of SNR, and condition number of the signal amplitude matrix demonstrate that the proposed algorithm consistently outperforms M-SBL and other state-of-the art algorithms.
Orthogonal Procrustes Analysis for Dictionary Learning in Sparse Linear Representation.
Grossi, Giuliano; Lanzarotti, Raffaella; Lin, Jianyi
2017-01-01
In the sparse representation model, the design of overcomplete dictionaries plays a key role for the effectiveness and applicability in different domains. Recent research has produced several dictionary learning approaches, and it has been proven that dictionaries learnt from data examples significantly outperform structured ones, e.g., wavelet transforms. In this context, learning consists in adapting the dictionary atoms to a set of training signals in order to promote a sparse representation that minimizes the reconstruction error. Finding the best fitting dictionary remains a very difficult task, leaving the question still open. A well-established heuristic method for tackling this problem is an iterative alternating scheme, adopted for instance in the well-known K-SVD algorithm. Essentially, it consists in repeating two stages: the former promotes sparse coding of the training set and the latter adapts the dictionary to reduce the error. In this paper we present R-SVD, a new method that, while maintaining the alternating scheme, adopts the Orthogonal Procrustes analysis to update the dictionary atoms suitably arranged into groups. Comparative experiments on synthetic data prove the effectiveness of R-SVD with respect to well known dictionary learning algorithms such as K-SVD, ILS-DLA and the online method OSDL. Moreover, experiments on natural data such as ECG compression, EEG sparse representation, and image modeling confirm R-SVD's robustness and wide applicability.
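The Orthogonal Procrustes subproblem at the heart of such a dictionary update has a well-known closed-form solution via the SVD. A generic sketch (an illustration of the classical result, not the R-SVD implementation):

```python
import numpy as np

def procrustes(A, B):
    """Orthogonal Procrustes: the orthogonal W minimizing ||A - B W||_F
    is W = U V^T, where U S V^T is the SVD of B^T A."""
    U, _, Vt = np.linalg.svd(B.T @ A)
    return U @ Vt
```

In an alternating scheme, B would hold a group of current dictionary atoms and A the target implied by the sparse codes; the returned rotation updates the whole group at once while preserving its geometry.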
NASA Astrophysics Data System (ADS)
Aghamaleki, Javad Abbasi; Behrad, Alireza
2018-01-01
Double compression detection is a crucial stage in digital image and video forensics. However, the detection of double compressed videos is challenging when the video forger uses the same quantization matrix and synchronized group of pictures (GOP) structure during the recompression history to conceal tampering effects. A passive approach is proposed for detecting double compressed MPEG videos with the same quantization matrix and synchronized GOP structure. To devise the proposed algorithm, the effects of recompression on P frames are mathematically studied. Then, based on the obtained guidelines, a feature vector is proposed to detect double compressed frames at the GOP level. Subsequently, sparse representations of the feature vectors are used for dimensionality reduction and to enrich the traces of recompression. Finally, a support vector machine classifier is employed to detect and localize double compression in the temporal domain. The experimental results show that the proposed algorithm achieves an accuracy of more than 95%. In addition, comparisons with other methods reveal the efficiency of the proposed algorithm.
Optimal parallel solution of sparse triangular systems
NASA Technical Reports Server (NTRS)
Alvarado, Fernando L.; Schreiber, Robert
1990-01-01
A method for the parallel solution of triangular sets of equations is described that is appropriate when there are many right-hand sides. By preprocessing, the method can reduce the number of parallel steps required to solve Lx = b compared to a parallel forward or back solve. Applications are to iterative solvers with triangular preconditioners, to structural analysis, and to power systems, where there may be many right-hand sides (not all available a priori). The inverse of L is represented as a product of sparse triangular factors. The problem is to find a factored representation of this inverse with the smallest number of factors (or partitions), subject to the requirement that no new nonzero elements be created in forming these inverse factors. A method from an earlier reference is shown to solve this problem. The method is improved upon by constructing a permutation of the rows and columns of L that preserves triangularity and allows for the best possible such partition. A number of practical examples and algorithmic details are presented. The attainable parallelism is illustrated by means of elimination trees and clique trees.
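The column-blocked factorization underlying the factored-inverse idea can be illustrated in a few lines. This is a dense toy sketch of the partitioning only (contiguous column blocks, no sparsity or fill-in analysis), not the paper's algorithm: a lower triangular L equals the product of factors that each keep one block of L's columns and are identity elsewhere, so solving Lx = b becomes a short sequence of factor solves, each a candidate parallel step.

```python
import numpy as np

def partitioned_solve(L, b, parts):
    """Solve L x = b via a product of column-block factors.
    `parts` lists contiguous column-index blocks covering 0..n-1, in order.
    Factor F_i is the identity except that its columns in block i equal
    the corresponding columns of L; then L = F_1 F_2 ... F_p, so
    x = inv(F_p) ... inv(F_1) b."""
    n = L.shape[0]
    x = b.astype(float)
    for block in parts:
        F = np.eye(n)
        F[:, block] = L[:, block]   # this block's columns of L
        x = np.linalg.solve(F, x)   # one step; parallel in the sparse setting
    return x
```

Fewer blocks mean fewer sequential steps; the paper's contribution is choosing the partition (and a triangularity-preserving permutation) so that the inverse factors stay as sparse as L itself.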
Enhancing sparsity of Hermite polynomial expansions by iterative rotations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiu; Lei, Huan; Baker, Nathan A.
2016-02-01
Compressive sensing has become a powerful addition to uncertainty quantification in recent years. This paper identifies new bases for random variables through linear mappings such that the representation of the quantity of interest is sparser in the new basis functions associated with the new random variables. This sparsity increases both the efficiency and accuracy of compressive sensing-based uncertainty quantification. Specifically, we consider rotation-based linear mappings determined iteratively for Hermite polynomial expansions. We demonstrate the effectiveness of the new method with applications to solving stochastic partial differential equations and high-dimensional (O(100)) problems.
Avalanches, plasticity, and ordering in colloidal crystals under compression.
McDermott, D; Reichhardt, C J Olson; Reichhardt, C
2016-06-01
Using numerical simulations we examine colloids with a long-range Coulomb interaction confined in a two-dimensional trough potential undergoing dynamical compression. As the depth of the confining well is increased, the colloids move via elastic distortions interspersed with intermittent bursts or avalanches of plastic motion. In these avalanches, the colloids rearrange to minimize their colloid-colloid repulsive interaction energy by adopting an average lattice constant that is isotropic despite the anisotropic nature of the compression. The avalanches take the form of shear banding events that decrease or increase the structural order of the system. At larger compression, the avalanches are associated with a reduction of the number of rows of colloids that fit within the confining potential, and between avalanches the colloids can exhibit partially crystalline or anisotropic ordering. The colloid velocity distributions during the avalanches have a non-Gaussian form with power-law tails and exponents that are consistent with those found for the velocity distributions of gliding dislocations. We observe similar behavior when we subsequently decompress the system, and find a partially hysteretic response reflecting the irreversibility of the plastic events.
Compressive Spectral Embedding: Sidestepping the SVD
2015-09-28
Computer program for aerodynamic and blading design of multistage axial-flow compressors
NASA Technical Reports Server (NTRS)
Crouse, J. E.; Gorrell, W. T.
1981-01-01
A code for computing the aerodynamic design of a multistage axial-flow compressor and, if desired, the associated blading geometry input for internal flow analysis codes is presented. Compressible flow, which is assumed to be steady and axisymmetric, is the basis for a two-dimensional solution in the meridional plane with viscous effects modeled by pressure loss coefficients and boundary layer blockage. The radial equation of motion and the continuity equation are solved with the streamline curvature method on calculation stations outside the blade rows. The annulus profile, mass flow, pressure ratio, and rotative speed are input. A number of other input parameters specify and control the blade row aerodynamics and geometry. In particular, blade element centerlines and thicknesses can be specified with fourth degree polynomials for two segments. The output includes a detailed aerodynamic solution and, if desired, blading coordinates that can be used for internal flow analysis codes.
Multistable wireless micro-actuator based on antagonistic pre-shaped double beams
NASA Astrophysics Data System (ADS)
Liu, X.; Lamarque, F.; Doré, E.; Pouille, P.
2015-07-01
This paper presents a monolithic multistable micro-actuator based on antagonistic pre-shaped double beams. The designed micro-actuator is formed by two rows of bistable micro-actuators providing four stable positions. The bistable mechanism in each row is a pair of antagonistic pre-shaped beams. This mechanism allows an easier pre-load operation than the pre-compressed bistable beams method. Furthermore, it solves the asymmetrical force output problem of parallel pre-shaped bistable double beams. At the same time, its geometrical limit is lower than that of parallel pre-shaped bistable double beams, which ensures a smaller stroke of the micro-actuator with the same dimensions. The designed micro-actuator is fabricated with a laser cutting machine on medium-density fiberboard (MDF). The bistability and merits of antagonistic pre-shaped double beams are experimentally validated. Finally, a contactless actuation test is performed using a 660 nm laser to heat shape memory alloy (SMA) active elements.
Membrane tension controls adhesion positioning at the leading edge of cells
Pontes, Bruno; Gole, Laurent; Kosmalska, Anita Joanna; Tam, Zhi Yang; Luo, Weiwei; Kan, Sophie; Viasnoff, Virgile; Roca-Cusachs, Pere; Tucker-Kellogg, Lisa
2017-01-01
Cell migration is dependent on adhesion dynamics and actin cytoskeleton remodeling at the leading edge. These events may be physically constrained by the plasma membrane. Here, we show that the mechanical signal produced by an increase in plasma membrane tension triggers the positioning of new rows of adhesions at the leading edge. During protrusion, as membrane tension increases, velocity slows, and the lamellipodium buckles upward in a myosin II–independent manner. The buckling occurs between the front of the lamellipodium, where nascent adhesions are positioned in rows, and the base of the lamellipodium, where a vinculin-dependent clutch couples actin to previously positioned adhesions. As membrane tension decreases, protrusion resumes and buckling disappears, until the next cycle. We propose that the mechanical signal of membrane tension exerts upstream control in mechanotransduction by periodically compressing and relaxing the lamellipodium, leading to the positioning of adhesions at the leading edge of cells. PMID:28687667
Uniform Recovery Bounds for Structured Random Matrices in Corrupted Compressed Sensing
NASA Astrophysics Data System (ADS)
Zhang, Peng; Gan, Lu; Ling, Cong; Sun, Sumei
2018-04-01
We study the problem of recovering an $s$-sparse signal $\\mathbf{x}^{\\star}\\in\\mathbb{C}^n$ from corrupted measurements $\\mathbf{y} = \\mathbf{A}\\mathbf{x}^{\\star}+\\mathbf{z}^{\\star}+\\mathbf{w}$, where $\\mathbf{z}^{\\star}\\in\\mathbb{C}^m$ is a $k$-sparse corruption vector whose nonzero entries may be arbitrarily large and $\\mathbf{w}\\in\\mathbb{C}^m$ is a dense noise with bounded energy. The aim is to exactly and stably recover the sparse signal with tractable optimization programs. In this paper, we prove the uniform recovery guarantee of this problem for two classes of structured sensing matrices. The first class can be expressed as the product of a unit-norm tight frame (UTF), a random diagonal matrix and a bounded columnwise orthonormal matrix (e.g., partial random circulant matrix). When the UTF is bounded (i.e. $\\mu(\\mathbf{U})\\sim1/\\sqrt{m}$), we prove that with high probability, one can recover an $s$-sparse signal exactly and stably by $l_1$ minimization programs even if the measurements are corrupted by a sparse vector, provided $m = \\mathcal{O}(s \\log^2 s \\log^2 n)$ and the sparsity level $k$ of the corruption is a constant fraction of the total number of measurements. The second class considers randomly sub-sampled orthogonal matrix (e.g., random Fourier matrix). We prove the uniform recovery guarantee provided that the corruption is sparse on certain sparsifying domain. Numerous simulation results are also presented to verify and complement the theoretical results.
Robust curb detection with fusion of 3D-Lidar and camera data.
Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen
2014-05-21
Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes.
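The row-by-row Markov chain step above can be sketched as a standard dynamic program. The scoring model here (a per-row likelihood map with a per-column shift penalty between consecutive rows) is a simplified assumption, not the paper's exact formulation:

```python
import numpy as np

def best_curb_path(scores, max_jump=2, jump_penalty=1.0):
    """Dynamic programming over a row-wise Markov chain.
    scores[r, c]: curb-point likelihood at column c of image row r.
    Consecutive rows may shift by at most `max_jump` columns, each unit
    of shift costing `jump_penalty`. Returns the best column per row."""
    R, C = scores.shape
    dp = scores[0].copy()
    back = np.zeros((R, C), dtype=int)
    for r in range(1, R):
        new = np.full(C, -np.inf)
        for c in range(C):
            lo, hi = max(0, c - max_jump), min(C, c + max_jump + 1)
            prev = dp[lo:hi] - jump_penalty * np.abs(np.arange(lo, hi) - c)
            j = int(np.argmax(prev))
            new[c] = scores[r, c] + prev[j]   # best predecessor plus own score
            back[r, c] = lo + j
        dp = new
    path = [int(np.argmax(dp))]               # backtrack from the best end column
    for r in range(R - 1, 0, -1):
        path.append(int(back[r, path[-1]]))
    return path[::-1]
```

The continuity penalty plays the role of the Markov transition model: isolated high-scoring outliers are rejected unless they connect smoothly to the rest of the path.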
Reconstruction of finite-valued sparse signals
NASA Astrophysics Data System (ADS)
Keiper, Sandra; Kutyniok, Gitta; Lee, Dae Gwan; Pfander, Götz
2017-08-01
The need to reconstruct discrete-valued sparse signals from few measurements, that is, to solve an underdetermined system of linear equations, appears frequently in science and engineering. Such signals appear, for example, in error-correcting codes as well as in massive Multiple-Input Multiple-Output (MIMO) channels and wideband spectrum sensing. A particular example is wireless communications, where the transmitted signals are sequences of bits, i.e., with entries in {0, 1}. Whereas classical compressed sensing algorithms do not incorporate the additional knowledge of the discrete nature of the signal, classical lattice decoding approaches do not utilize sparsity constraints. In this talk, we present an approach that incorporates a discrete-valued prior into basis pursuit. In particular, we address finite-valued sparse signals, i.e., sparse signals with entries in a finite alphabet. We introduce an equivalent null space characterization and show that the phase transition takes place earlier than with the classical basis pursuit approach. We further discuss robustness of the algorithm and show that the nonnegative case is very different from the bipolar one. One of our findings is that the positioning of the zero in the alphabet, i.e., whether it is a boundary element or not, is crucial.
Motion-compensated compressed sensing for dynamic imaging
NASA Astrophysics Data System (ADS)
Sundaresan, Rajagopalan; Kim, Yookyung; Nadar, Mariappan S.; Bilgin, Ali
2010-08-01
The recently introduced Compressed Sensing (CS) theory explains how sparse or compressible signals can be reconstructed from far fewer samples than what was previously believed possible. The CS theory has attracted significant attention for applications such as Magnetic Resonance Imaging (MRI) where long acquisition times have been problematic. This is especially true for dynamic MRI applications where high spatio-temporal resolution is needed. For example, in cardiac cine MRI, it is desirable to acquire the whole cardiac volume within a single breath-hold in order to avoid artifacts due to respiratory motion. Conventional MRI techniques do not allow reconstruction of high resolution image sequences from such limited amount of data. Vaswani et al. recently proposed an extension of the CS framework to problems with partially known support (i.e. sparsity pattern). In their work, the problem of recursive reconstruction of time sequences of sparse signals was considered. Under the assumption that the support of the signal changes slowly over time, they proposed using the support of the previous frame as the "known" part of the support for the current frame. While this approach works well for image sequences with little or no motion, motion causes significant change in support between adjacent frames. In this paper, we illustrate how motion estimation and compensation techniques can be used to reconstruct more accurate estimates of support for image sequences with substantial motion (such as cardiac MRI). Experimental results using phantoms as well as real MRI data sets illustrate the improved performance of the proposed technique.
Ting, Samuel T; Ahmad, Rizwan; Jin, Ning; Craft, Jason; Serafim da Silveira, Juliana; Xue, Hui; Simonetti, Orlando P
2017-04-01
Sparsity-promoting regularizers can enable stable recovery of highly undersampled magnetic resonance imaging (MRI), promising to improve the clinical utility of challenging applications. However, lengthy computation time limits the clinical use of these methods, especially for dynamic MRI with its large corpus of spatiotemporal data. Here, we present a holistic framework that utilizes the balanced sparse model for compressive sensing and parallel computing to reduce the computation time of cardiac MRI recovery methods. We propose a fast, iterative soft-thresholding method to solve the resulting ℓ1-regularized least squares problem. In addition, our approach utilizes a parallel computing environment that is fully integrated with the MRI acquisition software. The methodology is applied to two formulations of the multichannel MRI problem: image-based recovery and k-space-based recovery. Using measured MRI data, we show that, for a 224 × 144 image series with 48 frames, the proposed k-space-based approach achieves a mean reconstruction time of 2.35 min, a 24-fold improvement compared to a reconstruction time of 55.5 min for the nonlinear conjugate gradient method, and the proposed image-based approach achieves a mean reconstruction time of 13.8 s. Our approach can be utilized to achieve fast reconstruction of large MRI datasets, thereby increasing the clinical utility of reconstruction techniques based on compressed sensing. Magn Reson Med 77:1505-1515, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
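The ℓ1-regularized least-squares subproblem that iterative soft-thresholding solves has a compact reference form. This is a plain ISTA sketch under our own assumptions (no parallelization, no MRI sensing operators; a dense matrix A stands in for the forward model, with the step size set from its spectral norm):

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise shrinkage: the proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Iterative soft-thresholding for  min_x 1/2 ||Ax - y||^2 + lam ||x||_1.
    Step size 1/Lip, where Lip = ||A||_2^2 bounds the gradient's Lipschitz
    constant."""
    Lip = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                       # gradient of the l2 term
        x = soft_threshold(x - grad / Lip, lam / Lip)  # gradient step + shrink
    return x
```

Each iteration is one forward/adjoint application plus an elementwise shrinkage, which is why the scheme maps well onto parallel hardware.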
Mismatch and resolution in compressive imaging
NASA Astrophysics Data System (ADS)
Fannjiang, Albert; Liao, Wenjing
2011-09-01
Highly coherent sensing matrices arise in the discretization of continuum problems such as radar and medical imaging when the grid spacing is below the Rayleigh threshold, as well as in using highly coherent, redundant dictionaries as sparsifying operators. Algorithms (BOMP, BLOOMP) based on techniques of band exclusion and local optimization are proposed to enhance Orthogonal Matching Pursuit (OMP) and deal with such coherent sensing matrices. BOMP and BLOOMP have provable performance guarantees for reconstructing sparse, widely separated objects independent of the redundancy, and have a sparsity constraint and computational cost similar to OMP's. A numerical study demonstrates the effectiveness of BLOOMP for compressed sensing with highly coherent, redundant sensing matrices.
Wang, Tianyun; Lu, Xinfei; Yu, Xiaofei; Xi, Zhendong; Chen, Weidong
2014-01-01
In recent years, various applications involving sparse continuous signal recovery, such as source localization, radar imaging, and communication channel estimation, have been addressed from the perspective of compressive sensing (CS) theory. However, two major issues need to be tackled in any practical utilization. The first is the off-grid problem caused by the basis mismatch between arbitrarily located unknowns and the pre-specified dictionary, which makes conventional CS reconstruction methods degrade considerably. The second is the urgent demand for low-complexity algorithms, especially when real-time implementation is required. In this paper, to deal with these two problems, we present three fast and accurate sparse reconstruction algorithms, termed HR-DCD, Hlog-DCD and Hlp-DCD, which are based on homotopy, dichotomous coordinate descent (DCD) iterations and non-convex regularizations, combined with a grid refinement technique. Experimental results are provided to demonstrate the effectiveness of the proposed algorithms and related analysis. PMID:24675758
Orthogonal Procrustes Analysis for Dictionary Learning in Sparse Linear Representation
Grossi, Giuliano; Lin, Jianyi
2017-01-01
In the sparse representation model, the design of overcomplete dictionaries plays a key role in effectiveness and applicability across different domains. Recent research has produced several dictionary learning approaches, and dictionaries learned from data examples have been shown to significantly outperform structured ones, e.g. wavelet transforms. In this context, learning consists of adapting the dictionary atoms to a set of training signals so as to promote a sparse representation that minimizes the reconstruction error. Finding the best-fitting dictionary remains a very difficult task, leaving the question still open. A well-established heuristic for tackling this problem is an iterative alternating scheme, adopted for instance in the well-known K-SVD algorithm. Essentially, it consists of repeating two stages: the former promotes sparse coding of the training set and the latter adapts the dictionary to reduce the error. In this paper we present R-SVD, a new method that, while maintaining the alternating scheme, adopts Orthogonal Procrustes analysis to update the dictionary atoms, suitably arranged into groups. Comparative experiments on synthetic data demonstrate the effectiveness of R-SVD with respect to well-known dictionary learning algorithms such as K-SVD, ILS-DLA and the online method OSDL. Moreover, experiments on natural data such as ECG compression, EEG sparse representation, and image modeling confirm R-SVD's robustness and wide applicability. PMID:28103283
Tang, Gang; Hou, Wei; Wang, Huaqing; Luo, Ganggang; Ma, Jianwei
2015-01-01
The Shannon sampling principle requires substantial amounts of data to ensure the accuracy of on-line monitoring of roller bearing fault signals. Challenges are often encountered as a result of this cumbersome data monitoring, so a novel method focused on compressed vibration signals for detecting roller bearing faults is developed in this study. Considering that harmonics often represent the fault characteristic frequencies in vibration signals, a compressive sensing frame of characteristic harmonics is proposed to detect bearing faults. A compressed vibration signal is first acquired from a sensing matrix, with information preserved through a well-designed sampling strategy. A reconstruction process of the under-sampled vibration signal is then pursued as attempts are made to detect the characteristic harmonics from sparse measurements through a compressive matching pursuit strategy. In the proposed method, bearing fault features depend on the existence of characteristic harmonics, which can typically be detected directly from compressed data well before reconstruction is complete. Sampling and detection may then be performed simultaneously, without complete recovery of the under-sampled signals. The effectiveness of the proposed method is validated by simulations and experiments. PMID:26473858
NASA Technical Reports Server (NTRS)
Hall, Edward J.; Delaney, Robert A.
1993-01-01
The primary objective of this study was the development of a time-marching three-dimensional Euler/Navier-Stokes aerodynamic analysis to predict steady and unsteady compressible transonic flows about ducted and unducted propfan propulsion systems employing multiple blade rows. The computer codes resulting from this study are referred to as ADPAC-AOACR (Advanced Ducted Propfan Analysis Codes-Angle of Attack Coupled Row). This document is the final report describing the theoretical basis and analytical results from the ADPAC-AOACR codes developed under task 5 of NASA Contract NAS3-25270, Unsteady Counterrotating Ducted Propfan Analysis. The ADPAC-AOACR Program is based on a flexible multiple blocked grid discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. For convenience, several standard mesh block structures are described for turbomachinery applications. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. Numerical calculations are compared with experimental data for several test cases to demonstrate the utility of this approach for predicting the aerodynamics of modern turbomachinery configurations employing multiple blade rows.
Detecting Unsteady Blade Row Interaction in a Francis Turbine using a Phase-Lag Boundary Condition
NASA Astrophysics Data System (ADS)
Wouden, Alex; Cimbala, John; Lewis, Bryan
2013-11-01
For CFD simulations in turbomachinery, methods are typically used to reduce the computational cost. For example, the standard periodic assumption reduces the underlying mesh to a single blade passage in axisymmetric applications. If the simulation includes only a single array of blades with a uniform inlet condition, this assumption is adequate. However, to compute the interaction between successive blade rows of differing periodicity in an unsteady simulation, the periodic assumption breaks down and may produce inaccurate results. As a viable alternative, the phase-lag boundary condition assumes that the periodicity includes a temporal component which, if considered, allows a single passage to be modeled per blade row irrespective of differing periodicity. Prominently used in compressible CFD codes for the analysis of gas turbines/compressors, the phase-lag boundary condition is adapted here to analyze the interaction between the guide vanes and rotor blades in an incompressible simulation of the 1989 GAMM Workshop Francis turbine using OpenFOAM. The implementation is based on the ``direct-storage'' method proposed in 1977 by Erdos and Alzner. The phase-lag simulation is compared with available data from the GAMM workshop as well as a full-wheel simulation. Funding provided by DOE Award number: DE-EE0002667.
NASA Technical Reports Server (NTRS)
Hall, Edward J.; Delaney, Robert A.; Adamczyk, John J.; Miller, Christopher J.; Arnone, Andrea; Swanson, Charles
1993-01-01
The primary objective of this study was the development of a time-marching three-dimensional Euler/Navier-Stokes aerodynamic analysis to predict steady and unsteady compressible transonic flows about ducted and unducted propfan propulsion systems employing multiple blade rows. The computer codes resulting from this study are referred to as ADPAC-AOACR (Advanced Ducted Propfan Analysis Codes-Angle of Attack Coupled Row). This report is intended to serve as a computer program user's manual for the ADPAC-AOACR codes developed under Task 5 of NASA Contract NAS3-25270, Unsteady Counterrotating Ducted Propfan Analysis. The ADPAC-AOACR program is based on a flexible multiple blocked grid discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. For convenience, several standard mesh block structures are described for turbomachinery applications. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. Numerical calculations are compared with experimental data for several test cases to demonstrate the utility of this approach for predicting the aerodynamics of modern turbomachinery configurations employing multiple blade rows.
Roller-gear drives for robotic manipulators design, fabrication and test
NASA Technical Reports Server (NTRS)
Anderson, William J.; Shipitalo, William
1991-01-01
Two single axis planetary roller-gear drives and a two axis roller-gear drive with dual inputs were designed for use as robotic transmissions. Each of the single axis drives is a two planet row, four planet arrangement with spur gears and compressively loaded cylindrical rollers acting in parallel. The two axis drive employs bevel gears and cone rollers acting in parallel. The rollers serve a dual function: they remove backlash from the system, and they transmit torque when the gears are not fully engaged.
Numerical Solution of the Three-Dimensional Navier-Stokes Equation.
1982-03-01
compressible, viscous fluid in an arbitrary geometry. We wish to use a grid generating scheme so we assume that the geometry of the physical problem given in...bian J of the mapping are provided. (For work on grid generating schemes see [4], [5] or [6].) Hence we must solve the following system of equations...these limitations the data structure used in the ILLIAC code is to partition the grid into 8 x 8 x 8 blocks. A row of these blocks in a given
Efficient and Robust Signal Approximations
2009-05-01
otherwise. Remark. Permutation matrices are both orthogonal and doubly-stochastic [62]. We will now show how to further simplify the Robust Coding... Keywords: signal processing, image compression, independent component analysis, sparse
Erlich, Yaniv; Gordon, Assaf; Brand, Michael; Hannon, Gregory J.; Mitra, Partha P.
2011-01-01
Over the past three decades we have steadily increased our knowledge of the genetic basis of many severe disorders. Nevertheless, there are still great challenges in applying this knowledge routinely in the clinic, mainly due to the relatively tedious and expensive process of genotyping. Since the genetic variations that underlie the disorders are relatively rare in the population, they can be thought of as a sparse signal. Using methods and ideas from compressed sensing and group testing, we have developed a cost-effective genotyping protocol to detect carriers of severe genetic disorders. In particular, we have adapted our scheme to a recently developed class of high-throughput DNA sequencing technologies. The mathematical framework presented here has some important distinctions from the 'traditional' compressed sensing and group testing frameworks in order to address the biological and technical constraints of our setting. PMID:21451737
Scalable Coding of Plenoptic Images by Using a Sparse Set and Disparities.
Li, Yun; Sjostrom, Marten; Olsson, Roger; Jennehag, Ulf
2016-01-01
Focused plenoptic capturing is one of the light field capturing techniques. By placing a microlens array in front of the photosensor, focused plenoptic cameras capture both spatial and angular information of a scene in each microlens image and across microlens images. The capturing results in a significant amount of redundant information, and the captured image is usually of large resolution. A coding scheme that removes the redundancy before coding can be advantageous for efficient compression, transmission, and rendering. In this paper, we propose a lossy coding scheme to efficiently represent plenoptic images. The format contains a sparse image set and its associated disparities. The reconstruction is performed by disparity-based interpolation and inpainting, and the reconstructed image is later employed as a prediction reference for the coding of the full plenoptic image. As an outcome of the representation, the proposed scheme inherits a scalable structure with three layers. The results show that plenoptic images are compressed efficiently, with over 60 percent bit rate reduction compared with High Efficiency Video Coding intra coding, and over 20 percent compared with a High Efficiency Video Coding block copying mode.
Monitoring and diagnosis of Alzheimer's disease using noninvasive compressive sensing EEG
NASA Astrophysics Data System (ADS)
Morabito, F. C.; Labate, D.; Morabito, G.; Palamara, I.; Szu, H.
2013-05-01
The majority of the elderly with Alzheimer's Disease (AD) receive care at home from caregivers. In contrast to standard tethered clinical settings, wireless, real-time, body-area smartphone-based remote monitoring of the electroencephalogram (EEG) can be extremely advantageous for home care of those patients. Such wearable tools pave the way to personalized medicine, for example giving the opportunity to track the progression of the disease and the effect of drugs. By applying Compressive Sensing (CS) techniques it is in principle possible to overcome the difficulty raised by the smartphone's spatial-temporal throughput bottleneck. Unfortunately, EEG and other physiological signals are often non-sparse. In this paper, it is instead shown that the EEG of AD patients actually becomes more compressible with the progression of the disease. The EEG of Mild Cognitive Impairment (MCI) subjects also shows a clear tendency toward enhanced compressibility. This feature favors the use of CS techniques and ultimately the use of telemonitoring with wearable sensors.
Tsutsui, Sadaaki; Kawasaki, Keikichi; Yamakoshi, Ken-Ichi; Uchiyama, Eiichi; Aoki, Mitsuhiro; Inagaki, Katsunori
2016-09-01
The present study compared the changes in biomechanical and radiographic properties under cyclic axial loading between the 'double-tiered subchondral support' (DSS) group (wherein two rows of screws were used) and the 'non-DSS' (NDSS) group (wherein only one row of distal screws was used), using cadaveric forearm models of radius fractures fixed with a polyaxial locking plate. Fifteen fresh cadaveric forearms were surgically operated on to generate an Arbeitsgemeinschaft für Osteosynthesefragen (AO) type 23-C2 fracture model with fixation by polyaxial volar locking plates. The model specimens were randomized into two groups: DSS (n = 7) and NDSS (n = 8). Both groups received 4 locking screws in the most distal row, as is usually applied, whereas the DSS group received 2 additional screws in the second row, inserted at an inclination of about 15° to support the dorsal aspect of the dorsal subchondral bone. A cyclic axial compression test was performed (3000 cycles; 0-250 N; 60 mm/min) to measure absolute rigidity and displacement after 1, 1000, 2000 and 3000 cycles, and values were normalized relative to cycle 1. These absolute and normalized values were compared between the two groups. Radiographic images were taken before and after the cyclic loading to measure changes in volar tilt (ΔVT) and radial inclination (ΔRI). The DSS group maintained significantly higher rigidity and lower displacement values than the NDSS group during the entire loading period. Radiographic analysis indicated that the ΔVT values of the DSS group were lower than those of the NDSS group. In contrast, the fixation design did not influence the impact of loading on the ΔRI values. Biomechanical and radiographic analyses demonstrated that two rows of distal locking screws in the DSS procedure conferred higher stability than one row of distal locking screws. Copyright © 2016 The Japanese Orthopaedic Association. Published by Elsevier B.V. All rights reserved.
Lecture Notes on Multigrid Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vassilevski, P S
The Lecture Notes are primarily based on a sequence of lectures given by the author while he was a Fulbright scholar at 'St. Kliment Ohridski' University of Sofia, Sofia, Bulgaria during the winter semester of the 2009-2010 academic year. The notes are a somewhat expanded version of the actual one-semester class he taught there. The material covered is a slightly modified and adapted version of similar topics covered in the author's monograph 'Multilevel Block-Factorization Preconditioners' published in 2008 by Springer. The author tried to keep the notes as self-contained as possible. That is why the lecture notes begin with some basic introductory matrix-vector linear algebra and numerical PDE (finite element) facts, emphasizing the relations between functions in finite dimensional spaces and their coefficient vectors and respective norms. Then, some additional facts on the implementation of finite elements based on relation tables using the popular compressed sparse row (CSR) format are given. Also given are typical condition number estimates of stiffness and mass matrices, as well as the global matrix assembly from local element matrices. Finally, some basic introductory facts about stationary iterative methods, such as Gauss-Seidel and its symmetrized version, are presented. The introductory material ends with the smoothing property of the classical iterative methods and the main definition of two-grid iterative methods. From here on, the second part of the notes begins, which deals with the various aspects of the principal TG and the numerous versions of the MG cycles. At the end, in part III, we briefly introduce algebraic versions of MG referred to as AMG, focusing on classes of AMG specialized for finite element matrices.
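The compressed sparse row (CSR) format mentioned in the notes stores only the nonzero entries of a matrix in three arrays: the values, their column indices, and row pointers delimiting each row's slice. A minimal illustrative sketch (not taken from the notes themselves):

```python
import numpy as np

def dense_to_csr(A):
    """Convert a dense matrix to CSR arrays: values, column indices,
    and row pointers (indptr[i]:indptr[i+1] spans row i's nonzeros)."""
    values, col_idx, indptr = [], [], [0]
    for row in A:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        indptr.append(len(values))
    return np.array(values, float), np.array(col_idx), np.array(indptr)

def csr_matvec(values, col_idx, indptr, x):
    """y = A @ x using only the CSR arrays (the kernel at the heart
    of matrix-free iterative methods such as Gauss-Seidel smoothers)."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        start, end = indptr[i], indptr[i + 1]
        y[i] = values[start:end] @ x[col_idx[start:end]]
    return y

A = np.array([[4.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])
vals, cols, ptr = dense_to_csr(A)
x = np.array([1.0, 2.0, 3.0])
y = csr_matvec(vals, cols, ptr, x)  # matches A @ x
```

In practice one would use a library implementation such as `scipy.sparse.csr_matrix`; the loop version above only makes the storage layout explicit.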
NASA Astrophysics Data System (ADS)
Oiknine, Yaniv; August, Isaac Y.; Revah, Liat; Stern, Adrian
2016-05-01
Recently we introduced a Compressive Sensing Miniature Ultra-Spectral Imaging (CS-MUSI) system. The system is based on a single Liquid Crystal (LC) cell and a parallel sensor array, where the liquid crystal cell performs spectral encoding. Within the framework of compressive sensing, the CS-MUSI system is able to reconstruct ultra-spectral cubes captured with only ~10% of the samples required by a conventional system. Despite the compression, the technique is computationally very demanding, because reconstruction of ultra-spectral images requires processing huge data cubes of Gigavoxel size. Fortunately, the computational effort can be alleviated by using separable operations. An additional way to reduce the reconstruction effort is to perform the reconstructions on patches. In this work, we consider processing on various patch shapes. We present an experimental comparison between various patch shapes chosen to process the ultra-spectral data captured with the CS-MUSI system. The patches may be one-dimensional (1D), for which the reconstruction is carried out spatially pixel-wise, or two-dimensional (2D), working on spatial rows/columns of the ultra-spectral cube, as well as three-dimensional (3D).
NASA Astrophysics Data System (ADS)
Jridi, Maher; Alfalou, Ayman
2018-03-01
In this paper, enhancement of an existing optical simultaneous fusion, compression and encryption (SFCE) scheme in terms of real-time requirements, bandwidth occupation and encryption robustness is proposed. We use an approximate form of the DCT to decrease the computational resources. Then, a novel chaos-based encryption algorithm is introduced in order to achieve the confusion and diffusion effects. In the confusion phase, the Henon map is used for row and column permutations, where the initial condition is related to the original image. Furthermore, the Skew Tent map is employed to generate another random matrix in order to carry out pixel scrambling. Finally, an adaptation of a classical diffusion process scheme is employed to strengthen the security of the cryptosystem against statistical, differential, and chosen plaintext attacks. Analyses of key space, histogram, adjacent pixel correlation, sensitivity, and encryption speed of the scheme are provided, and compared favorably to those of the existing crypto-compression system. The proposed method has been found to be friendly to digital/optical implementation, which facilitates the integration of the crypto-compression system in a very broad range of scenarios.
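As a hedged illustration of the confusion phase described above, the sketch below derives row and column permutations by rank-ordering a Hénon-map orbit. The parameters a = 1.4, b = 0.3 and the seed values are the classic illustrative choices, not necessarily those of the paper (which ties the initial condition to the plaintext image):

```python
import numpy as np

def henon_sequence(n, x0=0.1, y0=0.3, a=1.4, b=0.3):
    """Iterate the classic Henon map x' = 1 - a*x^2 + y, y' = b*x
    and return the x-orbit. Parameters and seeds are illustrative."""
    xs = np.empty(n)
    x, y = x0, y0
    for i in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        xs[i] = x
    return xs

def permutation_from_chaos(n, **kw):
    """Rank the chaotic orbit to obtain a permutation of 0..n-1."""
    return np.argsort(henon_sequence(n, **kw))

img = np.arange(16).reshape(4, 4)           # toy 'image'
p_rows = permutation_from_chaos(4)
p_cols = permutation_from_chaos(4, x0=0.2)  # different seed for columns
scrambled = img[p_rows][:, p_cols]
# The inverse permutations (argsort of each) restore the image exactly
restored = scrambled[np.argsort(p_rows)][:, np.argsort(p_cols)]
```

Because the map is sensitive to its initial condition, a slightly different key produces an entirely different permutation, which is what the key-sensitivity analysis in the paper measures.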
On the Compressive Sensing Systems (Part 1)
2015-02-01
resolution between targets of classical radar is limited by the radar uncertainty principle. B. Fundamentals on CS and CS-Based Radar (CSR) Under...appropriate conditions, CSR can beat the traditional radar. We now consider K targets with unknown range-velocities and corresponding reflection...sparse target scene. A CSR has the following features: 1) Eliminating the need for a matched filter at the receiver; 2) Requiring low sampling bandwidth
Yin, X X; Ng, B W-H; Ramamohanarao, K; Baghai-Wadji, A; Abbott, D
2012-09-01
It has been shown that magnetic resonance images (MRIs) with a sparse representation in a transformed domain, e.g. spatial finite differences (FD) or the discrete cosine transform (DCT), can be restored from undersampled k-space by applying current compressive sampling theory. The paper presents a model-based method for the restoration of MRIs. The reduced-order model, in which the full system response is projected onto a subspace of lower dimensionality, has been used to accelerate image reconstruction by reducing the size of the involved linear system. In this paper, the singular value threshold (SVT) technique is applied as a denoising scheme to reduce and select the model order of the inverse Fourier transform image, and to restore multi-slice breast MRIs that have been compressively sampled in k-space. The restored MRIs with SVT denoising show reduced sampling errors compared to direct MRI restoration methods via spatial FD or DCT. Compressive sampling is a technique for finding sparse solutions to underdetermined linear systems. The sparsity that is implicit in MRIs is exploited to find the solution to MRI reconstruction after transformation from significantly undersampled k-space. The challenge, however, is that, since incoherent artifacts result from the random undersampling, noise-like interference is added to the image with sparse representation. The recovery algorithms in the literature are not capable of fully removing these artifacts. It is necessary to introduce a denoising procedure to improve the quality of image recovery. This paper applies a singular value threshold algorithm to reduce the model order of the image basis functions, which allows further improvement of the quality of image reconstruction with removal of noise artifacts. The principle of the denoising scheme is to optimally reconstruct the sparse MRI matrices with a lower rank by selecting a smaller number of dominant singular values.
The singular value threshold algorithm is performed by minimizing the nuclear norm of the difference between the sampled image and the recovered image. It has been illustrated that this algorithm improves the ability of previous image reconstruction algorithms to remove noise artifacts while significantly improving the quality of MRI recovery.
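The singular value thresholding step described above is the proximal operator of the nuclear norm: compute an SVD, shrink the singular values by the threshold, and discard those that fall below it. A minimal sketch on a toy matrix (not the authors' MRI pipeline; the threshold value is illustrative):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the
    nuclear norm. Singular values below tau are discarded, the rest
    are shrunk by tau, yielding a low-rank estimate of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt, s_shrunk

# Toy matrix with singular values 3, 2, 0.4, 0.1: the two small ones
# play the role of noise components and vanish under the threshold.
M = np.diag([3.0, 2.0, 0.4, 0.1])
D, s = svt(M, tau=0.5)
# s is now [2.5, 1.5, 0, 0], so D has rank 2
```

Iterating this shrinkage inside a data-consistency loop is the standard way SVT-type algorithms minimize the nuclear norm of the difference between the sampled and recovered images.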
NASA Astrophysics Data System (ADS)
Gelmini, A.; Gottardi, G.; Moriyama, T.
2017-10-01
This work presents an innovative computational approach for the inversion of wideband ground penetrating radar (GPR) data. The retrieval of the dielectric characteristics of sparse scatterers buried in a lossy soil is performed by combining a multi-task Bayesian compressive sensing (MT-BCS) solver and a frequency hopping (FH) strategy. The developed methodology is able to benefit from the regularization capabilities of the MT-BCS as well as to exploit the multi-chromatic informative content of GPR measurements. A set of numerical results is reported in order to assess the effectiveness of the proposed GPR inverse scattering technique, as well as to compare it to a simpler single-task implementation.
System design of an optical interferometer based on compressive sensing
NASA Astrophysics Data System (ADS)
Liu, Gang; Wen, De-Sheng; Song, Zong-Xi
2018-07-01
In this paper, we develop a new optical interferometric telescope architecture based on compressive sensing (CS) theory. Traditional optical telescopes with large apertures must be large in size, heavy, and have high power consumption, which limits the development of space-based telescopes. A turning point has come with the advent of imaging technology that utilizes Fourier-domain interferometry. This technology can reduce system size, weight and power consumption by an order of magnitude compared to traditional optical telescopes at the same resolution. CS theory demonstrates that incomplete and noisy Fourier measurements may suffice for the exact reconstruction of sparse or compressible signals. Our proposed architecture combines advantages from the two frameworks, and its performance is evaluated through simulations. The results indicate the ability to efficiently sample spatial frequencies while being lightweight and compact in size. Another attractive property of our architecture is its strong denoising ability for Gaussian noise.
Compression of Flow Can Reveal Overlapping-Module Organization in Networks
NASA Astrophysics Data System (ADS)
Viamontes Esquivel, Alcides; Rosvall, Martin
2011-10-01
To better understand the organization of overlapping modules in large networks with respect to flow, we introduce the map equation for overlapping modules. In this information-theoretic framework, we use the correspondence between compression and regularity detection. The generalized map equation measures how well we can compress a description of flow in the network when we partition it into modules with possible overlaps. When we minimize the generalized map equation over overlapping network partitions, we detect modules that capture flow and determine which nodes at the boundaries between modules should be classified in multiple modules and to what degree. With a novel greedy-search algorithm, we find that some networks, for example, the neural network of the nematode Caenorhabditis elegans, are best described by modules dominated by hard boundaries, but that others, for example, the sparse European-roads network, have an organization of highly overlapping modules.
Approximate equiangular tight frames for compressed sensing and CDMA applications
NASA Astrophysics Data System (ADS)
Tsiligianni, Evaggelia; Kondi, Lisimachos P.; Katsaggelos, Aggelos K.
2017-12-01
Performance guarantees for recovery algorithms employed in sparse representations and compressed sensing highlight the importance of incoherence. Optimal bounds on incoherence are attained by equiangular unit-norm tight frames (ETFs). Although ETFs are important in many applications, they do not exist for all dimensions, and their construction has proven extremely difficult. In this paper, we construct frames that are close to ETFs. According to results from frame and graph theory, the existence of an ETF depends on the existence of its signature matrix, that is, a symmetric matrix with a certain structure and a spectrum consisting of two distinct eigenvalues. We view the construction of a signature matrix as an inverse eigenvalue problem and propose a method that produces frames of any dimensions that are close to ETFs. Due to the achieved equiangularity property, the frames so obtained can be employed as spreading sequences in synchronous code-division multiple access (s-CDMA) systems, besides compressed sensing.
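The incoherence bound that ETFs attain is the Welch bound, and it can be checked numerically. The sketch below uses the three-vector "Mercedes-Benz" frame in the plane, a standard small example of a real ETF (chosen here for illustration, not drawn from the paper):

```python
import numpy as np

def mutual_coherence(F):
    """Largest absolute inner product between distinct unit-norm columns."""
    F = F / np.linalg.norm(F, axis=0)
    G = np.abs(F.T @ F)
    np.fill_diagonal(G, 0.0)
    return G.max()

def welch_bound(m, n):
    """Lower bound on the coherence of any n unit vectors in R^m (n > m);
    it is attained exactly by an equiangular tight frame."""
    return np.sqrt((n - m) / (m * (n - 1)))

# Three unit vectors in the plane at 120-degree angles: a 2x3 ETF.
angles = np.pi / 2 + np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
F = np.vstack([np.cos(angles), np.sin(angles)])
mu = mutual_coherence(F)        # equals welch_bound(2, 3) = 0.5
```

For the near-ETF constructions of the paper, `mutual_coherence` would sit slightly above `welch_bound`, and the gap is a natural measure of how close the frame is to equiangularity.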
NASA Astrophysics Data System (ADS)
Hwang, Sunghwan; Han, Chang Wan; Venkatakrishnan, Singanallur V.; Bouman, Charles A.; Ortalan, Volkan
2017-04-01
Scanning transmission electron microscopy (STEM) has been successfully utilized to investigate the atomic structure and chemistry of materials with atomic resolution. However, STEM's focused electron probe with a high current density causes electron-beam damage, including radiolysis and knock-on damage, when the focused probe is exposed to electron-beam-sensitive materials. Therefore, it is highly desirable to decrease the electron dose used in STEM for the investigation of biological/organic molecules, soft materials and nanomaterials in general. With the recent emergence of novel sparse signal processing theories, such as compressive sensing and model-based iterative reconstruction, possibilities for operating STEM under a sparse acquisition scheme to reduce the electron dose have opened up. In this paper, we report our recent approach to implementing sparse acquisition in STEM mode, executed by a random sparse scan and a signal processing algorithm called model-based iterative reconstruction (MBIR). In this method, a small portion, such as 5%, of randomly chosen unit sampling areas (i.e. electron probe positions), which correspond to pixels of a STEM image, within the region of interest (ROI) of the specimen are scanned with an electron probe to obtain a sparse image. Sparse images are then reconstructed using the MBIR inpainting algorithm to produce an image of the specimen at the original resolution that is consistent with an image obtained using conventional scanning methods. Experimental results for sampling down to 5% show consistency with the full STEM image acquired by the conventional scanning method.
Although practical limitations of conventional STEM instruments, such as internal delays of the STEM control electronics and continuous electron gun emission, currently hinder achieving the full potential of sparse acquisition STEM in realizing the low-dose imaging conditions required for the investigation of beam-sensitive materials, the results obtained in our experiments demonstrate that sparse acquisition STEM imaging is potentially capable of reducing the electron dose by at least 20 times, expanding the frontiers of our characterization capabilities for the investigation of biological/organic molecules, polymers, soft materials and nanostructures in general.
NASA Astrophysics Data System (ADS)
Karimi, Davood; Ward, Rabab K.
2016-03-01
Sparse representation of signals in learned overcomplete dictionaries has proven to be a powerful tool with applications in denoising, restoration, compression, reconstruction, and more. Recent research has shown that learned overcomplete dictionaries can lead to better results than analytical dictionaries such as wavelets in almost all image processing applications. However, a major disadvantage of these dictionaries is that their learning and usage is very computationally intensive. In particular, finding the sparse representation of a signal in these dictionaries requires solving an optimization problem that leads to very long computational times, especially in 3D image processing. Moreover, the sparse representation found by greedy algorithms is usually sub-optimal. In this paper, we propose a novel two-level dictionary structure that improves the performance and the speed of standard greedy sparse coding methods. The first (i.e., the top) level in our dictionary is a fixed orthonormal basis, whereas the second level includes the atoms that are learned from the training data. We explain how such a dictionary can be learned from the training data and how the sparse representation of a new signal in this dictionary can be computed. As an application, we use the proposed dictionary structure for removing the noise and artifacts in 3D computed tomography (CT) images. Our experiments with real CT images show that the proposed method achieves results that are comparable with standard dictionary-based methods while substantially reducing the computational time.
NASA Astrophysics Data System (ADS)
Liu, Yang; Li, Feng; Xin, Lei; Fu, Jie; Huang, Puming
2017-10-01
A large amount of data is one of the most obvious features of satellite-based remote sensing systems, and it is a burden for data processing and transmission. The theory of compressive sensing (CS) has been proposed for almost a decade, and extensive experiments show that CS has favorable performance in data compression and recovery, so we apply CS theory to remote sensing image acquisition. In CS, the construction of a classical sensing matrix for all sparse signals has to satisfy the Restricted Isometry Property (RIP) strictly, which limits the practical application of CS to image compression. For remote sensing images, however, we know some inherent characteristics, such as non-negativity, smoothness, etc. Therefore, the goal of this paper is to present a novel measurement matrix that bypasses the RIP. The new sensing matrix consists of two parts: a standard Nyquist sampling matrix for thumbnails and a conventional CS sampling matrix. Since most sun-synchronous satellites orbit the earth in about 90 minutes and the revisit cycle is also short, many previously captured remote sensing images of the same place are available in advance. This drives us to reconstruct remote sensing images through a deep learning approach with measurements from the new framework. Therefore, we propose a novel deep convolutional neural network (CNN) architecture which takes undersampling measurements as input and outputs an intermediate reconstruction image. It is well known that the training procedure for the network takes a long time; fortunately, the training step need be done only once, which makes the approach attractive for a host of sparse recovery problems.
NASA Astrophysics Data System (ADS)
Gao, Yi; Zhu, Liangjia; Norton, Isaiah; Agar, Nathalie Y. R.; Tannenbaum, Allen
2014-03-01
Desorption electrospray ionization mass spectrometry (DESI-MS) provides a highly sensitive imaging technique for differentiating normal and cancerous tissue at the molecular level. This can be very useful, especially under intra-operative conditions where the surgeon has to make crucial decisions about the tumor boundary. In such situations, the time it takes for imaging and data analysis becomes a critical factor. Therefore, in this work we utilize compressive sensing to perform sparse sampling of the tissue, which halves the scanning time. Furthermore, sparse feature selection is performed, which reduces the dimension of the data from about 10^4 to less than 50 and thus significantly shortens the analysis time. This procedure also identifies biochemically important molecules for further pathological analysis. The methods are validated on brain and breast tumor data sets.
A weighted ℓ1-minimization approach for sparse polynomial chaos expansions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, Ji; Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu
2014-06-15
This work proposes a method for sparse polynomial chaos (PC) approximation of high-dimensional stochastic functions based on non-adapted random sampling. We modify the standard ℓ1-minimization algorithm, originally proposed in the context of compressive sampling, using a priori information about the decay of the PC coefficients, when available, and refer to the resulting algorithm as weighted ℓ1-minimization. We provide conditions under which we may guarantee recovery using this weighted scheme. Numerical tests are used to compare the weighted and non-weighted methods for the recovery of solutions to two differential equations with high-dimensional random inputs: a boundary value problem with a random elliptic operator and a 2-D thermally driven cavity flow with random boundary condition.
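Weighted ℓ1-minimization of the kind described can be posed as a linear program in the variables (x, t) with |x_i| ≤ t_i. The sketch below is a generic formulation via scipy's `linprog`, not the paper's solver, and the problem sizes are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1(A, b, w):
    """Solve min_x sum_i w_i |x_i| subject to A x = b as a linear
    program: minimize w . t with x_i - t_i <= 0 and -x_i - t_i <= 0."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), w])          # objective: w . t
    A_ub = np.block([[np.eye(n), -np.eye(n)],     # x - t <= 0
                     [-np.eye(n), -np.eye(n)]])   # -x - t <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])       # A x = b, t unconstrained
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

rng = np.random.default_rng(1)
A = rng.standard_normal((15, 30))
x_true = np.zeros(30); x_true[[2, 11]] = [1.0, -0.5]
b = A @ x_true
w = np.ones(30)          # uniform weights reduce to plain l1-minimization
x_hat = weighted_l1(A, b, w)
```

Encoding prior knowledge of coefficient decay amounts to choosing smaller weights w_i for coefficients expected to be large, which biases the program toward keeping them in the solution.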
NASA Astrophysics Data System (ADS)
Tang, Xin; Chen, Zhongsheng; Li, Yue; Yang, Yongmin
2018-05-01
When faults happen at gas path components of gas turbines, some sparsely-distributed and charged debris will be generated and released into the exhaust gas. The debris is called abnormal debris. Electrostatic sensors can detect the debris online and further indicate the faults. It is generally considered that, under a specific working condition, a more serious fault generates more and larger debris, and a piece of larger debris carries more charge. Therefore, the amount and charge of the abnormal debris are important indicators of the fault severity. However, because an electrostatic sensor can only detect the superposed effect of all the debris on the electrostatic field, it can hardly identify the amount and position of the debris. Moreover, because signals of electrostatic sensors depend not only on the charge but also on the position of the debris, and the position information is difficult to acquire, measuring debris charge accurately using the electrostatic detecting method remains a technical difficulty. To solve these problems, a hemisphere-shaped electrostatic sensors' circular array (HSESCA) is used, and an array signal processing method based on compressive sensing (CS) is proposed in this paper. To work within the theoretical framework of CS, the measurement model of the HSESCA is discretized into a sparse representation form by meshing. In this way, the amount and charge of the abnormal debris are described as a sparse vector. This vector is further reconstructed by minimizing the l1-norm when solving an underdetermined equation. In addition, a pre-processing method based on singular value decomposition and a result calibration method based on a weighted-centroid algorithm are applied to ensure the accuracy of the reconstruction. The proposed method is validated by both numerical simulations and experiments. Reconstruction errors, characteristics of the results and some related factors are discussed.
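The l1-minimizing reconstruction of a sparse charge vector from an underdetermined sensor equation can be sketched with iterative soft-thresholding (ISTA). This is a generic l1 solver under assumed dimensions, not the paper's method, and the random matrix stands in for the discretized HSESCA measurement model:

```python
import numpy as np

def ista(A, y, lam=0.1, iters=5000):
    """Recover a sparse vector from the underdetermined system y = A q by
    iterative soft-thresholding on the l1-regularised least-squares
    objective 0.5*||A q - y||^2 + lam*||q||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L with L = ||A||_2^2
    q = np.zeros(A.shape[1])
    for _ in range(iters):
        z = q - step * (A.T @ (A @ q - y))        # gradient step
        q = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return q

rng = np.random.default_rng(2)
A = rng.standard_normal((12, 40))                 # 12 sensors, 40 mesh cells
q_true = np.zeros(40)
q_true[[5, 23]] = [2.0, 1.0]                      # two charged debris pieces
y = A @ q_true
q_hat = ista(A, y)
```

Even with far fewer sensors than mesh cells, the sparsity prior lets the solver localize which cells carry charge, which is exactly the role the CS framework plays in the abstract.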
Robust Curb Detection with Fusion of 3D-Lidar and Camera Data
Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen
2014-01-01
Curb detection is an essential component of Autonomous Land Vehicles (ALV), especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method through exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images together to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, by using the multi-scale normal patterns based on the curb's geometric property, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points which utilizes the continuous property of the curb, and thus the optimal curb path which links the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter the outliers, parameterize the curbs and give the confidence scores on the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes. PMID:24854364
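The dynamic-programming step that links curb points row by row can be sketched as a shortest-path problem: one candidate column per image row, with a per-point cost plus a penalty on column jumps between consecutive rows. This is a simplified stand-in for the paper's Markov-chain formulation, with a hypothetical cost map:

```python
import numpy as np

def curb_path(cost, smooth=1.0):
    """Dynamic programming over image rows: pick one column per row
    minimising per-point cost plus a column-jump penalty between rows."""
    rows, cols = cost.shape
    dp = cost[0].copy()
    back = np.zeros((rows, cols), dtype=int)
    jump = smooth * np.abs(np.arange(cols)[:, None] - np.arange(cols)[None, :])
    for r in range(1, rows):
        total = dp[None, :] + jump        # total[c, p] = dp[p] + penalty(p->c)
        back[r] = np.argmin(total, axis=1)
        dp = cost[r] + np.min(total, axis=1)
    path = [int(np.argmin(dp))]           # backtrack the optimal path
    for r in range(rows - 1, 0, -1):
        path.append(int(back[r, path[-1]]))
    return path[::-1]

cost = np.full((4, 5), 5.0)               # hypothetical curb-feature costs
for r, c in enumerate([1, 1, 2, 2]):      # cheap cells trace a curb
    cost[r, c] = 0.0
path = curb_path(cost, smooth=0.5)
```

The jump penalty encodes the continuity of the curb: the optimal path follows the low-cost detections while tolerating small lateral shifts.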
Fully parallel write/read in resistive synaptic array for accelerating on-chip learning
NASA Astrophysics Data System (ADS)
Gao, Ligang; Wang, I.-Ting; Chen, Pai-Yu; Vrudhula, Sarma; Seo, Jae-sun; Cao, Yu; Hou, Tuo-Hung; Yu, Shimeng
2015-11-01
A neuro-inspired computing paradigm beyond the von Neumann architecture is emerging; it generally takes advantage of massive parallelism and is aimed at complex tasks that involve intelligence and learning. The cross-point array architecture with synaptic devices has been proposed for on-chip implementation of the weighted sum and weight update in the learning algorithms. In this work, forming-free, silicon-process-compatible Ta/TaOx/TiO2/Ti synaptic devices are fabricated, in which >200 levels of conductance states could be continuously tuned by identical programming pulses. In order to demonstrate the advantages of parallelism of the cross-point array architecture, a novel fully parallel write scheme is designed and experimentally demonstrated in a small-scale crossbar array to accelerate the weight update in the training process, at a speed that is independent of the array size. Compared to the conventional row-by-row write scheme, it achieves >30× speed-up and >30× improvement in energy efficiency as projected in a large-scale array. If realistic synaptic device characteristics such as device variations are taken into account in an array-level simulation, the proposed array architecture is able to achieve ∼95% recognition accuracy of MNIST handwritten digits, which is close to the accuracy achieved by software using the ideal sparse coding algorithm.
Calculation of gas turbine characteristic
NASA Astrophysics Data System (ADS)
Mamaev, B. I.; Murashko, V. L.
2016-04-01
The reasons and regularities of vapor flow and turbine parameter variation depending on the total pressure drop rate π* and rotor rotation frequency n are studied, as exemplified by a two-stage compressor turbine of a power-generating gas turbine installation. The turbine characteristic is calculated over a wide range of mode parameters using a method in which analytical dependences provide high accuracy for the calculated flow output angle, and the different types of gas dynamic losses are determined accounting for the influence of blade row geometry, blade surface roughness, angles, compressibility, Reynolds number, and flow turbulence. The method provides satisfactory agreement between the results of calculation and turbine testing. In the design mode, the operation conditions for the blade rows are favorable, the flow output velocities are close to the optimal ones, the angles of incidence are small, and the flow "choking" modes (with respect to consumption) in the rows are absent. High performance and a nearly axial flow behind the turbine are obtained. Reduction of the rotor rotation frequency and variation of the pressure drop change the flow parameters, the parameters of the stages and the turbine, as well as the form of the characteristic. In particular, for decreased n, nonmonotonic variation of the second stage reactivity with increasing π* is observed. It is demonstrated that the turbine characteristic is mainly determined by the influence of the angles of incidence and the velocity at the output of the rows on the losses and the flow output angle. Accounting for the growth of the flow output angle due to the positive angle of incidence at decreased rotation frequencies results in a considerable change of the characteristic: poorer performance, redistribution of the pressure drop among the stages and change of reactivities, growth of the turbine capacity, and change of the angle and flow velocity behind the turbine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langet, Hélène; Laboratoire des Signaux et Systèmes, CentraleSupélec, Gif-sur-Yvette F-91192; Center for Visual Computing, CentraleSupélec, Châtenay-Malabry F-92295
2015-09-15
Purpose: This paper addresses the reconstruction of x-ray cone-beam computed tomography (CBCT) for interventional C-arm systems. Subsampling of CBCT is a significant issue with C-arms due to their slow rotation and to the low frame rate of their flat panel x-ray detectors. The aim of this work is to propose a novel method able to handle the subsampling artifacts generally observed with analytical reconstruction, through a content-driven hierarchical reconstruction based on compressed sensing. Methods: The central idea is to proceed with a hierarchical method where the most salient features (high intensities or gradients) are reconstructed first to reduce the artifacts these features induce. These artifacts are addressed first because their presence contaminates less salient features. Several hierarchical schemes aimed at streak artifact reduction are introduced for C-arm CBCT: the empirical orthogonal matching pursuit approach with the ℓ0 pseudonorm for reconstructing sparse vessels; a convex variant using homotopy with the ℓ1-norm constraint of compressed sensing, for reconstructing sparse vessels over a nonsparse background; homotopy with total variation (TV); and a novel empirical extension to nonlinear diffusion (NLD). Such principles are implemented with penalized iterative filtered backprojection algorithms. For soft-tissue imaging, the authors compare the use of TV and NLD filters as sparsity constraints, both optimized with the alternating direction method of multipliers, using a threshold for TV and a nonlinear weighting for NLD. Results: The authors show on simulated data that their approach provides fast convergence to good approximations of the solution of the TV-constrained minimization problem introduced by the compressed sensing theory. Using C-arm CBCT clinical data, the authors show that both TV and NLD can deliver improved image quality by reducing streaks.
Conclusions: A flexible compressed-sensing-based algorithmic approach is proposed that is able to accommodate a wide range of constraints. It is successfully applied to C-arm CBCT images that may not be so well approximated by piecewise constant functions.
2010-11-01
material. The rubber is laser-etched with rows of tiny, interconnected channels or galleries, to which air pressure is applied. Any propagating crack... clad one side. The Upper Lobe has a radius of approximately 85" (compound curvature) in the region of interest. As stated previously, the skin is... 7079-T6 sheet; clad one side with a varying thickness of 0.050" to 0.071" (varies according to stability requirements for compression combined with
Sparse representation of electrodermal activity with knowledge-driven dictionaries.
Chaspari, Theodora; Tsiartas, Andreas; Stein, Leah I; Cermak, Sharon A; Narayanan, Shrikanth S
2015-03-01
Biometric sensors and portable devices are being increasingly embedded into our everyday life, creating the need for robust physiological models that efficiently represent, analyze, and interpret the acquired signals. We propose a knowledge-driven method to represent electrodermal activity (EDA), a psychophysiological signal linked to stress, affect, and cognitive processing. We build EDA-specific dictionaries that accurately model both the slowly varying tonic part and the signal fluctuations, called skin conductance responses (SCR), and use greedy sparse representation techniques to decompose the signal into a small number of atoms from the dictionary. Quantitative evaluation of our method considers signal reconstruction, compression rate, and information retrieval measures that capture the ability of the model to incorporate the main signal characteristics, such as SCR occurrences. Compared to previous studies fitting a predetermined structure to the signal, results indicate that our approach provides benefits across all the aforementioned criteria. This paper demonstrates the ability of appropriate dictionaries along with sparse decomposition methods to reliably represent EDA signals and provides a foundation for automatic measurement of SCR characteristics and the extraction of meaningful EDA features.
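Greedy decomposition over a signal-shaped dictionary can be sketched with matching pursuit. The double-exponential SCR-like atom below is a crude assumption for illustration; the paper's knowledge-driven EDA dictionaries are more elaborate:

```python
import numpy as np

def scr_atom(n, onset, rise=3.0, decay=20.0):
    """A crude skin-conductance-response-shaped atom (double exponential
    starting at `onset`), normalised to unit energy."""
    t = np.arange(n, dtype=float) - onset
    a = np.where(t >= 0, np.exp(-t / decay) - np.exp(-t / rise), 0.0)
    return a / np.linalg.norm(a)

def matching_pursuit(D, y, n_atoms):
    """Greedy sparse decomposition: repeatedly subtract the best-matching atom."""
    residual = y.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual
        j = int(np.argmax(np.abs(corr)))
        coeffs[j] += corr[j]
        residual -= corr[j] * D[:, j]
    return coeffs, residual

n = 200
D = np.column_stack([scr_atom(n, onset) for onset in range(0, n, 5)])
y = 2.0 * D[:, 4] + 0.8 * D[:, 20]     # two SCRs, at onsets 20 and 100
coeffs, residual = matching_pursuit(D, y, n_atoms=8)
```

A handful of atom indices and coefficients then serve both as a compressed representation and as direct estimates of SCR onsets and amplitudes.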
Yi, Faliu; Jeoung, Yousun; Moon, Inkyu
2017-05-20
In recent years, many studies have focused on authentication of two-dimensional (2D) images using double random phase encryption techniques. However, there has been little research on three-dimensional (3D) imaging systems, such as integral imaging, for 3D image authentication. We propose a 3D image authentication scheme based on a double random phase integral imaging method. All of the 2D elemental images captured through integral imaging are encrypted with a double random phase encoding algorithm and only partial phase information is reserved. All the amplitude and other miscellaneous phase information in the encrypted elemental images is discarded. Nevertheless, we demonstrate that 3D images from integral imaging can be authenticated at different depths using a nonlinear correlation method. The proposed 3D image authentication algorithm can provide enhanced information security because the decrypted 2D elemental images from the sparse phase cannot be easily observed by the naked eye. Additionally, using sparse phase images without any amplitude information can greatly reduce data storage costs and aid in image compression and data transmission.
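The double random phase encoding at the core of the scheme can be sketched in a few lines of numpy: one random phase mask in the spatial domain, a second in the Fourier domain. This is the textbook DRPE transform applied to a single 2D image, not the full integral-imaging and sparse-phase pipeline of the paper:

```python
import numpy as np

def drpe_encrypt(img, r1, r2):
    """Double random phase encoding: multiply by a random phase mask in
    the spatial domain and a second one in the Fourier domain."""
    phase1 = np.exp(2j * np.pi * r1)
    phase2 = np.exp(2j * np.pi * r2)
    return np.fft.ifft2(np.fft.fft2(img * phase1) * phase2)

def drpe_decrypt(cipher, r1, r2):
    """Invert the two phase masks in reverse order (conjugate phases)."""
    phase1 = np.exp(-2j * np.pi * r1)
    phase2 = np.exp(-2j * np.pi * r2)
    return np.fft.ifft2(np.fft.fft2(cipher) * phase2) * phase1

rng = np.random.default_rng(3)
img = rng.random((16, 16))                       # stand-in elemental image
r1, r2 = rng.random((16, 16)), rng.random((16, 16))   # the key masks
cipher = drpe_encrypt(img, r1, r2)
recovered = drpe_decrypt(cipher, r1, r2).real
```

Keeping only part of the phase of `cipher`, as the paper proposes, degrades each decrypted elemental image while still permitting nonlinear-correlation authentication of the 3D scene.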
Fast super-resolution estimation of DOA and DOD in bistatic MIMO Radar with off-grid targets
NASA Astrophysics Data System (ADS)
Zhang, Dong; Zhang, Yongshun; Zheng, Guimei; Feng, Cunqian; Tang, Jun
2018-05-01
In this paper, we focus on the problem of joint DOA and DOD estimation in bistatic MIMO radar using a sparse reconstruction method. Traditionally, the 2D parameter estimation problem is converted into a 1D estimation problem via the Kronecker product, which enlarges the scale of the problem and brings more computational burden. Furthermore, it requires that the targets fall on the predefined grids. In this paper, a 2D off-grid model is built which can solve the grid mismatch problem of 2D parameter estimation. Then, in order to solve the joint 2D sparse reconstruction problem directly and efficiently, three fast joint sparse matrix reconstruction methods are proposed: the Joint-2D-OMP, Joint-2D-SL0 and Joint-2D-SOONE algorithms. Simulation results demonstrate that our methods not only improve the 2D parameter estimation accuracy but also reduce the computational complexity compared with the traditional Kronecker Compressed Sensing method.
Lan, Ti-Yen; Wierman, Jennifer L.; Tate, Mark W.; Philipp, Hugh T.; Elser, Veit
2017-01-01
Recently, there has been a growing interest in adapting serial microcrystallography (SMX) experiments to existing storage ring (SR) sources. For very small crystals, however, radiation damage occurs before sufficient numbers of photons are diffracted to determine the orientation of the crystal. The challenge is to merge data from a large number of such ‘sparse’ frames in order to measure the full reciprocal space intensity. To simulate sparse frames, a dataset was collected from a large lysozyme crystal illuminated by a dim X-ray source. The crystal was continuously rotated about two orthogonal axes to sample a subset of the rotation space. With the EMC algorithm [expand–maximize–compress; Loh & Elser (2009). Phys. Rev. E, 80, 026705], it is shown that the diffracted intensity of the crystal can still be reconstructed even without knowledge of the orientation of the crystal in any sparse frame. Moreover, parallel computation implementations were designed to considerably improve the time and memory scaling of the algorithm. The results show that EMC-based SMX experiments should be feasible at SR sources. PMID:28808431
Sparse imaging for fast electron microscopy
NASA Astrophysics Data System (ADS)
Anderson, Hyrum S.; Ilic-Helms, Jovana; Rohrer, Brandon; Wheeler, Jason; Larson, Kurt
2013-02-01
Scanning electron microscopes (SEMs) are used in neuroscience and materials science to image centimeters of sample area at nanometer scales. Since imaging rates are in large part SNR-limited, large collections can lead to weeks of around-the-clock imaging time. To increase data collection speed, we propose and demonstrate on an operational SEM a fast method to sparsely sample and reconstruct smooth images. To accurately localize the electron probe position at fast scan rates, we model the dynamics of the scan coils, and use the model to rapidly and accurately visit a randomly selected subset of pixel locations. Images are reconstructed from the undersampled data by compressed sensing inversion using image smoothness as a prior. We report image fidelity as a function of acquisition speed by comparing traditional raster to sparse imaging modes. Our approach is equally applicable to other domains of nanometer microscopy in which the time to position a probe is a limiting factor (e.g., atomic force microscopy), or in which excessive electron doses might otherwise alter the sample being observed (e.g., scanning transmission electron microscopy).
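Reconstruction from a random subset of pixels using smoothness as a prior can be sketched as a regularized least-squares problem. The second-difference (curvature) penalty below is a simple stand-in for the compressed-sensing inversion used in the paper, and the tiny image size is purely illustrative:

```python
import numpy as np

def inpaint_smooth(shape, idx, vals, lam=1.0):
    """Reconstruct an image from a random subset of pixels by least
    squares with a second-difference smoothness prior: minimise
    ||S x - vals||^2 + lam * ||L x||^2, with L the curvature operator."""
    h, w = shape
    n = h * w
    S = np.zeros((len(idx), n)); S[np.arange(len(idx)), idx] = 1.0
    rows = []
    for r in range(h):
        for c in range(w):
            i = r * w + c
            if c + 2 < w:                        # horizontal curvature
                d = np.zeros(n); d[i], d[i+1], d[i+2] = 1.0, -2.0, 1.0
                rows.append(d)
            if r + 2 < h:                        # vertical curvature
                d = np.zeros(n); d[i], d[i+w], d[i+2*w] = 1.0, -2.0, 1.0
                rows.append(d)
    L = np.array(rows)
    A = np.vstack([S, np.sqrt(lam) * L])
    b = np.concatenate([vals, np.zeros(len(rows))])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x.reshape(shape)

rng = np.random.default_rng(4)
yy, xx = np.mgrid[0:12, 0:12]
img = (xx + yy) / 22.0                           # smooth test image (ramp)
idx = rng.choice(144, size=40, replace=False)    # visit ~28% of the pixels
rec = inpaint_smooth((12, 12), idx, img.ravel()[idx])
```

Because the probe visits only the sampled locations, the acquisition-time saving scales directly with the fraction of pixels skipped.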
Compressive Hyperspectral Imaging and Anomaly Detection
2010-02-01
Level Set Systems, 1058 Embury Street, Pacific Palisades, CA 90272. ...were obtained from a simple algorithm, namely, the atoms in the trained image were very similar to the simple cell receptive fields in early vision... Field, "Emergence of simple-cell receptive field properties by learning a sparse code for natural images," Nature 381(6583), pp. 607-609, 1996.
Compressive Information Extraction: A Dynamical Systems Approach
2016-01-24
sparsely encoded in very large data streams. (a) Target tracking in an urban canyon; (b) and (c) sample frames showing contextually abnormal events: onset... extraction to identify contextually abnormal sequences (see section 2.2.3). Formally, the problem of interest can be stated as establishing whether a noisy... relaxations with optimality guarantees can be obtained using tools from semi-algebraic geometry. 2.2 Application: Detecting Contextually Abnormal Events
NASA Astrophysics Data System (ADS)
Zheng, Maoteng; Zhang, Yongjun; Zhou, Shunping; Zhu, Junfeng; Xiong, Xiaodong
2016-07-01
In recent years, new platforms and sensors in the photogrammetry, remote sensing and computer vision areas have become available, such as Unmanned Aircraft Vehicles (UAV), oblique camera systems, common digital cameras and even mobile phone cameras. Images collected by all these kinds of sensors can be used as remote sensing data sources. These sensors can obtain large-scale remote sensing data consisting of a great number of images. Bundle block adjustment of large-scale data with the conventional algorithm is very time and space (memory) consuming due to the super large normal matrix arising from large-scale data. In this paper, an efficient Block-based Sparse Matrix Compression (BSMC) method combined with the Preconditioned Conjugate Gradient (PCG) algorithm is used to develop a stable and efficient bundle block adjustment system for dealing with large-scale remote sensing data. The main contribution of this work is the BSMC-based PCG algorithm, which is more efficient in time and memory than the traditional algorithm without compromising accuracy. In total, eight real datasets are used to test the proposed method. Preliminary results show that the BSMC method can efficiently decrease the time and memory requirements of large-scale data.
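The PCG half of the approach can be sketched with scipy's sparse machinery; the tridiagonal matrix below is a small stand-in for a bundle-adjustment normal matrix, and the BSMC compression itself is specific to the paper and not reproduced here:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# A sparse, symmetric positive definite stand-in for the normal matrix,
# stored in compressed sparse row (CSR) format rather than densely.
n = 500
A = diags([4.0 * np.ones(n), -1.0 * np.ones(n - 1), -1.0 * np.ones(n - 1)],
          [0, -1, 1], format='csr')
b = np.ones(n)

# Jacobi (diagonal) preconditioner: cheap to build and apply, and often
# enough to speed up conjugate gradients on diagonally dominant systems.
M = diags(1.0 / A.diagonal())
x, info = cg(A, b, M=M)                # info == 0 signals convergence
```

The point of pairing a compressed sparse storage scheme with PCG is that the solver only ever needs matrix-vector products, so the full normal matrix never has to be materialized densely in memory.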
Zhang, Jun; Gu, Zhenghui; Yu, Zhu Liang; Li, Yuanqing
2015-03-01
Low energy consumption is crucial for body area networks (BANs). In BAN-enabled ECG monitoring, continuous monitoring requires the sensor nodes to transmit a huge amount of data to the sink node, which leads to excessive energy consumption. To reduce airtime over energy-hungry wireless links, this paper presents an energy-efficient compressed sensing (CS)-based approach for on-node ECG compression. First, an algorithm called minimal mutual coherence pursuit is proposed to construct sparse binary measurement matrices, which can be used to encode the ECG signals with superior performance and extremely low complexity. Second, in order to minimize the data rate required for faithful reconstruction, a weighted ℓ1 minimization model is derived by exploring the multisource prior knowledge in the wavelet domain. Experimental results on the MIT-BIH arrhythmia database reveal that the proposed approach can obtain a higher compression ratio than state-of-the-art CS-based methods. Together with its low encoding complexity, our approach can achieve significant energy savings in both the encoding process and wireless transmission.
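A sparse binary measurement matrix of the kind the encoder relies on can be sketched as follows. This is a generic random construction with a fixed number of ones per column plus a mutual-coherence check; the paper's minimal mutual coherence pursuit is a more refined selection procedure:

```python
import numpy as np

def sparse_binary_matrix(m, n, d, seed=0):
    """Sparse binary sensing matrix with exactly d ones per column.
    Encoding y = Phi @ x then needs only d additions per column, which
    is why such matrices suit energy-constrained sensor nodes."""
    rng = np.random.default_rng(seed)
    Phi = np.zeros((m, n))
    for j in range(n):
        Phi[rng.choice(m, size=d, replace=False), j] = 1.0
    return Phi

def mutual_coherence(Phi):
    """Largest absolute inner product between distinct normalised columns."""
    G = Phi / np.linalg.norm(Phi, axis=0)
    gram = np.abs(G.T @ G)
    np.fill_diagonal(gram, 0.0)
    return gram.max()

Phi = sparse_binary_matrix(m=64, n=256, d=4)
mu = mutual_coherence(Phi)                        # lower is better for CS
y = Phi @ np.random.default_rng(1).standard_normal(256)
```

Minimizing `mu` over candidate matrices, rather than accepting the first random draw, is the essence of a coherence-pursuit construction.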
Wind Turbine Wake Variability in a Large Wind Farm, Observed by Scanning Lidar
NASA Astrophysics Data System (ADS)
Lundquist, J. K.; Xiaoxia, G.; Aitken, M.; Quelet, P. T.; Rana, J.; Rhodes, M. E.; St Martin, C. M.; Tay, K.; Worsnop, R.; Irvin, S.; Rajewski, D. A.; Takle, E. S.
2014-12-01
Although wind turbine wake modeling is critical for accurate wind resource assessment, operational forecasting, and wind plant optimization, verification of such simulations is currently constrained by sparse datasets taken in limited atmospheric conditions, often of single turbines in isolation. To address this knowledge gap, our team deployed a WINDCUBE 200S scanning lidar in a 300-MW operating wind farm as part of the CWEX-13 field experiment. The lidar was deployed ~2000 m from a row of four turbines, such that wakes from multiple turbines could be sampled with horizontal scans. Twenty minutes of every hour were devoted to horizontal scans at ½ degree resolution at six different elevation angles. Twenty-five days of data were collected, with wind speeds at hub height ranging from quiescent to 14 m/s, and atmospheric stability varying from unstable to strongly stable. The example scan in Fig. 1a shows wakes from a row of four turbines propagating to the northwest. This extensive wake dataset is analyzed based on the quantitative approach of Aitken et al. (J. Atmos. Ocean. Technol. 2014), who developed an automated wake detection algorithm to characterize wind turbine wakes from scanning lidar data. We have extended the Aitken et al. (2014) method to consider multiple turbines in a single scan in order to classify the large numbers of wakes observed in the CWEX-13 dataset (Fig. 1b) during southerly flow conditions. The presentation will explore the variability of wake characteristics such as the velocity deficit and the wake width. These characteristics vary with atmospheric stability, atmospheric turbulence, and inflow wind speed. We find that the strongest and most persistent wakes occur at low to moderate wind speeds (region 2 of the turbine power curve) in stable conditions. We also present evidence that, in stable conditions with strong changes of wind direction with height, wakes propagate in different directions at different elevations above the surface. 
Finally, we compare characteristics of wakes at the outside of the row of turbines to wakes from turbines in the interior of the row, quantifying how wakes from outer turbines erode faster than those from interior turbines.
NASA Technical Reports Server (NTRS)
Jorgenson, Philip C. E.; Veres, Joseph P.; Wright, William B.; Struk, Peter M.
2013-01-01
The occurrence of ice accretion within commercial high bypass aircraft turbine engines has been reported under certain atmospheric conditions. Engine anomalies have taken place at high altitudes that were attributed to ice crystal ingestion, partially melting, and ice accretion on the compression system components. The result was one or more of the following anomalies: degraded engine performance, engine roll back, compressor surge and stall, and flameout of the combustor. The main focus of this research is the development of a computational tool that can estimate whether there is a risk of ice accretion by tracking key parameters through the compression system blade rows at all engine operating points within the flight trajectory. The tool has an engine system thermodynamic cycle code, coupled with a compressor flow analysis code, and an ice particle melt code that has the capability of determining the rate of sublimation, melting, and evaporation through the compressor blade rows. Assumptions are made to predict the complex physics involved in engine icing. Specifically, the code does not directly estimate ice accretion and does not have models for particle breakup or erosion. Two key parameters have been suggested as conditions that must be met at the same location for ice accretion to occur: the local wet-bulb temperature to be near freezing or below and the local melt ratio must be above 10%. These parameters were deduced from analyzing laboratory icing test data and are the criteria used to predict the possibility of ice accretion within an engine including the specific blade row where it could occur. Once the possibility of accretion is determined from these parameters, the degree of blockage due to ice accretion on the local stator vane can be estimated from an empirical model of ice growth rate and time spent at that operating point in the flight trajectory. 
The computational tool can be used to assess specific turbine engines for their susceptibility to ice accretion in an ice crystal environment.
Sparse and redundant representations for inverse problems and recognition
NASA Astrophysics Data System (ADS)
Patel, Vishal M.
Sparse and redundant representation of data enables the description of signals as linear combinations of a few atoms from a dictionary. In this dissertation, we study applications of sparse and redundant representations in inverse problems and object recognition. Furthermore, we propose two novel imaging modalities based on the recently introduced theory of Compressed Sensing (CS). This dissertation consists of four major parts. In the first part of the dissertation, we study a new type of deconvolution algorithm that is based on estimating the image from a shearlet decomposition. Shearlets provide a multi-directional and multi-scale decomposition that has been mathematically shown to represent distributed discontinuities such as edges better than traditional wavelets. We develop a deconvolution algorithm that allows the approximate inversion operator to be controlled on a multi-scale and multi-directional basis. Furthermore, we develop a method for the automatic determination of the threshold values for the noise shrinkage for each scale and direction without explicit knowledge of the noise variance, using a generalized cross validation method. In the second part of the dissertation, we study a reconstruction method that recovers highly undersampled images assumed to have a sparse representation in a gradient domain by using partial measurement samples that are collected in the Fourier domain. Our method makes use of a robust generalized Poisson solver that greatly aids in achieving a significantly improved performance over similar proposed methods. We demonstrate by experiments that this new technique is more flexible than its competitors in working with either random or restricted sampling scenarios.
In the third part of the dissertation, we introduce a novel Synthetic Aperture Radar (SAR) imaging modality which can provide a high resolution map of the spatial distribution of targets and terrain using a significantly reduced number of transmitted and/or received electromagnetic waveforms. We demonstrate that this new imaging scheme requires no new hardware components and allows the aperture to be compressed. It also presents many new applications and advantages, which include strong resistance to countermeasures and interception, imaging much wider swaths, and reduced on-board storage requirements. The last part of the dissertation deals with object recognition based on learning dictionaries for simultaneous sparse signal approximations and feature extraction. A dictionary is learned for each object class based on given training examples that minimize the representation error with a sparseness constraint. A novel test image is then projected onto the span of the atoms in each learned dictionary. The residual vectors along with the coefficients are then used for recognition. Applications to illumination robust face recognition and automatic target recognition are presented.
A designed experiment in stitched/RTM composites
NASA Technical Reports Server (NTRS)
Dickinson, Larry C.
1993-01-01
The damage tolerance of composite laminates can be significantly improved by the addition of through-the-thickness fibrous reinforcement such as stitching. However, there are numerous stitching parameters which can be independently varied, and their separate and combined effects on mechanical properties need to be determined. A statistically designed experiment (a 2^(5-1) fractional factorial, also known as a Taguchi L16 test matrix) used to evaluate five important parameters is described. The effects and interactions of stitch thread material, stitch thread strength, stitch row spacing and stitch pitch are examined for both thick (48 ply) and thin (16 ply) carbon/epoxy (AS4/E905L) composites. Tension, compression and compression after impact tests are described. Preliminary results of completed tension testing are discussed. Larger threads decreased tensile strength. Panel thickness was found not to be an important stitching parameter for tensile properties. Tensile modulus was unaffected by stitching.
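A 2^(5-1) fractional factorial design with 16 runs can be generated by taking a full 2^4 factorial and aliasing the fifth factor with the four-way interaction. The defining relation E = ABCD below is a common textbook choice and an assumption here, not necessarily the generator used in the study:

```python
from itertools import product

def fractional_factorial_2_5_1():
    """Generate the 16 runs of a 2^(5-1) design with defining relation
    E = ABCD, levels coded as -1/+1."""
    runs = []
    for a, b, c, d in product((-1, 1), repeat=4):
        runs.append((a, b, c, d, a * b * c * d))   # aliased fifth factor
    return runs

runs = fractional_factorial_2_5_1()
```

Each of the five columns is balanced (equal numbers of -1 and +1), which is what lets main effects be estimated from only 16 of the 32 possible combinations.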
Pressure-Equalizing Cradle for Booster Rocket Mounting
NASA Technical Reports Server (NTRS)
Rutan, Elbert L. (Inventor)
2015-01-01
A launch system and method improve the launch efficiency of a booster rocket and payload. A launch aircraft atop which the booster rocket is mounted in a cradle, is flown or towed to an elevation at which the booster rocket is released. The cradle provides for reduced structural requirements for the booster rocket by including a compressible layer, that may be provided by a plurality of gas or liquid-filled flexible chambers. The compressible layer contacts the booster rocket along most of the length of the booster rocket to distribute applied pressure, nearly eliminating bending loads. Distributing the pressure eliminates point loading conditions and bending moments that would otherwise be generated in the booster rocket structure during carrying. The chambers may be balloons distributed in rows and columns within the cradle or cylindrical chambers extending along a length of the cradle. The cradle may include a manifold communicating gas between chambers.
NASA Technical Reports Server (NTRS)
Tweedt, Daniel L.; Chima, Rodrick V.; Turkel, Eli
1997-01-01
A preconditioning scheme has been implemented into a three-dimensional viscous computational fluid dynamics code for turbomachine blade rows. The preconditioning allows the code, originally developed for simulating compressible flow fields, to be applied to nearly-incompressible, low Mach number flows. A brief description is given of the compressible Navier-Stokes equations for a rotating coordinate system, along with the preconditioning method employed. Details about the conservative formulation of artificial dissipation are provided, and different artificial dissipation schemes are discussed and compared. The preconditioned code was applied to a well-documented case involving the NASA large low-speed centrifugal compressor for which detailed experimental data are available for comparison. Performance and flow field data are compared for the near-design operating point of the compressor, with generally good agreement between computation and experiment. Further, significant differences between computational results for the different numerical implementations, revealing different levels of solution accuracy, are discussed.
An efficient compression scheme for bitmap indices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie
2004-04-13
When using an out-of-core indexing method to answer a query, it is generally assumed that the I/O cost dominates the overall query response time. Because of this, most research on indexing methods concentrates on reducing the sizes of indices. For bitmap indices, compression has been used for this purpose. However, in most cases, operations on these compressed bitmaps, mostly bitwise logical operations such as AND, OR, and NOT, spend more time in CPU than in I/O. To speed up these operations, a number of specialized bitmap compression schemes have been developed; the best known of these is the byte-aligned bitmap code (BBC). They are usually faster in performing logical operations than the general-purpose compression schemes, but the time spent in CPU still dominates the total query response time. To reduce the query response time, we designed a CPU-friendly scheme named the word-aligned hybrid (WAH) code. In this paper, we prove that the sizes of WAH compressed bitmap indices are about two words per row for a large range of attributes. This size is smaller than typical sizes of commonly used indices, such as a B-tree. Therefore, WAH compressed indices are appropriate not only for low cardinality attributes but also for high cardinality attributes. In the worst case, the time to operate on compressed bitmaps is proportional to the total size of the bitmaps involved. The total size of the bitmaps required to answer a query on one attribute is proportional to the number of hits. These indicate that WAH compressed bitmap indices are optimal. To verify their effectiveness, we generated bitmap indices for four different datasets and measured the response time of many range queries. Tests confirm that sizes of compressed bitmap indices are indeed smaller than B-tree indices, and query processing with WAH compressed indices is much faster than with BBC compressed indices, projection indices and B-tree indices.
In addition, we also verified that the average query response time is proportional to the index size. This indicates that the compressed bitmap indices are efficient for very large datasets.
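The run-length idea behind a word-aligned code can be sketched in a few lines. The following is a simplified illustration only, assuming 32-bit words (31 bitmap bits per literal word) and omitting the active-word bookkeeping a production encoder needs; it is not the authors' implementation.

```python
# A minimal sketch of word-aligned hybrid (WAH) style bitmap compression:
# 31 bitmap bits per 32-bit "literal" word, plus "fill" words that
# run-length encode consecutive all-zero or all-one 31-bit groups.

def wah_encode(bits):
    """Compress a list of 0/1 bits into WAH-style 32-bit words."""
    # Split the bitmap into groups of 31 bits (pad the last group with zeros).
    groups = [bits[i:i + 31] for i in range(0, len(bits), 31)]
    if groups and len(groups[-1]) < 31:
        groups[-1] = groups[-1] + [0] * (31 - len(groups[-1]))
    words, i = [], 0
    while i < len(groups):
        g = groups[i]
        if all(b == g[0] for b in g):
            # Count how many identical fill groups follow.
            fill_bit, run = g[0], 0
            while i < len(groups) and all(b == fill_bit for b in groups[i]):
                run += 1
                i += 1
            # Fill word: MSB = 1, next bit = fill value, low 30 bits = run length.
            words.append((1 << 31) | (fill_bit << 30) | run)
        else:
            # Literal word: MSB = 0, low 31 bits hold the group verbatim.
            lit = 0
            for b in g:
                lit = (lit << 1) | b
            words.append(lit)
            i += 1
    return words

# 62 zeros compress to one fill word covering two groups, the mixed group
# becomes one literal word, and the 31 ones become one 1-fill word.
bitmap = [0] * 62 + [1, 0, 1] + [0] * 28 + [1] * 31
words = wah_encode(bitmap)
print(len(words))  # 3
```

The bitwise logical operations that dominate query time can then work a word at a time on this representation, which is the source of WAH's CPU friendliness.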
Application of wavefield compressive sensing in surface wave tomography
NASA Astrophysics Data System (ADS)
Zhan, Zhongwen; Li, Qingyang; Huang, Jianping
2018-06-01
Dense arrays allow sampling of the seismic wavefield without significant aliasing, and surface wave tomography has benefitted from exploiting wavefield coherence among neighbouring stations. However, explicit or implicit assumptions about the wavefield, irregular station spacing, and noise still limit the applicability and resolution of current surface wave methods. Here, we propose to apply the theory of compressive sensing (CS) to seek a sparse representation of the surface wavefield using a plane-wave basis. Then we reconstruct the continuous surface wavefield on a dense regular grid before applying any tomographic methods. Synthetic tests demonstrate that wavefield CS improves the robustness and resolution of Helmholtz tomography and wavefield gradiometry, especially when traditional approaches have difficulties due to sub-Nyquist sampling or complexities in the wavefield.
Sparse modeling applied to patient identification for safety in medical physics applications
NASA Astrophysics Data System (ADS)
Lewkowitz, Stephanie
Every scheduled treatment at a radiation therapy clinic involves a series of safety protocols to ensure the utmost patient care. Despite these protocols, on rare occasions an entirely preventable medical event, an accident, may occur. Delivering a treatment plan to the wrong patient is preventable, yet it is still a clinically documented error. This research describes a computational method to identify patients with a novel machine learning technique to combat misadministration. The patient identification program stores face and fingerprint data for each patient. New, unlabeled data from those patients are categorized according to the library. The categorization of data by this face-fingerprint detector is accomplished with new machine learning algorithms based on Sparse Modeling that have already begun transforming the foundation of Computer Vision. Previous patient recognition software required special subroutines for faces and different tailored subroutines for fingerprints. In this research, the same exact model is used for both fingerprints and faces, without any additional subroutines and even without adjusting the two hyperparameters. Sparse Modeling is a powerful tool, having already shown utility in the areas of super-resolution, denoising, inpainting, demosaicing, and sub-Nyquist sampling, i.e., compressed sensing. Sparse Modeling is possible because natural images are inherently sparse in some bases, due to their inherent structure. This research chooses datasets of face and fingerprint images to test the patient identification model. The model stores the images of each dataset as a basis (library). One image at a time is removed from the library and is classified by a sparse code in terms of the remaining library. The Locally Competitive Algorithm, a neurally inspired Artificial Neural Network, solves the computationally difficult task of finding the sparse code for the test image.
The components of the sparse representation vector are summed by ℓ1 pooling, and correct patient identification is consistently achieved, 100% over 1000 trials, when either the face data or fingerprint data are used as the classification basis. The algorithm achieves 100% classification when faces and fingerprints are concatenated into multimodal datasets. This suggests that 100% patient identification will be achievable in the clinical setting.
Dictionary Learning Algorithms for Sparse Representation
Kreutz-Delgado, Kenneth; Murray, Joseph F.; Rao, Bhaskar D.; Engan, Kjersti; Lee, Te-Won; Sejnowski, Terrence J.
2010-01-01
Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial “25 words or less”), but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS and an update of the dictionary using these sparse representations. Experiments were performed using synthetic data and natural images. For complete dictionaries, we demonstrate that our algorithms have improved performance over other independent component analysis (ICA) methods, measured in terms of signal-to-noise ratios of separated sources. In the overcomplete case, we show that the true underlying dictionary and sparse sources can be accurately recovered. In tests with natural images, learned overcomplete dictionaries are shown to have higher coding efficiency than complete dictionaries; that is, images encoded with an over-complete dictionary have both higher compression (fewer bits per pixel) and higher accuracy (lower mean square error). PMID:12590811
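The alternation described above, sparse-coding the signals against the current dictionary and then updating the dictionary from those codes, can be sketched generically. The paper uses FOCUSS variants for the sparse step; the sketch below substitutes orthogonal matching pursuit and a plain least-squares dictionary update, so all parameter choices and names are illustrative, not the authors' algorithm.

```python
# Generic dictionary learning skeleton: alternate sparse coding (here,
# orthogonal matching pursuit) with a least-squares dictionary update.
import numpy as np

def omp(D, x, k):
    """Greedy k-sparse code of signal x over dictionary D."""
    residual, support = x.copy(), []
    coef = np.array([])
    for _ in range(k):
        corr = np.abs(D.T @ residual)
        corr[support] = 0.0               # do not reselect chosen atoms
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    s = np.zeros(D.shape[1])
    s[support] = coef
    return s

def learn_dictionary(X, n_atoms, k, n_iter=10, seed=0):
    """X has one training signal per column."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        S = np.stack([omp(D, x, k) for x in X.T], axis=1)  # sparse codes
        D = X @ np.linalg.pinv(S)                          # least-squares update
        norms = np.linalg.norm(D, axis=0)
        D /= np.where(norms > 0, norms, 1.0)               # renormalize atoms
    return D
```

In the paper's terms, each column of `D` plays the role of a dictionary "word", and the update step refits the words to best explain the current set of sparse representations.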
NASA Astrophysics Data System (ADS)
O'Connor, Sean M.; Lynch, Jerome P.; Gilbert, Anna C.
2013-04-01
Wireless sensors have emerged to offer low-cost sensors with impressive functionality (e.g., data acquisition, computing, and communication) and modular installations. Such advantages enable higher nodal densities than tethered systems, resulting in increased spatial resolution of the monitoring system. However, high nodal density comes at a cost as huge amounts of data are generated, weighing heavily on power sources, transmission bandwidth, and data management requirements, often making data compression necessary. The traditional compression paradigm consists of high rate (>Nyquist) uniform sampling and storage of the entire target signal followed by some desired compression scheme prior to transmission. The recently proposed compressed sensing (CS) framework combines the acquisition and compression stage together, thus removing the need for storage and operation of the full target signal prior to transmission. The effectiveness of the CS approach hinges on the presence of a sparse representation of the target signal in a known basis, similarly exploited by several traditional compressive sensing applications today (e.g., imaging, MRI). Field implementations of CS schemes in wireless SHM systems have been challenging due to the lack of commercially available sensing units capable of sampling methods (e.g., random) consistent with the compressed sensing framework, often moving evaluation of CS techniques to simulation and post-processing. The research presented here describes implementation of a CS sampling scheme on the Narada wireless sensing node and the energy efficiencies observed in the deployed sensors. Of interest in this study is the compressibility of acceleration response signals collected from a multi-girder steel-concrete composite bridge. The study shows the benefit of CS in reducing data requirements while ensuring that data analysis on compressed data remains accurate.
Accelerated Simulation of Kinetic Transport Using Variational Principles and Sparsity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caflisch, Russel
This project is centered on the development and application of techniques of sparsity and compressed sensing for variational principles, PDEs and physics problems, in particular for kinetic transport. This included the derivation of sparse modes for elliptic and parabolic problems coming from variational principles. The research results of this project concern methods for sparsity in differential equations and the application of sparsity ideas to the kinetic transport of plasmas.
Blind Compressed Image Watermarking for Noisy Communication Channels
2015-10-26
Lenna test image [11] for our simulations, and gradient projection for sparse reconstruction (GPSR) [12] to solve the convex optimization problem...E. Candes, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE...Images - Requirements and Guidelines,” ITU-T Recommendation T.81, 1992. [6] M. Gkizeli, D. Pados, and M. Medley, “Optimal signature design for
Robust 1-Bit Compressive Sensing via Binary Stable Embeddings of Sparse Vectors
2011-04-15
funded by Mitsubishi Electric Research Laboratories. †ICTEAM Institute, ELEN Department, Université catholique de Louvain (UCL), B-1348 Louvain-la-Neuve...reduced to a simple comparator that tests for values above or below zero, enabling extremely simple, efficient, and fast quantization. A 1-bit quantizer is...these two terms appears to be significantly different, according to the previously discussed experiments. To test the hypothesis that this term is the key
A compressive-sensing Fourier-transform on-chip Raman spectrometer
NASA Astrophysics Data System (ADS)
Podmore, Hugh; Scott, Alan; Lee, Regina
2018-02-01
We demonstrate a novel compressive sensing Fourier-transform spectrometer (FTS) for snapshot Raman spectroscopy in a compact format. The on-chip FTS consists of a set of planar-waveguide Mach-Zehnder interferometers (MZIs) arrayed on a photonic chip, effecting a discrete Fourier-transform of the input spectrum. Incoherence between the sampling domain (time) and the spectral domain (frequency) permits compressive sensing retrieval using undersampled interferograms for sparse spectra such as Raman emission. In our fabricated device we retain our chosen bandwidth and resolution while reducing the number of MZIs, i.e., the size of the interferogram, to 1/4 of critical sampling. This architecture simultaneously reduces chip footprint and concentrates the interferogram in fewer pixels to improve the signal-to-noise ratio. Our device collects interferogram samples simultaneously; therefore, a time-gated detector may be used to separate Raman peaks from sample fluorescence. A challenge for FTS waveguide spectrometers is to achieve multi-aperture high-throughput broadband coupling to a large number of single-mode waveguides. A multi-aperture design allows one to increase the bandwidth and spectral resolution without sacrificing optical throughput. In this device, multi-aperture coupling is achieved using an array of microlenses bonded to the surface of the chip, and aligned with a grid of vertically illuminated waveguide apertures. The microlens array accepts a collimated beam with near 100% fill-factor, and the resulting spherical wavefronts are coupled into the single-mode waveguides using 45° mirrors etched into the waveguide layer via focused ion-beam (FIB). The interferogram from the waveguide outputs is imaged using a CCD, and inverted via l1-norm minimization to correctly retrieve a sparse input spectrum.
Sparsity based target detection for compressive spectral imagery
NASA Astrophysics Data System (ADS)
Boada, David Alberto; Arguello Fuentes, Henry
2016-09-01
Hyperspectral imagery provides significant information about the spectral characteristics of objects and materials present in a scene. It enables object and feature detection, classification, or identification based on the acquired spectral characteristics. However, it relies on sophisticated acquisition and data processing systems able to acquire, process, store, and transmit hundreds or thousands of image bands from a given area of interest, which demands enormous computational resources in terms of storage, computation, and I/O throughput. Specialized optical architectures have been developed for the compressed acquisition of spectral images using a reduced set of coded measurements, in contrast to traditional architectures that need a complete set of measurements of the data cube for image acquisition, thereby addressing the storage and acquisition limitations. Despite this improvement, if any processing is desired, the image has to be reconstructed by an inverse algorithm in order to be processed, which is also an expensive task. In this paper, a sparsity-based algorithm for target detection in compressed spectral images is presented. Specifically, the target detection model adapts a sparsity-based target detector to work in a compressive domain, modifying the sparse representation basis in the compressive sensing problem by means of over-complete training dictionaries and a wavelet basis representation. Simulations show that the presented method can achieve even better detection results than the state-of-the-art methods.
Informational analysis for compressive sampling in radar imaging.
Zhang, Jingxiong; Yang, Ke
2015-03-24
Compressive sampling or compressed sensing (CS) works on the assumption of the sparsity or compressibility of the underlying signal, relies on the trans-informational capability of the measurement matrix employed and the resultant measurements, and operates with optimization-based algorithms for signal reconstruction. It is thus able to complete data compression while acquiring data, leading to sub-Nyquist sampling strategies that promote efficiency in data acquisition while ensuring certain accuracy criteria. Information theory provides a framework complementary to classic CS theory for analyzing information mechanisms and for determining the necessary number of measurements in a CS environment, such as CS-radar, a radar sensor conceptualized or designed with CS principles and techniques. Despite increasing awareness of information-theoretic perspectives on CS-radar, reported research has been rare. This paper seeks to bridge the gap in the interdisciplinary area of CS, radar and information theory by analyzing information flows in CS-radar from sparse scenes to measurements and determining sub-Nyquist sampling rates necessary for scene reconstruction within certain distortion thresholds, given differing scene sparsity and average per-sample signal-to-noise ratios (SNRs). Simulated studies were performed to complement and validate the information-theoretic analysis. The combined strategy proposed in this paper is valuable for information-theoretically oriented CS-radar system analysis and performance evaluation.
Quasi-static and ratcheting properties of trabecular bone under uniaxial and cyclic compression.
Gao, Li-Lan; Wei, Chao-Lei; Zhang, Chun-Qiu; Gao, Hong; Yang, Nan; Dong, Li-Min
2017-08-01
The quasi-static and ratcheting properties of trabecular bone were investigated by experiments and theoretical predictions. Creep tests with different stress levels were completed, and it is found that both the creep strain and creep compliance increase rapidly at first and then increase slowly as creep time passes. With increase of compressive stress, the creep strain increases and the creep compliance decreases. The uniaxial compressive tests show that the applied stress rate has a remarkable influence on the compressive behaviors of trabecular bone. The Young's modulus of trabecular bone increases with increase of stress rate. The stress-strain hysteresis loops of trabecular bone under cyclic load change from sparse to dense with increase of the number of cycles, which agrees with the change trend of ratcheting strain. The ratcheting strain rate rapidly decreases at first, and then exhibits a relatively stable and small value after 50 cycles. Both the ratcheting strain and ratcheting strain rate increase with increase of stress amplitude or with decrease of stress rate. The creep model and the nonlinear viscoelastic constitutive model of trabecular bone were proposed and used to predict its creep property and rate-dependent compressive property. The results show that there are good agreements between the experimental data and predictions. Copyright © 2017 Elsevier B.V. All rights reserved.
Wang, Xiaogang; Chen, Wen; Chen, Xudong
2015-03-09
In this paper, we develop a new optical information authentication system based on compressed double-random-phase-encoded images and quick-response (QR) codes, where the parameters of the optical lightwave are used as keys for optical decryption and the QR code is a key for verification. An input image attached with a QR code is first optically encoded in a simplified double random phase encoding (DRPE) scheme without using an interferometric setup. From the single encoded intensity pattern recorded by a CCD camera, a compressed double-random-phase-encoded image, i.e., the sparse phase distribution used for optical decryption, is generated by using an iterative phase retrieval technique with the QR code. We compare this technique to the other two methods proposed in the literature, i.e., Fresnel domain information authentication based on the classical DRPE with holographic technique and information authentication based on DRPE and a phase retrieval algorithm. Simulation results show that QR codes are effective in improving the security and data sparsity of the optical information encryption and authentication system.
Compressive Spectral Method for the Simulation of the Nonlinear Gravity Waves
Bayındır, Cihan
2016-01-01
In this paper an approach for decreasing the computational effort required for spectral simulations of fully nonlinear ocean waves is introduced. The proposed approach utilizes the compressive sampling algorithm and depends on the idea of using a smaller number of spectral components compared to the classical spectral method. After performing the time integration with a smaller number of spectral components and using the compressive sampling technique, it is shown that the ocean wave field can be reconstructed with a significantly better efficiency compared to the classical spectral method. For the sparse ocean wave model in the frequency domain, fully nonlinear ocean waves with a JONSWAP spectrum are considered. By implementation of a high-order spectral method it is shown that the proposed methodology can simulate linear and fully nonlinear ocean waves with negligible difference in accuracy and with great efficiency, reducing the computation time significantly, especially for large time evolutions. PMID:26911357
Amin, O M; Heckmann, R A
1991-04-01
Polymorphus spindlatus n. sp. is described from the black-crowned night heron, Nycticorax nycticorax, in Lake Titicaca, Peru. It is distinguished from all 27 known species of the subgenus Polymorphus by its spindle-shaped proboscis and its trunk shape, the anterior 2/3 of which is ovoid, tapering into a tubular posterior end. It resembles Polymorphus brevis (=Arhythmorhynchus brevis), which is, however, longer and considerably more slender, and has smaller and more numerous proboscis hooks per row and smaller eggs. It is separated also from Polymorphus swartzi, Polymorphus striatus, Polymorphus contortus, and Polymorphus cincli by its proboscis armature (usually 18 longitudinal rows of 11-13 hooks each), among other characters. Histopathological sections of host tissue show well defined localized damage including hemorrhaging with subsequent phagocyte cell migration (granular tissue). The lumen of the host intestine is obstructed and villi show compression. The proboscis of P. spindlatus extends through the intestinal mucosa and submucosa, displacing the smooth muscle layers of the muscularis externa. Fibrosis also was observed.
An efficient implementation of a high-order filter for a cubed-sphere spectral element model
NASA Astrophysics Data System (ADS)
Kang, Hyun-Gyu; Cheong, Hyeong-Bin
2017-03-01
A parallel-scalable, isotropic, scale-selective spatial filter was developed for the cubed-sphere spectral element model on the sphere. The filter equation is a high-order elliptic (Helmholtz) equation based on the spherical Laplacian operator, which is transformed into cubed-sphere local coordinates. The Laplacian operator is discretized on the computational domain, i.e., on each cell, by the spectral element method with Gauss-Lobatto Lagrange interpolating polynomials (GLLIPs) as the orthogonal basis functions. On the global domain, the discrete filter equation yielded a linear system represented by a highly sparse matrix. The density of this matrix increases quadratically (linearly) with the order of GLLIP (order of the filter), and the linear system is solved in only O(Ng) operations, where Ng is the total number of grid points. The solution, obtained by a row reduction method, demonstrated the typical accuracy and convergence rate of the cubed-sphere spectral element method. To achieve computational efficiency on parallel computers, the linear system was treated by an inverse matrix method (a sparse matrix-vector multiplication). The density of the inverse matrix was lowered to only a few times that of the original sparse matrix without degrading the accuracy of the solution. For better computational efficiency, a local-domain high-order filter was introduced: the filter equation is applied to multiple cells, and then only the central cell is used to reconstruct the filtered field. The parallel efficiency of applying the inverse matrix method to the global- and local-domain filter was evaluated by the scalability on a distributed-memory parallel computer. The scale-selective performance of the filter was demonstrated on Earth topography. The usefulness of the filter as a hyper-viscosity for the vorticity equation was also demonstrated.
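A one-dimensional analogue of such a high-order elliptic filter illustrates the sparse-linear-system structure. The grid size, filter order (a fourth-derivative operator), and coefficient below are illustrative choices, not the settings of the cubed-sphere model.

```python
# 1-D analogue of a high-order Helmholtz-type filter: solve
# (I + c * d^4/dx^4) f = g on a periodic grid. The discrete operator is
# sparse, and small-scale modes are damped while large scales pass through.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, c = 128, 10.0
# Second-difference operator with periodic boundaries (unit grid spacing).
D2 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)).tolil()
D2[0, -1] = D2[-1, 0] = 1.0
D2 = D2.tocsc()
# Fourth-order filter matrix: identity plus c times the discrete 4th derivative.
A = sp.identity(n, format="csc") + c * (D2 @ D2)
x = np.arange(n)
# Input: one large-scale mode (k = 1) plus one small-scale mode (k = 30).
g = np.sin(2 * np.pi * x / n) + 0.5 * np.sin(2 * np.pi * 30 * x / n)
f = spla.spsolve(A, g)   # sparse direct solve of the filter equation
```

Because the periodic matrix is circulant, each Fourier mode k of `g` is scaled by 1/(1 + c*lambda_k^2) with lambda_k = -4 sin^2(pi k / n), so the k = 30 component is strongly attenuated while k = 1 is nearly untouched, which is the scale-selective behavior the abstract describes.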
Determining biosonar images using sparse representations.
Fontaine, Bertrand; Peremans, Herbert
2009-05-01
Echolocating bats are thought to be able to create an image of their environment by emitting pulses and analyzing the reflected echoes. In this paper, the theory of sparse representations and its more recent further development into compressed sensing are applied to this biosonar image formation task. Considering the target image representation as sparse allows formulation of this inverse problem as a convex optimization problem for which well defined and efficient solution methods have been established. The resulting technique, referred to as L1-minimization, is applied to simulated data to analyze its performance relative to delay accuracy and delay resolution experiments. This method performs comparably to the coherent receiver for the delay accuracy experiments, is quite robust to noise, and can reconstruct complex target impulse responses as generated by many closely spaced reflectors with different reflection strengths. This same technique, in addition to reconstructing biosonar target images, can be used to simultaneously localize these complex targets by interpreting location cues induced by the bat's head related transfer function. Finally, a tentative explanation is proposed for specific bat behavioral experiments in terms of the properties of target images as reconstructed by the L1-minimization method.
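The L1-minimization step can be illustrated on a toy echo model: the dictionary columns are delayed copies of the emitted pulse, and a sparse target impulse response is recovered by iterative soft thresholding (ISTA), one standard solver for this convex problem; the paper's exact solver, pulse shape, and parameters may differ, so everything below is an illustrative sketch.

```python
# Recover a sparse target impulse response h from an echo y = A @ h,
# where column d of A is the emitted pulse delayed by d samples, by
# minimizing 0.5*||A h - y||^2 + lam*||h||_1 with ISTA.
import numpy as np

def ista(A, y, lam=0.01, n_iter=2000):
    """Iterative soft-thresholding for the L1-regularized least-squares problem."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    h = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = h - (A.T @ (A @ h - y)) / L      # gradient step on the data term
        h = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return h

# Toy scene: two closely spaced reflectors with different strengths.
pulse = np.array([1.0, -0.5, 0.25])
n = 40
A = np.zeros((n + len(pulse) - 1, n))
for d in range(n):                            # column d = pulse delayed by d
    A[d:d + len(pulse), d] = pulse
h_true = np.zeros(n)
h_true[10], h_true[13] = 1.0, 0.6
y = A @ h_true
h_hat = ista(A, y)
```

With a small regularization weight the recovered `h_hat` places its two significant entries at the true delays, matching the delay-resolution behavior the abstract attributes to the L1-minimization receiver.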
Task-based data-acquisition optimization for sparse image reconstruction systems
NASA Astrophysics Data System (ADS)
Chen, Yujia; Lou, Yang; Kupinski, Matthew A.; Anastasio, Mark A.
2017-03-01
Conventional wisdom dictates that imaging hardware should be optimized by use of an ideal observer (IO) that exploits full statistical knowledge of the class of objects to be imaged, without consideration of the reconstruction method to be employed. However, accurate and tractable models of the complete object statistics are often difficult to determine in practice. Moreover, in imaging systems that employ compressive sensing concepts, imaging hardware and (sparse) image reconstruction are innately coupled technologies. We have previously proposed a sparsity-driven ideal observer (SDIO) that can be employed to optimize hardware by use of a stochastic object model that describes object sparsity. The SDIO and sparse reconstruction method can therefore be "matched" in the sense that they both utilize the same statistical information regarding the class of objects to be imaged. To efficiently compute SDIO performance, the posterior distribution is estimated by use of computational tools developed recently for variational Bayesian inference. Subsequently, the SDIO test statistic can be computed semi-analytically. The advantages of employing the SDIO instead of a Hotelling observer are systematically demonstrated in case studies in which magnetic resonance imaging (MRI) data acquisition schemes are optimized for signal detection tasks.
Random On-Board Pixel Sampling (ROPS) X-Ray Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhehui; Iaroshenko, O.; Li, S.
Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques, is expected to facilitate the development and applications of high-speed X-ray camera technology.
An embedded system for face classification in infrared video using sparse representation
NASA Astrophysics Data System (ADS)
Saavedra M., Antonio; Pezoa, Jorge E.; Zarkesh-Ha, Payman; Figueroa, Miguel
2017-09-01
We propose a platform for robust face recognition in Infrared (IR) images using Compressive Sensing (CS). In line with CS theory, the classification problem is solved using a sparse representation framework, where test images are modeled by means of a linear combination of the training set. Because the training set constitutes an over-complete dictionary, we identify new images by finding their sparsest representation based on the training set, using standard l1-minimization algorithms. Unlike conventional face-recognition algorithms, we perform feature extraction using random projections with a precomputed binary matrix, as proposed in the CS literature. This random sampling reduces the effects of noise and occlusions such as facial hair, eyeglasses, and disguises, which are notoriously challenging in IR images. Thus, the performance of our framework is robust to these noise and occlusion factors, achieving an average accuracy of approximately 90% when the UCHThermalFace database is used for training and testing purposes. We implemented our framework on a high-performance embedded digital system, where the computation of the sparse representation of IR images was performed by dedicated hardware using a deeply pipelined architecture on a Field-Programmable Gate Array (FPGA).
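The pipeline described above can be sketched in simplified form: features are random binary projections of the image, and a test sample is assigned to the class whose training images best reconstruct it. The paper finds the sparsest representation by l1-minimization; to keep the sketch short we compare per-class least-squares residuals instead, so this is a structural illustration with made-up dimensions, not the authors' classifier.

```python
# Random-projection feature extraction plus residual-based classification,
# a simplified stand-in for sparse-representation classification (SRC).
import numpy as np

def make_binary_projection(m, n, seed=0):
    """Precomputed random +/-1 measurement matrix (m features from n pixels)."""
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

def classify(Phi, train, labels, x):
    """Assign x to the class whose projected training images fit it best."""
    y = Phi @ x
    best_label, best_res = None, np.inf
    for c in set(labels):
        cols = [i for i, lbl in enumerate(labels) if lbl == c]
        Dc = Phi @ train[:, cols]                       # projected class dictionary
        coef, *_ = np.linalg.lstsq(Dc, y, rcond=None)   # best fit within class c
        res = np.linalg.norm(y - Dc @ coef)             # class reconstruction residual
        if res < best_res:
            best_label, best_res = c, res
    return best_label
```

Replacing the per-class least-squares step with a single l1-minimization over the full training dictionary, and then measuring per-class residuals of that sparse code, recovers the SRC scheme the abstract refers to.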
NASA Astrophysics Data System (ADS)
Zhao, Jin; Han-Ming, Zhang; Bin, Yan; Lei, Li; Lin-Yuan, Wang; Ai-Long, Cai
2016-03-01
Sparse-view x-ray computed tomography (CT) imaging is an interesting topic in the CT field and can efficiently decrease radiation dose. Compared with spatial reconstruction, a Fourier-based algorithm has advantages in reconstruction speed and memory usage. A novel Fourier-based iterative reconstruction technique that utilizes the non-uniform fast Fourier transform (NUFFT) is presented in this work, along with advanced total variation (TV) regularization, for fan-beam sparse-view CT. The proposition of a selective matrix contributes to improved reconstruction quality. The new method employs the NUFFT and its adjoint to iterate back and forth between the Fourier and image space. The performance of the proposed algorithm is demonstrated through a series of digital simulations and experimental phantom studies. Results of the proposed algorithm are compared with those of existing TV-regularized techniques based on the compressed sensing method, as well as the basic algebraic reconstruction technique. Compared with the existing TV-regularized techniques, the proposed Fourier-based technique significantly improves the convergence rate and reduces memory allocation. Project supported by the National High Technology Research and Development Program of China (Grant No. 2012AA011603) and the National Natural Science Foundation of China (Grant No. 61372172).
Spatial Compressive Sensing for Strain Data Reconstruction from Sparse Sensors
2014-10-01
optical fiber Bragg grating (FBG) sensors embedded in the plate. For the sake of simplicity, we assume that the FBGs are embedded in the radial direction, as shown by the yellow lines in Fig. 10. The yellow lines are the direction along which strain is being measured. We considered FBGs here; however, strain gages emplaced along these lines can also be envisioned. FBGs are strain-measuring sensors that use the principle of low coherence
Carbon Sequestration at United States Marine Corps Installations West
2014-05-20
22202-4302. Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to any penalty for failing to...Falge et al., 2002a, b; Law et al., 2002). This, in turn, is perhaps due to the perception that sparse vegetation cover and seemingly bare soil...feasibility of carbon capture and storage (CCS) is divided into three components or steps: 1) CO2 capture and compression, 2) transportation of CO2 with
Household wireless electroencephalogram hat
NASA Astrophysics Data System (ADS)
Szu, Harold; Hsu, Charles; Moon, Gyu; Yamakawa, Takeshi; Tran, Binh
2012-06-01
We applied Compressive Sensing to design an affordable, convenient Brain Machine Interface (BMI) that measures high-spatial-density Electroencephalogram (EEG) brainwaves and processes them in real time on a Smartphone. It is useful for therapeutic and mental health monitoring, learning disability biofeedback, handicap interfaces, and war gaming. Its specification is adequate for a biomedical laboratory, without the cables hanging over the head and tethered to a fixed computer terminal. We improved the intrinsic signal-to-noise ratio (SNR) by using a non-uniform placement of the measuring electrodes to exploit the proximity of the measurement to the source. We computed a spatiotemporal average of the larger-magnitude EEG data centers within 0.3 seconds of tethered laboratory data, using fuzzy logic, and computed the underlying brainwave sources by Independent Component Analysis (ICA). Consequently, we can overlay them together by a non-uniform electrode distribution, enhancing the signal-to-noise ratio and therefore the degree of sparseness after thresholding. We overcame the conflicting requirements between a high spatial electrode density and precise temporal resolution (beyond the Event Related Potential (ERP) P300 brainwave at 0.3 s), and the Smartphone wireless bottleneck of spatiotemporal throughput rate. Our main contribution in this paper is the quality and speed of an iterative compressed image recovery algorithm based on a Block Sparse Code (Baraniuk et al., IEEE/IT 2008). As a result, we achieved real-time wireless dynamic measurement of EEG brainwaves, matching well with traditionally tethered high-density EEG.
Detonation duct gas generator demonstration program
NASA Technical Reports Server (NTRS)
Wortman, Andrew; Brinlee, Gayl A.; Othmer, Peter; Whelan, Michael A.
1991-01-01
The feasibility of generating detonation waves moving periodically across a high speed channel flow is experimentally demonstrated. Such waves are essential to the concept of gas dynamic compression, with the objective of reducing conventional compressor requirements and increasing the engine thermodynamic efficiency through isochoric energy addition. By generating transient transverse waves, rather than standing waves, shock wave losses are reduced by an order of magnitude. The ultimate objective is to use such detonation ducts downstream of a low pressure gas turbine compressor to produce a high overall pressure ratio thermodynamic cycle. A 4 foot long, 1 inch x 12 inch cross-section detonation duct was operated in a blow-down mode using compressed air reservoirs. Liquid or vapor propane was injected through injectors or solenoid valves located in the plenum or the duct itself. Detonation waves were generated when the mixture was ignited by a row of spark plugs in the duct wall. Problems with fuel injection and mixing limited the air speeds to about Mach 0.5 and the frequencies to below 10 Hz, with measured pressure ratios of about 5 to 6. The feasibility of gas dynamic compression was demonstrated and the critical problem areas were identified.
Fluid-structure finite-element vibrational analysis
NASA Technical Reports Server (NTRS)
Feng, G. C.; Kiefling, L.
1974-01-01
A fluid finite element has been developed for a quasi-compressible fluid. Both kinetic and potential energy are expressed as functions of nodal displacements. Thus, the formulation is similar to that used for structural elements, with the only differences being that the fluid can possess gravitational potential, and the constitutive equations for fluid contain no shear coefficients. Using this approach, structural and fluid elements can be used interchangeably in existing efficient sparse-matrix structural computer programs such as SPAR. The theoretical development of the element formulations and the relationships of the local and global coordinates are shown. Solutions of fluid slosh, liquid compressibility, and coupled fluid-shell oscillation problems which were completed using a temporary digital computer program are shown. The frequency correlation of the solutions with classical theory is excellent.
Completing sparse and disconnected protein-protein network by deep learning.
Huang, Lei; Liao, Li; Wu, Cathy H
2018-03-22
Protein-protein interaction (PPI) prediction remains a central task in systems biology to achieve a better and holistic understanding of cellular and intracellular processes. Recently, an increasing number of computational methods have shifted from pair-wise prediction to network-level prediction. Many of the existing network-level methods predict PPIs under the assumption that the training network should be connected. However, this assumption greatly limits the prediction power and the application area, because the current gold-standard PPI networks are usually very sparse and disconnected. Therefore, how to effectively predict PPIs based on a training network that is sparse and disconnected remains a challenge. In this work, we developed a novel PPI prediction method based on a deep learning neural network and the regularized Laplacian kernel. We use a neural network with an autoencoder-like architecture to implicitly simulate the evolutionary processes of a PPI network. Neurons of the output layer correspond to proteins and are labeled with values (1 for interaction, 0 otherwise) from the adjacency matrix of a sparse disconnected training PPI network. Unlike an autoencoder, neurons at the input layer are given all-zero input, reflecting an assumption of no a priori knowledge about PPIs, and hidden layers of smaller sizes mimic the ancient interactome at different times during evolution. After the training step, an evolved PPI network, whose rows are outputs of the neural network, can be obtained. We then predict PPIs by applying the regularized Laplacian kernel to the transition matrix built upon the evolved PPI network. The results from cross-validation experiments show that the PPI prediction accuracies for yeast data and human data, measured as AUC, are increased by up to 8.4% and 14.9%, respectively, compared to the baseline.
Moreover, the evolved PPI network can also help us leverage complementary information from the disconnected training network and multiple heterogeneous data sources. Tested by the yeast data with six heterogeneous feature kernels, the results show our method can further improve the prediction performance by up to 2%, which is very close to an upper bound that is obtained by an Approximate Bayesian Computation based sampling method. The proposed evolution deep neural network, coupled with regularized Laplacian kernel, is an effective tool in completing sparse and disconnected PPI networks and in facilitating integration of heterogeneous data sources.
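The regularized Laplacian kernel used in the final prediction step has a compact closed form, K = (I + αL)⁻¹ with L = D − A the graph Laplacian, which scores node pairs by paths of all lengths with longer paths damped. A minimal sketch on a toy adjacency matrix (the 4-node graph and the value of α are illustrative assumptions, not from the paper):

```python
import numpy as np

def regularized_laplacian_kernel(A, alpha=0.1):
    """K = (I + alpha * L)^(-1), with L = D - A the graph Laplacian.
    K[i, j] aggregates connectivity between i and j over walks of all
    lengths, damping longer walks by powers of alpha."""
    L = np.diag(A.sum(axis=1)) - A
    return np.linalg.inv(np.eye(len(A)) + alpha * L)

# Toy "PPI" adjacency: a path 0-1-2 plus an isolated node 3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0]], dtype=float)
K = regularized_laplacian_kernel(A)
# The indirect pair (0, 2) scores higher than the disconnected pair (0, 3).
print(K[0, 2] > K[0, 3])  # True
```

Because the kernel is block diagonal over connected components, pairs in different components score zero, which is exactly why the paper first "completes" the disconnected network before applying the kernel.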
Wu, Xiaolin; Zhang, Xiangjun; Wang, Xiaohan
2009-03-01
Recently, many researchers have started to challenge a long-standing practice of digital photography, namely oversampling followed by compression, and to pursue more intelligent sparse sampling techniques. In this paper, we propose a practical approach of uniform downsampling in image space that is nevertheless made adaptive by spatially varying, directional low-pass prefiltering. The resulting downsampled prefiltered image remains a conventional square sample grid and, thus, can be compressed and transmitted without any change to current image coding standards and systems. The decoder first decompresses the low-resolution image and then upconverts it to the original resolution in a constrained least squares restoration process, using a 2-D piecewise autoregressive model and the knowledge of the directional low-pass prefiltering. The proposed compression approach of collaborative adaptive down-sampling and upconversion (CADU) outperforms JPEG 2000 in PSNR at low to medium bit rates and achieves superior visual quality as well. The superior low bit-rate performance of the CADU approach suggests that oversampling not only wastes hardware resources and energy but can also be counterproductive to image quality given a tight bit budget.
Wang, Gang; Zhao, Zhikai; Ning, Yongjie
2018-05-28
With the application of the coal mine Internet of Things (IoT), mobile measurement devices such as intelligent mine lamps have greatly increased the amount of moving measurement data. How to transmit these large amounts of mobile measurement data effectively has become an urgent problem. This paper presents a compressed sensing algorithm for the large amount of coal mine IoT moving measurement data based on a multi-hop network and total variation. Taking the gas data in the mobile measurement data as an example, two network models for the transmission of gas data flow, namely single-hop and multi-hop transmission modes, are investigated in depth, and a gas data compressed sensing collection model is built on a multi-hop network. To utilize the sparse characteristics of gas data, the concept of total variation is introduced and a high-efficiency gas data compression and reconstruction method, Total Variation Sparsity based on Multi-Hop (TVS-MH), is proposed. According to the simulation results, by using the proposed method the moving measurement data flow from an underground distributed mobile network can be acquired and transmitted efficiently.
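The premise behind the total-variation approach, that slowly varying gas readings form a piecewise-constant profile whose first-order difference is sparse, can be illustrated directly. The profile and the difference operator below are toy assumptions, not the paper's data or TVS-MH algorithm:

```python
import numpy as np

# Toy piecewise-constant "gas concentration" profile along 50 nodes:
# three plateaus, so only two jumps between neighboring readings.
n = 50
x_true = np.concatenate([np.full(20, 1.0), np.full(15, 3.0), np.full(15, 2.0)])

# First-order difference operator D: (D @ x)[i] = x[i] - x[i-1].
D = np.eye(n) - np.eye(n, k=-1)
D[0] = 0.0                       # no predecessor for the first node

tv = D @ x_true                  # the total-variation vector of the profile
print(int(np.count_nonzero(np.round(tv, 6))))  # 2: the profile is 2-sparse under D
```

It is this sparsity under the difference operator, rather than sparsity of the raw readings, that compressed sensing exploits when reconstructing the gas data flow from few measurements.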
Filtered gradient reconstruction algorithm for compressive spectral imaging
NASA Astrophysics Data System (ADS)
Mejia, Yuri; Arguello, Henry
2017-04-01
Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. Nevertheless, they are subject to physical constraints, and their structure rarely follows a dense matrix distribution, as in the case of the matrix related to compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function including a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. A gradient-based CSI reconstruction algorithm is proposed that introduces a filtering step in each iteration of a conventional CSI reconstruction algorithm, yielding improved image quality. Motivated by the structure of the CSI matrix Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual Φᵀy, where y is the compressive measurement vector. We show that the filter-based algorithm converges to better quality results than the unfiltered version. Simulation results highlight the relative performance gain over existing iterative algorithms.
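The filter-then-iterate idea can be sketched for a generic 1-D inverse problem. Everything below is an illustrative stand-in, not the paper's method: a random matrix replaces the structured CSI operator Φ, plain Landweber-style gradient steps replace the paper's reconstruction algorithm, and a three-tap low-pass kernel plays the role of the per-iteration filtering step.

```python
import numpy as np

rng = np.random.default_rng(1)

def filtered_gradient_descent(Phi, y, iters=200, kernel=(0.25, 0.5, 0.25)):
    """Gradient iterations for min ||Phi f - y||^2 with a low-pass
    filtering step applied to the iterate after every gradient step."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # safe step size
    f = np.zeros(Phi.shape[1])
    for _ in range(iters):
        f = f - step * (Phi.T @ (Phi @ f - y))   # gradient step on data term
        f = np.convolve(f, kernel, mode="same")  # filtering step each iteration
    return f

# Smooth ground-truth signal and underdetermined random measurements.
n, m = 64, 40
t = np.linspace(0, 1, n)
f_true = np.sin(2 * np.pi * t)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = Phi @ f_true

f_hat = filtered_gradient_descent(Phi, y)
# The filtered iterate recovers much of the smooth signal.
print(np.linalg.norm(f_hat - f_true) < np.linalg.norm(f_true))  # True
```

The filter acts as a cheap prior: it steers the iterate toward smooth solutions of the underdetermined system, which is the same role the paper's filtering step plays for spectral images.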
NASA Astrophysics Data System (ADS)
Ozolins, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley
2014-03-01
We will describe a general formalism for obtaining spatially localized (``sparse'') solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger's equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support (``compressed modes''). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size. In addition, we introduce an L1 regularized variational framework for developing a spatially localized basis, compressed plane waves (CPWs), that spans the eigenspace of a differential operator, for instance, the Laplace operator. Our approach generalizes the concept of plane waves to an orthogonal real-space basis with multiresolution capabilities. Supported by NSF Award DMR-1106024 (VO), DOE Contract No. DE-FG02-05ER25710 (RC) and ONR Grant No. N00014-11-1-719 (SO).
RMP: Reduced-set matching pursuit approach for efficient compressed sensing signal reconstruction.
Abdel-Sayed, Michael M; Khattab, Ahmed; Abu-Elyazeed, Mohamed F
2016-11-01
Compressed sensing enables the acquisition of sparse signals at a rate that is much lower than the Nyquist rate. Compressed sensing initially adopted l1 minimization for signal reconstruction, which is computationally expensive. Several greedy recovery algorithms have recently been proposed for signal reconstruction at a lower computational complexity than the optimal l1 minimization, while maintaining good reconstruction accuracy. In this paper, the Reduced-set Matching Pursuit (RMP) greedy recovery algorithm is proposed for compressed sensing. Unlike existing approaches, which select either too many or too few values per iteration, RMP aims at selecting the most sufficient number of correlation values per iteration, which improves both the reconstruction time and the error. Furthermore, RMP prunes the estimated signal and hence excludes incorrectly selected values. The RMP algorithm achieves a higher reconstruction accuracy at significantly lower computational complexity than existing greedy recovery algorithms. It is even superior to l1 minimization in terms of the normalized time-error product, a new metric introduced to measure the trade-off between reconstruction time and error. RMP's superior performance is illustrated with both noiseless and noisy samples.
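For contrast with RMP's reduced-set selection, the classic one-value-per-iteration greedy baseline, Orthogonal Matching Pursuit, fits in a few lines. This is the textbook algorithm that RMP generalizes, not the RMP code itself; the problem sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def omp(A, y, k):
    """Orthogonal Matching Pursuit: pick the single most correlated atom
    per iteration, then re-fit all selected atoms by least squares.
    RMP instead selects a reduced *set* of correlations per iteration
    and prunes wrongly selected values."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

# Noiseless toy problem: 3-sparse signal, 40 random measurements.
m, n, k = 40, 80, 3
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 20, 63]] = [1.5, -2.0, 1.0]
x_hat = omp(A, A @ x_true, k)
print(np.allclose(x_hat, x_true, atol=1e-6))  # True
```

Selecting one atom per iteration makes OMP slow for large supports and brittle when a wrong atom is picked; RMP's set selection and pruning address exactly these two weaknesses.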
Sparse dynamical Boltzmann machine for reconstructing complex networks with binary dynamics
NASA Astrophysics Data System (ADS)
Chen, Yu-Zhong; Lai, Ying-Cheng
2018-03-01
Revealing the structure and dynamics of complex networked systems from observed data is a problem of current interest. Is it possible to develop a completely data-driven framework to decipher the network structure and different types of dynamical processes on complex networks? We develop a model named sparse dynamical Boltzmann machine (SDBM) as a structural estimator for complex networks that host binary dynamical processes. The SDBM attains its topology according to that of the original system and is capable of simulating the original binary dynamical process. We develop a fully automated method based on compressive sensing and a clustering algorithm to construct the SDBM. We demonstrate, for a variety of representative dynamical processes on model and real world complex networks, that the equivalent SDBM can recover the network structure of the original system and simulates its dynamical behavior with high precision.
Fast Acquisition and Reconstruction of Optical Coherence Tomography Images via Sparse Representation
Li, Shutao; McNabb, Ryan P.; Nie, Qing; Kuo, Anthony N.; Toth, Cynthia A.; Izatt, Joseph A.; Farsiu, Sina
2014-01-01
In this paper, we present a novel technique, based on compressive sensing principles, for reconstruction and enhancement of multi-dimensional image data. Our method is a major improvement and generalization of the multi-scale sparsity based tomographic denoising (MSBTD) algorithm we recently introduced for reducing speckle noise. Our new technique exhibits several advantages over MSBTD, including its capability to simultaneously reduce noise and interpolate missing data. Unlike MSBTD, our new method does not require an a priori high-quality image from the target imaging subject and thus offers the potential to shorten clinical imaging sessions. This novel image restoration method, which we termed sparsity based simultaneous denoising and interpolation (SBSDI), utilizes sparse representation dictionaries constructed from previously collected datasets. We tested the SBSDI algorithm on retinal spectral domain optical coherence tomography images captured in the clinic. Experiments showed that the SBSDI algorithm qualitatively and quantitatively outperforms other state-of-the-art methods. PMID:23846467
Newmark-Beta-FDTD method for super-resolution analysis of time reversal waves
NASA Astrophysics Data System (ADS)
Shi, Sheng-Bing; Shao, Wei; Ma, Jing; Jin, Congjun; Wang, Xiao-Hua
2017-09-01
In this work, a new unconditionally stable finite-difference time-domain (FDTD) method with the split-field perfectly matched layer (PML) is proposed for the analysis of time reversal (TR) waves. The proposed method is very suitable for multiscale problems involving microstructures. The spatial and temporal derivatives in this method are discretized by the central difference technique and the Newmark-Beta algorithm, respectively, and the derivation results in the calculation of a banded sparse matrix equation. Since the coefficient matrix remains unchanged during the whole simulation process, the lower-upper (LU) decomposition of the matrix needs to be performed only once at the beginning of the calculation. Moreover, the reverse Cuthill-McKee (RCM) technique, an effective preprocessing technique for bandwidth compression of sparse matrices, is used to improve computational efficiency. The super-resolution focusing of TR wave propagation in two- and three-dimensional spaces is included to validate the accuracy and efficiency of the proposed method.
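The RCM preprocessing step mentioned above is available off the shelf. A minimal sketch using SciPy's implementation on a toy symmetric sparsity pattern (the 8x8 matrix is an illustrative assumption; a real FDTD coefficient matrix would be far larger):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(M):
    """Maximum |i - j| over the nonzero entries of a dense matrix."""
    i, j = np.nonzero(M)
    return int(np.max(np.abs(i - j))) if len(i) else 0

# A symmetric sparse matrix whose nonzeros sit far from the diagonal.
n = 8
M = np.eye(n)
for a, b in [(0, 7), (1, 6), (2, 5), (0, 3)]:
    M[a, b] = M[b, a] = 1.0

# RCM returns a permutation that clusters nonzeros near the diagonal.
perm = reverse_cuthill_mckee(csr_matrix(M), symmetric_mode=True)
M_rcm = M[np.ix_(perm, perm)]
print(bandwidth(M_rcm) < bandwidth(M))  # True
```

A smaller bandwidth means the one-time LU factorization produces far less fill-in, which is precisely why RCM pays off when the same banded matrix is factored once and reused for every time step.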
Sparse magnetic resonance imaging reconstruction using the bregman iteration
NASA Astrophysics Data System (ADS)
Lee, Dong-Hoon; Hong, Cheol-Pyo; Lee, Man-Woo
2013-01-01
Magnetic resonance imaging (MRI) reconstruction needs many samples that are sequentially acquired using phase-encoding gradients in an MRI system. The number of samples is directly connected to the scan time of the MRI system, which is long. Therefore, many researchers have studied ways to reduce the scan time, especially compressed sensing (CS), which enables the reconstruction of sparse images from fewer samples when k-space is not fully sampled. Recently, an iterative technique based on the Bregman method was developed for denoising. The Bregman iteration method improves on total variation (TV) regularization by gradually recovering the fine-scale structures that are usually lost in TV regularization. In this study, we investigated sparse-sampling image reconstruction using the Bregman iteration for a low-field MRI system to improve its temporal resolution and to validate its usefulness. The images were obtained with a 0.32 T MRI scanner (Magfinder II, SCIMEDIX, Korea) with a phantom and an in-vivo human brain in a head coil. We applied random k-space sampling, and we determined the sampling ratios by using half of the fully sampled k-space. The Bregman iteration was used to generate the final images based on the reduced data. We also calculated root-mean-square error (RMSE) values from error images that were obtained using various numbers of Bregman iterations. Our reconstructed images using the Bregman iteration for sparsely sampled images showed good results compared with the original images. Moreover, the RMSE values showed that the sparsely reconstructed phantom and human images converged to the original images. We confirmed the feasibility of sparse-sampling image reconstruction using the Bregman iteration with a low-field MRI system and obtained good results. Although our results used half the sampling ratio, this method will be helpful in increasing the temporal resolution of low-field MRI systems.
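The "gradually recovering what regularization loses" structure of Bregman iteration can be sketched on a generic sparse-recovery toy: each outer pass adds the unexplained residual back to the data, removing the shrinkage bias of a single penalized solve. Here ISTA is an illustrative inner solver and an l1 penalty stands in for the TV regularizer used with the MRI data; none of this is the paper's code.

```python
import numpy as np

rng = np.random.default_rng(3)

def ista(A, y, lam, iters=300):
    """Inner solver for min ||Ax - y||^2 / 2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

def bregman_l1(A, y, lam=0.5, outer=10):
    """Bregman iteration: each pass adds the residual back to the data,
    gradually recovering components a single penalized solve shrinks away."""
    y_k = y.copy()
    for _ in range(outer):
        x = ista(A, y_k, lam)
        y_k = y_k + (y - A @ x)   # add the unexplained residual back
    return x

m, n = 24, 48
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[3, 17, 30]] = [2.0, -1.0, 1.5]
y = A @ x_true

x_single = ista(A, y, lam=0.5)       # one penalized solve: biased (shrunk)
x_breg = bregman_l1(A, y, lam=0.5)   # Bregman: bias removed over passes
print(np.linalg.norm(x_breg - x_true) < np.linalg.norm(x_single - x_true))  # True
```

The same residual-feedback loop wrapped around a TV denoiser is what restores the fine-scale structure in the undersampled MRI reconstructions described above.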
Lü, Xilin; Zhai, Xinle; Huang, Maosong
2017-11-01
This paper presents a characterization of the mechanical behavior of municipal solid waste (MSW) under consolidated drained and undrained triaxial conditions. The constitutive model was established based on a deviatoric hardening plasticity model. A power-form function and an incremental hyperbolic-form function were proposed to describe the shear strength and the hardening rule of MSW. The stress ratio corresponding to zero dilatancy was not fixed but depended on the mean stress, enabling Rowe's rule to describe the stress dilatancy of MSW. A pore water pressure reduction coefficient, attributed to the compressibility of the particles and the solid matrix, was introduced into the effective stress formulation to modify Terzaghi's principle. The effects of particle compressibility and solid matrix compressibility on the undrained behavior of MSW were analyzed parametrically, and the changing characteristics of the stress path, stress-strain response, and pore-water pressure were obtained. The applicability of the proposed model to MSW under drained and undrained conditions was verified by model predictions of three triaxial tests. The comparison between model simulations and experiments indicated that the proposed model can capture the observed characteristics of MSW response that differ from normal soil, such as nonlinear shear strength, pressure-dependent stress dilatancy, and the reduced value of pore water pressure. Copyright © 2017 Elsevier Ltd. All rights reserved.
Experimental study on infrared radiation temperature field of concrete under uniaxial compression
NASA Astrophysics Data System (ADS)
Lou, Quan; He, Xueqiu
2018-05-01
Infrared thermography, as a nondestructive, non-contact and real-time monitoring method, has great significance in assessing the stability of concrete structures and monitoring their failure. It is necessary to conduct an in-depth study of the mechanism and application of infrared radiation (IR) during concrete failure under loading. In this paper, concrete specimens with a size of 100 × 100 × 100 mm were subjected to uniaxial compression for the IR tests. The distribution of IR temperatures (IRTs), the surface topography of the IRT field, and the reconstructed IR images were studied. The results show that the IRT distribution follows a Gaussian distribution, and the R² of the Gaussian fitting changes with the loading time. The anomalies of R² and AE counts display opposite variation trends. The surface topography of the IRT field is similar to a hyperbolic paraboloid, which is related to the stress distribution in the sample. The R² of the hyperbolic-paraboloid fitting presents an upward trend prior to the fracture that changes the IRT field significantly, and it drops sharply in response to this large destruction. Normalization images of the IRT field, including row and column normalization images, were proposed as auxiliary means to analyze the IRT field. The row and column normalization images respectively show the transverse and longitudinal distribution of the IRT field, and they have clear responses to the destruction occurring on the sample surface. In this paper, new methods and a quantitative index were proposed for the analysis of the IRT field, which have theoretical and instructive significance for analyzing the characteristics of the IRT field, as well as for monitoring the instability and failure of concrete structures.
Biomechanics of rugby union scrummaging. Technical and safety issues.
Milburn, P D
1993-09-01
In the game of rugby union, the scrum epitomises the physical nature of the game. It is both a powerful offensive skill, affording a base for attacking play, and a defensive skill in denying the opposition clean possession. However, the scrum has also been implicated in a large proportion of serious spinal injuries in rugby union. The majority of injuries are found to occur at engagement, where the forces experienced by front-row players (more than two-thirds of a tonne shared across the front row) can exceed the structural limits of the cervical spine. These large forces are a consequence of the speed of engagement and the weight (and number) of players involved in the scrum. This highlights not only the need for physical preparation of all forwards but particularly player restraint at engagement, and justifies the 'crouch-pause-engage' sequence recently introduced to 'depower' the scrum. As the hooker is the player exposed to the greatest loads throughout the scrum and consequently most at risk, he should determine the timing of engagement of the two front rows. Stability of the scrum is an indication of front-row players' ability to utilise their strength to transmit the force to their opponents, as well as the push of second-row and back-row players behind them in the scrum. This appears to be independent of the size of players. Equally, it reflects the risk of chronic degeneration of the musculoskeletal system through repeated exposure to these large stresses. However, not only are older and more experienced players better able to generate and transmit these forces, they are also better able to maintain the integrity of the scrum. A large proportion of individual players' efforts to generate force is lost in their coordinated effort in a normal scrum. It is assumed these forces are dissipated through players re-orientating their bodies in the scrum situation, as well as through less efficient shear forces and the elastic and compressive tissues in the body. 
It again reinforces the importance of physical preparation for all forwards to better withstand the large forces involved in scrummaging. Despite negative publicity surrounding the risk of serious spinal injury in rugby union, limited research has been conducted to examine either the mechanisms of injury or the techniques implicated in causing injury. Biomechanical information can provide systematic bases for modifying existing techniques and for assessing the physical capacities necessary to play efficiently and safely in the scrum. This will both improve the performance of game skills and minimise the potential for injury.
Pant, Jeevan K; Krishnan, Sridhar
2018-03-15
To present a new compressive sensing (CS)-based method for the acquisition of ECG signals and for robust estimation of heart-rate variability (HRV) parameters from compressively sensed measurements with a high compression ratio. CS is used in the biosensor to compress the ECG signal. Estimation of the locations of QRS segments is carried out by applying two algorithms to the compressed measurements. The first algorithm reconstructs the ECG signal by enforcing a block-sparse structure on the first-order difference of the signal, so the transient QRS segments are significantly emphasized in the first-order difference of the signal. Multiple block divisions of the signals are carried out with various block lengths, and multiple reconstructed signals are combined to enhance the robustness of the localization of the QRS segments. The second algorithm removes errors in the locations of QRS segments by applying low-pass filtering and morphological operations. The proposed CS-based method is found to be effective for the reconstruction of ECG signals by enforcing transient QRS structures on the first-order difference of the signal. It is demonstrated to be robust not only to a high compression ratio but also to various artefacts present in ECG signals acquired by using on-body wireless sensors. HRV parameters computed by using the QRS locations estimated from the signals reconstructed with a compression ratio as high as 90% are comparable with those computed by using QRS locations estimated by the Pan-Tompkins algorithm. The proposed method is useful for the realization of long-term HRV monitoring systems using CS-based low-power wireless on-body biosensors.
A Compressed Sensing Based Ultra-Wideband Communication System
2009-06-01
principle, most of the processing at the receiver can be moved to the transmitter, where energy consumption and computation budgets are sufficient for many advanced...extended to continuous time signals. We use ∗ to denote the convolution process in a linear time-invariant (LTI) system. Assume that there is an analog... [Figure residue: receiver block diagram showing a UWB pulse generator, radio channel, filter, low-rate A/D, and sparse recovery of the bit sequence via arg min ||θ||1 s.t. y = ΦΨθ.]
NASA Astrophysics Data System (ADS)
Prodhan, Suryoday; Ramasesha, S.
2018-05-01
The symmetry adapted density matrix renormalization group (SDMRG) technique has been an efficient method for studying low-lying eigenstates in one- and quasi-one-dimensional electronic systems. However, the SDMRG method had bottlenecks involving the construction of linearly independent symmetry adapted basis states, as the symmetry matrices in the DMRG basis were not sparse. We have developed a modified algorithm to overcome this bottleneck. The new method incorporates end-to-end interchange symmetry (C2), electron-hole symmetry (J), and parity or spin-flip symmetry (P) in these calculations. The one-to-one correspondence between direct-product basis states in the DMRG Hilbert space for these symmetry operations renders the symmetry matrices in the new basis with maximum sparseness, just one nonzero matrix element per row. Using methods similar to those employed in the exact diagonalization technique for Pariser-Parr-Pople (PPP) models, developed in the 1980s, it is possible to construct orthogonal SDMRG basis states while bypassing the slow step of the Gram-Schmidt orthonormalization procedure. The method, together with the PPP model which incorporates long-range electronic correlations, is employed to study the correlated excited-state spectra of 1,12-benzoperylene and a narrow mixed graphene nanoribbon with a chrysene molecule as the building unit, comprising both zigzag and cove-edge structures.
F100(3) parallel compressor computer code and user's manual
NASA Technical Reports Server (NTRS)
Mazzawy, R. S.; Fulkerson, D. A.; Haddad, D. E.; Clark, T. A.
1978-01-01
The Pratt & Whitney Aircraft multiple segment parallel compressor model has been modified to include the influence of variable compressor vane geometry on the sensitivity to circumferential flow distortion. Further, performance characteristics of the F100 (3) compression system have been incorporated into the model on a blade row basis. In this modified form, the distortion's circumferential location is referenced relative to the variable vane controlling sensors of the F100 (3) engine so that the proper solution can be obtained regardless of distortion orientation. This feature is particularly important for the analysis of inlet temperature distortion. Compatibility with fixed geometry compressor applications has been maintained in the model.
Digital holographic image fusion for a larger size object using compressive sensing
NASA Astrophysics Data System (ADS)
Tian, Qiuhong; Yan, Liping; Chen, Benyong; Yao, Jiabao; Zhang, Shihua
2017-05-01
Digital holographic image fusion for a larger size object using compressive sensing is proposed. In this method, the high-frequency component of the digital hologram under discrete wavelet transform is represented sparsely by using compressive sensing so that the data redundancy of digital holographic recording can be resolved validly; the low-frequency component is retained in full to preserve image quality; and multiple reconstructed images, each with different clear parts corresponding to a laser spot size, are fused to realize a high-quality reconstructed image of a larger size object. In addition, a filter combining high-pass and low-pass filters is designed to remove the zero-order term from a digital hologram effectively. A digital holographic experimental setup based on off-axis Fresnel digital holography was constructed, and feasibility and comparative experiments were carried out. The fused image was evaluated using the Tamura texture features. The experimental results demonstrated that the proposed method can improve the processing efficiency and visual characteristics of the fused image and effectively enlarge the size of the measured object.
A compressed sensing based 3D resistivity inversion algorithm for hydrogeological applications
NASA Astrophysics Data System (ADS)
Ranjan, Shashi; Kambhammettu, B. V. N. P.; Peddinti, Srinivasa Rao; Adinarayana, J.
2018-04-01
Image reconstruction from discrete electrical responses poses a number of computational and mathematical challenges. Smoothness-constrained regularized inversion from limited measurements may fail to detect resistivity anomalies and sharp interfaces between hydrostratigraphic units. Under favourable conditions, compressed sensing (CS) can be thought of as an alternative that reconstructs image features by finding sparse solutions to highly underdetermined linear systems. This paper deals with the development of a CS-assisted, 3-D resistivity inversion algorithm for use by hydrogeologists and groundwater scientists. A CS-based l1-regularized least-squares algorithm was applied to solve the resistivity inversion problem. Sparseness in the model update vector is introduced through a block-oriented discrete cosine transformation, with recovery of the signal achieved through convex optimization. The equivalent quadratic program was solved using a primal-dual interior-point method. Applicability of the proposed algorithm was demonstrated using synthetic and field examples drawn from hydrogeology. The proposed algorithm outperformed the conventional (smoothness-constrained) least-squares method in recovering the model parameters with much less data, yet preserving the sharp resistivity fronts separating geologic layers. Resistivity anomalies represented by discrete homogeneous blocks embedded in contrasting geologic layers were better imaged using the proposed algorithm. In comparison to the conventional algorithm, CS resulted in an efficient (an increase in R2 from 0.62 to 0.78; a decrease in RMSE from 125.14 Ω-m to 72.46 Ω-m), reliable, and fast-converging (run time decreased by about 25%) solution.
Emad, Amin; Milenkovic, Olgica
2014-01-01
We introduce a novel algorithm for inference of causal gene interactions, termed CaSPIAN (Causal Subspace Pursuit for Inference and Analysis of Networks), which is based on coupling compressive sensing and Granger causality techniques. The core of the approach is to discover sparse linear dependencies between shifted time series of gene expressions using a sequential list-version of the subspace pursuit reconstruction algorithm and to estimate the direction of gene interactions via Granger-type elimination. The method is conceptually simple and computationally efficient, and it allows for dealing with noisy measurements. Its performance as a stand-alone platform without biological side-information was tested on simulated networks, on the synthetic IRMA network in Saccharomyces cerevisiae, and on data pertaining to the human HeLa cell network and the SOS network in E. coli. The results produced by CaSPIAN are compared to the results of several related algorithms, demonstrating significant improvements in inference accuracy of documented interactions. These findings highlight the importance of Granger causality techniques for reducing the number of false-positives, as well as the influence of noise and sampling period on the accuracy of the estimates. In addition, the performance of the method was tested in conjunction with biological side information of the form of sparse “scaffold networks”, to which new edges were added using available RNA-seq or microarray data. These biological priors aid in increasing the sensitivity and precision of the algorithm in the small sample regime. PMID:24622336
Dual-wavelength OR-PAM with compressed sensing for cell tracking in a 3D cell culture system
NASA Astrophysics Data System (ADS)
Huang, Rou-Xuan; Fu, Ying; Liu, Wang; Ma, Yu-Ting; Hsieh, Bao-Yu; Chen, Shu-Ching; Sun, Mingjian; Li, Pai-Chi
2018-02-01
Monitoring the dynamic interactions of T cells migrating toward a tumor is beneficial for understanding how cancer immunotherapy works. Optical-resolution photoacoustic microscopy (OR-PAM) provides not only high spatial resolution but also deeper penetration than conventional optical microscopy. With the aid of exogenous contrast agents, dual-wavelength OR-PAM can be applied to map the distribution of CD8+ cytotoxic T lymphocytes (CTLs) labeled with gold nanospheres (AuNS) under 523 nm laser irradiation and Hepta1-6 tumor spheres labeled with indocyanine green (ICG) under 800 nm irradiation. However, at a 1 kHz laser pulse repetition frequency, it takes approximately 20 minutes to acquire a full sample volume of 160 × 160 × 150 μm3. To increase the imaging rate, we propose a random non-uniform sparse sampling mechanism for fast photoacoustic data acquisition. The image recovery process is formulated as low-rank matrix recovery (LRMR) based on compressed sensing (CS) theory. We show that the image can be stably recovered by solving a nuclear-norm minimization problem, maintaining image quality from significantly fewer measurements. In this study, we use dual-wavelength OR-PAM with CS to visualize T cell trafficking in a 3D culture system with higher temporal resolution. Data acquisition time for this sample volume is reduced by 40% at a sampling density of 0.5. The imaging system reveals the potential to understand dynamic cellular processes for preclinical screening of anti-cancer drugs.
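The nuclear-norm recovery step can be sketched with the classic singular value thresholding (SVT) iteration on a synthetic rank-2 "image" sampled at density 0.5; the sizes, rank, and SVT parameters are illustrative assumptions, not the paper's actual reconstruction pipeline.

```python
import numpy as np

def svt_complete(M_obs, mask, tau, step, iters=800):
    """Low-rank recovery from sparsely sampled entries by singular value
    thresholding, the classic proximal scheme for nuclear-norm minimization."""
    Y = np.zeros_like(M_obs)
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt      # soft-threshold singular values
        Y += step * mask * (M_obs - X)               # dual ascent on sampled entries
    return X

rng = np.random.default_rng(1)
n, density = 60, 0.5                                 # cf. the 0.5 sampling density above
M_true = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))  # rank-2 "image"
mask = rng.random((n, n)) < density
# standard SVT heuristics: tau ~ 5n, step ~ 1.2 / sampling density
X_rec = svt_complete(M_true * mask, mask, tau=5 * n, step=1.2 / density)
```

The low-rank structure plays the role that the photoacoustic image's redundancy plays above: half of the pixels suffice to fill in the rest.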
Double-Row Suture Anchor Repair of Posterolateral Corner Avulsion Fractures.
Gilmer, Brian B
2017-08-01
Posterolateral corner avulsion fractures are a rare variant of ligamentous knee injury primarily described in the skeletally immature population. Injury is often related to a direct varus moment placed on the knee during sporting activities. Various treatment strategies have been discussed ranging from nonoperative management, to excision of the bony fragment, to primary repair with screws or suture. The described technique is a means for achieving fixation of the bony avulsion using principles familiar to double-row transosseous equivalent rotator cuff repair. Proximal anchors are placed in the epiphysis, and sutures are passed in horizontal mattress fashion. Once tied, the limbs of these same sutures are then passed to more distal anchors. Remaining eyelet sutures can be used to manage peripheral tissue. The final repair provides anatomic reduction and compression of the fragment to its bony bed with minimal extracortical hardware prominence and no violation of the physis. Risks include potential for physeal injury or chondral damage to the lateral femoral condyle through aberrant anchor placement. Postoperative care includes toe-touch weight-bearing restrictions and range of motion restrictions of 0°-90° in a hinged brace for 6 weeks followed by gradual return to activity.
Effect of Coolant Temperature and Mass Flow on Film Cooling of Turbine Blades
NASA Technical Reports Server (NTRS)
Garg, Vijay K.; Gaugler, Raymond E.
1997-01-01
A three-dimensional Navier-Stokes code has been used to study the effect of coolant temperature and coolant-to-mainstream mass flow ratio on the adiabatic effectiveness of a film-cooled turbine blade. The blade chosen is the VKI rotor with six rows of cooling holes, including three rows on the shower head. The mainstream is akin to that under real engine conditions, with stagnation temperature = 1900 K and stagnation pressure = 3 MPa. Generally, the adiabatic effectiveness is lower for a higher coolant temperature due to nonlinear effects via the compressibility of air. However, over the suction side of the shower-head holes, the effectiveness is higher for a higher coolant temperature when the coolant-to-mainstream mass flow ratio is 5% or more. For a fixed coolant temperature, the effectiveness passes through a minimum on the suction side of the shower-head holes as the coolant-to-mainstream mass flow ratio increases, while on the pressure side of the shower-head holes the effectiveness decreases with increasing coolant mass flow due to coolant jet lift-off. In all cases, the adiabatic effectiveness is highly three-dimensional.
Development of a Nonlinear Acoustic Phased Array and its Interaction with Thin Plates
NASA Astrophysics Data System (ADS)
Anzel, Paul; Donahue, Carly; Daraio, Chiara
2015-03-01
Numerous technologies are based on the principle of focusing acoustic energy. We propose a new device to focus sound waves which exploits highly nonlinear dynamics. The advantages of this device are the capability of generating very highly powerful acoustic pulses and potential operation in high-temperature environments where traditional piezoelectrics may fail. This device is composed of rows of ball bearings placed in contact with a medium of interest and with an actuator on the top. Elastic spherical particles have a contact force that grows with their relative displacement to the three-halves power (Hertzian contact). When several spheres are placed in a row, the particles support the propagation of ``solitary waves''--strong, compact stress-wave pulses whose tendency to disperse is counteracted by the nonlinearity of the sphere's contact force. We present results regarding the experimental operation of the device and its comparison to theory and numerical simulations. We will show how well this system is capable of focusing energy at various locations in the medium, and the limits imposed by pre-compression. Finally, the effects of timing error on energy focusing will be demonstrated. This research has been supported by a NASA Space Technology Research Fellowship.
On the physical significance of the Effective Independence method for sensor placement
NASA Astrophysics Data System (ADS)
Jiang, Yaoguang; Li, Dongsheng; Song, Gangbing
2017-05-01
Optimally deploying sparse sensors for better damage identification and structural health monitoring is a challenging task. The Effective Independence (EI) method, one of the most influential sensor placement methods, is discussed in this paper. Specifically, we address the effect of different weighting coefficients on the maximization of the Fisher information matrix (FIM) and the physical significance of the re-orthogonalization of modal shapes through QR decomposition in the EI method. By analyzing the widely used EI method, we found that the absolute identification space put forward along with the EI method is preferable for ensuring the maximization of the FIM, instead of the original EI coefficient, which was post-multiplied by a weighting matrix. That is, deleting the row with the minimum EI coefficient cannot achieve the objective of maximizing the trace of the FIM as initially conceived. Furthermore, we observed that in the computation of the EI method, the sum of each retained row in the absolute identification space is a constant in each iteration. This property can be revealed distinctively by the product of the target mode and its transpose, and its form is similar to an alternative formula of the EI method through orthogonal-triangular (QR) decomposition previously proposed by the authors. With it, the physical significance of the re-orthogonalization of modal shapes through QR decomposition in the EI method can be manifested from a new perspective. Finally, two simple examples are provided to demonstrate the above two observations.
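To make the projector bookkeeping concrete, here is a minimal, generic EI iteration (not the authors' QR reformulation); the candidate mode matrix is random. One invariant is visible immediately: the EI coefficients are the diagonal of an idempotent projector, so they always sum to the number of target modes, which is in the same spirit as the constant-sum observation above.

```python
import numpy as np

def effective_independence(Phi, n_sensors):
    """Greedy EI: repeatedly delete the candidate DOF whose EI coefficient
    (its diagonal entry of the projector P (P^T P)^{-1} P^T) is smallest."""
    keep = list(range(Phi.shape[0]))
    while len(keep) > n_sensors:
        P = Phi[keep]
        E = np.einsum('ij,ji->i', P @ np.linalg.inv(P.T @ P), P.T)
        keep.pop(int(np.argmin(E)))
    return keep

rng = np.random.default_rng(2)
Phi = rng.standard_normal((20, 3))     # 20 candidate DOFs, 3 target modes
sel = effective_independence(Phi, 6)   # retain 6 sensor locations
P = Phi[sel]
E = np.einsum('ij,ji->i', P @ np.linalg.inv(P.T @ P), P.T)
# trace of a projector = its rank, so the EI coefficients sum to 3 every iteration
```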
A Robust Feedforward Model of the Olfactory System
NASA Astrophysics Data System (ADS)
Zhang, Yilun; Sharpee, Tatyana
Most natural odors have sparse molecular composition. This makes the principles of compressed sensing potentially relevant to the structure of the olfactory code. Yet the largely feedforward organization of the olfactory system precludes reconstruction using standard compressed sensing algorithms. To resolve this problem, recent theoretical work has proposed that signal reconstruction could take place as a result of a low-dimensional dynamical system converging to one of its attractor states. The dynamical aspects of optimization, however, would slow down odor recognition and were also found to be susceptible to noise. Here we describe a feedforward model of the olfactory system that achieves both strong compression and fast reconstruction that is also robust to noise. A key feature of the proposed model is a specific relationship between how odors are represented at the glomeruli stage, which corresponds to compression, and the connections from glomeruli to Kenyon cells, which in the model correspond to reconstruction. We show that provided this specific relationship holds, the reconstruction will be both fast and robust to noise, and in particular to failure of glomeruli. The predicted connectivity rate from glomeruli to the Kenyon cells can be tested experimentally. This research was supported by the James S. McDonnell Foundation, NSF CAREER award IIS-1254123, and NSF Ideas Lab Collaborative Research IOS 1556388.
NASA Astrophysics Data System (ADS)
Wason, H.; Herrmann, F. J.; Kumar, R.
2016-12-01
Current efforts towards dense shot (or receiver) sampling and full azimuthal coverage to produce high resolution images have led to the deployment of multiple source vessels (or streamers) across marine survey areas. Densely sampled marine seismic data acquisition, however, is expensive, and hence necessitates the adoption of sampling schemes that save acquisition costs and time. Compressed sensing is a sampling paradigm that aims to reconstruct a signal--one that is sparse or compressible in some transform domain--from relatively fewer measurements than required by the Nyquist sampling criterion. Leveraging ideas from the field of compressed sensing, we show how marine seismic acquisition can be set up as a compressed sensing problem. A step beyond multi-source seismic acquisition is simultaneous source acquisition--an emerging technology that is stimulating both geophysical research and commercial efforts--where multiple source arrays/vessels fire shots simultaneously, resulting in better coverage in marine surveys. Following the design principles of compressed sensing, we propose a pragmatic simultaneous time-jittered, time-compressed marine acquisition scheme where single or multiple source vessels sail across an ocean-bottom array firing airguns at jittered times and source locations, resulting in better spatial sampling and speeding up acquisition. Our acquisition is low cost since our measurements are subsampled. Simultaneous source acquisition generates data with overlapping shot records, which need to be separated for further processing. We significantly improve the reconstruction quality of conventional seismic data recovered from jittered data and demonstrate successful recovery by sparsity promotion. In contrast to random (sub)sampling, acquisition via jittered (sub)sampling helps control the maximum gap size, which is a practical requirement of wavefield reconstruction with localized sparsifying transforms.
We illustrate our results with simulations of simultaneous time-jittered marine acquisition for 2D and 3D ocean-bottom cable survey.
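The gap-control claim can be checked in a few lines: jittering one shot inside each window of `factor` slots keeps the average rate at 1/factor while provably bounding the largest gap by 2*factor - 1, whereas uniform random subsampling with the same budget has no such bound. All sizes here are illustrative, not survey parameters.

```python
import numpy as np

def jittered_subsample(n_total, factor, rng):
    """One sample drawn uniformly inside each window of `factor` slots:
    average rate 1/factor, worst-case gap at most 2*factor - 1 slots."""
    starts = np.arange(0, n_total, factor)
    return starts + rng.integers(0, factor, size=starts.size)

rng = np.random.default_rng(3)
n, f = 1200, 4
jit = np.sort(jittered_subsample(n, f, rng))
rnd = np.sort(rng.choice(n, size=jit.size, replace=False))  # random, same budget
max_gap = lambda idx: int(np.diff(idx).max())
print(max_gap(jit), max_gap(rnd))  # jittered gap is bounded; random gap typically larger
```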
Kunz, Mathias; Dorn, Franziska; Greve, Tobias; Stoecklein, Veit; Tonn, Joerg-Christian; Brückmann, Hartmut; Schichor, Christian
2017-09-01
In symptomatic unruptured intracranial aneurysms (UIAs), data on long-term functional outcome are sparse in the literature, even in the light of modern interdisciplinary treatment decisions. We therefore analyzed our in-house database for prognostic factors and long-term outcome of neurologic symptoms after microsurgical/endovascular treatment. Patients treated between 2000 and 2016 after interdisciplinary vascular board decision were included. UIAs were categorized as symptomatic in cases of cranial nerve or brainstem compression. Symptoms were categorized as mild/severe. Long-term development of symptoms after treatment was assessed in a standardized and independent fashion. Of 98 symptomatic UIAs (microsurgery/endovascular 43/55), 84 patients presented with cranial nerve (NII-VI) compression and 14 patients with brainstem compression symptoms. Permanent morbidity occurred in 9% of patients. Of 119 symptoms (mild/severe 71/48), 60.4% recovered (full/partial 22%/39%) and 29% stabilized by the time of last follow-up; median follow-up was 19.5 months. Symptom recovery was higher in the long-term compared with that at discharge (P = 0.002). Optic nerve compression symptoms were less likely to improve compared with abducens nerve palsies and brainstem compression. Prognostic factors for recovery were duration and severity of symptoms, treatment modality (microsurgery) and absence of ischemia in the multivariate analysis. This recent study presents for the first time a detailed analysis of relevant prognostic factors for long-term recovery of cranial nerve/brainstem compression symptoms in an interdisciplinary treatment concept, which was excellent in most patients, with lowest recovery rates in optic nerve compression. Symptom recovery was remarkably higher in the long-term compared with recovery at discharge. Copyright © 2017 Elsevier Inc. All rights reserved.
Compressive-sampling-based positioning in wireless body area networks.
Banitalebi-Dehkordi, Mehdi; Abouei, Jamshid; Plataniotis, Konstantinos N
2014-01-01
Recent achievements in wireless technologies have opened up enormous opportunities for the implementation of ubiquitous health care systems in providing rich contextual information and warning mechanisms against abnormal conditions. This helps with the automatic and remote monitoring/tracking of patients in hospitals and facilitates and with the supervision of fragile, elderly people in their own domestic environment through automatic systems to handle the remote drug delivery. This paper presents a new modeling and analysis framework for the multipatient positioning in a wireless body area network (WBAN) which exploits the spatial sparsity of patients and a sparse fast Fourier transform (FFT)-based feature extraction mechanism for monitoring of patients and for reporting the movement tracking to a central database server containing patient vital information. The main goal of this paper is to achieve a high degree of accuracy and resolution in the patient localization with less computational complexity in the implementation using the compressive sensing theory. We represent the patients' positions as a sparse vector obtained by the discrete segmentation of the patient movement space in a circular grid. To estimate this vector, a compressive-sampling-based two-level FFT (CS-2FFT) feature vector is synthesized for each received signal from the biosensors embedded on the patient's body at each grid point. This feature extraction process benefits in the combination of both short-time and long-time properties of the received signals. The robustness of the proposed CS-2FFT-based algorithm in terms of the average positioning error is numerically evaluated using the realistic parameters in the IEEE 802.15.6-WBAN standard in the presence of additive white Gaussian noise. 
Due to the circular grid pattern and the CS-2FFT feature extraction method, the proposed scheme represents a significant reduction in the computational complexity, while improving the level of the resolution and the localization accuracy when compared to some classical CS-based positioning algorithms.
Ray, J.; Lee, J.; Yadav, V.; ...
2014-08-20
We present a sparse reconstruction scheme that can also be used to ensure non-negativity when fitting wavelet-based random field models to limited observations in non-rectangular geometries. The method is relevant when multiresolution fields are estimated using linear inverse problems. Examples include the estimation of emission fields for many anthropogenic pollutants using atmospheric inversion or hydraulic conductivity in aquifers from flow measurements. The scheme is based on three new developments. Firstly, we extend an existing sparse reconstruction method, Stagewise Orthogonal Matching Pursuit (StOMP), to incorporate prior information on the target field. Secondly, we develop an iterative method that uses StOMP to impose non-negativity on the estimated field. Finally, we devise a method, based on compressive sensing, to limit the estimated field within an irregularly shaped domain. We demonstrate the method on the estimation of fossil-fuel CO2 (ffCO2) emissions in the lower 48 states of the US. The application uses a recently developed multiresolution random field model and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of two. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
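The non-negativity point can be illustrated with a solver simpler than the paper's StOMP extension: a projected shrinkage (ISTA-style) iteration in which the one-sided soft-threshold doubles as the non-negativity projection, so the problem stays linear and no log-transform is needed. The matrix and the sparse non-negative field below are synthetic stand-ins.

```python
import numpy as np

def nonneg_shrinkage(A, y, lam=0.05, iters=2000):
    """Projected ISTA for min_x 0.5*||A x - y||^2 + lam*sum(x) s.t. x >= 0.
    The max(..., 0) step is both the l1 shrinkage and the non-negativity
    projection, avoiding the nonlinearity of log-transformed fields."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = np.maximum(x - step * (A.T @ (A @ x - y)) - step * lam, 0.0)
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = rng.uniform(1.0, 3.0, 5)  # non-negative "emissions"
x_hat = nonneg_shrinkage(A, A @ x_true)
```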
Short-term memory capacity in networks via the restricted isometry property.
Charles, Adam S; Yap, Han Lun; Rozell, Christopher J
2014-06-01
Cortical networks are hypothesized to rely on transient network activity to support short-term memory (STM). In this letter, we study the capacity of randomly connected recurrent linear networks for performing STM when the input signals are approximately sparse in some basis. We leverage results from compressed sensing to provide rigorous nonasymptotic recovery guarantees, quantifying the impact of the input sparsity level, the input sparsity basis, and the network characteristics on the system capacity. Our analysis demonstrates that network memory capacities can scale superlinearly with the number of nodes and in some situations can achieve STM capacities that are much larger than the network size. We provide perfect recovery guarantees for finite sequences and recovery bounds for infinite sequences. The latter analysis predicts that network STM systems may have an optimal recovery length that balances errors due to omission and recall mistakes. Furthermore, we show that the conditions yielding optimal STM capacity can be embodied in several network topologies, including networks with sparse or dense connectivities.
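The recurrent-network STM setup above can be sketched numerically: the state of a randomly connected linear network after T steps is a compressed linear measurement of the input sequence, which a generic sparse solver can then invert. The orthogonal recurrent matrix, the greedy OMP recovery, and all sizes below are illustrative choices, not the letter's exact construction or guarantees.

```python
import numpy as np

def omp(A, y, n_steps):
    """Orthogonal matching pursuit: greedily select columns, refit by least squares."""
    idx, r, coef = [], y.copy(), np.zeros(0)
    for _ in range(n_steps):
        if np.linalg.norm(r) < 1e-10 * np.linalg.norm(y):
            break                                  # residual exhausted
        idx.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ coef
    x = np.zeros(A.shape[1])
    x[idx] = coef
    return x

rng = np.random.default_rng(5)
M, T, k = 40, 80, 4                                # nodes, sequence length, sparsity
W, _ = np.linalg.qr(rng.standard_normal((M, M)))   # random orthogonal recurrent weights
z = rng.standard_normal(M)                         # feedforward input weights
cols, v = [], z.copy()
for _ in range(T):                                 # state: x_T = sum_t W^(T-1-t) z s[t]
    cols.append(v)
    v = W @ v
A = np.stack(cols[::-1], axis=1)                   # column t is W^(T-1-t) z
s = np.zeros(T)
s[rng.choice(T, k, replace=False)] = rng.uniform(1.0, 2.0, k) * rng.choice([-1.0, 1.0], k)
x_T = A @ s                                        # final network state (M << T)
s_hat = omp(A, x_T, 2 * k)                         # recover the longer input sequence
```

Recovering an 80-sample sequence from a 40-node state is exactly the "capacity larger than network size" regime the letter analyzes.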
LESS: Link Estimation with Sparse Sampling in Intertidal WSNs
Ji, Xiaoyu; Chen, Yi-chao; Li, Xiaopeng; Xu, Wenyuan
2018-01-01
Deploying wireless sensor networks (WSN) in the intertidal area is an effective approach for environmental monitoring. To sustain reliable data delivery in such a dynamic environment, a link quality estimation mechanism is crucial. However, our observations in two real WSN systems deployed in the intertidal areas reveal that link update in routing protocols often suffers from energy and bandwidth waste due to the frequent link quality measurement and updates. In this paper, we carefully investigate the network dynamics using real-world sensor network data and find it feasible to achieve accurate estimation of link quality using sparse sampling. We design and implement a compressive-sensing-based link quality estimation protocol, LESS, which incorporates both spatial and temporal characteristics of the system to aid the link update in routing protocols. We evaluate LESS in both real WSN systems and a large-scale simulation, and the results show that LESS can reduce energy and bandwidth consumption by up to 50% while still achieving more than 90% link quality estimation accuracy. PMID:29494557
Performance bounds for modal analysis using sparse linear arrays
NASA Astrophysics Data System (ADS)
Li, Yuanxin; Pezeshki, Ali; Scharf, Louis L.; Chi, Yuejie
2017-05-01
We study the performance of modal analysis using sparse linear arrays (SLAs), such as nested and co-prime arrays, in both first-order and second-order measurement models. We treat SLAs as constructed from a subset of sensors in a dense uniform linear array (ULA), and characterize the performance loss of SLAs with respect to the ULA due to using much fewer sensors. In particular, we show that, given the same aperture, in order to achieve comparable performance in terms of the Cramér-Rao bound (CRB) for modal analysis, SLAs require more snapshots: approximately the number of snapshots used by the ULA times the compression ratio in the number of sensors. This is shown analytically for the case of one undamped mode, as well as empirically via extensive numerical experiments for more complex scenarios. Moreover, the misspecified CRB proposed by Richmond and Horowitz is also studied, under which SLAs suffer a greater performance loss than their ULA counterpart.
NASA Astrophysics Data System (ADS)
Liu, Hao; Li, Kangda; Wang, Bing; Tang, Hainie; Gong, Xiaohui
2017-01-01
A quantized block compressive sensing (QBCS) framework, which incorporates universal measurement, quantization/inverse quantization, an entropy coder/decoder, and iterative projected Landweber reconstruction, is summarized. Under the QBCS framework, this paper presents an improved reconstruction algorithm for aerial imagery, QBCS with entropy-aware projected Landweber (QBCS-EPL), which leverages a full-image sparse transform without a Wiener filter and an entropy-aware thresholding model for wavelet-domain image denoising. By analyzing the functional relation between the soft-thresholding factors and entropy-based bitrates for different quantization methods, the proposed model can effectively remove the wavelet-domain noise of bivariate shrinkage and achieve better image reconstruction quality. For the overall performance of QBCS reconstruction, experimental results demonstrate that the proposed QBCS-EPL algorithm significantly outperforms several existing algorithms. With the experiment-driven methodology, the QBCS-EPL algorithm can obtain better reconstruction quality at a relatively moderate computational cost, which makes it more desirable for aerial imagery applications.
NASA Astrophysics Data System (ADS)
Bright, Ido; Lin, Guang; Kutz, J. Nathan
2013-12-01
Compressive sensing is used to determine the flow characteristics around a cylinder (Reynolds number and pressure/flow field) from a sparse number of pressure measurements on the cylinder. Using a supervised machine learning strategy, library elements encoding the dimensionally reduced dynamics are computed for various Reynolds numbers. Convex L1 optimization is then used with a limited number of pressure measurements on the cylinder to reconstruct, or decode, the full pressure field and the resulting flow field around the cylinder. Aside from the highly turbulent regime (large Reynolds number) where only the Reynolds number can be identified, accurate reconstruction of the pressure field and Reynolds number is achieved. The proposed data-driven strategy thus achieves encoding of the fluid dynamics using the L2 norm, and robust decoding (flow field reconstruction) using the sparsity promoting L1 norm.
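A toy version of the encode/decode idea above can be written in a few lines. Each Reynolds regime gets a small synthetic library of modes, and the regime plus full pressure field are recovered from a handful of point measurements; a least-residual fit per library stands in for the paper's L1 decoding, and every size and library below is fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
# hypothetical libraries: 8 POD-like modes for each of 3 Reynolds regimes
libs = [rng.standard_normal((500, 8)) for _ in range(3)]
true_regime = 1
field = libs[true_regime] @ rng.standard_normal(8)   # full pressure field
pick = rng.choice(500, 20, replace=False)            # 20 sparse pressure sensors
y = field[pick]

def classify_and_decode(y, pick, libs):
    """Identify the regime whose library best fits the sparse measurements,
    then reconstruct (decode) the full field from that library's modes."""
    best, best_res, best_coef = None, np.inf, None
    for i, L in enumerate(libs):
        c, *_ = np.linalg.lstsq(L[pick], y, rcond=None)
        res = np.linalg.norm(L[pick] @ c - y)
        if res < best_res:
            best, best_res, best_coef = i, res, c
    return best, libs[best] @ best_coef

regime, recon = classify_and_decode(y, pick, libs)
```

With noiseless measurements and 20 sensors against 8 modes, the correct library fits exactly while the wrong ones leave a residual, which is the essence of the identification step.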
Compressed single pixel imaging in the spatial frequency domain
Torabzadeh, Mohammad; Park, Il-Yong; Bartels, Randy A.; Durkin, Anthony J.; Tromberg, Bruce J.
2017-01-01
Abstract. We have developed compressed sensing single pixel spatial frequency domain imaging (cs-SFDI) to characterize tissue optical properties over a wide field of view (35 mm×35 mm) using multiple near-infrared (NIR) wavelengths simultaneously. Our approach takes advantage of the relatively sparse spatial content required for mapping tissue optical properties at length scales comparable to the transport scattering length in tissue (ltr∼1 mm) and the high bandwidth available for spectral encoding using a single-element detector. cs-SFDI recovered absorption (μa) and reduced scattering (μs′) coefficients of a tissue phantom at three NIR wavelengths (660, 850, and 940 nm) within 7.6% and 4.3% of absolute values determined using camera-based SFDI, respectively. These results suggest that cs-SFDI can be developed as a multi- and hyperspectral imaging modality for quantitative, dynamic imaging of tissue optical and physiological properties. PMID:28300272
Compressive spherical beamforming for localization of incipient tip vortex cavitation.
Choo, Youngmin; Seong, Woojae
2016-12-01
Noise from incipient propeller tip vortex cavitation (TVC) is generally generated near the propeller tip. Localization of these sparse noise sources is performed using compressive sensing (CS) with measurement data from cavitation tunnel experiments. Since initial TVC sound radiates in all directions as a monopole source, the sensing matrix for CS is formulated by adopting spherical beamforming. CS localization is first examined with measurements of a known source, where the CS-estimated source position coincides with the known position. Afterwards, CS is applied to initial cavitation noise. The cavitation was localized near the upper downstream area of the propeller, with less ambiguity than Bartlett spherical beamforming. The standard CS constraint was modified by exploiting the physical features of cavitation to suppress the remaining ambiguity. CS localization of TVC using the modified constraint is shown as a function of cavitation number and compared to high-speed camera images.
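The monopole sensing matrix construction can be sketched as follows: each column is the free-field Green's function from one candidate grid point to the array, and a noiseless single-source scan then reduces to finding the column best matched to the data. The array geometry, frequency, and grid are hypothetical stand-ins, and the one-step greedy pick below is the simplest CS-flavoured estimator, not the paper's modified-constraint solver.

```python
import numpy as np

rng = np.random.default_rng(6)
k_wave = 2 * np.pi * 2000.0 / 1500.0              # wavenumber: 2 kHz in water (c = 1500 m/s)
sensors = 0.2 * rng.standard_normal((32, 3))      # hypothetical 32-element volumetric array

# candidate source grid: 11 x 11 points on a plane away from the array
grid = np.array([[x, y, 1.0]
                 for x in np.linspace(0.5, 1.5, 11)
                 for y in np.linspace(-0.5, 0.5, 11)])

def monopole(src):
    """Free-field monopole Green's function sampled at the sensors."""
    r = np.linalg.norm(sensors - src, axis=1)
    return np.exp(1j * k_wave * r) / r

A = np.stack([monopole(g) for g in grid], axis=1)
A /= np.linalg.norm(A, axis=0)                    # unit-norm steering vectors

true_idx = 37                                     # simulate one TVC-like monopole source
p = 3.0 * A[:, true_idx]                          # noiseless array snapshot
est_idx = int(np.argmax(np.abs(A.conj().T @ p)))  # best-matched grid point
```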
High-Performance 3D Compressive Sensing MRI Reconstruction Using Many-Core Architectures.
Kim, Daehyun; Trzasko, Joshua; Smelyanskiy, Mikhail; Haider, Clifton; Dubey, Pradeep; Manduca, Armando
2011-01-01
Compressive sensing (CS) describes how sparse signals can be accurately reconstructed from many fewer samples than required by the Nyquist criterion. Since MRI scan duration is proportional to the number of acquired samples, CS has been gaining significant attention in MRI. However, the computationally intensive nature of CS reconstructions has precluded their use in routine clinical practice. In this work, we investigate how different throughput-oriented architectures can benefit one CS algorithm and what levels of acceleration are feasible on different modern platforms. We demonstrate that a CUDA-based code running on an NVIDIA Tesla C2050 GPU can reconstruct a 256 × 160 × 80 volume from an 8-channel acquisition in 19 seconds, which is in itself a significant improvement over the state of the art. We then show that Intel's Knights Ferry can perform the same 3D MRI reconstruction in only 12 seconds, bringing CS methods even closer to clinical viability.
LCAMP: Location Constrained Approximate Message Passing for Compressed Sensing MRI
Sung, Kyunghyun; Daniel, Bruce L; Hargreaves, Brian A
2016-01-01
Iterative thresholding methods have been extensively studied as faster alternatives to convex optimization methods for solving large-sized problems in compressed sensing. A novel iterative thresholding method called LCAMP (Location Constrained Approximate Message Passing) is presented for reducing computational complexity and improving reconstruction accuracy when a nonzero location (or sparse support) constraint can be obtained from view shared images. LCAMP modifies the existing approximate message passing algorithm by replacing the thresholding stage with a location constraint, which avoids adjusting regularization parameters or thresholding levels. This work is first compared with other conventional reconstruction methods using random 1D signals and then applied to dynamic contrast-enhanced breast MRI to demonstrate the excellent reconstruction accuracy (less than 2% absolute difference) and low computation time (5 - 10 seconds using Matlab) with highly undersampled 3D data (244 × 128 × 48; overall reduction factor = 10). PMID:23042658
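The core idea above, that a known sparse support removes the need for threshold or regularization tuning, can be illustrated with plain least squares restricted to the known nonzero locations (a toy stand-in for the full AMP iteration; the sizes and support below are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 100, 40
support = np.array([3, 17, 42, 77])            # hypothetical known support
x_true = np.zeros(n)
x_true[support] = [2.0, -1.5, 0.5, 3.0]
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# Replace the thresholding stage with the location constraint:
# least squares using only the supported columns.
x_hat = np.zeros(n)
x_hat[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)

print(np.allclose(x_hat, x_true))
```

With the support fixed, the subproblem is a small overdetermined system with no free parameters to tune, which is the source of LCAMP's speed.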
Tree-Structured Infinite Sparse Factor Model
Zhang, XianXing; Dunson, David B.; Carin, Lawrence
2013-01-01
A tree-structured multiplicative gamma process (TMGP) is developed, for inferring the depth of a tree-based factor-analysis model. This new model is coupled with the nested Chinese restaurant process, to nonparametrically infer the depth and width (structure) of the tree. In addition to developing the model, theoretical properties of the TMGP are addressed, and a novel MCMC sampler is developed. The structure of the inferred tree is used to learn relationships between high-dimensional data, and the model is also applied to compressive sensing and interpolation of incomplete images. PMID:25279389
Quasi-symmetric designs and equiangular tight frames
NASA Astrophysics Data System (ADS)
Fickus, Matthew; Jasper, John; Mixon, Dustin; Peterson, Jesse
2015-08-01
An equiangular tight frame (ETF) is an M×N matrix that has orthogonal rows of equal norm, columns of equal norm, and pairwise column inner products of constant modulus. ETFs arise in numerous applications, including compressed sensing. They also seem to be rare: despite over a decade of active research by the community, only a few construction methods have been discovered. In this article we introduce a new construction of ETFs from a particular class of combinatorial designs called quasi-symmetric designs. For ETFs whose entries are contained in {+1, -1}, called real constant amplitude ETFs (RCAETFs), we show that this construction is reversible, giving new quasi-symmetric designs from the known constructions of RCAETFs.
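The defining properties are easy to check numerically on the smallest real example, the Mercedes-Benz frame (M=2, N=3); this is a standard textbook ETF, not one of the new constructions in the article:

```python
import numpy as np

# Mercedes-Benz frame: the simplest real ETF (M=2, N=3).
s = np.sqrt(3) / 2
F = np.array([[1.0, -0.5, -0.5],
              [0.0,   s,   -s ]])

# Rows are orthogonal with equal norm (tightness):
G_rows = F @ F.T
print(np.allclose(G_rows, 1.5 * np.eye(2)))

# Columns have unit norm, and all off-diagonal Gram entries
# share one modulus (equiangularity):
G = F.T @ F
off = G[~np.eye(3, dtype=bool)]
print(np.allclose(np.diag(G), 1.0), np.allclose(np.abs(off), 0.5))
```

The same three checks (row Gram proportional to identity, equal column norms, constant off-diagonal modulus) verify any candidate ETF, including the quasi-symmetric-design constructions.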
NASA Technical Reports Server (NTRS)
Srivastava, R.; Reddy, T. S. R.
1997-01-01
The program DuctE3D is used for steady or unsteady aerodynamic and aeroelastic analysis of ducted fans. This guide describes the input data required and the output files generated when using DuctE3D. The analysis solves the three-dimensional unsteady, compressible Euler equations to obtain the aerodynamic forces. A normal mode structural analysis is used to obtain the aeroelastic equations, which are solved using either a time domain or a frequency domain solution method. Sample input and output files are included in this guide for steady aerodynamic analysis and aeroelastic analysis of an isolated fan row.
Compressed Sensing for Chemistry
NASA Astrophysics Data System (ADS)
Sanders, Jacob Nathan
Many chemical applications, from spectroscopy to quantum chemistry, involve measuring or computing a large amount of data, and then compressing this data to retain the most chemically-relevant information. In contrast, compressed sensing is an emergent technique that makes it possible to measure or compute an amount of data that is roughly proportional to its information content. In particular, compressed sensing enables the recovery of a sparse quantity of information from significantly undersampled data by solving an ℓ1-optimization problem. This thesis presents the application of compressed sensing to problems in chemistry. The first half of this thesis is about spectroscopy. Compressed sensing is used to accelerate the computation of vibrational and electronic spectra from real-time time-dependent density functional theory simulations. Using compressed sensing as a drop-in replacement for the discrete Fourier transform, well-resolved frequency spectra are obtained at one-fifth the typical simulation time and computational cost. The technique is generalized to multiple dimensions and applied to two-dimensional absorption spectroscopy using experimental data collected on atomic rubidium vapor. Finally, a related technique known as super-resolution is applied to open quantum systems to obtain realistic models of a protein environment, in the form of atomistic spectral densities, at lower computational cost. The second half of this thesis deals with matrices in quantum chemistry. It presents a new use of compressed sensing for more efficient matrix recovery whenever the calculation of individual matrix elements is the computational bottleneck. The technique is applied to the computation of second-derivative Hessian matrices in electronic structure calculations to obtain the vibrational modes and frequencies of molecules. When applied to anthracene, this technique results in a threefold speed-up, with greater speed-ups possible for larger molecules.
The implementation of the method in the Q-Chem commercial software package is described. Moreover, the method provides a general framework for bootstrapping cheap low-accuracy calculations in order to reduce the required number of expensive high-accuracy calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, D; Ruan, D; Low, D
2015-06-15
Purpose: Existing efforts to replace the complex multileaf collimator (MLC) with simple jaws for intensity modulated radiation therapy (IMRT) resulted in unacceptable compromises in plan quality and delivery efficiency. We introduce a novel fluence map segmentation method based on compressed sensing for plan delivery using a simplified sparse orthogonal collimator (SOC) on the 4π non-coplanar radiotherapy platform. Methods: 4π plans with varying prescription doses were first created by automatically selecting and optimizing 20 non-coplanar beams for 2 GBM, 2 head & neck, and 2 lung patients. To create deliverable 4π plans using SOC, which are two pairs of orthogonal collimators with 1 to 4 leaves in each collimator bank, a Haar Fluence Optimization (HFO) method was used to regulate the number of Haar wavelet coefficients while maximizing the dose fidelity to the ideal prescription. The plans were directly stratified utilizing the optimized Haar wavelet rectangular basis. A matching number of deliverable segments were stratified for the MLC-based plans. Results: Compared to the MLC-based 4π plans, the SOC-based 4π plans increased the average PTV dose homogeneity from 0.811 to 0.913. PTV D98 and D99 were improved by 3.53% and 5.60% of the corresponding prescription doses. The average mean and maximal OAR doses slightly increased by 0.57% and 2.57% of the prescription doses. The average number of segments ranged between 5 and 30 per beam. The collimator travel time to create the segments decreased with increasing leaf numbers in the SOC. The two and four leaf designs were 1.71 and 1.93 times more efficient, on average, than the single leaf design. Conclusion: The innovative dose domain optimization based on compressed sensing enables uncompromised 4π non-coplanar IMRT dose delivery using simple rectangular segments that are deliverable using a sparse orthogonal collimator, which only requires 8 to 16 leaves yet is unlimited in modulation resolution.
This work is supported in part by Varian Medical Systems, Inc. and NIH R43 CA18339.
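The reason a Haar basis suits rectangular collimator segments can be seen in one line of code: a piecewise-constant fluence row has few nonzero Haar coefficients. The transform below is a single-level orthonormal Haar step on an invented row, not the HFO optimizer itself:

```python
import numpy as np

def haar_level(x):
    # One level of the orthonormal Haar transform: pairwise averages
    # (scaling coefficients) followed by pairwise differences (details).
    return np.concatenate([(x[0::2] + x[1::2]) / np.sqrt(2),
                           (x[0::2] - x[1::2]) / np.sqrt(2)])

row = np.array([4., 4., 4., 4., 8., 8., 8., 8.])   # piecewise-constant fluence row
c = haar_level(row)
print(int(np.count_nonzero(c)), len(c))
```

Half of the coefficients vanish immediately; penalizing the number of nonzero Haar coefficients therefore drives the optimizer toward fluence maps built from a few rectangles, which is what the SOC can deliver.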
NASA Astrophysics Data System (ADS)
Dong, Jian; Kudo, Hiroyuki
2017-03-01
Compressed sensing (CS) is attracting growing interest in sparse-view computed tomography (CT) image reconstruction. The most standard CS approach is total variation (TV) minimization. However, images reconstructed by TV usually suffer from distortions, especially in reconstructions of practical CT images, in the form of patchy artifacts, improperly serrated edges, and loss of image textures. Most existing CS approaches, including TV, improve image quality by applying linear transforms to the object image, but linear transforms usually fail to account for discontinuities such as edges and image textures, which is considered the key reason for these distortions. Discussion of nonlinear-filter-based image processing has a long history, which makes clear that nonlinear filters yield better results than linear filters in image-processing tasks such as denoising. The median root prior was first utilized by Alenius as a nonlinear transform in CT image reconstruction, with significant gains. Subsequently, Zhang developed nonlocal-means-based CS. It is gradually becoming clear that nonlinear-transform-based CS improves image quality over linear-transform-based CS, although, to the best of our knowledge, this has not been clearly concluded in any previous paper. In this work, we investigated the image-quality differences between conventional TV minimization and nonlinear sparsifying-transform-based CS, as well as among different nonlinear sparsifying-transform-based CS methods, in sparse-view CT image reconstruction. Additionally, we accelerated the implementation of the nonlinear sparsifying-transform-based CS algorithm.
Enjilela, Esmaeil; Lee, Ting-Yim; Hsieh, Jiang; Wisenberg, Gerald; Teefy, Patrick; Yadegari, Andrew; Bagur, Rodrigo; Islam, Ali; Branch, Kelley; So, Aaron
2018-03-01
We implemented and validated a compressed sensing (CS) based algorithm for reconstructing dynamic contrast-enhanced (DCE) CT images of the heart from sparsely sampled X-ray projections. DCE CT imaging of the heart was performed on five normal and ischemic pigs after contrast injection. DCE images were reconstructed with filtered backprojection (FBP) and CS from all projections (984-view) and 1/3 of all projections (328-view), and with CS from 1/4 of all projections (246-view). Myocardial perfusion (MP) measurements with each protocol were compared to those with the reference 984-view FBP protocol. Both the 984-view CS and 328-view CS protocols were in good agreement with the reference protocol. The Pearson correlation coefficients of 984-view CS and 328-view CS determined from linear regression analyses were 0.98 and 0.99, respectively. The corresponding mean biases of MP measurement determined from Bland-Altman analyses were 2.7 and 1.2 ml/min/100g. When only 328 projections were used for image reconstruction, CS was more accurate than FBP for MP measurement with respect to 984-view FBP. However, CS failed to generate MP maps comparable to those with 984-view FBP when only 246 projections were used for image reconstruction. DCE heart images reconstructed from one-third of a full projection set with CS were minimally affected by aliasing artifacts, leading to accurate MP measurements with the effective dose reduced to just 33% of the conventional full-view FBP method. The proposed CS sparse-view image reconstruction method could facilitate the implementation of sparse-view dynamic acquisition for ultra-low dose CT MP imaging. Copyright © 2017 Elsevier B.V. All rights reserved.
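The two validation statistics used above, the Pearson correlation coefficient and the Bland-Altman mean bias, can be sketched on hypothetical paired perfusion values (the numbers below are invented for illustration, not the study's data):

```python
import numpy as np

# Hypothetical paired myocardial-perfusion measurements (ml/min/100g):
# reference 984-view FBP protocol vs a sparse-view protocol.
ref    = np.array([78., 95., 110., 64., 88., 102., 71., 120.])
sparse = np.array([80., 97., 112., 66., 91., 104., 74., 121.])

r = np.corrcoef(ref, sparse)[0, 1]             # Pearson correlation
diff = sparse - ref
bias = diff.mean()                             # Bland-Altman mean bias
loa = (bias - 1.96 * diff.std(ddof=1),
       bias + 1.96 * diff.std(ddof=1))         # 95% limits of agreement

print(round(r, 3), round(bias, 3), loa)
```

High correlation with a small mean bias and narrow limits of agreement is the pattern the study reports for the 328-view CS protocol.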
Biomedical sensor design using analog compressed sensing
NASA Astrophysics Data System (ADS)
Balouchestani, Mohammadreza; Krishnan, Sridhar
2015-05-01
The main drawback of current healthcare systems is their location-specific nature, due to the use of fixed/wired biomedical sensors. Since biomedical sensors are usually battery-driven, power consumption is the most important factor determining the life of a biomedical sensor. They are also restricted by size, cost, and transmission capacity. It is therefore important to reduce the sampling load by merging the sampling and compression steps, reducing storage usage, transmission time, and power consumption, in order to expand current healthcare systems into Wireless Healthcare Systems (WHSs). In this work, we present an implementation of a low-power biomedical sensor using an analog Compressed Sensing (CS) framework for sparse biomedical signals that addresses both the energy and telemetry bandwidth constraints of wearable and wireless Body-Area Networks (BANs). This architecture enables continuous data acquisition and compression of biomedical signals suitable for a variety of diagnostic and treatment purposes. At the transmitter side, the analog-CS framework is applied at the sensing step, before the Analog to Digital Converter (ADC), to generate the compressed version of the input analog bio-signal. At the receiver side, a reconstruction algorithm based on the Restricted Isometry Property (RIP) condition is applied to reconstruct the original bio-signals from the compressed bio-signals with high probability and sufficient accuracy. We examine the proposed algorithm with healthy and neuropathic surface Electromyography (sEMG) signals. The proposed algorithm achieves an Average Recognition Rate (ARR) of 93% and a reconstruction accuracy of 98.9%. In addition, the proposed architecture reduces the total computation time from 32 to 11.5 seconds at a sampling rate of 29% of the Nyquist rate, with a Percentage Residual Difference (PRD) of 26% and a Root Mean Squared Error (RMSE) of 3%.
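The two reconstruction-quality metrics quoted above can be stated precisely in a few lines; the toy signal is invented, and only the metric definitions are taken from the text:

```python
import numpy as np

def prd(x, x_rec):
    # Percentage Residual Difference between original and reconstruction
    return 100.0 * np.linalg.norm(x - x_rec) / np.linalg.norm(x)

def rmse(x, x_rec):
    # Root Mean Squared Error
    return np.sqrt(np.mean((x - x_rec) ** 2))

x = np.array([1.0, 2.0, 3.0, 4.0])     # toy "original" bio-signal
x_rec = x + 0.1                        # reconstruction with uniform error

print(round(prd(x, x_rec), 2), round(rmse(x, x_rec), 3))
```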
Reliability of Eustachian tube function measurements in a hypobaric and hyperbaric pressure chamber.
Meyer, M F; Jansen, S; Mordkovich, O; Hüttenbrink, K-B; Beutner, D
2017-12-01
Measurement of Eustachian tube (ET) function is a challenge. The demand for a precise and meaningful diagnostic tool is increasing, especially because more and more operative therapies are being offered without objective evidence. The measurement of ET function by continuous impedance recording in a pressure chamber is an established method, although the reliability of the measurements is still unclear. Twenty-five participants (50 ears) were exposed to phases of compression and decompression in a hypo- and hyperbaric pressure chamber. The parameters reflecting ET function, namely ET opening pressure (ETOP), ET opening duration (ETOD), and ET opening frequency (ETOF), were determined under exactly the same preconditions three times in a row. The intraclass correlation coefficient (ICC) and Bland-Altman plots were used to assess test-retest reliability. ICCs revealed a high correlation for ETOP and ETOF in phases of decompression (passive equalisation) as well as for ETOD and ETOP in phases of compression (actively induced equalisation). A very high correlation was shown for ETOD in decompression and ETOF in compression phases. The Bland-Altman graphs showed that measurements provide results within a 95% confidence interval in compression and decompression phases. We conclude that measurements in a pressure chamber are a very valuable tool for estimating ET opening and closing function. Measurements show some variance across participants, but provide reliable results within a 95% confidence interval on retest. This study is the basis for enabling efficacy measurements of ET treatment modalities. © 2017 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Guthier, C.; Aschenbrenner, K. P.; Buergy, D.; Ehmann, M.; Wenz, F.; Hesser, J. W.
2015-03-01
This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the problem of inverse treatment planning and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional a new variant of a matching pursuit type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms by means of real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed, while achieving comparable quality. It is able to find solutions with comparable values for the objective function and it achieves these results within a few microseconds, being up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced.
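A generic matching pursuit iteration, the family this work's solver belongs to, can be sketched as follows; this is the textbook greedy loop on a random dictionary, not the authors' TG-43-aware variant:

```python
import numpy as np

def matching_pursuit(D, y, n_iter):
    # Greedy loop: pick the atom most correlated with the residual,
    # add its contribution, and repeat.
    r = y.copy()
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        j = int(np.argmax(np.abs(D.T @ r)))
        c = D[:, j] @ r
        x[j] += c
        r -= c * D[:, j]
    return x

rng = np.random.default_rng(4)
D = rng.standard_normal((100, 15))
D /= np.linalg.norm(D, axis=0)          # unit-norm dictionary atoms
y = 2.0 * D[:, 3] - 1.0 * D[:, 11]      # a sparse combination to recover
x = matching_pursuit(D, y, n_iter=15)
print(np.linalg.norm(D @ x - y))
```

Each iteration costs only one correlation sweep and one update, which is why pursuit-type solvers can reach microsecond-scale planning times; the sparsity of the recovered coefficient vector is what translates into few needles and seeds.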
Collaborative Wideband Compressed Signal Detection in Interplanetary Internet
NASA Astrophysics Data System (ADS)
Wang, Yulin; Zhang, Gengxin; Bian, Dongming; Gou, Liang; Zhang, Wei
2014-07-01
With the development of autonomous radio in deep space networks, it becomes possible to establish communication between explorers, aircraft, rovers, and satellites, e.g. from different countries, adopting different signal modes. The first task of an autonomous radio is to detect the explorer's signals autonomously without disturbing the original communication. This paper develops a collaborative wideband compressed signal detection approach for the InterPlaNetary (IPN) Internet, where sparse active signals exist in the deep space environment. Compressed sensing (CS) can be utilized by exploiting the sparsity of IPN Internet communication signals, whose useful frequency support occupies only a small portion of an entirely wide spectrum. An estimate of the signal spectrum can be obtained by using reconstruction algorithms. Against deep space shadowing and channel fading, multiple satellites collaboratively sense and make a final decision according to a fusion rule to gain spatial diversity. Novel discrete cosine transform (DCT)- and Walsh-Hadamard transform (WHT)-based compressed spectrum detection methods are proposed, which significantly improve the performance of spectrum recovery and signal detection. Finally, extensive simulation results are presented to show the effectiveness of the proposed collaborative scheme for signal detection in the IPN Internet. Compared with the conventional discrete Fourier transform (DFT) based method, the DCT- and WHT-based methods reduce computational complexity, decrease processing time, save energy, and enhance the probability of detection.
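The DCT-based detection idea, a signal occupying a small portion of the spectrum yields a few large transform coefficients that a simple threshold can pick out, can be sketched by building the orthonormal DCT-II matrix directly (sizes, occupied indices, noise level, and threshold below are illustrative assumptions):

```python
import numpy as np

N = 64
n = np.arange(N)
# Orthonormal DCT-II matrix, constructed explicitly (no SciPy dependency)
C = np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / N) * np.sqrt(2.0 / N)
C[0] /= np.sqrt(2.0)

# A signal whose spectrum occupies only two DCT bins, plus mild noise
rng = np.random.default_rng(2)
coef_true = np.zeros(N)
coef_true[[5, 12]] = [4.0, 3.0]
x = C.T @ coef_true + 0.05 * rng.standard_normal(N)

coef = C @ x                                    # analysis in the DCT domain
detected = np.flatnonzero(np.abs(coef) > 1.0)   # simple energy threshold
print(detected)
```

Because the transform is orthonormal, the noise stays flat across bins while the occupied bins stand far above the threshold, which is the basis of the detection-probability gains reported for the DCT/WHT methods.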
Double-row vs single-row rotator cuff repair: a review of the biomechanical evidence.
Wall, Lindley B; Keener, Jay D; Brophy, Robert H
2009-01-01
A review of the current literature will show a difference between the biomechanical properties of double-row and single-row rotator cuff repairs. Rotator cuff tears commonly necessitate surgical repair; however, the optimal technique for repair continues to be investigated. Recently, double-row repairs have been considered an alternative to single-row repair, allowing a greater coverage area for healing and a possibly stronger repair. We reviewed the literature of all biomechanical studies comparing double-row vs single-row repair techniques. Inclusion criteria were studies using cadaveric, animal, or human models that directly compared double-row vs single-row repair techniques, written in the English language, and published in peer-reviewed journals. Identified articles were reviewed to provide a comprehensive conclusion of the biomechanical strength and integrity of the repair techniques. Fifteen studies were identified and reviewed. Nine studies showed a statistically significant advantage to a double-row repair with regard to biomechanical strength, failure, and gap formation. Three studies produced results that did not show any statistical advantage. Five studies that directly compared footprint reconstruction all demonstrated that the double-row repair was superior to a single-row repair in restoring anatomy. The current literature reveals that the biomechanical properties of a double-row rotator cuff repair are superior to a single-row repair. Basic Science Study, SRH = Single vs. Double Row RCR.
Single-row versus double-row rotator cuff repair: techniques and outcomes.
Dines, Joshua S; Bedi, Asheesh; ElAttrache, Neal S; Dines, David M
2010-02-01
Double-row rotator cuff repair techniques incorporate a medial and lateral row of suture anchors in the repair configuration. Biomechanical studies of double-row repair have shown increased load to failure, improved contact areas and pressures, and decreased gap formation at the healing enthesis, findings that have provided impetus for clinical studies comparing single-row with double-row repair. Clinical studies, however, have not yet demonstrated a substantial improvement over single-row repair with regard to either the degree of structural healing or functional outcomes. Although double-row repair may provide an improved mechanical environment for the healing enthesis, several confounding variables have complicated attempts to establish a definitive relationship with improved rates of healing. Appropriately powered rigorous level I studies that directly compare single-row with double-row techniques in matched tear patterns are necessary to further address these questions. These studies are needed to justify the potentially increased implant costs and surgical times associated with double-row rotator cuff repair.
Prediction of Aerodynamic Coefficients using Neural Networks for Sparse Data
NASA Technical Reports Server (NTRS)
Rajkumar, T.; Bardina, Jorge; Clancy, Daniel (Technical Monitor)
2002-01-01
Basic aerodynamic coefficients are modeled as functions of angle of attack and sideslip, with vehicle lateral symmetry and compressibility effects. Most of the aerodynamic parameters can be well fitted using polynomial functions. In this paper a fast, reliable way of predicting aerodynamic coefficients using a neural network is presented. The training data for the neural network are derived from wind tunnel tests and numerical simulations. The coefficients of lift, drag, and pitching moment are expressed as functions of alpha (angle of attack) and Mach number. The results produced from the preliminary neural network analysis are very good.
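The claim that the coefficients are well fitted by polynomials can be illustrated with a one-variable fit; the lift-curve samples below are synthetic stand-ins for wind tunnel data, not values from the paper:

```python
import numpy as np

# Hypothetical lift-coefficient samples vs angle of attack (deg),
# in the linear regime of the lift curve.
alpha = np.array([-4., -2., 0., 2., 4., 6., 8.])
cl = 0.1 * alpha + 0.25          # synthetic "measured" data

coeffs = np.polyfit(alpha, cl, deg=1)   # polynomial fit, highest power first
print(coeffs.round(3))
```

A neural network generalizes this idea to the full two-variable (alpha, Mach) surface without hand-picking polynomial terms.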
The kinetics of rugby union scrummaging.
Milburn, P D
1990-01-01
Two rugby union forward packs of differing ability levels were examined during scrummaging against an instrumented scrum machine. By systematically moving the front-row of the scrum along the scrum machine, kinetic data on each front-row forward could be obtained under all test conditions. Each forward pack was tested under the following scrummaging combinations: front-row only; front-row plus second-row; full scrum minus side-row, and full scrum. Data obtained from each scrum included the three orthogonal components of force at engagement and the sustained force applied by each front-row player. An estimate of sub-unit contributions was made by subtracting the total forward force on all three front-row players from the total for the complete scrum. Results indicated the primary role of the second-row appeared to be application of forward force. The back-row ('number eight') forward did not substantially contribute any additional forward force, and added only slightly to the lateral and vertical shear force experienced by the front-row. The side-row contributed an additional 20-27% to the forward force, but at the expense of increased vertical forces on all front-row forwards. Results of this investigation are discussed in relation to rule modification, rule interpretation and coaching.
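The subtraction method for estimating sub-unit contributions amounts to simple arithmetic; the force values below are invented for illustration, not the study's measurements:

```python
# Hypothetical sustained forward forces (newtons), illustrating the
# subtraction method for estimating sub-unit contributions.
front_row_total = 4200.0    # summed forward force on the three front-row players
full_scrum_total = 6900.0   # total forward force for the complete scrum

rear_unit_contribution = full_scrum_total - front_row_total
share = rear_unit_contribution / full_scrum_total
print(rear_unit_contribution, round(share, 2))
```

Repeating the subtraction across the tested combinations (front-row only, plus second-row, minus side-row, full scrum) isolates each sub-unit's forward-force contribution.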
The improved Apriori algorithm based on matrix pruning and weight analysis
NASA Astrophysics Data System (ADS)
Lang, Zhenhong
2018-04-01
This paper draws on matrix compression and weight analysis algorithms and proposes an improved Apriori algorithm based on matrix pruning and weight analysis. After the transactional database is scanned only once, the algorithm constructs a Boolean transaction matrix. By counting the ones in the rows and columns of the matrix, infrequent item sets are pruned and a new candidate item set is formed. Then the item weights, transaction weights, and weighted support of items are calculated, and the frequent item sets are obtained. Experimental results show that the improved Apriori algorithm not only reduces the number of repeated scans of the database, but also improves the efficiency of data-correlation mining.
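The pruning step can be sketched on a small Boolean transaction matrix: counting the ones in each column gives item supports, and infrequent columns are dropped before candidate generation (the matrix and threshold are invented; the weighting step is omitted):

```python
import numpy as np

# Boolean transaction matrix: rows = transactions, columns = items
T = np.array([[1, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 1, 0, 0]], dtype=bool)

min_support = 2
col_counts = T.sum(axis=0)                   # ones per column = item support
frequent = np.flatnonzero(col_counts >= min_support)
pruned = T[:, frequent]                      # drop infrequent items up front
print(frequent.tolist(), col_counts.tolist())
```

Because the matrix is built in a single database scan, all subsequent support counts are column operations on `T` rather than further passes over the database.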
Co-clustering directed graphs to discover asymmetries and directional communities
Rohe, Karl; Qin, Tai; Yu, Bin
2016-01-01
In directed graphs, relationships are asymmetric and these asymmetries contain essential structural information about the graph. Directed relationships lead to a new type of clustering that is not feasible in undirected graphs. We propose a spectral co-clustering algorithm called di-sim for asymmetry discovery and directional clustering. A Stochastic co-Blockmodel is introduced to show favorable properties of di-sim. To account for the sparse and highly heterogeneous nature of directed networks, di-sim uses the regularized graph Laplacian and projects the rows of the eigenvector matrix onto the sphere. A nodewise asymmetry score and di-sim are used to analyze the clustering asymmetries in the networks of Enron emails, political blogs, and the Caenorhabditis elegans chemical connectome. In each example, a subset of nodes have clustering asymmetries; these nodes send edges to one cluster, but receive edges from another cluster. Such nodes yield insightful information (e.g., communication bottlenecks) about directed networks, but are missed if the analysis ignores edge direction. PMID:27791058
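The spectral machinery described above, a regularized graph Laplacian whose left and right singular vectors separate sending and receiving behavior, can be sketched on a toy directed graph (the graph, the regularizer choice, and the omission of the final k-means step are simplifications of di-sim):

```python
import numpy as np

# Toy directed adjacency matrix: nodes 0-2 all send edges to nodes 3-5.
A = np.zeros((6, 6))
A[np.ix_([0, 1, 2], [3, 4, 5])] = 1.0

tau = A.sum() / A.shape[0]                  # average-degree regularizer
d_out = A.sum(axis=1) + tau
d_in = A.sum(axis=0) + tau
L = np.diag(d_out ** -0.5) @ A @ np.diag(d_in ** -0.5)  # regularized Laplacian

U, s, Vt = np.linalg.svd(L)
# Left singular vectors embed nodes by sending behavior, right singular
# vectors by receiving behavior; di-sim would then project rows onto the
# sphere and cluster them.
print(np.abs(U[:, 0]).round(2), np.abs(Vt[0]).round(2))
```

The leading left vector is supported on the senders (0-2) and the leading right vector on the receivers (3-5), which is exactly the asymmetry an undirected spectral method would miss.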
Sanders, Toby; Gelb, Anne; Platte, Rodrigo B.; ...
2017-01-03
Over the last decade or so, reconstruction methods using ℓ1 regularization, often categorized as compressed sensing (CS) algorithms, have significantly improved the capabilities of high-fidelity imaging in electron tomography. The most popular ℓ1 regularization approach within electron tomography has been total variation (TV) regularization. In addition to reducing unwanted noise, TV regularization encourages a piecewise constant solution with sparse boundary regions. In this paper we propose an alternative ℓ1 regularization approach for electron tomography based on higher-order total variation (HOTV). Like TV, the HOTV approach promotes solutions with sparse boundary regions; in smooth regions, however, the solution is not limited to piecewise constant behavior. We demonstrate that this allows for more accurate reconstruction of a broader class of images, even those that TV was designed for, particularly when dealing with pragmatic tomographic sampling patterns and very fine image features. Finally, we present results for an electron tomography data set as well as a phantom example, and we also make comparisons with discrete tomography approaches.
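The difference between TV and a higher-order penalty is visible on a one-dimensional ramp: first differences charge a smooth linear region, second differences do not. This is a schematic of the penalty behavior only, not the HOTV reconstruction algorithm:

```python
import numpy as np

x_ramp = np.linspace(0.0, 2.0, 20)     # smooth (piecewise-linear) profile

tv   = lambda x: np.abs(np.diff(x)).sum()        # first-order TV penalty
hotv = lambda x: np.abs(np.diff(x, n=2)).sum()   # second-order (HOTV-like) penalty

# TV charges the ramp for every step; the second-order penalty does not.
print(round(tv(x_ramp), 3), round(hotv(x_ramp), 3))
```

Minimizing TV therefore pushes ramps toward staircases (the "patchy" look), while a second-order penalty leaves smooth gradients untouched.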
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghysels, Pieter; Li, Xiaoye S.; Rouet, Francois -Henry
Here, we present a sparse linear system solver that is based on a multifrontal variant of Gaussian elimination and exploits low-rank approximation of the resulting dense frontal matrices. We use hierarchically semiseparable (HSS) matrices, which have low-rank off-diagonal blocks, to approximate the frontal matrices. For HSS matrix construction, a randomized sampling algorithm is used together with interpolative decompositions. The combination of the randomized compression with a fast ULV HSS factorization leads to a solver with lower computational complexity than the standard multifrontal method for many applications, resulting in speedups up to 7 fold for problems in our test suite. The implementation targets many-core systems by using task parallelism with dynamic runtime scheduling. Numerical experiments show performance improvements over state-of-the-art sparse direct solvers. The implementation achieves high performance and good scalability on a range of modern shared memory parallel systems, including the Intel Xeon Phi (MIC). The code is part of a software package called STRUMPACK - STRUctured Matrices PACKage, which also has a distributed memory component for dense rank-structured matrices.
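The payoff of low-rank compression of dense frontal blocks can be sketched with a truncated SVD (a simple stand-in for the randomized HSS construction actually used; the block and its rank are synthetic):

```python
import numpy as np

rng = np.random.default_rng(3)
# Dense block with exact low-rank structure, standing in for a low-rank
# off-diagonal block of a frontal matrix.
B = rng.standard_normal((200, 8)) @ rng.standard_normal((8, 200))

U, s, Vt = np.linalg.svd(B, full_matrices=False)
r = int(np.sum(s > 1e-10 * s[0]))       # numerical rank of the block
B_r = (U[:, :r] * s[:r]) @ Vt[:r]       # compressed representation

storage_full = B.size
storage_lowrank = U[:, :r].size + r + Vt[:r].size
print(r, storage_lowrank, storage_full)
```

Storing the factors instead of the dense block cuts memory from O(n²) to O(nr), and the same factors accelerate the triangular solves inside the ULV factorization.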
2016-10-27
Task-driven dictionary learning.
Mairal, Julien; Bach, Francis; Ponce, Jean
2012-04-01
Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience, and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as regression tasks for data that admit sparse representations.
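The inner loop of such dictionary learning alternates sparse coding against a fixed dictionary with a dictionary update against fixed codes. A minimal unsupervised sketch of that loop (ISTA for the sparse-coding step and a least-squares MOD-style dictionary update; all sizes and the regularization value are illustrative, and the paper's task-driven supervised formulation is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def ista_sparse_code(D, x, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||x - D@a||^2 + lam*||a||_1 with ISTA."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = a - D.T @ (D @ a - x) / L        # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a

# Toy data: signals that truly are sparse combinations of unknown atoms.
n, k, m = 20, 30, 100
D_true = rng.standard_normal((n, k))
D_true /= np.linalg.norm(D_true, axis=0)
A_true = np.where(rng.random((k, m)) < 0.1, rng.standard_normal((k, m)), 0.0)
X = D_true @ A_true

# Alternate sparse coding (ISTA) with a least-squares dictionary update (MOD).
D = rng.standard_normal((n, k))
D /= np.linalg.norm(D, axis=0)
for _ in range(10):
    A = np.column_stack([ista_sparse_code(D, X[:, j]) for j in range(m)])
    D = X @ np.linalg.pinv(A)                # MOD update: D = X A^+
    D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)

# Final sparse codes and reconstruction quality for the learned dictionary.
A = np.column_stack([ista_sparse_code(D, X[:, j]) for j in range(m)])
recon_err = np.linalg.norm(X - D @ A) / np.linalg.norm(X)
sparsity = np.mean(A != 0.0)
```

The supervised variant in the paper differentiates through this sparse-coding step with respect to a task loss; the sketch above only shows the reconstructive core.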
Sequential time interleaved random equivalent sampling for repetitive signal.
Zhao, Yijiu; Liu, Jingjing
2016-12-01
Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they are also incorporated into non-uniform sampling signal reconstruction, such as random equivalent sampling (RES), to improve efficiency. However, in CS-based RES, only one sample of each acquisition is considered in the signal reconstruction stage, which results in more acquisition runs and longer sampling time. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using the Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-core analog-to-digital converter (ADC) whose cores are time-interleaved. A prototype realization of this proposed CS-based sequential random equivalent sampling method has been developed. It is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while physically sampling at 1 GHz. Experiments indicate that, for a sparse signal, the proposed CS-based sequential random equivalent sampling exhibits high efficiency.
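The block measurement matrices described above can be sketched directly: assuming a band-limited signal on a fine equivalent-time grid, each acquisition run contributes rows that evaluate the signal at that run's physical sample times via Whittaker-Shannon interpolation. All names and parameter values below are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

# Grids and rates (values echoing the 40 GHz equivalent / 1 GHz physical setup).
N = 512                          # fine (equivalent-time) grid length
fs_eq, fs = 40e9, 1e9            # equivalent and physical sampling rates
M_run, runs = 8, 6               # samples per acquisition run, number of runs
t_fine = np.arange(N) / fs_eq

blocks, t_all = [], []
for _ in range(runs):
    offset = rng.uniform(0, fs_eq / fs) / fs_eq   # random sub-period trigger delay
    t_run = offset + np.arange(M_run) / fs
    # Whittaker-Shannon interpolation: row i evaluates a band-limited signal
    # given on the fine grid at the run's i-th physical sample time.
    B = np.sinc((t_run[:, None] - t_fine[None, :]) * fs_eq)
    blocks.append(B)
    t_all.append(t_run)

Phi = np.vstack(blocks)           # equivalent measurement matrix over all runs
t_all = np.concatenate(t_all)

# Sanity check: measuring a band-limited tone through Phi should match
# sampling the tone directly at the physical sample times.
f0 = 3.3e9
y_phi = Phi @ np.cos(2 * np.pi * f0 * t_fine)
y_ref = np.cos(2 * np.pi * f0 * t_all)
max_dev = np.max(np.abs(y_phi - y_ref))
```

In the paper's setting `Phi` would then feed a CS solver to recover the fine-grid signal; the small deviation here comes only from truncating the sinc interpolation to a finite grid.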
NASA Astrophysics Data System (ADS)
Vahidi, Vahid; Saberinia, Ebrahim; Regentova, Emma E.
2017-10-01
A channel estimation (CE) method based on compressed sensing (CS) is proposed to estimate the sparse and doubly selective (DS) channel for hyperspectral image transmission from unmanned aircraft vehicles to ground stations. The proposed method contains three steps: (1) an a priori estimate of the channel by orthogonal matching pursuit (OMP), (2) calculation of the linear minimum mean square error (LMMSE) estimate of the received pilots given the estimated channel, and (3) estimation of the complex amplitudes and Doppler shifts of the channel from the enhanced received pilot data by applying a second round of a CS algorithm. The proposed method is named DS-LMMSE-OMP, and its performance is evaluated by simulating transmission of AVIRIS hyperspectral data via the communication channel and assessing their fidelity for automated analysis after demodulation. The performance of the DS-LMMSE-OMP approach is compared with that of two other state-of-the-art CE methods. The simulation results exhibit up to an 8-dB improvement in bit error rate and a 50% improvement in hyperspectral image classification accuracy.
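Step (1) relies on orthogonal matching pursuit, a standard greedy CS solver. A generic sketch with a toy Gaussian sensing matrix (nothing here is specific to the DS-LMMSE-OMP pipeline):

```python
import numpy as np

rng = np.random.default_rng(2)

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k atoms of A, refitting
    all selected coefficients by least squares at every step."""
    r = y.astype(float).copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))           # best-correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef                  # update residual
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Toy sparse recovery problem: k-sparse vector, random sensing matrix.
m, n, k = 60, 128, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
idx = rng.choice(n, size=k, replace=False)
x_true[idx] = rng.uniform(1.0, 2.0, size=k) * rng.choice([-1.0, 1.0], size=k)
y = A @ x_true

x_hat = omp(A, y, k)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

With enough measurements relative to the sparsity level, the greedy selection recovers the true support and the least-squares refit recovers the coefficients essentially exactly.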
DeHaan, Alexander M; Axelrad, Thomas W; Kaye, Elizabeth; Silvestri, Lorenzo; Puskas, Brian; Foster, Timothy E
2012-05-01
The advantage of single-row versus double-row arthroscopic rotator cuff repair techniques has been a controversial issue in sports medicine and shoulder surgery. There is biomechanical evidence that double-row techniques are superior to single-row techniques; however, there is no clinical evidence that the double-row technique provides an improved functional outcome. When compared with single-row rotator cuff repair, double-row fixation, although biomechanically superior, has no clinical benefit with respect to retear rate or improved functional outcome. Systematic review. The authors reviewed prospective studies of level I or II clinical evidence that compared the efficacy of single- and double-row rotator cuff repairs. Functional outcome scores included the American Shoulder and Elbow Surgeons (ASES) shoulder scale, the Constant shoulder score, and the University of California, Los Angeles (UCLA) shoulder rating scale. Radiographic failures and complications were also analyzed. A test of heterogeneity for patient demographics was also performed to determine if there were differences in the patient profiles across the included studies. Seven studies fulfilled our inclusion criteria. The test of heterogeneity across these studies showed no differences. The functional ASES, Constant, and UCLA outcome scores revealed no difference between single- and double-row rotator cuff repairs. The total retear rate, which included both complete and partial retears, was 43.1% for the single-row repair and 27.2% for the double-row repair (P = .057), representing a trend toward higher failures in the single-row group. Through a comprehensive literature search and meta-analysis of current arthroscopic rotator cuff repairs, we found that the single-row repairs did not differ from the double-row repairs in functional outcome scores. The double-row repairs revealed a trend toward a lower radiographic proven retear rate, although the data did not reach statistical significance. 
There may be a concerning trend toward higher retear rates in patients undergoing a single-row repair, but further studies are required.
Optimizing Unmanned Aircraft System Scheduling
2008-06-01
[Record excerpt garbled in extraction: fragments of the thesis's Visual Basic mission-scheduling code (loops over a mission table writing to a log file); no abstract text was recovered.]
Oweiss, Karim G
2006-07-01
This paper suggests a new approach for data compression during extracutaneous transmission of neural signals recorded by high-density microelectrode arrays in the cortex. The approach is based on exploiting the temporal and spatial characteristics of the neural recordings in order to strip the redundancy and infer the useful information early in the data stream. The proposed signal processing algorithms augment current filtering and amplification capability and may be a viable replacement for the on-chip spike detection and sorting currently employed to remedy the bandwidth limitations. Temporal processing is devised by exploiting the sparseness capabilities of the discrete wavelet transform, while spatial processing exploits the reduction in the number of physical channels through quasi-periodic eigendecomposition of the data covariance matrix. Our results demonstrate that substantial improvements are obtained in terms of lower transmission bandwidth, reduced latency and optimized processor utilization. We also demonstrate the improvements qualitatively in terms of superior denoising capabilities and higher fidelity of the obtained signals.
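The spatial-processing idea, reducing many physical channels to a few virtual ones via eigendecomposition of the data covariance, can be sketched as follows (synthetic data standing in for electrode recordings; dimensions, noise level, and the 99% energy threshold are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
channels, T, sources = 32, 2000, 3

# Synthetic recordings: a few underlying sources mixed across many channels
# plus sensor noise, mimicking the redundancy of a dense electrode array.
S = rng.standard_normal((sources, T))
mixing = rng.standard_normal((channels, sources))
X = mixing @ S + 0.05 * rng.standard_normal((channels, T))

# Eigendecomposition of the channel covariance; keep the dominant directions.
C = np.cov(X)
w, V = np.linalg.eigh(C)                  # eigenvalues in ascending order
w, V = w[::-1], V[:, ::-1]                # reorder to descending

energy = np.cumsum(w) / np.sum(w)
r = int(np.searchsorted(energy, 0.99) + 1)    # components for 99% of variance

Y = V[:, :r].T @ X                        # reduced "virtual channel" stream
X_rec = V[:, :r] @ Y                      # reconstruction for a fidelity check
rel_err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
```

The transmitted stream `Y` has `r` rows instead of 32, which is the bandwidth reduction the abstract describes; the reconstruction error is governed by the discarded noise-dominated eigendirections.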
Multidimensional Compressed Sensing MRI Using Tensor Decomposition-Based Sparsifying Transform
Yu, Yeyang; Jin, Jin; Liu, Feng; Crozier, Stuart
2014-01-01
Compressed Sensing (CS) has been applied in dynamic Magnetic Resonance Imaging (MRI) to accelerate the data acquisition without noticeably degrading the spatial-temporal resolution. A suitable sparsity basis is one of the key components to successful CS applications. Conventionally, a multidimensional dataset in dynamic MRI is treated as a series of two-dimensional matrices, and then various matrix/vector transforms are used to explore the image sparsity. Traditional methods typically sparsify the spatial and temporal information independently. In this work, we propose a novel concept of tensor sparsity for the application of CS in dynamic MRI, and present the Higher-order Singular Value Decomposition (HOSVD) as a practical example. Applications presented for three- and four-dimensional MRI data demonstrate that HOSVD simultaneously exploits the correlations within spatial and temporal dimensions. Validations based on cardiac datasets indicate that the proposed method achieved reconstruction accuracy comparable with the low-rank matrix recovery methods and outperformed the conventional sparse recovery methods. PMID:24901331
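A compact sketch of HOSVD itself, the decomposition the abstract presents (a small random third-order tensor stands in for an x-y-time MRI block; with full ranks retained, the decomposition reconstructs the tensor exactly):

```python
import numpy as np

def mode_mult(T, M, mode):
    """Mode-n product: multiply matrix M into tensor mode `mode`."""
    Tm = np.moveaxis(T, mode, 0)
    out = (M @ Tm.reshape(Tm.shape[0], -1)).reshape((M.shape[0],) + Tm.shape[1:])
    return np.moveaxis(out, 0, mode)

def hosvd(T):
    """Higher-order SVD: per-mode orthonormal factors and the core tensor."""
    # Left singular vectors of each mode-n unfolding give the factor matrices.
    U = [np.linalg.svd(np.moveaxis(T, m, 0).reshape(T.shape[m], -1),
                       full_matrices=False)[0] for m in range(T.ndim)]
    # Core tensor: project every mode onto its factor matrix.
    core = T
    for m in range(T.ndim):
        core = mode_mult(core, U[m].T, m)
    return core, U

rng = np.random.default_rng(4)
X = rng.standard_normal((6, 7, 8))        # stand-in for an x-y-time block
core, U = hosvd(X)

# Reconstruction: X = core x_1 U[0] x_2 U[1] x_3 U[2].
X_rec = core
for m in range(X.ndim):
    X_rec = mode_mult(X_rec, U[m], m)
```

In the CS-MRI setting, the sparsifying step truncates the core tensor: for correlated spatio-temporal data most of its energy concentrates in a few entries, which is the tensor sparsity the method exploits.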
Xu, Jason; Minin, Vladimir N
2015-07-01
Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.
A Formal Messaging Notation for Alaskan Aviation Data
NASA Technical Reports Server (NTRS)
Rios, Joseph L.
2015-01-01
Data exchange is an increasingly important aspect of the National Airspace System. While many data communication channels have become more capable of sending and receiving data at higher throughput rates, there is still a need to use communication channels with limited throughput efficiently. The limitation can be based on technological issues, financial considerations, or both. This paper provides a complete description of several important aviation weather data in Abstract Syntax Notation format. By doing so, data providers can take advantage of Abstract Syntax Notation's ability to encode data in a highly compressed format. When data such as pilot weather reports, surface weather observations, and various weather predictions are compressed in such a manner, it allows for the efficient use of throughput-limited communication channels. This paper provides details on the Abstract Syntax Notation One (ASN.1) implementation for Alaskan aviation data and demonstrates its use on real-world aviation weather data samples; Alaska has sparse terrestrial data infrastructure, and data are often sent via relatively costly satellite channels.
Single-snapshot DOA estimation by using Compressed Sensing
NASA Astrophysics Data System (ADS)
Fortunati, Stefano; Grasso, Raffaele; Gini, Fulvio; Greco, Maria S.; LePage, Kevin
2014-12-01
This paper deals with the problem of estimating the directions of arrival (DOA) of multiple source signals from a single observation vector of array data. In particular, four estimation algorithms based on the theory of compressed sensing (CS) are analyzed: the classical ℓ1 minimization (or Least Absolute Shrinkage and Selection Operator, LASSO), the fast smoothed ℓ0 minimization, the Sparse Iterative Covariance-based Estimator (SPICE), and the Iterative Adaptive Approach for Amplitude and Phase Estimation (IAA-APES). Their statistical properties are investigated and compared with the classical Fourier beamformer (FB) in different simulated scenarios. We show that unlike the classical FB, a CS-based beamformer (CSB) has some desirable properties typical of the adaptive algorithms (e.g., Capon and MUSIC) even in the single snapshot case. Particular attention is devoted to the super-resolution property. Theoretical arguments and simulation analysis provide evidence that a CS-based beamformer can achieve resolution beyond the classical Rayleigh limit. Finally, the theoretical findings are validated by processing a real sonar dataset.
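The ℓ1-minimization (LASSO) beamformer in the single-snapshot case can be sketched with a complex-valued ISTA iteration over a DOA grid (array size, source angles, amplitudes, and the regularization weight are all illustrative choices, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(5)
M = 16                                         # half-wavelength ULA elements
grid_deg = np.arange(-90.0, 90.5, 1.0)         # DOA search grid
grid = np.deg2rad(grid_deg)
A = np.exp(1j * np.pi * np.arange(M)[:, None] * np.sin(grid)[None, :])

# One snapshot from two sources plus noise.
true_deg = np.array([-20.0, 15.0])
As = np.exp(1j * np.pi * np.arange(M)[:, None]
            * np.sin(np.deg2rad(true_deg))[None, :])
y = As @ np.array([1.0, 0.8]) \
    + 0.01 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

# Complex-valued ISTA for the LASSO beamformer:
#   min_x 0.5*||y - A x||^2 + lam*||x||_1
lam = 1.0
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant
x = np.zeros(A.shape[1], dtype=complex)
for _ in range(1500):
    z = x - A.conj().T @ (A @ x - y) / L
    mag = np.abs(z)
    x = z * np.maximum(1.0 - (lam / L) / np.maximum(mag, 1e-12), 0.0)

# Pick the two strongest, well-separated peaks of the sparse spectrum.
p = np.abs(x)
i1 = int(np.argmax(p))
i2 = int(np.argmax(np.where(np.abs(np.arange(p.size) - i1) > 5, p, 0.0)))
est_deg = np.sort(grid_deg[[i1, i2]])
```

Unlike the Fourier beamformer's broad mainlobes, the sparse spectrum `p` concentrates on a few grid points near the true DOAs, which is the super-resolution behavior the abstract discusses.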
Elastic Moduli of Permanently Densified Silica Glasses
Deschamps, T.; Margueritat, J.; Martinet, C.; Mermet, A.; Champagnon, B.
2014-01-01
Modelling the mechanical response of silica glass is still challenging, due to the lack of knowledge concerning the elastic properties of intermediate states of densification. An extensive Brillouin light scattering study of permanently densified silica glasses after cold compression in a diamond anvil cell has been carried out, in order to deduce the elastic properties of such glasses and to provide new insights concerning the densification process. From sound velocity measurements, we derive phenomenological laws expressing the elastic moduli of silica glass as a function of its densification ratio. The resulting elastic moduli are in excellent agreement with the sparse data extracted from the literature, and we show that they do not depend on the thermodynamic path taken during densification (room temperature or heating). We also demonstrate that the longitudinal sound velocity exhibits an anomalous behavior, displaying a minimum for a densification ratio of 5%, and highlight the fact that this anomaly has to be distinguished from the compressibility anomaly of a-SiO2 in the elastic domain. PMID:25431218
MIMO channel estimation and evaluation for airborne traffic surveillance in cellular networks
NASA Astrophysics Data System (ADS)
Vahidi, Vahid; Saberinia, Ebrahim
2018-01-01
A channel estimation (CE) procedure based on compressed sensing is proposed to estimate the multiple-input multiple-output sparse channel for traffic data transmission from drones to ground stations. The proposed procedure consists of an offline phase and a real-time phase. In the offline phase, a pilot arrangement method, which considers the interblock and block mutual coherence simultaneously, is proposed. The real-time phase contains three steps. At the first step, it obtains an a priori estimate of the channel by block orthogonal matching pursuit; afterward, it utilizes that estimated channel to calculate the linear minimum mean square error estimate of the received pilots. Finally, the block compressive sampling matching pursuit utilizes the enhanced received pilots to estimate the channel more accurately. The performance of the CE procedure is evaluated by simulating the transmission of traffic data through the communication channel and evaluating its fidelity for car detection after demodulation. Simulation results indicate that the proposed CE technique considerably enhances the performance of car detection in a traffic image.
High-Performance 3D Compressive Sensing MRI Reconstruction Using Many-Core Architectures
Kim, Daehyun; Trzasko, Joshua; Smelyanskiy, Mikhail; Haider, Clifton; Dubey, Pradeep; Manduca, Armando
2011-01-01
Compressive sensing (CS) describes how sparse signals can be accurately reconstructed from many fewer samples than required by the Nyquist criterion. Since MRI scan duration is proportional to the number of acquired samples, CS has been gaining significant attention in MRI. However, the computationally intensive nature of CS reconstructions has precluded their use in routine clinical practice. In this work, we investigate how different throughput-oriented architectures can benefit one CS algorithm and what levels of acceleration are feasible on different modern platforms. We demonstrate that a CUDA-based code running on an NVIDIA Tesla C2050 GPU can reconstruct a 256 × 160 × 80 volume from an 8-channel acquisition in 19 seconds, which is in itself a significant improvement over the state of the art. We then show that Intel's Knights Ferry can perform the same 3D MRI reconstruction in only 12 seconds, bringing CS methods even closer to clinical viability. PMID:21922017
Optical image encryption scheme with multiple light paths based on compressive ghost imaging
NASA Astrophysics Data System (ADS)
Zhu, Jinan; Yang, Xiulun; Meng, Xiangfeng; Wang, Yurong; Yin, Yongkai; Sun, Xiaowen; Dong, Guoyan
2018-02-01
An optical image encryption method with multiple light paths is proposed based on compressive ghost imaging. In the encryption process, M random phase-only masks (POMs) are generated by means of a logistic map algorithm, and these masks are then uploaded to the spatial light modulator (SLM). The collimated laser light is divided into several beams by beam splitters as it passes through the SLM, and the light beams illuminate the secret images, which are converted into sparse images by discrete wavelet transform beforehand. Thus, the secret images are simultaneously encrypted into intensity vectors by ghost imaging. The distances between the SLM and the secret images differ and, together with the original POM and the logistic map coefficient, serve as the main keys in the decryption process. In the proposed method, the storage space can be significantly decreased and the security of the system can be improved. The feasibility, security and robustness of the method are further analysed through computer simulations.
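A purely digital analogue of the scramble-then-encrypt stage used by this family of schemes (no ghost imaging, SLM, or wavelet step here; keyed permutations and a keyed random XOR mask stand in for the optical operations, and all names are illustrative) might look like:

```python
import numpy as np

rng = np.random.default_rng(7)

def encrypt(images, seed):
    """Scramble each image with a keyed permutation, then XOR it with a
    keyed random mask. The seed plays the role of the key-group."""
    key = np.random.default_rng(seed)
    mask = key.integers(0, 256, images[0].shape, dtype=np.uint8)
    cipher, perms = [], []
    for img in images:
        p = key.permutation(img.size)                 # keyed scrambling order
        perms.append(p)
        scrambled = img.ravel()[p].reshape(img.shape)
        cipher.append(scrambled ^ mask)               # bitwise XOR encryption
    return cipher, perms, mask

def decrypt(cipher, perms, mask):
    """Undo the XOR, then invert the permutation."""
    plain = []
    for c, p in zip(cipher, perms):
        scrambled = c ^ mask
        flat = np.empty(scrambled.size, dtype=np.uint8)
        flat[p] = scrambled.ravel()                   # inverse permutation
        plain.append(flat.reshape(scrambled.shape))
    return plain

imgs = [rng.integers(0, 256, (8, 8), dtype=np.uint8) for _ in range(3)]
enc, perms, mask = encrypt(imgs, seed=42)
dec = decrypt(enc, perms, mask)
```

In the optical scheme, the compressive ghost-imaging measurement replaces the trivial storage of ciphertexts here, and the measurement keys must be regenerated before reconstruction.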
NASA Astrophysics Data System (ADS)
Dafu, Shen; Leihong, Zhang; Dong, Liang; Bei, Li; Yi, Kang
2017-07-01
The purpose of this study is to improve reconstruction precision and more faithfully reproduce the colors of spectral image surfaces. A new spectral reflectance reconstruction algorithm based on an iterative threshold combined with weighted principal component space is presented in this paper, with the principal components weighted by visual features serving as the sparse basis. Different numbers of color cards are selected as the training samples, a multispectral image is the testing sample, and the color differences in the reconstructions are compared. The channel response value is obtained by a Mega Vision high-accuracy, multi-channel imaging system. The results show that spectral reconstruction based on weighted principal component space is superior in performance to that based on traditional principal component space. Therefore, the color difference obtained using the compressive-sensing algorithm with weighted principal component analysis is less than that obtained using the algorithm with traditional principal component analysis, and better reconstructed color consistency with human eye vision is achieved.
Exploiting the wavelet structure in compressed sensing MRI.
Chen, Chen; Huang, Junzhou
2014-12-01
Sparsity has been widely utilized in magnetic resonance imaging (MRI) to reduce k-space sampling. According to structured sparsity theories, fewer measurements are required for tree-sparse data than for data with only standard sparsity. Intuitively, more accurate image reconstruction can be achieved with the same number of measurements by exploiting the wavelet tree structure in MRI. A novel algorithm is proposed in this article to reconstruct MR images from undersampled k-space data. In contrast to conventional compressed sensing MRI (CS-MRI) that only relies on the sparsity of MR images in the wavelet or gradient domain, we exploit the wavelet tree structure to improve CS-MRI. This tree-based CS-MRI problem is decomposed into three simpler subproblems, and each subproblem can be efficiently solved by an iterative scheme. Simulations and in vivo experiments demonstrate the significant improvement of the proposed method compared to conventional CS-MRI algorithms, and its feasibility on MR data compared to existing tree-based imaging algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
Double-Row Capsulolabral Repair Increases Load to Failure and Decreases Excessive Motion.
McDonald, Lucas S; Thompson, Matthew; Altchek, David W; McGarry, Michelle H; Lee, Thay Q; Rocchi, Vanna J; Dines, Joshua S
2016-11-01
Using a cadaver shoulder instability model and load-testing device, we compared biomechanical characteristics of double-row and single-row capsulolabral repairs. We hypothesized a greater reduction in glenohumeral motion and translation and a higher load to failure in a mattress double-row capsulolabral repair than in a single-row repair. In 6 matched pairs of cadaveric shoulders, a capsulolabral injury was created. One shoulder was repaired with a single-row technique, and the other with a double-row mattress technique. Rotational range of motion, anterior-inferior translation, and humeral head kinematics were measured. Load-to-failure testing measured stiffness, yield load, deformation at yield load, energy absorbed at yield load, load to failure, deformation at ultimate load, and energy absorbed at ultimate load. Double-row repair significantly decreased external rotation and total range of motion compared with single-row repair. Both repairs decreased anterior-inferior translation compared with the capsulolabral-injured condition; however, no differences existed between repair types. Yield load in the single-row group was 171.3 ± 110.1 N, and in the double-row group it was 216.1 ± 83.1 N (P = .02). Ultimate load to failure in the single-row group was 224.5 ± 121.0 N, and in the double-row group it was 373.9 ± 172.0 N (P = .05). Energy absorbed at ultimate load in the single-row group was 1,745.4 ± 1,462.9 N-mm, and in the double-row group it was 4,649.8 ± 1,930.8 N-mm (P = .02). In cases of capsulolabral disruption, double-row repair techniques may result in decreased shoulder rotational range of motion and improved load-to-failure characteristics. In cases of capsulolabral disruption, repair techniques with double-row mattress repair may provide more secure fixation. Double-row capsulolabral repair decreases shoulder motion and increases load to failure, yield load, and energy absorbed at yield load more than single-row repair. Published by Elsevier Inc.
Kim, Doo-Sup; Yoon, Yeo-Seung; Chung, Hoi-Jeong
2011-07-01
Despite the attention that has been paid to restoration of the capsulolabral complex anatomic insertion onto the glenoid, studies comparing the pressurized contact area and mean interface pressure at the anatomic insertion site between a single-row repair and a double-row labral repair have been uncommon. The purpose of our study was to compare the mean interface pressure and pressurized contact area at the anatomic insertion site of the capsulolabral complex between a single-row repair and a double-row repair technique. Controlled laboratory study. Thirty fresh-frozen cadaveric shoulders (mean age, 61 ± 8 years; range, 48-71 years) were used for this study. Two types of repair were performed on each specimen: (1) a single-row repair and (2) a double-row repair. Using pressure-sensitive films, we examined the interface contact area and contact pressure. The mean interface pressure was greater for the double-row repair technique (0.29 ± 0.04 MPa) when compared with the single-row repair technique (0.21 ± 0.03 MPa) (P = .003). The mean pressurized contact area was also significantly greater for the double-row repair technique (211.8 ± 18.6 mm², 78.4% footprint) compared with the single-row repair technique (106.4 ± 16.8 mm², 39.4% footprint) (P = .001). The double-row repair has significantly greater mean interface pressure and pressurized contact area at the insertion site of the capsulolabral complex than the single-row repair. The double-row repair may be advantageous compared with the single-row repair in restoring the native footprint area of the capsulolabral complex.
Near real-time estimation of the seismic source parameters in a compressed domain
NASA Astrophysics Data System (ADS)
Rodriguez, Ismael A. Vera
Seismic events can be characterized by their origin time, location and moment tensor. Fast estimations of these source parameters are important in areas of geophysics like earthquake seismology and the monitoring of seismic activity produced by volcanoes, mining operations and hydraulic injections in geothermal and oil and gas reservoirs. Most available monitoring systems estimate the source parameters in a sequential procedure: first determining origin time and location (e.g., epicentre, hypocentre or centroid of the stress glut density), and then using this information to initialize the evaluation of the moment tensor. A more efficient estimation of the source parameters requires a concurrent evaluation of the three variables. The main objective of the present thesis is to address the simultaneous estimation of origin time, location and moment tensor of seismic events. The proposed method has the benefits of being 1) automatic, 2) continuous and, depending on the scale of application, 3) capable of providing results in real-time or near real-time. The inversion algorithm is based on theoretical results from sparse representation theory and compressive sensing. The feasibility of implementation is determined through the analysis of synthetic and real data examples. The numerical experiments focus on the microseismic monitoring of hydraulic fractures in oil and gas wells; however, an example using real earthquake data is also presented for validation. The thesis is complemented with a resolvability analysis of the moment tensor, targeting common monitoring geometries employed in hydraulic fracturing in oil wells. Additionally, an application of sparse representation theory to the denoising of one-component and three-component microseismicity records is presented, together with an algorithm for improved automatic time-picking using non-linear inversion constraints.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ray, Jaideep; Lee, Jina; Lefantzi, Sophia
2013-09-01
The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. The limited nature of the measured data leads to a severely-underdetermined estimation problem. If the estimation is performed at fine spatial resolutions, it can also be computationally expensive. In order to enable such estimations, advances are needed in the spatial representation of ffCO2 emissions, scalable inversion algorithms and the identification of observables to measure. To that end, we investigate parsimonious spatial parameterizations of ffCO2 emissions which can be used in atmospheric inversions. We devise and test three random field models, based on wavelets, Gaussian kernels and covariance structures derived from easily-observed proxies of human activity. In doing so, we constructed a novel inversion algorithm, based on compressive sensing and sparse reconstruction, to perform the estimation. We also address scalable ensemble Kalman filters as an inversion mechanism and quantify the impact of Gaussian assumptions inherent in them. We find that the assumption does not impact the estimates of mean ffCO2 source strengths appreciably, but a comparison with Markov chain Monte Carlo estimates shows significant differences in the variance of the source strengths. Finally, we study whether the very different spatial natures of biogenic and ffCO2 emissions can be used to estimate them, in a disaggregated fashion, solely from CO2 concentration measurements, without extra information from products of incomplete combustion, e.g., CO. We find that this is possible during the winter months, though the errors can be as large as 50%.
Sparse Polynomial Chaos Surrogate for ACME Land Model via Iterative Bayesian Compressive Sensing
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.
2015-12-01
For computationally expensive climate models, Monte-Carlo approaches of exploring the input parameter space are often prohibitive due to slow convergence with respect to ensemble size. To alleviate this, we build inexpensive surrogates using uncertainty quantification (UQ) methods employing Polynomial Chaos (PC) expansions that approximate the input-output relationships using as few model evaluations as possible. However, when many uncertain input parameters are present, such UQ studies suffer from the curse of dimensionality. In particular, for 50-100 input parameters non-adaptive PC representations have infeasible numbers of basis terms. To this end, we develop and employ Weighted Iterative Bayesian Compressive Sensing to learn the most important input parameter relationships for efficient, sparse PC surrogate construction with posterior uncertainty quantified due to insufficient data. Besides drastic dimensionality reduction, the uncertain surrogate can efficiently replace the model in computationally intensive studies such as forward uncertainty propagation and variance-based sensitivity analysis, as well as design optimization and parameter estimation using observational data. We applied the surrogate construction and variance-based uncertainty decomposition to Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Zhang, Cheng; Zhang, Tao; Li, Ming; Peng, Chengtao; Liu, Zhaobang; Zheng, Jian
2016-06-18
In order to reduce the radiation dose of CT (computed tomography), compressed sensing theory has been a hot topic, since it offers the possibility of high-quality recovery from sparsely sampled data. Recently, an algorithm based on DL (dictionary learning) was developed to deal with the sparse CT reconstruction problem. However, the existing DL algorithm focuses on the minimization problem with an L2-norm regularization term, which causes the reconstruction quality to deteriorate as the sampling rate declines further. Therefore, it is essential to improve the DL method to meet the demand for further dose reduction. In this paper, we replaced the L2-norm regularization term with an L1-norm one. It is expected that the proposed L1-DL method can alleviate the over-smoothing effect of the L2 minimization and preserve more image details. The proposed algorithm solves the L1-minimization problem by a weighting strategy, i.e., solving a sequence of weighted L2-minimization problems via IRLS (iteratively reweighted least squares). Through numerical simulations, the proposed algorithm is compared with the existing DL method (adaptive dictionary based statistical iterative reconstruction, ADSIR) and two other typical compressed sensing algorithms. The results reveal that the proposed algorithm is more accurate than the other algorithms, especially when the sampling rate is reduced further or the noise is increased. The proposed L1-DL algorithm can utilize more prior information about image sparsity than ADSIR. By replacing the L2-norm regularization term of ADSIR with an L1-norm one and solving the L1-minimization problem by an IRLS strategy, L1-DL can reconstruct the image more exactly.
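The IRLS weighting strategy at the heart of the proposed L1-DL solver can be illustrated on the simplest possible L1 problem. The sketch below is not the authors' code; the function name and toy data are hypothetical. It minimizes a scalar sum of absolute deviations by repeatedly solving reweighted L2 problems, whose closed-form minimizer is the weighted mean; the iterates converge to the L1 minimizer (the median).

```python
def irls_l1_location(data, iters=100, eps=1e-8):
    """Minimize sum_i |x - a_i| via iteratively reweighted least squares (IRLS).

    Each iteration solves the weighted L2 problem sum_i w_i * (x - a_i)^2
    with weights w_i = 1 / max(|x - a_i|, eps); its minimizer is the
    weighted mean. The small eps guards against division by zero.
    """
    x = sum(data) / len(data)  # unweighted least-squares start (the mean)
    for _ in range(iters):
        w = [1.0 / max(abs(x - a), eps) for a in data]
        x = sum(wi * ai for wi, ai in zip(w, data)) / sum(w)
    return x

# The outlier 100.0 drags the mean up to 22, but IRLS recovers the median 3.0,
# mirroring how the L1 penalty resists the over-smoothing of pure L2 fits.
print(irls_l1_location([1.0, 2.0, 3.0, 4.0, 100.0]))
```

The same reweighting trick scales to the vector-valued L1-regularized problems in the paper, with each iteration reduced to a standard weighted least-squares solve.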
Lorbach, Olaf; Bachelier, Felix; Vees, Jochen; Kohn, Dieter; Pape, Dietrich
2008-08-01
Double-row repair is suggested to have superior biomechanical properties in rotator cuff reconstruction compared with single-row repair. However, double-row rotator cuff repair is frequently compared with simple suture repair and not with modified suture configurations. Single-row rotator cuff repairs with modified suture configurations have similar failure loads and gap formations as double-row reconstructions. Controlled laboratory study. We created 1 x 2-cm defects in 48 porcine infraspinatus tendons. Reconstructions were then performed with 4 single-row repairs and 2 double-row repairs. The single-row repairs included transosseous simple sutures; double-loaded corkscrew anchors in either a double mattress or modified Mason-Allen suture repair; and the Magnum Knotless Fixation Implant with an inclined mattress. Double-row repairs were either with Bio-Corkscrew FT using modified Mason-Allen stitches or a combination of Bio-Corkscrew FT and PushLock anchors using the SutureBridge Technique. During cyclic load (10 N to 60-200 N), gap formation was measured, and finally, ultimate load to failure and type of failure were recorded. Double-row double-corkscrew anchor fixation had the highest ultimate tensile strength (398 +/- 98 N) compared to simple sutures (105 +/- 21 N; P < .0001), single-row corkscrews using a modified Mason-Allen stitch (256 +/- 73 N; P = .003) or double mattress repair (290 +/- 56 N; P = .043), the Magnum Implant (163 +/- 13 N; P < .0001), and double-row repair with PushLock and Bio-Corkscrew FT anchors (163 +/- 59 N; P < .0001). Single-row double mattress repair was superior to transosseous sutures (P < .0001), the Magnum Implant (P = .009), and double-row repair with PushLock and Bio-Corkscrew FT anchors (P = .009). 
Lowest gap formation was found for double-row double-corkscrew repair (3.1 +/- 0.1 mm) compared to simple sutures (8.7 +/- 0.2 mm; P < .0001), the Magnum Implant (6.2 +/- 2.2 mm; P = .002), double-row repair with PushLock and Bio-Corkscrew FT anchors (5.9 +/- 0.9 mm; P = .008), and corkscrews with modified Mason-Allen sutures (6.4 +/- 1.3 mm; P = .001). Double-row double-corkscrew anchor rotator cuff repair offered the highest failure load and smallest gap formation and provided the most secure fixation of all tested configurations. Double-loaded suture anchors using modified suture configurations achieved superior results in failure load and gap formation compared to simple suture repair and showed similar loads and gap formation with double-row repair using PushLock and Bio-Corkscrew FT anchors. Single-row repair with modified suture configurations may lead to results comparable to several double-row fixations. If double-row repair is used, modified stitches might further minimize gap formation and increase failure load.
Code of Federal Regulations, 2013 CFR
2013-01-01
... barley and Six-rowed Blue Malting barley. 810.204 Section 810.204 Agriculture Regulations of the... requirements for Six-rowed Malting barley and Six-rowed Blue Malting barley. Grade Minimum limits of— Test... and Six-rowed Blue Malting barley varieties not meeting the requirements of this section shall be...
A biomechanical comparison of single and double-row fixation in arthroscopic rotator cuff repair.
Smith, Christopher D; Alexander, Susan; Hill, Adam M; Huijsmans, Pol E; Bull, Anthony M J; Amis, Andrew A; De Beer, Joe F; Wallace, Andrew L
2006-11-01
The optimal method for arthroscopic rotator cuff repair is not yet known. The hypothesis of the present study was that a double-row repair would demonstrate superior static and cyclic mechanical behavior when compared with a single-row repair. The specific aims were to measure gap formation at the bone-tendon interface under static creep loading and the ultimate strength and mode of failure of both methods of repair under cyclic loading. A standardized tear of the supraspinatus tendon was created in sixteen fresh cadaveric shoulders. Arthroscopic rotator cuff repairs were performed with use of either a double-row technique (eight specimens) or a single-row technique (eight specimens) with nonabsorbable sutures that were double-loaded on a titanium suture anchor. The repairs were loaded statically for one hour, and the gap formation was measured. Cyclic loading to failure was then performed. Gap formation during static loading was significantly greater in the single-row group than in the double-row group (mean and standard deviation, 5.0 +/- 1.2 mm compared with 3.8 +/- 1.4 mm; p < 0.05). Under cyclic loading, the double-row repairs failed at a mean of 320 +/- 96.9 N whereas the single-row repairs failed at a mean of 224 +/- 147.9 N (p = 0.058). Three single-row repairs and three double-row repairs failed as a result of suture cut-through. Four single-row repairs and one double-row repair failed as a result of anchor or suture failure. The remaining five repairs did not fail, and a midsubstance tear of the tendon occurred. Although more technically demanding, the double-row technique demonstrates superior resistance to gap formation under static loading as compared with the single-row technique. A double-row reconstruction of the supraspinatus tendon insertion may provide a more reliable construct than a single-row repair and could be used as an alternative to open reconstruction for the treatment of isolated tears.
Ray, J.; Lee, J.; Yadav, V.; ...
2015-04-29
Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO 2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, which are then modeled using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method, and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO 2 (ffCO 2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties to the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO 2 emissions and synthetic observations of ffCO 2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. 
It also reduces the overall computational cost by a factor of 2. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
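StOMP itself selects many coordinates per stage by thresholding correlations; the minimal sketch below (hypothetical names and a toy orthonormal dictionary, not the authors' code) shows the underlying greedy matching-pursuit step it builds on: repeatedly pick the atom most correlated with the residual and subtract its contribution.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(atoms, b, steps):
    """Greedy sparse reconstruction of b over a dictionary of atoms.

    With an orthonormal dictionary, each step exactly removes one
    component, so a k-sparse signal is recovered in k steps.
    """
    residual = list(b)
    coeffs = {}
    for _ in range(steps):
        # Select the atom most correlated with the current residual.
        j = max(range(len(atoms)), key=lambda i: abs(dot(atoms[i], residual)))
        c = dot(atoms[j], residual)
        coeffs[j] = coeffs.get(j, 0.0) + c
        residual = [r - c * a for r, a in zip(residual, atoms[j])]
    return coeffs, residual

# Orthonormal 4-point Hadamard atoms; b = 2*atoms[1] + 3*atoms[3].
atoms = [[0.5, 0.5, 0.5, 0.5], [0.5, -0.5, 0.5, -0.5],
         [0.5, 0.5, -0.5, -0.5], [0.5, -0.5, -0.5, 0.5]]
b = [2.5, -2.5, -0.5, 0.5]
coeffs, residual = matching_pursuit(atoms, b, steps=2)
# Both active atoms and their amplitudes are recovered; residual is zero.
```

Prior information and non-negativity, as in the paper's adaptations, would enter by biasing the selection step and clipping coefficients, respectively.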
NASA Astrophysics Data System (ADS)
Zhu, Hao
Sparsity plays an instrumental role in a plethora of scientific fields, including statistical inference for variable selection, parsimonious signal representations, and solving under-determined systems of linear equations, which has led to the ground-breaking result of compressive sampling (CS). This Thesis leverages exciting ideas of sparse signal reconstruction to develop sparsity-cognizant algorithms and analyze their performance. The vision is to devise tools exploiting the 'right' form of sparsity for the 'right' application domain of multiuser communication systems, array signal processing systems, and the emerging challenges in the smart power grid. Two important power system monitoring tasks are addressed first by capitalizing on the hidden sparsity. To robustify power system state estimation, a sparse outlier model is leveraged to capture the possible corruption in every datum, while the problem nonconvexity due to nonlinear measurements is handled using the semidefinite relaxation technique. Different from existing iterative methods, the proposed algorithm approximates well the global optimum regardless of the initialization. In addition, for enhanced situational awareness, a novel sparse overcomplete representation is introduced to capture (possibly multiple) line outages, and real-time algorithms are developed for solving the combinatorially complex identification problem. The proposed algorithms exhibit near-optimal performance while incurring only linear complexity in the number of lines, which makes it possible to quickly bring contingencies to attention. This Thesis also accounts for two basic issues in CS, namely fully-perturbed models and the finite alphabet property. The sparse total least-squares (S-TLS) approach is proposed to furnish CS algorithms for fully-perturbed linear models, leading to statistically optimal and computationally efficient solvers. 
The S-TLS framework is well motivated for grid-based sensing applications and exhibits higher accuracy than existing sparse algorithms. On the other hand, exploiting the finite alphabet of unknown signals emerges naturally in communication systems, along with sparsity coming from the low activity of each user. Compared to approaches only accounting for either one of the two, joint exploitation of both leads to statistically optimal detectors with improved error performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konakli, Katerina, E-mail: konakli@ibk.baug.ethz.ch; Sudret, Bruno
2016-09-15
The growing need for uncertainty analysis of complex computational models has led to an expanding use of meta-models across engineering and sciences. The efficiency of meta-modeling techniques relies on their ability to provide statistically-equivalent analytical representations based on relatively few evaluations of the original model. Polynomial chaos expansions (PCE) have proven a powerful tool for developing meta-models in a wide range of applications; the key idea thereof is to expand the model response onto a basis made of multivariate polynomials obtained as tensor products of appropriate univariate polynomials. The classical PCE approach nevertheless faces the “curse of dimensionality”, namely the exponential increase of the basis size with increasing input dimension. To address this limitation, the sparse PCE technique has been proposed, in which the expansion is carried out on only a few relevant basis terms that are automatically selected by a suitable algorithm. An alternative for developing meta-models with polynomial functions in high-dimensional problems is offered by the newly emerged low-rank approximations (LRA) approach. By exploiting the tensor–product structure of the multivariate basis, LRA can provide polynomial representations in highly compressed formats. Through extensive numerical investigations, we herein first shed light on issues relating to the construction of canonical LRA with a particular greedy algorithm involving a sequential updating of the polynomial coefficients along separate dimensions. Specifically, we examine the selection of optimal rank, stopping criteria in the updating of the polynomial coefficients and error estimation. In the sequel, we compare canonical LRA with sparse PCE in structural-mechanics and heat-conduction applications based on finite-element solutions. 
Canonical LRA exhibit smaller errors than sparse PCE in cases when the number of available model evaluations is small with respect to the input dimension, a situation that is often encountered in real-life problems. By introducing the conditional generalization error, we further demonstrate that canonical LRA tend to outperform sparse PCE in the prediction of extreme model responses, which is critical in reliability analysis.
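The key idea of PCE — expanding the model response onto an orthogonal polynomial basis — can be shown in one dimension. The sketch below is illustrative only (a toy model, not the paper's setup): it computes Legendre coefficients of f(x) = x² on [-1, 1] by Gauss-Legendre projection, exactly recovering x² = (1/3)P0 + (2/3)P2.

```python
import math

# 3-point Gauss-Legendre rule on [-1, 1] (exact for polynomials up to degree 5).
nodes = [-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0)]
weights = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]

# First three Legendre polynomials: the PCE basis for a uniform input.
basis = [lambda x: 1.0, lambda x: x, lambda x: (3.0 * x * x - 1.0) / 2.0]
model = lambda x: x * x  # stand-in for the expensive computational model

# Non-intrusive projection: c_k = (2k+1)/2 * integral of model * P_k over [-1, 1].
coeffs = [(2 * k + 1) / 2.0
          * sum(w * model(x) * Pk(x) for x, w in zip(nodes, weights))
          for k, Pk in enumerate(basis)]

# The surrogate evaluates the truncated expansion instead of the model.
surrogate = lambda x: sum(c * Pk(x) for c, Pk in zip(coeffs, basis))
```

Sparse PCE and LRA differ in how they tame this construction in high dimension — by discarding irrelevant tensor-product terms, or by compressing them into low-rank factors — but the projection step above is the common core.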
The w-effect in interferometric imaging: from a fast sparse measurement operator to superresolution
NASA Astrophysics Data System (ADS)
Dabbech, A.; Wolz, L.; Pratley, L.; McEwen, J. D.; Wiaux, Y.
2017-11-01
Modern radio telescopes, such as the Square Kilometre Array, will probe the radio sky over large fields of view, which results in large w-modulations of the sky image. This effect complicates the relationship between the measured visibilities and the image under scrutiny. In algorithmic terms, it gives rise to massive memory and computational time requirements. Yet, it can be a blessing in terms of reconstruction quality of the sky image. In recent years, several works have shown that large w-modulations promote the spread spectrum effect. Within the compressive sensing framework, this effect increases the incoherence between the sensing basis and the sparsity basis of the signal to be recovered, leading to better estimation of the sky image. In this article, we revisit the w-projection approach using convex optimization in realistic settings, where the measurement operator couples the w-terms in Fourier and the de-gridding kernels. We provide sparse, thus fast, models of the Fourier part of the measurement operator through adaptive sparsification procedures. Consequently, memory requirements and computational cost are significantly alleviated at the expense of introducing errors on the radio interferometric data model. We present a first investigation of the impact of the sparse variants of the measurement operator on the image reconstruction quality. We finally analyse the interesting superresolution potential associated with the spread spectrum effect of the w-modulation, and showcase it through simulations. Our C++ code is available online on GitHub.
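The adaptive sparsification idea — discard operator entries whose magnitude falls below a tolerance, trading a controlled error on the data model for a sparser, faster operator — can be sketched as follows (hypothetical helper and toy values, not the actual w-projection implementation):

```python
def sparsify(matrix, tau):
    """Zero entries below tau * max|entry|; return the sparsified matrix
    and the fill ratio (fraction of nonzeros kept)."""
    peak = max(abs(v) for row in matrix for v in row)
    sparse = [[v if abs(v) >= tau * peak else 0.0 for v in row]
              for row in matrix]
    kept = sum(1 for row in sparse for v in row if v != 0.0)
    total = len(matrix) * len(matrix[0])
    return sparse, kept / total

# Kernel-like rows with rapidly decaying tails: small entries are dropped,
# so storage and matrix-vector cost shrink at the price of a small model error.
G = [[1.0, 0.3, 0.01, 0.001],
     [0.3, 1.0, 0.3, 0.01]]
G_sparse, fill = sparsify(G, tau=0.05)
# fill == 5/8: the three sub-threshold entries were zeroed.
```

In practice the threshold tau is chosen adaptively per row so that the discarded energy, and hence the error injected into the data model, stays below a prescribed budget.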
NASA Astrophysics Data System (ADS)
Miao, Di; Borden, Michael J.; Scott, Michael A.; Thomas, Derek C.
2018-06-01
In this paper we demonstrate the use of Bézier projection to alleviate locking phenomena in structural mechanics applications of isogeometric analysis. Interpreting the well-known $\bar{B}$ projection in two different ways, we develop two formulations for locking problems in beams and nearly incompressible elastic solids. One formulation leads to a sparse symmetric system and the other leads to a sparse non-symmetric system. To demonstrate the utility of Bézier projection for both geometric and material locking phenomena we focus on transverse shear locking in Timoshenko beams and volumetric locking in nearly incompressible linear elasticity, although the approach can be applied to other types of locking phenomena as well. Bézier projection is a local projection technique with optimal approximation properties, which in many cases produces solutions that are comparable to global $L^2$ projection. In the context of $\bar{B}$ methods, the use of Bézier projection produces sparse stiffness matrices with only a slight increase in bandwidth when compared to standard displacement-based methods. Of particular importance is that the approach is applicable to any spline representation that can be written in Bézier form, like NURBS, T-splines, LR-splines, etc. We discuss in detail how to integrate this approach into an existing finite element framework with minimal disruption through the use of Bézier extraction operators and a newly introduced dual basis for the Bézier projection operator. We then demonstrate the behavior of the two proposed formulations through several challenging benchmark problems.
Robust sparse image reconstruction of radio interferometric observations with PURIFY
NASA Astrophysics Data System (ADS)
Pratley, Luke; McEwen, Jason D.; d'Avezac, Mayeul; Carrillo, Rafael E.; Onose, Alexandru; Wiaux, Yves
2018-01-01
Next-generation radio interferometers, such as the Square Kilometre Array, will revolutionize our understanding of the Universe through their unprecedented sensitivity and resolution. However, to realize these goals significant challenges in image and data processing need to be overcome. The standard methods in radio interferometry for reconstructing images, such as CLEAN, have served the community well over the last few decades and have survived largely because they are pragmatic. However, they produce reconstructed interferometric images that are limited in quality and scalability for big data. In this work, we apply and evaluate alternative interferometric reconstruction methods that make use of state-of-the-art sparse image reconstruction algorithms motivated by compressive sensing, which have been implemented in the PURIFY software package. In particular, we implement and apply the proximal alternating direction method of multipliers algorithm presented in a recent article. First, we assess the impact of the interpolation kernel used to perform gridding and degridding on sparse image reconstruction. We find that the Kaiser-Bessel interpolation kernel performs as well as prolate spheroidal wave functions while providing a computational saving and an analytic form. Secondly, we apply PURIFY to real interferometric observations from the Very Large Array and the Australia Telescope Compact Array and find that images recovered by PURIFY are of higher quality than those recovered by CLEAN. Thirdly, we discuss how PURIFY reconstructions exhibit additional advantages over those recovered by CLEAN. The latest version of PURIFY, with developments presented in this work, is made publicly available.
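The Kaiser-Bessel interpolation kernel favored above has a simple analytic form built on the zeroth-order modified Bessel function I0, which is part of what makes it cheaper than prolate spheroidal wave functions. A self-contained sketch follows (illustrative support and beta values, not PURIFY's implementation):

```python
import math

def bessel_i0(x, terms=30):
    """Modified Bessel function I0(x) via its power series
    sum_m ((x/2)^(2m)) / (m!)^2, accurate for the moderate x used here."""
    total, term = 1.0, 1.0
    for m in range(1, terms):
        term *= (x / 2.0) ** 2 / (m * m)
        total += term
    return total

def kaiser_bessel(u, support=6.0, beta=10.0):
    """Normalized Kaiser-Bessel gridding kernel: nonzero only on
    |u| <= support/2, peaking at 1 for u = 0."""
    s = 2.0 * u / support
    if abs(s) > 1.0:
        return 0.0
    return bessel_i0(beta * math.sqrt(1.0 - s * s)) / bessel_i0(beta)
```

During gridding, each visibility is spread onto the nearby regular Fourier grid points with these weights; the compact support keeps the interpolation matrix sparse, and the analytic form avoids precomputing optimal kernels numerically.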
A Compressed Sensing-Based Wearable Sensor Network for Quantitative Assessment of Stroke Patients
Yu, Lei; Xiong, Daxi; Guo, Liquan; Wang, Jiping
2016-01-01
Clinical rehabilitation assessment is an important part of the therapy process because it is the premise for prescribing suitable rehabilitation interventions. However, the commonly used assessment scales have the following two drawbacks: (1) they are susceptible to subjective factors; (2) they only have several rating levels and are influenced by a ceiling effect, making it impossible to exactly detect any further improvement in the movement. Meanwhile, energy constraints are a primary design consideration in wearable sensor network systems since they are often battery-operated. Traditionally, for wearable sensor network systems that follow the Shannon/Nyquist sampling theorem, there are many data that need to be sampled and transmitted. This paper proposes a novel wearable sensor network system to monitor and quantitatively assess the upper limb motion function, based on compressed sensing technology. With the sparse representation model, less data is transmitted to the computer than with traditional systems. The experimental results show that the accelerometer signals of Bobath handshake and shoulder touch exercises can be compressed, and the length of the compressed signal is less than 1/3 of the raw signal length. More importantly, the reconstruction errors have no influence on the predictive accuracy of the Brunnstrom stage classification model. It is also indicated that the proposed system not only reduces the amount of data during the sampling and transmission processes, but also that the reconstructed accelerometer signals can be used for quantitative assessment without any loss of useful information. PMID:26861337
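The compression step — replacing Nyquist-rate transmission with a small number of random linear measurements — is easy to sketch. Below, the window length, measurement count, seed, and "accelerometer" values are all illustrative, not the paper's system; the point is the 1/3 compression regime the abstract reports.

```python
import random

random.seed(0)
n, m = 90, 30  # raw window length and number of measurements (m/n = 1/3)

# Random +/-1 Bernoulli matrix: a standard compressed-sensing operator.
phi = [[random.choice((-1.0, 1.0)) for _ in range(n)] for _ in range(m)]

# A sparse "accelerometer" window: mostly rest, a few movement samples.
x = [0.0] * n
for idx, val in [(10, 1.5), (11, 2.0), (40, -1.0)]:
    x[idx] = val

# Compressed measurements y = phi @ x: the only data transmitted to the
# computer, where a sparse-recovery solver would later reconstruct x.
y = [sum(p * s for p, s in zip(row, x)) for row in phi]
```

The energy saving comes precisely from transmitting the 30 values of y instead of the 90 samples of x; reconstruction on the receiving computer then exploits the known sparsity of x.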
Blade row interaction effects on flutter and forced response
NASA Technical Reports Server (NTRS)
Buffum, Daniel H.
1993-01-01
In the flutter or forced response analysis of a turbomachine blade row, the blade row in question is commonly treated as if it is isolated from the neighboring blade rows. Disturbances created by vibrating blades are then free to propagate away from this blade row without being disturbed. In reality, neighboring blade rows will reflect some portion of this wave energy back toward the vibrating blades, causing additional unsteady forces on them. It is of fundamental importance to determine whether or not these reflected waves can have a significant effect on the aeroelastic stability or forced response of a blade row. Therefore, a procedure to calculate intra-blade-row unsteady aerodynamic interactions was developed which relies upon results available from isolated blade row unsteady aerodynamic analyses. In addition, an unsteady aerodynamic influence coefficient technique is used to obtain a model for the vibratory response in which the neighboring blade rows are also flexible. The flutter analysis shows that interaction effects can be destabilizing, and the forced response analysis shows that interaction effects can result in a significant increase in the resonant response of a blade row.
Alternate row placement is ineffective for cultural control of Meloidogyne incognita in cotton
2008-01-01
The objective of this study was to determine if planting cotton into the space between the previous year's rows reduces crop loss due to Meloidogyne incognita compared to planting in the same row every year. Row placement had a significant (P ≤ 0.05) effect on nematode population levels only on 8 July 2005. Plots receiving 1,3-dichloropropene plus aldicarb had lower nematode population levels than non-fumigated plots on 24 May and 8 July in 2005, but not in 2004. The effect of nematicide treatment on nematode populations was not affected by row placement. Row placement did not have a significant effect on root galling or yield in 2004 or 2005. Nematicide treatment decreased root galling in all years, and the decrease was not influenced by row placement. Yield was increased by nematicide application in 2004 and 2005, and the increase was not affected by row placement. Percentage yield loss was not affected by row placement. Changing the placement of rows reduced nematode population levels only on one sampling date in one year, but end-of-season root galling and lint yield were not affected by changing the placement of rows, nor was the effect of fumigation on yield influenced by row placement. Therefore, row placement is unlikely to contribute to M. incognita management in cotton. PMID:19440259
Single-row versus double-row repair of the distal Achilles tendon: a biomechanical comparison.
Pilson, Holly; Brown, Philip; Stitzel, Joel; Scott, Aaron
2012-01-01
Surgery for recalcitrant insertional Achilles tendinopathy often consists of partial or total release of the insertion site, debridement of the diseased portion of the tendon, calcaneal ostectomy, and reattachment of the Achilles to the calcaneus. Although single-row and double-row techniques exist for repair of the detached Achilles tendon, biomechanical data are lacking to support one technique over the other. Based on data extrapolated from the study of rotator cuff repairs, we hypothesized that a double-row construct would provide superior fixation strength over a single-row repair. Eighteen human cadaveric Achilles tendons (9 matched pairs) with attached calcanei were repaired with single-row or double-row techniques. Specimens were mounted in a servohydraulic materials testing machine, subjected to a preconditioning cycle, and loaded to failure. Failure was defined as suture breakage or pullout, midsubstance tendon rupture, or anchor pullout. Among the failures were 12 suture failures, 5 proximal-row anchor failures, and 1 distal-row anchor failure. No midsubstance tendon ruptures or testing apparatus failures were observed. There were no statistically significant differences in the peak load to failure between the single-row and double-row repairs (p = .46). Similarly, no significant differences were observed with regard to mean energy expenditure to failure (p = .069). The present study demonstrated no biomechanical advantages of the double-row repair over a single-row repair. Despite the lack of a clear biomechanical advantage, there may exist clinical advantages of a double-row repair, such as reduction in knot prominence and restoration of the Achilles footprint. Copyright © 2012 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
An Efficient Image Compressor for Charge Coupled Devices Camera
Li, Jin; Xing, Fei; You, Zheng
2014-01-01
Recently, discrete wavelet transform- (DWT-) based compressors, such as JPEG2000 and CCSDS-IDC, have been widely seen as the state-of-the-art compression schemes for charge coupled device (CCD) cameras. However, CCD images projected onto the DWT basis produce a large number of large-amplitude high-frequency coefficients, because these images contain a large amount of complex texture and contour information, which is disadvantageous for the subsequent coding. In this paper, we propose a low-complexity posttransform coupled with compressed sensing (PT-CS) compression approach for remote sensing images. First, the DWT is applied to the remote sensing image. Then, a posttransform with a pair of bases is applied to the DWT coefficients. The pair consists of the DCT basis and the Hadamard basis, which can be used at high and low bit rates, respectively. The best posttransform is selected by an lp-norm-based approach. The posttransform is considered as the sparse representation stage of CS. The posttransform coefficients are then resampled by a sensing measurement matrix. Experimental results on on-board CCD camera images show that the proposed approach significantly outperforms the CCSDS-IDC-based coder; its performance is comparable to that of JPEG2000 at low bit rates, and it does not have the excessive implementation complexity of JPEG2000. PMID:25114977
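The basis-selection step — compare candidate post-transforms and keep the one giving the sparsest representation, as measured by a small lp norm — can be sketched with a 4-point Hadamard transform on toy data (illustrative only; not the paper's codec):

```python
def hadamard4(v):
    """Orthonormal 4-point Hadamard transform: two butterfly stages, scaled by 1/2."""
    a = [v[0] + v[1], v[0] - v[1], v[2] + v[3], v[2] - v[3]]
    return [(a[0] + a[2]) / 2.0, (a[1] + a[3]) / 2.0,
            (a[0] - a[2]) / 2.0, (a[1] - a[3]) / 2.0]

def lp_norm(v, p=0.5):
    """For p < 1, this strongly rewards concentrated (sparse) coefficient vectors."""
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

# A flat block: the Hadamard transform packs all its energy into one coefficient,
# so the lp criterion picks it over the untransformed (identity) representation.
block = [1.0, 1.0, 1.0, 1.0]
candidates = {"identity": block, "hadamard": hadamard4(block)}
best = min(candidates, key=lambda name: lp_norm(candidates[name]))
```

In the full codec the same comparison runs per coefficient block, with the DCT in the candidate set as well, and the chosen transform's output feeds the CS measurement stage.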
Compressed sensing with cyclic-S Hadamard matrix for terahertz imaging applications
NASA Astrophysics Data System (ADS)
Ermeydan, Esra Şengün; Çankaya, Ilyas
2018-01-01
Compressed sensing (CS) with a cyclic-S Hadamard matrix is proposed for single pixel imaging applications in this study. In the single pixel imaging scheme, N = r · c samples must be taken for an r × c pixel image. CS is a popular technique claiming that sparse signals can be reconstructed from fewer samples than the Nyquist rate requires; it is therefore a good candidate for solving the slow data acquisition problem in Terahertz (THz) single pixel imaging. However, changing the mask for each measurement is a challenging problem, since there is no commercial spatial light modulator (SLM) for the THz band yet; circular masks are therefore suggested, so that shifting by one or two columns is enough to change the mask between measurements. Within the framework of this study, the CS masks are designed using cyclic-S matrices based on the Hadamard transform for 9 × 7 and 15 × 17 pixel images. The 50% compressed images are reconstructed using the total-variation-based TVAL3 algorithm. Matlab simulations demonstrate that cyclic-S matrices can be used for single pixel imaging based on CS. The circular masks have the advantage of reducing the mechanical SLM to a single sliding strip, whereas CS helps to reduce acquisition time and energy since it allows the image to be reconstructed from fewer samples.
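The circular-mask idea can be sketched in a few lines: every measurement mask is a one-step cyclic shift of a single seed row, so the mechanical modulator reduces to one sliding strip. A pseudorandom 0/1 row stands in here for a true cyclic-S (Hadamard-derived) sequence; the 9 × 7 image size and 50% sampling ratio follow the abstract, everything else is an assumption.

```python
import numpy as np

r, c = 9, 7                 # image size from the abstract
n = r * c                   # 63 pixels per mask
rng = np.random.default_rng(1)
seed = rng.integers(0, 2, size=n)           # stand-in for a cyclic-S row

m = n // 2                                   # ~50% compression, as in the paper
masks = np.stack([np.roll(seed, k) for k in range(m)])  # circulant mask matrix

image = rng.random((r, c))
measurements = masks @ image.ravel()         # one bucket value per mask

# Each mask differs from the previous one only by a cyclic shift:
assert np.array_equal(masks[1], np.roll(masks[0], 1))
print(measurements.shape)                    # (31,)
```

A TV solver such as TVAL3 would then reconstruct the image from `measurements` and `masks`; that step is outside this sketch.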
A Sparsity-Promoted Decomposition for Compressed Fault Diagnosis of Roller Bearings
Wang, Huaqing; Ke, Yanliang; Song, Liuyang; Tang, Gang; Chen, Peng
2016-01-01
The traditional approaches for condition monitoring of roller bearings are almost always carried out under Shannon sampling theorem conditions, leading to a big-data problem. The compressed sensing (CS) theory provides a new solution to the big-data problem. However, vibration signals are insufficiently sparse, and it is difficult to achieve sparsity using conventional techniques, which impedes the application of CS theory. Therefore, it is of great significance to promote sparsity when applying the CS theory to fault diagnosis of roller bearings. To increase the sparsity of vibration signals, a sparsity-promoting method, the tunable Q-factor wavelet transform, which decomposes the analyzed signals into transient impact components and high-oscillation components, is utilized in this work. The former become sparser than the raw signals, with the noise eliminated, whereas the latter retain the noise. Thus, the decomposed transient impact components replace the original signals for analysis. The CS theory is applied to extract the fault features without complete reconstruction, which means that the reconstruction can be completed once the components with the frequencies of interest are detected, so the fault diagnosis can be achieved during the reconstruction procedure. The application cases prove that the CS theory, assisted by the tunable Q-factor wavelet transform, can successfully extract the fault features from the compressed samples. PMID:27657063
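A toy example of why the decomposition promotes sparsity: a bearing-fault signal modelled as sparse periodic impacts plus an oscillatory component plus noise is far less sparse than the impact component alone, as measured by the l1/l2 ratio. The tunable Q-factor wavelet transform itself is not reimplemented here; the components are constructed directly, and all signal parameters are invented for illustration.

```python
import numpy as np

n = 1024
rng = np.random.default_rng(0)
impacts = np.zeros(n)
impacts[::128] = 1.0                          # periodic fault impacts (sparse)
oscillation = 0.5 * np.sin(2 * np.pi * 0.05 * np.arange(n))
raw = impacts + oscillation + 0.1 * rng.standard_normal(n)

def l1_l2_ratio(x):
    # A common sparsity proxy: a smaller l1/l2 ratio means a sparser signal.
    return np.linalg.norm(x, 1) / np.linalg.norm(x, 2)

print(l1_l2_ratio(impacts) < l1_l2_ratio(raw))   # True
```

Sensing the sparse impact component compressively therefore needs far fewer measurements than sensing the raw mixture, which is the motivation stated in the abstract.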
1. GENERAL VIEW OF CROSS ROW BUILDING (in background), LOOKING SOUTHWEST. The building at right is Brick Row (Old Beersheba Inn, Brick Row, HABS No. TN-54 B) - Old Beersheba Inn, Cross Row (Boarding Cabin), Armsfield Avenue, Beersheba Springs, Grundy County, TN
V S, Unni; Mishra, Deepak; Subrahmanyam, G R K S
2016-12-01
The need for image fusion in current image processing systems is increasing, mainly due to the increased number and variety of image acquisition techniques. Image fusion is the process of combining substantial information from several sensors using mathematical techniques in order to create a single composite image that is more comprehensive and thus more useful for a human operator or other computer vision tasks. This paper presents a new approach to multifocus image fusion based on sparse signal representation. Block-based compressive sensing, integrated with a projection-driven compressive sensing (CS) recovery that encourages sparsity in the wavelet domain, is used to obtain the focused image from a set of out-of-focus images. Compression is achieved during the image acquisition process using a block compressive sensing method. An adaptive thresholding technique within the smoothed projected Landweber recovery process reconstructs high-resolution focused images from low-dimensional CS measurements of out-of-focus images. The discrete wavelet transform and the dual-tree complex wavelet transform are used as the sparsifying bases for the proposed fusion. The main finding is that sparsification enables a better selection of the fusion coefficients and hence better fusion. A Laplacian mixture model is fitted in the wavelet domain, and estimation of the probability density function (pdf) parameters by expectation maximization leads to the proper selection of the coefficients of the fused image. Compared with the fusion scheme that omits the projected Landweber (PL) step and with other existing CS-based fusion approaches, the proposed method is observed to outperform them even with fewer samples.
Buess, Eduard; Waibl, Bernhard; Vogel, Roger; Seidner, Robert
2009-10-01
Cadaveric studies and commercial pressure have initiated a strong trend towards double-row repair in arthroscopic cuff surgery. The objective of this study was to evaluate if the biomechanical advantages of a double-row supraspinatus tendon repair would result in superior clinical outcome and higher abduction strength. A retrospective study of two groups of 32 single-row and 33 double-row repairs of small to medium cuff tears was performed. The Simple Shoulder Test (SST) and a visual analog scale for pain were used to evaluate the outcome. The participation rate was 100%. A subset of patients was further investigated with the Constant Score (CS) including electronic strength measurement. The double-row repair patients had significantly more (p = 0.01) yes answers in the SST than the single-row group, and pain reduction was slightly better (p = 0.03). No difference was found for the relative CS (p = 0.86) and abduction strength (p = 0.74). Patient satisfaction was 100% for double-row and 97% for single-row repair. Single- and double-row repairs both achieved excellent clinical results. Evidence of superiority of double-row repair is still scarce and has to be balanced against the added complexity of the procedure and higher costs.
Perser, Karen; Godfrey, David; Bisson, Leslie
2011-01-01
Context: Double-row rotator cuff repair methods have improved biomechanical performance when compared with single-row repairs. Objective: To review clinical outcomes of single-row versus double-row rotator cuff repair with the hypothesis that double-row rotator cuff repair will result in better clinical and radiographic outcomes. Data Sources: Published literature from January 1980 to April 2010. Key terms included rotator cuff, prospective studies, outcomes, and suture techniques. Study Selection: The literature was systematically searched, and 5 level I and II studies were found comparing clinical outcomes of single-row and double-row rotator cuff repair. Coleman methodology scores were calculated for each article. Data Extraction: Meta-analysis was performed, with treatment effect between single row and double row for clinical outcomes and with odds ratios for radiographic results. The sample size necessary to detect a given difference in clinical outcome between the 2 methods was calculated. Results: Three level I studies had Coleman scores of 80, 74, and 81, and two level II studies had scores of 78 and 73. There were 156 patients with single-row repairs and 147 patients with double-row repairs, both with an average follow-up of 23 months (range, 12-40 months). Double-row repairs resulted in a greater treatment effect for each validated outcome measure in 4 studies, but the differences were not clinically or statistically significant (range, 0.4-2.2 points; 95% confidence interval, –0.19, 4.68 points). Double-row repairs had better radiographic results, but the differences were also not statistically significant (P = 0.13). Two studies had adequate power to detect a 10-point difference between repair methods using the Constant score, and 1 study had power to detect a 5-point difference using the UCLA (University of California, Los Angeles) score. 
Conclusions: Double-row rotator cuff repair does not show a statistically significant improvement in clinical outcome or radiographic healing with short-term follow-up. PMID:23016017
A biomechanical analysis of a single-row suture anchor fixation of a large bony Bankart lesion.
Dyskin, Evgeny; Marzo, John M; Howard, Craig; Ehrensberger, Mark
2014-12-01
This study was conducted to assess whether a single-row suture anchor repair of a bony Bankart lesion comprising 19% of the glenoid length restores peak translational force and glenoid depth compared with the intact shoulder. Nine thawed adult cadaveric shoulders were dissected and mounted in 45° of abduction and 30° of external rotation. A bony Bankart lesion was simulated with an anterior longitudinal osteotomy, parallel to the superoinferior axis of the glenoid, equivalent to 19% of the glenoid length. The humeral head was displaced 10 mm anteriorly at a speed of 2 mm/s with a 50-N compressive load applied. Testing was performed with the glenoid intact, a simulated lesion, and the lesion repaired with 3 single-row suture anchors. Median (interquartile range [IQR]) peak translational force and glenoid depth were reported. The Friedman test and post hoc comparisons with the Wilcoxon signed rank test were used for between-group analyses. Peak translational force decreased after osteotomy (13.7 N; IQR, 9.6 to 15.5 N; P = .01) and increased after the repair (18.3 N; IQR, 18.3 to 20.6 N; P = .01) compared with the intact shoulder (23.7 N; IQR, 16.4 to 29.9 N). Glenoid depth significantly decreased after the osteotomy (0.2 mm; IQR, -0.6 to 0.7 mm) compared with baseline (1.7 mm; IQR, 1.3 to 2.0 mm; P = .01) and increased after repair (0.8 mm; IQR, 0.1 to 1.0 mm; P = .03) compared with the osteotomized shoulder. The glenoid depth of the repair was less than the baseline value (P = .01). Repair of an anterior bony Bankart lesion equivalent to 19% of the glenoid length with 3 suture anchors restored the peak translational force needed to anteriorly displace the humerus relative to the glenoid; however, this technique failed to restore the natural glenoid depth in a laboratory setting. Our findings describe the inability of a single-row suture anchor repair to provide anatomic fixation of the bony Bankart lesion equivalent to 19% of the glenoid length. 
Published by Elsevier Inc.
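The statistical workflow used in the study above (a Friedman test across the three paired conditions, followed by post hoc Wilcoxon signed-rank comparisons) can be sketched with SciPy. The numbers below are fabricated for illustration and are not the study's data.

```python
import numpy as np
from scipy import stats

# Peak translational force (N) for 9 specimens under 3 paired conditions
# (fabricated values roughly matching the reported medians).
intact = np.array([23.7, 16.4, 29.9, 25.1, 22.0, 27.3, 19.8, 24.4, 21.5])
lesion = np.array([13.7,  9.6, 15.5, 14.2, 12.8, 16.0, 11.1, 13.9, 12.4])
repair = np.array([18.3, 18.3, 20.6, 19.0, 17.5, 21.2, 16.8, 18.9, 17.7])

# Omnibus test across the three repeated conditions:
stat, p = stats.friedmanchisquare(intact, lesion, repair)
print(f"Friedman chi2={stat:.2f}, p={p:.4f}")

# Post hoc paired comparisons (in practice corrected for multiplicity):
for name, other in [("lesion", lesion), ("repair", repair)]:
    w, pw = stats.wilcoxon(intact, other)
    print(f"intact vs {name}: p={pw:.4f}")
```

Nonparametric tests are the natural choice here because n = 9 specimens per condition is too small to justify normality assumptions.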
NASA Technical Reports Server (NTRS)
Lee, Stuart M. C.; Stenger, Michael B.; Laurie, Steven S.; Ploutz-Snyder, Lori L.; Platts, Steven H.
2015-01-01
More than 60% of US astronauts participating in Mir and early International Space Station missions (greater than 5 months) were unable to complete a 10-min 80 deg head-up tilt test on landing day. This high incidence of post-spaceflight orthostatic intolerance may be related to limitations of the inflight exercise hardware that prevented high intensity training. PURPOSE: This study sought to determine if a countermeasure program that included intense lower-body resistive and rowing exercises designed to prevent cardiovascular and musculoskeletal deconditioning during 70 days of 6 deg head-down tilt bed rest (BR), a spaceflight analog, also would protect against post-BR orthostatic intolerance. METHODS: Sixteen males participated in this study and performed no exercise (Control, n=10) or performed an intense supine exercise protocol with resistive and aerobic components (Exercise, n=6). On 3 days/week, exercise subjects performed lower body resistive exercise and a 30-min continuous bout of rowing (greater than or equal to 75% max heart rate). On 3 other days/week, subjects performed only high-intensity, interval-style rowing. Orthostatic intolerance was assessed using a 15-min 80 deg head-up tilt test performed 2 days (BR-2) before and on the last day of BR (BR70). Plasma volume was measured using a carbon monoxide rebreathing technique on BR-3 and before rising on the first recovery day (BR+0). RESULTS: Following 70 days of BR, tilt tolerance time decreased significantly in both the Control (BR-2: 15.0 +/- 0.0, BR70: 9.9 +/- 4.6 min, mean +/- SD) and Exercise (BR-2: 12.2 +/- 4.7, BR70: 4.9 +/- 1.9 min) subjects, but the decreased tilt tolerance time was not different between groups (Control: -34 +/- 31, Exercise: -56 +/- 16%). Plasma volume also decreased (Control: -0.56 +/- 0.40, Exercise: -0.48 +/- 0.33 L) from pre to post-BR, with no differences between groups (Control: -18 +/- 11%, Exercise: -15 +/- 10%). 
CONCLUSIONS: These findings confirm previous reports in shorter BR studies that the performance of an exercise countermeasure protocol by itself during BR does not prevent orthostatic intolerance or plasma volume loss. This suggests that protection against orthostatic intolerance in astronauts following long-duration spaceflight will require an additional intervention, such as periodic orthostatic stress, fluid repletion, and/or lower-body compression garments.
Louisiana farm discussion: 8 foot row spacing
USDA-ARS?s Scientific Manuscript database
This year, several tests in growers’ fields were used to compare traditional 6-foot row spacing to 8-foot row spacing. Cane is double-drilled in the wider row spacing, which would accommodate the John Deere 3522 harvester. Field data indicate the sugarcane yields are very comparable in 8-...
Integrated optical transceiver with electronically controlled optical beamsteering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davids, Paul; DeRose, Christopher; Tauke-Pedretti, Anna
A beam-steering optical transceiver is provided. The transceiver includes one or more modules, each comprising an antenna chip and a control chip bonded to the antenna chip. Each antenna chip has a feeder waveguide, a plurality of row waveguides that tap off from the feeder waveguide, and a plurality of metallic nanoantenna elements arranged in a two-dimensional array of rows and columns such that each row overlies one of the row waveguides. Each antenna chip also includes a plurality of independently addressable thermo-optical phase shifters, each configured to produce a thermo-optical phase shift in a respective row. Each antenna chip also has, for each row, a row-wise heating circuit configured to produce a respective thermo-optic phase shift at each nanoantenna element along its row. The control chip includes controllable current sources for the independently addressable thermo-optical phase shifters and the row-wise heating circuits.
Mechanical performance of aquatic rowing and flying.
Walker, J A; Westneat, M W
2000-09-22
Aquatic flight, performed by rowing or flapping fins, wings or limbs, is a primary locomotor mechanism for many animals. We used a computer simulation to compare the mechanical performance of rowing and flapping appendages across a range of speeds. Flapping appendages proved to be more mechanically efficient than rowing appendages at all swimming speeds, suggesting that animals that frequently engage in locomotor behaviours that require energy conservation should employ a flapping stroke. The lower efficiency of rowing appendages across all speeds begs the question of why rowing occurs at all. One answer lies in the ability of rowing fins to generate more thrust than flapping fins during the power stroke. Large forces are necessary for manoeuvring behaviours such as accelerations, turning and braking, which suggests that rowing should be found in slow-swimming animals that frequently manoeuvre. The predictions of the model are supported by observed patterns of behavioural variation among rowing and flapping vertebrates.
Yousif, Matthew John; Bicos, James
2017-12-01
The glenohumeral joint is the most commonly dislocated joint in the body. Failure rates of capsulolabral repair have been reported to be approximately 8%. Recent focus has been on restoration of the capsulolabral complex by a double-row capsulolabral repair technique in an effort to decrease redislocation rates after arthroscopic capsulolabral repair. To present a review of the biomechanical literature comparing single- versus double-row capsulolabral repairs and discuss the previous case series of double-row fixation. Narrative review. A simple review of the literature was performed by PubMed search. Only biomechanical studies comparing single- versus double-row capsulolabral repair were included for review. Only those case series and descriptive techniques with clinical results for double-row repair were included in the discussion. Biomechanical comparisons evaluating the native footprint of the labrum demonstrated significantly superior restoration of the footprint through double-row capsulolabral repair compared with single-row repair. Biomechanical comparisons of contact pressure at the repair interface, fracture displacement in bony Bankart lesion, load to failure, and decreased external rotation (suggestive of increased load to failure) were also significantly in favor of double- versus single-row repair. Recent descriptive techniques and case series of double-row fixation have demonstrated good clinical outcomes; however, no comparative clinical studies between single- and double-row repair have assessed functional outcomes. The superiority of double-row capsulolabral repair versus single-row repair remains uncertain because comparative studies assessing clinical outcomes have yet to be performed.
Effects of stroke resistance on rowing economy in club rowers post-season.
Kane, D A; Mackenzie, S J; Jensen, R L; Watts, P B
2013-02-01
In the sport of rowing, increasing the impulse applied to the oar handle during the stroke can result in greater boat velocities; this may be facilitated by increasing the surface area of the oar blade and/or increasing the length of the oars. The purpose of this study was to compare the effects of different rowing resistances on the physiological response to rowing. 5 male and 7 female club rowers completed progressive, incremental exercise tests on an air-braked rowing ergometer, using either low (LO; 100) or high (HI; 150) resistance (values are according to the adjustable "drag factor" setting on the ergometer). Expired air, blood lactate concentration, heart rate, rowing cadence, and ergometer power output were monitored during the tests. LO rowing elicited significantly greater cadences (P<0.01) and heart rates (P<0.05), whereas rowing economy (J per litre of O2 equivalent) was significantly greater during HI rowing (P<0.05). These results suggest that, economically, rowing with a greater resistance may be advantageous for performance. Moreover, biomechanical analysis of ergometer rowing supports the notion that the impulse generated during the stroke increases positively as a function of rowing resistance. We conclude that an aerobic advantage associated with greater resistance parallels the empirical trend toward larger oar blades in competitive rowing. This may be explained by a greater stroke impulse at the higher resistance. © Georg Thieme Verlag KG Stuttgart · New York.
Stretchy binary classification.
Toh, Kar-Ann; Lin, Zhiping; Sun, Lei; Li, Zhengguo
2018-01-01
In this article, we introduce an analytic formulation for compressive binary classification. The formulation seeks to solve the least ℓp-norm of the parameter vector subject to a classification error constraint. An analytic and stretchable estimation is conjectured where the estimation can be viewed as an extension of the pseudoinverse with left and right constructions. Our variance analysis indicates that the estimation based on the left pseudoinverse is unbiased and the estimation based on the right pseudoinverse is biased. Sparseness can be obtained for the biased estimation under certain mild conditions. The proposed estimation is investigated numerically using both synthetic and real-world data. Copyright © 2017 Elsevier Ltd. All rights reserved.
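The left and right pseudoinverse constructions referenced above can be checked numerically: for a full-column-rank design the left construction (XᵀX)⁻¹Xᵀ gives the least-squares solution, while for a full-row-rank design the right construction Xᵀ(XXᵀ)⁻¹ gives the minimum-norm solution. The classification error constraint of the paper is not modelled here, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overdetermined system: left pseudoinverse (X^T X)^-1 X^T (least squares).
X = rng.standard_normal((20, 5))
y = rng.standard_normal(20)
w_left = np.linalg.solve(X.T @ X, X.T @ y)

# Underdetermined system: right pseudoinverse X^T (X X^T)^-1 (minimum norm).
Xu = rng.standard_normal((5, 20))
yu = rng.standard_normal(5)
w_right = Xu.T @ np.linalg.solve(Xu @ Xu.T, yu)

# Both agree with the Moore-Penrose pseudoinverse in their respective regimes:
assert np.allclose(w_left, np.linalg.pinv(X) @ y)
assert np.allclose(w_right, np.linalg.pinv(Xu) @ yu)
print(w_left.shape, w_right.shape)   # (5,) (20,)
```

In the underdetermined regime many exact solutions exist; the right construction picks the smallest one in ℓ2 norm, which is the setting where the paper's biased, potentially sparse estimate arises.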
Pilot-based parametric channel estimation algorithm for DCO-OFDM-based visible light communications
NASA Astrophysics Data System (ADS)
Qian, Xuewen; Deng, Honggui; He, Hailang
2017-10-01
Due to the wide modulation bandwidth in optical communication, multipath channels may be non-sparse and heavily deteriorate communication performance. Traditional compressive sensing-based channel estimation algorithms cannot be employed in this kind of situation. In this paper, we propose a practical parametric channel estimation algorithm for orthogonal frequency division multiplexing (OFDM)-based visible light communication (VLC) systems, based on a modified zero correlation code (ZCC) pair that has an impulse-like correlation property. Simulation results show that the proposed algorithm achieves better performance than the existing least squares (LS)-based algorithm in both bit error ratio (BER) and frequency response estimation.
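For context, the conventional least-squares pilot estimator that such parametric methods are compared against can be sketched as follows: the channel is estimated as Y/X at each pilot subcarrier and interpolated in between. The pilot layout and channel taps below are invented, and DCO-OFDM specifics (DC bias, Hermitian symmetry) are omitted.

```python
import numpy as np

n_fft = 64
pilots = np.arange(0, n_fft, 8)                  # comb-type pilot positions
h = np.array([1.0, 0.6, 0.3, 0.1])               # short multipath channel (invented)
H_true = np.fft.fft(h, n_fft)                    # true frequency response

X = np.ones(n_fft)                               # known pilot symbols (all ones)
Y = H_true * X                                   # noiseless received symbols

H_ls_pilots = Y[pilots] / X[pilots]              # LS estimate at the pilots
# Linear interpolation of real/imag parts across all subcarriers:
k = np.arange(n_fft)
H_hat = (np.interp(k, pilots, H_ls_pilots.real)
         + 1j * np.interp(k, pilots, H_ls_pilots.imag))

err = np.linalg.norm(H_hat - H_true) / np.linalg.norm(H_true)
print(round(err, 3))   # small; only interpolation error, since no noise was added
```

A parametric estimator instead fits the delays and gains of the channel taps directly, which is why it can outperform this per-subcarrier LS-plus-interpolation baseline.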
NASA Astrophysics Data System (ADS)
van Rooij, Michael P. C.
Current turbomachinery design systems increasingly rely on multistage Computational Fluid Dynamics (CFD) as a means to assess performance of designs. However, design weaknesses attributed to improper stage matching are addressed using often ineffective strategies involving a costly iterative loop between blading modification, revision of design intent, and evaluation of aerodynamic performance. A design methodology is presented which greatly improves the process of achieving design-point aerodynamic matching. It is based on a three-dimensional viscous inverse design method which generates the blade camber surface based on prescribed pressure loading, thickness distribution and stacking line. This inverse design method has been extended to allow blading analysis and design in a multi-blade row environment. Blade row coupling was achieved through a mixing plane approximation. Parallel computing capability in the form of MPI has been implemented to reduce the computational time for multistage calculations. Improvements have been made to the flow solver to reach the level of accuracy required for multistage calculations. These include inclusion of heat flux, temperature-dependent treatment of viscosity, and improved calculation of stress components and artificial dissipation near solid walls. A validation study confirmed that the obtained accuracy is satisfactory at design point conditions. Improvements have also been made to the inverse method to increase robustness and design fidelity. These include the possibility to exclude spanwise sections of the blade near the endwalls from the design process, and a scheme that adjusts the specified loading area for changes resulting from the leading and trailing edge treatment. Furthermore, a pressure loading manager has been developed. Its function is to automatically adjust the pressure loading area distribution during the design calculation in order to achieve a specified design objective. 
Possible objectives are overall mass flow and compression ratio, and radial distribution of exit flow angle. To supplement the loading manager, mass flow inlet and exit boundary conditions have been implemented. Through appropriate combination of pressure or mass flow inflow/outflow boundary conditions and loading manager objectives, increased control over the design intent can be obtained. The three-dimensional multistage inverse design method with pressure loading manager was demonstrated to offer greatly enhanced blade row matching capabilities. Multistage design allows for simultaneous design of blade rows in a mutually interacting environment, which permits the redesigned blading to adapt to changing aerodynamic conditions resulting from the redesign. This ensures that the obtained blading geometry and performance implied by the prescribed pressure loading distribution are consistent with operation in the multi-blade row environment. The developed methodology offers high aerodynamic design quality and productivity, and constitutes a significant improvement over existing approaches used to address design-point aerodynamic matching.
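The mixing-plane approximation used for blade row coupling above can be illustrated in a few lines: flow quantities leaving one blade row are circumferentially averaged before being handed to the next row, which removes unsteady wake interaction at the interface while preserving the spanwise profile. The grid sizes and the averaged quantity below are placeholders, not the solver's data.

```python
import numpy as np

n_span, n_theta = 8, 32
rng = np.random.default_rng(0)
# Exit plane of row 1: a quantity (e.g., pressure) varying in span and
# circumference, the circumferential variation standing in for blade wakes.
p_exit = (1.0 + 0.1 * np.sin(np.linspace(0.0, 2.0 * np.pi, n_theta))
          + 0.05 * rng.standard_normal((n_span, n_theta)))

# Mixing plane: mix out the circumferential variation, keep the spanwise profile.
p_mixed = p_exit.mean(axis=1)                 # one value per spanwise station

# Row 2 then sees a circumferentially uniform inlet condition:
inlet_row2 = np.repeat(p_mixed[:, None], n_theta, axis=1)
print(p_mixed.shape, inlet_row2.shape)        # (8,) (8, 32)
```

Flux-weighted rather than area-simple averages are used in practice to conserve mass, momentum, and energy across the interface; the plain mean here is a simplification.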
Incidence of retear with double-row versus single-row rotator cuff repair.
Shen, Chong; Tang, Zhi-Hong; Hu, Jun-Zu; Zou, Guo-Yao; Xiao, Rong-Chi
2014-11-01
Rotator cuff tears have a high recurrence rate, even after arthroscopic rotator cuff repair. Although some biomechanical evidence suggests the superiority of the double-row vs the single-row technique, clinical findings regarding these methods have been controversial. The purpose of this study was to determine whether the double-row repair method results in a lower incidence of recurrent tearing compared with the single-row method. Electronic databases were systematically searched to identify reports of randomized, controlled trials (RCTs) comparing single-row with double-row rotator cuff repair. The primary outcome assessed was retear of the repaired cuff. Secondary outcome measures were the American Shoulder and Elbow Surgeons (ASES) shoulder score, the Constant shoulder score, and the University of California, Los Angeles (UCLA) score. Heterogeneity between the included studies was assessed. Six studies involving 428 patients were included in the review. Compared with single-row repair, double-row repair demonstrated a lower retear incidence (risk ratio [RR]=1.71 [95% confidence interval (CI), 1.18-2.49]; P=.005; I(2)=0%) and a reduced incidence of partial-thickness retears (RR=2.16 [95% CI, 1.26-3.71]; P=.005; I(2)=26%). Functional ASES, Constant, and UCLA scores showed no difference between single- and double-row cuff repairs. Use of the double-row technique decreased the incidence of retears, especially partial-thickness retears, compared with the single-row technique. The functional outcome was not significantly different between the 2 techniques. To improve the structural outcome of the repaired rotator cuff, surgeons should use the double-row technique. However, further long-term RCTs on this topic are needed. Copyright 2014, SLACK Incorporated.
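The risk-ratio arithmetic behind results such as RR=1.71 [95% CI, 1.18-2.49] can be sketched for a single 2×2 table (pooling across the six trials would add inverse-variance weighting). The event counts below are fabricated for illustration and are not the review's data.

```python
import numpy as np

# Retears / total, single-row vs. double-row (fabricated counts).
a, n1 = 30, 156        # single-row: events, patients
b, n2 = 17, 147        # double-row: events, patients

rr = (a / n1) / (b / n2)                       # risk ratio
se_log_rr = np.sqrt(1/a - 1/n1 + 1/b - 1/n2)   # SE of log(RR)
ci_low, ci_high = np.exp(np.log(rr) + np.array([-1.96, 1.96]) * se_log_rr)
print(f"RR={rr:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```

A CI excluding 1 (as in the review's pooled estimate) indicates a statistically significant difference in retear risk between the two techniques.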
Miura, Yohei; Ichikawa, Katsuhiro; Fujimura, Ichiro; Hara, Takanori; Hoshino, Takashi; Niwa, Shinji; Funahashi, Masao
2018-03-01
The 320-detector row computed tomography (CT) system, i.e., the area detector CT (ADCT), can perform helical scanning with detector configurations of 4-, 16-, 32-, 64-, 80-, 100-, and 160-detector rows for routine CT examinations. This phantom study aimed to compare the quality of images obtained using helical scan mode with different detector configurations. The image quality was measured using the modulation transfer function (MTF) and the noise power spectrum (NPS). The system performance function (SP), based on the pre-whitening theorem, was calculated as MTF²/NPS and compared between configurations. Five detector configurations, i.e., 0.5 × 16 mm (16 row), 0.5 × 64 mm (64 row), 0.5 × 80 mm (80 row), 0.5 × 100 mm (100 row), and 0.5 × 160 mm (160 row), were compared using a constant volume CT dose index (CTDIvol) of 25 mGy, simulating the scan of an adult abdomen, and with a constant effective mAs value. The MTF was measured using the wire method, and the NPS was measured from images of a 20-cm diameter phantom with uniform content. For the constant CTDIvol, the SP of the 80-row configuration was the best, followed by the 64-, 160-, 16-, and 100-row configurations; the SPs of the 100- and 160-row configurations were approximately 30% lower than that of the 80-row configuration. For the constant effective mAs, the SPs of the 100-row and 160-row configurations were significantly lower compared with the other three detector configurations. The 80- and 64-row configurations were adequate in cases that required dose efficiency rather than scan speed.
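The system performance metric used above is a direct pointwise computation, SP(f) = MTF(f)²/NPS(f). The sketch below applies it to synthetic MTF and NPS curves with plausible shapes; these are placeholders, not measurements from the ADCT.

```python
import numpy as np

f = np.linspace(0.05, 1.0, 20)          # spatial frequency (cycles/mm)
mtf = np.exp(-2.0 * f**2)                # Gaussian-like MTF (synthetic)
nps = 50.0 * f * np.exp(-1.5 * f)        # typical filtered-backprojection NPS shape (synthetic)

sp = mtf**2 / nps                        # system performance function

# A configuration with lower noise at equal resolution scores uniformly higher:
sp_low_noise = mtf**2 / (0.7 * nps)
assert np.all(sp_low_noise > sp)
print(bool(sp[0] > sp[-1]))              # True: SP falls off at high frequency
```

Because squaring the MTF and dividing by the NPS penalizes both blur and noise, SP lets detector configurations with different resolution/noise trade-offs be ranked on a single curve.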
Baums, M H; Schminke, B; Posmyk, A; Miosge, N; Klinger, H-M; Lakemeier, S
2015-01-01
The clinical superiority of the double-row technique is still a subject of controversial debate in rotator cuff repair. We hypothesised that the expression of different collagen types would differ between double-row and single-row rotator cuff repair, indicating a faster healing response with the double-row technique. Twenty-four mature female sheep were randomly assigned to two groups in which a surgically created acute infraspinatus tendon tear was fixed using either a modified single- or double-row repair technique. Shoulder joints from female sheep cadavers of identical age, bone maturity, and weight served as an untreated control group. Expression of type I, II, and III collagen was observed in the tendon-to-bone junction, along with recovering changes in the fibrocartilage zone, after immunohistological tissue staining at 1, 2, 3, 6, 12, and 26 weeks postoperatively. Expression of type III collagen remained positive until 6 weeks after surgery in the double-row group, whereas it was detectable for 12 weeks in the single-row group. In both groups, type I collagen expression increased after 12 weeks. Type II collagen expression was increased after 12 weeks in the double-row versus the single-row group. Clusters of chondrocytes were only visible between weeks 6 and 12 in the double-row group. The study demonstrates differences in the expression of type I and type III collagen in the tendon-to-bone junction following double-row rotator cuff repair compared to single-row repair. The healing response in this acute repair model is faster in the double-row group during the investigated healing period.
Single-row versus double-row arthroscopic rotator cuff repair in small- to medium-sized tears.
Aydin, Nuri; Kocaoglu, Baris; Guven, Osman
2010-07-01
It has been proposed that double-row rotator cuff repair leads to superior cuff integrity and clinical results compared with single-row repair. The study enrolled 68 patients with a full-thickness rotator cuff tear who were divided into 2 groups of 34 patients according to repair technique. The patients were followed up for at least 2 years, and the results were evaluated by Constant score. At the final follow-up, the Constant score was 82.2 in the single-row group and 78.8 in the double-row group. Functional outcome was improved in both groups after surgery, but the difference between the 2 groups was not significant. Despite the biomechanical and cadaver studies that demonstrated the superiority of double-row fixation over single-row fixation, our clinical results show no difference in functional outcome between the two methods. It is evident that double-row repair is more technically demanding, expensive, and time-consuming than single-row repair, without providing a significant improvement in clinical results. At long-term follow-up, arthroscopic rotator cuff repair with the double-row technique showed no significant difference in clinical outcome compared with single-row repair in small- to medium-sized tears. 2010 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.
Counterrotatable booster compressor assembly for a gas turbine engine
NASA Technical Reports Server (NTRS)
Moniz, Thomas Ory (Inventor); Orlando, Robert Joseph (Inventor)
2004-01-01
A counterrotatable booster compressor assembly for a gas turbine engine having a counterrotatable fan section with a first fan blade row connected to a first drive shaft and a second fan blade row axially spaced from the first fan blade row and connected to a second drive shaft, the counterrotatable booster compressor assembly including a first compressor blade row connected to the first drive shaft and a second compressor blade row interdigitated with the first compressor blade row and connected to the second drive shaft. A portion of each fan blade of the second fan blade row extends through a flowpath of the counterrotatable booster compressor so as to function as a compressor blade in the second compressor blade row. The counterrotatable booster compressor further includes a first platform member integral with each fan blade of the second fan blade row at a first location so as to form an inner flowpath for the counterrotatable booster compressor and a second platform member integral with each fan blade of the second fan blade row at a second location so as to form an outer flowpath for the counterrotatable booster compressor.
The Influence of Periodically Unsteady Inflow on the Transition Behavior of Compressor Cascades
NASA Astrophysics Data System (ADS)
Teusch, Reinhold
2001-01-01
The primary goal of this study is to gain deeper insight into the physical processes within the blade boundary layer. The author accomplishes this through a detailed examination of the unsteady flow behavior of compressor blades with Controlled Diffusion Airfoil (CDA) profiling under the influence of wakes shed by upstream blade rows. In addition to assessing the accuracy of steady and unsteady computational methods, criteria are defined for the design of modern compressor blades with respect to rotor/stator interaction. An overview of the literature is then given, covering both the fundamentals of unsteady transition behavior under the influence of incoming wakes and the most recent work on the open problems of the field.
Spang, Jeffrey T; Buchmann, Stefan; Brucker, Peter U; Kouloumentas, Panos; Obst, Tobias; Schröder, Manuel; Burgkart, Rainer; Imhoff, Andreas B
2009-08-01
A novel double-row configuration was compared with a traditional double-row configuration for rotator cuff repair. In 10 matched pairs of sheep shoulders, in vitro repair was performed with either a double-row technique with corkscrew suture anchors for the medial row and insertion anchors for the lateral row (group A) or a double-row technique with a new tape-like suture material and insertion anchors for both the medial and lateral rows (group B). Each specimen underwent cyclic loading from 10 to 150 N for 100 cycles, followed by unidirectional failure testing. Gap formation and strain within the repair area for the first and last cycles were analyzed with a video digitizing system, and stiffness and failure load were determined from the load-elongation curve. The results were similar for the 2 repair types. There was no significant difference between the ultimate failure loads of the 2 techniques (421 +/- 150 N in group A and 408 +/- 66 N in group B, P = .31) or the stiffness of the 2 techniques (84 +/- 26 N/mm in group A and 99 +/- 20 N/mm in group B, P = .07). In addition, neither gap formation nor strain over the repair area differed between the repair types. Both tested rotator cuff repair techniques had high failure loads, limited gap formation, and acceptable strain patterns. No significant difference was found between the novel and conventional double-row repair types. Two double-row techniques, one with corkscrew suture anchors for the medial row and insertion anchors for the lateral row and one with insertion anchors for both the medial and lateral rows, provided excellent biomechanical profiles at time 0 for double-row repairs in a sheep model. Although the sheep model may not directly correspond to in vivo conditions, all-insertion anchor double-row constructs are worthy of further investigation.
Yousif, Matthew John; Bicos, James
2017-01-01
Background: The glenohumeral joint is the most commonly dislocated joint in the body. Failure rates of capsulolabral repair have been reported to be approximately 8%. Recent focus has been on restoration of the capsulolabral complex by a double-row capsulolabral repair technique in an effort to decrease redislocation rates after arthroscopic capsulolabral repair. Purpose: To present a review of the biomechanical literature comparing single- versus double-row capsulolabral repairs and discuss the previous case series of double-row fixation. Study Design: Narrative review. Methods: A simple review of the literature was performed by PubMed search. Only biomechanical studies comparing single- versus double-row capsulolabral repair were included for review. Only those case series and descriptive techniques with clinical results for double-row repair were included in the discussion. Results: Biomechanical comparisons evaluating the native footprint of the labrum demonstrated significantly superior restoration of the footprint through double-row capsulolabral repair compared with single-row repair. Biomechanical comparisons of contact pressure at the repair interface, fracture displacement in bony Bankart lesion, load to failure, and decreased external rotation (suggestive of increased load to failure) were also significantly in favor of double- versus single-row repair. Recent descriptive techniques and case series of double-row fixation have demonstrated good clinical outcomes; however, no comparative clinical studies between single- and double-row repair have assessed functional outcomes. Conclusion: The superiority of double-row capsulolabral repair versus single-row repair remains uncertain because comparative studies assessing clinical outcomes have yet to be performed. PMID:29230427
The extension of a uniform canopy reflectance model to include row effects
NASA Technical Reports Server (NTRS)
Suits, G. H. (Principal Investigator)
1981-01-01
The effect of row structure is assumed to be caused by the variation in vegetation density across rows rather than by a profile in canopy height. The calculation of crop reflectance using vegetation density modulation across rows follows a procedure parallel to that for a uniform canopy. Predictions using the row model for wheat show that the effect of changes in sun-to-row azimuth is greatest in Landsat Band 5 (the red band) and can result in underestimation of crop vigor.
Death row inmate characteristics, adjustment, and confinement: a critical review of the literature.
Cunningham, Mark D; Vigen, Mark P
2002-01-01
This article reviews and summarizes research on death row inmates. The contributions and weaknesses of death row demographic data, clinical studies, and research based on institutional records are critiqued. Our analysis shows that death row inmates are overwhelmingly male and disproportionately Southern. Racial representation remains controversial. Frequently death row inmates are intellectually limited and academically deficient. Histories of significant neurological insult are common, as are developmental histories of trauma, family disruption, and substance abuse. Rates of psychological disorder among death row inmates are high, with conditions of confinement appearing to precipitate or aggravate these disorders. Contrary to expectation, the extant research indicates that the majority of death row inmates do not exhibit violence in prison even in more open institutional settings. These findings have implications for forensic mental health sentencing evaluations, competent attorney representation, provision of mental health services, racial disparity in death sentences, death row security and confinement policies, and moral culpability considerations. Future research directions on death row populations are suggested. Copyright 2002 John Wiley & Sons, Ltd.
Single- and double-row repair for rotator cuff tears - biology and mechanics.
Papalia, Rocco; Franceschi, Francesco; Vasta, Sebastiano; Zampogna, Biagio; Maffulli, Nicola; Denaro, Vincenzo
2012-01-01
We critically review the existing studies comparing the features of single- and double-row repair and discuss the surgical indications for the two techniques. All currently available studies comparing the biomechanical, clinical, and biological features of single- and double-row repair were considered. Biomechanically, double-row repair performs better in terms of higher initial fixation strength, greater footprint coverage, improved contact area and pressure, decreased gap formation, and higher load to failure. Results of clinical studies demonstrate no significantly better outcomes for double-row compared to single-row repair. Better results are achieved by double-row repair for larger lesions (tear size 2.5-3.5 cm). Considering the lack of statistically significant differences between the two techniques, and that double-row repair is a high-cost, highly skill-dependent technique, we suggest using it only in strictly selected patients. Copyright © 2012 S. Karger AG, Basel.
Six-rowed barley originated from a mutation in a homeodomain-leucine zipper I-class homeobox gene
Komatsuda, Takao; Pourkheirandish, Mohammad; He, Congfen; Azhaguvel, Perumal; Kanamori, Hiroyuki; Perovic, Dragan; Stein, Nils; Graner, Andreas; Wicker, Thomas; Tagiri, Akemi; Lundqvist, Udda; Fujimura, Tatsuhito; Matsuoka, Makoto; Matsumoto, Takashi; Yano, Masahiro
2007-01-01
Increased seed production has been a common goal during the domestication of cereal crops, and early cultivators of barley (Hordeum vulgare ssp. vulgare) selected a phenotype with a six-rowed spike that stably produced three times the usual grain number. This improved yield established barley as a founder crop for the Near Eastern Neolithic civilization. The barley spike has one central and two lateral spikelets at each rachis node. The wild-type progenitor (H. vulgare ssp. spontaneum) has a two-rowed phenotype, with additional, strictly rudimentary, lateral rows; this natural adaptation is advantageous for seed dispersal after shattering. Until recently, the origin of the six-rowed phenotype remained unknown. In the present study, we isolated vrs1 (six-rowed spike 1), the gene responsible for the six-rowed spike in barley, by means of positional cloning. The wild-type Vrs1 allele (for two-rowed barley) encodes a transcription factor that includes a homeodomain with a closely linked leucine zipper motif. Expression of Vrs1 was strictly localized in the lateral-spikelet primordia of immature spikes, suggesting that the VRS1 protein suppresses development of the lateral rows. Loss of function of Vrs1 resulted in complete conversion of the rudimentary lateral spikelets in two-rowed barley into fully developed fertile spikelets in the six-rowed phenotype. Phylogenetic analysis demonstrated that the six-rowed phenotype originated repeatedly, at different times and in different regions, through independent mutations of Vrs1. PMID:17220272
Lorbach, Olaf; Kieb, Matthias; Raber, Florian; Busch, Lüder C; Kohn, Dieter M; Pape, Dietrich
2013-01-01
The double-row suture bridge repair was recently introduced and has demonstrated superior biomechanical results and higher yield load compared with the traditional double-row technique. It therefore seemed reasonable to compare this second generation of double-row constructs to the modified single-row double mattress reconstruction. The repair technique, initial tear size, and tendon subregion will have a significant effect on 3-dimensional (3D) cyclic displacement under additional static external rotation of a modified single-row compared with a double-row rotator cuff repair. Controlled laboratory study. Rotator cuff tears (small to medium: 25 mm; medium to large: 35 mm) were created in 24 human cadaveric shoulders. Rotator cuff repairs were performed as modified single-row or double-row repairs, and cyclic loading (10-60 N, 10-100 N) was applied under 20° of external rotation. Radiostereometric analysis was used to calculate cyclic displacement in the anteroposterior (x), craniocaudal (y), and mediolateral (z) planes with a focus on the repair constructs and the initial tear size. Moreover, differences in cyclic displacement of the anterior compared with the posterior tendon subregions were calculated. Significantly lower cyclic displacement was seen in small to medium tears for the single-row compared with double-row repair at 60 and 100 N in the x plane (P = .001) and y plane (P = .001). The results were similar in medium to large tears at 100 N in the x plane (P = .004). Comparison of 25-mm versus 35-mm tears did not show any statistically significant differences for the single-row repairs. In the double-row repairs, lower gap formation was found for the 35-mm tears (P ≤ .05). Comparison of the anterior versus posterior tendon subregions revealed a trend toward higher anterior gap formation, although this was statistically not significant. The tested single-row reconstruction achieved superior results in 3D cyclic displacement to the tested double-row repair. 
Extension of the initial rupture size did not have a negative effect on the biomechanical results of the tested constructs. Single-row repairs with modified suture configurations provide comparable biomechanical strength to double-row repairs. Furthermore, as increased gap formation in the early postoperative period might lead to failure of the construct, a strong anterior fixation and restricted external rotation protocol might be considered in rotator cuff repairs to avoid this problem.
How many CT detector rows are necessary to perform adequate three dimensional visualization?
Fischer, Lars; Tetzlaff, Ralf; Schöbinger, Max; Radeleff, Boris; Bruckner, Thomas; Meinzer, H P; Büchler, M W; Schemmer, Peter
2010-06-01
The technical development of computed tomography (CT) imaging has progressed greatly. As a consequence, CT data used for 3D visualization are based not only on 4-row and 16-row CTs but also on 64-row CTs. The main goal of this study was to examine whether an increased number of CT detector rows correlates with improved quality of the 3D images. All CTs were acquired during routinely performed preoperative evaluation. Overall, there were 12 data sets based on 4-detector-row CT, 12 based on 16-detector-row CT, and 10 based on 64-detector-row CT. Imaging data sets were transferred to the DKFZ Heidelberg using the CHILI teleradiology system. All CT scans were examined in a blinded fashion, i.e., both the name of the patient and the name of the CT brand were erased. For analysis, the time for segmentation of the liver and of both portal and hepatic veins, as well as the branching depth of portal and hepatic veins, was recorded automatically. In addition, all results were validated in a blinded fashion using a given quality index. Segmentation of the liver was performed in significantly shorter time (p<0.01, Kruskal-Wallis test) with the 16-row CT (median 479 s) compared to the 4-row CT (median 611 s) and 64-row CT (median 670 s). The branching depth of the portal vein did not differ significantly among the 3 data sets (p=0.37, Kruskal-Wallis test). However, the branching depth of the hepatic veins was significantly better (p=0.028, Kruskal-Wallis test) in the 4-row and 16-row CT compared to the 64-row CT.
Even though the total quality index was better for the vessel tree based on 64-row CT data sets (mean scale 2.6) compared to 4-row CT data (mean scale 3.25) and 16-row CT data (mean scale 3.0), these differences did not reach statistical significance (p=0.53, Kruskal-Wallis test). Even though 3D visualization is useful in operation planning, the quality of the 3D images appears not to depend on the number of CT detector rows. Copyright (c) 2009. Published by Elsevier Ireland Ltd.
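The group comparisons above all rely on the Kruskal-Wallis test. A minimal sketch of the underlying H statistic (without tie correction), applied to hypothetical segmentation times that merely echo the reported medians (illustrative values only, not the study's data):

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic over k samples (no tie correction)."""
    # Pool all observations, remembering each one's group index.
    pooled = sorted((x, gi) for gi, g in enumerate(groups) for x in g)
    n_total = len(pooled)
    rank_sums = [0.0] * len(groups)
    i = 0
    while i < n_total:
        # Find the run of tied values and assign them their average rank.
        j = i
        while j < n_total and pooled[j][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + 1 + j) / 2.0    # 1-based ranks i+1 .. j
        for k in range(i, j):
            rank_sums[pooled[k][1]] += avg_rank
        i = j
    # H = 12 / (N(N+1)) * sum(R_i^2 / n_i) - 3(N+1)
    return 12.0 / (n_total * (n_total + 1)) * sum(
        r * r / len(g) for r, g in zip(rank_sums, groups)) - 3 * (n_total + 1)

# Hypothetical segmentation times (seconds) for 4-, 16-, and 64-row CT.
h = kruskal_wallis_h([611, 640, 598], [479, 465, 490], [670, 655, 684])
```

In practice the H value is compared against a chi-squared distribution with k-1 degrees of freedom to obtain the p-values quoted in the abstract.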
Developing Formulas by Skipping Rows in Pascal's Triangle
ERIC Educational Resources Information Center
Buonpastore, Robert J.; Osler, Thomas J.
2007-01-01
A table showing the first thirteen rows of Pascal's triangle, where the rows are, as usual, numbered from 0 to 12, is presented. The entries in the table are called binomial coefficients. In this note, the authors systematically delete rows from Pascal's triangle and, by trial and error, try to find a formula that allows them to add new rows to the…
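The rows in question are rows of binomial coefficients, so they are easy to generate programmatically. A minimal sketch using Python's math.comb (the row-skipping step is then just subsequence selection):

```python
import math

def pascal_row(n):
    """Return row n of Pascal's triangle (row 0 is [1])."""
    return [math.comb(n, k) for k in range(n + 1)]

# The first thirteen rows, numbered 0 to 12 as in the table described above.
triangle = [pascal_row(n) for n in range(13)]

# "Skipping rows" amounts to selecting a subsequence, e.g. every other row.
even_rows = triangle[::2]
```

A well-known check on any generated row is that the entries of row n sum to 2 to the power n.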
Comparison between single-row and double-row rotator cuff repair: a biomechanical study.
Milano, Giuseppe; Grasso, Andrea; Zarelli, Donatella; Deriu, Laura; Cillo, Mario; Fabbriciani, Carlo
2008-01-01
The aim of this study was to compare the mechanical behavior, under cyclic loading, of single-row and double-row rotator cuff repair with suture anchors in an ex vivo animal model. Fifty fresh porcine shoulders were used. On each shoulder, a crescent-shaped full-thickness tear of the infraspinatus was created; the width of the tendon tear was 2 cm. The lesion was repaired using metal suture anchors. Shoulders were divided into four groups according to the type of repair: single-row tension-free repair (Group 1), single-row tension repair (Group 2), double-row tension-free repair (Group 3), and double-row tension repair (Group 4); a control group was also included. Specimens were subjected to a cyclic loading test. The number of cycles at 5 mm of elongation and at failure, and the total elongation, were calculated. Single-row tension repair showed significantly the poorest results for all variables considered when compared with the other groups. Regarding the mean number of cycles at 5 mm of elongation and at failure, the difference between Groups 3 and 4 was nonsignificant, and both were significantly greater than Group 1. For mean total elongation, the difference between Groups 1, 3, and 4 was not significant, but all of them were significantly lower than the control group. A single-row repair is particularly weak when performed under tension. Double-row repair is significantly more resistant to cyclic displacement than single-row repair in both tension-free and tension repairs. The double-row repair technique can be considered primarily for large, unstable rotator cuff tears to improve the mechanical strength of primary fixation of tendons to bone.
Barber, F Alan
2016-05-01
To compare the structural healing and clinical outcomes of triple-loaded single-row repairs with suture-bridging double-row repairs of full-thickness rotator cuff tears when both repair constructs are augmented with a platelet-rich plasma fibrin membrane. A prospective, randomized, consecutive series of patients diagnosed with full-thickness rotator cuff tears no greater than 3 cm in anteroposterior length were treated with a triple-loaded single-row (n = 20) or suture-bridging double-row (n = 20) repair augmented with platelet-rich plasma fibrin membrane. The primary outcome measure was cuff integrity on magnetic resonance imaging (MRI) at 12 months postoperatively. Secondary clinical outcome measures were American Shoulder and Elbow Surgeons, Rowe, Simple Shoulder Test, Constant, and Single Assessment Numeric Evaluation scores. The mean MRI interval was 12.6 months (range, 12-17 months). A total of 3 of 20 single-row repairs and 3 of 20 double-row repairs (15%) had tears at follow-up MRI. The single-row group had re-tears in 1 single tendon repair and 2 double tendon repairs; all 3 failed at the original attachment site (Cho type 1). In the double-row group, re-tears were found in 3 double tendon repairs; all 3 failed medial to the medial row near the musculotendinous junction (Cho type 2). All clinical outcome measures were significantly improved from the preoperative level (P < .0001), but there was no statistical difference between groups postoperatively. There is no MRI difference in rotator cuff re-tear rate at 12 months after surgery between a triple-loaded single-row repair and a suture-bridging double-row repair when both are augmented with a platelet-rich plasma fibrin membrane. No difference could be demonstrated between these repairs on clinical outcome scores. Level of Evidence: I, prospective randomized study. Copyright © 2016 Arthroscopy Association of North America. All rights reserved.
Zhang, Chun-Gang; Zhao, De-Wei; Wang, Wei-Ming; Ren, Ming-Fa; Li, Rui-Xin; Yang, Sheng; Liu, Yu-Peng
2010-11-01
For partial-thickness tears of the rotator cuff, double-row fixation and transtendon single-row fixation restore insertion site anatomy, with excellent results. We compared the biomechanical properties of double-row and transtendon single-row suture anchor techniques for repair of grade III partial articular-sided rotator cuff tears. In 10 matched pairs of fresh-frozen sheep shoulders, the infraspinatus tendon from 1 shoulder was repaired with a double-row suture anchor technique. This comprised placement of 2 medial anchors with horizontal mattress sutures at an angle of ≤ 45° into the medial margin of the infraspinatus footprint, just lateral to the articular surface, and 2 lateral anchors with horizontal mattress sutures. Standardized, 50% partial, articular-sided infraspinatus lesions were created in the contralateral shoulder. The infraspinatus tendon from the contralateral shoulder was repaired using two anchors with transtendon single-row mattress sutures. Each specimen underwent cyclic loading from 10 to 100 N for 50 cycles, followed by tensile testing to failure. Gap formation and strain over the footprint area were measured using a motion capture system; stiffness and failure load were determined from testing data. Gap formation for the transtendon single-row repair was significantly smaller (P < 0.05) when compared with the double-row repair for the first cycle ((1.74 ± 0.38) mm vs. (2.86 ± 0.46) mm, respectively) and the last cycle ((3.77 ± 0.45) mm vs. (5.89 ± 0.61) mm, respectively). The strain over the footprint area for the transtendon single-row repair was significantly smaller (P < 0.05) when compared with the double-row repair. Also, it had a higher mean ultimate tensile load and stiffness. For grade III partial articular-sided rotator cuff tears, transtendon single-row fixation exhibited superior biomechanical properties when compared with double-row fixation.
Kim, David H; Elattrache, Neal S; Tibone, James E; Jun, Bong-Jae; DeLaMora, Sergai N; Kvitne, Ronald S; Lee, Thay Q
2006-03-01
Reestablishment of the native footprint during rotator cuff repair has been suggested as an important criterion for optimizing healing potential and fixation strength. A double-row rotator cuff footprint repair will demonstrate superior biomechanical properties compared with a single-row repair. Controlled laboratory study. In 9 matched pairs of fresh-frozen cadaveric shoulders, the supraspinatus tendon from 1 shoulder was repaired with a double-row suture anchor technique: 2 medial anchors with horizontal mattress sutures and 2 lateral anchors with simple sutures. The tendon from the contralateral shoulder was repaired using a single lateral row of 2 anchors with simple sutures. Each specimen underwent cyclic loading from 10 to 180 N for 200 cycles, followed by tensile testing to failure. Gap formation and strain over the footprint area were measured using a video digitizing system; stiffness and failure load were determined from testing machine data. Gap formation for the double-row repair was significantly smaller (P < .05) when compared with the single-row repair for the first cycle (1.67 +/- 0.75 mm vs 3.10 +/- 1.67 mm, respectively) and the last cycle (3.58 +/- 2.59 mm vs 7.64 +/- 3.74 mm, respectively). The initial strain over the footprint area for the double-row repair was nearly one third (P < .05) the strain of the single-row repair. Adding a medial row of anchors increased the stiffness of the repair by 46% and the ultimate failure load by 48% (P < .05). Footprint reconstruction of the rotator cuff using a double-row repair improved initial strength and stiffness and decreased gap formation and strain over the footprint when compared with a single-row repair. To achieve maximal initial fixation strength and minimal gap formation for rotator cuff repair, reconstructing the footprint attachment with 2 rows of suture anchors should be considered.
Pauly, Stephan; Gerhardt, Christian; Chen, Jianhai; Scheibel, Markus
2010-12-01
Several techniques for arthroscopic repair of rotator cuff defects have been introduced over the past years. Besides established techniques such as single-row repair, newer techniques such as double-row reconstruction have gained increasing interest. The present article therefore provides an overview of the currently available literature on both repair techniques with respect to several anatomical, biomechanical, clinical, and structural endpoints. A systematic literature review of biomechanical, clinical, and radiographic studies investigating or comparing single- and double-row techniques was performed, and the results were evaluated and compared to provide an overview of the benefits and drawbacks of each repair type. Reconstructions of the tendon-to-bone unit for full-thickness tears in single- and double-row technique differ with respect to several endpoints. Double-row repair techniques provide a more anatomical reconstruction of the footprint and superior initial biomechanical characteristics when compared to single-row repair. With regard to clinical results, no significant differences were found, while radiological data suggest better structural tendon integrity following double-row fixation. Currently published clinical studies do not identify a clearly superior technique. Available biomechanical studies are in favour of double-row repair, and radiographic studies suggest a beneficial effect of double-row reconstruction on the structural integrity of the reattached tendon and on recurrent defect rates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meinke, Rainer
A method for manufacture of a conductor assembly. The assembly is of the type which, when conducting current, generates a magnetic field or in which, in the presence of a changing magnetic field, a voltage is induced. In an example embodiment one or more first coil rows are formed. The assembly has multiple coil rows about an axis with outer coil rows formed about inner coil rows. A determination is made of deviations from specifications associated with the formed one or more first coil rows. One or more deviations correspond to a magnitude of a multipole field component which departs from a field specification. Based on the deviations, one or more wiring patterns are generated for one or more second coil rows to be formed about the one or more first coil rows. The one or more second coil rows are formed in the assembly. The magnitude of each multipole field component that departs from the field specification is offset.
Moving Beam-Blocker-Based Low-Dose Cone-Beam CT
NASA Astrophysics Data System (ADS)
Lee, Taewon; Lee, Changwoo; Baek, Jongduk; Cho, Seungryong
2016-10-01
This paper experimentally demonstrates the feasibility of moving beam-blocker-based low-dose cone-beam CT (CBCT) and explores beam-blocking configurations to find the one that yields the highest contrast-to-noise ratio (CNR). Sparse-view CT takes projections at sparse view angles and provides a viable option for reducing dose. We have earlier proposed a many-view under-sampling (MVUS) technique as an alternative to sparse-view CT: instead of switching the x-ray tube power, one can place a reciprocating multi-slit beam-blocker between the x-ray tube and the patient to partially block the x-ray beam. We used a bench-top circular cone-beam CT system with a lab-made moving beam-blocker. For image reconstruction, we used a modified total-variation minimization (TV) algorithm that masks the blocked data in the back-projection step, leaving only the data measured through the slits to be used in the computation. The number of slits and the reciprocation frequency were varied, and their effects on image quality were investigated. For image quality assessment, we used CNR and detectability. We also analyzed the sampling efficiency in the context of compressive sensing: the sampling density and data incoherence in each case. We tested three sets of slits, with 6, 12, and 18 slits, each at reciprocation frequencies of 10, 30, 50, and 70 Hz/rot. The optimum condition among the tested sets was 12 slits at 30 Hz/rot.
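The masked back-projection idea (using only the data measured through the slits and ignoring blocked detector bins) can be sketched as a masked least-squares gradient step on a toy linear system. The system matrix, mask, and step size below are illustrative stand-ins, not the paper's modified TV algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward model: y = A @ x, with some detector bins blocked.
n_pix, n_meas = 16, 32
A = rng.random((n_meas, n_pix))
x_true = rng.random(n_pix)
y = A @ x_true

# Mask: 1 where the beam passed through a slit (measured), 0 where blocked.
mask = (rng.random(n_meas) < 0.5).astype(float)

# Gradient descent on 0.5 * ||M(Ax - y)||^2: blocked bins contribute
# nothing, mirroring the masked back-projection step described above.
x = np.zeros(n_pix)
step = 0.5 / np.linalg.norm(A, 2) ** 2   # safely below 1/Lipschitz
for _ in range(500):
    residual = mask * (A @ x - y)        # only measured bins survive
    x -= step * (A.T @ residual)
```

In the actual method a total-variation penalty regularizes the under-sampled problem; the sketch keeps only the masking of the data-fidelity term.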
Advanced hybrid particulate collector and method of operation
Miller, Stanley J [Grand Forks, ND
2003-04-08
The present invention, a device and method for controlling particulate air pollutants, combines filtration and electrostatic collection. The invention includes a chamber housing a plurality of rows of filter elements. Between the rows of filter elements are rows of high-voltage discharge electrodes. Between the rows of discharge electrodes and the rows of filter elements are grounded perforated plates for creating electrostatic precipitation zones.
Busfield, Benjamin T; Glousman, Ronald E; McGarry, Michelle H; Tibone, James E; Lee, Thay Q
2008-05-01
Previous studies have shown comparable biomechanical properties for double-row fixation versus double-row fixation with a knotless lateral row. SutureBridge is a construct that secures the cuff with medial-row mattress suture anchors and knotless lateral-row fixation of the medial suture ends. Recent completely knotless constructs may lead to inferior clinical outcomes if the construct properties are compromised by the lack of suture knots. A completely knotless construct without medial-row knots will compromise the biomechanical properties in both cyclic and failure-testing parameters. Controlled laboratory study. Six matched pairs of cadaveric shoulders were randomized to 2 groups of double-row fixation with SutureBridge: group 1 with medial-row knots and group 2 without medial-row knots. The specimens were placed in a materials test system at 30 degrees of abduction. Cyclic testing to 180 N at 1 mm/s for 30 cycles was performed, followed by tensile testing to failure at 1 mm/s. Data included cyclic and failure data from the materials test system and gap data from a video digitizing system. All data from paired specimens were compared using paired Student t tests. Group 1 showed a statistically significant difference (P < .05) in gap formation for the 1st (3.47 vs 5.05 mm) and 30th cycles (4.22 vs 8.10 mm) and at yield load (5.2 vs 9.1 mm). In addition, group 1 had greater energy absorbed (2805 vs 1648 N-mm), yield load (233 vs 183.1 N), and ultimate load (352.9 vs 253.9 N). The mode of failure for the majority (4/6) of group 2 specimens was lateral-row failure, whereas all group 1 specimens failed at the clamp. Although lateral-row knotless fixation has been shown not to sacrifice the structural integrity of this construct, the addition of a knotless medial row compromises the construct, leading to greater gapping and failure at lower loads. This may raise concerns regarding recently marketed completely knotless double-row constructs.
Moody, Daniela I.; Brumby, Steven P.; Rowland, Joel C.; ...
2014-12-09
We present results from an ongoing effort to extend neuromimetic machine vision algorithms to multispectral data using adaptive signal processing combined with compressive sensing and machine learning techniques. Our goal is to develop a robust classification methodology that will allow for automated discretization of the landscape into distinct units based on attributes such as vegetation, surface hydrological properties, and topographic/geomorphic characteristics. We use a Hebbian learning rule to build spectral-textural dictionaries that are tailored for classification. We learn our dictionaries from millions of overlapping multispectral image patches and then use a pursuit search to generate classification features. Land cover labels are automatically generated using unsupervised clustering of sparse approximations (CoSA). We demonstrate our method on multispectral WorldView-2 data from a coastal plain ecosystem in Barrow, Alaska. We explore learning from both raw multispectral imagery and normalized band difference indices. We also explore a quantitative metric to evaluate the spectral properties of the clusters in order to potentially aid in assigning land cover categories to the cluster labels. In this study, our results suggest CoSA is a promising approach to unsupervised land cover classification in high-resolution satellite imagery.
Virtual screening of inorganic materials synthesis parameters with deep learning
NASA Astrophysics Data System (ADS)
Kim, Edward; Huang, Kevin; Jegelka, Stefanie; Olivetti, Elsa
2017-12-01
Virtual materials screening approaches have proliferated in the past decade, driven by rapid advances in first-principles computational techniques and machine-learning algorithms. By comparison, computationally driven materials synthesis screening is still in its infancy, and is mired in the challenges of data sparsity and data scarcity: synthesis routes exist in a sparse, high-dimensional parameter space that is difficult to optimize over directly, and, for some materials of interest, only scarce volumes of literature-reported syntheses are available. In this article, we present a framework for suggesting quantitative synthesis parameters and potential driving factors for synthesis outcomes. We use a variational autoencoder to compress sparse synthesis representations into a lower-dimensional space, which is found to improve the performance of machine-learning tasks. To realize this screening framework even in cases where there are few literature data, we devise a novel data augmentation methodology that incorporates literature synthesis data from related materials systems. We apply this variational autoencoder framework to generate potential SrTiO3 synthesis parameter sets, propose driving factors for brookite TiO2 formation, and identify correlations between alkali-ion intercalation and MnO2 polymorph selection.
Parametric dictionary learning for modeling EAP and ODF in diffusion MRI.
Merlet, Sylvain; Caruyer, Emmanuel; Deriche, Rachid
2012-01-01
In this work, we propose an original and efficient approach that exploits the ability of Compressed Sensing (CS) to recover diffusion MRI (dMRI) signals from a limited number of samples while efficiently recovering important diffusion features such as the ensemble average propagator (EAP) and the orientation distribution function (ODF). Some attempts to sparsely represent the diffusion signal have already been made. However, contrary to what has been presented in CS dMRI so far, in this work we propose and advocate the use of a well-adapted learned dictionary and show that it leads to a sparser signal estimation as well as to an efficient reconstruction of very important diffusion features. We first propose to learn and design a sparse and parametric dictionary from a set of training diffusion data. Then, we propose a framework to analytically estimate two important diffusion features in closed form: the EAP and the ODF. Various experiments on synthetic, phantom, and human brain data have been carried out, and promising results with a reduced number of atoms have been obtained on diffusion signal reconstruction, thus illustrating the added value of our method over state-of-the-art SHORE and SPF based approaches.
Uncovering representations of sleep-associated hippocampal ensemble spike activity
NASA Astrophysics Data System (ADS)
Chen, Zhe; Grosmark, Andres D.; Penagos, Hector; Wilson, Matthew A.
2016-08-01
Pyramidal neurons in the rodent hippocampus exhibit spatial tuning during spatial navigation, and they are reactivated in specific temporal order during sharp-wave ripples observed in quiet wakefulness or slow wave sleep. However, analyzing representations of sleep-associated hippocampal ensemble spike activity remains a great challenge. In contrast to wakefulness, during sleep there is a complete absence of animal behavior, and the ensemble spike activity is sparse (low occurrence) and fragmented in time. To examine important issues encountered in sleep data analysis, we constructed synthetic sleep-like hippocampal spike data (short epochs, sparse and sporadic firing, compressed timescale) for detailed investigation. Based upon two Bayesian population-decoding methods (one receptive-field based, the other not), we systematically investigated their representation power and detection reliability. Notably, the receptive-field-free decoding method was found to be well-tuned for hippocampal ensemble spike data in slow wave sleep (SWS), even in the absence of prior behavioral measures or ground truth. Our results showed that, in addition to the sample length, bin size, and firing rate, the number of active hippocampal pyramidal neurons is critical for reliable representation of space as well as for detection of spatiotemporally reactivated patterns in SWS or quiet wakefulness.
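The receptive-field-based Bayesian population decoding referred to above can be sketched in its standard memoryless form, with an independent-Poisson likelihood over position bins and a uniform prior. This toy example (three Gaussian place fields on a 1-D track, all values illustrative) is not the paper's decoder, only the textbook construction it builds on.

```python
import numpy as np

def decode_position(spike_counts, tuning, dt):
    """Memoryless Bayesian decoder with independent-Poisson likelihood.

    spike_counts : (n_neurons,) spikes observed in one time bin
    tuning       : (n_neurons, n_positions) expected firing rates (Hz)
    dt           : bin width in seconds
    Returns the posterior over positions (uniform prior).
    """
    lam = tuning * dt + 1e-12               # expected counts per bin
    # log P(n | x) = sum_i [ n_i log lam_i - lam_i ]  (up to the n_i! term)
    log_post = spike_counts @ np.log(lam) - lam.sum(axis=0)
    log_post -= log_post.max()              # numerical stability
    post = np.exp(log_post)
    return post / post.sum()

# Toy example: 3 neurons with Gaussian place fields on a 1-D track.
positions = np.linspace(0.0, 1.0, 50)
centers = np.array([0.2, 0.5, 0.8])
tuning = 20.0 * np.exp(-(positions - centers[:, None])**2 / (2 * 0.05**2))
counts = np.array([0.0, 5.0, 0.0])          # only the middle cell fires
post = decode_position(counts, tuning, dt=0.25)
print(positions[np.argmax(post)])           # posterior peaks near 0.5
```

In sleep decoding, where no behavioral position is available, the tuning curves themselves become the contested quantity; this is exactly the limitation the receptive-field-free method in the abstract is designed to avoid.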
Sparse reconstruction of breast MRI using homotopic L0 minimization in a regional sparsified domain.
Wong, Alexander; Mishra, Akshaya; Fieguth, Paul; Clausi, David A
2013-03-01
The use of MRI for early breast examination and screening of asymptomatic women has become increasingly popular, given its ability to provide detailed tissue characteristics that cannot be obtained using other imaging modalities such as mammography and ultrasound. Recent application-oriented developments in compressed sensing theory have shown that certain types of magnetic resonance images are inherently sparse in particular transform domains and, as such, can be reconstructed with a high level of accuracy from highly undersampled k-space data below Nyquist sampling rates using homotopic L0 minimization schemes, which holds great potential for significantly reducing acquisition time. An important consideration in the use of such homotopic L0 minimization schemes is the choice of sparsifying transform. In this paper, a regional differential sparsifying transform is investigated for use within a homotopic L0 minimization framework for reconstructing breast MRI. By taking local regional characteristics into account, the regional differential sparsifying transform can better account for signal variations and fine details that are characteristic of breast MRI than the popular finite differential transform, while still maintaining strong structural fidelity. Experimental results show that good breast MRI reconstruction accuracy can be achieved compared to existing methods.
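The homotopic L0 idea, replacing the L0 penalty with a smooth surrogate whose smoothing parameter is driven toward zero, can be sketched generically with an epsilon-smoothed iteratively reweighted least-squares scheme. This is a common realization of the homotopy, not the paper's regional-transform k-space method; the random sensing matrix and every parameter below are illustrative.

```python
import numpy as np

def homotopic_l0_irls(A, y, lam=1e-3, eps0=1.0, n_outer=30, decay=0.7):
    """Sketch of homotopic L0 minimization via iteratively reweighted
    least squares: the L0 'norm' is approximated by sum(x^2/(x^2+eps)),
    and eps is gradually driven toward 0 (the homotopy).
    Each step solves (A^T A + lam*W) x = A^T y, W = diag(1/(x^2+eps)).
    """
    m, n = A.shape
    x = np.zeros(n)
    eps = eps0
    AtA, Aty = A.T @ A, A.T @ y
    for _ in range(n_outer):
        W = np.diag(1.0 / (x**2 + eps))
        x = np.linalg.solve(AtA + lam * W, Aty)
        eps *= decay                       # tighten the L0 surrogate
    return x

# Toy demo: recover a 3-sparse vector from 40 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[7, 23, 61]] = [2.0, -1.5, 1.0]
y = A @ x_true
x_hat = homotopic_l0_irls(A, y)
print(np.argsort(np.abs(x_hat))[-3:])
```

As eps shrinks, entries that stay small are penalized ever more heavily, so the iterate concentrates on a sparse support, the same mechanism that lets the full method reconstruct from sub-Nyquist k-space data.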
Outcomes of single-row and double-row arthroscopic rotator cuff repair: a systematic review.
Saridakis, Paul; Jones, Grant
2010-03-01
Arthroscopic rotator cuff repair is a common procedure that is gaining wide acceptance among orthopaedic surgeons because it is less invasive than open repair techniques. However, there is little consensus on whether to employ single-row or double-row fixation. The purpose of the present study was to systematically review the English-language literature to see if there is a difference between single-row and double-row fixation techniques in terms of clinical outcomes and radiographic healing. PubMed, the Cochrane Central Register of Controlled Trials, and EMBASE were reviewed with the terms "arthroscopic rotator cuff," "single row repair," and "double row repair." The inclusion criteria were a level of evidence of III (or better), an in vivo human clinical study on arthroscopic rotator cuff repair, and direct comparison of single-row and double-row fixation. Excluded were technique reports, review articles, biomechanical studies, and studies with no direct comparison of arthroscopic rotator cuff repair techniques. On the basis of these criteria, ten articles were found, and a review of the full-text articles identified six articles for final review. Data regarding demographic characteristics, rotator cuff pathology, surgical techniques, biases, sample sizes, postoperative rehabilitation regimens, American Shoulder and Elbow Surgeons scores, University of California at Los Angeles scores, Constant scores, and the prevalence of recurrent defects noted on radiographic studies were extracted. Confidence intervals were then calculated for the American Shoulder and Elbow Surgeons, University of California at Los Angeles, and Constant scores. Quality appraisal was performed by the two authors to identify biases. There was no significant difference between the single-row and double-row groups within each study in terms of postoperative clinical outcomes. 
However, one study divided each of the groups into patients with small-to-medium tears (< 3 cm in length) and those with large-to-massive tears (> or = 3 cm in length), and the authors noted that patients with large to massive tears who had double-row fixation performed better in terms of the American Shoulder and Elbow Surgeons scores and Constant scores in comparison with those who had single-row fixation. Two studies demonstrated a significant difference in terms of structural healing of the rotator cuff tendons after surgery, with the double-row method having superior results. There was an overlap in the confidence intervals between the single-row and double-row groups for all of the studies and the American Shoulder and Elbow Surgeons, Constant, and University of California at Los Angeles scoring systems utilized in the studies, indicating that there was no difference in these scores between single-row and double-row fixation. Potential biases included selection, performance, detection, and attrition biases; each study had at least one bias. Two studies had potentially inadequate power to detect differences between the two techniques. There appears to be a benefit of structural healing when an arthroscopic rotator cuff repair is performed with double-row fixation as opposed to single-row fixation. However, there is little evidence to support any functional differences between the two techniques, except, possibly, for patients with large or massive rotator cuff tears (> or = 3 cm). A risk-reward analysis of a patient's age, functional demands, and other quality-of-life issues should be considered before deciding which surgical method to employ. Double-row fixation may result in improved structural healing at the site of rotator cuff repair in some patients, depending on the size of the tear.
Biomechanical evaluation of a single-row versus double-row repair for complete subscapularis tears.
Wellmann, Mathias; Wiebringhaus, Philipp; Lodde, Ina; Waizy, Hazibullah; Becher, Christoph; Raschke, Michael J; Petersen, Wolf
2009-12-01
The purpose of the study was to compare a single-row repair and a double-row repair technique for the specific characteristics of a complete subscapularis lesion. Ten pairs of human cadaveric shoulder specimens were tested for stiffness and ultimate tensile strength of the intact tendons in a load-to-failure protocol. After a complete subscapularis tear was provoked, the specimens were assigned to two treatment groups: single-row repair (1) and double-row repair using a "suture bridge" technique (2). After repair, cyclic loading and a subsequent load-to-failure protocol were performed to determine the ultimate tensile load, the stiffness, and the elongation behaviour of the reconstructions. The intact subscapularis tendons had a mean stiffness of 115 N/mm and a mean ultimate load of 720 N. The predominant failure mode of the intact tendons was a tear at the humeral insertion site (65%). The double-row technique restored 48% of the ultimate load of the intact tendons (332 N), while the single-row technique revealed a significantly lower ultimate load of 244 N (P = 0.001). In terms of stiffness, the double-row technique showed a mean stiffness of 81 N/mm, significantly higher than the 55 N/mm of the single-row repairs (P = 0.001). The double-row technique has been shown to be stronger and stiffer when compared to a conventional single-row repair. Therefore, this technique is recommended from a biomechanical point of view, irrespective of whether it is performed by an open or arthroscopic approach.
Parametric analysis of synthetic aperture radar data acquired over truck garden vegetation
NASA Technical Reports Server (NTRS)
Wu, S. T.
1984-01-01
An airborne X-band SAR acquired multipolarization and multiple-flight-pass SAR images over a truck garden vegetation area. Across a variety of land covers and row crop direction variations, the vertical (VV) polarization data contain the highest contrast, while cross polarization contains the least. When the radar flight path is parallel to the row direction, both horizontal (HH) and VV polarization data contain a very high return, which masks out the specific land cover that forms the row structure. Cross polarization data are not as sensitive to row orientation. The inclusion of like- and cross-polarization data helps delineate special surface features (e.g., row crops against non-row-oriented land cover, very rough surfaces against highly row-oriented surfaces).
Sun, Baoru; Peng, Yi; Yang, Hongyu; Li, Zhijian; Gao, Yingzhi; Wang, Chao; Yan, Yuli; Liu, Yanmei
2014-01-01
Given the growing challenges to food and eco-environmental security as well as sustainable development of animal husbandry in the farming and pastoral areas of northeast China, it is crucial to identify advantageous intercropping modes and some constraints limiting its popularization. In order to assess the performance of various intercropping modes of maize and alfalfa, a field experiment was conducted in a completely randomized block design with five treatments: maize monoculture in even rows, maize monoculture in alternating wide and narrow rows, alfalfa monoculture, maize intercropped with one row of alfalfa in wide rows and maize intercropped with two rows of alfalfa in wide rows. Results demonstrate that maize monoculture in alternating wide and narrow rows performed best for light transmission, grain yield and output value, compared to in even rows. When intercropped, maize intercropped with one row of alfalfa in wide rows was identified as the optimal strategy and the largely complementary ecological niches of alfalfa and maize were shown to account for the intercropping advantages, optimizing resource utilization and improving yield and economic incomes. These findings suggest that alfalfa/maize intercropping has obvious advantages over monoculture and is applicable to the farming and pastoral areas of northeast China.
An approach to improve the spatial resolution of a force mapping sensing system
NASA Astrophysics Data System (ADS)
Negri, Lucas Hermann; Manfron Schiefer, Elberth; Sade Paterno, Aleksander; Muller, Marcia; Luís Fabris, José
2016-02-01
This paper proposes a smart sensor system capable of detecting sparse forces applied at different positions on a metal plate. The sensing is performed with strain transducers based on fiber Bragg gratings (FBG) distributed under the plate. Forces acting on nine square regions of the plate, resulting from up to three different loads applied simultaneously, were monitored with seven transducers. The system determines the magnitude of the force/pressure applied on each specific area, even in the absence of a dedicated transducer for that area. The set of strain transducers with coupled responses and a compressive sensing algorithm are employed to solve the underdetermined inverse problem which emerges from mapping the force. In this configuration, experimental results have shown that the system is capable of recovering the value of the load distributed on the plate with a signal-to-noise ratio better than 12 dB when the plate is subjected to three simultaneous test loads. The proposed method is a practical illustration of compressive sensing algorithms for reducing the number of FBG-based transducers used in a quasi-distributed configuration.
Yan, Jun; Yu, Kegen; Chen, Ruizhi; Chen, Liang
2017-05-30
In this paper a two-phase compressive sensing (CS) and received signal strength (RSS)-based target localization approach is proposed to improve position accuracy by dealing with the unknown target population and the effect of grid dimensions on position error. In the coarse localization phase, by formulating target localization as a sparse signal recovery problem, grids with recovery vector components greater than a threshold are chosen as the candidate target grids. In the fine localization phase, by partitioning each candidate grid, the target position in a grid is iteratively refined by using the minimum residual error rule and the least-squares technique. When all the candidate target grids are iteratively partitioned and the measurement matrix is updated, the recovery vector is re-estimated. Threshold-based detection is employed again to determine the target grids and hence the target population. As a consequence, both the target population and the position estimation accuracy can be significantly improved. Simulation results demonstrate that the proposed approach achieves the best accuracy among all the algorithms compared.
Bayesian nonparametric dictionary learning for compressed sensing MRI.
Huang, Yue; Paisley, John; Lin, Qin; Ding, Xinghao; Fu, Xueyang; Zhang, Xiao-Ping
2014-12-01
We develop a Bayesian nonparametric model for reconstructing magnetic resonance images (MRIs) from highly undersampled k-space data. We perform dictionary learning as part of the image reconstruction process. To this end, we use the beta process as a nonparametric dictionary learning prior for representing an image patch as a sparse combination of dictionary elements. The size of the dictionary and the patch-specific sparsity pattern are inferred from the data, in addition to other dictionary learning variables. Dictionary learning is performed directly on the compressed image, and so is tailored to the MRI being considered. In addition, we investigate a total variation penalty term in combination with the dictionary learning model, and show how the denoising property of dictionary learning removes dependence on regularization parameters in the noisy setting. We derive a stochastic optimization algorithm based on Markov chain Monte Carlo for the Bayesian model, and use the alternating direction method of multipliers for efficiently performing total variation minimization. We present empirical results on several MRIs, which show that the proposed regularization framework can improve reconstruction accuracy over other methods.
Motion-adaptive spatio-temporal regularization for accelerated dynamic MRI.
Asif, M Salman; Hamilton, Lei; Brummer, Marijn; Romberg, Justin
2013-09-01
Accelerated magnetic resonance imaging techniques reduce signal acquisition time by undersampling k-space. A fundamental problem in accelerated magnetic resonance imaging is the recovery of quality images from undersampled k-space data. Current state-of-the-art recovery algorithms exploit the spatial and temporal structures in underlying images to improve the reconstruction quality. In recent years, compressed sensing theory has helped formulate mathematical principles and conditions that ensure recovery of (structured) sparse signals from undersampled, incoherent measurements. In this article, a new recovery algorithm, motion-adaptive spatio-temporal regularization, is presented that uses the spatial and temporal structured sparsity of MR images in the compressed sensing framework to recover dynamic MR images from highly undersampled k-space data. In contrast to existing algorithms, our proposed algorithm models temporal sparsity using motion-adaptive linear transformations between neighboring images. The efficiency of motion-adaptive spatio-temporal regularization is demonstrated with experiments on cardiac magnetic resonance imaging for a range of reduction factors. Results are also compared with k-t FOCUSS with motion estimation and compensation, another recently proposed recovery algorithm for dynamic magnetic resonance imaging.
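Why motion-adaptive transformations make the temporal dimension sparser can be seen in a toy 1-D example: differencing motion-compensated frames leaves far fewer nonzeros than plain frame differencing. The "motion compensation" below is a plain integer shift, a drastic simplification of the linear transformations the algorithm estimates.

```python
import numpy as np

# Two synthetic 1-D "frames": frame2 is frame1 shifted by a known
# motion of 3 pixels (all values illustrative).
frame1 = np.zeros(64)
frame1[10:20] = 1.0                       # a bright moving feature
frame2 = np.roll(frame1, 3)

# Static temporal difference vs. motion-adaptive difference, where the
# motion-adaptive linear transformation is modeled as a simple shift.
plain_diff = frame2 - frame1
motion_diff = frame2 - np.roll(frame1, 3)

print(np.count_nonzero(plain_diff), np.count_nonzero(motion_diff))
# prints: 6 0
```

A sparser temporal representation needs fewer k-space samples to pin down under compressed sensing recovery guarantees, which is the motivation for estimating motion rather than assuming static anatomy.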
Weed management practices affect the diversity and relative abundance of physic nut mites.
Saraiva, Althiéris de Sousa; Sarmento, Renato A; Erasmo, Eduardo A L; Pedro-Neto, Marçal; de Souza, Danival José; Teodoro, Adenir V; Silva, Daniella G
2015-03-01
Crop management practices determine the weed community, which in turn may influence patterns of diversity and abundance of associated arthropods. This study aimed to evaluate whether local weed management practices influence the diversity and relative abundance of phytophagous and predatory mites, as well as mites with undefined feeding habits (families Oribatidae and Acaridae), in a physic nut (Jatropha curcas L.) plantation subjected to (1) within-row herbicide spraying and between-row mowing; (2) within-row herbicide spraying and no between-row mowing; (3) within-row weeding and between-row mowing; (4) within-row weeding and no between-row mowing; and (5) unmanaged (control). The herbicide used was glyphosate. Herbicide treatments resulted in higher diversity and relative abundance of predatory mites and mites with undefined feeding habits on physic nut shrubs, probably owing to the toxic effects of the herbicide on mites or to the removal of weeds. Within-row herbicide spraying combined with between-row mowing was the treatment that most contributed to this effect. Our results show that within-row weeds harbor important species of predatory mites and mites with undefined feeding habits. However, the dynamics of such mites in the system can change according to the weed management practice applied. Among the predatory mites of the family Phytoseiidae, Amblydromalus sp. was the most abundant, whereas Brevipalpus phoenicis was the most frequent phytophagous mite and an unidentified oribatid species was the most frequent mite with an undefined feeding habit.
Archer, Charles Jens [Rochester, MN; Pinnow, Kurt Walter [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian Edward [Rochester, MN
2008-10-14
An apparatus, program product and method checks for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.
Archer, Charles Jens [Rochester, MN; Pinnow, Kurt Walter [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian Edward [Rochester, MN
2012-02-07
An apparatus, program product and method check for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.
Archer, Charles Jens; Pinnow, Kurt Walter; Ratterman, Joseph D.; Smith, Brian Edward
2010-02-23
An apparatus and program product check for nodal faults in a row of nodes by causing each node in the row to concurrently communicate with its adjacent neighbor nodes in the row. The communications are analyzed to determine a presence of a faulty node or connection.
2012-06-22
With her prime crewmates and backup crewmembers looking on, Expedition 32/33 Flight Engineer Sunita Williams of NASA (first row, center) signed a visitors book at the Gagarin Cosmonaut Training Center museum in Star City, Russia June 22, 2012 as part of traditional activities leading to her launch July 15 to the International Space Station from the Baikonur Cosmodrome in Kazakhstan on the Soyuz TMA-05M spacecraft. Williams will launch along with Aki Hoshide of the Japan Aerospace Exploration Agency (first row, left) and Soyuz Commander Yuri Malenchenko (first row, right). Also participating in the activities were the backup crew on the top row, Flight Engineer Tom Marshburn of NASA (top row, left), Flight Engineer Chris Hadfield of the Canadian Space Agency (top row, center) and Roman Romanenko (top row, right). Credit: NASA/Stephanie Stoll
Construction of ground-state preserving sparse lattice models for predictive materials simulations
NASA Astrophysics Data System (ADS)
Huang, Wenxuan; Urban, Alexander; Rong, Ziqin; Ding, Zhiwei; Luo, Chuan; Ceder, Gerbrand
2017-08-01
First-principles based cluster expansion models are the dominant approach in ab initio thermodynamics of crystalline mixtures enabling the prediction of phase diagrams and novel ground states. However, despite recent advances, the construction of accurate models still requires a careful and time-consuming manual parameter tuning process for ground-state preservation, since this property is not guaranteed by default. In this paper, we present a systematic and mathematically sound method to obtain cluster expansion models that are guaranteed to preserve the ground states of their reference data. The method builds on the recently introduced compressive sensing paradigm for cluster expansion and employs quadratic programming to impose constraints on the model parameters. The robustness of our methodology is illustrated for two lithium transition metal oxides with relevance for Li-ion battery cathodes, i.e., Li2xFe2(1-x)O2 and Li2xTi2(1-x)O2, for which the construction of cluster expansion models with compressive sensing alone has proven to be challenging. We demonstrate that our method not only guarantees ground-state preservation on the set of reference structures used for the model construction, but also show that out-of-sample ground-state preservation up to relatively large supercell size is achievable through a rapidly converging iterative refinement. This method provides a general tool for building robust, compressed and constrained physical models with predictive power.
Experimental investigations on airborne gravimetry based on compressed sensing.
Yang, Yapeng; Wu, Meiping; Wang, Jinling; Zhang, Kaidong; Cao, Juliang; Cai, Shaokun
2014-03-18
Gravity surveys are an important research topic in geophysics and geodynamics. This paper investigates a method for high accuracy large scale gravity anomaly data reconstruction. Based on the airborne gravimetry technology, a flight test was carried out in China with the strap-down airborne gravimeter (SGA-WZ) developed by the Laboratory of Inertial Technology of the National University of Defense Technology. Taking into account the sparsity of airborne gravimetry by the discrete Fourier transform (DFT), this paper proposes a method for gravity anomaly data reconstruction using the theory of compressed sensing (CS). The gravity anomaly data reconstruction is an ill-posed inverse problem, which can be transformed into a sparse optimization problem. This paper uses the zero-norm as the objective function and presents a greedy algorithm called Orthogonal Matching Pursuit (OMP) to solve the corresponding minimization problem. The test results have revealed that the compressed sampling rate is approximately 14%, the standard deviation of the reconstruction error by OMP is 0.03 mGal and the signal-to-noise ratio (SNR) is 56.48 dB. In contrast, the standard deviation of the reconstruction error by the existing nearest-interpolation method (NIPM) is 0.15 mGal and the SNR is 42.29 dB. These results have shown that the OMP algorithm can reconstruct the gravity anomaly data with higher accuracy and fewer measurements.
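The OMP reconstruction step named above can be illustrated generically. The sketch below recovers a synthetic sparse vector from random linear measurements; it is not the flight-data pipeline, and the dictionary, sparsity level, coefficients, and seed are all illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: at each step pick the column of A
    most correlated with the current residual, then re-fit all selected
    coefficients jointly by least squares. Stops after k atoms."""
    support, x = [], np.zeros(A.shape[1])
    residual = y.astype(float).copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Toy demo: recover a 3-sparse vector from 40 random measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 60))
A /= np.linalg.norm(A, axis=0)              # unit-norm dictionary columns
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [3.0, -3.0, 2.0]
y = A @ x_true
x_hat = omp(A, y, k=3)
print(sorted(int(i) for i in np.nonzero(x_hat)[0]))
```

The greedy atom selection followed by a joint least-squares re-fit is what distinguishes OMP from plain matching pursuit and is the source of its exact-fit behavior on the recovered support in the noiseless case.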
High-performance 3D compressive sensing MRI reconstruction.
Kim, Daehyun; Trzasko, Joshua D; Smelyanskiy, Mikhail; Haider, Clifton R; Manduca, Armando; Dubey, Pradeep
2010-01-01
Compressive Sensing (CS) is a nascent sampling and reconstruction paradigm that describes how sparse or compressible signals can be accurately approximated using many fewer samples than traditionally believed. In magnetic resonance imaging (MRI), where scan duration is directly proportional to the number of acquired samples, CS has the potential to dramatically decrease scan time. However, the computationally expensive nature of CS reconstructions has so far precluded their use in routine clinical practice; instead, more easily generated but lower-quality images continue to be used. We investigate the development and optimization of a proven inexact quasi-Newton CS reconstruction algorithm on several modern parallel architectures, including CPUs, GPUs, and Intel's Many Integrated Core (MIC) architecture. Our (optimized) baseline implementation on a quad-core Core i7 is able to reconstruct a 256×160×80 volume of the neurovasculature from an 8-channel, 10× undersampled data set within 56 seconds, which is already a significant improvement over existing implementations. The latest six-core Core i7 reduces the reconstruction time further to 32 seconds. Moreover, we show that the CS algorithm benefits from modern throughput-oriented architectures. Specifically, our CUDA-based implementation on an NVIDIA GTX480 reconstructs the same dataset in 16 seconds, while Intel's Knights Ferry (KNF) of the MIC architecture reduces the time even further to 12 seconds. Such a level of performance allows the neurovascular dataset to be reconstructed within a clinically viable time.
Method of reducing multipole content in a conductor assembly during manufacture
Meinke, Rainer [Melbourne, FL
2011-08-09
A method for manufacture of a conductor assembly. The assembly is of the type which, when conducting current, generates a magnetic field or in which, in the presence of a changing magnetic field, a voltage is induced. In an example embodiment one or more first coil rows are formed. The assembly has multiple coil rows about an axis with outer coil rows formed about inner coil rows. A determination is made of deviations from specifications associated with the formed one or more first coil rows. One or more deviations correspond to a magnitude of a multipole field component which departs from a field specification. Based on the deviations, one or more wiring patterns are generated for one or more second coil rows to be formed about the one or more first coil rows. The one or more second coil rows are formed in the assembly. The magnitude of each multipole field component that departs from the field specification is offset.
Variability of reflectance measurements with sensor altitude and canopy type
NASA Technical Reports Server (NTRS)
Daughtry, C. S. T.; Vanderbilt, V. C.; Pollara, V. J.
1981-01-01
Data were acquired on canopies of mature corn planted in 76 cm rows, mature soybeans planted in 96 cm rows with 71 percent soil cover, and mature soybeans planted in 76 cm rows with 100 percent soil cover. A LANDSAT band radiometer with a 15 degree field of view was used at ten altitudes ranging from 0.2 m to 10 m above the canopy. At each altitude, measurements were taken at 15 cm intervals along a 2.0 m transect perpendicular to the crop row direction. Reflectance data were plotted as a function of altitude and horizontal position to verify that the variance of measurements at low altitudes was attributable to row effects, which disappear at higher altitudes where the sensor integrates across several rows. The coefficient of variation of reflectance decreased exponentially as the sensor was elevated. Systematic sampling (at odd multiples of 0.5 times the row spacing interval) required fewer measurements than simple random sampling over row crop canopies.
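One reading of the systematic sampling scheme above is that consecutive measurements are spaced an odd multiple of half the row spacing apart, so that successive samples alternate between on-row and between-row phases and the periodic row effect averages out. A small sketch under that interpretation (the function name and parameters are illustrative, not from the paper):

```python
def systematic_positions(row_spacing, odd_multiple, n_samples, start=0.0):
    """Measurement positions spaced at an odd multiple of half the row
    spacing. Consecutive samples then alternate phase relative to the
    row period, so row-induced variance cancels across the transect."""
    assert odd_multiple % 2 == 1, "multiple must be odd to alternate phase"
    step = odd_multiple * 0.5 * row_spacing
    return [start + k * step for k in range(n_samples)]

# For 76 cm corn rows with a 1.5-row step: 0.0, 1.14, 2.28, 3.42 m, whose
# phases modulo the row spacing alternate between 0 and half a row.
positions = systematic_positions(0.76, 3, 4)
```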
Method of reducing multipole content in a conductor assembly during manufacture
Meinke, Rainer
2013-08-20
A method for manufacture of a conductor assembly. The assembly is of the type which, when conducting current, generates a magnetic field or in which, in the presence of a changing magnetic field, a voltage is induced. In an example embodiment one or more first coil rows are formed. The assembly has multiple coil rows about an axis with outer coil rows formed about inner coil rows. A determination is made of deviations from specifications associated with the formed one or more first coil rows. One or more deviations correspond to a magnitude of a multipole field component which departs from a field specification. Based on the deviations, one or more wiring patterns are generated for one or more second coil rows to be formed about the one or more first coil rows. The one or more second coil rows are formed in the assembly. The magnitude of each multipole field component that departs from the field specification is offset.
Han, Eun-Taek; Choi, Moon-Seok; Choi, Sung-Yil; Chai, Jong-Yil
2011-12-01
The tegumental ultrastructure of juvenile and adult Acanthoparyphium tyosenense (Digenea: Echinostomatidae) was observed by scanning electron microscopy. One- to 3-day-old juveniles and 10-day-old adults were harvested from chicks experimentally fed metacercariae from a bivalve, Mactra veneriformis. The juvenile worms were minute, curved ventrally, and had 23 collar spines characteristically arranged in a single row. The lips of the oral sucker had 7 single aciliated sensory papillae and 4 grouped uniciliated sensory papillae. The ventral sucker had 25 aciliated round swellings on its lip. The anterolateral surface between the 2 suckers was densely packed with tongue-shaped tegumental spines, and the ventral surface just posterior to the ventral sucker was covered with peg-like spines. Retractile, peg-like spines were seen on the anterolateral surface, whereas scale-like spines with round tips and broad bases were sparsely distributed posterior to the ventral sucker. The cirrus was characteristically protruding and armed with minute spines. The surface ultrastructure of A. tyosenense was unique, especially in the number and arrangement of collar spines, the shape and distribution of tegumental spines, and the distribution of sensory papillae.
NASA Astrophysics Data System (ADS)
Kalisperakis, I.; Stentoumis, Ch.; Grammatikopoulos, L.; Karantzalos, K.
2015-08-01
The indirect estimation of leaf area index (LAI) at large spatial scales is crucial for several environmental and agricultural applications. To this end, in this paper, we compare and evaluate LAI estimation in vineyards from different UAV imaging datasets. In particular, canopy levels were estimated from (i) hyperspectral data, (ii) 2D RGB orthophotomosaics, and (iii) 3D crop surface models. The computed canopy levels have been used to establish relationships with the measured LAI (ground truth) from several vines in Nemea, Greece. The overall evaluation indicated that the estimated canopy levels were correlated (r2 > 73%) with the in-situ, ground truth LAI measurements. As expected, the lowest correlations were derived from the calculated greenness levels from the 2D RGB orthomosaics. The highest correlation rates were established with the hyperspectral canopy greenness and the 3D canopy surface models. For the latter, accurate detection of the canopy, soil, and other materials between the vine rows is required. All approaches tend to overestimate LAI in cases with sparse, weak, or unhealthy plants and canopy.
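The reported r2 values measure how much of the variance in measured LAI a simple regression on the estimated canopy level explains. A minimal sketch of that computation (illustrative only; this is not the authors' processing chain):

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a simple linear fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)   # least-squares line
    y_hat = slope * x + intercept
    ss_res = np.sum((y - y_hat) ** 2)        # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot
```

Applied to canopy-level estimates versus in-situ LAI, values above 0.73 would correspond to the correlations the abstract reports.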
Extension of FRI for modeling of electrocardiogram signals.
Quick, R Frank; Crochiere, Ronald E; Hong, John H; Hormati, Ali; Baechler, Gilles
2012-01-01
Recent work has developed a modeling method applicable to certain types of signals having a "finite rate of innovation" (FRI). Such signals contain a sparse collection of time- or frequency-limited pulses having a restricted set of allowable pulse shapes. A limitation of past work on FRI is that all of the pulses must have the same shape. Many real signals, including electrocardiograms, consist of pulses with varying widths and asymmetry, and therefore are not well fit by the past FRI methods. We present an extension of FRI allowing pulses having variable pulse width (VPW) and asymmetry. We show example results for electrocardiograms and discuss the possibility of application to signal compression and diagnostics.
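One common way to realize a variable-pulse-width (VPW) atom is a Lorentzian pulse with a symmetric and an antisymmetric component, giving each pulse free location, width, and asymmetry parameters. A hedged sketch under that assumption (the exact pulse family and parameterization used in the paper may differ):

```python
import numpy as np

def vpw_pulse(t, t0, width, c_sym, c_asym):
    """One VPW atom: symmetric Lorentzian plus antisymmetric part, so both
    pulse width and asymmetry are free (names here are illustrative)."""
    d = t - t0
    sym = c_sym * width / (width ** 2 + d ** 2)   # symmetric about t0
    asym = c_asym * d / (width ** 2 + d ** 2)     # antisymmetric about t0
    return sym + asym

def vpw_signal(t, params):
    """Sum of VPW pulses; params is a list of (t0, width, c_sym, c_asym)."""
    return sum(vpw_pulse(t, *p) for p in params)
```

An ECG beat would then be modeled as a handful of such atoms (e.g., one per P, QRS, and T component), with asymmetry capturing the skewed T wave that a single fixed pulse shape cannot fit.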
The cost-effectiveness of single-row compared with double-row arthroscopic rotator cuff repair.
Genuario, James W; Donegan, Ryan P; Hamman, Daniel; Bell, John-Erik; Boublik, Martin; Schlegel, Theodore; Tosteson, Anna N A
2012-08-01
Interest in double-row techniques for arthroscopic rotator cuff repair has increased over the last several years, presumably because of a combination of literature demonstrating superior biomechanical characteristics and recent improvements in instrumentation and technique. As a result of the increasing focus on value-based health-care delivery, orthopaedic surgeons must understand the cost implications of this practice. The purpose of this study was to examine the cost-effectiveness of double-row arthroscopic rotator cuff repair compared with traditional single-row repair. A decision-analytic model was constructed to assess the cost-effectiveness of double-row arthroscopic rotator cuff repair compared with single-row repair on the basis of the cost per quality-adjusted life year gained. Two cohorts of patients (one with a tear of <3 cm and the other with a tear of ≥3 cm) were evaluated. Probabilities for retear and persistent symptoms, health utilities for the particular health states, and the direct costs for rotator cuff repair were derived from the orthopaedic literature and institutional data. The incremental cost-effectiveness ratio for double-row compared with single-row arthroscopic rotator cuff repair was $571,500 for rotator cuff tears of <3 cm and $460,200 for rotator cuff tears of ≥3 cm. The rate of radiographic or symptomatic retear alone did not influence cost-effectiveness results. If the increase in the cost of double-row repair was less than $287 for small or moderate tears and less than $352 for large or massive tears compared with the cost of single-row repair, then double-row repair would represent a cost-effective surgical alternative. On the basis of currently available data, double-row rotator cuff repair is not cost-effective for any size rotator cuff tears. However, variability in the values for costs and probability of retear can have a profound effect on the results of the model and may create an environment in which double-row repair becomes the more cost-effective surgical option. The identification of the threshold values in this study may help surgeons to determine the most cost-effective treatment.
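The incremental cost-effectiveness ratio (ICER) used above is simply incremental cost divided by incremental quality-adjusted life years. A toy illustration of the arithmetic with hypothetical numbers (not figures from the study):

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: added dollars per QALY gained."""
    return delta_cost / delta_qaly

# Hypothetical example: if double-row repair costs $1,000 more than
# single-row and yields 0.002 additional QALYs, the ICER is $500,000
# per QALY, well above commonly cited willingness-to-pay thresholds
# (roughly $50,000-$100,000 per QALY).
ratio = icer(1000.0, 0.002)
```

Because the denominator is tiny, small changes in either incremental cost or QALY gain move the ratio dramatically, which is why the abstract's threshold analysis matters.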
Medial-row failure after arthroscopic double-row rotator cuff repair.
Yamakado, Kotaro; Katsuo, Shin-ichi; Mizuno, Katsunori; Arakawa, Hitoshi; Hayashi, Seigaku
2010-03-01
We report 4 cases of medial-row failure after double-row arthroscopic rotator cuff repair (ARCR) without arthroscopic subacromial decompression (ASAD), in which there was pullout of mattress sutures of the medial row and knots were caught between the cuff and the greater tuberosity. Between October 2006 and January 2008, 49 patients underwent double-row ARCR. During this period, ASAD was not performed with ARCR. Revision arthroscopy was performed in 8 patients because of ongoing symptoms after the index operation. In 4 of 8 patients the medial rotator cuff failed; the tendon appeared to be avulsed at the medial row, and there were exposed knots on the bony surface of the rotator cuff footprint. It appeared that the knots were caught between the cuff and the greater tuberosity. Three retear cuffs were revised with the arthroscopic transtendon technique, and one was revised with a single-row technique after completing the tear. ASAD was performed in all patients. Three of the four patients showed improvement of symptoms and returned to their preinjury occupation. Impingement of pullout knots may be a source of pain after double-row rotator cuff repair. Copyright 2010 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
High-Altitude Flight Cooling Investigation of a Radial Air-Cooled Engine
NASA Technical Reports Server (NTRS)
Manganiello, Eugene J; Valerino, Michael F; Bell, E Barton
1947-01-01
An investigation of the cooling of an 18-cylinder, twin-row, radial, air-cooled engine in a high-performance pursuit airplane has been conducted for variable engine and flight conditions at altitudes ranging from 5000 to 35,000 feet in order to provide a basis for predicting high-altitude cooling performance from sea-level or low altitude experimental results. The engine cooling data obtained were analyzed by the usual NACA cooling-correlation method wherein cylinder-head and cylinder-barrel temperatures are related to the pertinent engine and cooling-air variables. A theoretical analysis was made of the effect on engine cooling of the change of density of the cooling air across the engine (the compressibility effect), which becomes of increasing importance as altitude is increased. Good agreement was obtained between the results of the theoretical analysis and the experimental data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shem, L.M.; Zimmerman, R.E.; Hayes, D.
The goal of the Gas Research Institute Wetland Corridors Program is to document impacts of existing pipelines on the wetlands they traverse. To accomplish this goal, 12 existing wetland crossings were surveyed. These sites varied in elapsed time since pipeline construction, wetland type, pipeline installation techniques, and right-of-way (ROW) management practices. This report presents the results of a survey conducted over the period of August 12-13, 1991, at the Bayou Grand Cane crossing in De Soto Parish, Louisiana, where a pipeline constructed three years prior to the survey crosses the bayou through mature bottomland hardwoods. The site was not seeded or fertilized after construction activities. At the time of sampling, a dense herb stratum (composed of mostly native species) covered the 20-m-wide ROW, except within drainage channels. As a result of the creation of the ROW, new habitat was created, plant diversity increased, and forest habitat became fragmented. The ROW must be maintained at an early stage of succession to allow access to the pipeline; however, impacts to the wetland were minimized by decreasing the width of the ROW to 20 m and recreating the drainage channels across the ROW. The canopy trees on the ROW's edge shaded part of the ROW, which helped to minimize the effects of the ROW.
Unducted, counterrotating gearless front fan engine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, J.B.
This patent describes a high bypass ratio gas turbine engine. It comprises a core engine effective for generating combustion gases passing through a main flow path; a power turbine aft of the core engine and including first and second counter-rotatable interdigitated turbine blade rows, effective for counterrotating first and second drive shafts, respectively; an unducted fan section forward of the core engine including a first fan blade row connected to the first drive shaft and a second fan blade row axially spaced aftward from the first fan blade row and connected to the second drive shaft; and a booster compressor axially positioned between the first and second fan blade rows and including first compressor blade rows connected to the first drive shaft and second compressor blade rows connected to the second drive shaft.
Baums, Mike H; Spahn, Gunter; Buchhorn, Gottfried H; Schultz, Wolfgang; Hofmann, Lars; Klinger, Hans-Michael
2012-06-01
To investigate the biomechanical and magnetic resonance imaging (MRI)-derived morphologic changes between single- and double-row rotator cuff repair at different time points after fixation. Eighteen mature female sheep were randomly assigned to either a single-row treatment group using arthroscopic Mason-Allen stitches or a double-row treatment group using a combination of arthroscopic Mason-Allen and mattress stitches. Each group was analyzed at 1 of 3 survival points (6 weeks, 12 weeks, and 26 weeks). We evaluated the integrity of the cuff repair using MRI and biomechanical properties using a mechanical testing machine. The mean load to failure was significantly higher in the double-row group compared with the single-row group at 6 and 12 weeks (P = .018 and P = .002, respectively). At 26 weeks, the differences were not statistically significant (P = .080). However, the double-row group achieved a mean load to failure similar to that of a healthy infraspinatus tendon, whereas the single-row group reached only 70% of the load of a healthy infraspinatus tendon. No significant morphologic differences were observed based on the MRI results. This study confirms that in an acute repair model, double-row repair may enhance the speed of mechanical recovery of the tendon-bone complex when compared with single-row repair in the early postoperative period. Double-row rotator cuff repair enables higher mechanical strength that is especially sustained during the early recovery period and may therefore improve clinical outcome. Crown Copyright © 2012. Published by Elsevier Inc. All rights reserved.
Ostrander, Roger V; McKinney, Bart I
2012-10-01
Studies suggest that arthroscopic repair techniques may have high recurrence rates for larger rotator cuff tears. A more anatomic repair may improve the success rate when performing arthroscopic rotator cuff repair. We hypothesized that a triple-row modification of the suture-bridge technique for rotator cuff repair would result in significantly more footprint contact area and pressure between the rotator cuff and the humeral tuberosity. Eighteen ovine infraspinatus tendons were repaired using 1 of 3 simulated arthroscopic techniques: a double-row repair, the suture-bridge technique, and a triple-row repair. The triple-row repair technique is a modification of the suture-bridge technique that uses an additional reducing anchor between the medial and lateral rows. Six samples were tested per group. Pressure-indicating film was used to measure the footprint contact area and pressure after each repair. The triple-row repair resulted in significantly more rotator cuff footprint contact area and contact pressure compared with the double-row technique and the standard suture-bridge technique. No statistical difference in contact area or contact pressure was found between the double-row technique and the suture-bridge technique. The triple-row technique for rotator cuff repair results in significantly more footprint contact area and contact pressure compared with the double-row and standard suture-bridge techniques. This more anatomic repair may improve the healing rate when performing arthroscopic rotator cuff repair. Copyright © 2012 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.
Evaluation of Two Compressed Air Foam Systems for Culling Caged Layer Hens.
Benson, Eric R; Weiher, Jaclyn A; Alphin, Robert L; Farnell, Morgan; Hougentogler, Daniel P
2018-04-24
Outbreaks of avian influenza (AI) and other highly contagious poultry diseases continue to be a concern for those involved in the poultry industry. In the event of an outbreak, emergency depopulation of the birds involved is necessary. In this project, two compressed air foam systems (CAFS) were evaluated for mass emergency depopulation of layer hens in a manure-belt-equipped cage system. In both experiments, a randomized block design was used, with multiple commercial layer hens treated with one of three randomly selected depopulation methods: CAFS, CAFS with CO₂ gas, and CO₂ gas. In Experiment 1, a Rowe-manufactured CAFS was used, a selection of birds were instrumented, and the times to unconsciousness, brain death, altered terminal cardiac activity, and motion cessation were recorded. CAFS with and without CO₂ was faster to unconsciousness; however, the differences in the other parameters were not statistically significant. In Experiment 2, a custom Hale-based CAFS was used to evaluate the impact of bird age, a selection of birds were instrumented, and the time to motion cessation was recorded. The difference in time to cessation of movement between pullets and spent hens using CAFS was not statistically significant. Both CAFS can depopulate caged layers; however, there was no benefit to including CO₂.
Acoustics of swirling flow in a variable area pipe
NASA Astrophysics Data System (ADS)
Peake, Nigel; Cooper, Alison
2000-11-01
We consider the propagation of small-amplitude waves through swirling steady flow conveyed by a circular pipe whose cross-sectional area varies slowly in the axial direction. The unsteady flow is decomposed into vortical and irrotational components, and the steady vorticity means that, unlike in standard rapid distortion theory, these components are coupled, as in recent work by Atassi, Tam, and co-workers. The coupling leads to separate families of modes, driven by compressibility or by the swirl, which must be treated separately. We consider the practically important case in which the swirl Mach numbers are comparable to those of the steady axial flow. WKB analysis is applied using ɛ, the mean axial gradient of the cylinder walls, as the small parameter. At O(1) we determine local wave numbers according to the parallel-flow theory of Atassi, while at O(ɛ) a secularity condition yields the variation of the modal amplitudes along the axis. We demonstrate that the presence of swirl can significantly reduce the amplitude of acoustic modes in the pipe. This is of practical significance for the prediction of noise generation by turbomachinery, since rotating blade rows can produce significant mean swirl downstream. Similar analysis for a compressible swirling jet, in which the axial variation is provided by viscous effects, will also be described.
Simulation of design-unbiased point-to-particle sampling compared to alternatives on plantation rows
Thomas B. Lynch; David Hamlin; Mark J. Ducey
2016-01-01
Total quantities of tree attributes can be estimated in plantations by sampling on plantation rows using several methods. At random sample points on a row, either fixed row lengths or variable row lengths with a fixed number of sample trees can be assessed. Ratio of means or mean of ratios estimators can be developed for the fixed number of trees option but are not...
Casting core for a cooling arrangement for a gas turbine component
Lee, Ching-Pang; Heneveld, Benjamin E
2015-01-20
A ceramic casting core, including: a plurality of rows (162, 166, 168) of gaps (164), each gap (164) defining an airfoil shape; interstitial core material (172) that defines and separates adjacent gaps (164) in each row (162, 166, 168); and connecting core material (178) that connects adjacent rows (170, 174, 176) of interstitial core material (172). Ends of interstitial core material (172) in one row (170, 174, 176) align with ends of interstitial core material (172) in an adjacent row (170, 174, 176) to form a plurality of continuous and serpentine shaped structures each including interstitial core material (172) from at least two adjacent rows (170, 174, 176) and connecting core material (178).