Sparsely sampling the sky: a Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Paykari, P.; Jaffe, A. H.
2013-08-01
The next generation of galaxy surveys will observe millions of galaxies over large volumes of the Universe. These surveys are expensive in both time and money, raising questions regarding the optimal investment of these resources. In this work, we investigate criteria for selecting amongst observing strategies for constraining the galaxy power spectrum and a set of cosmological parameters. Depending on the parameters of interest, it may be more efficient to observe a larger, but sparsely sampled, area of sky instead of a smaller contiguous area. Making use of the principles of Bayesian experimental design, we investigate the advantages and disadvantages of sparsely sampling the sky and discuss the circumstances in which a sparse survey is indeed the most efficient strategy. For the Dark Energy Survey (DES), we find that by sparsely observing the same area in a smaller amount of time, we increase the errors on the parameters by at most 0.45 per cent. Conversely, by investing the same amount of time as the original DES to observe a sparser but larger area of sky, we can constrain the parameters with errors reduced by 28 per cent.
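The strategy comparison above reduces to computing a Fisher-matrix figure of merit for each candidate survey. The sketch below illustrates the idea on a toy two-parameter model; the derivatives, bandpowers, sky fractions and number densities are invented for illustration and are not the DES configuration.

```python
# Sketch: comparing survey strategies with a Fisher-matrix figure of merit.
# All numbers and the two-parameter model are illustrative, not the DES setup.
import numpy as np

def fisher_matrix(derivs, cov):
    """F_ij = (d mu / d theta_i) . C^{-1} . (d mu / d theta_j) for a Gaussian likelihood."""
    cinv = np.linalg.inv(cov)
    return derivs @ cinv @ derivs.T

# Toy power-spectrum derivatives w.r.t. two parameters at 5 bandpowers.
derivs = np.array([[1.0, 0.8, 0.5, 0.3, 0.2],
                   [0.2, 0.4, 0.7, 0.9, 1.0]])

def bandpower_cov(fsky, nbar):
    """Diagonal Gaussian covariance: (sample variance + shot noise)^2 / fsky."""
    signal = np.array([2.0, 1.5, 1.0, 0.7, 0.5])
    return np.diag((signal + 1.0 / nbar) ** 2 / fsky)

for label, fsky, nbar in [("contiguous", 0.12, 10.0),
                          ("sparse, larger area", 0.24, 10.0)]:
    F = fisher_matrix(derivs, bandpower_cov(fsky, nbar))
    fom = np.sqrt(np.linalg.det(F))   # higher = tighter joint constraints
    print(label, "figure of merit:", round(float(fom), 2))
```

Doubling the sky fraction at fixed depth shrinks the sample-variance part of the covariance, which is why the sparse, larger-area strategy can win for some parameter combinations.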
New methods for sampling sparse populations
Anna Ringvall
2007-01-01
To improve surveys of sparse objects, methods that use auxiliary information have been suggested. Guided transect sampling uses prior information, e.g., from aerial photographs, for the layout of survey strips. Instead of being laid out straight, the strips will wind between potentially more interesting areas. 3P sampling (probability proportional to prediction) uses...
Sparsely sampling the sky: Regular vs. random sampling
NASA Astrophysics Data System (ADS)
Paykari, P.; Pires, S.; Starck, J.-L.; Jaffe, A. H.
2015-09-01
Aims: The next generation of galaxy surveys, aiming to observe millions of galaxies, are expensive both in time and money. This raises questions regarding the optimal investment of this time and money for future surveys. In a previous work, we showed that a sparse sampling strategy could be a powerful substitute for the usually favoured contiguous observation of the sky. In that paper, regular sparse sampling was investigated, where the sparsely observed patches were regularly distributed on the sky. The regularity of the mask introduces a periodic pattern in the window function, which induces periodic correlations at specific scales. Methods: In this paper, we use Bayesian experimental design to investigate a "random" sparse sampling approach, where the observed patches are randomly distributed over the total sparsely sampled area. Results: We find that in this setting the induced correlation is evenly distributed amongst all scales, as there is no preferred scale in the window function. Conclusions: This is desirable when we are interested in any specific scale in the galaxy power spectrum, such as the matter-radiation equality scale. As the figure of merit shows, however, there is no preference between regular and random sampling for constraining the overall galaxy power spectrum and the cosmological parameters.
Volumetric CT with sparse detector arrays (and application to Si-strip photon counters).
Sisniega, A; Zbijewski, W; Stayman, J W; Xu, J; Taguchi, K; Fredenberg, E; Lundqvist, Mats; Siewerdsen, J H
2016-01-07
Novel x-ray medical imaging sensors, such as photon counting detectors (PCDs) and large area CCD and CMOS cameras, can involve irregular and/or sparse sampling of the detector plane. Application of such detectors to CT involves undersampling that is markedly different from the commonly considered case of sparse angular sampling. This work investigates volumetric sampling in CT systems incorporating sparsely sampled detectors with axial and helical scan orbits and evaluates performance of model-based image reconstruction (MBIR) with spatially varying regularization in mitigating artifacts due to sparse detector sampling. Volumetric metrics of sampling density and uniformity were introduced. Penalized-likelihood MBIR with a spatially varying penalty that homogenized resolution by accounting for variations in local sampling density (i.e. detector gaps) was evaluated. The proposed methodology was tested in simulations and on an imaging bench based on a Si-strip PCD (total area 5 cm × 25 cm) consisting of an arrangement of line sensors separated by gaps of up to 2.5 mm. The bench was equipped with translation/rotation stages allowing a variety of scanning trajectories, ranging from a simple axial acquisition to helical scans with variable pitch. Statistical (spherical clutter) and anthropomorphic (hand) phantoms were considered. Image quality was compared to that obtained with a conventional uniform penalty in terms of structural similarity index (SSIM), image uniformity, spatial resolution, contrast, and noise. Scan trajectories with intermediate helical width (~10 mm longitudinal distance per 360° rotation) demonstrated an optimal tradeoff between the average sampling density and the homogeneity of sampling throughout the volume. For a scan trajectory with 10.8 mm helical width, the spatially varying penalty resulted in significant visual reduction of sampling artifacts, confirmed by a 10% reduction in minimum SSIM (from 0.88 to 0.8) and a 40% reduction in the dispersion of SSIM in the volume compared to the constant penalty (both penalties applied at optimal regularization strength). Images of the spherical clutter and wrist phantoms confirmed the advantages of the spatially varying penalty, showing a 25% improvement in image uniformity and 1.8 × higher CNR (at matched spatial resolution) compared to the constant penalty. The studies elucidate the relationship between sampling in the detector plane, acquisition orbit, sampling of the reconstructed volume, and the resulting image quality. They also demonstrate the benefit of spatially varying regularization in MBIR for scenarios with irregular sampling patterns. Such findings are important and integral to the incorporation of a sparsely sampled Si-strip PCD in CT imaging.
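The core idea of the spatially varying penalty, strengthening regularization where local sampling density is low, can be illustrated in one dimension. The following sketch is a toy analogue under invented parameters, not the paper's penalized-likelihood MBIR:

```python
# Sketch: spatially varying regularization for irregular detector sampling.
# A 1-D toy analogue: reconstruct a signal from gappy samples, with the
# smoothness penalty strengthened where local sampling density is low.
import numpy as np

rng = np.random.default_rng(0)
n = 200
truth = np.sin(np.linspace(0, 4 * np.pi, n))
mask = rng.random(n) > 0.3          # ~70% of "detector" positions sampled
mask[80:110] = False                # a gap, like the Si-strip spacing
y = np.where(mask, truth + 0.05 * rng.standard_normal(n), 0.0)

# Local sampling density -> spatially varying penalty weight beta(x).
density = np.convolve(mask.astype(float), np.ones(15) / 15, mode="same")
beta = 0.05 / np.clip(density, 0.1, None)   # sparser data -> stronger smoothing

# Solve (M + D^T B D) x = M y, a penalized weighted least-squares problem.
D = np.diff(np.eye(n), axis=0)              # first-difference operator
A = np.diag(mask.astype(float)) + D.T @ np.diag(beta[:-1]) @ D
x = np.linalg.solve(A, y)
print("RMSE:", np.sqrt(np.mean((x - truth) ** 2)))
```

The penalty weight is the only spatially varying quantity; in the gap the reconstruction leans entirely on the smoothness prior, which is the 1-D analogue of homogenizing resolution across detector gaps.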
Concurrent Tumor Segmentation and Registration with Uncertainty-based Sparse non-Uniform Graphs
Parisot, Sarah; Wells, William; Chemouny, Stéphane; Duffau, Hugues; Paragios, Nikos
2014-01-01
In this paper, we present a graph-based concurrent brain tumor segmentation and atlas to diseased patient registration framework. Both segmentation and registration problems are modeled using a unified pairwise discrete Markov Random Field model on a sparse grid superimposed on the image domain. Segmentation is addressed based on pattern classification techniques, while registration is performed by maximizing the similarity between volumes and is modular with respect to the matching criterion. The two problems are coupled by relaxing the registration term in the tumor area, which corresponds to areas of high classification score and high dissimilarity between volumes. In order to overcome the main shortcomings of discrete approaches regarding appropriate sampling of the solution space as well as important memory requirements, content-driven samplings of the discrete displacement set and the sparse grid are considered, based on the local segmentation and registration uncertainties recovered by the min-marginal energies. State-of-the-art results on a substantial low-grade glioma database demonstrate the potential of our method, while our proposed approach shows maintained performance and strongly reduced complexity of the model. PMID:24717540
ERIC Educational Resources Information Center
Sadler, Peter G.
The Institute for the Study of Sparsely Populated Areas is a multidisciplinary research unit which acts to coordinate, further, and initiate studies of the economic and social conditions of sparsely populated areas. Short summaries of the eight studies completed in the session of 1977-78 indicate work in such areas as the study of political life…
Sparse imaging for fast electron microscopy
NASA Astrophysics Data System (ADS)
Anderson, Hyrum S.; Ilic-Helms, Jovana; Rohrer, Brandon; Wheeler, Jason; Larson, Kurt
2013-02-01
Scanning electron microscopes (SEMs) are used in neuroscience and materials science to image centimeters of sample area at nanometer scales. Since imaging rates are in large part SNR-limited, large collections can lead to weeks of around-the-clock imaging time. To increase data collection speed, we propose and demonstrate on an operational SEM a fast method to sparsely sample and reconstruct smooth images. To accurately localize the electron probe position at fast scan rates, we model the dynamics of the scan coils, and use the model to rapidly and accurately visit a randomly selected subset of pixel locations. Images are reconstructed from the undersampled data by compressed sensing inversion using image smoothness as a prior. We report image fidelity as a function of acquisition speed by comparing traditional raster to sparse imaging modes. Our approach is equally applicable to other domains of nanometer microscopy in which the time to position a probe is a limiting factor (e.g., atomic force microscopy), or in which excessive electron doses might otherwise alter the sample being observed (e.g., scanning transmission electron microscopy).
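Reconstruction from a random subset of pixels under a smoothness prior can be prototyped in a few lines. The sketch below uses harmonic/biharmonic inpainting on a synthetic image as an illustrative stand-in for the paper's compressed sensing inversion; all sizes and weights are assumptions:

```python
# Sketch: reconstructing a smooth image from a random subset of pixel
# measurements with a discrete-Laplacian smoothness prior. An illustrative
# stand-in for the paper's compressed sensing inversion, not its algorithm.
import numpy as np
from scipy.sparse import identity, kron, diags
from scipy.sparse.linalg import spsolve

n = 64
yy, xx = np.mgrid[0:n, 0:n]
truth = np.sin(xx / 8.0) * np.cos(yy / 11.0)        # smooth test image

rng = np.random.default_rng(1)
mask = rng.random((n, n)) < 0.2                      # visit only 20% of pixels
y = np.where(mask, truth, 0.0).ravel()

# 2-D Laplacian via Kronecker sums of a 1-D second-difference operator.
d = diags([1, -2, 1], [-1, 0, 1], shape=(n, n))
lap = kron(identity(n), d) + kron(d, identity(n))

M = diags(mask.ravel().astype(float))                # sampling operator
A = M + 0.1 * (lap.T @ lap)                          # data fit + smoothness
recon = spsolve(A.tocsr(), y).reshape(n, n)
print("RMSE:", np.sqrt(np.mean((recon - truth) ** 2)))
```

For truly smooth images the linear solve above is enough; the paper's sparsity-based inversion matters when the image has structure that a pure quadratic smoothness prior would blur.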
Bayes plus Brass: Estimating Total Fertility for Many Small Areas from Sparse Census Data
Schmertmann, Carl P.; Cavenaghi, Suzana M.; Assunção, Renato M.; Potter, Joseph E.
2013-01-01
Small-area fertility estimates are valuable for analysing demographic change, and important for local planning and population projection. In countries lacking complete vital registration, however, small-area estimates are possible only from sparse survey or census data that are potentially unreliable. Such estimation requires new methods for old problems: procedures must be automated if thousands of estimates are required, they must deal with extreme sampling variability in many areas, and they should also incorporate corrections for possible data errors. We present a two-step algorithm for estimating total fertility in such circumstances, and we illustrate by applying the method to 2000 Brazilian Census data for over five thousand municipalities. Our proposed algorithm first smoothes local age-specific rates using Empirical Bayes methods, and then applies a new variant of Brass’s P/F parity correction procedure that is robust under conditions of rapid fertility decline. PMID:24143946
Optimized Design and Analysis of Sparse-Sampling fMRI Experiments
Perrachione, Tyler K.; Ghosh, Satrajit S.
2013-01-01
Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional time series. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Despite being in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest the employment of three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) Sparse analyses should utilize a physiologically informed model that incorporates hemodynamic response convolution to reduce model error. (2) The design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size. (3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to increase the number of samples and improve statistical power. PMID:23616742
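The first of these recommendations, fitting sparse samples with a hemodynamically convolved regressor, is easy to demonstrate in simulation. The sketch below uses a simplified double-gamma-shaped HRF and invented timing parameters, not the paper's protocol:

```python
# Sketch: sparse fMRI sampling fit with a convolved regressor vs. a boxcar.
# HRF shape and all timing parameters are simplified illustrations.
import numpy as np

rng = np.random.default_rng(2)
dt = 0.1                                    # simulation grid (s)
t = np.arange(0, 300, dt)
stim = (np.arange(t.size) % int(10 / dt) == 0).astype(float)  # 1 event / 10 s

h = np.arange(0, 30, dt)                    # simplified double-gamma HRF
hrf = h**5 * np.exp(-h) / 120 - 0.1 * h**10 * np.exp(-h) / 3628800
bold = np.convolve(stim, hrf)[: t.size]

tr_total = 8.0                              # TR + silent delay per volume (s)
idx = np.arange(0, t.size, int(tr_total / dt))
samples = bold[idx] + 0.02 * rng.standard_normal(idx.size)    # sparse volumes

regressors = {
    "convolved HRF model": bold[idx],
    "unconvolved boxcar":  np.convolve(stim, np.ones(int(2 / dt)))[: t.size][idx],
}
for name, reg in regressors.items():
    beta = (reg @ samples) / (reg @ reg)    # one-regressor least squares
    print(name, "residual var:", round(float(np.var(samples - beta * reg)), 5))
```

The convolved model leaves only the injected noise in the residuals, while the boxcar mismodels the delayed hemodynamic response, which is the model-error argument made in the abstract.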
LESS: Link Estimation with Sparse Sampling in Intertidal WSNs
Ji, Xiaoyu; Chen, Yi-chao; Li, Xiaopeng; Xu, Wenyuan
2018-01-01
Deploying wireless sensor networks (WSN) in the intertidal area is an effective approach for environmental monitoring. To sustain reliable data delivery in such a dynamic environment, a link quality estimation mechanism is crucial. However, our observations in two real WSN systems deployed in the intertidal areas reveal that link update in routing protocols often suffers from energy and bandwidth waste due to the frequent link quality measurement and updates. In this paper, we carefully investigate the network dynamics using real-world sensor network data and find it feasible to achieve accurate estimation of link quality using sparse sampling. We design and implement a compressive-sensing-based link quality estimation protocol, LESS, which incorporates both spatial and temporal characteristics of the system to aid the link update in routing protocols. We evaluate LESS in both real WSN systems and a large-scale simulation, and the results show that LESS can reduce energy and bandwidth consumption by up to 50% while still achieving more than 90% link quality estimation accuracy. PMID:29494557
NASA Astrophysics Data System (ADS)
Hwang, Sunghwan; Han, Chang Wan; Venkatakrishnan, Singanallur V.; Bouman, Charles A.; Ortalan, Volkan
2017-04-01
Scanning transmission electron microscopy (STEM) has been successfully utilized to investigate the atomic structure and chemistry of materials with atomic resolution. However, STEM's focused electron probe, with its high current density, causes electron beam damage, including radiolysis and knock-on damage, when the focused probe is exposed to electron-beam-sensitive materials. Therefore, it is highly desirable to decrease the electron dose used in STEM for the investigation of biological/organic molecules, soft materials and nanomaterials in general. With the recent emergence of novel sparse signal processing theories, such as compressive sensing and model-based iterative reconstruction, possibilities of operating STEM under a sparse acquisition scheme to reduce the electron dose have opened up. In this paper, we report our recent approach to implement sparse acquisition in STEM mode, executed by a random sparse scan and a signal processing algorithm called model-based iterative reconstruction (MBIR). In this method, a small portion, such as 5%, of randomly chosen unit sampling areas (i.e. electron probe positions), which correspond to pixels of a STEM image, within the region of interest (ROI) of the specimen is scanned with an electron probe to obtain a sparse image. Sparse images are then reconstructed using the MBIR inpainting algorithm to produce an image of the specimen at the original resolution that is consistent with an image obtained using conventional scanning methods. Experimental results for down to 5% sampling show consistency with the full STEM image acquired by the conventional scanning method. Although practical limitations of conventional STEM instruments, such as internal delays of the STEM control electronics and the continuous electron gun emission, currently hinder achieving the full potential of sparse acquisition STEM for the low-dose imaging conditions required to investigate beam-sensitive materials, the results obtained in our experiments demonstrate that sparse acquisition STEM imaging is potentially capable of reducing the electron dose by at least 20 times, expanding the frontiers of our characterization capabilities for the investigation of biological/organic molecules, polymers, soft materials and nanostructures in general.
Practical Sub-Nyquist Sampling via Array-Based Compressed Sensing Receiver Architecture
2016-07-10
different array elements at different sub-Nyquist sampling rates. Signal processing inspired by the sparse fast Fourier transform allows for signal... reconstruction algorithms can be computationally demanding (REF). The related sparse Fourier transform algorithms aim to reduce the processing time necessary to... compute the DFT of frequency-sparse signals [7]. In particular, the sparse fast Fourier transform (sFFT) achieves processing time better than the
Chu, Hui-May; Ette, Ene I
2005-09-02
This study was performed to develop a new nonparametric approach for the estimation of robust tissue-to-plasma ratio from extremely sparsely sampled paired data (i.e., one sample each from plasma and tissue per subject). Tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling-based approaches (e.g., the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by the different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to two concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naïve data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.
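The AUC-ratio idea is straightforward to prototype: form the tissue and plasma AUCs by the trapezoidal rule and attach uncertainty by resampling subjects within each time point. The sketch below is a minimal bootstrap illustration with invented concentrations, not the paper's 2-phase algorithm:

```python
# Sketch: tissue-to-plasma ratio from sparsely sampled destructive data.
# Each subject contributes one paired (plasma, tissue) point; the ratio of
# trapezoidal AUCs is bootstrapped by resampling subjects within time points.
# All concentration values are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
times = np.array([0.5, 1.0, 2.0, 4.0, 8.0])              # sampling times (h)
plasma = {t: rng.lognormal(np.log(10 / t), 0.2, size=4) for t in times}
tissue = {t: rng.lognormal(np.log(25 / t), 0.2, size=4) for t in times}

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

def auc_ratio(summarise):
    p = np.array([summarise(plasma[t]) for t in times])
    q = np.array([summarise(tissue[t]) for t in times])
    return trapezoid(q, times) / trapezoid(p, times)

point = auc_ratio(np.mean)                               # naive averaging
boots = [auc_ratio(lambda x: rng.choice(x, x.size).mean())
         for _ in range(2000)]                           # resample subjects
print("AUC ratio:", round(point, 2),
      "95% CI:", np.round(np.percentile(boots, [2.5, 97.5]), 2))
```

Unlike the naive data-averaging point estimate, the resampling distribution gives the ratio a measure of uncertainty, which is the abstract's main criticism of the averaging approach.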
Mizuno, Kana; Dong, Min; Fukuda, Tsuyoshi; Chandra, Sharat; Mehta, Parinda A; McConnell, Scott; Anaissie, Elias J; Vinks, Alexander A
2018-05-01
High-dose melphalan is an important component of conditioning regimens for patients undergoing hematopoietic stem cell transplantation. The current dosing strategy based on body surface area results in a high incidence of oral mucositis and gastrointestinal and liver toxicity. Pharmacokinetically guided dosing will individualize exposure and help minimize overexposure-related toxicity. The purpose of this study was to develop a population pharmacokinetic model and optimal sampling strategy. A population pharmacokinetic model was developed with NONMEM using 98 observations collected from 15 adult patients given the standard dose of 140 or 200 mg/m² by intravenous infusion. The determinant-optimal sampling strategy was explored with PopED software. Individual area under the curve estimates were generated by Bayesian estimation using full and the proposed sparse sampling data. The predictive performance of the optimal sampling strategy was evaluated based on bias and precision estimates. The feasibility of the optimal sampling strategy was tested using pharmacokinetic data from five pediatric patients. A two-compartment model best described the data. The final model included body weight and creatinine clearance as predictors of clearance. The determinant-optimal sampling strategies (and windows) were identified at 0.08 (0.08-0.19), 0.61 (0.33-0.90), 2.0 (1.3-2.7), and 4.0 (3.6-4.0) h post-infusion. An excellent correlation was observed between area under the curve estimates obtained with the full and the proposed four-sample strategy (R² = 0.98; p < 0.01) with a mean bias of -2.2% and precision of 9.4%. A similar relationship was observed in children (R² = 0.99; p < 0.01). The developed pharmacokinetic model-based sparse sampling strategy promises to achieve the target area under the curve as part of precision dosing.
NASA Astrophysics Data System (ADS)
Patej, A.; Eisenstein, D. J.
2018-07-01
We develop a formalism for measuring the cosmological distance scale from baryon acoustic oscillations (BAO) using the cross-correlation of a sparse redshift survey with a denser photometric sample. This reduces the shot noise that would otherwise affect the autocorrelation of the sparse spectroscopic map. As a proof of principle, we make the first on-sky application of this method to a sparse sample defined as the z > 0.6 tail of the Sloan Digital Sky Survey's (SDSS) BOSS/CMASS sample of galaxies and a dense photometric sample from SDSS DR9. We find a 2.8σ preference for the BAO peak in the cross-correlation at an effective z = 0.64, from which we measure the angular diameter distance D_M(z = 0.64) = (2418 ± 73 Mpc)(r_s/r_{s,fid}). Accordingly, we expect that using this method to combine sparse spectroscopy with the deep, high-quality imaging that is just now becoming available will enable higher precision BAO measurements than possible with the spectroscopy alone.
Sparse representation based SAR vehicle recognition along with aspect angle.
Xing, Xiangwei; Ji, Kefeng; Zou, Huanxin; Sun, Jixiang
2014-01-01
As a method of representing the test sample with few training samples from an overcomplete dictionary, sparse representation classification (SRC) has attracted much attention in synthetic aperture radar (SAR) automatic target recognition (ATR) recently. In this paper, we develop a novel SAR vehicle recognition method based on sparse representation classification along with aspect information (SRCA), in which the correlation between the vehicle's aspect angle and the sparse representation vector is exploited. The detailed procedure presented in this paper can be summarized as follows. Initially, the sparse representation vector of a test sample is solved by a sparse representation algorithm with a principal component analysis (PCA) feature-based dictionary. Then, the coefficient vector is projected onto a sparser one within a certain range of the vehicle's aspect angle. Finally, the vehicle is classified into the category that minimizes the reconstruction error with the novel sparse representation vector. Extensive experiments are conducted on the moving and stationary target acquisition and recognition (MSTAR) dataset and the results demonstrate that the proposed method performs robustly under variations of depression angle and target configurations, as well as incomplete observation.
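The SRC core (without the aspect-angle projection step) fits in a short script: code the test sample over a dictionary of training samples with an l1 solver, then classify by the smallest class-wise reconstruction residual. Synthetic Gaussian data stand in for the SAR features:

```python
# Sketch of sparse representation classification (SRC): code the test sample
# over a dictionary of training samples, then pick the class whose atoms give
# the smallest reconstruction residual. Synthetic data; no aspect-angle step.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
d, per_class, n_classes = 50, 20, 3
means = rng.standard_normal((n_classes, d))
train = np.vstack([m + 0.3 * rng.standard_normal((per_class, d)) for m in means])
labels = np.repeat(np.arange(n_classes), per_class)
atoms = train / np.linalg.norm(train, axis=1, keepdims=True)   # unit-norm atoms
D = atoms.T                                                    # d x N dictionary

test = means[1] + 0.3 * rng.standard_normal(d)
lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000).fit(D, test)
x = lasso.coef_                                                # sparse code

# Classify by the class whose atoms best reconstruct the test sample.
residuals = [np.linalg.norm(test - D[:, labels == c] @ x[labels == c])
             for c in range(n_classes)]
print("predicted class:", int(np.argmin(residuals)))           # expect class 1
```

The SRCA variant in the abstract would additionally zero coefficients whose training atoms lie outside the estimated aspect-angle window before computing the residuals.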
Urban land use of the Sao Paulo metropolitan area by automatic analysis of LANDSAT data
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Niero, M.; Foresti, C.
1983-01-01
The separability of urban land use classes in the metropolitan area of Sao Paulo was studied by means of automatic analysis of MSS/LANDSAT digital data. The data were analyzed using the K-means ("media K") and MAXVER (maximum likelihood) classification algorithms. The land use classes obtained were: CBD/vertical growth area, residential area, mixed area, industrial area, embankment area type 1, embankment area type 2, dense vegetation area and sparse vegetation area. The spectral analysis of representative samples of the urban land use classes was done using the "Single Cell" analysis option. The classes CBD/vertical growth area, residential area and embankment area type 2 showed better spectral separability than the other classes.
Discriminant WSRC for Large-Scale Plant Species Recognition.
Zhang, Shanwen; Zhang, Chuanlei; Zhu, Yihai; You, Zhuhong
2017-01-01
In sparse representation based classification (SRC) and weighted SRC (WSRC), it is time-consuming to solve the global sparse representation problem. A discriminant WSRC (DWSRC) is proposed for large-scale plant species recognition, comprising two stages. Firstly, several subdictionaries are constructed by dividing the dataset into several similar classes, and a subdictionary is chosen by the maximum similarity between the test sample and the typical sample of each similar class. Secondly, the weighted sparse representation of the test image is calculated with respect to the chosen subdictionary, and the leaf category is then assigned through the minimum reconstruction error. Different from the traditional SRC and its improved approaches, we sparsely represent the test sample on a subdictionary whose base elements are the training samples of the selected similar class, instead of using a generic overcomplete dictionary over the entire training set. Thus, the complexity of solving the sparse representation problem is reduced. Moreover, DWSRC adapts to newly added leaf species without rebuilding the dictionary. Experimental results on the ICL plant leaf database show that the method has low computational complexity and a high recognition rate, and can be clearly interpreted.
Tipton, John; Hooten, Mevin B.; Goring, Simon
2017-01-01
Scientific records of temperature and precipitation have been kept for several hundred years, but for many areas, only a shorter record exists. To understand climate change, there is a need for rigorous statistical reconstructions of the paleoclimate using proxy data. Paleoclimate proxy data are often sparse, noisy, indirect measurements of the climate process of interest, making each proxy uniquely challenging to model statistically. We reconstruct spatially explicit temperature surfaces from sparse and noisy measurements recorded at historical United States military forts and other observer stations from 1820 to 1894. One common method for reconstructing the paleoclimate from proxy data is principal component regression (PCR). With PCR, one learns a statistical relationship between the paleoclimate proxy data and a set of climate observations that are used as patterns for potential reconstruction scenarios. We explore PCR in a Bayesian hierarchical framework, extending classical PCR in a variety of ways. First, we model the latent principal components probabilistically, accounting for measurement error in the observational data. Next, we extend our method to better accommodate outliers that occur in the proxy data. Finally, we explore alternatives to the truncation of lower-order principal components using different regularization techniques. One fundamental challenge in paleoclimate reconstruction efforts is the lack of out-of-sample data for predictive validation. Cross-validation is of potential value, but is computationally expensive and potentially sensitive to outliers in sparse data scenarios. To overcome the limitations that a lack of out-of-sample records presents, we test our methods using a simulation study, applying proper scoring rules including a computationally efficient approximation to leave-one-out cross-validation using the log score to validate model performance. The result of our analysis is a spatially explicit reconstruction of spatio-temporal temperature from a very sparse historical record.
Signal Sampling for Efficient Sparse Representation of Resting State FMRI Data
Ge, Bao; Makkie, Milad; Wang, Jin; Zhao, Shijie; Jiang, Xi; Li, Xiang; Lv, Jinglei; Zhang, Shu; Zhang, Wei; Han, Junwei; Guo, Lei; Liu, Tianming
2015-01-01
As the size of brain imaging data such as fMRI grows explosively, it provides us with unprecedented and abundant information about the brain. How to reduce the size of fMRI data without losing much information becomes a more and more pressing issue. Recent literature studies tried to deal with it by dictionary learning and sparse representation methods; however, their computational complexity is still high, which hampers the wider application of sparse representation methods to large-scale fMRI datasets. To effectively address this problem, this work proposes to represent the resting state fMRI (rs-fMRI) signals of a whole brain via a statistical sampling based sparse representation. First we sampled the whole brain's signals via different sampling methods, then the sampled signals were aggregated into an input data matrix to learn a dictionary, and finally this dictionary was used to sparsely represent the whole brain's signals and identify the resting state networks. Comparative experiments demonstrate that the proposed signal sampling framework can speed up the reconstruction of concurrent brain networks by a factor of ten without losing much information. The experiments on the 1000 Functional Connectomes Project further demonstrate its effectiveness and superiority. PMID:26646924
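The sample-then-learn pipeline can be mimicked with scikit-learn: fit a dictionary on a random subset of time series, then sparsely encode the full set against it. The sketch below uses synthetic signals and toy sizes as stand-ins for rs-fMRI data:

```python
# Sketch: learn a dictionary on a sampled subset of "voxel" time series, then
# sparsely code every signal against it. A toy stand-in for the paper's
# pipeline, using scikit-learn; all sizes and penalties are assumptions.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

rng = np.random.default_rng(5)
t, n_vox = 120, 2000
basis = np.array([np.sin(np.linspace(0, k * np.pi, t)) for k in (1, 2, 3, 5)])
signals = rng.standard_normal((n_vox, 4)) @ basis \
          + 0.1 * rng.standard_normal((n_vox, t))

subset = signals[rng.choice(n_vox, size=200, replace=False)]   # 10% sampling
learner = MiniBatchDictionaryLearning(n_components=8, alpha=1.0, random_state=0)
dictionary = learner.fit(subset).components_                   # 8 x t atoms

codes = sparse_encode(signals, dictionary, algorithm="lasso_lars", alpha=1.0)
recon = codes @ dictionary
err = np.linalg.norm(signals - recon) / np.linalg.norm(signals)
print("relative reconstruction error:", round(float(err), 3))
```

Because the dictionary is learned on 10% of the signals, the expensive learning step shrinks by an order of magnitude while the cheap encoding step still covers the whole brain, which is the speed-up mechanism the abstract reports.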
Visual Tracking Based on Extreme Learning Machine and Sparse Representation
Wang, Baoxian; Tang, Linbo; Yang, Jinglin; Zhao, Baojun; Wang, Shuigen
2015-01-01
Existing sparse representation-based visual trackers mostly suffer from being time-consuming and from poor robustness. To address these issues, a novel tracking method is presented that combines sparse representation and an emerging learning technique, namely the extreme learning machine (ELM). Specifically, visual tracking can be divided into two consecutive processes. Firstly, ELM is utilized to find the optimal separating hyperplane between the target observations and background ones. The trained ELM classification function is thus able to efficiently remove most of the candidate samples related to background contents, thereby reducing the total computational cost of the subsequent sparse representation. Secondly, to further combine ELM and sparse representation, the resultant confidence values (i.e., probabilities of being the target) of samples on the ELM classification function are used to construct a new manifold learning constraint term in the sparse representation framework, which tends to achieve more robust results. Moreover, the accelerated proximal gradient method is used for deriving the optimal solution (in matrix form) of the constrained sparse tracking model. Additionally, the matrix-form solution allows the candidate samples to be calculated in parallel, leading to higher efficiency. Experiments demonstrate the effectiveness of the proposed tracker. PMID:26506359
Non-uniform sampling: post-Fourier era of NMR data collection and processing.
Kazimierczuk, Krzysztof; Orekhov, Vladislav
2015-11-01
The invention of multidimensional techniques in the 1970s revolutionized NMR, making it the general tool of structural analysis of molecules and materials. In the most straightforward approach, the signal sampling in the indirect dimensions of a multidimensional experiment is performed in the same manner as in the direct dimension, i.e. with a grid of equally spaced points. This results in lengthy experiments with a resolution often far from optimum. To circumvent this problem, numerous sparse-sampling techniques have been developed in the last three decades, including two traditionally distinct approaches: the radial sampling and non-uniform sampling. This mini review discusses the sparse signal sampling and reconstruction techniques from the point of view of an underdetermined linear algebra problem that arises when a full, equally spaced set of sampled points is replaced with sparse sampling. Additional assumptions that are introduced to solve the problem, as well as the shape of the undersampled Fourier transform operator (visualized as so-called point spread function), are shown to be the main differences between various sparse-sampling methods. Copyright © 2015 John Wiley & Sons, Ltd.
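A minimal way to see how sparsity substitutes for the missing grid points is iterative soft thresholding: alternate between enforcing consistency with the measured points and shrinking small spectral coefficients. The 1-D sketch below uses a synthetic decaying FID and invented thresholds; it illustrates the general principle, not any specific published reconstruction:

```python
# Sketch: iterative soft thresholding (IST) for non-uniformly sampled data,
# assuming the spectrum (Fourier domain) is sparse. A 1-D analogue of NUS
# reconstruction; signal, sampling ratio and threshold are all illustrative.
import numpy as np

rng = np.random.default_rng(6)
n = 512
t = np.arange(n)
fid = sum(np.exp(2j * np.pi * f * t / n - t / 200) for f in (50, 130, 131))

mask = rng.random(n) < 0.25                 # keep 25% of the points (NUS)
y = np.where(mask, fid, 0)

x = np.zeros(n, dtype=complex)              # estimated full FID
for _ in range(200):
    x[mask] = y[mask]                       # enforce consistency with data
    spec = np.fft.fft(x)
    thr = 0.02 * np.abs(spec).max()
    # complex soft threshold: shrink magnitudes, keep phases
    spec *= np.maximum(1 - thr / np.maximum(np.abs(spec), 1e-12), 0)
    x = np.fft.ifft(spec)

print("strongest spectral bins:", np.sort(np.argsort(np.abs(np.fft.fft(x)))[-3:]))
```

The sparsity assumption on the spectrum is exactly the "additional assumption" the review describes as distinguishing the various sparse-sampling reconstruction methods.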
Face recognition via sparse representation of SIFT feature on hexagonal-sampling image
NASA Astrophysics Data System (ADS)
Zhang, Daming; Zhang, Xueyong; Li, Lu; Liu, Huayong
2018-04-01
This paper investigates a face recognition approach based on the Scale Invariant Feature Transform (SIFT) feature and sparse representation. The approach takes advantage of SIFT, which is a local feature, in contrast to the holistic features used in the classical Sparse Representation based Classification (SRC) algorithm, and possesses strong robustness to expression, pose and illumination variations. Since hexagonal images have more inherent merits than square images for making the recognition process more efficient, we extract SIFT keypoints from hexagonally sampled images. Instead of matching SIFT features, firstly the sparse representation of each SIFT keypoint is computed according to the constructed dictionary; secondly these sparse vectors are quantized against the dictionary; finally each face image is represented by a histogram and these so-called Bag-of-Words vectors are classified by an SVM. Due to the use of local features, the proposed method achieves better results even when the number of training samples is small. In the experiments, the proposed method gave higher face recognition rates than other methods on the ORL and Yale B face databases; also, the effectiveness of hexagonal sampling in the proposed method is verified.
NASA Astrophysics Data System (ADS)
Miorelli, Roberto; Reboud, Christophe
2018-04-01
Pulsed Eddy Current Testing (PECT) is a popular Non-Destructive Testing (NDT) technique for applications such as corrosion monitoring in the oil and gas industry, or rivet inspection in the aeronautic area. Its particularity is the use of a transient excitation, which allows more information to be retrieved from the piece than conventional harmonic ECT, in a simpler and cheaper way than multi-frequency ECT setups. Efficient modeling tools prove, as usual, very useful for optimizing experimental sensors and devices or evaluating their performance, for instance. This paper proposes an efficient simulation of PECT signals based on standard time-harmonic solvers and the use of an Adaptive Sparse Grid (ASG) algorithm. An adaptive sampling of the ECT signal spectrum is performed with this algorithm, then the complete spectrum is interpolated from this sparse representation and PECT signals are finally synthesized by means of an inverse Fourier transform. Simulation results corresponding to existing industrial configurations are presented and the performance of the strategy is discussed by comparison to reference results.
The power of FIA Phase 3 Crown-Indicator variables to detect change
William Bechtold; KaDonna Randolph; Stanley Zarnoch
2009-01-01
The goal of Phase 3 Detection Monitoring as implemented by the Forest Inventory and Analysis Program is to identify forest ecosystems where conditions might be deteriorating in subtle ways over large areas. At the relatively sparse sampling intensity of the Phase 3 plot network, a rough measure of success for the forest health indicators developed for this purpose is...
Zhang, L; Liu, X J
2016-06-03
With the rapid development of next-generation high-throughput sequencing technology, RNA-seq has become a standard and important technique for transcriptome analysis. For multi-sample RNA-seq data, existing expression estimation methods usually deal with each RNA-seq sample individually and ignore the fact that read distributions are consistent across multiple samples. In the current study, we propose a structured sparse regression method, SSRSeq, to estimate isoform expression using multi-sample RNA-seq data. SSRSeq uses a nonparametric model to capture the general tendency of non-uniform read distribution for all genes across multiple samples. Additionally, our method adds a structured sparse regularization, which not only incorporates the sparse specificity between a gene and its corresponding isoform expression levels, but also reduces the effects of noisy reads, especially for lowly expressed genes and isoforms. Four real datasets were used to evaluate our method on isoform expression estimation. Compared with other popular methods, SSRSeq reduced the variance between multiple samples and produced more accurate isoform expression estimations, and thus more meaningful biological interpretations.
Sparse magnetic resonance imaging reconstruction using the bregman iteration
NASA Astrophysics Data System (ADS)
Lee, Dong-Hoon; Hong, Cheol-Pyo; Lee, Man-Woo
2013-01-01
Magnetic resonance imaging (MRI) reconstruction needs many samples that are sequentially acquired using phase encoding gradients in an MRI system. The number of samples directly determines the scan time, which can be long. Therefore, many researchers have studied ways to reduce the scan time, especially compressed sensing (CS), which is used for sparse images and enables reconstruction from fewer samples when k-space is not fully sampled. Recently, an iterative technique based on the Bregman method was developed for denoising. The Bregman iteration method improves on total variation (TV) regularization by gradually recovering the fine-scale structures that are usually lost in TV regularization. In this study, we investigated sparse sampling image reconstruction using the Bregman iteration for a low-field MRI system to improve its temporal resolution and to validate its usefulness. The image was obtained with a 0.32 T MRI scanner (Magfinder II, SCIMEDIX, Korea) with a phantom and an in-vivo human brain in a head coil. We applied random k-space sampling, and we determined the sampling ratios by using half the fully sampled k-space. The Bregman iteration was used to generate the final images based on the reduced data. We also calculated root-mean-square error (RMSE) values from error images that were obtained using various numbers of Bregman iterations. Our reconstructed images using the Bregman iteration for sparsely sampled images showed good results compared with the original images. Moreover, the RMSE values showed that the sparsely reconstructed phantom and human images converged to the original images. We confirmed the feasibility of sparse sampling image reconstruction methods using the Bregman iteration with a low-field MRI system and obtained good results. Although our results used half the sampling ratio, this method will be helpful in increasing the temporal resolution of low-field MRI systems.
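The Bregman iteration's "add back the residual" structure is easy to demonstrate around an off-the-shelf TV denoiser: each pass returns some of the fine-scale structure that plain TV regularization removes. A minimal sketch, using scikit-image's Chambolle TV denoiser on a noisy test image rather than an undersampled k-space reconstruction:

```python
# Sketch: Bregman iteration around a TV denoiser, adding the residual back
# at each step so fine structure lost by plain TV is gradually recovered.
# Noise level, weight and iteration count are illustrative choices.
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(7)
truth = img_as_float(data.camera())
noisy = truth + 0.1 * rng.standard_normal(truth.shape)

v = np.zeros_like(noisy)                    # accumulated residual
for k in range(5):
    x = denoise_tv_chambolle(noisy + v, weight=0.1)
    v += noisy - x                          # Bregman "add back the noise"
    rmse = np.sqrt(np.mean((x - truth) ** 2))
    print(f"iteration {k + 1}: RMSE = {rmse:.4f}")
```

In the CS-MRI setting of the abstract, the same outer loop is wrapped around a data-consistency step on the measured k-space lines instead of a plain denoising step.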
Irvine, Kathryn M.; Thornton, Jamie; Backus, Vickie M.; Hohmann, Matthew G.; Lehnhoff, Erik A.; Maxwell, Bruce D.; Michels, Kurt; Rew, Lisa
2013-01-01
Commonly in environmental and ecological studies, species distribution data are recorded as presence or absence throughout a spatial domain of interest. Field based studies typically collect observations by sampling a subset of the spatial domain. We consider the effects of six different adaptive and two non-adaptive sampling designs and choice of three binary models on both predictions to unsampled locations and parameter estimation of the regression coefficients (species–environment relationships). Our simulation study is unique compared to others to date in that we virtually sample a true known spatial distribution of a nonindigenous plant species, Bromus inermis. The census of B. inermis provides a good example of a species distribution that is both sparsely (1.9 % prevalence) and patchily distributed. We find that modeling the spatial correlation using a random effect with an intrinsic Gaussian conditionally autoregressive prior distribution was equivalent or superior to Bayesian autologistic regression in terms of predicting to un-sampled areas when strip adaptive cluster sampling was used to survey B. inermis. However, inferences about the relationships between B. inermis presence and environmental predictors differed between the two spatial binary models. The strip adaptive cluster designs we investigate provided a significant advantage in terms of Markov chain Monte Carlo chain convergence when trying to model a sparsely distributed species across a large area. In general, there was little difference in the choice of neighborhood, although the adaptive king was preferred when transects were randomly placed throughout the spatial domain.
Multilevel sparse functional principal component analysis.
Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S
2014-01-29
We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both between and within subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations the proposed method is able to discover dominating modes of variations and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.
Qi, Jin; Yang, Zhiyong
2014-01-01
Real-time human activity recognition is essential for human-robot interaction in assisted healthy independent living. Most previous work in this area was performed on traditional two-dimensional (2D) videos, and both global and local methods have been used. Since 2D videos are sensitive to changes of lighting condition, view angle, and scale, researchers have begun in recent years to explore applications of 3D information in human activity understanding. Unfortunately, features that work well on 2D videos usually don't perform well on 3D videos, and there is no consensus on what 3D features should be used. Here we propose a model of human activity recognition based on 3D movements of body joints. Our method has three steps: learning dictionaries of sparse codes of 3D movements of joints, sparse coding, and classification. In the first step, space-time volumes of 3D movements of body joints are obtained via dense sampling, and independent component analysis is then performed to construct a dictionary of sparse codes for each activity. In the second step, the space-time volumes are projected onto the dictionaries and a set of sparse histograms of the projection coefficients is constructed as feature representations of the activities. Finally, the sparse histograms are used as inputs to a support vector machine to recognize human activities. We tested this model on three databases of human activities and found that it outperforms the state-of-the-art algorithms. Thus, this model can be used for real-time human activity recognition in many applications.
NASA Astrophysics Data System (ADS)
Rana, Parvez; Vauhkonen, Jari; Junttila, Virpi; Hou, Zhengyang; Gautam, Basanta; Cawkwell, Fiona; Tokola, Timo
2017-12-01
Large-diameter trees (taking DBH > 30 cm to define large trees) dominate the dynamics, function and structure of a forest ecosystem. The aim here was to employ sparse airborne laser scanning (ALS) data with a mean point density of 0.8 points m⁻² and the non-parametric k-most similar neighbour (k-MSN) method to predict tree diameter at breast height (DBH) distributions in a subtropical forest in southern Nepal. The specific objectives were: (1) to evaluate the accuracy of the large-tree fraction of the diameter distribution; and (2) to assess the effect of the number of training areas (sample size, n) on the accuracy of the predicted tree diameter distribution. Comparison of the predicted distributions with empirical ones indicated that the large-tree diameter distribution can be derived in a mixed-species forest with an RMSE% of 66% and a bias% of -1.33%. It was also feasible to reduce the sample size without compromising the model's predictive capacity: for large-diameter trees, even halving the number of training plots (n = 250) produced only a marginal increase in RMSE% (1.12-1.97%) compared with the original training plots (n = 500). To be consistent with these outcomes, the sample areas should capture the entire range of spatial and feature variability in order to reduce the occurrence of error.
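The k-MSN predictor is essentially nearest-neighbour imputation: find the k training plots most similar in ALS feature space and average their observed diameter distributions. The sketch below uses plain Euclidean k-NN on synthetic data rather than k-MSN's canonical-correlation-weighted distance:

```python
# Sketch: k-nearest-neighbour prediction of a plot's DBH distribution from
# ALS metrics, in the spirit of k-MSN (plain Euclidean k-NN here, not the
# canonical-correlation-weighted distance of k-MSN). Synthetic data only.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(8)
n_plots, n_metrics, n_bins = 500, 6, 10     # DBH histogram bins, e.g. 5 cm wide
als = rng.standard_normal((n_plots, n_metrics))            # ALS height metrics
w = rng.random((n_metrics, n_bins))
dbh_hist = np.maximum(als @ w + 0.3 * rng.standard_normal((n_plots, n_bins)), 0)

train, test = slice(0, 400), slice(400, None)
knn = NearestNeighbors(n_neighbors=5).fit(als[train])
_, idx = knn.kneighbors(als[test])
pred = dbh_hist[train][idx].mean(axis=1)                   # average k neighbours

rmse = np.sqrt(np.mean((pred - dbh_hist[test]) ** 2))
print("RMSE of predicted DBH histograms:", round(float(rmse), 3))
```

Shrinking the training slice in this sketch mimics the paper's sample-size experiment: accuracy degrades gracefully as long as the remaining plots still span the feature space.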
ERIC Educational Resources Information Center
Kraenzel, Carl F.
Rural demographic characteristics, regional distribution, and their respective trends should constitute significant policy information for the nation, but the U.S. Population Census offers little aid to the researcher studying population on a minor civil division (MCD) basis. When some census data are based on a 15 percent sample, some on a 5…
Investigation of wall-bounded turbulence over sparsely distributed roughness
NASA Astrophysics Data System (ADS)
Placidi, Marco; Ganapathisubramani, Bharath
2011-11-01
The effects of sparsely distributed roughness elements on the structure of a turbulent boundary layer are examined by performing a series of Particle Image Velocimetry (PIV) experiments in a wind tunnel. From the literature, the best way to characterise a rough wall, especially one where the density of roughness elements is sparse, is unclear. In this study, rough surfaces consisting of sparsely and uniformly distributed LEGO® blocks are used. Five different patterns are adopted in order to examine the effects of frontal solidity (λf, frontal area of the roughness elements per unit wall-parallel area), plan solidity (λp, plan area of roughness elements per unit wall-parallel area) and the geometry of the roughness element (square and cylindrical elements), on the turbulence structure. The Karman number, Reτ , has been matched, at the value of approximately 2300, in order to compare across the different cases. In the talk, we will present detailed analysis of mean and rms velocity profiles, Reynolds stresses and quadrant decomposition.
Distant failure prediction for early stage NSCLC by analyzing PET with sparse representation
NASA Astrophysics Data System (ADS)
Hao, Hongxia; Zhou, Zhiguo; Wang, Jing
2017-03-01
Positron emission tomography (PET) imaging has been widely explored for treatment outcome prediction. Radiomics-driven methods provide new insight for quantitatively exploring underlying information in PET images. However, automatically extracting clinically meaningful features for prognosis remains a challenging problem. In this work, we develop a PET-guided distant failure predictive model for early stage non-small cell lung cancer (NSCLC) patients after stereotactic ablative radiotherapy (SABR) by using sparse representation. The proposed method does not need pre-calculated features and can learn intrinsically distinctive features contributing to the classification of patients with distant failure. The proposed framework includes two main parts: 1) intra-tumor heterogeneity description; and 2) dictionary-pair-learning-based sparse representation. Tumor heterogeneity is initially captured through an anisotropic kernel and represented as a set of concatenated vectors, which forms the sample gallery. Then, given a test tumor image, its identity (i.e., distant failure or not) is classified by applying the dictionary-pair-learning-based sparse representation. We evaluate the proposed approach on 48 NSCLC patients treated by SABR at our institute. Experimental results show that the proposed approach can achieve an area under the receiver operating characteristic curve (AUC) of 0.70 with a sensitivity of 69.87% and a specificity of 69.51% using five-fold cross validation.
NASA Astrophysics Data System (ADS)
Orović, Irena; Stanković, Srdjan; Amin, Moeness
2013-05-01
A modified robust two-dimensional compressive sensing algorithm for reconstruction of sparse time-frequency representation (TFR) is proposed. The ambiguity function domain is assumed to be the domain of observations. The two-dimensional Fourier bases are used to linearly relate the observations to the sparse TFR, in lieu of the Wigner distribution. We assume that a set of available samples in the ambiguity domain is heavily corrupted by an impulsive type of noise. Consequently, the problem of sparse TFR reconstruction cannot be tackled using standard compressive sensing optimization algorithms. We introduce a two-dimensional L-statistics based modification into the transform domain representation. It provides suitable initial conditions that will produce efficient convergence of the reconstruction algorithm. This approach applies sorting and weighting operations to discard an expected amount of samples corrupted by noise. The remaining samples serve as observations used in sparse reconstruction of the time-frequency signal representation. The efficiency of the proposed approach is demonstrated on numerical examples that comprise both cases of monocomponent and multicomponent signals.
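A minimal sketch of the L-statistics idea described above: sort the available ambiguity-domain samples by magnitude and discard the fraction expected to be corrupted by impulsive noise, keeping the rest as observations for the sparse reconstruction. This is a simplified stand-in for the paper's sorting-and-weighting procedure, with an illustrative discard fraction.

    import numpy as np

    def l_statistics_trim(samples, discard_frac=0.25):
        """Return indices of samples retained after discarding the
        largest-magnitude fraction (where impulsive outliers concentrate)."""
        order = np.argsort(np.abs(samples))
        keep = order[: int(len(samples) * (1 - discard_frac))]
        return np.sort(keep)

    rng = np.random.default_rng(0)
    clean = rng.normal(size=256) + 1j * rng.normal(size=256)
    impulses = np.zeros(256, complex)
    impulses[rng.choice(256, 20, replace=False)] = 50 * rng.normal(size=20)
    kept = l_statistics_trim(clean + impulses)   # observations for CS recovery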
Hierarchical Bayesian sparse image reconstruction with application to MRFM.
Dobigeon, Nicolas; Hero, Alfred O; Tourneret, Jean-Yves
2009-09-01
This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g., by maximizing the estimated posterior distribution. In our fully Bayesian approach, the posteriors of all the parameters are available. Thus, our algorithm provides more information than other previously proposed sparse reconstruction methods that only give a point estimate. The performance of the proposed hierarchical Bayesian sparse reconstruction method is illustrated on synthetic data and real data collected from a tobacco virus sample using a prototype MRFM instrument.
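A small sketch of the sparsity-inducing prior described above, a weighted mixture of a mass at zero and a positive exponential distribution; drawing from it illustrates the kind of images the hierarchical model favors. The hyperparameters w and a are illustrative, not the paper's (marginalized) values.

    import numpy as np

    def sample_prior(shape, w=0.9, a=1.0, rng=np.random.default_rng(0)):
        """w = P(pixel is exactly zero); nonzero pixels ~ Exponential(scale=a),
        which enforces both sparsity and positivity."""
        x = rng.exponential(scale=a, size=shape)
        x[rng.random(shape) < w] = 0.0        # spike at zero
        return x

    image = sample_prior((64, 64))            # sparse, positive synthetic image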
Magnetic Resonance Super-resolution Imaging Measurement with Dictionary-optimized Sparse Learning
NASA Astrophysics Data System (ADS)
Li, Jun-Bao; Liu, Jing; Pan, Jeng-Shyang; Yao, Hongxun
2017-06-01
Magnetic Resonance Super-resolution Imaging Measurement (MRIM) is an effective way of measuring materials. MRIM has wide applications in physics, chemistry, biology, geology, medicine and material science, especially in medical diagnosis. It is feasible to improve the resolution of MR imaging by increasing radiation intensity, but high radiation intensity and long exposure to the magnetic field harm the human body. Thus, in practical applications, hardware-based imaging reaches its resolution limit. Software-based super-resolution technology is an effective way to improve image resolution. This work proposes a framework for a dictionary-optimized, sparse-learning-based MR super-resolution method. The framework addresses the problem of sample selection for dictionary learning in sparse reconstruction. A textural-complexity-based image quality representation is proposed to choose the optimal samples for dictionary learning. Comprehensive experiments show that dictionary-optimized sparse learning improves the performance of sparse representation.
An Improved Sparse Representation over Learned Dictionary Method for Seizure Detection.
Li, Junhui; Zhou, Weidong; Yuan, Shasha; Zhang, Yanli; Li, Chengcheng; Wu, Qi
2016-02-01
Automatic seizure detection plays an important role in the monitoring, diagnosis and treatment of epilepsy. In this paper, a patient-specific method is proposed for seizure detection in long-term intracranial electroencephalogram (EEG) recordings. This seizure detection method is based on sparse representation with online dictionary learning and an elastic net constraint. The online learned dictionary can sparsely represent the testing samples more accurately, and the elastic net constraint, which combines the l1-norm and l2-norm, not only makes the coefficients sparse but also avoids the over-fitting problem. First, the EEG signals are preprocessed using wavelet filtering and differential filtering, and a kernel function is applied to make the samples closer to linearly separable. Then dictionaries of seizure and nonseizure are respectively learned from the original ictal and interictal training samples with an online dictionary optimization algorithm to compose the training dictionary. After that, the test samples are sparsely coded over the learned dictionary, and the residuals associated with the ictal and interictal sub-dictionaries are calculated, respectively. Eventually, the test samples are classified into two distinct categories, seizure or nonseizure, by comparing the reconstruction residuals. An average segment-based sensitivity of 95.45%, specificity of 99.08%, and event-based sensitivity of 94.44%, with a false detection rate of 0.23/h and an average latency of -5.14 s, have been achieved with our proposed method.
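A minimal sketch of the residual-based classification step described above: sparse-code a test sample over concatenated seizure/nonseizure dictionaries and compare class-wise reconstruction residuals. scikit-learn's batch dictionary learner and elastic-net solver stand in for the authors' online algorithm, and random arrays stand in for preprocessed EEG epochs.

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.linear_model import ElasticNet

    rng = np.random.default_rng(0)
    ictal = rng.normal(size=(100, 64))        # stand-ins for preprocessed EEG epochs
    interictal = rng.normal(size=(100, 64))

    def learn(X, n_atoms=32):
        return MiniBatchDictionaryLearning(n_components=n_atoms,
                                           random_state=0).fit(X).components_

    D = np.vstack([learn(ictal), learn(interictal)])   # (64 atoms, 64 dims)

    def classify(x, D, n_per_class=32):
        net = ElasticNet(alpha=0.01, l1_ratio=0.5, fit_intercept=False, max_iter=5000)
        net.fit(D.T, x)                        # sparse code of x over the dictionary
        c = net.coef_
        residuals = []
        for k in range(2):                     # class-wise reconstruction residual
            ck = np.zeros_like(c)
            ck[k * n_per_class:(k + 1) * n_per_class] = c[k * n_per_class:(k + 1) * n_per_class]
            residuals.append(np.linalg.norm(x - D.T @ ck))
        return ["seizure", "nonseizure"][int(np.argmin(residuals))]

    label = classify(rng.normal(size=64), D)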
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kovarik, Libor; Stevens, Andrew J.; Liyu, Andrey V.
2016-10-17
Aberration correction for scanning transmission electron microscopes (STEM) has dramatically increased spatial image resolution for beam-stable materials, but it is the sample stability rather than the microscope that often limits the practical resolution of STEM images. To extract physical information from images of beam-sensitive materials, it is becoming clear that there is a critical dose/dose-rate below which the images can be interpreted as representative of the pristine material, while above it the observation is dominated by beam effects. Here we describe an experimental approach for sparse sampling in the STEM and in-painting image reconstruction in order to reduce the electron dose/dose-rate to the sample during imaging. By characterizing the induction-limited rise time and hysteresis in the scan coils, we show that a sparse line-hopping approach to scan randomization can be implemented that optimizes both the speed of the scan and the amount of the sample that needs to be illuminated by the beam. The dose and acquisition time for the sparse sampling is shown to be effectively decreased by a factor of 5 relative to conventional acquisition, permitting imaging of beam-sensitive materials without changing the microscope operating parameters. As a result, the use of sparse line-hopping scans to acquire STEM images is demonstrated with atomic resolution aberration-corrected Z-contrast images of CaCO3, a material that is traditionally difficult to image by TEM/STEM because of dose issues.
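A toy illustration (not the instrument code) of sparse line-hopping acquisition and in-painting: sample a random subset of scan lines per frame and fill the gaps by interpolation. scipy's griddata stands in for the dictionary-based in-painting used in practice, and the ~20% line fraction mirrors the factor-of-5 dose reduction quoted above.

    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(0)
    img = rng.random((128, 128))                        # stand-in STEM frame
    rows = np.sort(rng.choice(128, 26, replace=False))  # ~20% of lines -> ~5x less dose
    yy, xx = np.meshgrid(np.arange(128), np.arange(128), indexing="ij")
    pts = np.column_stack([yy[rows].ravel(), xx[rows].ravel()])
    vals = img[rows].ravel()
    recon = griddata(pts, vals, (yy, xx), method="linear")  # in-paint missing rows
    # fill edge pixels outside the convex hull with nearest-neighbour values
    nearest = griddata(pts, vals, (yy, xx), method="nearest")
    recon = np.where(np.isnan(recon), nearest, recon)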
Effects of isolation on ant assemblages depend on microhabitat
Chen, Xuan; Adams, Benjamin; Layne, Michael; Swarzenski, Christopher M.; Norris, David O.; Hooper-Bui, Linda
2017-01-01
How isolation affects biological communities is a fundamental question in ecology and conservation biology. Local diversity (α) and regional diversity (γ) are consistently lower in insular areas. The pattern of species turnover (β diversity) and the influence of isolation on competitive interactions are less predictable. Differences in communities across microhabitats within an isolated patch could contribute to the variability in patterns related to isolation. Trees form characteristically dense and sparse patches (low vs. high isolation) in floating marshes in coastal Louisiana, and canopy and root areas around these trees could support distinct ant communities. Consequently, trees in floating marshes provide an ideal environment to study the effects of isolation on community assemblages in different microhabitats. We sampled ant communities in 120 trees during the summer of 2016. We found ant α diversity was not different between the canopy and roots, and the magnitude and directional effects of isolation on ants were inconsistent between the canopy and root areas. In the roots of sparse sites, ant diversity (α, β, and γ) was lower, species composition was changed, and the signature of interspecific competition was more prominent compared to dense sites. In the canopy, however, significant differences between dense and sparse sites were only detected in α and γ diversity, and ant species co‐occurrence was not significantly different from a random distribution. The inconsistent responses of ants in canopy and root areas to isolation may be due to the differences of species pool size, environmental harshness, and species interactions between strata. In addition, these findings indicate that communities in distinct microenvironments can respond differentially to habitat isolation. We suggest incorporating organisms from different microhabitats into future research to better understand the influence of isolation on the assembly of biological communities.
Fast and low-dose computed laminography using compressive sensing based technique
NASA Astrophysics Data System (ADS)
Abbas, Sajid; Park, Miran; Cho, Seungryong
2015-03-01
Computed laminography (CL) is well known for inspecting microstructures in materials, weldments, and soldering defects in high-density packed components or multilayer printed circuit boards. Overloading of the x-ray tube and gross failure of radio-sensitive electronic devices during a scan are among the important issues in CL that need to be addressed. Sparse-view CL is one viable option for overcoming such issues. In this work, a numerical aluminum welding phantom was simulated to collect sparsely sampled projection data at only 40 views using a conventional CL scanning scheme, i.e., an oblique scan. A compressive-sensing-inspired total-variation (TV) minimization algorithm was utilized to reconstruct the images. The images reconstructed using sparse-view data are found to be visually comparable with the images reconstructed using the full scan data set, i.e., 360 views at regular intervals. We have quantitatively confirmed that tiny structures, such as copper and tungsten slags and copper flakes, in the images reconstructed from sparsely sampled data are comparable with the corresponding structures in the fully sampled case. A blurring effect can be seen near the edges of a few pores at the bottom of the images reconstructed from sparsely sampled data, although the overall image quality is reasonable for fast and low-dose NDT.
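A compact sketch of compressive-sensing-style sparse-view reconstruction: alternate an algebraic update (SART) from 40 views with total-variation denoising. It uses scikit-image stand-ins with illustrative parameters, and the geometry is plain parallel-beam CT rather than the paper's laminographic oblique scan.

    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon_sart, resize
    from skimage.restoration import denoise_tv_chambolle

    img = resize(shepp_logan_phantom(), (128, 128))
    theta = np.linspace(0.0, 180.0, 40, endpoint=False)   # only 40 views
    sino = radon(img, theta=theta)

    recon = np.zeros_like(img)
    for _ in range(10):                                   # SART + TV iterations
        recon = iradon_sart(sino, theta=theta, image=recon)
        recon = denoise_tv_chambolle(recon, weight=0.05)  # enforce sparse gradients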
Droplet Image Super Resolution Based on Sparse Representation and Kernel Regression
NASA Astrophysics Data System (ADS)
Zou, Zhenzhen; Luo, Xinghong; Yu, Qiang
2018-02-01
Microgravity and containerless conditions, which are produced via electrostatic levitation combined with a drop tube, are important when studying the intrinsic properties of new metastable materials. Generally, temperature and image sensors can be used to measure the changes in sample temperature, morphology and volume, from which the specific heat, surface tension, viscosity changes and sample density can be obtained. Considering that the falling speed of the material sample droplet is approximately 31.3 m/s when it reaches the bottom of a 50-meter-high drop tube, a high-speed camera with a collection rate of up to 10^6 frames/s is required to image the falling droplet. However, in high-speed mode, very few pixels, approximately 48-120, will be obtained in each exposure time, which results in low image quality. Super-resolution image reconstruction is an algorithm that provides finer details than the sampling grid of a given imaging device by increasing the number of pixels per unit area in the image. In this work, we demonstrate the application of single-image super-resolution reconstruction to microgravity and electrostatic levitation experiments for the first time. Using an image super-resolution method based on sparse representation, a low-resolution droplet image can be reconstructed. Employing Yang's related dictionary model, high- and low-resolution image patches were combined in dictionary training, and related high- and low-resolution dictionaries were obtained. An online double-sparse dictionary training algorithm was used to learn the related dictionaries and overcome the shortcomings of the traditional training algorithm with small image patches. During the image reconstruction stage, a kernel regression algorithm is added, which effectively overcomes the edge blurring of Yang's method.
NASA Astrophysics Data System (ADS)
Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi
2017-02-01
A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod-Patterson-Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.
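A one-dimensional illustration of the stochastic collocation idea: approximate E[f(k)] for a log-normally distributed permeability k by evaluating f only at Gauss-Hermite nodes. The paper's dimension-adaptive Smolyak/Kronrod-Patterson machinery generalizes this to many random dimensions; f here is a hypothetical pressure response, not the paper's model.

    import numpy as np

    def expect_gauss_hermite(f, n=9):
        """E[f(Z)] for Z ~ N(0,1) via Gauss-Hermite quadrature."""
        x, w = np.polynomial.hermite.hermgauss(n)
        return np.sum(w * f(np.sqrt(2.0) * x)) / np.sqrt(np.pi)

    f = lambda z: 1.0 / np.exp(z)          # pressure ~ 1/k with k = exp(Z)
    print(expect_gauss_hermite(f))         # ~ e^{1/2}, matching E[1/k] analytically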
Sparse dictionary for synthetic transmit aperture medical ultrasound imaging.
Wang, Ping; Jiang, Jin-Yang; Li, Na; Luo, Han-Wu; Li, Fang; Cui, Shi-Gang
2017-07-01
It is possible to recover a signal below the Nyquist sampling limit using compressive sensing techniques in ultrasound imaging. However, reconstruction with common sparse transform approaches does not achieve satisfactory results. Considering the ultrasound echo signal's features of attenuation, repetition, and superposition, a sparse dictionary built from the emission pulse signal is proposed. Coefficients in the proposed dictionary are highly sparse. Images reconstructed with this dictionary were compared with those obtained with three other common transforms, namely, the discrete Fourier transform, discrete cosine transform, and discrete wavelet transform. The performance of the proposed dictionary was analyzed via simulation and experimental data, with the mean absolute error (MAE) used to quantify reconstruction quality. Experimental results indicate that the MAE associated with the proposed dictionary was always the smallest, the reconstruction time required was the shortest, and the lateral resolution and contrast of the reconstructed images were the closest to the originals. The proposed sparse dictionary performed better than the other three sparse transforms, and with the same sampling rate achieved excellent reconstruction quality.
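A minimal sketch of the proposed idea: build a dictionary whose atoms are delayed copies of the emission pulse, so echo signals are sparse in it, then recover the signal from undersampled data. The pulse shape, sizes, and the OMP solver are illustrative stand-ins rather than the paper's exact setup.

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    n = 512
    t = np.arange(n)
    pulse = np.exp(-((t - 16) / 6.0) ** 2) * np.sin(0.6 * t)    # emission pulse
    D = np.stack([np.roll(pulse, k) for k in range(n)], axis=1) # shifted-pulse atoms

    # synthetic echo: two reflections (attenuated and superposed)
    x = 1.0 * D[:, 100] + 0.4 * D[:, 230]
    m = 128                                                     # compressive samples
    Phi = np.random.default_rng(0).normal(size=(m, n)) / np.sqrt(m)
    y = Phi @ x

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=4).fit(Phi @ D, y)
    x_rec = D @ omp.coef_                                       # reconstructed echo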
Miller, Fred K.; Benham, John R.
1984-01-01
On the basis of mineral-resource surveys the Selkirk Roadless Area, Idaho has little promise for the occurrence of mineral or energy resources. Molybdenum, lead, uranium, thorium, chromium, tungsten, zirconium, and several rare-earth elements have been detected in panned concentrates from samples of stream sediment, but no minerals containing the first five elements were found in place, nor were any conditions conducive to their concentration found. Zirconium, thorium, and the rare earths occur in sparsely disseminated accessory minerals in granitic rocks and no resource potential is identified. There is no history of mining in the roadless area and there are no oil, gas, mineral, or geothermal leases or current claims.
Geostatistical modeling of riparian forest microclimate and its implications for sampling
Eskelson, B.N.I.; Anderson, P.D.; Hagar, J.C.; Temesgen, H.
2011-01-01
Predictive models of microclimate under various site conditions in forested headwater stream - riparian areas are poorly developed, and sampling designs for characterizing underlying riparian microclimate gradients are sparse. We used riparian microclimate data collected at eight headwater streams in the Oregon Coast Range to compare ordinary kriging (OK), universal kriging (UK), and kriging with external drift (KED) for point prediction of mean maximum air temperature (Tair). Several topographic and forest structure characteristics were considered as site-specific parameters. Height above stream and distance to stream were the most important covariates in the KED models, which outperformed OK and UK in terms of root mean square error. Sample patterns were optimized based on the kriging variance and the weighted means of shortest distance criterion using the simulated annealing algorithm. The optimized sample patterns outperformed systematic sample patterns in terms of mean kriging variance mainly for small sample sizes. These findings suggest methods for increasing efficiency of microclimate monitoring in riparian areas.
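Kriging is closely related to Gaussian-process regression; the sketch below mimics kriging-with-external-drift by regressing mean maximum air temperature on covariates (height above stream, distance to stream) plus a spatially correlated residual. Data are synthetic stand-ins, not the Oregon Coast Range measurements, and the predictive standard deviation plays the role of the kriging variance used for sample optimization.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 100, size=(80, 2))        # (height above stream, distance to stream)
    coords = rng.uniform(0, 500, size=(80, 2))   # sensor locations (m)
    t_air = 25 - 0.03 * X[:, 0] - 0.01 * X[:, 1] + rng.normal(0, 0.5, 80)

    features = np.hstack([coords, X])            # coordinates + drift covariates
    gp = GaussianProcessRegressor(RBF(50.0) + WhiteKernel(0.25)).fit(features, t_air)
    pred, sd = gp.predict(features[:5], return_std=True)  # sd ~ kriging variance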
NASA Astrophysics Data System (ADS)
Shoupeng, Song; Zhou, Jiang
2017-03-01
Converting an ultrasonic signal into an ultrasonic pulse stream is the key step of finite rate of innovation (FRI) sparse sampling. At present, ultrasonic pulse-stream-forming techniques are mainly based on digital algorithms; no hardware circuit that can achieve this has been reported. This paper proposes a new quadrature demodulation (QD) based circuit implementation for forming an ultrasonic pulse stream. Elaborating on FRI sparse sampling theory, the processing of the ultrasonic signal is explained, followed by a discussion and analysis of ultrasonic pulse-stream-forming methods. In contrast to ultrasonic signal envelope extraction techniques, a quadrature demodulation method (QDM) is proposed. Simulation experiments were performed to determine its performance at various signal-to-noise ratios (SNRs). The circuit was then designed, comprising a mixing module, an oscillator, a low-pass filter (LPF), and a root-sum-of-squares module. Finally, application experiments were carried out on ultrasonic flaw testing of a pipeline sample. The experimental results indicate that the QDM can accurately convert an ultrasonic signal into an ultrasonic pulse stream and recover the original signal information, such as pulse width, amplitude, and time of arrival. This technique lays the foundation for ultrasonic signal FRI sparse sampling directly in hardware circuitry.
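A software model of the quadrature demodulation chain described above: mix with quadrature carriers, low-pass filter the I and Q branches, and take the root-sum-of-squares to form the pulse stream. The frequencies, envelope, and filter order are illustrative assumptions.

    import numpy as np
    from scipy.signal import butter, filtfilt

    fs, fc = 50e6, 5e6                         # sampling and carrier frequency (Hz)
    t = np.arange(2048) / fs
    env = np.exp(-((t - 8e-6) / 1e-6) ** 2)    # hypothetical ultrasonic echo envelope
    sig = env * np.cos(2 * np.pi * fc * t)

    i = sig * np.cos(2 * np.pi * fc * t)       # mixing module (in-phase)
    q = sig * -np.sin(2 * np.pi * fc * t)      # mixing module (quadrature)
    b, a = butter(4, 2e6 / (fs / 2))           # low-pass filter (LPF)
    pulse_stream = 2 * np.sqrt(filtfilt(b, a, i) ** 2 + filtfilt(b, a, q) ** 2)
    # pulse_stream ~ env: the envelope (pulse width, amplitude, arrival time) is recovered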
Sparse modeling applied to patient identification for safety in medical physics applications
NASA Astrophysics Data System (ADS)
Lewkowitz, Stephanie
Every scheduled treatment at a radiation therapy clinic involves a series of safety protocols to ensure the utmost patient care. Despite these protocols, on rare occasions an entirely preventable medical event, an accident, may occur. Delivering a treatment plan to the wrong patient is preventable, yet it is still a clinically documented error. This research describes a computational method to identify patients with a novel machine learning technique to combat misadministration. The patient identification program stores face and fingerprint data for each patient. New, unlabeled data from those patients are categorized according to the library. The categorization of data by this face-fingerprint detector is accomplished with new machine learning algorithms based on Sparse Modeling that have already begun transforming the foundation of Computer Vision. Previous patient recognition software required special subroutines for faces and different, tailored subroutines for fingerprints. In this research, the same exact model is used for both fingerprints and faces, without any additional subroutines and even without adjusting the two hyperparameters. Sparse Modeling is a powerful tool that has already shown utility in the areas of super-resolution, denoising, inpainting, demosaicing, and sub-Nyquist sampling, i.e., compressed sensing. Sparse Modeling is possible because natural images are inherently sparse in some bases, owing to their inherent structure. This research chooses datasets of face and fingerprint images to test the patient identification model. The model stores the images of each dataset as a basis (library). One image at a time is removed from the library and is classified by a sparse code in terms of the remaining library. The Locally Competitive Algorithm, a truly neurally inspired Artificial Neural Network, solves the computationally difficult task of finding the sparse code for the test image. The components of the sparse representation vector are summed by ℓ1 pooling, and correct patient identification is consistently achieved in 100% of 1000 trials when either the face data or fingerprint data are used as a classification basis. The algorithm also achieves 100% classification when faces and fingerprints are concatenated into multimodal datasets. This suggests that 100% patient identification will be achievable in the clinical setting.
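A compact sketch of the Locally Competitive Algorithm (LCA) mentioned above: leaky-integrator dynamics on membrane potentials with soft-threshold activations and lateral inhibition through atom overlaps. The dictionary and input are random stand-ins for the face/fingerprint library, and the parameters are illustrative.

    import numpy as np

    def lca(D, x, lam=0.1, tau=10.0, steps=200):
        """Minimize 0.5*||x - D a||^2 + lam*||a||_1 via LCA dynamics."""
        G = D.T @ D - np.eye(D.shape[1])      # lateral inhibition (atom overlaps)
        b = D.T @ x                           # feed-forward drive
        u = np.zeros(D.shape[1])
        for _ in range(steps):
            a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft threshold
            u += (b - u - G @ a) / tau        # leaky-integrator update
        return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

    rng = np.random.default_rng(0)
    D = rng.normal(size=(256, 100))
    D /= np.linalg.norm(D, axis=0)            # unit-norm library images
    code = lca(D, D[:, 17] + 0.01 * rng.normal(size=256))  # code concentrates on atom 17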
Scanned-probe field-emission studies of vertically aligned carbon nanofibers
NASA Astrophysics Data System (ADS)
Merkulov, Vladimir I.; Lowndes, Douglas H.; Baylor, Larry R.
2001-02-01
Field emission properties of dense and sparse "forests" of randomly placed, vertically aligned carbon nanofibers (VACNFs) were studied using a scanned probe with a small tip diameter of ˜1 μm. The probe was scanned in directions perpendicular and parallel to the sample plane, which allowed for measuring not only the emission turn-on field at fixed locations but also the emission site density over large surface areas. The results show that dense forests of VACNFs are not good field emitters as they require high extracting (turn-on) fields. This is attributed to the screening of the local electric field by the neighboring VACNFs. In contrast, sparse forests of VACNFs exhibit moderate-to-low turn-on fields as well as high emission site and current densities, and long emission lifetime, which makes them very promising for various field emission applications.
McCoy, Alene T; Bartels, Michael J; Rick, David L; Saghir, Shakil A
2012-07-01
TK Modeler 1.0 is a Microsoft® Excel®-based pharmacokinetic (PK) modeling program created to aid in the design of toxicokinetic (TK) studies. TK Modeler 1.0 predicts the diurnal blood/plasma concentrations of a test material after single, multiple bolus, or dietary dosing using known PK information. Fluctuations in blood/plasma concentrations based on test material kinetics are calculated using one- or two-compartment PK model equations and the principle of superposition. This information can be utilized to determine appropriate dosing regimens for reaching a specific desired C(max), maintaining steady-state blood/plasma concentrations, or meeting another exposure target. The program can also aid in the selection of sampling times for accurate calculation of AUC(24h) (diurnal area under the blood concentration-time curve) using sparse-sampling methodologies (one, two or three samples). This paper describes the construction, use and validation of TK Modeler. TK Modeler accurately predicted blood/plasma concentrations of test materials and provided optimal sampling times for the calculation of AUC(24h) with improved accuracy using sparse-sampling methods. TK Modeler is therefore a validated, unique and simple modeling program that can aid in the design of toxicokinetic studies.
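A minimal sketch of the superposition calculation such a program performs for multiple bolus dosing with a one-compartment oral model; the parameter values are illustrative and not validated against TK Modeler itself.

    import numpy as np

    def conc_one_cpt(t, dose, ka, ke, v_f):
        """Plasma concentration after a single oral bolus (one-compartment)."""
        return dose * ka / (v_f * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

    def diurnal_profile(t, doses_at, dose=10.0, ka=1.2, ke=0.2, v_f=5.0):
        """Superpose single-dose curves, one per prior dosing time (h)."""
        c = np.zeros_like(t, dtype=float)
        for t0 in doses_at:
            past = t >= t0
            c[past] += conc_one_cpt(t[past] - t0, dose, ka, ke, v_f)
        return c

    t = np.linspace(0, 24, 481)
    c = diurnal_profile(t, doses_at=[0, 8, 16])              # three daily boluses
    auc_24h = np.sum((c[1:] + c[:-1]) * np.diff(t)) / 2      # trapezoid AUC(24h)
    # sparse-sampling design check: concentrations at three candidate times only
    t3 = np.array([2.0, 10.0, 18.0])
    c3 = diurnal_profile(t3, doses_at=[0, 8, 16])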
The Joker: A Custom Monte Carlo Sampler for Binary-star and Exoplanet Radial Velocity Data
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.; Hogg, David W.; Foreman-Mackey, Daniel; Rix, Hans-Walter
2017-03-01
Given sparse or low-quality radial velocity measurements of a star, there are often many qualitatively different stellar or exoplanet companion orbit models that are consistent with the data. The consequent multimodality of the likelihood function leads to extremely challenging search, optimization, and Markov chain Monte Carlo (MCMC) posterior sampling over the orbital parameters. Here we create a custom Monte Carlo sampler for sparse or noisy radial velocity measurements of two-body systems that can produce posterior samples for orbital parameters even when the likelihood function is poorly behaved. The six standard orbital parameters for a binary system can be split into four nonlinear parameters (period, eccentricity, argument of pericenter, phase) and two linear parameters (velocity amplitude, barycenter velocity). We capitalize on this by building a sampling method in which we densely sample the prior probability density function (pdf) in the nonlinear parameters and perform rejection sampling using a likelihood function marginalized over the linear parameters. With sparse or uninformative data, the sampling obtained by this rejection sampling is generally multimodal and dense. With informative data, the sampling becomes effectively unimodal but too sparse: in these cases we follow the rejection sampling with standard MCMC. The method produces correct samplings in orbital parameters for data that include as few as three epochs. The Joker can therefore be used to produce proper samplings of multimodal pdfs, which are still informative and can be used in hierarchical (population) modeling. We give some examples that show how the posterior pdf depends sensitively on the number and time coverage of the observations and their uncertainties.
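A schematic of the rejection-sampling step described above, for a toy RV model: draw many prior samples of the nonlinear parameters (here only the period and phase of a circular orbit), marginalize the linear amplitude analytically per sample, and keep samples with probability proportional to the marginal likelihood. This is heavily simplified relative to The Joker's full six-parameter model; data and priors are invented.

    import numpy as np

    rng = np.random.default_rng(0)
    t_obs = np.array([0.0, 11.3, 47.9])                 # three sparse epochs (days)
    rv = np.array([4.1, -3.6, 2.8]); sigma = 0.5        # measured RVs (km/s)

    n = 100_000
    period = np.exp(rng.uniform(np.log(1.0), np.log(100.0), n))  # log-uniform prior
    phase = rng.uniform(0, 2 * np.pi, n)

    logL = np.empty(n)
    for i in range(n):
        g = np.sin(2 * np.pi * t_obs / period[i] + phase[i])     # amplitude basis
        # Gaussian marginalization over the linear amplitude K (weak prior 1e-4):
        A = g @ g / sigma**2 + 1e-4
        b = g @ rv / sigma**2
        logL[i] = 0.5 * b**2 / A - 0.5 * np.log(A)
    keep = rng.random(n) < np.exp(logL - logL.max())    # rejection sampling
    posterior_periods = period[keep]                    # typically multimodal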
View-interpolation of sparsely sampled sinogram using convolutional neural network
NASA Astrophysics Data System (ADS)
Lee, Hoyeon; Lee, Jongha; Cho, Seungryong
2017-02-01
Sparse-view sampling and its associated iterative image reconstruction in computed tomography have been actively investigated. Sparse-view CT is a viable option for low-dose CT, particularly in cone-beam CT (CBCT) applications, where advanced iterative image reconstruction yields varying degrees of image artifacts. One of the artifacts that may occur in sparse-view CT is the streak artifact in the reconstructed images. Another approach to sparse-view CT imaging uses interpolation methods to fill in the missing view data and reconstructs the image with an analytic reconstruction algorithm. In this study, we developed an interpolation method using a convolutional neural network (CNN), one of the most widely used deep-learning methods, to find missing projection data, and compared its performance with other interpolation techniques.
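A minimal sketch of CNN-based view interpolation: train a small convolutional network to map a sparsely sampled sinogram (missing views pre-filled, e.g., with zeros or linear interpolation) to the densely sampled one. The architecture and sizes are illustrative assumptions, not the paper's network.

    import torch
    import torch.nn as nn

    class ViewInterpolator(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1),   # residual correction
            )
        def forward(self, x):                     # x: (N, 1, views, detectors)
            return x + self.net(x)

    model = ViewInterpolator()
    sparse_sino = torch.randn(4, 1, 360, 128)     # stand-in: upsampled sparse sinograms
    dense_sino = torch.randn(4, 1, 360, 128)      # stand-in targets (full 360 views)
    loss = nn.functional.mse_loss(model(sparse_sino), dense_sino)
    loss.backward()                               # gradients for one training step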
Pisharady, Pramod Kumar; Sotiropoulos, Stamatios N; Duarte-Carvajalino, Julio M; Sapiro, Guillermo; Lenglet, Christophe
2018-02-15
We present a sparse Bayesian unmixing algorithm, BusineX: Bayesian Unmixing for Sparse Inference-based Estimation of Fiber Crossings (X), for estimation of white matter fiber parameters from compressed (under-sampled) diffusion MRI (dMRI) data. BusineX combines compressive sensing with linear unmixing and introduces sparsity to the previously proposed multiresolution data fusion algorithm RubiX, resulting in a method for improved reconstruction, especially from data with a lower number of diffusion gradients. We formulate the estimation of fiber parameters as a sparse signal recovery problem and propose a linear unmixing framework with sparse Bayesian learning for the recovery of the sparse signals, namely the fiber orientations and volume fractions. The data are modeled using a parametric spherical deconvolution approach and represented using a dictionary created from the exponential decay components along different possible diffusion directions. Volume fractions of fibers along these directions define the dictionary weights. The proposed sparse inference, which is based on this dictionary representation, considers the sparsity of fiber populations and exploits the spatial redundancy in the data representation, thereby facilitating inference from under-sampled q-space. The algorithm improves parameter estimation from dMRI through data-dependent local learning of hyperparameters, at each voxel and for each possible fiber orientation, that moderate the strength of priors governing the parameter variances. Experimental results on synthetic and in-vivo data show improved accuracy with lower uncertainty in fiber parameter estimates. BusineX resolves a higher number of second and third fiber crossings. For under-sampled data, the algorithm is also shown to produce more reliable estimates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakeman, J.D., E-mail: jdjakem@sandia.gov; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
Mannocci, Laura; Roberts, Jason J; Miller, David L; Halpin, Patrick N
2017-06-01
As human activities expand beyond national jurisdictions to the high seas, there is an increasing need to consider anthropogenic impacts to species inhabiting these waters. The current scarcity of scientific observations of cetaceans in the high seas impedes the assessment of population-level impacts of these activities. We developed plausible density estimates to facilitate a quantitative assessment of anthropogenic impacts on cetacean populations in these waters. Our study region extended from a well-surveyed region within the U.S. Exclusive Economic Zone into a large region of the western North Atlantic sparsely surveyed for cetaceans. We modeled densities of 15 cetacean taxa with available line transect survey data and habitat covariates and extrapolated predictions to sparsely surveyed regions. We formulated models to reduce the extent of extrapolation beyond covariate ranges, and constrained them to model simple and generalizable relationships. To evaluate confidence in the predictions, we mapped where predictions were made outside sampled covariate ranges, examined alternate models, and compared predicted densities with maps of sightings from sources that could not be integrated into our models. Confidence levels in model results depended on the taxon and geographic area and highlighted the need for additional surveying in environmentally distinct areas. With application of necessary caution, our density estimates can inform management needs in the high seas, such as the quantification of potential cetacean interactions with military training exercises, shipping, fisheries, and deep-sea mining and be used to delineate areas of special biological significance in international waters. Our approach is generally applicable to other marine taxa and geographic regions for which management will be implemented but data are sparse.
Jaccard distance based weighted sparse representation for coarse-to-fine plant species recognition.
Zhang, Shanwen; Wu, Xiaowei; You, Zhuhong
2017-01-01
Leaf-based plant species recognition plays an important role in ecological protection; however, its application to large, modern leaf databases has been a long-standing obstacle owing to computational cost and feasibility. Recognizing these limitations, we propose a Jaccard distance based sparse representation (JDSR) method, which adopts a two-stage, coarse-to-fine strategy for plant species recognition. In the first stage, we use the Jaccard distance between the test sample and each training sample to coarsely determine the candidate classes of the test sample. The second stage applies a Jaccard distance based weighted sparse representation classification (WSRC), which aims to approximately represent the test sample in the training space and classify it by the approximation residuals. Since the training model of our JDSR method involves far fewer but more informative representatives, the method is expected to overcome the high computational and memory costs of traditional sparse representation based classification. Comparative experimental results on a public leaf image database demonstrate that the proposed method outperforms other existing feature extraction and SRC based plant recognition methods in terms of both accuracy and computational speed.
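A small sketch of the coarse stage: the Jaccard distance between binarized leaf-feature vectors selects candidate classes before the finer weighted sparse-representation step. Features and labels are random stand-ins for real leaf descriptors.

    import numpy as np

    def jaccard_distance(a, b):
        """1 - |A intersect B| / |A union B| for binary feature vectors."""
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return 1.0 - inter / union if union else 1.0

    rng = np.random.default_rng(0)
    train = rng.random((300, 64)) > 0.5            # 300 training leaves, binary features
    labels = np.repeat(np.arange(30), 10)          # 30 species, 10 leaves each
    test = rng.random(64) > 0.5

    d = np.array([jaccard_distance(test, x) for x in train])
    candidates = np.unique(labels[np.argsort(d)[:20]])  # candidate species for stage 2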
Harada, Ryuhei; Nakamura, Tomotake; Shigeta, Yasuteru
2016-03-30
As an extension of the Outlier FLOODing (OFLOOD) method [Harada et al., J. Comput. Chem. 2015, 36, 763], the sparsity of the outliers defined by a hierarchical clustering algorithm, FlexDice, was exploited to achieve an efficient conformational search as sparsity-weighted "OFLOOD." In OFLOOD, FlexDice detects areas of sparse distribution as outliers. The outliers are regarded as candidates with high potential to promote conformational transitions and are employed as initial structures for conformational resampling by restarting molecular dynamics simulations. When detecting outliers, FlexDice assigns each outlier a rank in the hierarchy, which relates to its sparsity in the distribution. In this study, we define lower-rank (first-ranked), medium-rank (second-ranked), and highest-rank (third-ranked) outliers. For instance, the first-ranked outliers are located in a given conformational space away from the clusters (highly sparse distribution), whereas the third-ranked outliers lie near the clusters (a moderately sparse distribution). To perform the conformational search efficiently, resampling is performed from the outliers of a given rank. As demonstrations, this method was applied to several model systems: alanine dipeptide, Met-enkephalin, Trp-cage, T4 lysozyme, and glutamine binding protein. In each demonstration, the present method successfully reproduced transitions among metastable states. In particular, the first-ranked OFLOOD strongly accelerated the exploration of conformational space by expanding its edges. In contrast, the third-ranked OFLOOD intensively reproduced local transitions among neighboring metastable states. For quantitative evaluation of the sampled snapshots, free energy calculations were performed in combination with umbrella sampling, providing rigorous landscapes of the biomolecules.
On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.
Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi
2018-02-01
On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power and area efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs in both power and area that could offset the benefits of the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that lead to efficient hardware implementation. Specifically, two different approaches to the construction of sparse measurement matrices are exploited: the deterministic quasi-cyclic array code (QCAC) matrix and the sparse random binary matrix (SRBM). We demonstrate that the proposed CS encoders lead to comparable recovery performance, and efficient VLSI architecture designs are proposed for the QCAC-CS and SRBM encoders with reduced area and total power consumption.
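A toy construction of a sparse random binary measurement matrix (a few ones per column, in the spirit of the sparse-matrix encoders above) and the corresponding CS encoding y = Phi x; the sizes and per-column sparsity are illustrative, not the paper's design values.

    import numpy as np

    def sparse_binary_matrix(m, n, ones_per_col=4, rng=np.random.default_rng(0)):
        """Each column has exactly `ones_per_col` ones at random rows."""
        phi = np.zeros((m, n), dtype=np.int8)
        for j in range(n):
            phi[rng.choice(m, ones_per_col, replace=False), j] = 1
        return phi

    phi = sparse_binary_matrix(m=64, n=256)                  # 4x compression
    x = np.zeros(256); x[[10, 97, 201]] = [3.0, -1.5, 2.2]   # sparse neural feature
    y = phi @ x                                              # encoding needs only additions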
Accelerated High-Dimensional MR Imaging with Sparse Sampling Using Low-Rank Tensors
He, Jingfei; Liu, Qiegen; Christodoulou, Anthony G.; Ma, Chao; Lam, Fan
2017-01-01
High-dimensional MR imaging often requires long data acquisition time, thereby limiting its practical applications. This paper presents a low-rank tensor based method for accelerated high-dimensional MR imaging using sparse sampling. This method represents high-dimensional images as low-rank tensors (or partially separable functions) and uses this mathematical structure for sparse sampling of the data space and for image reconstruction from highly undersampled data. More specifically, the proposed method acquires two datasets with complementary sampling patterns, one for subspace estimation and the other for image reconstruction; image reconstruction from highly undersampled data is accomplished by fitting the measured data with a sparsity constraint on the core tensor and a group sparsity constraint on the spatial coefficients jointly using the alternating direction method of multipliers. The usefulness of the proposed method is demonstrated in MRI applications; it may also have applications beyond MRI.
A soil map of a large watershed in China: applying digital soil mapping in a data sparse region
NASA Astrophysics Data System (ADS)
Barthold, F.; Blank, B.; Wiesmeier, M.; Breuer, L.; Frede, H.-G.
2009-04-01
Prediction of soil classes in data-sparse regions is a major research challenge. With the advent of machine learning, the possibilities for spatially predicting soil classes have increased tremendously and opened new avenues in soil mapping. Digital soil mapping has become established as a research field over the last decades and is now widely accepted. We now need to develop tools to reduce the uncertainty in soil predictions, which is especially challenging in data-sparse regions. One approach is to implement soil taxonomic distance as a classification error criterion in classification and regression trees (CART), as suggested by Minasny et al. (Geoderma 142 (2007) 285-293). This approach assumes that the classification error should be larger between soils that are more dissimilar, i.e., differ in a larger number of soil properties, and smaller between more similar soils. Our study area is the Xilin River Basin, located in central Inner Mongolia, China. It is characterized by semi-arid climate conditions and is representative of the naturally occurring steppe ecosystem. The study area comprises 3600 km2. We applied a random, stratified sampling design after McKenzie and Ryan (Geoderma 89 (1999) 67-94) with land use and topography as stratifying variables. We defined 10 sampling classes; from each class, 14 replicates were randomly drawn and sampled. The dataset was split into 100 soil profiles for training and 40 soil profiles for validation. We then applied classification and regression trees (CART) to quantify the relationships between soil classes and environmental covariates. The classification tree explained 75.5% of the variance, with land use and geology as the most important predictor variables. Among the 8 soil classes that we predicted, the Kastanozems cover most of the area. They are predominantly found in steppe areas. However, even some of the soils at sand dune sites, which were thought to show only little soil formation, can be classified as Kastanozems. Besides the Kastanozems, Regosols are most common at the sand dune sites as well as at sites defined as bare soil, which are characterized by little or no vegetation. Gleysols are mostly found at sites in the vicinity of the Xilin river that are connected to the groundwater. They can also be found in small valleys or depressions where sub-surface waters from neighboring areas collect. The richest soils are found in mountain meadow areas, where pedogenetic conditions are most favorable and lead to the formation of Chernozems with deep humic Ah horizons. Other soil types that occur in the study area are Arenosols, Calcisols, Cambisols and Phaeozems. In addition, soil taxonomic distance is implemented in the decision tree procedure as a measure of classification error. The results of incorporating taxonomic distance as a loss function in the decision tree will be compared with the standard application of the decision tree.
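A minimal stand-in for the CART step described above: predict soil classes from environmental covariates with a decision tree and score it on a held-out validation set. The taxonomic-distance loss discussed in the text is not supported by scikit-learn's standard tree; the covariates and class labels below are synthetic.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    landuse = rng.integers(0, 5, 140)               # stratifying covariates (coded)
    geology = rng.integers(0, 4, 140)
    slope = rng.uniform(0, 30, 140)
    X = np.column_stack([landuse, geology, slope])
    y = rng.choice(["Kastanozem", "Regosol", "Gleysol", "Chernozem"], 140)

    tree = DecisionTreeClassifier(max_depth=5).fit(X[:100], y[:100])  # 100 training profiles
    accuracy = tree.score(X[100:], y[100:])                           # 40 validation profiles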
Towards sparse characterisation of on-body ultra-wideband wireless channels.
Yang, Xiaodong; Ren, Aifeng; Zhang, Zhiya; Ur Rehman, Masood; Abbasi, Qammer Hussain; Alomainy, Akram
2015-06-01
With the aim of reducing cost and power consumption of the receiving terminal, compressive sensing (CS) framework is applied to on-body ultra-wideband (UWB) channel estimation. It is demonstrated in this Letter that the sparse on-body UWB channel impulse response recovered by the CS framework fits the original sparse channel well; thus, on-body channel estimation can be achieved using low-speed sampling devices.
Mei, Kai; Kopp, Felix K; Bippus, Rolf; Köhler, Thomas; Schwaiger, Benedikt J; Gersing, Alexandra S; Fehringer, Andreas; Sauter, Andreas; Münzel, Daniela; Pfeiffer, Franz; Rummeny, Ernst J; Kirschke, Jan S; Noël, Peter B; Baum, Thomas
2017-12-01
Osteoporosis diagnosis using multidetector CT (MDCT) is limited by relatively high radiation exposure. We investigated the effect of simulated ultra-low-dose protocols on in-vivo bone mineral density (BMD) and quantitative trabecular bone assessment. Institutional review board approval was obtained. Twelve subjects with osteoporotic vertebral fractures and 12 age- and gender-matched controls undergoing routine thoracic and abdominal MDCT were included (average effective dose: 10 mSv). Ultra-low-dose examinations were simulated by lowering tube currents and applying sparse sampling at 50%, 25% and 10% of the original dose. BMD and trabecular bone parameters were extracted in T10-L5. Except for BMD measurements in sparse sampling data, absolute values of all parameters derived from ultra-low-dose data were significantly different from those derived from original-dose images (p<0.05). BMD, apparent bone fraction and trabecular thickness were still consistently lower in subjects with than in those without fractures (p<0.05). In ultra-low-dose scans, BMD and microstructure parameters were able to differentiate subjects with and without vertebral fractures, suggesting that osteoporosis diagnosis is feasible. However, absolute values differed from the original values; BMD from sparse sampling appeared to be more robust. This dose dependency of the parameters should be considered for future clinical use. Key points: • BMD and quantitative bone parameters are assessable in ultra-low-dose in-vivo MDCT scans. • Bone mineral density does not change significantly when sparse sampling is applied. • Quantitative trabecular bone microstructure measurements are sensitive to dose reduction. • Subjects with osteoporosis could be differentiated even at 10% of the original dose. • Radiation exposure should be considered when comparing quantitative bone parameters.
NASA Astrophysics Data System (ADS)
Tamamitsu, Miu; Zhang, Yibo; Wang, Hongda; Wu, Yichen; Ozcan, Aydogan
2018-02-01
The Sparsity of the Gradient (SoG) is a robust autofocusing criterion for holography, in which the gradient modulus of the complex refocused hologram is calculated and a sparsity metric is applied to it. Here, we compare two different choices of sparsity metric used in SoG, specifically the Gini index (GI) and the Tamura coefficient (TC), for holographic autofocusing on dense/connected or sparse samples. We provide a theoretical analysis predicting that for uniformly distributed image data, TC and GI exhibit similar behavior, while for naturally sparse images containing few high-valued signal entries and many low-valued noisy background pixels, TC is more sensitive to distribution changes in the signal and more resistant to background noise. These predictions are confirmed by experimental results using SoG-based holographic autofocusing on dense and connected samples (such as stained breast tissue sections) as well as highly sparse samples (such as isolated Giardia lamblia cysts). Through these experiments, we found that ToG and GoG (SoG computed with TC and GI, respectively) offer almost identical autofocusing performance on dense and connected samples, whereas for naturally sparse samples, GoG should be calculated on a relatively small region of interest (ROI) closely surrounding the object, while ToG offers more flexibility in choosing a larger ROI containing more background pixels.
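The two sparsity metrics compared above can be computed on the gradient modulus of a refocused image as follows; the definitions follow the usual forms (TC as the square root of the coefficient of variation, GI in the sorted-coefficient form), and this is a sketch rather than the authors' code.

    import numpy as np

    def tamura_coefficient(c):
        c = np.abs(c).ravel()
        return np.sqrt(c.std() / c.mean())

    def gini_index(c):
        c = np.sort(np.abs(c).ravel())            # ascending
        n, l1 = c.size, c.sum()
        k = np.arange(1, n + 1)
        return 1.0 - 2.0 * np.sum((c / l1) * (n - k + 0.5) / n)

    img = np.random.default_rng(0).random((256, 256))  # stand-in refocused image
    gy, gx = np.gradient(img)
    grad_mod = np.hypot(gx, gy)                        # gradient modulus for SoG
    print(tamura_coefficient(grad_mod), gini_index(grad_mod))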
NASA Astrophysics Data System (ADS)
Cheng, M.; Jin, J.
2017-12-01
Vegetation phenology is one of the most sensitive bio-indicators of climate change, and it has received increasing interest in the context of global warming. As one of the areas most sensitive to global change, the Tibetan Plateau is a unique region in which to study trends in vegetation phenology in response to climate change because of its distinctive vegetation composition, climate features and low level of human disturbance. Although some studies have aroused wide controversy about the actual plant phenology patterns in the Tibetan Plateau, the reasons remain unclear. In particular, the phenology of the sparse herbaceous or sparse shrub vegetation and the evergreen forests mostly located in the northwest and southeast of the Tibetan Plateau remains less studied. In this study, the spatio-temporal patterns of the start (SOS), end (EOS) and length (LOS) of the vegetation growing season for six vegetation types in the Tibetan Plateau, including evergreen broadleaf forests, evergreen coniferous forests, evergreen shrub, meadow, steppe, and sparse herbaceous or sparse shrub, were quantified from 1982 to 2014 using the NOAA/AVHRR NDVI data set at a spatial resolution of 0.05°×0.05° and 7-day intervals, with NDVI relative-change-rate thresholds and sixth-order polynomial fit models. Assisted by monthly precipitation and temperature data, the relative effects of changing climate on the variability of phenology were also examined. Diverse phenological changes were observed for the different land cover types, with an advancing start of growing season (SOS), delaying end of growing season (EOS) and increasing length of growing season (LOS) in the eastern Tibetan Plateau, where meadow is the dominant vegetation type, but with the opposite changes in the steppe and sparse herbaceous or sparse shrub regions mostly located along the northwestern and western edges of the Tibetan Plateau. Correlation analysis indicated that sufficient preseason precipitation may delay the SOS of evergreen forests in the southeastern Plateau and advance the SOS of steppe and sparse herbaceous or sparse shrub in relatively arid areas, while the advance of SOS in meadow areas could be related to higher preseason temperature.
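A sketch of the SOS extraction described above: fit a sixth-order polynomial to an annual NDVI series and mark the start of season where the NDVI relative change rate first exceeds a threshold. The NDVI series and the threshold value are synthetic illustrations, not the study's calibrated settings.

    import numpy as np

    doy = np.arange(1, 366, 7)                              # 7-day composites
    ndvi = 0.2 + 0.4 * np.exp(-((doy - 200) / 60.0) ** 2)   # idealized seasonal curve
    coef = np.polyfit(doy, ndvi, 6)                         # sixth-order polynomial fit
    smooth = np.polyval(coef, doy)

    rate = np.diff(smooth) / smooth[:-1]                    # relative change rate
    sos = doy[1:][np.argmax(rate > 0.01)]                   # first crossing = SOS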
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakeman, John D.; Narayan, Akil; Zhou, Tao
2017-06-22
We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solving a preconditioned $\ell^1$-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results showing that our method is superior to standard Monte Carlo methods in many situations of interest. Numerical examples also demonstrate that the proposed algorithm achieves comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.
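As a concrete illustration, the sketch below carries out the main steps for Legendre polynomials on [-1, 1]: draw samples from the (arcsine/Chebyshev) equilibrium measure, weight each row by the inverse square root of the summed squared orthonormal polynomials (the inverse Christoffel function), and solve the resulting basis pursuit problem as a linear program. This is a simplified reading of the approach, not the authors' code; the sizes and LP solver choice are ours.

    import numpy as np
    from numpy.polynomial import legendre
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    n, m = 40, 25                              # basis size, number of samples
    x = np.cos(np.pi * rng.random(m))          # samples from the equilibrium measure

    # Orthonormal Legendre Vandermonde (w.r.t. the uniform measure on [-1, 1])
    V = legendre.legvander(x, n - 1) * np.sqrt(2 * np.arange(n) + 1)

    # Christoffel-function preconditioner: w_i = (sum_k p_k(x_i)^2)^(-1/2)
    w = 1.0 / np.sqrt(np.sum(V ** 2, axis=1))

    c_true = np.zeros(n)
    c_true[[2, 7, 19]] = [1.0, -0.5, 0.25]     # sparse coefficient vector
    y = V @ c_true

    # Basis pursuit: min ||c||_1 s.t. (W V) c = W y, as an LP with c = u - v
    A = w[:, None] * V
    res = linprog(c=np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=w * y,
                  bounds=[(0, None)] * (2 * n))
    c_rec = res.x[:n] - res.x[n:]
    print(np.linalg.norm(c_rec - c_true))      # should be near zero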
Estimation of signal-dependent noise level function in transform domain via a sparse recovery model.
Yang, Jingyu; Gan, Ziqiao; Wu, Zhaoyang; Hou, Chunping
2015-05-01
This paper proposes a novel algorithm to estimate the noise level function (NLF) of signal-dependent noise (SDN) from a single image based on the sparse representation of NLFs. Noise level samples are estimated from the high-frequency discrete cosine transform (DCT) coefficients of nonlocal-grouped low-variation image patches. Then, an NLF recovery model based on the sparse representation of NLFs under a trained basis is constructed to recover NLF from the incomplete noise level samples. Confidence levels of the NLF samples are incorporated into the proposed model to promote reliable samples and weaken unreliable ones. We investigate the behavior of the estimation performance with respect to the block size, sampling rate, and confidence weighting. Simulation results on synthetic noisy images show that our method outperforms existing state-of-the-art schemes. The proposed method is evaluated on real noisy images captured by three types of commodity imaging devices, and shows consistently excellent SDN estimation performance. The estimated NLFs are incorporated into two well-known denoising schemes, nonlocal means and BM3D, and show significant improvements in denoising SDN-polluted images.
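The first stage of this pipeline, estimating a noise-level sample from the high-frequency DCT coefficients of a low-variation patch, can be sketched as follows. Patch grouping, confidence weighting, and the sparse NLF recovery model are omitted, and the frequency cutoff is an illustrative choice of ours.

    import numpy as np
    from scipy.fft import dctn

    def noise_level_sample(patch, cutoff=0.5):
        # Noise std estimated from high-frequency DCT coefficients of a nearly
        # flat patch; for an orthonormal DCT, white noise keeps its variance
        # in every coefficient.
        c = dctn(patch.astype(float), norm='ortho')
        v, u = np.meshgrid(np.arange(patch.shape[1]), np.arange(patch.shape[0]))
        hf = c[(u + v) >= cutoff * (patch.shape[0] + patch.shape[1] - 2)]
        return np.median(np.abs(hf)) / 0.6745    # robust MAD estimate of sigma

    rng = np.random.default_rng(1)
    flat = 100.0 + 5.0 * rng.standard_normal((16, 16))
    print(noise_level_sample(flat))              # close to the true sigma of 5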
Heat Production as a Tool in Geothermal Exploration
NASA Astrophysics Data System (ADS)
Rhodes, J. M.; Koteas, C.; Mabee, S. B.; Thomas, M.; Gagnon, T.
2012-12-01
Heat flow data (together with knowledge, or assumptions, of stratigraphy, thermal conductivity and heat production) provide the prime parameter for estimating the potential of geothermal resources. Unfortunately, this information is expensive to obtain because it requires deep boreholes; consequently it is sparse or lacking in areas not traditionally considered to have geothermal potential. New England (and most of the northeastern U.S.A.) is one such area. However, in the absence of volcano-derived hydrothermal activity with its attendant high heat flow, granitic plutons provide an alternative geothermal resource. Compared with other crustal rocks, granites contain higher concentrations of heat-producing elements (K, U, Th). Additionally, they are relatively homogeneous compared to the surrounding country rock, allowing for stimulation through hydro-fracking of large (>1 km3) geothermal reservoirs. Consequently, we have adopted a different approach, obtaining heat production data rather than relying on the very sparse heat flow data. Birch and colleagues long ago recognized the relationship between heat flow and heat production as an integral part of their concept of Heat Flow Provinces. Heat production is readily determined in the laboratory by measuring the density of a sample and the concentrations of its heat-producing elements potassium, uranium and thorium. We have determined the heat production for 570 samples from most of the major granitic and gneissic bodies in Massachusetts and Connecticut. We have also measured these parameters for 70 sedimentary rocks that cover granites and gneiss in the Connecticut and Narragansett Basins. These data are being used to calculate inferred heat flow for these localities. Comparison of these inferred heat flow values with the sparse set of values measured directly in boreholes in the two States is encouraging, indicating that this approach has merit. We have also measured thermal conductivity on all of these samples. This, together with the measured heat production and the inferred heat flow, allows the calculation of inferred temperature-depth profiles for these localities, from which we have produced maps showing the distribution of heat production, thermal conductivity, inferred heat flow and inferred temperatures at depths of 2, 4 and 6 km in the two States. We believe that this is a rapid and relatively cheap approach for evaluating the geothermal potential of a region lacking heat flow data, allowing identification of areas that warrant more detailed investigation, which would include geophysical surveys and drilling. In Massachusetts and Connecticut such areas include the Fitchburg pluton, the Permian granites, and the Narragansett and Hartford Basins, where gneiss and granites are buried beneath Carboniferous and Triassic sediments, respectively. This project is funded by the Department of Energy through an award to the Association of American State Geologists.
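The laboratory conversion from measured density and K, U, Th concentrations to heat production is compact enough to state directly. The sketch below uses the widely cited Rybach (1988) coefficients, which may differ slightly from the exact constants used in this study.

    def heat_production(rho_kg_m3, c_u_ppm, c_th_ppm, c_k_pct):
        # Radiogenic heat production A in microwatts per cubic metre
        # (Rybach 1988): A = 1e-5 * rho * (9.52 C_U + 2.56 C_Th + 3.48 C_K)
        return 1e-5 * rho_kg_m3 * (9.52 * c_u_ppm + 2.56 * c_th_ppm + 3.48 * c_k_pct)

    # A typical granite: rho = 2650 kg/m3, U = 4 ppm, Th = 15 ppm, K = 3.5 %
    print(heat_production(2650, 4.0, 15.0, 3.5))   # about 2.3 microW/m3

Inferred heat flow then follows from the Heat Flow Province relation q = q_r + D·A, given the province's reduced heat flow q_r and characteristic depth D.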
Statistical power calculations for mixed pharmacokinetic study designs using a population approach.
Kloprogge, Frank; Simpson, Julie A; Day, Nicholas P J; White, Nicholas J; Tarning, Joel
2014-09-01
Simultaneous modelling of dense and sparse pharmacokinetic data is possible with a population approach. To determine the number of individuals required to detect the effect of a covariate, simulation-based power calculation methodologies can be employed. The Monte Carlo Mapped Power method (a simulation-based power calculation methodology using the likelihood ratio test) was extended in the current study to perform sample size calculations for mixed pharmacokinetic studies (i.e. both sparse and dense data collection). A workflow guiding an easy and straightforward pharmacokinetic study design, considering also the cost-effectiveness of alternative study designs, was used in this analysis. Initially, data were simulated for a hypothetical drug and then for the anti-malarial drug, dihydroartemisinin. Two datasets (sampling design A: dense; sampling design B: sparse) were simulated using a pharmacokinetic model that included a binary covariate effect and subsequently re-estimated using (1) the same model and (2) a model not including the covariate effect in NONMEM 7.2. Power calculations were performed for varying numbers of patients with sampling designs A and B. Study designs with statistical power >80% were selected and further evaluated for cost-effectiveness. The simulation studies of the hypothetical drug and the anti-malarial drug dihydroartemisinin demonstrated that the simulation-based power calculation methodology, based on the Monte Carlo Mapped Power method, can be utilised to evaluate and determine the sample size of mixed (part sparsely and part densely sampled) study designs. The developed method can contribute to the design of robust and efficient pharmacokinetic studies.
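The study's power calculations rely on NONMEM and the Monte Carlo Mapped Power method; the toy sketch below only illustrates the underlying principle, simulation-based power for a binary covariate effect via the likelihood ratio test, using an ordinary linear model in place of a population pharmacokinetic model. All values are illustrative.

    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(2)

    def power_lrt(n, effect, n_sim=500, alpha=0.05):
        # Fraction of simulated studies in which the likelihood ratio test
        # detects the covariate effect (full vs reduced Gaussian model)
        crit = chi2.ppf(1 - alpha, df=1)
        hits = 0
        for _ in range(n_sim):
            cov = rng.integers(0, 2, n)                    # binary covariate
            y = 1.0 + effect * cov + 0.5 * rng.standard_normal(n)
            rss_full = np.sum((y - np.array([y[cov == g].mean() for g in cov])) ** 2)
            rss_red = np.sum((y - y.mean()) ** 2)
            lr = n * np.log(rss_red / rss_full)            # -2 log likelihood ratio
            hits += lr > crit
        return hits / n_sim

    for n in (20, 40, 80):                                 # candidate sample sizes
        print(n, power_lrt(n, effect=0.4))

Designs whose power exceeds 80% would then be carried forward to the cost-effectiveness comparison.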
Exhaustive Search for Sparse Variable Selection in Linear Regression
NASA Astrophysics Data System (ADS)
Igarashi, Yasuhiko; Takenaka, Hikaru; Nakanishi-Ohno, Yoshinori; Uemura, Makoto; Ikeda, Shiro; Okada, Masato
2018-04-01
We propose a K-sparse exhaustive search (ES-K) method and a K-sparse approximate exhaustive search method (AES-K) for selecting variables in linear regression. With these methods, K-sparse combinations of variables are tested exhaustively, assuming that the optimal combination of explanatory variables is K-sparse. By collecting the results of exhaustively computing ES-K, various approximate methods for selecting sparse variables can be summarized as a density of states. With this density of states, we can compare different methods for selecting sparse variables, such as relaxation and sampling. For large problems, where the combinatorial explosion of explanatory variables is crucial, the AES-K method enables the density of states to be effectively reconstructed by using the replica-exchange Monte Carlo method and the multiple histogram method. Applying the ES-K and AES-K methods to type Ia supernova data, we confirmed the conventional understanding in astronomy when an appropriate K is given beforehand. However, we found it difficult to determine K from the data alone. Using virtual measurement and analysis, we argue that this is caused by data shortage.
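A minimal sketch of the ES-K idea, assuming least-squares fits scored by cross-validation: every K-sparse support is evaluated exhaustively, the best one is reported, and the distribution of scores over all supports plays the role of the density of states. The replica-exchange machinery of AES-K is not shown.

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(3)
    n, p, K = 60, 10, 3
    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    beta[[1, 4, 8]] = [1.0, -2.0, 0.5]
    y = X @ beta + 0.3 * rng.standard_normal(n)

    def cv_error(cols, folds=5):
        # Mean squared cross-validation error for one support
        idx = np.arange(n)
        err = 0.0
        for f in range(folds):
            te = idx[f::folds]
            tr = np.setdiff1d(idx, te)
            coef, *_ = np.linalg.lstsq(X[np.ix_(tr, cols)], y[tr], rcond=None)
            err += np.sum((y[te] - X[np.ix_(te, cols)] @ coef) ** 2)
        return err / n

    # Exhaustive search over all K-sparse supports
    scores = {c: cv_error(list(c)) for c in combinations(range(p), K)}
    print(min(scores, key=scores.get))        # expect (1, 4, 8)
    # "Density of states": distribution of CV errors over all supports
    hist, edges = np.histogram(list(scores.values()), bins=20)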
Deploying temporary networks for upscaling of sparse network stations
NASA Astrophysics Data System (ADS)
Coopersmith, Evan J.; Cosh, Michael H.; Bell, Jesse E.; Kelly, Victoria; Hall, Mark; Palecki, Michael A.; Temimi, Marouane
2016-10-01
Soil observations networks at the national scale play an integral role in hydrologic modeling, drought assessment, agricultural decision support, and our ability to understand climate change. Understanding soil moisture variability is necessary to apply these measurements to model calibration, business and consumer applications, or even human health issues. The installation of soil moisture sensors as sparse, national networks is necessitated by limited financial resources. However, this results in the incomplete sampling of the local heterogeneity of soil type, vegetation cover, topography, and the fine spatial distribution of precipitation events. To this end, temporary networks can be installed in the areas surrounding a permanent installation within a sparse network. The temporary networks deployed in this study provide a more representative average at the 3 km and 9 km scales, localized about the permanent gauge. The value of such temporary networks is demonstrated at test sites in Millbrook, New York and Crossville, Tennessee. The capacity of a single U.S. Climate Reference Network (USCRN) sensor set to approximate the average of a temporary network at the 3 km and 9 km scales using a simple linear scaling function is tested. The capacity of a temporary network to provide reliable estimates with diminishing numbers of sensors, the temporal stability of those networks, and ultimately, the relationship of the variability of those networks to soil moisture conditions at the permanent sensor are investigated. In this manner, this work demonstrates the single-season installation of a temporary network as a mechanism to characterize the soil moisture variability at a permanent gauge within a sparse network.
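The abstract does not spell out the "simple linear scaling function"; a plausible least-squares reading, with synthetic data, is to regress the temporary-network average on the permanent sensor during the joint deployment period and then apply the fitted line afterwards. Variable names and values below are illustrative.

    import numpy as np

    rng = np.random.default_rng(4)
    # theta_point: series from the permanent (sparse-network) sensor
    # theta_area:  matching series of temporary-network averages (3 km or 9 km)
    theta_point = 0.15 + 0.10 * rng.random(120)
    theta_area = 0.8 * theta_point + 0.03 + 0.01 * rng.standard_normal(120)

    # Fit theta_area ~ a * theta_point + b over the deployment period ...
    a, b = np.polyfit(theta_point, theta_area, 1)

    # ... then upscale the permanent sensor once the temporary network is gone
    upscaled = a * theta_point + b
    rmse = np.sqrt(np.mean((upscaled - theta_area) ** 2))
    print(a, b, rmse)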
Distribution of butyltins in the waters and sediments along the coast of India.
Garg, Anita; Meena, Ram M; Jadhav, Sangeeta; Bhosle, Narayan B
2011-02-01
Water and surface sediment samples were analyzed for butyltins (TBT, DBT, MBT) from various ports along the east and west coasts of India. The total butyltin (TB) in water samples varied between ~1.7 and 342 ng Sn l⁻¹, whereas for sediments it varied from below the detection limit to 14861 ng Sn g⁻¹ dry weight of sediment. On average, Chennai port recorded the highest level of butyltins in the sediments, while Paradip recorded the highest level of butyltins in the waters. A fairly good relationship between the TB in the sediment and overlying water samples, as well as between organic carbon and TB, implicates the importance of adsorption/desorption processes in controlling the levels of TBT in these port areas. In India, the data on organotin pollution are very sparse; most of the port areas were surveyed for butyltins for the first time during this study. Copyright © 2010 Elsevier Ltd. All rights reserved.
Research on segmentation based on multi-atlas in brain MR image
NASA Astrophysics Data System (ADS)
Qian, Yuejing
2018-03-01
Accurate segmentation of specific tissues in brain MR images can be achieved effectively with multi-atlas-based segmentation methods, whose accuracy mainly depends on the image registration accuracy and the fusion scheme. This paper proposes an automatic multi-atlas segmentation method for brain MR images. First, to improve the registration accuracy in the area to be segmented, we employ a target-oriented image registration method for refinement. Then, in the label fusion step, we propose a new algorithm that detects abnormal sparse patches and discards the corresponding abnormal sparse coefficients; labels are then estimated from the remaining sparse coefficients combined with a multipoint label estimator strategy. The performance of the proposed method was compared with those of the nonlocal patch-based label fusion method (Nonlocal-PBM), the sparse patch-based label fusion method (Sparse-PBM) and the majority voting method (MV). Based on our experimental results, the proposed method is efficient for brain MR image segmentation compared with the MV, Nonlocal-PBM, and Sparse-PBM methods.
LiDAR point classification based on sparse representation
NASA Astrophysics Data System (ADS)
Li, Nan; Pfeifer, Norbert; Liu, Chun
2017-04-01
In order to combine the initial spatial structure and features of LiDAR data for accurate classification, the LiDAR data are represented as a 4-order tensor, and a sparse representation for classification (SRC) method is used for LiDAR tensor classification. SRC turns out to need only a few training samples from each class while still achieving good classification results. Multiple features are extracted from the raw LiDAR points to generate a high-dimensional vector at each point. The LiDAR tensor is then built from the spatial distribution and feature vectors of the point neighborhood. The entries of the LiDAR tensor are accessed via four indexes; each index is called a mode: three spatial modes in directions X, Y, Z and one feature mode. The sparsity algorithm seeks the sparse linear combination of training samples from a dictionary that best represents the test sample. To exploit the sparsity of the LiDAR tensor, the Tucker decomposition is used. It decomposes a tensor into a core tensor multiplied by a matrix along each mode; those matrices can be considered the principal components in each mode, and the entries of the core tensor show the level of interaction between the different components. Therefore, the LiDAR tensor can be approximately represented by a sparse tensor multiplied by a matrix selected from a dictionary along each mode. The matrices decomposed from training samples are arranged as initial elements in the dictionary. By dictionary learning, a reconstructive and discriminative structure dictionary along each mode is built; the overall structure dictionary is composed of class-specific sub-dictionaries. Then the sparse core tensor is calculated by a tensor OMP (orthogonal matching pursuit) method based on the dictionaries along each mode. It is expected that the original tensor should be well recovered by the sub-dictionary associated with the relevant class, while entries in the sparse tensor associated with other classes should be nearly zero. Therefore, SRC uses the reconstruction error associated with each class to classify the data. A section of airborne LiDAR points over the city of Vienna is used and classified into 6 classes: ground, roofs, vegetation, covered ground, walls and other points. Only 6 training samples from each class are taken. For the final classification result, ground and covered ground are merged into a single class (ground). The classification accuracy for ground is 94.60%, roofs 95.47%, vegetation 85.55%, walls 76.17%, and other objects 20.39%.
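Stripped of the tensor and Tucker machinery, the SRC decision rule reduces to coding the test sample over each class sub-dictionary and assigning the class with the smallest reconstruction error. The sketch below uses ordinary least squares on small class dictionaries in place of the paper's tensor OMP; that substitution and all sizes are ours.

    import numpy as np

    def src_classify(x, class_dicts):
        # Assign x to the class whose sub-dictionary reconstructs it best
        errs = []
        for D in class_dicts:              # D: columns are training samples
            coef, *_ = np.linalg.lstsq(D, x, rcond=None)
            errs.append(np.linalg.norm(x - D @ coef))
        return int(np.argmin(errs))

    rng = np.random.default_rng(5)
    centers = [rng.standard_normal(30) for _ in range(3)]
    class_dicts = [np.stack([c + 0.1 * rng.standard_normal(30) for _ in range(6)],
                            axis=1) for c in centers]   # 6 training samples/class
    x = centers[1] + 0.1 * rng.standard_normal(30)
    print(src_classify(x, class_dicts))    # expect 1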
Quresh S. Latif; Martha M. Ellis; Victoria A. Saab; Kim Mellen-McLean
2017-01-01
Sparsely distributed species attract conservation concern, but insufficient information on population trends challenges conservation and funding prioritization. Occupancy-based monitoring is attractive for these species, but appropriate sampling design and inference depend on particulars of the study system. We employed spatially explicit simulations to identify...
Robust Methods for Sensing and Reconstructing Sparse Signals
ERIC Educational Resources Information Center
Carrillo, Rafael E.
2012-01-01
Compressed sensing (CS) is an emerging signal acquisition framework that goes against the traditional Nyquist sampling paradigm. CS demonstrates that a sparse, or compressible, signal can be acquired using a low rate acquisition process. Since noise is always present in practical data acquisition systems, sensing and reconstruction methods are…
Temperature of ground water at Philadelphia, Pennsylvania, 1979- 1981
Paulachok, Gary N.
1986-01-01
Anthropogenic heat production has undoubtedly caused increased ground-water temperatures in many parts of Philadelphia, Pennsylvania, as shown by temperatures of 98 samples and logs of 40 wells measured during 1979-81. Most sample temperatures were higher than 12.6 degrees Celsius (the local mean annual air temperature), and many logs depict cooling trends with depth (anomalous gradients). Heating of surface and shallow-subsurface materials has likely caused the elevated temperatures and anomalous gradients. Solar radiation on widespread concrete and asphalt surfaces, fossil-fuel combustion, and radiant losses from buried pipelines containing steam and process chemicals are believed to be the chief sources of heat. Some heat from these and other sources is transferred to deeper zones, mainly by conduction. Temperatures in densely urbanized areas are commonly highest directly beneath the land surface and decrease progressively with depth. Temperatures in sparsely urbanized areas generally follow the natural geothermal gradient and increase downward at about that same rate.
Zhuang, Chengxu; Wang, Yulong; Yamins, Daniel; Hu, Xiaolin
2017-01-01
Visual information in the visual cortex is processed in a hierarchical manner. Recent studies show that higher visual areas, such as V2, V3, and V4, respond more vigorously to images with naturalistic higher-order statistics than to images lacking them. This property is a functional signature of higher areas, as it is much weaker or even absent in the primary visual cortex (V1). However, the mechanism underlying this signature remains elusive. We studied this problem using computational models. In several typical hierarchical visual models including the AlexNet, VggNet, and SHMAX, this signature was found to be prominent in higher layers but much weaker in lower layers. By changing both the model structure and experimental settings, we found that the signature strongly correlated with sparse firing of units in higher layers but not with any other factors, including model structure, training algorithm (supervised or unsupervised), receptive field size, and property of training stimuli. The results suggest an important role of sparse neuronal activity underlying this special feature of higher visual areas.
Habitat relationships of birds overwintering in a managed coastal prairie
Baldwin, H.Q.; Grace, J.B.; Barrow, W.C.; Rohwer, F.C.
2007-01-01
Grassland birds are considered to be rapidly declining in North America. Management approaches for grassland birds frequently rely on prescribed burning to maintain habitat in suitable condition. We evaluated the relationships among years since burn, vegetation structure, and overwintering grassland bird abundance in coastal prairie. Le Conte's Sparrows (Ammodramus leconteii) were most common in areas that had: (1) been burned within the previous 2 years, (2) medium-density herbaceous vegetation, and (3) sparse shrub densities. Savannah Sparrows (Passerculus sandwichensis) were associated with areas: (1) burned within 1 year, (2) with sparse herbaceous vegetation, and (3) with sparse shrub densities. Sedge Wrens (Cistothorus platensis) were most common in areas that had: (1) burned greater than 2 years prior and (2) dense herbaceous vegetation. Swamp Sparrows (Melospiza georgiana): (1) were most common in areas of dense shrubs, (2) were not related to time since burning, and (3) demonstrated no relationship to herbaceous vegetation densities. The relationships to fire histories for all four bird species could be explained by the associated vegetation characteristics, indicating the need for a mosaic of burn rotations and modest levels of woody vegetation.
Templin, W.E.; Smith, P.E.; DeBortoli, M.L.; Schluter, R.C.
1995-01-01
This report presents an evaluation of water- resources data-collection networks in the northern and coastal areas of Monterey County, California. This evaluation was done by the U.S. Geological Survey in cooperation with the Monterey County Flood Control and Water Conservation District to evaluate precipitation, surface water, and ground water monitoring networks. This report describes existing monitoring networks in the study areas and areas where possible additional data-collection is needed. During this study, 106 precipitation-quantity gages were identified, of which 84 were active; however, no precipitation-quality gages were identified in the study areas. The precipitaion-quantity gages were concentrated in the Monterey Peninsula and the northern part of the county. If the number of gages in these areas were reduced, coverage would still be adequate to meet most objectives; however, additional gages could improve coverage in the Tularcitos Creek basin and in the coastal areas south of Carmel to the county boundary. If collection of precipitation data were expanded to include monitoring precipitation quality, this expanded monitoring also could include monitoring precipitation for acid rain and pesticides. Eleven continuous streamflow-gaging stations were identified during this study, of which seven were active. To meet the objectives of the streamflow networks outlined in this report, the seven active stations would need to be continued, four stations would need to be reactivated, and an additional six streamflow-gaging stations would need to be added. Eleven stations that routinely were sampled for chemical constituents were identified in the study areas. Surface water in the lower Big Sur River basin was sampled annually for total coli- form and fecal coliform bacteria, and the Big Sur River was sampled monthly at 16 stations for these bacteria. Routine sampling for chemical constituents also was done in the Big Sur River basin. The Monterey County Flood Control and Water Conservation District maintained three networks in the study areas to measure ground-water levels: (1) the summer network, (2) the monthly network, and (3) the annual autumn network. The California American Water Company also did some ground-water-level monitoring in these areas. Well coverage for ground-water monitoring was dense in the seawater-intrusion area north of Moss Landing (possibly because of multiple overlying aquifers), but sparse in other parts of the study areas. During the study, 44 sections were identified as not monitored for ground-water levels. In an ideal ground-water-level network, wells would be evenly spaced, except where local conditions or correlations of wells make monitoring unnecessary. A total of 384 wells that monitor ground-water levels and/or ground-water quality were identified during this study. The Monterey County Flood Control and Water Conservation District sampled ground-water quality monthly during the irrigation season to monitor seawater intrusion. Once each year (during the summer), the wells in this network were monitored for chlorides, specific conductance, and nitrates. Additional samples were collected from each well once every 5 years for complete mineral analysis. The California Department of Health Services, the California American Water Company, the U.S. Army Health Service at Ford Ord, and the Monterey Peninsula Water Management District also monitored ground-water quality in wells in the study areas. 
Well coverage for the ground-water- quality networks was dense in the seawater- intrusion area north of Moss Landing, but sparse in the rest of the study areas. During this study, 54 sections were identified as not monitored for water quality.
Wang, Li-wen; Wei, Ya-xing; Niu, Zheng
2008-06-01
1 km MODIS NDVI time series data, combined with decision tree classification, supervised classification and unsupervised classification, were used to classify the land cover of Qinghai Province into 14 classes. In our classification system, sparse grassland and sparse shrub were emphasized, and their spatial distribution locations were labeled. From a digital elevation model (DEM) of Qinghai Province, five elevation belts were derived, and geographic information system (GIS) software was used to analyze vegetation cover variation across the elevation belts. Our results show that vegetation cover in Qinghai Province improved over the five-year study period. The vegetation cover area increased from 370047 km2 in 2001 to 374576 km2 in 2006, and the vegetation cover rate increased by 0.63%. Among the five elevation belts, the vegetation cover ratio of the high mountain belt is the highest (67.92%). The area of middle-density grassland in the high mountain belt is the largest (94003 km2), and the increase in dense grassland area in the high mountain belt is the greatest (1280 km2). Over the five years, the biggest change was the conversion of sparse grassland to middle-density grassland in the high mountain belt, covering 15931 km2.
Analysing Local Sparseness in the Macaque Brain Network
Singh, Raghavendra; Nagar, Seema; Nanavati, Amit A.
2015-01-01
Understanding the network structure of long distance pathways in the brain is a necessary step towards developing an insight into the brain’s function, organization and evolution. Dense global subnetworks of these pathways have often been studied, primarily due to their functional implications. Instead we study sparse local subnetworks of the pathways to establish the role of a brain area in enabling shortest path communication between its non-adjacent topological neighbours. We propose a novel metric to measure the topological communication load on a vertex due to its immediate neighbourhood, and show that in terms of distribution of this local communication load, a network of Macaque long distance pathways is substantially different from other real world networks and random graph models. Macaque network contains the entire range of local subnetworks, from star-like networks to clique-like networks, while other networks tend to contain a relatively small range of subnetworks. Further, sparse local subnetworks in the Macaque network are not only found across topographical super-areas, e.g., lobes, but also within a super-area, arguing that there is conservation of even relatively short-distance pathways. To establish the communication role of a vertex we borrow the concept of brokerage from social science, and present the different types of brokerage roles that brain areas play, highlighting that not only the thalamus, but also cingulate gyrus and insula often act as “relays” for areas in the neocortex. These and other analysis of communication load and roles of the sparse subnetworks of the Macaque brain provide new insights into the organisation of its pathways. PMID:26437077
Zhang, Shang; Dong, Yuhan; Fu, Hongyan; Huang, Shao-Lun; Zhang, Lin
2018-02-22
The miniaturization of spectrometers can broaden the application areas of spectrometry, which has huge academic and industrial value. Among various miniaturization approaches, filter-based miniaturization is a promising implementation that utilizes broadband filters with distinct transmission functions. Mathematically, filter-based spectral reconstruction can be modeled as solving a system of linear equations. In this paper, we propose an algorithm for spectral reconstruction based on sparse optimization and dictionary learning. To verify the feasibility of the reconstruction algorithm, we design and implement a simple prototype of a filter-based miniature spectrometer. The experimental results demonstrate that sparse optimization is well suited to spectral reconstruction whether or not the spectra are directly sparse; for non-directly sparse spectra, sparsity can be enhanced by dictionary learning. In conclusion, the proposed approach has a bright application prospect in fabricating a practical miniature spectrometer.
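In this formulation, each measurement is an inner product of the unknown spectrum with one filter's transmission function, so reconstruction is a sparse linear inverse problem. The sketch below solves it with iterative soft thresholding (ISTA); the random sensing matrix stands in for calibrated filter transmissions, and the dictionary-learning stage for non-sparse spectra is omitted.

    import numpy as np

    def ista(A, b, lam=1e-3, n_iter=2000):
        # Iterative soft thresholding for min 0.5*||A s - b||^2 + lam*||s||_1
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        s = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = s - (A.T @ (A @ s - b)) / L
            s = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
        return s

    rng = np.random.default_rng(6)
    n_filters, n_bands = 32, 128
    A = rng.standard_normal((n_filters, n_bands))   # stand-in filter matrix
    s_true = np.zeros(n_bands)
    s_true[[20, 60, 95]] = [1.0, 0.7, 0.4]          # sparse test spectrum
    b = A @ s_true                                  # filtered measurements
    print(np.linalg.norm(ista(A, b) - s_true))      # small reconstruction error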
Pan, Minghao; Yang, Yongmin; Guan, Fengjiao; Hu, Haifeng; Xu, Hailong
2017-01-01
The accurate monitoring of blade vibration under operating conditions is essential in turbo-machinery testing. Blade tip timing (BTT) is a promising non-contact technique for the measurement of blade vibrations. However, BTT sampling data are inherently under-sampled and contaminated with several measurement uncertainties. How to recover the frequency spectra of blade vibrations by processing these under-sampled, biased signals is a bottleneck problem. A novel BTT signal processing method for alleviating measurement uncertainties in the recovery of multi-mode blade vibration frequency spectra is proposed in this paper. The method can be divided into four phases. First, a single measurement vector model is built by exploiting the fact that blade vibration signals are sparse in the frequency spectrum. Secondly, the uniqueness of the nonnegative sparse solution is studied to achieve the vibration frequency spectrum. Thirdly, typical sources of BTT measurement uncertainty are quantitatively analyzed. Finally, an improved vibration frequency spectrum recovery method is proposed to obtain a guaranteed level of sparse solution when measurement results are biased. Simulations and experiments are performed to prove the feasibility of the proposed method. The most outstanding advantage is that this method can prevent the recovered multi-mode vibration spectra from being affected by BTT measurement uncertainties without increasing the probe count. PMID:28758952
Szyda, Joanna; Liu, Zengting; Zatoń-Dobrowolska, Magdalena; Wierzbicki, Heliodor; Rzasa, Anna
2008-01-01
We analysed data from a selective DNA pooling experiment with 130 individuals of the arctic fox (Alopex lagopus) originating from 2 types differing in body size. The association between alleles of 6 selected unlinked molecular markers and body size was tested using univariate and multinomial logistic regression models, applying odds ratios and test statistics from the power divergence family. Due to the small sample size and the resulting sparseness of the data table, we could not rely on the asymptotic distributions of the tests in hypothesis testing. Instead, we tried to account for data sparseness by (i) modifying the confidence intervals of the odds ratio; (ii) using a normal approximation of the asymptotic distribution of the power divergence tests with different approaches for calculating the moments of the statistics; and (iii) assessing P values empirically, based on bootstrap samples. As a result, a significant association was observed for 3 markers. Furthermore, we used simulations to assess the validity of the normal approximation of the asymptotic distribution of the test statistics under the conditions of small and sparse samples.
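Empirical p-values of the kind used in step (iii) can be sketched generically: resample under the null, recompute the test statistic, and take the exceedance fraction. The example below does this for a log odds ratio in a 2x2 allele-count table via a parametric bootstrap; the table values and the 0.5 sparseness correction are illustrative, not the study's data.

    import numpy as np

    rng = np.random.default_rng(7)

    def empirical_p(table, n_boot=10000):
        # Two-sided empirical p-value for association in a 2x2 count table,
        # resampling under independence of rows and columns
        n = table.sum()
        p0 = np.outer(table.sum(axis=1) / n, table.sum(axis=0) / n).ravel()

        def log_or(t):                       # 0.5 added as a sparseness correction
            t = t + 0.5
            return np.log(t[..., 0, 0] * t[..., 1, 1]
                          / (t[..., 0, 1] * t[..., 1, 0]))

        obs = log_or(table.astype(float))
        boot = rng.multinomial(n, p0, size=n_boot).reshape(n_boot, 2, 2)
        return np.mean(np.abs(log_or(boot.astype(float))) >= abs(obs))

    table = np.array([[18, 4], [9, 12]])     # allele counts in two pools
    print(empirical_p(table))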
Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering.
Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus
2014-12-01
This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.
Galaxy redshift surveys with sparse sampling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, Chi-Ting; Wullstein, Philipp; Komatsu, Eiichiro
2013-12-01
Survey observations of the three-dimensional locations of galaxies are a powerful approach to measure the distribution of matter in the universe, which can be used to learn about the nature of dark energy, physics of inflation, neutrino masses, etc. A competitive survey, however, requires a large volume (e.g., V_survey ∼ 10 Gpc³) to be covered, and thus tends to be expensive. A "sparse sampling" method offers a more affordable solution to this problem: within a survey footprint covering a given survey volume, V_survey, we observe only a fraction of the volume. The distribution of observed regions should be chosen such that their separation is smaller than the length scale corresponding to the wavenumber of interest. Then one can recover the power spectrum of galaxies with precision expected for a survey covering a volume of V_survey (rather than the volume of the sum of observed regions) with the number density of galaxies given by the total number of observed galaxies divided by V_survey (rather than the number density of galaxies within an observed region). We find that regularly-spaced sampling yields an unbiased power spectrum with no window function effect, and deviations from regularly-spaced sampling, which are unavoidable in realistic surveys, introduce calculable window function effects and increase the uncertainties of the recovered power spectrum. On the other hand, we show that the two-point correlation function (pair counting) is not affected by sparse sampling. While we discuss the sparse sampling method within the context of the forthcoming Hobby-Eberly Telescope Dark Energy Experiment, the method is general and can be applied to other galaxy surveys.
Sample-Starved Large Scale Network Analysis
2016-05-05
As reported in our journal publication: G. Marjanovic and A. O. Hero, "l0 Sparse Inverse Covariance Estimation," IEEE Transactions on Signal Processing, vol. 63, no. 12, pp. 3218-3231, May 2015.
Two-dimensional sparse wavenumber recovery for guided wavefields
NASA Astrophysics Data System (ADS)
Sabeti, Soroosh; Harley, Joel B.
2018-04-01
The multi-modal and dispersive behavior of guided waves is often characterized by their dispersion curves, which describe their frequency-wavenumber behavior. In prior work, compressive sensing based techniques, such as sparse wavenumber analysis (SWA), have been capable of recovering dispersion curves from limited data samples. A major limitation of SWA, however, is the assumption that the structure is isotropic. As a result, SWA fails when applied to composites and other anisotropic structures. There have been efforts to address this issue in the literature, but they either are not easily generalizable or do not sufficiently express the data. In this paper, we enhance the existing approaches by employing a two-dimensional wavenumber model to account for direction-dependent velocities in anisotropic media. We integrate this model with tools from compressive sensing to reconstruct a wavefield from incomplete data. Specifically, we create a modified two-dimensional orthogonal matching pursuit algorithm that takes an undersampled wavefield image, with specified unknown elements, and determines its sparse wavenumber characteristics. We then recover the entire wavefield from the sparse representations obtained with our small number of data samples.
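For reference, a standard one-dimensional orthogonal matching pursuit kernel of the kind the modified two-dimensional algorithm extends is sketched below; the two-dimensional wavenumber dictionary and the wavefield sampling details are omitted, and the dictionary here is a generic random one.

    import numpy as np

    def omp(A, b, k):
        # Orthogonal matching pursuit: greedily select k columns of A,
        # re-fitting by least squares on the selected support each step
        r, support = b.copy(), []
        for _ in range(k):
            j = int(np.argmax(np.abs(A.T @ r)))    # most correlated atom
            if j not in support:
                support.append(j)
            coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
            r = b - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    rng = np.random.default_rng(8)
    A = rng.standard_normal((50, 200))
    A /= np.linalg.norm(A, axis=0)                 # unit-norm atoms
    x_true = np.zeros(200)
    x_true[[10, 50, 150]] = [1.0, -0.8, 0.6]
    print(np.nonzero(omp(A, A @ x_true, 3))[0])    # expect [10, 50, 150]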
WEST AND EAST PALISADES ROADLESS AREAS, IDAHO AND WYOMING.
Oriel, Steven S.; Benham, John R.
1984-01-01
Studies of the West and East Palisades Roadless Areas, which lie within the Idaho-Wyoming thrust belt, document structures, reservoir formations, source beds, and thermal maturities comparable to those in producing oil and gas fields farther south in the belt. Therefore, the areas are highly favorable for the occurrence of oil and gas. Phosphate beds of appropriate grade within the roadless areas are thinner and less accessible than those being mined from higher thrust sheets to the southwest; however, they contain 98 million tons of inferred phosphate rock resources in areas of substantiated phosphate resource potential. Sparsely distributed thin coal seams occur in the roadless areas. Although moderately pure limestone is present, it is available from other sources closer to markets. Geochemical anomalies from stream-sediment and rock samples for silver, copper, molybdenum, and lead occur in the roadless areas, but they offer little promise for the occurrence of metallic mineral resources. A possible geothermal resource is unproven, despite thermal phenomena at nearby sites.
Model's sparse representation based on reduced mixed GMsFE basis methods
NASA Astrophysics Data System (ADS)
Jiang, Lijian; Li, Qiuqi
2017-06-01
In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application of such elliptic PDEs is flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is an accurate and efficient approach that solves the flow problem on a coarse grid and obtains a velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts computational efficiency. In order to overcome this difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed from the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full-order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of the parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous media is simulated with the proposed sparse representation method.
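Of the two sampling strategies, proper orthogonal decomposition is the easier to sketch: collect snapshots of the parameter-dependent field at training samples, take an SVD, and truncate. The toy parametric field below is ours; the greedy sample selection and the GMsFE basis construction are omitted.

    import numpy as np

    rng = np.random.default_rng(9)

    def solve(theta, x):
        # Toy parametric field standing in for a GMsFE basis function/solution
        return np.sin(np.pi * x) / (1.0 + theta * x)

    x = np.linspace(0, 1, 200)
    thetas = rng.random(50)                              # training parameter samples
    S = np.stack([solve(t, x) for t in thetas], axis=1)  # 200 x 50 snapshot matrix

    # POD: left singular vectors give a parameter-independent reduced basis
    U, sing, _ = np.linalg.svd(S, full_matrices=False)
    energy = np.cumsum(sing ** 2) / np.sum(sing ** 2)
    r = int(np.searchsorted(energy, 0.9999) + 1)         # truncation rank
    basis = U[:, :r]
    print(r, np.linalg.norm(S - basis @ (basis.T @ S)) / np.linalg.norm(S))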
Comparison of various techniques for calibration of AIS data
NASA Technical Reports Server (NTRS)
Roberts, D. A.; Yamaguchi, Y.; Lyon, R. J. P.
1986-01-01
The Airborne Imaging Spectrometer (AIS) samples a spectral region that is strongly influenced by decreasing solar irradiance at longer wavelengths and by strong atmospheric absorptions. Four techniques were investigated as a means of removing these two features: the Log Residual, the Least Upper Bound Residual, the Flat Field Correction, and calibration using field reflectance measurements. Of the four techniques, field reflectance calibration proved superior in terms of noise and normalization. Of the other three, the Log Residual was superior when applied to areas that did not contain one dominant cover type; in heavily vegetated areas, it proved ineffective. After anomalously bright data values were removed, the Least Upper Bound Residual proved almost as effective as the Log Residual in sparsely vegetated areas and much more effective in heavily vegetated areas. Of all the techniques, the Flat Field Correction was the noisiest.
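The Log Residual itself is essentially a double-centering in log space (in the spirit of Green and Craig): each value is divided by its pixel's spectral geometric mean and its band's scene geometric mean, cancelling multiplicative illumination and irradiance factors. A sketch with synthetic data:

    import numpy as np

    def log_residual(dn):
        # Log Residual correction: remove multiplicative per-pixel
        # (illumination/topography) and per-band (irradiance/atmosphere)
        # factors; dn has shape (n_pixels, n_bands), all values > 0
        logd = np.log(dn)
        pixel_mean = logd.mean(axis=1, keepdims=True)   # spectral mean per pixel
        band_mean = logd.mean(axis=0, keepdims=True)    # scene mean per band
        return np.exp(logd - pixel_mean - band_mean + logd.mean())

    rng = np.random.default_rng(10)
    reflectance = 0.2 + 0.6 * rng.random((100, 32))
    illum = 0.5 + rng.random((100, 1))                  # per-pixel gain
    solar = np.linspace(2.0, 0.5, 32)[None, :]          # per-band irradiance
    dn = reflectance * illum * solar
    print(np.corrcoef(log_residual(dn)[0], reflectance[0])[0, 1])  # near 1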
Compressed Sensing for Metrics Development
NASA Astrophysics Data System (ADS)
McGraw, R. L.; Giangrande, S. E.; Liu, Y.
2012-12-01
Models by their very nature tend to be sparse, in the sense that they are designed, with a few optimally selected key parameters, to provide simple yet faithful representations of a complex observational dataset or computer simulation output. This paper seeks to apply methods from compressed sensing (CS), a new area of applied mathematics currently undergoing very rapid development (see, for example, Candes et al., 2006), to FASTER's needs for new approaches to model evaluation and metrics development. The CS approach is illustrated for a time series generated using a few-parameter (i.e., sparse) model. A seemingly incomplete set of measurements, taken at just a few random sampling times, is then used to recover the hidden model parameters. Remarkably, there is a sharp transition in the number of required measurements, beyond which both the model parameters and the time series are recovered exactly. Applications to data compression, data sampling/collection strategies, and the development of metrics for model evaluation by comparison with observation (e.g., evaluation of model predictions of cloud fraction using cloud radar observations) are presented and discussed in the context of the CS approach. Cited reference: Candes, E. J., Romberg, J., and Tao, T. (2006), Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Transactions on Information Theory, 52, 489-509.
Open-target sparse sensing of biological agents using DNA microarray
2011-01-01
Background Current biosensors are designed to target and react to specific nucleic acid sequences or structural epitopes. These 'target-specific' platforms require creation of new physical capture reagents when new organisms are targeted. An 'open-target' approach to DNA microarray biosensing is proposed and substantiated using laboratory generated data. The microarray consisted of 12,900 25 bp oligonucleotide capture probes derived from a statistical model trained on randomly selected genomic segments of pathogenic prokaryotic organisms. Open-target detection of organisms was accomplished using a reference library of hybridization patterns for three test organisms whose DNA sequences were not included in the design of the microarray probes. Results A multivariate mathematical model based on partial least squares regression (PLSR) was developed to detect the presence of three test organisms in mixed samples. When all 12,900 probes were used, the model correctly detected the signature of three test organisms in all mixed samples (mean R2 = 0.76, CI = 0.95), with a 6% false positive rate. A sampling algorithm was then developed to sparsely sample the probe space for a minimal number of probes required to capture the hybridization imprints of the test organisms. The PLSR detection model was capable of correctly identifying the presence of the three test organisms in all mixed samples using only 47 probes (mean R2 = 0.77, CI = 0.95) with nearly 100% specificity. Conclusions We conceived an 'open-target' approach to biosensing, and hypothesized that a relatively small, non-specifically designed, DNA microarray is capable of identifying the presence of multiple organisms in mixed samples. Coupled with a mathematical model applied to laboratory generated data, and sparse sampling of capture probes, the prototype microarray platform was able to capture the signature of each organism in all mixed samples with high sensitivity and specificity. It was demonstrated that this new approach to biosensing closely follows the principles of sparse sensing. PMID:21801424
Downes, Michelle R; Gibson, Eli; Sykes, Jenna; Haider, Masoom; van der Kwast, Theo H; Ward, Aaron
2016-11-01
The study aimed to determine the relationship between the T2-weighted magnetic resonance imaging (MRI) signal and histologic sub-patterns in prostate cancer areas with different Gleason grades. MR images of prostates (n = 25) were obtained prior to radical prostatectomy. The prostates were processed as whole-mount specimens, and the tumors and peripheral zone were annotated digitally by two pathologists. Gleason grade 3 was the most prevalent grade and was subdivided into packed, intermediate, and sparse based on the gland-to-stroma ratio. Large cribriform, intraductal carcinoma, and small cribriform glands (grade 4 group) were annotated separately but grouped together for statistical analysis. The log MRI signal intensity for each contoured region (n = 809) was measured, and pairwise comparisons were performed using the open-source software R version 3.0.1. The packed grade 3 sub-pattern has a significantly lower MRI intensity than the grade 4 group (P < 0.00001). Sparse grade 3 has a significantly higher MRI intensity than the packed grade 3 sub-pattern (P < 0.0001). No significant difference in MRI intensity was observed between the Gleason grade 4 group and the sparse sub-pattern grade 3 group (P = 0.54). In multivariable analysis adjusting for peripheral zone, the P values remained significant (packed grade 3 group vs grade 4 group, P < 0.001; and sparse grade 3 sub-pattern vs packed grade 3 sub-pattern, P < 0.001). This study demonstrated that the T2-weighted MRI signal depends on histologic sub-patterns within Gleason grade 3 and 4 cancers, which may have implications for directed biopsy sampling and patient management. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
Moreo, Michael T.; Andraski, Brian J.; Garcia, C. Amanda
2017-08-29
This report documents methodology and results of a study to evaluate groundwater discharge by evapotranspiration (GWET) in sparsely vegetated areas of Amargosa Desert and improve understanding of hydrologic-continuum processes controlling groundwater discharge. Evapotranspiration and GWET rates were computed and characterized at three sites over 2 years using a combination of micrometeorological, unsaturated zone, and stable-isotope measurements. One site (Amargosa Flat Shallow [AFS]) was in a sparse and isolated area of saltgrass (Distichlis spicata) where the depth to groundwater was 3.8 meters (m). The second site (Amargosa Flat Deep [AFD]) was in a sparse cover of predominantly shadscale (Atriplex confertifolia) where the depth to groundwater was 5.3 m. The third site (Amargosa Desert Research Site [ADRS]), selected as a control site where GWET is assumed to be zero, was located in sparse vegetation dominated by creosote bush (Larrea tridentata) where the depth to groundwater was 110 m.Results indicated that capillary rise brought groundwater to within 0.9 m (at AFS) and 3 m (at AFD) of land surface, and that GWET rates were largely controlled by the slow but relatively persistent upward flow of water through the unsaturated zone in response to atmospheric-evaporative demands. Greater GWET at AFS (50 ± 20 millimeters per year [mm/yr]) than at AFD (16 ± 15 mm/yr) corresponded with its shallower depth to the capillary fringe and constantly higher soil-water content. The stable-isotope dataset for hydrogen (δ2H) and oxygen (δ18O) illustrated a broad range of plant-water-uptake scenarios. The AFS saltgrass and AFD shadscale responded to changing environmental conditions and their opportunistic water use included the time- and depth-variable uptake of unsaturated-zone water derived from a combination of groundwater and precipitation. These results can be used to estimate GWET in other areas of Amargosa Desert where hydrologic conditions are similar.
NASA Astrophysics Data System (ADS)
Wang, Yihan; Lu, Tong; Wan, Wenbo; Liu, Lingling; Zhang, Songhe; Li, Jiao; Zhao, Huijuan; Gao, Feng
2018-02-01
To fully realize the potential of photoacoustic tomography (PAT) in preclinical and clinical applications, rapid measurements and robust reconstructions are needed. Sparse-view measurements have been adopted effectively to accelerate data acquisition. However, since reconstruction from sparse-view sampling data is challenging, both effective measurement and appropriate reconstruction must be taken into account. In this study, we present an iterative sparse-view PAT reconstruction scheme in which a virtual parallel-projection concept matched to the proposed measurement condition helps to realize the "compressive sensing" procedure of the reconstruction, while spatially adaptive filtering, which fully exploits the a priori information of mutually similar blocks in natural images, effectively recovers the partially unknown coefficients in the transformed domain. Sparse-view PAT images can therefore be reconstructed with higher quality than is obtained with the universal back-projection (UBP) algorithm in the same sparse-view cases. The proposed approach has been validated by simulation experiments and exhibits desirable performance in image fidelity even with a small number of measuring positions.
Heymann, R; Weitmann, K; Weiss, S; Thierfelder, D; Flessa, S; Hoffmann, W
2009-07-01
This study examines and compares the frequency of home visits by general practitioners in regions with lower and higher population densities. The discussion centres on the hypothesis that the number of home visits in rural and remote areas with a low population density is, in fact, higher than in urbanised areas with a higher population density; the average age of the population is considered in both cases. The communities of Mecklenburg-Western Pomerania were aggregated into postal code regions, on which the analysis is based. The average frequency of home visits per 100 inhabitants/km2 was calculated via a bivariate, linear regression model with the population density and the average age of the postal code region as independent variables. The results are based on billing data for the year 2006 as provided by the Association of Statutory Health Insurance Physicians of Mecklenburg-Western Pomerania. In a second step, a variable clustering the postal codes of urbanised areas was added to a multivariate model. The hypothesis of a negative correlation between the frequency of home visits and the population density of the areas examined cannot be confirmed for Mecklenburg-Western Pomerania. Following the dichotomisation of the postal code regions into sparsely and densely populated areas, only the very sparsely populated postal code regions (fewer than 100 inhabitants/km2) show a tendency towards a higher frequency of home visits. Overall, the frequency of home visits in sparsely populated postal code regions is 28.9% higher than in the densely populated postal code regions (more than 100 inhabitants/km2), although the number of general practitioners is approximately the same in both groups. In part, this association seems to be confirmed by a positive correlation between the average age in the individual postal code regions and the number of home visits carried out in the area. As calculated on the basis of the data at hand, only the very sparsely populated areas with a still gradually decreasing population show a tendency towards a higher frequency of home visits. According to the 2006 data, the number of home visits remains high in sparsely populated areas, and it may increase in the near future as the number of general practitioners in these areas gradually decreases while the number of immobile and older inhabitants increases.
Visual saliency detection based on in-depth analysis of sparse representation
NASA Astrophysics Data System (ADS)
Wang, Xin; Shen, Siqiu; Ning, Chen
2018-03-01
Visual saliency detection has been receiving great attention in recent years since it can facilitate a wide range of applications in computer vision. A variety of saliency models have been proposed based on different assumptions, among which saliency detection via sparse representation is one of the newly arisen approaches. However, most existing sparse representation-based saliency detection methods utilize only partial characteristics of sparse representation and lack in-depth analysis, so their detection performance may be limited. Motivated by this, this paper proposes an algorithm for detecting visual saliency based on in-depth analysis of sparse representation. A number of discriminative dictionaries are first learned with randomly sampled image patches by means of inner product-based dictionary atom classification. Then, the input image is partitioned into many image patches, and these patches are classified into salient and nonsalient ones based on in-depth analysis of the sparse coding coefficients. Afterward, sparse reconstruction errors are calculated for the salient and nonsalient patch sets. By investigating the sparse reconstruction errors, the most salient atoms, which tend to come from the most salient region, are screened out and removed from the discriminative dictionaries. Finally, an effective method is exploited for saliency map generation with the reduced dictionaries. Comprehensive evaluations on publicly available datasets and comparisons with some state-of-the-art approaches demonstrate the effectiveness of the proposed algorithm.
NASA Astrophysics Data System (ADS)
Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.
2011-12-01
Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of the parameters. Recently, the sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation or integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. Then, the desired probability density function (PDF) of each prediction is approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions but avoids the disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling. The results show that our method attains the same accuracy much more efficiently than traditional MCMC.
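A minimal sketch of the surrogate-plus-quasi-Monte-Carlo workflow described above, under loudly toy assumptions: a dense regular-grid interpolant stands in for true sparse grid interpolation, the forward model and Gaussian likelihood are invented, and scipy >= 1.7 is assumed for the qmc module.

import numpy as np
from scipy.interpolate import RegularGridInterpolator
from scipy.stats import norm, qmc

def forward_model(t1, t2):                      # toy stand-in for the expensive model
    return np.sin(t1) + 0.5 * t2 ** 2

# 1) Build a cheap surrogate on a grid (a dense grid stands in for a sparse grid)
g = np.linspace(-2.0, 2.0, 33)
T1, T2 = np.meshgrid(g, g, indexing="ij")
surrogate = RegularGridInterpolator((g, g), forward_model(T1, T2))

# 2) Quasi-Monte Carlo samples of the parameter space instead of MCMC chains
theta = qmc.Sobol(d=2, seed=0).random(2 ** 14) * 4.0 - 2.0
pred = surrogate(theta)

# 3) Surrogate posterior density at each sample (toy Gaussian likelihood x prior)
obs, noise = 0.7, 0.2
weight = norm.pdf(pred, loc=obs, scale=noise) * norm.pdf(theta).prod(axis=1)

# 4) Accumulate weighted prediction values into a PDF estimate
pdf, edges = np.histogram(pred, bins=50, weights=weight, density=True)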
Group-sparse representation with dictionary learning for medical image denoising and fusion.
Li, Shutao; Yin, Haitao; Fang, Leyuan
2012-12-01
Recently, sparse representation has attracted a lot of interest in various areas. However, the standard sparse representation does not consider the intrinsic structure, i.e., that the nonzero elements occur in clusters, called group sparsity. Furthermore, there is no dictionary learning method for group sparse representation that considers the geometrical structure of the space spanned by the atoms. In this paper, we propose a novel dictionary learning method, called Dictionary Learning with Group Sparsity and Graph Regularization (DL-GSGR). First, the geometrical structure of the atoms is modeled as a graph regularization. Then, combining group sparsity and graph regularization, DL-GSGR is presented and solved by alternating group sparse coding and dictionary updating. In this way, the group coherence of the learned dictionary can be made small enough that any signal can be group sparse coded effectively. Finally, group sparse representation with DL-GSGR is applied to 3-D medical image denoising and image fusion. Specifically, in 3-D medical image denoising, a 3-D processing mechanism (using the similarity among nearby slices) and temporal regularization (to preserve the correlations across nearby slices) are exploited. The experimental results on 3-D image denoising and image fusion demonstrate the superiority of our proposed denoising and fusion approaches.
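The block (group) soft-thresholding operator is the core proximal step behind group sparse coding of the kind used here; the sketch below is a generic illustration with an invented group layout, not the DL-GSGR algorithm itself.

import numpy as np

def group_soft_threshold(x, groups, lam):
    """Shrink each coefficient group by its l2 norm; whole groups vanish at once."""
    out = np.zeros_like(x)
    for g in groups:
        nrm = np.linalg.norm(x[g])
        if nrm > lam:
            out[g] = (1.0 - lam / nrm) * x[g]   # block shrinkage
    return out

x = np.array([0.9, 0.8, 0.05, 0.02, 1.5, -1.2])
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]
print(group_soft_threshold(x, groups, lam=0.5))  # the weak middle group collapses to zero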
Compressive sampling by artificial neural networks for video
NASA Astrophysics Data System (ADS)
Szu, Harold; Hsu, Charles; Jenkins, Jeffrey; Reinhardt, Kitt
2011-06-01
We describe a smart surveillance strategy for handling novelty changes. Current sensors tend to keep everything, redundant or not. The Human Visual System's Hubel-Wiesel (wavelet) edge-detection mechanism pays attention to changes in movement, which naturally produces organized sparseness because a stagnant edge is not reported to the brain's visual cortex by retinal neurons. Sparseness is defined here as an ordered set of ones (movement or not) relative to zeros; these sets can be pseudo-orthogonal among themselves and are thus suited for fault-tolerant storage and retrieval by means of Associative Memory (AM). The firing is sparse at the change locations. Unlike the purely random sparse masks adopted in medical Compressive Sensing, these organized ones have the additional benefit of using the image changes to build retrievable graphical indexes. We call this organized sparseness Compressive Sampling: sensing while skipping over redundancy, without altering the original image. We illustrate with video the survival tactics that animals roaming the Earth use daily: they acquire nothing but the space-time changes that are important to specific prey-predator relationships. We note the similarity between mathematical Compressive Sensing and this biological mechanism used for survival. We have designed a hardware implementation of the Human Visual System's Compressive Sampling scheme. To speed it up further, our mixed-signal circuit design builds frame differencing into on-chip processing hardware. A CMOS transconductance amplifier is designed to generate a linear current output from a pair of differential input voltages taken from two photon detectors, one holding the previous value and the other the subsequent value, for change detection ("write" the synaptic weight by Hebbian outer products; "read" by inner product and pointwise nonlinear threshold) to localize and track threat targets.
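A software analogue of the analog frame-differencing circuit described above might look like the following minimal sketch; the threshold, frame sizes, and data are illustrative assumptions.

import numpy as np

def change_mask(prev_frame, next_frame, threshold=0.1):
    """1 where the scene changed, 0 where it is stagnant (and thus not reported)."""
    diff = np.abs(next_frame.astype(float) - prev_frame.astype(float))
    return (diff > threshold).astype(np.uint8)

rng = np.random.default_rng(0)
prev_frame = rng.random((120, 160))
next_frame = prev_frame.copy()
next_frame[40:60, 70:90] += 0.5              # a moving object enters this block
mask = change_mask(prev_frame, next_frame)
print("fraction of pixels firing:", mask.mean())   # organized sparseness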
Joseph, John; Sharif, Hatim O; Sunil, Thankam; Alamgir, Hasanat
2013-07-01
The adverse health effects of high concentrations of ground-level ozone are well-known, but estimating exposure is difficult due to the sparseness of urban monitoring networks. This sparseness discourages the reservation of a portion of the monitoring stations for validation of interpolation techniques precisely when the risk of overfitting is greatest. In this study, we test a variety of simple spatial interpolation techniques for 8-h ozone with thousands of randomly selected subsets of data from two urban areas with monitoring stations sufficiently numerous to allow for true validation. Results indicate that ordinary kriging with only the range parameter calibrated in an exponential variogram is the generally superior method, and yields reliable confidence intervals. Sparse data sets may contain sufficient information for calibration of the range parameter even if the Moran I p-value is close to unity. R script is made available to apply the methodology to other sparsely monitored constituents.
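In the spirit of the method above, here is a minimal hand-rolled ordinary-kriging sketch with an exponential covariance whose range is the only calibrated parameter (by leave-one-out grid search); station locations, values, and candidate ranges are hypothetical, and the paper's own R implementation is not reproduced here.

import numpy as np

def krige(xy, z, targets, rnge, sill=1.0):
    """Ordinary kriging with exponential covariance C(h) = sill * exp(-h / rnge)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.block([[sill * np.exp(-d / rnge), np.ones((n, 1))],
                  [np.ones((1, n)), np.zeros((1, 1))]])
    d0 = np.linalg.norm(targets[:, None, :] - xy[None, :, :], axis=-1)
    b = np.vstack([sill * np.exp(-d0.T / rnge), np.ones((1, len(targets)))])
    w = np.linalg.solve(A, b)                 # kriging weights + Lagrange multiplier
    return w[:n].T @ z

def loo_rmse(xy, z, rnge):
    """Leave-one-out error used to calibrate the range parameter."""
    errs = [krige(np.delete(xy, i, 0), np.delete(z, i), xy[i:i + 1], rnge)[0] - z[i]
            for i in range(len(z))]
    return float(np.sqrt(np.mean(np.square(errs))))

rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 30.0, (12, 2))          # 12 hypothetical station locations (km)
z = np.sin(xy[:, 0] / 10.0) + 0.1 * rng.normal(size=12)   # synthetic 8-h ozone values
candidates = [1.0, 2.0, 5.0, 10.0, 20.0]      # candidate range parameters (km)
best = min(candidates, key=lambda r: loo_rmse(xy, z, r))
print("calibrated range:", best)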
Tang, Shiming; Zhang, Yimeng; Li, Zhihao; Li, Ming; Liu, Fang; Jiang, Hongfei; Lee, Tai Sing
2018-04-26
One general principle of sensory information processing is that the brain must optimize efficiency by reducing the number of neurons that process the same information. The sparseness of the sensory representations in a population of neurons reflects the efficiency of the neural code. Here, we employ large-scale two-photon calcium imaging to examine the responses of a large population of neurons within the superficial layers of area V1 with single-cell resolution, while simultaneously presenting a large set of natural visual stimuli, to provide the first direct measure of the population sparseness in awake primates. The results show that only 0.5% of neurons respond strongly to any given natural image, indicating a ten-fold increase in the inferred sparseness over previous measurements. These population activities are nevertheless necessary and sufficient to discriminate visual stimuli with high accuracy, suggesting that the neural code in the primary visual cortex is both super-sparse and highly efficient.
Sparse partial least squares regression for simultaneous dimension reduction and variable selection
Chun, Hyonho; Keleş, Sündüz
2010-01-01
Partial least squares regression has been an alternative to ordinary least squares for handling multicollinearity in several areas of scientific research since the 1960s. It has recently gained much attention in the analysis of high dimensional genomic data. We show that known asymptotic consistency of the partial least squares estimator for a univariate response does not hold with the very large p and small n paradigm. We derive a similar result for a multivariate response regression with partial least squares. We then propose a sparse partial least squares formulation which aims simultaneously to achieve good predictive performance and variable selection by producing sparse linear combinations of the original predictors. We provide an efficient implementation of sparse partial least squares regression and compare it with well-known variable selection and dimension reduction approaches via simulation experiments. We illustrate the practical utility of sparse partial least squares regression in a joint analysis of gene expression and genomewide binding data.
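The idea of producing sparse linear combinations of the original predictors can be sketched with a soft-thresholded first direction vector. This is a simplification under stated assumptions, not the paper's SPLS algorithm, and the data below are synthetic.

import numpy as np

def soft(v, lam):
    """Elementwise soft-thresholding operator."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_pls_direction(X, y, lam=0.3):
    """First sparse PLS-like direction: soft-threshold the covariance X'y, normalise."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    c = Xc.T @ yc                        # covariance between predictors and response
    w = soft(c, lam * np.abs(c).max())   # zero out weakly related predictors
    n = np.linalg.norm(w)
    return w / n if n > 0 else w

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))           # n = 40 samples, p = 200 predictors
y = X[:, :3] @ np.array([2.0, -1.0, 1.5]) + rng.normal(size=40)
w = sparse_pls_direction(X, y)
t = (X - X.mean(axis=0)) @ w             # latent component used for regression
print(np.count_nonzero(w), "of", len(w), "predictors selected")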
Sparse-sampling with time-encoded (TICO) stimulated Raman scattering for fast image acquisition
NASA Astrophysics Data System (ADS)
Hakert, Hubertus; Eibl, Matthias; Karpf, Sebastian; Huber, Robert
2017-07-01
Modern biomedical imaging modalities aim to provide researchers with multimodal contrast for deeper insight into the specimen under investigation. A very promising technique is stimulated Raman scattering (SRS) microscopy, which can unveil the chemical composition of a sample with very high specificity. Although the signal intensities are enhanced manifold, allowing faster image acquisition compared to standard Raman microscopy, there is a trade-off between specificity and acquisition speed. Commonly used SRS concepts either probe only very few Raman transitions, as tuning of the applied laser sources is complicated, or record whole spectra with a spectrometer-based setup. While the first approach is fast, it reduces the specificity; the spectrometer approach records whole spectra, including energy differences where no Raman information is present, which limits the acquisition speed. We therefore present a new approach based on the TICO-Raman concept, which we call sparse-sampling. The TICO sparse-sampling setup is fully electronically controllable and allows probing of only the characteristic peaks of a Raman spectrum instead of always acquiring a whole spectrum. By reducing the spectral points to the relevant peaks, the acquisition time can be greatly reduced compared to a uniformly, equidistantly sampled Raman spectrum, while the specificity and the signal-to-noise ratio (SNR) are maintained. Furthermore, all laser sources are completely fiber based. The synchronized detection enables a full resolution of the Raman signal, whereas the analog and digital balancing allows shot-noise-limited detection. First imaging results with polystyrene (PS) and polymethylmethacrylate (PMMA) beads confirm the advantages of TICO sparse-sampling: we achieved a pixel dwell time as low as 35 μs for an image differentiating both species. The mechanical properties of the voice coil stage used for scanning the sample currently limit even faster acquisition.
Derek B. Van Berkel; Bronwyn Rayfield; Sebastián Martinuzzi; Martin J. Lechowicz; Eric White; Kathleen P. Bell; Chris R. Colocousis; Kent F. Kovacs; Anita T. Morzillo; Darla K. Munroe; Benoit Parmentier; Volker C. Radeloff; Brian J. McGill
2018-01-01
Sparsely settled forests (SSF) are poorly studied, coupled natural and human systems involving rural communities in forest ecosystems that are neither largely uninhabited wildland nor forests on the edges of urban areas. We developed and applied a multidisciplinary approach to define, map, and examine changes in the spatial extent and structure of both the landscapes...
NASA Astrophysics Data System (ADS)
Zhou, Lifan; Chai, Dengfeng; Xia, Yu; Ma, Peifeng; Lin, Hui
2018-01-01
Phase unwrapping (PU) is one of the key processes in reconstructing the digital elevation model of a scene from its interferometric synthetic aperture radar (InSAR) data. It is known that two-dimensional (2-D) PU problems can be formulated as maximum a posteriori estimation of Markov random fields (MRFs). However, because the traditional MRF algorithm is usually defined on a rectangular grid, it fails easily if large parts of the wrapped data are dominated by noise, caused by large low-coherence areas or rapid topography variation. A PU solution based on a sparse MRF is presented to extend the traditional MRF algorithm to deal with sparse data, which allows the unwrapping of InSAR data dominated by high phase noise. To speed up the graph cuts algorithm for the sparse MRF, we designed dual elementary graphs and merged them to obtain the Delaunay triangle graph, which is used to minimize the energy function efficiently. Experiments on simulated and real data, compared with other existing algorithms, confirm the effectiveness of the proposed MRF approach, which suffers less from decorrelation effects caused by large low-coherence areas or rapid topography variation.
Random On-Board Pixel Sampling (ROPS) X-Ray Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhehui; Iaroshenko, O.; Li, S.
Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques, is expected to facilitate the development and applications of high-speed X-ray camera technology.
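The computational side of random pixel sampling can be sketched generically: keep a random 25% of pixels and recover the image by iterative soft-thresholding (ISTA) with a DCT sparsity prior. This stands in for, and is not, the paper's reconstruction pipeline; the synthetic "hit" image, sampling fraction, and thresholds are invented.

import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
n = 64
yy, xx = np.mgrid[:n, :n]
img = np.zeros((n, n))
for cy, cx in rng.integers(8, 56, size=(6, 2)):      # six synthetic X-ray "hits"
    img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 4.0)

mask = rng.random((n, n)) < 0.25                     # random 25% pixel sampling
meas = img * mask

rec = np.zeros_like(img)
for _ in range(200):                                 # ISTA iterations
    grad = mask * (rec - meas)                       # gradient of the data-fit term
    c = dctn(rec - 0.9 * grad, norm="ortho")
    c = np.sign(c) * np.maximum(np.abs(c) - 2e-3, 0) # soft-threshold DCT coefficients
    rec = idctn(c, norm="ortho")

print("relative error:", np.linalg.norm(rec - img) / np.linalg.norm(img))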
The dark matter of galaxy voids
NASA Astrophysics Data System (ADS)
Sutter, P. M.; Lavaux, Guilhem; Wandelt, Benjamin D.; Weinberg, David H.; Warren, Michael S.
2014-03-01
How do observed voids relate to the underlying dark matter distribution? To examine the spatial distribution of dark matter contained within voids identified in galaxy surveys, we apply Halo Occupation Distribution models representing sparsely and densely sampled galaxy surveys to a high-resolution N-body simulation. We compare these galaxy voids to voids found in the halo distribution, low-resolution dark matter and high-resolution dark matter. We find that voids at all scales in densely sampled surveys - and medium- to large-scale voids in sparse surveys - trace the same underdensities as dark matter, but they are larger in radius by ~20 per cent, they have somewhat shallower density profiles and they have centres offset by ~0.4 Rv rms. However, in void-to-void comparison we find that shape estimators are less robust to sampling, and the largest voids in sparsely sampled surveys suffer fragmentation at their edges. We find that voids in galaxy surveys always correspond to underdensities in the dark matter, though the centres may be offset. When this offset is taken into account, we recover almost identical radial density profiles between galaxies and dark matter. All mock catalogues used in this work are available at http://www.cosmicvoids.net.
Locality-preserving sparse representation-based classification in hyperspectral imagery
NASA Astrophysics Data System (ADS)
Gao, Lianru; Yu, Haoyang; Zhang, Bing; Li, Qingting
2016-10-01
This paper proposes to combine locality-preserving projections (LPP) and sparse representation (SR) for hyperspectral image classification. The LPP is first used to reduce the dimensionality of all the training and testing data by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold, where the high-dimensional data lies. Then, SR codes the projected testing pixels as sparse linear combinations of all the training samples to classify the testing pixels by evaluating which class leads to the minimum approximation error. The integration of LPP and SR represents an innovative contribution to the literature. The proposed approach, called locality-preserving SR-based classification, addresses the imbalance between high dimensionality of hyperspectral data and the limited number of training samples. Experimental results on three real hyperspectral data sets demonstrate that the proposed approach outperforms the original counterpart, i.e., SR-based classification.
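The SRC step described above can be sketched with a generic sparse solver (orthogonal matching pursuit from scikit-learn standing in for the paper's coder), assigning the class whose training atoms best reconstruct the test pixel. The synthetic data and the use of raw features instead of LPP projections are simplifying assumptions.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(D, labels, x, n_nonzero=10):
    """D: (d, N) dictionary of training pixels as columns; x: (d,) test pixel."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False).fit(D, x)
    alpha = omp.coef_
    errs = {c: np.linalg.norm(x - D @ np.where(labels == c, alpha, 0.0))
            for c in np.unique(labels)}        # class-wise approximation error
    return min(errs, key=errs.get)             # minimum-residual class

rng = np.random.default_rng(1)
d, per_class = 50, 20                          # 50 bands, 20 training pixels per class
D = np.hstack([rng.normal(m, 1.0, (d, per_class)) for m in (0.0, 2.0, -2.0)])
labels = np.repeat([0, 1, 2], per_class)
x = rng.normal(2.0, 1.0, d)                    # a test pixel drawn from class 1
print("predicted class:", src_classify(D, labels, x))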
Ingersoll, George P.; Turk, John T.; Mast, M. Alisa; Clow, David W.; Campbell, Donald H.; Bailey, Zelda C.
2002-01-01
Because regional-scale atmospheric deposition data in the Rocky Mountains are sparse, a program was designed by the U.S. Geological Survey to more thoroughly determine the quality of precipitation and to identify sources of atmospherically deposited pollution in a network of high-elevation sites. Depth-integrated samples of seasonal snowpacks at 52 sampling sites, in a network from New Mexico to Montana, were collected and analyzed each year since 1993. The results of the first 5 years (1993-97) of the program are discussed in this report. Spatial patterns in regional data have emerged from the geographically distributed chemical concentrations of ammonium, nitrate, and sulfate that clearly indicate that concentrations of these acid precursors in less developed areas of the region are lower than concentrations in the heavily developed areas. Snowpacks in northern Colorado that lie adjacent to both the highly developed Denver metropolitan area to the east and coal-fired powerplants to the west had the highest overall concentrations of nitrate and sulfate in the network. Ammonium concentrations were highest in northwestern Wyoming and southern Montana.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Obando A, L.; Malavassi R, L.; Ramirez E, O.
The objectives of this investigation were: (1) to locate potential peat deposits in Costa Rica; (2) to estimate as closely as possible by representative sampling the amount of peat present in each deposit; and (3) to make a preliminary evaluation of the quality of the peat in each deposit. With information from soil maps and a 3-week survey of Costa Rica, it is estimated that a potential area of about 1000 km² is covered by peat. Most of the peat area (about 830 km²) is in northeastern Costa Rica in the Tortuguero area. An aerial survey identified the potential peat areas by the exclusive presence of the Yolillo palm. The next largest potential area of peat (about 175 km²) is in the cloud-covered areas of the Talamanca Mountains. Some reconnaissance has been done in the Talamanca Mountains, and samples of the peat indicate that it is very similar to the sphagnum peat moss found in Canada and the northern US. Smaller bogs have been discovered at Medio Queso, El Cairo, Moin, and the Limon airport. Two bogs of immediate interest are Medio Queso and El Cairo. The Medio Queso bog has been extensively sampled and contains about 182,000 metric tons (dry) of highly decomposed peat, which is being used as a carrier for nitrogen-fixing bacteria. The El Cairo bog is sparsely sampled and contains about 1,300,000 metric tons of slightly decomposed dry peat. Plans are to use this peat in horticultural applications on nearby farms. 10 refs., 11 figs., 7 tabs.
Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering
Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus
2015-01-01
This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.
Tang, Xin; Feng, Guo-Can; Li, Xiao-Xin; Cai, Jia-Xin
2015-01-01
Face recognition is challenging especially when the images from different persons are similar to each other due to variations in illumination, expression, and occlusion. If we have sufficient training images of each person, spanning the facial variations of that person under the testing conditions, sparse representation based classification (SRC) achieves very promising results. However, in many applications, face recognition often encounters the small-sample-size problem arising from the small number of available training images for each person. In this paper, we present a novel face recognition framework utilizing low-rank and sparse error matrix decomposition together with sparse coding techniques (LRSE+SC). Firstly, the low-rank matrix recovery technique is applied to decompose the face images per class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual is a class-specific dictionary that captures the discriminative features of this individual. The sparse error matrix represents the intra-class variations, such as illumination and expression changes. Secondly, we combine the low-rank part (representative basis) of each person into a supervised dictionary and integrate the sparse error matrices of all individuals into a within-individual variant dictionary, which can represent the possible variations between the testing and training images. These two dictionaries are then used to code the query image. The within-individual variant dictionary can be shared by all the subjects and contributes only to explaining the lighting conditions, expressions, and occlusions of the query image rather than to discrimination. Finally, a reconstruction-based scheme is adopted for face recognition. Because the within-individual variant dictionary is introduced, LRSE+SC can handle corrupted training data and situations in which not all subjects have enough samples for training. Experimental results show that our method achieves state-of-the-art results on the AR, FERET, FRGC and LFW databases.
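The low-rank plus sparse decomposition at the heart of this framework is principal component pursuit; below is a minimal inexact-ALM-style loop with standard default parameters, offered as a generic RPCA illustration rather than the paper's exact solver, on synthetic data.

import numpy as np

def rpca(D, iters=100):
    """Principal component pursuit via a basic inexact-ALM-style loop."""
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))              # standard sparsity weight
    mu = 0.25 * m * n / (np.abs(D).sum() + 1e-12)
    S = np.zeros_like(D)
    Y = np.zeros_like(D)
    for _ in range(iters):
        # Singular-value thresholding -> low-rank part L
        U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # Entrywise soft thresholding -> sparse error part S
        R = D - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y += mu * (D - L - S)                   # dual variable update
    return L, S

rng = np.random.default_rng(0)
base = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 20))   # rank-3 "class" images
E = (rng.random((100, 20)) < 0.05) * 5.0                      # sparse corruptions
L, S = rpca(base + E)
print("recovered rank:", np.linalg.matrix_rank(L, tol=1e-6))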
A coarse-to-fine approach for medical hyperspectral image classification with sparse representation
NASA Astrophysics Data System (ADS)
Chang, Lan; Zhang, Mengmeng; Li, Wei
2017-10-01
A coarse-to-fine approach with sparse representation is proposed for medical hyperspectral image classification in this work. Segmentation at different scales is employed to exploit the edges of the input image, where coarse super-pixel patches provide global classification information while fine ones provide further detail. Unlike common RGB images, hyperspectral images have many bands, allowing the cluster centers to be adjusted with higher precision. After segmentation, each super-pixel is classified by the recently developed sparse representation-based classification (SRC), which assigns a label to the testing samples in one local patch by means of a sparse linear combination of all the training samples. Furthermore, segmentation with multiple scales is employed because a single scale is not suitable for the complicated distributions of medical hyperspectral imagery. Finally, classification results for different super-pixel sizes are fused by a fusion strategy, offering at least two benefits: (1) the final result is clearly superior to that of segmentation with a single scale, and (2) the fusion process significantly simplifies the choice of scales. Experimental results using real medical hyperspectral images demonstrate that the proposed method outperforms the state-of-the-art SRC.
Wen, Zaidao; Hou, Zaidao; Jiao, Licheng
2017-11-01
Discriminative dictionary learning (DDL) frameworks have been widely used in image classification; they aim to learn class-specific feature vectors as well as a representative dictionary from a set of labeled training samples. However, interclass similarities and intraclass variances among input samples and learned features generally weaken the representability of the dictionary and the discrimination of the feature vectors, degrading classification performance. How to represent them explicitly therefore becomes an important issue. In this paper, we present a novel DDL framework with a two-level low-rank and group-sparse decomposition model. In the first level, we learn a class-shared dictionary and several class-specific dictionaries, where low-rank and group-sparse regularizations are imposed on the corresponding feature matrices, respectively. In the second level, the class-specific feature matrix is further decomposed into a low-rank and a sparse matrix so that intraclass variances can be separated out, concentrating the corresponding feature vectors. Extensive experimental results demonstrate the effectiveness of our model. Compared with other state-of-the-art methods on several popular image databases, our model achieves competitive or better classification accuracy.
A compressed sensing X-ray camera with a multilayer architecture
NASA Astrophysics Data System (ADS)
Wang, Zhehui; Iaroshenko, O.; Li, S.; Liu, T.; Parab, N.; Chen, W. W.; Chu, P.; Kenyon, G. T.; Lipton, R.; Sun, K.-X.
2018-01-01
Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques, is expected to facilitate the development and applications of high-speed X-ray camera technology.
SD-SEM: sparse-dense correspondence for 3D reconstruction of microscopic samples.
Baghaie, Ahmadreza; Tafti, Ahmad P; Owen, Heather A; D'Souza, Roshan M; Yu, Zeyun
2017-06-01
Scanning electron microscopy (SEM) imaging has been a principal component of many studies in biomedical, mechanical, and materials sciences since its emergence. Despite the high resolution of captured images, they remain two-dimensional (2D). In this work, a novel framework using sparse-dense correspondence is introduced and investigated for 3D reconstruction of stereo SEM images. SEM micrographs from microscopic samples are captured by tilting the specimen stage by a known angle. The pair of SEM micrographs is then rectified using sparse scale invariant feature transform (SIFT) features/descriptors and a contrario RANSAC for matching outlier removal to ensure a gross horizontal displacement between corresponding points. This is followed by dense correspondence estimation using dense SIFT descriptors and employing a factor graph representation of the energy minimization functional and loopy belief propagation (LBP) as means of optimization. Given the pixel-by-pixel correspondence and the tilt angle of the specimen stage during the acquisition of micrographs, depth can be recovered. Extensive tests reveal the strength of the proposed method for high-quality reconstruction of microscopic samples.
Efficient Computation of Anharmonic Force Constants via q-space, with Application to Graphene
NASA Astrophysics Data System (ADS)
Kornbluth, Mordechai; Marianetti, Chris
We present a new approach for extracting anharmonic force constants from a sparse sampling of the anharmonic dynamical tensor. We calculate the derivative of the energy with respect to q-space displacements (phonons) and strain, which guarantees the absence of supercell image errors. Central finite differences provide a well-converged quadratic error tail for each derivative, separating the contribution of each anharmonic order. These derivatives populate the anharmonic dynamical tensor in a sparse mesh that bounds the Brillouin Zone, which ensures comprehensive sampling of q-space while exploiting small-cell calculations for efficient, high-throughput computation. This produces a well-converged and precisely-defined dataset, suitable for big-data approaches. We transform this sparsely-sampled anharmonic dynamical tensor to real-space anharmonic force constants that obey full space-group symmetries by construction. Machine-learning techniques identify the range of real-space interactions. We show the entire process executed for graphene, up to and including the fifth-order anharmonic force constants. This method successfully calculates strain-based phonon renormalization in graphene, even under large strains, which solves a major shortcoming of previous potentials.
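The quadratic error tail of central finite differences mentioned above is easy to verify on a toy function; the sketch below uses sine as a stand-in for energy versus displacement, with illustrative step sizes.

import numpy as np

f, df = np.sin, np.cos                          # toy stand-in for energy vs. displacement
x0 = 0.7
for h in (0.1, 0.05, 0.025, 0.0125):
    est = (f(x0 + h) - f(x0 - h)) / (2.0 * h)   # central finite difference
    print(h, abs(est - df(x0)))                 # error shrinks ~4x per halving of h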
Tensor-guided fitting of subduction slab depths
Bazargani, Farhad; Hayes, Gavin P.
2013-01-01
Geophysical measurements are often acquired at scattered locations in space. Therefore, interpolating or fitting the sparsely sampled data as a uniform function of space (a procedure commonly known as gridding) is a ubiquitous problem in geophysics. Most gridding methods require a model of spatial correlation for data. This spatial correlation model can often be inferred from some sort of secondary information, which may also be sparsely sampled in space. In this paper, we present a new method to model the geometry of a subducting slab in which we use a data‐fitting approach to address the problem. Earthquakes and active‐source seismic surveys provide estimates of depths of subducting slabs but only at scattered locations. In addition to estimates of depths from earthquake locations, focal mechanisms of subduction zone earthquakes also provide estimates of the strikes of the subducting slab on which they occur. We use these spatially sparse strike samples and the Earth’s curved surface geometry to infer a model for spatial correlation that guides a blended neighbor interpolation of slab depths. We then modify the interpolation method to account for the uncertainties associated with the depth estimates.
Using Passive Sampling to Assess Ozone Formation in Sparsely Monitored Areas
NASA Astrophysics Data System (ADS)
Crosby, C. M.; Mainord, J.; George, L. A.
2016-12-01
Tropospheric ozone (O3), a secondary pollutant, is detrimental to both human health and the environment. O3 is formed from nitrogen oxides (NOx) and volatile organic compounds (VOCs) in the presence of sunlight. Hermiston is a rural, low-population city in Oregon (population 17,707), where O3 levels are expected to be minimal. However, Hermiston has recently experienced elevated O3 concentrations approaching EPA non-attainment levels. These levels were not predicted by airshed modeling of the region, suggesting that precursor emissions are not adequately represented in the model. Due to the limited monitoring in the area, there are no measurements of precursors in the region. In this study, passive Ogawa samplers were used to measure NOx and O3 levels at twenty sites in the area. The concentrations were then mapped in conjunction with wind trajectories derived from HYSPLIT and compared to NOx point sources obtained from the National Emissions Inventory (NEI). The measurement campaign revealed areas of elevated NOx concentrations that were not accounted for in the airshed model; further work is needed to identify these sources. This study lays the groundwork for using passive sampling to ground-truth airshed models in the absence of monitoring networks.
A Space-Time-Frequency Dictionary for Sparse Cortical Source Localization.
Korats, Gundars; Le Cam, Steven; Ranta, Radu; Louis-Dorr, Valerie
2016-09-01
Cortical source imaging aims at identifying activated cortical areas on the surface of the cortex from raw electroencephalogram (EEG) data. This problem is ill-posed, the number of channels being very low compared to the number of possible source positions. In some realistic physiological situations, the active areas are sparse in space and of short duration, and the amount of spatio-temporal data available to carry out the inversion is then limited. In this study, we propose an original data-driven space-time-frequency (STF) dictionary which simultaneously takes into account both spatial and time-frequency sparseness while preserving smoothness in time-frequency (i.e., nonstationary smooth time courses in sparse locations). Based on these assumptions, we take advantage of the matching pursuit (MP) framework for selecting the most relevant atoms in this highly redundant dictionary. We apply two recent MP algorithms, single best replacement (SBR) and source deflated matching pursuit, and we compare the results using a spatial dictionary and the proposed STF dictionary to demonstrate the improvements of our multidimensional approach. We also provide comparisons with well-established inversion methods, FOCUSS and RAP-MUSIC, analyzing performance under different degrees of nonstationarity and signal-to-noise ratio. Our STF dictionary combined with the SBR approach provides robust performance on realistic simulations. From a computational point of view, the algorithm is embedded in the wavelet domain, ensuring high efficiency in terms of computation time. The proposed approach ensures fast and accurate sparse cortical localizations on highly nonstationary and noisy data.
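The atom-selection idea underlying MP-based inversion can be sketched generically: at each step, pick the dictionary atom most correlated with the residual. This is plain matching pursuit on synthetic data, not the SBR or source-deflated variants used in the paper.

import numpy as np

def matching_pursuit(y, D, n_atoms=5):
    """y: signal (m,); D: dictionary with unit-norm columns (m, K)."""
    residual = y.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))   # most relevant atom
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]      # deflate the residual
    return coeffs, residual

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)             # normalise the atoms
y = 2.0 * D[:, 10] - 1.5 * D[:, 200]       # synthetic two-atom "source"
coeffs, res = matching_pursuit(y, D, n_atoms=5)
print(np.nonzero(coeffs)[0], np.linalg.norm(res))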
X-ray computed tomography using curvelet sparse regularization.
Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias
2015-04-01
Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
Reduced electron exposure for energy-dispersive spectroscopy using dynamic sampling
Zhang, Yan; Godaliyadda, G. M. Dilshan; Ferrier, Nicola; ...
2017-10-23
Analytical electron microscopy and spectroscopy of biological specimens, polymers, and other beam-sensitive materials have been challenging due to irradiation damage. There is a pressing need to develop novel imaging and spectroscopic imaging methods that minimize such sample damage and reduce the data acquisition time; the latter is also useful for high-throughput analysis of materials structure and chemistry. In this work, we present a novel machine learning based method for dynamic sparse sampling of EDS data using a scanning electron microscope. Our method, based on a supervised learning approach for the dynamic sampling algorithm and neural network based classification of EDS data, allows a dramatic reduction in the total sampling of up to 90%, while maintaining the fidelity of the reconstructed elemental maps and spectroscopic data. We believe this approach will enable imaging and elemental mapping of materials that would otherwise be inaccessible to these analysis techniques.
Alpha Matting with KL-Divergence Based Sparse Sampling.
Karacan, Levent; Erdem, Aykut; Erdem, Erkut
2017-06-22
In this paper, we present a new sampling-based alpha matting approach for the accurate estimation of the foreground and background layers of an image. Previous sampling-based methods typically rely on certain heuristics in collecting representative samples from known regions, and thus their performance deteriorates if the underlying assumptions are not satisfied. To alleviate this, we take an entirely new approach and formulate sampling as a sparse subset selection problem in which we propose to pick a small set of candidate samples that best explains the unknown pixels. Moreover, we describe a new dissimilarity measure for comparing two samples which is based on the KL-divergence between the distributions of features extracted in the vicinity of the samples. The proposed framework is general and could easily be extended to video matting by additionally taking temporal information into account in the sampling process. Evaluation on standard benchmark datasets for image and video matting demonstrates that our approach provides more accurate results than the state-of-the-art methods.
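The sample-dissimilarity idea can be sketched as a symmetrised KL divergence between feature histograms estimated around two samples (scipy.stats.entropy computes the discrete KL term). The features and binning below are illustrative and far simpler than the paper's descriptor.

import numpy as np
from scipy.stats import entropy

def kl_dissimilarity(patch_a, patch_b, bins=32):
    """Symmetrised KL divergence between feature histograms of two samples."""
    pa, _ = np.histogram(patch_a, bins=bins, range=(0.0, 1.0), density=True)
    pb, _ = np.histogram(patch_b, bins=bins, range=(0.0, 1.0), density=True)
    pa = pa + 1e-8                          # avoid empty bins
    pb = pb + 1e-8
    return 0.5 * (entropy(pa, pb) + entropy(pb, pa))

rng = np.random.default_rng(0)
fg = rng.beta(5.0, 2.0, 400)                # hypothetical foreground-like features
bg = rng.beta(2.0, 5.0, 400)                # hypothetical background-like features
print(kl_dissimilarity(fg, bg), kl_dissimilarity(fg, fg))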
The Joker: A custom Monte Carlo sampler for binary-star and exoplanet radial velocity data
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.; Hogg, David W.; Foreman-Mackey, Daniel; Rix, Hans-Walter
2017-01-01
Given sparse or low-quality radial-velocity measurements of a star, there are often many qualitatively different stellar or exoplanet companion orbit models that are consistent with the data. The consequent multimodality of the likelihood function leads to extremely challenging search, optimization, and MCMC posterior sampling over the orbital parameters. The Joker is a custom-built Monte Carlo sampler that can produce a posterior sampling for orbital parameters given sparse or noisy radial-velocity measurements, even when the likelihood function is poorly behaved. The method produces correct samplings in orbital parameters for data that include as few as three epochs. The Joker can therefore be used to produce proper samplings of multimodal pdfs, which are still highly informative and can be used in hierarchical (population) modeling.
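The prior-heavy strategy can be sketched as brute-force rejection sampling against three radial-velocity epochs. The circular-orbit model, priors, and data below are toy assumptions; The Joker itself handles eccentric orbits and marginalises the linear parameters analytically.

import numpy as np

rng = np.random.default_rng(42)
t = np.array([0.0, 21.3, 55.9])                  # three observation epochs (days)
rv = np.array([4.1, -6.8, 2.9])                  # toy radial velocities (km/s)
sigma = 0.5                                      # measurement uncertainty (km/s)

n = 2 ** 18                                      # dense sampling from the prior
P = 10.0 ** rng.uniform(0.5, 3.0, n)             # log-uniform period prior (3-1000 d)
phi = rng.uniform(0.0, 2.0 * np.pi, n)           # phase prior
K = np.abs(rng.normal(0.0, 10.0, n))             # semi-amplitude prior (km/s)

model = K[:, None] * np.sin(2.0 * np.pi * t[None, :] / P[:, None] + phi[:, None])
loglike = -0.5 * (((rv - model) / sigma) ** 2).sum(axis=1)
keep = rng.random(n) < np.exp(loglike - loglike.max())   # rejection step
posterior = np.column_stack([P[keep], phi[keep], K[keep]])
print(posterior.shape[0], "surviving samples; often multimodal in period")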
Sparse-View Ultrasound Diffraction Tomography Using Compressed Sensing with Nonuniform FFT
2014-01-01
Accurate reconstruction of the object from sparse-view sampling data is an appealing issue for ultrasound diffraction tomography (UDT). In this paper, we present a reconstruction method based on the compressed sensing framework for sparse-view UDT. Due to the piecewise uniform characteristics of anatomical structures, the total variation is introduced into the cost function to find a more faithful sparse representation of the object. The inverse problem of UDT is solved iteratively by conjugate gradient with a nonuniform fast Fourier transform. Simulation results show the effectiveness of the proposed method: the main characteristics of the object can be properly presented with only 16 views. Compared to interpolation and multiband methods, the proposed method provides higher resolution and lower artifacts with the same number of views. The robustness to noise and the computational complexity are also discussed.
Uppal, Gina; Sibbald, Shannon L; Melling, James
2016-12-01
This study describes the ethnocultural influences associated with managing diabetes (Type 2) in a small sample of older Sikh immigrants in Toronto, Canada. The South Asian community, which includes Sikhs, is the fastest growing immigrant population, the second largest visible minority in Canada, and is five times more likely to have diabetes than their Canadian counterparts. The relationship between culture, immigration, and management of diabetes has been recognized, but research of how these areas intersect in the Sikh community is sparse. Data were collected using qualitative semi-structured interviews, and participants were recruited via purposive and snowball sampling techniques. Data were analysed using constant comparative methods. The complexities of diabetes management are organized in this study as the (1) external (2) internal and (3) actualized experiences participants faced navigating cultural dynamics, understanding their diagnosis, and interacting with health resources. An individual's diabetes diagnosis and treatment plan interacts with layers beyond the health system which must be understood in order to provide health care that is truly an empowering resource.
Effective dimension reduction for sparse functional data
YAO, F.; LEI, E.; WU, Y.
2015-01-01
We propose a method of effective dimension reduction for functional data, emphasizing the sparse design where one observes only a few noisy and irregular measurements for some or all of the subjects. The proposed method borrows strength across the entire sample and provides a way to characterize the effective dimension reduction space, via functional cumulative slicing. Our theoretical study reveals a bias-variance trade-off associated with the regularizing truncation and decaying structures of the predictor process and the effective dimension reduction space. A simulation study and an application illustrate the superior finite-sample performance of the method.
An Improved Nested Sampling Algorithm for Model Selection and Assessment
NASA Astrophysics Data System (ADS)
Zeng, X.; Ye, M.; Wu, J.; WANG, D.
2017-12-01
The multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, and each alternative conceptual model is assigned a weight that represents its plausibility. In the Bayesian framework, the posterior model weight is computed as the product of the model prior weight and the marginal likelihood (also termed model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. The implementation of NSE searches the parameter space from low-likelihood to high-likelihood areas gradually, and this evolution is carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, it is feasible to integrate a more efficient and elaborate sampling algorithm, DREAMzs, into the local sampling. In addition, to overcome the computational burden of the large number of repeated model executions in marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.
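A minimal nested-sampling loop illustrates how the evidence accumulates as the prior volume shrinks; naive prior resampling above the current likelihood threshold stands in for the M-H or DREAMzs local samplers discussed above, and the Gaussian likelihood is a toy assumption.

import numpy as np

rng = np.random.default_rng(0)

def loglike(theta):                               # toy 2-D Gaussian likelihood
    return -0.5 * np.sum((theta / 0.1) ** 2, axis=-1)

n_live, n_iter = 200, 1000
live = rng.uniform(-1.0, 1.0, (n_live, 2))        # uniform prior on [-1, 1]^2
logL = loglike(live)
logZ, logX = -np.inf, 0.0                         # log evidence, log prior volume
for i in range(n_iter):
    worst = int(np.argmin(logL))
    logX_new = -(i + 1) / n_live                  # expected geometric shrinkage
    logw = np.log(np.exp(logX) - np.exp(logX_new))
    logZ = np.logaddexp(logZ, logw + logL[worst]) # accumulate evidence
    # Local sampling: naive prior draws above the threshold (simple but slow)
    while True:
        cand = rng.uniform(-1.0, 1.0, 2)
        if loglike(cand) > logL[worst]:
            break
    live[worst], logL[worst] = cand, loglike(cand)
    logX = logX_new
print("log marginal likelihood estimate:", logZ)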
NASA Astrophysics Data System (ADS)
Su, Wei; Zhou, Ti; Zhang, Peng; Zhou, Hong; Li, Hui
2018-01-01
Some biological surfaces have been shown to have excellent anti-wear performance. Inspired by this, an Nd:YAG pulsed laser was used to create striated biomimetic laser-hardening tracks on medium carbon steel samples. Dry sliding wear tests of the biomimetic samples were performed to investigate the specific influence of the distribution of laser-hardening tracks on sliding wear resistance. Comparing the wear weight loss of the biomimetic samples, a quenched sample, and an untreated sample suggests that the sample covered with dense laser tracks (3.5 mm spacing) has lower wear weight loss than the one covered with sparse laser tracks (4.5 mm spacing); samples with only dense tracks or only sparse tracks (even distribution) proved to have better wear resistance than samples with both dense and sparse tracks (uneven distribution). The wear mechanisms indicate that the laser tracks and the exposed substrate of a biomimetic sample can be regarded as hard zones and soft zones, respectively. Inconsecutive striated hard regions can disperse the load into small branches and also hinder sliding abrasives during wear. Soft regions of small extent are beneficial in consuming mechanical energy and storing lubricative oxides; however, soft zones of large width (>0.5 mm) are harmful to the abrasion resistance of a biomimetic sample because damage and material loss are more pronounced on the surface of the soft phase. The better wear resistance of samples with evenly distributed bionic laser tracks can be explained by the fact that evenly distributed hardening tracks inhibit severe wear of local regions, so the sliding process is more stable and the extent of wear is alleviated.
Enhancement of Beaconless Location-Based Routing with Signal Strength Assistance for Ad-Hoc Networks
NASA Astrophysics Data System (ADS)
Chen, Guowei; Itoh, Kenichi; Sato, Takuro
Routing in ad-hoc networks is unreliable due to the mobility of the nodes. Location-based routing protocols, unlike protocols that rely on flooding, excel in network scalability. Furthermore, newer location-based routing protocols such as BLR [1], IGF [2], and CBF [3] have been proposed that do not require beacons in the MAC layer, improving scalability further. Such beaconless routing protocols work efficiently in dense network areas. However, their algorithms have no ability to avoid routing into sparse areas. In this article, historical signal strength is added as a factor in the BLR algorithm, which avoids routing into sparse areas and consequently improves the global routing efficiency.
Baker, Daniel H; Meese, Tim S
2016-07-27
Previous work has shown that human vision performs spatial integration of luminance contrast energy, where signals are squared and summed (with internal noise) over area at detection threshold. We tested that model here in an experiment using arrays of micro-pattern textures that varied in overall stimulus area and sparseness of their target elements, where the contrast of each element was normalised for sensitivity across the visual field. We found a power-law improvement in performance with stimulus area, and a decrease in sensitivity with sparseness. While the contrast integrator model performed well when target elements constituted 50-100% of the target area (replicating previous results), observers outperformed the model when texture elements were sparser than this. This result required the inclusion of further templates in our model, selective for grids of various regular texture densities. By assuming a MAX operation across these noisy mechanisms the model also accounted for the increase in the slope of the psychometric function that occurred as texture density decreased. Thus, for the first time, mechanisms that are selective for texture density have been revealed at contrast detection threshold. We suggest that these mechanisms have a role to play in the perception of visual textures.
A sub-microwatt asynchronous level-crossing ADC for biomedical applications.
Li, Yongjia; Zhao, Duan; Serdijn, Wouter A
2013-04-01
A continuous-time level-crossing analog-to-digital converter (LC-ADC) for biomedical applications is presented. When compared to uniform-sampling (US) ADCs, LC-ADCs generate fewer samples for various sparse biomedical signals. Lower power consumption and reduced design complexity with respect to conventional LC-ADCs are achieved by: 1) replacing the n-bit digital-to-analog converter (DAC) with a 1-bit DAC; 2) splitting the level-crossing detections; and 3) fixing the comparison window. Designed and implemented in 0.18 μm CMOS technology, the proposed ADC occupies a chip area of 220 × 203 μm². Operating from a supply voltage of 0.8 V, the ADC consumes 313-582 nW from 5 Hz to 5 kHz and achieves an ENOB of up to 7.9 bits.
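The sampling principle of an LC-ADC is easy to simulate: a sample is emitted only when the input crosses the next quantisation level up or down, so sparse signals produce far fewer samples than uniform sampling. The signal, level spacing, and time base below are illustrative.

import numpy as np

t = np.linspace(0.0, 1.0, 20000)                  # fine "analog" time base
x = np.where(t < 0.2, np.sin(2 * np.pi * 25 * t), 0.02)   # bursty, sparse signal

delta = 0.1                                       # level spacing (1-bit DAC step)
level = np.round(x[0] / delta) * delta
events = []                                       # (time, direction) pairs
for ti, xi in zip(t, x):
    while xi >= level + delta:                    # upward level crossing
        level += delta
        events.append((ti, +1))
    while xi <= level - delta:                    # downward level crossing
        level -= delta
        events.append((ti, -1))

print(f"{len(events)} level-crossing samples vs {len(t)} uniform samples")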
Sequential time interleaved random equivalent sampling for repetitive signal.
Zhao, Yijiu; Liu, Jingjing
2016-12-01
Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they are also incorporated into non-uniform sampling signal reconstruction to improve efficiency, as in random equivalent sampling (RES). However, in CS-based RES, only one sample of each acquisition is considered in the signal reconstruction stage, which results in more acquisition runs and longer sampling times. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using a Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-core analog-to-digital converter (ADC) whose cores are time interleaved. A prototype realization of this CS-based sequential random equivalent sampling method has been developed; it is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while physically sampling at 1 GHz. Experiments indicate that, for a sparse signal, the proposed CS-based sequential random equivalent sampling exhibits high efficiency.
Massively parallel sparse matrix function calculations with NTPoly
NASA Astrophysics Data System (ADS)
Dawson, William; Nakajima, Takahito
2018-04-01
We present NTPoly, a massively parallel library for computing functions of sparse, symmetric matrices. The theory of matrix functions is a well-developed framework with a wide range of applications including differential equations, graph theory, and electronic structure calculations. One particularly important application area is diagonalization-free methods in quantum chemistry. When the input and output of the matrix function are sparse, methods based on polynomial expansions can be used to compute matrix functions in linear time. We present a library based on these methods that can compute a variety of matrix functions. Distributed memory parallelization is based on a communication-avoiding sparse matrix multiplication algorithm. OpenMP task parallelization is utilized to implement hybrid parallelization. We describe NTPoly's interface and show how it can be integrated with programs written in many different programming languages. We demonstrate the merits of NTPoly by performing large-scale calculations on the K computer.
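The linear-scaling idea the abstract relies on can be sketched in a few lines: if the input and every intermediate iterate of a polynomial expansion are kept sparse by thresholding small entries, the cost of each term stays proportional to the number of nonzeros. The following is a generic truncated-Taylor matrix exponential in SciPy, offered only as an illustration of the principle, not NTPoly's actual algorithm or interface.

```python
import numpy as np
import scipy.sparse as sp

def sparse_expm_taylor(A, order=12, tol=1e-8):
    """Truncated Taylor expansion of exp(A) for a sparse symmetric A.
    Small entries are dropped after each multiplication so every iterate
    stays sparse -- the key to linear-scaling matrix functions."""
    n = A.shape[0]
    result = sp.identity(n, format="csr")
    term = sp.identity(n, format="csr")
    for k in range(1, order + 1):
        term = (term @ A) / k
        term.data[np.abs(term.data) < tol] = 0.0   # threshold to preserve sparsity
        term.eliminate_zeros()
        result = result + term
    return result

# Banded (hence sparse) symmetric test matrix.
A = sp.diags([-0.1, 0.4, -0.1], [-1, 0, 1], shape=(2000, 2000), format="csr")
E = sparse_expm_taylor(A)
print(f"nnz of exp(A): {E.nnz} of {2000 * 2000}")
```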
Randomized subspace-based robust principal component analysis for hyperspectral anomaly detection
NASA Astrophysics Data System (ADS)
Sun, Weiwei; Yang, Gang; Li, Jialin; Zhang, Dianfa
2018-01-01
A randomized subspace-based robust principal component analysis (RSRPCA) method for anomaly detection in hyperspectral imagery (HSI) is proposed. RSRPCA combines the advantages of randomized column subspace methods and robust principal component analysis (RPCA). It assumes that the background has low-rank properties and that the anomalies are sparse and do not lie in the column subspace of the background. First, RSRPCA implements random sampling to sketch the original HSI dataset from columns and to construct a randomized column subspace of the background. Structured random projections are also adopted to sketch the HSI dataset from rows. Sketching from columns and rows greatly reduces the computational requirements of RSRPCA. Second, RSRPCA adopts the columnwise RPCA (CWRPCA) to eliminate the negative effects of sampled anomaly pixels, purifying the previous randomized column subspace by removing sampled anomaly columns. The CWRPCA decomposes the submatrix of the HSI data into a low-rank matrix (i.e., background component), a noisy matrix (i.e., noise component), and a sparse anomaly matrix (i.e., anomaly component) with only a small proportion of nonzero columns. The inexact augmented Lagrange multiplier algorithm is utilized to optimize the CWRPCA problem and estimate the sparse matrix. Nonzero columns of the sparse anomaly matrix point to sampled anomaly columns in the submatrix. Third, all the pixels are projected onto the complementary subspace of the purified randomized column subspace of the background, and the anomaly pixels in the original HSI data are finally located exactly. Several experiments on three real hyperspectral images are carefully designed to investigate the detection performance of RSRPCA, and the results are compared with four state-of-the-art methods. Experimental results show that the proposed RSRPCA outperforms the four comparison methods in both detection performance and computational time.
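A minimal numpy sketch of the first and third steps (random column sketching of the background subspace, then scoring pixels by their residual in the complementary subspace) is given below; the CWRPCA purification step is omitted, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
bands, pixels = 100, 10000
X = rng.normal(size=(bands, 5)) @ rng.normal(size=(5, pixels))   # low-rank background
X[:, :20] += rng.normal(scale=3.0, size=(bands, 20))             # a few sparse anomalies

# Step 1: randomized column subspace of the background. In the full method,
# CWRPCA would remove any anomaly columns that slip into this sketch.
cols = rng.choice(pixels, size=50, replace=False)    # sketch from columns
U, s, _ = np.linalg.svd(X[:, cols], full_matrices=False)
r = int(np.sum(s > s[0] * 1e-6))                     # numerical rank of the sketch
Q = U[:, :r]                                         # orthonormal background basis

# Step 3: project every pixel onto the complement of the background subspace;
# anomalies score highly because they do not lie in that subspace.
residual = X - Q @ (Q.T @ X)
score = np.linalg.norm(residual, axis=0)
print("mean score, anomalies vs background:", score[:20].mean(), score[20:].mean())
```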
Dorr, John A.; O'Connor, Daniel V.; Foster, Neal R.; Jude, David J.
1981-01-01
Spawning by planted lake trout (Salvelinus namaycush) was documented by sampling with a diver-assisted pump in a traditional spawning area in southeastern Lake Michigan near Saugatuck, Michigan, in mid-November of 1978 and 1979. Bottom depths at the 11 locations sampled ranged from 3 to 12 m and substrate size from boulders to sand. Periphyton (Cladophora and associated biota) was several millimeters thick at most stations but sparse at the shallowest. The most eggs recovered from a single sample occurred at the shallowest depth (3 m). In both years, some of the small numbers of eggs collected (9 in 1978, 14 in 1979) were alive and fertilized. Laboratory incubation of viable eggs resulted in successful hatching of larvae. When compared with egg densities measured at spawning sites used by self-sustaining populations of lake trout in other lakes, densities in the study area (0-13/m²) appeared to be critically low. Insufficient numbers of eggs, combined with harsh incubation conditions (turbulence, ice scour, sedimentation), were implicated as prime causes of lake trout reproductive failure in the study area, although other factors, such as inappropriate spawning behavior (selection of suboptimal spawning location, depth, or substrate), may also have reduced survival of eggs and larvae.
Yan, Yiming; Tan, Zhichao; Su, Nan; Zhao, Chunhui
2017-08-24
In this paper, a building extraction method is proposed based on a stacked sparse autoencoder with an optimized structure and optimized training samples. Building extraction plays an important role in urban construction and planning, but several negative effects, such as limited resolution, poor correction, and terrain influence, reduce extraction accuracy. Data collected by multiple sensors, such as light detection and ranging (LIDAR) and optical sensors, are used to improve the extraction. Using a digital surface model (DSM) obtained from LIDAR data together with optical images, traditional methods can improve the extraction to a certain extent, but they have shortcomings in feature extraction. Since a stacked sparse autoencoder (SSAE) neural network can learn the essential characteristics of the data in depth, an SSAE was employed to extract buildings from the combined DSM data and optical imagery. A strategy for setting the SSAE network structure is given, together with an approach for choosing the number and proportion of training samples for better training of the SSAE. The optical data and DSM were combined as input to the optimized SSAE, and after training on the optimized samples, the resulting network extracts buildings with high accuracy and good robustness.
Sparse modeling of spatial environmental variables associated with asthma
Chang, Timothy S.; Gangnon, Ronald E.; Page, C. David; Buckingham, William R.; Tandias, Aman; Cowan, Kelly J.; Tomasallo, Carrie D.; Arndt, Brian G.; Hanrahan, Lawrence P.; Guilbert, Theresa W.
2014-01-01
Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin’s Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5–50 years over a three-year period. Each patient’s home address was geocoded to one of 3,456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin’s geography. Logistic thin plate regression spline modeling captured spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to Logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors. PMID:25533437
Similarity regularized sparse group lasso for cup to disc ratio computation.
Cheng, Jun; Zhang, Zhuo; Tao, Dacheng; Wong, Damon Wing Kee; Liu, Jiang; Baskaran, Mani; Aung, Tin; Wong, Tien Yin
2017-08-01
Automatic cup to disc ratio (CDR) computation from color fundus images has been shown to be promising for glaucoma detection. Over the past decade, many algorithms have been proposed. In this paper, we first review the recent work in the area and then present a novel similarity-regularized sparse group lasso method for automated CDR estimation. The proposed method reconstructs the testing disc image based on a set of reference disc images by integrating the similarity between the testing and the reference disc images with the sparse group lasso constraints. The reconstruction coefficients are then used to estimate the CDR of the testing image. The proposed method has been validated using 650 images with manually annotated CDRs. Experimental results show an average CDR error of 0.0616 and a correlation coefficient of 0.7, outperforming other methods. The areas under the curve in the diagnostic test reach 0.843 and 0.837 when manually and automatically segmented discs are used, respectively, again better than other methods.
NASA Astrophysics Data System (ADS)
Macnae, J.; Ley-Cooper, Y.
2009-05-01
Sub-surface porosity is of importance in estimating fluid content and salt-load parameters for hydrological modelling. While sparse boreholes may adequately sample the depth to a sub-horizontal water-table, and usually also adequately sample ground-water salinity, they do not provide adequate sampling of the spatial variations in porosity or hydraulic permeability caused by spatial variations in sedimentary and other geological processes. We show in this presentation that spatially detailed porosity can be estimated by applying Archie's law to conductivity estimates from airborne electromagnetic surveys combined with interpolated ground-water conductivity values. The prediction was tested on data from the Chowilla flood plain in the Murray-Darling Basin of South Australia. A frequency-domain, helicopter-borne electromagnetic system collected data at 6 frequencies and 3 to 4 m spacings on lines spaced 100 m apart. These data were transformed into conductivity-depth sections, from which a 3D bulk-conductivity map could be created with about 30 m spatial resolution and 2 to 5 m vertical depth resolution. For that portion of the volume below the interpolated water-table, we predicted porosity in each cell using Archie's law. Generally, predicted porosities were in the 30 to 50% range, consistent with expectations for the partially consolidated sediments in the floodplain. Porosities were directly measured on core from eight boreholes in the area, and compared quite well with the predictions. The predicted porosity map was spatially consistent and, when combined with measured salinities in the ground water, was able to provide a detailed 3D map of salt-loads in the saturated zone, and as such contribute to a hazard assessment of the saline threat to the river.
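The core inversion is a one-liner. The sketch below inverts Archie's law for porosity using generic textbook constants (a = 1, m = 2); the study's actual calibration is not given in the abstract, and the conductivity values are invented.

```python
import numpy as np

def archie_porosity(sigma_bulk, sigma_water, a=1.0, m=2.0):
    """Invert Archie's law, sigma_bulk = sigma_water * phi**m / a, for porosity.
    a (tortuosity factor) and m (cementation exponent) are formation-dependent;
    the defaults here are generic textbook values, not the study's calibration."""
    return (a * sigma_bulk / sigma_water) ** (1.0 / m)

# AEM-derived bulk conductivities (S/m) for a column of cells, with the
# interpolated ground-water conductivity below the water table.
sigma_bulk = np.array([0.10, 0.14, 0.18])
sigma_water = 1.2
print(archie_porosity(sigma_bulk, sigma_water))   # ~0.29-0.39, cf. 30-50% in the study
```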
Deep ensemble learning of sparse regression models for brain disease diagnosis.
Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang
2017-04-01
Recent studies on brain imaging analysis have witnessed the core role of machine learning techniques in computer-assisted intervention for brain disease diagnosis. Of the various machine-learning techniques, sparse regression models have proved their effectiveness in handling high-dimensional data with a small number of training samples, especially in medical problems. In the meantime, deep learning methods have achieved great success, outperforming state-of-the-art methods in various applications. In this paper, we propose a novel framework that combines the two conceptually different methods of sparse regression and deep learning for Alzheimer's disease/mild cognitive impairment diagnosis and prognosis. Specifically, we first train multiple sparse regression models, each of which is trained with a different value of a regularization control parameter. Thus, our multiple sparse regression models potentially select different feature subsets from the original feature set; thereby they have different powers to predict the response values, i.e., clinical label and clinical scores in our work. By regarding the response values from our sparse regression models as target-level representations, we then build a deep convolutional neural network for clinical decision making, which we call a 'Deep Ensemble Sparse Regression Network.' To the best of our knowledge, this is the first work that combines sparse regression models with a deep neural network. In our experiments with the ADNI cohort, we validated the effectiveness of the proposed method by achieving the highest diagnostic accuracies in three classification tasks. We also rigorously analyzed our results and compared them with previous studies on the ADNI cohort in the literature. Copyright © 2017 Elsevier B.V. All rights reserved.
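A minimal sketch of the ensemble idea follows, using scikit-learn Lasso models in place of the paper's sparse regression implementation and a logistic regression standing in for the deep network; all data and regularization values are synthetic.

```python
import numpy as np
from sklearn.linear_model import Lasso, LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 200))                 # high-dimensional, few samples
w = np.zeros(200); w[:10] = 1.0                 # only 10 informative features
y_score = X @ w + 0.5 * rng.normal(size=400)    # a clinical score
y = (y_score > 0).astype(int)                   # a clinical label

X_tr, X_te, ys_tr, _, y_tr, y_te = train_test_split(X, y_score, y, random_state=0)

# Each Lasso (one per regularization value) selects a different feature subset;
# its prediction is one channel of the target-level representation.
alphas = [0.01, 0.03, 0.1, 0.3]
models = [Lasso(alpha=a).fit(X_tr, ys_tr) for a in alphas]
R_tr = np.column_stack([m.predict(X_tr) for m in models])
R_te = np.column_stack([m.predict(X_te) for m in models])

# Second stage: the paper trains a CNN over these representations; a plain
# logistic regression stands in for it in this sketch.
clf = LogisticRegression().fit(R_tr, y_tr)
print("accuracy:", clf.score(R_te, y_te))
```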
Image super-resolution via sparse representation.
Yang, Jianchao; Wright, John; Huang, Thomas S; Ma, Yi
2010-11-01
This paper presents a new approach to single-image super-resolution based on sparse signal representation. Research on image statistics suggests that image patches can be well represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low-resolution and high-resolution image patch pair with respect to their own dictionaries. Therefore, the sparse representation of a low-resolution image patch can be applied with the high-resolution image patch dictionary to generate a high-resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs compared to previous approaches, which simply sample a large number of image patch pairs, reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, so the proposed algorithm can handle super-resolution with noisy inputs in a more unified framework.
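The coupled-dictionary mechanism can be sketched compactly: code the low-resolution patch sparsely over D_lo, then synthesize the high-resolution patch from the same coefficients over D_hi. In the sketch below the dictionaries are random placeholders rather than the jointly trained ones the paper learns, and orthogonal matching pursuit stands in for the paper's sparse solver.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_atoms, lo_dim, hi_dim = 256, 25, 100    # e.g. 5x5 low-res, 10x10 high-res patches

# Jointly trained dictionaries in the paper; random placeholders here.
D_lo = rng.normal(size=(lo_dim, n_atoms))
D_lo /= np.linalg.norm(D_lo, axis=0)
D_hi = rng.normal(size=(hi_dim, n_atoms))

# Synthesize a test patch pair from a known sparse code.
alpha_true = np.zeros(n_atoms); alpha_true[[3, 91, 200]] = [1.0, -0.7, 0.5]
y_lo = D_lo @ alpha_true                   # observed low-res patch

# Sparse code of the low-res patch over D_lo ...
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3, fit_intercept=False)
alpha = omp.fit(D_lo, y_lo).coef_

# ... applied with D_hi to generate the high-res patch.
x_hi = D_hi @ alpha
print("high-res reconstruction error:", np.linalg.norm(x_hi - D_hi @ alpha_true))
```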
Timber Volume and Biomass Estimates in Central Siberia from Satellite Data
NASA Technical Reports Server (NTRS)
Ranson, K. Jon; Kimes, Daniel S.; Kharuk, Vyetcheslav I.
2007-01-01
Mapping of boreal forest type, structural parameters, and biomass is critical for understanding the boreal forest's significance in the carbon cycle and its response to, and impact on, global climate change. The biggest deficiency of the existing ground-based forest inventories is the uncertainty in the inventory data, particularly in remote areas of Siberia where sampling is sparse or lacking and often decades old. Remote sensing methods can help overcome these problems. In this joint US and Russian study, we used the moderate resolution imaging spectroradiometer (MODIS) and unique waveform data of the geoscience laser altimeter system (GLAS) to produce a map of timber volume for a 10° × 12° area in Central Siberia. Using these methods, the mean timber volume for the forested area within the total study area was 203 m³/ha. The new remote sensing methods used in this study provide a truly independent estimate of forest structure that does not depend on traditional ground forest inventory methods.
A compressed sensing X-ray camera with a multilayer architecture
Wang, Zhehui; Laroshenko, O.; Li, S.; ...
2018-01-25
Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a) the number of pixels with true X-ray hits is much smaller than the total number of pixels; (b) the X-ray information is redundant; or (c) some prior knowledge about the X-ray images exists, sparse sampling may be allowed. In this work, we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion of signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational imaging techniques is expected to facilitate the development and application of high-speed X-ray camera technology.
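As a toy one-dimensional stand-in for ROPS, the sketch below reads out a random 25% subset of "pixels" from a frame assumed sparse in the DCT domain (prior-knowledge case (c)) and recovers it with ISTA. The sensor architecture itself is beyond a sketch, and all sizes and parameters here are invented.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
n = 256
c_true = np.zeros(n); c_true[[3, 17, 40]] = [2.0, -1.5, 1.0]
x = idct(c_true, norm="ortho")            # frame that is sparse in the DCT domain

mask = rng.random(n) < 0.25               # random on-board pixel sampling (25% read out)
y = x[mask]                               # the only values the chip stores

# ISTA for min 0.5*||(Psi c)[mask] - y||^2 + lam*||c||_1, with Psi = inverse DCT.
lam, step, c = 0.01, 1.0, np.zeros(n)
for _ in range(500):
    r = np.zeros(n)
    r[mask] = idct(c, norm="ortho")[mask] - y           # residual on sampled pixels
    c = c - step * dct(r, norm="ortho")                 # gradient step (Psi orthonormal)
    c = np.sign(c) * np.maximum(np.abs(c) - lam * step, 0.0)   # soft threshold

x_hat = idct(c, norm="ortho")
print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```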
Immunological memory is associative
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, D.J.; Forrest, S.; Perelson, A.S.
1996-12-31
The purpose of this paper is to show that immunological memory is an associative and robust memory that belongs to the class of sparse distributed memories. This class of memories derives its associative and robust nature by sparsely sampling the input space and distributing the data among many independent agents. Other members of this class include a model of the cerebellar cortex and Sparse Distributed Memory (SDM). First we present a simplified account of the immune response and immunological memory. Next we present SDM, and then we show the correlations between immunological memory and SDM. Finally, we show how associative recall in the immune response can be both beneficial and detrimental to the fitness of an individual.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, W; Yin, F; Wang, C
Purpose: To develop a technique to estimate on-board VC-MRI using multi-slice sparsely-sampled cine images, patient prior 4D-MRI, motion-modeling and free-form deformation for real-time 3D target verification of lung radiotherapy. Methods: A previous method has been developed to generate on-board VC-MRI by deforming prior MRI images based on a motion model (MM) extracted from prior 4D-MRI and a single-slice on-board 2D-cine image. In this study, free-form deformation (FD) was introduced to correct for errors in the MM when large anatomical changes exist. Multiple-slice sparsely-sampled on-board 2D-cine images located within the target are used to improve both the estimation accuracy and temporal resolution of VC-MRI. The on-board 2D-cine MRIs are acquired at 20-30 frames/s by sampling only 10% of the k-space on a Cartesian grid, with 85% of that taken at the central k-space. The method was evaluated using XCAT (computerized patient model) simulation of lung cancer patients with various anatomical and respirational changes from prior 4D-MRI to onboard volume. The accuracy was evaluated using Volume-Percent-Difference (VPD) and Center-of-Mass-Shift (COMS) of the estimated tumor volume. Effects of region-of-interest (ROI) selection, 2D-cine slice orientation, slice number and slice location on the estimation accuracy were evaluated. Results: VC-MRI estimated using 10 sparsely-sampled sagittal 2D-cine MRIs achieved VPD/COMS of 9.07±3.54%/0.45±0.53mm among all scenarios based on estimation with ROI-MM-ROI-FD. The FD optimization improved estimation significantly for scenarios with anatomical changes. Using ROI-FD achieved better estimation than global-FD. Changing the multi-slice orientation to axial, coronal, and axial/sagittal orthogonal reduced the accuracy of VC-MRI to VPD/COMS of 19.47±15.74%/1.57±2.54mm, 20.70±9.97%/2.34±0.92mm, and 16.02±13.79%/0.60±0.82mm, respectively. Reducing the number of cines to 8 enhanced the temporal resolution of VC-MRI by 25% while maintaining the estimation accuracy. Estimation using slices sampled uniformly through the tumor achieved better accuracy than slices sampled non-uniformly. Conclusions: Preliminary studies showed that it is feasible to generate VC-MRI from multi-slice sparsely-sampled 2D-cine images for real-time 3D-target verification. This work was supported by the National Institutes of Health under Grant No. R01-CA184173 and a research grant from Varian Medical Systems.
Low-rank matrix decomposition and spatio-temporal sparse recovery for STAP radar
Sen, Satyabrata
2015-08-04
We develop space-time adaptive processing (STAP) methods by leveraging the advantages of sparse signal processing techniques in order to detect a slowly-moving target. We observe that the inherent sparse characteristics of a STAP problem can be formulated as the low-rankness of the clutter covariance matrix when compared to the total adaptive degrees-of-freedom, and also as the sparse interference spectrum on the spatio-temporal domain. By exploiting these sparse properties, we propose two approaches for estimating the interference covariance matrix. In the first approach, we consider a constrained matrix rank minimization problem (RMP) to decompose the sample covariance matrix into a low-rank positive semidefinite and a diagonal matrix. The solution of the RMP is obtained by applying the trace minimization technique and the singular value decomposition with a matrix shrinkage operator. Our second approach deals with the atomic norm minimization problem to recover the clutter response-vector that has a sparse support on the spatio-temporal plane. We use convex relaxation based standard sparse-recovery techniques to find the solutions. With extensive numerical examples, we demonstrate the performance of the proposed STAP approaches with respect to both ideal and practical scenarios, involving Doppler-ambiguous clutter ridges, and spatial and temporal decorrelation effects. The low-rank matrix decomposition based solution requires secondary measurements as many as twice the clutter rank to attain near-ideal STAP performance, whereas the spatio-temporal sparsity based approach needs a considerably smaller number of secondary data.
Tomographic Imaging of a Forested Area By Airborne Multi-Baseline P-Band SAR.
Frey, Othmar; Morsdorf, Felix; Meier, Erich
2008-09-24
In recent years, various attempts have been undertaken to obtain information about the structure of forested areas from multi-baseline synthetic aperture radar data. Tomographic processing of such data has been demonstrated for airborne L-band data, but the quality of the focused tomographic images is limited by several factors. In particular, the common Fourier-based focusing methods are susceptible to irregular and sparse sampling, two problems that are unavoidable in the case of multi-pass, multi-baseline SAR data acquired by an airborne system. In this paper, a tomographic focusing method based on the time-domain back-projection algorithm is proposed, which maintains the geometric relationship between the original sensor positions and the imaged target and is therefore able to cope with irregular sampling without introducing any approximations with respect to the geometry. The tomographic focusing quality is assessed by analysing the impulse responses of simulated point targets and an in-scene corner reflector. In particular, several tomographic slices of a volume representing a forested area are given. The respective P-band tomographic data set, consisting of eleven flight tracks, was acquired by the airborne E-SAR sensor of the German Aerospace Center (DLR).
Mineral resources of the Castle Peaks Wilderness Study Area, San Bernardino County, California
Miller, David A.W.; Frisken, James G.; Jachens, Robert C.; Gese, Diann D.
1986-01-01
The Castle Peaks Wilderness Study Area (CDCA-266) comprises approximately 45,000 acres in the northern New York Mountains, San Bernardino County, California. At the request of the Bureau of Land Management, 39,303 acres of the wilderness study area were studied. The area was investigated during 1982-1985 using combined geologic, geochemical, and geophysical methods, and is considered preliminarily suitable for wilderness designation. There are no mineral reserves or identified resources in the study area. Fluorspar, occurring in sparse veins, has moderate resource potential, as do silver and lead in fault zones, and gold and silver in sparse, high-grade veins and fault breccia. Each area of moderate resource potential encompasses less than one square mile. These same commodities have low resource potential in similar occurrences throughout much of the study area. In addition, there is low resource potential for gold in placer deposits, uranium in altered breccia and gouge, and rare-earth elements in pegmatite dikes. There is no potential for oil and gas resources over most of the study area, but the potential is unknown along its western margin. In this report, the area studied is referred to as "the wilderness study area", or simply "the study area".
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero, Vicente; Bonney, Matthew; Schroeder, Benjamin
When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: the central 95% of response; and the 10⁻⁴ probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depends on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large database and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.
Rose, Amy N.; Nagle, Nicholas N.
2016-08-01
Techniques such as Iterative Proportional Fitting have been previously suggested as a means to generate new data with the demographic granularity of individual surveys and the spatial granularity of small area tabulations of censuses and surveys. This article explores internal and external validation approaches for synthetic, small area, household- and individual-level microdata using a case study for Bangladesh. Using data from the Bangladesh Census 2011 and the Demographic and Health Survey, we produce estimates of infant mortality rate and other household attributes for small areas using a variation of an iterative proportional fitting method called P-MEDM. We conduct an internal validation to determine: whether the model accurately recreates the spatial variation of the input data, how each of the variables performed overall, and how the estimates compare to the published population totals. We conduct an external validation by comparing the estimates with indicators from the 2009 Multiple Indicator Cluster Survey (MICS) for Bangladesh to benchmark how well the estimates compared to a known dataset which was not used in the original model. The results indicate that the estimation process is viable for regions that are better represented in the microdata sample, but also revealed the possibility of strong overfitting in sparsely sampled sub-populations.
Improving and Evaluating Nested Sampling Algorithm for Marginal Likelihood Estimation
NASA Astrophysics Data System (ADS)
Ye, M.; Zeng, X.; Wu, J.; Wang, D.; Liu, J.
2016-12-01
With the growing impacts of climate change and human activities on the cycle of water resources, an increasing amount of research focuses on the quantification of modeling uncertainty. Bayesian model averaging (BMA) provides a popular framework for quantifying conceptual model and parameter uncertainty. The ensemble prediction is generated by combining each plausible model's prediction, and each model is assigned a model weight determined by the model's prior weight and marginal likelihood. Thus, the estimation of a model's marginal likelihood is crucial for reliable and accurate BMA prediction. The nested sampling estimator (NSE) is a newly proposed method for marginal likelihood estimation. NSE proceeds by gradually searching the parameter space from low-likelihood regions to high-likelihood regions, an evolution carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm is often used for local sampling. However, M-H is not an efficient sampling algorithm for high-dimensional or complicated parameter spaces. To improve the efficiency of NSE, it is therefore natural to incorporate the robust and efficient DREAMzs sampling algorithm into the local sampling of NSE. The comparison results demonstrated that the improved NSE could improve the efficiency of marginal likelihood estimation significantly. However, both the improved and original NSEs suffer from instability. In addition, the heavy computational cost of the huge number of model executions is overcome by using adaptive sparse grid surrogates.
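A minimal nested sampling loop for the log-evidence of a toy one-dimensional Gaussian likelihood under a uniform prior is sketched below; simple rejection sampling replaces the M-H or DREAMzs local sampler discussed in the abstract, and all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def loglike(theta):
    # Narrow Gaussian likelihood centered at 0.
    return -0.5 * (theta / 0.1) ** 2 - 0.5 * np.log(2 * np.pi * 0.1 ** 2)

# Uniform prior on [-1, 1]; true evidence = integral of L * prior ~ 1/2.
N, iters = 400, 3000
live = rng.uniform(-1, 1, N)
live_ll = loglike(live)
logZ, logX = -np.inf, 0.0                     # accumulated evidence, prior volume

for i in range(iters):
    worst = np.argmin(live_ll)
    logX_next = -(i + 1) / N                  # E[log X] shrinks by 1/N per step
    logw = np.log(np.exp(logX) - np.exp(logX_next))   # prior-volume shell width
    logZ = np.logaddexp(logZ, live_ll[worst] + logw)
    # Replace the worst point by a new prior draw above the likelihood
    # constraint (rejection sampling; NSE would use M-H / DREAMzs here).
    while True:
        cand = rng.uniform(-1, 1)
        if loglike(cand) > live_ll[worst]:
            break
    live[worst], live_ll[worst] = cand, loglike(cand)
    logX = logX_next

print("log-evidence estimate:", logZ, " true:", np.log(0.5))
```

The rejection step is exactly where the abstract's argument bites: its acceptance rate collapses as the likelihood constraint tightens, which is why an efficient local sampler such as DREAMzs matters.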
Gratz, Marcel; Schlamann, Marc; Goericke, Sophia; Maderwald, Stefan; Quick, Harald H
2017-03-01
To assess the image quality of sparsely sampled contrast-enhanced MR angiography (sparse CE-MRA) providing high spatial resolution and whole-head coverage. Twenty-three patients scheduled for contrast-enhanced MR imaging of the head (N = 19 with intracranial pathologies, N = 9 with vascular diseases) were included. Sparse CE-MRA at 3 Tesla was conducted using a single dose of contrast agent. Two neuroradiologists independently evaluated the data regarding vascular visibility and diagnostic value for overall 24 parameters and vascular segments on a 5-point ordinal scale (5 = very good, 1 = insufficient vascular visibility). Contrast bolus timing and the resulting arterio-venous overlap were also evaluated. Where available (N = 9), sparse CE-MRA was compared to intracranial Time-of-Flight MRA. The overall rating across all patients for sparse CE-MRA was 3.50 ± 1.07. A direct influence of the contrast bolus timing on the resulting image quality was observed. Overall mean vascular visibility and image quality across different features was rated good to intermediate (3.56 ± 0.95). The average performance of intracranial Time-of-Flight was rated 3.84 ± 0.87 across all patients and 3.54 ± 0.62 across all features. Sparse CE-MRA provides high-quality 3D MRA with high spatial resolution and whole-head coverage within a short acquisition time. Accurate contrast bolus timing is mandatory. • Sparse CE-MRA enables fast vascular imaging with full brain coverage. • Volumes with sub-millimetre resolution can be acquired within 10 seconds. • Readers' ratings are good to intermediate and dependent on contrast bolus timing. • The method provides an excellent overview and allows screening for vascular pathologies.
14 CFR 91.305 - Flight test areas.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 2 2012-01-01 2012-01-01 false Flight test areas. 91.305 Section 91.305... AND GENERAL OPERATING RULES GENERAL OPERATING AND FLIGHT RULES Special Flight Operations § 91.305 Flight test areas. No person may flight test an aircraft except over open water, or sparsely populated...
14 CFR 91.305 - Flight test areas.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 2 2013-01-01 2013-01-01 false Flight test areas. 91.305 Section 91.305... AND GENERAL OPERATING RULES GENERAL OPERATING AND FLIGHT RULES Special Flight Operations § 91.305 Flight test areas. No person may flight test an aircraft except over open water, or sparsely populated...
14 CFR 91.305 - Flight test areas.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 2 2010-01-01 2010-01-01 false Flight test areas. 91.305 Section 91.305... AND GENERAL OPERATING RULES GENERAL OPERATING AND FLIGHT RULES Special Flight Operations § 91.305 Flight test areas. No person may flight test an aircraft except over open water, or sparsely populated...
DEM generation from contours and a low-resolution DEM
NASA Astrophysics Data System (ADS)
Li, Xinghua; Shen, Huanfeng; Feng, Ruitao; Li, Jie; Zhang, Liangpei
2017-12-01
A digital elevation model (DEM) is a virtual representation of topography in which the terrain is described by three-dimensional coordinates. In the framework of sparse representation, this paper investigates DEM generation from contours. Since contours are usually sparsely distributed and closely related in space, sparse spatial regularization (SSR) is enforced on them. To make up for the lack of spatial information, another DEM of lower spatial resolution from the same geographical area is introduced. In this way, the sparse representation implements the spatial constraints in the contours and extracts the complementary information from the auxiliary DEM. Furthermore, the proposed method integrates the advantage of the unbiased estimation of kriging. For brevity, the proposed method is called the kriging and sparse spatial regularization (KSSR) method. The performance of the proposed KSSR method is demonstrated by experiments on Shuttle Radar Topography Mission (SRTM) 30 m DEM and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) 30 m global digital elevation model (GDEM) generation from the corresponding contours and a 90 m DEM. The experiments confirm that the proposed KSSR method outperforms the traditional kriging and SSR methods and can be successfully used for DEM generation from contours.
NASA Astrophysics Data System (ADS)
Jia, Xiaodong; Zhao, Ming; Di, Yuan; Li, Pin; Lee, Jay
2018-03-01
Sparsity has recently become an increasingly important topic in the areas of machine learning and signal processing. One big family of sparse measures in the current literature is the generalized lp/lq norm, which is scale invariant and is widely regarded as a normalized lp norm. However, the characteristics of the generalized lp/lq norm are still little discussed, and its application to the condition monitoring of rotating devices has remained unexplored. In this study, we first discuss the characteristics of the generalized lp/lq norm for sparse optimization and then propose a method of sparse filtering with the generalized lp/lq norm for the purpose of impulsive signature enhancement. Further driven by the trend of industrial big data and the need to reduce maintenance costs for industrial equipment, the proposed sparse filter is customized for vibration signal processing and applied to bearings and gearboxes for the purpose of condition monitoring. Based on the results from the industrial implementations in this paper, the proposed method is found to be a promising tool for impulsive feature enhancement, and its superiority over previous methods is also demonstrated.
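The measure itself is easy to state in code. The sketch below computes the generalized lp/lq ratio and demonstrates its scale invariance and its response to impulsiveness; the paper's full sparse filter is not reproduced.

```python
import numpy as np

def general_lp_lq(x, p=1.0, q=2.0):
    """Generalized lp/lq norm ratio. It is scale invariant because both norms
    are 1-homogeneous: general_lp_lq(c*x) == general_lp_lq(x). With p=1, q=2,
    smaller values indicate a sparser (more impulsive) signal."""
    x = np.abs(np.asarray(x, dtype=float))
    return (x ** p).sum() ** (1 / p) / (x ** q).sum() ** (1 / q)

# Impulsive vs. flat signals of equal energy.
impulsive = np.zeros(1000); impulsive[::100] = 1.0
flat = np.ones(1000) / np.sqrt(1000)
print(general_lp_lq(impulsive), general_lp_lq(flat))   # small vs. large ratio
print(general_lp_lq(5 * impulsive))                    # unchanged: scale invariance
```

A sparse filter in this spirit would tune filter coefficients to minimize this ratio on the filtered vibration signal, driving the output toward the impulsive signature of a bearing or gear fault.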
Asynchronous signal-dependent non-uniform sampler
NASA Astrophysics Data System (ADS)
Can-Cimino, Azime; Chaparro, Luis F.; Sejdić, Ervin
2014-05-01
Analog sparse signals resulting from biomedical and sensing network applications are typically non-stationary, with frequency-varying spectra. By ignoring the fact that the maximum frequency of the spectrum is changing, uniform sampling of sparse signals collects unnecessary samples in quiescent segments of the signal. A more appropriate sampling approach would be signal-dependent. Moreover, in many of these applications power consumption and analog processing are issues of great importance that need to be considered. In this paper we present a signal-dependent non-uniform sampler that uses a Modified Asynchronous Sigma Delta Modulator, which consumes low power and can be processed using analog procedures. Interpolation of the original signal is performed using Prolate Spheroidal Wave Functions (PSWFs), thus giving an asynchronous analog-to-digital and digital-to-analog conversion. Stable solutions are obtained by using modulated PSWFs. The advantage of the adapted asynchronous sampler is that the range of frequencies of the sparse signal is taken into account, avoiding aliasing. Moreover, it requires saving only the zero-crossing times of the non-uniform samples, or their differences, and the reconstruction can be done using their quantized values and a PSWF-based interpolation. The range of frequencies analyzed can be changed, and the sampler can be implemented as a bank of filters for an unknown range of frequencies. The performance of the proposed algorithm is illustrated with an electroencephalogram (EEG) signal.
Inventory methods for trees in nonforest areas in the Great Plains States
Andrew J. Lister; Charles T. Scott; Steven Rasmussen
2012-01-01
The US Forest Service's Forest Inventory and Analysis (FIA) program collects information on trees in areas that meet its definition of forest. However, the inventory excludes trees in areas that do not meet this definition, such as those found in urban areas, in isolated patches, in areas with sparse or predominantly herbaceous vegetation, in narrow strips (e.g.,...
An algorithm for extraction of periodic signals from sparse, irregularly sampled data
NASA Technical Reports Server (NTRS)
Wilcox, J. Z.
1994-01-01
Temporal gaps in discrete sampling sequences produce spurious Fourier components at the intermodulation frequencies of an oscillatory signal and the temporal gaps, thus significantly complicating spectral analysis of such sparsely sampled data. A new fast Fourier transform (FFT)-based algorithm has been developed, suitable for spectral analysis of sparsely sampled data with a relatively small number of oscillatory components buried in background noise. The algorithm's principal idea has its origin in the so-called 'clean' algorithm used to sharpen images of scenes corrupted by atmospheric and sensor aperture effects. It identifies as the signal's 'true' frequency that oscillatory component which, when passed through the same sampling sequence as the original data, produces a Fourier image that is the best match to the original Fourier space. The algorithm has generally met with success in trials with simulated data with a low signal-to-noise ratio, including those of a type similar to hourly residuals for Earth orientation parameters extracted from VLBI data. For eight oscillatory components in the diurnal and semidiurnal bands, all components with an amplitude-to-noise ratio greater than 0.2 were successfully extracted for all sequences and duty cycles (greater than 0.1) tested; the amplitude-to-noise ratios of the extracted signals were as low as 0.05 for high duty cycles and long sampling sequences. When, in addition to these high frequencies, strong low-frequency components are present in the data, the low-frequency components are generally eliminated first, by employing a version of the algorithm that searches for non-integer multiples of the discrete FFT minimum frequency.
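A compact numpy rendition of the core 'clean'-style loop follows: evaluate a dirty spectrum of the irregular samples on a trial frequency grid, pick the strongest component, subtract its least-squares sinusoid, and repeat. The frequencies, amplitudes, and duty cycle below are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Irregular, gappy sampling times (e.g. hourly data with a low duty cycle).
t = np.sort(rng.choice(np.arange(0.0, 2000.0), size=300, replace=False))
x = (1.0 * np.sin(2 * np.pi * 1.0027 / 24 * t)
     + 0.6 * np.sin(2 * np.pi * 2.0 / 24 * t + 1.0)
     + 0.3 * rng.normal(size=t.size))

freqs = np.linspace(1e-4, 0.2, 5000)        # trial frequencies (cycles/hour)

def best_fit(t, x, f):
    """Least-squares sinusoid a*cos + b*sin at frequency f."""
    A = np.column_stack([np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return A @ coef, np.hypot(*coef)

resid = x.copy()
for _ in range(2):                           # extract the two strongest components
    # "Dirty" spectrum amplitude at each trial frequency from irregular samples.
    power = np.abs(np.exp(-2j * np.pi * freqs[:, None] * t[None, :]) @ resid)
    f_hat = freqs[np.argmax(power)]
    model, amp = best_fit(t, resid, f_hat)
    resid = resid - model                    # subtract, then search again
    print(f"found f = {f_hat:.5f} cyc/h, amplitude = {amp:.2f}")
```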
Thin-film sparse boundary array design for passive acoustic mapping during ultrasound therapy.
Coviello, Christian M; Kozick, Richard J; Hurrell, Andrew; Smith, Penny Probert; Coussios, Constantin-C
2012-10-01
A new 2-D hydrophone array for ultrasound therapy monitoring is presented, along with a novel algorithm for passive acoustic mapping using a sparse weighted aperture. The array is constructed using existing polyvinylidene fluoride (PVDF) ultrasound sensor technology, and is utilized for its broadband characteristics and its high receive sensitivity. For most 2-D arrays, high-resolution imagery is desired, which requires a large aperture at the cost of a large number of elements. The proposed array's geometry is sparse, with elements only on the boundary of the rectangular aperture. The missing information from the interior is filled in using linear imaging techniques. After receiving acoustic emissions during ultrasound therapy, this algorithm applies an apodization to the sparse aperture to limit side lobes and then reconstructs acoustic activity with high spatiotemporal resolution. Experiments show verification of the theoretical point spread function, and cavitation maps in agar phantoms correspond closely to predicted areas, showing the validity of the array and methodology.
NASA Astrophysics Data System (ADS)
Yan, Dan; Bai, Lianfa; Zhang, Yi; Han, Jing
2018-02-01
To address the problems of missing details and limited performance in colorization based on sparse representation, we propose a conceptual model framework for colorizing gray-scale images, and then a multi-sparse-dictionary colorization algorithm based on feature classification and detail enhancement (CEMDC) is proposed based on this framework. The algorithm can achieve a natural colorized effect for a gray-scale image, consistent with human vision. First, the algorithm establishes a multi-sparse-dictionary classification colorization model. Then, to improve the accuracy of the classification, a corresponding local constraint algorithm is proposed. Finally, we propose a detail enhancement method based on the Laplacian pyramid, which is effective in solving the problem of missing details and improving the speed of image colorization. In addition, the algorithm not only realizes the colorization of visual gray-scale images, but can also be applied to other areas, such as color transfer between color images, colorizing gray-scale fusion images, and infrared images.
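The detail-enhancement step can be illustrated independently of colorization: build a Laplacian pyramid, then amplify the band-pass layers during reconstruction. The sketch below is a generic SciPy version with invented parameters, not the paper's exact pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def laplacian_pyramid(img, levels=3):
    """Decompose img into band-pass (detail) layers plus a low-pass residual."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        low = gaussian_filter(cur, sigma=2.0)
        pyr.append(cur - low)        # detail layer at this scale
        cur = low[::2, ::2]          # downsample for the next level
    pyr.append(cur)                  # low-pass residual
    return pyr

def reconstruct(pyr, boost=1.5):
    """Rebuild the image, amplifying the detail layers to enhance edges."""
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        cur = zoom(cur, 2, order=1)[:detail.shape[0], :detail.shape[1]]
        cur = cur + boost * detail   # boost > 1 sharpens fine structure
    return cur

img = np.zeros((128, 128)); img[32:96, 32:96] = 1.0   # toy gray-scale image
enhanced = reconstruct(laplacian_pyramid(img))
print(enhanced.shape, float(enhanced.max()))
```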
Sparse PCA with Oracle Property
Gu, Quanquan; Wang, Zhaoran; Liu, Han
2014-01-01
In this paper, we study the estimation of the k-dimensional sparse principal subspace of covariance matrix Σ in the high-dimensional setting. We aim to recover the oracle principal subspace solution, i.e., the principal subspace estimator obtained assuming the true support is known a priori. To this end, we propose a family of estimators based on the semidefinite relaxation of sparse PCA with novel regularizations. In particular, under a weak assumption on the magnitude of the population projection matrix, one estimator within this family exactly recovers the true support with high probability, has exact rank-k, and attains a s/n statistical rate of convergence with s being the subspace sparsity level and n the sample size. Compared to existing support recovery results for sparse PCA, our approach does not hinge on the spiked covariance model or the limited correlation condition. As a complement to the first estimator that enjoys the oracle property, we prove that, another estimator within the family achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA, even when the previous assumption on the magnitude of the projection matrix is violated. We validate the theoretical results by numerical experiments on synthetic datasets. PMID:25684971
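For intuition about recovering a sparse leading eigenvector, the sketch below uses the truncated power method on a spiked-covariance example. Note that this is a different (simpler) algorithm than the semidefinite relaxation analyzed in the paper, and the spiked model, which the paper explicitly does not require, is used here only to generate test data.

```python
import numpy as np

def truncated_power_iteration(Sigma, s, iters=200):
    """Leading sparse 'eigenvector': power iteration with hard truncation
    to the s largest-magnitude entries each step. (Truncated power method,
    not the semidefinite relaxation studied in the paper.)"""
    rng = np.random.default_rng(0)
    v = rng.normal(size=Sigma.shape[0]); v /= np.linalg.norm(v)
    for _ in range(iters):
        v = Sigma @ v
        keep = np.argsort(np.abs(v))[-s:]
        mask = np.zeros_like(v); mask[keep] = 1.0
        v *= mask
        v /= np.linalg.norm(v)
    return v

# Spiked covariance with a 10-sparse leading component (d=200, n=100 samples).
d, n, s = 200, 100, 10
u = np.zeros(d); u[:s] = 1 / np.sqrt(s)
rng = np.random.default_rng(1)
X = rng.normal(size=(n, d)) + 3.0 * rng.normal(size=(n, 1)) * u   # spiked data
Sigma_hat = X.T @ X / n
v = truncated_power_iteration(Sigma_hat, s)
print("support recovered:", set(np.nonzero(v)[0]) == set(range(s)))
```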
Improved analysis of SP and CoSaMP under total perturbations
NASA Astrophysics Data System (ADS)
Li, Haifeng
2016-12-01
Practically, in the underdetermined model y = Ax, where x is a K-sparse vector (i.e., it has no more than K nonzero entries), both y and A could be totally perturbed. A more relaxed condition means that fewer measurements are needed, from a theoretical standpoint, to ensure sparse recovery. In this paper, based on the restricted isometry property (RIP), two relaxed sufficient conditions are presented under total perturbations for subspace pursuit (SP) and compressed sampling matching pursuit (CoSaMP) to guarantee that the sparse vector x is recovered. Taking a random matrix as the measurement matrix, we also discuss the advantage of our condition. Numerical experiments validate that SP and CoSaMP can provide oracle-order recovery performance.
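For reference, here is a plain numpy implementation of CoSaMP in the unperturbed setting, the second of the two algorithms whose recovery conditions the paper relaxes; the problem sizes are arbitrary.

```python
import numpy as np

def cosamp(A, y, K, iters=30):
    """Compressed sampling matching pursuit.
    A: (m, n) measurement matrix, y: measurements, K: sparsity level."""
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(iters):
        r = y - A @ x
        proxy = A.T @ r                                   # signal proxy
        omega = np.argsort(np.abs(proxy))[-2 * K:]        # 2K largest entries
        T = np.union1d(omega, np.nonzero(x)[0])           # merge supports
        b, *_ = np.linalg.lstsq(A[:, T], y, rcond=None)   # least squares on T
        idx = np.argsort(np.abs(b))[-K:]                  # prune to K largest
        x = np.zeros(n)
        x[T[idx]] = b[idx]
        if np.linalg.norm(y - A @ x) < 1e-10:
            break
    return x

rng = np.random.default_rng(0)
m, n, K = 80, 256, 8
A = rng.normal(size=(m, n)) / np.sqrt(m)                  # random Gaussian matrix
x_true = np.zeros(n); x_true[rng.choice(n, K, replace=False)] = rng.normal(size=K)
x_hat = cosamp(A, A @ x_true, K)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

Under total perturbations, both y and A in this call would be noisy versions of the true quantities, which is exactly the regime the paper's RIP-based conditions address.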
Face recognition based on two-dimensional discriminant sparse preserving projection
NASA Astrophysics Data System (ADS)
Zhang, Dawei; Zhu, Shanan
2018-04-01
In this paper, a supervised dimensionality reduction algorithm named two-dimensional discriminant sparse preserving projection (2DDSPP) is proposed for face recognition. In order to accurately model the manifold structure of the data, 2DDSPP constructs a within-class affinity graph and a between-class affinity graph by solving constrained least squares (LS) and l1 norm minimization problems, respectively. Operating directly on the image matrix, 2DDSPP integrates graph embedding (GE) with the Fisher criterion. The obtained projection subspace preserves the within-class neighborhood geometry of the samples while keeping samples from different classes apart. Experimental results on the PIE and AR face databases show that 2DDSPP can achieve better recognition performance.
Weiss, Christian; Zoubir, Abdelhak M
2017-05-01
We propose a compressed sampling and dictionary learning framework for fiber-optic sensing using wavelength-tunable lasers. A redundant dictionary is generated from a model for the reflected sensor signal. Imperfect prior knowledge is considered in terms of uncertain local and global parameters. To estimate a sparse representation and the dictionary parameters, we present an alternating minimization algorithm that is equipped with a preprocessing routine to handle dictionary coherence. The support of the obtained sparse signal indicates the reflection delays, which can be used to measure impairments along the sensing fiber. The performance is evaluated by simulations and experimental data for a fiber sensor system with common core architecture.
Efficient large-scale graph data optimization for intelligent video surveillance
NASA Astrophysics Data System (ADS)
Shang, Quanhong; Zhang, Shujun; Wang, Yanbo; Sun, Chen; Wang, Zepeng; Zhang, Luming
2017-08-01
Society is rapidly adopting cameras in a wide variety of locations and applications: site traffic monitoring, parking lot surveillance, vehicles, and smart spaces. These cameras provide data every day that must be analyzed in an effective way. Recent advances in sensor manufacturing, communications, and computing are stimulating the development of new applications that transform traditional vision systems into pervasive smart camera networks. The analysis of visual cues in multi-camera networks enables a wide range of applications, from smart home and office automation to large-area surveillance and traffic monitoring. Much prior research has addressed dense camera networks, in which most cameras have large overlapping fields of view. Here we focus instead on sparse camera networks, which cover a large surveillance area with as few cameras as possible, so that most cameras do not overlap each other's field of view. This setting is challenging because of the lack of knowledge of the network topology, the pronounced changes in target appearance and motion across different views, and the difficulty of understanding complex events in the network. In this review, we present a comprehensive survey of recent results on topology learning, object appearance modeling, and global activity understanding in sparse camera networks. In addition, some current open research issues are discussed.
NASA Astrophysics Data System (ADS)
Gong, Maoguo; Yang, Hailun; Zhang, Puzhao
2017-07-01
Ternary change detection aims to detect changes and group them into positive and negative changes. It is of great significance in the joint interpretation of spatial-temporal synthetic aperture radar images. In this study, a sparse autoencoder, convolutional neural networks (CNN) and unsupervised clustering are combined to solve the ternary change detection problem without any supervision. Firstly, the sparse autoencoder is used to transform the log-ratio difference image into a suitable feature space for extracting key changes and suppressing outliers and noise. Then the learned features are clustered into three classes, which are taken as the pseudo labels for training a CNN model as a change feature classifier. The reliable training samples for the CNN are selected from the feature maps learned by the sparse autoencoder with certain selection rules. Given the training samples and the corresponding pseudo labels, the CNN model can be trained by back propagation with stochastic gradient descent. During its training procedure, the CNN is driven to learn the concept of change, and a more powerful model is established to distinguish different types of changes. Unlike traditional methods, the proposed framework integrates the merits of the sparse autoencoder and CNN to learn more robust difference representations and the concept of change for ternary change detection. Experimental results on real datasets validate the effectiveness and superiority of the proposed framework.
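A minimal sketch of the first stage, a sparse autoencoder learning features from patches of a log-ratio difference image (assuming PyTorch; the L1 activation penalty, patch size, and random stand-in data are illustrative, not the authors' settings):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
patches = torch.randn(1000, 49)               # stand-in for 7x7 log-ratio patches

class SparseAE(nn.Module):
    def __init__(self, n_in=49, n_hidden=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.dec = nn.Linear(n_hidden, n_in)
    def forward(self, x):
        h = self.enc(x)
        return self.dec(h), h

model = SparseAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
beta = 1e-3                                    # weight of the sparsity penalty
for epoch in range(50):
    recon, h = model(patches)
    # Reconstruction error plus L1 penalty on hidden activations -> sparse codes.
    loss = ((recon - patches) ** 2).mean() + beta * h.abs().mean()
    opt.zero_grad(); loss.backward(); opt.step()

features = model.enc(patches).detach()         # features to cluster into pseudo labels
print(features.shape)
```

The clustered features would then provide the three pseudo labels used to train the CNN classifier in the second stage.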
Stalder, Aurelien F; Schmidt, Michaela; Quick, Harald H; Schlamann, Marc; Maderwald, Stefan; Schmitt, Peter; Wang, Qiu; Nadar, Mariappan S; Zenge, Michael O
2015-12-01
To integrate, optimize, and evaluate a three-dimensional (3D) contrast-enhanced sparse MRA technique with iterative reconstruction on a standard clinical MR system. Data were acquired using a highly undersampled Cartesian spiral phyllotaxis sampling pattern and reconstructed directly on the MR system with an iterative SENSE technique. Undersampling, regularization, and the number of iterations of the reconstruction were optimized and validated based on phantom experiments and patient data. Sparse MRA of the whole head (field of view: 265 × 232 × 179 mm³) was investigated in 10 patient examinations. High-quality images with 30-fold undersampling, resulting in 0.7 mm isotropic resolution within a 10 s acquisition, were obtained. After optimization of the regularization factor and of the number of iterations of the reconstruction, it was possible to reconstruct images with excellent quality within six minutes per 3D volume. Initial results of sparse contrast-enhanced MRA (CEMRA) in 10 patients demonstrated high-quality whole-head first-pass MRA for both the arterial and venous contrast phases. While sparse MRI techniques have not yet reached clinical routine, this study demonstrates the technical feasibility of high-quality sparse CEMRA of the whole head in a clinical setting. Sparse CEMRA has the potential to become a viable alternative where conventional CEMRA is too slow or does not provide sufficient spatial resolution. © 2014 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Ahmed, H. O. A.; Wong, M. L. D.; Nandi, A. K.
2018-01-01
Condition classification of rolling element bearings in rotating machines is important to prevent the breakdown of industrial machinery. A considerable amount of literature has been published on bearing fault classification. These studies aim to determine automatically the current status of a roller element bearing. Of these studies, methods based on compressed sensing (CS) have received some attention recently due to their ability to allow one to sample below the Nyquist sampling rate. This technology has many possible uses in machine condition monitoring and has been investigated as a possible approach for fault detection and classification in the compressed domain, i.e., without reconstructing the original signal. However, previous CS-based methods have been found to be too weak for highly compressed data. The present paper explores computationally, for the first time, the effects of sparse autoencoder based over-complete sparse representations on the classification performance of highly compressed measurements of bearing vibration signals. For this study, the CS method was used to produce highly compressed measurements of the original bearing dataset. Then, an effective deep neural network (DNN) with an unsupervised feature learning algorithm based on a sparse autoencoder is used for learning over-complete sparse representations of these compressed datasets. Finally, fault classification is achieved in two stages: pre-training based on a stacked autoencoder and a softmax regression layer forms the deep-net stage (the first stage), and re-training based on the backpropagation (BP) algorithm forms the fine-tuning stage (the second stage). The experimental results show that the proposed method is able to achieve high levels of accuracy even with extremely compressed measurements compared with the existing techniques.
Statistical regularities of art images and natural scenes: spectra, sparseness and nonlinearities.
Graham, Daniel J; Field, David J
2007-01-01
Paintings are the product of a process that begins with ordinary vision in the natural world and ends with manipulation of pigments on canvas. Because artists must produce images that can be seen by a visual system that is thought to take advantage of statistical regularities in natural scenes, artists are likely to replicate many of these regularities in their painted art. We have tested this notion by computing basic statistical properties and modeled cell response properties for a large set of digitized paintings and natural scenes. We find that both representational and non-representational (abstract) paintings from our sample (124 images) show basic similarities to a sample of natural scenes in terms of their spatial frequency amplitude spectra, but the paintings and natural scenes show significantly different mean amplitude spectrum slopes. We also find that the intensity distributions of paintings show a lower skewness and sparseness than natural scenes. We account for this by considering the range of luminances found in the environment compared to the range available in the medium of paint. A painting's range is limited by the reflective properties of its materials. We argue that artists do not simply scale the intensity range down but use a compressive nonlinearity. In our studies, modeled retinal and cortical filter responses to the images were less sparse for the paintings than for the natural scenes. But when a compressive nonlinearity was applied to the images, both the paintings' sparseness and the modeled responses to the paintings showed the same or greater sparseness compared to the natural scenes. This suggests that artists achieve some degree of nonlinear compression in their paintings. Because paintings have captivated humans for millennia, finding basic statistical regularities in paintings' spatial structure could grant insights into the range of spatial patterns that humans find compelling.
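The two statistics central to this analysis, the amplitude-spectrum slope and intensity skewness, can be computed as follows (a sketch on synthetic data; the radial-binning details and white-noise test image are choices made here, not the authors' exact procedure):

```python
import numpy as np

def amplitude_spectrum_slope(img):
    """Fit the log-log slope of the radially averaged amplitude spectrum."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)
    radial = np.bincount(r.ravel(), weights=F.ravel()) / np.bincount(r.ravel())
    f = np.arange(1, min(h, w) // 2)          # skip DC, stay below Nyquist
    slope, _ = np.polyfit(np.log(f), np.log(radial[f]), 1)
    return slope

def skewness(img):
    x = img.ravel().astype(float)
    x = x - x.mean()
    return (x ** 3).mean() / (x ** 2).mean() ** 1.5

rng = np.random.default_rng(7)
img = rng.standard_normal((128, 128))
print(amplitude_spectrum_slope(img), skewness(img))   # white noise: slope ~ 0, skew ~ 0
```

Natural scenes typically show slopes near -1 (i.e., roughly 1/f amplitude spectra), which is the baseline the paintings are compared against.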
Sparse feature learning for instrument identification: Effects of sampling and pooling methods.
Han, Yoonchang; Lee, Subin; Nam, Juhan; Lee, Kyogu
2016-05-01
Feature learning for music applications has recently received considerable attention from many researchers. This paper reports on a sparse feature learning algorithm for musical instrument identification, and in particular focuses on the effects of the frame sampling techniques for dictionary learning and the pooling methods for feature aggregation. To this end, two frame sampling techniques are examined: fixed random sampling and proportional random sampling. Furthermore, the effect of using onset frames is analyzed for both sampling methods. For summarization of the feature activations, a standard deviation pooling method is used and compared with the commonly used max- and average-pooling techniques. Using more than 47 000 recordings of 24 instruments from various performers, playing styles, and dynamics, a number of tuning parameters are experimented with, including the analysis frame size, the dictionary size, and the type of frequency scaling, as well as the different sampling and pooling methods. The results show that the combination of proportional sampling and standard deviation pooling achieves the best overall performance of 95.62%, while the optimal parameter set varies among the instrument classes.
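The three pooling functions compared here reduce a frames-by-features activation matrix to one clip-level vector; a minimal sketch with synthetic activations (not the study's learned features):

```python
import numpy as np

rng = np.random.default_rng(8)
activations = rng.gamma(1.0, 1.0, size=(500, 64))  # frames x sparse-code features

max_pool = activations.max(axis=0)     # peak activation per feature
avg_pool = activations.mean(axis=0)    # average activation per feature
std_pool = activations.std(axis=0)     # variability: the std pooling studied here

clip = np.hstack([max_pool, avg_pool, std_pool])   # candidate clip-level features
print(clip.shape)                                   # (192,)
```

Standard deviation pooling captures how much a feature's activation fluctuates over time, which is information that max and average pooling discard.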
Optimal sparse approximation with integrate and fire neurons.
Shapero, Samuel; Zhu, Mengchen; Hasler, Jennifer; Rozell, Christopher
2014-08-01
Sparse approximation is a hypothesized coding strategy in which a population of sensory neurons (e.g. V1) encodes a stimulus using as few active neurons as possible. We present the Spiking LCA (locally competitive algorithm), a rate-encoded Spiking Neural Network (SNN) of integrate-and-fire neurons that calculates sparse approximations. The Spiking LCA is designed to be equivalent to the nonspiking LCA, an analog dynamical system that converges exponentially on ℓ1-norm sparse approximations. We show that the firing rate of the Spiking LCA converges on the same solution as the analog LCA, with an error inversely proportional to the sampling time. We simulate in NEURON a network of 128 neuron pairs that encodes 8 × 8 pixel image patches, demonstrating that the network converges to nearly optimal encodings within 20 ms of biological time. We also show that when more biophysically realistic parameters are used in the neurons, the gain function encourages additional ℓ0-norm sparsity in the encoding, relative both to ideal neurons and digital solvers.
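The analog (nonspiking) LCA that the spiking network is designed to match can be simulated directly; the sketch below implements the standard LCA dynamics with soft thresholding, under assumed values of the time constant and threshold (not the paper's NEURON model):

```python
import numpy as np

rng = np.random.default_rng(9)
Phi = rng.standard_normal((64, 128))
Phi /= np.linalg.norm(Phi, axis=0)         # unit-norm dictionary (8x8 patch basis)
x = Phi[:, rng.choice(128, 4, replace=False)] @ rng.standard_normal(4)  # 4-sparse stimulus

lam, tau, dt = 0.1, 10e-3, 1e-4            # threshold, time constant, step (seconds)
u = np.zeros(128)                           # membrane-like internal states
b = Phi.T @ x                               # feedforward drive
G = Phi.T @ Phi - np.eye(128)               # lateral inhibition weights
for _ in range(5000):                       # 0.5 s of simulated time
    a = np.sign(u) * np.maximum(np.abs(u) - lam, 0)   # soft-threshold activation
    u += (dt / tau) * (b - u - G @ a)       # standard LCA ODE
a = np.sign(u) * np.maximum(np.abs(u) - lam, 0)
print((a != 0).sum(), np.linalg.norm(x - Phi @ a))    # few active units, low residual
```

The soft-threshold nonlinearity is what makes the fixed points of these dynamics coincide with ℓ1-regularized sparse approximations.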
Efficient Sparse Signal Transmission over a Lossy Link Using Compressive Sensing
Wu, Liantao; Yu, Kai; Cao, Dongyu; Hu, Yuhen; Wang, Zhi
2015-01-01
Reliable data transmission over a lossy communication link is expensive due to overheads for error protection. For signals that have inherent sparse structures, compressive sensing (CS) is applied to facilitate efficient sparse signal transmission over lossy communication links without data compression or error protection. The natural packet loss in the lossy link is modeled as a random sampling process of the transmitted data, and the original signal is reconstructed from the lossy transmission results using a CS-based reconstruction method at the receiving end. The impacts of packet lengths on transmission efficiency under different channel conditions are discussed, and interleaving is incorporated to mitigate the impact of burst data loss. Extensive simulations and experiments have been conducted and compared to the traditional automatic repeat request (ARQ) interpolation technique, and very favorable results have been observed in terms of both the accuracy of the reconstructed signals and the transmission energy consumption. Furthermore, the packet length effect provides useful insights for using compressed sensing for efficient sparse signal transmission via lossy links. PMID:26287195
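The paper's transmission model treats packet loss as random sampling of the sent data; the toy end-to-end sketch below uses a DCT-sparse test signal and a plain ISTA solver (both illustrative choices, not the paper's exact setup):

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(10)
n = 256
coeffs = np.zeros(n)
coeffs[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
signal = idct(coeffs, norm='ortho')           # signal sparse in the DCT domain

keep = rng.random(n) > 0.5                    # ~50% packet loss = random sampling
Psi = idct(np.eye(n), axis=0, norm='ortho')   # DCT synthesis matrix (signal = Psi @ coeffs)
A = Psi[keep, :]                              # sensing matrix seen by the receiver
y = signal[keep]                              # samples that survived the link

# Plain ISTA: x <- soft(x + step * A^T (y - A x), step * lam)
step, lam = 1.0 / np.linalg.norm(A, 2) ** 2, 1e-3
x = np.zeros(n)
for _ in range(2000):
    x = x + step * A.T @ (y - A @ x)
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0)
print(np.linalg.norm(idct(x, norm='ortho') - signal))  # small reconstruction error
```

No retransmissions are needed: the receiver reconstructs the full signal from whatever packets arrived.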
NASA Astrophysics Data System (ADS)
Mei, Kai; Kopp, Felix K.; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Kirschke, Jan S.; Noël, Peter B.; Baum, Thomas
2017-03-01
The trabecular bone microstructure is a key to the early diagnosis and advanced therapy monitoring of osteoporosis. Regularly measuring bone microstructure with conventional multi-detector computed tomography (MDCT) would expose patients to a relatively high radiation dose. One possible solution for reducing patient exposure is to sample fewer projection angles. This approach can be supported by advanced reconstruction algorithms, which can achieve better image quality under reduced projection angles or high levels of noise. In this work, we investigated the performance of iterative reconstruction from sparsely sampled projection data on trabecular bone microstructure in in-vivo MDCT scans of human spines. The computed MDCT images were evaluated by calculating bone microstructure parameters. We demonstrated that bone microstructure parameters remained computationally distinguishable when half or less of the radiation dose was employed.
Sparse representation based image interpolation with nonlocal autoregressive modeling.
Dong, Weisheng; Zhang, Lei; Lukac, Rastislav; Shi, Guangming
2013-04-01
Sparse representation is proven to be a promising approach to image super-resolution, where the low-resolution (LR) image is usually modeled as the down-sampled version of its high-resolution (HR) counterpart after blurring. When the blurring kernel is the Dirac delta function, i.e., the LR image is directly down-sampled from its HR counterpart without blurring, the super-resolution problem becomes an image interpolation problem. In such cases, however, the conventional sparse representation models (SRM) become less effective, because the data fidelity term fails to constrain the image local structures. In natural images, fortunately, many nonlocal similar patches to a given patch could provide nonlocal constraint to the local structure. In this paper, we incorporate the image nonlocal self-similarity into SRM for image interpolation. More specifically, a nonlocal autoregressive model (NARM) is proposed and taken as the data fidelity term in SRM. We show that the NARM-induced sampling matrix is less coherent with the representation dictionary, and consequently makes SRM more effective for image interpolation. Our extensive experimental results demonstrate that the proposed NARM-based image interpolation method can effectively reconstruct the edge structures and suppress the jaggy/ringing artifacts, achieving the best image interpolation results so far in terms of PSNR as well as perceptual quality metrics such as SSIM and FSIM.
Epileptic Seizure Detection with Log-Euclidean Gaussian Kernel-Based Sparse Representation.
Yuan, Shasha; Zhou, Weidong; Wu, Qi; Zhang, Yanli
2016-05-01
Epileptic seizure detection plays an important role in the diagnosis of epilepsy and in reducing the massive workload of reviewing electroencephalography (EEG) recordings. In this work, a novel algorithm is developed to detect seizures employing log-Euclidean Gaussian kernel-based sparse representation (SR) in long-term EEG recordings. Unlike the traditional SR for vector data in Euclidean space, the log-Euclidean Gaussian kernel-based SR framework is proposed for seizure detection in the space of symmetric positive definite (SPD) matrices, which form a Riemannian manifold. Since the Riemannian manifold is nonlinear, the log-Euclidean Gaussian kernel function is applied to embed it into a reproducing kernel Hilbert space (RKHS) for performing SR. The EEG signals of all channels are divided into epochs, and the SPD matrices representing EEG epochs are generated by covariance descriptors. Then, the testing samples are sparsely coded over the dictionary composed of training samples utilizing log-Euclidean Gaussian kernel-based SR. The classification of testing samples is achieved by computing the minimal reconstruction residuals. The proposed method is evaluated on the Freiburg EEG dataset of 21 patients and shows notable performance on both epoch-based and event-based assessments. Moreover, this method handles multiple channels of EEG recordings synchronously, which is faster and more efficient than traditional seizure detection methods.
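The kernel at the heart of this framework can be written compactly (a sketch, not the authors' implementation; the bandwidth σ and the toy covariance descriptors are arbitrary choices):

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_gaussian_kernel(X, Y, sigma=1.0):
    """k(X, Y) = exp(-||logm(X) - logm(Y)||_F^2 / (2 sigma^2)) for SPD X, Y."""
    D = (logm(X) - logm(Y)).real          # matrix logs map the manifold to a flat space
    return np.exp(-np.linalg.norm(D, 'fro') ** 2 / (2.0 * sigma ** 2))

# Toy SPD descriptors: covariance matrices of random multichannel EEG epochs.
rng = np.random.default_rng(1)
epoch_a = rng.standard_normal((8, 256))   # 8 channels x 256 samples
epoch_b = rng.standard_normal((8, 256))
Ca = np.cov(epoch_a) + 1e-6 * np.eye(8)   # jitter keeps the matrices positive definite
Cb = np.cov(epoch_b) + 1e-6 * np.eye(8)
print(log_euclidean_gaussian_kernel(Ca, Cb, sigma=5.0))
```

Because distances are taken between matrix logarithms, the kernel respects the curved geometry of SPD matrices while still yielding a valid RKHS for sparse coding.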
Sparse aperiodic arrays for optical beam forming and LIDAR.
Komljenovic, Tin; Helkey, Roger; Coldren, Larry; Bowers, John E
2017-02-06
We analyze optical phased arrays with aperiodic pitch and element-to-element spacing greater than one wavelength at channel counts exceeding hundreds of elements. We optimize the spacing between waveguides for highest side-mode suppression providing grating lobe free steering in full visible space while preserving the narrow beamwidth. Optimum waveguide placement strategies are derived and design guidelines for sparse (> 1.5 λ and > 3 λ average element spacing) optical phased arrays are given. Scaling to larger array areas by means of tiling is considered.
Ye, Qing; Pan, Hao; Liu, Changhua
2015-01-01
This research proposes a novel framework for final drive simultaneous failure diagnosis comprising feature extraction, training of paired diagnostic models, generation of a decision threshold, and recognition of simultaneous failure modes. The feature extraction module adopts the wavelet packet transform and fuzzy entropy to reduce noise interference and extract representative features of each failure mode. Single-failure samples are used to construct probabilistic classifiers based on a paired sparse Bayesian extreme learning machine, which is trained only on single failure modes and inherits the high generalization and sparsity of the sparse Bayesian learning approach. To generate the optimal decision threshold that converts the probabilistic outputs of the classifiers into final simultaneous failure modes, this research uses samples containing both single and simultaneous failure modes together with a grid search method, which is superior to traditional techniques in global optimization. Compared with other frequently used diagnostic approaches based on support vector machines and probabilistic neural networks, experimental results based on the F1-measure verify that the diagnostic accuracy and efficiency of the proposed framework, which are crucial for simultaneous failure diagnosis, are superior to those of the existing approaches. PMID:25722717
Robustness-Based Design Optimization Under Data Uncertainty
NASA Technical Reports Server (NTRS)
Zaman, Kais; McDonald, Mark; Mahadevan, Sankaran; Green, Lawrence
2010-01-01
This paper proposes formulations and algorithms for design optimization under both aleatory (i.e., natural or physical variability) and epistemic uncertainty (i.e., imprecise probabilistic information), from the perspective of system robustness. The proposed formulations deal with epistemic uncertainty arising from both sparse and interval data without any assumption about the probability distributions of the random variables. A decoupled approach is proposed in this paper to un-nest the robustness-based design from the analysis of non-design epistemic variables to achieve computational efficiency. The proposed methods are illustrated for the upper stage design problem of a two-stage-to-orbit (TSTO) vehicle, where the information on the random design inputs is only available as sparse point and/or interval data. As collecting more data reduces uncertainty but increases cost, the effect of sample size on the optimality and robustness of the solution is also studied. A method is developed to determine the optimal sample size for sparse point data that leads to solutions of the design problem that are least sensitive to variations in the input random variables.
Interest focuses on exploratory areas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stremel, K.
1984-10-01
Speculative geophysical programs are underway in sparsely drilled areas throughout the southern Rocky Mountain region. Responding to significant operator interest generated by new production in Nevada, a few contractors are designing programs to establish optimum recording parameters. Geophysical exploration activities in Colorado and Utah are discussed.
Igawa, Satomi; Kishibe, Mari; Honma, Masaru; Murakami, Masamoto; Mizuno, Yuki; Suga, Yasushi; Seishima, Mariko; Ohguchi, Yuka; Akiyama, Masashi; Hirose, Kenji; Ishida-Yamamoto, Akemi; Iizuka, Hajime
2013-10-01
Atopic dermatitis (AD), Netherton syndrome (NS) and peeling skin syndrome type B (PSS) may show some clinical phenotypic overlap. Corneodesmosomes are crucial for maintaining stratum corneum integrity, and the localization of their components can be visualized by immunostaining tape-stripped corneocytes. In normal skin, they are detected at the cell periphery. The aim was to determine whether AD, NS, PSS and ichthyosis vulgaris (IV) show differences in the distribution of corneodesmosomal components and in corneocyte surface areas. Corneocytes were tape-stripped from a control group (n=12) and a disease group (37 AD cases, 3 IV cases, 4 NS cases, and 3 PSS cases), and analyzed with immunofluorescence microscopy. The distribution patterns of the corneodesmosomal components desmoglein 1, corneodesmosin, and desmocollin 1 were classified into four types: peripheral, sparse diffuse, dense diffuse and partial diffuse. Corneocyte surface areas were also measured. The corneodesmosome staining patterns were abnormal in the disease group. Other than in the 3 PSS cases, all three components showed similar patterns in each category. In lesional AD skin, the dense diffuse pattern was prominent. A high rate of the partial diffuse pattern, loss of linear cell-cell contacts, and irregular stripping patterns were unique to NS. Only in PSS was corneodesmosin staining virtually absent. The corneocyte surface areas correlated significantly with the rate of combined sparse and dense diffuse patterns of desmoglein 1. This method may be used to assess abnormally differentiated corneocytes in AD and the other diseases tested. In PSS samples, tape stripping analysis may serve as a non-invasive diagnostic test. Copyright © 2013 Japanese Society for Investigative Dermatology. Published by Elsevier Ireland Ltd. All rights reserved.
Gallé, Róbert; Urák, István; Nikolett, Gallé-Szpisjak; Hartel, Tibor
2017-01-01
The integration of food production and biodiversity conservation represents a key challenge for sustainability. Several studies suggest that even small structural elements in the landscape can make a substantial contribution to the overall biodiversity value of agricultural landscapes. Pastures can have high biodiversity potential. However, their intensive and monofunctional use typically erodes their natural capital, including biodiversity. Here we address the ecological value of fine-scale structural elements, represented by sparsely scattered trees and shrubs, for the spider communities of a moderately intensively grazed pasture in Transylvania, Eastern Europe. The pasture was grazed by sheep, cattle and buffalo (ca 1 Livestock Unit ha⁻¹) and no chemical fertilizers were applied. Sampling sites covered the open pasture as well as the existing fine-scale heterogeneity created by scattered trees and shrubs. Forty sampling locations, each represented by three 1 m² quadrats, were situated in a stratified design while assuring the spatial independence of sampling locations. We identified 140 species of spiders, of which 18 were red-listed and four were new for the Romanian fauna. Spider species assemblages of the open pasture, scattered trees, trees and shrubs, and the forest edge were statistically distinct. Our study shows that sparsely scattered mature woody vegetation and shrubs substantially increase the ecological value of managed pastures. The structural complexity provided by scattered trees and shrubs makes the co-occurrence of high spider diversity with moderately high intensity grazing possible in this wood-pasture. Our results are in line with recent empirical research showing that sparse trees and shrubs increase the biodiversity potential of pastures managed for commodity production.
Inventory of trees in nonforest areas in the Great Plains states
Andrew Lister; Chip Scott; Steve Rasmussen
2009-01-01
The U.S. Forest Service's Forest Inventory and Analysis (FIA) program collects information on trees in areas that meet its definition of forest. However, the inventory excludes trees in areas that do not meet this definition, such as those found in isolated patches, in areas with sparse or predominantly herbaceous vegetation, in narrow strips (e.g., shelterbelts...
Completing sparse and disconnected protein-protein network by deep learning.
Huang, Lei; Liao, Li; Wu, Cathy H
2018-03-22
Protein-protein interaction (PPI) prediction remains a central task in systems biology for achieving a better and holistic understanding of cellular and intracellular processes. Recently, an increasing number of computational methods have shifted from pair-wise prediction to network-level prediction. Many of the existing network-level methods predict PPIs under the assumption that the training network is connected. However, this assumption greatly affects prediction power and limits the application area, because the current gold-standard PPI networks are usually very sparse and disconnected. Therefore, how to effectively predict PPIs based on a training network that is sparse and disconnected remains a challenge. In this work, we developed a novel PPI prediction method based on a deep learning neural network and the regularized Laplacian kernel. We use a neural network with an autoencoder-like architecture to implicitly simulate the evolutionary processes of a PPI network. Neurons of the output layer correspond to proteins and are labeled with values (1 for interaction and 0 otherwise) from the adjacency matrix of a sparse, disconnected training PPI network. Unlike an autoencoder, neurons at the input layer are given all-zero input, reflecting an assumption of no a priori knowledge about PPIs, and hidden layers of smaller sizes mimic the ancient interactome at different times during evolution. After the training step, an evolved PPI network, whose rows are outputs of the neural network, can be obtained. We then predict PPIs by applying the regularized Laplacian kernel to the transition matrix that is built upon the evolved PPI network. The results from cross-validation experiments show that the PPI prediction accuracies for yeast data and human data, measured as AUC, are increased by up to 8.4% and 14.9%, respectively, as compared to the baseline. Moreover, the evolved PPI network can also help us leverage complementary information from the disconnected training network and multiple heterogeneous data sources. Tested on the yeast data with six heterogeneous feature kernels, the results show that our method can further improve the prediction performance by up to 2%, which is very close to an upper bound obtained by an Approximate Bayesian Computation based sampling method. The proposed evolution deep neural network, coupled with the regularized Laplacian kernel, is an effective tool for completing sparse and disconnected PPI networks and for facilitating the integration of heterogeneous data sources.
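The kernel step can be illustrated in a few lines (a sketch, not the authors' code; the regularized Laplacian kernel is taken in its common form K = (I + αL)⁻¹, and the 4-node toy network is invented for the demo):

```python
import numpy as np

def regularized_laplacian_kernel(A, alpha=0.1):
    """K = (I + alpha * L)^(-1), with L = D - A the graph Laplacian of adjacency A."""
    L = np.diag(A.sum(axis=1)) - A
    n = A.shape[0]
    return np.linalg.inv(np.eye(n) + alpha * L)

# Toy 4-protein network: 0-1 and 2-3 interact; the graph is disconnected.
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
K = regularized_laplacian_kernel(A, alpha=0.5)
# Higher kernel values between non-adjacent nodes suggest candidate interactions.
print(np.round(K, 3))
```

In the paper's pipeline the kernel is applied to the denser evolved network produced by the neural network, which is what lets it bridge the disconnected components of the training data.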
Newton-based optimization for Kullback-Leibler nonnegative tensor factorizations
Plantenga, Todd; Kolda, Tamara G.; Hansen, Samantha
2015-04-30
Tensor factorizations with nonnegativity constraints have found application in analysing data from cyber traffic, social networks, and other areas. We consider application data best described as being generated by a Poisson process (e.g. count data), which leads to sparse tensors that can be modelled by sparse factor matrices. In this paper, we investigate efficient techniques for computing an appropriate canonical polyadic tensor factorization based on the Kullback–Leibler divergence function. We propose novel subproblem solvers within the standard alternating block variable approach. Our new methods exploit structure and reformulate the optimization problem as small independent subproblems. We employ bound-constrained Newton and quasi-Newton methods. Finally, we compare our algorithms against other codes, demonstrating superior speed for high accuracy results and the ability to quickly find sparse solutions.
Augmented l1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm. Revision 1
2012-10-17
[Only fragments of this report's abstract were extracted, interleaved with figure-caption text. The recoverable content: test signals x0 had nonzero entries sampled from the standard Gaussian distribution (Figure 2) or the Bernoulli distribution (Figure 3) under the same sensing setup; the figures compare the convergence of the primal and dual variables of three algorithms, and convergence was faster on the Bernoulli sparse signal than on the Gaussian one.]
A practical modification of horizontal line sampling for snag and cavity tree inventory
M. J. Ducey; G. J. Jordan; J. H. Gove; H. T. Valentine
2002-01-01
Snags and cavity trees are important structural features in forests, but they are often sparsely distributed, making efficient inventories problematic. We present a straightforward modification of horizontal line sampling designed to facilitate inventory of these features while remaining compatible with commonly employed sampling methods for the living overstory. The...
Miniature Laboratory for Detecting Sparse Biomolecules
NASA Technical Reports Server (NTRS)
Lin, Ying; Yu, Nan
2005-01-01
A miniature laboratory system has been proposed for use in the field to detect sparsely distributed biomolecules. By emphasizing concentration and sorting of specimens prior to detection, the underlying system concept would make it possible to attain high detection sensitivities without the need to develop ever more sensitive biosensors. The original purpose of the proposal is to aid the search for signs of life on a remote planet by enabling the detection of specimens as sparse as a few molecules or microbes in a large amount of soil, dust, rocks, water/ice, or other raw sample material. Some version of the system could prove useful on Earth for remote sensing of biological contamination, including agents of biological warfare. Processing in this system would begin with dissolution of the raw sample material in a sample-separation vessel. The solution in the vessel would contain floating microscopic magnetic beads coated with substances that could engage in chemical reactions with various target functional groups that are parts of target molecules. The chemical reactions would cause the targeted molecules to be captured on the surfaces of the beads. By use of a controlled magnetic field, the beads would be concentrated in a specified location in the vessel. Once the beads were thus concentrated, the rest of the solution would be discarded. This procedure would obviate the filtration steps and thereby also eliminate the filter-clogging difficulties of typical prior sample-concentration schemes. For ferrous dust/soil samples, the dissolution would be done first in a separate vessel before the solution is transferred to the microbead-containing vessel.
Energy Efficient GNSS Signal Acquisition Using Singular Value Decomposition (SVD).
Bermúdez Ordoñez, Juan Carlos; Arnaldo Valdés, Rosa María; Gómez Comendador, Fernando
2018-05-16
A significant challenge in global navigation satellite system (GNSS) signal processing is the requirement for a very high sampling rate. The recently emerging compressed sensing (CS) theory makes processing GNSS signals at a low sampling rate possible if the signal has a sparse representation in a certain space. Based on CS and SVD theories, an algorithm for sampling GNSS signals at a rate much lower than the Nyquist rate and reconstructing the compressed signal is proposed in this research; it is validated by confirming that the output of this process still supports signal detection using the standard fast Fourier transform (FFT) parallel frequency-space search acquisition. A sparse representation of the GNSS signal is the most important precondition for CS; it is achieved by constructing a rectangular Toeplitz matrix (TZ) from the transmitted signal and calculating its left singular vectors using the SVD. Next, the M-dimensional observation vectors are obtained from the left singular vectors of the SVD, which play the role of the sampling operator in standard compressive sensing theory; the signal can thus be sampled below the Nyquist rate and still be reconstructed accurately via ℓ1 minimization using convex optimization. As an added value, there is a GNSS signal acquisition enhancement effect: the useful signal is retained and noise is filtered out by projecting the signal onto the most significant proper orthogonal modes (PODs), which are the optimal distributions of signal power. The algorithm is validated with real recorded signals, and the results show that the proposed method is effective for sampling and reconstructing intermediate frequency (IF) GNSS signals in the discrete time domain.
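The measurement construction described here can be sketched as follows (an illustrative toy, not the paper's implementation; the code length, number of retained modes, and delayed-signal model are made up for the demo):

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(2)
code = rng.choice([-1.0, 1.0], size=64)       # stand-in for a transmitted PRN code
n = code.size

# Rectangular Toeplitz matrix whose columns are shifted copies of the code.
T = toeplitz(code, np.r_[code[0], code[:0:-1]])    # n x n, circulant-style shifts

# Left singular vectors of T give an energy-ordered basis for the signal space.
U, s, Vt = np.linalg.svd(T)
m = 16                                         # keep only the top modes (M << n)
Phi = U[:, :m].T                               # m x n measurement operator

received = np.roll(code, 7) + 0.1 * rng.standard_normal(n)   # delayed code + noise
y = Phi @ received                             # reduced-rate measurements
print(y.shape)                                 # (16,) -- 4x fewer samples kept
```

Projecting onto the leading singular vectors concentrates the code's power into a few modes, which is also what provides the noise-filtering side effect mentioned in the abstract.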
Long, D.A.; Arp, C.D.
2011-01-01
Growing visitor traffic and resource use, as well as natural and anthropogenic land and climatic changes, can place increasing stress on lake ecosystems in Denali National Park and Preserve. Baseline data required to substantiate impact assessment in this sub-arctic region is sparse to non-existent. The U.S. Geological Survey, in cooperation with the National Park Service, conducted a water-quality assessment of several large lakes in and around the Park from June 2006 to August 2008. Discrete water-quality samples, lake profiles of pH, specific conductivity, dissolved-oxygen concentration, water temperature, turbidity, and continuous-record temperature profile data were collected from Wonder Lake, Chilchukabena Lake, and Lake Minchumina. In addition, zooplankton, snow chemistry data, fecal coliform, and inflow/outflow water-quality samples also were collected from Wonder Lake.
Semi-blind sparse image reconstruction with application to MRFM.
Park, Se Un; Dobigeon, Nicolas; Hero, Alfred O
2012-09-01
We propose a solution to the image deconvolution problem where the convolution kernel or point spread function (PSF) is assumed to be only partially known. Small perturbations generated from the model are exploited to produce a few principal components explaining the PSF uncertainty in a high-dimensional space. Unlike recent developments on blind deconvolution of natural images, we assume the image is sparse in the pixel basis, a natural sparsity arising in magnetic resonance force microscopy (MRFM). Our approach adopts a Bayesian Metropolis-within-Gibbs sampling framework. The performance of our Bayesian semi-blind algorithm for sparse images is superior to previously proposed semi-blind algorithms such as the alternating minimization algorithm and blind algorithms developed for natural images. We illustrate our myopic algorithm on real MRFM tobacco virus data.
Pisharady, Pramod Kumar; Sotiropoulos, Stamatios N; Sapiro, Guillermo; Lenglet, Christophe
2017-09-01
We propose a sparse Bayesian learning algorithm for improved estimation of white matter fiber parameters from compressed (under-sampled q-space) multi-shell diffusion MRI data. The multi-shell data is represented in a dictionary form using a non-monoexponential decay model of diffusion, based on a continuous gamma distribution of diffusivities. The fiber volume fractions with predefined orientations, which are the unknown parameters, form the dictionary weights. These unknown parameters are estimated with a linear un-mixing framework, using a sparse Bayesian learning algorithm. A localized learning of hyperparameters at each voxel and for each possible fiber orientation improves the parameter estimation. Our experiments using synthetic data from the ISBI 2012 HARDI reconstruction challenge and in-vivo data from the Human Connectome Project demonstrate the improvements.
Estimation of sampling error uncertainties in observed surface air temperature change in China
NASA Astrophysics Data System (ADS)
Hua, Wei; Shen, Samuel S. P.; Weithmann, Alexander; Wang, Huijun
2017-08-01
This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear in the station-sparse areas of northern and western China, with maximum values exceeding 2.0 K², while small sampling error variances are found in the station-dense areas of southern and eastern China, with most grid values below 0.05 K². In general, negative temperature anomalies existed in each month prior to the 1980s, after which a warming began that accelerated in the early and mid-1990s. An increasing trend in the SAT series was observed for each month of the year, with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 year)⁻¹ occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 year)⁻¹ in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of persistent warming in China. In addition, the sampling error uncertainties in the SAT series differ clearly from those obtained by other uncertainty estimation methods, which is a plausible reason for the inconsistencies between our estimate and other studies during this period.
Improving EEG-Based Driver Fatigue Classification Using Sparse-Deep Belief Networks.
Chai, Rifai; Ling, Sai Ho; San, Phyo Phyo; Naik, Ganesh R; Nguyen, Tuan N; Tran, Yvonne; Craig, Ashley; Nguyen, Hung T
2017-01-01
This paper presents an improvement of classification performance for electroencephalography (EEG)-based driver fatigue classification between fatigue and alert states, with data collected from 43 participants. The system employs autoregressive (AR) modeling as the feature extraction algorithm and sparse-deep belief networks (sparse-DBN) as the classification algorithm. Compared to other classifiers, sparse-DBN is a semi-supervised learning method that combines unsupervised learning for modeling features in the pre-training layer and supervised learning for classification in the following layer. The sparsity in sparse-DBN is achieved with a regularization term that penalizes deviations of the expected activation of hidden units from a fixed low level, which prevents the network from overfitting and enables it to learn low-level as well as high-level structures. For comparison, artificial neural network (ANN), Bayesian neural network (BNN), and original deep belief network (DBN) classifiers are used. The classification results show that using the AR feature extractor and the DBN classifier, an improved classification performance is achieved with a sensitivity of 90.8%, a specificity of 90.4%, an accuracy of 90.6%, and an area under the receiver operating curve (AUROC) of 0.94, compared to the ANN (sensitivity of 80.8%, specificity of 77.8%, accuracy of 79.3%, AUROC of 0.83) and BNN classifiers (sensitivity of 84.3%, specificity of 83%, accuracy of 83.6%, AUROC of 0.87). Using the sparse-DBN classifier, the classification performance improved further, with a sensitivity of 93.9%, a specificity of 92.3%, and an accuracy of 93.1% with an AUROC of 0.96. Overall, the sparse-DBN classifier improved accuracy by 13.8, 9.5, and 2.5% over the ANN, BNN, and DBN classifiers, respectively.
Colantoni, Andrea; Grigoriadis, Efstathios; Sateriano, Adele; Venanzoni, Giuseppe; Salvati, Luca
2016-03-01
The present study investigates changes in land use caused by the expansion of an informal city in the Mediterranean region (Athens, Greece) and proposes a simplified methodology to assess selective land take at the scale of municipalities. The amount of land take over twenty years (1987-2007) for cropland, sparsely vegetated areas and natural land was compared with the surface area of the respective class at the beginning of the study period (1987). Indicators of selective land take by class were correlated with socioeconomic indicators at the scale of municipalities to verify the influence of the local context and the impact of urban planning on land take processes. Evidence indicates that urban expansion into fringe land consumes primarily cropland and sparse vegetation in the case of the Athens metropolitan region. Cropland and sparse vegetation were consumed proportionally more than their respective availability in 16 municipalities out of 60. Agricultural land take was positively correlated with population density and growth rate, the rate of participation in the job market, and road density. Sparse vegetation land take was observed in municipalities with a predominance of high-density settlements. As a result of second-home expansion in coastal municipalities, natural land was converted to urban use in proportion to its availability in the landscape. Urban planning seems to have had a limited impact on selective land take. Copyright © 2015 Elsevier B.V. All rights reserved.
The application of low-rank and sparse decomposition method in the field of climatology
NASA Astrophysics Data System (ADS)
Gupta, Nitika; Bhaskaran, Prasad K.
2018-04-01
The present study reports a low-rank and sparse decomposition method that separates the mean and the variability of a climate data field. Until now, the application of this technique has been limited to areas such as image processing, web data ranking, and bioinformatics data analysis. In climate science, this method exactly separates the original data into a set of low-rank and sparse components, wherein the low-rank component depicts the linearly correlated dataset (expected or mean behavior), and the sparse component represents the variation or perturbation of the dataset from its mean behavior. The study attempts to verify the efficacy of the proposed technique in the field of climatology with two real-world examples. The first example applies this technique to the maximum wind-speed (MWS) data for the Indian Ocean (IO) region. The study brings to light a decadal reversal pattern in the MWS for the North Indian Ocean (NIO) during the months of June, July, and August (JJA). The second example deals with the sea surface temperature (SST) data for the Bay of Bengal region, which exhibits a distinct pattern in the sparse component. The study highlights the usefulness of the proposed technique for the interpretation and visualization of climate data.
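One standard way to compute such a decomposition is principal component pursuit solved by an inexact augmented-Lagrangian iteration; the sketch below is a generic textbook variant (not the study's code), applied to a toy low-rank-plus-sparse field:

```python
import numpy as np

def rpca(M, lam=None, mu=None, n_iter=200, tol=1e-7):
    """Decompose M ~= L + S with L low-rank, S sparse (principal component pursuit)."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 0.25 * m * n / (np.abs(M).sum() + 1e-12)
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        # Low-rank update: singular value thresholding.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0)) @ Vt
        # Sparse update: entrywise soft thresholding.
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)
        Y += mu * (M - L - S)                  # dual ascent on the constraint M = L + S
        if np.linalg.norm(M - L - S) < tol * np.linalg.norm(M):
            break
    return L, S

# Toy "climate field": smooth low-rank background plus sparse anomalies.
rng = np.random.default_rng(3)
base = np.outer(np.sin(np.linspace(0, 3, 50)), np.cos(np.linspace(0, 2, 40)))
anom = np.zeros((50, 40))
anom[rng.integers(0, 50, 30), rng.integers(0, 40, 30)] = 3.0
L, S = rpca(base + anom)
print(np.linalg.matrix_rank(np.round(L, 6)), (np.abs(S) > 1).sum())
```

Here the recovered L plays the role of the expected (mean) behavior of the field and S the localized perturbations, mirroring the separation exploited in the study.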
Effects of satellite image spatial aggregation and resolution on estimates of forest land area
M.D. Nelson; R.E. McRoberts; G.R. Holden; M.E. Bauer
2009-01-01
Satellite imagery is being used increasingly in association with national forest inventories (NFIs) to produce maps and enhance estimates of forest attributes. We simulated several image spatial resolutions within sparsely and heavily forested study areas to assess resolution effects on estimates of forest land area, independent of other sensor characteristics. We...
Forest/non-forest mapping using inventory data and satellite imagery
Ronald E. McRoberts
2002-01-01
For two study areas in Minnesota, USA, one heavily forested and one sparsely forested, maps of predicted proportion forest area were created using Landsat Thematic Mapper imagery, forest inventory plot data, and two prediction techniques, logistic regression and a k-Nearest Neighbours technique. The maps were used to increase the precision of forest area estimates by...
ERIC Educational Resources Information Center
Walford, Nigel
2007-01-01
Exchanges of population between supposedly "urban" and "rural" spaces have occurred throughout history as people migrate between areas with relatively, densely and sparsely settled populations. However, comparatively little is known about whether the same small areas persistently contribute to the flow and what types of…
Uniform Recovery Bounds for Structured Random Matrices in Corrupted Compressed Sensing
NASA Astrophysics Data System (ADS)
Zhang, Peng; Gan, Lu; Ling, Cong; Sun, Sumei
2018-04-01
We study the problem of recovering an $s$-sparse signal $\mathbf{x}^{\star}\in\mathbb{C}^n$ from corrupted measurements $\mathbf{y} = \mathbf{A}\mathbf{x}^{\star}+\mathbf{z}^{\star}+\mathbf{w}$, where $\mathbf{z}^{\star}\in\mathbb{C}^m$ is a $k$-sparse corruption vector whose nonzero entries may be arbitrarily large and $\mathbf{w}\in\mathbb{C}^m$ is a dense noise with bounded energy. The aim is to exactly and stably recover the sparse signal with tractable optimization programs. In this paper, we prove the uniform recovery guarantee of this problem for two classes of structured sensing matrices. The first class can be expressed as the product of a unit-norm tight frame (UTF), a random diagonal matrix and a bounded columnwise orthonormal matrix (e.g., partial random circulant matrix). When the UTF is bounded (i.e. $\mu(\mathbf{U})\sim 1/\sqrt{m}$), we prove that with high probability, one can recover an $s$-sparse signal exactly and stably by $l_1$ minimization programs even if the measurements are corrupted by a sparse vector, provided $m = \mathcal{O}(s \log^2 s \log^2 n)$ and the sparsity level $k$ of the corruption is a constant fraction of the total number of measurements. The second class considers a randomly sub-sampled orthogonal matrix (e.g., random Fourier matrix). We prove the uniform recovery guarantee provided that the corruption is sparse on certain sparsifying domain. Numerous simulation results are also presented to verify and complement the theoretical results.
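The recovery program studied here can be prototyped with a convex-optimization toolbox (a sketch assuming cvxpy is available; the dimensions, sparsity levels, and the noiseless equality-constrained variant are illustrative choices, not the paper's exact program):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
n, m, s, k = 128, 80, 5, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)

x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
z_true = np.zeros(m)
z_true[rng.choice(m, k, replace=False)] = 10 * rng.standard_normal(k)
y = A @ x_true + z_true                      # sparsely corrupted measurements

x = cp.Variable(n)
z = cp.Variable(m)
lam = 1.0                                    # weight between the two l1 terms
prob = cp.Problem(cp.Minimize(cp.norm1(x) + lam * cp.norm1(z)),
                  [A @ x + z == y])          # noiseless variant for simplicity
prob.solve()
print(np.linalg.norm(x.value - x_true))      # should be near zero
```

Jointly penalizing both l1 norms lets the program absorb the large sparse corruptions into z while recovering the signal in x.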
Adaptive low-rank subspace learning with online optimization for robust visual tracking.
Liu, Risheng; Wang, Di; Han, Yuzhuo; Fan, Xin; Luo, Zhongxuan
2017-04-01
In recent years, sparse and low-rank models have been widely used to formulate appearance subspaces for visual tracking. However, most existing methods only consider the sparsity or low-rankness of the coefficients, which is not sufficient for appearance subspace learning on complex video sequences. Moreover, as both the low-rank and the column-sparse measures are tightly related to all the samples in the sequences, it is challenging to incrementally solve optimization problems with both nuclear norm and column-sparse norm on sequentially obtained video data. To address the above limitations, this paper develops a novel low-rank subspace learning with adaptive penalization (LSAP) framework for subspace-based robust visual tracking. Different from previous work, which often simply decomposes observations as low-rank features and sparse errors, LSAP simultaneously learns the subspace basis, low-rank coefficients and column-sparse errors to formulate the appearance subspace. Within the LSAP framework, we introduce a Hadamard-product-based regularization to incorporate rich generative/discriminative structure constraints to adaptively penalize the coefficients for subspace learning. It is shown that such adaptive penalization can significantly improve the robustness of LSAP on severely corrupted datasets. To utilize LSAP for online visual tracking, we also develop an efficient incremental optimization scheme for nuclear norm and column-sparse norm minimizations. Experiments on 50 challenging video sequences demonstrate that our tracker outperforms other state-of-the-art methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
Application of a sparseness constraint in multivariate curve resolution - Alternating least squares.
Hugelier, Siewert; Piqueras, Sara; Bedia, Carmen; de Juan, Anna; Ruckebusch, Cyril
2018-02-13
Sparseness has become an increasingly popular concept in chemometrics. Its advantage is, above all, a better interpretability of the results obtained. In this work, sparseness is implemented as a constraint in multivariate curve resolution - alternating least squares (MCR-ALS), which aims at reproducing raw (mixed) data by a bilinear model of chemically meaningful profiles. In many cases, the mixed raw data analyzed are not sparse by nature, but their decomposition profiles can be, as is the case for some instrumental responses, such as mass spectra, or for concentration profiles linked to scattered distribution maps of powdered samples in hyperspectral images. To induce sparseness in the constrained profiles, one-dimensional and/or two-dimensional numerical arrays can be fitted using a basis of Gaussian functions with a penalty on the coefficients. In this work, a least squares regression framework with an L0-norm penalty is applied. This L0-norm penalty constrains the number of non-null coefficients in the fit of the constrained array without a priori knowledge of their number or positions. It has been shown that the sparseness constraint induces the suppression of values linked to uninformative channels and noise in MS spectra and improves the location of scattered compounds in distribution maps, resulting in a better interpretability of the constrained profiles. An additional benefit of the sparseness constraint is a lower ambiguity in the bilinear model, since the predominance of null coefficients in the constrained profiles also helps to limit the solutions for the profiles in the counterpart matrix of the MCR bilinear model. Copyright © 2017 Elsevier B.V. All rights reserved.
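As a toy illustration of fitting a profile with a Gaussian basis under a hard sparsity budget (a sketch, not the MCR-ALS code; the L0 penalty is emulated here with orthogonal matching pursuit from scikit-learn, and the three-peak profile is invented):

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

t = np.linspace(0, 1, 200)
centers = np.linspace(0, 1, 60)
width = 0.02
# Basis matrix: one Gaussian bump per candidate center.
B = np.exp(-(t[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

# Noisy profile with three true peaks.
rng = np.random.default_rng(5)
profile = (B[:, 10] + 0.6 * B[:, 30] + 0.8 * B[:, 45]
           + 0.02 * rng.standard_normal(t.size))

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3)  # hard L0-style budget
omp.fit(B, profile)
print(np.flatnonzero(omp.coef_), np.round(omp.coef_[omp.coef_ != 0], 2))
```

The greedy fit selects only the few Gaussian atoms needed to explain the profile, leaving the uninformative channels at exactly zero, which is the interpretability benefit the abstract describes.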
Sparse Image Reconstruction on the Sphere: Analysis and Synthesis.
Wallis, Christopher G R; Wiaux, Yves; McEwen, Jason D
2017-11-01
We develop techniques to solve ill-posed inverse problems on the sphere by sparse regularization, exploiting sparsity in both axisymmetric and directional scale-discretized wavelet space. Denoising, inpainting, and deconvolution problems, and combinations thereof, are considered as examples. Inverse problems are solved in both the analysis and synthesis settings, with a number of different sampling schemes. The most effective approach is that with the most restricted solution-space, which depends on the interplay between the adopted sampling scheme, the selection of the analysis/synthesis problem, and any weighting of the l1 norm appearing in the regularization problem. More efficient sampling schemes on the sphere improve reconstruction fidelity by restricting the solution-space and also by improving sparsity in wavelet space. We apply the technique to denoise Planck 353-GHz observations, improving the ability to extract the structure of Galactic dust emission, which is important for studying Galactic magnetism.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.
2017-04-12
A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. Numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.
Decoding memory features from hippocampal spiking activities using sparse classification models.
Dong Song; Hampson, Robert E; Robinson, Brian S; Marmarelis, Vasilis Z; Deadwyler, Sam A; Berger, Theodore W
2016-08-01
To understand how memory information is encoded in the hippocampus, we build classification models to decode memory features from hippocampal CA3 and CA1 spatio-temporal patterns of spikes recorded from epilepsy patients performing a memory-dependent delayed match-to-sample task. The classification model consists of a set of B-spline basis functions for extracting memory features from the spike patterns, and a sparse logistic regression classifier for generating binary categorical output of memory features. Results show that the classification models can extract a significant amount of memory information with respect to the types of memory tasks and the categories of sample images used in the task, despite the high level of variability in prediction accuracy due to the small sample size. These results support the hypothesis that memories are encoded in hippocampal activities and have important implications for the development of hippocampal memory prostheses.
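A minimal stand-in for this kind of sparse classifier (assumption: generic L1-penalized logistic regression in place of the authors' exact model; the feature matrix mimics B-spline spike features only in shape).

```python
# Sketch: sparse (L1-penalized) logistic regression on synthetic features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.standard_normal((80, 200))           # e.g. B-spline features of spikes
w = np.zeros(200)
w[:5] = 2.0                                  # only a few informative features
y = (X @ w + rng.standard_normal(80) > 0).astype(int)

# L1 penalty drives most weights exactly to zero (small C = strong penalty)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
print("non-zero weights:", np.count_nonzero(clf.coef_), "of", X.shape[1])
```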
HYPOTHESIS TESTING FOR HIGH-DIMENSIONAL SPARSE BINARY REGRESSION
Mukherjee, Rajarshi; Pillai, Natesh S.; Lin, Xihong
2015-01-01
In this paper, we study the detection boundary for minimax hypothesis testing in the context of high-dimensional, sparse binary regression models. Motivated by genetic sequencing association studies for rare variant effects, we investigate the complexity of the hypothesis testing problem when the design matrix is sparse. We observe a new phenomenon in the behavior of the detection boundary which does not occur in the case of Gaussian linear regression. We derive the detection boundary as a function of two components: a design matrix sparsity index and signal strength, each of which is a function of the sparsity of the alternative. For any alternative, if the design matrix sparsity index is too high, any test is asymptotically powerless irrespective of the magnitude of signal strength. For binary design matrices whose sparsity index is not too high, our results are parallel to those in the Gaussian case. In this context, we derive detection boundaries for both dense and sparse regimes. For the dense regime, we show that the generalized likelihood ratio test is rate optimal; for the sparse regime, we propose an extended Higher Criticism Test and show it is rate optimal and sharp. We illustrate the finite sample properties of the theoretical results using simulation studies. PMID:26246645
On the sparseness of 1-norm support vector machines.
Zhang, Li; Zhou, Weida
2010-04-01
There is empirical evidence that 1-norm Support Vector Machines (1-norm SVMs) yield good sparseness; however, how sparse 1-norm SVMs can be, and whether their representation is sparser than that of standard SVMs, have remained unclear. In this paper we analyze the sparseness of 1-norm SVMs. Two upper bounds on the number of nonzero coefficients in the decision function of 1-norm SVMs are presented. First, the number of nonzero coefficients in a 1-norm SVM is at most the number of exact support vectors lying on the +1 and -1 discriminating surfaces, whereas in standard SVMs it equals the number of support vectors; this implies that 1-norm SVMs have better sparseness than standard SVMs. Second, the number of nonzero coefficients is at most the rank of the sample matrix. A brief review of the geometry of linear programming and the primal steepest edge pricing simplex method is given, which allows us to prove the two upper bounds and evaluate their tightness by experiments. Experimental results on toy data sets and UCI data sets illustrate our analysis.
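To make the sparseness claim concrete, a small experiment with an L1-penalized linear SVM (a feature-space analogue of the paper's 1-norm SVM, not its kernel-expansion form) counts the non-zero coefficients:

```python
# Sketch: L1-penalized linear SVM typically produces few non-zero coefficients.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, n_features=50, random_state=0)
svm = LinearSVC(penalty="l1", loss="squared_hinge", dual=False,
                C=0.1, max_iter=10000).fit(X, y)
print("non-zero coefficients:", np.count_nonzero(svm.coef_), "of", X.shape[1])
```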
Yang, C L; Wei, H Y; Adler, A; Soleimani, M
2013-06-01
Electrical impedance tomography (EIT) is a fast and cost-effective technique that provides a tomographic conductivity image of a subject from boundary current-voltage data. This paper proposes a time- and memory-efficient method for solving a large-scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. A 3D EIT system with a large number of measurement data can produce a large Jacobian matrix, which causes difficulties in computer storage and in the inversion process. One of the challenges in 3D EIT is to decrease the reconstruction time and memory usage while retaining image quality. Firstly, a sparse matrix reduction technique is proposed that uses thresholding to set very small values of the Jacobian matrix to zero. By converting the Jacobian matrix into a sparse format, zero elements are eliminated from storage, which reduces the memory requirement. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. A sparse Jacobian with block-wise CG enables the large-scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of sparse matrix reduction on reconstruction results.
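A toy illustration of the two ideas, thresholding the Jacobian into sparse storage and solving regularized normal equations with CG (synthetic matrix and threshold, not the authors' EIT system):

```python
# Sketch: threshold a dense Jacobian to sparse form, then solve with CG.
import numpy as np
from scipy.sparse import csr_matrix, identity
from scipy.sparse.linalg import cg

rng = np.random.default_rng(4)
J = rng.standard_normal((2000, 500))
J[np.abs(J) < 1.0] = 0.0                     # thresholding: drop small entries
J_sparse = csr_matrix(J)                     # sparse storage saves memory
print(f"kept {J_sparse.nnz / J.size:.1%} of entries")

b = rng.standard_normal(2000)
A = (J_sparse.T @ J_sparse + 1e-2 * identity(500)).tocsr()  # J^T J + lambda*I
x, info = cg(A, J_sparse.T @ b)              # CG on the normal equations
print("CG converged:", info == 0)
```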
McGrail, Matthew Richard; Humphreys, John Stirling; Ward, Bernadette
2015-05-29
Poor access to doctors at times of need remains a significant impediment to achieving good health for many rural residents. The two-step floating catchment area (2SFCA) method has emerged as a key tool for measuring healthcare access in rural areas. However, the choice of catchment size, a key component of the 2SFCA method, is problematic because little is known about the distance tolerance of rural residents for health-related travel. Our study sought new evidence to test the hypothesis that residents of sparsely settled rural areas are prepared to travel further than residents of closely settled rural areas when accessing primary health care at times of need. A questionnaire survey of residents in five small rural communities of Victoria and New South Wales in Australia was used. The two outcome measures were current travel time to visit their usual doctor and maximum time prepared to travel to visit a doctor, both for non-emergency care. Kaplan-Meier charts were used to compare the association between increased distance and decreased travel propensity for closely-settled and sparsely-settled areas, and ordinal multivariate regression models tested significance after controlling for health-related travel moderating factors and town clustering. A total of 1079 questionnaires were completed with 363 from residents in closely-settled locations and 716 from residents in sparsely-settled areas. Residents of sparsely-settled communities travel, on average, 10 min further than residents of closely-settled communities (26.3 vs 16.9 min, p < 0.001), though this difference was not significant after controlling for town clustering. Differences were more apparent in terms of maximum time prepared to travel (54.1 vs 31.9 min, p < 0.001). Differences of maximum time remained significant after controlling for demographic and other constraints to access, such as transport availability or difficulties getting doctor appointments, as well as after controlling for town clustering and current travel times. Improved geographical access remains a key issue underpinning health policies designed to improve the provision of rural primary health care services. This study provides empirical evidence that travel behaviour should not be implicitly assumed constant amongst rural populations when modelling access through methods like the 2SFCA.
ERIC Educational Resources Information Center
Diaz-Puente, Jose M.; Moreno, Francisco Jose Gallego; Zamorano, Ramon
2012-01-01
Training is a key tool for community development processes in rural areas. This training is made difficult by the characteristics of the rural areas and their population. Furthermore, the methods used by traditional training bodies are not adapted to the peculiarities of these areas. This article analyses the training methodology used by the…
Compressive-sampling-based positioning in wireless body area networks.
Banitalebi-Dehkordi, Mehdi; Abouei, Jamshid; Plataniotis, Konstantinos N
2014-01-01
Recent achievements in wireless technologies have opened up enormous opportunities for the implementation of ubiquitous health care systems that provide rich contextual information and warning mechanisms against abnormal conditions. This helps with the automatic and remote monitoring/tracking of patients in hospitals and facilitates the supervision of fragile, elderly people in their own domestic environment through automatic systems that handle remote drug delivery. This paper presents a new modeling and analysis framework for multipatient positioning in a wireless body area network (WBAN) which exploits the spatial sparsity of patients and a sparse fast Fourier transform (FFT)-based feature extraction mechanism for monitoring patients and reporting movement tracking to a central database server containing patient vital information. The main goal of this paper is to achieve a high degree of accuracy and resolution in patient localization with low computational complexity, using compressive sensing theory. We represent the patients' positions as a sparse vector obtained by discrete segmentation of the patient movement space in a circular grid. To estimate this vector, a compressive-sampling-based two-level FFT (CS-2FFT) feature vector is synthesized for each signal received from the biosensors embedded on the patient's body at each grid point. This feature extraction process benefits from combining both short-time and long-time properties of the received signals. The robustness of the proposed CS-2FFT-based algorithm in terms of average positioning error is numerically evaluated using realistic parameters from the IEEE 802.15.6-WBAN standard in the presence of additive white Gaussian noise. Due to the circular grid pattern and the CS-2FFT feature extraction method, the proposed scheme achieves a significant reduction in computational complexity, while improving resolution and localization accuracy compared to some classical CS-based positioning algorithms.
Sparse Logistic Regression for Diagnosis of Liver Fibrosis in Rat by Using SCAD-Penalized Likelihood
Yan, Fang-Rong; Lin, Jin-Guan; Liu, Yu
2011-01-01
The objective of the present study is to determine the quantitative relationship between progression of liver fibrosis and the levels of certain serum markers using a mathematical model. We provide a sparse logistic regression using the smoothly clipped absolute deviation (SCAD) penalty function to diagnose liver fibrosis in rats. Not only does it give a sparse solution with high accuracy, it also provides the users with precise probabilities of classification along with the class information. In a simulation case and an experimental case, the proposed method is compared to stepwise linear discriminant analysis (SLDA) and sparse logistic regression with the least absolute shrinkage and selection operator (LASSO) penalty, using receiver operating characteristic (ROC) analysis with Bayesian bootstrap estimation of the area under the curve (AUC) for the selected variables. Results show that the new approach provides a good correlation between serum marker levels and liver fibrosis induced by thioacetamide (TAA) in rats. Meanwhile, this approach might also be used in predicting the development of liver cirrhosis. PMID:21716672
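For reference, the SCAD penalty itself, using the textbook definition with the customary a = 3.7 (this is the standard formula, not the authors' fitting code):

```python
# The SCAD penalty: L1-like near zero, quadratic transition, constant tail.
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """Smoothly clipped absolute deviation penalty, applied elementwise."""
    b = np.abs(beta)
    small = b <= lam                                   # L1 region
    mid = (b > lam) & (b <= a * lam)                   # quadratic transition
    return np.where(small, lam * b,
           np.where(mid, (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1)),
                    lam**2 * (a + 1) / 2))             # flat tail: less bias

print(scad_penalty(np.array([0.1, 1.0, 5.0]), lam=0.5))
```

The flat tail is the design point: unlike LASSO, SCAD stops penalizing large coefficients, which reduces the bias on strong effects while still zeroing out weak ones.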
Tang, Yunwei; Jing, Linhai; Li, Hui; Liu, Qingjie; Yan, Qi; Li, Xiuxia
2016-11-22
This study explores the ability of WorldView-2 (WV-2) imagery for bamboo mapping in a mountainous region in Sichuan Province, China. A large part of this area is covered by shadows in the image, and only a few of the derived sample points were useful. In order to identify bamboos based on sparse training data, the sample size was expanded according to the reflectance of multispectral bands selected using principal component analysis (PCA). Then, class separability based on the training data was calculated using a feature space optimization method to select the features for classification. Four regular object-based classification methods were applied based on both sets of training data. The results show that the k-nearest neighbor (k-NN) method produced the greatest accuracy. A geostatistically-weighted k-NN classifier, accounting for the spatial correlation between classes, was then applied to further increase the accuracy. It achieved 82.65% and 93.10% producer's and user's accuracies, respectively, for the bamboo class. Canopy densities were estimated to explain the result. This study demonstrates that WV-2 imagery can be used to identify small patches of understory bamboo given limited known samples, and that the resulting bamboo distribution facilitates assessment of giant panda habitat.
In Situ Monitoring of Groundwater Contamination Using the Kalman Filter For Sustainable Remediation
NASA Astrophysics Data System (ADS)
Schmidt, F.; Wainwright, H. M.; Faybishenko, B.; Denham, M. E.; Eddy-Dilek, C. A.
2017-12-01
Sustainable remediation - based on less intensive passive remediation and natural attenuation - has become a desirable remediation alternative at contaminated sites. Although it has a number of benefits, such as reduced waste and water/energy usage, it carries a significant burden of proof to verify plume stability and to ensure insignificant increase of risk to public health. Modeling of contaminant transport is still challenging despite recent advances in numerical methods. Long-term monitoring has, therefore, become a critical component in sustainable remediation. However, the current approach, which relies on sparse groundwater sampling, is problematic, since it could miss sudden significant changes in plume behavior. A new method is needed to combine existing knowledge about contaminant behavior and latest advances in in situ groundwater sensors. This study presents an example of the effective use of the Kalman filter approach to estimate contaminant concentrations, based on in situ measured water quality parameters (e.g. electrical conductivity and pH) along with the results of sparse groundwater sampling. The Kalman filter can effectively couple physical models and data correlations between the contaminant concentrations and in situ measured variables. We aim (1) to develop a framework capable of integrating different data types to provide accurate contaminant concentration estimates, (2) to demonstrate that these results remain reliable, even when the groundwater sampling frequency is reduced, and (3) to evaluate the future efficacy of this strategy using reactive transport simulations. This framework can also serve as an early warning system for detecting unexpected plume migration. We demonstrate our approach using historical and current groundwater data from the Savannah River Site (SRS) F-Area Seepage Basins to estimate uranium and tritium concentrations. The results show that the developed method can provide reliable estimates of contaminant concentrations. We also show that we can reduce the groundwater sampling frequency significantly, while capturing the dynamics of contaminant concentration changes. To our knowledge, this is the first study to apply data analytics to long-term groundwater monitoring extensively.
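A one-dimensional sketch of the filtering idea with invented numbers: fuse a frequent but noisy in situ proxy (e.g. conductivity regressed to concentration) with occasional, more precise grab samples through a scalar Kalman filter.

```python
# Sketch: scalar Kalman filter fusing a daily proxy with sparse lab samples.
import numpy as np

rng = np.random.default_rng(5)
n = 200
truth = 10 + np.cumsum(0.05 * rng.standard_normal(n))   # drifting concentration
proxy = truth + 0.5 * rng.standard_normal(n)            # frequent noisy proxy

x, P = proxy[0], 1.0             # state estimate and its variance
Q, R = 0.05**2, 0.5**2           # process and proxy-measurement noise (assumed)
est = np.empty(n)
for t in range(n):
    P += Q                                    # predict (random-walk model)
    K = P / (P + R)                           # Kalman gain for the proxy
    x += K * (proxy[t] - x)                   # update with proxy measurement
    P *= (1 - K)
    if t % 50 == 0:                           # sparse lab sample, more precise
        K = P / (P + 0.1**2)
        x += K * (truth[t] + 0.1 * rng.standard_normal() - x)
        P *= (1 - K)
    est[t] = x
print("RMS error:", np.sqrt(np.mean((est - truth) ** 2)))
```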
Thakur, Anil S.; Robin, Gautier; Guncar, Gregor; Saunders, Neil F. W.; Newman, Janet; Martin, Jennifer L.; Kobe, Bostjan
2007-01-01
Background Crystallization is a major bottleneck in the process of macromolecular structure determination by X-ray crystallography. Successful crystallization requires the formation of nuclei and their subsequent growth to crystals of suitable size. Crystal growth generally occurs spontaneously in a supersaturated solution as a result of homogeneous nucleation. However, in a typical sparse matrix screening experiment, precipitant and protein concentration are not sampled extensively, and supersaturation conditions suitable for nucleation are often missed. Methodology/Principal Findings We tested the effect of nine potential heterogeneous nucleating agents on crystallization of ten test proteins in a sparse matrix screen. Several nucleating agents induced crystal formation under conditions where no crystallization occurred in the absence of the nucleating agent. Four nucleating agents: dried seaweed, horse hair, cellulose and hydroxyapatite, had a considerable overall positive effect on crystallization success. This effect was further enhanced when these nucleating agents were used in combination with each other. Conclusions/Significance Our results suggest that the addition of heterogeneous nucleating agents increases the chances of crystal formation when using sparse matrix screens. PMID:17971854
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chapin, M.A.; Tiller, G.M.; Mahaffie, M.J.
1996-12-31
Economic considerations of the deep-water turbidite play, in the Gulf of Mexico and elsewhere, require large reservoir volumes to be drained by relatively few, very expensive wells. Deep-water development projects to date have been planned on the basis of high-quality 3-D seismic data and sparse well control. The link between 3-D seismic data, well control, and the 3-D geological and reservoir architecture model is demonstrated here for Pliocene turbidite sands of the "Pink" reservoir, Prospect Mars, Mississippi Canyon Areas 763 and 807, Gulf of Mexico. This information was used to better understand potential reservoir compartments for development well planning.
Discriminative Transfer Subspace Learning via Low-Rank and Sparse Representation.
Xu, Yong; Fang, Xiaozhao; Wu, Jian; Li, Xuelong; Zhang, David
2016-02-01
In this paper, we address the problem of unsupervised domain transfer learning in which no labels are available in the target domain. We use a transformation matrix to transfer both the source and target data to a common subspace, where each target sample can be represented by a combination of source samples such that the samples from different domains can be well interlaced. In this way, the discrepancy of the source and target domains is reduced. By imposing joint low-rank and sparse constraints on the reconstruction coefficient matrix, the global and local structures of data can be preserved. To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative label relaxation matrix that allows the strict binary label matrix to relax into a slack variable matrix. Our method can avoid a potentially negative transfer by using a sparse matrix to model the noise and, thus, is more robust to different types of noise. We formulate our problem as a constrained low-rankness and sparsity minimization problem and solve it by the inexact augmented Lagrange multiplier method. Extensive experiments on various visual domain adaptation tasks show the superiority of the proposed method over the state-of-the-art methods. The MATLAB code of our method will be publicly available at http://www.yongxu.org/lunwen.html.
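The low-rank-plus-sparse split the method builds on can be sketched with a generic inexact-ALM robust PCA (this is the standard RPCA routine, not the authors' full transfer-learning objective; parameter defaults follow common practice):

```python
# Sketch: decompose D into low-rank L plus sparse S via inexact ALM (RPCA).
import numpy as np

def rpca_ialm(D, lam=None, rho=1.5, n_iter=30):
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = 1.25 / np.linalg.norm(D, 2)
    S = np.zeros_like(D)
    Y = np.zeros_like(D)
    for _ in range(n_iter):
        # L-step: singular-value soft-thresholding at 1/mu
        U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # S-step: elementwise soft-thresholding at lam/mu
        T = D - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Y += mu * (D - L - S)          # dual ascent on the constraint D = L + S
        mu *= rho
        if np.linalg.norm(D - L - S) <= 1e-7 * np.linalg.norm(D):
            break
    return L, S

rng = np.random.default_rng(6)
L0 = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 60))  # rank 5
S0 = (rng.random((60, 60)) < 0.05) * 10.0                         # sparse noise
L, S = rpca_ialm(L0 + S0)
print("low-rank recovery error:", np.linalg.norm(L - L0) / np.linalg.norm(L0))
```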
Non-convex Statistical Optimization for Sparse Tensor Graphical Model
Sun, Wei; Wang, Zhaoran; Liu, Han; Cheng, Guang
2016-01-01
We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation of the precision matrix corresponding to each way of the tensor, we assume the data follow a tensor normal distribution whose covariance has a Kronecker product structure. The penalized maximum likelihood estimation of this model involves minimizing a non-convex objective function. In spite of the non-convexity of this estimation problem, we prove that an alternating minimization algorithm, which iteratively estimates each sparse precision matrix while fixing the others, attains an estimator with the optimal statistical rate of convergence as well as consistent graph recovery. Notably, such an estimator achieves estimation consistency with only one tensor sample, a result not established in previous work. Our theoretical results are backed by thorough numerical studies. PMID:28316459
A Modified Sparse Representation Method for Facial Expression Recognition.
Wang, Wei; Xu, LiHong
2016-01-01
In this paper, we investigate a facial expression recognition method based on a modified sparse representation recognition (MSRR) method. In the first stage, we use Haar-like+LPP to extract features and reduce dimensionality. In the second stage, we adopt the LC-K-SVD (Label Consistent K-SVD) method to train the dictionary, instead of taking the dictionary directly from the samples, and add block dictionary training to the training process. In the third stage, the stOMP (stagewise orthogonal matching pursuit) method is used to speed up the convergence of OMP (orthogonal matching pursuit). In addition, a dynamic regularization factor is added to the iteration process to suppress noise and enhance accuracy. We evaluate the proposed method with respect to training samples, dimensionality, feature extraction and dimension reduction methods, and noise, on a self-built database and on Japan's JAFFE and CMU's CK databases. Further, we compare this sparse method with classic SVM and RVM and analyze recognition performance and time efficiency. Simulation results show that the coefficients of the MSRR method contain classifying information, which improves computing speed and achieves satisfying recognition results.
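A bare-bones sparse-representation classifier in the SRC family the paper modifies (assumption: plain OMP in place of the stOMP variant and the learned LC-K-SVD dictionary): code the test sample over all training samples, then classify by the smallest class-wise residual.

```python
# Sketch: sparse-representation classification via OMP and class residuals.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_predict(D, labels, x, k=10):
    """D: columns are unit-norm training samples; x: test sample."""
    coef = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(D, x).coef_
    residuals = {}
    for c in np.unique(labels):
        coef_c = np.where(labels == c, coef, 0.0)   # keep class-c coefficients
        residuals[c] = np.linalg.norm(x - D @ coef_c)
    return min(residuals, key=residuals.get)        # smallest residual wins

rng = np.random.default_rng(7)
D = rng.standard_normal((100, 40))                  # 40 training samples
D /= np.linalg.norm(D, axis=0)                      # unit-norm atoms
labels = np.repeat([0, 1], 20)
x = D[:, 3] + 0.05 * rng.standard_normal(100)       # noisy copy of class 0
print("predicted class:", src_predict(D, labels, x))
```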
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kendall, J.; Hams, J.E.; Buck, S.P.
1990-05-01
Advances in high-resolution side-scan sonar imaging technology are so effective at imaging sea-floor geology that they have greatly improved the efficiency of bottom sampling programs. The traditional sea-floor geology methodology of shooting a high-resolution seismic survey and sampling along the seismic grid was considered successful if outcrops were sampled on 20% of the attempts. A submersible was used sparingly because of the inability to consistently locate sea-floor outcrops. Side-scan sonar images have increased the sampling success ratio to 70-95% and allow the cost-effective use of a submersible even in areas of sparse sea-floor outcrops. In offshore basins this new technology has been used in consolidated and semiconsolidated rock terranes. When combined with observations from a two-man submersible, SCUBA traverses, seismic data, and traditional sea-floor bottom sampling techniques, enough data are provided to develop an integrated sea-floor geologic interpretation. On individual prospects, side-scan sonar has aided the establishment of critical dip in poor seismic data areas, located seeps and tar mounds, and determined erosional breaching of a prospect. Over a mature producing field, side-scan sonar has influenced the search for field extension by documenting the orientation and location of critical trapping cross faults. These relatively inexpensive techniques can provide critical data in any marine basin where rocks crop out on the sea floor.
Moses, C.S.; Andrefouet, S.; Kranenburg, C.; Muller-Karger, F. E.
2009-01-01
Using imagery at 30 m spatial resolution from the most recent Landsat satellite, the Landsat 7 Enhanced Thematic Mapper Plus (ETM+), we scale up reef metabolic productivity and calcification from local habitat-scale (10^-1 to 10^0 km^2) measurements to regional scales (10^3 to 10^4 km^2). Distribution and spatial extent of the North Florida Reef Tract (NFRT) habitats come from supervised classification of the Landsat imagery within independent Landsat-derived Millennium Coral Reef Map geomorphologic classes. This system minimizes the depth range and variability of benthic habitat characteristics found in the area of supervised classification and limits misclassification. Classification of Landsat imagery into 5 biotopes (sand, dense live cover, sparse live cover, seagrass, and sparse seagrass) by geomorphologic class is >73% accurate at regional scales. Based on recently published habitat-scale in situ metabolic measurements, gross production (P = 3.01 × 10^9 kg C yr^-1), excess production (E = -5.70 × 10^8 kg C yr^-1), and calcification (G = -1.68 × 10^6 kg CaCO3 yr^-1) are estimated over 2711 km^2 of the NFRT. Simple models suggest sensitivity of these values to ocean acidification, which will increase local dissolution of carbonate sediments. Similar approaches could be applied over large areas with poorly constrained bathymetry or water column properties and minimal metabolic sampling. This tool has potential applications for modeling and monitoring large-scale environmental impacts on reef productivity, such as the influence of ocean acidification on coral reef environments.
Sousa, Joana; Casanova, Catarina; Barata, André V; Sousa, Cláudia
2014-04-01
The present study aimed to gather baseline information about chimpanzee nesting and density in Lagoas de Cufada Natural Park (LCNP), in Guinea-Bissau. Old and narrow trails were followed to estimate chimpanzee density through marked-nest counts and to test the effect of canopy closure (woodland savannah, forest with a sparse canopy, and forest with a dense canopy) on nest distribution. Chimpanzee abundance was estimated at 0.79 nest builders/km(2), the lowest among the areas of Guinea-Bissau with currently studied chimpanzee populations. Our data suggest that sub-humid forest with a dense canopy accounts for significantly higher chimpanzee nest abundance (1.50 nests/km of trail) than sub-humid forest with a sparse canopy (0.49 nests/km of trail) or woodland savannah (0.30 nests/km of trail). Dense-canopy forests play an important role in chimpanzee nesting in the patchy and highly humanized landscape of LCNP. The tree species most frequently used for nesting are Dialium guineense (46%) and Elaeis guineensis (28%). E. guineensis contain nests built higher in the canopy, while D. guineense contain nests built at lower heights. Nests observed during baseline sampling and replications suggest seasonal variations in the tree species used for nest building.
Mapping soil textural fractions across a large watershed in north-east Florida.
Lamsal, S; Mishra, U
2010-08-01
Assessment of regional-scale soil spatial variation and mapping of soil distributions is constrained by sparse data, which are collected using field surveys that are labor intensive and cost prohibitive. We explored geostatistical (ordinary kriging, OK), regression (regression tree, RT), and hybrid methods (RT plus residual sequential Gaussian simulation, SGS) to map soil textural fractions across the Santa Fe River Watershed (3585 km(2)) in north-east Florida. Soil samples collected from four depths (L1: 0-30 cm, L2: 30-60 cm, L3: 60-120 cm, and L4: 120-180 cm) at 141 locations were analyzed for soil textural fractions (sand, silt and clay contents), and combined with textural data (15 profiles) assembled under the Florida Soil Characterization program. Textural fractions in L1 and L2 were autocorrelated, and were spatially mapped across the watershed. OK performance was poor, which may be attributed to the sparse sampling. RT model structure varied among textural fractions, and the variation explained by the model ranged from 25% for L1 silt to 61% for L2 clay content. Regression residuals were simulated using SGS, and the average of the simulated residuals was used to approximate the regression residual distribution map, which was added to the regression trend maps. Independent validation of the prediction maps showed that the regression models performed slightly better than OK, and that regression combined with the average of simulated regression residuals improved predictions beyond the regression model alone. Sand content >90% in both the 0-30 and 30-60 cm layers covered 80.6% of the watershed area.
A weighted ℓ1-minimization approach for sparse polynomial chaos expansions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, Ji; Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu
2014-06-15
This work proposes a method for sparse polynomial chaos (PC) approximation of high-dimensional stochastic functions based on non-adapted random sampling. We modify the standard ℓ1-minimization algorithm, originally proposed in the context of compressive sampling, using a priori information about the decay of the PC coefficients, when available, and refer to the resulting algorithm as weighted ℓ1-minimization. We provide conditions under which we may guarantee recovery using this weighted scheme. Numerical tests are used to compare the weighted and non-weighted methods for the recovery of solutions to two differential equations with high-dimensional random inputs: a boundary value problem with a random elliptic operator and a 2-D thermally driven cavity flow with random boundary condition.
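One practical route to weighted ℓ1 (an equivalence argument, not necessarily the authors' solver): rescaling the columns of the design matrix turns the weighted problem min ||Ax - b||^2/(2n) + alpha * sum_i w_i |x_i| into a standard LASSO in the variables z_i = w_i * x_i.

```python
# Sketch: weighted l1-minimization via column rescaling and a standard LASSO.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(8)
A = rng.standard_normal((100, 50))
x_true = np.zeros(50)
x_true[[2, 7]] = [1.0, -2.0]
b = A @ x_true + 0.01 * rng.standard_normal(100)

w = np.ones(50)
w[10:] = 5.0                                 # prior: later coefficients decay
lasso = Lasso(alpha=0.01).fit(A / w, b)      # substitute z_i = w_i * x_i
x_hat = lasso.coef_ / w                      # map back to original variables
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-6))
```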
Fast generation of sparse random kernel graphs
Hagberg, Aric; Lemons, Nathan; Du, Wen-Bo
2015-09-10
The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n(log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
Kim, Steve M; Ganguli, Surya; Frank, Loren M
2012-08-22
Hippocampal place cells convey spatial information through a combination of spatially selective firing and theta phase precession. The way in which this information influences regions like the subiculum that receive input from the hippocampus remains unclear. The subiculum receives direct inputs from area CA1 of the hippocampus and sends divergent output projections to many other parts of the brain, so we examined the firing patterns of rat subicular neurons. We found a substantial transformation in the subicular code for space from sparse to dense firing rate representations along a proximal-distal anatomical gradient: neurons in the proximal subiculum are more similar to canonical, sparsely firing hippocampal place cells, whereas neurons in the distal subiculum have higher firing rates and more distributed spatial firing patterns. Using information theory, we found that the more distributed spatial representation in the subiculum carries, on average, more information about spatial location and context than the sparse spatial representation in CA1. Remarkably, despite the disparate firing rate properties of subicular neurons, we found that neurons at all proximal-distal locations exhibit robust theta phase precession, with similar spiking oscillation frequencies as neurons in area CA1. Our findings suggest that the subiculum is specialized to compress sparse hippocampal spatial codes into highly informative distributed codes suitable for efficient communication to other brain regions. Moreover, despite this substantial compression, the subiculum maintains finer scale temporal properties that may allow it to participate in oscillatory phase coding and spike timing-dependent plasticity in coordination with other regions of the hippocampal circuit.
A Dictionary Learning Approach for Signal Sampling in Task-Based fMRI for Reduction of Big Data
Ge, Bao; Li, Xiang; Jiang, Xi; Sun, Yifei; Liu, Tianming
2018-01-01
The exponential growth of fMRI big data offers researchers an unprecedented opportunity to explore functional brain networks. However, this opportunity has not been fully explored yet due to the lack of effective and efficient tools for handling such fMRI big data. One major challenge is that computing capabilities still lag behind the growth of large-scale fMRI databases, e.g., it takes many days to perform dictionary learning and sparse coding of whole-brain fMRI data for an fMRI database of average size. Therefore, how to reduce the data size without losing important information becomes a more and more pressing issue. To address this problem, we propose a signal sampling approach for significant fMRI data reduction before performing structurally-guided dictionary learning and sparse coding of whole brain's fMRI data. We compared the proposed structurally guided sampling method with no sampling, random sampling and uniform sampling schemes, and experiments on the Human Connectome Project (HCP) task fMRI data demonstrated that the proposed method can achieve more than 15 times speed-up without sacrificing the accuracy in identifying task-evoked functional brain networks. PMID:29706880
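A toy version of the sampling-then-learning pipeline (assumption: uniform random row-sampling stands in for the structurally guided scheme, and the data are synthetic rather than HCP fMRI):

```python
# Sketch: learn a dictionary on a subsample, then sparse-code the full data.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(9)
X = rng.standard_normal((10000, 50))          # signals x features (toy stand-in)
idx = rng.choice(10000, size=1000, replace=False)   # sample 10% of the signals

dico = MiniBatchDictionaryLearning(n_components=20, alpha=1.0,
                                   random_state=0).fit(X[idx])
codes = dico.transform(X)                     # sparse codes for the full data
print("mean non-zeros per signal:", (codes != 0).sum(axis=1).mean())
```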
NASA Astrophysics Data System (ADS)
Cattaneo, Alessandro; Park, Gyuhae; Farrar, Charles; Mascareñas, David
2012-04-01
The acoustic emission (AE) phenomena generated by a rapid release in the internal stress of a material represent a promising technique for structural health monitoring (SHM) applications. AE events typically result in a discrete number of short-time, transient signals. The challenge associated with capturing these events using classical techniques is that very high sampling rates must be used over extended periods of time. The result is that a very large amount of data is collected to capture a phenomenon that rarely occurs. Furthermore, the high energy consumption associated with the required high sampling rates makes the implementation of high-endurance, low-power, embedded AE sensor nodes difficult to achieve. The relatively rare occurrence of AE events over long time scales implies that these measurements are inherently sparse in the spike domain. The sparse nature of AE measurements makes them an attractive candidate for the application of compressed sampling techniques. Collecting compressed measurements of sparse AE signals will relax the requirements on the sampling rate and memory demands. The focus of this work is to investigate the suitability of compressed sensing techniques for AE-based SHM. The work explores estimating AE signal statistics in the compressed domain for low-power classification applications. In the event compressed classification finds an event of interest, ℓ1-norm minimization will be used to reconstruct the measurement for further analysis. The impact of structured noise on compressive measurements is specifically addressed. The suitability of a particular algorithm, called Justice Pursuit, to increase robustness to a small amount of arbitrary measurement corruption is investigated.
NASA Astrophysics Data System (ADS)
Yang, Z.; Hsu, K. L.; Sorooshian, S.; Xu, X.
2017-12-01
Precipitation in mountainous regions generally occurs with high frequency and intensity, yet it is poorly captured by sparsely distributed rain gauges, posing a great challenge for water management. Satellite-based Precipitation Estimation (SPE) provides global high-resolution alternative data for hydro-climatic studies, but is subject to considerable biases. In this study, a model named PDMMA-USESGO, for Precipitation Data Merging over Mountainous Areas Using Satellite Estimates and Sparse Gauge Observations, is developed to support precipitation mapping and hydrological modeling in mountainous catchments. The PDMMA-USESGO framework includes two steps, adjusting SPE biases and merging satellite-gauge estimates, using the quantile mapping approach, a two-dimensional Gaussian weighting scheme (considering elevation effects), and an inverse root mean square error weighting method. The model is applied and evaluated over the Tibetan Plateau (TP) with the PERSIANN-CCS precipitation retrievals (daily, 0.04°×0.04°) and sparse observations from 89 gauges, for the 11-yr period of 2003-2013. To assess the effects of data merging on streamflow modeling, a hydrological evaluation is conducted over a watershed in the southeast TP based on the Soil and Water Assessment Tool (SWAT). Evaluation results indicate the effectiveness of the model in generating high-resolution, high-accuracy precipitation estimates over mountainous terrain, with the merged estimates (Mer-SG) presenting consistently improved correlation coefficients, root mean square errors, and absolute mean biases relative to the original satellite estimates (Ori-CCS). The Mer-SG-forced streamflow simulations exhibit great improvements over simulations using Ori-CCS, with the coefficient of determination (R2) and Nash-Sutcliffe efficiency reaching 0.8 and 0.65, respectively. The presented model and case study serve as valuable references for hydro-climatic applications using remote sensing and gauge information in other mountainous areas of the world.
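The bias-adjustment step can be sketched as empirical quantile mapping (a generic formulation with synthetic gamma-distributed rainfall; names and numbers here are illustrative, not the PDMMA-USESGO code):

```python
# Sketch: empirical quantile mapping to correct satellite-rainfall biases.
import numpy as np

rng = np.random.default_rng(10)
gauge = rng.gamma(2.0, 3.0, size=2000)        # "observed" rainfall at gauges
sat = rng.gamma(2.0, 2.0, size=2000) + 1.0    # biased satellite estimates

def quantile_map(x, biased_ref, target_ref):
    """Map x through the empirical CDF of biased_ref onto target_ref."""
    q = np.interp(x, np.sort(biased_ref), np.linspace(0, 1, biased_ref.size))
    return np.quantile(target_ref, q)

corrected = quantile_map(sat, sat, gauge)
print("mean bias before/after:",
      round(sat.mean() - gauge.mean(), 2),
      round(corrected.mean() - gauge.mean(), 2))
```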
NASA Astrophysics Data System (ADS)
Carmo, Vanda; Santos, Mariana; Menezes, Gui M.; Loureiro, Clara M.; Lambardi, Paolo; Martins, Ana
2013-12-01
Seamounts are common topographic features around the Azores archipelago (NE Atlantic). Recently there has been increasing research effort devoted to the ecology of these ecosystems. In the Azores, the mesozooplankton is poorly studied, particularly in relation to these seafloor elevations. In this study, zooplankton communities in the Condor seamount area (Azores) were investigated during March, July and September 2010. Samples were taken during both day and night with a Bongo net of 200 µm mesh that was towed obliquely within the first 100 m of the water column. Total abundance, biomass and chlorophyll a concentrations did not vary with sampling site or within the diel cycle, but significant seasonal variation was observed. Moreover, zooplankton community composition showed the same strong seasonal pattern regardless of spatial or daily variability. Despite seasonal differences, the zooplankton community structure remained similar for the duration of this study. Seasonal variability better explained our results than mesoscale spatial variability. Spatial homogeneity is probably related to island proximity and local dynamics over Condor seamount. Zooplankton literature for the region is sparse; therefore a short review of the most important zooplankton studies from the Azores is also presented.
Continuous measurement of soil evaporation in a drip-irrigated wine vineyard in a desert area
USDA-ARS?s Scientific Manuscript database
Evaporation from the soil surface (E) can be a significant source of water loss in arid areas. In sparsely vegetated systems, E is expected to be a function of soil, climate, irrigation regime, precipitation patterns, and plant canopy development, and will therefore change dynamically at both daily ...
Providing Services for Handicapped Persons in Rural/Sparsely Populated Areas.
ERIC Educational Resources Information Center
Weatherman, Richard
The experiences of the 3-year Minnesota Severely Handicapped Delivery System Project have led to a model which utilizes resources of regional systems as key elements of a differentiated system for educational service delivery to the handicapped in rural areas and involves state education agencies, statewide regional centers, local education units,…
Center for Support of Mental Health Services in Isolated Rural Areas. Final Report.
ERIC Educational Resources Information Center
Ciarlo, James A.
In 1994, the University of Denver received a grant to develop and operate the Frontier Mental Health Services Resource Network (FMHSRN). FMHSRN's principal aim was to improve delivery of mental health services in sparsely populated "frontier" areas by providing technical assistance to frontier and rural audiences. Traditional…
Spectrotemporal CT data acquisition and reconstruction at low dose
Clark, Darin P.; Lee, Chang-Lung; Kirsch, David G.; Badea, Cristian T.
2015-01-01
Purpose: X-ray computed tomography (CT) is widely used, both clinically and preclinically, for fast, high-resolution anatomic imaging; however, compelling opportunities exist to expand its use in functional imaging applications. For instance, spectral information combined with nanoparticle contrast agents enables quantification of tissue perfusion levels, while temporal information details cardiac and respiratory dynamics. The authors propose and demonstrate a projection acquisition and reconstruction strategy for 5D CT (3D + dual energy + time) which recovers spectral and temporal information without substantially increasing radiation dose or sampling time relative to anatomic imaging protocols. Methods: The authors approach the 5D reconstruction problem within the framework of low-rank and sparse matrix decomposition. Unlike previous work on rank-sparsity constrained CT reconstruction, the authors establish an explicit rank-sparse signal model to describe the spectral and temporal dimensions. The spectral dimension is represented as a well-sampled time and energy averaged image plus regularly undersampled principal components describing the spectral contrast. The temporal dimension is represented as the same time and energy averaged reconstruction plus contiguous, spatially sparse, and irregularly sampled temporal contrast images. Using a nonlinear, image domain filtration approach, the authors refer to as rank-sparse kernel regression, the authors transfer image structure from the well-sampled time and energy averaged reconstruction to the spectral and temporal contrast images. This regularization strategy strictly constrains the reconstruction problem while approximately separating the temporal and spectral dimensions. Separability results in a highly compressed representation for the 5D data in which projections are shared between the temporal and spectral reconstruction subproblems, enabling substantial undersampling. The authors solved the 5D reconstruction problem using the split Bregman method and GPU-based implementations of backprojection, reprojection, and kernel regression. Using a preclinical mouse model, the authors apply the proposed algorithm to study myocardial injury following radiation treatment of breast cancer. Results: Quantitative 5D simulations are performed using the MOBY mouse phantom. Twenty data sets (ten cardiac phases, two energies) are reconstructed with 88 μm, isotropic voxels from 450 total projections acquired over a single 360° rotation. In vivo 5D myocardial injury data sets acquired in two mice injected with gold and iodine nanoparticles are also reconstructed with 20 data sets per mouse using the same acquisition parameters (dose: ∼60 mGy). For both the simulations and the in vivo data, the reconstruction quality is sufficient to perform material decomposition into gold and iodine maps to localize the extent of myocardial injury (gold accumulation) and to measure cardiac functional metrics (vascular iodine). Their 5D CT imaging protocol represents a 95% reduction in radiation dose per cardiac phase and energy and a 40-fold decrease in projection sampling time relative to their standard imaging protocol. Conclusions: Their 5D CT data acquisition and reconstruction protocol efficiently exploits the rank-sparse nature of spectral and temporal CT data to provide high-fidelity reconstruction results without increased radiation dose or sampling time. PMID:26520724
Wen, Dong; Jia, Peilei; Lian, Qiusheng; Zhou, Yanhong; Lu, Chengbiao
2016-01-01
At present, sparse representation-based classification (SRC) has become an important approach in electroencephalograph (EEG) signal analysis, in which the data are sparsely represented on the basis of a fixed or learned dictionary and classified according to reconstruction criteria. SRC methods have been used to analyze the EEG signals of epilepsy, cognitive impairment, and brain-computer interfaces (BCI), and rapid progress has been made in computational accuracy, efficiency, and robustness. However, these methods still have deficiencies in real-time performance, generalization ability, and dependence on labeled samples in the analysis of EEG signals. This mini review describes the advantages and disadvantages of SRC methods in EEG signal analysis, with the expectation that these methods can provide better tools for analyzing EEG signals. PMID:27458376
Distribution of model uncertainty across multiple data streams
NASA Astrophysics Data System (ADS)
Wutzler, Thomas
2014-05-01
When confronting biogeochemical models with a diversity of observational data streams, we are faced with the problem of weighting the data streams. Without weighting, or with multiple blocked cost functions, model uncertainty is allocated to the sparse data streams, and possible bias in processes that are strongly constrained is exported to processes that are constrained only by sparse data streams. In this study we propose an approach that aims at making model uncertainty a factor of observation uncertainty that is constant across all data streams. Further, we propose an implementation based on Markov chain Monte Carlo sampling combined with simulated annealing that is able to determine this variance factor. The method is exemplified both with very simple models and artificial data and with an inversion of the DALEC ecosystem carbon model against multiple observations of Howland forest. We argue that the presented approach can help mitigate, and perhaps resolve, the problem of bias export to sparse data streams.
Sampling schemes and parameter estimation for nonlinear Bernoulli-Gaussian sparse models
NASA Astrophysics Data System (ADS)
Boudineau, Mégane; Carfantan, Hervé; Bourguignon, Sébastien; Bazot, Michael
2016-06-01
We address the sparse approximation problem in the case where the data are approximated by the linear combination of a small number of elementary signals, each of these signals depending non-linearly on additional parameters. Sparsity is explicitly expressed through a Bernoulli-Gaussian hierarchical model in a Bayesian framework. Posterior mean estimates are computed using Markov chain Monte Carlo algorithms. We generalize the partially marginalized Gibbs sampler proposed in the linear case in [1], and build a hybrid Hastings-within-Gibbs algorithm in order to account for the nonlinear parameters. All model parameters are then estimated in an unsupervised procedure. The resulting method is evaluated on a sparse spectral analysis problem. It is shown to converge more efficiently than the classical joint estimation procedure, with only a slight increase of the computational cost per iteration, consequently reducing the global cost of the estimation procedure.
A sparse equivalent source method for near-field acoustic holography.
Fernandez-Grande, Efren; Xenaki, Angeliki; Gerstoft, Peter
2017-01-01
This study examines a near-field acoustic holography method consisting of a sparse formulation of the equivalent source method, based on the compressive sensing (CS) framework. The method, denoted Compressive-Equivalent Source Method (C-ESM), encourages spatially sparse solutions (based on the superposition of few waves) that are accurate when the acoustic sources are spatially localized. The importance of obtaining a non-redundant representation, i.e., a sensing matrix with low column coherence, and the inherent ill-conditioning of near-field reconstruction problems is addressed. Numerical and experimental results on a classical guitar and on a highly reactive dipole-like source are presented. C-ESM is valid beyond the conventional sampling limits, making wide-band reconstruction possible. Spatially extended sources can also be addressed with C-ESM, although in this case the obtained solution does not recover the spatial extent of the source.
Mangen, M-J J; Nielen, M; Burrell, A M
2002-12-18
We examined the importance of pig-population density in the area of an outbreak of classical swine fever (CSF) for the spread of the infection and the choice of control measures. A spatial, stochastic, dynamic epidemiological simulation model linked to a sector-level market-and-trade model for The Netherlands was used. Outbreaks in sparsely and densely populated areas were compared under four different control strategies and with two alternative trade assumptions. The obligatory control strategy required by current EU legislation was predicted to be sufficient to eradicate an epidemic starting in an area with a sparse pig population. By contrast, additional control measures would be necessary if the outbreak began in an area with high pig density. The economic consequences of using preventive slaughter rather than emergency vaccination as an additional control measure depended strongly on the reactions of trading partners. Reducing the number of animal movements significantly reduced the size and length of epidemics in areas with high pig density. The phenomenon of carrier piglets was included in the model with realistic probabilities of infection by this route, but it made a negligible contribution to the spread of the infection.
Study of low density air transportation concepts
NASA Technical Reports Server (NTRS)
Webb, H. M.
1972-01-01
Low density air transport refers to air service to sparsely populated regions. There are two major objectives. The first is to examine those characteristics of sparsely populated areas which pertain to air transportation. This involves determination of geographical, commercial, and population trends, as well as the traveler characteristics which affect the viability of air transport in the region. The second objective is to analyze the technical, economic, and operational characteristics of low density air service. Two representative but diverse arenas, West Virginia and Arizona, were selected for analysis. The results indicate that Arizona can support air service under certain assumptions, whereas West Virginia cannot.
NASA Astrophysics Data System (ADS)
Zhao, Fengjun; Liu, Junting; Qu, Xiaochao; Xu, Xianhui; Chen, Xueli; Yang, Xiang; Cao, Feng; Liang, Jimin; Tian, Jie
2014-12-01
To address the multicollinearity issue and the unequal contributions of vascular parameters in the quantification of angiogenesis, we developed a quantitative evaluation method of vascular parameters for angiogenesis based on in vivo micro-CT imaging of hindlimb ischemia model mice. Taking vascular volume as the ground-truth parameter, nine vascular parameters were first assembled into sparse principal components (PCs) to reduce multicollinearity. Aggregated boosted trees (ABTs) were then employed to analyze the importance of the vascular parameters for the quantification of angiogenesis via the loadings of the sparse PCs. The results demonstrated that vascular volume was mainly characterized by vascular area, vascular junctions, connectivity density, segment number, and vascular length, indicating that these are the key vascular parameters for the quantification of angiogenesis. The proposed method was compared with both ABTs applied directly to the nine vascular parameters and with Pearson correlation, and the results were consistent. In contrast to ABTs applied directly to the vascular parameters, the proposed method can select all the key vascular parameters simultaneously, because all the key vascular parameters are assembled into the sparse PCs with the highest relative importance.
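A hypothetical stand-in for the described pipeline, using scikit-learn's SparsePCA and a gradient-boosted tree model in place of aggregated boosted trees; the data shapes and settings are illustrative, not the paper's.

```python
import numpy as np
from sklearn.decomposition import SparsePCA
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 9))              # 40 mice x 9 vascular parameters (synthetic)
y = X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.1, size=40)  # "vascular volume"

spca = SparsePCA(n_components=4, alpha=1.0, random_state=0)
Z = spca.fit_transform(X)                 # sparse principal components

gbt = GradientBoostingRegressor(random_state=0).fit(Z, y)     # stand-in for ABTs
pc_importance = gbt.feature_importances_  # relative importance of each sparse PC

# push importance back to the original parameters through the sparse loadings
param_importance = np.abs(spca.components_).T @ pc_importance
print(param_importance / param_importance.sum())
```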
Energy budgets and resistances to energy transport in sparsely vegetated rangeland
Nichols, W.D.
1992-01-01
Partitioning available energy between plants and bare soil in sparsely vegetated rangelands will allow hydrologists and others to gain a greater understanding of water use by native vegetation, especially phreatophytes. Standard methods of conducting energy budget studies result in measurements of latent and sensible heat fluxes above the plant canopy which therefore include the energy fluxes from both the canopy and the soil. One-dimensional theoretical numerical models have been proposed recently for the partitioning of energy in sparse crops. Bowen ratio and other micrometeorological data collected over phreatophytes growing in areas of shallow ground water in central Nevada were used to evaluate the feasibility of using these models, which are based on surface and within-canopy aerodynamic resistances, to determine heat and water vapor transport in sparsely vegetated rangelands. The models appear to provide reasonably good estimates of sensible heat flux from the soil and latent heat flux from the canopy. Estimates of latent heat flux from the soil were less satisfactory. Sensible heat flux from the canopy was not well predicted by the present resistance formulations. Also, estimates of total above-canopy fluxes were not satisfactory when using a single value for above-canopy bulk aerodynamic resistance. © 1992.
Luo, Hanjiang; Guo, Zhongwen; Wu, Kaishun; Hong, Feng; Feng, Yuan
2009-01-01
Underwater acoustic sensor networks (UWA-SNs) are envisioned to perform monitoring tasks over the large portion of the world covered by oceans. Due to economics and the large area of the ocean, UWA-SNs are mainly sparsely deployed networks nowadays. Limited battery resources are a big challenge for the deployment of such long-term sensor networks. Unbalanced battery energy consumption leads to early energy depletion of nodes, which partitions the whole network and impairs the integrity of the monitoring datasets, or even results in the collapse of the entire network. On the contrary, balanced energy dissipation of nodes can prolong the lifetime of such networks. In this paper, we focus on the energy-balanced dissipation problem of two types of sparsely deployed UWA-SNs: underwater moored monitoring systems and sparsely deployed two-dimensional UWA-SNs. We first analyze the reasons for unbalanced energy consumption in such networks, then we propose two energy-balanced strategies to maximize the lifetime of networks in both shallow and deep water. Finally, we evaluate our methods by simulations, and the results show that the two strategies can achieve balanced energy consumption per node while prolonging the network lifetime. PMID:22399970
Impact of mercury from Canadian boreal forest wildfires on New England
NASA Astrophysics Data System (ADS)
Hwang, G.; Talbot, R. W.
2010-12-01
Canadian boreal forest fires release significant amounts of mercury and cause several air quality episodes in New England every year, especially during summer. Using continuous monitoring of mercury at two New England sites, one rural and one elevated, from 2004 to date, several wildfire transport events were screened out using ensembles of backward trajectories, ensuring that the air parcels sampled spent substantial residence time within the box of burned area defined by the Fire Information for Resource Management System (FIRMS) MODIS hotspot/fire data. Other biomass burning tracers, such as HCN, were also used as criteria when available during the event periods. The mercury-to-CO ratios during the events were calculated as input to the Sparse Matrix Operator Kernel Emissions (SMOKE) system to simulate the high and low ranges of mercury emissions from the burned area. We are now using the Community Multiscale Air Quality Modeling System (CMAQ) to study the impact of the mercury emissions from Canadian boreal forest wildfires on the New England region in more detail.
Poverty Dynamics, Ecological Endowments, and Land Use among Smallholders in the Brazilian Amazon
Guedes, Gilvan R.; VanWey, Leah K.; Hull, James R.; Antigo, Mariangela; Barbieri, Alisson F.
2013-01-01
Rural settlement in previously sparsely occupied areas of the Brazilian Amazon has been associated with high levels of forest loss and unclear long-term social outcomes. We focus here on the micro-level processes in one settlement area to answer the question of how settler and farm endowments affect household poverty. We analyze the extent to which poverty is sensitive to changes in natural capital, land use strategies, and biophysical characteristics of properties (particularly soil quality). Cumulative time spent in poverty is simulated using Markovian processes, which show that accessibility to markets and land use system are especially important for decreasing poverty among households in our sample. Wealthier households are selected into commercial production of perennials before our initial observation, and are therefore in poverty a lower proportion of the time. Land in pasture, in contrast, has an independent effect on reducing the proportion of time spent in poverty. Taken together, these results show that investments in roads and the institutional structures needed to make commercial agriculture or ranching viable in existing and new settlement areas can improve human well-being in frontiers. PMID:24267754
Brain activity related to phonation in young patients with adductor spasmodic dysphonia.
Kiyuna, Asanori; Maeda, Hiroyuki; Higa, Asano; Shingaki, Kouta; Uehara, Takayuki; Suzuki, Mikio
2014-06-01
This study investigated brain activity during phonation in young patients with adductor spasmodic dysphonia (ADSD) of relatively short disease duration (<10 years). Six subjects with ADSD of short duration (mean age: 24.3 years; mean disease duration: 41 months) and six healthy controls (mean age: 30.8 years) underwent functional magnetic resonance imaging (fMRI) using a sparse sampling method to identify brain activity during vowel phonation (/i:/). Intragroup and intergroup analyses were performed using statistical parametric mapping software. Areas of activation in the ADSD and control groups were similar to those reported previously for vowel phonation. All of the activated areas were observed bilaterally and symmetrically. Intergroup analysis revealed higher brain activity in the ADSD group in the auditory-related areas (Brodmann's areas [BA] 40, 41), motor speech areas (BA44, 45), bilateral insula (BA13), bilateral cerebellum, and middle frontal gyrus (BA46). Areas with lower activation were the left primary sensory area (BA1-3) and bilateral subcortical nuclei (putamen and globus pallidus). The auditory cortical responses observed may reflect that young ADSD patients control their voice by use of the motor speech area, insula, inferior parietal cortex, and cerebellum. Neural activity in the primary sensory area and basal ganglia may affect the voice symptoms of young ADSD patients with short disease duration. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakeman, John D.; Narayan, Akil; Zhou, Tao
We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solves a preconditioned $\ell^1$-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. In conclusion, numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.
Porosity estimation by semi-supervised learning with sparsely available labeled samples
NASA Astrophysics Data System (ADS)
Lima, Luiz Alberto; Görnitz, Nico; Varella, Luiz Eduardo; Vellasco, Marley; Müller, Klaus-Robert; Nakajima, Shinichi
2017-09-01
This paper addresses the porosity estimation problem from seismic impedance volumes and porosity samples located in a small group of exploratory wells. Regression methods, trained on the impedance as inputs and the porosity as output labels, generally suffer from extremely expensive (and hence sparsely available) porosity samples. To make optimal use of the valuable porosity data, a semi-supervised machine learning method was proposed, Transductive Conditional Random Field Regression (TCRFR), showing good performance (Görnitz et al., 2017). TCRFR, however, still requires more labeled data than are usually available, which creates a gap when applying the method to the porosity estimation problem in realistic situations. In this paper, we aim to fill this gap by introducing two graph-based preprocessing techniques, which adapt the original TCRFR to extremely weakly supervised scenarios. Our new method outperforms previous automatic estimation methods on synthetic data and provides results comparable to the labor-intensive, time-consuming manual geostatistics approach on real data, proving its potential as a practical industrial tool.
Distributed fiber sparse-wideband vibration sensing by sub-Nyquist additive random sampling
NASA Astrophysics Data System (ADS)
Zhang, Jingdong; Zheng, Hua; Zhu, Tao; Yin, Guolu; Liu, Min; Bai, Yongzhong; Qu, Dingrong; Qiu, Feng; Huang, Xianbing
2018-05-01
The round trip time of the light pulse limits the maximum detectable vibration frequency response range of phase-sensitive optical time domain reflectometry (φ-OTDR). Unlike the uniform laser pulse interval in conventional φ-OTDR, we randomly modulate the pulse interval, so that an equivalent sub-Nyquist additive random sampling (sNARS) is realized for every sensing point of the long interrogation fiber. For a φ-OTDR system with 10 km sensing length, the sNARS method is optimized by theoretical analysis and Monte Carlo simulation, and the experimental results verify that a wideband sparse signal can be identified and reconstructed. Such a method can broaden the vibration frequency response range of φ-OTDR, which is of great significance in sparse-wideband-frequency vibration signal detection, such as rail track monitoring and metal defect detection.
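The following toy sketch (not the authors' code) illustrates why additive random sampling avoids the aliasing of uniform sub-Nyquist sampling: a 900 Hz tone sampled at a mean rate of only 200 Sa/s is still identified by sparse (ISTA) reconstruction on a nonuniform Fourier dictionary. Grid spacing, step counts, and the regularization weight are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
# Additive random sampling: intervals jitter around 1/200 s (mean rate 200 Sa/s)
t = np.cumsum(rng.uniform(0.5, 1.5, 200) / 200.0)
x = np.exp(2j * np.pi * 900.0 * t)             # 900 Hz tone, far above the 100 Hz Nyquist limit

freqs = np.arange(0.0, 1200.0, 2.0)            # candidate frequency grid
A = np.exp(2j * np.pi * np.outer(t, freqs))    # nonuniform Fourier dictionary

# ISTA for min 0.5*||A s - x||^2 + lam*||s||_1 over the sparse spectrum s
s = np.zeros(len(freqs), dtype=complex)
L = np.linalg.norm(A, 2) ** 2
lam = 5.0
for _ in range(500):
    z = s - A.conj().T @ (A @ s - x) / L
    mag = np.abs(z)
    s = np.maximum(1.0 - lam / (L * mag + 1e-30), 0.0) * z
print(freqs[np.argmax(np.abs(s))])             # -> 900.0, despite the sub-Nyquist mean rate
```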
Visual Tracking via Sparse and Local Linear Coding.
Wang, Guofeng; Qin, Xueying; Zhong, Fan; Liu, Yue; Li, Hongbo; Peng, Qunsheng; Yang, Ming-Hsuan
2015-11-01
The state search is an important component of any object tracking algorithm. Numerous algorithms have been proposed, but stochastic sampling methods (e.g., particle filters) are arguably one of the most effective approaches. However, the discretization of the state space complicates the search for the precise object location. In this paper, we propose a novel tracking algorithm that extends the state space of particle observations from discrete to continuous. The solution is determined accurately via iterative linear coding between two convex hulls. The algorithm is modeled by an optimal function, which can be efficiently solved by either convex sparse coding or locality constrained linear coding. The algorithm is also very flexible and can be combined with many generic object representations. Thus, we first use sparse representation to achieve an efficient searching mechanism of the algorithm and demonstrate its accuracy. Next, two other object representation models, i.e., least soft-threshold squares and adaptive structural local sparse appearance, are implemented with improved accuracy to demonstrate the flexibility of our algorithm. Qualitative and quantitative experimental results demonstrate that the proposed tracking algorithm performs favorably against the state-of-the-art methods in dynamic scenes.
Artificial neural network does better spatiotemporal compressive sampling
NASA Astrophysics Data System (ADS)
Lee, Soo-Young; Hsu, Charles; Szu, Harold
2012-06-01
Spatiotemporal sparseness is generated naturally by the human visual system, modeled here with an artificial neural network implementation of associative memory. Sparseness, in this sense, is exactly what compressive sensing achieves through information concentration. To concentrate information, one can use spatial correlation, a spatial FFT or DWT, or, best of all, an adaptive wavelet transform (cf. NUS, Shen Shawei). For higher-dimensional spatiotemporal information concentration, however, mathematics cannot be as flexible as a living human sensory system, which evolved this flexibility for survival. The rest of the story is given in the paper.
Genetics Home Reference: autosomal recessive hypotrichosis
... Autosomal recessive hypotrichosis is a condition that affects hair growth. People with this condition have sparse hair (hypotrichosis) ... erosions) on the scalp. In areas of poor hair growth, they may also develop bumps called hyperkeratotic follicular ...
Zhang, Chuncheng; Song, Sutao; Wen, Xiaotong; Yao, Li; Long, Zhiying
2015-04-30
Feature selection plays an important role in improving the classification accuracy of multivariate classification techniques in the context of fMRI-based decoding, due to the "few samples and large features" nature of functional magnetic resonance imaging (fMRI) data. Recently, several sparse representation methods have been applied to the voxel selection of fMRI data. Despite the low computational efficiency of the sparse representation methods, they still displayed promise for applications that select features from fMRI data. In this study, we proposed the Laplacian smoothed L0 norm (LSL0) approach for feature selection of fMRI data. Based on the fast sparse decomposition using the smoothed L0 norm (SL0) (Mohimani, 2007), the LSL0 method used the Laplacian function to approximate the L0 norm of the sources. Results on simulated and real fMRI data demonstrated the feasibility and robustness of LSL0 for sparse source estimation and feature selection. Simulated results indicated that LSL0 produced more accurate source estimation than SL0 at high noise levels. The classification accuracy using voxels selected by LSL0 was higher than that of SL0 in both simulated and real fMRI experiments. Moreover, both LSL0 and SL0 showed higher classification accuracy and required less time than ICA and the t-test for fMRI decoding. LSL0 outperformed SL0 in sparse source estimation at high noise levels and in feature selection. Moreover, LSL0 and SL0 showed better performance than ICA and the t-test for feature selection. Copyright © 2015 Elsevier B.V. All rights reserved.
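A sketch of the smoothed-L0 idea with a Laplacian surrogate, in the spirit of LSL0; the step size, schedules, and exact descent direction are illustrative and not taken from the authors' implementation.

```python
import numpy as np

def lsl0(A, x, sigma_decrease=0.7, mu=2.0, n_sigma=12, inner=10):
    """Smoothed-L0 sparse decomposition with a Laplacian surrogate: the sum
    of exp(-|s|/sigma) counts (approximately) the zeros of s as sigma -> 0.
    Solves A s = x for sparse s."""
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ x                        # minimum-norm feasible starting point
    sigma = 2.0 * np.max(np.abs(s))
    for _ in range(n_sigma):
        for _ in range(inner):
            d = np.sign(s) * sigma * np.exp(-np.abs(s) / sigma)   # shrink step
            s = s - mu * d
            s = s - A_pinv @ (A @ s - x)  # project back onto {s : A s = x}
        sigma *= sigma_decrease           # gradually sharpen the L0 surrogate
    return s
```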
Molecular cancer classification using a meta-sample-based regularized robust coding method.
Wang, Shu-Lin; Sun, Liuchao; Fang, Jianwen
2014-01-01
Previous studies have demonstrated that machine learning based molecular cancer classification using gene expression profiling (GEP) data is promising for the clinical diagnosis and treatment of cancer. Novel classification methods with high efficiency and prediction accuracy are still needed to deal with the high dimensionality and small sample size of typical GEP data. Recently the sparse representation (SR) method has been successfully applied to cancer classification. Nevertheless, its efficiency needs to be improved when analyzing large-scale GEP data. In this paper we present the meta-sample-based regularized robust coding classification (MRRCC), a novel effective cancer classification technique that combines the idea of the meta-sample-based cluster method with the regularized robust coding (RRC) method. It assumes that the coding residual and the coding coefficient are respectively independent and identically distributed. Similar to meta-sample-based SR classification (MSRC), MRRCC extracts a set of meta-samples from the training samples, and then encodes a testing sample as the sparse linear combination of these meta-samples. The representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Extensive experiments on publicly available GEP datasets demonstrate that the proposed method is more efficient, while its prediction accuracy is equivalent to existing MSRC-based methods and better than other state-of-the-art dimension-reduction-based methods.
ERIC Educational Resources Information Center
Staihr, Brian
High speed data services known as broadband have the potential to make rural areas less isolated and improve the rural quality of life, but physical barriers, sparse population density, and few markets present significant obstacles to their deployment in rural areas. Broadband applications such as e-commerce, distance education, and telemedicine…
ERIC Educational Resources Information Center
Brown, Sandra; Williams, Michael
This study of educational provisions in Western Australia, an area of 2.5 million square kilometers with a mere 1.2 million inhabitants, provides a broad picture of the complex, difficult, and expensive undertaking of providing education to a small, widely-spread population which differs in demographic, economic, and cultural characteristics. The…
Small area estimation in forests affected by wildfire in the Interior West
G. G. Moisen; J. A. Blackard; M. Finco
2004-01-01
Recent emphasis has been placed on estimating the amount and characteristics of forests affected by wildfire in the Interior West. Data collected by FIA are intended for estimation over large geographic areas and are too sparse to construct sufficiently precise estimates within burn perimeters. This paper illustrates how recently built MODIS-based maps of forest/nonforest and...
Bridging the gap between strategic and management forest inventories
Ronald E. McRoberts
2009-01-01
Strategic forest inventory programs collect information for a large number of variables on a relatively sparse array of field plots. Data from these inventories are used to produce estimates for large areas such as states and provinces, regions, or countries. The purpose of management forest inventories is to guide management decisions for small areas such as stands....
The effects of forest fragmentation on forest stand attributes
Ronald E. McRoberts; Greg C. Liknes
2002-01-01
For two study areas in Minnesota, USA, one heavily forested and one sparsely forested, maps of predicted proportion forest area were created using Landsat Thematic Mapper imagery, forest inventory plot data, and a logistic regression model. The maps were used to estimate quantitative indices of forest fragmentation. Correlations between the values of the indices and...
Organization of Educational Programs in Sparsely Settled Areas of the World.
ERIC Educational Resources Information Center
Edington, Everett D.
Only one-third of the world's population presently lives in countries where as much as a complete primary education is provided for children in rural areas. While the number of one-teacher schools in the United States has decreased from 148,711 in 1930 to 15,018 in 1961, a similar trend is not taking place as rapidly in other areas of the world,…
SU-G-IeP1-13: Sub-Nyquist Dynamic MRI Via Prior Rank, Intensity and Sparsity Model (PRISM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, B; Gao, H
Purpose: Accelerated dynamic MRI is important for MRI guided radiotherapy. Inspired by compressive sensing (CS), sub-Nyquist dynamic MRI has been an active research area, i.e., sparse sampling in k-t space for accelerated dynamic MRI. This work investigates sub-Nyquist dynamic MRI via a previously developed CS model, namely the Prior Rank, Intensity and Sparsity Model (PRISM). Methods: The proposed method utilizes PRISM with rank minimization and incoherent sampling patterns for sub-Nyquist reconstruction. In PRISM, the low-rank background image, which is automatically calculated by rank minimization, is excluded from the L1 minimization step of the CS reconstruction to further sparsify the residual image, thus allowing for higher acceleration rates. Furthermore, the sampling pattern in k-t space is made more incoherent by sampling a different set of k-space points at different temporal frames. Results: Reconstruction results from the L1-sparsity method and the PRISM method with 30% undersampled data and 15% undersampled data are compared to demonstrate the power of PRISM for dynamic MRI. Conclusion: A sub-Nyquist MRI reconstruction method based on PRISM is developed, with improved image quality over the L1-sparsity method.
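A toy alternating-thresholding sketch of the rank-plus-sparsity decomposition that PRISM builds on, applied to a real-valued space-time (Casorati) data matrix X ≈ L + S; the thresholds are illustrative, and the actual PRISM reconstruction additionally enforces k-t data consistency.

```python
import numpy as np

def low_rank_plus_sparse(X, lam=None, n_iter=100):
    """Toy decomposition X ~ L + S: singular-value thresholding extracts the
    low-rank background L, soft thresholding keeps the sparse residual S."""
    m, n = X.shape
    lam = (1.0 / np.sqrt(max(m, n))) if lam is None else lam
    S = np.zeros_like(X)
    for _ in range(n_iter):
        U, sv, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = U @ np.diag(np.maximum(sv - 0.2 * sv[0], 0.0)) @ Vt  # shrink rank
        R = X - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)        # sparsify residual
    return L, S
```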
Retrieval of Understory NDVI in Sparse Boreal Forests By MODIS Brdf Data
NASA Astrophysics Data System (ADS)
Yang, W.; Kobayashi, H.; Suzuki, R.; Nasahara, K. N.
2014-12-01
Global products of leaf area index (LAI) usually show large uncertainties in sparsely vegetated areas, because the understory contribution is not negligible in reflectance modeling for low to intermediate canopy cover. Many efforts have therefore been made to include understory properties in LAI estimation algorithms. Compared with the conventional data-bank method, estimating forest understory properties from satellite data is better suited to long-term studies at global or continental scales. However, the existing remote sensing method based on multi-angular observations is complicated to implement. As an alternative, this study proposes a simple method to retrieve understory NDVI (NDVIu) for sparse boreal forests. The method exploits the property that the bi-directional variation of NDVIu is much smaller than that of the canopy-level NDVI. To retrieve NDVIu for a given pixel, linear extrapolation was applied using the pixels within a 5 × 5 target-pixel-centered window. The NDVI values were reconstructed from MODIS BRDF data corresponding to eight different solar-view angles, and NDVIu was estimated as the average of the NDVI values at the position where the stand NDVI has the smallest angular variation. Validation against a noise-free simulation dataset yielded high agreement between estimated and true NDVIu, with R2 and RMSE of 0.99 and 0.03, respectively. With the MODIS BRDF data, the NDVIu estimate was close to the in situ measured value (0.61 vs. 0.66 for estimate and measurement, respectively) and showed reasonable seasonal patterns over 2010-2013. The results imply a potential application of the retrieved NDVIu to improving the estimation of overstory LAI for sparse boreal forests.
Compressive Sensing with Cross-Validation and Stop-Sampling for Sparse Polynomial Chaos Expansions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik
Compressive sensing is a powerful technique for recovering sparse solutions of underdetermined linear systems, which are often encountered in uncertainty quantification analysis of expensive and high-dimensional physical models. We perform numerical investigations employing several compressive sensing solvers that target the unconstrained LASSO formulation, with a focus on linear systems that arise in the construction of polynomial chaos expansions. With core solvers of l1_ls, SpaRSA, CGIST, FPC_AS, and ADMM, we develop techniques to mitigate overfitting through an automated selection of the regularization constant based on cross-validation, and a heuristic strategy to guide the stop-sampling decision. Practical recommendations on parameter settings for these techniques are provided and discussed. The overall method is applied to a series of numerical examples of increasing complexity, including large eddy simulations of supersonic turbulent jet-in-crossflow involving a 24-dimensional input. Through empirical phase-transition diagrams and convergence plots, we illustrate sparse recovery performance under structures induced by polynomial chaos, accuracy and computational tradeoffs between polynomial bases of different degrees, and practicability of conducting compressive sensing for a realistic, high-dimensional physical application. Across the test cases studied in this paper, we find ADMM to demonstrate empirical advantages through consistently lower errors and faster computational times.
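A minimal illustration of the cross-validated choice of the regularization constant, here with scikit-learn's LassoCV on a sparse 1-D Legendre expansion; the paper's specific solvers, Christoffel preconditioning, and stop-sampling heuristic are not reproduced, and the toy model and basis degree are assumptions.

```python
import numpy as np
from numpy.polynomial.legendre import legvander, legval
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 40)                 # 40 random collocation samples
y = 2.0 * legval(x, [0, 0, 0, 1]) + 0.3 * legval(x, [0, 1])  # sparse Legendre truth

A = legvander(x, 60)                       # degree-60 basis: underdetermined system
fit = LassoCV(cv=5, fit_intercept=False).fit(A, y)   # alpha chosen by cross-validation
print(np.nonzero(np.abs(fit.coef_) > 1e-3)[0])       # recovered support -> [1 3]
```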
Dai, Hongying; Wu, Guodong; Wu, Michael; Zhi, Degui
2016-01-01
Next-generation sequencing data pose a severe curse of dimensionality, complicating traditional "single marker-single trait" analysis. We propose a two-stage combined p-value method for pathway analysis. The first stage is at the gene level, where we integrate effects within a gene using the Sequence Kernel Association Test (SKAT). The second stage is at the pathway level, where we perform a correlated Lancaster procedure to detect joint effects from multiple genes within a pathway. We show that the Lancaster procedure is optimal in Bahadur efficiency among all combined p-value methods. The Bahadur efficiency, [Formula: see text], compares sample sizes among different statistical tests when signals become sparse in sequencing data, i.e., ε → 0. The optimal Bahadur efficiency ensures that the Lancaster procedure asymptotically requires a minimal sample size to detect sparse signals ([Formula: see text]). The Lancaster procedure can also be applied to meta-analysis. Extensive empirical assessments of exome sequencing data show that the proposed method outperforms Gene Set Enrichment Analysis (GSEA). We applied the competitive Lancaster procedure to meta-analysis data generated by the Global Lipids Genetics Consortium to identify pathways significantly associated with high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, triglycerides, and total cholesterol.
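For reference, a minimal implementation of Lancaster's weighted combination of independent p-values (Fisher's method is the special case of all weights equal to 2); the correlated version used in the paper adjusts the null distribution and is not shown, and the example weights are hypothetical.

```python
import numpy as np
from scipy import stats

def lancaster(pvals, weights):
    """Lancaster's procedure: map each p_i to a chi-square upper quantile
    with w_i degrees of freedom and sum; under the global null (independent
    p-values) the sum is chi-square with sum(w_i) degrees of freedom."""
    p = np.asarray(pvals, dtype=float)
    w = np.asarray(weights, dtype=float)
    t = np.sum(stats.chi2.isf(p, df=w))    # inverse survival function = upper quantile
    return stats.chi2.sf(t, df=w.sum())    # combined p-value

# e.g., gene-level SKAT p-values combined within a pathway, weighted by gene size
print(lancaster([0.01, 0.20, 0.03], [4, 2, 6]))
```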
4D Infant Cortical Surface Atlas Construction using Spherical Patch-based Sparse Representation.
Wu, Zhengwang; Li, Gang; Meng, Yu; Wang, Li; Lin, Weili; Shen, Dinggang
2017-09-01
The 4D infant cortical surface atlas with densely sampled time points is highly needed for neuroimaging analysis of early brain development. In this paper, we build a 4D infant cortical surface atlas that is the first to cover 6 postnatal years, with 11 time points (i.e., 1, 3, 6, 9, 12, 18, 24, 36, 48, 60, and 72 months), based on 339 longitudinal MRI scans from 50 healthy infants. To build the 4D cortical surface atlas, first, we adopt a two-stage groupwise surface registration strategy to ensure both longitudinal consistency and unbiasedness. Second, instead of simply averaging over the co-registered surfaces, a spherical patch-based sparse representation is developed to overcome possible surface registration errors across different subjects. The central idea is that, for each local spherical patch in the atlas space, we build a dictionary, which includes the samples of the current local patch and its spatially neighboring patches from all co-registered surfaces, and then the current local patch in the atlas is sparsely represented using the built dictionary. Compared to atlases built with conventional methods, the 4D infant cortical surface atlas constructed by our method preserves more details of cortical folding patterns, thus leading to boosted accuracy in registration of new infant cortical surfaces.
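A simplified sketch of the patch-level idea, using orthogonal matching pursuit with a dictionary containing only the co-registered subject patches (the paper's dictionary also includes spatially neighboring patches); shapes and the sparsity level are illustrative.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def sparse_patch_atlas(patch_stack, n_nonzero=5):
    """Represent an atlas patch as a sparse combination of co-registered
    subject patches instead of their plain average. patch_stack has shape
    (n_subjects, n_vertices): one flattened spherical patch per subject."""
    D = patch_stack.T                       # dictionary: one atom per subject patch
    target = patch_stack.mean(axis=0)       # initial estimate of the atlas patch
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero).fit(D, target)
    return D @ omp.coef_                    # sparsely reconstructed atlas patch

atlas_patch = sparse_patch_atlas(np.random.default_rng(4).normal(size=(50, 64)))
```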
Tang, Yunwei; Jing, Linhai; Li, Hui; Liu, Qingjie; Yan, Qi; Li, Xiuxia
2016-01-01
This study explores the ability of WorldView-2 (WV-2) imagery to map bamboo in a mountainous region of Sichuan Province, China. A large part of the study area is covered by shadows in the image, and only a few of the derived sample points were useful. In order to identify bamboo from sparse training data, the sample size was expanded according to the reflectance of multispectral bands selected using principal component analysis (PCA). Then, class separability based on the training data was calculated using a feature space optimization method to select the features for classification. Four regular object-based classification methods were applied with both sets of training data. The results show that the k-nearest neighbor (k-NN) method produced the greatest accuracy. A geostatistically-weighted k-NN classifier, accounting for the spatial correlation between classes, was then applied to further increase the accuracy. It achieved 82.65% and 93.10% for the producer's and user's accuracies, respectively, for the bamboo class. The canopy densities were estimated to explain the result. This study demonstrates that WV-2 imagery can be used to identify small patches of understory bamboo given limited known samples, and the resulting bamboo distribution facilitates assessments of giant panda habitat. PMID:27879661
Machine-learned Identification of RR Lyrae Stars from Sparse, Multi-band Data: The PS1 Sample
NASA Astrophysics Data System (ADS)
Sesar, Branimir; Hernitschek, Nina; Mitrović, Sandra; Ivezić, Željko; Rix, Hans-Walter; Cohen, Judith G.; Bernard, Edouard J.; Grebel, Eva K.; Martin, Nicolas F.; Schlafly, Edward F.; Burgett, William S.; Draper, Peter W.; Flewelling, Heather; Kaiser, Nick; Kudritzki, Rolf P.; Magnier, Eugene A.; Metcalfe, Nigel; Tonry, John L.; Waters, Christopher
2017-05-01
RR Lyrae stars may be the best practical tracers of Galactic halo (sub-)structure and kinematics. The PanSTARRS1 (PS1) 3π survey offers multi-band, multi-epoch, precise photometry across much of the sky, but a robust identification of RR Lyrae stars in this data set poses a challenge, given PS1's sparse, asynchronous multi-band light curves (≲12 epochs in each of five bands, taken over a 4.5-year period). We present a novel template fitting technique that uses well-defined and physically motivated multi-band light curves of RR Lyrae stars, and demonstrate that we get accurate period estimates, precise to 2 s in >80% of cases. We augment these light-curve fits with other features from photometric time series and provide them to progressively more detailed machine-learned classification models. From these models, we are able to select the widest (three-fourths of the sky) and deepest (reaching 120 kpc) sample of RR Lyrae stars to date. The PS1 sample of ~45,000 RRab stars is pure (90%) and complete (80% at 80 kpc) at high galactic latitudes. It also provides distances precise to 3%, measured with newly derived period-luminosity relations for optical/near-infrared PS1 bands. With the addition of proper motions from Gaia and radial velocity measurements from multi-object spectroscopic surveys, we expect the PS1 sample of RR Lyrae stars to become the premier source for studying the structure, kinematics, and gravitational potential of the Galactic halo. The techniques presented in this study should translate well to other sparse, multi-band data sets, such as those produced by the Dark Energy Survey and the upcoming Large Synoptic Survey Telescope Galactic plane sub-survey.
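As a crude stand-in for the paper's template fits, the sketch below combines band-by-band Lomb-Scargle periodograms on a common frequency grid and keeps the strongest joint peak; real RRab work requires the multi-band templates described above, and the frequency grid is an assumption.

```python
import numpy as np
from astropy.timeseries import LombScargle

def multiband_period(times, mags, bands, freqs):
    """Average per-band Lomb-Scargle periodograms on a common frequency grid
    (cycles/day) and return the best period in days. times/mags/bands are
    concatenated over all filters of the sparse, asynchronous light curve."""
    power = np.zeros_like(freqs)
    for b in np.unique(bands):
        sel = bands == b
        if sel.sum() > 3:                  # skip bands with too few epochs
            power += LombScargle(times[sel], mags[sel]).power(freqs)
    return 1.0 / freqs[np.argmax(power)]

freqs = np.linspace(1.2, 4.0, 20000)       # periods of 0.25-0.83 d, typical of RRab
```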
Motor development in 9-month-old infants in relation to cultural differences and iron status.
Angulo-Barroso, Rosa M; Schapiro, Lauren; Liang, Weilang; Rodrigues, Onike; Shafir, Tal; Kaciroti, Niko; Jacobson, Sandra W; Lozoff, Betsy
2011-03-01
Motor development, which allows infants to explore their environment, promoting cognitive, social, and perceptual development, can be influenced by cultural practices and nutritional factors, such as iron deficiency. This study compared fine and gross motor development in 209 9-month-old infants from urban areas of China, Ghana, and USA (African-Americans) and considered effects of iron status. Iron deficiency anemia was most common in the Ghana sample (55%) followed by USA and China samples. Controlling for iron status, Ghanaian infants displayed precocity in gross motor development and most fine-motor reach-and-grasp tasks. US African-Americans performed the poorest in all tasks except bimanual coordination and the large ball. Controlling for cultural site, iron status showed linear trends for gross motor milestones and fine motor skills with small objects. Our findings add to the sparse literature on infant fine motor development across cultures. The results also indicate the need to consider nutritional factors when examining cultural differences in infant development. Copyright © 2010 Wiley Periodicals, Inc.
Watson, Hunna J.; Torgersen, Leila; Zerwas, Stephanie; Reichborn-Kjennerud, Ted; Knoph, Cecilie; Stoltenberg, Camilla; Siega-Riz, Anna Maria; Von Holle, Ann; Hamer, Robert M.; Meltzer, Helle; Ferguson, Elizabeth H.; Haugen, Margaretha; Magnus, Per; Kuhns, Rebecca; Bulik, Cynthia M.
2016-01-01
This review summarizes studies on eating disorders in pregnancy and the postpartum period that have been conducted as part of the broader Norwegian Mother and Child Cohort Study (MoBa). Prior to the 2000s, empirical literature on eating disorders in pregnancy was sparse and consisted mostly of studies in small clinical samples. MoBa has contributed to a new era of research by making population-based and large-sample research possible. To date, MoBa has led to 19 studies on diverse questions including the prevalence, course, and risk correlates of eating disorders during pregnancy and the postpartum. The associations between eating disorder exposure and pregnancy, birth and obstetric outcomes, and maternal and offspring health and well-being, have also been areas of focus. The findings indicate that eating disorders in pregnancy are relatively common and appear to confer health risks to mother and her child related to sleep, birth outcomes, maternal nutrition, and child feeding and eating. PMID:27110061
Chen, Bo; Chen, Minhua; Paisley, John; Zaas, Aimee; Woods, Christopher; Ginsburg, Geoffrey S; Hero, Alfred; Lucas, Joseph; Dunson, David; Carin, Lawrence
2010-11-09
Nonparametric Bayesian techniques have been developed recently to extend the sophistication of factor models, allowing one to infer the number of appropriate factors from the observed data. We consider such techniques for sparse factor analysis, with application to gene-expression data from three virus challenge studies. Particular attention is placed on employing the Beta Process (BP), the Indian Buffet Process (IBP), and related sparseness-promoting techniques to infer a proper number of factors. The posterior density function on the model parameters is computed using Gibbs sampling and variational Bayesian (VB) analysis. Time-evolving gene-expression data are considered for respiratory syncytial virus (RSV), rhinovirus, and influenza, using blood samples from healthy human subjects. These data were acquired in three challenge studies, each executed after receiving institutional review board (IRB) approval from Duke University. Comparisons are made between several alternative means of performing nonparametric factor analysis on these data, with comparisons as well to sparse-PCA and Penalized Matrix Decomposition (PMD), closely related non-Bayesian approaches. Applying the Beta Process to the factor scores, or to the singular values of a pseudo-SVD construction, the proposed algorithms infer the number of factors in gene-expression data. For real data the "true" number of factors is unknown; in our simulations we consider a range of noise variances, and the proposed Bayesian models inferred the number of factors accurately relative to other methods in the literature, such as sparse-PCA and PMD. We have also identified a "pan-viral" factor of importance for each of the three viruses considered in this study. We have identified a set of genes associated with this pan-viral factor, of interest for early detection of such viruses based upon the host response, as quantified via gene-expression data.
NASA Astrophysics Data System (ADS)
Yang, Yongchao; Nagarajaiah, Satish
2016-06-01
Randomly missing data in time histories of structural vibration responses often occur in structural dynamics and health monitoring. For example, structural vibration responses are often corrupted by outliers or erroneous measurements due to sensor malfunction; in wireless sensing platforms, data loss during wireless communication is a common issue. Moreover, to alleviate the wireless data sampling or communication burden, certain amounts of data are often discarded during sampling or before transmission. In these and other applications, recovery of the randomly missing structural vibration responses from the available, incomplete data is essential for system identification and structural health monitoring; it is, however, an ill-posed inverse problem. This paper explicitly harnesses the structure of the structural vibration response data itself to address this problem. The relevant observation, empirical but often true in practice, is that typically only a few modes are active in the structural vibration responses; hence the single-channel data vector has a sparse representation (in the frequency domain), and the multi-channel data matrix has a low-rank structure (by singular value decomposition). Exploiting such prior knowledge of the data structure (intra-channel sparse or inter-channel low-rank), the new theories of ℓ1-minimization sparse recovery and nuclear-norm-minimization low-rank matrix completion enable recovery of the randomly missing or corrupted structural vibration response data. The performance of these two alternatives, in terms of recovery accuracy and computational time under different data missing rates, is investigated on several structural vibration response data sets: the seismic responses of the super high-rise Canton Tower and the structural health monitoring accelerations of a real large-scale cable-stayed bridge. Encouraging results are obtained, and the applicability and limitations of the presented methods are discussed.
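A minimal singular-value-thresholding sketch of the low-rank (inter-channel) alternative, assuming a channels-by-time matrix Y with a boolean mask of observed entries; the threshold schedule and iteration count are illustrative, not the paper's algorithm.

```python
import numpy as np

def complete_low_rank(Y, mask, n_iter=200):
    """Recover a channels-by-time vibration data matrix from the observed
    entries (mask == True) by iterative singular value thresholding: few
    active modes imply an (approximately) low-rank matrix."""
    X = np.where(mask, Y, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = U @ np.diag(np.maximum(s - 0.1 * s[0], 0.0)) @ Vt   # shrink rank
        X[mask] = Y[mask]                   # re-impose the observed samples
    return X
```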
Stable Sparse Classifiers Identify qEEG Signatures that Predict Learning Disabilities (NOS) Severity
Bosch-Bayard, Jorge; Galán-García, Lídice; Fernandez, Thalia; Lirio, Rolando B.; Bringas-Vega, Maria L.; Roca-Stappung, Milene; Ricardo-Garcell, Josefina; Harmony, Thalía; Valdes-Sosa, Pedro A.
2018-01-01
In this paper, we present a novel methodology to solve the classification problem, based on sparse (data-driven) regressions, combined with techniques for ensuring stability, especially useful for high-dimensional datasets and small sample sizes. The sensitivity and specificity of the classifiers are assessed by a stable ROC procedure, which uses a non-parametric algorithm for estimating the area under the ROC curve. This method allows assessing the performance of the classification by the ROC technique when more than two groups are involved in the classification problem, i.e., when the gold standard is not binary. We apply this methodology to EEG spectral signatures to find biomarkers that allow discriminating between (and predicting pertinence to) different subgroups of children diagnosed with Not Otherwise Specified Learning Disabilities (LD-NOS) disorder. Children with LD-NOS have notable learning difficulties, which affect education but cannot be put into a specific category such as reading (Dyslexia), Mathematics (Dyscalculia), or Writing (Dysgraphia). By using the EEG spectra, we aim to identify EEG patterns that may be related to specific learning disabilities in an individual case. This could be useful for developing subject-based methods of therapy, based on information provided by the EEG. Here we study 85 LD-NOS children, divided into three subgroups previously selected by a clustering technique over the scores of cognitive tests. The classification equation produced stable marginal areas under the ROC of 0.71 for discrimination between Group 1 vs. Group 2; 0.91 for Group 1 vs. Group 3; and 0.75 for Group 2 vs. Group 3. A discussion of the EEG characteristics of each group related to the cognitive scores is also presented. PMID:29379411
Adjusting for sampling variability in sparse data: geostatistical approaches to disease mapping.
Hampton, Kristen H; Serre, Marc L; Gesink, Dionne C; Pilcher, Christopher D; Miller, William C
2011-10-06
Disease maps of crude rates from routinely collected health data indexed at a small geographical resolution pose specific statistical problems due to the sparse nature of the data. Spatial smoothers allow areas to borrow strength from neighboring regions to produce a more stable estimate of the areal value. Geostatistical smoothers are able to quantify the uncertainty in smoothed rate estimates without a high computational burden. In this paper, we introduce a uniform model extension of Bayesian Maximum Entropy (UMBME) and compare its performance to that of Poisson kriging in measures of smoothing strength and estimation accuracy as applied to simulated data and the real data example of HIV infection in North Carolina. The aim is to produce more reliable maps of disease rates in small areas to improve identification of spatial trends at the local level. In all data environments, Poisson kriging exhibited greater smoothing strength than UMBME. With the simulated data where the true latent rate of infection was known, Poisson kriging resulted in greater estimation accuracy with data that displayed low spatial autocorrelation, while UMBME provided more accurate estimators with data that displayed higher spatial autocorrelation. With the HIV data, UMBME performed slightly better than Poisson kriging in cross-validatory predictive checks, with both models performing better than the observed data model with no smoothing. Smoothing methods have different advantages depending upon both internal model assumptions that affect smoothing strength and external data environments, such as spatial correlation of the observed data. Further model comparisons in different data environments are required to provide public health practitioners with guidelines needed in choosing the most appropriate smoothing method for their particular health dataset.
Arai, Tatsuya J; Nofiele, Joris; Madhuranthakam, Ananth J; Yuan, Qing; Pedrosa, Ivan; Chopra, Rajiv; Sawant, Amit
2016-06-01
Sparse-sampling and reconstruction techniques represent an attractive strategy to achieve faster image acquisition speeds, while maintaining adequate spatial resolution and signal-to-noise ratio in rapid magnetic resonance imaging (MRI). The authors investigate the use of one such sequence, broad-use linear acquisition speed-up technique (k-t BLAST) in monitoring tumor motion for thoracic and abdominal radiotherapy and examine the potential trade-off between increased sparsification (to increase imaging speed) and the potential loss of "true" information due to greater reliance on a priori information. Lung tumor motion trajectories in the superior-inferior direction, previously recorded from ten lung cancer patients, were replayed using a motion phantom module driven by an MRI-compatible motion platform. Eppendorf test tubes filled with water which serve as fiducial markers were placed in the phantom. The modeled rigid and deformable motions were collected in a coronal image slice using balanced fast field echo in conjunction with k-t BLAST. Root mean square (RMS) error was used as a metric of spatial accuracy as measured trajectories were compared to input data. The loss of spatial information was characterized for progressively increasing acceleration factor from 1 to 16; the resultant sampling frequency was increased approximately from 2.5 to 19 Hz when the principal direction of the motion was set along frequency encoding direction. In addition to the phantom study, respiration-induced tumor motions were captured from two patients (kidney tumor and lung tumor) at 13 Hz over 49 s to demonstrate the impact of high speed motion monitoring over multiple breathing cycles. For each subject, the authors compared the tumor centroid trajectory as well as the deformable motion during free breathing. In the rigid and deformable phantom studies, the RMS error of target tracking at the acquisition speed of 19 Hz was approximately 0.3-0.4 mm, which was smaller than the reconstructed pixel resolution of 0.67 mm. In the patient study, the dynamic 2D MRI enabled the monitoring of cycle-to-cycle respiratory variability present in the tumor position. It was seen that the range of centroid motion as well as the area covered due to target motion during each individual respiratory cycle was underestimated compared to the entire motion range observed over multiple breathing cycles. The authors' initial results demonstrate that sparse-sampling- and reconstruction-based dynamic MRI can be used to achieve adequate image acquisition speeds without significant information loss for the task of radiotherapy guidance. Such monitoring can yield spatial and temporal information superior to conventional offline and online motion capture methods used in thoracic and abdominal radiotherapy.
Enumerating sparse organisms in ships' ballast water: why counting to 10 is not so easy.
Miller, A Whitman; Frazier, Melanie; Smith, George E; Perry, Elgin S; Ruiz, Gregory M; Tamburri, Mario N
2011-04-15
To reduce ballast water-borne aquatic invasions worldwide, the International Maritime Organization and United States Coast Guard have each proposed discharge standards specifying maximum concentrations of living biota that may be released in ships' ballast water (BW), but these regulations still lack guidance for standardized type approval and compliance testing of treatment systems. Verifying whether BW meets a discharge standard poses significant challenges. Properly treated BW will contain extremely sparse numbers of live organisms, and robust estimates of rare events require extensive sampling efforts. A balance of analytical rigor and practicality is essential to determine the volume of BW that can be reasonably sampled and processed, yet yield accurate live counts. We applied statistical modeling to a range of sample volumes, plankton concentrations, and regulatory scenarios (i.e., levels of type I and type II errors), and calculated the statistical power of each combination to detect noncompliant discharge concentrations. The model expressly addresses the roles of sampling error, BW volume, and burden of proof on the detection of noncompliant discharges in order to establish a rigorous lower limit of sampling volume. The potential effects of recovery errors (i.e., incomplete recovery and detection of live biota) in relation to sample volume are also discussed.
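The flavor of this power calculation can be sketched with a simple Poisson sampling model, in which organism counts in a sampled volume are Poisson-distributed around concentration times volume; the standard, concentrations, and error levels below are hypothetical, and the paper's actual model is more detailed.

```python
from scipy.stats import poisson

def detection_power(conc_actual, conc_standard, volume_m3, alpha=0.05):
    """Power to detect a discharge exceeding the standard, assuming organisms
    are randomly (Poisson) distributed so counts in a sample of `volume_m3`
    are Poisson with mean concentration * volume."""
    # Critical count: smallest k with P(count > k | compliant) <= alpha
    k_crit = poisson.ppf(1 - alpha, conc_standard * volume_m3)
    # Power: probability a truly noncompliant discharge exceeds that count
    return poisson.sf(k_crit, conc_actual * volume_m3)

# Hypothetical scenario: standard of 10 organisms/m3, true conc. 30/m3
for v in (0.5, 1, 3, 7):
    print(f"{v:4.1f} m3 sampled -> power = {detection_power(30, 10, v):.2f}")
```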
Expanding the Detection of Traversable Area with RealSense for the Visually Impaired
Yang, Kailun; Wang, Kaiwei; Hu, Weijian; Bai, Jian
2016-01-01
The introduction of RGB-Depth (RGB-D) sensors into the area of assisting visually impaired people (VIP) has stirred great interest among researchers. However, the detection range of RGB-D sensors is limited by a narrow depth field angle and sparse depth maps at long range, which hampers broader and longer traversability awareness. This paper proposes an effective approach to expand the detection of traversable area based on an RGB-D sensor, the Intel RealSense R200, which is compatible with both indoor and outdoor environments. The depth image of the RealSense is enhanced with IR image large-scale matching and RGB image-guided filtering. A preliminary traversable area is obtained with RANdom SAmple Consensus (RANSAC) segmentation and surface normal vector estimation. A seeded region growing algorithm, combining the depth image and RGB image, then greatly enlarges the preliminary traversable area. This is critical not only for avoiding close obstacles, but also for superior path planning during navigation. The proposed approach has been tested in a score of indoor and outdoor scenarios. Moreover, the approach has been integrated into an assistance system consisting of a wearable prototype and an audio interface. Furthermore, the presented approach proved useful and reliable in a field test with eight visually impaired volunteers. PMID:27879634
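The RANSAC step described above admits a compact illustration. The sketch below fits a dominant plane to a synthetic point cloud; it is a generic RANSAC plane fit under assumed thresholds, not the authors' implementation.

```python
import numpy as np

def ransac_ground_plane(points, n_iter=200, dist_thresh=0.05, rng=None):
    """Fit a dominant plane to an N x 3 point cloud by RANSAC; returns
    (unit normal, d) for the plane n.x + d = 0 and the inlier mask."""
    rng = np.random.default_rng(rng)
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(n)
        if norm < 1e-9:            # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n.dot(p1)
        inliers = np.abs(points @ n + d) < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

# Hypothetical cloud: a gently tilted ground plane plus obstacle points
rng = np.random.default_rng(0)
xy = rng.uniform(-5, 5, (2000, 2))
ground = np.column_stack([xy, 0.05 * xy[:, 0] + rng.normal(0, 0.02, 2000)])
obstacles = rng.uniform([-5, -5, 0.3], [5, 5, 2.0], (200, 3))
cloud = np.vstack([ground, obstacles])
model, inliers = ransac_ground_plane(cloud)
print(f"ground inliers: {inliers.sum()} of {len(cloud)}")
```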
Karmacharya, Dibesh B; Thapa, Kamal; Shrestha, Rinjan; Dhakal, Maheshwar; Janecka, Jan E
2011-11-28
The endangered snow leopard is found throughout the major mountain ranges of Central Asia, including the remote Himalayas. However, because of their elusive behavior, sparse distribution, and poor access to their habitat, there is a lack of reliable information on their population status and demography, particularly in Nepal. Therefore, we utilized noninvasive genetic techniques to conduct a preliminary snow leopard survey in two protected areas of Nepal. A total of 71 putative snow leopard scats were collected and analyzed from two different areas: Shey Phoksundo National Park (SPNP) in the west and Kangchanjunga Conservation Area (KCA) in the east. Nineteen (27%) scats were genetically identified as snow leopards, and 10 (53%) of these were successfully genotyped at 6 microsatellite loci. Two samples showed identical genotype profiles, indicating a total of 9 individual snow leopards. Four individual snow leopards were identified in SPNP (1 male and 3 females) and five (2 males and 3 females) in KCA. We were able to confirm the occurrence of snow leopards in both study areas and determine the minimum number present. This information can be used to design more in-depth population surveys that will enable estimation of snow leopard population abundance at these sites.
NASA Astrophysics Data System (ADS)
Saadi, Sameh; Boulet, Gilles; Bahir, Malik; Brut, Aurore; Delogu, Émilie; Fanise, Pascal; Mougenot, Bernard; Simonneaux, Vincent; Lili Chabaane, Zohra
2018-04-01
In semiarid areas, agricultural production is restricted by water availability; hence, efficient agricultural water management is a major issue. The design of tools providing regional estimates of evapotranspiration (ET), one of the most relevant water balance fluxes, may help the sustainable management of water resources. Remote sensing provides periodic data about actual vegetation temporal dynamics (through the normalized difference vegetation index, NDVI) and water availability under water stress (through the surface temperature Tsurf), which are crucial factors controlling ET. In this study, spatially distributed estimates of ET (or its energy equivalent, the latent heat flux LE) in the Kairouan plain (central Tunisia) were computed by applying the Soil Plant Atmosphere and Remote Sensing Evapotranspiration (SPARSE) model fed by low-resolution remote sensing data (Terra and Aqua MODIS). The work's goal was to assess the operational use of the SPARSE model and the accuracy of the modeled (i) sensible heat flux (H) and (ii) daily ET over a heterogeneous semiarid landscape with complex land cover (i.e., trees, winter cereals, summer vegetables). SPARSE was run to compute instantaneous estimates of H and LE fluxes at the satellite overpass times. The good correspondence (R2 = 0.60 and 0.63 and RMSE = 57.89 and 53.85 W m-2 for Terra and Aqua, respectively) between instantaneous H estimates and large aperture scintillometer (XLAS) H measurements along a path length of 4 km over the study area showed that the SPARSE model presents satisfactory accuracy. Results showed that, despite the fairly large scatter, the instantaneous LE can be suitably estimated at large scales (RMSE = 47.20 and 43.20 W m-2 for Terra and Aqua, respectively, and R2 = 0.55 for both satellites). Additionally, water stress was investigated by comparing modeled (SPARSE) and observed (XLAS) water stress values; we found that most points were located within a 0.2 confidence interval, thus the general tendencies are well reproduced. Even though extrapolation of instantaneous latent heat flux values to daily totals was less obvious, daily ET estimates are deemed acceptable.
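For reference, the validation statistics quoted above (RMSE and R2) can be computed for any pair of modeled and measured flux series; the sketch below assumes R2 denotes the squared Pearson correlation and uses hypothetical flux values in place of SPARSE and XLAS data.

```python
import numpy as np

def flux_validation_stats(modeled, observed):
    """RMSE (same units as the fluxes, here W m-2) and R2, taken as the
    squared Pearson correlation between modeled and observed series."""
    modeled = np.asarray(modeled, float)
    observed = np.asarray(observed, float)
    rmse = np.sqrt(np.mean((modeled - observed) ** 2))
    r2 = np.corrcoef(modeled, observed)[0, 1] ** 2
    return rmse, r2

# Hypothetical sensible-heat series standing in for model vs. scintillometer
rng = np.random.default_rng(1)
h_obs = rng.uniform(50, 350, 120)            # W m-2
h_mod = h_obs + rng.normal(0, 55, 120)       # modeled values with scatter
rmse, r2 = flux_validation_stats(h_mod, h_obs)
print(f"RMSE = {rmse:.1f} W m-2, R2 = {r2:.2f}")
```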
NASA Astrophysics Data System (ADS)
Switzman, Harris; Coulibaly, Paulin; Adeel, Zafar
2015-01-01
Demand for freshwater in many dryland environments is exerting negative impacts on the quality and availability of groundwater resources, particularly in areas where demand is high due to irrigation or industrial water requirements to support dryland agricultural reclamation. Often, however, the information available to diagnose the drivers of groundwater degradation and to assess management options through modeling is sparse, particularly in low- and middle-income countries. This study presents an approach for generating transient groundwater model inputs to assess the long-term impacts of dryland agricultural land reclamation on groundwater resources in a highly data-sparse context. The approach was applied to the area of Wadi El Natrun in Northern Egypt, where dryland reclamation and the associated water use have been aggressive since the 1960s. Statistical distributions of water use information were constructed from a variety of sparse field and literature estimates and then combined with remote sensing data in a spatio-temporal infilling model to produce the groundwater model inputs of well pumping and surface recharge. An ensemble of groundwater model inputs was generated and used in a 3D groundwater flow model (MODFLOW) of Wadi El Natrun's multi-layer aquifer system to analyze trends in water levels and water budgets over time. Validation of results against monitoring records and model performance statistics demonstrated that, despite the extremely sparse data, the approach used in this study was capable of simulating the cumulative impacts of agricultural land reclamation reasonably well. The uncertainty associated with the groundwater model itself was greater than that associated with the ensemble of well-pumping and surface recharge estimates. Water budget analysis of the groundwater model output revealed that groundwater recharge has not changed significantly over time, while pumping has. As a result of these trends, groundwater was estimated to be in a deficit of approximately 24 billion m3 (±15%) in 2011, compared to 1957. A significant trend of water level declines beginning in the 1990s, observed in monitoring records, was evident in the model results and is directly attributed to abstraction.
Directional view interpolation for compensation of sparse angular sampling in cone-beam CT.
Bertram, Matthias; Wiegert, Jens; Schafer, Dirk; Aach, Til; Rose, Georg
2009-07-01
In flat detector cone-beam computed tomography and related applications, sparse angular sampling frequently leads to characteristic streak artifacts. To overcome this problem, it has been suggested to generate additional views by means of interpolation. The practicality of this approach is investigated in combination with a dedicated method for angular interpolation of 3-D sinogram data. For this purpose, a novel dedicated shape-driven directional interpolation algorithm based on a structure tensor approach is developed. Quantitative evaluation shows that this method clearly outperforms conventional scene-based interpolation schemes. Furthermore, the image quality trade-offs associated with the use of interpolated intermediate views are systematically evaluated for simulated and clinical cone-beam computed tomography data sets of the human head. It is found that utilization of directionally interpolated views significantly reduces streak artifacts and noise, at the expense of small introduced image blur.
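The structure-tensor machinery behind such shape-driven directional interpolation can be sketched as follows; this is a generic per-pixel orientation estimate (the direction along which a directional interpolator would average), not the authors' algorithm, and the smoothing scale is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_orientation(img, sigma=2.0):
    """Per-pixel orientation of image structures from the smoothed
    structure tensor; a directional interpolator would average along
    this orientation rather than across it."""
    gx = sobel(img.astype(float), axis=1)
    gy = sobel(img.astype(float), axis=0)
    jxx = gaussian_filter(gx * gx, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    # Angle of the dominant gradient direction (largest eigenvalue),
    # rotated by 90 degrees to point along edges instead of across them.
    theta_grad = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
    return theta_grad + np.pi / 2.0

# Toy sinogram-like image with diagonal structures
y, x = np.mgrid[0:128, 0:128]
img = np.sin(0.2 * (x + y))
print(structure_tensor_orientation(img)[64, 64])
```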
Label-free optical imaging of membrane patches for atomic force microscopy
Churnside, Allison B.; King, Gavin M.; Perkins, Thomas T.
2010-01-01
In atomic force microscopy (AFM), finding sparsely distributed regions of interest can be difficult and time-consuming. Typically, the tip is scanned until the desired object is located. This process can mechanically or chemically degrade the tip, as well as damage fragile biological samples. Protein assemblies can be detected using the back-scattered light from a focused laser beam. We previously used back-scattered light from a pair of laser foci to stabilize an AFM. In the present work, we integrate these techniques to optically image patches of purple membranes prior to AFM investigation. These rapidly acquired optical images were aligned to the subsequent AFM images to ~40 nm, since the tip position was aligned to the optical axis of the imaging laser. Thus, this label-free imaging efficiently locates sparsely distributed protein assemblies for subsequent AFM study while simultaneously minimizing degradation of the tip and the sample. PMID:21164738
Needs for Rural Research in the Northern Finland Context
ERIC Educational Resources Information Center
Muilu, Toivo
2010-01-01
The aim of this paper is to discuss the needs and demands which rural research faces at the interface between research and development. The case study area is northern Finland, which constitutes the most remote and sparsely populated areas of the European Union. This paper is based on the tradition of rural research since the 1980s in connection…
Informational analysis for compressive sampling in radar imaging.
Zhang, Jingxiong; Yang, Ke
2015-03-24
Compressive sampling, or compressed sensing (CS), rests on the assumption that the underlying signal is sparse or compressible, relies on the capacity of the measurement matrix and the resulting measurements to convey information, and reconstructs the signal with optimization-based algorithms. CS thus compresses data while acquiring it, leading to sub-Nyquist sampling strategies that promote efficiency in data acquisition while ensuring certain accuracy criteria. Information theory provides a framework complementary to classic CS theory for analyzing information mechanisms and for determining the necessary number of measurements in a CS environment, such as CS-radar, a radar sensor conceptualized or designed with CS principles and techniques. Despite increasing awareness of information-theoretic perspectives on CS-radar, reported research has been rare. This paper seeks to bridge the gap in the interdisciplinary area of CS, radar, and information theory by analyzing information flows in CS-radar from sparse scenes to measurements and determining the sub-Nyquist sampling rates necessary for scene reconstruction within certain distortion thresholds, given differing scene sparsity and average per-sample signal-to-noise ratios (SNRs). Simulated studies were performed to complement and validate the information-theoretic analysis. The combined strategy proposed in this paper is valuable for information-theoretic oriented CS-radar system analysis and performance evaluation.
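A common rule of thumb from classic CS theory conveys the flavor of such sampling-rate analyses: roughly m on the order of C k log(n/k) random measurements suffice to recover a k-sparse length-n scene. The sketch below applies this heuristic with an assumed constant C; the paper's information-theoretic bounds are distortion- and SNR-dependent and are not reproduced here.

```python
import numpy as np

def cs_measurements_needed(n, k, c=2.0):
    """Rule-of-thumb measurement count m ~ C * k * log(n / k) for
    recovering a k-sparse length-n scene from random projections."""
    return int(np.ceil(c * k * np.log(n / k)))

for k in (5, 20, 50):
    m = cs_measurements_needed(4096, k)
    print(f"n=4096, k={k:3d}: m = {m} ({m / 4096:.1%} of Nyquist samples)")
```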
SUBSTRUCTURE WITHIN THE SSA22 PROTOCLUSTER AT z ≈ 3.09
DOE Office of Scientific and Technical Information (OSTI.GOV)
Topping, Michael W.; Shapley, Alice E.; Steidel, Charles C., E-mail: mtopping@astro.ucla.edu
We present the results of a densely sampled spectroscopic survey of the SSA22 protocluster at z ≈ 3.09. Our sample with Keck/LRIS spectroscopy includes 106 Ly α emitters (LAEs) and 40 Lyman break galaxies (LBGs) at z = 3.05–3.12. These galaxies are contained within the 9′ × 9′ region in which the protocluster was discovered, which also hosts the maximum galaxy overdensity in the SSA22 region. The redshift histogram of our spectroscopic sample reveals two distinct peaks, at z = 3.069 (blue; 43 galaxies) and z = 3.095 (red; 103 galaxies). Furthermore, objects in the blue and red peaks are segregated on the sky, with galaxies in the blue peak concentrating toward the western half of the field. These results suggest that the blue and red redshift peaks represent two distinct structures in physical space. Although the double-peaked redshift histogram is traced in the same manner by LBGs and LAEs, and brighter and fainter galaxies, we find that 9 out of 10 X-ray AGNs in SSA22, and all 7 spectroscopically confirmed giant Ly α “blobs,” reside in the red peak. We combine our data set with sparsely sampled spectroscopy from the literature over a significantly wider area, finding preliminary evidence that the double-peaked structure in redshift space extends beyond the region of our dense spectroscopic sampling. In order to fully characterize the three-dimensional structure, dynamics, and evolution of large-scale structure in the SSA22 overdensity, we require the measurement of large samples of LAE and LBG redshifts over a significantly wider area, as well as detailed comparisons with cosmological simulations of massive cluster formation.
Sloped terrain segmentation for autonomous drive using sparse 3D point cloud.
Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Jeong, Young-Sik; Um, Kyhyun; Sim, Sungdae
2014-01-01
A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, the 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data are eliminated. We reduce the nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We achieve real-time ground segmentation by proposing an approach that minimizes the comparisons between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed in about 19.31 ms per frame.
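The voxel quantization and lowermost-heightmap reduction can be sketched briefly; the voxel size, tolerance, and ground criterion below are illustrative assumptions, not the paper's exact rules.

```python
import numpy as np

def lowermost_heightmap(points, voxel=0.2):
    """Quantize an N x 3 cloud into (x, y) voxel columns, keeping only the
    lowest z per column: the 2D reduction used before ground labeling."""
    ij = np.floor(points[:, :2] / voxel).astype(int)
    heightmap = {}
    for (i, j), z in zip(map(tuple, ij), points[:, 2]):
        if (i, j) not in heightmap or z < heightmap[(i, j)]:
            heightmap[(i, j)] = z
    return heightmap

def label_ground(heightmap, tol=0.15):
    """Toy criterion: a column is ground if its lowest point is near the
    local minimum of its 3 x 3 neighborhood of columns."""
    ground = set()
    for (i, j), z in heightmap.items():
        neigh = [heightmap[(i + di, j + dj)]
                 for di in (-1, 0, 1) for dj in (-1, 0, 1)
                 if (i + di, j + dj) in heightmap]
        if z - min(neigh) < tol:
            ground.add((i, j))
    return ground

# Usage with a hypothetical LiDAR cloud `points` (N x 3, meters):
# hm = lowermost_heightmap(points); ground_cells = label_ground(hm)
```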
Objective sea level pressure analysis for sparse data areas
NASA Technical Reports Server (NTRS)
Druyan, L. M.
1972-01-01
A computer procedure was used to analyze the pressure distribution over the North Pacific Ocean for eleven synoptic times in February, 1967. Independent knowledge of the central pressures of lows is shown to reduce the analysis errors for very sparse data coverage. The application of planned remote sensing of sea-level wind speeds is shown to make a significant contribution to the quality of the analysis especially in the high gradient mid-latitudes and for sparse coverage of conventional observations (such as over Southern Hemisphere oceans). Uniform distribution of the available observations of sea-level pressure and wind velocity yields results far superior to those derived from a random distribution. A generalization of the results indicates that the average lower limit for analysis errors is between 2 and 2.5 mb based on the perfect specification of the magnitude of the sea-level pressure gradient from a known verification analysis. A less than perfect specification will derive from wind-pressure relationships applied to satellite observed wind speeds.
The theoretical relationship between foliage temperature and canopy resistance in sparse crops
NASA Technical Reports Server (NTRS)
Shuttleworth, W. James; Gurney, Robert J.
1990-01-01
One-dimensional, sparse-crop interaction theory is reformulated to allow calculation of the canopy resistance from measurements of foliage temperature. A submodel is introduced to describe eddy diffusion within the canopy which provides a simple, empirical simulation of the reported behavior obtained from a second-order closure model. The sensitivity of the calculated canopy resistance to the parameters and formulas assumed in the model is investigated. The calculation is shown to exhibit a significant but acceptable sensitivity to extreme changes in canopy aerodynamics, and to changes in the surface resistance of the substrate beneath the canopy at high and intermediate values of leaf area index. In very sparse crops changes in the surface resistance of the substrate are shown to contaminate the calculated canopy resistance, tending to amplify the apparent response to changes in water availability. The theory is developed to allow the use of a measurement of substrate temperature as an option to mitigate this contamination.
Facial expression recognition based on weber local descriptor and sparse representation
NASA Astrophysics Data System (ADS)
Ouyang, Yan
2018-03-01
Automatic facial expression recognition has been one of the research hotspots in computer vision for nearly ten years. During the past decade, many state-of-the-art methods have been proposed that achieve very high accuracy on face images free of interference. Nowadays, many researchers have begun to tackle the task of classifying facial expression images with corruptions and occlusions, and the Sparse Representation based Classification (SRC) framework has been widely used because it is robust to corruptions and occlusions. This paper therefore proposes a novel facial expression recognition method based on the Weber local descriptor (WLD) and sparse representation. The method consists of three parts: first, the face images are divided into many local patches; then, the WLD histogram of each patch is extracted; finally, all the WLD histogram features are concatenated into a vector and combined with SRC to classify the facial expressions. Experimental results on the Cohn-Kanade database show that the proposed method is robust to occlusions and corruptions.
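The first WLD component (differential excitation) is simple to compute per patch; the sketch below is a minimal version with an illustrative bin count, not the authors' full descriptor (which also includes an orientation component).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wld_differential_excitation(img):
    """WLD's first component: arctan of the summed deviation of the
    8-neighborhood from the center pixel, relative to the center."""
    img = img.astype(float) + 1e-6                 # avoid division by zero
    neigh_sum = uniform_filter(img, 3) * 9 - img   # sum of the 8 neighbors
    return np.arctan((neigh_sum - 8 * img) / img)

def wld_histogram(patch, bins=32):
    """Normalized histogram of differential excitation over one patch."""
    de = wld_differential_excitation(patch)
    hist, _ = np.histogram(de, bins=bins, range=(-np.pi / 2, np.pi / 2))
    return hist / max(hist.sum(), 1)

patch = np.random.default_rng(0).integers(0, 256, (16, 16)).astype(float)
print(wld_histogram(patch)[:5])
```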
Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong
2008-12-01
How to localize the neural electric activities within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and an iterative re-weighting strategy, we propose a new maximum-neighbor-weight-based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Unlike the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the newly designed weight for each point in each iteration is determined by the previous iteration's source solution at both that point and its neighbors. With this new weight, each iteration has a better chance of rectifying local source-location bias in the previous solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimulus experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
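The iterative re-weighting idea underlying FOCUSS-type solvers, which CMOSS extends by folding each point's neighbors into the weight, can be illustrated on a toy underdetermined system; the weighting scheme below is the basic per-point form, not the CMOSS neighbor-weighted one.

```python
import numpy as np

def reweighted_sparse_solve(A, b, n_iter=20, eps=1e-8):
    """FOCUSS-style iterative reweighting for underdetermined A x = b:
    at each step, solve a weighted minimum-norm problem, with weights
    taken from the previous solution so large entries are reinforced."""
    m, n = A.shape
    x = np.ones(n)
    for _ in range(n_iter):
        W = np.diag(np.abs(x) + eps)
        AW = A @ W
        x = W @ AW.T @ np.linalg.solve(AW @ AW.T + eps * np.eye(m), b)
    return x

# Toy underdetermined system with a 3-sparse ground truth
rng = np.random.default_rng(3)
A = rng.normal(size=(20, 100))
x_true = np.zeros(100)
x_true[[7, 42, 88]] = [2.0, -1.5, 1.0]
x_hat = reweighted_sparse_solve(A, A @ x_true)
print("largest recovered entries:", np.argsort(-np.abs(x_hat))[:3])
```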
Jiang, Geng-Ming; Li, Zhao-Liang
2008-11-10
This work intercompared two Bidirectional Reflectance Distribution Function (BRDF) models, the modified Minnaert's model and the RossThick-LiSparse-R model, in the estimation of the directional emissivity in the Middle Infra-Red (MIR) channel from data acquired by the Spinning Enhanced Visible and Infra-Red Imager (SEVIRI) onboard the first Meteosat Second Generation (MSG1). The bi-directional reflectances in SEVIRI channel 4 (3.9 µm) were estimated from the combined MIR and Thermal Infra-Red (TIR) data and were then used to estimate the directional emissivity in this channel with the aid of the BRDF models. The results show that: (1) both models can relatively well describe the non-Lambertian reflective behavior of land surfaces in SEVIRI channel 4; (2) the RossThick-LiSparse-R model is better than the modified Minnaert's model in modeling the bi-directional reflectances, and the directional emissivities modeled by the modified Minnaert's model are always lower than those obtained by the RossThick-LiSparse-R model, with averaged emissivity differences of approximately 0.01 and 0.04 over the vegetated and bare areas, respectively. The use of the RossThick-LiSparse-R model in the estimation of the directional emissivity in the MIR channel is recommended.
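Kernel-driven models of the RossThick-LiSparse family are linear in their coefficients, so fitting them reduces to least squares; in the sketch below the kernel values and reflectances are hypothetical numbers, not SEVIRI data, and the kernel functions themselves are assumed precomputed.

```python
import numpy as np

def fit_kernel_brdf(refl, k_vol, k_geo):
    """Least-squares fit of the kernel-driven BRDF form
    R = f_iso + f_vol * K_vol + f_geo * K_geo
    to directional reflectances; K_vol/K_geo would be RossThick and
    LiSparse-R kernel values evaluated at each viewing/solar geometry."""
    A = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
    coef, *_ = np.linalg.lstsq(A, refl, rcond=None)
    return coef  # (f_iso, f_vol, f_geo)

# Hypothetical kernel values and reflectances for a handful of geometries
k_vol = np.array([-0.05, 0.02, 0.10, 0.18, 0.25])
k_geo = np.array([-1.60, -1.35, -1.10, -0.90, -0.75])
refl = 0.12 + 0.05 * k_vol + 0.02 * k_geo
print(fit_kernel_brdf(refl, k_vol, k_geo))  # recovers (0.12, 0.05, 0.02)
```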
Functional fixedness in a technologically sparse culture.
German, Tim P; Barrett, H Clark
2005-01-01
Problem solving can be inefficient when the solution requires subjects to generate an atypical function for an object and the object's typical function has been primed. Subjects become "fixed" on the design function of the object, and problem solving suffers relative to control conditions in which the object's function is not demonstrated. In the current study, such functional fixedness was demonstrated in a sample of adolescents (mean age of 16 years) among the Shuar of Ecuadorian Amazonia, whose technologically sparse culture provides limited access to large numbers of artifacts with highly specialized functions. This result suggests that design function may universally be the core property of artifact concepts in human semantic memory.
Joint fMRI analysis and subject clustering using sparse dictionary learning
NASA Astrophysics Data System (ADS)
Kim, Seung-Jun; Dontaraju, Krishna K.
2017-08-01
Multi-subject fMRI data analysis methods based on sparse dictionary learning are proposed. In addition to identifying the component spatial maps by exploiting the sparsity of the maps, clusters of the subjects are learned by postulating that the fMRI volumes admit a subspace clustering structure. Furthermore, in order to tune the associated hyper-parameters systematically, a cross-validation strategy is developed based on entry-wise sampling of the fMRI dataset. Efficient algorithms for solving the proposed constrained dictionary learning formulations are developed. Numerical tests performed on synthetic fMRI data show promising results and provide insights into the proposed technique.
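A minimal sparse dictionary-learning decomposition of an fMRI-like matrix can be sketched with scikit-learn; arranging voxels as samples makes the sparse codes play the role of the component spatial maps. Shapes and parameters are illustrative, and the paper's joint subject-clustering formulation is not reproduced here.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Hypothetical fMRI matrix: rows are voxels, columns are time points, so
# the sparse codes learned below act as the component spatial maps.
rng = np.random.default_rng(0)
Y = rng.normal(size=(1000, 120))     # 1000 voxels x 120 volumes

dl = DictionaryLearning(n_components=8, alpha=1.0, max_iter=30,
                        transform_algorithm="lasso_lars", random_state=0)
spatial_maps = dl.fit_transform(Y)   # (1000 x 8), sparse by construction
time_courses = dl.components_        # (8 x 120) temporal dictionary
print(spatial_maps.shape, time_courses.shape)
```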
Moving Beam-Blocker-Based Low-Dose Cone-Beam CT
NASA Astrophysics Data System (ADS)
Lee, Taewon; Lee, Changwoo; Baek, Jongduk; Cho, Seungryong
2016-10-01
This paper experimentally demonstrates the feasibility of moving beam-blocker-based low-dose cone-beam CT (CBCT) and explores beam-blocking configurations to find an optimal one that leads to the highest contrast-to-noise ratio (CNR). Sparse-view CT takes projections at sparse view angles and provides a viable option for reducing dose. We have earlier proposed a many-view under-sampling (MVUS) technique as an alternative to sparse-view CT. Instead of switching the x-ray tube power, one can place a reciprocating multi-slit beam-blocker between the x-ray tube and the patient to partially block the x-ray beam. We used a bench-top circular cone-beam CT system with a lab-made moving beam-blocker. For image reconstruction, we used a modified total-variation minimization (TV) algorithm that masks the blocked data in the back-projection step, leaving only the data measured through the slits to be used in the computation. The number of slits and the reciprocation frequency were varied, and their effects on image quality were investigated. For image quality assessment, we used the CNR and the detectability. We also analyzed the sampling efficiency in the context of compressive sensing: the sampling density and data incoherence in each case. We tested three sets of slits, with 6, 12, and 18 slits, each at reciprocation frequencies of 10, 30, 50, and 70 Hz/rot. The optimum configuration among the tested sets was found to be 12 slits at 30 Hz/rot.
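One gradient step of such a masked TV reconstruction can be sketched as follows; the projector pair A/At is left abstract (an identity operator in the toy check), and this is a plain gradient scheme under assumed step sizes rather than the authors' modified algorithm.

```python
import numpy as np

def tv_grad(x, eps=1e-8):
    """Gradient of a smoothed isotropic total-variation term on a 2D image."""
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div

def masked_tv_step(x, A, At, y, mask, lam=0.05, step=1e-3):
    """One gradient step for || mask * (A x - y) ||^2 + lam * TV(x):
    blocked rays (mask == 0) simply drop out of the data term, mirroring
    the masked back-projection described above."""
    resid = mask * (A(x) - y)
    return x - step * (At(resid) + lam * tv_grad(x))

# Toy check with an identity "projector": denoise a half-masked image
img = np.zeros((64, 64)); img[20:40, 20:40] = 1.0
y = img + np.random.default_rng(0).normal(0, 0.1, img.shape)
mask = (np.random.default_rng(1).random(img.shape) > 0.5).astype(float)
x = np.zeros_like(img)
for _ in range(200):
    x = masked_tv_step(x, lambda v: v, lambda v: v, y, mask, step=0.2)
```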
Blind compressed sensing image reconstruction based on alternating direction method
NASA Astrophysics Data System (ADS)
Liu, Qinan; Guo, Shuxu
2018-04-01
In order to solve the problem of reconstructing the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is obtained by alternating minimization. The proposed method addresses the difficulty of specifying the sparse basis in compressed sensing, suppresses noise, and improves the quality of the reconstructed image. The method ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with strong adaptability. Experimental results show that the proposed blind compressed sensing reconstruction algorithm can recover high-quality image signals under under-sampling conditions.
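The alternating-minimization idea can be illustrated on a toy 1-D version of blind CS, where both the dictionary and the sparse codes are unknown; the update rules below (an ISTA step for the codes, a ridge least-squares step for the dictionary) are one plausible instantiation, not the paper's exact algorithm.

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def blind_cs(Y, Phi, k_atoms=32, n_iter=50, lam=0.05, seed=0):
    """Toy blind-CS solver: measurements Y = Phi @ X with X = D @ C,
    where the dictionary D and sparse codes C are both unknown."""
    rng = np.random.default_rng(seed)
    m, n = Phi.shape
    D = rng.normal(size=(n, k_atoms)); D /= np.linalg.norm(D, axis=0)
    C = np.zeros((k_atoms, Y.shape[1]))
    for _ in range(n_iter):
        A = Phi @ D
        L = np.linalg.norm(A, 2) ** 2 + 1e-12          # Lipschitz constant
        C = soft(C + A.T @ (Y - A @ C) / L, lam / L)   # sparse-code step
        # Dictionary step: first fit B = Phi @ D by ridge least squares,
        # then lift back to D with Phi's pseudoinverse.
        B = Y @ C.T @ np.linalg.inv(C @ C.T + 1e-6 * np.eye(k_atoms))
        D = np.linalg.pinv(Phi) @ B
        norms = np.linalg.norm(D, axis=0) + 1e-12
        D /= norms; C *= norms[:, None]
    return D, C

# Toy data: 200 signals sparse in an unknown random dictionary
rng = np.random.default_rng(5)
D0 = rng.normal(size=(128, 32)); D0 /= np.linalg.norm(D0, axis=0)
C0 = np.where(rng.random((32, 200)) < 0.1, rng.normal(size=(32, 200)), 0)
Phi = rng.normal(size=(40, 128)) / np.sqrt(40)
D_hat, C_hat = blind_cs(Phi @ D0 @ C0, Phi)
```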
ERIC Educational Resources Information Center
Vista, Alvin; Care, Esther
2011-01-01
Background: Research on gender differences in intelligence has focused mostly on samples from Western countries and empirical evidence on gender differences from Southeast Asia is relatively sparse. Aims: This article presents results on gender differences in variance and means on a non-verbal intelligence test using a national sample of public…
M&A For Lithography Of Sparse Arrays Of Sub-Micrometer Features
Brueck, Steven R.J.; Chen, Xiaolan; Zaidi, Saleem; Devine, Daniel J.
1998-06-02
Methods and apparatuses are disclosed for the exposure of sparse hole and/or mesa arrays with line:space ratios of 1:3 or greater and sub-micrometer hole and/or mesa diameters in a layer of photosensitive material atop a layered material. Methods disclosed include: double exposure interferometric lithography pairs in which only those areas near the overlapping maxima of each single-period exposure pair receive a clearing exposure dose; double interferometric lithography exposure pairs with additional processing steps to transfer the array from a first single-period interferometric lithography exposure pair into an intermediate mask layer and a second single-period interferometric lithography exposure to further select a subset of the first array of holes; a double exposure of a single period interferometric lithography exposure pair to define a dense array of sub-micrometer holes and an optical lithography exposure in which only those holes near maxima of both exposures receive a clearing exposure dose; combination of a single-period interferometric exposure pair, processing to transfer resulting dense array of sub-micrometer holes into an intermediate etch mask, and an optical lithography exposure to select a subset of initial array to form a sparse array; combination of an optical exposure, transfer of exposure pattern into an intermediate mask layer, and a single-period interferometric lithography exposure pair; three-beam interferometric exposure pairs to form sparse arrays of sub-micrometer holes; five- and four-beam interferometric exposures to form a sparse array of sub-micrometer holes in a single exposure. Apparatuses disclosed include arrangements for the three-beam, five-beam and four-beam interferometric exposures.
NASA Astrophysics Data System (ADS)
Zhang, Bin; Liu, Yueyan; Zhang, Zuyu; Shen, Yonglin
2017-10-01
A multifeature soft-probability cascading scheme is proposed to solve the problem of land use and land cover (LULC) classification using high-spatial-resolution images to map rural residential areas in China. The proposed method is used to build midlevel LULC features. Local features are frequently used as low-level feature descriptors in midlevel feature learning methods. However, spectral and textural features, which are very effective low-level features, are neglected. Moreover, the acquisition of the sparse-coding dictionary is unsupervised, which reduces the discriminative power of the midlevel features. Thus, we propose to learn supervised features based on sparse coding, a support vector machine (SVM) classifier, and a conditional random field (CRF) model in order to exploit the different effective low-level features and improve the discriminability of the midlevel feature descriptors. First, three kinds of typical low-level features, namely, dense scale-invariant feature transform, gray-level co-occurrence matrix, and spectral features, are extracted separately. Second, combined with sparse coding and the SVM classifier, the probabilities of the different LULC classes are inferred to build supervised feature descriptors. Finally, the CRF model, which consists of two parts, a unary potential and a pairwise potential, is employed to construct an LULC classification map. Experimental results show that the proposed classification scheme achieves impressive performance, with a total accuracy of about 87%.
Schmidt, D.L.; Puffett, W.P.; Campbell, W.L.; Al-Koulak, Z. H.
1981-01-01
An ancient gold placer at Jabal Mokhyat (lat 20°12.2'N., long 43°28'E.), about 90 km east of Qalat Bishah in the southern Najd Province, Kingdom of Saudi Arabia, was studied in 1973. Seven hundred and twenty-eight samples in 25 measured sections were collected along trenches and pits 2.5 m in depth and 2,600 m in total length. Alluvium was thicker than the excavation depth along about 50 percent of the trench length. The average gold content was 4.4 mg per m3, and the highest grade trench contained 40 mg gold per m3. Because fine particulate gold is rare in the alluvium, a few large particles, 1 to 5 mm in diameter, greatly affected the sampling results. The ancient placer diggings are in small headwater wadis distributed over a 30-km2 area, and the total dug area is about 1.2 km2. The placer produced an estimated 50 kg of gold and was worked about 2,600 ± 250 years ago. The potential for a present-day placer operation is small. The gold is sparsely distributed in locally derived, flood-deposited, immature gravels throughout a stratigraphic section that consists of 1) calichified, saprolitic bedrock of Precambrian age; 2) basal, intensely calichified, saprolitic gravel (0-3 m thick) of Pleistocene age; 3) disconformable, slightly consolidated gravel and sand (0-1 m thick) of late Pleistocene age containing sparse, disseminated caliche; 4) firm loessic silt (0-1 m thick) of early Holocene age; and 5) loose sand and gravel (0.3-1 m thick) of late Holocene age. The loessic silt accumulated during the Holocene pluvial. The top of the loessic silt unit is dated at about 6,000 years B.P. by using charcoal from hearths of ancient man. Following the Holocene pluvial, the climate became arid, and extreme desiccation resulted in abundant eolian sand that progressively diluted the late Holocene gravels. The remnants of the pre-Holocene stratigraphy suggest similar climatic cycles during the Pleistocene. Abundant, sparsely mineralized, gold-bearing quartz veins (0-1 m wide) were the source of the placer gold. These late Proterozoic veins have hydrothermally altered wall-rock zones (1-5 m wide). The veins are dispersed over an area of 50 km2. Though many veins were prospected in ancient times and some were slightly worked, only the Mokhyat ancient mine, located on a quartz-vein zone 30 m wide by 200 m long, was extensively worked. The quartz contains chalcopyrite, galena, sphalerite, tetrahedrite, an unidentified bismuth mineral, and small amounts of dispersed gold. The fissure quartz veins lie at the complexly splayed, terminal end of a small northwest-trending Najd fault that elsewhere along strike has 11 km of left-lateral displacement. Most large veins are in north-trending vertical fractures where the stresses were distributed along an older, north-trending structural grain in andesitic greenstone terrane. Subhorizontal fracture sets contain conspicuous, well-developed gold-bearing quartz veins and associated alteration zones. These attest to the shallowness and youthfulness of mineralization during latest Precambrian time. Late Precambrian granitic plutons (625-600 m.y. old) had been deeply eroded before the gold minerals were emplaced; hence, the gold is not related to granitic plutonism. Abundant, widely distributed diabasic dikes associated with the Najd faulting event of latest Precambrian age were probably the heat source for the hydrothermal convection system and possibly the source of the gold.
Evaluating the use of transfers for improving demand responsive systems adopting zoning strategies.
DOT National Transportation Integrated Search
2011-08-01
Due to populations widely dispersed over large, sparsely populated suburban/rural areas, conventional fixed-route transit services can hardly satisfy the travel needs of their residents. As an alternative, demand responsive transit (DRT) systems have...
Multiclass classification of microarray data samples with a reduced number of genes
2011-01-01
Background Multiclass classification of microarray data samples with a reduced number of genes is a rich and challenging problem in Bioinformatics research. The problem gets harder as the number of classes is increased. In addition, the performance of most classifiers is tightly linked to the effectiveness of mandatory gene selection methods. Critical to gene selection is the availability of estimates about the maximum number of genes that can be handled by any classification algorithm. Lack of such estimates may lead to either computationally demanding explorations of a search space with thousands of dimensions or classification models based on gene sets of unrestricted size. In the former case, unbiased but possibly overfitted classification models may arise. In the latter case, biased classification models unable to support statistically significant findings may be obtained. Results A novel bound on the maximum number of genes that can be handled by binary classifiers in binary mediated multiclass classification algorithms of microarray data samples is presented. The bound suggests that high-dimensional binary output domains might favor the existence of accurate and sparse binary mediated multiclass classifiers for microarray data samples. Conclusions A comprehensive experimental work shows that the bound is indeed useful to induce accurate and sparse multiclass classifiers for microarray data samples. PMID:21342522
Self-Taught Learning Based on Sparse Autoencoder for E-Nose in Wound Infection Detection
He, Peilin; Jia, Pengfei; Qiao, Siqi; Duan, Shukai
2017-01-01
For an electronic nose (E-nose) used to distinguish wound infections, traditional learning methods have always needed large quantities of labeled wound infection samples, which are both limited and expensive; thus, we introduce self-taught learning, combined with a sparse autoencoder and a radial basis function (RBF) network, into the field. Self-taught learning is a kind of transfer learning that can transfer knowledge from other fields to a target field, and it can solve problems in which the labeled data (target field) and unlabeled data (other fields) do not share the same class labels, even when they come from entirely different distributions. In our paper, we obtain numerous cheap unlabeled pollutant gas samples (benzene, formaldehyde, acetone, and ethyl alcohol), whereas labeled wound infection samples are hard to gain. Thus, we apply self-taught learning to these gas samples, obtaining a basis vector θ. Then, using the basis vector θ, we reconstruct a new representation of the wound infection samples under a sparsity constraint, which is the input to the classifiers. We compare RBF with partial least squares discriminant analysis (PLSDA) and conclude that the performance of RBF is superior. We also vary the dimension of our data set and the quantity of unlabeled data to search for the input matrix that produces the highest accuracy. PMID:28991154
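A minimal sparse autoencoder of the kind used for self-taught feature learning can be written in a few dozen lines; the architecture (tied weights, a KL sparsity penalty on the mean hidden activation) and all hyper-parameters below are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_sparse_autoencoder(X, n_hidden=32, lam=1e-3, beta=0.1, rho=0.05,
                             lr=0.1, n_epochs=500, seed=0):
    """Tied-weight sparse autoencoder trained by full-batch gradient
    descent: squared reconstruction error + weight decay + a KL penalty
    pushing the mean hidden activation toward rho. Rows of X (assumed
    scaled to [0, 1]) are unlabeled samples; the learned (W, b1) act as
    the basis used to re-represent the labeled target samples."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(0, 0.1, (d, n_hidden))
    b1, b2 = np.zeros(n_hidden), np.zeros(d)
    for _ in range(n_epochs):
        H = sigmoid(X @ W + b1)          # encode
        Xr = H @ W.T + b2                # decode (tied weights, linear)
        err = Xr - X
        rho_hat = H.mean(axis=0) + 1e-8
        kl_grad = beta * (-rho / rho_hat + (1 - rho) / (1 - rho_hat))
        dH = (err @ W + kl_grad / n) * H * (1 - H)
        gW = (X.T @ dH + err.T @ H) / n + lam * W
        W -= lr * gW
        b1 -= lr * dH.mean(axis=0)
        b2 -= lr * err.mean(axis=0)
    return W, b1

# Hypothetical unlabeled gas responses, scaled to [0, 1]
X_gas = np.random.default_rng(1).random((500, 16))
W, b1 = train_sparse_autoencoder(X_gas)
# New representation of labeled samples: features = sigmoid(X_wound @ W + b1)
```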
Efficient space-time sampling with pixel-wise coded exposure for high-speed imaging.
Liu, Dengyu; Gu, Jinwei; Hitomi, Yasunobu; Gupta, Mohit; Mitsunaga, Tomoo; Nayar, Shree K
2014-02-01
Cameras face a fundamental trade-off between spatial and temporal resolution. Digital still cameras can capture images with high spatial resolution, but most high-speed video cameras have relatively low spatial resolution. It is hard to overcome this trade-off without incurring a significant increase in hardware costs. In this paper, we propose techniques for sampling, representing, and reconstructing the space-time volume to overcome this trade-off. Our approach has two important distinctions compared to previous works: 1) We achieve sparse representation of videos by learning an overcomplete dictionary on video patches, and 2) we adhere to practical hardware constraints on sampling schemes imposed by architectures of current image sensors, which means that our sampling function can be implemented on CMOS image sensors with modified control units in the future. We evaluate components of our approach, sampling function and sparse representation, by comparing them to several existing approaches. We also implement a prototype imaging system with pixel-wise coded exposure control using a liquid crystal on silicon device. System characteristics such as field of view and modulation transfer function are evaluated for our imaging system. Both simulations and experiments on a wide range of scenes show that our method can effectively reconstruct a video from a single coded image while maintaining high spatial resolution.
Triffo, W. J.; Palsdottir, H.; McDonald, K. L.; Lee, J. K.; Inman, J. L.; Bissell, M. J.; Raphael, R. M.; Auer, M.
2009-01-01
Summary High-pressure freezing is the preferred method to prepare thick biological specimens for ultrastructural studies. However, the advantages obtained by this method often prove unattainable for samples that are difficult to handle during the freezing and substitution protocols. Delicate and sparse samples are difficult to manipulate and maintain intact throughout the sequence of freezing, infiltration, embedding and final orientation for sectioning and subsequent transmission electron microscopy. An established approach to surmount these difficulties is the use of cellulose microdialysis tubing to transport the sample. With an inner diameter of 200 µm, the tubing protects small and fragile samples within the thickness constraints of high-pressure freezing, and the tube ends can be sealed to avoid loss of sample. Importantly, the transparency of the tubing allows optical study of the specimen at different steps in the process. Here, we describe the use of a micromanipulator and microinjection apparatus to handle and position delicate specimens within the tubing. We report two biologically significant examples that benefit from this approach, 3D cultures of mammary epithelial cells and cochlear outer hair cells. We illustrate the potential for correlative light and electron microscopy as well as electron tomography. PMID:18445158
Relationships between milk culture results and milk yield in Norwegian dairy cattle.
Reksen, O; Sølverød, L; Østerås, O
2007-10-01
Associations between test-day milk yield and positive milk cultures for Staphylococcus aureus, Streptococcus spp., and other mastitis pathogens or a negative milk culture for mastitis pathogens were assessed in quarter milk samples from randomly sampled cows selected without regard to current or previous udder health status. Staphylococcus aureus was dichotomized according to sparse (≤1,500 cfu/mL of milk) or rich (>1,500 cfu/mL of milk) growth of the bacteria. Quarter milk samples were obtained on 1 to 4 occasions from 2,740 cows in 354 Norwegian dairy herds, resulting in a total of 3,430 samplings. Measures of test-day milk yield were obtained monthly and related to 3,547 microbiological diagnoses at the cow level. Mixed model linear regression models incorporating an autoregressive covariance structure accounting for repeated test-day milk yields within cow and random effects at the herd and sample level were used to quantify the effect of positive milk cultures on test-day milk yields. Identical models were run separately for first-parity, second-parity, and third-parity or older cows. Fixed effects were days in milk, the natural logarithm of days in milk, sparse and rich growth of Staph. aureus (1/0), Streptococcus spp. (1/0), other mastitis pathogens (1/0), calving season, time of test-day milk yields relative to time of microbiological diagnosis (test day relative to time of diagnosis), and the interaction terms between microbiological diagnosis and test day relative to time of diagnosis. The models were run with the logarithmically transformed composite milk somatic cell count excluded and included. Rich growth of Staph. aureus was associated with decreased production levels in first-parity cows. An interaction between rich growth of Staph. aureus and test day relative to time of diagnosis also predicted a decline in milk production in third-parity or older cows. Interaction between sparse growth of Staph. aureus and test day relative to time of diagnosis predicted declining test-day milk yields in first-parity cows. Sparse growth of Staph. aureus was associated with high milk yields in third-parity or older cows after including the logarithmically transformed composite milk somatic cell count in the model, which illustrates that lower production levels are related to elevated somatic cell counts in high-producing cows. The same association with test-day milk yield was found among Streptococcus spp.-positive pluriparous cows.
Period Estimation for Sparsely-sampled Quasi-periodic Light Curves Applied to Miras
NASA Astrophysics Data System (ADS)
He, Shiyuan; Yuan, Wenlong; Huang, Jianhua Z.; Long, James; Macri, Lucas M.
2016-12-01
We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal for period, we implement a hybrid method that applies the quasi-Newton algorithm for Gaussian process parameters and search the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period-luminosity relations.
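A much-simplified version of this period search can be sketched by scanning a dense period grid with a weighted least-squares sinusoid fit; the Gaussian-process term for stochastic deviations, central to the paper's model, is omitted here, and the light curve below is synthetic with illustrative numbers.

```python
import numpy as np

def period_grid_search(t, y, yerr, periods):
    """For each trial period, fit constant + sinusoid by weighted least
    squares and score it by chi^2; return the best period. (The paper's
    semi-parametric model adds a Gaussian-process term for stochastic
    deviations; this sketch keeps only the periodic part.)"""
    best_p, best_chi2 = None, np.inf
    w = 1.0 / np.asarray(yerr) ** 2
    for p in periods:
        phase = 2 * np.pi * t / p
        A = np.column_stack([np.ones_like(t), np.sin(phase), np.cos(phase)])
        Aw = A * w[:, None]
        coef = np.linalg.solve(A.T @ Aw, Aw.T @ y)
        chi2 = np.sum(w * (y - A @ coef) ** 2)
        if chi2 < best_chi2:
            best_p, best_chi2 = p, chi2
    return best_p

# Sparsely sampled synthetic Mira-like light curve (hypothetical numbers)
rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0, 1200, 60))            # 60 epochs over ~3.3 yr
y = 14 + 1.5 * np.sin(2 * np.pi * t / 332.0) + rng.normal(0, 0.1, 60)
grid = np.linspace(100, 1000, 5000)
best = period_grid_search(t, y, 0.1 * np.ones(60), grid)
print(f"recovered period = {best:.1f} d")
```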
Hall, Amee J; Brown, Trecia A; Grahn, Jessica A; Gati, Joseph S; Nixon, Pam L; Hughes, Sarah M; Menon, Ravi S; Lomber, Stephen G
2014-03-15
When conducting auditory investigations using functional magnetic resonance imaging (fMRI), there are inherent potential confounds that need to be considered. Traditional continuous fMRI acquisition methods produce sounds >90 dB that compete with stimuli or produce neural activation that masks evoked activity. Sparse scanning methods insert a period of reduced MRI-related noise between image acquisitions, in which a stimulus can be presented without competition. In this study, we compared sparse and continuous scanning methods to identify the optimal approach for investigating acoustically evoked cortical, thalamic, and midbrain activity in the cat. Using a 7 T magnet, we presented broadband noise, 10 kHz tones, or 0.5 kHz tones in a block design, interleaved with blocks in which no stimulus was presented. Continuous scanning resulted in larger clusters of activation and more peak voxels within the auditory cortex. However, no significant activation was observed within the thalamus. Also, there was no significant difference found between continuous and sparse scanning in activations of midbrain structures. Higher-magnitude activations were identified in the auditory cortex compared to the midbrain using both continuous and sparse scanning. These results indicate that continuous scanning is the preferred method for investigations of auditory cortex in the cat using fMRI. Also, the choice of method for future investigations of midbrain activity should be driven by other experimental factors, such as stimulus intensity and task performance during scanning. Copyright © 2014 Elsevier B.V. All rights reserved.
Microwave scattering models and basic experiments
NASA Technical Reports Server (NTRS)
Fung, Adrian K.
1989-01-01
Progress is summarized which has been made in four areas of study: (1) scattering model development for sparsely populated media, such as a forested area; (2) scattering model development for dense media, such as a sea ice medium or a snow covered terrain; (3) model development for randomly rough surfaces; and (4) design and conduct of basic scattering and attenuation experiments suitable for the verification of theoretical models.
Ozone in remote areas of the Southern Rocky Mountains
Robert C. Musselman; John L. Korfmacher
2014-01-01
Ozone (O3) data are sparse for remote, non-urban mountain areas of the western U.S. Ozone was monitored 2007-2011 at high elevation sites in national forests in Colorado and northeastern Utah using a portable battery-powered O3 monitor. The data suggest that many of these remote locations already have O3 concentrations that would contribute to exceedance of the current...
High-frame-rate full-vocal-tract 3D dynamic speech imaging.
Fu, Maojing; Barlaz, Marissa S; Holtrop, Joseph L; Perry, Jamie L; Kuehn, David P; Shosted, Ryan K; Liang, Zhi-Pei; Sutton, Bradley P
2017-04-01
To achieve a high temporal frame rate, high spatial resolution, and full-vocal-tract coverage for three-dimensional dynamic speech MRI by using low-rank modeling and sparse sampling. Three-dimensional dynamic speech MRI is enabled by integrating a novel data acquisition strategy and an image reconstruction method with the partial separability model: (a) a self-navigated sparse sampling strategy that accelerates data acquisition by collecting high-nominal-frame-rate cone navigators and imaging data within a single repetition time, and (b) a reconstruction method that recovers high-quality speech dynamics from sparse (k,t)-space data by enforcing joint low-rank and spatiotemporal total variation constraints. The proposed method has been evaluated through in vivo experiments. A nominal temporal frame rate of 166 frames per second (defined based on a repetition time of 5.99 ms) was achieved for an imaging volume covering the entire vocal tract with a spatial resolution of 2.2 × 2.2 × 5.0 mm3. The practical utility of the proposed method was demonstrated via both validation experiments and a phonetics investigation. Three-dimensional dynamic speech imaging is possible with full-vocal-tract coverage, high spatial resolution, and a high nominal frame rate, providing dynamic speech data useful for phonetic studies. Magn Reson Med 77:1619-1629, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
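The low-rank (partial separability) step can be illustrated in the image domain: a temporal basis is taken from densely sampled navigator signals, and spatial coefficients are fit to the sparsely sampled data. The real method operates on (k,t)-space data with additional regularization; the sketch below is only a toy factorization under those simplifying assumptions.

```python
import numpy as np

def ps_reconstruct(nav, imaging, sample_mask, L=8):
    """Toy partial-separability recovery. nav: (n_nav_voxels x n_frames)
    densely sampled navigator signals; imaging: (n_voxels x n_frames)
    data, valid where sample_mask (same shape, boolean) is True."""
    # Rank-L temporal subspace from the navigators
    _, _, Vt = np.linalg.svd(nav, full_matrices=False)
    V = Vt[:L]                                   # (L x n_frames)
    n_vox, n_frames = imaging.shape
    U = np.zeros((n_vox, L))
    for i in range(n_vox):                       # per-voxel least squares
        m = sample_mask[i]
        if m.sum() >= L:
            U[i], *_ = np.linalg.lstsq(V[:, m].T, imaging[i, m], rcond=None)
    return U @ V                                 # recovered (voxels x frames)

# Toy example: rank-3 dynamic object, ~30% of frames sampled per voxel
rng = np.random.default_rng(6)
U0, V0 = rng.normal(size=(500, 3)), rng.normal(size=(3, 200))
truth = U0 @ V0
nav = truth[:50]                                 # fully sampled navigators
mask = rng.random((500, 200)) < 0.3
recon = ps_reconstruct(nav, np.where(mask, truth, 0.0), mask, L=3)
print(f"relative error: {np.linalg.norm(recon - truth) / np.linalg.norm(truth):.2e}")
```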
Particle Filter Based Tracking in a Detection Sparse Discrete Event Simulation Environment
2007-03-01
Micrometeoroid and Lunar Secondary Ejecta Flux Measurements: Comparison of Three Acoustic Systems
NASA Technical Reports Server (NTRS)
Corsaro, R. D.; Giovane, F.; Liou, Jer-Chyi; Burtchell, M.; Pisacane, V.; Lagakos, N.; Williams, E.; Stansbery, E.
2010-01-01
This report examines the inherent capability of three large-area acoustic sensor systems and their applicability to micrometeoroid (MM) and lunar secondary ejecta (SE) detection and characterization for future lunar exploration activities. Discussion is limited to instruments that can be fabricated and deployed with low resource requirements. Previously deployed impact detection probes typically have instrumented capture areas of less than 0.2 square meters. Since the particle flux decreases rapidly with increased particle size, such small-area sensors rarely encounter particles in the size range above 50 microns, and even their sampling of the population above 10 microns is typically limited. Characterizing the sparse dust population in the size range above 50 microns requires a very large-area capture instrument. However, it is also important that such an instrument simultaneously measure the population of the smaller particles, so as to provide a complete instantaneous snapshot of the population. For lunar or planetary surface studies, the system constraints are significant. The instrument must be as large as possible to sample the population of the largest MM. This is needed to reliably assess the particle impact risks and to develop cost-effective shielding designs for habitats, astronauts, and critical instruments. The instrument should also have very high sensitivity to measure the flux of small and slow SE particles, as the SE environment is currently poorly characterized and poses a contamination risk to machinery and personnel involved in exploration. Deployment also requires that the instrument add very little additional mass to the spacecraft. Three acoustic systems are being explored for this application.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bryan, Charles R.; Enos, David
In July 2014, the Electric Power Research Institute and industry partners sampled dust on the surface of an unused canister that had been stored in an overpack at the Hope Creek Nuclear Generating Station for approximately one year. The foreign material exclusion (FME) cover that had been on the top of the canister during storage, and a second recently removed FME cover, were also sampled. This report summarizes the results of analyses of dust samples collected from the unused Hope Creek canister and the FME covers. Both wet and dry samples of the dust/salts were collected, using SaltSmart(TM) sensors and Scotch-Brite(TM) abrasive pads, respectively. The SaltSmart(TM) samples were leached and the leachate analyzed chemically to determine the composition and surface load per unit area of soluble salts present on the canister surface. The dry pad samples were analyzed by X-ray fluorescence and by scanning electron microscopy to determine dust texture and mineralogy, and by leaching and chemical analysis to determine soluble salt compositions. The analyses showed that the dominant particles on the canister surface were stainless steel particles, generated during manufacturing of the canister. Sparse environmentally derived silicates and aluminosilicates were also present. Salt phases were sparse and consisted mostly of sulfates with rare nitrates and chlorides. On the FME covers, the dusts were mostly silicates/aluminosilicates; the soluble salts were consistent with those on the canister surface and were dominantly sulfates. It should be noted that the FME covers were washed by rain prior to sampling, which had an unknown effect on the measured salt loads and compositions. Sulfate salts dominated the assemblages on the canister and FME surfaces, and included Ca-SO4, but also Na-SO4, K-SO4, and Na-Al-SO4. It is likely that these salts were formed by particle-gas conversion reactions, either prior to, or after, deposition. These reactions involve reaction of carbonate, chloride, or nitrate salts with atmospheric SO2, sulfuric acid, or ammonium sulfate to form sulfate minerals. The Na-Al-SO4 phase is unusual, and may have formed by reaction of Na-Al-containing phases in aluminum smelter emissions with SO2, also present in smelter emissions. An aluminum smelter is located in Camden, NJ, 40 miles NE of the Hope Creek site.
Th-230 - U-238 series disequilibrium of the Olkaria basalts Gregory Rift Valley, Kenya
NASA Technical Reports Server (NTRS)
Black, S.; Macdonald, R.; Kelly, M.
1993-01-01
U-Th disequilibrium analyses of the Naivasha basalts show very small (U-238/Th-230) ratios, lower than those of any previously analyzed basalts. The broadly positive internal isochron trend from one sample indicates that the basalts may have source heterogeneities; this is supported by earlier work. The Naivasha complex comprises a bimodal suite of basalts and rhyolites. The basalts are divided into two stratigraphic groups, each of a transitional nature: the early basalt series (EBS), erupted prior to the Group 1 comendites, and the late basalt series (LBS), erupted temporally between the Broad Acres and Ololbutot centers. The basalts represent a very small percentage (less than 2 percent) of the overall eruptive volume of material at Naivasha. The analyzed samples come from four stratigraphic units in close proximity around the Ndabibi, Hell's Gate, and Akira areas. The earliest units occur as vesicular flows on the Ndabibi plain. These basalts are olivine-plagioclase phyric, with the associated hawaiites being sparsely plagioclase phyric. An absolute age of 0.5 Ma was estimated for these basalts. The next youngest basalt flows occur as younger tuff cones in the Ndabibi area and are mainly olivine-plagioclase-clinopyroxene phyric, with one purely plagioclase phyric sample. The final phase of activity at Ndabibi resulted in much younger tuff cones consisting of air-fall ashes and lapilli tuffs. Many of these contain resorbed plagioclase phenocrysts, with sample 120c also being clinopyroxene phyric. The isotopic evidence for the basalt formation is summarized.
Exploration for uranium deposits in the Spring Creek Mesa area, Montrose County, Colorado
Roach, Carl Houston
1954-01-01
The “ore-bearing sandstone” in the vicinity of relatively unoxidized ore deposits commonly contains sparse to abundant disseminated pyrite. In the vicinity of oxidized deposits, it commonly contains abundant limonite spots and widespread limonite staining.
State-wide Conservation Forum to Facilitate Cooperative Conservation
2007-03-01
...longleaf pine ecosystem that is home to the red-cockaded woodpecker (RCW), an endangered species. After significant training restrictions were imposed...ranking). Caroline and Essex counties were rural, sparsely populated areas, dominated by forestland and small farms. Both counties are experiencing
Poverty dynamics, ecological endowments, and land use among smallholders in the Brazilian Amazon.
Guedes, Gilvan R; VanWey, Leah K; Hull, James R; Antigo, Mariangela; Barbieri, Alisson F
2014-01-01
Rural settlement in previously sparsely occupied areas of the Brazilian Amazon has been associated with high levels of forest loss and unclear long-term social outcomes. We focus here on the micro-level processes in one settlement area to answer the question of how settler and farm endowments affect household poverty. We analyze the extent to which poverty is sensitive to changes in natural capital, land use strategies, and biophysical characteristics of properties (particularly soil quality). Cumulative time spent in poverty is simulated using Markovian processes, which show that accessibility to markets and the land use system are especially important for decreasing poverty among households in our sample. Wealthier households are selected into commercial production of perennials before our initial observation, and are therefore in poverty a lower proportion of the time. Land in pasture, in contrast, has an independent effect on reducing the proportion of time spent in poverty. Taken together, these results show that investments in roads and the institutional structures needed to make commercial agriculture or ranching viable in existing and new settlement areas can improve human well-being in frontiers. Copyright © 2013 Elsevier Inc. All rights reserved.
An efficient optical architecture for sparsely connected neural networks
NASA Technical Reports Server (NTRS)
Hine, Butler P., III; Downie, John D.; Reid, Max B.
1990-01-01
An architecture for a general-purpose optical neural network processor is presented in which the interconnections and weights are formed by directing coherent beams holographically. This uses the space-bandwidth product of the recording medium for sparsely interconnected networks more efficiently than the commonly used vector-matrix multiplier, since all of the hologram area is in use. An investigation is made of the use of computer-generated holograms recorded on updatable media, such as thermoplastic materials, to define the interconnections and weights of a neural network processor; attention is given to the limits on interconnection densities, diffraction efficiencies, and weighting accuracies possible with such an updatable thin-film holographic device.
Landscape selection by piping plovers has implications for measuring habitat and population size
Anteau, Michael J.; Shaffer, Terry L.; Wiltermuth, Mark T.; Sherfy, Mark H.
2014-01-01
How breeding birds distribute in relation to landscape-scale habitat features has important implications for conservation because those features may constrain habitat suitability. Furthermore, knowledge of these associations can help build models to improve area-wide demographic estimates or to develop a sampling stratification for research and monitoring. This is particularly important for rare species that have uneven distributions across vast areas, such as the federally listed piping plover (Charadrius melodus; hereafter plover). We examined how remotely-sensed landscape features influenced the distribution of breeding plover pairs among 2-km shoreline segments during 2006–2009 at Lake Sakakawea in North Dakota, USA. We found strong associations between remotely-sensed landscape features and plover abundance and distribution (R^2 = 0.65). Plovers were nearly absent from segments with bluffs (>25 m elevation increase within 250 m of shoreline). Relative plover density (pairs/ha) was markedly greater on islands (4.84 ± 1.22 SE) than on mainlands (0.85 ± 0.17 SE). Pair numbers increased with the abundance of nesting habitat (unvegetated-flat areas; β̂ = 0.28 ± 0.08 SE). On islands, pair numbers also increased with the relative proportion of the total area that was habitat (β̂ = 3.27 ± 0.46 SE). Our model could be adapted to estimate the breeding population of plovers or to make predictions that provide a basis for stratification and design of future surveys. Knowledge of landscape features, such as bluffs, that exclude use by birds refines habitat suitability and facilitates more accurate estimates of habitat and population abundance by decreasing the size of the sampling universe. Furthermore, the techniques demonstrated here are applicable to other vast areas where birds breed in sparse or uneven densities.
Effects of different interior decorations in the seclusion area of a psychiatric acute ward.
Vaaler, Arne E; Morken, Gunnar; Linaker, Olav M
2005-01-01
The objective of the study was to compare the development of symptoms, behaviours, treatment, and patient satisfaction between a seclusion area with a traditional interior and one furnished like an ordinary home. A naturalistic sample of 56 consecutive patients admitted to an acute ward was allocated to two different seclusion areas, one with a traditional interior and one decorated as an ordinary home. Symptoms of psychopathology, therapeutic steps taken, violent episodes, length of patient stay, and patient satisfaction were recorded. There were no differences in changes in scores on the Positive and Negative Syndrome Scale for schizophrenia, the Brøset Violence Checklist, or the Global Assessment of Functioning split-version scale between the two patient groups. Therapeutic steps taken, the number of violent episodes, and length of patient stay were also similar. Female patients preferred the ordinary home interior. It was concluded that an interior furnished like an ordinary home in the seclusion areas created an environment with treatment outcomes comparable to those of the traditional, dismal interior, and had positive effects on many patients' well-being, at least among the women. The traditional belief that a sparsely decorated interior reduces symptoms of psychopathology and dangerous behaviours was not supported by our data.
Yin, X X; Ng, B W-H; Ramamohanarao, K; Baghai-Wadji, A; Abbott, D
2012-09-01
It has been shown that magnetic resonance images (MRIs) with a sparse representation in a transformed domain, e.g. spatial finite differences (FD) or the discrete cosine transform (DCT), can be restored from undersampled k-space by applying current compressive sampling theory. This paper presents a model-based method for the restoration of MRIs. A reduced-order model, in which the full system response is projected onto a subspace of lower dimensionality, is used to accelerate image reconstruction by reducing the size of the linear system involved. The singular value threshold (SVT) technique is applied as a denoising scheme to reduce and select the model order of the inverse Fourier transform image, and to restore multi-slice breast MRIs that have been compressively sampled in k-space. The restored MRIs with SVT denoising show reduced sampling errors compared to direct MRI restoration via spatial FD or DCT. Compressive sampling is a technique for finding sparse solutions to underdetermined linear systems; the sparsity implicit in MRIs is exploited to reconstruct images from significantly undersampled k-space. The challenge, however, is that the random undersampling produces incoherent artifacts, adding noise-like interference to the sparsely represented image. The recovery algorithms in the literature are not capable of fully removing these artifacts, so a denoising procedure is needed to improve the quality of image recovery. This paper applies a singular value threshold algorithm to reduce the model order of the image basis functions, which further improves the quality of image reconstruction by removing noise artifacts. The principle of the denoising scheme is to reconstruct the sparse MRI matrices optimally with a lower rank by selecting a smaller number of dominant singular values. The singular value threshold algorithm minimizes the nuclear norm of the difference between the sampled image and the recovered image. It is shown that this algorithm improves on previous image reconstruction algorithms in removing noise artifacts while significantly improving the quality of MRI recovery.
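To make the SVT denoising step concrete, here is a minimal sketch assuming only NumPy: the singular values of a 2D image matrix are soft-thresholded, which simultaneously selects the model order (the retained rank) and suppresses noise. The threshold tau and the toy data are illustrative choices, not the paper's settings.

```python
import numpy as np

def svt_denoise(img, tau):
    """Singular value thresholding: soft-threshold the singular values of a
    2D matrix to obtain a low-rank (denoised) approximation and its rank."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    s_thr = np.maximum(s - tau, 0.0)        # soft-thresholding shrinks small singular values to zero
    rank = int(np.count_nonzero(s_thr))     # effective model order after thresholding
    return U[:, :rank] @ np.diag(s_thr[:rank]) @ Vt[:rank, :], rank

# Toy usage: a noisy rank-1 "image" is recovered along with its model order
rng = np.random.default_rng(0)
clean = np.outer(rng.standard_normal(64), rng.standard_normal(64))
noisy = clean + 0.1 * rng.standard_normal((64, 64))
denoised, order = svt_denoise(noisy, tau=2.0)
print(order, np.linalg.norm(denoised - clean) / np.linalg.norm(clean))
```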
Geochemical evidence for diversity of dust sources in the southwestern United States
Reheis, M.C.; Budahn, J.R.; Lamothe, P.J.
2002-01-01
Several potential dust sources, including generic sources of sparsely vegetated alluvium, playa deposits, and anthropogenic emissions, as well as the area around Owens Lake, California, affect the composition of modern dust in the southwestern United States. A comparison of geochemical analyses of modern and old (a few thousand years) dust with samples of potential local sources suggests that dusts reflect four primary sources: (1) alluvial sediments (represented by Hf, K, Rb, Zr, and rare-earth elements), (2) playas, most of which produce calcareous dust (Sr, associated with Ca), (3) the area of Owens (dry) Lake, a human-induced playa (As, Ba, Li, Pb, Sb, and Sr), and (4) anthropogenic and/or volcanic emissions (As, Cr, Ni, and Sb). A comparison of dust and source samples with previous analyses shows that Owens (dry) Lake and mining wastes from the adjacent Cerro Gordo mining district are the primary sources of As, Ba, Li, and Pb in dusts from Owens Valley. Decreases in dust contents of As, Ba, and Sb with distance from Owens Valley suggest that dust from southern Owens Valley is being transported at least 400 km to the east. Samples of old dust that accumulated before European settlement are distinctly lower in As, Ba, and Sb abundances relative to modern dust, likely due to modern transport of dust from Owens Valley. Thus, southern Owens Valley appears to be an important, geochemically distinct, point source for regional dust in the southwestern United States. Copyright © 2002 Elsevier Science Ltd.
Utilization of satellite imagery by in-flight aircraft. [for weather information
NASA Technical Reports Server (NTRS)
Luers, J. K.
1976-01-01
Present and future utilization of satellite weather data by commercial aircraft while in flight was assessed. Weather information of interest to aviation that is available or will become available with future geostationary satellites includes the following: severe weather areas, jet stream location, weather observation at destination airport, fog areas, and vertical temperature profiles. Utilization of this information by in-flight aircraft is especially beneficial for flights over the oceans or over remote land areas where surface-based observations and communications are sparse and inadequate.
The Mass Function in h+χ Persei
NASA Astrophysics Data System (ADS)
Bragg, Ann; Kenyon, Scott
2000-08-01
Knowledge of the stellar initial mass function (IMF) is critical to understanding star formation and galaxy evolution. Past studies of the IMF in open clusters have primarily used luminosity functions to determine mass functions, frequently in relatively sparse clusters. Our goal with this project is to derive a reliable, well-sampled IMF for a pair of very dense young clusters (h+χ Persei) with ages of 1-2 × 10^7 yr (e.g., Vogt, A&A 11:359), where stellar evolution theory is robust. We will construct the HR diagram using both photometry and spectral types to derive more accurate stellar masses and ages than are possible using photometry alone. Results from the two clusters will be compared to examine the universality of the IMF. We currently have a spectroscopic sample covering an area within 9 arcminutes of the center of each cluster, taken with the FAST Spectrograph. The sample is complete to V=15.4 and contains ~1000 stars. We request 2 nights at WIYN/HYDRA to extend this sample to deeper magnitudes, allowing us to determine the IMF of the clusters to a lower limiting mass and to search for a pre-main sequence, theoretically predicted to be present for clusters of this age. Note that both clusters are contained within a single HYDRA field.
Efficient ICCG on a shared memory multiprocessor
NASA Technical Reports Server (NTRS)
Hammond, Steven W.; Schreiber, Robert
1989-01-01
Different approaches are discussed for exploiting parallelism in the ICCG (Incomplete Cholesky Conjugate Gradient) method for solving large sparse symmetric positive definite systems of equations on a shared memory parallel computer. Techniques for efficiently solving triangular systems and computing sparse matrix-vector products are explored. Three methods for scheduling the tasks in solving triangular systems are implemented on the Sequent Balance 21000. Sample problems that are representative of a large class of problems solved using iterative methods are used. We show that a static analysis to determine data dependences in the triangular solve can greatly improve its parallel efficiency. We also show that ignoring symmetry and storing the whole matrix can reduce solution time substantially.
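The static dependence analysis mentioned in the abstract can be illustrated with a level-scheduling sketch: rows of a sparse lower-triangular factor are grouped into levels such that all rows within one level are mutually independent and could be solved in parallel. This is a generic Python/SciPy sketch, not the authors' Sequent Balance implementation.

```python
import numpy as np
from scipy.sparse import csr_matrix

def level_schedule(L):
    """Static dependence analysis for a sparse lower-triangular solve:
    assign each row the longest dependence-chain depth reaching it."""
    n = L.shape[0]
    level = np.zeros(n, dtype=int)
    for i in range(n):
        cols = L.indices[L.indptr[i]:L.indptr[i + 1]]   # nonzeros in row i
        deps = cols[cols < i]                           # row i waits on these rows
        if deps.size:
            level[i] = level[deps].max() + 1
    return [np.flatnonzero(level == k) for k in range(level.max() + 1)]

# Rows 0 and 2 share no dependences, so they form one parallel level.
L = csr_matrix(np.tril([[2., 0, 0, 0], [1, 2, 0, 0], [0, 0, 2, 0], [0, 1, 1, 2]]))
print(level_schedule(L))   # -> [array([0, 2]), array([1]), array([3])]
```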
Does initial spacing influence crown and hydraulic architecture of Eucalyptus marginata?
Grigg, A H; Macfarlane, C; Evangelista, C; Eamus, D; Adams, M A
2008-05-01
Long-term declines in rainfall in south-western Australia have resulted in increased interest in the hydraulic characteristics of jarrah (Eucalyptus marginata Donn ex Smith) forest established in the region's drinking water catchments on rehabilitated bauxite mining sites. We hypothesized that in jarrah forest established on rehabilitated mine sites: (1) leaf area index (L) is independent of initial tree spacing; and (2) more densely planted trees have less leaf area for the same leaf mass, or the same sapwood area, and have denser sapwood. Initial stand densities ranged from about 600 to 9000 stems ha^-1, and trees were 18 years old at the time of sampling. Leaf area index was unaffected by initial stand density, except in the most sparsely stocked stands, where L was 1.2 compared with 2.0-2.5 in stands at other spacings. The ratio of leaf area to sapwood area (A_l:A_s) was unaffected by tree spacing or tree size and was 0.2 at 1.3 m height and 0.25 at the crown base. There were small increases in sapwood density and decreases in specific leaf area with increased spacing. Tree diameter or basal area was a better predictor of leaf area than sapwood area. At the stand scale, basal area was a good predictor of L (r^2 = 0.98, n = 15) except in the densest stands. We conclude that the hydraulic attributes of this forest type are largely independent of initial tree spacing, thus simplifying parameterization of stand and catchment water balance models.
Rocky Mountain snowpack chemistry at selected sites for 2001
Ingersoll, George P.; Mast, M. Alisa; Clow, David W.; Nanus, Leora; Campbell, Donald H.; Handran, Heather
2003-01-01
Because regional-scale atmospheric deposition data in the Rocky Mountains are sparse, a program was designed by the U.S. Geological Survey, in cooperation with the National Park Service, U.S. Department of Agriculture Forest Service, and other agencies, to more thoroughly determine the chemical composition of precipitation and to identify sources of atmospherically deposited contaminants in a network of high-elevation sites. Samples of seasonal snowpacks at 57 geographically distributed sites, in a regional network from New Mexico to Montana, were collected and analyzed for major ions (including ammonium, nitrate, and sulfate), alkalinity, and dissolved organic carbon during 2001. Sites selected in this report have been sampled annually since 1993, enabling identification of increases or decreases in chemical concentrations from year to year. Spatial patterns in snowpack-chemical data for concentrations of ammonium, nitrate, and sulfate indicate that concentrations of these acid precursors in less developed areas of the region are lower than concentrations in the heavily developed areas. Results for the 2001 snowpack-chemistry analyses, however, indicate increases in concentrations of ammonium and nitrate in particular at sites where past concentrations typically were lower. Since 1993, concentrations of nitrate and sulfate were highest in snowpack samples in northern Colorado that were collected from sites adjacent to the Denver metropolitan area to the east and the coal-fired powerplants to the west. In 2001, relatively high concentrations of nitrate (12.3 to 23.0 microequivalents per liter (µeq/L)) and sulfate (7.7 to 12.5 µeq/L) were detected in Montana and Wyoming. Ammonium concentrations were highest in north-central Colorado (14.5 to 16.9 µeq/L) and southwestern Montana (12.8 to 14.2 µeq/L).
Infrared moving small target detection based on saliency extraction and image sparse representation
NASA Astrophysics Data System (ADS)
Zhang, Xiaomin; Ren, Kan; Gao, Jin; Li, Chaowei; Gu, Guohua; Wan, Minjie
2016-10-01
Moving small target detection in infrared imagery is a crucial technique for infrared search and tracking systems. This paper presents a novel small target detection technique based on frequency-domain saliency extraction and image sparse representation. First, we exploit features of the Fourier spectrum image and the magnitude spectrum of the Fourier transform to make a rough extraction of saliency regions, and use threshold segmentation to separate the regions that look salient from the background, yielding a binary image. Second, a new patch-image model and an over-complete dictionary are introduced into the detection system, converting infrared small target detection into an optimization problem of patch-image information reconstruction based on sparse representation. More specifically, the test image and the binary image are decomposed into image patches following certain rules. We select potential target areas according to the binary patch-image, which contains the salient region information, then exploit the over-complete infrared small target dictionary to reconstruct the test image blocks that may contain targets. The coefficients of the target image patches satisfy sparsity constraints. Finally, for image sequences, Euclidean distance is used to reduce the false alarm ratio and increase the detection accuracy of moving small targets in infrared images, exploiting the correlation of target positions between frames.
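The frequency-domain saliency stage can be sketched with the classic spectral-residual formulation, used here as a stand-in since the abstract does not fully specify the authors' pipeline; thresh_scale and the smoothing sizes are assumed parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spectral_residual_saliency(img, thresh_scale=3.0):
    """Spectral-residual saliency: statistically unusual structure in the
    log-magnitude spectrum maps back to salient regions in the image."""
    F = np.fft.fft2(np.asarray(img, dtype=float))
    log_mag = np.log(np.abs(F) + 1e-8)
    phase = np.angle(F)
    residual = log_mag - uniform_filter(log_mag, size=3)   # remove the smooth spectral trend
    sal = np.abs(np.fft.ifft2(np.exp(residual) * np.exp(1j * phase))) ** 2
    sal = uniform_filter(sal, size=3)                      # light post-smoothing
    return sal > thresh_scale * sal.mean()                 # binary map of candidate target regions

# Toy usage: a dim point target on a noisy background
rng = np.random.default_rng(0)
frame = 0.1 * rng.standard_normal((128, 128))
frame[64, 64] += 1.0
mask = spectral_residual_saliency(frame)
```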
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linenberg, A.; Lander, N.J.
1994-12-31
The need for remote monitoring of certain compounds in a sparsely populated area with limited user assistance led to the development and manufacture of a self-contained, portable gas chromatograph with the appropriate software. Part-per-billion levels of vinyl chloride, cis-1,2-dichloroethylene, and trichloroethylene were detected in air using a trap for preconcentration of the compounds. The units were continuously calibrated with certified standards from Scott Specialty Gases, which in one case was 1 part per billion of the aforementioned compounds. The entire operation of the units, including monitoring instrument responses, changing operating parameters, data transfer, data review, and data reporting, was done entirely on a remote basis from approximately 600 miles away using a remote computer with a modem and remote operating software. The entire system concept promises the availability of highly sensitive remote monitoring in sparsely populated areas for long periods of time.
Duhalde, Denisse J; Arumí, José L; Oyarzún, Ricardo A; Rivera, Diego A
2018-06-11
A fuzzy logic approach has been proposed to address the uncertainty caused by sparse data in the assessment of the intrinsic vulnerability of a groundwater system with parametric methods in Las Trancas Valley, in the Andean Mountains of south-central Chile, a popular tourist area that nevertheless lacks centralized public drinking water and sewage systems; this situation is a potential source of groundwater pollution. Based on DRASTIC, GOD, and EKv and expert knowledge of the study area, a Mamdani fuzzy approach was developed and the spatial data were processed in ArcGIS. The groundwater system exhibited areas with high, medium, and low intrinsic vulnerability indices. The fuzzy results were compared with those of the traditional methods and, in general, showed good spatial agreement, although significant changes were also identified in the spatial distribution of the indices. The Mamdani fuzzy approach has shown itself to be a useful and practical tool for assessing the intrinsic vulnerability of an aquifer under sparse data conditions.
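As a rough illustration of the Mamdani machinery (triangular memberships, min-implication, max-aggregation, centroid defuzzification), the toy below maps two hypothetical DRASTIC-style ratings to a vulnerability index; the membership supports and the two rules are invented for illustration and are not the study's rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function: support [a, c], peak at b."""
    x = np.asarray(x, dtype=float)
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def mamdani_vulnerability(depth_rating, recharge_rating):
    """Toy two-rule Mamdani inference: crisp ratings in [0, 10] map to a
    crisp vulnerability index in [0, 10] via min-implication,
    max-aggregation, and centroid defuzzification."""
    y = np.linspace(0.0, 10.0, 201)                 # output universe
    low_v = tri(y, 0.0, 2.5, 5.0)                   # "low vulnerability" output set
    high_v = tri(y, 5.0, 7.5, 10.0)                 # "high vulnerability" output set
    shallow = float(tri(depth_rating, 4.0, 10.0, 16.0))     # antecedent memberships
    recharge = float(tri(recharge_rating, 4.0, 10.0, 16.0))
    r_high = min(shallow, recharge)                 # IF shallow AND high recharge THEN high
    r_low = 1.0 - max(shallow, recharge)            # toy complement rule THEN low
    agg = np.maximum(np.minimum(r_high, high_v), np.minimum(r_low, low_v))
    return float((agg * y).sum() / (agg.sum() + 1e-12))     # centroid of clipped sets

print(round(mamdani_vulnerability(8.0, 7.0), 2))    # higher ratings -> index above 5
```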
NASA Astrophysics Data System (ADS)
Ota, Junko; Umehara, Kensuke; Ishimaru, Naoki; Ohno, Shunsuke; Okamoto, Kentaro; Suzuki, Takanori; Shirai, Naoki; Ishida, Takayuki
2017-02-01
As the capability of high-resolution displays grows, high-resolution images are often required in computed tomography (CT). However, acquiring high-resolution images takes a higher radiation dose and a longer scanning time. In this study, we applied the sparse-coding-based super-resolution (ScSR) method to generate high-resolution images without increasing the radiation dose. We prepared an over-complete dictionary that learned the mapping between low- and high-resolution patches, and sought a sparse representation of each patch of the low-resolution input. These coefficients were used to generate the high-resolution output. For evaluation, 44 CT cases were used as the test dataset. We up-sampled images by factors of 2 or 4 and compared the image quality of the ScSR scheme with that of bilinear and bicubic interpolation, the traditional interpolation schemes. We also compared the image quality obtained with three learning datasets: a total of 45 CT images, 91 non-medical images, and 93 chest radiographs were used for dictionary preparation, respectively. Image quality was evaluated by measuring the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The differences in PSNR and SSIM between the ScSR method and the interpolation methods were statistically significant. Visual assessment confirmed that the ScSR method generated high-resolution images with sharpness, whereas the conventional interpolation methods generated over-smoothed images. Comparing the three training datasets, there were no significant differences among the CT, CXR, and non-medical datasets. These results suggest that ScSR provides a robust approach for up-sampling CT images and yields substantially higher image quality for extended images in CT.
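The core ScSR reconstruction step can be sketched as follows, assuming a coupled low-/high-resolution dictionary pair has already been learned (random matrices stand in for it here): the sparse code of a low-resolution patch over D_lo is reused to synthesize the high-resolution patch from D_hi. Patch sizes and the sparsity level are illustrative.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

# Hypothetical coupled dictionaries: column k of D_lo and D_hi form a paired
# low-/high-resolution patch atom (random here, standing in for learned ones).
rng = np.random.default_rng(0)
n_atoms, lo_dim, hi_dim = 256, 25, 100      # 5x5 low-res patch -> 10x10 high-res patch
D_lo = rng.standard_normal((lo_dim, n_atoms))
D_lo /= np.linalg.norm(D_lo, axis=0)        # OMP expects unit-norm atoms
D_hi = rng.standard_normal((hi_dim, n_atoms))

def scsr_patch(lo_patch, n_nonzero=5):
    """Sparse-code the low-res patch over D_lo, then synthesize the
    high-res patch from D_hi using the same sparse coefficients."""
    alpha = orthogonal_mp(D_lo, lo_patch, n_nonzero_coefs=n_nonzero)
    return D_hi @ alpha

hi_patch = scsr_patch(rng.standard_normal(lo_dim))
print(hi_patch.shape)                        # (100,) -> reshape to a 10x10 patch
```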
SparseBeads data: benchmarking sparsity-regularized computed tomography
NASA Astrophysics Data System (ADS)
Jørgensen, Jakob S.; Coban, Sophia B.; Lionheart, William R. B.; McDonald, Samuel A.; Withers, Philip J.
2017-12-01
Sparsity regularization (SR), such as total variation (TV) minimization, allows accurate image reconstruction in x-ray computed tomography (CT) from fewer projections than analytical methods. Exactly how few projections suffice, and how this number may depend on the image, remain poorly understood. Compressive sensing connects the critical number of projections to the image sparsity but does not cover CT; however, empirical results suggest a similar connection. The present work establishes, for real CT data, a connection between gradient sparsity and the sufficient number of projections for accurate TV-regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes, as well as mixtures, were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels, numbers of projections, and noise levels, allowing the systematic assessment of parameters affecting the performance of SR reconstruction algorithms. Using the SparseBeads data, TV-regularized reconstruction quality was assessed as a function of the number of projections and the gradient sparsity. The critical number of projections for satisfactory TV-regularized reconstruction increased almost linearly with the gradient sparsity. This establishes a quantitative guideline from which one may predict how few projections to acquire based on the expected sample sparsity level, as an aid in planning dose- or time-critical experiments. The results are expected to hold for samples of similar characteristics, i.e. consisting of few, distinct phases with relatively simple structure. Such cases are plentiful in porous media, composite materials, foams, as well as non-destructive testing and metrology. For samples of other characteristics the proposed methodology may be used to investigate similar relations.
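A minimal sketch of TV-regularized few-view reconstruction in the spirit of the benchmark (not the authors' solver): alternate a backprojection-based data-consistency step with a TV denoising step. The step size, projection count, and TV weight are arbitrary illustrative choices.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize
from skimage.restoration import denoise_tv_chambolle

# Few-view data: only 24 projections of a 128x128 phantom
img = resize(shepp_logan_phantom(), (128, 128))
angles = np.linspace(0.0, 180.0, 24, endpoint=False)
sino = radon(img, theta=angles)

# Iterate: enforce data consistency, then promote gradient sparsity via TV
x = iradon(sino, theta=angles, filter_name='ramp', output_size=128)  # FBP start
for _ in range(10):
    resid = sino - radon(x, theta=angles)            # mismatch in sinogram space
    x = x + 0.5 * iradon(resid, theta=angles, filter_name='ramp', output_size=128)
    x = denoise_tv_chambolle(x, weight=0.05)         # TV step suppresses streaks
```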
Sormunen, Jani J; Klemola, Tero; Vesterinen, Eero J; Vuorinen, Ilppo; Hytönen, Jukka; Hänninen, Jari; Ruohomäki, Kai; Sääksjärvi, Ilari E; Tonteri, Elina; Penttinen, Ritva
2016-02-01
Studies have revealed that Ixodes ricinus (Acari: Ixodidae) have become more abundant and their geographical distribution has extended northwards in some Nordic countries during the past few decades. However, ecological data on tick populations in Finland are sparse. In the current study, I. ricinus abundance, seasonal questing activity, and Borrelia spp. and tick-borne encephalitis virus (TBEV) prevalence were evaluated in a Lyme borreliosis endemic area in Southwest Finland, Seili Island, where a previous study mapping tick densities was conducted 12 years earlier. A total of 1940 ticks were collected from five different biotopes by cloth dragging during May-September 2012. The overall tick density observed was 5.2 ticks/100 m^2 for nymphs and adults. Seasonal questing activity of ticks differed between biotopes and life stages: bimodal occurrences were observed especially for nymphal and adult ticks in forested biotopes, while larvae in pastures exhibited mostly unimodal occurrence. Prevalence of Borrelia and TBEV in ticks was evaluated using conventional and real-time PCR. All samples were negative for TBEV. Borrelia prevalence was 25.0% for adults (n=44) and the minimum infection rate (MIR) was 5.6% for pooled nymph samples (191 samples, 1-14 individuals per sample; 30/191 positive). No Borrelia were detected in pooled larval samples (63 samples, 1-139 individuals per sample). Five species of Borrelia were identified from the samples: B. afzelii, B. burgdorferi s.s., B. garinii, B. valaisiana and B. miyamotoi. In Finland, B. valaisiana and B. miyamotoi have previously been reported from the Åland Islands but not from the mainland or inner archipelago. The results of the present study suggest an increase in I. ricinus abundance on the island. Copyright © 2015 Elsevier GmbH. All rights reserved.
Large-region acoustic source mapping using a movable array and sparse covariance fitting.
Zhao, Shengkui; Tuna, Cagdas; Nguyen, Thi Ngoc Tho; Jones, Douglas L
2017-01-01
Large-region acoustic source mapping is important for city-scale noise monitoring. Approaches using a single-position measurement scheme to scan large regions using small arrays cannot provide clean acoustic source maps, while deploying large arrays spanning the entire region of interest is prohibitively expensive. A multiple-position measurement scheme is applied to scan large regions at multiple spatial positions using a movable array of small size. Based on the multiple-position measurement scheme, a sparse-constrained multiple-position vectorized covariance matrix fitting approach is presented. In the proposed approach, the overall sample covariance matrix of the incoherent virtual array is first estimated using the multiple-position array data and then vectorized using the Khatri-Rao (KR) product. A linear model is then constructed for fitting the vectorized covariance matrix and a sparse-constrained reconstruction algorithm is proposed for recovering source powers from the model. The user parameter settings are discussed. The proposed approach is tested on a 30 m × 40 m region and a 60 m × 40 m region using simulated and measured data. Much cleaner acoustic source maps and lower sound pressure level errors are obtained compared to the beamforming approaches and the previous sparse approach [Zhao, Tuna, Nguyen, and Jones, Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Processing (ICASSP) (2016)].
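The vectorized covariance-fitting model can be sketched for a single measurement position as follows: the sample covariance is vectorized, a Khatri-Rao-structured dictionary maps candidate source powers to that vector, and a nonnegativity-constrained sparse solver recovers the powers. The array geometry, grid, and regularization weight are assumptions for the toy example; the paper's multi-position aggregation and solver are not reproduced.

```python
import numpy as np
from sklearn.linear_model import Lasso

m, grid = 8, 61                                    # sensors, candidate directions
theta = np.linspace(-np.pi / 2, np.pi / 2, grid)
A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(theta)))  # ULA steering matrix

# Simulated snapshots: two sources on grid points 15 and 40, plus noise
rng = np.random.default_rng(1)
S = rng.standard_normal((2, 500)) + 1j * rng.standard_normal((2, 500))
N = 0.1 * (rng.standard_normal((m, 500)) + 1j * rng.standard_normal((m, 500)))
X = A[:, [15, 40]] @ S + N
R = X @ X.conj().T / 500                           # sample covariance

# vec(R) ≈ (A ⊙ conj(A)) p, with ⊙ the column-wise Khatri-Rao product
KR = np.einsum('ik,jk->ijk', A, A.conj()).reshape(m * m, grid)
y = R.reshape(-1)
KR_ri = np.vstack([KR.real, KR.imag])              # stack real/imag for a real solver
y_ri = np.concatenate([y.real, y.imag])
p = Lasso(alpha=0.01, positive=True, max_iter=5000).fit(KR_ri, y_ri).coef_
print(np.sort(np.argsort(p)[-2:]))                 # should recover indices 15 and 40
```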
Markov Chain Monte Carlo Inference of Parametric Dictionaries for Sparse Bayesian Approximations
Chaspari, Theodora; Tsiartas, Andreas; Tsilifis, Panagiotis; Narayanan, Shrikanth
2016-01-01
Parametric dictionaries can increase the ability of sparse representations to meaningfully capture and interpret the underlying signal information, such as encountered in biomedical problems. Given a mapping function from the atom parameter space to the actual atoms, we propose a sparse Bayesian framework for learning the atom parameters, because of its ability to provide full posterior estimates, take uncertainty into account, and generalize to unseen data. Inference is performed with Markov chain Monte Carlo, which uses block sampling to generate the variables of the Bayesian problem. Since the parameterization of dictionary atoms results in posteriors that cannot be analytically computed, we use a Metropolis-Hastings-within-Gibbs framework, according to which variables with closed-form posteriors are generated with the Gibbs sampler, while the remaining ones are generated with Metropolis-Hastings steps from appropriate candidate-generating densities. We further show that the corresponding Markov chain is uniformly ergodic, ensuring its convergence to a stationary distribution independently of the initial state. Results on synthetic data and real biomedical signals indicate that our approach offers advantages in terms of signal reconstruction compared to the previously proposed Steepest Descent and Equiangular Tight Frame methods. This paper demonstrates the ability of Bayesian learning to generate parametric dictionaries that can reliably represent the exemplar data and provides the foundation towards inferring the entire variable set of the sparse approximation problem for signal denoising, adaptation, and other applications. PMID:28649173
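A toy illustration of Metropolis-Hastings-within-Gibbs, far simpler than the dictionary posterior in the paper: for data y ~ N(mu, sigma^2), mu has a closed-form conditional and is Gibbs-sampled, while sigma, treated as having no closed-form posterior, is updated by a random-walk Metropolis step on the log scale.

```python
import numpy as np

rng = np.random.default_rng(0)

def mh_within_gibbs(y, n_iter=2000, step=0.2):
    """Alternate a Gibbs draw for mu with an MH update for sigma."""
    n = len(y)
    mu, sigma = 0.0, 1.0
    draws = []
    for _ in range(n_iter):
        # Gibbs step: with a flat prior, mu | sigma, y ~ N(mean(y), sigma^2/n)
        mu = rng.normal(y.mean(), sigma / np.sqrt(n))
        # MH step: symmetric random walk on log(sigma)
        prop = sigma * np.exp(step * rng.standard_normal())
        loglik = lambda s: -n * np.log(s) - ((y - mu) ** 2).sum() / (2 * s ** 2)
        # log(prop/sigma) is the Jacobian term for the log-scale proposal
        if np.log(rng.uniform()) < loglik(prop) - loglik(sigma) + np.log(prop / sigma):
            sigma = prop
        draws.append((mu, sigma))
    return np.array(draws)

chain = mh_within_gibbs(rng.normal(2.0, 0.5, size=200))
print(chain[500:].mean(axis=0))   # posterior means near (2.0, 0.5)
```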
The Hadley circulation: assessing NCEP/NCAR reanalysis and sparse in-situ estimates
NASA Astrophysics Data System (ADS)
Waliser, D. E.; Shi, Zhixiong; Lanzante, J. R.; Oort, A. H.
We present a comparison of the zonal mean meridional circulations derived from monthly in situ data (i.e., radiosondes and ship reports) and from the NCEP/NCAR reanalysis product. To facilitate the interpretation of the results, a third estimate of the mean meridional circulation is produced by subsampling the reanalysis at the locations where radiosonde and surface ship data are available for the in situ calculation. This third estimate, known as the subsampled estimate, is compared to the complete reanalysis estimate to assess biases in conventional, in situ estimates of the Hadley circulation associated with the sparseness of the data sources (i.e., the radiosonde network). The subsampled estimate is also compared to the in situ estimate to assess the biases introduced into the reanalysis product by the numerical model, the initialization process, and/or indirect data sources such as satellite retrievals. The comparisons suggest that a number of qualitative differences between the in situ and reanalysis estimates are mainly associated with the sparse sampling and simplified interpolation schemes of the in situ estimates. These differences include: (1) a southern Hadley cell that consistently extends up to 200 hPa in the reanalysis, whereas the bulk of the circulation for the in situ and subsampled estimates tends to be confined to the lower half of the troposphere; (2) more well-defined and consistent poleward limits of the Hadley cells in the reanalysis compared to the in situ and subsampled estimates; and (3) considerably less variability in the magnitude and latitudinal extent of the Ferrel cells and the southern polar cell in the reanalysis estimate compared to the in situ and subsampled estimates. Quantitative comparison shows that the subsampled estimate, relative to the reanalysis estimate, produces a stronger northern Hadley cell (~20%), a weaker southern Hadley cell (~20-60%), and weaker Ferrel cells in both hemispheres. These differences stem from poorly measured oceanic regions, which necessitate significant interpolation over broad regions. Moreover, they help to pinpoint specific shortcomings in the present and previous in situ estimates of the Hadley circulation. Comparisons between the subsampled and in situ estimates suggest that the subsampled estimate produces a slightly stronger Hadley circulation in both hemispheres, with relative differences in some seasons as large as 20-30%. These differences suggest that the mean meridional circulation in the NCEP/NCAR reanalysis is more energetic than observations suggest. Examination of ENSO-related changes to the Hadley circulation suggests that the in situ and subsampled estimates significantly overestimate the effects of ENSO on the Hadley circulation due to their reliance on sparsely distributed data. While all three estimates capture the large-scale region of low-level equatorial convergence near the dateline that occurs during El Niño, the in situ and subsampled estimates fail to effectively reproduce the large-scale areas of equatorial mass divergence to the west and east of this convergence area, leading to an overestimate of the effects of ENSO on the zonal mean circulation.
NASA Astrophysics Data System (ADS)
Song, Xiaoning; Feng, Zhen-Hua; Hu, Guosheng; Yang, Xibei; Yang, Jingyu; Qi, Yunsong
2015-09-01
This paper proposes a progressive sparse representation-based classification algorithm using local discrete cosine transform (DCT) evaluation to perform face recognition. Specifically, the sum of the contributions of all training samples of each subject is first taken as the contribution of that subject, and the redundant subject with the smallest contribution to the test sample is iteratively eliminated. Second, the progressive method aims at representing the test sample as a linear combination of all the remaining training samples, by which the representation capability of each training sample is exploited to determine the optimal "nearest neighbors" for the test sample. Third, the transformed DCT evaluation is constructed to measure the similarity between the test sample and each local training sample using cosine distance metrics in the DCT domain. The final goal of the proposed method is to determine an optimal weighted sum of the nearest neighbors obtained under the local correlative degree evaluation, which is approximately equal to the test sample, and we can use this weighted linear combination to perform robust classification. Experimental results conducted on the ORL database of faces (created by the Olivetti Research Laboratory in Cambridge), the FERET face database (managed by the Defense Advanced Research Projects Agency and the National Institute of Standards and Technology), the AR face database (created by Aleix Martinez and Robert Benavente in the Computer Vision Center at U.A.B), and the USPS handwritten digit database (gathered at the Center of Excellence in Document Analysis and Recognition at SUNY Buffalo) demonstrate the effectiveness of the proposed method.
2014-06-17
[Figure residue: panels showing a Wigner distribution, an L-Wigner distribution, and their auto-correlation functions.] Although bilinear or higher-order autocorrelation functions will increase the number of missing samples, the analysis shows that accurate instantaneous frequency estimation can be achieved even if we deal with only a few samples, as long as the auto-correlation function is properly chosen to coincide with
Walrus areas of use in the Chukchi Sea during sparse sea ice cover
Jay, Chadwick V.; Fischbach, Anthony S.; Kochnev, Anatoly A.
2012-01-01
The Pacific walrus Odobenus rosmarus divergens feeds on benthic invertebrates on the continental shelf of the Chukchi and Bering Seas and rests on sea ice between foraging trips. With climate warming, ice-free periods in the Chukchi Sea have increased and are projected to increase further in frequency and duration. We radio-tracked walruses to estimate areas of walrus foraging and occupancy in the Chukchi Sea from June to November of 2008 to 2011, years when sea ice was sparse over the continental shelf in comparison to historical records. The earlier and more extensive sea ice retreat in June to September, and delayed freeze-up of sea ice in October to November, created conditions for walruses to arrive earlier and stay later in the Chukchi Sea than in the past. The lack of sea ice over the continental shelf from September to October caused walruses to forage in nearshore areas instead of offshore areas as in the past. Walruses did not frequent the deep waters of the Arctic Basin when sea ice retreated off the shelf. Walruses foraged in most areas they occupied, and areas of concentrated foraging generally corresponded to regions of high benthic biomass, such as in the northeastern (Hanna Shoal) and southwestern Chukchi Sea. A notable exception was the occurrence of concentrated foraging in a nearshore area of northwestern Alaska that is apparently depauperate in walrus prey. With increasing sea ice loss, it is likely that walruses will increase their use of coastal haul-outs and nearshore foraging areas, with consequences to the population that are yet to be understood.
Sloped Terrain Segmentation for Autonomous Drive Using Sparse 3D Point Cloud
Cho, Seoungjae; Kim, Jonghyun; Ikram, Warda; Cho, Kyungeun; Sim, Sungdae
2014-01-01
A ubiquitous environment for road travel that uses wireless networks requires the minimization of data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous drive. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce nonoverlapping voxels to two dimensions by implementing a lowermost heightmap. The ground area is determined on the basis of the number of voxels in each voxel group. We execute ground segmentation in real time by proposing an approach to minimize the comparison between neighboring voxels. Furthermore, we experimentally verify that ground segmentation can be executed at about 19.31 ms per frame. PMID:25093204
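A minimal sketch of the lowermost-heightmap idea, assuming only NumPy: points are quantized into (x, y) grid cells, the lowest height per cell forms the heightmap, and points within a tolerance of that height are labeled ground. The cell size and tolerance are illustrative; the paper's voxel-group logic and real-time optimizations are not reproduced.

```python
import numpy as np

def ground_mask(points, cell=1.0, height_tol=0.3):
    """Label a point as ground if it lies near the lowest height
    observed in its (x, y) cell (the 'lowermost heightmap')."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)                         # shift indices to start at 0
    shape = ij.max(axis=0) + 1
    flat = ij[:, 0] * shape[1] + ij[:, 1]        # linear cell index per point
    lowest = np.full(shape[0] * shape[1], np.inf)
    np.minimum.at(lowest, flat, points[:, 2])    # per-cell minimum height
    return points[:, 2] - lowest[flat] < height_tol

# Toy cloud: a flat ground plane plus one elevated blob (an obstacle)
rng = np.random.default_rng(0)
ground = np.column_stack([rng.uniform(0, 10, (500, 2)), rng.normal(0, 0.05, 500)])
blob = np.column_stack([rng.uniform(4, 5, (50, 2)), rng.uniform(1, 2, 50)])
pts = np.vstack([ground, blob])
print(ground_mask(pts).sum())   # ≈ 500: the elevated blob points are rejected
```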
Population influences on tornado reports in the United States
Anderson, C.J.; Wikle, C.K.; Zhou, Q.; Royle, J. Andrew
2007-01-01
The number of tornadoes reported in the United States is believed to be less than the actual incidence of tornadoes, especially prior to the 1990s, because tornadoes may be undetectable by human witnesses in sparsely populated areas and areas in which obstructions limit the line of sight. A hierarchical Bayesian model is used to simultaneously correct for population-based sampling bias and estimate tornado density using historical tornado report data. The expected result is that F2-F5 compared with F0-F1 tornado reports would vary less with population density. The results agree with this hypothesis for the following population centers: Atlanta, Georgia; Champaign, Illinois; and Des Moines, Iowa. However, the results indicated just the opposite in Oklahoma. It is hypothesized that the result is explained by the misclassification of tornadoes that were worthy of F2-F5 rating but were classified as F0-F1 tornadoes, thereby artificially decreasing the number of F2-F5 and increasing the number of F0-F1 reports in rural Oklahoma.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levinson, R.A.; Marrs, R.W.; Crockell, F.
1979-06-30
LANDSAT satellite imagery and aerial photography can be used to map areas of altered sandstone associated with roll-front uranium deposits. Image data must be enhanced so that alteration spectral contrasts can be seen, and video image processing is a fast, low-cost, and efficient tool. For LANDSAT data, the 7/4 ratio produces the best enhancement of altered sandstone. The 6/4 ratio is most effective for color infrared aerial photography. Geochemical and mineralogical associations occur in unaltered, altered, and ore roll-front zones. Samples from Pumpkin Buttes show that iron is the primary coloring agent that makes alteration visually detectable. Eh and pH changes associated with the passage of a roll front cause oxidation of magnetite and pyrite to hematite, goethite, and limonite in the host sandstone, thereby producing the alteration. Statistical analyses show that the detectability of geochemical and color zonation in host sands is weakened by soil-forming processes. Alteration can only be mapped in areas of thin soil cover and moderate to sparse vegetative cover.
Jay, Chadwick V.; Grebmeier, Jacqueline M.; Fischbach, Anthony S.
2012-01-01
Arctic species such as the Pacific walrus (Odobenus rosmarus divergens) are facing a rapidly changing environment. Walruses are benthic foragers and may shift their spatial patterns of foraging in response to changes in prey distribution. We used data from satellite radio-tags attached to walruses in 2009-2010 to map walrus foraging locations, with concurrent sampling of benthic infauna, to examine relationships between the distributions of dominant walrus prey and spatial patterns of walrus foraging. Walrus foraging was concentrated offshore in the NE Chukchi Sea and in coastal areas of northwestern Alaska when sea ice was sparse. Walrus foraging areas in August-September were coincident with the biomass of two dominant bivalve taxa (Tellinidae and Nuculidae) and sipunculid worms. Walrus foraging costs associated with increased travel time to higher-biomass food patches from land may be significantly higher than the costs from sea ice haul-outs, and may result in reduced energy stores in walruses. Identifying which resources are selected by walruses and how those resources are distributed in space and time will improve our ability to forecast how walruses might respond to a changing climate.
NASA Astrophysics Data System (ADS)
Bilionis, I.; Koutsourelakis, P. S.
2012-05-01
The present paper proposes an adaptive biasing potential technique for the computation of free energy landscapes. It is motivated by statistical learning arguments and unifies the tasks of biasing the molecular dynamics to escape free energy wells and estimating the free energy function, under the same objective of minimizing the Kullback-Leibler divergence between appropriately selected densities. It offers rigorous convergence diagnostics even though history dependent, non-Markovian dynamics are employed. It makes use of a greedy optimization scheme in order to obtain sparse representations of the free energy function which can be particularly useful in multidimensional cases. It employs embarrassingly parallelizable sampling schemes that are based on adaptive Sequential Monte Carlo and can be readily coupled with legacy molecular dynamics simulators. The sequential nature of the learning and sampling scheme enables the efficient calculation of free energy functions parametrized by the temperature. The characteristics and capabilities of the proposed method are demonstrated in three numerical examples.
Array signal recovery algorithm for a single-RF-channel DBF array
NASA Astrophysics Data System (ADS)
Zhang, Duo; Wu, Wen; Fang, Da Gang
2016-12-01
An array signal recovery algorithm based on sparse signal reconstruction theory is proposed for a single-RF-channel digital beamforming (DBF) array. A single-RF-channel antenna array is a low-cost antenna array in which signals are obtained from all antenna elements by only one microwave digital receiver. The spatially parallel array signals are converted into time-sequence signals, which are then sampled by the system. The proposed algorithm uses these time-sequence samples to recover the original parallel array signals by exploiting the second-order sparse structure of the array signals. Additionally, an optimization method based on the artificial bee colony (ABC) algorithm is proposed to improve the reconstruction performance. Using the proposed algorithm, the motion compensation problem for the single-RF-channel DBF array can be solved effectively, and the angle and Doppler information for the target can be simultaneously estimated. The effectiveness of the proposed algorithms is demonstrated by the results of numerical simulations.
Recording 13C-15N HMQC 2D sparse spectra in solids in 30 s
NASA Astrophysics Data System (ADS)
Kupče, Ēriks; Trébosc, Julien; Perrone, Barbara; Lafon, Olivier; Amoureux, Jean-Paul
2018-03-01
We propose a dipolar HMQC Hadamard-encoded (D-HMQC-Hn) experiment for fast 2D correlations of abundant nuclei in solids. The main limitation of the Hadamard methods resides in the length of the encoding pulses, which results from a compromise between the selectivity and the sensitivity due to losses. For this reason, these methods should mainly be used with sparse spectra, and they profit from the increased separation of the resonances at high magnetic fields. In the case of the D-HMQC-Hn experiments, we give a simple rule that allows directly setting the optimum length of the selective pulses, versus the minimum separation of the resonances in the indirect dimension. The demonstration has been performed on a fully 13C,15N labelled f-MLF sample, and it allowed recording the build-up curves of the 13C-15N cross-peaks within 10 min. However, the method could also be used in the case of less sensitive samples, but with more accumulations.
Ortiz, Andrés; Munilla, Jorge; Álvarez-Illán, Ignacio; Górriz, Juan M; Ramírez, Javier
2015-01-01
Alzheimer's Disease (AD) is the most common neurodegenerative disease in elderly people. Its development has been shown to be closely related to changes in the brain connectivity network and in brain activation patterns, along with structural changes caused by the neurodegenerative process. Methods to infer dependence between brain regions are usually derived from the analysis of covariance between activation levels in the different areas. However, these covariance-based methods are not able to estimate conditional independence between variables to factor out the influence of other regions. Conversely, models based on the inverse covariance, or precision matrix, such as sparse Gaussian graphical models, allow revealing conditional independence between regions by estimating the covariance between two variables given the rest held constant. This paper uses Sparse Inverse Covariance Estimation (SICE) methods to learn undirected graphs in order to derive functional and structural connectivity patterns from Fludeoxyglucose (18F-FDG) Positron Emission Tomography (PET) data and segmented Magnetic Resonance images (MRI), drawn from the ADNI database, for Control, MCI (Mild Cognitive Impairment), and AD subjects. Sparse computation fits perfectly here, as brain regions usually interact with only a few other areas. The models clearly show different metabolic covariation patterns between subject groups, revealing the loss of strong connections in AD and MCI subjects when compared to Controls. Similarly, the variance between GM (Gray Matter) densities of different regions reveals different structural covariation patterns between the groups. Thus, the different connectivity patterns for Controls and AD are used in this paper to select regions of interest in PET and GM images with discriminative power for early AD diagnosis. Finally, functional and structural models are combined to boost the classification accuracy. The results obtained in this work show the usefulness of sparse Gaussian graphical models for revealing functional and structural connectivity patterns. The information provided by the sparse inverse covariance matrices is not only used in an exploratory way; we also propose a method to use it in a discriminative way. Regression coefficients are used to compute reconstruction errors for the different classes, which are then fed to an SVM for classification. Classification experiments performed using 68 Control, 70 AD, and 111 MCI images, assessed by cross-validation, show the effectiveness of the proposed method.
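A hedged sketch of the SICE step using scikit-learn's GraphicalLasso, one standard sparse inverse covariance estimator (the paper's exact estimator and preprocessing may differ): zeros in the estimated precision matrix indicate conditional independence between two regions given all the others. The toy data and regularization strength are illustrative.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Toy data: rows = subjects, columns = regional measurements (e.g. FDG uptake)
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 10))
X[:, 1] += 0.8 * X[:, 0]            # make regions 0 and 1 conditionally dependent

model = GraphicalLasso(alpha=0.1).fit(X)
precision = model.precision_        # sparse inverse covariance matrix
edges = np.abs(precision) > 1e-4    # nonzeros = conditional dependence
np.fill_diagonal(edges, False)
print(np.argwhere(edges))           # recovered edges should include (0, 1)
```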
Varying face occlusion detection and iterative recovery for face recognition
NASA Astrophysics Data System (ADS)
Wang, Meng; Hu, Zhengping; Sun, Zhe; Zhao, Shuhuan; Sun, Mei
2017-05-01
In most sparse representation methods for face recognition (FR), occlusion problems are usually handled by removing the occluded parts of both query samples and training samples before performing recognition. This practice ignores the global features of the facial image and may lead to unsatisfactory results due to the limitations of local features. Considering this drawback, we propose a method called varying occlusion detection and iterative recovery for FR. The main contributions of our method are as follows: (1) to detect an accurate occlusion area in facial images, a combination of image processing and intersection-based clustering is used for occlusion FR; (2) according to the resulting occlusion map, new integrated facial images are recovered iteratively and put into the recognition process; and (3) the effectiveness of our method with respect to recognition accuracy is verified by comparing it with three typical occlusion map detection methods. Experiments show that the proposed method has highly accurate detection and recovery performance and that it outperforms several similar state-of-the-art methods against partial contiguous occlusion.
2010-01-01
Background: Nonparametric Bayesian techniques have been developed recently to extend the sophistication of factor models, allowing one to infer the number of appropriate factors from the observed data. We consider such techniques for sparse factor analysis, with application to gene-expression data from three virus challenge studies. Particular attention is placed on employing the Beta Process (BP), the Indian Buffet Process (IBP), and related sparseness-promoting techniques to infer a proper number of factors. The posterior density function on the model parameters is computed using Gibbs sampling and variational Bayesian (VB) analysis. Results: Time-evolving gene-expression data are considered for respiratory syncytial virus (RSV), Rhino virus, and influenza, using blood samples from healthy human subjects. These data were acquired in three challenge studies, each executed after receiving institutional review board (IRB) approval from Duke University. Comparisons are made between several alternative means of performing nonparametric factor analysis on these data, with comparisons as well to sparse-PCA and Penalized Matrix Decomposition (PMD), closely related non-Bayesian approaches. Conclusions: Applying the Beta Process to the factor scores, or to the singular values of a pseudo-SVD construction, the proposed algorithms infer the number of factors in gene-expression data. For real data the "true" number of factors is unknown; in our simulations we consider a range of noise variances, and the proposed Bayesian models inferred the number of factors accurately relative to other methods in the literature, such as sparse-PCA and PMD. We have also identified a "pan-viral" factor of importance for each of the three viruses considered in this study. We have identified a set of genes associated with this pan-viral factor, of interest for early detection of such viruses based upon the host response, as quantified via gene-expression data. PMID:21062443
NASA Astrophysics Data System (ADS)
Hsu, Charles; Viazanko, Michael; O'Looney, Jimmy; Szu, Harold
2009-04-01
Modularity Biometric System (MBS) is an approach to support AiTR of cooperative and/or non-cooperative standoff biometrics in area persistent surveillance. Advanced active and passive EO/IR and RF sensor suites are not considered here, nor do we consider the ROC (PD vs. FAR) versus the standoff POT in this paper. Our goal is to identify the two dozen "most wanted" (MW) individuals, further separated ad hoc into a woman MW class and a man MW class, given a sparse archival database of frontal faces, by means of various new instantaneous inputs called probing faces. We present an advanced algorithm: a mini-Max classifier, a sparse-sample realization of the Cramer-Rao Fisher bound of the maximum likelihood classifier, which minimizes the dispersions within the same woman classes and maximizes the separation among different man-woman classes, based on the simple feature space of MIT Pentland eigenfaces. The original aspect consists of a modular structured design approach at the system level, with multi-level architectures, multiple computing paradigms, and adaptable/evolvable techniques, allowing a scalable structure in terms of biometric algorithms, identification quality, sensors, database complexity, database integration, and component heterogeneity. MBS consists of a number of biometric technologies including fingerprints, vein maps, voice, and face recognition with innovative DSP algorithms and their hardware implementations, such as Field Programmable Gate Arrays (FPGAs). Biometric technologies and the composed modularity biometric system are significant for governmental agencies, enterprises, banks, and all other organizations that protect people or control access to critical resources.
WHAT IS NEW IN RURAL EDUCATION--NFIRE.
ERIC Educational Resources Information Center
KRAHMER, EDWARD; STURGES, A.W.
RURAL EDUCATION IS DEFINED AS THAT WHICH PREVAILS IN SPARSELY POPULATED AREAS AND SMALL RURAL COMMUNITIES (LESS THAN 2500 POPULATION). FACTORS USUALLY FOUND WITH SUCH SCHOOL OFFERINGS, INCLUDE SPARSITY OF POPULATION, SMALL SCHOOL ENROLLMENTS, ISOLATION FROM CULTURAL EVENTS, AND REMOTENESS FROM EDUCATIONAL OPPORTUNITIES. SUCH FACTORS AS THESE HELP…
Mapping impervious surfaces using object-oriented classification in a semiarid urban region
USDA-ARS?s Scientific Manuscript database
Mapping the expansion of impervious surfaces in urbanizing areas is important for monitoring and understanding the hydrologic impacts of land development. The most common approach using spectral vegetation indices, however, is difficult in arid and semiarid environments where vegetation is sparse an...
Assessment of Mercury in Fish Tissue from Select Lakes of Northeastern Oregon
A fish tissue study was conducted in five northeastern Oregon reservoirs to evaluate mercury concentrations in an area where elevated atmospheric mercury deposition had been predicted by a national EPA model, but where tissue data were sparse. The study targeted resident predator...
Reliable positioning in a sparse GPS network, eastern Ontario
NASA Astrophysics Data System (ADS)
Samadi Alinia, H.; Tiampo, K.; Atkinson, G. M.
2013-12-01
Canada hosts two regions that are prone to large earthquakes: western British Columbia, and the St. Lawrence River region in eastern Canada. Although eastern Ontario is not as seismically active as other areas of eastern Canada, such as the Charlevoix/Ottawa Valley seismic zone, it experiences ongoing moderate seismicity. In historical times, potentially damaging events have occurred in New York State (Attica, 1929, M=5.7; Plattsburgh, 2002, M=5.0), north-central Ontario (Temiskaming, 1935, M=6.2; North Bay, 2000, M=5.0), eastern Ontario (Cornwall, 1944, M=5.8), Georgian Bay (2005, MN=4.3), and western Quebec (Val-des-Bois, 2010, M=5.0, MN=5.8). In eastern Canada, the analysis of detailed, high-precision measurements of surface deformation is a key component in our efforts to better characterize the associated seismic hazard. Data from precise, continuous GPS stations are necessary to adequately characterize surface velocities, from which patterns and rates of stress accumulation on faults can be estimated (Mazzotti and Adams, 2005; Mazzotti et al., 2005). Monitoring these displacements requires high-accuracy GPS positioning techniques. Detailed strain measurements can determine whether the regional strain everywhere is commensurate with a large event occurring every few hundred years anywhere within this general area, or whether large earthquakes are limited to specific areas (Adams and Halchuck, 2003; Mazzotti and Adams, 2005). In many parts of southeastern Ontario and western Québec, GPS stations are distributed quite sparsely, with spacings of approximately 100 km or more. The challenge is to provide accurate solutions for these sparse networks with an approach that is capable of achieving high-accuracy positioning. Here, various reduction techniques are applied to a sparse network installed with the Southern Ontario Seismic Network in eastern Ontario. Recent developments include the implementation of precise point positioning processing on acquired raw GPS data, based on precise GPS orbit and clock data products with centimeter accuracy computed beforehand. The analysis of 1 Hz GPS data is conducted in order to find the most reliable regional network from eight stations (STCO, TYNO, ACTO, INUQ, IVKQ, KLBO, MATQ and ALGO) that cover the study area in eastern Ontario. The estimated parameters are the total number of ambiguities, the number of resolved ambiguities, the a posteriori rms of each baseline, and the coordinates of each station together with their differences from the known coordinates. The positioning accuracy, the corrections and the accuracy of interpolated corrections, and the initialization time required for precise positioning are presented for the various applications.
Lan, Ti-Yen; Wierman, Jennifer L.; Tate, Mark W.; Philipp, Hugh T.; Elser, Veit
2017-01-01
Recently, there has been a growing interest in adapting serial microcrystallography (SMX) experiments to existing storage ring (SR) sources. For very small crystals, however, radiation damage occurs before sufficient numbers of photons are diffracted to determine the orientation of the crystal. The challenge is to merge data from a large number of such ‘sparse’ frames in order to measure the full reciprocal space intensity. To simulate sparse frames, a dataset was collected from a large lysozyme crystal illuminated by a dim X-ray source. The crystal was continuously rotated about two orthogonal axes to sample a subset of the rotation space. With the EMC algorithm [expand–maximize–compress; Loh & Elser (2009). Phys. Rev. E, 80, 026705], it is shown that the diffracted intensity of the crystal can still be reconstructed even without knowledge of the orientation of the crystal in any sparse frame. Moreover, parallel computation implementations were designed to considerably improve the time and memory scaling of the algorithm. The results show that EMC-based SMX experiments should be feasible at SR sources. PMID:28808431
NASA Astrophysics Data System (ADS)
Bentaieb, Samia; Ouamri, Abdelaziz; Nait-Ali, Amine; Keche, Mokhtar
2018-01-01
We propose and evaluate a three-dimensional (3D) face recognition approach that applies the speeded-up robust feature (SURF) algorithm to the depth-derived shape index map, under real-world conditions, using only a single gallery sample for each subject. First, the 3D scans are preprocessed, then SURF is applied on the shape index map to find interest points and their descriptors. Each 3D face scan is represented by keypoint descriptors, and a large dictionary is built from all the gallery descriptors. At the recognition step, descriptors of a probe face scan are sparsely represented over the dictionary. A multitask sparse representation classification is used to determine the identity of each probe face. The feasibility of the approach, which uses the SURF algorithm on the shape index map for face identification/authentication, is checked through an experimental investigation conducted on the Bosphorus, University of Milano Bicocca, and CASIA 3D datasets. It achieves an overall rank-one recognition rate of 97.75%, 80.85%, and 95.12%, respectively, on these datasets.
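As an illustration of the sparse-representation classification step, here is a hedged Python sketch using scikit-learn's orthogonal matching pursuit; the function and variable names are hypothetical, and the paper's actual solver and descriptor pipeline may differ.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_identity(dictionary, atom_labels, probe_descriptors, n_nonzero=10):
    """Sparse-representation classification: each probe descriptor is coded
    over the gallery dictionary (columns = atoms, one label per atom);
    the identity is the class with the smallest summed reconstruction residual."""
    classes = np.unique(atom_labels)
    residuals = np.zeros(len(classes))
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    for d in probe_descriptors:                 # one descriptor per keypoint
        omp.fit(dictionary, d)                  # solve d ~ dictionary @ coef, coef sparse
        coef = omp.coef_
        for i, c in enumerate(classes):         # class-wise reconstruction residual
            mask = atom_labels == c
            residuals[i] += np.linalg.norm(d - dictionary[:, mask] @ coef[mask])
    return classes[np.argmin(residuals)]
```

Atoms are assumed to be unit-normalized gallery descriptors; `dictionary` has shape (descriptor_dim, n_atoms) and `atom_labels` has one subject label per atom.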
Discriminative Bayesian Dictionary Learning for Classification.
Akhtar, Naveed; Shafait, Faisal; Mian, Ajmal
2016-12-01
We propose a Bayesian approach to learn discriminative dictionaries for sparse representation of data. The proposed approach infers probability distributions over the atoms of a discriminative dictionary using a finite approximation of the Beta Process. It also computes sets of Bernoulli distributions that associate class labels with the learned dictionary atoms. This association signifies the selection probabilities of the dictionary atoms in the expansion of class-specific data. Furthermore, the non-parametric character of the proposed approach allows it to infer the correct size of the dictionary. We exploit the aforementioned Bernoulli distributions in separately learning a linear classifier. The classifier uses the same hierarchical Bayesian model as the dictionary, which we present along with the analytical solution for inference by Gibbs sampling. For classification, a test instance is first sparsely encoded over the learned dictionary and the codes are fed to the classifier. We performed experiments for face and action recognition, and for object and scene-category classification, using five public datasets, and compared the results with state-of-the-art discriminative sparse representation approaches. Experiments show that the proposed Bayesian approach consistently outperforms the existing approaches.
Structure-Preserving Color Normalization and Sparse Stain Separation for Histological Images.
Vahadane, Abhishek; Peng, Tingying; Sethi, Amit; Albarqouni, Shadi; Wang, Lichao; Baust, Maximilian; Steiger, Katja; Schlitter, Anna Melissa; Esposito, Irene; Navab, Nassir
2016-08-01
Staining and scanning of tissue samples for microscopic examination is fraught with undesirable color variations arising from differences in raw materials and manufacturing techniques of stain vendors, staining protocols of labs, and color responses of digital scanners. When comparing tissue samples, color normalization and stain separation of the tissue images can be helpful for both pathologists and software. Techniques that are used for natural images fail to utilize the structural properties of stained tissue samples and produce undesirable color distortions. Three physical phenomena define the tissue structure: stain concentrations cannot be negative; tissue samples are stained with only a few stains; and most tissue regions are characterized by at most one effective stain. We model these phenomena by first decomposing images, in an unsupervised manner, into stain density maps that are sparse and non-negative. For a given image, we combine its stain density maps with the stain color basis of a pathologist-preferred target image, thus altering only its color while preserving its structure as described by the maps. Stain density correlation with ground truth and preference by pathologists were higher for images normalized using our method when compared to other alternatives. We also propose a computationally faster extension of this technique for large whole-slide images that selects an appropriate patch sample instead of using the entire image to compute the stain color basis.
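The sparse, non-negative decomposition at the heart of the method can be sketched with a generic NMF in optical-density space; this is a simplified stand-in (scikit-learn's NMF, no explicit sparsity penalty) for the authors' sparse stain separation, with illustrative parameters.

```python
import numpy as np
from sklearn.decomposition import NMF

def separate_stains(rgb, n_stains=2, eps=1e-6):
    """Decompose an (H, W, 3) uint8 RGB tissue image into a stain color basis W
    and per-pixel stain density maps H, working in optical density (Beer-Lambert):
    OD = -log(I / I0)."""
    I = rgb.reshape(-1, 3).astype(float) / 255.0
    od = -np.log(np.clip(I, eps, 1.0))            # pixels x 3, non-negative
    model = NMF(n_components=n_stains, init="nndsvda", max_iter=500)
    H = model.fit_transform(od)                   # per-pixel stain densities
    W = model.components_                         # stain color basis (n_stains x 3)
    return W, H.reshape(rgb.shape[0], rgb.shape[1], n_stains)
```

Color normalization would then recompose the density maps H with the stain color basis W of a pathologist-preferred target image, altering color while preserving the structure encoded in H.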
Okimoto, Gordon; Zeinalzadeh, Ashkan; Wenska, Tom; Loomis, Michael; Nation, James B; Fabre, Tiphaine; Tiirikainen, Maarit; Hernandez, Brenda; Chan, Owen; Wong, Linda; Kwee, Sandi
2016-01-01
Technological advances enable the cost-effective acquisition of Multi-Modal Data Sets (MMDS) composed of measurements for multiple, high-dimensional data types obtained from a common set of bio-samples. The joint analysis of the data matrices associated with the different data types of a MMDS should provide a more focused view of the biology underlying complex diseases such as cancer that would not be apparent from the analysis of a single data type alone. As multi-modal data rapidly accumulate in research laboratories and public databases such as The Cancer Genome Atlas (TCGA), the translation of such data into clinically actionable knowledge has been slowed by the lack of computational tools capable of analyzing MMDSs. Here, we describe the Joint Analysis of Many Matrices by ITeration (JAMMIT) algorithm, which jointly analyzes the data matrices of a MMDS using sparse matrix approximations of rank 1. The JAMMIT algorithm jointly approximates an arbitrary number of data matrices by rank-1 outer products composed of "sparse" left-singular vectors (eigen-arrays) that are unique to each matrix and a right-singular vector (eigen-signal) that is common to all the matrices. The non-zero coefficients of the eigen-arrays identify small subsets of variables for each data type (i.e., signatures) that, in aggregate or individually, best explain a dominant eigen-signal defined on the columns of the data matrices. The approximation is specified by a single "sparsity" parameter that is selected based on the false discovery rate estimated by permutation testing. Multiple signals of interest in a given MMDS are sequentially detected and modeled by iterating JAMMIT on "residual" data matrices that result from a given sparse approximation. We show that JAMMIT outperforms other joint analysis algorithms in the detection of multiple signatures embedded in simulated MMDS. On real multimodal data for ovarian and liver cancer we show that JAMMIT identified multi-modal signatures that were clinically informative and enriched for cancer-related biology. Sparse matrix approximations of rank 1 thus provide a simple yet effective means of jointly reducing multiple, big data types to a small subset of variables that characterize important clinical and/or biological attributes of the bio-samples from which the data were acquired.
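The rank-1 joint approximation can be sketched as an alternating update with soft-thresholding; this toy version conveys the idea of sparse eigen-arrays sharing one eigen-signal, but the sparsity parameter, iteration scheme, and FDR-based selection of the real JAMMIT are not reproduced.

```python
import numpy as np

def soft(x, lam):
    """Soft-thresholding operator used to sparsify the left vectors."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def joint_sparse_rank1(mats, lam=0.1, n_iter=100):
    """Alternating rank-1 approximation of several matrices (each p_k x n, sharing
    columns) with one common right-singular 'eigen-signal' v; each left vector u_k
    is soft-thresholded to select a sparse variable signature per data type."""
    n = mats[0].shape[1]
    v = np.ones(n) / np.sqrt(n)
    for _ in range(n_iter):
        us = [soft(X @ v, lam) for X in mats]         # sparse eigen-arrays
        v = sum(X.T @ u for X, u in zip(mats, us))    # shared eigen-signal
        v /= np.linalg.norm(v) + 1e-12
    return us, v
```

Iterating on the residuals X_k - u_k v^T would then detect further signals, mirroring the sequential detection described above.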
Local sparse bump hunting reveals molecular heterogeneity of colon tumors
Dazard, Jean-Eudes; Rao, J. Sunil; Markowitz, Sanford
2013-01-01
The question of molecular heterogeneity and of tumoral phenotype in cancer remains unresolved. To understand the underlying molecular basis of this phenomenon, we analyzed genome-wide expression data of colon cancer metastasis samples, as these tumors are the most advanced and hence would be anticipated to be the most likely heterogeneous group of tumors, potentially exhibiting the maximum amount of genetic heterogeneity. Casting a statistical net around such a complex problem proves difficult because of the high dimensionality and multi-collinearity of the gene expression space, combined with the fact that genes act in concert with one another and that not all genes surveyed might be involved. We devise a strategy to identify distinct subgroups of samples and determine the genetic/molecular signature that defines them. This involves use of the local sparse bump hunting algorithm, which provides a more appropriate and biologically faithful transformed space within which to search for bumps. In addition, thanks to the variable selection feature of the algorithm, we derived a novel sparse gene expression signature, which appears to divide all colon cancer patients into two populations: a population whose expression pattern can be molecularly encompassed within the bump, and an outlier population that cannot be. Although all patients within any given stage of the disease, including the metastatic group, appear clinically homogeneous, our procedure revealed two subgroups in each stage with distinct genetic/molecular profiles. We also discuss implications of such a finding in terms of early detection, diagnosis and prognosis. PMID:22052459
Sparse Representation with Spatio-Temporal Online Dictionary Learning for Efficient Video Coding.
Dai, Wenrui; Shen, Yangmei; Tang, Xin; Zou, Junni; Xiong, Hongkai; Chen, Chang Wen
2016-07-27
Classical dictionary learning methods for video coding suffer from high computational complexity and impaired coding efficiency because they disregard the underlying distribution of the data. This paper proposes a spatio-temporal online dictionary learning (STOL) algorithm to speed up the convergence rate of dictionary learning with a guarantee on approximation error. The proposed algorithm incorporates stochastic gradient descent to form a dictionary of pairs of 3-D low-frequency and high-frequency spatio-temporal volumes. In each iteration of the learning process, it randomly selects one sample volume and updates the atoms of the dictionary by minimizing the expected cost, rather than optimizing the empirical cost over the complete training data as batch learning methods, e.g., K-SVD, do. Since the selected volumes are assumed to be i.i.d. samples from the underlying distribution, the decomposition coefficients obtained from the trained dictionary are well suited for sparse representation. Theoretically, it is proved that the proposed STOL achieves better approximation for sparse representation than K-SVD and maintains both structured sparsity and hierarchical sparsity. It is shown to outperform batch gradient descent methods (K-SVD) in terms of convergence speed and computational complexity, and its upper bound on prediction error is asymptotically equal to the training error. With lower computational complexity, extensive experiments validate that the STOL-based coding scheme achieves performance improvements over H.264/AVC and HEVC, as well as existing super-resolution based methods, in rate-distortion performance and visual quality.
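The one-sample-at-a-time flavor of online dictionary learning can be sketched as follows; this is a generic stochastic-gradient sketch with matching-pursuit coding, not the STOL algorithm itself, and the step size, sparsity level, and atom count are illustrative.

```python
import numpy as np

def sparse_code(D, x, k):
    """Greedy (matching-pursuit style) k-sparse code of x over dictionary D."""
    r, coef = x.copy(), np.zeros(D.shape[1])
    for _ in range(k):
        j = np.argmax(np.abs(D.T @ r))       # most correlated atom
        coef[j] += D[:, j] @ r
        r = x - D @ coef                     # refresh residual
    return coef

def online_dictionary(X, n_atoms=32, k=4, lr=0.1, n_steps=2000, seed=0):
    """Stochastic-gradient dictionary learning: one random sample per step,
    atoms nudged to reduce the residual, then renormalized to unit length."""
    rng = np.random.default_rng(seed)
    D = rng.normal(size=(X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_steps):
        x = X[:, rng.integers(X.shape[1])]   # X holds one training vector per column
        a = sparse_code(D, x, k)
        D += lr * np.outer(x - D @ a, a)     # gradient step on ||x - D a||^2
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D
```

The contrast with batch methods such as K-SVD is visible in the loop: each update touches a single sample, so the cost per step is independent of the training set size.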
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Chao; Pouransari, Hadi; Rajamanickam, Sivasankaran
We present a parallel hierarchical solver for general sparse linear systems on distributed-memory machines. For large-scale problems, this fully algebraic algorithm is faster and more memory-efficient than sparse direct solvers because it exploits the low-rank structure of fill-in blocks. Depending on the accuracy of low-rank approximations, the hierarchical solver can be used either as a direct solver or as a preconditioner. The parallel algorithm is based on data decomposition and requires only local communication for updating boundary data on every processor. Moreover, the computation-to-communication ratio of the parallel algorithm is approximately the volume-to-surface-area ratio of the subdomain owned by every processor. We also provide various numerical results to demonstrate the versatility and scalability of the parallel algorithm.
Lesot, Philippe; Kazimierczuk, Krzysztof; Trébosc, Julien; Amoureux, Jean-Paul; Lafon, Olivier
2015-11-01
Unique information about the atom-level structure and dynamics of solids and mesophases can be obtained by the use of multidimensional nuclear magnetic resonance (NMR) experiments. Nevertheless, the acquisition of these experiments often requires long acquisition times. We review here alternative sampling methods, which have been proposed to circumvent this issue in the case of solids and mesophases. Compared to the spectra of solutions, those of solids and mesophases present some specificities because they usually display lower signal-to-noise ratios, non-Lorentzian line shapes, lower spectral resolutions and wider spectral widths. We highlight herein the advantages and limitations of these alternative sampling methods. A first route to accelerate the acquisition time of multidimensional NMR spectra consists in the use of sparse sampling schemes, such as truncated, radial or random sampling ones. These sparsely sampled datasets are generally processed by reconstruction methods differing from the Discrete Fourier Transform (DFT). A host of non-DFT methods have been applied for solids and mesophases, including the G-matrix Fourier transform, the linear least-square procedures, the covariance transform, the maximum entropy and the compressed sensing. A second class of alternative sampling consists in departing from the Jeener paradigm for multidimensional NMR experiments. These non-Jeener methods include Hadamard spectroscopy as well as spatial or orientational encoding of the evolution frequencies. The increasing number of high field NMR magnets and the development of techniques to enhance NMR sensitivity will contribute to widen the use of these alternative sampling methods for the study of solids and mesophases in the coming years. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Saha, Abhijit; Vivas, A. Katherina
2017-12-01
Ongoing and future surveys with repeat imaging in multiple bands are producing (or will produce) time-spaced measurements of brightness, resulting in the identification of large numbers of variable sources in the sky. A large fraction of these are periodic variables: compilations of these are of scientific interest for a variety of purposes. Unavoidably, the data sets from many such surveys not only have sparse sampling, but also have embedded frequencies in the observing cadence that beat against the natural periodicities of any object under investigation. Such limitations can make period determination ambiguous and uncertain. For multiband data sets with asynchronous measurements in multiple passbands, we wish to maximally use the information on periodicity in a manner that is agnostic of differences in the light-curve shapes across the different channels. Given large volumes of data, computational efficiency is also at a premium. This paper develops and presents a computationally economical method for determining periodicity that combines the results from two different classes of period-determination algorithms. The underlying principles are illustrated through examples. The effectiveness of this approach for combining asynchronously sampled measurements in multiple observables that share an underlying fundamental frequency is also demonstrated.
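One simple way to pool periodicity information across asynchronously sampled bands is to sum normalized periodogram powers on a common frequency grid; the sketch below (assuming astropy is available) illustrates that idea, though the paper combines two different classes of period-determination algorithms rather than one periodogram.

```python
import numpy as np
from astropy.timeseries import LombScargle

def combined_periodogram(bands, frequencies):
    """Sum normalized Lomb-Scargle power over bands that share a fundamental
    frequency but may differ in light-curve shape and sampling cadence."""
    total = np.zeros_like(frequencies)
    for t, y, dy in bands:                        # asynchronous per-band (time, mag, err)
        total += LombScargle(t, y, dy).power(frequencies)
    return total

# Illustrative usage (hypothetical data arrays):
# freq = np.linspace(0.05, 5.0, 20000)            # trial frequencies, cycles/day
# bands = [(t_g, mag_g, err_g), (t_r, mag_r, err_r)]
# best_period = 1.0 / freq[np.argmax(combined_periodogram(bands, freq))]
```

Because each band contributes an independently normalized power, a light curve's shape can differ arbitrarily between channels without biasing the shared peak.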
MRM-Lasso: A Sparse Multiview Feature Selection Method via Low-Rank Analysis.
Yang, Wanqi; Gao, Yang; Shi, Yinghuan; Cao, Longbing
2015-11-01
Multiview data arise in many applications, such as video understanding, image classification, and social media. However, when the data dimension increases dramatically, removing redundant features in multiview feature selection becomes important but very challenging. In this paper, we propose a novel feature selection algorithm, multiview rank minimization-based Lasso (MRM-Lasso), which jointly utilizes Lasso for sparse feature selection and rank minimization for learning relevant patterns across views. Instead of simply integrating multiple Lasso models at the view level, we focus on sample-level performance (sample significance) and introduce pattern-specific weights into MRM-Lasso. The weights are utilized to measure the contribution of each sample to the labels in the current view. In addition, the latent correlation across different views is successfully captured by learning a low-rank matrix consisting of pattern-specific weights. The alternating direction method of multipliers is applied to optimize the proposed MRM-Lasso. Experiments on four real-life data sets show that features selected by MRM-Lasso have better multiview classification performance than the baselines. Moreover, pattern-specific weights are shown to be significant for learning about multiview data, compared with view-specific weights.
Pisharady, Pramod Kumar; Duarte-Carvajalino, Julio M; Sotiropoulos, Stamatios N; Sapiro, Guillermo; Lenglet, Christophe
2017-01-01
The RubiX [1] algorithm combines the high SNR characteristics of low resolution data with the high spatial specificity of high resolution data to extract microstructural tissue parameters from diffusion MRI. In this paper we focus on estimating crossing fiber orientations and introduce sparsity to the RubiX algorithm, making it suitable for reconstruction from compressed (under-sampled) data. We propose a sparse Bayesian algorithm for estimation of fiber orientations and volume fractions from compressed diffusion MRI. The data at high resolution are modeled using a parametric spherical deconvolution approach and represented using a dictionary created from the exponential decay components along different possible directions. Volume fractions of fibers along these orientations define the dictionary weights. The data at low resolution are modeled using a spatial partial volume representation. The proposed dictionary representation and sparsity priors consider the dependence between fiber orientations and the spatial redundancy in data representation. Our method exploits the sparsity of fiber orientations, thereby facilitating inference from under-sampled data. Experimental results show improved accuracy and decreased uncertainty in fiber orientation estimates. For under-sampled data, the proposed method is also shown to produce more robust estimates of fiber orientations. PMID:28845484
PERIOD ESTIMATION FOR SPARSELY SAMPLED QUASI-PERIODIC LIGHT CURVES APPLIED TO MIRAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Shiyuan; Huang, Jianhua Z.; Long, James
2016-12-01
We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal in period, we implement a hybrid method that applies the quasi-Newton algorithm for the Gaussian process parameters and searches the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period–luminosity relations.
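A compact sketch of the grid-search component: for each trial period, evaluate the log marginal likelihood of a Gaussian process with a periodic kernel and keep the best period. This omits the paper's sinusoidal basis, priors, and quasi-Newton refinement; the kernel hyperparameters here are fixed and illustrative.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gp_loglik(t, y, period, amp=1.0, length=1.0, noise=0.1):
    """Log marginal likelihood of y ~ N(0, K + noise^2 I) under a periodic kernel.
    y should be mean-subtracted magnitudes; t the observation times."""
    dt = t[:, None] - t[None, :]
    K = amp**2 * np.exp(-2.0 * np.sin(np.pi * dt / period) ** 2 / length**2)
    K[np.diag_indices_from(K)] += noise**2
    L, low = cho_factor(K, lower=True)
    alpha = cho_solve((L, low), y)
    # -0.5 y^T K^{-1} y - 0.5 log|K| - (n/2) log(2 pi)
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * len(y) * np.log(2 * np.pi)

def best_period(t, y, grid):
    """Dense grid search over candidate periods (the likelihood is multimodal)."""
    return grid[np.argmax([gp_loglik(t, y, p) for p in grid])]
```

The dense grid guards against the multimodality noted above; gradient-based optimization alone would frequently lock onto an alias of the true period.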
Rank preserving sparse learning for Kinect based scene classification.
Tao, Dapeng; Jin, Lianwen; Yang, Zhao; Li, Xuelong
2013-10-01
With the rapid development of RGB-D sensors and the rapidly growing popularity of the low-cost Microsoft Kinect sensor, scene classification, a hard yet important problem in computer vision, has gained a resurgence of interest recently. This is because the depth information provided by the Kinect sensor opens an effective and innovative way for scene classification. In this paper, we propose a new scheme for scene classification, which applies locality-constrained linear coding (LLC) to local SIFT features for representing the RGB-D samples and classifies scenes through the cooperation between a new rank preserving sparse learning (RPSL) based dimension reduction and a simple classification method. RPSL considers four aspects: 1) it preserves the rank order information of the within-class samples in a local patch; 2) it maximizes the margin between the between-class samples on the local patch; 3) the L1-norm penalty is introduced to obtain the parsimony property; and 4) it models classification error minimization by utilizing least-squares error minimization. Experiments are conducted on the NYU Depth V1 dataset and demonstrate the robustness and effectiveness of RPSL for scene classification.
Rainfall interception, and its modeling, in Pine and Eucalypt stands in Portugal
NASA Astrophysics Data System (ADS)
de Coninck, H. L.; Keizer, J. J.; Coelho, C. O. A.; van Dijck, S. J. E.; Jetten, V. G.; Warmerdam, P. M. M.; Ferreira, A. J. D.; Boulet, A. K.
2003-04-01
Within the framework of the EU-funded CLIMED project (ICA3-2000-30005), concerning the water management implications of foreseeable climate and land-use changes in central Portugal and northern Africa, the event-based Limburg Soil Erosion Model (LISEM; www.geog.uu.nl/lisem) is intended to provide further insight into water yields, peak flow and timing under possible future rainfall regimes. In the Portuguese study area, LISEM is being applied to two small (<1 km²) catchments with contrasting land covers, dominated by Pinus pinaster Ait. and Eucalyptus globulus Labill. tree stands, respectively. In LISEM, cumulative interception is modelled using the empirical formula of Ashton (1979), i.e. as a function of vegetation cover and canopy storage capacity, which in turn is estimated from the Leaf Area Index using the von Hoyningen-Huene (1981) formula. Not only may the appropriateness of the LISEM interception module for forested areas be questioned, but its (optional) substitution in LISEM by a more process-based model like that of Rutter would also be more in line with LISEM's overall model structure. The main aims of this study are to assess the suitability of (1) the Ashton formula and (2) the sparse variants of the Gash and Rutter interception models for modelling rainfall interception measurements carried out in a Pinus pinaster Ait. stand as well as a Eucalyptus globulus Labill. stand. Unlike the bulk of published studies on forest interception, the experimental set-up structures the sampling space into below-canopy areas and gaps. The below-canopy sampling space is further divided into two classes on the basis of dendrometric data from a prior 20x20 m inventory. The two stands are equipped with 15 below-canopy and 5 gap rainfall collectors, 3 of which are automated tipping-bucket gauges. Stemflow is measured for 10 trees per stand, including 2 trees with automated tipping buckets (0.5 l/tip). Between November 2002 and the present, 31 rainfall events totaling about 850 mm were recorded. Interestingly, these preliminary results reveal that below-canopy rainfall may exceed gap rainfall. This phenomenon can be explained by non-vertical rainfall, which increases the probability of droplets hitting the tree canopy instead of the forest floor. If further measurements confirm that this occurs regularly, the suitability of not only the LISEM interception module but also the sparse Rutter and Gash models will, at least conceptually, be in doubt.
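For reference, here is a minimal sketch of per-event interception loss under the 'sparse' Gash analytical model, neglecting trunk storage and stemflow terms; the parameter values are illustrative, not fitted to the stands described above.

```python
import numpy as np

def sparse_gash_interception(P, c, S, E_over_R):
    """Per-event interception loss (mm) under the sparse Gash (1995) model,
    neglecting trunk storage. P: gross rainfall per event (mm); c: canopy cover
    fraction; S: canopy storage per unit ground area (mm); E_over_R: mean
    wet-canopy evaporation rate per unit cover divided by mean rainfall rate."""
    Sc = S / c                                       # storage per unit cover area
    Pg = -(Sc / E_over_R) * np.log(1.0 - E_over_R)   # rainfall needed to saturate canopy
    P = np.asarray(P, dtype=float)
    small = P < Pg                                   # canopy never saturates
    return np.where(small, c * P, c * (Pg + E_over_R * (P - Pg)))

events = [2.0, 8.0, 25.0]                            # mm, illustrative storms
print(sparse_gash_interception(events, c=0.6, S=1.2, E_over_R=0.15))
```

The 'sparse' variant scales storage and evaporation by the cover fraction c, which is exactly what makes it a candidate for the gap-structured sampling design described above.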
A Study of the Feasibility of Vocational Modules.
ERIC Educational Resources Information Center
Platero, Dillon; And Others
Educational consultants, town residents, Navajo tribal chapters and councils, the Bureau of Indian Affairs, the Navajo Division of Education, and local public school districts in Navajo, New Mexico, worked together to design an effective vocational program for an unskilled labor force sparsely settled within a large geographic area. The concept of…
Rural Inservice Education: Staples Teacher Center Style.
ERIC Educational Resources Information Center
Krueger, Rick
In its two-year existence, the federally funded Staples Teacher Center (STC) in Minnesota has had a significant impact on improving classroom instruction and staff development activities in a rural setting, proving that teacher centers are a most effective delivery system for inservice education in sparsely populated areas. Services are rendered…
Poverty and Rural Schools. Research Brief
ERIC Educational Resources Information Center
Johnston, Howard
2009-01-01
Impoverished populations and schools in rural areas face special challenges that are different from other settings. Among these are the distances from social services, the sparse availability of assistance programs, and the shortage of resources to support educational programs and student learning. Rural schools, do, however, have assets that can…
ERIC Educational Resources Information Center
de Guibert, Clement; Maumet, Camille; Jannin, Pierre; Ferre, Jean-Christophe; Treguier, Catherine; Barillot, Christian; Le Rumeur, Elisabeth; Allaire, Catherine; Biraben, Arnaud
2011-01-01
Atypical functional lateralization and specialization for language have been proposed to account for developmental language disorders, yet results from functional neuroimaging studies are sparse and inconsistent. This functional magnetic resonance imaging study compared children with a specific subtype of specific language impairment affecting…
Industrialization in the Rural Southeast: The Role of Education.
ERIC Educational Resources Information Center
Fratoe, Frank A.
Focusing on contributions of educational resources to the growth of nonfarm enterprises in small towns and sparsely populated areas of the Southeast, this paper discusses four major problems which are at least partly amenable to educational solutions and examines various educational means for overcoming those problems. Research and development…
Educating the Citizen of Academia Online?
ERIC Educational Resources Information Center
Solberg, Mariann
2011-01-01
The Arctic is a vast, sparsely populated area. The demographic situation points to online distance education as a solution to support lifelong learning and to build competence in the region. An overall aim of all university education is what Hans Georg Gadamer calls "Bildung", what we in Norwegian call "dannelse" and what…
ERIC Educational Resources Information Center
Thanheiser, Eva; Browning, Christine; Edson, Alden J.; Kastberg, Signe; Lo, Jane-Jane
2013-01-01
This survey of the literature summarizes and reflects on research findings regarding elementary preservice teachers' (PSTs') mathematics conceptions and the development thereof. Despite the current focus on teacher education, peer-reviewed journals offer surprisingly sparse insight into these areas. The limited research that exists…
Seals map bathymetry of the Antarctic continental shelf
NASA Astrophysics Data System (ADS)
Padman, Laurie; Costa, Daniel P.; Bolmer, S. Thompson; Goebel, Michael E.; Huckstadt, Luis A.; Jenkins, Adrian; McDonald, Birgitte I.; Shoosmith, Deborah R.
2010-11-01
We demonstrate the first use of marine mammal dive-depth data to improve maps of bathymetry in poorly sampled regions of the continental shelf. A group of 57 instrumented elephant seals made on the order of 2 × 10⁵ dives over and near the continental shelf on the western side of the Antarctic Peninsula during five seasons, 2005-2009. Maximum dive depth exceeded 2000 m. For dives made near existing ship tracks with measured water depths H < 700 m, ~30% of dive depths were to the seabed, consistent with expected benthic foraging behavior. By identifying the deepest of multiple dives within small areas as a dive to the seabed, we have developed a map of seal-derived bathymetry. Our map fills in several regions for which trackline data are sparse, significantly improving delineation of troughs crossing the continental shelf of the southern Bellingshausen Sea.
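The mapping rule, taking the deepest of many dives in a cell as an estimate of the seabed, reduces to a max-per-bin gridding; a hedged numpy sketch (hypothetical variable names, no quality control or track filtering) follows.

```python
import numpy as np

def max_depth_grid(lon, lat, depth, lon_edges, lat_edges):
    """Seal-derived bathymetry: the deepest dive in each grid cell is taken as a
    lower bound on water depth (depth in positive meters below the surface)."""
    H = np.full((len(lat_edges) - 1, len(lon_edges) - 1), np.nan)
    ix = np.digitize(lon, lon_edges) - 1
    iy = np.digitize(lat, lat_edges) - 1
    for x, y, d in zip(ix, iy, depth):
        if 0 <= x < H.shape[1] and 0 <= y < H.shape[0]:
            H[y, x] = d if np.isnan(H[y, x]) else max(H[y, x], d)
    return H
```

Cells with no dives stay NaN, so the seal-derived grid can be merged with ship trackline soundings wherever either source has coverage.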
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goodman, Jordan A.
2008-12-24
The Milagro Gamma-Ray Observatory is the world's first large-area water Cherenkov detector capable of continuously monitoring the overhead sky for sources of TeV gamma rays. The detector's unique design provides for unprecedented sensitivity compared to traditional sparse sampling arrays. As a result, Milagro has made a host of discoveries including the detection of several new gamma-ray sources and the detection of diffuse emission from the Galactic plane. The HAWC detector is a natural extension of the Milagro design. HAWC will be constructed as a joint Mexican-US collaboration on the Sierra Negra Mountain in Mexico at an elevation of 4100 m. The design and location of HAWC was optimized using the lessons learned from Milagro and will be 15 times more sensitive than Milagro when completed. In this paper, we briefly review Milagro results and discuss the physics we can do with HAWC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Kai; Ma, Ying -Zhong; Simpson, Mary Jane
Charge carrier trapping degrades the performance of organometallic halide perovskite solar cells. To characterize the locations of electronic trap states in a heterogeneous photoactive layer, a spatially resolved approach is essential. Here, we report a comparative study on methylammonium lead tri-iodide perovskite thin films subject to different thermal annealing times using a combined photoluminescence (PL) and femtosecond transient absorption microscopy (TAM) approach to spatially map trap states. This approach coregisters the initially populated electronic excited states with the regions that recombine radiatively. Although the TAM images are relatively homogeneous for both samples, the corresponding PL images are highly structured. The remarkable variation in the PL intensities as compared to the transient absorption signal amplitude suggests spatially dependent PL quantum efficiency, indicative of trapping events. Furthermore, detailed analysis enables identification of two trapping regimes: a densely packed trapping region and a sparse trapping area that appear as unique spatial features in scaled PL maps.
Sparse and redundant representations for inverse problems and recognition
NASA Astrophysics Data System (ADS)
Patel, Vishal M.
Sparse and redundant representation of data enables the description of signals as linear combinations of a few atoms from a dictionary. In this dissertation, we study applications of sparse and redundant representations in inverse problems and object recognition. Furthermore, we propose two novel imaging modalities based on the recently introduced theory of Compressed Sensing (CS). This dissertation consists of four major parts. In the first part of the dissertation, we study a new type of deconvolution algorithm that is based on estimating the image from a shearlet decomposition. Shearlets provide a multi-directional and multi-scale decomposition that has been mathematically shown to represent distributed discontinuities such as edges better than traditional wavelets. We develop a deconvolution algorithm that allows the approximate inversion operator to be controlled on a multi-scale and multi-directional basis. Furthermore, we develop a method for the automatic determination of the threshold values for noise shrinkage at each scale and direction, without explicit knowledge of the noise variance, using a generalized cross-validation method. In the second part of the dissertation, we study a reconstruction method that recovers highly undersampled images, assumed to have a sparse representation in a gradient domain, by using partial measurement samples that are collected in the Fourier domain. Our method makes use of a robust generalized Poisson solver that greatly aids in achieving significantly improved performance over similar proposed methods. We demonstrate by experiments that this new technique is more flexible than its competitors, working with either random or restricted sampling scenarios. In the third part of the dissertation, we introduce a novel Synthetic Aperture Radar (SAR) imaging modality which can provide a high resolution map of the spatial distribution of targets and terrain using a significantly reduced number of needed transmitted and/or received electromagnetic waveforms. We demonstrate that this new imaging scheme requires no new hardware components and allows the aperture to be compressed. It also presents many new applications and advantages, including strong resistance to countermeasures and interception, imaging much wider swaths, and reduced on-board storage requirements. The last part of the dissertation deals with object recognition based on learning dictionaries for simultaneous sparse signal approximations and feature extraction. A dictionary is learned for each object class based on given training examples, minimizing the representation error subject to a sparseness constraint. A novel test image is then projected onto the span of the atoms in each learned dictionary. The residual vectors along with the coefficients are then used for recognition. Applications to illumination-robust face recognition and automatic target recognition are presented.
Low photon count based digital holography for quadratic phase cryptography.
Muniraj, Inbarasan; Guo, Changliang; Malallah, Ra'ed; Ryle, James P; Healy, John J; Lee, Byung-Geun; Sheridan, John T
2017-07-15
Recently, the vulnerability of the linear canonical transform-based double random phase encryption system to attack has been demonstrated. To alleviate this, we present for the first time, to the best of our knowledge, a method for securing a two-dimensional scene using a quadratic phase encoding system operating in the photon-counted imaging (PCI) regime. Position-phase-shifting digital holography is applied to record the photon-limited encrypted complex samples. The reconstruction of the complex wavefront involves four sparse (undersampled) dataset intensity measurements (interferograms) at two different positions. Computer simulations validate that the photon-limited sparse-encrypted data has adequate information to authenticate the original data set. Finally, security analysis, employing iterative phase retrieval attacks, has been performed.
An analysis of the lithology to resistivity relationships using airborne EM and boreholes
NASA Astrophysics Data System (ADS)
Barfod, Adrian A. S.; Christiansen, Anders V.; Møller, Ingelise
2014-05-01
We present a study of the relationship between dense airborne SkyTEM resistivity data and sparse lithological borehole data. Understanding the geological structures of the subsurface is of great importance to hydrogeological surveys. Large-scale geological information can be gathered directly from boreholes or indirectly from large geophysical surveys. Borehole data provide detailed lithological information only at the position of the borehole and, due to the sparse nature of boreholes, they rarely provide sufficient information for high-accuracy groundwater models. Airborne geophysical data, on the other hand, provide dense spatial coverage, but bear only indirect information on lithology through the resistivity models. Hitherto, the integration of geophysical data into geological and hydrogeological models has often been subjective, largely undocumented, and painstakingly manual. This project presents a detailed study of the relationships between resistivity data and lithological borehole data. The purpose is to objectively describe and document the relationships between lithology and geophysical parameters. This project has focused on utilizing preexisting datasets from the Danish national borehole database (JUPITER) and the national geophysical database (GERDA). The study presented here is from the Norsminde catchment area (208 sq. km), situated in the municipality of Odder, Denmark. The Norsminde area contains a total of 758 boreholes and 106,770 SkyTEM soundings. The large amounts of data make the Norsminde area ideal for studying the relationship between geophysical and lithological data. The subsurface is discretized into 20 cm horizontal sampling intervals from the highest elevation point to the depth of the deepest borehole. For each of these intervals a resistivity value is calculated at the position of the boreholes using a kriging formulation. The lithology data from the boreholes are then used to categorize the interpolated resistivity values according to lithology. The end result of this comparison is a set of resistivity distributions for the different lithology categories. The distributions provide detailed, objective information on the resistivity properties of the subsurface and document how the geological lithologies are imaged in resistivity. We show that different lithologies are mapped at distinctly different resistivities, but also that the geophysical inversion strategy significantly influences the resulting distributions.
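The comparison pipeline can be sketched in a few lines: interpolate the resistivity model to the borehole sample points, then bin the values by lithology class. The sketch below substitutes scipy's linear interpolation for the kriging formulation actually used, and all names are illustrative.

```python
import numpy as np
from scipy.interpolate import griddata

def lithology_resistivity(res_xyz, res_vals, bore_xyz, bore_lith):
    """Interpolate gridded resistivity (points res_xyz, values res_vals) to
    borehole sample points bore_xyz, then collect the interpolated values by
    lithology class to form per-lithology resistivity distributions."""
    at_bore = griddata(res_xyz, res_vals, bore_xyz, method="linear")
    dists = {}
    for lith in np.unique(bore_lith):
        vals = at_bore[(bore_lith == lith) & ~np.isnan(at_bore)]
        dists[lith] = vals                    # e.g. histogram each entry afterwards
    return dists
```

Histogramming each entry of the returned dictionary reproduces the per-lithology resistivity distributions that the study uses to document how lithologies are imaged.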
Hogerwerf, Lenny; Still, Kelly; Heederik, Dick; van Rotterdam, Bart; de Bruin, Arnout; Nielen, Mirjam; Wouters, Inge M.
2012-01-01
Coxiella burnetii is thought to infect humans primarily via airborne transmission. However, air measurements of C. burnetii are sparse. We detected C. burnetii DNA in inhalable and PM10 (particulate matter with an aerodynamic size of 10 μm or less) dust samples collected at three affected goat farms, demonstrating that low levels of C. burnetii DNA are present in inhalable size fractions. PMID:22582072
Rosenberg, Justin F; Haulena, Martin; Phillips, Brianne E; Harms, Craig A; Lewbart, Gregory A; Lahner, Lesanna L; Papich, Mark G
2016-11-01
OBJECTIVE To determine population pharmacokinetics of enrofloxacin in purple sea stars (Pisaster ochraceus) administered an intracoelomic injection of enrofloxacin (5 mg/kg) or immersed in an enrofloxacin solution (5 mg/L) for 6 hours. ANIMALS 28 sea stars of undetermined age and sex. PROCEDURES The study had 2 phases. Twelve sea stars received an intracoelomic injection of enrofloxacin (5 mg/kg) or were immersed in an enrofloxacin solution (5 mg/L) for 6 hours during the injection and immersion phases, respectively. Two untreated sea stars were housed with the treated animals following enrofloxacin administration during both phases. Water vascular system fluid samples were collected from 4 sea stars and all controls at predetermined times during and after enrofloxacin administration. The enrofloxacin concentration in those samples was determined by high-performance liquid chromatography. For each phase, noncompartmental analysis of naïve averaged pooled samples was used to obtain initial parameter estimates; then, population pharmacokinetic analysis was performed that accounted for the sparse sampling technique used. RESULTS Injection phase data were best fit with a 2-compartment model; elimination half-life, peak concentration, area under the curve, and volume of distribution were 42.8 hours, 18.9 μg/mL, 353.8 μg•h/mL, and 0.25 L/kg, respectively. Immersion phase data were best fit with a 1-compartment model; elimination half-life, peak concentration, and area under the curve were 56 hours, 0.39 μg/mL, and 36.3 μg•h/mL, respectively. CONCLUSIONS AND CLINICAL RELEVANCE Results suggested that the described enrofloxacin administration resulted in water vascular system fluid drug concentrations expected to exceed the minimum inhibitory concentration for many bacterial pathogens.
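For orientation, the basic noncompartmental quantities reported above (AUC by the trapezoidal rule, terminal half-life from a log-linear fit) can be computed as in this sketch; it is a generic NCA illustration with made-up numbers, not the population model fitted in the study.

```python
import numpy as np

def nca_parameters(t, conc, n_terminal=3):
    """Noncompartmental estimates: AUC by the trapezoidal rule, terminal
    half-life from a log-linear fit to the last n_terminal concentrations."""
    t, conc = np.asarray(t, float), np.asarray(conc, float)
    auc = np.trapz(conc, t)                  # ug*h/mL if conc in ug/mL and t in h
    slope, _ = np.polyfit(t[-n_terminal:], np.log(conc[-n_terminal:]), 1)
    t_half = np.log(2.0) / -slope            # terminal elimination half-life (h)
    return {"Cmax": conc.max(), "AUC": auc, "t1/2": t_half}

# Illustrative sampling times (h) and concentrations (ug/mL):
print(nca_parameters([1, 4, 8, 24, 48, 96], [14.2, 18.9, 16.0, 9.5, 5.1, 2.3]))
```

In the study itself these naive-pooled estimates only seed a population analysis that properly accounts for the sparse, per-animal sampling design.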
Generation of complementary sampled phase-only holograms.
Tsang, P W M; Chow, Y T; Poon, T-C
2016-10-03
If an image is uniformly down-sampled into a sparse form and converted into a hologram, the phase component alone will be adequate to reconstruct the image. However, the appearance of the reconstructed image is degraded with numerous empty holes. In this paper, we present a low-complexity, non-iterative solution to this problem. Briefly, two phase-only holograms are generated for an image, each based on a different down-sampling lattice. Subsequently, the holograms are displayed alternately at high frame rate. The reconstructed images of the two holograms will appear to be a single, densely sampled image with enhanced visual quality.
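A minimal numpy sketch of the idea: build two complementary down-sampling lattices, form a phase-only Fourier hologram from each, and reconstruct by inverse transform. The lattice geometry and Fourier (rather than Fresnel) propagation are simplifying assumptions, not the paper's exact configuration.

```python
import numpy as np

def complementary_phase_holograms(img, period=2):
    """Generate two phase-only Fourier holograms from complementary
    down-sampling lattices; displayed alternately, their reconstructions
    together cover the full set of sample points."""
    yy, xx = np.indices(img.shape)
    mask_a = (yy // period + xx // period) % 2 == 0   # lattice A (block checkerboard)
    mask_b = ~mask_a                                  # complementary lattice B
    holos = []
    for m in (mask_a, mask_b):
        field = np.fft.fft2(img * m)
        holos.append(np.exp(1j * np.angle(field)))    # keep phase only
    return holos, (mask_a, mask_b)

def reconstruct(h):
    """Intensity of the replay field of one phase-only hologram."""
    return np.abs(np.fft.ifft2(h))
```

Averaging `reconstruct(h)` over the two holograms mimics the eye's temporal integration when the pair is displayed alternately at high frame rate.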
NASA Astrophysics Data System (ADS)
Gómez-Nieto, Israel; Martín, María del Pilar; Salas, Francisco Javier; Gallardo, Marta
2013-04-01
Understanding the interaction between natural and socio-economic factors that determine fire regime is essential to make accurate projections and impact assessments. However, this requires accurate, historical, systematic, homogeneous and spatially explicit information on fire occurrence. Fire databases usually have serious limitations in this regard; therefore other sources of information, such as remote sensing, have emerged as alternatives for generating optimal fire maps on various spatial and temporal scales. Several national and international projects work to generate information for studying the factors that determine the current fire regime and its future evolution. This work falls within the framework of the project "Forest fires under climate, social and economic changes in Europe, the Mediterranean and other fire-affected areas of the World" (FUME, http://www.fumeproject.eu), which aims to study the changes and factors related to fire regimes through time, to determine the potential impacts on vegetation in Mediterranean regions, and to define concrete steps to address future risk scenarios. We analyzed the changes in the fire regime in the Madrid region (Spain) over the past three decades (1984-2010) and their relation to land-use changes. We identified and mapped fires that occurred in the region during those years using Landsat satellite images, combining digital techniques and visual analysis. The results show clear cyclical behaviour of fire, with years of high incidence (such as 1985, 2000 and 2003, notable for both the number of fires and the area affected, over 2000 ha) followed by years with markedly lower occurrence. At the same time, we analyzed the land-use changes that occurred in the Madrid region between the early 1980s and the mid-2000s using as reference the CORINE Land Cover maps (1990, 2000 and 2006) and the 1982 Vegetation and Land Use map of the Community of Madrid. Relating fire regimes to the observed land-use and land-cover changes over the periods analyzed, we found that between 1984 and 2006 most of the burned area (above 80%) retained its pre-fire cover type. In areas that did change, however, the most important transitions were recorded in wooded areas, especially conifers, which became shrubs or sparsely vegetated areas, followed by non-irrigated crops, which were replaced by grasslands or industrial areas, and sparse vegetation, which changed to shrubs. Finally, the analysis of land-use changes over burned areas identified shrubland as the most favoured cover type, arising either from vegetative degradation after intense burning of wooded areas, especially conifers, or as a stage of natural succession in areas previously covered by sparse vegetation.
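The land-cover transition analysis reduces to cross-tabulating pre- and post-fire class labels over burned pixels; a small sketch with hypothetical label arrays is given below.

```python
import numpy as np

def transition_matrix(before, after, classes):
    """Cross-tabulate pre-fire vs. post-fire land-cover labels over burned
    pixels to quantify transitions (e.g. conifer -> shrubland)."""
    M = np.zeros((len(classes), len(classes)), dtype=int)
    index = {c: i for i, c in enumerate(classes)}
    for b, a in zip(before.ravel(), after.ravel()):
        M[index[b], index[a]] += 1
    return M                                  # rows: before, cols: after

# Illustrative usage with hypothetical burned-pixel label arrays:
# classes = ["conifer", "shrub", "sparse", "crop", "grass", "industrial"]
# M = transition_matrix(cover_1984[burned], cover_2006[burned], classes)
```

The diagonal of M measures the "retained pre-fire cover" fraction reported above, and the off-diagonal entries rank the dominant transitions.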
NASA Astrophysics Data System (ADS)
Heim, B.; Beamish, A. L.; Walker, D. A.; Epstein, H. E.; Sachs, T.; Chabrillat, S.; Buchhorn, M.; Prakash, A.
2016-12-01
Ground data for the validation of satellite-derived terrestrial Essential Climate Variables (ECVs) at high latitudes are sparse. Also for regional model evaluation (e.g. climate models, land surface models, permafrost models), we lack accurate ranges of terrestrial ground data and face the problem of a large mismatch in scale. Within the German research programs 'Regional Climate Change' (REKLIM) and the Environmental Mapping and Analysis Program (EnMAP), we conducted a study on ground data representativeness for vegetation-related variables within a monitoring grid at the Toolik Lake Long-Term Ecological Research station; the Toolik Lake station lies in the Kuparuk River watershed on the North Slope of the Brooks Mountain Range in Alaska. The Toolik Lake grid covers an area of 1 km² and contains eighty-five grid points spaced 100 meters apart. Moist acidic tussock tundra is the dominant vegetation type within the grid. Eighty-five permanent 1 m² plots were also established to be representative of the individual grid points. Researchers from the University of Alaska Fairbanks have undertaken assessments at these plots, including Leaf Area Index (LAI) and field spectrometry to derive the Normalized Difference Vegetation Index (NDVI). During summer 2016, we conducted field spectrometry and LAI measurements at selected plots during early, peak and late summer. We experimentally measured LAI on more spatially extensive Elementary Sampling Units (ESUs) to investigate the spatial representativeness of the permanent 1 m² plots and to map ESUs for various tundra types. LAI measurements are potentially influenced by landscape-inherent microtopography, sparse vascular plant cover, and dead woody matter. From the field spectrometer measurements, we derived a clear-sky mid-day Fraction of Absorbed Photosynthetically Active Radiation (FAPAR). We will present the first data analyses comparing FAPAR and LAI, and maps of biophysically focused ESUs for evaluating the use of remote sensing data to estimate these ecosystem properties.
Cao, Hongbao; Duan, Junbo; Lin, Dongdong; Shugart, Yin Yao; Calhoun, Vince; Wang, Yu-Ping
2014-11-15
Integrative analysis of multiple data types can take advantage of their complementary information and therefore may provide higher power to identify potential biomarkers that would be missed using individual data analysis. Due to the different natures of diverse data modalities, data integration is challenging. Here we address the data integration problem by developing a generalized sparse model (GSM) using weighting factors to integrate multi-modality data for biomarker selection. As an example, we applied the GSM model to a joint analysis of two types of schizophrenia data sets: 759,075 SNPs and 153,594 functional magnetic resonance imaging (fMRI) voxels in 208 subjects (92 cases/116 controls). To solve this small-sample-large-variable problem, we developed a novel sparse representation based variable selection (SRVS) algorithm, with the primary aim of identifying biomarkers associated with schizophrenia. To validate the effectiveness of the selected variables, we performed multivariate classification followed by ten-fold cross validation. We compared our proposed SRVS algorithm with an earlier sparse model based variable selection algorithm for integrated analysis. In addition, we compared with traditional statistical methods for univariate data analysis (Chi-squared test for SNP data and ANOVA for fMRI data). Results showed that our proposed SRVS method can identify novel biomarkers that show stronger capability in distinguishing schizophrenia patients from healthy controls. Moreover, better classification ratios were achieved using biomarkers from both types of data, suggesting the importance of integrative analysis. Copyright © 2014 Elsevier Inc. All rights reserved.
Sparse sampling and reconstruction for electron and scanning probe microscope imaging
Anderson, Hyrum; Helms, Jovana; Wheeler, Jason W.; Larson, Kurt W.; Rohrer, Brandon R.
2015-07-28
Systems and methods for conducting electron or scanning probe microscopy are provided herein. In a general embodiment, the systems and methods for conducting electron or scanning probe microscopy with an undersampled data set include: driving an electron beam or probe to scan across a sample and visit a subset of pixel locations of the sample that are randomly or pseudo-randomly designated; determining actual pixel locations on the sample that are visited by the electron beam or probe; and processing data collected by detectors from the visits of the electron beam or probe at the actual pixel locations and recovering a reconstructed image of the sample.
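In spirit, the patented scheme can be emulated by visiting a random subset of pixels and interpolating the measurements back to a full grid; the sketch below uses simple cubic interpolation in place of the more sophisticated sparse reconstruction such a system would employ (values outside the sampled convex hull come back as NaN).

```python
import numpy as np
from scipy.interpolate import griddata

def undersampled_scan(image, fraction=0.2, seed=0):
    """Visit a random subset of pixel locations (as a sparse-scan microscope
    would) and recover a full image by interpolating the measured values."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    n = int(fraction * h * w)
    ys = rng.integers(0, h, n)               # randomly designated pixel visits
    xs = rng.integers(0, w, n)               # (duplicate visits are harmless here)
    vals = image[ys, xs]                     # detector readings at visited pixels
    gy, gx = np.mgrid[0:h, 0:w]
    return griddata((ys, xs), vals, (gy, gx), method="cubic")
```

A real instrument would additionally log the *actual* beam positions (which drift from the commanded ones) before reconstruction, as the claims above emphasize.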
NASA Astrophysics Data System (ADS)
Ono, Atsushi; Moteki, Masato
2017-06-01
The salp Salpa thompsoni has the potential to alter the Southern Ocean ecosystem through competition with krill Euphausia superba. Information on the reproductive status of S. thompsoni in the high Southern Ocean is thus essential to understanding salp population growth and predicting changes in the Southern Ocean ecosystem. We carried out stratified and quantitative sampling from the surface to a depth of 2000 m during the austral summer of 2008 to determine the spatial distribution and population structure of S. thompsoni in the Southern Ocean off Adélie Land. We found two salp species, S. thompsoni and Ihlea racovitzai, with the former being dominant. S. thompsoni was distributed north of the continental slope area, while I. racovitzai was observed in the neritic zone. Mature aggregates and solitary specimens of S. thompsoni were found south of the Southern Boundary of the Antarctic Circumpolar Current, suggesting that S. thompsoni is able to complete its life cycle in high Antarctic waters during the austral summer. However, S. thompsoni was sparsely distributed in the continental slope area, and absent south of the Antarctic Slope Front, suggesting that it is less competitive with krill for food in the slope area off Adélie Land, where krill is densely distributed during the austral summer.
A new, multi-resolution bedrock elevation map of the Greenland ice sheet
NASA Astrophysics Data System (ADS)
Griggs, J. A.; Bamber, J. L.; Grisbed Consortium
2010-12-01
Gridded bedrock elevation for the Greenland ice sheet has previously been constructed with a 5 km posting. The true resolution of the data set was, in places, however, considerably coarser than this due to the across-track spacing of ice-penetrating radar transects. Errors were estimated to be on the order of a few percent in the centre of the ice sheet, increasing markedly in relative magnitude near the margins, where accurate thickness is particularly critical for numerical modelling and other applications. We use new airborne and satellite estimates of ice thickness and surface elevation to determine the bed topography for the whole of Greenland. This is a dynamic product, which will be updated frequently as new data, such as that from NASA's Operation IceBridge, become available. The University of Kansas has in recent years flown an airborne ice-penetrating radar system with close flightline spacing over several key outlet glacier systems. This allows us to produce a multi-resolution bedrock elevation dataset with the high spatial resolution needed for ice dynamic modelling over these key outlet glaciers and coarser resolution over the more sparsely sampled interior. Airborne ice thickness and elevation data from CReSIS obtained between 1993 and 2009 are combined with JPL/UCI/Iowa data collected in 2009 by WISE (Warm Ice Sounding Experiment), covering the marginal areas along the south west coast. Data collected in the 1970s by the Technical University of Denmark were also used in interior areas with sparse coverage from other sources. Marginal elevation data from the ICESat laser altimeter and the Greenland Ice Mapping Program were used to help constrain the ice thickness and bed topography close to the ice sheet margin where, typically, the terrestrial observations have poor sampling between flight tracks. The GRISBed consortium currently consists of: W. Blake, S. Gogineni, A. Hoch, C. M. Laird, C. Leuschen, J. Meisel, J. Paden, J. Plummer, F. Rodriguez-Morales and L. Smith, CReSIS, University of Kansas; E. Rignot, JPL and University of California, Irvine; Y. Gim, JPL; J. Mouginot, University of California, Irvine; D. Kirchner, University of Iowa; I. Howat, Byrd Polar Research Center, Ohio State University; I. Joughin and B. Smith, University of Washington; T. Scambos, NSIDC; S. Martin, University of Washington; T. Wagner, NASA.
Network dynamics underlying the formation of sparse, informative representations in the hippocampus.
Karlsson, Mattias P; Frank, Loren M
2008-12-24
During development, activity-dependent processes increase the specificity of neural responses to stimuli, but the role that this type of process plays in adult plasticity is unclear. We examined the dynamics of hippocampal activity as animals learned about new environments to understand how neural selectivity changes with experience. Hippocampal principal neurons fire when the animal is located in a particular subregion of its environment, and in any given environment the hippocampal representation is sparse: less than half of the neurons in areas CA1 and CA3 are active whereas the rest are essentially silent. Here we show that different dynamics govern the evolution of this sparsity in CA1 and upstream area CA3. CA1, but not CA3, produces twice as many spikes in novel compared with familiar environments. This high-rate firing continues during sharp wave ripple events in a subsequent rest period. The overall CA1 population rate declines and the number of active cells decreases as the environment becomes familiar and task performance improves, but the decline in rate is not uniform across neurons. Instead, the activity of cells with initial peak spatial rates above approximately 12 Hz is enhanced, whereas the activity of cells with lower initial peak rates is suppressed. The result of these changes is that the active CA1 population comes to consist of a relatively small group of cells with strong spatial tuning. This process is not evident in CA3, indicating that a region-specific and long timescale process operates in CA1 to create a sparse, spatially informative population of neurons.
Preserving sparseness in multivariate polynomial factorization
NASA Technical Reports Server (NTRS)
Wang, P. S.
1977-01-01
Attempts were made to factor ten test polynomials using MACSYMA; however, it did not get very far with any of the larger polynomials. At that time, MACSYMA used an algorithm created by Wang and Rothschild. This factoring algorithm was also implemented in IBM's symbolic manipulation system SCRATCHPAD. A closer look at this old factoring algorithm revealed three problem areas, each of which contributes to loss of sparseness and intermediate expression growth. This study led to effective ways of avoiding these problems and, ultimately, to a new factoring algorithm. The three problems are known as the extraneous factor problem, the leading coefficient problem, and the bad zero problem. These problems are examined separately. Their causes and effects are set forth in detail, and ways to avoid or lessen these problems are described.
Sparse pre-Columbian human habitation in western Amazonia.
McMichael, C H; Piperno, D R; Bush, M B; Silman, M R; Zimmerman, A R; Raczka, M F; Lobato, L C
2012-06-15
Locally extensive pre-Columbian human occupation and modification occurred in the forests of the central and eastern Amazon Basin, but whether comparable impacts extend westward and into the vast terra firme (interfluvial) zones remains unclear. We analyzed soils from 55 sites across central and western Amazonia to assess the history of human occupation. Sparse occurrences of charcoal and the lack of phytoliths from agricultural and disturbance species in the soils during pre-Columbian times indicated that human impacts on interfluvial forests were small, infrequent, and highly localized. No human artifacts or modified soils were found at any site surveyed. Riverine bluff areas also appeared less heavily occupied and disturbed than similar settings elsewhere. Our data indicate that human impacts on Amazonian forests were heterogeneous across this vast landscape.
Spatial and diurnal below canopy evaporation in a desert vineyard: measurements and modeling
USDA-ARS?s Scientific Manuscript database
Evaporation from the soil surface (E) can be a significant source of water loss in arid areas. In sparsely vegetated systems, E is expected to be a function of soil, climate, irrigation regime, precipitation patterns, and plant canopy development, and will therefore change dynamically at both daily ...
Pre-School Education for Children Living in Sparsely Populated Areas.
ERIC Educational Resources Information Center
Council for Cultural Cooperation, Strasbourg (France). Committee for General and Technical Education.
This report describes the proceedings of a symposium on rural preschool education, one of four conferences in a Council of Europe preschool project relating to equal educational opportunity. Delegates from 19 European countries participated. Included in this report are summaries of a paper on the ecology of childhood (discussing the background of…
Comparative Policy Brief: Status of Intellectual Disabilities in Nepal
ERIC Educational Resources Information Center
Crishna, Brinda; Prajapati, Surya Bhakta
2008-01-01
In Nepal, the estimates of the prevalence of disabilities vary, and there is sparse information specifically about people with intellectual disabilities (ID). Existing data suggest higher rates of prevalence of ID in the more remote northern area due to use of non-iodized salt, lack of health facilities, and extreme poverty. Superstitious beliefs…
Planning for Computer-Based Distance Education: A Review of Administrative Issues.
ERIC Educational Resources Information Center
Lever-Duffy, Judy C.
The Homestead Campus of Miami-Dade Community College, in Florida, serves a sparsely populated area with a culturally diverse population including migrant farm workers, prison inmates, and U.S. Air Force personnel. To increase access to college services, the campus focused on implementing a computer-based distance education program as its primary…
ERIC Educational Resources Information Center
Chaib, Mohamed
1988-01-01
Two studies in southeastern Sweden examined rural children's conflicting attitudes toward environmental change in the local community. Following a yearlong curriculum in environmental studies, 14 fifth and sixth graders in Ramkvilla were presented with an imaginary scenario involving the construction of a new factory. Their small, somewhat idyllic…
The Sound of Feedback in Higher Education
ERIC Educational Resources Information Center
Savin-Baden, Maggi
2010-01-01
Whilst there is considerable literature on feedback for students and on the use of audio feedback, literature in the area of podcasting assignment feedback (PAF) remains sparse. Partly, this may be due to a lack of clarity about what counts as feedback, the way in which feedback is located pedagogically and the relationship between feedback…
Habitat Characteristics of Active and Abandoned Red-Cockaded Woodpecker Colonies
Susan C. Loeb; William D. Pepper; Arlene T. Doyle
1992-01-01
Active red-cockaded woodpecker (Picoides borealis) colonies in the Piedmont of Georgia are mature pine stands (mean age = 87 ± 1 yr old) with relatively sparse midstories (mean basal area = 31 ± 3 ft²/ac). Active and abandoned colony sites have similar overstory characteristics, but midstories are significantly denser in...
Development of Competency Based Credential Programs in Southern California's High Desert Region.
ERIC Educational Resources Information Center
Burton, Louise F.; And Others
In the northern high desert region of San Bernardino County (California), about half of special education teachers do not hold special education credentials. In September 1988, the Desert-Mountain Rural Training Program began to provide appropriate training to uncredentialed special education teachers in this sparsely populated area. The program…
IMPLEMENTATION OF THE SMOKE EMISSION DATA PROCESSOR AND SMOKE TOOL INPUT DATA PROCESSOR IN MODELS-3
The U.S. Environmental Protection Agency has implemented Version 1.3 of the SMOKE (Sparse Matrix Operator Kernel Emissions) processor for preparation of area, mobile, point, and biogenic source emission data within Version 4.1 of the Models-3 air quality modeling framework. The SMOK...
Student Reflective Writing: Cognition and Affect before, during, and after Study Abroad
ERIC Educational Resources Information Center
Savicki, Victor; Price, Michele V.
2015-01-01
Reflective thinking is an important feature of study-abroad learning, yet research on reflection in this context is sparse. The current study examined student reflection on 3 content areas (Academic Expectations, Cultural Expectations, and Psychological Issues) at 3 times (before, during, and after study abroad). A content analysis approach with…
NASA Technical Reports Server (NTRS)
Whetstone, W. D.
1976-01-01
The functions and operating rules of the SPAR system, which is a group of computer programs used primarily to perform stress, buckling, and vibrational analyses of linear finite element systems, were given. The following subject areas were discussed: basic information, structure definition, format system matrix processors, utility programs, static solutions, stresses, sparse matrix eigensolver, dynamic response, graphics, and substructure processors.
Service Coordination and Children's Functioning in a School-Based Intensive Mental Health Program
ERIC Educational Resources Information Center
Puddy, Richard W.; Roberts, Michael C.; Vernberg, Eric M.; Hambrick, Erin P.
2012-01-01
Coordination of mental health services in children with serious emotional disturbance (SED) has shown a preliminary relationship to positive outcomes in children. Yet, research in this area is sparse. Therefore, the relation between service coordination activities and adaptive functioning was examined for 51 children with SED who were treated in the…
Family Involvement in Creative Teaching Practices for All in Small Rural Schools
ERIC Educational Resources Information Center
Vigo Arrazola, Begoña; Soriano Bozalongo, Juana
2015-01-01
Parental involvement is interpreted as a key form of support that can contribute to the establishment of inclusive practices in schools, but this can be difficult in sparsely populated areas. Using ethnographic methods of participant observation, informal conversations and document analysis, this article therefore focuses on family involvement…
People's perceptions of managed and natural landscapes
Arthur W. Magill
1995-01-01
Research was undertaken to identify people's opinions of what they saw in slides of managed and unmanaged landscapes. Most people were attracted by natural landscape features. Clearcuts were reported less frequently than roads, but dislike of them was more than 30 percent greater. Natural openings, bare areas, and sparse tree cover were also disliked.
Predictors of Asian American Adolescents' Suicide Attempts: A Latent Class Regression Analysis
ERIC Educational Resources Information Center
Wong, Y. Joel; Maffini, Cara S.
2011-01-01
Although suicide-related outcomes among Asian American adolescents are a serious public health problem in the United States, research in this area has been relatively sparse. To address this gap in the empirical literature, this study examined subgroups of Asian American adolescents for whom family, school, and peer relationships exerted…
Widening Access through Partnerships with Working Life
ERIC Educational Resources Information Center
Casson, Andrew
2006-01-01
Dalarna University has doubled its student numbers during the past five years, and now has the highest proportion of students from non-academic backgrounds of any Swedish university (37%). The province of Dalarna combines steel and paper industry in a number of relatively small towns with large areas of sparsely populated countryside. By tradition,…
USDA-ARS?s Scientific Manuscript database
The annual flood cycle of the Sudd wetland in South Sudan plays an important role in the Nile River Basin water balance. The wetland, however, is extensive and sparsely instrumented, which has inhibited credible understanding of regional flooding across space and time. Here we explore the potential ...
NASA Technical Reports Server (NTRS)
Schmugge, T. J.; Rango, A.; Neff, R.
1975-01-01
The electrically scanning microwave radiometer (ESMR) on the Nimbus 5 satellite was used to observe microwave emissions from vegetated and soil surfaces over an Illinois-Indiana study area, the Mississippi Valley, and the Great Salt Lake Desert in Utah. Analysis of microwave brightness temperatures (T_B) and antecedent rainfall over these areas provided a way to monitor variations of near-surface soil moisture. Because vegetation absorbs microwave emission from the soil at the 1.55 cm wavelength of ESMR, relative soil moisture measurements can only be obtained over bare or sparsely vegetated soil. In general, T_B increased during rain-free periods as evaporation of water and drying of the surface soil occurred, and drops in T_B were observed after significant rainfall events wet the soil. Microwave observations from space are limited to coarse resolutions (10-25 km), but it may be possible in regions with sparse vegetation cover to estimate soil moisture conditions on a watershed or agricultural district basis, particularly since daily observations can be obtained.
Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations
NASA Astrophysics Data System (ADS)
Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.
2017-09-01
This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.
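The estimation pipeline above is specific to the paper, but the AIC/BIC model-selection step it ends with is generic and easy to illustrate. A minimal sketch, assuming synthetic 1-D data and polynomial candidate models (all names and values here are illustrative, not from the paper):

import numpy as np

# Minimal sketch of AIC/BIC model selection (illustrative only; the paper's
# Gauss-Markov estimation of floorplan parameters is not reproduced here).
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = 2.0 - 3.0 * x + 1.5 * x**2 + rng.normal(0, 0.05, x.size)  # true model: quadratic

n = x.size
for degree in range(1, 5):
    coeffs = np.polyfit(x, y, degree)          # least-squares fit
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
    k = degree + 1                             # number of free parameters
    aic = n * np.log(rss / n) + 2 * k
    bic = n * np.log(rss / n) + k * np.log(n)  # BIC penalizes complexity harder
    print(f"degree={degree}  AIC={aic:.1f}  BIC={bic:.1f}")
# The quadratic model should minimize both criteria.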
Zhang, Zhilin; Jung, Tzyy-Ping; Makeig, Scott; Rao, Bhaskar D
2013-02-01
Fetal ECG (FECG) telemonitoring is an important branch of telemedicine. The design of a telemonitoring system via a wireless body area network with low energy consumption for ambulatory use is highly desirable. As an emerging technique, compressed sensing (CS) shows great promise in compressing/reconstructing data with low energy consumption. However, due to some specific characteristics of raw FECG recordings, such as nonsparsity and strong noise contamination, current CS algorithms generally fail in this application. This paper proposes to use the block sparse Bayesian learning (BSBL) framework to compress/reconstruct nonsparse raw FECG recordings. Experimental results show that the framework can reconstruct the raw recordings with high quality. In particular, the reconstruction does not destroy the interdependence relations among the multichannel recordings. This ensures that the independent component analysis decomposition of the reconstructed recordings has high fidelity. Furthermore, the framework allows the use of a sparse binary sensing matrix with many fewer nonzero entries to compress recordings. Indeed, each column of the matrix can contain only two nonzero entries. This shows that the framework, compared to other algorithms such as current CS algorithms and wavelet algorithms, can greatly reduce code execution in the CPU in the data compression stage.
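The BSBL reconstruction itself is beyond a short sketch, but the sparse binary sensing matrix the abstract describes (only two nonzero entries per column) is simple to construct. A minimal sketch with illustrative dimensions:

import numpy as np

# Sketch: compress a signal with a sparse binary sensing matrix in which each
# column holds exactly two nonzero entries (as described in the abstract).
# The BSBL reconstruction algorithm itself is not reproduced here.
rng = np.random.default_rng(1)
n, m = 512, 256                      # original and compressed lengths (illustrative)

Phi = np.zeros((m, n))
for col in range(n):
    rows = rng.choice(m, size=2, replace=False)
    Phi[rows, col] = 1.0             # only two nonzero entries per column

x = rng.standard_normal(n)           # stand-in for one channel of raw FECG samples
y = Phi @ x                          # compression is a cheap sparse multiply-accumulate
print("nonzeros:", int(Phi.sum()), "(= 2 per column); compressed length:", y.size)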
Multistatic Array Sampling Scheme for Fast Near-Field Image Reconstruction
2016-01-01
reconstruction. The array topology samples the scene on a regular grid of phase centers, using a tiling of Boundary Arrays (BAs). Following a simple correction ... hardware. Fig. 1 depicts the multistatic array topology. As seen, the topology is a tiled arrangement of Boundary Arrays (BAs). The BA is a well-known ... sparse array layout comprised of two linear transmit arrays and two linear receive arrays [6]. A slightly different tiled arrangement of BAs was used
NASA Astrophysics Data System (ADS)
Huijse, Pablo; Estévez, Pablo A.; Förster, Francisco; Daniel, Scott F.; Connolly, Andrew J.; Protopapas, Pavlos; Carrasco, Rodrigo; Príncipe, José C.
2018-05-01
The Large Synoptic Survey Telescope (LSST) will produce an unprecedented amount of light curves using six optical bands. Robust and efficient methods that can aggregate data from multidimensional sparsely sampled time series are needed. In this paper we present a new method for light curve period estimation based on quadratic mutual information (QMI). The proposed method does not assume a particular model for the light curve nor its underlying probability density, and it is robust to non-Gaussian noise and outliers. By combining the QMI from several bands, the true period can be estimated even when no single-band QMI yields the period. Period recovery performance as a function of average magnitude and sample size is measured using 30,000 synthetic multiband light curves of RR Lyrae and Cepheid variables generated by the LSST Operations and Catalog simulators. The results show that aggregating information from several bands is highly beneficial in LSST sparsely sampled time series, obtaining an absolute increase in period recovery rate of up to 50%. We also show that the QMI is more robust to noise and light curve length (sample size) than the multiband generalizations of the Lomb–Scargle and AoV periodograms, recovering the true period in 10%–30% more cases than its competitors. A Python package containing efficient Cython implementations of the QMI and other methods is provided.
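The QMI periodogram is not reproduced here; the following minimal sketch shows only the simpler idea of aggregating per-band Lomb-Scargle periodograms (the baseline family the paper compares against) on synthetic, irregularly sampled light curves. Band count, epoch counts, and noise levels are illustrative assumptions:

import numpy as np
from scipy.signal import lombscargle

# Naive multiband aggregation: sum normalized Lomb-Scargle periodograms over
# bands. The paper's QMI estimator replaces this with an information-theoretic
# measure; this sketch only illustrates why combining bands helps.
rng = np.random.default_rng(12)
true_period = 0.7                               # days (synthetic)
freqs = np.linspace(0.5, 5.0, 2000)             # trial frequencies, cycles/day
ang = 2 * np.pi * freqs                         # lombscargle expects angular freqs

combined = np.zeros_like(freqs)
for band in range(6):                           # six optical bands, sparse sampling
    t = np.sort(rng.uniform(0, 100, 40))        # 40 irregular epochs per band
    y = np.sin(2 * np.pi * t / true_period + band) + rng.normal(0, 0.5, t.size)
    combined += lombscargle(t, y - y.mean(), ang, normalize=True)

# With enough bands the combined peak should land near the true period.
print("recovered period:", 1 / freqs[np.argmax(combined)], "days")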
High-dimensional statistical inference: From vector to matrix
NASA Astrophysics Data System (ADS)
Zhang, Anru
Statistical inference for sparse signals or low-rank matrices in high-dimensional settings is of significant interest in a range of contemporary applications. It has attracted significant recent attention in many fields including statistics, applied mathematics and electrical engineering. In this thesis, we consider several problems including sparse signal recovery (compressed sensing under restricted isometry) and low-rank matrix recovery (matrix recovery via rank-one projections and structured matrix completion). The first part of the thesis discusses compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool which represents points in a polytope by convex combinations of sparse vectors. The technique is elementary yet leads to sharp results. It is shown that, in compressed sensing, δ_k^A < 1/3, δ_k^A + θ_{k,k}^A < 1, or δ_{tk}^A < √((t−1)/t) for any given constant t ≥ 4/3 guarantees the exact recovery of all k-sparse signals in the noiseless case through constrained ℓ1 minimization, and similarly, in affine rank minimization, δ_r^M < 1/3, δ_r^M + θ_{r,r}^M < 1, or δ_{tr}^M < √((t−1)/t) ensures the exact reconstruction of all matrices with rank at most r in the noiseless case via constrained nuclear norm minimization. Moreover, for any ε > 0, δ_k^A < 1/3 + ε, δ_k^A + θ_{k,k}^A < 1 + ε, or δ_{tk}^A < √((t−1)/t) + ε is not sufficient to guarantee the exact recovery of all k-sparse signals for large k. A similar result also holds for matrix recovery. In addition, the conditions δ_k^A < 1/3, δ_k^A + θ_{k,k}^A < 1, δ_{tk}^A < √((t−1)/t) and δ_r^M < 1/3, δ_r^M + θ_{r,r}^M < 1, δ_{tr}^M < √((t−1)/t) are also shown to be sufficient, respectively, for stable recovery of approximately sparse signals and low-rank matrices in the noisy case. For the second part of the thesis, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained. The proposed estimator is shown to be rate-optimal under certain conditions. The estimator is easy to implement via convex programming and performs well numerically. The techniques and main results developed in the chapter also have implications for other related statistical problems. An application to estimation of spiked covariance matrices from one-dimensional random projections is considered. The results demonstrate that it is still possible to accurately estimate the covariance matrix of a high-dimensional distribution based only on one-dimensional projections. For the third part of the thesis, we consider another setting of low-rank matrix completion. Current literature on matrix completion focuses primarily on independent sampling models under which the individual observed entries are sampled independently. Motivated by applications in genomic data integration, we propose a new framework of structured matrix completion (SMC) to treat structured missingness by design. Specifically, our proposed method aims at efficient matrix recovery when a subset of the rows and columns of an approximately low-rank matrix are observed.
We provide theoretical justification for the proposed SMC method and derive lower bound for the estimation errors, which together establish the optimal rate of recovery over certain classes of approximately low-rank matrices. Simulation studies show that the method performs well in finite sample under a variety of configurations. The method is applied to integrate several ovarian cancer genomic studies with different extent of genomic measurements, which enables us to construct more accurate prediction rules for ovarian cancer survival.
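For reference, the noiseless ℓ1 program and one of the sharp conditions quoted above, restated in display form (LaTeX):

% Restatement of the abstract's noiseless claim in display form.
\[
  \hat{x} \;=\; \operatorname*{arg\,min}_{x}\ \|x\|_1
  \quad\text{subject to}\quad Ax = y,
\]
\[
  \delta_{tk}^{A} \;<\; \sqrt{\frac{t-1}{t}}\,,\quad t \ge \tfrac{4}{3}
  \;\;\Longrightarrow\;\;
  \text{every $k$-sparse signal is recovered exactly.}
\]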
DuFour, Mark R.; Mayer, Christine M.; Kocovsky, Patrick; Qian, Song; Warner, David M.; Kraus, Richard T.; Vandergoot, Christopher
2017-01-01
Hydroacoustic sampling of low-density fish in shallow water can lead to low sample sizes of naturally variable target strength (TS) estimates, resulting in both sparse and variable data. Increasing maximum beam compensation (BC) beyond conventional values (i.e., 3 dB beam width) can recover more targets during data analysis; however, data quality decreases near the acoustic beam edges. We identified the optimal balance between data quantity and quality with increasing BC using a standard sphere calibration, and we quantified the effect of BC on fish track variability, size structure, and density estimates of Lake Erie walleye (Sander vitreus). Standard sphere mean TS estimates were consistent with theoretical values (−39.6 dB) up to 18-dB BC, while estimates decreased at greater BC values. Natural sources (i.e., residual and mean TS) dominated total fish track variation, while contributions from measurement related error (i.e., number of single echo detections (SEDs) and BC) were proportionally low. Increasing BC led to more fish encounters and SEDs per fish, while stability in size structure and density were observed at intermediate values (e.g., 18 dB). Detection of medium to large fish (i.e., age-2+ walleye) benefited most from increasing BC, as proportional changes in size structure and density were greatest in these size categories. Therefore, when TS data are sparse and variable, increasing BC to an optimal value (here 18 dB) will maximize the TS data quantity while limiting lower-quality data near the beam edges.
Shape models of asteroids reconstructed from WISE data and sparse photometry
NASA Astrophysics Data System (ADS)
Durech, Josef; Hanus, Josef; Ali-Lagoa, Victor
2017-10-01
By combining sparse-in-time photometry from the Lowell Observatory photometry database with WISE observations, we reconstructed convex shape models for about 700 new asteroids, and for ~850 others we derived 'partial' models with unconstrained ecliptic longitude of the spin axis direction. In our approach, the WISE data were treated as reflected light, which enabled us to join them directly with the sparse photometry into one dataset that was processed by the lightcurve inversion method. This simplified treatment of thermal infrared data turned out to provide correct results, because in most cases the phase offset between optical and thermal lightcurves was small and the correct sidereal rotation period was determined. The spin and shape parameters derived from only optical data and from a combination of optical and WISE data were very similar. The new models, together with those already available in the Database of Asteroid Models from Inversion Techniques (DAMIT), represent a sample of ~1650 asteroids. When partial models are also included, the total sample is about 2500 asteroids, which significantly increases the number of models available so far. We will show the distribution of spin axes for different size groups and also for several collisional families. These observed distributions in general agree with theoretical expectations, confirming that smaller asteroids are more affected by YORP/Yarkovsky evolution. In asteroid families, we see a clear bimodal distribution of prograde/retrograde rotation that correlates with position to the right/left of the family center as measured by semimajor axis.
NASA Astrophysics Data System (ADS)
Kumamoto, Yasuaki; Minamikawa, Takeo; Kawamura, Akinori; Matsumura, Junichi; Tsuda, Yuichiro; Ukon, Juichiro; Harada, Yoshinori; Tanaka, Hideo; Takamatsu, Tetsuro
2017-02-01
Nerve-sparing surgery is essential to avoid functional deficits of the limbs and organs. Raman scattering, a label-free, minimally invasive, and accurate modality, is one of the best candidate technologies for detecting nerves in nerve-sparing surgery. However, Raman scattering imaging is too time-consuming to be employed in surgery. Here we present a rapid and accurate nerve visualization method using a multipoint Raman imaging technique that enables simultaneous spectral measurement from different locations (n=32) of a sample. Five seconds are sufficient for measuring n=32 spectra with good S/N from a given tissue. Principal component regression discriminant analysis discriminated spectra obtained from peripheral nerves (n=863 from n=161 myelinated nerves) and connective tissue (n=828 from n=121 tendons) with sensitivity and specificity of 88.3% and 94.8%, respectively. Because the spatial information in a multipoint-Raman-derived tissue discrimination image is too sparse to visualize nerve arrangement, we compensated with morphological information obtained from a bright-field image. When merged with the sparse tissue discrimination image, a morphological image of a sample shows what portion of the Raman measurement points in an arbitrary structure is classified as nerve. Setting the nerve-detection criterion at 40% or more "nerve" points in a structure, myelinated nerves (n=161) and tendons (n=121) were discriminated with sensitivity and specificity of 97.5%. The presented technique, utilizing a sparse multipoint Raman image and a bright-field image, enables rapid, safe, and accurate detection of peripheral nerves.
Sparse and stable Markowitz portfolios.
Brodie, Joshua; Daubechies, Ingrid; De Mol, Christine; Giannone, Domenico; Loris, Ignace
2009-07-28
We consider the problem of portfolio selection within the classical Markowitz mean-variance framework, reformulated as a constrained least-squares regression problem. We propose to add to the objective function a penalty proportional to the sum of the absolute values of the portfolio weights. This penalty regularizes (stabilizes) the optimization problem, encourages sparse portfolios (i.e., portfolios with only few active positions), and allows accounting for transaction costs. Our approach recovers as special cases the no-short-positions portfolios, but does allow for short positions in limited number. We implement this methodology on two benchmark data sets constructed by Fama and French. Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by Sharpe ratio, is consistently and significantly better than that of the naïve evenly weighted portfolio.
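The paper's exact program imposes budget and target-return constraints inside the optimization; the following minimal sketch shows only the core penalized-regression idea, in a simplified index-tracking form with naive weight renormalization. All data are synthetic and all parameter values are illustrative assumptions:

import numpy as np
from sklearn.linear_model import Lasso

# Simplified sparse-portfolio sketch: lasso regression of a target return
# series on the asset-return matrix. The paper handles the budget constraint
# inside the optimization; renormalizing afterwards is a rough shortcut.
rng = np.random.default_rng(2)
T, p = 250, 40
R = rng.normal(0.04, 1.0, (T, p))        # synthetic daily returns, in percent
y = R.mean(axis=1)                        # track the equal-weight portfolio

model = Lasso(alpha=0.005, fit_intercept=False, max_iter=50_000).fit(R, y)
w = model.coef_ / model.coef_.sum()       # crude budget normalization (sum to 1);
                                          # assumes the coefficient sum is positive
print("active positions:", np.count_nonzero(model.coef_), "of", p)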
NASA Astrophysics Data System (ADS)
Gao, Yi; Zhu, Liangjia; Norton, Isaiah; Agar, Nathalie Y. R.; Tannenbaum, Allen
2014-03-01
Desorption electrospray ionization mass spectrometry (DESI-MS) provides a highly sensitive imaging technique for differentiating normal and cancerous tissue at the molecular level. This can be very useful, especially under intra-operative conditions where the surgeon has to make crucial decisions about the tumor boundary. In such situations, the time it takes for imaging and data analysis becomes a critical factor. Therefore, in this work we utilize compressive sensing to perform sparse sampling of the tissue, which halves the scanning time. Furthermore, sparse feature selection is performed, which reduces the dimension of the data from about 10^4 to fewer than 50 features and thus significantly shortens the analysis time. This procedure also identifies biochemically important molecules for further pathological analysis. The methods are validated on brain and breast tumor data sets.
Quality of ground water in Idaho
Yee, Johnson J.; Souza, William R.
1987-01-01
The major aquifers in Idaho are categorized under two rock types, sedimentary and volcanic, and are grouped into six hydrologic basins. Areas with adequate, minimally adequate, or deficient data available for groundwater-quality evaluations are described. Wide variations in chemical concentrations in the water occur within individual aquifers, as well as among the aquifers. The existing data base is not sufficient to describe fully the ground-water quality throughout the State; however, it does indicate that the water is generally suitable for most uses. In some aquifers, concentrations of fluoride, cadmium, and iron in the water exceed the U.S. Environmental Protection Agency's drinking-water standards. Dissolved solids, chloride, and sulfate may cause problems in some local areas. Water-quality data are sparse in many areas, and only general statements can be made regarding the areal distribution of chemical constituents. Few data are available to describe temporal variations of water quality in the aquifers. Primary concerns related to special problem areas in Idaho include (1) protection of water quality in the Rathdrum Prairie aquifer, (2) potential degradation of water quality in the Boise-Nampa area, (3) effects of widespread use of drain wells overlying the eastern Snake River Plain basalt aquifer, and (4) disposal of low-level radioactive wastes at the Idaho National Engineering Laboratory. Shortcomings in the ground-water-quality data base are categorized as (1) multiaquifer sample inadequacy, (2) constituent coverage limitations, (3) baseline-data deficiencies, and (4) data-base nonuniformity.
Koshkina, Vira; Wang, Yang; Gordon, Ascelin; Dorazio, Robert; White, Matthew; Stone, Lewi
2017-01-01
Two main sources of data for species distribution models (SDMs) are site-occupancy (SO) data from planned surveys, and presence-background (PB) data from opportunistic surveys and other sources. SO surveys give high quality data about presences and absences of the species in a particular area. However, due to their high cost, they often cover a smaller area relative to PB data, and are usually not representative of the geographic range of a species. In contrast, PB data is plentiful, covers a larger area, but is less reliable due to the lack of information on species absences, and is usually characterised by biased sampling. Here we present a new approach for species distribution modelling that integrates these two data types. We have used an inhomogeneous Poisson point process as the basis for constructing an integrated SDM that fits both PB and SO data simultaneously. It is the first implementation of an Integrated SO–PB Model which uses repeated survey occupancy data and also incorporates detection probability. The Integrated Model's performance was evaluated using simulated data and compared to approaches using PB or SO data alone. It was found to be superior, improving the predictions of species spatial distributions, even when SO data is sparse and collected in a limited area. The Integrated Model was also found effective when environmental covariates were significantly correlated. Our method was demonstrated with real SO and PB data for the Yellow-bellied glider (Petaurus australis) in south-eastern Australia, with the predictive performance of the Integrated Model again found to be superior. PB models are known to produce biased estimates of species occupancy or abundance. The small sample size of SO datasets often results in poor out-of-sample predictions. Integrated models combine data from these two sources, providing superior predictions of species abundance compared to using either data source alone. Unlike conventional SDMs which have restrictive scale-dependence in their predictions, our Integrated Model is based on a point process model and has no such scale-dependency. It may be used for predictions of abundance at any spatial-scale while still maintaining the underlying relationship between abundance and area.
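The integrated SO-PB likelihood with detection probability is not reproduced here; below is a minimal sketch of its building block, the log-likelihood of an inhomogeneous Poisson point process with a log-linear intensity, discretized over grid cells. Data, covariates, and dimensions are synthetic and illustrative:

import numpy as np
from scipy.optimize import minimize

# Discretized inhomogeneous Poisson point process: expected count in cell j is
# exp(x_j . beta) * area_j. The integrated SO-PB model adds detection and
# sampling-bias terms that are not reproduced here.
rng = np.random.default_rng(3)
n_cells = 1000
X = np.column_stack([np.ones(n_cells), rng.standard_normal(n_cells)])  # covariates
area = np.full(n_cells, 0.01)                        # cell areas
beta_true = np.array([1.0, 0.8])
counts = rng.poisson(np.exp(X @ beta_true) * area)   # simulated presence counts

def neg_loglik(beta):
    log_lam = X @ beta
    # Poisson NLL up to a beta-independent constant
    return np.sum(np.exp(log_lam) * area) - np.sum(counts * log_lam)

beta_hat = minimize(neg_loglik, np.zeros(2), method="BFGS").x
print("estimated intensity coefficients:", beta_hat)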
O'Corry-Crowe, Greg; Suydam, Robert; Quakenbush, Lori; Potgieter, Brooke; Harwood, Lois; Litovka, Dennis; Ferrer, Tatiana; Citta, John; Burkanov, Vladimir; Frost, Kathy; Mahoney, Barbara
2018-01-01
The annual return of beluga whales, Delphinapterus leucas, to traditional seasonal locations across the Arctic may involve migratory culture, while the convergence of discrete summering aggregations on common wintering grounds may facilitate outbreeding. Natal philopatry and cultural inheritance, however, has been difficult to assess as earlier studies were of too short a duration, while genetic analyses of breeding patterns, especially across the beluga's Pacific range, have been hampered by inadequate sampling and sparse information on wintering areas. Using a much expanded sample and genetic marker set comprising 1,647 whales, spanning more than two decades and encompassing all major coastal summering aggregations in the Pacific Ocean, we found evolutionary-level divergence among three geographic regions: the Gulf of Alaska, the Bering-Chukchi-Beaufort Seas, and the Sea of Okhotsk (Φst = 0.11-0.32, Rst = 0.09-0.13), and likely demographic independence of (Fst-mtDNA = 0.02-0.66), and in many cases limited gene flow (Fst-nDNA = 0.0-0.02; K = 5-6) among, summering groups within regions. Assignment tests identified few immigrants within summering aggregations, linked migrating groups to specific summering areas, and found that some migratory corridors comprise whales from multiple subpopulations (PBAYES = 0.31:0.69). Further, dispersal is male-biased and substantial numbers of closely related whales congregate together at coastal summering areas. Stable patterns of heterogeneity between areas and consistently high proportions (~20%) of close kin (including parent-offspring) sampled up to 20 years apart within areas (G = 0.2-2.9, p>0.5) is the first direct evidence of natal philopatry to migration destinations in belugas. Using recent satellite telemetry findings on belugas we found that the spatial proximity of winter ranges has a greater influence on the degree of both individual and genetic exchange than summer ranges (rwinter-Fst-mtDNA = 0.9, rsummer-Fst-nDNA = 0.1). These findings indicate widespread natal philopatry to summering aggregation and entire migratory circuits, and provide compelling evidence that migratory culture and kinship helps maintain demographically discrete beluga stocks that can overlap in time and space.
Mind Wandering and the Incubation Effect in Insight Problem Solving
ERIC Educational Resources Information Center
Tan, Tengteng; Zou, Hong; Chen, Chuansheng; Luo, Jin
2015-01-01
Although many anecdotes suggest that creative insights often arise during mind wandering, empirical research is still sparse. In this study, the number reduction task (NRT) was used to assess whether insightful solutions were related to mind wandering during the incubation stage of the creative process. An experience sampling paradigm was used to…
Racial Differences in Attitudes toward Aging, Aging Knowledge, and Contact
ERIC Educational Resources Information Center
Intrieri, Robert C.; Kurth, Maria L.
2018-01-01
The present study assessed knowledge of aging, attitudes toward aging, ageism, and contact with older adults in a sample of 271 Non-Hispanic White and African-American undergraduates. Research examining racial differences in knowledge of aging, attitudes toward aging, ageism, and contact with older adults has been sparse. Results for the current…
Parental Employment and the Effects on Student Attendance.
ERIC Educational Resources Information Center
Ellis, Laura J.
Research on the effect of parent employment on student attendance has been sparse and inconclusive. This paper presents findings of a study that tested the following hypothesis--that students whose parents were at home during school hours would be absent more frequently than students whose parents worked during school hours. The sample was…
Individual snag detection using neighborhood attribute filtered airborne lidar data
Brian M. Wing; Martin W. Ritchie; Kevin Boston; Warren B. Cohen; Michael J. Olsen
2015-01-01
The ability to estimate and monitor standing dead trees (snags) has been difficult due to their irregular and sparse distribution, often requiring intensive sampling methods to obtain statistically significant estimates. This study presents a new method for estimating and monitoring snags using neighborhood attribute filtered airborne discrete-return lidar data. The...
Characteristics of Academically Excellent Business Studies Students in a Post-1992 University
ERIC Educational Resources Information Center
Bennett, Roger; Barkensjo, Anna
2005-01-01
In contrast to the extensive investigation of the characteristics of students who fail or perform badly in "new" universities, research into the factors associated with academic excellence within post-1992 institutions has been sparse. This empirical study examined the profile of a sample of 81 high-flying business studies undergraduates…
ERIC Educational Resources Information Center
Tayyaba, Saadia
2012-01-01
Purpose: Recent educational research has demonstrated rural-urban gaps in achievement and schooling conditions. Evidence from developing countries is still sparse. This study seeks to report rural-urban disparities in achievement, student, teacher, and school characteristics based on a nationally representative sample of grade four students from…
Large Covariance Estimation by Thresholding Principal Orthogonal Complements.
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2013-09-01
This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented.
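A minimal sketch of the POET construction described above: keep the top-K principal components of the sample covariance, soft-threshold the principal orthogonal complement, and reassemble. K and the threshold are set ad hoc here; the paper derives data-driven choices:

import numpy as np

# POET sketch on data from an approximate factor model.
rng = np.random.default_rng(4)
n, p, K = 200, 50, 3
B = rng.standard_normal((p, K))                 # factor loadings
F = rng.standard_normal((n, K))                 # latent factors
Y = F @ B.T + rng.normal(0, 0.5, (n, p))        # observations

S = np.cov(Y, rowvar=False)                     # sample covariance
vals, vecs = np.linalg.eigh(S)
idx = np.argsort(vals)[::-1][:K]                # top-K eigenpairs
low_rank = (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T

R = S - low_rank                                # principal orthogonal complement
tau = 0.1                                       # ad hoc threshold level
R_thr = np.sign(R) * np.maximum(np.abs(R) - tau, 0.0)
np.fill_diagonal(R_thr, np.diag(R))             # keep the diagonal un-thresholded
Sigma_poet = low_rank + R_thr
print("smallest eigenvalue of POET estimate:",
      np.linalg.eigvalsh(Sigma_poet).min())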
Reweighted mass center based object-oriented sparse subspace clustering for hyperspectral images
NASA Astrophysics Data System (ADS)
Zhai, Han; Zhang, Hongyan; Zhang, Liangpei; Li, Pingxiang
2016-10-01
Considering the inevitable obstacles faced by pixel-based clustering methods, such as salt-and-pepper noise, high computational complexity, and the lack of spatial information, a reweighted mass center based object-oriented sparse subspace clustering (RMC-OOSSC) algorithm for hyperspectral images (HSIs) is proposed. First, the mean-shift segmentation method is utilized to oversegment the HSI to obtain meaningful objects. Second, a distance reweighted mass center learning model is presented to extract the representative and discriminative features for each object. Third, assuming that all the objects are sampled from a union of subspaces, it is natural to apply the SSC algorithm to the HSI. To address the high correlation among the hyperspectral objects, a weighting scheme is adopted to ensure that highly correlated objects are preferred in the sparse representation procedure, reducing the representation errors. Two widely used hyperspectral datasets were utilized to test the performance of the proposed RMC-OOSSC algorithm, obtaining high clustering accuracies (overall accuracy) of 71.98% and 89.57%, respectively. The experimental results show that the proposed method clearly improves the clustering performance with respect to the other state-of-the-art clustering methods, and it significantly reduces the computational time.
FDD Massive MIMO Channel Estimation With Arbitrary 2D-Array Geometry
NASA Astrophysics Data System (ADS)
Dai, Jisheng; Liu, An; Lau, Vincent K. N.
2018-05-01
This paper addresses the problem of downlink channel estimation in frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems. The existing methods usually exploit hidden sparsity under a discrete Fourier transform (DFT) basis to estimate the downlink channel. However, there are at least two shortcomings of these DFT-based methods: 1) they are applicable to uniform linear arrays (ULAs) only, since the DFT basis requires a special structure of ULAs, and 2) they always suffer from a performance loss due to the leakage of energy over some DFT bins. To deal with the above shortcomings, we introduce an off-grid model for downlink channel sparse representation with arbitrary 2D-array antenna geometry, and propose an efficient sparse Bayesian learning (SBL) approach for the sparse channel recovery and off-grid refinement. The main idea of the proposed off-grid method is to consider the sampled grid points as adjustable parameters. Utilizing an inexact block majorization-minimization (MM) algorithm, the grid points are refined iteratively to minimize the off-grid gap. Finally, we further extend the solution to uplink-aided channel estimation by exploiting the angular reciprocity between downlink and uplink channels, which brings enhanced recovery performance.
Zhang, Zutao; Luo, Dianyuan; Rasim, Yagubov; Li, Yanjun; Meng, Guanjun; Xu, Jian; Wang, Chunbai
2016-02-19
In this paper, we present a vehicle active safety model for vehicle speed control based on driver vigilance detection using low-cost, comfortable, wearable electroencephalographic (EEG) sensors and sparse representation. The proposed system consists of three main steps, namely wireless wearable EEG collection, driver vigilance detection, and vehicle speed control strategy. First of all, a homemade low-cost comfortable wearable brain-computer interface (BCI) system with eight channels is designed for collecting the driver's EEG signal. Second, wavelet de-noising and down-sample algorithms are utilized to enhance the quality of EEG data, and Fast Fourier Transformation (FFT) is adopted to extract the EEG power spectrum density (PSD). In this step, sparse representation classification combined with k-singular value decomposition (KSVD) is first introduced on the PSD to estimate the driver's vigilance level. Finally, a novel safety strategy of vehicle speed control, which controls the electronic throttle opening and automatic braking after driver fatigue detection using the above method, is presented to avoid serious collisions and traffic accidents. The simulation and practical testing results demonstrate the feasibility of the vehicle active safety model.
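The KSVD/sparse-representation classifier is not reproduced here; below is a minimal sketch of the PSD feature-extraction step, using Welch's FFT-based estimator on one synthetic channel. Channel count, sampling rate, and band edges are illustrative assumptions:

import numpy as np
from scipy.signal import welch

# Welch PSD estimate on one EEG channel, reduced to standard band powers.
fs = 256                                       # assumed sampling rate (Hz)
rng = np.random.default_rng(5)
eeg = rng.standard_normal(fs * 10)             # 10 s of one synthetic channel

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # assumed edges
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    power = np.trapz(psd[mask], freqs[mask])   # integrate PSD over the band
    print(f"{name} band power: {power:.4f}")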
Compressive hyperspectral and multispectral imaging fusion
NASA Astrophysics Data System (ADS)
Espitia, Óscar; Castillo, Sergio; Arguello, Henry
2016-05-01
Image fusion is a valuable framework which combines two or more images of the same scene from one or multiple sensors, making it possible to improve the resolution of the images and increase the interpretable content. In remote sensing, a common fusion problem consists of merging hyperspectral (HS) and multispectral (MS) images that involve large amounts of redundant data, ignoring the highly correlated structure of the datacube along the spatial and spectral dimensions. Compressive HS and MS systems compress the spectral data in the acquisition step, allowing the data redundancy to be reduced by using different sampling patterns. This work presents a compressed HS and MS image fusion approach, which uses a high dimensional joint sparse model. The joint sparse model is formulated by combining HS and MS compressive acquisition models. The high spectral and spatial resolution image is reconstructed by using sparse optimization algorithms. Different fusion spectral image scenarios are used to explore the performance of the proposed scheme. Several simulations with synthetic and real datacubes show promising results, as the reliable reconstruction of a high spectral and spatial resolution image can be achieved using just 50% of the datacube.
Integrative sparse principal component analysis of gene expression data.
Liu, Mengque; Fan, Xinyan; Fang, Kuangnan; Zhang, Qingzhao; Ma, Shuangge
2017-12-01
In the analysis of gene expression data, dimension reduction techniques have been extensively adopted. The most popular one is perhaps the PCA (principal component analysis). To generate more reliable and more interpretable results, the SPCA (sparse PCA) technique has been developed. With the "small sample size, high dimensionality" characteristic of gene expression data, the analysis results generated from a single dataset are often unsatisfactory. Under contexts other than dimension reduction, integrative analysis techniques, which jointly analyze the raw data of multiple independent datasets, have been developed and shown to outperform "classic" meta-analysis and other multidatasets techniques and single-dataset analysis. In this study, we conduct integrative analysis by developing the iSPCA (integrative SPCA) method. iSPCA achieves the selection and estimation of sparse loadings using a group penalty. To take advantage of the similarity across datasets and generate more accurate results, we further impose contrasted penalties. Different penalties are proposed to accommodate different data conditions. Extensive simulations show that iSPCA outperforms the alternatives under a wide spectrum of settings. The analysis of breast cancer and pancreatic cancer data further shows iSPCA's satisfactory performance.
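The group and contrasted penalties of iSPCA are not available in standard libraries; a minimal sketch of the single-dataset sparse-PCA building block it extends, using scikit-learn on synthetic "small n, large p" data:

import numpy as np
from sklearn.decomposition import SparsePCA

# Sparse PCA yields loadings with many exact zeros, i.e. components driven by
# a small subset of genes. The integrative penalties of iSPCA are not shown.
rng = np.random.default_rng(6)
n, p = 100, 200                                  # "small n, large p" expression data
X = rng.standard_normal((n, p))
X[:, :10] += rng.standard_normal((n, 1)) * 2.0   # a few genes share a latent signal

spca = SparsePCA(n_components=2, alpha=2.0, random_state=0).fit(X)
loadings = spca.components_
print("nonzero loadings per component:",
      np.count_nonzero(loadings, axis=1), "out of", p)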
Multimode waveguide speckle patterns for compressive sensing.
Valley, George C; Sefler, George A; Justin Shaw, T
2016-06-01
Compressive sensing (CS) of sparse gigahertz-band RF signals using microwave photonics may achieve better performance with smaller size, weight, and power than electronic CS or conventional Nyquist-rate sampling. The critical element in a CS system is the device that produces the CS measurement matrix (MM). We show that passive speckle patterns in multimode waveguides potentially provide excellent MMs for CS. We measure and calculate the MM for a multimode fiber and perform simulations using this MM in a CS system. We show that the speckle MM exhibits the sharp phase transition and coherence properties needed for CS and that these properties are similar to those of a sub-Gaussian MM with the same mean and standard deviation. We calculate the MM for a multimode planar waveguide and find dimensions of the planar guide that give a speckle MM with a performance similar to that of the multimode fiber. The CS simulations show that all measured and calculated speckle MMs exhibit a robust performance with equal amplitude signals that are sparse in time, in frequency, and in wavelets (Haar wavelet transform). The planar waveguide results indicate a path to a microwave photonic integrated circuit for measuring sparse gigahertz-band RF signals using CS.
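The measured speckle MM is not available here; below is a minimal sketch of the CS pipeline with a sub-Gaussian stand-in matrix (which the abstract reports behaves similarly) and plain ISTA recovery of an equal-amplitude time-sparse signal. All dimensions and the solver choice are illustrative assumptions:

import numpy as np

# CS measurement and l1 recovery with a random stand-in for the speckle MM.
rng = np.random.default_rng(7)
n, m, k = 256, 96, 8
A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))   # sub-Gaussian stand-in MM
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = 1.0        # equal-amplitude time-sparse signal
y = A @ x                                       # compressed measurements

# ISTA: iterative soft thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1
lam = 0.005
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant
x_hat = np.zeros(n)
for _ in range(2000):
    g = x_hat - step * A.T @ (A @ x_hat - y)    # gradient step
    x_hat = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage

print("max reconstruction error:", np.abs(x_hat - x).max())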
NASA Astrophysics Data System (ADS)
Han-Ming, Zhang; Lin-Yuan, Wang; Lei, Li; Bin, Yan; Ai-Long, Cai; Guo-En, Hu
2016-07-01
Additional sparsity priors on images have been the subject of much research in sparse-view computed tomography (CT) reconstruction. A method employing image gradient sparsity is often used to reduce the sampling rate and is shown to remove unwanted artifacts while preserving sharp edges, but may cause blocky or patchy artifacts. To eliminate this drawback, we propose a novel sparsity exploitation-based model for CT image reconstruction. In the presented model, the sparse representation and sparsity exploitation of both gradient and nonlocal gradient are investigated. The new model is shown to offer the potential for better results by introducing a similarity prior information of the image structure. Then, an effective alternating direction minimization algorithm is developed to optimize the objective function with a robust convergence result. Qualitative and quantitative evaluations have been carried out both on the simulation and real data in terms of accuracy and resolution properties. The results indicate that the proposed method can be applied for achieving better image-quality potential with the theoretically expected detailed feature preservation.
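The paper's alternating-direction solver with the nonlocal-gradient prior is not in standard libraries; the plain gradient-sparsity (total-variation) prior it builds on can be illustrated with a TV denoising step on a piecewise-constant phantom. The scikit-image call and weight are illustrative, not the paper's method:

import numpy as np
from skimage.restoration import denoise_tv_chambolle

# TV regularization favors piecewise-constant images: it suppresses noise
# while keeping sharp edges, which is the behavior the abstract describes.
rng = np.random.default_rng(11)
phantom = np.zeros((128, 128))
phantom[32:96, 32:96] = 1.0                    # piecewise-constant structure
phantom[48:80, 48:80] = 0.5
noisy = phantom + rng.normal(0, 0.2, phantom.shape)

denoised = denoise_tv_chambolle(noisy, weight=0.15)   # ad hoc weight
print("noisy RMSE:   ", np.sqrt(np.mean((noisy - phantom) ** 2)))
print("denoised RMSE:", np.sqrt(np.mean((denoised - phantom) ** 2)))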
An embedded system for face classification in infrared video using sparse representation
NASA Astrophysics Data System (ADS)
Saavedra M., Antonio; Pezoa, Jorge E.; Zarkesh-Ha, Payman; Figueroa, Miguel
2017-09-01
We propose a platform for robust face recognition in Infrared (IR) images using Compressive Sensing (CS). In line with CS theory, the classification problem is solved using a sparse representation framework, where test images are modeled by means of a linear combination of the training set. Because the training set constitutes an over-complete dictionary, we identify new images by finding their sparsest representation based on the training set, using standard l1-minimization algorithms. Unlike conventional face-recognition algorithms, feature extraction is performed using random projections with a precomputed binary matrix, as proposed in the CS literature. This random sampling reduces the effects of noise and occlusions such as facial hair, eyeglasses, and disguises, which are notoriously challenging in IR images. Thus, the performance of our framework is robust to these noise and occlusion factors, achieving an average accuracy of approximately 90% when the UCHThermalFace database is used for training and testing purposes. We implemented our framework on a high-performance embedded digital system, where the computation of the sparse representation of IR images was performed by dedicated hardware using a deeply pipelined architecture on a Field-Programmable Gate Array (FPGA).
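A minimal sketch of the sparse-representation classification rule described above: code the test vector over the training dictionary with an l1 penalty, then assign the class whose training columns give the smallest reconstruction residual. Random vectors stand in for random-projection features of IR faces; the lasso solver and all dimensions are illustrative assumptions:

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(8)
d, per_class, n_classes = 64, 10, 5
centers = rng.standard_normal((n_classes, d)) * 3
train = np.vstack([c + rng.standard_normal((per_class, d)) for c in centers])
labels = np.repeat(np.arange(n_classes), per_class)
test = centers[2] + rng.standard_normal(d)      # a sample from class 2

# Sparse code of the test vector over the training dictionary (columns).
lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=50_000)
coef = lasso.fit(train.T, test).coef_

residuals = []
for c in range(n_classes):
    mask = labels == c
    recon = train[mask].T @ coef[mask]          # use only class-c coefficients
    residuals.append(np.linalg.norm(test - recon))
print("predicted class:", int(np.argmin(residuals)))   # should print 2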
Sparse radar imaging using 2D compressed sensing
NASA Astrophysics Data System (ADS)
Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying
2014-10-01
Radar imaging is an ill-posed linear inverse problem, and compressed sensing (CS) has proven to have tremendous potential in this field. This paper surveys the theory of radar imaging and concludes that ISAR imaging can be formulated mathematically as a 2D sparse decomposition problem. Based on CS, we propose a novel measuring strategy for ISAR imaging radar that uses random sub-sampling in both the range and azimuth dimensions, which reduces the amount of sampled data tremendously. For the 2D reconstruction problem, the ordinary approach converts the 2D problem into 1D via the Kronecker product, which sharply increases the dictionary size and computational cost. In this paper, we instead introduce the 2D-SL0 algorithm for image reconstruction. It is shown that 2D-SL0 achieves results equivalent to those of 1D reconstruction methods while reducing the computational complexity and memory usage significantly. Moreover, we present simulation results that demonstrate the effectiveness and feasibility of our method.
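The Kronecker blow-up the abstract refers to can be checked directly: the 2D measurement model Y = Φr X Φaᵀ is equivalent to the 1D model (Φa ⊗ Φr) vec(X), but the single Kronecker matrix is vastly larger than the two 2D operators. A small numerical verification (all sizes hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
Nr, Na = 64, 64                 # range / azimuth grid sizes
Mr, Ma = 32, 32                 # random sub-sampling in both dimensions

Phi_r = rng.standard_normal((Mr, Nr))   # range measurement matrix
Phi_a = rng.standard_normal((Ma, Na))   # azimuth measurement matrix
X = np.zeros((Nr, Na))
X[10, 20], X[40, 50] = 1.0, 0.5         # sparse scene (two scatterers)

# 2D form: two small operators, cheap to store and apply.
Y = Phi_r @ X @ Phi_a.T

# Equivalent 1D form via the Kronecker product: one huge matrix.
A = np.kron(Phi_a, Phi_r)               # (Mr*Ma) x (Nr*Na), grows fast
y = A @ X.reshape(-1, order="F")        # vec(X), column-major
print(np.allclose(Y.reshape(-1, order="F"), y))   # True: same measurements
print("2D operators:", Phi_r.size + Phi_a.size, "entries; Kronecker:", A.size)
```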
Ghysels, Pieter; Li, Xiaoye S.; Rouet, Francois-Henry; ...
2016-10-27
Here, we present a sparse linear system solver that is based on a multifrontal variant of Gaussian elimination and exploits low-rank approximation of the resulting dense frontal matrices. We use hierarchically semiseparable (HSS) matrices, which have low-rank off-diagonal blocks, to approximate the frontal matrices. For HSS matrix construction, a randomized sampling algorithm is used together with interpolative decompositions. The combination of the randomized compression with a fast ULV HSS factorization leads to a solver with lower computational complexity than the standard multifrontal method for many applications, resulting in speedups of up to 7-fold for problems in our test suite. The implementation targets many-core systems by using task parallelism with dynamic runtime scheduling. Numerical experiments show performance improvements over state-of-the-art sparse direct solvers. The implementation achieves high performance and good scalability on a range of modern shared-memory parallel systems, including the Intel Xeon Phi (MIC). The code is part of a software package called STRUMPACK - STRUctured Matrices PACKage, which also has a distributed-memory component for dense rank-structured matrices.
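The randomized sampling idea used for HSS construction can be illustrated with a generic randomized range finder (in the style of Halko et al., not the STRUMPACK implementation): sketch the block with a Gaussian test matrix, orthonormalize, and project. The kernel test matrix below is a hypothetical stand-in for a frontal matrix whose off-diagonal blocks are numerically low-rank.

```python
import numpy as np

def randomized_lowrank(A, rank, oversample=10, rng=None):
    """Randomized sampling for low-rank approximation: sketch the range of A
    with a Gaussian test matrix, then project A onto that subspace."""
    rng = rng or np.random.default_rng(0)
    Omega = rng.standard_normal((A.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(A @ Omega)        # orthonormal basis for sampled range
    B = Q.T @ A                           # small projected matrix
    return Q[:, :rank], B[:rank]          # A ~= Q[:, :rank] @ B[:rank]

n = 400
x = np.linspace(0, 1, 2 * n)
K = 1.0 / (1e-3 + np.abs(x[:, None] - x[None, :]))   # kernel-like test matrix
block = K[:n, n:]                                    # an off-diagonal block
Q, B = randomized_lowrank(block, rank=20)
err = np.linalg.norm(block - Q @ B) / np.linalg.norm(block)
print(f"rank-20 relative error: {err:.2e}")
```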
Meng, Yuguang; Lei, Hao
2010-06-01
An efficient iterative gridding reconstruction method with correction of off-resonance artifacts was developed, especially tailored for multiple-shot non-Cartesian imaging. The novelty of the method lies in constructing the transformation matrix for gridding (T) as the convolution of two sparse matrices: the former determined by the sampling interval and the spatial distribution of the off-resonance frequencies, and the latter by the sampling trajectory and the target grid in Cartesian space. The resulting T matrix is also sparse and can be solved efficiently with the iterative conjugate gradient algorithm. It was shown that, with the proposed method, the reconstruction speed in multiple-shot non-Cartesian imaging can be improved significantly while retaining high reconstruction fidelity. More importantly, the method allows a tradeoff between the accuracy and the computation time of reconstruction, making it possible to customize the method for different applications. The performance of the proposed method was demonstrated by numerical simulation and multiple-shot spiral imaging of rat brain at 4.7 T. (c) 2010 Wiley-Liss, Inc.
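Since the reconstruction above reduces to solving a sparse linear system with conjugate gradients, a toy analogue can be sketched with scipy. The matrices here are random stand-ins, not an actual gridding operator; CG requires a symmetric positive-definite system, so the sketch solves the normal equations.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
n = 500
# T as a product of two sparse factors (stand-ins for the off-resonance and
# trajectory/grid matrices), shifted to keep the system well-conditioned.
S1 = sparse.random(n, n, density=0.01, random_state=0)
S2 = sparse.random(n, n, density=0.01, random_state=1)
T = S1 @ S2 + 10 * sparse.identity(n)

b = rng.standard_normal(n)
A = (T.T @ T).tocsr()                   # normal equations: SPD, CG-friendly
x, info = cg(A, T.T @ b)
print("converged:", info == 0, "residual:", np.linalg.norm(A @ x - T.T @ b))
```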
Sparse Matrices in MATLAB: Design and Implementation
NASA Technical Reports Server (NTRS)
Gilbert, John R.; Moler, Cleve; Schreiber, Robert
1992-01-01
The matrix computation language and environment MATLAB is extended to include sparse matrix storage and operations. The only change to the outward appearance of the MATLAB language is a pair of commands to create full or sparse matrices. Nearly all the operations of MATLAB now apply equally to full or sparse matrices, without any explicit action by the user. The sparse data structure represents a matrix in space proportional to the number of nonzero entries, and most of the operations compute sparse results in time proportional to the number of arithmetic operations on nonzeros.
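The same design point, storage proportional to the nonzeros and operations that apply transparently, can be illustrated in Python's scipy.sparse (an analogue for illustration, not the MATLAB implementation the paper describes):

```python
import numpy as np
from scipy import sparse

# Build a 10000 x 10000 tridiagonal matrix: about 3 nonzeros per row.
n = 10_000
A = sparse.diags([1.0, -2.0, 1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")

# Storage is proportional to the number of nonzeros, not n*n.
print("nonzeros:", A.nnz)                                   # ~3n, not 10^8
print("bytes:", A.data.nbytes + A.indices.nbytes + A.indptr.nbytes)

# Operations apply transparently and cost time proportional to the
# arithmetic on nonzeros.
x = np.ones(n)
y = A @ x                                                   # sparse mat-vec
print(y[:3], y[-3:])
```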
Carim, Kellie J; Christianson, Kyle R; McKelvey, Kevin M; Pate, William M; Silver, Douglas B; Johnson, Brett M; Galloway, Benjamin T; Young, Michael K; Schwartz, Michael K
2016-01-01
The spread of Mysis diluviana, a small glacial relict crustacean, outside its native range has led to unintended shifts in the composition of native fish communities throughout western North America. As a result, biologists seek accurate methods of determining the presence of M. diluviana, especially at low densities or during the initial stages of an invasion. Environmental DNA (eDNA) provides one solution for detecting M. diluviana, but building eDNA markers that are both sensitive and species-specific is challenging when the distribution and taxonomy of closely related non-target taxa are poorly understood, published genetic data are sparse, and tissue samples are difficult to obtain. To address these issues, we developed a pair of independent eDNA markers to increase the likelihood of a positive detection of M. diluviana when present and reduce the probability of false positive detections from closely related non-target species. Because tissue samples of closely related and possibly sympatric non-target taxa could not be obtained, we used synthetic DNA sequences of closely related non-target species to test the specificity of the eDNA markers. Both eDNA markers yielded positive detections from five waterbodies where M. diluviana was known to be present, and no detections in five others where this species was thought to be absent. Daytime samples from varying depths in one waterbody occupied by M. diluviana demonstrated that samples near the lake bottom produced 5 to more than 300 times as many eDNA copies as samples taken at other depths, but all samples tested positive regardless of depth.
Holton, Chase; Luo, Hong; Dahlen, Paul; Gorder, Kyle; Dettenmaier, Erik; Johnson, Paul C
2013-01-01
Current vapor intrusion (VI) pathway assessment heavily weights concentrations from infrequent (monthly to seasonal) 24 h indoor air samples. This study collected a long-term, high-frequency data set that can be used to assess indoor air sampling strategies for answering key pathway assessment questions such as "Is VI occurring?" and "Will VI impacts exceed thresholds of concern?". Indoor air sampling was conducted for 2.5 years at 2-4 h intervals in a house overlying a dilute chlorinated solvent plume (10-50 μg/L TCE). Indoor air concentrations varied by three orders of magnitude (<0.01-10 ppbv TCE) with two recurring behaviors: the VI-active behavior, prevalent in fall, winter, and spring, involved time-varying impacts intermixed with sporadic periods of inactivity; the VI-dormant behavior, prevalent in summer, involved long periods of inactivity with sporadic VI impacts. These data were used to study the outcomes of three simple sparse-data sampling plans; the probabilities of false-negative and false-positive decisions depended on the ratio of the action level to the true mean of the data, the number of exceedances required, and the sampling strategy. The analysis also suggested a significant potential for poor characterization of long-term mean concentrations with sparse sampling plans. The results point to a need for additional dense data sets and further investigation into the robustness of possible VI assessment paradigms. As this is the first data set of its kind, it is unknown whether the results are representative of other VI sites.
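The dependence of decision errors on the action-level-to-mean ratio, the exceedance rule, and the sampling plan can be explored with a small Monte Carlo. The sketch below draws sparse random samples from a synthetic lognormal record that stands in for the indoor-air data; all numbers are hypothetical.

```python
import numpy as np

def decision_error_rate(conc, action_level, n_samples=4, n_exceed=1,
                        n_trials=20_000, seed=0):
    """Monte Carlo sketch: draw sparse random sampling events from a long
    high-frequency record and count how often the plan's decision
    (>= n_exceed samples above the action level) disagrees with the
    decision based on the full record's true mean."""
    rng = np.random.default_rng(seed)
    truth = conc.mean() > action_level
    errors = 0
    for _ in range(n_trials):
        s = rng.choice(conc, size=n_samples, replace=False)
        decided = (s > action_level).sum() >= n_exceed
        errors += decided != truth
    return errors / n_trials

# Synthetic stand-in for an indoor-air record: lognormal with heavy tails.
rng = np.random.default_rng(1)
record = rng.lognormal(mean=-1.0, sigma=1.5, size=8_000)   # ppbv, hypothetical
for ratio in (0.5, 1.0, 2.0):
    al = ratio * record.mean()
    print(f"action level = {ratio:.1f} x true mean -> "
          f"error rate {decision_error_rate(record, al):.3f}")
```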
Color normalization of histology slides using graph regularized sparse NMF
NASA Astrophysics Data System (ADS)
Sha, Lingdao; Schonfeld, Dan; Sethi, Amit
2017-03-01
Computer-based automatic medical image processing and quantification are becoming popular in digital pathology. However, the preparation of histology slides can vary widely due to differences in staining equipment, procedures, and reagents, which can reduce the accuracy of algorithms that analyze their color and texture information. To reduce the unwanted color variations, various supervised and unsupervised color normalization methods have been proposed. Compared with supervised color normalization methods, unsupervised methods have the advantages of time and cost efficiency and universal applicability. Most of the unsupervised color normalization methods for histology are based on stain separation. Because stain concentration cannot be negative and different parts of the tissue absorb different stains, nonnegative matrix factorization (NMF), and in particular its sparse version (SNMF), are good candidates for stain separation. However, most existing unsupervised color normalization methods, such as PCA, ICA, NMF, and SNMF, fail to consider important information about the sparse manifolds that the image's pixels occupy, which can result in a loss of texture information during color normalization. Manifold learning methods such as the graph Laplacian have proven very effective for interpreting high-dimensional data. In this paper, we propose a novel unsupervised stain separation method called graph-regularized sparse nonnegative matrix factorization (GSNMF). By considering the sparse prior of stain concentration together with manifold information from the high-dimensional image data, our method performs better in stain color deconvolution than existing unsupervised methods, especially in preserving connected texture information. To utilize the texture information, we construct a nearest-neighbor graph between pixels within a spatial area of an image based on their distances, using a heat kernel in lαβ space. The representation of a pixel in the stain density space is constrained to follow the feature distance of the pixel to the pixels in the neighborhood graph. Using a color matrix transfer method with the stain concentrations found by GSNMF, the color normalization performance was also better than that of existing methods.
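A minimal sketch of graph-regularized sparse NMF with multiplicative updates may help make the objective concrete. This is a simplified stand-in for the paper's GSNMF: a generic affinity graph replaces the heat-kernel lαβ neighborhood graph, and the update rules follow the standard graph-regularized NMF pattern.

```python
import numpy as np

def gsnmf(X, k, W_graph, lam=0.1, mu=0.01, n_iter=300, rng=None):
    """Sketch: minimize ||X - W H||_F^2 + lam*Tr(H L H^T) + mu*||H||_1
    with multiplicative updates, where L = D - W_graph is the Laplacian
    of the pixel-neighborhood affinity graph W_graph."""
    rng = rng or np.random.default_rng(0)
    m, n = X.shape
    W, H = rng.random((m, k)), rng.random((k, n))
    Dg = np.diag(W_graph.sum(axis=1))
    eps = 1e-12
    for _ in range(n_iter):
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        H *= (W.T @ X + lam * H @ W_graph) / \
             (W.T @ W @ H + lam * H @ Dg + mu + eps)
    return W, H

# Toy demo: two 'stain' spectra mixed across 100 pixels on a chain graph.
n = 100
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0      # neighboring pixels are similar
stains = np.array([[0.9, 0.1, 0.3], [0.2, 0.8, 0.5]]).T   # 3 channels x 2 stains
conc = np.vstack([np.linspace(0, 1, n), np.linspace(1, 0, n)])
X = stains @ conc
W, H = gsnmf(X, k=2, W_graph=A)
print("reconstruction error:", np.linalg.norm(X - W @ H))
```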
Nakamura, Kengo; Yasutaka, Tetsuo; Kuwatani, Tatsu; Komai, Takeshi
2017-11-01
In this study, we applied sparse multiple linear regression (SMLR) analysis to clarify the relationships between soil properties and adsorption characteristics for a range of soils across Japan, and to identify easily obtained physical and chemical soil properties that could be used to predict the K and n values of cadmium, lead, and fluorine. A model was first constructed that can easily predict the K and n values from nine soil parameters (pH, cation exchange capacity, specific surface area, total carbon, soil organic matter from loss on ignition, water holding capacity, and the ratios of sand, silt, and clay). The K and n values of cadmium, lead, and fluorine for 17 soil samples were used to verify the SMLR models, using the root mean square error values obtained from 512 combinations of soil parameters. The SMLR analysis indicated that fluorine adsorption to soil may be associated with organic matter, whereas cadmium or lead adsorption to soil is more likely to be influenced by soil pH and ignition loss (IL). We found that an accurate K value can be predicted from more than three soil parameters for most soils. Approximately 65% of the predicted values were between 33% and 300% of their measured values for the K value; 76% of the predicted values were within ±30% of their measured values for the n value. Our findings suggest that the adsorption properties of lead, cadmium, and fluorine to soil can be predicted from the soil's physical and chemical properties using the presented models. Copyright © 2017 Elsevier Ltd. All rights reserved.
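Sparse multiple linear regression of this kind can be sketched with an l1-penalized linear model, which zeroes out uninformative soil parameters. The data below are random stand-ins for the paper's measurements, and the parameter names and toy relation are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-in: 9 soil parameters for 17 samples, log10(K) target.
rng = np.random.default_rng(0)
params = ["pH", "CEC", "SSA", "TC", "SOM", "WHC", "sand", "silt", "clay"]
X = rng.random((17, 9))
y = 1.5 * X[:, 0] + 0.8 * X[:, 4] + 0.1 * rng.standard_normal(17)  # toy relation

# The l1 penalty drives irrelevant parameters' coefficients to exactly zero.
model = make_pipeline(StandardScaler(), LassoCV(cv=5)).fit(X, y)
coefs = model.named_steps["lassocv"].coef_
for name, c in zip(params, coefs):
    if abs(c) > 1e-8:
        print(f"{name}: {c:+.3f}")          # expect pH and SOM to survive
```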
Storage of sparse files using parallel log-structured file system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, John M.; Faibish, Sorin; Grider, Gary
A sparse file is stored without holes by storing a data portion of the sparse file using a parallel log-structured file system and generating an index entry for the data portion, the index entry comprising a logical offset, physical offset, and length of the data portion. The holes can be restored to the sparse file upon a reading of the sparse file. The data portion can be stored at a logical end of the sparse file. Additional storage efficiency can optionally be achieved by (i) detecting a write pattern for a plurality of the data portions and generating a single patterned index entry for the plurality of the patterned data portions; and/or (ii) storing the patterned index entries for a plurality of the sparse files in a single directory, wherein each entry in the single directory comprises an identifier of a corresponding sparse file.
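A toy sketch of the index-entry scheme (field names are hypothetical; the patent-style description above does not fix an API): each data portion is appended to a log and indexed by logical offset, physical offset, and length, and holes are re-created as zeros on read.

```python
from dataclasses import dataclass

@dataclass
class IndexEntry:
    """Index record for one data portion of a sparse file: where the bytes
    sit logically in the file, where they were appended physically in the
    log, and how long they are."""
    logical_offset: int
    physical_offset: int
    length: int

def restore_with_holes(entries, log, file_size):
    """Rebuild the sparse file on read: zero-fill, then copy each data
    portion from the log back to its logical offset."""
    out = bytearray(file_size)                     # holes read back as zeros
    for e in sorted(entries, key=lambda e: e.logical_offset):
        out[e.logical_offset:e.logical_offset + e.length] = \
            log[e.physical_offset:e.physical_offset + e.length]
    return bytes(out)

# Write path sketch: data portions packed contiguously in the log, hole skipped.
log = b"helloworld"
entries = [IndexEntry(0, 0, 5), IndexEntry(100, 5, 5)]     # 95-byte hole
data = restore_with_holes(entries, log, 105)
print(data[:5], data[100:])                                # b'hello' b'world'
```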
Learning Sparse Feature Representations using Probabilistic Quadtrees and Deep Belief Nets
2015-04-24
Learning sparse feature representations is a useful instrument for solving an ... novel framework for the classification of handwritten digits that learns sparse representations using probabilistic quadtrees and Deep Belief Nets.
Taft, L M; Evans, R S; Shyu, C R; Egger, M J; Chawla, N; Mitchell, J A; Thornton, S N; Bray, B; Varner, M
2009-04-01
The IOM report, Preventing Medication Errors, emphasizes the overall lack of knowledge of the incidence of adverse drug events (ADE). Operating rooms, emergency departments, and intensive care units are known to have a higher incidence of ADE. Labor and delivery (L&D) is an emergency care unit that could have an increased risk of ADE, where reported rates remain low and under-reporting is suspected. Risk factor identification with electronic pattern recognition techniques could improve ADE detection rates. The objective of the present study is to apply the Synthetic Minority Over-sampling Technique (SMOTE) as an enhanced sampling method in a sparse dataset to generate prediction models that identify ADE in women admitted for labor and delivery, based on patient risk factors and comorbidities. By creating synthetic cases with the SMOTE algorithm and using 10-fold cross-validation, we demonstrated improved performance of the Naïve Bayes and decision tree algorithms. The true positive rate (TPR) of 0.32 on the raw dataset increased to 0.67 on the 800% over-sampled dataset. Enhanced performance of classification algorithms can be attained with the use of synthetic minority-class oversampling techniques in sparse clinical datasets. Predictive models created in this manner can be used to develop evidence-based ADE monitoring systems.
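The core resampling step can be sketched with the imbalanced-learn implementation of SMOTE. The data below are synthetic stand-ins for the L&D risk-factor dataset, and, mirroring the abstract's design, cross-validation is run after oversampling (a choice known to give optimistic estimates, since synthetic neighbors of test points leak into training folds).

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Hypothetical stand-in: 1000 admissions, 12 risk-factor/comorbidity
# features, roughly 8% positive (ADE) class.
rng = np.random.default_rng(0)
X = rng.random((1000, 12))
y = (X[:, 0] + X[:, 1] + 0.3 * rng.standard_normal(1000) > 1.7).astype(int)
print("positives:", int(y.sum()))

# Oversample the minority class with synthetic interpolated cases.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)

clf = GaussianNB()
print("recall, raw:   ",
      cross_val_score(clf, X, y, cv=10, scoring="recall").mean().round(2))
print("recall, SMOTEd:",
      cross_val_score(clf, X_res, y_res, cv=10, scoring="recall").mean().round(2))
```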
NASA Technical Reports Server (NTRS)
Alpan, S. (Principal Investigator)
1976-01-01
The author has identified the following significant results. It is observed that LANDSAT images can be used to prepare an accurate tectonic map of the study areas. These images are most useful for geological mapping in areas where vegetation cover is sparse. LANDSAT images can be used to identify and separate evergreens from deciduous trees, and they can successfully delineate the boundaries of forested areas. The water holding capacity of the soil, internal and external drainage, vegetation pattern, irrigated and nonirrigated land, and fallow and planted fields are also detectable on the LANDSAT imagery.
Liu, Hongcheng; Yao, Tao; Li, Runze; Ye, Yinyu
2017-11-01
This paper concerns folded concave penalized sparse linear regression (FCPSLR), a class of popular sparse recovery methods. Although FCPSLR yields desirable recovery performance when solved globally, computing a global solution is NP-complete. Despite some existing statistical performance analyses on local minimizers or on specific FCPSLR-based learning algorithms, it remains an open question whether local solutions that are known to admit fully polynomial-time approximation schemes (FPTAS) may already be sufficient to ensure the statistical performance, and whether that statistical performance can be non-contingent on the specific design of the computing procedure. To address these questions, this paper presents the following threefold results: (i) any local solution (stationary point) is a sparse estimator, under some conditions on the parameters of the folded concave penalties. (ii) Perhaps more importantly, any local solution satisfying a significant subspace second-order necessary condition (S³ONC), which is weaker than the second-order KKT condition, yields a bounded error in approximating the true parameter with high probability. In addition, if the minimal signal strength is sufficient, the S³ONC solution likely recovers the oracle solution. This result also shows that the goal of improving the statistical performance is consistent with the optimization criterion of minimizing the suboptimality gap in solving the non-convex programming formulation of FCPSLR. (iii) We apply (ii) to the special case of FCPSLR with the minimax concave penalty (MCP) and show that, under the restricted eigenvalue condition, any S³ONC solution with a better objective value than the Lasso solution entails the strong oracle property. In addition, such a solution generates a model error (ME) comparable to that of the optimal but exponential-time sparse estimator given a sufficient sample size, while the worst-case ME is comparable to that of the Lasso in general. Furthermore, an S³ONC solution can be computed by an FPTAS.
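The MCP mentioned in (iii) has a closed-form scalar proximal step (firm thresholding, for gamma > 1), which is the basic building block of local solvers for this penalty. A small sketch of the penalty and its threshold rule:

```python
import numpy as np

def mcp_penalty(t, lam, gamma=3.0):
    """Minimax concave penalty (MCP): quadratic taper up to gamma*lam,
    constant beyond, so large coefficients are left nearly unshrunk."""
    a = np.abs(t)
    return np.where(a <= gamma * lam,
                    lam * a - a ** 2 / (2 * gamma),
                    0.5 * gamma * lam ** 2)

def mcp_threshold(z, lam, gamma=3.0):
    """Scalar proximal step for MCP with unit step (gamma > 1):
    'firm' thresholding between soft and hard thresholding."""
    a = np.abs(z)
    firm = np.where(a <= lam, 0.0,
                    np.sign(z) * (a - lam) / (1.0 - 1.0 / gamma))
    return np.where(a <= gamma * lam, firm, z)

z = np.linspace(-3, 3, 7)
print(mcp_threshold(z, lam=1.0))   # zeroes small inputs, keeps large ones intact
```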
User's Manual for PCSMS (Parallel Complex Sparse Matrix Solver). Version 1.
NASA Technical Reports Server (NTRS)
Reddy, C. J.
2000-01-01
PCSMS (Parallel Complex Sparse Matrix Solver) is a computer code written to make use of existing real sparse direct solvers to solve complex sparse linear systems. PCSMS converts complex matrices into real matrices and uses real sparse direct matrix solvers to factor and solve the real matrices; the solution vector is then reconverted to complex numbers. Though this utility is written for the Silicon Graphics (SGI) real sparse matrix solution routines, it is general in nature and can easily be modified to work with any real sparse matrix solver. The User's Manual is written to acquaint the user with the installation and operation of the code. Driver routines are provided to help users integrate the PCSMS routines into their own codes.
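The complex-to-real conversion can be sketched as the standard 2x2 real block embedding, solved here with scipy's real sparse direct solver as a stand-in for the SGI routines (sizes and matrices are hypothetical; this illustrates the technique, not the PCSMS code itself).

```python
import numpy as np
from scipy.sparse import bmat, identity, random as sprand
from scipy.sparse.linalg import spsolve

rng = np.random.default_rng(0)
n = 200
# A random complex sparse system A z = c (toy stand-in).
Re = sprand(n, n, density=0.02, random_state=0) + identity(n)
Im = sprand(n, n, density=0.02, random_state=1)
A = (Re + 1j * Im).tocsc()
c = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Conversion: (R + iM)(x + iy) = (f + ig) becomes the real block system
# [[R, -M], [M, R]] [x; y] = [f; g], solvable by any real sparse solver.
A_real = bmat([[Re, -Im], [Im, Re]], format="csc")
rhs = np.concatenate([c.real, c.imag])
xy = spsolve(A_real, rhs)
z = xy[:n] + 1j * xy[n:]                   # reconvert to complex numbers
print("residual:", np.linalg.norm(A @ z - c))
```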
NASA Astrophysics Data System (ADS)
Ruthven, R. C.; Ketcham, R. A.; Kelly, E. D.
2015-12-01
Three-dimensional textural analysis of garnet porphyroblasts and electron microprobe analyses can, in concert, be used to pose novel tests that challenge and ultimately increase our understanding of metamorphic crystallization mechanisms. Statistical analysis of high-resolution X-ray computed tomography (CT) data of garnet porphyroblasts tells us the degree of ordering or randomness of garnets, which can be used to distinguish the rate-limiting factors behind their nucleation and growth. Electron microprobe data for cores, rims, and core-to-rim traverses are used as proxies to ascertain porphyroblast nucleation and growth rates, and the evolution of sample composition during crystallization. MnO concentrations in garnet cores serve as a proxy for the relative timing of nucleation, and rim concentrations test the hypothesis that MnO is in equilibrium sample-wide during the final stages of crystallization, and that concentrations have not been greatly altered by intracrystalline diffusion. Crystal size distributions combined with compositional data can be used to quantify the evolution of nucleation rates and sample composition during crystallization. This study focuses on quartzite schists from the Picuris Mountains with heterogeneous garnet distributions consisting of dense and sparse layers. 3D data shows that the sparse layers have smaller, less euhedral garnets, and petrographic observations show that sparse layers have more quartz and less mica than dense layers. Previous studies on rocks with homogeneously distributed garnet have shown that crystallization rates are diffusion-controlled, meaning that they are limited by diffusion of nutrients to growth and nucleation sites. This research extends this analysis to heterogeneous rocks to determine nucleation and growth rates, and test the assumption of rock-wide equilibrium for some major elements, among a set of compositionally distinct domains evolving in mm- to cm-scale proximity under identical P-T conditions.
Improved Estimation and Interpretation of Correlations in Neural Circuits
Yatsenko, Dimitri; Josić, Krešimir; Ecker, Alexander S.; Froudarakis, Emmanouil; Cotton, R. James; Tolias, Andreas S.
2015-01-01
Ambitious projects aim to record the activity of ever larger and denser neuronal populations in vivo. Correlations in neural activity measured in such recordings can reveal important aspects of neural circuit organization. However, estimating and interpreting large correlation matrices is statistically challenging. Estimation can be improved by regularization, i.e. by imposing a structure on the estimate. The amount of improvement depends on how closely the assumed structure represents dependencies in the data. Therefore, the selection of the most efficient correlation matrix estimator for a given neural circuit must be determined empirically. Importantly, the identity and structure of the most efficient estimator informs about the types of dominant dependencies governing the system. We sought statistically efficient estimators of neural correlation matrices in recordings from large, dense groups of cortical neurons. Using fast 3D random-access laser scanning microscopy of calcium signals, we recorded the activity of nearly every neuron in volumes 200 μm wide and 100 μm deep (150–350 cells) in mouse visual cortex. We hypothesized that in these densely sampled recordings, the correlation matrix should be best modeled as the combination of a sparse graph of pairwise partial correlations representing local interactions and a low-rank component representing common fluctuations and external inputs. Indeed, in cross-validation tests, the covariance matrix estimator with this structure consistently outperformed other regularized estimators. The sparse component of the estimate defined a graph of interactions. These interactions reflected the physical distances and orientation tuning properties of cells: The density of positive ‘excitatory’ interactions decreased rapidly with geometric distances and with differences in orientation preference whereas negative ‘inhibitory’ interactions were less selective. Because of its superior performance, this ‘sparse+latent’ estimator likely provides a more physiologically relevant representation of the functional connectivity in densely sampled recordings than the sample correlation matrix. PMID:25826696
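The 'sparse' half of the sparse+latent structure can be sketched with the graphical lasso, which penalizes off-diagonal precision entries to yield a graph of pairwise partial correlations. The paper's full estimator additionally removes a low-rank latent component, which this sketch omits; as the example shows, a strong common fluctuation then keeps the recovered graph dense, which is exactly why the latent term is needed. Data are synthetic stand-ins.

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

# Hypothetical stand-in for a recording: 50 cells, 800 time bins, with one
# global latent fluctuation shared across the population.
rng = np.random.default_rng(0)
n_cells, n_bins = 50, 800
latent = rng.standard_normal(n_bins)
X = rng.standard_normal((n_bins, n_cells)) + 0.7 * latent[:, None]

# Sparse inverse-covariance estimate: nonzero off-diagonal precision
# entries define a graph of partial-correlation interactions.
model = GraphicalLassoCV().fit(X)
prec = model.precision_
n_edges = int((np.abs(prec[np.triu_indices(n_cells, k=1)]) > 1e-6).sum())
print("edges kept:", n_edges, "of", n_cells * (n_cells - 1) // 2)
# Without a low-rank term absorbing the common input, many spurious
# edges remain, motivating the sparse+latent decomposition.
```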