Gratz, Marcel; Schlamann, Marc; Goericke, Sophia; Maderwald, Stefan; Quick, Harald H
2017-03-01
To assess the image quality of sparsely sampled contrast-enhanced MR angiography (sparse CE-MRA) providing high spatial resolution and whole-head coverage. Twenty-three patients scheduled for contrast-enhanced MR imaging of the head (N = 19 with intracranial pathologies, N = 9 with vascular diseases) were included. Sparse CE-MRA at 3 Tesla was conducted using a single dose of contrast agent. Two neuroradiologists independently evaluated the data regarding vascular visibility and diagnostic value of 24 parameters and vascular segments on a 5-point ordinal scale (5 = very good, 1 = insufficient vascular visibility). Contrast bolus timing and the resulting arterio-venous overlap were also evaluated. Where available (N = 9), sparse CE-MRA was compared to intracranial Time-of-Flight MRA. The overall rating across all patients for sparse CE-MRA was 3.50 ± 1.07. A direct influence of contrast bolus timing on the resulting image quality was observed. Overall mean vascular visibility and image quality across different features was rated good to intermediate (3.56 ± 0.95). The average performance of intracranial Time-of-Flight was rated 3.84 ± 0.87 across all patients and 3.54 ± 0.62 across all features. Sparse CE-MRA provides high-quality 3D MRA with high spatial resolution and whole-head coverage within a short acquisition time. Accurate contrast bolus timing is mandatory. • Sparse CE-MRA enables fast vascular imaging with full brain coverage. • Volumes with sub-millimetre resolution can be acquired within 10 seconds. • Readers' ratings are good to intermediate and dependent on contrast bolus timing. • The method provides an excellent overview and allows screening for vascular pathologies.
High-frame-rate full-vocal-tract 3D dynamic speech imaging.
Fu, Maojing; Barlaz, Marissa S; Holtrop, Joseph L; Perry, Jamie L; Kuehn, David P; Shosted, Ryan K; Liang, Zhi-Pei; Sutton, Bradley P
2017-04-01
To achieve high temporal frame rate, high spatial resolution and full-vocal-tract coverage for three-dimensional dynamic speech MRI by using low-rank modeling and sparse sampling. Three-dimensional dynamic speech MRI is enabled by integrating a novel data acquisition strategy and an image reconstruction method with the partial separability model: (a) a self-navigated sparse sampling strategy that accelerates data acquisition by collecting high-nominal-frame-rate cone navigators and imaging data within a single repetition time, and (b) a reconstruction method that recovers high-quality speech dynamics from sparse (k,t)-space data by enforcing joint low-rank and spatiotemporal total variation constraints. The proposed method has been evaluated through in vivo experiments. A nominal temporal frame rate of 166 frames per second (defined based on a repetition time of 5.99 ms) was achieved for an imaging volume covering the entire vocal tract with a spatial resolution of 2.2 × 2.2 × 5.0 mm³. Practical utility of the proposed method was demonstrated via both validation experiments and a phonetics investigation. Three-dimensional dynamic speech imaging is possible with full-vocal-tract coverage, high spatial resolution and high nominal frame rate to provide dynamic speech data useful for phonetic studies. Magn Reson Med 77:1619-1629, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
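The core of the partial separability model above can be sketched in a few lines: the image series is arranged as a Casorati (voxels × frames) matrix assumed to be low-rank, a temporal subspace is estimated (in the actual method, from navigator data), and the dynamics are reconstructed by projection onto that subspace. The toy dimensions, rank, and fully sampled stand-in for the navigator below are illustrative assumptions, not the acquisition described in the abstract.

```python
import numpy as np

# Toy illustration of the partial separability (PS) model: a spatiotemporal
# image series arranged as a Casorati matrix C (voxels x frames) is assumed
# low-rank, so a few spatial/temporal basis functions explain the dynamics.
rng = np.random.default_rng(0)
n_vox, n_frames, rank = 200, 50, 3

# Simulate rank-3 "speech dynamics": spatial maps times temporal functions.
spatial = rng.standard_normal((n_vox, rank))
temporal = rng.standard_normal((rank, n_frames))
casorati = spatial @ temporal

# PS reconstruction step: estimate the temporal subspace via SVD (here from
# fully sampled data, standing in for navigators), then project onto it.
_, s, vt = np.linalg.svd(casorati, full_matrices=False)
v_hat = vt[:rank]                 # estimated temporal basis
coeffs = casorati @ v_hat.T       # spatial coefficients
recon = coeffs @ v_hat            # low-rank reconstruction

err = np.linalg.norm(recon - casorati) / np.linalg.norm(casorati)
print(f"relative reconstruction error: {err:.2e}")
```

Because the simulated data are exactly rank 3, the rank-3 projection recovers them to machine precision; with undersampled (k,t)-space data the coefficients would instead be fit to the measured samples under the sparsity and total variation constraints named above.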
Southern Ocean Seasonal Net Production from Satellite, Atmosphere, and Ocean Data Sets
NASA Technical Reports Server (NTRS)
Keeling, Ralph F.; Campbell, J. (Technical Monitor)
2002-01-01
A new climatology of monthly air-sea O2 flux was developed using the net air-sea heat flux as a template for spatial and temporal interpolation of sparse hydrographic data. The climatology improves upon the previous climatology of Najjar and Keeling in the Southern Hemisphere, where the heat-based approach helps to overcome limitations due to sparse data coverage. The climatology is used to make comparisons with productivity derived from CZCS images. The climatology is also used in support of an investigation of the plausible impact of recent global warming on oceanic O2 inventories.
Hierarchical spatial models of abundance and occurrence from imperfect survey data
Royle, J. Andrew; Kery, M.; Gautier, R.; Schmid, Hans
2007-01-01
Many estimation and inference problems arising from large-scale animal surveys are focused on developing an understanding of patterns in abundance or occurrence of a species based on spatially referenced count data. One fundamental challenge, then, is that it is generally not feasible to completely enumerate ('census') all individuals present in each sample unit. This observation bias may consist of several components, including spatial coverage bias (not all individuals in the population are exposed to sampling) and detection bias (exposed individuals may go undetected). Thus, observations are biased for the state variable (abundance, occupancy) that is the object of inference. Moreover, data are often sparse for most observation locations, requiring consideration of methods for spatially aggregating or otherwise combining sparse data among sample units. The development of methods that unify spatial statistical models with models accommodating non-detection is necessary to resolve important spatial inference problems based on animal survey data. In this paper, we develop a novel hierarchical spatial model for estimation of abundance and occurrence from survey data wherein detection is imperfect. Our application is focused on spatial inference problems in the Swiss Survey of Common Breeding Birds. The observation model for the survey data is specified conditional on the unknown quadrat population size, N(s). We augment the observation model with a spatial process model for N(s), describing the spatial variation in abundance of the species. The model includes explicit sources of variation in habitat structure (forest, elevation) and latent variation in the form of a correlated spatial process. This provides a model-based framework for combining the spatially referenced samples while at the same time yielding a unified treatment of estimation problems involving both abundance and occurrence.
We provide a Bayesian framework for analysis and prediction based on the integrated likelihood, and we use the model to obtain estimates of abundance and occurrence maps for the European Jay (Garrulus glandarius), a widespread, elusive, forest bird. The naive national abundance estimate ignoring imperfect detection and incomplete quadrat coverage was 77 766 territories. Accounting for imperfect detection added approximately 18 000 territories, and adjusting for coverage bias added another 131 000 territories to yield a fully corrected estimate of the national total of about 227 000 territories. This is approximately three times as high as previous estimates that assume every territory is detected in each quadrat.
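The effect of imperfect detection that the model corrects for can be illustrated with a minimal simulation: true quadrat abundance N(s) is Poisson, observed counts are binomially thinned by the detection probability, and raw totals understate the truth until detection is accounted for. All parameter values here are invented for illustration; the study's actual correction comes from a hierarchical Bayesian fit, not this simple moment estimator.

```python
import numpy as np

# Why raw counts understate abundance under imperfect detection:
# true quadrat population N(s) ~ Poisson(lam), observed count
# y(s) ~ Binomial(N(s), p). Values of lam and p are illustrative only.
rng = np.random.default_rng(1)
lam, p, n_sites = 20.0, 0.6, 5000

n_true = rng.poisson(lam, n_sites)
y_obs = rng.binomial(n_true, p)

naive_total = y_obs.sum()
# If p is known (or estimated, as in the hierarchical model), the moment
# correction y/p recovers the true total in expectation.
corrected_total = naive_total / p
print(naive_total, n_true.sum(), round(corrected_total))
```

This mirrors the pattern in the Jay estimates above: the naive total is a substantial underestimate, and the detection-corrected total lands close to the true simulated abundance.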
Measuring suspended sediment: Chapter 10
Gray, J.R.; Landers, M.N.
2013-01-01
Suspended sediment in streams and rivers can be measured using traditional instruments and techniques and (or) surrogate technologies. The former, as described herein, consists primarily of both manually deployed isokinetic samplers and their deployment protocols developed by the Federal Interagency Sedimentation Project. They are used on all continents other than Antarctica. The reliability of the typically spatially rich but temporally sparse data produced by traditional means is supported by a broad base of scientific literature since 1940. However, the suspended sediment surrogate technologies described herein – based on hydroacoustic, nephelometric, laser, and pressure difference principles – tend to produce temporally rich but in some cases spatially sparse datasets. The value of temporally rich data in the accuracy of continuous sediment-discharge records is hard to overstate, in part because such data can often overcome the shortcomings of poor spatial coverage. Coupled with calibration data produced by traditional means, surrogate technologies show considerable promise toward providing the fluvial sediment data needed to increase and bring more consistency to sediment-discharge measurements worldwide.
Greedy Sparse Approaches for Homological Coverage in Location Unaware Sensor Networks
2017-12-08
Moore, Terrence J
US Army Research Laboratory, ARL-TR-8235, December 2017.
Verdin, Andrew; Funk, Christopher C.; Rajagopalan, Balaji; Kleiber, William
2016-01-01
Robust estimates of precipitation in space and time are important for efficient natural resource management and for mitigating natural hazards. This is particularly true in regions with developing infrastructure and regions that are frequently exposed to extreme events. Gauge observations of rainfall are sparse but capture the precipitation process with high fidelity. Due to their high resolution and complete spatial coverage, satellite-derived rainfall data are an attractive alternative in data-sparse regions and are often used to support hydrometeorological early warning systems. Satellite-derived precipitation data, however, tend to underrepresent extreme precipitation events. Thus, it is often desirable to blend spatially extensive satellite-derived rainfall estimates with high-fidelity rain gauge observations to obtain more accurate precipitation estimates. In this research, we use two different methods, namely, ordinary kriging and k-nearest-neighbor local polynomials, to blend rain gauge observations with the Climate Hazards Group Infrared Precipitation satellite-derived precipitation estimates in data-sparse Central America and Colombia. The utility of these methods in producing blended precipitation estimates at pentadal (five-day) and monthly time scales is demonstrated. We find that these blending methods significantly improve the satellite-derived estimates and are competitive in their ability to capture extreme precipitation.
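A minimal sketch of the residual-blending idea, using ordinary kriging of the gauge-minus-satellite residuals and adding them back to the satellite field. The exponential covariance, its parameters, and the synthetic data are assumptions for illustration, not the configuration used in the study.

```python
import numpy as np

# Ordinary-kriging blend sketch: krige gauge-minus-satellite residuals,
# then add the kriged residual surface back to the satellite estimates.
def ok_predict(xy_obs, z_obs, xy_new, sill=1.0, rng_par=2.0):
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-d / rng_par)      # exponential covariance
    n = len(xy_obs)
    # Ordinary-kriging system: covariances plus a Lagrange row enforcing
    # that the weights sum to one (unbiasedness).
    lhs = np.ones((n + 1, n + 1))
    lhs[:n, :n] = cov(xy_obs, xy_obs)
    lhs[n, n] = 0.0
    rhs = np.ones((n + 1, len(xy_new)))
    rhs[:n] = cov(xy_obs, xy_new)
    w = np.linalg.solve(lhs, rhs)[:n]
    return w.T @ z_obs

rng = np.random.default_rng(2)
gauges = rng.uniform(0, 10, (25, 2))                    # gauge locations
sat_at_gauges = rng.uniform(5, 15, 25)                  # satellite at gauges
gauge_vals = sat_at_gauges + rng.normal(1.0, 0.3, 25)   # gauges read higher
residuals = gauge_vals - sat_at_gauges

grid = rng.uniform(0, 10, (100, 2))
sat_grid = rng.uniform(5, 15, 100)
blended = sat_grid + ok_predict(gauges, residuals, grid)
print(blended.mean() - sat_grid.mean())
```

With no nugget effect, ordinary kriging is an exact interpolator, so the blended field honors the gauge values at gauge locations while reverting toward the satellite field away from them.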
Dose-shaping using targeted sparse optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sayre, George A.; Ruan, Dan
2013-07-15
Purpose: Dose volume histograms (DVHs) are common tools in radiation therapy treatment planning to characterize plan quality. As statistical metrics, DVHs provide a compact summary of the underlying plan at the cost of losing spatial information: the same or similar dose-volume histograms can arise from substantially different spatial dose maps. This is exactly the reason why physicians and physicists scrutinize dose maps even after they satisfy all DVH endpoints numerically. However, up to this point, little has been done to control spatial phenomena, such as the spatial distribution of hot spots, which has significant clinical implications. To this end, the authors propose a novel objective function that enables a more direct tradeoff between target coverage, organ-sparing, and planning target volume (PTV) homogeneity, and present findings from four prostate cases, a pancreas case, and a head-and-neck case to illustrate the advantages and general applicability of their method. Methods: In designing the energy minimization objective (E_tot^sparse), the authors utilized the following robust cost functions: (1) an asymmetric linear well function to allow differential penalties for underdose, relaxation of prescription dose, and overdose in the PTV; (2) a two-piece linear function to heavily penalize high dose and mildly penalize low and intermediate dose in organs-at-risk (OARs); and (3) a total variation energy, i.e., the L1 norm applied to the first-order approximation of the dose gradient in the PTV. By minimizing a weighted sum of these robust costs, general conformity to dose prescription and dose-gradient prescription is achieved while encouraging prescription violations to follow a Laplace distribution. In contrast, conventional quadratic objectives are associated with a Gaussian distribution of violations, which is less forgiving of large violations of prescription than the Laplace distribution.
As a result, the proposed objective E_tot^sparse improves the tradeoff between planning goals by 'sacrificing' voxels that have already been violated to improve PTV coverage, PTV homogeneity, and/or OAR-sparing. In doing so, overall plan quality is increased since these large violations only arise if a net reduction in E_tot^sparse occurs as a result. For example, large violations of the dose prescription in the PTV in E_tot^sparse-optimized plans will naturally localize to voxels in and around PTV-OAR overlaps where OAR-sparing may be increased without compromising target coverage. The authors compared the results of their method and the corresponding clinical plans using analyses of DVH plots, dose maps, and two quantitative metrics that quantify PTV homogeneity and overdose. These metrics do not penalize underdose since E_tot^sparse-optimized plans were planned such that their target coverage was similar to or better than that of the clinical plans. Finally, plan deliverability was assessed with the 2D modulation index. Results: The proposed method was implemented using IBM's CPLEX optimization package (ILOG CPLEX, Sunnyvale, CA) and required 1-4 min to solve with a 12-core Intel i7 processor. In the testing procedure, the authors optimized for several points on the Pareto surface of four 7-field 6 MV prostate cases that were optimized for different levels of PTV homogeneity and OAR-sparing. The generated results were compared against each other and the clinical plan by analyzing their DVH plots and dose maps. After developing intuition by planning the four prostate cases, which had relatively few tradeoffs, the authors applied their method to a 7-field 6 MV pancreas case and a 9-field 6 MV head-and-neck case to test the potential impact of the method on more challenging cases.
The authors found that their formulation: (1) provided excellent flexibility for balancing OAR-sparing with PTV homogeneity; and (2) permitted the dose planner more control over the evolution of the PTV's spatial dose distribution than conventional objective functions. In particular, E_tot^sparse-optimized plans for the pancreas case and head-and-neck case exhibited substantially improved sparing of the spinal cord and parotid glands, respectively, while maintaining or improving sparing for other OARs and markedly improving PTV homogeneity. Plan deliverability for E_tot^sparse-optimized plans was shown to be better than for their associated clinical plans, according to the two-dimensional modulation index. Conclusions: These results suggest that the formulation may be used to improve dose-shaping and OAR-sparing for complicated disease sites, such as the pancreas or head and neck. Furthermore, the objective function and constraints are linear and constitute a linear program, which converges to the global minimum quickly and can be easily implemented in treatment planning software. Thus, the authors expect fast translation of the method to the clinic, where it may have a positive impact on plan quality for challenging disease sites.
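The robustness argument above (Laplace versus Gaussian violation models) reduces to a familiar fact: quadratic penalties are minimized by means, which outliers drag, while linear penalties are minimized by medians, which tolerate a few large violations. A one-parameter toy example with invented dose values:

```python
import numpy as np

# Fit a single dose level to eight voxel prescriptions, one of which is a
# deliberately "sacrificed" voxel in a PTV-OAR overlap. The quadratic
# (Gaussian-violation) objective is minimized by the mean; the linear
# (Laplace-violation) objective is minimized by the median. Values invented.
dose_targets = np.array([60.0, 60.0, 60.0, 60.0, 60.0, 60.0, 60.0, 20.0])

quadratic_fit = dose_targets.mean()     # dragged down by the one outlier
linear_fit = np.median(dose_targets)    # ignores the single large violation
print(quadratic_fit, linear_fit)        # 55.0 60.0
```

The linear fit keeps the seven conforming voxels at prescription and concentrates the violation in the one sacrificed voxel, which is exactly the localization behavior the abstract describes for the L1-type objective.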
Removal of nuisance signals from limited and sparse 1H MRSI data using a union-of-subspaces model.
Ma, Chao; Lam, Fan; Johnson, Curtis L; Liang, Zhi-Pei
2016-02-01
To remove nuisance signals (e.g., water and lipid signals) from ¹H MRSI data collected from the brain with limited and/or sparse (k, t)-space coverage. A union-of-subspaces model is proposed for removing nuisance signals. The model exploits the partial separability of both the nuisance signals and the metabolite signal, and decomposes an MRSI dataset into several sets of generalized voxels that share the same spectral distributions. This model enables the estimation of the nuisance signals from an MRSI dataset that has limited and/or sparse (k, t)-space coverage. The proposed method has been evaluated using in vivo MRSI data. For conventional chemical shift imaging data with limited k-space coverage, the proposed method produced "lipid-free" spectra without lipid suppression during data acquisition at 130 ms echo time. For sparse (k, t)-space data acquired with conventional pulses for water and lipid suppression, the proposed method was also able to remove the remaining water and lipid signals with negligible residuals. Nuisance signals in ¹H MRSI data reside in low-dimensional subspaces. This property can be utilized for estimation and removal of nuisance signals from ¹H MRSI data even when they have limited and/or sparse coverage of (k, t)-space. The proposed method should prove useful especially for accelerated high-resolution ¹H MRSI of the brain. © 2015 Wiley Periodicals, Inc.
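The subspace idea can be sketched as follows: if the nuisance (water/lipid) signals span a known low-dimensional temporal subspace, projecting the data onto its orthogonal complement removes them. The basis functions and signals below are invented stand-ins; they illustrate the projection step, not the union-of-subspaces estimation procedure itself.

```python
import numpy as np

# Toy subspace-based nuisance removal: nuisance signals lie in span(V_n),
# a low-dimensional temporal subspace; projecting onto its orthogonal
# complement removes them exactly. Dimensions and signals are invented.
rng = np.random.default_rng(3)
n_t = 128
t = np.arange(n_t)

# Nuisance subspace: two decaying exponentials (stand-ins for water/lipid).
v_nuis = np.stack([np.exp(-t / 20.0), np.exp(-t / 60.0)])
q, _ = np.linalg.qr(v_nuis.T)            # orthonormal basis, shape (n_t, 2)

metab = np.cos(2 * np.pi * 0.11 * t)     # stand-in metabolite signal
metab -= q @ (q.T @ metab)               # make it orthogonal to the nuisance
data = metab + v_nuis.T @ np.array([5.0, 3.0])   # metabolite + big nuisance

cleaned = data - q @ (q.T @ data)        # project out the nuisance subspace
err = np.linalg.norm(cleaned - metab) / np.linalg.norm(metab)
print(f"residual after nuisance removal: {err:.1e}")
```

In the actual method the subspaces are estimated from the data under the partial separability assumption; here the nuisance basis is simply given, so the removal is exact.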
Disk Density Tuning of a Maximal Random Packing
Ebeida, Mohamed S.; Rushdi, Ahmad A.; Awad, Muhammad A.; Mahmoud, Ahmed H.; Yan, Dong-Ming; English, Shawn A.; Owens, John D.; Bajaj, Chandrajit L.; Mitchell, Scott A.
2016-01-01
We introduce an algorithmic framework for tuning the spatial density of disks in a maximal random packing, without changing the sizing function or radii of disks. Starting from any maximal random packing such as a Maximal Poisson-disk Sampling (MPS), we iteratively relocate, inject (add), or eject (remove) disks, using a set of three successively more-aggressive local operations. We may achieve a user-defined density, either more dense or more sparse, almost up to the theoretical structured limits. The tuned samples are conflict-free, retain coverage maximality, and, except in the extremes, retain the blue noise randomness properties of the input. We change the density of the packing one disk at a time, maintaining the minimum disk separation distance and the maximum domain coverage distance required of any maximal packing. These properties are local, and we can handle spatially-varying sizing functions. Using fewer points to satisfy a sizing function improves the efficiency of some applications. We apply the framework to improve the quality of meshes, removing non-obtuse angles; and to more accurately model fiber reinforced polymers for elastic and failure simulations.
Wang, Kang; Zhang, Tingjun; Zhang, Xiangdong; ...
2017-09-13
Historically, in-situ measurements have been notoriously sparse over the Arctic. As a consequence, the existing gridded data of Surface Air Temperature (SAT) may have large biases in estimating the warming trend in this region. Using data from an expanded monitoring network with 31 stations in the Alaskan Arctic, we demonstrate that the SAT has increased by 2.19 °C in this region, or at a rate of 0.23 °C/decade, during 1921-2015. Meanwhile, we found that the SAT warmed at 0.71 °C/decade over 1998-2015, which is two to three times faster than the rate established from the gridded datasets. Focusing on the "hiatus" period 1998-2012 as identified by the Intergovernmental Panel on Climate Change (IPCC) report, the SAT has increased at 0.45 °C/decade, which captures more than 90% of the regional trend for 1951-2012. We suggest that sparse in-situ measurements are responsible for underestimation of the SAT change in the gridded datasets. It is likely that enhanced climate warming may also have happened in other regions of the Arctic since the late 1990s but was left undetected because of incomplete observational coverage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
N-mixture models for estimating population size from spatially replicated counts
Royle, J. Andrew
2004-01-01
Spatial replication is a common theme in count surveys of animals. Such surveys often generate sparse count data from which it is difficult to estimate population size while formally accounting for detection probability. In this article, I describe a class of models (N-mixture models) which allow for estimation of population size from such data. The key idea is to view site-specific population sizes, N, as independent random variables distributed according to some mixing distribution (e.g., Poisson). Prior parameters are estimated from the marginal likelihood of the data, having integrated over the prior distribution for N. Carroll and Lombard (1985, Journal of the American Statistical Association 80, 423-426) proposed a class of estimators based on mixing over a prior distribution for detection probability. Their estimator can be applied in limited settings, but is sensitive to prior parameter values that are fixed a priori. Spatial replication provides additional information regarding the parameters of the prior distribution on N that is exploited by the N-mixture models and which leads to reasonable estimates of abundance from sparse data. A simulation study demonstrates superior operating characteristics (bias, confidence interval coverage) of the N-mixture estimator compared to the Carroll and Lombard estimator. Both estimators are applied to point count data on six species of birds, illustrating the sensitivity to choice of prior on p and substantially different estimates of abundance as a consequence.
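A minimal numerical sketch of the N-mixture marginal likelihood (summing over N up to a truncation bound), with a coarse grid search standing in for a proper optimizer; all parameter values, grid choices, and the truncation bound are illustrative.

```python
import numpy as np
from scipy import stats

# N-mixture likelihood sketch: counts y at R sites over T visits, with
# N_i ~ Poisson(lam) and y_it | N_i ~ Binomial(N_i, p). The marginal
# likelihood sums over N up to a truncation bound k_max.
def nmix_loglik(lam, p, counts, k_max=100):
    n_vals = np.arange(k_max + 1)
    prior = stats.poisson.pmf(n_vals, lam)            # P(N = n)
    ll = 0.0
    for site_counts in counts:
        like_n = prior.copy()
        for y in site_counts:
            like_n = like_n * stats.binom.pmf(y, n_vals, p)  # P(y | N = n)
        ll += np.log(like_n.sum())                    # marginalize over N
    return ll

rng = np.random.default_rng(4)
true_lam, true_p = 5.0, 0.5
n_true = rng.poisson(true_lam, 50)                    # 50 sites
counts = rng.binomial(n_true[:, None], true_p, size=(50, 3))  # 3 visits

# Coarse grid search for the MLE (a real analysis would use an optimizer).
grid = [(lam, p) for lam in np.arange(1.0, 12.0, 1.0)
        for p in np.arange(0.1, 1.0, 0.1)]
lam_hat, p_hat = max(grid, key=lambda th: nmix_loglik(th[0], th[1], counts))
print(lam_hat, p_hat, counts.mean())
```

The repeated visits are what separate abundance from detectability: the product lam × p is pinned down by the mean count, and the within-site replication lets the likelihood apportion it between the two parameters.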
NASA Astrophysics Data System (ADS)
Babcock, C. R.; Finley, A. O.; Andersen, H. E.; Moskal, L. M.; Morton, D. C.; Cook, B.; Nelson, R.
2017-12-01
Upcoming satellite lidar missions, such as GEDI and ICESat-2, are designed to collect laser altimetry data from space for narrow bands along orbital tracts. As a result, lidar metric sets derived from these sources will not have complete spatial coverage. This lack of complete coverage, or sparsity, means traditional regression approaches that consider lidar metrics as explanatory variables (without error) cannot be used to generate wall-to-wall maps of forest inventory variables. We implement a coregionalization framework to jointly model sparsely sampled lidar information and point-referenced forest variable measurements to create wall-to-wall maps with full probabilistic uncertainty quantification of all inputs. We inform the model with USFS Forest Inventory and Analysis (FIA) in-situ forest measurements and GLAS lidar data to spatially predict aboveground forest biomass (AGB) across the contiguous US. We cast our model within a Bayesian hierarchical framework to better model complex space-varying correlation structures among the lidar metrics and FIA data, which yields improved prediction and uncertainty assessment. To circumvent computational difficulties that arise when fitting complex geostatistical models to massive datasets, we use a Nearest-Neighbor Gaussian Process (NNGP) prior. Results indicate that a coregionalization modeling approach to leveraging sampled lidar data to improve AGB estimation is effective. Further, fitting the coregionalization model within a Bayesian mode of inference allows for AGB quantification across scales ranging from individual pixel estimates of AGB density to total AGB for the continental US with uncertainty. The coregionalization framework examined here is directly applicable to future spaceborne lidar acquisitions from GEDI and ICESat-2.
Pairing these lidar sources with the extensive FIA forest monitoring plot network using a joint prediction framework, such as the coregionalization model explored here, offers the potential to improve forest AGB accounting certainty and provide maps for post-model fitting analysis of the spatial distribution of AGB.
Investigation of aquifer-estuary interaction using wavelet analysis of fiber-optic temperature data
Henderson, R.D.; Day-Lewis, Frederick D.; Harvey, Charles F.
2009-01-01
Fiber-optic distributed temperature sensing (FODTS) provides sub-minute temporal and meter-scale spatial resolution over kilometer-long cables. Compared to conventional thermistor or thermocouple-based technologies, which measure temperature at discrete (and commonly sparse) locations, FODTS offers nearly continuous spatial coverage, thus providing hydrologic information at spatiotemporal scales previously impossible. Large and information-rich FODTS datasets, however, pose challenges for data exploration and analysis. To date, FODTS analyses have focused on time-series variance as the means to discriminate between hydrologic phenomena. Here, we demonstrate the continuous wavelet transform (CWT) and cross-wavelet transform (XWT) to analyze FODTS in the context of related hydrologic time series. We apply the CWT and XWT to data from Waquoit Bay, Massachusetts to identify the location and timing of tidal pumping of submarine groundwater.
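A bare-bones Morlet continuous wavelet transform in NumPy illustrates how wavelet power localizes a tidal period in time, the property used above to pick out tidal pumping. The synthetic "FODTS" trace, its 12.42 h tidal component confined to the second half of the record, and the noise level are all invented for illustration.

```python
import numpy as np

# Minimal complex-Morlet CWT; scales are expressed in samples.
def morlet_cwt(x, scales, w0=6.0):
    out = np.empty((len(scales), len(x)), dtype=complex)
    for i, s in enumerate(scales):
        tt = np.arange(-4 * s, 4 * s + 1)
        wav = np.exp(1j * w0 * tt / s) * np.exp(-0.5 * (tt / s) ** 2)
        wav = wav / np.sqrt(s)
        out[i] = np.convolve(x, np.conj(wav[::-1]), mode="same")
    return out

dt = 0.5                                    # hours per sample
t = np.arange(0, 24 * 14, dt)               # two weeks of half-hourly data
# Tidal (M2, 12.42 h) signal present only in the second half of the record.
tide = np.where(t > t.mean(), np.sin(2 * np.pi * t / 12.42), 0.0)
signal = tide + 0.1 * np.random.default_rng(5).standard_normal(len(t))

period_samples = 12.42 / dt                 # tidal period in samples
scale = period_samples * 6.0 / (2 * np.pi)  # matching Morlet scale (w0 = 6)
power = np.abs(morlet_cwt(signal, [scale]))[0] ** 2

half = len(t) // 2
ratio = power[half + 100:].mean() / power[:half - 100].mean()
print(f"tidal-band power ratio, second half vs first: {ratio:.0f}")
```

The power at the tidal scale is concentrated in the half of the record where the tide is present, which is the time localization that a plain Fourier spectrum cannot provide.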
NASA Astrophysics Data System (ADS)
Casson, David; Werner, Micha; Weerts, Albrecht; Schellekens, Jaap; Solomatine, Dimitri
2017-04-01
Hydrological modelling in the Canadian Sub-Arctic is hindered by the limited spatial and temporal coverage of local meteorological data. Local watershed modelling often relies on data from a sparse network of meteorological stations with a rough density of 3 active stations per 100,000 km². Global datasets hold great promise for application due to more comprehensive spatial and extended temporal coverage. A key objective of this study is to demonstrate the application of global datasets and data assimilation techniques for hydrological modelling of a data-sparse, Sub-Arctic watershed. Application of available datasets and modelling techniques is currently limited in practice due to a lack of local capacity and understanding of available tools. Due to the importance of snow processes in the region, this study also aims to evaluate the performance of global Snow Water Equivalent (SWE) products for snowpack modelling. The Snare Watershed is a 13,300 km² snowmelt-driven sub-basin of the Mackenzie River Basin, Northwest Territories, Canada. The Snare watershed is data sparse in terms of meteorological data, but is well gauged with consistent discharge records since the late 1970s. End-of-winter snowpack surveys have been conducted every year from 1978 to present. The application of global re-analysis datasets from the EU FP7 eartH2Observe project is investigated in this study. Precipitation data are taken from Multi-Source Weighted-Ensemble Precipitation (MSWEP) and temperature data from the WATCH Forcing Data methodology applied to ERA-Interim reanalysis data (WFDEI). GlobSnow-2 is a global SWE measurement product funded by the European Space Agency (ESA) and is also evaluated over the local watershed. Downscaled precipitation, temperature and potential evaporation datasets are used as forcing data in a distributed version of the HBV model implemented in the WFLOW framework.
Results demonstrate the successful application of global datasets in local watershed modelling, but that validation of actual frozen precipitation and snowpack conditions is very difficult. The distributed hydrological model shows good streamflow simulation performance based on statistical model evaluation techniques. Results are also promising for inter-annual variability, spring snowmelt onset and time to peak flows. It is expected that data assimilation of stream flow using an Ensemble Kalman Filter will further improve model performance. This study shows that global re-analysis datasets hold great potential for understanding the hydrology and snowpack dynamics of the expansive and data sparse sub-Arctic. However, global SWE products will require further validation and algorithm improvements, particularly over boreal forest and lake-rich regions.
NASA Astrophysics Data System (ADS)
Qualls, R. J.; Woodruff, C.
2017-12-01
The behavior of inter-annual trends in mountain snow cover would represent extremely useful information for drought and climate change assessment; however, individual data sources exhibit specific limitations for characterizing this behavior. For example, SNOTEL data provide time series point values of Snow Water Equivalent (SWE), but lack spatial content apart from that contained in a sparse network of point values. Satellite observations in the visible spectrum can provide snow covered area, but not SWE at present, and are limited by cloud cover which often obscures visibility of the ground, especially during the winter and spring in mountainous areas. Cloud cover, therefore, often limits both temporal and spatial coverage of satellite remote sensing of snow. Among the platforms providing the best combination of temporal and spatial coverage to overcome the cloud obscuration problem by providing frequent overflights, the Aqua and Terra satellites carrying the MODIS instrument package provide 500 m, daily resolution observations of snow cover. These satellites were only launched in 1999 and the early 2000s, thus limiting the historical period over which these data are available. A hybrid method incorporating SNOTEL and MODIS data has been developed which accomplishes cloud removal and enables determination of the time series of watershed spatial snow cover when either SNOTEL or MODIS data are available. This allows one to generate spatial snow cover information for watersheds with SNOTEL stations for periods both before and after the launch of the Aqua and Terra satellites, extending the spatial information about snow cover over the period of record of the SNOTEL stations present in a watershed.
This method is used to quantify the spatial time series of snow over the 9000 km² Upper Snake River watershed and to evaluate inter-annual trends in the timing, rate, and duration of melt over the nearly 40-year period from the early 1980s to the present, and shows promise for generating snow cover depletion maps for drought and climate change scenarios.
NASA Astrophysics Data System (ADS)
Yang, M.; Wang, J.; Zhang, Q.
2017-07-01
Vegetation coverage is one of the most important indicators of ecological environment change, and is also an effective index for assessing land degradation and desertification. Dry-hot valley regions have sparse surface vegetation, and the spectral signal of this vegetation is weak in remote sensing imagery, so the commonly used vegetation index methods face considerable limitations when applied to calculating vegetation coverage in these regions. Therefore, in this paper, the Alternating Angle Minimum (AAM) algorithm, a deterministic model, is adopted for endmember selection and pixel unmixing of a MODIS image in order to extract vegetation coverage, and an accuracy test is carried out using a Landsat TM image from the same period. The results show that, in dry-hot valley regions with sparse vegetation, the AAM model has high unmixing accuracy and the extracted vegetation coverage is close to the actual situation, so the AAM model is a promising tool for extracting vegetation coverage in dry-hot valley regions.
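Setting the AAM solver itself aside, the core of endmember-based coverage estimation is recovering per-pixel fractional abundances from a linear mixing model. A minimal sketch using non-negative least squares in place of AAM (the endmember spectra and band values below are hypothetical, not from the paper):

```python
import numpy as np
from scipy.optimize import nnls

def vegetation_fraction(pixel, endmembers):
    """Estimate fractional abundances for one pixel by non-negative least
    squares, then normalise so the fractions sum to one.
    `endmembers` is a (bands x m) matrix; column 0 is vegetation."""
    abundances, _ = nnls(endmembers, pixel)
    total = abundances.sum()
    return abundances / total if total > 0 else abundances

# Two hypothetical endmember spectra over 4 bands: vegetation and bare soil.
E = np.array([[0.05, 0.30],
              [0.08, 0.35],
              [0.45, 0.40],
              [0.50, 0.45]])
mixed = E @ np.array([0.3, 0.7])          # pixel with 30% vegetation cover
fractions = vegetation_fraction(mixed, E)  # recovers ~[0.3, 0.7]
```

With noiseless data the unconstrained least-squares solution is already non-negative, so the exact mixing fractions are recovered; real imagery needs more endmembers and noise handling.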
2010-2011 Performance of the AirNow Satellite Data Processor
NASA Astrophysics Data System (ADS)
Pasch, A. N.; DeWinter, J. L.; Haderman, M. D.; van Donkelaar, A.; Martin, R. V.; Szykman, J.; White, J. E.; Dickerson, P.; Zahn, P. H.; Dye, T. S.
2012-12-01
The U.S. Environmental Protection Agency's (EPA) AirNow program provides maps of real-time hourly Air Quality Index (AQI) conditions and daily AQI forecasts nationwide (http://www.airnow.gov). The public uses these maps to make health-based decisions. The usefulness of the AirNow air quality maps depends on the accuracy and spatial coverage of air quality measurements. Currently, the maps use only ground-based measurements, which have significant gaps in coverage in some parts of the United States. As a result, contoured AQI levels have high uncertainty in regions far from monitors. To improve the usefulness of air quality maps, scientists at EPA, Dalhousie University, and Sonoma Technology, Inc. have been working in collaboration with the National Aeronautics and Space Administration (NASA) and the National Oceanic and Atmospheric Administration (NOAA) to incorporate satellite-estimated surface PM2.5 concentrations into the maps via the AirNow Satellite Data Processor (ASDP). These satellite estimates are derived using NASA/NOAA satellite aerosol optical depth (AOD) retrievals and GEOS-Chem modeled ratios of surface PM2.5 concentrations to AOD. GEOS-Chem is a three-dimensional chemical transport model for atmospheric composition driven by meteorological input from the Goddard Earth Observing System (GEOS). The ASDP can fuse multiple PM2.5 concentration data sets to generate AQI maps with improved spatial coverage. The goal of ASDP is to provide more detailed AQI information in monitor-sparse locations and to augment monitor-dense locations with more information. We will present a statistical analysis for 2010-2011 of the ASDP predictions of PM2.5, focusing on performance at validation sites. In addition, we will present several case studies evaluating the ASDP's performance for multiple regions and seasons, focusing specifically on days when large spatial gradients in AQI and wildfire smoke impact were observed.
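The satellite estimate described above amounts to scaling each AOD retrieval by a modeled PM2.5/AOD ratio and filling monitor gaps with the result. A minimal sketch of that fusion step (all values below are hypothetical, not ASDP output):

```python
import numpy as np

def satellite_pm25(aod, eta):
    """Estimate surface PM2.5 (ug/m3) from satellite AOD using a modelled
    ratio eta = PM2.5 / AOD (in the ASDP, eta comes from GEOS-Chem)."""
    return eta * aod

def fuse(monitor_pm25, sat_pm25):
    """Keep ground-monitor values where available (not NaN); fall back to
    the satellite estimate in monitor-sparse locations."""
    return np.where(np.isnan(monitor_pm25), sat_pm25, monitor_pm25)

# Hypothetical 1-D transect: AOD retrievals, modelled ratios, sparse monitors.
aod  = np.array([0.10, 0.25, 0.40, 0.15])
eta  = np.array([60.0, 55.0, 50.0, 65.0])    # ug/m3 per unit AOD
mon  = np.array([7.0, np.nan, np.nan, 9.0])  # gaps far from monitors
grid = fuse(mon, satellite_pm25(aod, eta))
```

The fused field keeps monitor values at the first and last cells and fills the two gaps with the satellite estimates.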
Relationships Between Long-Range Lightning Networks and TRMM/LIS Observations
NASA Technical Reports Server (NTRS)
Rudlosky, Scott D.; Holzworth, Robert H.; Carey, Lawrence D.; Schultz, Chris J.; Bateman, Monte; Cummins, Kenneth L.; Blakeslee, Richard J.; Goodman, Steven J.
2012-01-01
Recent advances in long-range lightning detection technologies have improved our understanding of thunderstorm evolution in the data sparse oceanic regions. Although the expansion and improvement of long-range lightning datasets have increased their applicability, these applications (e.g., data assimilation, atmospheric chemistry, and aviation weather hazards) require knowledge of the network detection capabilities. The present study intercompares long-range lightning data with observations from the Lightning Imaging Sensor (LIS) aboard the Tropical Rainfall Measurement Mission (TRMM) satellite. The study examines network detection efficiency and location accuracy relative to LIS observations, describes spatial variability in these performance metrics, and documents the characteristics of LIS flashes that are detected by the long-range networks. Improved knowledge of relationships between these datasets will allow researchers, algorithm developers, and operational users to better prepare for the spatial and temporal coverage of the upcoming GOES-R Geostationary Lightning Mapper (GLM).
Evaluation of Long-Range Lightning Detection Networks Using TRMM/LIS Observations
NASA Technical Reports Server (NTRS)
Rudlosky, Scott D.; Holzworth, Robert H.; Carey, Lawrence D.; Schultz, Chris J.; Bateman, Monte; Cecil, Daniel J.; Cummins, Kenneth L.; Petersen, Walter A.; Blakeslee, Richard J.; Goodman, Steven J.
2011-01-01
Recent advances in long-range lightning detection technologies have improved our understanding of thunderstorm evolution in the data sparse oceanic regions. Although the expansion and improvement of long-range lightning datasets have increased their applicability, these applications (e.g., data assimilation, atmospheric chemistry, and aviation weather hazards) require knowledge of the network detection capabilities. Toward this end, the present study evaluates data from the World Wide Lightning Location Network (WWLLN) using observations from the Lightning Imaging Sensor (LIS) aboard the Tropical Rainfall Measurement Mission (TRMM) satellite. The study documents the WWLLN detection efficiency and location accuracy relative to LIS observations, describes the spatial variability in these performance metrics, and documents the characteristics of LIS flashes that are detected by WWLLN. Improved knowledge of the WWLLN detection capabilities will allow researchers, algorithm developers, and operational users to better prepare for the spatial and temporal coverage of the upcoming GOES-R Geostationary Lightning Mapper (GLM).
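The two performance metrics at the heart of these evaluations, detection efficiency and location accuracy, reduce to matching LIS flashes against network strokes within a time/space window. A minimal sketch under assumed matching thresholds (the 0.5 s and 25 km windows and all events below are illustrative, not the study's criteria):

```python
import numpy as np

def evaluate_network(lis_events, net_events, dt_max=0.5, dist_max=25.0):
    """Match each LIS flash (t, x, y) to the nearest network stroke within a
    time/space window; report detection efficiency and median location
    error. Times in seconds, positions in km (hypothetical units)."""
    errors = []
    for t, x, y in lis_events:
        near = [e for e in net_events if abs(e[0] - t) <= dt_max]
        if not near:
            continue  # no stroke close enough in time: missed flash
        d = min(np.hypot(e[1] - x, e[2] - y) for e in near)
        if d <= dist_max:
            errors.append(d)
    de = len(errors) / len(lis_events)
    med = float(np.median(errors)) if errors else float("nan")
    return de, med

lis = [(0.0, 0.0, 0.0), (10.0, 50.0, 50.0), (20.0, -30.0, 10.0)]
net = [(0.1, 3.0, 4.0), (20.2, -30.0, 15.0)]  # second LIS flash is missed
de, med_err = evaluate_network(lis, net)      # DE = 2/3, median error 5 km
```

A real evaluation would also map the metrics spatially, which is how the studies document regional variability in network performance.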
Sparse modeling of spatial environmental variables associated with asthma
Chang, Timothy S.; Gangnon, Ronald E.; Page, C. David; Buckingham, William R.; Tandias, Aman; Cowan, Kelly J.; Tomasallo, Carrie D.; Arndt, Brian G.; Hanrahan, Lawrence P.; Guilbert, Theresa W.
2014-01-01
Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin’s Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5–50 years over a three-year period. Each patient’s home address was geocoded to one of 3,456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin’s geography. Logistic thin plate regression spline modeling captured spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to Logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors. PMID:25533437
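The dimension-reduction step of SASEA rests on sparse principal components: loadings are driven exactly to zero so each component involves only a few block-group variables. A minimal one-component sketch using soft-thresholded power iteration (an illustration of sparse PCA in general, not the paper's solver; data and threshold are synthetic):

```python
import numpy as np

def soft(x, lam):
    """Soft-thresholding operator: shrinks entries, exact zeros below lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_pc1(X, lam=0.5, iters=100):
    """Leading sparse principal component of X (n samples x p variables)
    via rank-one alternating iterations with soft-thresholded loadings."""
    X = X - X.mean(axis=0)                     # centre the variables
    v = X[0] / (np.linalg.norm(X[0]) + 1e-12)  # initial loadings
    for _ in range(iters):
        u = X @ v                              # component scores
        u /= np.linalg.norm(u) + 1e-12
        v = soft(X.T @ u, lam)                 # sparsify the loadings
        norm = np.linalg.norm(v)
        if norm == 0.0:
            break
        v /= norm
    return v

# Synthetic data: a strong shared factor on the first two of six variables.
rng = np.random.default_rng(0)
scores = rng.normal(size=(300, 1))
X = np.hstack([scores @ np.array([[2.0, 2.0]]),
               0.05 * rng.normal(size=(300, 4))])
loadings = sparse_pc1(X)   # non-zero only on the two signal variables
```

The four noise variables get exactly zero loadings, which is the property that makes the resulting components interpretable as small sets of environmental variables.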
Sparse modeling of spatial environmental variables associated with asthma.
Chang, Timothy S; Gangnon, Ronald E; David Page, C; Buckingham, William R; Tandias, Aman; Cowan, Kelly J; Tomasallo, Carrie D; Arndt, Brian G; Hanrahan, Lawrence P; Guilbert, Theresa W
2015-02-01
Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin's Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5-50 years over a three-year period. Each patient's home address was geocoded to one of 3456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin's geography. Logistic thin plate regression spline modeling captured spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to Logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors. Copyright © 2014 Elsevier Inc. All rights reserved.
GEO/SAMS - The Geostationary Synthetic Aperture Microwave Sounder
NASA Technical Reports Server (NTRS)
Lambrigtsen, Bjorn H.
2008-01-01
The National Oceanic and Atmospheric Administration (NOAA) has for many years operated two weather satellite systems, the Polar-orbiting Operational Environmental Satellite system (POES), using low-earth orbiting (LEO) satellites, and the Geostationary Operational Environmental Satellite system (GOES), using geostationary earth orbiting (GEO) satellites. (Similar systems are also operated by other nations.) The POES satellites have been equipped with both infrared (IR) and microwave (MW) atmospheric sounders, which makes it possible to determine the vertical distribution of temperature and humidity in the troposphere even under cloudy conditions. Such satellite observations have had a significant impact on weather forecasting accuracy, especially in regions where in situ observations are sparse. In contrast, the GOES satellites have only been equipped with IR sounders, since it has not been feasible to build a large enough antenna to achieve sufficient spatial resolution for a MW sounder in GEO. As a result, GOES soundings can only be obtained in cloud free areas and in the less important upper atmosphere, above the cloud tops. This has hindered the effective use of GOES data in numerical weather prediction. Full sounding capabilities with the GOES system are highly desirable because of the advantageous spatial and temporal coverage that is possible from GEO. While POES satellites provide coverage in relatively narrow swaths, and with a revisit time of 12-24 hours or more, GOES satellites can provide continuous hemispheric coverage, making it possible to monitor highly dynamic phenomena such as hurricanes.
Improve EPA's AIRNow Air Quality Index Maps with NASA/NOAA Satellite Data
NASA Astrophysics Data System (ADS)
Pasch, A.; Zahn, P. H.; DeWinter, J. L.; Haderman, M. D.; White, J. E.; Dickerson, P.; Dye, T. S.; Martin, R. V.
2011-12-01
The U.S. Environmental Protection Agency's (EPA) AIRNow program provides maps of real-time hourly Air Quality Index (AQI) conditions and daily AQI forecasts nationwide (http://www.airnow.gov). The public uses these maps to make decisions concerning their respiratory health. The usefulness of the AIRNow air quality maps depends on the accuracy and spatial coverage of air quality measurements. Currently, the maps use only ground-based measurements, which have significant gaps in coverage in some parts of the United States. As a result, contoured AQI levels have high uncertainty in regions far from monitors. To improve the usefulness of air quality maps, scientists at EPA and Sonoma Technology, Inc. are working in collaboration with the National Aeronautics and Space Administration (NASA), the National Oceanic and Atmospheric Administration (NOAA), and university researchers on a project to incorporate additional measurements into the maps via the AIRNow Satellite Data Processor (ASDP). These measurements include satellite-estimated surface PM2.5 concentrations.
NASA Astrophysics Data System (ADS)
Flores, A. N.; Smith, K.; LaPorte, P.
2011-12-01
Applications like flood forecasting, military trafficability assessment, and slope stability analysis necessitate the use of models capable of resolving hydrologic states and fluxes at spatial scales of hillslopes (e.g., 10s to 100s m). These models typically require precipitation forcings at spatial scales of kilometers or better and time intervals of hours. Yet in especially rugged terrain that typifies much of the Western US and throughout much of the developing world, precipitation data at these spatiotemporal resolutions is difficult to come by. Ground-based weather radars have significant problems in high-relief settings and are sparsely located, leaving significant gaps in coverage and high uncertainties. Precipitation gages provide accurate data at points but are very sparsely located and their placement is often not representative, yielding significant coverage gaps in a spatial and physiographic sense. Numerical weather prediction efforts have made precipitation data, including critically important information on precipitation phase, available globally and in near real-time. However, these datasets present watershed modelers with two problems: (1) spatial scales of many of these datasets are tens of kilometers or coarser, (2) numerical weather models used to generate these datasets include a land surface parameterization that in some circumstances can significantly affect precipitation predictions. We report on the development of a regional precipitation dataset for Idaho that leverages: (1) a dataset derived from a numerical weather prediction model, (2) gages within Idaho that report hourly precipitation data, and (3) a long-term precipitation climatology dataset. Hourly precipitation estimates from the Modern Era Retrospective-analysis for Research and Applications (MERRA) are stochastically downscaled using a hybrid orographic and statistical model from their native resolution (1/2 x 2/3 degrees) to a resolution of approximately 1 km. 
Downscaled precipitation realizations are conditioned on hourly observations from reporting gages and then conditioned again on the Parameter-elevation Regressions on Independent Slopes Model (PRISM) at the monthly timescale to reflect orographic precipitation trends common to watersheds of the Western US. While this methodology potentially introduces cross-pollination of errors due to the re-use of precipitation gage data, it nevertheless achieves an ensemble-based precipitation estimate and appropriate measures of uncertainty at a spatiotemporal resolution appropriate for watershed modeling.
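The second conditioning step, matching monthly PRISM totals, can be illustrated as a simple rescaling of the downscaled hourly series so its monthly sum equals the climatological target (toy numbers, not MERRA or PRISM values, and the actual method conditions stochastically rather than with a single multiplicative factor):

```python
import numpy as np

def condition_on_monthly(hourly, monthly_target):
    """Rescale an hourly precipitation series so its monthly total matches a
    target monthly value (e.g. from PRISM), preserving the temporal
    pattern of the downscaled realization."""
    total = hourly.sum()
    if total == 0:
        return hourly  # nothing to rescale in a dry month
    return hourly * (monthly_target / total)

hourly = np.array([0.0, 1.2, 0.4, 0.0, 2.4])  # toy "month" of 5 hours (mm)
adj = condition_on_monthly(hourly, monthly_target=8.0)
```

Dry hours stay dry, wet hours are scaled proportionally, and the monthly total now reflects the orographic climatology.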
NASA Astrophysics Data System (ADS)
Zhang, Yuzhong; Wang, Yuhang; Crawford, James; Cheng, Ye; Li, Jianfeng
2018-05-01
Obtaining the full spatial coverage of daily surface ozone fields is challenging because of the sparsity of the surface monitoring network and the difficulty in direct satellite retrievals of surface ozone. We propose an indirect satellite retrieval framework to utilize the information from satellite-measured column densities of tropospheric NO2 and CH2O, which are sensitive to the lower troposphere, to derive surface ozone fields. The method is applicable to upcoming geostationary satellites with high-quality NO2 and CH2O measurements. To prove the concept, we conduct a simulation experiment using a 3-D chemical transport model for July 2011 over the eastern US. The results show that a second-order regression using both NO2 and CH2O column densities can be an effective predictor for daily maximum 8-h average ozone. Furthermore, this indirect retrieval approach is shown to be complementary to spatial interpolation of surface observations, especially in regions where the surface sites are sparse. Combining column observations of NO2 and CH2O with surface site measurements leads to an improved representation of surface ozone over simple kriging, increasing the R2 value from 0.53 to 0.64 at a surface site distance of 252 km. The improvements are even more significant with larger surface site distances. The simulation experiment suggests that the indirect satellite retrieval technique can potentially be a useful tool to derive the full spatial coverage of daily surface ozone fields if satellite observation uncertainty is moderate.
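The proposed predictor is a second-order polynomial in the two column densities, fit by least squares. A minimal synthetic sketch (coefficients and column values are invented for illustration, not the study's fit):

```python
import numpy as np

def quad_features(no2, ch2o):
    """Second-order design matrix in the two trace-gas column densities."""
    return np.column_stack([np.ones_like(no2), no2, ch2o,
                            no2**2, ch2o**2, no2 * ch2o])

# Synthetic proof of concept: MDA8 ozone as a quadratic function of the
# (hypothetical, unitless) NO2 and CH2O columns, recovered by least squares.
rng = np.random.default_rng(1)
no2 = rng.uniform(0.2, 2.0, 500)
ch2o = rng.uniform(0.2, 2.0, 500)
truth = np.array([30.0, 8.0, 5.0, -1.5, -0.8, 2.0])  # assumed coefficients
mda8 = quad_features(no2, ch2o) @ truth               # noiseless "ozone"
beta, *_ = np.linalg.lstsq(quad_features(no2, ch2o), mda8, rcond=None)
```

With noiseless synthetic data the regression recovers the assumed coefficients exactly; the paper's experiment, of course, fits against chemical-transport-model ozone with realistic scatter.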
DEM generation from contours and a low-resolution DEM
NASA Astrophysics Data System (ADS)
Li, Xinghua; Shen, Huanfeng; Feng, Ruitao; Li, Jie; Zhang, Liangpei
2017-12-01
A digital elevation model (DEM) is a virtual representation of topography, in which the terrain is described by three-dimensional co-ordinates. In the framework of sparse representation, this paper investigates DEM generation from contours. Since contours are usually sparsely distributed and closely related in space, sparse spatial regularization (SSR) is enforced on them. To make up for the lack of spatial information, another DEM of lower spatial resolution from the same geographical area is introduced. In this way, the sparse representation implements the spatial constraints in the contours and extracts the complementary information from the auxiliary DEM. Furthermore, the proposed method integrates the advantage of the unbiased estimation of kriging. For brevity, the proposed method is called the kriging and sparse spatial regularization (KSSR) method. Its performance is demonstrated by experiments on generating the Shuttle Radar Topography Mission (SRTM) 30 m DEM and the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) 30 m global digital elevation model (GDEM) from the corresponding contours and a 90 m DEM. The experiments confirm that the proposed KSSR method outperforms the traditional kriging and SSR methods, and it can be successfully used for DEM generation from contours.
Improved FastICA algorithm in fMRI data analysis using the sparsity property of the sources.
Ge, Ruiyang; Wang, Yubao; Zhang, Jipeng; Yao, Li; Zhang, Hang; Long, Zhiying
2016-04-01
As a blind source separation technique, independent component analysis (ICA) has many applications in functional magnetic resonance imaging (fMRI). Although either temporal or spatial prior information has been introduced into the constrained ICA and semi-blind ICA methods to improve the performance of ICA in fMRI data analysis, certain types of additional prior information, such as sparsity, have seldom been added to ICA algorithms as constraints. In this study, we proposed a SparseFastICA method that adds source sparsity as a constraint to the FastICA algorithm to improve the performance of the widely used FastICA. The source sparsity is estimated through a smoothed ℓ0 norm method. We performed experimental tests on both simulated data and real fMRI data to investigate the feasibility and robustness of SparseFastICA and made a performance comparison between SparseFastICA, FastICA, and Infomax ICA. Results on both the simulated and real fMRI data demonstrated the feasibility and robustness of SparseFastICA for source separation in fMRI data, and showed that SparseFastICA has better robustness to noise and better spatial detection power than FastICA. Although the spatial detection power of SparseFastICA and Infomax did not differ significantly, SparseFastICA had a faster computation speed than Infomax. More importantly, SparseFastICA outperformed FastICA in robustness and spatial detection power and can be used to identify more accurate brain networks than the FastICA algorithm. Copyright © 2016 Elsevier B.V. All rights reserved.
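The sparsity constraint rests on the smoothed ℓ0 norm, which replaces the non-differentiable count of non-zero entries with a sum of Gaussians. A minimal sketch of that estimate (σ chosen arbitrarily here; the paper tunes it within the ICA iterations):

```python
import numpy as np

def smoothed_l0(s, sigma=0.1):
    """Smoothed l0 norm: a differentiable estimate of the number of
    non-zero entries, n - sum_i exp(-s_i^2 / (2 sigma^2)).
    As sigma -> 0 this approaches the true l0 count."""
    return s.size - np.exp(-s**2 / (2.0 * sigma**2)).sum()

s = np.array([0.0, 0.0, 0.0, 5.0, -3.0])  # two active sources
approx_nnz = smoothed_l0(s)               # close to 2 for small sigma
```

Because the expression is smooth in s, its gradient can be folded into a fixed-point update, which is what makes it usable as an ICA constraint.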
Natural image sequences constrain dynamic receptive fields and imply a sparse code.
Häusler, Chris; Susemihl, Alex; Nawrot, Martin P
2013-11-06
In their natural environment, animals experience a complex and dynamic visual scenery. Under such natural stimulus conditions, neurons in the visual cortex employ a spatially and temporally sparse code. For the input scenario of natural still images, previous work demonstrated that unsupervised feature learning combined with the constraint of sparse coding can predict physiologically measured receptive fields of simple cells in the primary visual cortex. This convincingly indicated that the mammalian visual system is adapted to the natural spatial input statistics. Here, we extend this approach to the time domain in order to predict dynamic receptive fields that can account for both spatial and temporal sparse activation in biological neurons. We rely on temporal restricted Boltzmann machines and suggest a novel temporal autoencoding training procedure. When tested on a dynamic multi-variate benchmark dataset this method outperformed existing models of this class. Learning features on a large dataset of natural movies allowed us to model spatio-temporal receptive fields for single neurons. They resemble temporally smooth transformations of previously obtained static receptive fields and are thus consistent with existing theories. A neuronal spike response model demonstrates how the dynamic receptive field facilitates temporal and population sparseness. We discuss the potential mechanisms and benefits of a spatially and temporally sparse representation of natural visual input. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
EPR oximetry in three spatial dimensions using sparse spin distribution
NASA Astrophysics Data System (ADS)
Som, Subhojit; Potter, Lee C.; Ahmad, Rizwan; Vikram, Deepti S.; Kuppusamy, Periannan
2008-08-01
A method is presented to use continuous wave electron paramagnetic resonance imaging for rapid measurement of oxygen partial pressure in three spatial dimensions. A particulate paramagnetic probe is employed to create a sparse distribution of spins in a volume of interest. Information encoding location and spectral linewidth is collected by varying the spatial orientation and strength of an applied magnetic gradient field. Data processing exploits the spatial sparseness of spins to detect voxels with nonzero spin and to estimate the spectral linewidth for those voxels. The parsimonious representation of spin locations and linewidths permits an order of magnitude reduction in data acquisition time, compared to four-dimensional tomographic reconstruction using traditional spectral-spatial imaging. The proposed oximetry method is experimentally demonstrated for a lithium octa-n-butoxy naphthalocyanine (LiNc-BuO) probe using an L-band EPR spectrometer.
Huard, Edouard; Derelle, Sophie; Jaeck, Julien; Nghiem, Jean; Haïdar, Riad; Primot, Jérôme
2018-03-05
A challenging point in predicting the image quality of infrared imaging systems is the evaluation of the detector modulation transfer function (MTF). In this paper, we present a linear method to obtain a 2D continuous MTF from sparse spectral data. Within the method, an object with a predictable sparse spatial spectrum is imaged by the focal plane array. The sparse data are then processed to return the 2D continuous MTF, under the hypothesis that all the pixels have an identical spatial response. The linearity of the processing is a key point for directly estimating the error bars of the resulting detector MTF. The test bench is presented along with measurement tests on a 25 μm pitch InGaAs detector.
The Biogeography of Putative Microbial Antibiotic Production
Bryant, Jessica A.; Charkoudian, Louise K.; Docherty, Kathryn M.; Jones, Evan; Kembel, Steven W.; Green, Jessica L.; Bohannan, Brendan J. M.
2015-01-01
Understanding patterns in the distribution and abundance of functional traits across a landscape is of fundamental importance to ecology. Mapping these distributions is particularly challenging for species-rich groups with sparse trait measurement coverage, such as flowering plants, insects, and microorganisms. Here, we use likelihood-based character reconstruction to infer and analyze the spatial distribution of unmeasured traits. We apply this framework to a microbial dataset comprised of 11,732 ketosynthase alpha gene sequences extracted from 144 soil samples from three continents to document the spatial distribution of putative microbial polyketide antibiotic production. Antibiotic production is a key competitive strategy for soil microbial survival and performance. Additionally, novel antibiotic discovery is highly relevant to human health, making natural antibiotic production by soil microorganisms a major target for bioprospecting. Our comparison of trait-based biogeographical patterns to patterns based on taxonomy and phylogeny is relevant to our basic understanding of microbial biogeography as well as the pressing need for new antibiotics. PMID:26102275
Objective sea level pressure analysis for sparse data areas
NASA Technical Reports Server (NTRS)
Druyan, L. M.
1972-01-01
A computer procedure was used to analyze the pressure distribution over the North Pacific Ocean for eleven synoptic times in February, 1967. Independent knowledge of the central pressures of lows is shown to reduce the analysis errors for very sparse data coverage. The application of planned remote sensing of sea-level wind speeds is shown to make a significant contribution to the quality of the analysis especially in the high gradient mid-latitudes and for sparse coverage of conventional observations (such as over Southern Hemisphere oceans). Uniform distribution of the available observations of sea-level pressure and wind velocity yields results far superior to those derived from a random distribution. A generalization of the results indicates that the average lower limit for analysis errors is between 2 and 2.5 mb based on the perfect specification of the magnitude of the sea-level pressure gradient from a known verification analysis. A less than perfect specification will derive from wind-pressure relationships applied to satellite observed wind speeds.
The impact of the resolution of meteorological datasets on catchment-scale drought studies
NASA Astrophysics Data System (ADS)
Hellwig, Jost; Stahl, Kerstin
2017-04-01
Gridded meteorological datasets provide the basis for studying drought at a range of scales, including catchment-scale drought studies in hydrology. They are readily available for studying past weather conditions and often serve real-time monitoring as well. As these datasets differ in spatial/temporal coverage and spatial/temporal resolution, most studies face a tradeoff between these features. Our investigation examines whether biases occur when studying drought at the catchment scale with low-resolution input data. For that purpose, a comparison among the datasets HYRAS (covering Central Europe, 1x1 km grid, daily data, 1951-2005), E-OBS (Europe, 0.25° grid, daily data, 1950-2015) and GPCC (whole world, 0.5° grid, monthly data, 1901-2013) is carried out. Generally, biases in precipitation increase with decreasing resolution. The largest differences are found during summer. In the low mountain ranges of Central Europe, the coarser-resolution datasets (E-OBS, GPCC) overestimate dry days and underestimate total precipitation, since they cannot resolve the high spatial variability. However, relative measures like the correlation coefficient reveal good consistency of dry and wet periods, both for absolute precipitation values and for standardized indices like the Standardized Precipitation Index (SPI) or the Standardized Precipitation Evaporation Index (SPEI). In particular, the most severe droughts derived from the different datasets match very well. These results indicate that absolute values from coarse-resolution datasets should be used with caution for assessing hydrological drought at the catchment scale, whereas relative measures for determining periods of drought are more trustworthy. Studies on drought that downscale meteorological data should therefore carefully consider their data needs and focus on relative measures for dry periods if these are sufficient for the task.
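The SPI referred to above standardizes a precipitation record by fitting a gamma distribution and mapping its CDF through the standard normal quantile function. A minimal sketch on synthetic monthly totals (zero-precipitation handling and multi-month aggregation omitted for brevity):

```python
import numpy as np
from scipy import stats

def spi(precip):
    """Standardized Precipitation Index: fit a gamma distribution to the
    precipitation record (location fixed at zero) and transform each value
    through gamma CDF -> standard normal quantile."""
    a, loc, scale = stats.gamma.fit(precip, floc=0)
    return stats.norm.ppf(stats.gamma.cdf(precip, a, loc=loc, scale=scale))

rng = np.random.default_rng(2)
monthly = rng.gamma(shape=2.0, scale=30.0, size=240)  # 20 synthetic years, mm
index = spi(monthly)   # ~N(0, 1); strongly negative values flag drought
```

Because the transform is monotone, the ranking of wet and dry months is preserved, which is exactly why relative drought measures agree across datasets better than absolute totals do.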
Wu, Jun-Jun; Gao, Zhi-Hai; Li, Zeng-Yuan; Wang, Hong-Yan; Pang, Yong; Sun, Bin; Li, Chang-Long; Li, Xu-Zhi; Zhang, Jiu-Xing
2014-03-01
In order to estimate sparse vegetation information accurately in a desertification region, taking the southeast of Sunite Right Banner, Inner Mongolia, as the test site and Tiangong-1 hyperspectral imagery as the main data, sparse vegetation coverage and biomass were retrieved based on the normalized difference vegetation index (NDVI) and the soil adjusted vegetation index (SAVI), combined with field investigation data, and the advantages and disadvantages of the two indices were compared. Firstly, the correlation between the vegetation indices and vegetation coverage under different band combinations was analyzed, as well as that with biomass. Secondly, the best band combination was determined as the one maximizing the correlation coefficient between the vegetation index (VI) and the vegetation parameters. The maximum correlation coefficient between the vegetation parameters and NDVI reached 0.7, while that for SAVI nearly reached 0.8. The center wavelength of the red band in the best combination for NDVI was 630 nm, and that of the near-infrared (NIR) band was 910 nm, whereas center wavelengths of 620 and 920 nm, respectively, formed the best combination for SAVI. Finally, linear regression models were established to retrieve vegetation coverage and biomass from the Tiangong-1 VIs. The R2 of all models was more than 0.5, and that of the SAVI-based models was higher than that of the NDVI-based models; in particular, the R2 of the vegetation coverage retrieval model based on SAVI was as high as 0.59. In cross-validation, the RMSE of the SAVI-based models was lower than that of the NDVI-based models. The results show that the abundant spectral information of Tiangong-1 hyperspectral imagery can reflect the actual vegetation condition effectively, and that SAVI can estimate sparse vegetation information more accurately than NDVI in desertification regions.
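The two indices compared in the study are simple band ratios of red and NIR reflectance. A short sketch with toy reflectances (the linear retrieval coefficients at the end are placeholders, not the fitted Tiangong-1 values):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil Adjusted Vegetation Index; L damps the soil-background signal
    that dominates in sparsely vegetated terrain."""
    return (1.0 + L) * (nir - red) / (nir + red + L)

# Toy reflectances for a sparsely vegetated pixel (910 nm NIR, 630 nm red).
nir, red = 0.30, 0.15
v_ndvi, v_savi = ndvi(nir, red), savi(nir, red)

# Hypothetical linear retrieval of the paper's form: coverage = a * VI + b.
coverage = 1.2 * v_savi + 0.02
```

The soil-adjustment term L is why SAVI tends to track sparse vegetation better than NDVI over bright desert soils, consistent with the paper's comparison.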
Miao, Minmin; Zeng, Hong; Wang, Aimin; Zhao, Changsen; Liu, Feixiang
2017-02-15
Common spatial pattern (CSP) is the most widely used method in motor imagery based brain-computer interface (BCI) systems. In the conventional CSP algorithm, pairs of eigenvectors corresponding to the extreme eigenvalues are selected to construct the optimal spatial filter. In addition, an appropriate selection of subject-specific time segments and frequency bands plays an important role in its successful application. This study proposes to optimize spatial-frequency-temporal patterns for discriminative feature extraction. Spatial optimization is implemented by channel selection and by finding discriminative spatial filters adaptively on each time-frequency segment. A novel Discernibility of Feature Sets (DFS) criterion is designed for spatial filter optimization. Moreover, discriminative features located in multiple time-frequency segments are selected automatically by the proposed sparse time-frequency segment common spatial pattern (STFSCSP) method, which exploits sparse regression for significant feature selection. Finally, a weight determined by the sparse coefficient is assigned to each selected CSP feature, and we propose a Weighted Naïve Bayesian Classifier (WNBC) for classification. Experimental results on two public EEG datasets demonstrate that optimizing spatial-frequency-temporal patterns in a data-driven manner for discriminative feature extraction greatly improves classification performance. The proposed method gives significantly better classification accuracies than several competing methods in the literature and is a promising candidate for future BCI systems. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Ilyas, Maryam; Brierley, Christopher M.; Guillas, Serge
2017-09-01
Instrumental records showing increases in surface temperature are some of the most robust and iconic evidence of climate change. But how much should we trust regional temperature estimates interpolated from sparse observations? Here we quantify the uncertainty in the instrumental record by applying multiresolution lattice kriging, a recently developed interpolation technique that leverages the multiple spatial scales of temperature anomalies. The probability of monthly anomalies across the globe is represented by an ensemble, based on HadCRUT4 and accounting for observational and coverage uncertainties. To demonstrate the potential of these new data, we investigate the area-averaged temperature anomalies over the Niño 3.4 region in the equatorial Pacific. Having developed a definition of the El Niño-Southern Oscillation (ENSO) able to cope with probability distribution functions, we classify the ENSO state for each year since 1851. We find that for many years it is ambiguous, from the Niño 3.4 region alone, whether there was an El Niño or not. These years are mainly before 1920, but also just after World War II.
Kim, Steve M; Ganguli, Surya; Frank, Loren M
2012-08-22
Hippocampal place cells convey spatial information through a combination of spatially selective firing and theta phase precession. The way in which this information influences regions like the subiculum that receive input from the hippocampus remains unclear. The subiculum receives direct inputs from area CA1 of the hippocampus and sends divergent output projections to many other parts of the brain, so we examined the firing patterns of rat subicular neurons. We found a substantial transformation in the subicular code for space from sparse to dense firing rate representations along a proximal-distal anatomical gradient: neurons in the proximal subiculum are more similar to canonical, sparsely firing hippocampal place cells, whereas neurons in the distal subiculum have higher firing rates and more distributed spatial firing patterns. Using information theory, we found that the more distributed spatial representation in the subiculum carries, on average, more information about spatial location and context than the sparse spatial representation in CA1. Remarkably, despite the disparate firing rate properties of subicular neurons, we found that neurons at all proximal-distal locations exhibit robust theta phase precession, with similar spiking oscillation frequencies as neurons in area CA1. Our findings suggest that the subiculum is specialized to compress sparse hippocampal spatial codes into highly informative distributed codes suitable for efficient communication to other brain regions. Moreover, despite this substantial compression, the subiculum maintains finer scale temporal properties that may allow it to participate in oscillatory phase coding and spike timing-dependent plasticity in coordination with other regions of the hippocampal circuit.
A sparse equivalent source method for near-field acoustic holography.
Fernandez-Grande, Efren; Xenaki, Angeliki; Gerstoft, Peter
2017-01-01
This study examines a near-field acoustic holography method consisting of a sparse formulation of the equivalent source method, based on the compressive sensing (CS) framework. The method, denoted Compressive-Equivalent Source Method (C-ESM), encourages spatially sparse solutions (based on the superposition of few waves) that are accurate when the acoustic sources are spatially localized. The importance of obtaining a non-redundant representation, i.e., a sensing matrix with low column coherence, and the inherent ill-conditioning of near-field reconstruction problems is addressed. Numerical and experimental results on a classical guitar and on a highly reactive dipole-like source are presented. C-ESM is valid beyond the conventional sampling limits, making wide-band reconstruction possible. Spatially extended sources can also be addressed with C-ESM, although in this case the obtained solution does not recover the spatial extent of the source.
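The sparsity-promoting inversion at the heart of a method like C-ESM can be illustrated with a generic l1-regularised least-squares solver (iterative soft-thresholding, ISTA). The solver and the toy transfer matrix below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def ista_l1(G, p, lam=0.01, n_iter=500):
    """ISTA for min_q (1/2)||G q - p||_2^2 + lam * ||q||_1,
    a standard way to promote the spatially sparse equivalent-source
    solutions that the compressive sensing framework relies on."""
    L = np.linalg.norm(G, 2) ** 2          # Lipschitz constant of the gradient
    q = np.zeros(G.shape[1])
    for _ in range(n_iter):
        grad = G.T @ (G @ p_residual(G, q, p))
        # (inlined below for clarity)
        grad = G.T @ (G @ q - p)
        z = q - grad / L                   # gradient step
        q = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return q

def p_residual(G, q, p):
    return q  # placeholder, unused; kept for readability of the gradient line

# Toy example: 3 candidate equivalent sources, only one truly active.
rng = np.random.default_rng(0)
G = rng.standard_normal((20, 3))           # transfer matrix: sources -> sensors
q_true = np.array([0.0, 1.0, 0.0])         # spatially sparse source strengths
p = G @ q_true                              # noise-free measured pressures
q_hat = ista_l1(G, p, lam=0.01)
```

With a localized (sparse) source and an incoherent transfer matrix, the l1 penalty recovers the active source while suppressing the inactive candidates, mirroring the behaviour described for spatially localized acoustic sources.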
Sparse orthogonal population representation of spatial context in the retrosplenial cortex.
Mao, Dun; Kandler, Steffen; McNaughton, Bruce L; Bonin, Vincent
2017-08-15
Sparse orthogonal coding is a key feature of hippocampal neural activity, which is believed to increase episodic memory capacity and to assist in navigation. Some retrosplenial cortex (RSC) neurons convey distributed spatial and navigational signals, but place-field representations such as those observed in the hippocampus have not been reported. Combining cellular Ca2+ imaging in the RSC of mice with a head-fixed locomotion assay, we identified a population of RSC neurons, located predominantly in superficial layers, whose ensemble activity closely resembles that of hippocampal CA1 place cells during the same task. Like CA1 place cells, these RSC neurons fire in sequences during movement and show narrowly tuned firing fields that form a sparse, orthogonal code correlated with location. RSC 'place' cell activity is robust to environmental manipulations, showing partial remapping similar to that observed in CA1. This population code for spatial context may assist the RSC in its role in memory and/or navigation. Neurons in the retrosplenial cortex (RSC) encode spatial and navigational signals. Here the authors use calcium imaging to show that, similar to the hippocampus, RSC neurons also encode place-cell-like activity in a sparse orthogonal representation, partially anchored to the allocentric cues on the linear track.
NASA Astrophysics Data System (ADS)
Duffy, James P.; Pratt, Laura; Anderson, Karen; Land, Peter E.; Shutler, Jamie D.
2018-01-01
Seagrass ecosystems are highly sensitive to environmental change. They are also in global decline and under threat from a variety of anthropogenic factors. There is now an urgency to establish robust monitoring methodologies so that changes in seagrass abundance and distribution in these sensitive coastal environments can be understood. Typical monitoring approaches have included remote sensing from satellites and airborne platforms, ground based ecological surveys and snorkel/scuba surveys. These techniques can suffer from temporal and spatial inconsistency, or are very localised making it hard to assess seagrass meadows in a structured manner. Here we present a novel technique using a lightweight (sub 7 kg) drone and consumer grade cameras to produce very high spatial resolution (∼4 mm pixel⁻¹) mosaics of two intertidal sites in Wales, UK. We present a full data collection methodology followed by a selection of classification techniques to produce coverage estimates at each site. We trialled three classification approaches of varying complexity to investigate and illustrate the differing performance and capabilities of each. Our results show that unsupervised classifications perform better than object-based methods in classifying seagrass cover. We also found that the more sparsely vegetated of the two meadows studied was more accurately classified: it had lower root mean squared deviation (RMSD) between observed and classified coverage (9-9.5%) compared to a more densely vegetated meadow (RMSD 16-22%). Furthermore, we examine the potential to detect other biotic features, finding that lugworm mounds can be detected visually at coarser resolutions such as 43 mm pixel⁻¹, whereas smaller features such as cockle shells within seagrass require finer-grained data (<17 mm pixel⁻¹).
Southern Hemisphere Upper Thermospheric Wind Climatology
NASA Astrophysics Data System (ADS)
Dhadly, M. S.; Emmert, J. T.; Drob, D. P.
2017-12-01
This study is focused on the poorly understood large-scale upper thermospheric wind dynamics at southern polar cap, auroral, and mid latitudes. The gaps in our understanding of the dynamic high-latitude thermosphere are largely due to the sparseness of thermospheric wind measurements. Using data from current observational facilities, it is unfeasible to construct a synoptic picture of the Southern Hemisphere upper thermospheric winds. However, enough data with wide spatial and temporal coverage have accumulated to construct a meaningful statistical analysis of winds as a function of season, magnetic latitude, and magnetic local time. We use long-term data from nine ground-based stations located at different southern high latitudes and from three space-based instruments. These diverse data sets possess different geometries and different spatial and solar coverage. The major challenge of the effort is to combine these disparate sources of data into a coherent picture while overcoming the sampling limitations and biases among the data sets. Our preliminary analyses show mutual biases among some of them. We first address the biases among the various data sets and then combine them coherently to construct maps of neutral winds for the various seasons. We then validate the fitted climatology against the observational data and compare it with corresponding fits of 25 years of simulated winds from the National Center for Atmospheric Research Thermosphere-Ionosphere-Electrodynamics General Circulation Model. This study provides critical insight into magnetosphere-ionosphere-thermosphere coupling and sets a necessary benchmark for validating new observations and tuning first-principles models.
NASA Astrophysics Data System (ADS)
Wang, Yihan; Lu, Tong; Wan, Wenbo; Liu, Lingling; Zhang, Songhe; Li, Jiao; Zhao, Huijuan; Gao, Feng
2018-02-01
To fully realize the potential of photoacoustic tomography (PAT) in preclinical and clinical applications, rapid measurements and robust reconstructions are needed. Sparse-view measurements have been adopted effectively to accelerate data acquisition. However, since reconstruction from sparse-view sampling data is challenging, both an effective measurement scheme and an appropriate reconstruction must be considered. In this study, we present an iterative sparse-view PAT reconstruction scheme in which a virtual parallel-projection concept matched to the proposed measurement condition helps to realize the "compressive sensing" procedure of the reconstruction, while spatially adaptive filtering, which fully exploits the a priori information of mutually similar blocks in natural images, is introduced to effectively recover the partially unknown coefficients in the transformed domain. As a result, sparse-view PAT images can be reconstructed with higher quality than the results obtained by the universal back-projection (UBP) algorithm in the same sparse-view cases. The proposed approach has been validated by simulation experiments and exhibits desirable performance in image fidelity even with a small number of measuring positions.
NASA Technical Reports Server (NTRS)
Joiner, J.; Guanter, L.; Lindstrot, R.; Voigt, M.; Vasilkov, A. P.; Middleton, E. M.; Huemmrich, K. F.; Yoshida, Y.; Frankenberg, C.
2013-01-01
Globally mapped terrestrial chlorophyll fluorescence retrievals are of high interest because they can provide information on the functional status of vegetation including light-use efficiency and global primary productivity that can be used for global carbon cycle modeling and agricultural applications. Previous satellite retrievals of fluorescence have relied solely upon the filling-in of solar Fraunhofer lines that are not significantly affected by atmospheric absorption. Although these measurements provide near-global coverage on a monthly basis, they suffer from relatively low precision and sparse spatial sampling. Here, we describe a new methodology to retrieve global far-red fluorescence information; we use hyperspectral data with a simplified radiative transfer model to disentangle the spectral signatures of three basic components: atmospheric absorption, surface reflectance, and fluorescence radiance. An empirically based principal component analysis approach is employed, primarily using cloudy data over ocean, to model and solve for the atmospheric absorption. Through detailed simulations, we demonstrate the feasibility of the approach and show that moderate-spectral-resolution measurements with a relatively high signal-to-noise ratio can be used to retrieve far-red fluorescence information with good precision and accuracy. The method is then applied to data from the Global Ozone Monitoring Instrument 2 (GOME-2). The GOME-2 fluorescence retrievals display similar spatial structure as compared with those from a simpler technique applied to the Greenhouse gases Observing SATellite (GOSAT). GOME-2 enables global mapping of far-red fluorescence with higher precision over smaller spatial and temporal scales than is possible with GOSAT. Near-global coverage is provided within a few days. We are able to show clearly for the first time physically plausible variations in fluorescence over the course of a single month at a spatial resolution of 0.5 deg × 0.5 deg. 
We also show some significant differences between fluorescence and coincident normalized difference vegetation indices (NDVI) retrievals.
Joseph, John; Sharif, Hatim O; Sunil, Thankam; Alamgir, Hasanat
2013-07-01
The adverse health effects of high concentrations of ground-level ozone are well-known, but estimating exposure is difficult due to the sparseness of urban monitoring networks. This sparseness discourages the reservation of a portion of the monitoring stations for validation of interpolation techniques precisely when the risk of overfitting is greatest. In this study, we test a variety of simple spatial interpolation techniques for 8-h ozone with thousands of randomly selected subsets of data from two urban areas with monitoring stations sufficiently numerous to allow for true validation. Results indicate that ordinary kriging with only the range parameter calibrated in an exponential variogram is the generally superior method, and yields reliable confidence intervals. Sparse data sets may contain sufficient information for calibration of the range parameter even if the Moran I p-value is close to unity. An R script is made available to apply the methodology to other sparsely monitored constituents. Copyright © 2013 Elsevier Ltd. All rights reserved.
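A minimal sketch of ordinary kriging with an exponential variogram, mirroring the setup in which only the range parameter is calibrated (the sill is held fixed here, and the station coordinates and ozone values are invented for illustration):

```python
import numpy as np

def exp_variogram(h, sill, vrange):
    """Exponential variogram model; vrange is the (calibrated) range."""
    return sill * (1.0 - np.exp(-h / vrange))

def ordinary_kriging(xy, z, xy0, sill=1.0, vrange=1.0):
    """Ordinary kriging prediction at xy0 from stations xy with values z.
    Solves the standard OK system with a Lagrange multiplier enforcing
    weights that sum to one (illustrative implementation)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    gamma = exp_variogram(d, sill, vrange)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma
    A[n, n] = 0.0                          # Lagrange-multiplier corner
    b = np.ones(n + 1)
    b[:n] = exp_variogram(np.linalg.norm(xy - xy0, axis=1), sill, vrange)
    w = np.linalg.solve(A, b)
    return float(w[:n] @ z)

# Invented 8-h ozone values at four stations on a unit square.
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([1.0, 2.0, 3.0, 4.0])
pred = ordinary_kriging(xy, z, np.array([0.5, 0.5]), sill=1.0, vrange=0.7)
```

Ordinary kriging is an exact interpolator (predicting at a station returns that station's value), and the unit-sum weight constraint makes it unbiased for a constant field, two properties worth checking when reserving stations for validation as the study does.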
Spatial, Temporal and Spectral Satellite Image Fusion via Sparse Representation
NASA Astrophysics Data System (ADS)
Song, Huihui
Remote sensing provides good measurements for monitoring and further analyzing climate change, ecosystem dynamics, and human activities at global or regional scales. Over the past two decades, the number of launched satellite sensors has been increasing with the development of aerospace technologies and the growing demand for remote sensing data in a vast range of application fields. However, a key technological challenge confronting these sensors is that they trade off spatial resolution against other properties, including temporal resolution, spectral resolution, and swath width, due to the limitations of hardware technology and budget constraints. To increase the spatial resolution of data while preserving these other properties, one cost-effective solution is to explore data integration methods that fuse multi-resolution data from multiple sensors, thereby enhancing the application capabilities of available remote sensing data. In this thesis, we propose to fuse spatial resolution with temporal resolution and with spectral resolution, respectively, based on sparse representation theory. Taking as a study case Landsat ETM+ (with a spatial resolution of 30 m and a temporal resolution of 16 days) and MODIS (with a spatial resolution of 250 m to 1 km and daily temporal resolution) reflectance, we propose two spatial-temporal fusion methods to combine the fine spatial information of the Landsat image with the daily temporal resolution of the MODIS image. Motivated by the fact that the images from these two sensors are comparable in corresponding bands, we propose to link their spatial information through an available Landsat-MODIS image pair (captured on a prior date) and then predict the Landsat image from its MODIS counterpart on the prediction date. To learn the spatial details from the prior images well, we use a redundant dictionary to extract the basic representation atoms for both Landsat and MODIS images based on sparse representation.
Under the scenario of two prior Landsat-MODIS image pairs, we build the corresponding relationship between the difference images of MODIS and ETM+ by training a low- and high-resolution dictionary pair from the given prior image pairs. In the second scenario, i.e., with only one Landsat-MODIS image pair available, we directly correlate MODIS and ETM+ data through an image degradation model. The fusion stage is then achieved by super-resolving the MODIS image combined with high-pass modulation in a two-layer fusion framework. Remarkably, the proposed spatial-temporal fusion methods form a unified framework for blending remote sensing images with either phenology change or land-cover-type change. Based on the proposed spatial-temporal fusion models, we propose to monitor land use/land cover changes in Shenzhen, China. As a fast-growing city, Shenzhen faces the problem of detecting rapid changes for both rational city planning and sustainable development. However, the cloudy and rainy weather of the region in which Shenzhen is located makes the acquisition cycle of high-quality satellite images longer than the sensors' normal revisit periods. Spatial-temporal fusion methods can tackle this problem by improving the spatial resolution of images with coarse spatial resolution but frequent temporal coverage, thereby making the detection of rapid changes possible. On two Landsat-MODIS datasets with annual and monthly changes, respectively, we apply the proposed spatial-temporal fusion methods to the task of multiple change detection. Afterward, we propose a novel spatial and spectral fusion method for satellite multispectral and hyperspectral (or high-spectral) images based on dictionary-pair learning and sparse non-negative matrix factorization.
By combining the spectral information from the hyperspectral image, which is characterized by low spatial resolution but high spectral resolution (abbreviated LSHS), and the spatial information from the multispectral image, which features high spatial resolution but low spectral resolution (abbreviated HSLS), this method aims to generate fused data with both high spatial and high spectral resolution. Motivated by the observation that each hyperspectral pixel can be represented by a linear combination of a few endmembers, the method first extracts the spectral bases of the LSHS and HSLS images by making full use of the rich spectral information in the LSHS data. The spectral bases of these two categories of data then form a dictionary pair, owing to their correspondence in representing each pixel spectrum of the LSHS and HSLS data, respectively. Subsequently, the LSHS image is spatially unmixed by representing the HSLS image with respect to the corresponding learned dictionary, deriving its representation coefficients. Combining the spectral bases of the LSHS data with the representation coefficients of the HSLS data, we finally derive fused data characterized by the spectral resolution of the LSHS data and the spatial resolution of the HSLS data.
The HTM Spatial Pooler-A Neocortical Algorithm for Online Sparse Distributed Coding.
Cui, Yuwei; Ahmad, Subutai; Hawkins, Jeff
2017-01-01
Hierarchical temporal memory (HTM) provides a theoretical framework that models several key computational principles of the neocortex. In this paper, we analyze an important component of HTM, the HTM spatial pooler (SP). The SP models how neurons learn feedforward connections and form efficient representations of the input. It converts arbitrary binary input patterns into sparse distributed representations (SDRs) using a combination of competitive Hebbian learning rules and homeostatic excitability control. We describe a number of key properties of the SP, including fast adaptation to changing input statistics, improved noise robustness through learning, efficient use of cells, and robustness to cell death. In order to quantify these properties we develop a set of metrics that can be directly computed from the SP outputs. We show how the properties are met using these metrics and targeted artificial simulations. We then demonstrate the value of the SP in a complete end-to-end real-world HTM system. We discuss the relationship with neuroscience and previous studies of sparse coding. The HTM spatial pooler represents a neurally inspired algorithm for learning sparse representations from noisy data streams in an online fashion.
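The core encode-and-learn loop of a spatial-pooler-style algorithm, reduced to overlap computation, k-winners-take-all sparsification, and a Hebbian-style update of the winning columns' connections, can be sketched as below. This is a deliberately minimal sketch, omitting the boosting and homeostatic excitability control of the full HTM SP, and all sizes are invented:

```python
import numpy as np

def spatial_pooler_step(x, W, k, lr=0.1):
    """One simplified spatial-pooler-like step: columns compute their
    overlap with the binary input, the top-k columns become active
    (forming a sparse distributed representation), and only the active
    columns' connection weights move toward the input (Hebbian-style)."""
    overlap = W @ x                       # feedforward overlap per column
    active = np.argsort(overlap)[-k:]     # k winners take all
    sdr = np.zeros(W.shape[0], dtype=int)
    sdr[active] = 1
    W[active] += lr * (x - W[active])     # learn only on active columns
    return sdr, W

rng = np.random.default_rng(1)
n_cols, n_in, k = 50, 64, 5               # invented sizes for illustration
W = rng.random((n_cols, n_in)) * 0.1      # small random initial weights
x = (rng.random(n_in) < 0.2).astype(float)  # sparse binary input pattern
sdr, W = spatial_pooler_step(x, W, k)
```

Because exactly k columns fire regardless of the input, the output sparsity is fixed by construction, one ingredient behind the noise robustness and efficient cell use the paper quantifies.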
Real-time incident detection using social media data.
DOT National Transportation Integrated Search
2016-05-09
The effectiveness of traditional incident detection is often limited by sparse sensor coverage, and reporting incidents to emergency response systems is labor-intensive. This research project mines tweet texts to extract incident information on bot...
NASA Astrophysics Data System (ADS)
Tang, L.; Hossain, F.
2009-12-01
Understanding the error characteristics of satellite rainfall data at different spatial/temporal scales is critical, especially as the scheduled Global Precipitation Measurement (GPM) mission plans to provide High Resolution Precipitation Products (HRPPs) at global scales. Satellite rainfall data contain errors that require ground validation (GV) data to characterize, yet satellite rainfall data are most useful precisely in regions lacking GV. A critical step, therefore, is to develop a spatial interpolation scheme for transferring the error characteristics of satellite rainfall data from GV regions to non-GV regions. As a prelude to GPM, the TRMM Multi-satellite Precipitation Analysis (TMPA) products 3B41RT and 3B42RT (Huffman et al., 2007) over the US, spanning a record of 6 years, are used as a representative example of satellite rainfall data. Next Generation Radar (NEXRAD) Stage IV rainfall data are used as the GV reference. Initial work by the authors (Tang et al., 2009, GRL) showed promise in transferring error from GV to non-GV regions based on a six-year climatologic average of satellite rainfall data, assuming only 50% GV coverage. However, this transfer of error characteristics needs to be investigated for a range of GV data coverages. It is also important to investigate whether proxy-GV data from an accurate space-borne sensor, such as the TRMM PR (or the GPM DPR), can be leveraged for error transfer in sparsely gauged regions. The specific question we ask in this study is, “what is the minimum coverage of GV data required for the error transfer scheme to be implemented with acceptable accuracy at hydrologically relevant scales?” Three geostatistical interpolation methods are compared: ordinary kriging, indicator kriging, and disjunctive kriging. Various error metrics are assessed for transfer, such as Probability of Detection for rain and no rain, False Alarm Ratio, Frequency Bias, Critical Success Index, and RMSE.
Understanding the proper space-time scales at which these metrics can reasonably be transferred is also explored in this study. Keywords: satellite rainfall, error transfer, spatial interpolation, kriging methods.
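The categorical verification metrics listed above all derive from a simple rain/no-rain contingency table of hits, misses, and false alarms; a minimal sketch with invented counts:

```python
def categorical_scores(hits, misses, false_alarms):
    """Standard rain/no-rain verification metrics used when comparing
    satellite rainfall estimates against ground-validation data."""
    pod = hits / (hits + misses)                     # Probability of Detection
    far = false_alarms / (hits + false_alarms)       # False Alarm Ratio
    csi = hits / (hits + misses + false_alarms)      # Critical Success Index
    bias = (hits + false_alarms) / (hits + misses)   # Frequency Bias
    return pod, far, csi, bias

# Invented counts for illustration: 80 hits, 20 misses, 40 false alarms.
pod, far, csi, bias = categorical_scores(hits=80, misses=20, false_alarms=40)
```

Computing these per grid cell in GV regions and interpolating the resulting fields (e.g. by kriging) is the kind of error-transfer map the study evaluates.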
Analytical Incorporation of Velocity Parameters into Ice Sheet Elevation Change Rate Computations
NASA Astrophysics Data System (ADS)
Nagarajan, S.; Ahn, Y.; Teegavarapu, R. S. V.
2014-12-01
NASA, ESA, and various other agencies have been collecting laser, optical, and radar altimetry data through various missions to study elevation changes of the cryosphere. Laser altimetry collected by various airborne and spaceborne missions has provided multi-temporal coverage of Greenland and Antarctica since 1993. Though these missions have increased data coverage, given the dynamic nature of the ice surface the data are still sparse, both spatially and temporally, for accurate elevation change detection studies. The temporal and spatial gaps are usually filled by interpolation techniques. This presentation will demonstrate a method to improve the temporal interpolation. Considering their accuracy, repeat coverage, and spatial distribution, laser scanning data have been widely used to compute elevation change rates of the Greenland and Antarctic ice sheets. A major problem with these approaches is that they do not consider ice sheet velocity dynamics in the change rate computations. Though the correlation between velocity and elevation change rate was noted by Hurkmans et al. (2012), corrections for velocity changes were applied after computing elevation change rates, by assuming a linear or higher-order polynomial relationship. This research will discuss the possibility of parameterizing ice sheet dynamics as unknowns (dX and dY) in the adjustment mathematical model that computes elevation change (dZ) rates, i.e., a simultaneous computation of changes in all three directions of the ice surface. Moreover, the laser points between two time epochs in a crossover area differ in distribution and count; therefore, a registration method that does not require point-to-point correspondence is needed to recover the unknown elevation and velocity parameters.
This research will explore the possibility of registering multi-temporal datasets using a volume minimization algorithm, which determines the unknown dX, dY, and dZ that minimize the volume between two or more time-epoch point clouds. In order to make use of other existing data as well as to constrain the adjustment, InSAR velocities will be used as initial values for the parameters dX and dY. The presentation will discuss the results of the analytical incorporation of these parameters and of the volume-based registration method for a test site in Greenland.
Online Hierarchical Sparse Representation of Multifeature for Robust Object Tracking
Qu, Shiru
2016-01-01
Object tracking based on sparse representation has given promising tracking results in recent years. However, trackers under the sparse representation framework tend to overemphasize the sparse representation and ignore the correlation of visual information. In addition, sparse coding methods encode each local region independently and ignore the spatial neighborhood information of the image. In this paper, we propose a robust tracking algorithm. Firstly, multiple complementary features are used to describe the object appearance; the appearance model of the tracked target is modeled by instantaneous and stable appearance features simultaneously. A two-stage sparse-coding method, which takes both the spatial neighborhood information of the image patch and the computational burden into consideration, is used to compute the reconstructed object appearance. Then, the reliability of each tracker is measured by the tracking likelihood functions of the transient and reconstructed appearance models. Finally, the most reliable tracker is obtained within a well-established particle filter framework; the training set and the template library are incrementally updated based on the current tracking results. Experimental results on different challenging video sequences show that the proposed algorithm performs well, with superior tracking accuracy and robustness. PMID:27630710
Enclosure Transform for Interest Point Detection From Speckle Imagery.
Yongjian Yu; Jue Wang
2017-03-01
We present a fast enclosure transform (ET) to localize complex objects of interest in speckle imagery. This approach exploits the spatial confinement of regional features in a sparse image feature representation. Unrelated, broken ridge features surrounding an object are organized collaboratively, giving rise to the enclosureness of the object. Three enclosure likelihood measures are constructed: the enclosure force, the potential energy, and the encloser count. In the transform domain, local maxima manifest the locations of objects of interest, for which only the intrinsic dimension is known a priori. The discrete ET algorithm is computationally efficient, on the order of O(MN) for N measuring distances across an image of M ridge pixels, and involves few, easily set parameters. We demonstrate and assess the performance of ET on the automatic detection of prostate locations in suprapubic ultrasound images. ET yields superior results in terms of positive detection rate, accuracy, and coverage.
Different Operating Modes of the Rosetta's Ion Composition Analyzer and Its Virtual Counterpart
NASA Astrophysics Data System (ADS)
Pospieszyński, R.
2009-12-01
The Ion Composition Analyzer (ICA) is part of the Rosetta Plasma Consortium (RPC) on board the Rosetta space probe heading for comet 67P/Churyumov-Gerasimenko, which it is scheduled to reach in 2014. To reduce telemetry, the ICA instrument has a number of data reduction modes (sampling modes). The effects of these different modes are investigated, and a plan for how best to operate the instrument when in orbit around the comet will be prepared. To investigate all of the cases, a virtual instrument is being prepared that can be operated in the same modes as the ``real'' one. The sampling study will calculate which particles arrive from each viewing direction, based on the ISSI Comet Model, and then assess how much information we lose through too-sparse sampling and incomplete spatial coverage.
NASA Technical Reports Server (NTRS)
Malloy, Kelsey; Folmer, Michael J.; Phillips, Joseph; Sienkiewicz, Joseph M.; Berndt, Emily
2017-01-01
Motivation: ocean data are sparse, so marine forecasting relies heavily on satellite imagery; the Ocean Prediction Center (OPC) is the "mariner's weather lifeline". The OPC is responsible for Pacific, Atlantic, and Pacific-Alaska surface analyses (24, 48, 96 hr); wind and wave analyses (24, 48, 96 hr); and issuing warnings and forecast decisions. The Geostationary Operational Environmental Satellite-R Series (now GOES-16) offers 3 times the spectral resolution, 4 times the spatial resolution, and 5 times faster coverage than the previous GOES generation, and is comparable to the Japanese Meteorological Agency's Himawari-8, which was used extensively throughout this research. Research question: how can integrating satellite imagery and derived products help forecasters improve the prognosis of rapid cyclogenesis and hurricane-force wind events? Phase I, identifying stratospheric air intrusions, draws on the 6.2, 6.9, and 7.3 micron water vapor channels; the Airmass RGB product; AIRS, IASI, and NUCAPS total column ozone and ozone anomaly; and ASCAT (A/B) and AMSR-2 wind data.
Validation Of TRMM For Hazard Assessment In The Remote Context Of Tropical Africa
NASA Astrophysics Data System (ADS)
Monsieurs, E.; Kirschbaum, D.; Tan, J.; Jacobs, L.; Kervyn, M.; Demoulin, A.; Dewitte, O.
2017-12-01
Accurate rainfall data are fundamental for understanding and mitigating the disastrous effects of many rainfall-triggered hazards, especially given the challenges arising from climate change and rainfall variability. In tropical Africa in particular, the sparse operational rainfall gauging network hampers the ability to understand these hazards, so satellite rainfall estimates (SRE) can be of great value. Yet rigorous validation is required to identify the uncertainties in using SRE for hazard applications. We evaluated the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) 3B42 Research Derived Daily Product from 1998 to 2017, at 0.25° x 0.25° spatial and 24 h temporal resolution. The validation was done over the western branch of the East African Rift, with regional landslide hazard assessment in mind. Even though we collected an unprecedented dataset of 47 gauges with a minimum temporal resolution of 24 h, the sparse and temporally heterogeneous coverage in a region with high rainfall variability poses challenges for validation. In addition, the discrepancy between local-scale gauge data and spatially averaged (~775 km²) TMPA data in the context of local convective storms and orographic rainfall is a crucial source of uncertainty. We adopted a flexible framework for SRE validation that fosters explorative research in a remote context. Results show that TMPA performs reasonably well during the rainy seasons for rainfall intensities <20 mm/day. TMPA systematically underestimates rainfall, but most problematic is the decreasing probability of detection of high-intensity rainfall. We suggest that landslide hazard can be efficiently assessed if the systematic biases in TMPA data are accounted for and rainfall thresholds are modulated by the controls on, and uncertainties of, TMPA revealed in this study. Moreover, TMPA is found relevant for mapping regional-scale rainfall-triggered hazards that are in any case poorly covered by the sparse available gauges. We anticipate validating TMPA's successor (Integrated Multi-satellitE Retrievals for Global Precipitation Measurement; 10 km × 10 km, half-hourly) using the proposed framework once this product becomes available, in early 2018, for the 1998-present period.
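The detection and bias behaviour described above can be quantified with standard categorical skill scores; the sketch below (invented data, a 1 mm/day rain/no-rain threshold) computes the probability of detection, false alarm ratio, and mean bias for a gauge/satellite pair:

```python
import numpy as np

def validation_scores(gauge, satellite, threshold=1.0):
    """Categorical and continuous skill scores for satellite rainfall
    estimates against gauge data (threshold in mm/day)."""
    g_rain = gauge >= threshold
    s_rain = satellite >= threshold
    hits = np.sum(g_rain & s_rain)
    misses = np.sum(g_rain & ~s_rain)
    false_alarms = np.sum(~g_rain & s_rain)
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    bias = np.mean(satellite - gauge)            # mean systematic error
    return pod, far, bias

# Hypothetical paired daily totals (mm) at one gauge location.
gauge = np.array([0.0, 5.0, 12.0, 30.0, 0.5, 8.0])
tmpa  = np.array([0.2, 4.0,  9.0, 18.0, 2.0, 6.5])
pod, far, bias = validation_scores(gauge, tmpa)
# A negative bias here mirrors the systematic underestimation reported above.
```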
NASA Astrophysics Data System (ADS)
Yang, W.; Min, M.; Bai, Y.; Lynnes, C.; Holloway, D.; Enloe, Y.; di, L.
2008-12-01
In the past few years, there has been growing interest among major Earth-observing satellite (EOS) data providers in serving data through the interoperable Web Coverage Service (WCS) interface protocol developed by the Open Geospatial Consortium (OGC). The interface defined in the WCS specifications allows client software to make customized requests for multi-dimensional EOS data, including spatial and temporal subsetting, resampling and interpolation, and coordinate reference system (CRS) transformation. A WCS server describes an offered coverage, i.e., a data product, through its response to a client's DescribeCoverage request. The description includes the offered coverage's spatial/temporal extents and resolutions, supported CRSs, supported interpolation methods, and supported encoding formats. Based on such information, a client can request all or a subset of a coverage at any spatial/temporal resolution and in any of the supported CRSs, formats, and interpolation methods. When implementing a WCS server, a data provider can present its data holdings to clients in different ways. One of the most straightforward, and commonly used, approaches is to offer individual physical data files as separate coverages. Such an implementation, however, results in an unwieldy number of offered coverages for large data holdings, and it cannot fully convey the relationships among different, but spatially and/or temporally associated, data files. It is desirable to decouple offered coverages from physical data files so that the former are more coherent, especially in the spatial and temporal domains. Therefore, some servers offer a single coverage for a set of spatially coregistered time series data files, such as a daily global precipitation coverage linked to many global single-day precipitation files; others offer a single coverage for multiple temporally coregistered files that together form a large spatial extent. In either case, a server needs to assemble an output coverage in real time by combining a potentially large number of physical files, which can be operationally difficult. The task becomes more challenging if an offered coverage involves spatially and temporally unregistered physical files. In this presentation, we discuss issues and lessons learned in providing NASA's AIRS Level 2 atmospheric products, which are in a satellite swath CRS and in 6-minute segment granule files, as virtual global coverages. We'll discuss the WCS server's on-the-fly georectification, mosaicking, quality screening, performance, and scalability.
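A WCS 1.0.0 GetCoverage request with the spatial and temporal subsetting described above can be assembled as a key-value-pair URL; the endpoint and coverage name below are placeholders, not NASA's actual service:

```python
from urllib.parse import urlencode

def getcoverage_url(endpoint, coverage, bbox, time, width, height,
                    crs="EPSG:4326", fmt="GeoTIFF"):
    """Assemble a WCS 1.0.0 KVP GetCoverage request with spatial and
    temporal subsetting (endpoint and coverage name are placeholders)."""
    params = {
        "SERVICE": "WCS", "VERSION": "1.0.0", "REQUEST": "GetCoverage",
        "COVERAGE": coverage, "CRS": crs,
        "BBOX": ",".join(map(str, bbox)),     # minx,miny,maxx,maxy
        "TIME": time, "WIDTH": width, "HEIGHT": height, "FORMAT": fmt,
    }
    return endpoint + "?" + urlencode(params)

url = getcoverage_url("https://example.org/wcs", "daily_precip",
                      (-180, -90, 180, 90), "2008-06-01", 1440, 720)
```

A client would issue this request over HTTP and receive the subsetted, resampled coverage in the requested format.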
Seasonal Dependence of Geomagnetic Active-Time Northern High-Latitude Upper Thermospheric Winds
NASA Astrophysics Data System (ADS)
Dhadly, Manbharat S.; Emmert, John T.; Drob, Douglas P.; Conde, Mark G.; Doornbos, Eelco; Shepherd, Gordon G.; Makela, Jonathan J.; Wu, Qian; Nieciejewski, Richard J.; Ridley, Aaron J.
2018-01-01
This study is focused on improving the poorly understood seasonal dependence of northern high-latitude F region thermospheric winds under active geomagnetic conditions. The gaps in our understanding of the dynamic high-latitude thermosphere are largely due to the sparseness of thermospheric wind measurements. With current observational facilities, it is infeasible to construct a synoptic picture of thermospheric winds, but enough data with wide spatial and temporal coverage have accumulated to support a meaningful statistical analysis. We use long-term data from eight ground-based and two space-based instruments to derive climatological wind patterns as a function of magnetic local time, magnetic latitude, and season. These diverse data sets possess different geometries and different spatial and solar activity coverage; the major challenge is to combine them into a coherent picture while overcoming their sampling limitations and mutual biases. In our previous study (focused on quiet-time winds), we found a bias in the Gravity Field and Steady State Ocean Circulation Explorer (GOCE) cross-track winds. Here we empirically quantify the GOCE bias and use it as a correction profile to remove the apparent bias before the empirical wind formulation. The assimilated wind patterns exhibit all major characteristics of high-latitude neutral circulation. The latitudinal extent of the duskside circulation expands almost 10° from winter to summer, while the dawnside circulation subsides from winter to summer. Disturbance winds derived from geomagnetically active and quiet winds show strong seasonal and latitudinal variability. Comparisons between the wind patterns derived here and the Disturbance Wind Model (DWM07), which has no seasonal dependence, suggest that DWM07 is skewed toward summertime conditions.
Satellite-based PM concentrations and their application to COPD in Cleveland, OH
Kumar, Naresh; Liang, Dong; Comellas, Alejandro; Chu, Allen D.; Abrams, Thad
2014-01-01
A hybrid approach is proposed to estimate exposure to fine particulate matter (PM2.5) at a given location and time. This approach builds on satellite-based aerosol optical depth (AOD), air pollution data from sparsely distributed Environmental Protection Agency (EPA) sites, and local time–space Kriging, an optimal interpolation technique. Given the daily global coverage of AOD data, we can develop daily estimates of air quality at any given location and time, assuring the unprecedented spatial coverage needed for air quality surveillance and management and for epidemiological studies. In this paper, we developed an empirical relationship between the 2 km AOD and PM2.5 data from EPA sites. Extrapolating this relationship to the study domain resulted in 2.3 million predictions of PM2.5 between 2000 and 2009 in the Cleveland Metropolitan Statistical Area (MSA). We developed local time–space Kriging to compute exposure at a given location and time using the predicted PM2.5. Daily estimates of PM2.5 were developed for the Cleveland MSA between 2000 and 2009 at 2.5 km spatial resolution; 1.7 million (~79.8%) of the 2.13 million predictions required for the multiyear geographic domain were robust. In an epidemiological application of the hybrid approach, admissions for an acute exacerbation of chronic obstructive pulmonary disease (AECOPD) were examined with respect to time–space lagged PM2.5 exposure. Our analysis suggests that the risk of AECOPD increases 2.3% with a unit increase in PM2.5 exposure within 9-day time and 0.05° (~5 km) distance lags. In the aggregated analysis, the exposed group (exposure to PM2.5 >15.4 μg/m3) was 54% more likely to be admitted for AECOPD than the reference group. The hybrid approach offers greater spatiotemporal coverage and more reliable characterization of ambient concentrations than conventional in situ monitoring-based approaches, and can thus potentially reduce exposure misclassification errors in conventional air pollution epidemiology studies. PMID:24045428
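The first step of the hybrid approach, an empirical AOD-to-PM2.5 relationship, can be sketched as a simple least-squares fit; the collocated samples below are invented for illustration:

```python
import numpy as np

# Hypothetical collocated samples: satellite AOD vs. PM2.5 at EPA sites.
aod  = np.array([0.10, 0.25, 0.40, 0.55, 0.70])
pm25 = np.array([ 6.0, 11.0, 17.0, 22.0, 28.0])      # ug/m^3

# Empirical relationship PM2.5 ~ a*AOD + b, fitted by least squares; the
# paper extrapolates such a fit to every 2 km AOD retrieval in the domain.
a, b = np.polyfit(aod, pm25, 1)
predicted = a * 0.30 + b      # PM2.5 estimate where only AOD is observed
```

In the full method, these pointwise predictions are then smoothed and gap-filled by local time–space Kriging.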
Zhang, Yu; Zhou, Guoxu; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej
2015-11-30
Common spatial pattern (CSP) is the most popular approach to motor-imagery (MI) feature extraction for classification in brain-computer interface (BCI) applications. Successful application of CSP depends to a large degree on filter band selection. However, the most appropriate band is typically subject-specific and can hardly be determined manually. This study proposes a sparse filter band common spatial pattern (SFBCSP) for optimizing the spatial patterns. SFBCSP estimates CSP features on multiple signals that are filtered from raw EEG data at a set of overlapping bands. The filter bands that yield significant CSP features are then selected in a supervised way by exploiting sparse regression. A support vector machine (SVM) is applied to the selected features for MI classification. Two public EEG datasets (BCI Competition III dataset IVa and BCI Competition IV dataset IIb) are used to validate the proposed SFBCSP method. Experimental results demonstrate that SFBCSP helps improve the classification performance of MI. The spatial patterns optimized by SFBCSP give overall better MI classification accuracy in comparison with several competing methods. The proposed SFBCSP is a potential method for improving the performance of MI-based BCIs.
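The CSP step that SFBCSP applies per band can be sketched via whitening and eigendecomposition of the class covariance matrices; the EEG trials below are synthetic, and the band filtering and sparse-regression band selection of SFBCSP are omitted:

```python
import numpy as np

def csp_filters(X1, X2):
    """Common spatial patterns for two classes of band-filtered EEG trials;
    X1, X2 have shape (trials, channels, samples)."""
    def avg_cov(X):
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)
    C1, C2 = avg_cov(X1), avg_cov(X2)
    # Whiten the composite covariance, then diagonalise class 1 in that space.
    evals, evecs = np.linalg.eigh(C1 + C2)
    P = evecs @ np.diag(evals ** -0.5) @ evecs.T        # whitening matrix
    lam, U = np.linalg.eigh(P @ C1 @ P.T)
    W = U.T @ P        # rows are spatial filters; lam near 0 or 1 is best
    return W, lam

rng = np.random.default_rng(1)
X1 = rng.normal(size=(20, 4, 200)); X1[:, 0] *= 3.0  # class 1: channel 0 strong
X2 = rng.normal(size=(20, 4, 200)); X2[:, 1] *= 3.0  # class 2: channel 1 strong
W, lam = csp_filters(X1, X2)
```

Log-variances of the filtered trials corresponding to the extreme eigenvalues would then feed the SVM classifier.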
Node Deployment Algorithm Based on Connected Tree for Underwater Sensor Networks
Jiang, Peng; Wang, Xingmin; Jiang, Lurong
2015-01-01
Designing an efficient deployment method to guarantee optimal monitoring quality is one of the key topics in underwater sensor networks. At present, a realistic deployment approach involves adjusting the depths of nodes in the water. A typical algorithm for this is the self-deployment depth adjustment algorithm (SDDA), which focuses on maximizing network coverage by constantly adjusting node depths to reduce coverage overlaps between neighboring nodes, and thus achieves good coverage performance. However, the connectivity performance of SDDA is not guaranteed. In this paper, we propose a depth adjustment algorithm based on a connected tree (CTDA). In CTDA, the sink node serves as the first root node to start building a connected tree, so that the network is ultimately organized as a forest that maintains network connectivity. Coverage overlaps between parent and child nodes are then reduced within each subtree to optimize coverage. A hierarchical strategy adjusts the distance between parent and child nodes to reduce node movement, and a silent mode is adopted to reduce communication cost. Simulations show that, compared with SDDA, CTDA achieves high connectivity for various communication ranges and numbers of nodes, and realizes coverage as high as that of SDDA for various sensing ranges and numbers of nodes but with less energy consumption. Simulations in sparse environments show that the connectivity and energy consumption of CTDA are considerably better than those of SDDA. Meanwhile, the connectivity and coverage of CTDA are close to those of the depth adjustment algorithm based on a connected dominating set (CDA), an algorithm similar to CTDA, while the energy consumption of CTDA is lower, particularly in sparse underwater environments. PMID:26184209
Volumetric CT with sparse detector arrays (and application to Si-strip photon counters).
Sisniega, A; Zbijewski, W; Stayman, J W; Xu, J; Taguchi, K; Fredenberg, E; Lundqvist, Mats; Siewerdsen, J H
2016-01-07
Novel x-ray medical imaging sensors, such as photon counting detectors (PCDs) and large area CCD and CMOS cameras, can involve irregular and/or sparse sampling of the detector plane. Application of such detectors to CT involves undersampling that is markedly different from the commonly considered case of sparse angular sampling. This work investigates volumetric sampling in CT systems incorporating sparsely sampled detectors with axial and helical scan orbits and evaluates performance of model-based image reconstruction (MBIR) with spatially varying regularization in mitigating artifacts due to sparse detector sampling. Volumetric metrics of sampling density and uniformity were introduced. Penalized-likelihood MBIR with a spatially varying penalty that homogenized resolution by accounting for variations in local sampling density (i.e. detector gaps) was evaluated. The proposed methodology was tested in simulations and on an imaging bench based on a Si-strip PCD (total area 5 cm × 25 cm) consisting of an arrangement of line sensors separated by gaps of up to 2.5 mm. The bench was equipped with translation/rotation stages allowing a variety of scanning trajectories, ranging from a simple axial acquisition to helical scans with variable pitch. Statistical (spherical clutter) and anthropomorphic (hand) phantoms were considered. Image quality was compared to that obtained with a conventional uniform penalty in terms of structural similarity index (SSIM), image uniformity, spatial resolution, contrast, and noise. Scan trajectories with intermediate helical width (~10 mm longitudinal distance per 360° rotation) demonstrated optimal tradeoff between the average sampling density and the homogeneity of sampling throughout the volume.
For a scan trajectory with 10.8 mm helical width, the spatially varying penalty resulted in significant visual reduction of sampling artifacts, confirmed by a 10% reduction in minimum SSIM (from 0.88 to 0.8) and a 40% reduction in the dispersion of SSIM in the volume compared to the constant penalty (both penalties applied at optimal regularization strength). Images of the spherical clutter and wrist phantoms confirmed the advantages of the spatially varying penalty, showing a 25% improvement in image uniformity and 1.8 × higher CNR (at matched spatial resolution) compared to the constant penalty. The studies elucidate the relationship between sampling in the detector plane, acquisition orbit, sampling of the reconstructed volume, and the resulting image quality. They also demonstrate the benefit of spatially varying regularization in MBIR for scenarios with irregular sampling patterns. Such findings are important and integral to the incorporation of a sparsely sampled Si-strip PCD in CT imaging.
Tensor-guided fitting of subduction slab depths
Bazargani, Farhad; Hayes, Gavin P.
2013-01-01
Geophysical measurements are often acquired at scattered locations in space. Therefore, interpolating or fitting the sparsely sampled data as a uniform function of space (a procedure commonly known as gridding) is a ubiquitous problem in geophysics. Most gridding methods require a model of spatial correlation for data. This spatial correlation model can often be inferred from some sort of secondary information, which may also be sparsely sampled in space. In this paper, we present a new method to model the geometry of a subducting slab in which we use a data-fitting approach to address the problem. Earthquakes and active-source seismic surveys provide estimates of depths of subducting slabs but only at scattered locations. In addition to estimates of depths from earthquake locations, focal mechanisms of subduction zone earthquakes also provide estimates of the strikes of the subducting slab on which they occur. We use these spatially sparse strike samples and the Earth's curved surface geometry to infer a model for spatial correlation that guides a blended neighbor interpolation of slab depths. We then modify the interpolation method to account for the uncertainties associated with the depth estimates.
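As a baseline for the gridding problem described above, the sketch below applies plain inverse-distance weighting to scattered depth samples; unlike the paper's method, it uses an isotropic correlation model with no strike guidance, and the sample data are hypothetical:

```python
import numpy as np

def idw_grid(xs, ys, vals, grid_x, grid_y, power=2.0):
    """Inverse-distance-weighted gridding of scattered depth samples --
    a simple isotropic baseline (the paper instead guides the interpolation
    with a strike-derived spatial correlation model)."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    out = np.zeros_like(gx, dtype=float)
    for i in range(gx.shape[0]):
        for j in range(gx.shape[1]):
            d = np.hypot(xs - gx[i, j], ys - gy[i, j])
            if d.min() < 1e-9:                # grid node hits a sample
                out[i, j] = vals[d.argmin()]
            else:
                w = d ** -power
                out[i, j] = np.sum(w * vals) / np.sum(w)
    return out

# Scattered slab-depth estimates (km), e.g. from earthquake hypocentres.
xs = np.array([0.0, 1.0, 0.0, 1.0]); ys = np.array([0.0, 0.0, 1.0, 1.0])
depth = np.array([10.0, 20.0, 30.0, 40.0])
grid = idw_grid(xs, ys, depth, np.linspace(0, 1, 3), np.linspace(0, 1, 3))
```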
Mid-frequency MTF compensation of optical sparse aperture system.
Zhou, Chenghao; Wang, Zhile
2018-03-19
Optical sparse aperture (OSA) systems can greatly improve the spatial resolution of an optical system. However, because their apertures are dispersed and sparse, the mid-frequency modulation transfer function (MTF) is significantly lower than that of a single-aperture system. This paper focuses on mid-frequency MTF compensation for optical sparse aperture systems. First, the mechanisms by which the mid-frequency MTF is reduced or lost are analyzed, taking the filling factor as the organizing parameter. A method for handling mid-frequency MTF reduction at large filling factors and a method for compensating mid-frequency MTF loss at small filling factors are then given. For MTF reduction, a spatially variant image restoration method is proposed to recover the mid-frequency information in the image; for MTF loss, two images obtained by two systems are fused to restore the mid-frequency information in the sparse aperture image. The feasibility of both methods is analyzed. Numerical simulations of the systems and algorithms for the two cases, implemented in Zemax and Matlab, demonstrate that both methods compensate the mid-frequency MTF of an OSA system effectively.
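The mid-frequency MTF loss at small filling factors can be reproduced numerically: the MTF is the normalised autocorrelation of the pupil, so a two-sub-aperture pupil has a gap at mid frequencies where a filled aperture does not. The geometry below is illustrative, not the paper's:

```python
import numpy as np

def mtf(pupil):
    """MTF as the normalised autocorrelation of the pupil function
    (computed via the power spectrum of the pupil)."""
    otf = np.fft.fftshift(np.abs(np.fft.ifft2(np.abs(np.fft.fft2(pupil)) ** 2)))
    return otf / otf.max()

n = 256
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
full = (np.hypot(x, y) < 60).astype(float)            # filled circular aperture
sparse = ((np.hypot(x + 40, y) < 20) |                # two small sub-apertures
          (np.hypot(x - 40, y) < 20)).astype(float)

mtf_full, mtf_sparse = mtf(full), mtf(sparse)
# At a mid frequency (lag 40 px along x) the filled aperture retains contrast
# while the two-sub-aperture MTF drops to ~0: the gap the paper compensates.
```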
Moody, Daniela; Wohlberg, Brendt
2018-01-02
An approach for land cover classification, seasonal and yearly change detection and monitoring, and identification of changes in man-made features may use a clustering of sparse approximations (CoSA) on sparse representations in learned dictionaries. The learned dictionaries may be derived using efficient convolutional sparse coding to build multispectral or hyperspectral, multiresolution dictionaries that are adapted to regional satellite image data. Sparse image representations of images over the learned dictionaries may be used to perform unsupervised k-means clustering into land cover categories. The clustering process behaves as a classifier in detecting real variability. This approach may combine spectral and spatial textural characteristics to detect geologic, vegetative, hydrologic, and man-made features, as well as changes in these features over time.
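A drastically simplified stand-in for the CoSA pipeline, sparse feature codes followed by unsupervised k-means clustering into land-cover categories, might look like this (synthetic two-class data; real CoSA uses learned convolutional dictionaries over multispectral imagery):

```python
import numpy as np

def sparse_codes(patches, dictionary, k=2):
    """Keep only the k largest-magnitude dictionary correlations per patch
    (a cheap stand-in for full convolutional sparse coding)."""
    corr = patches @ dictionary                     # (n_patches, n_atoms)
    codes = np.zeros_like(corr)
    top = np.argsort(-np.abs(corr), axis=1)[:, :k]
    rows = np.arange(corr.shape[0])[:, None]
    codes[rows, top] = corr[rows, top]
    return codes

def kmeans(X, iters=20):
    """Two-class Lloyd's k-means, seeded with the first and last samples."""
    centers = X[[0, len(X) - 1]].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in (0, 1)])
    return labels

rng = np.random.default_rng(2)
D = rng.normal(size=(16, 8)); D /= np.linalg.norm(D, axis=0)   # 8 unit atoms
water  = D[:, 0] + rng.normal(0.0, 0.1, size=(30, 16))  # two synthetic
forest = D[:, 1] + rng.normal(0.0, 0.1, size=(30, 16))  # land-cover classes
labels = kmeans(sparse_codes(np.vstack([water, forest]), D))
```

Clustering in code space rather than pixel space is what lets the method act as an unsupervised classifier of land-cover categories.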
Ran, Bin; Song, Li; Cheng, Yang; Tan, Huachun
2016-01-01
Traffic state estimation from floating car systems is a challenging problem. Because of low penetration rates and random distribution, the available floating car samples usually cover only part of the space-time points of a road network. To obtain wide-coverage traffic state from floating car data, many methods have been proposed to estimate the traffic state of uncovered links; however, these methods cannot provide the traffic state of the entire road network. In this paper, traffic state estimation is cast as a missing data imputation problem, and a tensor completion framework is proposed to estimate the missing traffic states. A tensor is constructed to model the traffic state, in which observed entries are derived directly from the floating car system and unobserved traffic states are modeled as missing entries. The constructed traffic state tensor represents the spatial and temporal correlations of traffic data and encodes the multi-way properties of the traffic state. The advantage of the proposed approach is that it fully mines and utilizes the multi-dimensional inherent correlations of the traffic state. We tested the proposed approach on a well-calibrated simulation network. Experimental results demonstrate that it yields reliable traffic state estimates from very sparse floating car data, particularly when the floating car penetration rate is below 1%. PMID:27448326
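The completion idea can be sketched in the matrix (two-way) case: alternate between a low-rank SVD truncation and re-imposing the floating-car observations. The rank-1 link-by-time speed matrix below is invented for illustration:

```python
import numpy as np

def lowrank_complete(M, mask, rank=1, iters=100):
    """Fill unobserved entries (mask == False) by alternating a rank-r SVD
    truncation with re-imposition of the observed entries (hard-impute)."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X[mask] = M[mask]               # keep the floating-car observations
    return X

# Rank-1 "speed" field: link free-flow factor x time-of-day cycle (invented).
link = np.array([[1.0], [0.8], [1.2]])              # 3 links
tod = np.array([[30.0, 60.0, 45.0, 25.0]])          # 4 time periods
speeds = link @ tod
mask = np.array([[True,  True,  False, True],
                 [False, True,  True,  True],
                 [True,  False, True,  False]])     # observed entries
est = lowrank_complete(speeds, mask)
```

The paper's tensor completion generalises this to three or more ways (link x day x time period), exploiting all of the correlations at once.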
Zhao, Dan; Di Nicola, Matteo; Khani, Mohammad M; Jestin, Jacques; Benicewicz, Brian C; Kumar, Sanat K
2016-09-14
We compare the self-assembly of silica nanoparticles (NPs) with physically adsorbed polystyrene-block-poly(2-vinylpyridine) (PS-b-P2VP) copolymers (BCP) against NPs with grafted bimodal (BM) brushes composed of long, sparsely grafted PS chains and a short, dense carpet of P2VP chains. As with grafted NPs, the dispersion state of the BCP NPs can be facilely tuned in PS matrices by varying the PS coverage on the NP surface or by changing the ratio of the PS graft to matrix chain lengths. Surprisingly, the BCP NPs are remarkably better dispersed than the NPs tethered with bimodal brushes at comparable PS grafting densities. We postulate that this difference arises from two factors inherent in the synthesis of the NPs: in the case of the BCP NPs the adsorption process is analogous to the chains being "grafted to" the NP surface, while the BM case corresponds to "grafting from" the surface. We have shown that the "grafted from" protocol yields patchy NPs even if the graft points are uniformly placed on each particle. This phenomenon, which is caused by chain conformation fluctuations, is exacerbated by the distribution function associated with the (small) number of grafts per particle. In contrast, in the case of BCP adsorption, each NP is more uniformly coated by a P2VP monolayer driven by the strongly favorable P2VP-silica interactions. Since each P2VP block is connected to a PS chain, we conjecture that these adsorbed systems are closer to the limit of spatially uniform sparse brush coverage than the chemically grafted case. We finally show that the better NP dispersion resulting from BCP adsorption leads to larger mechanical reinforcement than that achieved with BM particles. These results emphasize that physical adsorption of BCPs is a simple, effective, and practically promising strategy to direct NP dispersion in a chemically unfavorable polymer matrix.
ToxCast Data Expands Universe of Chemical-Gene Interactions (SOT)
Characterizing the effects of chemicals in biological systems is often summarized by chemical-gene interactions, which have sparse coverage in literature. The ToxCast chemical screening program has produced bioactivity data for nearly 2000 chemicals and over 450 gene targets. Thi...
NASA Astrophysics Data System (ADS)
Lian, Xu; Zeng, Zhenzhong; Yao, Yitong; Peng, Shushi; Wang, Kaicun; Piao, Shilong
2017-02-01
There is an increasing demand to integrate land surface temperature (LST) into climate research due to its global coverage, which requires a comprehensive knowledge of its distinctive characteristics compared to near-surface air temperature (Tair). Using satellite observations and in situ station-based data sets, we conducted a global-scale assessment of the spatial and seasonal variations in the difference between daily maximum LST and daily maximum Tair (δT, LST - Tair) during 2003-2014. Spatially, LST is generally higher than Tair over arid and sparsely vegetated regions in the middle-low latitudes, but LST is lower than Tair in tropical rainforests due to strong evaporative cooling, and in the high-latitude regions due to snow-induced radiative cooling. Seasonally, δT is negative in tropical regions throughout the year, while it displays a pronounced seasonality in both the midlatitudes and boreal regions. The seasonality in the midlatitudes is a result of the asynchronous responses of LST and Tair to the seasonal cycle of radiation and vegetation abundance, whereas in the boreal regions, seasonality is mainly caused by the change in snow cover. Our study identified substantial spatial heterogeneity and seasonality in δT, as well as its determinant environmental drivers, and thus provides a useful reference for monitoring near-surface air temperature changes using remote sensing, particularly in remote regions.
NASA Technical Reports Server (NTRS)
Li, Jing; Carlson, Barbara E.; Lacis, Andrew A.
2014-01-01
Moderate Resolution Imaging SpectroRadiometer (MODIS) and Multi-angle Imaging SpectroRadiometer (MISR) provide regular aerosol observations with global coverage. It is essential to examine the coherency between space- and ground-measured aerosol parameters in representing aerosol spatial and temporal variability, especially in the climate forcing and model validation context. In this paper, we introduce Maximum Covariance Analysis (MCA), also known as Singular Value Decomposition analysis, as an effective way to compare correlated aerosol spatial and temporal patterns between satellite measurements and AERONET data. This technique not only successfully extracts the variability of major aerosol regimes but also allows the simultaneous examination of aerosol variability both spatially and temporally. More importantly, it accommodates the sparsely distributed AERONET data well, for which other spectral decomposition methods, such as Principal Component Analysis, do not yield satisfactory results. The comparison shows overall good agreement between MODIS/MISR and AERONET AOD variability. The correlations between the first three modes of MCA results for both MODIS/AERONET and MISR/AERONET are above 0.8 for the full data set and above 0.75 for the AOD anomaly data. The correlations between MODIS and MISR modes are also quite high (greater than 0.9). We also examine the extent of spatial agreement between satellite and AERONET AOD data at the selected stations. Some sites with disagreements in the MCA results, such as Kanpur, also have low spatial coherency. This should be associated partly with high AOD spatial variability and partly with uncertainties in satellite retrievals due to the seasonally varying aerosol types and surface properties.
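The MCA step described above reduces to a singular value decomposition of the cross-covariance matrix between two anomaly fields. A minimal pure-Python sketch of extracting the leading mode by power iteration; the toy "fields" below are illustrative stand-ins, not MODIS/AERONET data:

```python
# Maximum Covariance Analysis, leading mode only: SVD of C = X^T Y found
# by power iteration. Toy data; real analyses would use a linear-algebra
# library on gridded anomaly fields.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return [x / n for x in v]

def leading_mode(X, Y, iters=200):
    """Leading left/right singular vectors of C = X^T Y (power iteration)."""
    C = matmul(transpose(X), Y)      # p x q cross-covariance (unscaled)
    CtC = matmul(transpose(C), C)    # q x q; dominant eigenvector = right SV
    v = normalize([1.0] * len(CtC))
    for _ in range(iters):
        v = normalize([sum(r[j] * v[j] for j in range(len(v))) for r in CtC])
    u = normalize([sum(C[i][j] * v[j] for j in range(len(v)))
                   for i in range(len(C))])
    return u, v

# Two "fields" sharing a common temporal signal in their first columns,
# so the leading mode should load almost entirely on column 0 of each.
signal = [1.0, -1.0, 2.0, -2.0, 0.5, -0.5]
X = [[s, 0.01 * k] for k, s in enumerate(signal)]
Y = [[0.9 * s, -0.01 * k] for k, s in enumerate(signal)]
u, v = leading_mode(X, Y)
```

The expansion coefficients of the mode (projections of X onto u and Y onto v) would then be correlated in time, which is how the abstract's mode-to-mode correlations are computed.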
A novel principal component analysis for spatially misaligned multivariate air pollution data.
Jandarov, Roman A; Sheppard, Lianne A; Sampson, Paul D; Szpiro, Adam A
2017-01-01
We propose novel methods for predictive (sparse) PCA with spatially misaligned data. These methods identify principal component loading vectors that explain as much variability in the observed data as possible, while also ensuring the corresponding principal component scores can be predicted accurately by means of spatial statistics at locations where air pollution measurements are not available. This will make it possible to identify important mixtures of air pollutants and to quantify their health effects in cohort studies, where currently available methods cannot be used. We demonstrate the utility of predictive (sparse) PCA in simulated data and apply the approach to annual averages of particulate matter speciation data from national Environmental Protection Agency (EPA) regulatory monitors.
NASA Astrophysics Data System (ADS)
Xiao, Jianyong; Bai, Xiaoyong; Zhou, Dequan; Qian, Qinghuan; Zeng, Cheng; Chen, Fei
2018-01-01
Vegetation coverage dynamics are affected by climate, topography and human activities, and are an important indicator of the regional ecological environment. Revealing the spatial-temporal characteristics of vegetation coverage is therefore of great significance for the protection and management of the ecological environment. Based on MODIS NDVI data and Maximum Value Composites (MVC), we excluded soil spectral interference to calculate Fractional Vegetation Coverage (FVC). The long-term FVC series was then used to characterize the spatial pattern and temporal variation of vegetation in the Wujiang River Basin from 2000 to 2016 using trend analysis and the Hurst index, and the relationship between topography and the spatial distribution of FVC was analyzed. The main conclusions are as follows: (1) The multi-annual mean vegetation coverage shows a spatial pattern of low values in the midstream and high values in the other parts of the basin, with a mean value of 0.6567. (2) From 2000 to 2016, the FVC of the Wujiang River Basin fluctuated between 0.6110 and 0.7380, with an overall growth rate of 0.0074/a. (3) The area in which vegetation coverage is expected to improve exceeds the area expected to degrade. Grassland, arable land and other land types improved significantly; the comprehensive karst rocky desertification management project led to persistent improvement of vegetation coverage on these land types. Vegetation on residential land is clearly degraded, a result of urban sprawl. (4) The spatial distribution of FVC is positively correlated with TNI. Research on the spatial-temporal evolution of vegetation coverage is of significant value for the protection and management of the ecological environment of the Wujiang River Basin.
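The FVC calculation referenced above is typically the dimidiate (two-endmember) pixel model. A minimal sketch, assuming illustrative NDVI endmembers for bare soil and full vegetation; the study derives its own endmembers from the MVC composites:

```python
# Dimidiate pixel model: FVC is the linear mixing fraction of a pixel's
# NDVI between a bare-soil endmember and a full-vegetation endmember.

def fractional_vegetation_cover(ndvi, ndvi_soil=0.05, ndvi_veg=0.85):
    """FVC = (NDVI - NDVI_soil) / (NDVI_veg - NDVI_soil), clipped to [0, 1].
    Endmember values here are illustrative assumptions."""
    fvc = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return min(max(fvc, 0.0), 1.0)

# Hypothetical pixels: below the soil endmember, mixed, and above the
# vegetation endmember.
pixels = [0.02, 0.30, 0.57, 0.90]
fvc = [fractional_vegetation_cover(p) for p in pixels]
```

Clipping handles pixels darker than the soil endmember or denser than the vegetation endmember, which occur in real imagery.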
High-Resolution Spatial Distribution and Estimation of Access to Improved Sanitation in Kenya.
Jia, Peng; Anderson, John D; Leitner, Michael; Rheingans, Richard
2016-01-01
Access to sanitation facilities is imperative in reducing the risk of multiple adverse health outcomes. A distinct disparity in sanitation exists among different wealth levels in many low-income countries, which may hinder the progress across each of the Millennium Development Goals. The surveyed households in 397 clusters from 2008-2009 Kenya Demographic and Health Surveys were divided into five wealth quintiles based on their national asset scores. A series of spatial analysis methods including excess risk, local spatial autocorrelation, and spatial interpolation were applied to observe disparities in coverage of improved sanitation among different wealth categories. The total number of the population with improved sanitation was estimated by interpolating, time-adjusting, and multiplying the surveyed coverage rates by high-resolution population grids. A comparison was then made with the annual estimates from United Nations Population Division and World Health Organization /United Nations Children's Fund Joint Monitoring Program for Water Supply and Sanitation. The Empirical Bayesian Kriging interpolation produced minimal root mean squared error for all clusters and five quintiles while predicting the raw and spatial coverage rates of improved sanitation. The coverage in southern regions was generally higher than in the north and east, and the coverage in the south decreased from Nairobi in all directions, while Nyanza and North Eastern Province had relatively poor coverage. The general clustering trend of high and low sanitation improvement among surveyed clusters was confirmed after spatial smoothing. There exists an apparent disparity in sanitation among different wealth categories across Kenya and spatially smoothed coverage rates resulted in a closer estimation of the available statistics than raw coverage rates. 
Future intervention activities need to be tailored for both different wealth categories and nationally where there are areas of greater needs when resources are limited.
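The spatial autocorrelation analysis above builds on Moran-type statistics. A global Moran's I sketch with inverse-distance weights; the cluster coordinates and coverage rates below are hypothetical, not the Kenya DHS data:

```python
# Global Moran's I for coverage rates at point locations, with
# inverse-distance spatial weights. Positive I indicates clustering of
# similar values (high near high, low near low).

def morans_i(points, values):
    """points: list of (x, y); values: coverage rate at each point."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = wsum = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            w = 1.0 / (dx * dx + dy * dy) ** 0.5  # inverse distance
            num += w * dev[i] * dev[j]
            wsum += w
    denom = sum(d * d for d in dev)
    return (n / wsum) * (num / denom)

# Two spatially separated blocks of high vs low coverage -> strong
# positive spatial autocorrelation.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
vals = [0.9, 0.85, 0.88, 0.2, 0.25, 0.22]
I = morans_i(pts, vals)
```

The local (cluster-level) variant used in such analyses decomposes this global statistic into per-location contributions.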
Overcoming Spatial and Temporal Barriers to Public Access Defibrillators Via Optimization
Sun, Christopher L. F.; Demirtas, Derya; Brooks, Steven C.; Morrison, Laurie J.; Chan, Timothy C.Y.
2016-01-01
BACKGROUND Immediate access to an automated external defibrillator (AED) increases the chance of survival from out-of-hospital cardiac arrest (OHCA). Current deployment usually considers spatial AED access, assuming AEDs are available 24 h a day. OBJECTIVES We sought to develop an optimization model for AED deployment, accounting for spatial and temporal accessibility, to evaluate if OHCA coverage would improve compared to deployment based on spatial accessibility alone. METHODS This was a retrospective population-based cohort study using data from the Toronto Regional RescuNET cardiac arrest database. We identified all nontraumatic public-location OHCAs in Toronto, Canada (January 2006 through August 2014) and obtained a list of registered AEDs (March 2015) from Toronto emergency medical services. We quantified coverage loss due to limited temporal access by comparing the number of OHCAs that occurred within 100 meters of a registered AED (assumed 24/7 coverage) with the number that occurred both within 100 meters of a registered AED and when the AED was available (actual coverage). We then developed a spatiotemporal optimization model that determined AED locations to maximize OHCA actual coverage and overcome the reported coverage loss. We computed the coverage gain between the spatiotemporal model and a spatial-only model using 10-fold cross-validation. RESULTS We identified 2,440 atraumatic public OHCAs and 737 registered AED locations. A total of 451 OHCAs were covered by registered AEDs under assumed 24/7 coverage, and 354 OHCAs under actual coverage, representing a coverage loss of 21.5% (p < 0.001). Using the spatiotemporal model to optimize AED deployment, a 25.3% relative increase in actual coverage was achieved over the spatial-only approach (p < 0.001). CONCLUSIONS One in 5 OHCAs occurred near an inaccessible AED at the time of the OHCA. Potential AED use was significantly improved with a spatiotemporal optimization model guiding deployment. 
PMID:27539176
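The spatiotemporal optimization described above is an instance of maximal-coverage facility location. A greedy sketch on hypothetical data; the coordinates, availability hours, and reuse of the 100 m radius are illustrative only, not the Toronto registry:

```python
# Greedy maximal coverage for AED placement with spatial AND temporal
# constraints: an event counts as covered only if a chosen site is both
# within range and open at the event's hour.

def covers(site, event, radius=100.0):
    """site = (x, y, open_hour, close_hour); event = (x, y, hour)."""
    sx, sy, open_h, close_h = site
    ex, ey, hour = event
    dist2 = (sx - ex) ** 2 + (sy - ey) ** 2
    return dist2 <= radius ** 2 and open_h <= hour < close_h

def greedy_placement(candidates, events, k):
    """Pick k sites maximizing distinct events covered (the classic
    greedy 1 - 1/e approximation to maximum coverage)."""
    chosen, uncovered = [], set(range(len(events)))
    for _ in range(k):
        best, best_gain = None, 0
        for c in candidates:
            gain = sum(1 for i in uncovered if covers(c, events[i]))
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:
            break
        chosen.append(best)
        uncovered -= {i for i in uncovered if covers(best, events[i])}
    return chosen, len(events) - len(uncovered)

# Toy data: a daytime-only site near two events (one at 22:00) loses to a
# 24/7 site at the same location, mirroring the paper's temporal-access point.
events = [(0, 0, 10), (50, 0, 22), (500, 500, 9), (510, 505, 14)]
candidates = [(10, 0, 8, 17), (10, 0, 0, 24), (505, 500, 8, 17)]
sites, covered = greedy_placement(candidates, events, k=2)
```

Dropping the hour check reproduces the "assumed 24/7" spatial-only model the study compares against.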
Quresh S. Latif; Martha M. Ellis; Victoria A. Saab; Kim Mellen-McLean
2017-01-01
Sparsely distributed species attract conservation concern, but insufficient information on population trends challenges conservation and funding prioritization. Occupancy-based monitoring is attractive for these species, but appropriate sampling design and inference depend on particulars of the study system. We employed spatially explicit simulations to identify...
Factors Impacting Spatial Patterns of Snow Distribution in a Small Catchment near Nome, AK
NASA Astrophysics Data System (ADS)
Chen, M.; Wilson, C. J.; Charsley-Groffman, L.; Busey, R.; Bolton, W. R.
2017-12-01
Snow cover plays an important role in the climate, hydrology and ecological systems of the Arctic due to its influence on the water balance, thermal regimes, vegetation and carbon flux. Thus, snow depth and coverage are key components in all earth system models but are often poorly represented for arctic regions, where fine-scale snow distribution data are sparse. The snow data currently used in the models are at coarse resolution, which in turn leads to high uncertainty in model predictions. Through the DOE Office of Science Next Generation Ecosystem Experiment (NGEE-Arctic), high-resolution snow distribution data are being developed and applied in catchment-scale models to ultimately improve the representation of snow and its interactions with other model components in earth system models. To improve these models, it is important to identify the key factors that control snow distribution and to quantify their impacts. In this study, two intensive snow depth surveys (at 1 to 10 meter scale) were conducted in a 2.3 km2 catchment on the Teller road, near Nome, AK, in the winters of 2016 and 2017. We used a statistical model to quantify the impacts of vegetation types, macro-topography, micro-topography, and meteorological parameters on measured snow depth. The results show that the spatial distribution of snow was similar between 2016 and 2017, and that snow depth was spatially autocorrelated over small distances (2-5 meters) but not over larger distances. The coefficients of variation of snow depth were above 0.3 for all snow survey transects (500-800 meters long). Variation in snow depth is governed by vegetation height, aspect, slope, surface curvature, elevation, and wind speed and direction.
We expect that this empirical statistical model can be used to estimate end of winter snow depth for the whole watershed and will further develop the model using data from other arctic regions to estimate seasonally dynamic snow coverage and properties for use in catchment scale to pan-Arctic models.
NASA Astrophysics Data System (ADS)
Moghaddam, M.; Silva, A.; Clewley, D.; Akbar, R.; Entekhabi, D.
2013-12-01
Soil Moisture Sensing Controller and oPtimal Estimator (SoilSCAPE) is a wireless in-situ sensor network technology, developed under the support of the NASA ESTO/AIST program, for multi-scale validation of soil moisture retrievals from the Soil Moisture Active Passive (SMAP) mission. The SMAP sensor suite is expected to produce soil moisture retrievals at 3 km scale from the radar instrument, at 36 km from the radiometer, and at 10 km from the combination of the two sensors. To validate the retrieved soil moisture maps at any of these scales, it is necessary to perform in-situ observations at multiple scales (tens, hundreds, and thousands of meters), representative of the true spatial variability of soil moisture fields. The most recent SoilSCAPE network, deployed in the California central valley, has been designed, built, and deployed to accomplish this goal, and is expected to become a core validation site for SMAP. The network consists of up to 150 sensor nodes, each comprising 3-4 soil moisture sensors at various depths, deployed over a spatial extent of 36 km by 36 km. The network contains multiple sub-networks, each having up to 30 nodes, whose locations are selected in part to maximize the land cover diversity within the 36 km cell. The network has achieved unprecedented energy efficiency, longevity, and spatial coverage using custom-designed hardware and software protocols. The network architecture uses a nested strategy, where a number of end devices (EDs) communicate with a local coordinator (LC) using our recently developed hardware with ultra-efficient circuitry and a best-effort timeslot-allocation communication protocol. The LCs in turn communicate with the base station (BS) via text messages and a new compression scheme. The hardware and software technologies required to implement this latest deployment of the SoilSCAPE network will be presented in this paper, and several data sets resulting from the measurements will be shown.
The data are available publicly in near-real-time from the project web site, and are also available and searchable via an extensive set of metadata fields through the ORNL-DAAC.
Observing System Simulations for Small Satellite Formations Estimating Bidirectional Reflectance
NASA Technical Reports Server (NTRS)
Nag, Sreeja; Gatebe, Charles K.; de Weck, Olivier
2015-01-01
The bidirectional reflectance distribution function (BRDF) gives the reflectance of a target as a function of illumination geometry and viewing geometry, hence carries information about the anisotropy of the surface. BRDF is needed in remote sensing for the correction of view and illumination angle effects (for example in image standardization and mosaicing), for deriving albedo, for land cover classification, for cloud detection, for atmospheric correction, and other applications. However, current spaceborne instruments provide sparse angular sampling of BRDF and airborne instruments are limited in spatial and temporal coverage. To fill the gaps in angular coverage within spatial, spectral and temporal requirements, we propose a new measurement technique: use of small satellites in formation flight, each satellite with a VNIR (visible and near infrared) imaging spectrometer, to make multi-spectral, near-simultaneous measurements of every ground spot in the swath at multiple angles. This paper describes an observing system simulation experiment (OSSE) to evaluate the proposed concept and select the optimal formation architecture that minimizes BRDF uncertainties. The variables of the OSSE are identified: number of satellites, measurement spread in the view zenith and relative azimuth with respect to the solar plane, solar zenith angle, BRDF models, and wavelength of reflection. Analyzing the sensitivity of BRDF estimation errors to the variables allows simplification of the OSSE, enabling its use to rapidly evaluate formation architectures. A 6-satellite formation is shown to produce lower BRDF estimation errors, purely in terms of angular sampling as evaluated by the OSSE, than a single spacecraft with 9 forward-aft sensors. We demonstrate the ability to use OSSEs to design small satellite formations as complements to flagship mission data. The formations can fill angular sampling gaps and enable better BRDF products than currently possible.
Observing system simulations for small satellite formations estimating bidirectional reflectance
NASA Astrophysics Data System (ADS)
Nag, Sreeja; Gatebe, Charles K.; Weck, Olivier de
2015-12-01
The bidirectional reflectance distribution function (BRDF) gives the reflectance of a target as a function of illumination geometry and viewing geometry, hence carries information about the anisotropy of the surface. BRDF is needed in remote sensing for the correction of view and illumination angle effects (for example in image standardization and mosaicing), for deriving albedo, for land cover classification, for cloud detection, for atmospheric correction, and other applications. However, current spaceborne instruments provide sparse angular sampling of BRDF and airborne instruments are limited in spatial and temporal coverage. To fill the gaps in angular coverage within spatial, spectral and temporal requirements, we propose a new measurement technique: use of small satellites in formation flight, each satellite with a VNIR (visible and near infrared) imaging spectrometer, to make multi-spectral, near-simultaneous measurements of every ground spot in the swath at multiple angles. This paper describes an observing system simulation experiment (OSSE) to evaluate the proposed concept and select the optimal formation architecture that minimizes BRDF uncertainties. The variables of the OSSE are identified: number of satellites, measurement spread in the view zenith and relative azimuth with respect to the solar plane, solar zenith angle, BRDF models, and wavelength of reflection. Analyzing the sensitivity of BRDF estimation errors to the variables allows simplification of the OSSE, enabling its use to rapidly evaluate formation architectures. A 6-satellite formation is shown to produce lower BRDF estimation errors, purely in terms of angular sampling as evaluated by the OSSE, than a single spacecraft with 9 forward-aft sensors. We demonstrate the ability to use OSSEs to design small satellite formations as complements to flagship mission data. The formations can fill angular sampling gaps and enable better BRDF products than currently possible.
Spatial heterogeneity study of vegetation coverage at Heihe River Basin
NASA Astrophysics Data System (ADS)
Wu, Lijuan; Zhong, Bo; Guo, Liyu; Zhao, Xiangwei
2014-11-01
Spatial heterogeneity of the animal-landscape system has three major components: heterogeneity of resource distributions in the physical environment, heterogeneity of plant tissue chemistry, and heterogeneity of movement modes of the animal. Furthermore, these three types of heterogeneity interact with each other and can either reinforce or offset one another, thereby affecting system stability and dynamics. In previous studies, the study areas were investigated by field sampling, which requires a large amount of manpower. In addition, uncertainty in sampling affects the quality of field data, which leads to unsatisfactory results for the entire study. In this study, remote sensing data are used to guide the sampling for research on the heterogeneity of vegetation coverage, avoiding errors caused by the randomness of field sampling. Semi-variance and fractal dimension analysis are used to analyze the spatial heterogeneity of vegetation coverage in the Heihe River Basin. A spherical model with nugget is used to fit the semivariogram of vegetation coverage. From this experiment, it is found that: (1) there is a strong correlation between vegetation coverage and the distance between vegetation populations within the range of 0-28051.3188 m in the Heihe River Basin, but the correlation vanishes abruptly when the distance is greater than 28051.3188 m; (2) the degree of spatial heterogeneity of vegetation coverage in the Heihe River Basin is medium; (3) spatial distribution variability of vegetation occurs mainly at small scales; (4) the degree of spatial autocorrelation is 72.29%, between 25% and 75%, which means that the spatial correlation of vegetation coverage in the Heihe River Basin is medium-high.
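The semivariogram analysis above can be sketched directly: an empirical semivariogram from sample pairs, and the spherical model with nugget used to fit it. The transect values, lags, and parameters below are illustrative only, not the Heihe data:

```python
# Empirical semivariance at a given lag, plus the spherical semivariogram
# model with nugget effect used for fitting.

def empirical_semivariance(samples, lag, tol=0.5):
    """gamma(h) = mean of 0.5*(z_i - z_j)^2 over pairs at distance ~ lag.
    `samples` is a list of (position, value) pairs along a transect."""
    diffs = [0.5 * (zi - zj) ** 2
             for i, (xi, zi) in enumerate(samples)
             for xj, zj in samples[i + 1:]
             if abs(abs(xi - xj) - lag) <= tol]
    return sum(diffs) / len(diffs) if diffs else None

def spherical(h, nugget, sill, rng):
    """Spherical model: rises from the nugget and flattens at the range."""
    if h == 0:
        return 0.0
    if h >= rng:
        return nugget + sill
    r = h / rng
    return nugget + sill * (1.5 * r - 0.5 * r ** 3)

# Toy periodic transect and an illustrative fitted model.
transect = [(x, (x % 7) * 0.1) for x in range(0, 40)]
g1 = empirical_semivariance(transect, lag=1)
g_model = spherical(5.0, nugget=0.01, sill=0.2, rng=10.0)
```

In a real fit, nugget, sill, and range are chosen to minimize the misfit to the empirical points across many lags; the range plays the role of the 28051.3188 m correlation distance reported above.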
Xie, Jianwen; Douglas, Pamela K; Wu, Ying Nian; Brody, Arthur L; Anderson, Ariana E
2017-04-15
Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet other mathematical constraints provide alternate biologically-plausible frameworks for generating brain networks. Non-negative matrix factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms (L1 Regularized Learning and K-SVD) would impose local specialization and a discouragement of multitasking, where the total observed activity in a single voxel originates from a restricted number of possible brain networks. The assumptions of independence, positivity, and sparsity to encode task-related brain networks are compared; the resulting brain networks within scan for different constraints are used as basis functions to encode observed functional activity. These encodings are then decoded using machine learning, by using the time series weights to predict within scan whether a subject is viewing a video, listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects. The sparse coding algorithm of L1 Regularized Learning outperformed 4 variations of ICA (p<0.001) for predicting the task being performed within each scan using artifact-cleaned components. The NMF algorithms, which suppressed negative BOLD signal, had the poorest accuracy compared to the ICA and sparse coding algorithms. Holding constant the effect of the extraction algorithm, encodings using sparser spatial networks (containing more zero-valued voxels) had higher classification accuracy (p<0.001). Lower classification accuracy occurred when the extracted spatial maps contained more CSF regions (p<0.001). The success of sparse coding algorithms suggests that algorithms which enforce sparsity, discourage multitasking, and promote local specialization may capture better the underlying source processes than those which allow inexhaustible local processes such as ICA. Negative BOLD signal may capture task-related activations. 
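The L1 Regularized Learning encodings compared above amount to lasso-type sparse coding. A minimal ISTA (iterative soft-thresholding) sketch on a toy dictionary; the atoms, signal, and penalty are illustrative and do not reproduce the study's solver or fMRI data:

```python
# Sparse coding by ISTA: minimize 0.5*||y - D a||^2 + lam*||a||_1 over
# codes a, alternating a gradient step with the L1 proximal operator
# (soft-thresholding), which drives most coefficients exactly to zero.

def soft_threshold(x, t):
    return max(x - t, 0.0) if x > 0 else min(x + t, 0.0)

def ista(D, y, lam=0.1, step=0.1, iters=500):
    """D is a list of dictionary atoms (columns); y is the signal."""
    n_atoms = len(D)
    a = [0.0] * n_atoms
    for _ in range(iters):
        # residual r = y - D a
        r = [yi - sum(D[k][i] * a[k] for k in range(n_atoms))
             for i, yi in enumerate(y)]
        # gradient step on each code, then soft-threshold
        a = [soft_threshold(
                a[k] + step * sum(D[k][i] * r[i] for i in range(len(y))),
                step * lam)
             for k in range(n_atoms)]
    return a

# Dictionary with two orthogonal atoms; the signal uses only the first,
# so the second code should be exactly zero (the sparsity the text
# describes as "local specialization").
D = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
y = [2.0, 0.0, 0.0]
codes = ista(D, y)
```

The characteristic lasso shrinkage is visible in the result: the active coefficient converges to 2 minus the penalty, not to 2.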
USDA-ARS's Scientific Manuscript database
A change detection experiment for an invasive species, saltcedar, near Lovelock, Nevada, was conducted with multi-date Compact Airborne Spectrographic Imager (CASI) hyperspectral datasets. Classification and NDVI differencing change detection methods were tested. In the classification strategy, a p...
Selecting informative subsets of sparse supermatrices increases the chance to find correct trees.
Misof, Bernhard; Meyer, Benjamin; von Reumont, Björn Marcus; Kück, Patrick; Misof, Katharina; Meusemann, Karen
2013-12-03
Character matrices with extensive missing data are frequently used in phylogenomics, with potentially detrimental effects on the accuracy and robustness of tree inference. Therefore, many investigators select taxa and genes with high data coverage. The drawback of these selections is their exclusive reliance on data coverage without consideration of the actual signal in the data, which might thus not deliver optimal data matrices in terms of potential phylogenetic signal. To circumvent this problem, we have developed a heuristic, implemented in a software tool called mare, which (1) assesses the information content of genes in supermatrices using a measure of potential signal combined with data coverage and (2) reduces supermatrices with a simple hill climbing procedure to submatrices with high total information content. We conducted simulation studies using matrices of 50 taxa × 50 genes with heterogeneous phylogenetic signal among genes and data coverage between 10-30%; with these matrices, Maximum Likelihood (ML) tree reconstructions failed to recover correct trees. Selecting a data subset with the herein proposed approach increased the chance of recovering correct partial trees more than 10-fold. The selection of data subsets with the proposed simple hill climbing procedure performed well whether considering the information content or just simple presence/absence information of genes. We also applied our approach to an empirical data set addressing questions of vertebrate systematics. With this empirical dataset, selecting a data subset with high information content and supporting a tree with high average bootstrap support was most successful when the information content of genes was considered.
Our analyses of simulated and empirical data demonstrate that sparse supermatrices can be reduced on a formal basis outperforming the usually used simple selections of taxa and genes with high data coverage.
Yang, Su; Shi, Shixiong; Hu, Xiaobing; Wang, Minjie
2015-01-01
Spatial-temporal correlations among the data play an important role in traffic flow prediction. Correspondingly, traffic modeling and prediction based on big data analytics emerges due to the city-scale interactions among traffic flows. A new methodology based on sparse representation is proposed to reveal the spatial-temporal dependencies among traffic flows so as to simplify the correlations among traffic data for the prediction task at a given sensor. Three important findings are observed in the experiments: (1) Only traffic flows immediately prior to the present time affect the formation of current traffic flows, which implies the possibility of reducing the traditional high-order predictors to a first-order model. (2) The spatial context relevant to a given prediction task is more complex than what is assumed to exist locally and can spread out to the whole city. (3) The spatial context varies with the target sensor undergoing prediction and enlarges with the increment of time lag for prediction. Because the scope of human mobility is subject to travel time, identifying the varying spatial context against time lag is crucial for prediction. Since sparse representation can capture the varying spatial context to adapt to the prediction task, it outperforms traditional methods, whose inputs are confined to data from a fixed number of nearby sensors. As the spatial-temporal context for any prediction task is fully detected from the traffic data in an automated manner, with no additional information regarding network topology needed, the method has good scalability and is applicable to large-scale networks.
PMID:26496370
Action Recognition Using Nonnegative Action Component Representation and Sparse Basis Selection.
Wang, Haoran; Yuan, Chunfeng; Hu, Weiming; Ling, Haibin; Yang, Wankou; Sun, Changyin
2014-02-01
In this paper, we propose using high-level action units to represent human actions in videos and, based on such units, a novel sparse model is developed for human action recognition. There are three interconnected components in our approach. First, we propose a new context-aware spatial-temporal descriptor, named locally weighted word context, to improve the discriminability of the traditionally used local spatial-temporal descriptors. Second, from the statistics of the context-aware descriptors, we learn action units using the graph regularized nonnegative matrix factorization, which leads to a part-based representation and encodes the geometrical information. These units effectively bridge the semantic gap in action recognition. Third, we propose a sparse model based on a joint l2,1-norm to preserve the representative items and suppress noise in the action units. Intuitively, when learning the dictionary for action representation, the sparse model captures the fact that actions from the same class share similar units. The proposed approach is evaluated on several publicly available data sets. The experimental results and analysis clearly demonstrate the effectiveness of the proposed approach.
Joint sparse coding based spatial pyramid matching for classification of color medical image.
Shi, Jun; Li, Yi; Zhu, Jie; Sun, Haojie; Cai, Yin
2015-04-01
Although color medical images are important in clinical practice, they are usually converted to grayscale for further processing in pattern recognition, resulting in loss of rich color information. The sparse coding based linear spatial pyramid matching (ScSPM) and its variants are popular for grayscale image classification, but cannot extract color information. In this paper, we propose a joint sparse coding based SPM (JScSPM) method for the classification of color medical images. A joint dictionary can represent both the color information in each color channel and the correlation between channels. Consequently, the joint sparse codes calculated from a joint dictionary can carry color information, and therefore this method can easily transform a feature descriptor originally designed for grayscale images to a color descriptor. A color hepatocellular carcinoma histological image dataset was used to evaluate the performance of the proposed JScSPM algorithm. Experimental results show that JScSPM provides significant improvements as compared with the majority voting based ScSPM and the original ScSPM for color medical image classification. Copyright © 2014 Elsevier Ltd. All rights reserved.
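The key idea, concatenating the color channels so that one dictionary atom spans all three, can be sketched as follows (the patch size, dictionary size, and OMP sparsity level are illustrative assumptions; the paper's full JScSPM pipeline adds spatial pyramid pooling on top of codes like these):

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Toy color patches: 200 patches of 4x4x3, flattened so that one atom
# spans all three channels -- this is what makes the dictionary "joint"
# and lets the sparse codes carry color information.
rng = np.random.default_rng(2)
patches = rng.random((200, 4 * 4 * 3))

dl = DictionaryLearning(n_components=32, transform_algorithm="omp",
                        transform_n_nonzero_coefs=5, max_iter=10,
                        random_state=0)
codes = dl.fit_transform(patches)     # joint sparse codes (200 x 32)

# Each code is at most 5-sparse, so it can feed an SPM pooling stage unchanged.
print(codes.shape, int(np.count_nonzero(codes[0])))
```

A grayscale descriptor becomes a color one simply by swapping the per-channel patch vector for this stacked vector; nothing downstream needs to change.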
Atmospheric Science Data Center
2018-04-09
... UV Ozone Detector Location: Northeastern United States Spatial Coverage: Data are provided from seven ... Related Data: Spatial Coverage: Northeastern United States NARSTO Northeast SCAR-B Block: ...
NASA Astrophysics Data System (ADS)
Hu, Rongming; Wang, Shu; Guo, Jiao; Guo, Liankun
2018-04-01
Impervious surface area and vegetation coverage are important biophysical indicators of urban surface features that can be derived from medium-resolution images. However, remote sensing data obtained by a single sensor are easily affected by many factors, such as weather conditions, and their spatial and temporal resolution cannot meet the needs of soil erosion estimation. Therefore, integrated multi-source remote sensing data are needed for high spatio-temporal resolution vegetation coverage estimation. Vegetation coverage and impervious surface data at two spatial and temporal scales were obtained from MODIS and Landsat 8 images. Based on the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM), the two-scale vegetation coverage data were fused to produce fused vegetation coverage (ESTARFM FVC) and impervious layer products with high spatiotemporal resolution (30 m, 8 days). On this basis, the spatial variability of the impervious surface and vegetation cover landscape in the study area was measured by means of statistics and spatial autocorrelation analysis. The results showed that: 1) ESTARFM FVC and the impervious surface product have high accuracy and can characterize the biophysical components of the land surface; 2) the average impervious surface proportion and spatial configuration differ from area to area, affected by natural conditions and urbanization. In the urban area of Xi'an, which has typical characteristics of spontaneous urbanization, landscapes are fragmented and show less spatial dependence.
Artificial neural network does better spatiotemporal compressive sampling
NASA Astrophysics Data System (ADS)
Lee, Soo-Young; Hsu, Charles; Szu, Harold
2012-06-01
Spatiotemporal sparseness is generated naturally by the human visual system, modeled here with an artificial-neural-network model of associative memory. In this view, sparseness is nothing more and nothing less than the information concentration that compressive sensing achieves. To concentrate the information, one can use spatial correlation, a spatial FFT or DWT, or, best of all, an adaptive wavelet transform (cf. NUS, Shen Shawei). For higher-dimensional spatiotemporal information concentration, however, mathematics cannot be as flexible as a living human sensory system, presumably for survival reasons. The rest of the story is given in the paper.
Overcoming Spatial and Temporal Barriers to Public Access Defibrillators Via Optimization.
Sun, Christopher L F; Demirtas, Derya; Brooks, Steven C; Morrison, Laurie J; Chan, Timothy C Y
2016-08-23
Immediate access to an automated external defibrillator (AED) increases the chance of survival for out-of-hospital cardiac arrest (OHCA). Current deployment usually considers spatial AED access, assuming AEDs are available 24 h a day. The goal of this study was to develop an optimization model for AED deployment, accounting for spatial and temporal accessibility, to evaluate whether OHCA coverage would improve compared with deployment based on spatial accessibility alone. This study was a retrospective population-based cohort study using data from the Toronto Regional RescuNET Epistry cardiac arrest database. We identified all nontraumatic public location OHCAs in Toronto, Ontario, Canada (January 2006 through August 2014) and obtained a list of registered AEDs (March 2015) from Toronto Paramedic Services. Coverage loss due to limited temporal access was quantified by comparing the number of OHCAs that occurred within 100 meters of a registered AED (assumed coverage 24 h per day, 7 days per week) with the number that occurred both within 100 meters of a registered AED and when the AED was available (actual coverage). A spatiotemporal optimization model was then developed that determined AED locations to maximize OHCA actual coverage and overcome the reported coverage loss. The coverage gain between the spatiotemporal model and a spatial-only model was computed by using 10-fold cross-validation. A total of 2,440 nontraumatic public OHCAs and 737 registered AED locations were identified. A total of 451 OHCAs were covered by registered AEDs under assumed coverage 24 h per day, 7 days per week, and 354 OHCAs under actual coverage, representing a coverage loss of 21.5% (p < 0.001). Using the spatiotemporal model to optimize AED deployment, a 25.3% relative increase in actual coverage was achieved compared with the spatial-only approach (p < 0.001). One in 5 OHCAs occurred near an inaccessible AED at the time of the OHCA. 
Potential AED use was significantly improved with a spatiotemporal optimization model guiding deployment. Copyright © 2016 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
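The deployment question here is a maximum-coverage problem, and the spatiotemporal twist is just an extra availability condition in the coverage matrix. A greedy sketch on synthetic data (the 100 m radius is reused from the abstract; the city size, arrest times, opening hours, and the greedy heuristic itself are illustrative assumptions, not the authors' optimization model):

```python
import numpy as np

# Synthetic city: OHCA locations/times and candidate AED sites with
# limited opening hours.
rng = np.random.default_rng(3)
n_ohca, n_sites = 300, 40
ohca_xy = rng.random((n_ohca, 2)) * 800.0          # metres
ohca_hour = rng.integers(0, 24, n_ohca)
site_xy = rng.random((n_sites, 2)) * 800.0
site_open = rng.integers(0, 12, n_sites)           # opening hour
site_close = site_open + 10                        # open 10 h per day

# coverage[i, j]: OHCA i is covered by site j iff it lies within 100 m
# AND the site is open at the hour of arrest (spatiotemporal coverage).
dist = np.linalg.norm(ohca_xy[:, None, :] - site_xy[None, :, :], axis=2)
open_now = (ohca_hour[:, None] >= site_open) & (ohca_hour[:, None] < site_close)
coverage = (dist <= 100.0) & open_now

def greedy_max_coverage(cov, k):
    """Greedily pick k sites maximising the number of distinct OHCAs covered."""
    covered = np.zeros(cov.shape[0], dtype=bool)
    chosen = []
    for _ in range(k):
        gains = (cov & ~covered[:, None]).sum(axis=0)
        j = int(np.argmax(gains))
        chosen.append(j)
        covered |= cov[:, j]
    return chosen, int(covered.sum())

sites, n_covered = greedy_max_coverage(coverage, k=10)
print(n_covered, "of", n_ohca, "OHCAs covered by 10 sites")
```

Dropping `open_now` from the coverage definition gives the spatial-only baseline; the gap between the two runs is exactly the kind of coverage loss the study quantifies.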
Statistical regularities of art images and natural scenes: spectra, sparseness and nonlinearities.
Graham, Daniel J; Field, David J
2007-01-01
Paintings are the product of a process that begins with ordinary vision in the natural world and ends with manipulation of pigments on canvas. Because artists must produce images that can be seen by a visual system that is thought to take advantage of statistical regularities in natural scenes, artists are likely to replicate many of these regularities in their painted art. We have tested this notion by computing basic statistical properties and modeled cell response properties for a large set of digitized paintings and natural scenes. We find that both representational and non-representational (abstract) paintings from our sample (124 images) show basic similarities to a sample of natural scenes in terms of their spatial frequency amplitude spectra, but the paintings and natural scenes show significantly different mean amplitude spectrum slopes. We also find that the intensity distributions of paintings show a lower skewness and sparseness than natural scenes. We account for this by considering the range of luminances found in the environment compared to the range available in the medium of paint. A painting's range is limited by the reflective properties of its materials. We argue that artists do not simply scale the intensity range down but use a compressive nonlinearity. In our studies, modeled retinal and cortical filter responses to the images were less sparse for the paintings than for the natural scenes. But when a compressive nonlinearity was applied to the images, both the paintings' sparseness and the modeled responses to the paintings showed the same or greater sparseness compared to the natural scenes. This suggests that artists achieve some degree of nonlinear compression in their paintings. Because paintings have captivated humans for millennia, finding basic statistical regularities in paintings' spatial structure could grant insights into the range of spatial patterns that humans find compelling.
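The spectral statistic at the heart of this comparison, the slope of log amplitude versus log spatial frequency, is straightforward to compute. A sketch with a synthetic 1/f image standing in for a natural scene (the image construction is an illustrative assumption, not the paper's dataset):

```python
import numpy as np

def amplitude_spectrum_slope(img):
    """Fit log-amplitude vs log-spatial-frequency; natural scenes are ~ -1."""
    amp = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.hypot(yy, xx)
    mask = (radius > 1) & (radius < min(h, w) / 2)
    slope, _ = np.polyfit(np.log(radius[mask]), np.log(amp[mask]), 1)
    return slope

# Build a test image with a 1/f amplitude spectrum and random phases;
# its fitted slope should come out near -1, typical of natural scenes.
rng = np.random.default_rng(4)
h = w = 128
yy, xx = np.mgrid[-h // 2:h // 2, -w // 2:w // 2]
radius = np.maximum(np.hypot(yy, xx), 1)
spectrum = (1.0 / radius) * np.exp(2j * np.pi * rng.random((h, w)))
img = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))
s = amplitude_spectrum_slope(img)
print(round(s, 2))
```

The paper's comparison then reduces to computing this slope (plus intensity skewness and modeled-response sparseness) for each painting and scene.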
Response of an eddy-permitting ocean model to the assimilation of sparse in situ data
NASA Astrophysics Data System (ADS)
Li, Jian-Guo; Killworth, Peter D.; Smeed, David A.
2003-04-01
The response of an eddy-permitting ocean model to changes introduced by data assimilation is studied when the available in situ data are sparse in both space and time (typical for the majority of the ocean). Temperature and salinity (T&S) profiles from the WOCE upper ocean thermal data set were assimilated into a primitive equation ocean model over the North Atlantic, using a simple nudging scheme with a time window of about 2 days and a horizontal spatial radius of about 1°. When data are sparse the model returns to its unassimilated behavior, locally "forgetting" or rejecting the assimilation, on timescales determined by the local advection and diffusion. Increasing the spatial weighting radius effectively reduces both processes and hence lengthens the model restoring time (and with it, the impact of assimilation). Increasing the nudging factor enhances the assimilation effect but has little effect on the model restoring time.
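The nudging scheme described here has a very simple core: at each step the model state is relaxed toward the observation with a strength that decays with distance. A toy 2-D sketch (the Gaussian weight, relaxation time, and grid are illustrative stand-ins for the paper's ~2-day window and ~1° radius):

```python
import numpy as np

def nudge(model_T, obs_T, obs_ij, tau, radius, dt):
    """One nudging step: relax the model toward a point observation with
    a Gaussian spatial weight of the given radius (in grid cells)."""
    ny, nx = model_T.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    w = np.exp(-((yy - obs_ij[0]) ** 2 + (xx - obs_ij[1]) ** 2) / (2 * radius ** 2))
    return model_T + (dt / tau) * w * (obs_T - model_T)

# Model field biased 2 K warm; a single profile at the centre pulls nearby
# grid points toward the observation, while distant points barely move.
T = np.full((41, 41), 22.0)
T_new = nudge(T, obs_T=20.0, obs_ij=(20, 20), tau=2.0, radius=3.0, dt=1.0)
print(round(T_new[20, 20], 2), round(T_new[0, 0], 2))  # -> 21.0 22.0
```

The abstract's two knobs map directly onto `radius` (widening the weight slows advection/diffusion of the correction away) and `dt / tau` (the nudging factor controlling how hard the model is pulled).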
Coverage-dependent amplifiers of vegetation change on global water cycle dynamics
NASA Astrophysics Data System (ADS)
Feng, Huihui; Zou, Bin; Luo, Juhua
2017-07-01
The terrestrial water cycle describes the circulation of water worldwide from one store to another via repeated evapotranspiration (E) from land and precipitation (P) back to the surface. The cycle presents significant spatial variability, which is strongly affected by natural climate and anthropogenic influences. As one of the major anthropogenic influences, vegetation change unavoidably alters surface properties and, in turn, the terrestrial water cycle, yet its contribution is difficult to isolate from the mixed influences. Here, we use satellite and in-situ datasets to identify the terrestrial water cycle dynamics in spatial detail and to evaluate the impact of vegetation change. Methodologically, the water cycle is identified by the indicator of the difference between evapotranspiration and precipitation (E-P). The scalar form of the indicator's trend (ΔE + ΔP) is then used to evaluate the dynamics of the water cycle, where a positive value indicates acceleration and a negative value deceleration. The contributions of climate and vegetation change are then isolated by the trajectory-based method. Our results indicate that 4 accelerating and 4 decelerating water cycles can be identified, affecting 42.11% of global land. The major water cycle type is characterized by non-changing precipitation and increasing evapotranspiration (PNO-EIN), which covers 20.88% of global land. Vegetation change amplifies both accelerating and decelerating water cycles. It tends to intensify the trend of the decelerating water cycles, while climate change weakens the trend. In the accelerating water cycles, both vegetation and climate change present a positive effect that intensifies the trend. The effect of plant cover change varies with the coverage. In particular, vegetation change intensifies the water cycle in moderately vegetated regions (0.1 < NDVI < 0.6), but weakens the cycle in sparsely or highly vegetated regions (NDVI < 0.1 or 0.6 < NDVI < 0.8). 
In extremely vegetated regions (NDVI > 0.85), the water cycle is accelerated because of the significant increase of precipitation. We conclude that vegetation change acts as an amplifier for both accelerating and decelerating terrestrial water cycles, depending on the degree of vegetation coverage.
NASA Astrophysics Data System (ADS)
Craymer, M. R.; Henton, J. A.; Piraszewski, M.
2008-12-01
Glacial isostatic adjustment following the last glacial period is the dominant source of crustal deformation in Canada east of the Rocky Mountains. The present-day vertical component of motion associated with this process may exceed 1 cm/y and is being directly measured with the Global Positioning System (GPS). A consequence of this steady deformation is that high accuracy coordinates at one epoch may not be compatible with those at another epoch. For example, modern precise point positioning (PPP) methods provide coordinates at the epoch of observation while NAD83, the officially adopted reference frame in Canada and the U.S., is expressed at some past reference epoch. The PPP positions are therefore incompatible with coordinates in such a realization of the reference frame and need to be propagated back to the frame's reference epoch. Moreover, the realizations of NAD83 adopted by the provincial geodetic agencies in Canada are referenced to different coordinate epochs: either 1997.0 or 2002.0. Proper comparison of coordinates between provinces therefore requires propagating them from one reference epoch to another. In an effort to reconcile PPP results and different realizations of NAD83, we empirically represent crustal deformation throughout Canada using a velocity field based solely on high accuracy continuous and episodic GPS observations. The continuous observations from 2001 to 2007 were obtained from nearly 100 permanent GPS stations, predominantly operated by Natural Resources Canada (NRCan) and provincial geodetic agencies. Many of these sites are part of the International GNSS Service (IGS) global network. Episodic observations from 1994 to 2006 were obtained from repeated occupations of the Canadian Base Network (CBN), which consists of approximately 160 stable pillar-type monuments across the entire country. The CBN enables a much denser spatial sampling of crustal motions although coverage in the far north is still rather sparse. 
NRCan solutions of the continuous GPS data were combined with those from other agencies as part of the North American Reference Frame (NAREF) effort to improve the reliability of the results. This NAREF solution has then been combined with our CBN results to obtain a denser velocity sampling for fitting different types of surfaces in a first attempt to determine a continuous GPS velocity field for the entire country. Expressing this velocity field as a grid enables users to interpolate to any location in Canada, allowing for the propagation of coordinates to any desired reference epoch. We examine the accuracy and limitations of this GPS velocity field by comparing it to other published GPS velocity solutions (which are all based on less data) as well as to GIA models, including versions of ICE-3G, ICE-5G and the recent Stable North America Reference Frame (SNARF) model. Of course, the accuracy of the GPS velocity field depends directly on the density of the GPS coverage. Consequently, the GPS velocity field is unable to fully represent the actual GIA motion in the far north and tends to smooth out the signal due to the spatially sparse coverage. On the other hand, the model performs quite well in the southern parts of the country where there is a much greater spatial density of GPS measurements.
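The epoch-propagation problem described here reduces to interpolating a scattered velocity field onto a grid and applying v·Δt. A hedged sketch with a synthetic GIA-like uplift dome (the station locations, dome shape, and use of `scipy.interpolate.griddata` are all assumptions, not the authors' gridding method):

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical GPS uplift velocities (mm/yr) at scattered stations,
# gridded so that coordinates can be propagated between reference
# epochs (e.g. NAD83 at 1997.0 vs 2002.0).
rng = np.random.default_rng(5)
lon = rng.uniform(-100, -60, 80)
lat = rng.uniform(45, 65, 80)
v_up = 10.0 * np.exp(-((lon + 80) ** 2 + (lat - 55) ** 2) / 200.0)  # GIA-like dome

glon, glat = np.meshgrid(np.linspace(-99, -61, 50), np.linspace(46, 64, 50))
v_grid = griddata((lon, lat), v_up, (glon, glat), method="linear")

# Propagate a height from epoch 2002.0 back to 1997.0 using the
# interpolated velocity at that location.
v_here = float(griddata((lon, lat), v_up, (-80.0, 55.0), method="linear"))
h_2002 = 100.000                                   # metres
h_1997 = h_2002 - v_here / 1000.0 * (2002.0 - 1997.0)
print(round(v_here, 2), round(h_1997, 3))
```

As the abstract notes, the quality of such a grid degrades wherever stations are sparse: a linear interpolant simply smooths across the gaps, which is exactly the behaviour reported in the far north.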
Local structure preserving sparse coding for infrared target recognition
Han, Jing; Yue, Jiang; Zhang, Yi; Bai, Lianfa
2017-01-01
Sparse coding performs well in image classification. However, robust target recognition requires a lot of comprehensive template images and the sparse learning process is complex. We incorporate sparsity into a template matching concept to construct a local sparse structure matching (LSSM) model for general infrared target recognition. A local structure preserving sparse coding (LSPSc) formulation is proposed to simultaneously preserve the local sparse and structural information of objects. By adding a spatial local structure constraint into the classical sparse coding algorithm, LSPSc can improve the stability of sparse representation for targets and inhibit background interference in infrared images. Furthermore, a kernel LSPSc (K-LSPSc) formulation is proposed, which extends LSPSc to the kernel space to weaken the influence of the linear structure constraint in nonlinear natural data. Because of the anti-interference and fault-tolerant capabilities, both LSPSc- and K-LSPSc-based LSSM can implement target identification based on a simple template set, which just needs several images containing enough local sparse structures to learn a sufficient sparse structure dictionary of a target class. Specifically, this LSSM approach has stable performance in the target detection with scene, shape and occlusions variations. High performance is demonstrated on several datasets, indicating robust infrared target recognition in diverse environments and imaging conditions. PMID:28323824
2012-01-01
2007) in pecan orchards, among others. In this sparse shrub desert environment, Nappo et al. (2010) showed that, during stable conditions, the ... measuring water use in flood-irrigated pecans (Carya illinoinensis). Agric. Water Mgmt. 88(1-3): 181-191. Solanelles, F., E. Gregorio, R. Sanz, J. R
Coverage maximization under resource constraints using a nonuniform proliferating random walk.
Saha, Sudipta; Ganguly, Niloy
2013-02-01
Information management services on networks, such as search and dissemination, play a key role in any large-scale distributed system. One of the most desirable features of these services is the maximization of the coverage, i.e., the number of distinctly visited nodes under constraints of network resources as well as time. However, redundant visits of nodes by different message packets (modeled, e.g., as walkers) initiated by the underlying algorithms for these services cause wastage of network resources. In this work, using results from analytical studies done in the past on a K-random-walk-based algorithm, we identify that redundancy quickly increases with an increase in the density of the walkers. Based on this postulate, we design a very simple distributed algorithm which dynamically estimates the density of the walkers and thereby carefully proliferates walkers in sparse regions. We use extensive computer simulations to test our algorithm in various kinds of network topologies whereby we find it to be performing particularly well in networks that are highly clustered as well as sparse.
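The mechanism is easy to state in code: each walker moves, estimates the local walker density, and spawns a copy only when that density is low. A toy ring-lattice sketch (the graph, thresholds, and budgets are illustrative; the paper's algorithm and network topologies are richer, and no performance claim is made here):

```python
import random

random.seed(6)

# Sparse ring lattice: node i links to i +/- 1 (mod n). Walkers proliferate
# when few other walkers are nearby, a toy version of density-driven
# proliferation under a shared move budget (the resource constraint).
n = 200
neigh = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def coverage(proliferate):
    walkers = [0] * 4                      # 4 walkers start at node 0
    visited = {0}
    budget = 400                           # total moves allowed (resource cap)
    while budget > 0 and walkers:
        new_walkers = []
        for node in walkers:
            if budget == 0:
                break
            nxt = random.choice(neigh[node])
            visited.add(nxt)
            budget -= 1
            new_walkers.append(nxt)
            # local density estimate: walkers within ring distance 2
            local = sum(1 for w in walkers
                        if min(abs(w - nxt), n - abs(w - nxt)) <= 2)
            if proliferate and local <= 1 and len(new_walkers) < 40:
                new_walkers.append(nxt)    # spawn a walker in a sparse region
        walkers = new_walkers
    return len(visited)

plain = coverage(proliferate=False)
adaptive = coverage(proliferate=True)
print(plain, adaptive)
```

Coverage is counted as distinct visited nodes under the fixed move budget, matching the paper's notion of maximizing coverage under resource constraints.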
Bridges, Daniel J; Pollard, Derek; Winters, Anna M; Winters, Benjamin; Sikaala, Chadwick; Renn, Silvia; Larsen, David A
2018-02-23
Indoor residual spraying (IRS) is a key tool in the fight to control, eliminate and ultimately eradicate malaria. IRS protection is based on a communal effect such that an individual's protection primarily relies on the community-level coverage of IRS with limited protection being provided by household-level coverage. To ensure a communal effect is achieved through IRS, achieving high and uniform community-level coverage should be the ultimate priority of an IRS campaign. Ensuring high community-level coverage of IRS in malaria-endemic areas is challenging given the lack of information available about both the location and number of households needing IRS in any given area. A process termed 'mSpray' has been developed and implemented and involves use of satellite imagery for enumeration for planning IRS and a mobile application to guide IRS implementation. This study assessed (1) the accuracy of the satellite enumeration and (2) how various degrees of spatial aid provided through the mSpray process affected community-level IRS coverage during the 2015 spray campaign in Zambia. A 2-stage sampling process was applied to assess accuracy of satellite enumeration to determine number and location of sprayable structures. Results indicated an overall sensitivity of 94% for satellite enumeration compared to finding structures on the ground. After adjusting for structure size, roof, and wall type, households in Nchelenge District where all types of satellite-based spatial aids (paper-based maps plus use of the mobile mSpray application) were used were more likely to have received IRS than Kasama district where maps used were not based on satellite enumeration. The probability of a household being sprayed in Nchelenge district where tablet-based maps were used, did not differ statistically from that of a household in Samfya District, where detailed paper-based spatial aids based on satellite enumeration were provided. 
IRS coverage from the 2015 spray season benefited from the use of spatial aids based upon satellite enumeration. These spatial aids can guide costly IRS planning and implementation leading to attainment of higher spatial coverage, and likely improve disease impact.
On the feasibility of measuring urban air pollution by wireless distributed sensor networks.
Moltchanov, Sharon; Levy, Ilan; Etzion, Yael; Lerner, Uri; Broday, David M; Fishbain, Barak
2015-01-01
Accurate evaluation of air pollution's effects on human well-being requires high-resolution measurements. Standard air quality monitoring stations provide accurate pollution levels, but due to their sparse distribution they cannot capture the highly resolved spatial variations within cities. Similarly, dedicated field campaigns can use tens of measurement devices and obtain highly dense spatial coverage, but deployment has normally been limited to short periods of no more than a few weeks. Nowadays, advances in communication and sensory technologies enable the deployment of dense grids of wireless distributed air monitoring nodes, yet their sensors' ability to capture the spatiotemporal pollutant variability at the sub-neighborhood scale has never been thoroughly tested. This study reports ambient measurements of gaseous air pollutants by a network of six wireless multi-sensor miniature nodes that have been deployed in three urban sites, about 150 m apart. We demonstrate the network's capability to capture spatiotemporal concentration variations at an exceptionally fine resolution, but highlight the need for frequent in-situ calibration to maintain the consistency of some sensors. Accordingly, a procedure for field calibration is proposed and shown to improve the system's performance. Overall, our results support the suitability of wireless distributed sensor networks for measuring urban air pollution at a sub-neighborhood spatial resolution, which suits the requirement for highly spatiotemporally resolved measurements at breathing height when assessing exposure to urban air pollution. Copyright © 2014 Elsevier B.V. All rights reserved.
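A field calibration of the kind proposed can be as simple as a linear fit against a co-located reference during part of the deployment, then applied forward in time. A sketch with synthetic ozone data (the gain/offset drift and noise levels are invented for illustration and are not the paper's values):

```python
import numpy as np

# Hypothetical co-location: a node's raw ozone signal drifts in gain and
# offset relative to a reference monitor.
rng = np.random.default_rng(7)
true_o3 = 30 + 10 * np.sin(np.linspace(0, 6 * np.pi, 240))        # ppb
raw = 1.8 * true_o3 + 12.0 + rng.normal(0, 0.5, true_o3.size)     # drifted sensor

# Fit the calibration on the first half (co-location period), then
# apply it to the later, unseen data.
gain, offset = np.polyfit(raw[:120], true_o3[:120], 1)
calibrated = gain * raw[120:] + offset

rmse_raw = np.sqrt(np.mean((raw[120:] - true_o3[120:]) ** 2))
rmse_cal = np.sqrt(np.mean((calibrated - true_o3[120:]) ** 2))
print(round(rmse_raw, 1), round(rmse_cal, 2))
```

The "frequent" qualifier in the abstract corresponds to refitting `gain` and `offset` periodically, since low-cost sensor drift is not constant over months of deployment.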
NASA Astrophysics Data System (ADS)
Lian, X.
2016-12-01
There is an increasing demand to integrate land surface temperature (LST) into climate research due to its global coverage, which requires a comprehensive knowledge of its distinctive characteristics compared to near-surface air temperature (Ta). Using satellite observations and in-situ station-based datasets, we conducted a global-scale assessment of the spatial, seasonal, and interannual variations in the difference between daytime maximum LST and daytime maximum Ta (ΔT = LST - Ta) during 2003-2014. Spatially, LST is generally higher than Ta over arid and sparsely vegetated regions in the mid-low latitudes, but LST is lower than Ta in the tropical rainforests due to strong evaporative cooling, and in the high-latitude regions due to snow-induced radiative cooling. Seasonally, ΔT is negative in tropical regions throughout the year, while it displays a pronounced seasonality in both the mid-latitudes and boreal regions. The seasonality in the mid-latitudes is a result of the asynchronous responses of LST and Ta to the seasonal cycle of radiation and vegetation abundance, whereas in the boreal regions, seasonality is mainly caused by the change in snow cover. At an interannual scale, only a small proportion of the land surface displays a statistically significant trend (P < 0.05) due to the short time span of current measurements. Our study identified substantial spatial heterogeneity and seasonality in ΔT, as well as its determinant environmental drivers, and thus provides a useful reference for monitoring near-surface temperature changes using remote sensing, particularly in remote regions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anjos, Daniela M; Mamontov, Eugene; Brown, Gilbert M
We used quasielastic neutron scattering (QENS) to study the dynamics of phenanthrenequinone (PQ) on the surface of onion-like carbon (OLC), or so-called carbon onions, as a function of surface coverage and temperature. For both the high- and low-coverage samples, we observed two diffusion processes: a faster process and a nearly order-of-magnitude slower process. On the high-coverage surface, the slow diffusion process is of long-range translational character, whereas the fast diffusion process is spatially localized on a length scale of ~4.7 Å. On the low-coverage surface, both diffusion processes are spatially localized: on the same length scale of ~4.7 Å for the fast diffusion and a somewhat larger length scale for the slow diffusion. Arrhenius temperature dependence is observed except for the long-range diffusion on the high-coverage surface. We attribute the fast diffusion process to the generic localized in-cage dynamics of PQ molecules, and the slow diffusion process to the long-range translational dynamics of PQ molecules, which, depending on the coverage, may be either spatially restricted or long-range. On the low-coverage surface, uniform surface coverage is not attained, and the PQ molecules experience the effect of spatial constraints on their long-range translational dynamics. Unexpectedly, the dynamics of PQ molecules on OLC as a function of temperature and surface coverage bears qualitative resemblance to the dynamics of water molecules on oxide surfaces, including practically temperature-independent residence times for the low-coverage surface. The dynamics features that we observed may be universal across different classes of surface adsorbates.
2013-09-01
of baselines than would a pattern with equal spacing. Nevertheless, many of the telescope pairs have equivalent baselines resulting in ... magnitude to a spatial domain representation of the object, sparse and irregular spacing of the measurements in the Fourier plane, and low SNR ... any particular geometry of the telescope array configuration. Its inputs are a list of measurements, each
Derek B. Van Berkel; Bronwyn Rayfield; Sebastián Martinuzzi; Martin J. Lechowicz; Eric White; Kathleen P. Bell; Chris R. Colocousis; Kent F. Kovacs; Anita T. Morzillo; Darla K. Munroe; Benoit Parmentier; Volker C. Radeloff; Brian J. McGill
2018-01-01
Sparsely settled forests (SSF) are poorly studied, coupled natural and human systems involving rural communities in forest ecosystems that are neither largely uninhabited wildland nor forests on the edges of urban areas. We developed and applied a multidisciplinary approach to define, map, and examine changes in the spatial extent and structure of both the landscapes...
Hyperspectral Image Classification via Kernel Sparse Representation
2013-01-01
classification algorithms. Moreover, the spatial coherency across neighboring pixels is also incorporated through a kernelized joint sparsity model, where all of the pixels within a small neighborhood are jointly represented in the feature space by selecting a few common training ... hyperspectral imagery, joint sparsity model, kernel methods, sparse representation. I. INTRODUCTION HYPERSPECTRAL imaging sensors capture images
Sparse Coding of Natural Human Motion Yields Eigenmotions Consistent Across People
NASA Astrophysics Data System (ADS)
Thomik, Andreas; Faisal, A. Aldo
2015-03-01
Providing a precise mathematical description of the structure of natural human movement is a challenging problem. We use a data-driven approach to seek a generative model of movement capturing the underlying simplicity of spatial and temporal structure of behaviour observed in daily life. In perception, the analysis of natural scenes has shown that sparse codes of such scenes are information theoretic efficient descriptors with direct neuronal correlates. Translating from perception to action, we identify a generative model of movement generation by the human motor system. Using wearable full-hand motion capture, we measure the digit movement of the human hand in daily life. We learn a dictionary of ``eigenmotions'' which we use for sparse encoding of the movement data. We show that the dictionaries are generally well preserved across subjects with small deviations accounting for individuality of the person and variability in tasks. Further, the dictionary elements represent motions which can naturally describe hand movements. Our findings suggest the motor system can compose complex movement behaviours out of the spatially and temporally sparse activation of ``eigenmotion'' neurons, and is consistent with data on grasp-type specificity of specialised neurons in the premotor cortex. Andreas is supported by the Luxemburg Research Fund (1229297).
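The pipeline, learn a motion dictionary and then sparse-code each movement sample, can be sketched with standard dictionary learning (the joint-angle dimensions, sparsity levels, and use of sklearn's `DictionaryLearning` are illustrative assumptions; the authors' eigenmotion procedure on wearable motion-capture data may differ):

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Toy hand-motion data: 19 joint-angle velocities built from 5 latent
# "eigenmotions", each activated sparsely over time.
rng = np.random.default_rng(8)
eigenmotions = rng.normal(size=(5, 19))
activations = rng.normal(size=(400, 5)) * (rng.random((400, 5)) < 0.3)
motion = activations @ eigenmotions + 0.01 * rng.normal(size=(400, 19))

dl = DictionaryLearning(n_components=5, transform_algorithm="lasso_lars",
                        alpha=0.1, max_iter=20, random_state=0)
codes = dl.fit_transform(motion)                 # sparse activations
sparsity = float(np.mean(codes == 0))            # fraction of exact zeros
err = np.linalg.norm(codes @ dl.components_ - motion) / np.linalg.norm(motion)
print(round(sparsity, 2), round(err, 3))
```

Comparing dictionaries learned from different subjects (e.g. by matching atoms up to sign and permutation) is the kind of analysis behind the paper's cross-person consistency claim.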
Compressive hyperspectral and multispectral imaging fusion
NASA Astrophysics Data System (ADS)
Espitia, Óscar; Castillo, Sergio; Arguello, Henry
2016-05-01
Image fusion is a valuable framework that combines two or more images of the same scene, from one or multiple sensors, to improve the resolution of the images and increase their interpretable content. In remote sensing, a common fusion problem consists of merging hyperspectral (HS) and multispectral (MS) images, which involves large amounts of redundant data when the highly correlated structure of the datacube along the spatial and spectral dimensions is ignored. Compressive HS and MS systems compress the spectral data in the acquisition step, reducing this redundancy by using different sampling patterns. This work presents a compressed HS and MS image fusion approach that uses a high-dimensional joint sparse model. The joint sparse model is formulated by combining the HS and MS compressive acquisition models. The high spectral and spatial resolution image is reconstructed by using sparse optimization algorithms. Different fusion spectral image scenarios are used to explore the performance of the proposed scheme. Several simulations with synthetic and real datacubes show promising results, as a reliable reconstruction of a high spectral and spatial resolution image can be achieved by using as little as 50% of the datacube.
Evaluation of the AirNow Satellite Data Processor for 2010-2012
NASA Astrophysics Data System (ADS)
Pasch, A. N.; DeWinter, J. L.; Dye, T.; Haderman, M.; Zahn, P. H.; Szykman, J.; White, J. E.; Dickerson, P.; van Donkelaar, A.; Martin, R.
2013-12-01
The U.S. Environmental Protection Agency's (EPA) AirNow program provides the public with real-time and forecasted air quality conditions. Millions of people each day use information from AirNow to protect their health. The AirNow program (http://www.airnow.gov) reports ground-level ozone (O3) and fine particulate matter (PM2.5) with a standardized index called the Air Quality Index (AQI). AirNow aggregates information from over 130 state, local, and federal air quality agencies and provides tools for over 2,000 agency staff responsible for monitoring, forecasting, and communicating local air quality. Each hour, AirNow systems generate thousands of maps and products. The usefulness of the AirNow air quality maps depends on the accuracy and spatial coverage of air quality measurements. Currently, the maps use only ground-based measurements, which have significant gaps in coverage in some parts of the United States. As a result, contoured AQI levels have high uncertainty in regions far from monitors. To improve the usefulness of air quality maps, scientists at EPA, Dalhousie University, and Sonoma Technology, Inc., in collaboration with the National Aeronautics and Space Administration (NASA) and the National Oceanic and Atmospheric Administration (NOAA), have completed a project to incorporate satellite-estimated surface PM2.5 concentrations into the maps via the AirNow Satellite Data Processor (ASDP). These satellite estimates are derived using NASA/NOAA satellite aerosol optical depth (AOD) retrievals and GEOS-Chem modeled ratios of surface PM2.5 concentrations to AOD. GEOS-Chem is a three-dimensional chemical transport model for atmospheric composition driven by meteorological input from the Goddard Earth Observing System (GEOS). The ASDP can fuse multiple PM2.5 concentration data sets to generate AQI maps with improved spatial coverage. 
The goals of ASDP are to provide more detailed AQI information in monitor-sparse locations and to augment monitor-dense locations with more information. The ASDP system uses a weighted-average approach using uncertainty information about each data set. Recent improvements in the estimation of the uncertainty of interpolated ground-based monitor data have allowed for a more complete characterization of the uncertainty of the surface measurements. We will present a statistical analysis for 2010-2012 of the ASDP predictions of PM2.5 focusing on performance at validation sites. In addition, we will present several case studies evaluating the ASDP's performance for multiple regions and seasons, focusing specifically on days when large spatial gradients in AQI and wildfire smoke impacts were observed.
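One natural reading of the "weighted-average approach using uncertainty information" is inverse-variance weighting, a standard way to fuse estimates that each carry an uncertainty; the ASDP's exact scheme is not specified here, so the following is an illustrative sketch with hypothetical numbers:

```python
import numpy as np

def fuse_estimates(values, variances):
    """Inverse-variance weighted average of co-located PM2.5 estimates.

    values, variances: arrays of shape (n_datasets, n_pixels); each data
    set contributes an estimate and its uncertainty (variance) per pixel.
    """
    values = np.asarray(values, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(weights * values, axis=0) / np.sum(weights, axis=0)
    fused_var = 1.0 / np.sum(weights, axis=0)  # variance of the fused estimate
    return fused, fused_var

# Hypothetical inputs: interpolated monitor data (accurate near stations,
# uncertain far away) and satellite-derived PM2.5 (uniform but noisier).
monitor = np.array([12.0, 35.0])
satellite = np.array([14.0, 25.0])
fused, var = fuse_estimates([monitor, satellite], [[1.0, 25.0], [4.0, 4.0]])
# Near a station (low monitor variance) the fused value stays close to the
# monitor; far from monitors it moves toward the satellite estimate.
```

The fused variance is always smaller than either input variance, which is why augmenting monitor-dense regions with satellite data can still help.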
NASA Astrophysics Data System (ADS)
Doss, Derek J.; Heiselman, Jon S.; Collins, Jarrod A.; Weis, Jared A.; Clements, Logan W.; Geevarghese, Sunil K.; Miga, Michael I.
2017-03-01
Sparse surface digitization with an optically tracked stylus for use in organ surface-based image-to-physical registration is an established approach in image-guided open liver surgery. However, variability in sparse data collections during open hepatic procedures can produce disparity in registration alignments. In part, this variability arises from inconsistencies in the patterns and fidelity of collected intraoperative data. The liver lacks distinct landmarks and experiences considerable soft tissue deformation. Furthermore, data coverage of the organ is often incomplete or unevenly distributed. While more robust feature-based registration methodologies have been developed for image-guided liver surgery, it is still unclear how variation in sparse intraoperative data affects registration. In this work, we have developed an application that allows surgeons to study the effect of surface digitization patterns on registration. Given the intrinsic nature of soft tissue, we incorporate realistic organ deformation when assessing the fidelity of a rigid registration methodology. We report the construction of our application and preliminary registration results from four participants. Our preliminary results indicate that registration quality improves as users acquire more experience selecting patterns of sparse intraoperative surface data.
NASA Astrophysics Data System (ADS)
Zhang, Wenkun; Zhang, Hanming; Wang, Linyuan; Cai, Ailong; Li, Lei; Yan, Bin
2018-02-01
Limited angle computed tomography (CT) reconstruction is widely performed in medical diagnosis and industrial testing because of object size, engine/armor inspection requirements, and limited scan flexibility. Limited angle reconstruction necessitates optimization-based methods that utilize additional sparse priors. However, most conventional methods exploit only sparsity priors of the spatial domain. When the CT projection suffers from serious data deficiency or various noises, obtaining reconstructed images that meet quality requirements becomes difficult and challenging. To solve this problem, this paper develops an adaptive reconstruction method for the limited angle CT problem. The proposed method simultaneously uses a spatial and Radon domain regularization model based on total variation (TV) and a data-driven tight frame. The data-driven tight frame, derived from the wavelet transform, exploits sparsity priors of the sinogram in the Radon domain. Unlike existing works that utilize a pre-constructed sparse transformation, the framelets of the data-driven regularization model can be adaptively learned from the latest projection data during iterative reconstruction to provide optimal sparse approximations for a given sinogram. At the same time, an effective alternating direction method is designed to solve the simultaneous spatial and Radon domain regularization model. Experiments on both simulated and real data demonstrate that the proposed algorithm performs better in artifact suppression and detail preservation than algorithms using only a spatial-domain regularization model. Quantitative evaluations also indicate that the proposed algorithm, with its learning strategy, outperforms dual-domain algorithms that do not learn the regularization model.
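The spatial-domain TV term in such models penalizes the l1 norm of image gradients. A minimal 1D sketch of TV-regularized denoising by gradient descent on a smoothed TV objective (a stand-in for the paper's full alternating-direction solver, which additionally carries the Radon-domain tight-frame term):

```python
import numpy as np

def tv_denoise_1d(y, lam=0.3, eps=1e-3, step=0.02, n_iter=500):
    """Minimize 0.5*||u - y||^2 + lam * sum sqrt((Du)^2 + eps) by gradient
    descent, where D is the forward-difference operator (smoothed TV)."""
    u = y.copy()
    for _ in range(n_iter):
        d = np.diff(u)
        g = d / np.sqrt(d * d + eps)   # derivative of the smoothed TV term
        grad = u - y
        grad[:-1] -= lam * g           # apply -D^T to g: the -g_k part
        grad[1:] += lam * g            # ...and the +g_{k-1} part
        u = u - step * grad
    return u

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.3], 40)   # piecewise-constant test signal
noisy = clean + 0.25 * rng.standard_normal(clean.size)
denoised = tv_denoise_1d(noisy)
```

Because TV penalizes gradient magnitude linearly, the minimizer flattens noise within constant segments while largely preserving the jumps between them.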
NASA Astrophysics Data System (ADS)
Tian, Shu; Zhang, Ye; Yan, Yimin; Su, Nan; Zhang, Junping
2016-09-01
Latent low-rank representation (LatLRR) has attracted considerable attention in the field of remote sensing image segmentation due to its effectiveness in exploring the multiple subspace structures of data. However, the increasingly heterogeneous texture information in high spatial resolution remote sensing images leads to more severe interference between pixels in a local neighborhood, and LatLRR fails to capture local complex structure information. We therefore present a local sparse structure-constrained latent low-rank representation (LSSLatLRR) segmentation method, which explicitly imposes a local sparse structure constraint on LatLRR to capture the intrinsic local structure in manifold-structure feature subspaces. The segmentation framework can be viewed as two stages in cascade. In the first stage, we use a local histogram transform to extract texture local histogram features (LHOG) at each pixel, which efficiently capture complex and micro-texture patterns. In the second stage, a local sparse structure (LSS) formulation is established on LHOG, which aims to preserve the local intrinsic structure and enhance the relationship between pixels with similar local characteristics. Meanwhile, by integrating LSS with LatLRR, we can efficiently capture the local sparse and low-rank structure in the mixture of feature subspaces, and we adopt a subspace segmentation method to improve segmentation accuracy. Experimental results on remote sensing images with different spatial resolutions show that, compared with three state-of-the-art image segmentation methods, the proposed method achieves more accurate segmentation results.
Example-Based Image Colorization Using Locality Consistent Sparse Representation.
Bo Li; Fuchen Zhao; Zhuo Su; Xiangguo Liang; Yu-Kun Lai; Rosin, Paul L
2017-11-01
Image colorization aims to produce a natural looking color image from a given gray-scale image, which remains a challenging problem. In this paper, we propose a novel example-based image colorization method exploiting a new locality consistent sparse representation. Given a single reference color image, our method automatically colorizes the target gray-scale image by sparse pursuit. For efficiency and robustness, our method operates at the superpixel level. We extract low-level intensity features, mid-level texture features, and high-level semantic features for each superpixel, which are then concatenated to form its descriptor. The collection of feature vectors for all the superpixels from the reference image composes the dictionary. We formulate colorization of target superpixels as a dictionary-based sparse reconstruction problem. Inspired by the observation that superpixels with similar spatial location and/or feature representation are likely to match spatially close regions from the reference image, we further introduce a locality promoting regularization term into the energy formulation, which substantially improves the matching consistency and subsequent colorization results. Target superpixels are colorized based on the chrominance information from the dominant reference superpixels. Finally, to further improve coherence while preserving sharpness, we develop a new edge-preserving filter for chrominance channels with the guidance from the target gray-scale image. To the best of our knowledge, this is the first work on sparse pursuit image colorization from single reference images. Experimental results demonstrate that our colorization method outperforms the state-of-the-art methods, both visually and quantitatively using a user study.
A Space-Time-Frequency Dictionary for Sparse Cortical Source Localization.
Korats, Gundars; Le Cam, Steven; Ranta, Radu; Louis-Dorr, Valerie
2016-09-01
Cortical source imaging aims at identifying activated cortical areas on the surface of the cortex from raw electroencephalogram (EEG) data. This problem is ill-posed, the number of channels being very low compared to the number of possible source positions. In some realistic physiological situations, the active areas are sparse in space and of short duration, and the amount of spatio-temporal data available to carry out the inversion is then limited. In this study, we propose an original data-driven space-time-frequency (STF) dictionary which simultaneously accounts for both spatial and time-frequency sparseness while preserving smoothness in time-frequency (i.e., nonstationary smooth time courses in sparse locations). Based on these assumptions, we leverage the matching pursuit (MP) framework to select the most relevant atoms in this highly redundant dictionary. We apply two recent MP algorithms, single best replacement (SBR) and source-deflated matching pursuit, and compare the results obtained with a spatial dictionary and with the proposed STF dictionary to demonstrate the improvements of our multidimensional approach. We also provide comparisons with well-established inversion methods, FOCUSS and RAP-MUSIC, analyzing performance under different degrees of nonstationarity and signal-to-noise ratio. Our STF dictionary combined with the SBR approach provides robust performance on realistic simulations. From a computational point of view, the algorithm is embedded in the wavelet domain, ensuring high efficiency in terms of computation time. The proposed approach ensures fast and accurate sparse cortical localization on highly nonstationary and noisy data.
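The greedy atom selection at the heart of MP (which SBR and source-deflated MP refine) can be sketched as follows; the dictionary here is a small hypothetical unit-norm one, not the paper's STF dictionary:

```python
import numpy as np

def matching_pursuit(signal, D, n_atoms):
    """Greedy MP: repeatedly pick the dictionary atom (unit-norm column of D)
    most correlated with the current residual and subtract its contribution."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]
    return coeffs, residual

# With an orthonormal dictionary, MP recovers a sparse signal exactly.
D = np.eye(8)
x = 2.0 * D[:, 2] - 1.5 * D[:, 5]
coeffs, residual = matching_pursuit(x, D, n_atoms=2)
```

For redundant dictionaries like the STF one, plain MP can revisit atoms; algorithms such as SBR add the ability to remove previously selected atoms.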
Petrov, Andrii Y; Herbst, Michael; Andrew Stenger, V
2017-08-15
Rapid whole-brain dynamic Magnetic Resonance Imaging (MRI) is of particular interest in Blood Oxygen Level Dependent (BOLD) functional MRI (fMRI). Faster acquisitions with higher temporal sampling of the BOLD time-course provide several advantages including increased sensitivity in detecting functional activation, the possibility of filtering out physiological noise for improving temporal SNR, and freezing out head motion. Generally, faster acquisitions require undersampling of the data which results in aliasing artifacts in the object domain. A recently developed low-rank (L) plus sparse (S) matrix decomposition model (L+S) is one of the methods that has been introduced to reconstruct images from undersampled dynamic MRI data. The L+S approach assumes that the dynamic MRI data, represented as a space-time matrix M, is a linear superposition of L and S components, where L represents highly spatially and temporally correlated elements, such as the image background, while S captures dynamic information that is sparse in an appropriate transform domain. This suggests that L+S might be suited for undersampled task or slow event-related fMRI acquisitions because the periodic nature of the BOLD signal is sparse in the temporal Fourier transform domain and slowly varying low-rank brain background signals, such as physiological noise and drift, will be predominantly low-rank. In this work, as a proof of concept, we exploit the L+S method for accelerating block-design fMRI using a 3D stack of spirals (SoS) acquisition where undersampling is performed in the kz-t domain. We examined the feasibility of the L+S method to accurately separate temporally correlated brain background information in the L component while capturing periodic BOLD signals in the S component. We present results acquired in control human volunteers at 3T for both retrospective and prospectively acquired fMRI data for a visual activation block-design task.
We show that an SoS fMRI acquisition with an acceleration factor of four and L+S reconstruction can achieve brain coverage of 40 slices at 2 mm isotropic resolution with a 64 x 64 matrix size every 500 ms.
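A minimal numerical sketch of an L+S split by alternating singular-value thresholding (for L) and entrywise soft-thresholding (for S); the published reconstruction additionally enforces k-space data consistency and a temporal-Fourier sparsifying transform, which are omitted here:

```python
import numpy as np

def l_plus_s(M, lam=0.15, tau=1.0, n_iter=50):
    """Split M into low-rank L (via singular-value thresholding) and
    sparse S (via entrywise soft-thresholding), alternating the two."""
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # SVT step: shrink singular values of M - S to get L.
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * np.maximum(s - tau, 0.0)) @ Vt
        # Soft-threshold step: shrink entries of M - L to get S.
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S

# Synthetic "space-time" matrix: rank-1 background plus a few sparse spikes,
# mimicking a static background with localized dynamic signal.
rng = np.random.default_rng(1)
M = np.outer(rng.standard_normal(40), rng.standard_normal(30))
M[3, 7] += 5.0
M[20, 15] -= 4.0
L, S = l_plus_s(M)
```

By construction the final soft-threshold step leaves every entry of the residual M - L - S no larger than lam in magnitude, so L + S closely reproduces M.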
Mohydeen, Ali; Chargé, Pascal; Wang, Yide; Bazzi, Oussama; Ding, Yuehua
2018-05-06
A parametric scheme for spatially correlated sparse multiple-input multiple-output (MIMO) channel path delay estimation in scattering environments is presented in this paper. In MIMO outdoor communication scenarios, channel impulse responses (CIRs) of different transmit–receive antenna pairs are often supposed to be sparse due to a few significant scatterers, and share a common sparse pattern, such that path delays are assumed to be equal for every transmit–receive antenna pair. In some existing works, an exact common support condition is exploited, where the path delays are considered equal for every transmit–receive antenna pair, meanwhile ignoring the influence of scattering. A more realistic channel model is proposed in this paper, where due to scatterers in the environment, the received signals are modeled as clusters of multi-rays around a nominal or mean time delay at different antenna elements, resulting in a non-strictly exact common support phenomenon. A method for estimating the channel mean path delays is then derived based on the subspace approach, and the tracking of the effective dimension of the signal subspace that changes due to the wireless environment. The proposed method shows an improved channel mean path delays estimation performance in comparison with the conventional estimation methods.
Sparse grid techniques for particle-in-cell schemes
NASA Astrophysics Data System (ADS)
Ricketson, L. F.; Cerfon, A. J.
2017-02-01
We propose the use of sparse grids to accelerate particle-in-cell (PIC) schemes. By using the so-called ‘combination technique’ from the sparse grids literature, we are able to dramatically increase the size of the spatial cells in multi-dimensional PIC schemes while paying only a slight penalty in grid-based error. The resulting increase in cell size allows us to reduce the statistical noise in the simulation without increasing total particle number. We present initial proof-of-principle results from test cases in two and three dimensions that demonstrate the new scheme’s efficiency, both in terms of computation time and memory usage.
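The combination technique assembles the sparse-grid solution from many anisotropic component grids with signed coefficients: in d dimensions, the level-n combination uses the coefficient (-1)^q * C(d-1, q) on grids whose levels satisfy |l|_1 = n - q, for q = 0..d-1. A small sketch enumerating these grids and coefficients (function names are illustrative):

```python
from math import comb

def compositions(total, parts):
    """All tuples of `parts` non-negative integers summing to `total`."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def combination_grids(n, d=2):
    """Component-grid levels and coefficients for the level-n combination
    technique in d dimensions: c = (-1)**q * C(d-1, q) for |l|_1 = n - q."""
    grids = []
    for q in range(d):
        c = (-1) ** q * comb(d - 1, q)
        for levels in compositions(n - q, d):
            grids.append((levels, c))
    return grids

grids2d = combination_grids(3, d=2)   # 4 grids with +1, 3 grids with -1
```

The coefficients always sum to 1, the consistency condition that makes the combined solution reproduce constants exactly; each component grid is much coarser in most directions than a full tensor grid of the same level, which is what allows the larger spatial cells described in the abstract.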
TES/Aura L3 Atmospheric Temperatures Daily V5 (TL3ATD)
Atmospheric Science Data Center
2018-05-08
... Platform: TES Aura L1B Nadir/Limb Spatial Coverage: (-180, 180)(-90, 90) Spatial Resolution: 0.5 x 5 km nadir 2.3 x 23 km limb Temporal Coverage: 07/15/2004 - Present Temporal Resolution: ...
NASA Astrophysics Data System (ADS)
Xiao, Q.; Liu, Y.
2017-12-01
Satellite aerosol optical depth (AOD) has been used to assess fine particulate matter (PM2.5) pollution worldwide. However, non-random missing AOD due to cloud cover or high surface reflectance can cause up to 80% data loss and bias model-estimated spatial and temporal trends of PM2.5. Previous studies filled the data gap largely by spatial smoothing, which ignored the impact of cloud cover and meteorology on aerosol loadings and has been shown to exhibit poor performance when monitoring stations are sparse or when there is seasonal large-scale missingness. Using the Yangtze River Delta of China as an example, we present a flexible Multiple Imputation (MI) method that combines cloud fraction, elevation, humidity, temperature, and spatiotemporal trends to impute the missing AOD. A two-stage statistical model driven by gap-filled AOD, meteorology, and land use information was then fitted to estimate daily ground PM2.5 concentrations in 2013 and 2014 at 1 km resolution with complete coverage in space and time. The daily MI models have an average R2 of 0.77, with an inter-quartile range of 0.71 to 0.82 across days. The overall model 10-fold cross-validation R2 values were 0.81 and 0.73 (for 2013 and 2014, respectively). Predictions with only observational AOD or only imputed AOD showed similar accuracy. This method provides reliable PM2.5 predictions with complete coverage at high resolution. By including all pixels of all days in model development, this method corrected the sampling bias in satellite-driven air pollution modelling due to non-random missingness in AOD. Compared with previously reported gap-filling methods, the MI method has the strength of not relying on ground PM2.5 measurements, and therefore allows the prediction of historical PM2.5 levels prior to the establishment of regular ground monitoring networks.
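A toy sketch of covariate-driven gap filling in the spirit described (regression on cloud fraction and meteorology rather than pure spatial smoothing); the authors' MI model is far richer, and every variable and relationship below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
cloud = rng.uniform(0.0, 1.0, n)            # cloud fraction
humidity = rng.uniform(20.0, 90.0, n)       # relative humidity (%)
aod_true = 0.3 + 0.5 * cloud + 0.004 * humidity  # hypothetical linear AOD

# Non-random missingness: retrievals fail over the cloudiest pixels.
missing = cloud > 0.6

# Fit on observed pixels only, then impute the gaps from the covariates.
X = np.column_stack([np.ones(n), cloud, humidity])
beta, *_ = np.linalg.lstsq(X[~missing], aod_true[~missing], rcond=None)
aod_filled = np.where(missing, X @ beta, aod_true)
```

Because the missingness here depends on cloud fraction, a purely spatial interpolation of observed AOD would be biased low over cloudy pixels, while the covariate regression extrapolates the cloud dependence into the gaps.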
NASA Astrophysics Data System (ADS)
Simard, M.; Denbina, M. W.
2017-12-01
Using data collected by NASA's Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) and Land, Vegetation, and Ice Sensor (LVIS) lidar, we have estimated forest canopy height for a number of study areas in the country of Gabon using a new machine learning data fusion approach. Using multi-baseline polarimetric synthetic aperture radar interferometry (PolInSAR) data collected by UAVSAR, forest heights can be estimated using the random volume over ground model. In the case of multi-baseline UAVSAR data consisting of many repeat passes with spatially separated flight tracks, we can estimate different forest height values for each different image pair, or baseline. In order to choose the best forest height estimate for each pixel, the baselines must be selected or ranked, taking care to avoid baselines with unsuitable spatial separation, or severe temporal decorrelation effects. The current baseline selection algorithms in the literature use basic quality metrics derived from the PolInSAR data which are not necessarily indicative of the true height accuracy in all cases. We have developed a new data fusion technique which treats PolInSAR baseline selection as a supervised classification problem, where the classifier is trained using a sparse sampling of lidar data within the PolInSAR coverage area. The classifier uses a large variety of PolInSAR-derived features as input, including radar backscatter as well as features based on the PolInSAR coherence region shape and the PolInSAR complex coherences. The resulting data fusion method produces forest height estimates which are more accurate than a purely radar-based approach, while having a larger coverage area than the input lidar training data, combining some of the strengths of each sensor. 
The technique demonstrates the strong potential for forest canopy height and above-ground biomass mapping using fusion of PolInSAR with data from future spaceborne lidar missions such as the upcoming Global Ecosystems Dynamics Investigation (GEDI) lidar.
NASA Astrophysics Data System (ADS)
Garcia-Eidell, Cynthia; Comiso, Josefino C.; Dinnat, Emmanuel; Brucker, Ludovic
2017-09-01
Global surface ocean salinity measurements have been available since the launch of SMOS in 2009 and coverage was further enhanced with the launch of Aquarius in 2011. In the polar regions where spatial and temporal changes in sea surface salinity (SSS) are deemed important, the data have not been as robustly validated because of the paucity of in situ measurements. This study presents a comparison of four SSS products in the ice-free Arctic region, three using Aquarius data and one using SMOS data. The accuracy of each product is assessed through comparative analysis with ship and other in situ measurements. Results indicate RMS errors ranging between 0.33 and 0.89 psu. Overall, the four products show generally good consistency in spatial distribution with the Atlantic side being more saline than the Pacific side. A good agreement between the ship and satellite measurements was also observed in the low salinity regions in the Arctic Ocean, where SSS in situ measurements are usually sparse, at the end of summer melt seasons. Some discrepancies including biases of about 1 psu between the products in spatial and temporal distribution are observed. These are due in part to differences in retrieval techniques, geophysical filtering, and sea ice and land masks. The monthly SSS retrievals in the Arctic from 2011 to 2015 showed variations (within ˜1 psu) consistent with effects of sea ice seasonal cycles. This study indicates that spaceborne observations capture the seasonality and interannual variability of SSS in the Arctic with reasonably good accuracy.
NASA Astrophysics Data System (ADS)
Webster, C.; Bühler, Y.; Schirmer, M.; Stoffel, A.; Giulia, M.; Jonas, T.
2017-12-01
Snow depth distribution in forests exhibits strong spatial heterogeneity compared to adjacent open sites. Measurement of snow depth in forests is currently limited to a) manual point measurements, which are sparse and time-intensive, b) ground-penetrating radar surveys, which have limited spatial coverage, or c) airborne LiDAR acquisitions, which are expensive and may deteriorate in denser forests. We present the application of unmanned aerial vehicles in combination with structure-from-motion (SfM) methods to photogrammetrically map snow depth distribution in forested terrain. Two separate flights were carried out 10 days apart across a heterogeneous forested area of 900 x 500 m. Corresponding snow depth maps were derived using both LiDAR-based and SfM-based DTM data obtained during snow-off conditions. Manual measurements collected following each flight were used to validate the snow depth maps. Snow depths were resolved at 5 cm resolution, and forest snow depth distribution structures such as tree wells and other areas of preferential melt were represented well. Differential snow depth maps showed maximum ablation on the exposed south sides of trees and smaller differences in the centre of gaps and on the north sides of trees. This new application of SfM to map snow depth distribution in forests demonstrates a straightforward method for obtaining information that was previously available only through spatially limited manual ground-based measurements. These methods could therefore be extended to more frequent observation of snow depths in forests, as well as to estimating snow accumulation and depletion rates.
LiDAR point classification based on sparse representation
NASA Astrophysics Data System (ADS)
Li, Nan; Pfeifer, Norbert; Liu, Chun
2017-04-01
To combine the initial spatial structure and features of LiDAR data for accurate classification, the LiDAR data are represented as a 4th-order tensor, and a sparse representation for classification (SRC) method is applied to this LiDAR tensor. SRC requires only a few training samples from each class while still achieving good classification results. Multiple features are extracted from the raw LiDAR points to generate a high-dimensional vector at each point; the LiDAR tensor is then built from the spatial distribution and feature vectors of the point neighborhood. The entries of the LiDAR tensor are accessed via four indexes, each called a mode: three spatial modes in the X, Y, and Z directions and one feature mode. The sparsity algorithm seeks the best representation of a test sample as a sparse linear combination of training samples from a dictionary. To exploit the sparsity of the LiDAR tensor, Tucker decomposition is used: it decomposes a tensor into a core tensor multiplied by a matrix along each mode. These matrices can be regarded as the principal components in each mode, and the entries of the core tensor express the level of interaction between the different components. The LiDAR tensor can therefore be approximately represented by a sparse core tensor multiplied by a matrix selected from a dictionary along each mode. The matrices decomposed from the training samples are arranged as the initial elements of the dictionary; through dictionary learning, a reconstructive and discriminative structure dictionary is built along each mode, composed of class-specific sub-dictionaries. The sparse core tensor is then calculated by a tensor OMP (Orthogonal Matching Pursuit) method based on the dictionaries along each mode. The original tensor is expected to be well recovered by the sub-dictionary associated with the relevant class, while entries in the sparse tensor associated with other classes should be nearly zero; SRC therefore uses the reconstruction error associated with each class to classify the data. A section of airborne LiDAR points over the city of Vienna is classified into six classes: ground, roofs, vegetation, covered ground, walls, and other points, using only six training samples per class. For the final classification result, ground and covered ground are merged into a single class (ground). The classification accuracy is 94.60% for ground, 95.47% for roofs, 85.55% for vegetation, 76.17% for walls, and 20.39% for other objects.
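The classification-by-reconstruction-error rule that SRC relies on can be sketched in its simplest vector form; here plain least squares stands in for the sparse solver, and the class dictionaries are hypothetical, whereas the paper works with Tucker-structured dictionaries and tensor OMP:

```python
import numpy as np

def src_classify(x, class_dictionaries):
    """Assign x to the class whose sub-dictionary reconstructs it with the
    smallest residual (least squares stands in for sparse coding here)."""
    errors = []
    for D in class_dictionaries:
        coef, *_ = np.linalg.lstsq(D, x, rcond=None)
        errors.append(np.linalg.norm(x - D @ coef))
    return int(np.argmin(errors))

# Two hypothetical classes spanning disjoint subspaces of R^6.
D0 = np.eye(6)[:, 0:2]                  # class 0 atoms: e0, e1
D1 = np.eye(6)[:, 3:5]                  # class 1 atoms: e3, e4
x = 2.0 * D1[:, 0] + 0.5 * D1[:, 1]     # sample lying in the class-1 subspace
label = src_classify(x, [D0, D1])
```

A sample that lies in (or near) one class's subspace yields a near-zero residual for that sub-dictionary and a large residual for the others, which is exactly the decision rule the abstract describes.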
TES/Aura L3 Atmospheric Temperatures Daily V4 (TL3ATD)
Atmospheric Science Data Center
2018-05-09
... Platform: TES Aura L1B Nadir/Limb Spatial Coverage: 5.3 x 8.5 km nadir 37 x 23 km limb Spatial ... 0.5 x 5 km nadir 2.3 x 23 km limb Temporal Coverage: 08/22/2004 - present Temporal Resolution: ...
Results of time-domain electromagnetic soundings in Everglades National Park, Florida
Fitterman, D.V.; Deszcz-Pan, Maria; Stoddard, C.E.
1999-01-01
This report describes the collection, processing, and interpretation of time-domain electromagnetic soundings from Everglades National Park. The results are used to locate the extent of seawater intrusion in the Biscayne aquifer and to map the base of the Biscayne aquifer in regions where well coverage is sparse. The data show no evidence of fresh ground-water flow at depth into Florida Bay.
NASA Astrophysics Data System (ADS)
Trawinski, P. R.; Mackay, D. S.
2009-03-01
The objective of this study is to quantify and model spatial dependence in mosquito vector populations and develop predictions for unsampled locations using geostatistics. Mosquito control program trap sites are often located too far apart to detect spatial dependence but the results show that integration of spatial data over time for Cx. pipiens-restuans and according to meteorological conditions for Ae. vexans enables spatial analysis of sparse sample data. This study shows that mosquito abundance is spatially correlated and that spatial dependence differs between Cx. pipiens-restuans and Ae. vexans mosquitoes.
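The spatial dependence that geostatistical prediction builds on is typically quantified with an empirical semivariogram; a minimal sketch with hypothetical trap coordinates and counts (the study's actual data and fitted variogram model are not reproduced here):

```python
import numpy as np

def semivariogram(coords, values, lags, tol=0.5):
    """Empirical semivariance 0.5 * mean((z_i - z_j)^2) over all point pairs
    whose separation distance falls within `tol` of each lag."""
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sqdiff = (values[:, None] - values[None, :]) ** 2
    gamma = []
    for h in lags:
        pairs = np.triu(np.abs(dists - h) <= tol, k=1)  # count each pair once
        gamma.append(0.5 * sqdiff[pairs].mean())
    return np.array(gamma)

# Hypothetical trap sites on a transect with alternating abundance.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
counts = np.array([0.0, 1.0, 0.0, 1.0])
gamma = semivariogram(coords, counts, lags=[1.0, 2.0], tol=0.1)
```

Low semivariance at short lags relative to longer lags indicates spatial correlation; a fitted variogram model then supports kriging predictions at unsampled locations, as in the abstract.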
Population coding in sparsely connected networks of noisy neurons.
Tripp, Bryan P; Orchard, Jeff
2012-01-01
This study examines the relationship between population coding and spatial connection statistics in networks of noisy neurons. Encoding of sensory information in the neocortex is thought to require coordinated neural populations, because individual cortical neurons respond to a wide range of stimuli, and exhibit highly variable spiking in response to repeated stimuli. Population coding is rooted in network structure, because cortical neurons receive information only from other neurons, and because the information they encode must be decoded by other neurons, if it is to affect behavior. However, population coding theory has often ignored network structure, or assumed discrete, fully connected populations (in contrast with the sparsely connected, continuous sheet of the cortex). In this study, we modeled a sheet of cortical neurons with sparse, primarily local connections, and found that a network with this structure could encode multiple internal state variables with high signal-to-noise ratio. However, we were unable to create high-fidelity networks by instantiating connections at random according to spatial connection probabilities. In our models, high-fidelity networks required additional structure, with higher cluster factors and correlations between the inputs to nearby neurons.
Assimilation of Spatially Sparse In Situ Soil Moisture Networks into a Continuous Model Domain
NASA Astrophysics Data System (ADS)
Gruber, A.; Crow, W. T.; Dorigo, W. A.
2018-02-01
Growth in the availability of near-real-time soil moisture observations from ground-based networks has spurred interest in the assimilation of these observations into land surface models via a two-dimensional data assimilation system. However, the design of such systems is currently hampered by our ignorance concerning the spatial structure of error afflicting ground and model-based soil moisture estimates. Here we apply newly developed triple collocation techniques to provide the spatial error information required to fully parameterize a two-dimensional (2-D) data assimilation system designed to assimilate spatially sparse observations acquired from existing ground-based soil moisture networks into a spatially continuous Antecedent Precipitation Index (API) model for operational agricultural drought monitoring. Over the contiguous United States (CONUS), the posterior uncertainty of surface soil moisture estimates associated with this 2-D system is compared to that obtained from the 1-D assimilation of remote sensing retrievals to assess the value of ground-based observations to constrain a surface soil moisture analysis. Results demonstrate that a fourfold increase in existing CONUS ground station density is needed for ground network observations to provide a level of skill comparable to that provided by existing satellite-based surface soil moisture retrievals.
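The classical covariance-based triple collocation estimator (which the abstract's newly developed techniques extend to spatial error structure) can be sketched as follows; the three data streams below are synthetic stand-ins for ground, satellite, and model soil moisture:

```python
import numpy as np

def triple_collocation(x, y, z):
    """Classical TC: error variances of three estimates of the same truth,
    assuming mutually independent errors uncorrelated with the truth."""
    C = np.cov(np.vstack([x, y, z]))
    var_x = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    var_y = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
    var_z = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
    return var_x, var_y, var_z

# Synthetic triplet: shared truth plus independent noise of known size.
rng = np.random.default_rng(0)
truth = rng.standard_normal(200_000)
x = truth + 0.3 * rng.standard_normal(truth.size)  # e.g. ground network
y = truth + 0.4 * rng.standard_normal(truth.size)  # e.g. satellite retrieval
z = truth + 0.5 * rng.standard_normal(truth.size)  # e.g. API model estimate
vx, vy, vz = triple_collocation(x, y, z)
# Estimates should approach the true error variances 0.09, 0.16, 0.25.
```

These per-product error variances are precisely the inputs a data assimilation system needs to weight sparse ground observations against model and satellite estimates.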
NASA Technical Reports Server (NTRS)
Xiao, Qingyang; Wang, Yujie; Chang, Howard H.; Meng, Xia; Geng, Guannan; Lyapustin, Alexei Ivanovich; Liu, Yang
2017-01-01
Satellite aerosol optical depth (AOD) has been used to assess population exposure to fine particulate matter (PM2.5). The emerging high-resolution satellite aerosol product, Multi-Angle Implementation of Atmospheric Correction (MAIAC), provides a valuable opportunity to characterize local-scale PM2.5 at 1-km resolution. However, non-random missing AOD due to cloud/snow cover or high surface reflectance makes this task challenging. Previous studies filled the data gap by spatially interpolating neighboring PM2.5 measurements or predictions. This strategy ignored the effect of cloud cover on aerosol loadings and has been shown to exhibit poor performance when monitoring stations are sparse or when there is seasonal large-scale missingness. Using the Yangtze River Delta of China as an example, we present a Multiple Imputation (MI) method that combines the MAIAC high-resolution satellite retrievals with chemical transport model (CTM) simulations to fill missing AOD. A two-stage statistical model driven by gap-filled AOD, meteorology, and land use information was then fitted to estimate daily ground PM2.5 concentrations in 2013 and 2014 at 1 km resolution with complete coverage in space and time. The daily MI models have an average R2 of 0.77, with an inter-quartile range of 0.71 to 0.82 across days. The overall MI model 10-fold cross-validation R2 (root mean square error) values were 0.81 (25 µg/m³) and 0.73 (18 µg/m³) for 2013 and 2014, respectively. Predictions with only observational AOD or only imputed AOD showed similar accuracy. Compared with previous gap-filling methods, the MI method presented in this study performed better, with higher coverage, higher accuracy, and the ability to fill missing PM2.5 predictions without ground PM2.5 measurements. This method can provide reliable PM2.5 predictions with complete coverage, reducing bias in exposure assessment for air pollution and health studies.
Spectrum recovery method based on sparse representation for segmented multi-Gaussian model
NASA Astrophysics Data System (ADS)
Teng, Yidan; Zhang, Ye; Ti, Chunli; Su, Nan
2016-09-01
Hyperspectral images (HSIs) offer excellent feature discriminability by supplying diagnostic characteristics at high spectral resolution. However, various degradations can negatively affect the spectral information, including water absorption and band-continuous noise. On the other hand, the huge data volume and strong redundancy among spectra create intense demand for compressing HSIs in the spectral dimension, which also leads to loss of spectral information. Reconstructing the spectral diagnostic characteristics therefore has irreplaceable significance for subsequent applications of HSIs. This paper introduces a spectrum restoration method for HSIs that makes use of a segmented multi-Gaussian model (SMGM) and sparse representation. An SMGM is established to describe the asymmetric spectral absorption and reflection characteristics, and its rationality and sparsity are discussed. Applying compressed sensing (CS) theory, we derive a sparse representation of the SMGM. The degraded and compressed HSIs can then be reconstructed from the uninjured or key bands. Finally, we apply a low-rank matrix recovery (LRMR) algorithm as post-processing to restore the spatial details. The proposed method was tested on spectral data captured on the ground under artificial water-absorption conditions and on an AVIRIS HSI data set. The experimental results, in terms of qualitative and quantitative assessments, demonstrate its effectiveness in recovering spectral information from both degradation and lossy compression. The spectral diagnostic characteristics and the spatial geometry features are well preserved.
Reconstructing cortical current density by exploring sparseness in the transform domain
NASA Astrophysics Data System (ADS)
Ding, Lei
2009-05-01
In the present study, we have developed a novel electromagnetic source imaging approach to reconstruct extended cortical sources by means of cortical current density (CCD) modeling and a novel EEG imaging algorithm which explores sparseness in cortical source representations through the use of L1-norm in objective functions. The new sparse cortical current density (SCCD) imaging algorithm is unique since it reconstructs cortical sources by attaining sparseness in a transform domain (the variation map of cortical source distributions). While large variations are expected to occur along boundaries (sparseness) between active and inactive cortical regions, cortical sources can be reconstructed and their spatial extents can be estimated by locating these boundaries. We studied the SCCD algorithm using numerous simulations to investigate its capability in reconstructing cortical sources with different extents and in reconstructing multiple cortical sources with different extent contrasts. The SCCD algorithm was compared with two L2-norm solutions, i.e. weighted minimum norm estimate (wMNE) and cortical LORETA. Our simulation data from the comparison study show that the proposed sparse source imaging algorithm is able to accurately and efficiently recover extended cortical sources and is promising to provide high-accuracy estimation of cortical source extents.
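The key idea of SCCD, sparseness in the variation map rather than in the source amplitudes themselves, can be seen in a minimal 1-D sketch (a toy illustration of the transform-domain sparsity, not the EEG inverse algorithm itself):

```python
import numpy as np

# Toy 1-D "cortical strip": an extended active patch among inactive cortex.
n = 100
source = np.zeros(n)
source[30:55] = 1.0          # extended source, 25 active elements

# Variation map: first differences along the strip.
variation = np.diff(source)

# The source itself is not sparse, but its variation map is extremely so:
print(np.count_nonzero(source))     # -> 25 nonzeros in the source domain
print(np.count_nonzero(variation))  # -> 2 nonzeros (the patch boundaries)

# Locating those two boundaries recovers both the position and the spatial
# extent of the source, which is why penalizing the L1 norm of the variation
# map, instead of the source amplitudes, favors extended sources.
boundaries = np.flatnonzero(variation) + 1
print(boundaries)                    # -> [30 55]
```

An L1 penalty on the source amplitudes would instead favor a few isolated dipoles and shrink the extent estimate, which is the failure mode the transform-domain formulation avoids.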
Embedded sparse representation of fMRI data via group-wise dictionary optimization
NASA Astrophysics Data System (ADS)
Zhu, Dajiang; Lin, Binbin; Faskowitz, Joshua; Ye, Jieping; Thompson, Paul M.
2016-03-01
Sparse learning enables dimension reduction and efficient modeling of high dimensional signals and images, but it may need to be tailored to best suit specific applications and datasets. Here we used sparse learning to efficiently represent functional magnetic resonance imaging (fMRI) data from the human brain. We propose a novel embedded sparse representation (ESR), to identify the most consistent dictionary atoms across different brain datasets via an iterative group-wise dictionary optimization procedure. In this framework, we introduced additional criteria to make the learned dictionary atoms more consistent across different subjects. We successfully identified four common dictionary atoms that follow the external task stimuli with very high accuracy. After projecting the corresponding coefficient vectors back into the 3-D brain volume space, the spatial patterns are also consistent with traditional fMRI analysis results. Our framework reveals common features of brain activation in a population, as a new, efficient fMRI analysis method.
Loxley, P N
2017-10-01
The two-dimensional Gabor function is adapted to natural image statistics, leading to a tractable probabilistic generative model that can be used to model simple cell receptive field profiles, or generate basis functions for sparse coding applications. Learning is found to be most pronounced in three Gabor function parameters representing the size and spatial frequency of the two-dimensional Gabor function and characterized by a nonuniform probability distribution with heavy tails. All three parameters are found to be strongly correlated, resulting in a basis of multiscale Gabor functions with similar aspect ratios and size-dependent spatial frequencies. A key finding is that the distribution of receptive-field sizes is scale invariant over a wide range of values, so there is no characteristic receptive field size selected by natural image statistics. The Gabor function aspect ratio is found to be approximately conserved by the learning rules and is therefore not well determined by natural image statistics. This allows for three distinct solutions: a basis of Gabor functions with sharp orientation resolution at the expense of spatial-frequency resolution, a basis of Gabor functions with sharp spatial-frequency resolution at the expense of orientation resolution, or a basis with unit aspect ratio. Arbitrary mixtures of all three cases are also possible. Two parameters controlling the shape of the marginal distributions in a probabilistic generative model fully account for all three solutions. The best-performing probabilistic generative model for sparse coding applications is found to be a Gaussian copula with Pareto marginal probability density functions.
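A minimal sketch of the two-dimensional Gabor function under discussion, with the size (sigma), spatial frequency (via wavelength), orientation, and aspect-ratio parameters made explicit; the size-dependent spatial frequency in the example basis mirrors the correlation reported above (all parameter values are illustrative assumptions):

```python
import numpy as np

def gabor2d(sigma, wavelength, theta=0.0, aspect=1.0, shape=(64, 64)):
    # Coordinate grid centred on the patch, rotated by the orientation theta.
    y, x = np.mgrid[:shape[0], :shape[1]].astype(float)
    x -= (shape[1] - 1) / 2.0
    y -= (shape[0] - 1) / 2.0
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    # Gaussian envelope (size sigma, elongation set by `aspect`) times a
    # cosine carrier whose wavelength is the inverse spatial frequency.
    envelope = np.exp(-(xr**2 + (aspect * yr) ** 2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

# A multiscale basis with a fixed aspect ratio and size-dependent spatial
# frequency (wavelength proportional to envelope size):
basis = [gabor2d(sigma=s, wavelength=2.5 * s) for s in (2.0, 4.0, 8.0)]
print([b.shape for b in basis])
```

Fixing the wavelength-to-sigma ratio while varying sigma is one way to realize the "similar aspect ratios, size-dependent spatial frequencies" structure the learned basis exhibits.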
Hippocampal Remapping Is Constrained by Sparseness rather than Capacity
Kammerer, Axel; Leibold, Christian
2014-01-01
Grid cells in the medial entorhinal cortex encode space with firing fields that are arranged on the nodes of spatial hexagonal lattices. Potential candidates to read out the space information of this grid code and to combine it with other sensory cues are hippocampal place cells. In this paper, we investigate a population of grid cells providing feed-forward input to place cells. The capacity of the underlying synaptic transformation is determined by both spatial acuity and the number of different spatial environments that can be represented. The codes for different environments arise from phase shifts of the periodic entorhinal cortex patterns that induce a global remapping of hippocampal place fields, i.e., a new random assignment of place fields for each environment. If only a single environment is encoded, the grid code can be read out at high acuity with only a few place cells. A surplus in place cells can be used to store a space code for more environments via remapping. The number of stored environments can be increased even more efficiently by stronger recurrent inhibition and by partitioning the place cell population such that learning affects only a small fraction of them in each environment. We find that the spatial decoding acuity is much more resilient to multiple remappings than the sparseness of the place code. Since the hippocampal place code is sparse, we thus conclude that the projection from grid cells to the place cells is not using its full capacity to transfer space information. Both populations may encode different aspects of space. PMID:25474570
"More Closeted Than Gayness Itself": The Depiction of Same-Sex Couple Violence in Newspaper Media.
Estes, Michelle L; Webber, Gretchen R
2017-10-01
Same-sex intimate partner violence (IPV) lacks mainstream news media coverage. News media report on those stories that are most prominent, and these stories are often shaped and presented within a White, heterosexual, upper-class, male framework. This framework largely ignores or misrepresents those that do not fit these characteristics, resulting in a gap in research and coverage of same-sex IPV. This article explores whether U.S. newspapers cover same-sex IPV, how often, and how same-sex couple violence is portrayed in newspapers when covered. Twenty-five newspaper articles published from 2005 to 2015, 10 years prior to the U.S. Supreme Court decision that legalized same-sex marriage, were located and analyzed. Findings indicate sparse newspaper coverage of IPV in same-sex couples. Ten articles highlight the lack of coverage and knowledge related to same-sex couple IPV. Eighteen articles address same-sex IPV as a social issue and highlight resource concerns, police involvement, and heteronormativity and heterosexism. Sixteen articles depict specific instances of IPV in same-sex couples. The overall lack of coverage and how same-sex IPV is covered remains problematic and limited. More mainstream and accurate coverage is needed to effectively address this social issue. Limitations and directions for future research are also discussed.
NASA Astrophysics Data System (ADS)
Aizen, V. B.; Aizen, E. M.; Joswiak, D. R.; Surazakov, A. B.; Takeuchi, N.
2007-12-01
The vast arid and semi-arid regions of central Asia, Mongolia, and Northern China are the world's second largest source of atmospheric mineral dust. In recent years, severe dust storms in Asia have intensified in frequency, duration, and areal coverage. However, the limited spatial and temporal extent of aerosol measurements precludes definitive statements regarding the relationship between Asian aerosol generation and climate. It is well known that glaciers are natural archives of environmental records related to past climate and aerosol generation. In our research, we utilized central Asian and western Siberian shallow ice-core records recovered from Altai, Tien Shan, and Pamir mountain glaciers. Although ice-core data may extend climate/aerosol records back in time, their sparse coverage is inadequate to document aerosol spatial distribution. The NASA products from the Aura, Terra, and Aqua satellite missions address this gap by identifying aerosol sources, transport pathways, and areas of deposition. The main objective of our research is to evaluate the effect of climate variability on the dynamics of Asian aerosol loading to the atmosphere and on changes in aerosol transport pathways. Dust particle, major element, and rare earth element analysis of dust aerosols deposited and accumulated in Altai, Tien Shan, and Pamir glaciers suggests that loess from Tajikistan, Afghanistan, and north-western China is the main source of aerosol loading into the upper troposphere over central Asia and western Siberia. At the same time, the soluble ionic component of the ice cores, related to aerosol generated from evaporite deposits, demonstrated both anthropogenic and natural impacts on atmospheric chemistry over these regions. Large perturbations of Ca2+ derived from CaCO3-rich dust transported from the Gobi Desert to Altai and Tien Shan.
Origins and pathways of the ice-core aerosol depositions for the last 10 years were identified by calibrating ice-core records with dust-storm land-surface records and remote sensing aerosol data at monthly/seasonal/annual to event/daily scales. For instance, in southwestern Asia, the severe drought that developed from 1998 to 2002 intensified the frequency, duration, and spatial coverage of large dust storms originating in Iran, Afghanistan, Tajikistan, and the Taklimakan and Gobi Deserts. The Pamir and Tien Shan ice-core records revealed that concentrations of major and rare earth elements during summer were about two times greater in the period 1998-2002 than in the following years. Our qualitative analysis, based on ice-core records and MODIS and SeaWiFS images, determined the origin of dust, transport pathways, and aerosol spatial distribution over central Asia and western Siberia in summer 2000, 2001, and 2002. The transport pathways were reconstructed on the basis of visibility observations and NCAR MM5-predicted winds, with further validation against satellite data and isotope-geochemical ice-core data analysis.
NASA Astrophysics Data System (ADS)
Xiao, Sa; Deng, He; Duan, Caohui; Xie, Junshuai; Zhang, Huiting; Sun, Xianping; Ye, Chaohui; Zhou, Xin
2018-05-01
Dynamic hyperpolarized (HP) 129Xe MRI is able to visualize the process of lung ventilation, which potentially provides unique information about lung physiology and pathophysiology. However, the longitudinal magnetization of HP 129Xe is nonrenewable, making it difficult to achieve high image quality while maintaining high temporal-spatial resolution in the pulmonary dynamic MRI. In this paper, we propose a new accelerated dynamic HP 129Xe MRI scheme incorporating the low-rank, sparse and gas-inflow effects (L + S + G) constraints. According to the gas-inflow effects of HP gas during the lung inspiratory process, a variable-flip-angle (VFA) strategy is designed to compensate for the rapid attenuation of the magnetization. After undersampling k-space data, an effective reconstruction algorithm considering the low-rank, sparse and gas-inflow effects constraints is developed to reconstruct dynamic MR images. In this way, the temporal and spatial resolution of dynamic MR images is improved and the artifacts are lessened. Simulation and in vivo experiments implemented on the phantom and healthy volunteers demonstrate that the proposed method is not only feasible and effective to compensate for the decay of the magnetization, but also has a significant improvement compared with the conventional reconstruction algorithms (P-values are less than 0.05). This confirms the superior performance of the proposed designs and their ability to maintain high quality and temporal-spatial resolution.
Neonatal Atlas Construction Using Sparse Representation
Shi, Feng; Wang, Li; Wu, Guorong; Li, Gang; Gilmore, John H.; Lin, Weili; Shen, Dinggang
2014-01-01
Atlas construction generally includes first an image registration step to normalize all images into a common space and then an atlas building step to fuse the information from all the aligned images. Although numerous atlas construction studies have been performed to improve the accuracy of the image registration step, an unweighted or simply weighted average is often used in the atlas building step. In this article, we propose a novel patch-based sparse representation method for atlas construction after all images have been registered into the common space. By taking advantage of local sparse representation, more anatomical details can be recovered in the built atlas. To make the anatomical structures spatially smooth in the atlas, anatomical feature constraints on the group structure of representations, along with the overlapping of neighboring patches, are imposed to ensure anatomical consistency between neighboring patches. The proposed method has been applied to 73 neonatal MR images with poor spatial resolution and low tissue contrast to construct a neonatal brain atlas with sharp anatomical details. Experimental results demonstrate that the proposed method can significantly enhance the quality of the constructed atlas by discovering more anatomical details, especially in the highly convoluted cortical regions. The resulting atlas demonstrates superior performance when applied to spatially normalizing three different neonatal datasets, compared with other state-of-the-art neonatal brain atlases. PMID:24638883
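The patch-based fusion step can be sketched as follows, here using a simple orthogonal matching pursuit as a stand-in sparse solver (the solver choice, patch size, and all numbers are assumptions): each atlas patch is built from a sparse subset of the corresponding aligned patches rather than a plain average.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: pick at most k columns (atoms)
    of dictionary D to represent the target patch y sparsely."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j in support:
            break
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
# 100-voxel patches from 73 aligned images, one patch per column (toy data).
patches = rng.normal(size=(100, 73))
# Target patch dominated by the anatomy of a few subjects, not the whole group.
target = 0.6 * patches[:, 3] + 0.4 * patches[:, 10]
x = omp(patches, target, k=2)
atlas_patch = patches @ x            # fused atlas patch from a sparse subset
print(np.flatnonzero(x).tolist())
```

A plain average across all 73 columns would blur the detail carried by the two dominant subjects; the sparse combination keeps it, which is the intuition behind the sharper atlas reported above.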
Pérez-Núñez, Ricardo; Medina-Solis, Carlo Eduardo; Maupomé, Gerardo; Vargas-Palacios, Armando
2006-10-01
To determine the level of dental health care coverage in people aged ≥18 years across the country, and to identify the factors associated with coverage. Using the instruments and sampling strategies developed by the World Health Organization for the World Health Survey, a cross-sectional national survey was carried out at the household and individual (adult) levels. Dental data were collected in 20 of Mexico's 32 states. The relationship between coverage and environmental and individual characteristics was examined through logistic regression models. Only 6098 of 24 159 individual respondents reported having oral problems during the preceding 12 months (accounting for 14 284 621 inhabitants of the country if weighted). Only 48% of respondents reporting problems were covered, although details of the appropriateness, timeliness and effectiveness of the intervention(s) were not assessed. The multivariate regression model showed that higher level of education, better socioeconomic status, having at least one chronic disease and having medical insurance were positively associated with better dental care coverage. Age and sex were also associated. Overall dental health care coverage could be improved, assuming that ideal coverage is 100%. Some equality of access issues are apparent because there are differences in coverage across populations in terms of wealth and social status. Identifying the factors associated with sparse coverage is a step in the right direction allowing policymakers to establish strategies aimed at increasing this coverage, focusing on more vulnerable groups and on individuals in greater need of preventive and rehabilitative interventions.
NASA Technical Reports Server (NTRS)
Schuster, David M.; Panda, Jayanta; Ross, James C.; Roozeboom, Nettie H.; Burnside, Nathan J.; Ngo, Christina L.; Kumagai, Hiro; Sellers, Marvin; Powell, Jessica M.; Sekula, Martin K.;
2016-01-01
This NESC assessment examined the accuracy of estimating buffet loads on in-line launch vehicles without booster attachments using sparse unsteady pressure measurements. The buffet loads computed using sparse sensor data were compared with estimates derived using measurements with much higher spatial resolution. The current method for estimating launch vehicle buffet loads is through wind tunnel testing of models with approximately 400 unsteady pressure transducers. Even with this relatively large number of sensors, the coverage can be insufficient to provide reliable integrated unsteady loads on vehicles. In general, sparse sensor spacing requires the use of coherence-length-based corrections in the azimuthal and axial directions to integrate the unsteady pressures and obtain reasonable estimates of the buffet loads. Coherence corrections have been used to estimate buffet loads for a variety of launch vehicles with the assumption that the methodology results in reasonably conservative loads. For the Space Launch System (SLS), the first estimates of buffet loads exceeded the limits of the vehicle structure, so additional tests with higher sensor density were conducted to better define the buffet loads and possibly avoid expensive modifications to the vehicle design. Without the additional tests and improvements to the coherence-length analysis methods, there would have been significant impacts to the vehicle weight, cost, and schedule. If the load estimates turn out to be too low, there is significant risk of structural failure of the vehicle. This assessment used a combination of unsteady pressure-sensitive paint (uPSP), unsteady pressure transducers, and a dynamic force and moment balance to investigate the integration schemes used with limited unsteady pressure data by comparing them with direct integration of extremely dense fluctuating pressure measurements.
A further outcome of the assessment was to evaluate the potential of using the emerging uPSP technique in a production test environment for future launch vehicles. The results show that modifications to the current technique can improve the accuracy of buffet estimates. More importantly, the uPSP worked remarkably well and, with improvements to the frequency response, sensitivity, and productivity, will provide an enhanced method for measuring wind tunnel buffet forcing functions (BFFs).
Using Geostatistical Data Fusion Techniques and MODIS Data to Upscale Simulated Wheat Yield
NASA Astrophysics Data System (ADS)
Castrignano, A.; Buttafuoco, G.; Matese, A.; Toscano, P.
2014-12-01
Population growth increases food demand. Assessing food demand and predicting the actual supply for a given location are critical components of strategic food security planning at the regional scale. Crop yield can be simulated using crop models because it is site-specific and determined by weather, management, length of growing season, and soil properties. Crop models require reliable location-specific data that are not generally available. Obtaining these data at a large number of locations is time-consuming, costly, and sometimes simply not feasible. An upscaling method that extends the coverage of sparse estimates of crop yield to an appropriate extrapolation domain is therefore required. This work investigates the applicability of a geostatistical data fusion approach for merging remote sensing data with the predictions of a simulation model of wheat growth and production using ground-based data. The study area is the Capitanata plain (4000 km2) in the Apulia Region, mostly cropped with durum wheat. The MODIS EVI/NDVI data products for the Capitanata plain were downloaded from the Land Processes Distributed Active Archive Center (LPDAAC) for the whole crop cycle of durum wheat. Phenological development, biomass growth, and grain quantity of durum wheat were simulated by the Delphi system, based on a crop simulation model linked to a database including soil properties and agronomical and meteorological data. Multicollocated cokriging was used to integrate the secondary, exhaustive information (multi-spectral MODIS data) with the primary variable (sparsely distributed biomass/yield model predictions of durum wheat). The model estimates proved to be strongly spatially correlated with the radiance data (red and NIR bands), and the data fusion approach proved to be quite suitable and flexible for integrating data of different types and supports.
High-resolution dynamic 31P-MRSI using a low-rank tensor model.
Ma, Chao; Clifford, Bryan; Liu, Yuchi; Gu, Yuning; Lam, Fan; Yu, Xin; Liang, Zhi-Pei
2017-08-01
To develop a rapid 31P-MRSI method with high spatiospectral resolution using low-rank tensor-based data acquisition and image reconstruction. The multidimensional image function of 31P-MRSI is represented by a low-rank tensor to capture the spatial-spectral-temporal correlations of the data. A hybrid data acquisition scheme is used for sparse sampling, which consists of a set of "training" data with limited k-space coverage to capture the subspace structure of the image function, and a set of sparsely sampled "imaging" data for high-resolution image reconstruction. An explicit subspace pursuit approach is used for image reconstruction, which estimates the bases of the subspace from the "training" data and then reconstructs a high-resolution image function from the "imaging" data. We have validated the feasibility of the proposed method using phantom and in vivo studies on a 3T whole-body scanner and a 9.4T preclinical scanner. The proposed method produced high-resolution static 31P-MRSI images (i.e., 6.9 × 6.9 × 10 mm(3) nominal resolution in a 15-min acquisition at 3T) and high-resolution, high-frame-rate dynamic 31P-MRSI images (i.e., 1.5 × 1.5 × 1.6 mm(3) nominal resolution, 30 s/frame at 9.4T). Dynamic spatiospectral variations of 31P-MRSI signals can be efficiently represented by a low-rank tensor. Exploiting this mathematical structure for data acquisition and image reconstruction can lead to fast 31P-MRSI with high resolution, frame rate, and SNR. Magn Reson Med 78:419-428, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
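The subspace-pursuit idea above, estimating temporal bases from fully sampled "training" data and then fitting sparsely sampled "imaging" data within that subspace, can be sketched with a rank-truncated matrix analogue of the low-rank tensor model (all sizes and the sampling pattern are toy assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy dynamic image: 400 voxels x 50 frames with rank-3 spatiotemporal structure.
U_true = rng.normal(size=(400, 3))
V_true = rng.normal(size=(3, 50))
X = U_true @ V_true

# "Training" data: a few fully sampled voxels (a stand-in for the limited
# k-space coverage) used only to learn the temporal subspace.
train = X[:20, :]
_, _, Vt = np.linalg.svd(train, full_matrices=False)
V_hat = Vt[:3, :]                      # estimated temporal basis

# "Imaging" data: each voxel observed at only 10 of 50 frames (sparse
# sampling); its spatial coefficients are fitted within the learned subspace.
X_rec = np.zeros_like(X)
for v in range(400):
    idx = rng.choice(50, size=10, replace=False)
    coef, *_ = np.linalg.lstsq(V_hat[:, idx].T, X[v, idx], rcond=None)
    X_rec[v] = coef @ V_hat

print(np.linalg.norm(X_rec - X) / np.linalg.norm(X))  # small relative error
```

Because every voxel's time course lies in a 3-dimensional subspace, 10 samples per voxel suffice to pin down its coefficients; the paper's tensor formulation extends this to joint spatial-spectral-temporal correlations.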
Compressed sensing for high-resolution nonlipid suppressed 1H FID MRSI of the human brain at 9.4T.
Nassirpour, Sahar; Chang, Paul; Avdievitch, Nikolai; Henning, Anke
2018-04-29
The aim of this study was to apply compressed sensing to accelerate the acquisition of high resolution metabolite maps of the human brain using a nonlipid suppressed ultra-short TR and TE 1H FID MRSI sequence at 9.4T. X-t sparse compressed sensing reconstruction was optimized for nonlipid suppressed 1H FID MRSI data. Coil-by-coil x-t sparse reconstruction was compared with SENSE x-t sparse and low rank reconstruction. The effect of matrix size and spatial resolution on the achievable acceleration factor was studied. Finally, in vivo metabolite maps with different acceleration factors of 2, 4, 5, and 10 were acquired and compared. Coil-by-coil x-t sparse compressed sensing reconstruction was not able to reliably recover the nonlipid suppressed data, rather a combination of parallel and sparse reconstruction was necessary (SENSE x-t sparse). For acceleration factors of up to 5, both the low-rank and the compressed sensing methods were able to reconstruct the data comparably well (root mean squared errors [RMSEs] ≤ 10.5% for Cre). However, the reconstruction time of the low rank algorithm was drastically longer than compressed sensing. Using the optimized compressed sensing reconstruction, acceleration factors of 4 or 5 could be reached for the MRSI data with a matrix size of 64 × 64. For lower spatial resolutions, an acceleration factor of up to R∼4 was successfully achieved. By tailoring the reconstruction scheme to the nonlipid suppressed data through parameter optimization and performance evaluation, we present high resolution (97 µL voxel size) accelerated in vivo metabolite maps of the human brain acquired at 9.4T within scan times of 3 to 3.75 min. © 2018 International Society for Magnetic Resonance in Medicine.
A Hidden Markov Model for Urban-Scale Traffic Estimation Using Floating Car Data.
Wang, Xiaomeng; Peng, Ling; Chi, Tianhe; Li, Mengzhu; Yao, Xiaojing; Shao, Jing
2015-01-01
Urban-scale traffic monitoring plays a vital role in reducing traffic congestion. Owing to its low cost and wide coverage, floating car data (FCD) serves as a novel approach to collecting traffic data. However, sparse probe data represents the vast majority of the data available on arterial roads in most urban environments. In order to overcome the problem of data sparseness, this paper proposes a hidden Markov model (HMM)-based traffic estimation model, in which the traffic condition on a road segment is considered as a hidden state that can be estimated according to the conditions of road segments having similar traffic characteristics. An algorithm based on clustering and pattern mining rather than on adjacency relationships is proposed to find clusters with road segments having similar traffic characteristics. A multi-clustering strategy is adopted to achieve a trade-off between clustering accuracy and coverage. Finally, the proposed model is designed and implemented on the basis of a real-time algorithm. Results of experiments based on real FCD confirm the applicability, accuracy, and efficiency of the model. In addition, the results indicate that the model is practicable for traffic estimation on urban arterials and works well even when more than 70% of the probe data are missing.
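The filtering step of an HMM-based estimator like the one described can be sketched with the standard forward algorithm on a two-state traffic model (the transition and emission probabilities below are invented toy numbers, not values from the paper):

```python
import numpy as np

# Hidden traffic states on a road segment: 0 = free-flow, 1 = congested.
A = np.array([[0.8, 0.2],      # state transition probabilities
              [0.3, 0.7]])
# Emission model: probability of a "slow" (0) vs "fast" (1) floating-car
# speed report in each state (toy numbers).
B = np.array([[0.1, 0.9],      # free-flow: mostly fast reports
              [0.8, 0.2]])     # congested: mostly slow reports
pi = np.array([0.5, 0.5])

def forward(obs):
    """Forward algorithm: filtered state probabilities given sparse reports."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
    return alpha

# A handful of probe-vehicle reports: 0 = slow, 1 = fast.
posterior = forward([0, 0, 1, 0, 0])
print(posterior.argmax())   # -> 1: segment most likely congested
```

The paper's contribution sits on top of this machinery: when a segment has no probe data at all, its hidden state is inferred from segments clustered as having similar traffic characteristics rather than from its own (missing) observations.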
Duhalde, Denisse J; Arumí, José L; Oyarzún, Ricardo A; Rivera, Diego A
2018-06-11
A fuzzy logic approach is proposed to address the uncertainty caused by sparse data in the assessment of the intrinsic vulnerability of a groundwater system with parametric methods in Las Trancas Valley, in the Andean Mountains of south-central Chile. The valley is a popular tourist destination but lacks centralized public drinking water and sewage systems, a situation that is a potential source of groundwater pollution. Based on DRASTIC, GOD, and EKv and on expert knowledge of the study area, a Mamdani fuzzy approach was generated and the spatial data were processed in ArcGIS. The groundwater system exhibited areas with high, medium, and low intrinsic vulnerability indices. The fuzzy approach results were compared with the results of the traditional methods and in general showed good spatial agreement, although significant changes were also identified in the spatial distribution of the indices. The Mamdani fuzzy approach has proven to be a useful and practical tool for assessing the intrinsic vulnerability of an aquifer under sparse data conditions.
Small-Tip-Angle Spokes Pulse Design Using Interleaved Greedy and Local Optimization Methods
Grissom, William A.; Khalighi, Mohammad-Mehdi; Sacolick, Laura I.; Rutt, Brian K.; Vogel, Mika W.
2013-01-01
Current spokes pulse design methods can be grouped into methods based either on sparse approximation or on iterative local (gradient descent-based) optimization of the transverse-plane spatial frequency locations visited by the spokes. These two classes of methods have complementary strengths and weaknesses: sparse approximation-based methods perform an efficient search over a large swath of candidate spatial frequency locations but most are incompatible with off-resonance compensation, multifrequency designs, and target phase relaxation, while local methods can accommodate off-resonance and target phase relaxation but are sensitive to initialization and suboptimal local cost function minima. This article introduces a method that interleaves local iterations, which optimize the radiofrequency pulses, target phase patterns, and spatial frequency locations, with a greedy method to choose new locations. Simulations and experiments at 3 and 7 T show that the method consistently produces single- and multifrequency spokes pulses with lower flip angle inhomogeneity compared to current methods. PMID:22392822
Time-Distance Helioseismology with the HMI Instrument
NASA Technical Reports Server (NTRS)
Duvall, Thomas L., Jr.
2010-01-01
We expect considerable improvement of time-distance results from the Helioseismic and Magnetic Imager (HMI) instrument as opposed to the earlier MDI and GONG data. The higher data rate makes possible several improvements, including faster temporal sampling (45 sec), smaller spatial pixels (0.5 arc sec), better wavelength coverage (6 samples across the line all transmitted to the ground), and year-round coverage of the full disk. The higher spatial resolution makes possible better longitude coverage of active regions and supergranulation and also better latitude coverage. Doppler, continuum, and line depth images have a strong granulation signal. Line core images show little granulation. Analyses to test the limits of these new capabilities will be presented.
Tipton, John; Hooten, Mevin B.; Goring, Simon
2017-01-01
Scientific records of temperature and precipitation have been kept for several hundred years, but for many areas, only a shorter record exists. To understand climate change, there is a need for rigorous statistical reconstructions of the paleoclimate using proxy data. Paleoclimate proxy data are often sparse, noisy, indirect measurements of the climate process of interest, making each proxy uniquely challenging to model statistically. We reconstruct spatially explicit temperature surfaces from sparse and noisy measurements recorded at historical United States military forts and other observer stations from 1820 to 1894. One common method for reconstructing the paleoclimate from proxy data is principal component regression (PCR). With PCR, one learns a statistical relationship between the paleoclimate proxy data and a set of climate observations that are used as patterns for potential reconstruction scenarios. We explore PCR in a Bayesian hierarchical framework, extending classical PCR in a variety of ways. First, we model the latent principal components probabilistically, accounting for measurement error in the observational data. Next, we extend our method to better accommodate outliers that occur in the proxy data. Finally, we explore alternatives to the truncation of lower-order principal components using different regularization techniques. One fundamental challenge in paleoclimate reconstruction efforts is the lack of out-of-sample data for predictive validation. Cross-validation is of potential value, but is computationally expensive and potentially sensitive to outliers in sparse data scenarios. To overcome the limitations that a lack of out-of-sample records presents, we test our methods using a simulation study, applying proper scoring rules including a computationally efficient approximation to leave-one-out cross-validation using the log score to validate model performance. 
The result of our analysis is a spatially explicit reconstruction of spatio-temporal temperature from a very sparse historical record.
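The classical PCR baseline that this work extends can be sketched in a few lines. The following is a generic illustration only (not the authors' Bayesian hierarchical model); the function name and arguments are hypothetical:

```python
import numpy as np

def pcr_fit_predict(X_train, y_train, X_new, k):
    """Classical principal component regression: project the predictors onto
    their leading k principal components, then regress the target on the
    component scores."""
    mu_x, mu_y = X_train.mean(axis=0), y_train.mean()
    Xc = X_train - mu_x
    # Principal directions from the SVD of the centered predictor matrix
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V_k = Vt[:k].T                          # leading k loading vectors
    scores = Xc @ V_k                       # training scores
    beta, *_ = np.linalg.lstsq(scores, y_train - mu_y, rcond=None)
    return mu_y + (X_new - mu_x) @ V_k @ beta
```

The Bayesian extensions described above replace the deterministic scores with latent probabilistic components and regularize rather than truncate the lower-order ones.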
NASA Astrophysics Data System (ADS)
Lee, O. A.; Eicken, H.; Weyapuk, W., Jr.; Adams, B.; Mohoney, A. R.
2015-12-01
The significance of highly dispersed, remnant Arctic sea ice as a platform for marine mammals and indigenous hunters in spring and summer may have increased disproportionately with changes in the ice cover. As dispersed remnant ice becomes more common in the future it will be increasingly important to understand its ecological role for upper trophic levels such as marine mammals and its role for supporting primary productivity of ice-associated algae. Potential sparse ice habitat at sea ice concentrations below 15% is difficult to detect using remote sensing data alone. A combination of high resolution satellite imagery (including Synthetic Aperture Radar), data from the Barrow sea ice radar, and local observations from indigenous sea ice experts was used to detect sparse sea ice in the Alaska Arctic. Traditional knowledge on sea ice use by marine mammals was used to delimit the scales where sparse ice could still be used as habitat for seals and walrus. Potential sparse ice habitat was quantified with respect to overall spatial extent, size of ice floes, and density of floes. Sparse ice persistence offshore did not prevent the occurrence of large coastal walrus haul outs, but the lack of sparse ice and early sea ice retreat coincided with local observations of ringed seal pup mortality. Observations from indigenous hunters will continue to be an important source of information for validating remote sensing detections of sparse ice, and improving understanding of marine mammal adaptations to sea ice change.
Using an index of habitat patch proximity for landscape design
Eric J. Gustafson; George R. Parker
1994-01-01
A proximity index (PX) inspired by island biogeography theory is described which quantifies the spatial context of a habitat patch in relation to its neighbors. The index distinguishes sparse distributions of small habitat patches from clusters of large patches. An evaluation of the relationship between PX and variation in the spatial characteristics of clusters of...
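A minimal sketch of a proximity-style index in the spirit described is given below. It uses centroid-to-centroid distances for simplicity (the published PX uses nearest edge-to-edge distances), so treat it as illustrative only:

```python
import math

def proximity_index(focal, patches, search_radius):
    """Sum of (patch area / squared distance) over all neighboring patches
    whose centroids lie within the search radius of the focal patch.
    Large nearby patches raise the index; sparse distributions of small
    patches keep it low."""
    total = 0.0
    for p in patches:
        d = math.hypot(p["x"] - focal["x"], p["y"] - focal["y"])
        if 0.0 < d <= search_radius:
            total += p["area"] / d ** 2
    return total
```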
Effects of satellite image spatial aggregation and resolution on estimates of forest land area
M.D. Nelson; R.E. McRoberts; G.R. Holden; M.E. Bauer
2009-01-01
Satellite imagery is being used increasingly in association with national forest inventories (NFIs) to produce maps and enhance estimates of forest attributes. We simulated several image spatial resolutions within sparsely and heavily forested study areas to assess resolution effects on estimates of forest land area, independent of other sensor characteristics. We...
Paschoal, Monique Ramos; Cavalcanti, Hannalice Gottschalck; Ferreira, Maria Ângela Fernandes
2017-11-01
This article seeks to establish the coverage of neonatal hearing screening in Brazil between January 2008 and June 2015. It is an ecological study covering the whole country, using the Urban Articulation Regions as the unit of analysis. To calculate the screening coverage percentage, the Live Births Information System, the Outpatient Information System and the Beneficiaries of the National Supplementary Health Agency Information System were used. An exploratory map analysis and spatial statistical analysis were conducted using TerraView 4.2.2 software. The coverage of neonatal hearing screening increased from 9.3% to 37.2% during the study period. In 2008-2009, coverage ranged from 0% to 79.92%, with most areas between 0% and 20%; in 2014-2015, coverage ranged from 0% to 171.77%, with a visible increase across the country, mainly in the Southern Region. The screening coverage has increased over time, but is still low and unevenly distributed across the territory, which may be explained by local laws and policies and by the existence of different types of auditory health services in the country.

NASA Astrophysics Data System (ADS)
Weiss, J. R.; Walters, R. J.; Wright, T. J.; Hussain, E.; González, P. J.; Hooper, A. J.
2017-12-01
Accurate and high-resolution measurements of interseismic crustal velocity and the strain-rate fields derived from these measurements are an important input for the assessment of earthquake hazard. However, most strain-rate estimation methods and associated seismicity forecasts rely heavily on Global Navigation Satellite System (GNSS) networks with sparse and heterogeneous spatial coverage, limiting both accuracy and resolution. Interferometric Synthetic Aperture Radar (InSAR) provides remotely-sensed observations of surface motion, with accuracy comparable to GNSS data, and with a spatial resolution of a few tens of meters. The recently launched Sentinel-1 (S1) radar satellites can measure deformation at the tectonic-plate scale and across slowly straining regions where earthquake hazard is poorly characterised. We are producing large-scale crustal velocity and strain-rate fields for the Alpine-Himalayan belt (AHB) by augmenting global GNSS data compilations with InSAR-derived surface velocities. We are also systematically processing S1 interferograms for the AHB and these products are freely available to the geoscience community. We focus on the Anatolian microplate, where we have used both Envisat and S1 data to measure crustal velocity. We address some of the challenges associated with merging the complementary geodetic datasets including reference-frame issues, treatment of uncertainties, and comparison of different velocity/strain-rate inversion methods. We use synthetic displacement fields to illustrate how inclusion of InSAR can aid in identifying features such as unmapped active faults and fault segments that are creeping. From our preliminary results for Anatolia, we investigate the spatial distribution of strain and variation of strain rates during the seismic cycle.
Kosik, Ivan; Raess, Avery
2015-01-01
Accurate reconstruction of 3D photoacoustic (PA) images requires detection of photoacoustic signals from many angles. Several groups have adopted staring ultrasound arrays, but assessment of array performance has been limited. We previously reported on a method to calibrate a 3D PA tomography (PAT) staring array system and analyze system performance using singular value decomposition (SVD). The developed SVD metric, however, was impractical for large system matrices, which are typical of 3D PAT problems. The present study consisted of two main objectives. The first objective aimed to introduce the crosstalk matrix concept to the field of PAT for system design. Figures-of-merit utilized in this study were root mean square error, peak signal-to-noise ratio, mean absolute error, and a three dimensional structural similarity index, which were derived between the normalized spatial crosstalk matrix and the identity matrix. The applicability of this approach for 3D PAT was validated by observing the response of the figures-of-merit in relation to well-understood PAT sampling characteristics (i.e. spatial and temporal sampling rate). The second objective aimed to utilize the figures-of-merit to characterize and improve the performance of a near-spherical staring array design. Transducer arrangement, array radius, and array angular coverage were the design parameters examined. We observed that the performance of a 129-element staring transducer array for 3D PAT could be improved by selection of optimal values of the design parameters. The results suggested that this formulation could be used to objectively characterize 3D PAT system performance and would enable the development of efficient strategies for system design optimization. PMID:25875177
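The error-based figures-of-merit described (between the normalized crosstalk matrix and the identity) reduce to a few array operations. The sketch below assumes the matrix is already normalized to a peak of 1 and omits the three-dimensional structural similarity index; the function name is illustrative:

```python
import numpy as np

def crosstalk_figures_of_merit(C):
    """RMSE, mean absolute error, and PSNR between a normalized crosstalk
    matrix C (peak value 1) and the identity matrix.  An ideal imaging
    system has C == I, i.e. zero error and infinite PSNR."""
    C = np.asarray(C, dtype=float)
    err = C - np.eye(C.shape[0])
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mae = np.mean(np.abs(err))
    psnr = 10.0 * np.log10(1.0 / mse)       # peak signal value is 1
    return rmse, mae, psnr
```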
Spatial Coverage Planning for Exploration Robots
NASA Technical Reports Server (NTRS)
Gaines, Daniel; Estlin, Tara; Chouinard, Caroline
2007-01-01
A report discusses an algorithm for an onboard planning and execution technology to support the exploration and characterization of geological features by autonomous rovers. A rover that is capable of deciding which observations are more important relieves the engineering team from much of the burden of attempting to make accurate predictions of what the available rover resources will be in the future. Instead, the science and engineering teams can uplink a set of observation requests that may potentially oversubscribe resources and let the rover use observation priorities and its current assessment of available resources to make decisions about which observations to perform and when to perform them. The algorithm gives the rover the ability to model spatial coverage quality based on data from different scientific instruments, to assess the impact of terrain on coverage quality, to incorporate user-defined priorities among subregions of the terrain to be covered, and to update coverage quality rankings of observations when terrain knowledge changes. When the rover is exploring large geographical features such as craters, channels, or boundaries between two different regions, an important factor in assessing the quality of a mission plan is how the set of chosen observations spatially cover the area of interest. The algorithm allows the rover to evaluate which observation to perform and to what extent the candidate observation will increase the spatial coverage of the plan.
Friesz, Aaron M.; Wylie, Bruce K.; Howard, Daniel M.
2017-01-01
Crop cover maps have become widely used in a range of research applications. Multiple crop cover maps have been developed to suit particular research interests. The National Agricultural Statistics Service (NASS) Cropland Data Layers (CDL) are a series of commonly used crop cover maps for the conterminous United States (CONUS) that span from 2008 to 2013. In this investigation, we sought to contribute to the availability of consistent CONUS crop cover maps by extending temporal coverage of the NASS CDL archive back eight additional years to 2000 by creating annual NASS CDL-like crop cover maps derived from a classification tree model algorithm. We used over 11 million records to train a classification tree algorithm and develop a crop classification model (CCM). The model was used to create crop cover maps for the CONUS for years 2000–2013 at 250 m spatial resolution. The CCM and the maps for years 2008–2013 were assessed for accuracy relative to resampled NASS CDLs. The CCM performed well against a withheld test data set with a model prediction accuracy of over 90%. The assessment of the crop cover maps indicated that the model performed well spatially, placing crop cover pixels within their known domains; however, the model did show a bias towards the ‘Other’ crop cover class, which caused frequent misclassifications of pixels around the periphery of large crop cover patch clusters and of pixels that form small, sparsely dispersed crop cover patches.
NASA Astrophysics Data System (ADS)
Meng, Xia; Garay, Michael J.; Diner, David J.; Kalashnikova, Olga V.; Xu, Jin; Liu, Yang
2018-05-01
Research efforts to better characterize the differential toxicity of PM2.5 (particles with aerodynamic diameters less than or equal to 2.5 μm) speciation are often hindered by the sparse or non-existent coverage of ground monitors. The Multi-angle Imaging SpectroRadiometer (MISR) aboard NASA's Terra satellite is one of few satellite aerosol sensors providing information of aerosol shape, size and extinction globally for a long and continuous period that can be used to estimate PM2.5 speciation concentrations since year 2000. Currently, MISR only provides a 17.6 km product for its entire mission with global coverage every 9 days, a bit too coarse for air pollution health effects research and to capture local spatial variability of PM2.5 speciation. In this study, generalized additive models (GAMs) were developed using MISR prototype 4.4 km-resolution aerosol data with meteorological variables and geographical indicators, to predict ground-level concentrations of PM2.5 sulfate, nitrate, organic carbon (OC) and elemental carbon (EC) in Southern California between 2001 and 2015 at the daily level. The GAMs are able to explain 66%, 62%, 55% and 58% of the daily variability in PM2.5 sulfate, nitrate, OC and EC concentrations during the whole study period, respectively. Predicted concentrations capture large regional patterns as well as fine gradients of the four PM2.5 species in urban areas of Los Angeles and other counties, as well as in the Central Valley. This study is the first attempt to use MISR prototype 4.4 km-resolution AOD (aerosol optical depth) components data to predict PM2.5 sulfate, nitrate, OC and EC concentrations at the sub-regional scale. 
In spite of its low temporal sampling frequency, our analysis suggests that the MISR 4.4 km fractional AODs provide a promising way to capture the spatial hotspots and long-term temporal trends of PM2.5 speciation, to understand the effectiveness of air quality controls, and to allow our estimated PM2.5 speciation data to be linked with common spatial units such as census tract or zip code in epidemiological studies. This modeling strategy needs to be validated in other regions as more MISR 4.4 km data become available in the future.
NASA Astrophysics Data System (ADS)
Moody, Daniela I.; Wilson, Cathy J.; Rowland, Joel C.; Altmann, Garrett L.
2015-06-01
Advanced pattern recognition and computer vision algorithms are of great interest for landscape characterization, change detection, and change monitoring in satellite imagery, in support of global climate change science and modeling. We present results from an ongoing effort to extend neuroscience-inspired models for feature extraction to the environmental sciences, and we demonstrate our work using Worldview-2 multispectral satellite imagery. We use a Hebbian learning rule to derive multispectral, multiresolution dictionaries directly from regional satellite normalized band difference index data. These feature dictionaries are used to build sparse scene representations, from which we automatically generate land cover labels via our CoSA algorithm: Clustering of Sparse Approximations. These data adaptive feature dictionaries use joint spectral and spatial textural characteristics to help separate geologic, vegetative, and hydrologic features. Land cover labels are estimated in example Worldview-2 satellite images of Barrow, Alaska, taken at two different times, and are used to detect and discuss seasonal surface changes. Our results suggest that an approach that learns from both spectral and spatial features is promising for practical pattern recognition problems in high resolution satellite imagery.
Wang, Li-wen; Wei, Ya-xing; Niu, Zheng
2008-06-01
A 1 km MODIS NDVI time series, combined with decision tree, supervised, and unsupervised classification, was used to classify the land cover of Qinghai Province into 14 classes. In our classification system, sparse grassland and sparse shrub were emphasized, and their spatial distributions were mapped. From a digital elevation model (DEM) of Qinghai Province, five elevation belts were derived, and geographic information system (GIS) software was used to analyze vegetation cover variation across them. Our results show that vegetation cover in Qinghai Province improved over the five-year period: vegetation cover area increased from 370,047 km2 in 2001 to 374,576 km2 in 2006, and the vegetation cover rate increased by 0.63%. Among the five elevation belts, the high mountain belt has the highest vegetation cover ratio (67.92%) and the largest area of middle-density grassland (94,003 km2), and its dense grassland shows the greatest increase in area (1,280 km2). Over the five years, the largest change was the conversion of 15,931 km2 of sparse grassland to middle-density grassland in the high mountain belt.
A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem
Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.
2013-01-01
Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. 
These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy, and robustness. PMID:24055554
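The core subspace pursuit iteration (expand the support estimate with the columns most correlated with the residual, then prune back to k atoms by least squares) can be sketched as follows. This is the generic algorithm of Dai and Milenkovic that SPIGH builds on, not the hierarchical SPIGH solution itself:

```python
import numpy as np

def subspace_pursuit(A, y, k, n_iter=10):
    """Generic subspace pursuit: recover a k-sparse x from y = A x by
    alternately expanding the support with the k columns most correlated
    with the residual and pruning back to the k largest LS coefficients."""
    n = A.shape[1]
    support = np.argsort(np.abs(A.T @ y))[-k:]          # initial proxy
    for _ in range(n_iter):
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        # Expand: union of current support and k new candidate columns
        union = np.union1d(support, np.argsort(np.abs(A.T @ residual))[-k:])
        coef_u, *_ = np.linalg.lstsq(A[:, union], y, rcond=None)
        # Prune: keep the k columns with the largest coefficients
        support = union[np.argsort(np.abs(coef_u))[-k:]]
    x = np.zeros(n)
    x[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    return x
```

The MEG setting adds the complications noted above (a very high-dimensional source space and strongly correlated measurements), which is what the hierarchical structure of SPIGH addresses.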
1980-10-01
infestation or extent of open water was measured following the same procedures described for determination of transect percent cover. This value was...procedure where the last vegetation type ended along the transect (i.e. hydrilla, eelgrass, open water), vegetation coverage was determined for the entire...ated open water, no measurements were made. Approximately 150 to 200 prediction stations were used per monthly sample. 61. For sparse and thick
NASA Astrophysics Data System (ADS)
Garcia-Pintado, J.; Barberá, G. G.; Erena Arrabal, M.; Castillo, V. M.
2010-12-01
Objective analysis schemes (OAS), also called "successive correction methods" or "observation nudging", have been proposed for multisensor precipitation estimation combining remote sensing data (meteorological radar or satellite) with data from ground-based raingauge networks. However, in contrast to the more complex geostatistical approaches, the OAS techniques for this use are not optimized. On the other hand, geostatistical techniques ideally require, at the least, modelling the covariance from the rain gauge data at every time step evaluated, which commonly cannot be soundly done. Here, we propose a new procedure (concurrent multiplicative-additive objective analysis scheme [CMA-OAS]) for operational rainfall estimation using rain gauges and meteorological radar, which does not require explicit modelling of spatial covariances. On the basis of a concurrent multiplicative-additive (CMA) decomposition of the spatially nonuniform radar bias, within-storm variability of rainfall and fractional coverage of rainfall are taken into account. Thus both spatially nonuniform radar bias, given that rainfall is detected, and bias in radar detection of rainfall are handled. The interpolation procedure of CMA-OAS is built on the OAS, whose purpose is to estimate a filtered spatial field of the variable of interest through successive corrections of residuals resulting from a Gaussian kernel smoother applied to spatial samples. The CMA-OAS first poses an optimization problem at each gauge-radar support point to obtain both a local multiplicative-additive radar bias decomposition and a regionalization parameter. Second, local biases and regionalization parameters are integrated into an OAS to estimate the multisensor rainfall at the ground level.
The approach considers radar estimates as background a priori information (first guess), so that nudging to observations (gauges) may be relaxed smoothly to the first guess, and the relaxation shape is obtained from the sequential optimization. The procedure is suited to relatively sparse rain gauge networks. To show the procedure, six storms are analyzed at hourly steps over 10,663 km2. Results generally indicated an improved quality with respect to other methods evaluated: a standard mean-field bias adjustment, an OAS spatially variable adjustment with multiplicative factors, ordinary cokriging, and kriging with external drift. In theory, it could be equally applicable to gauge-satellite estimates and other hydrometeorological variables.
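The underlying successive-correction machinery (before the multiplicative-additive bias decomposition is added) can be sketched as a Gaussian kernel smoother applied repeatedly to observation residuals. The function and argument names below are illustrative, not from the paper:

```python
import numpy as np

def successive_correction(grid_xy, obs_xy, obs_val, bg_grid, bg_obs,
                          length_scale, n_pass=3):
    """Objective analysis by successive correction: start from a background
    (first-guess) field and repeatedly spread the residuals at observation
    sites onto the grid with a normalized Gaussian kernel smoother."""
    a_grid, a_obs = bg_grid.astype(float), bg_obs.astype(float)
    d2_go = ((grid_xy[:, None, :] - obs_xy[None, :, :]) ** 2).sum(-1)
    d2_oo = ((obs_xy[:, None, :] - obs_xy[None, :, :]) ** 2).sum(-1)
    W_go = np.exp(-d2_go / (2 * length_scale ** 2))
    W_oo = np.exp(-d2_oo / (2 * length_scale ** 2))
    for _ in range(n_pass):
        resid = obs_val - a_obs                 # misfit at the gauges
        a_grid = a_grid + (W_go @ resid) / W_go.sum(axis=1)
        a_obs = a_obs + (W_oo @ resid) / W_oo.sum(axis=1)
    return a_grid
```

Using the radar field as `bg_grid`/`bg_obs` corresponds to the "radar as first guess" choice described above: far from any gauge the weights vanish and the analysis relaxes back to the radar estimate.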
Spatial analysis of MODIS aerosol optical depth, PM2.5, and chronic coronary heart disease.
Hu, Zhiyong
2009-05-12
Numerous studies have found adverse health effects of acute and chronic exposure to fine particulate matter (PM2.5). Air pollution epidemiological studies relying on ground measurements provided by monitoring networks are often limited by the sparse and unbalanced spatial distribution of the monitors. Studies have found correlations between satellite aerosol optical depth (AOD) and PM2.5 in some land regions. Satellite aerosol data may be used to extend the spatial coverage of PM2.5 exposure assessment. This study investigated the correlation between PM2.5 and AOD in the conterminous USA, derived a spatially complete PM2.5 surface by merging satellite AOD data and ground measurements based on the potential correlation, and examined whether there is an association of coronary heart disease with PM2.5. Daily MODIS (Moderate Resolution Imaging Spectroradiometer) Level 2 AOD images for 2003 and 2004 were collated with US EPA PM2.5 data covering the conterminous USA. Pearson's correlation analysis and geographically weighted regression (GWR) found that the relationship between PM2.5 and AOD is not spatially consistent across the conterminous states. The average correlation is 0.67 in the east and 0.22 in the west. GWR predicts well in the east and poorly in the west. The GWR model was used to derive a PM2.5 grid surface from the mean AOD raster calculated from the daily AOD data (RMSE = 1.67 microg/m3). Fitting a Bayesian hierarchical model linking PM2.5 with age-race standardized mortality rates (SMRs) of chronic coronary heart disease found that areas with higher values of PM2.5 also show high rates of CCHD mortality (posterior mean coefficient = 0.802, 95% Bayesian credible interval = 0.386 to 1.225). There is spatial variation in the relationship between PM2.5 and AOD in the conterminous USA. In the eastern USA, where AOD correlates well with PM2.5, AOD can be merged with ground PM2.5 data to derive a PM2.5 surface for epidemiological study.
The study found that chronic coronary heart disease mortality rate increases with exposure to PM2.5.
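The local fit at the heart of GWR is ordinary weighted least squares with distance-decaying weights. A minimal sketch follows (Gaussian kernel; the function and argument names are illustrative):

```python
import numpy as np

def gwr_local_fit(loc, coords, x, y, bandwidth):
    """Geographically weighted regression at one location: weighted least
    squares where observations are down-weighted by a Gaussian kernel of
    their distance from `loc`.  Returns [intercept, slope]."""
    d2 = ((coords - loc) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    Xd = np.column_stack([np.ones(len(y)), x])   # design matrix + intercept
    WX = Xd * w[:, None]
    # Normal equations of weighted least squares: (X'WX) beta = X'Wy
    return np.linalg.solve(Xd.T @ WX, WX.T @ y)
```

Repeating the fit over a grid of locations yields the spatially varying AOD-to-PM2.5 relationship the abstract describes.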
NASA Astrophysics Data System (ADS)
Marshall, Hans-Peter
The distribution of water in the snow-covered areas of the world is an important climate change indicator, and it is a vital component of the water cycle. At local and regional scales, the snow water equivalent (SWE), the amount of liquid water a given area of the snowpack represents, is very important for water resource management, flood forecasting, and prediction of available hydropower energy. Measurements from only a few automatic weather stations, such as the SNOTEL network, or sparse manual snowpack measurements are typically extrapolated for estimating SWE over an entire basin. Widespread spatial variability in the distribution of SWE and snowpack stratigraphy at local scales causes large errors in these basin estimates. Remote sensing measurements offer a promising alternative, due to their large spatial coverage and high temporal resolution. Although snow cover extent can currently be estimated from remote sensing data, accurately quantifying SWE from remote sensing measurements has remained difficult, due to a high sensitivity to variations in grain size and stratigraphy. In alpine snowpacks, the large degree of spatial variability of snowpack properties and geometry, caused by topographic, vegetative, and microclimatic effects, also makes prediction of snow avalanches very difficult. Ground-based radar and penetrometer measurements can quickly and accurately characterize snowpack properties and SWE in the field. A portable lightweight radar was developed, and allows a real-time estimate of SWE to within 10%, as well as measurements of depths of all major density transitions within the snowpack. New analysis techniques developed in this thesis allow accurate estimates of mechanical properties and an index of grain size to be retrieved from the SnowMicroPenetrometer. 
These two tools together allow rapid characterization of the snowpack's geometry, mechanical properties, and SWE, and are used to guide a finite element model to study the stress distribution on a slope. The ability to accurately characterize snowpack properties at much higher resolutions and spatial extent than previously possible will hopefully help lead to a more complete understanding of spatial variability, its effect on remote sensing measurements and snow slope stability, and result in improvements in avalanche prediction and accuracy of SWE estimates from space.
Xiao, Xiang; Zhu, Hao; Liu, Wei-Jie; Yu, Xiao-Ting; Duan, Lian; Li, Zheng; Zhu, Chao-Zhe
2017-01-01
The International 10/20 system is an important head-surface-based positioning system for transcranial brain mapping techniques, e.g., fNIRS and TMS. As guidance for probe placement, the 10/20 system permits both proper ROI coverage and spatial consistency among multiple subjects and experiments in a MRI-free context. However, the traditional manual approach to the identification of 10/20 landmarks faces problems in reliability and time cost. In this study, we propose a semi-automatic method to address these problems. First, a novel head surface reconstruction algorithm reconstructs head geometry from a set of points uniformly and sparsely sampled on the subject's head. Second, virtual 10/20 landmarks are determined on the reconstructed head surface in computational space. Finally, a visually-guided real-time navigation system guides the experimenter to each of the identified 10/20 landmarks on the physical head of the subject. Compared with the traditional manual approach, our proposed method provides a significant improvement both in reliability and time cost and thus could contribute to improving both the effectiveness and efficiency of 10/20-guided MRI-free probe placement.
Mountain hydrology, snow color, and the fourth paradigm
NASA Astrophysics Data System (ADS)
Dozier, Jeff
2011-10-01
The world's mountain ranges accumulate substantial snow, whose melt produces the bulk of runoff and often combines with rain to cause floods. Worldwide, inadequate understanding and a reliance on sparsely distributed observations limit our ability to predict seasonal and paroxysmal runoff as climate changes, ecosystems adapt, populations grow, land use evolves, and societies make choices. To improve assessments of snow accumulation, melt, and runoff, scientists and community planners can take advantage of two emerging trends: (1) an ability to remotely sense snow properties from satellites at a spatial scale appropriate for mountain regions (10- to 100-meter resolution, coverage of the order of 100,000 square kilometers) and a daily temporal scale appropriate for the dynamic nature of snow and (2) The Fourth Paradigm [Hey et al., 2009], which posits a new scientific approach in which insight is discovered through the manipulation of large data sets as the evolutionary step in scientific thinking beyond the first three paradigms: empiricism, analyses, and simulation. The inspiration for the book's title comes from pioneering computer scientist Jim Gray, based on a lecture he gave at the National Academy of Sciences 3 weeks before he disappeared at sea.
Ice Nucleating Particles around the world - a global review
NASA Astrophysics Data System (ADS)
Kanji, Zamin A.; Atkinson, James; Sierau, Berko; Lohmann, Ulrike
2017-04-01
In the atmosphere, the formation of new ice particles at temperatures above -36 °C is due to a subset of aerosol called Ice Nucleating Particles (INP). However, the spatial and temporal evolution of such particles is poorly understood. Current modelling efforts attempt to estimate the sources and transport of INP, but are hampered by the limited availability and accessibility of INP observations. As part of the EU FP7 project impact of Biogenic versus Anthropogenic emissions on Clouds and Climate: towards a Holistic UnderStanding (BACCHUS), historical and contemporary observations of INP have been collated into a database (http://www.bacchus-env.eu/in/) and are reviewed here. Outside of Europe and North America the coverage of measurements is sparse, especially for the modern-day climate; in many areas the only measurements available are from the mid-20th century. As well as an overview of all the data in the database, correlations with several accompanying variables are presented. For example, immersion freezing INP seem to be negatively correlated with altitude, whereas CFDC-based condensation freezing INP show no height correlation. An initial global parameterisation of INP concentrations, taking into account freezing temperature and relative humidity, is provided for use in modelling.
Piao, Xinglin; Zhang, Yong; Li, Tingshu; Hu, Yongli; Liu, Hao; Zhang, Ke; Ge, Yun
2016-01-01
The Received Signal Strength (RSS) fingerprint-based indoor localization is an important research topic in wireless network communications. Most current RSS fingerprint-based indoor localization methods do not explore and utilize the spatial or temporal correlation existing in fingerprint data and measurement data, which is helpful for improving localization accuracy. In this paper, we propose an RSS fingerprint-based indoor localization method by integrating the spatio-temporal constraints into the sparse representation model. The proposed model utilizes the inherent spatial correlation of fingerprint data in the fingerprint matching and uses the temporal continuity of the RSS measurement data in the localization phase. Experiments on the simulated data and the localization tests in the real scenes show that the proposed method improves the localization accuracy and stability effectively compared with state-of-the-art indoor localization methods. PMID:27827882
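A toy version of sparse-representation fingerprint localization (without the paper's spatio-temporal constraints) expresses the measured RSS vector as a combination of a few fingerprint columns via a tiny matching pursuit, then averages the matching reference locations. All names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def locate_sparse(fingerprints, locations, rss, k=2):
    """Sparse-representation localization: greedily select the k reference
    fingerprints (columns) that best explain the measured RSS vector, then
    return the coefficient-weighted average of their known locations."""
    A_norm = fingerprints / np.linalg.norm(fingerprints, axis=0)
    support, residual = [], rss.astype(float)
    for _ in range(k):                       # orthogonal matching pursuit
        best = int(np.argmax(np.abs(A_norm.T @ residual)))
        if best not in support:
            support.append(best)
        coef, *_ = np.linalg.lstsq(fingerprints[:, support], rss, rcond=None)
        residual = rss - fingerprints[:, support] @ coef
    weights = np.clip(coef, 0.0, None)       # keep non-negative weights
    return (locations[support] * weights[:, None]).sum(0) / weights.sum()
```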
NASA Astrophysics Data System (ADS)
Zhang, Hui; Xue, Lianqing; Yang, Changbing; Chen, Xinfang; Zhang, Luochen; Wei, Guanghui
2018-01-01
The Tarim River (TR), the longest inland river in the arid region of China, is a typical area for vegetation-variation research and plays a crucial role in the sustainable development of the regional ecological environment. In this paper, the newest MODND1M NDVI dataset, at 500 m resolution, was used to calculate a growing-season vegetation index for the period 2000-2015. Using a vegetation coverage index, trend line analysis, and local spatial autocorrelation analysis, this paper investigated the landscape patterns and spatio-temporal variation of vegetation coverage at regional and pixel scales over the mainstream of the Tarim River, Xinjiang. The results showed that (1) the bare land area on both sides of the Tarim River fluctuated downward, with two obvious valley values in 2005 and 2012. (2) Spatially, areas of improved vegetation coverage are mostly distributed upstream, while degraded areas are mainly on the left bank of the midstream and at the end of the Tarim River during 2000-2005. (3) The local spatial autocorrelation analysis revealed that vegetation coverage was positively spatially autocorrelated and spatially concentrated: high-high clusters are mainly distributed upstream, where vegetation cover is relatively good, while low-low clusters, with lower vegetation cover, lie mostly in the lower reaches of the Tarim River.
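The statistic behind such high-high and low-low cluster maps is typically the local Moran's I. A minimal sketch with a row-standardized weights matrix:

```python
import numpy as np

def local_morans_i(values, W):
    """Local Moran's I for each areal unit.  W is a spatial weights matrix
    (zero diagonal, rows summing to 1).  Positive I_i at a high value marks
    a high-high cluster; positive I_i at a low value marks a low-low
    cluster; negative I_i marks a spatial outlier."""
    z = values - values.mean()
    m2 = (z ** 2).mean()                 # variance normalizer
    return z * (W @ z) / m2
```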
Exploring Deep Learning and Sparse Matrix Format Selection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Y.; Liao, C.; Shen, X.
We proposed to explore the use of Deep Neural Networks (DNN) for addressing these longstanding barriers. The recent rapid progress of DNN technology has created a large impact in many fields, significantly improving prediction accuracy over traditional machine learning techniques in image classification, speech recognition, machine translation, and so on. To some degree, these tasks resemble the decision making in many HPC tasks, including the aforementioned format selection for SpMV and linear solver selection. For instance, sparse matrix format selection is akin to image classification (e.g., telling whether an image contains a dog or a cat): in both problems, the right decisions are primarily determined by the spatial patterns of the elements in an input. For image classification, the patterns are of pixels; for sparse matrix format selection, they are of non-zero elements. DNN could be naturally applied if we regard a sparse matrix as an image and the format selection or solver selection as classification problems.
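The "sparse matrix as an image" idea amounts to rasterizing the non-zero pattern into a fixed-size density map, so matrices of any dimension become comparable CNN inputs. A minimal sketch under that assumption (the function name `pattern_image` is mine):

```python
import numpy as np

def pattern_image(rows, cols, shape, bins=32):
    """Render the non-zero pattern of a sparse matrix (given in COO form)
    as a fixed-size density 'image' usable as classifier input,
    regardless of the original matrix size."""
    r = np.floor(np.asarray(rows) * bins / shape[0]).astype(int)
    c = np.floor(np.asarray(cols) * bins / shape[1]).astype(int)
    img = np.zeros((bins, bins))
    np.add.at(img, (r, c), 1.0)        # count non-zeros per bin
    return img / max(img.max(), 1.0)   # scale densities to [0, 1]
```

A diagonal matrix of any size then maps to a diagonal stripe in the 32 × 32 image, which is exactly the kind of spatial regularity a CNN can pick up for format selection.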
Sparse dictionary learning for resting-state fMRI analysis
NASA Astrophysics Data System (ADS)
Lee, Kangjoo; Han, Paul Kyu; Ye, Jong Chul
2011-09-01
Recently, there has been increased interest in the use of neuroimaging techniques to investigate what happens in the brain at rest. Functional imaging studies have revealed that default-mode network activity is disrupted in Alzheimer's disease (AD). However, there is no consensus, as yet, on the choice of analysis method for the application of resting-state analysis to disease classification. This paper proposes a novel compressed sensing based resting-state fMRI analysis tool called Sparse-SPM. As the brain's functional systems have been shown to have features of complex networks according to graph theoretical analysis, we apply a graph model to represent a sparse combination of information flows from a complex network perspective. In particular, a new concept of a spatially adaptive design matrix has been proposed by implementing sparse dictionary learning. The proposed approach shows better performance compared to other conventional methods, such as independent component analysis (ICA) and seed-based approaches, in classifying AD patients from normal controls using resting-state analysis.
An Adaptive Mesh Algorithm: Mesh Structure and Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scannapieco, Anthony J.
2016-06-21
The purpose of Adaptive Mesh Refinement is to minimize spatial errors over the computational space, not to minimize the number of computational elements. An additional result of the technique is that it may reduce the number of computational elements needed to retain a given level of spatial accuracy. Adaptive mesh refinement is a computational technique used to dynamically select, over a region of space, a set of computational elements designed to minimize spatial error in the computational model of a physical process. The fundamental idea is to increase the mesh resolution in regions where the physical variables are represented by a broad spectrum of modes in k-space, hence increasing the effective global spectral coverage of those physical variables. In addition, the selection of the spatially distributed elements is done dynamically by cyclically adjusting the mesh to follow the spectral evolution of the system. Over the years three types of AMR schemes have evolved: block, patch, and locally refined AMR. In block and patch AMR, logical blocks of various grid sizes are overlaid to span the physical space of interest, whereas in locally refined AMR no logical blocks are employed but locally nested mesh levels are used to span the physical space. The distinction between block and patch AMR is that in block AMR the original blocks refine and coarsen entirely in time, whereas in patch AMR the patches change location and zone size with time. The type of AMR described herein is a locally refined AMR. In the algorithm described, at any point in physical space only one zone exists, at whatever level of mesh is appropriate for that physical location. The dynamic creation of a locally refined computational mesh is made practical by a judicious selection of mesh rules.
With these rules the mesh is evolved via a mesh potential designed to concentrate the finest mesh in regions where the physics is modally dense, and to coarsen zones in regions where the physics is modally sparse.
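The locally refined, one-zone-per-point idea can be sketched in one dimension: each cell splits while a local error indicator is large, up to a maximum level, so fine zones concentrate only where the solution varies rapidly. This is a toy illustration under my own simplifications (endpoint jump as the refinement indicator, no mesh potential, no coarsening), not the algorithm of the report:

```python
def refine(cells, f, tol, max_level=5):
    """One refinement pass: split any cell whose endpoint jump in f
    exceeds tol (a crude proxy for broad k-space content)."""
    out = []
    for (a, b, lev) in cells:
        if lev < max_level and abs(f(b) - f(a)) > tol:
            m = 0.5 * (a + b)
            out += [(a, m, lev + 1), (m, b, lev + 1)]
        else:
            out.append((a, b, lev))
    return out

def adapt(f, a=0.0, b=1.0, tol=0.1, passes=6):
    """Repeatedly refine [a, b]; each point ends up covered by exactly
    one cell at whatever level suits that location."""
    cells = [(a, b, 0)]
    for _ in range(passes):
        cells = refine(cells, f, tol)
    return cells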
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, Daniela Irina
An approach for land cover classification, seasonal and yearly change detection and monitoring, and identification of changes in man-made features may use a clustering of sparse approximations (CoSA) on sparse representations in learned dictionaries. A Hebbian learning rule may be used to build multispectral or hyperspectral, multiresolution dictionaries that are adapted to regional satellite image data. Sparse image representations of pixel patches over the learned dictionaries may be used to perform unsupervised k-means clustering into land cover categories. The clustering process behaves as a classifier in detecting real variability. This approach may combine spectral and spatial textural characteristics to detect geologic, vegetative, hydrologic, and man-made features, as well as changes in these features over time.
2015-09-30
DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Large Scale Density Estimation of Blue and Fin Whales: Utilizing Sparse Array Data to Develop and Implement a New Method for Estimating Blue and Fin Whale Density. Len Thomas & Danielle Harris, Centre... to develop and implement a new method for estimating blue and fin whale density that is effective over large spatial scales and is designed to cope
Low-rank matrix decomposition and spatio-temporal sparse recovery for STAP radar
Sen, Satyabrata
2015-08-04
We develop space-time adaptive processing (STAP) methods that leverage the advantages of sparse signal processing techniques in order to detect a slowly moving target. We observe that the inherent sparse characteristics of a STAP problem can be formulated as the low-rankness of the clutter covariance matrix relative to the total adaptive degrees-of-freedom, and also as the sparse interference spectrum in the spatio-temporal domain. By exploiting these sparse properties, we propose two approaches for estimating the interference covariance matrix. In the first approach, we consider a constrained matrix rank minimization problem (RMP) to decompose the sample covariance matrix into a low-rank positive semidefinite matrix and a diagonal matrix. The solution of the RMP is obtained by applying the trace minimization technique and the singular value decomposition with a matrix shrinkage operator. Our second approach deals with the atomic norm minimization problem to recover the clutter response vector that has sparse support on the spatio-temporal plane. We use convex relaxation based standard sparse-recovery techniques to find the solutions. With extensive numerical examples, we demonstrate the performance of the proposed STAP approaches in both ideal and practical scenarios, involving Doppler-ambiguous clutter ridges and spatial and temporal decorrelation effects. We find that the low-rank matrix decomposition based solution requires a number of secondary measurements of about twice the clutter rank to attain near-ideal STAP performance, whereas the spatio-temporal sparsity based approach needs a considerably smaller number of secondary data.
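The matrix shrinkage operator mentioned above is the proximal step for nuclear-norm problems: soft-threshold the singular values. A minimal numpy sketch (the full RMP solution alternates this with re-estimating the diagonal part, which is not shown):

```python
import numpy as np

def sv_shrink(M, tau):
    """Singular-value shrinkage: soft-threshold the singular values of M
    by tau, suppressing small (noise) directions and leaving a low-rank
    estimate -- the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt   # columns of U scaled by shrunk s
```

On a covariance-like matrix that is rank one plus a small perturbation, shrinkage with a threshold above the perturbation level returns an exactly rank-one matrix.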
NASA Astrophysics Data System (ADS)
Lecompte, M. A.; Heaps, J. F.; Williams, F. H.
Imaging the earth from Geostationary Earth Orbit (GEO) allows frequent updates of environmental conditions within an observable hemisphere at time and spatial scales appropriate to the most transient observable terrestrial phenomena. Coverage provided by current GEO Meteorological Satellites (METSATS) fails to fully exploit this advantage, due primarily to obsolescent technology and also institutional inertia. With the full benefit of GEO-based imaging unrealized, rapidly evolving phenomena occurring at the smallest spatial and temporal scales, which frequently have significant environmental impact, remain unobserved. These phenomena may be precursors of the most destructive natural processes that adversely affect society. Timely distribution of information derived from "real-time" observations thus may provide opportunities to mitigate much of the damage to life and property that would otherwise occur. AstroVision International's AVStar Earth monitoring system is designed to overcome the current limitations of GEO Earth coverage and to provide real-time monitoring of changes to the Earth's complete atmospheric, land, and marine surface environments, including fires, volcanic events, lightning, and meteoritic events, on a "live," true color, and multispectral basis. The understanding of severe storm dynamics and its coupling to the earth's electro-sphere will be greatly enhanced by observations at unprecedented sampling frequencies and spatial resolution. Better understanding of these natural phenomena and AVStar operational real-time coverage may also benefit society through improvements in severe weather prediction and warning.
AstroVision's AVStar system, designed to provide this capability with the first of a constellation of GEO-based commercial environmental monitoring satellites to be launched in late 2003, will be discussed, including spatial and temporal resolution, spectral coverage with applications, and an inventory of the potential benefits to society, science, commerce, and education.
Medical image classification based on multi-scale non-negative sparse coding.
Zhang, Ruijie; Shen, Jian; Wei, Fushan; Li, Xiong; Sangaiah, Arun Kumar
2017-11-01
With the rapid development of modern medical imaging technology, medical image classification has become more and more important in medical diagnosis and clinical practice. Conventional medical image classification algorithms usually neglect the semantic gap between low-level features and high-level image semantics, which largely degrades classification performance. To solve this problem, we propose a multi-scale non-negative sparse coding based medical image classification algorithm. First, medical images are decomposed into multiple scale layers, so that diverse visual details can be extracted from different scale layers. Second, for each scale layer, a non-negative sparse coding model with Fisher discriminative analysis is constructed to obtain a discriminative sparse representation of the medical images. Then, the obtained multi-scale non-negative sparse coding features are combined to form a multi-scale feature histogram as the final representation of a medical image. Finally, an SVM classifier is used to perform medical image classification. The experimental results demonstrate that our proposed algorithm can effectively utilize multi-scale and contextual spatial information of medical images, reduce the semantic gap to a large degree, and improve medical image classification performance. Copyright © 2017 Elsevier B.V. All rights reserved.
Influences of gender role socialization and anxiety on spatial cognitive style.
Nori, Raffaella; Mercuri, Noemi; Giusberti, Fiorella; Bensi, Luca; Gambetti, Elisa
2009-01-01
Research on the relationship between personality and social factors in spatial cognitive style is sparse. The present research was conducted to help fill the gap in this domain. We investigated the influence of specific personality traits (masculine/feminine, spatial and trait anxiety), state anxiety, and sex on spatial cognitive style. One hundred forty-two participants completed a battery of spatial tasks in order to assess their spatial cognitive style and filled in questionnaires about the personality traits under examination. Results showed that state anxiety, spatial anxiety, sex, and masculine/feminine trait of personality are predictors of spatial cognitive style. More specifically, it seems that masculine/feminine trait mediates the relationship between sex and spatial cognitive style. Such findings confirm the importance of personality in determining differences in spatial representation.
Yuan, Yuan; Lin, Jianzhe; Wang, Qi
2016-12-01
Hyperspectral image (HSI) classification is a crucial issue in remote sensing. Accurate classification benefits a large number of applications such as land use analysis and marine resource utilization. But high data correlation brings difficulty to reliable classification, especially for HSI with abundant spectral information. Furthermore, traditional methods often fail to consider the spatial coherency of HSI, which also limits classification performance. To address these inherent obstacles, a novel spectral-spatial classification scheme is proposed in this paper. The proposed method builds on multitask joint sparse representation (MJSR) and a stepwise Markov random field framework, which constitute the two main contributions of this work. First, the MJSR not only reduces spectral redundancy, but also retains the necessary correlation in the spectral domain during classification. Second, the stepwise optimization further exploits the spatial correlation, which significantly enhances classification accuracy and robustness. As far as several universal quality evaluation indexes are concerned, the experimental results on the Indian Pines and Pavia University datasets demonstrate the superiority of our method compared with state-of-the-art competitors.
Optimal Sparse Upstream Sensor Placement for Hydrokinetic Turbines
NASA Astrophysics Data System (ADS)
Cavagnaro, Robert; Strom, Benjamin; Ross, Hannah; Hill, Craig; Polagye, Brian
2016-11-01
Accurate measurement of the flow field incident upon a hydrokinetic turbine is critical for performance evaluation during testing and setting boundary conditions in simulation. Additionally, turbine controllers may leverage real-time flow measurements. Particle image velocimetry (PIV) is capable of rendering a flow field over a wide spatial domain in a controlled, laboratory environment. However, PIV's lack of suitability for natural marine environments, high cost, and intensive post-processing diminish its potential for control applications. Conversely, sensors such as acoustic Doppler velocimeters (ADVs), are designed for field deployment and real-time measurement, but over a small spatial domain. Sparsity-promoting regression analysis such as LASSO is utilized to improve the efficacy of point measurements for real-time applications by determining optimal spatial placement for a small number of ADVs using a training set of PIV velocity fields and turbine data. The study is conducted in a flume (0.8 m2 cross-sectional area, 1 m/s flow) with laboratory-scale axial and cross-flow turbines. Predicted turbine performance utilizing the optimal sparse sensor network and associated regression model is compared to actual performance with corresponding PIV measurements.
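The sparsity-promoting regression idea can be sketched concretely: fit a LASSO model from all candidate measurement locations to the quantity of interest, and keep only the locations with non-zero weights. A minimal ISTA (proximal-gradient) implementation under my own naming (`ista_lasso`, `select_sensors`); the study's actual training on PIV fields and turbine data is not reproduced:

```python
import numpy as np

def ista_lasso(X, y, lam, steps=2000):
    """Solve min_w 0.5*||Xw - y||^2 + lam*||w||_1 by proximal gradient
    descent (ISTA). Columns of X are candidate sensor signals."""
    L = np.linalg.norm(X, 2) ** 2            # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        z = w - X.T @ (X @ w - y) / L        # gradient step
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return w

def select_sensors(X, y, lam=0.1):
    """Indices of candidate locations retained by the sparse fit."""
    return np.flatnonzero(np.abs(ista_lasso(X, y, lam)) > 1e-6)
```

Locations whose weights are driven to zero are dropped; the few survivors are where the ADVs would be placed.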
Expanding the catalog of binary black-hole simulations: aligned-spin configurations
NASA Astrophysics Data System (ADS)
Chu, Tony; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela; SXS Collaboration
2015-04-01
A major goal of numerical relativity is to model the inspiral and merger of binary black holes through sufficiently accurate and long simulations, to enable the successful detection of gravitational waves. However, covering the full parameter space of binary configurations is a computationally daunting task. The SXS Collaboration has made important progress in this direction recently, with a catalog of 174 publicly available binary black-hole simulations [black-holes.org/waveforms]. Nevertheless, the parameter-space coverage remains sparse, even for non-precessing binaries. In this talk, I will describe an addition to the SXS catalog to improve its coverage, consisting of 95 new simulations of aligned-spin binaries with moderate mass ratios and dimensionless spins as high as 0.9. Some applications of these new simulations will also be mentioned.
Mixture of Segmenters with Discriminative Spatial Regularization and Sparse Weight Selection*
Chen, Ting; Rangarajan, Anand; Eisenschenk, Stephan J.
2011-01-01
This paper presents a novel segmentation algorithm which automatically learns the combination of weak segmenters and builds a strong one, based on the assumption that the locally weighted combination varies with respect to both the weak segmenters and the training images. We learn the weighted combination during the training stage using a discriminative spatial regularization which depends on training set labels. A closed-form solution to the cost function is derived for this approach. In the testing stage, a sparse regularization scheme is imposed to avoid overfitting. To the best of our knowledge, such a segmentation technique has never been reported in the literature, and we empirically show that it significantly improves on the performance of the weak segmenters. After showcasing the performance of the algorithm in the context of atlas-based segmentation, we present comparisons to existing weak segmenter combination strategies on a hippocampal data set. PMID:22003748
NASA Astrophysics Data System (ADS)
Vidmar, David; Narayan, Sanjiv M.; Krummen, David E.; Rappel, Wouter-Jan
2016-11-01
We present a general method of utilizing bioelectric recordings from a spatially sparse electrode grid to compute a dynamic vector field describing the underlying propagation of electrical activity. This vector field, termed the wave-front flow field, permits quantitative analysis of the magnitude of rotational activity (vorticity) and focal activity (divergence) at each spatial point. We apply this method to signals recorded during arrhythmias in human atria and ventricles using a multipolar contact catheter and show that the flow fields correlate with corresponding activation maps. Further, regions of elevated vorticity and divergence correspond to sites identified as clinically significant rotors and focal sources where therapeutic intervention can be effective. These flow fields can provide quantitative insights into the dynamics of normal and abnormal conduction in humans and could potentially be used to enhance therapies for cardiac arrhythmias.
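The two quantities named above are standard differential operators of a 2-D vector field: divergence for focal (source-like) activity and the z-component of curl for rotational activity. A minimal grid-based sketch (the function name `flow_metrics` is mine; interpolating the sparse electrode grid onto such a regular grid is the part not shown):

```python
import numpy as np

def flow_metrics(vx, vy, dx=1.0, dy=1.0):
    """Pointwise divergence (focal activity) and z-vorticity (rotational
    activity) of a 2-D vector field sampled on a regular grid.
    Arrays are indexed [y, x]; np.gradient differentiates axis 0 (y)
    then axis 1 (x)."""
    dvx_dy, dvx_dx = np.gradient(vx, dy, dx)
    dvy_dy, dvy_dx = np.gradient(vy, dy, dx)
    divergence = dvx_dx + dvy_dy
    vorticity = dvy_dx - dvx_dy
    return divergence, vorticity
```

A rigid rotation gives constant vorticity and zero divergence; a pure source gives the reverse, matching the rotor/focal-source distinction in the abstract.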
Stalder, Aurelien F; Schmidt, Michaela; Quick, Harald H; Schlamann, Marc; Maderwald, Stefan; Schmitt, Peter; Wang, Qiu; Nadar, Mariappan S; Zenge, Michael O
2015-12-01
To integrate, optimize, and evaluate a three-dimensional (3D) contrast-enhanced sparse MRA technique with iterative reconstruction on a standard clinical MR system. Data were acquired using a highly undersampled Cartesian spiral phyllotaxis sampling pattern and reconstructed directly on the MR system with an iterative SENSE technique. The undersampling, regularization, and number of iterations of the reconstruction were optimized and validated based on phantom experiments and patient data. Sparse MRA of the whole head (field of view: 265 × 232 × 179 mm³) was investigated in 10 patient examinations. High-quality images with 30-fold undersampling, resulting in 0.7 mm isotropic resolution within a 10 s acquisition, were obtained. After optimization of the regularization factor and the number of iterations of the reconstruction, it was possible to reconstruct images with excellent quality within six minutes per 3D volume. Initial results of sparse contrast-enhanced MRA (CEMRA) in 10 patients demonstrated high-quality whole-head first-pass MRA for both the arterial and venous contrast phases. While sparse MRI techniques have not yet reached clinical routine, this study demonstrates the technical feasibility of high-quality sparse CEMRA of the whole head in a clinical setting. Sparse CEMRA has the potential to become a viable alternative where conventional CEMRA is too slow or does not provide sufficient spatial resolution. © 2014 Wiley Periodicals, Inc.
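The flavor of such an iterative reconstruction can be shown in a toy single-channel sketch: recover an image from undersampled Cartesian k-space by alternating a data-consistency gradient step with a sparsity-enforcing soft threshold (ISTA). This is my own simplification for illustration only; the clinical method above is iterative SENSE with coil sensitivities and a different regularizer:

```python
import numpy as np

def cs_recon(kspace, mask, lam=0.01, iters=300):
    """Toy compressed-sensing recon of an (assumed sparse) image from
    undersampled k-space. mask is the boolean sampling pattern."""
    x = np.zeros(mask.shape)
    for _ in range(iters):
        # data-consistency gradient; the masked Fourier operator is a projection,
        # so a unit step size is safe
        grad = np.fft.ifft2(mask * (np.fft.fft2(x) - kspace)).real
        z = x - grad
        x = np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)   # soft threshold
    return x
```

Even with a substantial fraction of k-space discarded, the sparse image is recovered close to truth, which is the core of why 30-fold undersampling with a suitable regularizer is feasible.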
Semi-implicit integration factor methods on sparse grids for high-dimensional systems
NASA Astrophysics Data System (ADS)
Wang, Dongyong; Chen, Weitao; Nie, Qing
2015-07-01
Numerical methods for partial differential equations in high-dimensional spaces are often limited by the curse of dimensionality. Though the sparse grid technique, based on a one-dimensional hierarchical basis through tensor products, is popular for handling challenges such as those associated with spatial discretization, the stability conditions on time step size due to temporal discretization, such as those associated with high-order derivatives in space and stiff reactions, remain. Here, we incorporate the sparse grids with the implicit integration factor method (IIF) that is advantageous in terms of stability conditions for systems containing stiff reactions and diffusions. We combine IIF, in which the reaction is treated implicitly and the diffusion is treated explicitly and exactly, with various sparse grid techniques based on the finite element and finite difference methods and a multi-level combination approach. The overall method is found to be efficient in terms of both storage and computational time for solving a wide range of PDEs in high dimensions. In particular, the IIF with the sparse grid combination technique is flexible and effective in solving systems that may include cross-derivatives and non-constant diffusion coefficients. Extensive numerical simulations in both linear and nonlinear systems in high dimensions, along with applications of diffusive logistic equations and Fokker-Planck equations, demonstrate the accuracy, efficiency, and robustness of the new methods, indicating potential broad applications of the sparse grid-based integration factor method.
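The integration factor idea itself fits in a few lines: propagate the linear (diffusion) part exactly through a matrix exponential and treat the stiff reaction implicitly. A first-order sketch under my own naming (`expm_sym`, `iif1_step`), without the sparse-grid machinery that is the paper's actual contribution:

```python
import numpy as np

def expm_sym(A, t):
    """exp(A*t) for a symmetric matrix A via eigendecomposition,
    giving exact propagation of the linear part."""
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w * t)) @ V.T

def iif1_step(u, E, dt, f, fp_iters=50):
    """One first-order IIF step for u' = A*u + f(u):
    u_{n+1} = E u_n + dt * f(u_{n+1}),  E = exp(A*dt).
    The implicit equation is solved by fixed-point iteration
    (contractive when dt * |f'| < 1)."""
    base = E @ u
    v = base
    for _ in range(fp_iters):
        v = base + dt * f(v)
    return v
```

Because diffusion is handled exactly and the reaction implicitly, the step remains stable for stiff reactions at time steps where a fully explicit scheme would blow up.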
Sparse representation of whole-brain fMRI signals for identification of functional networks.
Lv, Jinglei; Jiang, Xi; Li, Xiang; Zhu, Dajiang; Chen, Hanbo; Zhang, Tuo; Zhang, Shu; Hu, Xintao; Han, Junwei; Huang, Heng; Zhang, Jing; Guo, Lei; Liu, Tianming
2015-02-01
There have been several recent studies that used sparse representation for fMRI signal analysis and activation detection based on the assumption that each voxel's fMRI signal is linearly composed of sparse components. Previous studies have employed sparse coding to model functional networks in various modalities and scales. These prior contributions inspired the exploration of whether/how sparse representation can be used to identify functional networks in a voxel-wise way and on the whole brain scale. This paper presents a novel, alternative methodology of identifying multiple functional networks via sparse representation of whole-brain task-based fMRI signals. Our basic idea is that all fMRI signals within the whole brain of one subject are aggregated into a big data matrix, which is then factorized into an over-complete dictionary basis matrix and a reference weight matrix via an effective online dictionary learning algorithm. Our extensive experimental results have shown that this novel methodology can uncover multiple functional networks that can be well characterized and interpreted in spatial, temporal and frequency domains based on current brain science knowledge. Importantly, these well-characterized functional network components are quite reproducible in different brains. In general, our methods offer a novel, effective and unified solution to multiple fMRI data analysis tasks including activation detection, de-activation detection, and functional network identification. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wason, H.; Herrmann, F. J.; Kumar, R.
2016-12-01
Current efforts towards dense shot (or receiver) sampling and full azimuthal coverage to produce high-resolution images have led to the deployment of multiple source vessels (or streamers) across marine survey areas. Densely sampled marine seismic data acquisition, however, is expensive, and hence necessitates the adoption of sampling schemes that save acquisition costs and time. Compressed sensing is a sampling paradigm that aims to reconstruct a signal--that is sparse or compressible in some transform domain--from relatively fewer measurements than required by the Nyquist sampling criterion. Leveraging ideas from the field of compressed sensing, we show how marine seismic acquisition can be set up as a compressed sensing problem. A step beyond multi-source seismic acquisition is simultaneous source acquisition--an emerging technology that is stimulating both geophysical research and commercial efforts--where multiple source arrays/vessels fire shots simultaneously, resulting in better coverage in marine surveys. Following the design principles of compressed sensing, we propose a pragmatic simultaneous time-jittered, time-compressed marine acquisition scheme in which single or multiple source vessels sail across an ocean-bottom array firing airguns at jittered times and source locations, resulting in better spatial sampling and speeding up acquisition. Our acquisition is low cost since our measurements are subsampled. Simultaneous source acquisition generates data with overlapping shot records, which need to be separated for further processing. We significantly improve the reconstruction quality of conventional seismic data recovered from jittered data and demonstrate successful recovery by sparsity promotion. In contrast to random (sub)sampling, acquisition via jittered (sub)sampling helps in controlling the maximum gap size, which is a practical requirement of wavefield reconstruction with localized sparsifying transforms.
We illustrate our results with simulations of simultaneous time-jittered marine acquisition for 2D and 3D ocean-bottom cable survey.
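The key property of jittered versus purely random subsampling, the bounded maximum gap, is easy to make concrete: pick one sample uniformly inside each window of `factor` consecutive positions, so consecutive picks can never be more than 2·factor − 1 apart. A minimal sketch (the function name `jittered_indices` is mine):

```python
import random

def jittered_indices(n_total, factor, seed=0):
    """Jittered subsampling at rate 1/factor: one sample drawn uniformly
    inside each window of `factor` consecutive positions. Unlike purely
    random subsampling, the gap between neighbors is bounded by
    2*factor - 1, which localized sparsifying transforms rely on."""
    rnd = random.Random(seed)
    return [w + rnd.randrange(min(factor, n_total - w))
            for w in range(0, n_total, factor)]
```

The same construction applies whether the positions index shot times or source locations along the sail line.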
Lampe, David C.
2009-01-01
The U.S. Geological Survey is assessing groundwater availability in the Lake Michigan Basin. As part of the assessment, a variable-density groundwater-flow model is being developed to simulate the effects of groundwater use on water availability throughout the basin. The hydrogeologic framework for the Lake Michigan Basin model was developed by grouping the bedrock geology of the study area into hydrogeologic units on the basis of the functioning of each unit as an aquifer or confining layer within the basin. Available data were evaluated based on the areal extent of coverage within the study area, and procedures were established to characterize areas with sparse data coverage. Top and bottom altitudes for each hydrogeologic unit were interpolated in a geographic information system for input to the model and compared with existing maps of subsurface formations. Fourteen bedrock hydrogeologic units, making up 17 bedrock model layers, were defined, and they range in age from the Jurassic Period red beds of central Michigan to the Cambrian Period Mount Simon Sandstone. Information on groundwater salinity in the Lake Michigan Basin was compiled to create an input dataset for the variable-density groundwater-flow simulation. Data presented in this report are referred to as 'salinity data' and are reported in terms of total dissolved solids. Salinity data were not available for each hydrogeologic unit. Available datasets were assigned to a hydrogeologic unit, entered into a spatial database, and data quality was visually evaluated. A geographic information system was used to interpolate salinity distributions for each hydrogeologic unit with available data. Hydrogeologic units with no available data either were set equal to neighboring units or were vertically interpolated by use of values from units above and below.
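Interpolating a scattered quantity such as total dissolved solids onto a grid is commonly done in a GIS with inverse distance weighting; a minimal sketch of that operation (the function name `idw` is mine, and the report does not state which interpolation method its GIS used):

```python
def idw(known, query, power=2.0):
    """Inverse-distance-weighted estimate at `query` = (x, y) from
    scattered (x, y, value) samples: nearer points get larger weight
    w = d^(-power); an exact hit returns the sample value."""
    num = den = 0.0
    for x, y, v in known:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0.0:
            return v                       # query coincides with a data point
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den
```

The estimate is always a convex combination of the sample values, so it stays within the observed salinity range, a desirable property when data coverage is sparse.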
Monitoring Ocean CO2 Fluxes from Space: GOSAT and OCO-2
NASA Technical Reports Server (NTRS)
Crisp, David
2012-01-01
The ocean is a major component of the global carbon cycle, emitting over 330 billion tons of carbon dioxide (CO2) into the atmosphere each year, or about 10 times that emitted by fossil fuel combustion and all other human activities [1, 2]. The ocean reabsorbs a comparable amount of CO2 each year, along with 25% of the CO2 emitted by these human activities. The nature and geographic distribution of the processes controlling these ocean CO2 fluxes are still poorly constrained by observations. A better understanding of these processes is essential to predict how this important CO2 sink may evolve as the climate changes. While in situ measurements of ocean CO2 fluxes can be very precise, the sampling density is far too sparse to quantify ocean CO2 sources and sinks over much of the globe. One way to improve the spatial resolution, coverage, and sampling frequency is to make observations of the column-averaged CO2 dry air mole fraction, XCO2, from space [4, 5, 6]. Such measurements could provide global coverage at high resolution (< 100 km) on monthly time scales. High precision (< 1 part per million, ppm) is essential to resolve the small, near-surface CO2 variations associated with ocean fluxes and to better constrain CO2 transport over the ocean. The Japanese Greenhouse gases Observing Satellite (GOSAT) and the NASA Orbiting Carbon Observatory (OCO) were the first two space-based sensors designed specifically for this task. GOSAT was successfully launched on January 23, 2009, and has been returning measurements of XCO2 since April 2009. The OCO mission was lost in February 2009, when its launch vehicle malfunctioned and failed to reach orbit. In early 2010, NASA authorized a re-flight of OCO, called OCO-2, which is currently under development.
Kim, Yong-Hwan; Kim, Junghoe; Lee, Jong-Hwan
2012-12-01
This study proposes an iterative dual-regression (DR) approach with sparse prior regularization to better estimate an individual's neuronal activation using the results of an independent component analysis (ICA) method applied to a temporally concatenated group of functional magnetic resonance imaging (fMRI) data (i.e., Tc-GICA method). An ordinary DR approach estimates the spatial patterns (SPs) of neuronal activation and corresponding time courses (TCs) specific to each individual's fMRI data with two steps involving least-squares (LS) solutions. Our proposed approach employs iterative LS solutions to refine both the individual SPs and TCs with an additional a priori assumption of sparseness in the SPs (i.e., minimally overlapping SPs) based on L1-norm minimization. To quantitatively evaluate the performance of this approach, semi-artificial fMRI data were created from resting-state fMRI data with the following considerations: (1) an artificially designed spatial layout of neuronal activation patterns with varying overlap sizes across subjects and (2) a BOLD time series (TS) with variable parameters such as onset time, duration, and maximum BOLD levels. To systematically control the spatial layout variability of neuronal activation patterns across the "subjects" (n=12), the degree of spatial overlap across all subjects was varied from a minimum of 1 voxel (i.e., 0.5-voxel cubic radius) to a maximum of 81 voxels (i.e., 2.5-voxel radius) across the task-related SPs with a size of 100 voxels for both the block-based and event-related task paradigms. In addition, several levels of maximum percentage BOLD intensity (i.e., 0.5, 1.0, 2.0, and 3.0%) were used for each degree of spatial overlap size. From the results, the estimated individual SPs of neuronal activation obtained from the proposed iterative DR approach with a sparse prior showed an enhanced true positive rate and reduced false positive rate compared to the ordinary DR approach.
The estimated TCs of the task-related SPs from our proposed approach showed greater temporal correlation coefficients with a reference hemodynamic response function than those of the ordinary DR approach. Moreover, the efficacy of the proposed DR approach was also successfully demonstrated by the results of real fMRI data acquired from left-/right-hand clenching tasks in both block-based and event-related task paradigms. Copyright © 2012 Elsevier Inc. All rights reserved.
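The structure of the iteration can be sketched in a few lines of numpy (the function names are mine, and real DR operates on 4-D group-ICA outputs rather than this toy matrix setup): alternate the two least-squares steps of dual regression and soft-threshold the spatial maps to impose the L1 sparsity prior.

```python
import numpy as np

def soft(x, t):
    """Soft-threshold operator, the proximal map of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_dual_regression(Y, S0, lam=0.05, iters=10):
    """Iterative dual regression with a sparse prior on the spatial maps.
    Y: (time x voxels) data; S0: (components x voxels) initial maps
    (e.g., from group ICA). Each pass: LS time courses, then LS maps
    shrunk by soft-thresholding (the L1 prior)."""
    S = S0.copy()
    for _ in range(iters):
        T = Y @ np.linalg.pinv(S)             # LS estimate of time courses
        S = soft(np.linalg.pinv(T) @ Y, lam)  # LS maps + sparsity prior
    return T, S
```

On noiseless data with non-overlapping maps the iteration preserves the map supports exactly while the L1 prior shrinks the amplitudes by `lam` per pass, which makes the mechanism easy to verify.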
Irvine, Kathryn M.; Thornton, Jamie; Backus, Vickie M.; Hohmann, Matthew G.; Lehnhoff, Erik A.; Maxwell, Bruce D.; Michels, Kurt; Rew, Lisa
2013-01-01
Commonly in environmental and ecological studies, species distribution data are recorded as presence or absence throughout a spatial domain of interest. Field-based studies typically collect observations by sampling a subset of the spatial domain. We consider the effects of six different adaptive and two non-adaptive sampling designs and choice of three binary models on both predictions to unsampled locations and parameter estimation of the regression coefficients (species–environment relationships). Our simulation study is unique compared to others to date in that we virtually sample a true known spatial distribution of a nonindigenous plant species, Bromus inermis. The census of B. inermis provides a good example of a species distribution that is both sparsely (1.9% prevalence) and patchily distributed. We find that modeling the spatial correlation using a random effect with an intrinsic Gaussian conditionally autoregressive prior distribution was equivalent or superior to Bayesian autologistic regression in terms of predicting to unsampled areas when strip adaptive cluster sampling was used to survey B. inermis. However, inferences about the relationships between B. inermis presence and environmental predictors differed between the two spatial binary models. The strip adaptive cluster designs we investigated provided a significant advantage in terms of Markov chain Monte Carlo convergence when trying to model a sparsely distributed species across a large area. In general, there was little difference in the choice of neighborhood, although the adaptive king neighborhood was preferred when transects were randomly placed throughout the spatial domain.
Network dynamics underlying the formation of sparse, informative representations in the hippocampus.
Karlsson, Mattias P; Frank, Loren M
2008-12-24
During development, activity-dependent processes increase the specificity of neural responses to stimuli, but the role that this type of process plays in adult plasticity is unclear. We examined the dynamics of hippocampal activity as animals learned about new environments to understand how neural selectivity changes with experience. Hippocampal principal neurons fire when the animal is located in a particular subregion of its environment, and in any given environment the hippocampal representation is sparse: less than half of the neurons in areas CA1 and CA3 are active whereas the rest are essentially silent. Here we show that different dynamics govern the evolution of this sparsity in CA1 and upstream area CA3. CA1, but not CA3, produces twice as many spikes in novel compared with familiar environments. This high-rate firing continues during sharp wave ripple events in a subsequent rest period. The overall CA1 population rate declines and the number of active cells decreases as the environment becomes familiar and task performance improves, but the decline in rate is not uniform across neurons. Instead, the activity of cells with initial peak spatial rates above approximately 12 Hz is enhanced, whereas the activity of cells with lower initial peak rates is suppressed. The result of these changes is that the active CA1 population comes to consist of a relatively small group of cells with strong spatial tuning. This process is not evident in CA3, indicating that a region-specific and long timescale process operates in CA1 to create a sparse, spatially informative population of neurons.
Meng, Fan; Yang, Xiaomei; Zhou, Chenghu; Li, Zhi
2017-09-15
Cloud cover is inevitable in optical remote sensing (RS) imagery on account of the influence of observation conditions, which limits the availability of RS data. Therefore, it is of great significance to be able to reconstruct the cloud-contaminated ground information. This paper presents a sparse dictionary learning-based image inpainting method for adaptively recovering, patch by patch, the missing information corrupted by thick clouds. A feature dictionary was learned from exemplars in the cloud-free regions, which was later utilized to infer the missing patches via sparse representation. To maintain the coherence of structures, structure sparsity was brought in to encourage filling-in of missing patches on image structures first. The optimization model of patch inpainting was formulated under an adaptive neighborhood-consistency constraint and solved by a modified orthogonal matching pursuit (OMP) algorithm. In light of these ideas, the thick-cloud removal scheme was designed and applied to images with simulated and true clouds. Comparisons and experiments show that our method not only keeps structures and textures consistent with the surrounding ground information, but also introduces few smoothing and blocking artifacts, making it more suitable for the removal of clouds from high-spatial-resolution RS imagery with salient structures and abundant textured features.
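The sparse-coding step at the heart of such a scheme, OMP over the observed pixels of a patch followed by synthesis of the missing ones, can be sketched in a few lines. A toy numpy sketch with a random dictionary standing in for the learned one (all sizes illustrative; the structure-sparsity and neighborhood-consistency terms of the paper are omitted):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dictionary of 8-pixel "patches" and a patch that is sparse in it.
n_pix, n_atoms, k = 8, 20, 2
D = rng.standard_normal((n_pix, n_atoms))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
code_true = np.zeros(n_atoms)
code_true[[3, 11]] = [1.5, -2.0]
patch = D @ code_true

# A thick "cloud": pixels 2 and 5 are missing.
observed = np.array([True, True, False, True, True, False, True, True])

def omp(D, y, k):
    """Plain orthogonal matching pursuit: greedily pick k atoms, refit by LS."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # best-correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code

# Sparse-code the patch on the observed pixels only, then predict the missing ones.
code = omp(D[observed], patch[observed], k)
filled = D @ code
err = np.max(np.abs(filled[~observed] - patch[~observed]))
```

The key point is that the code is estimated from the cloud-free pixels alone; multiplying the full dictionary by that code then fills the occluded pixels.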
AirMSPI ORACLES Terrain Data V006
Atmospheric Science Data Center
2018-05-05
... ER-2 Instrument: AirMSPI Spatial Coverage: United States, California, Georgia, Africa, Southern Africa, ... 10/25 meters per pixel Temporal Coverage: 07/28/2016 - 10/06/2016 Temporal Resolution: ...
Observability of global rivers with future SWOT observations
NASA Astrophysics Data System (ADS)
Fisher, Colby; Pan, Ming; Wood, Eric
2017-04-01
The Surface Water and Ocean Topography (SWOT) mission is designed to provide global observations of water surface elevation and slope from which river discharge can be estimated using a data assimilation system. This mission will provide increased spatial and temporal coverage compared to current altimeters, with an expected accuracy for water level elevations of 10 cm on rivers greater than 100 m wide. Within the 21-day repeat cycle, a river reach will be observed 2-4 times on average. Due to the relationship between the basin orientation and the orbit, these observations are not evenly distributed in time, which will impact the derived discharge values. There is, then, a need for a better understanding of how the mission will observe global river basins. In this study, we investigate how SWOT will observe global river basins and how the temporal and spatial sampling impacts the discharge estimated from assimilation. SWOT observations can be assimilated using the Inverse Streamflow Routing (ISR) model of Pan and Wood [2013] with a fixed-interval Kalman smoother. Previous work has shown that the ISR assimilation method can be used to reproduce the spatial and temporal dynamics of discharge within many global basins; however, this performance was strongly impacted by the spatial and temporal availability of discharge observations. In this study, we apply the ISR method to 32 global basins with different geometries and crossing patterns for the future orbit, assimilating theoretical SWOT-retrieved "gauges". Results show that the model performance varies significantly across basins and is driven by the orientation, flow distance, and travel time in each. Based on these properties, we quantify the "observability" of each basin and relate this to the performance of the assimilation. Applying this metric globally to a large variety of basins we can gain a better understanding of the impact that SWOT observations may have across basin scales.
By determining the availability of SWOT observations in this manner, hydrologic data assimilation approaches like ISR can be optimized to provide useful discharge estimates in sparsely gauged regions where spatially and temporally consistent discharge records are most valuable. Pan, M. and Wood, E. F. (2013). Inverse streamflow routing. Hydrol. Earth Syst. Sci., 17(11), 4577-4588.
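The assimilation machinery referenced here, a fixed-interval Kalman smoother applied to sparse overpass observations, can be illustrated on a scalar toy state. A hedged numpy sketch (a random-walk "discharge" observed on a few days of a 21-day cycle, with a Rauch-Tung-Striebel backward pass), not the actual ISR model:

```python
import numpy as np

rng = np.random.default_rng(2)

# 1-D random-walk "discharge" state, observed only on sparse overpass days.
n, q, r = 21, 1.0, 0.25
x_true = np.cumsum(rng.standard_normal(n) * np.sqrt(q)) + 100.0
obs_days = [0, 7, 14, 20]                 # ~2-4 looks per 21-day repeat cycle
z = {t: x_true[t] + rng.standard_normal() * np.sqrt(r) for t in obs_days}

# Forward Kalman filter.
xf = np.zeros(n); Pf = np.zeros(n)
x, P = 100.0, 10.0
for t in range(n):
    if t > 0:
        P += q                            # predict (random-walk dynamics)
    if t in z:                            # update only on overpass days
        K = P / (P + r)
        x += K * (z[t] - x)
        P *= (1 - K)
    xf[t], Pf[t] = x, P

# Backward Rauch-Tung-Striebel pass -> fixed-interval smoothed estimate.
xs = xf.copy(); Ps = Pf.copy()
for t in range(n - 2, -1, -1):
    C = Pf[t] / (Pf[t] + q)               # smoother gain
    xs[t] = xf[t] + C * (xs[t + 1] - xf[t])
    Ps[t] = Pf[t] + C**2 * (Ps[t + 1] - (Pf[t] + q))

rmse_smooth = np.sqrt(np.mean((xs - x_true) ** 2))
```

Because the backward pass propagates information from later overpasses to earlier unobserved days, the smoothed variance never exceeds the filtered one, which is what makes fixed-interval smoothing attractive for sparse, unevenly timed observations.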
Efficient space-time sampling with pixel-wise coded exposure for high-speed imaging.
Liu, Dengyu; Gu, Jinwei; Hitomi, Yasunobu; Gupta, Mohit; Mitsunaga, Tomoo; Nayar, Shree K
2014-02-01
Cameras face a fundamental trade-off between spatial and temporal resolution. Digital still cameras can capture images with high spatial resolution, but most high-speed video cameras have relatively low spatial resolution. It is hard to overcome this trade-off without incurring a significant increase in hardware costs. In this paper, we propose techniques for sampling, representing, and reconstructing the space-time volume to overcome this trade-off. Our approach has two important distinctions compared to previous works: 1) We achieve sparse representation of videos by learning an overcomplete dictionary on video patches, and 2) we adhere to practical hardware constraints on sampling schemes imposed by architectures of current image sensors, which means that our sampling function can be implemented on CMOS image sensors with modified control units in the future. We evaluate components of our approach, sampling function and sparse representation, by comparing them to several existing approaches. We also implement a prototype imaging system with pixel-wise coded exposure control using a liquid crystal on silicon device. System characteristics such as field of view and modulation transfer function are evaluated for our imaging system. Both simulations and experiments on a wide range of scenes show that our method can effectively reconstruct a video from a single coded image while maintaining high spatial resolution.
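The pixel-wise coded exposure idea, each pixel integrating over its own short window so that one readout encodes a whole space-time volume, can be simulated directly. A toy numpy sketch of the sampling function only (dictionary-based reconstruction is omitted; all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Space-time volume: S frames of an H x W scene (a bright dot moving right).
S, H, W = 8, 16, 16
video = np.zeros((S, H, W))
for t in range(S):
    video[t, 8, 4 + t] = 1.0

# Pixel-wise coded exposure: every pixel opens exactly once, for `bump`
# consecutive frames, at a random start time -> a single coded image per S frames.
bump = 2
start = rng.integers(0, S - bump + 1, size=(H, W))
mask = np.zeros((S, H, W))
for t in range(S):
    mask[t] = (start <= t) & (t < start + bump)

coded = (mask * video).sum(axis=0)        # what the sensor actually reads out
per_pixel_exposure = mask.sum(axis=0)     # every pixel integrates the same total time
```

The single-bump-per-pixel constraint mirrors the hardware restriction discussed in the abstract: each pixel's exposure is one contiguous on-interval, which a CMOS sensor with modified control units could realize.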
JESTR: Jupiter Exploration Science in the Time Regime
NASA Technical Reports Server (NTRS)
Noll, Keith S.; Simon-Miller, A. A.; Wong, M. H.; Choi, D. S.
2012-01-01
Solar system objects are inherently time-varying with changes that occur on timescales ranging from seconds to years. For all planets other than the Earth, temporal coverage of atmospheric phenomena is limited and sparse. Many important atmospheric phenomena, especially those related to atmospheric dynamics, can be studied in only very limited ways with current data. JESTR is a mission concept that would remedy this gap in our exploration of the solar system by near-continuous imaging and spectral monitoring of Jupiter over a multi-year mission lifetime.
Validation of NH3 satellite observations by ground-based FTIR measurements
NASA Astrophysics Data System (ADS)
Dammers, Enrico; Palm, Mathias; Van Damme, Martin; Shephard, Mark; Cady-Pereira, Karen; Capps, Shannon; Clarisse, Lieven; Coheur, Pierre; Erisman, Jan Willem
2016-04-01
Global emissions of reactive nitrogen have been increasing to an unprecedented level due to human activities and are estimated to be a factor four larger than pre-industrial levels. Concentration levels of NOx are declining, but ammonia (NH3) levels are increasing around the globe. While NH3 at its current concentrations poses significant threats to the environment and human health, relatively little is known about the total budget and global distribution. Surface observations are sparse and mainly available for north-western Europe, the United States and China and are limited by the high costs and poor temporal and spatial resolution. Since the lifetime of atmospheric NH3 is short, on the order of hours to a few days, due to efficient deposition and fast conversion to particulate matter, the existing surface measurements are not sufficient to estimate global concentrations. Advanced space-based IR-sounders such as the Tropospheric Emission Spectrometer (TES), the Infrared Atmospheric Sounding Interferometer (IASI), and the Cross-track Infrared Sounder (CrIS) enable global observations of atmospheric NH3 that help overcome some of the limitations of surface observations. However, the satellite NH3 retrievals are complex requiring extensive validation. Presently there have only been a few dedicated satellite NH3 validation campaigns performed with limited spatial, vertical or temporal coverage. Recently a retrieval methodology was developed for ground-based Fourier Transform Infrared Spectroscopy (FTIR) instruments to obtain vertical concentration profiles of NH3. Here we show the applicability of retrieved columns from nine globally distributed stations with a range of NH3 pollution levels to validate satellite NH3 products.
Broday, David M
2017-10-02
The evaluation of the effects of air pollution on public health and human well-being requires reliable data. Standard air quality monitoring stations provide accurate measurements of airborne pollutant levels, but, due to their sparse distribution, they cannot accurately capture the spatial variability of air pollutant concentrations within cities. Dedicated in-depth field campaigns have dense spatial coverage of the measurements but are held for relatively short time periods. Hence, their representativeness is limited. Moreover, the oftentimes integrated measurements represent time-averaged records. Recent advances in communication and sensor technologies enable the deployment of dense grids of Wireless Distributed Environmental Sensor Networks for air quality monitoring, yet their capability to capture urban-scale spatiotemporal pollutant patterns has not been thoroughly examined to date. Here, we summarize our studies on the practicalities of using data streams from sensor nodes for air quality measurement and the required methods to tune the results to different stakeholders and applications. We summarize the results from eight cities across Europe, five sensor technologies (three stationary, one of which was also tested while moving, and two personal sensor platforms), and eight ambient pollutants. Overall, few sensors showed an exceptional and consistent performance, which can shed light on the fine spatiotemporal urban variability of pollutant concentrations. Stationary sensor nodes were more reliable than personal nodes. In general, the sensor measurements tend to suffer from the interference of various environmental factors and require frequent calibrations. This calls for the development of suitable field calibration procedures, and several such in situ field calibrations are presented.
NASA Astrophysics Data System (ADS)
Zhang, X.; Liang, S.; Wang, G.
2015-12-01
Incident solar radiation (ISR) over the Earth's surface plays an important role in determining the Earth's climate and environment. Generally, ISR can be obtained from direct measurements, remotely sensed data, or reanalysis and general circulation model (GCM) data. Each type of product has advantages and limitations: the surface direct measurements provide accurate but sparse spatial coverage, whereas other global products may have large uncertainties. Ground measurements have normally been used for validation and occasionally calibration, but transforming their "true values" spatially to improve the satellite products is still a new and challenging topic. In this study, an improved thin-plate smoothing spline approach is presented to locally "calibrate" the Global LAnd Surface Satellite (GLASS) ISR product using the reconstructed ISR data from surface meteorological measurements. The influence of surface elevation on ISR estimation was also considered in the proposed method. The point-based surface reconstructed ISR was used as the response variable, and the GLASS ISR product and the surface elevation data at the corresponding locations as explanatory variables to train the thin-plate spline model. We evaluated the performance of the approach using the cross-validation method at both daily and monthly time scales over China. We also evaluated estimated ISR based on the thin-plate spline method using independent ground measurements at 10 sites from the Coordinated Enhanced Observation Network (CEON). These validation results indicated that the thin-plate smoothing spline method can be effectively used for calibrating satellite-derived ISR products using ground measurements to achieve better accuracy.
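The calibration idea, fitting a thin-plate spline to station residuals (ground minus satellite) and using it to correct the gridded product, can be sketched with a small pure-numpy TPS solver. All data here are synthetic; the smooth sinusoidal bias and the sizes are invented for illustration (the elevation covariate of the paper is omitted):

```python
import numpy as np

rng = np.random.default_rng(3)

def tps_fit(pts, vals, lam=1e-6):
    """Fit a 2-D thin-plate spline f(x,y) = sum_i w_i*phi(|p - p_i|) + a0 + a1*x + a2*y."""
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d > 0, d**2 * np.log(d), 0.0)   # TPS kernel, phi(0) = 0
    Q = np.hstack([np.ones((n, 1)), pts])            # affine part
    A = np.block([[K + lam * np.eye(n), Q], [Q.T, np.zeros((3, 3))]])
    b = np.concatenate([vals, np.zeros(3)])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]

def tps_eval(pts, w, a, x):
    d = np.linalg.norm(x[:, None, :] - pts[None, :, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d > 0, d**2 * np.log(d), 0.0)
    return K @ w + a[0] + x @ a[1:]

# Synthetic "stations": the satellite product carries a smooth spatial bias.
stations = rng.uniform(0, 1, size=(40, 2))
bias = 20.0 * np.sin(np.pi * stations[:, 0])         # W/m^2, unknown to the satellite
ground = 200.0 + bias
satellite = np.full(40, 200.0)                       # biased GLASS-like values

w, a = tps_fit(stations, ground - satellite)         # spline of station residuals
grid = rng.uniform(0, 1, size=(200, 2))
calibrated = 200.0 + tps_eval(stations, w, a, grid)  # satellite + spline correction
truth = 200.0 + 20.0 * np.sin(np.pi * grid[:, 0])

rmse = np.sqrt(np.mean((calibrated - truth) ** 2))
rmse_uncal = np.sqrt(np.mean((200.0 - truth) ** 2))
```

In practice one would use a library solver (e.g. a radial-basis interpolator with a thin-plate kernel) rather than this hand-rolled system, but the structure, a kernel part plus an affine part with orthogonality side conditions, is the same.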
NASA Astrophysics Data System (ADS)
Odman, M. T.; Hu, Y.; Russell, A.; Chai, T.; Lee, P.; Shankar, U.; Boylan, J.
2012-12-01
Regulatory air quality modeling, such as State Implementation Plan (SIP) modeling, requires that model performance meet recommended criteria in the base-year simulations using period-specific, estimated emissions. The goal of the performance evaluation is to assure that the base-year modeling accurately captures the observed chemical reality of the lower troposphere. Any significant deficiencies found in the performance evaluation must be corrected before any base-case (with typical emissions) and future-year modeling is conducted. Corrections are usually made to model inputs such as emission-rate estimates or meteorology and/or to the air quality model itself, in modules that describe specific processes. Use of ground-level measurements that follow approved protocols is recommended for evaluating model performance. However, ground-level monitoring networks are spatially sparse, especially for particulate matter. Satellite retrievals of atmospheric chemical properties such as aerosol optical depth (AOD) provide spatial coverage that can compensate for the sparseness of ground-level measurements. Satellite retrievals can also help diagnose potential model or data problems in the upper troposphere. It is possible to achieve good model performance near the ground, but have, for example, erroneous sources or sinks in the upper troposphere that may result in misleading and unrealistic responses to emission reductions. Despite these advantages, satellite retrievals are rarely used in model performance evaluation, especially for regulatory modeling purposes, due to the high uncertainty in retrievals associated with various sources of contamination, for example by clouds. In this study, 2007 was selected as the base year for SIP modeling in the southeastern U.S.
Performance of the Community Multiscale Air Quality (CMAQ) model, at a 12-km horizontal resolution, for this annual simulation is evaluated using both recommended ground-level measurements and non-traditional satellite retrievals. Evaluation results are assessed against recommended criteria and peer studies in the literature. Further analysis is conducted, based upon these assessments, to discover likely errors in model inputs and potential deficiencies in the model itself. Correlations as well as differences in input errors and model deficiencies revealed by ground-level measurements versus satellite observations are discussed. Additionally, sensitivity analyses are employed to investigate errors in emission-rate estimates using either ground-level measurements or satellite retrievals, and the results are compared against each other considering observational uncertainties. Recommendations are made for how to effectively utilize satellite retrievals in regulatory air quality modeling.
NASA Astrophysics Data System (ADS)
Pisek, Jan; Chen, Jing; Kobayashi, Hideki; Rautiainen, Miina; Schaepman, Michael; Karnieli, Arnon; Sprintsin, Michael; Ryu, Youngryel; Nikopensius, Maris; Raabe, Kairi
2016-04-01
Ground vegetation (understory) provides an essential contribution to the whole-stand reflectance signal in many boreal, sub-boreal, and temperate forests. Accurate knowledge about forest understory reflectance is urgently needed in various forest reflectance modelling efforts. However, systematic collections of understory reflectance data covering different sites and ecosystems are largely missing. Measurement of understory reflectance is a real challenge because of the extremely high variability of irradiance at the forest floor, the weak signal in some parts of the spectrum, spectral separability issues between over- and understory, and the variable nature of the understory itself. Understory can consist of several sub-layers (regenerated trees, shrubs, grasses or dwarf shrubs, mosses, lichens, litter, bare soil) and has a spatially and temporally variable species composition and ground coverage. Additional challenges are introduced by the patchiness of ground vegetation, ground surface roughness, and understory-overstory relations. Due to this variability, remote sensing might be the only means to provide consistent data at spatially relevant scales. In this presentation, we report on retrieving seasonal courses of understory Normalized Difference Vegetation Index (NDVI) from multi-angular MODIS BRDF/Albedo data. We compared satellite-based seasonal courses of understory NDVI against an extended collection of different types of forest sites with available in-situ understory reflectance measurements. These sites are distributed along a wide latitudinal gradient in the Northern hemisphere: sparse and dense black spruce forests in Alaska and Canada, a northern European boreal forest in Finland, hemiboreal needleleaf and deciduous stands in Estonia, a mixed temperate forest in Switzerland, a cool temperate deciduous broadleaf forest in Korea, and a semi-arid pine plantation in Israel. Our results indicated that the retrieval method performs well, particularly over open forests of different types.
We also demonstrated the limitations of the method for closed canopies, where the understory signal retrieval is much attenuated. The retrieval of the understory signal can be used, e.g., to improve estimates of leaf area index (LAI) and fAPAR in sparsely vegetated areas, and also to study the phenology of the understory layer. Our results are particularly useful for producing Northern hemisphere maps of the seasonal dynamics of forests, allowing understory variability, a main contributor to uncertainty in spring emergence and fall senescence, to be retrieved separately. The inclusion of understory variability in ecological models will ultimately improve prediction and forecast horizons of vegetation dynamics.
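For reference, the NDVI retrieved above is a simple band ratio; a minimal numpy example on a toy 2 × 2 reflectance raster (values invented):

```python
import numpy as np

# NDVI = (NIR - red) / (NIR + red); dense green vegetation gives high values,
# bare soil or litter gives values near zero.
red = np.array([[0.05, 0.10],
                [0.40, 0.30]])
nir = np.array([[0.45, 0.50],
                [0.45, 0.32]])
ndvi = (nir - red) / (nir + red)
```

For positive reflectances the ratio is bounded in [-1, 1], which is what makes it a convenient, comparable metric for seasonal courses of understory greenness.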
NASA Astrophysics Data System (ADS)
Stock, M.; Lapierre, J. L.; Zhu, Y.
2017-12-01
Recently, the Geostationary Lightning Mapper (GLM) began collecting optical data to locate lightning events and flashes over the North and South American continents. This new instrument promises uniformly high detection efficiency (DE) over its entire field of view, with location accuracy on the order of 10 km. In comparison, the Earth Networks Total Lightning Network (ENTLN) has less uniform coverage, with higher DE in regions with dense sensor coverage, and lower DE with sparse sensor coverage. ENTLN also offers better location accuracy, lightning classification, and peak current estimation for its lightning locations. It is desirable to produce an integrated dataset, combining the strong points of GLM and ENTLN. The easiest way to achieve this is to simply match located lightning processes from each system using time and distance criteria. This simple method will be limited in scope by the uneven coverage of the ground-based network. Instead, we will use GLM group locations to look up the electric field change data recorded by ground sensors near each GLM group, vastly increasing the coverage of the ground network. The ground waveforms can then be used for: improvements to differentiation between glint and lightning for GLM, higher-precision lightning location, current estimation, and lightning process classification. Presented is an initial implementation of this type of integration using preliminary GLM data, and waveforms from ENTLN.
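The "simple method" mentioned, matching located lightning processes from the two systems by time and distance criteria, might look like the following toy sketch. The thresholds and catalogs are invented, and a flat-Earth small-angle distance approximation (~111 km per degree) is used for brevity:

```python
import numpy as np

# Toy catalogs: (time in s, lat, lon) for GLM groups and ENTLN pulses.
glm = np.array([[0.000, 35.00, -95.00],
                [0.350, 35.10, -95.20],
                [1.200, 36.00, -94.50]])
entln = np.array([[0.004, 35.02, -95.01],
                  [0.352, 35.09, -95.21],
                  [5.000, 40.00, -90.00]])

def match(a, b, dt=0.05, dkm=20.0):
    """Greedy time-and-distance matching of two event catalogs."""
    pairs = []
    for i, (t1, la1, lo1) in enumerate(a):
        for j, (t2, la2, lo2) in enumerate(b):
            # Small-angle approximation: ~111 km per degree of latitude.
            km = 111.0 * np.hypot(la1 - la2, (lo1 - lo2) * np.cos(np.radians(la1)))
            if abs(t1 - t2) <= dt and km <= dkm:
                pairs.append((i, j))
                break                      # take the first acceptable partner
    return pairs

pairs = match(glm, entln)
```

Here the first two GLM groups find ENTLN partners within 50 ms and 20 km, while the third goes unmatched, which is exactly the failure mode the abstract's waveform-lookup approach is designed to avoid in sparsely covered regions.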
Monitoring Marine Weather Systems Using Quikscat and TRMM Data
NASA Technical Reports Server (NTRS)
Liu, W.; Tang, W.; Datta, A.; Hsu, C.
1999-01-01
We neither understand nor can predict marine storms, particularly tropical cyclones, sufficiently well, because ground-based measurements are sparse and operational numerical weather prediction models have neither sufficient spatial resolution nor accurate parameterization of the physics.
Guo, Yi; Lebel, R Marc; Zhu, Yinghua; Lingala, Sajan Goud; Shiroishi, Mark S; Law, Meng; Nayak, Krishna
2016-05-01
To clinically evaluate a highly accelerated T1-weighted dynamic contrast-enhanced (DCE) MRI technique that provides high spatial resolution and whole-brain coverage via undersampling and constrained reconstruction with multiple sparsity constraints. Conventional (rate-2 SENSE) and experimental DCE-MRI (rate-30) scans were performed 20 minutes apart in 15 brain tumor patients. The conventional clinical DCE-MRI had voxel dimensions 0.9 × 1.3 × 7.0 mm³, FOV 22 × 22 × 4.2 cm³, and the experimental DCE-MRI had voxel dimensions 0.9 × 0.9 × 1.9 mm³, and broader coverage 22 × 22 × 19 cm³. Temporal resolution was 5 s for both protocols. Time-resolved images and blood-brain barrier permeability maps were qualitatively evaluated by two radiologists. The experimental DCE-MRI scans showed no loss of qualitative information in any of the cases, while achieving substantially higher spatial resolution and whole-brain spatial coverage. Average qualitative scores (from 0 to 3) were 2.1 for the experimental scans and 1.1 for the conventional clinical scans. The proposed DCE-MRI approach provides clinically superior image quality with higher spatial resolution and coverage than currently available approaches. These advantages may allow comprehensive permeability mapping in the brain, which is especially valuable in the setting of large lesions or multiple lesions spread throughout the brain.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karagiannis, Georgios, E-mail: georgios.karagiannis@pnnl.gov; Lin, Guang, E-mail: guang.lin@pnnl.gov
2014-02-15
Generalized polynomial chaos (gPC) expansions allow us to represent the solution of a stochastic system using a series of polynomial chaos basis functions. The number of gPC terms increases dramatically as the dimension of the random input variables increases. When the number of the gPC terms is larger than that of the available samples, a scenario that often occurs when the corresponding deterministic solver is computationally expensive, evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solutions, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points, via (1) the Bayesian model average (BMA) or (2) the median probability model, and their construction as spatial functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions, while the latter provides a sparse representation of the stochastic solutions by evaluating the expansion on a subset of dominating gPC bases. Moreover, the proposed methods quantify the importance of the gPC bases in the probabilistic sense through inclusion probabilities. We design a Markov chain Monte Carlo (MCMC) sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed methods are suitable for, but not restricted to, problems whose stochastic solutions are sparse in the stochastic space with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the accuracy and performance of the proposed methods and make comparisons with other approaches on solving elliptic SPDEs with 1-, 14- and 40-random dimensions.
[Estimation of desert vegetation coverage based on multi-source remote sensing data].
Wan, Hong-Mei; Li, Xia; Dong, Dao-Rui
2012-12-01
Taking the lower reaches of Tarim River in Xinjiang of Northwest China as study area and based on the ground investigation and the multi-source remote sensing data of different resolutions, the estimation models for desert vegetation coverage were built, with the precisions of different estimation methods and models compared. The results showed that with the increasing spatial resolution of remote sensing data, the precisions of the estimation models increased. The estimation precision of the models based on the high, middle-high, and middle-low resolution remote sensing data was 89.5%, 87.0%, and 84.56%, respectively, and the precisions of the remote sensing models were higher than that of vegetation index method. This study revealed the change patterns of the estimation precision of desert vegetation coverage based on different spatial resolution remote sensing data, and realized the quantitative conversion of the parameters and scales among the high, middle, and low spatial resolution remote sensing data of desert vegetation coverage, which would provide direct evidence for establishing and implementing comprehensive remote sensing monitoring scheme for the ecological restoration in the study area.
Accelerated High-Dimensional MR Imaging with Sparse Sampling Using Low-Rank Tensors
He, Jingfei; Liu, Qiegen; Christodoulou, Anthony G.; Ma, Chao; Lam, Fan
2017-01-01
High-dimensional MR imaging often requires long data acquisition time, thereby limiting its practical applications. This paper presents a low-rank tensor based method for accelerated high-dimensional MR imaging using sparse sampling. This method represents high-dimensional images as low-rank tensors (or partially separable functions) and uses this mathematical structure for sparse sampling of the data space and for image reconstruction from highly undersampled data. More specifically, the proposed method acquires two datasets with complementary sampling patterns, one for subspace estimation and the other for image reconstruction; image reconstruction from highly undersampled data is accomplished by fitting the measured data with a sparsity constraint on the core tensor and a group sparsity constraint on the spatial coefficients jointly using the alternating direction method of multipliers. The usefulness of the proposed method is demonstrated in MRI applications; it may also have applications beyond MRI. PMID:27093543
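A toy sketch of the partial-separability idea underlying the method above (not the authors' ADMM reconstruction with sparsity constraints): a dynamic series whose Casorati matrix is low rank can be recovered voxel-by-voxel from a few time samples once the temporal subspace is known. All sizes and names here are illustrative.

```python
import numpy as np

# Simulate a "partially separable" dynamic series: every voxel's time
# course is a combination of r temporal basis functions (here r = 2).
rng = np.random.default_rng(1)
n_vox, n_t, r = 64, 40, 2
U = rng.standard_normal((n_vox, r))        # spatial coefficients
t = np.linspace(0, 1, n_t)
V = np.vstack([np.sin(2 * np.pi * t),
               np.cos(2 * np.pi * t)])     # temporal subspace
C = U @ V                                  # Casorati matrix, rank <= 2

# "Subspace estimation" from fully sampled navigator-like data:
# a truncated SVD exposes the temporal subspace.
_, s, Vt = np.linalg.svd(C, full_matrices=False)
V_est = Vt[:r]                             # estimated temporal basis

# Reconstruct one voxel's full time course from 10 of its 40 samples
keep = rng.choice(n_t, size=10, replace=False)
voxel = C[0]
coef, *_ = np.linalg.lstsq(V_est[:, keep].T, voxel[keep], rcond=None)
recon = coef @ V_est
```

In the actual method the subspace comes from a dedicated dataset with a complementary sampling pattern, and the spatial coefficients carry group-sparsity constraints; here exact recovery follows simply because the voxel lies in the estimated subspace.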
Color Sparse Representations for Image Processing: Review, Models, and Prospects.
Barthélemy, Quentin; Larue, Anthony; Mars, Jérôme I
2015-11-01
Sparse representations have been extended to deal with color images composed of three channels. A review of dictionary-learning-based sparse representations for color images is made here, detailing the differences between the models and comparing their results on real and simulated data. These models are considered in a unifying framework that is based on the degrees of freedom of the linear filtering/transformation of the color channels. Moreover, this framework allows it to be shown that the scalar quaternionic linear model is equivalent to constrained matrix-based color filtering, which highlights the filtering implicitly applied through this model. Based on this reformulation, a new color filtering model is introduced, using unconstrained filters. In this model, spatial morphologies of color images are encoded by atoms, and colors are encoded by color filters. Color variability is no longer captured by increasing the dictionary size but by the color filters, which gives an efficient color representation.
Monitoring Temporal and Spatial Variations in Rainfall across Wadi Ar-Rumah, Saudi Arabia
NASA Astrophysics Data System (ADS)
Alharbi, T.; Ahmed, M.
2015-12-01
Across the Kingdom of Saudi Arabia (KSA), the fresh water resources are limited only to those found in aquifer systems. Those aquifers were believed to be recharged during the previous wet climatic period but are still receiving modest local recharge in interleaving dry periods such as those prevailing at present. Quantifying temporal and spatial variabilities in rainfall patterns, magnitudes, durations, and frequencies is of prime importance when it comes to sustainable management of such aquifer systems. In this study, an integrated approach, using remote sensing and field data, was used to assess the past, the current, and the projected spatial and temporal variations in rainfall over one of the major watersheds in KSA, Wadi Ar-Rumah. This watershed was selected given its large areal extent and population intensity. Rainfall data were extracted from (1) the Climate Prediction Center's (CPC) Merged Analysis of Precipitation (CMAP; spatial coverage: global; spatial resolution: 2.5° × 2.5°; temporal coverage: January 1979 to April 2015; temporal resolution: monthly), and (2) the Tropical Rainfall Measuring Mission (TRMM; spatial coverage: 50°N to 50°S; spatial resolution: 0.25° × 0.25°; temporal coverage: January 1998 to March 2015; temporal resolution: 3 hours), and calibrated against rainfall measurements extracted from rain gauges. Trends in rainfall patterns were examined over four main investigation periods: period I (01/1979 to 12/1985), period II (01/1986 to 12/1992), period III (01/1993 to 12/2002), and period IV (01/2003 to 12/2014). Our findings indicate: (1) a significant increase (+14.19 mm/yr) in rainfall rates was observed during period I, (2) a significant decrease in rainfall rates was observed during periods II (-5.80 mm/yr), III (-9.38 mm/yr), and IV (-2.46 mm/yr), and (3) the observed variations in rainfall rates are largely related to the temporal variations in the northerlies (also called northwesterlies) and the monsoonal wind regimes.
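The per-period trend figures quoted above (e.g. +14.19 mm/yr in period I) are ordinary least-squares slopes of rainfall against time. A minimal sketch, with synthetic numbers standing in for the CMAP/TRMM series:

```python
def rainfall_trend(years, totals):
    # Least-squares slope (mm/yr) of annual rainfall totals over one
    # investigation period: cov(year, total) / var(year).
    n = len(years)
    my = sum(years) / n
    mt = sum(totals) / n
    num = sum((y - my) * (t - mt) for y, t in zip(years, totals))
    den = sum((y - my) ** 2 for y in years)
    return num / den

# Hypothetical period-I-like series rising at 14.19 mm/yr
years = list(range(1979, 1986))
totals = [100.0 + 14.19 * (y - 1979) for y in years]
slope = rainfall_trend(years, totals)      # mm/yr
```

Significance testing of such slopes (whether a trend is distinguishable from noise) requires an additional step, e.g. a t-test on the slope estimate, which this sketch omits.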
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pinski, Peter; Riplinger, Christoph; Valeev, Edward F.; Neese, Frank
2015-07-21
In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining a few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and the domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear-scaling domain-based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals.
While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in quantum chemistry and beyond.
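The elementary sparse-map operations named above (inversion, chaining, intersection) can be sketched in a few lines of pure Python; this is an illustrative stand-in for the idea, not the authors' code library.

```python
# A "sparse map" here is a dict from indices of one index set to the
# subset of a second index set they couple to, e.g. atom -> {basis fns}.

def invert(smap):
    # Reverse the direction of a sparse map: j -> {all i mapping to j}
    inv = {}
    for i, targets in smap.items():
        for j in targets:
            inv.setdefault(j, set()).add(i)
    return inv

def chain(map_ab, map_bc):
    # Compose a->b with b->c to obtain the sparse map a->c
    out = {}
    for a, bs in map_ab.items():
        cs = set()
        for b in bs:
            cs |= map_bc.get(b, set())
        if cs:
            out[a] = cs
    return out

def intersect(m1, m2):
    # Keep only index pairs present in both maps
    return {k: m1[k] & m2[k] for k in m1.keys() & m2.keys() if m1[k] & m2[k]}
```

For example, chaining a hypothetical shell-to-atom map with an atom-to-basis-function map yields the shell-to-basis-function map directly, which is the kind of composition the abstract describes for building linear-scaling integral transformations.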
Pinski, Peter; Riplinger, Christoph; Valeev, Edward F; Neese, Frank
2015-07-21
In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Möller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. 
While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in quantum chemistry and beyond.
Yu, Kai; Yin, Ming; Luo, Ji-An; Wang, Yingguan; Bao, Ming; Hu, Yu-Hen; Wang, Zhi
2016-05-23
A compressive sensing joint sparse representation direction of arrival estimation (CSJSR-DoA) approach is proposed for wireless sensor array networks (WSAN). By exploiting the joint spatial and spectral correlations of acoustic sensor array data, the CSJSR-DoA approach provides reliable DoA estimation using randomly-sampled acoustic sensor data. Since random sampling is performed at remote sensor arrays, less data need to be transmitted over lossy wireless channels to the fusion center (FC), and the expensive source coding operation at sensor nodes can be avoided. To investigate the spatial sparsity, an upper bound of the coherence of incoming sensor signals is derived assuming a linear sensor array configuration. This bound provides a theoretical constraint on the angular separation of acoustic sources to ensure the spatial sparsity of the received acoustic sensor array signals. The Cramér-Rao bound of the CSJSR-DoA estimator that quantifies the theoretical DoA estimation performance is also derived. The potential performance of the CSJSR-DoA approach is validated using both simulations and field experiments on a prototype WSAN platform. Compared to existing compressive sensing-based DoA estimation methods, the CSJSR-DoA approach shows significant performance improvement.
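The spatial-sparsity condition above rests on the coherence of array steering vectors: two sources are separable when their array responses are nearly orthogonal. A hedged sketch for a uniform linear array (the derived bound itself is in the paper; this merely computes the coherence it constrains, with illustrative array parameters):

```python
import numpy as np

def ula_coherence(theta1, theta2, n_sensors=8, spacing=0.5):
    # Normalized inner product of two far-field steering vectors for a
    # uniform linear array; angles in radians, spacing in wavelengths.
    k = np.arange(n_sensors)
    a1 = np.exp(2j * np.pi * spacing * k * np.sin(theta1))
    a2 = np.exp(2j * np.pi * spacing * k * np.sin(theta2))
    return abs(a1.conj() @ a2) / n_sensors
```

Identical angles give coherence 1 (indistinguishable sources), while well-separated angles drive the coherence toward 0, which is the regime where sparse DoA recovery is reliable.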
NASA Astrophysics Data System (ADS)
Wan, Xiaoqing; Zhao, Chunhui; Wang, Yanchun; Liu, Wu
2017-11-01
This paper proposes a novel classification paradigm for hyperspectral image (HSI) using feature-level fusion and deep learning-based methodologies. Operation is carried out in three main steps. First, during a pre-processing stage, wave atoms are introduced into bilateral filter to smooth HSI, and this strategy can effectively attenuate noise and restore texture information. Meanwhile, high quality spectral-spatial features can be extracted from HSI by taking geometric closeness and photometric similarity among pixels into consideration simultaneously. Second, higher order statistics techniques are firstly introduced into hyperspectral data classification to characterize the phase correlations of spectral curves. Third, multifractal spectrum features are extracted to characterize the singularities and self-similarities of spectra shapes. To this end, a feature-level fusion is applied to the extracted spectral-spatial features along with higher order statistics and multifractal spectrum features. Finally, stacked sparse autoencoder is utilized to learn more abstract and invariant high-level features from the multiple feature sets, and then random forest classifier is employed to perform supervised fine-tuning and classification. Experimental results on two real hyperspectral data sets demonstrate that the proposed method outperforms some traditional alternatives.
NASA Astrophysics Data System (ADS)
Lin, Y.; Bajcsy, P.; Valocchi, A. J.; Kim, C.; Wang, J.
2007-12-01
Natural systems are complex, thus extensive data are needed for their characterization. However, data acquisition is expensive; consequently we develop models using sparse, uncertain information. When all uncertainties in the system are considered, the number of alternative conceptual models is large. Traditionally, the development of a conceptual model has relied on subjective professional judgment. Good judgment is based on experience in coordinating and understanding auxiliary information which is correlated to the model but difficult to quantify in the mathematical model. For example, groundwater recharge and discharge (R&D) processes are known to relate to multiple information sources such as soil type, river and lake location, irrigation patterns and land use. Although hydrologists have been trying to understand and model the interaction between each of these information sources and R&D processes, it is extremely difficult to quantify their correlations using a universal approach due to the complexity of the processes, the spatiotemporal distribution and uncertainty. There is currently no single method capable of estimating R&D rates and patterns for all practical applications. Chamberlin (1890) recommended use of "multiple working hypotheses" (alternative conceptual models) for rapid advancement in understanding of applied and theoretical problems. Therefore, cross analyzing R&D rates and patterns from various estimation methods and related field information will likely be superior to using only a single estimation method. We have developed the Pattern Recognition Utility (PRU) to help GIS users recognize spatial patterns from noisy 2D images. This GIS plug-in utility has been applied to help hydrogeologists establish alternative R&D conceptual models in a more efficient way than conventional methods. The PRU uses numerical methods and image processing algorithms to estimate and visualize shallow R&D patterns and rates.
It can provide a fast initial estimate prior to planning labor intensive and time consuming field R&D measurements. Furthermore, the Spatial Pattern 2 Learn (SP2L) was developed to cross analyze results from the PRU with ancillary field information, such as land coverage, soil type, topographic maps and previous estimates. The learning process of SP2L cross examines each initially recognized R&D pattern with the ancillary spatial dataset, and then calculates a quantifiable reliability index for each R&D map using a supervised machine learning technique called decision tree. This JAVA based software package is capable of generating alternative R&D maps if the user decides to apply certain conditions recognized by the learning process. The reliability indices from SP2L will improve the traditionally subjective approach to initiating conceptual models by providing objectively quantifiable conceptual bases for further probabilistic and uncertainty analyses. Both the PRU and SP2L have been designed to be user-friendly and universal utilities for pattern recognition and learning to improve model predictions from sparse measurements by computer-assisted integration of spatially dense geospatial image data and machine learning of model dependencies.
Jones, Benjamin A; Stanton, Timothy K; Colosi, John A; Gauss, Roger C; Fialkowski, Joseph M; Michael Jech, J
2017-06-01
For horizontal-looking sonar systems operating at mid-frequencies (1-10 kHz), scattering by fish with resonant gas-filled swimbladders can dominate seafloor and surface reverberation at long ranges (i.e., distances much greater than the water depth). This source of scattering, which can be difficult to distinguish from other sources of scattering in the water column or at the boundaries, can add spatio-temporal variability to an already complex acoustic record. Sparsely distributed, spatially compact fish aggregations were measured in the Gulf of Maine using a long-range broadband sonar with continuous spectral coverage from 1.5 to 5 kHz. Observed echoes that are at least 15 decibels above background levels in the horizontal-looking sonar data are classified spectrally by their resonance features as due to swimbladder-bearing fish. Contemporaneous multi-frequency echosounder measurements (18, 38, and 120 kHz) and net samples are used in conjunction with physics-based acoustic models to validate this approach. Furthermore, the fish aggregations are statistically characterized in the long-range data by highly non-Rayleigh distributions of the echo magnitudes. These distributions are accurately predicted by a computationally efficient, physics-based model. The model accounts for beam-pattern and waveguide effects as well as the scattering response of aggregations of fish.
Cloud Statistics and Discrimination in the Polar Regions
NASA Astrophysics Data System (ADS)
Chan, M.; Comiso, J. C.
2012-12-01
Despite their important role in the climate system, cloud cover and its statistics are poorly known, especially in the polar regions, where clouds are difficult to discriminate from snow covered surfaces. The advent of the A-train, which included the Aqua/MODIS, CALIPSO/CALIOP and CloudSat/CPR sensors, has provided an opportunity to improve our ability to accurately characterize the cloud cover. MODIS provides global coverage at a relatively good temporal and spatial resolution, while CALIOP and CPR provide limited nadir sampling but accurate characterization of the vertical structure and phase of the cloud cover. Over the polar regions, cloud detection from passive sensors like MODIS is challenging because of the presence of cold and highly reflective surfaces such as snow, sea-ice, glaciers, and ice sheets, which have surface signatures similar to those of clouds. On the other hand, active sensors such as CALIOP and CPR are not only very sensitive to the presence of clouds but can also provide information about their microphysical characteristics. However, these nadir-looking sensors have sparse spatial coverage and their global data can have spatial data gaps of up to 100 km. We developed a polar cloud detection system for MODIS that is trained using collocated data from CALIOP and CPR. In particular, we employ a machine learning system that reads the radiative profile observed by MODIS and determines whether the field of view is cloudy or clear. Results have shown that the improved cloud detection scheme performs better than typical cloud mask algorithms on a validation data set not used for training. A one-year data set was generated and results indicate that daytime cloud detection accuracies improved from 80.1% to 92.6% (over sea-ice) and 71.2% to 87.4% (over ice-sheet) with CALIOP data used as the baseline. Significant improvements are also observed during nighttime, where cloud detection accuracies increase by 19.8% (over sea-ice) and 11.6% (over ice-sheet).
The immediate impact of the new algorithm is that it can minimize large biases of MODIS-derived cloud amount over the polar regions and thus yield more realistic, higher-quality global cloud statistics. In particular, our results show that cloud fraction in the Arctic is typically 81.2% during daytime and 84.0% during nighttime. This is significantly higher than the 71.8% and 58.5%, respectively, derived from the standard MODIS cloud product.
Pisharady, Pramod Kumar; Sotiropoulos, Stamatios N; Duarte-Carvajalino, Julio M; Sapiro, Guillermo; Lenglet, Christophe
2018-02-15
We present a sparse Bayesian unmixing algorithm BusineX: Bayesian Unmixing for Sparse Inference-based Estimation of Fiber Crossings (X), for estimation of white matter fiber parameters from compressed (under-sampled) diffusion MRI (dMRI) data. BusineX combines compressive sensing with linear unmixing and introduces sparsity to the previously proposed multiresolution data fusion algorithm RubiX, resulting in a method for improved reconstruction, especially from data with lower number of diffusion gradients. We formulate the estimation of fiber parameters as a sparse signal recovery problem and propose a linear unmixing framework with sparse Bayesian learning for the recovery of sparse signals, the fiber orientations and volume fractions. The data is modeled using a parametric spherical deconvolution approach and represented using a dictionary created with the exponential decay components along different possible diffusion directions. Volume fractions of fibers along these directions define the dictionary weights. The proposed sparse inference, which is based on the dictionary representation, considers the sparsity of fiber populations and exploits the spatial redundancy in data representation, thereby facilitating inference from under-sampled q-space. The algorithm improves parameter estimation from dMRI through data-dependent local learning of hyperparameters, at each voxel and for each possible fiber orientation, that moderate the strength of priors governing the parameter variances. Experimental results on synthetic and in-vivo data show improved accuracy with a lower uncertainty in fiber parameter estimates. BusineX resolves a higher number of second and third fiber crossings. For under-sampled data, the algorithm is also shown to produce more reliable estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
Analyses of global sea surface temperature 1856-1991
NASA Astrophysics Data System (ADS)
Kaplan, Alexey; Cane, Mark A.; Kushnir, Yochanan; Clement, Amy C.; Blumenthal, M. Benno; Rajagopalan, Balaji
1998-08-01
Global analyses of monthly sea surface temperature (SST) anomalies from 1856 to 1991 are produced using three statistically based methods: optimal smoothing (OS), the Kalman filter (KF) and optimal interpolation (OI). Each of these is accompanied by estimates of the error covariance of the analyzed fields. The spatial covariance function these methods require is estimated from the available data; the time-marching model is a first-order autoregressive model again estimated from data. The data input for the analyses are monthly anomalies from the United Kingdom Meteorological Office historical sea surface temperature data set (MOHSST5) [Parker et al., 1994] of the Global Ocean Surface Temperature Atlas (GOSTA) [Bottomley et al., 1990]. These analyses are compared with each other, with GOSTA, and with an analysis generated by projection (P) onto a set of empirical orthogonal functions (as in Smith et al. [1996]). In theory, the quality of the analyses should rank in the order OS, KF, OI, P, and GOSTA. It is found that the first four give comparable results in the data-rich periods (1951-1991), but at times when data is sparse the first three differ significantly from P and GOSTA. At these times the latter two often have extreme and fluctuating values, prima facie evidence of error. The statistical schemes are also verified against data not used in any of the analyses (proxy records derived from corals and air temperature records from coastal and island stations). We also present evidence that the analysis error estimates are indeed indicative of the quality of the products. At most times the OS and KF products are close to the OI product, but at times of especially poor coverage their use of information from other times is advantageous. The methods appear to reconstruct the major features of the global SST field from very sparse data.
Comparison with other indications of the El Niño-Southern Oscillation cycle show that the analyses provide usable information on interannual variability as far back as the 1860s.
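The OI analysis step used above can be written in a few lines. A generic sketch of the standard update (the study's covariances are estimated from the MOHSST5 data, whereas the matrices below are made up for illustration):

```python
import numpy as np

def oi_update(xb, B, H, y, R):
    # Optimal interpolation: analysis = background + gain * innovation,
    # with gain K = B H^T (H B H^T + R)^-1.
    S = H @ B @ H.T + R                     # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)          # OI gain
    xa = xb + K @ (y - H @ xb)              # analysis state
    Pa = (np.eye(len(xb)) - K @ H) @ B      # analysis error covariance
    return xa, Pa

# Toy use: three grid points, one near-perfect observation at point 0
B = np.array([[1.0, 0.5, 0.25],
              [0.5, 1.0, 0.5],
              [0.25, 0.5, 1.0]])            # background error covariance
H = np.array([[1.0, 0.0, 0.0]])             # observe grid point 0 only
xa, Pa = oi_update(np.zeros(3), B, H, np.array([1.0]), np.array([[1e-6]]))
```

The observed point is pulled to the observation, neighboring points are adjusted in proportion to their background correlation, and the returned `Pa` is the error estimate that accompanies each analyzed field in the abstract above.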
The Joker: A Custom Monte Carlo Sampler for Binary-star and Exoplanet Radial Velocity Data
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.; Hogg, David W.; Foreman-Mackey, Daniel; Rix, Hans-Walter
2017-03-01
Given sparse or low-quality radial velocity measurements of a star, there are often many qualitatively different stellar or exoplanet companion orbit models that are consistent with the data. The consequent multimodality of the likelihood function leads to extremely challenging search, optimization, and Markov chain Monte Carlo (MCMC) posterior sampling over the orbital parameters. Here we create a custom Monte Carlo sampler for sparse or noisy radial velocity measurements of two-body systems that can produce posterior samples for orbital parameters even when the likelihood function is poorly behaved. The six standard orbital parameters for a binary system can be split into four nonlinear parameters (period, eccentricity, argument of pericenter, phase) and two linear parameters (velocity amplitude, barycenter velocity). We capitalize on this by building a sampling method in which we densely sample the prior probability density function (pdf) in the nonlinear parameters and perform rejection sampling using a likelihood function marginalized over the linear parameters. With sparse or uninformative data, the sampling obtained by this rejection sampling is generally multimodal and dense. With informative data, the sampling becomes effectively unimodal but too sparse: in these cases we follow the rejection sampling with standard MCMC. The method produces correct samplings in orbital parameters for data that include as few as three epochs. The Joker can therefore be used to produce proper samplings of multimodal pdfs, which are still informative and can be used in hierarchical (population) modeling. We give some examples that show how the posterior pdf depends sensitively on the number and time coverage of the observations and their uncertainties.
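The nonlinear/linear parameter split can be sketched on a toy one-planet problem: marginalize the amplitude analytically, densely sample the period prior, and reject. This is a much-simplified stand-in for The Joker (one nonlinear parameter instead of four, one linear amplitude instead of two; all numbers synthetic):

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.array([0.0, 1.3, 2.1, 4.7, 6.2])          # five sparse epochs
P_true, K_true = 3.0, 2.0
y = K_true * np.sin(2 * np.pi * t / P_true) + 0.05 * rng.standard_normal(5)
sigma2, tau2 = 0.05 ** 2, 10.0 ** 2               # noise var, prior var on K

def log_marginal(P):
    # Gaussian likelihood marginalized analytically over the linear
    # amplitude K ~ N(0, tau2): y ~ N(0, tau2 * m m^T + sigma2 * I).
    m = np.sin(2 * np.pi * t / P)
    C = tau2 * np.outer(m, m) + sigma2 * np.eye(len(t))
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (logdet + y @ np.linalg.solve(C, y))

# Densely sample the prior over the nonlinear parameter, then reject
P_prior = rng.uniform(1.0, 10.0, 20000)
logL = np.array([log_marginal(P) for P in P_prior])
keep = np.log(rng.uniform(size=P_prior.size)) < logL - logL.max()
posterior_P = P_prior[keep]
```

With these five sparse epochs the accepted samples cluster around the true period but may retain alias modes, mirroring the multimodality discussed above; The Joker then switches to standard MCMC only when the surviving sampling is effectively unimodal but too sparse.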
Geographic patterns and dynamics of Alaskan climate interpolated from a sparse station record
Fleming, Michael D.; Chapin, F. Stuart; Cramer, W.; Hufford, Gary L.; Serreze, Mark C.
2000-01-01
Data from a sparse network of climate stations in Alaska were interpolated to provide 1-km resolution maps of mean monthly temperature and precipitation, variables that are required at high spatial resolution for input into regional models of ecological processes and resource management. The interpolation model is based on thin-plate smoothing splines, using the spatial data along with a digital elevation model to incorporate local topography. The model provides maps that are consistent with regional climatology and with patterns recognized by experienced weather forecasters. The broad patterns of Alaskan climate are well represented and include latitudinal and altitudinal trends in temperature and precipitation and gradients in continentality. Variations within these broad patterns reflect both the weakening and reduction in frequency of low-pressure centres in their eastward movement across southern Alaska during the summer, and the shift of the storm tracks into central and northern Alaska in late summer. Not surprisingly, apparent artifacts of the interpolated climate occur primarily in regions with few or no stations. The interpolation model did not accurately represent low-level winter temperature inversions that occur within large valleys and basins. Along with well-recognized climate patterns, the model captures local topographic effects that would not be depicted using standard interpolation techniques. This suggests that similar procedures could be used to generate high-resolution maps for other high-latitude regions with a sparse density of data.
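A thin-plate spline of the kind used above can be sketched directly: pure 2-D interpolation with the r² log r kernel, without the elevation covariate or smoothing penalty of the actual model, and with synthetic station coordinates.

```python
import numpy as np

def tps_phi(r):
    # Thin-plate radial basis r^2 log r, with the removable singularity
    # at r = 0 set to 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r > 0, r ** 2 * np.log(r), 0.0)

def tps_fit(pts, vals):
    # Solve the standard TPS system [[K, P], [P^T, 0]] [w; a] = [v; 0],
    # where P = [1, x, y] carries the affine part.
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    P = np.hstack([np.ones((n, 1)), pts])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = tps_phi(d)
    A[:n, n:] = P
    A[n:, :n] = P.T
    sol = np.linalg.solve(A, np.concatenate([vals, np.zeros(3)]))
    return sol[:n], sol[n:]

def tps_eval(pts, w, a, x):
    d = np.linalg.norm(x[:, None, :] - pts[None, :, :], axis=-1)
    return tps_phi(d) @ w + a[0] + x @ a[1:]

# Ten hypothetical "stations" measuring an affine climate surface
rng = np.random.default_rng(3)
pts = rng.uniform(0, 1, (10, 2))
vals = 2 + 3 * pts[:, 0] + 4 * pts[:, 1]
w, a = tps_fit(pts, vals)
xq = rng.uniform(0, 1, (5, 2))              # query locations
pred = tps_eval(pts, w, a, xq)
```

A TPS reproduces affine surfaces exactly and interpolates the stations; the operational model additionally smooths the fit and adds elevation as a covariate, which is what lets it capture the topographic effects described above.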
Joint fMRI analysis and subject clustering using sparse dictionary learning
NASA Astrophysics Data System (ADS)
Kim, Seung-Jun; Dontaraju, Krishna K.
2017-08-01
Multi-subject fMRI data analysis methods based on sparse dictionary learning are proposed. In addition to identifying the component spatial maps by exploiting the sparsity of the maps, clusters of the subjects are learned by postulating that the fMRI volumes admit a subspace clustering structure. Furthermore, in order to tune the associated hyper-parameters systematically, a cross-validation strategy is developed based on entry-wise sampling of the fMRI dataset. Efficient algorithms for solving the proposed constrained dictionary learning formulations are developed. Numerical tests performed on synthetic fMRI data show promising results and provide insights into the proposed technique.
Fast sparsely synchronized brain rhythms in a scale-free neural network
NASA Astrophysics Data System (ADS)
Kim, Sang-Yoon; Lim, Woochang
2015-08-01
We consider a directed version of the Barabási-Albert scale-free network model with symmetric preferential attachment with the same in- and out-degrees and study the emergence of sparsely synchronized rhythms for a fixed attachment degree in an inhibitory population of fast-spiking Izhikevich interneurons. Fast sparsely synchronized rhythms with stochastic and intermittent neuronal discharges are found to appear for large values of J (synaptic inhibition strength) and D (noise intensity). For an intensive study we fix J at a sufficiently large value and investigate the population states by increasing D . For small D , full synchronization with the same population-rhythm frequency fp and mean firing rate (MFR) fi of individual neurons occurs, while for large D partial synchronization with fp>
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Yi; Zhu, Yinghua; Lingala, Sajan Goud
Purpose: To clinically evaluate a highly accelerated T1-weighted dynamic contrast-enhanced (DCE) MRI technique that provides high spatial resolution and whole-brain coverage via undersampling and constrained reconstruction with multiple sparsity constraints. Methods: Conventional (rate-2 SENSE) and experimental DCE-MRI (rate-30) scans were performed 20 minutes apart in 15 brain tumor patients. The conventional clinical DCE-MRI had voxel dimensions 0.9 × 1.3 × 7.0 mm³ and FOV 22 × 22 × 4.2 cm³, and the experimental DCE-MRI had voxel dimensions 0.9 × 0.9 × 1.9 mm³ and broader coverage 22 × 22 × 19 cm³. Temporal resolution was 5 s for both protocols. Time-resolved images and blood–brain barrier permeability maps were qualitatively evaluated by two radiologists. Results: The experimental DCE-MRI scans showed no loss of qualitative information in any of the cases, while achieving substantially higher spatial resolution and whole-brain spatial coverage. Average qualitative scores (from 0 to 3) were 2.1 for the experimental scans and 1.1 for the conventional clinical scans. Conclusions: The proposed DCE-MRI approach provides clinically superior image quality with higher spatial resolution and coverage than currently available approaches. These advantages may allow comprehensive permeability mapping in the brain, which is especially valuable in the setting of large lesions or multiple lesions spread throughout the brain.
Validation of satellite-based rainfall in Kalahari
NASA Astrophysics Data System (ADS)
Lekula, Moiteela; Lubczynski, Maciek W.; Shemang, Elisha M.; Verhoef, Wouter
2018-06-01
Water resources management in arid and semi-arid areas is hampered by insufficient rainfall data, typically obtained from sparsely distributed rain gauges. Satellite-based rainfall estimates (SREs) are alternative sources of such data in these areas. In this study, daily rainfall estimates from FEWS-RFE∼11 km, TRMM-3B42∼27 km, CMORPH∼27 km and CMORPH∼8 km were evaluated against nine daily rain gauge records in the Central Kalahari Basin (CKB) over a five-year period, 01/01/2001-31/12/2005. The aims were to evaluate the daily rainfall detection capabilities of the four SRE algorithms, analyze the spatio-temporal variability of rainfall in the CKB and perform bias correction of the four SREs. Evaluation methods included scatter plot analysis, descriptive statistics, categorical statistics and bias decomposition. The spatio-temporal variability of rainfall was assessed using the SREs' mean annual rainfall, standard deviation, coefficient of variation and spatial correlation functions. Bias correction of the four SREs was conducted using a Time-Varying Space-Fixed bias-correction scheme. The results underlined the importance of validating daily SREs, as they had different rainfall detection capabilities in the CKB. The FEWS-RFE∼11 km performed best, providing better results of descriptive and categorical statistics than the other three SREs, although bias decomposition showed that all SREs underestimated rainfall. The analysis showed that the most reliable SRE performance indicators were the frequency of "miss" rainfall events and the "miss-bias", as they directly indicated SREs' sensitivity and bias of rainfall detection, respectively. The Time-Varying Space-Fixed (TVSF) bias-correction scheme improved some error measures but resulted in a reduction of the spatial correlation distance, thus increasing the already high spatial rainfall variability of all four SREs.
This study highlighted SREs as a valuable source of daily rainfall data offering good spatio-temporal coverage, especially suitable for areas with limited rain gauges such as the CKB, but it also emphasized the SREs' drawbacks, creating avenues for follow-up research.
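The categorical statistics and bias decomposition described above can be sketched as follows; the 1 mm/day rain threshold and the exact score definitions are illustrative assumptions, not values taken from the study.

```python
import numpy as np

def categorical_scores(sre, gauge, threshold=1.0):
    """Contingency-table scores for daily rainfall detection.

    sre, gauge : arrays of daily rainfall (mm); `threshold` defines a rain day.
    Returns probability of detection (POD), false alarm ratio (FAR),
    frequency bias, and the total "miss bias" (rain the satellite missed).
    """
    sre_rain = sre >= threshold
    obs_rain = gauge >= threshold
    hits = np.sum(sre_rain & obs_rain)
    misses = np.sum(~sre_rain & obs_rain)
    false_alarms = np.sum(sre_rain & ~obs_rain)
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    freq_bias = (hits + false_alarms) / (hits + misses)
    # "miss bias": rainfall that fell on days the satellite reported no rain
    miss_bias = np.sum(gauge[~sre_rain & obs_rain])
    return pod, far, freq_bias, miss_bias
```

Scores like these, computed per gauge location, directly indicate a product's detection sensitivity, which is why the study found the miss-event frequency and miss bias to be the most telling indicators.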
On the uncertainties associated with using gridded rainfall data as a proxy for observed
NASA Astrophysics Data System (ADS)
Tozer, C. R.; Kiem, A. S.; Verdon-Kidd, D. C.
2011-09-01
Gridded rainfall datasets are used in many hydrological and climatological studies, in Australia and elsewhere, including for hydroclimatic forecasting, climate attribution studies and climate model performance assessments. The attraction of the spatial coverage provided by gridded data is clear, particularly in Australia, where the rainfall gauge network is sparse in both space and time. However, the question that must be asked is whether it is suitable to use gridded data as a proxy for observed point data, given that gridded data is inherently "smoothed" and may not necessarily capture the temporal and spatial variability of Australian rainfall that leads to hydroclimatic extremes (i.e. droughts, floods). This study investigates this question through a statistical analysis of three monthly gridded Australian rainfall datasets - the Bureau of Meteorology (BOM) dataset, the Australian Water Availability Project (AWAP) and the SILO dataset. To demonstrate the hydrological implications of using gridded data as a proxy for gauged data, a rainfall-runoff model is applied to one catchment in South Australia (SA), first using gridded data as the source of rainfall input and then gauged rainfall data. The results indicate a markedly different runoff response associated with each of the different sources of rainfall data. It should be noted that this study does not seek to identify which gridded dataset is the "best" for Australia, as each gridded data source has its pros and cons, as does gauged or point data. Rather, the intention is to quantify differences between various gridded data sources and how they compare with gauged data, so that these differences can be considered and accounted for in studies that utilise these gridded datasets.
Ultimately, if key decisions are going to be based on the outputs of models that use gridded data, an estimate (or at least an understanding) of the uncertainties arising from the assumptions made in developing gridded data, and of how that gridded data compares with reality, is needed.
NASA Astrophysics Data System (ADS)
Ivory, S.; Russell, J. L.; Cohen, A. S.
2010-12-01
Threats to tropical biodiversity with serious and costly implications for both ecosystems and human well-being in Africa have led the IPCC to classify this region as vulnerable to negative impacts from climate change. Yet little is known about how vegetation communities respond to altered patterns of rainfall and evaporation. Paleoclimate records within the tropics can help answer questions about how vegetation response to climate forcing changes over time. However, the sparse spatial coverage of records and uncertainty surrounding the climate-vegetation relationship complicate these insights. Understanding the climatic mechanisms involved in landscape change at all temporal scales creates the need for quantitative constraints on the modern relationship between climatic controls, hydrology, and vegetation. Though modern observational data can help elucidate this relationship, low resolution and complicated rainfall/vegetation associations make them less than ideal. Satellite data of vegetation productivity (NDVI) with continuous high-resolution spatial coverage provide a robust and elegant tool for identifying the link between global and regional controls and vegetation. We use regression analyses of variables either previously proposed or potentially important in regulating Afro-tropical vegetation (insolation, outgoing long-wave radiation, geopotential height, Southern Oscillation Index, Indian Ocean Dipole, Indian Monsoon precipitation, sea-level pressure, surface wind, sea-surface temperature) on continuous, time-varying spatial fields of 8 km NDVI for sub-Saharan Africa. These analyses show the importance of global atmospheric controls in producing regional intra-annual and inter-annual vegetation variability. Dipole patterns emerge, primarily correlated with both the seasonal and inter-annual extent of the Intertropical Convergence Zone (ITCZ).
Inter-annual ITCZ variability drives patterns in African vegetation resulting from the effect of insolation anomalies and ENSO events on atmospheric circulation rather than sea surface temperatures or teleconnections to mid/high latitudes. Global controls on tropical atmospheric circulation regulate vegetation throughout sub-Saharan Africa on many time scales through alteration of dry season length and moisture convergence, rather than precipitation amount.
Kalwij, Jesse M; Robertson, Mark P; Ronk, Argo; Zobel, Martin; Pärtel, Meelis
2014-01-01
Much ecological research relies on existing multispecies distribution datasets. Such datasets, however, can vary considerably in quality, extent, resolution or taxonomic coverage. We provide a framework for a spatially explicit evaluation of geographical representation within large-scale species distribution datasets, using the comparison of an occurrence atlas with a range atlas dataset as a working example. Specifically, we compared occurrence maps for 3773 taxa from the widely used Atlas Florae Europaeae (AFE) with digitised range maps for 2049 taxa of the lesser-known Atlas of North European Vascular Plants. We calculated the level of agreement at a 50-km spatial resolution using average latitudinal and longitudinal species range, and area of occupancy. Agreement in species distribution was calculated and mapped using the Jaccard similarity index and a reduced major axis (RMA) regression analysis of species richness between the entire atlases (5221 taxa in total) and between co-occurring species (601 taxa). We found no difference in distribution ranges or in the area-of-occupancy frequency distribution, indicating that the atlases overlapped sufficiently for a valid comparison. The similarity index map showed high levels of agreement for central, western, and northern Europe. The RMA regression confirmed that geographical representation of AFE was low in areas with a sparse data recording history (e.g., Russia, Belarus and Ukraine). For co-occurring species in south-eastern Europe, however, the Atlas of North European Vascular Plants showed markedly higher richness estimates. Geographical representation of atlas data can be much more heterogeneous than often assumed. Level of agreement between datasets can be used to evaluate geographical representation within datasets. Merging atlases into a single dataset is worthwhile in spite of methodological differences, and helps to fill gaps in our knowledge of species distribution ranges.
Species distribution dataset mergers, such as the one exemplified here, can serve as a baseline towards comprehensive species distribution datasets.
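The Jaccard similarity used to map agreement between the two atlases reduces to an intersection-over-union of occupied 50-km grid cells. A minimal sketch (representing each species' distribution as a set of cell identifiers is an assumption about the data layout, not the authors' implementation):

```python
def jaccard(cells_a, cells_b):
    """Jaccard similarity between two sets of occupied grid cells.

    cells_a, cells_b : iterables of cell identifiers (e.g. 50-km grid IDs)
    Returns |A ∩ B| / |A ∪ B|, or 0.0 when both sets are empty.
    """
    a, b = set(cells_a), set(cells_b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0
```

Computed per species (or per cell across species), this index yields the agreement map described above: values near 1 where the two atlases report the same occupied cells, near 0 where they disagree.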
NASA Astrophysics Data System (ADS)
Carmona, J.; Mendoza, A.; Lozano, D.; Gupta, P.; Mejia, G.; Rios, J.; Hernández, I.
2017-12-01
Estimating ground-level PM2.5 from satellite-derived Aerosol Optical Depth (AOD) through statistical models is a promising method to evaluate the spatial and temporal distribution of PM2.5 in regions with no or few ground-based observations, e.g. Latin America. Although PM concentrations are most accurately measured using ground-based instrumentation, the spatial coverage is too sparse to determine local and regional variations in PM. AOD satellite data offer the opportunity to overcome the spatial limitation of ground-based measurements. However, estimating PM surface concentrations from AOD satellite data is challenging, since multiple factors can affect the relationship between the total column of AOD and the surface concentration of PM. In this study, an Assembled Multiple Linear Regression Model (MLR) and a Neural Network Model (NN) were developed to estimate the relationship between AOD and ground concentrations of PM2.5 within the Monterrey Metropolitan Area (MMA). The MMA is located in northeast Mexico and is the third most populated urban area in the country. Episodes of high PM pollution levels are frequent throughout the year in the MMA. Daily averages of meteorological and air quality parameters were determined from data recorded at 5 monitoring sites of the MMA air quality monitoring network. Daily AOD data were retrieved from the MODIS sensor onboard the Aqua satellite. Overall, the best performance of the models was obtained using AOD at 550 nm from the MYD04_3k product in combination with ground-based temperature, relative humidity, wind speed and wind direction data. The MLR yielded a correlation coefficient of R = 0.6 and a bias of -6%. The NN showed better performance than the MLR, with a correlation coefficient of R = 0.75 and a bias of -4%. The results confirmed that satellite-derived AOD in combination with meteorological fields can be used to estimate local PM2.5 distributions.
Exposure studies rely on detailed characterization of air quality, either from sparsely located routine ambient monitors or from central monitoring sites that may lack spatial representativeness. Alternatively, some studies use models of various complexities to characterize local...
NASA Astrophysics Data System (ADS)
Pohle, Ina; Glendell, Miriam; Stutter, Marc I.; Helliwell, Rachel C.
2017-04-01
An understanding of catchment response to climate and land use change at a regional scale is necessary for the assessment of mitigation and adaptation options addressing diffuse nutrient pollution. It is well documented that the physicochemical properties of a river ecosystem respond to change in a non-linear fashion. This is particularly important when threshold water concentrations, relevant to national and EU legislation, are exceeded. Large-scale (regional) model assessments required for regulatory purposes must represent the key processes and mechanisms that are more readily understood in catchments with water quantity and water quality data monitored at high spatial and temporal resolution. While daily discharge data are available for most catchments in Scotland, nitrate and phosphorus are mostly available on a monthly basis only, as typified by regulatory monitoring. However, high-resolution (hourly to daily) water quantity and water quality data exist for a limited number of research catchments. To successfully implement adaptation measures across Scotland, an upscaling from data-rich to data-sparse catchments is required. In addition, the widespread availability of spatial datasets affecting hydrological and biogeochemical responses (e.g. soils, topography/geomorphology, land use, vegetation etc.) provides an opportunity to transfer predictions between data-rich and data-sparse areas by linking processes and responses to catchment attributes. Here, we develop a framework of catchment typologies as a prerequisite for transferring information from data-rich to data-sparse catchments by focusing on how hydrological catchment similarity can be used as an indicator of grouped behaviours in water quality response. As indicators of hydrological catchment similarity we use flow indices derived from observed discharge data across Scotland as well as hydrological model parameters.
For the latter, we calibrated the lumped rainfall-runoff model TUWModel using multiple objective functions. The relationships between indicators of hydrological catchment similarity, physical catchment characteristics and nitrate and phosphorus concentrations in rivers are then investigated using multivariate statistics. This understanding of the relationship between catchment characteristics, hydrological processes and water quality will allow us to implement more efficient regulatory water quality monitoring strategies, to improve existing water quality models and to model mitigation and adaptation scenarios to global change in data-sparse catchments.
Ray, J.; Lee, J.; Yadav, V.; ...
2015-04-29
Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, which are then modeled using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method, and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties to the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches.
It also reduces the overall computational cost by a factor of 2. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
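The core idea can be illustrated with a greedy matching-pursuit sketch, far simpler than StOMP itself (which selects many atoms per stage): atoms are added one at a time, a least-squares fit is done on the active support, and non-negativity is enforced by clipping the support coefficients. The dictionary `A` stands in for the combined wavelet/transport operator and is purely illustrative.

```python
import numpy as np

def omp_nonneg(A, y, n_iter=10, tol=1e-6):
    """Greedy sparse reconstruction of y ~ A @ x with x >= 0.

    A : (m, n) dictionary matrix; y : (m,) observation vector.
    Selects the atom most positively correlated with the residual,
    refits on the support, and clips coefficients to be non-negative.
    """
    m, n = A.shape
    x = np.zeros(n)
    support = []
    residual = y.copy()
    for _ in range(n_iter):
        corr = A.T @ residual
        j = int(np.argmax(corr))
        if corr[j] <= tol:          # no atom improves the fit
            break
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        coef = np.clip(coef, 0.0, None)   # enforce non-negative emissions
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
    return x
```

As the abstract notes, enforcing non-negativity this way keeps the subproblems linear, avoiding the strong nonlinearity a log-transformed field would introduce.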
Validation of Satellite Retrieved Land Surface Variables
NASA Technical Reports Server (NTRS)
Lakshmi, Venkataraman; Susskind, Joel
1999-01-01
The effective use of satellite observations of the land surface is limited by the lack of high-spatial-resolution ground data sets for validating satellite products. Recent large-scale field experiments, including FIFE, HAPEX-Sahel and BOREAS, provide data sets with large spatial coverage over long time periods. The objective of this paper is to characterize the differences between satellite estimates and ground observations. This study, and others along similar lines, will help us utilize satellite-retrieved data in large-scale modeling studies.
DOT National Transportation Integrated Search
2017-08-01
Transit ridership has long been studied, and the findings are concisely elucidated by Taylor & Fink (2003) when they say: "To sum, transit ridership is largely, though not completely, a product of factors outside the control of transit managers." ...
NASA Astrophysics Data System (ADS)
Macnae, J.; Ley-Cooper, Y.
2009-05-01
Sub-surface porosity is of importance in estimating fluid content and salt-load parameters for hydrological modelling. While sparse boreholes may adequately sample the depth to a sub-horizontal water table, and usually also adequately sample ground-water salinity, they do not provide adequate sampling of the spatial variations in porosity or hydraulic permeability caused by spatial variations in sedimentary and other geological processes. We show in this presentation that spatially detailed porosity can be estimated by applying Archie's law to conductivity estimates from airborne electromagnetic surveys, combined with interpolated ground-water conductivity values. The prediction was tested on data from the Chowilla floodplain in the Murray-Darling Basin of South Australia. A frequency-domain, helicopter-borne electromagnetic system collected data at 6 frequencies and 3 to 4 m spacings on lines spaced 100 m apart. These data were transformed into conductivity-depth sections, from which a 3D bulk-conductivity map could be created with about 30 m spatial resolution and 2 to 5 m vertical depth resolution. For that portion of the volume below the interpolated water table, we predicted porosity in each cell using Archie's law. Generally, predicted porosities were in the 30 to 50% range, consistent with expectations for the partially consolidated sediments in the floodplain. Porosities were directly measured on core from eight boreholes in the area, and compared quite well with the predictions. The predicted porosity map was spatially consistent and, when combined with measured salinities in the ground water, was able to provide a detailed 3D map of salt loads in the saturated zone, and as such contribute to a hazard assessment of the saline threat to the river.
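The porosity step amounts to inverting Archie's law cell by cell in the saturated zone. A minimal sketch; the cementation exponent m and tortuosity factor a below are typical assumed values for unconsolidated sediments, not values calibrated for the Chowilla data:

```python
def archie_porosity(sigma_bulk, sigma_water, m=2.0, a=1.0):
    """Invert Archie's law for porosity below the water table.

    sigma_bulk : bulk conductivity from the AEM inversion (S/m)
    sigma_water: interpolated ground-water conductivity (S/m)
    m, a       : cementation exponent and tortuosity factor (assumed)

    Archie's law for a fully saturated medium:
        sigma_bulk = (sigma_water / a) * phi**m
    so  phi = (a * sigma_bulk / sigma_water) ** (1/m).
    """
    return (a * sigma_bulk / sigma_water) ** (1.0 / m)
```

For example, a bulk conductivity of 0.16 S/m in ground water of 1.0 S/m gives a porosity of 0.4 with m = 2, i.e. within the 30 to 50% range reported above.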
How Does the Sparse Memory "Engram" Neurons Encode the Memory of a Spatial-Temporal Event?
Guan, Ji-Song; Jiang, Jun; Xie, Hong; Liu, Kai-Yuan
2016-01-01
Episodic memory in the human brain is not a fixed 2-D picture but a highly dynamic movie series, integrating information in both the temporal and the spatial domains. Recent studies in neuroscience reveal that memory storage and recall are closely related to the activities in discrete memory engram (trace) neurons within the dentate gyrus region of the hippocampus and layer 2/3 of the neocortex. More strikingly, optogenetic reactivation of those memory trace neurons is able to trigger the recall of naturally encoded memory. It is still unknown how the discrete memory traces encode and reactivate the memory. Considering that a particular memory normally represents a natural event, which consists of information in both the temporal and spatial domains, it is unknown how the discrete trace neurons could reconstitute such enriched information in the brain. Furthermore, as optogenetically induced recall of memory did not depend on the firing pattern of the memory traces, it is most likely that the spatial activation pattern, rather than the temporal activation pattern, of the discrete memory trace neurons encodes the memory in the brain. How does the neural circuit convert activities in the spatial domain into the temporal domain to reconstitute the memory of a natural event? By reviewing the literature, we present here how memory engram (trace) neurons are selected and consolidated in the brain. We then discuss the main challenges to the memory trace theory. Finally, we provide a plausible model of the memory trace cell network underlying the conversion of neural activities between the spatial and temporal domains, and discuss how the activation of sparse memory trace neurons might trigger the replay of neural activities in specific temporal patterns.
NASA Remote Sensing Technologies for Improved Integrated Water Resources Management
NASA Astrophysics Data System (ADS)
Toll, D. L.; Doorn, B.; Searby, N. D.; Entin, J. K.; Lee, C. M.
2014-12-01
This presentation will emphasize NASA's water research, applications, and capacity building activities using satellites and models to address water issues including water availability, transboundary water, flooding and droughts for improved Integrated Water Resources Management (IWRM). NASA's free and open exchange of Earth data observations and products helps engage and improve integrated observation networks and enables national and multi-national regional water cycle research and applications that are especially useful in data-sparse regions of most developing countries. NASA satellite and modeling products provide a huge volume of valuable data extending back over 50 years across a broad range of spatial (local to global) and temporal (hourly to decadal) scales, and include many products that are available in near real time (see earthdata.nasa.gov). To further accomplish these objectives, NASA works to actively partner with public and private groups (e.g. federal agencies, universities, NGOs, and industry) in the U.S. and the international community to ensure the broadest use of its satellites and related information and products, and to collaborate with regional end users who know the regions and their needs best. Key objectives of this talk will highlight NASA's Water Resources and Capacity Building Programs, whose objective is to discover and demonstrate innovative uses and practical benefits of NASA's advanced system technologies for improved water management in national and international applications. The event will demonstrate the strong partnering and the use of satellite data to provide synoptic and repetitive spatial coverage, helping water managers deal with complex issues. The presentation will also demonstrate how NASA is a major contributor to water tasks and activities in GEOSS (Global Earth Observing System of Systems) and GEO (Group on Earth Observations).
Virtual source reflection imaging of the Socorro Magma Body, New Mexico, using a dense seismic array
NASA Astrophysics Data System (ADS)
Finlay, T. S.; Worthington, L. L.; Schmandt, B.; Hansen, S. M.; Bilek, S. L.; Aster, R. C.; Ranasinghe, N. R.
2017-12-01
The Socorro Magma Body (SMB) is one of the largest known actively inflating continental magmatic intrusions. Previous studies have relied on sparse instrument coverage to determine its spatial extent, depth, and seismic signature, characterizing the body as a thin sill whose top lies 19 km below the Earth's surface. However, over the last two decades, InSAR and magnetotelluric (MT) studies have shed new light on the SMB and invigorated the scientific debate over its spatial distribution and uplift rate. We return to seismic imaging of the SMB with the Sevilleta Array, a 12-day deployment of approximately 800 vertical-component, 10-Hz geophones north of Socorro, New Mexico, above and around the estimated northern half of the SMB. Teleseismic virtual-source reflection profiling (TVR) employs the free-surface reflection of a teleseismic P wave as a virtual source in dense arrays, and has been used successfully to image basin structure and the Moho in multiple tectonic environments. The Sevilleta Array recorded 62 teleseismic events greater than M5. Applying TVR to the data collected by the Sevilleta Array, we present stacks from four events with high signal-to-noise ratios and simple source-time functions: the February 11, 2015 M6.7 in northern Argentina; the February 19, 2015 M5.4 in Kamchatka, Russia; and the February 21, 2015 M5.1 and February 22, 2015 M5.5 in western Colombia. Preliminary results suggest eastward-dipping reflectors at approximately 5 km depth near the Sierra Ladrones range in the northwestern corner of the array. Further analysis will focus on creating profiles across the area of maximum SMB uplift and constraining basin geometry.
Disentangling natural and anthropogenic influences on Patagonian pond water quality.
Epele, Luis B; Manzo, Luz M; Grech, Marta G; Macchi, Pablo; Claverie, Alfredo Ñ; Lagomarsino, Leonardo; Miserendino, M Laura
2018-02-01
The water quality of wetlands is governed not only by natural variability in hydrology and other factors, but also by anthropogenic activities. Patagonia is a vast, sparsely populated region in which ponds are a key component of rural and urban landscapes because they provide several ecosystem services, such as habitat for wildlife and watering for livestock. Integrating field-based and geospatial data from 109 ponds sampled across the region, we identified spatial trends and assessed the effects of anthropogenic and natural factors on pond water quality. The studied ponds were generally shallow and well oxygenated, with maximum nutrient values reported at sites used for livestock breeding. TN:TP ratio values were lower than 14 in >90% of the ponds, indicating nitrogen limitation. Water conductivity decreased from east to west, while pH and dissolved oxygen varied with latitude. To assess the water status of Patagonian ponds, we recommend measuring total suspended solids and total nitrogen in the water and evaluating the mallín (wetland vegetation) coverage within a 100 m radius of the pond, since those features were significantly influenced by livestock land use. To evaluate the relative importance of natural variability and anthropogenic influences as driving factors of water quality, we performed three generalized linear models (GLM) that encompassed hydrology, hydroperiod and biome (representing natural influences) and land use (representing anthropogenic influences) as fixed effects. Our results revealed that at the Patagonian scale, pond water quality is strongly dependent on natural gradients. We synthesized spatial patterns of Patagonian pond water quality and disentangled natural and anthropogenic factors, finding that the dominant environmental influence is the rainfall gradient. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Maltagliati, Luca; Montmessin, Franck; Fedorova, Anna; Bertaux, Jean-Loup; Korablev, Oleg
In the pre-Mars Express era, only very sparse measurements of the vertical profile of water vapor existed, with limited temporal and spatial coverage. Thus, knowledge of the H2O distribution throughout the atmosphere relied almost exclusively on General Circulation Models. The vertical distribution of water vapor nonetheless provides otherwise unobtainable information on important characteristics of the Martian water cycle, such as the role of sources and sinks, phase changes, and the influence of clouds. Several other potentially significant phenomena, such as the presence of supersaturation, the deposition of water vapor in the layer just below the saturation height, and the formation of ice particles and water ice clouds, can be observed and studied in detail for the first time. The infrared channel of the SPICAM spectrometer onboard Mars Express, used in solar occultation mode, allows simultaneous retrieval of the vertical profiles of H2O and CO2 and of aerosol properties. This dataset is thus perfectly suited to enhance our vertical knowledge of the atmosphere of Mars, covering more than three full Martian years with good temporal and spatial distribution. We present the main results from the analysis of water vapor profiles and their implications for the behavior of the water cycle on Mars. A comparison with the output of the state-of-the-art General Circulation Model developed at the Laboratoire de Météorologie Dynamique in Paris (LMD-GCM) is performed, in order to understand the consequences of this dataset for the current knowledge of the physics and microphysics of water in the Martian atmosphere. In particular, the currently accepted assumption that the distribution of water in the atmosphere is controlled by saturation physics is tested, and the consequences of departures from this assumption are analysed in detail.
Biomass assessment of microbial surface communities by means of hyperspectral remote sensing data.
Rodríguez-Caballero, Emilio; Paul, Max; Tamm, Alexandra; Caesar, Jennifer; Büdel, Burkhard; Escribano, Paula; Hill, Joachim; Weber, Bettina
2017-05-15
Dryland vegetation developed morphological and physiological strategies to cope with drought. However, as aridity increases, vascular plant coverage gets sparse and microbially-dominated surface communities (MSC), comprising cyanobacteria, algae, lichens and bryophytes together with heterotrophic bacteria, archaea and fungi, gain relevance. Nevertheless, the relevance of MSC net primary productivity has only rarely been considered in ecosystem-scale studies, and detailed information on their contribution to the total photosynthetic biomass reservoir is largely missing. In this study, we mapped the spatial distribution of two different MSC (biological soil crusts and quartz fields hosting hypolithic crusts) at two different sites within the South African Succulent Karoo (Soebatsfontein and Knersvlakte). We then characterized both types of MSC in terms of chlorophyll content and, combining these data with the biocrust and quartz field maps, estimated total biomass values of MSC and their spatial patterns within the two different ecosystems. Our results revealed that MSC are important vegetation components of the South African Karoo biome, with clear differences between the two sites. At Soebatsfontein, MSC occurred as biological soil crusts (biocrusts), which covered about one third of the landscape, reaching an overall biomass value of ~480 g ha-1 of chlorophyll a+b at the landscape scale. In the Knersvlakte, which is characterized by harsher environmental conditions (i.e. higher solar radiation and potential evapotranspiration), MSC occurred as biocrusts, but also formed hypolithic crusts growing on the lower, soil-immersed parts of translucent quartz pebbles. Whereas chlorophyll concentrations of biocrusts and hypolithic crusts were not significantly lower in the Knersvlakte, the overall MSC biomass reservoir was by far larger, at ~780 g ha-1 of chlorophyll a+b.
Thus, the complementary microbially-dominated surface communities promoted biomass formation within the environmentally harsh Knersvlakte ecosystem. Copyright © 2017 Elsevier B.V. All rights reserved.
Daily mapping of 30m LAI and NDVI for grape yield prediction in California vineyards
USDA-ARS?s Scientific Manuscript database
Wine grape quality and quantity are affected by vine growing conditions during critical phenological stages. Field observations of vine growth stages are normally very sparse and cannot capture the spatial variability of vine conditions. Remote sensing data acquired from visible and near infrared ba...
Lugauer, Felix; Wetzl, Jens; Forman, Christoph; Schneider, Manuel; Kiefer, Berthold; Hornegger, Joachim; Nickel, Dominik; Maier, Andreas
2018-06-01
Our aim was to develop and validate a 3D Cartesian Look-Locker T1 mapping technique that achieves high accuracy and whole-liver coverage within a single breath-hold. The proposed method combines sparse Cartesian sampling based on a spatiotemporally incoherent Poisson pattern with k-space segmentation dedicated to high-temporal-resolution imaging. This combination allows capturing tissue with short relaxation times with volumetric coverage. A joint reconstruction of the 3D + inversion time (TI) data via compressed sensing exploits the spatiotemporal sparsity and ensures consistent quality for the subsequent multistep T1 mapping. Data from the National Institute of Standards and Technology (NIST) phantom and 11 volunteers, along with reference 2D Look-Locker acquisitions, are used for validation. 2D and 3D methods are compared based on T1 values in different abdominal tissues at 1.5 and 3 T. T1 maps obtained from the proposed 3D method compare favorably with those from the 2D reference and additionally allow for reformatting or volumetric analysis. Excellent agreement is shown in phantom data (bias < 2% and < 5% for T1 of 120 and 2000 ms, respectively) and volunteer data (3D and 2D deviation < 4% for liver, muscle, and spleen) for clinically acceptable scan (20 s) and reconstruction times (< 4 min). Whole-liver T1 mapping with high accuracy and precision is feasible in one breath-hold using spatiotemporally incoherent, sparse 3D Cartesian sampling.
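The spatiotemporally incoherent sparse Cartesian sampling can be illustrated with a simplified random ky-kz mask. The uniform-random pattern and fully sampled centre block below are stand-in assumptions, not the exact Poisson pattern or parameters of the paper:

```python
import numpy as np

def sparse_cartesian_mask(ny, nz, accel, center_frac=0.08, seed=0):
    """Random ky-kz sampling mask with a fully sampled centre block.

    A simplified stand-in for spatiotemporally incoherent Poisson
    sampling: keep ~1/accel of all phase-encode points at random,
    always including a small centre region that carries most of the
    image contrast.
    """
    rng = np.random.default_rng(seed)
    mask = rng.random((ny, nz)) < (1.0 / accel)
    cy, cz = int(ny * center_frac), int(nz * center_frac)
    y0, z0 = (ny - cy) // 2, (nz - cz) // 2
    mask[y0:y0 + cy, z0:z0 + cz] = True
    return mask

m = sparse_cartesian_mask(128, 96, accel=6)
print(m.shape, round(float(m.mean()), 2))
```

In a real acquisition, a different random mask per inversion time provides the temporal incoherence that the joint compressed-sensing reconstruction exploits.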
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, Daniela I.; Brumby, Steven P.; Rowland, Joel C.
Neuromimetic machine vision and pattern recognition algorithms are of great interest for landscape characterization and change detection in satellite imagery in support of global climate change science and modeling. We present results from an ongoing effort to extend machine vision methods to the environmental sciences, using adaptive sparse signal processing combined with machine learning. A Hebbian learning rule is used to build multispectral, multiresolution dictionaries from regional satellite normalized band difference index data. Land cover labels are automatically generated via our CoSA algorithm: Clustering of Sparse Approximations, using a clustering distance metric that combines spectral and spatial textural characteristics to help separate geologic, vegetative, and hydrologic features. We demonstrate our method on example Worldview-2 satellite images of an Arctic region, and use CoSA labels to detect seasonal surface changes. In conclusion, our results suggest that neuroscience-based models are a promising approach to practical pattern recognition and change detection problems in remote sensing.
Moody, Daniela I.; Brumby, Steven P.; Rowland, Joel C.; ...
2014-10-01
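A toy sketch of the CoSA idea: sparsely approximate each pixel's feature vector against a dictionary, then cluster the resulting codes. The random dictionary, one-sparse coding, and minimal k-means below are simplifying assumptions, not the Hebbian-learned dictionaries or the full spectral-textural distance metric of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def sparse_codes(X, D):
    """One-sparse approximation: coefficient of the best-matching atom."""
    corr = X @ D.T                      # (n_pixels, n_atoms)
    atoms = np.argmax(np.abs(corr), axis=1)
    codes = np.zeros_like(corr)
    codes[np.arange(len(X)), atoms] = corr[np.arange(len(X)), atoms]
    return codes

def kmeans_labels(Z, k, iters=20):
    """Minimal k-means on the sparse codes (the 'clustering' step of CoSA)."""
    C = Z[rng.choice(len(Z), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((Z[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = Z[labels == j].mean(0)
    return labels

X = rng.normal(size=(200, 8))           # 200 pixels, 8 band-index features
D = rng.normal(size=(16, 8))            # 16 dictionary atoms (stand-in)
D /= np.linalg.norm(D, axis=1, keepdims=True)
labels = kmeans_labels(sparse_codes(X, D), k=4)
print(labels.shape)
```

Change detection then amounts to comparing the label maps of two dates pixel by pixel.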
The Cortex Transform as an image preprocessor for sparse distributed memory: An initial study
NASA Technical Reports Server (NTRS)
Olshausen, Bruno; Watson, Andrew
1990-01-01
An experiment is described which was designed to evaluate the use of the Cortex Transform as an image preprocessor for Sparse Distributed Memory (SDM). In the experiment, a set of images was injected with Gaussian noise, preprocessed with the Cortex Transform, and then encoded into bit patterns. The various spatial frequency bands of the Cortex Transform were encoded separately so that they could be evaluated on their ability to properly cluster patterns belonging to the same class. The results of this study indicate that by simply encoding the low-pass band of the Cortex Transform, a very suitable input representation for the SDM can be achieved.
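The winning step, binarizing a low-pass band into a bit pattern for SDM input, might be sketched as follows. The Gaussian Fourier-domain filter here is an assumed stand-in for the Cortex Transform's baseband filter, and the median threshold is an illustrative encoding choice:

```python
import numpy as np

def lowpass_bits(img, cutoff=0.1):
    """Low-pass filter in the Fourier domain, then binarize at the median.

    Produces a bit pattern of the kind fed to a Sparse Distributed Memory;
    the Gaussian filter approximates a baseband (low-pass) pyramid level.
    """
    f = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    lp = f * np.exp(-(fx**2 + fy**2) / (2 * cutoff**2))
    smooth = np.fft.ifft2(lp).real
    return (smooth > np.median(smooth)).astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.normal(size=(32, 32))
bits = lowpass_bits(img)
print(bits.shape)
```

Because low-pass structure is robust to added Gaussian noise, noisy versions of the same image tend to map to nearby bit patterns, which is what clustering in the SDM address space requires.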
Spatial fuel data products of the LANDFIRE Project
Matt Reeves; Kevin C. Ryan; Matthew G. Rollins; Thomas G. Thompson
2009-01-01
The Landscape Fire and Resource Management Planning Tools (LANDFIRE) Project is mapping wildland fuels, vegetation, and fire regime characteristics across the United States. The LANDFIRE project is unique because of its national scope, creating an integrated product suite at 30-m spatial resolution and complete spatial coverage of all lands within the 50...
Fast sparsely synchronized brain rhythms in a scale-free neural network.
Kim, Sang-Yoon; Lim, Woochang
2015-08-01
We consider a directed version of the Barabási-Albert scale-free network model with symmetric preferential attachment with the same in- and out-degrees and study the emergence of sparsely synchronized rhythms for a fixed attachment degree in an inhibitory population of fast-spiking Izhikevich interneurons. Fast sparsely synchronized rhythms with stochastic and intermittent neuronal discharges are found to appear for large values of J (synaptic inhibition strength) and D (noise intensity). For an intensive study we fix J at a sufficiently large value and investigate the population states by increasing D. For small D, full synchronization with the same population-rhythm frequency fp and mean firing rate (MFR) fi of individual neurons occurs, while for large D partial synchronization with fp>〈fi〉 (〈fi〉: ensemble-averaged MFR) appears due to intermittent discharge of individual neurons; in particular, the case of fp>4〈fi〉 is referred to as sparse synchronization. For the case of partial and sparse synchronization, MFRs of individual neurons vary depending on their degrees. As D passes a critical value D* (which is determined by employing an order parameter), a transition to unsynchronization occurs due to the destructive role of noise in spoiling the pacing between sparse spikes. For D
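The regime labels used above follow directly from comparing the population-rhythm frequency f_p with the ensemble-averaged mean firing rate ⟨f_i⟩; a minimal sketch (the thresholds are as stated in the abstract, the example rates are invented):

```python
# Classify the synchronization regime from f_p and per-neuron firing rates:
# f_p = <f_i>  -> full synchronization (every neuron fires each cycle),
# f_p > <f_i>  -> partial synchronization (intermittent discharges),
# f_p > 4<f_i> -> the special case called sparse synchronization.
def sync_regime(f_pop, rates):
    """rates: per-neuron mean firing rates (Hz); f_pop: rhythm frequency (Hz)."""
    mean_rate = sum(rates) / len(rates)
    if f_pop <= mean_rate:
        return "full"
    return "sparse" if f_pop > 4 * mean_rate else "partial"

print(sync_regime(40.0, [40.0] * 10))   # -> full
print(sync_regime(40.0, [15.0] * 10))   # -> partial
print(sync_regime(40.0, [5.0] * 10))    # -> sparse
```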
Lee, Young-Beom; Lee, Jeonghyeon; Tak, Sungho; Lee, Kangjoo; Na, Duk L; Seo, Sang Won; Jeong, Yong; Ye, Jong Chul
2016-01-15
Recent studies of functional connectivity MR imaging have revealed that the default-mode network activity is disrupted in diseases such as Alzheimer's disease (AD). However, there is not yet a consensus on the preferred method for resting-state analysis. Because the brain is reported to have complex interconnected networks according to graph theoretical analysis, the independence assumption, as in the popular independent component analysis (ICA) approach, often does not hold. Here, rather than relying on the independence assumption, we present a new statistical parameter mapping (SPM)-type analysis method based on a sparse graph model, in which the temporal dynamics at each voxel position are described as a sparse combination of global brain dynamics. In particular, a new concept of a spatially adaptive design matrix is proposed to represent local connectivity that shares the same temporal dynamics. If we further assume that local network structures within a group are similar, the estimation problem of global and local dynamics can be solved using sparse dictionary learning on the concatenated temporal data across subjects. Moreover, under the homoscedastic variance assumption across subjects and groups that is often used in SPM analysis, the aforementioned individual and group analyses using sparse dictionary learning can be accurately modeled by a mixed-effect model, which also facilitates a standard SPM-type group-level inference using summary statistics. Using an extensive resting fMRI data set obtained from normal, mild cognitive impairment (MCI), and Alzheimer's disease patient groups, we demonstrated that the changes in the default-mode network extracted by the proposed method are more closely correlated with the progression of Alzheimer's disease. Copyright © 2015 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kovarik, Libor; Stevens, Andrew J.; Liyu, Andrey V.
Aberration correction for scanning transmission electron microscopes (STEM) has dramatically increased spatial image resolution for beam-stable materials, but it is the sample stability rather than the microscope that often limits the practical resolution of STEM images. To extract physical information from images of beam-sensitive materials, it is becoming clear that there is a critical dose/dose-rate below which the images can be interpreted as representative of the pristine material, while above it the observation is dominated by beam effects. Here we describe an experimental approach for sparse sampling in the STEM and in-painting image reconstruction in order to reduce the electron dose/dose-rate to the sample during imaging. By characterizing the induction-limited rise time and hysteresis in the scan coils, we show that a sparse line-hopping approach to scan randomization can be implemented that optimizes both the speed of the scan and the amount of the sample that needs to be illuminated by the beam. The dose and acquisition time for sparse sampling are shown to be effectively decreased by a factor of 5 relative to conventional acquisition, permitting imaging of beam-sensitive materials without changing the microscope operating parameters. As a result, the use of a sparse line-hopping scan to acquire STEM images is demonstrated with atomic-resolution aberration-corrected Z-contrast images of CaCO3, a material that is traditionally difficult to image by TEM/STEM because of dose issues.
Kovarik, Libor; Stevens, Andrew J.; Liyu, Andrey V.; ...
2016-10-17
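The dose saving comes purely from visiting a random subset of scan lines; a minimal sketch of such a line-hopping schedule follows. The 1/5 sampling fraction mirrors the reported ~5x reduction, while the scheduling details (randomized order, uniform selection) are illustrative assumptions:

```python
import numpy as np

def line_hopping_schedule(n_lines, fraction=0.2, seed=0):
    """Random subset of scan lines in randomized (hopping) order.

    Sampling 1/5 of the lines reduces dose and acquisition time by ~5x;
    the skipped lines are later recovered by in-painting reconstruction.
    """
    rng = np.random.default_rng(seed)
    keep = rng.choice(n_lines, size=int(n_lines * fraction), replace=False)
    rng.shuffle(keep)           # hop between lines rather than raster-scan
    return keep

sched = line_hopping_schedule(512, fraction=0.2)
print(len(sched), 512 / len(sched))   # lines visited, dose-reduction factor
```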
Atmospheric Science Data Center
2014-04-25
AirMISR WISCONSIN 2000. Project Title: AirMISR. Platform: ER-2. Spatial Coverage: Wisconsin (35.92, 43.79), (-97.94, -90.23).
Monnat, Shannon M.
2016-01-01
Hispanics have the lowest health insurance rates of any racial/ethnic group, but rates vary significantly across the U.S. The unprecedented growth of the Hispanic population since 1990 in rural areas with previously small or non-existent Hispanic populations raises questions about disparities in access to health insurance coverage. Identifying spatial disparities in Hispanic health insurance rates can illuminate the specific contexts within which Hispanics are least likely to have health care access and inform policy approaches for increasing coverage in different spatial contexts. Using county-level data from the 2009-2013 American Community Survey, I find that early new destinations (i.e., those that experienced rapid Hispanic population growth during the 1990s) have the lowest Hispanic adult health insurance coverage rates, with little variation by metropolitan status. Conversely, among the most recent new destinations that experienced significant Hispanic population growth during the 2000s, metropolitan counties have Hispanic health insurance rates that are similar to established destinations, but rural counties have Hispanic health insurance rates that are significantly lower than those in established destinations. Findings demonstrate that the new destination disadvantage is driven entirely by higher concentrations of immigrant non-citizen Hispanics in these counties, but labor market conditions were salient drivers of the spatially uneven distribution of foreign-born non-citizen Hispanics to new destinations, particularly in rural areas. PMID:28479612
Spatial and diurnal below canopy evaporation in a desert vineyard: measurements and modeling
USDA-ARS?s Scientific Manuscript database
Evaporation from the soil surface (E) can be a significant source of water loss in arid areas. In sparsely vegetated systems, E is expected to be a function of soil, climate, irrigation regime, precipitation patterns, and plant canopy development, and will therefore change dynamically at both daily ...
Competing for Consciousness: Prolonged Mask Exposure Reduces Object Substitution Masking
ERIC Educational Resources Information Center
Goodhew, Stephanie C.; Visser, Troy A. W.; Lipp, Ottmar V.; Dux, Paul E.
2011-01-01
In object substitution masking (OSM) a sparse, temporally trailing 4-dot mask impairs target identification, even though it has different contours from, and does not spatially overlap with the target. Here, we demonstrate a previously unknown characteristic of OSM: Observers show reduced masking at prolonged (e.g., 640 ms) relative to intermediate…
The current study uses case studies of model-estimated regional precipitation and wet ion deposition to estimate errors in corresponding regional values derived from the means of site-specific values within regions of interest located in the eastern US. The mean of model-estimate...
Assimilation of spatially sparse in situ soil moisture networks into a continuous model domain
USDA-ARS?s Scientific Manuscript database
Growth in the availability of near-real-time soil moisture observations from ground-based networks has spurred interest in the assimilation of these observations into land surface models via a two-dimensional data assimilation system. However, the design of such systems is currently hampered by our ...
NASA Astrophysics Data System (ADS)
Teffahi, Hanane; Yao, Hongxun; Belabid, Nasreddine; Chaib, Souleyman
2018-02-01
Satellite images with very high spatial resolution have recently been widely used in image classification, which has become a challenging task in the remote sensing field. Due to a number of limitations, such as the redundancy of features and the high dimensionality of the data, different classification methods have been proposed for remote sensing image classification, particularly methods using feature extraction techniques. This paper proposes a simple, efficient method exploiting the capability of extended multi-attribute profiles (EMAP) with a sparse autoencoder (SAE) for remote sensing image classification. The proposed method is used to classify various remote sensing datasets, including hyperspectral and multispectral images, by extracting spatial and spectral features based on the combination of EMAP and SAE, linked to a kernel support vector machine (SVM) for classification. Experiments on the new hyperspectral image "Huston data" and the multispectral image "Washington DC data" show that this new scheme achieves better feature-learning performance than primitive features, traditional classifiers and an ordinary autoencoder, and has great potential to achieve higher classification accuracy in short running time.
Spatial-temporal variation of marginal land suitable for energy plants from 1990 to 2010 in China
NASA Astrophysics Data System (ADS)
Jiang, Dong; Hao, Mengmeng; Fu, Jingying; Zhuang, Dafang; Huang, Yaohuan
2014-07-01
Energy plants are the main source of bioenergy, which will play an increasingly important role in future energy supplies. With limited cultivated land resources in China, the development of energy plants may primarily rely on marginal land. In this study, based on land use data from 1990 to 2010 (in 5-year periods) and other auxiliary data, the distribution of marginal land suitable for energy plants was determined using a multi-factor integrated assessment method. The variation in land use type and spatial distribution of marginal land suitable for energy plants across decades was analyzed. The results indicate that the total amount of marginal land suitable for energy plants decreased from 136.501 million ha to 114.225 million ha between 1990 and 2010. The land use types that decreased are primarily shrub land, sparse forest land, moderately dense grassland and sparse grassland, and the areas of largest change are located in Guangxi, Tibet, Heilongjiang, Xinjiang and Inner Mongolia. The results of this study will provide more effective data reference and decision-making support for the long-term planning of bioenergy resources.
Learning Low-Rank Decomposition for Pan-Sharpening With Spatial-Spectral Offsets.
Yang, Shuyuan; Zhang, Kai; Wang, Min
2017-08-25
Finding accurate injection components is the key issue in pan-sharpening methods. In this paper, a low-rank pan-sharpening (LRP) model is developed from a new perspective of offset learning. Two offsets are defined to represent the spatial and spectral differences between low-resolution multispectral and high-resolution multispectral (HRMS) images, respectively. In order to reduce spatial and spectral distortions, spatial equalization and spectral proportion constraints are designed and imposed on the offsets, yielding a spatially and spectrally constrained stable low-rank decomposition algorithm solved via the augmented Lagrange multiplier method. By fine modeling and heuristic learning, our method can simultaneously reduce spatial and spectral distortions in the fused HRMS images. Moreover, by exploiting the low-rank and sparse characteristics of the data, our method can efficiently deal with noise and outliers in the source images. Extensive experiments conducted on several image data sets demonstrate the efficiency of the proposed LRP.
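The low-rank-plus-offsets formulation is closely related to robust PCA. Below is a generic low-rank + sparse split via alternating singular-value and entrywise soft thresholding; this is a simplified stand-in that omits the paper's spatial-equalization and spectral-proportion constraints, and the `rpca` helper and its parameters are illustrative, not the authors' algorithm:

```python
import numpy as np

def rpca(M, lam=None, iters=200, mu=None):
    """Split M into a low-rank part L and a sparse part S.

    A simplified alternating scheme: singular-value thresholding for L,
    entrywise soft thresholding for S. Threshold choices are heuristic.
    """
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * np.abs(M).mean()
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        # low-rank update: singular-value thresholding of M - S
        U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * np.maximum(sig - mu, 0)) @ Vt
        # sparse update: entrywise soft threshold of M - L
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam * mu, 0)
    return L, S

rng = np.random.default_rng(0)
base = rng.normal(size=(40, 3)) @ rng.normal(size=(3, 30))   # rank-3 signal
spikes = np.zeros((40, 30))
spikes[rng.random((40, 30)) < 0.05] = 5.0                    # sparse outliers
L, S = rpca(base + spikes)
print(L.shape, S.shape)
```

In the pan-sharpening setting, the sparse term is what absorbs noise and outliers so that the recovered low-rank component stays clean.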
Bilateral parietal contributions to spatial language.
Conder, Julie; Fridriksson, Julius; Baylis, Gordon C; Smith, Cameron M; Boiteau, Timothy W; Almor, Amit
2017-01-01
It is commonly held that language is largely lateralized to the left hemisphere in most individuals, whereas spatial processing is associated with right hemisphere regions. In recent years, a number of neuroimaging studies have yielded conflicting results regarding the role of language and spatial processing areas in processing language about space (e.g., Carpenter, Just, Keller, Eddy, & Thulborn, 1999; Damasio et al., 2001). In the present study, we used sparse scanning event-related functional magnetic resonance imaging (fMRI) to investigate the neural correlates of spatial language, that is, language used to communicate the spatial relationship of one object to another. During scanning, participants listened to sentences about object relationships that were either spatial or non-spatial in nature (color or size relationships). Sentences describing spatial relationships elicited more activation in the superior parietal lobule (SPL) and precuneus bilaterally in comparison to sentences describing size or color relationships. Activation of the precuneus suggests that spatial sentences elicit spatial mental imagery, while activation of the SPL suggests that sentences containing spatial language involve integration of two distinct sets of information: linguistic and spatial. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Cheng, M.; Jin, J.
2017-12-01
Vegetation phenology is one of the most sensitive bio-indicators of climate change, and it has received increasing interest in the context of global warming. As one of the areas most sensitive to global change, the Tibetan Plateau is a unique region in which to study trends in vegetation phenology in response to climate change because of its unique vegetation composition, climate features and low level of human disturbance. Although some studies have raised wide controversy about the actual plant phenology patterns on the Tibetan Plateau, the reasons remain unclear. In particular, the phenology of sparse herbaceous or sparse shrub vegetation and of evergreen forest, mostly located in the northwest and southeast of the Tibetan Plateau, remains less studied. In this study, the spatio-temporal patterns of the start (SOS), end (EOS) and length (LOS) of the vegetation growing season for six vegetation types on the Tibetan Plateau (evergreen broadleaf forests, evergreen coniferous forests, evergreen shrub, meadow, steppe, and sparse herbaceous or sparse shrub) were quantified from 1982 to 2014 from the NOAA/AVHRR NDVI data set (0.05°×0.05° spatial resolution, 7-day intervals), using an NDVI relative-change-rate threshold and sixth-order polynomial fit models. Using monthly precipitation and temperature data, the relative effects of changing climate on the variability of phenology were also examined. Diverse phenological changes were observed for different land cover types, with an advancing SOS, delaying EOS and increasing LOS in the eastern Tibetan Plateau, where meadow is the dominant vegetation type, but with the opposite changes in the steppe and sparse herbaceous or sparse shrub regions mostly located on the northwestern and western edges of the Plateau.
Correlation analysis indicated that sufficient preseason precipitation may delay the SOS of evergreen forests in the southeastern Plateau and advance the SOS of steppe and sparse herbaceous or sparse shrub in relatively arid areas, while the advance of SOS in meadow areas could be related to higher preseason temperature.
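A threshold-based extraction of SOS and EOS from an annual NDVI curve can be sketched as follows. The synthetic Gaussian season, weekly compositing, and 50% amplitude threshold are illustrative assumptions, not the study's exact relative-change-rate or polynomial-fit models:

```python
import numpy as np

def sos_eos(ndvi, doy, frac=0.5):
    """SOS = first day NDVI rises above a fraction of the seasonal amplitude,
    EOS = last day it stays above that level."""
    amp_thresh = ndvi.min() + frac * (ndvi.max() - ndvi.min())
    above = np.where(ndvi >= amp_thresh)[0]
    return doy[above[0]], doy[above[-1]]

doy = np.arange(1, 366, 7)                                 # weekly composites
ndvi = 0.15 + 0.5 * np.exp(-((doy - 200) / 60.0) ** 2)     # one growing season
sos, eos = sos_eos(ndvi, doy)
print(sos, eos)   # growing season bracketing the day-200 NDVI peak
```

LOS then follows as `eos - sos`, and trend analysis repeats this per pixel and per year.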
NASA Astrophysics Data System (ADS)
Cao, Faxian; Yang, Zhijing; Ren, Jinchang; Ling, Wing-Kuen; Zhao, Huimin; Marshall, Stephen
2017-12-01
Although sparse multinomial logistic regression (SMLR) has provided a useful tool for sparse classification, it is inefficient in dealing with high-dimensional features and relies on manually set initial regressor values. This has significantly constrained its application to hyperspectral image (HSI) classification. To tackle these two drawbacks, an extreme sparse multinomial logistic regression (ESMLR) is proposed for effective classification of HSI. First, the HSI dataset is projected to a new feature space with randomly generated weights and biases. Second, an optimization model is established via the Lagrange multiplier method and the dual principle to automatically determine a good initial regressor for SMLR by minimizing the training error and the regressor value. Furthermore, extended multi-attribute profiles (EMAPs) are utilized to extract both spectral and spatial features. A combinational linear multiple features learning (MFL) method is proposed to further enhance the features extracted by ESMLR and EMAPs. Finally, logistic regression via variable splitting and augmented Lagrangian (LORSAL) is adopted in the proposed framework to reduce the computational time. Experiments conducted on two well-known HSI datasets, the Indian Pines dataset and the Pavia University dataset, demonstrate the fast and robust performance of the proposed ESMLR framework.
Atmospheric Science Data Center
2015-11-25
FIRE_CI2_ETL_RADAR. Project Title: FIRE II CIRRUS. Platform: Ground Station. Instrument: Radar. Spatial Coverage: (37.06, -95.34).
NASA Astrophysics Data System (ADS)
Cai, Lei; Alexeev, Vladimir A.; Arp, Christopher D.; Jones, Benjamin M.; Liljedahl, Anna K.; Gädeke, Anne
2018-01-01
Climate change is most pronounced in the northern high-latitude region. Yet climate observations are unable to fully capture regional-scale dynamics due to sparse weather station coverage, which limits our ability to make reliable climate-based assessments. A set of simulated data products was therefore developed for the North Slope of Alaska through a dynamical downscaling approach. The polar-optimized Weather Research and Forecasting (Polar WRF) model was forced by three sources: the ERA-Interim reanalysis (for 1979-2014), the Community Earth System Model 1.0 (CESM1.0) historical simulation (for 1950-2005), and the CESM1.0 projected simulations (for 2006-2100) under two Representative Concentration Pathways (RCP4.5 and RCP8.5). Climatic variables were produced at 10-km grid spacing and 3-hour intervals. The ERA-Interim-forced WRF (ERA-WRF) demonstrates the value of dynamical downscaling, yielding more realistic topographically induced precipitation and air temperature and correcting underestimations in observed precipitation. Dry and cold biases to the north of the Brooks Range are present in ERA-WRF, while the CESM-forced WRF (CESM-WRF) shows wet and warm biases in its historical period. A linear scaling method allowed for an adjustment of these biases while retaining most of the variability and extreme values of modeled precipitation and air temperature. CESM-WRF under the RCP4.5 scenario projects smaller increases in precipitation and air temperature than seen in the historical CESM-WRF product, while CESM-WRF under the RCP8.5 scenario shows larger changes. The fine spatial and temporal resolution, long temporal coverage, and multi-scenario projections jointly make the dataset appropriate for addressing a myriad of physical and biological changes occurring on the North Slope of Alaska.
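Linear scaling as used for this kind of bias adjustment is typically an additive shift for temperature and a multiplicative factor for precipitation, derived from the overlap of model and reference climatologies; it removes the mean bias while preserving the model's variability and extremes. A minimal sketch, with all numbers invented:

```python
import numpy as np

def linear_scale_temp(model, ref_mean_obs, ref_mean_model):
    """Additive correction: shift by the mean bias of the reference period."""
    return model + (ref_mean_obs - ref_mean_model)

def linear_scale_precip(model, ref_mean_obs, ref_mean_model):
    """Multiplicative correction: rescale so reference-period means match."""
    return model * (ref_mean_obs / ref_mean_model)

t_model = np.array([-20.0, -10.0, 0.0, 5.0])     # deg C, hypothetical
t_adj = linear_scale_temp(t_model, ref_mean_obs=-8.0, ref_mean_model=-10.0)

p_model = np.array([0.0, 2.0, 10.0])             # mm/day, hypothetical
p_adj = linear_scale_precip(p_model, ref_mean_obs=300.0, ref_mean_model=240.0)

print(t_adj.tolist())   # every value shifted by +2.0
print(p_adj.tolist())   # every value scaled by 1.25, zeros stay zero
```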
Zhang, Yan; Bhamber, Ranjeet; Riba-Garcia, Isabel; Liao, Hanqing; Unwin, Richard D; Dowsey, Andrew W
2015-01-01
As data rates rise, there is a danger that informatics for high-throughput LC-MS becomes more opaque and inaccessible to practitioners. It is therefore critical that efficient visualisation tools are available to facilitate quality control, verification, validation, interpretation, and sharing of raw MS data and the results of MS analyses. Currently, MS data is stored as contiguous spectra. Recall of individual spectra is quick, but panoramas, zooming and panning across whole datasets necessitate processing/memory overheads impractical for interactive use. Moreover, visualisation is challenging if significant quantification data is missing due to data-dependent acquisition of MS/MS spectra. In order to tackle these issues, we leverage our seaMass technique for novel signal decomposition. LC-MS data is modelled as a 2D surface through selection of a sparse set of weighted B-spline basis functions from an over-complete dictionary. By ordering and spatially partitioning the weights with an R-tree data model, efficient streaming visualisations are achieved. In this paper, we describe the core MS1 visualisation engine and overlay of MS/MS annotations. This enables the mass spectrometrist to quickly inspect whole runs for ionisation/chromatographic issues, MS/MS precursors for coverage problems, or putative biomarkers for interferences, for example. The open-source software is available from http://seamass.net/viz/. PMID:25663356
Sequential estimation of surface water mass changes from daily satellite gravimetry data
NASA Astrophysics Data System (ADS)
Ramillien, G. L.; Frappart, F.; Gratton, S.; Vasseur, X.
2015-03-01
We propose a recursive Kalman filtering approach to map regional spatio-temporal variations of terrestrial water mass over large continental areas, such as South America. Instead of correcting hydrology model outputs by the GRACE observations using a Kalman filter estimation strategy, regional 2-by-2 degree water mass solutions are constructed by integration of daily potential differences deduced from GRACE K-band range rate (KBRR) measurements. Recovery of regional water mass anomaly averages obtained by accumulation of information of daily noise-free simulated GRACE data shows that convergence is relatively fast and yields accurate solutions. In the case of cumulating real GRACE KBRR data contaminated by observational noise, the sequential method of step-by-step integration provides estimates of water mass variation for the period 2004-2011 by considering a set of suitable a priori error uncertainty parameters to stabilize the inversion. Spatial and temporal averages of the Kalman filter solutions over river basin surfaces are consistent with the ones computed using global monthly/10-day GRACE solutions from official providers CSR, GFZ and JPL. They are also highly correlated to in situ records of river discharges (70-95 %), especially for the Obidos station where the total outflow of the Amazon River is measured. The sparse daily coverage of the GRACE satellite tracks limits the time resolution of the regional Kalman filter solutions, and thus the detection of short-term hydrological events.
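The recursive accumulation of daily information is the classic Kalman update: each new observation refines the state estimate and shrinks its uncertainty. A scalar sketch with invented dynamics and noise levels (the real scheme inverts daily KBRR potential differences for 2-by-2 degree regional water-mass solutions):

```python
import numpy as np

def kalman_step(x, P, z, R, Q=0.0):
    """One scalar Kalman filter step.

    x, P : state estimate and its variance
    z, R : daily observation and its variance
    Q    : process noise (random-walk state model)
    """
    P = P + Q                      # predict
    K = P / (P + R)                # Kalman gain
    x = x + K * (z - x)            # update with the daily observation
    P = (1.0 - K) * P
    return x, P

rng = np.random.default_rng(0)
truth = 5.0                        # regional water-mass anomaly (arbitrary units)
x, P = 0.0, 100.0                  # vague prior
for _ in range(60):                # ~two months of daily passes
    z = truth + rng.normal(scale=2.0)
    x, P = kalman_step(x, P, z, R=4.0)
print(round(x, 2), round(P, 3))    # estimate converges toward 5, variance shrinks
```

As in the abstract, convergence is fast: the posterior variance drops roughly as R divided by the number of accumulated daily observations.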
Determination of Differential Emission Measure from Solar Extreme Ultraviolet Images
NASA Astrophysics Data System (ADS)
Su, Yang; Veronig, Astrid M.; Hannah, Iain G.; Cheung, Mark C. M.; Dennis, Brian R.; Holman, Gordon D.; Gan, Weiqun; Li, Youping
2018-03-01
The Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory (SDO) has been providing high-cadence, high-resolution, full-disk UV-visible/extreme ultraviolet (EUV) images since 2010, with the best time coverage among all solar missions. A number of codes have been developed to extract plasma differential emission measures (DEMs) from AIA images. Although widely used, they cannot effectively constrain the DEM at flaring temperatures with AIA data alone. This often results in much higher X-ray fluxes than observed. One way to solve the problem is to add constraints from other data sets (such as soft X-ray images and fluxes). However, the spatial information of the plasma DEMs is lost in many cases. In this Letter, we present a different approach to constraining the DEMs. We tested the sparse inversion code and show that the default settings reproduce X-ray fluxes that can be too high. Based on tests with both simulated and observed AIA data, we provide recommended settings of basis functions and tolerances. The new DEM solutions derived from AIA images alone are much more consistent with (thermal) X-ray observations, and provide valuable information by mapping the thermal plasma from ∼0.3 to ∼30 MK. Such improvement is a key step in understanding the nature of individual X-ray sources, and is particularly important for studies of flare initiation.
Adjusting for sampling variability in sparse data: geostatistical approaches to disease mapping
2011-01-01
Background: Disease maps of crude rates from routinely collected health data indexed at a small geographical resolution pose specific statistical problems due to the sparse nature of the data. Spatial smoothers allow areas to borrow strength from neighboring regions to produce a more stable estimate of the areal value. Geostatistical smoothers are able to quantify the uncertainty in smoothed rate estimates without a high computational burden. In this paper, we introduce a uniform model extension of Bayesian Maximum Entropy (UMBME) and compare its performance to that of Poisson kriging in measures of smoothing strength and estimation accuracy as applied to simulated data and the real data example of HIV infection in North Carolina. The aim is to produce more reliable maps of disease rates in small areas to improve identification of spatial trends at the local level. Results: In all data environments, Poisson kriging exhibited greater smoothing strength than UMBME. With the simulated data where the true latent rate of infection was known, Poisson kriging resulted in greater estimation accuracy with data that displayed low spatial autocorrelation, while UMBME provided more accurate estimators with data that displayed higher spatial autocorrelation. With the HIV data, UMBME performed slightly better than Poisson kriging in cross-validatory predictive checks, with both models performing better than the observed data model with no smoothing. Conclusions: Smoothing methods have different advantages depending upon both internal model assumptions that affect smoothing strength and external data environments, such as spatial correlation of the observed data. Further model comparisons in different data environments are required to provide public health practitioners with guidelines needed in choosing the most appropriate smoothing method for their particular health dataset. PMID:21978359
Adjusting for sampling variability in sparse data: geostatistical approaches to disease mapping.
Hampton, Kristen H; Serre, Marc L; Gesink, Dionne C; Pilcher, Christopher D; Miller, William C
2011-10-06
Disease maps of crude rates from routinely collected health data indexed at a small geographical resolution pose specific statistical problems due to the sparse nature of the data. Spatial smoothers allow areas to borrow strength from neighboring regions to produce a more stable estimate of the areal value. Geostatistical smoothers are able to quantify the uncertainty in smoothed rate estimates without a high computational burden. In this paper, we introduce a uniform model extension of Bayesian Maximum Entropy (UMBME) and compare its performance to that of Poisson kriging in measures of smoothing strength and estimation accuracy as applied to simulated data and the real data example of HIV infection in North Carolina. The aim is to produce more reliable maps of disease rates in small areas to improve identification of spatial trends at the local level. In all data environments, Poisson kriging exhibited greater smoothing strength than UMBME. With the simulated data where the true latent rate of infection was known, Poisson kriging resulted in greater estimation accuracy with data that displayed low spatial autocorrelation, while UMBME provided more accurate estimators with data that displayed higher spatial autocorrelation. With the HIV data, UMBME performed slightly better than Poisson kriging in cross-validatory predictive checks, with both models performing better than the observed data model with no smoothing. Smoothing methods have different advantages depending upon both internal model assumptions that affect smoothing strength and external data environments, such as spatial correlation of the observed data. Further model comparisons in different data environments are required to provide public health practitioners with guidelines needed in choosing the most appropriate smoothing method for their particular health dataset.
How Does the Sparse Memory “Engram” Neurons Encode the Memory of a Spatial–Temporal Event?
Guan, Ji-Song; Jiang, Jun; Xie, Hong; Liu, Kai-Yuan
2016-01-01
Episodic memory in the human brain is not a fixed 2-D picture but a highly dynamic movie series, integrating information in both the temporal and the spatial domains. Recent studies in neuroscience reveal that memory storage and recall are closely related to the activities of discrete memory engram (trace) neurons within the dentate gyrus region of the hippocampus and layer 2/3 of the neocortex. More strikingly, optogenetic reactivation of those memory trace neurons is able to trigger the recall of naturally encoded memory. It is still unknown how the discrete memory traces encode and reactivate the memory. Since a particular memory normally represents a natural event, consisting of information in both the temporal and spatial domains, it is unclear how the discrete trace neurons could reconstitute such enriched information in the brain. Furthermore, as the optogenetically induced recall of memory did not depend on the firing pattern of the memory traces, it is most likely the spatial activation pattern, not the temporal activation pattern, of the discrete memory trace neurons that encodes the memory in the brain. How does the neural circuit convert activities in the spatial domain into the temporal domain to reconstitute the memory of a natural event? By reviewing the literature, we present here how the memory engram (trace) neurons are selected and consolidated in the brain. We then discuss the main challenges for the memory trace theory. Finally, we provide a plausible model of the memory trace cell network underlying the conversion of neural activities between the spatial and temporal domains, and discuss how activation of sparse memory trace neurons might trigger the replay of neural activities in specific temporal patterns. PMID:27601979
Satellite Remote Sensing of Cirrus: An Overview
NASA Technical Reports Server (NTRS)
Minnis, Patrick
1998-01-01
The determination of cirrus properties over relatively large spatial and temporal scales will, in most instances, require the use of satellite data. Global coverage at resolutions as fine as several meters is attainable with Landsat, while temporal coverage at 1-min intervals is now available with the latest Geostationary Operational Environmental Satellite (GOES) imagers. Cirrus clouds can be analyzed by interpreting the radiation they reflect or emit over a wide range of the electromagnetic spectrum. Many of these spectral and high-resolution satellite data can be used to understand certain aspects of cirrus clouds in particular situations. Production of a global climatology of cirrus clouds, however, requires compromises in spatial, temporal, and spectral coverage. This paper summarizes the state of the art and the potential of future passive remote sensing systems both for understanding cirrus formation and for acquiring sufficient statistics to constrain and refine weather and climate models.
NASA Astrophysics Data System (ADS)
Ferguson, Elaine A.; Hampson, Katie; Cleaveland, Sarah; Consunji, Ramona; Deray, Raffy; Friar, John; Haydon, Daniel T.; Jimenez, Joji; Pancipane, Marlon; Townsend, Sunny E.
2015-12-01
Understanding the factors influencing vaccination campaign effectiveness is vital in designing efficient disease elimination programmes. We investigated the importance of spatial heterogeneity in vaccination coverage and human-mediated dog movements for the elimination of endemic canine rabies by mass dog vaccination in Region VI of the Philippines (Western Visayas). Household survey data was used to parameterise a spatially-explicit rabies transmission model with realistic dog movement and vaccination coverage scenarios, assuming a basic reproduction number for rabies drawn from the literature. This showed that heterogeneous vaccination reduces elimination prospects relative to homogeneous vaccination at the same overall level. Had the three vaccination campaigns completed in Region VI in 2010-2012 been homogeneous, they would have eliminated rabies with high probability. However, given the observed heterogeneity, three further campaigns may be required to achieve elimination with probability 0.95. We recommend that heterogeneity be reduced in future campaigns through targeted efforts in low coverage areas, even at the expense of reduced coverage in previously high coverage areas. Reported human-mediated dog movements did not reduce elimination probability, so expending limited resources on restricting dog movements is unnecessary in this endemic setting. Enhanced surveillance will be necessary post-elimination, however, given the reintroduction risk from long-distance dog movements.
Learning partial differential equations via data discovery and sparse optimization
NASA Astrophysics Data System (ADS)
Schaeffer, Hayden
2017-01-01
We investigate the problem of learning an evolution equation directly from some given data. This work develops a learning algorithm to identify the terms in the underlying partial differential equations and to approximate the coefficients of the terms only using data. The algorithm uses sparse optimization in order to perform feature selection and parameter estimation. The features are data driven in the sense that they are constructed using nonlinear algebraic equations on the spatial derivatives of the data. Several numerical experiments show the proposed method's robustness to data noise and size, its ability to capture the true features of the data, and its capability of performing additional analytics. Examples include shock equations, pattern formation, fluid flow and turbulence, and oscillatory convection.
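The core of such an algorithm can be sketched as a sequentially thresholded least-squares fit over a library of candidate features. The library columns, coefficients, and threshold below are illustrative stand-ins for data-driven derivative features, not the paper's exact formulation:

```python
import numpy as np

def stls(Theta, y, threshold=0.1, iters=10):
    # Fit y ~ Theta @ xi, repeatedly zeroing small coefficients and
    # refitting on the surviving columns (feature selection + estimation).
    xi = np.linalg.lstsq(Theta, y, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        if (~small).any():
            xi[~small] = np.linalg.lstsq(Theta[:, ~small], y, rcond=None)[0]
    return xi

rng = np.random.default_rng(1)
# Columns stand in for candidate PDE terms (u, u_x, u_xx, u*u_x, ...)
Theta = rng.normal(size=(400, 6))
xi_true = np.array([0.0, 0.0, 0.5, 0.0, -1.0, 0.0])   # sparse "ground truth"
y = Theta @ xi_true + 0.01 * rng.normal(size=400)

xi = stls(Theta, y)
print(np.nonzero(xi)[0])   # indices of the recovered active terms
```

Sparsity does the feature selection: only the library terms that genuinely drive the dynamics survive the thresholding, and their refit coefficients approximate the true parameters.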
A physiologically motivated sparse, compact, and smooth (SCS) approach to EEG source localization.
Cao, Cheng; Akalin Acar, Zeynep; Kreutz-Delgado, Kenneth; Makeig, Scott
2012-01-01
Here, we introduce a novel approach to the EEG inverse problem based on the assumption that the principal cortical sources of multi-channel EEG recordings are spatially sparse, compact, and smooth (SCS). To enforce these characteristics in solutions of the EEG inverse problem, we propose a correlation-variance model that factors the cortical source space covariance matrix into the product of a pre-given correlation coefficient matrix and the square root of the diagonal variance matrix learned from the data under a Bayesian learning framework. We tested the SCS method using simulated EEG data at various SNRs and applied it to a real ECoG data set. We compare the results of SCS to those of an established sparse Bayesian learning (SBL) algorithm.
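The correlation-variance factorization can be illustrated in a few lines. The 3-source correlation matrix and variances below are made-up numbers, and the factorization shown (Σ = D^{1/2} C D^{1/2}) is one standard reading of the model described in the abstract:

```python
import numpy as np

# Pre-given correlation matrix encoding smoothness among three nearby sources
C = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.6],
              [0.3, 0.6, 1.0]])
v = np.array([4.0, 1.0, 0.25])          # per-source variances (learned from data in SCS)
D_sqrt = np.diag(np.sqrt(v))
Sigma = D_sqrt @ C @ D_sqrt             # source covariance = correlations scaled by variances

print(np.allclose(np.diag(Sigma), v))   # diagonal recovers the variances
print(np.allclose(Sigma, Sigma.T))      # still a symmetric covariance matrix
```

Keeping the correlation structure fixed and learning only the diagonal variances is what lets the Bayesian framework drive most source variances toward zero (sparsity) while the fixed correlations preserve compactness and smoothness.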
TES/Aura L3 Carbon Monoxide (CO) Monthly (TL3COM)
Atmospheric Science Data Center
2018-02-28
Source data: TES Aura L1B Nadir. Spatial coverage: 5.3 x 8.5 km; spatial resolution: 0.5 x 5 km. Guide documents: Level 3 Data User's Guide (PDF); Level 3 Algorithms, Requirements, & Products (PDF).
A Spatial Heterodyne Spectrometer for Laboratory Astrophysics; First Interferogram
NASA Technical Reports Server (NTRS)
Lawler, J. E.; Labby, Z. E.; Roesler, F. L.; Harlander, J.
2006-01-01
A Spatial Heterodyne Spectrometer with broad spectral coverage across the VUV-UV region and a high (>500,000) spectral resolving power is being built for laboratory measurements of spectroscopic data, including emission branching fractions, improved level energies, and hyperfine/isotopic parameters.
Bang, Junhyeok; Meng, Sheng; Sun, Yi-Yang; West, Damien; Wang, Zhiguo; Gao, Fei; Zhang, S. B.
2013-01-01
Understanding and controlling excited-carrier dynamics is of fundamental and practical importance, particularly in photochemistry and solar energy applications. However, the theory of energy relaxation of excited carriers is still in its early stages. Here, using ab initio molecular dynamics (MD) coupled with time-dependent density functional theory, we show a coverage-dependent energy transfer of photoexcited carriers in hydrogenated graphene, giving rise to distinctively different ion dynamics. Graphene with sparsely populated H is difficult to dissociate due to inefficient transfer of the excitation energy into kinetic energy of the H. In contrast, H can easily desorb from fully hydrogenated graphane. The key is to bring the H antibonding state down to the conduction band minimum as the band gap increases. These results contrast with those of standard ground-state MD, which predict that H in the sparse case should be much less stable than in fully hydrogenated graphane. Our findings thus signify the importance of carrying out explicit electronic dynamics in excited-state simulations. PMID:23277576
Superwide-angle coverage code-multiplexed optical scanner.
Riza, Nabeel A; Arain, Muzammil A
2004-05-01
A superwide-angle coverage code-multiplexed optical scanner is presented that has the potential to provide 4π sr coverage. As a proof-of-concept experiment, an angular scan range of 288 degrees for six randomly distributed beams is demonstrated. The proposed scanner achieves its superwide coverage by exploiting a combination of phase-encoded transmission and reflection holography within an in-line hologram recording-retrieval geometry. The basic scanner unit consists of one phase-only digital-mode spatial light modulator for code entry (i.e., beam scan control) and a holographic material, yielding what we believe is a first-of-its-kind scanner combining extremely wide coverage, low component count, high speed (microsecond domain), and large aperture (>1 cm diameter).
Brazier, Isabel; Kelman, Mark; Ward, Michael P
2014-08-29
The aim of this study was to describe the association between landscape and climate factors and the occurrence of tick paralysis cases in dogs and cats reported by veterinarians in Australia. Data were collated based on postcode of residence of the animal and the corresponding landscape (landcover and elevation) and climate (precipitation, temperature) information was derived. During the study period (October 2010-December 2012), a total of 5560 cases (4235 [76%] canine and 1325 [24%] feline cases) were reported from 341 postcodes, mostly along the eastern seaboard of Australia and from the states of New South Wales and Queensland. Significantly more cases were reported from postcodes which contained areas of broadleaved, evergreen tree coverage (P=0.0019); broadleaved, deciduous open tree coverage (P=0.0416); and water bodies (P=0.0394). Significantly fewer tick paralysis cases were reported from postcodes which contained areas of sparse herbaceous or sparse shrub coverage (P=0.0297) and areas that were cultivated and managed (P=0.0005). No significant (P=0.6998) correlation between number of tick paralysis cases reported per postcode and elevation was found. Strong positive correlations were found between number of cases reported per postcode and the annual minimum (rSP=0.9552, P<0.0001) and maximum (rSP=0.9075; P=0.0001) precipitation. Correlations between reported tick paralysis cases and temperature variables were much weaker than for precipitation, rSP<0.23. For maximum temperature, the strongest correlation between cases was found in winter (rSP=0.1877; P=0.0005) and for minimum temperature in autumn (rSP=0.2289: P<0.0001). Study findings suggest that tick paralysis cases are more likely to occur and be reported in certain eco-climatic zones, such as those with higher rainfall and containing tree cover and areas of water. 
Veterinarians and pet owners in these zones should be particularly alert for tick paralysis cases, to maximize the benefits of early treatment, and vigilant in using chemical prophylaxis to reduce the risk of tick parasitism. Copyright © 2014 Elsevier B.V. All rights reserved.
Gravity Data from Newark Valley, White Pine County, Nevada
Mankinen, Edward A.; McKee, Edwin H.
2007-01-01
The Newark Valley area, eastern Nevada is one of thirteen major ground-water basins investigated by the BARCAS (Basin and Range Carbonate Aquifer Study) Project. Gravity data are being used to help characterize the geophysical framework of the region. Although gravity coverage was extensive over parts of the BARCAS study area, data were sparse for a number of the valleys, including the northern part of Newark Valley. We addressed this lack of data by establishing seventy new gravity stations in and around Newark Valley. All available gravity data were then evaluated to determine their reliability, prior to calculating an isostatic residual gravity map to be used for subsequent analyses. A gravity inversion method was used to calculate depths to pre-Cenozoic basement rock and estimates of maximum alluvial/volcanic fill. The enhanced gravity coverage and the incorporation of lithologic information from several deep oil and gas wells yield a view of the subsurface shape of the basin and will provide information useful for the development of hydrogeologic models for the region.
A quantitative assessment of Arctic shipping in 2010–2014
Eguíluz, Victor M.; Fernández-Gracia, Juan; Irigoien, Xabier; Duarte, Carlos M.
2016-01-01
Rapid loss of sea ice is opening up the Arctic Ocean to shipping, a practice that is forecasted to increase rapidly by 2050 when many models predict that the Arctic Ocean will largely be free of ice toward the end of summer. These forecasts carry considerable uncertainty because Arctic shipping was previously considered too sparse to allow for adequate validation. Here, we provide quantitative evidence that the extent of Arctic shipping in the period 2011–2014 is already significant and that it is concentrated (i) in the Norwegian and Barents Seas, and (ii) predominantly accessed via the Northeast and Northwest Passages. Thick ice along the forecasted direct trans-Arctic route was still present in 2014, preventing transit. Although Arctic shipping remains constrained by the extent of ice coverage, during every September, this coverage is at a minimum, allowing the highest levels of shipping activity. Access to Arctic resources, particularly fisheries, is the most important driver of Arctic shipping thus far. PMID:27477878
Cherenkov detectors for spatial imaging applications using discrete-energy photons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rose, Paul B.; Erickson, Anna S., E-mail: erickson@gatech.edu
Cherenkov detectors can offer a significant advantage in spatial imaging applications when excellent timing response, low noise and cross talk, large area coverage, and the ability to operate in magnetic fields are required. We show that an array of Cherenkov detectors with crude energy resolution, coupled with monochromatic photons resulting from a low-energy nuclear reaction, can be used to produce a sharp image of a material while providing large and inexpensive detector coverage. Analysis of the detector response to the relative transmission of photons of various energies allows reconstruction of the material's effective atomic number, further aiding high-Z material identification.
The Area Coverage of Geophysical Fields as a Function of Sensor Field-of-View
NASA Technical Reports Server (NTRS)
Key, Jeffrey R.
1994-01-01
In many remote sensing studies of geophysical fields such as clouds, land cover, or sea ice characteristics, the fractional area coverage of the field in an image is estimated as the proportion of pixels that have the characteristic of interest (i.e., are part of the field) as determined by some thresholding operation. The effect of sensor field-of-view on this estimate is examined by modeling the unknown distribution of subpixel area fraction with the beta distribution, whose two parameters depend upon the true fractional area coverage, the pixel size, and the spatial structure of the geophysical field. Since it is often not possible to relate digital number, reflectance, or temperature to subpixel area fraction, the statistical models described are used to determine the effect of pixel size and thresholding operations on the estimate of area fraction for hypothetical geophysical fields. Examples are given for simulated cumuliform clouds and linear openings in sea ice, whose spatial structures are described by an exponential autocovariance function. It is shown that the rate and direction of change in total area fraction with changing pixel size depends on the true area fraction, the spatial structure, and the thresholding operation used.
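The effect described can be reproduced with a quick Monte Carlo sketch: if subpixel area fractions follow, say, a Beta(1, 3) distribution (mean 0.25), the area fraction a thresholding scheme reports depends strongly on the threshold. The distribution parameters and thresholds here are illustrative choices, not values from the paper:

```python
import random

random.seed(2)

def estimated_coverage(a, b, threshold, n=200_000):
    """Monte Carlo estimate of the area fraction reported by thresholding:
    a pixel counts as 'covered' iff its subpixel fraction, drawn from
    Beta(a, b), exceeds the threshold."""
    hits = sum(random.betavariate(a, b) > threshold for _ in range(n))
    return hits / n

# Beta(1, 3) has mean 1/4: the true area fraction is 25%.
for t in (0.25, 0.5, 0.75):
    print(t, estimated_coverage(1, 3, t))
```

For Beta(1, 3) the tail probability is available in closed form, P(F > t) = (1 - t)^3, so thresholding at 0.25 overestimates the true 25% coverage (≈0.42) while thresholding at 0.5 underestimates it (≈0.13): the direction and size of the bias depend on both the threshold and the subpixel distribution, which is the paper's central point.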
NASA Technical Reports Server (NTRS)
Xie, X.; Liu, W.; Hu, H.; Tang, W.
2001-01-01
The series of joint U.S.-Japan spaceborne scatterometer missions providing continuous measurements of ocean wind vectors is reviewed. Examples of the scientific impact of the continuous effort to improve spatial resolution and coverage are provided. The plan for the future is reviewed.
Brain tumor segmentation from multimodal magnetic resonance images via sparse representation.
Li, Yuhong; Jia, Fucang; Qin, Jing
2016-10-01
Accurately segmenting and quantifying brain gliomas from magnetic resonance (MR) images remains a challenging task because of the large spatial and structural variability among brain tumors. To develop a fully automatic and accurate brain tumor segmentation algorithm, we present a probabilistic model of multimodal MR brain tumor segmentation. This model combines sparse representation and the Markov random field (MRF) to solve the spatial and structural variability problem. We formulate the tumor segmentation problem as a multi-classification task by labeling each voxel as the maximum posterior probability. We estimate the maximum a posteriori (MAP) probability by introducing the sparse representation into a likelihood probability and a MRF into the prior probability. Considering the MAP as an NP-hard problem, we convert the maximum posterior probability estimation into a minimum energy optimization problem and employ graph cuts to find the solution to the MAP estimation. Our method is evaluated using the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013) and obtained Dice coefficient metric values of 0.85, 0.75, and 0.69 on the high-grade Challenge data set, 0.73, 0.56, and 0.54 on the high-grade Challenge LeaderBoard data set, and 0.84, 0.54, and 0.57 on the low-grade Challenge data set for the complete, core, and enhancing regions. The experimental results show that the proposed algorithm is valid and ranks 2nd compared with the state-of-the-art tumor segmentation algorithms in the MICCAI BRATS 2013 challenge. Copyright © 2016 Elsevier B.V. All rights reserved.
Spatiotemporal predictions of soil properties and states in variably saturated landscapes
NASA Astrophysics Data System (ADS)
Franz, Trenton E.; Loecke, Terrance D.; Burgin, Amy J.; Zhou, Yuzhen; Le, Tri; Moscicki, David
2017-07-01
Understanding greenhouse gas (GHG) fluxes from landscapes with variably saturated soil conditions is challenging given the highly dynamic nature of GHG fluxes in both space and time, dubbed hot spots and hot moments. On one hand, our ability to directly monitor these processes is limited by sparse in situ and surface chamber observational networks. On the other hand, remote sensing approaches provide spatial data sets but are limited by infrequent imaging over time. We use a robust statistical framework to merge sparse sensor network observations with reconnaissance-style hydrogeophysical mapping at a well-characterized site in Ohio. We find that combining time-lapse electromagnetic induction surveys with empirical orthogonal functions provides additional environmental covariates related to soil properties and states at high spatial resolutions (~5 m). A cross-validation experiment using eight different spatial interpolation methods versus 120 in situ soil cores indicated a ~30% reduction in root-mean-square error for soil properties (clay weight percent and total soil carbon weight percent) using hydrogeophysically derived environmental covariates with regression kriging. In addition, the hydrogeophysically derived environmental covariates were found to be good predictors of soil states (soil temperature, soil water content, and soil oxygen). The presented framework allows for temporal gap filling of individual sensor data sets as well as flexible geometric interpolation to complex areas/volumes. We anticipate that the framework, with its flexible temporal and spatial monitoring options, will be useful in designing future monitoring networks as well as supporting the next generation of hyper-resolution hydrologic and biogeochemical models.
TES/Aura L3 Methane (CH4) Monthly (TL3CH4M)
Atmospheric Science Data Center
2018-02-28
Source data: TES Aura L1B Nadir. Spatial coverage: 5.3 x 8.5 km; spatial resolution: 0.5 x 5 km. Guide documents: Level 3 Data User's Guide (PDF); Level 3 Algorithms, Requirements, & Products (PDF).
Stang, Christoph; Wieczorek, Matthias Valentin; Noss, Christian; Lorke, Andreas; Scherr, Frank; Goerlitz, Gerhard; Schulz, Ralf
2014-07-01
Quantitative information on the processes leading to the retention of plant protection products (PPPs) in surface waters is not available, particularly for flow-through systems. The influence of aquatic vegetation on the hydraulic- and sorption-mediated mitigation processes of three PPPs (triflumuron, pencycuron, and penflufen; logKOW 3.3-4.9) in 45-m slow-flowing stream mesocosms was investigated. Peak reductions were 35-38% in an unvegetated stream mesocosm, 60-62% in a sparsely vegetated stream mesocosm (13% coverage with Elodea nuttallii), and in a similar range of 57-69% in a densely vegetated stream mesocosm (100% coverage). Between 89% and 93% of the measured total peak reductions in the sparsely vegetated stream can be explained by an increase of vegetation-induced dispersion (estimated with the one-dimensional solute transport model OTIS), while 7-11% of the peak reduction can be attributed to sorption processes. However, dispersion contributed only 59-71% of the peak reductions in the densely vegetated stream mesocosm, where 29% to 41% of the total peak reductions can be attributed to sorption processes. In the densely vegetated stream, 8-27% of the applied PPPs, depending on the logKOW values of the compounds, were temporarily retained by macrophytes. Increasing PPP recoveries in the aqueous phase were accompanied by a decrease of PPP concentrations in macrophytes indicating kinetic desorption over time. This is the first study to provide quantitative data on how the interaction of dispersion and sorption, driven by aquatic macrophytes, influences the mitigation of PPP concentrations in flowing vegetated stream systems. Copyright © 2014 Elsevier Ltd. All rights reserved.
Structured networks support sparse traveling waves in rodent somatosensory cortex.
Moldakarimov, Samat; Bazhenov, Maxim; Feldman, Daniel E; Sejnowski, Terrence J
2018-05-15
Neurons responding to different whiskers are spatially intermixed in the superficial layer 2/3 (L2/3) of the rodent barrel cortex, where a single whisker deflection activates a sparse, distributed neuronal population that spans multiple cortical columns. How the superficial layer of the rodent barrel cortex is organized to support such distributed sensory representations is not clear. In a computer model, we tested the hypothesis that sensory representations in L2/3 of the rodent barrel cortex are formed by activity propagation horizontally within L2/3 from a site of initial activation. The model explained the observed properties of L2/3 neurons, including the low average response probability in the majority of responding L2/3 neurons, and the existence of a small subset of reliably responding L2/3 neurons. Sparsely propagating traveling waves similar to those observed in L2/3 of the rodent barrel cortex occurred in the model only when a subnetwork of strongly connected neurons was immersed in a much larger network of weakly connected neurons.
Compressive sensing for single-shot two-dimensional coherent spectroscopy
NASA Astrophysics Data System (ADS)
Harel, E.; Spencer, A.; Spokoyny, B.
2017-02-01
In this work, we explore the use of compressive sensing for the rapid acquisition of two-dimensional optical spectra that encode the electronic structure and ultrafast dynamics of condensed-phase molecular species. Specifically, we have developed a means to combine multiplexed single-element detection with single-shot, phase-resolved two-dimensional coherent spectroscopy. The method described, which we call Single Point Array Reconstruction by Spatial Encoding (SPARSE), eliminates the need for costly array detectors while speeding up acquisition by several orders of magnitude compared to scanning methods. Physical implementation of SPARSE is facilitated by combining spatiotemporal encoding of the nonlinear optical response with signal modulation by a high-speed digital micromirror device. We demonstrate the approach by investigating a well-characterized cyanine molecule and a photosynthetic pigment-protein complex. Hadamard and compressive sensing algorithms are demonstrated, with the latter achieving compression factors as high as ten. Both show good agreement with directly detected spectra. We envision a myriad of applications of SPARSE in nonlinear spectroscopy with broadband femtosecond light sources in so-far unexplored regions of the electromagnetic spectrum.
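The recovery step in compressive sensing can be sketched with a basic l1 solver. The random measurement matrix, sparsity level, and iterative soft-thresholding below are a generic illustration of recovering a sparse "spectrum" from few measurements, not the SPARSE instrument's actual reconstruction pipeline:

```python
import numpy as np

def ista(A, y, lam=0.02, steps=2000):
    # Iterative soft-thresholding for min 0.5*||A x - y||^2 + lam*||x||_1
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - y) / L             # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrink toward sparsity
    return x

rng = np.random.default_rng(3)
n, m, k = 128, 40, 3                              # 128-point "spectrum", 40 measurements, 3 peaks
support = rng.choice(n, size=k, replace=False)
x_true = np.zeros(n)
x_true[support] = [1.0, 0.7, 0.5]
A = rng.normal(size=(m, n)) / np.sqrt(m)          # random sensing matrix, compression factor ~3
y = A @ x_true                                    # multiplexed single-element measurements

x_hat = ista(A, y)
top = np.argsort(np.abs(x_hat))[-k:]              # largest recovered entries
print(sorted(top.tolist()) == sorted(support.tolist()))
```

Because the signal is sparse, far fewer measurements than spectral points suffice; the compression factor here (128/40 ≈ 3) is of the same order as the factors of up to ten reported in the abstract.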
Multi-channel feature dictionaries for RGB-D object recognition
NASA Astrophysics Data System (ADS)
Lan, Xiaodong; Li, Qiming; Chong, Mina; Song, Jian; Li, Jun
2018-04-01
Hierarchical matching pursuit (HMP) is a popular feature learning method for RGB-D object recognition. However, the feature representation with only one dictionary for the RGB channels in HMP does not capture sufficient visual information. In this paper, we propose a multi-channel feature dictionary based feature learning method for RGB-D object recognition. Feature extraction in the proposed method consists of two layers, with the K-SVD algorithm used to learn the dictionaries for sparse coding in both. In the first layer, we obtain features by max pooling over the sparse codes of pixels in a cell, and the features of the cells in a patch are concatenated to generate joint patch features. These first-layer joint patch features are then used to learn the dictionary and sparse codes of the second layer. Finally, spatial pyramid pooling can be applied to the joint patch features of any layer to generate the final object features. Experimental results show that our method, with first- or second-layer features, obtains comparable or better performance than several published state-of-the-art methods.
Chen, Yong-jin; Chen, Ya-ning; Liu, Jia-zhen
2010-03-01
Variation in vegetation coverage results from the combined effects of the Earth's endogenic and exogenic forces; human activity, however, also changes vegetation coverage substantially. Based on monitoring data on groundwater chemistry and vegetation coverage from 2002 to 2007 in the lower reaches of the Tarim River, the relations between vegetation coverage and groundwater chemistry were studied. Vegetation coverage at Sector A exceeded 80% and decreased from sector to sector, falling below 10% at Sector I. Within the same sector, samples near the water source had high coverage, while samples far from the river had low coverage. The pH of the groundwater varied in a similar pattern: sectors near the water source had higher pH than those farther away. Regression between groundwater quality and vegetation coverage showed that the coverage of Populus euphratica increased with groundwater pH, while the coverage of Tamarix ramosissima showed the opposite trend under the same environmental factors. This can explain the spatial distributions of Populus euphratica and Tamarix ramosissima in the lower reaches of the Tarim River.
NASA Astrophysics Data System (ADS)
De Vleeschouwer, N.; Verhoest, N.; Pauwels, V. R. N.
2015-12-01
The continuous monitoring of soil moisture in a permanent network can yield an interesting data product for use in hydrological data assimilation. Major advantages of in situ observations compared to remote sensing products are the potential vertical extent of the measurements, the finer temporal resolution of the observation time series, the smaller impact of land cover variability on the observation bias, etc. However, two major disadvantages are the typical small integration volume of in situ measurements and the often large spacing between monitoring locations. This causes only a small part of the modelling domain to be directly observed. Furthermore, the spatial configuration of the monitoring network is typically temporally non-dynamic. Therefore two questions can be raised. Do spatially sparse in situ soil moisture observations contain a sufficient data representativeness to successfully assimilate them into the largely unobserved spatial extent of a distributed hydrological model? And if so, how is this assimilation best performed? Consequently two important factors that can influence the success of assimilating in situ monitored soil moisture are the spatial configuration of the monitoring network and the applied assimilation algorithm. In this research the influence of those factors is examined by means of synthetic data-assimilation experiments. The study area is the ± 100 km² catchment of the Bellebeek in Flanders, Belgium. The influence of the spatial configuration is examined by varying the amount of locations and their position in the landscape. The latter is performed using several techniques including temporal stability analysis and clustering. Furthermore the observation depth is considered by comparing assimilation of surface layer (5 cm) and deeper layer (50 cm) observations. The impact of the assimilation algorithm is assessed by comparing the performance obtained with two well-known algorithms: Newtonian nudging and the Ensemble Kalman Filter.
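To make the second question concrete, a stochastic Ensemble Kalman Filter analysis step for a few in situ point observations in a largely unobserved domain might look like the sketch below. The grid size, prior bias, and noise levels are invented for illustration; this is not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_update(ensemble, H, y, obs_var, rng):
    """Stochastic Ensemble Kalman Filter analysis step for sparse point observations.

    ensemble : (n_state, n_ens) prior soil-moisture ensemble over the model grid
    H        : (n_obs, n_state) observation operator selecting the monitored cells
    y        : (n_obs,) observed soil-moisture values
    """
    n_obs, n_ens = H.shape[0], ensemble.shape[1]
    A = ensemble - ensemble.mean(axis=1, keepdims=True)        # ensemble anomalies
    HA = H @ A
    P_yy = HA @ HA.T / (n_ens - 1) + obs_var * np.eye(n_obs)   # innovation covariance
    P_xy = A @ HA.T / (n_ens - 1)                              # state-obs covariance
    K = P_xy @ np.linalg.inv(P_yy)                             # Kalman gain
    # perturbed observations spread the analysis increment over the ensemble
    Y = y[:, None] + rng.normal(0.0, np.sqrt(obs_var), (n_obs, n_ens))
    return ensemble + K @ (Y - H @ ensemble)

# 100-cell domain, 40 members, 3 observed cells; prior biased dry (0.20 vs truth 0.30)
n_state, n_ens = 100, 40
prior = 0.20 + 0.05 * rng.standard_normal((n_state, n_ens))
H = np.zeros((3, n_state))
H[0, 10] = H[1, 50] = H[2, 90] = 1.0
y = np.full(3, 0.30)
analysis = enkf_update(prior, H, y, obs_var=1e-4, rng=rng)
```

How far the increment propagates into unobserved cells depends entirely on the ensemble's spatial covariances, which is exactly why the network configuration matters.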
TES/Aura L3 Ammonia (NH3) Daily V3 (TL3NH3D)
Atmospheric Science Data Center
2018-03-14
... TES Aura L1B Nadir Spatial Coverage: 5.3 x 8.5 km Spatial Resolution: 0.5 x 5 km ... Guide Documents: Data User's Guide (PDF): Level 3 Level 3 Algorithms, Requirements, & Products (PDF) ...
Competition between plant functional types in the Canadian Terrestrial Ecosystem Model (CTEM) v. 2.0
NASA Astrophysics Data System (ADS)
Melton, J. R.; Arora, V. K.
2015-06-01
The Canadian Terrestrial Ecosystem Model (CTEM) is the interactive vegetation component in the Earth system model of the Canadian Centre for Climate Modelling and Analysis. CTEM models land-atmosphere exchange of CO2 through the response of carbon in living vegetation, and dead litter and soil pools, to changes in weather and climate at timescales of days to centuries. Version 1.0 of CTEM uses prescribed fractional coverage of plant functional types (PFTs) although, in reality, vegetation cover continually adapts to changes in climate, atmospheric composition, and anthropogenic forcing. Changes in the spatial distribution of vegetation occur on timescales of years to centuries as vegetation distributions inherently have inertia. Here, we present version 2.0 of CTEM which includes a representation of competition between PFTs based on a modified version of the Lotka-Volterra (L-V) predator-prey equations. Our approach is used to dynamically simulate the fractional coverage of CTEM's seven natural, non-crop PFTs which are then compared with available observation-based estimates. Results from CTEM v. 2.0 show the model is able to represent the broad spatial distributions of its seven PFTs at the global scale. However, differences remain between modelled and observation-based fractional coverages of PFTs since representing the multitude of plant species globally, with just seven non-crop PFTs, only captures the large scale climatic controls on PFT distributions. As expected, PFTs that exist in climate niches are difficult to represent either due to the coarse spatial resolution of the model, and the corresponding driving climate, or the limited number of PFTs used. We also simulate the fractional coverages of PFTs using unmodified L-V equations to illustrate its limitations. 
The geographic and zonal distributions of primary terrestrial carbon pools and fluxes from the versions of CTEM that use prescribed and dynamically simulated fractional coverage of PFTs compare reasonably well with each other and observation-based estimates. The parametrization of competition between PFTs in CTEM v. 2.0 based on the modified L-V equations behaves in a reasonably realistic manner and yields a tool with which to investigate the changes in spatial distribution of vegetation in response to future changes in climate.
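The expand-into-bare-ground/die-back structure of such competition parametrizations can be illustrated with a toy Lotka-Volterra-style system for fractional coverages. This is emphatically not the CTEM v. 2.0 parametrization, whose colonization and mortality terms are far richer; the rates below are invented.

```python
import numpy as np

def competition_step(f, col, mort, dt=0.1):
    """One explicit Euler step of a toy Lotka-Volterra-style competition model
    for PFT fractional coverages (illustrative only, not the CTEM scheme).

    f    : fractional coverages of the PFTs; the remainder is bare ground
    col  : colonization rates into bare ground
    mort : mortality rates
    """
    bare = 1.0 - f.sum()                      # uncolonized fraction
    dfdt = col * f * bare - mort * f          # expansion minus die-back
    f = np.clip(f + dt * dfdt, 0.0, 1.0)
    return f if f.sum() <= 1.0 else f / f.sum()   # keep total coverage <= 1

# three competing PFTs; the one with the smallest mort/col ratio wins out
f = np.array([0.20, 0.10, 0.05])
col = np.array([0.60, 0.40, 0.20])
mort = np.array([0.05, 0.05, 0.05])
for _ in range(2000):
    f = competition_step(f, col, mort)
```

In this unmodified toy form the strongest competitor excludes the others entirely, which hints at why the paper modifies the L-V equations to sustain coexistence.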
Competition between plant functional types in the Canadian Terrestrial Ecosystem Model (CTEM) v. 2.0
NASA Astrophysics Data System (ADS)
Melton, J. R.; Arora, V. K.
2016-01-01
The Canadian Terrestrial Ecosystem Model (CTEM) is the interactive vegetation component in the Earth system model of the Canadian Centre for Climate Modelling and Analysis. CTEM models land-atmosphere exchange of CO2 through the response of carbon in living vegetation, and dead litter and soil pools, to changes in weather and climate at timescales of days to centuries. Version 1.0 of CTEM uses prescribed fractional coverage of plant functional types (PFTs) although, in reality, vegetation cover continually adapts to changes in climate, atmospheric composition and anthropogenic forcing. Changes in the spatial distribution of vegetation occur on timescales of years to centuries as vegetation distributions inherently have inertia. Here, we present version 2.0 of CTEM, which includes a representation of competition between PFTs based on a modified version of the Lotka-Volterra (L-V) predator-prey equations. Our approach is used to dynamically simulate the fractional coverage of CTEM's seven natural, non-crop PFTs, which are then compared with available observation-based estimates. Results from CTEM v. 2.0 show the model is able to represent the broad spatial distributions of its seven PFTs at the global scale. However, differences remain between modelled and observation-based fractional coverage of PFTs since representing the multitude of plant species globally, with just seven non-crop PFTs, only captures the large-scale climatic controls on PFT distributions. As expected, PFTs that exist in climate niches are difficult to represent either due to the coarse spatial resolution of the model, and the corresponding driving climate, or the limited number of PFTs used. We also simulate the fractional coverage of PFTs using unmodified L-V equations to illustrate its limitations. 
The geographic and zonal distributions of primary terrestrial carbon pools and fluxes from the versions of CTEM that use prescribed and dynamically simulated fractional coverage of PFTs compare reasonably well with each other and observation-based estimates. The parametrization of competition between PFTs in CTEM v. 2.0 based on the modified L-V equations behaves in a reasonably realistic manner and yields a tool with which to investigate the changes in spatial distribution of vegetation in response to future changes in climate.
An analysis of the lithology to resistivity relationships using airborne EM and boreholes
NASA Astrophysics Data System (ADS)
Barfod, Adrian A. S.; Christiansen, Anders V.; Møller, Ingelise
2014-05-01
We present a study of the relationship between dense airborne SkyTEM resistivity data and sparse lithological borehole data. Understanding the geological structures of the subsurface is of great importance to hydrogeological surveys. Large-scale geological information can be gathered directly from boreholes or indirectly from large geophysical surveys. Borehole data provide detailed lithological information only at the position of the borehole and, due to the sparse nature of boreholes, rarely provide the information needed for high-accuracy groundwater models. Airborne geophysical data, on the other hand, provide dense spatial coverage, but bear only indirect information on lithology through the resistivity models. Hitherto, the integration of geophysical data into geological and hydrogeological models has often been subjective, largely undocumented, and painstakingly manual. This project presents a detailed study of the relationships between resistivity data and lithological borehole data. The purpose is to objectively describe the relationships between lithology and geophysical parameters and to document these relationships. This project has focused on utilizing preexisting datasets from the Danish national borehole database (JUPITER) and the national geophysical database (GERDA). The study presented here is from the Norsminde catchment area (208 sq. km), situated in the municipality of Odder, Denmark. The Norsminde area contains a total of 758 boreholes and 106,770 SkyTEM soundings. These large amounts of data make the Norsminde area ideal for studying the relationship between geophysical data and lithological data. The subsurface is discretized into 20 cm horizontal sampling intervals from the highest elevation point to the depth of the deepest borehole. For each of these intervals a resistivity value is calculated at the position of the boreholes using a kriging formulation.
The lithology data from the boreholes are then used to categorize the interpolated resistivity values according to lithology. The end result of this comparison is a set of resistivity distributions for the different lithology categories. The distributions provide detailed, objective information on the resistivity properties of the subsurface and document the resistivity imaging of the geological lithologies. We show that different lithologies are mapped at distinctly different resistivities, but also that the geophysical inversion strategy influences the resulting distributions significantly.
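The final categorization step, grouping the kriged resistivities by the logged lithology to obtain per-lithology distributions, reduces to a simple grouped summary. The lithology labels and resistivity values below are invented for illustration.

```python
import numpy as np

# Toy co-located samples: resistivity (ohm-m) interpolated to borehole positions
# and the logged lithology at the same depth interval (values are illustrative)
lith = np.array(["clay", "clay", "sand", "sand", "gravel", "clay", "sand"])
rho  = np.array([ 12.0,   18.0,  85.0,  110.0,  240.0,    15.0,  95.0])

def resistivity_by_lithology(lith, rho):
    """Summarize the resistivity distribution (mean, std) per lithology category."""
    return {c: (rho[lith == c].mean(), rho[lith == c].std())
            for c in np.unique(lith)}

stats = resistivity_by_lithology(lith, rho)
```

With real data one would keep the full per-category histograms rather than just the first two moments, since the distributions can overlap and be multimodal.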
NASA Astrophysics Data System (ADS)
Zhang, H.; Fang, H.; Yao, H.; Maceira, M.; van der Hilst, R. D.
2014-12-01
Recently, Zhang et al. (2014, Pure and Applied Geophysics) have developed a joint inversion code incorporating body-wave arrival times and surface-wave dispersion data. The joint inversion code was based on the regional-scale version of the double-difference tomography algorithm tomoDD. The surface-wave inversion part uses the propagator matrix solver in the algorithm DISPER80 (Saito, 1988) for forward calculation of dispersion curves from layered velocity models and the related sensitivities. The application of the joint inversion code to the SAFOD site in central California shows that the fault structure is better imaged in the new model, which is able to fit both the body-wave and surface-wave observations adequately. Here we present a new joint inversion method that solves for the model in the wavelet domain constrained by sparsity regularization. Compared to the previous method, it has the following advantages: (1) The method is both data- and model-adaptive. The velocity model can be represented by different wavelet coefficients at different scales, which are generally sparse. By constraining the model wavelet coefficients to be sparse, the inversion in the wavelet domain can inherently adapt to the data distribution so that the model has higher spatial resolution in zones of good data coverage. Fang and Zhang (2014, Geophysical Journal International) have shown the superior performance of the wavelet-based double-difference seismic tomography method compared to the conventional method. (2) For the surface wave inversion, the joint inversion code takes advantage of the recent development of direct inversion of surface wave dispersion data for 3-D variations of shear wave velocity without the intermediate step of phase or group velocity maps (Fang et al., 2014, Geophysical Journal International). A fast marching method is used to compute, at each period, surface wave traveltimes and ray paths between sources and receivers.
We will test the new joint inversion code at the SAFOD site to compare its performance with that of the previous code. We will also apply it to another fault zone, such as the San Jacinto Fault Zone, to better image its structure.
Artieri, Carlo G; Fraser, Hunter B
2014-12-01
The recent advent of ribosome profiling-sequencing of short ribosome-bound fragments of mRNA-has offered an unprecedented opportunity to interrogate the sequence features responsible for modulating translational rates. Nevertheless, numerous analyses of the first riboprofiling data set have produced equivocal and often incompatible results. Here we analyze three independent yeast riboprofiling data sets, including two with much higher coverage than previously available, and find that all three show substantial technical sequence biases that confound interpretations of ribosomal occupancy. After accounting for these biases, we find no effect of previously implicated factors on ribosomal pausing. Rather, we find that incorporation of proline, whose unique side-chain stalls peptide synthesis in vitro, also slows the ribosome in vivo. We also reanalyze a method that implicated positively charged amino acids as the major determinant of ribosomal stalling and demonstrate that it produces false signals of stalling in low-coverage data. Our results suggest that any analysis of riboprofiling data should account for sequencing biases and sparse coverage. To this end, we establish a robust methodology that enables analysis of ribosome profiling data without prior assumptions regarding which positions spanned by the ribosome cause stalling. © 2014 Artieri and Fraser; Published by Cold Spring Harbor Laboratory Press.
Cataract surgical coverage and outcome in the Tibet Autonomous Region of China
Bassett, K L; Noertjojo, K; Liu, L; Wang, F S; Tenzing, C; Wilkie, A; Santangelo, M; Courtright, P
2005-01-01
Background: A recently published, population based survey of the Tibet Autonomous Region (TAR) of China reported on low vision, blindness, and blinding conditions. This paper presents detailed findings from that survey regarding cataract, including prevalence, cataract surgical coverage, surgical outcome, and barriers to use of services. Methods: The Tibet Eye Care Assessment (TECA) was a prevalence survey of people from randomly selected households from three of the seven provinces of the TAR (Lhoka, Nakchu, and Lingzhr), representing its three main environmental regions. The survey, conducted in 1999 and 2000, assessed visual acuity, cause of vision loss, and eye care services. Results: Among the 15 900 people enumerated, 12 644 were examined (79.6%). Cataract prevalence was 5.2% and 13.8% for the total population and those over age 50, respectively. Cataract surgical coverage (vision <6/60) for people age 50 and older (85–90% of the cataract blind) was 56% overall, 70% for men and 47% for women. The most common barriers to use of cataract surgical services were distance and cost. Of the 216 eyes with cataract surgery, 60% were aphakic and 40% were pseudophakic. Pseudophakic surgery left 19% of eyes blind (<6/60) and an additional 20% of eyes with poor vision (6/24–6/60). Aphakic surgery left 24% of eyes blind and an additional 21% of eyes with poor vision. Although more women remained blind than men (28% versus 18%, respectively), the difference was not statistically significant (p = 0.25). Conclusions: Cataract surgical coverage was remarkably high despite the difficulty of providing services to such an isolated and sparse population. Cataract surgical outcome was poor for both aphakic and pseudophakic surgery. The two main priorities are improving cataract surgical quality and cataract surgical coverage, particularly for women. PMID:15615736
Dictionary learning-based spatiotemporal regularization for 3D dense speckle tracking
NASA Astrophysics Data System (ADS)
Lu, Allen; Zontak, Maria; Parajuli, Nripesh; Stendahl, John C.; Boutagy, Nabil; Eberle, Melissa; O'Donnell, Matthew; Sinusas, Albert J.; Duncan, James S.
2017-03-01
Speckle tracking is a common method for non-rigid tissue motion analysis in 3D echocardiography, where unique texture patterns are tracked through the cardiac cycle. However, poor tracking often occurs due to inherent ultrasound issues, such as image artifacts and speckle decorrelation; thus regularization is required. Various methods, such as optical flow, elastic registration, and block-matching techniques, have been proposed to track speckle motion. Such methods typically apply spatial and temporal regularization separately. In this paper, we propose a joint spatiotemporal regularization method based on an adaptive dictionary representation of the dense 3D+time Lagrangian motion field. Sparse dictionaries have good signal-adaptive and noise-reduction properties; however, they are prone to quantization errors. Our method takes advantage of the desirable noise suppression while avoiding the undesirable quantization error. The idea is to enforce regularization only on the poorly tracked trajectories. Specifically, our method (1) builds a data-driven four-dimensional dictionary of Lagrangian displacements using sparse learning, (2) automatically identifies poorly tracked trajectories (outliers) based on sparse reconstruction errors, and (3) performs sparse reconstruction of the outliers only. Our approach can be applied to dense Lagrangian motion fields calculated by any method. We demonstrate the effectiveness of our approach on a baseline block-matching speckle tracker and evaluate the performance of the proposed algorithm using tracking and strain accuracy analysis.
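The outlier-flagging and selective-reconstruction idea, steps (2) and (3) above, might be sketched as follows. Plain orthogonal matching pursuit over a hand-built dictionary of smooth "trajectories" stands in for the paper's learned 4D dictionary; the threshold and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(D, x, k):
    """Orthogonal matching pursuit: greedy k-sparse code of x over dictionary D."""
    idx, r = [], x.astype(float).copy()
    coef = np.zeros(0)
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        r = x - D[:, idx] @ coef
    return idx, coef

def regularize_trajectories(T, D, k=3, tau=1e-6):
    """Sparse reconstruction of outliers only: a trajectory (column of T) whose
    k-sparse reconstruction error exceeds tau is deemed poorly tracked and is
    replaced by its reconstruction; well-tracked trajectories pass through."""
    out = T.copy()
    for j in range(T.shape[1]):
        idx, coef = omp(D, T[:, j], k)
        recon = D[:, idx] @ coef
        if np.linalg.norm(T[:, j] - recon) > tau:
            out[:, j] = recon
    return out

# Smooth displacement atoms; the last 'trajectory' is pure noise (an outlier)
t = np.linspace(0.0, 1.0, 20)
D = np.stack([np.cos(np.pi * m * t) for m in range(1, 9)], axis=1)
D /= np.linalg.norm(D, axis=0)
T = np.stack([2.0 * D[:, 1], -1.5 * D[:, 4], rng.standard_normal(20)], axis=1)
T_reg = regularize_trajectories(T, D)
```

Only the outlier column is altered; trajectories the dictionary already explains are left untouched, which is the mechanism that avoids quantization error on well-tracked motion.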
Large-region acoustic source mapping using a movable array and sparse covariance fitting.
Zhao, Shengkui; Tuna, Cagdas; Nguyen, Thi Ngoc Tho; Jones, Douglas L
2017-01-01
Large-region acoustic source mapping is important for city-scale noise monitoring. Approaches using a single-position measurement scheme to scan large regions using small arrays cannot provide clean acoustic source maps, while deploying large arrays spanning the entire region of interest is prohibitively expensive. A multiple-position measurement scheme is applied to scan large regions at multiple spatial positions using a movable array of small size. Based on the multiple-position measurement scheme, a sparse-constrained multiple-position vectorized covariance matrix fitting approach is presented. In the proposed approach, the overall sample covariance matrix of the incoherent virtual array is first estimated using the multiple-position array data and then vectorized using the Khatri-Rao (KR) product. A linear model is then constructed for fitting the vectorized covariance matrix and a sparse-constrained reconstruction algorithm is proposed for recovering source powers from the model. The user parameter settings are discussed. The proposed approach is tested on a 30 m × 40 m region and a 60 m × 40 m region using simulated and measured data. Much cleaner acoustic source maps and lower sound pressure level errors are obtained compared to the beamforming approaches and the previous sparse approach [Zhao, Tuna, Nguyen, and Jones, Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Processing (ICASSP) (2016)].
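The core fitting step, vectorizing the sample covariance via the Khatri-Rao (KR) product and recovering nonnegative source powers under a sparsity constraint, can be sketched as follows. The array geometry, angular grid, and projected-ISTA solver are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def kr_columns(A):
    """Khatri-Rao columns: column k is kron(conj(a_k), a_k), so that
    vec(A diag(p) A^H) = KR(A) @ p for source powers p."""
    return np.stack([np.kron(np.conj(A[:, k]), A[:, k])
                     for k in range(A.shape[1])], axis=1)

def fit_source_powers(A, R, lam=1e-3, n_iter=3000):
    """Sparse nonnegative fit of the vectorized covariance by projected ISTA."""
    Phi = kr_columns(A)
    r = R.reshape(-1, order="F")                       # column-major vec(R)
    step = 1.0 / np.linalg.norm(Phi.conj().T @ Phi, 2)
    p = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = (Phi.conj().T @ (Phi @ p - r)).real
        p = np.maximum(p - step * (grad + lam), 0.0)   # l1 penalty + nonnegativity
    return p

# Half-wavelength ULA, 8 sensors; two sources on a 19-point angular grid
mics = np.arange(8)[:, None]
angles = np.deg2rad(np.linspace(-90, 90, 19))
A = np.exp(-1j * np.pi * mics * np.sin(angles))
R = (1.0 * np.outer(A[:, 4], A[:, 4].conj())
     + 2.0 * np.outer(A[:, 12], A[:, 12].conj())
     + 0.01 * np.eye(8))
p = fit_source_powers(A, R)
```

In the multiple-position scheme, the single-array covariance here would be replaced by the block-diagonal sample covariance of the incoherent virtual array assembled from all measurement positions.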
NASA Astrophysics Data System (ADS)
Gong, Maoguo; Yang, Hailun; Zhang, Puzhao
2017-07-01
Ternary change detection aims to detect changes and group them into positive and negative changes. It is of great significance in the joint interpretation of spatial-temporal synthetic aperture radar images. In this study, a sparse autoencoder, convolutional neural networks (CNN) and unsupervised clustering are combined to solve the ternary change detection problem without any supervision. First, the sparse autoencoder is used to transform the log-ratio difference image into a suitable feature space for extracting key changes and suppressing outliers and noise. The learned features are then clustered into three classes, which are taken as pseudo labels for training a CNN model as a change feature classifier. Reliable training samples for the CNN are selected from the feature maps learned by the sparse autoencoder with certain selection rules. Given the training samples and the corresponding pseudo labels, the CNN model can be trained by back propagation with stochastic gradient descent. During training, the CNN is driven to learn the concept of change, and a more powerful model is established to distinguish different types of changes. Unlike traditional methods, the proposed framework integrates the merits of sparse autoencoders and CNNs to learn more robust difference representations and the concept of change for ternary change detection. Experimental results on real datasets validate the effectiveness and superiority of the proposed framework.
Changes in Arctic and Antarctic Sea Ice as a Microcosm of Global Climate Change
NASA Technical Reports Server (NTRS)
Parkinson, Claire L.
2014-01-01
Polar sea ice is a key element of the climate system and has now been monitored through satellite observations for over three and a half decades. The satellite observations reveal considerable information about polar ice and its changes since the late 1970s, including a prominent downward trend in Arctic sea ice coverage and a much lesser upward trend in Antarctic sea ice coverage, illustrative of the important fact that climate change entails spatial contrasts. The decreasing ice coverage in the Arctic corresponds well with contemporaneous Arctic warming and exhibits particularly large decreases in the summers of 2007 and 2012, influenced by both preconditioning and atmospheric conditions. The increasing ice coverage in the Antarctic is not as readily explained, but spatial differences in the Antarctic trends suggest a possible connection with atmospheric circulation changes that have perhaps been influenced by the Antarctic ozone hole. The changes in the polar ice covers and the issues surrounding those changes have many commonalities with broader climate changes and their surrounding issues, allowing the sea ice changes to be viewed in some important ways as a microcosm of global climate change.
Agrawal, M; Vasyuchka, V I; Serga, A A; Karenowska, A D; Melkov, G A; Hillebrands, B
2013-09-06
We present spatially resolved measurements of the magnon temperature in a magnetic insulator subject to a thermal gradient. Our data reveal an unexpectedly close correspondence between the spatial dependencies of the exchange magnon and phonon temperatures. These results indicate that if--as is currently thought--the transverse spin Seebeck effect is caused by a temperature difference between the magnon and phonon baths, it must be the case that the magnon temperature is spectrally nonuniform and that the effect is driven by the sparsely populated dipolar region of the magnon spectrum.
NASA Astrophysics Data System (ADS)
Yan, X.; Li, J.; Yang, Z.
2018-04-01
Chen Barag Banner lies in the typical farming-pastoral ecotone of Inner Mongolia and forms the core area of the Hulunbuir steppe. Because of the region's mixed agricultural and pastoral production, vegetation growth there determines not only the local ecological environment and livestock production but also has a significant impact on the ecological security and economic development of Hulunbuir as a whole. It is therefore necessary to monitor vegetation change in this area. Based on 17 MODIS Normalized Difference Vegetation Index (NDVI) images, the authors reconstructed the dynamics of fraction vegetation coverage (FVC) in Chen Barag Banner from 2000 to 2016. First, a pixel decomposition model was used to invert FVC and reconstruct the vegetation coverage time series. The temporal-spatial changes of FVC were then analyzed using a transition matrix. The results showed that vegetation coverage in the study area was influenced by climate, topography and human activities. Over the past 17 years, vegetation coverage showed a fluctuating downward trend: the average vegetation coverage decreased from 58.81 % in 2000 to 48.14 % in 2016, and degraded vegetation cover accounted for 40.09 % of the total changed area. The overall degradation trend was thus evident.
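The pixel decomposition model commonly used for FVC inversion (the dichotomy model) is a one-line formula. The soil and full-vegetation NDVI endmembers below are assumed values; in practice they are estimated from NDVI histogram percentiles of the scene.

```python
import numpy as np

def fvc_dichotomy(ndvi, ndvi_soil=0.05, ndvi_veg=0.80):
    """Fraction vegetation coverage from the pixel dichotomy model:
    FVC = (NDVI - NDVI_soil) / (NDVI_veg - NDVI_soil), clipped to [0, 1].
    The endmember values here are illustrative assumptions."""
    return np.clip((ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil), 0.0, 1.0)

# four example pixels: bare soil, sparse, moderate, and dense vegetation
ndvi = np.array([0.02, 0.30, 0.65, 0.90])
fvc = fvc_dichotomy(ndvi)
```

Applying this per image and differencing the classed FVC maps between dates yields the transition matrix used in the analysis.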
Fertility -- A new trend for a global business.
Farley, J U
1969-01-01
Cooperation between the public and private sectors in helping control population is possible. The usual public outlet for contraceptives, the clinic, is unsatisfactory for three reasons: coverage is sparse; there is no simple, repetitive supply activity; and more immediate medical problems take precedence. The public sector is also not conversant with the advertising and merchandising techniques and research that aid dissemination of both information and the product. Common marketing notions, e.g., 'trading up', may be relevant: many new acceptors of oral contraceptives and IUDs had already used conventional methods of contraception. The private sector is less sensitive to the political and religious aspects of contraception than the public sector.
Zimmerman, Emma; Racine, Eric
2012-01-01
Social neuroscience and its potential implications create an interesting case study for examining human research ethics policies on the topic of public communication of research. We reviewed mainstream national and international human research ethics guidelines and policies on issues of public communication of research. Our analysis relied on five thematic nets to capture the interactions between research and the public: public understanding, knowledge translation, public participation, social outcomes, and dual use. Coverage of these topics is sparse and inconsistent in mainstream policies and guidelines. We identify three options to address these gaps and analyze their strengths and weaknesses.
Asymmetry in Object Substitution Masking Occurs Relative to the Direction of Spatial Attention Shift
ERIC Educational Resources Information Center
Hirose, Nobuyuki; Osaka, Naoyuki
2010-01-01
A sparse mask that persists beyond the duration of a target can reduce its visibility, a phenomenon called "object substitution masking". Y. Jiang and M. M. Chun (2001a) found an asymmetric pattern of substitution masking such that a mask on the peripheral side of the target caused stronger substitution masking than on the central side.…
Soil carbon distribution in Alaska in relation to soil-forming factors
Kristofer D. Johnson; Jennifer Harden; A. David McGuire; Norman B. Bliss; James G. Bockheim; Mark Clark; Teresa Nettleton-Hollingsworth; M. Torre Jorgenson; Evan S. Kane; Michelle Mack; Johathan ODonnell; Chien-Lu Ping; Edward A.G. Schuur; Merritt R. Turetsky; David W. Valentine
2011-01-01
The direction and magnitude of soil organic carbon (SOC) changes in response to climate change remain unclear and depend on the spatial distribution of SOC across landscapes. Uncertainties regarding the fate of SOC are greater in high-latitude systems where data are sparse and the soils are affected by sub-zero temperatures. To address these issues in Alaska, a first-...
NASA Astrophysics Data System (ADS)
Chen, Duxin; Xu, Bowen; Zhu, Tao; Zhou, Tao; Zhang, Hai-Tao
2017-08-01
Coordination can be regarded as the result of interindividual interaction in natural gregarious animal groups. However, revealing the underlying interaction rules and decision-making strategies governing highly coordinated motion in bird flocks is still a long-standing challenge. Based on analysis of high spatial-temporal resolution GPS data of three pigeon flocks, we extract the hidden interaction principle by using a newly emerging machine learning method, namely sparse Bayesian learning. It is observed that the interaction probability has an inflection point at a pairwise distance of 3-4 m, closer than the average maximum interindividual distance, after which it decays strictly with rising pairwise metric distance. Significantly, the density of the spatial neighbor distribution is strongly anisotropic, with an evident lack of interactions along the direction of individual velocity. Thus, in small-sized bird flocks, individuals reciprocally cooperate with a varying number of neighbors in metric space and tend to interact with closer, time-varying neighbors rather than with a fixed number of topological ones. Finally, extensive numerical investigation is conducted to verify both the revealed interaction and decision-making principles during circular flights of pigeon flocks.
Scalable domain decomposition solvers for stochastic PDEs in high performance computing
Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...
2017-09-21
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
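The kind of preconditioned sparse iterative solve at the heart of such subdomain systems can be illustrated on a small 1D model problem. This Jacobi-preconditioned conjugate gradient sketch, with an invented variable diffusion coefficient, only stands in for the paper's multi-level parallel solvers.

```python
import numpy as np

def pcg(A_mv, b, M_inv, tol=1e-10, maxit=1000):
    """Jacobi-preconditioned conjugate gradients using only a sparse matvec A_mv."""
    x = np.zeros_like(b)
    r = b - A_mv(x)
    z = M_inv * r                      # apply diagonal preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A_mv(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# 1D model problem -(k u')' = f with spatially varying coefficient k, homogeneous
# Dirichlet BCs; the tridiagonal matvec stands in for distributed sparse products
n = 200
k = 1.0 + 0.5 * np.sin(np.linspace(0.0, np.pi, n + 1))
main = k[:-1] + k[1:]                  # diagonal of the SPD stiffness matrix
off = k[1:-1]                          # off-diagonals

def A_mv(u):
    v = main * u
    v[1:] -= off * u[:-1]
    v[:-1] -= off * u[1:]
    return v

b = np.ones(n)
x = pcg(A_mv, b, M_inv=1.0 / main)
```

In the full method each subdomain owns such a solve, and a coarse problem couples them; only the matvec and preconditioner change.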
Spatial-temporal variation of marginal land suitable for energy plants from 1990 to 2010 in China
Jiang, Dong; Hao, Mengmeng; Fu, Jingying; Zhuang, Dafang; Huang, Yaohuan
2014-01-01
Energy plants are the main source of bioenergy, which will play an increasingly important role in future energy supplies. With limited cultivated land resources in China, the development of energy plants may primarily rely on marginal land. In this study, based on land use data from 1990 to 2010 (in five-year intervals) and other auxiliary data, the distribution of marginal land suitable for energy plants was determined using a multi-factor integrated assessment method. The variation in land use type and the spatial distribution of marginal land suitable for energy plants across the two decades were analyzed. The results indicate that the total amount of marginal land suitable for energy plants decreased from 136.501 million ha to 114.225 million ha from 1990 to 2010. The reduced land use types are primarily shrub land, sparse forest land, moderately dense grassland, and sparse grassland, and the areas of largest variation are located in Guangxi, Tibet, Heilongjiang, Xinjiang, and Inner Mongolia. The results of this study will provide an effective data reference and decision-making support for the long-term planning of bioenergy resources.
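A multi-factor integrated assessment of this kind reduces, at its core, to intersecting per-cell criteria across raster layers: a cell counts as marginal land suitable for energy plants only if every factor passes. The layers, classes, and thresholds below are invented placeholders, not the study's actual criteria.

```python
import numpy as np

# Invented raster layers on a tiny 2x2 grid.
landuse = np.array([["shrub", "sparse_forest"],
                    ["shrub", "cropland"]])
slope = np.array([[3.0, 9.0],
                  [20.0, 6.0]])                  # percent slope, made up

# A cell qualifies only if its land-use class is marginal AND slope is gentle.
marginal_classes = ["shrub", "sparse_forest", "sparse_grassland"]
suitable = np.isin(landuse, marginal_classes) & (slope < 15.0)
print(int(suitable.sum()))                       # number of suitable cells
```

Multiplying the count of suitable cells by the cell area then gives the kind of area totals (e.g., million ha) the abstract reports.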
Pisharady, Pramod Kumar; Duarte-Carvajalino, Julio M; Sotiropoulos, Stamatios N; Sapiro, Guillermo; Lenglet, Christophe
2017-01-01
The RubiX [1] algorithm combines high SNR characteristics of low resolution data with high spatial specificity of high resolution data, to extract microstructural tissue parameters from diffusion MRI. In this paper we focus on estimating crossing fiber orientations and introduce sparsity to the RubiX algorithm, making it suitable for reconstruction from compressed (under-sampled) data. We propose a sparse Bayesian algorithm for estimation of fiber orientations and volume fractions from compressed diffusion MRI. The data at high resolution is modeled using a parametric spherical deconvolution approach and represented using a dictionary created with the exponential decay components along different possible directions. Volume fractions of fibers along these orientations define the dictionary weights. The data at low resolution is modeled using a spatial partial volume representation. The proposed dictionary representation and sparsity priors consider the dependence between fiber orientations and the spatial redundancy in data representation. Our method exploits the sparsity of fiber orientations, therefore facilitating inference from under-sampled data. Experimental results show improved accuracy and decreased uncertainty in fiber orientation estimates. For under-sampled data, the proposed method is also shown to produce more robust estimates of fiber orientations.
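A generic sparse-coding step of the kind the abstract relies on (dictionary atoms as candidate fiber directions, weights as volume fractions) can be sketched with orthogonal matching pursuit. This is an illustrative solver, not the authors' RubiX estimator, and the dictionary here is random rather than built from exponential-decay components.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select k dictionary atoms and
    refit their weights by least squares after each selection."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))
        w, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ w
    return idx, w

rng = np.random.default_rng(4)
D = rng.normal(size=(64, 30))                    # 30 candidate directions
D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms
y = 2.0 * D[:, 5] + 1.0 * D[:, 17]               # signal from two "fibers"
idx, w = omp(D, y, k=2)
print(sorted(idx))
```

Because the signal is built from exactly two atoms, the greedy search recovers both, with the refit weights playing the role of volume fractions.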
Uncovering representations of sleep-associated hippocampal ensemble spike activity
NASA Astrophysics Data System (ADS)
Chen, Zhe; Grosmark, Andres D.; Penagos, Hector; Wilson, Matthew A.
2016-08-01
Pyramidal neurons in the rodent hippocampus exhibit spatial tuning during spatial navigation, and they are reactivated in specific temporal order during sharp-wave ripples observed in quiet wakefulness or slow wave sleep. However, analyzing representations of sleep-associated hippocampal ensemble spike activity remains a great challenge. In contrast to wakefulness, during sleep there is a complete absence of animal behavior, and the ensemble spike activity is sparse (low occurrence) and fragmented in time. To examine important issues encountered in sleep data analysis, we constructed synthetic sleep-like hippocampal spike data (short epochs, sparse and sporadic firing, compressed timescale) for detailed investigation. Based upon two Bayesian population-decoding methods (one receptive-field-based, the other receptive-field-free), we systematically investigated their representational power and detection reliability. Notably, the receptive-field-free decoding method was found to be well-suited for hippocampal ensemble spike data in slow wave sleep (SWS), even in the absence of prior behavioral measures or ground truth. Our results showed that, in addition to the sample length, bin size, and firing rate, the number of active hippocampal pyramidal neurons is critical for reliable representation of space as well as for detection of spatiotemporally reactivated patterns in SWS or quiet wakefulness.
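The receptive-field-based decoder mentioned above is, in essence, Bayesian inversion of Poisson-distributed spike counts against place-cell tuning curves. A toy version, with made-up tuning curves and a made-up track, looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)
positions = np.linspace(0.0, 1.0, 50)            # discretised linear track
centers = np.linspace(0.05, 0.95, 20)            # place-field centres, 20 cells
# Gaussian tuning curves with a small baseline firing rate (Hz)
tuning = 0.5 + 15.0 * np.exp(-((positions[None, :] - centers[:, None]) ** 2) / 0.01)

true_idx, dt = 30, 1.0                           # decode one 1-second bin
counts = rng.poisson(tuning[:, true_idx] * dt)   # simulated ensemble spike counts

# Poisson log-likelihood of the count vector at every candidate position
log_post = counts @ np.log(tuning * dt) - tuning.sum(axis=0) * dt
log_post -= log_post.max()
post = np.exp(log_post)
post /= post.sum()
print(int(np.argmax(post)))                      # decoded position bin
```

The decoded bin lands at or near the true position; shrinking `dt`, the number of cells, or the firing rates degrades this estimate, which is exactly the sensitivity the study quantifies.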
Development and Evaluation of a Gridded CrIS/ATMS Visualization for Operational Forecasting
NASA Astrophysics Data System (ADS)
Zavodsky, B.; Smith, N.; Dostalek, J.; Stevens, E.; Nelson, K.; Weisz, E.; Berndt, E.; Line, W.; Barnet, C.; Gambacorta, A.; Reale, A.; Hoese, D.
2016-12-01
Upper-air observations from radiosondes are limited in spatial coverage and are primarily launched only at synoptic times, potentially missing evolving air masses. For forecast challenges that require diagnosis of the three-dimensional extent of the atmosphere, these observations may not be enough for forecasters. Currently, forecasters rely on model output alongside the sparse network of radiosondes for characterizing the three-dimensional atmosphere. However, satellite information can help fill in the spatial and temporal gaps in radiosonde observations. In particular, the NOAA-Unique Combined Atmospheric Processing System (NUCAPS) combines infrared soundings from the Cross-track Infrared Sounder (CrIS) with the Advanced Technology Microwave Sounder (ATMS) to retrieve profiles of temperature and moisture. NUCAPS retrievals are available in a wide swath of observations with approximately 45-km spatial resolution at nadir and a local Equator crossing time of 1:30 A.M./P.M., enabling three-dimensional observations at asynoptic times. For forecasters to make the best use of these observations, these satellite-based soundings must be displayed in the National Weather Service's decision support system, the Advanced Weather Interactive Processing System (AWIPS). NUCAPS profiles are currently available in AWIPS as point observations that can be displayed on Skew-T diagrams. This presentation discusses the development of a new visualization capability for NUCAPS within AWIPS that will allow the data to be viewed as gridded horizontal maps or as vertical cross sections, giving forecasters additional tools for diagnosing atmospheric features. Forecaster feedback and examples of operational applications from two testbed activities will be highlighted.
First is a product evaluation at the Hazardous Weather Testbed for severe weather, such as high winds, large hail, and tornadoes, where the vertical distribution of temperature and moisture ahead of frontal boundaries was assessed. Second is a product evaluation with the Alaska Center Weather Service Unit for cold air aloft, where the detection of the three-dimensional extent of exterior aircraft temperatures lower than -65°C (temperatures at which jet fuel may begin to freeze) was assessed.
Spatial Coverage Planning for a Planetary Rover
NASA Technical Reports Server (NTRS)
Gaines, Daniel M.; Estlin, Tara; Chouinard, Caroline
2008-01-01
We are developing onboard planning and execution technologies to support the exploration and characterization of geological features by autonomous rovers. In order to generate high quality mission plans, an autonomous rover must reason about the relative importance of the observations it can perform. In this paper we look at the scientific criteria for selecting observations that improve the quality of the area covered by samples. Our approach makes use of a priori information, if available, and allows scientists to mark sub-regions of the area with relative priorities for exploration. We use an efficient algorithm for prioritizing observations based on spatial coverage that allows the system to update observation rankings as new information is gained during execution.
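The prioritization idea can be sketched as a greedy set-cover loop: at each step, pick the candidate observation whose cells add the most priority-weighted, not-yet-covered area. All names and weights below are hypothetical, and this is a generic heuristic rather than the authors' exact ranking algorithm.

```python
def greedy_coverage(candidates, priority, budget):
    """candidates: dict observation -> set of grid cells it covers.
    priority: dict cell -> scientist-assigned weight (default 1.0).
    budget: number of observations the plan may include."""
    covered, plan = set(), []
    for _ in range(budget):
        # Gain of an observation = total priority of cells it newly covers.
        best = max(candidates,
                   key=lambda c: sum(priority.get(cell, 1.0)
                                     for cell in candidates[c] - covered))
        plan.append(best)
        covered |= candidates[best]
    return plan

cands = {"obs_a": {1, 2, 3}, "obs_b": {3, 4}, "obs_c": {5}}
prio = {4: 5.0, 5: 2.0}                 # cell 4 is in a high-priority sub-region
plan = greedy_coverage(cands, prio, budget=2)
print(plan)
```

Because rankings are recomputed against the current covered set, re-running the loop after execution updates naturally re-prioritizes the remaining observations, as the abstract describes.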
Şen, Alper; Gümüşay, M. Ümit; Kavas, Aktül; Bulucu, Umut
2008-09-25
Wireless communication networks offer subscribers the possibilities of free mobility and access to information anywhere at any time. Therefore, electromagnetic coverage calculations are important for wireless mobile communication systems, especially in Wireless Local Area Networks (WLANs). Before any propagation computation is performed, modeling of indoor radio wave propagation needs accurate geographical information in order to avoid the interruption of data transmissions. Geographic Information Systems (GIS) and spatial interpolation techniques are very efficient for performing indoor radio wave propagation modeling. This paper describes the spatial interpolation of electromagnetic field measurements using a feed-forward back-propagation neural network programmed as a tool in GIS. The accuracy of Artificial Neural Networks (ANN) and geostatistical Kriging were compared by adjusting procedures. The feedforward back-propagation ANN provides adequate accuracy for spatial interpolation, but the predictions of Kriging interpolation are more accurate than the selected ANN. The proposed GIS ensures indoor radio wave propagation model and electromagnetic coverage, the number, position and transmitter power of access points and electromagnetic radiation level. Pollution analysis in a given propagation environment was done and it was demonstrated that WLAN (2.4 GHz) electromagnetic coverage does not lead to any electromagnetic pollution due to the low power levels used. Example interpolated electromagnetic field values for WLAN system in a building of Yildiz Technical University, Turkey, were generated using the selected network architectures to illustrate the results with an ANN.
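For intuition, a much simpler spatial interpolator than either the ANN or Kriging is inverse-distance weighting; the sketch below interpolates made-up signal-strength measurements and is only a baseline illustration, not a method from the paper.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse-distance-weighted estimate of `values` at each query point."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** power      # guard against zero distance
    return (w * values).sum(axis=1) / w.sum(axis=1)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([-40.0, -60.0, -50.0, -70.0])    # received power (dBm), made up
est = idw(pts, vals, np.array([[0.5, 0.5]]))
print(est)   # equidistant from all four points, so the plain mean: [-55.]
```

Unlike Kriging, IDW ignores spatial correlation structure, which is one reason geostatistical methods tend to outperform it, consistent with the paper's finding that Kriging beat the ANN.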
Globalization and multi-spatial trends in the coverage of protected-area conservation (1980-2000).
Zimmerer, Karl S; Galt, Ryan E; Buck, Margaret V
2004-12-01
This study is focused on the global expansion of protected-area coverage that occurred during the 1980-2000 period. We examine the multi-scale patterning of four of the basic facets of this expansion: i) estimated increases at the world-regional and country-level scales of total protected-area coverage; ii) transboundary protected areas; iii) conservation corridor projects; and iv) type of conservation management. Geospatial patterning of protected-area designations is a reflection of the priorities of global conservation organizations and the globalization of post-Cold War political and economic arrangements. Local and national-level factors (political leadership and infrastructure) as well as international relations such as multilateral and bilateral aid combine with these globalization processes to impact the extent, type, and location of protected-area designations. We conclude that the interaction of these factors led to the creation and reinforcement of marked spatial differences (rather than tendencies toward worldwide evenness or homogenization) in the course of protected-area expansion during the 1980-2000 period.
Valderrama-Ardila, Carlos; Alexander, Neal; Ferro, Cristina; Cadena, Horacio; Marín, Dairo; Holford, Theodore R.; Munstermann, Leonard E.; Ocampo, Clara B.
2010-01-01
Environmental risk factors for cutaneous leishmaniasis were investigated for the largest outbreak recorded in Colombia. The outbreak began in 2003 in Chaparral, and in the following five years produced 2,313 cases in a population of 56,228. Candidate predictor variables were land use, elevation, and climatic variables such as mean temperature and precipitation. Spatial analysis showed that incidence of cutaneous leishmaniasis was higher in townships with mean temperatures in the middle of the county's range. Incidence was independently associated with higher coverage with forest or shrubs (2.6% greater for each additional percent coverage, 95% credible interval [CI] = 0.5–4.9%), and lower population density (22% lower for each additional 100 persons/km2, 95% CI = 7–41%). The extent of forest or shrub coverage did not show major changes over time. These findings confirmed the roles of climate and land use in leishmaniasis transmission. However, environmental variables were not sufficient to explain the spatial variation in incidence.
NASA Astrophysics Data System (ADS)
Kucera, P. A.; Steinson, M.
2016-12-01
Accurate and reliable real-time monitoring and dissemination of observations of precipitation and surface weather conditions in general is critical for a variety of research studies and applications. Surface precipitation observations provide important reference information for evaluating satellite (e.g., GPM) precipitation estimates. High-quality surface observations of precipitation, temperature, moisture, and winds are important for applications such as agriculture, water resource monitoring, health, and hazardous weather early warning systems. In many regions of the world, surface weather station and precipitation gauge networks are sparsely located and/or of poor quality. Existing stations have often been sited incorrectly, not well maintained, and have limited communications established at the site for real-time monitoring. The University Corporation for Atmospheric Research (UCAR)/National Center for Atmospheric Research (NCAR), with support from USAID, has started an initiative to develop and deploy low-cost weather instrumentation, including tipping-bucket and weighing-type precipitation gauges, in sparsely observed regions of the world. The goal is to improve the number of observations (temporally and spatially) for the evaluation of satellite precipitation estimates in data-sparse regions and to improve the quality of applications for environmental monitoring and early warning alert systems on a regional to global scale. One important aspect of this initiative is to make the data open to the community. The weather station instrumentation has been developed using innovative new technologies such as 3D printers, Raspberry Pi computing systems, and wireless communications. An initial pilot project has been implemented in the country of Zambia. This effort could be expanded to other data-sparse regions around the globe.
The presentation will provide an overview and demonstration of 3D-printed weather station development and an initial evaluation of observed precipitation datasets.
Watanabe, Mirai; Miura, Shingo; Hasegawa, Shun; Koshikawa, Masami K; Takamatsu, Takejiro; Kohzu, Ayato; Imai, Akio; Hayashi, Seiji
2018-04-28
High concentrations of nitrate have been detected in streams flowing from nitrogen-saturated forests; however, the spatial variations of nitrate leaching within those forests and its causes remain poorly explored. The aim of this study is to evaluate the influences of catchment topography and coniferous coverage on stream nitrate concentrations in a nitrogen-saturated forest. We measured nitrate concentrations in the baseflow of headwater streams at 40 montane forest catchments on Mount Tsukuba in central Japan, at three-month intervals for 1 year, and investigated their relationship with catchment topography and with coniferous coverage. Although stream nitrate concentrations varied from 0.5 to 3.0 mg N L⁻¹, those in 31 catchments consistently exceeded 1 mg N L⁻¹, indicating that this forest had experienced nitrogen saturation. A classification and regression tree analysis with multiple environmental factors showed that the mean slope gradient and coniferous coverage were the best and second best, respectively, at explaining inter-catchment variance of stream nitrate concentrations. This analysis suggested that the catchments with steep topography and high coniferous coverage tend to have high nitrate concentrations. Moreover, in the three-year observation period for five adjacent catchments, the two catchments with relatively higher coniferous coverage consistently had higher stream nitrate concentrations. Thus, the spatial variations in stream nitrate concentrations were primarily regulated by catchment steepness and, to a lesser extent, coniferous coverage in this nitrogen-saturated forest. Our results suggest that a decrease in coniferous coverage could potentially contribute to a reduction in nitrate leaching from this nitrogen-saturated forest, and consequently reduce the risk of nitrogen overload for the downstream ecosystems.
This information will allow land managers and researchers to develop improved management plans for this and similar forests in Japan and elsewhere.
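A classification and regression tree ranks predictors by how much their best split reduces variance in the response. A minimal one-predictor split search, with invented slope and nitrate values, illustrates the mechanics:

```python
import numpy as np

def best_split(x, y):
    """Return (sum of within-group squared error, split threshold) for the
    variance-minimising binary split of y along predictor x."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best = (np.inf, None)
    for i in range(1, len(xs)):
        sse = ys[:i].var() * i + ys[i:].var() * (len(ys) - i)
        if sse < best[0]:
            best = (sse, (xs[i - 1] + xs[i]) / 2)
    return best

slope = np.array([5.0, 10.0, 15.0, 25.0, 30.0, 35.0])   # mean slope, made up
nitrate = np.array([0.6, 0.8, 0.9, 2.2, 2.5, 2.8])       # mg N/L, made up
sse, thr = best_split(slope, nitrate)
print(thr)
```

Running this search over every candidate predictor and comparing the achieved error reductions is what lets CART rank slope gradient above coniferous coverage, as reported in the abstract.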
Garske, Tini; Van Kerkhove, Maria D; Yactayo, Sergio; Ronveaux, Olivier; Lewis, Rosamund F; Staples, J Erin; Perea, William; Ferguson, Neil M
2014-05-01
Yellow fever is a vector-borne disease affecting humans and non-human primates in tropical areas of Africa and South America. While eradication is not feasible due to the wildlife reservoir, large scale vaccination activities in Africa during the 1940s to 1960s reduced yellow fever incidence for several decades. However, after a period of low vaccination coverage, yellow fever has resurged in the continent. Since 2006 there has been substantial funding for large preventive mass vaccination campaigns in the most affected countries in Africa to curb the rising burden of disease and control future outbreaks. Contemporary estimates of the yellow fever disease burden are lacking, and the present study aimed to update the previous estimates on the basis of more recent yellow fever occurrence data and improved estimation methods. Generalised linear regression models were fitted to a dataset of the locations of yellow fever outbreaks within the last 25 years to estimate the probability of outbreak reports across the endemic zone. Environmental variables and indicators for the surveillance quality in the affected countries were used as covariates. By comparing probabilities of outbreak reports estimated in the regression with the force of infection estimated for a limited set of locations for which serological surveys were available, the detection probability per case and the force of infection were estimated across the endemic zone. The yellow fever burden in Africa was estimated for the year 2013 as 130,000 (95% CI 51,000-380,000) cases with fever and jaundice or haemorrhage including 78,000 (95% CI 19,000-180,000) deaths, taking into account the current level of vaccination coverage. The impact of the recent mass vaccination campaigns was assessed by evaluating the difference between the estimates obtained for the current vaccination coverage and for a hypothetical scenario excluding these vaccination campaigns. 
Vaccination campaigns were estimated to have reduced the number of cases and deaths by 27% (95% CI 22%-31%) across the region, achieving up to an 82% reduction in countries targeted by these campaigns. A limitation of our study is the high level of uncertainty in our estimates arising from the sparseness of data available from both surveillance and serological surveys. With the estimation method presented here, spatial estimates of transmission intensity can be combined with vaccination coverage levels to evaluate the impact of past or proposed vaccination campaigns, thereby helping to allocate resources efficiently for yellow fever control. This method has been used by the Global Alliance for Vaccines and Immunization (GAVI Alliance) to estimate the potential impact of future vaccination campaigns.
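The campaign-impact comparison is, arithmetically, a ratio of burdens under two coverage scenarios. The toy calculation below uses a naive burden model (force of infection times effectively unvaccinated population) with entirely made-up numbers; it only illustrates the structure of the comparison, not the paper's estimates.

```python
def burden(force_of_infection, population, coverage, efficacy=0.975):
    """Crude annual burden: infections among the effectively unvaccinated."""
    susceptible = population * (1.0 - coverage * efficacy)
    return force_of_infection * susceptible

pop, foi = 1_000_000, 0.002            # made-up population and force of infection
with_campaigns = burden(foi, pop, coverage=0.80)
without_campaigns = burden(foi, pop, coverage=0.40)
impact = 1.0 - with_campaigns / without_campaigns
print(round(impact, 3))                # relative reduction in cases
```

In the study, the force of infection and coverage vary spatially, so this ratio is evaluated per location and aggregated, but the scenario-difference logic is the same.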
Estimating Carbon Storage and Sequestration by Urban Trees at Multiple Spatial Resolutions
NASA Astrophysics Data System (ADS)
Wu, J.; Tran, A.; Liao, A.
2010-12-01
Urban forests are an important component of urban-suburban environments. Urban trees provide not only a full range of social and psychological benefits to city dwellers, but also valuable ecosystem services to communities, such as removing atmospheric carbon dioxide, improving air quality, and reducing storm water runoff. There is an urgent need for developing strategic conservation plans for environmentally sustainable urban-suburban development based on the scientific understanding of the extent and function of urban forests. However, several challenges remain to accurately quantify various environmental benefits provided by urban trees, among which is to deal with the effect of changing spatial resolution and/or scale. In this study, we intended to examine the uncertainties of carbon storage and sequestration associated with the tree canopy coverage of different spatial resolutions. Multi-source satellite imagery data were acquired for the City of Fullerton, located in Orange County of California. The tree canopy coverage of the study area was classified at three spatial resolutions, ranging from 30 m (Landsat-5 Thematic Mapper), 15 m (Advanced Spaceborne Thermal Emission and Reflection Radiometer), to 2.5 m (QuickBird). We calculated the amount of carbon stored in the trees represented on the individual tree coverage maps and the annual carbon taken up by the trees with a model (i.e., CITYgreen) developed by the U.S. Forest Service. The results indicate that urban trees account for significant proportions of land cover in the study area even with the low spatial resolution data. The estimated carbon fixation benefits vary greatly depending on the details of land use and land cover classification. The extrapolation of estimation from the fine-resolution stand-level to the low-resolution landscape-scale will likely not preserve reasonable accuracy.
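The resolution effect the study examines can be demonstrated with a toy canopy map: aggregating a fine binary canopy grid to a coarser grid by majority vote changes the apparent canopy area, and hence the implied carbon. The carbon density below is a placeholder, not a CITYgreen parameter.

```python
import numpy as np

# A checkerboard of scattered street trees: 50% canopy at fine resolution.
fine = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)

# Aggregate 2x2 blocks by majority vote (>= 50% canopy -> canopy cell).
coarse = fine.reshape(4, 2, 4, 2).mean(axis=(1, 3)) >= 0.5

carbon_per_fine_cell = 0.8             # t C per fine cell, placeholder value
fine_carbon = fine.sum() * carbon_per_fine_cell
coarse_carbon = coarse.sum() * 4 * carbon_per_fine_cell   # 4 fine cells each
print(fine_carbon, coarse_carbon)      # the coarse map doubles the estimate here
```

Scattered canopy is the worst case for coarse classification; contiguous forest patches would aggregate with far less distortion, which is why the resolution sensitivity depends on land-cover pattern.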
The Gradual Expansion Muscle Flap
2014-01-01
acute shortening and angulation of the tibia and rotational muscle flap coverage and split thickness skin grafting of the soft tissue defect...is also amenable to split-thickness skin grafting after tissue incorporation.11 In addition to donor site morbidity, free tissue transfer is dependent...necessary soft tissue coverage. In the second stage, after the flap has adequately set and overlying skin graft has full adherence, a Taylor Spatial
Pullan, Rachel L.; Freeman, Matthew C.; Gething, Peter W.; Brooker, Simon J.
2014-01-01
Background Understanding geographic inequalities in coverage of drinking-water supply and sanitation (WSS) will help track progress towards universal coverage of water and sanitation by identifying marginalized populations, thus helping to control a large number of infectious diseases. This paper uses household survey data to develop comprehensive maps of WSS coverage at high spatial resolution for sub-Saharan Africa (SSA). Analysis is extended to investigate geographic heterogeneity and relative geographic inequality within countries. Methods and Findings Cluster-level data on household reported use of improved drinking-water supply, sanitation, and open defecation were abstracted from 138 national surveys undertaken from 1991–2012 in 41 countries. Spatially explicit logistic regression models were developed and fitted within a Bayesian framework, and used to predict coverage at the second administrative level (admin2, e.g., district) across SSA for 2012. Results reveal substantial geographical inequalities in predicted use of water and sanitation that exceed urban-rural disparities. The average range in coverage seen between admin2 within countries was 55% for improved drinking water, 54% for use of improved sanitation, and 59% for dependence upon open defecation. There was also some evidence that countries with higher levels of inequality relative to coverage in use of an improved drinking-water source also experienced higher levels of inequality in use of improved sanitation (rural populations r = 0.47, p = 0.002; urban populations r = 0.39, p = 0.01). Results are limited by the quantity of WSS data available, which varies considerably by country, and by the reliability and utility of available indicators. Conclusions This study identifies important geographic inequalities in use of WSS previously hidden within national statistics, confirming the necessity for targeted policies and metrics that reach the most marginalized populations. 
The presented maps and analysis approach can provide a mechanism for monitoring future reductions in inequality within countries, reflecting priorities of the post-2015 development agenda.
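A drastically simplified, non-spatial analogue of the coverage model is a logistic regression of cluster-level improved-water use on a covariate, fitted by plain gradient ascent. This sketch uses synthetic data and omits the Bayesian geostatistical machinery that the study actually employs.

```python
import numpy as np

rng = np.random.default_rng(3)
urban = rng.random(500)                          # covariate: urban fraction
p_true = 1.0 / (1.0 + np.exp(-(-1.0 + 3.0 * urban)))
used = rng.binomial(1, p_true)                   # cluster reports improved water

X = np.column_stack([np.ones_like(urban), urban])
beta = np.zeros(2)
for _ in range(5000):                            # plain gradient ascent on the
    p = 1.0 / (1.0 + np.exp(-X @ beta))          # Bernoulli log-likelihood
    beta += 0.1 * X.T @ (used - p) / len(used)
print(beta)                                      # slope should come out positive
```

Predicting the fitted probabilities at district covariate values, and adding spatial random effects with a Bayesian fit, is what turns this skeleton into the admin2 coverage maps described above.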
Pullan, Rachel L; Freeman, Matthew C; Gething, Peter W; Brooker, Simon J
2014-04-01
NASA Astrophysics Data System (ADS)
Sloan, B.; Ebtehaj, A. M.; Guala, M.
2017-12-01
The understanding of heat and water vapor transfer from the land surface to the atmosphere by evapotranspiration (ET) is crucial for predicting the hydrologic water balance and for the climate forecasts used in water resources decision-making. However, the complex distribution of vegetation, soil and atmospheric conditions makes large-scale prognosis of evaporative fluxes difficult. Current ET models, such as Penman-Monteith and flux-gradient methods, are challenging to apply at the microscale due to ambiguity in determining resistance factors to momentum, heat and vapor transport for realistic landscapes. Recent research has made progress in modifying Monin-Obukhov similarity theory for dense plant canopies as well as providing a clearer description of diffusive controls on evaporation at a smooth soil surface, both of which aid in calculating more accurate resistance parameters. However, in nature, surfaces typically tend to be aerodynamically rough, and vegetation is a mixture of sparse and dense canopies in non-uniform configurations. The goal of our work is to parameterize the resistances to evaporation based on spatial distributions of sparse plant canopies using novel wind tunnel experimentation at the St. Anthony Falls Laboratory (SAFL). The state-of-the-art SAFL wind tunnel was updated with a retractable soil box test section (shown in Figure 1), complete with a high-resolution scale and soil moisture/temperature sensors for recording evaporative fluxes and drying fronts. The existing capabilities of the tunnel were used to create incoming non-neutral stability conditions and to measure 2-D velocity fields as well as momentum and heat flux profiles through PIV and hotwire anemometry, respectively. Model trees (h = 5 cm) were placed in structured and random configurations based on a probabilistic spacing derived from aerial imagery.
The novel wind tunnel dataset provides the surface energy budget, turbulence statistics and spatial soil moisture data under varying atmospheric stability for each sparse canopy configuration. We will share initial data results and progress toward the development of new parametrizations that can account for the evolution of a canopy roughness sublayer on the momentum, heat and vapor resistance terms as a function of a stochastic representation of canopy spacing.
Sparsely-distributed organization of face and limb activations in human ventral temporal cortex
Weiner, Kevin S.; Grill-Spector, Kalanit
2011-01-01
Functional magnetic resonance imaging (fMRI) has identified face- and body part-selective regions, as well as distributed activation patterns for object categories across human ventral temporal cortex (VTC), eliciting a debate regarding functional organization in VTC and neural coding of object categories. Using high-resolution fMRI, we illustrate that face- and limb-selective activations alternate in a series of largely nonoverlapping clusters in lateral VTC along the inferior occipital gyrus (IOG), fusiform gyrus (FG), and occipitotemporal sulcus (OTS). Both general linear model (GLM) and multivoxel pattern (MVP) analyses show that face- and limb-selective activations minimally overlap and that this organization is consistent across experiments and days. We provide a reliable method to separate two face-selective clusters on the middle and posterior FG (mFus and pFus), and another on the IOG using their spatial relation to limb-selective activations and retinotopic areas hV4, VO-1/2, and hMT+. Furthermore, these activations show a gradient of increasing face selectivity and decreasing limb selectivity from the IOG to the mFus. Finally, MVP analyses indicate that there is differential information for faces in lateral VTC (containing weakly- and highly-selective voxels) relative to non-selective voxels in medial VTC. These findings suggest a sparsely-distributed organization where sparseness refers to the presence of several face- and limb-selective clusters in VTC, and distributed refers to the presence of different amounts of information in highly-, weakly-, and non-selective voxels. Consequently, theories of object recognition should consider the functional and spatial constraints of neural coding across a series of nonoverlapping category-selective clusters that are themselves distributed. PMID:20457261
Tensor-based dynamic reconstruction method for electrical capacitance tomography
NASA Astrophysics Data System (ADS)
Lei, J.; Mu, H. P.; Liu, Q. B.; Li, Z. H.; Liu, S.; Wang, X. Y.
2017-03-01
Electrical capacitance tomography (ECT) is an attractive visualization measurement method, in which the acquisition of high-quality images is beneficial for the understanding of the underlying physical or chemical mechanisms of the dynamic behaviors of the measurement objects. In real-world measurement environments, imaging objects are often in a dynamic process, and the exploitation of the spatial-temporal correlations related to the dynamic nature will contribute to improving the imaging quality. Different from existing imaging methods that are often used in ECT measurements, in this paper a dynamic image sequence is stacked into a third-order tensor that consists of a low rank tensor and a sparse tensor within the framework of the multiple measurement vectors model and the multi-way data analysis method. The low rank tensor models the similar spatial distribution information among frames, which is slowly changing over time, and the sparse tensor captures the perturbations or differences introduced in each frame, which is rapidly changing over time. With the assistance of the Tikhonov regularization theory and the tensor-based multi-way data analysis method, a new cost function, with the considerations of the multi-frames measurement data, the dynamic evolution information of a time-varying imaging object and the characteristics of the low rank tensor and the sparse tensor, is proposed to convert the imaging task in the ECT measurement into a reconstruction problem of a third-order image tensor. An effective algorithm is developed to search for the optimal solution of the proposed cost function, and the images are reconstructed via a batching pattern. The feasibility and effectiveness of the developed reconstruction method are numerically validated.
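The low-rank-plus-sparse split described above can be illustrated with a minimal robust-PCA-style alternation on the unfolded frame stack. This is a sketch of the general idea only; the thresholds (lam_L, lam_S) and the plain alternating scheme are assumptions, not the paper's actual cost function or algorithm:

```python
import numpy as np

def low_rank_sparse_split(frames, lam_L=3.0, lam_S=0.5, n_iter=30):
    """Split a stack of image frames into a slowly varying low-rank part
    and a rapidly varying sparse part by alternating singular-value and
    entrywise soft-thresholding on the mode-1 unfolding of the tensor."""
    T, H, W = frames.shape
    X = frames.reshape(T, H * W)              # each row is one frame
    S = np.zeros_like(X)
    for _ in range(n_iter):
        # low-rank update: shrink singular values of the de-sparsified data
        U, sv, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U * np.maximum(sv - lam_L, 0.0)) @ Vt
        # sparse update: shrink entries of the residual
        R = X - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam_S, 0.0)
    return L.reshape(T, H, W), S.reshape(T, H, W)
```

A steady background lands in the low-rank term, while a transient perturbation confined to one frame lands in the sparse term, mirroring the slow/fast decomposition described above.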
Jia, Yuanyuan; Gholipour, Ali; He, Zhongshi; Warfield, Simon K
2017-05-01
In magnetic resonance (MR), hardware limitations, scan time constraints, and patient movement often result in the acquisition of anisotropic 3-D MR images with limited spatial resolution in the out-of-plane views. Our goal is to construct an isotropic high-resolution (HR) 3-D MR image through upsampling and fusion of orthogonal anisotropic input scans. We propose a multiframe super-resolution (SR) reconstruction technique based on sparse representation of MR images. Our proposed algorithm exploits the correspondence between the HR slices and the low-resolution (LR) sections of the orthogonal input scans as well as the self-similarity of each input scan to train pairs of overcomplete dictionaries that are used in a sparse-land local model to upsample the input scans. The upsampled images are then combined using wavelet fusion and error backprojection to reconstruct an image. Features are learned from the data and no extra training set is needed. Qualitative and quantitative analyses were conducted to evaluate the proposed algorithm using simulated and clinical MR scans. Experimental results show that the proposed algorithm achieves promising results in terms of peak signal-to-noise ratio, structural similarity image index, intensity profiles, and visualization of small structures obscured in the LR imaging process due to partial volume effects. Our novel SR algorithm outperforms the nonlocal means (NLM) method using self-similarity, NLM method using self-similarity and image prior, self-training dictionary learning-based SR method, averaging of upsampled scans, and the wavelet fusion method. Our SR algorithm can reduce through-plane partial volume artifact by combining multiple orthogonal MR scans, and thus can potentially improve medical image analysis, research, and clinical diagnosis.
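The error-backprojection stage of such pipelines can be sketched as follows, assuming toy operators (block-average downsampling, pixel-replication upsampling) rather than the authors' actual ones: the SR estimate is corrected until downsampling it reproduces the observed low-resolution image.

```python
import numpy as np

def downsample(img, f):
    """Block-average decimation by an integer factor f."""
    H, W = img.shape
    return img.reshape(H // f, f, W // f, f).mean(axis=(1, 3))

def upsample(img, f):
    """Pixel-replication interpolation by an integer factor f."""
    return np.kron(img, np.ones((f, f)))

def backproject(sr_est, lr_obs, f, n_iter=10):
    """Push the low-resolution residual back into the SR estimate until
    the estimate is consistent with the observed LR image."""
    x = sr_est.copy()
    for _ in range(n_iter):
        resid = lr_obs - downsample(x, f)   # disagreement in the LR domain
        x = x + upsample(resid, f)          # spread it back over each block
    return x
```

With these toy operators the downsample of the corrected estimate matches the observation exactly; real implementations use blur kernels matched to the scanner's slice profile.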
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ray, Jaideep; Lee, Jina; Lefantzi, Sophia
2013-09-01
The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. The limited nature of the measured data leads to a severely-underdetermined estimation problem. If the estimation is performed at fine spatial resolutions, it can also be computationally expensive. In order to enable such estimations, advances are needed in the spatial representation of ffCO2 emissions, scalable inversion algorithms and the identification of observables to measure. To that end, we investigate parsimonious spatial parameterizations of ffCO2 emissions which can be used in atmospheric inversions. We devise and test three random field models, based on wavelets, Gaussian kernels and covariance structures derived from easily-observed proxies of human activity. In doing so, we constructed a novel inversion algorithm, based on compressive sensing and sparse reconstruction, to perform the estimation. We also address scalable ensemble Kalman filters as an inversion mechanism and quantify the impact of Gaussian assumptions inherent in them. We find that the assumption does not impact the estimates of mean ffCO2 source strengths appreciably, but a comparison with Markov chain Monte Carlo estimates shows significant differences in the variance of the source strengths. Finally, we study whether the very different spatial natures of biogenic and ffCO2 emissions can be used to estimate them, in a disaggregated fashion, solely from CO2 concentration measurements, without extra information from products of incomplete combustion, e.g., CO. We find that this is possible during the winter months, though the errors can be as large as 50%.
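The sparse-reconstruction inversion idea can be sketched with ISTA on a synthetic underdetermined system; the operator, problem sizes, and penalty weight below are illustrative assumptions, not the report's actual configuration:

```python
import numpy as np

def ista(A, y, lam=0.02, n_iter=500):
    """Iterative shrinkage-thresholding for min 0.5||Ax - y||^2 + lam*||x||_1,
    recovering a sparse source vector from too few measurements."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * (A.T @ (A @ x - y))        # gradient step on data fit
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return x

# synthetic example: 40 "concentration measurements" of 120 unknown sources
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 120)) / np.sqrt(40)
x_true = np.zeros(120)
x_true[[5, 40, 77]] = [2.0, -1.5, 1.0]            # three active sources
x_hat = ista(A, A @ x_true)
```

Despite having three times more unknowns than measurements, the sparsity prior recovers the active sources, which is the mechanism that makes the severely-underdetermined ffCO2 problem tractable.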
Wellman, Tristan P.; Poeter, Eileen P.
2006-01-01
Computational limitations and sparse field data often mandate use of continuum representation for modeling hydrologic processes in large‐scale fractured aquifers. Selecting appropriate element size is of primary importance because continuum approximation is not valid for all scales. The traditional approach is to select elements by identifying a single representative elementary scale (RES) for the region of interest. Recent advances indicate RES may be spatially variable, prompting unanswered questions regarding the ability of sparse data to spatially resolve continuum equivalents in fractured aquifers. We address this uncertainty of estimating RES using two techniques. In one technique we employ data‐conditioned realizations generated by sequential Gaussian simulation. For the other we develop a new approach using conditioned random walks and nonparametric bootstrapping (CRWN). We evaluate the effectiveness of each method under three fracture densities, three data sets, and two groups of RES analysis parameters. In sum, 18 separate RES analyses are evaluated, which indicate RES magnitudes may be reasonably bounded using uncertainty analysis, even for limited data sets and complex fracture structure. In addition, we conduct a field study to estimate RES magnitudes and resulting uncertainty for Turkey Creek Basin, a crystalline fractured rock aquifer located 30 km southwest of Denver, Colorado. Analyses indicate RES does not correlate to rock type or local relief in several instances but is generally lower within incised creek valleys and higher along mountain fronts. Results of this study suggest that (1) CRWN is an effective and computationally efficient method to estimate uncertainty, (2) RES predictions are well constrained using uncertainty analysis, and (3) for aquifers such as Turkey Creek Basin, spatial variability of RES is significant and complex.
NASA Astrophysics Data System (ADS)
Han, Ruimei; Zou, Youfeng; Ma, Chao; Liu, Pei
2014-11-01
The Ordos area is a transition zone between desert and wind-erosion desertified steppe, and an ecologically complex region. Ordos City, the study area, has a natural geographic environment similar to that of the ShenDong coalfield; studying its ecological patterns and natural evolution is instructive for revealing the temporal and spatial changes of an ecological environment subject to the artificial disturbance of mining in western areas. In this paper, a time series of AVHRR-NDVI (Normalized Difference Vegetation Index) data was used to monitor the temporal and spatial dynamics of vegetation from 1981 to 2006 in Ordos City and the ShenDong coalfield. The MVC (Maximum Value Composite) method, averaging, linear regression, and gradation of NDVI change trends yielded the following results. Vegetation coverage showed clear periodic behaviour in the study area over the 26 years, with the growth peak in August and the minimum in January; the extreme values were 0.2351 and 0.1176 in Ordos City, and 0.2657 and 0.1272 in the ShenDong coalfield. Overall, the NDVI values showed a modest rising trend, with extreme values of 0.3071 and 0.1861 in Ordos City and 0.3454 and 0.1904 in the ShenDong coalfield. Spatially, areas of slight improvement and slight degradation accounted for 42.49% and 8.37% of Ordos City, while areas of slight improvement and moderate improvement accounted for 70.59% and 29.41% of the ShenDong coalfield. These results indicate sparse vegetation coverage in the study area, reflecting its fragile natural geographic environment, although the modest overall rise in coverage reflects gradual change in the natural environment.
The EarthServer Geology Service: web coverage services for geosciences
NASA Astrophysics Data System (ADS)
Laxton, John; Sen, Marcus; Passmore, James
2014-05-01
The EarthServer FP7 project is implementing web coverage services using the OGC WCS and WCPS standards for a range of earth science domains: cryospheric; atmospheric; oceanographic; planetary; and geological. BGS is providing the geological service (http://earthserver.bgs.ac.uk/). Geoscience has used remote sensed data from satellites and planes for some considerable time, but other areas of geosciences are less familiar with the use of coverage data. This is rapidly changing with the development of new sensor networks and the move from geological maps to geological spatial models. The BGS geology service is designed initially to address two coverage data use cases and three levels of data access restriction. Databases of remote sensed data are typically very large and commonly held offline, making it time-consuming for users to assess and then download data. The service is designed to allow the spatial selection, editing and display of Landsat and aerial photographic imagery, including band selection and contrast stretching. This enables users to rapidly view data, assess its usefulness for their purposes, and then enhance and download it if it is suitable. At present the service contains six band Landsat 7 (Blue, Green, Red, NIR 1, NIR 2, MIR) and three band false colour aerial photography (NIR, green, blue), totalling around 1Tb. Increasingly 3D spatial models are being produced in place of traditional geological maps. Models make explicit spatial information implicit on maps and thus are seen as a better way of delivering geosciences information to non-geoscientists. However web delivery of models, including the provision of suitable visualisation clients, has proved more challenging than delivering maps. The EarthServer geology service is delivering 35 surfaces as coverages, comprising the modelled superficial deposits of the Glasgow area. These can be viewed using a 3D web client developed in the EarthServer project by Fraunhofer.
As well as remote sensed imagery and 3D models, the geology service is also delivering DTM coverages which can be viewed in the 3D client in conjunction with both imagery and models. The service is accessible through a web GUI which allows the imagery to be viewed against a range of background maps and DTMs, and in the 3D client; spatial selection to be carried out graphically; the results of image enhancement to be displayed; and selected data to be downloaded. The GUI also provides access to the Glasgow model in the 3D client, as well as tutorial material. In the final year of the project it is intended to increase the volume of data to 20Tb and enhance the WCPS processing, including depth and thickness querying of 3D models. We have also investigated the use of GeoSciML, developed to describe and interchange the information on geological maps, to describe model surface coverages. EarthServer is developing a combined WCPS and xQuery query language, and we will investigate applying this to the GeoSciML described surfaces to answer questions such as 'find all units with a predominant sand lithology within 25m of the surface'.
USDA-ARS?s Scientific Manuscript database
Soil moisture (SM) can be retrieved from active microwave (AM)-, passive microwave (PM)- and thermal infrared (TIR)-observations, each having their unique spatial- and temporal-coverage. A limitation of TIR-based SM retrievals is its dependency on cloud-free conditions, while microwave retrievals ar...
EPA announced the availability of the final report,
Impacts of field of view configuration of Cross-track Infrared Sounder on clear-sky observations.
Wang, Likun; Chen, Yong; Han, Yong
2016-09-01
Hyperspectral infrared radiance measurements from satellite sensors contain valuable information on atmospheric temperature and humidity profiles and greenhouse gases, and therefore are directly assimilated into numerical weather prediction (NWP) models as inputs for weather forecasting. However, data assimilation in current operational NWP models still relies mainly on cloud-free observations because of the challenge of simulating cloud-contaminated hyperspectral radiances. The limited spatial coverage of the 3×3 field of views (FOVs) in one field of regard (FOR) (i.e., the spatial gap among FOVs), together with the relatively large footprint size (14 km) of the current Cross-track Infrared Sounder (CrIS) instrument, limits the amount of clear-sky observations. This study explores through simulation experiments the potential impacts of future CrIS FOV configurations (including FOV size and spatial coverage) on the amount of clear-sky observations. The radiance measurements and cloud mask products (VCM) from the Visible Infrared Imager Radiometer Suite (VIIRS) are used to simulate CrIS clear-sky observations under different FOV configurations. The results indicate that, for a given FOV coverage (e.g., 3×3), the percentage of clear-sky FOVs and the percentage of clear-sky FORs (those containing at least one clear-sky FOV) both increase as the FOV size decreases. In particular, if the CrIS FOV size were reduced from 14 km to 7 km, the percentage of clear-sky FOVs would increase from 9.02% to 13.51% and the percentage of clear-sky FORs from 18.24% to 27.51%. For a given FOV size, clear-sky FOV observations increase proportionally with the number of sampled FOVs in each FOR. Both reducing FOV size and increasing FOV coverage thus yield more clear-sky FORs, which benefits data utilization in NWP data assimilation.
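The footprint-size effect can be reproduced qualitatively with a toy cloud-mask simulation, assuming uncorrelated random cloud cover on an arbitrary grid (the study itself uses real VIIRS radiances and the VCM):

```python
import numpy as np

def clear_fov_fraction(cloud_mask, fov):
    """Fraction of non-overlapping fov x fov footprints that are entirely
    cloud-free in a boolean mask (True = cloudy pixel)."""
    H, W = cloud_mask.shape
    H, W = H - H % fov, W - W % fov           # trim to a whole number of FOVs
    blocks = cloud_mask[:H, :W].reshape(H // fov, fov, W // fov, fov)
    return float((~blocks.any(axis=(1, 3))).mean())

rng = np.random.default_rng(2)
mask = rng.random((512, 512)) < 0.3           # 30% cloudy pixels
small_fov = clear_fov_fraction(mask, 2)       # stand-in for a 7 km footprint
large_fov = clear_fov_fraction(mask, 4)       # stand-in for a 14 km footprint
```

Halving the footprint raises the clear fraction sharply, mirroring the direction of the reported 9.02% to 13.51% change; the exact numbers depend on real, spatially correlated cloud statistics.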
Atmospheric Science Data Center
2015-11-25
... Buoy Instrument: Barometer Sonic Anemometer Thermistor Spatial Coverage: (34.60, ... Earthdata Search Parameters: Dry Bulb Temperature Pressure Sea Surface Temperature Wet Bulb Temperature ...
NASA Technical Reports Server (NTRS)
Gong, Gavin; Entekhabi, Dara; Salvucci, Guido D.
1994-01-01
Simulated climates using numerical atmospheric general circulation models (GCMs) have been shown to be highly sensitive to the fraction of GCM grid area assumed to be wetted during rain events. The model hydrologic cycle and land-surface water and energy balance are influenced by the parameter bar-kappa, which is the dimensionless fractional wetted area for GCM grids. Hourly precipitation records for over 1700 precipitation stations within the contiguous United States are used to obtain observation-based estimates of fractional wetting that exhibit regional and seasonal variations. The spatial parameter bar-kappa is estimated from the temporal raingauge data using conditional probability relations. Monthly bar-kappa values are estimated for rectangular grid areas over the contiguous United States as defined by the Goddard Institute for Space Studies 4 deg x 5 deg GCM. A bias in the estimates is evident due to the unavoidably sparse raingauge network density, which causes some storms to go undetected by the network. This bias is corrected by deriving the probability of a storm escaping detection by the network. A Monte Carlo simulation study is also conducted that consists of synthetically generated storm arrivals over an artificial grid area. It is used to confirm the bar-kappa estimation procedure and to test the nature of the bias and its correction. These monthly fractional wetting estimates, based on the analysis of station precipitation data, provide an observational basis for assigning the influential parameter bar-kappa in GCM land-surface hydrology parameterizations.
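The detection-bias mechanism can be illustrated with a small Monte Carlo in the same spirit as the study's synthetic experiment; the circular-storm geometry, unit grid cell, and parameter values are assumptions for illustration:

```python
import numpy as np

def miss_probability(n_gauges, wetted_frac, n_trials=5000, seed=3):
    """Monte Carlo probability that a circular storm wetting `wetted_frac`
    of a unit square goes undetected by `n_gauges` random point gauges."""
    rng = np.random.default_rng(seed)
    radius = np.sqrt(wetted_frac / np.pi)     # storm radius for that area
    misses = 0
    for _ in range(n_trials):
        gauges = rng.random((n_gauges, 2))    # gauge locations
        centre = rng.random(2)                # storm centre
        d = np.linalg.norm(gauges - centre, axis=1)
        misses += int(not np.any(d < radius)) # no gauge inside the storm
    return misses / n_trials
```

Sparser networks and smaller storms both inflate the miss rate, which is the bias in the fractional-wetting estimates that the derived detection probability corrects.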
Interferometry in the Era of Very Large Telescopes
NASA Technical Reports Server (NTRS)
Barry, Richard K.
2010-01-01
Research in modern stellar interferometry has focused primarily on ground-based observatories, with very long baselines or large apertures, that have benefited from recent advances in fringe tracking, phase reconstruction, adaptive optics, guided optics, and modern detectors. As one example, a great deal of effort has been put into development of ground-based nulling interferometers. The nulling technique is the sparse aperture equivalent of conventional coronagraphy used in filled aperture telescopes. In this mode the stellar light itself is suppressed by a destructive fringe, effectively enhancing the contrast of the circumstellar material located near the star. Nulling interferometry has helped to advance our understanding of the astrophysics of many distant objects by providing the spatial resolution necessary to localize the various faint emission sources near bright objects. We illustrate the current capabilities of this technique by describing the first scientific results from the Keck Interferometer Nuller that combines the light from the two largest optical telescopes in the world including new, unpublished measurements of exozodiacal dust disks. We discuss prospects in the near future for interferometry in general, the capabilities of secondary masking interferometry on very large telescopes, and of nulling interferometry using outriggers on very large telescopes. We discuss future development of a simplified space-borne NIR nulling architecture, the Fourier-Kelvin Stellar Interferometer, capable of detecting and characterizing an Earth twin in the near future and how such a mission would benefit from the optical wavelength coverage offered by large, ground-based instruments.
Rodman R. Linn; Carolyn H. Sieg; Chad M. Hoffman; Judith L. Winterkamp; Joel D. McMillin
2013-01-01
We used a physics-based model, HIGRAD/FIRETEC, to explore changes in within-stand wind behavior and fire propagation associated with three time periods in pinyon-juniper woodlands following a drought-induced bark beetle outbreak and subsequent tree mortality. Pinyon-juniper woodland fuel complexes are highly heterogeneous. Trees often are clumped, with sparse patches...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Kai; Ma, Ying -Zhong; Simpson, Mary Jane
Charge carrier trapping degrades the performance of organometallic halide perovskite solar cells. To characterize the locations of electronic trap states in a heterogeneous photoactive layer, a spatially resolved approach is essential. Here, we report a comparative study on methylammonium lead tri-iodide perovskite thin films subject to different thermal annealing times using a combined photoluminescence (PL) and femtosecond transient absorption microscopy (TAM) approach to spatially map trap states. This approach coregisters the initially populated electronic excited states with the regions that recombine radiatively. Although the TAM images are relatively homogeneous for both samples, the corresponding PL images are highly structured. The remarkable variation in the PL intensities as compared to transient absorption signal amplitude suggests spatially dependent PL quantum efficiency, indicative of trapping events. Furthermore, detailed analysis enables identification of two trapping regimes: a densely packed trapping region and a sparse trapping area that appear as unique spatial features in scaled PL maps.
Fenimore, Paul W.; Foley, Brian T.; Bakken, Russell R.; Thurmond, James R.; Yusim, Karina; Yoon, Hyejin; Parker, Michael; Hart, Mary Kate; Dye, John M.; Korber, Bette; Kuiken, Carla
2012-01-01
We report the rational design and in vivo testing of mosaic proteins for a polyvalent pan-filoviral vaccine using a computational strategy designed for the Human Immunodeficiency Virus type 1 (HIV-1) but also appropriate for Hepatitis C virus (HCV) and potentially other diverse viruses. Mosaics are sets of artificial recombinant proteins that are based on natural proteins. The recombinants are computationally selected using a genetic algorithm to optimize the coverage of potential cytotoxic T lymphocyte (CTL) epitopes. Because evolutionary history differs markedly between HIV-1 and filoviruses, we devised an adapted computational technique that is effective for sparsely sampled taxa; our first significant result is that the mosaic technique is effective in creating high-quality mosaic filovirus proteins. The resulting coverage of potential epitopes across filovirus species is superior to coverage by any natural variants, including current vaccine strains with demonstrated cross-reactivity. The mosaic cocktails are also robust: mosaics substantially outperformed natural strains when computationally tested against poorly sampled species and more variable genes. Furthermore, in a computational comparison of cross-reactive potential a design constructed prior to the Bundibugyo outbreak performed nearly as well against all species as an updated design that included Bundibugyo. These points suggest that the mosaic designs would be more resilient than natural-variant vaccines against future Ebola outbreaks dominated by novel viral variants. We demonstrate in vivo immunogenicity and protection against a heterologous challenge in a mouse model. This design work delineates the likely requirements and limitations on broadly-protective filoviral CTL vaccines. PMID:23056184
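The optimization at the core of the approach, a genetic algorithm scoring candidates by potential-epitope (9-mer) coverage, can be sketched in miniature. This toy evolves cocktails of natural strains rather than true computational recombinants, and every parameter here is an assumption:

```python
import random

def ninemers(seq):
    """All 9-mers (potential CTL epitopes) in a protein sequence."""
    return {seq[i:i + 9] for i in range(len(seq) - 8)}

def coverage(cocktail, strains):
    """Fraction of all natural 9-mers that the cocktail contains."""
    pool = set().union(*(ninemers(s) for s in cocktail))
    universe = set().union(*(ninemers(s) for s in strains))
    return len(pool & universe) / len(universe)

def ga_cocktail(strains, k=2, pop=30, gens=40, seed=4):
    """Toy GA: evolve k-member cocktails (lists of strain indices) that
    maximize 9-mer coverage of the whole strain set."""
    rng = random.Random(seed)
    n = len(strains)
    fitness = lambda c: coverage([strains[i] for i in c], strains)
    popn = [rng.sample(range(n), k) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        elite = popn[:pop // 2]                         # selection
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            child = list(dict.fromkeys(a[:1] + b))[:k]  # crossover + dedup
            while len(child) < k:                       # repair short children
                child.append(rng.randrange(n))
            if rng.random() < 0.3:                      # mutation
                child[rng.randrange(k)] = rng.randrange(n)
            children.append(child)
        popn = elite + children
    return max(popn, key=fitness)
```

A real mosaic design recombines sequence fragments rather than whole strains, which is why optimized mosaics can exceed the coverage of any natural-variant cocktail.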
Estimating Highway Volumes Using Vehicle Probe Data - Proof of Concept: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Yi; Young, Stanley E; Sadabadi, Kaveh
This paper examines the feasibility of using sampled commercial probe data in combination with validated continuous counter data to accurately estimate vehicle volume across the entire roadway network, for any hour during the year. Currently, either real-time or archived volume data for roadways at specific times are extremely sparse. Most volume data are average annual daily traffic (AADT) measures derived from the Highway Performance Monitoring System (HPMS). Although methods to factor the AADT to hourly averages for a typical day of the week exist, actual volume data are limited to a sparse collection of locations in which volumes are continuously recorded. This paper explores the use of commercial probe data to generate accurate volume measures that span the highway network, providing ubiquitous coverage in space and specific point-in-time measures for a specific date and time. The paper examines the need for the data, fundamental accuracy limitations based on a basic statistical model that takes into account the sampling nature of probe data, and early results from a proof-of-concept exercise revealing the potential of probe data calibrated with public continuous count data to meet end-user expectations in terms of accuracy of volume estimates.
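The basic statistical model of probe sampling can be sketched by treating the observed probe count as a binomial draw from the unknown volume; the penetration rate and counts below are made-up numbers for illustration:

```python
import math

def estimate_volume(probe_count, penetration):
    """Point estimate: scale the observed probe count by the calibrated
    penetration rate p, since k ~ Binomial(N, p) implies N_hat = k / p."""
    return probe_count / penetration

def volume_bounds(probe_count, penetration, z=1.96):
    """Approximate 95% interval: Var(N_hat) = k(1-p)/p^2 under the
    binomial model, so se = sqrt(k(1-p)) / p."""
    n_hat = probe_count / penetration
    se = math.sqrt(probe_count * (1.0 - penetration)) / penetration
    return n_hat - z * se, n_hat + z * se
```

At 5% penetration, 50 observed probes imply roughly 1000 vehicles but with wide bounds; the interval tightens as penetration or volume grows, which is the fundamental accuracy limitation the paper quantifies.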
Chen, Hsin-Yu; Larson, Peder E Z; Gordon, Jeremy W; Bok, Robert A; Ferrone, Marcus; van Criekinge, Mark; Carvajal, Lucas; Cao, Peng; Pauly, John M; Kerr, Adam B; Park, Ilwoo; Slater, James B; Nelson, Sarah J; Munster, Pamela N; Aggarwal, Rahul; Kurhanewicz, John; Vigneron, Daniel B
2018-03-25
The purpose of this study was to develop a new 3D dynamic carbon-13 compressed sensing echoplanar spectroscopic imaging (EPSI) MR sequence and test it in phantoms, animal models, and then in prostate cancer patients to image the metabolic conversion of hyperpolarized [1-13C]pyruvate to [1-13C]lactate with whole gland coverage at high spatial and temporal resolution. A 3D dynamic compressed sensing (CS)-EPSI sequence with spectral-spatial excitation was designed to meet the required spatial coverage, time and spatial resolution, and RF limitations of the 3T MR scanner for its clinical translation for prostate cancer patient imaging. After phantom testing, animal studies were performed in rats and transgenic mice with prostate cancers. For patient studies, a GE SPINlab polarizer (GE Healthcare, Waukesha, WI) was used to produce hyperpolarized sterile GMP [1-13C]pyruvate. 3D dynamic 13C CS-EPSI data were acquired starting 5 s after injection throughout the gland with a spatial resolution of 0.5 cm3, 18 time frames, 2-s temporal resolution, and 36 s total acquisition time. Through preclinical testing, the 3D CS-EPSI sequence developed in this project was shown to provide the desired spectral, temporal, and spatial 5D HP 13C MR data. In human studies, the 3D dynamic HP CS-EPSI approach provided first-ever simultaneously volumetric and dynamic images of the LDH-catalyzed conversion of [1-13C]pyruvate to [1-13C]lactate in a biopsy-proven prostate cancer patient with full gland coverage. The results demonstrate the feasibility of characterizing prostate cancer metabolism in animals, and now in patients, using this new 3D dynamic HP MR technique to measure kPL, the kinetic rate constant of [1-13C]pyruvate to [1-13C]lactate conversion. © 2018 International Society for Magnetic Resonance in Medicine.
Design of sparse Halbach magnet arrays for portable MRI using a genetic algorithm.
Cooley, Clarissa Zimmerman; Haskell, Melissa W; Cauley, Stephen F; Sappo, Charlotte; Lapierre, Cristen D; Ha, Christopher G; Stockmann, Jason P; Wald, Lawrence L
2018-01-01
Permanent magnet arrays offer several attributes attractive for the development of a low-cost portable MRI scanner for brain imaging. They offer the potential for a relatively lightweight, low- to mid-field system with no cryogenics, a small fringe field, and no electrical power requirements or heat dissipation needs. The cylindrical Halbach array, however, requires external shimming or mechanical adjustments to produce B0 fields with standard MRI homogeneity levels (e.g., 0.1 ppm over the FOV), particularly when constrained or truncated geometries are needed, such as a head-only magnet where the magnet length is constrained by the shoulders. For portable scanners using rotation of the magnet for spatial encoding with generalized projections, the spatial pattern of the field is important since it acts as the encoding field. In either a static or rotating magnet, it will be important to be able to optimize the field pattern of cylindrical Halbach arrays in a way that retains construction simplicity. To achieve this, we present a method for designing an optimized sparse cylindrical Halbach magnet using the genetic algorithm to achieve either homogeneity (for standard MRI applications) or a favorable spatial encoding field pattern (for rotational spatial encoding applications). We compare the chosen designs against a standard, fully populated Halbach design, and evaluate optimized spatial encoding fields using point-spread-function and image simulations. We validate the calculations by comparing to the measured field of a constructed magnet. The experimentally implemented design produced fields in good agreement with the predicted fields, and the genetic algorithm was successful in improving the chosen metrics. For the uniform target field, an order-of-magnitude homogeneity improvement was achieved compared to the un-optimized, fully populated design. For the rotational encoding design, the resolution uniformity is improved by 95% compared to a uniformly populated design.
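A minimal genetic-algorithm loop of the kind described — a binary genome over candidate magnet slots and a fitness rewarding field homogeneity — can be sketched as follows. The per-magnet field contributions here are a random toy stand-in; a real design would compute them from a dipole model of each Halbach segment, and the population size, generation count, and mutation rate are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

n_slots, n_points = 24, 50
# Toy stand-in for each candidate magnet's field contribution at each FOV
# point; in a real design these come from a dipole field calculation.
contrib = 1.0 + 0.1 * rng.standard_normal((n_slots, n_points))

def fitness(genome):
    """Negative peak-to-peak field variation over the FOV (higher is better)."""
    field = genome @ contrib
    return -(field.max() - field.min())

def ga(pop_size=40, gens=60, p_mut=0.05):
    pop = rng.integers(0, 2, (pop_size, n_slots))
    for _ in range(gens):
        scores = np.array([fitness(g) for g in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]        # elitist selection
        kids = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_slots)           # single-point crossover
            kid = np.concatenate([a[:cut], b[cut:]])
            kid ^= (rng.random(n_slots) < p_mut).astype(kid.dtype)  # mutation
            kids.append(kid)
        pop = np.vstack([parents, kids])
    scores = np.array([fitness(g) for g in pop])
    return pop[scores.argmax()], float(scores.max())

best, score = ga()
print("populated slots in best design:", int(best.sum()))
```

Swapping the fitness for a spatial-encoding metric (e.g. point-spread-function sharpness) would target the rotational encoding case instead.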
Parameter Estimation for a Turbulent Buoyant Jet Using Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Christopher, Jason D.; Wimer, Nicholas T.; Hayden, Torrey R. S.; Lapointe, Caelan; Grooms, Ian; Rieker, Gregory B.; Hamlington, Peter E.
2016-11-01
Approximate Bayesian Computation (ABC) is a powerful tool that allows sparse experimental or other "truth" data to be used for the prediction of unknown model parameters in numerical simulations of real-world engineering systems. In this presentation, we introduce the ABC approach and then use ABC to predict unknown inflow conditions in simulations of a two-dimensional (2D) turbulent, high-temperature buoyant jet. For this test case, truth data are obtained from a simulation with known boundary conditions and problem parameters. Using spatially-sparse temperature statistics from the 2D buoyant jet truth simulation, we show that the ABC method provides accurate predictions of the true jet inflow temperature. The success of the ABC approach in the present test suggests that ABC is a useful and versatile tool for engineering fluid dynamics research.
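The rejection-sampling form of ABC described above can be sketched in a few lines. The forward model here is a toy exponential decay standing in for the 2D buoyant-jet simulation, and the prior range, acceptance tolerance, and true inflow temperature are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(t_inflow, x=np.linspace(0.0, 1.0, 5)):
    """Toy forward model: centreline temperature decaying away from the inlet."""
    return t_inflow * np.exp(-3.0 * x)

# Sparse "truth" statistics from a run with a known inflow temperature
truth = simulate(400.0)

# ABC rejection sampling: draw candidates from the prior, keep those whose
# simulated statistics land within a tolerance of the truth data
prior = rng.uniform(300.0, 500.0, size=20000)
dist = np.array([np.linalg.norm(simulate(t) - truth) for t in prior])
posterior = prior[dist < 5.0]

print(round(float(posterior.mean()), 1))  # concentrates near the true value
```

Tightening the tolerance sharpens the posterior at the cost of fewer accepted samples; sequential ABC variants trade this off adaptively.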
Ghosh, A
1988-08-01
Lanczos and conjugate gradient algorithms are important in computational linear algebra. In this paper, a parallel pipelined realization of these algorithms on a ring of optical linear algebra processors is described. The flow of data is designed to minimize the idle times of the optical multiprocessor and the redundancy of computations. The effects of optical round-off errors on the solutions obtained by the optical Lanczos and conjugate gradient algorithms are analyzed, and it is shown that optical preconditioning can improve the accuracy of these algorithms substantially. Algorithms for optical preconditioning and results of numerical experiments on solving linear systems of equations arising from partial differential equations are discussed. Since the Lanczos algorithm is used mostly with sparse matrices, a folded storage scheme to represent sparse matrices on spatial light modulators is also described.
Reweighted mass center based object-oriented sparse subspace clustering for hyperspectral images
NASA Astrophysics Data System (ADS)
Zhai, Han; Zhang, Hongyan; Zhang, Liangpei; Li, Pingxiang
2016-10-01
Considering the inevitable obstacles faced by pixel-based clustering methods, such as salt-and-pepper noise, high computational complexity, and the lack of spatial information, a reweighted mass center based object-oriented sparse subspace clustering (RMC-OOSSC) algorithm for hyperspectral images (HSIs) is proposed. First, the mean-shift segmentation method is utilized to oversegment the HSI into meaningful objects. Second, a distance-reweighted mass center learning model is presented to extract representative and discriminative features for each object. Third, assuming that all the objects are sampled from a union of subspaces, the sparse subspace clustering (SSC) algorithm is naturally applied to the HSI. To address the high correlation among hyperspectral objects, a weighting scheme is adopted to ensure that highly correlated objects are preferred in the sparse representation procedure, reducing the representation errors. Two widely used hyperspectral datasets were utilized to test the performance of the proposed RMC-OOSSC algorithm, obtaining high clustering accuracies (overall accuracy) of 71.98% and 89.57%, respectively. The experimental results show that the proposed method clearly improves the clustering performance with respect to other state-of-the-art clustering methods, and it significantly reduces the computational time.
Subject-Specific Sparse Dictionary Learning for Atlas-Based Brain MRI Segmentation.
Roy, Snehashis; He, Qing; Sweeney, Elizabeth; Carass, Aaron; Reich, Daniel S; Prince, Jerry L; Pham, Dzung L
2015-09-01
Quantitative measurements from segmentations of human brain magnetic resonance (MR) images provide important biomarkers for normal aging and disease progression. In this paper, we propose a patch-based tissue classification method from MR images that uses a sparse dictionary learning approach and atlas priors. Training data for the method consist of an atlas MR image, prior information maps depicting where different tissues are expected to be located, and a hard segmentation. Unlike most atlas-based classification methods that require deformable registration of the atlas priors to the subject, only affine registration is required between the subject and the training atlas. A subject-specific patch dictionary is created by learning relevant patches from the atlas. Then the subject patches are modeled as sparse combinations of learned atlas patches, leading to tissue memberships at each voxel. The combination of prior information in an example-based framework enables us to distinguish tissues having similar intensities but different spatial locations. We demonstrate the efficacy of the approach on the application of whole-brain tissue segmentation in subjects with healthy anatomy and normal pressure hydrocephalus, as well as lesion segmentation in multiple sclerosis patients. For each application, quantitative comparisons are made against publicly available state-of-the-art approaches.
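The core sparse-coding step — modeling a subject patch as a sparse combination of dictionary patches — can be illustrated with a small orthogonal matching pursuit (OMP) routine. The dictionary here is random for demonstration; in the paper's setting its columns would be patches learned from the affinely registered atlas, and the patch size and atom count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy dictionary: columns are flattened example patches (64 pixels, 40 atoms)
D = rng.standard_normal((64, 40))
D /= np.linalg.norm(D, axis=0)  # unit-norm atoms

def omp(y, D, n_nonzero=2):
    """Orthogonal matching pursuit: greedily pick the atom most correlated
    with the residual, then re-fit all coefficients by least squares."""
    idx, resid, coef = [], y.copy(), np.zeros(0)
    for _ in range(n_nonzero):
        idx.append(int(np.argmax(np.abs(D.T @ resid))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        resid = y - D[:, idx] @ coef
    return idx, coef

# A subject patch built from two dictionary atoms should select those atoms
y = 0.7 * D[:, 5] + 0.3 * D[:, 12]
idx, coef = omp(y, D)
print(sorted(idx))
```

In the full method the sparse coefficients, combined with the atlas prior maps, yield the per-voxel tissue memberships.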
NASA Astrophysics Data System (ADS)
Han-Ming, Zhang; Lin-Yuan, Wang; Lei, Li; Bin, Yan; Ai-Long, Cai; Guo-En, Hu
2016-07-01
The additional sparse prior of images has been the subject of much research in sparse-view computed tomography (CT) reconstruction. Methods employing image gradient sparsity are often used to reduce the sampling rate and are shown to remove unwanted artifacts while preserving sharp edges, but they may cause blocky or patchy artifacts. To eliminate this drawback, we propose a novel sparsity-exploitation-based model for CT image reconstruction. In the presented model, the sparse representation and sparsity exploitation of both the gradient and the nonlocal gradient are investigated. The new model is shown to offer the potential for better results by introducing similarity prior information on the image structure. An effective alternating direction minimization algorithm is then developed to optimize the objective function with a robust convergence result. Qualitative and quantitative evaluations have been carried out on both simulated and real data in terms of accuracy and resolution properties. The results indicate that the proposed method achieves better image quality with the expected preservation of detailed features. Project supported by the National Natural Science Foundation of China (Grant No. 61372172).
Segmentation of High Angular Resolution Diffusion MRI using Sparse Riemannian Manifold Clustering
Wright, Margaret J.; Thompson, Paul M.; Vidal, René
2015-01-01
We address the problem of segmenting high angular resolution diffusion imaging (HARDI) data into multiple regions (or fiber tracts) with distinct diffusion properties. We use the orientation distribution function (ODF) to represent HARDI data and cast the problem as a clustering problem in the space of ODFs. Our approach integrates tools from sparse representation theory and Riemannian geometry into a graph theoretic segmentation framework. By exploiting the Riemannian properties of the space of ODFs, we learn a sparse representation for each ODF and infer the segmentation by applying spectral clustering to a similarity matrix built from these representations. In cases where regions with similar (resp. distinct) diffusion properties belong to different (resp. same) fiber tracts, we obtain the segmentation by incorporating spatial and user-specified pairwise relationships into the formulation. Experiments on synthetic data evaluate the sensitivity of our method to image noise and the presence of complex fiber configurations, and show its superior performance compared to alternative segmentation methods. Experiments on phantom and real data demonstrate the accuracy of the proposed method in segmenting simulated fibers, as well as white matter fiber tracts of clinical importance in the human brain. PMID:24108748
NASA Astrophysics Data System (ADS)
Zhao, Jin; Han-Ming, Zhang; Bin, Yan; Lei, Li; Lin-Yuan, Wang; Ai-Long, Cai
2016-03-01
Sparse-view x-ray computed tomography (CT) imaging is an interesting topic in the CT field and can efficiently decrease radiation dose. Compared with spatial-domain reconstruction, a Fourier-based algorithm has advantages in reconstruction speed and memory usage. A novel Fourier-based iterative reconstruction technique that utilizes the non-uniform fast Fourier transform (NUFFT) is presented in this work, along with advanced total variation (TV) regularization, for fan-beam sparse-view CT. The introduction of a selective matrix contributes to improved reconstruction quality. The new method employs the NUFFT and its adjoint to iterate back and forth between the Fourier and image spaces. The performance of the proposed algorithm is demonstrated through a series of digital simulations and experimental phantom studies. Results of the proposed algorithm are compared with those of existing TV-regularized techniques based on the compressed sensing method, as well as the basic algebraic reconstruction technique. Compared with the existing TV-regularized techniques, the proposed Fourier-based technique significantly improves the convergence rate and reduces memory allocation. Project supported by the National High Technology Research and Development Program of China (Grant No. 2012AA011603) and the National Natural Science Foundation of China (Grant No. 61372172).
The dark matter of galaxy voids
NASA Astrophysics Data System (ADS)
Sutter, P. M.; Lavaux, Guilhem; Wandelt, Benjamin D.; Weinberg, David H.; Warren, Michael S.
2014-03-01
How do observed voids relate to the underlying dark matter distribution? To examine the spatial distribution of dark matter contained within voids identified in galaxy surveys, we apply Halo Occupation Distribution models representing sparsely and densely sampled galaxy surveys to a high-resolution N-body simulation. We compare these galaxy voids to voids found in the halo distribution, low-resolution dark matter and high-resolution dark matter. We find that voids at all scales in densely sampled surveys - and medium- to large-scale voids in sparse surveys - trace the same underdensities as dark matter, but they are larger in radius by ~20 per cent, they have somewhat shallower density profiles and they have centres offset by ~0.4 R_v (rms). However, in void-to-void comparison we find that shape estimators are less robust to sampling, and the largest voids in sparsely sampled surveys suffer fragmentation at their edges. We find that voids in galaxy surveys always correspond to underdensities in the dark matter, though the centres may be offset. When this offset is taken into account, we recover almost identical radial density profiles between galaxies and dark matter. All mock catalogues used in this work are available at http://www.cosmicvoids.net.
Atmospheric Science Data Center
2015-11-25
HTM Spatial Pooler With Memristor Crossbar Circuits for Sparse Biometric Recognition.
James, Alex Pappachen; Fedorova, Irina; Ibrayev, Timur; Kudithipudi, Dhireesha
2017-06-01
Hierarchical Temporal Memory (HTM) is an online machine learning algorithm that emulates the neocortex. The development of a scalable on-chip HTM architecture is an open research area. The two core substructures of HTM are the spatial pooler and temporal memory. In this work, we propose a new spatial pooler circuit design with parallel memristive crossbar arrays for the 2D columns. The proposed design was validated on two benchmark tasks: face recognition and speech recognition. The circuits are simulated and analyzed using a practical memristor device model and the 0.18 μm IBM CMOS technology model. The AR, YALE, ORL, and UFI databases are used to test the performance of the design in face recognition, and the TIMIT dataset is used for speech recognition.
Validation of the CHIRPS Satellite Rainfall Estimates over Eastern of Africa
NASA Astrophysics Data System (ADS)
Dinku, T.; Funk, C. C.; Tadesse, T.; Ceccato, P.
2017-12-01
Long and temporally consistent rainfall time series are essential in climate analyses and applications. Rainfall data from station observations are inadequate over many parts of the world due to sparse or non-existent observation networks, or limited reporting of gauge observations. As a result, satellite rainfall estimates have been used as an alternative or as a supplement to station observations. However, many satellite-based rainfall products with long time series suffer from coarse spatial and temporal resolutions and inhomogeneities caused by variations in satellite inputs. There are some satellite rainfall products with reasonably consistent time series, but they are often limited to specific geographic areas. The Climate Hazards Group Infrared Precipitation (CHIRP) and CHIRP combined with station observations (CHIRPS) are recently produced satellite-based rainfall products with relatively high spatial and temporal resolutions and quasi-global coverage. In this study, CHIRP and CHIRPS were evaluated over East Africa at daily, dekadal (10-day) and monthly time scales. The evaluation was done by comparing the satellite products with rain gauge data from about 1200 stations, an unprecedented number of validation stations for this region. The results provide a unique region-wide understanding of how satellite products perform over different climatic/geographic regions (lowlands, mountainous regions, and coastal areas). The CHIRP and CHIRPS products were also compared with two similar satellite rainfall products: the African Rainfall Climatology version 2 (ARC2) and the latest release of the Tropical Applications of Meteorology using Satellite data (TAMSAT). The results show that both CHIRP and CHIRPS products are significantly better than ARC2, with higher skill and low or no bias. These products were also found to be slightly better than the latest version of the TAMSAT product.
A comparison was also done between the latest release of the TAMSAT product (TAMSAT3) and the earlier version (TAMSAT2), which showed that the latest version is a substantial improvement over the previous one, particularly with regard to the bias statistics.
NASA Astrophysics Data System (ADS)
Reyes, J.; Vizuete, W.; Serre, M. L.; Xu, Y.
2015-12-01
The EPA employs a vast monitoring network to measure ambient PM2.5 concentrations across the United States, with one of its goals being to quantify exposure within the population. However, in several areas of the country monitoring is sparse both spatially and temporally. One means to fill in these monitoring gaps is to use PM2.5 estimates from chemical transport models (CTMs), specifically the Community Multi-scale Air Quality (CMAQ) model. CMAQ is able to provide complete spatial coverage but is subject to systematic and random error due to model uncertainty. Because of the deterministic nature of CMAQ, these uncertainties are often not quantified. Much effort is devoted to quantifying the efficacy of these models through different metrics of model performance, but currently evaluation is specific to only locations with observed data. Multiyear studies across the United States are challenging because the error and model performance of CMAQ are not uniform over such large space/time domains: error changes regionally and temporally. Because of the complex mix of species that constitute PM2.5, CMAQ error is also a function of increasing PM2.5 concentration. To address this issue we introduce a model performance evaluation for PM2.5 CMAQ that is regionalized and non-linear, leading to error quantification for each CMAQ grid cell, so that areas and time periods of error are better characterized. The regionalized error correction approach is non-linear and is therefore more flexible at characterizing model performance than approaches that rely on linearity assumptions and assume homoscedasticity of CMAQ prediction errors. Corrected CMAQ data are then incorporated into the modern geostatistical framework of Bayesian Maximum Entropy (BME). Through cross-validation it is shown that incorporating error-corrected CMAQ data leads to more accurate estimates than using observed data by themselves.
NASA Astrophysics Data System (ADS)
Iezzi, A. M.; Schwaiger, H. F.; Fee, D.; Haney, M. M.
2015-12-01
Alaska's over 50 historically active volcanoes span 2,500 kilometers, and their eruptions pose great threats to the aviation industry. This makes both prompt observation of explosion onsets and of changes in intensity a necessity. Due to their expansive range and remoteness, these volcanoes are predominantly monitored by local seismic networks, remote observations including satellite imagery, and infrasound sensors. Infrasound is an especially crucial tool in this area because infrasound data collection is not obstructed by frequent cloud cover (as satellite imagery is) and infrasound waves can travel hundreds to thousands of kilometers. However, infrasound station coverage is relatively sparse, and strong wind and temperature gradients in the atmosphere create multiple waveguides and shadow zones where the propagation of infrasound is enhanced and diminished, respectively. To accurately constrain volcanic source information and the long-range propagation of infrasound waves, a detailed characterization of the spatial and temporal variability of the atmosphere is vital. These properties can be constrained using a ground-to-space model similar to that of Drob et al. (2003), based upon varied meteorological observations, and applied to model the propagation of infrasound. Here we present the first results of a re-analysis system constructed by the Alaska Volcano Observatory to accurately characterize and model long-range infrasound propagation from volcanic eruptions. We select a number of case studies to examine infrasound detections (or lack thereof) from recent eruptions of Alaskan volcanoes, including the November 2014 eruption of Pavlof Volcano and the July 2015 eruption of Cleveland Volcano. Detailed examination of the acoustic propagation conditions will provide additional insight into detection capability and eruption dynamics, with future work aiming to implement real-time long-range infrasound propagation modeling. Drob, Douglas P., J. M. Picone, and M. Garcés. "Global morphology of infrasound propagation." Journal of Geophysical Research: Atmospheres (1984-2012) 108.D21 (2003).
Regional droughts and food security relationships in the Zambezi River Basin
NASA Astrophysics Data System (ADS)
Tirivarombo, S.; Hughes, D. A.
Analyses of long records of rainfall data indicate that the African climate has always been variable, both intra-seasonally and inter-seasonally. Associated with this variability are extreme flood and drought events that have impacted negatively on the availability and use of water resources. It is necessary to put the historical variability into perspective so as to provide a background against which future projections, and a basis for adaptive management, can be made. In Africa this process is complicated by the fact that data availability is sparse and of limited spatial coverage, posing some degree of uncertainty. These limitations have in some cases compelled researchers to resort to different sources of data, but the outcomes may be fraught with inconsistencies between the datasets. Three monthly rainfall data sets (CRU, GHCN, and locally gauged data) for the period 1960-2002 were used to generate standardized precipitation indices (SPI) for a comparative analysis of agricultural drought in relation to food security in selected parts of the Zambezi River Basin. The aim of the study was to calibrate a rainfall-based drought index for crop production forecasts, to check whether the approach (using global data sets) could be used with climate change data for future predictions, and to establish the best predictor combination of drought indices. Standardized Precipitation Indices (SPIs) appropriate to the life cycle of a crop were generated using the SPATSIM (spatial and time series information modeling) software package, and these were examined for detectable trends during the planting and growing stages. FAO crop production statistics were used to validate the results. The results indicated that the SPI could be used as a drought monitoring tool if used in conjunction with other drought indices. There was no significant difference between the use of the different data sets in the generation of the drought indices.
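The SPI computation at the heart of the study — fit a distribution to accumulated rainfall, then transform to standard-normal units — can be sketched as below. The gamma distribution and the synthetic rainfall series are assumptions for illustration; SPATSIM's exact formulation may differ, and real use would aggregate gauge or CRU/GHCN series over the crop's growing window.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical monthly rainfall totals (mm), 40 years of months
rain = rng.gamma(shape=2.0, scale=40.0, size=480)

def spi(series):
    """Standardized Precipitation Index: gamma-fit the series, then map the
    fitted CDF through the inverse standard normal."""
    a, loc, scale = stats.gamma.fit(series, floc=0)
    cdf = stats.gamma.cdf(series, a, loc=loc, scale=scale)
    return stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))

z = spi(rain)
print(round(float(np.mean(z)), 2), round(float(np.std(z)), 2))
```

By construction the index is approximately zero-mean with unit variance, so values below about -1 flag drier-than-normal periods regardless of the local rainfall regime.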
NASA Astrophysics Data System (ADS)
Chang, K. L.; Petropavlovskikh, I. V.; Cooper, O. R.; Schultz, M.; Wang, T.
2017-12-01
Surface ozone is a greenhouse gas and pollutant detrimental to human health and crop and ecosystem productivity. The Tropospheric Ozone Assessment Report (TOAR) is designed to provide the research community with an up-to-date observation-based overview of tropospheric ozone's global distribution and trends. The TOAR Surface Ozone Database contains ozone metrics at thousands of monitoring sites around the world, densely clustered across mid-latitude North America, western Europe and East Asia. Calculating regional ozone trends across these locations is challenging due to the uneven spacing of the monitoring sites across urban and rural areas. To meet this challenge we conducted a spatial and temporal trend analysis of several TOAR ozone metrics across these three regions for summertime (April-September) 2000-2014, using the generalized additive mixed model (GAMM). Our analysis indicates that East Asia has the greatest human and plant exposure to ozone pollution among the investigated regions, with increasing ozone levels through 2014. The results also show that ozone mixing ratios continue to decline significantly over eastern North America and Europe; however, there is less evidence for decreases of daytime average ozone at urban sites. The present-day spatial coverage of ozone monitors in East Asia (South Korea and Japan) and eastern North America is adequate for estimating regional trends by simply taking the average of the individual trends at each site. However, the European network is more sparsely populated across its northern and eastern regions, and therefore a simple average of the individual trends at each site does not yield an accurate regional trend.
This analysis demonstrates that the GAMM technique can be used to assess the regional representativeness of existing monitoring networks, indicating those networks for which a regional trend can be obtained by simply averaging the trends of all individual sites and those networks that require a more sophisticated statistical approach.
NASA Technical Reports Server (NTRS)
Ni, Wenjian; Ranson, Kenneth Jon; Zhang, Zhiyu; Sun, Guoqing
2014-01-01
LiDAR waveform data from airborne LiDAR scanners (ALS), e.g. the Land Vegetation and Ice Sensor (LVIS), have been successfully used for estimation of forest height and biomass at local scales and have become the preferred remote sensing dataset. However, regional and global applications are limited by the cost of airborne LiDAR data acquisition, and there are no available spaceborne LiDAR systems. Some researchers have demonstrated the potential for mapping forest height using aerial or spaceborne stereo imagery with very high spatial resolutions. For stereo images with global coverage but coarse resolution, new analysis methods need to be used. Unlike most research based on digital surface models, this study concentrated on analyzing the features of point cloud data generated from stereo imagery. The synthesizing of point cloud data from multi-view stereo imagery increased the point density of the data. The point cloud data over forested areas were analyzed and compared to small-footprint LiDAR data and large-footprint LiDAR waveform data. The results showed that the synthesized point cloud data from ALOS PRISM triplets produce vertical distributions similar to LiDAR data and detected the vertical structure of sparse and non-closed forests at 30 m resolution. For dense forest canopies, the canopy could be captured but the ground surface could not be seen, so surface elevations from other sources would be needed to calculate the height of the canopy. A canopy height map with 30 m pixels was produced by subtracting the national elevation dataset (NED) from the averaged elevation of the synthesized point clouds, which exhibited spatial features of roads, forest edges and patches. The linear regression showed that the canopy height map had a good correlation with RH50 of the LVIS data, with a slope of 1.04 and R² of 0.74, indicating that the canopy height derived from PRISM triplets can be used to estimate forest biomass at 30 m resolution.
Estimation of daily minimum land surface air temperature using MODIS data in southern Iran
NASA Astrophysics Data System (ADS)
Didari, Shohreh; Norouzi, Hamidreza; Zand-Parsa, Shahrokh; Khanbilvardi, Reza
2017-11-01
Land surface air temperature (LSAT) is a key variable in agricultural, climatological, hydrological, and environmental studies. Many of the processes in these fields are affected by LSAT at about 5 cm above the ground surface (LSAT5cm). Most previous studies sought statistical models for estimating LSAT at 2 m height (LSAT2m), the standardized measurement height, and LSAT5cm estimation models have received little study. Accurate measurements of LSAT5cm are generally acquired from meteorological stations, which are sparse in remote areas. Nonetheless, remote sensing data, by providing rather extensive spatial coverage, can complement the spatiotemporal shortcomings of meteorological stations. The main objective of this study was to find a statistical model using data from the previous day to accurately estimate spatial daily minimum LSAT5cm, which is very important for agricultural frost, in Fars province in southern Iran. Land surface temperature (LST) data were obtained from the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the Aqua and Terra satellites for daytime and nighttime periods, together with normalized difference vegetation index (NDVI) data. These data, along with geometric temperature and elevation information, were used in a stepwise linear model to estimate minimum LSAT5cm during 2003-2011. The results revealed that using MODIS Aqua nighttime data from the previous day provides the most applicable and accurate model. According to the validation results, the accuracy of the proposed model was suitable during 2012 (root mean square difference (RMSD) = 3.07 °C, adjusted R² = 87%). The model underestimated (overestimated) high (low) minimum LSAT5cm. The accuracy of estimation in winter was lower than in the other seasons (RMSD = 3.55 °C), and the errors in summer and winter were larger than in the remaining seasons.
Fine Ice Sheet margins topography from swath processing of CryoSat SARIn mode data
NASA Astrophysics Data System (ADS)
Gourmelen, N.; Escorihuela, M. J.; Shepherd, A.; Foresta, L.; Muir, A.; Briggs, K.; Hogg, A. E.; Roca, M.; Baker, S.; Drinkwater, M. R.
2014-12-01
Reference and repeat observations of Glacier and Ice Sheet Margin (GISM) topography are critical to identify changes in ice thickness, provide estimates of mass gain or loss, and thus quantify the contribution of the cryosphere to sea-level change. The lack of such sustained observations was identified in the Integrated Global Observing Strategy (IGOS) Cryosphere Theme Report as a major shortcoming. Conventional altimetry measurements over GISMs exist, but coverage has been sparse and characterized by coarse ground resolution. Additionally, and more importantly, they have proved ineffective in the presence of steep slopes, a typical feature of GISM areas. Since the majority of Antarctic and Greenland ice sheet mass loss is estimated to lie within 100 km of the coast, but only about 10% of this area is surveyed, there is a need for more robust and dense observations of GISMs, in both time and space. The ESA altimetry mission CryoSat aims at gaining better insight into the evolution of the cryosphere. CryoSat's revolutionary design features a Synthetic Interferometric Radar Altimeter (SIRAL), with two antennas for interferometry. The corresponding SAR Interferometer (SARIn) mode of operation increases spatial resolution while resolving the angular origin of off-nadir echoes occurring over sloping terrain. The SARIn mode is activated over GISMs, and the elevation for the Point Of Closest Approach (POCA) is a standard product of the CryoSat mission. Here we present an approach for more comprehensively exploiting the SARIn mode of CryoSat to produce an ice elevation product with enhanced spatial resolution compared to standard CryoSat-2 height products. In this so-called L2 swath-processing approach, the full CryoSat waveform is exploited under specific conditions of signal and surface characteristics.
We will present the rationale, validation exercises and preliminary results from the European Space Agency's STSE CryoTop study over selected test regions of the margins of the Greenland and Antarctic Ice Sheets.
Snow Coverage Analysis Using ASTER over the Sierra Nevada Mountain Range
NASA Astrophysics Data System (ADS)
Ross, B.
2017-12-01
Snow has strong impacts on human behavior, state and local activities, and the economy. The Sierra Nevada snowpack is California's most important natural reservoir of water. This snow is melting sooner and faster: a recent California drought study showed a deficit of 1.5 million acre-feet of water in 2014 due to the fast melting rates. Scientists have been using the Moderate Resolution Imaging Spectroradiometer (MODIS), available at 500-meter spatial resolution, to analyze changes in snow coverage. While such analysis provides valuable information, it would be more beneficial to employ imagery at a higher spatial resolution for snow studies. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), which acquires high-resolution imagery at resolutions ranging from 15 meters to 90 meters, has recently become freely available to the public. Our study utilized two ASTER scenes to investigate the changes in snow extent over the Sierra Nevada mountain area over an 8-year period. The two scenes were collected on April 11, 2007 and April 16, 2015, covering the same geographic region. The Normalized Difference Snow Index (NDSI) was adopted to delineate the snow coverage in each scene. Our study shows, by pixel count, a substantial decrease of snow coverage in the studied geographic region.
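The NDSI used to delineate snow is a simple band ratio of green and shortwave-infrared reflectance; the band choice (ASTER's green and a SWIR band) and the 0.4 threshold below are common conventions assumed here, not details taken from the study.

```python
import numpy as np

def ndsi(green, swir):
    """Normalized Difference Snow Index: snow is bright in green, dark in SWIR."""
    return (green - swir) / (green + swir)

# Toy 2x2 reflectance grids standing in for co-registered ASTER bands
green = np.array([[0.8, 0.7], [0.2, 0.6]])
swir = np.array([[0.1, 0.1], [0.3, 0.5]])

snow = ndsi(green, swir) > 0.4  # common snow-mapping threshold
print(int(snow.sum()))  # → 2 snow pixels in this toy grid
```

Counting snow pixels per scene, as done here, is exactly the kind of pixel-count comparison the study uses between the 2007 and 2015 scenes.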
EEG source localization: Sensor density and head surface coverage.
Song, Jasmine; Davey, Colin; Poulsen, Catherine; Luu, Phan; Turovets, Sergei; Anderson, Erik; Li, Kai; Tucker, Don
2015-12-30
The accuracy of EEG source localization depends on a sufficient sampling of the surface potential field, an accurate conducting volume estimation (head model), and a suitable and well-understood inverse technique. The goal of the present study is to examine the effect of sampling density and coverage on the ability to accurately localize sources, using common linear inverse weight techniques, at different depths. Several inverse methods are examined, using a commonly adopted head conductivity model. Simulation studies were employed to examine the effect of spatial sampling of the potential field at the head surface, in terms of sensor density and coverage of the inferior and superior head regions. In addition, the effects of sensor density and coverage are investigated in the source localization of epileptiform EEG. Greater sensor density improves source localization accuracy. Moreover, across all sampling densities and inverse methods, adding samples on the inferior surface improves the accuracy of source estimates at all depths. More accurate source localization of EEG data can be achieved with high spatial sampling of the head surface electrodes. The most accurate source localization is obtained when the voltage surface is densely sampled over both the superior and inferior surfaces. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
Pesticides in U.S. streams and groundwater
Gilliom, Robert J.
2007-01-01
A 10-year study by the U.S. Geological Survey’s (USGS’s) National Water-Quality Assessment (NAWQA) Program provides a national-scale view of pesticide occurrence in streams and groundwater. The 1992-2001 study builds upon a preliminary analysis from NAWQA’s first phase of studies during 1992-1996 (1, 2). Pesticide data available from various studies prior to 1992 did not allow national assessment because of limited and variable geographic coverage (usually focusing on individual states or regions), sparse and inconsistent inclusion of pesticides in use, and variable sampling designs (3-5). The expanded geographic coverage and improved data following 10 years of study (Figure 1) confirm and reinforce previously reported findings and enable more detailed analyses of each topic. This article summarizes selected findings from a comprehensive report (6), with a focus on the nature of pesticide occurrence and potential significance to human health and stream ecosystems. Information on study design and methods as well as additional analysis of geographic patterns and trends in relation to use and management practices are available in the full report (6).
SDN-enabled hybrid emergency message transmission architecture in internet-of-vehicles
NASA Astrophysics Data System (ADS)
Zhu, Wanting; Gao, Deyun; Zhao, Weicheng; Zhang, Hongke; Chiang, Hua-Pei
2018-04-01
With the increasing number of vehicles connected to the Internet-of-Things (IoT), the Internet-of-Vehicles (IoV) is becoming a hot research topic. It can improve traffic safety and efficiency and promote the development of intelligent transportation, a very important element of Smart Cities. As an important part of safety applications in IoV, emergency message transmission is designed to promptly inform all vehicles in the relevant area of accident information through multi-hop broadcast communication. In this paper, we propose a hybrid emergency message transmission (HEMT) architecture, which introduces SDN technology into the vehicular network environment and utilizes the flexibility of inter-vehicle communication. By deploying an SDN-enabled central controller and RSU switches, we can achieve reliable and fast emergency message dissemination. Moreover, considering the gaps between the coverage areas of RSUs caused by their sparse deployment, we also use inter-vehicle multi-hop broadcast communication to improve the message coverage ratio by adding a packet-modification module on the RSU switch. Simulation results show the feasibility and effectiveness of our proposed scheme.
Geophysical Data from Spring Valley to Delamar Valley, East-Central Nevada
Mankinen, Edward A.; Roberts, Carter W.; McKee, Edwin H.; Chuchel, Bruce A.; Morin, Robert L.
2007-01-01
Cenozoic basins in eastern Nevada and western Utah constitute major ground-water recharge areas in the eastern part of the Great Basin, and these were investigated to characterize the geologic framework of the region. Prior to these investigations, regional gravity coverage was variable over the region, adequate in some areas and very sparse in others. Cooperative studies described herein have established 1,447 new gravity stations in the region, providing a detailed description of density variations in the middle to upper crust. All previously available gravity data for the study area were evaluated to determine their reliability prior to combining them with our recent results and calculating an up-to-date isostatic residual gravity map of the area. A gravity inversion method was used to calculate depths to pre-Cenozoic basement rock and estimates of maximum alluvial/volcanic fill in the major valleys of the study area. The enhanced gravity coverage and the incorporation of lithologic information from several deep oil and gas wells yield a much improved view of the subsurface shapes of these basins and provide insights useful for the development of hydrogeologic models for the region.
TES/Aura L2 Ozone (O3) Limb V6 (TL2O3LS)
Atmospheric Science Data Center
2018-03-01
TES/Aura L2 Ozone (O3) Limb (TL2O3LS) News: TES News Join ... Project Title: TES Discipline: Tropospheric Composition Version: V6 Level: L2 Platform: TES/Aura L2 Ozone Spatial Coverage: 27 x 23 km Limb Spatial ...
TES/Aura L2 Ozone (O3) Limb V6 (TL2O3L)
Atmospheric Science Data Center
2018-03-01
TES/Aura L2 Ozone (O3) Limb (TL2O3L) News: TES News Join TES ... Project Title: TES Discipline: Tropospheric Composition Version: V6 Level: L2 Platform: TES/Aura L2 Ozone Spatial Coverage: 27 x 23 km Limb Spatial ...
Ozone retrievals from MAGEAQ GEO TIR+VIS for air quality
NASA Astrophysics Data System (ADS)
Quesada-Ruiz, Samuel; Attié, Jean-Luc; Lahoz, William A.; Abida, Rachid; El-Amraoui, Laaziz; Ricaud, Philippe; Zbinden, Regina; Spurr, Robert; da Silva, Arlindo M.
2016-04-01
Nowadays, air quality monitoring is based on the use of ground-based stations (GBS) or satellite measurements. GBS provide accurate measurements of pollutant concentrations, especially in the planetary boundary layer (PBL), but their spatial coverage is usually sparse. Polar-orbiting satellites provide good spatial resolution but low temporal coverage, which is insufficient for tracking pollutants exhibiting a diurnal cycle (Lahoz et al., 2012). However, pollutant concentrations can be measured by instruments placed on board a geostationary satellite, which can provide sufficiently high temporal and spatial resolutions (e.g. Hache et al., 2014). In this work, we investigate the potential of a possible future geostationary instrument, MAGEAQ (Monitoring the Atmosphere from Geostationary orbit for European Air Quality), for retrieving ozone measurements over Europe. In particular, MAGEAQ can provide 1-hour temporal sampling at 10 × 10 km pixel resolution for measurements in both visible (VIS) and thermal infrared (TIR) bands, allowing measurements during both day and night. MAGEAQ synthetic radiance observations are obtained through radiative transfer (RT) simulations using the VLIDORT discrete ordinate RT model (Spurr, 2006) based on output from the GEOS-5 Nature Run (Gelaro et al., 2015) providing optical information, plus a suitable instrument model. Ozone is retrieved from these synthetic measurements using a Levenberg-Marquardt optimal estimation inversion scheme. Finally, we examine an application of the air quality concept based on these ozone retrievals during the heatwave event of July 2006 over Europe. REFERENCES Gelaro, R., Putman, W. M., Pawson, S., Draper, C., Molod, A., Norris, P. M., Ott, L., Privé, N., Reale, O., Achuthavarier, D., Bosilovich, M., Buchard, V., Chao, W., Coy, L., Cullather, R., da Silva, A., Darmenov, A., Errico, R.
M., Fuentes, M., Kim, M-J., Koster, R., McCarty, W., Nattala, J., Partyka, G., Schubert, S., Vernieres, G., Vikhliaev, Y., and Wargan, K. Evaluation of the 7-km GEOS-5 Nature Run. NASA/TM-2014-104606, Vol. 36, 2015. Hache, E., Attié, J.L., Tourneur, C., Ricaud, P., Coret, L., Lahoz, W.A., El Amraoui, L., Josse, B., Hamer, P., Warner, J., Liu, X., Chance, K., Höpfner, M., Spurr, R., Natraj, V., Kulawik, S., Eldering, A. and Orphal, J. The added value of a visible channel to a geostationary thermal infrared instrument to monitor ozone for air quality. Atmos. Meas. Tech., 7, 2185-2201, 2014. Lahoz, W. A., Peuch, V. H., Orphal, J., Attie, J.L., Chance, K., Liu, X., Edwards, D., Elbern, H., Flaud, J. M., Claeyman, M., and El Amraoui, L. Monitoring Air Quality from Space: The Case for the Geostationary Platform. Bulletin of the American Meteorological Society, 93, 221-233, 2012. Spurr, R. J. D. VLIDORT: A Linearized Pseudo-Spherical Vector Discrete Ordinate Radiative Transfer Code for Forward Model and Retrieval Studies in Multilayer Multiple Scattering Media. Journal of Quantitative Spectroscopy & Radiative Transfer, 102, 316-342, 2006.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, D; Mutic, S; Hu, Y
2014-06-01
Purpose: To develop an imaging technique that enables us to acquire T2-weighted 4D Magnetic Resonance Imaging (4DMRI) with sufficient spatial coverage, temporal resolution and spatial resolution for clinical evaluation. Methods: T2-weighted 4DMRI images were acquired from a healthy volunteer using a respiratory-amplitude-triggered T2-weighted Turbo Spin Echo sequence. 10 respiratory states were used to equally sample the respiratory range based on amplitude (0%, 20%i, 40%i, 60%i, 80%i, 100%, 80%e, 60%e, 40%e and 20%e). To avoid frequent scanning halts, a methodology was devised that split the 10 respiratory states into two packages in an interleaved manner; the packages were acquired separately. Sixty 3-mm sagittal slices at 1.5-mm in-plane spatial resolution were acquired to offer good spatial coverage and reasonable spatial resolution. The in-plane field of view was 375 mm × 260 mm with a nominal scan time of 3 minutes 42 seconds. Acquired 2D images at the same respiratory state were combined to form the 3D image set corresponding to that respiratory state and reconstructed in the coronal view to evaluate whether all slices were at the same respiratory state. The 3D image sets of the 10 respiratory states represented a complete 4DMRI image set. Results: The T2-weighted 4DMRI images were acquired in 10 minutes, which was within the clinically acceptable range. Qualitatively, the acquired MRI images had good image quality for delineation purposes. There were no abrupt position changes in the reconstructed coronal images, which confirmed that all sagittal slices were in the same respiratory state. Conclusion: We demonstrated it was feasible to acquire a T2-weighted 4DMRI image set within a practical amount of time (10 minutes) that had good temporal resolution (10 respiratory states), spatial resolution (1.5 mm × 1.5 mm × 3.0 mm) and spatial coverage (60 slices) for future clinical evaluation.
NASA Astrophysics Data System (ADS)
Zhu, Qing; Liao, Kaihua; Doolittle, James; Lin, Henry
2014-05-01
Hydropedological dynamics, including soil moisture variation, subsurface flow, and spatial distributions of different soil properties, are important parameters in ecological, environmental, hydrological, and agricultural modeling and applications. However, a technical gap exists in mapping these dynamics at intermediate spatial scales (e.g., farm and catchment scales). At intermediate scales, in-situ monitoring provides detailed data but is restricted in number and spatial coverage, while remote sensing provides more acceptable spatial coverage but has comparatively low spatial resolution and limited observation depths, and is greatly influenced by surface condition and climate. As a non-invasive, fast, and convenient geophysical tool, electromagnetic induction (EMI) measures soil apparent electrical conductivity (ECa) and has great potential to bridge this technical gap. In this presentation, the principles of different EMI meters are briefly introduced. Then, case studies of using repeated EMI to detect spatial distributions of subsurface convergent flow, soil moisture dynamics, soil types and their transition zones, and different soil properties are presented. The suitability, effectiveness, and accuracy of EMI are evaluated for mapping different hydropedological dynamics. Lastly, the contributions of different hydropedological and terrain properties to soil ECa are quantified under different wetness conditions, seasons, and land use types using a Classification and Regression Tree model. Trend removal and residual analysis are then used for further mining of the EMI survey data. Based on these analyses, proper EMI survey designs and data processing are proposed.
Spatial Coverage Planning and Optimization for Planetary Exploration
NASA Technical Reports Server (NTRS)
Gaines, Daniel M.; Estlin, Tara; Chouinard, Caroline
2008-01-01
We are developing onboard planning and scheduling technology to enable in situ robotic explorers, such as rovers and aerobots, to more effectively assist scientists in planetary exploration. In our current work, we are focusing on situations in which the robot is exploring large geographical features such as craters, channels or regional boundaries. In order to develop valid and high-quality plans, the robot must take into account a range of scientific and engineering constraints and preferences. We have developed a system that incorporates multiobjective optimization and planning, allowing the robot to generate high-quality mission operations plans that respect resource limitations and mission constraints while attempting to maximize science and engineering objectives. An important scientific objective for the exploration of geological features is selecting observations that spatially cover an area of interest. We have developed a metric to enable an in situ explorer to reason about and track the spatial coverage quality of a plan. We describe this technique and show how it is combined in the overall multiobjective optimization and planning algorithm.
NASA Astrophysics Data System (ADS)
Priebe, Elizabeth H.; Neville, C. J.; Rudolph, D. L.
2018-03-01
The spatial coverage of hydraulic conductivity (K) values for large-scale groundwater investigations is often poor because of the high costs associated with hydraulic testing and the large areas under investigation. Domestic water wells are ubiquitous, and their well logs represent an untapped resource of information that includes mandatory specific-capacity tests, from which K can be estimated. These specific-capacity tests are routinely conducted at such low pumping rates that well losses are normally insignificant. In this study, a simple and practical approach to augmenting high-quality K values with reconnaissance-level K values from water-well specific-capacity tests is assessed. The integration of lesser-quality K values from specific-capacity tests with a high-quality K data set is assessed through comparisons at two different scales: study-area-wide (a 600-km2 area in Ontario, Canada) and in a single geological formation within a portion of the broader study area (200 km2). Results of the comparisons demonstrate that reconnaissance-level K estimates from specific-capacity tests approximate the ranges and distributions of the high-quality K values. Sufficient detail about the physical basis and assumptions invoked in the development of the approach is presented here so that it can be applied with confidence by practitioners seeking to enhance their spatial coverage of K values with specific-capacity tests.
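The record does not give the relation it uses to turn specific capacity (Q/s) into K. A common practical approach, shown here only as a hedged sketch, solves the Cooper-Jacob equation implicitly for transmissivity T and divides by an assumed aquifer thickness; all numerical values below (pumping rate, drawdown, well radius, storativity, thickness) are hypothetical:

```python
import math

def transmissivity_from_specific_capacity(Q, s, t, r, S, tol=1e-10, max_iter=100):
    """Estimate transmissivity T [m^2/s] from a specific-capacity test by
    fixed-point iteration of the Cooper-Jacob relation
    T = Q/(4*pi*s) * ln(2.25*T*t / (r**2 * S)).

    Q: pumping rate [m^3/s], s: drawdown [m], t: test duration [s],
    r: well radius [m], S: storativity [-].
    """
    T = Q / s  # initial guess: the specific capacity itself
    for _ in range(max_iter):
        T_new = Q / (4.0 * math.pi * s) * math.log(2.25 * T * t / (r**2 * S))
        if abs(T_new - T) < tol:
            return T_new
        T = T_new
    return T

# Hypothetical domestic-well test: 1 L/s for 1 hour with 2 m of drawdown.
T = transmissivity_from_specific_capacity(Q=0.001, s=2.0, t=3600.0, r=0.075, S=1e-4)
K = T / 10.0  # reconnaissance-level K, assuming a 10 m aquifer thickness
```

The iteration converges quickly because T appears only inside the logarithm, so the map is strongly contracting near the solution.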
Liang, Yujie; Ying, Rendong; Lu, Zhenqi; Liu, Peilin
2014-01-01
In the design phase of sensor arrays for array signal processing, the estimation performance and system cost are largely determined by the array aperture size. In this article, we address the problem of joint direction-of-arrival (DOA) estimation with distributed sparse linear arrays (SLAs) and propose an off-grid synchronous approach based on distributed compressed sensing to obtain a larger array aperture. We focus on the complex source distributions found in practical applications and classify the sources into common and innovation parts according to whether a source's signal impinges on all the SLAs or only on a specific one. For each SLA, we construct a corresponding virtual uniform linear array (ULA) to create a random linear mapping between the signals observed by these two arrays. The signal ensembles, including the common/innovation sources for the different SLAs, are abstracted as a joint spatial sparsity model, and we use minimization of a concatenated atomic norm via semidefinite programming to solve the joint DOA estimation problem. Joint processing of the signals observed by all the SLAs exploits the redundancy caused by the common sources and decreases the required array size. The numerical results illustrate the advantages of the proposed approach. PMID:25420150
Robust registration of sparsely sectioned histology to ex-vivo MRI of temporal lobe resections
NASA Astrophysics Data System (ADS)
Goubran, Maged; Khan, Ali R.; Crukley, Cathie; Buchanan, Susan; Santyr, Brendan; deRibaupierre, Sandrine; Peters, Terry M.
2012-02-01
Surgical resection of epileptic foci is a typical treatment for drug-resistant epilepsy; however, accurate preoperative localization is challenging and often requires invasive sub-dural or intra-cranial electrode placement. The presence of cellular abnormalities in the resected tissue can be used to validate the effectiveness of multispectral Magnetic Resonance Imaging (MRI) in pre-operative foci localization and surgical planning. If successful, these techniques can lead to improved surgical outcomes and less invasive procedures. Towards this goal, a novel pipeline is presented here for post-operative imaging of temporal lobe specimens involving MRI and digital histology, and methods for bringing these images into spatial correspondence are presented and evaluated. The sparsely sectioned histology images of resected tissue represent a challenge for 3D reconstruction, which we address with a combined 3D and 2D rigid registration algorithm that alternates between slice-based and volume-based registration with the ex-vivo MRI. We also evaluate four methods for non-rigid within-plane registration using both images and fiducials, with the top-performing method resulting in a target registration error of 0.87 mm. This work allows for the spatially local comparison of histology with post-operative MRI and paves the way for eventual registration with pre-operative MRI images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rafti, Matías; Imbihl, Ronald, E-mail: imbihl@pci.uni-hannover.de
2014-12-07
By means of photoemission electron microscopy as a spatially resolving method, the effect of high coverages of coadsorbed potassium (0.16 ≤ θ_K ≤ 0.21) on the dynamical behavior of the H2 + O2 reaction over a Rh(110) surface was investigated. We observe that the originally bistable system is transformed into an excitable system, as evidenced by the formation of target patterns and spiral waves. At K coverages close to saturation (θ_K ≈ 0.21), mass transport of potassium with pulses is seen.
Atmospheric Science Data Center
2017-04-26
... NCAR C-130 Instrument: Radiation Measurement System Spatial Coverage: Fairbanks, Alaska and the ... Parameters: Upwelling and Downwelling Total Solar Flux Infrared Flux and Narrowband Flux Order Data: ...
Młynarski, Wiktor
2015-05-01
In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. The peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left- and right-ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. The spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding.
Solutions for extracting file level spatial metadata from airborne mission data
NASA Astrophysics Data System (ADS)
Schwab, M. J.; Stanley, M.; Pals, J.; Brodzik, M.; Fowler, C.; Icebridge Engineering/Spatial Metadata
2011-12-01
Data sets acquired from satellites and aircraft may differ in many ways. We will focus on the differences in spatial coverage between the two platforms. Satellite data sets over a given period typically cover large geographic regions. These data are collected in a consistent, predictable and well-understood manner due to the uniformity of satellite orbits. Since satellite data collection paths are typically smooth and uniform, the data from satellite instruments can usually be described with simple spatial metadata. Consequently, these spatial metadata can be stored and searched easily and efficiently. Conversely, aircraft have significantly more freedom to change paths, circle, overlap, and vary altitude, all of which add complexity to the spatial metadata. Aircraft are also subject to wind and other elements that result in even more complicated and unpredictable spatial coverage areas. This unpredictability and complexity make it more difficult to extract usable spatial metadata from data sets collected on aircraft missions. It is not feasible to use all of the location data from aircraft mission data sets as spatial metadata. The number of data points in typical data sets poses serious performance problems for spatial searching. In order to provide efficient spatial searching of the large number of files cataloged in our systems, we need to extract approximate spatial descriptions as geo-polygons from a small number of vertices (fewer than two hundred). We present some of the challenges and solutions for creating airborne mission-derived spatial metadata.
We are implementing these methods to create the spatial metadata for insertion of IceBridge mission data into ECS for public access through NSIDC and ECHO but, they are potentially extensible to any aircraft mission data.
TES/Aura L2 Ammonia (NH3) Lite Nadir V6 (TL2NH3LN)
Atmospheric Science Data Center
2017-07-20
TES/Aura L2 Ammonia (NH3) Lite Nadir (TL2NH3LN) News: TES News ... Level: L2 Instrument: TES/Aura L2 Ammonia Spatial Coverage: 5.3 km nadir Spatial ... OPeNDAP Access: OPeNDAP Parameters: Ammonia Order Data: Earthdata Search: Order Data ...
Archetypal Analysis for Sparse Representation-Based Hyperspectral Sub-Pixel Quantification
NASA Astrophysics Data System (ADS)
Drees, L.; Roscher, R.
2017-05-01
This paper focuses on the quantification of land cover fractions in an urban area of Berlin, Germany, using simulated hyperspectral EnMAP data with a spatial resolution of 30 m × 30 m. For this, sparse representation is applied, where each pixel with unknown surface characteristics is expressed by a weighted linear combination of elementary spectra with known land cover class. The elementary spectra are determined from image reference data using simplex volume maximization, which is a fast heuristic technique for archetypal analysis. In the experiments, the estimation of class fractions based on the archetypal spectral library is compared to the estimation obtained with a manually designed spectral library by means of reconstruction error, mean absolute error of the fraction estimates, sum of fractions and the number of elementary spectra used. We will show that a collection of archetypes can be an adequate and efficient alternative to the manually designed spectral library with respect to the mentioned criteria.
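The core operation described above, expressing a pixel as a weighted combination of elementary spectra and reading the weights as class fractions, can be sketched as follows. This is a stand-in illustration, not the paper's solver: the tiny spectral library is made up, and nonnegative least squares plus normalization replaces whatever sparse-decomposition scheme the study actually used:

```python
import numpy as np
from scipy.optimize import nnls

# Tiny illustrative "archetype" library: 3 elementary spectra over 5 bands.
E = np.array([
    [0.10, 0.80, 0.30],
    [0.20, 0.70, 0.35],
    [0.40, 0.50, 0.30],
    [0.60, 0.30, 0.25],
    [0.70, 0.20, 0.20],
])

true_fractions = np.array([0.5, 0.3, 0.2])
pixel = E @ true_fractions  # noiseless mixed pixel spectrum

weights, _ = nnls(E, pixel)          # nonnegative weights per elementary spectrum
fractions = weights / weights.sum()  # normalize so fractions sum to one
```

In the noiseless, full-rank case above the recovered fractions equal the true ones; with real data the reconstruction error and the sum-of-fractions deviation become the quality criteria the abstract mentions.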
Discover mouse gene coexpression landscapes using dictionary learning and sparse coding.
Li, Yujie; Chen, Hanbo; Jiang, Xi; Li, Xiang; Lv, Jinglei; Peng, Hanchuan; Tsien, Joe Z; Liu, Tianming
2017-12-01
Gene coexpression patterns carry rich information regarding enormously complex brain structures and functions. Characterization of these patterns in an unbiased, integrated, and anatomically comprehensive manner will illuminate the higher-order transcriptome organization and offer genetic foundations of functional circuitry. Here, using dictionary learning and sparse coding, we derived coexpression networks from the spatially resolved, anatomically comprehensive in situ hybridization data of the Allen Mouse Brain Atlas dataset. The key idea is that if two genes use the same dictionary atoms to represent their original signals, then their gene expressions must share similar patterns, and they can therefore be considered "coexpressed." For each network, we have simultaneous knowledge of the spatial distributions, the genes in the network, and the extent to which a particular gene conforms to the coexpression pattern. Gene ontologies and comparisons with published gene lists reveal biologically meaningful coexpression networks, some of which correspond to major cell types, biological pathways, and/or anatomical regions.
Sparse genetic tracing reveals regionally specific functional organization of mammalian nociceptors.
Olson, William; Abdus-Saboor, Ishmail; Cui, Lian; Burdge, Justin; Raabe, Tobias; Ma, Minghong; Luo, Wenqin
2017-10-12
The human distal limbs have a high spatial acuity for noxious stimuli but a low density of pain-sensing neurites. To elucidate mechanisms underlying regional differences in processing nociception, we sparsely traced non-peptidergic nociceptors across the body using a newly generated Mrgprd^CreERT2 mouse line. We found that mouse plantar paw skin is also innervated by a low density of Mrgprd+ nociceptors, while individual arbors in different locations are comparable in size. Surprisingly, the central arbors of plantar paw- and trunk-innervating nociceptors have distinct morphologies in the spinal cord. This regional difference is well correlated with a heightened signal transmission for plantar paw circuits, as revealed by both spinal cord slice recordings and behavior assays. Taken together, our results elucidate a novel somatotopic functional organization of the mammalian pain system and suggest that regional central arbor structure could facilitate the "enlarged representation" of plantar paw regions in the CNS.
Zhang, Guosong; Hovem, Jens M.; Dong, Hefeng
2012-01-01
Underwater communication channels are often complicated, and in particular multipath propagation may cause intersymbol interference (ISI). This paper addresses how to remove ISI, and evaluates the performance of three different receiver structures and their implementations. Using real data collected in a high-frequency (10–14 kHz) field experiment, the receiver structures are evaluated by off-line data processing. The three structures are multichannel decision feedback equalizer (DFE), passive time reversal receiver (passive-phase conjugation (PPC) with a single channel DFE), and the joint PPC with multichannel DFE. In sparse channels, dominant arrivals represent the channel information, and the matching pursuit (MP) algorithm which exploits the channel sparseness has been investigated for PPC processing. In the assessment, it is found that: (1) it is advantageous to obtain spatial gain using the adaptive multichannel combining scheme; and (2) the MP algorithm improves the performance of communications using PPC processing. PMID:22438755
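The abstract above invokes the matching pursuit (MP) algorithm to exploit channel sparseness. As a minimal generic sketch (the impulse dictionary and two-tap channel below are toy assumptions, not the experiment's 10–14 kHz data), MP greedily picks the dictionary atom most correlated with the residual and peels it off:

```python
import numpy as np

def matching_pursuit(D, y, n_atoms):
    """Greedy matching pursuit: decompose y over the columns of dictionary D.

    Returns a sparse coefficient vector x with at most n_atoms selections.
    Columns of D are assumed to have unit norm.
    """
    residual = np.asarray(y, dtype=float).copy()
    x = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))  # best-matching atom
        x[k] += corr[k]                   # update its coefficient
        residual -= corr[k] * D[:, k]     # remove its contribution
    return x

# Sparse channel estimation toy: a dictionary of unit impulses at each delay,
# so MP simply picks out the dominant multipath arrivals.
D = np.eye(8)
y = np.zeros(8)
y[1], y[4] = 1.0, -0.6  # two dominant arrivals
x = matching_pursuit(D, y, n_atoms=2)
```

With only the dominant arrivals retained, the estimated sparse impulse response is what passive-phase-conjugation processing then uses to recombine the multipath energy.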
A guided wave dispersion compensation method based on compressed sensing
NASA Astrophysics Data System (ADS)
Xu, Cai-bin; Yang, Zhi-bo; Chen, Xue-feng; Tian, Shao-hua; Xie, Yong
2018-03-01
Ultrasonic guided waves have emerged as a promising tool for structural health monitoring (SHM) and nondestructive testing (NDT) due to their capability to propagate over long distances with minimal loss and their sensitivity to both surface and subsurface defects. The dispersion effect, however, degrades the temporal and spatial resolution of guided waves. A novel ultrasonic guided wave processing method for both single-mode and multi-mode guided wave dispersion compensation is proposed in this work based on compressed sensing, in which a dispersion signal dictionary is built from the dispersion curves of the guided wave modes in order to sparsely decompose the recorded dispersive guided waves. Dispersion-compensated guided waves are obtained by combining a non-dispersion signal dictionary with the results of the sparse decomposition. Numerical simulations and experiments are implemented to verify the effectiveness of the developed method for both single-mode and multi-mode guided waves.
Newmark-Beta-FDTD method for super-resolution analysis of time reversal waves
NASA Astrophysics Data System (ADS)
Shi, Sheng-Bing; Shao, Wei; Ma, Jing; Jin, Congjun; Wang, Xiao-Hua
2017-09-01
In this work, a new unconditionally stable finite-difference time-domain (FDTD) method with the split-field perfectly matched layer (PML) is proposed for the analysis of time reversal (TR) waves. The proposed method is very suitable for multiscale problems involving microstructures. The spatial and temporal derivatives in this method are discretized by the central difference technique and the Newmark-Beta algorithm, respectively, and the derivation leads to a banded sparse matrix equation. Since the coefficient matrix remains unchanged during the whole simulation, the lower-upper (LU) decomposition of the matrix needs to be performed only once, at the beginning of the calculation. Moreover, the reverse Cuthill-McKee (RCM) technique, an effective preprocessing technique for bandwidth compression of sparse matrices, is used to improve computational efficiency. The super-resolution focusing of TR wave propagation in two- and three-dimensional spaces is included to validate the accuracy and efficiency of the proposed method.
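The RCM preprocessing step mentioned above reorders the unknowns so the sparse system matrix becomes narrowly banded, which makes the one-time LU factorization cheaper. A small self-contained sketch (the 6 × 6 matrix is a made-up stand-in for the FDTD system matrix) using SciPy's implementation:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(A):
    """Half-bandwidth of a dense matrix: max |i - j| over nonzero entries."""
    rows, cols = np.nonzero(A)
    return int(np.max(np.abs(rows - cols))) if rows.size else 0

# A small symmetric sparse matrix with scattered off-diagonal entries.
A = np.eye(6)
for i, j in [(0, 5), (1, 4), (2, 5)]:
    A[i, j] = A[j, i] = 1.0

# RCM returns a permutation that clusters connected unknowns together.
perm = reverse_cuthill_mckee(csr_matrix(A), symmetric_mode=True)
A_rcm = A[np.ix_(perm, perm)]  # apply the reordering to rows and columns
```

For a banded matrix, LU factorization cost scales with the square of the bandwidth, so shrinking the bandwidth once up front pays off over the many time steps that reuse the same factors.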
NASA Astrophysics Data System (ADS)
A, Duo; Zhao, Wenji; Qu, Xinyuan; Jing, Ran; Xiong, Kai
2016-12-01
Global climate change has led to significant vegetation changes in the past half century. The North China Plain, the most important grain production base of China, is undergoing prominent warming and drying. Vegetation coverage, which is used to monitor vegetation change, responds to climate change (temperature and precipitation). In this study, GIMMS (Global Inventory Modelling and Mapping Studies) NDVI (Normalized Difference Vegetation Index) data, MODIS (Moderate Resolution Imaging Spectroradiometer) NDVI data and climate data for 1981-2013 were used to investigate the spatial distribution and changes of vegetation. The relationships between climate and vegetation on different spatial (agriculture, forest and grassland) and temporal (yearly, decadal and monthly) scales were also analyzed for the North China Plain. (1) Temperature exhibited a slight increasing trend (0.20 °C/10a, P < 0.01), which may be related to the disappearance of the 0 °C isotherm and the rise of spring temperatures. At the same time, precipitation showed a decreasing, though not statistically significant, trend (-1.75 mm/10a, P > 0.05). The climate mutation period was 1991-1994. (2) A slight increase in vegetation coverage was observed in 55% of the total study area, with a change rate of 0.00039/10a. Human activities may not only accelerate changes in vegetation coverage, but also contribute to the rate of these changes. (3) Overall, the correlation between vegetation coverage and climatic factors is higher at the monthly scale than at the yearly scale. The correlation analysis between vegetation coverage and climate showed that annual vegetation coverage was better correlated with precipitation in the grassland biome, but better correlated with temperature in the agriculture and forest biomes. In addition, vegetation coverage showed a sensitive, time-lagged response to precipitation.
(4) Vegetation coverage showed the same increasing trend before and after the climatic variations, but the rate of increase slowed down. In terms of vegetation coverage itself, the grassland ecological zone had an obvious response to the climatic variations, whereas the agricultural ecological zones showed a significant response in terms of the rate of vegetation coverage change. Before the abrupt climate change, the effect of human activity in degradation regions was higher than that in improvement areas; afterwards, the effect of human activity in improvement areas was higher than that in degradation regions, and the influence of human activity will continue in the future.
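The correlation analysis described above reduces, at its core, to a Pearson correlation between coverage and climate series at a chosen temporal aggregation. A minimal, self-contained sketch (illustrative data only, not the study's values):

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return sxy / (sx * sy)

# Comparing the monthly vs. yearly scales in the study amounts to
# calling pearson() on the same series resampled at the two scales.
```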
NARSTO PAC2001 GOLDEN EARS GAS PM DATA
Atmospheric Science Data Center
2018-04-09
... Parameters: Atmospheric Pressure Measurements Air Temperature Humidity Ozone Aerosol Particle Properties Surface ... Data: Spatial Coverage: Canada Pacific 2001 Air Quality Study SCAR-B Block: SCAR-B ...
NARSTO PAC2001 LANGLEY GAS PM MET DATA
Atmospheric Science Data Center
2018-04-09
... Parameters: Atmospheric Pressure Measurements Air Temperature Humidity Surface Winds Ozone Aerosol Particle ... Data: Spatial Coverage: Canada Pacific 2001 Air Quality Study SCAR-B Block: SCAR-B ...
Improved Detection of Local Earthquakes in the Vienna Basin (Austria), using Subspace Detectors
NASA Astrophysics Data System (ADS)
Apoloner, Maria-Theresia; Caffagni, Enrico; Bokelmann, Götz
2016-04-01
The Vienna Basin in Eastern Austria is densely populated and highly developed; it is also a region of low to moderate seismicity, yet the seismological network coverage is relatively sparse. This calls for improving our earthquake-detection capability by testing new methods and enlarging the existing local earthquake catalogue, which contributes to imaging tectonic fault zones for a better understanding of seismic hazard, also through improved earthquake statistics (b-value, magnitude of completeness). Detection of low-magnitude earthquakes, or of events whose highest amplitudes only slightly exceed the signal-to-noise ratio (SNR), may be possible using standard methods like the short-term over long-term average (STA/LTA). However, due to sparse network coverage and high background noise, such a technique may not detect all potentially recoverable events. Yet earthquakes originating from the same source region, relatively close to each other, should produce similar seismic waveforms at a given station. This waveform similarity can be exploited by techniques such as correlation-template matching (also known as matched filtering) or subspace detection methods (based on subspace theory). Matching techniques require a reference or template event, usually characterized by high waveform coherence across the array receivers and high SNR, which is cross-correlated with the continuous data. Subspace detection methods, in contrast, avoid the need to define single template events and instead use a subspace extracted from multiple events. This approach should in principle be more robust in detecting signals that exhibit strong variability (e.g. because of source or magnitude). In this study we scan the continuous data recorded in the Vienna Basin with a subspace detector to identify additional events.
This will allow us to estimate the increase of the seismicity rate in the local earthquake catalogue, therefore providing an evaluation of network performance and efficiency of the method.
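As background for the matched-filtering baseline that subspace detection generalizes, here is a hedged pure-Python sketch of template detection by normalized cross-correlation. Function names and the 0.8 threshold are illustrative, not from this study; a subspace detector would project onto a basis built from several events rather than correlate against one template.

```python
import math

def normalized_xcorr(data, template):
    """Slide `template` over `data`, returning the normalized
    cross-correlation coefficient at each offset (range -1..1)."""
    n = len(template)
    t_mean = sum(template) / n
    t_dev = [t - t_mean for t in template]
    t_norm = math.sqrt(sum(d * d for d in t_dev))
    out = []
    for i in range(len(data) - n + 1):
        win = data[i:i + n]
        w_mean = sum(win) / n
        w_dev = [w - w_mean for w in win]
        w_norm = math.sqrt(sum(d * d for d in w_dev))
        if w_norm == 0 or t_norm == 0:
            out.append(0.0)
            continue
        dot = sum(a * b for a, b in zip(w_dev, t_dev))
        out.append(dot / (w_norm * t_norm))
    return out

def detect(data, template, threshold=0.8):
    """Return offsets where the correlation exceeds the threshold."""
    cc = normalized_xcorr(data, template)
    return [i for i, c in enumerate(cc) if c >= threshold]
```

Embedding a template event in an otherwise quiet record recovers its exact offset with correlation 1.0.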
Huang, Wei; Xiao, Liang; Liu, Hongyi; Wei, Zhihui
2015-01-19
Due to instrumental and imaging optics limitations, it is difficult to acquire high spatial resolution hyperspectral imagery (HSI). Super-resolution (SR) imaging aims at inferring high-quality images of a given scene from degraded versions of the same scene. This paper proposes a novel hyperspectral imagery super-resolution (HSI-SR) method via dictionary learning and spatial-spectral regularization. The main contributions of this paper are twofold. First, inspired by the compressive sensing (CS) framework, for learning the high resolution dictionary we encourage stronger sparsity on image patches and promote smaller coherence between the learned dictionary and the sensing matrix. Thus, a sparsity- and incoherence-restricted dictionary learning method is proposed to achieve a more efficient sparse representation. Second, a variational regularization model combining a spatial sparsity regularization term and a new local spectral similarity preserving term is proposed to integrate the spectral and spatial-contextual information of the HSI. Experimental results show that the proposed method can effectively recover spatial information and better preserve spectral information. The high spatial resolution HSI reconstructed by the proposed method outperforms the reconstructions of other well-known methods in terms of both objective measurements and visual evaluation.
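The sparse-coding step that such dictionary-based methods rely on can be sketched with orthogonal matching pursuit (OMP). This is a generic illustration, not the paper's algorithm; the dictionary is assumed to have unit-norm columns, and the sparsity level is illustrative.

```python
import numpy as np

def omp(D, y, k):
    """Greedy OMP: approximate y with at most k columns of D.
    D: (m, n) dictionary with unit-norm columns; y: (m,) signal."""
    residual = y.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares refit on the chosen support.
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        coef[:] = 0.0
        coef[support] = sol
        residual = y - D @ coef
    return coef
```

Low mutual coherence between dictionary and sensing matrix, as the paper encourages, is precisely what lets a greedy pursuit like this find the right atoms.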
Deploying temporary networks for upscaling of sparse network stations
NASA Astrophysics Data System (ADS)
Coopersmith, Evan J.; Cosh, Michael H.; Bell, Jesse E.; Kelly, Victoria; Hall, Mark; Palecki, Michael A.; Temimi, Marouane
2016-10-01
Soil observations networks at the national scale play an integral role in hydrologic modeling, drought assessment, agricultural decision support, and our ability to understand climate change. Understanding soil moisture variability is necessary to apply these measurements to model calibration, business and consumer applications, or even human health issues. The installation of soil moisture sensors as sparse, national networks is necessitated by limited financial resources. However, this results in the incomplete sampling of the local heterogeneity of soil type, vegetation cover, topography, and the fine spatial distribution of precipitation events. To this end, temporary networks can be installed in the areas surrounding a permanent installation within a sparse network. The temporary networks deployed in this study provide a more representative average at the 3 km and 9 km scales, localized about the permanent gauge. The value of such temporary networks is demonstrated at test sites in Millbrook, New York and Crossville, Tennessee. The capacity of a single U.S. Climate Reference Network (USCRN) sensor set to approximate the average of a temporary network at the 3 km and 9 km scales using a simple linear scaling function is tested. The capacity of a temporary network to provide reliable estimates with diminishing numbers of sensors, the temporal stability of those networks, and ultimately, the relationship of the variability of those networks to soil moisture conditions at the permanent sensor are investigated. In this manner, this work demonstrates the single-season installation of a temporary network as a mechanism to characterize the soil moisture variability at a permanent gauge within a sparse network.
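The "simple linear scaling function" mentioned above can be sketched as an ordinary least-squares fit mapping the permanent sensor's reading to the temporary network's average; the function names and data values below are illustrative.

```python
def fit_linear_scaling(point_obs, network_avg):
    """Least-squares fit of network_avg ≈ a * point_obs + b."""
    n = len(point_obs)
    mx = sum(point_obs) / n
    my = sum(network_avg) / n
    sxx = sum((x - mx) ** 2 for x in point_obs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(point_obs, network_avg))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def upscale(point_value, a, b):
    """Estimate the 3 km / 9 km network-average soil moisture
    from a single permanent-sensor reading."""
    return a * point_value + b
```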
Impact of vaccination on the incidence of measles in Mozambique in the period 2000 to 2011.
Muloliwa, Artur Manuel; Camacho, Luiz Antonio Bastos; Verani, José Fernando Souza; Simões, Taynãna César; Dgedge, Martinho do Carmo
2013-02-01
The aim of this study was to contribute to the better planning of measles elimination actions in Mozambique, by considering the impact of vaccination actions over the period 2000 to 2011. Descriptive and ecological studies and case records made available by the Ministry of Health were used to analyze measles vaccination coverage. Statistical analysis was performed using time series and spatial analysis. Vaccine coverage rates ranged from 82% to 99%. Coverage rates in Maputo city were under 70% and in Niassa province they were over 100%. Coverage showed a clustered pattern in the districts. The measles incidence rate was 1.58 per 100,000 inhabitants (0.00-40.08 per 100,000 inhabitants); districts bordering neighboring countries presented high incidence rates. Although measles morbidity and mortality has decreased in Mozambique, vaccine coverage has been insufficient to interrupt measles transmission. Enhanced surveillance, including investigation of cases and outbreaks, and improvements in measles vaccination are recommended in order to achieve a homogenous coverage rate of ≥ 95% for both routine and mass vaccination campaigns.
NASA Astrophysics Data System (ADS)
Jacobsen, S.; Lehner, S.; Hieronimus, J.; Schneemann, J.; Kuhn, M.
2015-04-01
The increasing demand for renewable energy resources has promoted the construction of offshore wind farms, e.g. in the North Sea. Since a wind farm consists of an array of large turbines, the interaction of wind turbine wakes with the remaining array is of substantial interest. The downstream spatial evolution of turbulent wind turbine wakes is very complex and depends on manifold parameters such as wind speed, wind direction and ambient atmospheric stability conditions. To complement and validate existing numerical models, corresponding observations are needed. While in-situ measurements with e.g. anemometers provide a time series at a given location, the merits of ground-based and space- or airborne remote sensing techniques are indisputable in terms of spatial coverage. Active microwave devices, such as scatterometers and Synthetic Aperture Radar (SAR), have proven their capability of providing sea surface wind measurements, and SAR images in particular reveal wind variations at a high spatial resolution while retaining a large coverage area. Platform-based Doppler LiDAR can resolve wind fields with high spatial coverage and repetition rates of seconds to minutes. In order to study the capabilities of both methods for the investigation of small-scale wind field structures, we present a direct comparison of observations obtained by high-resolution TerraSAR-X (TS-X) X-band SAR data and platform-based LiDAR devices at the North Sea wind farm alpha ventus. We furthermore compare the results with meteorological data from the COSMO-DE model run by the German Weather Service (DWD). Our study indicates that the overall agreement between SAR and LiDAR wind fields is good and that, under appropriate conditions, small-scale wind field variations agree well.
On the Challenge of Observing Pelagic Sargassum in Coastal Oceans: A Multi-sensor Assessment
NASA Astrophysics Data System (ADS)
Hu, C.; Feng, L.; Hardy, R.; Hochberg, E. J.
2016-02-01
Remote detection of pelagic Sargassum is often hindered by its spectral similarity to other floating materials and by inadequate spatial resolution. Using measurements from multi-spectral satellite sensors (Moderate Resolution Imaging Spectroradiometer or MODIS, Landsat, and WorldView-2 or WV-2) as well as hyperspectral sensors (Hyperspectral Imager for the Coastal Ocean or HICO, Airborne Visible-InfraRed Imaging Spectrometer or AVIRIS) and airborne digital photos, we analyze and compare their ability (in terms of spectral and spatial resolution) to detect Sargassum and to differentiate it from other floating materials such as Trichodesmium, Syringodium, Ulva, garbage, and emulsified oil. Field measurements suggest that Sargassum has a distinctive reflectance curvature around 630 nm due to its chlorophyll c pigments, which provides a unique spectral signature when combined with the reflectance ratio between brown (~650 nm) and green (~555 nm) wavelengths. For a 10-nm resolution sensor on the hyperspectral HyspIRI mission currently being planned by NASA, a stepwise rule examining several indexes established from 6 bands (centered at 555, 605, 625, 645, 685, 755 nm) is shown to be effective in unambiguously differentiating Sargassum from all other floating materials. Numerical simulations using spectral endmembers and noise in the satellite-derived reflectance suggest that spectral discrimination is degraded when a pixel is mixed between Sargassum and water. A minimum of 20-30% Sargassum coverage within a pixel is required to retain this ability, while the partial coverage can be as low as 1-2% when detecting floating materials without spectral discrimination. With its expected signal-to-noise ratios (SNRs ~200:1), the hyperspectral HyspIRI mission may provide a compromise between spatial resolution and spatial coverage to improve our capacity to detect, discriminate, and quantify Sargassum.
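The two diagnostics described above can be sketched as a brown/green reflectance ratio plus a curvature (second-difference) measure around 630 nm, where the chlorophyll c absorption dip makes the curvature positive. The band centers follow the text, but the decision thresholds are illustrative, not the paper's calibrated values.

```python
def curvature_630(r605, r630, r650):
    """Second-difference curvature: a reflectance dip at ~630 nm
    (chlorophyll c) yields a positive value."""
    return (r605 + r650) / 2.0 - r630

def brown_green_ratio(r650, r555):
    """Reflectance ratio between brown (~650 nm) and green (~555 nm)."""
    return r650 / r555

def looks_like_sargassum(refl, ratio_min=1.0, curv_min=0.0):
    """`refl` maps wavelength (nm) -> reflectance.
    Thresholds are hypothetical, for illustration only."""
    ratio = brown_green_ratio(refl[650], refl[555])
    curv = curvature_630(refl[605], refl[630], refl[650])
    return ratio > ratio_min and curv > curv_min
```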
Mapping the Distribution of Anthrax in Mainland China, 2005-2013.
Chen, Wan-Jun; Lai, Sheng-Jie; Yang, Yang; Liu, Kun; Li, Xin-Lou; Yao, Hong-Wu; Li, Yu; Zhou, Hang; Wang, Li-Ping; Mu, Di; Yin, Wen-Wu; Fang, Li-Qun; Yu, Hong-Jie; Cao, Wu-Chun
2016-04-01
Anthrax, a zoonotic disease that has re-emerged globally in recent years, is enzootic in mainland China. Despite its significance to public health, the spatiotemporal distributions of the disease in humans and livestock and its potential driving factors remain poorly understood. Using national surveillance data of human and livestock anthrax from 2005 to 2013, we conducted a retrospective epidemiological study and risk assessment of anthrax in mainland China. The potential determinants of the temporal and spatial distributions of human anthrax were also explored. We found that the majority of human anthrax cases were located in six provinces in western and northeastern China, and five clustering areas with higher incidences were identified. The disease mostly peaked in July or August, and males aged 30-49 years had higher incidence than other subgroups. Monthly incidence of human anthrax was positively correlated with monthly average temperature, relative humidity and monthly accumulative rainfall at lags of 0-2 months. A boosted regression trees (BRT) model at the county level revealed that densities of cattle, sheep and humans, coverage of meadow, coverage of typical grassland, elevation, coverage of topsoil with pH > 6.1, concentration of organic carbon in topsoil, and meteorological factors contributed substantially to the spatial distribution of the disease. The model-predicted probability of occurrence of human cases in mainland China was mapped at the county level. Anthrax in China was characterized by significant seasonality and spatial clustering. The spatial distribution of human anthrax was largely driven by livestock husbandry, human density, land cover, elevation, topsoil features and climate. Enhanced surveillance and intervention for livestock and human anthrax in the high-risk regions, particularly on the Qinghai-Tibetan Plateau, is key to the prevention of human infections.
NARSTO PAC2001 SLOCAN PARK GAS PM MET DATA
Atmospheric Science Data Center
2018-04-09
... Parameters: Atmospheric Pressure Measurements Air Temperature Humidity Surface Winds Ozone Aerosol Particle ... Data: Spatial Coverage: Canada Pacific 2001 Air Quality Study SCAR-B Block: SCAR-B ...
Color normalization of histology slides using graph regularized sparse NMF
NASA Astrophysics Data System (ADS)
Sha, Lingdao; Schonfeld, Dan; Sethi, Amit
2017-03-01
Computer-based automatic medical image processing and quantification are becoming popular in digital pathology. However, preparation of histology slides can vary widely due to differences in staining equipment, procedures and reagents, which can reduce the accuracy of algorithms that analyze their color and texture information. To reduce the unwanted color variations, various supervised and unsupervised color normalization methods have been proposed. Compared with supervised methods, unsupervised color normalization methods have the advantages of time and cost efficiency and universal applicability. Most unsupervised color normalization methods for histology are based on stain separation. Because stain concentration cannot be negative and different parts of the tissue absorb different stains, nonnegative matrix factorization (NMF), and particularly its sparse version (SNMF), are good candidates for stain separation. However, most existing unsupervised color normalization methods like PCA, ICA, NMF and SNMF fail to consider important information about the sparse manifolds that their pixels occupy, which can result in loss of texture information during color normalization. Manifold learning methods like the Graph Laplacian have proven very effective in interpreting high-dimensional data. In this paper, we propose a novel unsupervised stain separation method called graph regularized sparse nonnegative matrix factorization (GSNMF). By considering the sparse prior of stain concentration together with manifold information from high-dimensional image data, our method shows better performance in stain color deconvolution than existing unsupervised color deconvolution methods, especially in keeping connected texture information. To utilize the texture information, we construct a nearest-neighbor graph between pixels within a spatial area of an image based on their distances, using a heat kernel in lαβ space. The representation of a pixel in the stain density space is constrained to follow the feature distance of the pixel to pixels in the neighborhood graph. Using a color matrix transfer method with the stain concentrations found by our GSNMF method, the color normalization performance was also better than that of existing methods.
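As baseline intuition for the stain-separation step, here is a minimal sketch of plain NMF via multiplicative updates, the building block that GSNMF extends with a sparsity prior and a graph-Laplacian term. Matrix sizes, the random initialization, and the iteration count are illustrative, not the authors' settings.

```python
import numpy as np

def nmf(V, r, iters=200, eps=1e-9):
    """Factor a nonnegative matrix V (m x n) into W (m x r) @ H (r x n)
    using Lee-Seung multiplicative updates. In stain separation, columns
    of W play the role of stain colors and rows of H their concentrations."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    for _ in range(iters):
        # Updates preserve nonnegativity and monotonically reduce
        # the Frobenius reconstruction error.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```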
Strategies for satellite-based monitoring of CO2 from distributed area and point sources
NASA Astrophysics Data System (ADS)
Schwandner, Florian M.; Miller, Charles E.; Duren, Riley M.; Natraj, Vijay; Eldering, Annmarie; Gunson, Michael R.; Crisp, David
2014-05-01
Atmospheric CO2 budgets are controlled by the strengths, as well as the spatial and temporal variabilities of CO2 sources and sinks. Natural CO2 sources and sinks are dominated by the vast areas of the oceans and the terrestrial biosphere. In contrast, anthropogenic and geogenic CO2 sources are dominated by distributed area and point sources, which may constitute as much as 70% of anthropogenic (e.g., Duren & Miller, 2012), and over 80% of geogenic emissions (Burton et al., 2013). Comprehensive assessments of CO2 budgets necessitate robust and highly accurate satellite remote sensing strategies that address the competing and often conflicting requirements for sampling over disparate space and time scales. Spatial variability: The spatial distribution of anthropogenic sources is dominated by patterns of production, storage, transport and use. In contrast, geogenic variability is almost entirely controlled by endogenic geological processes, except where surface gas permeability is modulated by soil moisture. Satellite remote sensing solutions will thus have to vary greatly in spatial coverage and resolution to address distributed area sources and point sources alike. Temporal variability: While biogenic sources are dominated by diurnal and seasonal patterns, anthropogenic sources fluctuate over a greater variety of time scales from diurnal, weekly and seasonal cycles, driven by both economic and climatic factors. Geogenic sources typically vary in time scales of days to months (geogenic sources sensu stricto are not fossil fuels but volcanoes, hydrothermal and metamorphic sources). Current ground-based monitoring networks for anthropogenic and geogenic sources record data on minute- to weekly temporal scales. Satellite remote sensing solutions would have to capture temporal variability through revisit frequency or point-and-stare strategies. Space-based remote sensing offers the potential of global coverage by a single sensor. 
However, no single combination of orbit and sensor provides the full range of temporal sampling needed to characterize distributed area and point source emissions. For instance, point source emission patterns will vary with source strength, wind speed and direction. Because wind speed, direction and other environmental factors change rapidly, short term variabilities should be sampled. For detailed target selection and pointing verification, important lessons have already been learned and strategies devised during JAXA's GOSAT mission (Schwandner et al, 2013). The fact that competing spatial and temporal requirements drive satellite remote sensing sampling strategies dictates a systematic, multi-factor consideration of potential solutions. Factors to consider include vista, revisit frequency, integration times, spatial resolution, and spatial coverage. No single satellite-based remote sensing solution can address this problem for all scales. It is therefore of paramount importance for the international community to develop and maintain a constellation of atmospheric CO2 monitoring satellites that complement each other in their temporal and spatial observation capabilities: Polar sun-synchronous orbits (fixed local solar time, no diurnal information) with agile pointing allow global sampling of known distributed area and point sources like megacities, power plants and volcanoes with daily to weekly temporal revisits and moderate to high spatial resolution. Extensive targeting of distributed area and point sources comes at the expense of reduced mapping or spatial coverage, and the important contextual information that comes with large-scale contiguous spatial sampling. Polar sun-synchronous orbits with push-broom swath-mapping but limited pointing agility may allow mapping of individual source plumes and their spatial variability, but will depend on fortuitous environmental conditions during the observing period. 
These solutions typically have longer times between revisits, limiting their ability to resolve temporal variations. Geostationary and non-sun-synchronous low-Earth-orbits (precessing local solar time, diurnal information possible) with agile pointing have the potential to provide comprehensive mapping of distributed area sources such as megacities with longer stare times and multiple revisits per day, at the expense of global access and spatial coverage. An ad hoc CO2 remote sensing constellation is emerging. NASA's OCO-2 satellite (launch July 2014) joins JAXA's GOSAT satellite in orbit. These will be followed by GOSAT-2 and NASA's OCO-3 on the International Space Station as early as 2017. Additional polar orbiting satellites (e.g., CarbonSat, under consideration at ESA) and geostationary platforms may also become available. However, the individual assets have been designed with independent science goals and requirements, and limited consideration of coordinated observing strategies. Every effort must be made to maximize the science return from this constellation. We discuss the opportunities to exploit the complementary spatial and temporal coverage provided by these assets as well as the crucial gaps in the capabilities of this constellation. References: Burton, M.R., Sawyer, G.M., and Granieri, D. (2013). Deep carbon emissions from volcanoes. Rev. Mineral. Geochem. 75: 323-354. Duren, R.M., Miller, C.E. (2012). Measuring the carbon emissions of megacities. Nature Climate Change 2, 560-562. Schwandner, F.M., Oda, T., Duren, R., Carn, S.A., Maksyutov, S., Crisp, D., Miller, C.E. (2013). Scientific Opportunities from Target-Mode Capabilities of GOSAT-2. NASA Jet Propulsion Laboratory, California Institute of Technology, Pasadena CA, White Paper, 6p., March 2013.
NASA Astrophysics Data System (ADS)
Peukert, Anne; Schoening, Timm; Alevizos, Evangelos; Köser, Kevin; Kwasnitschka, Tom; Greinert, Jens
2018-04-01
In this study, ship- and autonomous underwater vehicle (AUV)-based multibeam data from the German ferromanganese-nodule (Mn-nodule) license area in the Clarion-Clipperton Zone (CCZ; eastern Pacific) are linked to ground-truth data from optical imaging. Photographs obtained by an AUV enable semi-quantitative assessments of nodule coverage at a spatial resolution in the range of meters. Together with high-resolution AUV bathymetry, this revealed a correlation of small-scale terrain variations ( < 5 m horizontally, < 1 m vertically) with nodule coverage. In the presented data set, increased nodule coverage could be correlated with slopes > 1.8° and concave terrain. On a more regional scale, factors such as the geological setting (existence of horst and graben structures, sediment thickness, outcropping basement) and influence of bottom currents seem to play an essential role for the spatial variation of nodule coverage and the related hard substrate habitat. AUV imagery was also successfully employed to map the distribution of resettled sediment following a disturbance and sediment cloud generation during a sampling deployment of an epibenthic sledge. Data from before and after the disturbance
allow a direct assessment of the impact. Automated image processing analyzed the nodule coverage at the seafloor, revealing nodule blanketing by resettling of suspended sediment within 16 h after the disturbance. The visually detectable impact was spatially limited to a maximum of 100 m distance from the disturbance track, downstream of the bottom water current. A correlation with high-resolution AUV bathymetry reveals that the blanketing pattern varies in extent by tens of meters, strictly following the bathymetry, even in areas of only slightly undulating seafloor ( < 1 m vertical change). These results highlight the importance of detailed terrain knowledge when engaging in resource assessment studies for nodule abundance estimates and defining mineable areas. At the same time, it shows the importance of high-resolution mapping for detailed benthic habitat studies that show a heterogeneity at scales of 10 to 100 m. Terrain knowledge is also needed to determine the scale of the impact by seafloor sediment blanketing during mining operations.
Baker, Daniel H; Meese, Tim S
2016-07-27
Previous work has shown that human vision performs spatial integration of luminance contrast energy, where signals are squared and summed (with internal noise) over area at detection threshold. We tested that model here in an experiment using arrays of micro-pattern textures that varied in overall stimulus area and sparseness of their target elements, where the contrast of each element was normalised for sensitivity across the visual field. We found a power-law improvement in performance with stimulus area, and a decrease in sensitivity with sparseness. While the contrast integrator model performed well when target elements constituted 50-100% of the target area (replicating previous results), observers outperformed the model when texture elements were sparser than this. This result required the inclusion of further templates in our model, selective for grids of various regular texture densities. By assuming a MAX operation across these noisy mechanisms the model also accounted for the increase in the slope of the psychometric function that occurred as texture density decreased. Thus, for the first time, mechanisms that are selective for texture density have been revealed at contrast detection threshold. We suggest that these mechanisms have a role to play in the perception of visual textures.
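The decision rule described above (contrast energy summed within a template, then a MAX across density-selective templates) can be sketched as follows. The square-root-of-area normalization stands in for internal noise accumulating over larger templates, and all values are illustrative, not fitted model parameters.

```python
import math

def template_response(contrasts, template):
    """Energy (sum of squared contrast) over the elements a template
    covers, penalized by sqrt(area) as a stand-in for internal noise
    summed over the template's extent."""
    energy = sum(contrasts[i] ** 2 for i in template)
    return energy / math.sqrt(len(template))

def max_over_templates(contrasts, templates):
    """MAX operation across density-selective mechanisms."""
    return max(template_response(contrasts, t) for t in templates)
```

With this penalty, a template matched to a sparse texture's density beats the full-area integrator on sparse stimuli, which is the qualitative effect the paper reports.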
NASA Astrophysics Data System (ADS)
Sun, Yankui; Li, Shan; Sun, Zhongyang
2017-01-01
We propose a framework for automated detection of dry age-related macular degeneration (AMD) and diabetic macular edema (DME) from retina optical coherence tomography (OCT) images, based on sparse coding and dictionary learning. The study aims to improve the classification performance of state-of-the-art methods. First, our method presents a general approach to automatically align and crop retina regions; then it obtains global representations of images by using sparse coding and a spatial pyramid; finally, a multiclass linear support vector machine classifier is employed for classification. We apply two datasets to validate our algorithm: the Duke spectral domain OCT (SD-OCT) dataset, consisting of volumetric scans acquired from 45 subjects (15 normal subjects, 15 AMD patients, and 15 DME patients), and a clinical SD-OCT dataset, consisting of 678 OCT retina scans acquired from clinics in Beijing (168, 297, and 213 OCT images for AMD, DME, and normal retinas, respectively). For the former dataset, our classifier correctly identifies 100%, 100%, and 93.33% of the volumes with DME, AMD, and normal subjects, respectively, and thus performs much better than the conventional method; for the latter dataset, our classifier achieves correct classification rates of 99.67%, 99.67%, and 100.00% for DME, AMD, and normal images, respectively.
Salient Object Detection via Structured Matrix Decomposition.
Peng, Houwen; Li, Bing; Ling, Haibin; Hu, Weiming; Xiong, Weihua; Maybank, Stephen J
2016-05-04
Low-rank recovery models have shown potential for salient object detection, where a matrix is decomposed into a low-rank matrix representing image background and a sparse matrix identifying salient objects. Two deficiencies, however, still exist. First, previous work typically assumes the elements in the sparse matrix are mutually independent, ignoring the spatial and pattern relations of image regions. Second, when the low-rank and sparse matrices are relatively coherent, e.g., when there are similarities between the salient objects and background or when the background is complicated, it is difficult for previous models to disentangle them. To address these problems, we propose a novel structured matrix decomposition model with two structural regularizations: (1) a tree-structured sparsity-inducing regularization that captures the image structure and enforces patches from the same object to have similar saliency values, and (2) a Laplacian regularization that enlarges the gaps between salient objects and the background in feature space. Furthermore, high-level priors are integrated to guide the matrix decomposition and boost the detection. We evaluate our model for salient object detection on five challenging datasets including single object, multiple objects and complex scene images, and show competitive results as compared with 24 state-of-the-art methods in terms of seven performance metrics.
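As background for the structured model above, here is the classic unstructured low-rank-plus-sparse split (robust-PCA style) that such salient-object detectors build on, solved by a naive alternating scheme of singular-value shrinkage and entry-wise soft-thresholding. The solver and its parameters are a generic illustration, not the authors' structured-regularization algorithm.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lowrank_sparse_split(M, lam=0.1, tau=1.0, iters=50):
    """Decompose M ≈ L + S with L low-rank (image background) and
    S sparse (salient objects)."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        # Low-rank update: singular-value shrinkage of the residual.
        U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = U @ np.diag(soft_threshold(sig, tau)) @ Vt
        # Sparse update: entry-wise soft-thresholding of the residual.
        S = soft_threshold(M - L, lam)
    return L, S
```

The paper's contribution replaces the element-wise sparsity on S with a tree-structured regularizer and adds a Laplacian term separating object and background features.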
Baker, Daniel H.; Meese, Tim S.
2016-01-01
Previous work has shown that human vision performs spatial integration of luminance contrast energy, where signals are squared and summed (with internal noise) over area at detection threshold. We tested that model here in an experiment using arrays of micro-pattern textures that varied in overall stimulus area and sparseness of their target elements, where the contrast of each element was normalised for sensitivity across the visual field. We found a power-law improvement in performance with stimulus area, and a decrease in sensitivity with sparseness. While the contrast integrator model performed well when target elements constituted 50–100% of the target area (replicating previous results), observers outperformed the model when texture elements were sparser than this. This result required the inclusion of further templates in our model, selective for grids of various regular texture densities. By assuming a MAX operation across these noisy mechanisms the model also accounted for the increase in the slope of the psychometric function that occurred as texture density decreased. Thus, for the first time, mechanisms that are selective for texture density have been revealed at contrast detection threshold. We suggest that these mechanisms have a role to play in the perception of visual textures.
Synthesizing spatiotemporally sparse smartphone sensor data for bridge modal identification
NASA Astrophysics Data System (ADS)
Ozer, Ekin; Feng, Maria Q.
2016-08-01
Smartphones as vibration measurement instruments form a large-scale, citizen-induced, and mobile wireless sensor network (WSN) for system identification and structural health monitoring (SHM) applications. Crowdsourcing-based SHM is possible with a decentralized system granting citizens operational responsibility and control. Yet, citizen initiatives introduce device mobility, drastically changing SHM results due to uncertainties in the time and the space domains. This paper proposes a modal identification strategy that fuses spatiotemporally sparse SHM data collected by smartphone-based WSNs. Multichannel data sampled independently in time and space are used to compose the modal identification parameters such as frequencies and mode shapes. Structural response time histories can be gathered by smartphone accelerometers and converted into Fourier spectra by the processor units. Timestamp, data length, and energy-to-power conversion address temporal variation, whereas spatial uncertainties are reduced by geolocation services or by determining node identity via QR code labels. Then, parameters collected from each distributed network component can be extended to global behavior to deduce modal parameters without the need for a centralized and synchronous data acquisition system. The proposed method is tested on a pedestrian bridge and compared with a conventional reference monitoring system. The results show that the spatiotemporally sparse mobile WSN data can be used to infer modal parameters despite non-overlapping sensor operation schedules.
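The key point — that each phone's record can be reduced to a spectrum independently, so a shared modal frequency emerges without centralized synchronous acquisition — can be sketched in a few lines of numpy. The 2.5 Hz mode, 100 Hz sampling rate, and record timings below are invented for illustration:

```python
import numpy as np

def peak_frequency(accel, fs):
    """Estimate the dominant modal frequency of one acceleration record
    by peak-picking its Fourier amplitude spectrum (mean-removed)."""
    x = accel - accel.mean()
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs[np.argmax(spec[1:]) + 1]  # skip the DC bin

# two asynchronous "smartphone" records of the same 2.5 Hz bridge mode
fs = 100.0
t1 = np.arange(0.0, 20.0, 1 / fs)
t2 = np.arange(3.7, 18.2, 1 / fs)          # different start time and length
rng = np.random.default_rng(1)
rec1 = np.sin(2 * np.pi * 2.5 * t1) + 0.3 * rng.normal(size=t1.size)
rec2 = np.sin(2 * np.pi * 2.5 * t2) + 0.3 * rng.normal(size=t2.size)
f1, f2 = peak_frequency(rec1, fs), peak_frequency(rec2, fs)
```

Both non-overlapping records recover the same frequency. Mode shapes additionally require relative spectral amplitudes and phases at known node positions, which is where the geolocation services or QR-code node identification enter.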
4D Infant Cortical Surface Atlas Construction using Spherical Patch-based Sparse Representation.
Wu, Zhengwang; Li, Gang; Meng, Yu; Wang, Li; Lin, Weili; Shen, Dinggang
2017-09-01
A 4D infant cortical surface atlas with densely sampled time points is much needed for neuroimaging analysis of early brain development. In this paper, we build the first 4D infant cortical surface atlas covering 6 postnatal years with 11 time points (i.e., 1, 3, 6, 9, 12, 18, 24, 36, 48, 60, and 72 months), based on 339 longitudinal MRI scans from 50 healthy infants. To build the 4D cortical surface atlas, we first adopt a two-stage groupwise surface registration strategy to ensure both longitudinal consistency and unbiasedness. Second, instead of simply averaging over the co-registered surfaces, a spherical patch-based sparse representation is developed to overcome possible surface registration errors across different subjects. The central idea is that, for each local spherical patch in the atlas space, we build a dictionary that includes the samples of the current local patch and its spatially-neighboring patches from all co-registered surfaces, and the current local patch in the atlas is then sparsely represented using this dictionary. Compared to atlases built with conventional methods, the 4D infant cortical surface atlas constructed by our method preserves more details of cortical folding patterns, leading to improved accuracy in registration of new infant cortical surfaces.
Sun, Yankui; Li, Shan; Sun, Zhongyang
2017-01-01
We propose a framework for automated detection of dry age-related macular degeneration (AMD) and diabetic macular edema (DME) from retina optical coherence tomography (OCT) images, based on sparse coding and dictionary learning. The study aims to improve the classification performance of state-of-the-art methods. First, our method presents a general approach to automatically align and crop retina regions; then it obtains global representations of images by using sparse coding and a spatial pyramid; finally, a multiclass linear support vector machine classifier is employed for classification. We apply two datasets for validating our algorithm: Duke spectral domain OCT (SD-OCT) dataset, consisting of volumetric scans acquired from 45 subjects—15 normal subjects, 15 AMD patients, and 15 DME patients; and clinical SD-OCT dataset, consisting of 678 OCT retina scans acquired from clinics in Beijing—168, 297, and 213 OCT images for AMD, DME, and normal retinas, respectively. For the former dataset, our classifier correctly identifies 100%, 100%, and 93.33% of the volumes with DME, AMD, and normal subjects, respectively, and thus performs much better than the conventional method; for the latter dataset, our classifier leads to a correct classification rate of 99.67%, 99.67%, and 100.00% for DME, AMD, and normal images, respectively.
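The three-stage pipeline above (sparse coding of local patches, spatial-pyramid pooling, then a linear classifier) can be sketched as below. This is a hedged stand-in, not the paper's implementation: the dictionary is random rather than learned, a one-step soft threshold replaces a proper sparse solver, and the pooled vector would then be fed to a multiclass linear SVM (omitted here); all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
K, patch_dim = 32, 64               # atoms, 8x8 patches flattened
D = rng.normal(size=(patch_dim, K))
D /= np.linalg.norm(D, axis=0)      # unit-norm dictionary atoms

def sparse_code(patch, lam=0.5):
    """One-step soft-thresholded correlation as a stand-in for a real
    sparse solver (e.g. LARS/OMP) -- enough to show the pipeline."""
    c = D.T @ patch
    return np.sign(c) * np.maximum(np.abs(c) - lam, 0)

def pyramid_feature(img, levels=(1, 2)):
    """Max-pool sparse codes of 8x8 patches over a spatial pyramid."""
    H, W = img.shape
    codes, centers = [], []
    for i in range(0, H - 8 + 1, 8):
        for j in range(0, W - 8 + 1, 8):
            codes.append(np.abs(sparse_code(img[i:i+8, j:j+8].ravel())))
            centers.append((i / H, j / W))
    codes = np.array(codes)
    feats = []
    for n in levels:                 # pool within each n x n sub-region
        for a in range(n):
            for b in range(n):
                mask = [a/n <= y < (a+1)/n and b/n <= x < (b+1)/n
                        for y, x in centers]
                feats.append(codes[np.array(mask)].max(axis=0))
    return np.concatenate(feats)     # this vector feeds the linear SVM

f = pyramid_feature(rng.normal(size=(64, 64)))  # 32 atoms x (1 + 4) regions
```

The pyramid gives the global image representation a coarse spatial layout while the sparse codes summarize local appearance, which is why a simple linear classifier suffices downstream.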
Vision based obstacle detection and grouping for helicopter guidance
NASA Technical Reports Server (NTRS)
Sridhar, Banavar; Chatterji, Gano
1993-01-01
Electro-optical sensors can be used to compute range to objects in the flight path of a helicopter. The computation is based on the optical flow/motion at different points in the image. The motion algorithms provide a sparse set of ranges to discrete features in the image sequence as a function of azimuth and elevation. For obstacle avoidance guidance and display purposes, this discrete set of ranges, varying from a few hundred to several thousand in number, needs to be grouped into sets which correspond to objects in the real world. This paper presents a new method for object segmentation based on clustering the sparse range information provided by motion algorithms together with the spatial relation provided by the static image. The range values are initially grouped into clusters based on depth. Subsequently, the clusters are modified by using the K-means algorithm in the inertial horizontal plane and the minimum spanning tree algorithm in the image plane. The object grouping allows interpolation within a group and enables the creation of dense range maps. Researchers in robotics have used densely scanned sequences of laser range images to build three-dimensional representations of the outside world. Thus, modeling techniques developed for dense range images can be extended to sparse range images. The paper presents object segmentation results for a sequence of flight images.
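The first step — initial grouping of the sparse ranges by depth — can be illustrated with a toy numpy sketch. The subsequent K-means and minimum-spanning-tree refinements are omitted, and the 100 m gap threshold and obstacle depths are invented:

```python
import numpy as np

def group_by_depth(ranges, gap=100.0):
    """Initial clustering: walk the sorted range values and start a new
    cluster wherever consecutive depths differ by more than `gap` metres."""
    order = np.argsort(ranges)
    labels = np.empty(len(ranges), dtype=int)
    cluster = 0
    labels[order[0]] = 0
    for prev, cur in zip(order[:-1], order[1:]):
        if ranges[cur] - ranges[prev] > gap:
            cluster += 1
        labels[cur] = cluster
    return labels

# two obstacles: sparse range estimates near ~300 m and ~2000 m
rng = np.random.default_rng(2)
r = np.concatenate([300 + 5 * rng.normal(size=40),
                    2000 + 10 * rng.normal(size=25)])
labels = group_by_depth(r)
```

Each resulting depth cluster could then be refined with K-means in the horizontal plane, and dense range maps interpolated within each group, as the abstract describes.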
Moïsi, Jennifer C; Kabuka, Jonathan; Mitingi, Dorah; Levine, Orin S; Scott, J Anthony G
2010-08-09
We conducted a vaccine coverage survey in Kilifi District, Kenya in order to identify predictors of childhood immunization. We calculated travel time to vaccine clinics and examined its relationship to immunization coverage and timeliness among the 2169 enrolled children (median age: 12.5 months). 86% had vaccine cards available, >95% had received three doses of DTP-HepB-Hib and polio vaccines, and 88% had received measles vaccine. Travel time did not affect vaccination coverage or timeliness. The Kenyan EPI reaches nearly all children in Kilifi and delays in vaccination are few, suggesting that vaccines will have maximal impact on child morbidity and mortality.
Hierarchical Nearest-Neighbor Gaussian Process Models for Large Geostatistical Datasets.
Datta, Abhirup; Banerjee, Sudipto; Finley, Andrew O; Gelfand, Alan E
2016-01-01
Spatial process models for analyzing geostatistical data entail computations that become prohibitive as the number of spatial locations becomes large. This article develops a class of highly scalable nearest-neighbor Gaussian process (NNGP) models to provide fully model-based inference for large geostatistical datasets. We establish that the NNGP is a well-defined spatial process providing legitimate finite-dimensional Gaussian densities with sparse precision matrices. We embed the NNGP as a sparsity-inducing prior within a rich hierarchical modeling framework and outline how computationally efficient Markov chain Monte Carlo (MCMC) algorithms can be executed without storing or decomposing large matrices. The number of floating-point operations (flops) per iteration of this algorithm is linear in the number of spatial locations, yielding substantial scalability. We illustrate the computational and inferential benefits of the NNGP over competing methods using simulation studies and also analyze forest biomass from a massive U.S. Forest Inventory dataset at a scale that precludes alternative dimension-reducing methods. Supplementary materials for this article are available online.
Xiao, Kai; Ma, Ying-Zhong; Simpson, Mary Jane; ...
2016-04-22
Charge carrier trapping degrades the performance of organometallic halide perovskite solar cells. To characterize the locations of electronic trap states in a heterogeneous photoactive layer, a spatially resolved approach is essential. Here, we report a comparative study on methylammonium lead tri-iodide perovskite thin films subject to different thermal annealing times using a combined photoluminescence (PL) and femtosecond transient absorption microscopy (TAM) approach to spatially map trap states. This approach coregisters the initially populated electronic excited states with the regions that recombine radiatively. Although the TAM images are relatively homogeneous for both samples, the corresponding PL images are highly structured. The remarkable variation in the PL intensities as compared to transient absorption signal amplitude suggests spatially dependent PL quantum efficiency, indicative of trapping events. Furthermore, detailed analysis enables identification of two trapping regimes: a densely packed trapping region and a sparse trapping area that appear as unique spatial features in scaled PL maps.
NASA Astrophysics Data System (ADS)
Yenier, E.; Baturan, D.; Karimi, S.
2016-12-01
Monitoring of seismicity related to oil and gas operations is routinely performed nowadays using a number of different surface and downhole seismic array configurations and technologies. Here, we provide a hydraulic fracture (HF) monitoring case study that compares the data set generated by a sparse local surface network of broadband seismometers to a data set generated by a single downhole geophone string. Our data was collected during a 5-day single-well HF operation by a temporary surface network consisting of 10 stations deployed within 5 km of the production well. The downhole data was recorded by a 20-geophone string deployed in an observation well located 15 m from the production well. Surface network data processing included standard STA/LTA event triggering enhanced by template-matching subspace detection, grid search locations that were improved using the double-differencing relocation technique, and Richter (ML) and moment (Mw) magnitude computations for all detected events. In addition, moment tensors were computed from first motion polarities and amplitudes for the subset of highest-SNR events. The resulting surface event catalog shows a very weak spatio-temporal correlation to HF operations, with only 43% of recorded seismicity occurring during HF stage times. This, along with the source mechanisms, shows that the surface-recorded seismicity delineates the activation of several pre-existing structures striking NNE-SSW, consistent with regional stress conditions as indicated by the orientation of SHmax. Comparison of the sparse-surface and single downhole string datasets allows us to perform a cost-benefit analysis of the two monitoring methods. Our findings show that although the downhole array recorded ten times as many events, the surface network provides a more coherent delineation of the underlying structure and more accurate magnitudes for larger magnitude events.
We attribute this to the enhanced focal coverage provided by the surface network and the use of broadband instrumentation. The results indicate that sparse surface networks of high quality instruments can provide rich and reliable datasets for evaluation of the impact and effectiveness of hydraulic fracture operations in regions with favorable surface noise, local stress and attenuation characteristics.
A new, multi-resolution bedrock elevation map of the Greenland ice sheet
NASA Astrophysics Data System (ADS)
Griggs, J. A.; Bamber, J. L.; Grisbed Consortium
2010-12-01
Gridded bedrock elevation for the Greenland ice sheet has previously been constructed with a 5 km posting. The true resolution of the data set was, in places, however, considerably coarser than this due to the across-track spacing of ice-penetrating radar transects. Errors were estimated to be on the order of a few percent in the centre of the ice sheet, increasing markedly in relative magnitude near the margins, where accurate thickness is particularly critical for numerical modelling and other applications. We use new airborne and satellite estimates of ice thickness and surface elevation to determine the bed topography for the whole of Greenland. This is a dynamic product, which will be updated frequently as new data, such as that from NASA's Operation Ice Bridge, become available. The University of Kansas has in recent years flown an airborne ice-penetrating radar system with close flightline spacing over several key outlet glacier systems. This allows us to produce a multi-resolution bedrock elevation dataset with the high spatial resolution needed for ice dynamic modelling over these key outlet glaciers and coarser resolution over the more sparsely sampled interior. Airborne ice thickness and elevation from CReSIS obtained between 1993 and 2009 are combined with JPL/UCI/Iowa data collected by WISE (Warm Ice Sounding Experiment) covering the marginal areas along the south west coast from 2009. Data collected in the 1970s by the Technical University of Denmark were also used in interior areas with sparse coverage from other sources. Marginal elevation data from the ICESat laser altimeter and the Greenland Ice Mapping Program were used to help constrain the ice thickness and bed topography close to the ice sheet margin where, typically, the terrestrial observations have poor sampling between flight tracks. The GRISBed consortium currently consists of: W. Blake, S. Gogineni, A. Hoch, C. M. Laird, C. Leuschen, J. Meisel, J. Paden, J. Plummer, F. Rodriguez-Morales and L. Smith, CReSIS, University of Kansas; E. Rignot, JPL and University of California, Irvine; Y. Gim, JPL; J. Mouginot, University of California, Irvine; D. Kirchner, University of Iowa; I. Howat, Byrd Polar Research Center, Ohio State University; I. Joughin and B. Smith, University of Washington; T. Scambos, NSIDC; S. Martin, University of Washington; T. Wagner, NASA.
Wang, Kang; Zhang, Tingjun; Zhang, Xiangdong; Clow, Gary D.; Jafarov, Elchin E.; Overeem, Irina; Romanovsky, Vladimir; Peng, Xiaoqing; Cao, Bin
2017-01-01
Historically, in situ measurements have been notoriously sparse over the Arctic. As a consequence, the existing gridded data of surface air temperature (SAT) may have large biases in estimating the warming trend in this region. Using data from an expanded monitoring network with 31 stations in the Alaskan Arctic, we demonstrate that the SAT has increased by 2.19°C in this region, or at a rate of 0.23°C/decade, during 1921–2015. Meanwhile, we found that the SAT warmed at 0.71°C/decade over 1998–2015, which is 2 to 3 times faster than the rate established from the gridded data sets. Focusing on the "hiatus" period 1998–2012 as identified by the Intergovernmental Panel on Climate Change (IPCC) report, the SAT has increased at 0.45°C/decade, which captures more than 90% of the regional trend for 1951–2012. We suggest that sparse in situ measurements are responsible for underestimation of the SAT change in the gridded data sets. It is likely that enhanced climate warming may also have happened in other regions of the Arctic since the late 1990s but has gone undetected because of incomplete observational coverage.
Basu, Sumanta; Duren, William; Evans, Charles R; Burant, Charles F; Michailidis, George; Karnovsky, Alla
2017-05-15
Recent technological advances in mass spectrometry, development of richer mass spectral libraries and data processing tools have enabled large scale metabolic profiling. Biological interpretation of metabolomics studies heavily relies on knowledge-based tools that contain information about metabolic pathways. Incomplete coverage of different areas of metabolism and lack of information about non-canonical connections between metabolites limits the scope of applications of such tools. Furthermore, the presence of a large number of unknown features, which cannot be readily identified, but nonetheless can represent bona fide compounds, also considerably complicates biological interpretation of the data. Leveraging recent developments in the statistical analysis of high-dimensional data, we developed a new Debiased Sparse Partial Correlation algorithm (DSPC) for estimating partial correlation networks and implemented it as a Java-based CorrelationCalculator program. We also introduce a new version of our previously developed tool Metscape that enables building and visualization of correlation networks. We demonstrate the utility of these tools by constructing biologically relevant networks and in aiding identification of unknown compounds. http://metscape.med.umich.edu. Supplementary data are available at Bioinformatics online.
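The object DSPC estimates — a partial correlation network — reduces, when samples are plentiful relative to the number of metabolites, to the rescaled inverse sample covariance. The sketch below shows that plain estimator on a synthetic A → B → C chain; DSPC's lasso-type sparsity and debiasing, needed when the number of features approaches the sample size, are omitted, and all names are ours:

```python
import numpy as np

def partial_correlations(X):
    """Partial correlation matrix from the inverse sample covariance:
    r_ij = -P_ij / sqrt(P_ii * P_jj), with P the precision matrix.
    (DSPC adds sparsity and debiasing for high-dimensional settings;
    this plain inverse is the n >> p special case.)"""
    P = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(P))
    R = -P / np.outer(d, d)
    np.fill_diagonal(R, 1.0)
    return R

# chain A -> B -> C: A and C are marginally correlated, but only via B
rng = np.random.default_rng(4)
a = rng.normal(size=5000)
b = a + 0.5 * rng.normal(size=5000)
c = b + 0.5 * rng.normal(size=5000)
R = partial_correlations(np.column_stack([a, b, c]))
```

The A–C partial correlation is near zero while the direct A–B edge survives, which is why partial (rather than marginal) correlations recover direct metabolite connections.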
NASA Astrophysics Data System (ADS)
Garry, Freya; McDonagh, Elaine; Blaker, Adam; Roberts, Chris; Desbruyères, Damien; King, Brian
2017-04-01
Estimates of heat content change in the deep oceans (below 2000 m) over the last thirty years are obtained from temperature measurements made by hydrographic survey ships. Cruises occupy the same tracks across an ocean basin approximately every 5+ years. Measurements may not be sufficiently frequent in time or space to allow accurate evaluation of total ocean heat content (OHC) and its rate of change. It is widely thought that additional deep ocean sampling will also aid understanding of the mechanisms for OHC change on annual to decadal timescales, including how OHC varies regionally under natural and anthropogenically forced climate change. Here a 0.25° ocean model is used to investigate the magnitude of uncertainties and biases that exist in estimates of deep ocean temperature change from hydrographic sections due to their infrequent timing and sparse spatial distribution during 1990-2010. Biases in the observational data may be due to lack of spatial coverage (not enough sections covering the basin), lack of data between occupations (typically 5-10 years apart), and occupations not closely spanning the time period of interest. Between 1990 and 2010, the modelled biases globally are comparatively small in the abyssal ocean below 3500 m, although regionally certain biases in heat flux into the 4000-6000 m layer can be up to 0.05 Wm-2. Biases in the heat flux into the deep 2000-4000 m layer due to either temporal or spatial sampling uncertainties are typically much larger and can be over 0.1 Wm-2 across an ocean. Overall, 82% of the warming trend below 2000 m is captured by observational-style sampling in the model. However, at 2500 m (too deep for additional temperature information to be inferred from upper ocean Argo) less than two thirds of the magnitude of the global warming trend is obtained, and regionally large biases exist in the Atlantic, Southern and Indian Oceans, highlighting the need for widespread improved deep ocean temperature sampling.
In addition to bias due to infrequent sampling, moving the timings of occupations by a few months generates relatively large uncertainty due to intra-annual variability in deep ocean model temperature, further strengthening the case for high temporal frequency observations in the deep ocean (as could be achieved using deep ocean autonomous float technologies). Biases due to different uncertainties can have opposing signs and differ in relative importance both regionally and with depth revealing the importance of reducing all uncertainties (both spatial and temporal) simultaneously in future deep ocean observing design.
SAGE III L2 Monthly Cloud Presence Data (Binary)
Atmospheric Science Data Center
2016-06-14
Spatial Coverage: ... degrees South; Spatial Resolution: 1 km vertical; Temporal Coverage: 02/27/2002 - 12/31/2005; Parameters: Cloud Amount/Frequency, Cloud Height, Cloud Vertical Distribution.
Wang, Shusen; Pan, Ming; Mu, Qiaozhen; ...
2015-07-29
This study compares six evapotranspiration (ET) products for Canada's landmass, namely, eddy covariance (EC) measurements; surface water budget ET; remote sensing ET from MODIS; and land surface model (LSM) ET from the Community Land Model (CLM), the Ecological Assimilation of Land and Climate Observations (EALCO) model, and the Variable Infiltration Capacity model (VIC). The ET climatology over the Canadian landmass is characterized and the advantages and limitations of the datasets are discussed. The EC measurements have limited spatial coverage, making it difficult for model validations at the national scale. Water budget ET has the largest uncertainty because of data quality issues with precipitation in mountainous regions and in the north. MODIS ET shows relatively large uncertainty in cold seasons and sparsely vegetated regions. The LSM products cover the entire landmass and exhibit small differences in ET among them. Annual ET from the LSMs ranges from small negative values to over 600 mm across the landmass, with a countrywide average of 256 ± 15 mm. Seasonally, the countrywide average monthly ET varies from a low of about 3 mm in four winter months (November-February) to 67 ± 7 mm in July. The ET uncertainty is scale dependent. Larger regions tend to have smaller uncertainties because of the offset of positive and negative biases within the region. More observation networks and better quality controls are critical to improving ET estimates. Future techniques should also consider a hybrid approach that integrates strengths of the various ET products to help reduce uncertainties in ET estimation.
NASA Tech Briefs, November 2003
NASA Technical Reports Server (NTRS)
2003-01-01
Topics covered include: Computer Program Recognizes Patterns in Time-Series Data; Program for User-Friendly Management of Input and Output Data Sets; Noncoherent Tracking of a Source of a Data-Modulated Signal; Software for Acquiring Image Data for PIV; Detecting Edges in Images by Use of Fuzzy Reasoning; A Timer for Synchronous Digital Systems; Prototype Parts of a Digital Beam-Forming Wide-Band Receiver; High-Voltage Droplet Dispenser; Network Extender for MIL-STD-1553 Bus; MMIC HEMT Power Amplifier for 140 to 170 GHz; Piezoelectric Diffraction-Based Optical Switches; Numerical Modeling of Nanoelectronic Devices; Organizing Diverse, Distributed Project Information; Eigensolver for a Sparse, Large Hermitian Matrix; Modified Polar-Format Software for Processing SAR Data; e-Stars Template Builder; Software for Acoustic Rendering; Functionally Graded Nanophase Beryllium/Carbon Composites; Thin Thermal-Insulation Blankets for Very High Temperatures; Prolonging Microgravity on Parabolic Airplane Flights; Device for Locking a Control Knob; Cable-Dispensing Cart; Foam Sensor Structures Would be Self-Deployable and Survive Hard Landings; Real-Gas Effects on Binary Mixing Layers; Earth-Space Link Attenuation Estimation via Ground Radar Kdp; Wedge Heat-Flux Indicators for Flash Thermography; Measuring Diffusion of Liquids by Common-Path Interferometry; Zero-Shear, Low-Disturbance Optical Delay Line; Whispering-Gallery Mode-Locked Lasers; Spatial Light Modulators as Optical Crossbar Switches; Update on EMD and Hilbert-Spectra Analysis of Time Series; Quad-Tree Visual-Calculus Analysis of Satellite Coverage; Dyakonov-Perel Effect on Spin Dephasing in n-Type GaAs; Update on Area Production in Mixing of Supercritical Fluids; and Quasi-Sun-Pointing of Spacecraft Using Radiation Pressure.
NASA Astrophysics Data System (ADS)
Fan, Jiayuan; Tan, Hui Li; Toomik, Maria; Lu, Shijian
2016-10-01
Spatial pyramid matching has demonstrated its power for image recognition tasks by pooling features from spatially increasingly fine sub-regions. Motivated by the concept of feature pooling at multiple pyramid levels, we propose a novel spectral-spatial hyperspectral image classification approach using superpixel-based spatial pyramid representation. This technique first generates multiple superpixel maps by decreasing the superpixel number gradually along with the increased spatial regions for labelled samples. By using every superpixel map, sparse representation of pixels within every spatial region is then computed through local max pooling. Finally, features learned from training samples are aggregated and trained by a support vector machine (SVM) classifier. The proposed spectral-spatial hyperspectral image classification technique has been evaluated on two public hyperspectral datasets, including the Indian Pines image containing 16 different agricultural scene categories with a 20 m resolution acquired by AVIRIS and the University of Pavia image containing 9 land-use categories with a 1.3 m spatial resolution acquired by the ROSIS-03 sensor. Experimental results show significantly improved performance compared with the state-of-the-art works. The major contributions of this proposed technique include (1) a new spectral-spatial classification approach to generate feature representation for hyperspectral images, (2) a complementary yet effective feature pooling approach, i.e. the superpixel-based spatial pyramid representation that is used for the spatial correlation study, and (3) evaluation on two public hyperspectral image datasets with superior image classification performance.
Sun, Shaojie; Hu, Chuanmin; Feng, Lian; Swayze, Gregg A.; Holmes, Jamie; Graettinger, George; MacDonald, Ian R.; Garcia, Oscar; Leifer, Ira
2016-01-01
Using fine spatial resolution (~ 7.6 m) hyperspectral AVIRIS data collected over the Deepwater Horizon oil spill in the Gulf of Mexico, we statistically estimated slick lengths, widths and length/width ratios to characterize oil slick morphology for different thickness classes. For all AVIRIS-detected oil slicks (N = 52,100 continuous features) binned into four thickness classes (≤ 50 μm but thicker than sheen, 50–200 μm, 200–1000 μm, and > 1000 μm), the median lengths, widths, and length/width ratios of these classes ranged between 22 and 38 m, 7–11 m, and 2.5–3.3, respectively. The AVIRIS data were further aggregated to 30-m (Landsat resolution) and 300-m (MERIS resolution) spatial bins to determine the fractional oil coverage in each bin. Overall, if 50% fractional pixel coverage were to be required to detect oil with thickness greater than sheen for most oil containing pixels, a 30-m resolution sensor would be needed.
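The aggregation step — binning a fine-resolution oil mask into coarser sensor pixels and thresholding the fractional coverage — can be sketched as follows. The 8 m cells and 32 m bins are illustrative stand-ins for the paper's 7.6 m AVIRIS and 30 m Landsat resolutions, chosen so the bin size divides evenly:

```python
import numpy as np

def fractional_coverage(mask, cell_m, bin_m):
    """Fraction of each coarse bin covered by oil in a fine binary mask
    (block averaging; assumes bin_m is an integer multiple of cell_m)."""
    k = int(round(bin_m / cell_m))
    H, W = (mask.shape[0] // k) * k, (mask.shape[1] // k) * k
    blocks = mask[:H, :W].reshape(H // k, k, W // k, k)
    return blocks.mean(axis=(1, 3))

# synthetic 8 m mask with one small slick (8 x 3 fine cells)
mask = np.zeros((100, 100))
mask[10:18, 20:23] = 1
frac = fractional_coverage(mask, cell_m=8, bin_m=32)   # 4x4 cells per bin
detectable = frac >= 0.5   # bins where a 30 m-class sensor meets the
                           # 50% fractional-coverage detection criterion
```

Because the slick straddles bin boundaries, only the central bin exceeds the 50% criterion; this is the mechanism by which narrow slicks fall below the detection limit of coarser sensors.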
NASA Astrophysics Data System (ADS)
Greenwald, R. A.; Ruohoniemi, M.; Baker, J. B.; Talaat, E.; Lester, M.; Oksavik, K.
2008-12-01
During the IPY, the second of two lower-latitude SuperDARN radars was put into operation in the eastern U.S. Located at Blackstone, VA and directed toward central Canada, it extends the coverage of the preexisting Wallops Island radar to more than 4 hours of magnetic local time and covers 50-70 degrees geomagnetic latitude, providing coverage of ionospheric plasma convection and electric fields on magnetic field lines connected to the inner boundary of the plasmasheet, ring current, and plasmapause. Although initial measurements with this coordinated pair of radars were made at a time of low geomagnetic activity, there have been many opportunities to examine both the spatial and temporal response of low-latitude auroral and subauroral plasma convection and its associated electric field to a variety of high-latitude magnetospheric drivers, including dayside reconnection and midnight-sector substorms. In this paper, we discuss the dynamical response of these flows to both dayside reconnection and substorms. We specifically examine the timing, location, spatial extent and intensity of these flow enhancements versus the nature and strength of the driver.
NASA Astrophysics Data System (ADS)
Klehmet, K.; Rockel, B.
2012-04-01
The analysis of long-term changes and variability of climate variables across the large areal extent of Siberia - covering arctic, subarctic and temperate northern latitudes - is hampered by the sparseness of in-situ observations. To counteract this deficiency, we aimed to provide a reconstruction of regional climate for the period 1948-2010, yielding homogeneous, consistent fields of various terrestrial and atmospheric parameters for Siberia. In order to obtain a higher temporal and spatial resolution than global datasets can provide, we performed the reconstruction using the regional climate model COSMO-CLM (the climate mode of the limited-area model COSMO developed by the German weather service). However, the question arises whether the dynamically downscaled reanalysis data can improve the representation of recent climate conditions. As global forcing for the initialization and the regional boundaries we use the NCEP-1 Reanalysis of the National Centers for Environmental Prediction, since it has the longest temporal coverage among the reanalysis products. Additionally, spectral nudging is applied to prevent the regional model from deviating from the prescribed large-scale circulation within the whole simulation domain. The area of interest covers a region in Siberia spanning from the Laptev Sea and Kara Sea to Northern Mongolia and from the West Siberian Lowland to the border of the Sea of Okhotsk. The current horizontal resolution is about 50 km, which is planned to be increased to 25 km. To answer the question, we investigate spatial and temporal characteristics of temperature and precipitation in the model output in comparison to global reanalysis data (NCEP-1, ERA40, ERA-Interim). As reference, Russian station data from the "Global Summary of the Day" data set, provided by NCDC, are used.
Temperature is analyzed with respect to its climatologically spatial patterns across the model domain and its variability of extremes based on climate indices derived from daily mean, maximum, minimum temperature (e.g. frost days) for different subregions. The decreasing number of frost days from north to south of the region, calculated from the reanalysis datasets and COSMO-CLM output, indicates the temperature gradient from the arctic to temperate latitudes. For most of the considered subregions NCEP-1 shows more frost days than ERA-Interim and COSMO-CLM.
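The "frost days" index used in the comparison above is defined from daily minimum temperature. A minimal sketch, with a synthetic temperature series rather than COSMO-CLM output:

```python
# Frost days: the number of days whose daily minimum temperature
# falls below 0 degC (a standard climate index derived from Tmin).

def frost_days(tmin_series):
    """Count days with daily minimum temperature below 0 degC."""
    return sum(1 for t in tmin_series if t < 0.0)

tmin = [-5.2, -0.1, 0.0, 3.4, -2.0, 7.1]  # synthetic daily Tmin, degC
n_frost = frost_days(tmin)
```

Applied per grid cell and per year, such counts yield the north-to-south frost-day gradient the abstract describes.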
NASA Astrophysics Data System (ADS)
Cifelli, R.; Chen, H.; Chandrasekar, C. V.; Willie, D.; Reynolds, D.; Campbell, C.; Zhang, Y.; Sukovich, E.
2012-12-01
Investigating the uncertainties and improving the accuracy of quantitative precipitation estimation (QPE) is a critical mission of the National Oceanic and Atmospheric Administration (NOAA). QPE is extremely challenging in regions of complex terrain like the western U.S. because of the sparse coverage of ground-based radar, complex orographic precipitation processes, and the effects of beam blockages (e.g., Westrick et al. 1999). In addition, the rain gauge density in complex terrain is often inadequate to capture spatial variability in the precipitation patterns. The NOAA Hydrometeorology Testbed (HMT) conducts research on precipitation and weather conditions that can lead to flooding, and fosters transition of scientific advances and new tools into forecasting operations (see hmt.noaa.gov). The HMT program consists of a series of demonstration projects in different geographical regions to enhance understanding of region-specific processes related to precipitation, including QPE. There are a number of QPE systems that are widely used across NOAA for precipitation estimation (e.g., Cifelli et al. 2011; Chandrasekar et al. 2012). Two of these systems have been installed at the NOAA Earth System Research Laboratory: the Multisensor Precipitation Estimator (MPE) and the National Mosaic and Multi-sensor QPE (NMQ), developed by NWS and NSSL, respectively. Both provide gridded QPE products, including radar-only, gauge-only, and gauge-radar-merged estimates; however, these systems often produce large differences in QPE (in terms of amounts and spatial patterns) due to differences in Z-R selection, vertical profile of reflectivity correction, and gauge interpolation procedures. Determining the appropriate QPE product and quantifying QPE uncertainty is critical for operational applications, including water management decisions and flood warnings.
For example, hourly QPE is used to correct radar-based rain rates used by the Flash Flood Monitoring and Prediction (FFMP) package in the NWS forecast offices for issuance of flash flood warnings. This study will evaluate the performance of MPE and NMQ QPE products using independent gauges, object identification techniques for spatial verification, and impact on surface runoff using a distributed hydrologic model. The effort will consist of baseline evaluations of these QPE systems to determine which combination of algorithm features is appropriate, as well as investigate new methods for combining the gauge and radar data. The Russian River Basin in California is used to demonstrate the comparison methodology with data collected from several rainfall events in March 2012.
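One of the sources of inter-system QPE differences named above is Z-R selection. The sketch below shows how the choice of Z-R coefficients alone changes the rain rate inferred from the same reflectivity; the coefficient pairs are standard published relations (Marshall-Palmer a=200, b=1.6; the NEXRAD convective relation a=300, b=1.4), not values from MPE or NMQ specifically.

```python
# Z = a * R**b relates linear reflectivity Z (mm^6 m^-3) to rain
# rate R (mm/h); inverting gives R = (Z / a)**(1 / b), with Z
# obtained from the measured reflectivity in dBZ.

def rain_rate(dbz, a, b):
    """Convert reflectivity in dBZ to rain rate in mm/h via Z = a R^b."""
    z = 10.0 ** (dbz / 10.0)       # dBZ -> linear reflectivity
    return (z / a) ** (1.0 / b)    # mm/h

dbz = 40.0
r_mp = rain_rate(dbz, 200.0, 1.6)   # Marshall-Palmer relation
r_cv = rain_rate(dbz, 300.0, 1.4)   # convective relation
```

At 40 dBZ the two relations already differ by the better part of a mm/h, and the gap widens with reflectivity, which is one reason gridded QPE amounts diverge between systems.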
On the uncertainties associated with using gridded rainfall data as a proxy for observed
NASA Astrophysics Data System (ADS)
Tozer, C. R.; Kiem, A. S.; Verdon-Kidd, D. C.
2012-05-01
Gridded rainfall datasets are used in many hydrological and climatological studies, in Australia and elsewhere, including for hydroclimatic forecasting, climate attribution studies and climate model performance assessments. The attraction of the spatial coverage provided by gridded data is clear, particularly in Australia where the spatial and temporal resolution of the rainfall gauge network is sparse. However, the question that must be asked is whether it is suitable to use gridded data as a proxy for observed point data, given that gridded data are inherently "smoothed" and may not necessarily capture the temporal and spatial variability of Australian rainfall that leads to hydroclimatic extremes (i.e. droughts, floods). This study investigates this question through a statistical analysis of three monthly gridded Australian rainfall datasets - the Bureau of Meteorology (BOM) dataset, the Australian Water Availability Project (AWAP) and the SILO dataset. The results of the monthly, seasonal and annual comparisons show that not only are the three gridded datasets different from each other, but there are also marked differences between the gridded rainfall data and the rainfall observed at gauges within the corresponding grids - particularly for extremely wet or extremely dry conditions. Also important is that the differences observed appear to be non-systematic. To demonstrate the hydrological implications of using gridded data as a proxy for gauged data, a rainfall-runoff model is applied to one catchment in South Australia, initially using gauged data as the source of rainfall input and then gridded rainfall data. The results indicate a markedly different runoff response associated with each of the different sources of rainfall data. It should be noted that this study does not seek to identify which gridded dataset is the "best" for Australia, as each gridded data source has its pros and cons, as does gauged data.
Rather, the intention is to quantify differences between various gridded data sources and how they compare with gauged data so that these differences can be considered and accounted for in studies that utilise these gridded datasets. Ultimately, if key decisions are going to be based on the outputs of models that use gridded data, an estimate (or at least an understanding) of the uncertainties relating to the assumptions made in the development of gridded data and how that gridded data compares with reality should be made.
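The gauge-versus-grid comparison described above reduces, per site, to difference statistics between a gauge record and the co-located grid-cell series. A minimal sketch with illustrative values (not BOM/AWAP/SILO data):

```python
# Bias (mean grid-minus-gauge difference) and RMSE of a gridded
# monthly rainfall series against the gauge inside that grid cell.

def bias_rmse(gauge, grid):
    n = len(gauge)
    diffs = [g2 - g1 for g1, g2 in zip(gauge, grid)]
    bias = sum(diffs) / n
    rmse = (sum(d * d for d in diffs) / n) ** 0.5
    return bias, rmse

gauge = [120.0, 30.0, 0.0, 85.0]   # mm/month observed at the gauge
grid = [100.0, 40.0, 5.0, 80.0]    # mm/month in the enclosing grid cell
bias, rmse = bias_rmse(gauge, grid)
```

A near-zero bias with a large RMSE is exactly the "non-systematic" difference pattern the study reports: errors that do not cancel predictably and therefore cannot be removed with a simple offset.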
Overview of Global Monitoring of Terrestrial Chlorophyll Fluorescence from Space
NASA Technical Reports Server (NTRS)
Guanter, Luis; Zhang, Yongguang; Kohler, Philipp; Walther, Sophia; Frankenberg, Christian; Joiner, Joanna
2016-01-01
Despite the critical importance of photosynthesis for the Earth system, understanding how it is influenced by factors such as climate variability, disturbance history, and water or nutrient availability remains a challenge because of the complex interactions and the lack of GPP measurements at various temporal and spatial scales. Space observations of the sun-induced chlorophyll fluorescence (SIF) electromagnetic signal emitted by plants in the 650-850nm spectral range hold the promise of providing a new view of vegetation photosynthesis on a global basis. Global retrievals of SIF from space have recently been achieved from a number of spaceborne spectrometers originally intended for atmospheric research. Despite not having been designed for land applications, such instruments have turned out to provide the necessary spectral and radiometric sensitivity for SIF retrieval from space. The first global measurements of SIF were achieved in 2011 from spectra acquired by the Japanese GOSAT mission launched in 2009. The retrieval takes advantage of the high spectral resolution provided by GOSATs Fourier Transform Spectrometer (FTS) which allows the evaluation of the in-filling of solar Fraunhofer lines by SIF. Unfortunately, GOSAT only provides a sparse spatial sampling with individual soundings separated by several hundred kilometers. Complementary, the Global Ozone Monitoring Experiment-2 (GOME-2) instruments onboard MetOp-A and MetOp-B enable SIF retrievals since 2007 with a continuous and global spatial coverage. GOME-2 measures in the red and near-infrared (NIR) spectral regions with a spectral resolution of 0.5 nm and a pixel size of up to 40x40 km2. Most recently, another global and spatially continuous data set of SIF retrievals at 740 nm spanning the 2003-2012 time frame has been produced from ENVISATSCIAMACHY. 
This observational scenario has been completed by the first fluorescence data from the NASA-JPL OCO-2 mission (launched in July 2014) and the upcoming Copernicus' Sentinel 5-Precursor to be launched in early 2016. OCO-2 and TROPOMI offer the possibility of monitoring SIF globally with a 100-fold improvement in spatial and temporal resolution with respect to the current measurements from the GOSAT, GOME-2 and SCIAMACHY missions. In this contribution, we will provide an overview of existing global SIF data sets derived from space-based atmospheric spectrometers and will demonstrate the potential of such data to improve our knowledge of vegetation photosynthesis and gross primary production at the synoptic scale. We will show examples of ongoing research exploiting SIF data for an improved monitoring of photosynthetic activity in different ecosystems, including large crop belts worldwide, the Amazon rainforest and boreal evergreen forests.
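The Fraunhofer line in-filling principle behind these retrievals can be illustrated with a deliberately simplified two-band model: fluorescence adds a spectrally smooth offset F that partially fills a deep solar line, so measuring one in-line and one continuum band gives a 2x2 linear system in the reflected fraction r and F. Real retrievals fit many spectral points; all numbers below are synthetic.

```python
# Toy in-filling retrieval: the observed radiance in each band is
# L = r * I + F, where I is the known solar irradiance (deep in the
# line core, high in the continuum), r the reflected fraction, and
# F the additive fluorescence. Two bands -> solve for (r, F).

def retrieve_sif(l_in, l_cont, i_in, i_cont):
    """Solve l_in = r*i_in + F and l_cont = r*i_cont + F for (r, F)."""
    r = (l_cont - l_in) / (i_cont - i_in)
    f = l_cont - r * i_cont
    return r, f

# Synthetic truth: reflected fraction r = 0.3, fluorescence F = 1.5
i_cont, i_in = 100.0, 20.0        # solar signal: continuum vs line core
l_cont = 0.3 * i_cont + 1.5       # simulated continuum radiance
l_in = 0.3 * i_in + 1.5           # simulated in-line radiance
r, f = retrieve_sif(l_in, l_cont, i_in, i_cont)
```

Because the solar line is nearly black while F is constant across it, the relative contribution of F is largest in the line core, which is what makes the in-filling measurable.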
NASA Astrophysics Data System (ADS)
Shrestha, Rudra K.; Arora, Vivek K.; Melton, Joe R.; Sushama, Laxmi
2017-10-01
The performance of the competition module of the CLASS-CTEM (Canadian Land Surface Scheme and Canadian Terrestrial Ecosystem Model) modelling framework is assessed at 1° spatial resolution over North America by comparing the simulated geographical distribution of its plant functional types (PFTs) with two observation-based estimates. The model successfully reproduces the broad geographical distribution of trees, grasses and bare ground although limitations remain. In particular, compared to the two observation-based estimates, the simulated fractional vegetation coverage is lower in the arid southwest North American region and higher in the Arctic region. The lower-than-observed simulated vegetation coverage in the southwest region is attributed to lack of representation of shrubs in the model and plausible errors in the observation-based data sets. The observation-based data indicate vegetation fractional coverage of more than 60 % in this arid region, despite only 200-300 mm of precipitation that the region receives annually, and observation-based leaf area index (LAI) values in the region are lower than one. The higher-than-observed vegetation fractional coverage in the Arctic is likely due to the lack of representation of moss and lichen PFTs and also likely because of inadequate representation of permafrost in the model as a result of which the C3 grass PFT performs overly well in the region. The model generally reproduces the broad spatial distribution and the total area covered by the two primary tree PFTs (needleleaf evergreen trees, NDL-EVG; and broadleaf cold deciduous trees, BDL-DCD-CLD) reasonably well. The simulated fractional coverage of tree PFTs increases after the 1960s in response to the CO2 fertilization effect and climate warming. 
Differences between observed and simulated PFT coverages highlight model limitations and suggest that the inclusion of shrubs, and moss and lichen PFTs, and an adequate representation of permafrost will help improve model performance.
Yang, Hao; Luo, Peng; Wang, Jun; Mou, Chengxiang; Mo, Li; Wang, Zhiyuan; Fu, Yao; Lin, Honghui; Yang, Yongping; Bhatta, Laxmi Dutt
2015-01-01
Climate and human-driven changes play an important role in regional droughts. Northwest Yunnan Province is a key region for biodiversity conservation in China, and it has experienced severe droughts since the beginning of this century; however, the extent of the contributions from climate and human-driven changes remains unclear. We calculated the ecosystem evapotranspiration (ET) and water yield (WY) of northwest Yunnan Province, China from 2001 to 2013 using meteorological and remote sensing observation data and a Surface Energy Balance System (SEBS) model. Multivariate regression analyses were used to differentiate the contributions of climate and vegetation coverage to ET. The results showed that the annual average vegetation coverage significantly increased over time with a mean of 0.69, in spite of fluctuations in precipitation. Afforestation/reforestation and other management efforts contributed to the increase in vegetation coverage in NW Yunnan. Both ET and WY considerably fluctuated with the climate factors, ranging from 623.29 mm to 893.8 mm and –51.88 mm to 384.40 mm, respectively, over the time period. Spatially, ET in the southeast of NW Yunnan (mainly in Lijiang) increased significantly, which was in line with the spatial trend of vegetation coverage. Multivariate linear regression analysis indicated that climatic factors accounted for 85.18% of the ET variation, while vegetation coverage explained 14.82%. On the other hand, precipitation accounted for 67.5% of the WY. We conclude that the continuous droughts in northwest Yunnan were primarily climatically driven; however, man-made land cover and vegetation changes also increased the vulnerability of local populations to drought. Because of the high proportion of the water yield consumed for subsistence and poor infrastructure for water management, local populations have been highly vulnerable to climate drought conditions. 
We suggest that conservation of native vegetation and development of water-conserving agricultural practices should be implemented as adaptive strategies to mitigate climate change. PMID:26237220
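The variance-partitioning idea used above (climate explaining 85.18% of ET variation, vegetation coverage 14.82%) can be sketched with a two-predictor regression: compute the climate-only R² and the increment in R² gained by adding vegetation coverage. The data below are synthetic, not the SEBS output, and the split uses the standard two-predictor multiple-R² formula from pairwise correlations.

```python
# Partition explained variance of ET between a climate predictor
# (x1) and vegetation coverage (x2) via incremental R^2.

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def partition_r2(y, x1, x2):
    """Return (R^2 of x1 alone, increment in R^2 from adding x2)."""
    r1, r2, r12 = corr(x1, y), corr(x2, y), corr(x1, x2)
    total = (r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2)
    return r1**2, total - r1**2

climate = [1.0, 2.0, 3.0, 4.0, 5.0]          # synthetic climate index
veg = [0.2, 0.1, 0.4, 0.3, 0.5]              # synthetic vegetation coverage
et = [c * 2.0 + v * 1.0 for c, v in zip(climate, veg)]  # noiseless toy ET
climate_r2, veg_increment = partition_r2(et, climate, veg)
```

In this noiseless toy the two shares sum to 1; with real, noisy data they sum to the overall model R², and ordering effects (climate entered first) matter when the predictors are correlated.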
Montoro, Pedro R; Luna, Dolores
2009-10-01
Previous studies on the processing of hierarchical patterns (Luna & Montoro, 2008) have shown that altering the spatial relationships between the local elements affected processing dominance by decreasing the global advantage. In the present article, the authors examine whether heterogeneity or a sparse distribution of the local elements was the factor responsible for this effect. In Experiments 1 and 2, the distance between the local elements was increased in a similar way, but between-element distance was homogeneous in Experiment 1 and heterogeneous in Experiment 2. In Experiment 3, the size of the local elements was varied by presenting global patterns composed either of uniformly large or uniformly small local elements, or of mixed large and small elements. The results of the present research showed that, rather than element sparsity, spatial heterogeneity that could change the appearance of the global form as well as the salience of the local elements was the main determinant of the impairment of global processing.
Compressed single pixel imaging in the spatial frequency domain
Torabzadeh, Mohammad; Park, Il-Yong; Bartels, Randy A.; Durkin, Anthony J.; Tromberg, Bruce J.
2017-01-01
We have developed compressed sensing single pixel spatial frequency domain imaging (cs-SFDI) to characterize tissue optical properties over a wide field of view (35 mm×35 mm) using multiple near-infrared (NIR) wavelengths simultaneously. Our approach takes advantage of the relatively sparse spatial content required for mapping tissue optical properties at length scales comparable to the transport scattering length in tissue (ltr∼1 mm) and the high bandwidth available for spectral encoding using a single-element detector. cs-SFDI recovered absorption (μa) and reduced scattering (μs′) coefficients of a tissue phantom at three NIR wavelengths (660, 850, and 940 nm) within 7.6% and 4.3% of absolute values determined using camera-based SFDI, respectively. These results suggest that cs-SFDI can be developed as a multi- and hyperspectral imaging modality for quantitative, dynamic imaging of tissue optical and physiological properties. PMID:28300272
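The single-pixel detection principle underlying cs-SFDI can be illustrated with a deliberately simplified, fully sampled case: the detector records one inner product per projected pattern, and with a complete Hadamard pattern set the scene is recovered exactly by the inverse transform. The paper's compressed variant measures fewer patterns than pixels and solves a sparse recovery problem instead; everything below is a toy, not the cs-SFDI pipeline.

```python
# Single-pixel imaging with a complete Sylvester-Hadamard pattern set:
# y_i = <pattern_i, scene> is one scalar per pattern, and
# scene = H^T y / N because H H^T = N I for a Hadamard matrix.

def sylvester_hadamard(n):
    """Build an n x n Sylvester Hadamard matrix (n a power of 2)."""
    h = [[1]]
    while len(h) < n:
        h = ([row + row for row in h] +
             [row + [-v for v in row] for row in h])
    return h

scene = [0.0, 3.0, 0.0, 1.0]          # toy 4-pixel "image" (sparse)
patterns = sylvester_hadamard(4)
# One scalar measurement per projected pattern
y = [sum(p * s for p, s in zip(row, scene)) for row in patterns]
# Full pattern set -> exact reconstruction
recon = [sum(patterns[i][j] * y[i] for i in range(4)) / 4
         for j in range(4)]
```

Compression enters when only a subset of the rows of H is measured and the (spatially sparse) scene is recovered by l1-style optimization, which is what lets cs-SFDI trade patterns for acquisition speed.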
NASA Astrophysics Data System (ADS)
Ahn, S.; Sheng, Z.; Abudu, S.
2017-12-01
Hydrologic cycle of agricultural area has been changing due to the impacts of climate and land use changes (crop coverage changes) in an arid region of Rincon Valley, New Mexico. This study is to evaluate the impacts of weather condition and crop coverage change on hydrologic behavior of agricultural area in Rincon Valley (2,466km2) for agricultural watershed management using a watershed-scale hydrologic model, SWAT (Soil and Water Assessment Tool). The SWAT model was developed to incorporate irrigation of different crops using auto irrigation function. For the weather condition and crop coverage change evaluation, three spatial crop coverages including a normal (2008), wet (2009), and dry (2011) years were prepared using USDA crop data layer (CDL) for fourteen different crops. The SWAT model was calibrated for the period of 2001-2003 and validated for the period of 2004-2006 using daily-observed streamflow data. Scenario analysis was performed for wet and dry years based on the unique combinations of crop coverages and releases from Caballo Reservoir. The SWAT model simulated the present vertical water budget and horizontal water transfer considering irrigation practices in the Rincon Valley. Simulation results indicated the temporal and spatial variability for irrigation and non-irrigation seasons of hydrologic cycle in agricultural area in terms of surface runoff, evapotranspiration, infiltration, percolation, baseflow, soil moisture, and groundwater recharge. The water supply of the dry year could not fully cover whole irrigation period due to dry weather conditions, resulting in reduction of crop acreage. For extreme weather conditions, the temporal variation of water budget became robust, which requires careful irrigation management of the agricultural area. The results could provide guidelines for farmers to decide crop patterns in response to different weather conditions and water availability.
Tip-Enhanced Raman Voltammetry: Coverage Dependence and Quantitative Modeling.
Mattei, Michael; Kang, Gyeongwon; Goubert, Guillaume; Chulhai, Dhabih V; Schatz, George C; Jensen, Lasse; Van Duyne, Richard P
2017-01-11
Electrochemical atomic force microscopy tip-enhanced Raman spectroscopy (EC-AFM-TERS) was employed for the first time to observe nanoscale spatial variations in the formal potential, E0′, of a surface-bound redox couple. TERS cyclic voltammograms (TERS CVs) of single Nile Blue (NB) molecules were acquired at different locations spaced 5-10 nm apart on an indium tin oxide (ITO) electrode. Analysis of TERS CVs at different coverages was used to verify the observation of single-molecule electrochemistry. The resulting TERS CVs were fit to the Laviron model for surface-bound electroactive species to quantitatively extract the formal potential E0′ at each spatial location. Histograms of single-molecule E0′ at each coverage indicate that the electrochemical behavior of the cationic oxidized species is less sensitive to local environment than the neutral reduced species. This information is not accessible using purely electrochemical methods or ensemble spectroelectrochemical measurements. We anticipate that quantitative modeling and measurement of site-specific electrochemistry with EC-AFM-TERS will have a profound impact on our understanding of the role of nanoscale electrode heterogeneity in applications such as electrocatalysis, biological electron transfer, and energy production and storage.
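The surface-voltammogram shape behind the Laviron-model fitting described above can be sketched for the reversible limit: the CV peak for a surface-bound couple follows i(E) ∝ x/(1+x)² with x = exp(nF(E−E0)/(RT)), so the peak potential coincides with the formal potential E0′. The sketch below locates E0′ as the peak of a synthetic curve; the value of E0′ is hypothetical, and real fits (as in the paper) estimate it by least squares rather than a grid maximum.

```python
import math

# Ideal reversible surface-confined voltammetric peak shape:
# i(E) is proportional to x / (1 + x)^2, x = exp(n*F*(E - E0)/(R*T)).

F_RT = 96485.0 / (8.314 * 298.15)   # F/RT at 25 degC, in 1/V

def surface_peak(e, e0, n=1.0):
    x = math.exp(n * F_RT * (e - e0))
    return x / (1.0 + x) ** 2        # peaks at 0.25 when E = E0

e0_true = -0.45                      # hypothetical formal potential, V
grid = [-0.6 + 0.001 * k for k in range(301)]   # -0.6 .. -0.3 V sweep
currents = [surface_peak(e, e0_true) for e in grid]
e0_est = grid[currents.index(max(currents))]    # peak position -> E0'
```

Fitting this shape per TERS CV at each tip location is what turns the spectroscopic signal into the site-resolved E0′ histograms the abstract describes.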
Risk assessment of groundwater level variability using variable Kriging methods
NASA Astrophysics Data System (ADS)
Spanoudaki, Katerina; Kampanis, Nikolaos A.
2015-04-01
Assessment of the water table level spatial variability in aquifers provides useful information regarding optimal groundwater management. This information becomes more important in basins where the water table level has fallen significantly. The spatial variability of the water table level in this work is estimated based on hydraulic head measured during the wet period of the hydrological year 2007-2008, in a sparsely monitored basin in Crete, Greece, which is of high socioeconomic and agricultural interest. Three Kriging-based methodologies are elaborated in Matlab environment to estimate the spatial variability of the water table level in the basin. The first methodology is based on the Ordinary Kriging approach, the second involves auxiliary information from a Digital Elevation Model in terms of Residual Kriging and the third methodology calculates the probability of the groundwater level to fall below a predefined minimum value that could cause significant problems in groundwater resources availability, by means of Indicator Kriging. The Box-Cox methodology is applied to normalize both the data and the residuals for improved prediction results. In addition, various classical variogram models are applied to determine the spatial dependence of the measurements. The Matérn model proves to be the optimal, which in combination with Kriging methodologies provides the most accurate cross validation estimations. Groundwater level and probability maps are constructed to examine the spatial variability of the groundwater level in the basin and the associated risk that certain locations exhibit regarding a predefined minimum value that has been set for the sustainability of the basin's groundwater resources. 
Acknowledgement: The work presented in this paper has been funded by the Greek State Scholarships Foundation (IKY), Fellowships of Excellence for Postdoctoral Studies (Siemens Program), 'A simulation-optimization model for assessing the best practices for the protection of surface water and groundwater in the coastal zone' (2013-2015). References: Varouchakis, E. A. and D. T. Hristopulos (2013). "Improvement of groundwater level prediction in sparsely gauged basins using physical laws and local geographic features as auxiliary variables." Advances in Water Resources 52: 34-49. Kitanidis, P. K. (1997). Introduction to Geostatistics. Cambridge: Cambridge University Press.
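The Box-Cox normalization step mentioned in this abstract (applied to both the data and the residuals before Kriging) has a simple closed form. A minimal sketch with synthetic water-table values; the lambda parameter is illustrative, whereas in practice it is estimated, e.g. by maximum likelihood:

```python
import math

# Box-Cox transform and its inverse; lambda = 0 reduces to the log.

def boxcox(x, lam):
    if lam == 0.0:
        return math.log(x)
    return (x ** lam - 1.0) / lam

def inv_boxcox(y, lam):
    if lam == 0.0:
        return math.exp(y)
    return (lam * y + 1.0) ** (1.0 / lam)

heads = [12.3, 8.7, 15.1, 9.9]        # synthetic water-table levels, m
transformed = [boxcox(h, 0.5) for h in heads]
roundtrip = [inv_boxcox(t, 0.5) for t in transformed]
```

Kriging is then performed in the transformed (more nearly Gaussian) space, and predictions are mapped back through the inverse transform.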
Schönbrodt-Stitt, Sarah; Bosch, Anna; Behrens, Thorsten; Hartmann, Heike; Shi, Xuezheng; Scholten, Thomas
2013-10-01
In densely populated countries like China, clean water is one of the most challenging issues of prospective politics and environmental planning. Water pollution and eutrophication by excessive input of nitrogen and phosphorous from nonpoint sources is mostly linked to soil erosion from agricultural land. In order to prevent such water pollution by diffuse matter fluxes, knowledge about the extent of soil loss and the spatial distribution of hot spots of soil erosion is essential. In remote areas such as the mountainous regions of the upper and middle reaches of the Yangtze River, rainfall data are scarce. Since rainfall erosivity is one of the key factors in soil erosion modeling, e.g., expressed as the R factor in the Revised Universal Soil Loss Equation model, a methodology is needed to spatially determine rainfall erosivity. Our study aims at the approximation and spatial regionalization of rainfall erosivity from sparse data in the large (3,200 km²) and strongly mountainous catchment of the Xiangxi River, a first-order tributary to the Yangtze River close to the Three Gorges Dam. As data on rainfall were only obtainable as daily records for one climate station in the central part of the catchment and five stations in its surrounding area, we approximated rainfall erosivity as R factors using regression analysis combined with elevation bands derived from a digital elevation model. The mean annual R factor (Ra) amounts to approximately 5,222 MJ mm ha⁻¹ h⁻¹ a⁻¹. With increasing altitude, Ra rises up to a maximum of 7,547 MJ mm ha⁻¹ h⁻¹ a⁻¹ at an altitude of 3,078 m a.s.l. At the outlet of the Xiangxi catchment, erosivity is at its minimum, with approximately Ra = 1,986 MJ mm ha⁻¹ h⁻¹ a⁻¹. 
The comparison of our results with R factors from high-resolution measurements at comparable study sites close to the Xiangxi catchment shows good consistency and allows us to calculate grid-based Ra as input for a spatially high-resolution and area-specific assessment of soil erosion risk.
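The regionalization idea described above can be sketched as a simple linear regression of annual erosivity on station elevation, then used to predict erosivity for unsampled elevation bands. The station values below are synthetic, not the Xiangxi data.

```python
# Fit R_a = a + b * elevation by ordinary least squares, then
# predict R_a for an elevation band with no station.

def linfit(x, y):
    """Return (intercept, slope) of the least-squares line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (c - my) for a, c in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    b = sxy / sxx
    return my - b * mx, b

elev = [200.0, 800.0, 1500.0, 2400.0]     # station elevations, m
r_a = [2000.0, 3500.0, 5000.0, 7000.0]    # synthetic R_a, MJ mm ha-1 h-1 a-1
a, b = linfit(elev, r_a)
r_pred_3000 = a + b * 3000.0              # extrapolated band value
```

Applying the fitted relation per elevation band of a DEM yields the grid-based Ra surface the study uses as erosion-model input; extrapolation beyond the highest station should of course be treated with caution.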
Best, Virginia; Mason, Christine R.; Swaminathan, Jayaganesh; Roverud, Elin; Kidd, Gerald
2017-01-01
In many situations, listeners with sensorineural hearing loss demonstrate reduced spatial release from masking compared to listeners with normal hearing. This deficit is particularly evident in the “symmetric masker” paradigm in which competing talkers are located to either side of a central target talker. However, there is some evidence that reduced target audibility (rather than a spatial deficit per se) under conditions of spatial separation may contribute to the observed deficit. In this study a simple “glimpsing” model (applied separately to each ear) was used to isolate the target information that is potentially available in binaural speech mixtures. Intelligibility of these glimpsed stimuli was then measured directly. Differences between normally hearing and hearing-impaired listeners observed in the natural binaural condition persisted for the glimpsed condition, despite the fact that the task no longer required segregation or spatial processing. This result is consistent with the idea that the performance of listeners with hearing loss in the spatialized mixture was limited by their ability to identify the target speech based on sparse glimpses, possibly as a result of some of those glimpses being inaudible. PMID:28147587
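The "glimpsing" selection described in this abstract can be sketched as a time-frequency mask: the target is kept only in cells where it exceeds the maskers by a local SNR criterion, and all other cells are discarded. The 0 dB criterion and the toy magnitudes below are assumptions for illustration, not the parameters of the study's model.

```python
import math

# Ideal-binary-mask style glimpsing: keep a time-frequency cell iff
# the local target-to-masker ratio exceeds criterion_db.

def glimpse_mask(target, masker, criterion_db=0.0):
    mask = []
    for t_row, m_row in zip(target, masker):
        mask.append([10 * math.log10(t / m) > criterion_db
                     for t, m in zip(t_row, m_row)])
    return mask

target = [[4.0, 1.0], [9.0, 2.0]]   # toy T-F magnitudes (target talker)
masker = [[1.0, 2.0], [3.0, 8.0]]   # toy T-F magnitudes (maskers)
mask = glimpse_mask(target, masker)
# Glimpsed stimulus: target where glimpsed, silence elsewhere
glimpsed = [[t if m else 0.0 for t, m in zip(tr, mr)]
            for tr, mr in zip(target, mask)]
```

Presenting such glimpsed stimuli directly, as the study does, removes the segregation and spatial-processing demands and isolates the listener's ability to identify speech from sparse glimpses alone.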
Monitoring air quality in mountains: Designing an effective network
Peterson, D.L.
2000-01-01
A quantitatively robust yet parsimonious air-quality monitoring network in mountainous regions requires special attention to relevant spatial and temporal scales of measurement and inference. The design of monitoring networks should focus on the objectives required by public agencies, namely: 1) determine if some threshold has been exceeded (e.g., for regulatory purposes), and 2) identify spatial patterns and temporal trends (e.g., to protect natural resources). A short-term, multi-scale assessment to quantify spatial variability in air quality is a valuable asset in designing a network, in conjunction with an evaluation of existing data and simulation-model output. A recent assessment in Washington state (USA) quantified spatial variability in tropospheric ozone distribution ranging from a single watershed to the western third of the state. Spatial and temporal coherence in ozone exposure modified by predictable elevational relationships (1.3 ppbv ozone per 100 m elevation gain) extends from urban areas to the crest of the Cascade Range. This suggests that a sparse network of permanent analyzers is sufficient at all spatial scales, with the option of periodic intensive measurements to validate network design. It is imperative that agencies cooperate in the design of monitoring networks in mountainous regions to optimize data collection and financial efficiencies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carpenter, J.A.
This report is a sequel to ORNL/CSD-96 in the ongoing supplements to Professor A.S. Householder's KWIC Index for Numerical Algebra. With this supplement, the coverage has been restricted to Numerical Linear Algebra and is now roughly characterized by the American Mathematical Society's classification sections 15 and 65F, but with little coverage of infinite matrices, matrices over fields of characteristic other than zero, operator theory, optimization, and those parts of matrix theory primarily combinatorial in nature. Some recognition is made of the uses of graph theory in Numerical Linear Algebra, particularly as regards their use in algorithms for sparse matrix computations. The period covered by this report is roughly the calendar year 1981, as measured by the appearance of the articles in the American Mathematical Society's Contents of Mathematical Publications. The review citations are limited to the Mathematical Reviews (MR) and Das Zentralblatt für Mathematik und ihre Grenzgebiete (ZBL). Future reports will be made more timely by closer observation of the few journals which supply the bulk of the listings rather than what appears to be too much reliance on secondary sources. Some thought is being given to the physical appearance of these reports, and the author welcomes comments concerning both their appearance and contents.
Global assimilation of X Project Loon stratospheric balloon observations
NASA Astrophysics Data System (ADS)
Coy, L.; Schoeberl, M. R.; Pawson, S.; Candido, S.; Carver, R. W.
2017-12-01
Project Loon has an overall goal of providing worldwide internet coverage using a network of long-duration super-pressure balloons. Beginning in 2013, Loon has launched over 1600 balloons from multiple tropical and middle latitude locations. These GPS-tracked balloon trajectories provide lower stratospheric wind information over the oceans and remote land areas where traditional radiosonde soundings are sparse, thus providing unique coverage of lower stratospheric winds. To fully investigate these Loon winds we: 1) compare the Loon winds to winds produced by a global data assimilation system (DAS: NASA GEOS) and 2) assimilate the Loon winds into the same comprehensive DAS. Results show that in middle latitudes the Loon winds and DAS winds agree well, and assimilating the Loon winds has only a small impact on short-term forecasting of the Loon winds; however, in the tropics the Loon winds and DAS winds often disagree substantially (8 m/s or more in magnitude), and in these cases assimilating the Loon winds significantly improves the forecast of the Loon winds. By highlighting cases where the Loon and DAS winds differ, these results can lead to improved understanding of stratospheric winds, especially in the tropics.
Global Assimilation of X Project Loon Stratospheric Balloon Observations
NASA Technical Reports Server (NTRS)
Coy, Lawrence; Schoeberl, Mark R.; Pawson, Steven; Candido, Salvatore; Carver, Robert W.
2017-01-01
Project Loon has an overall goal of providing worldwide internet coverage using a network of long-duration super-pressure balloons. Beginning in 2013, Loon has launched over 1600 balloons from multiple tropical and middle latitude locations. These GPS-tracked balloon trajectories provide lower stratospheric wind information over the oceans and remote land areas where traditional radiosonde soundings are sparse, thus providing unique coverage of lower stratospheric winds. To fully investigate these Loon winds we: 1) compare the Loon winds to winds produced by a global data assimilation system (DAS: NASA GEOS) and 2) assimilate the Loon winds into the same comprehensive DAS. Results show that in middle latitudes the Loon winds and DAS winds agree well, and assimilating the Loon winds has only a small impact on short-term forecasting of the Loon winds; however, in the tropics the Loon winds and DAS winds often disagree substantially (8 m/s or more in magnitude), and in these cases assimilating the Loon winds significantly improves the forecast of the Loon winds. By highlighting cases where the Loon and DAS winds differ, these results can lead to improved understanding of stratospheric winds, especially in the tropics.
Spatial Compressive Sensing for Strain Data Reconstruction from Sparse Sensors
2014-10-01
optical fiber Bragg grating (FBG) sensors embedded in the plate. For the sake of simplicity, we assume that the FBGs are embedded in the radial direction, as shown by the yellow lines in Fig. 10. The yellow lines are the direction along which strain is being measured. We considered FBGs here; however, strain gages emplaced along these lines can also be envisioned. FBGs are strain-measuring sensors that use the principle of low coherence
Particle Size Distributions in Atmospheric Clouds
NASA Technical Reports Server (NTRS)
Paoli, Roberto; Shariff, Karim
2003-01-01
In this note, we derive a transport equation for a spatially integrated distribution function of particle size that is suitable for sparse particle systems, such as in atmospheric clouds. This is done by integrating a Boltzmann equation for a (local) distribution function over an arbitrary but finite volume. A methodology for evolving the moments of the integrated distribution is presented. These moments can be either tracked for a finite number of discrete populations ('clusters') or treated as continuum variables.
Household wireless electroencephalogram hat
NASA Astrophysics Data System (ADS)
Szu, Harold; Hsu, Charles; Moon, Gyu; Yamakawa, Takeshi; Tran, Binh
2012-06-01
We applied Compressive Sensing to design an affordable, convenient Brain Machine Interface (BMI) measuring high-spatial-density, real-time Electroencephalogram (EEG) brainwaves with a Smartphone. It is useful for therapeutic and mental health monitoring, learning disability biofeedback, handicap interfaces, and war gaming. Its specification is adequate for a biomedical laboratory, without the cables hanging over the head and tethered to a fixed computer terminal. We improved the intrinsic signal-to-noise ratio (SNR) by using non-uniform placement of the measuring electrodes to create a proximity-of-measurement-to-source effect. We computed a spatiotemporal average of the larger-magnitude EEG data centers within 0.3 seconds on tethered laboratory data, using fuzzy logic, and computed the underlying brainwave sources by Independent Component Analysis (ICA). Consequently, we can overlay them together by non-uniform electrode distribution, enhancing the signal-to-noise ratio and thereby the degree of sparseness by thresholding. We overcame the conflicting requirements between a high spatial electrode density, precise temporal resolution (beyond the Event Related Potential (ERP) P300 brainwave at 0.3 sec), and the Smartphone wireless bottleneck of spatiotemporal throughput rate. Our main contribution in this paper is the quality and speed of an iterative compressed image recovery algorithm based on a Block Sparse Code (Baraniuk et al., IEEE/IT 2008). As a result, we achieved real-time wireless dynamic measurement of EEG brainwaves, matching well with traditionally tethered high-density EEG.
MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods.
Schmidt, Johannes F M; Santelli, Claudio; Kozerke, Sebastian
2016-01-01
An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveals improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods.
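The interleaved structure described — an artifact-removal step alternating with data-consistency updates on the acquired k-space samples — can be sketched in a few lines. Note this is not the paper's method: the kernel-PCA block transform is replaced here by plain image-domain soft-thresholding, which only suits images that are themselves sparse.

```python
import numpy as np

def soft_threshold(x, t):
    # Complex soft-thresholding: shrink magnitudes by t, keep phases.
    mag = np.abs(x)
    return np.where(mag > t, (1 - t / np.maximum(mag, 1e-12)) * x, 0)

def cs_recon(kspace, mask, n_iter=100, thresh=0.05):
    # Interleave a sparsity prox with k-space data-consistency steps,
    # mirroring the paper's outer loop; the nonlinear kernel-PCA block
    # transform is replaced by image-domain soft-thresholding.
    img = np.fft.ifft2(np.where(mask, kspace, 0))
    for _ in range(n_iter):
        img = soft_threshold(img, thresh)   # artifact-removal step
        k = np.fft.fft2(img)
        k[mask] = kspace[mask]              # enforce acquired samples
        img = np.fft.ifft2(k)
    return img
```

Swapping the `soft_threshold` call for a block-matching/kernel-PCA denoiser recovers the plug-in structure the abstract describes.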
Morikawa, Naoki; Tanaka, Toshihisa; Islam, Md Rabiul
2018-07-01
Mixed frequency and phase coding (FPC) can significantly increase the number of commands in a steady-state visual evoked potential-based brain-computer interface (SSVEP-BCI). However, inconsistent phases of the SSVEP over channels in a trial and the existence of non-contributing channels due to noise effects can decrease accurate detection of the stimulus frequency. We propose a novel command detection method based on a complex sparse spatial filter (CSSF), obtained by solving ℓ1- and ℓ2,1-regularization problems, for a mixed-coded SSVEP-BCI. In particular, ℓ2,1-regularization (aka group sparsification) can lead to the rejection of electrodes that are not contributing to the SSVEP detection. A calibration-data-based canonical correlation analysis (CCA) and CSSF with ℓ1- and ℓ2,1-regularization were demonstrated for 16-target stimuli with eleven subjects. The results of a statistical test suggest that the proposed method with ℓ1- and ℓ2,1-regularization achieved the significantly highest ITR. The proposed approaches do not need any reference signals, automatically select prominent channels, and reduce the computational cost compared to other mixed frequency-phase coding (FPC)-based BCIs. The experimental results suggest that the proposed method can implement a BCI effectively with reduced visual fatigue. Copyright © 2018 Elsevier B.V. All rights reserved.
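The channel-rejection effect of ℓ2,1-regularization comes from its proximal operator, which shrinks whole rows (one row per electrode) at once. A minimal real-valued sketch, independent of the paper's CSSF formulation:

```python
import numpy as np

def prox_l21(W, t):
    # Proximal operator of the l2,1 norm: each row of W is shrunk by
    # its l2 norm; rows whose norm falls below the threshold t are
    # zeroed, i.e. that channel is rejected outright.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1 - t / np.maximum(norms, 1e-12), 0)
    return W * scale
```

A strong row survives with its direction intact but reduced magnitude, while a weak (noise-only) row is set exactly to zero — the group-sparsification behaviour the abstract attributes to ℓ2,1-regularization.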
Wisconsin's approach to variation in traffic data
DOT National Transportation Integrated Search
2000-08-01
Traffic data exhibits considerable variability, both spatially and temporally. Given limited resources and the large geographic coverage required for data collection efforts, short period (24-hours to 7-day) traffic data collection must often serve t...
SAGE III L2 Monthly Cloud Presence Data (HDF-EOS)
Atmospheric Science Data Center
2016-06-14
... degrees South. Spatial Resolution: 1 km vertical. Temporal Coverage: 02/27/2002 - 12/31/2005. Parameters: Cloud Amount/Frequency, Cloud Height, Cloud Vertical Distribution.
Kundrick, Avery; Huang, Zhuojie; Carran, Spencer; Kagoli, Matthew; Grais, Rebecca Freeman; Hurtado, Northan; Ferrari, Matthew
2018-06-15
Despite progress towards increasing global vaccination coverage, measles continues to be one of the leading, preventable causes of death among children worldwide. Whether and how to target sub-national areas for vaccination campaigns continues to remain a question. We analyzed three metrics for prioritizing target areas: vaccination coverage, susceptible birth cohort, and the effective reproductive ratio (R_E) in the context of the 2010 measles epidemic in Malawi. Using case-based surveillance data from the 2010 measles outbreak in Malawi, we estimated vaccination coverage from the proportion of cases reporting with a history of prior vaccination at the district and health facility catchment scale. Health facility catchments were defined as the set of locations closer to a given health facility than to any other. We combined these estimates with regional birth rates to estimate the size of the annual susceptible birth cohort. We also estimated the effective reproductive ratio, R_E, at the health facility polygon scale based on the observed rate of exponential increase of the epidemic. We combined these estimates to identify spatial regions that would be of high priority for supplemental vaccination activities. The estimated vaccination coverage across all districts was 84%, but ranged from 61 to 99%. We found that 8 districts and 354 health facility catchments had estimated vaccination coverage below 80%. Areas that had the highest birth cohort size were frequently large urban centers that had high vaccination coverage. The estimated R_E ranged between 1 and 2.56. The ranking of districts and health facility catchments as priority areas varied depending on the measure used. Each metric for prioritization may result in discrete target areas for vaccination campaigns; thus, there are tradeoffs to choosing one metric over another. However, in some cases, certain areas may be prioritized by all three metrics. These areas should be treated with particular concern.
Furthermore, the spatial scale at which each metric is calculated impacts the resulting prioritization and should also be considered when prioritizing areas for vaccination campaigns. These methods may be used to allocate effort for prophylactic campaigns or to prioritize response for outbreak response vaccination.
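The two metric estimates described — coverage inferred from the vaccination history of cases, and R_E from the epidemic's exponential rise — can be sketched as follows. Both functions are illustrative stand-ins, not the paper's exact estimators: the coverage inversion assumes equal exposure and a known vaccine efficacy `ve`, and the R_E formula is a first-order growth-rate approximation with an assumed serial interval.

```python
import numpy as np

def coverage_from_cases(n_vaccinated_cases, n_cases, ve=0.9):
    # Invert the share of cases with prior vaccination, p, using
    # p = c(1-VE) / (c(1-VE) + (1-c)) under equal exposure, giving
    # coverage c = p / (p + (1-p)(1-VE)).
    p = n_vaccinated_cases / n_cases
    return p / (p + (1 - p) * (1 - ve))

def re_from_growth(case_counts, serial_interval=14.0, dt=1.0):
    # Effective reproductive ratio from the exponential growth rate r
    # of early case counts: R_E ~ exp(r * serial_interval).
    t = np.arange(len(case_counts)) * dt
    r = np.polyfit(t, np.log(case_counts), 1)[0]   # log-linear slope
    return np.exp(r * serial_interval)
```

With 84% coverage and 90% efficacy, roughly a third of cases would report prior vaccination, and inverting that share recovers the coverage estimate.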
Pearson, Scott F.; Knapp, Shannon M.
2016-01-01
Habitat selection that has fitness consequences has important implications for conservation activities. For example, habitat characteristics that influence nest success in birds can be manipulated to improve habitat quality with the goal of ultimately improving reproductive success. We examined habitat selection by the threatened streaked horned lark (Eremophila alpestris strigata) at both the breeding-site (territory) and nest-site scales. Larks were selective at both spatial scales but with contrasting selection. At the territory scale, male larks selected sparsely vegetated grasslands with relatively short vegetation. At the nest-site scale, female larks selected sites within territories with higher vegetation density and more perennial forbs. These nest-site scale choices had reproductive consequences, with greater nest success in areas with higher densities of perennial forbs. We experimentally manipulated lark habitat structure in an attempt to mimic the habitat conditions selected by larks by using late summer prescribed fires. After the burn, changes in vegetation structure were in the direction preferred by larks but habitat effects attenuated by the following year. Our results highlight the importance of evaluating habitat selection at spatial scales appropriate to the species of interest, especially when attempting to improve habitat quality for rare and declining species. They also highlight the importance of conducting restoration activities in a research context. For example, because the sparsely vegetated conditions created by fire attenuate, there may be value in examining more frequent burns or hotter fires as the next management and research action. We hope the design outlined in this study will serve as an integrated research and management example for conserving grassland birds generally. PMID:27322196
Młynarski, Wiktor
2015-01-01
In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373
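The first-layer encoding amounts to sparse coding of each ear's signal in a learned dictionary. As a minimal real-valued stand-in for the paper's complex-valued hierarchical model, inferring a sparse code by iterative shrinkage-thresholding (ISTA) looks like this:

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=200):
    # Sparse code of signal x in dictionary D (columns = basis
    # functions) via iterative shrinkage-thresholding; a generic
    # stand-in for the model's first-layer inference.
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ a - x)            # gradient of the data term
        a = a - g / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0)  # shrink
    return a
```

Dictionary learning would alternate this inference step with updates of `D` itself; here `D` is assumed given.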
Image Stability Requirements For a Geostationary Imaging Fourier Transform Spectrometer (GIFTS)
NASA Technical Reports Server (NTRS)
Bingham, G. E.; Cantwell, G.; Robinson, R. C.; Revercomb, H. E.; Smith, W. L.
2001-01-01
A Geostationary Imaging Fourier Transform Spectrometer (GIFTS) has been selected for the NASA New Millennium Program (NMP) Earth Observing-3 (EO-3) mission. Our paper will discuss one of the key GIFTS measurement requirements, Field of View (FOV) stability, and its impact on required system performance. The GIFTS NMP mission is designed to demonstrate new and emerging sensor and data processing technologies with the goal of making revolutionary improvements in meteorological observational capability and forecasting accuracy. The GIFTS payload is a versatile imaging FTS with programmable spectral resolution and spatial scene selection that allows radiometric accuracy and atmospheric sounding precision to be traded in near real time for area coverage. The GIFTS sensor combines high sensitivity with a massively parallel spatial data collection scheme to allow high spatial resolution measurement of the Earth's atmosphere and rapid broad area coverage. An objective of the GIFTS mission is to demonstrate the advantages of high spatial resolution (4 km ground sample distance - gsd) on temperature and water vapor retrieval by allowing sampling in broken cloud regions. This small gsd, combined with the relatively long scan time required (approximately 10 s) to collect high resolution spectra from geostationary (GEO) orbit, may require extremely good pointing control. This paper discusses the analysis of this requirement.
Gallé, Róbert; Urák, István; Nikolett, Gallé-Szpisjak; Hartel, Tibor
2017-01-01
The integration of food production and biodiversity conservation represents a key challenge for sustainability. Several studies suggest that even small structural elements in the landscape can make a substantial contribution to the overall biodiversity value of agricultural landscapes. Pastures can have high biodiversity potential. However, their intensive and monofunctional use typically erodes their natural capital, including biodiversity. Here we address the ecological value of fine-scale structural elements represented by sparsely scattered trees and shrubs for the spider communities of a moderately intensively grazed pasture in Transylvania, Eastern Europe. The pasture was grazed by sheep, cattle and buffalo (ca 1 Livestock Unit ha-1) and no chemical fertilizers were applied. Sampling sites covered the open pasture as well as the existing fine-scale heterogeneity created by scattered trees and shrubs. Forty sampling locations, each represented by three 1-m2 quadrats, were situated in a stratified design while assuring spatial independence of sampling locations. We identified 140 species of spiders, of which 18 were red-listed and four were new for the Romanian fauna. Spider species assemblages of the open pasture, scattered trees, trees and shrubs, and the forest edge were statistically distinct. Our study shows that sparsely scattered mature woody vegetation and shrubs substantially increase the ecological value of managed pastures. The structural complexity provided by scattered trees and shrubs makes the co-occurrence of high spider diversity with moderately high-intensity grazing possible in this wood-pasture. Our results are in line with recent empirical research showing that sparse trees and shrubs increase the biodiversity potential of pastures managed for commodity production.
Nikolett, Gallé-Szpisjak; Hartel, Tibor
2017-01-01
The integration of food production and biodiversity conservation represents a key challenge for sustainability. Several studies suggest that even small structural elements in the landscape can make a substantial contribution to the overall biodiversity value of agricultural landscapes. Pastures can have high biodiversity potential. However, their intensive and monofunctional use typically erodes their natural capital, including biodiversity. Here we address the ecological value of fine-scale structural elements represented by sparsely scattered trees and shrubs for the spider communities of a moderately intensively grazed pasture in Transylvania, Eastern Europe. The pasture was grazed by sheep, cattle and buffalo (ca 1 Livestock Unit ha-1) and no chemical fertilizers were applied. Sampling sites covered the open pasture as well as the existing fine-scale heterogeneity created by scattered trees and shrubs. Forty sampling locations, each represented by three 1-m2 quadrats, were situated in a stratified design while assuring spatial independence of sampling locations. We identified 140 species of spiders, of which 18 were red-listed and four were new for the Romanian fauna. Spider species assemblages of the open pasture, scattered trees, trees and shrubs, and the forest edge were statistically distinct. Our study shows that sparsely scattered mature woody vegetation and shrubs substantially increase the ecological value of managed pastures. The structural complexity provided by scattered trees and shrubs makes the co-occurrence of high spider diversity with moderately high-intensity grazing possible in this wood-pasture. Our results are in line with recent empirical research showing that sparse trees and shrubs increase the biodiversity potential of pastures managed for commodity production. PMID:28886058
Adaptive OFDM Waveform Design for Spatio-Temporal-Sparsity Exploited STAP Radar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Satyabrata
In this chapter, we describe a sparsity-based space-time adaptive processing (STAP) algorithm to detect a slowly moving target using an orthogonal frequency division multiplexing (OFDM) radar. The motivation for employing an OFDM signal is that it improves target detectability against interfering signals by increasing the frequency diversity of the system. However, due to the addition of one extra dimension in terms of frequency, the adaptive degrees-of-freedom in an OFDM-STAP also increase. Therefore, to avoid the construction of a fully adaptive OFDM-STAP, we develop a sparsity-based STAP algorithm. We observe that the interference spectrum is inherently sparse in the spatio-temporal domain, as the clutter responses occupy only a diagonal ridge on the spatio-temporal plane and the jammer signals interfere only from a few spatial directions. Hence, we exploit that sparsity to develop an efficient STAP technique that utilizes a considerably smaller number of secondary data compared to other existing STAP techniques, and produces nearly optimum STAP performance. In addition to designing the STAP filter, we optimally design the transmit OFDM signals by maximizing the output signal-to-interference-plus-noise ratio (SINR) in order to improve the STAP performance. The computation of the output SINR depends on the estimated value of the interference covariance matrix, which we obtain by applying the sparse recovery algorithm. Therefore, we analytically assess the effects of the synthesized OFDM coefficients on the sparse recovery of the interference covariance matrix by computing the coherence measure of the sparse measurement matrix. Our numerical examples demonstrate the STAP performance achieved by the sparsity-based technique and adaptive waveform design.
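The abstract does not specify its sparse recovery algorithm; as a generic stand-in, orthogonal matching pursuit (OMP) recovering a k-sparse spatio-temporal spectrum from few linear measurements can be sketched as:

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal matching pursuit: greedily recover a k-sparse vector s
    # from y = A @ s. Each pass picks the column most correlated with
    # the residual, then re-fits all chosen columns by least squares.
    r, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ r))))
        sub = A[:, idx]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        r = y - sub @ coef                 # residual after refit
    s = np.zeros(A.shape[1])
    s[idx] = coef
    return s
```

In the STAP setting, the columns of `A` would correspond to points on the angle-Doppler grid, and the recovered support marks the clutter ridge and jammer directions.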
Low-rank Atlas Image Analyses in the Presence of Pathologies
Liu, Xiaoxiao; Niethammer, Marc; Kwitt, Roland; Singh, Nikhil; McCormick, Matt; Aylward, Stephen
2015-01-01
We present a common framework, for registering images to an atlas and for forming an unbiased atlas, that tolerates the presence of pathologies such as tumors and traumatic brain injury lesions. This common framework is particularly useful when a sufficient number of protocol-matched scans from healthy subjects cannot be easily acquired for atlas formation and when the pathologies in a patient cause large appearance changes. Our framework combines a low-rank-plus-sparse image decomposition technique with an iterative, diffeomorphic, group-wise image registration method. At each iteration of image registration, the decomposition technique estimates a “healthy” version of each image as its low-rank component and estimates the pathologies in each image as its sparse component. The healthy version of each image is used for the next iteration of image registration. The low-rank and sparse estimates are refined as the image registrations iteratively improve. When that framework is applied to image-to-atlas registration, the low-rank image is registered to a pre-defined atlas, to establish correspondence that is independent of the pathologies in the sparse component of each image. Ultimately, image-to-atlas registrations can be used to define spatial priors for tissue segmentation and to map information across subjects. When that framework is applied to unbiased atlas formation, at each iteration, the average of the low-rank images from the patients is used as the atlas image for the next iteration, until convergence. Since each iteration’s atlas is comprised of low-rank components, it provides a population-consistent, pathology-free appearance. Evaluations of the proposed methodology are presented using synthetic data as well as simulated and clinical tumor MRI images from the brain tumor segmentation (BRATS) challenge from MICCAI 2012. PMID:26111390
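The low-rank-plus-sparse decomposition at the core of this framework is classically computed by robust PCA. A minimal sketch using the standard inexact augmented-Lagrange-multiplier iteration (generic robust PCA, not the paper's registration-coupled variant; the initial penalty and growth factor are common defaults, assumed here):

```python
import numpy as np

def rpca(X, lam=None, n_iter=300, tol=1e-7):
    # Decompose X ~ L + S with L low-rank ("healthy" appearance) and
    # S sparse (pathologies), via inexact ALM robust PCA.
    m, n = X.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = 0.25 * m * n / np.abs(X).sum()    # initial penalty (heuristic)
    L = np.zeros_like(X); S = np.zeros_like(X); Y = np.zeros_like(X)
    for _ in range(n_iter):
        # L-update: singular-value thresholding of X - S + Y/mu
        U, sv, Vt = np.linalg.svd(X - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sv - 1.0 / mu, 0.0)) @ Vt
        # S-update: entrywise soft-thresholding
        T = X - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Y += mu * (X - L - S)              # dual ascent on the residual
        mu *= 1.05                         # gradual penalty growth
        if np.linalg.norm(X - L - S) <= tol * np.linalg.norm(X):
            break
    return L, S
```

In the paper's loop, each iteration's `L` (the pathology-free estimate) is what gets registered to the atlas, while `S` absorbs tumor and lesion appearance.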
Controls on sinuosity in the sparsely vegetated Fossálar River, southern Iceland
NASA Astrophysics Data System (ADS)
Ielpi, Alessandro
2017-06-01
Vegetation exerts strong controls on fluvial sinuosity, providing bank stability and buffering surface runoff. These controls are manifest in densely vegetated landscapes, whereas sparsely vegetated fluvial systems have been so far overlooked. This study integrates remote sensing and gauging records of the meandering to wandering Fossálar River, a relatively steep-sloped (< 2.5%) Icelandic river featuring well-developed point bars (79%-85% of total active bar surface) despite the lack of thick, arborescent vegetation. Over four decades, fluctuations in the sinuosity index (1.15-1.43) and vegetation cover (63%-83%) are not significantly correlated (r = 0.28, p > 0.05), suggesting that relationships between the two are mediated by intervening variables and uncertain lag times. By comparison, discharge regime and fluvial planform show direct correlation over monthly to yearly time scales, with stable discharge stages accompanying the accretion of meander bends and peak floods related to destructive point-bar reworking. Rapid planform change is aided by the unconsolidated nature of unrooted alluvial banks, with recorded rates of lateral channel-belt migration averaging 18 m/yr. Valley confinement and channel mobility also control the geometry and evolution of individual point bars, with the highest degree of spatial geomorphic variability recorded in low-gradient stretches where lateral migration is unimpeded. Point bars in the Fossálar River display morphometric values comparable to those of other sparsely vegetated rivers, suggesting shared scalar properties. This conjecture prompts the need for more sophisticated integrations between remote sensing and gauging records on modern rivers lacking widespread plant life. 
While a large volume of experimental and field-based work maintains that thick vegetation has a critical role in limiting braiding, thus favouring sinuosity, this study demonstrates the stronger controls of discharge regime and alluvial morphology on sparsely vegetated sinuous rivers.
Chen, Jian-Jun; Yi, Shu-Hua; Qin, Yu; Wang, Xiao-Yun
2014-06-01
This paper retrieved the fractional vegetation cover of alpine grassland in the source region of the Shule River Basin based on Chinese environmental satellite (HJ-1A/1B) images and field data, and analyzed the response of the vegetation cover to topographic factors and types of frozen ground. The results showed that the vegetation coverage of this region was low, with large spatial heterogeneity and a high degree of dispersion. The landscape consisted mainly of non-vegetated surface types, e.g., ice, snow, bare rock and gravel, and bare land. Slope and aspect were the main limiting factors of vegetation distribution. The average vegetation coverage decreased with the increase of slope. The average vegetation coverage was the lowest on the sunny slope and the highest on the shady slope. There were significant differences in vegetation coverage among different types of frozen ground. The distribution of vegetation coverage presented a reversed "U" curve across extremely stable permafrost, stable permafrost, sub-stable permafrost, transition permafrost, unstable permafrost and seasonal frost, and the average vegetation coverage was the highest in the sub-stable permafrost.
Decentralized state estimation for a large-scale spatially interconnected system.
Liu, Huabo; Yu, Haisheng
2018-03-01
A decentralized state estimator is derived for the spatially interconnected systems composed of many subsystems with arbitrary connection relations. An optimization problem on the basis of linear matrix inequality (LMI) is constructed for the computations of improved subsystem parameter matrices. Several computationally effective approaches are derived which efficiently utilize the block-diagonal characteristic of system parameter matrices and the sparseness of subsystem connection matrix. Moreover, this decentralized state estimator is proved to converge to a stable system and obtain a bounded covariance matrix of estimation errors under certain conditions. Numerical simulations show that the obtained decentralized state estimator is attractive in the synthesis of a large-scale networked system. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Urban land use monitoring from computer-implemented processing of airborne multispectral data
NASA Technical Reports Server (NTRS)
Todd, W. J.; Mausel, P. W.; Baumgardner, M. F.
1976-01-01
Machine processing techniques were applied to multispectral data obtained from airborne scanners at an elevation of 600 meters over central Indianapolis in August, 1972. Computer analysis of these spectral data indicate that roads (two types), roof tops (three types), dense grass (two types), sparse grass (two types), trees, bare soil, and water (two types) can be accurately identified. Using computers, it is possible to determine land uses from analysis of type, size, shape, and spatial associations of earth surface images identified from multispectral data. Land use data developed through machine processing techniques can be programmed to monitor land use changes, simulate land use conditions, and provide impact statistics that are required to analyze stresses placed on spatial systems.
NASA Astrophysics Data System (ADS)
Sánchez, Antonio; Malak, Dania Abdul; Schröder, Christoph; Martinez-Murillo, Juan F.
2016-04-01
Remote sensing techniques (SRS) are valid tools for wetland monitoring that could support wetland managers in assessing the spatial and temporal changes in wetland ecosystems as well as in understanding their condition and the ecosystem services they provide. This study focuses, on the one hand, on drawing hydro-ecological guidelines for the delimitation of wetland ecosystems and, on the other, on assessing the reliability of widely available satellite images (Landsat) in estimating the land use/land cover types covering wetlands. This research develops comprehensive guidelines to determine the boundaries of the Fuente de Piedra wetland ecosystem located in Andalusia, Spain, and defines the main land use/land cover classes covering this ecosystem using Landsat 8 images. The accuracy of the SRS results is tested using the regional inventory of land use produced by the regional government of Andalusia in 2011. By using the ecological and hydrological settings of the area, the boundaries of the Fuente de Piedra wetland ecosystem are determined as an alternative to improve the current delimitation methodology (the Ramsar and Natura 2000 delineations), used by the local authorities so far and based mainly on administrative reasoning. In terms of land use/land cover, the Fuente de Piedra wetland ecosystem covers a total area of 195 km2, composed mainly of agricultural areas (81.46%): olive groves, non-irrigated arable land and pastures, at 54.82%, 25.71% and 0.93% of the surface respectively. Wetland-related land covers (water surface, wetland vegetation) represent 6.85%, while natural vegetation is distributed between forest (1.67%) and shrub areas (4.14%), 5.81% in total. 4.58% of the area corresponds to urban and other artificial surfaces. The rest, 1.30%, is composed of different areas without vegetation (sands, bare rock, dumps, etc.).
The classification of the Landsat images performed with the newly developed SWOS toolbox (under the Horizon 2020 SWOS project) provides reliable results (r² = 0.98). The image segmentation corresponds very closely to the plots of land observed in the satellite image, and the allocated land use cover agrees in 82% of the segments. Forest and olive groves are the best-identified covers, with an accuracy of 93% in both cases. Wetlands are correctly classified in 87% of the segments, although linear features (narrow streams, etc.) are not detected by the methodology due to the limitations of Landsat resolution. Arable land is classified with an accuracy of 85.5%; the methodology tends to confuse this class with sparse olive groves. For shrubs, accuracy is around 72%, with confusions involving arable land and sparse forests in wetland areas. For urban areas, only 60.5% of the segments are correctly classified, as distinguishing urban fabric from industrial areas does not seem possible and linear features (highways, secondary roads, …) are not detected.
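The per-class agreement percentages reported above are, in essence, per-class accuracies read off a comparison of reference versus classified segments. A minimal sketch of how such figures are computed, using made-up toy labels rather than the study's data:

```python
from collections import defaultdict

def per_class_accuracy(reference, predicted):
    """Overall and per-class fraction of segments whose predicted label
    matches the reference label."""
    assert len(reference) == len(predicted)
    correct = defaultdict(int)
    total = defaultdict(int)
    for ref, pred in zip(reference, predicted):
        total[ref] += 1
        if ref == pred:
            correct[ref] += 1
    overall = sum(correct.values()) / len(reference)
    per_class = {c: correct[c] / total[c] for c in total}
    return overall, per_class

# Toy example with hypothetical segment labels (not the study's inventory)
ref  = ["forest", "forest", "olive", "wetland", "arable", "arable", "urban", "urban"]
pred = ["forest", "forest", "olive", "wetland", "arable", "olive",  "urban", "industrial"]
overall, per_class = per_class_accuracy(ref, pred)
```

In practice such comparisons are tabulated as a full confusion matrix, which also exposes *which* classes are being confused (e.g., arable land with sparse olive grove), not just how often.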
NASA Technical Reports Server (NTRS)
1982-01-01
Functional and design data from various thematic mapper subsystems are presented. Coarse focus, modulation transfer function, and shim requirements are addressed along with spectral matching and spatial coverage tests.
Efficient receptive field tiling in primate V1
Nauhaus, Ian; Nielsen, Kristina J.; Callaway, Edward M.
2017-01-01
The primary visual cortex (V1) encodes a diverse set of visual features, including orientation, ocular dominance (OD) and spatial frequency (SF), whose joint organization must be precisely structured to optimize coverage within the retinotopic map. Prior experiments have only identified efficient coverage based on orthogonal maps. Here, we used two-photon calcium imaging to reveal an alternative arrangement for OD and SF maps in macaque V1; their gradients run parallel but with unique spatial periods, whereby low SF regions coincide with monocular regions. Next, we mapped receptive fields and found surprisingly precise micro-retinotopy that yields a smaller point-image and requires more efficient inter-map geometry, thus underscoring the significance of map relationships. While smooth retinotopy is constraining, studies suggest that it improves both wiring economy and the V1 population code read downstream. Altogether, these data indicate that connectivity within V1 is finely tuned and precise at the level of individual neurons. PMID:27499086
Retrieved Products from Simulated Hyperspectral Observations of a Hurricane
NASA Technical Reports Server (NTRS)
Susskind, Joel; Kouvaris, Louis C.; Iredell, Lena; Blaisdell, John; Pagano, Thomas; Mathews, William
2015-01-01
This research uses GCM derived products, with 1 km spatial resolution and sampled every 10 minutes, over a moving area following the track of a simulated severe Atlantic storm. Model products were aggregated over sounder footprints corresponding to 13 km in LEO, 2 km in LEO, and 5 km in GEO sampled every 72 minutes. We simulated radiances for instruments with AIRS-like spectral coverage, spectral resolution, and channel noise, using these aggregated products as the truth, and analyzed them using a slightly modified version of the operational AIRS Version-6 retrieval algorithm. Accuracy of retrievals obtained using simulated AIRS radiances with a 13 km footprint was similar to that obtained using real AIRS data. Spatial coverage and accuracy of retrievals are shown for all three sounding scenarios. The research demonstrates the potential significance of flying Advanced AIRS-like instruments on future LEO and GEO missions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arai, Tatsuya J.; Nofiele, Joris; Yuan, Qing
Purpose: Sparse-sampling and reconstruction techniques represent an attractive strategy to achieve faster image acquisition speeds, while maintaining adequate spatial resolution and signal-to-noise ratio in rapid magnetic resonance imaging (MRI). The authors investigate the use of one such sequence, broad-use linear acquisition speed-up technique (k-t BLAST) in monitoring tumor motion for thoracic and abdominal radiotherapy and examine the potential trade-off between increased sparsification (to increase imaging speed) and the potential loss of “true” information due to greater reliance on a priori information. Methods: Lung tumor motion trajectories in the superior–inferior direction, previously recorded from ten lung cancer patients, were replayed using a motion phantom module driven by an MRI-compatible motion platform. Eppendorf test tubes filled with water which serve as fiducial markers were placed in the phantom. The modeled rigid and deformable motions were collected in a coronal image slice using balanced fast field echo in conjunction with k-t BLAST. Root mean square (RMS) error was used as a metric of spatial accuracy as measured trajectories were compared to input data. The loss of spatial information was characterized for progressively increasing acceleration factor from 1 to 16; the resultant sampling frequency was increased approximately from 2.5 to 19 Hz when the principal direction of the motion was set along frequency encoding direction. In addition to the phantom study, respiration-induced tumor motions were captured from two patients (kidney tumor and lung tumor) at 13 Hz over 49 s to demonstrate the impact of high speed motion monitoring over multiple breathing cycles. For each subject, the authors compared the tumor centroid trajectory as well as the deformable motion during free breathing. 
Results: In the rigid and deformable phantom studies, the RMS error of target tracking at the acquisition speed of 19 Hz was approximately 0.3–0.4 mm, which was smaller than the reconstructed pixel resolution of 0.67 mm. In the patient study, the dynamic 2D MRI enabled the monitoring of cycle-to-cycle respiratory variability present in the tumor position. It was seen that the range of centroid motion as well as the area covered due to target motion during each individual respiratory cycle was underestimated compared to the entire motion range observed over multiple breathing cycles. Conclusions: The authors’ initial results demonstrate that sparse-sampling- and reconstruction-based dynamic MRI can be used to achieve adequate image acquisition speeds without significant information loss for the task of radiotherapy guidance. Such monitoring can yield spatial and temporal information superior to conventional offline and online motion capture methods used in thoracic and abdominal radiotherapy.
Arai, Tatsuya J; Nofiele, Joris; Madhuranthakam, Ananth J; Yuan, Qing; Pedrosa, Ivan; Chopra, Rajiv; Sawant, Amit
2016-06-01
Sparse-sampling and reconstruction techniques represent an attractive strategy to achieve faster image acquisition speeds, while maintaining adequate spatial resolution and signal-to-noise ratio in rapid magnetic resonance imaging (MRI). The authors investigate the use of one such sequence, broad-use linear acquisition speed-up technique (k-t BLAST) in monitoring tumor motion for thoracic and abdominal radiotherapy and examine the potential trade-off between increased sparsification (to increase imaging speed) and the potential loss of "true" information due to greater reliance on a priori information. Lung tumor motion trajectories in the superior-inferior direction, previously recorded from ten lung cancer patients, were replayed using a motion phantom module driven by an MRI-compatible motion platform. Eppendorf test tubes filled with water which serve as fiducial markers were placed in the phantom. The modeled rigid and deformable motions were collected in a coronal image slice using balanced fast field echo in conjunction with k-t BLAST. Root mean square (RMS) error was used as a metric of spatial accuracy as measured trajectories were compared to input data. The loss of spatial information was characterized for progressively increasing acceleration factor from 1 to 16; the resultant sampling frequency was increased approximately from 2.5 to 19 Hz when the principal direction of the motion was set along frequency encoding direction. In addition to the phantom study, respiration-induced tumor motions were captured from two patients (kidney tumor and lung tumor) at 13 Hz over 49 s to demonstrate the impact of high speed motion monitoring over multiple breathing cycles. For each subject, the authors compared the tumor centroid trajectory as well as the deformable motion during free breathing. 
In the rigid and deformable phantom studies, the RMS error of target tracking at the acquisition speed of 19 Hz was approximately 0.3-0.4 mm, which was smaller than the reconstructed pixel resolution of 0.67 mm. In the patient study, the dynamic 2D MRI enabled the monitoring of cycle-to-cycle respiratory variability present in the tumor position. It was seen that the range of centroid motion as well as the area covered due to target motion during each individual respiratory cycle was underestimated compared to the entire motion range observed over multiple breathing cycles. The authors' initial results demonstrate that sparse-sampling- and reconstruction-based dynamic MRI can be used to achieve adequate image acquisition speeds without significant information loss for the task of radiotherapy guidance. Such monitoring can yield spatial and temporal information superior to conventional offline and online motion capture methods used in thoracic and abdominal radiotherapy.
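The spatial-accuracy metric used above is a plain root-mean-square deviation between the tracked trajectory and the known input trajectory. A minimal illustration, using hypothetical superior-inferior positions in mm rather than the study's recordings:

```python
import math

def rms_error(measured, reference):
    """Root-mean-square deviation between a tracked trajectory and its
    ground-truth input, both sampled at the same time points."""
    assert len(measured) == len(reference)
    return math.sqrt(
        sum((m - r) ** 2 for m, r in zip(measured, reference)) / len(measured)
    )

# Hypothetical 1D positions over one breathing half-cycle (mm)
truth   = [0.0, 2.0, 4.0, 2.0, 0.0]
tracked = [0.1, 2.2, 3.9, 1.8, 0.0]
err = rms_error(tracked, truth)
```

A sub-pixel RMS error, as reported above (0.3–0.4 mm against a 0.67 mm reconstructed pixel), indicates that centroid-based tracking can localize the target more finely than the nominal pixel grid.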
Dynamic Positron Emission Tomography [PET] in Man Using Small Bismuth Germanate Crystals
DOE R&D Accomplishments Database
Derenzo, S. E.; Budinger, T. F.; Huesman, R. H.; Cahoon, J. L.
1982-04-01
Primary considerations for the design of positron emission tomographs for medical studies in humans are the need for high imaging sensitivity, whole organ coverage, good spatial resolution, high maximum data rates, adequate spatial sampling with minimum mechanical motion, shielding against out of plane activity, pulse height discrimination against scattered photons, and timing discrimination against accidental coincidences. We discuss the choice of detectors, sampling motion, shielding, and electronics to meet these objectives.
Polo, Gina; Acosta, C. Mera; Ferreira, Fernando; Dias, Ricardo Augusto
2015-01-01
This study integrated accessibility and location-allocation models in geographic information systems as a proposed strategy to improve the spatial planning of public health services. To estimate the spatial accessibility, we modified the two-step floating catchment area (2SFCA) model with a different impedance function, a Gaussian weight for competition among service sites, a friction coefficient, distances along a street network based on the Dijkstra’s algorithm and by performing a vectorial analysis. To check the accuracy of the strategy, we used the data from the public sterilization program for the dogs and cats of Bogotá, Colombia. Since the proposed strategy is independent of the service, it could also be applied to any other public intervention when the capacity of the service is known. The results of the accessibility model were consistent with the sterilization program data, revealing that the western, central and northern zones are the most isolated areas under the sterilization program. Spatial accessibility improvement was sought by relocating the sterilization sites using the maximum coverage with finite demand and the p-median models. The relocation proposed by the maximum coverage model more effectively maximized the spatial accessibility to the sterilization service given the non-uniform distribution of the populations of dogs and cats throughout the city. The implementation of the proposed strategy would provide direct benefits by improving the effectiveness of different public health interventions and the use of financial and human resources. PMID:25775411
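The core of the 2SFCA model described above is a two-pass computation: first a supply-to-demand ratio for each service site, weighted by a distance-decay function over the populations it can reach; then a weighted sum of those ratios at each demand location. A compact sketch assuming a Gaussian impedance truncated at the catchment radius; the study's full model additionally includes competition weights among sites, a friction coefficient, and street-network distances, all omitted here:

```python
import math

def gaussian_weight(d, d0):
    """Gaussian distance-decay impedance, zero beyond the catchment radius d0."""
    if d > d0:
        return 0.0
    return math.exp(-0.5 * (d / d0) ** 2)

def two_step_fca(supply, demand, dist, d0):
    """supply[j]: capacity of site j; demand[i]: population at point i;
    dist[i][j]: travel distance from demand point i to site j."""
    n_dem, n_sup = len(demand), len(supply)
    # Step 1: distance-weighted supply-to-demand ratio for each site
    ratio = []
    for j in range(n_sup):
        weighted_pop = sum(gaussian_weight(dist[i][j], d0) * demand[i]
                           for i in range(n_dem))
        ratio.append(supply[j] / weighted_pop if weighted_pop > 0 else 0.0)
    # Step 2: accessibility at each demand point sums the reachable ratios
    return [sum(gaussian_weight(dist[i][j], d0) * ratio[j] for j in range(n_sup))
            for i in range(n_dem)]

# Toy instance: one site, two populations, the second one beyond the catchment
access = two_step_fca(supply=[10.0], demand=[100.0, 100.0],
                      dist=[[0.0], [10.0]], d0=5.0)
```

Demand points outside every catchment receive an accessibility score of zero, which is exactly how isolated zones (such as those identified in the study) show up in the output.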
This is SPIRAL-TAP: Sparse Poisson Intensity Reconstruction ALgorithms--theory and practice.
Harmany, Zachary T; Marcia, Roummel F; Willett, Rebecca M
2012-03-01
Observations in many applications consist of counts of discrete events, such as photons hitting a detector, which cannot be effectively modeled using an additive bounded or Gaussian noise model, and instead require a Poisson noise model. As a result, accurate reconstruction of a spatially or temporally distributed phenomenon (f*) from Poisson data (y) cannot be effectively accomplished by minimizing a conventional penalized least-squares objective function. The problem addressed in this paper is the estimation of f* from y in an inverse problem setting, where the number of unknowns may potentially be larger than the number of observations and f* admits sparse approximation. The optimization formulation considered in this paper uses a penalized negative Poisson log-likelihood objective function with nonnegativity constraints (since Poisson intensities are naturally nonnegative). In particular, the proposed approach incorporates key ideas of using separable quadratic approximations to the objective function at each iteration and penalization terms related to l1 norms of coefficient vectors, total variation seminorms, and partition-based multiscale estimation methods.
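The optimization problem described above, a penalized negative Poisson log-likelihood with nonnegativity constraints and an l1 penalty, can be illustrated with a bare-bones proximal-gradient loop. This is a toy stand-in, not the SPIRAL-TAP implementation: SPIRAL's separable quadratic surrogates and its total-variation and multiscale penalties are replaced here by a fixed-step gradient update and a plain l1 prox.

```python
def poisson_l1_prox_grad(A, y, tau, step, iters):
    """Minimize sum_i [(Af)_i - y_i * log (Af)_i] + tau * ||f||_1 subject to
    f >= 0, via projected proximal-gradient steps on a small dense system."""
    m, n = len(A), len(A[0])
    f = [1.0] * n                     # strictly positive starting point
    eps = 1e-12                       # guards the division when (Af)_i = 0
    for _ in range(iters):
        Af = [sum(A[i][j] * f[j] for j in range(n)) for i in range(m)]
        # gradient of the negative Poisson log-likelihood: A^T (1 - y / Af)
        g = [sum(A[i][j] * (1.0 - y[i] / max(Af[i], eps)) for i in range(m))
             for j in range(n)]
        # gradient step, then the nonnegative soft-threshold:
        # prox of tau*||.||_1 restricted to f >= 0 is max(0, v - step*tau)
        f = [max(0.0, fj - step * gj - step * tau) for fj, gj in zip(f, g)]
    return f

# Identity-operator example: the fixed point is y_i / (1 + tau) where y_i > 0,
# and exactly zero where y_i = 0 (the l1 penalty shrinks it away)
f_hat = poisson_l1_prox_grad([[1.0, 0.0], [0.0, 1.0]], [4.0, 0.0],
                             tau=0.5, step=0.1, iters=500)
```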
LESS: Link Estimation with Sparse Sampling in Intertidal WSNs
Ji, Xiaoyu; Chen, Yi-chao; Li, Xiaopeng; Xu, Wenyuan
2018-01-01
Deploying wireless sensor networks (WSN) in the intertidal area is an effective approach for environmental monitoring. To sustain reliable data delivery in such a dynamic environment, a link quality estimation mechanism is crucial. However, our observations in two real WSN systems deployed in the intertidal areas reveal that link update in routing protocols often suffers from energy and bandwidth waste due to the frequent link quality measurement and updates. In this paper, we carefully investigate the network dynamics using real-world sensor network data and find it feasible to achieve accurate estimation of link quality using sparse sampling. We design and implement a compressive-sensing-based link quality estimation protocol, LESS, which incorporates both spatial and temporal characteristics of the system to aid the link update in routing protocols. We evaluate LESS in both real WSN systems and a large-scale simulation, and the results show that LESS can reduce energy and bandwidth consumption by up to 50% while still achieving more than 90% link quality estimation accuracy. PMID:29494557
Denoising Sparse Images from GRAPPA using the Nullspace Method (DESIGN)
Weller, Daniel S.; Polimeni, Jonathan R.; Grady, Leo; Wald, Lawrence L.; Adalsteinsson, Elfar; Goyal, Vivek K
2011-01-01
To accelerate magnetic resonance imaging using uniformly undersampled (nonrandom) parallel imaging beyond what is achievable with GRAPPA alone, the Denoising of Sparse Images from GRAPPA using the Nullspace method (DESIGN) is developed. The trade-off between denoising and smoothing the GRAPPA solution is studied for different levels of acceleration. Several brain images reconstructed from uniformly undersampled k-space data using DESIGN are compared against reconstructions using existing methods in terms of difference images (a qualitative measure), PSNR, and noise amplification (g-factors) as measured using the pseudo-multiple replica method. Effects of smoothing, including contrast loss, are studied in synthetic phantom data. In the experiments presented, the contrast loss and spatial resolution are competitive with existing methods. Results for several brain images demonstrate significant improvements over GRAPPA at high acceleration factors in denoising performance with limited blurring or smoothing artifacts. In addition, the measured g-factors suggest that DESIGN mitigates noise amplification better than both GRAPPA and L1 SPIR-iT (the latter limited here by uniform undersampling). PMID:22213069
Yang, Yan; Onishi, Takeo; Hiramatsu, Ken
2014-01-01
Simulation results of the widely used temperature index snowmelt model are greatly influenced by input air temperature data. Spatially sparse air temperature data remain the main factor inducing uncertainties and errors in that model, which limits its applications. Thus, to solve this problem, we created new air temperature data using linear regression relationships that can be formulated based on MODIS land surface temperature data. The Soil Water Assessment Tool model, which includes an improved temperature index snowmelt module, was chosen to test the newly created data. By evaluating simulation performance for daily snowmelt in three test basins of the Amur River, performance of the newly created data was assessed. The coefficient of determination (R²) and Nash-Sutcliffe efficiency (NSE) were used for evaluation. The results indicate that MODIS land surface temperature data can be used as a new source for air temperature data creation. This will improve snow simulation using the temperature index model in an area with sparse air temperature observations. PMID:25165746
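The pipeline described above has two simple building blocks: a linear regression mapping MODIS land surface temperature to air temperature, and a temperature-index (degree-day) melt equation. A minimal sketch with placeholder coefficients, not values fitted in the study:

```python
def estimate_air_temp(lst_c, slope, intercept):
    """Air temperature proxy from land surface temperature via a linear
    regression; slope and intercept would be fitted against station data."""
    return slope * lst_c + intercept

def degree_day_melt(air_temp_c, melt_factor=3.0, base_temp_c=0.0):
    """Classic temperature-index model: daily melt (mm/day) proportional to
    degrees above a base temperature, zero below it."""
    return max(0.0, melt_factor * (air_temp_c - base_temp_c))

# Hypothetical daily MODIS LST series (deg C); regression coefficients are
# placeholders for illustration only
lst_series = [-6.0, -1.0, 1.0, 4.0]
melt = [degree_day_melt(estimate_air_temp(t, slope=0.9, intercept=1.0))
        for t in lst_series]
```

The appeal of this substitution is that the melt equation itself is unchanged; only its sparse air-temperature input is replaced by a spatially dense, satellite-derived proxy.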
Array signal recovery algorithm for a single-RF-channel DBF array
NASA Astrophysics Data System (ADS)
Zhang, Duo; Wu, Wen; Fang, Da Gang
2016-12-01
An array signal recovery algorithm based on sparse signal reconstruction theory is proposed for a single-RF-channel digital beamforming (DBF) array. A single-RF-channel antenna array is a low-cost antenna array in which signals are obtained from all antenna elements by only one microwave digital receiver. The spatially parallel array signals are converted into time-sequence signals, which are then sampled by the system. The proposed algorithm uses these time-sequence samples to recover the original parallel array signals by exploiting the second-order sparse structure of the array signals. Additionally, an optimization method based on the artificial bee colony (ABC) algorithm is proposed to improve the reconstruction performance. Using the proposed algorithm, the motion compensation problem for the single-RF-channel DBF array can be solved effectively, and the angle and Doppler information for the target can be simultaneously estimated. The effectiveness of the proposed algorithms is demonstrated by the results of numerical simulations.
Parameter Estimation for a Pulsating Turbulent Buoyant Jet Using Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Christopher, Jason; Wimer, Nicholas; Lapointe, Caelan; Hayden, Torrey; Grooms, Ian; Rieker, Greg; Hamlington, Peter
2017-11-01
Approximate Bayesian Computation (ABC) is a powerful tool that allows sparse experimental or other "truth" data to be used for the prediction of unknown parameters, such as flow properties and boundary conditions, in numerical simulations of real-world engineering systems. Here we introduce the ABC approach and then use ABC to predict unknown inflow conditions in simulations of a two-dimensional (2D) turbulent, high-temperature buoyant jet. For this test case, truth data are obtained from a direct numerical simulation (DNS) with known boundary conditions and problem parameters, while the ABC procedure utilizes lower fidelity large eddy simulations. Using spatially-sparse statistics from the 2D buoyant jet DNS, we show that the ABC method provides accurate predictions of true jet inflow parameters. The success of the ABC approach in the present test suggests that ABC is a useful and versatile tool for predicting flow information, such as boundary conditions, that can be difficult to determine experimentally.
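The ABC idea summarized above can be reduced to its simplest rejection form: sample parameters from a prior, run a cheap simulation, and keep draws whose summary statistics land within a tolerance of the observed "truth" statistics. A toy sketch with a hypothetical one-parameter surrogate model, not the buoyant-jet LES:

```python
import random

def abc_rejection(observed_stat, simulate, prior_sample, distance, eps,
                  n_draws, seed=0):
    """Rejection ABC: accepted parameter draws approximate the posterior
    as the tolerance eps shrinks toward zero."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        if distance(simulate(theta, rng), observed_stat) <= eps:
            accepted.append(theta)
    return accepted

# Toy inverse problem: recover an inflow velocity from a single noisy
# mean statistic produced by a stand-in simulator
true_velocity = 2.0
posterior = abc_rejection(
    observed_stat=true_velocity,                        # idealized "truth" statistic
    simulate=lambda v, rng: v + rng.gauss(0.0, 0.1),    # cheap surrogate model
    prior_sample=lambda rng: rng.uniform(0.0, 5.0),     # vague prior on velocity
    distance=lambda a, b: abs(a - b),
    eps=0.2,
    n_draws=5000,
)
mean_est = sum(posterior) / len(posterior)
```

Practical ABC studies replace the scalar statistic with a vector of spatially sparse flow statistics and the surrogate with an actual lower-fidelity simulation, but the accept/reject skeleton is the same.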
Competition in high dimensional spaces using a sparse approximation of neural fields.
Quinton, Jean-Charles; Girau, Bernard; Lefort, Mathieu
2011-01-01
The Continuum Neural Field Theory implements competition within topologically organized neural networks with lateral inhibitory connections. However, due to the polynomial complexity of matrix-based implementations, updating dense representations of the activity becomes computationally intractable when an adaptive resolution or an arbitrary number of input dimensions is required. This paper proposes an alternative to self-organizing maps with a sparse implementation based on Gaussian mixture models, trading off redundancy for higher computational efficiency and alleviating constraints on the underlying substrate. This version reproduces the emergent attentional properties of the original equations, by directly applying them within a continuous approximation of a high dimensional neural field. The model is compatible with preprocessed sensory flows but can also be interfaced with artificial systems. This is particularly important for sensorimotor systems, where decisions and motor actions must be taken and updated in real-time. Preliminary tests are performed on a reactive color tracking application, using spatially distributed color features.
Compressed digital holography: from micro towards macro
NASA Astrophysics Data System (ADS)
Schretter, Colas; Bettens, Stijn; Blinder, David; Pesquet-Popescu, Béatrice; Cagnazzo, Marco; Dufaux, Frédéric; Schelkens, Peter
2016-09-01
signal processing methods from software-driven computer engineering and applied mathematics. The compressed sensing theory in particular established a practical framework for reconstructing the scene content using few linear combinations of complex measurements and a sparse prior for regularizing the solution. Compressed sensing found direct applications in digital holography for microscopy. Indeed, the wave propagation phenomenon in free space mixes in a natural way the spatial distribution of point sources from the 3-dimensional scene. As the 3-dimensional scene is mapped to a 2-dimensional hologram, the hologram samples form a compressed representation of the scene as well. This overview paper discusses contributions in the field of compressed digital holography at the micro scale. Then, an outreach on future extensions towards the real-size macro scale is discussed. Thanks to advances in sensor technologies, increasing computing power and the recent improvements in sparse digital signal processing, holographic modalities are on the verge of practical high-quality visualization at a macroscopic scale where much higher resolution holograms must be acquired and processed on the computer.
Sparsity enables estimation of both subcortical and cortical activity from MEG and EEG
Krishnaswamy, Pavitra; Obregon-Henao, Gabriel; Ahveninen, Jyrki; Khan, Sheraz; Iglesias, Juan Eugenio; Hämäläinen, Matti S.; Purdon, Patrick L.
2017-01-01
Subcortical structures play a critical role in brain function. However, options for assessing electrophysiological activity in these structures are limited. Electromagnetic fields generated by neuronal activity in subcortical structures can be recorded noninvasively, using magnetoencephalography (MEG) and electroencephalography (EEG). However, these subcortical signals are much weaker than those generated by cortical activity. In addition, we show here that it is difficult to resolve subcortical sources because distributed cortical activity can explain the MEG and EEG patterns generated by deep sources. We then demonstrate that if the cortical activity is spatially sparse, both cortical and subcortical sources can be resolved with M/EEG. Building on this insight, we develop a hierarchical sparse inverse solution for M/EEG. We assess the performance of this algorithm on realistic simulations and auditory evoked response data, and show that thalamic and brainstem sources can be correctly estimated in the presence of cortical activity. Our work provides alternative perspectives and tools for characterizing electrophysiological activity in subcortical structures in the human brain. PMID:29138310