DOE Office of Scientific and Technical Information (OSTI.GOV)
Yun, Yuxing; Fan, Jiwen; Xiao, Heng
Realistic modeling of cumulus convection at fine model resolutions (a few to a few tens of km) is problematic, since it requires cumulus schemes to adapt to higher resolutions than the ~100 km they were originally designed for. To solve this problem, we implement the spatial averaging method proposed in Xiao et al. (2015) and also propose a temporal averaging method for the large-scale convective available potential energy (CAPE) tendency in the Zhang-McFarlane (ZM) cumulus parameterization. The resolution adaptability of the original ZM scheme, the scheme with spatial averaging, and the scheme with both spatial and temporal averaging at 4-32 km resolution is assessed using the Weather Research and Forecasting (WRF) model, by comparing with Cloud Resolving Model (CRM) results. We find that the original ZM scheme has very poor resolution adaptability, with sub-grid convective transport and precipitation increasing significantly as the resolution increases. The spatial averaging method improves the resolution adaptability of the ZM scheme and better conserves the total transport of moist static energy and total precipitation. With the temporal averaging method, the resolution adaptability of the scheme is further improved, with sub-grid convective precipitation becoming smaller than resolved precipitation at resolutions finer than 8 km, consistent with the results from the CRM simulation. Both the spatial distribution and the time series of precipitation are improved with the spatial and temporal averaging methods. The results may be helpful for developing resolution adaptability for other cumulus parameterizations that are based on the quasi-equilibrium assumption.
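The temporal-averaging step described above can be sketched as a running mean over recent instantaneous tendencies. A minimal illustration, with made-up values and window length (the paper's actual averaging interval is not stated here):

```python
import numpy as np

def temporally_averaged_tendency(history, window):
    """Running mean of the large-scale CAPE tendency over the last `window`
    model steps. `history` is a 1-D array of instantaneous tendencies,
    newest last; window <= 0 averages the whole record."""
    recent = history[-window:] if window > 0 else history
    return float(np.mean(recent))

# Noisy instantaneous tendencies at successive model steps (values made up)
tend = np.array([2.0, -1.0, 3.0, 0.0, 4.0])
print(temporally_averaged_tendency(tend, 3))  # mean of the last 3 values
```

The averaged tendency varies much more smoothly than the instantaneous one, which is the property the scheme exploits.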
Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method
Liu, Y.; Liu, Z.; Zhang, S.; ...
2014-05-29
Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. For a complex system such as a coupled ocean-atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter can vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting "good" values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has a superior performance: faster convergence and an enhanced signal-to-noise ratio.
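A minimal sketch of the spread-based selection idea behind the ASA: keep the spatially varying posterior values where the ensemble spread is smallest, then average them into one global parameter. The quantile threshold and all numbers are illustrative assumptions, not the paper's actual criterion:

```python
import numpy as np

def adaptive_spatial_average(posterior, spread, quantile=0.25):
    """Select 'good' posterior parameter values (those with the smallest
    ensemble spread, here the lowest quantile) and average them into a
    single global uniform posterior parameter."""
    cutoff = np.quantile(spread, quantile)
    good = posterior[spread <= cutoff]
    return float(good.mean())

post = np.array([1.1, 0.9, 1.0, 3.0])   # posterior parameter per grid point
spr  = np.array([0.1, 0.2, 0.15, 2.0])  # ensemble spread per grid point
print(adaptive_spatial_average(post, spr, 0.5))  # outlier (3.0) is excluded
```

The grid point with large spread (and an outlying parameter value) is excluded, which is what improves the signal-to-noise ratio of the global estimate.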
NASA Astrophysics Data System (ADS)
Whidden, E.; Roulet, N.
2003-04-01
Interpretation of a site-average terrestrial flux may be complicated in the presence of inhomogeneities. Inhomogeneity may invalidate the basic assumptions of aerodynamic flux measurement. Chamber measurements may miss or misinterpret important temporal or spatial anomalies. Models may smooth over important nonlinearities, depending on the scale of application. Although inhomogeneity is usually seen as a design problem, many sites have spatial variance that may have a large impact on net flux, and in many cases a large homogeneous surface is unrealistic. The sensitivity and validity of a site-average flux are investigated at an inhomogeneous site. Directional differences are used to evaluate the validity of aerodynamic methods and the computation of a site-average tower flux. Empirical and modelling methods are used to interpret the spatial controls on flux. An ecosystem model, Ecosys, is used to assess spatial length scales appropriate to the ecophysiological controls. A diffusion model is used to compare tower, chamber, and model data by spatially weighting contributions within the tower footprint. Diffusion-model weighting is also used to improve tower flux estimates by producing footprint-averaged ecological parameters (soil moisture, soil temperature, etc.). Although uncertainty remains in the validity of the measurement methods and the accuracy of diffusion models, a detailed spatial interpretation is required at an inhomogeneous site. Agreement in flux estimates between methods improves with spatial interpretation, showing its importance for estimating a site-average flux. Small-scale temporal and spatial anomalies may be relatively unimportant to the overall flux, but accounting for medium-scale differences in ecophysiological controls is necessary. A combination of measurements and modelling can be used to define the appropriate time and length scales of significant nonlinearity due to inhomogeneity.
Spatial Lattice Modulation for MIMO Systems
NASA Astrophysics Data System (ADS)
Choi, Jiwook; Nam, Yunseo; Lee, Namyoon
2018-06-01
This paper proposes spatial lattice modulation (SLM), a spatial modulation method for multiple-input multiple-output (MIMO) systems. The key idea of SLM is to jointly exploit spatial, in-phase, and quadrature dimensions to modulate information bits into a multi-dimensional signal set that consists of lattice points. One major finding is that SLM achieves a higher spectral efficiency than the existing spatial modulation and spatial multiplexing methods for the MIMO channel under the constraint of M-ary pulse-amplitude-modulation (PAM) input signaling per dimension. In particular, it is shown that when the SLM signal set is constructed from dense lattices, a significant signal-to-noise-ratio (SNR) gain, i.e., a nominal coding gain, is attainable compared to the existing methods. In addition, closed-form expressions for both the average mutual information and the average symbol-vector-error probability (ASVEP) of generic SLM are derived under Rayleigh-fading environments. To reduce detection complexity, a low-complexity detection method for SLM, referred to as lattice sphere decoding, is developed by exploiting lattice theory. Simulation results verify the accuracy of the analysis and demonstrate that the proposed SLM techniques achieve higher average mutual information and lower ASVEP than existing methods.
NASA Astrophysics Data System (ADS)
Lopez-Baeza, E.; Monsoriu Torres, A.; Font, J.; Alonso, O.
2009-04-01
The ESA SMOS (Soil Moisture and Ocean Salinity) mission is planned to be launched in July 2009. The satellite will measure soil moisture over the continents and surface salinity of the oceans at resolutions sufficient for climatological-type studies. This paper describes the procedure to be used at the Spanish SMOS Level 3 and 4 Data Processing Centre (CP34) to generate Soil Moisture and other Land Surface Product maps from SMOS Level 2 data. This procedure can be used to map Soil Moisture, Vegetation Water Content and Soil Dielectric Constant data onto different pre-defined spatial grids with fixed temporal frequency. The L3 standard Land Surface Products to be generated at CP34 are:
Soil Moisture products:
- maximum spatial resolution with no spatial averaging, temporal averaging of 3 days, daily generation;
- maximum spatial resolution with no spatial averaging, temporal averaging of 10 days, generated once every 10 days;
- maximum spatial resolution with no spatial averaging, temporal averaging over monthly decades (1st to 10th, 11th to 20th, and 21st to the last day of the month), generated once every decade;
- monthly average, temporally averaged from L3 decade averages, monthly generation;
- seasonal average, temporally averaged from L3 monthly averages, seasonal generation;
- yearly average, temporally averaged from L3 monthly averages, yearly generation.
Vegetation Water Content products:
- maximum spatial resolution with no spatial averaging, temporal averaging of 10 days, generated once every 10 days;
- maximum spatial resolution with no spatial averaging, temporal averaging over monthly decades (1st to 10th, 11th to 20th, and 21st to the last day of the month) using a simple averaging method over the L2 products in the ISEA grid, generated once every decade;
- monthly average, temporally averaged from L3 decade averages, monthly generation;
- seasonal average, temporally averaged from L3 monthly averages, seasonal generation;
- yearly average, temporally averaged from L3 monthly averages, yearly generation.
Dielectric Constant products (delivered together with the soil moisture products, with the same averaging periods and generation frequencies):
- maximum spatial resolution with no spatial averaging, temporal averaging of 3 days, daily generation;
- maximum spatial resolution with no spatial averaging, temporal averaging of 10 days, generated once every 10 days;
- maximum spatial resolution with no spatial averaging, temporal averaging over monthly decades (1st to 10th, 11th to 20th, and 21st to the last day of the month), generated once every decade;
- monthly average, temporally averaged from L3 decade averages, monthly generation;
- seasonal average, temporally averaged from L3 monthly averages, seasonal generation;
- yearly average, temporally averaged from L3 monthly averages, yearly generation.
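The monthly-decade binning used for the decade-averaged products (1st to 10th, 11th to 20th, 21st to end of month) can be sketched as:

```python
from datetime import date

def decade_of_month(d):
    """Map a date to its 'monthly decade': 1st-10th -> 1, 11th-20th -> 2,
    21st to the last day of the month -> 3."""
    return 1 if d.day <= 10 else 2 if d.day <= 20 else 3

print(decade_of_month(date(2009, 7, 5)))   # 1
print(decade_of_month(date(2009, 7, 15)))  # 2
print(decade_of_month(date(2009, 7, 31)))  # 3
```

L2 products falling in the same (year, month, decade) bin would then be averaged to produce one L3 decade product.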
Tani, Kazuki; Mio, Motohira; Toyofuku, Tatsuo; Kato, Shinichi; Masumoto, Tomoya; Ijichi, Tetsuya; Matsushima, Masatoshi; Morimoto, Shoichi; Hirata, Takumi
2017-01-01
Spatial normalization is a significant image pre-processing operation in statistical parametric mapping (SPM) analysis. The purpose of this study was to clarify the optimal method of spatial normalization for improving diagnostic accuracy in SPM analysis of arterial spin-labeling (ASL) perfusion images. We evaluated the SPM results of five spatial normalization methods by comparing patients with Alzheimer's disease or normal pressure hydrocephalus complicated by dementia against cognitively healthy subjects. We used the following methods: 3DT1-conventional, based on spatial normalization using anatomical images; 3DT1-DARTEL, based on spatial normalization with DARTEL using anatomical images; 3DT1-conventional template and 3DT1-DARTEL template, created by averaging cognitively healthy subjects spatially normalized using the above methods; and ASL-DARTEL template, created by averaging cognitively healthy subjects spatially normalized with DARTEL using ASL images only. Our results showed that the ASL-DARTEL template was small compared with the other two templates, and the SPM results obtained with the ASL-DARTEL template method were inaccurate. There were no significant differences between the 3DT1-conventional and 3DT1-DARTEL template methods. In contrast, the 3DT1-DARTEL method showed higher detection sensitivity and more precise anatomical localization. Our SPM results suggest that spatial normalization should be performed with DARTEL using anatomical images.
Statistical Considerations of Data Processing in Giovanni Online Tool
NASA Technical Reports Server (NTRS)
Shen, Suhung; Leptoukh, G.; Acker, J.; Berrick, S.
2005-01-01
The GES DISC Interactive Online Visualization and Analysis Infrastructure (Giovanni) is a web-based interface for the rapid visualization and analysis of gridded data from a number of remote sensing instruments. The GES DISC currently employs several Giovanni instances to analyze various products, such as Ocean-Giovanni for ocean products from SeaWiFS and MODIS-Aqua, TOMS & OMI Giovanni for atmospheric chemical trace gases from TOMS and OMI, and MOVAS for aerosols from MODIS (http://giovanni.gsfc.nasa.gov). Foremost among the Giovanni statistical functions is data averaging. Two aspects of this function are addressed here. The first deals with the accuracy of averaging gridded mapped products versus averaging from the ungridded Level 2 data. Some mapped products contain mean values only; others contain additional statistics, such as the number of pixels (NP) for each grid cell and the standard deviation. Since NP varies spatially and temporally, averages computed with and without weighting by NP will differ. In this paper, we address differences between various weighting algorithms for some datasets utilized in Giovanni. The second aspect is how different averaging methods affect data quality and interpretation for data with non-normal distributions. The present study demonstrates results of different spatial averaging methods using gridded SeaWiFS Level 3 mapped monthly chlorophyll a data. Spatial averages were calculated using three different methods: the arithmetic mean (AVG), the geometric mean (GEO), and the maximum likelihood estimator (MLE). Biogeochemical data, such as chlorophyll a, are usually considered to have a log-normal distribution. The study determined that differences between methods tend to increase with the size of a selected coastal area, with no significant differences in most open oceans. The GEO method consistently produces values lower than AVG and MLE. The AVG method produces values larger than MLE in some cases, but smaller in others.
Further studies indicated that significant differences between AVG and MLE methods occurred in coastal areas where data have large spatial variations and a log-bimodal distribution instead of log-normal distribution.
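A minimal sketch of the three spatial averaging methods compared above, applied to illustrative chlorophyll a values. For log-normal data the MLE of the mean takes the common form exp(mu + sigma^2/2) in log space; the paper's exact estimator may differ:

```python
import numpy as np

def spatial_means(chl):
    """Three spatial averages of (assumed log-normal) chlorophyll data:
    arithmetic mean (AVG), geometric mean (GEO), and a log-normal
    maximum-likelihood estimator of the mean (MLE)."""
    logs = np.log(chl)
    avg = float(chl.mean())
    geo = float(np.exp(logs.mean()))
    mle = float(np.exp(logs.mean() + logs.var() / 2.0))
    return avg, geo, mle

chl = np.array([0.1, 0.2, 0.4, 3.0])  # mg/m^3, illustrative values only
avg, geo, mle = spatial_means(chl)
print(avg, geo, mle)
assert geo < avg  # GEO never exceeds AVG (AM-GM inequality)
```

With one high-value pixel (as in a patchy coastal scene), GEO sits well below AVG and MLE, mirroring the differences reported above.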
Directional spatial frequency analysis of lipid distribution in atherosclerotic plaque
NASA Astrophysics Data System (ADS)
Korn, Clyde; Reese, Eric; Shi, Lingyan; Alfano, Robert; Russell, Stewart
2016-04-01
Atherosclerosis is characterized by the growth of fibrous plaques due to the retention of cholesterol and lipids within the artery wall, which can lead to vessel occlusion and cardiac events. One way to evaluate arterial disease is to quantify the amount of lipid present in these plaques, since a higher disease burden is characterized by a higher concentration of lipid. Although therapeutic stimulation of reverse cholesterol transport to reduce cholesterol deposits in plaque has not produced significant results, this may be due to current image analysis methods which use averaging techniques to calculate the total amount of lipid in the plaque without regard to spatial distribution, thereby discarding information that may have significance in marking response to therapy. Here we use Directional Fourier Spatial Frequency (DFSF) analysis to generate a characteristic spatial frequency spectrum for atherosclerotic plaques from C57 Black 6 mice both treated and untreated with a cholesterol scavenging nanoparticle. We then use the Cauchy product of these spectra to classify the images with a support vector machine (SVM). Our results indicate that treated plaque can be distinguished from untreated plaque using this method, where no difference is seen using the spatial averaging method. This work has the potential to increase the effectiveness of current in-vivo methods of plaque detection that also use averaging methods, such as laser speckle imaging and Raman spectroscopy.
Inverse methods for estimating primary input signals from time-averaged isotope profiles
NASA Astrophysics Data System (ADS)
Passey, Benjamin H.; Cerling, Thure E.; Schuster, Gerard T.; Robinson, Todd F.; Roeder, Beverly L.; Krueger, Stephen K.
2005-08-01
Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This time-averaging is a result of the temporal and spatial characteristics of amelogenesis (tooth enamel formation), and also results from laboratory sampling. This paper develops and evaluates an inverse method for reconstructing original input signals from time-averaged intra-tooth isotope profiles. The method requires that the temporal and spatial patterns of amelogenesis are known for the specific tooth and uses a minimum length solution of the linear system Am = d, where d is the measured isotopic profile, A is a matrix describing temporal and spatial averaging during amelogenesis and sampling, and m is the input vector that is sought. Accuracy is dependent on several factors, including the total measurement error and the isotopic structure of the measured profile. The method is shown to accurately reconstruct known input signals for synthetic tooth enamel profiles and the known input signal for a rabbit that underwent controlled dietary changes. Application to carbon isotope profiles of modern hippopotamus canines reveals detailed dietary histories that are not apparent from the measured data alone. Inverse methods show promise as an effective means of dealing with the time-averaging problem in studies of intra-tooth isotopic variation.
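The minimum-length solution of Am = d can be sketched with a pseudoinverse. The averaging matrix below is a toy two-point moving average, not the amelogenesis model used in the paper:

```python
import numpy as np

# Recover an input signal m from a time-averaged profile d = A m, where each
# measured value is an average over part of the input. A is a toy averaging
# matrix; the real A encodes amelogenesis geometry and laboratory sampling.
A = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.5, 0.5, 0.0],
    [0.0, 0.0, 0.5, 0.5],
])
m_true = np.array([1.0, 3.0, 3.0, 1.0])  # hypothetical input signal
d = A @ m_true                           # the "measured", averaged profile
m_est = np.linalg.pinv(A) @ d            # minimum-length solution of A m = d
print(d)      # [2. 3. 2.]
print(m_est)  # recovers [1. 3. 3. 1.] here
```

The system is underdetermined (three measurements, four unknowns), so the pseudoinverse returns the minimum-norm solution; in this toy case it happens to coincide with the true input because m_true lies in the row space of A. In practice, accuracy depends on measurement error and the isotopic structure of the profile, as the abstract notes.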
Urban noise functional stratification for estimating average annual sound level.
Rey Gozalo, Guillermo; Barrigón Morillas, Juan Miguel; Prieto Gajardo, Carlos
2015-06-01
Road traffic noise causes many health problems and deteriorates the quality of urban life; thus, adequate spatial and temporal noise assessment methods are required. Different methods have been proposed for the spatial evaluation of noise in cities, including the categorization method. Until now, this method has only been applied to the study of spatial variability, with measurements taken over a week. In this work, continuous measurements over 1 year carried out at 21 different locations in Madrid (Spain), which has more than three million inhabitants, were analyzed. The annual average sound levels and the temporal variability were studied in the proposed categories. The results show that the three proposed categories highlight the spatial noise stratification of the studied city in each period of the day (day, evening, and night) and in the overall indicators (L(Adn), L(Aden), and L(A24)). Also, significant differences between the diurnal and nocturnal sound levels show functional stratification in these categories. Therefore, this functional stratification offers advantages from both spatial and temporal perspectives by reducing the number of sampling points and the measurement time.
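Annual indicators such as L(A24) are built by averaging sound levels energetically rather than arithmetically, since decibels are logarithmic. A minimal sketch of this standard acoustics practice (not a formula stated in the abstract):

```python
import math

def energetic_average(levels_db):
    """Average A-weighted sound levels energetically: convert dB to relative
    power, take the arithmetic mean of the powers, convert back to dB."""
    powers = [10 ** (L / 10.0) for L in levels_db]
    return 10.0 * math.log10(sum(powers) / len(powers))

# Two equal-duration periods at 60 dB and 70 dB do NOT average to 65 dB:
print(round(energetic_average([60.0, 70.0]), 1))  # 67.4
```

The louder period dominates the energetic average, which is why long-term indicators are sensitive to short loud events.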
NASA Astrophysics Data System (ADS)
Cai, Jingya; Pang, Zhiguo; Fu, Jun'e.
2018-04-01
To quantitatively analyze the spatial features of a cosmic-ray sensor (CRS) (i.e., the measurement support volume of the CRS and the weight of the in situ point-scale soil water content (SWC) in terms of the regionally averaged SWC derived from the CRS) in measuring the SWC, cooperative observations based on CRS, oven drying and frequency domain reflectometry (FDR) methods are performed at the point and regional scales in a desert steppe area of the Inner Mongolia Autonomous Region. This region is flat with sparse vegetation cover consisting of only grass, thereby minimizing the effects of terrain and vegetation. Considering the two possibilities of the measurement support volume of the CRS, the results of four weighting methods are compared with the SWC monitored by FDR within an appropriate measurement support volume. The weighted average calculated using the neutron intensity-based weighting method (Ni weighting method) best fits the regionally averaged SWC measured by the CRS. Therefore, we conclude that the gyroscopic support volume and the weights determined by the Ni weighting method are the closest to the actual spatial features of the CRS when measuring the SWC. Based on these findings, a scale transformation model of the SWC from the point scale to the scale of the CRS measurement support volume is established. In addition, the spatial features simulated using the Ni weighting method are visualized by developing a software system.
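The scale-transformation step can be sketched as a plain weighted mean of the point-scale measurements. The weights below are illustrative placeholders for the neutron-intensity-based (Ni) weights; their actual computation from neutron intensity is not shown here:

```python
import numpy as np

def weighted_regional_swc(point_swc, weights):
    """Regionally averaged soil water content (SWC) as a weighted mean of
    in situ point measurements within the CRS measurement support volume."""
    w = np.asarray(weights, dtype=float)
    return float(np.sum(w * np.asarray(point_swc)) / np.sum(w))

swc = np.array([0.10, 0.14, 0.12])  # volumetric SWC at three FDR points
w   = np.array([3.0, 1.0, 2.0])     # hypothetical Ni-style weights
print(weighted_regional_swc(swc, w))
```

The weighted mean is what gets compared against the CRS-derived regional SWC when evaluating which weighting method best matches the sensor's spatial footprint.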
Global Surface Temperature Change and Uncertainties Since 1861
NASA Technical Reports Server (NTRS)
Shen, Samuel S. P.; Lau, William K. M. (Technical Monitor)
2002-01-01
The objective of this talk is to analyze the warming trend of the global and hemispheric surface temperatures and its uncertainties. Using a statistical optimal averaging scheme, land surface air temperature and sea surface temperature observations are used to compute the spatially averaged annual mean surface air temperature. The optimal averaging method is derived from the minimization of the mean square error between the true and estimated averages and uses the empirical orthogonal functions. The method can accurately estimate the errors of the spatial average due to observational gaps and random measurement errors. In addition, three independent uncertainty factors are quantified: urbanization, changes in the in situ observational practices, and sea surface temperature data corrections. Based on these uncertainties, the best linear fit to annual global surface temperature gives an increase of 0.61 +/- 0.16 C between 1861 and 2000. The lecture will also touch on the impact of global change on nature and the environment, as well as the latest assessment methods for the attribution of global change.
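As a much simpler stand-in for the optimal averaging scheme, a cosine-latitude weighted spatial mean illustrates why averaging gridded temperatures requires area weights at all; all values here are hypothetical:

```python
import numpy as np

def area_weighted_mean(temps, lats):
    """Cosine-latitude weighted spatial average of zonal-mean temperatures.
    High-latitude cells cover less area, so they get smaller weights; the
    optimal averaging scheme refines this with EOF-based error minimization."""
    w = np.cos(np.deg2rad(lats))
    return float(np.sum(w * temps) / np.sum(w))

lats  = np.array([-60.0, 0.0, 60.0])   # zonal band centers (degrees)
temps = np.array([0.0, 28.0, 2.0])     # hypothetical zonal means (C)
print(round(area_weighted_mean(temps, lats), 2))  # 14.5
```

An unweighted mean of the same values would be 10.0 C, showing how much the weighting matters for a global average.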
Hinckley, A; Bachand, A; Nuckols, J; Reif, J
2005-01-01
Background and Aims: Epidemiological studies of disinfection by-products (DBPs) and reproductive outcomes have been hampered by misclassification of exposure. In most epidemiological studies conducted to date, all persons living within the boundaries of a water distribution system have been assigned a common exposure value based on facility-wide averages of trihalomethane (THM) concentrations. Since THMs do not develop uniformly throughout a distribution system, assignment of facility-wide averages may be inappropriate. One approach to mitigate this potential for misclassification is to select communities for epidemiological investigations that are served by distribution systems with consistently low spatial variability of THMs. Methods and Results: A feasibility study was conducted to develop methods for community selection using the Information Collection Rule (ICR) database, assembled by the US Environmental Protection Agency. The ICR database contains quarterly DBP concentrations collected between 1997 and 1998 from the distribution systems of 198 public water facilities with minimum service populations of 100 000 persons. Facilities with low spatial variation of THMs were identified using two methods; 33 facilities were found with low spatial variability based on one or both methods. Because brominated THMs may be important predictors of risk for adverse reproductive outcomes, sites were categorised into three exposure profiles according to proportion of brominated THM species and average TTHM concentration. The correlation between THMs and haloacetic acids (HAAs) in these facilities was evaluated to see whether selection by total trihalomethanes (TTHMs) corresponds to low spatial variability for HAAs. TTHMs were only moderately correlated with HAAs (r = 0.623). 
Conclusions: Results provide a simple method for a priori selection of sites with low spatial variability from state or national public water facility datasets as a means to reduce exposure misclassification in epidemiological studies of DBPs. PMID:15961627
NASA Astrophysics Data System (ADS)
Brown-Steiner, B.; Selin, N. E.; Prinn, R. G.; Monier, E.; Garcia-Menendez, F.; Tilmes, S.; Emmons, L. K.; Lamarque, J. F.; Cameron-Smith, P. J.
2017-12-01
We summarize two methods to aid in the identification of ozone signals from underlying spatially and temporally heterogeneous data in order to help research communities avoid the sometimes burdensome computational costs of high-resolution high-complexity models. The first method utilizes simplified chemical mechanisms (a Reduced Hydrocarbon Mechanism and a Superfast Mechanism) alongside a more complex mechanism (MOZART-4) within CESM CAM-Chem to extend the number of simulated meteorological years (or add additional members to an ensemble) for a given modeling problem. The Reduced Hydrocarbon mechanism is twice as fast, and the Superfast mechanism is three times faster than the MOZART-4 mechanism. We show that simplified chemical mechanisms are largely capable of simulating surface ozone across the globe as well as the more complex chemical mechanisms, and where they are not capable, a simple standardized anomaly emulation approach can correct for their inadequacies. The second method uses strategic averaging over both temporal and spatial scales to filter out the highly heterogeneous noise that underlies ozone observations and simulations. This method allows for a selection of temporal and spatial averaging scales that match a particular signal strength (between 0.5 and 5 ppbv), and enables the identification of regions where an ozone signal can rise above the ozone noise over a given region and a given period of time. In conjunction, these two methods can be used to "scale down" chemical mechanism complexity and quantitatively determine spatial and temporal scales that could enable research communities to utilize simplified representations of atmospheric chemistry and thereby maximize their productivity and efficiency given computational constraints. While this framework is here applied to ozone data, it could also be applied to a broad range of geospatial data sets (observed or modeled) that have spatial and temporal coverage.
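The idea of enlarging the averaging scale until a signal of a chosen strength (0.5 to 5 ppbv) rises above the noise can be sketched under the simplifying assumption of independent samples, where the standard error falls as the square root of the sample count:

```python
import math

def averaging_scale_for_signal(noise_std, target_ppbv):
    """Smallest number of (assumed independent) samples to average so the
    standard error of the mean drops below a target signal strength (ppbv).
    Real ozone fields are correlated in space and time, so this understates
    the averaging needed; it is a sketch of the scaling, not the method."""
    n = 1
    while noise_std / math.sqrt(n) > target_ppbv:
        n += 1
    return n

# Ozone noise of 10 ppbv; to resolve a 2 ppbv signal we need to average over:
print(averaging_scale_for_signal(10.0, 2.0))  # 25 samples
```

The quadratic growth of the required sample count (halving the target signal quadruples the averaging) is what makes the choice of spatial and temporal averaging scales a genuine trade-off.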
Xiao, Yangfan; Yi, Shanzhen; Tang, Zhongqian
2017-12-01
Floods are the most common natural hazard in the world and have caused serious loss of life and property. Assessment of flood-prone areas is of great importance for watershed management and for reducing potential losses. In this study, a framework of multi-criteria analysis (MCA) incorporating a geographic information system (GIS), the fuzzy analytic hierarchy process (AHP), and the spatial ordered weighted averaging (OWA) method was developed for flood hazard assessment. Factors associated with the geographical, hydrological, and flood-resistance characteristics of the basin were selected as evaluation criteria. The relative importance of the criteria was estimated through the fuzzy AHP method. The OWA method was used to analyze the effects of the decision maker's different risk attitudes on the assessment result. The spatial OWA method, with spatially variable risk preference, was implemented in the GIS environment to integrate the criteria. The advantage of the proposed method is that it considers spatial heterogeneity when assigning risk preference in the decision-making process. The methodology was applied to an area including Hanyang, Caidian, and Hannan in Wuhan, China, where flood events occur frequently. The resulting flood hazard distribution shows a tendency of high risk toward populated and developed areas, especially the northeastern part of Hanyang, which has suffered frequent floods in history. The result indicates where enhancement projects should be carried out first under the condition of limited resources. Finally, the sensitivity of the criteria weights was analyzed to measure the stability of the results with respect to variations in those weights. The flood hazard assessment method presented in this paper is adaptable to similar basins, which is of great significance for establishing countermeasures to mitigate loss of life and property.
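A minimal sketch of ordered weighted averaging, showing how the order weights encode the decision maker's risk attitude; the criterion scores and weights are made up:

```python
import numpy as np

def owa(criteria, order_weights):
    """Ordered weighted averaging: sort the criterion scores in descending
    order, then apply the order weights positionally. Weight concentrated on
    the first positions gives an optimistic (risk-taking) aggregation; on the
    last positions, a pessimistic (risk-averse) one."""
    ranked = np.sort(np.asarray(criteria, dtype=float))[::-1]
    w = np.asarray(order_weights, dtype=float)
    return float(np.sum(w * ranked) / np.sum(w))

scores = [0.9, 0.2, 0.6]                 # standardized hazard criteria
print(owa(scores, [1.0, 0.0, 0.0]))      # optimistic: takes the max -> 0.9
print(owa(scores, [0.0, 0.0, 1.0]))      # pessimistic: takes the min -> 0.2
print(owa(scores, [1.0, 1.0, 1.0]))      # neutral: the plain mean
```

The "spatially variable risk preference" described above amounts to letting the order-weight vector vary from cell to cell across the GIS layer.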
Brownian systems with spatially inhomogeneous activity
NASA Astrophysics Data System (ADS)
Sharma, A.; Brader, J. M.
2017-09-01
We generalize the Green-Kubo approach, previously applied to bulk systems of spherically symmetric active particles [J. Chem. Phys. 145, 161101 (2016), 10.1063/1.4966153], to include spatially inhomogeneous activity. The method is applied to predict the spatial dependence of the average orientation per particle and the density. The average orientation is given by an integral over the self part of the Van Hove function and a simple Gaussian approximation to this quantity yields an accurate analytical expression. Taking this analytical result as input to a dynamic density functional theory approximates the spatial dependence of the density in good agreement with simulation data. All theoretical predictions are validated using Brownian dynamics simulations.
NASA Astrophysics Data System (ADS)
Bernhardt, Jase; Carleton, Andrew M.
2018-05-01
The two main methods for determining the average daily near-surface air temperature, twice-daily averaging (i.e., [Tmax+Tmin]/2) and hourly averaging (i.e., the average of 24 hourly temperature measurements), typically show differences associated with the asymmetry of the daily temperature curve. To quantify the relative influence of several land surface and atmosphere variables on the two temperature averaging methods, we correlate data for 215 weather stations across the Contiguous United States (CONUS) for the period 1981-2010 with the differences between the two temperature-averaging methods. The variables are land use-land cover (LULC) type, soil moisture, snow cover, cloud cover, atmospheric moisture (i.e., specific humidity, dew point temperature), and precipitation. Multiple linear regression models explain the spatial and monthly variations in the difference between the two temperature-averaging methods. We find statistically significant correlations between both the land surface and atmosphere variables studied with the difference between temperature-averaging methods, especially for the extreme (i.e., summer, winter) seasons (adjusted R2 > 0.50). Models considering stations with certain LULC types, particularly forest and developed land, have adjusted R2 values > 0.70, indicating that both surface and atmosphere variables control the daily temperature curve and its asymmetry. This study improves our understanding of the role of surface and near-surface conditions in modifying thermal climates of the CONUS for a wide range of environments, and their likely importance as anthropogenic forcings—notably LULC changes and greenhouse gas emissions—continues.
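The difference between the two averaging methods is easy to see on a deliberately exaggerated asymmetric daily temperature curve (synthetic values, not station data):

```python
import numpy as np

# Exaggerated asymmetric day: 18 cool hours at 10 C, 6 warm hours at 22 C
temps = np.array([10.0] * 18 + [22.0] * 6)

t_hourly = float(temps.mean())                       # average of 24 hourly values
t_minmax = float((temps.max() + temps.min()) / 2.0)  # (Tmax + Tmin) / 2
print(t_hourly, t_minmax, t_minmax - t_hourly)  # 13.0 16.0 3.0
```

Because the warm part of the day is short, (Tmax+Tmin)/2 overestimates the hourly mean by 3 C here; it is exactly this asymmetry-driven difference that the regression models above relate to land surface and atmosphere variables.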
NASA Technical Reports Server (NTRS)
Boyadjian, N. G.; Dallakyan, P. Y.; Garyaka, A. P.; Mamidjanian, E. A.
1985-01-01
A method for calculating the average spatial and energy characteristics of hadron-lepton cascades in the atmosphere is described. The results of calculations for various strong interaction models of primary protons and nuclei are presented. The sensitivity of the experimentally observed extensive air showers (EAS) characteristics to variations of the elementary act parameters is analyzed.
Spatial cluster detection using dynamic programming
2012-01-01
Background The task of spatial cluster detection involves finding spatial regions where some property deviates from the norm or the expected value. In a probabilistic setting this task can be expressed as finding a region where some event is significantly more likely than usual. Spatial cluster detection is of interest in fields such as biosurveillance, mining of astronomical data, military surveillance, and analysis of fMRI images. In almost all such applications we are interested both in the question of whether a cluster exists in the data, and if it exists, we are interested in finding the most accurate characterization of the cluster. Methods We present a general dynamic programming algorithm for grid-based spatial cluster detection. The algorithm can be used for both Bayesian maximum a-posteriori (MAP) estimation of the most likely spatial distribution of clusters and Bayesian model averaging over a large space of spatial cluster distributions to compute the posterior probability of an unusual spatial clustering. The algorithm is explained and evaluated in the context of a biosurveillance application, specifically the detection and identification of Influenza outbreaks based on emergency department visits. A relatively simple underlying model is constructed for the purpose of evaluating the algorithm, and the algorithm is evaluated using the model and semi-synthetic test data. Results When compared to baseline methods, tests indicate that the new algorithm can improve MAP estimates under certain conditions: the greedy algorithm we compared our method to was found to be more sensitive to smaller outbreaks, while as the size of the outbreaks increases, in terms of area affected and proportion of individuals affected, our method overtakes the greedy algorithm in spatial precision and recall. The new algorithm performs on-par with baseline methods in the task of Bayesian model averaging. 
Conclusions We conclude that the dynamic programming algorithm performs on par with other available methods for spatial cluster detection and point to its low computational cost and extensibility as advantages in favor of further research and use of the algorithm. PMID:22443103
Spatial averaging of a dissipative particle dynamics model for active suspensions
NASA Astrophysics Data System (ADS)
Panchenko, Alexander; Hinz, Denis F.; Fried, Eliot
2018-03-01
Starting from a fine-scale dissipative particle dynamics (DPD) model of self-motile point particles, we derive meso-scale continuum equations by applying a spatial averaging version of the Irving-Kirkwood-Noll procedure. Since the method does not rely on kinetic theory, the derivation is valid for highly concentrated particle systems. Spatial averaging yields stochastic continuum equations similar to those of Toner and Tu. However, our theory also involves a constitutive equation for the average fluctuation force. According to this equation, both the strength and the probability distribution vary with time and position through the effective mass density. The statistics of the fluctuation force also depend on the fine scale dissipative force equation, the physical temperature, and two additional parameters which characterize fluctuation strengths. Although the self-propulsion force entering our DPD model contains no explicit mechanism for aligning the velocities of neighboring particles, our averaged coarse-scale equations include the commonly encountered cubically nonlinear (internal) body force density.
NASA Astrophysics Data System (ADS)
Dwi Nugroho, Kreshna; Pebrianto, Singgih; Arif Fatoni, Muhammad; Fatikhunnada, Alvin; Liyantono; Setiawan, Yudi
2017-01-01
Information on the area and spatial distribution of paddy fields is needed to support sustainable agriculture and food security programs. Mapping the cropping patterns of paddy fields is important for maintaining a sustainable paddy field area, and can be done by direct observation or by remote sensing. This paper discusses remote sensing for paddy field monitoring based on MODIS time series data. Time series MODIS data are difficult to classify directly because of temporal noise; therefore, filtering methods such as the wavelet transform and the moving average are needed. The objective of this study is to recognize paddy cropping patterns in West Java with the wavelet transform and the moving average applied to MODIS imagery (MOD13Q1) from 2001 to 2015, and to compare the two methods. The results showed that both methods produce almost the same spatial distribution of cropping patterns. The accuracy of the wavelet transform (75.5%) is higher than that of the moving average (70.5%). Both methods showed that the majority of paddy fields in West Java follow a paddy-fallow-paddy-fallow pattern with varying planting times. Differences in planting schedule were caused by the availability of irrigation water.
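The moving-average filtering step described above can be sketched as follows. The NDVI-like series here is synthetic and the window length is an illustrative choice, not the one used in the study:

```python
import numpy as np

def moving_average(series, window=3):
    """Centered moving-average filter for a 1-D time series.

    Edges are handled by reflecting the series, so the output
    has the same length as the input.
    """
    if window % 2 == 0:
        raise ValueError("window must be odd for a centered filter")
    pad = window // 2
    padded = np.pad(np.asarray(series, dtype=float), pad, mode="reflect")
    kernel = np.ones(window) / window
    return np.convolve(padded, kernel, mode="valid")

# Hypothetical two-season NDVI signal at 8-day compositing intervals,
# corrupted by temporal noise such as residual cloud contamination.
t = np.arange(46)
ndvi = 0.4 + 0.3 * np.sin(2 * np.pi * t / 23.0)
noisy = ndvi + 0.1 * np.random.default_rng(0).standard_normal(t.size)
smooth = moving_average(noisy, window=5)
```

After smoothing, planting and fallow periods can be read off as the rising and falling phases of the filtered curve.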
Spatial cluster detection using dynamic programming.
Sverchkov, Yuriy; Jiang, Xia; Cooper, Gregory F
2012-03-25
The task of spatial cluster detection involves finding spatial regions where some property deviates from the norm or the expected value. In a probabilistic setting this task can be expressed as finding a region where some event is significantly more likely than usual. Spatial cluster detection is of interest in fields such as biosurveillance, mining of astronomical data, military surveillance, and analysis of fMRI images. In almost all such applications we are interested both in whether a cluster exists in the data and, if it exists, in finding the most accurate characterization of the cluster. We present a general dynamic programming algorithm for grid-based spatial cluster detection. The algorithm can be used both for Bayesian maximum a posteriori (MAP) estimation of the most likely spatial distribution of clusters and for Bayesian model averaging over a large space of spatial cluster distributions to compute the posterior probability of an unusual spatial clustering. The algorithm is explained and evaluated in the context of a biosurveillance application, specifically the detection and identification of influenza outbreaks based on emergency department visits. A relatively simple underlying model is constructed for the purpose of evaluating the algorithm, and the algorithm is evaluated using the model and semi-synthetic test data. When compared to baseline methods, tests indicate that the new algorithm can improve MAP estimates under certain conditions: the greedy algorithm we compared our method to was found to be more sensitive to smaller outbreaks, whereas as outbreak size increases, in terms of area affected and proportion of individuals affected, our method overtakes the greedy algorithm in spatial precision and recall. The new algorithm performs on par with baseline methods in the task of Bayesian model averaging.
We conclude that the dynamic programming algorithm performs on par with other available methods for spatial cluster detection and point to its low computational cost and extensibility as advantages in favor of further research and use of the algorithm.
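As a simplified illustration of grid-based cluster scanning (a frequentist stand-in, not the authors' Bayesian dynamic-programming algorithm), a prefix-sum table lets every rectangular region be totaled in constant time, after which each rectangle can be scored with a Poisson log-likelihood ratio against its expected count:

```python
import numpy as np

def best_rectangle(counts, baseline):
    """Exhaustive rectangular scan over a grid of event counts.

    A prefix-sum (summed-area) table makes each rectangle's count
    and baseline O(1) to evaluate; rectangles are scored with a
    Poisson log-likelihood ratio for over-dense regions.
    """
    C = np.pad(np.cumsum(np.cumsum(counts, 0), 1), ((1, 0), (1, 0)))
    B = np.pad(np.cumsum(np.cumsum(baseline, 0), 1), ((1, 0), (1, 0)))
    n, m = counts.shape
    best, best_rect = 0.0, None
    for r0 in range(n):
        for r1 in range(r0 + 1, n + 1):
            for c0 in range(m):
                for c1 in range(c0 + 1, m + 1):
                    c = C[r1, c1] - C[r0, c1] - C[r1, c0] + C[r0, c0]
                    b = B[r1, c1] - B[r0, c1] - B[r1, c0] + B[r0, c0]
                    if c > b > 0:  # score only over-dense regions
                        llr = c * np.log(c / b) - (c - b)
                        if llr > best:
                            best, best_rect = llr, (r0, r1, c0, c1)
    return best, best_rect
```

On an 8×8 grid with a uniform baseline and a 2×2 block of elevated counts, the scan recovers exactly that block as the highest-scoring region.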
PET and MRI image fusion based on combination of 2-D Hilbert transform and IHS method.
Haddadpour, Mozhdeh; Daneshvar, Sabalan; Seyedarabi, Hadi
2017-08-01
The process of medical image fusion combines two or more medical images, such as a magnetic resonance image (MRI) and a positron emission tomography (PET) image, and maps them to a single fused image. The purpose of our study is to assist physicians in diagnosing and treating diseases in the least possible time. We used an MRI and a PET image as inputs and fused them based on a combination of the two-dimensional Hilbert transform (2-D HT) and the intensity-hue-saturation (IHS) method. Three common evaluation metrics were applied: the discrepancy (Dk), which assesses spectral features; the average gradient (AGk), which assesses spatial features; and the overall performance (O.P), which verifies the soundness of the proposed method. Since the main purpose of medical image fusion is to preserve both the spatial and spectral features of the input images, the numerical results of these metrics, together with the simulated results, indicate that the proposed method preserves both. Copyright © 2017 Chang Gung University. Published by Elsevier B.V. All rights reserved.
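The average gradient metric used above has a standard form: the mean magnitude of local intensity gradients. Normalization conventions vary between papers, so treat this as one common variant:

```python
import numpy as np

def average_gradient(img):
    """Average gradient (AG): mean local gradient magnitude,
    a common proxy for how much spatial detail a fused image
    retains.  Higher AG indicates sharper spatial features.
    """
    img = np.asarray(img, dtype=float)
    gx = np.diff(img, axis=1)[:-1, :]  # horizontal differences
    gy = np.diff(img, axis=0)[:, :-1]  # vertical differences
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))
```

A flat image scores zero; a unit-slope ramp scores sqrt(1/2), since only one gradient component is nonzero.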
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martínez-García, Eric E.; González-Lópezlira, Rosa A.; Bruzual A, Gustavo
2017-01-20
Stellar masses of galaxies are frequently obtained by fitting stellar population synthesis models to galaxy photometry or spectra. The state-of-the-art method resolves spatial structures within a galaxy to assess the total stellar mass content. In comparison to unresolved studies, resolved methods yield, on average, higher fractions of stellar mass for galaxies. In this work we improve the current method in order to mitigate a bias related to the resolved spatial distribution derived for the mass. The bias consists of an apparent filamentary mass distribution and a spatial coincidence between mass structures and dust lanes near spiral arms. The improved method is based on iterative Bayesian marginalization, through a new algorithm we have named Bayesian Successive Priors (BSP). We have applied BSP to M51 and to a pilot sample of 90 spiral galaxies from the Ohio State University Bright Spiral Galaxy Survey. By quantitatively comparing both methods, we find that the average fraction of stellar mass missed by unresolved studies is only half what was previously thought. In contrast with the previous method, the output BSP mass maps bear a better resemblance to near-infrared images.
Lee, Mi Hee; Lee, Soo Bong; Eo, Yang Dam; Kim, Sun Woong; Woo, Jung-Hun; Han, Soo Hee
2017-07-01
Landsat optical images have sufficient spatial and spectral resolution to analyze vegetation growth characteristics, but clouds and water vapor often degrade image quality, which limits the availability of usable images for time series vegetation vitality measurement. To overcome this shortcoming, simulated images are used as an alternative. In this study, the weighted average method, the spatial and temporal adaptive reflectance fusion model (STARFM) method, and multilinear regression analysis were tested to produce simulated Landsat normalized difference vegetation index (NDVI) images of the Korean Peninsula. The test results showed that the weighted average method produced the images most similar to the actual images, provided that input images were available within one month before and after the target date. The STARFM method gives good results when the input image date is close to the target date; careful regional and seasonal consideration is required in selecting input images. During the summer season, clouds make it very difficult to obtain images close enough to the target date. Multilinear regression analysis gives meaningful results even when the input image date is not close to the target date. Average R² values for the weighted average method, STARFM, and multilinear regression analysis were 0.741, 0.70, and 0.61, respectively.
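A weighted average method of this kind can be sketched as an inverse-time-distance blend of one image acquired before and one after the target date; the study's exact weighting scheme may differ, and the function name here is illustrative:

```python
import numpy as np

def weighted_ndvi(before, after, t_before, t_after, t_target):
    """Simulate an NDVI image at t_target as the weighted average of
    an earlier and a later image, with weights proportional to
    temporal proximity (the nearer image contributes more).
    """
    d1 = abs(t_target - t_before)   # days from the earlier image
    d2 = abs(t_after - t_target)    # days to the later image
    w1, w2 = d2 / (d1 + d2), d1 / (d1 + d2)
    return w1 * np.asarray(before, float) + w2 * np.asarray(after, float)
```

For images 10 and 20 days from the target, the nearer image receives weight 2/3 and the farther one 1/3.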
Oppugning the assumptions of spatial averaging of segment and joint orientations.
Pierrynowski, Michael Raymond; Ball, Kevin Arthur
2009-02-09
Movement scientists frequently calculate "arithmetic averages" when examining body segment or joint orientations. Such calculations appear routinely, yet are fundamentally flawed. Three-dimensional orientation data are computed as matrices, yet three ordered Euler/Cardan/Bryant angle parameters are frequently used for interpretation. These parameters are not geometrically independent; thus, the conventional process of averaging each parameter is incorrect. The process of arithmetic averaging also assumes that the distances between data are linear (Euclidean); however, for orientation data these distances are geodesically curved (Riemannian). Therefore we question (oppugn) whether the conventional averaging approach is an appropriate statistic. Fortunately, exact methods of averaging orientation data have been developed which both circumvent the parameterization issue and explicitly acknowledge the Euclidean or Riemannian distance measures. The details of these matrix-based averaging methods are presented and their theoretical advantages discussed. The Euclidean and Riemannian approaches offer appealing advantages over the conventional technique. With respect to practical biomechanical relevancy, examinations of simulated data suggest that for sets of orientation data with low dispersion, an isotropic distribution, and second and third angle parameters below 30 degrees, discrepancies with the conventional approach are less than 1.1 degrees. However, beyond these limits, arithmetic averaging can produce substantive non-linear inaccuracies in all three parameterized angles. The biomechanics community is encouraged to recognize the limitations of the conventional method of averaging orientations. Investigations requiring more robust spatial averaging over a broader range of orientations may benefit from matrix-based Euclidean or Riemannian calculations.
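A minimal sketch of the Euclidean (chordal) flavor of matrix-based averaging, assuming rotation matrices as input: take the element-wise mean and project it back onto the rotation group with an SVD. The Riemannian variant would instead average geodesically (e.g., in the tangent space via matrix logarithms):

```python
import numpy as np

def chordal_mean(rotations):
    """Euclidean (chordal) average of rotation matrices: arithmetic
    mean of the matrix entries, projected back onto SO(3) via SVD,
    with the determinant sign fixed to keep a proper rotation.
    """
    M = np.mean(np.asarray(rotations, dtype=float), axis=0)
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt

def rot_z(theta):
    """Rotation by theta radians about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
```

Averaging two opposite small rotations about z recovers the identity, and the result is guaranteed to be a valid rotation matrix, unlike a naive entry-wise mean.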
The stock-flow model of spatial data infrastructure development refined by fuzzy logic.
Abdolmajidi, Ehsan; Harrie, Lars; Mansourian, Ali
2016-01-01
The system dynamics technique has been demonstrated to be a proper method by which to model and simulate the development of spatial data infrastructures (SDI). An SDI is a collaborative effort to manage and share spatial data at different political and administrative levels. It comprises various dynamically interacting quantitative and qualitative (linguistic) variables. To incorporate linguistic variables and their joint effects in an SDI-development model more effectively, we suggest employing fuzzy logic. Not all fuzzy models are able to model the dynamic behavior of SDIs properly; therefore, this paper investigates different fuzzy models and their suitability for modeling SDIs. To that end, two inference and two defuzzification methods were used for the fuzzification of the joint effect of two variables in an existing SDI model. The results show that Average-Average inference and Center of Area defuzzification best capture the dynamics of SDI development.
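For a sampled membership function, the Center of Area defuzzification singled out above reduces to a membership-weighted average over the support; this is a generic sketch, not the authors' full SDI model:

```python
import numpy as np

def center_of_area(x, membership):
    """Center-of-Area (centroid) defuzzification: the crisp output is
    the membership-weighted average of the support points x.
    """
    x = np.asarray(x, dtype=float)
    mu = np.asarray(membership, dtype=float)
    return np.sum(x * mu) / np.sum(mu)
```

By symmetry, a triangular membership function centered at 0.5 defuzzifies to exactly 0.5.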
Multimodal Medical Image Fusion by Adaptive Manifold Filter.
Geng, Peng; Liu, Shuaiqi; Zhuang, Shanna
2015-01-01
Medical image fusion plays an important role in the diagnosis and treatment of disease, for example in image-guided radiotherapy and surgery. Modified local contrast information is proposed to fuse multimodal medical images. Firstly, the adaptive manifold filter is introduced to filter the source images as the low-frequency part of the modified local contrast. Secondly, the modified spatial frequency of the source images is adopted as the high-frequency part of the modified local contrast. Finally, the pixel with the larger modified local contrast is selected into the fused image. The presented scheme outperforms the guided filter method in the spatial domain, the dual-tree complex wavelet transform-based method, the nonsubsampled contourlet transform-based method, and four classic fusion methods in terms of visual quality. Furthermore, for the six pairs of source images, the mutual information values of the presented method are on average 55%, 41%, and 62% higher than those of the three methods, and the edge-based similarity measure values are on average 13%, 33%, and 14% higher.
Parameterisation of multi-scale continuum perfusion models from discrete vascular networks.
Hyde, Eoin R; Michler, Christian; Lee, Jack; Cookson, Andrew N; Chabiniok, Radek; Nordsletten, David A; Smith, Nicolas P
2013-05-01
Experimental data and advanced imaging techniques are increasingly enabling the extraction of detailed vascular anatomy from biological tissues. Incorporation of anatomical data within perfusion models is non-trivial, due to heterogeneous vessel density and disparate radii scales. Furthermore, previous idealised networks have assumed a spatially repeating motif or periodic canonical cell, thereby allowing for a flow solution via homogenisation. However, such periodicity is not observed throughout anatomical networks. In this study, we apply various spatial averaging methods to discrete vascular geometries in order to parameterise a continuum model of perfusion. Specifically, a multi-compartment Darcy model was used to provide vascular scale separation for the fluid flow. Permeability tensor fields were derived from both synthetic and anatomically realistic networks using (1) porosity-scaled isotropic, (2) Huyghe and Van Campen, and (3) projected-PCA methods. The Darcy pressure fields were compared via a root-mean-square error metric to an averaged Poiseuille pressure solution over the same domain. The method of Huyghe and Van Campen performed better than the other two methods in all simulations, even for relatively coarse networks. Furthermore, inter-compartment volumetric flux fields, determined using the spatially averaged discrete flux per unit pressure difference, were shown to be accurate across a range of pressure boundary conditions. This work justifies the application of continuum flow models to characterise perfusion resulting from flow in an underlying vascular network.
NASA Astrophysics Data System (ADS)
Wood, Brian; He, Xiaoliang; Apte, Sourabh
2017-11-01
Turbulent flows through porous media are encountered in a number of natural and engineered systems. Many attempts to close the Navier-Stokes equation for this type of flow have been made, for example using RANS models and double averaging. On the other hand, Whitaker (1996) applied the volume averaging theorem to close the macroscopic N-S equation for low-Re flow. In this work, the volume averaging theory is extended into the turbulent flow regime to posit a relationship between the macroscale velocities and the spatial velocity statistics in terms of the spatially averaged velocity only. Rather than developing a Reynolds stress model, we propose a simple algebraic closure, consistent with generalized effective viscosity models (Pope 1975), to represent the spatially fluctuating velocity and pressure, respectively. The coefficients of the linear functions (one 1st-order, two 2nd-order, and one 3rd-order tensor) depend on the averaged velocity and its gradient. With the data set from DNS, performed for inertial and turbulent flows (pore Re of 300, 500, and 1000) through a periodic face-centered cubic (FCC) unit cell, all the unknown coefficients can be computed and the closure is complete. The macroscopic quantities calculated from the averaging are then compared with DNS data to verify the upscaling. NSF Project Numbers 1336983, 1133363.
Inferring Spatial Variations of Microstructural Properties from Macroscopic Mechanical Response
Liu, Tengxiao; Hall, Timothy J.; Barbone, Paul E.; Oberai, Assad A.
2016-01-01
Disease alters tissue microstructure, which in turn affects the macroscopic mechanical properties of tissue. In elasticity imaging, the macroscopic response is measured and is used to infer the spatial distribution of the elastic constitutive parameters. When an empirical constitutive model is used these parameters cannot be linked to the microstructure. However, when the constitutive model is derived from a microstructural representation of the material, it allows for the possibility of inferring the local averages of the spatial distribution of the microstructural parameters. This idea forms the basis of this study. In particular, we first derive a constitutive model by homogenizing the mechanical response of a network of elastic, tortuous fibers. Thereafter, we use this model in an inverse problem to determine the spatial distribution of the microstructural parameters. We solve the inverse problem as a constrained minimization problem, and develop efficient methods for solving it. We apply these methods to displacement fields obtained by deforming gelatin-agar co-gels, and determine the spatial distribution of agar concentration and fiber tortuosity, thereby demonstrating that it is possible to image local averages of microstructural parameters from macroscopic measurements of deformation. PMID:27655420
Spatial methods for deriving crop rotation history
NASA Astrophysics Data System (ADS)
Mueller-Warrant, George W.; Trippe, Kristin M.; Whittaker, Gerald W.; Anderson, Nicole P.; Sullivan, Clare S.
2017-08-01
Benefits of converting 11 years of remote sensing classification data into cropping history of agricultural fields included measuring lengths of rotation cycles and identifying specific sequences of intervening crops grown between final years of old grass seed stands and establishment of new ones. Spatial and non-spatial methods were complementary. Individual-year classification errors were often correctable in spreadsheet-based non-spatial analysis, whereas their presence in spatial data generally led to exclusion of fields from further analysis. Markov-model testing of non-spatial data revealed that year-to-year cropping sequences did not match average frequencies for transitions among crops grown in western Oregon, implying that rotations into new grass seed stands were influenced by growers' desires to achieve specific objectives. Moran's I spatial analysis of length of time between consecutive grass seed stands revealed that clustering of fields was relatively uncommon, with high and low value clusters only accounting for 7.1 and 6.2% of fields.
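Moran's I, the clustering statistic applied above to the time between consecutive grass seed stands, can be sketched for a generic spatial weight matrix as follows (a textbook formulation, not the authors' exact workflow):

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I spatial autocorrelation:
    I = (n / sum(w)) * sum_ij w_ij * z_i * z_j / sum_i z_i^2,
    where z_i are deviations from the mean.  Values near +1 indicate
    clustering of similar values, values near -1 indicate a
    checkerboard-like pattern, and values near -1/(n-1) indicate
    spatial randomness.
    """
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    z = x - x.mean()
    return (x.size / w.sum()) * (z @ w @ z) / (z @ z)
```

A perfectly alternating pattern on a four-node ring (adjacency weights) yields the extreme value I = -1.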
A novel principal component analysis for spatially misaligned multivariate air pollution data.
Jandarov, Roman A; Sheppard, Lianne A; Sampson, Paul D; Szpiro, Adam A
2017-01-01
We propose novel methods for predictive (sparse) PCA with spatially misaligned data. These methods identify principal component loading vectors that explain as much variability in the observed data as possible, while also ensuring the corresponding principal component scores can be predicted accurately by means of spatial statistics at locations where air pollution measurements are not available. This will make it possible to identify important mixtures of air pollutants and to quantify their health effects in cohort studies, where currently available methods cannot be used. We demonstrate the utility of predictive (sparse) PCA in simulated data and apply the approach to annual averages of particulate matter speciation data from national Environmental Protection Agency (EPA) regulatory monitors.
NASA Astrophysics Data System (ADS)
Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Sergey N.
2015-01-01
Digital holography is a technique that includes recording of an interference pattern with a digital photosensor, processing of the obtained holographic data, and reconstruction of the object wavefront. Increasing the signal-to-noise ratio (SNR) of reconstructed digital holograms is especially important in fields such as image encryption, pattern recognition, and static and dynamic display of 3D scenes. In this paper, compensation of the photosensor light spatial noise portrait (LSNP) is proposed to increase the SNR of reconstructed digital holograms. To verify the proposed method, numerical experiments with computer-generated Fresnel holograms with resolution equal to 512×512 elements were performed, simulating registration of shots with a Canon EOS 400D digital camera. It is shown that use of the averaging-over-frames method alone increases SNR only up to 4 times, with further increase limited by spatial noise. Application of the LSNP compensation method in conjunction with averaging over frames allows a 10-fold SNR increase. This value was obtained for an LSNP measured with 20% error; with a more accurate LSNP, SNR can be increased up to 20 times.
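The two-step idea, frame averaging against temporal noise plus subtraction of the measured spatial noise pattern, can be sketched on synthetic images. Here the LSNP is assumed known exactly, whereas the paper measures it with roughly 20% error, and the noise levels are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
scene = np.outer(np.hanning(64), np.hanning(64))   # "true" image
fpn = 0.2 * rng.standard_normal((64, 64))          # fixed spatial noise (LSNP)

# Each captured frame = scene + fixed spatial noise + fresh temporal noise.
frames = [scene + fpn + 0.1 * rng.standard_normal((64, 64))
          for _ in range(32)]

avg = np.mean(frames, axis=0)   # averaging suppresses temporal noise only
compensated = avg - fpn         # subtracting the LSNP removes spatial noise

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))
```

Frame averaging alone leaves the error floor set by the fixed spatial noise; only the combination with LSNP subtraction pushes the residual error down further, mirroring the SNR behavior reported in the abstract.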
A Nonparametric Geostatistical Method For Estimating Species Importance
Andrew J. Lister; Rachel Riemann; Michael Hoppus
2001-01-01
Parametric statistical methods are not always appropriate for conducting spatial analyses of forest inventory data. Parametric geostatistical methods such as variography and kriging are essentially averaging procedures, and thus can be affected by extreme values. Furthermore, non-normal distributions violate the assumptions of analyses in which test statistics are...
Shekarchi, Sayedali; Hallam, John; Christensen-Dalsgaard, Jakob
2013-11-01
Head-related transfer functions (HRTFs) are generally large datasets, which can be an important constraint for embedded real-time applications. A method is proposed here to reduce redundancy and compress the datasets. In this method, HRTFs are first compressed by conversion into autoregressive-moving-average (ARMA) filters whose coefficients are calculated using Prony's method. Such filters are specified by a few coefficients which can generate the full head-related impulse responses (HRIRs). Next, Legendre polynomials (LPs) are used to compress the ARMA filter coefficients. LPs are derived on the sphere and form an orthonormal basis set for spherical functions. Higher-order LPs capture increasingly fine spatial details. The number of LPs needed to represent an HRTF, therefore, is indicative of its spatial complexity. The results indicate that compression ratios can exceed 98% while maintaining a spectral error of less than 4 dB in the recovered HRTFs.
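The Legendre-polynomial compression step can be illustrated in one dimension with NumPy's Legendre module. The paper's basis functions are defined on the sphere, so this is only a 1-D analogue, and the "directional coefficient" being compressed is synthetic:

```python
import numpy as np
from numpy.polynomial import legendre

# A smooth quantity sampled densely over a normalized direction axis,
# standing in for one ARMA filter coefficient varying with direction.
theta = np.linspace(-1.0, 1.0, 181)
signal = 0.5 + 0.3 * theta - 0.2 * (3 * theta ** 2 - 1) / 2  # P0, P1, P2 mix

# Compress 181 samples into 5 Legendre coefficients; low orders capture
# coarse spatial structure, higher orders add fine detail.
coeffs = legendre.legfit(theta, signal, deg=4)
recovered = legendre.legval(theta, coeffs)
```

Because the synthetic signal is itself a low-order Legendre combination, five coefficients reconstruct it essentially exactly, a 181-to-5 compression; real HRTF coefficient surfaces would require more terms for a given error budget.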
Ehrhardt, J; Säring, D; Handels, H
2007-01-01
Modern tomographic imaging devices enable the acquisition of spatial and temporal image sequences. However, the spatial and temporal resolution of such devices is limited, and therefore image interpolation techniques are needed to represent images at a desired level of discretization. This paper presents a method for structure-preserving interpolation between neighboring slices in temporal or spatial image sequences. In a first step, the spatiotemporal velocity field between image slices is determined using an optical flow-based registration method in order to establish spatial correspondence between adjacent slices. An iterative algorithm is applied using the spatial and temporal image derivatives and a spatiotemporal smoothing step. Afterwards, the calculated velocity field is used to generate an interpolated image at the desired time by averaging intensities between corresponding points. Three quantitative measures are defined to evaluate the performance of the interpolation method. The behavior and capability of the algorithm is demonstrated on synthetic images. A set of 17 temporal and spatial image sequences is used to compare the optical flow-based interpolation method to linear and shape-based interpolation. The quantitative results show that the optical flow-based method outperforms linear and shape-based interpolation with statistical significance. The interpolation method presented is able to generate image sequences with the spatial or temporal resolution needed for image comparison, analysis, or visualization tasks. Quantitative and qualitative measures extracted from synthetic phantoms and medical image data show that the new method has clear advantages over linear and shape-based interpolation.
Fujisawa, Junya; Touyama, Hideaki; Hirose, Michitaka
2008-01-01
In this paper, we focus on alpha band modulation during visual spatial attention in the absence of visual stimuli. Visual spatial attention is expected to provide a new channel for non-invasive independent brain-computer interfaces (BCIs), but little work has been done on this interfacing method. The flickering stimuli used in previous work reduce independence and are difficult to use in practice, so we investigated whether visual spatial attention can be detected without such stimuli. Furthermore, common spatial patterns (CSP) were applied for the first time to brain states during visual spatial attention. The performance evaluation was based on three brain states: attention to the left, right, and center directions. The 30-channel scalp electroencephalographic (EEG) signals over occipital cortex were recorded for five subjects. Without CSP, the analysis achieved an average classification accuracy of 66.44% (range 55.42 to 72.27%) in discriminating the left and right attention classes. With CSP, the average classification accuracy was 75.39% (range 63.75 to 86.13%). These results suggest that CSP is useful in the context of visual spatial attention, and that alpha band modulation during visual spatial attention without flickering stimuli could provide a new channel for independent BCIs, as motor imagery does.
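CSP itself reduces to a generalized eigenvalue problem on the two class covariance matrices. A minimal sketch, with synthetic two-channel trials rather than real EEG, and trace normalization as one common convention:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=2):
    """Common spatial patterns: spatial filters w that maximize the
    variance ratio between two classes, obtained from the generalized
    eigenproblem  C_a w = lambda (C_a + C_b) w.

    trials_*: lists of (channels, samples) arrays.  Returns the
    n_filters most discriminative filters from each end of the
    eigenvalue spectrum, as rows.
    """
    def mean_cov(trials):
        covs = []
        for X in trials:
            X = X - X.mean(axis=1, keepdims=True)
            C = X @ X.T
            covs.append(C / np.trace(C))  # trace-normalized covariance
        return np.mean(covs, axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    vals, vecs = eigh(Ca, Ca + Cb)        # eigenvalues in ascending order
    idx = np.argsort(vals)
    keep = np.concatenate([idx[:n_filters], idx[-n_filters:]])
    return vecs[:, keep].T
```

Variances of the filtered signals (often log-transformed) then serve as classifier features: the first filters yield low variance for class A and high for class B, and the last filters the reverse.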
Jay M. Ver Hoef; Hailemariam Temesgen; Sergio Gómez
2013-01-01
Forest surveys provide critical information for many diverse interests. Data are often collected from samples, and from these samples, maps of resources and estimates of aerial totals or averages are required. In this paper, two approaches for mapping and estimating totals; the spatial linear model (SLM) and k-NN (k-Nearest Neighbor) are compared, theoretically,...
Spatial study of mortality in motorcycle accidents in the State of Pernambuco, Northeastern Brazil.
Silva, Paul Hindenburg Nobre de Vasconcelos; Lima, Maria Luiza Carvalho de; Moreira, Rafael da Silveira; Souza, Wayner Vieira de; Cabral, Amanda Priscila de Santana
2011-04-01
To analyze the spatial distribution of mortality due to motorcycle accidents in the state of Pernambuco, Northeastern Brazil. A population-based ecological study using data on mortality in motorcycle accidents from 01/01/2000 to 31/12/2005, with municipalities as the units of analysis. For the spatial distribution analysis, an average mortality rate was calculated using deaths from motorcycle accidents recorded in the Mortality Information System as the numerator and the mid-period population as the denominator. Spatial analysis techniques, smoothing of the mortality coefficient by the local empirical Bayesian method, and the Moran scatterplot, applied to the digital cartographic base of Pernambuco, were used. The average mortality rate for motorcycle accidents in Pernambuco was 3.47 per 100 thousand inhabitants. Of the 185 municipalities, 16 were part of five identified clusters, with average mortality rates ranging from 5.66 to 11.66 per 100 thousand inhabitants, and were considered critical areas. Three clusters are located in the area known as the sertão and two in the agreste of the state. The risk of dying in a motorcycle accident is greater in cluster areas outside the metropolitan axis, and intervention measures should consider the economic, social and cultural contexts.
Zhang, Shen; Zheng, Yanchun; Wang, Daifa; Wang, Ling; Ma, Jianai; Zhang, Jing; Xu, Weihao; Li, Deyu; Zhang, Dan
2017-08-10
Motor imagery is one of the most investigated paradigms in the field of brain-computer interfaces (BCIs). The present study explored the feasibility of applying a common spatial pattern (CSP)-based algorithm for a functional near-infrared spectroscopy (fNIRS)-based motor imagery BCI. Ten participants performed kinesthetic imagery of their left- and right-hand movements while 20-channel fNIRS signals were recorded over the motor cortex. The CSP method was implemented to obtain the spatial filters specific for both imagery tasks. The mean, slope, and variance of the CSP filtered signals were taken as features for BCI classification. Results showed that the CSP-based algorithm outperformed two representative channel-wise methods for classifying the two imagery statuses using either data from all channels or averaged data from imagery responsive channels only (oxygenated hemoglobin: CSP-based: 75.3±13.1%; all-channel: 52.3±5.3%; averaged: 64.8±13.2%; deoxygenated hemoglobin: CSP-based: 72.3±13.0%; all-channel: 48.8±8.2%; averaged: 63.3±13.3%). Furthermore, the effectiveness of the CSP method was also observed for the motor execution data to a lesser extent. A partial correlation analysis revealed significant independent contributions from all three types of features, including the often-ignored variance feature. To our knowledge, this is the first study demonstrating the effectiveness of the CSP method for fNIRS-based motor imagery BCIs. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Lakshmi Madhavan, Bomidi; Deneke, Hartwig; Witthuhn, Jonas; Macke, Andreas
2017-03-01
The time series of global radiation observed by a dense network of 99 autonomous pyranometers during the HOPE campaign around Jülich, Germany, are investigated with a multiresolution analysis based on the maximum overlap discrete wavelet transform and the Haar wavelet. For different sky conditions, typical wavelet power spectra are calculated to quantify the timescale dependence of variability in global transmittance. Distinctly higher variability is observed at all frequencies in the power spectra of global transmittance under broken-cloud conditions compared to clear, cirrus, or overcast skies. The spatial autocorrelation function including its frequency dependence is determined to quantify the degree of similarity of two time series measurements as a function of their spatial separation. Distances ranging from 100 m to 10 km are considered, and a rapid decrease of the autocorrelation function is found with increasing frequency and distance. For frequencies above 1/3 min⁻¹ and points separated by more than 1 km, variations in transmittance become completely uncorrelated. A method is introduced to estimate the deviation between a point measurement and a spatially averaged value for a surrounding domain, which takes into account domain size and averaging period, and is used to explore the representativeness of a single pyranometer observation for its surrounding region. Two distinct mechanisms are identified, which limit the representativeness; on the one hand, spatial averaging reduces variability and thus modifies the shape of the power spectrum. On the other hand, the correlation of variations of the spatially averaged field and a point measurement decreases rapidly with increasing temporal frequency.
For a grid box of 10 km × 10 km and averaging periods of 1.5-3 h, the deviation of global transmittance between a point measurement and an area-averaged value depends on the prevailing sky conditions: 2.8 (clear), 1.8 (cirrus), 1.5 (overcast), and 4.2 % (broken clouds). The solar global radiation observed at a single station is found to deviate from the spatial average by as much as 14-23 (clear), 8-26 (cirrus), 4-23 (overcast), and 31-79 W m-2 (broken clouds) from domain averages ranging from 1 km × 1 km to 10 km × 10 km in area.
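The timescale decomposition described above can be sketched with a Haar maximum-overlap DWT: at each level, the mean squared detail coefficient gives the variability contribution of the corresponding dyadic timescale. This is a simplified illustration with circular boundary handling and assumed function names, not the authors' processing chain:

```python
import numpy as np

def haar_modwt_power(x, n_levels):
    """Wavelet power (mean squared detail coefficients) per dyadic scale,
    via the maximum-overlap DWT pyramid with the Haar wavelet."""
    v = np.asarray(x, dtype=float).copy()   # level-0 scaling coefficients
    power = []
    for j in range(1, n_levels + 1):
        shifted = np.roll(v, -(2 ** (j - 1)))  # circular shift by 2^(j-1)
        w = (v - shifted) / 2.0                # Haar detail at level j
        v = (v + shifted) / 2.0                # Haar smooth, fed to level j+1
        power.append(np.mean(w ** 2))          # power at scale 2^(j-1) * dt
    return np.array(power)
```

For a white-noise transmittance series the power halves with each level, whereas broken-cloud conditions would raise the power across all scales, as reported in the abstract.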
Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L
2012-09-01
Geostatistical methods are widely used in estimating long-term exposures for epidemiological studies on air pollution, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and the uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian maximum entropy (BME) method and applied this framework to estimate fine particulate matter (PM2.5) yearly average concentrations over the contiguous US. The MW approach accounts for spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air-monitoring system. In cross-validation analyses conducted on a set of randomly selected complete PM2.5 data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least a 17.8% reduction in mean square error (MSE) in estimating yearly PM2.5. Moreover, the MWBME method further reduces the MSE by 8.4-43.7% as the proportion of incomplete data increases from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and is thus recommended for epidemiological studies investigating the effect of long-term exposure to PM2.5 across large geographical domains with expected spatial non-stationarity.
NASA Astrophysics Data System (ADS)
Martin, Sabrina; Bange, Jens
2014-01-01
Crawford et al. (Boundary-Layer Meteorol 66:237-245, 1993) showed that the time average is inappropriate for airborne eddy-covariance flux calculations: the aircraft's ground speed through a turbulent field is not constant, and one reason can be a correlation with vertical air motion, so that some types of structures are sampled more densely than others. To avoid this, the time-sampled data are adjusted for the varying ground speed so that the modified estimates are equivalent to spatially-sampled data. A comparison of sensible heat-flux calculations using temporal and spatial averaging methods is presented and discussed. Data from three airborne measurement systems, the Helipod, the Dornier 128-6, and a small unmanned aerial vehicle (UAV), are used for the analysis. These systems vary in size, weight and aerodynamic characteristics: the Helipod is a helicopter-borne turbulence probe, the Dornier 128-6 a manned research aircraft, and the UAV a small unmanned aircraft. The systematic bias anticipated in covariance computations due to speed variations was not found when averaging over Dornier, Helipod, or UAV flight legs. However, the random differences between spatially and temporally averaged fluxes were found to be up to 30% on individual flight legs.
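The adjustment described by Crawford et al. amounts to weighting each time sample by the distance flown during that sample, so the covariance is computed over equal path increments rather than equal time increments. A hedged sketch of both averages, with illustrative variable names and a constant sampling interval assumed:

```python
import numpy as np

def heat_flux(w, T, ground_speed=None):
    """Kinematic sensible heat flux w'T' over a flight leg.

    With ground_speed given, each sample is weighted by the distance flown
    per sample (ds_i = U_i * dt, dt constant), i.e. the spatial average;
    otherwise a plain time average is used."""
    w = np.asarray(w, float)
    T = np.asarray(T, float)
    wgt = np.ones_like(w) if ground_speed is None else np.asarray(ground_speed, float)
    wgt = wgt / wgt.sum()
    w_p = w - np.sum(wgt * w)        # fluctuations about the (weighted) leg mean
    T_p = T - np.sum(wgt * T)
    return np.sum(wgt * w_p * T_p)   # weighted covariance
```

With constant ground speed the two averages coincide; they diverge when ground-speed variations correlate with the turbulence being sampled.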
Cho, Hanna; Kim, Jin Su; Choi, Jae Yong; Ryu, Young Hoon; Lyoo, Chul Hyoung
2014-01-01
We developed a new computed tomography (CT)-based spatial normalization method and CT template, and demonstrate its usefulness for spatial normalization of positron emission tomography (PET) images in [18F] fluorodeoxyglucose (FDG) PET studies of healthy controls. Seventy healthy controls underwent a brain CT scan (120 kVp, 180 mAs, 3-mm slice thickness) and [18F] FDG PET scans using a PET/CT scanner. T1-weighted magnetic resonance (MR) images were acquired for all subjects. By averaging skull-stripped and spatially normalized MR and CT images, we created skull-stripped MR and CT templates for spatial normalization. The skull-stripped MR and CT images were spatially normalized to each structural template. PET images were spatially normalized by applying the spatial transformation parameters used to normalize the skull-stripped MR and CT images. A conventional perfusion PET template was used for PET-based spatial normalization. Regional standardized uptake values (SUV) measured by overlaying the template volume of interest (VOI) were compared to those measured with FreeSurfer-generated VOIs (FSVOI). All three spatial normalization methods underestimated regional SUVs by 0.3-20% compared to those measured with the FSVOI, with the CT-based method showing a slightly greater underestimation bias. Regional SUVs derived from all three spatial normalization methods correlated significantly (p < 0.0001) with those measured with the FSVOI. CT-based spatial normalization may be an alternative method for structure-based spatial normalization of [18F] FDG PET when MR imaging is unavailable. It is therefore useful for PET/CT studies with radiotracers whose uptake is expected to be limited to specific brain regions or highly variable within the study population.
Yang, Fang; Yang, Min; Hu, Yuehua; Zhang, Juying
2016-01-01
Background Hand, Foot, and Mouth Disease (HFMD) is a worldwide infectious disease. In China, many provinces have reported HFMD cases, especially the southern and southwestern provinces. Many studies have found a strong association between HFMD incidence and climatic factors such as temperature, rainfall, and relative humidity. However, few studies have analyzed cluster effects between geographical units. Methods The nonlinear relationships and lag effects between weekly HFMD cases and climatic variables were estimated for the period 2008-2013 using a polynomial distributed lag model. An extra-Poisson multilevel spatial polynomial model was used to model the relationship between weekly HFMD incidence and climatic variables after accounting for cluster effects, the provincial correlation structure of HFMD incidence, and overdispersion. Smoothing spline methods were used to detect threshold effects between climatic factors and HFMD incidence. Results HFMD incidence was spatially heterogeneous across provinces, and the overdispersion scale measure was 548.077. After controlling for long-term trends, spatial heterogeneity, and overdispersion, temperature was strongly associated with HFMD incidence: weekly average temperature and weekly temperature difference showed approximately inverse-V-shaped and V-shaped relationships with incidence, with lag effects of 3 weeks and 2 weeks, respectively. Highly spatially correlated HFMD incidence was detected in northern, central, and southern provinces. Temperature explained most of the variation in HFMD incidence in southern and northeastern provinces; after adjustment for temperature, eastern and northern provinces still showed high variation in HFMD incidence. Conclusion We found a relatively strong association between weekly HFMD incidence and weekly average temperature. The association between HFMD incidence and climatic variables was spatially heterogeneous across provinces. Future research should explore the risk factors underlying the spatially correlated structure and the high variation in HFMD incidence not explained by temperature. When analyzing associations between HFMD incidence and climatic variables, spatial heterogeneity among provinces should be evaluated. Moreover, the extra-Poisson multilevel model was capable of modeling the association between overdispersed HFMD incidence and climatic variables. PMID:26808311
2013-01-01
Introduction There is a great health services disparity between urban and rural areas in China, and the percentage of people who are unable to access health services because of long travel times is increasing. This paper takes Donghai County as the study unit to analyse areas with physician shortages and the characteristics of the potential spatial accessibility of health services. We analyse how the unequal distribution of health services resources and the New Cooperative Medical Scheme affect the potential spatial accessibility of health services in Donghai County, and we offer advice on how to alleviate the unequal spatial accessibility of health services in areas that are more remote and isolated. Methods The shortest travel times from hospitals to villages are calculated with an O-D matrix in a GIS extension model. This paper applies an enhanced two-step floating catchment area (E2SFCA) method to study the spatial accessibility of health services and to determine areas with physician shortages in Donghai County. The sensitivity of the E2SFCA for assessing variation in the spatial accessibility of health services is checked using different impedance coefficient values. A Geostatistical Analyst model and a spatial analyst method are used to analyse the spatial pattern and the edge effect of the potential spatial accessibility of health services. Results The results show that 69% of villages have lower potential spatial accessibility of health services than the average for Donghai County, and 79% of village scores are lower than the average for Jiangsu Province. The potential spatial accessibility of health services diminishes greatly from the centre of the county to outlying areas. Using a smaller impedance coefficient leads to greater disparity among the villages. The spatial accessibility of health services is greater along highways in the county. Conclusions Most villages are in underserved health services areas. 
An unequal distribution of health service resources and the reimbursement policies of the New Cooperative Medical Scheme have led to an edge effect in the spatial accessibility of health services in Donghai County, whereby people living on the edge of the county have less access to health services. Comprehensive measures should be considered to alleviate the unequal spatial accessibility of health services in areas that are more remote and isolated. PMID:23688278
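The E2SFCA computation itself is compact: step 1 forms a travel-time-weighted physician-to-population ratio for each hospital's catchment, and step 2 sums the reachable ratios for each village. A schematic NumPy version with a Gaussian impedance function; the catchment size and impedance coefficient below are illustrative, not the paper's settings:

```python
import numpy as np

def e2sfca(supply, demand, travel_time, d0=60.0, beta=440.0):
    """Enhanced two-step floating catchment area accessibility scores.

    supply[j]: physicians at hospital j; demand[i]: population of village i;
    travel_time[i, j]: minutes from village i to hospital j. Weights use a
    Gaussian decay exp(-t^2/beta) truncated at the catchment limit d0."""
    t = np.asarray(travel_time, float)
    w = np.where(t <= d0, np.exp(-t ** 2 / beta), 0.0)
    # Step 1: supply-to-weighted-demand ratio of each hospital.
    R = np.asarray(supply, float) / (w * np.asarray(demand, float)[:, None]).sum(axis=0)
    # Step 2: each village sums the ratios of hospitals it can reach.
    return (w * R[None, :]).sum(axis=1)
```

A smaller impedance coefficient beta makes the weights decay faster with travel time, which (as the abstract notes) increases the disparity between central and outlying villages.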
A Note on Spatial Averaging and Shear Stresses Within Urban Canopies
NASA Astrophysics Data System (ADS)
Xie, Zheng-Tong; Fuka, Vladimir
2018-04-01
One-dimensional urban models embedded in mesoscale numerical models may place several grid points within the urban canopy. This requires an accurate parametrization for shear stresses (i.e. vertical momentum fluxes), including the dispersive stress and momentum sinks at these points. We used a case study with a packing density of 33% and rigorously checked the vertical variation of the spatially-averaged total shear stress, which can be used in a one-dimensional column urban model. We found that the intrinsic spatial average, in which the volume or area of the solid parts is not included in the averaging process, yields a greater time- and space-averaged total stress within the canopy, and a more evident abrupt change at the top of the buildings, than the comprehensive spatial average, in which the volume or area of the solid parts is included.
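The distinction between the two averaging operators is simply whether the solid volume enters the denominator. A toy illustration (function and variable names are assumptions; a 33% packing density, as in the case study, corresponds to one solid cell in three):

```python
import numpy as np

def spatial_average(stress, solid_mask, intrinsic=True):
    """Horizontal average of a stress field at one canopy height.

    intrinsic:      average over fluid cells only (solid parts excluded);
    comprehensive:  solid cells count as zero stress but are kept in the
                    denominator, so the average is diluted by the buildings."""
    stress = np.asarray(stress, float)
    fluid = ~np.asarray(solid_mask, bool)
    if intrinsic:
        return stress[fluid].mean()
    return stress[fluid].sum() / stress.size
```

For a positive stress field, the intrinsic average exceeds the comprehensive one by the factor 1/(1 - packing density), consistent with the abstract's finding of greater intrinsic-averaged stress within the canopy.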
NASA Astrophysics Data System (ADS)
Wong, Jaime G.; Rosi, Giuseppe A.; Rouhi, Amirreza; Rival, David E.
2017-10-01
Particle tracking velocimetry (PTV) produces high-quality temporal information that is often neglected when computing spatial gradients. A method is presented here to utilize this temporal information in order to improve the estimation of spatial gradients for spatially unstructured Lagrangian data sets. Starting with an initial guess, this method penalizes any gradient estimate where the substantial derivative of vorticity along a pathline is not equal to the local vortex stretching/tilting. Furthermore, given an initial guess, this method can proceed on an individual pathline without any further reference to neighbouring pathlines. The equivalence of the substantial derivative and vortex stretching/tilting is based on the vorticity transport equation, where viscous diffusion is neglected. By minimizing the residual of the vorticity-transport equation, the proposed method is first tested to reduce error and noise on a synthetic Taylor-Green vortex field dissipating in time. Furthermore, when the proposed method is applied to high-density experimental data collected with `Shake-the-Box' PTV, noise within the spatial gradients is significantly reduced. In the particular test case investigated here of an accelerating circular plate captured during a single run, the method acts to delineate the shear layer and vortex core, as well as resolve the Kelvin-Helmholtz instabilities, which were previously unidentifiable without the use of ensemble averaging. The proposed method shows promise for improving PTV measurements that require robust spatial gradients while retaining the unstructured Lagrangian perspective.
NASA Technical Reports Server (NTRS)
Acker, James G.; Uz, Stephanie Schollaert; Shen, Suhung; Leptoukh, Gregory G.
2010-01-01
Application of appropriate spatial averaging techniques is crucial to the correct evaluation of ocean color radiometric data, due to the common log-normal or mixed log-normal distribution of these data. The averaging method is particularly crucial for data acquired in coastal regions. The effect of the averaging method was markedly demonstrated for a precipitation-driven event on the U.S. Northeast coast in October-November 2005, which resulted in export of high concentrations of riverine colored dissolved organic matter (CDOM) to New York and New Jersey coastal waters over a period of several days. Use of the arithmetic mean averaging method created an inaccurate representation of the magnitude of this event in SeaWiFS global mapped chl a data, causing it to be visualized as a very large chl a anomaly. The apparent chl a anomaly was enhanced by the known incomplete discrimination of CDOM and phytoplankton chlorophyll in SeaWiFS data; other data sources enable an improved characterization. Analysis using the geometric mean averaging method did not indicate this event to be statistically anomalous. Our results demonstrate the necessity of providing the geometric mean averaging method for ocean color radiometric data in the Goddard Earth Sciences DISC Interactive Online Visualization ANd aNalysis Infrastructure (Giovanni).
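The practical difference between the two averaging methods is large for skewed, log-normally distributed quantities such as chl a: the arithmetic mean is dragged upward by episodic high values, while the geometric mean (the exponential of the mean of the logs) is not. A minimal illustration with made-up concentrations:

```python
import numpy as np

def chl_means(chl):
    """Arithmetic vs. geometric mean of chlorophyll-a concentrations.

    For roughly log-normal data the geometric mean is the appropriate
    central-tendency estimate; the arithmetic mean is inflated by
    episodic high values such as a CDOM-contaminated retrieval."""
    chl = np.asarray(chl, float)
    arithmetic = chl.mean()
    geometric = np.exp(np.log(chl).mean())
    return arithmetic, geometric
```

For example, three background values near 0.1-0.2 mg m-3 plus one event value of 8 give an arithmetic mean above 2 but a geometric mean below 0.5, which is why the arithmetic average visualized the event as a large anomaly while the geometric average did not.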
Cross-comparison and evaluation of air pollution field estimation methods
NASA Astrophysics Data System (ADS)
Yu, Haofei; Russell, Armistead; Mulholland, James; Odman, Talat; Hu, Yongtao; Chang, Howard H.; Kumar, Naresh
2018-04-01
Accurate estimates of human exposure are critical for air pollution health studies, and a variety of methods are currently used to assign pollutant concentrations to populations. Results from these methods may differ substantially, which can affect the outcomes of health impact assessments. Here, we applied 14 methods for developing spatiotemporal air pollutant concentration fields of eight pollutants to the Atlanta, Georgia region. These methods include eight methods relying mostly on air quality observations (CM: central monitor; SA: spatial average; IDW: inverse distance weighting; KRIG: kriging; TESS-D: discontinuous tessellation; TESS-NN: natural neighbor tessellation with interpolation; LUR: land use regression; AOD: downscaled satellite-derived aerosol optical depth), one using the RLINE dispersion model, and five methods using a chemical transport model (CMAQ), with and without using observational data to constrain results. The derived fields were evaluated and compared. Overall, all methods generally perform better in urban than rural areas, and for secondary than primary pollutants. We found the CM and SA methods may be appropriate only for small domains and for secondary pollutants, though the SA method led to large negative spatial correlations when using data withholding for PM2.5 (spatial correlation coefficient R = -0.81). The TESS-D method was found to have major limitations. Results of the IDW, KRIG, and TESS-NN methods are similar. They are found to be better suited for secondary pollutants because of their satisfactory temporal performance (e.g., average temporal R2 > 0.85 for PM2.5 but less than 0.35 for the primary pollutant NO2). In addition, they are suitable only for areas with relatively dense monitoring networks, given their inability to capture spatial concentration variability, as indicated by the negative spatial R (lower than -0.2 for PM2.5 when assessed using data withholding). 
The performance of the LUR and AOD methods was similar to kriging. Using RLINE and CMAQ fields without fusing observational data led to substantial errors and biases, though the CMAQ model captured spatial gradients reasonably well (spatial R = 0.45 for PM2.5). Two unique tests conducted here included quantifying the autocorrelation of method biases (which can be important in time-series analyses) and how well the methods capture the observed interspecies correlations (which is of particular importance in multipollutant health assessments). Autocorrelation of method biases lasted longest, and interspecies correlations of primary pollutants were higher than observed, when air quality models were used without data fusion. Use of hybrid methods that combine air quality model outputs with observational data overcomes some of these limitations and is better suited for health studies. Results from this study contribute to a better understanding of the strengths and weaknesses of different methods for estimating human exposure.
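Of the observation-driven interpolators compared above, IDW is the simplest to sketch: each query point receives a distance-weighted average of the monitor values. An illustrative NumPy version; the distance power and the small epsilon guarding collocated points are assumptions, not the study's settings:

```python
import numpy as np

def idw(xy_obs, z_obs, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of monitor observations
    onto query points (one of the observation-based field methods)."""
    xy_obs = np.asarray(xy_obs, float)
    xy_query = np.asarray(xy_query, float)
    # Pairwise distances, shape (n_query, n_obs).
    d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=-1)
    w = 1.0 / (d ** power + eps)          # eps keeps collocated points finite
    w /= w.sum(axis=1, keepdims=True)     # normalize weights per query point
    return w @ np.asarray(z_obs, float)
```

Like kriging and TESS-NN, such an interpolator can only smooth between monitors, which is consistent with the abstract's finding that these methods fail where the monitoring network is sparse.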
Multi-sensor image fusion algorithm based on multi-objective particle swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Xie, Xia-zhu; Xu, Ya-wei
2017-11-01
On the basis of dual-tree complex wavelet transform (DT-CWT) theory, an approach based on multi-objective particle swarm optimization (MOPSO) was proposed to objectively choose the fusion weights of the low-frequency sub-bands. High- and low-frequency sub-bands were produced by the DT-CWT. The absolute value of the coefficients was adopted as the fusion rule for the high-frequency sub-bands. Fusion weights in the low-frequency sub-bands served as the particles in MOPSO, with spatial frequency and average gradient adopted as the two fitness functions. The experimental results show that the proposed approach performs better than average fusion and fusion methods based on local variance and local energy, respectively, in brightness, clarity, and quantitative evaluation, including entropy, spatial frequency, average gradient, and QAB/F.
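The two MOPSO fitness functions named above have standard definitions: spatial frequency is the RMS of row- and column-wise first differences, and average gradient is the mean local gradient magnitude. A sketch of both metrics (discrete-difference conventions vary between papers; this is one common form):

```python
import numpy as np

def spatial_frequency(img):
    """Spatial Frequency: RMS of horizontal and vertical first differences."""
    img = np.asarray(img, float)
    rf = np.diff(img, axis=1)   # row (horizontal) differences
    cf = np.diff(img, axis=0)   # column (vertical) differences
    return np.sqrt(np.mean(rf ** 2) + np.mean(cf ** 2))

def average_gradient(img):
    """Average Gradient: mean magnitude of the local intensity gradient."""
    img = np.asarray(img, float)
    gx = np.diff(img, axis=1)[:-1, :]   # crop so gx and gy align
    gy = np.diff(img, axis=0)[:, :-1]
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))
```

Both metrics grow with image detail and sharpness, so MOPSO's search over low-frequency fusion weights pushes the fused image toward higher clarity on both objectives simultaneously.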
de Vries, W; Wieggers, H J J; Brus, D J
2010-08-05
Element fluxes through forest ecosystems are generally based on measurements of concentrations in soil solution at regular time intervals at plot locations sampled in a regular grid. Here we present spatially averaged annual element leaching fluxes in three Dutch forest monitoring plots using a new sampling strategy in which both sampling locations and sampling times are selected by probability sampling. Locations were selected by stratified random sampling, with compact geographical blocks of equal surface area as strata. In each sampling round, six composite soil solution samples were collected, each consisting of five aliquots, one per stratum. The plot-mean concentration was estimated by linear regression, so that the bias due to one or more strata not being represented in the composite samples is eliminated. The sampling times were selected such that the cumulative precipitation surplus of the interval between two consecutive sampling times was constant, using an estimated precipitation surplus averaged over the past 30 years. The spatially averaged annual leaching flux was estimated using the modeled daily water flux as an ancillary variable. An important advantage of the new method is that the uncertainty in the estimated annual leaching fluxes due to spatial and temporal variation, and the resulting sampling errors, can be quantified. Results of this new method were compared with the reference approach, in which daily leaching fluxes were calculated by multiplying daily interpolated element concentrations with daily water fluxes and then aggregating to a year. The annual fluxes calculated with the reference method for the period 2003-2005, across all plots, elements, and depths, lie within the average +/-2 standard errors of the new method in only 53% of cases. 
Despite the differences in results, both methods indicate comparable N retention and strong Al mobilization in all plots, with Al leaching being nearly equal to the leaching of SO4 and NO3 when fluxes are expressed in molc ha-1 yr-1. This illustrates that Al release, which is the clearest signal of soil acidification, is mainly due to the external input of SO4 and NO3.
Fukuyama, Atsushi; Isoda, Haruo; Morita, Kento; Mori, Marika; Watanabe, Tomoya; Ishiguro, Kenta; Komori, Yoshiaki; Kosugi, Takafumi
2017-01-01
Introduction: We aim to elucidate the effect of the spatial resolution of three-dimensional cine phase contrast magnetic resonance (3D cine PC MR) imaging on the accuracy of blood flow analysis, and to examine the optimal spatial resolution settings using flow phantoms. Materials and Methods: The flow phantom has five types of acrylic pipes that represent human blood vessels (inner diameters: 15, 12, 9, 6, and 3 mm). The pipes were fixed with 1% agarose containing 0.025 mol/L gadolinium contrast agent. A blood-mimicking fluid with human blood property values was circulated through the pipes at a steady flow. Magnetic resonance (MR) images (three-directional phase images with velocity information and magnitude images with shape information) were acquired using the 3-Tesla MR system and receiving coil. Temporal changes in spatially-averaged velocity and maximum velocity were calculated using hemodynamic analysis software. We calculated the error rates of the flow velocities against the volume flow rates measured with a flowmeter and examined measurement accuracy. Results: When the acrylic pipe was the size of a thoracicoabdominal or cervical artery and the ratio of pixel size to pipe size was set at 30% or lower, spatially-averaged velocity measurements were highly accurate. When the pixel size ratio was set at 10% or lower, maximum velocity could be measured with high accuracy. It was difficult to accurately measure the maximum velocity of the 3-mm pipe, which was the size of a major intracranial artery, but the error for spatially-averaged velocity was 20% or less. Conclusions: The flow velocity measurement accuracy of 3D cine PC MR imaging for pipes with inner sizes equivalent to vessels in the cervical and thoracicoabdominal arteries is good. For the pipe with a 3-mm diameter, equivalent to major intracranial arteries, accuracy is poor for maximum velocity but relatively good for spatially-averaged velocity. PMID:28132996
Redding, David W; Lucas, Tim C D; Blackburn, Tim M; Jones, Kate E
2017-01-01
Statistical approaches for inferring the spatial distribution of taxa (Species Distribution Models, SDMs) commonly rely on available occurrence data, which are often clumped and geographically restricted. Although available SDM methods address some of these factors, they could be more directly and accurately modelled using a spatially explicit approach. Software to fit models with spatial autocorrelation parameters in SDMs is now widely available, but whether such approaches aid predictions compared with other methodologies is unknown. Here, within a simulated environment using 1000 generated species' ranges, we compared the performance of two commonly used non-spatial SDM methods (Maximum Entropy Modelling, MAXENT, and boosted regression trees, BRT) with a spatial Bayesian SDM method (fitted using R-INLA) when the underlying data exhibit varying combinations of clumping and geographic restriction. Finally, we tested how the recommended methodological settings designed to account for spatially non-random patterns in the data impact inference. The spatial Bayesian SDM method was the most consistently accurate, ranking among the top two most accurate methods in 7 of 8 data-sampling scenarios. Within high-coverage sample datasets, all methods performed fairly similarly. When sampling points were randomly spread, BRT had a 1-3% greater accuracy than the other methods, and when samples were clumped, the spatial Bayesian SDM method had a 4-8% better AUC score. When sampling points were restricted to a small section of the true range, all methods were on average 10-12% less accurate, with greater variation among the methods. Model inference under the recommended settings to account for autocorrelation was not impacted by clumping or restriction of data, except for the complexity of the spatial regression term in the spatial Bayesian model. 
Methods, such as those made available by R-INLA, can be successfully used to account for spatial autocorrelation in an SDM context and, by taking account of random effects, produce outputs that can better elucidate the role of covariates in predicting species occurrence. Given that it is often unclear what the drivers are behind data clumping in an empirical occurrence dataset, or indeed how geographically restricted these data are, spatially-explicit Bayesian SDMs may be the better choice when modelling the spatial distribution of target species.
Spatial transform coding of color images.
NASA Technical Reports Server (NTRS)
Pratt, W. K.
1971-01-01
The application of the transform-coding concept to the coding of color images represented by three primary color planes of data is discussed. The principles of spatial transform coding are reviewed and the merits of various methods of color-image representation are examined. A performance analysis is presented for the color-image transform-coding system. Results of a computer simulation of the coding system are also given. It is shown that, by transform coding, the chrominance content of a color image can be coded with an average of 1.0 bits per element or less without serious degradation. If luminance coding is also employed, the average rate reduces to about 2.0 bits per element or less.
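The idea of spending fewer bits on chrominance can be sketched with a modern 2-D cosine transform: transform one color plane and retain only a small fraction of the coefficients. This toy version uses coefficient thresholding as a crude stand-in for the paper's quantization and bit allocation, and is not its actual coder:

```python
import numpy as np
from scipy.fft import dctn, idctn

def transform_code_plane(plane, keep=0.125):
    """Spatial transform coding sketch for one color plane.

    Takes the 2-D orthonormal cosine transform, zeroes all but the
    largest-magnitude fraction `keep` of coefficients (standing in for
    coarse quantization of chrominance), and inverse-transforms."""
    c = dctn(np.asarray(plane, float), norm="ortho")
    thresh = np.quantile(np.abs(c), 1.0 - keep)   # keep top `keep` fraction
    c_q = np.where(np.abs(c) >= thresh, c, 0.0)   # drop small coefficients
    return idctn(c_q, norm="ortho")
```

Because the transform compacts the energy of smooth planes into a few coefficients, a small retained fraction (equivalently, a low average bit rate) reconstructs chrominance with little visible degradation, which is the effect the abstract quantifies.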
Estimation of the vortex length scale and intensity from two-dimensional samples
NASA Technical Reports Server (NTRS)
Reuss, D. L.; Cheng, W. P.
1992-01-01
A method is proposed for estimating flow features that influence flame wrinkling in reciprocating internal combustion engines, where traditional statistical measures of turbulence are suspect. Candidate methods were tested in a computed channel flow, where traditional turbulence measures are valid and performance can be rationally evaluated. Two concepts are tested. First, spatial filtering is applied to the two-dimensional velocity distribution and is found to reveal structures corresponding to the vorticity field; decreasing the spatial-frequency cutoff of the filter locally changes the character and size of the flow structures revealed. Second, the vortex length scale and intensity are estimated by computing the ensemble-average velocity distribution conditionally sampled on the vorticity peaks. The resulting conditionally sampled 'average vortex' has a peak velocity less than half the rms velocity and a size approximately equal to the two-point-correlation integral length scale.
NASA Astrophysics Data System (ADS)
Sund, Nicole; Porta, Giovanni; Bolster, Diogo; Parashar, Rishi
2017-11-01
Prediction of effective transport for mixing-driven reactive systems at larger scales requires accurate representation of mixing at small scales, which poses a significant upscaling challenge. Depending on the problem at hand, a Lagrangian framework can be beneficial, while for other problems an Eulerian framework has advantages. Here we propose and test a novel hybrid model that attempts to leverage the benefits of each. Specifically, our framework provides a Lagrangian closure required for a volume-averaging procedure of the advection-diffusion-reaction equation. This hybrid model is a LAgrangian Transport Eulerian Reaction Spatial Markov model (LATERS Markov model), which extends previous implementations of the Lagrangian Spatial Markov model and maps concentrations to an Eulerian grid to quantify the closure terms required to calculate the volume-averaged reaction terms. The advantage of this approach is that the Spatial Markov model is known to provide accurate predictions of transport, particularly at preasymptotic early times, when the assumptions required by traditional volume-averaging closures are least likely to hold; likewise, the Eulerian reaction method is efficient because it does not require calculation of distances between particles. This manuscript introduces the LATERS Markov model and demonstrates by example its ability to accurately predict bimolecular reactive transport in a simple benchmark 2-D porous medium.
Global Precipitation Measurement (GPM) Validation Network
NASA Technical Reports Server (NTRS)
Schwaller, Mathew; Moris, K. Robert
2010-01-01
The method averages the minimum TRMM PR and Ground Radar (GR) sample volumes needed to match up spatially and temporally coincident PR and GR data. PR and GR averages are calculated at the geometric intersection of the PR rays with the individual GR sweeps; along-ray PR data are averaged only in the vertical, and GR data are averaged only in the horizontal. Differences between PR and GR reflectivity are small high in the atmosphere and relatively larger below. Version 6 of the TRMM PR underestimates rainfall for convective rain in the lower part of the atmosphere by 30 to 40 percent.
Quasi-analytical treatment of spatially averaged radiation transfer in complex terrain
NASA Astrophysics Data System (ADS)
LöWe, H.; Helbig, N.
2012-10-01
We provide a new quasi-analytical method to compute the subgrid topographic influences on the shortwave radiation fluxes and the effective albedo in complex terrain as required for large-scale meteorological, land surface, or climate models. We investigate radiative transfer in complex terrain via the radiosity equation on isotropic Gaussian random fields. Under controlled approximations we derive expressions for domain-averaged fluxes of direct, diffuse, and terrain radiation and the sky view factor. Domain-averaged quantities can be related to a type of level-crossing probability of the random field, which is approximated by long-standing results developed for acoustic scattering at ocean boundaries. This allows us to express all nonlocal horizon effects in terms of a local terrain parameter, namely, the mean-square slope. Emerging integrals are computed numerically, and fit formulas are given for practical purposes. As an implication of our approach, we provide an expression for the effective albedo of complex terrain in terms of the Sun elevation angle, mean-square slope, the area-averaged surface albedo, and the ratio of atmospheric direct beam to diffuse radiation. For demonstration we compute the decrease of the effective albedo relative to the area-averaged albedo in Switzerland for idealized snow-covered and clear-sky conditions at noon in winter. We find an average decrease of 5.8% and spatial patterns which originate from characteristics of the underlying relief. Limitations and possible generalizations of the method are discussed.
Evaluation techniques and metrics for assessment of pan+MSI fusion (pansharpening)
NASA Astrophysics Data System (ADS)
Mercovich, Ryan A.
2015-05-01
Fusion of broadband panchromatic data with narrow-band multispectral data - pansharpening - is a common and often studied problem in remote sensing. Many methods exist to produce data fusion results with the best possible spatial and spectral characteristics, and a number have been commercially implemented. This study examines the output products of 4 commercial implementations with regard to their relative strengths and weaknesses for a set of defined image characteristics and analyst use-cases. The image characteristics used are spatial detail, spatial quality, spectral integrity, and composite color quality (hue and saturation), and the analyst use-cases include a variety of object detection and identification tasks. The imagery comes courtesy of the RIT SHARE 2012 collect. Two approaches are used to evaluate the pansharpening methods: qualitative analyst evaluation and quantitative image quality metrics. Visual analyst evaluation results are compared with metric results to determine which metrics best measure the defined image characteristics and product use-cases, and to support future rigorous characterization of the metrics' correlation with the analyst results. Because pansharpening represents a trade between adding spatial information from the panchromatic image and retaining spectral information from the MSI channels, the metrics examined are grouped into spatial improvement metrics and spectral preservation metrics. A single metric to quantify the quality of a pansharpening method would necessarily be a combination of weighted spatial and spectral metrics based on the importance of various spatial and spectral characteristics for the primary task of interest. Appropriate metrics and weights for such a combined metric are proposed here, based on the conducted analyst evaluation. 
Additionally, during this work, a metric was developed specifically focused on assessment of spatial structure improvement relative to a reference image and independent of scene content. Using analysis of Fourier transform images, a measure of high-frequency content is computed in small sub-segments of the image. The average increase in high-frequency content across the image is used as the metric, where averaging across sub-segments combats the scene dependent nature of typical image sharpness techniques. This metric had an improved range of scores, better representing difference in the test set than other common spatial structure metrics.
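A minimal sketch of the described metric, assuming square sub-segments and treating the tile size, radial cutoff, and power-spectrum normalization as our own illustrative choices rather than values from the study:

```python
import numpy as np

def high_freq_fraction(tile, cutoff=0.25):
    """Fraction of spectral energy above a normalized radial frequency cutoff
    (DC component excluded)."""
    F = np.fft.fftshift(np.fft.fft2(tile))
    power = np.abs(F) ** 2
    h, w = tile.shape
    y, x = np.mgrid[:h, :w]
    r = np.hypot((y - h // 2) / h, (x - w // 2) / w)  # normalized radius
    total = power.sum() - power[h // 2, w // 2]       # exclude DC
    if total <= 0:
        return 0.0
    return power[r > cutoff].sum() / total

def sharpness_gain(pan_sharp, reference, tile=32):
    """Average increase in high-frequency content across sub-segments,
    which combats scene-dependence of single-image sharpness measures."""
    gains = []
    h, w = reference.shape
    for i in range(0, h - tile + 1, tile):
        for j in range(0, w - tile + 1, tile):
            gains.append(high_freq_fraction(pan_sharp[i:i + tile, j:j + tile])
                         - high_freq_fraction(reference[i:i + tile, j:j + tile]))
    return float(np.mean(gains))
```

An image with added fine detail scores positive gain against its smoother reference, independent of where in the scene the detail appears.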
NASA Technical Reports Server (NTRS)
McClanahan, T. P.; Mitrofanov, I. G.; Boynton, W. V.; Chin, G.; Livengood, T.; Starr, R. D.; Evans, L. G.; Mazarico, E.; Smith, D. E.
2012-01-01
We present a method and preliminary results related to determining the spatial resolution of orbital neutron detectors using epithermal maps and differential topographic masks. Our technique is similar to coded aperture imaging methods for optimizing photonic signals in telescopes [1]. In that approach, photon masks with known spatial patterns in a telescope aperture are used to systematically restrict incoming photons, which minimizes interference and enhances photon signal to noise. Three orbital neutron detector systems with different stated spatial resolutions are evaluated; the differing spatial resolutions arise due to different orbital altitudes and the use of neutron collimation techniques. 1) The uncollimated Lunar Prospector Neutron Spectrometer (LPNS) system has a spatial resolution of 45 km FWHM from the approx. 30 km altitude mission phase [2]. The Lunar Reconnaissance Orbiter (LRO) Lunar Exploration Neutron Detector (LEND), with two detectors at 50 km altitude, is evaluated here: 2) the collimated 10 km FWHM spatial resolution detector CSETN and 3) LEND's uncollimated Sensor for Epithermal Neutrons (SETN). This provides two orbital altitudes with which to study the effects of collimation (uncollimated vs. collimated) and average altitude on the detectors' fields of view.
Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L.
2013-01-01
Geostatistical methods are widely used in estimating long-term exposures for air pollution epidemiological studies, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian Maximum Entropy (BME) method and applied this framework to estimate fine particulate matter (PM2.5) yearly average concentrations over the contiguous U.S. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM2.5 data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least a 17.8% reduction in mean square error (MSE) in estimating the yearly PM2.5. Moreover, the MWBME method further reduces the MSE by 8.4% to 43.7% as the proportion of incomplete data increased from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM2.5 across large geographical domains with expected spatial non-stationarity. PMID:22739679
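The moving-window idea (local estimation that adapts to non-stationarity, with missing stations simply dropping out) can be illustrated with a simple stand-in interpolator; the BME machinery itself is not reproduced here, and inverse-distance weighting plus the window radius are our assumptions:

```python
import numpy as np

def moving_window_idw(stations, values, targets, radius=3.0, power=2.0):
    """Estimate at each target using only stations within a local window,
    so the interpolation adapts to local (non-stationary) structure.
    NaN values (missing monitoring data) are simply excluded."""
    stations = np.asarray(stations, float)
    values = np.asarray(values, float)
    est = np.full(len(targets), np.nan)
    for k, t in enumerate(np.asarray(targets, float)):
        d = np.linalg.norm(stations - t, axis=1)
        keep = (d <= radius) & ~np.isnan(values)
        if not keep.any():
            continue  # no usable stations inside this window
        w = 1.0 / np.maximum(d[keep], 1e-9) ** power
        est[k] = np.sum(w * values[keep]) / w.sum()
    return est
```

Because the weights are recomputed inside each window, a distant region with a different mean level cannot bias the local estimate.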
Spatial averaging for small molecule diffusion in condensed phase environments
NASA Astrophysics Data System (ADS)
Plattner, Nuria; Doll, J. D.; Meuwly, Markus
2010-07-01
Spatial averaging is a new approach for sampling rare-event problems. The approach modifies the importance function which improves the sampling efficiency while keeping a defined relation to the original statistical distribution. In this work, spatial averaging is applied to multidimensional systems for typical problems arising in physical chemistry. They include (I) a CO molecule diffusing on an amorphous ice surface, (II) a hydrogen molecule probing favorable positions in amorphous ice, and (III) CO migration in myoglobin. The systems encompass a wide range of energy barriers and for all of them spatial averaging is found to outperform conventional Metropolis Monte Carlo. It is also found that optimal simulation parameters are surprisingly similar for the different systems studied, in particular, the radius of the point cloud over which the potential energy function is averaged. For H2 diffusing in amorphous ice it is found that facile migration is possible which is in agreement with previous suggestions from experiment. The free energy barriers involved are typically lower than 1 kcal/mol. Spatial averaging simulations for CO in myoglobin are able to locate all currently characterized metastable states. Overall, it is found that spatial averaging considerably improves the sampling of configurational space.
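A sketch of the core idea, assuming a Metropolis walker whose acceptance test uses the potential energy averaged over a Gaussian point cloud around each position; re-drawing the cloud at every step is a simplification of the full importance-function formalism described in the abstract:

```python
import numpy as np

def spatially_averaged_metropolis(V, x0, steps=20000, step=0.5,
                                  cloud=16, radius=0.3, beta=1.0, seed=0):
    """Metropolis Monte Carlo in which acceptance uses V averaged over a
    point cloud of given radius, smoothing barriers and easing rare-event
    sampling (a simplification of the spatial-averaging scheme)."""
    rng = np.random.default_rng(seed)

    def V_avg(x):
        eps = rng.normal(scale=radius, size=(cloud, x.size))
        return np.mean([V(x + e) for e in eps])

    x = np.atleast_1d(np.asarray(x0, float))
    vx = V_avg(x)
    samples = np.empty((steps, x.size))
    for i in range(steps):
        y = x + rng.normal(scale=step, size=x.size)
        vy = V_avg(y)
        if rng.random() < np.exp(min(0.0, -beta * (vy - vx))):
            x, vx = y, vy
        samples[i] = x
    return samples
```

On a quartic double well the averaged walker crosses the barrier readily, illustrating the improved configurational sampling relative to plain Metropolis at the same temperature.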
Stress before and after the 2002 Denali fault earthquake
Wesson, R.L.; Boyd, O.S.
2007-01-01
Spatially averaged, absolute deviatoric stress tensors along the faults ruptured during the 2002 Denali fault earthquake, both before and after the event, are derived, using a new method, from estimates of the orientations of the principal stresses and the stress change associated with the earthquake. Stresses are estimated in three regions along the Denali fault, one of which also includes the Susitna Glacier fault, and one region along the Totschunda fault. Estimates of the spatially averaged shear stress before the earthquake resolved onto the faults that ruptured during the event range from near 1 MPa to near 4 MPa. Shear stresses estimated along the faults in all these regions after the event are near zero (0 ± 1 MPa). These results suggest that deviatoric stresses averaged over a few tens of km along strike are low, and that the stress drop during the earthquake was complete or nearly so.
Correction for spatial averaging in laser speckle contrast analysis
Thompson, Oliver; Andrews, Michael; Hirst, Evan
2011-01-01
Practical laser speckle contrast analysis systems face a problem of spatial averaging of speckles, due to the pixel size in the cameras used. Existing practice is to use a system factor in speckle contrast analysis to account for spatial averaging. The linearity of the system factor correction has not previously been confirmed. The problem of spatial averaging is illustrated using computer simulation of time-integrated dynamic speckle, and the linearity of the correction confirmed using both computer simulation and experimental results. The valid linear correction allows various useful compromises in the system design. PMID:21483623
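The quantity being corrected can be sketched as follows; the window size is an illustrative choice, and the linear system-factor correction follows the form the paper confirms to be valid:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def speckle_contrast_map(img, win=7):
    """Local speckle contrast K = sigma / mean over win x win neighborhoods."""
    w = sliding_window_view(img.astype(float), (win, win))
    mu = w.mean(axis=(-2, -1))
    sd = w.std(axis=(-2, -1))
    return sd / np.maximum(mu, 1e-12)

def corrected_contrast(K_measured, system_factor):
    """Linear correction for pixel-area spatial averaging of speckles:
    finite pixels blur speckles and depress the measured contrast by a
    roughly constant factor, so dividing it out recovers K."""
    return K_measured / system_factor
```

A uniform intensity field yields zero contrast everywhere, while under-resolved speckle yields a depressed K that the system factor restores.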
Spatial probabilistic pulsatility model for enhancing photoplethysmographic imaging systems
NASA Astrophysics Data System (ADS)
Amelard, Robert; Clausi, David A.; Wong, Alexander
2016-11-01
Photoplethysmographic imaging (PPGI) is a widefield noncontact biophotonic technology able to remotely monitor cardiovascular function over anatomical areas. Although spatial context can provide insight into physiologically relevant sampling locations, existing PPGI systems rely on coarse spatial averaging with no anatomical priors for assessing arterial pulsatility. Here, we developed a continuous probabilistic pulsatility model for importance-weighted blood pulse waveform extraction. Using a data-driven approach, the model was constructed using a 23 participant sample with a large demographic variability (11/12 female/male, age 11 to 60 years, BMI 16.4 to 35.1 kg·m-2). Using time-synchronized ground-truth blood pulse waveforms, spatial correlation priors were computed and projected into a coaligned importance-weighted Cartesian space. A modified Parzen-Rosenblatt kernel density estimation method was used to compute the continuous resolution-agnostic probabilistic pulsatility model. The model identified locations that consistently exhibited pulsatility across the sample. Blood pulse waveform signals extracted with the model exhibited significantly stronger temporal correlation (W=35,p<0.01) and spectral SNR (W=31,p<0.01) compared to uniform spatial averaging. Heart rate estimation was in strong agreement with true heart rate [r2=0.9619, error (μ,σ)=(0.52,1.69) bpm].
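A minimal Parzen-Rosenblatt sketch of the weighted, resolution-agnostic density; the Gaussian kernel, bandwidth, and normalized coordinate space are illustrative assumptions rather than the authors' exact construction:

```python
import numpy as np

def pulsatility_density(sites, weights, grid, bandwidth=0.05):
    """Kernel density over normalized 2-D anatomical coordinates, weighted
    by per-site pulsatility (correlation-prior) values. Because the model
    is a continuous function, it can be queried at any resolution."""
    sites = np.asarray(sites, float)    # (n, 2) coordinates in [0, 1]^2
    weights = np.asarray(weights, float)
    grid = np.asarray(grid, float)      # (m, 2) query points
    w = weights / weights.sum()
    d2 = ((grid[:, None, :] - sites[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2.0 * bandwidth ** 2)) / (2.0 * np.pi * bandwidth ** 2)
    return k @ w
```

Signal extraction would then weight each pixel's waveform by this density instead of averaging uniformly over the region of interest.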
Characterizing the Spatial Contiguity of Extreme Precipitation over the US in the Recent Past
NASA Astrophysics Data System (ADS)
Touma, D. E.; Swain, D. L.; Diffenbaugh, N. S.
2016-12-01
The spatial characteristics of extreme precipitation over an area can define the hydrologic response in a basin, subsequently affecting the flood risk in the region. Here, we examine the spatial extent of extreme precipitation in the US by defining its "footprint": a contiguous area of rainfall exceeding a certain threshold (e.g., 90th percentile) on a given day. We first characterize the climatology of extreme rainfall footprint sizes across the US from 1980-2015 using Daymet, a high-resolution observational gridded rainfall dataset. We find that there are distinct regional and seasonal differences in average footprint sizes of extreme daily rainfall. In the winter, the Midwest shows footprints exceeding 500,000 sq. km while the Front Range exhibits footprints of 10,000 sq. km. In contrast, the summer average footprint size is generally smaller and more uniform across the US, ranging from 10,000 sq. km in the Southwest to 100,000 sq. km in Montana and North Dakota. Moreover, we find some significant increasing trends in average footprint size between 1980 and 2015, specifically in the Southwest in the winter and the Northeast in the spring. While gridded daily rainfall datasets allow for a practical framework in calculating footprint size, this calculation heavily depends on the interpolation methods that have been used in creating the dataset. Therefore, we assess footprint size using the GHCN-Daily station network and use geostatistical methods to define footprints of extreme rainfall directly from station data. Compared to the findings from Daymet, preliminary results using this method show fewer small daily footprint sizes over the US while large footprints are of similar number and magnitude to Daymet.
Overall, defining the spatial characteristics of extreme rainfall as well as observed and expected changes in these characteristics allows us to better understand the hydrologic response to extreme rainfall and how to better characterize flood risks.
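The footprint definition above (a contiguous area exceeding a percentile threshold on a given day) can be sketched with connected-component labeling; 4-connectivity and the strict-exceedance convention are our choices, not necessarily the authors':

```python
import numpy as np
from scipy import ndimage

def footprint_sizes(rain, threshold_pct=90, cell_area_km2=1.0):
    """Sizes (km^2) of contiguous regions where daily rainfall exceeds the
    given percentile threshold of the field."""
    thr = np.percentile(rain, threshold_pct)
    mask = rain > thr
    labels, n = ndimage.label(mask)           # default structure = 4-connectivity
    sizes = np.bincount(labels.ravel())[1:]   # cell count per labeled footprint
    return sizes * cell_area_km2
```

On gridded data each footprint's area is the cell count times the cell area; on station data the same labeling idea requires a geostatistical notion of contiguity, as the abstract notes.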
Spatial analysis of county-based gonorrhoea incidence in mainland China, from 2004 to 2009.
Yin, Fei; Feng, Zijian; Li, Xiaosong
2012-07-01
Gonorrhoea is one of the most common sexually transmissible infections in mainland China. Effective spatial monitoring of gonorrhoea incidence is important for successful implementation of control and prevention programs. County-level gonorrhoea incidence rates across all of mainland China were monitored by examining their spatial patterns. County-level data on gonorrhoea cases between 2004 and 2009 were obtained from the China Information System for Disease Control and Prevention. Bayesian smoothing and exploratory spatial data analysis (ESDA) methods were used to characterise the spatial distribution pattern of gonorrhoea cases. During the 6-year study period, the average annual gonorrhoea incidence was 12.41 cases per 100000 people. Using empirical Bayes smoothed rates, the local Moran test identified one significant single-centre cluster and two significant multi-centre clusters of high gonorrhoea risk (all P-values <0.01). Bayesian smoothing and ESDA methods can assist public health officials in using gonorrhoea surveillance data to identify high risk areas. Allocating more resources to such areas could effectively reduce gonorrhoea incidence.
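A sketch of global, method-of-moments empirical Bayes rate smoothing, which shrinks unstable small-population county rates toward the overall mean before cluster testing; the study's spatial EB variant and the local Moran statistics themselves are not reproduced:

```python
import numpy as np

def empirical_bayes_smooth(cases, pop):
    """Shrink each area's raw rate toward the overall mean, with stronger
    shrinkage for smaller populations (method-of-moments estimate of the
    between-area variance)."""
    cases = np.asarray(cases, float)
    pop = np.asarray(pop, float)
    rate = cases / pop
    m = cases.sum() / pop.sum()                      # overall mean rate
    s2 = np.sum(pop * (rate - m) ** 2) / pop.sum()   # pop-weighted variance
    a = max(s2 - m * len(pop) / pop.sum(), 0.0)      # between-area variance
    w = a / (a + m / pop)                            # shrinkage weight per area
    return w * rate + (1.0 - w) * m
```

Each smoothed rate lies between the raw rate and the overall mean, so counties with few cases no longer dominate the map with spuriously extreme values.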
Scaling field data to calibrate and validate moderate spatial resolution remote sensing models
Baccini, A.; Friedl, M.A.; Woodcock, C.E.; Zhu, Z.
2007-01-01
Validation and calibration are essential components of nearly all remote sensing-based studies. In both cases, ground measurements are collected and then related to the remote sensing observations or model results. In many situations, and particularly in studies that use moderate resolution remote sensing, a mismatch exists between the sensor's field of view and the scale at which in situ measurements are collected. The use of in situ measurements for model calibration and validation, therefore, requires a robust and defensible method to spatially aggregate ground measurements to the scale at which the remotely sensed data are acquired. This paper examines this challenge and specifically considers two different approaches for aggregating field measurements to match the spatial resolution of moderate spatial resolution remote sensing data: (a) landscape stratification; and (b) averaging of fine spatial resolution maps. The results show that an empirically estimated stratification based on a regression tree method provides a statistically defensible and operational basis for performing this type of procedure.
Spatial-frequency dependent binocular imbalance in amblyopia
Kwon, MiYoung; Wiecek, Emily; Dakin, Steven C.; Bex, Peter J.
2015-01-01
While amblyopia involves both binocular imbalance and deficits in processing high spatial frequency information, little is known about the spatial-frequency dependence of binocular imbalance. Here we examined binocular imbalance as a function of spatial frequency in amblyopia using a novel computer-based method. Binocular imbalance at four spatial frequencies was measured with a novel dichoptic letter chart in individuals with amblyopia, or normal vision. Our dichoptic letter chart was composed of band-pass filtered letters arranged in a layout similar to the ETDRS acuity chart. A different chart was presented to each eye of the observer via stereo-shutter glasses. The relative contrast of the corresponding letter in each eye was adjusted by a computer staircase to determine a binocular Balance Point at which the observer reports the letter presented to either eye with equal probability. Amblyopes showed pronounced binocular imbalance across all spatial frequencies, with greater imbalance at high compared to low spatial frequencies (an average increase of 19%, p < 0.01). Good test-retest reliability of the method was demonstrated by the Bland-Altman plot. Our findings suggest that spatial-frequency dependent binocular imbalance may be useful for diagnosing amblyopia and as an outcome measure for recovery of binocular vision following therapy. PMID:26603125
Mathes, Robert W; Lall, Ramona; Levin-Rector, Alison; Sell, Jessica; Paladini, Marc; Konty, Kevin J; Olson, Don; Weiss, Don
2017-01-01
The New York City Department of Health and Mental Hygiene has operated an emergency department syndromic surveillance system since 2001, using temporal and spatial scan statistics run on a daily basis for cluster detection. Since the system was originally implemented, a number of new methods have been proposed for use in cluster detection. We evaluated six temporal and four spatial/spatio-temporal detection methods using syndromic surveillance data spiked with simulated injections. The algorithms were compared on several metrics, including sensitivity, specificity, positive predictive value, coherence, and timeliness. We also evaluated each method's implementation, programming time, run time, and the ease of use. Among the temporal methods, at a set specificity of 95%, a Holt-Winters exponential smoother performed the best, detecting 19% of the simulated injects across all shapes and sizes, followed by an autoregressive moving average model (16%), a generalized linear model (15%), a modified version of the Early Aberration Reporting System's C2 algorithm (13%), a temporal scan statistic (11%), and a cumulative sum control chart (<2%). Of the spatial/spatio-temporal methods we tested, a spatial scan statistic detected 3% of all injects, a Bayes regression found 2%, and a generalized linear mixed model and a space-time permutation scan statistic detected none at a specificity of 95%. Positive predictive value was low (<7%) for all methods. Overall, the detection methods we tested did not perform well in identifying the temporal and spatial clusters of cases in the inject dataset. The spatial scan statistic, our current method for spatial cluster detection, performed slightly better than the other tested methods across different inject magnitudes and types. Furthermore, we found the scan statistics, as applied in the SaTScan software package, to be the easiest to program and implement for daily data analysis.
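One of the evaluated detectors, the EARS C2-style algorithm, is simple enough to sketch; the 7-day baseline, 2-day lag, and 3-sigma threshold are conventional choices, not necessarily those used in the evaluation:

```python
import numpy as np

def ears_c2(counts, baseline=7, lag=2, threshold=3.0):
    """C2-style aberration detector: flag day t when its count exceeds the
    mean of a lagged baseline window by `threshold` standard deviations."""
    counts = np.asarray(counts, float)
    flags = np.zeros(len(counts), bool)
    for t in range(baseline + lag, len(counts)):
        ref = counts[t - lag - baseline:t - lag]   # baseline, lagged by 2 days
        sd = max(ref.std(ddof=1), 1e-9)            # guard against zero variance
        flags[t] = (counts[t] - ref.mean()) / sd > threshold
    return flags
```

The lag keeps the days immediately before an emerging outbreak out of the baseline, so a slow ramp-up does not mask itself.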
Temporal and Spatial Variation of Water Yield Modulus in the Yangtze River Basin in Recent 60 Years
NASA Astrophysics Data System (ADS)
Shi, Xiaoqing; Weng, Baisha; Qin, Tianling
2018-01-01
The Yangtze River Basin is the largest river basin in Asia and the third largest in the world; its gross water resources rank first among the river basins of the country, and it occupies an important position in the national water resources strategic layout. Under the influence of climate change and human activities, the water cycle has changed: the temporal and spatial distribution of precipitation in the basin is more uneven and floods are frequent. In order to explore the water yield condition in the Yangtze River Basin, we selected the Water Yield Modulus (WYM) as the evaluation index and analyzed the temporal and spatial evolution characteristics of the WYM in the basin using the climate tendency method and the M-K (Mann-Kendall) trend test. The results show that the average WYM values across the basin in 1956-2015 range from 103,600 to 1,262,900 m3/km2, with an average of 562,300 m3/km2, which is greater than the national average of 295,000 m3/km2. The minimum appeared in the northwestern part of the Tongtian River district and the maximum in the northeastern part of the Dongting Lake district. The rate of change in 1956-2015 ranges from -0.68/a to 0.79/a, showing a downward (but not significant) trend in the western part and an upward trend in the eastern part that reaches the α = 0.01 significance level. The minimum trend appeared in the Tongtian River district, the largest in the Hangjia Lake district, and the average tendency rate over the whole basin is 0.04/a.
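The M-K (Mann-Kendall) trend test used above can be sketched as follows (normal approximation, no tie correction, which suffices for continuous series):

```python
import numpy as np
from scipy import stats

def mann_kendall(x):
    """Mann-Kendall trend test: S statistic, continuity-corrected Z, and
    two-sided p-value under the no-trend null hypothesis."""
    x = np.asarray(x, float)
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var) if s != 0 else 0.0
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    return s, z, p
```

Being rank-based, the test is insensitive to outliers and to the marginal distribution of the annual WYM values, which is why it is a standard choice for hydrological series.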
A research of road centerline extraction algorithm from high resolution remote sensing images
NASA Astrophysics Data System (ADS)
Zhang, Yushan; Xu, Tingfa
2017-09-01
Satellite remote sensing technology has become one of the most effective methods for land surface monitoring in recent years, due to advantages such as its short revisit period, large coverage and rich information content. Road extraction is an important application of high resolution remote sensing images, and an intelligent, automatic road extraction algorithm with high precision has great significance for transportation, road network updating and urban planning. Fuzzy c-means (FCM) clustering segmentation algorithms have been used in road extraction, but the traditional algorithms do not consider spatial information. An improved fuzzy c-means clustering algorithm combined with spatial information (SFCM) is proposed in this paper, which is shown to be effective for noisy image segmentation. Firstly, the image is segmented using the SFCM. Secondly, the segmentation result is processed by mathematical morphology to remove the joint regions. Thirdly, the road centerlines are extracted by morphological thinning and burr trimming. The average integrity of the centerline extraction algorithm is 97.98%, the average accuracy is 95.36% and the average quality is 93.59%. Experimental results show that the proposed method is effective for road centerline extraction.
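One simplified reading of "FCM combined with spatial information": standard fuzzy c-means on pixel intensities, with each membership map blended with its 3x3 neighborhood average at every iteration. The blending weight and neighborhood are our assumptions, not the paper's SFCM formulation:

```python
import numpy as np
from scipy import ndimage

def sfcm(img, c=2, m=2.0, alpha=0.5, iters=30, seed=0):
    """Fuzzy c-means with a spatial smoothing step on the memberships,
    which suppresses isolated noisy pixels in the segmentation."""
    rng = np.random.default_rng(seed)
    x = img.astype(float).ravel()
    u = rng.random((c, x.size))
    u /= u.sum(0)
    centers = np.zeros(c)
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(1)                       # weighted cluster means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9   # avoid divide-by-zero
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(0)
        # spatial step: average each membership map over a 3x3 window
        h = np.stack([ndimage.uniform_filter(ui.reshape(img.shape), size=3).ravel()
                      for ui in u])
        u = (1.0 - alpha) * u + alpha * h
        u /= u.sum(0)
    return centers, u.argmax(0).reshape(img.shape)
```

On a noiseless two-level image the centers converge near the two intensity levels and the labels split the image accordingly.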
Smith, Brian J; Zhang, Lixun; Field, R William
2007-11-10
This paper presents a Bayesian model that allows for the joint prediction of county-average radon levels and estimation of the associated leukaemia risk. The methods are motivated by radon data from an epidemiologic study of residential radon in Iowa that include 2726 outdoor and indoor measurements. Prediction of county-average radon is based on a geostatistical model for the radon data which assumes an underlying continuous spatial process. In the radon model, we account for uncertainties due to incomplete spatial coverage, spatial variability, characteristic differences between homes, and detector measurement error. The predicted radon averages are, in turn, included as a covariate in Poisson models for incident cases of acute lymphocytic (ALL), acute myelogenous (AML), chronic lymphocytic (CLL), and chronic myelogenous (CML) leukaemias reported to the Iowa cancer registry from 1973 to 2002. Since radon and leukaemia risk are modelled simultaneously in our approach, the resulting risk estimates accurately reflect uncertainties in the predicted radon exposure covariate. Posterior mean (95 per cent Bayesian credible interval) estimates of the relative risk associated with a 1 pCi/L increase in radon for ALL, AML, CLL, and CML are 0.91 (0.78-1.03), 1.01 (0.92-1.12), 1.06 (0.96-1.16), and 1.12 (0.98-1.27), respectively. Copyright 2007 John Wiley & Sons, Ltd.
A method for determining average beach slope and beach slope variability for U.S. sandy coastlines
Doran, Kara S.; Long, Joseph W.; Overbeck, Jacquelyn R.
2015-01-01
The U.S. Geological Survey (USGS) National Assessment of Hurricane-Induced Coastal Erosion Hazards compares measurements of beach morphology with storm-induced total water levels to produce forecasts of coastal change for storms impacting the Gulf of Mexico and Atlantic coastlines of the United States. The wave-induced water level component (wave setup and swash) is estimated by using modeled offshore wave height and period and measured beach slope (from dune toe to shoreline) through the empirical parameterization of Stockdon and others (2006). Spatial and temporal variability in beach slope leads to corresponding variability in predicted wave setup and swash. For instance, seasonal and storm-induced changes in beach slope can lead to differences on the order of 1 meter (m) in wave-induced water level elevation, making accurate specification of this parameter and its associated uncertainty essential to skillful forecasts of coastal change. A method for calculating spatially and temporally averaged beach slopes is presented here along with a method for determining total uncertainty for each 200-m alongshore section of coastline.
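Per-section averaging of slope measurements, with a variability estimate for each alongshore section, can be sketched as follows; the inputs (slope samples with alongshore positions) and the handling of empty sections are our assumptions:

```python
import numpy as np

def section_slope_stats(slopes, positions, section=200.0):
    """Mean beach slope and sample standard deviation for each `section`-meter
    alongshore bin; empty bins yield NaN."""
    slopes = np.asarray(slopes, float)
    positions = np.asarray(positions, float)
    edges = np.arange(positions.min(), positions.max() + section, section)
    idx = np.digitize(positions, edges) - 1
    means, sds = [], []
    for k in range(len(edges) - 1):
        s = slopes[idx == k]
        means.append(s.mean() if s.size else np.nan)
        sds.append(s.std(ddof=1) if s.size > 1 else np.nan)
    return np.array(means), np.array(sds)
```

The per-section standard deviation is the quantity that propagates into the uncertainty of the wave setup and swash estimate.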
Spatial and Temporal Evolution of Evaporation in a Drying Soil
NASA Astrophysics Data System (ADS)
Eichinger, W.; Nichols, J.; Cooper, D.; Prueger, J.
2005-12-01
The Los Alamos Scanning Raman Lidar is capable of making spatially resolved estimates of evapotranspiration over an area approaching a square kilometer, with relatively fine (25 meter) spatial resolution, using three-dimensional measurements of water vapor concentrations. The method is based upon Monin-Obukhov similarity theory applied to spatially and temporally averaged data. During SMEX02, the instrument was positioned between fields of corn and soybeans. Periodic maps of evapotranspiration rates over the two fields are presented. The maps show a relatively uniform response in the early morning, when surface moisture is available, and its progression through the day as surface water becomes increasingly limited. The change in ET rates between the two crop types is noted, as are the spatial patterns as the surface dries non-uniformly.
Linear optical quantum computing in a single spatial mode.
Humphreys, Peter C; Metcalf, Benjamin J; Spring, Justin B; Moore, Merritt; Jin, Xian-Min; Barbieri, Marco; Kolthammer, W Steven; Walmsley, Ian A
2013-10-11
We present a scheme for linear optical quantum computing using time-bin-encoded qubits in a single spatial mode. We show methods for single-qubit operations and heralded controlled-phase (cphase) gates, providing a sufficient set of operations for universal quantum computing with the Knill-Laflamme-Milburn [Nature (London) 409, 46 (2001)] scheme. Our protocol is suited to currently available photonic devices and ideally allows arbitrary numbers of qubits to be encoded in the same spatial mode, demonstrating the potential for time-frequency modes to dramatically increase the quantum information capacity of fixed spatial resources. As a test of our scheme, we demonstrate the first entirely single spatial mode implementation of a two-qubit quantum gate and show its operation with an average fidelity of 0.84±0.07.
NASA Technical Reports Server (NTRS)
Sellers, Piers
2012-01-01
Soil wetness typically shows great spatial variability over the length scales of general circulation model (GCM) grid areas (approx. 100 km), and the functions relating evapotranspiration and photosynthetic rate to local-scale (approx. 1 m) soil wetness are highly non-linear. Soil respiration is also highly dependent on very small-scale variations in soil wetness. We therefore expect significant inaccuracies whenever we insert a single grid-area-average soil wetness value into a function to calculate any of these rates for the grid area. For the particular case of evapotranspiration, this method - use of a grid-averaged soil wetness value - can also provoke severe oscillations in the evapotranspiration rate and soil wetness under some conditions. A method is presented whereby the probability distribution function (pdf) for soil wetness within a grid area is represented by binning, and numerical integration of the binned pdf is performed to provide a spatially integrated wetness stress term for the whole grid area, which then permits calculation of grid-area fluxes in a single operation. The method is very accurate when 10 or more bins are used, can deal realistically with spatially variable precipitation, conserves moisture exactly and allows for precise modification of the soil wetness pdf after every time step. The method could also be applied to other ecological problems where small-scale processes must be area-integrated, or upscaled, to estimate fluxes over large areas, for example in treatments of the terrestrial carbon budget or trace gas generation.
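The binned-pdf integration can be sketched directly; the bin count and the example flux function are illustrative, not the scheme's operational values:

```python
import numpy as np

def grid_flux_binned(wetness, flux_fn, nbins=10):
    """Grid-area flux from the binned pdf of local soil wetness: integrate
    flux_fn over the wetness histogram instead of applying it once to the
    single area-average wetness (which biases nonlinear fluxes)."""
    counts, edges = np.histogram(wetness, bins=nbins, range=(0.0, 1.0))
    mids = 0.5 * (edges[:-1] + edges[1:])
    p = counts / counts.sum()          # discrete pdf over wetness bins
    return np.sum(p * flux_fn(mids))   # one-shot integration for the grid area
```

For a convex flux function the binned integral exceeds the flux of the mean wetness, which is exactly the bias the abstract warns about.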
Graichen, Uwe; Eichardt, Roland; Fiedler, Patrique; Strohmeier, Daniel; Zanow, Frank; Haueisen, Jens
2015-01-01
Important requirements for the analysis of multichannel EEG data are efficient techniques for signal enhancement, signal decomposition, feature extraction, and dimensionality reduction. We propose a new approach for spatial harmonic analysis (SPHARA) that extends the classical spatial Fourier analysis to EEG sensors positioned non-uniformly on the surface of the head. The proposed method is based on the eigenanalysis of the discrete Laplace-Beltrami operator defined on a triangular mesh. We present several ways to discretize the continuous Laplace-Beltrami operator and compare the properties of the resulting basis functions computed using these discretization methods. We apply SPHARA to somatosensory evoked potential data from eleven volunteers and demonstrate the ability of the method for spatial data decomposition, dimensionality reduction and noise suppression. When employing SPHARA for dimensionality reduction, a significantly more compact representation can be achieved using the FEM approach, compared to the other discretization methods. Using FEM, to recover 95% and 99% of the total energy of the EEG data, on average only 35% and 58% of the coefficients are necessary. The capability of SPHARA for noise suppression is shown using artificial data. We conclude that SPHARA can be used for spatial harmonic analysis of multi-sensor data at arbitrary positions and can be utilized in a variety of other applications.
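A graph-Laplacian eigenbasis is a simplified stand-in for the mesh Laplace-Beltrami discretizations compared in the paper (the adjacency structure here is hypothetical), but it shows the decomposition and dimensionality-reduction workflow:

```python
import numpy as np

def spatial_harmonic_basis(adj):
    """Eigenpairs of the combinatorial graph Laplacian built from sensor
    adjacency; eigenvectors are ordered from spatially smooth (small
    eigenvalue) to oscillatory (large eigenvalue)."""
    adj = np.asarray(adj, float)
    L = np.diag(adj.sum(1)) - adj
    vals, vecs = np.linalg.eigh(L)   # ascending eigenvalues
    return vals, vecs

def compress(signal, vecs, k):
    """Project onto the k smoothest harmonics and reconstruct
    (dimensionality reduction / low-pass denoising)."""
    coeffs = vecs.T @ signal
    coeffs[k:] = 0.0
    return vecs @ coeffs
```

Keeping only the low-order harmonics retains most of the energy of smooth scalp potentials while discarding spatially incoherent noise, analogous to the 35%/58% coefficient counts reported for the FEM discretization.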
NASA Astrophysics Data System (ADS)
Tang, Zhongqian; Zhang, Hua; Yi, Shanzhen; Xiao, Yangfan
2018-03-01
GIS-based multi-criteria decision analysis (MCDA) is increasingly used to support flood risk assessment. However, conventional GIS-MCDA methods fail to adequately represent spatial variability and are accompanied by considerable uncertainty. It is, thus, important to incorporate spatial variability and uncertainty into GIS-based decision analysis procedures. This research develops a spatially explicit, probabilistic GIS-MCDA approach for the delineation of potentially flood susceptible areas. The approach integrates the probabilistic and the local ordered weighted averaging (OWA) methods via Monte Carlo simulation, to take into account the uncertainty related to criteria weights, spatial heterogeneity of preferences and the risk attitude of the analyst. The approach is applied to a pilot study of Gucheng County, central China, which was heavily affected by the hazardous 2012 flood. A GIS database of six geomorphological and hydrometeorological factors for the evaluation of susceptibility was created. Moreover, uncertainty and sensitivity analyses were performed to investigate the robustness of the model. The results indicate that the ensemble method improves the robustness of the model outcomes with respect to variation in criteria weights and identifies which criteria weights are most responsible for the variability of model outcomes. Therefore, the proposed approach is an improvement over the conventional deterministic method and can provide a more rational, objective and unbiased tool for flood susceptibility evaluation.
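The OWA aggregation with Monte Carlo sampling over uncertain weights can be sketched as follows; for brevity only order weights are modeled, omitting the separate criterion-importance weights and spatially local weighting of the full method:

```python
import numpy as np

def owa(scores, weights):
    """Ordered weighted average: weights apply to the scores sorted in
    descending order, encoding the analyst's risk attitude (weight on the
    largest score = optimistic, on the smallest = pessimistic)."""
    return np.sort(np.asarray(scores, float))[::-1] @ np.asarray(weights, float)

def probabilistic_owa(criteria, weight_samples):
    """Monte Carlo over uncertain order-weight vectors: mean and standard
    deviation of the OWA susceptibility score across the ensemble."""
    vals = np.array([owa(criteria, w) for w in weight_samples])
    return vals.mean(), vals.std()
```

The ensemble standard deviation per map cell is what the uncertainty analysis inspects, and varying one weight at a time while holding the rest fixed gives the sensitivity ranking.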
Autonomous rock detection on mars through region contrast
NASA Astrophysics Data System (ADS)
Xiao, Xueming; Cui, Hutao; Yao, Meibao; Tian, Yang
2017-08-01
In this paper, we present a new autonomous rock detection approach based on region contrast. Unlike current state-of-the-art pixel-level rock segmentation methods, the new method addresses the problem at the region level, which significantly reduces the computational cost. The image is first split into homogeneous regions based on intensity information and spatial layout. Given the tight memory constraints of the onboard flight processor, only low-level features, the average intensity and variation of each superpixel, are measured. Region contrast is derived as the integration of intensity contrast and a smoothness measurement. Rocks are then segmented from the resulting contrast map by an adaptive threshold. Since a purely intensity-based method may cause false detections in background areas illuminated differently from their surroundings, a more reliable method is further proposed by introducing a spatial factor and background similarity into the region contrast. The spatial factor expresses the locality of contrast, while the background similarity gives the probability of each subregion belonging to the background. Our method is efficient on large images and needs only a few parameters. Preliminary experimental results show that our algorithm outperforms edge-based methods on various grayscale rover images.
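The intensity part of such a region-contrast computation can be sketched as follows; the size weighting and normalization are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def region_contrast(mean_intensity, region_size):
    """Contrast of each superpixel: size-weighted mean absolute
    intensity difference to all other regions."""
    m = np.asarray(mean_intensity, float)
    s = np.asarray(region_size, float)
    return np.array([np.sum(s * np.abs(m - mi)) / s.sum() for mi in m])
```

A bright rock region against a large dark homogeneous background scores high on this map; an adaptive threshold then separates rock regions from background.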
Spatially averaged flow over a wavy boundary revisited
McLean, S.R.; Wolfe, S.R.; Nelson, J.M.
1999-01-01
Vertical profiles of streamwise velocity measured over bed forms are commonly used to deduce boundary shear stress for the purpose of estimating sediment transport. These profiles may be derived locally or from some sort of spatial average. Arguments for using the latter procedure are based on the assumption that spatial averaging of the momentum equation effectively removes local accelerations from the problem. Using analogies based on steady, uniform flows, it has been argued that the spatially averaged velocity profiles are approximately logarithmic and can be used to infer values of boundary shear stress. This technique of using logarithmic profiles is investigated using detailed laboratory measurements of flow structure and boundary shear stress over fixed two-dimensional bed forms. Spatial averages over the length of the bed form of mean velocity measurements at constant distances from the mean bed elevation yield vertical profiles that are highly logarithmic even though the effect of the bottom topography is observed throughout the water column. However, logarithmic fits of these averaged profiles do not yield accurate estimates of the measured total boundary shear stress. Copyright 1999 by the American Geophysical Union.
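The log-law fitting procedure being tested can be sketched with a hypothetical helper, using the standard law-of-the-wall form u(z) = (u*/κ) ln(z/z0) and τ = ρ u*² (the function name and constants are ours):

```python
import numpy as np

KAPPA = 0.41   # von Karman constant
RHO = 1000.0   # water density, kg/m^3

def shear_from_log_fit(z, u):
    """Fit u = a*ln(z) + b, then infer u* = kappa*a, z0 = exp(-b/a),
    and the implied boundary shear stress tau = rho * u*^2."""
    a, b = np.polyfit(np.log(z), u, 1)
    u_star = KAPPA * a
    z0 = np.exp(-b / a)
    return u_star, z0, RHO * u_star**2
```

The paper's point is that even when spatially averaged profiles look convincingly logarithmic, the shear stress inferred this way need not match the measured total boundary shear stress.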
NASA Technical Reports Server (NTRS)
Mackenzie, Anne I.; Lawrence, Roland W.
2000-01-01
As new radiometer technologies provide the possibility of greatly improved spatial resolution, their performance must also be evaluated in terms of expected sensitivity and absolute accuracy. As aperture size increases, the sensitivity of a Dicke mode radiometer can be maintained or improved by application of any or all of three digital averaging techniques: antenna data averaging with a greater than 50% antenna duty cycle, reference data averaging, and gain averaging. An experimental, noise-injection, benchtop radiometer at C-band showed a 68.5% reduction in Delta-T after all three averaging methods had been applied simultaneously. For any one antenna integration time, the optimum 34.8% reduction in Delta-T was realized by using an 83.3% antenna/reference duty cycle.
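An idealized sensitivity model illustrates why these averaging techniques help. The formula below is our simplifying assumption for illustration, not the paper's exact radiometer model:

```python
import math

def dicke_delta_t(t_sys, bandwidth, cycle_time, antenna_duty=0.5, ref_cycles=1):
    """Idealized Dicke-mode sensitivity (Delta-T, kelvin).

    Averaging the reference data over ref_cycles cycles lengthens the
    effective reference integration time, which is what makes a >50%
    antenna duty cycle pay off.
    """
    tau_ant = antenna_duty * cycle_time
    tau_ref = ref_cycles * (1.0 - antenna_duty) * cycle_time
    return t_sys * math.sqrt(1.0 / (bandwidth * tau_ant)
                             + 1.0 / (bandwidth * tau_ref))
```

With a classic 50% duty cycle and no reference averaging this reduces to the textbook 2·T_sys/√(Bτ); an 83.3% antenna duty cycle combined with multi-cycle reference averaging yields a markedly smaller Delta-T, consistent in spirit with the reductions reported above.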
Sutherland, Chris; Munoz, David; Miller, David A.W.; Grant, Evan H. Campbell
2016-01-01
Spatial capture–recapture (SCR) is a relatively recent development in ecological statistics that provides a spatial context for estimating abundance and space use patterns, and improves inference about absolute population density. SCR has been applied to individual encounter data collected noninvasively using methods such as camera traps, hair snares, and scat surveys. Despite the widespread use of capture-based surveys to monitor amphibians and reptiles, there are few applications of SCR in the herpetological literature. We demonstrate the utility of the application of SCR for studies of reptiles and amphibians by analyzing capture–recapture data from Red-Backed Salamanders, Plethodon cinereus, collected using artificial cover boards. Using SCR to analyze spatial encounter histories of marked individuals, we found evidence that density differed little among four sites within the same forest (on average, 1.59 salamanders/m2) and that salamander detection probability peaked in early October (Julian day 278) reflecting expected surface activity patterns of the species. The spatial scale of detectability, a measure of space use, indicates that the home range size for this population of Red-Backed Salamanders in autumn was 16.89 m2. Surveying reptiles and amphibians using artificial cover boards regularly generates spatial encounter history data of known individuals, which can readily be analyzed using SCR methods, providing estimates of absolute density and inference about the spatial scale of habitat use.
Precoded spatial multiplexing MIMO system with spatial component interleaver.
Gao, Xiang; Wu, Zhanji
In this paper, the performance of precoded bit-interleaved coded modulation (BICM) spatial multiplexing multiple-input multiple-output (MIMO) system with spatial component interleaver is investigated. For the ideal precoded spatial multiplexing MIMO system with spatial component interleaver based on singular value decomposition (SVD) of the MIMO channel, the average pairwise error probability (PEP) of coded bits is derived. Based on the PEP analysis, the optimum spatial Q-component interleaver design criterion is provided to achieve the minimum error probability. For the limited feedback precoded proposed scheme with linear zero forcing (ZF) receiver, in order to minimize a bound on the average probability of a symbol vector error, a novel effective signal-to-noise ratio (SNR)-based precoding matrix selection criterion and a simplified criterion are proposed. Based on the average mutual information (AMI)-maximization criterion, the optimal constellation rotation angles are investigated. Simulation results indicate that the optimized spatial multiplexing MIMO system with spatial component interleaver can achieve significant performance advantages compared to the conventional spatial multiplexing MIMO system.
NASA Astrophysics Data System (ADS)
Lu, Guolan; Halig, Luma; Wang, Dongsheng; Chen, Zhuo Georgia; Fei, Baowei
2014-03-01
As an emerging technology, hyperspectral imaging (HSI) combines both the chemical specificity of spectroscopy and the spatial resolution of imaging, which may provide a non-invasive tool for cancer detection and diagnosis. Early detection of malignant lesions could improve both survival and quality of life of cancer patients. In this paper, we introduce a tensor-based computation and modeling framework for the analysis of hyperspectral images to detect head and neck cancer. The proposed classification method can distinguish between malignant tissue and healthy tissue with an average sensitivity of 96.97% and an average specificity of 91.42% in tumor-bearing mice. The hyperspectral imaging and classification technology has been demonstrated in animal models and can have many potential applications in cancer research and management.
Song, Yongze; Ge, Yong; Wang, Jinfeng; Ren, Zhoupeng; Liao, Yilan; Peng, Junhuan
2016-07-07
Malaria is one of the most severe parasitic diseases in the world. Estimating the spatial distribution of malaria and its future scenarios is important for malaria control and elimination. Furthermore, sophisticated nonlinear relationships between malaria incidence and potential predictor variables have not been well constructed in previous research. This study aims to estimate these nonlinear relationships and predict future malaria scenarios in northern China. Nonlinear relationships between malaria incidence and predictor variables were constructed using a genetic programming (GP) method to predict the spatial distributions of malaria under climate change scenarios. Monthly average malaria incidence in each county of northern China from 2004 to 2010 was used. Among the five county-level variables, precipitation rate and temperature are used for projections, while elevation, water density index, and gross domestic product are held at their present-day values. Average malaria incidence was 0.107 ‰ per annum in northern China, with significant spatial clustering. A GP-based model fit the relationships with average relative error (ARE) = 8.127 % for training data (R(2) = 0.825) and 17.102 % for test data (R(2) = 0.532). The fit of the GP results is significantly better than that of generalized additive models (GAM) and linear regressions. With the future precipitation rate and temperature conditions of the Special Report on Emissions Scenarios (SRES) family B1, A1B and A2 scenarios, spatial distributions and changes in malaria incidence in 2020, 2030, 2040 and 2050 were predicted and mapped. The GP method increases the precision of predicting the spatial distribution of malaria incidence.
With precipitation rate and temperature varied and the other variables held fixed, the relationships between incidence and the varied variables exhibit sophisticated nonlinearity and spatial differentiation. Under the future fluctuating precipitation and increased temperature, median malaria incidence in 2020, 2030, 2040 and 2050 would increase significantly, by an estimated 19 to 29 % in 2020; however, China is currently in the malaria elimination phase, indicating that effective strategies and actions have been taken. Mean incidence, by contrast, may not increase and may even decline, owing to incidence reductions in high-risk regions despite the simultaneous expansion of the high-risk areas.
Representation of vegetation by continental data sets derived from NOAA-AVHRR data
NASA Technical Reports Server (NTRS)
Justice, C. O.; Townshend, J. R. G.; Kalb, V. L.
1991-01-01
Images of the normalized difference vegetation index (NDVI) are examined with specific attention given to the effect of spatial scales on the understanding of surface phenomena. A scale variance analysis is conducted on NDVI annual and seasonal images of Africa taken from 1987 NOAA-AVHRR data at spatial scales ranging from 8-512 km. The scales at which spatial variation takes place are determined and the relative magnitudes of the variations are considered. Substantial differences are demonstrated, notably an increase in spatial variation with coarsening spatial resolution. Different responses in scale variance as a function of spatial resolution are noted in an analysis of maximum value composites for February and September; the difference is most marked in areas with very seasonal vegetation. The spatial variation at different scales is attributed to different factors, and methods involving the averaging of areas of transition and surface heterogeneity can oversimplify surface conditions. The spatial characteristics and the temporal variability of areas should be considered to accurately apply satellite data to global models.
Iqbal, Zohaib; Wilson, Neil E; Thomas, M Albert
2016-03-01
Several different pathologies, including many neurodegenerative disorders, affect the energy metabolism of the brain. Glutamate, a neurotransmitter in the brain, can be used as a biomarker to monitor these metabolic processes. One method that is capable of quantifying glutamate concentration reliably in several regions of the brain is TE-averaged 1H spectroscopic imaging. However, this type of method requires the acquisition of multiple TE lines, resulting in long scan durations. The goal of this experiment was to use non-uniform sampling, compressed sensing reconstruction and an echo planar readout gradient to reduce the scan time by a factor of eight to acquire TE-averaged spectra in three spatial dimensions. Simulation of glutamate and glutamine showed that the 2.2-2.4 ppm spectral region contained 95% glutamate signal using the TE-averaged method. Peak integration of this spectral range and home-developed, prior-knowledge-based fitting were used for quantitation. Gray matter brain phantom measurements were acquired on a Siemens 3 T Trio scanner. Non-uniform sampling was applied retrospectively to these phantom measurements, and quantitative results for glutamate with respect to creatine at 3.0 ppm (Glu/Cr) showed a coefficient of variance of 16% for peak integration and 9% for peak fitting using eight-fold acceleration. In vivo scans of the human brain were acquired as well, and five different brain regions were quantified using the prior-knowledge-based algorithm. Glu/Cr ratios from these regions agreed with previously reported results in the literature. The method described here, called accelerated TE-averaged echo planar spectroscopic imaging (TEA-EPSI), is a significant methodological advancement and may be a useful tool for categorizing glutamate changes in pathologies where affected brain regions are not known a priori. Copyright © 2016 John Wiley & Sons, Ltd.
Battaglin, William A.; Kuhn, Gerhard; Parker, Randolph S.
1993-01-01
The U.S. Geological Survey Precipitation-Runoff Modeling System, a modular, distributed-parameter, watershed-modeling system, is being applied to 20 smaller watersheds within the Gunnison River basin. The model is used to derive a daily water balance for subareas in a watershed, ultimately producing simulated streamflows that can be input into routing and accounting models used to assess downstream water availability under current conditions, and to assess the sensitivity of water resources in the basin to alterations in climate. A geographic information system (GIS) is used to automate a method for extracting physically based hydrologic response unit (HRU) distributed parameter values from digital data sources, and for the placement of those estimates into GIS spatial datalayers. The HRU parameters extracted are: area, mean elevation, average land-surface slope, predominant aspect, predominant land-cover type, predominant soil type, average total soil water-holding capacity, and average water-holding capacity of the root zone.
Daly, Keith R; Tracy, Saoirse R; Crout, Neil M J; Mairhofer, Stefan; Pridmore, Tony P; Mooney, Sacha J; Roose, Tiina
2018-01-01
Spatially averaged models of root-soil interactions are often used to calculate plant water uptake. Using a combination of X-ray computed tomography (CT) and image-based modelling, we tested the accuracy of this spatial averaging by directly calculating plant water uptake for young wheat plants in two soil types. The root system was imaged using X-ray CT at 2, 4, 6, 8 and 12 d after transplanting. The roots were segmented using semi-automated root tracking for speed and reproducibility. The segmented geometries were converted to a mesh suitable for the numerical solution of Richards' equation. Richards' equation was parameterized using existing pore scale studies of soil hydraulic properties in the rhizosphere of wheat plants. Image-based modelling allows the spatial distribution of water around the root to be visualized and the fluxes into the root to be calculated. By comparing the results obtained through image-based modelling to spatially averaged models, the impact of root architecture and geometry in water uptake was quantified. We observed that the spatially averaged models performed well in comparison to the image-based models with <2% difference in uptake. However, the spatial averaging loses important information regarding the spatial distribution of water near the root system. © 2017 John Wiley & Sons Ltd.
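For reference, the head-based form of Richards' equation typically solved in such image-based models can be written as below; the notation is standard, not taken from the paper:

```latex
\frac{\partial \theta(\psi)}{\partial t}
  = \nabla \cdot \left[ K(\psi)\, \nabla \left( \psi + z \right) \right],
```

where θ is the volumetric water content, ψ the pressure head, K(ψ) the unsaturated hydraulic conductivity, and z elevation; root water uptake enters through a flux boundary condition imposed on the imaged root surface.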
Rigid shape matching by segmentation averaging.
Wang, Hongzhi; Oliensis, John
2010-04-01
We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking.
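The "central" segmentation idea is essentially a medoid under a segmentation distance. A generic sketch, with the distance function left abstract (the paper's closed-form average over all segmentations is not reproduced here):

```python
def central_element(items, distance):
    """Return the item minimizing total (equivalently, average)
    distance to all items -- a medoid."""
    return min(items, key=lambda s: sum(distance(s, t) for t in items))
```

With `items` a set of candidate segmentations and `distance` a segmentation dissimilarity, this picks the segmentation closest on average to all others, rather than a maximum a posteriori one.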
Asten, M.W.; Stephenson, William J.; Hartzell, Stephen
2015-01-01
The SPAC method of processing microtremor noise observations for estimation of Vs profiles has the limitation that the array must have circular or triangular symmetry in order to allow spatial (azimuthal) averaging of inter-station coherencies over a constant station separation. Common processing methods allow station separations to vary by typically ±10% in the azimuthal averaging before degradation of the SPAC spectrum becomes excessive. A limitation on the use of high wavenumbers in inversions of SPAC spectra to Vs profiles has been the requirement for exact array symmetry to avoid loss of information in the azimuthal averaging step. In this paper we develop a new wavenumber-normalised SPAC method (KRSPAC) in which, instead of averaging sets of coherency versus frequency spectra and then fitting a model SPAC spectrum, we interpolate each spectrum to coherency versus k.r, where k and r are wavenumber and station separation respectively, and r may be different for each pair of stations. For fundamental-mode Rayleigh-wave energy the model SPAC spectrum to be fitted reduces to J0(kr). The normalization changes with each iteration, since k is a function of frequency and phase velocity and hence is updated every iteration. The method proves robust and is demonstrated on data acquired in the Santa Clara Valley, CA (Site STGA), where an asymmetric array with station separations varying by a factor of 2 is compared with a conventional triangular array; a 300-m-deep borehole with a downhole Vs log provides nearby ground truth. The method is also demonstrated on data from the Pleasanton array, CA, where station spacings are irregular and vary from 400 to 1200 m. The KRSPAC method allows inversion of data using kr (unitless) values routinely up to 30, and occasionally up to 60. Thus, despite the large and irregular station spacings, this array permits resolution of Vs as fine as 15 m for the near-surface sediments, and down to a maximum depth of 2.5 km.
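The essence of the KRSPAC fit, interpolating each station pair's coherency onto the k·r axis and comparing with J0(kr), can be sketched as below. Function names are ours, and J0 is evaluated via its integral representation to keep the sketch dependency-free:

```python
import numpy as np

def bessel_j0(x):
    # J0(x) = (1/pi) * integral_0^pi cos(x*sin(theta)) d(theta)
    theta = np.linspace(0.0, np.pi, 4001)
    x = np.atleast_1d(np.asarray(x, float))
    return np.cos(x[:, None] * np.sin(theta)).mean(axis=1)

def krspac_misfit(phase_velocity, freqs, separations, coherencies):
    """Average squared misfit between observed coherencies and the model
    J0(k*r), with k = 2*pi*f/c, allowing a different r for every pair."""
    total = 0.0
    for r, coh in zip(separations, coherencies):
        kr = 2.0 * np.pi * np.asarray(freqs, float) * r / phase_velocity
        total += float(np.mean((np.asarray(coh, float) - bessel_j0(kr))**2))
    return total / len(separations)
```

Because k depends on the trial phase velocity, the kr axis is recomputed at every inversion iteration, exactly as the abstract describes; minimizing this misfit over phase velocity per frequency band drives the Vs inversion.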
Rethink potential risks of toxic emissions from natural gas and oil mining.
Meng, Qingmin
2018-09-01
Studies have shown the increasing environmental and public health risks of toxic emissions from natural gas and oil mining, which have become even worse as fracking becomes the dominant approach in natural gas extraction. However, governments and communities often overlook the serious air pollutants from oil and gas mining, because they are often quantified at levels below the thresholds considered significant for adverse health effects. We therefore face a challenging dilemma: how can we clearly understand the potential risks of air toxics from natural gas and oil mining? This short study aims at the design and application of simple and robust methods to improve current understanding of worsening toxic air emissions from natural gas and oil mining as fracking becomes the major approach. Two simple ratios, the min-to-national-average and the max-to-national-average, are designed and applied to each type of air pollutant in a natural gas and oil mining region. The two ratios directly indicate how significantly high a type of air pollutant could be due to natural gas and oil mining by comparing it to the national average records, even when it does not reach the significant risk levels of adverse health effects under current risk screening methods. The min-to-national-average and max-to-national-average ratios can be used as a direct and powerful method to describe the significance of air pollution by comparison with the national average. The two ratios make it easy for governments, stakeholders, and the public to pay adequate attention to the air pollutants from natural gas and oil mining. The two ratios can also be thematically mapped at sampled sites for spatial monitoring, but spatial mitigation and analysis of environmental and health risks need other measurements of environmental and demographic characteristics across a natural gas and oil mining area. Copyright © 2018 Elsevier Ltd. All rights reserved.
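The two ratios are straightforward to compute; a minimal sketch (the function name is ours):

```python
def national_average_ratios(concentrations, national_average):
    """Min-to-national-average and max-to-national-average ratios for one
    pollutant's sampled concentrations in a mining region."""
    return (min(concentrations) / national_average,
            max(concentrations) / national_average)
```

A max ratio well above 1 flags a pollutant that is locally far above the national record even when it stays under formal health-risk screening thresholds.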
A periodic spatio-spectral filter for event-related potentials.
Ghaderi, Foad; Kim, Su Kyoung; Kirchner, Elsa Andrea
2016-12-01
With respect to single trial detection of event-related potentials (ERPs), spatial and spectral filters are two of the most commonly used pre-processing techniques for signal enhancement. Spatial filters reduce the dimensionality of the data while suppressing the noise contribution and spectral filters attenuate frequency components that most likely belong to noise subspace. However, the frequency spectrum of ERPs overlap with that of the ongoing electroencephalogram (EEG) and different types of artifacts. Therefore, proper selection of the spectral filter cutoffs is not a trivial task. In this research work, we developed a supervised method to estimate the spatial and finite impulse response (FIR) spectral filters, simultaneously. We evaluated the performance of the method on offline single trial classification of ERPs in datasets recorded during an oddball paradigm. The proposed spatio-spectral filter improved the overall single-trial classification performance by almost 9% on average compared with the case that no spatial filters were used. We also analyzed the effects of different spectral filter lengths and the number of retained channels after spatial filtering. Copyright © 2016. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Wang, J.; Feng, B.
2016-12-01
Impervious surface area (ISA) has long been studied as an important input to moisture flux models. In general, ISA impedes groundwater recharge, increases stormflow/flood frequency, and alters in-stream and riparian habitats. Urban areas are recognized as among the richest ISA environments, and urban ISA mapping assists flood prevention and urban planning. Hyperspectral imagery (HI), with its ability to detect subtle spectral signatures, is an ideal candidate for urban ISA mapping. Mapping ISA from HI involves endmember (EM) selection. The high degree of spatial and spectral heterogeneity of the urban environment makes this task difficult: a compromise is needed between the degree of automation and the representativeness of the method. This study tested one manual and two semi-automatic EM selection strategies. The manual and the first semi-automatic methods have been widely used in EM selection. The second semi-automatic method is rather new and has previously been proposed only for moderate-spatial-resolution satellite imagery. The manual method visually selected the EM candidates from eight landcover types in the original image. The first semi-automatic method chose the EM candidates using a threshold over the pixel purity index (PPI) map. The second semi-automatic method used the triangular shape of the HI scatter plot in the n-dimensional visualizer to identify the V-I-S (vegetation-impervious surface-soil) EM candidates: the pixels located at the vertices of the triangle. The initial EM candidates from the three methods were further refined by three indexes (EM average RMSE, minimum average spectral angle, and count-based EM selection), generating three spectral libraries that were used to classify the test image. The spectral angle mapper was applied, and accuracy reports were generated for the classification results. The overall accuracies are 85% for the manual method, 81% for the PPI method, and 87% for the V-I-S method.
The V-I-S EM selection method performs best in this study. This demonstrates the value of the V-I-S EM selection method not only for moderate-spatial-resolution satellite images but also for increasingly accessible high-spatial-resolution airborne images. This semi-automatic EM selection method can be adopted for a wide range of remote sensing images and can provide ISA maps for hydrologic analysis.
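The spectral angle mapper used for classification measures the angle between a pixel spectrum and each library endmember; a minimal sketch:

```python
import numpy as np

def spectral_angle(pixel, endmember):
    """Angle in radians between two spectra; smaller means more similar.
    SAM is insensitive to overall brightness (vector magnitude)."""
    p = np.asarray(pixel, float)
    e = np.asarray(endmember, float)
    cos_ang = p.dot(e) / (np.linalg.norm(p) * np.linalg.norm(e))
    return float(np.arccos(np.clip(cos_ang, -1.0, 1.0)))
```

Each pixel is assigned to the library class whose endmember gives the smallest angle; brightness insensitivity is why SAM copes well with illumination variation across an urban scene.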
NASA Astrophysics Data System (ADS)
Chung, Hyunkoo; Lu, Guolan; Tian, Zhiqiang; Wang, Dongsheng; Chen, Zhuo Georgia; Fei, Baowei
2016-03-01
Hyperspectral imaging (HSI) is an emerging imaging modality for medical applications. HSI acquires two dimensional images at various wavelengths. The combination of both spectral and spatial information provides quantitative information for cancer detection and diagnosis. This paper proposes using superpixels, principal component analysis (PCA), and support vector machine (SVM) to distinguish regions of tumor from healthy tissue. The classification method uses 2 principal components decomposed from hyperspectral images and obtains an average sensitivity of 93% and an average specificity of 85% for 11 mice. The hyperspectral imaging technology and classification method can have various applications in cancer research and management.
Xu, Yingying; Lin, Lanfen; Hu, Hongjie; Wang, Dan; Zhu, Wenchao; Wang, Jian; Han, Xian-Hua; Chen, Yen-Wei
2018-01-01
The bag of visual words (BoVW) model is a powerful tool for feature representation that can integrate various handcrafted features like intensity, texture, and spatial information. In this paper, we propose a novel BoVW-based method that incorporates texture and spatial information for the content-based image retrieval to assist radiologists in clinical diagnosis. This paper presents a texture-specific BoVW method to represent focal liver lesions (FLLs). Pixels in the region of interest (ROI) are classified into nine texture categories using the rotation-invariant uniform local binary pattern method. The BoVW-based features are calculated for each texture category. In addition, a spatial cone matching (SCM)-based representation strategy is proposed to describe the spatial information of the visual words in the ROI. In a pilot study, eight radiologists with different clinical experience performed diagnoses for 20 cases with and without the top six retrieved results. A total of 132 multiphase computed tomography volumes including five pathological types were collected. The texture-specific BoVW was compared to other BoVW-based methods using the constructed dataset of FLLs. The results show that our proposed model outperforms the other three BoVW methods in discriminating different lesions. The SCM method, which adds spatial information to the orderless BoVW model, impacted the retrieval performance. In the pilot trial, the average diagnosis accuracy of the radiologists was improved from 66 to 80% using the retrieval system. The preliminary results indicate that the texture-specific features and the SCM-based BoVW features can effectively characterize various liver lesions. The retrieval system has the potential to improve the diagnostic accuracy and the confidence of the radiologists.
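The core of any BoVW representation, assigning local descriptors to their nearest visual word and histogramming the assignments, can be sketched as below; the texture-specific and SCM extensions described above are not reproduced:

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Normalized histogram of nearest-codeword assignments for the
    local descriptors extracted from one ROI."""
    d = np.asarray(descriptors, float)
    c = np.asarray(codebook, float)
    dist2 = ((d[:, None, :] - c[None, :, :]) ** 2).sum(axis=-1)
    words = dist2.argmin(axis=1)                 # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(c)).astype(float)
    return hist / hist.sum()
```

The texture-specific variant would build one such histogram per LBP texture category and concatenate them; SCM would additionally build histograms per spatial subregion to recover the spatial layout the orderless model discards.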
Scene-based nonuniformity correction using local constant statistics.
Zhang, Chao; Zhao, Wenyi
2008-06-01
In scene-based nonuniformity correction, the statistical approach assumes that all possible values of the true-scene pixel are seen at each pixel location. This global-constant-statistics assumption does not distinguish fixed pattern noise from spatial variations in the average image, which often causes "ghosting" artifacts in the corrected images, since genuine spatial variations are treated as noise. We introduce a new statistical method to reduce the ghosting artifacts. Our method proposes local-constant statistics: the temporal signal distribution is not assumed constant across the whole image, but only locally. That is, the distribution is statistically constant in a local region around each pixel but uneven at larger scales. Under the assumption that the fixed pattern noise concentrates in a higher spatial-frequency band than the distribution variation, we apply a wavelet method to the gain and offset images of the noise and separate the pattern noise from the spatial variations in the temporal distribution of the scene. We compare the results to the global-constant-statistics method using a clean sequence with large artificial pattern noise. We also apply the method to a challenging CCD video sequence and a LWIR sequence to show how effective it is in reducing noise and ghosting artifacts.
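The local-constant-statistics idea can be illustrated with a simplified offset-only sketch; a box filter stands in for the paper's wavelet separation, purely for illustration:

```python
import numpy as np

def estimate_offset_noise(frames, window=5):
    """Per-pixel temporal mean, minus a local spatial average of that mean.

    The high-spatial-frequency residual approximates the fixed-pattern
    offset, while low-frequency scene/illumination variation is preserved
    instead of being misread as noise (the source of ghosting).
    """
    tmean = frames.mean(axis=0)          # temporal mean image
    pad = window // 2
    padded = np.pad(tmean, pad, mode='edge')
    h, w = tmean.shape
    local = np.empty_like(tmean)
    for i in range(h):
        for j in range(w):
            local[i, j] = padded[i:i + window, j:j + window].mean()
    return tmean - local
```

Subtracting this residual from every frame corrects the offset nonuniformity without flattening genuine low-frequency scene structure.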
2014-01-01
Background: There have been large-scale outbreaks of hand, foot and mouth disease (HFMD) in Mainland China over the last decade. These events varied greatly across the country. It is necessary to identify the spatial risk factors and spatial distribution patterns of HFMD for public health control and prevention. Climate risk factors associated with HFMD occurrence have been recognized, but few studies have discussed the socio-economic determinants of HFMD risk at a spatial scale. Methods: HFMD records in Mainland China in May 2008 were collected. Both climate and socio-economic factors were selected as potential risk exposures of HFMD. The odds ratio (OR) was used to identify the spatial risk factors, and a spatial autologistic regression model was employed to obtain OR values for each exposure and to model the spatial distribution patterns of HFMD risk. Results: Both climate and socio-economic variables were spatial risk factors for HFMD transmission in Mainland China. The statistically significant risk factors are monthly average precipitation (OR = 1.4354), monthly average temperature (OR = 1.379), monthly average wind speed (OR = 1.186), the number of industrial enterprises above designated size (OR = 17.699), population density (OR = 1.953), and the proportion of the student population (OR = 1.286). The spatial autologistic regression model has good goodness of fit (ROC = 0.817) and prediction accuracy (correct ratio = 78.45%) for HFMD occurrence. The autologistic regression model also reduces the contribution of the residual term in the ordinary logistic regression model significantly, from 17.25 to 1.25 for the odds ratio. Based on the prediction results of the spatial model, we obtained a map of the probability of HFMD occurrence that shows the spatial distribution pattern and local epidemic risk over Mainland China. Conclusions: The autologistic regression model was used to identify spatial risk factors and model spatial risk patterns of HFMD.
HFMD occurrences were found to be spatially heterogeneous over Mainland China and related to both climate and socio-economic variables. The combination of socio-economic and climate exposures explains HFMD occurrences more comprehensively and objectively than climate exposures alone. The modeled probability of HFMD occurrence at the county level reveals not only the spatial trends, but also the local details of epidemic risk, even in regions with no HFMD case records. PMID:24731248
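The odds ratios reported above come from standard 2×2 exposure tables; a minimal sketch of that computation:

```python
def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """OR = (a/b) / (c/d): how much the odds of occurrence increase in
    counties exposed to a risk factor relative to unexposed counties."""
    return ((exposed_cases / exposed_noncases)
            / (unexposed_cases / unexposed_noncases))
```

An OR above 1 marks a spatial risk factor; for example, an OR of 1.953 for population density means roughly doubled odds of HFMD occurrence in dense counties.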
Interocular transfer of spatial adaptation is weak at low spatial frequencies.
Baker, Daniel H; Meese, Tim S
2012-06-15
Adapting one eye to a high contrast grating reduces sensitivity to similar target gratings shown to the same eye, and also to those shown to the opposite eye. According to the textbook account, interocular transfer (IOT) of adaptation is around 60% of the within-eye effect. However, most previous studies on this were limited to using high spatial frequencies, sustained presentation, and criterion-dependent methods for assessing threshold. Here, we measure IOT across a wide range of spatiotemporal frequencies, using a criterion-free 2AFC method. We find little or no IOT at low spatial frequencies, consistent with other recent observations. At higher spatial frequencies, IOT was present, but weaker than previously reported (around 35%, on average, at 8 c/deg). Across all conditions, monocular adaptation raised thresholds by around a factor of 2, and observers showed normal binocular summation, demonstrating that they were not binocularly compromised. These findings prompt a reassessment of our understanding of the binocular architecture implied by interocular adaptation. In particular, the output of monocular channels may be available to perceptual decision making at low spatial frequencies. Copyright © 2012 Elsevier Ltd. All rights reserved.
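Quantitatively, IOT is commonly expressed as the interocular threshold elevation relative to the within-eye elevation. A sketch of that bookkeeping (conventions vary between studies; this is one common form, not necessarily the paper's):

```python
def interocular_transfer_percent(monocular_elevation, interocular_elevation):
    """Percent IOT from threshold-elevation factors (1.0 = no elevation).
    Example: a monocular factor of 2 with an interocular factor of 1.35
    gives 35% transfer."""
    return 100.0 * (interocular_elevation - 1.0) / (monocular_elevation - 1.0)
```

Under this convention, the textbook 60% figure corresponds to an interocular elevation of 1.6 when the within-eye elevation is a factor of 2.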
Gutierrez-Corea, Federico-Vladimir; Manso-Callejo, Miguel-Angel; Moreno-Regidor, María-Pilar; Velasco-Gómez, Jesús
2014-01-01
This study was motivated by the need to improve densification of Global Horizontal Irradiance (GHI) observations, increasing the number of surface weather stations that observe it, using sensors with a sub-hour periodicity and examining the methods of spatial GHI estimation (by interpolation) with that periodicity in other locations. The aim of the present research project is to analyze the goodness of 15-minute GHI spatial estimations for five methods in the territory of Spain (three geo-statistical interpolation methods, one deterministic method and the HelioSat2 method, which is based on satellite images). The research concludes that, when the work area has adequate station density, the best method for estimating GHI every 15 min is Regression Kriging interpolation using GHI estimated from satellite images as one of the input variables. On the contrary, when station density is low, the best method is estimating GHI directly from satellite images. A comparison between the GHI observed by volunteer stations and the estimation model applied concludes that 67% of the volunteer stations analyzed present values within the margin of error (average of ±2 standard deviations). PMID:24732102
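Regression kriging, the method the abstract finds best at adequate station density, regresses station observations on an auxiliary variable (here, satellite-derived GHI) and then kriges the residuals. A compact sketch on synthetic data, with an assumed exponential covariance and made-up coefficients (the real study's variogram model and parameters are not given in the abstract):

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, sill=1.0, rng_=0.3, nugget=1e-6):
    # ordinary kriging of values z at points xy, predicted at xy0,
    # using a fixed exponential covariance (an illustrative assumption)
    def cov(d):
        return sill * np.exp(-d / rng_)
    n = len(z)
    d = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
    K = np.empty((n + 1, n + 1))
    K[:n, :n] = cov(d) + nugget * np.eye(n)
    K[n, :n] = K[:n, n] = 1.0        # unbiasedness constraint row/column
    K[n, n] = 0.0
    d0 = np.linalg.norm(xy - xy0, axis=-1)
    k = np.append(cov(d0), 1.0)
    w = np.linalg.solve(K, k)[:n]
    return w @ z

rng = np.random.default_rng(1)
n = 60
xy = rng.random((n, 2))
sat = rng.random(n)                           # satellite-derived GHI (stand-in)
ghi = 50 + 400 * sat + rng.normal(0, 5, n)    # station GHI, linearly related

# regression step: station GHI ~ satellite estimate
A = np.column_stack([np.ones(n), sat])
coef, *_ = np.linalg.lstsq(A, ghi, rcond=None)
resid = ghi - A @ coef

# kriging step on the residuals, then recombine at a target point
xy0 = np.array([0.5, 0.5])
sat0 = 0.6                                    # satellite GHI at the target
est = coef[0] + coef[1] * sat0 + ordinary_kriging(xy, resid, xy0)
```

The split matters: the regression carries the satellite information, while the kriged residual restores local spatial structure the regression misses.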
Effects of spatial variability and scale on areal-average evapotranspiration
NASA Technical Reports Server (NTRS)
Famiglietti, J. S.; Wood, Eric F.
1993-01-01
This paper explores the effect of spatial variability and scale on areally-averaged evapotranspiration. A spatially-distributed water and energy balance model is employed to determine the effect of explicit patterns of model parameters and atmospheric forcing on modeled areally-averaged evapotranspiration over a range of increasing spatial scales. The analysis is performed from the local scale to the catchment scale. The study area is King's Creek catchment, an 11.7 sq km watershed located on the native tallgrass prairie of Kansas. The dominant controls on the scaling behavior of catchment-average evapotranspiration are investigated by simulation, as is the existence of a threshold scale for evapotranspiration modeling, with implications for explicit versus statistical representation of important process controls. It appears that some of our findings are fairly general, and will therefore provide a framework for understanding the scaling behavior of areally-averaged evapotranspiration at the catchment and larger scales.
Efficient Strategies for Estimating the Spatial Coherence of Backscatter
Hyun, Dongwoon; Crowley, Anna Lisa C.; Dahl, Jeremy J.
2017-01-01
The spatial coherence of ultrasound backscatter has been proposed to reduce clutter in medical imaging, to measure the anisotropy of the scattering source, and to improve the detection of blood flow. These techniques rely on correlation estimates that are obtained using computationally expensive strategies. In this study, we assess existing spatial coherence estimation methods and propose three computationally efficient modifications: a reduced kernel, a downsampled receive aperture, and the use of an ensemble correlation coefficient. The proposed methods are implemented in simulation and in vivo studies. Reducing the kernel to a single sample improved both computational throughput and axial resolution. Downsampling the receive aperture was found to have negligible effect on estimator variance, and improved computational throughput by an order of magnitude for a downsample factor of 4. The ensemble correlation estimator demonstrated lower variance than the currently used average correlation. Combining the three methods, the throughput was improved 105-fold in simulation with a downsample factor of 4 and 20-fold in vivo with a downsample factor of 2. PMID:27913342
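The difference between the conventional average correlation and the proposed ensemble correlation coefficient is where the normalisation happens: the former averages per-pair Pearson correlations, while the latter pools covariances and variances across all pairs before normalising. A small sketch on synthetic noisy realisations of a common signal (the data are illustrative, not ultrasound channel data):

```python
import numpy as np

def average_correlation(pairs):
    # conventional estimator: mean of per-pair Pearson correlations
    return np.mean([np.corrcoef(a, b)[0, 1] for a, b in pairs])

def ensemble_correlation(pairs):
    # pool covariances and variances across all pairs, then normalise once
    num = sum(np.dot(a - a.mean(), b - b.mean()) for a, b in pairs)
    den = np.sqrt(sum(np.dot(a - a.mean(), a - a.mean()) for a, _ in pairs) *
                  sum(np.dot(b - b.mean(), b - b.mean()) for _, b in pairs))
    return num / den

rng = np.random.default_rng(2)
truth = rng.normal(size=64)
# many noisy realisations of the same underlying signal
pairs = [(truth + rng.normal(0, 0.5, 64), truth + rng.normal(0, 0.5, 64))
         for _ in range(32)]
r_avg = average_correlation(pairs)
r_ens = ensemble_correlation(pairs)
```

Pooling before normalising is also cheaper, since only running sums need to be maintained rather than one correlation per pair.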
Imaging Intratumor Heterogeneity: Role in Therapy Response, Resistance, and Clinical Outcome
O’Connor, James P.B.; Rose, Chris J.; Waterton, John C.; Carano, Richard A.D.; Parker, Geoff J.M.; Jackson, Alan
2014-01-01
Tumors exhibit genomic and phenotypic heterogeneity which has prognostic significance and may influence response to therapy. Imaging can quantify the spatial variation in architecture and function of individual tumors through quantifying basic biophysical parameters such as density or MRI signal relaxation rate; through measurements of blood flow, hypoxia, metabolism, cell death and other phenotypic features; and through mapping the spatial distribution of biochemical pathways and cell signaling networks. These methods can establish whether one tumor is more or less heterogeneous than another and can identify sub-regions with differing biology. In this article we review the image analysis methods currently used to quantify spatial heterogeneity within tumors. We discuss how analysis of intratumor heterogeneity can provide benefit over more simple biomarkers such as tumor size and average function. We consider how imaging methods can be integrated with genomic and pathology data, rather than be developed in isolation. Finally, we identify the challenges that must be overcome before measurements of intratumoral heterogeneity can be used routinely to guide patient care. PMID:25421725
Graichen, Uwe; Eichardt, Roland; Fiedler, Patrique; Strohmeier, Daniel; Zanow, Frank; Haueisen, Jens
2015-01-01
Important requirements for the analysis of multichannel EEG data are efficient techniques for signal enhancement, signal decomposition, feature extraction, and dimensionality reduction. We propose a new approach for spatial harmonic analysis (SPHARA) that extends the classical spatial Fourier analysis to EEG sensors positioned non-uniformly on the surface of the head. The proposed method is based on the eigenanalysis of the discrete Laplace-Beltrami operator defined on a triangular mesh. We present several ways to discretize the continuous Laplace-Beltrami operator and compare the properties of the resulting basis functions computed using these discretization methods. We apply SPHARA to somatosensory evoked potential data from eleven volunteers and demonstrate the ability of the method for spatial data decomposition, dimensionality reduction and noise suppression. When employing SPHARA for dimensionality reduction, a significantly more compact representation can be achieved using the FEM approach, compared to the other discretization methods. Using FEM, to recover 95% and 99% of the total energy of the EEG data, on average only 35% and 58% of the coefficients are necessary. The capability of SPHARA for noise suppression is shown using artificial data. We conclude that SPHARA can be used for spatial harmonic analysis of multi-sensor data at arbitrary positions and can be utilized in a variety of other applications. PMID:25885290
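The core of SPHARA is an eigenanalysis of a discretised Laplace-Beltrami operator, whose eigenvectors form a spatial-frequency basis on the sensor geometry. A toy sketch using a ring graph and its graph Laplacian as a stand-in for the triangular-mesh operator (the geometry, noise level, and number of retained components are illustrative assumptions):

```python
import numpy as np

# a ring of sensors as a toy "mesh"; its graph Laplacian stands in for
# the discretised Laplace-Beltrami operator used by SPHARA
n = 16
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# eigenvectors ordered by eigenvalue give a spatial-frequency basis
evals, basis = np.linalg.eigh(L)

# decompose a smooth spatial pattern plus noise, keep low-frequency terms
rng = np.random.default_rng(3)
signal = np.cos(2 * np.pi * np.arange(n) / n)
noisy = signal + rng.normal(0, 0.3, n)
coeffs = basis.T @ noisy
k = 5                                  # keep the 5 lowest spatial frequencies
denoised = basis[:, :k] @ coeffs[:k]
err_noisy = np.linalg.norm(noisy - signal)
err_denoised = np.linalg.norm(denoised - signal)
```

Truncating the expansion is exactly the dimensionality-reduction and noise-suppression use described in the abstract: smooth spatial patterns concentrate in the low-eigenvalue coefficients.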
NASA Astrophysics Data System (ADS)
Crawford, Ben; Grimmond, Sue; Kent, Christoph; Gabey, Andrew; Ward, Helen; Sun, Ting; Morrison, William
2017-04-01
Remotely sensed data from satellites have potential to enable high-resolution, automated calculation of urban surface energy balance terms and inform decisions about urban adaptations to environmental change. However, aerodynamic resistance methods to estimate sensible heat flux (QH) in cities using satellite-derived observations of surface temperature are difficult in part due to spatial and temporal variability of the thermal aerodynamic resistance term (rah). In this work, we extend an empirical function to estimate rah using observational data from several cities with a broad range of surface vegetation land cover properties. We then use this function to calculate spatially and temporally variable rah in London based on high-resolution (100 m) land cover datasets and in situ meteorological observations. In order to calculate high-resolution QH based on satellite-observed land surface temperatures, we also develop and employ novel methods to i) apply source area-weighted averaging of surface and meteorological variables across the study spatial domain, ii) calculate spatially variable, high-resolution meteorological variables (wind speed, friction velocity, and Obukhov length), iii) incorporate spatially interpolated urban air temperatures from a distributed sensor network, and iv) apply a modified Monte Carlo approach to assess uncertainties with our results, methods, and input variables. Modeled QH using the aerodynamic resistance method is then compared to in situ observations in central London from a unique network of scintillometers and eddy-covariance measurements.
Lossless Compression of JPEG Coded Photo Collections.
Wu, Hao; Sun, Xiaoyan; Yang, Jingyu; Zeng, Wenjun; Wu, Feng
2016-04-06
The explosion of digital photos has posed a significant challenge to photo storage and transmission for both personal devices and cloud platforms. In this paper, we propose a novel lossless compression method to further reduce the size of a set of JPEG coded correlated images without any loss of information. The proposed method jointly removes inter/intra image redundancy in the feature, spatial, and frequency domains. For each collection, we first organize the images into a pseudo video by minimizing the global prediction cost in the feature domain. We then present a hybrid disparity compensation method to better exploit both the global and local correlations among the images in the spatial domain. Furthermore, the redundancy between each compensated signal and the corresponding target image is adaptively reduced in the frequency domain. Experimental results demonstrate the effectiveness of the proposed lossless compression method. Compared to the JPEG coded image collections, our method achieves average bit savings of more than 31%.
Dauwalter, D.C.; Fisher, W.L.; Belt, K.C.
2006-01-01
We tested the precision and accuracy of the Trimble GeoXT global positioning system (GPS) handheld receiver on point and area features and compared estimates of stream habitat dimensions (e.g., lengths and areas of riffles and pools) that were made in three different Oklahoma streams using the GPS receiver and a tape measure. The precision of differentially corrected GPS (DGPS) points was not affected by the number of GPS position fixes (i.e., geographic location estimates) averaged per DGPS point. Horizontal error of points ranged from 0.03 to 2.77 m and did not differ with the number of position fixes per point. The error of area measurements ranged from 0.1% to 110.1% but decreased as the area increased. Again, error was independent of the number of position fixes averaged per polygon corner. The estimates of habitat lengths, widths, and areas did not differ when measured using two methods of data collection (GPS and a tape measure), nor did the differences among methods change at three stream sites with contrasting morphologies. Measuring features with a GPS receiver was up to 3.3 times faster on average than using a tape measure, although signal interference from high streambanks or overhanging vegetation occasionally limited satellite signal availability and prolonged measurements with a GPS receiver. There were also no differences in precision of habitat dimensions when mapped using a continuous versus a position fix average GPS data collection method. Despite there being some disadvantages to using the GPS in stream habitat studies, measuring stream habitats with a GPS resulted in spatially referenced data that allowed the assessment of relative habitat position and changes in habitats over time, and was often faster than using a tape measure. For most spatial scales of interest, the precision and accuracy of DGPS data are adequate and have logistical advantages when compared to traditional methods of measurement.
© 2006 Springer Science+Business Media, Inc.
Simulations of spray autoignition and flame establishment with two-dimensional CMC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, Y.M.; Boulouchos, K.; De Paola, G.
2005-12-01
The unsteady two-dimensional conditional moment closure (CMC) model with first-order closure of the chemistry and supplied with standard models for the conditional convection and turbulent diffusion terms has been interfaced with a commercial engine CFD code and analyzed with two numerical methods, an 'exact' calculation with the method of lines and a faster fractional-step method. The aim was to examine the sensitivity of the predictions to the operator splitting errors and to identify the extent to which spatial transport terms are important for spray autoignition problems. Despite the underlying simplifications, solution of the full CMC equations allows a single model to be used for the autoignition, flame propagation ('premixed mode'), and diffusion flame mode of diesel combustion, which makes CMC a good candidate model for practical engine calculations. It was found that (i) the conditional averages have significant spatial gradients before ignition and during the premixed mode and (ii) the inclusion of physical-space transport affects the calculation of the autoignition delay time, both of which suggest that volume-averaged CMC approaches may be inappropriate for diesel-like problems. A balance of terms in the CMC equation before and after autoignition shows the relative magnitude of spatial transport and allows conjectures on the structure of the premixed phase of diesel combustion. Very good agreement with available experimental data is found concerning ignition delays and the effect of background air turbulence on them.
NASA Astrophysics Data System (ADS)
Ziemann, Astrid; Starke, Manuela; Schütze, Claudia
2017-11-01
An imbalance of surface energy fluxes using the eddy covariance (EC) method is observed in global measurement networks although all necessary corrections and conversions are applied to the raw data. Mainly during nighttime, advection can occur, resulting in a closing gap that consequently should also affect the CO2 balances. There is the crucial need for representative concentration and wind data to measure advective fluxes. Ground-based remote sensing techniques are an ideal tool as they provide the spatially representative CO2 concentration together with wind components within the same voxel structure. For this purpose, the presented SQuAd (Spatially resolved Quantification of the Advection influence on the balance closure of greenhouse gases) approach applies an integrated method combination of acoustic and optical remote sensing. The innovative combination of acoustic travel-time tomography (A-TOM) and open-path Fourier-transform infrared spectroscopy (OP-FTIR) will enable an upscaling and enhancement of EC measurements. OP-FTIR instrumentation offers the significant advantage of real-time simultaneous measurements of line-averaged concentrations for CO2 and other greenhouse gases (GHGs). A-TOM is a scalable method to remotely resolve 3-D wind and temperature fields. The paper will give an overview about the proposed SQuAd approach and first results of experimental tests at the FLUXNET site Grillenburg in Germany. Preliminary results of the comprehensive experiments reveal a mean nighttime horizontal advection of CO2 of about 10 µmol m-2 s-1 estimated by the spatially integrating and representative SQuAd method. Additionally, uncertainties in determining CO2 concentrations using passive OP-FTIR and wind speed applying A-TOM are systematically quantified. 
The maximum uncertainty in CO2 concentration, arising from environmental parameters, instrumental characteristics, and the retrieval procedure, was estimated at approximately 30 % for a single measurement. Instantaneous wind components can be derived with a maximum uncertainty of 0.3 m s-1 depending on sampling, signal analysis, and environmental influences on sound propagation. Averaging over a period of 30 min, the standard error of the mean values can be decreased by a factor of at least 0.5 for OP-FTIR and 0.1 for A-TOM depending on the required spatial resolution. The validation presented here focuses on the joint application of the two independent, nonintrusive methods and on their ability to quantify advective fluxes.
Interpolating precipitation and its relation to runoff and non-point source pollution.
Chang, Chia-Ling; Lo, Shang-Lien; Yu, Shaw-L
2005-01-01
When rainfall spatially varies, complete rainfall data for each region with different rainfall characteristics are very important. Numerous interpolation methods have been developed for estimating unknown spatial characteristics. However, no interpolation method is suitable for all circumstances. In this study, several methods, including the arithmetic average method, the Thiessen Polygons method, the traditional inverse distance method, and the modified inverse distance method, were used to interpolate precipitation. The modified inverse distance method considers not only horizontal distances but also the differences between the elevation of the region with no rainfall records and the elevations of its surrounding rainfall stations. The results show that when the spatial variation of rainfall is strong, choosing a suitable interpolation method is very important. If the rainfall is uniform, the precipitation estimated using any interpolation method would be quite close to the actual precipitation. When rainfall is heavy in locations with high elevation, the rainfall changes with the elevation. In this situation, the modified inverse distance method is much more effective than any other method discussed herein for estimating the rainfall input for WinVAST to estimate runoff and non-point source pollution (NPSP). When the spatial variation of rainfall is random, regardless of the interpolation method used to yield rainfall input, the estimation errors of runoff and NPSP are large. Moreover, the relative errors of the predicted runoff and the predicted suspended-solids (SS) pollutant loading are strongly related. However, the pollutant concentration is affected by both runoff and pollutant export, so the relationship between the relative error of the predicted runoff and the predicted pollutant concentration of SS may be unstable.
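The modified inverse distance idea can be sketched by folding the elevation difference into the distance used for the weights. The blending factor `alpha`, the toy station layout, and the elevation-driven rainfall are all illustrative assumptions, not the paper's calibration:

```python
import numpy as np

def modified_idw(xy, elev, rain, xy0, elev0, p=2.0, alpha=3.0):
    # IDW where the "distance" blends horizontal separation with the
    # elevation difference; alpha is an assumed weighting factor
    dh = np.linalg.norm(xy - xy0, axis=-1)
    dz = np.abs(elev - elev0)
    d = np.sqrt(dh**2 + (alpha * dz)**2)
    w = 1.0 / np.maximum(d, 1e-9)**p
    return np.sum(w * rain) / np.sum(w)

# toy stations: rainfall increases with elevation
xy = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
elev = np.array([0.1, 0.2, 0.8, 0.9])     # km
rain = 10 + 40 * elev                     # mm, elevation-driven

# same horizontal target point, two different target elevations
est_low = modified_idw(xy, elev, rain, np.array([0.5, 0.5]), elev0=0.15)
est_high = modified_idw(xy, elev, rain, np.array([0.5, 0.5]), elev0=0.85)
```

Because the target's elevation now shifts the weights toward stations at similar altitude, a high-elevation ungauged site draws more on high-elevation (wetter) stations, which is the behaviour the abstract credits for the method's advantage in orographic rainfall.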
Castillo, Edward; Castillo, Richard; White, Benjamin; Rojo, Javier; Guerrero, Thomas
2012-01-01
Compressible flow based image registration operates under the assumption that the mass of the imaged material is conserved from one image to the next. Depending on how the mass conservation assumption is modeled, the performance of existing compressible flow methods is limited by factors such as image quality, noise, large magnitude voxel displacements, and computational requirements. The Least Median of Squares Filtered Compressible Flow (LFC) method introduced here is based on a localized, nonlinear least squares, compressible flow model that describes the displacement of a single voxel and lends itself to a simple grid search (block matching) optimization strategy. Spatially inaccurate grid search point matches, corresponding to erroneous local minimizers of the nonlinear compressible flow model, are removed by a novel filtering approach based on least median of squares fitting and the forward search outlier detection method. The spatial accuracy of the method is measured using ten thoracic CT image sets and large samples of expert determined landmarks (available at www.dir-lab.com). The LFC method produces an average error within the intra-observer error on eight of the ten cases, indicating that the method is capable of achieving a high spatial accuracy for thoracic CT registration. PMID:22797602
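The filtering step rests on least median of squares (LMedS): unlike a least-squares fit, minimising the median of squared residuals tolerates up to half the matches being grossly wrong. A simplified sketch with a pure-translation model and random sampling (the LFC paper's actual model and the forward search step are not reproduced here; the data and thresholds are assumptions):

```python
import numpy as np

def lmeds_translation(disp, trials=200, seed=4):
    # least-median-of-squares estimate of a common translation, then
    # flag outlier displacement vectors (sketch of the filtering idea)
    rng = np.random.default_rng(seed)
    best_t, best_med = None, np.inf
    for _ in range(trials):
        t = disp[rng.integers(len(disp))]      # 1-point model: a translation
        r2 = np.sum((disp - t)**2, axis=1)
        med = np.median(r2)
        if med < best_med:
            best_med, best_t = med, t
    thresh = 9.0 * best_med                    # assumed ~3-sigma style cutoff
    inliers = np.sum((disp - best_t)**2, axis=1) <= thresh
    return best_t, inliers

rng = np.random.default_rng(5)
disp = rng.normal([2.0, -1.0], 0.1, size=(50, 2))  # true motion about (2, -1)
disp[:5] = rng.normal(0.0, 5.0, size=(5, 2))       # 5 grossly wrong matches
t_hat, inliers = lmeds_translation(disp)
```

The median-based score is what makes the bad block-matching matches harmless: they inflate the mean of the residuals but barely move the median.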
Longitudinal variability in Jupiter's zonal winds derived from multi-wavelength HST observations
NASA Astrophysics Data System (ADS)
Johnson, Perianne E.; Morales-Juberías, Raúl; Simon, Amy; Gaulme, Patrick; Wong, Michael H.; Cosentino, Richard G.
2018-06-01
Multi-wavelength Hubble Space Telescope (HST) images of Jupiter from the Outer Planets Atmospheres Legacy (OPAL) and Wide Field Coverage for Juno (WFCJ) programs in 2015, 2016, and 2017 are used to derive wind profiles as a function of latitude and longitude. Wind profiles are typically zonally averaged to reduce measurement uncertainties. However, doing this destroys any variation of the zonal wind component in the longitudinal direction. Here, we present the results derived from using a "sliding-window" correlation method. This method adds longitudinal specificity and allows for the detection of spatial variations in the zonal winds. Spatial variations are identified in two jets: one at 17°N, the location of a prominent westward jet, and the other at 7°S, the location of the chevrons. Temporal and spatial variations at the 24°N jet and the 5-μm hot spots are also examined.
Topography and refractometry of nanostructures using spatial light interference microscopy.
Wang, Zhuo; Chun, Ik Su; Li, Xiuling; Ong, Zhun-Yong; Pop, Eric; Millet, Larry; Gillette, Martha; Popescu, Gabriel
2010-01-15
Spatial light interference microscopy (SLIM) is a novel method developed in our laboratory that provides quantitative phase images of transparent structures with a 0.3 nm spatial and 0.03 nm temporal accuracy owing to the white light illumination and its common path interferometric geometry. We exploit these features and demonstrate SLIM's ability to perform topography at a single atomic layer in graphene. Further, using a decoupling procedure that we developed for cylindrical structures, we extract the axially averaged refractive index of semiconductor nanotubes and a neurite of a live hippocampal neuron in culture. We believe that this study will set the basis for novel high-throughput topography and refractometry of man-made and biological nanostructures.
Yu, Haofei; Stuart, Amy L
2013-08-01
Intra-urban differences in concentrations of oxides of nitrogen (NOx) and exposure disparities in the Tampa area were investigated across temporal scales through emissions estimation, dispersion modeling, and analysis of residential subpopulation exposures. A hybrid estimation method was applied to provide link-level hourly on-road mobile source emissions. Ambient concentrations in 2002 at 1 km resolution were estimated using the CALPUFF dispersion model. Results were combined with residential demographic data at the block-group level, to investigate exposures and inequality for select racioethnic, age, and income population subgroups. Results indicate that on-road mobile sources contributed disproportionately to ground-level concentrations and dominated the spatial footprint across temporal scales (annual average to maximum hour). The black, lower income (less than $40K annually), and Hispanic subgroups had higher estimated exposures than the county average; the white and higher income (greater than $60K) subgroups had lower than average exposures. As annual average concentration increased, the disparity between groups generally increased. However, for the highest 1-hr concentrations, reverse disparities were also found. Current studies of air pollution exposure inequality have not fully considered differences by time scale and are often limited in spatial resolution. The modeling methods and the results presented here can be used to improve understanding of potential impacts of urban growth form on health and to improve urban sustainability. Results suggest focusing urban design interventions on reducing on-road mobile source emissions in areas with high densities of minority and low income groups.
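Subgroup exposure comparisons of this kind reduce to population-weighted averages of modeled concentrations over block groups. A tiny worked sketch with invented numbers (the concentrations, subgroup labels, and counts are illustrative, not the Tampa data):

```python
import numpy as np

# toy block groups: modeled annual-average NOx concentration plus counts
# of two subgroups per block group (all numbers are illustrative)
conc = np.array([30., 22., 15., 9.])     # ppb by block group
pop_a = np.array([500, 300, 100, 100])   # subgroup concentrated near sources
pop_b = np.array([100, 100, 300, 500])   # subgroup living farther away

def weighted_exposure(conc, pop):
    # population-weighted mean exposure for one subgroup
    return np.sum(conc * pop) / np.sum(pop)

exp_a = weighted_exposure(conc, pop_a)         # 24.0 ppb
exp_b = weighted_exposure(conc, pop_b)         # 14.2 ppb
exp_all = weighted_exposure(conc, pop_a + pop_b)
```

The gap between a subgroup's weighted mean and the overall mean is the exposure disparity the abstract reports; repeating the calculation with hourly maxima instead of annual averages is what reveals the time-scale dependence.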
A Context-sensitive Approach to Anonymizing Spatial Surveillance Data: Impact on Outbreak Detection
Cassa, Christopher A.; Grannis, Shaun J.; Overhage, J. Marc; Mandl, Kenneth D.
2006-01-01
Objective: The use of spatially based methods and algorithms in epidemiology and surveillance presents privacy challenges for researchers and public health agencies. We describe a novel method for anonymizing individuals in public health data sets by transposing their spatial locations through a process informed by the underlying population density. Further, we measure the impact of the skew on detection of spatial clustering as measured by a spatial scanning statistic. Design: Cases were emergency department (ED) visits for respiratory illness. Baseline ED visit data were injected with artificially created clusters ranging in magnitude, shape, and location. The geocoded locations were then transformed using a de-identification algorithm that accounts for the local underlying population density. Measurements: A total of 12,600 separate weeks of case data with artificially created clusters were combined with control data and the impact on detection of spatial clustering identified by a spatial scan statistic was measured. Results: The anonymization algorithm produced an expected skew of cases that resulted in high values of data set k-anonymity. De-identification that moves points an average distance of 0.25 km lowers the spatial cluster detection sensitivity by less than 4% and lowers the detection specificity by less than 1%. Conclusion: A population-density–based Gaussian spatial blurring markedly decreases the ability to identify individuals in a data set while only slightly decreasing the performance of a standard outbreak detection tool. These findings suggest new approaches to anonymizing data for spatial epidemiology and surveillance. PMID:16357353
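The essence of population-density-informed blurring is that the displacement noise shrinks where the population is dense, since fewer metres of skew suffice for the same k-anonymity there. A minimal sketch with an assumed inverse-square-root scaling rule (the actual algorithm's scaling is not specified in the abstract):

```python
import numpy as np

def blur_locations(xy, density, base_sigma=0.5, seed=6):
    # Gaussian skew whose magnitude shrinks with population density:
    # dense areas need less displacement for the same k-anonymity
    # (the 1/sqrt(density) rule is an illustrative assumption)
    rng = np.random.default_rng(seed)
    sigma = base_sigma / np.sqrt(density)
    return xy + rng.normal(0, 1, xy.shape) * sigma[:, None]

rng = np.random.default_rng(7)
n = 500
xy = rng.random((n, 2)) * 10                   # case locations, km
density = np.where(xy[:, 0] < 5, 100.0, 1.0)   # left half dense, right sparse
moved = np.linalg.norm(blur_locations(xy, density) - xy, axis=1)
mean_dense = moved[xy[:, 0] < 5].mean()
mean_sparse = moved[xy[:, 0] >= 5].mean()
```

Smaller displacements in dense areas are also why cluster detection degrades so little: the scan statistic aggregates over regions far larger than the typical skew.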
Liao, Jiaqiang; Yu, Shicheng; Yang, Fang; Yang, Min; Hu, Yuehua; Zhang, Juying
2016-01-01
Hand, Foot, and Mouth Disease (HFMD) is a worldwide infectious disease. In China, many provinces have reported HFMD cases, especially the south and southwest provinces. Many studies have found a strong association between the incidence of HFMD and climatic factors such as temperature, rainfall, and relative humidity. However, few studies have analyzed cluster effects between various geographical units. The nonlinear relationships and lag effects between weekly HFMD cases and climatic variables were estimated for the period 2008-2013 using a polynomial distributed lag model. The extra-Poisson multilevel spatial polynomial model was used to model the exact relationship between weekly HFMD incidence and climatic variables after considering cluster effects, the provincial correlated structure of HFMD incidence, and overdispersion. Smoothing spline methods were used to detect threshold effects between climatic factors and HFMD incidence. HFMD incidence was spatially heterogeneous among provinces, and the scale measure of overdispersion was 548.077. After controlling for long-term trends, spatial heterogeneity, and overdispersion, temperature was highly associated with HFMD incidence. Weekly average temperature and weekly temperature difference showed approximately inverse-V-shaped and V-shaped relationships with HFMD incidence, respectively. The lag effects for weekly average temperature and weekly temperature difference were 3 weeks and 2 weeks, respectively. Highly spatially correlated HFMD incidence was detected in northern, central, and southern provinces. Temperature can explain most of the variation in HFMD incidence in southern and northeastern provinces. After adjustment for temperature, eastern and northern provinces still showed high variation in HFMD incidence. We found a relatively strong association between weekly HFMD incidence and weekly average temperature.
The association between HFMD incidence and climatic variables was spatially heterogeneous across provinces. Future research should explore the risk factors underlying the spatially correlated structure and the variation in HFMD incidence that temperature alone cannot explain. When analyzing the association between HFMD incidence and climatic variables, spatial heterogeneity among provinces should be evaluated. Moreover, the extra-Poisson multilevel model was capable of modeling the association between HFMD incidence and climatic variables while accounting for overdispersion.
Average variograms to guide soil sampling
NASA Astrophysics Data System (ADS)
Kerry, R.; Oliver, M. A.
2004-10-01
To manage land in a site-specific way for agriculture requires detailed maps of the variation in the soil properties of interest. To predict accurately for mapping, the interval at which the soil is sampled should relate to the scale of spatial variation. A variogram can be used to guide sampling in two ways. A sampling interval of less than half the range of spatial dependence can be used, or the variogram can be used with the kriging equations to determine an optimal sampling interval to achieve a given tolerable error. A variogram might not be available for the site, but if the variograms of several soil properties were available on similar parent material and/or particular topographic positions, an average variogram could be calculated from these. Averages of the variogram ranges and standardized average variograms from four different parent materials in southern England were used to suggest suitable sampling intervals for future surveys in similar pedological settings based on half the variogram range. The standardized average variograms were also used to determine optimal sampling intervals using the kriging equations. Similar sampling intervals were suggested by each method, and the maps of predictions based on data at different grid spacings were evaluated for the different parent materials. Variograms of loss on ignition (LOI) taken from the literature for other sites in southern England with similar parent materials had ranges close to the average for a given parent material, showing the possible wider application of such averages to guide sampling.
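The half-range rule can be sketched end to end: estimate an empirical variogram, read off an approximate range, and take half of it as the sampling interval. The synthetic transect, lag binning, and the crude 95%-of-sill range estimate below are illustrative assumptions, not the paper's fitting procedure:

```python
import numpy as np

def empirical_variogram(x, z, lags):
    # method-of-moments semivariance at each lag bin, 1-D transect
    gamma = []
    for h in lags:
        diffs = [(z[i] - z[j])**2
                 for i in range(len(x)) for j in range(i + 1, len(x))
                 if abs(abs(x[i] - x[j]) - h) < 0.5]
        gamma.append(0.5 * np.mean(diffs))
    return np.array(gamma)

# synthetic transect with short-range spatial dependence
rng = np.random.default_rng(8)
x = np.arange(0, 100, 1.0)
field = np.convolve(rng.normal(size=160), np.ones(30) / 30, mode="valid")[:100]
lags = np.arange(1, 40, 2.0)
gamma = empirical_variogram(x, field, lags)

# crude range estimate: first lag where the variogram nears its sill
sill = gamma[-5:].mean()
rng_est = lags[np.argmax(gamma >= 0.95 * sill)]
sampling_interval = rng_est / 2.0   # the half-range rule from the abstract
```

In practice one would fit a spherical or exponential model to `gamma` rather than read the range off directly, but the resulting interval is used the same way.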
Chen, Guang-Hong; Li, Yinsheng
2015-08-01
In x-ray computed tomography (CT), a violation of the Tuy data sufficiency condition leads to limited-view artifacts. In some applications, it is desirable to use data corresponding to a narrow temporal window to reconstruct images with reduced temporal-average artifacts. However, the need to reduce temporal-average artifacts in practice may result in a violation of the Tuy condition and thus undesirable limited-view artifacts. In this paper, the authors present a new iterative reconstruction method, synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON), to eliminate limited-view artifacts using data acquired within an ultranarrow temporal window that severely violates the Tuy condition. In time-resolved contrast enhanced CT acquisitions, image contrast dynamically changes during data acquisition. Each image reconstructed from data acquired in a given temporal window represents one time frame and can be denoted as an image vector. Conventionally, each individual time frame is reconstructed independently. In this paper, all image frames are grouped into a spatial-temporal image matrix and are reconstructed together. Rather than the spatial and/or temporal smoothing regularizers commonly used in iterative image reconstruction, the nuclear norm of the spatial-temporal image matrix is used in SMART-RECON to regularize the reconstruction of all image time frames. This regularizer exploits the low-dimensional structure of the spatial-temporal image matrix to mitigate limited-view artifacts when an ultranarrow temporal window is desired in some applications to reduce temporal-average artifacts. Both numerical simulations in two dimensional image slices with known ground truth and in vivo human subject data acquired in a contrast enhanced cone beam CT exam have been used to validate the proposed SMART-RECON algorithm and to demonstrate the initial performance of the algorithm. 
Reconstruction errors and temporal fidelity of the reconstructed images were quantified using the relative root mean square error (rRMSE) and the universal quality index (UQI) in numerical simulations. The performance of the SMART-RECON algorithm was compared with that of the prior image constrained compressed sensing (PICCS) reconstruction, quantitatively in simulations and qualitatively in the human subject exam. In numerical simulations, the 240° short-scan angular span was divided into four consecutive 60° angular subsectors. SMART-RECON enables four high temporal fidelity images without limited-view artifacts. The average rRMSE is 16% and the UQIs are 0.96 and 0.95 for the two local regions of interest, respectively. In contrast, the corresponding average rRMSE and UQIs are 25%, 0.78, and 0.81, respectively, for the PICCS reconstruction. Note that only one filtered backprojection image can be reconstructed from the same data set, with an average rRMSE and UQIs of 45%, 0.71, and 0.79, respectively, to benchmark reconstruction accuracies. For in vivo contrast enhanced cone beam CT data acquired from a short-scan angular span of 200°, three 66° angular subsectors were used in SMART-RECON. The results demonstrated clear contrast differences among the three SMART-RECON reconstructed image volumes, without limited-view artifacts. In contrast, for the same angular sectors, PICCS could not reconstruct images free of limited-view artifacts and with clear contrast differences among the three reconstructed image volumes. In time-resolved CT, the proposed SMART-RECON method provides a new way to eliminate limited-view artifacts using data acquired in an ultranarrow temporal window, corresponding here to approximately 60° angular subsectors.
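SMART-RECON regularizes the whole spatial-temporal image matrix with its nuclear norm rather than smoothing frames individually. The standard proximal operator of a nuclear-norm penalty is singular value soft-thresholding, sketched below on a toy spatial-temporal matrix; this is a generic building block of low-rank solvers, not the authors' SMART-RECON implementation, and the matrix sizes and threshold are invented:

```python
import numpy as np

def svt(M, tau):
    """Proximal operator of tau * ||.||_* : soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt, s_shrunk

# Toy spatial-temporal matrix: each column is one vectorized time frame.
rs = np.random.RandomState(1)
frames = rs.randn(256, 2) @ rs.randn(2, 4)   # rank-2 contrast dynamics, 4 frames
noisy = frames + 0.05 * rs.randn(256, 4)     # stand-in for limited-view error
low_rank, sv = svt(noisy, tau=1.0)
print("singular values retained:", int(np.count_nonzero(sv)))
```

Inside an iterative solver, a step like this would alternate with a data-fidelity update enforcing consistency with the measured projections.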
NASA Astrophysics Data System (ADS)
Molero, B.; Leroux, D. J.; Richaume, P.; Kerr, Y. H.; Merlin, O.; Cosh, M. H.; Bindlish, R.
2018-01-01
We conduct a novel comprehensive investigation that seeks to establish the connection between spatial scales and timescales in surface soil moisture (SM) within the satellite footprint (~50 km). Modeled and measured point series at the Yanco and Little Washita in situ networks are first decomposed into anomalies at timescales ranging from 0.5 to 128 days, using wavelet transforms. Then, their degree of spatial representativeness is evaluated on a per-timescale basis by comparison with large spatial scale data sets (the in situ spatial average, SMOS, AMSR2, and ECMWF). Four methods are used for this: temporal stability analysis (TStab), triple collocation (TC), percentage of correlated areas (CArea), and a newly proposed approach that uses wavelet-based correlations (WCor). We found that the mean of the spatial representativeness values tends to increase with the timescale, but so does their dispersion. Locations exhibit poor spatial representativeness at scales below 4 days, and either very good or poor representativeness at seasonal scales. Regarding the methods, TStab cannot be applied to the anomaly series due to their multiple zero-crossings, and TC is suitable for week and month scales but not for other scales, where data set cross-correlations are found to be low. In contrast, WCor and CArea give consistent results at all timescales. WCor is less sensitive to the spatial sampling density, so it is a robust method that can be applied to sparse networks (one station per footprint). These results are promising for improving the validation and downscaling of satellite SM series and the optimization of SM networks.
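The WCor idea (correlating two series scale by scale on their anomaly coefficients) can be illustrated with a bare-bones dyadic decomposition. The sketch below uses a simple Haar multiresolution instead of the wavelet transform actually used in the study, and the "point" and "spatial mean" series are synthetic; it only shows how per-timescale correlations are computed:

```python
import numpy as np

def haar_details(x, levels):
    """Haar multiresolution: detail signals for dyadic scales 2, 4, ..., 2**levels."""
    details, approx = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        a = 0.5 * (approx[0::2] + approx[1::2])   # pairwise average (coarser series)
        d = 0.5 * (approx[0::2] - approx[1::2])   # anomaly at this timescale
        details.append(d)
        approx = a
    return details

def per_scale_correlation(x, y, levels=5):
    """Correlation of the two series' detail coefficients at each dyadic scale."""
    return [float(np.corrcoef(dx, dy)[0, 1])
            for dx, dy in zip(haar_details(x, levels), haar_details(y, levels))]

# Toy data: a point series sharing a slowly varying signal with the spatial mean.
rs = np.random.RandomState(2)
spatial_mean = 0.1 * np.cumsum(rs.randn(512))   # shared low-frequency signal
point = spatial_mean + 0.8 * rs.randn(512)      # plus local high-frequency noise
print([round(c, 2) for c in per_scale_correlation(point, spatial_mean)])
```

For this toy pair the correlations are low at the finest scales (local noise dominates) and rise at coarser scales, mirroring the per-timescale representativeness behavior described above.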
NASA Astrophysics Data System (ADS)
Steyn-Ross, Moira L.; Steyn-Ross, D. A.
2016-02-01
Mean-field models of the brain approximate spiking dynamics by assuming that each neuron responds to its neighbors via a naive spatial average that neglects local fluctuations and correlations in firing activity. In this paper we address this issue by introducing a rigorous formalism to enable spatial coarse-graining of spiking dynamics, scaling from the microscopic level of a single type 1 (integrator) neuron to a macroscopic assembly of spiking neurons that are interconnected by chemical synapses and nearest-neighbor gap junctions. Spiking behavior at the single-neuron scale ℓ ≈ 10 μm is described by Wilson's two-variable conductance-based equations [H. R. Wilson, J. Theor. Biol. 200, 375 (1999), 10.1006/jtbi.1999.1002], driven by fields of incoming neural activity from neighboring neurons. We map these equations to a coarser spatial resolution of grid length Bℓ, with B ≫ 1 being the blocking ratio linking micro and macro scales. Our method systematically eliminates high-frequency (short-wavelength) spatial modes q⃗ in favor of low-frequency spatial modes Q⃗ using an adiabatic elimination procedure that has been shown to be equivalent to the path-integral coarse graining applied to renormalization group theory of critical phenomena. This bottom-up neural regridding allows us to track the percolation of synaptic and ion-channel noise from the single neuron up to the scale of macroscopic population-average variables. Anticipated applications of neural regridding include extraction of the current-to-firing-rate transfer function, investigation of fluctuation criticality near phase-transition tipping points, determination of spatial scaling laws for avalanche events, and prediction of the spatial extent of self-organized macrocolumnar structures.
As a first-order exemplar of the method, we recover nonlinear corrections for a coarse-grained Wilson spiking neuron embedded in a network of identical diffusively coupled neurons whose chemical synapses have been disabled. Intriguingly, we find that reblocking transforms the original type 1 Wilson integrator into a type 2 resonator whose spike-rate transfer function exhibits abrupt spiking onset with near-vertical takeoff and chaotic dynamics just above threshold.
Callegary, J.B.; Leenhouts, J.M.; Paretti, N.V.; Jones, Christopher A.
2007-01-01
To classify recharge potential (RCP) in ephemeral-stream channels, a method was developed that incorporates information about channel geometry, vegetation characteristics, and bed-sediment apparent electrical conductivity (σa). Recharge potential is not independently measurable, but is instead formulated as a site-specific, qualitative parameter. We used data from 259 transects across two ephemeral-stream channels near Sierra Vista, Arizona, a location with a semiarid climate. Seven data types were collected: σa averaged over two depth intervals (0-3 m and 0-6 m), channel incision depth and width, diameter-at-breast-height of the largest tree, and woody-plant and grass density. A two-tiered system was used to classify a transect's RCP. In the first tier, transects were categorized by estimates of near-surface-sediment hydraulic permeability as low, moderate, or high using measurements of 0-3 m depth σa. Each of these categories was subdivided into low, medium, or high RCP classes using the remaining six data types, thus yielding a total of nine RCP designations. Six sites in the study area were used to compare RCP and σa with previously measured surrogates for hydraulic permeability. Borehole-averaged percent fines showed a moderate correlation with both shallow and deep σa measurements; however, correlation of point measurements of saturated hydraulic conductivity, percent fines, and cylinder infiltrometer measurements with σa and RCP was generally poor. The poor correlation was probably caused by the relatively large measurement volume and spatial averaging of σa compared with the spatially limited point measurements. Because of the comparatively large spatial extent of measurement transects and the variety of data types collected, RCP estimates can give a more complete picture of the major factors affecting recharge at a site than is possible through point or borehole-averaged estimates of hydraulic permeability alone. © 2007 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Price, J.; Lakshmi, V.
2013-12-01
The advancement of remote sensing technology has led to better understanding of the spatial and temporal variation in many physical and biological parameters, such as temperature, salinity, soil moisture, vegetation cover, and community composition. This research takes a novel approach to understanding the temporal and spatial variability of mussel body growth using remotely sensed surface temperatures and chlorophyll-a concentration. Within marine rocky intertidal ecosystems, temperature and food availability influence species abundance, physiological performance, and distribution of mussel species. Current methods to determine the temperature mussel species experience range from in-situ field observations, temperature loggers, and temperature models to the use of other temperature variables. However, since the temperature that mussel species experience differs from the air temperature due to physical and biological characteristics (size, color, gaping, etc.), it is difficult to accurately predict the thermal stresses they experience. Food availability (with chlorophyll-a concentration used as a proxy) for mussel species is mostly determined at specific study sites using water sampling. This means that analysis of temperature and food availability across large spatial scales and long temporal scales is not a trivial task, given spatial heterogeneity. However, this is an essential step in determining the impact of changing climate on vulnerable ecosystems such as the marine rocky intertidal system. The purpose of this study was to investigate the potential of using remotely sensed surface temperatures and chlorophyll-a concentration to better understand the temporal and spatial variability of the body growth of the ecologically and economically important rocky intertidal mussel species, Mytilus californianus.
Remotely sensed sea surface temperature (SST), land surface temperature (LST), intertidal surface temperature (IST), chlorophyll-a concentration, and mussel body growth were collected for eight study sites along the coast of Oregon, USA for a 12-year period from 2000 through 2011. Differences in surface temperatures, chlorophyll-a concentration, and mussel body growth were seen across study sites. The northernmost study site, Cape Meares, had the highest average SST and the lowest average chlorophyll-a concentration; interestingly, it also had high average mussel growth. In contrast, Cape Arago and Cape Blanco, the two southernmost study sites, had the lowest average SST and lowest average mussel growth, but higher average chlorophyll-a concentrations. Furthermore, at some study sites mussel growth was related to temperature, and at other study sites chlorophyll-a concentration was related to mussel growth. The strongest relationship with either temperature or chlorophyll-a concentration was found at Boiler Bay, Oregon: approximately 81% of the variation in mean size-specific mussel growth was explained by mean annual LST anomalies. This means that at Boiler Bay, cooler LST years resulted in lower mussel growth and warmer years resulted in higher mussel growth. Results suggest that SST may influence mussel body growth more than chlorophyll-a concentration.
Morgenstern, Hai; Rafaely, Boaz
2018-02-01
Spatial analysis of room acoustics is an ongoing research topic. Microphone arrays have been employed for spatial analyses with an important objective being the estimation of the direction-of-arrival (DOA) of direct sound and early room reflections using room impulse responses (RIRs). An optimal method for DOA estimation is the multiple signal classification algorithm. When RIRs are considered, this method typically fails due to the correlation of room reflections, which leads to rank deficiency of the cross-spectrum matrix. Preprocessing methods for rank restoration, which may involve averaging over frequency, for example, have been proposed exclusively for spherical arrays. However, these methods fail in the case of reflections with equal time delays, which may arise in practice and could be of interest. In this paper, a method is proposed for systems that combine a spherical microphone array and a spherical loudspeaker array, referred to as multiple-input multiple-output systems. This method, referred to as modal smoothing, exploits the additional spatial diversity for rank restoration and succeeds where previous methods fail, as demonstrated in a simulation study. Finally, combining modal smoothing with a preprocessing method is proposed in order to increase the number of DOAs that can be estimated using low-order spherical loudspeaker arrays.
Spatial ability of slow learners based on Hubert Maier theory
NASA Astrophysics Data System (ADS)
Permatasari, I.; Pramudya, I.; Kusmayadi, T. A.
2018-03-01
Slow learners are children who have low learning achievement (below the average of typical children) in one or all academic fields, but they are not classified as mentally retarded children. Spatial ability develops according to age and the level of knowledge possessed, acquired both from the environment and from formal education. Analyzing the spatial ability of students is important for teachers as an effort to improve the quality of learning for slow learners, especially in the implementation of inclusion schools, which are developing in Indonesia. This research used a qualitative method and involved slow learner students as subjects. Based on the data analysis, the following spatial abilities of slow learners were found: spatial perception, students were able to describe the other shape of an object when its position changed; spatial visualisation, students were able to describe the materials that construct an object; mental rotation, students could not describe an object being rotated; spatial relation, students could not describe the relations of same objects; spatial orientation, students were able to describe an object from another's perspective.
Runoff simulation sensitivity to remotely sensed initial soil water content
NASA Astrophysics Data System (ADS)
Goodrich, D. C.; Schmugge, T. J.; Jackson, T. J.; Unkrich, C. L.; Keefer, T. O.; Parry, R.; Bach, L. B.; Amer, S. A.
1994-05-01
A variety of aircraft remotely sensed and conventional ground-based measurements of volumetric soil water content (SW) were made over two subwatersheds (4.4 and 631 ha) of the U.S. Department of Agriculture's Agricultural Research Service Walnut Gulch experimental watershed during the 1990 monsoon season. Spatially distributed soil water contents estimated remotely from the NASA push broom microwave radiometer (PBMR), an Institute of Radioengineering and Electronics (IRE) multifrequency radiometer, and three ground-based point methods were used to define prestorm initial SW for a distributed rainfall-runoff model (KINEROS; Woolhiser et al., 1990) at a small catchment scale (4.4 ha). At a medium catchment scale (631 ha or 6.31 km2) spatially distributed PBMR SW data were aggregated via stream order reduction. The impacts of the various spatial averages of SW on runoff simulations are discussed and are compared to runoff simulations using SW estimates derived from a simple daily water balance model. It was found that at the small catchment scale the SW data obtained from any of the measurement methods could be used to obtain reasonable runoff predictions. At the medium catchment scale, a basin-wide remotely sensed average of initial water content was sufficient for runoff simulations. This has important implications for the possible use of satellite-based microwave soil moisture data to define prestorm SW because the low spatial resolutions of such sensors may not seriously impact runoff simulations under the conditions examined. However, at both the small and medium basin scale, adequate resources must be devoted to proper definition of the input rainfall to achieve reasonable runoff simulations.
Effect of spatial averaging on multifractal properties of meteorological time series
NASA Astrophysics Data System (ADS)
Hoffmann, Holger; Baranowski, Piotr; Krzyszczak, Jaromir; Zubik, Monika
2016-04-01
Introduction: Process-based models for large-scale simulations require input of agro-meteorological quantities that are often in the form of time series of coarse spatial resolution. Therefore, knowledge about their scaling properties is fundamental for transferring locally measured fluctuations to larger scales and vice versa. However, the scaling analysis of these quantities is complicated by the presence of localized trends and non-stationarities. Here we assess how spatially aggregating meteorological data to coarser resolutions affects the data's temporal scaling properties. While it is known that spatial aggregation may affect spatial data properties (Hoffmann et al., 2015), it is unknown how it affects temporal data properties. Therefore, the objective of this study was to characterize the aggregation effect (AE) with regard to both temporal and spatial input data properties, considering scaling properties (i.e. statistical self-similarity) of the chosen agro-meteorological time series through multifractal detrended fluctuation analysis (MFDFA). Materials and Methods: Time series covering the years 1982-2011 were spatially averaged from 1 to 10, 25, 50 and 100 km resolution to assess the impact of spatial aggregation. Daily minimum, mean and maximum air temperature (2 m), precipitation, global radiation, wind speed and relative humidity (Zhao et al., 2015) were used. To reveal the multifractal structure of the time series, we used the procedure described in Baranowski et al. (2015). The diversity of the studied multifractals was evaluated by the parameters of the time series spectra. In order to analyse differences in multifractal properties relative to the 1 km resolution grids, data at coarser resolutions were disaggregated to 1 km.
Results and Conclusions: Analysing the effect of spatial averaging on multifractal properties, we observed that spatial patterns of the multifractal spectrum (MS) of all meteorological variables differed from the 1 km grids, and MS parameters were biased by between -29.1% (precipitation; width of MS) and >4% (min. temperature, radiation; asymmetry of MS). The spatial variability of MS parameters was also strongly affected at the highest aggregation (100 km). The results confirm that spatial data aggregation may strongly affect temporal scaling properties, which should be taken into account when upscaling for large-scale studies. Acknowledgements: The study was conducted within FACCE MACSUR. Please see Baranowski et al. (2015) for details on funding. References: Baranowski, P., Krzyszczak, J., Sławiński, C. et al. (2015). Climate Research 65, 39-52. Hoffmann, H., Zhao, G., Van Bussel, L.G.J. et al. (2015). Climate Research 65, 53-69. Zhao, G., Siebert, S., Rezaei, E. et al. (2015). Agricultural and Forest Meteorology 200, 156-171.
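The MFDFA procedure referenced above (detrended fluctuation analysis generalized to moments of order q) can be sketched compactly. The following is a minimal textbook implementation, not the procedure of Baranowski et al. (2015); the test series, scales, and q values are invented, and white noise is used only because its expected generalized Hurst exponent h(q) ≈ 0.5 is known:

```python
import numpy as np

def mfdfa(x, scales, qs, order=1):
    """Minimal MFDFA: generalized Hurst exponent h(q) for each q (q != 0)."""
    y = np.cumsum(x - np.mean(x))                     # profile of the series
    F = np.zeros((len(qs), len(scales)))
    for j, s in enumerate(scales):
        t = np.arange(s)
        var = []
        for v in range(len(y) // s):                  # non-overlapping segments
            seg = y[v * s:(v + 1) * s]
            trend = np.polyval(np.polyfit(t, seg, order), t)
            var.append(np.mean((seg - trend) ** 2))   # detrended variance
        var = np.asarray(var)
        for i, q in enumerate(qs):
            F[i, j] = np.mean(var ** (q / 2.0)) ** (1.0 / q)
    # h(q) is the log-log slope of the fluctuation function F_q(s) versus s.
    return [np.polyfit(np.log(scales), np.log(F[i]), 1)[0] for i in range(len(qs))]

rs = np.random.RandomState(3)
noise = rs.randn(8192)                                # monofractal reference series
h = mfdfa(noise, scales=np.array([16, 32, 64, 128, 256, 512]), qs=[-2.0, 2.0])
print("h(-2), h(2):", [round(v, 2) for v in h])       # both near 0.5 for white noise
```

A multifractal series would instead show h(q) varying with q, and the width of the resulting singularity spectrum is the kind of MS parameter whose aggregation bias is reported above.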
Davis, Gregory B; Laslett, Dean; Patterson, Bradley M; Johnston, Colin D
2013-03-15
Accurate estimation of biodegradation rates during remediation of petroleum-impacted soil and groundwater is critical to avoid excessive costs and to ensure remedial effectiveness. Oxygen depth profiles or oxygen consumption over time are often used separately to estimate the magnitude and timeframe of biodegradation of petroleum hydrocarbons in soil and subsurface environments. Each method has limitations. Here we integrate spatial and temporal oxygen concentration data from a field experiment to develop better estimates and more reliably quantify biodegradation rates. During a nine-month bioremediation trial, 84 sets of respiration rate data (where aeration was halted and oxygen consumption was measured over time) were collected from in situ oxygen sensors at multiple locations and depths across a diesel non-aqueous phase liquid (NAPL) contaminated subsurface. Additionally, detailed vertical soil moisture (air-filled porosity) and NAPL content profiles were determined. The spatial and temporal oxygen concentration (respiration) data were modeled assuming one-dimensional diffusion of oxygen through the soil profile, which was open to the atmosphere. Point and vertically averaged biodegradation rates were determined and compared to modeled data from a previous field trial. Point estimates of biodegradation rates assuming no diffusion ranged up to 58 mg kg⁻¹ day⁻¹, while rates accounting for diffusion ranged up to 87 mg kg⁻¹ day⁻¹. Typically, accounting for diffusion increased point biodegradation rate estimates by 15-75% and vertically averaged rates by 60-80%, depending on the averaging method adopted. Importantly, ignoring diffusion led to overestimation of biodegradation rates where the location of measurement was outside the zone of NAPL contamination. Over- or underestimation of biodegradation rates has cost implications for successful remediation of petroleum-impacted sites. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
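The diffusion correction described above amounts to adding an oxygen-transport term to the local mass balance: under 1-D vertical diffusion, the consumption rate is R ≈ D·∂²C/∂z² − ∂C/∂t, whereas ignoring diffusion keeps only the drawdown −∂C/∂t. A minimal finite-difference sketch (not the authors' model; the diffusion coefficient, grid, and synthetic profiles are invented):

```python
import numpy as np

def biodegradation_rate(c, z, t, D):
    """Consumption rate R = D * d2C/dz2 - dC/dt from O2 profiles c[time, depth].

    Ignoring the curvature term reduces this to the plain drawdown -dC/dt.
    """
    dcdt = np.gradient(c, t, axis=0)
    d2cdz2 = np.gradient(np.gradient(c, z, axis=1), z, axis=1)
    return D * d2cdz2 - dcdt

# Toy profiles: uniform 0.5 units/day oxygen drawdown with no vertical
# curvature, so the diffusion term vanishes and R equals the drawdown rate.
z = np.linspace(0.0, 3.0, 31)                    # depth (m)
t = np.linspace(0.0, 10.0, 11)                   # days since aeration was halted
c = 20.0 - 0.5 * t[:, None] + 0.0 * z[None, :]   # c[time, depth]
R = biodegradation_rate(c, z, t, D=0.5)
print(round(float(R.mean()), 3))                 # → 0.5
```

With real sensor profiles the curvature term redistributes apparent consumption toward the NAPL interval, which is why ignoring it inflates rates measured outside the contaminated zone.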
Spatial analysis of malaria in Anhui province, China
Zhang, Wenyi; Wang, Liping; Fang, Liqun; Ma, Jiaqi; Xu, Youfu; Jiang, Jiafu; Hui, Fengming; Wang, Jianjun; Liang, Song; Yang, Hong; Cao, Wuchun
2008-01-01
Background: Malaria has re-emerged in Anhui Province, China, and this province was the most seriously affected by malaria during 2005-2006. It is necessary to understand the spatial distribution of malaria cases and to identify highly endemic areas for future public health planning and resource allocation in Anhui Province. Methods: The annual average incidence at the county level was calculated using malaria cases reported between 2000 and 2006 in Anhui Province. GIS-based spatial analyses were conducted to detect the spatial distribution and clustering of malaria incidence at the county level. Results: The spatial distribution of malaria cases in Anhui Province from 2000 to 2006 was mapped at the county level to show crude incidence, excess hazard and spatially smoothed incidence. Spatial cluster analysis suggested that 10 and 24 counties were at increased risk for malaria (P < 0.001), with the maximum spatial cluster sizes at < 50% and < 25% of the total population, respectively. Conclusion: The application of GIS, together with spatial statistical techniques, provides a means to quantify explicit malaria risks and to further identify environmental factors responsible for the re-emerged malaria risks. Future public health planning and resource allocation in Anhui Province should be focused on the maximum spatial cluster region. PMID:18847489
Focused ultrasound transducer spatial peak intensity estimation: a comparison of methods
NASA Astrophysics Data System (ADS)
Civale, John; Rivens, Ian; Shaw, Adam; ter Haar, Gail
2018-03-01
Characterisation of the spatial peak intensity at the focus of high intensity focused ultrasound transducers is difficult because of the risk of damage to hydrophone sensors at the high focal pressures generated. Hill et al (1994 Ultrasound Med. Biol. 20 259-69) provided a simple equation for estimating spatial-peak intensity for solid spherical bowl transducers using measured acoustic power and focal beamwidth. This paper demonstrates theoretically and experimentally that this expression is only strictly valid for spherical bowl transducers without a central (imaging) aperture. A hole in the centre of the transducer results in over-estimation of the peak intensity. Improved strategies for determining focal peak intensity from a measurement of total acoustic power are proposed. Four methods are compared: (i) a solid spherical bowl approximation (after Hill et al 1994 Ultrasound Med. Biol. 20 259-69), (ii) a numerical method derived from theory, (iii) a method using the measured sidelobe to focal peak pressure ratio, and (iv) a method for measuring the focal power fraction (FPF) experimentally. Spatial-peak intensities were estimated for 8 transducers at three drive power levels: low (approximately 1 W), moderate (approximately 10 W) and high (20-70 W). The calculated intensities were compared with those derived from focal peak pressure measurements made using a calibrated hydrophone. The FPF measurement method was found to provide focal peak intensity estimates that agreed most closely (within 15%) with the hydrophone measurements, followed by the pressure ratio method (within 20%). The numerical method was found to consistently over-estimate focal peak intensity (+40% on average); however, for transducers with a central hole it was more accurate than using the solid bowl assumption (+70% over-estimation).
In conclusion, the ability to make use of an automated beam plotting system, and a hydrophone with good spatial resolution, greatly facilitates characterisation of the FPF, and consequently gives improved confidence in estimating spatial peak intensity from measurement of acoustic power.
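As a rough illustration of how an FPF-style estimate works, assume the focal (main) lobe is approximately Gaussian; the spatial-peak intensity then follows from the total acoustic power, the focal power fraction, and the -6 dB beamwidth. The Gaussian assumption and all numbers below are illustrative, not taken from the paper:

```python
import math

def spatial_peak_intensity(power_w, fpf, beamwidth_6db_m):
    """Focal spatial-peak intensity (W/m^2) from total power and the FPF.

    Assumes a roughly Gaussian focal lobe, for which the -6 dB beamwidth d
    gives I_peak = 8 * ln(2) * P_focal / (pi * d**2), with P_focal = FPF * P.
    """
    return 8.0 * math.log(2.0) * fpf * power_w / (math.pi * beamwidth_6db_m ** 2)

# Illustrative numbers only: 10 W of acoustic power, 70% focal power fraction,
# 1.5 mm -6 dB focal beamwidth.
i_peak = spatial_peak_intensity(10.0, 0.70, 1.5e-3)
print(f"spatial-peak intensity ~= {i_peak / 1e4:.0f} W/cm^2")
```

Setting FPF = 1 recovers a solid-bowl-style estimate; the paper's point is that a central aperture pushes more power into sidelobes, so measuring the FPF (or the sidelobe pressure ratio) avoids the resulting over-estimate.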
Qu, Zhechao; Werhahn, Olav; Ebert, Volker
2018-06-01
The effects of thermal boundary layers on tunable diode laser absorption spectroscopy (TDLAS) measurement results must be quantified when using line-of-sight (LOS) TDLAS under conditions with spatial temperature gradients. In this paper, a new methodology based on spectral simulation is presented for quantifying the LOS TDLAS measurement deviation under conditions with thermal boundary layers. The effects of different temperature gradients and thermal boundary layer thicknesses on spectral collisional widths and gas concentration measurements are quantified. A CO2 TDLAS spectrometer, which has two gas cells to generate the spatial temperature gradients, was employed to validate the simulation results. The measured deviations and LOS-averaged collisional widths are in very good agreement with the simulated results for conditions with different temperature gradients. We demonstrate quantification of thermal boundary layer thickness with the proposed method by exploiting the LOS-averaged collisional width of the path-integrated spectrum.
Chen, Chunyi; Yang, Huamin; Zhou, Zhou; Zhang, Weizhi; Kavehrad, Mohsen; Tong, Shoufeng; Wang, Tianshu
2013-12-02
The temporal covariance function of irradiance-flux fluctuations for Gaussian Schell-model (GSM) beams propagating in atmospheric turbulence is theoretically formulated by making use of the method of effective beam parameters. Based on this formulation, new expressions for the root-mean-square (RMS) bandwidth of the irradiance-flux temporal spectrum due to GSM beams passing through atmospheric turbulence are derived. With the help of these expressions, the temporal fade statistics of the irradiance flux in free-space optical (FSO) communication systems using spatially partially coherent sources, impaired by atmospheric turbulence, are further calculated. Results show that with a given receiver aperture size, the use of a spatially partially coherent source can reduce both the fractional fade time and the average fade duration of the received light signal; however, when atmospheric turbulence grows strong, the reduction in the fractional fade time becomes insignificant for both large and small receiver apertures, and the reduction in the average fade duration becomes negligible for small receiver apertures. It is also illustrated that if the receiver aperture size is fixed, changing the transverse correlation length of the source from a larger value to a smaller one can reduce the average fade frequency of the received light signal only when a threshold parameter in decibels greater than the critical threshold level is specified.
Application of spatial methods to identify areas with lime requirement in eastern Croatia
NASA Astrophysics Data System (ADS)
Bogunović, Igor; Kisic, Ivica; Mesic, Milan; Zgorelec, Zeljka; Percin, Aleksandra; Pereira, Paulo
2016-04-01
With more than 50% of all agricultural land in Croatia on acid soils, soil acidity is recognized as a big problem. Low soil pH leads to a series of negative phenomena in plant production, and therefore liming, recommended on the basis of soil analysis, is a compulsory measure for the reclamation of acid soils. The need for liming is often erroneously determined only on the basis of soil pH, because the determination of cation exchange capacity, hydrolytic acidity and base saturation is a major cost to producers. Therefore, in Croatia, as in some other countries, the amount of liming material needed to ameliorate acid soils is calculated by considering their hydrolytic acidity. The purpose of this study was to test several interpolation methods to identify the best spatial predictor of hydrolytic acidity, and to determine the possibility of using multivariate geostatistics to reduce the number of samples needed to determine hydrolytic acidity, all with the aim that the accuracy of the spatial distribution of the liming requirement is not significantly reduced. Soil pH (in KCl) and hydrolytic acidity (Y1) were determined in 1004 samples (from 0-30 cm) collected at random in agricultural fields near Orahovica in eastern Croatia. This study tested 14 univariate interpolation models (part of the ArcGIS software package) in order to provide the most accurate spatial map of hydrolytic acidity on the basis of: all samples (Y1 100%), and data sets with 15% (Y1 85%), 30% (Y1 70%) and 50% fewer samples (Y1 50%). In parallel with the univariate interpolation methods, the precision of the spatial distribution of Y1 was tested by the co-kriging method with exchangeable acidity (pH in KCl) as a covariate. The soils in the studied area had an average pH (KCl) of 4.81, while the average Y1 was 10.52 cmol⁺ kg⁻¹.
These data suggest that liming is a necessary agrotechnical measure for soil conditioning. The results show that ordinary kriging was the most accurate univariate interpolation method, with the smallest error (RMSE) in all four data sets, while the least precise were the Radial Basis Functions (Thin Plate Spline and Inverse Multiquadratic). Furthermore, a trend of increasing error (RMSE) with a reduced number of samples is noticeable for the most accurate univariate interpolation model: 3.096 (Y1 100%), 3.258 (Y1 85%), 3.317 (Y1 70%), 3.546 (Y1 50%). The best-fit semivariograms show a strong spatial dependence in Y1 100% (Nugget/Sill 20.19) and Y1 85% (Nugget/Sill 23.83), while a further reduction of the number of samples resulted in moderate spatial dependence (Y1 70%: 35.85% and Y1 50%: 32.01%). The co-kriging method reduced the RMSE compared with the univariate interpolation methods for each data set: 2.054, 1.731 and 1.734 for Y1 85%, Y1 70% and Y1 50%, respectively. The results show the possibility of reducing sampling costs by using the co-kriging method, which is useful from a practical viewpoint. Halving the number of samples used to determine hydrolytic acidity, in interaction with soil pH, provides higher precision for variable liming than the univariate interpolation methods applied to the entire data set. These data provide new opportunities to reduce costs in practical plant production in Croatia.
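The sample-reduction experiment above (interpolation error growing as the data set shrinks) is easy to reproduce in miniature. The sketch below substitutes simple linear interpolation on a synthetic smooth field for kriging and co-kriging, and all coordinates and values are invented; it only illustrates the RMSE-versus-sample-count comparison:

```python
import numpy as np
from scipy.interpolate import griddata

def field(x, y):
    """Synthetic smooth 'hydrolytic acidity' surface (values invented)."""
    return 10.0 + 3.0 * np.sin(x / 150.0) * np.cos(y / 200.0)

rs = np.random.RandomState(4)
xy = rs.uniform(0.0, 1000.0, size=(1000, 2))      # ~1000 sample locations
z = field(xy[:, 0], xy[:, 1])
gx, gy = np.meshgrid(np.linspace(50, 950, 30), np.linspace(50, 950, 30))
truth = field(gx, gy)                             # validation grid

rmses = []
for keep in (1.0, 0.85, 0.70, 0.50):              # full set, then 15/30/50% fewer
    n = int(keep * len(xy))
    pred = griddata(xy[:n], z[:n], (gx, gy), method="linear")
    rmses.append(float(np.sqrt(np.nanmean((pred - truth) ** 2))))
    print(f"{int(keep * 100):3d}% of samples: RMSE = {rmses[-1]:.3f}")
```

A co-kriging analogue would additionally exploit a cheap, densely sampled covariate (here, soil pH) to hold the error down as the expensive variable is thinned, which is the cost saving the study reports.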
Spatial Patterns and Socioecological Drivers of Dengue Fever Transmission in Queensland, Australia
Clements, Archie; Williams, Gail; Tong, Shilu; Mengersen, Kerrie
2011-01-01
Background: Understanding how socioecological factors affect the transmission of dengue fever (DF) may help to develop an early warning system of DF. Objectives: We examined the impact of socioecological factors on the transmission of DF and assessed potential predictors of locally acquired and overseas-acquired cases of DF in Queensland, Australia. Methods: We obtained data from Queensland Health on the numbers of notified DF cases by local government area (LGA) in Queensland for the period 1 January 2002 through 31 December 2005. Data on weather and the socioeconomic index were obtained from the Australian Bureau of Meteorology and the Australian Bureau of Statistics, respectively. A Bayesian spatial conditional autoregressive model was fitted at the LGA level to quantify the relationship between DF and socioecological factors. Results: Our estimates suggest an increase in locally acquired DF of 6% [95% credible interval (CI): 2%, 11%] and 61% (95% CI: 2%, 241%) in association with a 1-mm increase in average monthly rainfall and a 1°C increase in average monthly maximum temperature between 2002 and 2005, respectively. By contrast, overseas-acquired DF cases increased by 1% (95% CI: 0%, 3%) and by 1% (95% CI: 0%, 2%) in association with a 1-mm increase in average monthly rainfall and a 1-unit increase in average socioeconomic index, respectively. Conclusions: Socioecological factors appear to influence the transmission of DF in Queensland, but the drivers of locally acquired and overseas-acquired DF may differ. DF risk is spatially clustered with different patterns for locally acquired and overseas-acquired cases. PMID:22015625
NASA Astrophysics Data System (ADS)
Jha, S. K.; Brockman, R. A.; Hoffman, R. M.; Sinha, V.; Pilchak, A. L.; Porter, W. J.; Buchanan, D. J.; Larsen, J. M.; John, R.
2018-05-01
Principal component analysis and fuzzy c-means clustering algorithms were applied to slip-induced strain and geometric metric data in an attempt to discover unique microstructural configurations and their frequencies of occurrence in statistically representative instantiations of a titanium alloy microstructure. Grain-averaged fatigue indicator parameters were calculated for the same instantiation. The fatigue indicator parameters strongly correlated with the spatial location of the microstructural configurations in the principal components space. The fuzzy c-means clustering method identified clusters of data that varied in terms of their average fatigue indicator parameters. Furthermore, the number of points in each cluster was inversely correlated to the average fatigue indicator parameter. This analysis demonstrates that data-driven methods have significant potential for providing unbiased determination of unique microstructural configurations and their frequencies of occurrence in a given volume from the point of view of strain localization and fatigue crack initiation.
Optimal synchronization in space
NASA Astrophysics Data System (ADS)
Brede, Markus
2010-02-01
In this Rapid Communication we investigate spatially constrained networks that realize optimal synchronization properties. After arguing that spatial constraints can be imposed by limiting the amount of “wire” available to connect nodes distributed in space, we use numerical optimization methods to construct networks that realize different trade-offs between optimal synchronization and spatial constraints. Over a large range of parameters such optimal networks are found to have a link length distribution characterized by power-law tails P(l) ∝ l^(-α), with exponents α increasing as the networks become more constrained in space. It is also shown that the optimal networks, which constitute a particular type of small-world network, are characterized by the presence of nodes of distinctly larger than average degree around which long-distance links are centered.
Fast depth decision for HEVC inter prediction based on spatial and temporal correlation
NASA Astrophysics Data System (ADS)
Chen, Gaoxing; Liu, Zhenyu; Ikenaga, Takeshi
2016-07-01
High efficiency video coding (HEVC) is a video compression standard that outperforms its predecessor H.264/AVC by doubling the compression efficiency. To enhance compression accuracy, partition sizes in HEVC range from 4x4 to 64x64. However, the manifold partition sizes dramatically increase the encoding complexity. This paper proposes a fast depth decision based on spatial and temporal correlation. The spatial correlation utilizes the coding tree unit (CTU) splitting information, and the temporal correlation utilizes the CTU indicated by the motion vector predictor in inter prediction, to determine the maximum depth of each CTU. Experimental results show that the proposed method saves about 29.1% of the original processing time with a 0.9% BD-bitrate increase on average.
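A toy sketch of the depth-capping idea described above (the function and the depth encoding are assumptions, not the paper's implementation): the search for the current CTU is limited to the deepest split depth used by its spatial neighbours and the temporally co-located CTU.

```python
def max_depth_candidate(left, up, up_left, temporal):
    """Predict the maximum split depth (0..3) to search for the current
    CTU from neighbouring and temporally co-located CTU depths.
    Neighbours passed as None (unavailable) are ignored; if nothing is
    available, fall back to the full depth range."""
    depths = [d for d in (left, up, up_left, temporal) if d is not None]
    if not depths:
        return 3  # full search: depth 0 (64x64 CU) down to depth 3 (8x8 CU)
    # Search no deeper than the deepest correlated neighbour
    return max(depths)

print(max_depth_candidate(1, 2, None, 1))
```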
Thompson, Steven K
2006-12-01
A flexible class of adaptive sampling designs is introduced for sampling in network and spatial settings. In these designs, selections are made sequentially with a mixture distribution based on an active set that changes as the sampling progresses, using network or spatial relationships as well as sample values. The new designs have certain advantages compared with previously existing adaptive and link-tracing designs, including control over sample sizes and over the proportion of effort allocated to adaptive selections. Efficient inference involves averaging over sample paths consistent with the minimal sufficient statistic. A Markov chain resampling method makes the inference computationally feasible. The designs are evaluated in network and spatial settings using two empirical populations: a hidden human population at high risk for HIV/AIDS and an unevenly distributed bird population.
Topography and refractometry of nanostructures using spatial light interference microscopy (SLIM)
Wang, Zhuo; Chun, Ik Su; Li, Xiuling; Ong, Zhun-Yong; Pop, Eric; Millet, Larry; Gillette, Martha; Popescu, Gabriel
2010-01-01
Spatial Light Interference Microscopy (SLIM) is a novel method developed in our laboratory that provides quantitative phase images of transparent structures with 0.3 nm spatial and 0.03 nm temporal accuracy owing to the white light illumination and its common path interferometric geometry. We exploit these features and demonstrate SLIM's ability to perform topography at a single atomic layer in graphene. Further, using a decoupling procedure that we developed for cylindrical structures, we extract the axially-averaged refractive index of semiconductor nanotubes and a neurite of a live hippocampal neuron in culture. We believe that this study will set the basis for novel high-throughput topography and refractometry of man-made and biological nanostructures. PMID:20081970
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kassianov, Evgueni; Barnard, James; Flynn, Connor; ...
2017-07-12
Tower-based data combined with high-resolution satellite products have been used to produce surface albedo at various spatial scales over land. Because tower-based albedo data are available at only a few sites, surface albedos using these combined data are spatially limited. Moreover, tower-based albedo data are not representative of highly heterogeneous regions. To produce areal-averaged and spectrally-resolved surface albedo for regions with various degrees of surface heterogeneity, we have developed a transmission-based retrieval and demonstrated its feasibility for relatively homogeneous land surfaces. Here we demonstrate its feasibility for a highly heterogeneous coastal region. We use the atmospheric transmission measured during a 19-month period (June 2009 – December 2010) by a ground-based Multi-Filter Rotating Shadowband Radiometer (MFRSR) at five wavelengths (0.415, 0.5, 0.615, 0.673 and 0.87 µm) at the Department of Energy’s Atmospheric Radiation Measurement (ARM) Mobile Facility (AMF) site located on Graciosa Island. We compare the MFRSR-retrieved areal-averaged surface albedo with albedo derived from Moderate Resolution Imaging Spectroradiometer (MODIS) observations, and also a composite-based albedo. Lastly, we demonstrate that these three methods produce similar spectral signatures of surface albedo; however, the MFRSR-retrieved albedo is higher on average (≤0.04) than the MODIS-based areal-averaged surface albedo, and the largest difference occurs in winter.
Deng, Peng; Kavehrad, Mohsen; Liu, Zhiwen; Zhou, Zhou; Yuan, Xiuhua
2013-07-01
We study the average capacity performance of multiple-input multiple-output (MIMO) free-space optical (FSO) communication systems using multiple partially coherent beams propagating through non-Kolmogorov strong turbulence, assuming an equal-gain-combining diversity configuration and modeling the combined signal as a sum of gamma-gamma random variables for multiple independent partially coherent beams. Closed-form expressions for the scintillation and average capacity are derived and then used to analyze the dependence on the number of independent diversity branches, power law α, refractive-index structure parameter, propagation distance, and spatial coherence length of the source beams. The obtained results show that the average capacity increases more significantly with an increase in the rank of the MIMO channel matrix than with the diversity order. The effect of the diversity order on the average capacity is independent of the power law, turbulence strength parameter, and spatial coherence length, whereas these effects on the average capacity are gradually mitigated as the diversity order increases. The average capacity increases and saturates with decreasing spatial coherence length, at rates depending on the diversity order, power law, and turbulence strength. There exist optimal values of the spatial coherence length and diversity configuration for maximizing the average capacity of MIMO FSO links over a variety of atmospheric turbulence conditions.
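The gamma-gamma irradiance model underlying this analysis is the product of two unit-mean gamma random variables; the ergodic capacity of a single link can then be estimated by Monte Carlo. A sketch under assumed scintillation parameters and a hypothetical SNR (not the paper's closed-form expressions):

```python
import math, random

def avg_capacity_gamma_gamma(alpha, beta, snr, n=20000, seed=1):
    """Monte Carlo estimate of ergodic capacity E[log2(1 + SNR*h)] for a
    single FSO link with gamma-gamma irradiance h = X*Y (unit mean).
    alpha, beta are the large-/small-scale scintillation parameters."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # random.gammavariate(shape, scale); scale 1/shape gives unit mean
        h = rng.gammavariate(alpha, 1.0 / alpha) * rng.gammavariate(beta, 1.0 / beta)
        total += math.log2(1.0 + snr * h)
    return total / n

# Weaker turbulence (larger alpha, beta) gives capacity closer to the
# no-fading value log2(1 + SNR); stronger turbulence lowers it.
print(avg_capacity_gamma_gamma(10.0, 8.0, 100.0))
```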
Zhu, Zhonglin; Li, Guoan
2013-01-01
Fluoroscopic image techniques, using either a single image or dual images, have been widely applied to measure in vivo human knee joint kinematics. However, few studies have compared the advantages of using single and dual fluoroscopic images. Furthermore, due to the size limitation of the image intensifiers, it is possible that only a portion of the knee joint can be captured by the fluoroscopy during dynamic knee joint motion. In this paper, we present a systematic evaluation of an automatic 2D–3D image matching method for reproducing spatial knee joint positions using either single or dual fluoroscopic image techniques. The data indicated that for the femur and tibia, their spatial positions could be determined with an accuracy and precision of less than 0.2 mm in translation and less than 0.4° in orientation when dual fluoroscopic images were used. Using single fluoroscopic images, the method could produce satisfactory accuracy in joint positions in the imaging plane (on average up to 0.5 mm in translation and 1.3° in rotation), but large variations along the out-of-plane direction (on average up to 4.0 mm in translation and 2.2° in rotation). The precision of using single fluoroscopic images to determine the actual knee positions was worse than the corresponding accuracy. The data also indicated that when using the dual fluoroscopic image technique, even if the knee joint outlines in one image were incomplete by 80%, the algorithm could still reproduce the joint positions with high precision. PMID:21806411
Xing, Jian; Burkom, Howard; Moniz, Linda; Edgerton, James; Leuze, Michael; Tokars, Jerome
2009-01-01
Background The Centers for Disease Control and Prevention's (CDC's) BioSense system provides near-real time situational awareness for public health monitoring through analysis of electronic health data. Determination of anomalous spatial and temporal disease clusters is a crucial part of the daily disease monitoring task. Our study focused on finding useful anomalies at manageable alert rates according to available BioSense data history. Methods The study dataset included more than 3 years of daily counts of military outpatient clinic visits for respiratory and rash syndrome groupings. We applied four spatial estimation methods in implementations of space-time scan statistics cross-checked in Matlab and C. We compared the utility of these methods according to the resultant background cluster rate (a false alarm surrogate) and sensitivity to injected cluster signals. The comparison runs used a spatial resolution based on the facility zip code in the patient record and a finer resolution based on the residence zip code. Results Simple estimation methods that account for day-of-week (DOW) data patterns yielded a clear advantage both in background cluster rate and in signal sensitivity. A 28-day baseline gave the most robust results for this estimation; the preferred baseline is long enough to remove daily fluctuations but short enough to reflect recent disease trends and data representation. Background cluster rates were lower for the rash syndrome counts than for the respiratory counts, likely because of seasonality and the large scale of the respiratory counts. Conclusion The spatial estimation method should be chosen according to characteristics of the selected data streams. In this dataset with strong day-of-week effects, the overall best detection performance was achieved using subregion averages over a 28-day baseline stratified by weekday or weekend/holiday behavior. 
Changing the estimation method for particular scenarios involving different spatial resolution or other syndromes can yield further improvement. PMID:19615075
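The preferred estimator described above, a 28-day baseline average stratified by weekday versus weekend behavior, can be sketched as follows (holiday handling omitted; function names are hypothetical):

```python
from datetime import date, timedelta

def expected_count(counts_by_day, target_day):
    """Expected count for target_day from a 28-day baseline ending the
    day before, stratified into weekday vs weekend strata (holidays
    omitted for brevity). counts_by_day maps date -> observed count."""
    target_is_weekend = date_is_weekend(target_day)
    baseline = []
    for lag in range(1, 29):
        d = target_day - timedelta(days=lag)
        if d in counts_by_day and date_is_weekend(d) == target_is_weekend:
            baseline.append(counts_by_day[d])
    return sum(baseline) / len(baseline) if baseline else 0.0

def date_is_weekend(d):
    return d.weekday() >= 5  # Saturday=5, Sunday=6

# Demo: clinic sees ~10 visits on weekdays, ~2 on weekends
monday = date(2009, 3, 2)  # a Monday
history = {}
for k in range(1, 29):
    d = monday - timedelta(days=k)
    history[d] = 2 if d.weekday() >= 5 else 10
print(expected_count(history, monday))
```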
Single-Molecule and Superresolution Imaging in Live Bacteria Cells
Biteen, Julie S.; Moerner, W.E.
2010-01-01
Single-molecule imaging enables biophysical measurements devoid of ensemble averaging, gives enhanced spatial resolution beyond the diffraction limit, and permits superresolution reconstructions. Here, single-molecule and superresolution imaging are applied to the study of proteins in live Caulobacter crescentus cells to illustrate the power of these methods in bacterial imaging. Based on these techniques, the diffusion coefficient and dynamics of the histidine protein kinase PleC, the localization behavior of the polar protein PopZ, and the treadmilling behavior and protein superstructure of the structural protein MreB are investigated with sub-40-nm spatial resolution, all in live cells. PMID:20300204
NASA Astrophysics Data System (ADS)
Schrön, Martin; Köhli, Markus; Scheiffele, Lena; Iwema, Joost; Bogena, Heye R.; Lv, Ling; Martini, Edoardo; Baroni, Gabriele; Rosolem, Rafael; Weimar, Jannis; Mai, Juliane; Cuntz, Matthias; Rebmann, Corinna; Oswald, Sascha E.; Dietrich, Peter; Schmidt, Ulrich; Zacharias, Steffen
2017-10-01
In the last few years the method of cosmic-ray neutron sensing (CRNS) has gained popularity among hydrologists, physicists, and land-surface modelers. The sensor provides continuous soil moisture data, averaged over several hectares and tens of decimeters in depth. However, the signal still may contain unidentified features of hydrological processes, and many calibration datasets are often required in order to find reliable relations between neutron intensity and water dynamics. Recent insights into environmental neutrons accurately described the spatial sensitivity of the sensor and thus allowed one to quantify the contribution of individual sample locations to the CRNS signal. Consequently, data points of calibration and validation datasets should be averaged using a more physically based weighting approach. In this work, a revised sensitivity function is used to calculate weighted averages of point data. The function differs from the conventional simple exponential in its pronounced sensitivity to the first few meters around the probe and in its dependence on air pressure, air humidity, soil moisture, and vegetation. The approach is extensively tested at six distinct monitoring sites: two sites with multiple calibration datasets and four sites with continuous time series datasets. In all cases, the revised averaging method improved the performance of the CRNS products. The revised approach further helped to reveal hidden hydrological processes which otherwise remained unexplained in the data or were lost in the process of overcalibration. The presented weighting approach increases the overall accuracy of CRNS products and will have an impact on all their applications in agriculture, hydrology, and modeling.
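The weighting step itself can be illustrated with a deliberately simplified radial weight; the revised sensitivity function of this work additionally depends on air pressure, humidity, soil moisture, and vegetation, so the exponential below is only a placeholder:

```python
import math

def weighted_soil_moisture(samples, r_scale=150.0):
    """Horizontally weighted average of point soil-moisture samples.
    samples: list of (distance_m, theta) pairs. The weight here is a
    simple exponential placeholder W(r) = exp(-2r / r_scale); the
    revised CRNS sensitivity function is more strongly peaked near the
    probe and depends on further atmospheric and surface variables."""
    wsum = tsum = 0.0
    for r, theta in samples:
        w = math.exp(-2.0 * r / r_scale)
        wsum += w
        tsum += w * theta
    return tsum / wsum

# Near-probe samples dominate the footprint average
print(weighted_soil_moisture([(5.0, 0.30), (50.0, 0.20), (200.0, 0.10)]))
```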
SU-C-207A-01: A Novel Maximum Likelihood Method for High-Resolution Proton Radiography/proton CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins-Fekete, C; Centre Hospitalier University de Quebec, Quebec, QC; Mass General Hospital
2016-06-15
Purpose: Multiple Coulomb scattering is the largest contributor to blurring in proton imaging. Here we tested a maximum likelihood least squares estimator (MLLSE) to improve the spatial resolution of proton radiography (pRad) and proton computed tomography (pCT). Methods: The object is discretized into voxels, and the average relative stopping power through voxel columns defined from the source to the detector pixels is optimized such that it maximizes the likelihood of the proton energy loss. The length spent by individual protons in each column is calculated through an optimized cubic spline estimate. pRad images were first produced using Geant4 simulations. An anthropomorphic head phantom and the Catphan line-pair module for 3-D spatial resolution were studied and the resulting images were analyzed. Both parallel and conical beams were investigated for simulated pRad acquisition. Then, experimental data of a pediatric head phantom (CIRS) were acquired using a recently completed experimental pCT scanner. Specific filters were applied to proton angle and energy loss data to remove proton histories that underwent nuclear interactions. The MTF10% (lp/cm) was used to evaluate and compare spatial resolution. Results: Numerical simulations showed improvement in the pRad spatial resolution for the parallel (2.75 to 6.71 lp/cm) and conical beam (3.08 to 5.83 lp/cm) reconstructed with the MLLSE compared to averaging detector pixel signals. For full tomographic reconstruction, the improved pRad were used as input into a simultaneous algebraic reconstruction algorithm. The Catphan pCT reconstruction based on the MLLSE-enhanced projections showed spatial resolution improvement for the parallel (2.83 to 5.86 lp/cm) and conical beam (3.03 to 5.15 lp/cm). The anthropomorphic head pCT displayed important contrast gains in high-gradient regions. Experimental results also demonstrated significant improvement in spatial resolution of the pediatric head radiography. 
Conclusion: The proposed MLLSE shows promising potential to increase the spatial resolution (up to 244%) in proton imaging.
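The least-squares core of such an estimator can be sketched as an ordinary (unweighted) linear solve for per-column relative stopping power from proton path lengths and measured water-equivalent path lengths; the full MLLSE additionally weights histories by their energy-loss likelihood, which is omitted here:

```python
def least_squares_rsp(lengths, wepl):
    """Least-squares estimate of per-column relative stopping power.
    lengths[i][j]: path length of proton history i through column j
    (e.g. from a cubic-spline path estimate); wepl[i]: measured
    water-equivalent path length of history i. Solves the normal
    equations by Gaussian elimination (toy problem sizes only)."""
    m = len(lengths[0])
    # Build A^T A and A^T b
    ata = [[sum(r[i] * r[j] for r in lengths) for j in range(m)] for i in range(m)]
    atb = [sum(r[i] * w for r, w in zip(lengths, wepl)) for i in range(m)]
    # Forward elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atb[col], atb[piv] = atb[piv], atb[col]
        for r in range(col + 1, m):
            f = ata[r][col] / ata[col][col]
            for c in range(col, m):
                ata[r][c] -= f * ata[col][c]
            atb[r] -= f * atb[col]
    # Back substitution
    x = [0.0] * m
    for i in range(m - 1, -1, -1):
        x[i] = (atb[i] - sum(ata[i][j] * x[j] for j in range(i + 1, m))) / ata[i][i]
    return x

# Two columns with true RSP 1.0 and 1.05, three proton histories
L = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
b = [10.0, 10.5, 10.25]
print(least_squares_rsp(L, b))
```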
Zhu, Lin; Gong, Huili; Chen, Yun; Li, Xiaojuan; Chang, Xiang; Cui, Yijiao
2016-03-01
Hydraulic conductivity is a major parameter affecting the output accuracy of groundwater flow and transport models. The most commonly used semi-empirical formula for estimating conductivity is the Kozeny-Carman equation. However, this method alone does not work well with heterogeneous strata. Two important parameters, grain size and porosity, often show spatial variations at different scales. This study proposes a method for estimating conductivity distributions by combining a stochastic hydrofacies model with geophysical methods. The Markov chain model with a transition probability matrix was adopted to reconstruct structures of hydrofacies for deriving spatial deposit information. The geophysical and hydro-chemical data were used to estimate the porosity distribution through Archie's law. Results show that the stochastic simulated hydrofacies model reflects the sedimentary features with an average model accuracy of 78% in comparison with borehole log data in the Chaobai alluvial fan. The estimated conductivity is reasonable and of the same order of magnitude as the outcomes of the pumping tests. The conductivity distribution is consistent with the sedimentary distributions. This study provides more reliable spatial distributions of the hydraulic parameters for further numerical modeling.
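The two relations named above can be written down directly; constants and exponents vary between published forms, so the values below (the 180 in Kozeny-Carman; a = 1, m = 2 in Archie's law) are illustrative defaults, not the study's calibrated values:

```python
def kozeny_carman_k(d_mm, phi):
    """Intrinsic permeability (m^2) from one common Kozeny-Carman form,
    k = d^2 * phi^3 / (180 * (1 - phi)^2), with grain size d in mm.
    Constants vary between published variants of the equation."""
    d = d_mm * 1e-3
    return d * d * phi ** 3 / (180.0 * (1.0 - phi) ** 2)

def archie_porosity(rho_bulk, rho_water, a=1.0, m=2.0):
    """Porosity from formation resistivity via Archie's law,
    rho_bulk = a * rho_water * phi**(-m) (clean, saturated sediment)."""
    return (a * rho_water / rho_bulk) ** (1.0 / m)

phi = archie_porosity(rho_bulk=100.0, rho_water=25.0)  # -> 0.5
print(phi, kozeny_carman_k(0.2, phi))
```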
NASA Astrophysics Data System (ADS)
Deng, Jie; Yao, Jun; Dewald, Julius P. A.
2005-12-01
In this paper, we attempt to determine a subject's intention of generating torque at the shoulder or elbow, two neighboring joints, using scalp electroencephalogram signals from 163 electrodes for a brain-computer interface (BCI) application. To achieve this goal, we have applied a time-frequency synthesized spatial patterns (TFSP) BCI algorithm with a presorting procedure. Using this method, we were able to achieve an average recognition rate of 89% in four healthy subjects, which is comparable to the highest rates reported in the literature but now for tasks with much closer spatial representations on the motor cortex. This result demonstrates, for the first time, that the TFSP BCI method can be applied to separate intentions between generating static shoulder versus elbow torque. Furthermore, in this study, the potential application of this BCI algorithm for brain-injured patients was tested in one chronic hemiparetic stroke subject. A recognition rate of 76% was obtained, suggesting that this BCI method can provide a potential control signal for neural prostheses or other movement coordination improving devices for patients following brain injury.
Imaging intratumor heterogeneity: role in therapy response, resistance, and clinical outcome.
O'Connor, James P B; Rose, Chris J; Waterton, John C; Carano, Richard A D; Parker, Geoff J M; Jackson, Alan
2015-01-15
Tumors exhibit genomic and phenotypic heterogeneity, which has prognostic significance and may influence response to therapy. Imaging can quantify the spatial variation in architecture and function of individual tumors through quantifying basic biophysical parameters such as CT density or MRI signal relaxation rate; through measurements of blood flow, hypoxia, metabolism, cell death, and other phenotypic features; and through mapping the spatial distribution of biochemical pathways and cell signaling networks using PET, MRI, and other emerging molecular imaging techniques. These methods can establish whether one tumor is more or less heterogeneous than another and can identify subregions with differing biology. In this article, we review the image analysis methods currently used to quantify spatial heterogeneity within tumors. We discuss how analysis of intratumor heterogeneity can provide benefit over more simple biomarkers such as tumor size and average function. We consider how imaging methods can be integrated with genomic and pathology data, instead of being developed in isolation. Finally, we identify the challenges that must be overcome before measurements of intratumoral heterogeneity can be used routinely to guide patient care. ©2014 American Association for Cancer Research.
A Stochastic Model of Space-Time Variability of Mesoscale Rainfall: Statistics of Spatial Averages
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Bell, Thomas L.
2003-01-01
A characteristic feature of rainfall statistics is that they depend on the space and time scales over which rain data are averaged. A previously developed spectral model of rain statistics, designed to capture this property, predicts power-law scaling behavior for the second-moment statistics of area-averaged rain rate on the averaging length scale L as L → 0. In the present work a more efficient method of estimating the model parameters is presented and used to fit the model to the statistics of area-averaged rain rate derived from gridded radar precipitation data from TOGA COARE. Statistical properties of the data and the model predictions are compared over a wide range of averaging scales. An extension of the spectral model scaling relations to describe the dependence of the average fraction of grid boxes within an area containing nonzero rain (the "rainy area fraction") on the grid scale L is also explored.
NASA Astrophysics Data System (ADS)
Beskardes, G. D.; Hole, J. A.; Wang, K.; Wu, Q.; Chapman, M. C.; Davenport, K. K.; Michaelides, M.; Brown, L. D.; Quiros, D. A.
2016-12-01
Back-projection imaging has recently become a practical method for local earthquake detection and location due to the deployment of densely sampled, continuously recorded, local seismograph arrays. Back-projection is scalable to earthquakes with a wide range of magnitudes from very tiny to very large. Local dense arrays provide the opportunity to capture very tiny events for a range of applications, such as tectonic microseismicity, source scaling studies, wastewater injection-induced seismicity, hydraulic fracturing, CO2 injection monitoring, volcano studies, and mining safety. While back-projection sometimes utilizes the full seismic waveform, the waveforms are often pre-processed to overcome imaging issues. We compare the performance of back-projection using four previously used data pre-processing methods: full waveform, envelope, short-term averaging / long-term averaging (STA/LTA), and kurtosis. The goal is to identify an optimized strategy for an entirely automated imaging process that is robust in the presence of real-data issues, has the lowest signal-to-noise thresholds for detection and for location, has the best spatial resolution of the energy imaged at the source, preserves magnitude information, and considers computational cost. Real data issues include aliased station spacing, low signal-to-noise ratio (to <1), large noise bursts and spatially varying waveform polarity. For evaluation, the four imaging methods were applied to the aftershock sequence of the 2011 Virginia earthquake as recorded by the AIDA array with 200-400 m station spacing. These data include earthquake magnitudes from -2 to 3 with highly variable signal to noise, spatially aliased noise, and large noise bursts: realistic issues in many environments. Each of the four back-projection methods has advantages and disadvantages, and a combined multi-pass method achieves the best of all criteria. Preliminary imaging results from the 2011 Virginia dataset will be presented.
A comparison of earthquake backprojection imaging methods for dense local arrays
NASA Astrophysics Data System (ADS)
Beskardes, G. D.; Hole, J. A.; Wang, K.; Michaelides, M.; Wu, Q.; Chapman, M. C.; Davenport, K. K.; Brown, L. D.; Quiros, D. A.
2018-03-01
Backprojection imaging has recently become a practical method for local earthquake detection and location due to the deployment of densely sampled, continuously recorded, local seismograph arrays. While backprojection sometimes utilizes the full seismic waveform, the waveforms are often pre-processed and simplified to overcome imaging challenges. Real data issues include aliased station spacing, inadequate array aperture, inaccurate velocity model, low signal-to-noise ratio, large noise bursts and varying waveform polarity. We compare the performance of backprojection with four previously used data pre-processing methods: raw waveform, envelope, short-term averaging/long-term averaging and kurtosis. Our primary goal is to detect and locate events smaller than noise by stacking prior to detection to improve the signal-to-noise ratio. The objective is to identify an optimized strategy for automated imaging that is robust in the presence of real-data issues, has the lowest signal-to-noise thresholds for detection and for location, has the best spatial resolution of the source images, preserves magnitude, and considers computational cost. Imaging method performance is assessed using a real aftershock data set recorded by the dense AIDA array following the 2011 Virginia earthquake. Our comparisons show that raw-waveform backprojection provides the best spatial resolution, preserves magnitude and boosts signal to detect events smaller than noise, but is most sensitive to velocity error, polarity error and noise bursts. On the other hand, the other methods avoid polarity error and reduce sensitivity to velocity error, but sacrifice spatial resolution and cannot effectively reduce noise by stacking. Of these, only kurtosis is insensitive to large noise bursts while being as efficient as the raw-waveform method to lower the detection threshold; however, it does not preserve the magnitude information. 
For automatic detection and location of events in a large data set, we therefore recommend backprojecting kurtosis waveforms, followed by a second pass on the detected events using noise-filtered raw waveforms to achieve the best of all criteria.
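The STA/LTA and kurtosis characteristic functions compared above are standard and can be sketched in a few lines (window lengths are illustrative):

```python
def sta_lta(x, ns, nl):
    """Short-term/long-term average characteristic function.
    Returns the STA/LTA ratio at each sample where both trailing
    windows (ns short, nl long samples) fit."""
    out = []
    for i in range(nl, len(x)):
        sta = sum(abs(v) for v in x[i - ns:i]) / ns
        lta = sum(abs(v) for v in x[i - nl:i]) / nl
        out.append(sta / lta if lta > 0 else 0.0)
    return out

def sliding_kurtosis(x, n):
    """Sample kurtosis in a sliding window of n samples; impulsive
    onsets produce sharp peaks insensitive to waveform polarity."""
    out = []
    for i in range(n, len(x) + 1):
        w = x[i - n:i]
        mu = sum(w) / n
        m2 = sum((v - mu) ** 2 for v in w) / n
        m4 = sum((v - mu) ** 4 for v in w) / n
        out.append(m4 / (m2 * m2) if m2 > 0 else 0.0)
    return out

# Impulsive arrival embedded in alternating noise
x = [0.1 if i % 2 == 0 else -0.1 for i in range(120)]
x[100] = 5.0
print(max(sta_lta(x, 5, 50)), max(sliding_kurtosis(x, 20)))
```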
Restoring method for missing data of spatial structural stress monitoring based on correlation
NASA Astrophysics Data System (ADS)
Zhang, Zeyu; Luo, Yaozhi
2017-07-01
Long-term monitoring of spatial structures is of great importance for the full understanding of their performance and safety. Missing parts of the monitoring data link will affect the data analysis and safety assessment of the structure. Based on the long-term monitoring data of the steel structure of the Hangzhou Olympic Center Stadium, the correlation between the stress changes of the measuring points is studied, and an interpolation method for the missing stress data is proposed. To fit the correlation, stress data from correlated measuring points are selected over the 3 months of the season in which data are missing. Daytime and nighttime data are fitted separately for interpolation. For simple linear regression, when a single point's correlation coefficient is 0.9 or more, the average interpolation error is about 5%. For multiple linear regression, the interpolation accuracy does not increase significantly once the number of correlated points exceeds 6. The stress baseline value of each construction step should be calculated before interpolating missing data from the construction stage, and the average error is then within 10%. The interpolation error for continuous missing data is slightly larger than that for discrete missing data. The rate of missing data for this method should not exceed 30%. Finally, one measuring point's missing monitoring data are restored to verify the validity of the method.
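The single-point interpolation step amounts to ordinary least squares fitted on jointly observed samples; a sketch with hypothetical data:

```python
def fill_missing(target, reference):
    """Fill gaps (None) in a stress series from a correlated reference
    point via simple linear regression, target ~ a + b * reference,
    fitted on samples observed at both points."""
    pairs = [(r, t) for r, t in zip(reference, target)
             if r is not None and t is not None]
    n = len(pairs)
    mx = sum(r for r, _ in pairs) / n
    my = sum(t for _, t in pairs) / n
    sxx = sum((r - mx) ** 2 for r, _ in pairs)
    sxy = sum((r - mx) * (t - my) for r, t in pairs)
    b = sxy / sxx
    a = my - b * mx
    return [t if t is not None else a + b * r
            for r, t in zip(reference, target)]

ref = [1.0, 2.0, 3.0, 4.0, 5.0]
tgt = [2.1, 4.1, None, 8.1, 10.1]   # roughly 2*ref + 0.1
print(fill_missing(tgt, ref))
```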
Application and evaluation of ISVR method in QuickBird image fusion
NASA Astrophysics Data System (ADS)
Cheng, Bo; Song, Xiaolu
2014-05-01
QuickBird satellite images are widely used in many fields, and applications have imposed high requirements on the integration of the spatial and spectral information of the imagery. A fusion method for high resolution remote sensing images based on ISVR is presented in this study. The core principle of ISVR is to take advantage of radiometric calibration to remove the effects of the differing gains and errors of the satellites' sensors. After transformation from DN to radiance, the multispectral image's energy is used to simulate the panchromatic band. A linear regression analysis is carried out in the simulation process to find a new synthetic panchromatic image that is highly linearly correlated with the original panchromatic image. In order to evaluate, test and compare the algorithm's results, this paper uses ISVR and two other fusion methods in a comparative study of spatial and spectral information, taking the average gradient and the correlation coefficient as indicators. Experiments showed that this method can significantly improve the quality of the fused image, especially in preserving spectral information, maximizing the spectral information of the original multispectral images while maintaining abundant spatial information.
Spatial and Statistical Analysis of Leptospirosis in Guilan Province, Iran
NASA Astrophysics Data System (ADS)
Nia, A. Mohammadi; Alimohammadi, A.; Habibi, R.; Shirzadi, M. R.
2015-12-01
Leptospirosis, the most underdiagnosed water-borne bacterial zoonosis in the world, especially impacts tropical and humid regions. According to the World Health Organization (WHO), the number of human cases is not known precisely. Available reports show that worldwide incidence varies from 0.1-1 per 100 000 per year in temperate climates to 10-100 per 100 000 in the humid tropics. Pathogenic bacteria spread by rat urine are the main cause of water and soil infection. Rice field farmers who are in contact with infected water or soil bear the greatest burden of leptospirosis. In recent years, this zoonotic disease has occurred endemically in the north of Iran. Guilan, the second rice-producing province (average = 750 000 000 kg, 40% of the country's production) after Mazandaran, has one of the largest rural populations (male = 487 679, female = 496 022) and numbers of rice workers (47 621 insured workers) among Iran's provinces. The main objectives of this study were to analyse the yearly spatial distribution and possible spatial clusters of leptospirosis, to better understand its epidemiological aspects in the province. The survey was performed at the rural district level throughout the study area during the period 2009-2013. Global clustering methods, including the average nearest neighbour distance, Moran's I, and General G indices, were utilized to investigate the annual spatial distribution of the disease. Finally, significant spatial clusters were detected with the objective of informing priority areas for public health planning and resource allocation.
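Global Moran's I, one of the clustering indices used, can be computed directly from district values and a spatial weights matrix; a minimal sketch with a hypothetical rook-contiguity example:

```python
def morans_i(values, weights):
    """Global Moran's I. weights[i][j] is the spatial weight between
    units i and j (zero diagonal). Values near +1 indicate clustering
    of similar values; values near the expectation -1/(n-1) indicate
    spatial randomness."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    s0 = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / s0) * (num / den)

# Four districts on a line, rook contiguity; high counts cluster on one
# side, giving positive spatial autocorrelation
vals = [10.0, 9.0, 2.0, 1.0]
w = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
print(morans_i(vals, w))
```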
A simple and efficient algorithm operating with linear time for MCEEG data compression.
Titus, Geevarghese; Sudhakar, M S
2017-09-01
Popularisation of electroencephalograph (EEG) signals in diversified fields has increased the need for devices capable of operating at lower power and storage requirements. This has led to a great deal of research into data compression schemes that can address (a) low latency in the coding of the signal, (b) reduced hardware and software dependencies, (c) quantification of system anomalies, and (d) effective reconstruction of the compressed signal. This paper proposes a computationally simple and novel coding scheme named the spatial pseudo codec (SPC) to achieve lossy to near-lossless compression of multichannel EEG (MCEEG). In the proposed system, MCEEG signals are initially normalized, followed by two parallel processes: one operating on the integer part and the other on the fractional part of the normalized data. The redundancies in the integer part are exploited using a spatial domain encoder, and the fractional part is coded as pseudo integers. The proposed method has been tested on a wide range of databases with variable sampling rates and resolutions. Results indicate that the algorithm has good recovery performance, with an average percentage root-mean-square deviation (PRD) of 2.72 for an average compression ratio (CR) of 3.16. Furthermore, the algorithm has a complexity of only O(n), with average encoding and decoding times per sample of 0.3 ms and 0.04 ms, respectively. The performance of the algorithm is comparable with recent methods such as the fast discrete cosine transform (fDCT) and tensor decomposition methods. The results validate the feasibility of the proposed compression scheme for practical MCEEG recording, archiving and brain-computer interfacing systems.
NASA Astrophysics Data System (ADS)
Buchhave, Preben; Velte, Clara M.
2017-08-01
We present a method for converting a time record of turbulent velocity measured at a point in a flow to a spatial velocity record consisting of consecutive convection elements. The spatial record allows computation of dynamic statistical moments such as turbulent kinetic wavenumber spectra and spatial structure functions in a way that completely bypasses the need for Taylor's hypothesis. The spatial statistics agree with the classical counterparts, such as the total kinetic energy spectrum, at least for spatial extents up to the Taylor microscale. The requirements for applying the method are access to the instantaneous velocity magnitude, in addition to the desired flow quantity, and a high temporal resolution in comparison to the relevant time scales of the flow. We map, without distortion and bias, notoriously difficult developing turbulent high intensity flows using three main aspects that distinguish these measurements from previous work in the field: (1) The measurements are conducted using laser Doppler anemometry and are therefore not contaminated by directional ambiguity (in contrast to, e.g., frequently employed hot-wire anemometers); (2) the measurement data are extracted using a correctly and transparently functioning processor and are analysed using methods derived from first principles to provide unbiased estimates of the velocity statistics; (3) the exact mapping proposed herein has been applied to the high turbulence intensity flows investigated to avoid the significant distortions caused by Taylor's hypothesis. The method is first confirmed to produce the correct statistics using computer simulations and later applied to measurements in some of the most difficult regions of a round turbulent jet—the non-equilibrium developing region and the outermost parts of the developed jet. 
The proposed mapping is successfully validated using corresponding directly measured spatial statistics in the fully developed jet, even in the difficult outer regions of the jet where the average convection velocity is negligible and turbulence intensities increase dramatically. The measurements in the developing region reveal interesting features of an incomplete Richardson-Kolmogorov cascade under development.
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging to assess groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (the Akaike Information Criterion, corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this approach often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%) that cannot be justified by the available data and knowledge. This study found that the problem is caused by using the covariance matrix, CE, of the measurement errors to estimate the negative log-likelihood function common to all the model selection criteria. The problem can be resolved by instead using the covariance matrix, Cek, of the total errors (model errors plus measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used to evaluate the model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. The total errors of the alternative models were found to be temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem of the best model receiving 100% of the model averaging weight, and the resulting weights were supported by the calibration results and by physical understanding of the alternative models.
Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
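The "winner-takes-all" weighting behaviour described above follows directly from the standard information-criterion weight formula w_k ∝ exp(-ΔIC_k/2): even a modest criterion gap produces a near-unit weight for the best model. A minimal sketch with hypothetical AIC values (not the study's models):

```python
import math

def ic_weights(ic_values):
    """Model averaging weights from information criterion values:
    w_k is proportional to exp(-delta_k / 2), where delta_k is the
    difference from the smallest criterion value; weights sum to 1."""
    best = min(ic_values)
    raw = [math.exp(-(v - best) / 2.0) for v in ic_values]
    total = sum(raw)
    return [r / total for r in raw]

# Three alternative conceptual models; a 20-point AIC gap is enough to
# give the best model essentially all of the averaging weight.
weights = ic_weights([100.0, 120.0, 125.0])
```

This is exactly why the authors argue for a total-error covariance: inflating the likelihood's error model shrinks the criterion gaps and spreads the weights more realistically.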
NASA Astrophysics Data System (ADS)
Bellesia, Giovanni; Bales, Benjamin B.
2016-10-01
We investigate, via Brownian dynamics simulations, the reaction dynamics of a generic, nonlinear chemical network under spatial confinement and crowding conditions. In detail, the Willamowski-Rossler chemical reaction system has been "extended" and considered as a prototype reaction-diffusion system. Our results are potentially relevant to a number of open problems in biophysics and biochemistry, such as the synthesis of primitive cellular units (protocells), the definition of their role in the chemical origin of life, and the characterization of vesicle-mediated drug delivery processes. More generally, the computational approach presented in this work makes the case for the use of spatial stochastic simulation methods in the study of biochemical networks in vivo, where the "well-mixed" approximation is invalid and where both thermal fluctuations and the intrinsic fluctuations linked to molecular species present in low copy numbers cannot be averaged out.
Lando, Asiyanthi Tabran; Nakayama, Hirofumi; Shimaoka, Takayuki
2017-01-01
Methane from landfills contributes to global warming and can pose an explosion hazard. To minimize these effects, emissions must be monitored. This study proposed the application of a portable gas detector (PGD) in point and scanning measurements to estimate the spatial distribution of methane emissions in landfills. The aims of this study were to identify the advantages and disadvantages of the point and scanning methods for measuring methane concentrations, determine the spatial distribution of methane emissions, establish the correlation between ambient methane concentration and methane flux, and estimate methane flux and emissions in landfills. The study was carried out in the Tamangapa landfill, Makassar city, Indonesia. Measurement areas were divided into a basic and an expanded area. In the point method, the PGD was held one meter above the landfill surface, whereas the scanning method used a PGD with a data logger mounted on a wire drawn between two poles. The point method was time-efficient, requiring only one person and eight minutes to measure a 400 m2 area, whereas the scanning method could capture many hot-spot locations but needed 20 min. The results from the basic area showed that ambient methane concentration and flux had a significant (p < 0.01) positive correlation, with R2 = 0.7109 and y = 0.1544x. This correlation equation was used to describe the spatial distribution of methane emissions in the expanded area using the kriging method. The average estimated flux from the scanning method, 71.2 g m-2 d-1, was higher than the 38.3 g m-2 d-1 from the point method. Furthermore, the scanning method could capture both lower and higher values, which is useful for evaluating and estimating the possible effects of uncontrolled emissions in landfills.
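As a sketch only: once a site-specific regression such as the reported y = 0.1544x (R2 = 0.7109) is in hand, mapping ambient PGD readings to flux estimates is a one-line conversion. The concentration values below are hypothetical, and the units are assumed to follow the paper (flux in g m-2 d-1, concentration as read by the PGD):

```python
def flux_from_concentration(ambient, slope=0.1544):
    """Estimate methane flux (g m^-2 d^-1) from an ambient PGD reading
    using the site-calibrated linear relation y = slope * x.
    The slope 0.1544 is the value reported for the basic area."""
    return slope * ambient

# Hypothetical scanning-method readings along one wire transect.
readings = [50.0, 250.0, 460.0]
fluxes = [flux_from_concentration(c) for c in readings]
avg_flux = sum(fluxes) / len(fluxes)
```

A real workflow would then interpolate these point fluxes over the expanded area with kriging, as the study does.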
Sampling design optimization for spatial functions
Olea, R.A.
1984-01-01
A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function.
NASA Astrophysics Data System (ADS)
Rigden, Angela J.; Salvucci, Guido D.
2015-04-01
A novel method of estimating evapotranspiration (ET), referred to as the ETRHEQ method, is further developed, validated, and applied across the U.S. from 1961 to 2010. The ETRHEQ method estimates the surface conductance to water vapor transport, the key rate-limiting parameter of typical ET models, by choosing the surface conductance that minimizes the vertical variance of the calculated relative humidity profile averaged over the day. The ETRHEQ method, which was previously tested at five AmeriFlux sites, is modified for use at common weather stations and further validated at 20 AmeriFlux sites that span a wide range of climates and limiting factors. Averaged across all sites, the daily latent heat flux RMSE is ~26 W·m-2 (or 15%). The method is applied across the U.S. at 305 weather stations and spatially interpolated using ANUSPLIN software. Gridded annual mean ETRHEQ ET estimates are compared with four data sets, including water balance-derived ET, machine-learning ET estimates based on FLUXNET data, North American Land Data Assimilation System phase 2 ET, and a benchmark product that integrates 14 global ET data sets, with RMSEs ranging from 8.7 to 12.5 cm·yr-1. The ETRHEQ method relies only on data measured at weather stations, an estimate of vegetation height derived from land cover maps, and an estimate of soil thermal inertia. These data requirements give it greater spatial coverage than direct measurements, greater historical coverage than satellite methods, significantly less parameter specification than most land surface models, and no requirement for calibration.
Gao, Kai; Chung, Eric T.; Gibson, Richard L.; ...
2015-06-05
The development of reliable methods for upscaling fine-scale models of elastic media has long been an important topic for rock physics and applied seismology. Several effective medium theories have been developed to provide elastic parameters for materials such as finely layered media or randomly oriented or aligned fractures. In such cases, the analytic solutions for upscaled properties can be used for accurate prediction of wave propagation. However, such theories cannot be applied directly to homogenize elastic media with more complex, arbitrary spatial heterogeneity. We therefore propose a numerical homogenization algorithm based on multiscale finite element methods for simulating elastic wave propagation in heterogeneous, anisotropic elastic media. Specifically, our method uses multiscale basis functions obtained from a local linear elasticity problem with appropriately defined boundary conditions. Homogenized, effective medium parameters are then computed using these basis functions, and the approach applies a numerical discretization similar to the rotated staggered-grid finite difference scheme. Comparisons of the results from our method and from conventional, analytical approaches for finely layered media showed that the homogenization reliably estimated elastic parameters for this simple geometry. Additional tests examined anisotropic models with arbitrary spatial heterogeneity, where the average size of the heterogeneities ranged from several centimeters to several meters and the ratio between the dominant wavelength and the average size of the heterogeneities ranged from 10 to 100. Comparisons with finite-difference simulations showed that the numerical homogenization was equally accurate for these complex cases.
Alluvial groundwater recharge estimation in semi-arid environment using remotely sensed data
NASA Astrophysics Data System (ADS)
Coelho, Victor Hugo R.; Montenegro, Suzana; Almeida, Cristiano N.; Silva, Bernardo B.; Oliveira, Leidjane M.; Gusmão, Ana Cláudia V.; Freitas, Emerson S.; Montenegro, Abelardo A. A.
2017-05-01
Data limitations on groundwater (GW) recharge over large areas are still a challenge for efficient water resource management, especially in semi-arid regions. This study therefore integrates hydrological cycle variables from satellite imagery to estimate the spatial distribution of GW recharge in the Ipanema river basin (IRB), located in the State of Pernambuco in Northeast Brazil. Remote sensing data, including monthly maps (2011-2012) of rainfall, runoff and evapotranspiration, are used as input to the water balance method within a Geographic Information System (GIS). Rainfall data are derived from the TRMM Multi-satellite Precipitation Analysis (TMPA) Version 7 (3B43V7) product and reproduce the monthly average temporal distribution of 15 rain gauges distributed over the study area (r = 0.93 and MAE = 12.7 mm), with annual estimates of 894.3 mm (2011) and 300.7 mm (2012). The runoff from the Natural Resources Conservation Service (NRCS) method, based on regional soil information and Thematic Mapper (TM) sensor imagery, represents 29% of the TMPA rainfall observed across the two years of study. Actual evapotranspiration data, provided by the SEBAL application of MODIS images, present annual averages of 1213 mm (2011) and 1067 mm (2012). The water balance results reveal a large inter-annual difference in IRB GW recharge, characterized by different rainfall regimes, with averages of 30.4 mm year-1 (2011) and 4.7 mm year-1 (2012). These recharges were mainly observed between January and July in regions with alluvial sediments and highly permeable soils. The GW recharge approach with remote sensing is compared to the WTF (Water Table Fluctuation) method in an alluvial area of the IRB. The estimates from these two methods exhibit reliable annual agreement, with average values of 154.6 mm (WTF) and 124.6 mm (water balance) in 2011.
These values correspond to 14.89% and 13.53% of the rainfall recorded at the rain gauges and by the TMPA, respectively. Only the WTF method indicates a very low recharge of 15.9 mm for the second year. The values in this paper provide reliable insight into the use of remotely sensed data for evaluating alluvial GW recharge rates in regions where the potential runoff cannot be disregarded from the water balance equation and must be calculated spatially.
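Per grid cell and month, the water balance method used here reduces to recharge = rainfall − evapotranspiration − runoff, with negative balances treated as no recharge. A minimal sketch with illustrative monthly values in mm (not the IRB data):

```python
def monthly_recharge(p, et, q):
    """Water-balance recharge estimate R = P - ET - Q for one month (mm).
    Negative balances are clipped to zero: no recharge that month."""
    return max(p - et - q, 0.0)

# Illustrative (rainfall, evapotranspiration, runoff) triples for three
# months of a semi-arid year; only the wet month contributes recharge.
months = [(120.0, 90.0, 25.0),   # wet month: 5 mm recharge
          (60.0, 70.0, 5.0),     # deficit month: clipped to 0
          (15.0, 40.0, 0.0)]     # dry month: clipped to 0
annual_recharge = sum(monthly_recharge(p, et, q) for p, et, q in months)
```

In the GIS workflow each term is a monthly raster (TMPA rainfall, SEBAL ET, NRCS runoff), so this arithmetic is applied cell by cell.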
Income-related health inequalities across regions in Korea
2011-01-01
Introduction In addition to economic inequalities, there has been growing concern over socioeconomic inequalities in health across income levels and/or regions. This study measures income-related health inequalities within and between regions and assesses the possibility of convergence of socioeconomic inequalities in health as regional incomes converge. Methods We considered a total of 45,233 subjects (≥ 19 years) drawn from the four waves of the Korean National Health and Nutrition Examination Survey (KNHANES). We treated true health as a latent variable following a lognormal distribution. We obtained ill-health scores by matching self-rated health (SRH) to its distribution and used the Gini coefficient (GC) and an income-related ill-health concentration index (CI) to examine inequalities in income and health, respectively. Results The GC estimates were 0.3763 and 0.0657 for overall and spatial inequalities, respectively. The overall CI was -0.1309, and the spatial CI was -0.0473. The spatial GC and CI estimates were smaller than their counterparts, indicating substantial inequalities in income (from 0.3199 in Daejeon to 0.4233 in Chungnam) and income-related health inequalities (from -0.1596 in Jeju to -0.0844 in Ulsan) within regions. The results indicate a positive relationship between the GC and average ill-health and a negative relationship between the CI and average ill-health. Regions with a low level of health tended to show an unequal distribution of income and health. In addition, there was a negative relationship between the GC and the CI; that is, the larger the income inequalities, the larger the health inequalities. The GC was negatively related to average regional income, indicating that an increase in a region's average income reduces income inequalities in the region. The CI, on the other hand, showed a positive relationship, indicating that an increase in a region's average income reduces health inequalities in the region.
Conclusion The results suggest that reducing health inequalities across regions requires a more equitable distribution of income and a higher level of average income, and that the higher a region's average income, the smaller its health inequalities. PMID:21967804
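Both indices used in the study are standard and can be computed directly. A self-contained sketch with toy data (not KNHANES); the fractional-rank form of the concentration index is the common convention and is assumed here:

```python
def gini(incomes):
    """Gini coefficient via the mean absolute difference, scaled by twice
    the mean income; 0 = perfect equality, values near 1 = extreme inequality."""
    x = sorted(incomes)
    n = len(x)
    mad = sum(abs(a - b) for a in x for b in x) / (n * n)
    return mad / (2.0 * sum(x) / n)

def concentration_index(ill_health, incomes):
    """Income-related ill-health concentration index: twice the covariance
    between ill-health and fractional income rank, divided by mean ill-health.
    Negative values mean ill-health is concentrated among the poor."""
    n = len(incomes)
    order = sorted(range(n), key=lambda i: incomes[i])
    rank = [0.0] * n
    for r, i in enumerate(order):
        rank[i] = (r + 0.5) / n          # fractional rank by income
    mu = sum(ill_health) / n
    rbar = sum(rank) / n
    cov = sum((h - mu) * (r - rbar) for h, r in zip(ill_health, rank)) / n
    return 2.0 * cov / mu

# Toy region: ill-health falls as income rises, giving a negative CI.
gc = gini([10.0, 20.0, 30.0, 40.0])
ci = concentration_index([4.0, 3.0, 2.0, 1.0], [10.0, 20.0, 30.0, 40.0])
```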
The fallacy of using NII in analyzing aircraft operations. [Noise Impact Index
NASA Technical Reports Server (NTRS)
Melton, R. G.; Jacobson, I. D.
1984-01-01
Three measures of noise annoyance (Noise Impact Index, Level-Weighted Population, and Annoyed Population Number) are compared, regarding their utility in assessing noise reduction schemes for aircraft operations. While NII is intended to measure the average annoyance per person in a community, it is found that the method of averaging can lead to erroneous conclusions, particularly if the population does not have uniform spatial distribution. Level-Weighted Population and Annoyed Population Number are shown to be better indicators of noise annoyance when rating different strategies for noise reduction in a given community.
NASA Astrophysics Data System (ADS)
Schreiner-McGraw, A.; Vivoni, E. R.; Franz, T. E.; Anderson, C.
2013-12-01
Human impacts on desert ecosystems have wide ranging effects on the hydrologic cycle which, in turn, influence interactions between the critical zone and the atmosphere. In this contribution, we utilize cosmic-ray soil moisture sensors at three human-modified semiarid ecosystems in the North American monsoon region: a buffelgrass pasture in Sonora, Mexico, a woody-plant encroached savanna ecosystem in Arizona, and a woody-plant encroached shrubland ecosystem in New Mexico. In each case, landscape heterogeneity in the form of bare soil and vegetation patches of different types leads to a complex mosaic of soil moisture and land-atmosphere interactions. Historically, the measurement of spatially-averaged soil moisture at the ecosystem scale (on the order of several hundred square meters) has been problematic. Thus, new advances in measuring cosmogenically-produced neutrons present an opportunity for observational and modeling studies in these ecosystems. We discuss the calibration of the cosmic-ray soil moisture sensors at each site, present comparisons to a distributed network of in-situ measurements, and verify the spatially-aggregated observations using the watershed water balance method at two sites. We focus our efforts on the summer season 2013 and its rainfall period during the North American monsoon. To compare neutron counts to the ground sensors, we utilized an aspect-elevation weighting algorithm to compute an appropriate spatial average for the in-situ measurements. Similarly, the water balance approach utilizes precipitation, runoff, and evapotranspiration measurements in the footprint of the cosmic-ray sensors to estimate a spatially-averaged soil moisture field. Based on these complementary approaches, we empirically determined a relationship between cosmogenically-produced neutrons and the spatially-aggregated soil moisture. 
This approach may improve upon existing methods used to calculate soil moisture from neutron counts that typically suffer from increasing errors for higher soil moisture content. We also examined the effects of sub-footprint variability in soil moisture on the neutron readings by comparing two of the sites with large variations in topographically-mediated surface flows. Our work also synthesizes seasonal soil moisture dynamics across the desert ecosystems and attempts to tease out differences due to land cover alterations, including the seasonal greening in each study site occurring during the North American monsoon.
Assessment of surface runoff depth changes in the Sărăţel River basin, Romania using GIS techniques
NASA Astrophysics Data System (ADS)
Romulus, Costache; Iulia, Fontanine; Ema, Corodescu
2014-09-01
S\\varǎţel River basin, which is located in Curvature Subcarpahian area, has been facing an obvious increase in frequency of hydrological risk phenomena, associated with torrential events, during the last years. This trend is highly related to the increase in frequency of the extreme climatic phenomena and to the land use changes. The present study is aimed to highlight the spatial and quantitative changes occurred in surface runoff depth in S\\varǎţel catchment, between 1990-2006. This purpose was reached by estimating the surface runoff depth assignable to the average annual rainfall, by means of SCS-CN method, which was integrated into the GIS environment through the ArcCN-Runoff extension, for ArcGIS 10.1. In order to compute the surface runoff depth, by CN method, the land cover and the hydrological soil classes were introduced as vector (polygon data), while the curve number and the average annual rainfall were introduced as tables. After spatially modeling the surface runoff depth for the two years, the 1990 raster dataset was subtracted from the 2006 raster dataset, in order to highlight the changes in surface runoff depth.
NASA Astrophysics Data System (ADS)
Schrön, M.; Köhli, M.; Rosolem, R.; Baroni, G.; Bogena, H. R.; Brenner, J.; Zink, M.; Rebmann, C.; Oswald, S. E.; Dietrich, P.; Samaniego, L. E.; Zacharias, S.
2017-12-01
Cosmic-Ray Neutron Sensing (CRNS) has become a promising and unique method for monitoring water content at an effective scale of tens of hectares in area and tens of centimeters in depth. The large footprint is particularly beneficial for hydrological models that operate at these scales. However, reliable estimates of average soil moisture require detailed knowledge of the sensitivity of the signal to spatial inhomogeneity within the footprint. From this perspective, the large integrating volume challenges data interpretation, validation, and calibration of the sensor. Can we still generate reliable data for hydrological applications? One of the top challenges in recent years has been to find out where the signal comes from and how sensitive it is to spatial variability in moisture. Neutron physics simulations have shown that the neutron signal represents a non-linearly weighted average of the soil water in the footprint. With the help of so-called spatial sensitivity functions, it is now possible to quantify the contribution of certain regions to the neutron signal. We present examples of how this knowledge can help (1) to understand the contribution of irrigated and sealed areas in the footprint, (2) to improve calibration and validation of the method, and (3) to reveal excess water storage, e.g. from ponding or rain interception. The spatial sensitivity concept can also explain the influence of dry roads on the neutron signal. Mobile surveys with the CRNS rover have been a common practice for measuring soil moisture patterns at the kilometer scale. However, dedicated experiments across agricultural fields in Germany and England have revealed that field soil moisture is significantly underestimated when the sensor is moved on roads. We show that knowledge of the spatial sensitivity helps to correct survey data for these effects, depending on road material, width, and distance from the road.
The recent methodological advances allow for improved signal interpretability and more accurate derivation of hydrologically relevant features from CRNS data. In this way, the presented methods are an essential contribution to generating reliable CRNS products and an example of how combined efforts from the CRNS community are turning the instrument into a highly capable tool for hydrological applications.
Stewart, Barclay T.; Gyedu, Adam; Boakye, Godfred; Lewis, Daniel; Hoogerboord, Marius; Mock, Charles
2017-01-01
Background Surgical disease burden falls disproportionately on individuals in low- and middle-income countries. These populations are also the least likely to have access to surgical care. Understanding the barriers to access in these populations is therefore necessary to meet the global surgical need. Methods Using geospatial methods, this study explores the district-level variation of two access barriers in Ghana: poverty and spatial access to care. National survey data were used to estimate the average total household expenditure (THE) in each district. Estimates of the spatial access to essential surgical care were generated from a cost-distance model based on a recent surgical capacity assessment. Correlations were analyzed using regression and displayed cartographically. Results Both THE and spatial access to surgical care were found to have statistically significant regional variation in Ghana (p < 0.001). An inverse relationship was identified between THE and spatial access to essential surgical care (β −5.15 USD, p < 0.001). Poverty and poor spatial access to surgical care were found to co-localize in the northwest of the country. Conclusions Multiple barriers to accessing surgical care can coexist within populations. A careful understanding of all access barriers is necessary to identify and target strategies to address unmet surgical need within a given population. PMID:27766400
A new multi-spectral feature level image fusion method for human interpretation
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-03-01
Various methods to perform multi-spectral image fusion have been suggested, mostly at the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in a three-task experiment using MSSF against two established methods, averaging and principal components analysis (PCA), and against its two source bands, visible and infrared. The three tasks that we studied were: (1) simple target detection, (2) spatial orientation, and (3) camouflaged target detection. MSSF proved superior to the other fusion methods in all three tests; MSSF also outperformed the source images in the spatial orientation and camouflaged target detection tasks. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.
Yao, Xiong; Yu, Kun Yong; Liu, Jian; Yang, Su Ping; He, Ping; Deng, Yang Bo; Yu, Xin Yan; Chen, Zhang Hao
2016-03-01
Research on eco-environmental vulnerability assessment contributes to ecological conservation and restoration. With Changting County as the study area, this paper selected 7 indicators, including slope, soil type, multi-year average precipitation, elevation deviation degree, normalized difference vegetation index, population density and land use type, to build an ecological vulnerability assessment system using multicollinearity diagnostics. The quantitative assessment of ecological vulnerability in 1999, 2006 and 2014 was calculated using the entropy weight method and the comprehensive index method. The changes in the temporal-spatial distribution of ecological vulnerability were also analyzed. The results showed that the ecological vulnerability level index (EVLI) decreased overall but increased locally from 1999 to 2014. The average EVLI values in 1999, 2006 and 2014 were 0.4533±0.1216, 0.4160±0.1111 and 0.3916±0.1139, respectively, indicating that the ecological vulnerability in Changting County was at the moderate grade. The EVLI decreased from 2.92 in 1999 to 2.38 in 2006 and 2.13 in 2014. The spatial distribution of the ecological vulnerability was high inside but low outside. The high vulnerability areas were distributed mainly in Hetian Town and Tingzhou Town, where the slope was less than 15° and the altitude was lower than 500 m. During the study period, Sanzhou Town had the largest decrease in EVLI, while Tingzhou Town had the smallest.
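The entropy weight method named above assigns larger weights to indicators that discriminate more strongly between the assessed units. A minimal sketch with toy scores (not the Changting indicator data), assuming the usual normalization of each indicator column to proportions:

```python
import math

def entropy_weights(matrix):
    """Entropy weights for an m-units x n-indicators matrix of non-negative
    scores: indicators whose values vary more across units carry more
    information (lower entropy) and therefore receive larger weights."""
    m = len(matrix)
    n = len(matrix[0])
    entropies = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        probs = [v / total for v in col]
        # Shannon entropy normalised by log(m) so it lies in [0, 1].
        e = -sum(p * math.log(p) for p in probs if p > 0) / math.log(m)
        entropies.append(e)
    d = [1.0 - e for e in entropies]      # degree of diversification
    total_d = sum(d)
    return [di / total_d for di in d]

# Three sub-regions scored on two indicators; the second indicator varies
# more across sub-regions, so it receives the larger weight.
w = entropy_weights([[0.30, 0.10],
                     [0.35, 0.45],
                     [0.35, 0.45]])
```

The comprehensive index is then the weighted sum of each unit's normalized indicator values.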
NASA Astrophysics Data System (ADS)
Schreiner-McGraw, A. P.; Vivoni, E. R.; Mascaro, G.; Franz, T. E.
2015-06-01
Soil moisture dynamics reflect the complex interactions of meteorological conditions with soil, vegetation and terrain properties. In this study, intermediate scale soil moisture estimates from the cosmic-ray sensing (CRS) method are evaluated for two semiarid ecosystems in the southwestern United States: a mesquite savanna at the Santa Rita Experimental Range (SRER) and a mixed shrubland at the Jornada Experimental Range (JER). Evaluations of the CRS method are performed for small watersheds instrumented with a distributed sensor network consisting of soil moisture sensor profiles, an eddy covariance tower and runoff flumes used to close the water balance. We found an excellent agreement between the CRS method and the distributed sensor network (RMSE of 0.009 and 0.013 m3 m-3 at SRER and JER) at the hourly time scale over the 19-month study period, primarily due to the inclusion of 5 cm observations of shallow soil moisture. Good agreement was obtained in soil moisture changes estimated from the CRS and watershed water balance methods (RMSE = 0.001 and 0.038 m3 m-3 at SRER and JER), with deviations due to bypassing of the CRS measurement depth during large rainfall events. This limitation, however, was used to show that drier-than-average conditions at SRER promoted plant water uptake from deeper layers, while the wetter-than-average period at JER resulted in leakage towards deeper soils. Using the distributed sensor network, we quantified the spatial variability of soil moisture in the CRS footprint and the relation between evapotranspiration and soil moisture, in both cases finding similar predictive relations at both sites that are applicable to other semiarid ecosystems in the southwestern US. Furthermore, soil moisture spatial variability was related to evapotranspiration in a manner consistent with analytical relations derived using the CRS method, opening up new possibilities for understanding land-atmosphere interactions.
Classification of spatially unresolved objects
NASA Technical Reports Server (NTRS)
Nalepka, R. F.; Horwitz, H. M.; Hyde, P. D.; Morgenstern, J. P.
1972-01-01
A proportion estimation technique for the classification of multispectral scanner images is reported that uses data point averaging to compute estimated proportions for a single average data point, thereby classifying spatially unresolved areas. Example extraction calculations of spectral signatures for bare soil, weeds, alfalfa, and barley proved quite accurate.
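The proportion-estimation idea can be illustrated with a minimal sketch: assuming a linear two-endmember mixture in a single spectral band (a deliberate simplification of the multispectral case described above; the signature values below are hypothetical, not from the report), the class proportion in an averaged data point follows directly:

```python
def mixture_proportion(avg, sig_a, sig_b):
    """Fraction of endmember A in an averaged data point, assuming the
    point is a linear mixture of two pure-class signatures in one band."""
    return (avg - sig_b) / (sig_a - sig_b)

# Hypothetical reflectances: bare soil 0.5, alfalfa 0.1; averaged pixel 0.3
p_soil = mixture_proportion(0.3, 0.5, 0.1)  # → 0.5 (pixel is half bare soil)
```

The full technique solves the analogous system in many bands with the proportions constrained to sum to one; this one-band version only shows the underlying algebra.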
Comparison of MODIS and SWAT evapotranspiration over a complex terrain at different spatial scales
NASA Astrophysics Data System (ADS)
Abiodun, Olanrewaju O.; Guan, Huade; Post, Vincent E. A.; Batelaan, Okke
2018-05-01
In most hydrological systems, evapotranspiration (ET) and precipitation are the largest components of the water balance, and they are difficult to estimate, particularly over complex terrain. In recent decades, the advent of remote-sensing-based ET algorithms and distributed hydrological models has provided improved spatially upscaled ET estimates. However, information on the performance of these methods at various spatial scales is limited. This study compares ET from the MODIS remotely sensed ET dataset (MOD16) with ET estimates from a SWAT hydrological model at graduated spatial scales for the complex terrain of the Sixth Creek Catchment of the Western Mount Lofty Ranges, South Australia. ET from both models was further compared with the coarser-resolution AWRA-L model at catchment scale. The SWAT model analyses are performed on daily timescales with a 6-year calibration period (2000-2005) and a 7-year validation period (2007-2013). Differences in ET estimation between the SWAT and MOD16 methods of up to 31, 19, 15, 11 and 9% were observed at spatial resolutions of 1, 4, 9, 16 and 25 km2, respectively. Based on the results of the study, a spatial scale of confidence of 4 km2 is suggested for catchment-scale evapotranspiration in complex terrain. Land cover differences, HRU parameterisation in AWRA-L and catchment-scale averaging of input climate data in the SWAT semi-distributed model were identified as the principal sources of weaker correlations at higher spatial resolution.
Analysis of Extreme Snow Water Equivalent Data in Central New Hampshire
NASA Astrophysics Data System (ADS)
Vuyovich, C.; Skahill, B. E.; Kanney, J. F.; Carr, M.
2017-12-01
Heavy snowfall and snowmelt-related events have been linked to widespread flooding and damages in many regions of the U.S. Design of critical infrastructure in these regions requires spatial estimates of extreme snow water equivalent (SWE). In this study, we develop station specific and spatially explicit estimates of extreme SWE using data from fifteen snow sampling stations maintained by the New Hampshire Department of Environmental Services. The stations are located in the Mascoma, Pemigewasset, Winnipesaukee, Ossipee, Salmon Falls, Lamprey, Sugar, and Isinglass basins in New Hampshire. The average record length for the fifteen stations is approximately fifty-nine years. The spatial analysis of extreme SWE involves application of two Bayesian Hierarchical Modeling methods, one that assumes conditional independence, and another which uses the Smith max-stable process model to account for spatial dependence. We also apply additional max-stable process models, albeit not in a Bayesian framework, that better model the observed dependence among the extreme SWE data. The spatial process modeling leverages readily available and relevant spatially explicit covariate data. The noted additional max-stable process models also used the nonstationary winter North Atlantic Oscillation index, which has been observed to influence snowy weather along the east coast of the United States. We find that, for this data set, SWE return level estimates are consistently higher when derived using methods which account for the observed spatial dependence among the extreme data. This is particularly significant for design scenarios of relevance for critical infrastructure evaluation.
Liu, Mei-bing; Chen, Xing-wei; Chen, Ying
2015-07-01
Identification of the critical source areas of non-point source pollution is an important means of controlling non-point source pollution within a watershed. To further reveal the impact of multiple time scales on the spatial differentiation characteristics of non-point source nitrogen loss, a SWAT model of the Shanmei Reservoir watershed was developed. Based on the simulated total nitrogen (TN) loss intensity of all 38 subbasins, the spatial distribution characteristics of nitrogen loss and critical source areas were analyzed at three time scales: yearly average, monthly average and rainstorm flood process. Furthermore, multiple linear correlation analysis was conducted to quantify the contributions of the natural environment and anthropogenic disturbance to nitrogen loss. The results showed significant spatial differences in TN loss in the Shanmei Reservoir watershed at different time scales, with the degree of spatial differentiation in the order monthly average > yearly average > rainstorm flood process. TN loss load mainly came from the upland Taoxi subbasin, which was identified as the critical source area. At all time scales, land use type (such as farmland and forest) was the dominant factor affecting the spatial distribution of nitrogen loss, whereas precipitation and runoff affected nitrogen loss only in months without fertilization and in several storm flood processes occurring on dates without fertilization. This was mainly due to the significant spatial variation of land use and fertilization, together with the low spatial variability of precipitation and runoff.
Findlay, R P; Dimbylow, P J
2009-04-21
If an antenna is located close to a person, the electric and magnetic fields produced by the antenna will vary over the region occupied by the human body. To obtain a mean value of the field for comparison with reference levels, the Institute of Electrical and Electronics Engineers (IEEE) and the International Commission on Non-Ionizing Radiation Protection (ICNIRP) recommend spatially averaging the squares of the field strength over the height of the body. This study attempts to assess the validity and accuracy of spatial averaging when used for half-wave dipoles at frequencies between 65 MHz and 2 GHz and distances of lambda/2, lambda/4 and lambda/8 from the body. The differences between mean electric field values calculated using ten field measurements and the true averaged value were approximately 15% in the 600 MHz to 2 GHz range. The results presented suggest that the use of modern survey equipment, which takes hundreds rather than tens of measurements, is advisable to arrive at a sufficiently accurate mean field value. Whole-body averaged and peak localized SAR values, normalized to calculated spatially averaged fields, were calculated for the NORMAN voxel phantom. It was found that the reference levels were conservative for all whole-body SAR values, but not for localized SAR, particularly in the 1-2 GHz region when the dipole was positioned very close to the body. However, if the maximum field is used for normalization of the calculated SAR, as opposed to the lower spatially averaged value, the reference levels provide a conservative estimate of the localized SAR basic restriction for all frequencies studied.
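The spatial-averaging rule described above (averaging the squares of the field strength over the height of the body, then taking the square root) can be sketched as follows; the ten sample values are hypothetical, not measurements from the study:

```python
import math

def spatially_averaged_field(samples):
    """RMS spatial average: square root of the mean of the squared
    field-strength samples (V/m), per the IEEE/ICNIRP averaging rule."""
    return math.sqrt(sum(e * e for e in samples) / len(samples))

# Ten hypothetical E-field readings (V/m) taken over the height of the body
readings = [12.0, 14.5, 16.0, 18.2, 20.1, 19.4, 17.3, 15.8, 13.9, 12.6]
e_avg = spatially_averaged_field(readings)
e_max = max(readings)  # the conservative alternative used for normalization
```

With hundreds of samples instead of ten, the same function simply receives a longer list, which is the study's point about modern survey equipment.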
Quasi-analytical treatment of spatially averaged radiation transfer in complex terrain
NASA Astrophysics Data System (ADS)
Löwe, H.; Helbig, N.
2012-04-01
We provide a new quasi-analytical method to compute the topographic influence on the effective albedo of complex topography as required for meteorological, land-surface or climate models. We investigate radiative transfer in complex terrain via the radiosity equation on isotropic Gaussian random fields. Under controlled approximations we derive expressions for domain averages of direct, diffuse and terrain radiation and the sky view factor. Domain averaged quantities are related to a type of level-crossing probability of the random field which is approximated by longstanding results developed for acoustic scattering at ocean boundaries. This allows us to express all non-local horizon effects in terms of a local terrain parameter, namely the mean squared slope. Emerging integrals are computed numerically and fit formulas are given for practical purposes. As an implication of our approach we provide an expression for the effective albedo of complex terrain in terms of the sun elevation angle, mean squared slope, the area averaged surface albedo, and the direct-to-diffuse ratio of solar radiation. As an application, we compute the effective albedo for the Swiss Alps and discuss possible generalizations of the method.
Maity, Somsubhra; Wu, Wei-Chen; Xu, Chao; Tracy, Joseph B.; Gundogdu, Kenan; Bochinski, Jason R.; Clarke, Laura I.
2015-01-01
Heat emanates from gold nanorods (GNRs) under ultrafast optical excitation of the localized surface plasmon resonance. The steady state nanoscale temperature distribution formed within a polymer matrix embedded with GNRs undergoing pulsed femtosecond photothermal heating is determined experimentally using two independent ensemble optical techniques. Physical rotation of the nanorods reveals the average local temperature of the polymer melt in the immediate spatial volume surrounding them, while fluorescence of homogeneously distributed perylene molecules monitors temperature over sample regions at larger distances from the GNRs. Polarization-sensitive fluorescence measurements of the perylene probes provide an estimate of the average size of the quasi-molten region surrounding each nanorod (that is, the boundary between softened polymer and solid material as the temperature decreases radially away from each particle) and distinguish the steady state temperature in the solid and melt regions. Combining these separate methods enables nanoscale spatial mapping of the average steady state temperature distribution caused by ultrafast excitation of the GNRs. These observations definitively demonstrate the presence of a steady-state temperature gradient and indicate that localized heating via the photothermal effect within materials enables nanoscale thermal manipulations without significantly altering the bulk sample temperature in these systems. These quantitative results are further verified by reorienting nanorods within a solid polymer nanofiber without inducing any morphological changes to the highly temperature-sensitive nanofiber surface. Temperature differences of 70-90 °C were observed over distances of ~100 nm. PMID:25379775
NASA Astrophysics Data System (ADS)
Seraphin, Pierre; Gonçalvès, Julio; Vallet-Coulomb, Christine; Champollion, Cédric
2018-06-01
Spatially distributed values of the specific yield, a fundamental parameter for transient groundwater mass balance calculations, were obtained by means of three independent methods for the Crau plain, France. In contrast to its traditional use to assess recharge based on a given specific yield, the water-table fluctuation (WTF) method, applied using major recharging events, gave a first set of reference values. Then, large infiltration processes recorded by monitored boreholes and caused by major precipitation events were interpreted in terms of specific yield by means of a one-dimensional vertical numerical model solving Richards' equations within the unsaturated zone. Finally, two gravity field campaigns, at low and high piezometric levels, were carried out to assess the groundwater mass variation and thus alternative specific yield values. The range obtained by the WTF method for this aquifer made of alluvial detrital material was 2.9-26%, in line with the scarce data available so far. The average spatial value of specific yield by the WTF method (9.1%) is consistent with the aquifer scale value from the hydro-gravimetric approach. In this investigation, an estimate of the hitherto unknown spatial distribution of the specific yield over the Crau plain was obtained using the most reliable method (the WTF method). A groundwater mass balance calculation over the domain using this distribution yielded similar results to an independent quantification based on a stable isotope-mixing model. This agreement reinforces the relevance of such estimates, which can be used to build a more accurate transient hydrogeological model.
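A minimal sketch of the water-table fluctuation principle underlying these reference estimates: for a recharge event of known magnitude, the specific yield is the recharge depth divided by the observed head rise. The event values below are hypothetical, chosen only to illustrate the arithmetic:

```python
def specific_yield_wtf(recharge_mm, head_rise_m):
    """WTF method inverted for parameter estimation:
    Sy = recharge depth (converted to m) / observed water-table rise (m)."""
    return (recharge_mm / 1000.0) / head_rise_m

# Hypothetical event: 45 mm of recharge producing a 0.5 m head rise
sy = specific_yield_wtf(45.0, 0.5)  # → 0.09, i.e. a specific yield of 9%
```

Applied borehole by borehole over many events, such estimates build up the kind of spatial distribution the study maps over the Crau plain.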
The Choice of Spatial Interpolation Method Affects Research Conclusions
NASA Astrophysics Data System (ADS)
Eludoyin, A. O.; Ijisesan, O. S.; Eludoyin, O. M.
2017-12-01
Studies from developing countries using spatial interpolation in geographical information systems (GIS) are few and recent. Many such studies have adopted interpolation procedures including kriging, moving average or inverse distance weighting (IDW) and nearest point without the necessary recourse to their uncertainties. This study compared modelled representations of popular interpolation procedures from two commonly used GIS software packages (ILWIS and ArcGIS) at Obafemi Awolowo University, Ile-Ife, Nigeria. The data used were concentrations of selected biochemical variables (BOD5, COD, SO4, NO3, pH, suspended and dissolved solids) in the Ere stream at Ayepe-Olode, southwest Nigeria. Water samples were collected using a depth-integrated grab sampling approach at three locations (upstream, downstream and along a palm oil effluent discharge point in the stream); four stations were sited at each location (Figure 1). The data were first subjected to examination of their spatial distributions and associated variogram parameters (nugget, sill and range) using PAleontological STatistics (PAST3), before the mean values of the variables were interpolated in the selected GIS software using each of the kriging (simple), moving average and nearest point approaches. Further, the determined variogram parameters were substituted for the default values in the selected software, and the results were compared. The study showed that the different point interpolation methods did not produce similar results. For example, whereas conductivity interpolated with kriging varied from 120.1 to 219.5 µS cm-1, it varied from 105.6 to 220.0 µS cm-1 and from 135.0 to 173.9 µS cm-1 with the nearest point and moving average interpolations, respectively (Figure 2).
It also showed that whereas the computed variogram model produced the best-fit lines (with the least associated error value, SSerror) with the Gaussian model, the spherical model was assumed by default for all distributions in the software, such that the nugget was assumed to be 0.00, which was rarely the case (Figure 3). The study concluded that the choice of interpolation procedure may affect modelling decisions, inferences and conclusions.
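Of the interpolation procedures compared above, inverse distance weighting is the simplest to sketch; this toy version (the point set and power exponent are illustrative, not the study's configuration) shows why interpolated values depend on the chosen method and its parameters:

```python
def idw(points, x, y, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from a list of
    (xi, yi, value) sample points; weight = 1 / distance**power."""
    num = den = 0.0
    for xi, yi, v in points:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return v  # exact hit on a sample point: return it directly
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den

# Two hypothetical conductivity samples; estimate midway between them
est = idw([(0.0, 0.0, 100.0), (2.0, 0.0, 200.0)], 1.0, 0.0)  # → 150.0
```

Changing `power` (or switching to kriging or nearest point) shifts the estimated surface, which is exactly the sensitivity the study documents.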
Using Spatial Correlations of SPDC Sources for Increasing the Signal to Noise Ratio in Images
NASA Astrophysics Data System (ADS)
Ruíz, A. I.; Caudillo, R.; Velázquez, V. M.; Barrios, E.
2017-05-01
We experimentally show that, by using spatial correlations of photon pairs produced by spontaneous parametric down-conversion, it is possible to increase the signal-to-noise ratio in images of objects illuminated with those photons; in comparison, objects illuminated with laser light exhibit a lower ratio. Our simple experimental set-up was capable of producing an average improvement in signal-to-noise ratio of 11 dB for parametric down-converted light over laser light. This simple method can be easily implemented for obtaining high-contrast images of faint objects and for transmitting information with low noise.
Performance characterization of a cross-flow hydrokinetic turbine in sheared inflow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forbush, Dominic; Polagye, Brian; Thomson, Jim
2016-12-01
A method for constructing a non-dimensional performance curve for a cross-flow hydrokinetic turbine in sheared flow is developed for a natural river site. The river flow characteristics are quasi-steady, with negligible vertical shear, persistent lateral shear, and synoptic changes dominated by long time scales (days to weeks). Performance curves developed from inflow velocities measured at individual points (randomly sampled) yield inconclusive turbine performance characteristics because of the spatial variation in mean flow. Performance curves using temporally- and spatially-averaged inflow velocities are more conclusive. The implications of sheared inflow are considered in terms of resource assessment and turbine control.
Sampling design optimisation for rainfall prediction using a non-stationary geostatistical model
NASA Astrophysics Data System (ADS)
Wadoux, Alexandre M. J.-C.; Brus, Dick J.; Rico-Ramirez, Miguel A.; Heuvelink, Gerard B. M.
2017-09-01
The accuracy of spatial predictions of rainfall by merging rain-gauge and radar data is partly determined by the sampling design of the rain-gauge network. Optimising the locations of the rain-gauges may increase the accuracy of the predictions. Existing spatial sampling design optimisation methods are based on minimisation of the spatially averaged prediction error variance under the assumption of intrinsic stationarity. Over the past years, substantial progress has been made to deal with non-stationary spatial processes in kriging. Various well-documented geostatistical models relax the assumption of stationarity in the mean, while recent studies show the importance of considering non-stationarity in the variance for environmental processes occurring in complex landscapes. We optimised the sampling locations of rain-gauges using an extension of the Kriging with External Drift (KED) model for prediction of rainfall fields. The model incorporates both non-stationarity in the mean and in the variance, which are modelled as functions of external covariates such as radar imagery, distance to radar station and radar beam blockage. Spatial predictions are made repeatedly over time, each time recalibrating the model. The space-time averaged KED variance was minimised by Spatial Simulated Annealing (SSA). The methodology was tested using a case study predicting daily rainfall in the north of England for a one-year period. Results show that (i) the proposed non-stationary variance model outperforms the stationary variance model, and (ii) a small but significant decrease of the rainfall prediction error variance is obtained with the optimised rain-gauge network. In particular, it pays off to place rain-gauges at locations where the radar imagery is inaccurate, while keeping the distribution over the study area sufficiently uniform.
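The Spatial Simulated Annealing step can be sketched with a generic design criterion. Here, mean distance from prediction points to the nearest gauge stands in for the space-time averaged KED variance minimised in the study (a simplifying assumption), and all coordinates live on a hypothetical unit square:

```python
import math
import random

def mean_nearest_distance(gauges, grid):
    """Proxy design criterion: mean distance from each prediction grid
    point to its nearest rain-gauge (lower is better)."""
    return sum(min(math.dist(p, g) for g in gauges) for p in grid) / len(grid)

def ssa_optimise(gauges, grid, steps=300, temp=1.0, cooling=0.99, seed=0):
    """Spatial simulated annealing: perturb one gauge location at a time,
    accept worse designs with a Boltzmann probability that shrinks as the
    temperature cools, and keep the best design ever visited."""
    rng = random.Random(seed)
    cur = [list(g) for g in gauges]
    cur_cost = mean_nearest_distance(cur, grid)
    best, best_cost = [g[:] for g in cur], cur_cost
    for _ in range(steps):
        cand = [g[:] for g in cur]
        i = rng.randrange(len(cand))
        cand[i][0] = min(1.0, max(0.0, cand[i][0] + rng.uniform(-0.1, 0.1)))
        cand[i][1] = min(1.0, max(0.0, cand[i][1] + rng.uniform(-0.1, 0.1)))
        c = mean_nearest_distance(cand, grid)
        if c < cur_cost or rng.random() < math.exp((cur_cost - c) / temp):
            cur, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = [g[:] for g in cand], c
        temp *= cooling
    return best, best_cost
```

In the study the criterion is recomputed from the non-stationary KED model (with radar covariates) rather than from plain geometry, but the accept/cool loop has this shape.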
Spatial and temporal stability of temperature in the first-level basins of China during 1951-2013
NASA Astrophysics Data System (ADS)
Cheng, Yuting; Li, Peng; Xu, Guoce; Li, Zhanbin; Cheng, Shengdong; Wang, Bin; Zhao, Binhua
2018-05-01
In recent years, global warming has attracted great attention around the world. Temperature change is not only involved in global climate change but is also closely linked to economic development, the ecological environment, and agricultural production. In this study, based on temperature data recorded by 756 meteorological stations in China during 1951-2013, the spatial and temporal stability characteristics of annual temperature in China and its first-level basins were investigated using the rank correlation coefficient method, the relative difference method, rescaled range (R/S) analysis, and wavelet transforms. The results showed that during 1951-2013, the spatial variation of annual temperature was moderate at the national level. Among the first-level basins, the largest variation coefficient was 114% in the Songhuajiang basin and the smallest was 10% in the Huaihe basin. During 1951-2013, the spatial distribution pattern of annual temperature presented extremely strong spatial and temporal stability at the national level. Spearman's rank correlation coefficient ranged from 0.97 to 0.99, and the spatial distribution pattern of annual temperature showed an increasing trend. At the national level, the Liaohe basin, the rivers in the southwestern region, the Haihe basin, the Yellow River basin, the Yangtze River basin, the Huaihe basin, the rivers in the southeastern region, and the Pearl River basin all had representative meteorological stations for annual temperature; the Songhuajiang basin and the rivers in the northwestern region had none. R/S analysis, the Mann-Kendall test, and Morlet wavelet analysis of annual temperature showed that the best representative meteorological station could reflect the variation trend and the main periodic changes of annual temperature in the region.
Therefore, strong temporal stability characteristics exist for annual temperature in China and its first-level basins. It was therefore feasible to estimate the annual average temperature by the annual temperature recorded by the representative meteorological station in the region. Moreover, it was of great significance to assess average temperature changes quickly and forecast future change tendencies in the region.
Climatic factors associated with amyotrophic lateral sclerosis: a spatial analysis from Taiwan.
Tsai, Ching-Piao; Tzu-Chi Lee, Charles
2013-11-01
Few studies have assessed the spatial association of amyotrophic lateral sclerosis (ALS) incidence worldwide. The aim of this study was to identify associations between climatic factors and ALS incidence in Taiwan. A total of 1,434 subjects with a primary diagnosis of ALS between 1997 and 2008 were identified in the national health insurance research database. The diagnosis was also verified by the national health insurance programme, which had issued the subjects "serious disabling disease (SDD) certificates". Local indicators of spatial association were employed to investigate spatial clustering of age-standardised incidence ratios in the townships of the study area. Spatial regression was utilised to reveal any association of annual average climatic factors with ALS incidence over the 12-year study period. The climatic factors included the annual average time of sunlight exposure, average temperature, maximum temperature, minimum temperature, atmospheric pressure, rainfall, relative humidity and wind speed, with spatial autocorrelation controlled. Significant correlations were found only for sunlight exposure and rainfall, and the pattern was similar in both genders. Annual sunlight exposure was negatively correlated with ALS incidence, while rainfall was positively correlated with it. While accepting that ALS is most probably multifactorial, it was concluded that sunlight deprivation and/or rainfall are associated to some degree with ALS incidence in Taiwan.
A method to estimate spatiotemporal air quality in an urban traffic corridor.
Singh, Nongthombam Premananda; Gokhale, Sharad
2015-12-15
Air quality exposure assessment using personal exposure sampling or direct measurement of spatiotemporal air pollutant concentrations is difficult and has limitations. Most statistical methods used for estimating spatiotemporal air quality do not account for source characteristics (e.g. emissions). In this study, a prediction method, based on the lognormal probability distribution of hourly-average spatial concentrations of carbon monoxide (CO) obtained with a CALINE4 model, has been developed and validated in an urban traffic corridor. Data on CO concentrations were collected at three locations, along with traffic and meteorology data, within the urban traffic corridor. The method was developed with the data from one location and validated at the other two. It estimated the CO concentrations reasonably well (correlation coefficient, r ≥ 0.96). The method was then applied to estimate the probability of occurrence [P(C ≥ Cstd)] of the spatial CO concentrations in the corridor. The results have been promising and may therefore be useful for quantifying spatiotemporal air quality within an urban area. Copyright © 2015 Elsevier B.V. All rights reserved.
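Given a lognormal fit to the hourly-average concentrations, the exceedance probability P(C ≥ Cstd) follows from the log-space mean and standard deviation via the normal CDF; a minimal sketch (the parameter values are hypothetical, not fitted from the study's data):

```python
import math

def prob_exceed_lognormal(c_std, mu_log, sigma_log):
    """P(C >= c_std) for a lognormal C whose natural log has mean mu_log
    and standard deviation sigma_log: 1 - Phi((ln c_std - mu_log)/sigma_log)."""
    z = (math.log(c_std) - mu_log) / sigma_log
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

# Hypothetical fit: mean of ln(CO) = 1.0, std of ln(CO) = 0.5 (CO in mg/m^3)
p = prob_exceed_lognormal(4.0, 1.0, 0.5)  # probability CO exceeds 4 mg/m^3
```

Evaluating this for each grid cell of modelled concentrations yields a spatial map of exceedance probability like the one the method produces for the corridor.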
Long-range epidemic spreading in a random environment.
Juhász, Róbert; Kovács, István A; Iglói, Ferenc
2015-03-01
Modeling long-range epidemic spreading in a random environment, we consider a quenched, disordered, d-dimensional contact process with infection rates decaying with distance as 1/r^(d+σ). We study the dynamical behavior of the model at and below the epidemic threshold by a variant of the strong-disorder renormalization-group method and by Monte Carlo simulations in one and two spatial dimensions. Starting from a single infected site, the average survival probability is found to decay as P(t) ∼ t^(-d/z) up to multiplicative logarithmic corrections. Below the epidemic threshold, a Griffiths phase emerges, where the dynamical exponent z varies continuously with the control parameter and tends to z_c = d + σ as the threshold is approached. At the threshold, the spatial extension of the infected cluster (in surviving trials) is found to grow as R(t) ∼ t^(1/z_c) with a multiplicative logarithmic correction, and the average number of infected sites in surviving trials is found to increase as N_s(t) ∼ (ln t)^χ with χ = 2 in one dimension.
Ranking and averaging independent component analysis by reproducibility (RAICAR).
Yang, Zhi; LaConte, Stephen; Weng, Xuchu; Hu, Xiaoping
2008-06-01
Independent component analysis (ICA) is a data-driven approach that has exhibited great utility for functional magnetic resonance imaging (fMRI). Standard ICA implementations, however, do not provide the number and relative importance of the resulting components. In addition, ICA algorithms utilizing gradient-based optimization give decompositions that are dependent on initialization values, which can lead to dramatically different results. In this work, a new method, RAICAR (Ranking and Averaging Independent Component Analysis by Reproducibility), is introduced to address these issues for spatial ICA applied to fMRI. RAICAR utilizes repeated ICA realizations and relies on the reproducibility between them to rank and select components. Different realizations are aligned based on correlations, leading to aligned components. Each component is ranked and thresholded based on between-realization correlations. Furthermore, different realizations of each aligned component are selectively averaged to generate the final estimate of the given component. Reliability and accuracy of this method are demonstrated with both simulated and experimental fMRI data. Copyright 2007 Wiley-Liss, Inc.
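The correlation-based alignment step at the heart of RAICAR can be sketched as a greedy matching between two ICA realizations: each component of the first realization is paired with the yet-unused component of the second that has the highest absolute Pearson correlation. The toy components below are hypothetical, and the real method aligns many realizations, not two:

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def align_components(real1, real2):
    """Greedily pair components across two ICA realizations by |r|;
    returns (index in real1, matched index in real2, |r|) triples."""
    pairs, used = [], set()
    for i, c1 in enumerate(real1):
        j_best = max((j for j in range(len(real2)) if j not in used),
                     key=lambda j: abs(pearson(c1, real2[j])))
        used.add(j_best)
        pairs.append((i, j_best, abs(pearson(c1, real2[j_best]))))
    return pairs
```

The |r| values of the matched pairs then serve as reproducibility scores for ranking and thresholding components, as the abstract describes.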
Estimation of Cloud Fraction Profile in Shallow Convection Using a Scanning Cloud Radar
Oue, Mariko; Kollias, Pavlos; North, Kirk W.; ...
2016-10-18
Large spatial heterogeneities in shallow convection result in uncertainties in estimations of domain-averaged cloud fraction profiles (CFP). This issue is addressed using large eddy simulations of shallow convection over land coupled with a radar simulator. Results indicate that zenith profiling observations are inadequate to provide reliable CFP estimates. Use of a Scanning Cloud Radar (SCR), performing a sequence of cross-wind horizon-to-horizon scans, is not straightforward due to the strong dependence of radar sensitivity on target distance. An objective method for estimating domain-averaged CFP is proposed that uses observed statistics of SCR hydrometeor detection with height to estimate optimum sampling regions. This method shows good agreement with the model CFP. Results indicate that CFP estimates require more than 35 min of SCR scans to converge on the model domain average. Lastly, the proposed technique is expected to improve our ability to compare model output with cloud radar observations in shallow cumulus cloud conditions.
NASA Astrophysics Data System (ADS)
Toyokuni, G.; Takenaka, H.
2007-12-01
We propose a method to obtain effective grid parameters for the finite-difference (FD) method with standard Earth models using analytical techniques. In spite of the broad use of the heterogeneous FD formulation for seismic waveform modeling, accurate treatment of material discontinuities inside grid cells has been a serious problem for many years. One possible way to solve this problem is to introduce effective grid elastic moduli and densities (effective parameters), calculated by volume harmonic averaging of the elastic moduli and volume arithmetic averaging of the density in grid cells. This scheme enables us to put a material discontinuity at an arbitrary position in the spatial grid. Most methods used for synthetic seismogram calculation today rely on standard Earth models, such as PREM, IASP91, SP6, and AK135, represented as functions of normalized radius. For FD computation of seismic waveforms with such models, we first need accurate treatment of material discontinuities in radius. This study provides a numerical scheme for analytical calculation of the effective parameters on arbitrary spatial grids in the radial direction for these four major standard Earth models, making the best use of their functional features. The scheme obtains the integral volume averages analytically through partial fraction decompositions (PFDs) and integral formulae. We have developed a FORTRAN subroutine to perform the computations, which is open for use in a large variety of FD schemes ranging from 1-D to 3-D, with conventional and staggered grids. In the presentation, we show some numerical examples displaying the accuracy of FD synthetics simulated with the analytical effective parameters.
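The averaging rules described above (volume harmonic averaging of elastic moduli, volume arithmetic averaging of density) reduce, for a grid cell split into known volume fractions, to a short computation. The fractions and material values below are hypothetical, and the paper's contribution is doing these integrals analytically for the standard Earth models rather than summing discrete sub-cells:

```python
def effective_parameters(cells):
    """Effective values for one grid cell from sub-cell tuples of
    (volume fraction, elastic modulus, density).
    Modulus: volume harmonic average; density: volume arithmetic average."""
    mu_eff = 1.0 / sum(frac / mu for frac, mu, _ in cells)
    rho_eff = sum(frac * rho for frac, _, rho in cells)
    return mu_eff, rho_eff

# Hypothetical cell half-filled by each of two materials (GPa, kg/m^3)
mu_eff, rho_eff = effective_parameters([(0.5, 2.0, 1000.0),
                                        (0.5, 4.0, 2000.0)])
# mu_eff → 8/3 ≈ 2.667 (harmonic), rho_eff → 1500.0 (arithmetic)
```

Placing the material discontinuity anywhere inside the cell only changes the volume fractions, which is why this scheme decouples interface positions from the grid.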
Zhu, Lin; Gong, Huili; Chen, Yun; Li, Xiaojuan; Chang, Xiang; Cui, Yijiao
2016-01-01
Hydraulic conductivity is a major parameter affecting the output accuracy of groundwater flow and transport models. The most commonly used semi-empirical formula for estimating conductivity is the Kozeny-Carman equation. However, this method alone does not work well with heterogeneous strata. Two important parameters, grain size and porosity, often show spatial variations at different scales. This study proposes a method for estimating conductivity distributions by combining a stochastic hydrofacies model with geophysical methods. The Markov chain model with a transition probability matrix was adopted to reconstruct hydrofacies structures for deriving spatial deposit information. Geophysical and hydro-chemical data were used to estimate the porosity distribution through Archie's law. Results show that the stochastically simulated hydrofacies model reflects the sedimentary features, with an average model accuracy of 78% in comparison with borehole log data in the Chaobai alluvial fan. The estimated conductivity is reasonable and of the same order of magnitude as the outcomes of the pumping tests. The conductivity distribution is consistent with the sedimentary distributions. This study provides more reliable spatial distributions of the hydraulic parameters for further numerical modeling. PMID:26927886
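The Kozeny-Carman relation mentioned above is commonly written, in its intrinsic-permeability form, as k = d² φ³ / (180 (1 − φ)²), with d a characteristic grain diameter and φ the porosity; a minimal sketch (the grain size and porosity values are illustrative, and the exact variant and constant used in the study may differ):

```python
def kozeny_carman_permeability(d_grain_m, porosity):
    """Kozeny-Carman intrinsic permeability (m^2):
    k = d^2 * phi^3 / (180 * (1 - phi)^2)."""
    return d_grain_m ** 2 * porosity ** 3 / (180.0 * (1.0 - porosity) ** 2)

# Hypothetical fine sand: 100 µm grains, porosity 0.4
k = kozeny_carman_permeability(1e-4, 0.4)  # ≈ 9.9e-12 m^2
```

The formula's cubic dependence on porosity and quadratic dependence on grain size is what makes spatially varying estimates of those two inputs (here from the hydrofacies model and Archie's law) so consequential for the resulting conductivity field.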
Trading strategy based on dynamic mode decomposition: Tested in Chinese stock market
NASA Astrophysics Data System (ADS)
Cui, Ling-xiao; Long, Wen
2016-11-01
Dynamic mode decomposition (DMD) is an effective method for capturing the intrinsic dynamical modes of a complex system. In this work, we adopt the DMD method to discover evolutionary patterns in the stock market and apply it to the Chinese A-share stock market. We design two strategies based on the DMD algorithm. The strategy that considers only the timing problem can make reliable profits in a choppy market with no prominent trend, but fails to beat the benchmark moving-average strategy in a bull market. After incorporating the spatial information from the spatial-temporal coherent structure of the DMD modes, we improved the trading strategy remarkably. The profitability of the DMD strategies is then quantitatively evaluated by performing the SPA test to correct for the data-snooping effect. The results further prove that the DMD algorithm can model market patterns well in a sideways market.
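The DMD algorithm underlying the strategies can be sketched in a few lines of numpy. This is the standard exact-DMD formulation on toy snapshot data, not the authors' A-share pipeline:

```python
import numpy as np

def dmd(X, r):
    """Exact DMD of a snapshot matrix X (states x times), truncated to rank r.

    Fits the best linear map A with X[:, k+1] ~ A X[:, k] and returns the
    eigenvalues and spatial modes of its rank-r approximation."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Atilde = U.conj().T @ X2 @ Vh.conj().T / s   # A projected onto the POD basis
    eigvals, W = np.linalg.eig(Atilde)
    modes = (X2 @ Vh.conj().T / s) @ W           # exact DMD modes
    return eigvals, modes

# Toy data: two modes decaying by factors 0.9 and 0.5 per step
k = np.arange(20)
X = np.outer([1.0, 0.0, 1.0], 0.9**k) + np.outer([0.0, 1.0, -1.0], 0.5**k)
eigvals, modes = dmd(X, 2)  # eigenvalue magnitudes recover 0.9 and 0.5
```

The eigenvalue magnitudes identify growing or decaying market modes, while the modes carry the cross-sectional (spatial) structure that the improved strategy exploits.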
Cosmological backreaction within the Szekeres model and emergence of spatial curvature
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolejko, Krzysztof, E-mail: krzysztof.bolejko@sydney.edu.au
This paper discusses the phenomenon of backreaction within the Szekeres model. Cosmological backreaction describes how the mean global evolution of the Universe deviates from the Friedmannian evolution. The analysis is based on models of a single cosmological environment and the global ensemble of Szekeres models (of the Swiss-Cheese type and Styrofoam type). The obtained results show that non-linear growth of cosmic structures is associated with growth of the spatial curvature Ω_R (in the FLRW limit Ω_R → Ω_k). If averaged over global scales, the result depends on the assumed global model of the Universe. Within the Swiss-Cheese model, which does have a fixed background, the volume average follows the evolution of the background, and the global spatial curvature averages out to zero (the background model is the ΛCDM model, which is spatially flat). In the Styrofoam-type model, which does not have a fixed background, the mean evolution deviates from the spatially flat ΛCDM model, and the mean spatial curvature evolves from Ω_R = 0 at the CMB to Ω_R ∼ 0.1 at z = 0. If the Styrofoam-type model correctly captures evolutionary features of the real Universe, then one should expect that in our Universe the spatial curvature should build up (local growth of cosmic structures) and its mean global average should deviate from zero (backreaction). As a result, this paper predicts that the low-redshift Universe should not be spatially flat (i.e. Ω_k ≠ 0, even if in the early Universe Ω_k = 0), and therefore when analysing low-z cosmological data one should keep Ω_k as a free parameter, independent of the CMB constraints.
Cosmological backreaction within the Szekeres model and emergence of spatial curvature
NASA Astrophysics Data System (ADS)
Bolejko, Krzysztof
2017-06-01
This paper discusses the phenomenon of backreaction within the Szekeres model. Cosmological backreaction describes how the mean global evolution of the Universe deviates from the Friedmannian evolution. The analysis is based on models of a single cosmological environment and the global ensemble of Szekeres models (of the Swiss-Cheese type and Styrofoam type). The obtained results show that non-linear growth of cosmic structures is associated with growth of the spatial curvature Ω_R (in the FLRW limit Ω_R → Ω_k). If averaged over global scales, the result depends on the assumed global model of the Universe. Within the Swiss-Cheese model, which does have a fixed background, the volume average follows the evolution of the background, and the global spatial curvature averages out to zero (the background model is the ΛCDM model, which is spatially flat). In the Styrofoam-type model, which does not have a fixed background, the mean evolution deviates from the spatially flat ΛCDM model, and the mean spatial curvature evolves from Ω_R = 0 at the CMB to Ω_R ∼ 0.1 at z = 0. If the Styrofoam-type model correctly captures evolutionary features of the real Universe, then one should expect that in our Universe the spatial curvature should build up (local growth of cosmic structures) and its mean global average should deviate from zero (backreaction). As a result, this paper predicts that the low-redshift Universe should not be spatially flat (i.e. Ω_k ≠ 0, even if in the early Universe Ω_k = 0), and therefore when analysing low-z cosmological data one should keep Ω_k as a free parameter, independent of the CMB constraints.
NASA Astrophysics Data System (ADS)
Tian, Yunfeng; Shen, Zheng-Kang
2016-02-01
We develop a spatial filtering method to remove random noise and extract the spatially correlated transients (i.e., the common-mode component (CMC)) that deviate from the zero mean over the span of detrended position time series of a continuous Global Positioning System (CGPS) network. The technique utilizes a weighting scheme that incorporates two factors: distances between neighboring sites and the correlations of their long-term residual position time series. We use a grid search algorithm to find the optimal thresholds for deriving the CMC that minimizes the root-mean-square (RMS) of the filtered residual position time series. Compared with the principal component analysis technique, our method achieves better (>13% on average) reduction of residual position scatter for the CGPS stations in western North America, eliminating regional transients of all spatial scales. It also has advantages in data manipulation: less intervention and applicability to a dense network of any spatial extent. Our method can also be used to detect CMC irrespective of its origin (i.e., tectonic or nontectonic), if such signals are of particular interest for further study. By varying the filtering distance range, the long-range CMC related to atmospheric disturbance can be filtered out, uncovering CMC associated with transient tectonic deformation. A correlation-based clustering algorithm is adopted to identify station clusters that share common regional transient characteristics.
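The weighting scheme can be sketched as a distance- and correlation-weighted stack of neighboring residual series. The exponential distance taper, the correlation threshold, and all numeric values below are illustrative assumptions, not the authors' grid-searched thresholds:

```python
import numpy as np

def common_mode(neighbor_residuals, distances_km, correlations,
                d0_km=400.0, min_corr=0.5):
    """Common-mode component (CMC) at a target site as a weighted average of
    neighboring sites' detrended residual position time series.

    neighbor_residuals: (n_sites, n_epochs) array.
    Weights combine inter-site distance (exponential taper with scale d0_km)
    and the long-term residual correlation; weakly correlated neighbors are excluded."""
    corr = np.asarray(correlations, dtype=float)
    w = corr * np.exp(-np.asarray(distances_km, dtype=float) / d0_km)
    w = np.where(corr >= min_corr, w, 0.0)
    if w.sum() == 0.0:
        return np.zeros(neighbor_residuals.shape[1])
    return w @ neighbor_residuals / w.sum()

# Two neighbors seeing the same transient: the stack recovers it exactly
signal = np.array([0.0, 1.0, 2.0, 1.0])
cmc = common_mode(np.vstack([signal, signal]), [100.0, 250.0], [0.9, 0.7])
```

Subtracting the stacked CMC from each site's series then yields the filtered residuals whose RMS the grid search minimizes.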
NASA Astrophysics Data System (ADS)
Mokrý, Pavel; Psota, Pavel; Steiger, Kateřina; Václavík, Jan; Vápenka, David; Doleček, Roman; Vojtíšek, Petr; Sládek, Juraj; Lédl, Vít.
2016-11-01
We report on the development and implementation of digital holographic tomography for three-dimensional (3D) observations of domain patterns in ferroelectric single crystals. Ferroelectric materials represent a group of materials whose macroscopic dielectric, electromechanical, and elastic properties are greatly influenced by the presence of domain patterns. Understanding the role of domain patterns in the aforementioned properties requires experimental techniques that allow precise 3D measurements of the spatial distribution of ferroelectric domains in a single crystal. Unfortunately, such techniques are rather limited at this time. The most frequently used, piezoelectric atomic force microscopy, allows 2D observations on the ferroelectric sample surface. Optical methods based on birefringence measurements provide parameters of the domain patterns averaged over the sample volume. In this paper, we analyze the possibility that the spatial distribution of ferroelectric domains can be obtained by measuring the wavefront deformation of the transmitted optical wave. We demonstrate that the spatial distribution of ferroelectric domains can be determined by measuring the spatial distribution of the refractive index. Finally, it is demonstrated that measurements of wavefront deformations generated in ferroelectric polydomain systems with small variations of the refractive index provide data that can be further processed by conventional tomographic methods.
Comparison of 2c- and 3cLIF droplet temperature imaging
NASA Astrophysics Data System (ADS)
Palmer, Johannes; Reddemann, Manuel A.; Kirsch, Valeri; Kneer, Reinhold
2018-06-01
This work presents "pulsed 2D-3cLIF-EET" as a measurement setup for micro-droplet internal temperature imaging. The setup relies on a third color channel that allows correction of spatially varying energy transfer rates between the two applied fluorescent dyes. First measurement results are compared with results from two slightly different versions of the recent "pulsed 2D-2cLIF-EET" method. Results reveal a higher temperature measurement accuracy for the recent 2cLIF setup. Average droplet temperature is determined by the 2cLIF setup with an uncertainty of less than 1 K and a spatial deviation of about 3.7 K. The new 3cLIF approach would become competitive if the existing droplet size dependency were accounted for by an additional calibration and if the processing algorithm included spatial measurement errors more appropriately.
Experimental feasibility of multistatic holography for breast microwave radar image reconstruction.
Flores-Tapia, Daniel; Rodriguez, Diego; Solis, Mario; Kopotun, Nikita; Latif, Saeed; Maizlish, Oleksandr; Fu, Lei; Gui, Yonsheng; Hu, Can-Ming; Pistorius, Stephen
2016-08-01
The goal of this study was to assess the experimental feasibility of circular multistatic holography, a novel breast microwave radar reconstruction approach, using experimental datasets recorded with a preclinical experimental setup. The performance of this approach was quantitatively evaluated by calculating the signal to clutter ratio (SCR), contrast to clutter ratio (CCR), tumor to fibroglandular response ratio (TFRR), spatial accuracy, and reconstruction time. Five datasets were recorded using synthetic phantoms with the dielectric properties of breast tissue in the 1-6 GHz range using a custom radar system developed by the authors. The datasets contained synthetic structures that mimic the dielectric properties of fibroglandular breast tissues. Four of these datasets contained an 8 mm inclusion that emulated a tumor. A custom microwave radar system developed at the University of Manitoba was used to record the radar responses from the phantoms. The datasets were reconstructed using the proposed multistatic approach as well as with a monostatic holography approach that has previously been shown to yield the images with the highest contrast and focal quality. For all reconstructions, the location of the synthetic tumors in the experimental setup was consistent with their position in both the monostatic and multistatic reconstructed images. The average spatial error was less than 4 mm, which is half the spatial resolution of the data acquisition system. The average SCR, CCR, and TFRR of the images reconstructed with the multistatic approach were 15.0, 9.4, and 10.0 dB, respectively. In comparison, monostatic images obtained using the datasets from the same experimental setups yielded average SCR, CCR, and TFRR values of 12.8, 4.9, and 5.9 dB. No artifacts, defined as responses generated by the reconstruction method with at least half the energy of the tumor signatures, were noted in the multistatic reconstructions.
The average execution time of the images formed using the proposed approach was 4 s, which is one order of magnitude faster than the current state-of-the-art time-domain multistatic breast microwave radar reconstruction algorithms. The images generated by the proposed method show that multistatic holography is capable of forming spatially accurate images in real-time with signal to clutter levels and contrast values higher than other published monostatic and multistatic cylindrical radar reconstruction approaches. In comparison to the monostatic holographic approach, the images generated by the proposed multistatic approach had SCR values that were at least 50% higher. The multistatic images had CCR and TFRR values at least 200% greater than those formed using a monostatic approach.
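The image-quality metrics can be sketched generically. The exact definitions used in the study are not reproduced here, so the sketch assumes a peak-tumor-response over mean-clutter convention on a 20·log10 magnitude scale; both the convention and the toy image are illustrative:

```python
import numpy as np

def signal_to_clutter_db(image, tumor_mask):
    """SCR in dB: peak response inside the tumor region relative to the mean
    magnitude of the remaining (clutter) region. Definitions of SCR vary in
    the literature; this is one common convention, assumed for illustration."""
    signal = np.max(np.abs(image[tumor_mask]))
    clutter = np.mean(np.abs(image[~tumor_mask]))
    return 20.0 * np.log10(signal / clutter)

# Toy reconstructed image: one bright tumor pixel over unit-level clutter
img = np.ones((4, 4))
mask = np.zeros((4, 4), dtype=bool)
mask[0, 0] = True
img[0, 0] = 10.0
scr = signal_to_clutter_db(img, mask)  # 20*log10(10/1) = 20 dB
```

CCR and TFRR follow the same pattern with the clutter region replaced by the appropriate contrast or fibroglandular region.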
NASA Astrophysics Data System (ADS)
Hellaby, Charles
2012-01-01
A new method for constructing exact inhomogeneous universes is presented that allows variation in three dimensions. The resulting spacetime may be statistically uniform on average, or have random, non-repeating variation. The construction utilises the Darmois junction conditions to join many different component spacetime regions. In the initial simple example given, the component parts are spatially flat and uniform, but much more general combinations should be possible. Further inhomogeneity may be added via Swiss-cheese vacuoles and inhomogeneous metrics. This model is used to explore the proposal that observers are located in bound, non-expanding regions, while the universe is actually in the process of becoming void dominated, and thus its average expansion rate is increasing. The model confirms qualitatively that the faster-expanding components come to dominate the average, and that inhomogeneity results in average parameters which evolve differently from those of any one component, but more realistic modelling of the effect will need this construction to be generalised.
Zeid, Elias Abou; Sereshkeh, Alborz Rezazadeh; Chau, Tom
2016-12-01
In recent years, the readiness potential (RP), a type of pre-movement neural activity, has been investigated for asynchronous electroencephalogram (EEG)-based brain-computer interfaces (BCIs). Since the RP is attenuated for involuntary movements, a BCI driven by RP alone could facilitate intentional control amid a plethora of unintentional movements. Previous studies have attempted single trial classification of RP via spatial and temporal filtering methods, or by combining the RP with event-related desynchronization. However, RP feature extraction remains challenging due to the slow non-oscillatory nature of the potential, its variability among participants and the inherent noise in EEG signals. Here, we propose a participant-specific, individually optimized pipeline of spatio-temporal filtering (PSTF) to improve RP feature extraction for laterality prediction. PSTF applies band-pass filtering on RP signals, followed by Fisher criterion spatial filtering to maximize class separation, and finally temporal window averaging for feature dimension reduction. Optimal parameters are simultaneously found by cross-validation for each participant. Using EEG data from 14 participants performing self-initiated left or right key presses as well as two benchmark BCI datasets, we compared the performance of PSTF to two popular methods: common spatial subspace decomposition, and adaptive spatio-temporal filtering. On the BCI benchmark data sets, PSTF performed comparably to both existing methods. With the key press EEG data, PSTF extracted more discriminative features, thereby leading to more accurate (74.99% average accuracy) predictions of RP laterality than that achievable with existing methods. Naturalistic and volitional interaction with the world is an important capacity that is lost with traditional system-paced BCIs. We demonstrated a significant improvement in fine movement laterality prediction from RP features alone. 
Our work supports further study of RP-based BCI for intuitive asynchronous control of the environment, such as augmentative communication or wheelchair navigation.
NASA Astrophysics Data System (ADS)
Abou Zeid, Elias; Rezazadeh Sereshkeh, Alborz; Chau, Tom
2016-12-01
Objective. In recent years, the readiness potential (RP), a type of pre-movement neural activity, has been investigated for asynchronous electroencephalogram (EEG)-based brain-computer interfaces (BCIs). Since the RP is attenuated for involuntary movements, a BCI driven by RP alone could facilitate intentional control amid a plethora of unintentional movements. Previous studies have attempted single trial classification of RP via spatial and temporal filtering methods, or by combining the RP with event-related desynchronization. However, RP feature extraction remains challenging due to the slow non-oscillatory nature of the potential, its variability among participants and the inherent noise in EEG signals. Here, we propose a participant-specific, individually optimized pipeline of spatio-temporal filtering (PSTF) to improve RP feature extraction for laterality prediction. Approach. PSTF applies band-pass filtering on RP signals, followed by Fisher criterion spatial filtering to maximize class separation, and finally temporal window averaging for feature dimension reduction. Optimal parameters are simultaneously found by cross-validation for each participant. Using EEG data from 14 participants performing self-initiated left or right key presses as well as two benchmark BCI datasets, we compared the performance of PSTF to two popular methods: common spatial subspace decomposition, and adaptive spatio-temporal filtering. Main results. On the BCI benchmark data sets, PSTF performed comparably to both existing methods. With the key press EEG data, PSTF extracted more discriminative features, thereby leading to more accurate (74.99% average accuracy) predictions of RP laterality than that achievable with existing methods. Significance. Naturalistic and volitional interaction with the world is an important capacity that is lost with traditional system-paced BCIs. We demonstrated a significant improvement in fine movement laterality prediction from RP features alone. 
Our work supports further study of RP-based BCI for intuitive asynchronous control of the environment, such as augmentative communication or wheelchair navigation.
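The final stage of the PSTF pipeline, temporal window averaging for feature dimension reduction, can be sketched as follows. The band-pass and Fisher-criterion spatial filtering stages are omitted, and the array shapes are illustrative rather than taken from the study:

```python
import numpy as np

def window_average_features(epoch, n_windows):
    """Reduce a (channels, samples) EEG epoch to a (channels * n_windows,)
    feature vector by averaging within equal-length temporal windows.

    Averaging suits the slow, non-oscillatory readiness potential: it keeps
    the low-frequency trend per window while suppressing sample-level noise."""
    segments = np.array_split(epoch, n_windows, axis=1)
    return np.concatenate([seg.mean(axis=1) for seg in segments])

# One 2-channel epoch of 6 samples, reduced to 2 windows -> 4 features
epoch = np.array([[1.0, 1.0, 1.0, 3.0, 3.0, 3.0],
                  [0.0, 0.0, 0.0, 2.0, 2.0, 2.0]])
features = window_average_features(epoch, 2)  # [1.0, 0.0, 3.0, 2.0]
```

In the full pipeline, the number of windows (like the band edges and spatial filters) would be chosen per participant by cross-validation.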
Thurman, Andrew L; Choi, Jiwoong; Choi, Sanghun; Lin, Ching-Long; Hoffman, Eric A; Lee, Chang Hyun; Chan, Kung-Sik
2017-05-10
Methacholine challenge tests are used to measure changes in pulmonary function that indicate symptoms of asthma. In addition to pulmonary function tests, which measure global changes in pulmonary function, computed tomography images taken at full inspiration before and after administration of methacholine provide local air volume changes (hyper-inflation post methacholine) at individual acinar units, indicating local airway hyperresponsiveness. Some of the acini may have extreme air volume changes relative to the global average, indicating hyperresponsiveness, and those extreme values may occur in clusters. We propose a Gaussian mixture model with a spatial smoothness penalty to improve prediction of hyperresponsive locations that occur in spatial clusters. A simulation study provides evidence that the spatial smoothness penalty improves prediction under different data-generating mechanisms. We apply this method to computed tomography data from Seoul National University Hospital on five healthy and ten asthmatic subjects. Copyright © 2017 John Wiley & Sons, Ltd.
Spatiotemporal exposure modeling of ambient erythemal ultraviolet radiation.
VoPham, Trang; Hart, Jaime E; Bertrand, Kimberly A; Sun, Zhibin; Tamimi, Rulla M; Laden, Francine
2016-11-24
Ultraviolet B (UV-B) radiation plays a multifaceted role in human health, inducing DNA damage and representing the primary source of vitamin D for most humans; however, current U.S. UV exposure models are limited in spatial, temporal, and/or spectral resolution. Area-to-point (ATP) residual kriging is a geostatistical method that can be used to create a spatiotemporal exposure model by downscaling from an area- to point-level spatial resolution using fine-scale ancillary data. A stratified ATP residual kriging approach was used to predict average July noon-time erythemal UV (UV_Ery, mW/m²) biennially from 1998 to 2012 by downscaling National Aeronautics and Space Administration (NASA) Total Ozone Mapping Spectrometer (TOMS) and Ozone Monitoring Instrument (OMI) gridded remote sensing images to a 1 km spatial resolution. Ancillary data were incorporated in random intercept linear mixed-effects regression models. Modeling was performed separately within nine U.S. regions to satisfy stationarity and account for locally varying associations between UV_Ery and predictors. Cross-validation was used to compare ATP residual kriging models and NASA grids to UV-B Monitoring and Research Program (UVMRP) measurements (gold standard). Predictors included in the final regional models were surface albedo, aerosol optical depth (AOD), cloud cover, dew point, elevation, latitude, ozone, surface incoming shortwave flux, sulfur dioxide (SO2), year, and interactions between year and surface albedo, AOD, cloud cover, dew point, elevation, latitude, and SO2. ATP residual kriging models more accurately estimated UV_Ery at UVMRP monitoring stations on average compared to NASA grids across the contiguous U.S. (average mean absolute error [MAE] for ATP, NASA: 15.8, 20.3; average root mean square error [RMSE]: 21.3, 25.5).
ATP residual kriging was associated with positive percent relative improvements in MAE (0.6-31.5%) and RMSE (3.6-29.4%) across all regions compared to NASA grids. ATP residual kriging incorporating fine-scale spatial predictors can provide more accurate, high-resolution UV_Ery estimates compared to using NASA grids and can be used in epidemiologic studies examining the health effects of ambient UV.
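The cross-validation comparison rests on two standard error metrics and a percent relative improvement; a minimal sketch, using the reported U.S.-average MAEs as the worked numbers:

```python
import numpy as np

def mae(pred, obs):
    """Mean absolute error."""
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(obs))))

def rmse(pred, obs):
    """Root mean square error."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

def pct_improvement(err_model, err_reference):
    """Percent relative improvement of a model's error over a reference's."""
    return 100.0 * (err_reference - err_model) / err_reference

# Reported U.S.-average MAEs: ATP residual kriging 15.8 vs NASA grids 20.3
improvement = pct_improvement(15.8, 20.3)  # roughly 22% lower MAE on average
```

RMSE penalizes large station-level errors more heavily than MAE, which is why the study reports both.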
Application and Analysis of Measurement Model for Calibrating Spatial Shear Surface in Triaxial Test
NASA Astrophysics Data System (ADS)
Zhang, Zhihua; Qiu, Hongsheng; Zhang, Xiedong; Zhang, Hang
2017-12-01
The discrete element method has great advantages in simulating contacts, fractures, and large displacements and deformations between particles. In order to analyze the spatial distribution of the shear surface in the three-dimensional triaxial test, a measurement model is inserted into the numerical triaxial model, which is generated by a weighted average assembling method. Because the internal shear surface is not visible in the laboratory, judging its trend only from the superficial cracks of the sheared sample is largely insufficient; therefore, the measurement model is introduced. The trend of the internal shear zone is analyzed according to the variations of porosity, coordination number, and volumetric strain in each layer. As a case study at a confining stress of 0.8 MPa, the spatial shear surface is calibrated against the rotated particle distribution and the theoretical value, showing the specific characteristics of increased porosity, decreased coordination number, and increased volumetric strain, which demonstrates that the measurement model is applicable in three-dimensional models.
NASA Astrophysics Data System (ADS)
Cooper, J.; Tait, S.; Marion, A.
2005-12-01
Bed-load is governed by interdependent mechanisms, the most significant being the interaction between bed roughness, surface layer composition and near-bed flow. Despite this, practically all transport rate equations are described as a function of average bed shear stress. Some workers have examined the role of turbulence in sediment transport (Nelson et al. 1995) but have not explored the potential significance of spatial variations in the near-bed flow field. This is unfortunate considering evidence showing that transport is spatially heterogeneous and could be linked to the spatial nature of the near-bed flow (Drake et al., 1988). An understanding is needed of both the temporal and spatial variability in the near-bed flow field. This paper presents detailed spatial velocity measurements of the near-bed flow field over a gravel-bed, obtained using Particle Image Velocimetry. These data have been collected in a laboratory flume under two regimes: (i) tests with one bed slope and different flow depths; and (ii) tests with a combination of flow depths and slopes at the same average bed shear stress. Results indicate spatial variation in the streamwise velocities of up to 45 per cent from the double-averaged velocity (averaged in both time and space). Under both regimes, as the depth increased, spatial variability in the flow field increased. The probability distributions of near-bed streamwise velocities became progressively more skewed towards the higher velocities. This change was more noticeable under regime (i). This has been combined with data from earlier tests in which the near-bed velocity close to an entraining grain was measured using a PIV/image analysis system (Chegini et al, 2002). 
This, along with data on the shape of the probability density function of velocities capable of entraining individual grains, derived from a discrete-particle model (Heald et al., 2004), has been used to estimate the distribution of local velocities required for grain motion in the above tests. The overlap between this distribution and the measured velocities is used to estimate entrainment rates. Predicted entrainment rates increase with relative submergence, even for similar bed shear stress. Assuming bed-load rate is the product of entrainment rate and hop length, and that hop lengths are sensibly stable, this suggests that transport rate has a dependence on relative submergence. This demonstrates that transport rate is not a direct function of average bed shear stress. The results describe a mechanism that will cause river channels with contrasting morphologies (and different relative submergence) but similar levels of average bed stress to experience different levels of sediment mobility.
References:
Chegini, A., Tait, S., Heald, J., McEwan, I. (2002) The development of an automated system for the measurement of near bed turbulence and grain motion. Proc. ASCE Conf. on Hydraulic Measurements and Experimental Methods, ISBN 0-7844-0655-3.
Drake, T.G., Shreve, R.L., Dietrich, W.E., Whiting, P.J., Leopold, L.B. (1988) Bedload transport of fine gravel observed by motion-picture photography. J. Fluid Mech., 192, 193-217.
Heald, J., McEwan, I., Tait, S. (2004) Sediment transport over a flat bed in a unidirectional flow: simulations and validation. Phil. Trans. Roy. Soc. of London A, 362, 1973-1986.
Nelson, J.M., Shreve, R.L., McLean, S.R., Drake, T.G. (1995) Role of near-bed turbulence structure in bed-load transport and bed form mechanics. Water Resour. Res., 31(8), 2071-2086.
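The overlap between the measured near-bed velocity distribution and the distribution of velocities required for grain motion can be sketched as an exceedance probability. Purely for illustration (the abstract does not state the distributional forms), both distributions are assumed normal and independent, so the overlap integral collapses to a standard normal CDF:

```python
import math

def entrainment_probability(mean_u, sd_u, mean_crit, sd_crit):
    """P(u > u_crit) for independent normal near-bed velocity u and critical
    entrainment velocity u_crit. The difference u - u_crit is normal with
    mean (mean_u - mean_crit) and sd sqrt(sd_u^2 + sd_crit^2), so the
    probability follows from the standard normal CDF."""
    z = (mean_u - mean_crit) / math.hypot(sd_u, sd_crit)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Illustrative values (m/s): flow well below the critical distribution
# entrains rarely; raising the mean near-bed velocity raises the rate
p_low = entrainment_probability(0.30, 0.05, 0.45, 0.05)
p_high = entrainment_probability(0.45, 0.05, 0.45, 0.05)  # exactly 0.5
```

Widening the near-bed velocity distribution (larger sd_u, as observed at higher relative submergence) also increases the overlap, which is the mechanism the abstract describes.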
Mortality atlas of the main causes of death in Switzerland, 2008-2012.
Chammartin, Frédérique; Probst-Hensch, Nicole; Utzinger, Jürg; Vounatsou, Penelope
2016-01-01
Analysis of the spatial distribution of mortality data is important for identifying high-risk areas, which in turn can guide prevention, behaviour change, and health resource allocation. This study aimed to update the Swiss mortality atlas by analysing recent data using Bayesian statistical methods. We present average patterns for the major causes of death in Switzerland. We analysed Swiss mortality data from death certificates for the period 2008-2012. Bayesian conditional autoregressive models were employed to smooth the standardised mortality rates and assess average patterns. Additionally, we developed models for age- and gender-specific sub-groups that account for urbanisation and linguistic areas in order to assess their effects on the different sub-groups. We describe the spatial pattern of the major causes of death that occurred in Switzerland between 2008 and 2012, namely 4 cardiovascular diseases, 10 different kinds of cancer, 2 external causes of death, as well as chronic respiratory diseases, Alzheimer's disease, diabetes, influenza and pneumonia, and liver diseases. In-depth analysis of age- and gender-specific mortality rates revealed significant disparities between urbanisation and linguistic areas. We provide a contemporary overview of the spatial distribution of the main causes of death in Switzerland. Our estimates and maps can help future research to deepen our understanding of the spatial variation of major causes of death in Switzerland, which in turn is crucial for targeting preventive measures, changing behaviours and a more cost-effective allocation of health resources.
The Analytical Limits of Modeling Short Diffusion Timescales
NASA Astrophysics Data System (ADS)
Bradshaw, R. W.; Kent, A. J.
2016-12-01
Chemical and isotopic zoning in minerals is widely used to constrain the timescales of magmatic processes such as magma mixing and crystal residence via diffusion modeling. Forward modeling of diffusion relies on fitting diffusion profiles to measured compositional gradients. However, an individual measurement is essentially an average composition for a segment of the gradient defined by the spatial resolution of the analysis. Thus the analytical spatial resolution can limit the timescales that can be determined for an element of given diffusivity, particularly where the scale of the gradient approaches that of the measurement. Here we use a probabilistic modeling approach to investigate the effect of analytical spatial resolution on timescales estimated from diffusion modeling. Our method investigates how accurately the age of a synthetic diffusion profile can be obtained by modeling an "unknown" profile derived from discrete sampling of the synthetic compositional gradient at a given spatial resolution. We also include the effects of analytical uncertainty and the position of measurements relative to the diffusion gradient. We apply this method to the spatial resolutions of common microanalytical techniques (LA-ICP-MS, SIMS, EMP, NanoSIMS). Our results confirm that for a given diffusivity, higher spatial resolution gives access to shorter timescales, and that each analytical spacing has a minimum timescale, below which it overestimates the timescale. For example, for Ba diffusion in plagioclase at 750 °C, timescales are accurate (within 20%) above 10, 100, 2,600, and 71,000 years at 0.3, 1, 5, and 25 μm spatial resolution, respectively. For Sr diffusion in plagioclase at 750 °C, timescales are accurate above 0.02, 0.2, 4, and 120 years at the same spatial resolutions. Our results highlight the importance of selecting appropriate analytical techniques to estimate accurate diffusion-based timescales.
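The core effect, each measurement averaging the true profile over the analytical spot, can be sketched by convolving an ideal diffusion profile with a boxcar the width of the spot. The numbers below are illustrative, not the paper's Monte Carlo setup:

```python
import numpy as np
from math import erf

# Ideal 1-D diffusion profile across an initial step: C(x) = 0.5 * erfc(x / (2*sqrt(D*t)))
x = np.linspace(-50.0, 50.0, 2001)           # position, micrometres (dx = 0.05 um)
Dt = 25.0                                    # D*t in um^2 -> length scale 2*sqrt(Dt) = 10 um
true_profile = 0.5 * (1.0 - np.array([erf(xi / (2.0 * np.sqrt(Dt))) for xi in x]))

# A measurement with spot width w averages the profile over the spot (boxcar convolution)
w_um = 25.0
n = int(round(w_um / 0.05)) + 1
measured = np.convolve(true_profile, np.ones(n) / n, mode="same")

def quartile_width(profile, xs):
    """Distance between the 75% and 25% crossings: a proxy for apparent profile width."""
    return xs[np.argmin(np.abs(profile - 0.25))] - xs[np.argmin(np.abs(profile - 0.75))]

core = np.abs(x) <= 25.0                     # stay clear of convolution edge effects
w_true = quartile_width(true_profile[core], x[core])   # ~9.5 um for this Dt
w_meas = quartile_width(measured[core], x[core])       # broadened by the 25-um spot
```

Fitting a diffusion model to the broadened profile would return a larger apparent D*t, i.e. an overestimated timescale, which is exactly the short-timescale bias the study quantifies.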
NASA Astrophysics Data System (ADS)
Rouholahnejad, E.; Kirchner, J. W.
2016-12-01
Evapotranspiration (ET) is a key process in land-climate interactions and affects the dynamics of the atmosphere at local and regional scales. In estimating ET, most earth system models average over considerable sub-grid heterogeneity in land surface properties, precipitation (P), and potential evapotranspiration (PET). This spatial averaging could potentially bias ET estimates, due to the nonlinearities in the underlying relationships. In addition, most earth system models ignore lateral redistribution of water within and between grid cells, which could potentially alter both local and regional ET. Here we present a first attempt to quantify the effects of spatial heterogeneity and lateral redistribution on grid-cell-averaged ET as seen from the atmosphere over heterogeneous landscapes. Using a Budyko framework to express ET as a function of P and PET, we quantify how sub-grid heterogeneity affects average ET at the scale of typical earth system model grid cells. We show that averaging over sub-grid heterogeneity in P and PET, as typical earth system models do, leads to overestimates of average ET. We use a similar approach to quantify how lateral redistribution of water could affect average ET, as seen from the atmosphere. We show that where the aridity index P/PET increases with altitude, gravitationally driven lateral redistribution will increase average ET, implying that models that neglect lateral moisture redistribution will underestimate average ET. In contrast, where the aridity index P/PET decreases with altitude, gravitationally driven lateral redistribution will decrease average ET. This approach yields a simple conceptual framework and mathematical expressions for determining whether, and how much, spatial heterogeneity and lateral redistribution can affect regional ET fluxes as seen from the atmosphere. 
This analysis provides the basis for quantifying heterogeneity and redistribution effects on ET at regional and continental scales, which will be the focus of future work.
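The overestimation result above has a simple mathematical basis: the Budyko curve is concave, so by Jensen's inequality ET evaluated at grid-mean P and PET exceeds the mean of ET over heterogeneous sub-grid cells. A toy check using the classic Budyko (1974) formulation, which may differ from the exact form used in this work, with two hypothetical sub-grid cells:

```python
import math

def budyko_et(p, pet):
    # Budyko (1974) curve: ET = P * sqrt(phi * tanh(1/phi) * (1 - exp(-phi))), phi = PET/P
    phi = pet / p
    return p * math.sqrt(phi * math.tanh(1.0 / phi) * (1.0 - math.exp(-phi)))

# two hypothetical sub-grid cells (mm/yr): one wet, energy-limited; one dry, water-limited
cells = [(1000.0, 500.0), (500.0, 1500.0)]

# ET computed from spatially averaged inputs (what a coarse model does)
et_of_means = budyko_et(sum(p for p, _ in cells) / 2,
                        sum(q for _, q in cells) / 2)
# true grid-cell-average ET over the heterogeneous sub-grid
mean_of_ets = sum(budyko_et(p, q) for p, q in cells) / 2
```

Here `et_of_means` comes out larger than `mean_of_ets`, i.e., averaging over sub-grid heterogeneity in P and PET overestimates average ET, exactly the direction of bias reported above.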
NASA Astrophysics Data System (ADS)
Peng, Chi; Wang, Meie; Chen, Weiping
2016-11-01
Spatial statistical methods including cokriging interpolation, Moran's I analysis, and geographically weighted regression (GWR) were used to study the spatial characteristics of polycyclic aromatic hydrocarbon (PAH) accumulation in urban, suburban, and rural soils of Beijing. The concentrations of PAHs decreased spatially as the level of urbanization decreased. Generally, PAHs in soil showed two spatial patterns on the regional scale: (1) regional baseline depositions with a radius of 16.5 km related to the level of urbanization, and (2) isolated pockets of PAH-contaminated soil extending up to around 3.5 km from industrial point sources. In the urban areas, soil PAHs showed high spatial heterogeneity on the block scale, which was probably related to vegetation cover, land use, and physical soil disturbance. The distribution of total PAHs in urban blocks was unrelated to indicators of the intensity of anthropogenic activity, namely population density, light intensity at night, and road density, but was significantly related to the same indicators in the suburban and rural areas. The moving averages of molecular ratios suggested that PAHs in the suburban and rural soils were a mix of local emissions and diffusion from urban areas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sinistore, Julie C.; Reinemann, D. J.; Izaurralde, Roberto C.
Spatial variability in yields and greenhouse gas emissions from soils has been identified as a key source of variability in life cycle assessments (LCAs) of agricultural products such as cellulosic ethanol. This study aims to conduct an LCA of cellulosic ethanol production from switchgrass in a way that captures this spatial variability and tests results for sensitivity to using spatially averaged results. The Environment Policy Integrated Climate (EPIC) model was used to calculate switchgrass yields, greenhouse gas (GHG) emissions, and nitrogen and phosphorus emissions from crop production in southern Wisconsin and Michigan at the watershed scale. These data were combined with cellulosic ethanol production data via ammonia fiber expansion and dilute acid pretreatment methods and region-specific electricity production data into an LCA model of eight ethanol production scenarios. Standard deviations from the spatial mean yields and soil emissions were used to test the sensitivity of net energy ratio, global warming potential intensity, and eutrophication and acidification potential metrics to spatial variability. Substantial variation in the eutrophication potential was also observed when nitrogen and phosphorus emissions from soils were varied. This work illustrates the need for spatially explicit agricultural production data in the LCA of biofuels and other agricultural products.
Ball-Damerow, Joan E.; Oboyski, Peter T.; Resh, Vincent H.
2015-01-01
Abstract The recently completed Odonata database for California consists of specimen records from the major entomology collections of the state, large Odonata collections outside of the state, previous literature, historical and recent field surveys, and from enthusiast group observations. The database includes 32,025 total records and 19,000 unique records for 106 species of dragonflies and damselflies, with records spanning 1879–2013. Records have been geographically referenced using the point-radius method to assign coordinates and an uncertainty radius to specimen locations. In addition to describing techniques used in data acquisition, georeferencing, and quality control, we present assessments of the temporal, spatial, and taxonomic distribution of records. We use this information to identify biases in the data, and to determine changes in species prevalence, latitudinal ranges, and elevation ranges when comparing records before 1976 and after 1979. The average latitude at which records occurred increased by 78 km over these time periods. While average elevation did not change significantly, the average minimum elevation across species declined by 108 m. Odonata distribution may be generally shifting northwards as temperature warms and to lower minimum elevations in response to increased summer water availability in low-elevation agricultural regions. The unexpected decline in elevation may also be partially the result of bias in recent collections towards centers of human population, which tend to occur at lower elevations. This study emphasizes the need to address temporal, spatial, and taxonomic biases in museum and observational records in order to produce reliable conclusions from such data. PMID:25709531
Feasibility of high temporal resolution breast DCE-MRI using compressed sensing theory.
Wang, Haoyu; Miao, Yanwei; Zhou, Kun; Yu, Yanming; Bao, Shanglian; He, Qiang; Dai, Yongming; Xuan, Stephanie Y; Tarabishy, Bisher; Ye, Yongquan; Hu, Jiani
2010-09-01
To investigate the feasibility of high temporal resolution breast DCE-MRI using compressed sensing theory. Two experiments were designed to investigate the feasibility of using reference image based compressed sensing (RICS) technique in DCE-MRI of the breast. The first experiment examined the capability of RICS to faithfully reconstruct uptake curves using undersampled data sets extracted from fully sampled clinical breast DCE-MRI data. An average approach and an approach using motion estimation and motion compensation (ME/MC) were implemented to obtain reference images and to evaluate their efficacy in reducing motion related effects. The second experiment, an in vitro phantom study, tested the feasibility of RICS for improving temporal resolution without degrading the spatial resolution. For the uptake-curve reconstruction experiment, there was a high correlation between uptake curves reconstructed from fully sampled data by Fourier transform and from undersampled data by RICS, indicating high similarity between them. The mean Pearson correlation coefficients for RICS with the ME/MC approach and RICS with the average approach were 0.977 +/- 0.023 and 0.953 +/- 0.031, respectively. The comparisons of final reconstruction results between RICS with the average approach and RICS with the ME/MC approach suggested that the latter was superior to the former in reducing motion related effects. For the in vitro experiment, compared to the fully sampled method, RICS improved the temporal resolution by an acceleration factor of 10 without degrading the spatial resolution. The preliminary study demonstrates the feasibility of RICS for faithfully reconstructing uptake curves and improving temporal resolution of breast DCE-MRI without degrading the spatial resolution.
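The similarity metric quoted above, a Pearson correlation coefficient between a fully sampled uptake curve and its RICS reconstruction, is straightforward to compute. A sketch with two hypothetical, nearly identical uptake curves (illustrative values only, not the study's data):

```python
import math

def pearson(a, b):
    # Pearson correlation coefficient between two equal-length curves
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

# hypothetical contrast-uptake curves: fully sampled reference vs. undersampled reconstruction
full = [0.0, 0.8, 1.6, 2.1, 2.4, 2.5, 2.45, 2.4]
rics = [0.0, 0.75, 1.55, 2.15, 2.35, 2.5, 2.4, 2.35]
r = pearson(full, rics)
```

A coefficient near 1, as with the 0.977 and 0.953 means reported above, indicates that the reconstructed curve closely tracks the fully sampled one.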
Comprehensive time average digital holographic vibrometry
NASA Astrophysics Data System (ADS)
Psota, Pavel; Lédl, Vít; Doleček, Roman; Mokrý, Pavel; Vojtíšek, Petr; Václavík, Jan
2016-12-01
This paper presents a method that simultaneously deals with drawbacks of time-average digital holography: limited measurement range, limited spatial resolution, and quantitative analysis of the measured Bessel fringe patterns. When the frequency of the reference wave is shifted by an integer multiple of frequency at which the object oscillates, the measurement range of the method can be shifted either to smaller or to larger vibration amplitudes. In addition, phase modulation of the reference wave is used to obtain a sequence of phase-modulated fringe patterns. Such fringe patterns can be combined by means of phase-shifting algorithms, and amplitudes of vibrations can be straightforwardly computed. This approach independently calculates the amplitude values in every single pixel. The frequency shift and phase modulation are realized by proper control of Bragg cells and therefore no additional hardware is required.
Change of spatial information under rescaling: A case study using multi-resolution image series
NASA Astrophysics Data System (ADS)
Chen, Weirong; Henebry, Geoffrey M.
Spatial structure in imagery depends on a complicated interaction between the observational regime and the types and arrangements of entities within the scene that the image portrays. Although block averaging of pixels has commonly been used to simulate coarser resolution imagery, relatively little attention has been paid to the effects of simple rescaling on spatial structure, to explaining those effects, or to possible remedies. Yet, if there are significant differences in spatial variance between rescaled and observed images, it may affect the reliability of retrieved biogeophysical quantities. To investigate these issues, a nested series of high spatial resolution digital imagery was collected at a research site in eastern Nebraska in 2001. An airborne Kodak DCS420IR camera acquired imagery at three altitudes, yielding nominal spatial resolutions ranging from 0.187 m to 1 m. The red and near infrared (NIR) bands of the co-registered image series were normalized using pseudo-invariant features, and the normalized difference vegetation index (NDVI) was calculated. Plots of grain sorghum planted in orthogonal crop row orientations were extracted from the image series. The finest spatial resolution data were then rescaled by averaging blocks of pixels to produce a rescaled image series that closely matched the spatial resolution of the observed image series. Spatial structures of the observed and rescaled image series were characterized using semivariogram analysis. Results for NDVI and its component bands show, as expected, that decreasing spatial resolution leads to decreasing spatial variability and increasing spatial dependence. However, compared to the observed data, the rescaled images contain more persistent spatial structure that exhibits limited variation in both spatial dependence and spatial heterogeneity. Rescaling via simple block averaging fails to consider the effect of scene object shape and extent on spatial information. 
As the features portrayed by pixels are equally weighted regardless of the shape and extent of the underlying scene objects, the rescaled image retains more of the original spatial information than would occur through direct observation at a coarser sensor spatial resolution. In contrast, for the observed images, due to the effect of the modulation transfer function (MTF) of the imaging system, high frequency features like edges are blurred or lost as the pixel size increases, resulting in greater variation in spatial structure. Successive applications of a low-pass spatial convolution filter are shown to mimic an MTF. Accordingly, it is recommended that such a procedure be applied prior to rescaling by simple block averaging, if insufficient image metadata exist to replicate the net MTF of the imaging system, as might be expected in land cover change analysis studies using historical imagery.
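The two operations discussed above, block averaging to simulate coarser resolution and low-pass smoothing to mimic the sensor MTF, can be sketched in a few lines. The scene below is a hypothetical striped "crop row" image, and a single 3x3 mean filter stands in crudely for the MTF, which in practice would be modeled more carefully:

```python
def block_average(img, b):
    # simulate coarser resolution by averaging non-overlapping b x b pixel blocks
    rows, cols = len(img), len(img[0])
    return [[sum(img[i + di][j + dj] for di in range(b) for dj in range(b)) / (b * b)
             for j in range(0, cols - cols % b, b)]
            for i in range(0, rows - rows % b, b)]

def smooth3(img):
    # one pass of a 3x3 mean filter (a crude MTF stand-in); edges are clamped
    rows, cols = len(img), len(img[0])
    out = []
    for i in range(rows):
        row = []
        for j in range(cols):
            vals = [img[a][c]
                    for a in range(max(0, i - 1), min(rows, i + 2))
                    for c in range(max(0, j - 1), min(cols, j + 2))]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out

def variance(img):
    flat = [v for row in img for v in row]
    m = sum(flat) / len(flat)
    return sum((v - m) ** 2 for v in flat) / len(flat)

# hypothetical crop-row scene: 3-pixel-wide stripes of vegetation (1.0) and soil (0.0)
fine = [[1.0 if (j // 3) % 2 == 0 else 0.0 for j in range(24)] for i in range(24)]
coarse = block_average(fine, 4)               # simple rescaling
coarse_mtf = block_average(smooth3(fine), 4)  # recommended: smooth first, then rescale
```

As the abstract notes, coarsening reduces spatial variability; the pre-smoothing pass additionally removes the high-frequency edge content that a real coarser-resolution sensor would never have recorded.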
Ensemble averaging and stacking of ARIMA and GSTAR model for rainfall forecasting
NASA Astrophysics Data System (ADS)
Anggraeni, D.; Kurnia, I. F.; Hadi, A. F.
2018-04-01
Unpredictable rainfall changes can affect human activities such as agriculture, aviation, and shipping, which depend on weather forecasts. Therefore, we need forecasting tools with high accuracy in predicting future rainfall. This research focuses on local forecasting of rainfall at Jember from 2005 to 2016, using data from 77 rainfall stations. Rainfall here is related not only to previous occurrences at the same station, but also to those at other stations; this is called the spatial effect. The aim of this research is to apply the GSTAR model to determine whether there are spatial correlations between stations. The GSTAR model is an expansion of the space-time model that combines time-related effects, time series effects at each location (station), and the effects of the locations themselves. The GSTAR model is also compared to the ARIMA model, which ignores information from other stations. The forecast values of the ARIMA and GSTAR models are then combined using ensemble forecasting techniques. The averaging and stacking methods of ensemble forecasting provide the best model, i.e., the one with higher accuracy and the smaller RMSE (root mean square error). Finally, with the best model we can offer better local rainfall forecasts for Jember in the future.
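The two combiners named in this abstract, averaging and stacking, can be sketched as follows. The station observations and the two component forecasts below are hypothetical (the "ARIMA-like" and "GSTAR-like" series are just biased transforms of the observations), stacking weights are fit by ordinary least squares without an intercept, and, for brevity only, the weights are fit and evaluated on the same series rather than on a held-out period:

```python
def rmse(obs, pred):
    # root mean square error between observations and predictions
    return (sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)) ** 0.5

def stack_weights(obs, f1, f2):
    # least-squares weights for yhat = w1*f1 + w2*f2 (2x2 normal equations, no intercept)
    a11 = sum(x * x for x in f1)
    a22 = sum(x * x for x in f2)
    a12 = sum(x * y for x, y in zip(f1, f2))
    b1 = sum(x * y for x, y in zip(f1, obs))
    b2 = sum(x * y for x, y in zip(f2, obs))
    det = a11 * a22 - a12 * a12
    return (b1 * a22 - b2 * a12) / det, (b2 * a11 - b1 * a12) / det

# hypothetical monthly rainfall (mm) at one station, plus two imperfect model forecasts
obs = [10.0, 120.0, 80.0, 30.0, 0.0, 60.0, 150.0, 40.0]
noise1 = [3.0, -5.0, 4.0, -2.0, 1.0, 2.0, -6.0, 3.0]
noise2 = [-2.0, 4.0, -3.0, 1.0, 2.0, -4.0, 5.0, -1.0]
f1 = [1.2 * o + e for o, e in zip(obs, noise1)]       # e.g. an ARIMA-like wet-biased forecast
f2 = [0.9 * o + 5.0 + e for o, e in zip(obs, noise2)]  # e.g. a GSTAR-like forecast

avg = [(a + b) / 2 for a, b in zip(f1, f2)]            # ensemble averaging
w1, w2 = stack_weights(obs, f1, f2)
stacked = [w1 * a + w2 * b for a, b in zip(f1, f2)]    # ensemble stacking
```

Because the equal-weight average is itself one of the linear combinations stacking can choose, the stacked RMSE on the fitting data can never exceed the averaging RMSE; whether that advantage survives out of sample is exactly what the paper's comparison tests.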
Kalin, Latif; Hantush, Mohamed M
2009-02-01
An index-based method is developed that ranks the subwatersheds of a watershed based on their relative impacts on watershed response to anticipated land developments, and then applied to an urbanizing watershed in eastern Pennsylvania. Simulations with a semi-distributed hydrologic model show that computed low- and high-flow frequencies at the main outlet increase significantly with the projected landscape changes in the watershed. The developed index is utilized to prioritize areas in the urbanizing watershed based on their contributions to alterations in the magnitude of selected flow characteristics at two spatial resolutions. The low-flow measure, 7Q10, rankings are shown to mimic the spatial trend of groundwater recharge rates, whereas average annual maximum daily flow, QAMAX, and average monthly median of daily flows, QMMED, rankings are influenced by both recharge and proximity to watershed outlet. Results indicate that, especially with the higher resolution, areas having quicker responses are not necessarily the more critical areas for high-flow scenarios. Subwatershed rankings are shown to vary slightly with the location of water quality/quantity criteria enforcement. It is also found that rankings of subwatersheds upstream from the site of interest, which could be the main outlet or any interior point in the watershed, may be influenced by the time scale of the hydrologic processes.
Zhang, Chu; Liu, Fei; He, Yong
2018-02-01
Hyperspectral imaging was used to identify and visualize coffee bean varieties. Spectral preprocessing of pixel-wise spectra was conducted by different methods, including moving average smoothing (MA), wavelet transform (WT), and empirical mode decomposition (EMD). Meanwhile, spatial preprocessing of the gray-scale image at each wavelength was conducted by median filter (MF). Support vector machine (SVM) models using full sample average spectra and pixel-wise spectra, and the optimal wavelengths selected by second-derivative spectra, all achieved classification accuracy over 80%. First, the SVM models built on pixel-wise spectra were used to predict the sample average spectra, and these models obtained over 80% classification accuracy. Second, the SVM models built on sample average spectra were used to predict pixel-wise spectra, but achieved less than 50% classification accuracy. The results indicated that WT and EMD were suitable for pixel-wise spectra preprocessing. The use of pixel-wise spectra could extend the calibration set, and resulted in good predictions for both pixel-wise spectra and sample average spectra. The overall results indicated the effectiveness of spectral preprocessing and of adopting pixel-wise spectra, and provide an alternative way of data processing for applications of hyperspectral imaging in the food industry.
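Of the preprocessing steps listed above, moving average (MA) smoothing is the simplest to illustrate. A minimal sketch on a hypothetical pixel-wise reflectance spectrum (a smooth baseline plus seeded pseudo-random noise; not the study's data or parameters):

```python
import math
import random

def moving_average(spectrum, window=5):
    # centered moving-average smoothing; the window is clamped at the spectrum edges
    half = window // 2
    n = len(spectrum)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out.append(sum(spectrum[lo:hi]) / (hi - lo))
    return out

def rms_error(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

random.seed(42)  # deterministic pseudo-noise for the demo
clean = [0.5 + 0.3 * math.sin(i / 20.0) for i in range(200)]  # smooth baseline "spectrum"
noisy = [c + random.gauss(0.0, 0.05) for c in clean]          # plus sensor noise
smoothed = moving_average(noisy, window=5)
```

Because the baseline varies slowly relative to the window, smoothing suppresses the noise with little signal distortion, which is the rationale for applying such filters to pixel-wise spectra before classification.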
Spatially resolved organic analysis of the Allende meteorite
NASA Technical Reports Server (NTRS)
Zenobi, Renato; Philippoz, Jean-Michel; Zare, Richard N.; Buseck, Peter R.
1989-01-01
The distribution of polycyclic aromatic hydrocarbons (PAHs) in the Allende meteorite has been probed with two-step laser desorption/laser multiphoton ionization mass spectrometry. This method allows direct in situ analysis with a spatial resolution of 1 sq mm or better of selected organic molecules. Spectra from freshly fractured interior surfaces of the meteorite show that PAH concentrations are locally high compared to the average concentrations found by wet chemical analysis of pulverized samples. The data suggest that the PAHs are primarily associated with the fine-grained matrix, where the organic polymer occurs. In addition, highly substituted PAH skeletons were observed. Interiors of individual chondrules were devoid of PAHs at the detection limit (about 0.05 ppm).
Method for extracting long-equivalent wavelength interferometric information
NASA Technical Reports Server (NTRS)
Hochberg, Eric B. (Inventor)
1991-01-01
A process for extracting long-equivalent wavelength interferometric information from a two-wavelength polychromatic or achromatic interferometer. The process comprises the steps of simultaneously recording a non-linear sum of two different frequency visible light interferograms on a high resolution film and then placing the developed film in an optical train for Fourier transformation, low pass spatial filtering and inverse transformation of the film image to produce low spatial frequency fringes corresponding to a long-equivalent wavelength interferogram. The recorded non-linear sum irradiance derived from the two-wavelength interferometer is obtained by controlling the exposure so that the average interferogram irradiance is set at either the noise level threshold or the saturation level threshold of the film.
Phase-resolved and time-averaged puff motions of an excited stack-issued transverse jet
NASA Astrophysics Data System (ADS)
Hsu, C. M.; Huang, R. F.
2013-07-01
The dynamics of puff motions in an excited stack-issued transverse jet were studied experimentally in a wind tunnel. The temporal and spatial evolution processes of the puffs induced by acoustic excitation were examined using the smoke flow visualization method and high-speed particle image velocimetry. The temporal and spatial evolutions of the puffs were examined using phase-resolved ensemble-averaged velocity fields and the velocity, length scales, and vorticity characteristics of the puffs were studied. The time-averaged velocity fields were calculated to analyze the velocity distributions and vorticity contours. The results show that a puff consists of a pair of counter-rotating vortex rings. An initial vortex ring was formed due to a concentration of vorticity at the lee side of the issuing jet at the instant of the mid-oscillation cycle. A vortex ring rotating in the opposite direction to that of the initial vortex ring was subsequently formed at the upwind side of the issuing jet. These two counter-rotating vortex rings formed a "mushroom" vortex pair, which was deflected by the crossflow and traveled downstream along a time-averaged trajectory of zero vorticity. The trajectory was situated far above the time-averaged streamline evolving from the leading edge of the tube. The velocity magnitudes of the vortex rings at the upwind and the lee side decreased with time evolution as the puffs traveled downstream due to momentum dissipation and entrainment effects. The puffs traveling along the trajectory of zero vorticity caused large velocities to appear above the leading-edge streamline.
Henriksson, Linda; Karvonen, Juha; Salminen-Vaparanta, Niina; Railo, Henry; Vanni, Simo
2012-01-01
The localization of visual areas in the human cortex is typically based on mapping the retinotopic organization with functional magnetic resonance imaging (fMRI). The most common approach is to encode the response phase for a slowly moving visual stimulus and to present the result on an individual's reconstructed cortical surface. The main aims of this study were to develop complementary general linear model (GLM)-based retinotopic mapping methods and to characterize the inter-individual variability of the visual area positions on the cortical surface. We studied 15 subjects with two methods: a 24-region multifocal checkerboard stimulus and a blocked presentation of object stimuli at different visual field locations. The retinotopic maps were based on weighted averaging of the GLM parameter estimates for the stimulus regions. In addition to localizing visual areas, both methods could be used to localize multiple retinotopic regions-of-interest. The two methods yielded consistent retinotopic maps in the visual areas V1, V2, V3, hV4, and V3AB. In the higher-level areas IPS0, VO1, LO1, LO2, TO1, and TO2, retinotopy could only be mapped with the blocked stimulus presentation. The gradual widening of spatial tuning and an increase in the responses to stimuli in the ipsilateral visual field along the hierarchy of visual areas likely reflected the increase in the average receptive field size. Finally, after registration to Freesurfer's surface-based atlas of the human cerebral cortex, we calculated the mean and variability of the visual area positions in the spherical surface-based coordinate system and generated probability maps of the visual areas on the average cortical surface. The inter-individual variability in the area locations decreased when the midpoints were calculated along the spherical cortical surface compared with volumetric coordinates. 
These results can facilitate both analysis of individual functional anatomy and comparisons of visual cortex topology across studies. PMID:22590626
Aerosol radiative effects and forcing: spatial and temporal distributions
NASA Astrophysics Data System (ADS)
Kinne, Stefan
2014-05-01
A monthly climatology for aerosol optical properties, based on a synthesis of global modeling and observational data, has been applied to illustrate spatial distributions and global averages of aerosol radiative impacts. With the help of a pre-industrial reference for aerosol optical properties from global modeling, the aerosol direct forcing (ca. -0.35 W/m2, globally and annually averaged), its spatial and seasonal distributions, and the contributions of individual aerosol components are also estimated. Finally, CCN and IN concentrations associated with this climatology are applied to estimate aerosol indirect effects and forcing.
Kabara, J F; Bonds, A B
2001-12-01
Responses of cat striate cortical cells to a drifting sinusoidal grating were modified by the superimposition of a second, perturbing grating (PG) that did not excite the cell when presented alone. One consequence of the presence of a PG was a shift in the tuning curves. The orientation tuning of all 41 cells exposed to a PG and the spatial frequency tuning of 83% of the 23 cells exposed to a PG showed statistically significant dislocations of both the response function peak and center of mass from their single grating values. As found in earlier reports, the presence of PGs suppressed responsiveness. However, reductions measured at the single grating optimum orientation or spatial frequency were on average 1.3 times greater than the suppression found at the peak of the response function modified by the presence of the PG. Much of the loss in response seen at the single grating optimum is thus a result of a shift in the tuning function rather than outright suppression. On average orientation shifts were repulsive and proportional (approximately 0.10 deg/deg) to the angle between the perturbing stimulus and the optimum single grating orientation. Shifts in the spatial frequency response function were both attractive and repulsive, resulting in an overall average of zero. For both simple and complex cells, PGs generally broadened orientation response function bandwidths. Similarly, complex cell spatial frequency response function bandwidths broadened. Simple cell spatial frequency response functions usually did not change, and those that did broadened only 4% on average. These data support the hypothesis that additional sinusoidal components in compound stimuli retune cells' response functions for orientation and spatial frequency.
Spatial Heterogeneity in the Effects of Immigration and Diversity on Neighborhood Homicide Rates
Graif, Corina; Sampson, Robert J.
2010-01-01
This paper examines the connection of immigration and diversity to homicide by advancing a recently developed approach to modeling spatial dynamics—geographically weighted regression. In contrast to traditional global averaging, we argue on substantive grounds that neighborhood characteristics vary in their effects across neighborhood space, a process of “spatial heterogeneity.” Much like treatment-effect heterogeneity and distinct from spatial spillover, our analysis finds considerable evidence that neighborhood characteristics in Chicago vary significantly in predicting homicide, in some cases showing countervailing effects depending on spatial location. In general, however, immigrant concentration is either unrelated or inversely related to homicide, whereas language diversity is consistently linked to lower homicide. The results shed new light on the immigration-homicide nexus and suggest the pitfalls of global averaging models that hide the reality of a highly diversified and spatially stratified metropolis. PMID:20671811
Caveats for the spatial arrangement method: Comment on Hout, Goldinger, and Ferguson (2013).
Verheyen, Steven; Voorspoels, Wouter; Vanpaemel, Wolf; Storms, Gert
2016-03-01
The gold standard among proximity data collection methods for multidimensional scaling is the (dis)similarity rating of pairwise presented stimuli. A drawback of the pairwise method is its lengthy duration, which may cause participants to change their strategy over time, become fatigued, or disengage altogether. Hout, Goldinger, and Ferguson (2013) recently made a case for the Spatial Arrangement Method (SpAM) as an alternative to the pairwise method, arguing that it is faster and more engaging. SpAM invites participants to directly arrange stimuli on a computer screen such that the interstimuli distances are proportional to psychological proximity. Based on a reanalysis of the Hout et al. (2013) data, we identify three caveats for SpAM. An investigation of the distributional characteristics of the SpAM proximity data reveals that the spatial nature of SpAM imposes structure on the data, invoking a bias against featural representations. Individual-differences scaling of the SpAM proximity data reveals that the two-dimensional nature of SpAM allows individuals to only communicate two dimensions of variation among stimuli properly, invoking a bias against high-dimensional scaling representations. Monte Carlo simulations indicate that in order to obtain reliable estimates of the group average, SpAM requires more individuals to be tested. We conclude with an overview of considerations that can inform the choice between SpAM and the pairwise method and offer suggestions on how to overcome their respective limitations.
Ensemble coding remains accurate under object and spatial visual working memory load.
Epstein, Michael L; Emmanouil, Tatiana A
2017-10-01
A number of studies have provided evidence that the visual system statistically summarizes large amounts of information that would exceed the limitations of attention and working memory (ensemble coding). However the necessity of working memory resources for ensemble coding has not yet been tested directly. In the current study, we used a dual task design to test the effect of object and spatial visual working memory load on size averaging accuracy. In Experiment 1, we tested participants' accuracy in comparing the mean size of two sets under various levels of object visual working memory load. Although the accuracy of average size judgments depended on the difference in mean size between the two sets, we found no effect of working memory load. In Experiment 2, we tested the same average size judgment while participants were under spatial visual working memory load, again finding no effect of load on averaging accuracy. Overall our results reveal that ensemble coding can proceed unimpeded and highly accurately under both object and spatial visual working memory load, providing further evidence that ensemble coding reflects a basic perceptual process distinct from that of individual object processing.
Mao, Yingming; Sang, Shuxun; Liu, Shiqi; Jia, Jinlong
2014-05-01
The spatial variation of soil pH and soil organic matter (SOM) in the urban area of Xuzhou, China, was investigated in this study. Conventional statistics, geostatistics, and a geographical information system (GIS) were used to produce spatial distribution maps and to provide information about land use types. A total of 172 soil samples were collected based on grid method in the study area. Soil pH ranged from 6.47 to 8.48, with an average of 7.62. SOM content was very variable, ranging from 3.51 g/kg to 17.12 g/kg, with an average of 8.26 g/kg. Soil pH followed a normal distribution, while SOM followed a log-normal distribution. The results of semi-variograms indicated that soil pH and SOM had strong (21%) and moderate (44%) spatial dependence, respectively. The variogram model was spherical for soil pH and exponential for SOM. The spatial distribution maps were achieved using kriging interpolation. The high pH and high SOM tended to occur in the mixed forest land cover areas such as those in the southwestern part of the urban area, while the low values were found in the eastern and the northern parts, probably due to the effect of industrial and human activities. In the central urban area, the soil pH was low, but the SOM content was high, which is mainly attributed to the disturbance of regional resident activities and urban transportation. Furthermore, anthropogenic organic particles are possible sources of organic matter after entering the soil ecosystem in urban areas. These maps provide useful information for urban planning and environmental management.
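The variogram quantities referred to above, the spherical and exponential model forms and the nugget-to-sill ratio used to classify spatial dependence (commonly: below 25% strong, 25-75% moderate), can be sketched as follows. The parameter values are hypothetical, chosen only to reproduce the reported 21% and 44% ratios:

```python
import math

def spherical(h, nugget, psill, rng):
    # spherical semivariogram: reaches the total sill exactly at h = range
    if h <= 0:
        return 0.0
    if h >= rng:
        return nugget + psill
    r = h / rng
    return nugget + psill * (1.5 * r - 0.5 * r ** 3)

def exponential(h, nugget, psill, rng):
    # exponential semivariogram, "practical range" convention (~95% of sill at h = range)
    if h <= 0:
        return 0.0
    return nugget + psill * (1.0 - math.exp(-3.0 * h / rng))

def nugget_ratio(nugget, psill):
    # nugget-to-total-sill ratio: the fraction of variance that is spatially unstructured
    return nugget / (nugget + psill)

ph_ratio = nugget_ratio(0.021, 0.079)  # pH: strong spatial dependence (21%)
som_ratio = nugget_ratio(4.4, 5.6)     # SOM: moderate spatial dependence (44%)
```

Fitted model curves like these are what kriging then uses to weight neighboring samples when interpolating the pH and SOM maps.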
NASA Astrophysics Data System (ADS)
Green, T. R.; Erksine, R. H.; David, O.; Ascough, J. C., II; Kipka, H.; Lloyd, W. J.; McMaster, G. S.
2015-12-01
Water movement and storage within a watershed may be simulated at different spatial resolutions of land areas or hydrological response units (HRUs). Here, effects of HRU size on simulated soil water and surface runoff are tested using the AgroEcoSystem-Watershed (AgES-W) model with three different resolutions of HRUs. We studied a 56-ha agricultural watershed in northern Colorado, USA farmed primarily under a wheat-fallow rotation. The delineation algorithm was based upon topography (surface flow paths), land use (crop management strips and native grass), and mapped soil units (three types), which produced HRUs that follow the land use and soil boundaries. AgES-W model parameters that control surface and subsurface hydrology were calibrated using simulated daily soil moisture at different landscape positions and depths where soil moisture was measured hourly and averaged up to daily values. Parameter sets were both uniform and spatially variable with depth and across the watershed (5 different calibration approaches). Although forward simulations were computationally efficient (less than 1 minute each), each calibration required thousands of model runs. Execution of such large jobs was facilitated by using the Object Modeling System with the Cloud Services Innovation Platform to manage four virtual machines on a commercial web service configured with a total of 64 computational cores and 120 GB of memory. Results show how spatially distributed and averaged soil moisture and runoff at the outlet vary with different HRU delineations. The results will help guide HRU delineation, spatial resolution and parameter estimation methods for improved hydrological simulations in this and other semi-arid agricultural watersheds.
NASA Astrophysics Data System (ADS)
Monasson, R.; Rosay, S.
2014-03-01
The dynamics of a neural model for hippocampal place cells storing spatial maps is studied. In the absence of external input, depending on the number of cells and on the values of control parameters (number of environments stored, level of neural noise, average level of activity, connectivity of place cells), a "clump" of spatially localized activity can diffuse or remains pinned due to crosstalk between the environments. In the single-environment case, the macroscopic coefficient of diffusion of the clump and its effective mobility are calculated analytically from first principles and corroborated by numerical simulations. In the multienvironment case the heights and the widths of the pinning barriers are analytically characterized with the replica method; diffusion within one map is then in competition with transitions between different maps. Possible mechanisms enhancing mobility are proposed and tested.
NASA Technical Reports Server (NTRS)
Kimes, D. S.; Kerber, A. G.; Sellers, P. J.
1993-01-01
Spatial averaging errors that may occur when hemispherical reflectance maps for different cover types are created using the direct nadir technique are assessed by comparing the results with those obtained using a knowledge-based system called VEG (Kimes et al., 1991, 1992). It was found that hemispherical reflectance errors obtained with VEG are much smaller than those from the direct nadir techniques, depending on conditions. Suggestions are made concerning sampling and averaging strategies for creating hemispherical reflectance maps for photosynthetic, carbon cycle, and climate change studies.
Song, Weize; Jia, Haifeng; Li, Zhilin; Tang, Deliang
2018-08-01
Urban air pollutant distribution is a concern in environmental and health studies. In particular, the spatial distribution of NO2 and PM2.5, which represent photochemical smog and haze pollution in urban areas, is of concern. This paper presents a study quantifying the seasonal differences between urban NO2 and PM2.5 distributions in Foshan, China. A geographical semi-variogram analysis was conducted to delineate the spatial variation in daily NO2 and PM2.5 concentrations. The data were collected from 38 sites in the government-operated monitoring network. The results showed that the total spatial variance of NO2 was 38.5% higher than that of PM2.5. The random spatial variance of NO2 was 1.6 times that of PM2.5. The nugget effect (i.e., the ratio of random to total spatial variance) values of NO2 and PM2.5 were 29.7% and 20.9%, respectively. This indicates that urban NO2 distribution was affected by both local and regional influencing factors, while urban PM2.5 distribution was dominated by regional influencing factors. NO2 had a larger seasonally averaged spatial autocorrelation distance (48 km) than PM2.5 (33 km). The spatial range of NO2 autocorrelation was larger in winter than in the other seasons, while PM2.5 had a smaller range of spatial autocorrelation in winter than in the other seasons. Overall, geographical semi-variogram analysis is an effective method to enrich the understanding of NO2 and PM2.5 distributions. It can provide scientific evidence for selecting the buffering radius of spatial predictors for land use regression models. It will also be beneficial for developing targeted policies and measures to reduce NO2 and PM2.5 pollution levels.
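The nugget effect used in the semi-variogram analysis above is just the nugget (random) variance divided by the sill (total) variance. A minimal numpy sketch of the classical Matheron semivariance estimator and that ratio, assuming 1-D sample coordinates for simplicity (function names are illustrative, not from the paper):

```python
import numpy as np

def empirical_semivariogram(coords, values, lags, tol):
    """Classical (Matheron) estimator: half the mean squared difference
    between all sample pairs whose separation falls near each lag."""
    d = np.abs(coords[:, None] - coords[None, :])        # pairwise distances
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2  # half squared diffs
    gamma = []
    for h in lags:
        mask = (d > h - tol) & (d <= h + tol)
        gamma.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

def nugget_effect(nugget, sill):
    """Random-to-total spatial variance ratio, in percent."""
    return 100.0 * nugget / sill
```

With a hypothetical fitted nugget of 0.297 and a sill of 1.0, `nugget_effect` returns the 29.7% figure quoted above for NO2.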
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacFadden, Derek; Zhang Beibei; Brock, Kristy K.
Purpose: Increasing the magnetic resonance imaging (MRI) field strength can improve image resolution and quality, but concerns remain regarding the influence on geometric fidelity. The objectives of the present study were to spatially investigate the effect of 3-Tesla (3T) MRI on clinical target localization for stereotactic radiosurgery. Methods and Materials: A total of 39 patients were enrolled in a research ethics board-approved prospective clinical trial. Imaging (1.5T and 3T MRI and computed tomography) was performed after stereotactic frame placement. Stereotactic target localization at 1.5T vs. 3T was retrospectively analyzed in a representative cohort of patients with tumor (n = 4) and functional (n = 5) radiosurgical targets. The spatial congruency of the tumor gross target volumes was determined by the mean discrepancy between the average gross target volume surfaces at 1.5T and 3T. Reproducibility was assessed by the displacement from an averaged surface and volume congruency. Spatial congruency and the reproducibility of functional radiosurgical targets were determined by comparing the mean and standard deviation of the isocenter coordinates. Results: Overall, the mean absolute discrepancy across all patients was 0.67 mm (95% confidence interval, 0.51-0.83), significantly <1 mm (p < .010). No differences were found in the overall interuser target volume congruence (mean, 84% for 1.5T vs. 84% for 3T, p > .4), and the gross target volume surface mean displacements were similar within and between users. The overall average isocenter coordinate discrepancy for the functional targets at 1.5T and 3T was 0.33 mm (95% confidence interval, 0.20-0.48), with no patient-specific differences between the mean values (p > .2) or standard deviations (p > .1). Conclusion: Our results have provided clinically relevant evidence supporting the spatial validity of 3T MRI for use in stereotactic radiosurgery under the imaging conditions used.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, C; Chetty, I; Mao, W
Purpose: To utilize deformable dose accumulation (DDA) to determine how cold spots within the PTV change over the course of fractionated head and neck (H&N) radiotherapy. Methods: Voxel-based dose was tracked using a DDA platform. The DDA process consisted of B-spline-based deformable image registration (DIR) and dose accumulation between planning CTs and daily cone-beam CTs for 10 H&N cancer patients. Cold spots within the PTV (regions receiving less than the prescription, 70 Gy) were contoured on the cumulative dose distribution. These cold spots were mapped to each fraction, starting from the first fraction, to determine how they changed. Spatial correlation between cold spot regions over each fraction, relative to the last fraction, was computed using the Jaccard index Jk(Mk, N), where N is the cold spot within the PTV at the end of the treatment and Mk the same region for fraction k. Results: Figure 1 shows good spatial correlation between cold spots, and highlights expansion of the cold spot region over the course of treatment as a result of setup uncertainties and anatomical changes. Figure 2 shows a plot of Jk versus fraction number k averaged over 10 patients. This confirms the good spatial correlation between cold spots over the course of treatment. On average, Jk reaches ~90% at fraction 22, suggesting that possible intervention (e.g., reoptimization) may mitigate the cold spot region. The cold spot D99, averaged over 10 patients, corresponded to a dose of ~65 Gy, relative to the prescription dose of 70 Gy. Conclusion: DDA-based tracking provides spatial dose information, which can be used to monitor dose in different regions of the treatment plan, thereby enabling appropriate mid-treatment interventions. This work is supported in part by Varian Medical Systems, Palo Alto, CA.
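The Jaccard index used above to quantify cold-spot overlap is the ratio of the intersection to the union of two regions. A small numpy sketch for binary voxel masks (an illustrative stand-alone function, not the authors' DDA platform):

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Jaccard index |A intersect B| / |A union B| of two boolean voxel masks."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # two empty regions are trivially identical
    return np.logical_and(a, b).sum() / union
```

Applied per fraction, `jaccard(mask_fraction_k, mask_final)` yields the Jk values plotted against fraction number k.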
Experimental criteria for the determination of fractal parameters of premixed turbulent flames
NASA Astrophysics Data System (ADS)
Shepherd, I. G.; Cheng, Robert K.; Talbot, L.
1992-10-01
The influence of spatial resolution, digitization noise, the number of records used for averaging, and the method of analysis on the determination of the fractal parameters of a high Damköhler number, methane/air, premixed, turbulent stagnation-point flame is investigated in this paper. The flow exit velocity was 5 m/s and the turbulent Reynolds number was 70, based on an integral scale of 3 mm and a turbulence intensity of 7%. The light source was a copper vapor laser which delivered 20 ns, 5 mJ pulses at 4 kHz, and the tomographic cross-sections of the flame were recorded by a high-speed movie camera. The spatial resolution of the images is 155 × 121 μm/pixel with a field of view of 50 × 65 mm. The stepping caliper technique for obtaining the fractal parameters is found to give the clearest indication of the cutoffs and the effects of noise. It is necessary to ensemble average the results from more than 25 statistically independent images to sufficiently reduce the scatter in the fractal parameters. The effects of reduced spatial resolution on fractal plots are estimated by artificial degradation of the resolution of the digitized flame boundaries. The effect of pixel resolution, an apparent increase in flame length below the inner-scale rolloff, appears in the fractal plots when the measurement scale is less than approximately twice the pixel resolution. Although a clearer determination of fractal parameters is obtained by local averaging of the flame boundaries, which removes digitization noise, at low spatial resolution this technique can reduce the fractal dimension. The degree of fractal isotropy of the flame surface can have a significant effect on the estimation of the flame surface area, and hence burning rate, from two-dimensional images. To estimate this isotropy a determination of the outer cutoff is required, and three-dimensional measurements are probably also necessary.
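The stepping caliper (divider) technique mentioned above walks a fixed-length ruler along the flame boundary and infers the fractal dimension D from how the measured length L(r) scales with the ruler size r (L(r) ∝ r^(1-D) between the inner and outer cutoffs). A toy sketch for an ordered 2-D boundary, assuming the contour is available as an ordered point list (illustrative only, not the authors' analysis code):

```python
import numpy as np

def caliper_length(points, ruler):
    """Step a fixed-length ruler along an ordered boundary; return the
    measured length (number of steps times ruler length)."""
    pts = np.asarray(points, float)
    i, steps = 0, 0
    while True:
        # first point at least one ruler length from the current anchor
        d = np.hypot(*(pts[i + 1:] - pts[i]).T)
        hits = np.nonzero(d >= ruler)[0]
        if hits.size == 0:
            break
        i += hits[0] + 1
        steps += 1
    return steps * ruler

def fractal_dimension(points, rulers):
    """Fit log L(r) vs log r; the slope equals 1 - D for a fractal boundary."""
    lengths = np.array([caliper_length(points, r) for r in rulers])
    slope = np.polyfit(np.log(rulers), np.log(lengths), 1)[0]
    return 1.0 - slope
```

For a smooth (non-fractal) boundary the measured length is independent of ruler size, so the slope is zero and D = 1, as expected below the inner cutoff.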
Zhang, Zhenming; Zhou, Yunchao; Wang, Shijie
2018-01-01
Karst areas are typical ecologically fragile areas, and stony desertification has become one of the most serious ecological and economic problems in these areas worldwide, as well as a source of disasters and poverty. A reasonable sampling scale is of great importance for research on soil science in karst areas. In this paper, the spatial distribution of stony desertification characteristics and its influencing factors in karst areas are studied at different sampling scales using a grid sampling method based on geographic information system (GIS) technology and geo-statistics. The rock exposure obtained through sampling over a 150 m × 150 m grid in the Houzhai River Basin was utilized as the original data, and five grid scales (300 m × 300 m, 450 m × 450 m, 600 m × 600 m, 750 m × 750 m, and 900 m × 900 m) were used as the subsample sets. The results show that the rock exposure does not vary substantially from one sampling scale to another, while the average values of the five subsamples all fluctuate around the average value of the entire set. As the sampling scale increases, the maximum value and the average value of the rock exposure gradually decrease, and the coefficient of variability gradually increases. At the scale of 150 m × 150 m, the areas of minor, medium, and major stony desertification in the Houzhai River Basin are 7.81 km², 4.50 km², and 1.87 km², respectively. The spatial variability of stony desertification at small scales is influenced by many factors, and the variability at medium scales is jointly influenced by gradient, rock content, and rock exposure. At large scales, the spatial variability of stony desertification is mainly influenced by soil thickness and rock content. PMID:29652811
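The scale comparison above amounts to thinning the fine survey grid and recomputing summary statistics such as the coefficient of variability. A small numpy sketch, under the assumption of a regular 2-D grid of rock-exposure values (names are illustrative):

```python
import numpy as np

def coefficient_of_variation(x):
    """Sample standard deviation divided by the mean."""
    x = np.asarray(x, float)
    return x.std(ddof=1) / x.mean()

def subsample(grid, step):
    """Keep every `step`-th cell in both directions, mimicking a coarser
    sampling grid laid over the original fine survey grid."""
    return grid[::step, ::step]
```

For example, `subsample(grid, 2)` turns a 150 m spacing into a 300 m spacing, and comparing `coefficient_of_variation` across steps reproduces the reported increase of variability with sampling scale.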
NASA Astrophysics Data System (ADS)
Chybicki, Andrzej; Łubniewski, Zbigniew
2017-09-01
Satellite imaging systems have known limitations regarding their spatial and temporal resolution. Approaches based on subpixel mapping of the Earth's environment, which combine data retrieved from sensors of higher temporal and lower spatial resolution with data characterized by lower temporal but higher spatial resolution, are of considerable interest. The paper presents the downscaling of land surface temperature (LST) derived from low-resolution imagery acquired by the Advanced Very High Resolution Radiometer (AVHRR), using an inverse technique. The effective emissivity derived from another data source is used as a quantity describing the thermal properties of the terrain at higher resolution, and allows the downscaling of low spatial resolution LST images. The authors propose an optimized downscaling method formulated as an inverse problem and show that the proposed approach yields better results than other downscaling methods. The proposed method aims to estimate high spatial resolution LST data by minimizing the global error of the downscaling. In particular, for the investigated region of the Gulf of Gdansk, the RMSE between the AVHRR image downscaled by the proposed method and the Landsat 8 LST reference image was 2.255°C, with a correlation coefficient R of 0.828 and a bias of 0.557°C. For comparison, the PBIM method yielded RMSE = 2.832°C, R = 0.775, and bias = 0.997°C for the same satellite scene. It has also been shown that the results are good at the local scale and can be used for areas much smaller than the entire satellite scene, depicting diverse biophysical conditions. Specifically, for the analyzed set of small sub-datasets of the whole scene, the RMSE between the downscaled and reference images was smaller, by approximately 0.53°C on average, with the proposed method than with the PBIM method.
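The three agreement statistics quoted above (RMSE, correlation coefficient R, and bias) between a downscaled field and a reference image are straightforward to compute; a minimal numpy sketch (illustrative, not the authors' code):

```python
import numpy as np

def downscaling_metrics(estimate, reference):
    """RMSE, Pearson correlation R, and bias between a downscaled LST
    field and a higher-resolution reference (e.g. a Landsat 8 LST image)."""
    e = np.ravel(estimate).astype(float)
    r = np.ravel(reference).astype(float)
    rmse = np.sqrt(np.mean((e - r) ** 2))
    corr = np.corrcoef(e, r)[0, 1]
    bias = np.mean(e - r)
    return rmse, corr, bias
```

Positive bias means the downscaled product runs warm relative to the reference, as in the 0.557°C figure reported above.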
NASA Astrophysics Data System (ADS)
Park, Jonghee; Yoon, Kuk-Jin
2015-02-01
We propose a real-time line matching method for stereo systems. To achieve real-time performance while retaining a high level of matching precision, we first propose a nonparametric transform to represent the spatial relations between neighboring lines and nearby textures as a binary stream. Since the length of a line can vary across images, the matching costs between lines are computed within an overlap area (OA) based on the binary stream. The OA is determined for each line pair by employing the properties of a rectified image pair. Finally, the line correspondence is determined using a winner-takes-all method with a left-right consistency check. To reduce the computational time requirements further, we filter out unreliable matching candidates in advance based on their rectification properties. The performance of the proposed method was compared with state-of-the-art methods in terms of the computational time, matching precision, and recall. The proposed method required 47 ms to match lines from an image pair in the KITTI dataset with an average precision of 95%. We also verified the proposed method under image blur, illumination variation, and viewpoint changes.
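The paper's nonparametric transform encodes spatial relations as a binary stream; the matching cost and the winner-takes-all step with a left-right consistency check can be illustrated with a generic Hamming-distance cost (a simplified stand-in, not the authors' implementation):

```python
import numpy as np

def hamming_cost(desc_a, desc_b):
    """Matching cost between two equal-length binary descriptor streams:
    the fraction of bits that disagree (0 = identical neighbourhoods)."""
    a, b = np.asarray(desc_a, bool), np.asarray(desc_b, bool)
    return np.count_nonzero(a ^ b) / a.size

def winner_takes_all(costs):
    """Given a cost matrix (rows: left-image lines, cols: right-image
    lines), return (i, j) pairs that are mutual best matches, i.e. pass
    the left-right consistency check."""
    left_best = costs.argmin(axis=1)
    right_best = costs.argmin(axis=0)
    return [(i, j) for i, j in enumerate(left_best) if right_best[j] == i]
```

In the paper the cost would only be evaluated over the overlap area (OA) of each candidate line pair; here the descriptors are assumed pre-cropped to that overlap.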
Globally optimal tumor segmentation in PET-CT images: a graph-based co-segmentation method.
Han, Dongfeng; Bayouth, John; Song, Qi; Taurani, Aakant; Sonka, Milan; Buatti, John; Wu, Xiaodong
2011-01-01
Tumor segmentation in PET and CT images is notoriously challenging due to the low spatial resolution in PET and low contrast in CT images. In this paper, we propose a general framework to use both PET and CT images simultaneously for tumor segmentation. Our method utilizes the strength of each imaging modality: the superior contrast of PET and the superior spatial resolution of CT. We formulate this problem as a Markov random field (MRF) based segmentation of the image pair with a regularization term that penalizes the segmentation difference between PET and CT. Our method simulates the clinical practice of delineating tumor simultaneously using both PET and CT, and is able to concurrently segment tumor from both modalities, achieving globally optimal solutions in low-order polynomial time by a single maximum-flow computation. The method was evaluated on clinically relevant tumor segmentation problems. The results showed that our method can effectively make use of both PET and CT image information, yielding a segmentation accuracy of 0.85 in Dice similarity coefficient and an average median Hausdorff distance (HD) of 6.4 mm, a 10% (resp., 16%) improvement compared to the graph cuts method solely using the PET (resp., CT) images.
Spatial patterns of erosion in a bedrock gorge
NASA Astrophysics Data System (ADS)
Beer, Alexander R.; Turowski, Jens M.; Kirchner, James W.
2017-01-01
Understanding the physical processes driving bedrock channel formation is essential for interpreting and predicting the evolution of mountain landscapes. Here we analyze bedrock erosion patterns measured at unprecedented spatial resolution (mm) over 2 years in a natural bedrock gorge. These spatial patterns show that local bedrock erosion rates depend on position in the channel cross section, height above the streambed, and orientation relative to the main streamflow and sediment path. These observations are consistent with the expected spatial distribution of impacting particles (the tools effect) and shielding by sediment on the bed (the cover effect). Vertical incision by bedrock abrasion averaged 1.5 mm/a, lateral abrasion averaged 0.4 mm/a, and downstream directed abrasion of flow obstacles averaged 2.6 mm/a. However, a single plucking event locally exceeded these rates by orders of magnitude (~100 mm/a), and accounted for one third of the eroded volume in the studied gorge section over the 2 year study period. Hence, if plucking is spatially more frequent than we observed in this study period, it may contribute substantially to long-term erosion rates, even in the relatively massive bedrock at our study site. Our observations demonstrate the importance of bedrock channel morphology and the spatial distribution of moving and static sediment in determining local erosion rates.
Reconstructing paleoclimate fields using online data assimilation with a linear inverse model
NASA Astrophysics Data System (ADS)
Perkins, Walter A.; Hakim, Gregory J.
2017-05-01
We examine the skill of a new approach to climate field reconstructions (CFRs) using an online paleoclimate data assimilation (PDA) method. Several recent studies have foregone climate model forecasts during assimilation due to the computational expense of running coupled global climate models (CGCMs) and the relatively low skill of these forecasts on longer timescales. Here we greatly diminish the computational cost by employing an empirical forecast model (linear inverse model, LIM), which has been shown to have skill comparable to CGCMs for forecasting annual-to-decadal surface temperature anomalies. We reconstruct annual-average 2 m air temperature over the instrumental period (1850-2000) using proxy records from the PAGES 2k Consortium Phase 1 database; proxy models for estimating proxy observations are calibrated on GISTEMP surface temperature analyses. We compare results for LIMs calibrated using observational (Berkeley Earth), reanalysis (20th Century Reanalysis), and CMIP5 climate model (CCSM4 and MPI) data relative to a control offline reconstruction method. Generally, we find that the usage of LIM forecasts for online PDA increases reconstruction agreement with the instrumental record for both spatial fields and global mean temperature (GMT). Specifically, the coefficient of efficiency (CE) skill metric for detrended GMT increases by an average of 57 % over the offline benchmark. LIM experiments display a common pattern of skill improvement in the spatial fields over Northern Hemisphere land areas and in the high-latitude North Atlantic-Barents Sea corridor. Experiments for non-CGCM-calibrated LIMs reveal region-specific reductions in spatial skill compared to the offline control, likely due to aspects of the LIM calibration process. Overall, the CGCM-calibrated LIMs have the best performance when considering both spatial fields and GMT. 
A comparison with the persistence forecast experiment suggests that improvements are associated with the linear dynamical constraints of the forecast and not simply persistence of temperature anomalies.
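The coefficient of efficiency (CE) skill metric reported above is conventionally defined, like the Nash-Sutcliffe efficiency, as one minus the ratio of the squared reconstruction error to the variance of the verification data; a minimal sketch assuming that convention (which the abstract does not spell out):

```python
import numpy as np

def coefficient_of_efficiency(reconstruction, verification):
    """CE = 1 - SSE / total sum of squares of the verification data.
    CE = 1 is a perfect reconstruction; CE <= 0 means no skill beyond
    simply predicting the verification mean."""
    x = np.asarray(verification, float)
    xhat = np.asarray(reconstruction, float)
    sse = np.sum((x - xhat) ** 2)
    ss_total = np.sum((x - x.mean()) ** 2)
    return 1.0 - sse / ss_total
```

Under this definition, a 57% increase in CE for detrended GMT means the online reconstruction's error variance is substantially smaller relative to the offline benchmark.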
Exploring Spatial and Temporal Distribution of Cutaneous Leishmaniasis in the Americas, 2001–2011
Yadón, Zaida E.; Saboyá Díaz, Martha Idali; de Araújo Lucena, Francisca de Fátima; Castellanos, Luis Gerardo; Sanchez-Vazquez, Manuel J.
2016-01-01
Leishmaniasis is an important health problem in several countries in the Americas, and case notification is limited and underreported. In 2008, the Pan American Health Organization (PAHO/WHO) met with endemic countries to discuss the status of surveillance systems and the need for region-wide improvement. The objective is to describe the temporal and spatial distribution of cutaneous leishmaniasis (CL) cases reported to PAHO/WHO by the endemic countries between 2001 and 2011 in the Americas. Methods: Cases reported in the period 2001-2011 from 14 of 18 CL-endemic countries were included in this study, using two spreadsheets to collect the data. Two indicators were analyzed: CL cases and incidence rate. The local regression method was used to analyze case trends and incidence rates over the whole study period, and for 2011 the spatial distribution of each indicator was analyzed by quartile and stratified into four groups. Results: From 2001 to 2011, 636,683 CL cases were reported by 14 countries, with a 30% increase in reported cases. The average incidence rate in the Americas was 15.89/100,000 inhabitants. In 2011, 15 countries reported cases in 180 of a total of 292 first-subnational-level units. The global incidence rate for all countries was 17.42 cases per 100,000 inhabitants, while in the 180 administrative units at the first subnational level, the average incidence rate was 57.52/100,000 inhabitants. Nicaragua and Panama had the highest incidence, but more cases occurred in Brazil and Colombia. Spatial distribution was heterogeneous for each indicator and when analyzed at different administrative levels. The results showed different distribution patterns, illustrating the limitation of using individual indicators and the need to classify higher-risk areas in order to prioritize actions.
This study shows the epidemiological patterns using secondary data and the importance of using multiple indicators to define and characterize smaller territorial units for surveillance and control of leishmaniasis. PMID:27824881
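The incidence rates quoted above are simply cases per 100,000 inhabitants; a one-line sketch (the population figure in the test is hypothetical, chosen only to reproduce the 17.42 rate):

```python
def incidence_rate(cases, population, per=100_000):
    """Cases per `per` inhabitants, as used for the CL indicators."""
    return cases / population * per
```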
NASA Astrophysics Data System (ADS)
Soto, Marcelo A.; Denisov, Andrey; Angulo-Vinuesa, Xabier; Martin-Lopez, Sonia; Thévenaz, Luc; Gonzalez-Herraez, Miguel
2017-04-01
A method for distributed birefringence measurements is proposed based on the interference pattern generated by the interrogation of a dynamic Brillouin grating (DBG) using two short consecutive optical pulses. Compared to existing DBG interrogation techniques, the method offers an improved sensitivity to birefringence changes thanks to the interferometric effect generated by the reflections of the two pulses. Experimental results demonstrate the possibility to obtain the longitudinal birefringence profile of a 20 m-long Panda fibre with an accuracy of 10⁻⁸ using 16 averages and 30 cm spatial resolution. The method enables sub-metric and highly accurate distributed temperature and strain sensing.
A Method for Assessing Auditory Spatial Analysis in Reverberant Multitalker Environments.
Weller, Tobias; Best, Virginia; Buchholz, Jörg M; Young, Taegan
2016-07-01
Deficits in spatial hearing can have a negative impact on listeners' ability to orient in their environment and follow conversations in noisy backgrounds and may exacerbate the experience of hearing loss as a handicap. However, there are no good tools available for reliably capturing the spatial hearing abilities of listeners in complex acoustic environments containing multiple sounds of interest. The purpose of this study was to explore a new method to measure auditory spatial analysis in a reverberant multitalker scenario. This study was a descriptive case control study. Ten listeners with normal hearing (NH) aged 20-31 yr and 16 listeners with hearing impairment (HI) aged 52-85 yr participated in the study. The latter group had symmetrical sensorineural hearing losses with a four-frequency average hearing loss of 29.7 dB HL. A large reverberant room was simulated using a loudspeaker array in an anechoic chamber. In this simulated room, 96 scenes comprising between one and six concurrent talkers at different locations were generated. Listeners were presented with 45-sec samples of each scene, and were required to count, locate, and identify the gender of all talkers, using a graphical user interface on an iPad. Performance was evaluated in terms of correctly counting the sources and accuracy in localizing their direction. Listeners with NH were able to reliably analyze scenes with up to four simultaneous talkers, while most listeners with hearing loss demonstrated errors even with two talkers at a time. Localization performance decreased in both groups with increasing number of talkers and was significantly poorer in listeners with HI. Overall performance was significantly correlated with hearing loss. This new method appears to be useful for estimating spatial abilities in realistic multitalker scenes. The method is sensitive to the number of sources in the scene, and to effects of sensorineural hearing loss. 
Further work will be needed to compare this method to more traditional single-source localization tests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cronin, Keith R.; Runge, Troy M.; Zhang, Xuesong
Modeling the life cycle of fuel pathways for cellulosic ethanol (CE) can help identify logistical barriers and anticipated impacts for the emerging commercial CE industry. Such models contain high amounts of variability, primarily due to the varying nature of agricultural production but also because of limitations in the availability of data at the local scale, resulting in the typical practice of using average values. In this study, 12 spatially explicit, cradle-to-refinery-gate CE pathways were developed that vary by feedstock (corn stover, switchgrass, and Miscanthus), nitrogen application rate (higher, lower), pretreatment method (ammonia fiber expansion [AFEX], dilute acid), and co-product treatment method (mass allocation, sub-division), in which feedstock production was modeled at the watershed scale over a nine-county area in southwestern Michigan. When comparing feedstocks, the model showed that corn stover yielded higher global warming potential (GWP), acidification potential (AP), and eutrophication potential (EP) than the perennial feedstocks switchgrass and Miscanthus, on an average per-area basis. Full life cycle results per MJ of produced ethanol were more mixed, with corn stover-derived CE scenarios that use sub-division as a co-product treatment method yielding similarly favorable outcomes as switchgrass- and Miscanthus-derived CE scenarios. Variability was found to be greater between feedstocks than between watersheds. Additionally, scenarios using dilute acid pretreatment had more favorable results than those using AFEX pretreatment.
NASA Astrophysics Data System (ADS)
Hendrickx, Jan M. H.; Kleissl, Jan; Gómez Vélez, Jesús D.; Hong, Sung-ho; Fábrega Duque, José R.; Vega, David; Moreno Ramírez, Hernán A.; Ogden, Fred L.
2007-04-01
Accurate estimation of sensible and latent heat fluxes, as well as soil moisture, from remotely sensed satellite images poses a great challenge. Yet it is critical to face this challenge, since estimating the spatial and temporal distributions of these parameters over large areas is impossible using only ground measurements. A major difficulty for the calibration and validation of operational remote sensing methods such as SEBAL, METRIC, and ALEXI is the ground measurement of sensible heat fluxes at a scale similar to the spatial resolution of the remote sensing image. While the spatial length scale of remote sensing images covers a range from 30 m (Landsat) to 1000 m (MODIS), direct methods to measure sensible heat fluxes such as eddy covariance (EC) only provide point measurements at a scale that may be considerably smaller than the estimate obtained from a remote sensing method. The large-aperture scintillometer (LAS) flux footprint area is larger (up to 5000 m long) and its spatial extent better constrained than that of EC systems. Therefore, scintillometers offer the unique possibility of measuring the vertical flux of sensible heat averaged over areas comparable with several pixels of a satellite image (up to about 40 Landsat thermal pixels or about 5 MODIS thermal pixels). The objective of this paper is to present our experiences with an existing network of seven scintillometers in New Mexico and a planned network of three scintillometers in the humid tropics of Panama and Colombia.
Patient-specific estimation of spatially variant image noise for a pinhole cardiac SPECT camera.
Cuddy-Walsh, Sarah G; Wells, R Glenn
2018-05-01
New single photon emission computed tomography (SPECT) cameras using fixed pinhole collimation are increasingly popular. Pinhole collimators are known to have variable sensitivity with distance and angle from the pinhole aperture. It follows that pinhole SPECT systems will also have spatially variant sensitivity and hence spatially variant image noise. The objective of this study was to develop and validate a rapid method for analytically estimating a map of the noise magnitude in a reconstructed image using data from a single clinical acquisition. The projected voxel (PV) noise estimation method uses a modified forward projector with attenuation effects to estimate the number of photons detected from each voxel in the field-of-view. We approximate the noise for each voxel as the standard deviation of a Poisson distribution with a mean equal to the number of detected photons. An empirical formula is used to address scaling discrepancies caused by image reconstruction. Calibration coefficients are determined for the PV method by comparing it with noise measured from a nonparametrically bootstrapped set of images of a spherical, uniformly filled Tc-99m water phantom. Validation studies compare PV noise estimates with bootstrapped measured noise for 31 patient images (5 min, 340 MBq, Tc-99m tetrofosmin rest study). Bland-Altman analysis shows R² correlations ≥ 70% between the PV-estimated and measured image noise. For the 31 patient cardiac images, the PV noise estimate has an average bias of 0.1% compared to bootstrapped noise and a coefficient of variation (CV) ≤ 17%. The bootstrap approach to noise measurement requires 5 h of computation for each image, whereas the PV noise estimate requires only 64 s. In cardiac images, image noise due to attenuation and camera sensitivity varies on average from 4% at the apex to 9% in the basal posterior region of the heart.
The standard deviation between 15 healthy patient study images (including physiological variability in the population) ranges from 6% to 16.5% over the length of the heart. The PV method provides a rapid estimate for spatially variant patient-specific image noise magnitude in a pinhole-collimated dedicated cardiac SPECT camera with a bias of -0.3% and better than 83% precision. © 2018 American Association of Physicists in Medicine.
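The Poisson approximation at the heart of the PV method, taking each voxel's noise as the standard deviation of a Poisson distribution with mean equal to the detected counts, can be sketched in a few lines (a minimal illustration; the function name and the way calibration is applied are assumptions, not the authors' code):

```python
import numpy as np

def pv_noise_map(detected_counts, calibration=1.0):
    """Approximate per-voxel relative noise as Poisson std / mean.

    detected_counts: array of estimated photons detected from each voxel
    calibration: empirical scale factor for reconstruction effects
        (the paper fits such coefficients against bootstrapped noise;
        1.0 is a placeholder here).
    """
    counts = np.asarray(detected_counts, dtype=float)
    # std of Poisson(mean=N) is sqrt(N), so relative noise is sqrt(N)/N = 1/sqrt(N)
    with np.errstate(divide="ignore"):
        rel_noise = np.where(counts > 0, 1.0 / np.sqrt(counts), np.inf)
    return calibration * rel_noise

# a voxel that collects 400 photons has ~5% relative noise; 100 photons -> 10%
m = pv_noise_map([400, 100])
print(m[0], m[1])  # 0.05 0.1
```

This is also why the PV estimate is fast: it needs one forward projection rather than hundreds of bootstrap reconstructions.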
[Natural forming causes of China population distribution].
Fang, Yu; Ouyang, Zhi-Yun; Zheng, Hua; Xiao, Yi; Niu, Jun-Feng; Chen, Sheng-Bin; Lu, Fei
2012-12-01
The diverse natural environment in China causes the spatial heterogeneity of China's population distribution. It is essential to understand the interrelations between the population distribution pattern and the natural environment to enhance the understanding of the man-land relationship and to realize sustainable management of population, resources, and environment. This paper analyzed the China population distribution by adopting the index of population density (PD) in combination with spatial statistical methods and the Lorenz curve, and discussed the effects of natural factors on the population distribution and the interrelations between the population distribution and 16 indices, including average annual precipitation (AAP), average annual temperature (AAT), average annual sunshine duration (AASD), precipitation variation (PV), temperature variation (TV), sunshine duration variation (SDV), relative humidity (RH), aridity index (AI), warmth index (WI), ≥5 °C annual accumulated temperature (AACT), average elevation (AE), relative height difference (RHD), surface roughness (SR), water system density (WSD), net primary productivity (NPP), and shortest distance to seashore (SDTS). There existed an obvious aggregation phenomenon in the population distribution in China. The PD was high in east China, medium in central China, and low in west China, presenting an obvious positive spatial association. The PD was significantly positively correlated with WSD, AAT, AAP, NPP, AACT, PV, RH, and WI, and significantly negatively correlated with RHD, AE, SDV, SR, and SDTS. The climate factors (AAT, WI, PV, and NPP), topography factors (SR and RHD), and water system factor (WSD) together determined the basic pattern of the population distribution in China.
It was suggested that eco-environmental monitoring in densely populated east China should be strengthened to avoid eco-environmental degradation driven by the expanding population, and that conservation of the vulnerable eco-environments of central and west China should also be strengthened, to enhance the population carrying capacity of those regions and mitigate the eco-environmental pressure on densely populated east China.
Spatial accessibility of primary health care in China: A case study in Sichuan Province.
Wang, Xiuli; Yang, Huazhen; Duan, Zhanqi; Pan, Jay
2018-05-10
Access to primary health care is considered a fundamental right and an important facilitator of overall population health. Township health centers (THCs) and community health centers (CHCs) serve as central hubs of China's primary health care system and have been emphasized during recent health care reforms. Accessibility of these hubs is poorly understood, and a better understanding of the current situation is essential for proper decision making. This study assesses spatial access to health care provided by primary health care institutions (THCs/CHCs) in Sichuan Province as a microcosm of China. The Nearest-Neighbor method, Enhanced Two-Step Floating Catchment Area (E2SFCA) method, and Gini coefficient are utilized to represent travel impedance, spatial accessibility, and disparity of primary health care resources (hospital beds, doctors, and health professionals). Accessibilities and Gini coefficients are correlated with social development indexes (GDP, ethnicity, etc.) to identify influencing factors. Spatial access to primary health care is better in southeastern Sichuan than in northwestern Sichuan in terms of shorter travel time, higher spatial accessibility, and lower inequity. Social development indexes all showed significant correlation with county-averaged spatial accessibilities/Gini coefficients, with population density ranking highest. The disparity in access to primary health care is also apparent between ethnic-minority and non-minority regions. To improve spatial access to primary health care and narrow the inequity, more township health centers staffed by qualified health professionals are recommended for northwestern Sichuan. Improved road networks will also help. Among areas with insufficient primary health care, the specific counties whose demographics are dominated by older people and children, due to widespread rural-urban migration of the workforce, and by ethnic minorities should be especially emphasized in future planning.
Copyright © 2018 Elsevier Ltd. All rights reserved.
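The Gini coefficient used above to quantify disparity of health care resources can be computed from per-county values with a standard closed form over sorted data (an illustrative sketch with made-up bed counts, not the study's data):

```python
import numpy as np

def gini(values):
    """Gini coefficient of a distribution (0 = perfect equality, ->1 = maximal inequality).

    Uses the sorted-values identity: G = (n + 1 - 2 * sum(cumsum(x)) / sum(x)) / n.
    """
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# hypothetical hospital beds across four counties
print(gini([10, 10, 10, 10]))             # 0.0  (evenly distributed)
print(round(gini([0, 0, 0, 40]), 2))      # 0.75 (concentrated in one county)
```

In the study's setting, a higher Gini over accessibility scores flags regions where primary care resources are unevenly distributed relative to population.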
NASA Astrophysics Data System (ADS)
González-Zamora, Ángel; Sánchez, Nilda; Martínez-Fernández, José; Gumuzzio, Ángela; Piles, María; Olmedo, Estrella
The European Space Agency's Soil Moisture and Ocean Salinity (SMOS) Level 2 soil moisture and the new L3 product from the Barcelona Expert Center (BEC) were validated from January 2010 to June 2014 using two in situ networks in Spain. The first network is the Soil Moisture Measurement Stations Network of the University of Salamanca (REMEDHUS), which has been extensively used for validating remotely sensed observations of soil moisture. REMEDHUS can be considered a small-scale network that covers a 1300 km2 region. The second network is a large-scale network that covers the main part of the Duero Basin (65,000 km2). At an existing meteorological network in the Castilla y Leon region (Inforiego), soil moisture probes were installed in 2012 to provide data until 2014. Comparisons of the temporal series using different strategies (total average, land use, and soil type) as well as using the collocated data at each location were performed. Additionally, spatial correlations on each date were computed for specific days. Finally, an improved version of the Triple Collocation (TC) method, i.e., the Extended Triple Collocation (ETC), was used to compare satellite and in situ soil moisture estimates with outputs of the Soil Water Balance Model Green-Ampt (SWBM-GA). The results of this work showed that SMOS estimates were consistent with in situ measurements in the time series comparisons, with Pearson correlation coefficients (R) and an Agreement Index (AI) higher than 0.8 for the total average and the land-use averages and higher than 0.85 for the soil-texture averages. The results obtained at the Inforiego network showed slightly better results than REMEDHUS, which may be related to the larger scale of the former network. Moreover, the best results were obtained when all networks were jointly considered. In contrast, the spatial matching produced worse results for all the cases studied. 
These results showed that the recent reprocessing of the L2 products (v5.51) improved the accuracy of soil moisture retrievals such that they are now suitable for developing new L3 products, such as the one presented in this work. Additionally, the validation based on comparisons between dense/sparse networks and satellite retrievals at a coarse resolution showed that temporal patterns in soil moisture are better reproduced than spatial patterns.
Stochastic Growth Theory of Spatially-Averaged Distributions of Langmuir Fields in Earth's Foreshock
NASA Technical Reports Server (NTRS)
Boshuizen, Christopher R.; Cairns, Iver H.; Robinson, P. A.
2001-01-01
Langmuir-like waves in the foreshock of Earth are characteristically bursty and irregular, and are the subject of a number of recent studies. Averaged over the foreshock, the observed probability distribution P̄(log E) of the wave field E is a power law, with the bar denoting averaging over position. In this paper it is shown that stochastic growth theory (SGT) can explain such power-law spatially-averaged distributions P̄(log E) when the observed power-law variations of the mean and standard deviation of log E with position are combined with the lognormal statistics predicted by SGT at each location.
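The construction described here can be illustrated numerically: SGT predicts Gaussian statistics in log E at each position, and the foreshock-averaged distribution is the positional average of those Gaussians. A sketch under assumed power-law variations of the moments (the exponents and grids below are illustrative, not the paper's fits):

```python
import numpy as np

def spatially_averaged_pdf(log_e, positions, mu_of_x, sigma_of_x):
    """Average the local Gaussian P(log E | x) predicted by SGT over position x."""
    pdfs = []
    for x in positions:
        mu, sig = mu_of_x(x), sigma_of_x(x)
        pdfs.append(np.exp(-((log_e - mu) ** 2) / (2 * sig**2))
                    / (sig * np.sqrt(2 * np.pi)))
    return np.mean(pdfs, axis=0)

# illustrative power-law variation of the moments with foreshock distance x
x_grid = np.linspace(1.0, 10.0, 50)
log_e = np.linspace(-10.0, 10.0, 2001)
p_bar = spatially_averaged_pdf(
    log_e, x_grid,
    mu_of_x=lambda x: -1.5 * np.log10(x),   # mean falls off as a power law
    sigma_of_x=lambda x: 0.5 * x**0.3,      # spread grows as a power law
)
# the positional average of normalized Gaussians remains normalized
print(round(float(np.sum(p_bar) * (log_e[1] - log_e[0])), 3))  # 1.0
```

Because the width of the local Gaussian grows with distance, the mixture develops extended tails in log E, which is the mechanism by which a superposition of lognormals can mimic a power law.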
Plasma properties in electron-bombardment ion thrusters
NASA Technical Reports Server (NTRS)
Matossian, J. N.; Beattie, J. R.
1987-01-01
The paper describes a technique for computing volume-averaged plasma properties within electron-bombardment ion thrusters, using spatially varying Langmuir-probe measurements. Average values of the electron densities are defined by integrating the spatially varying Maxwellian and primary electron densities over the ionization volume, and then dividing by the volume. Plasma properties obtained in the 30-cm-diameter J-series and ring-cusp thrusters are analyzed by the volume-averaging technique. The superior performance exhibited by the ring-cusp thruster is correlated with a higher average Maxwellian electron temperature. The ring-cusp thruster maintains the same fraction of primary electrons as does the J-series thruster, but at a much lower ion production cost. The volume-averaged predictions for both thrusters are compared with those of a detailed thruster performance model.
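The volume-averaging operation described here, integrating a spatially varying density over the ionization volume and dividing by that volume, can be sketched for a radially varying profile in a cylindrical chamber (an illustrative simplification; the actual technique integrates measured Maxwellian and primary electron densities from Langmuir-probe data):

```python
import numpy as np

def volume_average_density(n_of_r, radius, n_samples=10000):
    """Volume-average n(r) over a cylinder of given radius (uniform along the axis).

    Uses the cylindrical volume element dV = 2*pi*r*dr*L; the axial length L
    cancels between numerator and denominator.
    """
    r = np.linspace(0.0, radius, n_samples)
    n = n_of_r(r)
    num = np.sum(n * 2.0 * np.pi * r) * (r[1] - r[0])  # integral of n dV / L
    den = np.pi * radius**2                            # volume / L
    return num / den

# a uniform plasma averages to itself (radius in meters is illustrative)
print(volume_average_density(lambda r: np.full_like(r, 1e17), 0.15))  # ≈ 1e17
```

The same integral applied separately to the Maxwellian and primary populations gives the average densities whose ratio (the primary electron fraction) the abstract compares between thrusters.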
NASA Astrophysics Data System (ADS)
Gillespie, Jonathan; Masey, Nicola; Heal, Mathew R.; Hamilton, Scott; Beverland, Iain J.
2017-02-01
Determination of intra-urban spatial variations in air pollutant concentrations for exposure assessment requires substantial time and monitoring equipment. The objective of this study was to establish if short-duration measurements of air pollutants can be used to estimate longer-term pollutant concentrations. We compared 5-min measurements of black carbon (BC) and particle number (PN) concentrations made once per week on 5 occasions, with 4 consecutive 1-week average nitrogen dioxide (NO2) concentrations at 18 locations at a range of distances from busy roads in Glasgow, UK. 5-min BC and PN measurements (averaged over the two 5-min periods at the start and end of a week) explained 40-80%, and 7-64% respectively, of spatial variation in the intervening 1-week NO2 concentrations for individual weeks. Adjustment for variations in background concentrations increased the percentage of explained variation in the bivariate relationship between the full set of NO2 and BC measurements over the 4-week period from 28% to 50% prior to averaging of repeat measurements. The averages of five 5-min BC and PN measurements made over 5 weeks explained 75% and 33% respectively of the variation in average 1-week NO2 concentrations over the same period. The relatively high explained variation observed between BC and NO2 measured on different time scales suggests that, with appropriate steps to correct or average out temporal variations, repeated short-term measurements can be used to provide useful information on longer-term spatial patterns for these traffic-related pollutants.
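The background-adjustment step, removing the shared temporal component before pooling short-term measurements from different occasions, can be sketched with hypothetical data (site values, backgrounds, and the R² figures below are invented for illustration, not the study's measurements):

```python
import numpy as np

def explained_variation(x, y):
    """Percent of variance in y explained by x (squared Pearson r, as a percentage)."""
    r = np.corrcoef(x, y)[0, 1]
    return 100.0 * r**2

# hypothetical example: the same 3 sites measured in two weeks with different
# regional background levels (1.0 and 3.0); weekly NO2 tracks the spatial signal
spatial_signal = np.array([1.0, 2.0, 3.0, 1.0, 2.0, 3.0])
background = np.array([1.0, 1.0, 1.0, 3.0, 3.0, 3.0])
bc_raw = spatial_signal + background
weekly_no2 = 10.0 * spatial_signal

print(round(explained_variation(bc_raw, weekly_no2)))               # 40
print(round(explained_variation(bc_raw - background, weekly_no2)))  # 100
```

Pooling raw values mixes temporal and spatial variation and dilutes the correlation; subtracting each occasion's background recovers the purely spatial relationship, mirroring the 28% to 50% improvement the abstract reports.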
NASA Technical Reports Server (NTRS)
Cucinotta, F. A.; Katz, R.; Wilson, J. W.
1998-01-01
An analytic method is described for evaluating the average radial electron spectrum and the radial and total frequency-event spectrum for high-energy ions. For high-energy ions, indirect events make important contributions to frequency-event spectra. The method used for evaluating indirect events is to fold the radial electron spectrum with the measured frequency-event spectrum for photons or electrons. The contribution from direct events is treated using a spatially restricted linear energy transfer (LET). We find that high-energy heavy ions have a significantly reduced frequency-averaged lineal energy (yF) compared to LET, while relativistic protons have significantly increased yF and dose-averaged lineal energy (yD) for typical site sizes used in tissue-equivalent proportional counters. Such differences represent important factors in evaluating event spectra with laboratory beams, in spaceflight, or in atmospheric radiation studies, and in the validation of radiation transport codes. The inadequacy of LET as a descriptor, owing to deviations in values of physical quantities such as track width, secondary electron spectrum, and yD for ions of identical LET, is also discussed.
A virtual pebble game to ensemble average graph rigidity.
González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J
2015-01-01
The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies is sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test whether a molecular structure is globally under-constrained or over-constrained. MCC is a mean-field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure with an effective medium in which distance constraints are globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is an MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent, based on binary random dynamic variables, are suppressed by replacing all possible constraint-topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm: the integers counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability of finding a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate that the VPG estimates the ensemble-average PG results well. The VPG runs about 20% faster than a single PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies.
The utility of the VPG falls in between the most accurate but slowest method of ensemble averaging over hundreds to thousands of independent PG runs, and the fastest but least accurate MCC.
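Maxwell constraint counting, the simplest of the three levels of sophistication compared above, reduces to arithmetic on global counts (a sketch; the subtraction of the six global rigid-body motions is the standard body-bar convention, and the function name is hypothetical):

```python
def maxwell_count(n_bodies, n_bars):
    """Maxwell constraint counting (MCC) lower bound on internal DOF.

    Each rigid body contributes 6 DOF; each bar constraint removes at most one.
    Subtracting the 6 global rigid-body motions gives a mean-field lower bound
    on the internal degrees of freedom (floppy modes) of the network.
    """
    return max(6 * n_bodies - 6 - n_bars, 0)

# an under-constrained network retains internal flexibility
print(maxwell_count(n_bodies=10, n_bars=40))  # 14
# enough bars drives the mean-field bound to zero (globally over-constrained)
print(maxwell_count(n_bodies=10, n_bars=60))  # 0
```

Because MCC distributes constraints with perfect uniformity, it cannot see locally over-constrained regions coexisting with floppy ones; that spatial inhomogeneity is exactly what the VPG retains.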
Uchiyama, Yuta; Mori, Koichiro
2017-08-15
The purpose of this paper is to analyze how different definitions and methods for delineating the spatial boundaries of cities affect the values of city sustainability indicators. It is necessary to distinguish the inside of cities from the outside when calculating the values of sustainability indicators that assess the impacts of human activities within cities on areas beyond their boundaries. For this purpose, spatial boundaries of cities should be practically detected on the basis of a relevant definition of a city. Although no definition of a city is commonly shared among academic fields, three practical methods for identifying urban areas are available in remote sensing science. These practical methods are based on population density, land cover, and night-time lights. The methods are correlated, but non-negligible differences exist in their determination of urban extents and urban population. Furthermore, critical and statistically significant differences in some urban environmental sustainability indicators result from the three different urban detection methods. For example, the average values of CO2 emissions per capita and PM10 concentration in cities with more than 1 million residents are significantly different among the definitions. When analyzing city sustainability indicators and disseminating the implications of the results, the values based on the different definitions should be investigated simultaneously. It is necessary to carefully choose a relevant definition to analyze sustainability indicators for policy making. Otherwise, ineffective and inefficient policies will be developed. Copyright © 2017 Elsevier B.V. All rights reserved.
Xu, Hongmei; Ho, Steven Sai Hang; Gao, Meiling; Cao, Junji; Guinot, Benjamin; Ho, Kin Fai; Long, Xin; Wang, Jingzhi; Shen, Zhenxing; Liu, Suixin; Zheng, Chunli; Zhang, Qian
2016-11-01
Spatial variability of polycyclic aromatic hydrocarbons (PAHs) associated with fine particulate matter (PM2.5) was investigated in Xi'an, China, in the summer of 2013. Sixteen priority PAHs were quantified in 24-h integrated air samples collected simultaneously at nine urban and suburban communities. The total quantified PAH mass concentrations ranged from 32.4 to 104.7 ng m-3, with an average value of 57.1 ± 23.0 ng m-3. PAH concentrations were higher at the suburban communities (average: 86.3 ng m-3) than at the urban ones (average: 48.8 ng m-3), reflecting better enforcement of pollution control policies at the urban scale along with disorganized management of motor vehicles and massive building construction in the suburbs. Elevated PAH levels were observed in the industrialized regions (west and northwest of Xi'an) from Kriging interpolation analysis. Satellite-based visual interpretations of land use were also applied to support the spatial distribution of PAHs among the communities. The average benzo[a]pyrene-equivalent toxicity (Σ[BaP]eq) at the nine communities was 6.9 ± 2.2 ng m-3 during the sampling period, showing a spatial distribution generally similar to that of the PAH levels. On average, the excess inhalation lifetime cancer risk derived from Σ[BaP]eq indicated that eight persons per million community residents would develop cancer due to PM2.5-bound PAH exposure in Xi'an. The great in-city spatial variability of PAHs confirms the importance of multiple-point sampling for conducting exposure health risk assessment. Copyright © 2016 Elsevier Ltd. All rights reserved.
Fischer, Jason L.; Bennion, David; Roseman, Edward F.; Manny, Bruce A.
2015-01-01
Lake sturgeon (Acipenser fulvescens) populations have suffered precipitous declines in the St. Clair–Detroit River system, following the removal of gravel spawning substrates and overfishing in the late 1800s to mid-1900s. To assist the remediation of lake sturgeon spawning habitat, three hydrodynamic models were integrated into a spatial model to identify areas in two large rivers, where water velocities were appropriate for the restoration of lake sturgeon spawning habitat. Here we use water velocity data collected with an acoustic Doppler current profiler (ADCP) to assess the ability of the spatial model and its sub-models to correctly identify areas where water velocities were deemed suitable for restoration of fish spawning habitat. ArcMap 10.1 was used to create raster grids of water velocity data from model estimates and ADCP measurements which were compared to determine the percentage of cells similarly classified as unsuitable, suitable, or ideal for fish spawning habitat remediation. The spatial model categorized 65% of the raster cells the same as depth-averaged water velocity measurements from the ADCP and 72% of the raster cells the same as surface water velocity measurements from the ADCP. Sub-models focused on depth-averaged velocities categorized the greatest percentage of cells similar to ADCP measurements where 74% and 76% of cells were the same as depth-averaged water velocity measurements. Our results indicate that integrating depth-averaged and surface water velocity hydrodynamic models may have biased the spatial model and overestimated suitable spawning habitat. A model solely integrating depth-averaged velocity models could improve identification of areas suitable for restoration of fish spawning habitat.
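The raster comparison described here, classifying each cell as unsuitable, suitable, or ideal and computing the percentage of cells classified the same, can be sketched as follows (the velocity thresholds are placeholders, not the study's spawning-habitat criteria):

```python
import numpy as np

def classify_velocity(v, suitable=(0.4, 1.0), ideal=(0.6, 0.9)):
    """Classify water velocities (m/s) as 0=unsuitable, 1=suitable, 2=ideal.

    Thresholds here are illustrative assumptions, not the study's criteria.
    """
    v = np.asarray(v, dtype=float)
    cls = np.zeros(v.shape, dtype=int)
    cls[(v >= suitable[0]) & (v <= suitable[1])] = 1
    cls[(v >= ideal[0]) & (v <= ideal[1])] = 2   # ideal overrides suitable
    return cls

def percent_agreement(model_v, adcp_v):
    """Share of raster cells given the same class by model and ADCP velocities."""
    a, b = classify_velocity(model_v), classify_velocity(adcp_v)
    return float(100.0 * np.mean(a == b))

# hypothetical co-registered cells: model estimates vs. ADCP measurements
model = [0.2, 0.5, 0.7, 1.2]
adcp  = [0.3, 0.5, 0.8, 0.7]
print(percent_agreement(model, adcp))  # 75.0
```

Applied cell-by-cell over the two rasters, this yields the 65% to 76% agreement figures the abstract reports for the integrated and depth-averaged models.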
Generalized Bootstrap Method for Assessment of Uncertainty in Semivariogram Inference
Olea, R.A.; Pardo-Iguzquiza, E.
2011-01-01
The semivariogram and its related function, the covariance, play a central role in classical geostatistics for modeling the average continuity of spatially correlated attributes. Whereas all methods are formulated in terms of the true semivariogram, in practice what can be used are estimated semivariograms and models based on samples. A generalized form of the bootstrap method to properly model spatially correlated data is used to advance knowledge about the reliability of empirical semivariograms and semivariogram models based on a single sample. Among several methods available to generate spatially correlated resamples, we selected a method based on the LU decomposition and used several examples to illustrate the approach. The first is a synthetic, isotropic, exhaustive sample following a normal distribution; the second is also synthetic but follows a non-Gaussian random field; and the third is an empirical sample consisting of actual rain-gauge measurements. Results show wider confidence intervals than those found previously by others with inadequate application of the bootstrap. Also, even for the Gaussian example, distributions for estimated semivariogram values and model parameters are positively skewed. In this sense, bootstrap percentile confidence intervals, which are not centered around the empirical semivariogram and do not require distributional assumptions for their construction, provide an achieved coverage similar to the nominal coverage. The latter cannot be achieved by symmetrical confidence intervals based on the standard error, regardless of whether the standard error is estimated from a parametric equation or from the bootstrap. © 2010 International Association for Mathematical Geosciences.
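The LU-decomposition step used to generate spatially correlated resamples can be sketched with a Cholesky factor (the lower-triangular case of LU for a symmetric positive-definite covariance). This is a simplified illustration assuming an exponential covariance model; the parameter names and values are not the paper's:

```python
import numpy as np

def correlated_resample(coords, sill=1.0, corr_range=10.0, n_draws=1, seed=0):
    """Generate spatially correlated Gaussian resamples via LU (Cholesky).

    Builds a covariance from an exponential semivariogram model
    gamma(h) = sill * (1 - exp(-3h/range)), i.e. C(h) = sill * exp(-3h/range),
    factors it as C = L L^T, and multiplies L by white noise.
    """
    coords = np.asarray(coords, dtype=float)
    h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = sill * np.exp(-3.0 * h / corr_range)
    # small jitter on the diagonal guards against numerical non-positive-definiteness
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(coords)))
    z = np.random.default_rng(seed).standard_normal((len(coords), n_draws))
    return L @ z  # each column is one correlated resample at the sample locations

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
fields = correlated_resample(pts, n_draws=3)
print(fields.shape)  # (4, 3)
```

Each resample honors the fitted spatial covariance, so re-estimating the semivariogram on many such draws yields the bootstrap distributions of semivariogram values and model parameters discussed in the abstract.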
NASA Astrophysics Data System (ADS)
Huang, X.; Tan, J.
2014-11-01
Commutes in urban areas create interesting travel patterns that are often stored in regional transportation databases. These patterns can vary with the day of the week, the time of day, and commuter type. This study proposes methods to detect underlying spatio-temporal variability among three groups of commuters (senior citizens, children/students, and adults) using data mining and spatial analytics. Data from over 36 million individual trip records collected over one week (March 2012) on the Singapore bus and Mass Rapid Transit (MRT) system by the fare collection system were used. Analyses of such data are important for transportation and land-use designers and contribute to a better understanding of urban dynamics. Specifically, descriptive statistics, network analysis, and spatial analysis methods are presented. Descriptive variables such as density and duration were proposed to detect temporal features of travelers. A directed weighted graph G ≡ (N, L, W) was defined to analyze the global network properties of every pair of transportation links in the city during an average workday for all three categories. In addition, spatial interpolation and spatial statistics tools were used to transform the discrete network nodes into a structured human-movement landscape to understand the role of transportation systems in urban areas. The travel behaviour of the three categories follows a certain degree of temporal and spatial universality but also displays patterns unique to each group, with each category characterized by its own peak hours, commute distances, and specific locations for weekday travel.
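A directed weighted graph G ≡ (N, L, W) of this kind can be built directly from origin-destination trip records (a minimal sketch with hypothetical smart-card records; the stop names and the out-strength helper are illustrative, not from the study):

```python
from collections import defaultdict

def build_trip_graph(trips):
    """Directed weighted graph G = (N, L, W) from (origin, destination) trip records.

    Nodes are stops/stations; a link (u, v) carries the number of trips
    observed from u to v during the period.
    """
    weights = defaultdict(int)
    for origin, dest in trips:
        weights[(origin, dest)] += 1
    return weights

def out_strength(weights, node):
    """Total outgoing trips from a node (weighted out-degree)."""
    return sum(w for (u, _), w in weights.items() if u == node)

# hypothetical smart-card records: (origin stop, destination stop)
trips = [("A", "B"), ("A", "B"), ("A", "C"), ("B", "A")]
g = build_trip_graph(trips)
print(g[("A", "B")])         # 2
print(out_strength(g, "A"))  # 3
```

Building one such graph per commuter category and per time slice is what allows the peak-hour and commute-distance contrasts among the three groups to be quantified.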
Performance of a SiPM based semi-monolithic scintillator PET detector
NASA Astrophysics Data System (ADS)
Zhang, Xianming; Wang, Xiaohui; Ren, Ning; Kuang, Zhonghua; Deng, Xinhan; Fu, Xin; Wu, San; Sang, Ziru; Hu, Zhanli; Liang, Dong; Liu, Xin; Zheng, Hairong; Yang, Yongfeng
2017-10-01
A depth-encoding PET detector module using a semi-monolithic scintillation crystal with single-ended readout by a SiPM array was built and its performance was measured. The semi-monolithic scintillator detector consists of 11 polished LYSO slices measuring 1 × 11.6 × 10 mm3. The slices are glued together with enhanced specular reflector (ESR) in between and outside of the slices. The bottom surface of the slices is coupled to a 4 × 4 SiPM array with a 1 mm light guide and silicone grease between them. No reflector is used on the top surface and two sides of the slices, to reduce the scintillation photon reflection. The signals of the 4 × 4 SiPM array are grouped along rows and columns separately into eight signals. The four SiPM column signals are used to identify the slices according to the center of gravity of the scintillation photon distribution in the pixelated direction. The four SiPM row signals are used to estimate the y (monolithic direction) and z (depth of interaction) positions according to the center of gravity and the width of the scintillation photon distribution in the monolithic direction, respectively. The detector was measured with a 1 mm sampling interval in both the y and z directions with electronic collimation, using a 0.25 mm diameter 22Na point source and a 1 × 1 × 20 mm3 LYSO crystal detector. An average slice-based energy resolution of 14.9% was obtained. All 1-mm-thick slices were clearly resolved, and a detector with even thinner slices could be used. The y positions calculated with the center of gravity method differ for interactions happening at the same y but different z positions, due to depth-dependent edge effects. Least-squares minimization and maximum likelihood positioning algorithms were developed, and both methods improved the spatial resolution at the edges of the detector compared with the center of gravity method.
A mean absolute error (MAE), defined as the probability-weighted mean of the absolute value of the positioning error, is used to evaluate the spatial resolution. An average MAE spatial resolution of ~1.15 mm was obtained in both the y and z directions without rejection of multiple-scattering events. The average MAE spatial resolution was ~0.7 mm in both directions after the multiple-scattering events were rejected. The timing resolution of the detector is 575 ps. In the next step, a long rectangular detector will be built to reduce edge effects and improve the spatial resolution of the semi-monolithic detector. Detectors up to 20 mm thick will be explored, and the positioning algorithms will be further optimized.
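The center-of-gravity estimate applied to the SiPM row signals is the classic Anger weighting, a signal-weighted mean of the sensor positions (a minimal sketch with illustrative signal values; the depth-dependent edge effects the paper corrects with least-squares and maximum-likelihood positioning are not modeled here):

```python
import numpy as np

def center_of_gravity(signals, positions=None):
    """Center-of-gravity position estimate from a row (or column) of SiPM signals.

    positions defaults to the SiPM indices 0..n-1; units follow positions.
    """
    s = np.asarray(signals, dtype=float)
    x = np.arange(s.size) if positions is None else np.asarray(positions, dtype=float)
    return float(np.sum(x * s) / np.sum(s))

# a symmetric light spread over a 4-SiPM row centers between elements 1 and 2
print(round(center_of_gravity([0.1, 0.4, 0.4, 0.1]), 3))  # 1.5
```

Near the crystal edge the light distribution is truncated, so this weighted mean is pulled toward the center, which is precisely the depth-dependent bias that motivates the statistical positioning algorithms.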
A Phase Field Study of the Effect of Microstructure Grain Size Heterogeneity on Grain Growth
NASA Astrophysics Data System (ADS)
Crist, David J. D.
Recent studies conducted with sharp-interface models suggest a link between the spatial distribution of grain size variance and average grain growth rate. This relationship and its effect on grain growth rate was examined using the diffuse-interface Phase Field Method on a series of microstructures with different degrees of grain size gradation. Results from this work indicate that the average grain growth rate has a positive correlation with the average grain size dispersion for phase field simulations, confirming previous observations. It is also shown that the grain growth rate in microstructures with skewed grain size distributions is better measured through the change in the volume-weighted average grain size than statistical mean grain size. This material is based upon work supported by the National Science Foundation under Grant No. 1334283. The NSF project title is "DMREF: Real Time Control of Grain Growth in Metals" and was awarded by the Civil, Mechanical and Manufacturing Innovation division under the Designing Materials to Revolutionize and Engineer our Future (DMREF) program.
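The distinction drawn here between the statistical mean and the volume-weighted average grain size can be made concrete (a toy illustration; the cubic volume weighting assumes roughly equiaxed grains, which is an assumption of this sketch rather than a statement from the study):

```python
import numpy as np

def mean_grain_size(diameters):
    """Ordinary (number-weighted) mean grain size."""
    return float(np.mean(diameters))

def volume_weighted_grain_size(diameters):
    """Volume-weighted average grain size: sum(V_i * d_i) / sum(V_i),
    with V_i taken proportional to d_i**3 for roughly equiaxed grains."""
    d = np.asarray(diameters, dtype=float)
    v = d**3
    return float(np.sum(v * d) / np.sum(v))

# a skewed distribution: nine small grains plus one large grain
sizes = [1.0] * 9 + [5.0]
print(mean_grain_size(sizes))                        # 1.4
print(round(volume_weighted_grain_size(sizes), 2))   # 4.73
```

Because the one large grain dominates the volume, the volume-weighted average tracks the grains that actually occupy the microstructure, which is why it is the more faithful growth-rate measure for skewed grain size distributions.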
Spatial and spatiotemporal pattern analysis of coconut lethal yellowing in Mozambique.
Bonnot, F; de Franqueville, H; Lourenço, E
2010-04-01
Coconut lethal yellowing (LY) is caused by a phytoplasma and is a major threat for coconut production throughout its growing area. Incidence of LY was monitored visually on every coconut tree in six fields in Mozambique for 34 months. Disease progress curves were plotted and average monthly disease incidence was estimated. Spatial patterns of disease incidence were analyzed at six assessment times. Aggregation was tested by the coefficient of spatial autocorrelation of the beta-binomial distribution of diseased trees in quadrats. The binary power law was used as an assessment of overdispersion across the six fields. Spatial autocorrelation between symptomatic trees was measured by the BB join count statistic based on the number of pairs of diseased trees separated by a specific distance and orientation, and tested using permutation methods. Aggregation of symptomatic trees was detected in every field in both cumulative and new cases. Spatiotemporal patterns were analyzed with two methods. The proximity of symptomatic trees at two assessment times was investigated using the spatiotemporal BB join count statistic based on the number of pairs of trees separated by a specific distance and orientation and exhibiting the first symptoms of LY at the two times. The semivariogram of times of appearance of LY was calculated to characterize how the lag between times of appearance of LY was related to the distance between symptomatic trees. Both statistics were tested using permutation methods. A tendency for new cases to appear in the proximity of previously diseased trees and a spatially structured pattern of times of appearance of LY within clusters of diseased trees were detected, suggesting secondary spread of the disease.
An assessment of air pollutant exposure methods in Mexico City, Mexico.
Rivera-González, Luis O; Zhang, Zhenzhen; Sánchez, Brisa N; Zhang, Kai; Brown, Daniel G; Rojas-Bracho, Leonora; Osornio-Vargas, Alvaro; Vadillo-Ortega, Felipe; O'Neill, Marie S
2015-05-01
Geostatistical interpolation methods to estimate individual exposure to outdoor air pollutants can be used in pregnancy cohorts where personal exposure data are not collected. Our objectives were to a) develop four assessment methods (citywide average (CWA); nearest monitor (NM); inverse distance weighting (IDW); and ordinary Kriging (OK)), and b) compare daily metrics and cross-validations of the interpolation models. We obtained 2008 hourly data from Mexico City's outdoor air monitoring network for PM10, PM2.5, O3, CO, NO2, and SO2 and constructed daily exposure metrics for 1,000 simulated individual locations across five populated geographic zones. Descriptive statistics for all methods were calculated for dry and wet seasons, and by zone. We also evaluated the IDW and OK methods' ability to predict measured concentrations at monitors using cross-validation and a coefficient of variation (COV). All methods were performed using SAS 9.3, except ordinary Kriging, which was modeled using R's gstat package. Overall, mean concentrations and standard deviations were similar among the different methods for each pollutant. Correlations between methods were generally high (r=0.77 to 0.99). However, the ranges of concentrations estimated by NM, IDW, and OK were wider than the range for CWA. Root mean square errors for OK were consistently equal to or lower than those for the IDW method. OK standard errors varied considerably between pollutants, and the computed COVs ranged from 0.46 (least error) for SO2 and PM10 to 3.91 (most error) for PM2.5. OK predicted concentrations measured at the monitors better than IDW and NM. Given the similarity in results across the exposure methods, OK is preferred because this method alone provides predicted standard errors, which can be incorporated in statistical models.
The daily estimated exposures calculated using these different exposure methods provide flexibility to evaluate multiple windows of exposure during pregnancy, not just trimester or pregnancy-long exposures. Many studies evaluating associations between outdoor air pollution and adverse pregnancy outcomes rely on outdoor air pollution monitoring data linked to information gathered from large birth registries, and often lack residence location information needed to estimate individual exposure. This study simulated 1,000 residential locations to evaluate four air pollution exposure assessment methods, and describes possible exposure misclassification from using spatial averaging versus geostatistical interpolation models. An implication of this work is that policies to reduce air pollution and exposure among pregnant women based on epidemiologic literature should take into account possible error in estimates of effect when spatial averages alone are evaluated.
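Three of the four exposure methods compared (CWA, NM, and IDW) are simple enough to sketch directly; the monitor coordinates and PM10 values below are invented for illustration, not data from the Mexico City network.

```python
import numpy as np

def citywide_average(conc, *_):
    """CWA: every location receives the mean of all monitors."""
    return float(np.mean(conc))

def nearest_monitor(conc, monitors, point):
    """NM: the concentration at the monitor closest to the point."""
    d = np.linalg.norm(monitors - point, axis=1)
    return float(conc[np.argmin(d)])

def idw(conc, monitors, point, power=2.0):
    """IDW: inverse-distance-weighted mean over all monitors."""
    d = np.linalg.norm(monitors - point, axis=1)
    if np.any(d == 0):
        return float(conc[np.argmin(d)])     # point sits exactly on a monitor
    w = 1.0 / d ** power
    return float(np.sum(w * conc) / np.sum(w))

# Illustrative monitors (km coordinates) and one day's PM10 values (made up).
monitors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
pm10 = np.array([40.0, 60.0, 50.0, 80.0])
home = np.array([9.0, 9.0])

print(citywide_average(pm10))
print(nearest_monitor(pm10, monitors, home))
print(idw(pm10, monitors, home))
```

The example shows the narrowing effect the abstract reports: CWA assigns every residence the same value, while NM and IDW respond to the nearby high-reading monitor.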
NASA Astrophysics Data System (ADS)
Bindhu, V. M.; Narasimhan, B.
2015-03-01
Normalized Difference Vegetation Index (NDVI), a key parameter for understanding vegetation dynamics, has high spatial and temporal variability. However, continuous monitoring of NDVI is not feasible at fine spatial resolution (<60 m) owing to the long revisit times of the satellites that acquire fine spatial resolution data. The problem is especially acute in the humid tropical regions of the earth, where prevailing atmospheric conditions restrict the availability of cloud-free fine-resolution images at high temporal frequency. To address the lack of high-resolution images, the current study demonstrates a novel disaggregation method (DisNDVI) which integrates the spatial information from a single fine-resolution image with temporal information, in terms of crop phenology, from a time series of coarse-resolution images to generate estimates of NDVI at fine spatial and temporal resolution. The phenological variation of the pixels captured at the coarser scale provides the basis for relating the temporal variability of a pixel to the NDVI available at fine resolution. The proposed methodology was tested over a 30 km × 25 km spatially heterogeneous study area located in the south of Tamil Nadu, India. The robustness of the algorithm was assessed by an independent comparison of the disaggregated NDVI with observed NDVI obtained from concurrent Landsat ETM+ imagery. The results showed good spatial agreement across the study area, which is dominated by agriculture and forest pixels, with a root mean square error of 0.05. Validation at the coarser scale showed that disaggregated NDVI spatially averaged to 240 m compared well with concurrent MODIS NDVI at 240 m (R2 > 0.8). The validation results demonstrate the effectiveness of DisNDVI in improving the spatial and temporal resolution of NDVI images for use in fine-scale hydrological applications such as crop growth monitoring and estimation of evapotranspiration.
Response Classification Images in Vernier Acuity
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Beard, B. L.; Ellis, Stephen R. (Technical Monitor)
1997-01-01
Orientation-selective and local sign mechanisms have been proposed as the basis for vernier acuity judgments. Linear image features contributing to discrimination can be determined for a two-choice task by adding external noise to the images and then averaging the noises separately for the four types of stimulus/response trials. This method is applied to a vernier acuity task with different spatial separations to compare the predictions of the two theories. Three well-practiced observers were presented with around 5,000 trials of a vernier stimulus consisting of two dark horizontal lines (5 min by 0.3 min) within additive low-contrast white noise. Two spatial separations were tested: abutting and a 10 min horizontal separation. The task was to determine whether the target lines were aligned or vertically offset. The noises were averaged separately for the four stimulus/response trial types (e.g., stimulus = offset, response = aligned). The sum of the two 'not aligned' images was then subtracted from the sum of the 'aligned' images to obtain an overall image. Spatially smoothed images were quantized according to the expected variability in the smoothed images to allow estimation of the statistical significance of image features. The response images from the 10 min separation condition are consistent with the local sign theory, having the appearance of two linear operators measuring vertical position with opposite sign. The images from the abutting stimulus have the same appearance, with the two operators closer together. The image predicted by an oriented filter model is similar, but has its greatest weight in the abutting region, where the response images fall to nonsignificance. The response correlation image method, previously demonstrated for letter discrimination, clarifies the features used in vernier acuity.
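The noise-averaging procedure can be sketched in a few lines; the simulated observer below (who answers "offset" whenever the noise at one probe pixel is positive) is invented to show that the classification image recovers exactly the pixels driving the responses. Grouping by response when forming the "aligned" minus "offset" combination is one reading of the method described.

```python
import numpy as np

def classification_image(noises, stimulus, response):
    """Average the noise fields within each of the four stimulus/response cells,
    then subtract the two 'offset'-response averages from the two
    'aligned'-response averages (one reading of the method)."""
    img = np.zeros(noises.shape[1:])
    for resp, sign in (("aligned", 1.0), ("offset", -1.0)):
        for stim in ("aligned", "offset"):
            sel = (stimulus == stim) & (response == resp)
            img += sign * noises[sel].mean(axis=0)
    return img

# Illustrative simulated observer: responds "offset" when the noise at a single
# probe pixel is positive, so only that pixel should appear in the image.
rng = np.random.default_rng(1)
n_trials, h, w = 4000, 8, 8
noises = rng.normal(size=(n_trials, h, w))
stimulus = np.where(rng.random(n_trials) < 0.5, "aligned", "offset")
response = np.where(noises[:, 4, 4] > 0, "offset", "aligned")

ci = classification_image(noises, stimulus, response)
print(ci[4, 4], ci[0, 0])   # probe pixel strongly negative; others near zero
```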
Parasuraman, Ramviyas; Fabry, Thomas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel
2014-12-12
The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the "server-relay-client" framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions.
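The two pre-processing filters named in the abstract, exponential moving averaging in time and spatial averaging across receivers, can be sketched as below. The RSS trace is synthetic, and averaging dBm values directly (rather than converting to linear power first) is a simplification for illustration.

```python
import numpy as np

def ema(samples, alpha=0.3):
    """Exponential moving average: s_t = alpha*x_t + (1 - alpha)*s_{t-1}."""
    out = np.empty_like(samples, dtype=float)
    s = samples[0]
    for i, x in enumerate(samples):
        s = alpha * x + (1 - alpha) * s
        out[i] = s
    return out

def spatial_average(rss_by_antenna):
    """Average RSS across spatially separated receivers at each time step
    (receiver spatial diversity); done in dBm here as a simplification."""
    return np.mean(rss_by_antenna, axis=0)

# Illustrative noisy RSS trace around a true level of -60 dBm:
# 4 receivers, 200 time samples, 4 dB of measurement noise.
rng = np.random.default_rng(7)
antennas = -60.0 + rng.normal(0.0, 4.0, size=(4, 200))
smoothed = ema(spatial_average(antennas))
print(smoothed[-1])   # close to the true -60 dBm level
```

The spatial average halves the per-sample noise (four receivers) and the EMA suppresses the remainder, which is the smoothing role these filters play before the gradient-ascent positioning step.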
NASA Astrophysics Data System (ADS)
Winebrenner, D. P.; Kintner, P. M. S.; MacGregor, J. A.
2017-12-01
Over deep Antarctic subglacial lakes, spatially varying ice thickness and the pressure-dependent melting point of ice result in areas of melting and accretion at the ice-water interface, i.e., the lake lid. These ice mass fluxes drive lake circulation and, because basal Antarctic ice contains air-clathrate, affect the input of oxygen to the lake, with implications for subglacial life. Inferences of melting and accretion from radar-layer tracking and geodesy are limited in spatial coverage and resolution. Here we develop a new method to estimate rates of accretion, melting, and the resulting oxygen input at a lake lid, using airborne radar data over Lake Vostok together with ice-temperature and chemistry data from the Vostok ice core. Because the lake lid is a coherent reflector of known reflectivity (at our radar frequency), we can infer depth-averaged radiowave attenuation in the ice, with spatial resolution 1 km along flight lines. Spatial variation in attenuation depends mostly on variation in ice temperature near the lid, which in turn varies strongly with ice mass flux at the lid. We model ice temperature versus depth with ice mass flux as a parameter, thus linking that flux to (observed) depth-averaged attenuation. The resulting map of melt- and accretion-rates independently reproduces features known from earlier studies, but now covers the entire lid. We find that accretion is dominant when integrated over the lid, with an ice imbalance of 0.05 to 0.07 km3 a-1, which is robust against uncertainties.
Distribution of randomly diffusing particles in inhomogeneous media
NASA Astrophysics Data System (ADS)
Li, Yiwei; Kahraman, Osman; Haselwandter, Christoph A.
2017-09-01
Diffusion can be conceptualized, at microscopic scales, as the random hopping of particles between neighboring lattice sites. In the case of diffusion in inhomogeneous media, distinct spatial domains in the system may yield distinct particle hopping rates. Starting from the master equations (MEs) governing diffusion in inhomogeneous media we derive here, for arbitrary spatial dimensions, the deterministic lattice equations (DLEs) specifying the average particle number at each lattice site for randomly diffusing particles in inhomogeneous media. We consider the case of free (Fickian) diffusion with no steric constraints on the maximum particle number per lattice site as well as the case of diffusion under steric constraints imposing a maximum particle concentration. We find, for both transient and asymptotic regimes, excellent agreement between the DLEs and kinetic Monte Carlo simulations of the MEs. The DLEs provide a computationally efficient method for predicting the (average) distribution of randomly diffusing particles in inhomogeneous media, with the number of DLEs associated with a given system being independent of the number of particles in the system. From the DLEs we obtain general analytic expressions for the steady-state particle distributions for free diffusion and, in special cases, diffusion under steric constraints in inhomogeneous media. We find that, in the steady state of the system, the average fraction of particles in a given domain is independent of most system properties, such as the arrangement and shape of domains, and only depends on the number of lattice sites in each domain, the particle hopping rates, the number of distinct particle species in the system, and the total number of particles of each particle species in the system. Our results provide general insights into the role of spatially inhomogeneous particle hopping rates in setting the particle distributions in inhomogeneous media.
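A one-dimensional, free-diffusion instance of the deterministic lattice equations can be sketched as below; the ring geometry, domain sizes, and hopping rates are invented. It reproduces the stated steady-state result: the fraction of particles in a domain depends only on the site counts and hopping rates (average occupancy scales like 1/k_i), not on the domain arrangement.

```python
import numpy as np

def dle_step(n, k, dt):
    """One Euler step of the deterministic lattice equations for free diffusion
    on a ring: dn_i/dt = (k_{i-1} n_{i-1} + k_{i+1} n_{i+1}) / 2 - k_i n_i."""
    inflow = 0.5 * (np.roll(k * n, 1) + np.roll(k * n, -1))
    return n + dt * (inflow - k * n)

# Two domains on a 20-site ring with different hopping rates (illustrative).
k = np.array([1.0] * 10 + [4.0] * 10)   # slow domain A, fast domain B
n = np.full(20, 1.0)                    # uniform initial average occupancy

for _ in range(20000):                  # relax to the steady state
    n = dle_step(n, k, dt=0.05)

# Steady state: k_i * n_i is constant, so n_i ~ 1/k_i and the slow domain
# holds 10/(10 + 10/4) = 80% of the particles.
frac_slow = n[:10].sum() / n.sum()
print(frac_slow)
```

The Euler update conserves total particle number exactly, mirroring the ME-level conservation, and the converged fraction matches the analytic steady-state distribution.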
Simulation of Vortex Structure in Supersonic Free Shear Layer Using Pse Method
NASA Astrophysics Data System (ADS)
Guo, Xin; Wang, Qiang
The method of parabolized stability equations (PSE) is applied to the analysis of nonlinear stability and the simulation of flow structure in a supersonic free shear layer. High-accuracy numerical techniques, including a self-similar basic flow, a high-order differencing method, and appropriate transformation and decomposition of the nonlinear terms, are adopted and developed to solve the PSE effectively for the free shear layer. The spatially evolving unstable waves which dominate the flow structure are investigated through nonlinear coupled spatial marching methods. The nonlinear interactions between harmonic waves are further analyzed, and instantaneous flow fields are obtained by adding the harmonic waves to the basic flow. The results agree well with DNS data. They demonstrate that the T-S wave does not keep growing exponentially as in linear evolution: energy is transferred to higher-order harmonic modes, and all harmonic modes eventually saturate due to nonlinear interaction. Mean-flow distortion, produced by the nonlinear interaction between a harmonic and its conjugate, substantially changes the average flow and increases the thickness of the shear layer. The PSE method can capture the large-scale nonlinear flow structures in the supersonic free shear layer, such as vortex roll-up, vortex pairing, and nonlinear saturation.
NASA Astrophysics Data System (ADS)
Al-Omran, Abdulrasoul M.; Aly, Anwar A.; Al-Wabel, Mohammad I.; Al-Shayaa, Mohammad S.; Sallam, Abdulazeam S.; Nadeem, Mahmoud E.
2017-11-01
The analyses of 180 groundwater samples from Al-Kharj, Saudi Arabia, showed that most groundwaters are unsuitable for drinking due to high salinity; however, they can be used for irrigation with some restriction. The electrical conductivity of the studied groundwater ranged between 1.05 and 10.15 dS m-1, with an average of 3.0 dS m-1. Nitrate was also found at high concentrations in some groundwater. Piper diagrams revealed that the majority of water samples are of a magnesium-calcium/sulfate-chloride water type. The Gibbs diagram revealed that chemical weathering of rock-forming minerals and evaporation influence the groundwater chemistry. A kriging method was used to predict the spatial distribution of salinity (EC, dS m-1) and NO3- (mg L-1) in Al-Kharj's groundwater using data from 180 locations. After normalization of the data, a variogram was drawn, and the model with the smallest residual sum of squares was selected to fit the experimental variogram. Cross-validation and the root mean square error were then used to select the best interpolation method. Kriging was found to be a suitable method for groundwater interpolation and management using either GS+ or ArcGIS.
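The experimental variogram that precedes any kriging fit can be computed with a short sketch; the well coordinates and EC values below are synthetic (a smooth trend plus noise), not the study's data.

```python
import numpy as np

def semivariogram(coords, values, bins):
    """Experimental semivariogram: gamma(h) = 0.5 * mean over pairs at lag h of (z_i - z_j)^2."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    dz2 = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)      # count each pair once
    dist, dz2 = dist[iu], dz2[iu]
    gamma = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        sel = (dist >= lo) & (dist < hi)
        gamma.append(0.5 * dz2[sel].mean() if sel.any() else np.nan)
    return np.array(gamma)

# Synthetic wells: salinity follows a smooth east-west trend plus noise, so
# nearby wells are more similar and gamma grows with lag distance.
rng = np.random.default_rng(3)
coords = rng.uniform(0, 50, size=(180, 2))                  # 180 wells in a 50 km square
ec = 3.0 + 0.08 * coords[:, 0] + rng.normal(0, 0.3, 180)    # EC in dS/m (synthetic)
gamma = semivariogram(coords, ec, bins=np.array([0.0, 5.0, 10.0, 25.0, 50.0]))
print(gamma)
```

A theoretical model (e.g. spherical or exponential) would then be fitted to these binned values, choosing the model with the smallest residual sum of squares as the abstract describes.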
Nelson, Jonathan M.; Kinzel, Paul J.; McDonald, Richard R.; Schmeeckle, Mark
2016-01-01
Recently developed optical and videographic methods for measuring water-surface properties in a noninvasive manner hold great promise for extracting river hydraulic and bathymetric information. This paper describes such a technique, concentrating on the method of infrared videography for measuring surface velocities and both acoustic (laboratory-based) and laser-scanning (field-based) techniques for measuring water-surface elevations. In ideal laboratory situations with simple flows, appropriate spatial and temporal averaging results in accurate water-surface elevations and water-surface velocities. In test cases, this accuracy is sufficient to allow direct inversion of the governing equations of motion to produce estimates of depth and discharge. Unlike other optical techniques for determining local depth that rely on transmissivity of the water column (bathymetric lidar, multi/hyperspectral correlation), this method uses only water-surface information, so even deep and/or turbid flows can be investigated. However, significant errors arise in areas of nonhydrostatic spatial accelerations, such as those associated with flow over bedforms or other relatively steep obstacles. Using laboratory measurements for test cases, the cause of these errors is examined and both a simple semi-empirical method and computational results are presented that can potentially reduce bathymetric inversion errors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayati, Arash Nemati; Stoll, Rob; Kim, J. J.
Three computational fluid dynamics (CFD) methods with different levels of flow-physics modelling are comprehensively evaluated against high-spatial-resolution wind-tunnel velocity data from step-down street canyons (i.e., a short building downwind of a tall building). The first method is a semi-empirical fast-response approach using the Quick Urban Industrial Complex (QUIC-URB) model. The second method solves the Reynolds-averaged Navier–Stokes (RANS) equations, and the third utilizes a fully-coupled fluid-structure interaction large-eddy simulation (LES) model with a grid-turbulence inflow generator. Unlike typical point-by-point evaluation comparisons, here the entire two-dimensional wind-tunnel dataset is used to evaluate the dynamics of dominant flow topological features in the street canyon. Each CFD method is scrutinized for several geometric configurations by varying the downwind-to-upwind building-height ratio (H_d/H_u) and the street-canyon-width to building-width aspect ratio (S/W) for inflow winds perpendicular to the upwind building front face. Disparities between the numerical results and experimental data are quantified in terms of their ability to capture flow topological features for different geometric configurations. Ultimately, all three methods qualitatively predict the primary flow topological features, including a saddle point and a primary vortex. But the secondary flow topological features, namely an in-canyon separation point and secondary vortices, are only well represented by the LES method, despite its failure for taller downwind building cases. Misrepresentation of flow-regime transitions, exaggeration of the coherence of recirculation zones and wake fields, and overestimation of downwards vertical velocity into the canyon are the main defects in the QUIC-URB, RANS and LES results, respectively. All three methods underestimate the updrafts and, surprisingly, QUIC-URB outperforms RANS for the streamwise velocity component, while RANS is superior to QUIC-URB for the vertical velocity component in the street canyon.
Nonlinear mesomechanics of composites with periodic microstructure
NASA Technical Reports Server (NTRS)
Walker, Kevin P.; Jordan, Eric H.; Freed, Alan D.
1989-01-01
This work is concerned with modeling the mechanical deformation or constitutive behavior of composites comprised of a periodic microstructure under small displacement conditions at elevated temperature. A mesomechanics approach is adopted which relates the micromechanical behavior of the heterogeneous composite with its in-service macroscopic behavior. Two different methods, one based on a Fourier series approach and the other on a Green's function approach, are used in modeling the micromechanical behavior of the composite material. Although the constitutive formulations are based on a micromechanical approach, it should be stressed that the resulting equations are volume averaged to produce overall effective constitutive relations which relate the bulk, volume-averaged stress increment to the bulk, volume-averaged strain increment. As such, they are macromodels which can be used directly in nonlinear finite element programs such as MARC, ANSYS and ABAQUS or in boundary element programs such as BEST3D. In developing the volume-averaged or effective macromodels from the micromechanical models, both approaches require the evaluation of volume integrals containing the spatially varying strain distributions throughout the composite material. By assuming that the strain distributions are spatially constant within each constituent phase (or within a given subvolume of each constituent phase) of the composite material, the volume integrals can be obtained in closed form. This simplified micromodel can then be volume averaged to obtain an effective macromodel suitable for use in the MARC, ANSYS and ABAQUS nonlinear finite element programs via user constitutive subroutines such as HYPELA and CMUSER.
This effective macromodel can be used in a nonlinear finite element structural analysis to obtain the strain-temperature history at those points in the structure where thermomechanical cracking and damage are expected to occur, the so-called damage-critical points of the structure.
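The piecewise-constant-strain simplification has a familiar limiting case that can be sketched: if the strain is assumed uniform across all phases (the isostrain, or Voigt, assumption), volume averaging the phase stresses reduces to a rule of mixtures. This is an illustration of the volume-averaging idea only, not the paper's Fourier-series or Green's-function formulation.

```python
import numpy as np

def volume_averaged_stiffness(C_phases, v_fractions):
    """Isostrain (Voigt) volume average: with strain spatially constant, the
    bulk stress is the volume-fraction-weighted average of phase stresses,
    so C_eff = sum_p v_p * C_p.  Works for scalar moduli or stiffness matrices."""
    C_phases = np.asarray(C_phases, dtype=float)
    v = np.asarray(v_fractions, dtype=float)
    assert np.isclose(v.sum(), 1.0), "volume fractions must sum to 1"
    return np.tensordot(v, C_phases, axes=1)

# Illustrative 1D moduli (GPa): a stiff fiber phase in a compliant matrix.
C_fiber, C_matrix = 400.0, 70.0
C_eff = volume_averaged_stiffness([C_fiber, C_matrix], [0.3, 0.7])
print(C_eff)   # 0.3*400 + 0.7*70 = 169.0
```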
A flexible importance sampling method for integrating subgrid processes
Raut, E. K.; Larson, V. E.
2016-01-29
Numerical models of weather and climate need to compute grid-box-averaged rates of physical processes such as microphysics. These averages are computed by integrating subgrid variability over a grid box. For this reason, an important aspect of atmospheric modeling is spatial integration over subgrid scales. The needed integrals can be estimated by Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain of integration into eight categories, such as the portion that contains both precipitation and cloud, or the portion that contains precipitation but no cloud. It then allows the modeler to prescribe the density of sample points within each of the eight categories. The new method is incorporated into the Subgrid Importance Latin Hypercube Sampler (SILHS). Here, the resulting method is tested on drizzling cumulus and stratocumulus cases. In the cumulus case, the sampling error can be considerably reduced by drawing more sample points from the region of rain evaporation.
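The category-based importance sampling idea can be sketched with a toy two-category version (rain vs. no rain, rather than the paper's eight categories); the rate function and fractions below are invented. Each sample point is reweighted by the ratio of the category's true probability to its sampling probability, which keeps the estimate unbiased while letting the modeler oversample the category where the process rate is nonzero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative grid box: a fraction contains rain, and the (made-up)
# evaporation rate is nonzero only there.
rain_frac = 0.1

def evap_rate(x, rainy):
    """Toy process rate: zero outside rain; quadratic in the subgrid variate inside."""
    return np.where(rainy, x ** 2, 0.0)

def importance_estimate(n, p_rain_sample):
    """Draw a prescribed share of points from the rain category and reweight
    each point by (true category probability / sampling probability)."""
    rainy = rng.random(n) < p_rain_sample
    x = rng.random(n)                              # subgrid variate, uniform on [0, 1)
    w = np.where(rainy, rain_frac / p_rain_sample,
                 (1 - rain_frac) / (1 - p_rain_sample))
    return float(np.mean(w * evap_rate(x, rainy)))

truth = rain_frac / 3.0                            # E[x^2] = 1/3 inside rain
plain = importance_estimate(2000, rain_frac)       # proportional Monte Carlo sampling
boosted = importance_estimate(2000, 0.5)           # oversample the rainy category
print(truth, plain, boosted)
```

Both estimates converge to the true grid-box average; drawing half the points from the rainy category reduces the sampling error for the same number of rate evaluations, which is the payoff the abstract reports for the rain-evaporation region.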
Nelson, Jonathan M.; Kinzel, Paul J.; Schmeeckle, Mark Walter; McDonald, Richard R.; Minear, Justin T.
2016-01-01
Noncontact methods for measuring water-surface elevation and velocity in laboratory flumes and rivers are presented with examples. Water-surface elevations are measured using an array of acoustic transducers in the laboratory and using laser scanning in field situations. Water-surface velocities are based on using particle image velocimetry or other machine vision techniques on infrared video of the water surface. Using spatial and temporal averaging, results from these methods provide information that can be used to develop estimates of discharge for flows over known bathymetry. Making such estimates requires relating water-surface velocities to vertically averaged velocities; the methods here use standard relations. To examine where these relations break down, laboratory data for flows over simple bumps of three amplitudes are evaluated. As anticipated, discharges determined from surface information can have large errors where nonhydrostatic effects are large. In addition to investigating and characterizing this potential error in estimating discharge, a simple method for correction of the issue is presented. With a simple correction based on bed gradient along the flow direction, remotely sensed estimates of discharge appear to be viable.
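The discharge estimate from surface information can be sketched as a sum over verticals in a cross-section; the widths, depths, surface speeds, and the surface-to-depth-averaged velocity ratio of 0.85 below are illustrative stand-ins for the "standard relations" the abstract mentions, not values from the study.

```python
def discharge_from_surface(widths, depths, surface_speeds, alpha=0.85):
    """Estimate discharge as Q = sum over verticals of alpha * u_surface * depth * width,
    where alpha converts surface velocity to a depth-averaged velocity."""
    return sum(alpha * u * h * w
               for u, h, w in zip(surface_speeds, depths, widths))

# Illustrative cross-section split into three verticals.
widths = [5.0, 10.0, 5.0]     # m
depths = [0.8, 1.5, 0.7]      # m
surface = [0.6, 1.1, 0.5]     # m/s, from surface videography

print(discharge_from_surface(widths, depths, surface))   # m^3/s
```

Where nonhydrostatic accelerations are strong (e.g. over a steep bump), a fixed alpha breaks down, which is exactly the error source the paper corrects using the bed gradient along the flow direction.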
Nystuen, Jeffrey A; Amitai, Eyal; Anagnostou, Emmanuel N; Anagnostou, Marios N
2008-04-01
An experiment to evaluate the inherent spatial averaging of the underwater acoustic signal from rainfall was conducted in the winter of 2004 in the Ionian Sea southwest of Greece. A mooring with four passive aquatic listeners (PALs) at 60, 200, 1000, and 2000 m was deployed at 36.85 degrees N, 21.52 degrees E, 17 km west of a dual-polarization X-band coastal radar at Methoni, Greece. The acoustic signal is classified into wind, rain, shipping, and whale categories. It is similar at all depths and rainfall is detected at all depths. A signal that is consistent with the clicking of deep-diving beaked whales is present 2% of the time, although there was no visual confirmation of whale presence. Co-detection of rainfall with the radar verifies that the acoustic detection of rainfall is excellent. Once detection is made, the correlation between acoustic and radar rainfall rates is high. Spatial averaging of the radar rainfall rates in concentric circles over the mooring verifies the larger inherent spatial averaging of the rainfall signal with recording depth. For the PAL at 2000 m, the maximum correlation was at 3-4 km, suggesting a listening area for the acoustic rainfall measurement of roughly 30-50 km².
Łopata, Michał; Popielarczyk, Dariusz; Templin, Tomasz; Dunalska, Julita; Wiśniewski, Grzegorz; Bigaj, Izabela; Szymański, Daniel
2014-01-01
We investigated changes in the spatial distribution of phosphorus (P) and nitrogen (N) in the deep, mesotrophic Lake Hańcza. The raw data collection, supported by global navigation satellite system (GNSS) positioning, was conducted at 79 sampling points. A geostatistical method (kriging) was applied for spatial interpolation. Despite the relatively small area of the lake (3.04 km²), its compact shape (shore development index of 2.04) and low horizontal exchange of water (retention time 11.4 years), chemical gradients in the surface waters were found. The largest variation concerns the main biogenic element, phosphorus: the average value was 0.032 mg L⁻¹, with extreme values of 0.019 to 0.265 mg L⁻¹ (coefficient of variation 87%). Smaller differences are related to nitrogen compounds (0.452-1.424 mg L⁻¹ with an average value of 0.583 mg L⁻¹, coefficient of variation 20%). The parts of the lake that are fed by tributaries are the richest in phosphorus. The water quality of the oligo-mesotrophic Lake Hańcza has been deteriorating in recent years. Our results indicate that inferences about trends in the evolution of the examined lake's trophic status should be based on an analysis of the data that takes into account the local variation in water chemistry.
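Ordinary kriging of the kind used for these interpolation maps can be sketched in a few lines; the spherical variogram and its parameters below are illustrative assumptions, not the fitted model from this study:

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, sill=1.0, rng=2.0, nugget=0.0):
    # Predict z at location xy0 from observations (xy, z) under an
    # assumed spherical variogram with the given sill, range, and nugget.
    def gamma(h):
        h = np.asarray(h, dtype=float)
        g = np.where(h < rng,
                     nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3),
                     sill)
        return np.where(h == 0.0, 0.0, g)

    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    # Ordinary kriging system, with a Lagrange multiplier forcing weights to sum to 1.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - xy0, axis=1))
    w = np.linalg.solve(A, b)[:n]
    return w @ z
```

With a zero nugget the predictor is exact at the sampling points, which is the behavior wanted for mapping measured concentrations.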
Shi, Weisong; Gao, Wanrong; Chen, Chaoliang; Yang, Victor X D
2017-12-01
In this paper, a differential standard deviation of log-scale intensity (DSDLI) based optical coherence tomography angiography (OCTA) method is presented for calculating microvascular images of human skin. The DSDLI algorithm calculates the variance in difference images of two consecutive log-scale intensity based structural images from the same position along the depth direction to contrast blood flow. The en face microvascular images were then generated by calculating the standard deviation of the differential log-scale intensities within a specific depth range, resulting in an improvement in spatial resolution and SNR in microvascular images compared to speckle variance OCT and the power intensity differential method. The performance of DSDLI was verified by both phantom and in vivo experiments. In the in vivo experiments, a self-adaptive sub-pixel image registration algorithm was performed to remove bulk motion noise, where a 2D Fourier transform was utilized to generate new images with a spatial interval equal to half of the distance between two pixels in both the fast-scanning and depth directions. The SNRs of signals of flowing particles are improved by 7.3 dB and 6.8 dB on average in phantom and in vivo experiments, respectively, while the average spatial resolution of images of in vivo blood vessels is increased by 21%. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
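The DSDLI computation as described, differences of consecutive log-intensity frames followed by a standard deviation over a depth range, can be sketched as follows; the array layout and depth-range convention are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def dsdli(frames, depth_range):
    # frames: (n_frames, n_depth, n_lateral) log-scale intensity B-scans
    # acquired at the same position; flow decorrelates consecutive frames.
    diffs = np.diff(frames, axis=0)          # difference images of consecutive frames
    z0, z1 = depth_range
    # std of the differential log intensities over the depth range -> en face profile
    return diffs[:, z0:z1, :].std(axis=(0, 1))
```

Static tissue yields near-zero values, while decorrelating flow yields large ones, which is what contrasts the vasculature.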
Schweizer, Manuel; Ayé, Raffael; Kashkarov, Roman; Roth, Tobias
2014-01-01
Although phylogenetic diversity has been suggested to be relevant from a conservation point of view, its role is still limited in applied nature conservation. Recently, the practice of investing conservation resources based on threatened species was identified as a reason for the slow integration of phylogenetic diversity in nature conservation planning. One of the main arguments is based on the observation that threatened species are not evenly distributed over the phylogenetic tree. However, this argument seems to dismiss the fact that conservation action is a spatially explicit process, and even if threatened species are not evenly distributed over the phylogenetic tree, the occurrence of threatened species could still indicate areas with above-average phylogenetic diversity and consequently could protect phylogenetic diversity. Here we aim to study the selection of important bird areas in Central Asia, which were nominated largely based on the presence of threatened bird species. We show that although threatened species occurring in Central Asia do not capture phylogenetically more distinct species than expected by chance, the current spatially explicit conservation approach of selecting important bird areas covers above-average taxonomic and phylogenetic diversity of breeding and wintering birds. We conclude that the spatially explicit processes of conservation actions need to be considered in the current discussion of whether new prioritization methods are needed to complement conservation action based on threatened species. PMID:25337861
Flow over bedforms in a large sand-bed river: A field investigation
Holmes, Robert R.; Garcia, Marcelo H.
2008-01-01
An experimental field study of flows over bedforms was conducted on the Missouri River near St. Charles, Missouri. Detailed velocity data were collected under two different flow conditions along bedforms in this sand-bed river. The large river-scale data reflect flow characteristics similar to those of laboratory-scale flows, with flow separation occurring downstream of the bedform crest and flow reattachment on the stoss side of the next downstream bedform. Wave-like responses of the flow to the bedforms were detected, with the velocity decreasing throughout the flow depth over bedform troughs, and the velocity increasing over bedform crests. Local and spatially averaged velocity distributions were logarithmic for both datasets. The reach-wise spatially averaged vertical-velocity profile from the standard velocity-defect model was evaluated. The vertically averaged mean flow velocities for the velocity-defect model were within 5% of the measured values, and estimated spatially averaged point velocities were within 10% for the upper 90% of the flow depth. The velocity-defect model, neglecting the wake function, was evaluated and found to estimate the vertically averaged mean velocity within 1% of the measured values.
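The logarithmic velocity distributions reported here are conventionally fitted with the law of the wall, u(z) = (u*/κ) ln(z/z0). A sketch of such a fit; the synthetic profile in the usage is illustrative, not field data:

```python
import numpy as np

KAPPA = 0.41  # von Karman constant

def fit_log_law(z, u):
    # Least-squares fit of u = (u_star / KAPPA) * ln(z / z0),
    # which is linear in ln(z): u = a*ln(z) + b,
    # so u_star = KAPPA * a and z0 = exp(-b / a).
    a, b = np.polyfit(np.log(z), u, 1)
    return KAPPA * a, np.exp(-b / a)
```

The recovered shear velocity u* and roughness length z0 summarize the profile in the same way the velocity-defect comparisons above do.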
Topping, David J.; Rubin, David M.; Wright, Scott A.; Melis, Theodore S.
2011-01-01
Several common methods for measuring suspended-sediment concentration in rivers in the United States use depth-integrating samplers to collect a velocity-weighted suspended-sediment sample in a subsample of a river cross section. Because depth-integrating samplers are always moving through the water column as they collect a sample, and can collect only a limited volume of water and suspended sediment, they collect only minimally time-averaged data. Four sources of error exist in the field use of these samplers: (1) bed contamination, (2) pressure-driven inrush, (3) inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration, and (4) inadequate time averaging. The first two of these errors arise from misuse of suspended-sediment samplers, and the third has been the subject of previous study using data collected in the sand-bedded Middle Loup River in Nebraska. Of these four sources of error, the least understood source of error arises from the fact that depth-integrating samplers collect only minimally time-averaged data. To evaluate this fourth source of error, we collected suspended-sediment data between 1995 and 2007 at four sites on the Colorado River in Utah and Arizona, using a P-61 suspended-sediment sampler deployed in both point- and one-way depth-integrating modes, and D-96-A1 and D-77 bag-type depth-integrating suspended-sediment samplers. These data indicate that the minimal duration of time averaging during standard field operation of depth-integrating samplers leads to an error that is comparable in magnitude to that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. This random error arising from inadequate time averaging is positively correlated with grain size and does not largely depend on flow conditions or, for a given size class of suspended sediment, on elevation above the bed. 
Averaging over time scales >1 minute is the likely minimum duration required to result in substantial decreases in this error. During standard two-way depth integration, a depth-integrating suspended-sediment sampler collects a sample of the water-sediment mixture during two transits at each vertical in a cross section: one transit while moving from the water surface to the bed, and another transit while moving from the bed to the water surface. As the number of transits is doubled at an individual vertical, this error is reduced by ~30 percent in each size class of suspended sediment. For a given size class of suspended sediment, the error arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration depends only on the number of verticals collected, whereas the error arising from inadequate time averaging depends on both the number of verticals collected and the number of transits collected at each vertical. Summing these two errors in quadrature yields a total uncertainty in an equal-discharge-increment (EDI) or equal-width-increment (EWI) measurement of the time-averaged velocity-weighted suspended-sediment concentration in a river cross section (exclusive of any laboratory-processing errors). By virtue of how the number of verticals and transits influences the two individual errors within this total uncertainty, the error arising from inadequate time averaging slightly dominates that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. Adding verticals to an EDI or EWI measurement is slightly more effective in reducing the total uncertainty than adding transits only at each vertical, because a new vertical contributes both temporal and spatial information. 
However, because collection of depth-integrated samples at more transits at each vertical is generally easier and faster than at more verticals, addition of a combination of verticals and transits is likely a more practical approach to reducing the total uncertainty in most field situations.
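The error budget described above, two independent errors summed in quadrature with the time-averaging error shrinking by roughly 30 percent per doubling of transits, is consistent with inverse-square-root scaling. A sketch under those assumed scaling forms (the report's actual fitted relations may differ):

```python
import math

def total_uncertainty(e_spatial_1, e_time_1, n_verticals, n_transits_per_vertical):
    # Assumed scaling: the cross-stream spatial error depends only on the
    # number of verticals; the time-averaging error depends on the total
    # number of transits. 1/sqrt(2) ~ 0.71 reproduces the ~30% reduction
    # per doubling of transits noted in the text.
    e_spatial = e_spatial_1 / math.sqrt(n_verticals)
    e_time = e_time_1 / math.sqrt(n_verticals * n_transits_per_vertical)
    return math.hypot(e_spatial, e_time)  # quadrature sum
```

Under this form, adding a vertical reduces both components while adding transits reduces only the time-averaging component, matching the report's conclusion about which addition is more effective.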
Definition of the Spatial Resolution of X-Ray Microanalysis in Thin Foils
NASA Technical Reports Server (NTRS)
Williams, D. B.; Michael, J. R.; Goldstein, J. I.; Romig, A. D., Jr.
1992-01-01
The spatial resolution of X-ray microanalysis in thin foils is defined in terms of the incident electron beam diameter and the average beam broadening. The beam diameter is defined as the full width tenth maximum of a Gaussian intensity distribution. The spatial resolution is calculated by a convolution of the beam diameter and the average beam broadening. This definition of the spatial resolution can be related simply to experimental measurements of composition profiles across interphase interfaces. Monte Carlo calculations using a high-speed parallel supercomputer show good agreement with this definition of the spatial resolution and calculations based on this definition. The agreement is good over a range of specimen thicknesses and atomic number, but is poor when excessive beam tailing distorts the assumed Gaussian electron intensity distributions. Beam tailing occurs in low-Z materials because of fast secondary electrons and in high-Z materials because of plural scattering.
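For Gaussian profiles, the convolution in this definition reduces to a quadrature sum of widths, since convolving two Gaussians adds their variances. A sketch (FWTM-scaled widths, all in the same length units):

```python
import math

def spatial_resolution(d_fwtm, beam_broadening):
    # Width of the convolution of two Gaussian distributions:
    # variances add, so the scaled widths combine in quadrature.
    return math.sqrt(d_fwtm ** 2 + beam_broadening ** 2)
```

This quadrature form also makes clear why the definition degrades when beam tailing distorts the assumed Gaussian intensity distributions.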
Turbulent dispersal promotes species coexistence
Berkley, Heather A; Kendall, Bruce E; Mitarai, Satoshi; Siegel, David A
2010-01-01
Several recent advances in coexistence theory emphasize the importance of space and dispersal, but focus on average dispersal rates and require spatial heterogeneity, spatio-temporal variability or dispersal-competition tradeoffs to allow coexistence. We analyse a model with stochastic juvenile dispersal (driven by turbulent flow in the coastal ocean) and show that a low-productivity species can coexist with a high-productivity species by having dispersal patterns sufficiently uncorrelated from those of its competitor, even though, on average, dispersal statistics are identical and subsequent demography and competition is spatially homogeneous. This produces a spatial storage effect, with an ephemeral partitioning of a ‘spatial niche’, and is the first demonstration of a physical mechanism for a pure spatiotemporal environmental response. ‘Turbulent coexistence’ is widely applicable to marine species with pelagic larval dispersal and relatively sessile adult life stages (and perhaps some wind-dispersed species) and complements other spatial and temporal storage effects previously documented for such species. PMID:20455921
Thompson, E.M.; Wald, D.J.
2012-01-01
Despite obvious limitations as a proxy for site amplification, the use of time-averaged shear-wave velocity over the top 30 m (VS30) remains widely practiced, most notably through its use as an explanatory variable in ground motion prediction equations (and thus hazard maps and ShakeMaps, among other applications). As such, we are developing an improved strategy for producing VS30 maps given the common observational constraints. Using the abundant VS30 measurements in Taiwan, we compare alternative mapping methods that combine topographic slope, surface geology, and spatial correlation structure. The different VS30 mapping algorithms are distinguished by the way that slope and geology are combined to define a spatial model of VS30. We consider the globally applicable slope-only model as a baseline to which we compare two methods of combining both slope and geology. For both hybrid approaches, we model spatial correlation structure of the residuals using the kriging-with-a-trend technique, which brings the map into closer agreement with the observations. Cross validation indicates that we can reduce the uncertainty of the VS30 map by up to 16% relative to the slope-only approach.
Improving the surface metrology accuracy of optical profilers by using multiple measurements
NASA Astrophysics Data System (ADS)
Xu, Xudong; Huang, Qiushi; Shen, Zhengxiang; Wang, Zhanshan
2016-10-01
The performance of high-resolution optical systems is affected by small-angle scattering at the mid-spatial-frequency irregularities of the optical surface. Characterizing these irregularities is, therefore, important. However, surface measurements obtained with optical profilers are influenced by additive white noise, as indicated by the heavy-tail effect observable in their power spectral density (PSD). A multiple-measurement method is used to reduce the effects of white noise by averaging individual measurements. The intensity of white noise is determined using a model based on the theoretical PSD of fractal surface measurements with additive white noise. The intensity of the residual white noise decreases as the number of averaged measurements increases. Using multiple measurements also increases the highest observed spatial frequency; this increase is derived and calculated. Additionally, the accuracy obtained using multiple measurements is carefully studied, with analysis of both the residual reference error after calibration and the random errors appearing in the range of measured spatial frequencies. The resulting insights into the effects of white noise in optical profiler measurements, and the methods to mitigate them, may prove invaluable for improving the quality of surface metrology with optical profilers.
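The core of the multiple-measurement method, additive white noise averaging down as roughly 1/sqrt(N) while the surface signal is unchanged, can be demonstrated on synthetic data; the profile and noise level below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
surface = np.sin(np.linspace(0.0, 20.0 * np.pi, 2048))   # stand-in "true" profile
n_meas = 16
# each measurement = surface + independent additive white noise
scans = surface + rng.normal(0.0, 0.5, size=(n_meas, surface.size))

noise_single = (scans[0] - surface).std()
noise_avg = (scans.mean(axis=0) - surface).std()   # expect ~ noise_single / 4
```

The same 1/sqrt(N) behavior shows up in the PSD as a lowering of the white-noise floor, which is what shrinks the heavy tail.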
A factor analysis of the SSQ (Speech, Spatial, and Qualities of Hearing Scale).
Akeroyd, Michael A; Guy, Fiona H; Harrison, Dawn L; Suller, Sharon L
2014-02-01
The speech, spatial, and qualities of hearing questionnaire (SSQ) is a self-report test of auditory disability. The 49 items ask how well a listener would do in many complex listening situations illustrative of real life. The scores on the items are often combined into the three main sections or into 10 pragmatic subscales. We report here a factor analysis of the SSQ that we conducted to further investigate its statistical properties and to determine its structure. Statistical factor analysis of questionnaire data, using parallel analysis to determine the number of factors to retain, oblique rotation of factors, and a bootstrap method to estimate the confidence intervals. 1220 people who have attended MRC IHR over the last decade. We found three clear factors, essentially corresponding to the three main sections of the SSQ. They are termed "speech understanding", "spatial perception", and "clarity, separation, and identification". Thirty-five of the SSQ questions were included in the three factors. There was partial evidence for a fourth factor, "effort and concentration", representing two more questions. These results aid in the interpretation and application of the SSQ and indicate potential methods for generating average scores.
Scaling effect of fraction of vegetation cover retrieved by algorithms based on linear mixture model
NASA Astrophysics Data System (ADS)
Obata, Kenta; Miura, Munenori; Yoshioka, Hiroki
2010-08-01
Differences in spatial resolution among sensors have been a source of error in satellite data products, known as a scaling effect. This study investigates the mechanism of the scaling effect on the fraction of vegetation cover (FVC) retrieved by a linear mixture model that employs NDVI as one of the constraints. The scaling effect is induced by differences in texture and by differences between the true endmember spectra and the endmember spectra assumed during retrievals. The mechanism of the scaling effect was analyzed by focusing on the monotonic behavior of spatially averaged FVC as a function of spatial resolution. The number of endmembers is limited to two so that the investigation can proceed analytically. Although the spatially averaged NDVI varies monotonically with spatial resolution, the corresponding FVC values do not always vary monotonically. The conditions under which the averaged FVC varies monotonically for a certain sequence of spatial resolutions were derived analytically. The increasing or decreasing trend of the monotonic behavior can be predicted from the true and assumed endmember spectra of the vegetation and non-vegetation classes, regardless of the distribution of the vegetation class within a fixed area. The results imply that the scaling effect on FVC is more complicated than that on NDVI, since, unlike NDVI, FVC becomes non-monotonic under a certain condition determined by the true and assumed endmember spectra.
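The two-endmember retrieval analyzed here takes the familiar linear mixture (dimidiate pixel) form; in the sketch below, the assumed soil and vegetation endmember NDVI values are illustrative:

```python
import numpy as np

def fvc_from_ndvi(ndvi, ndvi_soil=0.05, ndvi_veg=0.85):
    # Two-endmember linear mixture: NDVI = fvc*NDVI_veg + (1 - fvc)*NDVI_soil.
    # A mismatch between these assumed endmembers and the true ones is one
    # source of the scaling effect discussed in the text.
    fvc = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(fvc, 0.0, 1.0)
```

Applying this pixel-wise at fine resolution and then averaging, versus averaging NDVI first and then retrieving, is exactly the comparison whose monotonicity the study analyzes.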
Cai, Yefeng; Wu, Ming; Yang, Jun
2014-02-01
This paper describes a method for focusing the reproduced sound in the bright zone without disturbing other people in the dark zone in personal audio systems. The proposed method combines the least-squares and acoustic contrast criteria. A constrained parameter is introduced to tune the balance between two performance indices, namely, the acoustic contrast and the spatial average error. An efficient implementation of this method using convex optimization is presented. Offline simulations and real-time experiments using a linear loudspeaker array are conducted to evaluate the performance of the presented method. Results show that compared with the traditional acoustic contrast control method, the proposed method can improve the flatness of response in the bright zone by sacrificing the level of acoustic contrast.
Increased fMRI Sensitivity at Equal Data Burden Using Averaged Shifted Echo Acquisition
Witt, Suzanne T.; Warntjes, Marcel; Engström, Maria
2016-01-01
There is growing evidence as to the benefits of collecting BOLD fMRI data with increased sampling rates. However, many of the acquisition techniques developed to collect BOLD data with ultra-short TRs require hardware, software, and non-standard analytic pipelines that may not be accessible to all researchers. We propose to incorporate the method of shifted echo into a standard multi-slice, gradient echo EPI sequence to achieve a higher sampling rate, with a TR of <1 s and acceptable spatial resolution. We further propose to incorporate temporal averaging of consecutively acquired EPI volumes to both ameliorate the reduced temporal signal-to-noise inherent in ultra-fast EPI sequences and reduce the data burden. BOLD data were collected from 11 healthy subjects performing a simple, event-related visual-motor task with four different EPI sequences: (1) a reference EPI sequence with TR = 1440 ms, (2) a shifted echo EPI sequence with TR = 700 ms, (3) a shifted echo EPI sequence with every two consecutively acquired EPI volumes averaged and an effective TR = 1400 ms, and (4) a shifted echo EPI sequence with every four consecutively acquired EPI volumes averaged and an effective TR = 2800 ms. Both temporally averaged sequences exhibited increased temporal signal-to-noise over the shifted echo EPI sequence. The shifted echo sequence with every two EPI volumes averaged also had significantly increased BOLD signal change compared with the other three sequences, while the shifted echo sequence with every four EPI volumes averaged had significantly decreased BOLD signal change compared with the other three sequences. The results indicated that incorporating the method of shifted echo into a standard multi-slice EPI sequence is a viable method for achieving an increased sampling rate for collecting event-related BOLD data.
Further, averaging every two consecutively acquired EPI volumes significantly increased the measured BOLD signal change and the subsequently calculated activation map statistics. PMID:27932947
New spatial upscaling methods for multi-point measurements: From normal to p-normal
NASA Astrophysics Data System (ADS)
Liu, Feng; Li, Xin
2017-12-01
Careful attention must be given to determining whether the geophysical variables of interest are normally distributed, since the assumption of a normal distribution may not accurately reflect the probability distribution of some variables. As a generalization of the normal distribution, the p-normal distribution and its corresponding maximum likelihood estimation (the least power estimation, LPE) were introduced into upscaling methods for multi-point measurements. Six methods, including three normal-based methods, i.e., the arithmetic average, least-squares estimation, and block kriging, and three p-normal-based methods, i.e., LPE, geostatistical LPE and inverse-distance-weighted LPE, are compared in two types of experiments: a synthetic experiment to evaluate the performance of the upscaling methods in terms of accuracy, stability and robustness, and a real-world experiment to produce upscaling estimates using soil moisture data obtained from multi-scale observations. The results show that the p-normal-based methods produced lower mean absolute errors and outperformed the other techniques due to their universality and robustness. We conclude that introducing appropriate statistical parameters into an upscaling strategy can substantially improve the estimation, especially if the raw measurements are disorganized; however, further investigation is required to determine which parameter is the most effective among variance, spatial correlation information and the parameter p.
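The least power estimation (LPE) underlying these methods minimizes Σ|x_i − m|^p over the point estimate m, and a common way to solve this is iteratively reweighted least squares. A sketch; the IRLS solver and its zero-residual handling are implementation choices of this illustration, not necessarily the authors':

```python
import numpy as np

def least_power_estimate(x, p, tol=1e-10, max_iter=500):
    # Upscaled point estimate m minimizing sum_i |x_i - m|^p (p-normal MLE).
    # p = 2 recovers the arithmetic mean; p = 1 approaches the median.
    m = float(np.mean(x))
    for _ in range(max_iter):
        with np.errstate(divide="ignore"):
            w = np.abs(x - m) ** (p - 2.0)
        w = np.where(np.isfinite(w), w, 0.0)  # drop exact-zero residuals when p < 2
        m_new = float(np.sum(w * x) / np.sum(w))
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m
```

Smaller p down-weights outlying measurements, which is the source of the robustness advantage reported for the p-normal-based methods.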
NASA Astrophysics Data System (ADS)
Bair, Edward H.; Abreu Calfa, Andre; Rittger, Karl; Dozier, Jeff
2018-05-01
In the mountains, snowmelt often provides most of the runoff. Operational estimates use imagery from optical and passive microwave sensors, but each has its limitations. An accurate approach, which we validate in Afghanistan and the Sierra Nevada USA, reconstructs spatially distributed snow water equivalent (SWE) by calculating snowmelt backward from a remotely sensed date of disappearance. However, reconstructed SWE estimates are available only retrospectively; they do not provide a forecast. To estimate SWE throughout the snowmelt season, we consider physiographic and remotely sensed information as predictors and reconstructed SWE as the target. The period of analysis matches the AMSR-E radiometer's lifetime from 2003 to 2011, for the months of April through June. The spatial resolution of the predictions is 3.125 km, to match the resolution of a microwave brightness temperature product. Two machine learning techniques - bagged regression trees and feed-forward neural networks - produced similar mean results, with 0-14 % bias and 46-48 mm RMSE on average. Nash-Sutcliffe efficiencies averaged 0.68 for all years. Daily SWE climatology and fractional snow-covered area are the most important predictors. We conclude that these methods can accurately estimate SWE during the snow season in remote mountains, and thereby provide an independent estimate to forecast runoff and validate other methods to assess the snow resource.
Kim, Anna J.; Takahashi, Lois; Wiebe, Douglas J.
2015-01-01
Objective: Social determinants of health may be substantially affected by spatial factors, which together may explain the persistence of health inequities. Clustering of possible sources of negative health and social outcomes points to a spatial focus for future interventions. We analyzed the spatial clustering of sex work businesses in Southern California to examine where and why they cluster. We explored economic and legal factors as possible explanations of clustering. Methods: We manually coded data from a website used by paying members to post reviews of female massage parlor workers. We identified clusters of sexually oriented massage parlor businesses using spatial autocorrelation tests. We conducted spatial regression using census tract data to identify predictors of clustering. Results: A total of 889 venues were identified. Clusters of tracts having higher-than-expected numbers of sexually oriented massage parlors (“hot spots”) were located outside downtowns. These hot spots were characterized by a higher proportion of adult males, a higher proportion of households below the federal poverty level, and a smaller average household size. Conclusion: Sexually oriented massage parlors in Los Angeles and Orange counties cluster in particular neighborhoods. More research is needed to ascertain the causal factors of such clusters and how interventions can be designed to leverage these spatial factors. PMID:26327731
Spatio-temporal patterns of soil water storage under dryland agriculture at the watershed scale
NASA Astrophysics Data System (ADS)
Ibrahim, Hesham M.; Huggins, David R.
2011-07-01
Spatio-temporal patterns of soil water are major determinants of crop yield potential in dryland agriculture and can serve as the basis for delineating precision management zones. Soil water patterns can vary significantly due to differences in seasonal precipitation, soil properties and topographic features. In this study we used empirical orthogonal function (EOF) analysis to characterize the spatial variability of soil water at the Washington State University Cook Agronomy Farm (CAF) near Pullman, WA. During the period 1999-2006, the CAF was divided into three roughly equal blocks (A, B, and C), and soil water at 0.3 m intervals to a depth of 1.5 m was measured gravimetrically at approximately one third of the 369 geo-referenced points on the 37-ha watershed. These data were combined with terrain attributes, soil bulk density and apparent soil conductivity (ECa). The first EOF generated from the three blocks explained 73-76% of the soil water variability. Field patterns of soil water based on EOF interpolation varied between wet and dry conditions during spring and fall seasons. Under wet conditions, elevation and wetness index were the dominant factors regulating the spatial patterns of soil water. As the soil dries out during summer and fall, soil properties (ECa and bulk density) become more important in explaining the spatial patterns of soil water. The EOFs generated from block B, which represents average topographic and soil properties, provided better estimates of soil water over the entire watershed, with larger Nash-Sutcliffe Coefficient of Efficiency (NSCE) values, especially when the first two EOFs were retained. Including more than the first two EOFs did not significantly increase the NSCE of the soil water estimate. The EOF interpolation method for estimating soil water variability worked slightly better during spring than during fall, with average NSCE values of 0.23 and 0.20, respectively.
The predictable patterns of stored soil water in the spring could serve as the basis for delineating precision management zones, as yield potential is largely driven by water availability. The EOF-based method has the advantage of estimating the soil water variability from soil water data at several measurement times, whereas regression methods use soil water measurements at a single time only. The EOF-based method can also be used to estimate soil water at any time other than the measurement times, assuming the average soil water of the watershed is known at that time.
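EOF analysis of a space-time soil water matrix reduces to a singular value decomposition of the temporal anomalies. A minimal sketch, with illustrative variable names:

```python
import numpy as np

def eof_analysis(data):
    # data: (n_times, n_points) observations. EOFs are the spatial patterns,
    # PCs the time coefficients, var_explained the variance fraction per mode.
    anomalies = data - data.mean(axis=0)
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    eofs = vt                      # rows: spatial patterns (modes)
    pcs = u * s                    # columns: expansion coefficients in time
    var_explained = s ** 2 / np.sum(s ** 2)
    return eofs, pcs, var_explained
```

Retaining the first one or two modes and interpolating their amplitudes is the essence of the EOF-based estimation described above.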
Testing averaged cosmology with type Ia supernovae and BAO data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos, B.; Alcaniz, J.S.; Coley, A.A.
An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.
Keihaninejad, Shiva; Ryan, Natalie S; Malone, Ian B; Modat, Marc; Cash, David; Ridgway, Gerard R; Zhang, Hui; Fox, Nick C; Ourselin, Sebastien
2012-01-01
Tract-based spatial statistics (TBSS) is a popular method for the analysis of diffusion tensor imaging data. TBSS focuses on differences in white matter voxels with high fractional anisotropy (FA), representing the major fibre tracts, through registering all subjects to a common reference and the creation of a FA skeleton. This work considers the effect of choice of reference in the TBSS pipeline, which can be a standard template, an individual subject from the study, a study-specific template or a group-wise average. While TBSS attempts to overcome registration error by searching the neighbourhood perpendicular to the FA skeleton for the voxel with maximum FA, this projection step may not compensate for large registration errors that might occur in the presence of pathology such as atrophy in neurodegenerative diseases. This makes registration performance and choice of reference an important issue. Substantial work in the field of computational anatomy has shown the use of group-wise averages to reduce biases while avoiding the arbitrary selection of a single individual. Here, we demonstrate the impact of the choice of reference on: (a) specificity (b) sensitivity in a simulation study and (c) a real-world comparison of Alzheimer's disease patients to controls. In (a) and (b), simulated deformations and decreases in FA were applied to control subjects to simulate changes of shape and WM integrity similar to what would be seen in AD patients, in order to provide a "ground truth" for evaluating the various methods of TBSS reference. Using a group-wise average atlas as the reference outperformed other references in the TBSS pipeline in all evaluations.
Fodor, Nándor; Foskolos, Andreas; Topp, Cairistiona F E; Moorby, Jon M; Pásztor, László; Foyer, Christine H
2018-01-01
Dairy farming is one of the most important sectors of United Kingdom (UK) agriculture. It faces major challenges due to climate change, which will have direct impacts on dairy cows as a result of heat stress. In the absence of adaptations, this could potentially lead to considerable milk loss. Using an 11-member climate projection ensemble, as well as an ensemble of 18 milk loss estimation methods, temporal changes in milk production of UK dairy cows were estimated for the 21st century at a 25 km resolution in a spatially explicit way. While increases in UK temperatures are projected to lead to relatively low average annual milk losses, even for southern UK regions (<180 kg/cow), the 'hottest' 25×25 km grid cell in the hottest year of the 2090s showed an annual milk loss exceeding 1300 kg/cow. This figure represents approximately 17% of the potential milk production of today's average cow. Despite the potentially considerable inter-annual variability of annual milk loss, as well as the large differences between the climate projections, the variety of calculation methods is likely to introduce even greater uncertainty into milk loss estimations. To address this issue, a novel, more biologically appropriate mechanism of estimating milk loss is proposed that provides more realistic future projections. We conclude that South West England is the region most vulnerable to climate change economically, because it is characterised by a high dairy herd density and therefore potentially high heat stress-related milk loss. In the absence of mitigation measures, estimated heat stress-related annual income loss for this region by the end of this century may reach £13.4M in average years and £33.8M in extreme years.
Tidal and tidally averaged circulation characteristics of Suisun Bay, California
Smith, Lawrence H.; Cheng, Ralph T.
1987-01-01
Availability of extensive field data permitted realistic calibration and validation of a hydrodynamic model of tidal circulation and salt transport for Suisun Bay, California. Suisun Bay is a partially mixed embayment of northern San Francisco Bay located just seaward of the Sacramento-San Joaquin Delta. The model employs a variant of an alternating direction implicit finite-difference method to solve the hydrodynamic equations and an Eulerian-Lagrangian method to solve the salt transport equation. An upwind formulation of the advective acceleration terms of the momentum equations was employed to avoid oscillations in the tidally averaged velocity field produced by central spatial differencing of these terms. Simulation results of tidal circulation and salt transport demonstrate that tides and the complex bathymetry determine the patterns of tidal velocities and that net changes in the salinity distribution over a few tidal cycles are small despite large changes during each tidal cycle. Computations of tidally averaged circulation suggest that baroclinic and wind effects are important influences on tidally averaged circulation during low freshwater-inflow conditions. Exclusion of baroclinic effects would lead to overestimation of freshwater inflow by several hundred m³/s for a fixed set of model boundary conditions. Likewise, exclusion of wind would cause an underestimation of flux rates between shoals and channels by 70–100%.
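The upwind treatment of the advective terms described above can be illustrated in one dimension. This is a generic sketch, not the Suisun Bay model's actual discretization; the grid, velocity field, and time step below are hypothetical:

```python
import numpy as np

def advect_upwind(c, u, dx, dt):
    """One explicit time step of 1-D advection using first-order upwind
    differencing; the one-sided difference follows the sign of the velocity u,
    which suppresses the oscillations central differencing can produce."""
    dcdx = np.where(
        u > 0,
        (c - np.roll(c, 1)) / dx,    # backward difference where u > 0
        (np.roll(c, -1) - c) / dx,   # forward difference where u < 0
    )
    return c - dt * u * dcdx

# Advect a step profile with uniform positive velocity (periodic boundary).
c = np.where(np.arange(100) < 50, 1.0, 0.0)
u = np.full(100, 1.0)
c_new = advect_upwind(c, u, dx=1.0, dt=0.5)  # CFL number 0.5, stable
```

With a CFL number at or below 1, the upwind step introduces no new extrema, which is exactly the property that motivated its use over central differencing.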
Martínez, Francisco J; Márquez, Andrés; Gallego, Sergi; Ortuño, Manuel; Francés, Jorge; Pascual, Inmaculada; Beléndez, Augusto
2015-02-20
Parallel-aligned (PA) liquid-crystal on silicon (LCoS) microdisplays are especially appealing in a wide range of spatial light modulation applications since they enable phase-only operation. Recently we proposed a novel polarimetric method, based on Stokes polarimetry, enabling the characterization of their linear retardance and the magnitude of their associated phase fluctuations or flicker, exhibited by many LCoS devices. In this work we apply the calibrated values obtained with this technique to show their capability to predict the performance of spatially varying phase multilevel elements displayed on the PA-LCoS device. Specifically, we address a series of multilevel phase blazed gratings. We analyze both their average diffraction efficiency ("static" analysis) and its associated time fluctuation ("dynamic" analysis). Two different electrical configuration files with different degrees of flicker are applied in order to evaluate the actual influence of flicker on the expected performance of the diffractive optical elements addressed. We obtain a good agreement between simulation and experiment, thus demonstrating the predictive capability of the calibration provided by the average Stokes polarimetric technique. Additionally, we find that electrical configurations with a flicker retardance amplitude of less than 30° may not influence the performance of the blazed gratings. In general, we demonstrate that the influence of flicker greatly diminishes when the number of quantization levels in the optical element increases.
Modulation of a methane Bunsen flame by upstream perturbations
NASA Astrophysics Data System (ADS)
de Souza, T. Cardoso; Bastiaans, R. J. M.; De Goey, L. P. H.; Geurts, B. J.
2017-04-01
In this paper the effects of an upstream spatially periodic modulation acting on a turbulent Bunsen flame are investigated using direct numerical simulations of the Navier-Stokes equations coupled with the flamelet generated manifold (FGM) method to parameterise the chemistry. The premixed Bunsen flame is spatially agitated with a set of coherent large-scale structures of specific wave-number, K. The response of the premixed flame to the external modulation is characterised in terms of time-averaged properties, e.g. the average flame height ⟨H⟩ and the flame surface wrinkling ⟨W⟩. Results show that the flame response is notably selective to the size of the length scales used for agitation. For example, both flame quantities ⟨H⟩ and ⟨W⟩ present an optimal response, in comparison with an unmodulated flame, when the modulation scale is set to relatively low wave-numbers, 4π/L ≲ K ≲ 6π/L, where L is a characteristic scale. At the agitation scales where the optimal response is observed, the average flame height, ⟨H⟩, takes a clearly defined minimal value while the surface wrinkling, ⟨W⟩, presents an increase by more than a factor of 2 in comparison with the unmodulated reference case. Combined, these two response quantities indicate that there is an optimal scale for flame agitation and intensification of combustion rates in turbulent Bunsen flames.
Spatial Statistical Data Fusion (SSDF)
NASA Technical Reports Server (NTRS)
Braverman, Amy J.; Nguyen, Hai M.; Cressie, Noel
2013-01-01
As remote sensing for scientific purposes has transitioned from an experimental technology to an operational one, the selection of instruments has become more coordinated, so that the scientific community can exploit complementary measurements. However, technological and scientific heterogeneity across devices means that the statistical characteristics of the data they collect are different. The challenge addressed here is how to combine heterogeneous remote sensing data sets in a way that yields optimal statistical estimates of the underlying geophysical field, and provides rigorous uncertainty measures for those estimates. Different remote sensing data sets may have different spatial resolutions, different measurement error biases and variances, and other disparate characteristics. A state-of-the-art spatial statistical model was used to relate the true, but not directly observed, geophysical field to noisy, spatial aggregates observed by remote sensing instruments. The spatial covariances of the true field and the covariances of the true field with the observations were modeled. The observations are spatial averages of the true field values, over pixels, with different measurement noise superimposed. A kriging framework is used to infer optimal (minimum mean squared error and unbiased) estimates of the true field at point locations from pixel-level, noisy observations. A key feature of the spatial statistical model is the spatial mixed effects model that underlies it. The approach models the spatial covariance function of the underlying field using linear combinations of basis functions of fixed size. Approaches based on kriging require the inversion of very large spatial covariance matrices, and this is usually done by making simplifying assumptions about spatial covariance structure that simply do not hold for geophysical variables. In contrast, this method does not require these assumptions, and is also computationally much faster.
This method is fundamentally different from other approaches to data fusion for remote sensing data because it is inferential rather than merely descriptive. All approaches combine data in a way that minimizes some specified loss function, but most use more or less ad hoc criteria based on what looks good to the eye, or criteria that relate only to the data at hand.
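The computational advantage of a fixed-rank (basis-function) covariance model comes from the Sherman-Morrison-Woodbury identity: with r basis functions and n observations, only an r × r matrix is ever inverted. A minimal sketch under simplified assumptions (1-D domain, known random-effect covariance K, hypothetical Gaussian-bump basis), not the SSDF implementation itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 200, 10                         # n observations, r basis functions (r << n)
x = np.linspace(0.0, 1.0, n)
centers = np.linspace(0.0, 1.0, r)
S = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / 0.1) ** 2)  # n x r basis matrix
K = np.eye(r)                          # covariance of the r random effects (assumed known)
sigma2 = 0.25                          # measurement-noise variance

def smw_solve(S, K, sigma2, rhs):
    """Solve (S K S^T + sigma2*I) y = rhs while inverting only an r x r matrix,
    via (S K S^T + s I)^-1 = (I - S (s K^-1 + S^T S)^-1 S^T) / s."""
    A = sigma2 * np.linalg.inv(K) + S.T @ S          # r x r system
    return (rhs - S @ np.linalg.solve(A, S.T @ rhs)) / sigma2

# Simulate data z = smooth field + noise, then krige the field at the data sites.
eta = rng.standard_normal(r)
z = S @ eta + np.sqrt(sigma2) * rng.standard_normal(n)
y_hat = S @ (K @ (S.T @ smw_solve(S, K, sigma2, z)))  # best linear unbiased predictor
```

The identity replaces an n × n inversion (here 200 × 200, in practice millions) with an r × r one, which is the source of the speedup the abstract describes.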
Atmospheric turbulence profiling with SLODAR using multiple adaptive optics wavefront sensors.
Wang, Lianqi; Schöck, Matthias; Chanan, Gary
2008-04-10
The slope detection and ranging (SLODAR) method recovers atmospheric turbulence profiles from time averaged spatial cross correlations of wavefront slopes measured by Shack-Hartmann wavefront sensors. The Palomar multiple guide star unit (MGSU) was set up to test tomographic multiple guide star adaptive optics and provided an ideal test bed for SLODAR turbulence altitude profiling. We present the data reduction methods and SLODAR results from MGSU observations made in 2006. Wind profiling is also performed using delayed wavefront cross correlations along with SLODAR analysis. The wind profiling analysis is shown to improve the height resolution of the SLODAR method and in addition gives the wind velocities of the turbulent layers.
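The core SLODAR measurement, a time-averaged spatial cross-correlation of slope maps from two Shack-Hartmann sensors whose correlation-peak offsets map to layer altitudes, can be sketched as follows. The array shapes and FFT-based correlation are illustrative, not the MGSU data-reduction pipeline:

```python
import numpy as np

def slodar_xcorr(sa, sb):
    """Time-averaged 2-D spatial cross-correlation of slope maps.
    sa, sb: arrays of shape (t, ny, nx) -- t frames of ny x nx subaperture slopes.
    Returns a (2*ny-1, 2*nx-1) correlation map with zero offset at the centre."""
    t, ny, nx = sa.shape
    shape = (2 * ny - 1, 2 * nx - 1)
    out = np.zeros(shape)
    for k in range(t):
        a = sa[k] - sa[k].mean()
        b = sb[k] - sb[k].mean()
        fa = np.fft.fft2(a, shape)            # zero-padded FFTs yield the
        fb = np.fft.fft2(b, shape)            # full linear cross-correlation
        out += np.real(np.fft.ifft2(fa * np.conj(fb)))
    return np.fft.fftshift(out / t)

# Sanity check: correlating a sensor with itself must peak at zero offset,
# i.e. at the centre of the returned map.
rng = np.random.default_rng(1)
s = rng.standard_normal((50, 8, 8))
corr = slodar_xcorr(s, s)
```

Introducing a temporal delay between `sa` and `sb` before correlating, as in the wind-profiling analysis mentioned above, tracks how correlation peaks move with frozen turbulence.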
The importance of magnetic methods for soil mapping and process modelling. Case study in Ukraine
NASA Astrophysics Data System (ADS)
Menshov, Oleksandr; Pereira, Paulo; Kruglov, Oleksandr; Sukhorada, Anatoliy
2016-04-01
The correct planning of agricultural areas is fundamental for a sustainable future in Ukraine. After the recent political problems in Ukraine, new challenges have emerged regarding sustainability. At the same time, soil mapping and modelling are developing intensively all over the world (Pereira et al., 2015; Brevik et al., in press). Magnetic susceptibility (MS) methods are low cost and accurate for developing maps of agricultural areas, which are fundamental for Ukraine's economy. They allow the collection of a large amount of soil data, useful for a better understanding of the spatial distribution of soil properties. Recently, this method has been applied in other works in Ukraine and elsewhere (Jordanova et al., 2011; Menshov et al., 2015). The objective of this work is to study the spatial distribution of MS and humus content in the topsoils (0-5 cm) of two areas, the first located in Poltava region and the second in Kharkiv region. The results showed that MS depends on soil type, topography and anthropogenic influence. For the interpretation of the MS spatial distribution in topsoil we consider the frequency of and time since the last tillage, the tilth depth, fertilizing, and puddling related to the vehicle model. On average, the topsoil MS in these two cases is about 30-70×10⁻⁸ m³/kg. In Poltava region, undisturbed soil has average MS values of 40-50×10⁻⁸ m³/kg; in Kharkiv region, 50-60×10⁻⁸ m³/kg. The tilled soil of Poltava region has an average MS of 60×10⁻⁸ m³/kg, and that of Kharkiv region 70×10⁻⁸ m³/kg; tilled soils thus show higher MS than undisturbed ones. The correlation between MS and soil humus content is very high (up to 0.90) in both cases.
Brevik, E., Baumgarten, A., Calzolari, C., Miller, B., Pereira, P., Kabala, C., Jordán, A. Soil mapping, classification, and modelling: history and future directions. Geoderma (in press), doi:10.1016/j.geoderma.2015.05.017
Jordanova, D., Jordanova, N., Atanasova, A., Tsacheva, T., Petrov, P. (2011). Soil tillage erosion by using magnetism of soils - a case study from Bulgaria. Environ. Monit. Assess., 183, 381-394.
Menshov, O., Pereira, P., Kruglov, O. (2015). Spatial variability of soil magnetic susceptibility in an agricultural field located in Eastern Ukraine. Geophysical Research Abstracts, 17, EGU2015-578-2.
Pereira, P., Cerdà, A., Úbeda, X., Mataix-Solera, J., Arcenegui, V., Zavala, L. (2015). Modelling the impacts of wildfire on ash thickness in a short-term period. Land Degradation and Development, 26, 180-192. doi:10.1002/ldr.2195
A method of 3D object recognition and localization in a cloud of points
NASA Astrophysics Data System (ADS)
Bielicki, Jerzy; Sitnik, Robert
2013-12-01
The method proposed in this article is designed for the analysis of data in the form of clouds of points obtained directly from 3D measurements. It is intended for use in end-user applications that can be integrated directly with 3D scanning software. The method utilizes locally calculated feature vectors (FVs) in point cloud data. Recognition is based on comparison of the analyzed scene with a reference object library. A global descriptor in the form of a set of spatially distributed FVs is created for each reference model. During the detection process, the correlation of subsets of reference FVs with FVs calculated in the scene is computed. The features utilized in the algorithm are based on parameters that qualitatively estimate mean and Gaussian curvatures. Replacing differentiation with averaging in the curvature estimation makes the algorithm more resistant to discontinuities and poor quality of the input data. Utilization of the FV subsets allows the detection of partially occluded and cluttered objects in the scene, while the additional spatial information keeps the false positive rate at a reasonably low level.
NASA Astrophysics Data System (ADS)
Scherstjanoi, M.; Kaplan, J. O.; Thürig, E.; Lischke, H.
2013-02-01
Models of vegetation dynamics that are designed for application at spatial scales larger than individual forest gaps suffer from several limitations. Typically, either a population average approximation is used that results in unrealistic tree allometry and forest stand structure, or models have a high computational demand because they need to simulate both a series of age-based cohorts and a number of replicate patches to account for stochastic gap-scale disturbances. The detail required by the latter method increases the number of calculations by two to three orders of magnitude compared to the less realistic population average approach. In an effort to increase the efficiency of dynamic vegetation models without sacrificing realism, and to explore patterns of spatial scaling in forests, we developed a new method for simulating stand-replacing disturbances that is both accurate and 10-50x faster than approaches that use replicate patches. The GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) method works by postprocessing the output of deterministic, undisturbed simulations of a cohort-based vegetation model by deriving the distribution of patch ages at any point in time on the basis of a disturbance probability. With this distribution, the expected value of any output variable can be calculated from the output values of the deterministic undisturbed run at the time corresponding to the patch age. To account for temporal changes in model forcing, e.g., as a result of climate change, GAPPARD performs a series of deterministic simulations and interpolates between the results in the postprocessing step. We integrated the GAPPARD method in the forest models LPJ-GUESS and TreeM-LPJ, and evaluated these in a series of simulations along an altitudinal transect of an inner-alpine valley. 
With GAPPARD applied to LPJ-GUESS, results were not significantly different from the output of the original LPJ-GUESS model using 100 replicate patches, but simulation time was reduced by approximately a factor of 10. Our new method is therefore highly suited to rapidly approximating LPJ-GUESS results. It opens the opportunity for future studies over large spatial domains, and allows easier parameterization of tree species, faster identification of areas with interesting simulation results, and comparisons with large-scale datasets and forest models.
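The essence of the GAPPARD postprocessing step, weighting a single deterministic, undisturbed run by the patch-age distribution implied by a constant annual disturbance probability, can be sketched as follows. This is a simplified reading of the method; the geometric age distribution with the residual probability mass lumped into the oldest age class is an assumption of this sketch:

```python
import numpy as np

def gappard_expectation(undisturbed, p):
    """Expected landscape-average value of an output variable under an annual
    stand-replacing disturbance probability p. `undisturbed[a]` is the value
    at patch age a from one deterministic, undisturbed model run."""
    ages = np.arange(len(undisturbed))
    w = p * (1.0 - p) ** ages          # geometric patch-age distribution
    w[-1] = (1.0 - p) ** ages[-1]      # remaining mass goes to the oldest class
    return float(np.dot(w, np.asarray(undisturbed, dtype=float)))

# Example with a hypothetical biomass trajectory saturating with stand age:
biomass = 200.0 * (1.0 - np.exp(-np.arange(300) / 50.0))
expected_biomass = gappard_expectation(biomass, p=0.01)
```

Because the weights depend only on age and p, one undisturbed run replaces the many stochastic replicate patches, which is where the order-of-magnitude speedup comes from.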
Correlation of gravestone decay and air quality 1960-2010
NASA Astrophysics Data System (ADS)
Mooers, H. D.; Carlson, M. J.; Harrison, R. M.; Inkpen, R. J.; Loeffler, S.
2017-03-01
Evaluation of spatial and temporal variability in surface recession of lead-lettered Carrara marble gravestones provides a quantitative measure of acid flux to the stone surfaces and is closely related to local land use and air quality. Correlation of stone decay, land use, and air quality for the period after 1960, when reliable estimates of atmospheric pollution are available, is evaluated. Gravestone decay and SO2 measurements are interpolated spatially using deterministic and geostatistical techniques. A general lack of spatial correlation was identified, and therefore a land-use-based technique for correlating stone decay and air quality is employed. Decadally averaged stone decay is highly correlated with land use averaged spatially over an optimum radius of ≈7 km, even though air quality, determined from records of the UK monitoring network, is not highly correlated with gravestone decay. The relationships among stone decay, air quality, and land use are complicated by the relatively low spatial density of both the gravestone decay and the air quality data, and by the fact that air quality data are available only as annual averages, so seasonal dependence cannot be evaluated. However, acid deposition calculated from gravestone decay suggests that the deposition efficiency of SO2 has increased appreciably since 1980, indicating an increase in the SO2 oxidation process, possibly related to reactions with ammonia.
Phillips, Steven P.; Belitz, Kenneth
1991-01-01
The occurrence of selenium in agricultural drain water from the western San Joaquin Valley, California, has focused concern on the semiconfined ground-water flow system, which is underlain by the Corcoran Clay Member of the Tulare Formation. A two-step procedure is used to calibrate a preliminary model of the system for the purpose of determining the steady-state hydraulic properties. Horizontal and vertical hydraulic conductivities are modeled as functions of the percentage of coarse sediment, hydraulic conductivities of coarse-textured (Kcoarse) and fine-textured (Kfine) end members, and averaging methods used to calculate equivalent hydraulic conductivities. The vertical conductivity of the Corcoran (Kcorc) is an additional parameter to be evaluated. In the first step of the calibration procedure, the model is run by systematically varying the following variables: (1) Kcoarse/Kfine, (2) Kcoarse/Kcorc, and (3) choice of averaging methods in the horizontal and vertical directions. Root mean square error and bias values calculated from the model results are functions of these variables. These measures of error provide a means for evaluating model sensitivity and for selecting values of Kcoarse, Kfine, and Kcorc for use in the second step of the calibration procedure. In the second step, recharge rates are evaluated as functions of Kcoarse, Kcorc, and a combination of averaging methods. The associated Kfine values are selected so that the root mean square error is minimized on the basis of the results from the first step. The results of the two-step procedure indicate that the spatial distribution of hydraulic conductivity that best produces the measured hydraulic head distribution is created through the use of arithmetic averaging in the horizontal direction and either geometric or harmonic averaging in the vertical direction. 
The equivalent hydraulic conductivities resulting from either combination of averaging methods compare favorably to field- and laboratory-based values.
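The averaging methods compared in the calibration above can be written compactly; for equal-thickness layers the standard equivalent-conductivity formulas are shown below. The end-member values in the example are hypothetical, not the calibrated Kcoarse/Kfine of the study:

```python
import numpy as np

def equivalent_k(k_layers, method):
    """Equivalent hydraulic conductivity of a stack of equal-thickness layers."""
    k = np.asarray(k_layers, dtype=float)
    if method == "arithmetic":        # appropriate for flow parallel to layering
        return float(k.mean())
    if method == "geometric":
        return float(np.exp(np.log(k).mean()))
    if method == "harmonic":          # appropriate for flow across layering
        return float(len(k) / np.sum(1.0 / k))
    raise ValueError(f"unknown method: {method}")

# Hypothetical coarse- and fine-textured end members:
k = [10.0, 0.1]
k_h = equivalent_k(k, "arithmetic")   # horizontal equivalent
k_v = equivalent_k(k, "harmonic")     # vertical equivalent
```

Since harmonic ≤ geometric ≤ arithmetic always holds, arithmetic averaging in the horizontal and geometric or harmonic averaging in the vertical (the combination the calibration favored) lets coarse layers dominate lateral flow while fine layers throttle vertical flow.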
NASA Astrophysics Data System (ADS)
Abitew, T. A.; van Griensven, A.; Bauwens, W.
2015-12-01
Evapotranspiration is the main process in hydrology (on average around 60%), though it has not received as much attention in the evaluation and calibration of hydrological models. In this study, Remote Sensing (RS) derived evapotranspiration (ET) is used to improve the spatially distributed ET processes of SWAT model applications in the upper Mara basin (Kenya) and the Blue Nile basin (Ethiopia). The RS-derived ET data are obtained from recently compiled global datasets (continuous monthly data at 1 km resolution from the MOD16NBI, SSEBop, ALEXI and CMRSET models) and from regionally applied energy balance models (for several cloud-free days). The RS-ET data are used in three ways: Method 1) to evaluate spatially distributed evapotranspiration model results; Method 2) to calibrate the evapotranspiration processes in the hydrological model; Method 3) to bias-correct the evapotranspiration in the hydrological model during simulation, after changing the SWAT code. An inter-comparison of the RS-ET products shows that at present there is a significant bias between them, but at the same time agreement on the spatial variability of ET. The ensemble mean of the different ET products seems the most realistic estimate and was used further in this study. The results show that: Method 1) the spatially mapped evapotranspiration of the hydrological models shows clear differences when compared to RS-derived evapotranspiration (low correlations); evapotranspiration in forested areas, especially, is strongly underestimated compared to other land covers. Method 2) Calibration improves the correlations between the RS data and the hydrological model results to some extent. Method 3) Bias correction is efficient in producing (seasonal or annual) evapotranspiration maps from hydrological models that are very similar to the patterns obtained from RS data. Though the bias correction is very efficient, it is advisable to improve the model results by better representing the ET processes through improved plant/crop computations, improved agricultural management practices, or improved meteorological data.
Kos, Bor; Valič, Blaž; Kotnik, Tadej; Gajšek, Peter
2012-10-07
Induction heating equipment is a source of strong and nonhomogeneous magnetic fields, which can exceed occupational reference levels. We investigated a case of an induction tempering tunnel furnace. Measurements of the emitted magnetic flux density (B) were performed during its operation and used to validate a numerical model of the furnace. This model was used to compute the values of B and the induced in situ electric field (E) for 15 different body positions relative to the source. For each body position, the computed B values were used to determine their maximum and average values, using six spatial averaging schemes (9-285 averaging points) and two averaging algorithms (arithmetic mean and quadratic mean). Maximum and average B values were compared to the ICNIRP reference level, and E values to the ICNIRP basic restriction. Our results show that in nonhomogeneous fields, the maximum B is an overly conservative predictor of overexposure, as it yields many false positives. The average B yielded fewer false positives, but as the number of averaging points increased, false negatives emerged. The most reliable averaging schemes were obtained for averaging over the torso with quadratic averaging, with no false negatives even for the maximum number of averaging points investigated.
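The two averaging algorithms compared above, the arithmetic mean and the quadratic (RMS) mean of B over a set of body sampling points, admit a short generic sketch. This is not the study's averaging-scheme geometry, just the two estimators:

```python
import numpy as np

def spatial_average_b(b_points, algorithm="quadratic"):
    """Spatially averaged magnetic flux density over a set of sampling points
    on the body, using either the arithmetic or the quadratic (RMS) mean."""
    b = np.asarray(b_points, dtype=float)
    if algorithm == "arithmetic":
        return float(b.mean())
    if algorithm == "quadratic":
        return float(np.sqrt(np.mean(b ** 2)))
    raise ValueError(f"unknown algorithm: {algorithm}")

# Hypothetical B samples (tesla) in a strongly nonhomogeneous field:
b = [1.0e-3, 3.0e-3, 0.2e-3]
b_arith = spatial_average_b(b, "arithmetic")
b_quad = spatial_average_b(b, "quadratic")
```

By the power-mean inequality the quadratic mean never falls below the arithmetic mean, so in nonhomogeneous fields it is the more conservative of the two averages, consistent with the reliability ordering the study reports.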
NASA Astrophysics Data System (ADS)
Guan, Fada
The Monte Carlo method has been successfully applied to simulating particle transport problems. Most Monte Carlo simulation tools are static, and can only perform simulations for problems with fixed physics and geometry settings. Proton therapy, however, is a dynamic treatment technique in clinical application. In this research, we developed a method to perform dynamic Monte Carlo simulation of proton therapy using the Geant4 simulation toolkit. A passive-scattering treatment nozzle equipped with a rotating range modulation wheel was modeled. One important application of Monte Carlo simulation is to predict the spatial dose distribution in the target geometry. For simplification, a mathematical model of a human body is usually used as the target, but then only the average dose over a whole organ or tissue can be obtained, rather than an accurate spatial dose distribution. In this research, we developed a method using MATLAB to convert the medical images of a patient from CT scanning into a patient voxel geometry. Hence, if the patient voxel geometry is used as the target in the Monte Carlo simulation, the accurate spatial dose distribution in the target can be obtained. The data analysis tool ROOT was used to score the simulation results during a Geant4 simulation, and to analyze the data and plot the results afterwards. Finally, we successfully obtained the accurate spatial dose distribution in part of a human body for the proton therapy treatment of a patient with prostate cancer.
NASA Astrophysics Data System (ADS)
Wu, Yenan; Zhong, Ping-an; Xu, Bin; Zhu, Feilin; Fu, Jisi
2017-06-01
Using climate models with high simulation performance to predict future climate change increases the reliability of the results. In this paper, six global climate models selected from the Coupled Model Intercomparison Project Phase 5 (CMIP5) under the Representative Concentration Pathway (RCP) 4.5 scenario were compared against measured data for the baseline period (1960-2000) to evaluate their simulation performance for precipitation. Since the results of single climate models are often biased and highly uncertain, we examine the back-propagation (BP) neural network and the arithmetic mean method for assembling the precipitation of multiple models. The delta method was used to calibrate the results of the single models and of the multimodel ensemble obtained by the arithmetic mean method (MME-AM) during the validation period (2001-2010) and the prediction period (2011-2100). We then use the single models and multimodel ensembles to predict the future precipitation process and its spatial distribution. The results show that the BNU-ESM model has the best simulation performance among the single models. The multimodel ensemble assembled by the BP neural network (MME-BP) simulates the annual average precipitation process well, with a deterministic coefficient of 0.814 during the validation period. The simulation capability for the spatial distribution of precipitation ranks: calibrated MME-AM > MME-BP > calibrated BNU-ESM. The future precipitation predicted by all models tends to increase over time. The average increase amplitude by season ranks: winter > spring > summer > autumn. These findings can provide useful information for decision makers developing climate-related disaster mitigation plans.
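The arithmetic multimodel ensemble and the delta calibration named above can be sketched compactly. A multiplicative form of the delta method is assumed here (additive variants also exist), and the series are hypothetical:

```python
import numpy as np

def ensemble_mean(members):
    """Arithmetic multimodel ensemble (MME-AM): average across model members."""
    return np.mean(np.stack(members), axis=0)

def delta_correct(sim_future, sim_baseline, obs_baseline):
    """Multiplicative delta correction: scale a simulated series by the ratio
    of observed to simulated baseline-mean precipitation."""
    factor = np.mean(obs_baseline) / np.mean(sim_baseline)
    return np.asarray(sim_future, dtype=float) * factor

# Hypothetical example: two model members, then bias-correct the ensemble.
m1 = np.array([80.0, 120.0, 100.0])   # simulated annual precipitation (mm)
m2 = np.array([60.0, 100.0, 140.0])
mme = ensemble_mean([m1, m2])
corrected = delta_correct(mme, sim_baseline=[90.0], obs_baseline=[99.0])
```

The correction preserves the model's relative temporal pattern while pinning its baseline mean to the observations, which is why it suits both the validation and prediction periods.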
Enhancing hyperspectral spatial resolution using multispectral image fusion: A wavelet approach
NASA Astrophysics Data System (ADS)
Jazaeri, Amin
High spectral and spatial resolution images have a significant impact in remote sensing applications. Because the spatial and spectral resolutions of spaceborne sensors are fixed by design and cannot be increased after launch, techniques such as image fusion must be applied to achieve such goals. This dissertation introduces the concept of wavelet fusion between hyperspectral and multispectral sensors in order to enhance the spectral and spatial resolution of a hyperspectral image. To test the robustness of this concept, images from Hyperion (a hyperspectral sensor) and the Advanced Land Imager (a multispectral sensor) were first co-registered and then fused using different wavelet algorithms. A regression-based fusion algorithm was also implemented for comparison purposes. The results show that images fused using a combined bilinear wavelet-regression algorithm have less error than those from other methods when compared to the ground truth. In addition, the combined regression-wavelet algorithm shows more immunity to misalignment of the pixels due to the lack of proper registration. The quantitative measures of average mean square error show that the performance of wavelet-based methods degrades when the spatial resolution of the hyperspectral image becomes eight times coarser than that of the corresponding multispectral image. Regardless of what method of fusion is utilized, the main challenge in image fusion is image registration, which is also a very time-intensive process. Because the combined regression-wavelet technique is computationally expensive, a hybrid technique based on regression and wavelet methods was also implemented to decrease computational overhead. However, the gain in faster computation was offset by the introduction of more error in the outcome. 
The secondary objective of this dissertation is to examine the feasibility and sensor requirements for image fusion for future NASA missions in order to be able to perform onboard image fusion. In this process, the main challenge of image registration was resolved by registering the input images using transformation matrices of previously acquired data. The composite image resulted from the fusion process remarkably matched the ground truth, indicating the possibility of real time onboard fusion processing.
Kumar, S.; Simonson, S.E.; Stohlgren, T.J.
2009-01-01
We investigated butterfly responses to plot-level characteristics (plant species richness, vegetation height, and range in NDVI [normalized difference vegetation index]) and spatial heterogeneity in topography and landscape patterns (composition and configuration) at multiple spatial scales. Stratified random sampling was used to collect data on butterfly species richness from seventy-six 20 × 50 m plots. The plant species richness and average vegetation height data were collected from 76 modified-Whittaker plots overlaid on the 76 butterfly plots. Spatial heterogeneity around the sample plots was quantified by measuring topographic variables and landscape metrics at eight spatial extents (radii of 300, 600, …, 2,400 m). The number of butterfly species recorded was strongly positively correlated with plant species richness, proportion of shrubland and mean patch size of shrubland. Patterns in butterfly species richness were negatively correlated with other variables including mean patch size, average vegetation height, elevation, and range in NDVI. The best predictive model, selected using Akaike's Information Criterion corrected for small sample size (AICc), explained 62% of the variation in butterfly species richness at the 2,100 m spatial extent. Average vegetation height and mean patch size were among the best predictors of butterfly species richness. The models that included plot-level information and topographic variables explained relatively less variation in butterfly species richness, and were improved significantly after including landscape metrics. Our results suggest that spatial heterogeneity greatly influences patterns in butterfly species richness, and that it should be explicitly considered in conservation and management actions.
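Model selection by AICc, as used above, applies the standard small-sample correction to AIC. For least-squares fits it can be computed from the residual sum of squares; this is the generic formula, not the study's code:

```python
import math

def aicc(rss, n, k):
    """AICc for a least-squares model with k parameters fitted to n samples:
    AIC = n*ln(RSS/n) + 2k, plus the small-sample correction 2k(k+1)/(n-k-1)."""
    aic = n * math.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

# Among candidate models of species richness, the lowest AICc is preferred;
# with equal fit (same RSS), the correction penalizes extra parameters.
```

The correction term grows as k approaches n, which guards against overfitting when, as here, the sample size (76 plots) is modest relative to the candidate predictor sets.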
Estimating recharge at Yucca Mountain, Nevada, USA: Comparison of methods
Flint, A.L.; Flint, L.E.; Kwicklis, E.M.; Fabryka-Martin, J. T.; Bodvarsson, G.S.
2002-01-01
Obtaining values of net infiltration, groundwater travel time, and recharge is necessary at the Yucca Mountain site, Nevada, USA, in order to evaluate the expected performance of a potential repository as a containment system for high-level radioactive waste. However, the geologic complexities of this site, its low precipitation and net infiltration, with numerous mechanisms operating simultaneously to move water through the system, provide many challenges for the estimation of the spatial distribution of recharge. A variety of methods appropriate for arid environments has been applied, including water-balance techniques, calculations using Darcy's law in the unsaturated zone, a soil-physics method applied to neutron-hole water-content data, inverse modeling of thermal profiles in boreholes extending through the thick unsaturated zone, chloride mass balance, atmospheric radionuclides, and empirical approaches. These methods indicate that near-surface infiltration rates at Yucca Mountain are highly variable in time and space, with local (point) values ranging from zero to several hundred millimeters per year. Spatially distributed net-infiltration values average 5 mm/year, with the highest values approaching 20 mm/year near Yucca Crest. Site-scale recharge estimates range from less than 1 to about 12 mm/year. These results have been incorporated into a site-scale model that has been calibrated using these data sets that reflect infiltration processes acting on highly variable temporal and spatial scales. The modeling study predicts highly non-uniform recharge at the water table, distributed significantly differently from the non-uniform infiltration pattern at the surface.
Quantification of the spatial strain distribution of scoliosis using a thin-plate spline method.
Kiriyama, Yoshimori; Watanabe, Kota; Matsumoto, Morio; Toyama, Yoshiaki; Nagura, Takeo
2014-01-03
The objective of this study was to quantify the three-dimensional spatial strain distribution of a scoliotic spine by nonhomogeneous transformation without using a statistically averaged reference spine. The shape of the scoliotic spine was determined from computed tomography images from a female patient with adolescent idiopathic scoliosis. The shape of the scoliotic spine was enclosed in a rectangular grid, and symmetrized using a thin-plate spline method according to the node positions of the grid. The node positions of the grid were determined by numerical optimization to satisfy symmetry. The obtained symmetric spinal shape was enclosed within a new rectangular grid and distorted back to the original scoliotic shape using a thin-plate spline method. The distorted grid was compared to the rectangular grid that surrounded the symmetrical spine. Cobb's angle was reduced from 35° in the scoliotic spine to 7° in the symmetrized spine, and the scoliotic shape was almost fully symmetrized. The scoliotic spine showed a complex Green-Lagrange strain distribution in three dimensions. The vertical and transverse compressive/tensile strains in the frontal plane were consistent with the major scoliotic deformation. The compressive, tensile and shear strains on the convex side of the apical vertebra were opposite to those on the concave side. These results indicate that the proposed method can be used to quantify the three-dimensional spatial strain distribution of a scoliotic spine, and may be useful in quantifying the deformity of scoliosis. © 2013 Elsevier Ltd. All rights reserved.
Tao, Qian; Milles, Julien; Zeppenfeld, Katja; Lamb, Hildo J; Bax, Jeroen J; Reiber, Johan H C; van der Geest, Rob J
2010-08-01
Accurate assessment of the size and distribution of a myocardial infarction (MI) from late gadolinium enhancement (LGE) MRI is of significant prognostic value for postinfarction patients. In this paper, an automatic MI identification method combining both intensity and spatial information is presented in a clear framework of (i) initialization, (ii) false acceptance removal, and (iii) false rejection removal. The method was validated on LGE MR images of 20 chronic postinfarction patients, using manually traced MI contours from two independent observers as reference. Good agreement was observed between automatic and manual MI identification. Validation results showed that the average Dice indices, which describe the percentage of overlap between two regions, were 0.83 +/- 0.07 and 0.79 +/- 0.08 between the automatic identification and the manual tracing from observer 1 and observer 2, and the errors in estimated infarct percentage were 0.0 +/- 1.9% and 3.8 +/- 4.7% compared with observer 1 and observer 2. The difference between the automatic method and manual tracing is in the order of interobserver variation. In conclusion, the developed automatic method is accurate and robust in MI delineation, providing an objective tool for quantitative assessment of MI in LGE MR imaging.
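The Dice index used for validation above has a simple closed form; a minimal sketch on toy binary masks (not patient data):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice index: 2|A∩B| / (|A| + |B|), the percentage-of-overlap
    measure used to compare automatic and manual delineations."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

auto = np.array([[0, 1, 1], [0, 1, 0]])    # hypothetical automatic mask
manual = np.array([[0, 1, 0], [0, 1, 1]])  # hypothetical manual tracing
score = dice(auto, manual)  # 2*2 / (3+3) = 0.666...
```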
Shock-Strength Determination With Seeded and Seedless Laser Methods
NASA Technical Reports Server (NTRS)
Herring, G. C.; Meyers, James F.
2008-01-01
Two nonintrusive laser diagnostics were independently used to demonstrate the measurement of time-averaged and spatially resolved pressure change across a two-dimensional (2-D) shock wave. The first method is Doppler global velocimetry (DGV), which uses water seeding and generates 2-D maps of 3 orthogonal components of velocity. A DGV-measured change in flow direction behind an oblique shock provides an indirect determination of the pressure jump across the shock, when used with the known incoming Mach number and ideal shock relations (or Prandtl-Meyer flow equations for an expansion fan). This approach was demonstrated at Mach 2 on 2-D shocks and expansions generated from a flat plate at angles of attack of approximately -2.4° and +0.6°, respectively. This technique also works for temperature jump (as well as pressure) and for normal shocks (as well as oblique). The second method, laser-induced thermal acoustics (LITA), is a seedless approach that was used to generate 1-D spatial profiles of streamwise Mach number, sound speed, pressure, and temperature across the same shock waves. Excellent agreement was obtained between the DGV and LITA methods, suggesting that either technique is viable for noninvasive shock-strength measurements.
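The indirect pressure-jump determination from the measured flow deflection can be sketched with the ideal oblique-shock relations. This is an illustration of the textbook theta-beta-Mach calculation, not the authors' processing chain; the bisection bracket is tuned to the Mach 2 example:

```python
import math

def theta_from_beta(mach, beta, gamma=1.4):
    """Flow deflection angle from the theta-beta-Mach relation."""
    num = 2.0 / math.tan(beta) * (mach**2 * math.sin(beta)**2 - 1.0)
    den = mach**2 * (gamma + math.cos(2.0 * beta)) + 2.0
    return math.atan(num / den)

def oblique_shock_pressure_ratio(mach, theta, gamma=1.4):
    """Weak-shock static pressure jump p2/p1 for a given upstream Mach
    number and flow deflection, via bisection on the wave angle beta."""
    lo = math.asin(1.0 / mach) + 1e-9  # Mach angle: zero deflection
    hi = math.radians(64.0)            # below max-deflection beta for M = 2
    for _ in range(200):               # theta grows with beta on this branch
        mid = 0.5 * (lo + hi)
        if theta_from_beta(mach, mid, gamma) < theta:
            lo = mid
        else:
            hi = mid
    m1n2 = (mach * math.sin(0.5 * (lo + hi)))**2  # normal Mach number squared
    return 1.0 + 2.0 * gamma / (gamma + 1.0) * (m1n2 - 1.0)

# Mach 2 flow deflected 10 degrees (illustrative numbers, not flight data):
pr = oblique_shock_pressure_ratio(2.0, math.radians(10.0))  # roughly 1.7
```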
Infrared and Raman Microscopy in Cell Biology
Matthäus, Christian; Bird, Benjamin; Miljković, Miloš; Chernenko, Tatyana; Romeo, Melissa; Diem, Max
2009-01-01
This chapter presents novel microscopic methods to monitor cell biological processes of live or fixed cells without the use of any dyes, stains, or other contrast agents. These methods are based on spectral techniques that detect inherent spectroscopic properties of biochemical constituents of cells, or parts thereof. Two different modalities have been developed for this task. One of them is infrared micro-spectroscopy, in which an average snapshot of a cell’s biochemical composition is collected at a spatial resolution of typically 25 µm. This technique, which is extremely sensitive and can collect such a snapshot in fractions of a second, is particularly suited for studying gross biochemical changes. The other technique, Raman microscopy (also known as Raman micro-spectroscopy), is ideally suited to study variations of cellular composition on the scale of subcellular organelles, since its spatial resolution is as good as that of fluorescence microscopy. Both techniques exhibit the fingerprint sensitivity of vibrational spectroscopy toward biochemical composition, and can be used to follow a variety of cellular processes. PMID:19118679
A critical look at prospective surveillance using a scan statistic.
Correa, Thais R; Assunção, Renato M; Costa, Marcelo A
2015-03-30
The scan statistic is a very popular surveillance technique for purely spatial, purely temporal, and spatial-temporal disease data. It was extended to the prospective surveillance case, and it has been applied quite extensively in this situation. When the usual signal rules, such as those implemented in SaTScan(TM) (Boston, MA, USA) software, are used, we show that the scan statistic method is not appropriate for the prospective case. The reason is that it does not adjust properly for the sequential and repeated tests carried out during the surveillance. We demonstrate that the nominal significance level α is not meaningful and there is no relationship between α and the recurrence interval or the average run length (ARL). In some cases, the ARL may be equal to ∞, which makes the method ineffective. This lack of control of the type-I error probability and of the ARL leads us to strongly oppose the use of the scan statistic with the usual signal rules in the prospective context. Copyright © 2014 John Wiley & Sons, Ltd.
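The core of the argument, that repeated α-level looks without adjustment inflate the false-alarm probability, can be illustrated with an idealized Monte Carlo simulation (independent looks, a deliberate simplification of a real scan surveillance scheme):

```python
import random

def false_alarm_probability(alpha, n_looks, n_trials=20000, seed=7):
    """Monte Carlo estimate of the probability that a sequence of n_looks
    independent alpha-level tests raises at least one false alarm while the
    null hypothesis holds throughout."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        if any(rng.random() < alpha for _ in range(n_looks)):
            hits += 1
    return hits / n_trials

p1 = false_alarm_probability(0.05, 1)    # one look: close to the nominal 0.05
p52 = false_alarm_probability(0.05, 52)  # weekly looks for a year:
                                         # close to 1 - 0.95**52, about 0.93
```

Under this idealization the ARL under the null is roughly 1/α, so controlling the nominal α per look says little about the surveillance system's actual false-alarm behavior.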
NASA Astrophysics Data System (ADS)
Mansoor, Awais; Casas, Rafael; Linguraru, Marius G.
2016-03-01
Pleural effusion is an abnormal collection of fluid within the pleural cavity. Excessive accumulation of pleural fluid is an important bio-marker for various illnesses, including congestive heart failure, pneumonia, metastatic cancer, and pulmonary embolism. Quantification of pleural effusion can be indicative of the progression of disease as well as the effectiveness of any treatment being administered. Quantification, however, is challenging due to unpredictable amounts and density of fluid, the complex topology of the pleural cavity, and the similarity in texture and intensity of pleural fluid to the surrounding tissues in computed tomography (CT) scans. Herein, we present an automated method for the segmentation of pleural effusion in CT scans based on spatial context information. The method consists of two stages: first, a probabilistic pleural effusion map is created using multi-atlas segmentation. The probabilistic map assigns a priori probabilities to the presence of pleural fluid at every location in the CT scan. Second, a statistical pattern classification approach is designed to annotate pleural regions using local descriptors based on a priori probabilities, geometrical, and spatial features. Thirty-seven CT scans from a diverse patient population containing confirmed cases of minimal to severe amounts of pleural effusion were used to validate the proposed segmentation method. An average Dice coefficient of 0.82685 and an average Hausdorff distance of 16.2155 mm were obtained.
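The Hausdorff distance reported alongside the Dice coefficient measures the worst-case boundary disagreement; a brute-force sketch on small toy point sets (not segmentation contours from the study):

```python
def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets: the largest
    distance from any point in one set to its nearest neighbor in the
    other. Illustrative O(n*m) version for small inputs."""
    def d(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5
    def directed(a, b):
        return max(min(d(p, q) for q in b) for p in a)
    return max(directed(points_a, points_b), directed(points_b, points_a))

a = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]  # hypothetical contour samples
b = [(0.0, 0.1), (1.0, 0.0), (2.0, 1.0)]
h = hausdorff(a, b)  # dominated by (2, 1), whose nearest point in a is 1 away
```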
NASA Astrophysics Data System (ADS)
Hekmatmanesh, Amin; Jamaloo, Fatemeh; Wu, Huapeng; Handroos, Heikki; Kilpeläinen, Asko
2018-04-01
Brain Computer Interface (BCI) technology is a challenging basis for developing robotic, prosthesis, and other human-controlled systems. This work focuses on the implementation of a common spatial pattern (CSP) based algorithm to detect event-related desynchronization patterns. Following well-known previous work in this area, features are extracted by the filter bank common spatial pattern (FBCSP) method and then weighted by a sensitive learning vector quantization (SLVQ) algorithm. In the current work, application of the radial basis function (RBF) as a mapping kernel of linear discriminant analysis (KLDA) to the weighted features transfers the data into a higher dimension for more discriminated data scattering. Afterwards, a support vector machine (SVM) with a generalized radial basis function (GRBF) kernel is employed to improve the efficiency and robustness of the classification. On average, 89.60% accuracy and 74.19% robustness are achieved. The BCI Competition III data set IVa is used to evaluate the algorithm for detecting right-hand and foot imagery movement patterns. Results show that the combination of KLDA with the SVM-GRBF classifier yields 8.9% and 14.19% improvements in accuracy and robustness, respectively. For all subjects, it is concluded that mapping the CSP features into a higher dimension by the RBF and utilizing the GRBF as the kernel of the SVM improve the accuracy and reliability of the proposed method.
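A minimal CSP computation, the building block of the FBCSP pipeline above (not the full FBCSP/SLVQ/KLDA chain), can be sketched via the generalized eigenproblem, assuming trials are given as channel-by-sample arrays:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=2):
    """Common spatial patterns: solve Ca w = lambda (Ca + Cb) w and keep
    the filters at the extreme eigenvalues, which maximize the variance
    of one class relative to the other."""
    def mean_cov(trials):  # trials: list of (channels, samples) arrays
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)
    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    vals, vecs = eigh(ca, ca + cb)  # generalized symmetric eigenproblem
    order = np.argsort(vals)
    picks = np.r_[order[: n_filters // 2], order[-(n_filters - n_filters // 2):]]
    return vecs[:, picks].T

# Toy two-channel data: class A has high variance on channel 0, class B on 1.
rng = np.random.default_rng(0)
a = [np.vstack([3 * rng.standard_normal(200), rng.standard_normal(200)])
     for _ in range(20)]
b = [np.vstack([rng.standard_normal(200), 3 * rng.standard_normal(200)])
     for _ in range(20)]
w = csp_filters(a, b)  # one spatial filter per extreme eigenvalue
```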
Neonatal Atlas Construction Using Sparse Representation
Shi, Feng; Wang, Li; Wu, Guorong; Li, Gang; Gilmore, John H.; Lin, Weili; Shen, Dinggang
2014-01-01
Atlas construction generally includes first an image registration step to normalize all images into a common space and then an atlas building step to fuse the information from all the aligned images. Although numerous atlas construction studies have been performed to improve the accuracy of the image registration step, an unweighted or simply weighted average is often used in the atlas building step. In this article, we propose a novel patch-based sparse representation method for atlas construction after all images have been registered into the common space. By taking advantage of local sparse representation, more anatomical details can be recovered in the built atlas. To make the anatomical structures spatially smooth in the atlas, the anatomical feature constraints on group structure of representations and also the overlapping of neighboring patches are imposed to ensure the anatomical consistency between neighboring patches. The proposed method has been applied to 73 neonatal MR images with poor spatial resolution and low tissue contrast, for constructing a neonatal brain atlas with sharp anatomical details. Experimental results demonstrate that the proposed method can significantly enhance the quality of the constructed atlas by discovering more anatomical details, especially in the highly convoluted cortical regions. The resulting atlas demonstrates superior performance when applied to spatially normalizing three different neonatal datasets, compared with other state-of-the-art neonatal brain atlases. PMID:24638883
The impact of half-a-degree Celsius upon the spatial pattern of future sea-level change.
NASA Astrophysics Data System (ADS)
Jackson, Luke
2017-04-01
It has been shown that the global thermal expansion of sea level and ocean dynamics are linearly related to global temperature change. On this basis one can estimate the difference in local sea-level change between a 1.5°C and 2.0°C world. The mitigation scenario RCP 2.6 shows an end-of-century global temperature range of 0.9 to 2.3°C (median 1.6°C). Additional sea-level components, such as mass changes in ice sheets, glaciers and land-water storage have unique spatial patterns that contribute to sea-level change and will be indirectly affected by global temperature change. We project local sea-level change for RCP 2.6 using sub-sets of models in the CMIP5 archive that follow different global temperature pathways. The method used to calculate local sea-level change is probabilistic and combines the normalised spatial patterns of sea-level components with global average projections of individual sea-level components.
Bellesia, Giovanni; Bales, Benjamin B.
2016-10-10
Here, we investigate, via Brownian dynamics simulations, the reaction dynamics of a generic, nonlinear chemical network under spatial confinement and crowding conditions. In detail, the Willamowski-Rossler chemical reaction system has been “extended” and considered as a prototype reaction-diffusion system. These results are potentially relevant to a number of open problems in biophysics and biochemistry, such as the synthesis of primitive cellular units (protocells) and the definition of their role in the chemical origin of life and the characterization of vesicle-mediated drug delivery processes. More generally, the computational approach presented in this work makes the case for the use of spatial stochastic simulation methods for the study of biochemical networks in vivo, where the “well-mixed” approximation is invalid and both thermal and intrinsic fluctuations linked to the possible presence of molecular species in low copy numbers cannot be averaged out.
Merchán-Pérez, Angel; Rodríguez, José-Rodrigo; González, Santiago; Robles, Víctor; DeFelipe, Javier; Larrañaga, Pedro; Bielza, Concha
2014-01-01
In the cerebral cortex, most synapses are found in the neuropil, but relatively little is known about their 3-dimensional organization. Using an automated dual-beam electron microscope that combines focused ion beam milling and scanning electron microscopy, we have been able to obtain 10 three-dimensional samples with an average volume of 180 µm3 from the neuropil of layer III of the young rat somatosensory cortex (hindlimb representation). We have used specific software tools to fully reconstruct 1695 synaptic junctions present in these samples and to accurately quantify the number of synapses per unit volume. These tools also allowed us to determine synapse position and to analyze their spatial distribution using spatial statistical methods. Our results indicate that the distribution of synaptic junctions in the neuropil is nearly random, only constrained by the fact that synapses cannot overlap in space. A theoretical model based on random sequential absorption, which closely reproduces the actual distribution of synapses, is also presented. PMID:23365213
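The random sequential absorption model described above can be sketched as non-overlapping sphere placement; the counts, radius, and unit box below are illustrative choices, not the measured synapse data:

```python
import random

def random_sequential_adsorption(n_target, radius, box=1.0,
                                 max_tries=200000, seed=1):
    """Random sequential adsorption: drop sphere centres uniformly in a
    cube, rejecting any candidate that overlaps an already-placed sphere.
    This mimics a 'nearly random' point process constrained only by the
    fact that objects cannot overlap in space."""
    rng = random.Random(seed)
    placed = []
    tries = 0
    min_d2 = (2 * radius) ** 2  # squared minimum centre-to-centre distance
    while len(placed) < n_target and tries < max_tries:
        tries += 1
        p = (rng.random() * box, rng.random() * box, rng.random() * box)
        if all(sum((p[i] - q[i]) ** 2 for i in range(3)) >= min_d2
               for q in placed):
            placed.append(p)
    return placed

pts = random_sequential_adsorption(n_target=300, radius=0.02)
```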
Relaxation of creep strain in paper
NASA Astrophysics Data System (ADS)
Mustalahti, Mika; Rosti, Jari; Koivisto, Juha; Alava, Mikko J.
2010-07-01
In disordered, viscoelastic or viscoplastic materials a sample response exhibits a recovery phenomenon after the removal of a constant load or after creep. We study experimentally the recovery in paper, a quasi-two-dimensional system with intrinsic structural disorder. The deformation is measured by using the digital image correlation (DIC) method. By the DIC we obtain accurate displacement data and the spatial fields of deformation and recovered strains. The averaged results are first compared to several heuristic models for viscoelastic polymer materials in particular. The most important experimental quantity is the permanent creep strain, and we analyze whether it is non-zero by fitting the empirical models of viscoelasticity. We then present in more detail the spatial recovery behavior results from DIC, and show that they indicate a power-law-type relaxation. We outline results on variation from sample to sample and collective, spatial fluctuations in the recovery behavior. An interpretation is provided for the relaxation in the general context of glassy, interacting systems with barriers.
NASA Astrophysics Data System (ADS)
Ryzhenkov, V.; Ivashchenko, V.; Vinuesa, R.; Mullyadzhanov, R.
2016-10-01
We use the open-source code nek5000 to assess the accuracy of high-order spectral element large-eddy simulations (LES) of a turbulent channel flow depending on the spatial resolution compared to direct numerical simulation (DNS). The Reynolds number Re = 6800 is considered, based on the bulk velocity and half-width of the channel. The filtered governing equations are closed with the dynamic Smagorinsky model for subgrid stresses and heat flux. The results show very good agreement between LES and DNS for time-averaged velocity and temperature profiles and their fluctuations. Even the coarse LES grid, which contains around 30 times fewer points than the DNS grid, provided predictions of the friction velocity within a 2.0% accuracy interval.
NASA Astrophysics Data System (ADS)
Sun, D.; Zheng, J. H.; Ma, T.; Chen, J. J.; Li, X.
2018-04-01
The rodent disaster is one of the main biological disasters in grassland in northern Xinjiang. Eating and digging behaviors destroy ground vegetation, which seriously affects the development of animal husbandry and grassland ecological security. UAV low-altitude remote sensing, an emerging technique with high spatial resolution, can effectively recognize burrows. However, how to select the appropriate spatial resolution for monitoring the rodent disaster is the first problem that must be addressed. The purpose of this study is to explore the optimal spatial scale for identification of burrows by evaluating the impact of different spatial resolutions on burrow identification accuracy. In this study, we photographed burrows from different flight heights to obtain visible images of different spatial resolutions. An object-oriented method was then used to identify the burrows, and the accuracy of the classification was evaluated. We found that the classification accuracy of burrows averaged more than 80 %, and was highest at a flight altitude of 24 m and a spatial resolution of 1 cm. We have thus established an effective way to identify burrows using UAV visible images, and draw the following conclusions: the best spatial resolution for burrow recognition with the DJI PHANTOM-3 UAV is 1 cm, and an improvement in spatial resolution does not necessarily lead to an improvement in classification accuracy. This study lays the foundation for future research and can be extended to similar studies elsewhere.
NASA Technical Reports Server (NTRS)
Wielicki, Bruce A. (Principal Investigator)
The Monthly TOA/Surface Averages (SRBAVG) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The SRBAVG is also produced for combinations of scanner instruments. The monthly average regional flux is estimated using diurnal models and the 1-degree regional fluxes at the hour of observation from the CERES SFC product. A second set of monthly average fluxes are estimated using concurrent diurnal information from geostationary satellites. These fluxes are given for both clear-sky and total-sky scenes and are spatially averaged from 1-degree regions to 1-degree zonal averages and a global average. For each region, the SRBAVG also contains hourly average fluxes for the month and an overall monthly average. The cloud properties from SFC are column averaged and are included on the SRBAVG. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-02-01; Stop_Date=2003-02-28] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 month; Temporal_Resolution_Range=Monthly - < Annual].
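The spatial averaging from 1-degree regions to zonal averages and a global average can be sketched with cosine-latitude area weighting. This is a standard illustration of the region-to-zone-to-globe collapse, not the CERES production code:

```python
import math

def zonal_and_global_means(grid):
    """Collapse a lat x lon grid of 1-degree regional fluxes to zonal
    means and a single cos(latitude)-weighted global mean."""
    n_lat = len(grid)
    zonal = [sum(row) / len(row) for row in grid]
    # 1-degree cell centers run from -89.5 to +89.5 degrees latitude.
    weights = [math.cos(math.radians(-89.5 + i)) for i in range(n_lat)]
    total_w = sum(weights)
    global_mean = sum(w * z for w, z in zip(weights, zonal)) / total_w
    return zonal, global_mean

# Sanity check with a uniform flux field (hypothetical 240 W/m^2 everywhere):
grid = [[240.0] * 360 for _ in range(180)]
zonal, gmean = zonal_and_global_means(grid)  # every mean recovers 240
```

Without the cosine weighting, polar rows would be over-counted in the global mean because 1-degree cells shrink in area toward the poles.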
NASA Technical Reports Server (NTRS)
Wielicki, Bruce A. (Principal Investigator)
The Monthly TOA/Surface Averages (SRBAVG) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The SRBAVG is also produced for combinations of scanner instruments. The monthly average regional flux is estimated using diurnal models and the 1-degree regional fluxes at the hour of observation from the CERES SFC product. A second set of monthly average fluxes are estimated using concurrent diurnal information from geostationary satellites. These fluxes are given for both clear-sky and total-sky scenes and are spatially averaged from 1-degree regions to 1-degree zonal averages and a global average. For each region, the SRBAVG also contains hourly average fluxes for the month and an overall monthly average. The cloud properties from SFC are column averaged and are included on the SRBAVG. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-02-01; Stop_Date=2000-03-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 month; Temporal_Resolution_Range=Monthly - < Annual].
NASA Technical Reports Server (NTRS)
Wielicki, Bruce A. (Principal Investigator)
The Monthly TOA/Surface Averages (SRBAVG) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The SRBAVG is also produced for combinations of scanner instruments. The monthly average regional flux is estimated using diurnal models and the 1-degree regional fluxes at the hour of observation from the CERES SFC product. A second set of monthly average fluxes are estimated using concurrent diurnal information from geostationary satellites. These fluxes are given for both clear-sky and total-sky scenes and are spatially averaged from 1-degree regions to 1-degree zonal averages and a global average. For each region, the SRBAVG also contains hourly average fluxes for the month and an overall monthly average. The cloud properties from SFC are column averaged and are included on the SRBAVG. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-02-01; Stop_Date=2003-02-28] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 month; Temporal_Resolution_Range=Monthly - < Annual].
NASA Technical Reports Server (NTRS)
Wielicki, Bruce A. (Principal Investigator)
The Monthly TOA/Surface Averages (SRBAVG) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The SRBAVG is also produced for combinations of scanner instruments. The monthly average regional flux is estimated using diurnal models and the 1-degree regional fluxes at the hour of observation from the CERES SFC product. A second set of monthly average fluxes are estimated using concurrent diurnal information from geostationary satellites. These fluxes are given for both clear-sky and total-sky scenes and are spatially averaged from 1-degree regions to 1-degree zonal averages and a global average. For each region, the SRBAVG also contains hourly average fluxes for the month and an overall monthly average. The cloud properties from SFC are column averaged and are included on the SRBAVG. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-02-01; Stop_Date=2004-05-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 month; Temporal_Resolution_Range=Monthly - < Annual].
Spatial interpolation schemes of daily precipitation for hydrologic modeling
Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.
2012-01-01
Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model, and the amount of precipitation is then estimated separately on wet days. This process effectively reproduces precipitation occurrence, amount, and spatial correlation. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
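The two-step scheme, a logistic model for occurrence followed by a wet-day amount regression, can be sketched on synthetic data. Everything below (the single predictor, coefficients, and noise level) is made up for illustration; it is not the paper's data or implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "station" data: one predictor (think elevation or a nearby
# gauge), a wet/dry flag, and amounts recorded on wet days only.
x = rng.uniform(0, 1, 300)
wet = (rng.uniform(0, 1, 300) < 1 / (1 + np.exp(-(3 * x - 1)))).astype(float)
amount = np.where(wet == 1, 2.0 + 4.0 * x + rng.normal(0, 0.5, 300), 0.0)

# Step 1: logistic regression for occurrence, fitted by gradient descent.
X = np.column_stack([np.ones_like(x), x])
w = np.zeros(2)
for _ in range(3000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - wet) / len(x)

# Step 2: linear regression for amounts, fitted on wet days only.
Xw = X[wet == 1]
beta = np.linalg.lstsq(Xw, amount[wet == 1], rcond=None)[0]

def estimate(x_new, threshold=0.5):
    """Two-step estimate: predict occurrence first, then amount if wet."""
    xv = np.array([1.0, x_new])
    p_wet = 1 / (1 + np.exp(-xv @ w))
    return float(xv @ beta) if p_wet > threshold else 0.0
```

Separating occurrence from amount prevents the many dry days from dragging regression estimates toward unrealistic drizzle everywhere.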
Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization
NASA Astrophysics Data System (ADS)
Tsai, F. T.; Li, X.
2006-12-01
Non-uniqueness in the parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with the non-uniqueness problem of parameterization, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty in individual parameterization methods as the within-parameterization variance and the uncertainty from using different parameterization methods as the between-parameterization variance. Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMA with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employ the adjoint state method for the sensitivity analysis on the weighting coefficients in the GP method. The adjoint state method is also applied to the NLSE problem. The proposed methodology is applied to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through maximum likelihood estimation (MLE), where the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.
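The BMA decomposition into within- and between-parameterization variance can be written directly. The weights, means, and variances below are hypothetical placeholders, not ABP estimates:

```python
def bma_combine(weights, means, variances):
    """Bayesian model averaging of per-parameterization estimates:
    posterior mean, within-parameterization variance (weighted average of
    individual variances), and between-parameterization variance (weighted
    spread of the individual means around the BMA mean)."""
    total_w = sum(weights)
    w = [x / total_w for x in weights]  # normalize posterior probabilities
    mean = sum(wi * mi for wi, mi in zip(w, means))
    within = sum(wi * vi for wi, vi in zip(w, variances))
    between = sum(wi * (mi - mean) ** 2 for wi, mi in zip(w, means))
    return mean, within, between

# Three hypothetical GP schemes estimating log-conductivity at one point:
mean, within, between = bma_combine(
    weights=[0.5, 0.3, 0.2],       # posterior model probabilities (e.g. NLSE)
    means=[-4.1, -3.8, -4.4],      # per-scheme conditional means
    variances=[0.04, 0.06, 0.05],  # per-scheme conditional variances
)
```

The total BMA variance is `within + between`, so even mutually confident schemes yield large uncertainty when their means disagree.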
NASA Astrophysics Data System (ADS)
Osterwalder, S.; Sommar, J.; Åkerblom, S.; Jocher, G.; Fritsche, J.; Nilsson, M. B.; Bishop, K.; Alewell, C.
2018-01-01
Quantitative estimates of the land-atmosphere exchange of gaseous elemental mercury (GEM) are biased by the measurement technique employed, because no standard method or scale in space and time are agreed upon. Here we present concurrent GEM exchange measurements over a boreal peatland using a novel relaxed eddy accumulation (REA) system, a rectangular Teflon® dynamic flux chamber (DFC) and a DFC designed according to aerodynamic considerations (Aero-DFC). During four consecutive days the DFCs were placed alternately on two measurement plots in every cardinal direction around the REA sampling mast. Spatial heterogeneity in peat surface characteristics (0-34 cm) was identified by measuring total mercury in eight peat cores (57 ± 8 ng g-1, average ± SE), vascular plant coverage (32-52%), water table level (4.5-14.1 cm) and dissolved gaseous elemental mercury concentrations (28-51 pg L-1) in the peat water. The GEM fluxes measured by the DFCs showed a distinct diel pattern, but no spatial difference in the average fluxes was detected (ANOVA, α = 0.05). Even though the correlation between the Teflon® DFC and Aero-DFC was significant (r = 0.76, p < 0.05) the cumulative flux of the Aero-DFC was a factor of three larger. The average flux of the Aero-DFC (1.9 ng m-2 h-1) and REA (2 ng m-2 h-1) were in good agreement. The results indicate that the novel REA design is in agreement for cumulative flux estimates with the Aero-DFC, which incorporates the effect of atmospheric turbulence. The comparison was performed over a fetch with spatially rather homogenous GEM flux dynamics under fairly consistent weather conditions, minimizing the effect of weather influence on the data from the three measurement systems. However, in complex biomes with heterogeneous surface characteristics where there can be large spatial variability in GEM gas exchange, the small footprint of chambers (<0.2 m2) makes for large coefficients of variation. 
Thus many chamber measurement replications are needed to establish a credible biome GEM flux estimate, even for a single point in time. Dynamic flux chambers will, however, be able to resolve systematic differences between small scale features, such as experimentally manipulated plots or small scale spatial heterogeneity.
Properties of a new small-world network with spatially biased random shortcuts
NASA Astrophysics Data System (ADS)
Matsuzawa, Ryo; Tanimoto, Jun; Fukuda, Eriko
2017-11-01
This paper introduces a small-world (SW) network whose shortcuts follow a power-law distance distribution, in contrast to conventional models that use completely random shortcuts. By incorporating spatial constraints, we analyze the divergence of the proposed model from conventional models in terms of fundamental network properties such as clustering coefficient, average path length, and degree distribution. We find that when the spatial constraint more strongly prohibits long shortcuts, the clustering coefficient improves and the average path length increases. We also analyze the spatial prisoner's dilemma (SPD) games played on our new SW network in order to understand its dynamical characteristics. Depending on the basis graph, i.e., whether it is a one-dimensional ring or a two-dimensional lattice, and on the parameter controlling the prohibition of long-distance shortcuts, the emergent results can vastly differ.
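For a concrete feel for the quantities discussed, the construction and the two network measures can be sketched as follows. This is a minimal illustration with assumed parameters, not the authors' model: a ring lattice with shortcut endpoints drawn with probability proportional to d^(-alpha) in ring distance d, so alpha = 0 recovers fully random shortcuts.

```python
import random
from collections import deque

def sw_network(n=200, k=4, n_shortcuts=100, alpha=2.0, seed=1):
    """Ring lattice of n nodes, each linked to its k nearest neighbors,
    plus shortcuts whose span d is drawn with probability ~ d**-alpha
    (alpha=0 gives completely random shortcuts, as in conventional SW models)."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    dists = list(range(2, n // 2))
    weights = [d ** -alpha for d in dists]  # spatial bias on shortcut length
    for _ in range(n_shortcuts):
        i = rng.randrange(n)
        d = rng.choices(dists, weights)[0]
        j = (i + d * rng.choice((-1, 1))) % n
        adj[i].add(j)
        adj[j].add(i)
    return adj

def clustering(adj):
    """Average local clustering coefficient."""
    total = 0.0
    for v, nb in adj.items():
        nb = list(nb)
        if len(nb) < 2:
            continue
        links = sum(1 for a in nb for b in nb if a < b and b in adj[a])
        total += 2.0 * links / (len(nb) * (len(nb) - 1))
    return total / len(adj)

def avg_path_length(adj):
    """Mean shortest-path length over all pairs (BFS from each source)."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs
```

With a fixed seed, raising alpha suppresses long shortcuts, which raises the clustering coefficient and lengthens the average path, consistent with the trend described in the abstract.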
NASA Technical Reports Server (NTRS)
Ginger, Kathryn M.
1993-01-01
Since clouds are the largest variable in Earth's radiation budget, it is critical to determine both the spatial and temporal characteristics of their radiative properties. The relationships between cloud properties and cloud fraction are studied in order to supplement grid scale parameterizations. The satellite data used are three-hourly ISCCP (International Satellite Cloud Climatology Project) and monthly ERBE (Earth Radiation Budget Experiment) data on a 2.5 deg x 2.5 deg latitude-longitude grid. Mean cloud spherical albedo, the mean optical depth distribution, and cloud fraction are examined and compared off the coast of California and the mid-tropical Atlantic for July 1987 and 1988. Individual grid boxes and spatial averages over several grid boxes are correlated to Coakley's theory of reflection for uniform and broken layered cloud and to the finding of Kedem et al. that rainfall volume in convective systems is a linear function of the fractional area of rain. Kedem's hypothesis can be expressed in terms of cloud properties: the total volume of liquid in a box is a linear function of cloud fraction. Results for the marine stratocumulus regime indicate that albedo is often invariant for cloud fractions of 20% to 80%. Coakley's satellite model of small and large clouds with cores (1 km) and edges (100 m) is consistent with this observation. The cores maintain high liquid water concentrations and large droplets while the edges contain low liquid water concentrations and small droplets. Large clouds are just a collection of cores. The mean optical depth (TAU) distributions support the above observation, with TAU values of 3.55 to 9.38 favored across all cloud fractions. From these results, a method based upon Kedem et al.'s theory is proposed to separate the cloud fraction and liquid water path (LWP) calculations in a general circulation model (GCM). In terms of spatial averaging, a linear relationship between albedo and cloud fraction is observed.
For tropical locations outside the Intertropical Convergence Zone (ITCZ), results of cloud fraction and albedo spatial averaging followed those of the stratus boxes containing few overcast scenes. Both the ideas of Coakley and Kedem et al. apply. Within the ITCZ, the grid boxes tended to have the same statistical properties as stratus boxes containing many overcast scenes. Because different dynamical forcing mechanisms are present, it is difficult to devise a method for determining subgrid scale variations. Neither the theory of Kedem et al. nor that of Coakley works well for the boxes with numerous overcast scenes.
Vaughan, Adam S; Kramer, Michael R; Waller, Lance A; Schieb, Linda J; Greer, Sophia; Casper, Michele
2015-05-01
To demonstrate the implications of choosing analytical methods for quantifying spatiotemporal trends, we compare the assumptions, implementation, and outcomes of popular methods using county-level heart disease mortality in the United States between 1973 and 2010. We applied four regression-based approaches (joinpoint regression, both aspatial and spatial generalized linear mixed models, and Bayesian space-time model) and compared resulting inferences for geographic patterns of local estimates of annual percent change and associated uncertainty. The average local percent change in heart disease mortality from each method was -4.5%, with the Bayesian model having the smallest range of values. The associated uncertainty in percent change differed markedly across the methods, with the Bayesian space-time model producing the narrowest range of variance (0.0-0.8). The geographic pattern of percent change was consistent across methods with smaller declines in the South Central United States and larger declines in the Northeast and Midwest. However, the geographic patterns of uncertainty differed markedly between methods. The similarity of results, including geographic patterns, for magnitude of percent change across these methods validates the underlying spatial pattern of declines in heart disease mortality. However, marked differences in degree of uncertainty indicate that Bayesian modeling offers substantially more precise estimates. Copyright © 2015 Elsevier Inc. All rights reserved.
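The annual percent change compared across these methods is conventionally derived from a log-linear fit of the rate against calendar year; the slope is then converted via APC = 100(e^b - 1). The following is a minimal sketch of that standard conversion, not a reproduction of the study's joinpoint, mixed-model, or Bayesian space-time implementations:

```python
import math

def annual_percent_change(years, rates):
    """Fit log(rate) = a + b*year by ordinary least squares and convert
    the slope b to an annual percent change: APC = 100 * (e**b - 1)."""
    n = len(years)
    logs = [math.log(r) for r in rates]
    my = sum(years) / n
    ml = sum(logs) / n
    sxx = sum((y - my) ** 2 for y in years)
    sxy = sum((y - my) * (l - ml) for y, l in zip(years, logs))
    b = sxy / sxx
    return 100.0 * (math.exp(b) - 1.0)
```

A mortality rate declining by a constant 4.5% per year yields an APC of exactly -4.5 under this definition.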
NASA Astrophysics Data System (ADS)
Olafsen, L. J.; Olafsen, J. S.; Eaves, I. K.
2018-06-01
We report an experimental investigation of the time-dependent spatial intensity distribution of near-infrared idler pulses from an optical parametric oscillator, measured using an infrared (IR) camera, in contrast to beam profiles obtained using traditional knife-edge techniques. Comparisons show that the thermal camera provides more detail than the spatially or time-averaged measurements of a knife-edge profile. Synchronization, averaging, and thresholding techniques are applied to enhance the acquired images. The additional information obtained can improve the process by which semiconductor devices and other IR lasers are characterized for beam quality and output response, and can thereby lead to IR devices with higher performance.
Correlation of Spatially Filtered Dynamic Speckles in Distance Measurement Application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Semenov, Dmitry V.; Nippolainen, Ervin; Kamshilin, Alexei A.
2008-04-15
In this paper the statistical properties of spatially filtered dynamic speckles are considered. This phenomenon has not been sufficiently studied, even though spatial filtering is an important instrument for speckle velocity measurements. With spatial filtering, speckle velocity information is derived from the modulation frequency of the filtered light power, which is measured by a photodetector. A typical photodetector output is a narrow-band random noise signal that includes non-informative intervals. Therefore, precise frequency measurement requires averaging, and averaging in turn presumes uncorrelated samples. However, in the course of this research we found that correlation is a typical property not only of dynamic speckle patterns but also of spatially filtered speckles. With spatial filtering, the correlation is observed as a response of measurements applied to the same part of the object surface, or when several adjacent photodetectors are used simultaneously. The observed correlations cannot be explained by the properties of unfiltered dynamic speckles alone. As we demonstrate, the subject of this paper is important not only from a purely theoretical standpoint but also for applied speckle metrology: for example, using a single spatial filter with an array of photodetectors can greatly improve the accuracy of speckle velocity measurements.
NASA Technical Reports Server (NTRS)
Rundle, John B.
1988-01-01
The idea that earthquakes represent a fluctuation about the long-term motion of plates is expressed mathematically through the fluctuation hypothesis, under which all physical quantities pertaining to the occurrence of earthquakes are required to depend on the difference between the present state of slip on the fault and its long-term average. It is shown that under certain circumstances the model fault dynamics undergo a sudden transition from a spatially ordered, temporally disordered state to a spatially disordered, temporally ordered state, and that the latter states are stable for long intervals of time. For long enough faults, the dynamics are evidently chaotic. The methods developed are then used to construct a detailed model for earthquake dynamics in southern California. The result is a set of slip-time histories for all the major faults, which are similar to data obtained by geological trenching studies. Although there is an element of periodicity to the events, the patterns shift, change and evolve with time. Time scales for pattern evolution seem to be of the order of a thousand years, for average recurrence intervals of about a hundred years.
Lagrangian Hotspots of In-Use NOX Emissions from Transit Buses.
Kotz, Andrew J; Kittelson, David B; Northrop, William F
2016-06-07
In-use, spatiotemporal NOX emissions were measured from a conventional powertrain transit bus and a series electric hybrid bus over gradients of route kinetic intensity and ambient temperature. This paper introduces a new method for identifying NOX emissions hotspots along a bus route using high fidelity Lagrangian vehicle data to explore spatial interactions that may influence emissions production. Our study shows that the studied transit buses emit higher than regulated emissions because on-route operation does not accurately represent the range of engine operation tested according to regulatory standards. Using the Lagrangian hotspot detection, we demonstrate that NOX hotspots occurred at bus stops, during cold starts, on inclines, and for accelerations. On the selected routes, bus stops resulted in 3.3 times the route averaged emissions factor in grams/km without significant dependence on bus type or climate. The buses also emitted 2.3 times the route averaged NOX emissions factor at the beginning of each route due to cold selective catalytic reduction aftertreatment temperature. The Lagrangian hotspot detection technique demonstrated here could be employed in future connected vehicles empowered by advances in computational power, data storage capability, and improved sensor technology to optimize emissions as a function of spatial location.
Schmid, G; Lager, D; Preiner, P; Uberbacher, R; Cecil, S
2007-01-01
In order to estimate typical radio frequency exposures from wireless communication technologies used indoors in homes and offices, WLAN, Bluetooth and Digital Enhanced Cordless Telecommunications (DECT) systems, as well as baby surveillance devices and wireless headphones for indoor usage, have been investigated by measurements and numerical computations. Based on optimised measurement methods, field distributions and the resulting exposure were assessed for selected products and real exposure scenarios. Additionally, generic scenarios were investigated on the basis of numerical computations. The obtained results demonstrate that under usual conditions the resulting spatially (over body dimensions) averaged and 6-min time-averaged exposure for persons in the radio frequency fields of the considered applications is below approximately 0.1% of the reference level for power density according to the International Commission on Non-Ionizing Radiation Protection (ICNIRP) guidelines published in 1998. Spatial and temporal peak values can be considerably higher, by 2-3 orders of magnitude. In the case of some transmitting devices operated in close proximity to the body (e.g. WLAN transmitters), local exposure can reach the same order of magnitude as the basic restriction; however, none of the devices considered in this study exceeded the limits according to the ICNIRP guidelines.
Chung, Younshik; Chang, IlJoon
2015-11-01
Recently introduced vehicle black box systems, or in-vehicle video event data recorders, enable drivers to collect more accurate crash information, such as the location, time, and situation at the pre-crash and crash moments, which can be analyzed to identify crash causal factors more accurately. This study presents the vehicle black box system in brief and its application status in Korea. Based on the crash data obtained from the vehicle black box system, this study analyzes the accuracy of crash data collected by the existing road crash recording method, in which police officers record crashes based on the accident parties' statements or eyewitness accounts. The analysis results show that crash data observed by the existing method have an average spatial difference of 84.48 m (standard deviation 157.75 m) and an average temporal error of 29.05 min (standard deviation 19.24 min). Additionally, the average and standard deviation of crash speed errors were found to be 9.03 km/h and 7.21 km/h, respectively. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Castillo, Richard; Castillo, Edward; McCurdy, Matthew; Gomez, Daniel R.; Block, Alec M.; Bergsma, Derek; Joy, Sarah; Guerrero, Thomas
2012-04-01
To determine the spatial overlap agreement between four-dimensional computed tomography (4D CT) ventilation and single photon emission computed tomography (SPECT) perfusion hypo-functioning pulmonary defect regions in a patient population with malignant airway stenosis. Treatment planning 4D CT images were obtained retrospectively for ten lung cancer patients with radiographically demonstrated airway obstruction due to gross tumor volume. Each patient also received a SPECT perfusion study within one week of the planning 4D CT, and prior to the initiation of treatment. Deformable image registration was used to map corresponding lung tissue elements between the extreme component phase images, from which quantitative three-dimensional (3D) images representing the local pulmonary specific ventilation were constructed. Semi-automated segmentation of the percentile perfusion distribution was performed to identify regional defects distal to the known obstructing lesion. Semi-automated segmentation was similarly performed by multiple observers to delineate corresponding defect regions depicted on 4D CT ventilation. Normalized Dice similarity coefficient (NDSC) indices were determined for each observer between SPECT perfusion and 4D CT ventilation defect regions to assess spatial overlap agreement. Tidal volumes determined from 4D CT ventilation were evaluated versus measurements obtained from lung parenchyma segmentation. Linear regression resulted in a linear fit with slope = 1.01 (R2 = 0.99). Respective values for the average DSC, NDSC (1 mm) and NDSC (2 mm) for all cases and multiple observers were 0.78, 0.88 and 0.99, indicating that, on average, spatial overlap agreement between ventilation and perfusion defect regions was comparable to the threshold for agreement within 1-2 mm uncertainty. Corresponding coefficients of variation for all metrics were similarly in the range 0.10-19%.
This study is the first to quantitatively assess 3D spatial overlap agreement between clinically acquired SPECT perfusion and specific ventilation from 4D CT. Results suggest high correlation between methods within the sub-population of lung cancer patients with malignant airway stenosis.
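The Dice similarity coefficient underlying these overlap metrics is straightforward to compute from binary masks; the following is a minimal sketch of the plain DSC (the distance-normalized NDSC variant used in the study is not reproduced here):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) for two binary
    masks given as sets of voxel indices; 1.0 = perfect overlap, 0.0 = disjoint."""
    inter = len(mask_a & mask_b)
    denom = len(mask_a) + len(mask_b)
    return 2.0 * inter / denom if denom else 1.0
```

For example, two 3-voxel masks sharing 2 voxels have DSC = 2*2/6 = 2/3, and any mask compared with itself gives 1.0.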
Correction of Spatial Bias in Oligonucleotide Array Data
Lemieux, Sébastien
2013-01-01
Background. Oligonucleotide microarrays allow for high-throughput gene expression profiling assays. The technology relies on the fundamental assumption that observed hybridization signal intensities (HSIs) for each intended target, on average, correlate with their target's true concentration in the sample. However, systematic, nonbiological variation from several sources undermines this hypothesis. Background hybridization signal has been previously identified as one such important source, one manifestation of which appears in the form of spatial autocorrelation. Results. We propose an algorithm, pyn, for the elimination of spatial autocorrelation in HSIs, exploiting the duality of desirable mutual information shared by probes in a common probe set and undesirable mutual information shared by spatially proximate probes. We show that this correction procedure reduces spatial autocorrelation in HSIs; increases HSI reproducibility across replicate arrays; increases differentially expressed gene detection power; and performs better than previously published methods. Conclusions. The proposed algorithm increases both precision and accuracy, while requiring virtually no changes to users' current analysis pipelines: the correction consists merely of a transformation of raw HSIs (e.g., CEL files for Affymetrix arrays). A free, open-source implementation is provided as an R package, compatible with standard Bioconductor tools. The approach may also be tailored to other platform types and other sources of bias. PMID:23573083
Gong, Pingyuan; Zheng, Anyun; Chen, Dongmei; Ge, Wanhua; Lv, Changchao; Zhang, Kejin; Gao, Xiaocai; Zhang, Fuchang
2009-07-01
Cognitive abilities are complex human traits influenced by genetic factors. Brain-derived neurotrophic factor (BDNF), a unique polypeptide growth factor, influences the differentiation and survival of neurons in the nervous system. A single-nucleotide polymorphism (rs6265) in the human BDNF gene, resulting in a valine-to-methionine substitution in the pro-BDNF protein, has been thought to be associated with psychiatric disorders and might contribute to individual differences in cognitive abilities. However, the specific roles of the gene in cognition remain unclear. To investigate the relationships between the substitution and cognitive abilities, a healthy population-based study was performed using the PCR-SSCP method. The results, analyzed with a general linear model, showed that the substitution was associated with digital working memory (p = 0.02) and spatial localization (p = 0.03), but not with inhibition, shifting, updating, visuo-spatial working memory, long-term memory, or other measures (p > 0.05) among the compared genotype groups. Moreover, participants with BDNF (GG) had higher average performance in digital working memory and spatial localization than those with BDNF (AA). The findings of the present work imply that variation in BDNF might play a positive role in human digital working memory and spatial localization.
Modelling space of spread Dengue Hemorrhagic Fever (DHF) in Central Java use spatial durbin model
NASA Astrophysics Data System (ADS)
Ispriyanti, Dwi; Prahutama, Alan; Taryono, Arkadina PN
2018-05-01
Dengue Hemorrhagic Fever (DHF) is one of the major public health problems in Indonesia. From year to year, DHF causes extraordinary outbreaks in most parts of Indonesia, especially Central Java. Central Java consists of 35 districts or cities, each region close to its neighbors. Spatial regression is an analysis that models the influence of independent variables on a dependent variable while accounting for regional effects. Spatial regression models include the spatial autoregressive model (SAR), the spatial error model (SEM) and the spatial autoregressive moving average model (SARMA). The spatial Durbin model (SDM) is a development of SAR in which both the dependent and the independent variables have spatial influence. In this research the dependent variable is the number of DHF sufferers. The independent variables observed are population density, number of hospitals, population, number of health centers, and mean years of schooling. From the multiple regression model test, the variables that significantly affect the spread of DHF are population and mean years of schooling. Using queen contiguity and rook contiguity weight matrices, the best model produced is the SDM with queen contiguity, because it has the smallest AIC value, 494.12. The factors that generally affect the spread of DHF in Central Java Province are the population and the mean years of schooling.
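The spatial Durbin model referenced above has the form y = ρWy + Xβ + WXθ + ε, where W is a row-standardized contiguity matrix (queen or rook). A minimal numerical sketch of that structure, with an assumed toy weight matrix rather than the study's actual estimation code:

```python
import numpy as np

def sdm_simulate(W, X, beta, theta, rho, eps):
    """Spatial Durbin model y = rho*W@y + X@beta + W@X@theta + eps,
    solved via the reduced form y = (I - rho*W)^(-1) (X@beta + W@X@theta + eps).
    W is a row-standardized spatial weight (contiguity) matrix."""
    n = W.shape[0]
    rhs = X @ beta + W @ X @ theta + eps
    return np.linalg.solve(np.eye(n) - rho * W, rhs)
```

Because the spatially lagged dependent variable ρWy appears on the right-hand side, y must be obtained from the reduced form rather than by direct evaluation; the returned vector satisfies the SDM equation exactly.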
West, Amanda; Kumar, Sunil; Jarnevich, Catherine S.
2016-01-01
Regional analysis of large wildfire potential given climate change scenarios is crucial to understanding areas most at risk in the future, yet wildfire models are not often developed and tested at this spatial scale. We fit three historical climate suitability models for large wildfires (i.e. ≥ 400 ha) in Colorado and Wyoming using topography and decadal climate averages corresponding to wildfire occurrence at the same temporal scale. The historical models classified points of known large wildfire occurrence with high accuracies. Using a novel approach in wildfire modeling, we applied the historical models to independent climate and wildfire datasets, and the resulting sensitivities were 0.75, 0.81, and 0.83 for Maxent, Generalized Linear, and Multivariate Adaptive Regression Splines, respectively. We projected the historical models into future climate space using data from 15 global circulation models and two representative concentration pathway scenarios. Maps from these geospatial analyses can be used to evaluate the changing spatial distribution of climate suitability of large wildfires in these states. April relative humidity was the most important covariate in all models, providing insight into the climate space of large wildfires in this region. These methods incorporate monthly and seasonal climate averages at a spatial resolution relevant to land management (i.e. 1 km2) and provide a tool that can be modified for other regions of North America, or adapted for other parts of the world.
Exploring Spatial and Temporal Distribution of Cutaneous Leishmaniasis in the Americas, 2001-2011.
Maia-Elkhoury, Ana Nilce Silveira; E Yadón, Zaida; Idali Saboyá Díaz, Martha; de Fátima de Araújo Lucena, Francisca; Gerardo Castellanos, Luis; J Sanchez-Vazquez, Manuel
2016-11-01
Cases reported in the period 2001-2011 from 14 of the 18 CL-endemic countries were included in this study, using two spreadsheets to collect the data. Two indicators were analyzed: CL cases and incidence rate. The local regression method was used to analyze trends in cases and incidence rates over the whole study period, and for 2011 the spatial distribution of each indicator was analyzed by quartile and stratified into four groups. From 2001 to 2011, 636,683 CL cases were reported by 14 countries, with a 30% increase in reported cases. The average incidence rate in the Americas was 15.89/100,000 inhabitants. In 2011, 15 countries reported cases in 180 of a total of 292 units at the first subnational level. The global incidence rate for all countries was 17.42 cases per 100,000 inhabitants, while in the 180 administrative units at the first subnational level the average incidence rate was 57.52/100,000 inhabitants. Nicaragua and Panama had the highest incidence, but more cases occurred in Brazil and Colombia. The spatial distribution was heterogeneous for each indicator and when analyzed at different administrative levels. The results showed different distribution patterns, illustrating the limitation of using individual indicators and the need to classify higher-risk areas in order to prioritize actions. This study shows the epidemiological patterns obtainable from secondary data and the importance of using multiple indicators to define and characterize smaller territorial units for the surveillance and control of leishmaniasis.
Stochastic seismic inversion based on an improved local gradual deformation method
NASA Astrophysics Data System (ADS)
Yang, Xiuwei; Zhu, Peimin
2017-12-01
A new stochastic seismic inversion method based on the local gradual deformation method is proposed, which can incorporate seismic data, well data, geology and their spatial correlations into the inversion process. Geological information, such as sedimentary facies and structures, could provide significant a priori information to constrain an inversion and arrive at reasonable solutions. The local a priori conditional cumulative distributions at each node of model to be inverted are first established by indicator cokriging, which integrates well data as hard data and geological information as soft data. Probability field simulation is used to simulate different realizations consistent with the spatial correlations and local conditional cumulative distributions. The corresponding probability field is generated by the fast Fourier transform moving average method. Then, optimization is performed to match the seismic data via an improved local gradual deformation method. Two improved strategies are proposed to be suitable for seismic inversion. The first strategy is that we select and update local areas of bad fitting between synthetic seismic data and real seismic data. The second one is that we divide each seismic trace into several parts and obtain the optimal parameters for each part individually. The applications to a synthetic example and a real case study demonstrate that our approach can effectively find fine-scale acoustic impedance models and provide uncertainty estimations.
Using spatialized sound cues in an auditorily rich environment
NASA Astrophysics Data System (ADS)
Brock, Derek; Ballas, James A.; Stroup, Janet L.; McClimens, Brian
2004-05-01
Previous Navy research has demonstrated that spatialized sound cues in an otherwise quiet setting are useful for directing attention and improving performance by 16.8% or more in the decision component of a complex dual-task. To examine whether the benefits of this technique are undermined in the presence of additional, unrelated sounds, a background recording of operations in a Navy command center and a voice communications response task [Bolia et al., J. Acoust. Soc. Am. 107, 1065-1066 (2000)] were used to simulate the conditions of an auditorily rich military environment. Without the benefit of spatialized sound cues, performance in the presence of this extraneous auditory information, as measured by decision response times, was an average of 13.6% worse than baseline performance in an earlier study. Performance improved when the cues were present by an average of 18.3%, but this improvement remained below the improvement observed in the baseline study by an average of 11.5%. It is concluded that while the two types of extraneous sound information used in this study degrade performance in the decision task, there is no interaction with the relative performance benefit provided by the use of spatialized auditory cues. [Work supported by ONR.]
Spatial correlation in precipitation trends in the Brazilian Amazon
NASA Astrophysics Data System (ADS)
Buarque, Diogo Costa; Clarke, Robin T.; Mendes, Carlos Andre Bulhoes
2010-06-01
A geostatistical analysis of variables derived from Amazon daily precipitation records (trends in annual precipitation totals, trends in annual maximum precipitation accumulated over 1-5 days, trend in length of dry spell, trend in number of wet days per year) gave results that are consistent with those previously reported. Averaged over the Brazilian Amazon region as a whole, trends in annual maximum precipitations were slightly negative, the trend in the length of dry spell was slightly positive, and the trend in the number of wet days in the year was slightly negative. For trends in annual maximum precipitation accumulated over 1-5 days, spatial correlation between trends was found to extend up to a distance equivalent to at least half a degree of latitude or longitude, with some evidence of anisotropic correlation. Time trends in annual precipitation were found to be spatially correlated up to at least ten degrees of separation, in both W-E and S-N directions. Anisotropic spatial correlation was strongly evident in time trends in length of dry spell with much stronger evidence of spatial correlation in the W-E direction, extending up to at least five degrees of separation, than in the S-N. Because the time trends analyzed are shown to be spatially correlated, it is argued that methods at present widely used to test the statistical significance of climate trends over time lead to erroneous conclusions if spatial correlation is ignored, because records from different sites are assumed to be statistically independent.
A singular-value method for reconstruction of nonradial and lossy objects.
Jiang, Wei; Astheimer, Jeffrey; Waag, Robert
2012-03-01
Efficient inverse scattering algorithms for nonradial lossy objects are presented using singular-value decomposition to form reduced-rank representations of the scattering operator. These algorithms extend eigenfunction methods that are not applicable to nonradial lossy scattering objects because the scattering operators for these objects do not have orthonormal eigenfunction decompositions. A method of local reconstruction by segregation of scattering contributions from different local regions is also presented. Scattering from each region is isolated by forming a reduced-rank representation of the scattering operator that has domain and range spaces comprised of far-field patterns with retransmitted fields that focus on the local region. Methods for the estimation of the boundary, average sound speed, and average attenuation slope of the scattering object are also given. These methods yielded approximations of scattering objects that were sufficiently accurate to allow residual variations to be reconstructed in a single iteration. Calculated scattering from a lossy elliptical object with a random background, internal features, and white noise is used to evaluate the proposed methods. Local reconstruction yielded images with spatial resolution that is finer than a half wavelength of the center frequency and reproduces sound speed and attenuation slope with relative root-mean-square errors of 1.09% and 11.45%, respectively.
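The reduced-rank representation at the heart of this approach follows the standard truncated-SVD construction. Below is a generic sketch of that building block (Eckart-Young truncation of an arbitrary matrix), not the authors' scattering-operator-specific domain/range restriction:

```python
import numpy as np

def reduced_rank(A, r):
    """Best rank-r approximation of matrix A in the least-squares sense
    (Eckart-Young): keep the r largest singular values of A = U S V^H."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vh[:r]
```

Truncating at rank r discards the smallest singular values, so the approximation error can only shrink as r grows, and a matrix of true rank r is reproduced exactly.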
Numerical analysis of Venous External Scaffolding Technology for Saphenous Vein Grafts.
Meirson, T; Orion, E; Avrahami, I
2015-07-16
This paper presents a method for numerically analyzing and comparing Saphenous Vein Grafts (SVGs) following Coronary Artery Bypass Graft surgery (CABG). The method analyzes the flow dynamics inside vein grafts with and without support from Venous External Scaffolding Technology (VEST). It uses patient-specific computational fluid dynamics (CFD) models to characterize the relevant hemodynamic parameters of patients' SVGs. The method was used to compare the hemodynamics of six patient-specific models and flow conditions of stented and non-stented SVGs, 12 months post-transplantation. The flow parameters used to characterize the grafts' hemodynamics include Time Averaged Wall Shear Stress (TAWSS), Oscillatory Shear Index (OSI) and Relative Residence Time (RRT). The effect of stenting was clearly demonstrated by the chosen parameters. SVGs under constriction of VEST were associated with a similar spatial average of TAWSS (10.73 vs 10.29 dyn/cm²), yet had fewer lesions with low TAWSS, lower OSI (0.041 vs 0.08) and RRT (0.12 vs 0.24), and more uniform flow with fewer flow discrepancies. In conclusion, the suggested method and parameters clearly demonstrated the advantage of VEST support. Stenting vein grafts with VEST improved hemodynamic factors that are correlated with graft failure following the CABG procedure.
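The three hemodynamic indices can be computed from a wall-shear-stress time series using their standard definitions (TAWSS as the time-averaged WSS magnitude, OSI as the directional oscillation measure, RRT as their combination). This is a sketch using the textbook formulas; the paper's exact post-processing pipeline is not specified here.

```python
import numpy as np

def wss_indices(tau, dt):
    """TAWSS, OSI and RRT from a wall-shear-stress vector time series.

    tau: (nt, 3) instantaneous WSS vector at one wall point; dt: time step.
    Standard definitions:
        TAWSS = (1/T) * integral |tau| dt
        OSI   = 0.5 * (1 - |integral tau dt| / integral |tau| dt)
        RRT   = 1 / ((1 - 2*OSI) * TAWSS)
    """
    T = dt * len(tau)
    mean_vec = tau.sum(axis=0) * dt / T                   # time-averaged WSS vector
    tawss = np.linalg.norm(tau, axis=1).sum() * dt / T    # time-averaged |WSS|
    osi = 0.5 * (1.0 - np.linalg.norm(mean_vec) / tawss)
    rrt = 1.0 / ((1.0 - 2.0 * osi) * tawss)
    return tawss, osi, rrt
```

For a purely unidirectional, steady shear the OSI is zero and RRT reduces to 1/TAWSS, which matches the intuition that oscillatory, low-shear regions (high OSI, low TAWSS) give long residence times.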
Favre-Averaged Turbulence Statistics in Variable Density Mixing of Buoyant Jets
NASA Astrophysics Data System (ADS)
Charonko, John; Prestridge, Kathy
2014-11-01
Variable density mixing of a heavy fluid jet with lower density ambient fluid in a subsonic wind tunnel was experimentally studied using Particle Image Velocimetry and Planar Laser Induced Fluorescence to simultaneously measure velocity and density. Flows involving the mixing of fluids with large density ratios are important in a range of physical problems including atmospheric and oceanic flows, industrial processes, and inertial confinement fusion. Here we focus on buoyant jets with coflow. Results from two different Atwood numbers, 0.1 (Boussinesq limit) and 0.6 (non-Boussinesq case), reveal that buoyancy is important for most of the turbulent quantities measured. Statistical characteristics of the mixing important for modeling these flows such as the PDFs of density and density gradients, turbulent kinetic energy, Favre averaged Reynolds stress, turbulent mass flux velocity, density-specific volume correlation, and density power spectra were also examined and compared with previous direct numerical simulations. Additionally, a method for directly estimating Reynolds-averaged velocity statistics on a per-pixel basis is extended to Favre-averages, yielding improved accuracy and spatial resolution as compared to traditional post-processing of velocity and density fields.
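Favre (density-weighted) averaging, which underlies the Favre-averaged Reynolds stress mentioned above, is straightforward given simultaneous density and velocity samples. This minimal sketch assumes scalar samples at a single point; the experiment works with whole PIV/PLIF fields.

```python
import numpy as np

def favre_stats(rho, u):
    """Favre mean and Favre-averaged second moment from paired samples.

    rho, u: 1-D arrays of simultaneous density and velocity samples.
    Returns (u_tilde, R_f) where
        u_tilde = <rho u> / <rho>          (Favre mean)
        R_f     = <rho u'' u''> / <rho>    (Favre-averaged stress),
    with u'' = u - u_tilde the Favre fluctuation.
    """
    u_f = np.mean(rho * u) / np.mean(rho)
    upp = u - u_f
    R_f = np.mean(rho * upp * upp) / np.mean(rho)
    return u_f, R_f
```

For constant density the Favre mean reduces to the ordinary Reynolds average, which is a useful sanity check; with density–velocity correlation the two differ, and that difference carries the turbulent mass-flux information discussed in the abstract.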
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karagiannis, Georgios, E-mail: georgios.karagiannis@pnnl.gov; Lin, Guang, E-mail: guang.lin@pnnl.gov
2014-02-15
Generalized polynomial chaos (gPC) expansions allow us to represent the solution of a stochastic system using a series of polynomial chaos basis functions. The number of gPC terms increases dramatically as the dimension of the random input variables increases. When the number of the gPC terms is larger than that of the available samples, a scenario that often occurs when the corresponding deterministic solver is computationally expensive, evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solutions, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points, via (1) the Bayesian model average (BMA) or (2) the median probability model, and their construction as spatial functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions, while the latter provides a sparse representation of the stochastic solutions by evaluating the expansion on a subset of dominating gPC bases. Moreover, the proposed methods quantify the importance of the gPC bases in the probabilistic sense through inclusion probabilities. We design a Markov chain Monte Carlo (MCMC) sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed methods are suitable for, but not restricted to, problems whose stochastic solutions are sparse in the stochastic space with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the accuracy and performance of the proposed methods and make comparisons with other approaches on solving elliptic SPDEs with 1, 14, and 40 random dimensions.
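Taken in isolation, the BMA step amounts to weighting each candidate model's prediction by its posterior model probability. The sketch below assumes a uniform model prior and precomputed log evidences; the paper obtains these quantities from its MCMC sampler rather than directly as here.

```python
import numpy as np

def bma_predict(preds, log_evidence):
    """Bayesian model average of candidate predictions.

    preds: (n_models, n_points) predictions from each candidate model.
    log_evidence: (n_models,) log marginal likelihood per model.
    Posterior model probabilities follow from a uniform prior; the
    max-subtraction keeps the exponentials numerically stable.
    """
    w = np.exp(log_evidence - np.max(log_evidence))
    w /= w.sum()
    return w @ preds
```

With equal evidence the BMA is a plain average; when one model dominates the evidence, its prediction dominates the average, which is how BMA interpolates between model selection and model mixing.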
Schneider, Philipp; Castell, Nuria; Vogt, Matthias; Dauge, Franck R; Lahoz, William A; Bartonova, Alena
2017-09-01
The recent emergence of low-cost microsensors measuring various air pollutants has significant potential for carrying out high-resolution mapping of air quality in the urban environment. However, the data obtained by such sensors are generally less reliable than those from standard equipment and they are subject to significant data gaps in both space and time. In order to overcome this issue, we present here a data fusion method based on geostatistics that allows for merging observations of air quality from a network of low-cost sensors with spatial information from an urban-scale air quality model. The performance of the methodology is evaluated for nitrogen dioxide in Oslo, Norway, using both simulated datasets and real-world measurements from a low-cost sensor network for January 2016. The results indicate that the method is capable of producing realistic hourly concentration fields of urban nitrogen dioxide that inherit the spatial patterns from the model and adjust the prior values using the information from the sensor network. The accuracy of the data fusion method is dependent on various factors including the total number of observations, their spatial distribution, their uncertainty (both in terms of systematic biases and random errors), as well as the ability of the model to provide realistic spatial patterns of urban air pollution. A validation against official data from air quality monitoring stations equipped with reference instrumentation indicates that the data fusion method is capable of reproducing city-wide averaged official values with an R² of 0.89 and a root mean squared error of 14.3 μg m⁻³. It is further capable of reproducing the typical daily cycles of nitrogen dioxide. Overall, the results indicate that the method provides a robust way of extracting useful information from uncertain sensor data using only a time-invariant model dataset and the knowledge contained within an entire sensor network.
Buchin, Kevin; Sijben, Stef; van Loon, E Emiel; Sapir, Nir; Mercier, Stéphanie; Marie Arseneau, T Jean; Willems, Erik P
2015-01-01
The Brownian bridge movement model (BBMM) provides a biologically sound approximation of the movement path of an animal based on discrete location data, and is a powerful method to quantify utilization distributions. Computing the utilization distribution based on the BBMM while calculating movement parameters directly from the location data may result in inconsistent and misleading results. We show how the BBMM can be extended to also calculate derived movement parameters. Furthermore we demonstrate how to integrate environmental context into a BBMM-based analysis. We develop a computational framework to analyze animal movement based on the BBMM. In particular, we demonstrate how a derived movement parameter (relative speed) and its spatial distribution can be calculated in the BBMM. We show how to integrate our framework with the conceptual framework of the movement ecology paradigm in two related but distinctly different ways, focusing on the influence that the environment has on animal movement. First, we demonstrate an a posteriori approach, in which the spatial distribution of average relative movement speed as obtained from a "contextually naïve" model is related to the local vegetation structure within the monthly ranging area of a group of wild vervet monkeys. Without a model like the BBMM it would not be possible to estimate such a spatial distribution of a parameter in a sound way. Second, we introduce an a priori approach in which atmospheric information is used to calculate a crucial parameter of the BBMM to investigate flight properties of migrating bee-eaters. This analysis shows significant differences in the characteristics of flight modes, which would not have been detected without using the BBMM.
Our algorithm is the first of its kind to allow BBMM-based computation of movement parameters beyond the utilization distribution, and we present two case studies that demonstrate two fundamentally different ways in which our algorithm can be applied to estimate the spatial distribution of average relative movement speed, while interpreting it in a biologically meaningful manner, across a wide range of environmental scenarios and ecological contexts. Therefore movement parameters derived from the BBMM can provide a powerful method for movement ecology research.
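The building block of the BBMM is a Gaussian position density between consecutive fixes: at fraction α of the interval the mean is the linear interpolation of the fixes and the variance is T·α(1−α)·σ²m (plus location-error terms, omitted here). A minimal 1-D sketch, with fix locations, interval, and diffusion parameter all hypothetical:

```python
import numpy as np

def brownian_bridge_density(a, b, T, sigma_m2, alpha, x):
    """Gaussian position density of the Brownian bridge between two fixes.

    a, b: fix locations (1-D for brevity); T: time between fixes;
    sigma_m2: Brownian motion variance parameter; alpha in (0, 1):
    fraction of the interval; x: evaluation points.
    The BBMM utilization distribution integrates this density over time
    and over all fix pairs.
    """
    mu = (1.0 - alpha) * a + alpha * b
    var = T * alpha * (1.0 - alpha) * sigma_m2
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
```

The variance vanishes at the fixes (α = 0, 1) and peaks mid-bridge, which is why BBMM uncertainty is largest between observations.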
Multiparticle imaging technique for two-phase fluid flows using pulsed laser speckle velocimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hassan, T.A.
1992-12-01
The practical use of Pulsed Laser Velocimetry (PLV) requires the use of fast, reliable computer-based methods for tracking numerous particles suspended in a fluid flow. Two methods for performing tracking are presented. One method tracks a particle through multiple sequential images (minimum of four required) by prediction and verification of particle displacement and direction. The other method, requiring only two sequential images, uses a dynamic, binary, spatial cross-correlation technique. The algorithms are tested on computer-generated synthetic data and experimental data obtained with traditional PLV methods. This allowed error analysis and testing of the algorithms on real engineering flows. A novel method is proposed which eliminates tedious, undesirable manual operator assistance in removing erroneous vectors. This method uses an iterative process involving an interpolated field produced from the most reliable vectors. Methods are developed to allow fast analysis and presentation of sets of PLV image data. Experimental investigation of a two-phase, horizontal, stratified flow regime was performed to determine the interface drag force and, correspondingly, the drag coefficient. A horizontal, stratified flow test facility using water and air was constructed to allow interface shear measurements with PLV techniques. The experimentally obtained local drag measurements were compared with theoretical results given by conventional interfacial drag theory. Close agreement was shown when local conditions near the interface were similar to space-averaged conditions. However, theory based on macroscopic, space-averaged flow behavior was shown to give incorrect results if the local gas velocity near the interface was unstable, transient, and dissimilar from the average gas velocity through the test facility.
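The two-image cross-correlation idea can be sketched with the classic FFT-based estimator: the displacement between interrogation windows is the location of the cross-correlation peak. The thesis's dynamic binary variant adds thresholding and iteration on top of this basic step, which is not reproduced here.

```python
import numpy as np

def displacement_fft(img_a, img_b):
    """Integer-pixel displacement between two windows via the FFT
    cross-correlation peak.  Returns (row_shift, col_shift), with
    indices above N/2 wrapped to negative shifts."""
    a = img_a - img_a.mean()
    b = img_b - img_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return tuple(p - n if p > n // 2 else p for p, n in zip(peak, corr.shape))
```

Because the correlation is circular, particles leaving one side of the window re-enter on the other; in practice windows are chosen large relative to the expected displacement so the wrapped peak is unambiguous.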
NASA Astrophysics Data System (ADS)
Lehman, B. M.; Niemann, J. D.
2008-12-01
Soil moisture exerts significant control over the partitioning of latent and sensible energy fluxes, the magnitude of both vertical and lateral water fluxes, the physiological and water-use characteristics of vegetation, and nutrient cycling. Considerable progress has been made in determining how soil characteristics, topography, and vegetation influence spatial patterns of soil moisture in humid environments at the catchment, hillslope, and plant scales. However, understanding of the controls on soil moisture patterns beyond the plant scale in semi-arid environments remains more limited. This study examines the relationships between the spatial patterns of near surface soil moisture (upper 5 cm), terrain indices, and soil properties in a small, semi-arid, montane catchment. The 8 ha catchment, located in the Cache La Poudre River Canyon in north-central Colorado, has a total relief of 115 m and an average elevation of 2193 m. It is characterized by steep slopes and shallow, gravelly/sandy soils with scattered granite outcroppings. Depth to bedrock ranges from 0 m to greater than 1 m. Vegetation in the catchment is highly correlated with topographic aspect. In particular, north-facing hillslopes are predominately vegetated by ponderosa pines, while south-facing slopes are mostly vegetated by several shrub species. Soil samples were collected at a 30 m resolution to characterize soil texture and bulk density, and several datasets consisting of more than 300 point measurements of soil moisture were collected using time domain reflectometry (TDR) between Fall 2007 and Summer 2008 at a 15 m resolution. Results from soil textural analysis performed with sieving and the ASTM standard hydrometer method show that soil texture is finer on the north-facing hillslope than on the south-facing hillslope. Cos(aspect) is the best univariate predictor of silts, while slope is the best predictor of coarser fractions up to fine gravel. 
Bulk density increases with depth but shows no significant relationship with topographic indices. When the catchment average soil moisture is low, the variance of soil moisture increases with the average. When the average is high, the variance remains relatively constant. Little of the variation in soil moisture is explained by topographic indices when the catchment is either very wet or dry; however, when the average soil moisture takes on intermediate values, cos(aspect) is consistently the best predictor among the terrain indices considered.
NASA Astrophysics Data System (ADS)
Nasta, Paolo; Penna, Daniele; Brocca, Luca; Zuecco, Giulia; Romano, Nunzio
2018-02-01
Indirect measurements of field-scale (hectometer grid-size) spatial-average near-surface soil moisture are becoming increasingly available by exploiting new-generation ground-based and satellite sensors. Nonetheless, modeling applications for water resources management require knowledge of plot-scale (1-5 m grid-size) soil moisture obtained through spatially distributed sensor networks. Since fulfilling such requirements is not always possible due to time and budget constraints, alternative approaches are desirable. In this study, we explore the feasibility of determining spatial-average soil moisture and soil moisture patterns given the knowledge of long-term records of climate forcing data and topographic attributes. A downscaling approach is proposed that couples two different models: the Eco-Hydrological Bucket and Equilibrium Moisture from Topography. This approach helps identify the relative importance of two compound topographic indexes in explaining the spatial variation of soil moisture patterns, indicating valley- and hillslope-dependence controlled by lateral flow and radiative processes, respectively. The integrated model also detects temporal instability if the dominant type of topographic dependence changes with spatial-average soil moisture. Model application was carried out at three sites in different parts of Italy, each characterized by different environmental conditions. Prior calibration was performed by using sparse and sporadic soil moisture values measured by portable time domain reflectometry devices. Cross-site comparisons offer different interpretations in the explained spatial variation of soil moisture patterns, with time-invariant valley-dependence (site in northern Italy) and hillslope-dependence (site in southern Italy). 
The sources of soil moisture spatial variation at the site in central Italy are time-variant within the year and the seasonal change of topographic dependence can be conveniently correlated to a climate indicator such as the aridity index.
Hamill, Daniel; Buscombe, Daniel; Wheaton, Joseph M
2018-01-01
Side scan sonar in low-cost 'fishfinder' systems has become popular in aquatic ecology and sedimentology for imaging submerged riverbed sediment at coverages and resolutions sufficient to relate bed texture to grain size. Traditional methods of mapping bed texture (i.e. physical samples) have relatively high cost and low spatial coverage compared to sonar, which can continuously image several kilometers of channel in a few hours. Towards a goal of automating the classification of bed habitat features, we investigate relationships between substrates and statistical descriptors of bed textures in side scan sonar echograms of alluvial deposits. We develop a method for automated segmentation of bed textures into between two and five grain-size classes. Second-order texture statistics are used in conjunction with a Gaussian mixture model to classify the heterogeneous bed into small homogeneous patches of sand, gravel, and boulders with average accuracies of 80%, 49%, and 61%, respectively. Reach-averaged proportions of these sediment types were within 3% of those in similar maps derived from multibeam sonar.
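The Gaussian-mixture classification step can be sketched with a plain EM fit in one dimension; the actual study feeds the mixture model with multivariate second-order (co-occurrence) texture features rather than the scalar samples assumed here.

```python
import numpy as np

def fit_gmm_1d(x, k=2, iters=200):
    """Plain EM for a one-dimensional Gaussian mixture model.

    x: 1-D array of texture-statistic samples; k: number of classes.
    Returns (weights, means, variances).  Means are initialized from
    spread-out percentiles so the fit is deterministic.
    """
    mu = np.percentile(x, np.linspace(25, 75, k))
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        pdf = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return w, mu, var
```

Each pixel (or patch) is then assigned to the component with the highest responsibility, producing the homogeneous sand/gravel/boulder patches described above.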
NASA Technical Reports Server (NTRS)
Bunting, Charles F.; Yu, Shih-Pin
2006-01-01
This paper emphasizes the application of numerical methods to explore ideas related to shielding effectiveness from a statistical viewpoint. An empty rectangular box is examined using a hybrid modal/moment method. The basic computational method is presented, followed by results for single and multiple observation points within the over-moded empty structure. The statistics of the field are obtained by using frequency stirring, borrowed from ideas connected with reverberation chamber techniques, extending the ideas of shielding effectiveness well into the multiple resonance regions. The study addresses the average shielding effectiveness over a broad spatial sample within the enclosure as the frequency is varied.
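Averaging shielding effectiveness over a broad spatial sample can be sketched as follows. The power-averaging convention (average the linear power ratios, then convert to dB) is an assumption for illustration, not necessarily the paper's definition.

```python
import numpy as np

def average_shielding_dB(E_inc, E_samples):
    """Spatially averaged shielding effectiveness in dB.

    E_inc: incident field amplitude; E_samples: array of field amplitudes
    at observation points inside the enclosure (may be complex).
    SE per point is the incident-to-internal power ratio; the points are
    averaged in linear power before converting to dB.
    """
    se_lin = (abs(E_inc) / np.abs(E_samples)) ** 2
    return 10.0 * np.log10(se_lin.mean())
```

Averaging in linear power rather than in dB weights the poorly shielded points more heavily, which is usually the conservative choice for enclosure design.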
Yeo, Boon Y.; McLaughlin, Robert A.; Kirk, Rodney W.; Sampson, David D.
2012-01-01
We present a high-resolution three-dimensional position tracking method that allows an optical coherence tomography (OCT) needle probe to be scanned laterally by hand, providing the high degree of flexibility and freedom required in clinical usage. The method is based on a magnetic tracking system, which is augmented by cross-correlation-based resampling and a two-stage moving window average algorithm to improve upon the tracker's limited intrinsic spatial resolution, achieving 18 µm RMS position accuracy. A proof-of-principle system was developed, with successful image reconstruction demonstrated on phantoms and on ex vivo human breast tissue validated against histology. This freehand scanning method could contribute toward clinical implementation of OCT needle imaging. PMID:22808429
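A two-stage moving window average is simply two cascaded smoothing passes over the position stream; the window sizes below are illustrative, not the paper's values, and the cross-correlation resampling stage is not reproduced here.

```python
import numpy as np

def two_stage_moving_average(x, w1, w2):
    """Two cascaded centered moving averages over a 1-D signal.

    Cascading two short windows approximates a triangular kernel, giving
    smoother output than a single window of the combined length while
    preserving slow trends in the tracked position.
    """
    def smooth(y, w):
        return np.convolve(y, np.ones(w) / w, mode="same")
    return smooth(smooth(x, w1), w2)
```

Away from the edges a constant input passes through unchanged, which is the basic correctness check for any averaging filter; edge samples are attenuated by the implicit zero padding of `mode="same"`.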
Nonlinear data assimilation for the regional modeling of maximum ozone values.
Božnar, Marija Zlata; Grašič, Boštjan; Mlakar, Primož; Gradišar, Dejan; Kocijan, Juš
2017-11-01
We present a new method of data assimilation with the aim of correcting the forecast of the maximum values of ozone in regional photo-chemical models for areas over complex terrain using multilayer perceptron artificial neural networks. Up until now, these types of models have been used as a single model for one location when forecasting concentrations of air pollutants. We propose a method for constructing a more ambitious model: a single model, which can be used at several locations because the model is spatially transferable and is valid for the whole 2D domain. To achieve this goal, we introduce three novel ideas. The new method improves correlation at measurement station locations by 10% on average and improves by approximately 5% elsewhere.
Contrast-enhanced MR Angiography of the Abdomen with Highly Accelerated Acquisition Techniques
Mostardi, Petrice M.; Glockner, James F.; Young, Phillip M.
2011-01-01
Purpose: To demonstrate that highly accelerated (net acceleration factor [Rnet] ≥ 10) acquisition techniques can be used to generate three-dimensional (3D) subsecond timing images, as well as diagnostic-quality high-spatial-resolution contrast material–enhanced (CE) renal magnetic resonance (MR) angiograms with a single split dose of contrast material. Materials and Methods: All studies were approved by the institutional review board and were HIPAA compliant; written consent was obtained from all participants. Twenty-two studies were performed in 10 female volunteers (average age, 47 years; range, 27–62 years) and six patients with renovascular disease (three women; average age, 48 years; range, 37–68 years; three men; average age, 60 years; range, 50–67 years; composite average age, 54 years; range, 38–68 years). The two-part protocol consisted of a low-dose (2 mL contrast material) 3D timing image with approximate 1-second frame time, followed by a high-spatial-resolution (1.0–1.6-mm isotropic voxels) breath-hold 3D renal MR angiogram (18 mL) over the full abdominal field of view. Both acquisitions used two-dimensional (2D) sensitivity encoding acceleration factor (R) of eight and 2D homodyne (HD) acceleration (RHD) of 1.4–1.8 for Rnet = R · RHD of 10 or higher. Statistical analysis included determination of mean values and standard deviations of image quality scores performed by two experienced reviewers with use of eight evaluation criteria. Results: The 2-mL 3D time-resolved image successfully portrayed progressive arterial filling in all 22 studies and provided an anatomic overview of the vasculature. Successful timing was also demonstrated in that the renal MR angiogram showed adequate or excellent portrayal of the main renal arteries in 21 of 22 studies. 
Conclusion: Two-dimensional acceleration techniques with Rnet of 10 or higher can be used in CE MR angiography to acquire (a) a 3D image series with 1-second frame time, allowing accurate bolus timing, and (b) a high-spatial-resolution renal angiogram. © RSNA, 2011 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.11110242/-/DC1 PMID:21900616
Role of spatial averaging in multicellular gradient sensing.
Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew
2016-05-20
Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation-global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation-global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.
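The covariance effect at the heart of the argument, Var(c1 − c2) = Var(c1) + Var(c2) − 2 Cov(c1, c2), can be checked numerically with a toy model of two measurement arms sharing a common noise component. This is purely illustrative and not the paper's LEGI/REGI equations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
# shared (e.g. global-inhibitor) noise plus independent local noise per arm
s = rng.normal(size=n)
c1 = 1.0 + 0.8 * s + rng.normal(scale=0.6, size=n)   # Var = 0.64 + 0.36 = 1.0
c2 = 0.0 + 0.8 * s + rng.normal(scale=0.6, size=n)   # Var = 0.64 + 0.36 = 1.0
g = c1 - c2                                          # gradient readout
# shared component cancels: Var(g) = 1.0 + 1.0 - 2*0.64 = 0.72
var_g = g.var()
```

If spatial averaging shrank only the independent terms, subtraction would get more precise; the paper's point is that averaging also shrinks the covariance term, so the net effect on Var(c1 − c2) can go the wrong way.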
Direct Simulation of Extinction in a Slab of Spherical Particles
NASA Technical Reports Server (NTRS)
Mackowski, D.W.; Mishchenko, Michael I.
2013-01-01
The exact multiple sphere superposition method is used to calculate the coherent and incoherent contributions to the ensemble-averaged electric field amplitude and Poynting vector in systems of randomly positioned nonabsorbing spherical particles. The target systems consist of cylindrical volumes, with radius several times larger than length, containing spheres with positional configurations generated by a Monte Carlo sampling method. Spatially dependent values for coherent electric field amplitude, coherent energy flux, and diffuse energy flux are calculated by averaging exact local field and flux values over multiple configurations and over spatially independent directions for fixed target geometry, sphere properties, and sphere volume fraction. Our results reveal exponential attenuation of the coherent field and the coherent energy flux inside the particulate layer and thereby further corroborate the general methodology of the microphysical radiative transfer theory. An effective medium model based on plane wave transmission and reflection by a plane layer is used to model the dependence of the coherent electric field on particle packing density. The effective attenuation coefficient of the random medium, computed from the direct simulations, is found to agree closely with effective medium theories and with measurements. In addition, the simulation results reveal the presence of a counter-propagating component to the coherent field, which arises due to the internal reflection of the main coherent field component by the target boundary. The characteristics of the diffuse flux are compared to, and found to be consistent with, a model based on the diffusion approximation of the radiative transfer theory.
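The exponential attenuation of the coherent field can be quantified by a least-squares line fit of ln|E| against depth, |E| ∝ exp(−αz). A sketch under the assumption of clean exponential decay; the paper's estimator may differ in detail.

```python
import numpy as np

def attenuation_coefficient(z, coherent_amp):
    """Effective attenuation coefficient alpha from a depth profile of the
    coherent field amplitude, via a least-squares line on ln|amp| vs z."""
    slope, _ = np.polyfit(z, np.log(coherent_amp), 1)
    return -slope
```

This is the number compared against effective-medium predictions in the abstract; fitting in log space makes the estimate insensitive to the overall amplitude scale.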
NASA Astrophysics Data System (ADS)
Sirianni, M.; Comas, X.; Shoemaker, B.; Job, M. J.; Cooper, H.
2016-12-01
Globally, wetland soils play an important role in regulating climate change by functioning as a source or sink for atmospheric carbon, particularly in terms of methane and carbon dioxide. While many historic studies defined the function of wetland soils in the global carbon budget, the gas-flux dynamics of subtropical wetlands is largely unknown. Big Cypress National Preserve is a collection of subtropical wetlands in southwestern Florida, including extensive forested (cypress, pine, hardwood) and sawgrass ecosystems that dry and flood annually in response to rainfall. The U.S. Geological Survey employs eddy covariance methods at several locations within the Preserve to quantify carbon and methane exchanges at ecosystem scales. While eddy covariance towers are a convenient tool for measuring gas fluxes, their footprint is spatially extensive (hundreds of meters); and thus spatial variability at smaller scales is masked by averaging or even overlooked. We intend to estimate small-scale contributions of organic and calcitic soils to gas exchanges measured by the eddy covariance towers using a combination of geophysical, hydrologic and ecologic techniques. Preliminary results suggest that gas releases from flooded calcitic soils are much greater than organic soils. These results - and others - will help build a better understanding of the role of subtropical wetlands in the global carbon budget.
Nonlinear vibrational microscopy
Holtom, Gary R.; Xie, Xiaoliang Sunney; Zumbusch, Andreas
2000-01-01
The present invention is a method and apparatus for microscopic vibrational imaging using coherent anti-Stokes Raman scattering (CARS) or sum frequency generation (SFG). Microscopic imaging with vibrational spectroscopic contrast is achieved by generating signals in a nonlinear optical process and spatially resolved detection of the signals. The spatial resolution is attained by minimizing the spot size of the optical interrogation beams on the sample. Minimizing the spot size relies upon (a) directing at least two substantially coaxial laser beams (interrogation beams) through a microscope objective, providing a focal spot on the sample; (b) collecting a signal beam together with a residual beam from the coaxial laser beams after passing through the sample; (c) removing the residual beam; and (d) detecting the signal beam, thereby creating the pixel. The method has significantly higher spatial resolution than IR microscopy and higher sensitivity than spontaneous Raman microscopy at much lower average excitation powers. CARS and SFG microscopy does not rely on the presence of fluorophores, but retains the resolution and three-dimensional sectioning capability of confocal and two-photon fluorescence microscopy. Complementary to these techniques, CARS and SFG microscopy provides a contrast mechanism based on vibrational spectroscopy. This vibrational contrast mechanism, combined with unprecedented sensitivity at a tolerable laser power level, provides a new approach for microscopic investigations of chemical and biological samples.
Cluster Detection Tests in Spatial Epidemiology: A Global Indicator for Performance Assessment
Guttmann, Aline; Li, Xinran; Feschet, Fabien; Gaudart, Jean; Demongeot, Jacques; Boire, Jean-Yves; Ouchchane, Lemlih
2015-01-01
In cluster detection of disease, local cluster detection tests (CDTs) are in common use. These methods aim both at locating likely clusters and testing for their statistical significance. New or improved CDTs are regularly proposed to epidemiologists and must be subjected to performance assessment. Because location accuracy has to be considered, performance assessment goes beyond the raw estimation of type I or II errors. As no consensus exists for performance evaluations, heterogeneous methods are used, and therefore studies are rarely comparable. A global indicator of performance, which assesses both spatial accuracy and usual power, would facilitate the exploration of CDT behaviour and help between-studies comparisons. The Tanimoto coefficient (TC) is a well-known measure of similarity that can assess location accuracy, but only for one detected cluster. In a simulation study, performance must be measured over many tests. From the TC, we here propose two statistics, the averaged TC and the cumulated TC, as indicators able to provide a global overview of CDT performance for both usual power and location accuracy. We demonstrate the properties of these two indicators and the superiority of the cumulated TC in assessing performance. We applied these indicators to conduct a systematic spatial assessment displayed through performance maps. PMID:26086911
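The Tanimoto coefficient described above, and an averaged variant over simulated runs, can be sketched in a few lines. The region IDs and detection outcomes below are hypothetical illustrations, not data from the study:

```python
# Sketch of the Tanimoto coefficient (TC) as a location-accuracy measure for
# cluster detection tests: the overlap between true and detected cluster areas.

def tanimoto(true_cluster, detected_cluster):
    """TC = |A intersect B| / |A union B| between two sets of spatial units."""
    a, b = set(true_cluster), set(detected_cluster)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def averaged_tc(true_cluster, detections):
    """Average TC over many simulated runs (0 for runs with no detection)."""
    scores = [tanimoto(true_cluster, d) for d in detections]
    return sum(scores) / len(scores)

# Example: true cluster covers regions 1-4; three simulated detections.
true = {1, 2, 3, 4}
runs = [{1, 2, 3, 4}, {2, 3, 4, 5}, set()]
print(tanimoto(true, runs[0]))            # 1.0 (perfect overlap)
print(round(averaged_tc(true, runs), 3))  # 0.533
```

An averaged TC near 1 indicates both high power (clusters are detected) and high location accuracy (the detected areas match the true cluster).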
NASA Astrophysics Data System (ADS)
Bae, Seungbin; Lee, Kisung; Seo, Changwoo; Kim, Jungmin; Joo, Sung-Kwan; Joung, Jinhun
2011-09-01
We developed a high precision position decoding method for a positron emission tomography (PET) detector that consists of a thick slab scintillator coupled with a multichannel photomultiplier tube (PMT). The DETECT2000 simulation package was used to validate light response characteristics for a 48.8 mm×48.8 mm×10 mm slab of lutetium oxyorthosilicate coupled to a 64 channel PMT. The data are then combined to produce light collection histograms. We employed a Gaussian mixture model (GMM) to parameterize the composite light response with multiple Gaussian mixtures. In the training step, the light photons acquired by the N PMT channels were used as an N-dimensional feature vector and fed into a GMM training model to generate optimal parameters for M mixtures. In the positioning step, we decoded the spatial locations of incident photons by evaluating a sample feature vector with respect to the trained mixture parameters. The average spatial resolutions after positioning with four mixtures were 1.1 mm full width at half maximum (FWHM) at the corner and 1.0 mm FWHM at the center section. This indicates that the proposed algorithm achieved high performance in both spatial resolution and positioning bias, especially at the corner section of the detector.
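The positioning step can be sketched as evaluating an event's feature vector against the trained mixtures and choosing the position with the highest likelihood. This is a minimal sketch with diagonal-covariance Gaussians and toy numbers, not the detector parameters or training procedure from the paper:

```python
import numpy as np

# Sketch of GMM-based position decoding: each candidate position has trained
# mixture parameters (weights, means, diagonal variances) over the N-channel
# light response; an event is assigned to the position that maximizes the
# log-likelihood of its feature vector. All values here are toy examples.

def log_likelihood(x, weights, means, variances):
    """Log-likelihood of feature vector x under a diagonal-covariance GMM."""
    x = np.asarray(x, float)
    comp = []
    for w, mu, var in zip(weights, means, variances):
        quad = -0.5 * np.sum((x - mu) ** 2 / var)
        norm = -0.5 * np.sum(np.log(2 * np.pi * var))
        comp.append(np.log(w) + norm + quad)
    return float(np.logaddexp.reduce(comp))  # stable log-sum-exp over mixtures

def decode_position(x, trained):
    """Return the position label whose mixture best explains the event x."""
    return max(trained, key=lambda pos: log_likelihood(x, *trained[pos]))

# Two hypothetical positions with 2-channel responses (one mixture each).
trained = {
    "corner": ([1.0], [np.array([10.0, 2.0])], [np.array([4.0, 1.0])]),
    "center": ([1.0], [np.array([6.0, 6.0])], [np.array([4.0, 4.0])]),
}
print(decode_position([9.5, 2.5], trained))  # corner
```

In the actual method the feature vectors are 64-dimensional (one per PMT channel) and the mixture parameters come from a training stage such as expectation-maximization.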
A factor analysis of the SSQ (Speech, Spatial, and Qualities of Hearing Scale)
2014-01-01
Objective The speech, spatial, and qualities of hearing questionnaire (SSQ) is a self-report test of auditory disability. The 49 items ask how well a listener would do in many complex listening situations illustrative of real life. The scores on the items are often combined into the three main sections or into 10 pragmatic subscales. We report here a factor analysis of the SSQ that we conducted to further investigate its statistical properties and to determine its structure. Design Statistical factor analysis of questionnaire data, using parallel analysis to determine the number of factors to retain, oblique rotation of factors, and a bootstrap method to estimate the confidence intervals. Study sample 1220 people who have attended MRC IHR over the last decade. Results We found three clear factors, essentially corresponding to the three main sections of the SSQ. They are termed “speech understanding”, “spatial perception”, and “clarity, separation, and identification”. Thirty-five of the SSQ questions were included in the three factors. There was partial evidence for a fourth factor, “effort and concentration”, representing two more questions. Conclusions These results aid in the interpretation and application of the SSQ and indicate potential methods for generating average scores. PMID:24417459
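The parallel-analysis criterion used to decide how many factors to retain can be sketched as follows: keep factors whose eigenvalues exceed the average eigenvalues obtained from random data of the same shape. The data below are simulated with one common factor; the real analysis used 1220 respondents across 49 SSQ items:

```python
import numpy as np

# Sketch of Horn's parallel analysis for choosing the number of factors:
# compare eigenvalues of the observed correlation matrix against the mean
# eigenvalues of correlation matrices built from random normal data.

def parallel_analysis(data, n_sims=100, seed=0):
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.zeros(p)
    for _ in range(n_sims):
        r = rng.standard_normal((n, p))
        rand += np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    rand /= n_sims
    return int(np.sum(obs > rand))  # number of factors to retain

# Simulated questionnaire: one strong common factor loading on 6 items.
rng = np.random.default_rng(1)
factor = rng.standard_normal((500, 1))
items = factor @ np.ones((1, 6)) + 0.5 * rng.standard_normal((500, 6))
print(parallel_analysis(items))  # 1
```

The full analysis in the study additionally applied oblique rotation and bootstrap confidence intervals, which are beyond this sketch.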
Kobayashi, Yutaka; Ohtsuki, Hisashi
2014-03-01
Learning abilities are categorized into social (learning from others) and individual learning (learning on one's own). Despite the typically higher cost of individual learning, there are mechanisms that allow stable coexistence of both learning modes in a single population. In this paper, we investigate by means of mathematical modeling how the effect of spatial structure on evolutionary outcomes of pure social and individual learning strategies depends on the mechanisms for coexistence. We model a spatially structured population based on the infinite-island framework and consider three scenarios that differ in coexistence mechanisms. Using the inclusive-fitness method, we derive the equilibrium frequency of social learners and the genetic load of social learning (defined as average fecundity reduction caused by the presence of social learning) in terms of some summary statistics, such as relatedness, for each of the three scenarios and compare the results. This comparative analysis not only reconciles previous models that made contradictory predictions as to the effect of spatial structure on the equilibrium frequency of social learners but also derives a simple mathematical rule that determines the sign of the genetic load (i.e. whether or not social learning contributes to the mean fecundity of the population). Copyright © 2013 Elsevier Inc. All rights reserved.
Yield variability prediction by remote sensing sensors with different spatial resolution
NASA Astrophysics Data System (ADS)
Kumhálová, Jitka; Matějková, Štěpánka
2017-04-01
Currently, remote sensing sensors are very popular for crop monitoring and yield prediction. This paper describes how satellite images with moderate (Landsat satellite data) and very high (QuickBird and WorldView-2 satellite data) spatial resolution, together with a GreenSeeker handheld crop sensor, can be used to estimate yield and crop growth variability. Winter barley (2007 and 2015) and winter wheat (2009 and 2011) were chosen because of cloud-free data availability in the same time period for the experimental field from Landsat satellite images and QuickBird or WorldView-2 images. Very high spatial resolution images were resampled to a coarser spatial resolution. The normalised difference vegetation index (NDVI) was derived from each satellite image data set, and it was also measured with the GreenSeeker handheld crop sensor for the year 2015 only. Results showed that each satellite image data set can be used for yield and plant variability estimation. Nevertheless, better results, in comparison with crop yield, were obtained for images acquired in later phenological phases, e.g. in 2007 (BBCH 59, average correlation coefficient 0.856) and in 2011 (BBCH 59, 0.784). The GreenSeeker handheld crop sensor was not suitable for yield estimation due to its different measuring method.
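The NDVI derived from each data set is a simple band ratio. The reflectance values below are illustrative only:

```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red): dense, healthy canopy reflects strongly
# in the near-infrared and absorbs in the red, pushing NDVI toward 1.

def ndvi(nir, red):
    nir = np.asarray(nir, float)
    red = np.asarray(red, float)
    return (nir - red) / (nir + red)

print(ndvi(0.5, 0.08))   # high NDVI: vigorous vegetation
print(ndvi(0.25, 0.2))   # low NDVI: sparse cover or bare soil
```

In a study like this, per-pixel NDVI maps from each sensor would then be correlated against measured yield maps, e.g. with a correlation coefficient per field and date.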
Semiclassical spatial correlations in chaotic wave functions.
Toscano, Fabricio; Lewenkopf, Caio H
2002-03-01
We study the spatial autocorrelation of energy eigenfunctions ψ_n(q) corresponding to classically chaotic systems in the semiclassical regime. Our analysis is based on the Weyl-Wigner formalism for the spectral average C_ε(q+, q-, E) of ψ_n(q+)ψ_n*(q-), defined as the average over eigenstates within an energy window ε centered at E. In this framework C_ε is the Fourier transform in momentum space of the spectral Wigner function W(x, E; ε). Our study reveals the chord structure that C_ε inherits from the spectral Wigner function, showing the interplay between the size of the spectral averaging window and the spatial separation scale. We discuss under which conditions it is possible to define a local, system-independent regime for C_ε. In doing so, we derive an expression that bridges the existing formulas in the literature and find expressions for C_ε(q+, q-, E) valid for any separation size |q+ - q-|.
Visuo-spatial processing and executive functions in children with specific language impairment
Marton, Klara
2007-01-01
Background Individual differences in complex working memory tasks reflect simultaneous processing, executive functions, and attention control. Children with specific language impairment (SLI) show a deficit in verbal working memory tasks that involve simultaneous processing of information. Aims The purpose of the study was to examine executive functions and visuo-spatial processing and working memory in children with SLI and in their typically developing peers (TLD). Experiment 1 included 40 children with SLI (age=5;3–6;10) and 40 children with TLD (age=5;3–6;7); Experiment 2 included 25 children with SLI (age=8;2–11;2) and 25 children with TLD (age=8;3–11;0). It was examined whether the difficulties that children with SLI show in verbal working memory tasks are also present in visuo-spatial working memory. Methods & Procedures In Experiment 1, children's performance was measured with three visuo-spatial processing tasks: space visualization, position in space, and design copying. The stimuli in Experiment 2 were two widely used neuropsychological tests: the Wisconsin Card Sorting Test — 64 (WCST-64) and the Tower of London test (TOL). Outcomes & Results In Experiment 1, children with SLI performed more poorly than their age-matched peers in all visuo-spatial working memory tasks. There was a subgroup within the SLI group that included children whose parents and teachers reported a weakness in the child's attention control. These children showed particular difficulties in the tasks of Experiment 1. The results support Engle's attention control theory: individuals need good attention control to perform well in visuo-spatial working memory tasks. In Experiment 2, the children with SLI produced more perseverative errors and more rule violations than their peers. Conclusions Executive functions have a great impact on SLI children's working memory performance, regardless of domain. 
Tasks that require an increased amount of attention control and executive functions are more difficult for the children with SLI than for their peers. Most children with SLI scored either below average or in the low average range on the neuropsychological tests that measured executive functions. PMID:17852522
[Temporal-spatial analysis of bacillary dysentery in the Three Gorges Area of China, 2005-2016].
Zhang, P; Zhang, J; Chang, Z R; Li, Z J
2018-01-10
Objective: To analyze the spatial and temporal distributions of bacillary dysentery in Chongqing, Yichang and Enshi (the Three Gorges Area) from 2005 to 2016, and provide evidence for the disease prevention and control. Methods: The incidence data of bacillary dysentery in the Three Gorges Area during this period were collected from National Notifiable Infectious Disease Reporting System. The spatial-temporal scan statistic was conducted with software SaTScan 9.4 and bacillary dysentery clusters were visualized with software ArcGIS 10.3. Results: A total of 126 196 cases were reported in the Three Gorges Area during 2005-2016, with an average incidence rate of 29.67/100 000. The overall incidence was in a downward trend, with an average annual decline rate of 4.74%. Cases occurred all the year round but with an obvious seasonal increase between May and October. Among the reported cases, 44.71% (56 421/126 196) were children under 5-year-old, the cases in children outside child care settings accounted for 41.93% (52 918/126 196) of the total. The incidence rates in districts of Yuzhong, Dadukou, Jiangbei, Shapingba, Jiulongpo, Nanan, Yubei, Chengkou of Chongqing and districts of Xiling and Wujiagang of Yichang city of Hubei province were high, ranging from 60.20/100 000 to 114.81/100 000. Spatial-temporal scan statistic for the spatial and temporal distributions of bacillary dysentery during this period revealed that the temporal distribution was during May-October, and there were 12 class Ⅰ clusters, 35 class Ⅱ clusters, and 9 clusters without statistical significance in counties with high incidence. All the class Ⅰ clusters were in urban area of Chongqing (Yuzhong, Dadukou, Jiangbei, Shapingba, Jiulongpo, Nanan, Beibei, Yubei, Banan) and surrounding counties, and the class Ⅱ clusters transformed from concentrated distribution to scattered distribution. 
Conclusions: Temporal and spatial clusters of bacillary dysentery incidence existed in the Three Gorges Area during 2005-2016. It is necessary to strengthen bacillary dysentery prevention and control in the urban areas of Chongqing and Yichang.
Spatial and Temporal Variation of Land Surface Temperature in Fujian Province from 2001 TO 2015
NASA Astrophysics Data System (ADS)
Li, Y.; Wang, X.; Ding, Z.
2018-04-01
Land surface temperature (LST) is an essential parameter in the physics of land surface processes. The spatiotemporal variations of LST over Fujian province were studied using AQUA Moderate Resolution Imaging Spectroradiometer LST data. Considering the data gaps in remotely sensed LST products caused by cloud contamination, the Savitzky-Golay (S-G) filter method was used to eliminate the influence of cloud cover and to describe the periodical signals of LST. Observed air temperature data from 27 weather stations were employed to evaluate the fitting performance of the S-G filter method. Results indicate that the S-G filter can effectively fit the LST time series and remove the influence of cloud cover. Based on the S-G-derived result, spatial and temporal variations of LST in Fujian province from 2001 to 2015 were analysed through slope analysis. The results show that: 1) the spatial distribution of annual mean LST generally exhibits consistency with altitude in the study area, and the average LST was much higher in the east than in the west; 2) the annual mean LST declined slightly over the 15 years in Fujian; 3) slope analysis reflects the spatial distribution characteristics of the LST trend in Fujian. Areas of increasing LST are mainly concentrated in the urban areas of Fujian, especially in the eastern urban areas, while areas of apparent decline are mainly distributed in the Zhangzhou area and the eastern mountain area.
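The S-G smoothing used to suppress cloud-contaminated dropouts can be sketched as a sliding local polynomial fit. This is a minimal pure-NumPy version with illustrative window/order values and a synthetic LST series; an optimized equivalent exists as scipy.signal.savgol_filter, and operational gap-filling typically iterates the fit:

```python
import numpy as np

# Minimal Savitzky-Golay smoothing: fit a low-order polynomial in a sliding
# window and evaluate the fit at the window centre. Edges are pad-extended.

def savgol(y, window=7, order=2):
    y = np.asarray(y, float)
    half = window // 2
    pad = np.pad(y, half, mode="edge")
    x = np.arange(window) - half
    out = np.empty_like(y)
    for i in range(len(y)):
        coeffs = np.polyfit(x, pad[i:i + window], order)
        out[i] = np.polyval(coeffs, 0)  # value of the local fit at the centre
    return out

# A smooth seasonal LST cycle with one cloud-induced dropout.
t = np.arange(46)                       # e.g. 8-day composites over one year
lst = 20 + 10 * np.sin(2 * np.pi * t / 46)
noisy = lst.copy()
noisy[20] -= 15                         # cloud contamination
smoothed = savgol(noisy)
print(abs(smoothed[20] - lst[20]) < abs(noisy[20] - lst[20]))  # True
```

The fit pulls the contaminated sample back toward the underlying seasonal curve while leaving clean samples nearly unchanged.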
Soil organic carbon stocks in Alaska estimated with spatial and pedon data
Bliss, Norman B.; Maursetter, J.
2010-01-01
Temperatures in high-latitude ecosystems are increasing faster than the average rate of global warming, which may lead to a positive feedback for climate change by increasing the respiration rates of soil organic C. If a positive feedback is confirmed, soil C will represent a source of greenhouse gases that is not currently considered in international protocols to regulate C emissions. We present new estimates of the stocks of soil organic C in Alaska, calculated by linking spatial and field data developed by the USDA NRCS. The spatial data are from the State Soil Geographic database (STATSGO), and the field and laboratory data are from the National Soil Characterization Database, also known as the pedon database. The new estimates range from 32 to 53 Pg of soil organic C for Alaska, formed by linking the spatial and field data using the attributes of Soil Taxonomy. For modelers, we recommend an estimation method based on taxonomic subgroups with interpolation for missing areas, which yields an estimate of 48 Pg. This is a substantial increase over a magnitude of 13 Pg estimated from only the STATSGO data as originally distributed in 1994, but the increase reflects different estimation methods and is not a measure of the change in C on the landscape. Pedon samples were collected between 1952 and 2002, so the results do not represent a single point in time. The linked databases provide an improved basis for modeling the impacts of climate change on net ecosystem exchange.
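The core of linking spatial and pedon data is an area-weighted aggregation: each map unit contributes its area times a carbon density taken from pedons matched by taxonomy. The unit areas and densities below are invented for illustration, not the STATSGO/pedon values from the study:

```python
# Sketch of aggregating soil organic carbon stocks from linked spatial and
# pedon data: stock = sum over map units of (area x carbon density).

def total_stock_pg(map_units):
    """Sum organic-carbon stocks (Pg) over map units.

    map_units: iterable of (area_km2, carbon_density_kg_per_m2) pairs, where
    the density comes from pedons matched by taxonomic subgroup.
    """
    total_kg = sum(area_km2 * 1e6 * density for area_km2, density in map_units)
    return total_kg / 1e12  # kg -> Pg (1 Pg = 10^12 kg)

# Hypothetical units: organic-rich wetland soils vs. upland mineral soils.
units = [(100_000, 150.0),   # wet organic soils, high carbon density
         (500_000, 20.0)]    # upland mineral soils, lower density
print(total_stock_pg(units))  # 25.0
```

The study's range of 32-53 Pg reflects different choices of how densities are matched to map units and how missing areas are interpolated, which this sketch omits.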
Gangodagamage, Chandana; Rowland, Joel C; Hubbard, Susan S; Brumby, Steven P; Liljedahl, Anna K; Wainwright, Haruko; Wilson, Cathy J; Altmann, Garrett L; Dafflon, Baptiste; Peterson, John; Ulrich, Craig; Tweedie, Craig E; Wullschleger, Stan D
2014-08-01
Landscape attributes that vary with microtopography, such as active layer thickness (ALT), are labor intensive and difficult to document effectively through in situ methods at kilometer spatial extents, thus rendering remotely sensed methods desirable. Spatially explicit estimates of ALT can provide critically needed data for parameterization, initialization, and evaluation of Arctic terrestrial models. In this work, we demonstrate a new approach using high-resolution remotely sensed data for estimating centimeter-scale ALT in a 5 km2 area of ice-wedge polygon terrain in Barrow, Alaska. We use a simple regression-based, machine learning data-fusion algorithm that uses topographic and spectral metrics derived from multisensor data (LiDAR and WorldView-2) to estimate ALT (2 m spatial resolution) across the study area. Comparison of the ALT estimates with ground-based measurements indicates the accuracy (r2 = 0.76, RMSE ±4.4 cm) of the approach. While it is generally accepted that broad climatic variability associated with increasing air temperature will govern the regional averages of ALT, consistent with prior studies, our findings using high-resolution LiDAR and WorldView-2 data show that smaller-scale variability in ALT is controlled by local eco-hydro-geomorphic factors. This work demonstrates a path forward for mapping ALT at high spatial resolution and across sufficiently large regions for improved understanding and predictions of coupled dynamics among permafrost, hydrology, and land-surface processes from readily available remote sensing data.
Investigation of Spatial Control Strategies for AHWR: A Comparative Study
NASA Astrophysics Data System (ADS)
Munje, R. K.; Patre, B. M.; Londhe, P. S.; Tiwari, A. P.; Shimjith, S. R.
2016-04-01
Large nuclear reactors such as the Advanced Heavy Water Reactor (AHWR) are susceptible to xenon-induced spatial oscillations, in which the core average power remains constant while the power distribution becomes nonuniform and may oscillate unstably. Such oscillations influence the operation and control philosophy and could also raise safety issues. Therefore, large nuclear reactors are equipped with spatial controllers which maintain the core power distribution close to the desired distribution during all facets of operation and following disturbances. In this paper, the case of AHWR has been considered, for which a number of different types of spatial controllers have been designed during the last decade. Some of these designs are based on output feedback while the others are based on state feedback. Also, both conventional and modern control concepts, such as linear quadratic regulator theory, sliding mode control, multirate output feedback control and fuzzy control, have been investigated. The designs of these different controllers for the AHWR have been carried out using a 90th order model, which is highly stiff. Hence, direct application of design methods suffers from numerical ill-conditioning. Singular perturbation and time-scale methods have been applied, whereby the design problem for the original higher order system is decoupled into two or three subproblems, each of which is solved separately. Nonlinear simulations have been carried out to obtain the transient responses of the system with different types of controllers, and their performances have been compared.
Ihlefeld, Antje; Litovsky, Ruth Y
2012-01-01
Spatial release from masking refers to a benefit for speech understanding. It occurs when a target talker and a masker talker are spatially separated. In those cases, speech intelligibility for target speech is typically higher than when both talkers are at the same location. In cochlear implant listeners, spatial release from masking is much reduced or absent compared with normal hearing listeners. Perhaps this reduced spatial release occurs because cochlear implant listeners cannot effectively attend to spatial cues. Three experiments examined factors that may interfere with deploying spatial attention to a target talker masked by another talker. To simulate cochlear implant listening, stimuli were vocoded with two unique features. First, we used 50-Hz low-pass filtered speech envelopes and noise carriers, strongly reducing the possibility of temporal pitch cues; second, co-modulation was imposed on target and masker utterances to enhance perceptual fusion between the two sources. Stimuli were presented over headphones. Experiments 1 and 2 presented high-fidelity spatial cues with unprocessed and vocoded speech. Experiment 3 maintained faithful long-term average interaural level differences but presented scrambled interaural time differences with vocoded speech. Results show a robust spatial release from masking in Experiments 1 and 2, and a greatly reduced spatial release in Experiment 3. Faithful long-term average interaural level differences were insufficient for producing spatial release from masking. This suggests that appropriate interaural time differences are necessary for restoring spatial release from masking, at least for a situation where there are few viable alternative segregation cues.
Measuring Historical Coastal Change using GIS and the Change Polygon Approach
Smith, M.J.; Cromley, R.G.
2012-01-01
This study compares two automated approaches, the transect-from-baseline technique and a new change polygon method, for quantifying historical coastal change over time. The study shows that the transect-from-baseline technique is complicated by the choice of a proper baseline as well as by transects that intersect with each other rather than with the nearest shoreline. The change polygon method captures the full spatial difference between the positions of the two shorelines, and average coastal change is defined as the net area divided by the shoreline length. Although the change polygon method is sensitive to the definition and measurement of shoreline length, the results are more invariant to parameter changes than the transect-from-baseline method, suggesting that the change polygon technique may be a more robust coastal change method. © 2012 Blackwell Publishing Ltd.
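The change polygon metric can be sketched directly from its definition: close the polygon between the two shoreline positions, take its area, and divide by shoreline length. The coordinates below are toy values (in metres), not survey data, and the choice of which shoreline's length to use in the denominator is exactly the sensitivity the abstract notes:

```python
import math

# Sketch of the change polygon method: average coastal change is the net
# area enclosed between two shoreline positions divided by shoreline length.

def polygon_area(points):
    """Shoelace formula; points is a ring of (x, y) vertices."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def polyline_length(points):
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def average_change(old_shore, new_shore):
    """Net change-polygon area divided by the (old) shoreline length."""
    ring = old_shore + new_shore[::-1]   # walk old shoreline, return along new
    return polygon_area(ring) / polyline_length(old_shore)

old = [(0.0, 0.0), (100.0, 0.0)]         # straight historical shoreline
new = [(0.0, 10.0), (100.0, 10.0)]       # uniform 10 m retreat
print(average_change(old, new))          # 10.0
```

Real shorelines are long multi-vertex polylines, and the change polygon may need splitting where shorelines cross, but the area-over-length ratio is the same.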
Johnson, Adam G.; Engott, John A.; Bassiouni, Maoya; Rotzoll, Kolja
2014-12-14
Demand for freshwater on the Island of Maui is expected to grow. To evaluate the availability of fresh groundwater, estimates of groundwater recharge are needed. A water-budget model with a daily computation interval was developed and used to estimate the spatial distribution of recharge on Maui for average climate conditions (1978–2007 rainfall and 2010 land cover) and for drought conditions (1998–2002 rainfall and 2010 land cover). For average climate conditions, mean annual recharge for Maui is about 1,309 million gallons per day, or about 44 percent of precipitation (rainfall and fog interception). Recharge for average climate conditions is about 39 percent of total water inflow consisting of precipitation, irrigation, septic leachate, and seepage from reservoirs and cesspools. Most recharge occurs on the wet, windward slopes of Haleakalā and on the wet, uplands of West Maui Mountain. Dry, coastal areas generally have low recharge. In the dry isthmus, however, irrigated fields have greater recharge than nearby unirrigated areas. For drought conditions, mean annual recharge for Maui is about 1,010 million gallons per day, which is 23 percent less than recharge for average climate conditions. For individual aquifer-system areas used for groundwater management, recharge for drought conditions is about 8 to 51 percent less than recharge for average climate conditions. The spatial distribution of rainfall is the primary factor determining spatially distributed recharge estimates for most areas on Maui. In wet areas, recharge estimates are also sensitive to water-budget parameters that are related to runoff, fog interception, and forest-canopy evaporation. In dry areas, recharge estimates are most sensitive to irrigated crop areas and parameters related to evapotranspiration.
Uncertainty in determining extreme precipitation thresholds
NASA Astrophysics Data System (ADS)
Liu, Bingjun; Chen, Junfan; Chen, Xiaohong; Lian, Yanqing; Wu, Lili
2013-10-01
Extreme precipitation events are rare and occur mostly on a relatively small and local scale, which makes it difficult to set thresholds for extreme precipitation in a large basin. Based on long-term daily precipitation data from 62 observation stations in the Pearl River Basin, this study has assessed the applicability of the non-parametric, parametric, and detrended fluctuation analysis (DFA) methods in determining extreme precipitation thresholds (EPTs) and the certainty of the EPTs from each method. Analyses from this study show the non-parametric absolute critical value method is easy to use but unable to reflect differences in spatial rainfall distribution. The non-parametric percentile method can account for the spatial distribution of precipitation, but the threshold value is sensitive to the size of the rainfall data series and to the selection of a percentile, making it difficult to determine reasonable threshold values for a large basin. The parametric method can provide the most apt description of extreme precipitation by fitting extreme precipitation distributions with probability distribution functions; however, the selection of probability distribution functions, the goodness-of-fit tests, and the size of the rainfall data series can greatly affect the fitting accuracy. In contrast to the non-parametric and parametric methods, which are unable to provide EPTs with certainty, the DFA method, although involving complicated computational processes, has proven to be the most appropriate method, able to provide a unique set of EPTs for a large basin with uneven spatio-temporal precipitation distribution. The consistency of the spatial distribution of DFA-based thresholds with the annual average precipitation, the coefficient of variation (CV), and the coefficient of skewness (CS) of daily precipitation further proves that EPTs determined by the DFA method are more reasonable and applicable for the Pearl River Basin.
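The non-parametric percentile method the study evaluates can be sketched as follows: each station's EPT is a high percentile of its wet-day series, so thresholds adapt to the local rainfall regime. The synthetic gamma-distributed series and the 95th-percentile/1 mm wet-day choices below are illustrative assumptions, not the study's settings:

```python
import numpy as np

# Sketch of the non-parametric percentile method for extreme precipitation
# thresholds (EPTs): a high percentile of the station's wet-day series.

def percentile_threshold(daily_precip, pct=95.0, wet_day=1.0):
    """Station EPT as a percentile of wet-day (>= wet_day mm) precipitation."""
    p = np.asarray(daily_precip, float)
    wet = p[p >= wet_day]
    return float(np.percentile(wet, pct))

rng = np.random.default_rng(0)
# Two stations with different rainfall regimes (gamma-distributed wet days).
humid = rng.gamma(shape=2.0, scale=15.0, size=3000)
dry = rng.gamma(shape=2.0, scale=5.0, size=3000)
print(percentile_threshold(humid) > percentile_threshold(dry))  # True
```

This also makes the abstract's criticism concrete: the threshold shifts with the chosen percentile and with the length of the record, which is what motivates the DFA alternative.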
Precipitation interpolation in mountainous areas
NASA Astrophysics Data System (ADS)
Kolberg, Sjur
2015-04-01
Different precipitation interpolation techniques as well as external drift covariates are tested and compared in a 26000 km2 mountainous area in Norway, using daily data from 60 stations. The main method of assessment is cross-validation. Annual precipitation in the area varies from below 500 mm to more than 2000 mm. The data were corrected for wind-driven undercatch according to operational standards. While temporal evaluation produces seemingly acceptable at-station correlation values (on average around 0.6), the average daily spatial correlation is less than 0.1. When bias is also penalised, Nash-Sutcliffe R2 values are negative for spatial correspondence, and around 0.15 for temporal. Despite largely violated assumptions, plain kriging produces better results than simple inverse distance weighting. More surprisingly, the presumably 'worst-case' benchmark of no interpolation at all, simply averaging all 60 stations for each day, actually outperformed the standard interpolation techniques. For logistic reasons, high altitudes are under-represented in the gauge network. The possible effect of this was investigated by a) fitting a precipitation lapse rate as an external drift, and b) applying a linear model of orographic enhancement (Smith and Barstad, 2004). These techniques improved the results only marginally. The gauge density in the region is one gauge per 433 km2, higher than the overall density of the Norwegian national network. Admittedly the cross-validation technique reduces the gauge density; still, the results suggest that we are far from able to provide hydrological models with adequate data for the main driving force.
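The cross-validation assessment can be sketched with inverse distance weighting and leave-one-out prediction. The gauge coordinates and daily values below are made up; the study used 60 daily gauges and also tested kriging variants:

```python
import numpy as np

# Sketch of inverse distance weighting (IDW) with leave-one-out
# cross-validation: hold each station out and predict it from the rest.

def idw(xy_known, z_known, xy_target, power=2.0):
    d = np.linalg.norm(xy_known - xy_target, axis=1)
    if np.any(d == 0):
        return float(z_known[np.argmin(d)])  # exact hit: return station value
    w = 1.0 / d ** power
    return float(np.sum(w * z_known) / np.sum(w))

def loo_errors(xy, z, power=2.0):
    """Leave-one-out prediction error at each station."""
    errs = []
    for i in range(len(z)):
        mask = np.arange(len(z)) != i
        errs.append(idw(xy[mask], z[mask], xy[i], power) - z[i])
    return np.array(errs)

xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 5.0]])
z = np.array([2.0, 4.0, 4.0, 6.0, 4.0])  # one day's precipitation, mm
errs = loo_errors(xy, z)
print(np.round(np.abs(errs).mean(), 2))  # mean absolute cross-validation error
```

Summarizing such errors per day (correlation, bias, Nash-Sutcliffe R2) gives exactly the spatial and temporal skill scores the abstract reports.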
NASA Astrophysics Data System (ADS)
Parry, Louise; Neely, Ryan, III; Bennett, Lindsay; Collier, Chris; Dufton, David
2017-04-01
The Scottish Environment Protection Agency (SEPA) has a statutory responsibility to provide flood warning across Scotland. It achieves this through an operational partnership with the UK Met Office wherein meteorological forecasts are applied to a national distributed hydrological model, Grid-to-Grid (G2G), and catchment-specific lumped PDM models. Both of these model types rely on observed precipitation input for model development and calibration, and operationally for historical runs to generate initial conditions. Scotland has an average annual precipitation of 1430 mm (1971-2000), but the spatial variability in totals is high, predominantly in relation to the topography and prevailing winds, which poses different challenges to both radar and point measurement methods of observation. In addition, the high elevations mean that in winter a significant proportion of precipitation falls as snow. For the operational forecasting models, observed rainfall data is provided in Near Real Time (NRT) from SEPA's network of approximately 260 telemetered TBR gauges and 4 UK Met Office C-band radars. Both data sources have their strengths and weaknesses, particularly in relation to the orography and spatial representativeness, and estimates of rainfall from the two methods can vary greatly. Northern Scotland, particularly near Inverness, is sparsely covered by the radar network. Rainfall totals and distribution in this area are determined by the North Western Highlands and Cairngorms mountain ranges, which also have a negative impact on radar observations. In recognition of this issue, the NCAS mobile X-band weather radar (MXWR) was deployed in this area between February and August 2016. This study presents a comparison of rainfall estimates for the Inverness and Moray Firth region generated from the operational radar network, the TBR network, and the MXWR.
Quantitative precipitation estimates (QPEs) from both sources of radar data were compared to point estimates of precipitation as well as catchment average estimates generated using different spatial averaging methods, including the operationally applied Thiessen polygons. In addition, the QPEs were applied to operational PDM models to compare the effect on the simulated runoff. The results highlight the hydrological significance of uncertainty in observed rainfall. Recommendations for future investigations are to improve radar QPEs through better correction for orography and for different precipitation types, and to analyse the benefits of the UK Met Office radar-raingauge merged product. In addition, the cost-benefit of deploying more radars in Scotland needs to be quantified in light of the problems posed by the orography.
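Thiessen-polygon catchment averaging, the operational spatial averaging method mentioned above, can be sketched by assigning every catchment cell the value of its nearest gauge and averaging over cells. The gauge positions and values below are toy numbers:

```python
import numpy as np

# Sketch of Thiessen-polygon catchment averaging: each grid cell inside the
# catchment takes the value of its nearest gauge; the catchment average is
# the mean over cells (equivalently, gauges weighted by their polygon areas).

def thiessen_average(gauge_xy, gauge_p, cell_xy):
    gauge_xy = np.asarray(gauge_xy, float)
    cell_xy = np.asarray(cell_xy, float)
    # distance from every cell to every gauge; pick the nearest gauge per cell
    d = np.linalg.norm(cell_xy[:, None, :] - gauge_xy[None, :, :], axis=2)
    nearest = np.argmin(d, axis=1)
    return float(np.mean(np.asarray(gauge_p, float)[nearest]))

# Toy catchment: 11 cells along a transect, two gauges with different totals.
cells = [(x, 0.0) for x in np.linspace(0.0, 10.0, 11)]
gauges = [(2.0, 0.0), (9.0, 0.0)]
precip = [10.0, 30.0]                    # daily totals, mm
print(thiessen_average(gauges, precip, cells))
```

Comparing such gauge-based catchment averages against radar QPE averages over the same catchment is one way to expose the rainfall-input uncertainty the study highlights.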
Trends in 1970-2010 southern California surface maximum temperatures: extremes and heat waves
NASA Astrophysics Data System (ADS)
Ghebreegziabher, Amanuel T.
Daily maximum temperatures from 1970-2010 were obtained from the National Climatic Data Center (NCDC) for 28 South Coast Air Basin (SoCAB) Cooperative Network (COOP) sites. Analyses were carried out on the entire data set, as well as on the 1970-1974 and 2006-2010 sub-periods, including construction of spatial distributions and time-series trends of both summer-average and annual-maximum values and of the frequency of two and four consecutive "daytime" heat wave events. Spatial patterns of average and extreme values showed three areas consistent with climatological SoCAB flow patterns: cold coastal, warm inland low-elevation, and cool further-inland mountain top. Difference (2006-2010 minus 1970-1974) distributions of both average and extreme-value trends were consistent with a previous shorter-period (1970-2005) study, as they showed the expected inland regional warming and a "reverse-reaction" cooling in low-elevation coastal and inland areas open to increasing sea breeze flows. Annual-extreme trends generally showed cooling at sites below 600 m and warming at higher elevations. As the warming trends of the extremes were larger than those of the averages, regional warming thus impacts extremes more than averages. Spatial distributions of hot-day frequencies showed the expected maximum at inland low-elevation sites. Regional warming again induced increases at elevated areas, but low-elevation areas showed reverse-reaction decreases.
Davis, Christopher C.; Beard, Brian B.; Tillman, Ahlia; Rzasa, John; Merideth, Eric; Balzano, Quirino
2018-01-01
This paper reports the results of an international intercomparison of the specific absorption rates (SARs) measured in a flat-bottomed container (flat phantom), filled with human head tissue simulant fluid, placed in the near-field of custom-built dipole antennas operating at 900 and 1800 MHz, respectively. These tests of the reliability of experimental SAR measurements have been conducted as part of a verification of the ways in which wireless phones are tested and certified for compliance with safety standards. The measurements are made using small electric-field probes scanned in the simulant fluid in the phantom to record the spatial SAR distribution. The intercomparison involved a standard flat phantom, antennas, power meters, and RF components being circulated among 15 different governmental and industrial laboratories. At the conclusion of each laboratory’s measurements, the following results were communicated to the coordinators: spatial SAR scans at 900 and 1800 MHz, and 1 and 10 g maximum spatial SAR averages for cubic volumes at 900 and 1800 MHz. The overall results, given as mean ± standard deviation, are the following: at 900 MHz, 1 g average 7.85 ± 0.76; 10 g average 5.16 ± 0.45; at 1800 MHz, 1 g average 18.44 ± 1.65; 10 g average 10.14 ± 0.85, all measured in units of watt per kilogram per watt of radiated power. PMID:29520117
Automatic evaluation of interferograms
NASA Technical Reports Server (NTRS)
Becker, F.
1982-01-01
A system for the evaluation of interference patterns was developed. For digitizing and processing of the interferograms from classical and holographic interferometers, a picture analysis system based upon a computer with a television digitizer was installed. Depending on the quality of the interferograms, four different picture enhancement operations may be used: signal averaging, spatial smoothing, subtraction of the overlaid intensity function, and the removal of distortion patterns using a spatial filtering technique in the frequency spectrum of the interferograms. The extraction of fringe loci from the digitized interferograms is performed by a floating-threshold method. The fringes are numbered using a special scheme after the removal of any fringe disconnections, which appeared where there was insufficient contrast in the holograms. The reconstruction of the object function from the fringe field uses a least squares approximation with spline fit. Applications are given.
Characterizing virus-induced gene silencing at the cellular level with in situ multimodal imaging
Burkhow, Sadie J.; Stephens, Nicole M.; Mei, Yu; ...
2018-05-25
Reverse genetic strategies, such as virus-induced gene silencing, are powerful techniques to study gene function. Currently, there are few tools to study the spatial dependence of the consequences of gene silencing at the cellular level. Here, we report the use of multimodal Raman and mass spectrometry imaging to study the cellular-level biochemical changes that occur from silencing the phytoene desaturase (pds) gene using a Foxtail mosaic virus (FoMV) vector in maize leaves. The multimodal imaging method allows the localized carotenoid distribution to be measured and reveals differences lost in the spatial average when analyzing a carotenoid extraction of the whole leaf. The nature of the Raman and mass spectrometry signals is complementary: silencing pds reduces the downstream carotenoid Raman signal and increases the phytoene mass spectrometry signal.
The role of spatial heterogeneity of the environment in soil fauna recovery after fires
NASA Astrophysics Data System (ADS)
Gongalsky, K. B.; Zaitsev, A. S.
2016-12-01
Forest fires are almost always heterogeneous, leaving less-disturbed sites that are potentially suitable as habitats for soil-dwelling creatures. The recovery of large soil animal communities after fires therefore depends on the spatial structure of the burned habitats. The role of locally less-disturbed sites in the survival of soil macrofauna communities, alongside the traditionally considered immigration from surrounding undisturbed habitats, is shown using burnt areas located in three geographically distant regions of European Russia. Such unburned soil cover sites (perfugia) occupy 5-10% of the total burned habitats. Initially, perfugia are characterized by much higher (200-300% of the average across a burned area) diversity and abundance of soil fauna. A geostatistical method made it possible to estimate the perfugia size for soil macrofauna at 3-8 m.
Estimating 1970-99 average annual groundwater recharge in Wisconsin using streamflow data
Gebert, Warren A.; Walker, John F.; Kennedy, James L.
2011-01-01
Average annual recharge in Wisconsin for the period 1970-99 was estimated using streamflow data from U.S. Geological Survey continuous-record streamflow-gaging stations and partial-record sites. Partial-record sites have discharge measurements collected during low-flow conditions. The average annual base flow of a stream divided by the drainage area is a good approximation of the recharge rate; therefore, once average annual base flow is determined, recharge can be calculated. Estimates of recharge for nearly 72 percent of the surface area of the State are provided. The results illustrate substantial spatial variability of recharge across the State, ranging from less than 1 inch to more than 12 inches per year. The average basin size for partial-record sites (50 square miles) was less than the average basin size for the gaging stations (305 square miles). Including results for smaller basins reveals a spatial variability that otherwise would be smoothed out using only estimates for larger basins. An error analysis indicates that the techniques used provide base flow estimates with standard errors ranging from 5.4 to 14 percent.
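The base-flow-to-recharge conversion described above is a unit conversion: base flow per unit drainage area expressed as a depth of water per year. A minimal sketch with assumed values (not the study's data):

```python
# Recharge (in/yr) ≈ average annual base flow divided by drainage area,
# expressed as a depth of water. Illustrative inputs, not the study's data.
SECONDS_PER_YEAR = 31_536_000        # 365-day year
SQFT_PER_SQMI = 27_878_400           # 5280 ft ** 2

def recharge_inches_per_year(baseflow_cfs, drainage_area_sqmi):
    """Convert base flow (ft^3/s) over a drainage area (mi^2) to in/yr."""
    depth_ft_per_yr = (baseflow_cfs * SECONDS_PER_YEAR
                       / (drainage_area_sqmi * SQFT_PER_SQMI))
    return depth_ft_per_yr * 12.0    # feet to inches

print(round(recharge_inches_per_year(10.0, 50.0), 2))  # ≈ 2.71 in/yr
```

The result falls inside the 1-12 in/yr range the abstract reports, which is a useful sanity check on the units.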
Regional precipitation trend analysis at the Langat River Basin, Selangor, Malaysia
NASA Astrophysics Data System (ADS)
Palizdan, Narges; Falamarzi, Yashar; Huang, Yuk Feng; Lee, Teang Shui; Ghazali, Abdul Halim
2014-08-01
Various hydrological and meteorological variables such as rainfall and temperature have been affected by global climate change. Any change in the pattern of precipitation can have a significant impact on the availability of water resources, agriculture, and the ecosystem. Therefore, knowledge of rainfall trends is an important aspect of water resources management. In this study, the regional annual and seasonal precipitation trends at the Langat River Basin, Malaysia, for the period 1982-2011 were examined at the 95% level of significance using the regional average Mann-Kendall (RAMK) test and the regional average Mann-Kendall coupled with bootstrap (RAMK-bootstrap) method. To identify homogeneous regions for the annual and seasonal scales, at-site mean total annual and seasonal precipitation were first spatialized into 5 km × 5 km grids using the inverse distance weighting (IDW) algorithm. The optimum number of homogeneous regions (clusters) was then computed using the silhouette coefficient approach, and the regions were formed using the K-means clustering method. From the annual scale perspective, all three regions showed positive trends. However, the application of the two methods at this scale showed a significant trend only in region AC1. Region AC2 experienced a significant positive trend using only the RAMK test. On a seasonal scale, all regions showed insignificant trends, except regions I1C1 and I1C2 in the Inter-Monsoon 1 (INT1) season, which experienced significant upward trends. In addition, it was shown that the significance of trends was affected by the existence of serial and spatial correlations.
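The Mann-Kendall test at the core of the RAMK methods can be sketched in a few lines: the S statistic counts concordant minus discordant pairs in time order, and a normal approximation gives the significance. A simplified version without tie correction, on an illustrative series rather than the basin's data:

```python
import math

def mann_kendall(x):
    """Classic Mann-Kendall trend test (no tie correction):
    returns the S statistic, its variance, and the Z score."""
    n = len(x)
    # S = number of later-larger pairs minus later-smaller pairs
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    # continuity-corrected normal approximation
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, var_s, z

s, var_s, z = mann_kendall([12.1, 13.0, 12.8, 14.2, 15.0, 15.6])
print(s, round(z, 2))  # positive S -> upward trend; |Z| > 1.96 is significant at 95%
```

The bootstrap variant the study uses resamples the series to build an empirical distribution for S instead of relying on the normal approximation, which is what makes it robust to serial correlation.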
Zhang, Zhao; Song, Xiao; Chen, Yi; Wang, Pin; Wei, Xing; Tao, Fulu
2015-05-01
Although many studies have indicated the consistent impact of warming on natural ecosystems (e.g., earlier flowering and a prolonged growing period), the impacts on agricultural systems are still poorly understood. In this study, spatiotemporal variability of the heading-flowering stages of single rice was detected and compared at three different scales using field-based methods (FBMs) and satellite-based methods (SBMs). The heading-flowering stages from 2000 to 2009 with a spatial resolution of 1 km were extracted from the SPOT/VGT NDVI time series data using the Savitzky-Golay filtering method in the areas in China dominated by single rice: Northeast China (NE), the middle-lower Yangtze River Valley (YZ), the Sichuan Basin (SC), and the Yunnan-Guizhou Plateau (YG). We found that approximately 52.6 and 76.3 % of the heading-flowering stages estimated by the SBM were within ±5 and ±10 days of estimation error (a root mean square error (RMSE) of 8.76 days) when compared with those determined by the FBM. Both the FBM and SBM data indicated a similar spatial pattern, with the earliest annual average heading-flowering stages in SC, followed by YG, NE, and YZ, which was inconsistent with the patterns reported in natural ecosystems. Moreover, diverse temporal trends were detected in the four regions due to different climate conditions and agronomic factors such as cultivar shifts. Nevertheless, there were no significant differences (p > 0.05) between the FBM and the SBM in both the regional average value of the phenological stages and the trends, implying the consistency and rationality of the SBM at the three scales.
NASA Astrophysics Data System (ADS)
Magic, Z.; Collet, R.; Hayek, W.; Asplund, M.
2013-12-01
Aims: We study the implications of averaging methods with different reference depth scales for 3D hydrodynamical model atmospheres computed with the Stagger-code. The temporally and spatially averaged (hereafter denoted as ⟨3D⟩) models are explored in the light of local thermodynamic equilibrium (LTE) spectral line formation by comparing spectrum calculations using full 3D atmosphere structures with those from ⟨3D⟩ averages. Methods: We explored methods for computing mean ⟨3D⟩ stratifications from the Stagger-grid time-dependent 3D radiative hydrodynamical atmosphere models by considering four different reference depth scales (geometrical depth, column-mass density, and two optical depth scales). Furthermore, we investigated the influence of alternative averages (logarithmic, enforced hydrostatic equilibrium, flux-weighted temperatures). For the line formation we computed curves of growth for Fe i and Fe ii lines in LTE. Results: The resulting ⟨3D⟩ stratifications for the four reference depth scales can be very different. We typically find that in the upper atmosphere and in the superadiabatic region just below the optical surface, where the temperature and density fluctuations are highest, the differences become considerable and increase for higher Teff, lower log g, and lower [Fe / H]. The differential comparison of spectral line formation shows distinctive differences depending on which ⟨3D⟩ model is applied. The averages over layers of constant column-mass density yield the best mean ⟨3D⟩ representation of the full 3D models for LTE line formation, while the averages on layers at constant geometrical height are the least appropriate. Unexpectedly, the usually preferred averages over layers of constant optical depth are prone to increasing interference by reversed granulation towards higher effective temperature, in particular at low metallicity. 
Appendix A is available in electronic form at http://www.aanda.org. Mean ⟨3D⟩ models are available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/560/A8, as well as at http://www.stagger-stars.net
Biological object recognition in μ-radiography images
NASA Astrophysics Data System (ADS)
Prochazka, A.; Dammer, J.; Weyda, F.; Sopko, V.; Benes, J.; Zeman, J.; Jandejsek, I.
2015-03-01
This study presents an application of real-time microradiography to biological objects, namely the horse chestnut leafminer, Cameraria ohridella (Insecta: Lepidoptera, Gracillariidae), and the subsequent image processing, focusing on image segmentation and object recognition. The microradiography of insects (such as the horse chestnut leafminer) provides non-invasive imaging that leaves the organisms alive. The imaging requires a radiographic system with high spatial resolution (micrometer scale). Our radiographic system consists of a micro-focus X-ray tube and two types of detectors. The first is a charge-integrating detector (Hamamatsu flat panel); the second is a pixel semiconductor detector (Medipix2 detector). The latter allows detection of single quantum photons of ionizing radiation. Numerous horse chestnut leafminer pupae in the microradiography images were easily recognizable in automatic mode using image processing methods. We implemented an algorithm that is able to count the number of dead and alive pupae in images. The algorithm was based on two methods: 1) noise reduction using mathematical morphology filters, 2) Canny edge detection. The accuracy of the algorithm is higher for the Medipix2 (average recall for detection of alive pupae = 0.99, average recall for detection of dead pupae = 0.83) than for the flat panel (average recall for detection of alive pupae = 0.99, average recall for detection of dead pupae = 0.77). Therefore, we conclude that the Medipix2 has lower noise and better displays the contours (edges) of biological objects. Our method allows automatic selection and counting of dead and alive chestnut leafminer pupae, leading to faster monitoring of the population of one of the world's important insect pests.
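The counting idea, morphological noise reduction followed by object detection, can be sketched with a simplified stand-in for the paper's pipeline: connected-component labeling here instead of Canny edge detection, and a synthetic image rather than a real radiograph.

```python
import numpy as np
from scipy import ndimage

def count_objects(image, threshold):
    """Count bright objects in a radiograph-like image:
    threshold, remove speckle noise with a morphological opening,
    then count connected components."""
    binary = image > threshold
    # opening (erosion then dilation) deletes features smaller than 3x3
    cleaned = ndimage.binary_opening(binary, structure=np.ones((3, 3)))
    _, n_objects = ndimage.label(cleaned)
    return n_objects

# synthetic 40x40 "radiograph": three 5x5 objects plus isolated noise pixels
img = np.zeros((40, 40))
for r, c in [(5, 5), (20, 25), (32, 10)]:
    img[r:r + 5, c:c + 5] = 1.0
for r, c in [(0, 0), (15, 15), (38, 39)]:   # speckle the opening should remove
    img[r, c] = 1.0
print(count_objects(img, 0.5))  # 3
```

Distinguishing dead from alive pupae would add a classification step on each labeled region (e.g., shape or intensity features), which is beyond this sketch.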
Studies of a Next-Generation Silicon-Photomultiplier-Based Time-of-Flight PET/CT System.
Hsu, David F C; Ilan, Ezgi; Peterson, William T; Uribe, Jorge; Lubberink, Mark; Levin, Craig S
2017-09-01
This article presents system performance studies for the Discovery MI PET/CT system, a new time-of-flight system based on silicon photomultipliers. System performance and clinical imaging were compared between this next-generation system and other commercially available PET/CT and PET/MR systems, as well as between different reconstruction algorithms. Methods: Spatial resolution, sensitivity, noise-equivalent counting rate, scatter fraction, counting rate accuracy, and image quality were characterized with the National Electrical Manufacturers Association NU-2 2012 standards. Energy resolution and coincidence time resolution were measured. Tests were conducted independently on two Discovery MI scanners installed at Stanford University and Uppsala University, and the results were averaged. Back-to-back patient scans were also performed between the Discovery MI, Discovery 690 PET/CT, and SIGNA PET/MR systems. Clinical images were reconstructed using both ordered-subset expectation maximization and Q.Clear (block-sequential regularized expectation maximization with point-spread function modeling) and were examined qualitatively. Results: The averaged full widths at half maximum (FWHMs) of the radial/tangential/axial spatial resolution reconstructed with filtered backprojection at 1, 10, and 20 cm from the system center were, respectively, 4.10/4.19/4.48 mm, 5.47/4.49/6.01 mm, and 7.53/4.90/6.10 mm. The averaged sensitivity was 13.7 cps/kBq at the center of the field of view. The averaged peak noise-equivalent counting rate was 193.4 kcps at 21.9 kBq/mL, with a scatter fraction of 40.6%. The averaged contrast recovery coefficients for the image-quality phantom were 53.7, 64.0, 73.1, 82.7, 86.8, and 90.7 for the 10-, 13-, 17-, 22-, 28-, and 37-mm-diameter spheres, respectively. The average photopeak energy resolution was 9.40% FWHM, and the average coincidence time resolution was 375.4 ps FWHM. 
Clinical image comparisons between the PET/CT systems demonstrated the high quality of the Discovery MI. Comparisons between the Discovery MI and SIGNA showed a similar spatial resolution and overall imaging performance. Lastly, the results indicated significantly enhanced image quality and contrast-to-noise performance for Q.Clear, compared with ordered-subset expectation maximization. Conclusion: Excellent performance was achieved with the Discovery MI, including 375 ps FWHM coincidence time resolution and sensitivity of 14 cps/kBq. Comparisons between reconstruction algorithms and other multimodal silicon photomultiplier and non-silicon photomultiplier PET detector system designs indicated that performance can be substantially enhanced with this next-generation system. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
Evaluation of an automatic brain segmentation method developed for neonates on adult MR brain images
NASA Astrophysics Data System (ADS)
Moeskops, Pim; Viergever, Max A.; Benders, Manon J. N. L.; Išgum, Ivana
2015-03-01
Automatic brain tissue segmentation is of clinical relevance in images acquired at all ages. The literature presents a clear distinction between methods developed for MR images of infants, and methods developed for images of adults. The aim of this work is to evaluate a method developed for neonatal images in the segmentation of adult images. The evaluated method employs supervised voxel classification in subsequent stages, exploiting spatial and intensity information. Evaluation was performed using images available within the MRBrainS13 challenge. The obtained average Dice coefficients were 85.77% for grey matter, 88.66% for white matter, 81.08% for cerebrospinal fluid, 95.65% for cerebrum, and 96.92% for intracranial cavity, currently resulting in the best overall ranking. The possibility of applying the same method to neonatal as well as adult images can be of great value in cross-sectional studies that include a wide age range.
NASA Technical Reports Server (NTRS)
Zhang, Zeng-Chan; Yu, S. T. John; Chang, Sin-Chung; Jorgenson, Philip (Technical Monitor)
2001-01-01
In this paper, we report a version of the Space-Time Conservation Element and Solution Element (CE/SE) Method in which the 2D and 3D unsteady Euler equations are simulated using structured or unstructured quadrilateral and hexahedral meshes, respectively. In the present method, mesh values of flow variables and their spatial derivatives are treated as independent unknowns to be solved for. At each mesh point, the value of a flow variable is obtained by imposing a flux conservation condition. On the other hand, the spatial derivatives are evaluated using a finite-difference/weighted-average procedure. Note that the present extension retains many key advantages of the original CE/SE method, which uses triangular and tetrahedral meshes, respectively, for its 2D and 3D applications. These advantages include efficient parallel computing, ease of implementing non-reflecting boundary conditions, high-fidelity resolution of shocks and waves, and a genuinely multidimensional formulation without using a dimensional-splitting approach. In particular, because Riemann solvers, the cornerstones of the Godunov-type upwind schemes, are not needed to capture shocks, the computational logic of the present method is considerably simpler. To demonstrate the capability of the present method, numerical results are presented for several benchmark problems including oblique shock reflection, supersonic flow over a wedge, and a 3D detonation flow.
Buffington, Kevin J.; Dugger, Bruce D.; Thorne, Karen M.; Takekawa, John Y.
2016-01-01
Airborne light detection and ranging (lidar) is a valuable tool for collecting large amounts of elevation data across large areas; however, the limited ability to penetrate dense vegetation with lidar hinders its usefulness for measuring tidal marsh platforms. Methods to correct lidar elevation data are available, but a reliable method that requires limited field work and maintains spatial resolution is lacking. We present a novel method, the Lidar Elevation Adjustment with NDVI (LEAN), to correct lidar digital elevation models (DEMs) with vegetation indices from readily available multispectral airborne imagery (NAIP) and RTK-GPS surveys. Using 17 study sites along the Pacific coast of the U.S., we achieved an average root mean squared error (RMSE) of 0.072 m, with a 40–75% improvement in accuracy from the lidar bare earth DEM. Results from our method compared favorably with results from three other methods (minimum-bin gridding, mean error correction, and vegetation correction factors), and a power analysis applying our extensive RTK-GPS dataset showed that on average 118 points were necessary to calibrate a site-specific correction model for tidal marshes along the Pacific coast. By using available imagery and with minimal field surveys, we showed that lidar-derived DEMs can be adjusted for greater accuracy while maintaining high (1 m) resolution.
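The LEAN correction can be caricatured as a regression of lidar elevation error against NDVI at the RTK-GPS calibration points, with the fitted model then subtracted from the lidar surface. A minimal sketch under that simplifying assumption, on synthetic data; the published method's details differ:

```python
import numpy as np

def fit_lean(ndvi_at_gps, lidar_z_at_gps, rtk_z):
    """Fit error ≈ a + b*NDVI at calibration points, where error is the
    lidar elevation minus the RTK-GPS 'truth' (positive over vegetation)."""
    error = np.asarray(lidar_z_at_gps, dtype=float) - np.asarray(rtk_z, dtype=float)
    A = np.column_stack([np.ones_like(error), np.asarray(ndvi_at_gps, dtype=float)])
    coef, *_ = np.linalg.lstsq(A, error, rcond=None)
    return coef

def apply_lean(coef, ndvi, lidar_z):
    """Subtract the predicted vegetation-induced error from the lidar DEM."""
    a, b = coef
    return np.asarray(lidar_z, dtype=float) - (a + b * np.asarray(ndvi, dtype=float))

# synthetic calibration where the true error really is 0.05 + 0.4*NDVI metres
ndvi = np.array([0.1, 0.3, 0.5, 0.7])
rtk = np.array([1.0, 1.2, 0.9, 1.1])
lidar = rtk + 0.05 + 0.4 * ndvi
coef = fit_lean(ndvi, lidar, rtk)
corrected = apply_lean(coef, ndvi, lidar)
print(np.allclose(corrected, rtk))  # exact linear error is fully recovered
```

The power-analysis result in the abstract (about 118 calibration points on average) concerns how many such (NDVI, error) pairs are needed before the fitted coefficients stabilize for a given site.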
Spatial variability of mountain stream dynamics along the Ethiopian Rift Valley escarpment
NASA Astrophysics Data System (ADS)
Asfaha, Tesfaalem-Ghebreyohannes; Frankl, Amaury; Zenebe, Amanuel; Haile, Mitiku; Nyssen, Jan
2014-05-01
Changes in hydrogeomorphic characteristics of mountain streams are generally deemed to be controlled mainly by land use/cover changes and rainfall variability. This study investigates the spatial variability of peak discharge in relation to land cover, rainfall, and topographic variables in eleven catchments of the Ethiopian Rift Valley escarpment (average slope gradient = 48% ± 13%). Rapid deforestation of the escarpment in the second half of the 20th century resulted in strong flash floods, transporting large amounts of discharge and sediment to the lower graben bottom. Owing to integrated reforestation interventions since the 1980s, many of these catchments show improvement in vegetation cover to various degrees. Daily rainfall was measured using seven non-recording rain gauges, while peak stage discharges were measured after floods using crest stage gauges installed at eleven stream reaches. Peak discharges were calculated using Manning's equation. Daily area-weighted rainfall was computed for each catchment using the Thiessen polygon method. To estimate the vegetation cover of each catchment, the Normalized Difference Vegetation Index was calculated from Landsat TM imagery (mean = 0.14 ± 0.05). In the rainy season of 2012, there was a positive correlation between daily rainfall and peak discharge in each of the monitored catchments. In a multiple linear regression analysis (R² = 0.83; P < 0.01), average daily peak discharge in all rivers was positively related to rainfall depth and catchment size and negatively to vegetation cover (as represented by average NDVI values). Average slope gradient of the catchments and Gravelius's compactness index did not show a statistically significant relation with peak discharge.
This study shows that though the average vegetation cover of the catchments is still relatively low, differences in vegetation cover, together with rainfall variability, play a determining role in the amount of peak discharge in flashy mountain streams.
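The crest-stage peak discharges mentioned above come from Manning's equation, which in SI units reads Q = (1/n) A R^(2/3) S^(1/2). A small sketch with assumed channel values (not the study's measurements):

```python
import math

def manning_discharge(n, area_m2, hydraulic_radius_m, slope):
    """Manning's equation (SI units): Q = (1/n) * A * R^(2/3) * S^(1/2),
    with n the roughness coefficient, A the flow area, R the hydraulic
    radius (A divided by wetted perimeter), and S the channel slope."""
    return (1.0 / n) * area_m2 * hydraulic_radius_m ** (2.0 / 3.0) * math.sqrt(slope)

# illustrative post-flood estimate for a rough natural channel
q = manning_discharge(n=0.035, area_m2=10.0, hydraulic_radius_m=1.2, slope=0.02)
print(round(q, 1))  # ≈ 45.6 m^3/s
```

In practice the crest-stage gauge fixes the peak water level, from which A and R are measured in the surveyed cross section, while n is estimated from channel roughness.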
NASA Astrophysics Data System (ADS)
Mendolia, D.; D'Souza, R. J. C.; Evans, G. J.; Brook, J.
2013-10-01
Tropospheric NO2 vertical column densities have been retrieved and compared for the first time in Toronto, Canada, using three methods of differing spatial scales. Remotely sensed NO2 vertical column densities, retrieved from multi-axis differential optical absorption spectroscopy and satellite remote sensing, were evaluated by comparison with in situ vertical column densities estimated using a pair of chemiluminescence monitors situated 0.01 and 0.5 km a.g.l. (above ground level). The chemiluminescence measurements were corrected for the influence of NOz, which reduced the NO2 concentrations at 0.01 and 0.5 km by an average of 8 ± 1% and 12 ± 1%, respectively. The average absolute decrease in the chemiluminescence NO2 measurement as a result of this correction was less than 1 ppb. The monthly averaged ratio of the NO2 concentration at 0.5 to 0.01 km varied seasonally, and exhibited a negative linear dependence on the monthly average temperature, with Pearson's R = 0.83. During the coldest month, February, this ratio was 0.52 ± 0.04, while during the warmest month, July, this ratio was 0.34 ± 0.04, illustrating that NO2 is not well mixed within 0.5 km above ground level. Good correlation was observed between the remotely sensed and in situ NO2 vertical column densities (Pearson's R value ranging from 0.72 to 0.81), but the in situ vertical column densities were 52 to 58% greater than the remotely sensed columns. These results indicate that NO2 horizontal heterogeneity strongly impacted the magnitude of the remotely sensed columns. The in situ columns reflected an urban environment with major traffic sources, while the remotely sensed NO2 vertical column densities were representative of the region, which included spatial heterogeneity introduced by residential neighbourhoods and Lake Ontario. 
Despite the difference in absolute values, the reasonable correlation between the vertical column densities determined by three distinct methods increased confidence in the validity of the values provided by each measurement technique.
Iqbal, Zohaib; Wilson, Neil E; Keller, Margaret A; Michalik, David E; Church, Joseph A; Nielsen-Saines, Karin; Deville, Jaime; Souza, Raissa; Brecht, Mary-Lynn; Thomas, M Albert
2016-01-01
To measure cerebral metabolite levels in perinatally HIV-infected youths and healthy controls using the accelerated five dimensional (5D) echo planar J-resolved spectroscopic imaging (EP-JRESI) sequence, which is capable of obtaining two dimensional (2D) J-resolved spectra from three spatial dimensions (3D). After acquisition and reconstruction of the 5D EP-JRESI data, T1-weighted MRIs were used to classify brain regions of interest for HIV patients and healthy controls: right frontal white (FW), medial frontal gray (FG), right basal ganglia (BG), right occipital white (OW), and medial occipital gray (OG). From these locations, respective J-resolved and TE-averaged spectra were extracted and fit using two different quantitation methods. The J-resolved spectra were fit using prior knowledge fitting (ProFit) while the TE-averaged spectra were fit using the advanced method for accurate robust and efficient spectral fitting (AMARES). Quantitation of the 5D EP-JRESI data using the ProFit algorithm yielded significant metabolic differences in two spatial locations of the perinatally HIV-infected youths compared to controls: elevated NAA/(Cr+Ch) in the FW and elevated Asp/(Cr+Ch) in the BG. Using the TE-averaged data quantified by AMARES, an increase of Glu/(Cr+Ch) was shown in the FW region. A strong negative correlation (r < -0.6) was shown between tCh/(Cr+Ch) quantified using ProFit in the FW and CD4 counts. Also, strong positive correlations (r > 0.6) were shown between Asp/(Cr+Ch) and CD4 counts in the FG and BG. The complementary results using ProFit fitting of J-resolved spectra and AMARES fitting of TE-averaged spectra, which are a subset of the 5D EP-JRESI acquisition, demonstrate an abnormal energy metabolism in the brains of perinatally HIV-infected youths. This may be a result of the HIV pathology and long-term combination antiretroviral therapy (cART). Further studies of larger perinatally HIV-infected cohorts are necessary to confirm these findings.
NASA Astrophysics Data System (ADS)
Suárez, F.; Aravena, J. E.; Hausner, M. B.; Childress, A. E.; Tyler, S. W.
2011-01-01
In shallow thermohaline-driven lakes it is important to measure temperature on fine spatial and temporal scales to detect stratification or different hydrodynamic regimes. Raman spectra distributed temperature sensing (DTS) is an approach available to provide high spatial and temporal temperature resolution. A vertical high-resolution DTS system was constructed to overcome the problems of typical methods used in the past, i.e., without disturbing the water column, and with resistance to corrosive environments. This system monitors the temperature profile every 1.1 cm vertically, with temporal averages as short as 10 s. Temperature resolution as low as 0.035 °C is obtained when the data are collected at 5-min intervals. The vertical high-resolution DTS system is used to monitor the thermal behavior of a salt-gradient solar pond, which is an engineered shallow thermohaline system that allows collection and storage of solar energy for a long period of time. This paper describes a method to quantitatively assess accuracy, precision, and other limitations of DTS systems to fully utilize the capacity of this technology. It also presents, for the first time, a method to manually calibrate temperatures along the optical fiber.
The elimination of zero-order diffraction of 10.6 μm infrared digital holography
NASA Astrophysics Data System (ADS)
Liu, Ning; Yang, Chao
2017-05-01
A new method for eliminating the zero-order diffraction in infrared digital holography is presented in this paper. In the reconstruction of digital holograms, the spatial frequency of an infrared thermal imager, such as a microbolometer, cannot be compared to that of common visible CCD or CMOS devices. The infrared imager suffers from large pixel size and low spatial resolution, which make the zero-order diffraction a severe influence on the reconstruction process. The zero-order diffraction has very large energy and occupies the central region of the spectrum domain. In this paper, we design a new filtering strategy to overcome this problem. The strategy combines two filtering processes: a Gaussian low-frequency filter and a high-pass phase-averaging filter. With correctly chosen parameters, this strategy works effectively on the holograms and fully eliminates the zero-order diffraction, as well as the two crossover bars shown in the spectrum domain. A detailed explanation and discussion of the new method are given, and experimental results are demonstrated to prove its performance.
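The zero-order term sits at the center (DC region) of the hologram's spectrum, so a frequency-domain notch illustrates the basic idea. This sketch is only an inverted-Gaussian DC suppression, not the paper's two-stage Gaussian/phase-averaging strategy:

```python
import numpy as np

def suppress_zero_order(hologram, sigma=5.0):
    """Illustrative zero-order suppression: attenuate the spectrum around
    DC with an inverted Gaussian, leaving the off-axis orders that carry
    the object information largely untouched."""
    H = np.fft.fftshift(np.fft.fft2(hologram))
    ny, nx = hologram.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    r2 = (yy - ny // 2) ** 2 + (xx - nx // 2) ** 2
    notch = 1.0 - np.exp(-r2 / (2.0 * sigma ** 2))   # ~0 at DC, ~1 far away
    return np.fft.ifft2(np.fft.ifftshift(H * notch))

# constant background (pure zero-order) plus an off-axis fringe pattern
y, x = np.mgrid[0:64, 0:64]
holo = 10.0 + np.cos(2 * np.pi * 20 * x / 64)
out = suppress_zero_order(holo)
# the DC offset is removed while the fringe amplitude survives
print(round(float(np.abs(out).max()), 1))
```

The large pixel pitch of a microbolometer keeps the ±1 orders close to DC, which is why the filter parameters (here the hypothetical `sigma`) must be chosen carefully in the real system.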
Melisa L. Holman; David L. Peterson
2006-01-01
We compared annual basal area increment (BAI) at different spatial scales among all size classes and species at diverse locations in the wet western and dry northeastern Olympic Mountains. Weak growth correlations at small spatial scales (average R = 0.084-0.406) suggest that trees are responding to local growth conditions. However, significant...
The Importance of Gesture in Children's Spatial Reasoning
ERIC Educational Resources Information Center
Ehrlich, Stacy B.; Levine, Susan C.; Goldin-Meadow, Susan
2006-01-01
On average, men outperform women on mental rotation tasks. Even boys as young as 4 1/2 perform better than girls on simplified spatial transformation tasks. The goal of our study was to explore ways of improving 5-year-olds' performance on a spatial transformation task and to examine the strategies children use to solve this task. We found that…
NASA Astrophysics Data System (ADS)
Andreo, B.; Barberá, J. A.; Mudarra, M.; Marín, A. I.; García-Orellana, J.; Rodellas, V.; Pérez, I.
2018-02-01
Understanding the transference of water resources within hydrogeological systems, particularly in coastal aquifers, in which groundwater discharge may occur through multiple pathways (through springs, into rivers and streams, towards the sea, etc.), is crucial for sustainable groundwater use. This research aims to demonstrate the usefulness of the application of conventional recharge assessment methods coupled to isotopic techniques for accurately quantifying the hydrogeological balance and submarine groundwater discharge (SGD) from coastal carbonate aquifers. Sierra Almijara (Southern Spain), a carbonate aquifer formed of Triassic marbles, is considered as representative of Mediterranean coastal karst formations. The use of a multi-method approach has permitted the computation of a wide range of groundwater infiltration rates (17-60%) by means of direct application of hydrometeorological methods (Thornthwaite and Kessler) and spatially distributed information (modified APLIS method). A spatially weighted recharge rate of 42% results from the most coherent information on physiographic and hydrogeological characteristics of the studied system. Natural aquifer discharge and groundwater abstraction have been volumetrically quantified, based on flow and water-level data, while the relevance of SGD was estimated from the spatial analysis of salinity, 222Rn and the short-lived radium isotope 224Ra in coastal seawater. The total mean aquifer discharge (44.9-45.9 hm3 year-1) is in agreement with the average recharged groundwater (44.7 hm3 year-1), given that the system is volumetrically equilibrated during the study period. Besides the groundwater resources assessment, the methodological aspects of this research may be interesting for groundwater management and protection strategies in coastal areas, particularly karst environments.
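The "spatially weighted recharge rate" is, in essence, an area-weighted average of per-cell infiltration rates from the distributed (modified APLIS) map. A toy illustration; the cell areas and rates below are invented, not the study's values:

```python
# Hypothetical cell areas (km^2) and per-cell infiltration rates (%),
# standing in for the modified-APLIS distributed map in the abstract.
areas = [12.0, 30.0, 8.5, 22.0]
rates = [17.0, 48.0, 60.0, 41.0]

# Area-weighted mean: sum(area_i * rate_i) / sum(area_i)
weighted_rate = sum(a * r for a, r in zip(areas, rates)) / sum(areas)
print(f"spatially weighted recharge rate: {weighted_rate:.1f}%")
```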
Beyond the SCS curve number: A new stochastic spatial runoff approach
NASA Astrophysics Data System (ADS)
Bartlett, M. S., Jr.; Parolari, A.; McDonnell, J.; Porporato, A. M.
2015-12-01
The Soil Conservation Service curve number (SCS-CN) method is the standard approach in practice for predicting a storm event runoff response. It is popular because of its low parametric complexity and ease of use. However, the SCS-CN method does not describe the spatial variability of runoff and is restricted to certain geographic regions and land use types. Here we present a general theory for extending the SCS-CN method. Our new theory accommodates different event-based models derived from alternative rainfall-runoff mechanisms or distributions of watershed variables, which are the basis of different semi-distributed models such as VIC, PDM, and TOPMODEL. We introduce a parsimonious but flexible description in which runoff is initiated by a pure threshold, i.e., saturation excess, complemented by fill-and-spill runoff behavior from areas of partial saturation. To facilitate event-based runoff prediction, we derive simple equations for the fraction of the runoff source areas, the probability density function (PDF) describing runoff variability, and the corresponding average runoff value (a runoff curve analogous to the SCS-CN). The benefit of the theory is that it unites the SCS-CN method, VIC, PDM, and TOPMODEL as the same model type but with different assumptions for the spatial distribution of variables and the runoff mechanism. The new multiple-runoff-mechanism description for the SCS-CN enables runoff prediction in geographic regions and site runoff types previously misrepresented by the traditional SCS-CN method. In addition, we show that the VIC, PDM, and TOPMODEL runoff curves may be more suitable than the SCS-CN for different conditions. Lastly, we explore predictions of sediment and nutrient transport by applying the PDF describing runoff variability within our new framework.
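For reference, the classical SCS-CN event relation that the paper generalizes can be written in a few lines (US-customary units; the example watershed values are illustrative):

```python
def scs_cn_runoff(p_in, cn, ia_ratio=0.2):
    """Event runoff depth (inches) from the classic SCS-CN relation.

    S  = 1000/CN - 10              (potential retention, inches)
    Ia = ia_ratio * S              (initial abstraction, 0.2*S by convention)
    Q  = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0
    """
    s = 1000.0 / cn - 10.0
    ia = ia_ratio * s
    if p_in <= ia:
        return 0.0
    return (p_in - ia) ** 2 / (p_in - ia + s)

# 3 inches of rain on a CN = 80 watershed
print(f"{scs_cn_runoff(3.0, 80):.2f} in")   # -> 1.25 in
```

The pure-threshold behavior the abstract refers to is visible here: below the initial abstraction Ia, runoff is exactly zero.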
NASA Astrophysics Data System (ADS)
Schäfer, K.; Grant, R. H.; Emeis, S.; Raabe, A.; von der Heide, C.; Schmid, H. P.
2012-07-01
Measurements of land-surface emission rates of greenhouse and other gases at large spatial scales (10 000 m2) are needed to assess the spatial distribution of emissions. This can be readily done using spatially integrating micro-meteorological methods such as flux-gradient methods, which were evaluated here for determining land-surface emission rates of trace gases under stable boundary layers. Non-intrusive path-integrating measurements are utilized. Successful application of a flux-gradient method requires confidence in the gradients of trace gas concentration and wind, and in the applicability of boundary-layer turbulence theory; consequently, the procedures used to qualify the measurements from which the flux is determined are critical. While there is relatively high confidence in flux measurements made under unstable atmospheres with mean winds greater than 1 m s-1, there is greater uncertainty in flux measurements made under free convective or stable conditions. The study of N2O emissions from flat grassland and NH3 emissions from a cattle lagoon involves quality-assured determinations of fluxes under low-wind, stable or night-time atmospheric conditions, when the continuous "steady-state" turbulence of the surface boundary layer breaks down and the layer has intermittent turbulence. Results indicate that flux-gradient methods following the Monin-Obukhov similarity theory (MOST), which assume a log-linear profile of the wind speed and concentration gradient, incorrectly determine vertical profiles and thus flux in the stable boundary layer. An alternative approach is considered on the basis of turbulent diffusivity, i.e. the measured friction velocity as well as height gradients of horizontal wind speeds and concentrations, without MOST correction for stability. It is shown that this is the most accurate of the flux-gradient methods under stable conditions.
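A bare-bones version of a diffusivity-based flux-gradient estimate might look as follows. The exact formulation used in the study is not reproduced here; the geometric-mean reference height, units, and example values are assumptions:

```python
import math

KARMAN = 0.4  # von Karman constant

def flux_gradient(c_low, c_high, z_low, z_high, u_star):
    """Trace-gas flux from a two-height concentration gradient.

    Sketch of a diffusivity-based variant with no MOST stability
    correction: K = k * u_star * z_gm, with z_gm the geometric mean of
    the two measurement heights, and F = -K * dC/dz.
    Units: concentrations in ug m^-3, heights in m, u_star in m s^-1,
    giving flux in ug m^-2 s^-1.
    """
    z_gm = math.sqrt(z_low * z_high)
    k_turb = KARMAN * u_star * z_gm
    dc_dz = (c_high - c_low) / (z_high - z_low)
    return -k_turb * dc_dz

# N2O enriched near the surface -> upward (positive) flux
f = flux_gradient(c_low=410.0, c_high=400.0, z_low=0.5, z_high=2.0, u_star=0.15)
print(f"flux: {f:.2f} ug m-2 s-1")
```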
Xiao, Yong; Gu, Xiaomin; Yin, Shiyang; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Niu, Yong
2016-01-01
Based on geo-statistical theory and the ArcGIS geo-statistical module, data from 30 groundwater level observation wells were used to estimate the decline of the groundwater level in the Beijing piedmont. Seven different interpolation methods (inverse distance weighted interpolation, global polynomial interpolation, local polynomial interpolation, tension spline interpolation, ordinary Kriging interpolation, simple Kriging interpolation and universal Kriging interpolation) were used for interpolating the groundwater level between 2001 and 2013. Cross-validation, absolute error and the coefficient of determination (R(2)) were applied to evaluate the accuracy of the different methods. The results show that the simple Kriging method gave the best fit. The analysis of spatial and temporal variability suggests that the nugget effects from 2001 to 2013 were increasing, which means the spatial correlation weakened gradually under the influence of human activities. The spatial variability in the middle areas of the alluvial-proluvial fan is relatively higher than in the areas at its top and bottom. Because of changes in land use, the groundwater level also varies temporally: the average decline rate of the groundwater level between 2007 and 2013 increased compared with 2001-2006. Urban development and population growth cause over-exploitation in residential and industrial areas. The decline rate of the groundwater level in residential, industrial and river areas is relatively high, while the decrease in farmland area and the development of water-saving irrigation reduce the quantity of water used by agriculture, so the decline rate of the groundwater level in agricultural areas is not significant.
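Cross-validation of interpolators of this kind is straightforward to sketch. The snippet below runs leave-one-out cross-validation for a plain inverse-distance-weighted interpolator (one of the seven methods compared); the well coordinates and levels are synthetic:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse-distance-weighted estimate at query points."""
    d = np.linalg.norm(xy_known[None, :, :] - xy_query[:, None, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** power
    return (w * z_known).sum(axis=1) / w.sum(axis=1)

def loo_mae(xy, z):
    """Leave-one-out cross-validation error: predict each well from the
    other 29, as used to rank interpolation methods in the abstract."""
    errs = []
    for i in range(len(z)):
        keep = np.arange(len(z)) != i
        pred = idw(xy[keep], z[keep], xy[i:i + 1])[0]
        errs.append(abs(pred - z[i]))
    return float(np.mean(errs))

rng = np.random.default_rng(1)
xy = rng.uniform(0, 10, size=(30, 2))               # 30 observation wells
z = 50.0 - 1.5 * xy[:, 0] + rng.normal(0, 0.5, 30)  # smooth trend + noise
print(f"LOO mean absolute error: {loo_mae(xy, z):.2f} m")
```

The same loop, swapped over kriging variants, yields the comparison the study reports (lowest cross-validation error wins).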
Biomass Burning Aerosol Absorption Measurements with MODIS Using the Critical Reflectance Method
NASA Technical Reports Server (NTRS)
Zhu, Li; Martins, Vanderlei J.; Remer, Lorraine A.
2010-01-01
This research uses the critical reflectance technique, a space-based remote sensing method, to measure the spatial distribution of aerosol absorption properties over land. Choosing two regions dominated by biomass burning aerosols, a series of sensitivity studies were undertaken to analyze the potential limitations of this method for the type of aerosol to be encountered in the selected study areas, and to show that the retrieved results are relatively insensitive to uncertainties in the assumptions used in the retrieval of smoke aerosol. The critical reflectance technique is then applied to Moderate Resolution Imaging Spectroradiometer (MODIS) data to retrieve the spectral aerosol single scattering albedo (SSA) in South African and South American biomass burning events. The retrieved results were validated with collocated Aerosol Robotic Network (AERONET) retrievals. One standard deviation of mean MODIS retrievals match AERONET products to within 0.03, the magnitude of the AERONET uncertainty. The overlap of the two retrievals increases to 88%, allowing for measurement variance in the MODIS retrievals as well. The ensemble average of MODIS-derived SSA for the Amazon forest station is 0.92 at 670 nm, and 0.84-0.89 for the southern African savanna stations. The critical reflectance technique allows evaluation of the spatial variability of SSA, and shows that SSA in South America exhibits higher spatial variation than in South Africa. The accuracy of the retrieved aerosol SSA from MODIS data indicates that this product can help to better understand how aerosols affect the regional and global climate.
Star formation in M 33: the radial and local relations with the gas
NASA Astrophysics Data System (ADS)
Verley, S.; Corbelli, E.; Giovanardi, C.; Hunt, L. K.
2010-02-01
Aims: In the Local Group spiral galaxy M 33, we investigate the correlation between the star formation rate (SFR) surface density, Σ_SFR, and the gas density Σ_gas (molecular, atomic, and total). We also explore whether there are other physical quantities, such as the hydrostatic pressure and dust optical depth, which establish a good correlation with Σ_SFR. Methods: We use the Hα, far-ultraviolet (FUV), and bolometric emission maps to infer the SFR locally at different spatial scales, and in radial bins using azimuthally averaged values. Most of the local analysis is done using the highest spatial resolution allowed by gas surveys, 180 pc. The Kennicutt-Schmidt (KS) law, Σ_SFR ∝ Σ_gas^n is analyzed by three statistical methods. Results: At all spatial scales, with Hα emission as a SFR tracer, the KS indices n are always steeper than those derived with the FUV and bolometric emissions. We attribute this to the lack of Hα emission in low luminosity regions where most stars form in small clusters with an incomplete initial mass function at their high mass end. For azimuthally averaged values the depletion timescale for the molecular gas is constant, and the KS index is n_H_2=1.1 ±0.1. Locally, at a spatial resolution of 180 pc, the correlation between Σ_SFR and Σ_gas is generally poor, even though it is tighter with the molecular and total gas than with the atomic gas alone. Considering only positions where the CO J=1-0 line is above the 2-σ detection threshold and taking into account uncertainties in Σ_H_2 and Σ_SFR, we obtain a steeper KS index than obtained with radial averages: n_H_2=2.22 ±0.07 (for FUV and bolometric SFR tracers), flatter than that relative to the total gas (n_Htot=2.59 ±0.05). The gas depletion timescale is therefore larger in regions of lower Σ_SFR. Lower KS indices (n_H_2=1.46 ±0.34 and n_H_2=1.12) are found using different fitting techniques, which do not account for individual position uncertainties. 
At coarser spatial resolutions these indices get slightly steeper, and the correlation improves. We find an almost linear relation and a better correlation coefficient between the local Σ_SFR and the ISM hydrostatic pressure or the gas volume density. This suggests that the stellar disk, gravitationally dominant with respect to the gaseous disk in M 33, has a non-marginal role in driving the SFR. However, the tight local correlation that exists between the dust optical depth and the SFR sheds light on the alternative hypothesis that the dust column density is a good tracer of the gas that is prone to star formation.
Computer quantitation of coronary angiograms
NASA Technical Reports Server (NTRS)
Ledbetter, D. C.; Selzer, R. H.; Gordon, R. M.; Blankenhorn, D. H.; Sanmarco, M. E.
1978-01-01
A computer technique is being developed at the Jet Propulsion Laboratory to automate the measurement of coronary stenosis. A Vanguard 35-mm film transport is optically coupled to a Spatial Data System vidicon/digitizer, which in turn is controlled by a DEC PDP 11/55 computer. Programs have been developed to track the edges of the arterial shadow, to locate normal and atherosclerotic vessel sections, and to measure percent stenosis. Multiple-frame analysis techniques are being investigated that involve, on the one hand, averaging stenosis measurements from adjacent frames and, on the other, averaging adjacent frame images directly and then measuring stenosis from the averaged image. For the latter case, geometric transformations are used to force registration of vessel images whose spatial orientation changes.
Spatial evolution of laser filaments in turbulent air
NASA Astrophysics Data System (ADS)
Zeng, Tao; Zhu, Shiping; Zhou, Shengling; He, Yan
2018-04-01
In this study, the spatial evolution properties of laser filament clusters in turbulent air were evaluated using numerical simulations. Various statistical parameters were calculated, such as the percolation probability, filling factor, and average cluster size. The results indicate that turbulence-induced multi-filamentation can be described as a new phase transition universality class. In addition, during this process, the relationship between the average cluster size and filling factor could be fit by a power function. Our results are valuable for applications involving filamentation that can be influenced by the geometrical features of multiple filaments.
NASA Technical Reports Server (NTRS)
Crosson, William L.; Duchon, Claude E.; Raghavan, Ravikumar; Goodman, Steven J.
1996-01-01
Precipitation estimates from radar systems are a crucial component of many hydrometeorological applications, from flash flood forecasting to regional water budget studies. For analyses on large spatial scales and long timescales, it is frequently necessary to use composite reflectivities from a network of radar systems. Such composite products are useful for regional or national studies, but introduce a set of difficulties not encountered when using single radars. For instance, each contributing radar has its own calibration and scanning characteristics, but radar identification may not be retained in the compositing procedure. As a result, range effects on signal return cannot be taken into account. This paper assesses the accuracy with which composite radar imagery can be used to estimate precipitation in the convective environment of Florida during the summer of 1991. Results using Z = 300R^1.4 (the WSR-88D default Z-R relationship) are compared with those obtained using the probability matching method (PMM). Rainfall derived from the power-law Z-R was found to be highly biased (+90% to +110%) compared to rain gauge measurements for various temporal and spatial integrations. Application of a 36.5-dBZ reflectivity threshold (determined via the PMM) was found to improve the performance of the power-law Z-R, reducing the biases substantially to 20%-33%. Correlations between precipitation estimates obtained with either Z-R relationship and mean gauge values are much higher for areal averages than for point locations. Precipitation estimates from the PMM are an improvement over those obtained using the power law in that biases and root-mean-square errors are much lower. The minimum timescale for application of the PMM with the composite radar dataset was found to be several days for area-average precipitation. The minimum spatial scale is harder to quantify, although it is concluded that it is less than 350 sq km.
Implications relevant to the WSR-88D system are discussed.
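The two Z-R ingredients discussed above (the default power-law relation and a PMM-derived reflectivity threshold) are easy to express directly:

```python
def rain_rate_from_dbz(dbz, a=300.0, b=1.4, threshold_dbz=None):
    """Rain rate R (mm/h) from radar reflectivity via Z = a * R**b.

    Defaults match the WSR-88D convective relation Z = 300 R^1.4 used in
    the study; the optional threshold mimics the 36.5-dBZ cutoff that
    the probability matching method suggested.
    """
    if threshold_dbz is not None and dbz < threshold_dbz:
        return 0.0
    z_linear = 10.0 ** (dbz / 10.0)     # dBZ -> Z in mm^6 m^-3
    return (z_linear / a) ** (1.0 / b)

print(f"{rain_rate_from_dbz(40.0):.1f} mm/h")                     # unthresholded
print(f"{rain_rate_from_dbz(30.0, threshold_dbz=36.5):.1f} mm/h") # below cutoff
```

Applying the threshold zeroes out weak echoes, which is how the study reduced the large positive bias of the raw power law.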
Lagrange constraint neural network for audio varying BSS
NASA Astrophysics Data System (ADS)
Szu, Harold H.; Hsu, Charles C.
2002-03-01
The Lagrange Constraint Neural Network (LCNN) is a statistical-mechanical ab-initio model that does not assume the artificial neural network (ANN) model at all, but derives it from the first principles of the Hamilton and Lagrange methodology: H(S,A) = f(S) - λ C(S,A(x,t)), which incorporates the measurement constraint C(S,A(x,t)) = λ([A]S - X) + (λ0 - 1)(Σi si - 1) using the vector Lagrange multiplier λ and the a priori Shannon entropy f(S) = -Σi si log si as the contrast function of an unknown number of independent sources si. Szu et al. first solved, in 1997, the general Blind Source Separation (BSS) problem for a spatially and temporally varying mixing matrix in real-world remote sensing, where a large pixel footprint implies that the mixing matrix [A(x,t)] is necessarily filled with diurnal and seasonal variations. Because ground truth is difficult to ascertain in remote sensing, we illustrate in this paper each step of the LCNN algorithm for simulated spatially-temporally varying BSS in speech and music audio mixing. We review and compare LCNN with other popular a-posteriori maximum entropy methodologies, defined by the ANN weight matrix [W] and sigmoid σ post-processing H(Y = σ([W]X)), of Bell-Sejnowski, Amari and Oja (BSAO), called Independent Component Analysis (ICA). Both are mirror-symmetric MaxEnt methodologies and work for a constant unknown mixing matrix [A], but the major difference is whether the ensemble average is taken over the neighborhood pixel data X in BSAO or over the a priori source variables S in LCNN; this dictates which method works for a spatially-temporally varying [A(x,t)], which would not allow the neighborhood pixel average. We expect the success of sharper de-mixing by the LCNN method, in terms of a controlled ground-truth experiment, in the simulation of a varying mixture of two pieces of music of similar kurtosis (15 seconds composed of the Saint-Saens Swan and the Rachmaninov cello concerto).
Gao, Lirong; Zhang, Qin; Liu, Lidan; Li, Changliang; Wang, Yiwen
2014-11-01
Twenty-six ambient air samples were collected around a municipal solid waste incinerator (MSWI) in the summer and winter using polyurethane foam passive air samplers, and analyzed to assess the spatial and seasonal distributions of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) and polychlorinated biphenyls (PCBs). Three stack gas samples were also collected and analyzed to determine PCDD/F (971 pg m(-3) on average) and PCB (2,671 pg m(-3) on average) emissions from the MSWI and to help identify the sources of the pollutants in the ambient air. The total PCDD/F concentrations in the ambient air samples were lower in the summer (472-1,223 fg m(-3)) than in the winter (561-3,913 fg m(-3)). In contrast, the atmospheric total PCB concentrations were higher in the summer (716-4,902 fg m(-3)) than in the winter (489-2,298 fg m(-3)). Principal component analysis showed that, besides emissions from the MSWI, the domestic burning of coal and wood also contributed to the presence of PCDD/Fs and PCBs in the ambient air. The PCDD/F and PCB spatial distributions were analyzed using ordinary Kriging interpolation, and only a limited effect of emissions from the MSWI was found. Higher PCDD/F and PCB concentrations were observed downwind of the MSWI than in the other directions, but the highest concentrations were not found in the direction with the greatest wind frequency, which might be explained by emissions from domestic coal and wood burning. We used a systematic method, including sampling and data analysis, which can provide pioneering information for characterizing the risks and assessing the uncertainty of PCDD/Fs and PCBs in the ambient air around MSWIs in China.
Using a motion capture system for spatial localization of EEG electrodes
Reis, Pedro M. R.; Lochmann, Matthias
2015-01-01
Electroencephalography (EEG) is often used in source analysis studies, in which the locations of the cortex regions responsible for a signal are determined. For this to be possible, accurate positions of the electrodes on the scalp surface must be determined; otherwise, errors in the source estimation will occur. Several methods for acquiring these positions exist today, but they are often not satisfactorily accurate or take a long time to perform. Therefore, in this paper we describe a method capable of determining the positions accurately and quickly. The method uses an infrared-light motion capture system (IR-MOCAP) with 8 cameras arranged around a human participant. It acquires the 3D coordinates of each electrode and automatically labels them. Each electrode has a small reflector on top of it, allowing its detection by the cameras. We tested the accuracy of the presented method by acquiring the electrode positions on a rigid sphere model and comparing these with measurements from computed tomography (CT). The average Euclidean distance between the sphere model CT measurements and the presented method was 1.23 mm, with an average standard deviation of 0.51 mm. We also tested the method with a human participant. The measurement was quickly performed and all positions were captured. These results indicate that, with this method, it is possible to acquire electrode positions with minimal error and little time effort for the study participants and investigators. PMID:25941468
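The accuracy figures above amount to per-electrode Euclidean distances between two registered coordinate sets. A sketch with synthetic coordinates (the rigid registration between the MOCAP and CT frames is assumed to have been done already):

```python
import numpy as np

def position_error(mocap_xyz, ct_xyz):
    """Mean and std of per-electrode Euclidean distance between two
    coordinate sets (e.g. IR-MOCAP vs CT), already in a common frame."""
    d = np.linalg.norm(mocap_xyz - ct_xyz, axis=1)
    return d.mean(), d.std()

rng = np.random.default_rng(3)
ct = rng.uniform(-90, 90, size=(64, 3))        # reference positions (mm)
mocap = ct + rng.normal(0, 0.8, size=(64, 3))  # ~1 mm measurement noise
mean_d, std_d = position_error(mocap, ct)
print(f"mean {mean_d:.2f} mm, std {std_d:.2f} mm")
```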
Spherical earth gravity and magnetic anomaly analysis by equivalent point source inversion
NASA Technical Reports Server (NTRS)
Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.
1981-01-01
To facilitate geologic interpretation of satellite elevation potential field data, analysis techniques are developed and verified in the spherical domain that are commensurate with conventional flat-earth methods of potential field interpretation. A powerful approach to the spherical earth problem relates potential field anomalies to a distribution of equivalent point sources by least squares matrix inversion. Linear transformations of the equivalent source field lead to corresponding geoidal anomalies, pseudo-anomalies, vector anomaly components, spatial derivatives, continuations, and differential magnetic pole reductions. A number of examples using 1 deg-averaged surface free-air gravity anomalies and POGO satellite magnetometer data for the United States, Mexico, and Central America illustrate the capabilities of the method.
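The least squares step can be illustrated with a toy monopole model: observed anomalies are a kernel matrix times unknown equivalent-source strengths, recovered with `numpy.linalg.lstsq`. All geometry and values below are invented for illustration:

```python
import numpy as np

# Toy equivalent-point-source inversion: observed anomalies g at surface
# points are modelled as a design matrix (simple 1/r^2 monopole-like
# kernels) times unknown source strengths m, solved by least squares.
rng = np.random.default_rng(2)

obs_xy = rng.uniform(0, 100, size=(40, 2))       # observation points (km)
src_xy = np.array([[25.0, 25.0], [70.0, 60.0]])  # fixed source locations
depth = 10.0                                     # source depth (km)

def kernel(obs, src):
    d2 = ((obs[:, None, :] - src[None, :, :]) ** 2).sum(axis=2) + depth**2
    return 1.0 / d2                              # 1/r^2 decay with depth

true_m = np.array([5.0, -3.0])
g = kernel(obs_xy, src_xy) @ true_m + rng.normal(0, 1e-5, 40)

m_est, *_ = np.linalg.lstsq(kernel(obs_xy, src_xy), g, rcond=None)
print(np.round(m_est, 2))
```

Once the strengths are recovered, the "linear transformations" the abstract mentions (derivatives, continuations, etc.) amount to applying different kernels to the same estimated sources.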
NASA Technical Reports Server (NTRS)
Kovalenko, L. J.; Philippoz, J.-M.; Bucenell, J. R.; Zenobi, R.; Zare, R. N.
1991-01-01
The distribution of PAHs in the Allende meteorite has been measured using two-step laser desorption and laser multiphoton-ionization mass spectrometry. This method enables in situ analysis (with a spatial resolution of 1 mm or better) of selected organic molecules. Results show that PAH concentrations are locally high compared to the average concentration found by analysis of pulverized samples, and are found primarily in the fine-grained matrix; no PAHs were detected in the interiors of individual chondrules at the detection limit (about 0.05 ppm).
Variability in Tropospheric Ozone over China Derived from Assimilated GOME-2 Ozone Profiles
NASA Astrophysics Data System (ADS)
van Peet, J. C. A.; van der A, R. J.; Kelder, H. M.
2016-08-01
A tropospheric ozone dataset is derived from assimilated GOME-2 ozone profiles for 2008. Ozone profiles are retrieved with the OPERA algorithm, using the optimal estimation method. The retrievals are done at a spatial resolution of 160×160 km on 16 layers ranging from the surface up to 0.01 hPa. By using the averaging kernels in the data assimilation, the algorithm maintains the high-resolution vertical structures of the model while being constrained by observations with a lower vertical resolution.
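Using averaging kernels in assimilation typically means comparing the model state smoothed by the retrieval's kernel, x_hat = x_a + A (x_model - x_a), rather than the raw model profile. A small sketch; the kernel matrix and profiles below are invented, not OPERA's:

```python
import numpy as np

def apply_averaging_kernel(x_model, x_apriori, A):
    """Smooth a model profile to the retrieval's vertical sensitivity:
    x_hat = x_a + A @ (x_model - x_a) (standard optimal-estimation form).
    Comparing x_hat, not x_model, with the retrieval keeps the
    assimilation consistent with the instrument's coarse resolution."""
    return x_apriori + A @ (x_model - x_apriori)

n = 16                                  # layers, as in the OPERA retrieval
x_a = np.full(n, 50.0)                  # flat a priori (arbitrary units)
x_model = x_a + 10.0 * np.sin(np.linspace(0, np.pi, n))

# Invented tridiagonal smoothing kernel standing in for the real A
A = np.eye(n) * 0.5 + np.eye(n, k=1) * 0.25 + np.eye(n, k=-1) * 0.25
x_hat = apply_averaging_kernel(x_model, x_a, A)
print(f"model peak {x_model.max():.1f} -> kernel-smoothed peak {x_hat.max():.1f}")
```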
Kokki, Tommi; Sipilä, Hannu T; Teräs, Mika; Noponen, Tommi; Durand-Schaefer, Nicolas; Klén, Riku; Knuuti, Juhani
2010-01-01
In PET imaging, respiratory and cardiac contraction motions interfere with imaging of the heart. The aim was to develop and evaluate a dual gating method for improving the detection of small targets in the heart. The method utilizes two independent triggers, which are inserted periodically into the list mode data based on the respiratory and ECG cycles. An algorithm for generating dual-gated segments from list mode data was developed. Test measurements showed that rotational and axial movements of a point source can be separated spatially into different segments with well-defined borders. The effect of dual gating on the detection of small moving targets was tested with a moving heart phantom. Dual-gated images showed 51% elimination (3.6 mm out of 7.0 mm) of the contraction motion of a hot spot (diameter 3 mm) and 70% elimination (14 mm out of 20 mm) of the respiratory motion. The averaged activity value of the hot spot increases by 89% compared to non-gated images. A patient study of suspected cardiac sarcoidosis shows a sharper spatial myocardial uptake profile and improved detection of small myocardial structures such as the papillary muscles. The dual gating method improves the detection of small moving targets in a phantom and is feasible in clinical situations.
Evaluating Downscaling Methods for Seasonal Climate Forecasts over East Africa
NASA Technical Reports Server (NTRS)
Roberts, J. Brent; Robertson, Franklin R.; Bosilovich, Michael; Lyon, Bradfield; Funk, Chris
2013-01-01
The U.S. National Multi-Model Ensemble seasonal forecasting system is providing hindcast and real-time data streams to be used in assessing and improving seasonal predictive capacity. The NASA / USAID SERVIR project, which leverages satellite and modeling-based resources for environmental decision making in developing nations, is focusing on the evaluation of NMME forecasts specifically for use in impact modeling within hub regions including East Africa, the Hindu Kush-Himalayan (HKH) region and Mesoamerica. One of the participating models in NMME is the NASA Goddard Earth Observing System (GEOS5). This work will present an intercomparison of downscaling methods using the GEOS5 seasonal forecasts of temperature and precipitation over East Africa. The current seasonal forecasting system provides monthly averaged forecast anomalies. These anomalies must be spatially downscaled and temporally disaggregated for use in application modeling (e.g. hydrology, agriculture). There are several available downscaling methodologies that can be implemented to accomplish this goal. Selected methods include both a non-homogenous hidden Markov model and an analogue based approach. A particular emphasis will be placed on quantifying the ability of different methods to capture the intermittency of precipitation within both the short and long rain seasons. Further, the ability to capture spatial covariances will be assessed. Both probabilistic and deterministic skill measures will be evaluated over the hindcast period.
A system and method for online high-resolution mapping of gastric slow-wave activity.
Bull, Simon H; O'Grady, Gregory; Du, Peng; Cheng, Leo K
2014-11-01
High-resolution (HR) mapping employs multielectrode arrays to achieve spatially detailed analyses of propagating bioelectrical events. A major current limitation is that spatial analyses must be performed "off-line" (after experiments), compromising timely recording feedback and restricting experimental interventions. These problems motivated the development of a system and method for "online" HR mapping. HR gastric recordings were acquired and streamed to a novel software client. Algorithms were devised to filter data, identify slow-wave events, eliminate corrupt channels, and cluster activation events. A graphical user interface animated the data and plotted electrograms and maps. Results were compared against off-line methods. The online system analyzed 256-channel serosal recordings with no unexpected system terminations and a mean delay of 18 s. Activation time marking sensitivity was 0.92; the positive predictive value was 0.93. Abnormal slow-wave patterns, including conduction blocks, ectopic pacemaking, and colliding wave fronts, were reliably identified. Compared to traditional analysis methods, online mapping gave comparable results, with equivalent coverage of 90% of electrodes, average RMS errors of less than 1 s, and correlation coefficients of activation maps of 0.99. Accurate slow-wave mapping was achieved in near real time, enabling monitoring of recording quality and experimental interventions targeted to dysrhythmic onset. This work also advances the translation of HR mapping toward real-time clinical application.
NASA Astrophysics Data System (ADS)
Beloconi, Anton; Benas, Nikolaos; Chrysoulakis, Nektarios; Kamarianakis, Yiannis
2015-11-01
Linear mixed effects models were developed for the estimation of the average daily Particulate Matter (PM) concentration spatial distribution over the area of Greater London (UK). Both fine (PM2.5) and coarse (PM10) concentrations were predicted for the 2002-2012 time period, based on satellite data. The latter included Aerosol Optical Thickness (AOT) at 3×3 km spatial resolution, as well as the Surface Relative Humidity, Surface Temperature and K-Index derived from the MODIS (Moderate Resolution Imaging Spectroradiometer) sensor. For a meaningful interpretation of the association among these variables, all data were homogenized with regard to spatial support and geographic projection, thus addressing the change-of-support problem and leading to valid statistical inference. To this end, spatial (2D) and spatio-temporal (3D) kriging techniques were applied to in-situ particulate matter concentrations, and leave-one-station-out cross-validation was performed on a daily level to gauge the quality of the predictions. Satellite-derived covariates displayed clear seasonal patterns; in order to work with data that are stationary in mean, for each covariate, deviations from its estimated annual profile were computed using nonlinear least squares and nonlinear absolute deviations. High-resolution land-cover and morphology static datasets were additionally incorporated in the analysis in order to capture the effects of nearby emission sources and sequestration sites. For pairwise comparisons of the particulate matter concentration means at distinct land-cover classes, the pairwise comparison method for unequal sample sizes, known as Tukey's method, was performed.
The use of satellite-derived products allowed better assessment of the space-time interactions of PM, since these daily spatial measurements were able to capture differences in PM concentrations between grid cells, while the use of high-resolution land-cover and morphology static datasets allowed accounting for local industrial, domestic and traffic-related air pollution. The developed methods are expected to fully exploit ESA's new Sentinel-3 observations to estimate the spatial distributions of both PM10 and PM2.5 concentrations in arbitrary cities.
Chen, Gang; Li, Jingyi; Ying, Qi; Sherman, Seth; Perkins, Neil; Rajeshwari, Sundaram; Mendola, Pauline
2014-01-01
In this study, the Community Multiscale Air Quality (CMAQ) model was applied to predict ambient gaseous and particulate concentrations from 2001 to 2010 in 15 hospital referral regions (HRRs) using a 36-km horizontal resolution domain. An inverse-distance-weighting-based method was applied to produce exposure estimates from observation-fused regional pollutant concentration fields, using the differences between observations and predictions at grid cells where air quality monitors were located. Although the raw CMAQ model is capable of producing satisfactory results for O3 and PM2.5 based on EPA guidelines, using the observation data-fusing technique to correct CMAQ predictions leads to significant improvement of model performance for all gaseous and particulate pollutants. Regional average concentrations were calculated using five different methods: 1) inverse distance weighting of observation data alone, 2) raw CMAQ results, 3) observation-fused CMAQ results, 4) population-averaged raw CMAQ results and 5) population-averaged fused CMAQ results. The results show that while the O3 (as well as NOx) monitoring networks in the HRRs are dense enough to provide consistent regional average exposure estimates based on monitoring data alone, PM2.5 observation sites (as well as monitors for CO, SO2, PM10 and PM2.5 components) are usually sparse, and the average concentrations estimated from inverse-distance-interpolated observations, raw CMAQ and fused CMAQ results can differ significantly. Population-weighted averages should be used to account for spatial variation in pollutant concentration and population density. Using raw CMAQ results or observations alone might lead to significant biases in health outcome analyses. PMID:24747248
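The observation-fusing step described above, interpolating observation-minus-prediction residuals onto the model grid and adding them back to the raw prediction, can be sketched with a minimal inverse distance weighting routine. The station coordinates, residual values, model value and power parameter below are illustrative assumptions, not the study's configuration:

```python
def idw(xy_obs, z_obs, xy_query, power=2.0):
    """Inverse-distance-weighted estimate at a single query point."""
    num = den = 0.0
    for (x, y), z in zip(xy_obs, z_obs):
        d2 = (x - xy_query[0]) ** 2 + (y - xy_query[1]) ** 2
        if d2 == 0.0:
            return z  # query coincides with a monitor: use its value
        w = d2 ** (-power / 2.0)
        num += w * z
        den += w
    return num / den

# hypothetical residuals (observation minus raw model) at two monitors
residual = idw([(0.0, 0.0), (2.0, 0.0)], [1.5, -0.5], (1.0, 0.0))
raw_model_value = 12.0
fused_value = raw_model_value + residual  # observation-fused estimate
```

The query point sits midway between the two monitors, so the residuals are weighted equally and the fused value is the raw prediction plus their mean.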
Characterizing tree canopy temperature heterogeneity using an unmanned aircraft-borne thermal imager
NASA Astrophysics Data System (ADS)
Messinger, M.; Powell, R.; Silman, M.; Wright, M.; Nicholson, W.
2013-12-01
Leaf temperature (Tleaf) is an important control on many physiological processes such as photosynthesis and respiration, is a key variable for characterizing canopy energy fluxes, and is a valuable metric for identifying plant water stress or disease. Traditional methods of Tleaf measurement involve either the use of thermocouples, a time- and labor-intensive method that samples sparsely in space, or the use of air temperature (Tair) as a proxy measure, which can introduce inaccuracies due to near-constant canopy-atmosphere energy flux. Thermal infrared (TIR) imagery provides an efficient means of collecting Tleaf for large areas. Existing satellite and aircraft-based TIR imagery is, however, limited by low spatial and/or temporal resolution, while crane-mounted camera systems have strictly limited spatial extents. Unmanned aerial systems (UAS) offer new opportunities to acquire high spatial and temporal resolution imagery on demand. Here, we demonstrate the feasibility of collecting tree canopy Tleaf data using a small multirotor UAS fitted with a high spatial resolution TIR imager. The goals of this pilot study were to a) characterize basic patterns of within-crown Tleaf for 4 study species and b) identify trends in Tleaf between species with varying leaf morphologies and canopy structures. TIR imagery was acquired for individual tree crowns of 4 species common to the North Carolina Piedmont ecoregion (Quercus phellos, Pinus strobus, Liriodendron tulipifera, Magnolia grandiflora) in an urban park environment. Due to significantly above-average summer precipitation, we assumed that none of the sampled trees was limited by soil water availability. We flew the TIR imaging system over 3-4 individuals of each of the 4 target species on 3 separate days. Imagery of all individuals was collected within the same 2-hour period in the afternoon on all days. Winds were low and skies were partly cloudy during imaging.
Tair, relative humidity, and wind speed were recorded at each site. Emissivity was assumed to be 0.98 for all species. Acquired images had a pixel resolution of <3 cm and a measurement accuracy of ±1 °C. We found the UAS-borne TIR imaging system to be an effective tool for collection of high resolution canopy imagery. The system imaged all targeted crowns quickly and reliably, providing a viable alternative to current methods of canopy Tleaf measurement. Analysis of the imagery indicated significant variability in Tleaf both within and between crowns. We identified trends in Tleaf related to average leaf size, shape, and crown structural traits. These data on the heterogeneity of Tleaf can further our understanding of canopy-atmosphere energy exchange. This pilot study demonstrates the promise of UAS-borne TIR sensors for acquiring high spatial resolution imagery at the scale of individual tree crowns.
Surface NO2 fields derived from joint use of OMI and GOME-2A observations with EMEP model output
NASA Astrophysics Data System (ADS)
Schneider, Philipp; Svendby, Tove; Stebel, Kerstin
2016-04-01
Nitrogen dioxide (NO2) is one of the most prominent air pollutants. Emitted primarily by transport and industry, NO2 has a major impact on health and economy. In contrast to the very sparse network of air quality monitoring stations, satellite data of NO2 are ubiquitous and allow for quantifying NO2 levels worldwide. However, one drawback of satellite-derived NO2 products is that they provide solely an estimate of the entire tropospheric column, whereas what is generally needed for air quality applications are the concentrations of NO2 near the surface. Here we derive surface NO2 concentration fields from OMI and GOME-2A tropospheric column products using the EMEP chemical transport model as auxiliary information. The model is used to provide information on the boundary-layer contribution to the total tropospheric column. In preparation for deriving the surface product, a comprehensive model-based analysis of the spatial and temporal patterns of the NO2 surface-to-column ratio in Europe was carried out for the year 2011. The results from this analysis indicate that the spatial patterns of the surface-to-column ratio vary only slightly. While the highest ratio values can be found in some shipping lanes, the spatial variability of the ratio in some of the most polluted areas of Europe is not very high. Some, but not all, urban agglomerations show high ratio values. Focusing on the temporal behavior, the analysis showed that the European-wide average ratio varies throughout the year. The surface-to-column ratio increases from January through April, when it reaches its maximum, then decreases relatively rapidly to average levels and stays mostly constant throughout the summer. The minimum ratio is observed in December. The knowledge gained from analyzing the spatial and temporal patterns of the surface-to-column ratio was then used to produce surface NO2 products from the daily NO2 data for OMI and GOME-2A.
This was carried out using two methods, namely using 1) hourly surface-to-column ratio at the time of the satellite overpass as well as 2) using annual average ratios thus eliminating the temporal variability and focusing solely on the spatial patterns. A validation of the resulting surface NO2 fields was performed using station observations of NO2 as provided by the Airbase database maintained by the European Environment Agency. First results indicate that the methodology is capable of producing surface concentration fields that reproduce the station-observed surface NO2 levels significantly better than the model surface fields as measured by the root mean squared error. The results also show that the spatial patterns of the surface-to-column ratio are more significant than its temporal variability. In addition to deriving satellite-based surface NO2, we further present initial results of a geostatistical methodology for downscaling satellite products of NO2 to spatial scales that are more relevant for applications in urban air quality. This is being carried out by applying area-to-point kriging techniques while using high-resolution (1-2 km spatial resolution) runs of a chemical transport model as a spatial proxy. In combination, these two techniques for deriving surface NO2 and spatially downscaling satellite-based NO2 fields have significant potential for improving satellite-based monitoring and mapping of regional and local-scale air pollution.
75 years of dryland science: Trends and gaps in arid ecology literature.
Greenville, Aaron C; Dickman, Chris R; Wardle, Glenda M
2017-01-01
Growth in the publication of scientific articles is occurring at an exponential rate, prompting a growing need to synthesise information in a timely manner to combat urgent environmental problems and guide future research. Here, we undertake a topic analysis of dryland literature over the last 75 years (8218 articles) to identify areas in arid ecology that are well studied and topics that are emerging. Four topics (wetlands, mammal ecology, litter decomposition and spatial modelling) were identified as 'hot topics' that showed higher than average growth in publications from 1940 to 2015. Five topics (remote sensing, climate, habitat and spatial, agriculture and soils-microbes) were identified as 'cold topics', with lower than average growth over the survey period but higher than average numbers of publications. Topics in arid ecology clustered into seven broad groups based on word similarity. These groups ranged from mammal ecology and population genetics, broad-scale management and ecosystem modelling, plant ecology, agriculture and ecophysiology, to populations and paleoclimate. These patterns may reflect trends in the field of ecology more broadly. We also identified two broad research gaps in arid ecology: population genetics, and habitat and spatial research. Collaborations between population geneticists and ecologists and investigations of ecological processes across spatial scales would contribute profitably to the advancement of arid ecology and to ecology more broadly.
High-Quality T2-Weighted 4-Dimensional Magnetic Resonance Imaging for Radiation Therapy Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, Dongsu; Caruthers, Shelton D.; Glide-Hurst, Carri
2015-06-01
Purpose: The purpose of this study was to improve the triggering efficiency of the prospective respiratory amplitude-triggered 4-dimensional magnetic resonance imaging (4DMRI) method and to develop a 4DMRI imaging protocol that could offer T2 weighting for better tumor visualization, good spatial coverage and spatial resolution, and respiratory motion sampling within a reasonable amount of time for radiation therapy applications. Methods and Materials: The respiratory state splitting (RSS) and multi-shot acquisition (MSA) methods were analytically compared and validated in a simulation study using the respiratory signals from 10 healthy human subjects. The RSS method was more effective in improving triggering efficiency. It was implemented in prospective respiratory amplitude-triggered 4DMRI. 4DMRI image datasets were acquired from 5 healthy human subjects. Liver motion was estimated using the acquired 4DMRI image datasets. Results: The simulation study showed the RSS method was more effective for improving triggering efficiency than the MSA method. The average reductions in 4DMRI acquisition times were 36% and 10% for the RSS and MSA methods, respectively. The human subject study showed that T2-weighted 4DMRI with 10 respiratory states and 60 slices at a spatial resolution of 1.5 × 1.5 × 3.0 mm³ could be acquired in 9 to 18 minutes, depending on the individual's breathing pattern. Based on the acquired 4DMRI image datasets, the ranges of peak-to-peak liver displacements among the 5 human subjects were 9.0 to 12.9 mm, 2.5 to 3.9 mm, and 0.5 to 2.3 mm in the superior-inferior, anterior-posterior, and left-right directions, respectively. Conclusions: We demonstrated that with the RSS method, it was feasible to acquire high-quality T2-weighted 4DMRI within a reasonable amount of time for radiation therapy applications.
Multi-scale approaches for high-speed imaging and analysis of large neural populations
Ahrens, Misha B.; Yuste, Rafael; Peterka, Darcy S.; Paninski, Liam
2017-01-01
Progress in modern neuroscience critically depends on our ability to observe the activity of large neuronal populations with cellular spatial and high temporal resolution. However, two bottlenecks constrain efforts towards fast imaging of large populations. First, the resulting large video data is challenging to analyze. Second, there is an explicit tradeoff between imaging speed, signal-to-noise, and field of view: with current recording technology we cannot image very large neuronal populations with simultaneously high spatial and temporal resolution. Here we describe multi-scale approaches for alleviating both of these bottlenecks. First, we show that spatial and temporal decimation techniques based on simple local averaging provide order-of-magnitude speedups in spatiotemporally demixing calcium video data into estimates of single-cell neural activity. Second, once the shapes of individual neurons have been identified at fine scale (e.g., after an initial phase of conventional imaging with standard temporal and spatial resolution), we find that the spatial/temporal resolution tradeoff shifts dramatically: after demixing we can accurately recover denoised fluorescence traces and deconvolved neural activity of each individual neuron from coarse scale data that has been spatially decimated by an order of magnitude. This offers a cheap method for compressing this large video data, and also implies that it is possible to either speed up imaging significantly, or to “zoom out” by a corresponding factor to image order-of-magnitude larger neuronal populations with minimal loss in accuracy or temporal resolution. PMID:28771570
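The decimation the authors describe amounts to block-averaging the calcium video in space and time before demixing. A minimal numpy sketch of that local-averaging step; the toy movie and decimation factors are illustrative, not the paper's pipeline:

```python
import numpy as np

def decimate(video, fs, ft):
    """Block-average a (T, H, W) movie by factor ft in time and fs in space."""
    T, H, W = video.shape
    # crop so every dimension divides evenly by its factor
    v = video[:T - T % ft, :H - H % fs, :W - W % fs]
    v = v.reshape(v.shape[0] // ft, ft,
                  v.shape[1] // fs, fs,
                  v.shape[2] // fs, fs)
    return v.mean(axis=(1, 3, 5))  # average within each block

movie = np.arange(4 * 4 * 4, dtype=float).reshape(4, 4, 4)
small = decimate(movie, fs=2, ft=2)
```

Each output voxel is the mean of a 2x2x2 block, so the global mean of the movie is preserved while the data volume drops by a factor of eight.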
NASA Astrophysics Data System (ADS)
Massie, Mark A.; Woolaway, James T., II; Curzan, Jon P.; McCarley, Paul L.
1993-08-01
An infrared focal plane has been simulated, designed and fabricated which mimics the form and function of the vertebrate retina. The `Neuromorphic' focal plane has the capability of performing pixel-based sensor fusion and real-time local contrast enhancement, much like the response of the human eye. The device makes use of an indium antimonide detector array with a 3 - 5 micrometers spectral response, and a switched capacitor resistive network to compute a real-time 2D spatial average. This device permits the summation of other sensor outputs to be combined on-chip with the infrared detections of the focal plane itself. The resulting real-time analog processed information thus represents the combined information of many sensors with the advantage that analog spatial and temporal signal processing is performed at the focal plane. A Gaussian subtraction method is used to produce the pixel output which when displayed produces an image with enhanced edges, representing spatial and temporal derivatives in the scene. The spatial and temporal responses of the device are tunable during operation, permitting the operator to `peak up' the response of the array to spatial and temporally varying signals. Such an array adapts to ambient illumination conditions without loss of detection performance. This paper reviews the Neuromorphic infrared focal plane from initial operational simulations to detailed design characteristics, and concludes with a presentation of preliminary operational data for the device as well as videotaped imagery.
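The center-minus-surround processing described above, computing a local spatial average and subtracting it from each pixel, can be sketched in software. Here a box average stands in for the chip's resistive-network/Gaussian surround, so this is a digital illustration of the principle rather than the device's actual analog computation:

```python
import numpy as np

def box_blur(img, k):
    """Local spatial average with a k x k box kernel (edge padding)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def enhance(img, k=3):
    """Subtract the local average from each pixel to enhance edges."""
    return img - box_blur(img, k)

# toy scene: a vertical step edge
step = np.zeros((5, 5))
step[:, 3:] = 1.0
edges = enhance(step)
```

Flat regions come out near zero while the two columns flanking the step get opposite-signed responses, the edge-enhancement effect the abstract describes.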
A Remote Sensing Image Fusion Method based on adaptive dictionary learning
NASA Astrophysics Data System (ADS)
He, Tongdi; Che, Zongxi
2018-01-01
This paper discusses a remote sensing image fusion method based on 'adaptive sparse representation' (ASP), intended to provide improved spectral information, reduce data redundancy and decrease system complexity. First, the training sample set is formed by taking random blocks from the images to be fused; the dictionary is then constructed using the training samples, and the remaining terms are clustered to obtain the complete dictionary by iterated processing at each step. Second, the self-adaptive weighted coefficient rule of regional energy is used to select the feature fusion coefficients and complete the reconstruction of the image blocks. Finally, the reconstructed image blocks are rearranged and averaged to obtain the final fused images. Experimental results show that the proposed method is superior to other traditional remote sensing image fusion methods in both spectral information preservation and spatial resolution.
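The regional-energy weighting idea can be illustrated outside the sparse-representation pipeline. A minimal sketch that fuses two aligned images block-by-block, weighting each block pair by its energy (sum of squared intensities); the block size and toy images are illustrative, and this omits the dictionary-learning and coefficient-reconstruction stages of the actual method:

```python
import numpy as np

def fuse_blocks(a, b, bs=4):
    """Fuse two aligned images block-by-block, weighting each block
    pair by its regional energy (sum of squared intensities)."""
    out = np.zeros_like(a, dtype=float)
    for i in range(0, a.shape[0], bs):
        for j in range(0, a.shape[1], bs):
            pa = a[i:i + bs, j:j + bs].astype(float)
            pb = b[i:i + bs, j:j + bs].astype(float)
            ea, eb = (pa ** 2).sum(), (pb ** 2).sum()
            w = ea / (ea + eb) if ea + eb > 0 else 0.5
            out[i:i + bs, j:j + bs] = w * pa + (1 - w) * pb
    return out

a = np.ones((8, 8))    # stand-in for a high-energy source image
b = np.zeros((8, 8))   # stand-in for a low-energy source image
fused = fuse_blocks(a, b)
```

Blocks where one source carries all the energy are taken entirely from that source, so the all-ones image dominates the fusion here.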
Li, Jin; Tran, Maggie; Siwabessy, Justy
2016-01-01
Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia’s marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge-informed AVI (KIAVI), Boruta and regularized RF (RRF), were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to ‘small p and large n’ problems in environmental sciences.
Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and caution should be taken when applying filter FS methods in selecting predictive models. PMID:26890307
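The idea behind averaged variable importance, scoring each predictor by the accuracy lost when its values are scrambled and averaging that loss over repeated shuffles, can be sketched with a toy stand-in classifier. A 1-nearest-neighbour model replaces the random forest here, and the data are synthetic, so this illustrates the averaging principle rather than the authors' RF workflow:

```python
import numpy as np

rng = np.random.default_rng(0)

def nn_acc(Xtr, ytr, Xte, yte):
    """1-nearest-neighbour accuracy (toy stand-in for the RF model)."""
    d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return float((ytr[d.argmin(1)] == yte).mean())

def averaged_importance(Xtr, ytr, Xte, yte, repeats=10):
    """AVI-style score: accuracy drop when one feature is permuted,
    averaged over repeated random shuffles."""
    base = nn_acc(Xtr, ytr, Xte, yte)
    imp = np.zeros(Xtr.shape[1])
    for j in range(Xtr.shape[1]):
        for _ in range(repeats):
            Xp = Xte.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            imp[j] += base - nn_acc(Xtr, ytr, Xp, yte)
    return imp / repeats

# toy data: feature 0 carries the class signal, feature 1 is pure noise
X = rng.normal(size=(80, 2))
y = (X[:, 0] > 0).astype(int)
X[:, 0] += 2 * y                      # widen the class gap on feature 0
imp = averaged_importance(X[:40], y[:40], X[40:], y[40:])
```

Averaging over shuffles stabilizes the score: the informative feature receives a clearly larger importance than the noise feature.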
NASA Astrophysics Data System (ADS)
Franz, K.; Dziubanski, D.; Helmers, M. J.
2015-12-01
The simplicity of the Curve Number (CN) method, which summarizes an area's hydrologic soil group, land cover, treatment, and hydrologic condition in a single number, makes it a consistently popular choice for modelers. When multiple land cover types are present, a weighted average of the CNs is used. However, the weighted CN does not account for the spatial distribution of different land cover types within the watershed. To overcome this limitation, it becomes necessary to discretize the model into homogeneous subunits, perhaps even to the hillslope scale, leading to a more complex model application. The objective of this study is to empirically derive CN values that reflect the effects of placements of native prairie vegetation (NPV) within agricultural landscapes. We derived CN values using precipitation and runoff data (May 1 to Sept 30) over a 7-year period (2008-2014) for 9 ephemeral watersheds in Iowa (USA) ranging from 0.47 to 3.19 ha. The watersheds were planted with varying extents of NPV (0%, 10%, 20%) in different watershed positions (footslope vs. contour strips), with the rest of the watershed as row crop. The derived CN values from the all-row-crop watersheds were consistent with published values, and watersheds with NPV had an average CN reduction of 6.4%, with a maximum reduction of 11.6%. Four of the six sites with treatment had a lower CN than one calculated using a weighted average of look-up values, indicating that accounting for placement of vegetation within the landscape is important for modeling runoff with the CN method. The derived CNs were verified using the leave-one-year-out method (computing the CN using data from 6 of the 7 years, and then estimating runoff for the seventh year with that CN). Nash-Sutcliffe Efficiency (NSE) values for the estimated runoff typically ranged from 0.4-0.6.
Our results suggest that the new CNs could confidently be used in future modeling studies to explore the hydrologic impacts of the NPV treatments at increasingly larger watershed scales.
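For reference, the standard CN method converts a storm depth to runoff through S = 1000/CN - 10 and Q = (P - Ia)²/(P - Ia + S) with Ia = 0.2S (depths in inches), and the composite CN the abstract criticizes is an area-weighted mean. A minimal sketch; the CN values and area shares below are illustrative, not the study's derived numbers:

```python
def scs_runoff(P, CN, lam=0.2):
    """SCS curve-number runoff depth (inches) for storm depth P."""
    S = 1000.0 / CN - 10.0          # potential maximum retention
    Ia = lam * S                    # initial abstraction
    return (P - Ia) ** 2 / (P - Ia + S) if P > Ia else 0.0

def weighted_cn(cns, areas):
    """Conventional area-weighted composite CN."""
    return sum(c * a for c, a in zip(cns, areas)) / sum(areas)

# hypothetical CNs: 78 for row crop, 71 for NPV strips, 90/10 area split
cn_mix = weighted_cn([78.0, 71.0], [0.9, 0.1])
q = scs_runoff(2.0, cn_mix)
```

Note that lowering the CN lowers the estimated runoff for the same storm, which is why an empirically derived CN below the look-up composite matters for runoff modeling.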
Hu, Peijun; Wu, Fa; Peng, Jialin; Bao, Yuanyuan; Chen, Feng; Kong, Dexing
2017-03-01
Multi-organ segmentation from CT images is an essential step for computer-aided diagnosis and surgery planning. However, manual delineation of the organs by radiologists is tedious, time-consuming and poorly reproducible. Therefore, we propose a fully automatic method for the segmentation of multiple organs from three-dimensional abdominal CT images. The proposed method employs deep fully convolutional neural networks (CNNs) for organ detection and segmentation, which is further refined by a time-implicit multi-phase evolution method. Firstly, a 3D CNN is trained to automatically localize and delineate the organs of interest with a probability prediction map. The learned probability map provides both subject-specific spatial priors and initialization for subsequent fine segmentation. Then, for the refinement of the multi-organ segmentation, image intensity models, probability priors as well as a disjoint region constraint are incorporated into a unified energy functional. Finally, a novel time-implicit multi-phase level-set algorithm is utilized to efficiently optimize the proposed energy functional model. Our method has been evaluated on 140 abdominal CT scans for the segmentation of four organs (liver, spleen and both kidneys). With respect to the ground truth, average Dice overlap ratios for the liver, spleen and both kidneys are 96.0, 94.2 and 95.4%, respectively, and the average symmetric surface distance is less than 1.3 mm for all the segmented organs. The computation time for a CT volume is 125 s on average. The achieved accuracy compares well to state-of-the-art methods, with much higher efficiency. A fully automatic method for multi-organ segmentation from abdominal CT images was developed and evaluated. The results demonstrated its potential in clinical usage with high effectiveness, robustness and efficiency.
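The Dice overlap ratio reported above is defined as 2|A∩B|/(|A|+|B|) for a predicted mask A and a ground-truth mask B. A minimal sketch on toy 2D masks (real evaluation would use 3D voxel masks):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks: 2|A n B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 1:3] = True        # ground truth: 4 pixels
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:4] = True      # prediction: 6 pixels, 4 of them overlapping
d = dice(pred, gt)         # 2*4 / (6+4) = 0.8
```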
NASA Astrophysics Data System (ADS)
Yin, Kai; Wen, MeiPing; Zhang, FeiFei; Yuan, Chao; Chen, Qiang; Zhang, Xiupeng
2016-10-01
With the acceleration of urbanization in China, a widespread phenomenon has emerged in many rural areas: impoverished villages, loss of the labor population, land abandonment and rural hollowing, which together constitute a hollow-village problem unique to China. Governing hollow villages is an objective need of rural economic and social development for the Chinese government, and research on evaluation methods for rural hollowing is the premise and basis of hollow-village governance. In this paper, several evaluation methods were used to assess rural hollowing based on survey data, land-use data, and social and economic development data. The evaluation indexes were the transition of homesteads, the development intensity of rural residential areas, the per capita housing construction area, the proportion of resident population in rural areas, and the average annual electricity consumption, which reflect the degree of rural hollowing from the land, population and economic points of view, respectively. Spatial analysis methods in GIS were then used to analyze the evaluation result for each index. Based on spatial raster data generated by Kriging interpolation, we reclassified all the results. Using the fuzzy clustering method, the degree of rural hollowing in the Ningxia area was reclassified at the two spatial scales of county and village. The results showed that the rural hollowing pattern in the Ningxia Hui Autonomous Region had a spatial distribution in which the degree of hollowing was high in the middle of the study area and low around its periphery. At the county scale, serious rural hollowing manifested as more extensive land use and lower levels of rural economic development and population concentration. At the village scale, the main manifestations of rural hollowing were rural population loss and idle land.
The evaluation method for rural hollowing constructed in this paper can effectively support comprehensive zoning of the degree of rural hollowing, enabling the government to make orderly decision-support plans for hollow-village governance.
Intrinsic coincident linear polarimetry using stacked organic photovoltaics.
Roy, S Gupta; Awartani, O M; Sen, P; O'Connor, B T; Kudenov, M W
2016-06-27
Polarimetry has widespread applications within atmospheric sensing, telecommunications, biomedical imaging, and target detection. Several existing methods of imaging polarimetry trade off the sensor's spatial resolution for polarimetric resolution, and often have some form of spatial registration error. To mitigate these issues, we have developed a system using oriented polymer-based organic photovoltaics (OPVs) that can preferentially absorb linearly polarized light. Additionally, the OPV cells can be made semitransparent, enabling multiple detectors to be cascaded along the same optical axis. Since each device performs a partial polarization measurement of the same incident beam, high temporal resolution is maintained with the potential for inherent spatial registration. In this paper, a Mueller matrix model of the stacked OPV design is provided. Based on this model, a calibration technique is developed and presented. This calibration technique and model are validated with experimental data, taken with a cascaded three-cell OPV Stokes polarimeter capable of measuring incident linear polarization states. Our results indicate a polarization measurement error of 1.2% RMS and an average absolute radiometric accuracy of 2.2% for the demonstrated polarimeter.
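Recovering the linear Stokes parameters from three partial polarization measurements can be sketched with ideal analyzers. This simplification ignores the transmission losses and non-ideal diattenuation of real cascaded OPV cells (which the paper's Mueller-matrix calibration handles), and the 0°/60°/120° orientations are an assumed configuration:

```python
import numpy as np

def analyzer_row(theta):
    """Top row of an ideal linear-polarizer Mueller matrix:
    measured intensity I = 0.5 * (S0 + S1*cos(2t) + S2*sin(2t))."""
    return 0.5 * np.array([1.0, np.cos(2 * theta), np.sin(2 * theta)])

angles = np.deg2rad([0.0, 60.0, 120.0])   # assumed cell orientations
A = np.vstack([analyzer_row(t) for t in angles])

S_true = np.array([1.0, 0.6, -0.3])       # incident linear Stokes vector
I = A @ S_true                             # the three cell readings
S_est = np.linalg.solve(A, I)              # invert the system matrix
```

Three non-degenerate orientations make the 3x3 system matrix invertible, so S0, S1 and S2 are recovered exactly in this noiseless ideal case.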
Effects of daily, high spatial resolution a priori profiles of satellite-derived NOx emissions
NASA Astrophysics Data System (ADS)
Laughner, J.; Zare, A.; Cohen, R. C.
2016-12-01
The current generation of space-borne NO2 column observations provides a powerful method of constraining NOx emissions due to the spatial resolution and global coverage afforded by the Ozone Monitoring Instrument (OMI). The greater resolution available in next-generation instruments such as TROPOMI and the capabilities of the geosynchronous platforms TEMPO, Sentinel-4, and GEMS will provide even greater capabilities in this regard, but we must apply lessons learned from the current generation of retrieval algorithms to make the best use of these instruments. Here, we focus on the effect of the resolution of the a priori NO2 profiles used in the retrieval algorithms. We show that for an OMI retrieval, using daily high-resolution a priori profiles results in changes in the retrieved VCDs of up to 40% when compared to a retrieval using monthly average profiles at the same resolution. Further, comparing a retrieval with daily high spatial resolution a priori profiles to a more standard one, we show that the derived emissions increase by 100% when using the optimized retrieval.
Uncovering urban human mobility from large scale taxi GPS data
NASA Astrophysics Data System (ADS)
Tang, Jinjun; Liu, Fang; Wang, Yinhai; Wang, Hua
2015-11-01
Taxi GPS trajectory data contain massive spatial and temporal information on urban human activity and mobility. Taking taxis as mobile sensors, the information derived from taxi trips benefits city and transportation planning. The original data used in this study were collected from more than 1100 taxi drivers in Harbin city. We first divide the city area into 400 different transportation districts and analyze the origin and destination distributions in the urban area on weekdays and weekends. The Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm is used to cluster pick-up and drop-off locations. Furthermore, four spatial interaction models are calibrated and compared based on trajectories in the shopping center of Harbin city to study pick-up location searching behavior. By extracting taxi trips from the GPS data, travel distance, time and average speed in occupied and non-occupied status are then used to investigate human mobility. Finally, we use the observed OD matrix of the center area of Harbin city to model traffic distribution patterns based on the entropy-maximizing method, and the estimation performance verifies its effectiveness in the case study.
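DBSCAN groups pick-up/drop-off points into density-connected clusters and marks sparse points as noise, with no need to fix the number of clusters in advance. A minimal self-contained implementation on toy coordinates; the eps and min_pts values and the points are illustrative, not the study's settings:

```python
import numpy as np

def dbscan(pts, eps, min_pts):
    """Minimal DBSCAN: returns a cluster id per point, -1 for noise."""
    n = len(pts)
    labels = np.full(n, -1)
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    neigh = [np.flatnonzero(d[i] <= eps) for i in range(n)]
    visited = np.zeros(n, dtype=bool)
    cid = 0
    for i in range(n):
        if visited[i] or len(neigh[i]) < min_pts:
            continue                      # skip visited or non-core points
        stack = [i]
        visited[i] = True
        while stack:                      # grow the cluster from point i
            j = stack.pop()
            labels[j] = cid
            if len(neigh[j]) >= min_pts:  # expand only from core points
                for k in neigh[j]:
                    if not visited[k]:
                        visited[k] = True
                        stack.append(k)
        cid += 1
    return labels

# two hypothetical pick-up hot spots plus one isolated drop-off
pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                [5.0, 5.0], [5.1, 5.0], [5.0, 5.1],
                [10.0, 10.0]])
labels = dbscan(pts, eps=0.5, min_pts=3)
```

The two dense triplets form separate clusters and the isolated point is labeled noise, which is the behavior that makes DBSCAN suited to hot-spot detection in GPS data.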
Cowan, Cameron S; Sabharwal, Jasdeep; Wu, Samuel M
2016-09-01
Reverse correlation methods such as spike-triggered averaging consistently identify the spatial center in the linear receptive fields (RFs) of retinal ganglion cells (GCs). However, the spatial antagonistic surround observed in classical experiments has proven more elusive. Tests for the antagonistic surround have heretofore relied on models that make questionable simplifying assumptions, such as space-time separability and radial homogeneity/symmetry. We circumvented these, along with other common assumptions, and observed a linear antagonistic surround in 754 of 805 mouse GCs. By characterizing the RF's space-time structure, we found that the overall linear RF's inseparability could be accounted for both by tuning differences between the center and surround and by differences within the surround. Finally, we applied this approach to characterize spatial asymmetry in the RF surround. These results shed new light on the spatiotemporal organization of GC linear RFs and highlight a major contributor to their inseparability. © 2016 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of the American Physiological Society and The Physiological Society.
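Spike-triggered averaging, the reverse-correlation baseline these results build on, can be sketched as follows. The white-noise stimulus, the threshold spike generator, and the three-sample filter are all toy assumptions, not the paper's recordings.

```python
# Spike-triggered averaging sketch: recover a cell's linear temporal filter
# by averaging the stimulus windows that precede each spike.
import random

random.seed(0)
FILTER = [0.2, 0.5, 1.0]               # "true" filter, most recent sample last
stimulus = [random.gauss(0, 1) for _ in range(20000)]

# Generate spikes when the linearly filtered stimulus crosses a threshold.
spikes = []
for t in range(len(FILTER), len(stimulus)):
    drive = sum(f * s for f, s in zip(FILTER, stimulus[t - len(FILTER):t]))
    if drive > 2.0:
        spikes.append(t)

# STA: element-wise mean of the pre-spike stimulus windows.
windows = [stimulus[t - len(FILTER):t] for t in spikes]
sta = [sum(w[k] for w in windows) / len(windows) for k in range(len(FILTER))]
```

For a Gaussian stimulus the STA is proportional to the underlying linear filter, so its samples should mirror the filter's increasing weights.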
Redistributing population data across a regular spatial grid according to building characteristics
NASA Astrophysics Data System (ADS)
Calka, Beata; Bielecka, Elzbieta; Zdunkiewicz, Katarzyna
2016-12-01
Population data are generally provided by state census organisations at predefined census enumeration units. However, these datasets are very often required at user-defined spatial units that differ from the census output levels. A number of population estimation techniques have been developed to address this problem. This article is one such attempt, aimed at improving county-level population estimates by using spatial disaggregation models supported by building characteristics, derived from the national topographic database, and the average area of a flat. The experimental gridded population surface was created for Opatów county, a sparsely populated rural region in central Poland. The method relies on geolocating population counts in buildings, taking into account building volume and structural building type, and then aggregating the population totals into a 1 km quadrilateral grid. The overall quality of the population distribution surface, expressed as RMSE, equals 9 persons, and the MAE equals 0.01. We also discovered that nearly 20% of the total county area is unpopulated, and that 80% of the people live on 33% of the county territory.
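The disaggregation idea, allocating a census total to buildings in proportion to a weight such as building volume and then summing by grid cell, can be sketched as follows. The building list and single-weight scheme are invented for illustration; the study additionally used structural building type and average flat area.

```python
# Dasymetric disaggregation sketch: distribute a census population total to
# buildings in proportion to building volume, then aggregate to grid cells.

def disaggregate(total_pop, buildings):
    """buildings: list of (grid_cell, volume_m3). Returns {cell: population}."""
    total_volume = sum(v for _, v in buildings)
    grid = {}
    for cell, volume in buildings:
        grid[cell] = grid.get(cell, 0.0) + total_pop * volume / total_volume
    return grid

# Four invented buildings in three grid cells, 1000 people in the county.
buildings = [("A", 3000.0), ("A", 1000.0), ("B", 4000.0), ("C", 2000.0)]
grid = disaggregate(1000, buildings)
```

The allocation is pycnophylactic: the gridded values sum back to the original census total.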
Xia, Zhang; Jing-Bo, Xue; He-Hua, Hu; Xiong, Liu; Cai-Xia, Cui; Xiao-Hong, Wen; Xiao-Ping, Xie; Wei-Rong, Zhang; Rong, Tian; Li-Chun, Dong; Chun-Li, Cao; Shi-Zhu, Li; Yi-Biao, Zhou
2017-03-07
This study aimed to understand the spatial distribution characteristics of wild feces in the schistosomiasis-endemic areas of Jiangling County, Hubei Province, and to further explore the sources of infection efficiently, so as to provide evidence for the development of corresponding monitoring and response technology. In 2011, fresh wild feces were surveyed every two months in 15 villages selected by the severity of historical endemicity in Jiangling County. The schistosome miracidium hatching method was used to test the wild feces for schistosome infection. Descriptive and spatial analyses were used to describe the spatial distribution of the wild feces. In total, 701 wild feces samples were collected, with an average density of 0.0556/100 m², and the positive rate of the wild feces was 11.70% (82/701). Regression analysis showed a positive spatial correlation between the positive rate of wild feces and the human infection rate, the area with infected Oncomelania hupensis, and the number of fenced cattle; the adjusted R² of the model was 0.58. Since the infection rate of wild feces is spatially positively correlated with the human infection rate, the area with infected O. hupensis, and the number of fenced cattle in Jiangling County, prevention and control measures can be targeted according to the spatial distribution of the positive wild feces.
Chagas disease vector control and Taylor's law
Rodríguez-Planes, Lucía I.; Gaspe, María S.; Cecere, María C.; Cardinal, Marta V.
2017-01-01
Background: Large spatial and temporal fluctuations in the population density of living organisms have profound consequences for biodiversity conservation, food production, pest control and disease control, especially vector-borne disease control. Chagas disease vector control based on insecticide spraying could benefit from improved concepts and methods to deal with spatial variations in vector population density. Methodology/Principal findings: We show that Taylor's law (TL) of fluctuation scaling describes accurately the mean and variance over space of relative abundance, by habitat, of four insect vectors of Chagas disease (Triatoma infestans, Triatoma guasayana, Triatoma garciabesi and Triatoma sordida) in 33,908 searches of people's dwellings and associated habitats in 79 field surveys in four districts in the Argentine Chaco region, before and after insecticide spraying. As TL predicts, the logarithm of the sample variance of bug relative abundance closely approximates a linear function of the logarithm of the sample mean of abundance in different habitats. Slopes of TL indicate spatial aggregation or variation in habitat suitability. Predictions of new mathematical models of the effect of vector control measures on TL agree overall with field data before and after community-wide spraying of insecticide. Conclusions/Significance: A spatial Taylor's law identifies key habitats with high average infestation and spatially highly variable infestation, providing a new instrument for the control and elimination of the vectors of a major human disease. PMID:29190728
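Fitting Taylor's law reduces to ordinary least squares on log-transformed habitat means and variances. The sketch below uses synthetic counts constructed so that the variance is exactly proportional to the square of the mean (slope b = 2); it is not the paper's triatomine data.

```python
# Taylor's law sketch: fit log10(variance) = log10(a) + b * log10(mean)
# across habitats by ordinary least squares.
from math import log10

def taylor_fit(samples_by_habitat):
    """samples_by_habitat: list of lists of counts. Returns (log10_a, b)."""
    xs, ys = [], []
    for counts in samples_by_habitat:
        n = len(counts)
        mean = sum(counts) / n
        var = sum((c - mean) ** 2 for c in counts) / (n - 1)
        if mean > 0 and var > 0:
            xs.append(log10(mean))
            ys.append(log10(var))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b          # (log10 of a, slope b)

# Synthetic habitats: each pair [0, 2m] has mean m and variance 2*m**2,
# so the log-log relation is exactly linear with slope 2.
habitats = [[0, 2], [0, 20], [0, 200]]
log_a, b = taylor_fit(habitats)
```

A slope near 2 indicates strong spatial aggregation, the regime the abstract associates with habitat-dependent infestation.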
Zhang, Peng; Chen, Xiaoling; Lu, Jianzhong; Zhang, Wei
2015-12-01
Numerical models are important tools that are used in studies of sediment dynamics in inland and coastal waters, and these models can now benefit from the use of integrated remote sensing observations. This study explores a scheme for assimilating remotely sensed suspended sediment (from charge-coupled device (CCD) images obtained from the Huanjing (HJ) satellite) into a two-dimensional sediment transport model of Poyang Lake, the largest freshwater lake in China. Optimal interpolation is used as the assimilation method, and model predictions are obtained by combining four remote sensing images. The parameters for optimal interpolation are determined through a series of assimilation experiments evaluating the sediment predictions against field measurements. The model with assimilation of remotely sensed sediment reduces the root-mean-square error of the predicted sediment concentrations by 39.4% relative to the model without assimilation, demonstrating the effectiveness of the assimilation scheme. The spatial effect of assimilation is explored by comparing model predictions with remotely sensed sediment, revealing that the model with assimilation generates reasonable spatial distribution patterns of suspended sediment. The temporal effect of assimilation on the model's predictive capabilities varies spatially, with an average temporal effect of approximately 10.8 days. The current velocities, which dominate the rate and direction of sediment transport, most likely account for the spatial differences in the temporal effect of assimilation on model predictions.
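At a single grid point, the optimal interpolation update is a variance-weighted blend of the model value and the observation. The sediment numbers below are illustrative, not from the Poyang Lake experiments.

```python
# Scalar optimal-interpolation (BLUE) sketch: the analysis nudges the model
# value toward the observation, weighted by the two error variances.

def oi_update(model, obs, var_model, var_obs):
    """Return the analysis value and its error variance."""
    gain = var_model / (var_model + var_obs)   # Kalman-style weight
    analysis = model + gain * (obs - model)
    var_analysis = (1.0 - gain) * var_model
    return analysis, var_analysis

# Model says 50 mg/L suspended sediment, satellite retrieval says 80 mg/L;
# equal error variances -> the analysis is the midpoint, with halved variance.
analysis, var_a = oi_update(50.0, 80.0, var_model=100.0, var_obs=100.0)
```

The full scheme applies this blend field-wide with spatially correlated error covariances, but the per-point weighting logic is the same.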
A spatial cluster analysis of tractor overturns in Kentucky from 1960 to 2002
Saman, D.M.; Cole, H.P.; Odoi, A.; Myers, M.L.; Carey, D.I.; Westneat, S.C.
2012-01-01
Background: Agricultural tractor overturns without rollover protective structures are the leading cause of farm fatalities in the United States. To our knowledge, no studies have incorporated the spatial scan statistic in identifying high-risk areas for tractor overturns. The aim of this study was to determine whether tractor overturns cluster in certain parts of Kentucky and identify factors associated with tractor overturns. Methods: A spatial statistical analysis using Kulldorff's spatial scan statistic was performed to identify county clusters at greatest risk for tractor overturns. A regression analysis was then performed to identify factors associated with tractor overturns. Results: The spatial analysis revealed a cluster of higher than expected tractor overturns in four counties in northern Kentucky (RR = 2.55) and 10 counties in eastern Kentucky (RR = 1.97). Higher rates of tractor overturns were associated with steeper average percent slope of pasture land by county (p = 0.0002) and a greater percent of total tractors with less than 40 horsepower by county (p < 0.0001). Conclusions: This study reveals that geographic hotspots of tractor overturns exist in Kentucky and identifies factors associated with overturns. This study provides policymakers a guide to targeted county-level interventions (e.g., roll-over protective structures promotion interventions) with the intention of reducing tractor overturns in the highest risk counties in Kentucky. © 2012 Saman et al.
Ray, J D
2001-09-28
The National Park Service (NPS) has tested and used passive ozone samplers for several years to get baseline values for parks and to determine the spatial variability within parks. Experience has shown that the Ogawa passive samplers can provide +/-10% accuracy when used with a quality assurance program consisting of blanks, duplicates, collocated instrumentation, and a standard operating procedure that carefully guides site operators. Although the passive device does not meet EPA criteria as a certified method (mainly, that hourly values be measured), it does provide seasonal summed values of ozone. The seasonal ozone concentrations from the passive devices can be compared to other monitoring to determine baseline values, trends, and spatial variations. This point is illustrated with some kriged interpolation maps of ozone statistics. Passive ozone samplers were used to get elevational gradients and spatial distributions of ozone within a park. This was done in varying degrees at Mount Rainier, Olympic, Sequoia-Kings Canyon, Yosemite, Joshua Tree, Rocky Mountain, and Great Smoky Mountains national parks. The ozone has been found to vary by factors of 2 and 3 within a park when average ozone is compared between locations. Specific examples of the spatial distributions of ozone in three parks within California are given using interpolation maps. Positive aspects and limitations of the passive sampling approach are presented.
Zhang, Renduo; Wood, A Lynn; Enfield, Carl G; Jeong, Seung-Woo
2003-01-01
A stochastic analysis was performed to assess the effect of soil spatial variability and heterogeneity on the recovery of denser-than-water nonaqueous phase liquids (DNAPL) during surfactant-enhanced remediation. UTCHEM, a three-dimensional, multicomponent, multiphase, compositional model, was used to simulate water flow and chemical transport processes in heterogeneous soils. Soil spatial variability and heterogeneity were accounted for by treating the soil permeability as a spatial random variable, and a geostatistical method was used to generate random distributions of the permeability. The randomly generated permeability fields were incorporated into UTCHEM to simulate DNAPL transport in heterogeneous media, and a stochastic analysis was conducted on the simulated results. From the analysis, an exponential relationship between average DNAPL recovery and soil heterogeneity (defined as the standard deviation of the log of permeability) was established with a coefficient of determination (r²) of 0.991, indicating that DNAPL recovery decreases exponentially with increasing soil heterogeneity. Temporal and spatial distributions of relative saturations in the water phase, DNAPL, and microemulsion in heterogeneous soils were compared with those in homogeneous soils and related to soil heterogeneity. The cleanup time and the uncertainty in determining DNAPL distributions in heterogeneous soils were also quantified. This study provides useful information for designing strategies for the characterization and remediation of nonaqueous phase liquid-contaminated soils with spatial variability and heterogeneity.
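The exponential recovery-heterogeneity relationship reported above can be recovered from simulated data by log-linear least squares. The sketch below fits recovery = a·exp(−b·σ) to synthetic points; the coefficients are invented, not UTCHEM output.

```python
# Log-linear least-squares fit of recovery = a * exp(-b * sigma).
from math import exp, log

def fit_exponential(sigmas, recoveries):
    """Fit ln(recovery) = ln(a) - b * sigma; returns (a, b)."""
    xs, ys = sigmas, [log(r) for r in recoveries]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return exp(my - slope * mx), -slope

# Synthetic data generated from recovery = 0.95 * exp(-1.5 * sigma),
# where sigma is the standard deviation of log permeability.
sigmas = [0.0, 0.5, 1.0, 1.5, 2.0]
recoveries = [0.95 * exp(-1.5 * s) for s in sigmas]
a, b = fit_exponential(sigmas, recoveries)
```

With noise-free synthetic data the fit recovers the generating coefficients exactly; with simulation output it would return least-squares estimates and an r² could be computed alongside.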
An investigation on thermal patterns in Iran based on spatial autocorrelation
NASA Astrophysics Data System (ADS)
Fallah Ghalhari, Gholamabbas; Dadashi Roudbari, Abbasali
2018-02-01
The present study investigated temporal-spatial and monthly patterns of temperature in Iran using newer spatial statistical methods such as cluster and outlier analysis and hotspot analysis. To do so, a climatic parameter, the monthly average temperature of 122 synoptic stations, was assessed. Statistical analysis showed that January, at 120.75%, had the largest fluctuation among the studied months. The Global Moran's Index revealed that yearly changes of temperature in Iran follow a strongly spatially clustered pattern. Findings showed that the largest thermal cluster pattern in Iran, 0.975388, occurred in May. Cluster and outlier analyses showed that thermal homogeneity in Iran decreases in cold months and increases in warm months, owing to the radiation angle and the synoptic systems that strongly influence the thermal regime in Iran. Elevation, however, plays the most notable part, as shown by a geographically weighted regression model. Iran's thermal analysis through hotspots showed that hot thermal patterns (very hot, hot, and semi-hot) are dominant in the south, covering an area of 33.5% (about 552,145.3 km²). Regions such as mountain foothills and lowlands lack any significant spatial autocorrelation (25.2%, covering about 415,345.1 km²). The last is the cold thermal area (very cold, cold, and semi-cold), with about 25.2%, covering about 552,145.3 km² of the whole area of Iran.
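Global Moran's I, the clustering statistic used above, can be computed directly from its definition. The sketch below uses rook adjacency on an invented 3×3 grid; positive values indicate spatial clustering, negative values dispersion.

```python
# Global Moran's I sketch: I = (n / W) * sum_ij w_ij (x_i - m)(x_j - m)
#                               / sum_i (x_i - m)^2, with rook weights.

def morans_i(grid):
    rows, cols = len(grid), len(grid[0])
    cells = [(r, c) for r in range(rows) for c in range(cols)]
    mean = sum(grid[r][c] for r, c in cells) / len(cells)
    num = w_sum = 0.0
    for r, c in cells:
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # rook neighbors
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                num += (grid[r][c] - mean) * (grid[rr][cc] - mean)
                w_sum += 1.0
    denom = sum((grid[r][c] - mean) ** 2 for r, c in cells)
    return len(cells) / w_sum * num / denom

clustered = [[1, 1, 1], [5, 5, 5], [9, 9, 9]]  # smooth gradient: clustered
checker = [[1, 9, 1], [9, 1, 9], [1, 9, 1]]    # alternating: dispersed
```

On the smooth gradient Moran's I is positive (here 0.5); on the checkerboard every neighbor pair has deviations of opposite sign, so I is negative.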
NASA Astrophysics Data System (ADS)
Torres, A.; Hassan Esfahani, L.; Ebtehaj, A.; McKee, M.
2016-12-01
While the coarse space-time resolution of satellite observations in the visible to near-infrared (VIR) is a serious limiting factor for applications in precision agriculture, high-resolution remote sensing observations by Unmanned Aerial Systems (UAS) are site-specific and still practically restrictive for widespread applications in precision agriculture. We present a modern spatial downscaling approach that relies on new sparse approximation techniques. The downscaling approach learns from a large set of coincident low- and high-resolution satellite and UAS observations to effectively downscale the satellite imagery in VIR bands. We focus on field experiments using the AggieAirTM platform and Landsat 7 ETM+ and Landsat 8 OLI observations obtained in an intensive field campaign in 2013 over an agricultural field in Scipio, Utah. The results show that the downscaling methods can effectively increase the resolution of Landsat VIR imagery by factors of 2 to 4, from 30 m to 15 m and 7.5 m, respectively. Specifically, on average, the downscaling method reduces the root mean squared error by up to 26%, taking bias-corrected AggieAir imagery as the reference.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandrosov, V I
2015-10-31
This paper analyses low-coherence tomography of absorbing media with the use of spatially separated counterpropagating object and reference beams. A probe radiation source based on a broadband terahertz (THz) generator that emits sufficiently intense THz waves in the spectral range 90–350 μm, and a prism spectroscope that separates out eight narrow intervals from this range, are proposed for implementing this method. This allows media of interest to be examined by low-coherence tomography with counterpropagating beams in each interval. It is shown that, according to the Rayleigh criterion, the method is capable of resolving inhomogeneities with a size near one quarter of the coherence length of the probe radiation. In addition, the proposed tomograph configuration allows one to determine the average surface asperity slope and the refractive index and absorption coefficient of inhomogeneities 180 to 700 mm in size, and to obtain spectra of such inhomogeneities in order to determine their chemical composition.
NASA Astrophysics Data System (ADS)
Zhang, Siqian; Kuang, Gangyao
2014-10-01
In this paper, a novel three-dimensional imaging algorithm for downward-looking linear array SAR is presented. To improve the resolution, the multiple signal classification (MUSIC) algorithm is used. However, since the scattering centers are always correlated in a real SAR system, the estimated covariance matrix becomes singular. To address this problem, a three-dimensional spatial smoothing method is proposed to restore the singular covariance matrix to full rank. The three-dimensional signal matrix is divided into a set of orthogonal three-dimensional subspaces, and the array correlation matrix is computed as the average of the correlation matrices of all subspaces. In addition, the spectral height of the MUSIC peaks contains no information about the scattering intensity of the different scattering centers, so it is difficult to reconstruct the backscattering information directly; a least squares strategy is therefore used to estimate the amplitude of each scattering center. These theoretical results are verified by 3-D scene simulations and experiments on real data.
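The rank-restoring effect of spatial smoothing can be demonstrated in one dimension (the paper's method is the three-dimensional analogue). The array size, subarray size, and scatterer directions below are assumptions chosen for illustration.

```python
# Forward spatial-smoothing sketch: averaging the covariance matrices of
# overlapping subarrays restores the rank lost when scatterers are coherent.
import numpy as np

def steering(n, theta):
    """Steering vector of a half-wavelength uniform linear array."""
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta))

N, M = 8, 5                              # full array size, subarray size
a1, a2 = steering(N, 0.3), steering(N, -0.5)
x = a1 + 0.7 * a2                        # two fully coherent scatterers
R_full = np.outer(x, x.conj())           # rank 1: plain MUSIC would fail

# Average the covariance of every length-M subarray (forward smoothing).
R_smooth = sum(R_full[l:l + M, l:l + M]
               for l in range(N - M + 1)) / (N - M + 1)
```

After smoothing the covariance has rank equal to the number of coherent scatterers, so a MUSIC pseudospectrum built from its noise subspace can again resolve both directions.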
Surface characteristics and damage distributions of diamond wire sawn wafers for silicon solar cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sopori, Bhushan; Devayajanam, Srinivas; Basnyat, Prakash
2016-01-01
This paper describes surface characteristics, in terms of morphology, roughness, and near-surface damage, of Si wafers cut by diamond wire sawing (DWS) of Si ingots under different cutting conditions. Diamond wire sawn Si wafers exhibit nearly periodic surface features of different spatial wavelengths, which correspond to the kinematics of various movements during wafering, such as ingot feed, wire reciprocation, and wire snap. The surface damage occurs in the form of frozen-in dislocations, phase changes, and microcracks. The in-depth damage was determined by conventional methods such as TEM, SEM, and angle-polishing/defect-etching. However, because these methods only provide local information, we have also applied a new technique that determines the average damage depth over a large area. This technique uses sequential measurement of the minority carrier lifetime after etching thin layers from the surfaces. The lateral spatial damage variations, which seem to be mainly related to the wire reciprocation process, were observed by photoluminescence and minority carrier lifetime mapping. Our results show a strong dependence of damage depth on the diamond grit size and wire usage.
Singla, Neeru; Srivastava, Vishal; Mehta, Dalip Singh
2018-05-01
Malaria is a life-threatening infectious blood disease of humans and other animals, caused by parasitic protozoans of the genus Plasmodium and especially prevalent in developing countries. The gold-standard method for detecting malaria is microscopic examination of chemically treated blood smears. We developed an automated optical spatial coherence tomographic system using a machine learning approach for fast identification of malaria-infected cells. In this study, 28 samples (15 healthy; 13 at malaria-infected stages of red blood cells) were imaged by the developed system and 13 features were extracted. We designed a multilevel ensemble-based classifier for quantitative prediction of the different stages of malaria cells. The proposed classifier was evaluated with repeated k-fold cross-validation and achieved a high average accuracy of 97.9% for identifying the late trophozoite stage of infected cells. Overall, our proposed system and multilevel ensemble model have substantial potential to detect the different stages of malaria infection without staining or an expert. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
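Repeated k-fold cross-validation, the evaluation protocol described above, can be sketched as follows. The trivial majority-class classifier and the label list stand in for the paper's 13-feature multilevel ensemble; only the resampling logic is the point here.

```python
# Repeated k-fold cross-validation sketch with a placeholder classifier.
import random

def repeated_kfold_accuracy(labels, k=5, repeats=10, seed=1):
    rng = random.Random(seed)
    idx = list(range(len(labels)))
    accs = []
    for _ in range(repeats):
        rng.shuffle(idx)
        folds = [idx[i::k] for i in range(k)]
        for f, test in enumerate(folds):
            train = [i for g, fold in enumerate(folds) if g != f
                     for i in fold]
            train_labels = [labels[i] for i in train]
            # Placeholder "model": predict the training majority class.
            majority = max(set(train_labels), key=train_labels.count)
            accs.append(sum(labels[i] == majority for i in test) / len(test))
    return sum(accs) / len(accs)

# Class balance mirroring the 28-sample study design (15 healthy, 13 infected).
labels = ["healthy"] * 15 + ["infected"] * 13
acc = repeated_kfold_accuracy(labels)
```

Swapping the majority-class placeholder for a real ensemble fitted on the training folds turns this into the full evaluation loop.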
Study of phase clustering method for analyzing large volumes of meteorological observation data
NASA Astrophysics Data System (ADS)
Volkov, Yu. V.; Krutikov, V. A.; Botygin, I. A.; Sherstnev, V. S.; Sherstneva, A. I.
2017-11-01
The article describes an iterative parallel phase-grouping algorithm for temperature field classification. The algorithm is based on a modified method of structure formation using the analytic signal. The developed method makes it possible to solve climate classification tasks, as well as climatic zoning, at any temporal or spatial scale. When applied to surface temperature measurement series, the algorithm finds climatic structures with correlated changes of the temperature field, supports conclusions on climate uniformity in a given area, and allows climate changes to be tracked over time by analyzing offsets in the type groups. The information on climate type groups specific to selected geographical areas is expanded by a genetic scheme of class distribution depending on changes in the mutual correlation level between monthly average ground temperatures.
2015-08-01
[Fragmentary record] Figure 4: data-based proportion of DDD, DDE and DDT in total DDx in fish and sediment. Abbreviations: DDD, dichlorodiphenyldichloroethane; DDE, dichlorodiphenyldichloroethylene; DDT, dichlorodiphenyltrichloroethane; DoD, Department of Defense; ERM, ... DDD at the other site. The spatially-explicit model consistently predicts tissue concentrations that closely match both the average and the ...
Drive by Soil Moisture Measurement: A Citizen Science Project
NASA Astrophysics Data System (ADS)
Senanayake, I. P.; Willgoose, G. R.; Yeo, I. Y.; Hancock, G. R.
2017-12-01
Two of the common attributes of soil moisture are that at any given time it varies markedly from point to point, and that a significant deterministic pattern underlies this spatial variation, typically accounting for 50% of the spatial variability. The spatial variation makes it difficult to determine the time-varying catchment average soil moisture from field measurements, because any individual measurement is unlikely to equal the average for the catchment. The traditional solution is to make many measurements (e.g. with soil moisture probes) spread over the catchment, which is costly and manpower-intensive, particularly if a time series of soil moisture variation across a catchment is needed. An alternative approach, explored in this poster, is to use the deterministic spatial pattern of soil moisture to calibrate one site (e.g. a permanent soil moisture probe at a weather station) to the spatial pattern of soil moisture over the study area. The challenge is then to determine the spatial pattern of soil moisture. This poster will present results from a proof-of-concept project, in which data were collected by a number of undergraduate engineering students to estimate the spatial pattern. The approach was to drive along a series of roads in a catchment and collect soil moisture measurements at the roadside using field-portable soil moisture probes. This drive was repeated a number of times over the semester, and the time variation and spatial persistence of the soil moisture pattern were examined. Provided that the students could return to exactly the same location on each collection day, there was a strong persistent pattern in the soil moisture, even while the average soil moisture varied temporally as a result of preceding rainfall. The poster will present results and analysis of the student data, and compare these results with several field sites where we have spatially distributed, permanently installed soil moisture probes.
The poster will also outline an experimental design, based on our experience, that will underpin a proposed citizen science project involving community environment and farming groups, and high school students.
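The calibration idea above, using a persistent spatial pattern to correct a single site's reading toward the catchment average, can be sketched as follows. The survey values and offsets are synthetic, not the student data.

```python
# Temporal-stability sketch: if each site's offset from the spatial mean is
# persistent across surveys, one calibrated site predicts the catchment mean.

def site_offsets(surveys):
    """Mean offset of each site from the spatial mean, across surveys."""
    n_sites = len(surveys[0])
    offsets = [0.0] * n_sites
    for survey in surveys:
        spatial_mean = sum(survey) / len(survey)
        for i, value in enumerate(survey):
            offsets[i] += (value - spatial_mean) / len(surveys)
    return offsets

# Two synthetic "drives" over three sites (volumetric soil moisture).
# Site 0 is persistently 0.05 wetter than the mean; site 2 is 0.05 drier.
surveys = [[0.25, 0.20, 0.15], [0.35, 0.30, 0.25]]
offsets = site_offsets(surveys)

# Estimate the catchment mean on a new day from site 0's reading alone.
estimate = 0.31 - offsets[0]
```

The subtraction of the site's persistent offset is exactly the role a permanent probe at a weather station would play once calibrated against the mapped pattern.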
NASA Astrophysics Data System (ADS)
Ashraf, M. A. M.; Kumar, N. S.; Yusoh, R.; Hazreek, Z. A. M.; Aziman, M.
2018-04-01
Site classification using the average shear-wave velocity over the top 30 meters (Vs30) is a standard parameter. Numerous geophysical methods have been proposed for estimating shear-wave velocity using an assortment of testing configurations, processing methods, and inversion algorithms. The Multichannel Analysis of Surface Waves (MASW) method is widely practiced by specialists and professionals in geotechnical engineering for local site characterization and classification. This study aims to determine the site classification of soft and hard ground using the MASW method. The subsurface classification was made using the National Earthquake Hazards Reduction Program (NEHRP) and International Building Code (IBC) classifications. Two sites were chosen for acquiring shear-wave velocities: Pulau Pinang for soft soil and Perlis for hard rock. The results suggest that the MASW technique can be used to map the spatial distribution of shear-wave velocity (Vs30) in soil and rock to characterize areas.
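Vs30 is a travel-time (harmonic) average of shear-wave velocity over the top 30 m, which is then mapped to a site class. The layer model below is invented; the class boundaries follow the standard NEHRP table.

```python
# Vs30 sketch: 30 m divided by the total shear-wave travel time through the
# layers in the top 30 m, then an NEHRP-style site class lookup.

def vs30(layers):
    """layers: list of (thickness_m, vs_m_per_s), top-down, >= 30 m total."""
    depth = travel_time = 0.0
    for thickness, vs in layers:
        use = min(thickness, 30.0 - depth)   # only count the top 30 m
        if use <= 0:
            break
        travel_time += use / vs
        depth += use
    return 30.0 / travel_time

def nehrp_class(v):
    if v > 1500: return "A"   # hard rock
    if v > 760:  return "B"   # rock
    if v > 360:  return "C"   # very dense soil / soft rock
    if v > 180:  return "D"   # stiff soil
    return "E"                # soft soil

# Invented soft-soil profile: 10 m at 150 m/s over 20 m at 250 m/s.
soft = [(10.0, 150.0), (20.0, 250.0)]
v = vs30(soft)
```

The harmonic average weights slow layers heavily, which is why a thin soft surface layer can pull a profile into a softer site class than its arithmetic mean velocity would suggest.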
Development of a Hybrid RANS/LES Method for Compressible Mixing Layer Simulations
NASA Technical Reports Server (NTRS)
Georgiadis, Nicholas J.; Alexander, J. Iwan D.; Reshotko, Eli
2001-01-01
A hybrid method has been developed for simulations of compressible turbulent mixing layers. Such mixing layers dominate the flows in the exhaust systems of modern-day aircraft and also those of hypersonic vehicles currently under development. The hybrid method uses a Reynolds-averaged Navier-Stokes (RANS) procedure to calculate wall-bounded regions entering a mixing section, and a Large Eddy Simulation (LES) procedure to calculate the mixing-dominated regions. A numerical technique was developed to enable the use of the hybrid RANS/LES method on stretched, non-Cartesian grids. The hybrid RANS/LES method is applied to a benchmark compressible mixing layer experiment. Preliminary two-dimensional calculations are used to investigate the effects of axial grid density and boundary conditions. The actual LES calculations, performed in three spatial directions, indicated an initial vortex shedding followed by rapid transition to turbulence, in agreement with experimental observations.
NASA Astrophysics Data System (ADS)
Jordan, Gyozo; Petrik, Attila; De Vivo, Benedetto; Albanese, Stefano; Demetriades, Alecos; Sadeghi, Martiya
2017-04-01
Several studies have investigated the spatial distribution of chemical elements in topsoil (0-20 cm) within the framework of the EuroGeoSurveys Geochemistry Expert Group's 'Geochemical Mapping of Agricultural and Grazing Land Soil' project. Most of these studies used geostatistical analyses and interpolated concentration maps, together with Exploratory and Compositional Data Analysis, to identify anomalous patterns. The objective of our investigation is to demonstrate the use of digital image processing techniques for reproducible spatial pattern recognition and quantitative spatial feature characterisation. The concentration of a single element (Ni) in agricultural topsoil is used to perform the detailed spatial analysis and to relate the identified features to possible underlying processes. In this study, simple univariate statistical methods were implemented first, and Tukey's inner-fence criterion was used to delineate statistical outliers. Linear and triangular irregular network (TIN) interpolation was applied to the outlier-free Ni data points, which were resampled to a 10*10 km grid. Successive moving average smoothing was applied to generalise the TIN model, suppressing small-scale features while enhancing significant large-scale features of the spatial distribution of Ni concentration in European topsoil. The TIN map smoothed with a moving average filter revealed the spatial trends and patterns without losing much detail, and it was used as the input to digital image processing steps such as local maxima and minima determination, digital cross sections, gradient magnitude and gradient direction calculation, second-derivative profile curvature calculation, edge detection, local variability assessment, lineament density, and directional variogram analyses. The detailed image processing analysis revealed several NE-SW, E-W and NW-SE oriented elongated features, which coincide with different spatial parameter classes and align with local maxima and minima.
The NE-SW oriented linear pattern is the dominant feature to the south of the last glaciation limit. Some of these linear features are parallel to the suture zone of the Iapetus Ocean, while the others follow the Alpine and Carpathian Chains. The highest variability zones of Ni concentration in topsoil are located in the Alps and in the Balkans where mafic and ultramafic rocks outcrop. The predominant NE-SW oriented pattern is also captured by the strong anisotropy in the semi-variograms in this direction. A single major E-W oriented north-facing feature runs along the southern border of the last glaciation zone. This zone also coincides with a series of local maxima in Ni concentration along the glaciofluvial deposits. The NW-SE elongated spatial features are less dominant and are located in the Pyrenees and Scandinavia. This study demonstrates the efficiency of systematic image processing analysis in identifying and characterising spatial geochemical patterns that often remain uncovered by the usual visual map interpretation techniques.
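The successive moving-average smoothing step used to generalise the gridded surface can be sketched as a simple 3×3 mean filter with edge handling. The grid values are invented.

```python
# Moving-average smoothing sketch: 3x3 mean filter over a gridded surface,
# shrinking the window at the grid edges.

def smooth(grid):
    rows, cols = len(grid), len(grid[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [grid[rr][cc]
                    for rr in range(max(0, r - 1), min(rows, r + 2))
                    for cc in range(max(0, c - 1), min(cols, c + 2))]
            out[r][c] = sum(vals) / len(vals)  # mean of the local window
    return out

grid = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]   # single concentration spike
sm = smooth(grid)
```

Applying the filter repeatedly (successive smoothing) progressively suppresses small-scale spikes while preserving broad gradients, which is what makes the smoothed surface suitable for edge detection and curvature analysis.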
Topp, Cairistiona F. E.; Moorby, Jon M.; Pásztor, László; Foyer, Christine H.
2018-01-01
Dairy farming is one of the most important sectors of United Kingdom (UK) agriculture. It faces major challenges due to climate change, which will have direct impacts on dairy cows as a result of heat stress. In the absence of adaptations, this could potentially lead to considerable milk loss. Using an 11-member climate projection ensemble, as well as an ensemble of 18 milk loss estimation methods, temporal changes in the milk production of UK dairy cows were estimated for the 21st century at a 25 km resolution in a spatially explicit way. While increases in UK temperatures are projected to lead to relatively low average annual milk losses, even for southern UK regions (<180 kg/cow), the 'hottest' 25×25 km grid cell in the hottest year in the 2090s showed an annual milk loss exceeding 1300 kg/cow. This figure represents approximately 17% of the potential milk production of today's average cow. Despite the potentially considerable inter-annual variability of annual milk loss, as well as the large differences between the climate projections, the variety of calculation methods is likely to introduce even greater uncertainty into milk loss estimations. To address this issue, a novel, more biologically appropriate mechanism of estimating milk loss is proposed that provides more realistic future projections. We conclude that South West England is the region most vulnerable to climate change economically, because it is characterised by a high dairy herd density and therefore potentially high heat stress-related milk loss. In the absence of mitigation measures, the estimated heat stress-related annual income loss for this region by the end of this century may reach £13.4M in average years and £33.8M in extreme years. PMID:29738581
Automatic delineation of brain regions on MRI and PET images from the pig.
Villadsen, Jonas; Hansen, Hanne D; Jørgensen, Louise M; Keller, Sune H; Andersen, Flemming L; Petersen, Ida N; Knudsen, Gitte M; Svarer, Claus
2018-01-15
The increasing use of the pig as a research model in neuroimaging requires standardized processing tools. For example, extraction of regional dynamic time series from brain PET images requires parcellation procedures that benefit from being automated. Manual inter-modality spatial normalization to an MRI atlas is operator-dependent, time-consuming, and can be inaccurate when cortical radiotracer binding or skull uptake is lacking. Here we present a parcellated PET template that allows for automatic spatial normalization to PET images of any radiotracer. MRI and [11C]Cimbi-36 PET scans obtained in sixteen pigs formed the basis for the atlas. The high-resolution MRI scans allowed for creation of an accurately averaged MRI template. By aligning the within-subject PET scans to their MRI counterparts, an averaged PET template was created in the same space. We developed an automatic procedure for spatial normalization of the averaged PET template to new PET images, thereby facilitating transfer of the atlas regional parcellation. Evaluation of the automatic spatial normalization procedure found the median voxel displacement to be 0.22 ± 0.08 mm using the MRI template with individual MRI images and 0.92 ± 0.26 mm using the PET template with individual [11C]Cimbi-36 PET images. We tested the automatic procedure by assessing eleven PET radiotracers with different kinetics and spatial distributions, using perfusion-weighted images of early PET time frames. We here present an automatic procedure for accurate and reproducible spatial normalization and parcellation of pig PET images of any radiotracer with reasonable blood-brain barrier penetration. Copyright © 2017 Elsevier B.V. All rights reserved.
Observational study of treatment space in individual neonatal cot spaces.
Hignett, Sue; Lu, Jun; Fray, Mike
2010-01-01
Technology developments in neonatal intensive care units have increased the spatial requirements for clinical activities. Because the effectiveness of healthcare delivery is determined in part by the design of the physical environment and the spatial organization of work, it is appropriate to apply an evidence-based approach to architectural design. This study aimed to provide empirical evidence of the spatial requirements for an individual cot or incubator space. Observational data from 2 simulation exercises were combined with an expert review to produce a final recommendation. A validated 5-step protocol was used to collect data. Step 1 defined the clinical specialty and space. In step 2, data were collected with 28 staff members and 15 neonates to produce a simulation scenario representing the frequent and safety-critical activities. In step 3, 21 staff members participated in functional space experiments to determine the average spatial requirements. Step 4 incorporated additional data (eg, storage and circulation) to produce a spatial recommendation. Finally, the recommendation was reviewed in step 5 by a national expert clinical panel to consider alternative layouts and technology. The average space requirement for an individual neonatal intensive care unit cot (incubator) space was 13.5 m2 (or 145.3 ft2). The circulation and storage space requirements added in step 4 increased this to 18.46 m2 (or 198.7 ft2). The expert panel reviewed the recommendation and agreed that the average individual cot space (13.5 m2/[or 145.3 ft2]) would accommodate variance in working practices. Care needs to be taken when extrapolating this recommendation to multiple cot areas to maintain the minimum spatial requirement.
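The dual metric/imperial figures quoted above are simple unit conversions; a short sketch confirming the arithmetic (1 m² ≈ 10.7639 ft²):

```python
M2_TO_FT2 = 10.7639  # square metres to square feet


def m2_to_ft2(area_m2):
    """Convert an area from square metres to square feet."""
    return area_m2 * M2_TO_FT2


# Cot space: 13.5 m2 -> ~145.3 ft2; with storage and circulation: 18.46 m2 -> ~198.7 ft2
```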
The importance of base flow in sustaining surface water flow in the Upper Colorado River Basin
Miller, Matthew P.; Buto, Susan G.; Susong, David D.; Rumsey, Christine
2016-01-01
The Colorado River has been identified as the most overallocated river in the world. Considering predicted future imbalances between water supply and demand and the growing recognition that base flow (a proxy for groundwater discharge to streams) is critical for sustaining flow in streams and rivers, there is a need to develop methods to better quantify present-day base flow across large regions. We adapted and applied the spatially referenced regression on watershed attributes (SPARROW) water quality model to assess the spatial distribution of base flow, the fraction of streamflow supported by base flow, and estimates of and potential processes contributing to the amount of base flow that is lost during in-stream transport in the Upper Colorado River Basin (UCRB). On average, 56% of the streamflow in the UCRB originated as base flow, and precipitation was identified as the dominant driver of spatial variability in base flow at the scale of the UCRB, with the majority of base flow discharge to streams occurring in upper elevation watersheds. The model estimates an average of 1.8 × 10¹⁰ m³/yr of base flow in the UCRB; greater than 80% of which is lost during in-stream transport to the Lower Colorado River Basin via processes including evapotranspiration and water diversion for irrigation. Our results indicate that surface waters in the Colorado River Basin are dependent on base flow, and that management approaches that consider groundwater and surface water as a joint resource will be needed to effectively manage current and future water resources in the Basin.
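The study estimates base flow with a calibrated SPARROW model rather than hydrograph filtering, but a common quick alternative for a single gauge is a one-parameter digital filter of the Lyne–Hollick type. The sketch below is illustrative only; the single forward pass and `alpha = 0.925` are conventional defaults, not choices from this study:

```python
def baseflow_fraction(q, alpha=0.925):
    """Share of total streamflow attributed to base flow, using a
    single forward pass of a Lyne-Hollick one-parameter digital filter.

    q: sequence of streamflow values (e.g. daily discharge).
    """
    qf_prev = 0.0  # quickflow component at the previous step
    base_total = 0.0
    for i, qi in enumerate(q):
        if i == 0:
            qf = 0.0  # assume the record starts on base flow
        else:
            qf = alpha * qf_prev + 0.5 * (1 + alpha) * (qi - q[i - 1])
        qf = min(max(qf, 0.0), qi)  # constrain base flow to [0, q]
        base_total += qi - qf
        qf_prev = qf
    return base_total / sum(q)
```

A steady hydrograph yields a base flow fraction of 1, while storm peaks are assigned to quickflow and lower the fraction.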
NASA Astrophysics Data System (ADS)
Sproles, Eric A.; Roth, Travis R.; Nolin, Anne W.
2017-02-01
In the Pacific Northwest, USA, the extraordinarily low snowpacks of winters 2013-2014 and 2014-2015 stressed regional water resources and the social-environmental system. We introduce two new approaches to better understand how seasonal snow water storage during these two winters would compare to snow water storage under warmer climate conditions. The first approach calculates a spatial-probabilistic metric representing the likelihood that the snow water storage of 2013-2014 and 2014-2015 would occur under +2 °C perturbed climate conditions. We computed snow water storage (basin-wide and across elevations) and the ratio of snow water equivalent to cumulative precipitation (across elevations) for the McKenzie River basin (3041 km²), a major tributary to the Willamette River in Oregon, USA. We applied these computations to calculate the occurrence probability for similarly low snow water storage under climate warming. Results suggest that, relative to +2 °C conditions, basin-wide snow water storage during winter 2013-2014 would be above average, while that of winter 2014-2015 would be far below average. Snow water storage on 1 April corresponds to a 42 % (2013-2014) and 92 % (2014-2015) probability of being met or exceeded in any given year. The second approach introduces the concept of snow analogs to improve the anticipatory capacity of climate change impacts on snow-derived water resources. The use of a spatial-probabilistic approach and snow analogs provide new methods of assessing basin-wide snow water storage in a non-stationary climate and are readily applicable in other snow-dominated watersheds.
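The occurrence probabilities quoted above (42 % and 92 %) are exceedance probabilities: the chance that a given year's 1 April snow water storage is met or exceeded under the perturbed climate. A minimal empirical estimator, assuming a sample of annual storage values (the study derives its distribution from perturbed-climate simulations; this sketch only shows the counting step):

```python
def exceedance_probability(sample, value):
    """Empirical probability that an annual value is met or exceeded.

    sample: annual values (e.g. simulated 1 April snow water storage)
    value:  the observed value to compare against
    """
    if not sample:
        raise ValueError("sample must be non-empty")
    return sum(1 for s in sample if s >= value) / len(sample)
```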
Spatially resolved D-T(2) correlation NMR of porous media.
Zhang, Yan; Blümich, Bernhard
2014-05-01
Within the past decade, 2D Laplace nuclear magnetic resonance (NMR) has been developed to analyze pore geometry and diffusion of fluids in porous media on the micrometer scale. Many objects like rocks and concrete are heterogeneous on the macroscopic scale, and an integral analysis of microscopic properties provides volume-averaged information. Magnetic resonance imaging (MRI) resolves this spatial average on the contrast scale set by the particular MRI technique. Desirable contrast parameters for studies of fluid transport in porous media derive from the pore-size distribution and the pore connectivity. These microscopic parameters are accessed by 1D and 2D Laplace NMR techniques. It is therefore desirable to combine MRI and 2D Laplace NMR to image functional information on fluid transport in porous media. Because 2D Laplace resolved MRI demands excessive measuring time, this study investigates the possibility to restrict the 2D Laplace analysis to the sum signals from low-resolution pixels, which correspond to pixels of similar amplitude in high-resolution images. In this exploratory study spatially resolved D-T2 correlation maps from glass beads and mortar are analyzed. Regions of similar contrast are first identified in high-resolution images to locate corresponding pixels in low-resolution images generated with D-T2 resolved MRI for subsequent pixel summation to improve the signal-to-noise ratio of contrast-specific D-T2 maps. This method is expected to contribute valuable information on correlated sample heterogeneity from the macroscopic and the microscopic scales in various types of porous materials including building materials and rock. Copyright © 2014 Elsevier Inc. All rights reserved.
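The core idea above — summing the decay signals of low-resolution pixels that share a contrast class identified in the high-resolution image, before the 2D Laplace inversion — amounts to a grouped summation. The function below is an illustrative sketch, not the authors' code; it assumes per-pixel signals stacked in an (n_pixels, n_points) array and integer class labels from the high-resolution segmentation:

```python
import numpy as np


def sum_signals_by_class(signals, labels):
    """Sum per-pixel decay signals within each contrast class.

    signals: (n_pixels, n_points) array of measured decay curves
    labels:  (n_pixels,) integer class id per pixel
    Returns a dict mapping class id -> summed signal (n_points,).
    Summing N similar pixels improves SNR by roughly sqrt(N).
    """
    signals = np.asarray(signals, dtype=float)
    labels = np.asarray(labels)
    return {int(lab): signals[labels == lab].sum(axis=0)
            for lab in np.unique(labels)}
```

Each summed signal would then feed a conventional D-T2 inversion, giving one contrast-specific correlation map per class.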
Optimising Habitat-Based Models for Wide-Ranging Marine Predators: Scale Matters
NASA Astrophysics Data System (ADS)
Scales, K. L.; Hazen, E. L.; Jacox, M.; Edwards, C. A.; Bograd, S. J.
2016-12-01
Predicting the responses of marine top predators to dynamic oceanographic conditions requires habitat-based models that sufficiently capture environmental preferences. Spatial resolution and temporal averaging of environmental data layers is a key aspect of model construction. The utility of surfaces contemporaneous to animal movement (e.g. daily, weekly), versus synoptic products (monthly, seasonal, climatological) is currently under debate, as is the optimal spatial resolution for predictive products. Using movement simulations with built-in environmental preferences (correlated random walks, multi-state hidden Markov-type models) together with modeled (Regional Oceanographic Modeling System, ROMS) and remotely-sensed (MODIS-Aqua) datasets, we explored the effects of degrading environmental surfaces (3km - 1 degree, daily - climatological) on model inference. We simulated the movements of a hypothetical wide-ranging marine predator through the California Current system over a three month period (May-June-July), based on metrics derived from previously published blue whale Balaenoptera musculus tracking studies. Results indicate that models using seasonal or climatological data fields can overfit true environmental preferences, in both presence-absence and behaviour-based model formulations. Moreover, the effects of a degradation in spatial resolution are more pronounced when using temporally averaged fields than when using daily, weekly or monthly datasets. In addition, we observed a notable divergence between the `best' models selected using common methods (e.g. AUC, AICc) and those that most accurately reproduced built-in environmental preferences. These findings have important implications for conservation and management of marine mammals, seabirds, sharks, sea turtles and large teleost fish, particularly in implementing dynamic ocean management initiatives and in forecasting responses to future climate-mediated ecosystem change.
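A correlated random walk of the kind used in these movement simulations draws persistent headings (small turning angles) and random step lengths. The following is a minimal sketch under assumed distributions (Gaussian turns, exponential step lengths); the actual simulations additionally embed environmental preferences and multi-state behaviour:

```python
import math
import random


def correlated_random_walk(n_steps, step_mean=1.0, turn_sd=0.3, seed=0):
    """Simulate a correlated random walk in the plane.

    Directional persistence comes from perturbing the current heading
    by a small Gaussian turn each step; step lengths are drawn from an
    exponential distribution with mean `step_mean` (both distributional
    choices are illustrative assumptions).
    """
    rng = random.Random(seed)
    x, y, heading = 0.0, 0.0, 0.0
    track = [(x, y)]
    for _ in range(n_steps):
        heading += rng.gauss(0.0, turn_sd)       # persistence in direction
        step = rng.expovariate(1.0 / step_mean)  # random step length
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        track.append((x, y))
    return track
```

Sampling environmental fields along such simulated tracks, then refitting habitat models against degraded versions of those fields, is the kind of experiment the abstract describes.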
NASA Astrophysics Data System (ADS)
Zeng, C.; Zhang, F.
2014-12-01
Alpine meadow is one of the most widespread vegetation types of the Qinghai-Tibetan Plateau, but the ecosystem has been undergoing degradation in recent years. Degradation of alpine meadow can change soil physical and chemical properties as well as their spatial variability. However, little research has addressed the spatial patterns of soil properties under different degrees of alpine meadow degradation on the Qinghai-Tibetan Plateau, even though these changes are important for water and heat studies and land surface modelling. 296 soil surface (0-10 cm) samples were collected using a grid sampling design from three differently degraded alpine meadow regions (1 km² each). Soil water content (SWC) and organic carbon content (OCC) were then measured. Classical statistical and geostatistical methods were employed to study the spatial heterogeneity of SWC and OCC under different degradation degrees (non-degraded ND, moderately degraded MD, extremely degraded ED) of alpine meadow. Results show that both SWC and OCC were normally distributed, with the exception of SWC under ED. On average, both SWC and OCC decreased in the order ND > MD > ED. In terms of nugget ratios, SWC and OCC showed a tendency toward increasing spatial dependence from ND to ED. The range of spatial variation of both SWC and OCC also increased with the degree of degradation. In all, degradation has a significant impact on the spatial heterogeneity of SWC and OCC in alpine meadow: as degradation intensifies, soil water and nutrient conditions worsen and their spatial distributions become more uneven.
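The nugget ratio and range discussed above come from fitting a variogram model to the empirical semivariogram, γ(h) = ½·E[(Z(x) − Z(x+h))²]. A minimal NumPy sketch of the empirical semivariogram over user-chosen lag bins (model fitting, and hence the nugget/sill ratio and range, would follow as a separate step):

```python
import numpy as np


def empirical_semivariogram(coords, values, lag_edges):
    """Empirical semivariogram gamma(h) over distance bins.

    coords:    (n, 2) sample locations
    values:    (n,) measured values (e.g. SWC or OCC)
    lag_edges: increasing bin edges for pair separation distances
    Returns one gamma estimate per bin (NaN for empty bins).
    """
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    # Pairwise distances and half squared differences
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    half_sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)  # each pair once
    dist, half_sq = dist[iu], half_sq[iu]
    gamma = []
    for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
        mask = (dist >= lo) & (dist < hi)
        gamma.append(half_sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)
```

The nugget ratio is then the fitted nugget divided by the sill; a larger ratio indicates a weaker spatially structured component at the sampled scales.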
Zhao, Jinhui; Martin, Gina; Macdonald, Scott; Vallance, Kate; Treno, Andrew; Ponicki, William; Tu, Andrew; Buxton, Jane
2013-01-01
Objectives. We investigated whether periodic increases in minimum alcohol prices were associated with reduced alcohol-attributable hospital admissions in British Columbia. Methods. The longitudinal panel study (2002–2009) incorporated minimum alcohol prices, density of alcohol outlets, and age- and gender-standardized rates of acute, chronic, and 100% alcohol-attributable admissions. We applied mixed-method regression models to data from 89 geographic areas of British Columbia across 32 time periods, adjusting for spatial and temporal autocorrelation, moving average effects, season, and a range of economic and social variables. Results. A 10% increase in the average minimum price of all alcoholic beverages was associated with an 8.95% decrease in acute alcohol-attributable admissions and a 9.22% reduction in chronic alcohol-attributable admissions 2 years later. A Can$ 0.10 increase in average minimum price would prevent 166 acute admissions in the 1st year and 275 chronic admissions 2 years later. We also estimated significant, though smaller, adverse impacts of increased private liquor store density on hospital admission rates for all types of alcohol-attributable admissions. Conclusions. Significant health benefits were observed when minimum alcohol prices in British Columbia were increased. By contrast, adverse health outcomes were associated with an expansion of private liquor stores. PMID:23597383
Speckle-field propagation in 'frozen' turbulence: brightness function approach
NASA Astrophysics Data System (ADS)
Dudorov, Vadim V.; Vorontsov, Mikhail A.; Kolosov, Valeriy V.
2006-08-01
Speckle-field long- and short-exposure spatial correlation characteristics for target-in-the-loop (TIL) laser beam propagation and scattering in atmospheric turbulence are analyzed through the use of two different approaches: the conventional Monte Carlo (MC) technique and the recently developed brightness function (BF) method. Both the MC and the BF methods are applied to analysis of speckle-field characteristics averaged over target surface roughness realizations under conditions of 'frozen' turbulence. This corresponds to TIL applications where speckle-field fluctuations associated with target surface roughness realization updates occur within a time scale that can be significantly shorter than the characteristic atmospheric turbulence time. Computational efficiency and accuracy of both methods are compared on the basis of a known analytical solution for the long-exposure mutual correlation function. It is shown that in the TIL propagation scenarios considered the BF method provides improved accuracy and requires significantly less computational time than the conventional MC technique. For TIL geometry with a Gaussian outgoing beam and Lambertian target surface, both analytical and numerical estimations for the speckle-field long-exposure correlation length are obtained. Short-exposure speckle-field correlation characteristics corresponding to propagation in 'frozen' turbulence are estimated using the BF method. It is shown that atmospheric turbulence-induced static refractive index inhomogeneities do not significantly affect the characteristic correlation length of the speckle field, whereas long-exposure spatial correlation characteristics are strongly dependent on turbulence strength.
Bennema, S C; Molento, M B; Scholte, R G; Carvalho, O S; Pritsch, I
2017-11-01
Fascioliasis is a condition caused by the trematode Fasciola hepatica. In this paper, the spatial distribution of F. hepatica in bovines in Brazil was modelled using a decision tree approach and a logistic regression, combined with a geographic information system (GIS) query. In both the decision tree and the logistic model, isothermality had the strongest influence on disease prevalence. The 50-year average precipitation in the warmest quarter of the year was also included as a risk factor, having a negative influence on parasite prevalence. The risk maps developed using both techniques showed a predicted higher prevalence mainly in the South of Brazil. Overall prediction performance seemed to be high, but both techniques failed to reach high accuracy in predicting the medium and high prevalence classes across the entire country. The GIS query map, based on the range of isothermality, minimum temperature of the coldest month, precipitation of the warmest quarter of the year, altitude and the average daily land surface temperature, showed a possible presence of F. hepatica in a very large area. The risk maps produced using these methods can be used to focus the activities of animal and public health programmes, even in non-evaluated F. hepatica areas.
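At prediction time, the logistic-regression side of such risk mapping reduces to a sigmoid of a linear combination of climate covariates. The sketch below uses made-up coefficients purely for illustration; the signs mirror the abstract (isothermality positive, warm-quarter precipitation negative), but it is not the fitted model from the paper:

```python
import math


def logistic_risk(isothermality, precip_warm_q,
                  b0=-2.0, b_iso=0.08, b_precip=-0.004):
    """Illustrative logistic model for P(high prevalence) at a location.

    All coefficients are hypothetical demonstration values, not fitted
    estimates; only the covariate signs follow the study's findings.
    """
    z = b0 + b_iso * isothermality + b_precip * precip_warm_q
    return 1.0 / (1.0 + math.exp(-z))
```

Evaluating such a function over a climate raster, then thresholding the probabilities into prevalence classes, yields a risk map of the kind described.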