Lorenz, David L.; Sanocki, Chris A.; Kocian, Matthew J.
2010-01-01
Knowledge of the peak flow of floods of a given recurrence interval is essential for regulation and planning of water resources and for design of bridges, culverts, and dams along Minnesota's rivers and streams. Statistical techniques are needed to estimate peak flow at ungaged sites because long-term streamflow records are available at relatively few places. To provide the up-to-date peak-flow frequency information needed to estimate peak flows at ungaged sites, the U.S. Geological Survey (USGS) conducted a peak-flow frequency study in cooperation with the Minnesota Department of Transportation and the Minnesota Pollution Control Agency. Estimates of peak-flow magnitudes for 1.5-, 2-, 5-, 10-, 25-, 50-, 100-, and 500-year recurrence intervals are presented for 330 streamflow-gaging stations in Minnesota and adjacent areas in Iowa and South Dakota based on data through water year 2005. The peak-flow frequency information was subsequently used in regression analyses to develop equations relating peak flows for selected recurrence intervals to various basin and climatic characteristics. Two statistically derived techniques, regional regression equations and region of influence regression, can be used to estimate peak flow on ungaged streams smaller than 3,000 square miles in Minnesota. Regional regression equations were developed for selected recurrence intervals in each of six regions in Minnesota: A (northwestern), B (north central and east central), C (northeastern), D (west central and south central), E (southwestern), and F (southeastern). The regression equations can be used to estimate peak flows at ungaged sites. The region of influence regression technique dynamically selects streamflow-gaging stations with characteristics similar to a site of interest. Thus, the region of influence regression technique allows use of a potentially unique set of gaging stations for estimating peak flow at each site of interest.
Two methods of selecting streamflow-gaging stations, similarity and proximity, can be used for the region of influence regression technique. The regional regression equation technique is the preferred technique for estimating peak flow at ungaged sites in all six regions. The region of influence regression technique is not appropriate for regions C, E, and F because the interrelations of some characteristics of those regions do not agree with the interrelations throughout the rest of the State. Both the similarity and proximity methods for the region of influence technique can be used in the other regions (A, B, and D) to provide additional estimates of peak flow. The peak-flow-frequency estimates and basin characteristics for selected streamflow-gaging stations and regional peak-flow regression equations are included in this report.
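As a rough illustration of the regional-regression approach described above, the following sketch fits a log-space regression of a peak-flow quantile on basin characteristics. All site data, the choice of explanatory variables, and the fitted coefficients are invented for illustration; the report's actual equations are region-specific and fit by generalized least squares.

```python
import numpy as np

# Hypothetical gaged sites: drainage area (sq mi), main-channel slope
# (ft/mi), and observed 100-year peak flow (cfs). All values invented.
area = np.array([12.0, 85.0, 240.0, 610.0, 1400.0])
slope = np.array([42.0, 18.0, 9.5, 5.2, 2.8])
q100 = np.array([1800.0, 7200.0, 15000.0, 29000.0, 52000.0])

# Regional regression equations are typically fit in log space:
#   log10(Q_T) = b0 + b1*log10(A) + b2*log10(S)
X = np.column_stack([np.ones_like(area), np.log10(area), np.log10(slope)])
b, *_ = np.linalg.lstsq(X, np.log10(q100), rcond=None)

def estimate_q100(area_sqmi, slope_ftmi):
    """Apply the fitted equation at a hypothetical ungaged site."""
    return 10 ** (b[0] + b[1] * np.log10(area_sqmi) + b[2] * np.log10(slope_ftmi))
```

The region-of-influence variant would refit such an equation for each site of interest, using only the gages judged most similar (or nearest) to it.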
Oberg, Kevin A.; Mades, Dean M.
1987-01-01
Four techniques for estimating generalized skew in Illinois were evaluated: (1) a generalized skew map of the United States; (2) an isoline map; (3) a prediction equation; and (4) a regional-mean skew. Peak-flow records at 730 gaging stations having 10 or more annual peaks were selected for computing station skews. Station skew values ranged from -3.55 to 2.95, with a mean of -0.11. Frequency curves computed for 30 gaging stations in Illinois using the variations of the regional-mean skew technique are similar to frequency curves computed using a skew map developed by the U.S. Water Resources Council (WRC). Estimates of the 50-, 100-, and 500-year floods computed for 29 of these gaging stations using the regional-mean skew techniques are within the 50-percent confidence limits of frequency curves computed using the WRC skew map. Although the three variations of the regional-mean skew technique were slightly more accurate than the WRC map, there is no appreciable difference between flood estimates computed using the variations of the regional-mean technique and flood estimates computed using the WRC skew map.
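The regional-mean skew idea evaluated above combines a station skew with a regional value. A minimal sketch of the two ingredients, using the inverse-MSE weighting form popularized by Bulletin 17B; the annual-peak series and the error variances below are invented for illustration:

```python
import numpy as np

def station_skew(annual_peaks):
    """Sample skew coefficient of the log10 annual-peak series (the
    statistic that generalized-skew studies regionalize)."""
    x = np.log10(np.asarray(annual_peaks, dtype=float))
    n, m, s = len(x), x.mean(), x.std(ddof=1)
    return n * np.sum((x - m) ** 3) / ((n - 1) * (n - 2) * s ** 3)

def weighted_skew(g_station, mse_station, g_regional, mse_regional):
    """Bulletin 17B-style blend: each skew weighted by the inverse of
    its mean square error."""
    return (mse_regional * g_station + mse_station * g_regional) / (
        mse_regional + mse_station)

# Invented 10-year annual peak series (cfs) and invented error variances;
# -0.11 echoes the regional mean reported in the abstract.
peaks = [3200, 4100, 2800, 5100, 3900, 6100, 2500, 4400, 3600, 4800]
g = station_skew(peaks)
gw = weighted_skew(g, mse_station=0.30, g_regional=-0.11, mse_regional=0.12)
```

Because short records give noisy station skews (large MSE), the blend pulls the estimate toward the regional value.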
Kroll, Lars Eric; Schumann, Maria; Müters, Stephan; Lampert, Thomas
2017-12-01
Nationwide health surveys can be used to estimate regional differences in health. With traditional estimation techniques, the spatial resolution of these estimates is limited by the constrained sample size; without special refreshment samples, results have so far been available only for the more populous federal states of Germany. An alternative is regression-based small-area estimation techniques. These models can generate estimates at a smaller spatial scale, but are also subject to greater statistical uncertainties because of their model assumptions. In the present article, regionalized estimates of respondents' self-rated health based on the "Gesundheit in Deutschland aktuell" studies (GEDA 2009, 2010, and 2012) are compared. The aim of the article is to analyze the range of regional estimates in order to assess the usefulness of the techniques for health reporting more adequately. The results show that the estimated prevalence is relatively stable when different samples are used. Important determinants of the variation of the estimates are the achieved sample size at the district level and the type of district (cities vs. rural regions). Overall, the present study shows that small-area modeling of prevalence is associated with additional uncertainties compared with conventional estimates, which should be taken into account when interpreting the corresponding findings.
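A minimal sketch of the regression-based small-area idea discussed above: fit an individual-level model of a binary health outcome, then aggregate model predictions by district. Everything here (covariates, coefficients, district structure) is synthetic; the GEDA analyses use far richer models with random district effects.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic survey: age (standardized), urban indicator, district id, and
# a binary "poor self-rated health" outcome generated from a known model.
n = 2000
age = rng.normal(size=n)
urban = rng.integers(0, 2, size=n)
district = rng.integers(0, 20, size=n)
y = (rng.random(n) < 1 / (1 + np.exp(-(-1.0 + 0.8 * age - 0.3 * urban)))).astype(float)

# Fit a logistic model by plain gradient descent (no ML library needed).
X = np.column_stack([np.ones(n), age, urban])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

# Small-area step: average model predictions within each district.
p_hat = 1 / (1 + np.exp(-X @ w))
prevalence = {d: p_hat[district == d].mean() for d in range(20)}
```

The article's point shows up even in this toy: districts with few respondents get prevalence estimates that lean almost entirely on the model rather than on local data.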
The effects of missing data on global ozone estimates
NASA Technical Reports Server (NTRS)
Drewry, J. W.; Robbins, J. L.
1981-01-01
The effects of missing data and model truncation on estimates of the global mean, zonal distribution, and global distribution of ozone are considered. It is shown that missing data can introduce biased estimates with errors that are not accounted for in the accuracy calculations of empirical modeling techniques. Data-fill techniques are introduced and used for evaluating error bounds and constraining the estimate in areas of sparse and missing data. It is found that the accuracy of the global mean estimate is more dependent on data distribution than model size. Zonal features can be accurately described by 7th order models over regions of adequate data distribution. Data variance accounted for by higher order models appears to represent climatological features of columnar ozone rather than pure error. Data-fill techniques can prevent artificial feature generation in regions of sparse or missing data without degrading high order estimates over dense data regions.
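The interplay of model truncation and data fill described above can be illustrated with a toy zonal fit: a 7th-order polynomial fit to gappy zonal-mean ozone is unconstrained in the gap unless climatological fill values are inserted. The ozone curve, gap location, and fill value are invented for illustration.

```python
import numpy as np

# Synthetic zonal-mean total ozone (Dobson units) vs latitude, with a
# data gap poleward of 50S standing in for missing satellite coverage.
lat = np.linspace(-80, 80, 33)
x = lat / 80.0                                   # scaled for conditioning
ozone = 300 + 60 * np.sin(np.radians(lat)) ** 2 - 20 * np.sin(np.radians(lat))
observed = lat > -50

# A 7th-order fit to the available data alone is unconstrained in the gap.
c_sparse = np.polyfit(x[observed], ozone[observed], 7)

# Data fill: insert a climatological value in the gap to constrain the fit.
c_fill = np.polyfit(
    np.concatenate([x[observed], x[~observed]]),
    np.concatenate([ozone[observed],
                    np.full((~observed).sum(), ozone[observed].mean())]),
    7)

sparse_at_pole = np.polyval(c_sparse, -75 / 80.0)
filled_at_pole = np.polyval(c_fill, -75 / 80.0)
```

The filled fit stays near the climatological value in the gap instead of generating artificial features, which is the behavior the abstract reports.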
Estimation of Biochemical Constituents From Fresh, Green Leaves By Spectrum Matching Techniques
NASA Technical Reports Server (NTRS)
Goetz, A. F. H.; Gao, B. C.; Wessman, C. A.; Bowman, W. D.
1990-01-01
Estimation of biochemical constituents in vegetation such as lignin, cellulose, starch, sugar and protein by remote sensing methods is an important goal in ecological research. The spectral reflectances of dried leaves exhibit diagnostic absorption features which can be used to estimate the abundance of important constituents. Lignin and nitrogen concentrations have been obtained from canopies by use of imaging spectrometry and multiple linear regression techniques. The difficulty in identifying individual spectra of leaf constituents in the region beyond 1 micrometer is that liquid water contained in the leaf dominates the spectral reflectance of leaves in this region. By use of spectrum matching techniques, originally used to quantify whole column water abundance in the atmosphere and equivalent liquid water thickness in leaves, we have been able to remove the liquid water contribution to the spectrum. The residual spectra resemble spectra for cellulose in the 1.1 micrometer region, lignin in the 1.7 micrometer region, and starch in the 2.0-2.3 micrometer region. In the entire 1.0-2.3 micrometer region each of the major constituents contributes to the spectrum. Quantitative estimates will require using unmixing techniques on the residual spectra.
Boyle, John J.; Kume, Maiko; Wyczalkowski, Matthew A.; Taber, Larry A.; Pless, Robert B.; Xia, Younan; Genin, Guy M.; Thomopoulos, Stavros
2014-01-01
When mechanical factors underlie growth, development, disease or healing, they often function through local regions of tissue where deformation is highly concentrated. Current optical techniques to estimate deformation can lack precision and accuracy in such regions due to challenges in distinguishing a region of concentrated deformation from an error in displacement tracking. Here, we present a simple and general technique for improving the accuracy and precision of strain estimation and an associated technique for distinguishing a concentrated deformation from a tracking error. The strain estimation technique improves accuracy relative to other state-of-the-art algorithms by directly estimating strain fields without first estimating displacements, resulting in a very simple method and low computational cost. The technique for identifying local elevation of strain enables for the first time the successful identification of the onset and consequences of local strain concentrating features such as cracks and tears in a highly strained tissue. We apply these new techniques to demonstrate a novel hypothesis in prenatal wound healing. More generally, the analytical methods we have developed provide a simple tool for quantifying the appearance and magnitude of localized deformation from a series of digital images across a broad range of disciplines. PMID:25165601
NASA Technical Reports Server (NTRS)
Green, R. N.
1981-01-01
The shape factor, parameter estimation, and deconvolution data analysis techniques were applied to the same set of Earth emitted radiation measurements to determine the effects of different techniques on the estimated radiation field. All three techniques are defined and their assumptions, advantages, and disadvantages are discussed. Their results are compared globally, zonally, regionally, and on a spatial spectrum basis. The standard deviations of the regional differences in the derived radiant exitance varied from 7.4 W/m2 to 13.5 W/m2.
Regional distribution of forest height and biomass from multisensor data fusion
Yifan Yu; Sassan Saatch; Linda S. Heath; Elizabeth LaPoint; Ranga Myneni; Yuri Knyazikhin
2010-01-01
Elevation data acquired from radar interferometry at C-band from SRTM are used in data fusion techniques to estimate regional scale forest height and aboveground live biomass (AGLB) over the state of Maine. Two fusion techniques have been developed to perform post-processing and parameter estimations from four data sets: 1 arc sec National Elevation Data (NED), SRTM...
Classification of the Regional Ionospheric Disturbance Based on Machine Learning Techniques
NASA Astrophysics Data System (ADS)
Terzi, Merve Begum; Arikan, Orhan; Karatay, Secil; Arikan, Feza; Gulyaeva, Tamara
2016-08-01
In this study, Total Electron Content (TEC) estimated from GPS receivers is used to model the regional and local variability that differs from global activity along with solar and geomagnetic indices. For the automated classification of regional disturbances, a classification technique based on Support Vector Machines (SVM), a robust machine learning method that has found widespread use, is proposed. Performance of the developed classification technique is demonstrated for the midlatitude ionosphere over Anatolia using TEC estimates generated from GPS data provided by the Turkish National Permanent GPS Network (TNPGN-Active) for the solar maximum year of 2011. As a result of applying the developed classification technique to Global Ionospheric Map (GIM) TEC data, which is provided by the NASA Jet Propulsion Laboratory (JPL), it is shown that SVM can be a suitable learning method to detect anomalies in TEC variations.
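A minimal sketch of the SVM classification idea described above, using a tiny hand-rolled linear SVM (subgradient descent on the hinge loss) on synthetic two-class features. Both the feature vectors and the class geometry are invented; the study itself works with real TEC-derived features and a full SVM implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for TEC-derived feature vectors: class -1 = quiet
# day, class +1 = disturbed day (features and labels are invented).
n = 200
quiet = rng.normal(0.0, 1.0, size=(n, 2))
disturbed = rng.normal(3.0, 1.0, size=(n, 2))
X = np.vstack([quiet, disturbed])
y = np.concatenate([-np.ones(n), np.ones(n)])

# Minimal linear SVM: subgradient descent on hinge loss + L2 penalty.
w, b, lam, lr = np.zeros(2), 0.0, 0.01, 0.1
for _ in range(500):
    mask = y * (X @ w + b) < 1          # margin violations
    w -= lr * (lam * w - (y[mask, None] * X[mask]).sum(axis=0) / len(X))
    b -= lr * (-y[mask].sum() / len(X))

accuracy = (np.sign(X @ w + b) == y).mean()
```

A production classifier would use a library SVM with a nonlinear kernel and cross-validated hyperparameters, but the margin-maximizing principle is the same.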
NASA Technical Reports Server (NTRS)
Kotoda, K.; Nakagawa, S.; Kai, K.; Yoshino, M. M.; Takeda, K.; Seki, K.
1985-01-01
In a humid region like Japan, the radiation term in the energy balance equation appears to play a more important role in evapotranspiration than does the vapor pressure difference between the surface and the lower atmospheric boundary layer. A Priestley-Taylor type equation (equilibrium evaporation model) is used to estimate evapotranspiration. Net radiation, soil heat flux, and surface temperature data are obtained. Only temperature data obtained by remotely sensed techniques are used.
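The equilibrium-evaporation model mentioned above has a compact form. A sketch under standard textbook constants (psychrometric constant 0.066 kPa/°C, the usual saturation-vapor-pressure curve) with the common Priestley-Taylor coefficient α = 1.26; setting α = 1 recovers the equilibrium form:

```python
import math

def priestley_taylor(rn, g, temp_c, alpha=1.26):
    """Priestley-Taylor latent heat flux (same units as rn, e.g. W/m2):
    LE = alpha * (delta / (delta + gamma)) * (Rn - G)."""
    es = 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))   # kPa
    delta = 4098.0 * es / (temp_c + 237.3) ** 2    # sat. vapor curve slope
    gamma = 0.066                                  # psychrometric const, kPa/C
    return alpha * (delta / (delta + gamma)) * (rn - g)

# Hypothetical midday values: Rn = 500 W/m2, G = 50 W/m2, 25 C.
le = priestley_taylor(rn=500.0, g=50.0, temp_c=25.0)
```

At warm temperatures the radiation term dominates because delta/(delta + gamma) approaches 1, which is consistent with the abstract's point about humid regions.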
USDA-ARS's Scientific Manuscript database
Quantification of regional greenhouse gas (GHG) fluxes is essential for establishing mitigation strategies and evaluating their effectiveness. Here, we used multiple top-down approaches and multiple trace gas observations at a tall tower to estimate GHG regional fluxes and evaluate the GHG fluxes de...
D'Agnese, F. A.; Faunt, C.C.; Keith, Turner A.
1996-01-01
The recharge and discharge components of the Death Valley regional groundwater flow system were defined by remote sensing and GIS techniques that integrated disparate data types to develop a spatially complex representation of near-surface hydrological processes. Image classification methods were applied to multispectral satellite data to produce a vegetation map. This map provided a basis for subsequent evapotranspiration and infiltration estimations. The vegetation map was combined with ancillary data in a GIS to delineate different types of wetlands, phreatophytes and wet playa areas. Existing evapotranspiration-rate estimates were then used to calculate discharge volumes for these areas. A previously used empirical method of groundwater recharge estimation was modified by GIS methods to incorporate data describing soil-moisture conditions, and a recharge potential map was produced. These discharge and recharge maps were readily converted to data arrays for numerical modelling codes. Inverse parameter estimation techniques also used these data to evaluate the reliability and sensitivity of estimated values.
Alexander, Terry W.; Wilson, Gary L.
1995-01-01
A generalized least-squares regression technique was used to relate the 2- to 500-year flood discharges from 278 selected streamflow-gaging stations to statistically significant basin characteristics. The regression relations (estimating equations) were defined for three hydrologic regions (I, II, and III) in rural Missouri. Ordinary least-squares regression analyses indicate that drainage area (Regions I, II, and III) and main-channel slope (Regions I and II) are the only basin characteristics needed for computing the 2- to 500-year design-flood discharges at gaged or ungaged stream locations. The resulting generalized least-squares regression equations provide a technique for estimating the 2-, 5-, 10-, 25-, 50-, 100-, and 500-year flood discharges on unregulated streams in rural Missouri. The regression equations for Regions I and II were developed from streamflow-gaging stations with drainage areas ranging from 0.13 to 11,500 square miles and 0.13 to 14,000 square miles, and main-channel slopes ranging from 1.35 to 150 feet per mile and 1.20 to 279 feet per mile, respectively. The regression equations for Region III were developed from streamflow-gaging stations with drainage areas ranging from 0.48 to 1,040 square miles. Standard errors of estimate for the generalized least-squares regression equations in Regions I, II, and III ranged from 30 to 49 percent.
D'Agnese, F. A.; Faunt, C.C.; Turner, A.K.
1996-01-01
The recharge and discharge components of the Death Valley regional groundwater flow system were defined by remote sensing and GIS techniques that integrated disparate data types to develop a spatially complex representation of near-surface hydrological processes. Image classification methods were applied to multispectral satellite data to produce a vegetation map. This map provided a basis for subsequent evapotranspiration and infiltration estimations. The vegetation map was combined with ancillary data in a GIS to delineate different types of wetlands, phreatophytes and wet playa areas. Existing evapotranspiration-rate estimates were then used to calculate discharge volumes for these areas. A previously used empirical method of groundwater recharge estimation was modified by GIS methods to incorporate data describing soil-moisture conditions, and a recharge potential map was produced. These discharge and recharge maps were readily converted to data arrays for numerical modelling codes. Inverse parameter estimation techniques also used these data to evaluate the reliability and sensitivity of estimated values.
Rosado-Mendez, Ivan M; Nam, Kibo; Hall, Timothy J; Zagzebski, James A
2013-07-01
Reported here is a phantom-based comparison of methods for determining the power spectral density (PSD) of ultrasound backscattered signals. Those power spectral density values are then used to estimate parameters describing α(f), the frequency dependence of the acoustic attenuation coefficient. Phantoms were scanned with a clinical system equipped with a research interface to obtain radiofrequency echo data. Attenuation, modeled as a power law α(f) = α0·f^β, was estimated using a reference phantom method. The power spectral density was estimated using the short-time Fourier transform (STFT), Welch's periodogram, and Thomson's multitaper technique, and performance was analyzed when limiting the size of the parameter-estimation region. Errors were quantified by the bias and standard deviation of the α0 and β estimates, and by the overall power-law fit error (FE). For parameter estimation regions larger than ~34 pulse lengths (~1 cm for this experiment), an overall power-law FE of 4% was achieved with all spectral estimation methods. With smaller parameter estimation regions as in parametric image formation, the bias and standard deviation of the α0 and β estimates depended on the size of the parameter estimation region. Here, the multitaper method reduced the standard deviation of the α0 and β estimates compared with those using the other techniques. The results provide guidance for choosing methods for estimating the power spectral density in quantitative ultrasound methods.
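Two of the spectral estimators compared above can be sketched in a few lines: Welch's periodogram and a Thomson-style multitaper estimate built by averaging periodograms over orthogonal DPSS tapers. The synthetic band-limited signal below is only a stand-in for real radiofrequency echo data, and the sampling rate and taper parameters are illustrative choices.

```python
import numpy as np
from scipy.signal import welch
from scipy.signal.windows import dpss

fs = 40e6                      # 40 MHz sampling, an assumed RF echo rate
t = np.arange(4096) / fs
rng = np.random.default_rng(2)

# Toy "backscatter": white noise bandpass-filtered around 5 MHz.
x = rng.normal(size=t.size)
x = np.convolve(x, np.cos(2 * np.pi * 5e6 * t[:64]), mode="same")

# Welch periodogram: averaged modified periodograms of short segments.
f_w, psd_w = welch(x, fs=fs, nperseg=512)

# Thomson multitaper: average periodograms over DPSS (Slepian) tapers.
tapers = dpss(x.size, NW=4, Kmax=7)
psd_mt = np.mean([np.abs(np.fft.rfft(tap * x)) ** 2 for tap in tapers], axis=0)
f_mt = np.fft.rfftfreq(x.size, d=1 / fs)

peak_w = f_w[np.argmax(psd_w)]
peak_mt = f_mt[np.argmax(psd_mt)]
```

The multitaper's appeal for small estimation regions, as the abstract reports, is that averaging over tapers reduces estimator variance without shrinking the data window.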
We compare biomass burning emissions estimates from four different techniques that use satellite based fire products to determine area burned over regional to global domains. Three of the techniques use active fire detections from polar-orbiting MODIS sensors and one uses detec...
Mann, Michael P.; Rizzardo, Jule; Satkowski, Richard
2004-01-01
Accurate streamflow statistics are essential to water resource agencies involved in both science and decision-making. When long-term streamflow data are lacking at a site, estimation techniques are often employed to generate streamflow statistics. However, procedures for accurately estimating streamflow statistics often are lacking, and when estimation procedures are developed, they often are not evaluated properly before being applied. Use of unevaluated or underevaluated flow-statistic estimation techniques can result in improper water-resources decision-making. The California State Water Resources Control Board (SWRCB) uses two key techniques, a modified rational equation and drainage basin area-ratio transfer, to estimate streamflow statistics at ungaged locations. These techniques have been implemented to varying degrees, but have not been formally evaluated. For estimating peak flows at the 2-, 5-, 10-, 25-, 50-, and 100-year recurrence intervals, the SWRCB uses the U.S. Geological Survey's (USGS) regional peak-flow equations. In this study, done cooperatively by the USGS and SWRCB, the SWRCB estimated several flow statistics at 40 USGS streamflow gaging stations in the north coast region of California. The SWRCB estimates were made without reference to USGS flow data. The USGS used the streamflow data provided by the 40 stations to generate flow statistics that could be compared with SWRCB estimates for accuracy. While some SWRCB estimates compared favorably with USGS statistics, results were subject to varying degrees of error over the region. Flow-based estimation techniques generally performed better than rain-based methods, especially for estimation of December 15 to March 31 mean daily flows. The USGS peak-flow equations also performed well, but tended to underestimate peak flows. The USGS equations performed within reported error bounds, but will require updating in the future as peak-flow data sets grow larger.
Little correlation was discovered between estimation errors and geographic locations or various basin characteristics. However, for 25-percentile year mean-daily-flow estimates for December 15 to March 31, the greatest estimation errors were at east San Francisco Bay area stations with mean annual precipitation less than or equal to 30 inches, and estimated 2-year/24-hour rainfall intensity less than 3 inches.
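The drainage-basin area-ratio transfer evaluated above is simple to state. A sketch with hypothetical numbers; the exponent b is region-specific and often taken near 1:

```python
def area_ratio_transfer(q_gaged, area_gaged, area_ungaged, b=1.0):
    """Drainage-area-ratio transfer of a flow statistic from a gaged
    site to an ungaged site on the same stream:
        Q_u = Q_g * (A_u / A_g) ** b
    The exponent b is region-specific and often near 1."""
    return q_gaged * (area_ungaged / area_gaged) ** b

# Hypothetical: a gage draining 120 sq mi with a winter mean daily flow
# of 85 cfs, transferred to an upstream site draining 70 sq mi.
q_est = area_ratio_transfer(85.0, 120.0, 70.0)
```

The study's caution applies directly: such transfers are only as good as the assumption that runoff per unit area is uniform between the two sites.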
NASA Astrophysics Data System (ADS)
Sehad, Mounir; Lazri, Mourad; Ameur, Soltane
2017-03-01
In this work, a new rainfall estimation technique based on the high spatial and temporal resolution of the Spinning Enhanced Visible and Infra Red Imager (SEVIRI) aboard the Meteosat Second Generation (MSG) satellite is presented. This work proposes an efficient rainfall estimation scheme based on two multiclass support vector machine (SVM) algorithms: SVM_D for daytime and SVM_N for nighttime rainfall estimation. Both SVM models are trained using relevant rainfall parameters based on optical, microphysical and textural cloud properties. The cloud parameters are derived from the spectral channels of the SEVIRI MSG radiometer. The 3-hourly and daily accumulated rainfall totals are derived from the 15-minute rainfall estimates given by the SVM classifiers for each MSG observation image pixel. The SVMs were trained with ground meteorological radar precipitation scenes recorded from November 2006 to March 2007 over the north of Algeria, located in the Mediterranean region. Further, the SVM_D and SVM_N models were used to estimate 3-hourly and daily rainfall using a data set gathered from November 2010 to March 2011 over northern Algeria. The results were validated against collocated rainfall observed by a rain gauge network. Indeed, the statistical scores given by the correlation coefficient, bias, root mean square error and mean absolute error showed good accuracy of rainfall estimates by the present technique. Moreover, rainfall estimates of our technique were compared with two high-accuracy rainfall estimation methods based on MSG SEVIRI imagery, namely a random forests (RF) based approach and an artificial neural network (ANN) based technique. The findings of the present technique indicate a higher correlation coefficient (3-hourly: 0.78; daily: 0.94), and lower mean absolute error and root mean square error values. The results show that the new technique assigns 3-hourly and daily rainfall with better accuracy than the ANN technique and the RF model.
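The validation scores used above (correlation coefficient, bias, RMSE, MAE) are straightforward to compute. A sketch with invented gauge and satellite values; the study's scores come from a full gauge network, not six points:

```python
import numpy as np

def validation_scores(estimated, observed):
    """Scores commonly used to validate satellite rainfall estimates
    against rain gauges."""
    e, o = np.asarray(estimated, float), np.asarray(observed, float)
    return {
        "r": np.corrcoef(e, o)[0, 1],       # correlation coefficient
        "bias": np.mean(e - o),             # mean error
        "rmse": np.sqrt(np.mean((e - o) ** 2)),
        "mae": np.mean(np.abs(e - o)),
    }

# Invented 3-hourly totals (mm) at six collocated points.
gauge = [0.0, 2.5, 10.0, 4.0, 0.5, 12.0]
satellite = [0.2, 2.0, 8.5, 5.0, 0.0, 11.0]
scores = validation_scores(satellite, gauge)
```

Note that RMSE is always at least as large as MAE, and the gap between the two grows when a few large misses dominate the error.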
Estimating the magnitude of peak flows for streams in Kentucky for selected recurrence intervals
Hodgkins, Glenn A.; Martin, Gary R.
2003-01-01
This report gives estimates of, and presents techniques for estimating, the magnitude of peak flows for streams in Kentucky for recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years. A flowchart in this report guides the user to the appropriate estimates and (or) estimating techniques for a site on a specific stream. Estimates of peak flows are given for 222 U.S. Geological Survey streamflow-gaging stations in Kentucky. In the development of the peak-flow estimates at gaging stations, a new generalized skew coefficient was calculated for the State. This single statewide value of 0.011 (with a standard error of prediction of 0.520) is more appropriate for Kentucky than the national skew isoline map in Bulletin 17B of the Interagency Advisory Committee on Water Data. Regression equations are presented for estimating the peak flows on ungaged, unregulated streams in rural drainage basins. The equations were developed by use of generalized-least-squares regression procedures at 187 U.S. Geological Survey gaging stations in Kentucky and 51 stations in surrounding States. Kentucky was divided into seven flood regions. Total drainage area is used in the final regression equations as the sole explanatory variable, except in Regions 1 and 4 where main-channel slope also was used. The smallest average standard errors of prediction were in Region 3 (from -13.1 to +15.0 percent) and the largest average standard errors of prediction were in Region 5 (from -37.6 to +60.3 percent). One section of this report describes techniques for estimating peak flows for ungaged sites on gaged, unregulated streams in rural drainage basins. Another section references two previous U.S. Geological Survey reports for peak-flow estimates on ungaged, unregulated, urban streams. Estimating peak flows at ungaged sites on regulated streams is beyond the scope of this report, because peak flows on regulated streams are dependent upon variable human activities.
NASA Technical Reports Server (NTRS)
Frei, Allan; Nolin, Anne W.; Serreze, Mark C.; Armstrong, Richard L.; McGinnis, David L.; Robinson, David A.
2004-01-01
The purpose of this three-year study is to develop and evaluate techniques to estimate the range of potential hydrological impacts of climate change in mountainous areas. Three main objectives are set out in the proposal. (1) To develop and evaluate transfer functions to link tropospheric circulation to regional snowfall. (2) To evaluate a suite of General Circulation Models (GCMs) for use in estimating synoptic scale circulation and the resultant regional snowfall. And (3) to estimate the range of potential hydrological impacts of changing climate in the two case study areas: the Upper Colorado River basin, and the Catskill Mountains of southeastern New York State. Both regions provide water to large populations.
Improving Focal Depth Estimates: Studies of Depth Phase Detection at Regional Distances
NASA Astrophysics Data System (ADS)
Stroujkova, A.; Reiter, D. T.; Shumway, R. H.
2006-12-01
The accurate estimation of the depth of small, regionally recorded events continues to be an important and difficult explosion monitoring research problem. Depth phases (free surface reflections) are the primary tool that seismologists use to constrain the depth of a seismic event. When depth phases from an event are detected, an accurate source depth is easily found by using the delay times of the depth phases relative to the P wave and a velocity profile near the source. Cepstral techniques, including cepstral F-statistics, represent a class of methods designed for the depth-phase detection and identification; however, they offer only a moderate level of success at epicentral distances less than 15°. This is due to complexities in the Pn coda, which can lead to numerous false detections in addition to the true phase detection. Therefore, cepstral methods cannot be used independently to reliably identify depth phases. Other evidence, such as apparent velocities, amplitudes and frequency content, must be used to confirm whether the phase is truly a depth phase. In this study we used a variety of array methods to estimate apparent phase velocities and arrival azimuths, including beam-forming, semblance analysis, MUltiple SIgnal Classification (MUSIC) (e.g., Schmidt, 1979), and cross-correlation (e.g., Cansi, 1995; Tibuleac and Herrin, 1997). To facilitate the processing and comparison of results, we developed a MATLAB-based processing tool, which allows application of all of these techniques (i.e., augmented cepstral processing) in a single environment. The main objective of this research was to combine the results of three focal-depth estimation techniques and their associated standard errors into a statistically valid unified depth estimate. The three techniques include: 1. Direct focal depth estimate from the depth-phase arrival times picked via augmented cepstral processing. 2. 
Hypocenter location from direct and surface-reflected arrivals observed on sparse networks of regional stations using a Grid-search, Multiple-Event Location method (GMEL; Rodi and Toksöz, 2000; 2001). 3. Surface-wave dispersion inversion for event depth and focal mechanism (Herrmann and Ammon, 2002). To validate our approach and provide quality control for our solutions, we applied the techniques to moderate-sized events (mb between 4.5 and 6.0) with known focal mechanisms. We illustrate the techniques using events observed at regional distances from the KSAR (Wonju, South Korea) teleseismic array and other nearby broadband three-component stations. Our results indicate that the techniques can produce excellent agreement between the various depth estimates. In addition, combining the techniques into a "unified" estimate greatly reduced location errors and improved robustness of the solution, even if results from the individual methods yielded large standard errors.
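The cepstral idea discussed above rests on the fact that an echo at delay τ produces a ripple in the log spectrum, and hence a cepstral peak at quefrency τ. A toy sketch with a fully synthetic trace (random wavelet, invented delay, amplitude ratio, and near-source velocity); as the text stresses, real data also require corroborating evidence before a cepstral peak is accepted as a depth phase:

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 40.0                       # samples per second, a typical regional rate
n = 2048
wavelet = rng.normal(size=60) * np.hanning(60)

# Synthetic P arrival followed by a depth phase: same wavelet, delayed
# 4 s and reversed in polarity (an idealized, invented trace).
delay = int(4.0 * fs)
trace = np.zeros(n)
trace[100:160] += wavelet
trace[100 + delay:160 + delay] -= 0.6 * wavelet
trace += 0.02 * rng.normal(size=n)

# Real cepstrum: an echo appears as a peak at its delay (quefrency).
log_spec = np.log(np.abs(np.fft.rfft(trace)) + 1e-12)
cep = np.abs(np.fft.irfft(log_spec))
quefrency = np.arange(cep.size) / fs
search = (quefrency > 1.0) & (quefrency < n / (2 * fs))
echo_delay = quefrency[search][np.argmax(cep[search])]

# Vertical-incidence mapping to depth with an assumed 6 km/s P velocity:
# pP-P delay ~ 2*d/v, so d ~ delay * v / 2.
depth_km = echo_delay * 6.0 / 2.0
```

In practice the incidence angle, velocity structure, and Pn-coda false alarms all complicate this picture, which is why the study fuses the cepstral estimate with location and surface-wave constraints.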
Monitoring D-Region Variability from Lightning Measurements
NASA Technical Reports Server (NTRS)
Simoes, Fernando; Berthelier, Jean-Jacques; Pfaff, Robert; Bilitza, Dieter; Klenzing, Jeffery
2011-01-01
In situ measurements of ionospheric D-region characteristics are somewhat scarce and rely mostly on sounding rockets. Remote sensing techniques employing Very Low Frequency (VLF) transmitters can provide electron density estimates from subionospheric wave propagation modeling. Here we discuss how lightning waveform measurements, namely sferics and tweeks, can be used for monitoring the D-region variability and day-night transition, and for local electron density estimates. A brief comparison among D-region aeronomy models is also presented.
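The tweek-based height estimate implied above follows from the ideal Earth-ionosphere waveguide, where the nth-mode cutoff frequency is f_c = n·c/(2h). A sketch under that idealization (perfectly conducting boundaries; the 1.8 kHz cutoff is an illustrative nighttime value):

```python
C = 299_792_458.0   # speed of light, m/s

def reflection_height_km(cutoff_hz, mode=1):
    """Ionospheric reflection height implied by a tweek's modal cutoff
    in an ideal Earth-ionosphere waveguide: f_c = mode * c / (2 h)."""
    return mode * C / (2.0 * cutoff_hz) / 1000.0

# A nighttime tweek cutoff near 1.8 kHz maps to roughly 83 km, i.e.
# the nighttime D region.
h = reflection_height_km(1800.0)
```

Tracking the cutoff through dawn and dusk is what lets sferic/tweek observations monitor the day-night transition of the D region.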
B. Lane Rivenbark; C. Rhett Jackson
2004-01-01
Regional average evapotranspiration estimates developed by water balance techniques are frequently used to estimate average discharge in ungaged streams. However, the lower stream size range for the validity of these techniques has not been explored. Flow records were collected and evaluated for 16 small streams in the Southern Appalachians to test whether the...
NASA Astrophysics Data System (ADS)
Li, Dong; Cheng, Tao; Zhou, Kai; Zheng, Hengbiao; Yao, Xia; Tian, Yongchao; Zhu, Yan; Cao, Weixing
2017-07-01
Red edge position (REP), defined as the wavelength of the inflexion point in the red edge region (680-760 nm) of the reflectance spectrum, has been widely used to estimate foliar chlorophyll content from reflectance spectra. A number of techniques have been developed for REP extraction in the past three decades, but most of them require data-specific parameterization, and the consistency of their performance from leaf to canopy levels remains poorly understood. In this study, we propose a new technique (WREP) to extract REPs based on the application of continuous wavelet transform to reflectance spectra. The REP is determined by the zero-crossing wavelength in the red edge region of a wavelet-transformed spectrum for a number of scales of wavelet decomposition. The new technique is simple to implement and requires no parameterization from the user as long as continuous wavelet transforms are applied to reflectance spectra. Its performance was evaluated for estimating leaf chlorophyll content (LCC) and canopy chlorophyll content (CCC) of cereal crops (i.e. rice and wheat) and compared with traditional techniques including linear interpolation, linear extrapolation, polynomial fitting and inverted Gaussian. Our results demonstrated that WREP obtained the best estimation accuracy for both LCC and CCC as compared to traditional techniques. High scales of wavelet decomposition were favorable for the estimation of CCC and low scales for the estimation of LCC. The difference in optimal scale reveals the underlying mechanism of signature transfer from leaf to canopy levels. In addition, crop-specific models were required for the estimation of CCC over the full range. However, a common model could be built with the REPs extracted with Scale 5 of the WREP technique for wheat and rice crops when CCC was less than 2 g/m2 (R2 = 0.73, RMSE = 0.26 g/m2).
This insensitivity of WREP to crop type indicates its potential for aerial mapping of chlorophyll content across growth seasons of cereal crops. The new REP extraction technique provides new insight into the spectral changes in the red edge region in response to chlorophyll variation from leaf to canopy levels.
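The zero-crossing idea behind WREP can be sketched in a few lines: convolve the reflectance spectrum with a Mexican-hat wavelet at a single scale of the decomposition, then locate the sign change of the coefficients inside the 680-760 nm window. The function below is an illustrative sketch, not the authors' implementation; the uniform wavelength grid, the scale value, and the linear-interpolation step are assumptions.

```python
import numpy as np

def mexican_hat(t):
    # Mexican-hat (Ricker) mother wavelet, proportional to the
    # negative second derivative of a Gaussian.
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

def wrep(wavelengths, reflectance, scale=5.0, red_edge=(680.0, 760.0)):
    """Sketch of the WREP idea: continuous wavelet transform at one scale,
    then the zero-crossing wavelength inside the red edge region.
    Assumes a uniformly sampled wavelength grid (nm)."""
    step = wavelengths[1] - wavelengths[0]
    t = np.arange(-5.0 * scale, 5.0 * scale + step, step) / scale
    kernel = mexican_hat(t) / np.sqrt(scale)
    coeffs = np.convolve(reflectance, kernel, mode="same")
    # restrict to the red edge region
    mask = (wavelengths >= red_edge[0]) & (wavelengths <= red_edge[1])
    wl, c = wavelengths[mask], coeffs[mask]
    # the first sign change marks the inflexion point; refine it by
    # linear interpolation between the two bracketing samples
    sign_change = np.where(np.diff(np.sign(c)) != 0)[0]
    if sign_change.size == 0:
        raise ValueError("no zero crossing in red edge region")
    i = sign_change[0]
    return wl[i] - c[i] * (wl[i + 1] - wl[i]) / (c[i + 1] - c[i])
```

For a sigmoidal red edge, the recovered REP sits at the inflexion wavelength regardless of the reflectance offset, since the wavelet is zero-mean.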
Noise estimation for hyperspectral imagery using spectral unmixing and synthesis
NASA Astrophysics Data System (ADS)
Demirkesen, C.; Leloglu, Ugur M.
2014-10-01
Most hyperspectral image (HSI) processing algorithms assume a signal-to-noise model in their formulation, which makes them dependent on accurate noise estimation. Many techniques have been proposed to estimate the noise. A very comprehensive comparative study on the subject was done by Gao et al. [1]. In a nutshell, most techniques are based on the idea of calculating the standard deviation from assumed-to-be homogeneous regions in the image. Some of these algorithms work on a regular grid parameterized with a window size w, while others make use of image segmentation in order to obtain homogeneous regions. This study focuses not only on the statistics of the noise but also on the estimation of the noise itself. A noise estimation technique motivated by a recent HSI de-noising approach [2] is proposed in this study. The de-noising algorithm is based on estimation of the end-members and their fractional abundances using the non-negative least-squares method. The end-members are extracted using the well-known simplex volume optimization technique called NFINDR after manual selection of the number of end-members, and the image is reconstructed using the estimated end-members and abundances. Actually, image de-noising and noise estimation are two sides of the same coin: once we de-noise an image, we can estimate the noise by calculating the difference between the de-noised image and the original noisy image. In this study, the noise is estimated as described above. To assess the accuracy of this method, the methodology in [1] is followed, i.e., synthetic images are created by mixing end-member spectra and noise. Since the best-performing method for noise estimation was spectral and spatial de-correlation (SSDC), originally proposed in [3], the proposed method is compared to SSDC.
The results of the experiments conducted with synthetic HSIs suggest that the proposed noise estimation strategy outperforms the existing techniques in terms of the mean and standard deviation of the absolute error of the estimated noise. Finally, the proposed technique is shown to be robust to changes in its single parameter, namely the number of end-members.
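The unmix-and-synthesize step described above can be sketched as follows, assuming the end-member matrix has already been extracted (the NFINDR step is skipped here and the end-members are simply given). Each pixel spectrum is unmixed with non-negative least squares, the noise-free signal is synthesized from the abundances, and the residual is taken as the noise estimate. This is a minimal sketch under those assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import nnls

def estimate_noise(pixels, endmembers):
    """pixels: (n_pixels, n_bands) array; endmembers: (n_bands, m) array.
    Unmix each pixel with non-negative least squares, synthesize the
    de-noised spectrum, and return the residual as the noise estimate."""
    recon = np.empty_like(pixels)
    for i, spectrum in enumerate(pixels):
        abundances, _ = nnls(endmembers, spectrum)  # non-negative abundances
        recon[i] = endmembers @ abundances          # synthesized (de-noised) pixel
    return pixels - recon                           # noise = original - de-noised
```

With the correct end-members, the residual standard deviation closely tracks the true noise level (a small fraction of the noise lying in the end-member subspace is absorbed by the fit).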
NASA Astrophysics Data System (ADS)
Imani Masouleh, Mehdi; Limebeer, David J. N.
2018-07-01
In this study we estimate the region of attraction (RoA) of the lateral dynamics of a nonlinear single-track vehicle model. The tyre forces are approximated using rational functions that are shown to capture the nonlinearities of tyre curves significantly better than polynomial functions. An existing sum-of-squares (SOS) programming algorithm for estimating regions of attraction is extended to accommodate the use of rational vector fields. This algorithm is then used to find an estimate of the RoA of the vehicle lateral dynamics. The influence of vehicle parameters and driving conditions on the stability region is studied. It is shown that SOS programming techniques can be used to approximate the stability region without resorting to numerical integration. The RoA estimate from the SOS algorithm is compared to existing results in the literature. The proposed method is shown to obtain significantly better RoA estimates.
Using x-ray mammograms to assist in microwave breast image interpretation.
Curtis, Charlotte; Frayne, Richard; Fear, Elise
2012-01-01
Current clinical breast imaging modalities include ultrasound, magnetic resonance (MR) imaging, and the ubiquitous X-ray mammography. Microwave imaging, which takes advantage of differing electromagnetic properties to obtain image contrast, shows potential as a complementary imaging technique. As an emerging modality, interpretation of 3D microwave images poses a significant challenge. MR images are often used to assist in this task, and X-ray mammograms are readily available. However, X-ray mammograms provide 2D images of a breast under compression, resulting in significant geometric distortion. This paper presents a method to estimate the 3D shape of the breast and locations of regions of interest from standard clinical mammograms. The technique was developed using MR images as the reference 3D shape with the future intention of using microwave images. Twelve breast shapes were estimated and compared to ground truth MR images, resulting in a skin surface estimation accurate to within an average Euclidean distance of 10 mm. The 3D locations of regions of interest were estimated to be within the same clinical area of the breast as corresponding regions seen on MR imaging. These results encourage investigation into the use of mammography as a source of information to assist with microwave image interpretation as well as validation of microwave imaging techniques.
2014-01-01
The linear-algebraic concept of a subspace plays a significant role in recent techniques of spectrum estimation. In this article, the authors utilize the noise subspace concept to find hidden periodicities in DNA sequences. With the vast growth of genomic sequences, the demand for accurate identification of protein-coding regions in DNA is rising. Several DNA feature extraction techniques spanning various fields have emerged in the recent past, among which the application of digital signal processing tools is of prime importance. It is known that coding segments have a 3-base periodicity, while non-coding regions do not have this unique feature. One of the most important spectrum analysis techniques based on the concept of subspace is the least-norm method. The least-norm estimator developed in this paper shows sharp period-3 peaks in coding regions, completely eliminating background noise. The proposed method was compared with the existing sliding discrete Fourier transform (SDFT) method, popularly known as the modified periodogram method, on several genes from various organisms, and the results show that the proposed method provides a better and more effective approach to gene prediction. Resolution, quality factor, sensitivity, specificity, miss rate, and wrong rate are used to establish the superiority of the least-norm gene prediction method over the existing method. PMID:24386895
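The 3-base periodicity that both methods exploit can be illustrated with the classic periodogram measure: map the sequence to four binary indicator signals (one per base) and evaluate the total spectral power at the period-3 frequency. Note this is the baseline signature, not the paper's least-norm subspace estimator, which refines the same idea; the function name and normalization below are choices made for illustration.

```python
import numpy as np

def period3_power(seq):
    """Period-3 spectral power of a DNA string, the signature used to
    discriminate coding from non-coding regions. Each base is mapped to
    a binary indicator sequence; the DFT of each indicator is evaluated
    at the period-3 frequency and the powers are summed."""
    n = len(seq)
    # complex exponential at the period-3 frequency (2*pi/3 rad/sample)
    basis = np.exp(-2j * np.pi * np.arange(n) / 3.0)
    total = 0.0
    for base in "ACGT":
        indicator = np.array([1.0 if b == base else 0.0 for b in seq])
        total += abs(np.dot(indicator, basis)) ** 2
    return total / n
```

A sequence with an exact 3-base repeat gives a strong peak, while a sequence with no period-3 structure gives essentially zero power.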
Independent Assessment of ITRF Site Velocities using GPS Imaging
NASA Astrophysics Data System (ADS)
Blewitt, G.; Hammond, W. C.; Kreemer, C.; Altamimi, Z.
2015-12-01
The long-term stability of ITRF is critical to the most challenging scientific applications such as the slow variation of sea level and of ice sheet loading in Greenland and Antarctica. In 2010, the National Research Council recommended aiming for stability at the level of 1 mm/decade in the ITRF origin and scale. This requires that the ITRF include many globally-distributed sites with motions that are predictable to within a few mm/decade, with a significant number of sites having collocated stations of multiple techniques. Quantifying the stability of ITRF stations can be useful for understanding the stability of ITRF parameters and for guiding the selection and weighting of ITRF stations. Here we apply a new suite of techniques for an independent assessment of ITRF site velocities. Our "GPS Imaging" suite is founded on the principle that, given a large amount of data, the trend can be estimated objectively, automatically, robustly, and accurately by applying non-parametric techniques, which use quantile statistics (e.g., the median). At the foundation of GPS Imaging is the estimator "MIDAS" (Median Interannual Difference Adjusted for Skewness). MIDAS estimates the velocity with a realistic error bar based on sub-sampling the coordinate time series. MIDAS is robust to step discontinuities, outliers, seasonality, and heteroscedasticity. Common-mode noise filters enhance regional- to continental-scale precision in MIDAS estimates, just as they do for standard estimation techniques. Secondly, in regions where there is sufficient spatial sampling, GPS Imaging uses MIDAS velocity estimates to generate a regionally-representative velocity map. For this we apply a median spatial filter to despeckle the maps. We use GPS Imaging to address two questions: (1) How well do the ITRF site velocities derived by parametric estimation agree with non-parametric techniques? (2) Are ITRF site velocities regionally representative?
These questions aim to get a handle on (1) the accuracy of ITRF site velocities as a function of characteristics of the contributing station data, such as the number of step parameters and the total time span; and (2) evidence of local processes affecting site velocity, which may impact site stability. Such quantification can be used to rank stations in terms of the risk that they may pose to the stability of ITRF.
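The core of the MIDAS estimator described above can be sketched very simply: take the median of coordinate differences between epochs separated by one year, which cancels seasonal signals exactly and is robust to outliers and steps. The sketch below omits the real estimator's trimming of the rate distribution and its skewness-adjusted error bar; the pairing tolerance is an assumption.

```python
import numpy as np

def midas_velocity(t, x, pair_tol=0.01):
    """Simplified sketch of the MIDAS idea: the station velocity is the
    median of coordinate differences between epochs separated by one year.
    t: epochs in decimal years; x: coordinate (e.g. mm); pair_tol:
    matching tolerance in years. (The real MIDAS additionally trims the
    rate distribution and adjusts the error bar for skewness.)"""
    t, x = np.asarray(t, float), np.asarray(x, float)
    rates = []
    for i in range(len(t)):
        # find the epoch closest to exactly one year later
        j = int(np.argmin(np.abs(t - (t[i] + 1.0))))
        if abs(t[j] - t[i] - 1.0) <= pair_tol:
            rates.append((x[j] - x[i]) / (t[j] - t[i]))
    return np.median(rates)
```

Because each pair spans one full year, an annual cycle contributes nothing to any individual rate, and the median discards the few rates contaminated by outliers.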
Regional regression of flood characteristics employing historical information
Tasker, Gary D.; Stedinger, J.R.
1987-01-01
Streamflow gauging networks provide hydrologic information for use in estimating the parameters of regional regression models. The regional regression models can be used to estimate flood statistics, such as the 100-year peak, at ungauged sites as functions of drainage basin characteristics. A recent innovation in regional regression is the use of a generalized least squares (GLS) estimator that accounts for unequal station record lengths and sample cross correlation among the flows. However, this technique does not account for historical flood information. A method is proposed here to adjust this generalized least squares estimator to account for possible information about historical floods available at some stations in a region. The historical information is assumed to be in the form of observations of all peaks above a threshold during a long period outside the systematic record period. A Monte Carlo simulation experiment was performed to compare the GLS estimator adjusted for historical floods with the unadjusted GLS estimator and the ordinary least squares estimator. Results indicate that using the GLS estimator adjusted for historical information significantly improves the regression model.
Automated skin lesion segmentation with kernel density estimation
NASA Astrophysics Data System (ADS)
Pardo, A.; Real, E.; Fernandez-Barreras, G.; Madruga, F. J.; López-Higuera, J. M.; Conde, O. M.
2017-07-01
Skin lesion segmentation is a complex step for dermoscopy pathological diagnosis. Kernel density estimation is proposed as a segmentation technique based on the statistical distribution of color intensities in the lesion and non-lesion regions.
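The density-based classification this abstract describes can be sketched as follows: fit one kernel density estimate to intensities sampled from known lesion pixels and one to non-lesion pixels, then label each pixel by whichever density is higher. This is a hedged single-channel sketch (the paper works with color intensities); the seed-sampling scheme and function names are assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_lesion_mask(image, lesion_seeds, background_seeds):
    """Sketch of segmentation by kernel density estimation: fit a KDE to
    intensities sampled from lesion and non-lesion seed regions, then
    label each pixel by the class whose estimated density is higher."""
    kde_lesion = gaussian_kde(lesion_seeds)          # p(intensity | lesion)
    kde_background = gaussian_kde(background_seeds)  # p(intensity | non-lesion)
    flat = image.ravel()
    mask = kde_lesion(flat) > kde_background(flat)   # maximum-likelihood label
    return mask.reshape(image.shape)
```

When the two intensity distributions are well separated, the per-pixel likelihood comparison recovers the lesion region almost exactly.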
A Biomechanical Modeling Guided CBCT Estimation Technique
Zhang, You; Tehrani, Joubin Nasehi; Wang, Jing
2017-01-01
Two-dimensional-to-three-dimensional (2D-3D) deformation has emerged as a new technique to estimate cone-beam computed tomography (CBCT) images. The technique is based on deforming a prior high-quality 3D CT/CBCT image to form a new CBCT image, guided by limited-view 2D projections. The accuracy of this intensity-based technique, however, is often limited in low-contrast image regions with subtle intensity differences. The solved deformation vector fields (DVFs) can also be biomechanically unrealistic. To address these problems, we have developed a biomechanical modeling guided CBCT estimation technique (Bio-CBCT-est) by combining 2D-3D deformation with finite element analysis (FEA)-based biomechanical modeling of anatomical structures. Specifically, Bio-CBCT-est first extracts the 2D-3D deformation-generated displacement vectors at the high-contrast anatomical structure boundaries. The extracted surface deformation fields are subsequently used as the boundary conditions to drive structure-based FEA to correct and fine-tune the overall deformation fields, especially those at low-contrast regions within the structure. The resulting FEA-corrected deformation fields are then fed back into 2D-3D deformation to form an iterative loop, combining the benefits of intensity-based deformation and biomechanical modeling for CBCT estimation. Using eleven lung cancer patient cases, the accuracy of the Bio-CBCT-est technique has been compared to that of the 2D-3D deformation technique and the traditional CBCT reconstruction techniques. The accuracy was evaluated in the image domain, and also in the DVF domain through clinician-tracked lung landmarks. PMID:27831866
NASA Astrophysics Data System (ADS)
Lee, T. R.; Wood, W. T.; Dale, J.
2017-12-01
Empirical and theoretical models of sub-seafloor organic matter transformation, degradation and methanogenesis require estimates of initial seafloor total organic carbon (TOC). The methane so produced may, under the appropriate geophysical and geochemical conditions, manifest as methane hydrate deposits. Despite the importance of seafloor TOC, actual observations of TOC in the world's oceans are sparse, and large regions of the seafloor remain unmeasured. To provide an estimate in areas where observations are limited or non-existent, we have implemented interpolation techniques that rely on existing data sets. Recent geospatial analyses have provided accurate accounts of global geophysical and geochemical properties (e.g. crustal heat flow, seafloor biomass, porosity) through machine learning interpolation techniques. These techniques find correlations between the desired quantity (in this case TOC) and other quantities (predictors, e.g. bathymetry, distance from coast, etc.) that are more widely known. Predictions (with uncertainties) of seafloor TOC in regions lacking direct observations are made based on the correlations. The global distribution of seafloor TOC at 1 x 1 arc-degree resolution was estimated from a dataset of seafloor TOC compiled by Seiter et al. [2004] and a non-parametric (i.e. data-driven) machine learning algorithm, specifically k-nearest neighbors (KNN). Built-in predictor selection and a ten-fold validation technique generated statistically optimal estimates of seafloor TOC and uncertainties. In addition, inexperience was estimated. Inexperience is effectively the distance in parameter space to the single nearest neighbor, and it indicates geographic locations where future data collection would most benefit prediction accuracy. These improved geospatial estimates of TOC in data-deficient areas will provide new constraints on methane production and subsequent methane hydrate accumulation.
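The KNN-with-inexperience approach described above can be sketched in plain NumPy: predict each query as the mean of its k nearest training targets in predictor space, and report the distance to the single nearest neighbor as the "inexperience" of that prediction. This is an illustrative sketch, not the authors' pipeline (which adds predictor selection and ten-fold validation); the function names and k value are assumptions.

```python
import numpy as np

def knn_predict(train_X, train_y, query_X, k=5):
    """k-nearest-neighbour estimate with a simple inexperience measure.
    The prediction is the mean of the k nearest training targets in
    predictor space; 'inexperience' is the distance to the single nearest
    neighbour, large wherever training data are sparse."""
    preds, inexperience = [], []
    for q in query_X:
        d = np.linalg.norm(train_X - q, axis=1)  # distances in predictor space
        nearest = np.argsort(d)[:k]
        preds.append(train_y[nearest].mean())
        inexperience.append(d.min())
    return np.array(preds), np.array(inexperience)
```

A query far outside the sampled predictor space still yields a number, but its large inexperience flags it as poorly constrained, which is exactly how the abstract proposes to prioritize future data collection.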
Purcell, M; Magette, W L
2009-04-01
Both planning and design of integrated municipal solid waste management systems require accurate prediction of waste generation. This research predicted the quantity and distribution of biodegradable municipal waste (BMW) generation within a diverse 'landscape' of residential areas, as well as from a variety of commercial establishments (restaurants, hotels, hospitals, etc.) in the Dublin (Ireland) region. Socio-economic variables, housing types, and the sizes and main activities of commercial establishments were hypothesized as the key determinants contributing to the spatial variability of BMW generation. A geographical information system (GIS) 'model' of BMW generation was created using ArcMap, a component of ArcGIS 9. Statistical data including socio-economic status and household size were mapped on an electoral district basis. Historical research and data from scientific literature were used to assign BMW generation rates to residential and commercial establishments. These predictions were combined to give overall BMW estimates for the region, which can aid waste planning and policy decisions. This technique will also aid the design of future waste management strategies, leading to policy and practice alterations as a function of demographic changes and development. The household prediction technique gave a more accurate overall estimate of household waste generation than did the social class technique. Both techniques produced estimates that differed from the reported local authority data; however, given that local authority reported figures for the region are below the national average, with some of the waste generated from apartment complexes being reported as commercial waste, predictions arising from this research are believed to be closer to actual waste generation than a comparison to reported data would suggest. By changing the input data, this estimation tool can be adapted for use in other locations. 
Although focusing on waste in the Dublin region, this method of waste prediction can have significant potential benefits if a universal method can be found to apply it effectively.
NASA Astrophysics Data System (ADS)
El Sharif, H.; Teegavarapu, R. S.
2012-12-01
Spatial interpolation methods used for estimation of missing precipitation data at a site seldom check for their ability to preserve site and regional statistics. Such statistics are primarily defined by spatial correlations and other site-to-site statistics in a region. Preservation of site and regional statistics represents a means of assessing the validity of missing precipitation estimates at a site. This study evaluates the efficacy of a fuzzy-logic methodology for infilling missing historical daily precipitation data in preserving site and regional statistics. Rain gauge sites in the state of Kentucky, USA, are used as a case study for evaluation of this newly proposed method in comparison to traditional data infilling techniques. Several error and performance measures are used to evaluate the methods and the trade-offs between accuracy of estimation and preservation of site and regional statistics.
Regional Distribution of Forest Height and Biomass from Multisensor Data Fusion
NASA Technical Reports Server (NTRS)
Yu, Yifan; Saatchi, Sassan; Heath, Linda S.; LaPoint, Elizabeth; Myneni, Ranga; Knyazikhin, Yuri
2010-01-01
Elevation data acquired from radar interferometry at C-band from SRTM are used in data fusion techniques to estimate regional scale forest height and aboveground live biomass (AGLB) over the state of Maine. Two fusion techniques have been developed to perform post-processing and parameter estimations from four data sets: 1 arc sec National Elevation Data (NED), SRTM derived elevation (30 m), Landsat Enhanced Thematic Mapper (ETM) bands (30 m), derived vegetation index (VI) and NLCD2001 land cover map. The first fusion algorithm corrects for missing or erroneous NED data using an iterative interpolation approach and produces distribution of scattering phase centers from SRTM-NED in three dominant forest types of evergreen conifers, deciduous, and mixed stands. The second fusion technique integrates the USDA Forest Service, Forest Inventory and Analysis (FIA) ground-based plot data to develop an algorithm to transform the scattering phase centers into mean forest height and aboveground biomass. Height estimates over evergreen (R2 = 0.86, P < 0.001; RMSE = 1.1 m) and mixed forests (R2 = 0.93, P < 0.001, RMSE = 0.8 m) produced the best results. Estimates over deciduous forests were less accurate because of the winter acquisition of SRTM data and loss of scattering phase center from tree-surface interaction. We used two methods to estimate AGLB; algorithms based on direct estimation from the scattering phase center produced higher precision (R2 = 0.79, RMSE = 25 Mg/ha) than those estimated from forest height (R2 = 0.25, RMSE = 66 Mg/ha). We discuss sources of uncertainty and implications of the results in the context of mapping regional and continental scale forest biomass distribution.
Verification of Agricultural Methane Emission Inventories
NASA Astrophysics Data System (ADS)
Desjardins, R. L.; Pattey, E.; Worth, D. E.; VanderZaag, A.; Mauder, M.; Srinivasan, R.; Worthy, D.; Sweeney, C.; Metzger, S.
2017-12-01
It is estimated that agriculture contributes more than 40% of anthropogenic methane (CH4) emissions in North America. However, these estimates, which are either based on the Intergovernmental Panel on Climate Change (IPCC) methodology or inverse modeling techniques, are poorly validated due to the challenges of separating interspersed CH4 sources within agroecosystems. A flux aircraft, instrumented with a fast-response Picarro CH4 analyzer for the eddy covariance (EC) technique and a sampling system for the relaxed eddy accumulation (REA) technique, was flown at an altitude of about 150 m along several 20-km transects over an agricultural region in Eastern Canada. For all flight days, the top-down CH4 flux density measurements were compared to the footprint-adjusted bottom-up estimates based on an IPCC Tier II methodology. Information on the animal population, land use type and atmospheric and surface variables was available for each transect. Top-down and bottom-up estimates of CH4 emissions were found to be poorly correlated, and wetlands were the most frequent confounding source of CH4; however, there were other sources such as waste treatment plants and biodigesters. Spatially resolved wavelet covariance estimates of CH4 emissions helped identify the contribution of wetlands to the overall CH4 flux, and the dependence of these emissions on temperature. When wetland contribution in the flux footprint was minimized, top-down and bottom-up estimates agreed to within measurement error. This research demonstrates that although existing aircraft-based technology can be used to verify regional (~100 km2) agricultural CH4 emissions, it remains challenging due to the diverse sources of CH4 present in many regions. The use of wavelet covariance to generate spatially-resolved flux estimates was found to be the best way to separate interspersed sources of CH4.
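The eddy covariance technique at the heart of the top-down measurements reduces, in its simplest form, to the mean product of the fluctuations of vertical wind speed and scalar concentration about their means. The sketch below shows only that core calculation, leaving out the coordinate rotations, detrending, and density corrections a real flux processing chain would apply.

```python
import numpy as np

def eddy_covariance_flux(w, c):
    """Eddy covariance in its simplest form: the vertical turbulent flux
    is the mean product of the fluctuations of vertical wind speed w
    (m/s) and scalar concentration c (e.g. CH4) about their means."""
    w = np.asarray(w, float)
    c = np.asarray(c, float)
    return np.mean((w - w.mean()) * (c - c.mean()))
```

Only the correlated part of the two signals contributes; a constant background concentration drops out entirely because the fluctuations are taken about the mean.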
NASA Astrophysics Data System (ADS)
Zhang, X.; Anagnostou, E. N.
2016-12-01
This research contributes to the improvement of high-resolution satellite applications in tropical regions with mountainous topography. Such mountainous regions are usually covered by sparse networks of in-situ observations, while quantitative precipitation estimation from satellite sensors exhibits strong underestimation of heavy orographically enhanced storm events. To address this issue, our research applies a satellite error correction technique based solely on high-resolution numerical weather predictions (NWP). Our previous work has demonstrated the accuracy of this method in two mid-latitude mountainous regions (Zhang et al. 2013*1, Zhang et al. 2016*2), while the current research focuses on a comprehensive evaluation in three tropical mountainous regions: Colombia, Peru and Taiwan. In addition, two different satellite precipitation products, NOAA Climate Prediction Center morphing technique (CMORPH) and Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS), are considered. The study includes a large number of heavy precipitation events (68 events over the three regions) in the period 2004 to 2012. The NWP-based adjustments of the two satellite products are contrasted to their corresponding gauge-adjusted post-processing products. Preliminary results show that the NWP-based adjusted CMORPH product is consistently improved relative to both the original and gauge-adjusted precipitation products for all regions and storms examined. The improvement of the PERSIANN-CCS product is less significant and less consistent relative to the CMORPH performance improvements from the NWP-based adjustment.
*1 Zhang, Xinxuan, Emmanouil N. Anagnostou, Maria Frediani, Stavros Solomos, and George Kallos. "Using NWP simulations in satellite rainfall estimation of heavy precipitation events over mountainous areas." Journal of Hydrometeorology 14, no. 6 (2013): 1844-1858.
*2 Zhang, Xinxuan, Emmanouil N. Anagnostou, and Humberto Vergara. "Hydrologic Evaluation of NWP-Adjusted CMORPH Estimates of Hurricane-Induced Precipitation in the Southern Appalachians." Journal of Hydrometeorology 17.4 (2016): 1087-1099.
Impact of logging on aboveground biomass stocks in lowland rain forest, Papua New Guinea.
Bryan, Jane; Shearman, Phil; Ash, Julian; Kirkpatrick, J B
2010-12-01
Greenhouse-gas emissions resulting from logging are poorly quantified across the tropics. There is a need for robust measurement of rain forest biomass and the impacts of logging from which carbon losses can be reliably estimated at regional and global scales. We used a modified Bitterlich plotless technique to measure aboveground live biomass at six unlogged and six logged rain forest areas (coupes) across two approximately 3000-ha regions at the Makapa concession in lowland Papua New Guinea. "Reduced-impact logging" is practiced at Makapa. We found the mean unlogged aboveground biomass in the two regions to be 192.96 +/- 4.44 Mg/ha and 252.92 +/- 7.00 Mg/ha (mean +/- SE), which was reduced by logging to 146.92 +/- 4.58 Mg/ha and 158.84 +/- 4.16 Mg/ha, respectively. Killed biomass was not a fixed proportion, but varied with unlogged biomass, with 24% killed in the lower-biomass region, and 37% in the higher-biomass region. Across the two regions logging resulted in a mean aboveground carbon loss of 35 +/- 2.8 Mg/ha. The plotless technique proved efficient at estimating mean aboveground biomass and logging damage. We conclude that substantial bias is likely to occur within biomass estimates derived from single unreplicated plots.
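The angle-count principle underlying Bitterlich plotless sampling can be illustrated with the standard relascope rule: from a sample point, a tree of diameter d is tallied when it appears wider than a critical angle, which for a basal area factor F (m2/ha) means its distance r satisfies r <= d * 50 / sqrt(F); each tallied tree then represents F m2/ha of basal area. This is the textbook form of the technique, not the authors' modified version, and the example values are hypothetical.

```python
import math

def bitterlich_basal_area(trees, baf=2.0):
    """Angle-count (Bitterlich) sampling sketch. trees is an iterable of
    (diameter_m, distance_m) pairs seen from one sample point; baf is the
    basal area factor in m^2/ha. A tree is tallied when it subtends more
    than the critical angle, i.e. distance <= diameter * 50 / sqrt(baf),
    and each tallied tree contributes baf m^2/ha of basal area."""
    limit = 50.0 / math.sqrt(baf)
    count = sum(1 for d, r in trees if r <= d * limit)
    return baf * count  # stand basal area estimate, m^2/ha
```

Because the inclusion radius grows with diameter, large trees are sampled over a larger area, which is what makes a simple tally proportional to basal area.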
NASA Astrophysics Data System (ADS)
Sinha, Mangalika; Modi, Mohammed H.
2017-10-01
In-depth compositional analysis of a 240 Å thick aluminium oxide thin film has been carried out using soft x-ray reflectivity (SXR) and x-ray photoelectron spectroscopy (XPS) techniques. The compositional details of the film are estimated by modelling the optical index profile obtained from the SXR measurements over the 60-200 Å wavelength region. The SXR measurements were carried out at the Indus-1 reflectivity beamline. The method suggests that the principal film region comprises Al2O3 and AlOx (x = 1.6) phases, whereas the interface region comprises a SiO2 and AlOx (x = 1.6) mixture. The soft x-ray reflectivity technique combined with XPS measurements explains the compositional details of the principal layer. Since the interface region cannot be analyzed with the XPS technique in a non-destructive manner, the SXR technique is a powerful tool for non-destructive compositional analysis of the interface region.
Waltemeyer, Scott D.
2006-01-01
Estimates of the magnitude and frequency of peak discharges are necessary for reliable flood-hazard mapping in the Navajo Nation in Arizona, Utah, Colorado, and New Mexico. The Bureau of Indian Affairs, U.S. Army Corps of Engineers, and Navajo Nation requested that the U.S. Geological Survey update estimates of peak-discharge magnitude for gaging stations in the region and update regional equations for estimation of peak discharge and frequency at ungaged sites. Equations were developed for estimating the magnitude of peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years at ungaged sites using data collected through 1999 at 146 gaging stations, providing an additional 13 years of peak-discharge data beyond a 1997 investigation that used gaging-station data through 1986. The equations for estimation of peak discharges at ungaged sites were developed for flood regions 8, 11, high elevation, and 6, which are delineated on the basis of the hydrologic codes from the 1997 investigation. Peak discharges for selected recurrence intervals were determined at gaging stations by fitting observed data to a log-Pearson Type III distribution with adjustments for a low-discharge threshold and a zero skew coefficient. A low-discharge threshold was applied to the frequency analysis of 82 of the 146 gaging stations. This application provides an improved fit of the log-Pearson Type III frequency distribution. Use of the low-discharge threshold generally eliminated peak discharges having a recurrence interval of less than 1.4 years from the probability-density function. Within each region, logarithms of the peak discharges for selected recurrence intervals were related to logarithms of basin and climatic characteristics using stepwise ordinary least-squares regression techniques for exploratory data analysis.
Generalized least-squares regression techniques, an improved regression procedure that accounts for time and spatial sampling errors, then were applied to the same data used in the ordinary least-squares regression analyses. The average standard error of prediction for the 100-year peak discharge in region 8 was 53 percent. The average standard error of prediction, which includes average sampling error and average standard error of regression, ranged from 45 to 83 percent for the 100-year flood. The estimated standard error of prediction for a hybrid method for region 11 was large in the 1997 investigation. No distinction of floods produced from a high-elevation region was presented in the 1997 investigation. Overall, the equations based on generalized least-squares regression techniques are considered to be more reliable than those in the 1997 report because of the increased length of record and an improved GIS method. Flood-frequency relations can be transferred to ungaged sites on the same stream either by direct application of the regional regression equation or, at an ungaged site upstream or downstream of a gaging station, by using the drainage-area ratio and the drainage-area exponent from the regional regression equation of the respective region.
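The at-station frequency analysis described above, fitting a log-Pearson Type III distribution with a fixed skew, can be sketched as follows: take base-10 logarithms of the annual peaks, estimate their mean and standard deviation, and apply the frequency factor for the chosen non-exceedance probability. This is a minimal sketch that omits the report's low-discharge threshold adjustment; the function name and the use of scipy's pearson3 quantile function are choices made for illustration.

```python
import numpy as np
from scipy.stats import pearson3

def lp3_peak(flows, recurrence_years, skew=0.0):
    """Fit a log-Pearson Type III distribution to annual peak flows and
    return the discharge for a given recurrence interval. A fixed skew
    (here zero, as in the abstract) can be imposed; location and scale
    come from the mean and standard deviation of the log-flows."""
    logq = np.log10(np.asarray(flows, float))
    mu, sigma = logq.mean(), logq.std(ddof=1)
    p = 1.0 - 1.0 / recurrence_years   # non-exceedance probability
    k = pearson3.ppf(p, skew)          # frequency factor (standard normal when skew = 0)
    return 10.0 ** (mu + k * sigma)
```

With zero skew the 2-year peak is simply the geometric mean of the annual series, and longer recurrence intervals scale it up through the frequency factor.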
Segmentation of the Speaker's Face Region with Audiovisual Correlation
NASA Astrophysics Data System (ADS)
Liu, Yuyu; Sato, Yoichi
The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows that is robust against changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph cut-based video segmentation to obtain a globally optimal extraction of the speaker's face region. The setting of any heuristic threshold in this segmentation is avoided by learning the correlation distributions of speaker and background via expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.
Regionalisation of low flow frequency curves for the Peninsular Malaysia
NASA Astrophysics Data System (ADS)
Mamun, Abdullah A.; Hashim, Alias; Daoud, Jamal I.
2010-02-01
Regional maps and equations for the magnitude and frequency of 1-, 7- and 30-day low flows were derived and are presented in this paper. The river gauging stations of neighbouring catchments that produced similar low flow frequency curves were grouped together. As such, Peninsular Malaysia was divided into seven low flow regions. Regional equations were developed using the multivariate regression technique. An empirical relationship was developed for mean annual minimum flow as a function of catchment area, mean annual rainfall and mean annual evaporation. The regional equations exhibited good coefficients of determination (R2 > 0.90). Three low flow frequency curves showing the low, mean and high limits for each region were proposed based on a graphical best-fit technique. Knowing the catchment area, mean annual rainfall and evaporation in the region, design low flows of different durations can be easily estimated for ungauged catchments. This procedure is expected to overcome the problem of data unavailability in estimating low flows in Peninsular Malaysia.
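Regional equations of the kind described, relating flow to catchment area, rainfall and evaporation, are typically fitted as a power law by taking logarithms and running ordinary least squares. The sketch below shows that standard log-log formulation under the assumption that this is the multivariate form used; the coefficient values in the example are synthetic.

```python
import numpy as np

def fit_regional_equation(area, rainfall, evaporation, q_low):
    """Sketch of a multivariate regional low flow regression: fit
    log Q = b0 + b1*log A + b2*log P + b3*log E by ordinary least
    squares, so that Q = 10**b0 * A**b1 * P**b2 * E**b3."""
    area, rainfall, evaporation = (np.asarray(v, float)
                                   for v in (area, rainfall, evaporation))
    X = np.column_stack([np.ones(len(area)), np.log10(area),
                         np.log10(rainfall), np.log10(evaporation)])
    y = np.log10(np.asarray(q_low, float))
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_low_flow(coef, area, rainfall, evaporation):
    """Apply the fitted regional equation at an ungauged catchment."""
    b0, b1, b2, b3 = coef
    return 10.0 ** b0 * area ** b1 * rainfall ** b2 * evaporation ** b3
```

Once the coefficients are fitted for a region, estimating a design low flow at an ungauged catchment reduces to plugging its area, rainfall and evaporation into the power-law form.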
Gao, Yongnian; Gao, Junfeng; Yin, Hongbin; Liu, Chuansheng; Xia, Ting; Wang, Jing; Huang, Qi
2015-03-15
Remote sensing has been widely used for water quality monitoring, but most of these monitoring studies have focused on only a few water quality variables, such as chlorophyll-a, turbidity, and total suspended solids, which are typically considered optically active variables. Estimating the phosphorus concentration in water from remote sensing remains a challenge. The total phosphorus (TP) in lakes has been estimated from remotely sensed observations, primarily using simple individual band ratios or their natural logarithms and statistical regression methods based on field TP data and spectral reflectance. In this study, we investigated the possibility of establishing a spatial modeling scheme to estimate the TP concentration of a large lake from multi-spectral satellite imagery using band combinations and regional multivariate statistical modeling techniques, and we tested the applicability of the spatial modeling scheme. The results showed that HJ-1A CCD multi-spectral satellite imagery can be used to estimate the TP concentration in a lake. The correlation and regression analysis showed a highly significant positive relationship between the TP concentration and certain remotely sensed combination variables. The proposed modeling scheme had a higher accuracy for TP concentration estimation in the large lake than the traditional individual band ratio method and the whole-lake scale regression-modeling scheme. The TP concentration values showed clear spatial variability and were high in western Lake Chaohu and relatively low in eastern Lake Chaohu. The northernmost portion, the northeastern coastal zone and the southeastern portion of western Lake Chaohu had the highest TP concentrations, and the other regions had the lowest TP concentration values, except for the coastal zone of eastern Lake Chaohu.
These results strongly suggested that the proposed modeling scheme, i.e., the band combinations and the regional multivariate statistical modeling techniques, demonstrated advantages for estimating the TP concentration in a large lake and had a strong potential for universal application for the TP concentration estimation in large lake waters worldwide. Copyright © 2014 Elsevier Ltd. All rights reserved.
Joint inversion of regional and teleseismic earthquake waveforms
NASA Astrophysics Data System (ADS)
Baker, Mark R.; Doser, Diane I.
1988-03-01
A least squares joint inversion technique for regional and teleseismic waveforms is presented. The mean square error between seismograms and synthetics is minimized using true amplitudes. Matching true amplitudes in modeling requires meaningful estimates of modeling uncertainties and of seismogram signal-to-noise ratios. This also permits calculating linearized uncertainties on the solution based on accuracy and resolution. We use a priori estimates of earthquake parameters to stabilize unresolved parameters, and for comparison with a posteriori uncertainties. We verify the technique on synthetic data, and on the 1983 Borah Peak, Idaho (M = 7.3), earthquake. We demonstrate the inversion on the August 1954 Rainbow Mountain, Nevada (M = 6.8), earthquake and find parameters consistent with previous studies.
Regional-scale analysis of extreme precipitation from short and fragmented records
NASA Astrophysics Data System (ADS)
Libertino, Andrea; Allamano, Paola; Laio, Francesco; Claps, Pierluigi
2018-02-01
The rain gauge is the oldest and most accurate instrument for rainfall measurement, able to provide long series of reliable data. However, rain gauge records are often plagued by gaps, spatio-temporal discontinuities and inhomogeneities that can affect their suitability for a statistical assessment of the characteristics of extreme rainfall. Furthermore, the need to discard shorter series to obtain robust estimates means ignoring a significant amount of information that can be essential, especially when estimates for large return periods are sought. This work describes a robust statistical framework for dealing with uneven and fragmented rainfall records on a regional spatial domain. The proposed technique, named "patched kriging", allows one to exploit all the information available from the recorded series, independently of their length, to provide extreme rainfall estimates in ungauged areas. The methodology involves the sequential application of the ordinary kriging equations, producing a homogeneous dataset of synthetic series with uniform lengths. In this way, the errors inherent in any regional statistical estimation can be easily represented in the spatial domain and, possibly, corrected. Furthermore, the homogeneity of the obtained series provides robustness toward local artefacts during the parameter-estimation phase. The application to a case study in north-western Italy demonstrates the potential of the methodology and provides a significant base for discussing its advantages over previous techniques.
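The building block that patched kriging applies sequentially is ordinary kriging. A minimal sketch, with an assumed exponential variogram and synthetic station data:

```python
import numpy as np

def variogram(h, sill=1.0, vrange=50.0):
    """Exponential semivariogram (model and parameters are assumptions)."""
    return sill * (1.0 - np.exp(-h / vrange))

def ordinary_kriging(xy, z, target):
    """Solve the ordinary kriging system [Gamma 1; 1' 0][w; mu] = [gamma0; 1]."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(xy - target, axis=1))
    sol = np.linalg.solve(A, b)
    w = sol[:n]                      # kriging weights, constrained to sum to 1
    return w @ z, w

xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [8.0, 7.0]])  # stations
z = np.array([12.0, 20.0, 9.0, 17.0])    # e.g. annual-maximum rainfall (mm)
est, w = ordinary_kriging(xy, z, np.array([5.0, 5.0]))
print(round(est, 2), round(w.sum(), 6))
```

The unbiasedness constraint makes the weights sum to one, and the estimator reproduces a station value exactly when the target coincides with that station, which is what lets the patched series remain consistent with the observed records.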
Paleohydrologic techniques used to define the spatial occurrence of floods
Jarrett, R.D.
1990-01-01
Defining the cause and spatial characteristics of floods may be difficult because of limited streamflow and precipitation data. New paleohydrologic techniques that incorporate information from geomorphic, sedimentologic, and botanic studies provide important supplemental information to define homogeneous hydrologic regions. These techniques also help to define the spatial structure of rainstorms and floods and improve regional flood-frequency estimates. The occurrence and the non-occurrence of paleohydrologic evidence of floods, such as flood bars, alluvial fans, and tree scars, provide valuable hydrologic information. The paleohydrologic research to define the spatial characteristics of floods improves the understanding of flood hydrometeorology. This research was used to define the areal extent and contributing drainage area of flash floods in Colorado. Also, paleohydrologic evidence was used to define the spatial boundaries for the Colorado foothills region in terms of the meteorologic cause of flooding and elevation. In general, above 2300 m, peak flows are caused by snowmelt. Below 2300 m, peak flows primarily are caused by rainfall. The foothills region has an upper elevation limit of about 2300 m and a lower elevation limit of about 1500 m. Regional flood-frequency estimates that incorporate the paleohydrologic information indicate that the Big Thompson River flash flood of 1976 had a recurrence interval of approximately 10,000 years. This contrasts markedly with 100 to 300 years determined by using conventional hydrologic analyses. Flood-discharge estimates based on rainfall-runoff methods in the foothills of Colorado result in larger values than those estimated with regional flood-frequency relations, which are based on long-term streamflow data. 
Preliminary hydrologic and paleohydrologic research indicates that intense rainfall does not occur at higher elevations in other Rocky Mountain states and that the highest elevations for rainfall-producing floods vary by latitude. The study results have implications for floodplain management and design of hydraulic structures in the mountains of Colorado and other Rocky Mountain States. © 1990.
Estimation of Regional Carbon Balance from Atmospheric Observations
NASA Astrophysics Data System (ADS)
Denning, S.; Uliasz, M.; Skidmore, J.
2002-12-01
Variations in the concentration of CO2 and other trace gases in time and space contain information about sources and sinks at regional scales. Several methods have been developed to quantitatively extract this information from atmospheric measurements. Mass-balance techniques depend on the ability to repeatedly sample the same mass of air, which involves careful attention to airmass trajectories. Inverse and adjoint techniques rely on decomposition of the source field into quasi-independent "basis functions" that are propagated through transport models and then used to synthesize optimal linear combinations that best match observations. A recently proposed method for regional flux estimation from continuous measurements at tall towers relies on time-mean vertical gradients, and requires careful trajectory analysis to map the estimates onto regional ecosystems. Each of these techniques is likely to be applied to measurements made during the North American Carbon Program. We have also explored the use of Bayesian synthesis inversion at regional scales, using a Lagrangian particle dispersion model driven by mesoscale transport fields. Influence functions were calculated for each hypothetical observation in a realistic diurnally-varying flow. These influence functions were then treated as basis functions for the purpose of separate inversions for daytime photosynthesis and 24-hour mean ecosystem respiration. Our results highlight the importance of estimating CO2 fluxes through the lateral boundaries of the model. Respiration fluxes were well constrained by one or two hypothetical towers, regardless of inflow fluxes. Time-varying assimilation fluxes were less well constrained, and much more dependent on knowledge of inflow fluxes. The small net difference between respiration and photosynthesis was the most difficult to determine, being extremely sensitive to knowledge of inflow fluxes. 
Finally, we explored the feasibility of directly incorporating mid-day concentration values measured at surface-layer flux towers in global inversions for regional surface fluxes. We found that such data would substantially improve the observational constraint on current carbon cycle models, especially if applied selectively to a well-designed subset of the current network of flux towers.
Estimation of selected flow and water-quality characteristics of Alaskan streams
Parks, Bruce; Madison, R.J.
1985-01-01
Although hydrologic data are either sparse or nonexistent for large areas of Alaska, the drainage area, area of lakes, glacier and forest cover, and average precipitation in a hydrologic basin of interest can be measured or estimated from existing maps. Application of multiple linear regression techniques indicates that statistically significant correlations exist between properties of basins determined from maps and measured streamflow characteristics. This suggests that corresponding characteristics of ungaged basins can be estimated. Streamflow frequency characteristics can be estimated from regional equations developed for southeast, south-central and Yukon regions. Statewide or modified regional equations must be used, however, for the southwest, northwest, and Arctic Slope regions where there is a paucity of data. Equations developed from basin characteristics are given to estimate suspended-sediment values for glacial streams and, with less reliability, for nonglacial streams. Equations developed from available specific conductance data are given to estimate concentrations of major dissolved inorganic constituents. Suggestions are made for expanding the existing data base and thus improving the ability to estimate hydrologic characteristics for Alaskan streams. (USGS)
A 'Generalized Distance' Estimation Procedure for Intra-Urban Interaction
Bettinger. It is found that available estimation techniques necessarily result in non-integer solutions. A mathematical device is therefore... The estimation of urban and regional travel patterns has been a necessary part of current efforts to establish land use guidelines for the Texas... paper details computational experience with travel estimation within Corpus Christi, Texas, using a new convex programming approach of Charnes, Raike and
Restoration of out-of-focus images based on circle of confusion estimate
NASA Astrophysics Data System (ADS)
Vivirito, Paolo; Battiato, Sebastiano; Curti, Salvatore; La Cascia, M.; Pirrone, Roberto
2002-11-01
In this paper a new method for fast out-of-focus blur estimation and restoration is proposed. It is suitable for CFA (Color Filter Array) images acquired by typical CCD/CMOS sensors. The method is based on the analysis of a single image and consists of two steps: 1) out-of-focus blur estimation via Bayer pattern analysis; 2) image restoration. Blur estimation is based on a block-wise edge detection technique. This edge detection is carried out on the green pixels of the CFA sensor image, also called the Bayer pattern. Once the blur level has been estimated, the image is restored through the application of a new inverse filtering technique. This algorithm gives sharp images while reducing ringing and crisping artifacts over a wider frequency region. Experimental results show the effectiveness of the method, both subjectively and numerically, by comparison with other techniques found in the literature.
Funk, Christopher C.; Michaelsen, Joel C.
2004-01-01
An extension of Sinclair's diagnostic model of orographic precipitation (“VDEL”) is developed for use in data-poor regions to enhance rainfall estimates. This extension (VDELB) combines a 2D linearized internal gravity wave calculation with the dot product of the terrain gradient and surface wind to approximate terrain-induced vertical velocity profiles. Slope, wind speed, and stability determine the velocity profile, with either sinusoidal or vertically decaying (evanescent) solutions possible. These velocity profiles replace the parameterized functions in the original VDEL, creating VDELB, a diagnostic accounting for buoyancy effects. A further extension (VDELB*) uses an on/off constraint derived from reanalysis precipitation fields. A validation study over 365 days in the Pacific Northwest suggests that VDELB* can best capture seasonal and geographic variations. A new statistical data-fusion technique is presented and is used to combine VDELB*, reanalysis, and satellite rainfall estimates in southern Africa. The technique, matched filter regression (MFR), sets the variance of the predictors equal to their squared correlation with observed gauge data and predicts rainfall based on the first principal component of the combined data. In the test presented here, mean absolute errors from the MFR technique were 35% lower than the satellite estimates alone. VDELB assumes a linear solution to the wave equations and a Boussinesq atmosphere, and it may give unrealistic responses under extreme conditions. Nonetheless, the results presented here suggest that diagnostic models, driven by reanalysis data, can be used to improve satellite rainfall estimates in data-sparse regions.
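One plausible reading of the matched filter regression (MFR) step described above can be sketched as follows. The standardisation and the final regression on the first principal component are assumptions beyond the abstract's description, and the data are synthetic.

```python
import numpy as np

def mfr_fit_predict(X, y):
    """MFR sketch: rescale predictors so var = r^2, predict from first PC."""
    Xs = (X - X.mean(0)) / X.std(0)          # unit-variance predictors
    r = np.array([np.corrcoef(col, y)[0, 1] for col in Xs.T])
    Xw = Xs * r                              # each column now has variance r_i^2
    # First principal component of the weighted predictors
    _, _, Vt = np.linalg.svd(Xw - Xw.mean(0), full_matrices=False)
    pc1 = (Xw - Xw.mean(0)) @ Vt[0]
    # Simple linear regression of gauge data on PC1
    beta = np.polyfit(pc1, y, 1)
    return np.polyval(beta, pc1)

rng = np.random.default_rng(1)
n = 200
truth = rng.normal(size=n)                   # "observed gauge" rainfall anomaly
sat = truth + rng.normal(0, 0.5, n)          # satellite estimate (informative)
model = truth + rng.normal(0, 0.8, n)        # reanalysis field (less informative)
noise = rng.normal(size=n)                   # uninformative predictor
pred = mfr_fit_predict(np.column_stack([sat, model, noise]), truth)
print(round(np.corrcoef(pred, truth)[0, 1], 2))
```

The correlation weighting downweights the uninformative predictor, so the fused estimate tracks the gauge data more closely than either input alone, mirroring the error reduction reported for the southern Africa test.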
Comparison of estimated and measured sediment yield in the Gualala River
Matthew O’Connor; Jack Lewis; Robert Pennington
2012-01-01
This study compares quantitative erosion rate estimates developed at different spatial and temporal scales. It is motivated by the need to assess potential water quality impacts of a proposed vineyard development project in the Gualala River watershed. Previous erosion rate estimates were developed using sediment source assessment techniques by the North Coast Regional...
Observation of sea-ice dynamics using synthetic aperture radar images: Automated analysis
NASA Technical Reports Server (NTRS)
Vesecky, John F.; Samadani, Ramin; Smith, Martha P.; Daida, Jason M.; Bracewell, Ronald N.
1988-01-01
The European Space Agency's ERS-1 satellite, as well as others planned to follow, is expected to carry synthetic-aperture radars (SARs) over the polar regions beginning in 1989. A key component in utilization of these SAR data is an automated scheme for extracting the sea-ice velocity field from a time sequence of SAR images of the same geographical region. Two techniques for automated sea-ice tracking, image pyramid area correlation (hierarchical correlation) and feature tracking, are described. Each technique is applied to a pair of Seasat SAR sea-ice images. The results compare well with each other and with manually tracked estimates of the ice velocity. The advantages and disadvantages of these automated methods are pointed out. Using these ice velocity field estimates it is possible to construct one sea-ice image from the other member of the pair. By comparing the reconstructed image with the observed image, errors in the estimated velocity field can be recognized, and a useful probable-error display can be created automatically to accompany ice velocity estimates. It is suggested that this error display may be useful in segmenting the observed sea ice into regions that move as rigid plates and regions of significant ice-velocity shear and distortion.
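The area-correlation (template matching) step can be sketched as follows, with synthetic imagery standing in for SAR data; the block size and search window are illustrative choices.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two equal-sized blocks."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

rng = np.random.default_rng(3)
img1 = rng.normal(size=(64, 64))              # stand-in for SAR image at time t
true_shift = (5, -3)                          # ice displacement (rows, cols)
img2 = np.roll(img1, true_shift, axis=(0, 1)) # image at time t + dt

block = img1[24:40, 24:40]                    # 16x16 template from image 1
best, best_score = None, -np.inf
for dr in range(-8, 9):                       # exhaustive search window
    for dc in range(-8, 9):
        cand = img2[24 + dr:40 + dr, 24 + dc:40 + dc]
        s = ncc(block, cand)
        if s > best_score:
            best, best_score = (dr, dc), s
print(best, round(best_score, 3))
```

The best-scoring offset is the displacement estimate for that block; the hierarchical (image pyramid) variant runs this search coarse-to-fine to keep the window small.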
Estimates of wildland fire emissions
Yongqiang Liu; John J. Qu; Wanting Wang; Xianjun Hao
2013-01-01
Wildland fire emissions can significantly affect regional and global air quality, radiation, climate, and the carbon cycle. A fundamental and yet challenging prerequisite to understanding these environmental effects is to accurately estimate fire emissions. This chapter describes and analyzes fire emission calculations. Various techniques (field measurements, empirical...
Techniques for estimating magnitude and frequency of floods in Minnesota
Guetzkow, Lowell C.
1977-01-01
Estimating relations have been developed to provide engineers and designers with improved techniques for defining flow-frequency characteristics to satisfy hydraulic planning and design requirements. The magnitude and frequency of floods up to the 100-year recurrence interval can be determined for most streams in Minnesota by the methods presented. By multiple regression analysis, equations have been developed for estimating flood-frequency relations at ungaged sites on natural flow streams. Eight distinct hydrologic regions are delineated within the State, with boundaries defined generally by river basin divides. Regression equations are provided for each region which relate selected frequency floods to significant basin parameters. For main-stem streams, graphs are presented showing floods for selected recurrence intervals plotted against contributing drainage area. Flow-frequency estimates for intervening sites along the Minnesota River, Mississippi River, and the Red River of the North can be derived from these graphs. Flood-frequency characteristics are tabulated for 201 gaging stations having 10 or more years of record.
NASA Astrophysics Data System (ADS)
Cui, Y.; Brioude, J. F.; Angevine, W. M.; McKeen, S. A.; Henze, D. K.; Bousserez, N.; Liu, Z.; McDonald, B.; Peischl, J.; Ryerson, T. B.; Frost, G. J.; Trainer, M.
2016-12-01
Production of unconventional natural gas grew rapidly during the past ten years in the US which led to an increase in emissions of methane (CH4) and, depending on the shale region, nitrogen oxides (NOx). In terms of radiative forcing, CH4 is the second most important greenhouse gas after CO2. NOx is a precursor of ozone (O3) in the troposphere and nitrate particles, both of which are regulated by the US Clean Air Act. Emission estimates of CH4 and NOx from the shale regions are still highly uncertain. We present top-down estimates of CH4 and NOx surface fluxes from the Haynesville and Fayetteville shale production regions using aircraft data collected during the Southeast Nexus of Climate Change and Air Quality (SENEX) field campaign (June-July, 2013) and the Shale Oil and Natural Gas Nexus (SONGNEX) field campaign (March-May, 2015) within a mesoscale inversion framework. The inversion method is based on a mesoscale Bayesian inversion system using multiple transport models. EPA's 2011 National CH4 and NOx Emission Inventories are used as prior information to optimize CH4 and NOx emissions. Furthermore, the posterior CH4 emission estimates are used to constrain NOx emission estimates using a flux ratio inversion technique. Sensitivity of the posterior estimates to the use of off-diagonal terms in the error covariance matrices, the transport models, and prior estimates is discussed. Compared to the ground-based in-situ observations, the optimized CH4 and NOx inventories improve ground level CH4 and O3 concentrations calculated by the Weather Research and Forecasting mesoscale model coupled with chemistry (WRF-Chem).
Richardson, Claire; Rutherford, Shannon; Agranovski, Igor
2018-06-01
Given the significance of mining as a source of particulates, accurate characterization of emissions is important for the development of appropriate emission estimation techniques for use in modeling predictions and to inform regulatory decisions. The currently available emission estimation methods for Australian open-cut coal mines relate primarily to total suspended particulates and PM10 (particulate matter with an aerodynamic diameter <10 μm), and limited data are available relating to the PM2.5 (<2.5 μm) size fraction. To provide an initial analysis of the appropriateness of the currently available emission estimation techniques, this paper presents results of sampling completed at three open-cut coal mines in Australia. The monitoring data demonstrate that the particulate size fraction varies for different mining activities, and that the region in which the mine is located influences the characteristics of the particulates emitted to the atmosphere. The proportion of fine particulates in the sample increased with distance from the source, with the coarse fraction being a more significant proportion of total suspended particulates close to the source of emissions. In terms of particulate composition, the results demonstrate that the particulate emissions are predominantly sourced from naturally occurring geological material, and coal comprises less than 13% of the overall emissions. The size fractionation exhibited by the sampling data sets is similar to that adopted in current Australian emission estimation methods but differs from the size fractionation presented in the U.S. Environmental Protection Agency methodology. Development of region-specific emission estimation techniques for PM10 and PM2.5 from open-cut coal mines is necessary to allow accurate prediction of particulate emissions to inform regulatory decisions and for use in modeling predictions.
Comprehensive air quality monitoring was undertaken, and corresponding recommendations were provided.
An Alternate Method for Estimating Dynamic Height from XBT Profiles Using Empirical Vertical Modes
NASA Technical Reports Server (NTRS)
Lagerloef, Gary S. E.
1994-01-01
A technique is presented that applies modal decomposition to estimate dynamic height (0-450 db) from Expendable BathyThermograph (XBT) temperature profiles. Salinity-Temperature-Depth (STD) data are used to establish empirical relationships between vertically integrated temperature profiles and empirical dynamic height modes. These are then applied to XBT data to estimate dynamic height. A standard error of 0.028 dynamic meters is obtained for the waters of the Gulf of Alaska, an ocean region subject to substantial freshwater buoyancy forcing and with a T-S relationship that has considerable scatter. The residual error is a substantial improvement relative to the conventional T-S correlation technique when applied to this region. Systematic errors between estimated and true dynamic height were evaluated. The 20-year-long time series at Ocean Station P (50 deg N, 145 deg W) indicated weak variations in the error interannually, but not seasonally. There were no evident systematic alongshore variations in the error in the ocean boundary current regime near the perimeter of the Alaska gyre. The results prove satisfactory for the purpose of this work, which is to generate dynamic height from XBT data for co-analysis with satellite altimeter data, given that the altimeter height precision is likewise on the order of 2-3 cm. While the technique has not been applied to other ocean regions where the T-S relation has less scatter, it is suggested that it could provide some improvement over previously applied methods as well.
A technique for estimating time of concentration and storage coefficient values for Illinois streams
Graf, Julia B.; Garklavs, George; Oberg, Kevin A.
1982-01-01
Values of the unit hydrograph parameters time of concentration (TC) and storage coefficient (R) can be estimated for streams in Illinois by a two-step technique developed from data for 98 gaged basins in the State. The sum of TC and R is related to stream length (L) and main channel slope (S) by the relation (TC + R)e = 35.2 L^0.39 S^-0.78. The variable R/(TC + R) is not significantly correlated with drainage area, slope, or length, but does exhibit a regional trend. Regional values of R/(TC + R) are used with the computed values of (TC + R)e to solve for estimated values of time of concentration (TCe) and storage coefficient (Re). The use of the variable R/(TC + R) is thought to account for variations in unit hydrograph parameters caused by physiographic variables such as basin topography, flood-plain development, and basin storage characteristics. (USGS)
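A worked example of the two-step technique follows; the stream values and the regional R/(TC + R) ratio are illustrative assumptions, not values from the report (units follow the usual convention of miles, feet per mile, and hours).

```python
# Two-step estimation of unit hydrograph parameters for an ungaged stream.
L = 12.0          # main channel length, miles (assumed example value)
S = 8.0           # main channel slope, ft/mi (assumed example value)
ratio = 0.55      # regional value of R / (TC + R) (assumed example value)

# Step 1: estimate the sum from length and slope
tc_plus_r = 35.2 * L**0.39 * S**-0.78

# Step 2: split the sum using the regional ratio
Re = ratio * tc_plus_r          # storage coefficient estimate, hours
TCe = tc_plus_r - Re            # time of concentration estimate, hours
print(round(tc_plus_r, 1), round(TCe, 1), round(Re, 1))
```

The regional ratio does all the splitting: two basins with identical L and S but in different regions get the same (TC + R)e but different TCe and Re.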
Connectivity modeling and graph theory analysis predict recolonization in transient populations
NASA Astrophysics Data System (ADS)
Rognstad, Rhiannon L.; Wethey, David S.; Oliver, Hilde; Hilbish, Thomas J.
2018-07-01
Population connectivity plays a major role in the ecology and evolution of marine organisms. In these systems, connectivity of many species occurs primarily during a larval stage, when larvae are frequently too small and numerous to track directly. To indirectly estimate larval dispersal, ocean circulation models have emerged as a popular technique. Here we use regional ocean circulation models to estimate dispersal of the intertidal barnacle Semibalanus balanoides at its local distribution limit in Southwest England. We incorporate historical and recent repatriation events to provide support for our modeled dispersal estimates, which predict a recolonization rate similar to that observed in two recolonization events. Using graph theory techniques to describe the dispersal landscape, we identify likely physical barriers to dispersal in the region. Our results demonstrate the use of recolonization data to support dispersal models and how these models can be used to describe population connectivity.
Heidari, M.; Ranjithan, S.R.
1998-01-01
In using non-linear optimization techniques for estimation of parameters in a distributed ground water model, the initial values of the parameters and prior information about them play important roles. In this paper, the genetic algorithm (GA) is combined with the truncated-Newton search technique to estimate groundwater parameters for a confined steady-state ground water model. Use of prior information about the parameters is shown to be important in estimating correct or near-correct values of parameters on a regional scale. The amount of prior information needed for an accurate solution is estimated by evaluation of the sensitivity of the performance function to the parameters. For the example presented here, it is experimentally demonstrated that only one piece of prior information on the least sensitive parameter is sufficient to arrive at the global or near-global optimum solution. For hydraulic head data with measurement errors, the error in the estimation of parameters increases as the standard deviation of the errors increases. Results from our experiments show that, in general, the accuracy of the estimated parameters depends on the level of noise in the hydraulic head data and the initial values used in the truncated-Newton search technique.
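The hybrid global/local strategy can be sketched as follows; the test function is illustrative, and plain gradient descent stands in for the truncated-Newton local search, both simplifying assumptions.

```python
import numpy as np

def f(p):
    """Multimodal objective standing in for the groundwater misfit surface."""
    x, y = p
    return (x - 2)**2 + (y + 1)**2 + 1 - np.cos(4 * x) * np.cos(4 * y)

def grad(p):
    """Analytic gradient of f."""
    x, y = p
    gx = 2 * (x - 2) + 4 * np.sin(4 * x) * np.cos(4 * y)
    gy = 2 * (y + 1) + 4 * np.cos(4 * x) * np.sin(4 * y)
    return np.array([gx, gy])

rng = np.random.default_rng(2)

# Global stage: small genetic algorithm (truncation selection + Gaussian mutation)
pop = rng.uniform(-5, 5, size=(40, 2))
for _ in range(30):
    fit = np.array([f(p) for p in pop])
    parents = pop[np.argsort(fit)[:20]]
    kids = parents[rng.integers(0, 20, 40)] + rng.normal(0, 0.3, (40, 2))
    pop = np.vstack([parents, kids[:20]])
best = min(pop, key=f)

# Local stage: gradient-based refinement from the GA's best individual
x = best.copy()
for _ in range(500):
    x -= 0.05 * grad(x)

print(round(f(best), 3), round(f(x), 3))
```

The GA supplies a starting point in (or near) the global basin so the local search is not trapped by a poor initial guess, which is the role the paper assigns to prior information and good initial values.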
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lambie, F.W.; Yee, S.N.
The purpose of this and a previous project was to examine the feasibility of estimating intermediate grade uranium (0.01 to 0.05% U3O8) on the basis of existing, sparsely drilled holes. All data are from the Powder River Basin in Wyoming. DOE makes preliminary estimates of endowment by calculating an Average Area of Influence (AAI) based on densely drilled areas, multiplying that by the thickness of the mineralization and then dividing by a tonnage factor. The resulting tonnage of ore is then multiplied by the average grade of the interval to obtain the estimate of U3O8 tonnage. Total endowment is the sum of these values over all mineralized intervals in all wells in the area. In regions where wells are densely drilled and approximately regularly spaced, this technique approaches the classical polygonal estimation technique used to estimate ore reserves and should be fairly reliable. The method is conservative because: (1) in sparsely drilled regions a large fraction of the area is not considered to contribute to endowment; (2) there is a bias created by the different distributions of point grades and mining block grades. A conservative approach may be justified for purposes of ore reserve estimation, where large investments may hinge on local forecasts. But for estimates of endowment over areas as large as 1° by 2° quadrangles, or the nation as a whole, errors in local predictions are not critical as long as they tend to cancel, and a less conservative estimation approach may be justified. One candidate, developed for this study and described here, is called the contoured thickness technique. A comparison of estimates based on the contoured thickness approach with DOE calculations for five areas of Wyoming roll-fronts in the Powder River Basin is presented. The sensitivity of the technique to well density is examined, and the question of predicting intermediate grade endowment from data on higher grades is discussed.
Estimation of global anthropogenic dust aerosol using CALIOP satellite
NASA Astrophysics Data System (ADS)
Chen, B.; Huang, J.; Liu, J.
2014-12-01
Anthropogenic dust aerosols are those produced by human activity; in this paper they mainly come from cropland, pasture, and urban land. Because understanding of the emissions of anthropogenic dust is still very limited, a new technique for separating anthropogenic dust from natural dust using CALIPSO dust and planetary boundary layer height retrievals, along with a land use dataset, is introduced. Using this technique, the global distribution of dust is analyzed and the relative contributions of anthropogenic and natural dust sources to regional and global emissions are estimated. Local anthropogenic dust aerosol due to human activity, such as agriculture, industrial activity, transportation, and overgrazing, accounts for about 22.3% of the global continental dust load. Of these anthropogenic dust aerosols, more than 52.5% come from semi-arid and semi-wet regions. On the whole, anthropogenic dust emissions from East China and India are higher than those from other regions.
Numerical optimization in Hilbert space using inexact function and gradient evaluations
NASA Technical Reports Server (NTRS)
Carter, Richard G.
1989-01-01
Trust region algorithms provide a robust iterative technique for solving nonconvex unconstrained optimization problems, but in many instances it is prohibitively expensive to compute high-accuracy function and gradient values for the method. Of particular interest are inverse and parameter estimation problems, since function and gradient evaluations involve numerically solving large systems of differential equations. A global convergence theory is presented for trust region algorithms in which neither function nor gradient values are known exactly. The theory is formulated in a Hilbert space setting so that it can be applied to variational problems as well as the finite dimensional problems normally seen in the trust region literature. The conditions concerning allowable error are remarkably relaxed: for example, the gradient error condition is automatically satisfied if the error is orthogonal to the gradient approximation. A technique for estimating gradient error and improving the approximation is also presented.
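A minimal trust-region iteration of the kind analyzed above can be sketched with a Cauchy-point (steepest-descent) step and the standard accept/shrink/expand logic. The convergence theory concerns bounds on function/gradient error; this toy simply accepts whatever approximations the caller supplies. All names, tolerances, and update constants are illustrative, not the paper's algorithm.

```python
import numpy as np

def trust_region_minimize(f, grad, x0, delta0=1.0, eta=0.1, tol=1e-9, max_iter=500):
    """Toy trust-region minimizer using only the Cauchy-point step."""
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iter):
        g = grad(x)
        gn = np.linalg.norm(g)
        if gn < tol:
            break
        step = -min(delta / gn, 1.0) * g      # steepest-descent step, clipped to the region
        pred = -(g @ step)                    # predicted decrease of the linear model
        rho = (f(x) - f(x + step)) / pred     # actual-to-predicted reduction ratio
        if rho > eta:                         # accept the step
            x = x + step
        # expand or shrink the trust region based on how well the model predicted
        delta = 2.0 * delta if rho > 0.75 else (0.5 * delta if rho < 0.25 else delta)
    return x

# Converges to the minimizer of a simple quadratic:
x = trust_region_minimize(lambda v: (v ** 2).sum(), lambda v: 2.0 * v, [3.0, -4.0])
```

The ratio test is what makes trust-region methods robust to model error, which is why the inexact-evaluation theory fits this family naturally.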
Li, Y; Chappell, A; Nyamdavaa, B; Yu, H; Davaasuren, D; Zoljargal, K
2015-03-01
The ¹³⁷Cs technique for estimating net time-integrated soil redistribution is valuable for understanding the factors controlling soil redistribution by all processes. The literature on this technique is dominated by studies of individual fields and describes its typically time-consuming nature. We contend that the community making these studies has inappropriately assumed that many ¹³⁷Cs measurements are required, and hence that estimates of net soil redistribution can only be made at the field scale. Here, we support future studies of ¹³⁷Cs-derived net soil redistribution in applying their often limited resources across scales of variation (field, catchment, region, etc.) without compromising the quality of the estimates at any scale. We describe a hybrid, design-based and model-based, stratified random sampling design with composites to estimate the sampling variance, together with a cost model for fieldwork and laboratory measurements. Geostatistical mapping of net (1954-2012) soil redistribution as a case study on the Chinese Loess Plateau is compared with estimates for several other sampling designs popular in the literature. We demonstrate the cost-effectiveness of the hybrid design for spatial estimation of net soil redistribution. To demonstrate the limitations of current sampling approaches in cutting across scales of variation, we extrapolate our estimate of net soil redistribution across the region and show that, for the same resources, estimates from many fields could have been provided, which would elucidate the cause of differences within and between regional estimates. We recommend that future studies evaluate carefully the sampling design to consider the opportunity to investigate ¹³⁷Cs-derived net soil redistribution across scales of variation.
Reid, Malcolm J; Langford, Katherine H; Grung, Merete; Gjerde, Hallvard; Amundsen, Ellen J; Morland, Jorg; Thomas, Kevin V
2012-01-01
Objectives: A range of approaches are now available to estimate the level of drug use in the community, so it is desirable to critically compare results from the differing techniques. This paper presents a comparison of the results from three methods for estimating the level of cocaine use in the general population. Design: The comparison applies to a set of regional-scale sample survey questionnaires, a representative sample survey on drug use among drivers, and an analysis of the quantity of cocaine-related metabolites in sewage. Setting: 14 438 participants provided data for the set of regional-scale sample survey questionnaires; 2341 drivers provided oral-fluid samples; and untreated sewage from 570 000 people was analysed for biomarkers of cocaine use. All data were collected in Oslo, Norway. Results: 0.70 (0.36–1.03)% of drivers tested positive for cocaine use, which suggests a prevalence higher than the 0.22 (0.13–0.30)% (per day) figure derived from regional-scale survey questionnaires, but the degree to which cocaine consumption in the driver population follows the general population is an unanswered question. Despite the comparatively low prevalence figure, the survey questionnaires did provide estimates of the volume of consumption that are comparable with the amount of cocaine-related metabolites in sewage. Per-user consumption estimates are, however, highlighted as a significant source of uncertainty because little or no data on the quantities consumed by individuals are available, and much of the existing data are contradictory. Conclusions: The comparison carried out in the present study provides an excellent means of checking the quality and accuracy of the three measurement techniques because they each approach the problem from a different viewpoint. Together the three complementary techniques provide a well-balanced assessment of the drug-use situation in a given community and identify areas where more research is needed. PMID:23144259
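The per-user uncertainty flagged above comes from simple back-calculation arithmetic: a sewage metabolite load is scaled to community consumption, then divided by the estimated user count from prevalence surveys. The correction factor (molar-mass ratio over excretion fraction) and the metabolite load below are illustrative assumptions, not values from the study; only the population figure comes from the abstract.

```python
def community_consumption_mg(metabolite_load_mg_per_day, correction_factor):
    """Estimated drug consumed per day, scaled up from the measured metabolite load."""
    return metabolite_load_mg_per_day * correction_factor

def per_user_dose_mg(consumption_mg, population, prevalence):
    """Implied dose per user per day, the main source of uncertainty noted above."""
    return consumption_mg / (population * prevalence)

consumption = community_consumption_mg(250_000.0, 2.33)   # hypothetical load and factor
dose = per_user_dose_mg(consumption, 570_000, 0.0022)     # population from the abstract
```

Comparing the implied per-user dose against pharmacological expectations is one way the three techniques cross-check each other.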
NASA Astrophysics Data System (ADS)
Yoo, S. H.
2017-12-01
Monitoring seismologists have successfully used seismic coda for event discrimination and yield estimation for over a decade. In practice, seismologists typically analyze long-duration S-coda signals with high signal-to-noise ratios (SNR) at regional and teleseismic distances, since the single back-scattering model reasonably predicts decay of the late coda. However, seismic monitoring requirements are shifting towards smaller, locally recorded events that exhibit low SNR and short signal lengths. To be successful at characterizing events recorded at local distances, we must utilize the direct-phase arrivals, as well as the earlier part of the coda, which is dominated by multiple forward scattering. To address this problem, we have developed a new hybrid method, known as full-waveform envelope template matching, to improve predicted envelope fits over the entire waveform and account for direct-wave and early coda complexity. We accomplish this by including a multiple forward-scattering approximation in the envelope modeling of the early coda. The new hybrid envelope templates are designed to fit local and regional full waveforms and produce low-variance amplitude estimates, which will improve yield estimation and discrimination between earthquakes and explosions. To demonstrate the new technique, we applied our full-waveform envelope template-matching method to the six known North Korean (DPRK) underground nuclear tests and four aftershock events following the September 2017 test. We successfully discriminated the event types and estimated the yield for all six nuclear tests. We also applied the same technique to the 2015 Tianjin explosions in China, and another suspected low-yield explosion at the DPRK test site on May 12, 2010. Our results show that the new full-waveform envelope template-matching method significantly improves upon longstanding single-scattering coda prediction techniques.
More importantly, the new method allows monitoring seismologists to extend coda-based techniques to lower magnitude thresholds and low-yield local explosions.
Kjelstrom, L.C.
1998-01-01
Methods for estimating daily mean discharges for selected flow durations and flood discharge for selected recurrence intervals at ungaged sites in central Idaho were applied using data collected at streamflow-gaging stations in the area. The areal and seasonal variability of discharge from ungaged drainage basins may be described by estimating daily mean discharges that are exceeded 20, 50, and 80 percent of the time each month. At 73 gaging stations, mean monthly discharge was regressed with discharge at three points—20, 50, and 80—from daily mean flow-duration curves for each month. Regression results were improved by dividing the study area into six regions. Previously determined estimates of mean monthly discharge from about 1,200 ungaged drainage basins provided the basis for applying the developed techniques to the ungaged basins. Estimates of daily mean discharges that are exceeded 20, 50, and 80 percent of the time each month at ungaged drainage basins can be made by multiplying mean monthly discharges estimated at ungaged sites by a regression factor for the appropriate region. In general, the flow-duration data were less accurately estimated at discharges exceeded 80 percent of the time than at discharges exceeded 20 percent of the time. Curves drawn through the three points for each of the six regions were most similar in July and most different from December through March. Coefficients of determination of the regressions indicate that differences in mean monthly discharge largely explain differences in discharge at points on the daily mean flow-duration curve. Inherent in the method are errors in the technique used to estimate mean monthly discharge. Flood discharge estimates for selected recurrence intervals at ungaged sites upstream or downstream from gaging stations can be determined by a transfer technique. 
A weighted ratio of drainage area times flood discharge for selected recurrence intervals at the gaging station can be used to estimate flood discharge at the ungaged site. Best results likely are obtained when the difference between gaged and ungaged drainage areas is small.
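A common form of the drainage-area transfer described above scales the gaged flood discharge by the area ratio raised to an exponent. The exponent and the example values below are illustrative assumptions, not values from the report.

```python
def transfer_flood_discharge(q_gaged, area_gaged, area_ungaged, exponent=0.8):
    """Estimate flood discharge at an ungaged site near a gaging station.

    Scales the gaged discharge by (A_ungaged / A_gaged) ** exponent; the
    exponent value here is a hypothetical illustration.
    """
    return q_gaged * (area_ungaged / area_gaged) ** exponent

q = transfer_flood_discharge(1200.0, 350.0, 300.0)  # hypothetical cfs and mi^2
```

As the abstract notes, results are most reliable when the area ratio is close to 1, i.e. when the ungaged site is near the gage.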
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Costa, S. R. X.; Paiao, L. B. F.; Mendonca, F. J.; Shimabukuro, Y. E.; Duarte, V.
1983-01-01
The two-phase sampling technique was applied to estimate the area cultivated with sugar cane in an approximately 984 sq km pilot region of Campos. Correlation between existing aerial photography and LANDSAT data was used. The two-phase sampling estimate corresponded to 99.6% of the result obtained by aerial photography, taken as ground truth. This estimate has a standard deviation of 225 ha, which constitutes a coefficient of variation of 0.6%.
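Two-phase (double) sampling exploits exactly the correlation mentioned above: a large, cheap first-phase sample (here, LANDSAT classifications) and a smaller second-phase subsample with the expensive "truth" (aerial photography). A textbook regression estimator is sketched below with synthetic numbers; the study's actual estimator and data are not reproduced here.

```python
def two_phase_regression_estimate(x_phase1_mean, x_sub, y_sub):
    """Regression estimator of the mean of y using the phase-1 mean of x.

    x_sub, y_sub: paired second-phase observations (cheap x, expensive y).
    x_phase1_mean: mean of x over the large first-phase sample.
    """
    n = len(x_sub)
    x_bar = sum(x_sub) / n
    y_bar = sum(y_sub) / n
    sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(x_sub, y_sub))
    sxx = sum((x - x_bar) ** 2 for x in x_sub)
    b = sxy / sxx                      # slope of y on x in the subsample
    return y_bar + b * (x_phase1_mean - x_bar)

# Synthetic example: strong x-y correlation tightens the estimate.
est = two_phase_regression_estimate(27.0, [10, 20, 30, 40], [12, 19, 33, 41])
```

The stronger the correlation between the two data sources, the smaller the variance of the estimate, which is how a 0.6% coefficient of variation becomes achievable.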
Various approaches and tools exist to estimate local and regional PM2.5 impacts from a single emissions source, ranging from simple screening techniques to Gaussian based dispersion models and complex grid-based Eulerian photochemical transport models. These approache...
NASA Astrophysics Data System (ADS)
Mishra, Anoop; Rafiq, Mohammd
2017-12-01
This is the first attempt to merge highly accurate precipitation estimates from the Global Precipitation Measurement (GPM) mission with gap-free satellite observations from Meteosat to develop a regional rainfall monitoring algorithm that estimates heavy rainfall over India and nearby oceanic regions. A rainfall signature is derived from Meteosat observations and is co-located against rainfall from GPM to establish a relationship between rainfall and signature for various rainy seasons. This relationship can be used to monitor rainfall over India and nearby oceanic regions. The performance of this technique was tested by applying it to monitor heavy precipitation over India. The algorithm is able to detect heavy rainfall, although it overestimates the areal spread of rainfall compared with a rain-gauge-based rainfall product. This deficiency may arise from various factors, including uncertainty caused by the use of different sensors on different platforms (differences in viewing geometry between MFG and GPM), the poor relationship between warm (light) rain and IR brightness temperature, and weak characterization of orographic rain from the IR signature. We validated hourly rainfall estimated from the present approach with independent observations from GPM, and daily rainfall with the rain-gauge-based product from the India Meteorological Department (IMD). The present technique shows a Correlation Coefficient (CC) of 0.76, a bias of -2.72 mm, a Root Mean Square Error (RMSE) of 10.82 mm, a Probability of Detection (POD) of 0.74, a False Alarm Ratio (FAR) of 0.34, and a Skill Score of 0.36 against daily rainfall from the rain-gauge-based product of IMD at 0.25° resolution. However, FAR reduces to 0.24 for heavy rainfall events. Validation against rain gauge observations reveals that the present technique outperforms available satellite-based rainfall estimates for monitoring heavy rainfall over the Indian region.
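The verification scores quoted above (CC, bias, RMSE, POD, FAR) have standard definitions and are easy to compute from paired estimates and observations. The sample arrays and the rain/no-rain threshold below are synthetic illustrations, not the study's data.

```python
import math

def verification_scores(est, obs, rain_threshold=0.1):
    """CC, bias, RMSE, POD, and FAR for paired rainfall estimates/observations."""
    n = len(est)
    me, mo = sum(est) / n, sum(obs) / n
    bias = me - mo
    rmse = math.sqrt(sum((e - o) ** 2 for e, o in zip(est, obs)) / n)
    cov = sum((e - me) * (o - mo) for e, o in zip(est, obs))
    cc = cov / math.sqrt(sum((e - me) ** 2 for e in est) * sum((o - mo) ** 2 for o in obs))
    hits = sum(e >= rain_threshold and o >= rain_threshold for e, o in zip(est, obs))
    misses = sum(e < rain_threshold <= o for e, o in zip(est, obs))
    false_alarms = sum(o < rain_threshold <= e for e, o in zip(est, obs))
    pod = hits / (hits + misses)           # fraction of observed events detected
    far = false_alarms / (hits + false_alarms)  # fraction of detections that were wrong
    return cc, bias, rmse, pod, far

# Synthetic paired samples (mm):
cc, bias, rmse, pod, far = verification_scores(
    [0.0, 1.0, 2.0, 0.0, 5.0], [0.5, 1.0, 1.5, 0.0, 4.0]
)
```

POD and FAR are categorical scores (event/no-event at a threshold), while CC, bias, and RMSE measure agreement of the amounts, which is why the abstract reports both kinds.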
Geothermometric evaluation of geothermal resources in southeastern Idaho
NASA Astrophysics Data System (ADS)
Neupane, G.; Mattson, E. D.; McLing, T. L.; Palmer, C. D.; Smith, R. W.; Wood, T. R.; Podgorney, R. K.
2016-01-01
Southeastern Idaho exhibits numerous warm springs, warm water from shallow wells, and hot water from oil and gas test wells that indicate a potential for geothermal development in the area. We have estimated reservoir temperatures from chemical composition of thermal waters in southeastern Idaho using an inverse geochemical modeling technique (Reservoir Temperature Estimator, RTEst) that calculates the temperature at which multiple minerals are simultaneously at equilibrium while explicitly accounting for the possible loss of volatile constituents (e.g., CO2), boiling and/or water mixing. The temperature estimates in the region varied from moderately warm (59 °C) to over 175 °C. Specifically, hot springs near Preston, Idaho, resulted in the highest reservoir temperature estimates in the region.
Evaluation of a technique for satellite-derived area estimation of forest fires
NASA Technical Reports Server (NTRS)
Cahoon, Donald R., Jr.; Stocks, Brian J.; Levine, Joel S.; Cofer, Wesley R., III; Chung, Charles C.
1992-01-01
The advanced very high resolution radiometer (AVHRR) has been found useful for the location and monitoring of both smoke and fires because of its daily observations, the large geographical coverage of the imagery, and the spectral characteristics and spatial resolution of the instrument. This paper discusses the application of AVHRR data to assess the geographical extent of burning. Methods have been developed to estimate the area burned by analyzing the surface area affected by fire with AVHRR imagery. Characteristics of the AVHRR instrument, its orbit, field of view, and archived data sets are discussed relative to the unique surface area of each pixel. The errors associated with this surface area estimation technique are determined using AVHRR-derived area estimates of target regions with known sizes. This technique is used to evaluate the area burned during the Yellowstone fires of 1988.
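The "unique surface area of each pixel" matters because an AVHRR footprint grows rapidly off-nadir. A simplified flat-Earth sketch is below; real processing must account for Earth curvature, and the altitude and IFOV values are nominal assumptions rather than mission specifications.

```python
import math

def pixel_area_km2(altitude_km=833.0, ifov_rad=1.4e-3, scan_angle_deg=0.0):
    """Approximate ground footprint of one pixel (flat-Earth sketch).

    The along-scan dimension stretches as 1/cos^2(theta), the along-track
    dimension as 1/cos(theta), so edge-of-scan pixels cover several times
    the nadir area and must be weighted accordingly when summing burned area.
    """
    theta = math.radians(scan_angle_deg)
    along_scan = altitude_km * ifov_rad / math.cos(theta) ** 2
    along_track = altitude_km * ifov_rad / math.cos(theta)
    return along_scan * along_track

nadir = pixel_area_km2()                    # roughly 1.1 km x 1.1 km at nadir
edge = pixel_area_km2(scan_angle_deg=55.0)  # several times larger near the scan edge
```

Summing per-pixel areas (rather than counting pixels times a constant) is the essence of the area-estimation technique being evaluated.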
Comparison of Globally Complete Versions of GPCP and CMAP Monthly Precipitation Analyses
NASA Technical Reports Server (NTRS)
Curtis, Scott; Adler, Robert; Huffman, George
1998-01-01
In this study two global observational precipitation products, namely the Global Precipitation Climatology Project's (GPCP) community data set and CPC's Merged Analysis of Precipitation (CMAP), are compared on global to regional scales in the context of their different satellite and gauge data inputs and merger techniques. The average annual global precipitation rates, calculated from data common in regions/times to both GPCP and CMAP, are similar for the two. However, CMAP is larger than GPCP in the tropics because: (1) CMAP values in the tropics are adjusted month-by-month to atoll gauge data in the West Pacific, which are greater than any satellite observations used; and (2) CMAP is produced from a linear combination of data inputs, which tends to give higher values than the microwave emission estimates alone, to which the inputs are adjusted in the GPCP merger over the ocean. The CMAP month-to-month adjustment to the atolls also appears to introduce temporal variations throughout the tropics which are not detected by satellite-only products. On the other hand, GPCP is larger than CMAP in the high-latitude oceans, where CMAP includes the scattering-based microwave estimates, which are consistently smaller than the emission estimates used in both techniques. Also, in the polar regions GPCP transitions from the emission microwave estimates to the larger TOVS-based estimates. Finally, in high-latitude land areas GPCP can be significantly larger than CMAP because GPCP attempts to correct the gauge estimates for errors due to wind loss effects.
Bakun, W.H.; Scotti, O.
2006-01-01
Intensity assignments for 33 calibration earthquakes were used to develop intensity attenuation models for the Alps, Armorican, Provence, Pyrenees and Rhine regions of France. Intensity decreases with epicentral distance Δ most rapidly in the French Alps, Provence and Pyrenees regions, and least rapidly in the Armorican and Rhine regions. The comparable Armorican and Rhine region attenuation models are aggregated into a French stable continental region model, and the comparable Provence and Pyrenees region models are aggregated into a Southern France model. We analyse MSK intensity assignments using the technique of Bakun & Wentworth, which provides an objective method for estimating epicentral location and intensity magnitude MI. MI for the 1356 October 18 earthquake in the French stable continental region is 6.6 for a location near Basle, Switzerland, and moment magnitude M is 5.9-7.2 at the 95 per cent (±2σ) confidence level. MI for the 1909 June 11 Trevaresse (Lambesc) earthquake near Marseilles in the Southern France region is 5.5, and M is 4.9-6.0 at the 95 per cent confidence level. Bootstrap resampling techniques are used to calculate objective, reproducible 67 per cent and 95 per cent confidence regions for the locations of historical earthquakes. These confidence regions for location provide an attractive alternative to the macroseismic epicentre and qualitative location uncertainties used heretofore.
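The Bakun & Wentworth idea can be sketched as a grid search: for each trial epicenter, solve for the magnitude that best fits the intensity assignments under an attenuation model of the form I = a + b·M − c·log10(distance), and keep the trial with the smallest misfit. The coefficients below are illustrative placeholders, not the calibrated French models.

```python
import numpy as np

A, B, C = -1.0, 2.0, 3.0   # hypothetical attenuation coefficients

def best_magnitude(intensities, dists_km):
    """Least-squares magnitude for one trial epicenter under I = A + B*M - C*log10(d)."""
    resid = intensities - (A - C * np.log10(dists_km))
    return resid.mean() / B

def grid_search(sites, intensities, trial_points):
    """Return (rms_misfit, trial_point, magnitude) for the best-fitting trial."""
    best = None
    for pt in trial_points:
        d = np.hypot(sites[:, 0] - pt[0], sites[:, 1] - pt[1])
        d = np.maximum(d, 1.0)            # guard against log10 of near-zero distances
        m = best_magnitude(intensities, d)
        rms = np.sqrt(np.mean((intensities - (A + B * m - C * np.log10(d))) ** 2))
        if best is None or rms < best[0]:
            best = (rms, pt, m)
    return best

# Synthetic check: intensities generated from the model at a known epicenter.
sites = np.array([[5.0, 0.0], [0.0, 8.0], [12.0, 5.0], [3.0, 9.0]])
obs = A + B * 6.0 - C * np.log10(np.hypot(sites[:, 0], sites[:, 1]))
rms, epi, mag = grid_search(sites, obs, [(0.0, 0.0), (10.0, 10.0), (20.0, 0.0)])
```

The bootstrap confidence regions mentioned above come from repeating this search on resampled intensity data and contouring the resulting cloud of best-fit epicenters.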
Looking inside the microseismic cloud using seismic interferometry
NASA Astrophysics Data System (ADS)
Matzel, E.; Rhode, A.; Morency, C.; Templeton, D. C.; Pyle, M. L.
2015-12-01
Microseismicity provides a direct means of measuring the physical characteristics of active tectonic features such as fault zones. Thousands of microquakes are often associated with an active site, and this cloud of microseismicity helps define the tectonically active region. When the data are processed using novel geophysical techniques, we can isolate the energy sensitive to the faulting region itself. The virtual seismometer method (VSM) is a technique of seismic interferometry that provides precise estimates of the Green's function (GF) between earthquakes. In many ways the converse of ambient noise correlation, it is very sensitive to the source parameters (location, mechanism and magnitude) and to the Earth structure in the source region. In a region with 1000 microseisms, we can calculate roughly 500,000 waveforms sampling the active zone. At the same time, VSM collapses the computation domain down to the size of the cloud of microseismicity, often by 2-3 orders of magnitude. In simple terms, VSM involves correlating the waveforms from a pair of events recorded at an individual station and then stacking the results over all stations to obtain the final result. In the far field, when most of the stations in a network fall along a line between the two events, the result is an estimate of the GF between the two, modified by the source terms. In this geometry each earthquake is effectively a virtual seismometer recording all the others. When applied to microquakes, this alignment is often not met, and we also need to address the effects of the geometry between the two microquakes relative to each seismometer. Nonetheless, the technique is quite robust and highly sensitive to the microseismic cloud. Using data from the Salton Sea geothermal region, we demonstrate the power of the technique, illustrating our ability to scale it from the far field, where sources are well separated, to the near field, where their locations fall within each other's uncertainty ellipse.
VSM provides better illumination of the complex subsurface by generating precise, high-frequency estimates of the GF and resolution of seismic properties between every pair of events. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
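The "correlate per station, then stack" recipe described above has a very short numerical core. The sketch below omits everything a real VSM workflow needs (windowing, spectral normalization, geometry corrections), and the impulse traces are synthetic.

```python
import numpy as np

def vsm_stack(records_a, records_b):
    """Stack inter-event correlations over stations.

    records_a[i] and records_b[i] are the traces of events A and B recorded
    at station i; the stacked correlation approximates the inter-event
    Green's function, modified by the source terms.
    """
    stack = None
    for a, b in zip(records_a, records_b):
        xc = np.correlate(a, b, mode="full")   # inter-event correlation at one station
        stack = xc if stack is None else stack + xc
    return stack / len(records_a)

# Synthetic impulses: event A arrives at sample 30, event B at sample 40,
# identically at three stations, so the stack peaks at a lag of -10 samples.
a = np.zeros(100)
a[30] = 1.0
b = np.zeros(100)
b[40] = 1.0
stack = vsm_stack([a, a, a], [b, b, b])
```

With 1000 events, looping over all pairs yields the roughly 500,000 inter-event waveforms mentioned in the abstract.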
Design of surface-water data networks for regional information
Moss, Marshall E.; Gilroy, E.J.; Tasker, Gary D.; Karlinger, M.R.
1982-01-01
This report describes a technique, Network Analysis of Regional Information (NARI), and the existing computer procedures that have been developed for the specification of the regional information-cost relation for several statistical parameters of streamflow. The measure of information used is the true standard error of estimate of a regional logarithmic regression. The cost is a function of the number of stations at which hydrologic data are collected and the number of years for which the data are collected. The technique can be used to obtain either (1) a minimum cost network that will attain a prespecified accuracy and reliability or (2) a network that maximizes information given a set of budgetary and time constraints.
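The NARI information-cost tradeoff can be caricatured with a toy error model: the true standard error of a regional regression falls as stations and record years increase (down to a model-error floor), while cost rises with both. The error formula and every coefficient below are invented placeholders, not the actual NARI formulation.

```python
def regional_se(n_stations, n_years, model_error=0.15, sampling_coeff=0.6):
    """Toy error model: a model-error floor plus sampling error shrinking with station-years."""
    return (model_error ** 2 + sampling_coeff ** 2 / (n_stations * n_years)) ** 0.5

def cheapest_network(target_se, cost_per_station_year=1.0, max_n=50, max_y=50):
    """Minimum-cost (cost, n_stations, n_years) meeting the accuracy target."""
    best = None
    for n in range(1, max_n + 1):
        for y in range(1, max_y + 1):
            if regional_se(n, y) <= target_se:
                cost = n * y * cost_per_station_year
                if best is None or cost < best[0]:
                    best = (cost, n, y)
    return best

best = cheapest_network(0.16)   # cheapest network meeting a hypothetical target
```

Note the floor: no amount of additional data collection pushes the error below `model_error`, which anticipates the model-error theme NARI is used to study.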
Comparison of local- to regional-scale estimates of ground-water recharge in Minnesota, USA
Delin, G.N.; Healy, R.W.; Lorenz, D.L.; Nimmo, J.R.
2007-01-01
Regional ground-water recharge estimates for Minnesota were compared to estimates made on the basis of four local- and basin-scale methods. Three local-scale methods (unsaturated-zone water balance (UZWB), water-table fluctuations (WTF) using three approaches, and age dating of ground water) yielded point estimates of recharge that represent spatial scales from about 1 to about 1000 m2. A fourth method (RORA, a basin-scale analysis of streamflow records using a recession-curve-displacement technique) yielded recharge estimates at a scale of 10–1000s of km2. The RORA basin-scale recharge estimates were regionalized to estimate recharge for the entire State of Minnesota on the basis of a regional regression recharge (RRR) model that also incorporated soil and climate data. Recharge rates estimated by the RRR model compared favorably to the local and basin-scale recharge estimates. RRR estimates at study locations were about 41% less on average than the unsaturated-zone water-balance estimates, ranged from 44% greater to 12% less than estimates that were based on the three WTF approaches, were about 4% less than the age dating of ground-water estimates, and were about 5% greater than the RORA estimates. Of the methods used in this study, the WTF method is the simplest and easiest to apply. Recharge estimates made on the basis of the UZWB method were inconsistent with the results from the other methods. Recharge estimates using the RRR model could be a good source of input for regional ground-water flow models; RRR model results currently are being applied for this purpose in USGS studies elsewhere.
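The WTF method praised above as the simplest is, in its basic form, just specific yield times the water-table rise for each recharge event. The specific yield and event rises below are hypothetical values for illustration.

```python
def wtf_recharge_mm(specific_yield, rise_mm):
    """Water-table fluctuation estimate for one event: R = Sy * dH."""
    return specific_yield * rise_mm

# Hypothetical year: three recharge events observed in a well hydrograph.
annual = sum(wtf_recharge_mm(0.12, rise) for rise in [250.0, 400.0, 180.0])
```

The main practical difficulties are choosing the specific yield and isolating recharge-driven rises from other water-level fluctuations, which is why the study compares three WTF variants.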
Due to the lack of an analytical technique for directly quantifying the atmospheric concentrations of primary (OCpri) and secondary (OCsec) organic carbon aerosols, different indirect methods have been developed to estimate their concentrations. In this stu...
Sensitivity of FIA Volume Estimates to Changes in Stratum Weights and Number of Strata
James A. Westfall; Michael Hoppus
2005-01-01
In the Northeast region, the USDA Forest Service Forest Inventory and Analysis (FIA) program utilizes stratified sampling techniques to improve the precision of population estimates. Recently, interpretation of aerial photographs was replaced with classified remotely sensed imagery to determine stratum weights and plot stratum assignments. However, stratum weights...
NASA Astrophysics Data System (ADS)
Friedel, M. J.; Daughney, C.
2016-12-01
The development of a successful surface-groundwater management strategy depends on the quality of data provided for analysis. This study evaluates the statistical robustness when using a modified self-organizing map (MSOM) technique to estimate missing values for three hypersurface models: synoptic groundwater-surface water hydrochemistry, time-series of groundwater-surface water hydrochemistry, and mixed-survey (combination of groundwater-surface water hydrochemistry and lithologies) hydrostratigraphic unit data. These models of increasing complexity are developed and validated based on observations from the Southland region of New Zealand. In each case, the estimation method is sufficiently robust to cope with groundwater-surface water hydrochemistry vagaries due to sample size and extreme data insufficiency, even when >80% of the data are missing. The estimation of surface water hydrochemistry time series values enabled the evaluation of seasonal variation, and the imputation of lithologies facilitated the evaluation of hydrostratigraphic controls on groundwater-surface water interaction. The robust statistical results for groundwater-surface water models of increasing data complexity provide justification to apply the MSOM technique in other regions of New Zealand and abroad.
Ponderosa pine forest reconstruction: Comparisons with historical data
David W. Huffman; Margaret M. Moore; W. Wallace Covington; Joseph E. Crouse; Peter Z. Fule
2001-01-01
Dendroecological forest reconstruction techniques are used to estimate presettlement structure of northern Arizona ponderosa pine forests. To test the accuracy of these techniques, we remeasured 10 of the oldest forest plots in Arizona, a subset of 51 historical plots established throughout the region from 1909 to 1913, and compared reconstruction outputs to historical...
Challenges of model transferability to data-scarce regions (Invited)
NASA Astrophysics Data System (ADS)
Samaniego, L. E.
2013-12-01
Developing the ability to globally predict the movement of water on the land surface at spatial scales from 1 to 5 km constitutes one of the grand challenges in land surface modelling. Coping with this grand challenge implies that land surface models (LSM) should be able to make reliable predictions across locations and/or scales other than those used for parameter estimation. In addition, data scarcity and quality impose further difficulties in attaining reliable predictions of water and energy fluxes at the scales of interest. Current computational limitations also make it infeasible to exhaustively investigate the parameter space of LSM over large domains (e.g. greater than half a million square kilometers). Addressing these challenges requires holistic approaches that integrate the best techniques available for parameter estimation, field measurements and remotely sensed data at their native resolutions. An attempt to systematically address these issues is the multiscale parameter regionalisation (MPR) technique, which links high resolution land surface characteristics with effective model parameters. This technique requires a number of pedo-transfer functions and a much smaller number of global parameters (i.e. coefficients) to be inferred by calibration in gauged basins. The key advantage of this technique is the quasi-scale independence of the global parameters, which makes it possible to estimate global parameters at coarser spatial resolutions and then to transfer them to (ungauged) areas and scales of interest. In this study we show the ability of this technique to reproduce the observed water fluxes and states over a wide range of climate and land surface conditions, ranging from humid to semiarid and from sparse to dense forested regions. Results of transferability of global model parameters in space (from humid to semi-arid basins) and across scales (from coarser to finer) clearly indicate the robustness of this technique. Simulations with coarse data sets (e.g.
EOBS forcing 25x25 km2, FAO soil map 1:5000000) using parameters obtained with high resolution information (REGNIE forcing 1x1 km2, BUEK soil map 1:1000000) in different climatic regions indicate the potential of MPR for prediction in data-scarce regions. In this presentation, we will also discuss how the transferability of global model parameters across scales and locations helps to identify deficiencies in model structure and regionalization functions.
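The MPR idea (apply pedo-transfer functions at the data's native resolution, then upscale the resulting parameter field to the model grid) can be caricatured in a few lines. The pedo-transfer form, its coefficients, and the block-averaging upscaling operator below are all invented for illustration; MPR itself uses carefully chosen regularization functions and upscaling operators per parameter.

```python
import numpy as np

def pedo_transfer(sand_frac, clay_frac, g1, g2):
    """Invented pedo-transfer function: fine-scale parameter field from soil attributes.

    g1, g2 play the role of the few global coefficients calibrated in gauged basins.
    """
    return g1 * sand_frac + g2 * clay_frac

def upscale(field, block):
    """Block-average a square high-resolution field to the model resolution."""
    n = (field.shape[0] // block) * block
    f = field[:n, :n]
    return f.reshape(n // block, block, n // block, block).mean(axis=(1, 3))

# Hypothetical uniform soil maps; the coarse parameter field inherits the
# fine-scale information while only g1, g2 need calibration.
sand = np.full((4, 4), 0.5)
clay = np.full((4, 4), 0.2)
coarse = upscale(pedo_transfer(sand, clay, 1.0, 2.0), 2)
```

Because only the global coefficients are calibrated, the same (g1, g2) can be reused with soil maps of any resolution, which is the quasi-scale independence the abstract emphasizes.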
NASA Astrophysics Data System (ADS)
Adushkin, V. V.
A statistical procedure is described for estimating the yields of underground nuclear tests at the former Soviet Semipalatinsk test site using the peak amplitudes of short-period surface waves observed at near-regional distances (Δ < 150 km) from these explosions. This methodology is then applied to data recorded from a large sample of the Semipalatinsk explosions, including the Soviet JVE explosion of September 14, 1988, and it is demonstrated that it provides seismic estimates of explosion yield which are typically within 20% of the yields determined for these same explosions using more accurate, non-seismic techniques based on near-source observations.
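Seismic yield estimation of this kind ultimately rests on a log-linear magnitude-yield calibration: fit m = a + b·log10(W) on explosions of known yield, then invert for an unknown event. The coefficients and the example magnitude below are hypothetical, not the Semipalatinsk calibration.

```python
def estimate_yield_kt(magnitude, a=4.45, b=0.75):
    """Invert a hypothetical calibration m = a + b*log10(W) for yield W (kt)."""
    return 10.0 ** ((magnitude - a) / b)

w = estimate_yield_kt(6.1)   # hypothetical magnitude for illustration
```

Because the relation is logarithmic, a magnitude uncertainty of 0.05-0.1 units already maps into a yield uncertainty of tens of percent, which puts the quoted 20% accuracy in context.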
Williams-Sether, Tara
2015-08-06
Annual peak-flow frequency data from 231 U.S. Geological Survey streamflow-gaging stations in North Dakota and parts of Montana, South Dakota, and Minnesota, with 10 or more years of unregulated peak-flow record, were used to develop regional regression equations for exceedance probabilities of 0.5, 0.20, 0.10, 0.04, 0.02, 0.01, and 0.002 using generalized least-squares techniques. Updated peak-flow frequency estimates for 262 streamflow-gaging stations were developed using data through 2009 and log-Pearson Type III procedures outlined by the Hydrology Subcommittee of the Interagency Advisory Committee on Water Data. An average generalized skew coefficient was determined for three hydrologic zones in North Dakota. A StreamStats web application was developed to estimate basin characteristics for the regional regression equation analysis. Methods for estimating a weighted peak-flow frequency for gaged sites and ungaged sites are presented.
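The log-Pearson Type III procedure referenced above computes a quantile from the mean, standard deviation, and skew of the log-transformed annual peaks via a frequency factor. A stdlib-only sketch is below, using the Wilson-Hilferty approximation for the Pearson III frequency factor; the sample statistics in the example are hypothetical, and operational Bulletin 17-style analyses add skew weighting and low-outlier treatment not shown here.

```python
from statistics import NormalDist

def lp3_quantile(mean_log, std_log, skew_log, exceedance_prob):
    """Peak flow with the given annual exceedance probability.

    Inputs are sample statistics of log10(annual peak flows). Uses the
    Wilson-Hilferty approximation for the Pearson Type III frequency factor.
    """
    z = NormalDist().inv_cdf(1.0 - exceedance_prob)   # standard normal deviate
    g = skew_log
    if abs(g) < 1e-9:
        k = z                                          # zero skew: lognormal case
    else:
        k = (2.0 / g) * ((1.0 + g * z / 6.0 - g * g / 36.0) ** 3 - 1.0)
    return 10.0 ** (mean_log + k * std_log)

# Hypothetical station statistics; 0.01 exceedance = the "100-year" flood.
q100 = lp3_quantile(3.2, 0.25, 0.1, 0.01)
```

Quantiles like `q100` for many stations are what feed the generalized-least-squares regional regressions against basin characteristics.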
Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis
Peng, Zhenyun; Zhang, Yaohui
2014-01-01
Hair is a salient feature of the human face region and one of the important cues for face analysis. Accurate detection and representation of the hair region is one of the key components for automatic synthesis of human facial caricatures. In this paper, an automatic hair detection algorithm for the application of automatic synthesis of facial caricatures based on a single image is proposed. First, hair regions in training images are labeled manually, and the hair position prior distributions and hair color likelihood distribution function are estimated efficiently from these labels. Second, the energy function of the test image is constructed according to the estimated prior distributions of hair location and hair color likelihood. This energy function is then optimized using the graph cuts technique, and an initial hair region is obtained. Finally, the K-means algorithm and image postprocessing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair region detection accuracy is above 90%. The proposed algorithm is applied to a facial caricature synthesis system. Experiments show that with our proposed hair segmentation algorithm the facial caricatures are vivid and satisfying. PMID:24592182
Regional estimation of extreme suspended sediment concentrations using watershed characteristics
NASA Astrophysics Data System (ADS)
Tramblay, Yves; Ouarda, Taha B. M. J.; St-Hilaire, André; Poulin, Jimmy
2010-01-01
The number of stations monitoring daily suspended sediment concentration (SSC) has been decreasing since the 1980s in North America, while suspended sediment is considered a key variable for water quality. The objective of this study is to test the feasibility of regionalising extreme SSC, i.e. estimating extreme SSC values for ungauged basins. Annual maximum SSC for 72 rivers in Canada and the USA were modelled with probability distributions in order to estimate quantiles corresponding to different return periods. Regionalisation techniques, originally developed for flood prediction in ungauged basins, were tested using the climatic, topographic, land cover and soils attributes of the watersheds. Two approaches were compared, using either physiographic characteristics or the seasonality of extreme SSC to delineate the regions. Multiple regression models to estimate SSC quantiles as a function of watershed characteristics were built in each region and compared to a global model including all sites. Regional estimates of SSC quantiles were compared with the local values. Results show that regional estimation of extreme SSC is more efficient than a global regression model including all sites. Groups/regions of stations were identified, using either the watershed characteristics or the seasonality of occurrence of extreme SSC values, providing a method to better describe extreme SSC events. The most important variables for predicting extreme SSC are the percentage of clay in the soils, precipitation intensity and forest cover.
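The regional regression step described above (quantiles regressed on watershed characteristics) is ordinary least squares on log-transformed quantiles. The sketch below uses two invented predictors standing in for attributes like clay percentage and precipitation intensity, fitted on synthetic data generated from a known model.

```python
import numpy as np

def fit_quantile_regression(X, q):
    """Coefficients of log10(q) = b0 + b1*x1 + b2*x2 by ordinary least squares."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, np.log10(q), rcond=None)
    return coef

def predict_quantile(coef, x):
    """Back-transform the regression prediction to concentration units."""
    return 10.0 ** (coef[0] + coef[1:] @ np.asarray(x))

# Synthetic stations: columns are two hypothetical watershed characteristics.
X = np.array([[10.0, 1.0], [20.0, 2.0], [30.0, 1.5], [40.0, 3.0], [15.0, 2.5]])
q = 10.0 ** (0.5 + 0.03 * X[:, 0] + 0.2 * X[:, 1])   # exact known model
coef = fit_quantile_regression(X, q)
```

Fitting one such model per delineated region, instead of one global model over all 72 rivers, is the comparison the study carries out.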
Space, time, and the third dimension (model error)
Moss, Marshall E.
1979-01-01
The space-time tradeoff of hydrologic data collection (the ability to substitute spatial coverage for temporal extension of records or vice versa) is controlled jointly by the statistical properties of the phenomena that are being measured and by the model that is used to meld the information sources. The control exerted on the space-time tradeoff by the model and its accompanying errors has seldom been studied explicitly. The technique, known as Network Analyses for Regional Information (NARI), permits such a study of the regional regression model that is used to relate streamflow parameters to the physical and climatic characteristics of the drainage basin. The NARI technique shows that model improvement is a viable and sometimes necessary means of improving regional data collection systems. Model improvement provides an immediate increase in the accuracy of regional parameter estimation and also increases the information potential of future data collection. Model improvement, which can only be measured in a statistical sense, cannot be quantitatively estimated prior to its achievement; thus an attempt to upgrade a particular model entails a certain degree of risk on the part of the hydrologist.
Determination of a Limited Scope Network's Lightning Detection Efficiency
NASA Technical Reports Server (NTRS)
Rompala, John T.; Blakeslee, R.
2008-01-01
This paper outlines a modeling technique to map lightning detection efficiency variations over a region surveyed by a sparse array of ground based detectors. A reliable flash peak current distribution (PCD) for the region serves as the technique's base. This distribution is recast as an event probability distribution function. The technique then uses the PCD together with information on site signal detection thresholds, the type of solution algorithm used, and range attenuation to formulate the probability that a flash at a specified location will yield a solution. Applying this technique to the full region produces detection efficiency contour maps specific to the parameters employed. These contours facilitate a comparative analysis of each parameter's effect on the network's detection efficiency. In an alternate application, this modeling technique gives an estimate of the number, strength, and distribution of events going undetected. This approach leads to a variety of event density contour maps. This application is also illustrated. The technique's base PCD can be empirical or analytical. A process for formulating an empirical PCD specific to the region and network being studied is presented. A new method for producing an analytical representation of the empirical PCD is also introduced.
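The core calculation can be sketched with a Monte Carlo stand-in: draw peak currents from an assumed PCD, attenuate them with range to each site, and count how often enough sites exceed their threshold for the solver to produce a solution. The lognormal PCD, the 1/r attenuation, the thresholds, and the site layout are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
sites = np.array([[0.0, 0.0], [200.0, 0.0], [0.0, 200.0], [200.0, 200.0]])  # km
threshold = 0.05   # minimum sensed amplitude per site (arbitrary units)
min_sites = 3      # assumed solution algorithm needs >= 3 contributing sensors

def detection_prob(flash_xy, n_draws=4000):
    # peak-current draws: lognormal stand-in for the empirical PCD
    peak = rng.lognormal(mean=2.3, sigma=0.6, size=n_draws)
    r = np.linalg.norm(sites - flash_xy, axis=1)      # range to each site, km
    sensed = peak[:, None] / r[None, :]               # simple 1/r attenuation
    solvable = (sensed >= threshold).sum(axis=1) >= min_sites
    return solvable.mean()

p_center = detection_prob(np.array([100.0, 100.0]))   # inside the array
p_far = detection_prob(np.array([800.0, 800.0]))      # far outside the array
```

Evaluating `detection_prob` on a grid of flash locations would give the detection efficiency contour map the paper describes.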
NASA Astrophysics Data System (ADS)
Vista Wulandari, Ayu; Rizki Pratama, Khafid; Ismail, Prayoga
2018-05-01
Accurate, real-time rainfall data over wide spatial areas are still lacking because direct rainfall observations are not available in every region. Weather satellites have a very wide observational range and can be used to determine rainfall variability at better resolution than limited direct observations. This study utilizes Himawari-8 satellite data to estimate rainfall with the Convective Stratiform Technique (CST). The CST method separates convective and stratiform cloud components using infrared-channel satellite data; the cloud components are classified by slope because their physical and dynamical growth processes are very different. This research was conducted over the Bali area on December 14, 2016, by verifying the CST results against rainfall data from the Ngurah Rai Meteorology Station, Bali. The CST results were similar to the observations at the Ngurah Rai meteorological station, suggesting that the CST method can be used for rainfall estimation in the Bali region.
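A drastically simplified version of the convective/stratiform split can be shown with brightness-temperature thresholds on an infrared field (the real CST also uses local minima and slope; the thresholds and class rain rates below are assumptions, not the study's calibration):

```python
import numpy as np

# toy IR brightness-temperature field (K): colder pixels = deeper clouds
tb = np.array([[210.0, 225.0, 255.0],
               [205.0, 230.0, 270.0],
               [215.0, 240.0, 290.0]])

conv_thresh = 220.0    # assumed convective-core threshold (K)
strat_thresh = 250.0   # assumed stratiform cloud-top threshold (K)

convective = tb < conv_thresh
stratiform = (tb >= conv_thresh) & (tb < strat_thresh)

# assumed fixed rain rates per class (mm/h), a crude stand-in for CST calibration
rain = np.where(convective, 10.0, np.where(stratiform, 2.0, 0.0))
```

Regional calibration, as the abstract notes, amounts to tuning these thresholds and rates against local gauge data.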
Uncertainty estimates of a GRACE inversion modelling technique over Greenland using a simulation
NASA Astrophysics Data System (ADS)
Bonin, Jennifer; Chambers, Don
2013-07-01
The low spatial resolution of GRACE causes leakage, where signals in one location spread into nearby regions. Because of this leakage, simple techniques such as basin averages may give an incorrect estimate of the true mass change in a region. A fairly simple least squares inversion technique can be used to localize mass changes more specifically into a pre-determined set of basins of uniform internal mass distribution. However, the accuracy of these higher resolution basin mass amplitudes has not been determined, nor is it known how the distribution of the chosen basins affects the results. We use a simple `truth' model over Greenland as an example case to estimate the uncertainties of this inversion method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We determine that an appropriate level of smoothing (300-400 km) and process noise (0.30 cm² of water) yields the best results. The trends of the Greenland internal basins and Iceland can be reasonably estimated with this method, with average systematic errors of 3.5 cm yr-1 per basin. The largest mass losses found from GRACE RL04 occur in the coastal northwest (-19.9 and -33.0 cm yr-1) and southeast (-24.2 and -27.9 cm yr-1), with small mass gains (+1.4 to +7.7 cm yr-1) found across the northern interior. Acceleration of mass change is measurable at the 95 per cent confidence level in four northwestern basins, but not elsewhere in Greenland. Because of an insufficiently detailed distribution of basins across internal Canada, the trend estimates for Baffin and Ellesmere Islands are expected to be incorrect due to systematic errors caused by the inversion technique.
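The essence of the basin inversion is a linear least squares problem: a sensitivity matrix maps per-basin mass amplitudes into the smoothed, leaked observation field, and the inversion recovers the amplitudes. The matrix, noise level, and "true" basin masses below are synthetic placeholders, not GRACE values:

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_basins = 40, 3

# sensitivity matrix: how a unit mass change in each basin appears in each
# smoothed GRACE-like observation (rows ~ grid points); random for this sketch
G = rng.uniform(0.0, 1.0, (n_obs, n_basins))

true_mass = np.array([-25.0, 5.0, -10.0])           # cm/yr per basin ("truth" model)
obs = G @ true_mass + rng.normal(0, 0.5, n_obs)     # leaked, noisy observations

# least squares inversion back onto the pre-determined basins
est, *_ = np.linalg.lstsq(G, obs, rcond=None)
```

The paper's simulation asks, in effect, how errors in `G` (basin layout, smoothing, process noise) bias `est` away from `true_mass`.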
Geological Interpretation of PSInSAR Data at Regional Scale
Meisina, Claudia; Zucca, Francesco; Notti, Davide; Colombo, Alessio; Cucchi, Anselmo; Savio, Giuliano; Giannico, Chiara; Bianchi, Marco
2008-01-01
Results of a PSInSAR™ project carried out by the Regional Agency for Environmental Protection (ARPA) in the Piemonte Region (Northern Italy) are presented and discussed. A methodology is proposed for interpreting PSInSAR™ data at the regional scale that is easy for public administrations and civil protection authorities to use. The potential and limitations of the PSInSAR™ technique for detecting and monitoring ground movement at the regional scale are then assessed in relation to different geological processes and various geological environments. PMID:27873940
Estimating Crop Growth Stage by Combining Meteorological and Remote Sensing Based Techniques
NASA Astrophysics Data System (ADS)
Champagne, C.; Alavi-Shoushtari, N.; Davidson, A. M.; Chipanshi, A.; Zhang, Y.; Shang, J.
2016-12-01
Estimates of seeding, harvest and phenological growth stage of crops are important sources of information for monitoring crop progress and for crop yield forecasting. Growth stage has traditionally been estimated at the regional level through surveys, which rely on field staff to collect the information. Automated techniques to estimate growth stage have included agrometeorological approaches that use temperature and day length information to estimate accumulated heat and photoperiod, with thresholds used to determine when these stages are most likely. These approaches, however, are crop- and hybrid-dependent, and can give widely varying results depending on the method used, particularly if the seeding date is unknown. Methods to estimate growth stage from remote sensing have progressed greatly in the past decade, with time series information from the Normalized Difference Vegetation Index (NDVI) the most common approach. Time series NDVI provide information on growth stage through a variety of techniques, including fitting functions to a series of measured NDVI values or smoothing these values and using thresholds to detect changes in slope that are indicative of rapidly increasing or decreasing `greenness' in the vegetation cover. The key limitations of these techniques for agriculture are frequent cloud cover in optical data, which leads to errors in estimating local features in the time series function, and the incongruity between changes in greenness and traditional agricultural growth stages. There is great potential to combine meteorological approaches and remote sensing to overcome the limitations of each technique. This research will examine the accuracy of both meteorological and remote sensing approaches over several agricultural sites in Canada, and look at the potential to integrate these techniques to provide improved estimates of crop growth stage for common field crops.
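The slope-threshold idea on an NDVI time series can be sketched in a few lines: fit or smooth the series, then locate the steepest increase (green-up) and steepest decrease (senescence). The logistic green-up curve and linear dry-down below are invented stand-ins for a real smoothed series:

```python
import numpy as np

# toy weekly NDVI series over a 20-week growing season
weeks = np.arange(20)
ndvi = 0.15 + 0.6 / (1 + np.exp(-(weeks - 8)))                  # logistic green-up
ndvi[weeks > 15] = ndvi[15] - 0.08 * (weeks[weeks > 15] - 15)   # linear senescence

slope = np.gradient(ndvi)
greenup_week = int(np.argmax(slope))   # steepest greening ~ rapid growth stage
senesce_week = int(np.argmin(slope))   # steepest browning ~ dry-down / maturity
```

Cloud gaps in real optical data would first be filled or smoothed out, which is exactly where the abstract notes these methods break down.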
On the use of INS to improve Feature Matching
NASA Astrophysics Data System (ADS)
Masiero, A.; Guarnieri, A.; Vettore, A.; Pirotti, F.
2014-11-01
The continuous technological improvement of mobile devices opens the frontiers of Mobile Mapping systems to very compact systems, i.e. a smartphone or a tablet. This motivates the development of efficient 3D reconstruction techniques based on the sensors typically embedded in such devices, i.e. imaging sensors, GPS and an Inertial Navigation System (INS). Such methods usually exploit photogrammetry techniques (structure from motion) to provide an estimate of the geometry of the scene. 3D reconstruction techniques such as structure from motion rely on features properly matched across different images to compute the 3D positions of objects by means of triangulation. Hence, correct feature matching is of fundamental importance to ensure good quality 3D reconstructions. Matching methods are based on the appearance of features, which can change as a consequence of variations in camera position and orientation and in environment illumination. For this reason, several methods have been developed in recent years to provide feature descriptors robust (ideally invariant) to such variations, e.g. the Scale-Invariant Feature Transform (SIFT), Affine SIFT, the Hessian affine and Harris affine detectors, and Maximally Stable Extremal Regions (MSER). This work deals with the integration of information provided by the INS into the feature matching procedure: a previously developed navigation algorithm is used to constantly estimate the device position and orientation. This information is then exploited to estimate the transformation of feature regions between two camera views, which allows regions from different images associated with the same feature to be compared as if seen from the same point of view, significantly easing the comparison of feature characteristics and, consequently, improving matching.
SIFT-like descriptors are used to ensure good matching results in the presence of illumination variations and to compensate for the approximations in the estimation process.
Observation-Driven Estimation of the Spatial Variability of 20th Century Sea Level Rise
NASA Astrophysics Data System (ADS)
Hamlington, B. D.; Burgos, A.; Thompson, P. R.; Landerer, F. W.; Piecuch, C. G.; Adhikari, S.; Caron, L.; Reager, J. T.; Ivins, E. R.
2018-03-01
Over the past two decades, sea level measurements made by satellites have given clear indications of both global and regional sea level rise. Numerous studies have sought to leverage the modern satellite record and available historic sea level data provided by tide gauges to estimate past sea level rise, leading to several estimates for the 20th century trend in global mean sea level in the range between 1 and 2 mm/yr. On regional scales, few attempts have been made to estimate trends over the same time period. This is due largely to the inhomogeneity and quality of the tide gauge network through the 20th century, which render commonly used reconstruction techniques inadequate. Here, a new approach is adopted, integrating data from a select set of tide gauges with prior estimates of spatial structure based on historical sea level forcing information from the major contributing processes over the past century. The resulting map of 20th century regional sea level rise is optimized to agree with the tide gauge-measured trends, and provides an indication of the likely contributions of different sources to regional patterns. Of equal importance, this study demonstrates the sensitivities of this regional trend map to current knowledge and uncertainty of the contributing processes.
Results and Error Estimates from GRACE Forward Modeling over Greenland, Canada, and Alaska
NASA Astrophysics Data System (ADS)
Bonin, J. A.; Chambers, D. P.
2012-12-01
Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Greenland and Antarctica. However, the accuracy of the forward model technique has not been determined, nor is it known how the distribution of the local basins affects the results. We use a "truth" model composed of hydrology and ice-melt slopes as an example case, to estimate the uncertainties of this forward modeling method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We then apply these optimal parameters in a forward model estimate created from RL05 GRACE data. We compare the resulting mass slopes with the expected systematic errors from the simulation, as well as GIA and basic trend-fitting uncertainties. We also consider whether specific regions (such as Ellesmere Island and Baffin Island) can be estimated reliably using our optimal basin layout.
Advances in the regionalization approach: geostatistical techniques for estimating flood quantiles
NASA Astrophysics Data System (ADS)
Chiarello, Valentina; Caporali, Enrica; Matthies, Hermann G.
2015-04-01
Knowledge of peak flow discharges and associated floods is of primary importance in engineering practice for the planning of water resources and for risk assessment. Streamflow characteristics are usually estimated from measurements of river discharge at stream gauging stations. However, the lack of observations at sites of interest, as well as measurement inaccuracies, inevitably lead to the need for predictive models. Regional analysis is a classical approach to estimating river flow characteristics at sites where little or no data exist. Specific techniques are needed to regionalize the hydrological variables over the area considered. Top-kriging, or topological kriging, is a kriging interpolation procedure that takes into account the geometric organization and structure of the hydrographic network, the catchment area, and the nested nature of catchments. The continuous processes in space defined for the point variables are represented by a variogram; in Top-kriging, however, the measurements are not point values but are defined over a non-zero catchment area. Top-kriging is applied here over the geographical space of the Tuscany Region, in Central Italy. The analysis is carried out on the discharge data of 57 consistent runoff gauges, recorded from 1923 to 2014. Top-kriging also gives an estimate of the prediction uncertainty in addition to the prediction itself. The results are validated using a cross-validation procedure implemented in the package rtop of the open-source statistical environment R, and are compared through different error measures. Top-kriging seems to perform better in nested and larger-scale catchments, but not for headwater catchments or where there is high variability among neighbouring catchments.
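The point-kriging core that Top-kriging builds on can be sketched as the ordinary kriging system: variogram values among observations plus a Lagrange multiplier for unbiasedness. Top-kriging additionally integrates the variogram over catchment areas; this sketch uses point supports, and the exponential variogram parameters and toy gauge values are assumptions:

```python
import numpy as np

def variogram(h, nugget=0.0, sill=1.0, vrange=50.0):
    # exponential variogram model (practical range ~ vrange)
    return nugget + sill * (1.0 - np.exp(-3.0 * h / vrange))

def ordinary_krige(xy_obs, z_obs, xy_new):
    n = len(xy_obs)
    d = np.linalg.norm(xy_obs[:, None] - xy_obs[None], axis=-1)
    # ordinary kriging system with Lagrange multiplier for unbiasedness
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0
    d0 = np.linalg.norm(xy_obs - xy_new, axis=-1)
    b = np.append(variogram(d0), 1.0)
    w = np.linalg.solve(A, b)           # weights + multiplier
    return w[:n] @ z_obs

# four toy gauges at the corners of a 10 km square; predict at the centre
xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
z = np.array([2.0, 4.0, 4.0, 6.0])      # e.g. flood quantiles (arbitrary units)
z_center = ordinary_krige(xy, z, np.array([5.0, 5.0]))
```

By symmetry the centre prediction equals the mean of the four gauges; the solved system also yields the kriging variance, which is the prediction-uncertainty estimate the abstract mentions.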
NASA Astrophysics Data System (ADS)
Keeler, D. G.; Rupper, S.; Forster, R. R.; Miège, C.; Brewer, S.; Koenig, L.
2017-12-01
The West Antarctic Ice Sheet (WAIS) could be a substantial source of future sea level rise, with 3+ meters of potential increase stored in the ice sheet. Adequate predictions of WAIS contributions, however, depend on well-constrained surface mass balance estimates for the region. Given the sparsity of available data, such estimates are tenuous. Although new data are periodically added, further research (both to collect more data and better utilize existing data) is critical to addressing these issues. Here we present accumulation data from 9 shallow firn cores and 600 km of Ku band radar traces collected as part of the Satellite Era Antarctic Traverse (SEAT) 2011/2012 field season. Using these data, combined with similar data collected during the SEAT 2010/2011 field season, we investigate the spatial variability in accumulation across the WAIS Divide and surrounding regions. We utilize seismic interpretation and 3D visualization tools to investigate the extent and variations of laterally continuous internal horizons in the radar profiles, and compare the results to nearby firn cores. Previous results show that clearly visible, laterally continuous horizons in radar returns in this area do not always represent annual accumulation isochrones, but can instead represent multi-year or sub-annual events. The automated application of Bayesian inference techniques to averaged estimates of multiple adjacent radar traces, however, can estimate annually-resolved independent age-depth scales for these radar data. We use these same automated techniques on firn core isotopic records to infer past snow accumulation rates, allowing a direct comparison with the radar-derived results. Age-depth scales based on manual annual-layer counting of geochemical and isotopic species from these same cores provide validation for the automated approaches. Such techniques could theoretically be applied to additional radar/core data sets in polar regions (e.g. 
Operation IceBridge), thereby increasing the number of high resolution accumulation records available in these data-sparse regions. An increased understanding of the variability in magnitude and past rates of surface mass balance can provide better constraints on sea level projections and more precise context for present-day and future observations in these regions.
Classification of Regional Ionospheric Disturbances Based on Support Vector Machines
NASA Astrophysics Data System (ADS)
Begüm Terzi, Merve; Arikan, Feza; Arikan, Orhan; Karatay, Secil
2016-07-01
The ionosphere is an anisotropic, inhomogeneous, time-varying and spatio-temporally dispersive medium whose parameters can almost always be estimated only through indirect measurements. Geomagnetic, gravitational, solar or seismic activities cause variations of the ionosphere at various spatial and temporal scales. This complex spatio-temporal variability is challenging to identify due to the extensive range in period, duration, amplitude and frequency of disturbances. Since geomagnetic and solar indices such as Disturbance storm time (Dst), F10.7 solar flux, Sun Spot Number (SSN), Auroral Electrojet (AE), Kp and W-index provide information about variability on a global scale, the identification and classification of regional disturbances poses a challenge. The main aim of this study is to identify the regional effects of global geomagnetic storms and classify them according to their risk levels. For this purpose, Total Electron Content (TEC) estimated from GPS receivers, one of the major parameters of the ionosphere, is used to model the regional and local variability that differs from global activity, along with solar and geomagnetic indices. For the automated classification of regional disturbances, a classification technique based on the Support Vector Machine (SVM), a robust machine learning technique that has found widespread use, is proposed. SVM is a supervised learning model for classification, with associated learning algorithms that analyze data and recognize patterns. In addition to performing linear classification, SVM can efficiently perform nonlinear classification by embedding data into higher dimensional feature spaces. The performance of the developed classification technique is demonstrated for the midlatitude ionosphere over Anatolia using TEC estimates generated from GPS data provided by the Turkish National Permanent GPS Network (TNPGN-Active) for the solar maximum year of 2011.
By applying the developed classification technique to the Global Ionospheric Map (GIM) TEC data provided by the NASA Jet Propulsion Laboratory (JPL), it is shown that SVM can be a suitable learning method to detect anomalies in Total Electron Content (TEC) variations. This study is supported by TUBITAK project 114E541 as part of the Scientific and Technological Research Projects Funding Program (1001).
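The linear-SVM idea can be sketched without a library by minimizing the hinge loss with subgradient descent on two synthetic "quiet" vs "disturbed" feature clusters (e.g. TEC deviation from a quiet-day median and its rate of change; the features, cluster locations, and hyperparameters are all invented):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=500):
    # subgradient descent on (lam/2)||w||^2 + mean hinge loss, labels in {-1,+1}
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                       # points inside the margin
        grad_w = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / n
        grad_b = -y[viol].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

rng = np.random.default_rng(0)
quiet = rng.normal([0.5, 0.2], 0.3, (40, 2))     # quiet-time feature vectors
storm = rng.normal([3.0, 2.5], 0.3, (40, 2))     # disturbed-time feature vectors
X = np.vstack([quiet, storm])
y = np.hstack([-np.ones(40), np.ones(40)])

w, b = train_linear_svm(X, y)
acc = (np.sign(X @ w + b) == y).mean()
```

The study's nonlinear case corresponds to replacing the inner products here with a kernel, which a full SVM solver handles directly.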
Subsurface structures of buried features in the lunar Procellarum region
NASA Astrophysics Data System (ADS)
Wang, Wenrui; Heki, Kosuke
2017-07-01
The Gravity Recovery and Interior Laboratory (GRAIL) mission revealed a number of features showing strong gravity anomalies without prominent topographic signatures in the lunar Procellarum region. These features, located in different geologic units, are considered to have complex subsurface structures reflecting different evolution processes. Using the GRAIL level-1 data, we estimated the free-air and Bouguer gravity anomalies in several selected regions that include such intriguing features. With a three-dimensional inversion technique, we recovered subsurface density structures in these regions.
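The step from free-air to Bouguer anomaly subtracts the gravitational attraction of the topography, to first order an infinite slab: g_B = g_FA − 2πGρh. The density and height below are illustrative (ρ = 2560 kg/m³ is a typical lunar crustal density, not a value from this study):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
rho = 2560.0       # assumed lunar crustal density, kg/m^3
h = 1500.0         # topographic height above the reference surface, m

slab = 2 * math.pi * G * rho * h   # Bouguer slab attraction, m/s^2
slab_mgal = slab * 1e5             # 1 mGal = 1e-5 m/s^2
```

A strong Bouguer anomaly with little topography, as for the Procellarum features, implies the density contrast lies below the surface rather than in relief.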
Estimation of accuracy of earth-rotation parameters in different frequency bands
NASA Astrophysics Data System (ADS)
Vondrak, J.
1986-11-01
The accuracies of earth-rotation parameters as determined by the five observational techniques now available (optical astrometry /OA/, Doppler tracking of satellites /DTS/, satellite laser ranging /SLR/, very long baseline interferometry /VLBI/ and lunar laser ranging /LLR/) are estimated. The differences between the individual techniques in all possible combinations, separated by appropriate filters into three frequency bands, were used to estimate the accuracies of the techniques for periods from 0 to 200 days, from 200 to 1000 days, and longer than 1000 days. It is shown that for polar motion the most accurate results are obtained with VLBI and SLR, especially in the short-period region; OA and DTS are less accurate, but at longer periods the differences in accuracy are less pronounced. The accuracies of UT1-UTC as determined by OA, VLBI and LLR are practically equivalent, the differences being less than 40 percent.
An analysis of production and costs in high-lead yarding.
Magnus E. Tennas; Robert H. Ruth; Carl M. Berntsen
1955-01-01
In recent years loggers and timber owners have needed better information for estimating logging costs in the Douglas-fir region. Brandstrom's comprehensive study, published in 1933 (1), has long been used as a guide in making cost estimates. But the use of new equipment and techniques and an overall increase in logging costs have made it increasingly difficult to...
NASA Astrophysics Data System (ADS)
Uniyal, D.; Kimothi, M. M.; Bhagya, N.; Ram, R. D.; Patel, N. K.; Dhaundiya, V. K.
2014-11-01
Wheat is an economically important Rabi crop for the state, grown on around 26% of the total available agricultural area. Wheat productivity differs between the hilly and tarai regions: it is lower in the hilly region because of terrace cultivation, traditional agricultural practices, small land holdings, variation in physiography, topsoil erosion, lack of proper irrigation, etc. Pre-harvest acreage/yield/production estimation of major crops has conventionally been done by the crop-cutting method, which is biased, inaccurate and time consuming. Remote sensing data, with multi-temporal and multi-spectral capabilities, have added a new dimension to crop discrimination and acreage/yield/production estimation in recent years. In view of this, the Uttarakhand Space Applications Centre (USAC), Dehradun, in collaboration with the Space Applications Centre (SAC), ISRO, Ahmedabad and the Uttarakhand State Agriculture Department, has developed techniques for crop discrimination and pre-harvest estimation of wheat acreage, yield and production. In the first phase, five districts (Dehradun, Almora, Udham Singh Nagar, Pauri Garhwal and Haridwar) with distinct physiography, i.e. hilly and plain regions, were selected for testing and verification of the techniques using IRS (Indian Remote Sensing Satellites) LISS-III and LISS-IV data for the Rabi season of 2008-09; all 13 districts of Uttarakhand from 2009-14, along with ground data, were used for detailed analysis. Five methods were developed, i.e. NDVI (Normalized Difference Vegetation Index), supervised classification, spatial modeling, a masking-out method, and Visual Basic programming, using multitemporal satellite data of the Rabi season along with collateral and ground data.
These methods were used for wheat discrimination and pre-harvest acreage estimation, and the results were compared with those of the Bureau of Estimation Statistics (BES). Of the five methods, the wheat areas estimated by spatial modeling and by Visual Basic programming were found to be closest to the BES figures. In the hilly region, however, many fields fell in shadow, making accurate estimation difficult; a frequency-distribution-curve method was therefore used, with a frequency range chosen to discriminate wheat pixels from other pixels, and the digitized regions gave good results. For yield estimation, an algorithm was developed using soil characteristics (texture, depth, drainage), temperature, rainfall and historical yield data. Production was then estimated by multiplying the estimated yield by the crop acreage. The deviation of the acreage estimates from BES is around 3.28%, 2.46%, 3.45%, 1.56%, 1.2% and 1.6%, and the deviation of the production estimates is around 4.98%, 3.66%, 3.21%, 3.1%, NA and 2.9%, for the years 2008-09, 2009-10, 2010-11, 2011-12, 2012-13 and 2013-14, respectively (the final figure had not yet been declared by the state Agriculture Department at the time of writing). The estimated data have been provided to the State Agriculture Department for their use. Forecasting production before harvest facilitates the formulation of workable marketing strategies and better crop export/import decisions, improving the economic condition of the state. Yield estimation helps the Agriculture Department assess the productivity of land for a specific crop, and pre-harvest estimates of wheat acreage and production provide reliable and timely figures that enable administrators and planners to take strategic decisions on import-export policy and trade negotiations.
Estimating Bacterial Diversity for Ecological Studies: Methods, Metrics, and Assumptions
Birtel, Julia; Walser, Jean-Claude; Pichon, Samuel; Bürgmann, Helmut; Matthews, Blake
2015-01-01
Methods to estimate microbial diversity have developed rapidly in an effort to understand the distribution and diversity of microorganisms in natural environments. For bacterial communities, the 16S rRNA gene is the phylogenetic marker gene of choice, but most studies select only a specific region of the 16S rRNA to estimate bacterial diversity. Whereas biases derived from DNA extraction, primer choice and PCR amplification are well documented, we here address how the choice of variable region can influence a wide range of standard ecological metrics, such as species richness, phylogenetic diversity, β-diversity and rank-abundance distributions. We have used Illumina paired-end sequencing to estimate the bacterial diversity of 20 natural lakes across Switzerland derived from three trimmed variable 16S rRNA regions (V3, V4, V5). Species richness, phylogenetic diversity, community composition, β-diversity, and rank-abundance distributions differed significantly between 16S rRNA regions. Overall, patterns of diversity quantified by the V3 and V5 regions were more similar to one another than those assessed by the V4 region. Similar results were obtained when analyzing the datasets with different sequence similarity thresholds during sequence clustering, and when the same analysis was applied to a reference dataset of sequences from the Greengenes database. In addition, we measured species richness from the same lake samples using ARISA fingerprinting, but did not find a strong relationship between species richness estimated by Illumina and ARISA. We conclude that the selection of 16S rRNA region significantly influences the estimation of bacterial diversity and species distributions, and that caution is warranted when comparing data from different variable regions as well as when using different sequencing techniques. PMID:25915756
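One of the standard metrics the abstract names, Shannon diversity, is simple to compute from OTU counts; the two toy communities below (invented counts) show how evenness changes the index even at equal richness:

```python
import numpy as np

def shannon(counts):
    # Shannon diversity H' = -sum(p * ln p) over observed OTUs
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

even = np.array([25, 25, 25, 25])     # 4 equally abundant OTUs
skewed = np.array([97, 1, 1, 1])      # same richness, one dominant OTU

h_even = shannon(even)                # = ln(4) for a perfectly even community
h_skew = shannon(skewed)
```

The study's point is that `counts` themselves shift depending on which 16S variable region (V3, V4, V5) is sequenced, so every downstream metric shifts with them.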
Seasonal and regional variations of primary (OCpri) and secondary (OCsec) organic carbon aerosols across the continental U.S. for the year 2001 were examined by a semi-empirical technique using observed OC and elemental carbon (EC) data from 142 routine moni...
Rainfall estimation for real time flood monitoring using geostationary meteorological satellite data
NASA Astrophysics Data System (ADS)
Veerakachen, Watcharee; Raksapatcharawong, Mongkol
2015-09-01
Rainfall estimation by geostationary meteorological satellite data provides good spatial and temporal resolutions. This is advantageous for real time flood monitoring and warning systems. However, a rainfall estimation algorithm developed in one region needs to be adjusted for another climatic region. This work proposes computationally-efficient rainfall estimation algorithms based on an Infrared Threshold Rainfall (ITR) method calibrated with regional ground truth. Hourly rain gauge data collected from 70 stations around the Chao-Phraya river basin were used for calibration and validation of the algorithms. The algorithm inputs were derived from FY-2E satellite observations consisting of infrared and water vapor imagery. The results were compared with the Global Satellite Mapping of Precipitation (GSMaP) near real time product (GSMaP_NRT) using the probability of detection (POD), root mean square error (RMSE) and linear correlation coefficient (CC) as performance indices. Comparison with the GSMaP_NRT product for real time monitoring purpose shows that hourly rain estimates from the proposed algorithm with the error adjustment technique (ITR_EA) offers higher POD and approximately the same RMSE and CC with less data latency.
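The three performance indices used above (POD, RMSE, CC) can be computed directly from paired gauge/satellite series; the six hourly values below are invented for illustration:

```python
import numpy as np

gauge = np.array([0.0, 2.0, 5.0, 0.0, 10.0, 1.0])   # mm/h, rain gauge
est = np.array([0.0, 1.5, 6.0, 0.5, 8.0, 0.0])      # mm/h, satellite estimate

# probability of detection: fraction of observed rain hours the estimate catches
hits = ((gauge > 0) & (est > 0)).sum()
misses = ((gauge > 0) & (est == 0)).sum()
pod = hits / (hits + misses)

rmse = np.sqrt(((est - gauge) ** 2).mean())          # root mean square error
cc = np.corrcoef(gauge, est)[0, 1]                   # linear correlation
```

In the study these indices are computed over 70 gauges to compare the calibrated ITR variants against GSMaP_NRT.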
One technique for refining the global Earth gravity models
NASA Astrophysics Data System (ADS)
Koneshov, V. N.; Nepoklonov, V. B.; Polovnev, O. V.
2017-01-01
The results of theoretical and experimental research on a technique for refining global Earth geopotential models such as EGM2008 in continental regions are presented. The discussed technique is based on high-resolution satellite data for the Earth's surface topography, which enables allowance for the fine structure of the Earth's gravitational field without additional gravimetry data. The experimental studies are conducted using the example of the new GGMplus global gravity model of the Earth, with a resolution of about 0.5 km, which is obtained by expanding the EGM2008 model to degree 2190 with corrections for the topography calculated from the SRTM data. The GGMplus and EGM2008 models are compared with regional geoid models in 21 regions of North America, Australia, Africa, and Europe. The obtained estimates largely support the possibility of refining global geopotential models such as EGM2008 by the procedure implemented in GGMplus, particularly in regions with relatively high elevation differences.
Computerized image analysis: estimation of breast density on mammograms
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Chan, Heang-Ping; Petrick, Nicholas; Sahiner, Berkman; Helvie, Mark A.; Roubidoux, Marilyn A.; Hadjiiski, Lubomir M.; Goodsitt, Mitchell M.
2000-06-01
An automated image analysis tool is being developed for estimation of mammographic breast density, which may be useful for risk estimation or for monitoring breast density change in a prevention or intervention program. A mammogram is digitized using a laser scanner and the resolution is reduced to a pixel size of 0.8 mm X 0.8 mm. Breast density analysis is performed in three stages. First, the breast region is segmented from the surrounding background by an automated breast boundary-tracking algorithm. Second, an adaptive dynamic range compression technique is applied to the breast image to reduce the range of the gray level distribution in the low frequency background and to enhance the differences in the characteristic features of the gray level histogram for breasts of different densities. Third, rule-based classification is used to classify the breast images into several classes according to the characteristic features of their gray level histogram. For each image, a gray level threshold is automatically determined to segment the dense tissue from the breast region. The area of segmented dense tissue as a percentage of the breast area is then estimated. In this preliminary study, we analyzed the interobserver variation of breast density estimation by two experienced radiologists using the BI-RADS lexicon. The radiologists' visually estimated percent breast densities were compared with the computer's calculation. The results demonstrate the feasibility of estimating mammographic breast density using computer vision techniques and its potential to improve accuracy and reproducibility in comparison with subjective visual assessment by radiologists.
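The third stage reduces, in essence, to thresholding the gray-level values inside the segmented breast and reporting the dense fraction of the breast area. A minimal sketch of that final step, assuming the breast-region pixels and the per-image threshold have already been obtained (function and variable names are hypothetical):

```python
def percent_dense(breast_pixels, threshold):
    """Percent mammographic density: fraction of segmented breast-region
    pixels whose gray level meets or exceeds the per-image threshold."""
    dense = sum(1 for g in breast_pixels if g >= threshold)
    return 100.0 * dense / len(breast_pixels)
```

In the paper the threshold itself is derived automatically from histogram features after dynamic range compression; here it is simply passed in.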
[Proposed method to estimate underreporting of induced abortion in Spain].
Rodríguez Blas, C; Sendra Gutiérrez, J M; Regidor Poyatos, E; Gutiérrez Fisac, J L; Iñigo Martínez, J
1994-01-01
In Spain, from 1987 to 1990, the rate of legal abortion reported to the health authorities doubled; nevertheless, the observed geographical differences suggest underreporting of the number of voluntary pregnancy terminations. Based on information on several sociodemographic, economic, and cultural characteristics, contraceptive use, availability of abortion services, fertility indices, and maternal and child health status, five homogeneous groups of autonomous regions were identified by applying factor and cluster analysis techniques. To estimate the level of underreporting, we assumed that all the regions that form a cluster ought to have the same abortion rate as the region with the highest rate in each group. We estimate that about 18,463 abortions (33.2%) were not reported during 1990. The proposed method can be used for assessing notification, since it allows identification of geographical areas where very similar rates of legal abortion are expected.
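The estimation rule can be expressed compactly: raise every region in a cluster to the highest observed rate in that cluster and sum the shortfalls. A hedged sketch under that assumption (the data layout and the use of women of fertile age as the denominator are illustrative assumptions, not details from the paper):

```python
def estimated_underreporting(clusters):
    """clusters: dict cluster_id -> list of (reported_abortions, denominator)
    per region, where the denominator is the population at risk.
    Assume every region in a cluster should show the rate of the
    highest-rate region; the total shortfall estimates underreporting."""
    missing = 0.0
    for regions in clusters.values():
        max_rate = max(a / w for a, w in regions)
        for a, w in regions:
            missing += max_rate * w - a
    return missing
```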
Estimation of mercury emission from different sources to atmosphere in Chongqing, China.
Wang, Dingyong; He, Lei; Wei, Shiqiang; Feng, Xinbin
2006-08-01
This investigation presents a first assessment of the contribution to the regional mercury budget from anthropogenic and natural sources in Chongqing, an important industrial region in southwest China. The emissions of mercury to the atmosphere from anthropogenic sources in the region were estimated through indirect approaches, i.e., using the commonly accepted emission factors method, which is based on annual process throughputs or consumption for these sources. The natural mercury emissions were estimated from selected natural sources by the dynamic flux chamber technique. The results indicated that anthropogenic mercury emissions totaled approximately 8.85 tons (t); more than 50% of this total originated from coal combustion and 23.7% from industrial processes (including cement production, metal smelting, and the chemical industry). Natural emissions represented approximately 17% of total emissions (1.78 t yr(-1)). The total mercury emission to the atmosphere in Chongqing in 2001 was 10.63 t.
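The emission-factor method named above multiplies annual throughput or consumption by a per-category emission factor. A minimal sketch (the source names and factor values below are placeholders, not the paper's figures):

```python
def anthropogenic_hg_emissions(activity_data, emission_factors):
    """Emission-factor method: annual Hg emission per source category is
    throughput (or consumption) times a category emission factor.
    activity_data: dict source -> annual throughput (e.g., tonnes of coal)
    emission_factors: dict source -> Hg emitted per unit throughput (g/t)."""
    return {src: activity_data[src] * emission_factors[src]
            for src in activity_data}
```

Summing the per-category results (and converting grams to tonnes) gives the regional anthropogenic total.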
NASA Technical Reports Server (NTRS)
Jasinski, Michael F.; Crago, Richard
1994-01-01
Parameterizations of the frontal area index and canopy area index of natural or randomly distributed plants are developed and applied to the estimation of local aerodynamic roughness using satellite imagery. The formulas are expressed in terms of the subpixel fractional vegetation cover and one non-dimensional geometric parameter that characterizes the plant's shape. Geometrically similar plants and Poisson-distributed plant centers are assumed. An appropriate averaging technique to extend satellite pixel-scale estimates to larger scales is provided. The parameterization is applied to the estimation of aerodynamic roughness using satellite imagery for a 2.3 sq km coniferous portion of the Landes Forest near Lubbon, France, during the 1986 HAPEX-Mobilhy Experiment. The canopy area index is estimated first for each pixel in the scene based on previous estimates of fractional cover obtained using Landsat Thematic Mapper imagery. Next, the results are incorporated into Raupach's (1992, 1994) analytical formulas for momentum roughness and zero-plane displacement height. The estimates compare reasonably well to reference values determined from measurements taken during the experiment and to published literature values. The approach offers the potential for estimating regionally variable vegetation aerodynamic roughness lengths over natural regions using satellite imagery when there exists only limited knowledge of the vegetated surface.
Satellite angular velocity estimation based on star images and optical flow techniques.
Fasano, Giancarmine; Rufino, Giancarlo; Accardo, Domenico; Grassi, Michele
2013-09-25
An optical flow-based technique is proposed to estimate spacecraft angular velocity from sequences of star-field images. It does not require star identification and can thus be used to deliver angular rate information even when attitude determination is not possible, as during platform detumbling or slewing. Region-based optical flow calculation is carried out on successive star images preprocessed to remove background. Sensor calibration parameters, the Poisson equation, and a least-squares method are then used to estimate the angular velocity vector components in the sensor rotating frame. A theoretical error budget is developed to estimate the expected angular rate accuracy as a function of camera parameters and star distribution in the field of view. The effectiveness of the proposed technique is tested using star-field scenes generated by a hardware-in-the-loop testing facility and acquired by a commercial off-the-shelf camera sensor. Simulated cases comprise rotations at different rates. Experimental results are presented that are consistent with theoretical estimates. In particular, very accurate angular velocity estimates are generated at lower slew rates, while in all cases the achievable accuracy in the estimation of the angular velocity component along boresight is about one order of magnitude worse than for the other two components.
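The full method solves for all three angular velocity components in the sensor frame; as a hedged illustration of the least-squares step only, the boresight component alone reduces to a one-parameter fit of the small-rotation flow model (u, v) = theta * (-y, x) to matched star centroids:

```python
def boresight_rate(p0, p1, dt):
    """Least-squares boresight angular rate from matched star centroids.
    p0, p1: lists of (x, y) focal-plane positions (origin at boresight)
    in two frames dt seconds apart.  Under a small rotation theta about
    boresight, each star moves by (u, v) = theta * (-y, x); minimizing
    the squared flow residual gives the closed-form estimate below."""
    num = sum(x0 * (y1 - y0) - y0 * (x1 - x0)
              for (x0, y0), (x1, y1) in zip(p0, p1))
    den = sum(x * x + y * y for x, y in p0)
    return num / den / dt
```

This is a planar, small-angle simplification; the paper's estimator works on 3-D line-of-sight vectors and also recovers the two transverse components.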
Application of split window technique to TIMS data
NASA Technical Reports Server (NTRS)
Matsunaga, Tsuneo; Rokugawa, Shuichi; Ishii, Yoshinori
1992-01-01
Absorption by the atmosphere in the thermal infrared region is mainly due to water vapor, carbon dioxide, and ozone. As the water vapor content of the atmosphere changes greatly with weather conditions, it is important to know its amount between the sensor and the ground (e.g., from radiosonde soundings) for atmospheric correction of Thermal Infrared Multispectral Scanner (TIMS) data. On the other hand, various atmospheric correction techniques have already been developed for sea surface temperature estimation from satellites. Among these, the split window technique, now widely used for AVHRR (Advanced Very High Resolution Radiometer), uses no radiosonde or other supplementary data, only the difference between observed brightness temperatures in two channels, to estimate atmospheric effects. Application of the split window technique to TIMS data is discussed because the availability of atmospheric profile data is not assured when ASTER operates. After these theoretical discussions, the technique is experimentally applied to TIMS data at three ground targets, and the results are compared with data atmospherically corrected using LOWTRAN 7 with radiosonde data.
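Split-window corrections have the generic linear form T_s = T_11 + a(T_11 - T_12) + b: the brightness-temperature difference between the two thermal channels acts as a proxy for water-vapor absorption. A sketch with illustrative coefficients (the values of a and b below are placeholders; operational coefficients must be fit for the sensor and region):

```python
def split_window_sst(t11, t12, a=2.7, b=0.7):
    """Generic split-window surface temperature estimate from two
    thermal-channel brightness temperatures (kelvin).  The 11-12 um
    difference proxies water-vapor absorption; a and b are
    illustrative regression coefficients, not operational values."""
    return t11 + a * (t11 - t12) + b
```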
Renal parameter estimates in unrestrained dogs
NASA Technical Reports Server (NTRS)
Rader, R. D.; Stevens, C. M.
1974-01-01
A mathematical formulation has been developed to describe the hemodynamic parameters of a conceptualized kidney model. The model was developed by considering regional pressure drops and regional storage capacities within the renal vasculature. Estimation of renal artery compliance, pre- and postglomerular resistance, and glomerular filtration pressure is feasible by considering mean levels and time derivatives of abdominal aortic pressure and renal artery flow. Changes in the smooth muscle tone of the renal vessels induced by exogenous angiotensin amide, acetylcholine, and the anaesthetic agent halothane were estimated by use of the model. By employing totally implanted telemetry, the technique was applied to unrestrained dogs to measure renal resistive and compliant parameters while the dogs were being subjected to obedience training, avoidance reaction, and unrestrained caging.
William Salas; Steve Hagen
2013-01-01
This presentation will provide an overview of an approach for quantifying uncertainty in spatial estimates of carbon emission from land use change. We generate uncertainty bounds around our final emissions estimate using a randomized, Monte Carlo (MC)-style sampling technique. This approach allows us to combine uncertainty from different sources without making...
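A randomized, MC-style uncertainty bound of the kind described can be sketched as follows: repeatedly perturb the uncertain inputs, recompute the emission total, and read off percentile bounds. The single-input Gaussian model and all numbers below are illustrative assumptions, not the presenters' actual multi-source framework:

```python
import random

def mc_emission_bounds(area_ha, c_mean, c_sd, n=10000, seed=42):
    """Monte Carlo uncertainty bounds on an emission total: sample an
    uncertain per-hectare carbon density (truncated at zero), total it
    over the area, and report the 2.5th and 97.5th percentiles."""
    rng = random.Random(seed)
    totals = sorted(area_ha * max(rng.gauss(c_mean, c_sd), 0.0)
                    for _ in range(n))
    return totals[int(0.025 * n)], totals[int(0.975 * n)]
```

Combining several uncertain inputs works the same way: sample each one inside the loop and propagate them through the full emissions calculation before taking percentiles.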
Estimation and analysis of interannual variations in tropical oceanic rainfall using data from SSM/I
NASA Technical Reports Server (NTRS)
Berg, Wesley
1992-01-01
Rainfall over tropical ocean regions, particularly in the tropical Pacific, is estimated using Special Sensor Microwave/Imager (SSM/I) data. Instantaneous rainfall estimates are derived from brightness temperature values obtained from the satellite data using the Hughes D-Matrix algorithm. Comparisons with other satellite techniques are made to validate the SSM/I results for the tropical Pacific. The correlation coefficients are relatively high for the three data sets investigated, especially for the annual case.
Attenuation Characteristics of High Frequency Seismic Waves in Southern India
NASA Astrophysics Data System (ADS)
Sivaram, K.; Utpal, Saikia; Kanna, Nagaraju; Kumar, Dinesh
2017-07-01
We present a systematic study of seismic attenuation and its related Q structure derived from spectral analysis of P- and S-waves in southern India. The study region is separated into parts of the Eastern Dharwar Craton (EDC), Western Dharwar Craton (WDC), and Southern Granulite Terrain (SGT). The study is carried out in the frequency range 1-20 Hz using a single-station spectral ratio technique. We use about 45 earthquakes, recorded on a network of about 32 broadband three-component seismograph stations, with magnitudes (ML) from 1.6 to 4.5, to estimate the average body-wave attenuation quality factors QP and QS. Their estimated average values fit the power-law form Q = Q0*f^n. The averaged power-law relations for the southern Indian region as a whole are QP = (95 ± 1.12)f^(1.32±0.01) and QS = (128 ± 1.84)f^(1.49±0.01). Based on the stations and recorded local earthquakes, the average power-law estimates for the three terrains are: QP = (97 ± 5)f^(1.40±0.03) and QS = (116 ± 1.5)f^(1.48±0.01) for the EDC; QP = (130 ± 7)f^(1.20±0.03) and QS = (103 ± 3)f^(1.49±0.02) for the WDC; and QP = (68 ± 2)f^(1.4±0.02) and QS = (152 ± 6)f^(1.48±0.02) for the SGT. These estimates are weighed against coda Q (QC) estimates obtained with the coda decay technique, which is based on weak backscattering of S-waves. The major observations of the body-wave analysis are low body-wave Q (Q0 < 200), a moderately high frequency exponent n (>0.5), and QS/QP >> 1, suggesting lateral stretches in which scattering is the dominant mode of seismic wave propagation. This could primarily be attributed to possible thermal anomalies and partially fluid-saturated rock masses in the crust and upper mantle of the southern Indian region, which, however, needs further laboratory study.
Such physical conditions might partly be correlated with the active seismicity and intraplate tectonism, especially in the SGT and EDC regions, as indicated by the observed low QP and QS values. Additionally, the enrichment of coda waves and the significance of scattering mechanisms are evidenced by our observation that QC > QS. The lapse-time study shows QC values increasing with lapse time. High QC values at 40 s lapse time in the WDC indicate that it may be a relatively stable region. In the absence of detailed body-wave attenuation studies in this region, the frequency-dependent Q relationships developed here are useful for estimating earthquake source parameters of the region. These relations may also be used for simulating earthquake strong ground motions, which are required for seismic hazard estimation and for geotechnical and retrofitting analysis of critical structures in the region.
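Power-law parameters such as the Q0 and n values reported above are conventionally obtained by least squares in log-log space, since ln Q = ln Q0 + n ln f is linear. A minimal sketch of that fit:

```python
import math

def fit_q_power_law(freqs, q_values):
    """Fit Q(f) = Q0 * f**n by ordinary least squares on
    ln Q = ln Q0 + n ln f; returns (Q0, n)."""
    lx = [math.log(f) for f in freqs]
    ly = [math.log(q) for q in q_values]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    n = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    q0 = math.exp(my - n * mx)
    return q0, n
```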
Evaluation of Statistical Downscaling Skill at Reproducing Extreme Events
NASA Astrophysics Data System (ADS)
McGinnis, S. A.; Tye, M. R.; Nychka, D. W.; Mearns, L. O.
2015-12-01
Climate model outputs usually have much coarser spatial resolution than is needed by impacts models. Although higher resolution can be achieved using regional climate models for dynamical downscaling, further downscaling is often required. The final resolution gap is often closed with a combination of spatial interpolation and bias correction, which constitutes a form of statistical downscaling. We use this technique to downscale regional climate model data and evaluate its skill in reproducing extreme events. We downscale output from the North American Regional Climate Change Assessment Program (NARCCAP) dataset from its native 50-km spatial resolution to the 4-km resolution of the University of Idaho's METDATA gridded surface meteorological dataset, which derives from the PRISM and NLDAS-2 observational datasets. We operate on the major variables used in impacts analysis at a daily timescale: daily minimum and maximum temperature, precipitation, humidity, pressure, solar radiation, and winds. To interpolate the data, we use the patch recovery method from the Earth System Modeling Framework (ESMF) regridding package. We then bias correct the data using Kernel Density Distribution Mapping (KDDM), which has been shown to exhibit superior overall performance across multiple metrics. Finally, we evaluate the skill of this technique in reproducing extreme events by comparing raw and downscaled output with meteorological station data in different bioclimatic regions, according to the skill scores defined by Perkins et al. (2013) for evaluation of AR4 climate models. We also investigate techniques for improving bias correction of values in the tails of the distributions. These techniques include binned kernel density estimation, logspline kernel density estimation, and transfer functions constructed by fitting the tails with a generalized Pareto distribution.
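As a simplified stand-in for KDDM, the core of any distribution-mapping bias correction can be sketched with empirical quantile mapping: find a model value's quantile in the model distribution and return the observed value at that same quantile (array names are hypothetical; KDDM itself estimates the two distributions with kernel densities rather than empirical CDFs):

```python
import bisect

def quantile_map(x, model_sorted, obs_sorted):
    """Empirical quantile mapping: locate x's quantile in the sorted
    model climatology and return the observation at the same quantile.
    A simplified stand-in for kernel-density distribution mapping."""
    q = bisect.bisect_left(model_sorted, x) / len(model_sorted)
    idx = min(int(q * len(obs_sorted)), len(obs_sorted) - 1)
    return obs_sorted[idx]
```

The tail-fitting variants mentioned in the abstract replace the empirical CDFs with smoother estimates (binned or logspline kernel densities, or a generalized Pareto fit) so that values beyond the calibration range map sensibly.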
Dole-Olivier, Marie-José; Galassi, Diana M. P.; Hogan, John-Paul; Wood, Paul J.
2016-01-01
The hyporheic zone of river ecosystems provides a habitat for a diverse macroinvertebrate community that makes a vital contribution to ecosystem functioning and biodiversity. However, effective methods for sampling this community have proved difficult to establish, due to the inaccessibility of subsurface sediments. The aim of this study was to compare the two most common semi-quantitative macroinvertebrate pump-sampling techniques: Bou-Rouch and vacuum-pump sampling. We used both techniques to collect replicate samples in three contrasting temperate-zone streams, in each of two biogeographical regions (Atlantic region, central England, UK; Continental region, southeast France). Results were typically consistent across streams in both regions: Bou-Rouch samples provided significantly higher estimates of taxa richness, macroinvertebrate abundance, and the abundance of all UK and eight of 10 French common taxa. Seven and nine taxa which were rare in Bou-Rouch samples were absent from vacuum-pump samples in the UK and France, respectively; no taxon was repeatedly sampled exclusively by the vacuum pump. Rarefaction curves (rescaled to the number of incidences) and non-parametric richness estimators indicated no significant difference in richness between techniques, highlighting the capture of more individuals as crucial to Bou-Rouch sampling performance. Compared to assemblages in replicate vacuum-pump samples, multivariate analyses indicated greater distinction among Bou-Rouch assemblages from different streams, as well as significantly greater consistency in assemblage composition among replicate Bou-Rouch samples collected in one stream. We recommend Bou-Rouch sampling for most study types, including rapid biomonitoring surveys and studies requiring acquisition of comprehensive taxon lists that include rare taxa. Despite collecting fewer macroinvertebrates, vacuum-pump sampling remains an important option for inexpensive and rapid sample collection. PMID:27723819
NASA Astrophysics Data System (ADS)
Eldardiry, H. A.; Habib, E. H.
2014-12-01
Radar-based technologies have made spatially and temporally distributed quantitative precipitation estimates (QPE) available in operational environments, in contrast to rain gauges. The floods identified through flash flood monitoring and prediction systems are subject to at least three sources of uncertainty: (a) rainfall estimation errors, (b) streamflow prediction errors due to model structural issues, and (c) errors in defining a flood event. The current study focuses on the first source of uncertainty and its effect on deriving important climatological characteristics of extreme rainfall statistics. Examples of such characteristics are rainfall amounts with certain Average Recurrence Intervals (ARI) or Annual Exceedance Probabilities (AEP), which are highly valuable for hydrologic and civil engineering design purposes. Gauge-based precipitation frequency estimates (PFE) have been developed to maturity and widely used over the last several decades. More recently, there has been growing interest in the research community in exploring the use of radar-based rainfall products for developing PFEs and understanding the associated uncertainties. This study uses radar-based multi-sensor precipitation estimates (MPE) for 11 years to derive PFEs corresponding to various return periods over a spatial domain that covers the state of Louisiana in the southern USA. The PFE estimation approach used in this study is based on fitting a generalized extreme value (GEV) distribution to extreme rainfall data in the form of annual maximum series (AMS). Estimation problems that may arise from fitting GEV distributions at each radar pixel include large variance and seriously biased quantile estimators. Hence, a regional frequency analysis (RFA) approach is applied. The RFA involves the use of data from the pixels surrounding each pixel within a defined homogeneous region.
In this study, the region of influence approach, along with the index flood technique, is used in the RFA. A bootstrap procedure is carried out to account for uncertainty in the distribution parameters and to construct 90% confidence intervals (i.e., 5% and 95% confidence limits) on AMS-based precipitation frequency curves.
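As an illustration of the AMS-based frequency-curve step, here is a method-of-moments fit of the Gumbel distribution (the shape-zero special case of the GEV used in the study) and its T-year return level; the data and the Gumbel simplification are assumptions for the sketch:

```python
import math
import statistics

def gumbel_return_level(ams, t_years):
    """Method-of-moments Gumbel fit to an annual maximum series and
    the T-year return level x_T = mu - beta * ln(-ln(1 - 1/T))."""
    mean, sd = statistics.mean(ams), statistics.stdev(ams)
    beta = sd * math.sqrt(6) / math.pi       # scale parameter
    mu = mean - 0.5772156649 * beta          # location (Euler-Mascheroni)
    return mu - beta * math.log(-math.log(1.0 - 1.0 / t_years))
```

The regional (RFA) and bootstrap machinery of the study wraps around exactly this kind of at-site quantile function, pooling AMS data across neighboring pixels before fitting.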
Estimation of submarine mass failure probability from a sequence of deposits with age dates
Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.
2013-01-01
The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
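The effect of the open intervals on the Poisson-model estimate can be seen in the closed form of the exponential MLE: censored time adds exposure but no events, so the estimated mean return time exceeds the simple arithmetic mean of the closed intervals. A sketch (interval bookkeeping simplified; age-dating uncertainty, handled in the paper by Monte Carlo over the date distributions, is omitted):

```python
def exp_mean_return_time(closed_intervals, open_before, open_after):
    """MLE of the mean return time under a Poisson (exponential) model.
    Open intervals before the first and after the last dated deposit are
    right-censored: each contributes -lambda*t to the log-likelihood,
    so the MLE is lambda_hat = n_events / total_exposure."""
    exposure = sum(closed_intervals) + open_before + open_after
    n_events = len(closed_intervals)
    return exposure / n_events   # 1 / lambda_hat
```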
Method and program product for determining a radiance field in an optical environment
NASA Technical Reports Server (NTRS)
Reinersman, Phillip N. (Inventor); Carder, Kendall L. (Inventor)
2007-01-01
A hybrid method is presented by which Monte Carlo techniques are combined with iterative relaxation techniques to solve the Radiative Transfer Equation in arbitrary one-, two- or three-dimensional optical environments. The optical environments are first divided into contiguous regions, or elements, with Monte Carlo techniques then being employed to determine the optical response function of each type of element. The elements are combined, and the iterative relaxation techniques are used to determine simultaneously the radiance field on the boundary and throughout the interior of the modeled environment. This hybrid model is capable of providing estimates of the underwater light field needed to expedite inspection of ship hulls and port facilities. It is also capable of providing estimates of the subaerial light field for structured, absorbing or non-absorbing environments such as shadows of mountain ranges within and without absorption spectral bands such as water vapor or CO2 bands.
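As a toy illustration of the relaxation stage only (not the patented element-response model), here is a Jacobi iteration on a one-dimensional chain of elements, each coupled to its neighbors through a hand-specified linear response; the boundary values play the role of the radiance imposed on the edge of the modeled environment:

```python
def relax_radiance(boundary_left, boundary_right, n_cells, albedo=0.5,
                   iters=500):
    """Jacobi relaxation for interior values on a 1-D chain of elements:
    each cell's value is a single-scattering albedo times the mean of
    its neighbors (a toy stand-in for a Monte Carlo-derived response
    function).  Iterates to the fixed point from a zero field."""
    field = [0.0] * n_cells
    for _ in range(iters):
        field = [albedo * 0.5 *
                 ((field[i - 1] if i > 0 else boundary_left) +
                  (field[i + 1] if i < n_cells - 1 else boundary_right))
                 for i in range(n_cells)]
    return field
```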
Three-dimensional analysis of magnetometer array data
NASA Technical Reports Server (NTRS)
Richmond, A. D.; Baumjohann, W.
1984-01-01
A technique is developed for mapping magnetic variation fields in three dimensions using data from an array of magnetometers, based on the theory of optimal linear estimation. The technique is applied to data from the Scandinavian Magnetometer Array. Estimates of the spatial power spectra for the internal and external magnetic variations are derived, which in turn provide estimates of the spatial autocorrelation functions of the three magnetic variation components. Statistical errors involved in mapping the external and internal fields are quantified and displayed over the mapping region. Examples of field mapping and of separation into external and internal components are presented. A comparison between the three-dimensional field separation and a two-dimensional separation from a single chain of stations shows that significant differences can arise in the inferred internal component.
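Optimal linear estimation maps the field at an unobserved point as a covariance-weighted combination of station data, with weights w = C_dd^{-1} c_td. A minimal two-station sketch with the 2x2 inverse written out (the covariances are assumed known, as supplied in the paper by the estimated spatial power spectra and autocorrelation functions):

```python
def ole_estimate(cov_dd, cov_td, data):
    """Optimal (Gauss-Markov) linear estimate at a target point from two
    stations.  cov_dd: 2x2 data-data covariance; cov_td: target-data
    covariances; data: the two station measurements."""
    (a, b), (c, d) = cov_dd
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))
    w = (inv[0][0] * cov_td[0] + inv[0][1] * cov_td[1],
         inv[1][0] * cov_td[0] + inv[1][1] * cov_td[1])
    return w[0] * data[0] + w[1] * data[1]
```

The mapping-error estimates quoted in the abstract follow from the same quantities: the estimation variance is the target variance minus c_td^T C_dd^{-1} c_td.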
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebeida, Mohamed S.; Mitchell, Scott A.; Swiler, Laura P.
We introduce a novel technique, POF-Darts, to estimate the Probability Of Failure based on random disk-packing in the uncertain parameter space. POF-Darts uses hyperplane sampling to explore the unexplored part of the uncertain space. We use the function evaluation at a sample point to determine whether it belongs to failure or non-failure regions, and surround it with a protection sphere region to avoid clustering. We decompose the domain into Voronoi cells around the function evaluations as seeds and choose the radius of the protection sphere depending on the local Lipschitz continuity. As sampling proceeds, regions uncovered with spheres will shrink, improving the estimation accuracy. After exhausting the function evaluation budget, we build a surrogate model using the function evaluations associated with the sample points and estimate the probability of failure by exhaustive sampling of that surrogate. In comparison to other similar methods, our algorithm has the advantages of decoupling the sampling step from the surrogate construction one, the ability to reach target POF values with fewer samples, and the capability of estimating the number and locations of disconnected failure regions, not just the POF value. Furthermore, we present various examples to demonstrate the efficiency of our novel approach.
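The final step described, estimating POF by exhaustive sampling of the surrogate, is ordinary Monte Carlo counting once the surrogate is cheap to evaluate. A sketch (the surrogate, the sampler, and the failure criterion g(x) > 0 are stand-ins, not POF-Darts internals):

```python
import random

def pof_exhaustive(surrogate, sampler, n=100000, seed=1):
    """Estimate probability of failure by exhaustive sampling of a cheap
    surrogate: draw n parameter samples and count the fraction whose
    surrogate response indicates failure (here, g(x) > 0)."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n) if surrogate(sampler(rng)) > 0.0)
    return failures / n
```

Because the surrogate replaces the expensive simulation, n can be made large enough that the counting error is negligible next to the surrogate's own approximation error.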
Dual-band beacon experiment over Southeast Asia for ionospheric irregularity analysis
NASA Astrophysics Data System (ADS)
Watthanasangmechai, K.; Yamamoto, M.; Saito, A.; Saito, S.; Maruyama, T.; Tsugawa, T.; Nishioka, M.
2013-12-01
An experiment with a dual-band beacon over Southeast Asia was started in March 2012 in order to capture and analyze ionospheric irregularities in the equatorial region. Five GNU Radio Beacon Receivers (GRBRs) were aligned along 100 degrees geographic longitude. The distances between the stations reach more than 500 km. The field of view of this observational network covers +/- 20 degrees geomagnetic latitude, including the geomagnetic equator. To capture ionospheric irregularities, an absolute TEC estimation technique was developed. The two-station method (Leitinger et al., 1975) is generally accepted as suitable for estimating the TEC offsets in dual-band beacon experiments. However, the distances between the stations directly affect the robustness of the technique. In Southeast Asia, the observational network is too sparse to benefit from the classic two-station method. Moreover, the least-squares approach used in the two-station method over-adjusts to small-scale features of the TEC distribution, which are local minima. We thus propose a new technique to estimate the TEC offsets with supporting data from absolute GPS TEC from local GPS receivers and the ionospheric height from local ionosondes. The key of the proposed technique is to use a brute-force search with a weighting function to find the TEC offset set that yields the global minimum of RMSE over the whole parameter space. The weight is not necessary when the TEC distribution is smooth, while it significantly improves the TEC estimation during Equatorial Spread F (ESF) events. As a result, the latitudinal TEC shows a double-hump distribution because of the Equatorial Ionization Anomaly (EIA). In addition, 100-km-scale fluctuations from ESF are captured at night in equinox seasons. The plausible linkage of the meridional wind with triggering of ESF is under investigation and will be presented.
The proposed method successfully estimates the latitudinal TEC distribution from dual-band beacon data for the sparse observational network in Southeast Asia, and it may be useful for other equatorial sectors, such as the African region, as well.
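The brute-force offset search can be sketched as picking, from a grid of candidate offsets, the one that minimizes weighted RMSE against collocated absolute GPS TEC (the names and the single-constant-offset simplification are assumptions; the actual method searches an offset per station pass and weights by data quality):

```python
def best_offset(rel_tec, gps_tec, candidates, weights=None):
    """Brute-force TEC offset search: return the candidate constant
    offset minimizing the weighted RMSE between offset-corrected
    relative beacon TEC and collocated absolute GPS TEC."""
    w = weights or [1.0] * len(rel_tec)
    def wrmse(off):
        s = sum(wi * (r + off - g) ** 2
                for wi, r, g in zip(w, rel_tec, gps_tec))
        return (s / sum(w)) ** 0.5
    return min(candidates, key=wrmse)
```

Exhaustively scoring every candidate is what makes the search robust to the local minima that trap the least-squares two-station method.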
Ries, Kernell G.; Crouse, Michele Y.
2002-01-01
For many years, the U.S. Geological Survey (USGS) has been developing regional regression equations for estimating flood magnitude and frequency at ungaged sites. These regression equations are used to transfer flood characteristics from gaged to ungaged sites through the use of watershed and climatic characteristics as explanatory or predictor variables. Generally, these equations have been developed on a statewide or metropolitan-area basis as part of cooperative study programs with specific State Departments of Transportation. In 1994, the USGS released a computer program titled the National Flood Frequency Program (NFF), which compiled all available USGS regression equations for estimating the magnitude and frequency of floods in the United States and Puerto Rico. NFF was developed in cooperation with the Federal Highway Administration and the Federal Emergency Management Agency. Since the initial release of NFF, the USGS has produced new equations for many areas of the Nation. A new version of NFF has been developed that incorporates these new equations and provides additional functionality and ease of use. NFF version 3 provides regression-equation estimates of flood-peak discharges for unregulated rural and urban watersheds, flood-frequency plots, and plots of typical flood hydrographs for selected recurrence intervals. The program also provides weighting techniques to improve estimates of flood-peak discharges for gaging stations and ungaged sites. The information provided by NFF should be useful to engineers and hydrologists for planning and design applications. This report describes the flood-regionalization techniques used in NFF and provides guidance on the applicability and limitations of the techniques. The NFF software and the documentation for the regression equations included in NFF are available at http://water.usgs.gov/software/nff.html.
21 CFR 870.2780 - Hydraulic, pneumatic, or photoelectric plethysmographs.
Code of Federal Regulations, 2012 CFR
2012-04-01
... HUMAN SERVICES (CONTINUED) MEDICAL DEVICES CARDIOVASCULAR DEVICES Cardiovascular Monitoring Devices..., pneumatic, or photoelectric plethysmograph is a device used to estimate blood flow in a region of the body using hydraulic, pneumatic, or photoelectric measurement techniques. (b) Classification. Class II...
Soft tissue strain measurement using an optical method
NASA Astrophysics Data System (ADS)
Toh, Siew Lok; Tay, Cho Jui; Goh, Cho Hong James
2008-11-01
Digital image correlation (DIC) is a non-contact optical technique that allows the full-field estimation of strains on a surface under an applied deformation. In this project, an optimized DIC technique is applied that achieves both efficiency and accuracy in the measurement of two-dimensional deformation fields in soft tissue. This technique relies on matching the random patterns recorded in images to directly obtain surface displacements and to derive the displacement gradients from which the strain field can be determined. Digital image correlation is a well-developed technique that has numerous and varied engineering applications, including applications in soft and hard tissue biomechanics. Chicken drumstick ligaments were harvested and used during the experiments. The surface of each ligament was speckled with black paint to allow correlation to be performed. Results show that the stress-strain curve exhibits a bi-linear behavior, i.e., a "toe region" and a "linear elastic region". The Young's modulus obtained for the toe region is about 92 MPa and the modulus for the linear elastic region is about 230 MPa. The results are within the range of values for mammalian anterior cruciate ligaments of 150-300 MPa.
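The subset-matching core of DIC can be sketched as a normalized cross-correlation search. This integer-pixel toy version (subset size, search radius, and the synthetic speckle image are all illustrative) omits the subpixel refinement and strain differentiation a real DIC code would add:

```python
import numpy as np

def ncc(a, b):
    """Zero-normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())

def find_displacement(ref, deformed, y, x, half=5, search=3):
    """Locate the integer-pixel displacement of the subset centered at (y, x)
    by maximizing NCC over a small search window in the deformed image."""
    subset = ref[y - half:y + half + 1, x - half:x + half + 1]
    best, best_uv = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = deformed[y + dy - half:y + dy + half + 1,
                            x + dx - half:x + dx + half + 1]
            score = ncc(subset, cand)
            if score > best:
                best, best_uv = score, (dy, dx)
    return best_uv
```

Repeating this over a grid of subsets yields the displacement field from which strains are differentiated.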
Determining Titan surface topography from Cassini SAR data
Stiles, Bryan W.; Hensley, Scott; Gim, Yonggyu; Bates, David M.; Kirk, Randolph L.; Hayes, Alex; Radebaugh, Jani; Lorenz, Ralph D.; Mitchell, Karl L.; Callahan, Philip S.; Zebker, Howard; Johnson, William T.K.; Wall, Stephen D.; Lunine, Jonathan I.; Wood, Charles A.; Janssen, Michael; Pelletier, Frederic; West, Richard D.; Veeramacheneni, Chandini
2009-01-01
A technique, referred to as SARTopo, has been developed for obtaining surface height estimates with 10 km horizontal resolution and 75 m vertical resolution of the surface of Titan along each Cassini Synthetic Aperture Radar (SAR) swath. We describe the technique and present maps of the co-located data sets. A global map and regional maps of Xanadu and the northern hemisphere hydrocarbon lakes district are included in the results. A strength of the technique is that it provides topographic information co-located with SAR imagery. Having a topographic context vastly improves the interpretability of the SAR imagery and is essential for understanding Titan. SARTopo is capable of estimating surface heights for most of the SAR-imaged surface of Titan. Currently nearly 30% of the surface is within 100 km of a SARTopo height profile. Other competing techniques provide orders of magnitude less coverage. We validate the SARTopo technique through comparison with known geomorphological features such as mountain ranges and craters, and by comparison with co-located nadir altimetry, including a 3000 km strip that had been observed by SAR a month earlier. In this area, the SARTopo and nadir altimetry data sets are co-located tightly (within 5-10 km for one 500 km section), have similar resolution, and as expected agree closely in surface height. Furthermore the region contains prominent high spatial resolution topography, so it provides an excellent test of the resolution and precision of both techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanders, J; Tian, X; Segars, P
2016-06-15
Purpose: To develop an automated technique for estimating patient-specific regional imparted energy and dose from tube current modulated (TCM) computed tomography (CT) exams across a diverse set of head and body protocols. Methods: A library of 58 adult computational anthropomorphic extended cardiac-torso (XCAT) phantoms was used to model a patient population. A validated Monte Carlo program was used to simulate TCM CT exams on the entire library of phantoms for three head and 10 body protocols. The net imparted energy to the phantoms, normalized by dose-length product (DLP), and the net tissue mass in each of the scan regions were computed. A knowledgebase containing relationships between normalized imparted energy and scanned mass was established. An automated computer algorithm was written to estimate the scanned mass from actual clinical CT exams. The scanned mass estimate, the DLP of the exam, and the knowledgebase were used to estimate the imparted energy to the patient. The algorithm was tested on 20 chest and 20 abdominopelvic TCM CT exams. Results: The normalized imparted energy increased with increasing kV for all protocols. However, the normalized imparted energy was relatively unaffected by the strength of the TCM. The average imparted energy was 681 ± 376 mJ for abdominopelvic exams and 274 ± 141 mJ for chest exams. Overall, the method was successful in providing patient-specific estimates of imparted energy for 98% of the cases tested. Conclusion: Imparted energy normalized by DLP increased with increasing tube potential. However, the strength of the TCM did not have a significant effect on the net amount of energy deposited to tissue. The automated program can be implemented into the clinical workflow to provide estimates of regional imparted energy and dose across a diverse set of clinical protocols.
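The estimation step described in the Methods — interpolate a knowledgebase of DLP-normalized imparted energy against scanned mass, then scale by the exam's DLP — can be sketched as follows. The tabulated values here are invented placeholders, not the paper's Monte Carlo results:

```python
import numpy as np

# Hypothetical knowledgebase for one protocol/kV setting: DLP-normalized
# imparted energy (mJ per mGy*cm) tabulated against scanned tissue mass (kg).
kb_mass_kg = np.array([10.0, 20.0, 30.0, 40.0])
kb_energy_per_dlp = np.array([0.9, 1.4, 1.8, 2.1])

def imparted_energy(scanned_mass_kg, dlp):
    """Estimate imparted energy (mJ) by interpolating the normalized-energy
    knowledgebase at the patient's scanned mass and scaling by exam DLP."""
    k = np.interp(scanned_mass_kg, kb_mass_kg, kb_energy_per_dlp)
    return k * dlp
```

In the actual workflow, the scanned mass itself is estimated automatically from the clinical CT images before this lookup is applied.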
A log-linear model approach to estimation of population size using the line-transect sampling method
Anderson, D.R.; Burnham, K.P.; Crain, B.R.
1978-01-01
The technique of estimating wildlife population size and density using the belt or line-transect sampling method has been used in many past projects, such as the estimation of density of waterfowl nestling sites in marshes, and is being used currently in such areas as the assessment of Pacific porpoise stocks in regions of tuna fishing activity. A mathematical framework for line-transect methodology has only emerged in the last 5 yr. In the present article, we extend this mathematical framework to a line-transect estimator based upon a log-linear model approach.
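For context, the classical line-transect density estimator that the log-linear approach generalizes is D = n·f(0)/(2L), where f(0) is the perpendicular-distance density at zero. A sketch using a half-normal detection function (a common textbook choice, not the paper's log-linear estimator):

```python
import math

def line_transect_density(distances, line_length):
    """Line-transect density estimate D = n * f(0) / (2L), fitting a
    half-normal detection function g(x) = exp(-x^2 / (2 sigma^2)) by its
    maximum-likelihood estimate sigma^2 = mean(x^2). Distances are
    perpendicular distances of detected animals from the transect line."""
    n = len(distances)
    sigma2 = sum(x * x for x in distances) / n        # half-normal MLE
    f0 = math.sqrt(2.0 / (math.pi * sigma2))          # f(0) for half-normal
    return n * f0 / (2.0 * line_length)
```

The quantity 1/f(0) plays the role of an effective strip half-width, so wider detection spreads lower the estimated density for the same count.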
A regional high-resolution carbon flux inversion of North America for 2004
NASA Astrophysics Data System (ADS)
Schuh, A. E.; Denning, A. S.; Corbin, K. D.; Baker, I. T.; Uliasz, M.; Parazoo, N.; Andrews, A. E.; Worthy, D. E. J.
2010-05-01
Resolving the discrepancies between NEE estimates based upon (1) ground studies and (2) atmospheric inversion results demands increasingly sophisticated techniques. In this paper we present a high-resolution inversion based upon a regional meteorology model (RAMS) and an underlying biosphere (SiB3) model, both running on an identical 40 km grid over most of North America. Current operational systems like CarbonTracker, as well as many previous global inversions including the Transcom suite of inversions, have utilized inversion regions formed by collapsing biome-similar grid cells into larger aggregated regions. An extreme example of this might be where corrections to NEE imposed on forested regions on the east coast of the United States are the same as those imposed on forests on the west coast, while in reality there likely exist subtle differences between the two areas, both natural and anthropogenic. Our current inversion framework utilizes a combination of previously employed inversion techniques while allowing carbon flux corrections to be biome independent. Temporally and spatially high-resolution results utilizing biome-independent corrections provide insight into carbon dynamics in North America. In particular, we analyze hourly CO2 mixing ratio data from a sparse network of eight towers in North America for 2004. A prior estimate of carbon fluxes due to Gross Primary Productivity (GPP) and Ecosystem Respiration (ER) is constructed from the SiB3 biosphere model on a 40 km grid. A combination of transport from the RAMS and the Parameterized Chemical Transport Model (PCTM) is used to forge a connection between upwind biosphere fluxes and downwind observed CO2 mixing ratio data. A Kalman filter procedure is used to estimate weekly corrections to biosphere fluxes based upon observed CO2. RMSE-weighted annual NEE estimates, over an ensemble of potential inversion parameter sets, show a mean estimate of a 0.57 Pg/yr sink in North America.
We perform the inversion with two independently derived boundary inflow conditions and calculate jackknife-based statistics to test the robustness of the model results. We then compare final results to estimates obtained from the CarbonTracker inversion system and at the Southern Great Plains flux site. Results are promising, showing the ability to correct carbon fluxes from the biosphere models over annual and seasonal time scales, as well as over the different GPP and ER components. Additionally, the correlation of an estimated sink of carbon in the South Central United States with regional anomalously high precipitation in an area of managed agricultural and forest lands provides interesting hypotheses for future work.
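The weekly flux-correction step is a standard Kalman-filter analysis. A generic sketch of one update, in which the state, observation operator, and covariances are schematic stand-ins for the SiB3 flux-correction factors and the RAMS/PCTM transport:

```python
import numpy as np

def kalman_update(beta, P, H, y, R):
    """One Kalman-filter analysis step for flux-correction factors beta.
    H maps corrections to observed CO2 mixing ratios (transport applied to
    prior fluxes), y is the innovation (observed minus modeled CO2), and
    R is the observation-error covariance."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    beta_new = beta + K @ y                  # corrected state
    P_new = (np.eye(len(beta)) - K @ H) @ P  # reduced uncertainty
    return beta_new, P_new
```

Repeating this each week with fresh tower data sequentially tightens the flux corrections and their uncertainty.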
Quantitative body DW-MRI biomarkers uncertainty estimation using unscented wild-bootstrap.
Freiman, M; Voss, S D; Mulkern, R V; Perez-Rossello, J M; Warfield, S K
2011-01-01
We present a new method for uncertainty estimation of diffusion parameters for quantitative body DW-MRI assessment. Estimating the uncertainty of diffusion parameters from DW-MRI is necessary for clinical applications that use these parameters to assess pathology. However, uncertainty estimation using traditional techniques requires repeated acquisitions, which is undesirable in routine clinical use. Model-based bootstrap techniques, for example, assume an underlying linear model for residuals rescaling and cannot be utilized directly for body diffusion parameters uncertainty estimation due to the non-linearity of the body diffusion model. To offset this limitation, our method uses the Unscented transform to compute the residuals-rescaling parameters from the non-linear body diffusion model, and then applies the wild-bootstrap method to infer the uncertainty of the body diffusion parameters. Validation through phantom and human subject experiments shows that our method correctly identifies the regions with higher uncertainty in body DW-MRI model parameters, with a relative error of -36% in the uncertainty values.
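The wild-bootstrap idea — keep the design fixed, flip each residual's sign at random, refit, and read uncertainty off the spread of refits — can be sketched generically. The paper's specific contribution (Unscented-transform residual rescaling for the non-linear diffusion model) is not reproduced here; this sketch uses raw residuals and a caller-supplied fit/predict pair:

```python
import numpy as np

def wild_bootstrap_se(x, y, fit, predict, n_boot=200, rng=None):
    """Wild-bootstrap standard errors for the parameters returned by fit().
    Residuals are perturbed by Rademacher (+/-1) weights, pseudo-data are
    refit, and the parameter spread across refits is returned. Residual
    rescaling (e.g., via the Unscented transform for non-linear models)
    is omitted in this sketch."""
    rng = rng if rng is not None else np.random.default_rng(0)
    theta0 = fit(x, y)
    fitted = predict(x, theta0)
    resid = y - fitted
    boots = []
    for _ in range(n_boot):
        w = rng.choice([-1.0, 1.0], size=len(y))   # Rademacher weights
        boots.append(fit(x, fitted + w * resid))
    return np.std(np.asarray(boots), axis=0)
```

Because only signs are flipped, each pseudo-dataset preserves the per-point residual magnitude, which is what makes the wild bootstrap robust to heteroscedastic noise.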
Optical coherence elastography for cellular-scale stiffness imaging of mouse aorta
NASA Astrophysics Data System (ADS)
Wijesinghe, Philip; Johansen, Niloufer J.; Curatolo, Andrea; Sampson, David D.; Ganss, Ruth; Kennedy, Brendan F.
2017-04-01
We have developed a high-resolution optical coherence elastography system capable of estimating Young's modulus in tissue volumes with an isotropic resolution of 15 μm over a 1 mm lateral field of view and a 100 μm axial depth of field. We demonstrate our technique on healthy and hypertensive, freshly excised and intact mouse aortas. Our technique has the capacity to delineate the individual mechanics of elastic lamellae and vascular smooth muscle. Further, we observe global and regional vascular stiffening in hypertensive aortas, and note the presence of local micro-mechanical signatures, characteristic of fibrous and lipid-rich regions.
Revised techniques for estimating peak discharges from channel width in Montana
Parrett, Charles; Hull, J.A.; Omang, R.J.
1987-01-01
This study was conducted to develop new estimating equations based on channel width and the updated flood-frequency curves of previous investigations. Simple regression equations for estimating peak discharges with recurrence intervals of 2, 5, 10, 25, 50, and 100 years were developed for seven regions in Montana. The standard errors of estimate for the equations that use active channel width as the independent variable ranged from 30% to 87%. The standard errors of estimate for the equations that use bankfull width as the independent variable ranged from 34% to 92%. The smallest standard errors generally occurred in the prediction equations for the 2-, 5-, and 10-year floods, and the largest standard errors occurred in the prediction equations for the 100-year flood. The equations that use active channel width and the equations that use bankfull width were determined to be about equally reliable in five regions. In the West Region, the equations that use bankfull width were slightly more reliable than those based on active channel width, whereas in the East-Central Region the equations that use active channel width were slightly more reliable than those based on bankfull width. Compared with similar equations previously developed, the standard errors of estimate for the new equations are substantially smaller in three regions and substantially larger in two regions. Limitations on the use of the estimating equations include: (1) the equations are based on stable conditions of channel geometry and prevailing water and sediment discharge; (2) the measurement of channel width requires a site visit, preferably by a person with experience in the method, and involves appreciable measurement error; (3) the reliability of results from the equations for channel widths beyond the range of definition is unknown.
In spite of these limitations, the estimating equations derived in this study are considered to be as reliable as estimating equations based on basin and climatic variables. Because the two types of estimating equations are independent, results from each can be weighted inversely proportional to their variances and averaged. The weighted average estimate has a variance less than either individual estimate. (Author's abstract)
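The inverse-variance weighting described in the last paragraph is straightforward to apply:

```python
def weighted_estimate(q1, var1, q2, var2):
    """Combine two independent peak-flow estimates by weighting each
    inversely proportional to its variance. The combined variance,
    1 / (1/var1 + 1/var2), is smaller than either input variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    q = (w1 * q1 + w2 * q2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return q, var
```

For example (made-up numbers), combining estimates of 1,000 (variance 400) and 1,200 (variance 100) yields 1,160 with variance 80, tighter than either input.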
Husak, G.J.; Marshall, M. T.; Michaelsen, J.; Pedreros, Diego; Funk, Christopher C.; Galu, G.
2008-01-01
Reliable estimates of cropped area (CA) in developing countries with chronic food shortages are essential for emergency relief and the design of appropriate market-based food security programs. Satellite interpretation of CA is an effective alternative to extensive and costly field surveys, which fail to represent the spatial heterogeneity at the country-level. Bias-corrected, texture based classifications show little deviation from actual crop inventories, when estimates derived from aerial photographs or field measurements are used to remove systematic errors in medium resolution estimates. In this paper, we demonstrate a hybrid high-medium resolution technique for Central Ethiopia that combines spatially limited unbiased estimates from IKONOS images, with spatially extensive Landsat ETM+ interpretations, land-cover, and SRTM-based topography. Logistic regression is used to derive the probability of a location being crop. These individual points are then aggregated to produce regional estimates of CA. District-level analysis of Landsat based estimates showed CA totals which supported the estimates of the Bureau of Agriculture and Rural Development. Continued work will evaluate the technique in other parts of Africa, while segmentation algorithms will be evaluated, in order to automate classification of medium resolution imagery for routine CA estimation in the future.
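The per-pixel logistic model and its aggregation into a regional cropped-area total can be sketched as follows. The coefficients and features here are hypothetical; the actual model uses Landsat ETM+ interpretations, land-cover, and SRTM-based topography as covariates, with bias correction from IKONOS samples:

```python
import math

def crop_probability(coef, intercept, features):
    """Logistic-regression probability that a pixel is cropped, given its
    covariate values (hypothetical coefficients for illustration)."""
    z = intercept + sum(c * f for c, f in zip(coef, features))
    return 1.0 / (1.0 + math.exp(-z))

def cropped_area(pixels, coef, intercept, pixel_area_ha):
    """Aggregate per-pixel crop probabilities into a regional
    cropped-area total (hectares)."""
    return sum(crop_probability(coef, intercept, p) for p in pixels) * pixel_area_ha
```

Summing probabilities rather than hard 0/1 labels gives an expected-area estimate that degrades gracefully for mixed or uncertain pixels.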
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldstein, P.; Schultz, C.; Larsen, S.
1997-07-15
Monitoring of a CTBT will require transportable seismic identification techniques, especially in regions where there are limited data. Unfortunately, most existing techniques are empirical and cannot be used reliably in new regions. Our goal is to help develop transportable regional identification techniques by improving our ability to predict the behavior of regional phases and discriminants in diverse geologic regions and in regions with little or no data. Our approach is to use numerical modeling to understand the physical basis for regional wave-propagation phenomena and to use this understanding to help explain the observed behavior of regional phases and discriminants. In this paper, we focus on results from simulations of data in selected regions and investigate the sensitivity of these regional simulations to various features of the crustal structure. Our initial models use teleseismically estimated source locations, mechanisms, and durations, and seismological structures that have been determined by others. We model the Mb 5.9, October 1992, Cairo, Egypt, earthquake at a station at Ankara, Turkey (ANTO) using a two-dimensional crustal model consisting of a water layer over a deep sedimentary basin with a thinning crust beneath the basin. Despite the complex tectonics of the Eastern Mediterranean region, we find surprisingly good agreement between the observed data and synthetics based on this relatively smooth two-dimensional model.
NASA Astrophysics Data System (ADS)
Zvietcovich, Fernando; Yao, Jianing; Chu, Ying-Ju; Meemon, Panomsak; Rolland, Jannick P.; Parker, Kevin J.
2016-03-01
Optical Coherence Elastography (OCE) is a widely investigated noninvasive technique for estimating the mechanical properties of tissue. In particular, vibrational OCE methods aim to estimate the shear wave velocity generated by an external stimulus in order to calculate the elastic modulus of tissue. In this study, we compare the performance of five acquisition and processing techniques for estimating the shear wave speed in simulations and in experiments using tissue-mimicking phantoms. Accuracy, contrast-to-noise ratio, and resolution are measured for all cases. The first two techniques make use of one piezoelectric actuator to generate either continuous shear wave propagation (SWP) or tone-burst propagation (TBP) at 400 Hz over the gelatin phantom. The other techniques make use of one additional actuator located on the opposite side of the region of interest in order to create an interference pattern. When both actuators have the same frequency, a standing wave (SW) pattern is generated. Otherwise, when there is a frequency difference df between the two actuators, a crawling wave (CrW) pattern is generated that propagates more slowly than a shear wave, which makes it suitable for detection by 2D cross-sectional OCE imaging. If df is not small compared to the operational frequency, the CrW travels faster and a sampled version of it (SCrW) is acquired by the system. Preliminary results suggest that the TBP (error < 4.1%) and SWP (error < 6%) techniques are more accurate when compared to mechanical measurement test results.
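For the propagating-wave techniques (SWP/TBP), the wave speed follows from the phase lag of the excitation frequency between two lateral positions. A sketch assuming a single known excitation frequency, spacing shorter than one wavelength, and propagation from position a toward position b:

```python
import numpy as np

def shear_wave_speed(signal_a, signal_b, dx, fs, freq):
    """Estimate shear wave speed from two vibration time traces recorded
    a lateral distance dx apart, via the phase lag at the excitation
    frequency: v = 2*pi*freq*dx / dphi. Assumes signal_b lags signal_a
    and that dphi stays below 2*pi (dx shorter than one wavelength)."""
    t = np.arange(len(signal_a)) / fs
    ref = np.exp(-1j * 2 * np.pi * freq * t)   # demodulate at freq
    phi_a = np.angle(np.sum(signal_a * ref))
    phi_b = np.angle(np.sum(signal_b * ref))
    dphi = (phi_a - phi_b) % (2 * np.pi)       # positive lag of b behind a
    return 2 * np.pi * freq * dx / dphi
```

The elastic modulus then follows from the speed via E ≈ 3·rho·v² for nearly incompressible soft tissue.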
Recchia, Gabriel L; Louwerse, Max M
2016-11-01
Computational techniques comparing co-occurrences of city names in texts allow the relative longitudes and latitudes of cities to be estimated algorithmically. However, these techniques have not been applied to estimate the provenance of artifacts with unknown origins. Here, we estimate the geographic origin of artifacts from the Indus Valley Civilization, applying methods commonly used in cognitive science to the Indus script. We show that these methods can accurately predict the relative locations of archeological sites on the basis of artifacts of known provenance, and we further apply these techniques to determine the most probable excavation sites of four sealings of unknown provenance. These findings suggest that inscription statistics reflect historical interactions among locations in the Indus Valley region, and they illustrate how computational methods can help localize inscribed archeological artifacts of unknown origin. The success of this method offers opportunities for the cognitive sciences in general and for computational anthropology specifically. Copyright © 2015 Cognitive Science Society, Inc.
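A common way to turn pairwise co-occurrence statistics into relative coordinates is classical multidimensional scaling (MDS). This sketch shows only the embedding step; converting co-occurrence counts into dissimilarities (higher co-occurrence mapping to smaller distance) is domain-specific and omitted:

```python
import numpy as np

def classical_mds(dist, ndim=2):
    """Classical (Torgerson) MDS: embed n points in ndim dimensions so
    that pairwise Euclidean distances approximate the matrix `dist`."""
    n = dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    B = -0.5 * J @ (dist ** 2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:ndim]         # top ndim eigenpairs
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))
```

The recovered coordinates are unique only up to rotation, reflection, and translation, which is why such methods estimate *relative* longitudes and latitudes.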
Griffiths, Ronald; Topping, David
2015-01-01
Sediment budgets are an important tool for understanding how riverine ecosystems respond to perturbations. Changes in the quantity and grain-size distribution of sediment within river systems affect the channel morphology and related habitat resources. It is therefore important for resource managers to know if a channel reach is in a state of sediment accumulation, deficit or stasis. Many studies have estimated sediment loads from ungaged tributaries using regional sediment-yield equations or other similar techniques. While these approaches may be valid in regions where rainfall and geology are uniform over large areas, use of sediment-yield equations may lead to poor estimations of sediment loads in semi-arid climates, where rainfall events, contributing geology, and vegetation have large spatial variability.
NASA Astrophysics Data System (ADS)
Kumar, Shashi; Khati, Unmesh G.; Chandola, Shreya; Agrawal, Shefali; Kushwaha, Satya P. S.
2017-08-01
The regulation of the carbon cycle is a critical ecosystem service provided by forests globally. It is, therefore, necessary to have robust techniques for speedy assessment of forest biophysical parameters at the landscape level. It is arduous and time-consuming to monitor the status of vast forest landscapes using traditional field methods. Remote sensing and GIS techniques are efficient tools that can monitor the health of forests regularly. Biomass estimation is a key parameter in the assessment of forest health. Polarimetric SAR (PolSAR) remote sensing has already shown its potential for forest biophysical parameter retrieval. The current research work focuses on the retrieval of forest biophysical parameters of tropical deciduous forest, using fully polarimetric spaceborne C-band data with Polarimetric SAR Interferometry (PolInSAR) techniques. The PolSAR-based Interferometric Water Cloud Model (IWCM) has been used to estimate aboveground biomass (AGB). Input parameters to the IWCM have been extracted from decomposition modeling of the SAR data as well as PolInSAR coherence estimation. Forest tree height was retrieved with a PolInSAR coherence-based modeling approach. Two techniques for forest height estimation, Coherence Amplitude Inversion (CAI) and Three Stage Inversion (TSI), are discussed, compared, and validated. These techniques allow estimation of forest stand height and true ground topography. The accuracy of the estimated forest height is assessed using ground-based measurements. The PolInSAR-based forest height models proved weak at discriminating forest vegetation, and as a result height values were obtained in river channels and plain areas. Overestimation of forest height was also noticed at several patches of the forest. To overcome this problem, a coherence- and backscatter-based threshold technique is introduced to identify forest area and suppress spurious height estimates in non-forested regions.
IWCM-based modeling for forest AGB retrieval showed an R² value of 0.5, an RMSE of 62.73 t ha⁻¹, and a percent accuracy of 51%. TSI-based PolInSAR inversion modeling showed the most accurate result for forest height estimation. The correlation between field-measured forest height and tree height estimated using the TSI technique is 62%, with an average accuracy of 91.56% and an RMSE of 2.28 m. The study suggested that a PolInSAR coherence-based modeling approach has significant potential for retrieval of forest biophysical parameters.
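A much-simplified illustration of how PolInSAR geometry yields height: the interferometric phase difference between a volume-dominated and a ground-dominated coherence, divided by the vertical wavenumber kz. Real CAI/TSI inversions also exploit the coherence *magnitude* and a volume decorrelation model; this phase-only form is only a geometric sketch and tends to underestimate height:

```python
import numpy as np

def phase_height(gamma_volume, gamma_ground, kz):
    """Crude PolInSAR height estimate from the phase difference between a
    volume-dominated and a ground-dominated complex coherence:
    h ~= arg(gamma_volume * conj(gamma_ground)) / kz,
    where kz (rad/m) is the vertical interferometric wavenumber."""
    dphi = np.angle(gamma_volume * np.conj(gamma_ground))
    return dphi / kz
```

The thresholding fix described in the abstract would be applied before this step, so that the inversion is only evaluated where coherence and backscatter indicate forest.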
NASA Astrophysics Data System (ADS)
Huang, D.; Wang, G.
2014-12-01
Stochastic simulation of spatially distributed ground-motion time histories is important for performance-based earthquake design of geographically distributed systems. In this study, we develop a novel technique to stochastically simulate regionalized ground-motion time histories using wavelet packet analysis. First, a transient acceleration time history is characterized by the wavelet-packet parameters proposed by Yamamoto and Baker (2013). The wavelet-packet parameters fully characterize ground-motion time histories in terms of energy content, time-frequency-domain characteristics, and time-frequency nonstationarity. This study further investigates the spatial cross-correlations of wavelet-packet parameters based on geostatistical analysis of 1500 regionalized ground-motion records from eight well-recorded earthquakes in California, Mexico, Japan, and Taiwan. The linear model of coregionalization (LMC) is used to develop a permissible spatial cross-correlation model for each parameter group. The geostatistical analysis of ground-motion data from different regions reveals significant dependence of the LMC structure on regional site conditions, which can be characterized by the correlation range of Vs30 in each region. In general, the spatial correlation and cross-correlation of wavelet-packet parameters are stronger if the site condition is more homogeneous. Using the region-specific spatial cross-correlation model and a cokriging technique, wavelet-packet parameters at unmeasured locations can be best estimated, and regionalized ground-motion time histories can be synthesized. Case studies and blind tests demonstrate that the simulated ground motions generally agree well with the actual recorded data if the influence of regional site conditions is considered. The developed method has great potential to be used in computational seismic analysis and loss estimation at a regional scale.
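In the single-variable, unit-variance case, the cokriging estimate at an unmeasured site reduces to simple kriging with the fitted correlation model. A sketch with an isotropic exponential correlation (a deliberate simplification of the paper's LMC cross-correlation structure):

```python
import numpy as np

def exp_corr(h, corr_range):
    """Exponential correlation model; drops to ~0.05 at the given range."""
    return np.exp(-3.0 * h / corr_range)

def simple_krige(obs_xy, obs_vals, target_xy, corr_range, mean=0.0):
    """Best linear unbiased estimate of a (mean-adjusted, unit-variance)
    wavelet-packet parameter at an unmeasured site, from nearby stations,
    under an isotropic exponential spatial correlation."""
    h = np.linalg.norm(obs_xy[:, None] - obs_xy[None, :], axis=-1)
    C = exp_corr(h, corr_range)                                 # station-station
    c0 = exp_corr(np.linalg.norm(obs_xy - target_xy, axis=-1),
                  corr_range)                                   # station-target
    w = np.linalg.solve(C, c0)                                  # kriging weights
    return mean + w @ (obs_vals - mean)
```

The full method solves a coupled (cokriging) version of this system so that cross-correlated parameters inform each other.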
Geophysical analysis for the Ada Tepe region (Bulgaria) - case study
NASA Astrophysics Data System (ADS)
Trifonova, Petya; Metodiev, Metodi; Solakov, Dimcho; Simeonova, Stela; Vatseva, Rumiana
2013-04-01
According to current archeological investigations, Ada Tepe is the oldest gold mine in Europe, dating from the Late Bronze and Early Iron Ages. It is a typical low-sulfidation epithermal gold deposit and is hosted in Maastrichtian-Paleocene sedimentary rocks above a detachment-fault contact with underlying Paleozoic metamorphic rocks. Ada Tepe (25°39'E, 41°25'N) is located in the Eastern Rhodope unit. The region is highly segmented despite the low altitude (470-750 m), due to widespread volcanic and sedimentary rocks susceptible to torrential erosion during the cold season. Besides the thorough geological exploration focused on identifying cost-effective stocks of mineral resources, a detailed geophysical analysis concerning different stages of the gold-extraction project was accomplished. We present the main results from the geophysical investigation aimed at clarifying the complex seismotectonic setting of the Ada Tepe site region. The overall study methodology consists of collecting, reviewing, and evaluating geophysical and seismological information to constrain the model used for seismic hazard assessment of the area. The geophysical information used in the present work consists of gravity, geomagnetic, and seismological data. Interpretation of gravity data is applied to outline the axes of steep gravity transitions, marked as potential axes of faults, flexures, and other dislocation structures. Inversion techniques are also utilized to estimate the form and depth of anomalous sources. For the purposes of the seismological investigation of the Ada Tepe site region, an earthquake catalogue was compiled for the period 510 BC to 2011 AD. Statistical parameters of seismicity, the annual seismic rate parameter λ and the b-value of the Gutenberg-Richter exponential relation, are estimated for the Ada Tepe site region.
All geophysical datasets and derived results are integrated using GIS techniques ensuring interoperability of data when combining, processing and visualizing obtained information from different sources.
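The b-value estimation mentioned above is commonly done with Aki's maximum-likelihood formula; a sketch (the completeness magnitude and catalogue contents are user-supplied, and magnitude-binning corrections are omitted):

```python
import math

def gutenberg_richter_b(mags, m_c):
    """Aki's maximum-likelihood estimate of the Gutenberg-Richter b-value
    from catalogue magnitudes at or above the completeness magnitude m_c:
    b = log10(e) / (mean(M) - m_c)."""
    above = [m for m in mags if m >= m_c]
    return math.log10(math.e) / (sum(above) / len(above) - m_c)
```

The companion rate parameter is then simply the count of events above m_c divided by the catalogue duration in years.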
NASA Astrophysics Data System (ADS)
Huang, Xiaokun; Zhang, You; Wang, Jing
2018-02-01
Reconstructing four-dimensional cone-beam computed tomography (4D-CBCT) images directly from respiratory phase-sorted traditional 3D-CBCT projections can capture target motion trajectories, reduce motion artifacts, and reduce imaging dose and time. However, the limited number of projections in each phase after phase-sorting decreases CBCT image quality under traditional reconstruction techniques. To address this problem, we developed a simultaneous motion estimation and image reconstruction (SMEIR) algorithm, an iterative method that can reconstruct higher-quality 4D-CBCT images from limited projections using an inter-phase intensity-driven motion model. However, the accuracy of the intensity-driven motion model is limited in regions with fine details, whose quality is degraded by the insufficient number of projections, which consequently degrades the reconstructed image quality in the corresponding regions. In this study, we developed a new 4D-CBCT reconstruction algorithm by introducing biomechanical modeling into SMEIR (SMEIR-Bio) to boost the accuracy of the motion model in regions with small, fine structures. The biomechanical modeling uses tetrahedral meshes to model organs of interest and solves internal organ motion using tissue elasticity parameters and mesh boundary conditions. This physics-driven approach enhances the accuracy of the solved motion in the organ's fine-structure regions. This study used 11 lung patient cases to evaluate the performance of SMEIR-Bio, making both qualitative and quantitative comparisons between SMEIR-Bio, SMEIR, and the algebraic reconstruction technique with total variation regularization (ART-TV). The reconstruction results suggest that SMEIR-Bio improves the motion model's accuracy in regions containing small, fine details, which consequently enhances the accuracy and quality of the reconstructed 4D-CBCT images.
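The "algebraic reconstruction technique" underlying the ART-TV baseline is a Kaczmarz sweep over projection rays. A minimal sketch without the TV regularization or the SMEIR motion model:

```python
import numpy as np

def art_iteration(A, b, x, relax=0.5):
    """One sweep of the algebraic reconstruction technique (Kaczmarz):
    for each projection ray i, move the current image estimate x toward
    the hyperplane A[i] @ x = b[i], scaled by a relaxation factor. A is
    the system (projection) matrix, b the measured line integrals."""
    for i in range(A.shape[0]):
        ai = A[i]
        x = x + relax * (b[i] - ai @ x) / (ai @ ai) * ai
    return x
```

SMEIR interleaves sweeps like this with updates of the inter-phase motion model, and SMEIR-Bio replaces the motion update in fine-structure regions with the biomechanically modeled solution.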
Variability of measurements of sweat sodium using the regional absorbent-patch method.
Dziedzic, Christine E; Ross, Megan L; Slater, Gary J; Burke, Louise M
2014-09-01
There is interest in including recommendations for the replacement of the sodium lost in sweat in individualized hydration plans for athletes. Although the regional absorbent-patch method provides a practical approach to measuring sweat sodium losses in field conditions, there is a need to understand the variability of estimates associated with this technique. Sweat samples were collected from the forearms, chest, scapula, and thigh of 12 cyclists during 2 standardized cycling time trials in the heat and 2 in temperate conditions. Single measure analysis of sodium concentration was conducted immediately by ion-selective electrodes (ISE). A subset of 30 samples was frozen for reanalysis of sodium concentration using ISE, flame photometry (FP), and conductivity (SC). Sweat samples collected in hot conditions produced higher sweat sodium concentrations than those from the temperate environment (P = .0032). A significant difference (P = .0048) in estimates of sweat sodium concentration was evident when calculated from the forearm average (mean ± 95% CL; 64 ± 12 mmol/L) compared with using a 4-site equation (70 ± 12 mmol/L). There was a high correlation between the values produced using different analytical techniques (r2 = .95), but mean values were different between treatments (frozen FP, frozen SC > immediate ISE > frozen ISE; P < .0001). Whole-body sweat sodium concentration estimates differed depending on the number of sites included in the calculation. Environmental testing conditions should be considered in the interpretation of results. The impact of sample freezing and subsequent analytical technique was small but statistically significant. Nevertheless, when undertaken using a standardized protocol, the regional absorbent-patch method appears to be a relatively robust field test.
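Whole-body estimates in studies like this are weighted combinations of the regional patch values. A placeholder sketch: the equal weighting below is purely illustrative, since the published multi-site weighting coefficients (for forearm, chest, scapula, and thigh) vary by study and are not reproduced here:

```python
def whole_body_sodium(site_conc, weights=None):
    """Estimate whole-body sweat [Na+] (mmol/L) from regional patch
    concentrations. With no weights this is a plain mean of the measured
    sites; real multi-site equations weight each region differently."""
    if weights is None:
        weights = [1.0 / len(site_conc)] * len(site_conc)
    return sum(w * c for w, c in zip(weights, site_conc))
```

As the abstract notes, the result shifts depending on how many sites enter the calculation, which is why forearm-only and 4-site estimates differed significantly.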
NASA Astrophysics Data System (ADS)
Barba, M.; Willis, M. J.; Tiampo, K. F.; Lynett, P. J.; Mätzler, E.; Thorsøe, K.; Higman, B. M.; Thompson, J. A.; Morin, P. J.
2017-12-01
We use a combination of geodetic imaging techniques and modeling efforts to examine the June 2017 Karrat Fjord, West Greenland, landslide and tsunami event. Our efforts include analysis of precursor motions extracted from Sentinel SAR interferometry, improved with high-resolution Digital Surface Models (DSMs) derived from commercial imagery and geo-coded Structure-from-Motion analyses. We produce well-constrained estimates of landslide volume through DSM differencing by improving the ArcticDEM coverage of the region, and provide modeled tsunami run-up estimates at villages around the region, constrained with in-situ observations provided by the Greenlandic authorities. Estimates of run-up at unoccupied coasts are derived using a blend of high-resolution imagery and elevation models. We further detail post-failure slope stability for areas of interest around the Karrat Fjord region. Warming trends in the region from model and satellite analysis are combined with optical imagery to ascertain whether melting permafrost and the formation of small springs on a slight bench of the mountainside that eventually failed can serve as indicators of future events.
Peak-flow characteristics of Wyoming streams
Miller, Kirk A.
2003-01-01
Peak-flow characteristics for unregulated streams in Wyoming are described in this report. Frequency relations for annual peak flows through water year 2000 at 364 streamflow-gaging stations in and near Wyoming were evaluated and revised or updated as needed. Analyses of historical floods, temporal trends, and generalized skew were included in the evaluation. Physical and climatic basin characteristics were determined for each gaging station using a geographic information system. Gaging stations with similar peak-flow and basin characteristics were grouped into six hydrologic regions. Regional statistical relations between peak-flow and basin characteristics were explored using multiple-regression techniques. Generalized least squares regression equations for estimating magnitudes of annual peak flows with selected recurrence intervals from 1.5 to 500 years were developed for each region. Average standard errors of estimate range from 34 to 131 percent. Average standard errors of prediction range from 35 to 135 percent. Several statistics for evaluating and comparing the errors in these estimates are described. Limitations of the equations are described. Methods for applying the regional equations for various circumstances are listed and examples are given.
Kriging analysis of mean annual precipitation, Powder River Basin, Montana and Wyoming
Karlinger, M.R.; Skrivan, James A.
1981-01-01
Kriging is a statistical estimation technique for regionalized variables which exhibit an autocorrelation structure. Such structure can be described by a semi-variogram of the observed data. The kriging estimate at any point is a weighted average of the data, where the weights are determined using the semi-variogram and an assumed drift, or lack of drift, in the data. Block, or areal, estimates can also be calculated. The kriging algorithm, based on unbiased and minimum-variance estimates, involves a linear system of equations to calculate the weights. Kriging variances can then be used to give confidence intervals of the resulting estimates. Mean annual precipitation in the Powder River basin, Montana and Wyoming, is an important variable when considering restoration of coal-strip-mining lands of the region. Two kriging analyses involving data at 60 stations were made--one assuming no drift in precipitation, and one a partial quadratic drift simulating orographic effects. Contour maps of estimates of mean annual precipitation were similar for both analyses, as were the corresponding contours of kriging variances. Block estimates of mean annual precipitation were made for two subbasins. Runoff estimates were 1-2 percent of the kriged block estimates. (USGS)
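The weighted-average structure of the kriging estimator described above reduces to a single linear solve. The sketch below is a minimal illustration of ordinary point kriging, assuming a hypothetical exponential semi-variogram and made-up station values rather than the Powder River basin data:

```python
import numpy as np

def ordinary_kriging(xy, z, x0, sill=1.0, rng_km=50.0):
    """Ordinary kriging estimate and variance at location x0.

    Assumes an exponential semi-variogram gamma(h) = sill*(1 - exp(-h/rng_km)).
    The weights solve the usual linear system with a Lagrange multiplier
    enforcing that they sum to one (the unbiasedness condition).
    """
    gamma = lambda h: sill * (1.0 - np.exp(-h / rng_km))
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)  # station-station distances
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - x0, axis=1))                # station-target semi-variances
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    return w @ z, w @ b[:n] + mu    # estimate, kriging variance

# Four hypothetical stations (km coordinates) with mean annual precipitation (inches)
xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
z = np.array([12.0, 14.0, 13.0, 15.0])
est, var = ordinary_kriging(xy, z, np.array([5.0, 5.0]))
```

By symmetry the four stations here receive equal weights, and the kriging variance grows as the target point moves away from the data, which is what supplies the confidence intervals mentioned in the abstract.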
Modeling particle number concentrations along Interstate 10 in El Paso, Texas
Olvera, Hector A.; Jimenez, Omar; Provencio-Vasquez, Elias
2014-01-01
Annual average daily particle number concentrations around a highway were estimated with an atmospheric dispersion model and a land use regression model. The dispersion model was used to estimate particle concentrations along Interstate 10 (I-10) at 98 locations within El Paso, Texas. This model employed annual averaged wind speed and annual average daily traffic counts as inputs. A land use regression model with vehicle kilometers traveled as the predictor variable was used to estimate local background concentrations away from the highway to adjust the near-highway concentration estimates. Estimated particle number concentrations ranged between 9.8 × 10³ particles/cc and 1.3 × 10⁵ particles/cc, and averaged 2.5 × 10⁴ particles/cc (SE 421.0). Estimates were compared against values measured at seven sites located along I-10 throughout the region. The average fractional error was 6% and ranged between -1% and -13% across sites. The largest bias of -13% was observed at a semi-rural site where traffic was lowest. The average bias among urban sites was 5%. The accuracy of the estimates depended primarily on the emission factor and the adjustment to local background conditions. An emission factor of 1.63 × 10¹⁴ particles/veh-km was based on a value proposed in the literature and adjusted with local measurements. The integration of the two modeling techniques ensured that the particle number concentration estimates captured the impact of traffic along both the highway and arterial roadways. The performance and economy of the two modeling techniques used in this study show that producing particle-concentration surfaces along major roadways would be feasible in urban regions where traffic and meteorological data are readily available. PMID:25313294
Estimating the D-Region Ionospheric Electron Density Profile Using VLF Narrowband Transmitters
NASA Astrophysics Data System (ADS)
Gross, N. C.; Cohen, M.
2016-12-01
The D-region ionospheric electron density profile plays an important role in many applications, including long-range and transionospheric communications, coupling between the lower atmosphere and the upper ionosphere, and estimation of very low frequency (VLF) wave propagation within the earth-ionosphere waveguide. However, measuring the D-region electron density profile has been a challenge: at about 60 to 90 km in altitude, the D-region is higher than planes and balloons can fly but lower than satellites can orbit. Researchers have previously used VLF remote-sensing techniques, with either narrowband transmitters or sferics, to estimate the density profile, but these estimates typically cover a short time frame and a single propagation path. We report on an effort to construct estimates of the D-region electron density profile over multiple narrowband transmission paths for long periods of time. Measurements from multiple transmitters at multiple receivers are analyzed concurrently to minimize false solutions and improve accuracy. Likewise, time averaging is used to remove short transient noise at the receivers. The cornerstone of the algorithm is an artificial neural network (ANN) whose inputs are the received amplitude and phase for the narrowband transmitters and whose outputs are the commonly used two-parameter (h′ and β) exponential electron density profile. Training data for the ANN are generated using the Navy's Long-Wavelength Propagation Capability (LWPC) model. Results show the algorithm performs well under smooth ionospheric conditions and when proper geometries for the transmitters and receivers are used.
Multitaper spectral analysis of atmospheric radar signals
NASA Astrophysics Data System (ADS)
Anandan, V.; Pan, C.; Rajalakshmi, T.; Ramachandra Reddy, G.
2004-11-01
Multitaper spectral analysis using sinusoidal tapers has been carried out on backscattered signals received from the troposphere and lower stratosphere by the Gadanki Mesosphere-Stratosphere-Troposphere (MST) radar under various signal-to-noise conditions. A comparison is made between the sinusoidal multitaper of order three and the single Hanning and rectangular tapers to understand the relative merits of each processing scheme. Power spectra plots show that echoes are better identified with the multitaper estimate, especially in regions of weak signal-to-noise ratio. Further analysis is carried out to obtain the three lower-order moments from the three estimation techniques. The results show that multitaper analysis gives a better signal-to-noise ratio, i.e., higher detectability. The spectral estimates from the multitaper and single tapers are also examined for consistency of measurement. Results show that the multitaper estimate is more consistent in Doppler measurements than the single-taper estimates. Doppler width measurements with the different approaches were also studied, and the results show that the multitaper technique performs better in terms of temporal resolution and estimation accuracy.
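One attraction of the sinusoidal tapers used here is that they are trivial to generate and average. The sketch below forms a three-taper sine multitaper spectrum on a synthetic noisy tone rather than MST radar returns; all signal parameters are made up for illustration:

```python
import numpy as np

def sine_multitaper_psd(x, n_tapers=3):
    """Average the periodograms of sine-tapered copies of x.

    The k-th sine taper is sqrt(2/(N+1)) * sin(pi*k*t/(N+1)), t = 1..N;
    averaging the tapered spectra trades a little frequency resolution
    for a large reduction in estimator variance.
    """
    N = len(x)
    t = np.arange(1, N + 1)
    psd = np.zeros(N // 2 + 1)
    for k in range(1, n_tapers + 1):
        taper = np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * k * t / (N + 1))
        psd += np.abs(np.fft.rfft(x * taper)) ** 2
    return psd / n_tapers

# Synthetic "weak echo": a tone at 0.1 cycles/sample buried in strong noise
rng = np.random.default_rng(0)
N = 512
n = np.arange(N)
x = np.cos(2 * np.pi * 0.1 * n) + 2.0 * rng.standard_normal(N)
psd = sine_multitaper_psd(x, n_tapers=3)
peak_bin = int(np.argmax(psd))   # frequency bin of the detected echo
```

Averaging the three eigenspectra lowers the variance of the estimate, which is what improves the detectability of weak echoes reported in the abstract.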
NASA Astrophysics Data System (ADS)
Griffiths, Ronald E.; Topping, David J.
2017-11-01
Sediment budgets are an important tool for understanding how riverine ecosystems respond to perturbations. Changes in the quantity and grain size distribution of sediment within river systems affect the channel morphology and related habitat resources. It is therefore important for resource managers to know if a river reach is in a state of sediment accumulation, deficit or stasis. Many sediment-budget studies have estimated the sediment loads of ungaged tributaries using regional sediment-yield equations or other similar techniques. While these approaches may be valid in regions where rainfall and geology are uniform over large areas, use of sediment-yield equations may lead to poor estimations of loads in regions where rainfall events, contributing geology, and vegetation have large spatial and/or temporal variability. Previous estimates of the combined mean-annual sediment load of all ungaged tributaries to the Colorado River downstream from Glen Canyon Dam vary by over a factor of three; this range in estimated sediment loads has resulted in different researchers reaching opposite conclusions on the sign (accumulation or deficit) of the sediment budget for particular reaches of the Colorado River. To better evaluate the supply of fine sediment (sand, silt, and clay) from these tributaries to the Colorado River, eight gages were established on previously ungaged tributaries in Glen, Marble, and Grand canyons. Results from this sediment-monitoring network show that previous estimates of the annual sediment loads of these tributaries were too high and that the sediment budget for the Colorado River below Glen Canyon Dam is more negative than previously calculated by most researchers. As a result of locally intense rainfall events with footprints smaller than the receiving basin, floods from a single tributary in semi-arid regions can have large (≥ 10 ×) differences in sediment concentrations between equal magnitude flows. 
Because sediment loads do not necessarily correlate with drainage size, and may vary by two orders of magnitude on an annual basis, using techniques such as sediment-yield equations to estimate the sediment loads of ungaged tributaries may lead to large errors in sediment budgets.
Satellite-enhanced dynamical downscaling for the analysis of extreme events
NASA Astrophysics Data System (ADS)
Nunes, Ana M. B.
2016-09-01
The use of regional models in the downscaling of general circulation models provides a strategy to generate more detailed climate information. In this setting, boundary-forcing techniques can be useful to keep the large-scale features of the coarse-resolution global models in agreement with the inner modes of the higher-resolution regional models. Although those procedures might improve the dynamics, downscaling via regional modeling still aims for a better representation of physical processes. With the purpose of improving both dynamics and physical processes in regional downscaling of global reanalysis, the Regional Spectral Model, originally developed at the National Centers for Environmental Prediction, employs a newly reformulated scale-selective bias correction together with 3-hourly assimilation of the satellite-based precipitation estimates constructed from the Climate Prediction Center morphing technique. This two-scheme technique for the dynamical downscaling of global reanalysis can be applied to analyses of environmental disasters and risk assessment, with hourly outputs and a resolution of about 25 km. Here, the added value of satellite-enhanced dynamical downscaling is demonstrated in simulations of the first reported hurricane in the western South Atlantic Ocean basin through comparisons with global reanalyses and satellite products available over ocean areas.
Garabedian, Stephen P.
1986-01-01
A nonlinear, least-squares regression technique for the estimation of ground-water flow model parameters was applied to the regional aquifer underlying the eastern Snake River Plain, Idaho. The technique uses a computer program to simulate two-dimensional, steady-state ground-water flow. Hydrologic data for the 1980 water year were used to calculate recharge rates, boundary fluxes, and spring discharges. Ground-water use was estimated from irrigated land maps and crop consumptive-use figures. These estimates of ground-water withdrawal, recharge rates, and boundary flux, along with leakance, were used as known values in the model calibration of transmissivity. Leakance values were adjusted between regression solutions by comparing model-calculated to measured spring discharges. In other simulations, recharge and leakance also were calibrated as prior-information regression parameters, which limits the variation of these parameters using a normalized standard error of estimate. Results from a best-fit model indicate a wide areal range in transmissivity from about 0.05 to 44 feet squared per second and in leakance from about 2.2×10⁻⁹ to 6.0×10⁻⁸ feet per second per foot. Along with parameter values, model statistics also were calculated, including the coefficient of correlation between calculated and observed head (0.996), the standard error of the estimates for head (40 feet), and the parameter coefficients of variation (about 10-40 percent). Additional boundary flux was added in some areas during calibration to achieve proper fit to ground-water flow directions. Model fit improved significantly when areas that violated model assumptions were removed. It also improved slightly when y-direction (northwest-southeast) transmissivity values were larger than x-direction (northeast-southwest) transmissivity values. 
The model was most sensitive to changes in recharge, and in some areas, to changes in transmissivity, particularly near the spring discharge area from Milner Dam to King Hill.
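The full calibration couples a ground-water flow simulation to a nonlinear least-squares solver, which is too large to reproduce here, but the underlying Gauss-Newton iteration can be shown on a toy model. Everything below, the exponential "head" model and its parameters, is hypothetical and stands in for the real simulation:

```python
import numpy as np

def gauss_newton(residual, jacobian, p0, n_iter=20):
    """Gauss-Newton iteration: p <- p - (J^T J)^(-1) J^T r."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)                      # model minus observations
        J = jacobian(p)                      # sensitivity of residuals to parameters
        p = p - np.linalg.solve(J.T @ J, J.T @ r)
    return p

# Toy "calibration": synthetic observed heads follow h = a * exp(-b * x)
x = np.linspace(0.0, 4.0, 30)
a_true, b_true = 100.0, 0.7
h_obs = a_true * np.exp(-b_true * x)

residual = lambda p: p[0] * np.exp(-p[1] * x) - h_obs
jacobian = lambda p: np.column_stack([np.exp(-p[1] * x),
                                      -p[0] * x * np.exp(-p[1] * x)])
p_hat = gauss_newton(residual, jacobian, p0=[90.0, 0.6])
```

Each iteration linearizes the residuals about the current parameter values and solves the normal equations; the same structure underlies calibrating transmissivity and leakance against observed heads and spring discharges.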
NASA Astrophysics Data System (ADS)
Cardona, Javier Fernando; García Bonilla, Alba Carolina; Tomás García, Rogelio
2017-11-01
This article shows that the effect of all quadrupole errors present in an interaction region (IR) with low β* can be modeled by an equivalent magnetic kick, which can be estimated from action and phase jumps found in beam position data. This equivalent kick is used to find the strengths that certain normal and skew quadrupoles located in the IR must have to make an effective correction in that region. Additionally, averaging techniques to reduce noise in beam position data, which allow precise estimates of equivalent kicks, are presented and mathematically justified. The complete procedure is tested with simulated data obtained from madx and with 2015 LHC experimental data. The analyses performed on the experimental data indicate that the strengths of the IR skew quadrupole correctors and normal quadrupole correctors can be estimated within a 10% uncertainty. Finally, the effect of IR corrections on β* is studied, and a correction scheme that returns this parameter to its design value is proposed.
Rain volume estimation over areas using satellite and radar data
NASA Technical Reports Server (NTRS)
Doneaud, Andre A.; Vonderhaar, T. H.; Johnson, L. R.; Laybe, P.; Reinke, D.
1987-01-01
The analysis of 18 convective clusters demonstrates that the Area-Time Integral (ATI) technique can be extended to satellite data. Differences between the internal structures of the radar reflectivity features and of the satellite features give rise to differences in the delineated areas used to estimate rain volumes; however, by focusing on the area integrated over the lifetime of the storm, some of the errors produced by the differences in cloud geometry as viewed by radar or satellite are minimized. The results are encouraging, and future developments should consider data from different climatic regions and allow for implementation of the technique in a general circulation model.
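The ATI calculation itself is simple arithmetic: integrate the delineated echo area over the storm's lifetime and scale by a climatological rain-rate factor. A sketch with entirely hypothetical numbers (the scan interval, areas, and the factor `kappa_mm_per_h` are all made up):

```python
import numpy as np

# Hypothetical echo-area time series (km^2) for one convective cluster,
# delineated from radar or satellite imagery every 15 minutes
areas_km2 = np.array([0.0, 40.0, 120.0, 260.0, 310.0, 180.0, 60.0, 0.0])
dt_h = 0.25                      # scan interval in hours

# Area-Time Integral over the storm's lifetime
ati_km2_h = float(np.sum(areas_km2 * dt_h))

# Assumed climatological rain-rate factor (mm/h)
kappa_mm_per_h = 3.7

# Rain volume: (mm/h) * (km^2 * h) -> mm*km^2; 1 mm over 1 km^2 = 1000 m^3
volume_m3 = kappa_mm_per_h * ati_km2_h * 1000.0
```

Because only the lifetime-integrated area matters, moderate disagreements between radar- and satellite-delineated areas at any single scan tend to wash out, which is the point made in the abstract.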
Global observations of tropospheric BrO columns using GOME-2 satellite data
NASA Astrophysics Data System (ADS)
Theys, N.; van Roozendael, M.; Hendrick, F.; Yang, X.; de Smedt, I.; Richter, A.; Begoin, M.; Errera, Q.; Johnston, P. V.; Kreher, K.; de Mazière, M.
2010-11-01
Measurements from the GOME-2 satellite instrument have been analyzed for tropospheric BrO using a residual technique that combines measured BrO columns and estimates of the stratospheric BrO content from a climatological approach driven by O3 and NO2 observations. Comparisons between the GOME-2 results and BrO vertical columns derived from correlative ground-based and SCIAMACHY nadir observations show good consistency. We show that the adopted technique enables separation of the stratospheric and tropospheric fractions of the measured total BrO columns and allows quantitative study of the BrO plumes in polar regions. While some satellite-observed plumes of enhanced BrO can be explained by descending stratospheric air, we show that most BrO hotspots are of tropospheric origin, although they are often associated with regions of low tropopause height as well. Elaborating on simulations using the p-TOMCAT tropospheric chemical transport model, this result is found to be consistent with the mechanism of bromine release through sea salt aerosol production during blowing snow events. Outside polar regions, evidence is provided for a global tropospheric BrO background with columns of 1-3×10¹³ molec/cm².
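At its core, the residual technique subtracts the climatological stratospheric column from the measured total vertical column. A sketch with hypothetical column values (the real analysis also involves air-mass-factor conversions and the O3/NO2-driven climatology, not shown here):

```python
import numpy as np

# Hypothetical vertical columns (molec/cm^2) for three scenes
total_bro = np.array([5.1e13, 6.8e13, 4.9e13])   # measured total BrO columns
strat_bro = np.array([3.2e13, 3.4e13, 3.1e13])   # climatological stratospheric part

# Residual technique: the tropospheric column is whatever the
# stratospheric climatology cannot account for
tropo_bro = total_bro - strat_bro
```

In this toy example the middle scene stands out as an enhanced tropospheric plume, while the other two sit in the 1-3×10¹³ molec/cm² background range quoted in the abstract.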
A three-microphone acoustic reflection technique using transmitted acoustic waves in the airway.
Fujimoto, Yuki; Huang, Jyongsu; Fukunaga, Toshiharu; Kato, Ryo; Higashino, Mari; Shinomiya, Shohei; Kitadate, Shoko; Takahara, Yutaka; Yamaya, Atsuyo; Saito, Masatoshi; Kobayashi, Makoto; Kojima, Koji; Oikawa, Taku; Nakagawa, Ken; Tsuchihara, Katsuma; Iguchi, Masaharu; Takahashi, Masakatsu; Mizuno, Shiro; Osanai, Kazuhiro; Toga, Hirohisa
2013-10-15
The acoustic reflection technique noninvasively measures airway cross-sectional area as a function of distance and uses a wave tube with a constant cross-sectional area to separate incident and reflected waves introduced into the mouth or nostril. The accuracy of the estimated cross-sectional areas degrades with distance because of the nature of marching algorithms: errors in the estimated areas at closer distances accumulate into those at greater distances. Here we present a new acoustic reflection technique that measures transmitted acoustic waves in the airway with three microphones and without employing a wave tube. Using miniaturized microphones mounted on a catheter, we estimated reflection coefficients among the microphones and separated incident and reflected waves. A model study showed that the estimated cross-sectional area vs. distance function was coincident with that of the conventional two-microphone method and did not change with altered cross-sectional areas at the microphone position, although the estimated cross-sectional areas are relative to the area at the microphone position. The pharyngeal cross-sectional areas, including the retropalatal and retroglossal regions, and the closing site during sleep were visualized in patients with obstructive sleep apnea. The method is applicable to larger or smaller bronchi to evaluate the airspace and function in these localized airways.
Larry W. van Tassell; E. Tom Bartlett; John E. Mitchell
2001-01-01
Scenario analysis techniques were used to combine projections from 35 grazed forage experts to estimate future forage demand scenarios and examine factors that are anticipated to impact the use of grazed forages in the South, North, and West Regions of the United States. The amount of land available for forage production is projected to decrease in all regions while...
1982-09-01
estimate of transportation stress for the remainder of each distributor’s service area by using approximation techniques developed in earlier SYSTAN work …
Estimation of root zone storage capacity at the catchment scale using improved Mass Curve Technique
NASA Astrophysics Data System (ADS)
Zhao, Jie; Xu, Zongxue; Singh, Vijay P.
2016-09-01
The root zone storage capacity (Sr) greatly influences runoff generation, soil water movement, and vegetation growth and is hence an important variable for ecological and hydrological modelling. However, due to the great heterogeneity in soil texture and structure, there is at present no effective approach to monitoring or estimating Sr at the catchment scale. To fill the gap, in this study the Mass Curve Technique (MCT) was improved by incorporating a snowmelt module for the estimation of Sr at the catchment scale in different climatic regions. After the plausibility of the improved MCT-derived Sr was evaluated, the "range of perturbation" method was used to generate different scenarios for determining the sensitivity of Sr to its influencing factors. The results showed that: (i) Sr estimates of different catchments varied greatly, from ∼10 mm to ∼200 mm, with changes in climatic conditions and underlying surface characteristics. (ii) The improved MCT is a simple but powerful tool for Sr estimation in different climatic regions of China, and incorporating more catchments into Sr comparisons can further improve our knowledge of the variability of Sr. (iii) Variation in Sr values is an integrated consequence of variations in rainfall, snowmelt water, and evapotranspiration; Sr values are most sensitive to variations in ecosystem evapotranspiration. In addition, Sr values with a longer return period are more stable than those with a shorter return period when affected by fluctuations in their influencing factors.
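The core of the Mass Curve Technique can be sketched as a running water-balance deficit: the root zone must be able to store whatever the driest spell in the record demands. The snippet below is a simplified sketch with hypothetical monthly values; the study's actual implementation also involves return-period analysis, which is not shown:

```python
def root_zone_storage(precip, snowmelt, et):
    """Maximum cumulative water deficit the root zone must buffer (Sr).

    Running deficit: D_t = max(0, D_{t-1} + ET_t - (P_t + M_t)).
    Sr is the largest deficit reached, i.e. the storage needed to
    bridge the driest spell in the record.
    """
    deficit, sr = 0.0, 0.0
    for p, m, e in zip(precip, snowmelt, et):
        deficit = max(0.0, deficit + e - (p + m))
        sr = max(sr, deficit)
    return sr

# Hypothetical monthly series (mm): snowmelt in spring, a dry summer
precip   = [60, 55, 40, 20, 10,  5, 10, 30, 50, 70, 65, 60]
snowmelt = [ 0, 10, 20, 15,  0,  0,  0,  0,  0,  0,  0,  0]
et       = [10, 20, 40, 70, 90, 95, 80, 60, 30, 15, 10, 10]
sr_mm = root_zone_storage(precip, snowmelt, et)
```

Adding the snowmelt term to the supply side is exactly the role of the snowmelt module described in the abstract: without it, spring evapotranspiration in snow-fed catchments would inflate the apparent deficit and hence Sr.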
Techniques for estimating flood-peak discharges of rural, unregulated streams in Ohio
Koltun, G.F.
2003-01-01
Regional equations for estimating 2-, 5-, 10-, 25-, 50-, 100-, and 500-year flood-peak discharges at ungaged sites on rural, unregulated streams in Ohio were developed by means of ordinary and generalized least-squares (GLS) regression techniques. One-variable, simple equations and three-variable, full-model equations were developed on the basis of selected basin characteristics and flood-frequency estimates determined for 305 streamflow-gaging stations in Ohio and adjacent states. The average standard errors of prediction ranged from about 39 to 49 percent for the simple equations, and from about 34 to 41 percent for the full-model equations. Flood-frequency estimates determined by means of log-Pearson Type III analyses are reported along with weighted flood-frequency estimates, computed as a function of the log-Pearson Type III estimates and the regression estimates. Values of explanatory variables used in the regression models were determined from digital spatial data sets by means of a geographic information system (GIS), with the exception of drainage area, which was determined by digitizing the area within basin boundaries manually delineated on topographic maps. Use of GIS-based explanatory variables represents a major departure in methodology from that described in previous reports on estimating flood-frequency characteristics of Ohio streams. Examples are presented illustrating application of the regression equations to ungaged sites on ungaged and gaged streams. A method is provided to adjust regression estimates for ungaged sites by use of weighted and regression estimates for a gaged site on the same stream. A region-of-influence method, which employs a computer program to estimate flood-frequency characteristics for ungaged sites based on data from gaged sites with similar characteristics, was also tested and compared to the GLS full-model equations. 
For all recurrence intervals, the GLS full-model equations had superior prediction accuracy relative to the simple equations and therefore are recommended for use.
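The region-of-influence idea tested above, selecting similar gages in basin-characteristic space and fitting a regression only to them, can be sketched briefly. All gage data below are hypothetical, and the unweighted OLS fit is a simplification of the weighted schemes used in practice:

```python
import numpy as np

def region_of_influence_estimate(X_gaged, y_gaged, x_site, k=3):
    """Estimate log peak flow at an ungaged site from its k most similar gages.

    Basin characteristics are standardized, the k nearest gages in that
    space form the region of influence, and an ordinary least-squares fit
    to just those gages is applied to the site.  (Operational versions
    weight gages by similarity; that refinement is omitted here.)
    """
    mu, sd = X_gaged.mean(axis=0), X_gaged.std(axis=0)
    dist = np.linalg.norm((X_gaged - mu) / sd - (x_site - mu) / sd, axis=1)
    idx = np.argsort(dist)[:k]                       # the region of influence
    A = np.column_stack([np.ones(k), X_gaged[idx]])
    coef, *_ = np.linalg.lstsq(A, y_gaged[idx], rcond=None)
    return coef @ np.concatenate(([1.0], x_site))

# Hypothetical gages: columns are log10(drainage area) and mean annual precip
X = np.array([[1.0, 30.0], [1.2, 33.0], [2.0, 40.0], [2.1, 46.0], [3.0, 55.0]])
y = np.array([2.1, 2.3, 3.0, 3.1, 4.0])              # log10 of 100-yr peak flow
q100_log = region_of_influence_estimate(X, y, np.array([1.1, 31.0]), k=3)
```

Because the set of nearby gages is rebuilt for every site of interest, each ungaged site effectively gets its own regression, which is the defining feature of the technique.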
Survival of European mouflon (Artiodactyla: Bovidae) in Hawai'i based on tooth cementum lines
Hess, S.C.; Stephens, R.M.; Thompson, T.L.; Danner, R.M.; Kawakami, B.
2011-01-01
Reliable techniques for estimating age of ungulates are necessary to determine population parameters such as age structure and survival. Techniques that rely on dentition, horn, and facial patterns have limited utility for European mouflon sheep (Ovis gmelini musimon), but tooth cementum lines may offer a useful alternative. Cementum lines may not be reliable outside temperate regions, however, because lack of seasonality in diet may affect annulus formation. We evaluated the utility of tooth cementum lines for estimating age of mouflon in Hawai'i in comparison to dentition. Cementum lines were present in mouflon from Mauna Loa, island of Hawai'i, but were less distinct than in North American sheep. The two age-estimation methods provided similar estimates for individuals aged ≤3 yr by dentition (the maximum age estimable by dentition), with exact matches in 51% (18/35) of individuals, and an average difference of 0.8 yr (range 0-4). Estimates of age from cementum lines were higher than those from dentition in 40% (14/35) and lower in 9% (3/35) of individuals. Discrepancies in age estimates between techniques and between paired tooth samples estimated by cementum lines were related to certainty categories assigned by the clarity of cementum lines, reinforcing the importance of collecting a sufficient number of samples to compensate for samples of lower quality, which in our experience comprised approximately 22% of teeth. Cementum lines appear to provide relatively accurate age estimates for mouflon in Hawai'i, allow estimating age beyond 3 yr, and offer more precise estimates than tooth eruption patterns. After constructing an age distribution, we estimated annual survival with a log-linear model to be 0.596 (95% CI 0.554-0.642) for this heavily controlled population. © 2011 by University of Hawai'i Press.
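Given an age distribution built from cementum ages, the log-linear survival estimate reduces to a regression of log counts on age: with constant annual survival S, a standing age distribution declines geometrically, so the slope of ln(count) vs. age estimates ln(S). A sketch with hypothetical counts, not the Mauna Loa data:

```python
import numpy as np

# Hypothetical standing age distribution from cementum-line age estimates
ages   = np.array([1, 2, 3, 4, 5, 6])
counts = np.array([120, 72, 43, 26, 15, 9])

# Log-linear model ln(n_x) = a + b*x; the slope b estimates ln(S)
b, a = np.polyfit(ages, np.log(counts), 1)   # polyfit returns [slope, intercept]
survival = float(np.exp(b))
```

This simple estimator assumes a stationary population with constant survival across ages, assumptions that a full analysis would need to check.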
NASA Technical Reports Server (NTRS)
Huffman, George J.; Adler, Robert F.; Bolvin, David T.; Curtis, Scott; Einaudi, Franco (Technical Monitor)
2001-01-01
Multi-purpose remote-sensing products from various satellites have proved crucial in developing global estimates of precipitation. Examples of these products include low-earth-orbit and geosynchronous-orbit infrared (leo- and geo-IR), Outgoing Longwave Radiation (OLR), Television Infrared Operational Satellite (TIROS) Operational Vertical Sounder (TOVS) data, and passive microwave data such as that from the Special Sensor Microwave/ Imager (SSM/I). Each of these datasets has served as the basis for at least one useful quasi-global precipitation estimation algorithm; however, the quality of estimates varies tremendously among the algorithms for the different climatic regions around the globe.
Seliske, L; Norwood, T A; McLaughlin, J R; Wang, S; Palleschi, C; Holowaty, E
2016-06-07
An important public health goal is to decrease the prevalence of key behavioural risk factors, such as tobacco use and obesity. Survey information is often available at the regional level, but heterogeneity within large geographic regions cannot be assessed. Advanced spatial analysis techniques are demonstrated to produce sensible micro area estimates of behavioural risk factors that enable identification of areas with high prevalence. A spatial Bayesian hierarchical model was used to estimate the micro area prevalence of current smoking and excess bodyweight for the Erie-St. Clair region in southwestern Ontario. Estimates were mapped for male and female respondents of five cycles of the Canadian Community Health Survey (CCHS). The micro areas were 2006 Census Dissemination Areas, with an average population of 400-700 people. Two individual-level models were specified: one controlled for survey cycle and age group (model 1), and one controlled for survey cycle, age group and micro area median household income (model 2). Post-stratification was used to derive micro area behavioural risk factor estimates weighted to the population structure. SaTScan analyses were conducted on the granular, postal-code level CCHS data to corroborate findings of elevated prevalence. Current smoking was elevated in two urban areas for both sexes (Sarnia and Windsor), and an additional small community (Chatham) for males only. Areas of excess bodyweight were prevalent in an urban core (Windsor) among males, but not females. Precision of the posterior post-stratified current smoking estimates was improved in model 2, as indicated by narrower credible intervals and a lower coefficient of variation. For excess bodyweight, both models had similar precision. Aggregation of the micro area estimates to CCHS design-based estimates validated the findings. 
This is among the first studies to apply a full Bayesian model to complex sample survey data to identify micro areas with variation in risk factor prevalence, accounting for spatial correlation and other covariates. Application of micro area analysis techniques helps define areas for public health planning, and may be informative to surveillance and research modeling of relevant chronic disease outcomes.
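The post-stratification step described above can be illustrated with a minimal sketch: group-level prevalence estimates are weighted by a micro area's population structure to yield an area-level estimate. The numbers below are invented for illustration and are not values from the study.

```python
# Hypothetical post-stratification sketch: combine model-based prevalence
# estimates per age group with a micro area's population structure.
def poststratify(prevalence_by_group, population_by_group):
    """Weight group-level prevalence by the area's population shares."""
    total = sum(population_by_group.values())
    return sum(prevalence_by_group[g] * n / total
               for g, n in population_by_group.items())

# Illustrative numbers only (not from the study).
prev = {"20-39": 0.28, "40-59": 0.22, "60+": 0.15}
pop = {"20-39": 250, "40-59": 300, "60+": 150}
estimate = poststratify(prev, pop)
```

The same weighting, applied within each micro area, turns the individual-level model output into the mapped area-level risk factor estimates.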
Painter, Jaime A.; Torak, Lynn J.; Jones, John W.
2015-09-30
Methods to estimate irrigation withdrawal using nationally available datasets and techniques that are transferable to other agricultural regions were evaluated by the U.S. Geological Survey as part of the Apalachicola-Chattahoochee-Flint (ACF) River Basin focus area study of the National Water Census (ACF–FAS). These methods investigated the spatial, temporal, and quantitative distributions of water withdrawal for irrigation in the southwestern Georgia region of the ACF–FAS, filling a vital need to inform science-based decisions regarding resource management and conservation. The crop-demand method assumed that only enough water is pumped onto a crop to satisfy the deficit between evapotranspiration and precipitation. A second method applied a geostatistical regimen of variography and conditional simulation to monthly metered irrigation withdrawal to estimate irrigation withdrawal where data do not exist. A third method analyzed Landsat satellite imagery using an automated approach to generate monthly estimates of irrigated lands. These methods were evaluated independently and compared collectively with measured water withdrawal information available in the Georgia part of the ACF–FAS, principally in the Chattahoochee-Flint River Basin. An assessment of each method’s contribution to the National Water Census program was also made to identify transfer value of the methods to the national program and other water census studies. None of the three methods evaluated represents a turnkey process to estimate irrigation withdrawal on any spatial (local or regional) or temporal (monthly or annual) extent. Each method requires additional information on agricultural practices during the growing season to complete the withdrawal estimation process.
Spatial and temporal limitations inherent in identifying irrigated acres during the growing season, and in designing spatially and temporally representative monitor (meter) networks, can undermine the ability of the methods to produce accurate irrigation-withdrawal estimates that can be used to produce dependable and consistent assessments of water availability and use for the National Water Census. Emerging satellite-data products and techniques for data analysis can generate high spatial-resolution estimates of irrigated-acres distributions with near-term temporal frequencies compatible with the needs of the ACF–FAS and the National Water Census.
POF-Darts: Geometric adaptive sampling for probability of failure
Ebeida, Mohamed S.; Mitchell, Scott A.; Swiler, Laura P.; ...
2016-06-18
We introduce a novel technique, POF-Darts, to estimate the Probability Of Failure (POF) based on random disk-packing in the uncertain parameter space. POF-Darts uses hyperplane sampling to explore the unexplored part of the uncertain space. We use the function evaluation at a sample point to determine whether it belongs to a failure or non-failure region, and surround it with a protection sphere to avoid clustering. We decompose the domain into Voronoi cells around the function evaluations as seeds and choose the radius of the protection sphere depending on the local Lipschitz continuity. As sampling proceeds, the regions not yet covered by spheres shrink, improving the estimation accuracy. After exhausting the function evaluation budget, we build a surrogate model using the function evaluations associated with the sample points and estimate the probability of failure by exhaustive sampling of that surrogate. In comparison to other similar methods, our algorithm has the advantages of decoupling the sampling step from the surrogate construction step, the ability to reach target POF values with fewer samples, and the capability of estimating the number and locations of disconnected failure regions, not just the POF value. Furthermore, we present various examples to demonstrate the efficiency of our novel approach.
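A heavily simplified 1D sketch of the POF-Darts idea follows: adaptive samples carry protection radii derived from their distance to the failure threshold divided by a Lipschitz bound, and a cheap surrogate is then swept by exhaustive Monte Carlo. The constant Lipschitz guess and nearest-neighbour surrogate are stand-ins for the paper's Voronoi decomposition and surrogate machinery, not the authors' implementation.

```python
import random

def pof_darts_sketch(f, threshold, bounds, budget=60, mc=20000, seed=1):
    """Simplified POF-Darts-style sketch (1D): adaptive samples with
    protection radii, then a nearest-neighbour surrogate for the POF."""
    random.seed(seed)
    lo, hi = bounds
    L = 2.0  # crude global Lipschitz guess (assumed, not estimated locally)
    samples = []  # (x, f(x), protection radius)
    for _ in range(budget):
        # Rejection step: try to avoid landing inside existing spheres.
        for _ in range(100):
            x = random.uniform(lo, hi)
            if all(abs(x - s) > r for s, _, r in samples):
                break
        fx = f(x)
        # Radius from the distance to the failure threshold: spheres are
        # small near the failure boundary, so sampling concentrates there.
        samples.append((x, fx, abs(fx - threshold) / L))
    # Surrogate: nearest-neighbour classification, exhaustively sampled.
    fails = 0
    for _ in range(mc):
        x = random.uniform(lo, hi)
        _, fx, _ = min(samples, key=lambda s: abs(s[0] - x))
        fails += fx > threshold
    return fails / mc
```

For f(x) = x with failure defined as f > 0.7 on [0, 1], the true POF is 0.3, and the sketch recovers it to within the surrogate's boundary resolution.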
4D computerized ionospheric tomography by using GPS measurements and IRI-Plas model
NASA Astrophysics Data System (ADS)
Tuna, Hakan; Arikan, Feza; Arikan, Orhan
2016-07-01
Ionospheric imaging is an important subject in ionospheric studies. GPS-based TEC measurements provide very accurate information about the electron density values in the ionosphere. However, since the measurements are generally very sparse and non-uniformly distributed, computation of a 3D electron density estimate from measurements alone is an ill-defined problem. Model-based 3D electron density estimations provide physically feasible distributions. However, they are not generally compliant with the TEC measurements obtained from GPS receivers. In this study, GPS-based TEC measurements and an ionosphere model known as International Reference Ionosphere Extended to Plasmasphere (IRI-Plas) are employed together in order to obtain a physically accurate 3D electron density distribution which is compliant with the real measurements obtained from a GPS satellite-receiver network. Ionospheric parameters input to the IRI-Plas model are perturbed in the region of interest by using parametric perturbation models such that the synthetic TEC measurements calculated from the resultant 3D electron density distribution fit the real TEC measurements. The problem is considered as an optimization problem where the optimization parameters are the parameters of the parametric perturbation models. The proposed technique is applied over Turkey, on both calm and storm days of the ionosphere. Results show that the proposed technique produces 3D electron density distributions which are compliant with the IRI-Plas model, GPS TEC measurements, and ionosonde measurements. The effect of the number of GPS receiver stations on the performance of the proposed technique is investigated. Results showed that 7 GPS receiver stations in a region as large as Turkey are sufficient for both calm and storm days of the ionosphere.
Since the ionization levels in the ionosphere are highly correlated in time, the proposed technique is extended to the time domain by applying Kalman-based tracking and smoothing approaches to the obtained results. Combining Kalman methods with the proposed 3D CIT technique creates a robust 4D ionospheric electron density estimation model and has the advantage of decreasing the computational cost of the proposed method. Results for both calm and storm days of the ionosphere show that the new technique produces more robust solutions, especially when the number of GPS receiver stations in the region is small. This study is supported by TUBITAK 114E541, 115E915 and Joint TUBITAK 114E092 and AS CR 14/001 projects.
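The perturbation-fitting step can be caricatured as a one-parameter search: choose the perturbation amplitude whose perturbed model TEC minimizes the squared misfit to the measured TEC. The multiplicative perturbation and grid search below are illustrative assumptions, not the parametric perturbation models used in the study.

```python
def fit_perturbation(measured_tec, model_tec, amplitudes):
    """Grid-search sketch: pick the perturbation amplitude whose perturbed
    model TEC best fits the measured TEC (sum of squared residuals)."""
    def cost(a):
        return sum((m - t * (1.0 + a)) ** 2
                   for m, t in zip(measured_tec, model_tec))
    return min(amplitudes, key=cost)

# Hypothetical values: measurements sit ~10% above the background model.
model = [10.0, 12.0, 15.0, 11.0]
measured = [m * 1.1 for m in model]
best = fit_perturbation(measured, model, [i / 100 for i in range(-20, 21)])
```

In the study the search is over the parameters of the IRI-Plas perturbation models and the synthetic TEC comes from ray integrals through the perturbed 3D density, but the misfit-minimization structure is the same.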
Comparison of ionospheric plasma drifts obtained by different techniques
NASA Astrophysics Data System (ADS)
Kouba, Daniel; Arikan, Feza; Arikan, Orhan; Toker, Cenk; Mosna, Zbysek; Gok, Gokhan; Rejfek, Lubos; Ari, Gizem
2016-07-01
Ionospheric observatory in Pruhonice (Czech Republic, 50N, 14.9E) provides regular ionospheric sounding using Digisonde DPS-4D. The paper is focused on F-region vertical drift data. The vertical component of the drift velocity vector can be estimated by several methods. Digisonde DPS-4D allows sounding in drift mode, with the drift velocity vector as direct output. The Digisonde located in Pruhonice provides direct drift measurement routinely once per 15 minutes. However, other techniques can be found in the literature; for example, indirect estimation based on the temporal evolution of measured ionospheric characteristics is often used to calculate the vertical drift component. The vertical velocity is thus estimated from the change of characteristics scaled from the classical quarter-hour ionograms. In the present paper, the direct drift measurement is compared with a technique based on measuring the virtual height at fixed frequency from the F-layer trace on the ionogram, and with a technique based on the variation of h'F and hmF. This comparison shows the possibility of using different methods for calculating the vertical drift velocity and their relationship to the direct measurement used by the Digisonde. This study is supported by the Joint TUBITAK 114E092 and AS CR 14/001 projects.
A multilayered polyurethane foam technique for skin graft immobilization.
Nakamura, Motoki; Ito, Erika; Kato, Hiroshi; Watanabe, Shoichi; Morita, Akimichi
2012-02-01
Several techniques are applicable for skin graft immobilization. Although the sponge dressing is a popular technique, pressure failure near the center of the graft is a weakness of the technique that can result in engraftment failure. To evaluate the efficacy of a new skin graft immobilization technique using multilayered polyurethane foam in vivo and in vitro. Twenty-six patients underwent a full-thickness skin graft. Multiple layers of a hydrocellular polyurethane foam dressing were used for skin graft immobilization. In addition, we created an in vitro skin graft model that allowed us to estimate immobilization pressure at the center and edges of skin grafts of various sizes. Overall mean graft survival was 88.9%. In the head and neck region (19 patients), mean graft survival was 93.6%. Based on the in vitro outcomes, this technique supplies effective pressure (<30 mmHg) to the center region of the skin graft. This multilayered polyurethane foam dressing is simple, safe, and effective for skin graft immobilization. © 2011 by the American Society for Dermatologic Surgery, Inc. Published by Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki; Bey, Kim S.; Hou, Gene J. W.
2004-01-01
A recent paper is generalized to a case where the spatial region is taken in R³. The region is assumed to be a thin body, such as a panel on the wing or fuselage of an aerospace vehicle. The traditional h- as well as hp-finite element methods are applied to the surface defined in the x-y variables, while, through the thickness, the technique of the p-element is employed. A time and spatial discretization scheme, based upon an assumption of a certain weak singularity of ‖u_t‖₂, is used to derive an optimal a priori error estimate for the current method.
IDENTIFICATION OF MEMBERS IN THE CENTRAL AND OUTER REGIONS OF GALAXY CLUSTERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Serra, Ana Laura; Diaferio, Antonaldo, E-mail: serra@ph.unito.it
2013-05-10
The caustic technique measures the mass of galaxy clusters in both their virial and infall regions and, as a byproduct, yields the list of cluster galaxy members. Here we use 100 galaxy clusters with mass M₂₀₀ ≥ 10¹⁴ h⁻¹ M_Sun extracted from a cosmological N-body simulation of a ΛCDM universe to test the ability of the caustic technique to identify the cluster galaxy members. We identify the true three-dimensional members as the gravitationally bound galaxies. The caustic technique uses the caustic location in the redshift diagram to separate the cluster members from the interlopers. We apply the technique to mock catalogs containing 1000 galaxies in a field of view 12 h⁻¹ Mpc on a side at the cluster location. On average, this sample size roughly corresponds to 180 real galaxy members within 3r₂₀₀, similar to recent redshift surveys of cluster regions. The caustic technique yields a completeness, the fraction of identified true members, f_c = 0.95 ± 0.03, within 3r₂₀₀. The contamination, the fraction of interlopers in the observed catalog of members, increases from f_i = 0.020 (+0.046/−0.015) at r₂₀₀ to f_i = 0.08 (+0.11/−0.05) at 3r₂₀₀. No other technique for the identification of the members of a galaxy cluster provides such large completeness and small contamination at these large radii. The caustic technique assumes spherical symmetry, and the asphericity of the cluster is responsible for most of the spread of the completeness and the contamination. By applying the technique to an approximately spherical system obtained by stacking the individual clusters, the spreads decrease by at least a factor of two.
We finally estimate the cluster mass within 3r₂₀₀ after removing the interlopers: for individual clusters, the mass estimated with the virial theorem is unbiased and within 30% of the actual mass; this spread decreases to less than 10% for the spherically symmetric stacked cluster.
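The completeness and contamination statistics quoted above are simple set operations on the true and identified member lists:

```python
def completeness_contamination(true_members, identified):
    """f_c: fraction of true members recovered by the technique;
    f_i: fraction of the identified catalogue that are interlopers."""
    true_members, identified = set(true_members), set(identified)
    recovered = true_members & identified
    f_c = len(recovered) / len(true_members)
    f_i = 1 - len(recovered) / len(identified)
    return f_c, f_i
```

With hypothetical galaxy IDs, recovering 9 of 10 true members while admitting one interloper gives f_c = 0.9 and f_i = 0.1.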
Statistical Field Estimation and Scale Estimation for Complex Coastal Regions and Archipelagos
2009-05-01
The recoverable fragments of this abstract cite Brown, R. G. and Hwang, P. Y. C. (1997), Introduction to Random Signals and Applied Kalman Filtering, on divergence problems in Kalman filtering: the covariance matrix can become negative due to numerical issues, and when the number of observations is large, divergence can also arise from truncation errors; some useful techniques exist to counter these divergence problems.
Jennings, M.E.; Thomas, W.O.; Riggs, H.C.
1994-01-01
For many years, the U.S. Geological Survey (USGS) has been involved in the development of regional regression equations for estimating flood magnitude and frequency at ungaged sites. These regression equations are used to transfer flood characteristics from gaged to ungaged sites through the use of watershed and climatic characteristics as explanatory or predictor variables. Generally these equations have been developed on a statewide or metropolitan area basis as part of cooperative study programs with specific State Departments of Transportation or specific cities. The USGS, in cooperation with the Federal Highway Administration and the Federal Emergency Management Agency, has compiled all the current (as of September 1993) statewide and metropolitan area regression equations into a micro-computer program titled the National Flood Frequency Program. This program includes regression equations for estimating flood-peak discharges and techniques for estimating a typical flood hydrograph for a given recurrence interval peak discharge for unregulated rural and urban watersheds. These techniques should be useful to engineers and hydrologists for planning and design applications. This report summarizes the statewide regression equations for rural watersheds in each State, summarizes the applicable metropolitan area or statewide regression equations for urban watersheds, describes the National Flood Frequency Program for making these computations, and provides much of the reference information on the extrapolation variables needed to run the program.
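Regional regression equations of this kind commonly take the power-law form Q_T = a·A^b, with drainage area A as an explanatory variable, and are fit by least squares in log space. The sketch below uses synthetic data and ordinary least squares, rather than the generalized least-squares weighting used in USGS practice.

```python
import math

def fit_power_law(areas, peaks):
    """Fit log10(Q) = log10(a) + b*log10(A) by ordinary least squares,
    the log-linear form typical of regional flood-frequency equations."""
    xs = [math.log10(a) for a in areas]
    ys = [math.log10(q) for q in peaks]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = 10 ** (ybar - b * xbar)
    return a, b

# Synthetic data following Q = 50 * A^0.7 exactly (illustrative only).
areas = [10, 50, 100, 500]
peaks = [50 * area ** 0.7 for area in areas]
a, b = fit_power_law(areas, peaks)
```

Additional basin and climatic characteristics enter the real equations as further log-space regressors in the same way.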
Commowick, Olivier; Akhondi-Asl, Alireza; Warfield, Simon K.
2012-01-01
We present a new algorithm, called local MAP STAPLE, to estimate from a set of multi-label segmentations both a reference standard segmentation and spatially varying performance parameters. It is based on a sliding window technique to estimate the segmentation and the segmentation performance parameters for each input segmentation. In order to allow for optimal fusion from the small amount of data in each local region, and to account for the possibility of labels not being observed in a local region of some (or all) input segmentations, we introduce prior probabilities for the local performance parameters through a new Maximum A Posteriori formulation of STAPLE. Further, we propose an expression to compute confidence intervals in the estimated local performance parameters. We carried out several experiments with local MAP STAPLE to characterize its performance and value for local segmentation evaluation. First, with simulated segmentations with known reference standard segmentation and spatially varying performance, we show that local MAP STAPLE performs better than both STAPLE and majority voting. Then we present evaluations with data sets from clinical applications. These experiments demonstrate that spatial adaptivity in segmentation performance is an important property to capture. We compared the local MAP STAPLE segmentations to STAPLE, and to previously published fusion techniques and demonstrate the superiority of local MAP STAPLE over other state-of-the-art algorithms. PMID:22562727
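For contrast with the performance-weighted fusion, the majority-voting baseline that the study compares against reduces, for binary labels, to a per-voxel vote across raters:

```python
def majority_vote(segmentations):
    """Baseline label fusion (the comparator in the study): per-voxel
    majority vote across binary rater segmentations, given as equal-length
    flattened label lists."""
    n = len(segmentations)
    return [1 if sum(voxel_labels) * 2 > n else 0
            for voxel_labels in zip(*segmentations)]
```

STAPLE-family methods replace the equal vote with rater-specific sensitivity/specificity weights, and local MAP STAPLE additionally lets those weights vary over sliding windows; this sketch only shows the unweighted baseline.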
NASA Astrophysics Data System (ADS)
Shanafield, M.; Cook, P. G.
2014-12-01
When estimating surface water-groundwater fluxes, the use of complementary techniques helps to fill in uncertainties in any individual method, and to potentially gain a better understanding of spatial and temporal variability in a system. It can also be a way of preventing the loss of data during infrequent and unpredictable flow events. For example, much of arid Australia relies on groundwater, which is recharged by streamflow through ephemeral streams during flood events. Three recent surface water/groundwater investigations from arid Australian systems provide good examples of how using multiple field and analysis techniques can help to more fully characterize surface water-groundwater fluxes, but can also result in conflicting values over varying spatial and temporal scales. In the Pilbara region of Western Australia, combining streambed radon measurements, vertical heat transport modeling, and a tracer test helped constrain very low streambed residence times, which are on the order of minutes. Spatial and temporal variability between the methods yielded hyporheic exchange estimates between 10⁻⁴ m² s⁻¹ and 4.2 × 10⁻² m² s⁻¹. In South Australia, three-dimensional heat transport modeling captured heterogeneity within 20 square meters of streambed, identifying areas of sandy soil (flux rates of up to 3 m d⁻¹) and clay (flux rates too slow to be accurately characterized). Streamflow front modeling showed similar flux rates, but averaged over 100 m long stream segments for a 1.6 km reach. Finally, in central Australia, several methods are used to decipher whether any of the flow down a highly ephemeral river contributes to regional groundwater recharge, showing that evaporation and evapotranspiration likely account for all of the infiltration into the perched aquifer. Lessons learned from these examples demonstrate the influences of the spatial and temporal variability between techniques on estimated fluxes.
Optimization of planar PIV-based pressure estimates in laminar and turbulent wakes
NASA Astrophysics Data System (ADS)
McClure, Jeffrey; Yarusevych, Serhiy
2017-05-01
The performance of four pressure estimation techniques using Eulerian material acceleration estimates from planar, two-component Particle Image Velocimetry (PIV) data were evaluated in a bluff body wake. To allow for the ground truth comparison of the pressure estimates, direct numerical simulations of flow over a circular cylinder were used to obtain synthetic velocity fields. Direct numerical simulations were performed for Re_D = 100, 300, and 1575, spanning laminar, transitional, and turbulent wake regimes, respectively. A parametric study encompassing a range of temporal and spatial resolutions was performed for each Re_D. The effect of random noise typical of experimental velocity measurements was also evaluated. The results identified optimal temporal and spatial resolutions that minimize the propagation of random and truncation errors to the pressure field estimates. A model derived from linear error propagation through the material acceleration central difference estimators was developed to predict these optima, and showed good agreement with the results from common pressure estimation techniques. The results of the model are also shown to provide acceptable first-order approximations for sampling parameters that reduce error propagation when Lagrangian estimations of material acceleration are employed. For pressure integration based on planar PIV, the effect of flow three-dimensionality was also quantified, and shown to be most pronounced at higher Reynolds numbers downstream of the vortex formation region, where dominant vortices undergo substantial three-dimensional deformations. The results of the present study provide a priori recommendations for the use of pressure estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.
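The core of such PIV-based pressure estimation, in a steady 1D caricature, is central-differencing the material acceleration and integrating the pressure gradient dp/dx = -ρ·u·du/dx. The sketch below (uniform grid, unit density, one-sided boundary fill all assumed) recovers the Bernoulli pressure drop for a linearly accelerating velocity profile; the paper's methods work on full planar velocity fields with unsteady terms.

```python
def pressure_from_velocity(u, dx, rho=1.0):
    """1D sketch of PIV-style pressure estimation: central-difference the
    steady material acceleration u*du/dx, then integrate dp/dx = -rho*a."""
    n = len(u)
    a = [0.0] * n
    for i in range(1, n - 1):
        a[i] = u[i] * (u[i + 1] - u[i - 1]) / (2 * dx)
    a[0], a[-1] = a[1], a[-2]  # crude one-sided fill at the boundaries
    p = [0.0] * n  # pressure relative to the inlet
    for i in range(1, n):
        # Trapezoidal integration of dp/dx = -rho * a
        p[i] = p[i - 1] - rho * 0.5 * (a[i] + a[i - 1]) * dx
    return p
```

For u increasing linearly from 1 to 2 over a unit length, the integrated pressure drop matches the Bernoulli value 0.5·ρ·(u₁² − u₀²) = 1.5, and the truncation-error behaviour of exactly this kind of central-difference/integration chain is what the study's error-propagation model quantifies.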
Echocardiographic strain and strain-rate imaging: a new tool to study regional myocardial function.
D'hooge, Jan; Bijnens, Bart; Thoen, Jan; Van de Werf, Frans; Sutherland, George R; Suetens, Paul
2002-09-01
Ultrasonic imaging is the noninvasive clinical imaging modality of choice for diagnosing heart disease. At present, two-dimensional ultrasonic grayscale images provide a relatively cheap, fast, bedside method to study the morphology of the heart. Several methods have been proposed to assess myocardial function. These have been based on either grayscale or motion (velocity) information measured in real-time. However, the quantitative assessment of regional myocardial function remains an important goal in clinical cardiology. To do this, ultrasonic strain and strain-rate imaging have been introduced. In the clinical setting, these techniques currently only allow one component of the true three-dimensional deformation to be measured. Clinical, multidimensional strain (rate) information can currently thus only be obtained by combining data acquired using different transducer positions. Nevertheless, given the appropriate postprocessing, the clinical value of these techniques has already been shown. Moreover, multidimensional strain and strain-rate estimation of the heart in vivo by means of a single ultrasound acquisition has been shown to be feasible. In this paper, the new techniques of ultrasonic strain rate and strain imaging of the heart are reviewed in terms of definitions, data acquisition, strain-rate estimation, postprocessing, and parameter extraction. Their clinical validation and relevance will be discussed using clinical examples on relevant cardiac pathology. Based on these examples, suggestions are made for future developments of these techniques.
Wetzel, Kim L.; Bettandorff, J.M.
1986-01-01
Techniques are presented for estimating various streamflow characteristics, such as peak flows, mean monthly and annual flows, flow durations, and flow volumes, at ungaged sites on unregulated streams in the Eastern Coal region. Streamflow data and basin characteristics for 629 gaging stations were used to develop multiple-linear-regression equations. Separate equations were developed for the Eastern and Interior Coal Provinces. Drainage area is an independent variable common to all equations. Other variables needed, depending on the streamflow characteristic, are mean annual precipitation, mean basin elevation, main channel length, basin storage, main channel slope, and forest cover. A ratio of the observed 50- to 90-percent flow durations was used in the development of relations to estimate low-flow frequencies in the Eastern Coal Province. Relations to estimate low flows in the Interior Coal Province are not presented because the standard errors were greater than 0.7500 log units and were considered to be of poor reliability.
High-resolution EEG techniques for brain-computer interface applications.
Cincotti, Febo; Mattia, Donatella; Aloise, Fabio; Bufalari, Simona; Astolfi, Laura; De Vico Fallani, Fabrizio; Tocci, Andrea; Bianchi, Luigi; Marciani, Maria Grazia; Gao, Shangkai; Millan, Jose; Babiloni, Fabio
2008-01-15
High-resolution electroencephalographic (HREEG) techniques allow estimation of cortical activity based on non-invasive scalp potential measurements, using appropriate models of volume conduction and of neuroelectrical sources. In this study we propose an application of this body of technologies, originally developed to obtain functional images of the brain's electrical activity, in the context of brain-computer interfaces (BCI). Our working hypothesis predicted that, since HREEG pre-processing removes spatial correlation introduced by current conduction in the head structures, by providing the BCI with waveforms that are mostly due to the unmixed activity of a small cortical region, a more reliable classification would be obtained, at least when the activity to detect has a limited generator, which is the case in motor-related tasks. HREEG techniques employed in this study rely on (i) individual head models derived from anatomical magnetic resonance images, (ii) a distributed source model, composed of a layer of current dipoles, geometrically constrained to the cortical mantle, (iii) a depth-weighted minimum L2-norm constraint and Tikhonov regularization for linear inverse problem solution, and (iv) estimation of electrical activity in cortical regions of interest corresponding to relevant Brodmann areas. Six subjects were trained to learn self-modulation of sensorimotor EEG rhythms, related to the imagination of limb movements. Off-line EEG data was used to estimate waveforms of cortical activity (cortical current density, CCD) on selected regions of interest. CCD waveforms were fed into the BCI computational pipeline as an alternative to raw EEG signals; spectral features are evaluated through statistical tests (r² analysis), to quantify their reliability for BCI control. These results are compared, within subjects, to analogous results obtained without HREEG techniques.
The processing procedure was designed in such a way that computations could be split into a setup phase (which includes most of the computational burden) and the actual EEG processing phase, which was limited to a single matrix multiplication. This separation made the procedure suitable for on-line utilization, and a pilot experiment was performed. Results show that lateralization of electrical activity, which is expected to be contralateral to the imagined movement, is more evident on the estimated CCDs than in the scalp potentials. CCDs produce a pattern of relevant spectral features that is more spatially focused and has a higher statistical significance (EEG: 0.20 ± 0.114 S.D.; CCD: 0.55 ± 0.16 S.D.; p = 10⁻⁵). A pilot experiment showed that a trained subject could utilize voluntary modulation of estimated CCDs for accurate (eight targets) on-line control of a cursor. This study showed that it is practically feasible to utilize HREEG techniques for on-line operation of a BCI system; off-line analysis suggests that accuracy of BCI control is enhanced by the proposed method.
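The regularized minimum-norm step has the closed form x = Aᵀ(AAᵀ + λI)⁻¹b, which is also why the on-line phase reduces to a single matrix multiplication once the inverse operator is precomputed. A toy 2-sensor, 3-source version (hypothetical lead field, no depth weighting, 2×2 inverse done by hand) is:

```python
def minimum_norm_inverse(A, b, lam):
    """Tikhonov-regularized minimum L2-norm sketch:
    x = A^T (A A^T + lam*I)^-1 b for a 2xN lead-field matrix A."""
    N = len(A[0])
    # Gram matrix G = A A^T + lam*I (2x2)
    G = [[sum(A[i][k] * A[j][k] for k in range(N)) + lam * (i == j)
          for j in range(2)] for i in range(2)]
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    Ginv = [[G[1][1] / det, -G[0][1] / det],
            [-G[1][0] / det, G[0][0] / det]]
    w = [sum(Ginv[i][j] * b[j] for j in range(2)) for i in range(2)]
    return [sum(A[i][k] * w[i] for i in range(2)) for k in range(N)]
```

With an identity-like lead field and a tiny λ, each sensor reading maps straight back to its source, which is a convenient sanity check on the algebra.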
LAND USE CHANGE DUE TO URBANIZATION FOR THE NEUSE RIVER BASIN
The Urban Growth Model (UGM) was applied to the analysis of land use change in the Neuse River Basin as part of a larger project for estimating the regional and broader impact of urbanization. UGM is based on cellular automata (CA) simulation techniques developed at the University...
Development of known-fate survival monitoring techniques for juvenile wild pigs (Sus scrofa)
David A. Keiter; John C. Kilgo; Mark A. Vukovich; Fred L. Cunningham; James C. Beasley
2017-01-01
Context. Wild pigs are an invasive species linked to numerous negative impacts on natural and anthropogenic ecosystems in many regions of the world. Robust estimates of juvenile wild pig survival are needed to improve population dynamics models to facilitate management of this economically and ecologically...
TRANSFERRING TECHNOLOGIES, TOOLS AND TECHNIQUES: THE NATIONAL COASTAL ASSESSMENT
The purpose of the National Coastal Assessment (NCA) is to estimate the status and trends of the condition of the nation's coastal resources on a state, regional and national basis. Based on NCA monitoring from 1999-2001, 100% of the nation's estuarine waters (at over 2500 locati...
NASA Astrophysics Data System (ADS)
Srivastava, R. K.; Panda, R. K.; Halder, Debjani
2017-08-01
The primary objective of this study was to evaluate the performance of the time-domain reflectometry (TDR) technique for daily evapotranspiration estimation of peanut and maize crops in a sub-humid region. Four independent methods were used to estimate crop evapotranspiration (ETc), namely, the soil water balance budgeting approach, the energy balance approach (Bowen ratio), empirical methods, and the pan evaporation method. The soil water balance budgeting approach utilized soil moisture measurements by the gravimetric and TDR methods. The empirical evapotranspiration methods, such as the combination approach (FAO-56 Penman-Monteith and Penman), temperature-based approach (Hargreaves-Samani), and radiation-based approach (Priestley-Taylor, Turc, Abetw), were used to estimate the reference evapotranspiration (ET0). The daily ETc determined by the FAO-56 Penman-Monteith, Priestley-Taylor, Turc, pan evaporation, and Bowen ratio methods were found to be on a par with the ET values derived from the soil water balance budget, while the Abetw, Penman, and Hargreaves-Samani methods were not found to be ideal for the determination of ETc. The study illustrates the in situ applicability of the TDR method in order to make it possible for a user to choose the best way for the optimum water consumption for a given crop in a sub-humid region. The study suggests that the FAO-56 Penman-Monteith, Turc, and Priestley-Taylor methods can be used for the determination of crop ETc using TDR in comparison to the soil water balance budget.
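As one example of the temperature-based approach evaluated above, the Hargreaves-Samani equation needs only air temperatures and extraterrestrial radiation; the 0.0023 coefficient and units below follow the standard formulation, with radiation expressed as its evaporation equivalent.

```python
import math

def hargreaves_samani(tmax, tmin, ra):
    """Hargreaves-Samani reference evapotranspiration ET0 (mm/day).
    tmax, tmin: daily air temperatures (degC);
    ra: extraterrestrial radiation expressed in mm/day of evaporation."""
    tmean = (tmax + tmin) / 2.0
    return 0.0023 * ra * (tmean + 17.8) * math.sqrt(tmax - tmin)
```

For a warm day with tmax = 32, tmin = 22, and ra = 15 mm/day, ET0 comes out near 4.9 mm/day; ETc then follows by multiplying ET0 by a crop coefficient.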
Kobayashi, Masanao; Asada, Yasuki; Matsubara, Kosuke; Suzuki, Shouichi; Matsunaga, Yuta; Haba, Tomonobu; Kawaguchi, Ai; Daioku, Tomihiko; Toyama, Hiroshi; Kato, Ryoichi
2017-05-01
Adequate dose management during computed tomography is important. In the present study, the dosimetric application software ImPACT was extended with a functional calculator of the size-specific dose estimate and with support for scan settings using the auto exposure control (AEC) technique. This study aimed to assess the practicality and accuracy of the modified ImPACT software for dose estimation. We compared the conversion factors identified by the software with the values reported by the American Association of Physicists in Medicine Task Group 204, and we noted similar results. Moreover, doses were calculated with the AEC technique and a fixed tube current of 200 mA for the chest-pelvis region. The modified ImPACT software could estimate each organ dose based on the modulated tube current. The ability to perform beneficial modifications indicates the flexibility of the ImPACT software, which can be further modified for estimation of other doses. © The Author 2016. Published by Oxford University Press.
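The size-specific dose estimate itself is CTDIvol multiplied by a size-dependent conversion factor. A sketch using the exponential fit to the AAPM TG-204 conversion-factor table for the 32 cm body phantom follows; the coefficients are the fit published in the TG-204 report and should be verified against the report before any real use.

```python
import math

def ssde(ctdi_vol_32cm, effective_diameter_cm):
    """Size-specific dose estimate (SSDE) sketch: CTDIvol (32 cm phantom)
    times the TG-204 exponential-fit conversion factor for patient size.
    Coefficients from the TG-204 fit; verify before clinical use."""
    f = 3.704369 * math.exp(-0.03671937 * effective_diameter_cm)
    return f * ctdi_vol_32cm
```

For a 30 cm effective diameter the conversion factor is roughly 1.2, and it grows as the patient gets smaller, reflecting the higher dose absorbed by smaller cross-sections at the same CTDIvol.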
NASA Astrophysics Data System (ADS)
Kim, Seongryong; Tkalčić, Hrvoje; Mustać, Marija; Rhie, Junkee; Ford, Sean
2016-04-01
A framework is presented within which we provide rigorous estimations for seismic sources and structures in Northeast Asia. We use Bayesian inversion methods, which enable statistical estimation of models and their uncertainties based on the information in the data. Ambiguities in error statistics and model parameterizations are addressed by hierarchical and trans-dimensional (trans-D) techniques, which can be inherently implemented in the Bayesian inversions. Hence, reliable estimation of model parameters and their uncertainties is possible, avoiding arbitrary regularizations and parameterizations. Hierarchical and trans-D inversions are performed to develop a three-dimensional velocity model using ambient noise data. To further improve the model, we perform joint inversions with receiver function data using a newly developed Bayesian method. For the source estimation, a novel moment tensor inversion method is presented and applied to regional waveform data of the North Korean nuclear explosion tests. Through the combination of new Bayesian techniques and the structural model, coupled with meaningful uncertainties related to each of the processes, more quantitative monitoring and discrimination of seismic events is possible.
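The hierarchical idea, treating the noise level as an unknown sampled along with the model rather than fixed a priori, can be sketched with a toy Metropolis sampler for a mean and a noise sigma. The step sizes, implicitly flat priors, and data below are illustrative assumptions, not the study's seismic parameterization.

```python
import math
import random

def hierarchical_metropolis(data, steps=20000, seed=3):
    """Hierarchical Bayesian sketch: jointly sample a mean and the noise
    sigma with Metropolis, so the error statistics are inferred from the
    data (the role hierarchical techniques play in the inversions)."""
    random.seed(seed)

    def log_post(mu, sigma):
        if sigma <= 0:
            return float("-inf")
        ss = sum((d - mu) ** 2 for d in data)
        return -len(data) * math.log(sigma) - ss / (2 * sigma ** 2)

    mu, sigma = 0.0, 1.0
    lp = log_post(mu, sigma)
    samples = []
    for _ in range(steps):
        mu2 = mu + random.gauss(0, 0.2)
        sigma2 = sigma + random.gauss(0, 0.2)
        lp2 = log_post(mu2, sigma2)
        if math.log(random.random()) < lp2 - lp:  # Metropolis acceptance
            mu, sigma, lp = mu2, sigma2, lp2
        samples.append((mu, sigma))
    burn = samples[steps // 2:]  # discard burn-in
    return (sum(m for m, _ in burn) / len(burn),
            sum(s for _, s in burn) / len(burn))

# Hypothetical observations scattered around 5 with noise ~0.25.
data = [4.5, 5.2, 4.9, 5.1, 5.3, 4.8, 5.0, 4.7, 5.2, 4.9]
mu_hat, sigma_hat = hierarchical_metropolis(data)
```

Trans-dimensional samplers extend this by also proposing changes to the number of model parameters, with an acceptance rule that accounts for the dimension change.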
[Measurement and estimation methods and research progress of snow evaporation in forests].
Li, Hui-Dong; Guan, De-Xin; Jin, Chang-Jie; Wang, An-Zhi; Yuan, Feng-Hui; Wu, Jia-Bing
2013-12-01
Accurate measurement and estimation of snow evaporation (sublimation) in forests is one of the important issues in understanding snow surface energy and water balance, and it is also an essential part of regional hydrological and climate models. This paper summarized the measurement and estimation methods of snow evaporation in forests and made a comprehensive evaluation of their applicability, covering mass-balance methods (snow water equivalent method, comparative measurements of snowfall and through-snowfall, snow evaporation pan, lysimeter, weighing of cut trees, weighing of interception on the crown, and the gamma-ray attenuation technique) and micrometeorological methods (Bowen-ratio energy-balance method, Penman combination equation, aerodynamic method, surface temperature technique, and eddy covariance method). This paper also reviewed the progress of snow evaporation research in different forests and its influencing factors. Finally, considering the deficiencies of past research, an outlook for snow evaporation research in forests was presented, in the hope of providing a reference for related research in the future.
Bisese, James A.
1995-01-01
Methods are presented for estimating the peak discharges of rural, unregulated streams in Virginia. A Pearson Type III distribution is fitted to the logarithms of the unregulated annual peak-discharge records from 363 stream-gaging stations in Virginia to estimate the peak discharge at these stations for recurrence intervals of 2 to 500 years. Peak-discharge characteristics for 284 unregulated stations are divided into eight regions based on physiographic province, and regressed on basin characteristics, including drainage area, main channel length, main channel slope, mean basin elevation, percentage of forest cover, mean annual precipitation, and maximum rainfall intensity. Regression equations for each region are computed by use of the generalized least-squares method, which accounts for spatial and temporal correlation between nearby gaging stations. This regression technique weights the significance of each station to the regional equation based on the length of record collected at each station, the correlation between annual peak discharges among the stations, and the standard deviation of the annual peak discharge for each station. Drainage area proved to be the only significant explanatory variable in four regions, while other regions have as many as three significant variables. Standard errors of the regression equations range from 30 to 80 percent. Alternate equations using drainage area only are provided for the five regions with more than one significant explanatory variable. Methods and sample computations are provided to estimate peak discharges at gaged and ungaged sites in Virginia for recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, and to adjust the regression estimates for sites on gaged streams where nearby gaging-station records are available.
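A log-Pearson Type III fit of the kind described, fitting the distribution to the base-10 logarithms of the annual peaks and reading off the T-year quantile, can be sketched as follows. The peak series is synthetic and `lp3_quantile` is an illustrative helper only; the actual USGS procedure also incorporates weighted and generalized skew.

```python
import numpy as np
from scipy import stats

def lp3_quantile(peaks, return_period):
    """Estimate the T-year peak discharge by fitting a Pearson Type III
    distribution to the base-10 logarithms of annual peaks (log-Pearson III)."""
    logq = np.log10(np.asarray(peaks, dtype=float))
    skew = stats.skew(logq, bias=False)          # station skew of the logs
    mean, sd = logq.mean(), logq.std(ddof=1)
    p = 1.0 - 1.0 / return_period                # non-exceedance probability
    k = stats.pearson3.ppf(p, skew)              # standardized frequency factor
    return 10 ** (mean + k * sd)

# Synthetic 40-year annual peak series (cfs) for illustration
rng = np.random.default_rng(0)
peaks = 10 ** rng.normal(3.0, 0.25, size=40)
q100 = lp3_quantile(peaks, 100)                  # 100-year (1%-chance) peak
```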
Methods for estimating magnitude and frequency of floods in Montana based on data through 1983
Omang, R.J.; Parrett, Charles; Hull, J.A.
1986-01-01
Equations are presented for estimating flood magnitudes for ungaged sites in Montana based on data through 1983. The State was divided into eight regions based on hydrologic conditions, and separate multiple regression equations were developed for each region. These equations relate annual flood magnitudes and frequencies to basin characteristics and are applicable only to natural flow streams. In three of the regions, equations also were developed relating flood magnitudes and frequencies to basin characteristics and channel geometry measurements. The standard errors of estimate for an exceedance probability of 1% ranged from 39% to 87%. Techniques are described for estimating annual flood magnitude and flood frequency information at ungaged sites based on data from gaged sites on the same stream. Included are curves relating flood frequency information to drainage area for eight major streams in the State. Maximum known flood magnitudes in Montana are compared with estimated 1%-chance flood magnitudes and with maximum known floods in the United States. Values of flood magnitudes for selected exceedance probabilities and values of significant basin characteristics and channel geometry measurements for all gaging stations used in the analysis are tabulated. Included are 375 stations in Montana and 28 nearby stations in Canada and adjoining States. (Author's abstract)
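Regional equations of this kind typically take a power-law form such as Q_T = aA^b and are fitted by regression in log space. A minimal single-variable sketch follows; the station data are hypothetical, and the actual study used multiple basin characteristics and many more stations.

```python
import numpy as np

# Hypothetical station data: drainage area (mi^2) and 1%-chance flood (cfs)
area = np.array([12.0, 55.0, 130.0, 410.0, 980.0, 2400.0])
q100 = np.array([900.0, 2800.0, 5200.0, 12000.0, 23000.0, 45000.0])

# Fit log10(Q) = log10(a) + b*log10(A) by ordinary least squares
X = np.column_stack([np.ones_like(area), np.log10(area)])
coef, *_ = np.linalg.lstsq(X, np.log10(q100), rcond=None)
a, b = 10 ** coef[0], coef[1]

def predict_q100(drainage_area_mi2):
    """Regional-equation estimate of the 1%-chance flood at an ungaged site."""
    return a * drainage_area_mi2 ** b
```

In practice the study fitted such equations separately for each hydrologic region and recurrence interval.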
Improved Sizing of Impact Damage in Composites Based on Thermographic Response
NASA Technical Reports Server (NTRS)
Winfree, William P.; Howell, Patricia A.; Leckey, Cara A.; Rogge, Matthew D.
2013-01-01
Impact damage in thin carbon fiber reinforced polymer composites often results in a relatively small region of damage at the front surface, with increasing damage near the back surface. Conventional methods for reducing the pulsed thermographic responses of the composite tend to underestimate the size of the back surface damage, since the smaller near surface damage gives the largest thermographic indication. A method is presented for reducing the thermographic data to produce an estimated size for the impact damage that is much closer to the size of the damage estimated from other NDE techniques such as microfocus x-ray computed tomography and pulse echo ultrasonics. Examples of the application of the technique to experimental data acquired on specimens with impact damage are presented. The method is also applied to the results of thermographic simulations to investigate the limitations of the technique.
NASA Astrophysics Data System (ADS)
Chang, Fi-John; Chen, Pin-An; Liu, Chen-Wuing; Liao, Vivian Hsiu-Chuan; Liao, Chung-Min
2013-08-01
Arsenic (As) is an odorless semi-metal that occurs naturally in rock and soil, and As contamination in groundwater resources has become a serious threat to human health. Thus, assessing the spatial and temporal variability of As concentration is highly desirable, particularly in heavily As-contaminated areas. However, various difficulties may be encountered in the regional estimation of As concentration, such as cost-intensive field monitoring, scarcity of field data, identification of important factors affecting As, and over-fitting or poor estimation accuracy. This study develops a novel systematical dynamic-neural modeling (SDM) for effectively estimating regional As-contaminated water quality by using easily-measured water quality variables. To tackle the difficulties commonly encountered in regional estimation, the SDM comprises a neural network, the Nonlinear Autoregressive with eXogenous input (NARX) network, and four statistical techniques: the Gamma test, cross-validation, the Bayesian regularization method, and indicator kriging (IK). For practical application, this study investigated a heavily As-contaminated area in Taiwan. The backpropagation neural network (BPNN) is adopted for comparison purposes. The results demonstrate that the NARX network (root mean square error (RMSE): 95.11 μg l-1 for training; 106.13 μg l-1 for validation) outperforms the BPNN (RMSE: 121.54 μg l-1 for training; 143.37 μg l-1 for validation). The constructed SDM can provide reliable estimation (R2 > 0.89) of As concentration at ungauged sites based merely on three easily-measured water quality variables (Alk, Ca2+ and pH). In addition, risk maps under the threshold of the WHO drinking water standard (10 μg l-1) are derived by the IK to visually display the spatial and temporal variation of the As concentration in the whole study area at different time spans.
The proposed SDM can be applied satisfactorily to regional estimation in study areas of interest and to the estimation of missing, hazardous, or costly data to facilitate water resources management.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenberg, Jim; Penuelas, J.; Guenther, Alex B.
To survey landscape-scale fluxes of biogenic gases, a 100-meter Teflon tube was attached to a tethered balloon as a sampling inlet for a fast-response Proton Transfer Reaction Mass Spectrometer (PTRMS). Along with meteorological instruments deployed on the tethered balloon and at 3 m, and outputs from a regional weather model, these observations were used to estimate landscape-scale biogenic volatile organic compound (BVOC) fluxes with two micrometeorological techniques: mixed-layer variance and surface-layer gradients. This highly mobile sampling system was deployed at four field sites near Barcelona to estimate landscape-scale BVOC emission factors in a relatively short period (3 weeks). The two micrometeorological techniques agreed within the uncertainty of the flux measurements at all four sites even though the locations had considerable heterogeneity in species distribution and complex terrain. The observed fluxes were significantly different from emissions predicted with an emission model using site-specific emission factors and land-cover characteristics. Considering the wide range in reported BVOC emission factors for individual vegetation species (more than an order of magnitude), this flux estimation technique is useful for constraining BVOC emission factors used as model inputs.
Basic research for the geodynamics program
NASA Technical Reports Server (NTRS)
1984-01-01
Some objectives of this geodynamics program are: (1) optimal utilization of laser and VLBI observations as reference frames for geodynamics, (2) utilization of range difference observations in geodynamics, and (3) estimation techniques in crustal deformation analysis. The determination of Earth rotation parameters from different space geodetic systems is studied. Also reported on is the utilization of simultaneous laser range differences for the determination of baseline variation. An algorithm for the analysis of regional or local crustal deformation measurements is proposed along with other techniques and testing procedures. Some results of the reference frame comparisons, in terms of the pole coordinates from different techniques, are presented.
Nonlocal Intracranial Cavity Extraction
Manjón, José V.; Eskildsen, Simon F.; Coupé, Pierrick; Romero, José E.; Collins, D. Louis; Robles, Montserrat
2014-01-01
Automatic and accurate methods to estimate normalized regional brain volumes from MRI data are valuable tools which may help to obtain an objective diagnosis and follow-up of many neurological diseases. To estimate such regional brain volumes, the intracranial cavity volume (ICV) is often used for normalization. However, the high variability of brain shape and size due to normal intersubject variability, normal changes occurring over the lifespan, and abnormal changes due to disease makes the ICV estimation problem challenging. In this paper, we present a new approach to perform ICV extraction based on the use of a library of prelabeled brain images to capture the large variability of brain shapes. To this end, an improved nonlocal label fusion scheme based on the BEaST technique is proposed to increase the accuracy of the ICV estimation. The proposed method is compared with recent state-of-the-art methods and the results demonstrate an improved performance both in terms of accuracy and reproducibility while maintaining a reduced computational burden. PMID:25328511
NASA Astrophysics Data System (ADS)
Kennedy, J. J.; Rayner, N. A.; Smith, R. O.; Parker, D. E.; Saunby, M.
2011-07-01
Changes in instrumentation and data availability have caused time-varying biases in estimates of global and regional average sea surface temperature. The sizes of the biases arising from these changes are estimated and their uncertainties evaluated. The estimated biases and their associated uncertainties are largest during the period immediately following the Second World War, reflecting the rapid and incompletely documented changes in shipping and data availability at the time. Adjustments have been applied to reduce these effects in gridded data sets of sea surface temperature and the results are presented as a set of interchangeable realizations. Uncertainties of estimated trends in global and regional average sea surface temperature due to bias adjustments since the Second World War are found to be larger than uncertainties arising from the choice of analysis technique, indicating that this is an important source of uncertainty in analyses of historical sea surface temperatures. Despite this, trends over the twentieth century remain qualitatively consistent.
Ground-water pumpage in the Willamette lowland regional aquifer system, Oregon and Washington, 1990
Collins, Charles A.; Broad, Tyson M.
1996-01-01
Ground-water pumpage for 1990 was estimated for an area of about 5,700 square miles in northwestern Oregon and southwestern Washington as part of the Puget-Willamette Lowland Regional Aquifer System Analysis study. The estimated total ground-water pumpage in 1990 was about 340,000 acre-feet. Ground water in the study area is pumped mainly from Quaternary sediment; lesser amounts are withdrawn from Tertiary volcanic materials. Large parts of the area are used for agriculture, and about two and one-half times as much ground water was pumped for irrigation as for either public-supply or industrial needs. Estimates of ground-water pumpage for irrigation in the central part of the Willamette Valley were generated by using image-processing techniques and Landsat Thematic Mapper data. Field data and published reports were used to estimate pumpage for irrigation in other parts of the study area. Information on public-supply and industrial pumpage was collected from Federal, State, and private organizations and individuals.
A Study on Regional Rainfall Frequency Analysis for Flood Simulation Scenarios
NASA Astrophysics Data System (ADS)
Jung, Younghun; Ahn, Hyunjun; Joo, Kyungwon; Heo, Jun-Haeng
2014-05-01
Recently, climate change has been observed in Korea as well as worldwide. Rainstorms have gradually intensified, and the resulting damage has grown, so managing flood control facilities is increasingly important as severe rainstorms become more frequent and intense. For managing flood control facilities in at-risk regions, data sets such as elevation, gradient, channel, land use, and soil data should be compiled. Using this information, disaster situations can be simulated to secure evacuation routes for various rainfall scenarios. The aim of this study is to investigate and determine extreme rainfall quantile estimates in Uijeongbu City using the index flood method with L-moments parameter estimation. Regional frequency analysis trades space for time by using annual maximum rainfall data from nearby or similar sites to derive estimates for any given site in a homogeneous region. Regional frequency analysis based on pooled data is recommended for estimation of rainfall quantiles at sites with record lengths less than 5T, where T is the return period of interest. Many variables relevant to precipitation can be used for grouping a region in regional frequency analysis. For regionalization of the Han River basin, the k-means method is applied for grouping regions by variables of meteorology and geomorphology. The results from the k-means method are compared for each region using various probability distributions. In the final step of the regionalization analysis, a goodness-of-fit measure is used to evaluate the accuracy of a set of candidate distributions, and rainfall quantiles are obtained by the index flood method based on the appropriate distribution. Rainfall quantiles for various scenarios are then used as input data for disaster simulations.
Keywords: Regional Frequency Analysis; Scenarios of Rainfall Quantile Acknowledgements This research was supported by a grant 'Establishing Active Disaster Management System of Flood Control Structures by using 3D BIM Technique' [NEMA-12-NH-57] from the Natural Hazard Mitigation Research Group, National Emergency Management Agency of Korea.
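The index flood method scales a dimensionless regional growth factor by a site-specific index flood, commonly estimated by the first L-moment (the mean). A minimal sketch follows, with a synthetic site record and an assumed regional growth factor; a full analysis would also fit the regional distribution to pooled L-moment ratios.

```python
import numpy as np

def sample_l_moments(x):
    """First two sample L-moments via probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    b0 = x.mean()
    b1 = np.sum((np.arange(1, n + 1) - 1) / (n - 1) * x) / n
    l1, l2 = b0, 2.0 * b1 - b0
    return l1, l2

def index_flood_quantile(site_annual_max, regional_growth_factor):
    """Quantile at a site = index flood (first L-moment, i.e. the site mean)
    times the pooled regional growth factor for the return period of interest."""
    l1, _ = sample_l_moments(site_annual_max)
    return l1 * regional_growth_factor

# Synthetic 30-year annual maximum rainfall record (mm) and an assumed
# regional growth factor for some target return period
rng = np.random.default_rng(1)
site = rng.gumbel(50.0, 12.0, size=30)
q_t = index_flood_quantile(site, regional_growth_factor=2.1)
```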
Proton magnetic resonance spectroscopy for assessment of human body composition.
Kamba, M; Kimura, K; Koda, M; Ogawa, T
2001-02-01
The usefulness of magnetic resonance spectroscopy (MRS)-based techniques for assessment of human body composition has not been established. We compared a proton MRS-based technique with the total body water (TBW) method to determine the usefulness of the former technique for assessment of human body composition. Proton magnetic resonance spectra of the chest to abdomen, abdomen to pelvis, and pelvis to thigh regions were obtained from 16 volunteers by using single, free induction decay measurement with a clinical magnetic resonance system operating at 1.5 T. The MRS-derived metabolite ratio was determined as the ratio of fat methyl and methylene proton resonance to water proton resonance. The peak areas for the chest to abdomen and the pelvis to thigh regions were normalized to an external reference (approximately 2200 g benzene) and a weighted average of the MRS-derived metabolite ratios for the 2 positions was calculated. TBW for each subject was determined by the deuterium oxide dilution technique. The MRS-derived metabolite ratios were significantly correlated with the ratio of body fat to lean body mass estimated by TBW. The MRS-derived metabolite ratio for the abdomen to pelvis region correlated best with the ratio of body fat to lean body mass on simple regression analyses (r = 0.918). The MRS-derived metabolite ratio for the abdomen to pelvis region and that for the pelvis to thigh region were selected for a multivariate regression model (R = 0.947, adjusted R² = 0.881). This MRS-based technique is sufficiently accurate for assessment of human body composition.
Regional Ocean Data Assimilation
NASA Astrophysics Data System (ADS)
Edwards, Christopher A.; Moore, Andrew M.; Hoteit, Ibrahim; Cornuelle, Bruce D.
2015-01-01
This article reviews the past 15 years of developments in regional ocean data assimilation. A variety of scientific, management, and safety-related objectives motivate marine scientists to characterize many ocean environments, including coastal regions. As in weather prediction, the accurate representation of physical, chemical, and/or biological properties in the ocean is challenging. Models and observations alone provide imperfect representations of the ocean state, but together they can offer improved estimates. Variational and sequential methods are among the most widely used in regional ocean systems, and there have been exciting recent advances in ensemble and four-dimensional variational approaches. These techniques are increasingly being tested and adapted for biogeochemical applications.
NASA Technical Reports Server (NTRS)
Barlow, N. G.
1993-01-01
This study determines crater depth through use of photoclinometric profiles. Random checks of the photoclinometric results are performed using shadow estimation techniques. The images are Viking Orbiter digital format frames; in cases where the digital image is unusable for photoclinometric analysis, shadow estimation is used to determine crater depths. The two techniques provide depth results within 2 percent of each other. Crater diameters are obtained from the photoclinometric profiles and checked against the diameters measured from the hard-copy images using a digitizer. All images used in this analysis are of approximately 40 m/pixel resolution. The sites that have been analyzed to date include areas within Arabia, Maja Valles, Memnonia, Acidalia, and Elysium. Only results for simple craters (craters less than 5 km in diameter) are discussed here because of the low numbers of complex craters presently measured in the analysis. General results indicate that impact craters are deeper than average. A single d/D relationship for fresh impact craters on Mars does not exist due to changes in target properties across the planet's surface. Within regions where target properties are approximately constant, however, d/D ratios for fresh craters can be determined. In these regions, the d/D ratios of nonpristine craters can be compared with the fresh crater d/D relationship to obtain information on relative degrees of crater degradation. This technique reveals that regional episodes of enhanced degradation have occurred. However, the lack of statistically reliable size-frequency distribution data prevents comparison of the relative ages of these events between different regions, and thus determination of a large-scale episode (or perhaps several episodes) cannot be made at this time.
Yobbi, D.K.
2000-01-01
A nonlinear least-squares regression technique for estimation of ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest are estimated by nonlinear regression. Optimal estimates of parameter values range from about 0.01 to about 140 times the reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters; and (5) estimate the most sensitive parameters first and then, using the optimized values for these parameters, estimate the entire parameter set.
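The calibration loop described, minimizing measured-minus-simulated water levels by nonlinear least squares, can be sketched with a toy forward model standing in for the ground-water flow model. The exponential head model, its parameters, and the synthetic observations below are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate_heads(params, distance):
    """Toy forward model standing in for the ground-water flow model:
    head declines with distance, controlled by two parameters
    (a transmissivity-like T and a recharge-like R)."""
    T, R = params
    return R / T * np.exp(-distance / T)

# Synthetic "measured" water levels from known true parameters plus noise
distance = np.linspace(1.0, 10.0, 15)
true = (4.0, 20.0)
rng = np.random.default_rng(2)
measured = simulate_heads(true, distance) + rng.normal(0, 0.05, distance.size)

# Minimize measured-minus-simulated residuals, as in the calibration study
fit = least_squares(lambda p: measured - simulate_heads(p, distance),
                    x0=(1.0, 5.0), bounds=([0.1, 0.1], [100.0, 100.0]))
T_opt, R_opt = fit.x
```

The Jacobian that `least_squares` builds internally is also the source of the parameter sensitivities and correlations the study reports.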
Meng, Bowen; Lee, Ho; Xing, Lei; Fahimian, Benjamin P.
2013-01-01
Purpose: X-ray scatter results in a significant degradation of image quality in computed tomography (CT), representing a major limitation in cone-beam CT (CBCT) and large field-of-view diagnostic scanners. In this work, a novel scatter estimation and correction technique is proposed that utilizes peripheral detection of scatter during the patient scan to simultaneously acquire image and patient-specific scatter information in a single scan, in conjunction with a proposed compressed sensing scatter recovery technique to reconstruct and correct for the patient-specific scatter in the projection space. Methods: The method consists of the detection of patient scatter at the edges of the field of view (FOV) followed by measurement-based compressed sensing recovery of the scatter throughout the projection space. In the prototype implementation, the kV x-ray source of the Varian TrueBeam OBI system was blocked at the edges of the projection FOV, and the image detector in the corresponding blocked region was used for scatter detection. The design enables acquisition of projection data on the unblocked central region and of scatter data at the blocked boundary regions. For the initial scatter estimation on the central FOV, a prior consisting of a hybrid scatter model that combines the scatter interpolation method and scatter convolution model is estimated using the acquired scatter distribution on the boundary region. With the hybrid scatter estimation model, compressed sensing optimization is performed to generate the scatter map by penalizing the L1 norm of the discrete cosine transform of the scatter signal. The estimated scatter is subtracted from the projection data by soft-tuning, and the scatter-corrected CBCT volume is obtained by the conventional Feldkamp-Davis-Kress algorithm. Experimental studies using image quality and anthropomorphic phantoms on a Varian TrueBeam system were carried out to evaluate the performance of the proposed scheme.
Results: The scatter shading artifacts were markedly suppressed in the reconstructed images using the proposed method. On the Catphan©504 phantom, the proposed method reduced the error of CT number to 13 Hounsfield units, 10% of that without scatter correction, and increased the image contrast by a factor of 2 in high-contrast regions. On the anthropomorphic phantom, the spatial nonuniformity decreased from 10.8% to 6.8% after correction. Conclusions: A novel scatter correction method, enabling unobstructed acquisition of the high frequency image data and concurrent detection of the patient-specific low frequency scatter data at the edges of the FOV, is proposed and validated in this work. Relative to blocker based techniques, rather than obstructing the central portion of the FOV which degrades and limits the image reconstruction, compressed sensing is used to solve for the scatter from detection of scatter at the periphery of the FOV, enabling for the highest quality reconstruction in the central region and robust patient-specific scatter correction. PMID:23298098
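The compressed-sensing step, recovering a smooth scatter field from boundary-only measurements by penalizing the L1 norm of its discrete cosine transform, can be illustrated in one dimension with plain ISTA iterations. The profile, mask width, and tuning constants below are assumptions for illustration; the actual method operates on 2D projections with a hybrid scatter prior.

```python
import numpy as np
from scipy.fft import dct, idct

# 1D sketch: recover a smooth (low-frequency) scatter profile across the
# detector from samples observed only at the blocked boundary regions, by
# soft-thresholding DCT coefficients (ISTA) to enforce L1 sparsity.
n = 128
x = np.linspace(0, 1, n)
scatter_true = 0.6 + 0.3 * np.cos(np.pi * x)     # smooth scatter profile
mask = np.zeros(n, dtype=bool)
mask[:12] = mask[-12:] = True                     # boundary detectors only
y = scatter_true[mask]                            # measured boundary scatter

coeffs = np.zeros(n)                              # DCT coefficients of estimate
step, lam = 0.5, 1e-3
for _ in range(500):
    est = idct(coeffs, norm='ortho')
    resid = np.zeros(n)
    resid[mask] = est[mask] - y                   # data misfit on the boundary
    coeffs -= step * dct(resid, norm='ortho')     # gradient step
    # soft threshold = proximal operator of the L1 penalty
    coeffs = np.sign(coeffs) * np.maximum(np.abs(coeffs) - step * lam, 0.0)

scatter_est = idct(coeffs, norm='ortho')          # recovered full-FOV scatter
```

Because the true scatter is sparse in the DCT basis, the L1 penalty lets the boundary samples determine the interior, which is the essence of the paper's recovery step.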
Nitzsche, E U; Choi, Y; Czernin, J; Hoh, C K; Huang, S C; Schelbert, H R
1996-06-01
[13N]Ammonia has been validated in dog studies as a myocardial blood flow tracer. Estimates of myocardial blood flow by [13N]ammonia were highly linearly correlated to those by the microsphere and blood sample techniques. However, estimates of myocardial blood flow with [13N]ammonia in humans have not yet been compared with those by an independent technique. This study therefore tested the hypothesis that the [13N]ammonia positron emission tomographic technique in humans gives estimates of myocardial blood flow comparable to those obtained with the [15O]water technique. A total of 30 pairs of positron emission tomographic flow measurements were performed in 30 healthy volunteers; 15 volunteers were studied at rest and 15 during adenosine-induced hyperemia. Estimates of average and of regional myocardial blood flow by the [13N]ammonia and the [15O]water approaches correlated well (y = 0.02 + 1.02x, r = .99, P < .001, SEE = 0.023 for average and y = 0.06 + 1.00x, r = .97, P < .001, SEE = 0.025 for regional values) over a flow range of 0.45 to 4.74 mL.min-1.g-1. At rest, mean myocardial blood flow was 0.64 +/- 0.09 mL.min-1.g-1 for [13N]ammonia and 0.66 +/- 0.12 mL.min-1.g-1 for [15O]water (P = NS). For adenosine-induced hyperemia, mean myocardial blood flow was 2.63 +/- 0.75 mL.min-1.g-1 for [13N]ammonia and 2.73 +/- 0.77 mL.min-1.g-1 for [15O]water (P = NS). The coefficient of variation as an index of the observed heterogeneity of myocardial blood flow averaged, for [13N]ammonia, 9 +/- 4% at rest and 12 +/- 7% during stress and, for [15O]water, 14 +/- 11% at rest and 16 +/- 9% during stress. The coefficients of variation for [15O]water were significantly higher than those for [13N]ammonia (P = .004 at rest and P = .03 during stress). The two approaches yield comparable estimates of myocardial blood flow in humans, which supports the validity of the [13N]ammonia method in human myocardium previously shown only in animals.
However, the [15O]water approach reveals a greater heterogeneity (presumably method-related), which might limit the accuracy of sectorial myocardial blood flow estimates in humans.
Estimating the number of double-strand breaks formed during meiosis from partial observation.
Toyoizumi, Hiroshi; Tsubouchi, Hideo
2012-12-01
Analyzing the basic mechanism of DNA double-strand break (DSB) formation during meiosis is important for understanding sexual reproduction and genetic diversity. The location and amount of meiotic DSBs can be examined by using a common molecular biological technique called Southern blotting, but only a subset of the total DSBs can be observed; only DSB fragments still carrying the region recognized by a Southern blot probe are detected. With the assumption that DSB formation follows a nonhomogeneous Poisson process, we propose two estimators of the total number of DSBs on a chromosome: (1) an estimator based on the Nelson-Aalen estimator, and (2) an estimator based on a record value process. Further, we compare their asymptotic accuracy.
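A generic Nelson-Aalen cumulative-hazard estimator, the building block behind the first proposed estimator, can be sketched as follows. The toy data are illustrative; the paper's method goes further, converting the partially observed hazard into an estimate of the total DSB count.

```python
import numpy as np

def nelson_aalen(times, observed):
    """Nelson-Aalen estimate of the cumulative hazard.

    times    : event or censoring times
    observed : 1 if the event was observed at that time, 0 if censored
    Returns a list of (time, cumulative hazard) pairs at each event time.
    """
    times = np.asarray(times, dtype=float)
    observed = np.asarray(observed)
    order = np.argsort(times)
    times, observed = times[order], observed[order]
    hazard = 0.0
    estimates = []
    for t in np.unique(times[observed == 1]):
        at_risk = np.sum(times >= t)                     # still under observation
        events = np.sum((times == t) & (observed == 1))  # events at this time
        hazard += events / at_risk                       # increment: d_i / n_i
        estimates.append((float(t), hazard))
    return estimates

# Small worked example: 5 subjects, one censored at t=4
na = nelson_aalen([1, 2, 2, 4, 5], [1, 1, 1, 0, 1])
```

For these data the hazard increments are 1/5, 2/4, and 1/1, giving cumulative values 0.2, 0.7, and 1.7 at times 1, 2, and 5.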
Heer, D M; Passel, J F
1987-01-01
This article compares 2 different methods for estimating the number of undocumented Mexican adults in Los Angeles County. The 1st method, the survey-based method, uses a combination of 1980 census data and the results of a survey conducted in Los Angeles County in 1980 and 1981. A sample was selected from babies born in Los Angeles County who had a mother or father of Mexican origin. The survey included questions about the legal status of the baby's parents and certain other relatives. The resulting estimates of undocumented Mexican immigrants are for males aged 18-44 and females aged 18-39. The 2nd method, the residual method, involves comparison of census figures for aliens counted with estimates of legally-resident aliens developed principally with data from the Immigration and Naturalization Service (INS). For this study, estimates by age, sex, and period of entry were produced for persons born in Mexico and living in Los Angeles County. The results of this research indicate that it is possible to measure undocumented immigration with different techniques, yet obtain results that are similar. Both techniques presented here are limited in that they represent estimates of undocumented aliens based on the 1980 census. The number of additional undocumented aliens not counted remains a subject of conjecture. The fact that the survey-based estimates (228,700) are quite similar to the residual estimates (317,800) suggests that the number of undocumented aliens not counted in the census may not be an extremely large fraction of the undocumented population. The survey-based estimates have some significant advantages over the residual estimates. The survey provides tabulations of the undocumented population by characteristics other than the limited demographic information provided by the residual technique.
On the other hand, the survey-based estimates require that a survey be conducted and, if national or regional estimates are called for, they may require a number of surveys. The residual technique, however, also requires a data source other than the census. However, the INS discontinued the annual registration of aliens after 1981. Thus, estimates of undocumented aliens based on the residual technique will probably not be possible for subnational areas using the 1990 census unless the registration program is reinstituted. Perhaps the best information on the undocumented population in the 1990 census will come from an improved version of the survey-based technique described here applied in selected local areas.
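The residual technique reduces to a subtraction: the census count of the foreign-born group minus the estimated legally resident population. A sketch with hypothetical component figures, chosen only so the difference reproduces the 317,800 residual estimate quoted above:

```python
# Residual method sketch. The two component figures below are hypothetical
# stand-ins, not values from the study; only their difference matches the
# reported residual estimate for Los Angeles County.
census_mexican_born = 742_000      # hypothetical census count of Mexican-born
legal_residents_ins = 424_200      # hypothetical INS-based legal-resident estimate
residual_undocumented = census_mexican_born - legal_residents_ins
```

In the study itself, both components were built up by age, sex, and period of entry before subtracting.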
Geothermal Reservoir Temperatures in Southeastern Idaho using Multicomponent Geothermometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neupane, Ghanashyam; Mattson, Earl D.; McLing, Travis L.
Southeastern Idaho exhibits numerous warm springs, warm water from shallow wells, and hot water within oil and gas test wells that indicate a potential for geothermal development in the area. Although the area exhibits several thermal expressions, the measured geothermal gradients vary substantially (19–61 °C/km) within this area, potentially suggesting a redistribution of heat in the overlying ground water from deeper geothermal reservoirs. We have estimated reservoir temperatures from measured water compositions using an inverse modeling technique (Reservoir Temperature Estimator, RTEst) that calculates the temperature at which multiple minerals are simultaneously at equilibrium while explicitly accounting for the possible loss of volatile constituents (e.g., CO2), boiling, and/or water mixing. Compositions of a selected group of thermal waters representing southeastern Idaho hot/warm springs and wells were used for the development of temperature estimates. The temperature estimates in the region varied from moderately warm (59 °C) to over 175 °C. Specifically, hot springs near Preston, Idaho resulted in the highest temperature estimates in the region.
Stellar population in star formation regions of galaxies
NASA Astrophysics Data System (ADS)
Gusev, Alexander S.; Shimanovskaya, Elena V.; Shatsky, Nikolai I.; Sakhibov, Firouz; Piskunov, Anatoly E.; Kharchenko, Nina V.
2018-05-01
We developed techniques for searching for young unresolved star groupings (clusters, associations, and their complexes) and for estimating their physical parameters. Our study is based on spectroscopic, spectrophotometric, and UBVRI photometric observations of 19 spiral galaxies. In the studied galaxies, we found 1510 objects younger than 10 Myr and present their catalogue. Having combined photometric and spectroscopic data, we derived extinctions, chemical abundances, sizes, ages, and masses of these groupings. We discuss separately the specific cases in which the gas extinction does not agree with the interstellar one. We assume that this is due to a spatial offset of the H II clouds with respect to the related stellar population. We developed a method to estimate the age of the stellar population of the studied complexes using their morphology and the relation with the associated Hα emission region. As a result, we obtained estimates of chemical abundances for 80, masses for 63, and ages for 57 young objects observed in seven galaxies.
Optimal Background Estimators in Single-Molecule FRET Microscopy.
Preus, Søren; Hildebrandt, Lasse L; Birkedal, Victoria
2016-09-20
Single-molecule total internal reflection fluorescence (TIRF) microscopy constitutes an umbrella of powerful tools that facilitate direct observation of the biophysical properties, population heterogeneities, and interactions of single biomolecules without the need for ensemble synchronization. Due to the low signal/noise ratio in single-molecule TIRF microscopy experiments, it is important to determine the local background intensity, especially when the fluorescence intensity of the molecule is used quantitatively. Here we compare and evaluate the performance of different aperture-based background estimators used particularly in single-molecule Förster resonance energy transfer. We introduce the general concept of multiaperture signatures and use this technique to demonstrate how the choice of background can affect the measured fluorescence signal considerably. A new (to our knowledge) and simple background estimator is proposed, called the local statistical percentile (LSP). We show that the LSP background estimator performs as well as current background estimators at low molecular densities and significantly better in regions of high molecular densities. The LSP background estimator is thus suited for single-particle TIRF microscopy of dense biological samples in which the intensity itself is an observable of the technique.
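The core idea of a percentile-based background estimator can be sketched in a few lines. The aperture geometry and the choice of percentile below are illustrative assumptions, not the published LSP definition:

```python
import math

def local_statistical_percentile(pixels, percentile=50.0):
    """Estimate local background as a percentile of the surrounding
    pixel intensities (a sketch of an LSP-style estimator).

    pixels: intensity values drawn from a local aperture (e.g. an
            annulus around the molecule, excluding the spot itself).
    """
    values = sorted(pixels)
    if not values:
        raise ValueError("empty aperture")
    # linear interpolation between closest ranks
    k = (len(values) - 1) * percentile / 100.0
    lo, hi = math.floor(k), math.ceil(k)
    if lo == hi:
        return values[int(k)]
    return values[lo] * (hi - k) + values[hi] * (k - lo)

# In a dense field, bright neighbouring molecules contaminate the
# aperture; a percentile is far less sensitive to them than the mean.
aperture = [10, 11, 9, 10, 12, 10, 11, 9, 250, 240]  # two bright outliers
mean_bg = sum(aperture) / len(aperture)          # pulled up by the outliers
lsp_bg = local_statistical_percentile(aperture)  # robust estimate
```

This robustness to bright neighbours is what makes a percentile estimator attractive at high molecular densities, where mean-based apertures systematically overestimate the background.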
Zhu, Bangyan; Li, Jiancheng; Chu, Zhengwei; Tang, Wei; Wang, Bin; Li, Dawei
2016-01-01
Spatial and temporal variations in the vertical stratification of the troposphere introduce significant propagation delays in interferometric synthetic aperture radar (InSAR) observations. Observations of small-amplitude surface deformations and regional subsidence rates are plagued by tropospheric delays, which are strongly correlated with topographic height variations. Phase-based tropospheric correction techniques assuming a linear relationship between interferometric phase and topography have been exploited and developed, with mixed success. Producing robust estimates of tropospheric phase delay, however, plays a critical role in increasing the accuracy of InSAR measurements. Meanwhile, few phase-based correction methods account for the spatially variable tropospheric delay over larger study regions. Here, we present a robust and multi-weighted approach to estimate the correlation between phase and topography that is relatively insensitive to confounding processes such as regional subsidence over larger regions as well as under varying tropospheric conditions. An expanded form of robust least squares is introduced to estimate the spatially variable correlation between phase and topography by splitting the interferograms into multiple blocks. Within each block, the correlation is robustly estimated from the band-filtered phase and topography. Phase-elevation ratios are multiply weighted and extrapolated to each persistent scatterer (PS) pixel. We applied the proposed method to Envisat ASAR images over the Southern California area, USA, and found that our method mitigated the atmospheric noise better than the conventional phase-based method. The corrected ground surface deformation also agreed better with deformation measured by GPS. PMID:27420066
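As a rough illustration of the block-wise robust fitting idea (not the paper's exact multi-weighted estimator), a phase-elevation ratio K can be estimated with iteratively reweighted least squares using Huber weights, so that pixels undergoing real deformation are down-weighted rather than biasing the tropospheric term:

```python
def fit_phase_elevation(h, phi, n_iter=10, delta=1.0):
    """Robustly fit phi ~ K*h + c (phase-elevation ratio K) by
    iteratively reweighted least squares with Huber weights.
    A sketch; the published multi-weighted, band-filtered scheme
    is considerably richer."""
    w = [1.0] * len(h)
    K, c = 0.0, 0.0
    for _ in range(n_iter):
        # weighted least squares for a straight line
        sw = sum(w)
        sx = sum(wi * hi for wi, hi in zip(w, h))
        sy = sum(wi * pi for wi, pi in zip(w, phi))
        sxx = sum(wi * hi * hi for wi, hi in zip(w, h))
        sxy = sum(wi * hi * pi for wi, hi, pi in zip(w, h, phi))
        denom = sw * sxx - sx * sx
        K = (sw * sxy - sx * sy) / denom
        c = (sy - K * sx) / sw
        # Huber reweighting: down-weight large residuals
        # (e.g. localized deformation, not tropospheric delay)
        w = []
        for hi, pi in zip(h, phi):
            r = abs(pi - (K * hi + c))
            w.append(1.0 if r <= delta else delta / r)
    return K, c

# synthetic block: true ratio 0.05 rad/m, one deforming outlier pixel
h = [100, 200, 300, 400, 500, 600]
phi = [0.05 * hi + 2.0 for hi in h]
phi[3] += 30.0  # localized deformation, not a tropospheric signal
K, c = fit_phase_elevation(h, phi)
```

An ordinary least-squares fit of the same data would be pulled noticeably off the true ratio by the single deforming pixel; the Huber reweighting recovers K close to 0.05.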
Ries(compiler), Kernell G.; With sections by Atkins, J. B.; Hummel, P.R.; Gray, Matthew J.; Dusenbury, R.; Jennings, M.E.; Kirby, W.H.; Riggs, H.C.; Sauer, V.B.; Thomas, W.O.
2007-01-01
The National Streamflow Statistics (NSS) Program is a computer program that should be useful to engineers, hydrologists, and others for planning, management, and design applications. NSS compiles all current U.S. Geological Survey (USGS) regional regression equations for estimating streamflow statistics at ungaged sites in an easy-to-use interface that operates on computers with Microsoft Windows operating systems. NSS expands on the functionality of the USGS National Flood Frequency Program, and replaces it. The regression equations included in NSS are used to transfer streamflow statistics from gaged to ungaged sites through the use of watershed and climatic characteristics as explanatory or predictor variables. Generally, the equations were developed on a statewide or metropolitan-area basis as part of cooperative study programs. Equations are available for estimating rural and urban flood-frequency statistics, such as the 100-year flood, for every state, for Puerto Rico, and for the island of Tutuila, American Samoa. Equations are available for estimating other statistics, such as the mean annual flow, monthly mean flows, flow-duration percentiles, and low-flow frequencies (such as the 7-day, 10-year low flow) for less than half of the states. All equations available for estimating streamflow statistics other than flood-frequency statistics assume rural (non-regulated, non-urbanized) conditions. The NSS output provides indicators of the accuracy of the estimated streamflow statistics. The indicators may include any combination of the standard error of estimate, the standard error of prediction, the equivalent years of record, or 90 percent prediction intervals, depending on what was provided by the authors of the equations. The program includes several other features that can be used only for flood-frequency estimation.
These include the ability to generate flood-frequency plots and plots of typical flood hydrographs for selected recurrence intervals, estimates of the probable maximum flood, extrapolation of the 500-year flood when an equation for estimating it is not available, and weighting techniques to improve flood-frequency estimates for gaging stations and ungaged sites on gaged streams. This report describes the regionalization techniques used to develop the equations in NSS and provides guidance on the applicability and limitations of the techniques. The report also includes a user's manual and a summary of equations available for estimating basin lagtime, which is needed by the program to generate flood hydrographs. The NSS software and accompanying database, and the documentation for the regression equations included in NSS, are available on the Web at http://water.usgs.gov/software/.
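Regional regression equations of the kind NSS evaluates are typically power functions of basin characteristics, applied in log space. A minimal sketch follows; the coefficients a, b, and c are invented for illustration, since real NSS equations are specific to a region and a recurrence interval:

```python
def peak_flow(drainage_area_mi2, mean_annual_precip_in,
              a=120.0, b=0.72, c=0.55):
    """Evaluate a regional-regression-style peak-flow equation of the
    form Q_T = a * A**b * P**c (in cfs). The coefficients here are
    hypothetical placeholders, not published NSS values."""
    return a * drainage_area_mi2**b * mean_annual_precip_in**c

# hypothetical ungaged basin: 50 mi^2, 32 in mean annual precipitation
q100 = peak_flow(50.0, 32.0)
```

The transfer from gaged to ungaged sites works because both A and P are measurable from maps and climate records, without any streamflow data at the site of interest.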
High-resolution studies of the HF ionospheric modification interaction region
NASA Technical Reports Server (NTRS)
Duncan, L. M.; Sheerin, J. P.
1985-01-01
The use of the pulse edge analysis technique to explain ionospheric modifications caused by high-power HF radio waves is discussed. The technique, implemented at the Arecibo Observatory, uses long radar pulses and very rapid data sampling. A comparison of the pulse leading and trailing edge characteristics is obtained and the comparison is used to estimate the relative changes in the interaction region height and layer width; an example utilizing this technique is provided. Main plasma line overshoot and miniovershoot were studied from the pulse edge observations; the observations at various HF pulsings and radar resolutions are graphically presented. From the pulse edge data the development and the occurrence of main plasma line overshoot and miniovershoot are explained. The theories of soliton formation and collapse, wave ducting, profile modification, and parametric instabilities are examined as a means of explaining main plasma line overshoots and miniovershoots.
Solar flare ionization in the mesosphere observed by coherent-scatter radar
NASA Technical Reports Server (NTRS)
Parker, J. W.; Bowhill, S. A.
1986-01-01
The coherent-scatter technique, as used with the Urbana radar, is able to measure relative changes in electron density at one altitude during the progress of a solar flare when that altitude contains a statistically steady turbulent layer. This work describes the analysis of Urbana coherent-scatter data from the times of 13 solar flares in the period from 1978 to 1983. Previous methods of measuring electron density changes in the D-region are summarized. Models of X-ray spectra, photoionization rates, and ion-recombination reaction schemes are reviewed. The coherent-scatter technique is briefly described, and a model is developed which relates changes in scattered power to changes in electron density. An analysis technique is developed using X-ray flux data from geostationary satellites and coherent scatter data from the Urbana radar which empirically distinguishes between proposed D-region ion-chemical schemes, and estimates the nonflare ion-pair production rate.
Techniques for estimating flood-depth frequency relations for streams in West Virginia
Wiley, J.B.
1987-01-01
Multiple regression analyses are applied to data from 119 U.S. Geological Survey streamflow stations to develop equations that estimate baseline depth (depth at 50% flow duration) and 100-yr flood depth on unregulated streams in West Virginia. Drainage basin characteristics determined from the 100-yr flood depth analysis were used to develop 2-, 10-, 25-, 50-, and 500-yr regional flood depth equations. Two regions with distinct baseline depth equations and three regions with distinct flood depth equations are delineated. Drainage area is the most significant independent variable found in the central and northern areas of the state, where mean basin elevation also is significant. The equations are applicable to any unregulated site in West Virginia where values of independent variables are within the range evaluated for the region. Examples of inapplicable sites include those in reaches below dams, within and directly upstream from bridge or culvert constrictions, within encroached reaches, in karst areas, and where streams flow through lakes or swamps.
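For a single explanatory variable, developing such an equation reduces to a least-squares line in log10 space. A sketch with synthetic stations (the coefficient 2.5 and exponent 0.3 are made up for illustration):

```python
import math

def fit_log_log(area, depth):
    """Least-squares fit of log10(depth) = b0 + b1*log10(area),
    the standard form for regional depth/flow regression equations."""
    x = [math.log10(a) for a in area]
    y = [math.log10(d) for d in depth]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx) ** 2 for xi in x))
    b0 = my - b1 * mx
    return b0, b1  # equation: depth = 10**b0 * area**b1

# synthetic stations following depth = 2.5 * A**0.3 exactly
areas = [5, 20, 80, 300, 1200]
depths = [2.5 * a**0.3 for a in areas]
b0, b1 = fit_log_log(areas, depths)
```

Real studies regress on several basin characteristics at once (multiple regression) and report standard errors, but the log-space structure is the same.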
Considerations in Phase Estimation and Event Location Using Small-aperture Regional Seismic Arrays
NASA Astrophysics Data System (ADS)
Gibbons, Steven J.; Kværna, Tormod; Ringdal, Frode
2010-05-01
The global monitoring of earthquakes and explosions at decreasing magnitudes necessitates the fully automatic detection, location and classification of an ever increasing number of seismic events. Many seismic stations of the International Monitoring System are small-aperture arrays designed to optimize the detection and measurement of regional phases. Collaboration with operators of mines within regional distances of the ARCES array, together with waveform correlation techniques, has provided an unparalleled opportunity to assess the ability of a small-aperture array to provide robust and accurate direction and slowness estimates for phase arrivals resulting from well-constrained events at sites of repeating seismicity. A significant reason for the inaccuracy of current fully-automatic event location estimates is the use of f-k slowness estimates measured in variable frequency bands. The variability of slowness and azimuth measurements for a given phase from a given source region is reduced by the application of almost any constant frequency band. However, the frequency band resulting in the most stable estimates varies greatly from site to site. Situations are observed in which regional P-arrivals from two sites, far closer than the theoretical resolution of the array, result in highly distinct populations in slowness space. This means that the f-k estimates, even at relatively low frequencies, can be sensitive to source and path-specific characteristics of the wavefield and should be treated with caution when inferring a geographical backazimuth under the assumption of a planar wavefront arriving along the great-circle path. Moreover, different frequency bands are associated with different biases, meaning that slowness and azimuth station corrections (commonly denoted SASCs) cannot be calibrated, and should not be used, without reference to the frequency band employed.
We demonstrate an example where fully-automatic locations based on a source-region specific fixed-parameter template are more stable than the corresponding analyst-reviewed estimates. The reason is that the analyst selects a frequency band and analysis window that appear optimal for each event. In this case, the frequency band which produces the most consistent direction estimates has neither the best SNR nor the greatest beam-gain, and is therefore unlikely to be chosen by an analyst without calibration data.
NASA Astrophysics Data System (ADS)
Mustac, M.; Kim, S.; Tkalcic, H.; Rhie, J.; Chen, Y.; Ford, S. R.; Sebastian, N.
2015-12-01
Conventional approaches to inverse problems suffer from non-linearity and non-uniqueness in estimations of seismic structures and source properties. Estimated results and associated uncertainties are often biased by applied regularizations and additional constraints, which are commonly introduced to solve such problems. Bayesian methods, however, provide statistically meaningful estimations of models and their uncertainties constrained by data information. In addition, hierarchical and trans-dimensional (trans-D) techniques are inherently implemented in the Bayesian framework to account for involved error statistics and model parameterizations, and, in turn, allow more rigorous estimations of the same. Here, we apply Bayesian methods throughout the entire inference process to estimate seismic structures and source properties in Northeast Asia including east China, the Korean peninsula, and the Japanese islands. Ambient noise analysis is first performed to obtain a base three-dimensional (3-D) heterogeneity model using continuous broadband waveforms from more than 300 stations. As for the tomography of surface wave group and phase velocities in the 5-70 s band, we adopt a hierarchical and trans-D Bayesian inversion method using Voronoi partition. The 3-D heterogeneity model is further improved by joint inversions of teleseismic receiver functions and dispersion data using a newly developed high-efficiency Bayesian technique. The obtained model is subsequently used to prepare 3-D structural Green's functions for the source characterization. A hierarchical Bayesian method for point source inversion using regional complete waveform data is applied to selected events from the region. The seismic structure and source characteristics with rigorously estimated uncertainties from the novel Bayesian methods provide enhanced monitoring and discrimination of seismic events in northeast Asia.
Multiscale corner detection and classification using local properties and semantic patterns
NASA Astrophysics Data System (ADS)
Gallo, Giovanni; Giuoco, Alessandro L.
2002-05-01
A new technique to detect, localize, and classify corners in digital closed curves is proposed. The technique is based on correct estimation of support regions for each point. We compute multiscale curvature to detect and localize corners. As a further step, with the aid of some local features, it is possible to classify corners into seven distinct types. Classification is performed using a set of rules that describe corners according to preset semantic patterns. Compared with existing techniques, the proposed approach belongs to the family of algorithms that try to explain the curve rather than simply label it. Moreover, our technique works in a manner similar to what are believed to be typical mechanisms of human perception.
Martian impact crater degradation studies: Implications for localized obliteration episodes
NASA Technical Reports Server (NTRS)
Barlow, N. G.
1992-01-01
Early spacecraft missions to Mars revealed that impact craters display a range of degradational states, but full appreciation of the range of preservational characteristics was not revealed until the Mariner 9 and Viking missions in the 1970's. Many studies have described the spatial and temporal distribution of obliteration episodes based on qualitative descriptions of crater degradation. Recent advances in photoclinometric techniques have led to improved estimates of crater morphometric characteristics. The present study is using photoclinometry to determine crater profiles and is comparing these results with the crater geometry expected for pristine craters of identical size. The result is an estimate of the degree of degradation suffered by Martian impact craters in selected regions of the planet. Size-frequency distribution analyses of craters displaying similar degrees of degradation within localized regions of the planet may provide information about the timing of obliteration episodes in these regions.
Green, W. Reed; Haggard, Brian E.
2001-01-01
Water-quality sampling consisting of every other month (bimonthly) routine sampling and storm event sampling (six storms annually) is used to estimate annual phosphorus and nitrogen loads at Illinois River south of Siloam Springs, Arkansas. Hydrograph separation allowed assessment of base-flow and surface-runoff nutrient relations and yield. Discharge and nutrient relations indicate that water quality at Illinois River south of Siloam Springs, Arkansas, is affected by both point and nonpoint sources of contamination. Base-flow phosphorus concentrations decreased with increasing base-flow discharge, indicating the dilution of phosphorus in water from point sources. Nitrogen concentrations increased with increasing base-flow discharge, indicating a predominant ground-water source. Nitrogen concentrations at higher base-flow discharges often were greater than median concentrations reported for ground water (from wells and springs) in the Springfield Plateau aquifer. Total estimated phosphorus and nitrogen annual loads for calendar years 1997-1999 using the regression techniques presented in this paper (35 samples) were similar to estimated loads derived from integration techniques (1,033 samples). Flow-weighted nutrient concentrations and nutrient yields at the Illinois River site were about 10 to 100 times greater than national averages for undeveloped basins and at North Sylamore Creek and Cossatot River (considered to be undeveloped basins in Arkansas). Total phosphorus and soluble reactive phosphorus were greater than 10 times and total nitrogen and dissolved nitrite plus nitrate were greater than 10 to 100 times the national and regional averages for undeveloped basins. These results demonstrate the utility of a strategy whereby samples are collected every other month and during selected storm events annually, with use of regression models to estimate nutrient loads.
Annual loads of phosphorus and nitrogen estimated using regression techniques could provide similar results to estimates using integration techniques, with much less investment.
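The regression approach can be sketched as a log-log rating curve fitted to the sparse sample and then applied to the full daily record. With the synthetic power-law data below (coefficients invented), regression reproduces the integrated load almost exactly; this is the ideal case rather than typical field performance, where bias-correction terms are also needed:

```python
import math

def fit_rating(q_sample, load_sample):
    """Fit ln(L) = b0 + b1*ln(Q) by least squares (a simple load
    rating curve; real studies add bias-correction factors)."""
    x = [math.log(q) for q in q_sample]
    y = [math.log(v) for v in load_sample]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = (sum((a - mx) * (b - my) for a, b in zip(x, y))
          / sum((a - mx) ** 2 for a in x))
    b0 = my - b1 * mx
    return b0, b1

# synthetic daily discharges and a true power-law load L = 0.4*Q**1.2
daily_q = [10 + 0.5 * i for i in range(365)]
true_load = [0.4 * q**1.2 for q in daily_q]

# sparse sample mimicking "bimonthly + storms" (every 10th day here)
sample_q = daily_q[::10]
sample_load = true_load[::10]
b0, b1 = fit_rating(sample_q, sample_load)

# annual load: regression applied to all days vs. direct integration
annual_regression = sum(math.exp(b0) * q**b1 for q in daily_q)
annual_integration = sum(true_load)
```

The economy of the design is clear: roughly 35 analyzed samples drive the rating curve, while the 1,033-sample integration serves only as the benchmark.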
Estimating Parameters for the Earth-Ionosphere Waveguide Using VLF Narrowband Transmitters
NASA Astrophysics Data System (ADS)
Gross, N. C.; Cohen, M.
2017-12-01
Estimating the D-region (60 to 90 km altitude) ionospheric electron density profile has always been a challenge. The D-region's altitude is too high for aircraft and balloons to reach but too low for satellites to orbit in. Sounding rocket measurements have been a useful tool for directly measuring the ionosphere; however, these measurements are infrequent and costly. A more sustainable measurement approach for characterizing the D-region is remote sensing with very low frequency (VLF) waves. Both the lower ionosphere and Earth's ground strongly reflect VLF waves. These two spherical reflectors form what is known as the Earth-ionosphere waveguide. As VLF waves propagate within the waveguide, they interact with the D-region ionosphere, causing amplitude and phase changes that are polarization dependent. These changes can be monitored with a spatially distributed array of receivers, and D-region properties can be inferred from these measurements. Researchers have previously used VLF remote sensing techniques, from either narrowband transmitters or sferics, to estimate the density profile, but these estimates typically cover a short time frame and a narrow propagation region. We report on an effort to improve the understanding of VLF wave propagation by estimating the widely used two-parameter (h' and beta) exponential electron density profile. Measurements from multiple narrowband transmitters are taken concurrently at multiple receivers and input into an algorithm. The cornerstone of the algorithm is an artificial neural network (ANN), whose inputs are the received narrowband amplitudes and phases and whose outputs are the estimated h' and beta parameters. Training data for the ANN are generated using the Navy's Long-Wavelength Propagation Capability (LWPC) model. Emphasis is placed on profiling the daytime ionosphere, which has a more stable and predictable profile than the nighttime ionosphere.
Daytime ionospheric disturbances, from high solar activity, are also analyzed.
Choy, G.L.; Boatwright, J.
2009-01-01
We examine two closely located earthquakes in Japan that had identical moment magnitudes Mw but significantly different energy magnitudes Me. We use teleseismic data from the Global Seismograph Network and strong-motion data from the National Research Institute for Earth Science and Disaster Prevention's K-Net to analyze the 19 October 1996 Kyushu earthquake (Mw 6.7, Me 6.6) and the 6 October 2000 Tottori earthquake (Mw 6.7, Me 7.4). To obtain regional estimates of radiated energy ES we apply a spectral technique to regional (<200 km) waveforms that are dominated by S and Lg waves. For the thrust-fault Kyushu earthquake, we estimate an average regional attenuation Q(f) = 230 f^0.65. For the strike-slip Tottori earthquake, the average regional attenuation is Q(f) = 180 f^0.6. These attenuation functions are similar to those derived from studies of both California and Japan earthquakes. The regional estimate of ES for the Kyushu earthquake, 3.8 × 10^14 J, is significantly smaller than that for the Tottori earthquake, ES = 1.3 × 10^15 J. These estimates correspond well with the teleseismic estimates of 3.9 × 10^14 J and 1.8 × 10^15 J, respectively. The apparent stress (τa = μES/M0, with μ equal to rigidity) for the Kyushu earthquake is 4 times smaller than the apparent stress for the Tottori earthquake. In terms of the fault maturity model, the significantly greater release of energy by the strike-slip Tottori earthquake can be related to strong deformation in an immature intraplate setting. The relatively lower energy release of the thrust-fault Kyushu earthquake can be related to rupture on mature faults in a subduction environment. The consistency between teleseismic and regional estimates of ES is particularly significant, as teleseismic data for computing ES are routinely available for all large earthquakes, whereas near-field data often are not.
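The apparent-stress contrast can be checked with back-of-the-envelope arithmetic. The sketch below assumes the Hanks-Kanamori moment-magnitude relation and a crustal rigidity of 3 × 10^10 Pa; neither value is stated in the abstract, so the absolute stresses are only indicative:

```python
def seismic_moment(mw):
    """Hanks & Kanamori relation: M0 [N*m] = 10**(1.5*Mw + 9.1)."""
    return 10 ** (1.5 * mw + 9.1)

MU = 3.0e10  # assumed rigidity, Pa

m0 = seismic_moment(6.7)           # both events had Mw 6.7, so M0 is shared
tau_kyushu = MU * 3.8e14 / m0      # apparent stress tau_a = mu*ES/M0, Pa
tau_tottori = MU * 1.3e15 / m0
ratio = tau_tottori / tau_kyushu
```

Because M0 is identical for the two events, the apparent-stress ratio reduces to the ratio of radiated energies, roughly 3.4, consistent with the abstract's statement that the Kyushu apparent stress is about 4 times smaller.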
Shin, Jaemin; Ahn, Sinyeob; Hu, Xiaoping
2015-01-01
Purpose To develop an improved and generalized technique for correcting T1-related signal fluctuations (T1 effect) in cardiac-gated functional magnetic resonance imaging (fMRI) data with flip angle estimation. Theory and Methods Spatial maps of flip angle and T1 are jointly estimated from cardiac-gated time series using a Kalman filter. These maps are subsequently used for removing the T1 effect in the presence of B1 inhomogeneity. The new technique was compared with a prior technique that uses T1 only while assuming a homogeneous flip angle of 90°. The robustness of the new technique is demonstrated with simulated and experimental data. Results Simulation results revealed that the new method led to increased temporal signal-to-noise ratio across a large range of flip angles, T1s, and stimulus onset asynchrony means compared to the T1-only approach. With the experimental data, the new approach resulted in higher average gray matter temporal signal-to-noise ratio across seven subjects (84 vs. 48). The new approach also led to a higher statistical score of activation in the lateral geniculate nucleus (P < 0.002). Conclusion The new technique is able to remove the T1 effect robustly and is a promising tool for improving the ability to map activation in fMRI, especially in subcortical regions. PMID:23390029
Updated Magmatic Flux Rate Estimates for the Hawaii Plume
NASA Astrophysics Data System (ADS)
Wessel, P.
2013-12-01
Several studies have estimated the magmatic flux rate along the Hawaiian-Emperor Chain using a variety of methods and arriving at different results. These flux rate estimates have weaknesses because of incomplete data sets and different modeling assumptions, especially for the youngest portion of the chain (<3 Ma). While they generally agree on the first-order features, there is less agreement on the magnitude and relative size of secondary flux variations. Some of these differences arise from the use of different methodologies, but the significance of this variability is difficult to assess due to a lack of confidence bounds on the estimates obtained with these disparate methods. All methods introduce some error, but to date there has been little or no quantification of error estimates for the inferred melt flux, making an assessment problematic. Here we re-evaluate the melt flux for the Hawaii plume with the latest gridded data sets (SRTM30+ and FAA 21.1) using several methods, including the optimal robust separator (ORS) and directional median filtering techniques (DiM). We also compute realistic confidence limits on the results. In particular, the DiM technique was specifically developed to aid in the estimation of surface loads that are superimposed on wider bathymetric swells and it provides error estimates on the optimal residuals. Confidence bounds are assigned separately for the estimated surface load (obtained from the ORS regional/residual separation techniques) and the inferred subsurface volume (from gravity-constrained isostasy and plate flexure optimizations). These new and robust estimates will allow us to assess which secondary features in the resulting melt flux curve are significant and should be incorporated when correlating melt flux variations with other geophysical and geochemical observations.
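The regional/residual separation idea behind median filtering can be illustrated in one dimension: a running median wide enough to span a narrow volcanic load rejects it, leaving the broad swell as the regional field. This toy sketch is 1-D with invented geometry; the actual DiM technique applies medians along multiple directions on a 2-D grid:

```python
def running_median(values, half_width):
    """1-D running median with edge truncation; the directional
    median (DiM) idea is the same, applied along several azimuths."""
    out = []
    n = len(values)
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        window = sorted(values[lo:hi])
        m = len(window)
        mid = window[m // 2] if m % 2 else 0.5 * (window[m // 2 - 1] + window[m // 2])
        out.append(mid)
    return out

# broad bathymetric "swell" with a narrow "seamount" load on top
depth = [-4000 + 500 * (1 - abs(i - 50) / 50) for i in range(101)]
for i in range(48, 53):
    depth[i] += 1500  # narrow volcanic edifice

regional = running_median(depth, 10)            # median rejects the edifice
residual = [d - r for d, r in zip(depth, regional)]  # isolated surface load
```

Because the median is insensitive to a feature narrower than half its window, the residual recovers the seamount while the smooth swell passes into the regional field.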
Relations between Precipitation and Shallow Groundwater in Illinois.
NASA Astrophysics Data System (ADS)
Changnon, Stanley A.; Huff, Floyd A.; Hsu, Chin-Fei
1988-12-01
The statistical relationships between monthly precipitation (P) and shallow groundwater levels (GW) in 20 wells scattered across Illinois with data for 1960-84 were defined using autoregressive integrated moving average (ARIMA) modeling. A lag of 1 month between P and GW was the strongest temporal relationship found across Illinois, followed by no (0) lag in the northern two-thirds of Illinois, where mollisols predominate, and a lag of 2 months in the alfisols of southern Illinois. Spatial comparison of the 20 P-GW correlations with several physical conditions (aquifer types, soils, and physiography) revealed that the parent soil materials of outwash alluvium, glacial till, thick loess (>2.1 m), and thin loess (<2.1 m) best defined regional relationships for drought assessment. Equations developed from ARIMA using 1960-79 data for each region were used to estimate GW levels during the 1980-81 drought, and estimates averaged within 25 to 45 cm of actual levels. These estimates are considered adequate to allow a useful assessment of drought onset, severity, and termination in other parts of the state. The techniques and equations should be transferable to regions of comparable soils and climate.
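A minimal way to see the lag structure (the study's ARIMA transfer-function modeling is more sophisticated) is to correlate the groundwater level against precipitation shifted by 0 to 3 months and pick the lag with the highest Pearson correlation. The series below are synthetic:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def best_lag(precip, gw, max_lag=3):
    """Correlate GW with precipitation shifted by 0..max_lag months
    and return the lag giving the highest correlation."""
    scores = {}
    for lag in range(max_lag + 1):
        p = precip[:len(precip) - lag] if lag else precip
        g = gw[lag:]
        scores[lag] = pearson(p, g)
    return max(scores, key=scores.get), scores

# synthetic well: groundwater responds to last month's precipitation
precip = [3, 8, 2, 9, 4, 7, 1, 6, 5, 8, 2, 7, 3, 9]
gw = [0.0] + [0.8 * p for p in precip[:-1]]  # pure 1-month lag
lag, scores = best_lag(precip, gw)
```

For this constructed series the 1-month lag correlates perfectly, mirroring the dominant lag the study found statewide.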
NASA Technical Reports Server (NTRS)
Huffman, George J.; Adler, Robert F.; Rudolf, Bruno; Schneider, Udo; Keehn, Peter R.
1995-01-01
The 'satellite-gauge model' (SGM) technique is described for combining precipitation estimates from microwave satellite data, infrared satellite data, rain gauge analyses, and numerical weather prediction models into improved estimates of global precipitation. Throughout, monthly estimates on a 2.5 degrees x 2.5 degrees lat-long grid are employed. First, a multisatellite product is developed using a combination of low-orbit microwave and geosynchronous-orbit infrared data in the latitude range 40 degrees N - 40 degrees S (the adjusted geosynchronous precipitation index) and low-orbit microwave data alone at higher latitudes. Then the rain gauge analysis is brought in, weighting each field by its inverse relative error variance to produce a nearly global, observationally based precipitation estimate. To produce a complete global estimate, the numerical model results are used to fill data voids in the combined satellite-gauge estimate. Our sequential approach to combining estimates allows a user to select the multisatellite estimate, the satellite-gauge estimate, or the full SGM estimate (observationally based estimates plus the model information). The primary limitation in the method is imperfections in the estimation of relative error for the individual fields. The SGM results for one year of data (July 1987 to June 1988) show important differences from the individual estimates, including model estimates as well as climatological estimates. In general, the SGM results are drier in the subtropics than the model and climatological results, reflecting the relatively dry microwave estimates that dominate the SGM in oceanic regions.
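The inverse-error-variance weighting used to merge the satellite and gauge fields can be sketched in a few lines (the field values and variances below are made up for illustration):

```python
import numpy as np

def combine_fields(estimates, error_vars):
    """Merge precipitation fields by weighting each with its inverse
    (relative) error variance, as in the satellite-gauge combination."""
    est = np.asarray(estimates, dtype=float)
    w = 1.0 / np.asarray(error_vars, dtype=float)
    return (w[:, None] * est).sum(axis=0) / w.sum()

# Two hypothetical gridded fields (mm/month) with different error variances:
multisat = np.array([120.0, 80.0, 40.0])   # satellite-only estimate
gauge    = np.array([100.0, 90.0, 50.0])   # gauge analysis (more reliable)
merged = combine_fields([multisat, gauge], [4.0, 1.0])
# gauge gets 4x the weight of the satellite field -> [104., 88., 48.]
```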
[Microbiological Surveillance of Measles and Rubella in Spain. Laboratory Network].
Echevarría, Juan Emilio; Fernández García, Aurora; de Ory, Fernando
2015-01-01
The laboratory is a fundamental component of the surveillance of measles and rubella. Cases need to be properly confirmed to ensure an accurate estimation of the incidence. Strains should be genetically characterized to determine the transmission pattern of these viruses; outbreaks and transmission chains often can be fully discriminated only after such characterization. Finally, the susceptibility of the population is estimated on the basis of sero-prevalence surveys. Detection of the specific IgM response is the basis of the laboratory diagnosis of these diseases. It should be complemented with genomic detection by RT-PCR to reach optimal efficiency, especially when sampling is performed early in the course of the disease. Genotyping is performed by genomic sequencing according to reference protocols of the WHO. Laboratory surveillance of measles and rubella in Spain is organized as a network of regional laboratories with different capabilities. The National Center of Microbiology, as National Reference Laboratory (NRL), supports the regional laboratories by ensuring the availability of all required techniques throughout the country and by monitoring the quality of the results. The NRL is currently working on implementing new molecular techniques, based on the analysis of genomic hypervariable regions, for strain characterization at sub-genotypic levels and on their use in surveillance.
Lindquist, Martin A.; Xu, Yuting; Nebel, Mary Beth; Caffo, Brian S.
2014-01-01
To date, most functional Magnetic Resonance Imaging (fMRI) studies have assumed that the functional connectivity (FC) between time series from distinct brain regions is constant across time. However, recently, there has been increased interest in quantifying possible dynamic changes in FC during fMRI experiments, as it is thought this may provide insight into the fundamental workings of brain networks. In this work we focus on the specific problem of estimating the dynamic behavior of pair-wise correlations between time courses extracted from two different regions of the brain. We critique the commonly used sliding-windows technique, and discuss some alternative methods used to model volatility in the finance literature that could also prove useful in the neuroimaging setting. In particular, we focus on the Dynamic Conditional Correlation (DCC) model, which provides a model-based approach towards estimating dynamic correlations. We investigate the properties of several techniques in a series of simulation studies and find that DCC achieves the best overall balance between sensitivity and specificity in detecting dynamic changes in correlations. We also investigate its scalability beyond the bivariate case to demonstrate its utility for studying dynamic correlations between more than two brain regions. Finally, we illustrate its performance in an application to test-retest resting state fMRI data. PMID:24993894
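The sliding-window technique the authors critique can be sketched in a few lines (the window length and synthetic signals below are illustrative, not from the study):

```python
import numpy as np

def sliding_window_corr(x, y, window):
    """Dynamic correlation between two time courses estimated with the
    sliding-window technique: Pearson r within each window position."""
    n = len(x) - window + 1
    return np.array([np.corrcoef(x[i:i + window], y[i:i + window])[0, 1]
                     for i in range(n)])

rng = np.random.default_rng(1)
t = np.arange(400)
shared = rng.normal(size=400)
gain = np.where(t < 200, 0.0, 1.0)      # coupling switches on halfway through
x = shared + rng.normal(size=400)
y = gain * shared + rng.normal(size=400)
r = sliding_window_corr(x, y, window=60)
# r hovers near 0 in early windows and rises once the coupling turns on
```

The window length trades temporal resolution against estimator variance, which is exactly the sensitivity/specificity tension the abstract describes and which model-based approaches such as DCC aim to avoid.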
A TRMM-Calibrated Infrared Technique for Convective and Stratiform Rainfall: Analysis and Validation
NASA Technical Reports Server (NTRS)
Negri, Andrew; Starr, David OC. (Technical Monitor)
2001-01-01
A satellite infrared technique with passive microwave calibration has been developed for estimating convective and stratiform rainfall. The Convective-Stratiform Technique, calibrated by coincident, physically retrieved rain rates from the TRMM Microwave Imager (TMI), has been applied to 30 min interval GOES infrared data and aggregated over seasonal and yearly periods over northern South America. The diurnal cycle of rainfall, as well as the division between convective and stratiform rainfall, is presented. For the period Jan-April 1999, analysis revealed significant effects of local circulations (river breeze, land/sea breeze, mountain/valley) on both the total rainfall and its diurnal cycle. Results compared well (a one-hour lag) with the diurnal cycle derived from TOGA radar-estimated rainfall in Rondonia. The satellite estimates revealed that the convective rain constituted 24% of the rain area while accounting for 67% of the rain volume. Estimates of the diurnal cycle (both total rainfall and convective/stratiform) for an area encompassing the Amazon Basin (3 x 10^6 sq km) were in phase with those from the TRMM Precipitation Radar, despite the latter's limited sampling. Results will be presented comparing the yearly (2000) diurnal cycle for large regions (including the Amazon Basin), and an intercomparison of January-March estimates for three years (1999-2001). We hope to demonstrate the utility of using the TRMM PR observations as verification for infrared estimates of the diurnal cycle, and as verification of the apportionment of rainfall into convective and stratiform components.
Evaluating imputation and modeling in the North Central region
Ronald E. McRoberts
2000-01-01
The objectives of the North Central Research Station, USDA Forest Service, in developing procedures for annual forest inventories include establishing the capability of producing annual estimates of timber volume and related variables. The inventory system developed to accomplish these objectives features an annual sample of measured field plots and techniques for...
This presentation explains the importance of the fine-scale features for air toxics exposure modeling. The paper presents a new approach to combine local-scale and regional model results for the National Air Toxic Assessment. The technique has been evaluated with a chemical tra...
Magnitude and Frequency of Floods on Nontidal Streams in Delaware
Ries, Kernell G.; Dillow, Jonathan J.A.
2006-01-01
Reliable estimates of the magnitude and frequency of annual peak flows are required for the economical and safe design of transportation and water-conveyance structures. This report, done in cooperation with the Delaware Department of Transportation (DelDOT) and the Delaware Geological Survey (DGS), presents methods for estimating the magnitude and frequency of floods on nontidal streams in Delaware at locations where streamgaging stations monitor streamflow continuously and at ungaged sites. Methods are presented for estimating the magnitude of floods for recurrence intervals ranging from 2 through 500 years. These methods are applicable to watersheds exhibiting a full range of urban development conditions. The report also describes StreamStats, a web application that makes it easy to obtain flood-frequency estimates for user-selected locations on Delaware streams. Flood-frequency estimates for ungaged sites are obtained through a process known as regionalization, using statistical regression analysis, where information determined for a group of streamgaging stations within a region forms the basis for estimates for ungaged sites within the region. One hundred and sixteen streamgaging stations in and near Delaware with at least 10 years of non-regulated annual peak-flow data available were used in the regional analysis. Estimates for gaged sites are obtained by combining the station peak-flow statistics (mean, standard deviation, and skew) and peak-flow estimates with regional estimates of skew and flood-frequency magnitudes. Example flood-frequency estimate calculations using the methods presented in the report are given for: (1) ungaged sites, (2) gaged locations, (3) sites upstream or downstream from a gaged location, and (4) sites between gaged locations. Regional regression equations applicable to ungaged sites in the Piedmont and Coastal Plain Physiographic Provinces of Delaware are presented.
The equations incorporate drainage area, forest cover, impervious area, basin storage, housing density, soil type A, and mean basin slope as explanatory variables, and have average standard errors of prediction ranging from 28 to 72 percent. Additional regression equations that incorporate drainage area and housing density as explanatory variables are presented for use in defining the effects of urbanization on peak-flow estimates throughout Delaware for the 2-year through 500-year recurrence intervals, along with suggestions for their appropriate use in predicting development-affected peak flows. Additional topics associated with the analyses performed during the study are also discussed, including: (1) the availability and description of more than 30 basin and climatic characteristics considered during the development of the regional regression equations; (2) the treatment of increasing trends in the annual peak-flow series identified at 18 gaged sites, with respect to their relations with maximum 24-hour precipitation and housing density, and their use in the regional analysis; (3) calculation of the 90-percent confidence interval associated with peak-flow estimates from the regional regression equations; and (4) a comparison of flood-frequency estimates at gages used in a previous study, highlighting the effects of various improved analytical techniques.
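Regional regression equations of this kind are typically fit in log space, with peak flow regressed on log-transformed basin characteristics. A hedged sketch with hypothetical explanatory variables and coefficients (not the report's actual equations):

```python
import numpy as np

def fit_peakflow_regression(Q, predictors):
    """Fit log10(Q_T) = b0 + b1*log10(X1) + ... by ordinary least squares,
    the usual functional form of regional peak-flow regression equations."""
    X = np.column_stack([np.ones(len(Q))] +
                        [np.log10(p) for p in predictors])
    coefs, *_ = np.linalg.lstsq(X, np.log10(Q), rcond=None)
    return coefs

# Hypothetical gaged-site data: drainage area (mi^2) and impervious fraction
rng = np.random.default_rng(2)
area = rng.uniform(5, 500, 60)
imperv = rng.uniform(0.01, 0.4, 60)
# Synthetic "true" relation Q100 = 90 * A^0.7 * I^0.2 with lognormal noise
q100 = 90 * area**0.7 * imperv**0.2 * 10**rng.normal(0, 0.05, 60)
b = fit_peakflow_regression(q100, [area, imperv])
# b recovers approximately [log10(90), 0.7, 0.2]
```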
Recharge and groundwater models: An overview
Sanford, W.
2002-01-01
Recharge is a fundamental component of groundwater systems, and in groundwater-modeling exercises recharge is either measured and specified or estimated during model calibration. The most appropriate way to represent recharge in a groundwater model depends upon both physical factors and study objectives. Where the water table is close to the land surface, as in humid climates or regions with low topographic relief, a constant-head boundary condition is used. Conversely, where the water table is relatively deep, as in drier climates or regions with high relief, a specified-flux boundary condition is used. In most modeling applications, mixed-type conditions are more effective, or a combination of the different types can be used. The relative distribution of recharge can be estimated from water-level data only, but flux observations must be incorporated in order to estimate rates of recharge. Flux measurements are based on either Darcian velocities (e.g., stream base-flow) or seepage velocities (e.g., groundwater age). In order to estimate the effective porosity independently, both types of flux measurements must be available. Recharge is often estimated more efficiently when automated inverse techniques are used. Other important applications are the delineation of areas contributing recharge to wells and the estimation of paleorecharge rates using carbon-14.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holden, Jacob; Wood, Eric W; Zhu, Lei
A data-driven technique for estimation of energy requirements for a proposed vehicle trip has been developed. Based on over 700,000 miles of driving data, the technique has been applied to generate a model that estimates trip energy requirements. The model uses a novel binning approach to categorize driving by road type, traffic conditions, and driving profile. The trip-level energy estimations can easily be aggregated to any higher-level transportation system network desired. The model has been tested and validated on the Austin, Texas, data set used to build this model. Ground-truth energy consumption for the data set was obtained from Future Automotive Systems Technology Simulator (FASTSim) vehicle simulation results. The energy estimation model has demonstrated 12.1 percent normalized total absolute error. The energy estimation from the model can be used to inform control strategies in routing tools, such as change in departure time, alternate routing, and alternate destinations, to reduce energy consumption. The model can also be used to determine more accurate energy consumption of regional or national transportation networks if trip origin and destinations are known. Additionally, this method allows the estimation tool to be tuned to a specific driver or vehicle type.
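The binning approach can be illustrated roughly as follows; the bin keys, observations, and helper names are hypothetical stand-ins, not the actual model:

```python
import numpy as np

def build_energy_bins(road_type, congested, energy_per_mile):
    """Average observed energy intensity within (road type, traffic) bins,
    a simplified version of the binning approach described."""
    table = {}
    for rt, cg, e in zip(road_type, congested, energy_per_mile):
        table.setdefault((rt, cg), []).append(e)
    return {k: float(np.mean(v)) for k, v in table.items()}

def estimate_trip_energy(segments, table):
    """Sum bin-average intensity times segment miles over a proposed trip."""
    return sum(table[(rt, cg)] * miles for rt, cg, miles in segments)

# Toy training data: kWh/mile observations assigned to bins
table = build_energy_bins(
    ["highway", "highway", "city", "city"],
    [False, False, True, True],
    [0.28, 0.32, 0.42, 0.38])
trip = [("highway", False, 10.0), ("city", True, 2.0)]
energy = estimate_trip_energy(trip, table)   # 0.30*10 + 0.40*2 = 3.8 kWh
```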
Impact of multicollinearity on small sample hydrologic regression models
NASA Astrophysics Data System (ADS)
Kroll, Charles N.; Song, Peter
2013-06-01
Often hydrologic regression models are developed with ordinary least squares (OLS) procedures. The use of OLS with highly correlated explanatory variables produces multicollinearity, which creates highly sensitive parameter estimators with inflated variances and improper model selection. It is not clear how to best address multicollinearity in hydrologic regression models. Here a Monte Carlo simulation is developed to compare four techniques to address multicollinearity: OLS, OLS with variance inflation factor screening (VIF), principal component regression (PCR), and partial least squares regression (PLS). The performance of these four techniques was observed for varying sample sizes, correlation coefficients between the explanatory variables, and model error variances consistent with hydrologic regional regression models. The negative effects of multicollinearity are magnified at smaller sample sizes, higher correlations between the variables, and larger model error variances (smaller R²). The Monte Carlo simulation indicates that if the true model is known, multicollinearity is present, and the estimation and statistical testing of regression parameters are of interest, then PCR or PLS should be employed. If the model is unknown, or if the interest is solely on model predictions, it is recommended that OLS be employed since using more complicated techniques did not produce any improvement in model performance. A leave-one-out cross-validation case study was also performed using low-streamflow data sets from the eastern United States. Results indicate that OLS with stepwise selection generally produces models across study regions with varying levels of multicollinearity that are as good as biased regression techniques such as PCR and PLS.
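Variance inflation factor screening, one of the four techniques compared, can be computed directly from auxiliary OLS fits (an illustrative sketch on synthetic data):

```python
import numpy as np

def variance_inflation_factors(X):
    """VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing
    column j of X on the remaining columns (OLS with intercept)."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    vifs = []
    for j in range(p):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - resid.var() / y.var()
        vifs.append(1.0 / (1.0 - r2))
    return np.array(vifs)

rng = np.random.default_rng(3)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.1, size=200)   # nearly collinear with x1
x3 = rng.normal(size=200)                   # independent
vifs = variance_inflation_factors(np.column_stack([x1, x2, x3]))
# vifs[0] and vifs[1] are large (multicollinearity); vifs[2] is near 1
```

A common screening rule drops or combines variables whose VIF exceeds a threshold such as 5 or 10 before the final regression is fit.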
Feaster, Toby D.; Tasker, Gary D.
2002-01-01
Data from 167 streamflow-gaging stations in or near South Carolina with 10 or more years of record through September 30, 1999, were used to develop two methods for estimating the magnitude and frequency of floods in South Carolina for rural ungaged basins that are not significantly affected by regulation. Flood frequency estimates for 54 gaged sites in South Carolina were computed by fitting the water-year peak flows for each site to a log-Pearson Type III distribution. As part of the computation of flood-frequency estimates for gaged sites, new values for generalized skew coefficients were developed. Flood-frequency analyses also were made for gaging stations that drain basins from more than one physiographic province. The U.S. Geological Survey, in cooperation with the South Carolina Department of Transportation, updated these data from previous flood-frequency reports to aid officials who are active in floodplain management as well as those who design bridges, culverts, and levees, or other structures near streams where flooding is likely to occur. Regional regression analysis, using generalized least squares regression, was used to develop a set of predictive equations that can be used to estimate the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows for rural ungaged basins in the Blue Ridge, Piedmont, upper Coastal Plain, and lower Coastal Plain physiographic provinces of South Carolina. The predictive equations are all functions of drainage area. Average errors of prediction for these regression equations ranged from -16 to 19 percent for the 2-year recurrence-interval flow in the upper Coastal Plain to -34 to 52 percent for the 500-year recurrence-interval flow in the lower Coastal Plain. A region-of-influence method also was developed that interactively estimates recurrence-interval flows for rural ungaged basins in the Blue Ridge of South Carolina.
The region-of-influence method uses regression techniques to develop a unique relation between flow and basin characteristics for an individual watershed. This, then, can be used to estimate flows at ungaged sites. Because the computations required for this method are somewhat complex, a computer application was developed that performs the computations and compares the predictive errors for this method. The computer application includes the option of using the region-of-influence method, or the generalized least squares regression equations from this report to compute estimated flows and errors of prediction specific to each ungaged site. From a comparison of predictive errors using the region-of-influence method with those computed using the regional regression method, the region-of-influence method performed systematically better only in the Blue Ridge and is, therefore, not recommended for use in the other physiographic provinces. Peak-flow data for the South Carolina stations used in the regionalization study are provided in appendix A, which contains gaging station information, log-Pearson Type III statistics, information on stage-flow relations, and water-year peak stages and flows. For informational purposes, water-year peak-flow data for stations on regulated streams in South Carolina also are provided in appendix D. Other information pertaining to the regulated streams is provided in the text of the report.
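The station-selection step at the heart of a region-of-influence method can be sketched as a nearest-neighbor search in standardized basin-characteristic space (a simplified illustration with hypothetical characteristics, not the report's implementation):

```python
import numpy as np

def region_of_influence(site_chars, gage_chars, k=5):
    """Pick the k gaging stations most similar to an ungaged site,
    measuring similarity as Euclidean distance in standardized
    basin-characteristic space."""
    G = np.asarray(gage_chars, dtype=float)
    mu, sd = G.mean(axis=0), G.std(axis=0)
    Gz = (G - mu) / sd
    sz = (np.asarray(site_chars, dtype=float) - mu) / sd
    dist = np.linalg.norm(Gz - sz, axis=1)
    return np.argsort(dist)[:k]

# Hypothetical gage characteristics: [drainage area (mi^2), mean slope (%)]
gages = np.array([[10, 2], [12, 3], [300, 8], [320, 9], [11, 2.5], [500, 1]])
idx = region_of_influence([11, 2.4], gages, k=3)
# the three small, low-slope basins (indices 0, 1, 4) form the region
```

A regression is then fit only to the selected stations, giving each ungaged site its own, potentially unique, estimating equation.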
NASA Astrophysics Data System (ADS)
Kim, Eng-Chan; Cho, Jae-Hwan; Kim, Min-Hye; Kim, Ki-Hong; Choi, Cheon-Woong; Seok, Jong-min; Na, Kil-Ju; Han, Man-Seok
2013-03-01
This study was conducted on 20 patients who had undergone pedicle screw fixation between March and December 2010 to quantitatively compare a conventional fat suppression technique, CHESS (chemical shift selection suppression), and a new technique, IDEAL (iterative decomposition of water and fat with echo asymmetry and least squares estimation). The general efficacy and usefulness of the IDEAL technique was also evaluated. Fat-suppressed transverse-relaxation-weighted images and longitudinal-relaxation-weighted images were obtained before and after contrast injection by using these two techniques with a 1.5T MR (magnetic resonance) scanner. The obtained images were analyzed for image distortion, susceptibility artifacts and homogenous fat removal in the target region. The results showed that the image distortion due to the susceptibility artifacts caused by implanted metal was lower in the images obtained using the IDEAL technique compared to those obtained using the CHESS technique. The results of a qualitative analysis also showed that compared to the CHESS technique, fewer susceptibility artifacts and more homogenous fat removal were found in the images obtained using the IDEAL technique in a comparative image evaluation of the axial plane images before and after contrast injection. In summary, compared to the CHESS technique, the IDEAL technique showed a lower occurrence of susceptibility artifacts caused by metal and lower image distortion. In addition, more homogenous fat removal was shown in the IDEAL technique.
NASA Astrophysics Data System (ADS)
Velicogna, I.; Sutterley, T. C.; A, G.; van den Broeke, M. R.; Ivins, E. R.
2016-12-01
We use Gravity Recovery and Climate Experiment (GRACE) monthly gravity fields to determine the regional acceleration in ice mass loss in Antarctica for 2002-2016. We find that the total mass loss is controlled by only a few regions. In Antarctica, the Amundsen Sea (AS) sector and the Antarctic Peninsula account for 65% and 18%, respectively, of the total loss (186 ± 10 Gt/yr) mainly from ice dynamics. The AS sector contributes most of the acceleration in loss (9 ± 1 Gt/yr²), and Queen Maud Land, East Antarctica, is the only sector with a significant mass gain due to a local increase in SMB (57 ± 5 Gt/yr). We compare GRACE regional mass balance estimates with independent estimates from ICESat-1 and Operation IceBridge laser altimetry, CryoSat-2 radar altimetry, and surface mass balance outputs from RACMO2.3. In the Amundsen Sea Embayment of West Antarctica, an area experiencing rapid retreat and mass loss to the sea, we find good agreement between GRACE and altimetry estimates. Comparison of GRACE with these independent techniques in East Antarctica shows that GIA estimates from the new regional ice deglaciation models underestimate the GIA correction in the EAIS interior, which implies larger losses of the Antarctica ice sheet by about 70 Gt/yr. Sectors where we are observing the largest losses are closest to warm circumpolar water, and with polar constriction of the westerlies enhanced by climate warming, we expect these sectors to contribute more and more to sea level as the ice shelves that protect these glaciers will melt faster in contact with more heat from the surrounding ocean.
A Bayesian framework for infrasound location
NASA Astrophysics Data System (ADS)
Modrak, Ryan T.; Arrowsmith, Stephen J.; Anderson, Dale N.
2010-04-01
We develop a framework for location of infrasound events using backazimuth and infrasonic arrival times from multiple arrays. Bayesian infrasonic source location (BISL) developed here estimates event location and associated credibility regions. BISL accounts for unknown source-to-array path or phase by formulating infrasonic group velocity as random. Differences between observed and predicted source-to-array traveltimes are partitioned into two additive Gaussian sources, measurement error and model error, the second of which accounts for the unknown influence of wind and temperature on path. By applying the technique to both synthetic tests and ground-truth events, we highlight the complementary nature of back azimuths and arrival times for estimating well-constrained event locations. BISL is an extension to methods developed earlier by Arrowsmith et al. that provided simple bounds on location using a grid-search technique.
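The grid-search flavor of such a locator can be sketched as follows; this toy version (assumed geometry, group speed, and misfit weighting) eliminates the unknown origin time by demeaning the time residuals and omits the Bayesian error partitioning and credibility regions that BISL adds:

```python
import numpy as np

def locate_infrasound(arrays, baz_obs, t_obs, speed=0.30):
    """Grid-search location sketch: minimize a combined misfit of
    back azimuths (deg) and arrival times (s) at several arrays.
    Coordinates in km; speed is an assumed group velocity in km/s."""
    xs = np.linspace(-100, 100, 101)
    ys = np.linspace(-100, 100, 101)
    best, best_cost = None, np.inf
    for x in xs:
        for y in ys:
            dx = x - arrays[:, 0]
            dy = y - arrays[:, 1]
            dist = np.hypot(dx, dy)
            baz = np.degrees(np.arctan2(dx, dy)) % 360    # array -> source
            dbaz = (baz - baz_obs + 180) % 360 - 180      # wrapped residual
            tres = t_obs - dist / speed
            tres -= tres.mean()                           # remove origin time
            cost = (dbaz**2).sum() / 25.0 + (tres**2).sum()
            if cost < best_cost:
                best, best_cost = (x, y), cost
    return best

# Three hypothetical arrays (km) observing a source at (20, -10):
arrays = np.array([[-50.0, 0.0], [60.0, 40.0], [0.0, -80.0]])
src = np.array([20.0, -10.0])
d = np.hypot(*(arrays - src).T)
t_obs = d / 0.30 + 100.0                                  # origin time 100 s
baz_obs = np.degrees(np.arctan2(*(src - arrays).T)) % 360
loc = locate_infrasound(arrays, baz_obs, t_obs)
# recovers (20.0, -10.0) for this noise-free example
```

BISL replaces the fixed weighting above with a probabilistic model in which group velocity is random and residuals are split into measurement and model error, yielding credibility regions instead of a single best point.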
A Comparison of Techniques for Determining Mass Outflow Rates in the Type 2 Quasar Markarian 34
NASA Astrophysics Data System (ADS)
Revalski, Mitchell; Crenshaw, D. Michael; Fischer, Travis C.; Kraemer, Steven B.; Schmitt, Henrique R.; Dashtamirova, Dzhuliya; Pope, Crystal L.
2018-06-01
We present spatially resolved measurements of the mass outflow rates and energetics for the Narrow Line Region (NLR) outflows in the type 2 quasar Markarian 34. Using data from the Hubble Space Telescope and Apache Point Observatory, together with Cloudy photoionization models, we calculate the radial mass distribution of ionized gas and map its kinematics. We compare the results of this technique to global outflow rates that characterize NLR outflows with a single outflow rate and energetic measurement. We find that NLR mass estimates based on emission line luminosities produce more consistent results than techniques employing filling factors.
Capello, Katia; Bortolotti, Laura; Lanari, Manuela; Baioni, Elisa; Mutinelli, Franco; Vascellari, Marta
2015-01-01
The knowledge of the size and demographic structure of animal populations is a necessary prerequisite for any population-based epidemiological study, especially to ascertain and interpret prevalence data, to implement surveillance plans in controlling zoonotic diseases and, moreover, to provide accurate estimates of tumour incidence data obtained by population-based registries. The main purpose of this study was to provide an accurate estimate of the size and structure of the canine population in Veneto region (north-eastern Italy), using the Lincoln-Petersen version of the capture-recapture methodology. The Regional Canine Demographic Registry (BAC) and a sample survey of households of Veneto Region were the capture and recapture sources, respectively. The secondary purpose was to estimate the size and structure of the feline population in the same region, using the same survey applied for the dog population. A sample of 2465 randomly selected households was drawn and submitted to a questionnaire using the CATI technique, in order to obtain information about the ownership of dogs and cats. If a dog was declared to be identified, the owner's information was used to recapture the dog in the BAC. The study was conducted in Veneto Region during 2011, when the dog population recorded in the BAC was 605,537. Overall, 616 households declared to possess at least one dog (25%), with a total of 805 dogs and an average per household of 1.3. The capture-recapture analysis showed that 574 dogs (71.3%, 95% CI: 68.04-74.40%) had been recaptured in both sources, providing a dog population estimate of 849,229 (95% CI: 814,747-889,394), 40% higher than that registered in the BAC. Concerning cats, 455 of 2465 (18%, 95% CI: 17-20%) households declared to possess at least one cat at the time of the telephone interview, with a total of 816 cats.
The mean number of cats per household was equal to 1.8, providing an estimate of the cat population in Veneto region equal to 663,433 (95% CI: 626,585-737,159). The estimate of the size and structure of owned canine and feline populations in Veneto region provide useful data to perform epidemiological studies and monitoring plans in this area. Copyright © 2014 Elsevier B.V. All rights reserved.
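The Lincoln-Petersen estimator underlying the study reduces to one line; plugging in the figures quoted above reproduces the published dog-population estimate:

```python
def lincoln_petersen(marked, captured, recaptured):
    """Two-source population estimate: N ≈ M * C / R, where M animals are
    'marked' in source 1 (the registry), C are sampled by source 2 (the
    household survey), and R appear in both."""
    return marked * captured / recaptured

# Figures from the Veneto dog study: 605,537 registered dogs, 805 dogs
# in surveyed households, 574 of them found in the registry.
estimate = lincoln_petersen(605537, 805, 574)
# round(estimate) -> 849229, matching the published point estimate
```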
Ortel, Terry W.; Spies, Ryan R.
2015-11-19
Next-Generation Radar (NEXRAD) has become an integral component in the estimation of precipitation (Kitzmiller and others, 2013). The high spatial and temporal resolution of NEXRAD has revolutionized the ability to estimate precipitation across vast regions, which is especially beneficial in areas without a dense rain-gage network. With the improved precipitation estimates, hydrologic models can produce reliable streamflow forecasts for areas across the United States. NEXRAD data from the National Weather Service (NWS) has been an invaluable tool used by the U.S. Geological Survey (USGS) for numerous projects and studies; NEXRAD data processing techniques similar to those discussed in this Fact Sheet have been developed within the USGS, including the NWS Quantitative Precipitation Estimates archive developed by Blodgett (2013).
Estimating NOx emissions and surface concentrations at high spatial resolution using OMI
NASA Astrophysics Data System (ADS)
Goldberg, D. L.; Lamsal, L. N.; Loughner, C.; Swartz, W. H.; Saide, P. E.; Carmichael, G. R.; Henze, D. K.; Lu, Z.; Streets, D. G.
2017-12-01
In many instances, NOx emissions are not measured at the source. In these cases, remote sensing techniques are extremely useful in quantifying NOx emissions. Using an exponentially modified Gaussian (EMG) fitting of oversampled Ozone Monitoring Instrument (OMI) NO2 data, we estimate NOx emissions and lifetimes in regions where these emissions are uncertain. This work also presents a new high-resolution OMI NO2 dataset derived from the NASA retrieval that can be used to estimate surface level concentrations in the eastern United States and South Korea. To better estimate vertical profile shape factors, we use high-resolution model simulations (Community Multiscale Air Quality (CMAQ) and WRF-Chem) constrained by in situ aircraft observations to re-calculate tropospheric air mass factors and tropospheric NO2 vertical columns during summertime. The correlation between our satellite product and ground NO2 monitors in urban areas has improved dramatically: r² = 0.60 in the new product versus r² = 0.39 in the operational product, signifying that the new product is a better indicator of surface concentrations than the operational product. Our work emphasizes the need to use both high-resolution and high-fidelity models in order to re-calculate vertical column data in areas with large spatial heterogeneities in NOx emissions. The methodologies developed in this work can be applied to other world regions and other satellite data sets to produce high-quality region-specific emissions estimates.
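An exponentially modified Gaussian is a decaying exponential (the chemical lifetime, advected by the wind) convolved with a Gaussian smoothing kernel. A minimal numpy/stdlib sketch of fitting its e-folding rate to a downwind line density (all values synthetic, not OMI data, and a simple grid search stands in for the nonlinear fit used in such studies):

```python
import numpy as np
from math import erfc, exp, sqrt

def emg(x, mu, sigma, lam):
    """Exponentially modified Gaussian: exponential decay (rate lam)
    convolved with a Gaussian of width sigma centered at mu."""
    out = []
    for xi in x:
        a = 0.5 * lam * exp(0.5 * lam * (2 * mu + lam * sigma**2 - 2 * xi))
        out.append(a * erfc((mu + lam * sigma**2 - xi) / (sqrt(2) * sigma)))
    return np.array(out)

def fit_lifetime(x, y, mu=0.0, sigma=15.0,
                 lams=np.linspace(0.01, 0.2, 96)):
    """Grid-fit the e-folding rate lam; amplitude solved linearly."""
    best_lam, best_err = None, np.inf
    for lam in lams:
        shape = emg(x, mu, sigma, lam)
        amp = (shape @ y) / (shape @ shape)
        err = ((y - amp * shape) ** 2).sum()
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam

x = np.linspace(-50, 150, 101)           # distance downwind (km)
true_lam = 0.05                          # 1 / (lifetime * wind speed)
y = 3.0 * emg(x, 0.0, 15.0, true_lam)    # noise-free synthetic line density
lam_hat = fit_lifetime(x, y)
# lam_hat recovers ~0.05, i.e. an e-folding distance of ~20 km
```

Dividing the fitted e-folding distance by the wind speed gives the NOx lifetime, and the fitted amplitude gives the emission rate.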
Estimating the volatilization of ammonia from synthetic nitrogenous fertilizers used in China.
Zhang, Yisheng; Luan, Shengji; Chen, Liaoliao; Shao, Min
2011-03-01
Although it has long been recognized that significant amounts of nitrogen, typically in the form of ammonia (NH₃) applied as fertilizer, are lost to the atmosphere, accurate estimates are lacking for many locations. In this study, a detailed, bottom-up method for estimating NH₃ emissions from synthetic fertilizers in China was used. The total amount emitted in 2005 in China was estimated to be 3.55 Tg NH₃-N, with an uncertainty of ± 50%. This estimate was considerably lower than previously published values. Emissions from urea and ammonium bicarbonate accounted for 64.3% and 26.5%, respectively, of the 2005 total. The NH₃ emission inventory incorporated 2448 county-level data points, categorized on a monthly basis, and was developed with more accurate activity levels and emission factors than had been used in previous assessments. There was considerable variability in the emissions within a province. The NH₃ emissions generally peaked in the spring and summer, accounting for 30.1% and 48.8%, respectively, of total emissions in 2005. The peaks correlated with crop planting and fertilization schedules. The NH₃ regional distribution pattern showed strong correspondence with planting techniques and local arable land areas. The regions with the highest atmospheric losses are located in eastern China, especially the North China Plain and the Taihu region. Copyright © 2010 Elsevier Ltd. All rights reserved.
Fekkes, Stein; Swillens, Abigail E S; Hansen, Hendrik H G; Saris, Anne E C M; Nillesen, Maartje M; Iannaccone, Francesco; Segers, Patrick; de Korte, Chris L
2016-10-01
Three-dimensional (3-D) strain estimation might improve the detection and localization of high strain regions in the carotid artery (CA) for identification of vulnerable plaques. This paper compares 2-D versus 3-D displacement estimation in terms of radial and circumferential strain using simulated ultrasound (US) images of a patient-specific 3-D atherosclerotic CA model at the bifurcation embedded in surrounding tissue generated with ABAQUS software. Global longitudinal motion was superimposed on the model based on literature data. A Philips L11-3 linear array transducer was simulated, which transmitted plane waves at three alternating angles at a pulse repetition rate of 10 kHz. Interframe (IF) radio-frequency US data were simulated in Field II for 191 equally spaced longitudinal positions of the internal CA. Accumulated radial and circumferential displacements were estimated by tracking the IF displacements estimated with a two-step normalized cross-correlation method and displacement compounding. Least-squares strain estimation was performed to determine accumulated radial and circumferential strain. The performance of the 2-D and 3-D methods was compared by calculating the root-mean-squared error of the estimated strains with respect to the reference strains obtained from the model. More accurate strain images were obtained using the 3-D displacement estimation for the entire cardiac cycle. The 3-D technique clearly outperformed the 2-D technique in phases with high IF longitudinal motion. In fact, the large IF longitudinal motion rendered it impossible to accurately track the tissue and accumulate strains over the entire cardiac cycle with the 2-D technique.
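The core of the displacement estimation above is a normalized cross-correlation (NCC) search between pre- and post-deformation signal windows. A minimal 1-D sketch of the coarse, integer-sample step (the paper's method adds a fine subsample step, 2-D/3-D windows, and displacement compounding, all omitted here):

```python
import numpy as np

def ncc_displacement(pre, post, max_lag):
    """Integer-sample displacement that maximizes the normalized
    cross-correlation between two signal windows."""
    n = len(pre)
    best_lag, best_ncc = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        i0, i1 = max(0, -lag), min(n, n - lag)   # overlapping support
        a = pre[i0:i1] - pre[i0:i1].mean()
        b = post[i0 + lag:i1 + lag] - post[i0 + lag:i1 + lag].mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        if denom > 0:
            ncc = (a * b).sum() / denom
            if ncc > best_ncc:
                best_ncc, best_lag = ncc, lag
    return best_lag

# Toy RF line: the "post" window is the "pre" window shifted by 3 samples
rng = np.random.default_rng(1)
rf = rng.normal(0.0, 1.0, 200)
pre = rf[50:150]
post = rf[47:147]   # post[i + 3] = pre[i], so the true lag is +3
lag = ncc_displacement(pre, post, max_lag=10)
```

Accumulating such interframe lags over the cardiac cycle, and compounding displacements from the three steering angles, yields the tracked motion from which strain is differentiated.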
NASA Astrophysics Data System (ADS)
Sepúlveda, J.; Hoyos Ortiz, C. D.
2017-12-01
An adequate quantification of precipitation over land is critical for many societal applications, including agriculture, hydroelectricity generation, water supply, and risk management associated with extreme events. Rain gauges, the traditional method for precipitation estimation, are excellent for estimating the volume of liquid water during a particular precipitation event, but they cannot fully capture the high spatial variability of the phenomenon, which is a requirement for almost all practical applications. The weather radar, an active remote sensing instrument, provides a proxy for rainfall with fine spatial resolution and adequate temporal sampling; however, it does not measure surface precipitation. In order to fully exploit the capabilities of the weather radar, it is necessary to develop quantitative precipitation estimation (QPE) techniques combining radar information with in-situ measurements. Different QPE methodologies are explored and adapted to local observations in a highly complex terrain region of tropical Colombia using a C-band radar and a relatively dense network of rain gauges and disdrometers. One important result is that the expressions reported in the literature for extratropical locations are not representative of the conditions found in the tropical region studied. In addition to reproducing the state-of-the-art techniques, a new multi-stage methodology based on radar-derived variables and disdrometer data is proposed in order to achieve the best QPE possible. The main motivation for this new methodology is that most traditional QPE methods do not directly take into account the different uncertainty sources involved in the process. The main advantage of the multi-stage model compared with traditional models is that it allows assessing and quantifying the uncertainty in the surface rain-rate estimation.
The sub-hourly rainfall estimates obtained with the multi-stage methodology are realistic compared with observed data, in spite of the many sources of uncertainty, including the sampling volume, the different physical principles of the sensors, the incomplete understanding of the microphysics of precipitation and, most importantly, the rapidly varying drop size distribution.
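Adapting literature expressions to local observations typically means refitting the reflectivity-rain-rate power law Z = a R^b against disdrometer data. A sketch with synthetic disdrometer samples (the coefficients 250 and 1.3 are illustrative stand-ins, not the study's fitted values):

```python
import numpy as np

# Hypothetical disdrometer sample: rain rate R (mm/h) and reflectivity
# Z (mm^6/m^3) obeying Z = a * R**b with multiplicative scatter
rng = np.random.default_rng(1)
R = rng.uniform(0.5, 50.0, 300)
a_true, b_true = 250.0, 1.3
Z = a_true * R**b_true * np.exp(rng.normal(0.0, 0.1, R.size))

# Fit log10(Z) = log10(a) + b * log10(R) by ordinary least squares
b_fit, log_a = np.polyfit(np.log10(R), np.log10(Z), 1)
a_fit = 10.0**log_a

def rain_rate(Z_obs, a=a_fit, b=b_fit):
    """Radar QPE: invert the locally fitted Z-R relation
    instead of a default extratropical one."""
    return (Z_obs / a) ** (1.0 / b)
```

Fitting a and b locally is exactly what makes the difference in a tropical drop-size-distribution regime, though the multi-stage method goes further by propagating the residual scatter as an explicit uncertainty.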
NASA Astrophysics Data System (ADS)
Orlandi, A.; Ortolani, A.; Meneguzzo, F.; Levizzani, V.; Torricella, F.; Turk, F. J.
2004-03-01
In order to improve high-resolution forecasts, a specific method for assimilating rainfall rates into the Regional Atmospheric Modelling System model has been developed. It is based on the inversion of the Kuo convective parameterisation scheme. A nudging technique is applied to 'gently' increase with time the weight of the estimated precipitation in the assimilation process. A rough but manageable technique for partitioning convective precipitation from stratiform precipitation, without requiring any ancillary measurement, is also explained. The method is general purpose but is tuned for the assimilation of geostationary satellite rainfall estimates. Preliminary results are presented and discussed, both from fully simulated experiments and from experiments assimilating real satellite-based precipitation observations. For every case study, rainfall data are computed with a rapid-update satellite precipitation estimation algorithm based on IR and MW satellite observations. This research was carried out in the framework of the EURAINSAT project (an EC research project co-funded by the Energy, Environment and Sustainable Development Programme within the topic 'Development of generic Earth observation technologies', Contract number EVG1-2000-00030).
NASA Astrophysics Data System (ADS)
Corona, Roberto; Curreli, Matteo; Montaldo, Nicola; Oren, Ram
2013-04-01
Mediterranean ecosystems are commonly heterogeneous savanna-like ecosystems, with contrasting plant functional types (PFTs) competing for water. Mediterranean regions suffer water scarcity due to their dry climate. In semi-arid regions, evapotranspiration (ET) is the leading loss term of the root-zone water budget, with a yearly magnitude that may be roughly equal to the precipitation. Despite the attention these ecosystems are receiving, a general lack of knowledge persists about estimating ET and about the relationship between ET and the survival strategies of the different PFTs under water stress. During the dry summers, these water-limited heterogeneous ecosystems are mainly characterized by simple dual-PFT landscapes of drought-resistant woody vegetation and bare soil, since the grass has died. Under these conditions, the widely used eddy covariance technique may fail because of the weak land-surface flux signal captured by the sonic anemometer and gas analyzer, and its ET estimate is not robust enough. Here the sap flow technique may play a key role, because it theoretically provides a direct estimate of the woody vegetation transpiration. Through the coupled use of sap flow sensor observations, a 2D footprint model of the eddy covariance tower, and high-resolution satellite images for estimating the footprint land cover map, the eddy covariance measurements can be correctly interpreted and the ET components (bare soil evaporation and woody vegetation transpiration) can be separated. The case study is the Orroli site in Sardinia (Italy). The site landscape is a mixture of Mediterranean patchy vegetation types: trees, including wild olives and cork oaks, and different shrub and herbaceous species. An extensive field campaign started in 2004. Land-surface fluxes and CO2 fluxes are estimated by a micrometeorological tower based on the eddy covariance technique.
Soil moisture profiles were also continuously estimated using water content reflectometers and the gravimetric method, and the leaf area index (LAI) of the PFTs was estimated periodically. Since 2012, sap flow sensors based on the thermal dissipation method have been installed on numerous trees around the tower. Preliminary results show, first, the need for careful use of sap flow sensor outputs, which are affected by errors in the estimates of their main parameters, mainly the allometric relationships between, for instance, sapwood area, diameter, and canopy cover area; these affect the upscaling of the local tree measurements to the larger plot scale. Finally, we demonstrate that sap flow sensors are essential for estimating ET in such dry conditions, typical of Mediterranean ecosystems.
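The thermal dissipation measurement referred to above converts a probe temperature difference into sap flux density via the Granier calibration, and the upscaling to tree transpiration is where the sapwood-area allometry enters. A sketch with assumed probe readings and an assumed sapwood area (not site values):

```python
def granier_sap_flux_density(dT, dT_max):
    """Granier (1985) thermal-dissipation calibration:
    K = (dT_max - dT) / dT,  u = 118.99e-6 * K**1.231  (m3 m-2 s-1),
    where dT_max is the zero-flow (nighttime) temperature difference."""
    K = (dT_max - dT) / dT
    return 118.99e-6 * K**1.231

# Assumed probe readings and allometry (illustrative, not site data)
dT, dT_max = 8.0, 10.0          # degC: midday vs. zero-flow reference
sapwood_area = 0.012            # m2, from an assumed allometric relation

u = granier_sap_flux_density(dT, dT_max)            # m3 m-2 s-1
tree_flow_l_per_h = u * sapwood_area * 1000.0 * 3600.0
```

Plot-scale transpiration would then weight such per-tree flows by the woody cover fraction inside the eddy covariance footprint, which is exactly the step the allometric uncertainties propagate through.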
NASA Technical Reports Server (NTRS)
Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.
2002-01-01
The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.
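The non-parametric flavor of the analysis above can be illustrated directly: treat a continuous rain series as truth, subsample it at a fixed interval for every possible phase of the sampling grid, and take the spread of the resulting accumulations as the sampling uncertainty. The surrogate series below is purely illustrative, not the radar data set used in the study:

```python
import numpy as np

rng = np.random.default_rng(2)
# Surrogate "continuous" record: hourly area-average rain rates over 30
# days, intermittent (70% dry) and skewed, standing in for radar data
n_hours = 30 * 24
rain = np.where(rng.random(n_hours) < 0.7, 0.0,
                rng.gamma(0.7, 3.0, n_hours))     # mm/h

truth = rain.sum()    # 30-day accumulation under full sampling (mm)

def phase_totals(dt_hours):
    """30-day totals estimated from snapshots every dt hours,
    one estimate per possible phase of the sampling grid."""
    return np.array([rain[off::dt_hours].sum() * dt_hours
                     for off in range(dt_hours)])

rel_uncertainty = {dt: phase_totals(dt).std() / truth
                   for dt in (1, 3, 6, 12)}
```

Because the phase-ensemble mean equals the true total exactly, the spread isolates the sampling error; repeating this over many space-time domains is what lets the paper collapse the uncertainty onto a simple scaling law.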
Statistical segmentation of multidimensional brain datasets
NASA Astrophysics Data System (ADS)
Desco, Manuel; Gispert, Juan D.; Reig, Santiago; Santos, Andres; Pascau, Javier; Malpica, Norberto; Garcia-Barreno, Pedro
2001-07-01
This paper presents an automatic segmentation procedure for MRI neuroimages that overcomes part of the problems involved in multidimensional clustering techniques, such as partial volume effects (PVE), processing speed, and the difficulty of incorporating a priori knowledge. The method is a three-stage procedure: 1) Exclusion of background and skull voxels using threshold-based region-growing techniques with fully automated seed selection. 2) Expectation-Maximization algorithms are used to estimate the probability density function (PDF) of the remaining voxels, which are assumed to be mixtures of Gaussians. These voxels can then be classified into cerebrospinal fluid (CSF), white matter, and grey matter. Using this procedure, our method takes advantage of the full covariance matrix (instead of the diagonal) for the joint PDF estimation. On the other hand, logistic discrimination techniques are more robust against violation of multi-Gaussian assumptions. 3) A priori knowledge is added using Markov Random Field techniques. The algorithm has been tested with a dataset of 30 brain MRI studies (co-registered T1 and T2 MRI). Our method was compared with clustering techniques and with template-based statistical segmentation, using manual segmentation as a gold standard. Our results were more robust and closer to the gold standard.
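Stage 2 above, EM estimation of a full-covariance Gaussian mixture over joint (T1, T2) intensities, can be sketched from scratch. The intensity clusters below are synthetic stand-ins for the three tissue classes; the point of the full (rather than diagonal) covariance is that it captures the T1-T2 correlation within each class:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic (T1, T2) intensity pairs for three tissue classes (arb. units)
true_means = np.array([[30.0, 90.0], [70.0, 60.0], [110.0, 30.0]])
X = np.vstack([rng.multivariate_normal(m, [[25.0, 5.0], [5.0, 25.0]], 500)
               for m in true_means])

def mvn_pdf(X, mu, cov):
    d = X - mu
    inv = np.linalg.inv(cov)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * np.einsum("ij,jk,ik->i", d, inv, d))

# Deterministic init: one seed mean inside each intensity cluster
order = np.argsort(X[:, 0])
mu = X[order[[len(X) // 6, len(X) // 2, 5 * len(X) // 6]]].copy()
cov = np.array([np.cov(X.T)] * 3)
w = np.full(3, 1.0 / 3.0)

for _ in range(50):
    # E-step: class responsibilities for every voxel
    r = np.stack([w[k] * mvn_pdf(X, mu[k], cov[k]) for k in range(3)], axis=1)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: weights, means, and FULL covariances
    nk = r.sum(axis=0)
    w = nk / len(X)
    for k in range(3):
        mu[k] = (r[:, k, None] * X).sum(axis=0) / nk[k]
        d = X - mu[k]
        cov[k] = (r[:, k, None] * d).T @ d / nk[k]

labels = r.argmax(axis=1)                   # tissue class per voxel
fitted_means = mu[np.argsort(mu[:, 0])]
```

In the actual pipeline these responsibilities would then be regularized by the stage-3 Markov Random Field prior rather than hard-thresholded.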
Rotzoll, Kolja; Gingerich, Stephen B.; Jenson, John W.; El-Kadi, Aly I.
2013-01-01
Tidal-signal attenuations are analyzed to compute hydraulic diffusivities and estimate regional hydraulic conductivities of the Northern Guam Lens Aquifer, Territory of Guam (Pacific Ocean), USA. The results indicate a significant tidal-damping effect at the coastal boundary. Hydraulic diffusivities computed using a simple analytical solution for well responses to tidal forcings near the periphery of the island are two orders of magnitude lower than for wells in the island’s interior. Based on assigned specific yields of ~0.01–0.4, estimated hydraulic conductivities are ~20–800 m/day for peripheral wells, and ~2,000–90,000 m/day for interior wells. The lower conductivity of the peripheral rocks relative to the interior rocks may best be explained by the effects of karst evolution: (1) dissolutional enhancement of horizontal hydraulic conductivity in the interior; (2) case-hardening and concurrent reduction of local hydraulic conductivity in the cliffs and steeply inclined rocks of the periphery; and (3) the stronger influence of higher-conductivity regional-scale features in the interior relative to the periphery. A simple numerical model calibrated with measured water levels and tidal response estimates values for hydraulic conductivity and storage parameters consistent with the analytical solution. The study demonstrates how simple techniques can be useful for characterizing regional aquifer properties.
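The "simple analytical solution" for well response to tidal forcing is the classic Ferris attenuation formula, which can be inverted for hydraulic diffusivity in a few lines. The well distance, tidal efficiency, specific yield, and aquifer thickness below are illustrative assumptions, not Guam values:

```python
import numpy as np

def tidal_diffusivity(x, amp_ratio, period):
    """Ferris (1951): a sinusoidal tide of period t0 attenuates with
    distance x from the boundary as A(x)/A0 = exp(-x*sqrt(pi/(D*t0))),
    so the hydraulic diffusivity is D = pi*x**2 / (t0 * ln(A0/Ax)**2)."""
    return np.pi * x**2 / (period * np.log(1.0 / amp_ratio) ** 2)

m2_period = 12.42 * 3600.0   # M2 tidal period (s)
x = 2000.0                   # well-to-coast distance (m), illustrative
amp_ratio = 0.30             # observed tidal efficiency, illustrative
D = tidal_diffusivity(x, amp_ratio, m2_period)        # m2/s

# Convert D = T/S to hydraulic conductivity K = D*S/b using an assumed
# specific yield and aquifer thickness (both hypothetical)
S_y, b = 0.1, 100.0
K_m_per_day = D * S_y / b * 86400.0
```

With these assumed inputs the result lands in the interior-well conductivity range quoted in the abstract, which is why modest attenuation differences between peripheral and interior wells translate into order-of-magnitude conductivity contrasts.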
Mesoscale variability of the Upper Colorado River snowpack
Ling, C.-H.; Josberger, E.G.; Thorndike, A.S.
1996-01-01
In the mountainous regions of the Upper Colorado River Basin, snow course observations give local measurements of snow water equivalent, which can be used to estimate regional averages of snow conditions. We develop a statistical technique to estimate the mesoscale average snow accumulation, using 8 years of snow course observations. For each of three major snow accumulation regions in the Upper Colorado River Basin - the Colorado Rocky Mountains, Colorado, the Uinta Mountains, Utah, and the Wind River Range, Wyoming - the snow course observations yield a correlation length scale of 38 km, 46 km, and 116 km respectively. This is the scale for which the snow course data at different sites are correlated with 70 per cent correlation. This correlation of snow accumulation over large distances allows for the estimation of the snow water equivalent on a mesoscale basis. With the snow course data binned into 1/4° latitude by 1/4° longitude pixels, an error analysis shows the following: for no snow course data in a given pixel, the uncertainty in the water equivalent estimate reaches 50 cm; that is, the climatological variability. However, as the number of snow courses in a pixel increases the uncertainty decreases, and approaches 5-10 cm when there are five snow courses in a pixel.
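The error behavior quoted above (climatological variability with no data, shrinking toward 5-10 cm with five courses) is consistent with a simple toy model in which the pixel-mean uncertainty falls roughly as 1/sqrt(n). This is a hedged illustration of the numbers, not the paper's actual error analysis, and the 12 cm per-course scatter is an assumed value chosen to match the quoted range:

```python
import numpy as np

def pixel_uncertainty(n_courses, clim_sigma=50.0, course_sigma=12.0):
    """Toy uncertainty (cm SWE) of a pixel-mean estimate: the full
    climatological variability when the pixel holds no snow courses,
    otherwise an assumed per-course scatter shrinking as 1/sqrt(n)."""
    if n_courses == 0:
        return clim_sigma
    return course_sigma / np.sqrt(n_courses)
```

The real analysis additionally exploits the 38-116 km correlation lengths, so courses in neighboring pixels also reduce the uncertainty.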
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mistry, Nilesh N., E-mail: nmistry@som.umaryland.edu; Diwanji, Tejan; Shi, Xiutao
2013-11-15
Purpose: Current implementations of methods based on Hounsfield units to evaluate regional lung ventilation do not directly incorporate tissue-based mass changes that occur over the respiratory cycle. To overcome this, we developed a 4-dimensional computed tomography (4D-CT)-based technique to evaluate fractional regional ventilation (FRV) that uses an individualized ratio of tidal volume to end-expiratory lung volume for each voxel. We further evaluated the effect of different breathing maneuvers on regional ventilation. The results from this work will help elucidate the relationship between global and regional lung function. Methods and Materials: Eight patients underwent 3 sets of 4D-CT scans during 1 session using free-breathing, audiovisual guidance, and active breathing control. FRV was estimated using a density-based algorithm with mass correction. Internal validation between global and regional ventilation was performed by use of the imaging data collected during the use of active breathing control. The impact of breathing maneuvers on FRV was evaluated comparing the tidal volume from 3 breathing methods. Results: Internal validation through comparison between the global and regional changes in ventilation revealed a strong linear correlation (slope of 1.01, R2 of 0.97) between the measured global lung volume and the regional lung volume calculated by use of the "mass corrected" FRV. A linear relationship was established between the tidal volume measured with the automated breathing control system and FRV based on 4D-CT imaging. Consistently larger breathing volumes were observed when coached breathing techniques were used. Conclusions: The technique presented improves density-based evaluation of lung ventilation and establishes a link between global and regional lung ventilation volumes.
Furthermore, the results obtained are comparable with those of other techniques of functional evaluation such as spirometry and hyperpolarized-gas magnetic resonance imaging. These results were demonstrated on retrospective analysis of patient data, and further research using prospective data is under way to validate this technique against established clinical tests.
Assessing estimation techniques for missing plot observations in the U.S. forest inventory
Grant M. Domke; Christopher W. Woodall; Ronald E. McRoberts; James E. Smith; Mark A. Hatfield
2012-01-01
The U.S. Forest Service, Forest Inventory and Analysis Program made a transition from state-by-state periodic forest inventories--with reporting standards largely tailored to regional requirements--to a nationally consistent, annual inventory tailored to large-scale strategic requirements. Lack of measurements on all forest land during the periodic inventory, along...
Operations Research techniques in the management of large-scale reforestation programs
Joseph Buongiorno; D.E. Teeguarden
1978-01-01
A reforestation planning system for the Douglas-fir region of the Western United States is described. Part of the system is a simulation model to predict plantation growth and to determine economic thinning regimes and rotation ages as a function of site characteristics, initial density, reforestation costs, and management constraints. A second model estimates the...
Ring Current Pressure Estimation with RAM-SCB using Data Assimilation and Van Allen Probe Flux Data
NASA Astrophysics Data System (ADS)
Godinez, H. C.; Yu, Y.; Henderson, M. G.; Larsen, B.; Jordanova, V.
2015-12-01
Capturing and subsequently modeling the influence of tail plasma injections on the inner magnetosphere is particularly important for understanding the formation and evolution of Earth's ring current. In this study, the ring current distribution is estimated with the Ring Current-Atmosphere Interactions Model with Self-Consistent Magnetic field (RAM-SCB) using, for the first time, data assimilation techniques and particle flux data from the Van Allen Probes. The state of the ring current within RAM-SCB is corrected via an ensemble-based data assimilation technique using proton flux from one of the Van Allen Probes, to capture the enhancement of the ring current following an isolated substorm event on July 18, 2013. The results show significant improvement in the estimation of the ring current particle distributions in the RAM-SCB model, leading to better agreement with observations. This newly implemented data assimilation technique in global modeling of the ring current thus provides a promising tool to better characterize the effect of substorm injections in the near-Earth regions. The work is part of the Space Hazards Induced near Earth by Large, Dynamic Storms (SHIELDS) project at Los Alamos National Laboratory.
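The ensemble-based correction step described above can be sketched as a stochastic ensemble Kalman filter analysis on a toy state. Everything here (state size, observation locations, error levels, the injected bias) is illustrative; it is not the RAM-SCB state vector or the actual Van Allen Probe geometry:

```python
import numpy as np

rng = np.random.default_rng(4)
n_state, n_ens, n_obs = 40, 20, 5   # toy "ring current" state and ensemble

truth = np.sin(np.linspace(0.0, 2.0 * np.pi, n_state))
# Biased prior ensemble: the free-running model misses an injection
ens = (truth + 1.0)[:, None] + rng.normal(0.0, 0.5, (n_state, n_ens))

obs_idx = np.array([3, 11, 19, 27, 35])   # "satellite flux" sample points
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), obs_idx] = 1.0
obs_err = 0.1
y = truth[obs_idx] + rng.normal(0.0, obs_err, n_obs)

# Stochastic EnKF analysis: K = P H^T (H P H^T + R)^-1, perturbed obs
A = ens - ens.mean(axis=1, keepdims=True)
P = A @ A.T / (n_ens - 1)
R = obs_err**2 * np.eye(n_obs)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
y_pert = y[:, None] + rng.normal(0.0, obs_err, (n_obs, n_ens))
analysis = ens + K @ (y_pert - H @ ens)

prior_rmse = np.sqrt(np.mean((ens.mean(1)[obs_idx] - truth[obs_idx]) ** 2))
post_rmse = np.sqrt(np.mean((analysis.mean(1)[obs_idx] - truth[obs_idx]) ** 2))
```

The ensemble covariance P spreads the single-spacecraft correction to neighboring state elements, which is the mechanism that lets sparse in-situ flux data constrain a global ring current model.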
Cooperative Position Aware Mobility Pattern of AUVs for Avoiding Void Zones in Underwater WSNs.
Javaid, Nadeem; Ejaz, Mudassir; Abdul, Wadood; Alamri, Atif; Almogren, Ahmad; Niaz, Iftikhar Azim; Guizani, Nadra
2017-03-13
In this paper, we propose two schemes: a position-aware mobility pattern (PAMP) and cooperative PAMP (Co-PAMP). The first is an optimization scheme that avoids void-hole occurrence and minimizes the uncertainty in the position estimation of gliders. The second is a cooperative routing scheme that reduces the packet drop ratio by using relay cooperation. Both techniques use gliders that stay at sojourn positions for a predefined time; at each sojourn position, self-confidence (s-confidence) and neighbor-confidence (n-confidence) regions are estimated for balanced energy consumption, and the transmission power of a glider is adjusted according to those confidence regions. Simulation results show that our proposed schemes outperform the existing scheme used for comparison in terms of packet delivery ratio, void zones, and energy consumption.
Waltemeyer, Scott D.
2008-01-01
Estimates of the magnitude and frequency of peak discharges are necessary for the reliable design of bridges, culverts, and open-channel hydraulic analysis, and for flood-hazard mapping in New Mexico and surrounding areas. The U.S. Geological Survey, in cooperation with the New Mexico Department of Transportation, updated estimates of peak-discharge magnitude for gaging stations in the region and updated regional equations for estimation of peak discharge and frequency at ungaged sites. Equations were developed for estimating the magnitude of peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years at ungaged sites by use of data collected through 2004 for 293 gaging stations on unregulated streams that have 10 or more years of record. Peak discharges for selected recurrence intervals were determined at gaging stations by fitting observed data to a log-Pearson Type III distribution with adjustments for a low-discharge threshold and a zero skew coefficient. A low-discharge threshold was applied to frequency analysis of 140 of the 293 gaging stations. This application provides an improved fit of the log-Pearson Type III frequency distribution. Use of the low-discharge threshold generally eliminated peak discharges having a recurrence interval of less than 1.4 years from the probability-density function. Within each of the nine regions, logarithms of the maximum peak discharges for selected recurrence intervals were related to logarithms of basin and climatic characteristics by using stepwise ordinary least-squares regression techniques for exploratory data analysis. Generalized least-squares regression techniques, an improved regression procedure that accounts for time and spatial sampling errors, then were applied to the same data used in the ordinary least-squares regression analyses.
The average standard error of prediction, which includes average sampling error and average standard error of regression, ranged from 38 to 93 percent (mean value 62, median value 59) for the 100-year flood. In the 1996 investigation, the standard error of prediction for the flood regions ranged from 41 to 96 percent (mean value 67, median value 68) for the 100-year flood analyzed by using generalized least-squares regression analysis. Overall, the equations based on generalized least-squares regression techniques are more reliable than those in the 1996 report because of the increased length of record and improved geographic information system (GIS) methods to determine basin and climatic characteristics. Flood-frequency estimates can be made for ungaged sites upstream or downstream from gaging stations by using a method that transfers flood-frequency data at the gaging station to the ungaged site through a drainage-area ratio adjustment equation. The peak discharge for a given recurrence interval at the gaging station, the drainage-area ratio, and the drainage-area exponent from the regional regression equation of the respective region are used to transfer the peak discharge for the recurrence interval to the ungaged site. Maximum observed peak discharge as related to drainage area was determined for New Mexico. Extreme events are commonly used in the design and appraisal of bridge crossings and other structures. Bridge-scour evaluations are commonly made by using the 500-year peak discharge for these appraisals. Peak-discharge data collected at 293 gaging stations and 367 miscellaneous sites were used to develop a maximum peak-discharge relation as an alternative method of estimating peak discharge of an extreme event such as a maximum probable flood.
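The two building blocks above, a log-Pearson Type III station fit and the drainage-area-ratio transfer, can be sketched on a made-up annual-peak record. With the zero skew coefficient used in the report, the LP3 quantile in log10 space reduces to a normal quantile; the censoring step below is a crude stand-in for the report's low-discharge-threshold adjustment, and every number (peaks, areas, exponent) is hypothetical:

```python
import numpy as np
from scipy.stats import norm

def lp3_quantile(peaks, aep, low_threshold=None):
    """Log-Pearson Type III quantile with a zero skew coefficient: in
    log10 space the distribution reduces to a normal, so the quantile is
    10**(mean + z*std). Peaks below a low-discharge threshold are simply
    dropped here, a crude stand-in for the conditional-probability
    adjustment used in practice."""
    logq = np.log10(np.asarray(peaks, dtype=float))
    if low_threshold is not None:
        logq = logq[logq >= np.log10(low_threshold)]
    z = norm.ppf(1.0 - aep)
    return 10.0 ** (logq.mean() + z * logq.std(ddof=1))

def transfer_to_ungaged(q_gaged, area_ungaged, area_gaged, exponent):
    """Drainage-area-ratio transfer of a T-year peak to an ungaged site,
    using the area exponent from the regional regression equation."""
    return q_gaged * (area_ungaged / area_gaged) ** exponent

peaks = [120, 310, 95, 540, 260, 180, 75, 410, 220, 150, 360, 130]  # cfs
q100 = lp3_quantile(peaks, aep=0.01)        # 100-year peak at the gage
q100_ungaged = transfer_to_ungaged(q100, area_ungaged=85.0,
                                   area_gaged=120.0, exponent=0.55)
```

A real analysis would use the regional (weighted) skew rather than zero and far longer records; the transfer step only applies when the ungaged site's drainage area is reasonably close to the gaged one.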
New estimates of changes in snow cover over Russia in recent decades
NASA Astrophysics Data System (ADS)
Bulygina, O.; Korshunova, N.; Razuvaev, V.; Groisman, P. Y.
2017-12-01
Snow cover plays critical roles in the energy and water balance of the Earth through its unique physical properties (high reflectivity and low thermal conductivity) and its water storage. The main objective of this research is to monitor snow cover change in Russia. Estimates of changes in the major snow characteristics (snow cover duration, maximum winter snow depth, and snow water equivalent) are described. In addition to long-term averages of these characteristics, estimates of their change averaged over quasi-homogeneous climatic regions are derived, and regional differences in the changes are studied. We used daily snow observations from 820 Russian meteorological stations, all of unprotected type, for 1966 to 2017. Snow water equivalent is analyzed from snow-course survey data at 958 meteorological stations over the same period. The time series were prepared by RIHMI-WDC. Regional analysis of the snow cover data was carried out over quasi-homogeneous climatic regions: station values were converted to anomalies with respect to a common reference period (in this study, 1981-2010), arithmetically averaged within 1°N x 2°E grid cells, and then combined as weighted averages over the quasi-homogeneous climatic regions. This approach provides a more uniform spatial field for averaging. By using a denser network of meteorological stations and bringing snow-course data into consideration, we were able to refine the changes in all observed major snow characteristics, to obtain estimates generalized over quasi-homogeneous climatic regions, and to detect changes in the dates of establishment and disappearance of the snow cover.
Chen, Gang; Li, Jingyi; Ying, Qi; Sherman, Seth; Perkins, Neil; Rajeshwari, Sundaram; Mendola, Pauline
2014-01-01
In this study, the Community Multiscale Air Quality (CMAQ) model was applied to predict ambient gaseous and particulate concentrations during 2001 to 2010 in 15 hospital referral regions (HRRs) using a 36-km horizontal resolution domain. An inverse-distance-weighting-based method was applied to produce exposure estimates from observation-fused regional pollutant concentration fields, using the differences between observations and predictions at grid cells where air quality monitors were located. Although the raw CMAQ model is capable of producing satisfactory results for O3 and PM2.5 based on EPA guidelines, using the observation-fusing technique to correct CMAQ predictions leads to significant improvement of model performance for all gaseous and particulate pollutants. Regional average concentrations were calculated using five different methods: 1) inverse distance weighting of observation data alone, 2) raw CMAQ results, 3) observation-fused CMAQ results, 4) population-averaged raw CMAQ results, and 5) population-averaged fused CMAQ results. The comparison shows that while the O3 (as well as NOx) monitoring networks in the HRR regions are dense enough to provide consistent regional average exposure estimates based on monitoring data alone, PM2.5 observation sites (as well as monitors for CO, SO2, PM10, and PM2.5 components) are usually sparse, and the average concentrations estimated by the inverse-distance-interpolated observations, raw CMAQ, and fused CMAQ results can differ significantly. Population-weighted averages should be used to account for spatial variation in pollutant concentration and population density. Using raw CMAQ results or observations alone might lead to significant biases in health outcome analyses. PMID:24747248
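The observation-fusion step above amounts to interpolating the monitor-cell residuals (observation minus model) to every grid cell by inverse distance weighting and adding them back to the raw predictions. A minimal sketch on a toy 4-cell grid with made-up concentrations:

```python
import numpy as np

def idw_fused(model_grid, grid_xy, obs_xy, obs_val, model_at_obs, power=2.0):
    """Observation-fused field: interpolate the (observation - model)
    residuals to every grid cell by inverse-distance weighting and add
    them to the raw model predictions."""
    resid = obs_val - model_at_obs
    fused = model_grid.astype(float).copy()
    for i, p in enumerate(grid_xy):
        d = np.linalg.norm(obs_xy - p, axis=1)
        if d.min() < 1e-12:                 # this cell contains a monitor
            fused[i] += resid[d.argmin()]
        else:
            w = 1.0 / d**power
            fused[i] += (w * resid).sum() / w.sum()
    return fused

# Toy example: 4 cells on a line, monitors sitting on cells 0 and 3
grid_xy = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
model = np.array([10.0, 12.0, 14.0, 16.0])   # raw model predictions
obs_xy = grid_xy[[0, 3]]
obs = np.array([13.0, 15.0])
fused = idw_fused(model, grid_xy, obs_xy, obs, model[[0, 3]])
```

At monitor cells the fused field reproduces the observations exactly, while elsewhere the model's spatial gradients survive with a distance-weighted bias correction, which is why the technique helps most where monitors are sparse.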
A volumetric technique for fossil body mass estimation applied to Australopithecus afarensis.
Brassey, Charlotte A; O'Mahoney, Thomas G; Chamberlain, Andrew T; Sellers, William I
2018-02-01
Fossil body mass estimation is a well established practice within the field of physical anthropology. Previous studies have relied upon traditional allometric approaches, in which the relationship between one/several skeletal dimensions and body mass in a range of modern taxa is used in a predictive capacity. The lack of relatively complete skeletons has thus far limited the potential application of alternative mass estimation techniques, such as volumetric reconstruction, to fossil hominins. Yet across vertebrate paleontology more broadly, novel volumetric approaches are resulting in predicted values for fossil body mass very different to those estimated by traditional allometry. Here we present a new digital reconstruction of Australopithecus afarensis (A.L. 288-1; 'Lucy') and a convex hull-based volumetric estimate of body mass. The technique relies upon identifying a predictable relationship between the 'shrink-wrapped' volume of the skeleton and known body mass in a range of modern taxa, and subsequent application to an articulated model of the fossil taxon of interest. Our calibration dataset comprises whole body computed tomography (CT) scans of 15 species of modern primate. The resulting predictive model is characterized by a high correlation coefficient (r2 = 0.988) and a percentage standard error of 20%, and performs well when applied to modern individuals of known body mass. Application of the convex hull technique to A. afarensis results in a relatively low body mass estimate of 20.4 kg (95% prediction interval 13.5-30.9 kg). A sensitivity analysis on the articulation of the chest region highlights the sensitivity of our approach to the reconstruction of the trunk, and the incomplete nature of the preserved ribcage may explain the low values for predicted body mass here.
We suggest that the heaviest of previous estimates would require the thorax to be expanded to an unlikely extent, yet this can only be properly tested when more complete fossils are available. Copyright © 2017 Elsevier Ltd. All rights reserved.
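The convex hull pipeline above has two steps: compute a "shrink-wrapped" volume from the articulated skeleton, then apply a power-law calibration between hull volume and body mass fitted on modern taxa. A sketch with a synthetic landmark cloud and an entirely hypothetical calibration set (not the paper's primate CT data):

```python
import numpy as np
from scipy.spatial import ConvexHull

# "Shrink-wrapped" volume, approximated here by the convex hull of a
# synthetic 3-D landmark cloud standing in for skeletal landmarks
rng = np.random.default_rng(5)
points = rng.normal(0.0, 0.15, (400, 3))     # torso-sized blob, metres
hull_vol = ConvexHull(points).volume         # m^3

# Hypothetical calibration set: hull volume (m^3) vs body mass (kg) for
# modern taxa, assumed to follow mass = a * volume**b
cal_vol = np.array([0.002, 0.008, 0.02, 0.05, 0.12, 0.30])
cal_mass = np.array([1.8, 7.5, 19.0, 48.0, 115.0, 290.0])
b, log_a = np.polyfit(np.log(cal_vol), np.log(cal_mass), 1)

predicted_mass = np.exp(log_a) * hull_vol**b   # kg, for the "fossil" hull
```

The 95% prediction interval quoted in the abstract comes from the regression's residual scatter; the sensitivity analysis on the trunk articulation matters precisely because hull volume, and hence predicted mass, responds directly to how the ribcage is reassembled.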
Regional Frequency Analysis of Ocean Hazard
NASA Astrophysics Data System (ADS)
Bernardara, Pietro; Weiss, Jerome; Benoit, Michel; Andreewsky, Marc
2015-04-01
The estimation of extreme return levels (annual exceedance probabilities as low as 10-4) of natural phenomena is a very uncertain exercise when extrapolating from the information and measurements collected at a single site. The aim of Regional Frequency Analysis (RFA) is to benefit from the information contained in observations and data collected not only at the site of interest but over a larger set of sites, located in the same region as the site of interest or sharing similar characteristics with it. The technique was introduced in the 1960s and is widely used in various domains, including hydrology and meteorology. RFA was recently acknowledged as a potential choice for the estimation of flooding hazard in the Methodological Guide for flooding hazard estimation [1], published in 2013 by the French Nuclear Safety Authority. The aim of this presentation is to introduce the main concepts of RFA and to illustrate the latest innovations in its application developed by EDF R&D. These concern the statistical definition of storms, the formation of homogeneous regions, and a new approach for filtering the redundant information linked to the spatial correlation of natural phenomena. Applications to skew surges and waves will be shown. 1. ASN, Guide pour la Protection des installations nucléaires de base contre les inondations externes. 2013, ASN. p. 44.
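A common concrete form of RFA is the index-flood approach: records from a homogeneous region are normalized by a local index, pooled, and a single regional growth curve is fitted to the pooled sample. A sketch with synthetic skew-surge annual maxima and a method-of-moments Gumbel fit (the site indices, scales, and record lengths are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
# Annual maxima (m) at 4 "homogeneous" sites sharing one growth curve
# but different local scales -- the index-flood assumption
local_index = np.array([0.8, 1.0, 1.3, 1.1])
samples = [idx * (1.0 + 0.15 * rng.gumbel(size=40)) for idx in local_index]

# Normalize each record by its local index (here the sample mean),
# pool, and fit a Gumbel growth curve by the method of moments
pooled = np.concatenate([s / s.mean() for s in samples])
beta = np.sqrt(6.0) * pooled.std(ddof=1) / np.pi
mu = pooled.mean() - 0.5772 * beta

def regional_quantile(T, site_index):
    """T-year level at a site: local index times the regional growth
    factor from the pooled Gumbel fit."""
    growth = mu - beta * np.log(-np.log(1.0 - 1.0 / T))
    return site_index * growth

level_100 = regional_quantile(100.0, site_index=1.1)
```

Pooling effectively multiplies the sample size available for the tail, which is what makes 10-4-probability extrapolation less fragile than a single-site fit; the innovations cited above (storm definition, region formation, spatial-redundancy filtering) address how legitimate that pooling is.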
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, K.M.
1992-10-01
Sequential indicator simulation (SIS) is a geostatistical technique designed to aid in the characterization of uncertainty about the structure or behavior of natural systems. This report discusses a simulation experiment designed to study the quality of uncertainty bounds generated using SIS. The results indicate that, while SIS may produce reasonable uncertainty bounds in many situations, factors like the number and location of available sample data, the quality of variogram models produced by the user, and the characteristics of the geologic region to be modeled, can all have substantial effects on the accuracy and precision of estimated confidence limits. It is recommended that users of SIS conduct validation studies for the technique on their particular regions of interest before accepting the output uncertainty bounds.
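The mechanics of SIS can be sketched in one dimension: visit unsampled nodes in random order, estimate the conditional indicator probability from all data (original plus previously simulated), draw a value, and add it to the conditioning set. The inverse-distance weight below is a crude stand-in for the indicator kriging with a fitted variogram that real SIS uses, and the transect and data are invented:

```python
import numpy as np

rng = np.random.default_rng(7)
# 1-D transect of 50 nodes; binary facies indicator (1 = sand, say)
n = 50
known = {5: 1, 20: 0, 40: 1}               # conditioning data
path = np.array([i for i in range(n) if i not in known])
rng.shuffle(path)                          # random visiting order

sim = dict(known)
for i in path:
    locs = np.array(list(sim.keys()))
    vals = np.array(list(sim.values()), dtype=float)
    w = 1.0 / np.maximum(np.abs(locs - i), 1) ** 2
    p_sand = (w * vals).sum() / w.sum()    # estimated Prob(indicator = 1)
    sim[i] = int(rng.random() < p_sand)    # draw, then condition on it
field = np.array([sim[i] for i in range(n)])
```

Repeating the loop with different seeds yields an ensemble of realizations; the per-node frequency across that ensemble is exactly the kind of uncertainty bound whose validity the report's experiment examines.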
NASA Astrophysics Data System (ADS)
Brown, M. G. L.; He, T.; Liang, S.
2016-12-01
Satellite-derived estimates of incident photosynthetically active radiation (PAR) can be used to monitor global change, are required by most terrestrial ecosystem models, and can be used to estimate primary production according to the theory of light use efficiency. Compared with parametric approaches, non-parametric techniques such as artificial neural networks (ANN), support vector machine regression (SVM), artificial bee colony (ABC) optimization, and look-up tables (LUT) do not require many ancillary data as inputs for the estimation of PAR from satellite data. In this study, a selection of machine learning methods to estimate PAR from MODIS top of atmosphere (TOA) radiances is compared to a LUT approach to determine which techniques might best handle the nonlinear relationship between TOA radiance and incident PAR. Evaluation of these methods (ANN, SVM, and LUT) is performed with ground measurements at seven SURFRAD sites. Due to the design of the ANN, it can handle the nonlinear relationship between TOA radiance and PAR better than linearly interpolating between the values in the LUT; however, training the ANN has to be carried out on an angular-bin basis, which results in a LUT of ANNs. The SVM model may be better for incorporating multiple viewing angles than the ANN; however, both techniques require a large amount of training data, which may introduce a regional bias based on where the most training and validation data are available. Based on the literature, the ABC is a promising alternative to an ANN, SVM regression and a LUT, but further development for this application is required before concrete conclusions can be drawn. For now, the LUT method outperforms the machine-learning techniques, but future work should be directed at developing and testing the ABC method.
A simple, robust method to estimate direct and diffuse incident PAR, with minimal inputs and a priori knowledge, would be very useful for monitoring global change of primary production, particularly of pastures and rangeland, which have implications for livestock and food security. Future work will delve deeper into the utility of satellite-derived PAR estimation for monitoring primary production in pasture and rangelands.
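The LUT-versus-regression trade-off described above can be sketched with synthetic data: a coarse look-up table queried by linear interpolation, against a flexible regression fit standing in for the ANN/SVM estimators. Everything here, including the square-root-shaped radiance-to-PAR relation, is an assumption for illustration, not the MODIS/SURFRAD data.

```python
import numpy as np

rng = np.random.default_rng(42)
toa = rng.uniform(0.01, 1.0, 1000)              # normalized TOA radiance
par_true = 400.0 * np.sqrt(toa)                 # assumed nonlinear truth (W/m^2)
par_obs = par_true + rng.normal(0.0, 5.0, toa.size)

# (a) coarse LUT built from the same model, queried by linear interpolation
nodes = np.linspace(0.01, 1.0, 6)
lut = 400.0 * np.sqrt(nodes)
par_lut = np.interp(toa, nodes, lut)

# (b) flexible regression (cubic polynomial) trained on the noisy samples,
#     standing in for the ANN/SVM estimators
coef = np.polyfit(toa, par_obs, 3)
par_fit = np.polyval(coef, toa)

rmse = lambda est: float(np.sqrt(np.mean((est - par_true) ** 2)))
print(f"LUT RMSE: {rmse(par_lut):.2f}  regression RMSE: {rmse(par_fit):.2f}")
```

The interpolation error of the coarse LUT grows where the relation curves most, while the regression's error depends on where training samples are concentrated, mirroring the regional-bias caveat above.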
NASA Technical Reports Server (NTRS)
Veselovskii, I.; Whiteman, D. N.; Korenskiy, M.; Kolgotin, A.; Dubovik, O.; Perez-Ramirez, D.; Suvorina, A.
2013-01-01
The results of the application of the linear estimation technique to multiwavelength Raman lidar measurements performed during the summer of 2011 in Greenbelt, MD, USA, are presented. We demonstrate that multiwavelength lidars are capable not only of providing vertical profiles of particle properties but also of revealing the spatio-temporal evolution of aerosol features. The nighttime 3β + 1α lidar measurements on 21 and 22 July were inverted to spatio-temporal distributions of particle microphysical parameters, such as volume, number density, effective radius and the complex refractive index. The particle volume and number density show strong variation during the night, while the effective radius remains approximately constant. The real part of the refractive index demonstrates a slight decreasing tendency in a region of enhanced extinction coefficient. The linear estimation retrievals are stable and provide time series of particle parameters as a function of height at 4 min resolution. AERONET observations are compared with multiwavelength lidar retrievals showing good agreement.
The estimation of the Earth's gravity field
NASA Astrophysics Data System (ADS)
Szabo, Bela
1986-06-01
The various methods for the description of the Earth's gravity field from direct and/or indirect observations are reviewed. Geopotential models produced by various organizations and in use during the past 15 years are discussed in detail. Recent and future programs for the improvement of global gravity fields are reviewed and the expected improvements from new observation and data processing techniques are estimated. The regional and local gravity field is also reviewed. The various data types and their spectral properties, and the sensitivities of the different gravimetric quantities to data types, are discussed. The techniques for the estimation of gravimetric quantities and the achievable accuracies are presented (e.g., integral formulae, collocation). The results of recent works in this area by prominent authors are reviewed. The prediction of gravity outside the Earth from surface data is discussed in two forms: a) prediction of gravity disturbance at high altitudes and b) upward continuation of gravity anomalies. The achievable improvements of the high frequency field by airborne gradiometry are summarized utilizing recent investigations.
Revisiting the Least-squares Procedure for Gradient Reconstruction on Unstructured Meshes
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.; Thomas, James L. (Technical Monitor)
2003-01-01
The accuracy of the least-squares technique for gradient reconstruction on unstructured meshes is examined. While least-squares techniques produce accurate results on arbitrary isotropic unstructured meshes, serious difficulties exist for highly stretched meshes in the presence of surface curvature. In these situations, gradients are typically under-estimated by up to an order of magnitude. For vertex-based discretizations on triangular and quadrilateral meshes, and cell-centered discretizations on quadrilateral meshes, accuracy can be recovered using an inverse distance weighting in the least-squares construction. For cell-centered discretizations on triangles, both the unweighted and weighted least-squares constructions fail to provide suitable gradient estimates for highly stretched curved meshes. Good overall flow solution accuracy can be retained in spite of poor gradient estimates, due to the presence of flow alignment in exactly the same regions where the poor gradient accuracy is observed. However, the use of entropy fixes has the potential for generating large but subtle discretization errors.
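The weighted construction discussed above can be sketched generically: the gradient at a cell is the minimizer of Σᵢ wᵢ²(g·Δxᵢ − Δuᵢ)² over the neighbor offsets Δxᵢ, with wᵢ = 1/‖Δxᵢ‖ for inverse distance weighting. The stretched stencil below is a made-up example, not the paper's meshes; on a linear field both variants are exact, and the weighting only starts to matter once surface curvature enters.

```python
import numpy as np

def ls_gradient(dx, du, weights=None):
    """Solve min_g sum_i w_i^2 (g . dx_i - du_i)^2 for the gradient g."""
    w = np.ones(len(du)) if weights is None else np.asarray(weights)
    g, *_ = np.linalg.lstsq(dx * w[:, None], du * w, rcond=None)
    return g

# Highly stretched stencil of neighbor offsets (dy ~ 1e-3 * dx), as on a
# boundary-layer mesh; offsets are invented for illustration.
dx = np.array([[ 1.0,  0.0], [-1.0,  0.0],
               [ 0.0,  1e-3], [ 0.0, -1e-3],
               [ 1.0,  1e-3], [-1.0, -1e-3]])
u = lambda p: 3.0 * p[0] - 2.0 * p[1]        # linear field, exact grad (3, -2)
du = np.array([u(p) for p in dx])

g_unweighted = ls_gradient(dx, du)
g_weighted = ls_gradient(dx, du, weights=1.0 / np.linalg.norm(dx, axis=1))
print(g_unweighted, g_weighted)              # both recover (3, -2) here
```

Note that the inverse-distance weights also improve the conditioning of the stretched system, which is one reason they help recover accuracy on curved anisotropic meshes.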
Utilization of electrical impedance imaging for estimation of in-vivo tissue resistivities
NASA Astrophysics Data System (ADS)
Eyuboglu, B. Murat; Pilkington, Theo C.
1993-08-01
In order to determine in vivo resistivity of tissues in the thorax, the possibility of combining electrical impedance imaging (EII) techniques with (1) anatomical data extracted from high resolution images, (2) a priori knowledge of tissue resistivities, and (3) a priori noise information was assessed in this study. A Least Square Error Estimator (LSEE) and a statistically constrained Minimum Mean Square Error Estimator (MiMSEE) were implemented to estimate regional electrical resistivities from potential measurements made on the body surface. A two dimensional boundary element model of the human thorax, which consists of four different conductivity regions (the skeletal muscle, the heart, the right lung, and the left lung), was adopted to simulate the measured EII torso potentials. The calculated potentials were then perturbed by simulated instrumentation noise. The signal information used to form the statistical constraint for the MiMSEE was obtained from a priori knowledge of the physiological range of tissue resistivities. The noise constraint was determined from a priori knowledge of errors due to linearization of the forward problem and to the instrumentation noise.
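For a linear measurement model b = Ax + n, the two estimators can be sketched generically: the LSEE ignores the statistical constraints, while the MiMSEE blends the measurements with a prior mean and the signal/noise covariances. The matrices and "resistivity" values below are synthetic stand-ins, not the boundary-element thorax model.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 4))               # sensitivity: 20 surface potentials
                                           # to 4 conductivity regions (synthetic)
x_true = np.array([2.0, 6.0, 15.0, 14.0])  # "tissue resistivities" (arbitrary)
Rn = 0.01 * np.eye(20)                     # instrumentation-noise covariance
Rx = 25.0 * np.eye(4)                      # prior spread of resistivities
x0 = np.array([2.0, 5.0, 12.0, 12.0])      # prior mean (physiological range)
b = A @ x_true + rng.multivariate_normal(np.zeros(20), Rn)

# LSEE: unconstrained least squares
x_lsee = np.linalg.solve(A.T @ A, A.T @ b)

# MiMSEE: statistically constrained, shrinks toward the prior mean
Rni = np.linalg.inv(Rn)
x_mimsee = x0 + np.linalg.solve(A.T @ Rni @ A + np.linalg.inv(Rx),
                                A.T @ Rni @ (b - A @ x0))
print(x_lsee, x_mimsee)
```

The regularizing term inv(Rx) is what keeps the MiMSEE stable when the forward operator is poorly conditioned, at the cost of a bias toward the prior mean.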
A time series deformation estimation in the NW Himalayas using SBAS InSAR technique
NASA Astrophysics Data System (ADS)
Kumar, V.; Venkataraman, G.
2012-12-01
A time series of land deformation in the northwestern Himalayan region is presented in this study. Synthetic aperture radar (SAR) interferometry (InSAR) is an important tool for measuring land displacement caused by different geological processes [1]. Frequent spatial and temporal decorrelation in the Himalayan region is a strong impediment to precise deformation estimation using the conventional interferometric SAR approach. In such cases, advanced DInSAR approaches such as PSInSAR and the Small Baseline Subset (SBAS) can be used to estimate earth surface deformation. The SBAS technique [2] is a DInSAR approach that uses twelve or more repeat SAR acquisitions in different combinations of properly chosen data subsets for generation of DInSAR interferograms using a two-pass interferometric approach. Finally, it leads to the generation of mean deformation velocity maps and displacement time series. Herein, the SBAS algorithm has been used for time series deformation estimation in the NW Himalayan region. ENVISAT ASAR IS2 swath data from 2003 to 2008 have been used for quantifying slow deformation. The Himalayan region is a very active tectonic belt, and active orogeny plays a significant role in the land deformation process [3]. Geomorphology in the region is unique and reacts adversely to climate change, bringing landslides and subsidence. Settlements on the hill slopes are prone to landslides, landslips, rockslides and soil creep. These hazardous features have hampered the overall progress of the region as they obstruct roads and the flow of traffic, break communication, block flowing water in streams, create temporary reservoirs, and bring down much soil cover, thus adding enormous silt and gravel to the streams. It has been observed that average deformation varies from -30.0 mm/year to 10 mm/year in the NW Himalayan region. References [1] Massonnet, D., Feigl, K.L., Rossi, M. and Adragna, F.
(1994). Radar interferometry mapping of deformation in the year after the Landers earthquake. Nature, 369, 227-230. [2] Berardino, P., Fornaro, G., Lanari, R., Sansosti, E. (2002). A new algorithm for surface deformation monitoring based on small baseline differential SAR interferograms. IEEE Transactions on Geoscience and Remote Sensing, 40(11), 2375-2383. [3] Geological Survey of India (GSI) (1999). Inventory of the Himalayan glaciers. Special publication, vol. 34, pp. 165-168. [4] Chen, C.W., and Zebker, H.A. (2000). Network approaches to two-dimensional phase unwrapping: intractability and two new algorithms. Journal of the Optical Society of America A, 17, 401-414.
Prediction of hydrocarbons in sedimentary basins
Harff, J.E.; Davis, J.C.; Eiserbeck, W.
1993-01-01
To estimate the undiscovered hydrocarbon potential of sedimentary basins, quantitative play assessments specific for each location in a region may be obtained using geostatistical methods combined with the theory of classification of geological objects, a methodology referred to as regionalization. The technique relies on process modeling and measured borehole data as well as probabilistic methods to exploit the relationship between geology (the "predictor") and known hydrocarbon productivity (the "target") to define prospective stratigraphic intervals within a basin. It is demonstrated in case studies from the oil-producing region of the western Kansas Pennsylvanian Shelf and the gas-bearing Rotliegend sediments of the Northeast German Basin. © 1993 International Association for Mathematical Geology.
NASA Technical Reports Server (NTRS)
Gao, Bo-Cai; Goetz, Alexander F. H.
1992-01-01
Over the last decade, technological advances in airborne imaging spectrometers, having spectral resolution comparable with laboratory spectrometers, have made it possible to estimate biochemical constituents of vegetation canopies. Wessman estimated lignin concentration from data acquired with NASA's Airborne Imaging Spectrometer (AIS) over Blackhawk Island in Wisconsin. A stepwise linear regression technique was used to determine the single spectral channel or channels in the AIS data that best correlated with measured lignin contents using chemical methods. The regression technique does not take advantage of the spectral shape of the lignin reflectance feature as a diagnostic tool nor the increased discrimination among other leaf components with overlapping spectral features. A nonlinear least squares spectral matching technique was recently reported for deriving both the equivalent water thicknesses of surface vegetation and the amounts of water vapor in the atmosphere from contiguous spectra measured with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). The same technique was applied to a laboratory reflectance spectrum of fresh, green leaves. The result demonstrates that the fresh leaf spectrum in the 1.0-2.5 microns region consists of spectral components of dry leaves and the spectral component of liquid water. A linear least squares spectral matching technique for retrieving equivalent water thickness and biochemical components of green vegetation is described.
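The linear least-squares spectral matching idea reduces to ordinary linear unmixing: stack the component spectra as columns of a design matrix and solve for the mixing weights. The "dry leaf" and "water" spectra below are synthetic shapes assumed for illustration, not AVIRIS or laboratory data.

```python
import numpy as np

# Wavelength grid over the 1.0-2.5 micron region discussed above
wl = np.linspace(1.0, 2.5, 200)

# Assumed component spectra: a smooth dry-leaf background and two
# Gaussian-shaped liquid-water absorption features near 1.45 and 1.94 microns
dry = 0.4 + 0.1 * np.sin(2 * np.pi * wl)
water = np.exp(-((wl - 1.45) / 0.1) ** 2) + np.exp(-((wl - 1.94) / 0.1) ** 2)

true_w = np.array([0.7, 0.25])                  # dry fraction, water term
measured = true_w[0] * dry + true_w[1] * water  # synthetic "fresh leaf" spectrum

A = np.column_stack([dry, water])               # design matrix of components
w, *_ = np.linalg.lstsq(A, measured, rcond=None)
print(w)                                        # recovers the mixing weights
```

Because the fit uses the full spectral shape of each component rather than a single best-correlated channel, overlapping features are disentangled by the least-squares solution itself.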
NASA Astrophysics Data System (ADS)
Beskow, Samuel; de Mello, Carlos Rogério; Vargas, Marcelle M.; Corrêa, Leonardo de L.; Caldeira, Tamara L.; Durães, Matheus F.; de Aguiar, Marilton S.
2016-10-01
Information on stream flows is essential for water resources management. The stream flow that is equaled or exceeded 90% of the time (Q90) is one of the most used low stream flow indicators in many countries, and its determination is made from the frequency analysis of stream flows considering a historical series. However, the stream flow gauging network is generally not spatially sufficient to meet the necessary demands of technicians, thus the most plausible alternative is the use of hydrological regionalization. The objective of this study was to couple the artificial intelligence (AI) techniques K-means, Partitioning Around Medoids (PAM), K-harmonic means (KHM), Fuzzy C-means (FCM) and Genetic K-means (GKA) with measures of low stream flow seasonality, to verify their potential to delineate hydrologically homogeneous regions for the regionalization of Q90. For the performance analysis of the proposed methodology, location attributes from 108 watersheds situated in southern Brazil, and attributes associated with their seasonality of low stream flows, were considered in this study. It was concluded that: (i) AI techniques have the potential to delineate hydrologically homogeneous regions in the context of Q90 in the study region, especially the FCM method, based on fuzzy logic, and GKA, based on genetic algorithms; (ii) the attributes related to seasonality of low stream flows added important information that increased the accuracy of the grouping; and (iii) the adjusted mathematical models have excellent performance and can be used to estimate Q90 in locations lacking monitoring.
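As a minimal sketch of the clustering step, here is a plain K-means loop (the simplest of the five AI techniques listed) applied to made-up watershed attributes; the coordinates and seasonality index are assumptions for illustration, not the 108 Brazilian watersheds.

```python
import numpy as np

def kmeans(X, centers, iters=100):
    """Plain K-means (Lloyd's algorithm) from given initial centers."""
    centers = centers.copy()
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)                 # nearest-center assignment
        new = np.array([X[labels == j].mean(axis=0)
                        for j in range(len(centers))])
        if np.allclose(new, centers):              # converged
            break
        centers = new
    return labels, centers

# Synthetic watershed attributes: (lat, lon, low-flow seasonality index).
# Two well-separated groups stand in for hydrologically distinct regions.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal([-31.0, -52.0, 0.2], 0.1, (20, 3)),
               rng.normal([-29.0, -50.0, 0.8], 0.1, (20, 3))])
labels, _ = kmeans(X, centers=X[[0, -1]])  # deterministic init for the sketch
print(labels)
```

FCM and GKA replace the hard nearest-center assignment with fuzzy memberships and a genetic search over partitions, respectively, which is where the study found the accuracy gains.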
Adaptive enhanced sampling by force-biasing using neural networks
NASA Astrophysics Data System (ADS)
Guo, Ashley Z.; Sevgen, Emre; Sidky, Hythem; Whitmer, Jonathan K.; Hubbell, Jeffrey A.; de Pablo, Juan J.
2018-04-01
A machine learning assisted method is presented for molecular simulation of systems with rugged free energy landscapes. The method is general and can be combined with other advanced sampling techniques. In the particular implementation proposed here, it is illustrated in the context of an adaptive biasing force approach where, rather than relying on discrete force estimates, one can resort to a self-regularizing artificial neural network to generate continuous, estimated generalized forces. By doing so, the proposed approach addresses several shortcomings common to adaptive biasing force and other algorithms. Specifically, the neural network enables (1) smooth estimates of generalized forces in sparsely sampled regions, (2) force estimates in previously unexplored regions, and (3) continuous force estimates with which to bias the simulation, as opposed to biases generated at specific points of a discrete grid. The usefulness of the method is illustrated with three different examples, chosen to highlight the wide range of applicability of the underlying concepts. In all three cases, the new method is found to enhance considerably the underlying traditional adaptive biasing force approach. The method is also found to provide improvements over previous implementations of neural network assisted algorithms.
Poynton, Clare; Jenkinson, Mark; Adalsteinsson, Elfar; Sullivan, Edith V.; Pfefferbaum, Adolf; Wells, William
2015-01-01
There is increasing evidence that iron deposition occurs in specific regions of the brain in normal aging and neurodegenerative disorders such as Parkinson's, Huntington's, and Alzheimer's disease. Iron deposition changes the magnetic susceptibility of tissue, which alters the MR signal phase, and allows estimation of susceptibility differences using quantitative susceptibility mapping (QSM). We present a method for quantifying susceptibility by inversion of a perturbation model, or ‘QSIP’. The perturbation model relates phase to susceptibility using a kernel calculated in the spatial domain, in contrast to previous Fourier-based techniques. A tissue/air susceptibility atlas is used to estimate B0 inhomogeneity. QSIP estimates in young and elderly subjects are compared to postmortem iron estimates, maps of the Field-Dependent Relaxation Rate Increase (FDRI), and the L1-QSM method. Results for both groups showed excellent agreement with published postmortem data and in-vivo FDRI: statistically significant Spearman correlations ranging from Rho = 0.905 to Rho = 1.00 were obtained. QSIP also showed improvement over FDRI and L1-QSM: reduced variance in susceptibility estimates and statistically significant group differences were detected in striatal and brainstem nuclei, consistent with age-dependent iron accumulation in these regions. PMID:25248179
Estimation of wave phase speed and nearshore bathymetry from video imagery
Stockdon, H.F.; Holman, R.A.
2000-01-01
A new remote sensing technique based on video image processing has been developed for the estimation of nearshore bathymetry. The shoreward propagation of waves is measured using pixel intensity time series collected at a cross-shore array of locations using remotely operated video cameras. The incident band is identified, and the cross-spectral matrix is calculated for this band. The cross-shore component of wavenumber is found as the gradient in phase of the first complex empirical orthogonal function of this matrix. Water depth is then inferred from linear wave theory's dispersion relationship. Full bathymetry maps may be measured by collecting data in a large array composed of both cross-shore and longshore lines. Data are collected hourly throughout the day, and a stable, daily estimate of bathymetry is calculated from the median of the hourly estimates. The technique was tested using 30 days of hourly data collected at the SandyDuck experiment in Duck, North Carolina, in October 1997. Errors calculated as the difference between estimated depth and ground truth data show a mean bias of -35 cm (rms error = 91 cm). Expressed as a fraction of the true water depth, the mean percent error was 13% (rms error = 34%). Excluding the region of known wave nonlinearities over the bar crest, the accuracy of the technique improved, and the mean (rms) error was -20 cm (75 cm). Additionally, under low-amplitude swells (wave height H ≤ 1 m), the performance of the technique across the entire profile improved to 6% (29%) of the true water depth with a mean (rms) error of -12 cm (71 cm). Copyright 2000 by the American Geophysical Union.
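The depth-inversion step relies on linear wave theory's dispersion relation ω² = gk·tanh(kh), which can be solved for depth in closed form, h = artanh(ω²/(gk))/k, once the cross-shore wavenumber k has been estimated from the video phase gradients. A sketch with assumed numbers (not the SandyDuck data):

```python
import math

def depth_from_dispersion(omega, k, g=9.81):
    """Invert the linear dispersion relation omega^2 = g k tanh(k h) for h."""
    ratio = omega ** 2 / (g * k)
    if not 0.0 < ratio < 1.0:
        raise ValueError("observed (omega, k) inconsistent with linear theory")
    return math.atanh(ratio) / k

# Example: 10 s swell (omega = 2*pi/10 rad/s) with an estimated cross-shore
# wavenumber of 0.08 rad/m implies a depth of roughly 7 m.
omega = 2.0 * math.pi / 10.0
h = depth_from_dispersion(omega, 0.08)
print(f"inferred depth: {h:.2f} m")
```

The guard on ω²/(gk) < 1 reflects the deep-water limit: beyond it no finite depth satisfies the linear dispersion relation, which is one way nonlinearity over the bar crest degrades the estimates.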
Inverse Regional Modeling with Adjoint-Free Technique
NASA Astrophysics Data System (ADS)
Yaremchuk, M.; Martin, P.; Panteleev, G.; Beattie, C.
2016-02-01
The ongoing parallelization trend in computer technologies facilitates the use of ensemble methods in geophysical data assimilation. Of particular interest are ensemble techniques which do not require the development of tangent linear numerical models and their adjoints for optimization. These "adjoint-free" methods minimize the cost function within a sequence of subspaces spanned by carefully chosen sets of perturbations of the control variables. In this presentation, an adjoint-free variational technique (a4dVar) is demonstrated in an application estimating initial conditions of two numerical models: the Navy Coastal Ocean Model (NCOM), and the surface wave model (WAM). With the NCOM, performance of both adjoint and adjoint-free 4dVar data assimilation techniques is compared in application to the hydrographic surveys and velocity observations collected in the Adriatic Sea in 2006. Numerical experiments have shown that a4dVar is capable of providing forecast skill similar to that of conventional 4dVar at comparable computational expense while being less susceptible to excitation of ageostrophic modes that are not supported by observations. The adjoint-free technique constrained by the WAM model is tested in a series of data assimilation experiments with synthetic observations in the southern Chukchi Sea. The types of considered observations are directional spectra estimated from point measurements by stationary buoys, significant wave height (SWH) observations by coastal high-frequency radars and along-track SWH observations by satellite altimeters. The a4dVar forecast skill is shown to be 30-40% better than the skill of the sequential assimilation method based on optimal interpolation which is currently used in operations. Prospects of further development of the a4dVar methods in regional applications are discussed.
Use of MODIS Sensor Images Combined with Reanalysis Products to Retrieve Net Radiation in Amazonia
de Oliveira, Gabriel; Brunsell, Nathaniel A.; Moraes, Elisabete C.; Bertani, Gabriel; dos Santos, Thiago V.; Shimabukuro, Yosio E.; Aragão, Luiz E. O. C.
2016-01-01
In the Amazon region, the estimation of radiation fluxes through remote sensing techniques is hindered by the lack of ground measurements required as input in the models, as well as the difficulty to obtain cloud-free images. Here, we assess an approach to estimate net radiation (Rn) and its components under all-sky conditions for the Amazon region through the Surface Energy Balance Algorithm for Land (SEBAL) model utilizing only remote sensing and reanalysis data. The study period comprised six years, between January 2001–December 2006, and images from the MODIS sensor aboard the Terra satellite and GLDAS reanalysis products were utilized. The estimates were evaluated with flux tower measurements within the Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) project. Comparison between estimates obtained by the proposed method and observations from LBA towers showed errors between 12.5% and 16.4% and 11.3% and 15.9% for instantaneous and daily Rn, respectively. Our approach was adequate to minimize the problem related to strong cloudiness over the region and allowed us to consistently map the spatial distribution of net radiation components in Amazonia. We conclude that the integration of reanalysis products and satellite data, eliminating the need for surface measurements as model input, was a useful proposition for the spatialization of the radiation fluxes in the Amazon region, which may serve as input information needed by algorithms that aim to determine evapotranspiration, the most important component of the Amazon hydrological balance. PMID:27347957
Regional maximum rainfall analysis using L-moments at the Titicaca Lake drainage, Peru
NASA Astrophysics Data System (ADS)
Fernández-Palomino, Carlos Antonio; Lavado-Casimiro, Waldo Sven
2017-08-01
The present study investigates the application of the index flood L-moments-based regional frequency analysis procedure (RFA-LM) to the annual maximum 24-h rainfall (AM) of 33 rainfall gauge stations (RGs) to estimate rainfall quantiles at the Titicaca Lake drainage (TL). The study region was chosen because it is characterised by common floods that affect agricultural production and infrastructure. First, detailed quality analyses and verification of the RFA-LM assumptions were conducted. For this purpose, different tests for outlier verification, homogeneity, stationarity, and serial independence were employed. Then, the application of the RFA-LM procedure allowed us to consider the TL as a single, hydrologically homogeneous region, in terms of its maximum rainfall frequency. That is, this region can be modelled by a generalised normal (GNO) distribution, chosen according to the Z test for goodness-of-fit, the L-moments (LM) ratio diagram, and an additional evaluation of the precision of the regional growth curve. Due to the low density of RGs in the TL, it was important to produce maps of the AM design quantiles estimated using RFA-LM. Therefore, the ordinary Kriging interpolation (OK) technique was used. These maps will be a useful tool for determining the different AM quantiles at any point of interest for hydrologists in the region.
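The sample L-moments underlying RFA-LM can be computed from probability-weighted moments of the ordered data (Hosking's unbiased estimators). A minimal sketch with an illustrative annual-maximum series; the values are invented, not the TL gauge records.

```python
import numpy as np

def sample_lmoments(x):
    """First four sample L-moments via probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(n)                      # 0-based rank of each sorted value
    b0 = x.mean()
    b1 = np.sum(j * x) / (n * (n - 1))
    b2 = np.sum(j * (j - 1) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum(j * (j - 1) * (j - 2) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    l1 = b0                               # L-location (mean)
    l2 = 2 * b1 - b0                      # L-scale
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2       # mean, L-scale, L-skew, L-kurtosis

# Illustrative annual-maximum 24-h rainfall series (mm)
am = [31.2, 28.4, 45.0, 38.7, 52.3, 29.9, 41.5, 36.8, 48.1, 33.6]
l1, l2, t3, t4 = sample_lmoments(am)
print(l1, l2, t3, t4)
```

The L-moment ratios t3 and t4 from each gauge are what populate the LM ratio diagram used to select the regional distribution (GNO here), and the homogeneity tests compare their spread across sites.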
Mapping land use changes in the carboniferous region of Santa Catarina, report 2
NASA Technical Reports Server (NTRS)
Valeriano, D. D. (Principal Investigator); Bitencourtpereira, M. D.
1983-01-01
The techniques applied to MSS-LANDSAT data in the land-use mapping of the Criciuma region (Santa Catarina state, Brazil) are presented along with the results of a classification accuracy estimate tested on the resulting map. The MSS-LANDSAT digital processing involves noise suppression, feature selection and a hybrid classifier. The accuracy test is made through comparisons with aerial photographs of sampled points. The utilization of digital processing to map the classes agricultural lands, forest lands and urban areas is recommended, while the coal refuse areas should be mapped visually.
Humidity estimate for the middle Eocene Arctic rain forest
NASA Astrophysics Data System (ADS)
Jahren, A. Hope; Silveira Lobo Sternberg, Leonel
2003-05-01
The exquisite preservation of fossilized Metasequoia trees that grew near 80°N latitude during the middle Eocene (ca. 45 Ma) in Nunavut, Canada, allowed for δD and δ18O analyses of cellulose, techniques previously restricted to wood <30,000 yr old. From the isotopic results, we determined that the middle Eocene Arctic atmosphere contained ˜2× the water found in the region's atmosphere today. This water vapor contributed to a middle Eocene greenhouse effect that insulated the polar region during dark polar winters.
A Fourier-based textural feature extraction procedure
NASA Technical Reports Server (NTRS)
Stromberg, W. D.; Farr, T. G.
1986-01-01
A procedure is presented to discriminate and characterize regions of uniform image texture. The procedure utilizes textural features consisting of pixel-by-pixel estimates of the relative emphases of annular regions of the Fourier transform. The utility and derivation of the features are described through presentation of a theoretical justification of the concept followed by a heuristic extension to a real environment. Two examples are provided that validate the technique on synthetic images and demonstrate its applicability to the discrimination of geologic texture in a radar image of a tropical vegetated area.
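The feature extraction described above reduces, in sketch form, to summing 2-D FFT power over concentric annuli and normalizing, so each feature is the relative emphasis of one spatial-frequency band. The images below are synthetic stand-ins for the radar data.

```python
import numpy as np

def annular_features(img, n_rings=4):
    """Relative spectral energy in concentric annuli of the 2-D Fourier plane."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2.0, xx - w / 2.0)       # radius in frequency plane
    edges = np.linspace(0.0, r.max() + 1e-9, n_rings + 1)
    energy = np.array([power[(r >= lo) & (r < hi)].sum()
                       for lo, hi in zip(edges[:-1], edges[1:])])
    return energy / energy.sum()                   # relative emphasis per ring

# A smooth ramp concentrates energy in the innermost annulus; fine-grained
# noise spreads it outward -- the contrast a texture classifier can exploit.
rng = np.random.default_rng(0)
smooth = np.outer(np.linspace(0.0, 1.0, 64), np.linspace(0.0, 1.0, 64))
rough = rng.normal(size=(64, 64))
print(annular_features(smooth), annular_features(rough))
```

Computing these features pixel-by-pixel over a sliding window yields the per-pixel texture estimates used for discrimination.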
DOE Office of Scientific and Technical Information (OSTI.GOV)
Busuioc, A.; Storch, H. von; Schnur, R.
Empirical downscaling procedures relate large-scale atmospheric features with local features such as station rainfall in order to facilitate local scenarios of climate change. The purpose of the present paper is twofold: first, a downscaling technique is used as a diagnostic tool to verify the performance of climate models on the regional scale; second, a technique is proposed for verifying the validity of empirical downscaling procedures in climate change applications. The case considered is regional seasonal precipitation in Romania. The downscaling model is a regression based on canonical correlation analysis between observed station precipitation and European-scale sea level pressure (SLP). The climate models considered here are the T21 and T42 versions of the Hamburg ECHAM3 atmospheric GCM run in time-slice mode. The climate change scenario refers to the expected time of doubled carbon dioxide concentrations around the year 2050. Generally, applications of statistical downscaling to climate change scenarios have been based on the assumption that the empirical link between the large-scale and regional parameters remains valid under a changed climate. In this study, a rationale is proposed for this assumption by showing the consistency of the 2×CO2 GCM scenarios in winter, derived directly from the gridpoint data, with the regional scenarios obtained through empirical downscaling. Since the skill of the GCMs in regional terms is already established, it is concluded that the downscaling technique is adequate for describing climatically changing regional and local conditions, at least for precipitation in Romania during winter.
Global observations of tropospheric BrO columns using GOME-2 satellite data
NASA Astrophysics Data System (ADS)
Theys, N.; van Roozendael, M.; Hendrick, F.; Yang, X.; de Smedt, I.; Richter, A.; Begoin, M.; Errera, Q.; Johnston, P. V.; Kreher, K.; de Mazière, M.
2011-02-01
Measurements from the GOME-2 satellite instrument have been analyzed for tropospheric BrO using a residual technique that combines measured BrO columns and estimates of the stratospheric BrO content from a climatological approach driven by O3 and NO2 observations. Comparisons between the GOME-2 results and BrO vertical columns derived from correlative ground-based and SCIAMACHY nadir observations present a good level of consistency. We show that the adopted technique enables separation of stratospheric and tropospheric fractions of the measured total BrO columns and allows quantitative study of the BrO plumes in polar regions. While some satellite-observed plumes of enhanced BrO can be explained by stratospheric descending air, we show that most BrO hotspots are of tropospheric origin, although they are often associated with regions with low tropopause heights as well. Elaborating on simulations using the p-TOMCAT tropospheric chemical transport model, this result is found to be consistent with the mechanism of bromine release through sea salt aerosol production during blowing snow events. No definitive conclusion can be drawn, however, on the importance of blowing snow sources in comparison to other bromine release mechanisms. Outside polar regions, evidence is provided for a global tropospheric BrO background with columns of 1-3 × 10¹³ molec cm⁻², consistent with previous estimates.
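In sketch form, the residual technique amounts to subtracting a climatology-derived stratospheric contribution from the measured slant column and converting the remainder with a tropospheric air-mass factor (AMF). The function and all numbers below are illustrative assumptions, not GOME-2 retrieval settings.

```python
def tropospheric_vcd(scd_total, vcd_strat, amf_strat, amf_trop):
    """Tropospheric vertical column via the residual technique (sketch)."""
    scd_trop = scd_total - vcd_strat * amf_strat   # remove stratospheric part
    return scd_trop / amf_trop                     # convert slant -> vertical

# Made-up columns in molec/cm^2 and dimensionless AMFs
vcd = tropospheric_vcd(scd_total=1.2e14, vcd_strat=3.0e13,
                       amf_strat=3.0, amf_trop=1.5)
print(f"{vcd:.2e} molec/cm^2")
```

Because the stratospheric term is taken from a climatology driven by O3 and NO2 rather than the measurement itself, errors in that climatology propagate directly into the tropospheric residual, which is why the ground-based and SCIAMACHY comparisons matter.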
NASA Astrophysics Data System (ADS)
Chuter, S. J.; Martín-Español, A.; Wouters, B.; Bamber, J. L.
2017-07-01
We present a reassessment of input-output method ice mass budget estimates for the Abbot and Getz regions of West Antarctica using CryoSat-2-derived ice thickness estimates. The mass budget is 8 ± 6 Gt yr-1 and 5 ± 17 Gt yr-1 for the Abbot and Getz sectors, respectively, for the period 2006-2008. Over the Abbot region, our results resolve a previous discrepancy with elevation rates from altimetry, due to a previous 30% overestimation of ice thickness. For the Getz sector, our results are at the more positive bound of estimates from other techniques. Grounding line velocity increases of up to 20% between 2007 and 2014, together with mean elevation rates of -0.67 ± 0.13 m yr-1 between 2010 and 2013, indicate the onset of a dynamic thinning signal. Mean snowfall trends of -0.33 m yr-1 water equivalent since 2006 indicate that recent mass trends are driven by both ice dynamics and surface processes.
Adaptive Local Realignment of Protein Sequences.
DeBlasio, Dan; Kececioglu, John
2018-06-11
While mutation rates can vary markedly over the residues of a protein, multiple sequence alignment tools typically use the same values for their scoring-function parameters across a protein's entire length. We present a new approach, called adaptive local realignment, that in contrast automatically adapts to the diversity of mutation rates along protein sequences. This builds upon a recent technique known as parameter advising, which finds global parameter settings for an aligner, to now adaptively find local settings. Our approach in essence identifies local regions with low estimated accuracy, constructs a set of candidate realignments using a carefully-chosen collection of parameter settings, and replaces the region if a realignment has higher estimated accuracy. This new method of local parameter advising, when combined with prior methods for global advising, boosts alignment accuracy as much as 26% over the best default setting on hard-to-align protein benchmarks, and by 6.4% over global advising alone. Adaptive local realignment has been implemented within the Opal aligner using the Facet accuracy estimator.
Estimated Perennial Streams of Idaho and Related Geospatial Datasets
Rea, Alan; Skinner, Kenneth D.
2009-01-01
The perennial or intermittent status of a stream has bearing on many regulatory requirements. Because of changing technologies over time, cartographic representation of perennial/intermittent status of streams on U.S. Geological Survey (USGS) topographic maps is not always accurate and (or) consistent from one map sheet to another. Idaho Administrative Code defines an intermittent stream as one having a 7-day, 2-year low flow (7Q2) less than 0.1 cubic feet per second. To establish consistency with the Idaho Administrative Code, the USGS developed regional regression equations for Idaho streams for several low-flow statistics, including 7Q2. Using these regression equations, the 7Q2 streamflow may be estimated for naturally flowing streams anywhere in Idaho to help determine perennial/intermittent status of streams. Using these equations in conjunction with a Geographic Information System (GIS) technique known as weighted flow accumulation allows for an automated and continuous estimation of 7Q2 streamflow at all points along a stream, which in turn can be used to determine if a stream is intermittent or perennial according to the Idaho Administrative Code operational definition. The selected regression equations were applied to create continuous grids of 7Q2 estimates for the eight low-flow regression regions of Idaho. By applying the 0.1 ft3/s criterion, the perennial streams have been estimated in each low-flow region. Uncertainty in the estimates is shown by identifying a 'transitional' zone, corresponding to flow estimates of 0.1 ft3/s plus or minus one standard error. Considerable additional uncertainty exists in the model of perennial streams presented in this report. The regression models provide overall estimates based on general trends within each regression region. These models do not include local factors such as a large spring or a losing reach that may greatly affect flows at any given point.
Site-specific flow data, assuming a sufficient period of record, generally would be considered to represent flow conditions better at a given site than flow estimates based on regionalized regression models. The geospatial datasets of modeled perennial streams are considered a first-cut estimate, and should not be construed to override site-specific flow data.
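The weighted flow accumulation idea described above — accumulating per-cell runoff contributions downstream so that every point on the network carries a continuous 7Q2 estimate, then applying the 0.1 ft3/s rule — can be sketched on a toy drainage network. Only the threshold comes from the report; the network topology and weights below are made up for illustration:

```python
from collections import deque

def weighted_flow_accumulation(downstream, weights):
    """Weighted flow accumulation over a drainage network.

    downstream[i] is the index of the cell that cell i drains into
    (-1 marks an outlet); weights[i] is the local contribution of
    cell i (standing in here for a per-cell 7Q2 runoff estimate)."""
    n = len(downstream)
    indegree = [0] * n
    for d in downstream:
        if d >= 0:
            indegree[d] += 1
    acc = list(weights)
    queue = deque(i for i in range(n) if indegree[i] == 0)  # headwater cells
    while queue:
        i = queue.popleft()
        d = downstream[i]
        if d >= 0:
            acc[d] += acc[i]          # pass accumulated flow downstream
            indegree[d] -= 1
            if indegree[d] == 0:
                queue.append(d)
    return acc

def classify_perennial(acc, threshold=0.1):
    """Idaho Administrative Code rule: intermittent if 7Q2 < 0.1 ft3/s."""
    return [a >= threshold for a in acc]
```

On a network where cells 0→1→2 and 3→2 drain to outlet 4, the accumulated estimate crosses the perennial threshold only from cell 2 downstream.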
Sparse estimation of model-based diffuse thermal dust emission
NASA Astrophysics Data System (ADS)
Irfan, Melis O.; Bobin, Jérôme
2018-03-01
Component separation for the Planck High Frequency Instrument (HFI) data is primarily concerned with the estimation of thermal dust emission, which requires the separation of thermal dust from the cosmic infrared background (CIB). For that purpose, current estimation methods rely on filtering techniques to decouple thermal dust emission from CIB anisotropies, which tend to yield a smooth, low-resolution estimate of the dust emission. In this paper, we present a new parameter estimation method, premise: Parameter Recovery Exploiting Model Informed Sparse Estimates. This method exploits the sparse nature of thermal dust emission to calculate all-sky maps of thermal dust temperature, spectral index, and optical depth at 353 GHz. premise is evaluated and validated on full-sky simulated data. We find the percentage difference between the premise results and the true values to be 2.8, 5.7, and 7.2 per cent at the 1σ level across the full sky for thermal dust temperature, spectral index, and optical depth at 353 GHz, respectively. A comparison between premise and a GNILC-like method over selected regions of our sky simulation reveals that both methods perform comparably within high signal-to-noise regions. However, outside of the Galactic plane, premise is seen to outperform the GNILC-like method with increasing success as the signal-to-noise ratio worsens.
NASA Astrophysics Data System (ADS)
Scuderi, Louis A.
2017-04-01
Erosion rates derived using dendrogeomorphology have been used to quantify slope degradation in many localities globally. However, with the exception of the western United States, most of these estimates are derived from short-lived trees whose lifetimes may not adequately reflect the complete range of slope processes, which can include erosion, deposition, impacts of extreme events and even long-term hiatuses. Erosion rate estimates at a given site using standard techniques therefore reflect censored local point erosion estimates rather than long-term rates. We applied a modified dendrogeomorphic approach to rapidly estimate erosion rates from dbh/age relationships to assess the difference between short- and long-term rates, and found that the mean short-term rate was 0.13 cm/yr with high variability, while the uncensored long-term rate was 0.06 cm/yr. The results indicate that rates calculated from short-lived trees, while possibly appropriate for local short-term point estimates of erosion, are highly variable and may overestimate regional long-term rates by > 50%. While these findings do not invalidate the use of dendrogeomorphology to estimate erosion rates, they do suggest that care must be taken to select older trees that incorporate a range of slope histories in order to best approximate regional long-term rates.
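The dbh/age approach above reduces to simple arithmetic: estimate tree age from diameter at breast height via an assumed growth rate, then divide the exposed-root depth by that age. The exposure depths, diameters, and growth rate below are illustrative placeholders (chosen only so the two cases echo the reported 0.13 and 0.06 cm/yr rates), not data from the study:

```python
def erosion_rate_cm_per_yr(root_exposure_cm, dbh_cm, growth_cm_per_yr):
    """Point erosion rate: exposed-root depth divided by tree age,
    with age estimated from an (assumed linear) dbh-age relation."""
    age_yr = dbh_cm / growth_cm_per_yr
    return root_exposure_cm / age_yr

# Illustrative numbers only: a short-lived tree vs. a much older one
short_term = erosion_rate_cm_per_yr(3.25, 12.5, 0.5)    # age ~25 yr
long_term = erosion_rate_cm_per_yr(18.0, 150.0, 0.5)    # age ~300 yr
```

The young tree yields a higher point rate than the old one here, illustrating how censored short records can overestimate the long-term mean.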
A new Method for the Estimation of Initial Condition Uncertainty Structures in Mesoscale Models
NASA Astrophysics Data System (ADS)
Keller, J. D.; Bach, L.; Hense, A.
2012-12-01
The estimation of fast growing error modes of a system is a key interest of ensemble data assimilation when assessing uncertainty in initial conditions. Over the last two decades three methods (and variations of these methods) have evolved for global numerical weather prediction models: the ensemble Kalman filter, singular vectors and breeding of growing modes (or now ensemble transform). While the first incorporates a priori model error information and observation error estimates to determine ensemble initial conditions, the latter two techniques directly address the error structures associated with Lyapunov vectors. However, in global models these structures are mainly associated with transient global wave patterns. When assessing initial condition uncertainty in mesoscale limited area models, several problems regarding the aforementioned techniques arise: (a) additional sources of uncertainty on the smaller scales contribute to the error and (b) error structures from the global scale may quickly move through the model domain (depending on the size of the domain). To address the latter problem, perturbation structures from global models are often included in mesoscale predictions as perturbed boundary conditions. However, the initial perturbations (when used) are often generated with a variant of an ensemble Kalman filter which does not necessarily focus on the large scale error patterns. In the framework of the European regional reanalysis project of the Hans-Ertel-Center for Weather Research we use a mesoscale model with an implemented nudging data assimilation scheme which does not support ensemble data assimilation at all. In preparation for an ensemble-based regional reanalysis and for the estimation of three-dimensional atmospheric covariance structures, we implemented a new method for the assessment of fast growing error modes for mesoscale limited area models. The so-called self-breeding method is a development based on the breeding of growing modes technique.
Initial perturbations are integrated forward for a short time period and then rescaled and added to the initial state again. Iterating this rapid breeding cycle provides estimates of the initial uncertainty structure (or local Lyapunov vectors) for a given norm. To prevent all ensemble perturbations from converging towards the leading local Lyapunov vector, we apply an ensemble transform variant to orthogonalize the perturbations in the subspace spanned by the ensemble. By choosing different kinds of norms to measure perturbation growth, this technique allows for estimating uncertainty patterns targeted at specific sources of errors (e.g. convection, turbulence). With case study experiments we show applications of the self-breeding method for different sources of uncertainty and different horizontal scales.
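The breeding cycle described above — integrate a control and a perturbed run over a short window, rescale the difference back to a fixed small norm, repeat — can be sketched on the Lorenz-63 system, a standard toy stand-in (not the mesoscale model used in the project). After enough cycles, any initial perturbation aligns with the leading local Lyapunov direction:

```python
import numpy as np

def lorenz63(x, sigma=10.0, rho=28.0, beta=8.0/3.0):
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def step_rk4(x, dt):
    k1 = lorenz63(x)
    k2 = lorenz63(x + 0.5 * dt * k1)
    k3 = lorenz63(x + 0.5 * dt * k2)
    k4 = lorenz63(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2*k2 + 2*k3 + k4)

def breed(x0, pert0, n_cycles=50, window=8, dt=0.01, norm=1e-3):
    """Breeding of growing modes: run control and perturbed states over a
    short window, rescale the difference to a fixed norm, and iterate."""
    x = x0.copy()
    p = norm * pert0 / np.linalg.norm(pert0)
    growth = []
    for _ in range(n_cycles):
        xp = x + p
        for _ in range(window):
            x = step_rk4(x, dt)
            xp = step_rk4(xp, dt)
        p = xp - x
        growth.append(np.linalg.norm(p) / norm)  # per-cycle amplification
        p *= norm / np.linalg.norm(p)            # rescale: keep amplitude small
    return x, p, growth
```

Two different initial perturbations bred along the same control trajectory end up (anti)parallel, which is exactly the collapse onto the leading vector that the orthogonalizing ensemble transform variant in the abstract is designed to counteract.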
NASA Astrophysics Data System (ADS)
Debchoudhury, Shantanab; Earle, Gregory
2017-04-01
Retarding potential analyzers (RPAs) have a rich flight heritage. Standard curve-fitting analysis techniques exist that can infer state variables of the ionospheric plasma environment from RPA data, but the estimation process is prone to errors arising from a number of sources. Previous work has focused on the effects of grid geometry on uncertainties in estimation; however, no prior study has quantified the estimation errors due to additive noise. In this study, we characterize the errors in estimation of thermal plasma parameters by adding noise to simulated data derived from existing ionospheric models. We concentrate on low-altitude, mid-inclination orbits since a number of nano-satellite missions are focused on this region of the ionosphere. The errors are quantified and cross-correlated for varying geomagnetic conditions.
Archfield, Stacey A.; Pugliese, Alessio; Castellarin, Attilio; Skøien, Jon O.; Kiang, Julie E.
2013-01-01
In the United States, estimation of flood frequency quantiles at ungauged locations has been largely based on regional regression techniques that relate measurable catchment descriptors to flood quantiles. More recently, spatial interpolation techniques of point data have been shown to be effective for predicting streamflow statistics (i.e., flood flows and low-flow indices) in ungauged catchments. Literature reports successful applications of two techniques, canonical kriging, CK (or physiographical-space-based interpolation, PSBI), and topological kriging, TK (or top-kriging). CK performs the spatial interpolation of the streamflow statistic of interest in the two-dimensional space of catchment descriptors. TK predicts the streamflow statistic along river networks taking both the catchment area and nested nature of catchments into account. It is of interest to understand how these spatial interpolation methods compare with generalized least squares (GLS) regression, one of the most common approaches to estimate flood quantiles at ungauged locations. By means of a leave-one-out cross-validation procedure, the performance of CK and TK was compared to GLS regression equations developed for the prediction of 10, 50, 100 and 500 yr floods for 61 streamgauges in the southeast United States. TK substantially outperforms GLS and CK for the study area, particularly for large catchments. The performance of TK over GLS highlights an important distinction between the treatments of spatial correlation when using regression-based or spatial interpolation methods to estimate flood quantiles at ungauged locations. The analysis also shows that coupling TK with CK slightly improves the performance of TK; however, the improvement is marginal when compared to the improvement in performance over GLS.
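The leave-one-out cross-validation used to compare the methods above can be sketched generically: drop one gauge, refit on the remainder, predict the held-out site, and score the errors. Plain OLS stands in below for GLS regression or kriging, purely for illustration:

```python
import numpy as np

def loo_rmse(X, y, fit_predict):
    """Leave-one-out cross-validation: refit without site i, predict site i,
    and return the RMSE of the held-out predictions."""
    n = len(y)
    errs = []
    for i in range(n):
        mask = np.arange(n) != i
        yhat = fit_predict(X[mask], y[mask], X[i:i + 1])
        errs.append(float(yhat[0]) - y[i])
    return float(np.sqrt(np.mean(np.square(errs))))

def ols_fit_predict(Xtr, ytr, Xte):
    """Ordinary least squares with an intercept (stand-in estimator)."""
    A = np.hstack([Xtr, np.ones((len(Xtr), 1))])
    coef, *_ = np.linalg.lstsq(A, ytr, rcond=None)
    return np.hstack([Xte, np.ones((len(Xte), 1))]) @ coef
```

Any competing estimator (GLS, canonical kriging, top-kriging) can be dropped in as another `fit_predict` callable and scored on the same folds.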
Power strain imaging based on vibro-elastography techniques
NASA Astrophysics Data System (ADS)
Wen, Xu; Salcudean, S. E.
2007-03-01
This paper describes a new ultrasound elastography technique, power strain imaging, based on vibro-elastography (VE) techniques. With this method, tissue is compressed by a vibrating actuator driven by low-pass or band-pass filtered white noise, typically in the 0-20 Hz range. Tissue displacements at different spatial locations are estimated by correlation-based approaches on the raw ultrasound radio frequency signals and recorded as time sequences. The power spectra of these time sequences are computed by Fourier spectral analysis techniques. As the average of the power spectrum is proportional to the squared amplitude of the tissue motion, the square root of the average power over the range of excitation frequencies is used as a measure of the tissue displacement. Tissue strain is then determined by least-squares estimation of the gradient of the displacement field. The computation of the power spectra of the time sequences can be implemented efficiently by using Welch's periodogram method with moving windows or with accumulative windows with a forgetting factor. Compared to the transfer function estimation originally used in VE, the computation of cross spectral densities is not needed, which saves both memory and computation time. Phantom experiments demonstrate that the proposed method produces stable and operator-independent strain images with a high signal-to-noise ratio in real time. This approach has also been tested on data from a few patients in the prostate region, and the results are encouraging.
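The two computational steps above — a Welch-style averaged periodogram whose band-averaged power (square-rooted) measures displacement amplitude, and a least-squares gradient of the displacement profile giving strain — can be sketched as follows. The sampling rate, band edges, and segment length are assumed illustration values (the paper's excitation band is 0-20 Hz), and the periodograms here use non-overlapping Hann-windowed segments:

```python
import numpy as np

def band_power_amplitude(x, fs, f_lo, f_hi, nseg=256):
    """Welch-style estimate: average periodograms over non-overlapping
    segments, then take the square root of the mean power inside the
    excitation band [f_lo, f_hi]."""
    nwin = len(x) // nseg
    w = np.hanning(nseg)
    scale = (w ** 2).sum()
    psd = np.zeros(nseg // 2 + 1)
    for k in range(nwin):
        seg = x[k * nseg:(k + 1) * nseg] * w
        psd += np.abs(np.fft.rfft(seg)) ** 2 / scale
    psd /= nwin
    f = np.fft.rfftfreq(nseg, d=1.0 / fs)
    band = (f >= f_lo) & (f <= f_hi)
    return np.sqrt(psd[band].mean())

def strain_from_displacement(depths, disp_amp):
    """Least-squares slope of the displacement-amplitude profile = strain."""
    A = np.vstack([depths, np.ones_like(depths)]).T
    slope, _ = np.linalg.lstsq(A, disp_amp, rcond=None)[0]
    return slope
```

Because the measure is the square root of band power, doubling the motion amplitude doubles the estimate, which is the proportionality the method relies on.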
Assessing D-Region Ionospheric Electron Densities with Transionospheric VLF Signals
NASA Astrophysics Data System (ADS)
Worthington, E. R.; Cohen, M.
2016-12-01
Very Low Frequency (VLF, 3-30 kHz) electromagnetic radiation emitted from ground-based sources, such as VLF transmitters or lightning strokes, is generally confined between the Earth's surface and the base of the ionosphere. These boundaries result in waveguide-like propagation modes that travel away from the source, often over great distances. In the vicinity of the source, a unique interference pattern exists that is largely determined by the D-region of the ionosphere which forms the upper boundary. A small portion of this VLF radiation escapes the ionosphere allowing the waveguide interference pattern to be observable to satellites in low-earth orbit (LEO). Techniques for estimating D-region electron densities using VLF satellite measurements are presented. These techniques are then validated using measurements taken by the satellite DEMETER. During its six-year mission, DEMETER completed hundreds of passes above well-characterized VLF transmitters while taking measurements of electric and magnetic field strengths. The waveguide interference pattern described above is clearly visible in these measurements, and features from the interference pattern are used to derive D-region electron density profiles.
Evaluating Satellite Rainfall Estimates for Agro-hydrological Applications in Africa
NASA Astrophysics Data System (ADS)
Senay, G. B.; Verdin, J. P.; Korecha, D.; Asfaw, A.
2004-12-01
Regional water balance techniques are used to monitor and forecast crop performance and flooding potentials around the world. In the last few years, satellite rainfall estimates (RFE) have become available at continental scales, which has made it possible to develop operational regional water balance models for monitoring crop performance and flooding potentials in Africa and other regions of the world as part of an environmental early warning system. The accuracy of RFE, in absolute terms and, importantly, as it relates to agricultural and hydrological applications, has not been evaluated systematically. This study evaluated a subset of the Africa-wide RFE product by comparing station rainfall data and RFE from 1996 to 2002 using over 100 rain-gauge stations from Ethiopia at a dekadal (~10-day) time step. The results showed a general underestimation of RFE compared to station rainfall values. The correlation between station rainfall data and RFE varied greatly from place to place and between seasons. On the other hand, the correlation improved significantly when the comparison was made between the RFE-derived water requirement satisfaction index (WRSI) and the station-rainfall-derived WRSI, indicating the usefulness of the RFE for agro-hydrological applications.
Ivorra, Eugenio; Verdu, Samuel; Sánchez, Antonio J; Grau, Raúl; Barat, José M
2016-10-19
A technique that combines the spatial resolution of a 3D structured-light (SL) imaging system with the spectral analysis of a hyperspectral short-wave near-infrared system was developed for freshness predictions of gilthead sea bream on the first storage days (Days 0-6). This novel approach allows the hyperspectral analysis of very specific fish areas, which provides more information for freshness estimation. The SL system obtains a 3D reconstruction of the fish, and an automatic method locates the gilthead's pupils and irises. Once these regions are positioned, the hyperspectral camera acquires spectral information and a multivariate statistical study is done. The best region is the pupil, with an R² of 0.92 and an RMSE of 0.651 for predictions. We conclude that the combination of 3D technology with hyperspectral analysis offers plenty of potential and is a very promising technique to non-destructively predict gilthead freshness.
NREPS Applications for Water Supply and Management in California and Tennessee
NASA Technical Reports Server (NTRS)
Gatlin, P.; Scott, M.; Carey, L. D.; Petersen, W. A.
2011-01-01
Management of water resources is a balancing act between temporally and spatially limited sources and competing needs that can often exceed the supply. In order to manage water resources over a region such as the San Joaquin Valley or the Tennessee River Valley, it is pertinent to know the amount of water that has fallen in the watershed and where the water is going within it. Since rain gauge networks are typically sparsely spaced, much of the rainfall over a region may go unmeasured. To mitigate this under-sampling of rainfall, weather radar has long been employed to provide areal rainfall estimates. The Next-Generation Weather Radars (NEXRAD) make it possible to estimate rainfall over the majority of the conterminous United States. The NEXRAD Rainfall Estimation Processing System (NREPS) was developed specifically for the purpose of using weather radar to estimate rainfall for water resources management. The NREPS is tailored to meet customer needs on spatial and temporal scales relevant to the hydrologic or land-surface models of the end user. It utilizes several techniques to prevent artifacts in the NEXRAD data from contaminating the rainfall field, including clutter filtering, correction for occultation by topography, and accounting for the vertical profile of reflectivity. This presentation will focus on improvements made to the NREPS system to map rainfall in the San Joaquin Valley for NASA's Water Supply and Management Project in California, as well as ongoing rainfall mapping work in the Tennessee River watershed for the Tennessee Valley Authority and possible future applications in other areas of the continent.
NASA Astrophysics Data System (ADS)
Divine, Dmitry; Granskog, Mats A.; Hudson, Stephen R.; Pedersen, Christina A.; Karlsen, Tor I.; Gerland, Sebastian
2014-05-01
The paper presents the results of an analysis of the radiative properties of first-year sea ice in advanced stages of melt. The technique is based on upscaling in situ point measurements of surface albedo to the regional (150 km) spatial scale using aerial photographs of sea ice captured by a helicopter-borne camera setup. The sea ice imagery as well as in situ snow and ice data were collected during the eight-day ICE12 drift experiment carried out by the Norwegian Polar Institute in the Arctic north of Svalbard at 83.5°N during 27 July-03 August 2012. In total some 100 ground albedo measurements were made on melting sea ice in locations representative of the four main types of sea ice surface identified using a discriminant-analysis-based classification technique. Some 11,000 images from a total of six ice survey flights, adding up to some 770 km of flight tracks covering about 28 km² of sea ice surface, were classified to yield the along-track distributions of four major surface classes: bare ice, dark melt ponds, bright melt ponds and open water. Results demonstrated a relative homogeneity of sea ice cover in the study area, allowing the local optical measurements to be upscaled to the regional scale. For the typical 10% open water fraction and 25% melt pond coverage, with a ratio of dark to bright ponds of 2 identified from selected images, the aggregate-scale surface albedo of the area was estimated to be 0.42 (0.40, 0.44). The confidence intervals on the estimate were derived using the moving block bootstrap approach applied to the sequences of classified sea ice images and the albedo of the four surface classes treated as random variables. Uncertainty in the mean estimates of local albedo from in situ measurements contributed some 65% to the variance of the estimated regional albedo, with the remaining variance associated with the spatial inhomogeneity of sea ice cover.
The results of the study are of relevance for the modeling of sea ice processes in climate simulations. This particularly concerns the period of summer melt, when the optical properties of sea ice undergo substantial changes that existing sea ice models have the most difficulty reproducing accurately. That phase of the season is especially crucial for climate and ecosystem processes in the polar regions.
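The aggregate-scale albedo in the study above is an area-weighted mean over the surface classes. The class fractions below are the ones quoted in the abstract (10% open water, 25% melt ponds at a 2:1 dark:bright ratio, remainder bare ice); the per-class albedo values are assumed placeholders, not the measured ones, chosen only to show that the weighting lands near the reported 0.42:

```python
def aggregate_albedo(fractions, albedos):
    """Area-weighted aggregate albedo from surface-class fractions."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(fractions[c] * albedos[c] for c in fractions)

# Fractions from the abstract; per-class albedos are illustrative assumptions.
fractions = {"open_water": 0.10, "dark_pond": 0.25 * 2 / 3,
             "bright_pond": 0.25 / 3, "bare_ice": 0.65}
albedos = {"open_water": 0.07, "dark_pond": 0.20,
           "bright_pond": 0.35, "bare_ice": 0.55}
alpha = aggregate_albedo(fractions, albedos)
```

With these placeholder class albedos the weighted mean comes out near 0.43, in the neighborhood of the study's 0.42 (0.40, 0.44) estimate.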
An investigation of the marine boundary layer during cold air outbreak
NASA Technical Reports Server (NTRS)
Stage, S. A.
1986-01-01
Methods for use in the remote estimation of ocean surface sensible and latent heat fluxes were developed and evaluated. Three different techniques were developed for determining these fluxes: (1) obtaining surface sensible and latent heat fluxes from satellite measurements; (2) obtaining surface sensible and latent heat fluxes from a marine atmospheric boundary layer (MABL) model; and (3) a method using horizontal transfer coefficients. These techniques are not very sensitive to errors in the data and therefore appear to hold promise of producing useful answers. Questions remain about how closely the structure of the real atmosphere agrees with the assumptions made for each of these techniques, and therefore about how well these techniques can perform in actual use. The value of these techniques is that they promise to provide methods for the determination of fluxes over regions where very few traditional measurements exist.
A vector scanning processing technique for pulsed laser velocimetry
NASA Technical Reports Server (NTRS)
Wernet, Mark P.; Edwards, Robert V.
1989-01-01
Pulsed-laser-sheet velocimetry yields two-dimensional velocity vectors across an extended planar region of a flow. Current processing techniques offer high-precision (1-percent) velocity estimates, but can require hours of processing time on specialized array processors. Sometimes, however, a less accurate (about 5 percent) data-reduction technique which also gives unambiguous velocity vector information is acceptable. Here, a direct space-domain processing technique is described and shown to be far superior to previous methods in achieving these objectives. It uses a novel data coding and reduction technique and has no 180-deg directional ambiguity. A complex convection vortex flow was recorded and completely processed in under 2 min on an 80386-based PC, producing a two-dimensional velocity-vector map of the flowfield. Pulsed-laser velocimetry data can thus be reduced quickly and reasonably accurately, without specialized array processing hardware.
Aylward, C.M.; Murdoch, J.D.; Donovan, Therese M.; Kilpatrick, C.W.; Bernier, C.; Katz, J.
2018-01-01
The American marten Martes americana is a species of conservation concern in the northeastern United States due to widespread declines from over‐harvesting and habitat loss. Little information exists on current marten distribution and how landscape characteristics shape patterns of occupancy across the region, which could help develop effective recovery strategies. The rarity of marten and lack of historical distribution records are also problematic for region‐wide conservation planning. Expert opinion can provide a source of information for estimating species–landscape relationships and is especially useful when empirical data are sparse. We created a survey to elicit expert opinion and build a model that describes marten occupancy in the northeastern United States as a function of landscape conditions. We elicited opinions from 18 marten experts that included wildlife managers, trappers and researchers. Each expert estimated occupancy probability at 30 sites in their geographic region of expertise. We, then, fit the response data with a set of 58 models that incorporated the effects of covariates related to forest characteristics, climate, anthropogenic impacts and competition at two spatial scales (1.5 and 5 km radii), and used model selection techniques to determine the best model in the set. Three top models had strong empirical support, which we model averaged based on AIC weights. The final model included effects of five covariates at the 5‐km scale: percent canopy cover (positive), percent spruce‐fir land cover (positive), winter temperature (negative), elevation (positive) and road density (negative). A receiver operating characteristic curve indicated that the model performed well based on recent occurrence records. We mapped distribution across the region and used circuit theory to estimate movement corridors between isolated core populations. 
The results demonstrate the effectiveness of expert‐opinion data at modeling occupancy for rare species and provide tools for planning marten recovery in the northeastern United States.
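The AIC-weight model averaging used in the marten study above works as follows: each candidate model gets a weight proportional to exp(-ΔAIC/2) relative to the best model, and coefficient estimates are averaged with those weights. This is a generic sketch; the AIC values and coefficients below are invented for illustration, not taken from the study:

```python
import math

def akaike_weights(aics):
    """Akaike weights: w_i proportional to exp(-0.5 * (AIC_i - AIC_min))."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

def model_average(estimates, weights):
    """AIC-weighted average of one coefficient across candidate models."""
    return sum(e * w for e, w in zip(estimates, weights))
```

A model two AIC units behind the best receives about 37% of the best model's weight, so closely competing models (like the three top models in the study) all contribute meaningfully to the averaged coefficients.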
NASA Technical Reports Server (NTRS)
Safaeinili, Ali; Kofman, Wlodek; Mouginot, Jeremie; Gim, Yonggyu; Herique, Alain; Ivanov, Anton B.; Plaut, Jeffrey J.; Picardi, Giovanni
2007-01-01
The Martian ionosphere's local total electron content (TEC) and the neutral atmosphere scale height can be derived from radar echoes reflected from the surface of the planet. We report the global distribution of the TEC by analyzing more than 750,000 echoes of the Mars Advanced Radar for Subsurface and Ionospheric Sounding (MARSIS). This is the first direct measurement of the TEC of the Martian ionosphere. The technique used in this paper is a novel 'transmission-mode' sounding of the ionosphere of Mars in contrast to the Active Ionospheric Sounding experiment (AIS) on MARSIS, which generally operates in the reflection mode. This technique yields a global map of the TEC for the Martian ionosphere. The radar transmits a wideband chirp signal that travels through the ionosphere before and after being reflected from the surface. The received waves are attenuated, delayed and dispersed, depending on the electron density in the column directly below the spacecraft. In the process of correcting the radar signal, we are able to estimate the TEC and its global distribution with an unprecedented resolution of about 0.1 deg in latitude (5 km footprint). The mapping of the relative geographical variations in the estimated nightside TEC data reveals an intricate web of high electron density regions that correspond to regions where crustal magnetic field lines are connected to the solar wind. Our data demonstrates that these regions are generally but not exclusively associated with areas that have magnetic field lines perpendicular to the surface of Mars. As a result, the global TEC map provides a high-resolution view of where the Martian crustal magnetic field is connected to the solar wind. We also provide an estimate of the neutral atmospheric scale height near the ionospheric peak and observe temporal fluctuations in peak electron density related to solar activity.
Spatio-temporal distribution of energy radiation from low frequency tremor
NASA Astrophysics Data System (ADS)
Maeda, T.; Obara, K.
2007-12-01
Recent fine-scale hypocenter locations of low frequency tremors (LFTs) estimated by cross-correlation techniques (Shelly et al. 2006; Maeda et al. 2006) and the new finding of very low frequency earthquakes (Ito et al. 2007) suggest that these slow events occur at the plate boundary in association with slow slip events (Obara and Hirose, 2006). However, the number of tremors detected by the above techniques is limited, since continuous tremor waveforms are too complicated. An envelope correlation method (ECM) (Obara, 2002) enables epicenters of LFTs to be located without arrival-time picks; however, ECM fails to locate LFTs precisely, especially during the most active stages of tremor activity, because of the low correlation of envelope amplitudes. To reveal the total energy release of LFTs, here we propose a new method for estimating the locations of LFTs together with the radiated energy from the tremor source by using envelope amplitudes. The tremor amplitude observed at NIED Hi-net stations in western Shikoku simply decays in proportion to the reciprocal of the source-receiver distance after correction for the site-amplification factor, even though the phases of the tremor are very complicated. We therefore model the observed mean square envelope amplitude by time-dependent energy radiation with a geometrical spreading factor. In the model, we do not have an origin time for the tremor, since we assume that the source of the tremor radiates energy continuously. Travel-time differences between stations estimated by the ECM technique are also incorporated in our locating algorithm together with the amplitude information. Three-component 1-hour Hi-net velocity continuous waveforms with a pass-band of 2-10 Hz are used for the inversion after correction for site amplification factors at each station, estimated by the coda normalization method (Takahashi et al. 2005) applied to normal earthquakes in the region.
The source location and energy are estimated by applying least squares inversion iteratively to 1-min windows. As a first application of our method, we estimated the spatio-temporal distribution of energy radiation for the May 2006 episodic tremor and slip event that occurred in the western Shikoku region, Japan. Tremor locations and their radiated energy are estimated every minute. We counted the number of located LFTs and summed their total energy in each grid cell (0.05-degree spacing) for each day to map the spatio-temporal distribution of tremor energy release. The resulting spatial distribution of radiated energy is concentrated in a specific region. Additionally, we see daily changes in the released energy, in both location and amount, which correspond to the migration of tremor activity. The spatio-temporal distribution of tremor energy radiation is in good agreement with the spatio-temporal slip distribution of the slow slip event estimated from Hi-net tiltmeter records (Hirose et al. 2007). This suggests that small continuous tremors occur in association with the rupture process of slow slip.
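The amplitude-based location idea can be sketched numerically: under the assumed 1/r geometrical-spreading model, the source term has a closed-form least-squares solution at each trial location, so a grid search over candidate epicenters suffices. All station coordinates and amplitudes below are synthetic illustrations, not Hi-net data.

```python
import numpy as np

# Hypothetical station coordinates (km) and observed mean-square envelope
# amplitudes, already corrected for site amplification.
stations = np.array([[0.0, 0.0], [30.0, 5.0], [10.0, 25.0], [40.0, 30.0]])
true_src = np.array([20.0, 15.0])
true_energy = 5.0
r_true = np.linalg.norm(stations - true_src, axis=1)
obs_amp = true_energy / r_true  # amplitude decays as 1/distance

def locate(stations, obs_amp, grid_step=0.5):
    """Grid search for the source that best explains the amplitudes under
    a 1/r spreading model; the source term is solved by least squares in
    closed form at each trial location."""
    best = (np.inf, None, None)
    for x in np.arange(0.0, 50.0, grid_step):
        for y in np.arange(0.0, 50.0, grid_step):
            r = np.linalg.norm(stations - np.array([x, y]), axis=1)
            g = 1.0 / np.maximum(r, 1e-3)       # predicted decay pattern
            s = (g @ obs_amp) / (g @ g)          # least-squares source term
            misfit = np.sum((obs_amp - s * g) ** 2)
            if misfit < best[0]:
                best = (misfit, np.array([x, y]), s)
    return best

misfit, loc, energy = locate(stations, obs_amp)
```

In the full method the travel-time differences from ECM would enter the misfit as well; here only the amplitude term is shown.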
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vieira, M.; Fantoni, A.; Martins, R.
1994-12-31
Using the Flying Spot Technique (FST) the authors have studied minority carrier transport parallel and perpendicular to the surface of amorphous silicon films (a-Si:H). To reduce slow transients due to charge redistribution in low resistivity regions during the measurement they have applied a strong homogeneously absorbed bias light. The defect density was estimated from Constant Photocurrent Method (CPM) measurements. The steady-state photocarrier grating technique (SSPG) is a 1-dimensional approach. However, the modulation depth of the carrier profile is also dependent on film surface properties, like surface recombination velocity. Both methods yield comparable diffusion lengths when applied to a-Si:H.
NASA Astrophysics Data System (ADS)
Riabkov, Dmitri
Compartment modeling of dynamic medical image data implies that the concentration of the tracer over time in a particular region of the organ of interest is well-modeled as a convolution of the tissue response with the tracer concentration in the blood stream. The tissue response is different for different tissues while the blood input is assumed to be the same for different tissues. The kinetic parameters characterizing the tissue responses can be estimated by blind identification methods. These algorithms use the simultaneous measurements of concentration in separate regions of the organ; if the regions have different responses, the measurement of the blood input function may not be required. In this work it is shown that the blind identification problem has a unique solution for two-compartment model tissue response. For two-compartment model tissue responses in dynamic cardiac MRI imaging conditions with gadolinium-DTPA contrast agent, three blind identification algorithms are analyzed here to assess their utility: Eigenvector-based Algorithm for Multichannel Blind Deconvolution (EVAM), Cross Relations (CR), and Iterative Quadratic Maximum Likelihood (IQML). Comparisons of accuracy with conventional (not blind) identification techniques where the blood input is known are made as well. The statistical accuracies of estimation for the three methods are evaluated and compared for multiple parameter sets. The results show that the IQML method gives more accurate estimates than the other two blind identification methods. A proof is presented here that three-compartment model blind identification is not unique in the case of only two regions. It is shown that it is likely unique for the case of more than two regions, but this has not been proved analytically. For the three-compartment model the tissue responses in dynamic FDG PET imaging conditions are analyzed with the blind identification algorithms EVAM and Separable variables Least Squares (SLS). 
A method of identification that assumes the FDG blood input in the brain can be modeled as a function of time and several parameters (IFM) is also analyzed. Nonuniform-sampling SLS (NSLS) is developed to handle the rapid change of the FDG concentration in the blood during the early post-injection stage. Comparisons of the accuracy of the EVAM, SLS, NSLS, and IFM identification techniques are made.
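The convolution structure underlying these blind identification methods can be illustrated with a minimal sketch: synthetic two-compartment tissue curves are built from an assumed blood input, and the cross-relations (CR) identity is verified, which is what lets the kinetic parameters be estimated from two regions without measuring the blood input. The rate constants and the gamma-variate blood curve are invented for illustration.

```python
import numpy as np

t = np.arange(0, 60, 0.5)            # minutes
blood = t * np.exp(-t / 4.0)          # assumed gamma-variate blood input
dt = t[1] - t[0]

def tissue_response(k1, k2, t):
    # Two-compartment (one-tissue) impulse response: h(t) = k1 * exp(-k2 t)
    return k1 * np.exp(-k2 * t)

h1 = tissue_response(0.8, 0.10, t)    # region 1 kinetics (assumed)
h2 = tissue_response(0.5, 0.25, t)    # region 2 kinetics (assumed)
y1 = np.convolve(blood, h1)[: len(t)] * dt   # measured curve, region 1
y2 = np.convolve(blood, h2)[: len(t)] * dt   # measured curve, region 2

# Cross-relations identity exploited by blind identification:
# y1 * h2 == y2 * h1 (both equal blood * h1 * h2), so the responses can be
# fit from the region curves alone, with no blood measurement.
lhs = np.convolve(y1, h2)[: len(t)] * dt
rhs = np.convolve(y2, h1)[: len(t)] * dt
```

The blind algorithms effectively search over candidate (k1, k2) pairs until this identity holds for the measured curves.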
Estimation of cold plasma outflow during geomagnetic storms
NASA Astrophysics Data System (ADS)
Haaland, S.; Eriksson, A.; André, M.; Maes, L.; Baddeley, L.; Barakat, A.; Chappell, R.; Eccles, V.; Johnsen, C.; Lybekk, B.; Li, K.; Pedersen, A.; Schunk, R.; Welling, D.
2015-12-01
Low-energy ions of ionospheric origin constitute a significant contributor to the magnetospheric plasma population. Measuring cold ions is difficult though. Observations have to be done at sufficiently high altitudes and typically in regions of space where spacecraft attain a positive charge due to solar illumination. Cold ions are therefore shielded from the satellite particle detectors. Furthermore, spacecraft can only cover key regions of ion outflow during segments of their orbit, so additional complications arise if continuous longtime observations, such as during a geomagnetic storm, are needed. In this paper we suggest a new approach, based on a combination of synoptic observations and a novel technique to estimate the flux and total outflow during the various phases of geomagnetic storms. Our results indicate large variations in both outflow rates and transport throughout the storm. Prior to the storm main phase, outflow rates are moderate, and the cold ions are mainly emanating from moderately sized polar cap regions. Throughout the main phase of the storm, outflow rates increase and the polar cap source regions expand. Furthermore, faster transport, resulting from enhanced convection, leads to a much larger supply of cold ions to the near-Earth region during geomagnetic storms.
Assessment of mangrove forests in the Pacific region using Landsat imagery
NASA Astrophysics Data System (ADS)
Bhattarai, Bibek; Giri, Chandra
2011-01-01
The information on the mangrove forests of the Pacific region is scarce or outdated. A regional assessment based on a consistent methodology and data sources was needed to understand their true extent. Our investigation offers a regionally consistent, high-resolution (30 m), and the most comprehensive mapping of mangrove forests on the islands of American Samoa, Fiji, French Polynesia, Guam, Hawaii, Kiribati, Marshall Islands, Micronesia, Nauru, New Caledonia, Northern Mariana Islands, Palau, Papua New Guinea, Samoa, Solomon Islands, Tonga, Tuvalu, Vanuatu, and Wallis and Futuna Islands for the year 2000. We employed a hybrid supervised and unsupervised image classification technique on a total of 128 Landsat scenes acquired between 1999 and 2004, and validated the results using existing geographic information system (GIS) datasets, high-resolution imagery, and published literature. We also drew a comparative analysis with the mangrove forest inventory published by the Food and Agriculture Organization (FAO) of the United Nations. Our estimate shows a total of 623,755 hectares of mangrove forests in the Pacific region, an increase of 18% over FAO's estimate. Although mangrove forests are disproportionately distributed toward a few larger islands in the western Pacific, they are also significant on many smaller islands.
A unified approach for EIT imaging of regional overdistension and atelectasis in acute lung injury.
Gómez-Laberge, Camille; Arnold, John H; Wolf, Gerhard K
2012-03-01
Patients with acute lung injury or acute respiratory distress syndrome (ALI/ARDS) are vulnerable to ventilator-induced lung injury. Although this syndrome affects the lung heterogeneously, mechanical ventilation is not guided by regional indicators of potential lung injury. We used electrical impedance tomography (EIT) to estimate the extent of regional lung overdistension and atelectasis during mechanical ventilation. Techniques for tidal breath detection, lung identification, and regional compliance estimation were combined with the Graz consensus on EIT lung imaging (GREIT) algorithm. Nine ALI/ARDS patients were monitored during stepwise increases and decreases in airway pressure. Our method detected individual breaths with 96.0% sensitivity and 97.6% specificity. The duration and volume of tidal breaths erred on average by 0.2 s and 5%, respectively. Respiratory system compliance from EIT and ventilator measurements had a correlation coefficient of 0.80. Stepwise increases in pressure could reverse atelectasis in 17% of the lung. At the highest pressures, 73% of the lung became overdistended. During stepwise decreases in pressure, previously-atelectatic regions remained open at sub-baseline pressures. We recommend that the proposed approach be used in collaborative research of EIT-guided ventilation strategies for ALI/ARDS.
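As a rough illustration of the tidal-breath detection step, the sketch below counts breaths in a synthetic global impedance waveform by locating upward zero-crossings of the smoothed, mean-removed signal; the frame rate, breath rate, and minimum-period threshold are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

fs = 25.0                        # Hz, an assumed EIT frame rate
t = np.arange(0, 60, 1 / fs)     # one minute of global impedance signal
rate = 18 / 60.0                 # 18 breaths per minute
rng = np.random.default_rng(5)
z = np.sin(2 * np.pi * rate * t) + 0.05 * rng.normal(size=t.size)

def count_breaths(z, fs, min_period=1.5):
    """Tidal-breath detection sketch: count inspiration onsets as upward
    zero-crossings of the mean-removed, 1-s-smoothed signal, enforcing a
    minimum breath period to reject noise-induced crossings."""
    kernel = np.ones(int(fs)) / fs
    s = np.convolve(z - z.mean(), kernel, mode="same")
    ups = np.flatnonzero((s[:-1] < 0) & (s[1:] >= 0))
    breaths = [ups[0]] if ups.size else []
    for i in ups[1:]:
        if (i - breaths[-1]) / fs >= min_period:
            breaths.append(i)
    return len(breaths)

n = count_breaths(z, fs)
```

A real pipeline would also pair each onset with the following end-expiration point to delimit the tidal breath and compute its regional volume change.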
Global Source Parameters from Regional Spectral Ratios for Yield Transportability Studies
NASA Astrophysics Data System (ADS)
Phillips, W. S.; Fisk, M. D.; Stead, R. J.; Begnaud, M. L.; Rowe, C. A.
2016-12-01
We use source parameters such as moment, corner frequency and high frequency rolloff as constraints in amplitude tomography, ensuring that spectra of well-studied earthquakes are recovered using the ensuing attenuation and site term model. We correct explosion data for path and site effects using such models, which allows us to test transportability of yield estimation techniques based on our best source spectral estimates. To develop a background set of source parameters, we applied spectral ratio techniques to envelopes of a global set of regional distance recordings from over 180,000 crustal events. Corner frequencies and moment ratios were determined via inversion using all event pairs within predetermined clusters, shifting to absolute levels using independently determined regional and teleseismic moments. The moment and corner frequency results can be expressed as stress drop, which has considerable scatter, yet shows dramatic regional patterns. We observe high stress in subduction zones along S. America, S. Mexico, the Banda Sea, and associated with the Yakutat Block in Alaska. We also observe high stress at the Himalayan syntaxes, the Pamirs, eastern Iran, the Caspian, the Altai-Sayan, and the central African rift. Low stress is observed along mid ocean spreading centers, the Afar rift, patches of convergence zones such as Nicaragua, the Zagros, Tibet, and the Tien Shan, among others. Mine blasts appear as low stress events due to their low corners and steep rolloffs. Many of these anomalies have been noted by previous studies, and we plan to compare results directly. As mentioned, these results will be used to constrain tomographic imaging, but can also be used in model validation procedures similar to the use of ground truth in location problems, and, perhaps most importantly, figure heavily in quality control of local and regional distance amplitude measurements.
Wang, Shuihua; Zhang, Yudong; Liu, Ge; Phillips, Preetha; Yuan, Ti-Fei
2016-01-01
Within the past decade, computer scientists have developed many methods using computer vision and machine learning techniques to detect Alzheimer's disease (AD) in its early stages. However, some of these methods are unable to achieve excellent detection accuracy, and several others are unable to locate AD-related regions. Hence, our goal was to develop a novel AD brain detection method. Our method was based on three-dimensional (3D) displacement-field (DF) estimation between subjects in the healthy elderly control group and the AD group. The 3D-DF was treated as the source of AD-related features. Three feature selection measures were used: the Bhattacharyya distance, Student's t-test, and Welch's t-test (WTT). Two non-parallel support vector machines, i.e., the generalized eigenvalue proximal support vector machine and the twin support vector machine (TSVM), were then used for classification. A 50 × 10-fold cross validation was implemented for statistical analysis. The results showed that "3D-DF+WTT+TSVM" achieved the best performance, with an accuracy of 93.05 ± 2.18, a sensitivity of 92.57 ± 3.80, a specificity of 93.18 ± 3.35, and a precision of 79.51 ± 2.86. This method also outperformed 13 state-of-the-art approaches. Additionally, we were able to detect 17 AD-related regions using a pure computer-vision technique. These regions include the sub-gyral region, inferior parietal lobule, precuneus, angular gyrus, lingual gyrus, supramarginal gyrus, postcentral gyrus, third ventricle, superior parietal lobule, thalamus, middle temporal gyrus, precentral gyrus, superior temporal gyrus, superior occipital gyrus, cingulate gyrus, culmen, and insula. These regions have been reported in recent publications. The 3D-DF is effective in detecting AD subjects and related regions.
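One of the three ranking measures, Welch's t-test, is easy to sketch on synthetic displacement-field features; the group sizes, feature count, and effect size below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical displacement-field features: 20 AD and 20 control subjects,
# 1000 voxel features; the first 50 voxels carry a true group difference.
X_ad = rng.normal(0.0, 1.0, (20, 1000))
X_hc = rng.normal(0.0, 1.0, (20, 1000))
X_ad[:, :50] += 1.5

def welch_t(a, b):
    """Per-feature Welch's t statistic (unequal variances assumed), one of
    the three ranking measures named in the abstract."""
    ma, mb = a.mean(axis=0), b.mean(axis=0)
    va, vb = a.var(axis=0, ddof=1), b.var(axis=0, ddof=1)
    return (ma - mb) / np.sqrt(va / len(a) + vb / len(b))

t_vals = welch_t(X_ad, X_hc)
top = np.argsort(-np.abs(t_vals))[:50]   # keep the 50 most discriminative voxels
```

The selected voxels would then feed the TSVM classifier; the selection itself must be repeated inside each cross-validation fold to avoid optimistic bias.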
Biases in measuring the brain: the trouble with the telencephalon.
LaDage, Lara D; Roth, Timothy C; Pravosudov, Vladimir V
2009-01-01
When correlating behavior with particular brain regions thought responsible for the behavior, a different region of the brain is usually measured as a control region. This technique is often used to relate spatial processes with the hippocampus, while concomitantly controlling for overall brain changes by measuring the remainder of the telencephalon. We have identified two methods in the literature (the HOM and TTM) that estimate the volume of the telencephalon, although the majority of studies are ambiguous regarding the method employed in measuring the telencephalon. Of these two methods, the HOM might produce an artificial correlation between the telencephalon and the hippocampus, and this bias could result in a significant overestimation of the relative hippocampal volume and a significant underestimation of the telencephalon volume, both of which are regularly used in large comparative analyses. We suggest that future studies should avoid this method and all studies should explicitly delineate the procedures used when estimating brain volumes. Copyright 2009 S. Karger AG, Basel.
NASA Astrophysics Data System (ADS)
Huang, Xiaokun; Zhang, You; Wang, Jing
2017-03-01
Four-dimensional (4D) cone-beam computed tomography (CBCT) enables motion tracking of anatomical structures and removes artifacts introduced by motion. However, the imaging time/dose of 4D-CBCT is substantially longer/higher than that of traditional 3D-CBCT. We previously developed a simultaneous motion estimation and image reconstruction (SMEIR) algorithm to reconstruct high-quality 4D-CBCT from a limited number of projections, reducing the imaging time/dose. However, the accuracy of SMEIR is limited in reconstructing low-contrast regions with fine structural details. In this study, we incorporate biomechanical modeling into the SMEIR algorithm (SMEIR-Bio) to improve the reconstruction accuracy in low-contrast regions with fine details. The efficacy of SMEIR-Bio is evaluated using 11 lung patient cases and compared to that of the original SMEIR algorithm. Qualitative and quantitative comparisons showed that SMEIR-Bio greatly enhances the accuracy of the reconstructed 4D-CBCT volume in low-contrast regions, which can potentially benefit multiple clinical applications including treatment outcome analysis.
Multi Seasonal and Diurnal Characterization of Sensible Heat Flux in an Arid Land Environment
NASA Astrophysics Data System (ADS)
Al-Mashharawi, S.; Aragon, B.; McCabe, M.
2017-12-01
In sparsely vegetated arid and semi-arid regions, the available energy is transformed primarily into sensible heat, with little to no energy partitioned into latent heat. Yet the characterization of bare-soil arid environments is rather poorly understood in the context of local, regional, and global energy budgets. Using data from a long-term surface-layer scintillometer and a co-located meteorological installation, we examine the diurnal and seasonal patterns of the sensible heat flux and the ratio of net radiation to soil heat flux. We do this over a bare desert soil located adjacent to an irrigated agricultural field in the central region of Saudi Arabia. The results of this exploratory analysis can be used to inform remote sensing techniques for surface flux estimation, to derive and monitor soil heat flux dynamics, to estimate the heat-transfer resistance and the thermal roughness length over bare soils, and to better inform efforts to model the advective effects that complicate the accurate representation of agricultural energy budgets in the arid zone.
Gravity anomaly map of Mars and Moon and analysis of Venus gravity field: New analysis procedures
NASA Technical Reports Server (NTRS)
1984-01-01
The technique of harmonic splines allows direct estimation of a complete planetary gravity field (geoid, gravity, and gravity gradients) everywhere over the planet's surface. Harmonic spline results of Venus are presented as a series of maps at spacecraft and constant altitudes. Global (except for polar regions) and local relations of gravity to topography are described.
Southwest Alaska Regional Geothermal Energy Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holdmann, Gwen
2015-04-30
Drilling and temperature-logging campaigns between the late 1970s and early 1980s measured temperatures at Pilgrim Hot Springs in excess of 90°C. Between 2010 and 2014, the University of Alaska used a variety of methods, including geophysical surveys, remote sensing techniques, heat-budget modeling, and additional drilling, to better understand the resource and estimate the available geothermal energy.
NASA Technical Reports Server (NTRS)
Rigney, Matt; Jedlovec, Gary; LaFontaine, Frank; Shafer, Jaclyn
2010-01-01
Heat and moisture exchange between the ocean surface and the atmosphere plays an integral role in short-term, regional numerical weather prediction (NWP). Current sea surface temperature (SST) products lack both the spatial and the temporal resolution needed to accurately capture small-scale features that affect heat and moisture fluxes. NASA satellite data are used to produce a high spatial and temporal resolution SST analysis using an optimal interpolation (OI) technique.
Dynamic connectivity regression: Determining state-related changes in brain connectivity
Cribben, Ivor; Haraldsdottir, Ragnheidur; Atlas, Lauren Y.; Wager, Tor D.; Lindquist, Martin A.
2014-01-01
Most statistical analyses of fMRI data assume that the nature, timing and duration of the psychological processes being studied are known. However, often it is hard to specify this information a priori. In this work we introduce a data-driven technique for partitioning the experimental time course into distinct temporal intervals with different multivariate functional connectivity patterns between a set of regions of interest (ROIs). The technique, called Dynamic Connectivity Regression (DCR), detects temporal change points in functional connectivity and estimates a graph, or set of relationships between ROIs, for data in the temporal partition that falls between pairs of change points. Hence, DCR allows for estimation of both the time of change in connectivity and the connectivity graph for each partition, without requiring prior knowledge of the nature of the experimental design. Permutation and bootstrapping methods are used to perform inference on the change points. The method is applied to various simulated data sets as well as to an fMRI data set from a study (N=26) of a state anxiety induction using a socially evaluative threat challenge. The results illustrate the method’s ability to observe how the networks between different brain regions changed with subjects’ emotional state. PMID:22484408
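A drastically simplified version of the change-point idea can be sketched for two ROIs: scan candidate split points and score each by how much better separate correlation fits explain the two sides. DCR itself estimates full graphs over many ROIs and uses permutation and bootstrap inference; none of that is reproduced in this toy version.

```python
import numpy as np

rng = np.random.default_rng(1)
T, cp = 200, 120    # time points; true connectivity change at t = 120
# Two ROIs: uncorrelated before the change point, strongly correlated after.
x = rng.normal(size=T)
y = rng.normal(size=T)
y[cp:] = 0.9 * x[cp:] + np.sqrt(1 - 0.9**2) * y[cp:]

def best_split(x, y, margin=30):
    """Scan candidate change points; score each split by the summed Gaussian
    log-likelihood term from fitting a separate correlation to each side
    (a simplified stand-in for DCR's per-partition graph estimation)."""
    def nll(a, b):
        r = np.corrcoef(a, b)[0, 1]
        return 0.5 * len(a) * np.log(1 - r**2)   # bivariate Gaussian log-det term
    scores = [nll(x[:t], y[:t]) + nll(x[t:], y[t:])
              for t in range(margin, len(x) - margin)]
    return margin + int(np.argmin(scores))

est = best_split(x, y)
```

The margin guards against fitting correlations to segments too short to be meaningful, which plays the role of DCR's minimum-partition constraint.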
Determining the source characteristics of explosions near the Earth's surface
Pasyanos, Michael E.; Ford, Sean R.
2015-04-09
We present a method to determine the source characteristics of explosions near the air-earth interface. The technique is an extension of the regional amplitude envelope method and now accounts for the reduction of seismic amplitudes as the depth of the explosion approaches the free surface and less energy is coupled into the ground. We first apply the method to the Humming Roadrunner series of shallow explosions in New Mexico, where the yields and depths are known. From these tests, we find that knowledge of the material properties is important both for source coupling/excitation and for the free-surface effect. Although there is the expected tradeoff between depth and yield due to coupling effects, the estimated yields are generally close to the known values when the depth is constrained to the free surface. We then apply the method to a regionally recorded explosion in Syria. We estimate an explosive yield less than the 60 tons claimed by sources in the open press. The modifications to the method allow us to apply the technique to new classes of events, but we will need a better understanding of explosion source models and of the properties of additional geologic materials.
NASA Astrophysics Data System (ADS)
Peterson, C. D.; Lisiecki, L. E.; Gebbie, G.
2013-12-01
The crux of carbon redistribution over the deglaciation centers on the ocean, where the isotopic signature of terrestrial carbon (δ13C of terrestrial carbon = -25‰) is observed as a 0.3-0.7‰ shift in benthic foraminiferal δ13C. Deglacial mean-ocean δ13C estimates vary due to different subsets of benthic δ13C data and different methods of weighting the mean δ13C by volume. Here, we present a detailed 1-to-1 comparison of two methods of calculating mean δ13C change and uncertainty estimates using the same set of 493 benthic Cibicidoides spp. δ13C measurements for the LGM and Late Holocene. The first method divides the ocean into 8 regions and uses simple line fits to describe the distribution of δ13C data for each timeslice over 0.5-5 km depth. With these line fits, we estimate the δ13C value at 100-meter intervals and weight those estimates by the regional volume at each depth slice. The mean-ocean δ13C is the sum of these volume-weighted regional δ13C estimates, and the uncertainty of these mean-ocean δ13C estimates is computed using Monte Carlo simulations. The whole-ocean δ13C change is estimated using extrapolated surface- and deep-ocean δ13C estimates and an assumed δ13C value for the Southern Ocean. This method yields an estimated LGM-to-Holocene change of 0.38 ± 0.07‰ for 0.5-5 km and 0.35 ± 0.16‰ for the whole ocean (Peterson et al., 2013, submitted to Paleoceanography). The second method reconstructs glacial and modern δ13C by combining the same data compilation as above with a steady-state ocean circulation model (Gebbie, 2013, submitted to Paleoceanography). The result is a tracer distribution on a 4-by-4 degree horizontal resolution grid with 23 vertical levels, and an estimate of the distribution's uncertainty that accounts for the distinct modern and glacial water-mass geometries.
From both methods, we compare the regional δ13C estimates (0.5-5 km), surface δ13C estimates (0-0.5 km), deep δ13C estimates (>5 km), Southern Ocean δ13C estimates, and finally whole-ocean δ13C estimates. Additionally, we explore the sensitivity of our mean δ13C estimates to our region and depth boundaries. Such a detailed comparison broadens our understanding of the limitations of sparse geologic data sets and deepens our understanding of deglacial δ13C changes.
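The first method's volume-weighting and Monte Carlo steps can be sketched as follows; the two regional line fits, residual scatters, and volume fractions are invented placeholders, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
depths = np.arange(0.5, 5.0, 0.1)    # km, the 0.5-5 km range, 100 m steps
# Hypothetical regional d13C depth fits: per-mil intercept, slope per km,
# residual scatter, and the fraction of ocean volume in each region.
regions = {
    "region_A": dict(slope=-0.10, intercept=1.2, sigma=0.15, vol_frac=0.3),
    "region_B": dict(slope=-0.05, intercept=0.4, sigma=0.15, vol_frac=0.7),
}

def mean_d13c_mc(regions, depths, n_mc=2000):
    """Volume-weighted mean d13C with Monte Carlo uncertainty: perturb each
    regional line fit by its residual scatter (simplified to a single level
    shift per draw) and propagate into the volume-weighted sum."""
    sims = np.zeros(n_mc)
    for p in regions.values():
        profile = p["intercept"] + p["slope"] * depths   # d13C at each depth
        pert = rng.normal(0.0, p["sigma"] / np.sqrt(len(depths)), n_mc)
        sims += p["vol_frac"] * (profile.mean() + pert)
    return sims.mean(), sims.std()

mean, sd = mean_d13c_mc(regions, depths)
```

Differencing two such estimates (LGM minus Late Holocene) and combining their Monte Carlo spreads in quadrature gives the change and its uncertainty.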
ÖGRO survey on radiotherapy capacity in Austria : Status quo and estimation of future demands.
Zurl, Brigitte; Bayerl, Anja; De Vries, Alexander; Geinitz, Hans; Hawliczek, Robert; Knocke-Abulesz, Tomas-Henrik; Lukas, Peter; Pötter, Richard; Raunik, Wolfgang; Scholz, Brigitte; Schratter-Sehn, Annemarie; Sedlmayer, Felix; Seewald, Dietmar; Selzer, Edgar; Kapp, Karin S
2018-04-01
A comprehensive evaluation of the current national and regional radiotherapy capacity in Austria, with an estimation of demands for 2020 and 2030, was performed by the Austrian Society for Radiation Oncology, Radiobiology and Medical Radiophysics (ÖGRO). All Austrian centers provided data on the number of megavoltage (MV) units, treatment series, fractions, percentage of retreatments and complex treatment techniques, as well as the daily operating hours for the year 2014. In addition, waiting times until the beginning of radiotherapy were prospectively recorded over the first quarter of 2015. National and international epidemiological prediction data were used to estimate future demands. For a population of 8.51 million, 43 MV units were available. In 14 radiooncological centers, a total of 19,940 series with a mean number of 464 patients per MV unit/year and a mean fraction number of 20 (range 16-24) per case were recorded. The average re-irradiation ratio was 14%. The survey on waiting times until start of treatment showed provision shortages in 40% of centers, with a mean waiting time of 13.6 days (range 0.5-29.3 days) and a mean maximum waiting time of 98.2 days. Of all centers, 21% had no or only a limited ability to deliver complex treatment techniques. Predictions for 2020 and 2030 indicate an increased need in the overall number of MV units, to totals of 63 and 71, respectively. This ÖGRO survey revealed major regional differences in radiooncological capacity. Considering epidemiological developments, an aggravation of the situation can be expected shortly. This analysis serves as a basis for improved public regional health care planning.
Plantar fascia segmentation and thickness estimation in ultrasound images.
Boussouar, Abdelhafid; Meziane, Farid; Crofts, Gillian
2017-03-01
Ultrasound (US) imaging offers significant potential in diagnosis of plantar fascia (PF) injury and monitoring treatment. In particular, US imaging has been shown to be reliable in foot and ankle assessment and offers a real-time, effective imaging technique that is able to reliably confirm structural changes, such as thickening, and identify changes in the internal echo structure associated with diseased or damaged tissue. Despite these advantages, US images are difficult to interpret during medical assessment. This is partly due to the size and position of the PF in relation to the adjacent tissues. It is therefore a requirement to devise a system that allows better and easier interpretation of PF ultrasound images during diagnosis. This study proposes an automatic segmentation approach which, for the first time, extracts ultrasound data to estimate size across three sections of the PF (rearfoot, midfoot and forefoot). This segmentation method uses an artificial neural network (ANN) module to classify small overlapping patches as belonging or not belonging to the region of interest (ROI) of the PF tissue. Feature ranking and selection techniques were applied as a post-processing step after feature extraction to reduce the dimensionality and number of the extracted features. The trained ANN classifies the overlapping image patches into PF and non-PF tissue, and is then used to segment the desired PF region. The PF thickness was calculated using two different methods: distance-transformation and area-length calculation algorithms. This new approach is capable of accurately segmenting the PF region, differentiating it from surrounding tissues, and estimating its thickness. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
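The distance-transformation route to thickness can be sketched on a toy binary mask: thickness is taken as twice the largest distance from a fascia pixel to the background. The brute-force nearest-background search below stands in for a proper Euclidean distance transform, and the pixel spacing is an assumption.

```python
import numpy as np

# Toy binary segmentation mask (True = plantar fascia), standing in for the
# ANN patch-classification output described in the abstract.
mask = np.zeros((20, 40), dtype=bool)
mask[8:13, 5:35] = True   # a 5-pixel-thick band

def thickness_distance_transform(mask, spacing_mm=0.1):
    """Thickness via the distance-transform idea: twice the largest distance
    from an interior pixel to the background, computed here by brute force
    over all foreground/background pixel pairs for clarity."""
    fg = np.argwhere(mask)
    bg = np.argwhere(~mask)
    # distance from every foreground pixel to its nearest background pixel
    d = np.min(np.linalg.norm(fg[:, None, :] - bg[None, :, :], axis=2), axis=1)
    return 2.0 * d.max() * spacing_mm

thick = thickness_distance_transform(mask)
```

On this mask the estimate is 0.6 mm against a true band of 0.5 mm; the discrete transform measures from pixel centers, so a small quantization bias is expected and would be corrected or calibrated in practice.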
Estimating Canopy Water Content of Chaparral Shrubs Using Optical Methods
NASA Technical Reports Server (NTRS)
Ustin, Susan L.; Scheer, George; Castaneda, Claudia M.; Jacquemoud, Stephane; Roberts, Dar; Green, Robert O.
1996-01-01
California chaparral ecosystems are exceptionally fire adapted and typically are subject to wildfire at decadal to century frequencies. The hot, dry Mediterranean-climate summers and the chaparral communities of the Santa Monica Mountains make wildfire one of the most serious economic and life-threatening natural disasters faced by the region. Additionally, the steep fire-burned hillsides are subject to erosion, slumpage, and mud slides during the winter rains. The Santa Monica Mountain Zone (SMMZ) is a 104,000 ha east-west-trending range with 607 m of vertical relief, located in the center of the greater Los Angeles region. A series of fires in the fall of 1993 burned from Simi Valley to Santa Monica within a few hours. Developing techniques to monitor fire hazard and predict the spread of fire is of major concern to the region. One key factor in fire susceptibility is the water content of the vegetation canopy, and imaging spectrometry and remote sensing techniques may provide a tool for monitoring it.
Mapping of the Land Cover Spatiotemporal Characteristics in Northern Russia Caused by Climate Change
NASA Astrophysics Data System (ADS)
Panidi, E.; Tsepelev, V.; Torlopova, N.; Bobkov, A.
2016-06-01
The study is devoted to the investigation of regional climate change in Northern Russia. Because of the sparseness of the meteorological observation network in northern regions, we investigate the capability of remotely sensed vegetation cover to serve as an indicator of climate change at the regional scale. In previous studies, we identified a statistically significant relationship between the increase of surface air temperature and the increase of shrub vegetation productivity. We verified this relationship using ground observation data collected at meteorological stations and Normalised Difference Vegetation Index (NDVI) data produced from Terra/MODIS satellite imagery. Additionally, we designed a technique for separating growing seasons for detailed investigation of the land cover (shrub cover) dynamics. Growing seasons are the periods when the temperature exceeds +5°C and +10°C; these periods determine the vegetation productivity conditions (i.e., conditions that allow growth of the phytomass). We have found that the trend signs for surface air temperature and NDVI coincide on plains and river floodplains. At the current stage of the study, we are working on an automated mapping technique for estimating the direction and magnitude of climate change in Northern Russia. This technique will make it possible to extrapolate the identified relationship between land cover and climate onto territories with a sparse network of meteorological stations. We have produced gridded maps of NDVI and NDWI for a test area in the European part of Northern Russia covered with shrub vegetation. Based on these maps, we can determine the frames of the growing seasons for each grid cell, which will allow us to obtain gridded maps of the NDVI linear trend over growing seasons on a cell-by-cell basis. The trend maps can be used as indicative maps for estimating climate change in the studied areas.
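The cell-by-cell trend mapping can be sketched with a closed-form least-squares slope per grid cell; the NDVI stack below is synthetic, with one deliberately "greening" cell, and the grid size and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(2000, 2016)
# Hypothetical stack: yearly growing-season NDVI for a 3x3 grid of cells.
ndvi = 0.4 + rng.normal(0, 0.02, (len(years), 3, 3))
ndvi[:, 0, 0] += 0.005 * (years - years[0])   # one cell with a greening trend

def trend_map(ndvi, years):
    """Cell-by-cell least-squares linear trend (NDVI units per year): the
    slope formula sum(x*y)/sum(x^2) applied with centered years, vectorized
    over all grid cells at once."""
    x = years - years.mean()
    return np.tensordot(x, ndvi - ndvi.mean(axis=0), axes=(0, 0)) / (x @ x)

slopes = trend_map(ndvi, years)
```

In the full technique each cell's yearly value would first be restricted to that cell's own growing-season frames before the trend is fit.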
Error-Rate Bounds for Coded PPM on a Poisson Channel
NASA Technical Reports Server (NTRS)
Moision, Bruce; Hamkins, Jon
2009-01-01
Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.
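For context, the regime that such bounds describe can be illustrated with a Monte Carlo estimate of the symbol-error rate of *uncoded* M-ary PPM on a Poisson channel (the derivations above address the much harder coded case). The maximum-count detection rule and uniform tie-breaking below are standard assumptions, not details from this work:

```python
import random, math

def ppm_ser_mc(M, ns, nb, trials=20000, seed=1):
    """Monte Carlo symbol-error rate for uncoded M-ary PPM on a Poisson
    channel: the signal slot has mean count ns + nb, the other M-1 slots
    have mean nb.  ML detection picks the slot with the largest photon
    count, breaking ties uniformly at random."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's multiplication method (adequate for small means)
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    errors = 0
    for _ in range(trials):
        counts = [poisson(ns + nb)] + [poisson(nb) for _ in range(M - 1)]
        best = max(counts)
        winners = [i for i, c in enumerate(counts) if c == best]
        if rng.choice(winners) != 0:   # slot 0 carried the symbol
            errors += 1
    return errors / trials
```

With no background light (nb = 0) the only error mechanism is an erasure tie at zero photons, so the SER approaches exp(-ns)·(M-1)/M, illustrating the steep high-SNR decay that the analytical bounds capture without simulation.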
Effects of trade openness and market scale on different regions
NASA Astrophysics Data System (ADS)
Tian, Renqu; Yang, Zisheng
2017-04-01
This paper revisits the relationship between growth, trade openness and market scale. Empirical studies have shown that lopsided regional development in China is an increasingly serious problem, while greater trade openness and market scale bring about more economic growth. We use a province-level data set of gross domestic product and socio-economic indicators, together with panel ordinary least squares and instrumental-variables estimation techniques, to explore the effects of trade openness and regional market scale on the three major economic regions. The results indicate the following. Firstly, the impact of market scale and trade openness on economic growth is found to be positive. Secondly, the overall regional disparity is attributable to trade openness, market scale and macroeconomic policies. Thirdly, the midland and western regions should take advantage of their geographical location and resources to expand exports and narrow the regional difference.
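A fixed-effects panel OLS of the kind mentioned can be sketched with the within transformation (demeaning within each province before ordinary least squares). The variable names and synthetic numbers are illustrative, not the paper's data:

```python
import numpy as np

def panel_ols(y, X, groups):
    """Pooled OLS with group (e.g. province) fixed effects via the
    within transformation: demean y and X inside each group, then solve
    ordinary least squares on the demeaned data."""
    y = np.asarray(y, float)
    X = np.asarray(X, float)
    groups = np.asarray(groups)
    yd, Xd = y.copy(), X.copy()
    for g in np.unique(groups):
        m = groups == g
        yd[m] -= y[m].mean()
        Xd[m] -= X[m].mean(axis=0)
    beta, *_ = np.linalg.lstsq(Xd, yd, rcond=None)
    return beta
```

Because the transformation removes any time-invariant province effect exactly, the slope on (say) a trade-openness regressor is recovered without estimating the effects themselves.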
Locating Local Earthquakes Using Single 3-Component Broadband Seismological Data
NASA Astrophysics Data System (ADS)
Das, S. B.; Mitra, S.
2015-12-01
We devised a technique to locate local earthquakes using a single 3-component broadband seismograph and analyze the factors governing the accuracy of our results. The need for such a technique arises in regions with a sparse seismic network. In state-of-the-art location algorithms, a minimum of three station recordings is required to obtain well-resolved locations. However, a problem arises when an event is recorded by fewer than three stations. This may happen for the following reasons: (a) down time of stations in a sparse network; (b) geographically isolated regions with limited logistic support for setting up a large network; (c) regions with insufficient economic resources for financing a multi-station network; and (d) poor signal-to-noise ratio for smaller events at most stations, except the one in their closest vicinity. Our technique provides a workable solution to these scenarios, although it is strongly dependent on the velocity model of the region. Our method uses three processing steps: (a) ascertain the back-azimuth of the event from the P-wave particle motion recorded on the horizontal components; (b) estimate the hypocentral distance using the S-P time; and (c) ascertain the emergent angle from the vertical and radial components. Once these are obtained, one can ray-trace through the 1-D velocity model to estimate the hypocentral location. We tested our method on synthetic data, which produces results with 99% precision. With observed data, the accuracy of our results is very encouraging. The precision of our results depends on the signal-to-noise ratio (SNR) and the choice of the right band-pass filter to isolate the P-wave signal. We used our method on minor aftershocks (3 < mb < 4) of the 2011 Sikkim earthquake using data from the Sikkim Himalayan network. Locations of these events highlight the transverse strike-slip structure within the Indian plate, which was observed from source mechanism studies of the mainshock and larger aftershocks.
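Steps (a) and (b) of the processing can be sketched as follows. The covariance-eigenvector estimate of back-azimuth, the constant-velocity distance formula, and the velocity values are simplifying assumptions standing in for the full particle-motion analysis and 1-D ray tracing:

```python
import numpy as np

def backazimuth_deg(north, east):
    """Back-azimuth (degrees from north, modulo 180) from P-wave
    particle motion on the horizontal components, via the principal
    eigenvector of the 2x2 covariance of the N and E traces.  The
    180-degree ambiguity is resolved with the vertical component in
    practice; that step is omitted here."""
    cov = np.cov(np.vstack([north, east]))
    w, v = np.linalg.eigh(cov)
    n, e = v[:, np.argmax(w)]          # dominant polarization direction
    return np.degrees(np.arctan2(e, n)) % 180.0

def hypocentral_distance_km(ts_minus_tp, vp=6.0, vs=3.46):
    """Distance from the S-P time assuming constant velocities, a crude
    stand-in for ray tracing through a 1-D velocity model."""
    return ts_minus_tp * vp * vs / (vp - vs)
```

With the back-azimuth, distance, and emergent angle in hand, a single ray traced through the regional 1-D model fixes the hypocenter.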
Shi, Zhonglin; Wen, Anbang; Zhang, Xinbao; Yan, Dongchun
2011-10-01
The potential for using beryllium-7 ((7)Be) measurements to document soil redistribution associated with a heavy rainfall was evaluated on a bare purple soil plot in the Three Gorges Reservoir region of China. The results were compared with direct measurements from the traditional approaches of erosion pins and runoff plots. The study shows that estimates of soil loss from (7)Be are comparable with the monitoring results provided by erosion pins and runoff plots, and are also in agreement with the existing knowledge provided by (137)Cs measurements. The results obtained from this study demonstrate the potential of the (7)Be technique to quantify short-term erosion rates in these areas. Copyright © 2011 Elsevier Ltd. All rights reserved.
Cooperative Position Aware Mobility Pattern of AUVs for Avoiding Void Zones in Underwater WSNs
Javaid, Nadeem; Ejaz, Mudassir; Abdul, Wadood; Alamri, Atif; Almogren, Ahmad; Niaz, Iftikhar Azim; Guizani, Nadra
2017-01-01
In this paper, we propose two schemes: position-aware mobility pattern (PAMP) and cooperative PAMP (Co PAMP). The first is an optimization scheme that avoids void-hole occurrence and minimizes the uncertainty in the position estimation of gliders. The second is a cooperative routing scheme that reduces the packet drop ratio by using relay cooperation. Both techniques use gliders that stay at sojourn positions for a predefined time; at each sojourn position, self-confidence (s-confidence) and neighbor-confidence (n-confidence) regions are estimated for balanced energy consumption, and the transmission power of a glider is adjusted according to these confidence regions. Simulation results show that our proposed schemes outperform the compared existing scheme in terms of packet delivery ratio, void zones and energy consumption. PMID:28335377
Improving satellite-based post-fire evapotranspiration estimates in semi-arid regions
NASA Astrophysics Data System (ADS)
Poon, P.; Kinoshita, A. M.
2017-12-01
Climate change and anthropogenic factors contribute to the increased frequency, duration, and size of wildfires, which can alter ecosystem and hydrological processes. The loss of vegetation canopy and ground cover reduces interception and alters evapotranspiration (ET) dynamics in riparian areas, which can impact rainfall-runoff partitioning. Previous research evaluated the spatial and temporal trends of ET based on burn severity and observed an annual decrease of 120 mm on average for three years after fire. Building upon these results, this research focuses on the Coyote Fire in San Diego, California (USA), which burned a total of 76 km2 in 2003, to calibrate and improve satellite-based ET estimates in semi-arid regions affected by wildfire. The current work utilizes satellite-based products and techniques such as the Google Earth Engine application programming interface (API). Various ET models (e.g., the Operational Simplified Surface Energy Balance model (SSEBop)) are compared to the latent heat flux from two AmeriFlux eddy covariance towers, Sky Oaks Young (US-SO3) and Old Stand (US-SO2), from 2000 to 2015. The Old Stand tower has a low burn severity and the Young Stand tower has a moderate to high burn severity; both towers are used to validate spatial ET estimates. Furthermore, variables and indices such as the Enhanced Vegetation Index (EVI), Normalized Difference Moisture Index (NDMI), and Normalized Burn Ratio (NBR) are utilized to evaluate satellite-based ET through a multivariate statistical analysis at both sites. This point-scale study will help improve ET estimates in spatially diverse regions. Results from this research will contribute to the development of a post-wildfire ET model for semi-arid regions. Accurate estimates of post-fire ET will provide a better representation of vegetation and hydrologic recovery, which can be used to improve hydrologic models and predictions.
NASA Astrophysics Data System (ADS)
Kromskii, S. D.; Pavlenko, O. V.; Gabsatarova, I. P.
2018-03-01
Based on the Anapa (ANN) seismic station records of 40 earthquakes (MW > 3.9) that occurred within 300 km of the station from 2002 to the present, the source parameters and the quality factor Q(f) of the Earth's crust and upper mantle are estimated for S-waves in the 1-8 Hz frequency band. Regional coda analysis techniques, which allow separating the effects associated with the seismic source (source effects) from those associated with the propagation path of the seismic waves (path effects), are employed. The Q-factor estimates are obtained in the form Q(f) = 90 × f^0.7 for epicentral distances r < 120 km and Q(f) = 90 × f^1.0 for r > 120 km. The established Q(f) and source parameters are close to the estimates for Central Japan, which is probably due to the similar tectonic structure of the regions. The shapes of the source spectra are found to be independent of earthquake magnitude in the range 3.9-5.6; however, the radiation of the high-frequency components (f > 4-5 Hz) is enhanced with the depth of the source (down to h ≈ 60 km). The Q(f) estimates determined from the records of the Sochi, Anapa, and Kislovodsk seismic stations allowed a more accurate determination of the seismic moments and magnitudes of the Caucasian earthquakes. The studies will be continued to obtain the Q(f) estimates, geometrical spreading functions, and frequency-dependent amplification of seismic waves in the Earth's crust in other regions of the Northern Caucasus.
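The two estimation steps implied above, single-band coda-Q from the coda amplitude decay rate and a power-law fit Q(f) = Q0·f^n across frequency bands, can be sketched as follows. The t^-1 geometrical-spreading form and the least-squares approach are standard coda-method assumptions, not details taken from the paper:

```python
import numpy as np

def q_from_coda(t, amp, f):
    """Single-frequency coda-Q: for narrow-band coda amplitude
    A(t) = S * t**-1 * exp(-pi*f*t/Q), regress ln(A*t) on lapse time t;
    the slope equals -pi*f/Q."""
    slope, _ = np.polyfit(t, np.log(amp * t), 1)
    return -np.pi * f / slope

def fit_q_model(freqs, q_values):
    """Fit Q(f) = Q0 * f**n by least squares in log-log space, the
    usual parameterization of frequency-dependent attenuation."""
    n, logq0 = np.polyfit(np.log(freqs), np.log(q_values), 1)
    return np.exp(logq0), n
```

Running `q_from_coda` band by band and then `fit_q_model` over the per-band values yields estimates in the same Q0·f^n form as the Q(f) = 90 × f^0.7 result quoted above.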
NASA Technical Reports Server (NTRS)
Dickey, Jean O.
1999-01-01
Uncertainty over the response of the atmospheric hydrological cycle (particularly the distribution of water vapor and cloudiness) to anthropogenic forcing is a primary source of doubt in current estimates of global climate sensitivity, which raises severe difficulties in evaluating its likely societal impact. Fortunately, a variety of advanced techniques and sensors are beginning to shed new light on the atmospheric hydrological cycle. One of the most promising makes use of the sensitivity of the Global Positioning System (GPS) to the thermodynamic state, and in particular the water vapor content, of the atmosphere through which the radio signals propagate. Our strategy to derive the maximum benefit for hydrological studies from the rapidly increasing GPS data stream will proceed in three stages: (1) systematically analyze and archive quality-controlled retrievals using state-of-the-art techniques; (2) employ both currently available and innovative assimilation procedures to incorporate these determinations into advanced regional and global atmospheric models and assess their effects; and (3) apply the results to investigate selected scientific issues of relevance to regional and global hydrological studies. An archive of GPS-based estimates of total zenith delay (TZD) and, where applicable, water vapor has been established with expanded automated quality control. The accuracy of the GPS estimates is being monitored, and the investigation of systematic errors is ongoing using comparisons with water vapor radiometers. Meteorological packages have been implemented. The accuracy and utilization of the TZD estimates have been improved by implementing a troposphere gradient model. GPS-based gradients have been validated as real atmospheric moisture gradients, establishing a link between the estimated gradients and the passage of weather fronts.
We have developed a generalized ray tracing inversion scheme that can be used to analyze occultation data acquired from space- or land-based receivers. The National Center for Atmospheric Research mesoscale model (version MM5) has been adapted for Southern California, and assimilation studies are underway. Additional information is contained in the original.
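The conversion from GPS zenith wet delay to precipitable water that underlies such retrievals can be sketched with the standard Bevis et al. relation. The refractivity constants below are commonly used values, and Tm (the water-vapor-weighted mean temperature of the atmosphere) must in practice be estimated from surface temperature or a model:

```python
def precipitable_water_mm(zwd_mm, tm_kelvin):
    """Convert GPS zenith wet delay (ZWD, mm) to precipitable water
    (PW, mm) via PW = Pi(Tm) * ZWD, with the dimensionless factor
    Pi = 1e6 / (rho_w * Rv * (k3/Tm + k2')) ~ 0.15."""
    rho_w = 1000.0       # kg m^-3, density of liquid water
    Rv = 461.5           # J kg^-1 K^-1, gas constant of water vapor
    k2_prime = 0.221     # K Pa^-1  (22.1 K hPa^-1)
    k3 = 3.739e3         # K^2 Pa^-1 (3.739e5 K^2 hPa^-1)
    pi_factor = 1.0e6 / (rho_w * Rv * (k3 / tm_kelvin + k2_prime))
    return pi_factor * zwd_mm
```

For a typical Tm near 273 K, Pi is about 0.156, so a 100 mm wet delay corresponds to roughly 15-16 mm of precipitable water; Pi grows slowly with Tm.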
NASA Astrophysics Data System (ADS)
Ganesan, A.; Lunt, M. F.; Rigby, M. L.; Chatterjee, A.; Boesch, H.; Parker, R.; Prinn, R. G.; van der Schoot, M. V.; Krummel, P. B.; Tiwari, Y. K.; Mukai, H.; Machida, T.; Terao, Y.; Nomura, S.; Patra, P. K.
2015-12-01
We present an analysis of the regional methane (CH4) budget from South Asia, using new measurements and new modelling techniques. South Asia contains some of the largest anthropogenic CH4 sources in the world, mainly from rice agriculture and ruminants. However, emissions from this region have been highly uncertain largely due to insufficient constraints from atmospheric measurements. Compared to parts of the developed world, which have well-developed monitoring networks, South Asia is very under-sampled, particularly given its importance to the global CH4 budget. Over the past few years, data have been collected from a variety of surface sites around the region, ranging from in situ to flask-based sampling. We have used these data, in conjunction with column methane data from the GOSAT satellite, to quantify emissions at a regional scale. Using the Met Office's Lagrangian NAME model, we calculated sensitivities to surface fluxes at 12 km resolution, allowing us to simulate the high-resolution impacts of emissions on concentrations. In addition, we used a newly developed hierarchical Bayesian inverse estimation scheme to estimate regional fluxes over the period of 2012-2014 in addition to ancillary "hyper-parameters" that characterize uncertainties in the system. Through this novel approach, we have characterized the effect of "aggregation" errors, model uncertainties as well as the effects of correlated errors when using regional measurement networks. We have also assessed the effects of biases on the GOSAT CH4 retrievals, which has been made possible for the first time for this region through the expanded surface measurements. 
In this talk, we will discuss a) regional CH4 fluxes from South Asia, with a particular focus on the densely populated Indo-Gangetic Plains b) derived model uncertainties, including the effects of correlated errors c) the impacts of combining surface and satellite data for emissions estimation in regions where poor satellite validation exists and d) the challenges in estimating emissions for regions of the world with a sparse measurement network.
NASA Astrophysics Data System (ADS)
Kim, Beomgeun; Seo, Dong-Jun; Noh, Seong Jin; Prat, Olivier P.; Nelson, Brian R.
2018-01-01
A new technique for merging radar precipitation estimates and rain gauge data is developed and evaluated to improve multisensor quantitative precipitation estimation (QPE), in particular, of heavy-to-extreme precipitation. Unlike the conventional cokriging methods which are susceptible to conditional bias (CB), the proposed technique, referred to herein as conditional bias-penalized cokriging (CBPCK), explicitly minimizes Type-II CB for improved quantitative estimation of heavy-to-extreme precipitation. CBPCK is a bivariate version of extended conditional bias-penalized kriging (ECBPK) developed for gauge-only analysis. To evaluate CBPCK, cross validation and visual examination are carried out using multi-year hourly radar and gauge data in the North Central Texas region in which CBPCK is compared with the variant of the ordinary cokriging (OCK) algorithm used operationally in the National Weather Service Multisensor Precipitation Estimator. The results show that CBPCK significantly reduces Type-II CB for estimation of heavy-to-extreme precipitation, and that the margin of improvement over OCK is larger in areas of higher fractional coverage (FC) of precipitation. When FC > 0.9 and hourly gauge precipitation is > 60 mm, the reduction in root mean squared error (RMSE) by CBPCK over radar-only (RO) is about 12 mm while the reduction in RMSE by OCK over RO is about 7 mm. CBPCK may be used in real-time analysis or in reanalysis of multisensor precipitation for which accurate estimation of heavy-to-extreme precipitation is of particular importance.
Olson, Scott A.; with a section by Veilleux, Andrea G.
2014-01-01
This report provides estimates of flood discharges at selected annual exceedance probabilities (AEPs) for streamgages in and adjacent to Vermont and equations for estimating flood discharges at AEPs of 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent (recurrence intervals of 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-years, respectively) for ungaged, unregulated, rural streams in Vermont. The equations were developed using generalized least-squares regression. Flood-frequency and drainage-basin characteristics from 145 streamgages were used in developing the equations. The drainage-basin characteristics used as explanatory variables in the regression equations include drainage area, percentage of wetland area, and the basin-wide mean of the average annual precipitation. The average standard errors of prediction for estimating the flood discharges at the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent AEP with these equations are 34.9, 36.0, 38.7, 42.4, 44.9, 47.3, 50.7, and 55.1 percent, respectively. Flood discharges at selected AEPs for streamgages were computed by using the Expected Moments Algorithm. To improve estimates of the flood discharges for given exceedance probabilities at streamgages in Vermont, a new generalized skew coefficient was developed. The new generalized skew for the region is a constant, 0.44. The mean square error of the generalized skew coefficient is 0.078. This report describes a technique for using results from the regression equations to adjust an AEP discharge computed from a streamgage record. This report also describes a technique for using a drainage-area adjustment to estimate flood discharge at a selected AEP for an ungaged site upstream or downstream from a streamgage. The final regression equations and the flood-discharge frequency data used in this study will be available in StreamStats. StreamStats is a World Wide Web application providing automated regression-equation solutions for user-selected sites on streams.
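The drainage-area adjustment mentioned in the report can be sketched as a simple area-ratio transfer from the streamgage to the ungaged site. The exponent here is purely illustrative; the report derives its own adjustment procedure and values:

```python
def area_adjusted_discharge(q_gage, a_gage, a_ungaged, exponent=0.66):
    """Transfer a flood discharge at a given AEP from a streamgage to
    an ungaged site upstream or downstream on the same stream by
    scaling with the drainage-area ratio: Q_u = Q_g * (A_u / A_g)**b.
    The default exponent b is a hypothetical placeholder."""
    return q_gage * (a_ungaged / a_gage) ** exponent
```

The same power-law structure underlies the regional regression equations themselves, which are log-linear in drainage area and the other basin characteristics.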
NASA Astrophysics Data System (ADS)
Longuevergne, Laurent; Scanlon, Bridget R.; Wilson, Clark R.
2010-11-01
The Gravity Recovery and Climate Experiment (GRACE) satellites provide observations of water storage variation at regional scales. However, when focusing on a region of interest, limited spatial resolution and noise contamination can cause estimation bias and spatial leakage, problems that are exacerbated as the region of interest approaches the GRACE resolution limit of a few hundred km. Reliable estimates of water storage variations in small basins require compromises between competing needs for noise suppression and spatial resolution. The objective of this study was to quantitatively investigate processing methods and their impacts on bias, leakage, GRACE noise reduction, and estimated total error, allowing solution of the trade-offs. Among the methods tested is a recently developed concentration algorithm called spatiospectral localization, which optimizes the basin shape description, taking into account limited spatial resolution. This method is particularly suited to retrieval of basin-scale water storage variations and is effective for small basins. To increase confidence in derived methods, water storage variations were calculated for both CSR (Center for Space Research) and GRGS (Groupe de Recherche de Géodésie Spatiale) GRACE products, which employ different processing strategies. The processing techniques were tested on the intensively monitored High Plains Aquifer (450,000 km2 area), where application of the appropriate optimal processing method allowed retrieval of water storage variations over a portion of the aquifer as small as ˜200,000 km2.
Groupwise registration of MR brain images with tumors.
Tang, Zhenyu; Wu, Yihong; Fan, Yong
2017-08-04
A novel groupwise image registration framework is developed for registering MR brain images with tumors. Our method iteratively estimates a normal-appearance counterpart for each tumor image to be registered and constructs a directed graph (digraph) of normal-appearance images to guide the groupwise image registration. In particular, our method maps each tumor image to its normal-appearance counterpart by identifying and inpainting brain tumor regions with intensity information estimated using a low-rank plus sparse matrix decomposition based image representation technique. The estimated normal-appearance images are groupwisely registered to a group center image, guided by a digraph of images chosen so that the total length of 'image registration paths' is minimized, and the original tumor images are then warped to the group center image using the resulting deformation fields. We have evaluated our method on both simulated and real MR brain tumor images. The registration results were evaluated with overlap measures of corresponding brain regions and the average entropy of image intensity information, and Wilcoxon signed rank tests were adopted to compare different methods with respect to their regional overlap measures. Compared with a groupwise image registration method applied to normal-appearance images estimated using the traditional low-rank plus sparse matrix decomposition based image inpainting, our method achieved higher image registration accuracy with statistical significance (p = 7.02 × 10⁻⁹).
Kawata, Yasuo; Arimura, Hidetaka; Ikushima, Koujirou; Jin, Ze; Morita, Kento; Tokunaga, Chiaki; Yabu-Uchi, Hidetake; Shioyama, Yoshiyuki; Sasaki, Tomonari; Honda, Hiroshi; Sasaki, Masayuki
2017-10-01
The aim of this study was to investigate the impact of pixel-based machine learning (ML) techniques, i.e., the fuzzy c-means clustering method (FCM), artificial neural network (ANN), and support vector machine (SVM), on an automated framework for delineation of gross tumor volume (GTV) regions of lung cancer for stereotactic body radiation therapy. The morphological and metabolic features of GTV regions, which were determined based on the knowledge of radiation oncologists, were fed on a pixel-by-pixel basis into the respective FCM, ANN, and SVM ML techniques. The ML techniques were then incorporated into the automated delineation framework of GTVs, followed by an optimum contour selection (OCS) method, which we proposed in a previous study. The three ML-based frameworks were evaluated for 16 lung cancer cases (six solid, four ground-glass opacity (GGO), six part-solid GGO) with datasets of planning computed tomography (CT) and 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET)/CT images, using the three-dimensional Dice similarity coefficient (DSC). The DSC denotes the degree of region similarity between the GTVs contoured by radiation oncologists and those estimated using the automated framework. The FCM-based framework achieved the highest DSC of 0.79±0.06, whereas the DSCs of the ANN-based and SVM-based frameworks were 0.76±0.14 and 0.73±0.14, respectively. The FCM-based framework provided the highest segmentation accuracy and precision without a learning process (lowest calculation cost); therefore, it can be useful for delineation of tumor regions in practical treatment planning. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
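A generic fuzzy c-means routine of the kind used as the pixel-based classifier can be sketched as below. This is the textbook algorithm operating on pixel feature vectors, not the authors' full GTV delineation framework with OCS:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Basic fuzzy c-means: alternately update memberships U and
    centroids V to minimize sum_ik U[i,k]**m * ||X[i] - V[k]||**2.
    X is (n_pixels, n_features); returns (U, V)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]          # centroids
        d = np.linalg.norm(X[:, None, :] - V[None], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))                  # membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, V
```

Thresholding or arg-maxing the membership map over tumor-region features would give the candidate contours that the OCS step then selects among; note that, unlike ANN or SVM, no supervised training pass is required.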
Canopy reflectance modeling in a tropical wooded grassland
NASA Technical Reports Server (NTRS)
Simonett, David
1988-01-01
The Li-Strahler canopy reflectance model, driven by LANDSAT Thematic Mapper (TM) data, provided regional estimates of tree size and density in two bioclimatic zones in Africa. This model exploits tree geometry in an inversion technique to predict average tree size and density from reflectance data using a few simple parameters measured in the field and in the imagery. Reflectance properties of the trees were measured in the study sites using a pole-mounted radiometer. The measurements showed that the assumptions of the simple Li-Strahler model are reasonable for these woodlands. The field radiometer measurements were used to calculate the normalized difference vegetation index (NDVI), and the integrated NDVI over the canopy was related to crown volume. Predictions of tree size and density from the canopy model were used with allometric equations from the literature to estimate woody biomass and potential foliar biomass for the sites and for the regions. Estimates were compared with independent measurements made in the Sahelian sites and with typical values from the literature for these regions and for similar woodlands. In order to apply the inversion procedure regionally, an area must first be stratified into woodland cover classes, and dry-season TM data were used to generate a stratum map of the study areas with reasonable accuracy. The method used was unsupervised classification of multi-date principal-components images.
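The NDVI computed from the radiometer band measurements is the standard band ratio:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from near-infrared and
    red reflectance: (NIR - Red) / (NIR + Red), in [-1, 1]."""
    return (nir - red) / (nir + red)
```

Integrating this index over the measured canopy profile gives the quantity that was related to crown volume above.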
Seasonal estimates of riparian evapotranspiration using remote and in situ measurements
Goodrich, D.C.; Scott, R.; Qi, J.; Goff, B.; Unkrich, C.L.; Moran, M.S.; Williams, D.; Schaeffer, S.; Snyder, K.; MacNish, R.; Maddock, T.; Pool, D.; Chehbouni, A.; Cooper, D.I.; Eichinger, W.E.; Shuttleworth, W.J.; Kerr, Y.; Marsett, R.; Ni, W.
2000-01-01
In many semi-arid basins during extended periods when surface snowmelt or storm runoff is absent, groundwater constitutes the primary water source for human habitation, agriculture and riparian ecosystems. Utilizing regional groundwater models in the management of these water resources requires accurate estimates of basin boundary conditions. A critical groundwater boundary condition that is closely coupled to atmospheric processes, and is typically known with little certainty, is seasonal riparian evapotranspiration (ET). This quantity can often be a significant factor in the basin water balance in semi-arid regions, yet it is very difficult to estimate over a large area. Better understanding and quantification of seasonal, large-area riparian ET is a primary objective of the Semi-Arid Land-Surface-Atmosphere (SALSA) Program. To address this objective, a series of interdisciplinary experimental campaigns were conducted in 1997 in the San Pedro Basin in southeastern Arizona. The riparian system in this basin is primarily made up of three vegetation communities: mesquite (Prosopis velutina), sacaton grasses (Sporobolus wrightii), and a cottonwood (Populus fremontii)/willow (Salix goodingii) forest gallery. Micrometeorological measurement techniques were used to estimate ET from the mesquite and grasses. These techniques could not be utilized to estimate fluxes from the cottonwood/willow (C/W) forest gallery due to the height (20-30 m) and non-uniform linear nature of the forest gallery. Short-term (2-4 day) sap flux measurements were made to estimate canopy transpiration over several periods of the riparian growing season. Simultaneous remote sensing measurements were used to spatially extrapolate tree and stand measurements. Scaled C/W stand-level sap flux estimates were utilized to calibrate a Penman-Monteith model to enable temporal extrapolation between synoptic measurement periods.
With this model and set of measurements, seasonal riparian vegetation water use estimates for the riparian corridor were obtained. To validate these models, a 90-day pre-monsoon water balance over a 10 km section of the river was carried out. All components of the water balance, including riparian ET, were independently estimated. The closure of the water balance was roughly 5% of total inflows. The ET models were then used to provide riparian ET estimates over the entire corridor for the growing season. These estimates were approximately 14% less than those obtained from the most recent groundwater model of the basin for a comparable river reach.
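The Penman-Monteith model used for temporal extrapolation has the standard combination-equation form. The resistance and meteorological values in the example are illustrative only, and the sap-flux calibration described above is not reproduced here:

```python
def penman_monteith_le(rn, g, vpd_kpa, delta_kpa, ra, rs,
                       rho_a=1.2, cp=1013.0, gamma=0.066):
    """Penman-Monteith latent heat flux (W m^-2):
    LE = (Delta*(Rn - G) + rho_a*cp*VPD/ra) / (Delta + gamma*(1 + rs/ra)),
    with Rn, G in W m^-2; Delta, gamma in kPa K^-1; VPD in kPa;
    aerodynamic (ra) and surface (rs) resistances in s m^-1."""
    return ((delta_kpa * (rn - g) + rho_a * cp * vpd_kpa / ra)
            / (delta_kpa + gamma * (1.0 + rs / ra)))
```

Calibration against the scaled sap-flux estimates amounts to tuning the surface resistance rs (and possibly ra) so that modeled LE matches the measured stand transpiration during the synoptic periods.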
Basin Scale Estimates of Evapotranspiration Using GRACE and other Observations
NASA Technical Reports Server (NTRS)
Rodell, M.; Famiglietti, J. S.; Chen, J.; Seneviratne, S. I.; Viterbo, P.; Holl, S.; Wilson, C. R.
2004-01-01
Evapotranspiration is integral to studies of the Earth system, yet it is difficult to measure on regional scales. One estimation technique is a terrestrial water budget, i.e., total precipitation minus the sum of evapotranspiration and net runoff equals the change in water storage. Gravity Recovery and Climate Experiment (GRACE) satellite gravity observations are now enabling closure of this equation by providing the terrestrial water storage change. Equations are presented here for estimating evapotranspiration using observation based information, taking into account the unique nature of GRACE observations. GRACE water storage changes are first substantiated by comparing with results from a land surface model and a combined atmospheric-terrestrial water budget approach. Evapotranspiration is then estimated for 14 time periods over the Mississippi River basin and compared with output from three modeling systems. The GRACE estimates generally lay in the middle of the models and may provide skill in evaluating modeled evapotranspiration.
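The terrestrial water-budget equation described above is straightforward to evaluate once GRACE supplies the storage change (a minimal sketch; the paper's equations additionally account for the time-averaged nature of GRACE observations):

```python
def evapotranspiration(precip, runoff, storage_start, storage_end, days):
    """Terrestrial water budget: ET = P - R - dS/dt, with the storage
    change dS taken from GRACE terrestrial water storage anomalies.
    Fluxes in mm/day averaged over the basin; storages in mm."""
    ds_dt = (storage_end - storage_start) / days
    return precip - runoff - ds_dt
```

For example, with 3 mm/day of precipitation, 1 mm/day of net runoff, and a 30 mm storage gain over 30 days, the residual ET is 1 mm/day.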
Comparison of Dynamic Contrast Enhanced MRI and Quantitative SPECT in a Rat Glioma Model
Skinner, Jack T.; Yankeelov, Thomas E.; Peterson, Todd E.; Does, Mark D.
2012-01-01
Pharmacokinetic modeling of dynamic contrast enhanced (DCE)-MRI data provides measures of the extracellular volume fraction (ve) and the volume transfer constant (Ktrans) in a given tissue. These parameter estimates may be biased, however, by confounding issues such as contrast agent and tissue water dynamics, or assumptions of vascularization and perfusion made by the commonly used model. In contrast to MRI, radiotracer imaging with SPECT is insensitive to water dynamics. A quantitative dual-isotope SPECT technique was developed to obtain an estimate of ve in a rat glioma model for comparison to the corresponding estimates obtained using DCE-MRI with a vascular input function (VIF) and reference region model (RR). Both DCE-MRI methods produced consistently larger estimates of ve in comparison to the SPECT estimates, and several experimental sources were postulated to contribute to these differences. PMID:22991315
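The "commonly used model" for DCE-MRI pharmacokinetics is typically the standard Tofts model; a numerical sketch of its convolution form, assuming a uniform time grid and a known arterial input function Cp, is:

```python
import numpy as np

def tofts_concentration(t, cp, ktrans, ve):
    """Standard Tofts model: Ct(t) = Ktrans * int_0^t Cp(u) *
    exp(-(Ktrans/ve)*(t-u)) du, evaluated by trapezoidal convolution
    on a uniform time grid t (same units as 1/Ktrans)."""
    t = np.asarray(t, float)
    cp = np.asarray(cp, float)
    dt = t[1] - t[0]
    kep = ktrans / ve                      # efflux rate constant
    ct = np.zeros_like(t)
    for i in range(1, len(t)):
        integrand = cp[: i + 1] * np.exp(-kep * (t[i] - t[: i + 1]))
        # trapezoidal rule over [0, t_i]
        ct[i] = ktrans * dt * (integrand.sum()
                               - 0.5 * (integrand[0] + integrand[-1]))
    return ct
```

Fitting this curve to measured tissue concentrations yields the Ktrans and ve estimates discussed above; for a constant Cp the model reduces analytically to Ct(t) = ve·Cp·(1 - exp(-kep·t)), a convenient check.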
Strain Rate Tensor Estimation in Cine Cardiac MRI Based on Elastic Image Registration
NASA Astrophysics Data System (ADS)
Sánchez-Ferrero, Gonzalo Vegas; Vega, Antonio Tristán; Grande, Lucilio Cordero; de La Higuera, Pablo Casaseca; Fernández, Santiago Aja; Fernández, Marcos Martín; López, Carlos Alberola
In this work we propose an alternative method to estimate and visualize the Strain Rate Tensor (SRT) in Magnetic Resonance Images (MRI) when Phase Contrast MRI (PCMRI) and Tagged MRI (TMRI) are not available. This alternative is based on image processing techniques; concretely, image registration algorithms are used to estimate the movement of the myocardium at each point. Additionally, a consistency-checking method is presented to validate the accuracy of the estimates when no gold standard is available. Results prove that the consistency-checking method provides an upper bound on the mean squared error of the estimate. Our experiments with real data show that the registration algorithm provides a useful deformation field for estimating the SRT fields. A classification between regional normal and dysfunctional contraction patterns, as compared with expert diagnosis, shows that the parameters extracted from the estimated SRT can represent these patterns. Additionally, a scheme for visualizing and analyzing the local behavior of the SRT field is presented.
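Given the velocity (deformation-rate) field from registration, the strain rate tensor is the symmetrized velocity gradient, E = (∇v + ∇vᵀ)/2. A minimal 2-D finite-difference sketch (the paper works with registration-derived fields, not this toy construction):

```python
import numpy as np

def strain_rate_tensor(vx, vy, spacing=1.0):
    """Components of the symmetric 2-D strain-rate tensor
    E = 0.5 * (grad v + grad v^T) from velocity fields vx, vy
    sampled on a regular grid indexed [y, x]."""
    dvx_dy, dvx_dx = np.gradient(vx, spacing)
    dvy_dy, dvy_dx = np.gradient(vy, spacing)
    exx = dvx_dx
    eyy = dvy_dy
    exy = 0.5 * (dvx_dy + dvy_dx)
    return exx, eyy, exy
```

For a pure stretch field vx = x, vy = -y this returns exx = 1, eyy = -1, exy = 0 everywhere, i.e. extension along x with matching compression along y, the kind of local pattern used to separate normal from dysfunctional contraction.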
Primary production in the tropical continental shelf seas bordering northern Australia
NASA Astrophysics Data System (ADS)
Furnas, Miles J.; Carpenter, Edward J.
2016-10-01
Pelagic primary production (14C uptake) was measured 81 times between 1990 and 2013 at sites spanning the broad, shallow Northern Australian Shelf (NAS; 120-145°E) which borders the Australian continent. The mean of all areal production measurements was 1048±109 mg C m-2 d-1 (mean±95% CI). Estimates of areal primary production were correlated with integral upper-euphotic zone chlorophyll stocks (above the 50% and 20% light penetration depths) accessible to ocean color remote sensing and with total water column chlorophyll standing crop, but not with surface (0-2 m) chlorophyll concentrations. While the NAS is subject to a well-characterized monsoonal climate regime (austral summer: NW monsoon, wet; austral winter: SE monsoon, dry), most seasonal differences in means of regional-scale chlorophyll standing crop (11-33 mg Chl m-2 for 12 of 15 season-region combinations) and areal primary production (700-1850 mg C m-2 d-1 for 12 of 15 season-region combinations) fell within a 3-fold range. Apart from the shallow waters of the Torres Strait and northern Great Barrier Reef, picoplankton (<2 μm size fraction) dominated chlorophyll standing crop and primary production, with regional means of picoplankton contributions ranging from 45 to >80%. While the range of our post-1990 areal production estimates overlaps the range of production estimates made in NAS waters during 1960-62, the mean of post-1990 estimates is over 2-fold greater. We regard the difference to be due to improvements in production measurement techniques, particularly regarding the reduction of potential metal toxicity and incubations in more realistic light regimes.
National land cover monitoring using large, permanent photo plots
Raymond L. Czaplewski; Glenn P. Catts; Paul W. Snook
1987-01-01
A study in the State of North Carolina, U.S.A. demonstrated that large, permanent photo plots (400 hectares) can be used to monitor large regions of land by using remote sensing techniques. Estimates of area in a variety of land cover categories were made by photointerpretation of medium-scale aerial photography from a single month using 111 photo plots. Many of these...
Total absorption and photoionization cross sections of water vapor between 100 and 1000 A
NASA Technical Reports Server (NTRS)
Haddad, G. N.; Samson, J. A. R.
1986-01-01
Absolute photoabsorption and photoionization cross sections of water vapor are reported at a large number of discrete wavelengths between 100 and 1000 A, with an estimated error of ±3 percent in regions free from any discrete structure. The double ionization chamber technique utilized is described. Recent calculations are shown to be in reasonable agreement with the present data.
James E. Smith; Linda S. Heath; Kenneth E. Skog; Richard A. Birdsey
2006-01-01
This study presents techniques for calculating average net annual additions to carbon in forests and in forest products. Forest ecosystem carbon yield tables, representing stand-level merchantable volume and carbon pools as a function of stand age, were developed for 51 forest types within 10 regions of the United States. Separate tables were developed for...
Mapping ionospheric observations using combined techniques for Europe region
NASA Astrophysics Data System (ADS)
Tomasik, Lukasz; Gulyaeva, Tamara; Stanislawska, Iwona; Swiatek, Anna; Pozoga, Mariusz; Dziak-Jankowska, Beata
A k-nearest-neighbours (KNN) algorithm is proposed and applied to fill gaps in missing F2-layer critical frequency data. The method uses TEC data calculated from the EGNOS Vertical Delay Estimate (VDE ≈0.78 TECU) and several GNSS stations, together with its spatial correlation with data from selected ionosondes. For mapping purposes, a two-dimensional similarity function was proposed for the KNN method.
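The KNN gap-filling step can be sketched as follows; the station coordinates, foF2 values, and inverse-distance weighting are hypothetical stand-ins for the paper's actual two-dimensional similarity function and TEC-based inputs.

```python
import numpy as np

def knn_fill(coords, values, query, k=3):
    """Fill a missing value at `query` from the k nearest stations with
    known values, inverse-distance weighted (illustrative stand-in for
    the paper's similarity function)."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    dist = np.linalg.norm(coords - np.asarray(query, dtype=float), axis=1)
    idx = np.argsort(dist)[:k]           # indices of the k nearest stations
    w = 1.0 / (dist[idx] + 1e-9)         # avoid division by zero at a station
    return float(np.sum(w * values[idx]) / np.sum(w))

# Hypothetical stations with known foF2 (MHz) and one gap at (1.0, 1.0)
stations = [(0, 0), (2, 0), (0, 2), (5, 5)]
fof2 = [6.0, 6.4, 6.2, 8.0]
print(knn_fill(stations, fof2, (1.0, 1.0), k=3))   # ≈ 6.2
```

The three equidistant neighbours dominate, so the filled value is their mean; a real implementation would replace Euclidean distance with the paper's similarity function.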
Future forest carbon accounting challenges: the question of regionalization
Michael C. Nichols
2015-01-01
Forest carbon accounting techniques are changing. This year, a new accounting system is making its debut with the production of forest carbon data for EPA's National Greenhouse Gas Inventory. The Forest Service's annualized inventory system is being more fully integrated into estimates of forest carbon at the national and state levels both for the present and the...
Structure identification within a transitioning swept-wing boundary layer
NASA Astrophysics Data System (ADS)
Chapman, Keith Lance
1997-08-01
Extensive measurements are made in a transitioning swept-wing boundary layer using hot-film, hot-wire and cross-wire anemometry. The crossflow-dominated flow contains stationary vortices that break down near mid-chord. The most amplified vortex wavelength is forced by the use of artificial roughness elements near the leading edge. Two-component velocity and spanwise surface shear-stress correlation measurements are made at two constant chord locations, before and after transition. Streamwise surface shear stresses are also measured through the entire transition region. Correlation techniques are used to identify stationary structures in the laminar regime and coherent structures in the turbulent regime. Basic techniques include observation of the spatial correlations and the spatially distributed auto-spectra. The primary and secondary instability mechanisms are identified in the spectra in all measured fields. The primary mechanism is seen to grow, cause transition and produce large-scale turbulence. The secondary mechanism grows through the entire transition region and produces the small-scale turbulence. Advanced techniques use linear stochastic estimation (LSE) and proper orthogonal decomposition (POD) to identify the spatio-temporal evolutions of structures in the boundary layer. LSE is used to estimate the instantaneous velocity fields using temporal data from just two spatial locations and the spatial correlations. Reference locations are selected using maximum RMS values to provide the best available estimates. POD is used to objectively determine modes characteristic of the measured flow based on energy. The stationary vortices are identified in the first laminar modes of each velocity component and shear component. Experimental evidence suggests that neighboring vortices interact and produce large coherent structures with spanwise periodicity at double the stationary vortex wavelength. 
An objective transition region detection method is developed using streamwise spatial POD solutions which isolate the growth of the primary and secondary instability mechanisms in the first and second modes, respectively. Temporal evolutions of dominant POD modes in all measured fields are calculated. These scalar POD coefficients contain the integrated characteristics of the entire field, greatly reducing the amount of data to characterize the instantaneous field. These modes may then be used to train future flow control algorithms based on neural networks.
Structure Identification Within a Transitioning Swept-Wing Boundary Layer
NASA Technical Reports Server (NTRS)
Chapman, Keith; Glauser, Mark
1996-01-01
Extensive measurements are made in a transitioning swept-wing boundary layer using hot-film, hot-wire and cross-wire anemometry. The crossflow-dominated flow contains stationary vortices that break down near mid-chord. The most amplified vortex wavelength is forced by the use of artificial roughness elements near the leading edge. Two-component velocity and spanwise surface shear-stress correlation measurements are made at two constant chord locations, before and after transition. Streamwise surface shear stresses are also measured through the entire transition region. Correlation techniques are used to identify stationary structures in the laminar regime and coherent structures in the turbulent regime. Basic techniques include observation of the spatial correlations and the spatially distributed auto-spectra. The primary and secondary instability mechanisms are identified in the spectra in all measured fields. The primary mechanism is seen to grow, cause transition and produce large-scale turbulence. The secondary mechanism grows through the entire transition region and produces the small-scale turbulence. Advanced techniques use Linear Stochastic Estimation (LSE) and Proper Orthogonal Decomposition (POD) to identify the spatio-temporal evolutions of structures in the boundary layer. LSE is used to estimate the instantaneous velocity fields using temporal data from just two spatial locations and the spatial correlations. Reference locations are selected using maximum RMS values to provide the best available estimates. POD is used to objectively determine modes characteristic of the measured flow based on energy. The stationary vortices are identified in the first laminar modes of each velocity component and shear component. Experimental evidence suggests that neighboring vortices interact and produce large coherent structures with spanwise periodicity at double the stationary vortex wavelength. 
An objective transition region detection method is developed using streamwise spatial POD solutions which isolate the growth of the primary and secondary instability mechanisms in the first and second modes, respectively. Temporal evolutions of dominant POD modes in all measured fields are calculated. These scalar POD coefficients contain the integrated characteristics of the entire field, greatly reducing the amount of data to characterize the instantaneous field. These modes may then be used to train future flow control algorithms based on neural networks.
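The POD step described in these two records can be sketched with the snapshot (SVD) method; the flow field below is synthetic, built from two known spatial modes plus noise, and is purely illustrative of how energy-ranked modes and their scalar temporal coefficients are obtained.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic snapshot matrix: 64 spatial points x 200 time samples,
# built from two known spatial modes plus weak noise (illustrative only).
x = np.linspace(0, 2 * np.pi, 64)
t = np.linspace(0, 10, 200)
snapshots = (np.outer(np.sin(x), np.cos(2 * t))
             + 0.3 * np.outer(np.sin(2 * x), np.sin(5 * t))
             + 0.01 * rng.standard_normal((64, 200)))

# POD via SVD: left singular vectors are spatial modes ordered by energy;
# squared singular values give each mode's share of the total energy.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = s**2 / np.sum(s**2)
print(f"mode 1 carries {energy[0]:.0%} of the energy")

# Scalar temporal coefficient of mode 1: project each snapshot onto U[:, 0],
# compressing the whole field into one time series, as in the abstract.
a1 = U[:, 0] @ snapshots
```

The dominant singular vector recovers the strongest planted mode, and its coefficient time series is the kind of reduced description the abstract feeds to transition detection.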
Using satellite image data to estimate soil moisture
NASA Astrophysics Data System (ADS)
Chuang, Chi-Hung; Yu, Hwa-Lung
2017-04-01
Soil moisture is considered an important parameter in various fields of study, such as hydrology, phenology, and agriculture. In hydrology, soil moisture is a significant parameter for determining how much rainfall will infiltrate into the permeable layer and become groundwater. Although soil moisture plays a critical role in many environmental studies, its measurement so far relies on ground instruments such as electromagnetic soil moisture sensors. Ground instrumentation provides direct observations, but the instruments need maintenance and manpower to operate. For information over a wide region, ground instrumentation is probably not suitable, and other methods are needed. Satellite remote sensing, which images the Earth's surface, can overcome the spatial restrictions of in situ measurement. In this study, we used MODIS data to estimate daily soil moisture patterns, i.e., the crop water stress index (CWSI), over the year 2015. The estimates are compared with observations at soil moisture stations of the Taiwan Bureau of Soil and Water Conservation. Results show that satellite remote sensing data can be helpful for soil moisture estimation. Further analysis may be required to obtain optimal parameters for soil moisture estimation in Taiwan.
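For reference, an empirical (Idso-style) crop water stress index of the kind retrieved here is a one-line normalization of canopy-air temperature difference; the temperatures and baselines below are hypothetical, not values from the study.

```python
def cwsi(t_canopy, t_air, d_t_lower, d_t_upper):
    """Crop Water Stress Index (empirical Idso-style form): 0 indicates a
    well-watered canopy, 1 a fully stressed one. d_t_lower and d_t_upper
    are the non-stressed and non-transpiring canopy-air temperature
    baselines (all values in degrees C)."""
    d_t = t_canopy - t_air
    return (d_t - d_t_lower) / (d_t_upper - d_t_lower)

# Hypothetical satellite-derived values (degrees C)
print(cwsi(t_canopy=32.0, t_air=30.0, d_t_lower=-2.0, d_t_upper=5.0))  # ≈ 0.57
```

In practice the baselines must be calibrated per crop and climate, which is presumably part of the "optimal parameters" the abstract says remain to be determined.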
NASA Astrophysics Data System (ADS)
Lanorte, Antonio; Desantis, Fortunato; Aromando, Angelo; Lasaponara, Rosa
2013-04-01
This paper presents the results we obtained in the context of the FIRE-SAT project during the 2012 operative application of satellite-based tools for fire monitoring. The FIRE-SAT project was funded by the Civil Protection of the Basilicata Region in order to set up a low-cost methodology for fire danger monitoring and fire effect estimation based on satellite Earth Observation techniques. To this aim, NASA Moderate Resolution Imaging Spectroradiometer (MODIS), ASTER, and Landsat TM data were used. Novel data processing techniques have been developed by researchers of the ARGON Laboratory of the CNR-IMAA for the operative monitoring of fire. In this paper we focus only on the danger estimation model, which was fruitfully used from 2008 to 2012 as a reliable operative tool to support and optimize fire-fighting strategies from the alert to the management of resources, including fire attacks. The daily updating of fire danger is carried out using satellite MODIS images, selected for their spectral capability and their free-of-charge availability from the NASA web site. This makes these data sets very suitable for an effective systematic (daily) and sustainable low-cost monitoring of large areas. The pre-operational use of the integrated model showed that the system properly monitors spatial and temporal variations of fire susceptibility and provides useful information on both fire severity and post-fire regeneration capability.
On the Methods for Estimating the Corneoscleral Limbus.
Jesus, Danilo A; Iskander, D Robert
2017-08-01
The aim of this study was to develop computational methods for estimating limbus position based on measurements of three-dimensional (3-D) corneoscleral topography and to ascertain whether the corneoscleral limbus routinely estimated from the frontal image corresponds to that derived from topographical information. Two new computational methods for estimating the limbus position are proposed: one based on approximating the raw anterior eye height data by a series of Zernike polynomials, and one that combines the 3-D corneoscleral topography with the frontal grayscale image acquired with the digital camera built into the profilometer. The proposed methods are contrasted against a previously described image-only-based procedure and against a technique of manual image annotation. The estimates of corneoscleral limbus radius were characterized with high precision. The group average (mean ± standard deviation) of the maximum difference between estimates derived from all considered methods was 0.27 ± 0.14 mm and reached up to 0.55 mm. The four estimating methods led to statistically significant differences (nonparametric analysis of variance (ANOVA) test, p < 0.05). Precise topographical limbus demarcation is possible either from frontal digital images of the eye or from the 3-D topographical information of the corneoscleral region. However, the results demonstrated that the corneoscleral limbus estimated from the anterior eye topography does not always correspond to that obtained through image-only-based techniques. The experimental findings have shown that 3-D topography of the anterior eye, in the absence of a gold standard, has the potential to become a new computational methodology for estimating the corneoscleral limbus.
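The first proposed method, approximating raw anterior-eye height data by a Zernike series, amounts to a linear least-squares fit. The sketch below uses only a low-order subset of Zernike terms and synthetic height data; the study's full series and real topography are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
# Sample points on the unit disk (rho, theta) with synthetic height data
# whose true Zernike coefficients are known: 0.5, 0.2, 0.0, 0.1.
rho = np.sqrt(rng.uniform(0, 1, 500))      # sqrt for uniform disk density
theta = rng.uniform(0, 2 * np.pi, 500)
z_true = 0.5 + 0.2 * rho * np.cos(theta) + 0.1 * (2 * rho**2 - 1)
z = z_true + 0.01 * rng.standard_normal(500)   # measurement noise

# Design matrix of a few low-order Zernike terms (illustrative subset):
A = np.column_stack([
    np.ones_like(rho),       # piston
    rho * np.cos(theta),     # x-tilt
    rho * np.sin(theta),     # y-tilt
    2 * rho**2 - 1,          # defocus
])
coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
print(np.round(coeffs, 2))   # ≈ [0.5, 0.2, 0.0, 0.1]
```

With the fitted smooth surface in hand, the limbus can then be sought as a feature of the reconstructed height map rather than of the noisy raw data.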
Smooth extrapolation of unknown anatomy via statistical shape models
NASA Astrophysics Data System (ADS)
Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.
2015-03-01
Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. Le Fort-based, face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), a feathering between the patient surface and surface estimate, and an estimate generated via a Thin Plate Spline trained from displacements between the surface estimate and corresponding vertices of the known patient surface. Feathering and Thin Plate Spline approaches both yielded smooth transitions. However, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement of 1.46 mm and 1.38 mm for the skull and mandible respectively, over the baseline approach.
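The Thin Plate Spline correction described above can be sketched with SciPy's RBFInterpolator; the 2-D surface parameterization and the synthetic residual field below are assumptions for illustration, standing in for displacements between the shape-model estimate and the known patient vertices.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
# Hypothetical 2-D parameterization of the known surface region, with the
# residual (true minus estimated) displacement at each known vertex.
known_uv = rng.uniform(0, 1, (40, 2))
residual = 0.5 * known_uv[:, 0] - 0.2 * known_uv[:, 1]   # synthetic misfit

# Thin plate spline trained on the residuals at known vertices ...
tps = RBFInterpolator(known_uv, residual, kernel="thin_plate_spline")

# ... then evaluated at vertices of the estimated (unknown) region, so the
# correction blends smoothly from the observed residual at the boundary.
unknown_uv = rng.uniform(0, 1, (10, 2))
correction = tps(unknown_uv)
print(correction.shape)   # (10,)
```

Because the spline interpolates the known residuals exactly (here the synthetic residual is affine, which a thin plate spline reproduces everywhere), adding the correction keeps the true surface intact where it is known, unlike the feathering approach, which corrupts known vertex values.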
NASA Astrophysics Data System (ADS)
Nanteza, J.; Thomas, B. F.; Mukwaya, P. I.
2017-12-01
The general lack of knowledge about current rates of water abstraction/use is a challenge to sustainable water resources management in many countries, including Uganda. Estimates of water abstraction/use rates over Uganda, currently available from the FAO, are not disaggregated according to source, making it difficult to understand how much is taken out of individual water stores and limiting effective management. Modelling efforts have disaggregated water use rates according to source (i.e. groundwater and surface water). However, for Sub-Saharan African countries, these modelled use estimates are highly uncertain given the scale limitations in applying water use data (i.e. point versus regional), which influence model calibration/validation. In this study, we utilize data from the water supply atlas project over Uganda to estimate current rates of groundwater abstraction across the country based on location, well type and other relevant information. GIS techniques are employed to demarcate areas served by each water source. These areas are combined with past population distributions and the average daily water need per person to estimate water abstraction/use through time. The results indicate an increase in groundwater use, and isolate regions prone to groundwater depletion where improved management is required to sustainably manage groundwater use.
Shear wave velocity imaging using transient electrode perturbation: phantom and ex vivo validation.
DeWall, Ryan J; Varghese, Tomy; Madsen, Ernest L
2011-03-01
This paper presents a new shear wave velocity imaging technique to monitor radio-frequency and microwave ablation procedures, coined electrode vibration elastography. A piezoelectric actuator attached to an ablation needle is transiently vibrated to generate shear waves that are tracked at high frame rates. The time-to-peak algorithm is used to reconstruct the shear wave velocity and thereby the shear modulus variations. The feasibility of electrode vibration elastography is demonstrated using finite element models and ultrasound simulations, tissue-mimicking phantoms simulating fully (phantom 1) and partially ablated (phantom 2) regions, and an ex vivo bovine liver ablation experiment. In phantom experiments, good boundary delineation was observed. Shear wave velocity estimates were within 7% of mechanical measurements in phantom 1 and within 17% in phantom 2. Good boundary delineation was also demonstrated in the ex vivo experiment. The shear wave velocity estimates inside the ablated region were higher than mechanical testing estimates, but estimates in the untreated tissue were within 20% of mechanical measurements. A comparison of electrode vibration elastography and electrode displacement elastography showed the complementary information that they can provide. Electrode vibration elastography shows promise as an imaging modality that provides ablation boundary delineation and quantitative information during ablation procedures.
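The time-to-peak reconstruction used here reduces, in its simplest form, to a regression of arrival time against lateral position: the shear wave speed is the inverse slope. The sketch below uses synthetic arrival times for an assumed 3 m/s medium; real data would add noise and lateral windowing.

```python
import numpy as np

def shear_wave_velocity(lateral_mm, t_peak_ms):
    """Estimate shear wave speed (m/s) from time-to-peak arrival times at
    increasing lateral distance from the vibrating electrode: the inverse
    slope of t_peak vs. position (illustrative sketch)."""
    slope, _intercept = np.polyfit(np.asarray(lateral_mm),
                                   np.asarray(t_peak_ms), 1)
    return 1.0 / slope          # mm/ms is numerically equal to m/s

# Synthetic arrival times for a homogeneous 3 m/s medium (t = x / c)
x = np.array([2.0, 4.0, 6.0, 8.0, 10.0])   # lateral position, mm
t = x / 3.0                                 # time to peak, ms
print(shear_wave_velocity(x, t))            # ≈ 3.0
```

Mapping this locally (a sliding window of lateral positions) yields the shear wave velocity image, from which shear modulus follows as mu = rho * c^2 for tissue density rho.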
NASA Astrophysics Data System (ADS)
Joachim, Bastian; Ruzié, Lorraine; Burgess, Ray; Pawley, Alison; Clay, Patricia L.; Ballentine, Christopher J.
2016-04-01
Halogens play a key role in our understanding of volatile transport processes in the Earth's mantle. Their moderately (fluorine) to highly (iodine) incompatible and volatile behavior implies that their distribution is influenced by partial melting, fractionation and degassing processes as well as fluid mobilities. The heavy halogens, particularly bromine and iodine, are far more depleted in the Earth's mantle than expected from their condensation temperatures (Palme and O'Neill 2014), so that their very low abundances in basalts and peridotites (ppb range) make it analytically challenging to investigate their concentrations in Earth's mantle reservoirs and their behavior during transport processes (Pyle and Mather, 2009). We used a new experimental technique, which combines the irradiation technique (Johnson et al. 2000), laser ablation and conventional mass spectrometry. This enables us to present the first experimentally derived bromine partition coefficient between olivine and melt. Partitioning experiments were performed at 1500 °C and 2.3 GPa, a P-T condition that is representative of partial melting processes in the OIB source region (Davis et al. 2011). The bromine partition coefficient between olivine and silicate melt at this condition has been determined to be DBr(ol/melt) = (4.37 ± 1.96) × 10-4. Results show that bromine is significantly more incompatible than chlorine (˜1.5 orders of magnitude) and fluorine (˜2 orders of magnitude) due to its larger ionic radius. We have used our bromine partitioning data to estimate minimum bromine abundances in EM1 and EM2 source regions. We used minimum bromine bulk rock concentrations determined in an EM1 (Pitcairn: 1066 ppb) and an EM2 (Society: 2063 ppb) basalt (Kendrick et al. 2012), together with an estimated minimum melt fraction of 0.01 in OIB source regions (Dasgupta et al. 2007). 
The almost perfect bromine incompatibility results in minimum bromine abundances in EM1 and EM2 OIB source regions of 11 ppb and 20 ppb, respectively. The effect on the partitioning behaviour of other minerals such as pyroxene, mantle inhomogeneity, incongruent melting, a potential effect of iron, temperature, pressure or the presence of fluids, would be to shift the estimated bromine mantle source concentration to higher but not to lower values. Comparing our minimum bromine OIB source region estimate with the estimated primitive mantle bromine abundance (3.6 ppb; Lyubetskaya and Korenaga, 2007) implies that the OIB source mantle is enriched in bromine relative to the primitive mantle by at least a factor of 3 in EM1 source regions and a factor of 5.5 in EM2 source regions. One explanation is that bromine may be efficiently recycled into the OIB source mantle region through recycling of subducted oceanic crust. Dasgupta R, Hirschmann MM, Humayun, ND (2007) J. Petrol. 48, pp. 2093-2124. Davis FA, Hirschmann MM, Humayun M (2011) Earth Planet. Sci. Lett. 308, pp. 380-390. Johnson L, Burgess R, Turner G, Milledge JH, Harris JW (2000) Geochim. Cosmochim. Acta 64, pp. 717-732. Kendrick MA, Woodhead JD, Kamenetsky VS (2012) Geol. 32, pp. 441-444. Lyubetskaya T, Korenaga J (2007) J. Geophys. Res.-Sol. Earth 112, B03211. Palme H, O'Neill HStC (2014). Cosmochemical Estimates of Mantle Composition. Treat. Geochem. 2nd edition, 3, pp. 1-39. Pyle DM, Mather TA (2009) Chem. Geol. 263, pp. 110-121.
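The mass balance behind these figures can be sketched assuming simple batch melting, C_melt = C_source / (D + F(1 - D)); because bromine is almost perfectly incompatible (D_Br much smaller than F), the bracket reduces to approximately F, so C_source is roughly F times C_melt. The script below is a back-of-envelope check, not the authors' calculation.

```python
# Back-of-envelope version of the abstract's mass balance, assuming simple
# batch melting with D_Br << F so that C_source ~ F * C_melt.
F = 0.01                    # assumed minimum melt fraction (Dasgupta et al. 2007)
PRIMITIVE_MANTLE_BR = 3.6   # ppb (Lyubetskaya and Korenaga, 2007)

for name, c_melt_ppb in [("EM1 (Pitcairn)", 1066.0), ("EM2 (Society)", 2063.0)]:
    c_source = F * c_melt_ppb                      # minimum source abundance, ppb
    enrichment = c_source / PRIMITIVE_MANTLE_BR
    print(f"{name}: {c_source:.0f} ppb, {enrichment:.1f}x primitive mantle")
```

This reproduces the quoted minima of about 11 ppb (EM1) and about 20 ppb (EM2), and enrichment factors of roughly 3 and 5.5 relative to the primitive mantle, to within rounding.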
Trommer, J.T.; Loper, J.E.; Hammett, K.M.
1996-01-01
Several traditional techniques have been used for estimating stormwater runoff from ungaged watersheds. Applying these techniques to watersheds in west-central Florida requires that some of the empirical relationships be extrapolated beyond tested ranges. As a result, there is uncertainty as to the accuracy of these estimates. Sixty-six storms occurring in 15 west-central Florida watersheds were initially modeled using the Rational Method, the U.S. Geological Survey Regional Regression Equations, the Natural Resources Conservation Service TR-20 model, the U.S. Army Corps of Engineers Hydrologic Engineering Center-1 model, and the Environmental Protection Agency Storm Water Management Model. The techniques were applied according to the guidelines specified in the user manuals or standard engineering textbooks, as though no field data were available and the selection of input parameters was not influenced by observed data. Computed estimates were compared with observed runoff to evaluate the accuracy of the techniques. One watershed was eliminated from further evaluation when it was determined that the area contributing runoff to the stream varies with the amount and intensity of rainfall. Therefore, further evaluation and modification of the input parameters were made for only 62 storms in 14 watersheds. Runoff ranged from 1.4 to 99.3 percent of rainfall. The average runoff for all watersheds included in this study was about 36 percent of rainfall. The average runoff for the urban, natural, and mixed land-use watersheds was about 41, 27, and 29 percent, respectively. Initial estimates of peak discharge using the Rational Method produced average watershed errors that ranged from an underestimation of 50.4 percent to an overestimation of 767 percent. The coefficient of runoff ranged from 0.20 to 0.60. Calibration of the technique produced average errors that ranged from an underestimation of 3.3 percent to an overestimation of 1.5 percent. 
The average calibrated coefficient of runoff for each watershed ranged from 0.02 to 0.72. The average values of the coefficient of runoff necessary to calibrate the urban, natural, and mixed land-use watersheds were 0.39, 0.16, and 0.08, respectively. The U.S. Geological Survey regional regression equations for determining peak discharge produced errors that ranged from an underestimation of 87.3 percent to an overestimation of 1,140 percent. The regression equations for determining runoff volume produced errors that ranged from an underestimation of 95.6 percent to an overestimation of 324 percent. Regression equations developed from data used for this study produced errors that ranged between an underestimation of 82.8 percent and an overestimation of 328 percent for peak discharge, and from an underestimation of 71.2 percent to an overestimation of 241 percent for runoff volume. Use of the equations developed for west-central Florida streams produced average errors for each type of watershed that were lower than errors associated with use of the U.S. Geological Survey equations. Initial estimates of peak discharges and runoff volumes using the Natural Resources Conservation Service TR-20 model produced average errors of 44.6 and 42.7 percent, respectively, for all the watersheds. Curve numbers and times of concentration were adjusted to match estimated and observed peak discharges and runoff volumes. The average change in the curve number for all the watersheds was a decrease of 2.8 percent. The average change in the time of concentration was an increase of 59.2 percent. The shape of the input dimensionless unit hydrograph also had to be adjusted to match the shape and peak time of the estimated and observed flood hydrographs. Peak rate factors for the modified input dimensionless unit hydrographs ranged from 162 to 454. 
The mean errors for peak discharges and runoff volumes were reduced to 18.9 and 19.5 percent, respectively, using the average calibrated input parameters for ea
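The Rational Method evaluated first in this study is a one-line formula; in the sketch below the rainfall intensity and drainage area are hypothetical, while C = 0.39 matches the average calibrated urban runoff coefficient reported above.

```python
def rational_peak_discharge(c, i_in_per_hr, area_acres):
    """Rational Method peak discharge estimate, Q = C i A. With i in
    inches/hour and A in acres, Q comes out in ft^3/s (the exact unit
    conversion factor is ~1.008 and is customarily dropped)."""
    return c * i_in_per_hr * area_acres

# Hypothetical storm over a 100-acre urban watershed, using the study's
# average calibrated urban coefficient of 0.39:
print(rational_peak_discharge(0.39, 2.0, 100.0))   # 78.0 cfs
```

The study's wide calibrated range of C (0.02 to 0.72) shows why uncalibrated, textbook coefficients (0.20 to 0.60 here) produced such large initial errors.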
Self-Tuning of Design Variables for Generalized Predictive Control
NASA Technical Reports Server (NTRS)
Lin, Chaung; Juang, Jer-Nan
2000-01-01
Three techniques are introduced to determine the order and control weighting for the design of a generalized predictive controller. These techniques are based on the application of fuzzy logic, genetic algorithms, and simulated annealing to conduct an optimal search on specific performance indexes or objective functions. Fuzzy logic is found to be feasible for real-time and on-line implementation due to its smooth and quick convergence. On the other hand, genetic algorithms and simulated annealing are applicable for initial estimation of the model order and control weighting, and for final fine-tuning within a small region of the solution space. Several numerical simulations for a multiple-input and multiple-output system are given to illustrate the techniques developed in this paper.
A Study on Regional Frequency Analysis using Artificial Neural Network - the Sumjin River Basin
NASA Astrophysics Data System (ADS)
Jeong, C.; Ahn, J.; Ahn, H.; Heo, J. H.
2017-12-01
Regional frequency analysis compensates for the short record lengths that limit at-site frequency analysis by pooling data across a region. Regional rainfall quantiles depend on the identification of hydrologically homogeneous regions, so regional classification based on the hydrological homogeneity assumption is very important. For regional clustering of rainfall, multidimensional variables and factors related to geographical features and meteorological characteristics are considered, such as mean annual precipitation, number of days with precipitation in a year, and average maximum daily precipitation in a month. The Self-Organizing Feature Map (SOM) method, an unsupervised-learning artificial neural network algorithm, handles N-dimensional, nonlinear problems and presents its results simply as a data visualization technique. In this study, for the Sumjin River basin in South Korea, cluster analysis was performed based on the SOM method using high-dimensional geographical features and meteorological factors as input data. Then, to evaluate the homogeneity of the resulting regions, the L-moment-based discordancy and heterogeneity measures were used. Rainfall quantiles were estimated with the index flood method, a standard approach in regional rainfall frequency analysis. The clustering analysis using the SOM method and the consequent variation in rainfall quantiles were analyzed. This research was supported by a grant (2017-MPSS31-001) from the Supporting Technology Development Program for Disaster Management funded by the Ministry of Public Safety and Security (MPSS) of the Korean government.
Taking Stock of Circumboreal Forest Carbon With Ground Measurements, Airborne and Spaceborne LiDAR
NASA Technical Reports Server (NTRS)
Neigh, Christopher S. R.; Nelson, Ross F.; Ranson, K. Jon; Margolis, Hank A.; Montesano, Paul M.; Sun, Guoqing; Kharuk, Viacheslav; Naesset, Erik; Wulder, Michael A.; Andersen, Hans-Erik
2013-01-01
The boreal forest accounts for one-third of global forests, but remains largely inaccessible to ground-based measurements and monitoring. It contains large quantities of carbon in its vegetation and soils, and research suggests that it will be subject to increasingly severe climate-driven disturbance. We employ a suite of ground-, airborne- and space-based measurement techniques to derive the first satellite LiDAR-based estimates of aboveground carbon for the entire circumboreal forest biome. Incorporating these inventory techniques with uncertainty analysis, we estimate total aboveground carbon of 38 +/- 3.1 Pg. This boreal forest carbon is mostly concentrated from 50° to 55°N in eastern Canada and from 55° to 60°N in eastern Eurasia. Both of these regions are expected to warm >3 °C by 2100, and monitoring the effects of warming on these stocks is important to understanding its future carbon balance. Our maps establish a baseline for future quantification of circumboreal carbon and the described technique should provide a robust method for future monitoring of the spatial and temporal changes of the aboveground carbon content.
NASA Astrophysics Data System (ADS)
Cervelli, P.; Murray, M. H.; Segall, P.; Aoki, Y.; Kato, T.
2001-06-01
We have applied two Monte Carlo optimization techniques, simulated annealing and random cost, to the inversion of deformation data for fault and magma chamber geometry. These techniques involve an element of randomness that permits them to escape local minima and ultimately converge to the global minimum of misfit space. We have tested the Monte Carlo algorithms on two synthetic data sets. We have also compared them to one another in terms of their efficiency and reliability. We have applied the bootstrap method to estimate confidence intervals for the source parameters, including the correlations inherent in the data. Additionally, we present methods that use the information from the bootstrapping procedure to visualize the correlations between the different model parameters. We have applied these techniques to GPS, tilt, and leveling data from the March 1997 earthquake swarm off the Izu Peninsula, Japan. Using the two Monte Carlo algorithms, we have inferred two sources, a dike and a fault, that fit the deformation data and the patterns of seismicity and that are consistent with the regional stress field.
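The escape-from-local-minima behaviour of simulated annealing can be sketched on a toy one-dimensional misfit with several basins; the misfit function, step size, and cooling schedule below are illustrative choices, not those used in the geodetic inversion.

```python
import math
import random

def simulated_annealing(misfit, x0, step=0.5, t0=2.0, n_iter=5000, seed=0):
    """Minimal simulated annealing sketch: downhill moves are always
    accepted; uphill moves are accepted with probability exp(-delta/T),
    which lets the search escape local minima as T cools."""
    rng = random.Random(seed)
    x, fx = x0, misfit(x0)
    best_x, best_f = x, fx
    for i in range(n_iter):
        t = t0 * (1 - i / n_iter) + 1e-6          # linear cooling schedule
        cand = x + rng.uniform(-step, step)
        fc = misfit(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fc < best_f:
                best_x, best_f = cand, fc
    return best_x, best_f

# Toy misfit: a parabola with cosine ripples, so gradient descent from
# x0 = -3 would stall in a local minimum; the global basin is near x = 2.
f = lambda x: (x - 2) ** 2 + 1.5 * math.cos(5 * x)
x_hat, f_hat = simulated_annealing(f, x0=-3.0)
print(round(x_hat, 2), round(f_hat, 2))
```

In the deformation inversion the scalar x becomes the vector of fault or magma chamber geometry parameters and the misfit compares predicted with observed GPS, tilt, and leveling data, but the accept/reject logic is the same.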
Estimation of Solar Radiation on Building Roofs in Mountainous Areas
NASA Astrophysics Data System (ADS)
Agugiaro, G.; Remondino, F.; Stevanato, G.; De Filippi, R.; Furlanello, C.
2011-04-01
The aim of this study is to estimate solar radiation on building roofs in complex mountain landscapes. A multi-scale solar radiation estimation methodology is proposed that combines 3D data ranging from the regional scale to the architectural scale. Both terrain and nearby-building shadowing effects are considered. The approach is modular: several alternative roof models, obtained by surveying and modelling techniques at varying levels of detail, can be embedded in a DTM, e.g. that of an Alpine valley surrounded by mountains. The solar radiation maps obtained from raster models at different resolutions are compared and evaluated to determine the benefits and disadvantages of each roof modelling approach. The solar radiation estimation is performed within the open-source GRASS GIS environment using r.sun and its ancillary modules.
Abad-Franch, Fernando; Ferraz, Gonçalo; Campos, Ciro; Palomeque, Francisco S.; Grijalva, Mario J.; Aguilar, H. Marcelo; Miles, Michael A.
2010-01-01
Background: Failure to detect a disease agent or vector where it actually occurs constitutes a serious drawback in epidemiology. In the pervasive situation where no sampling technique is perfect, the explicit analytical treatment of detection failure becomes a key step in the estimation of epidemiological parameters. We illustrate this approach with a study of Attalea palm tree infestation by Rhodnius spp. (Triatominae), the most important vectors of Chagas disease (CD) in northern South America. Methodology/Principal Findings: The probability of detecting triatomines in infested palms is estimated by repeatedly sampling each palm. This knowledge is used to derive an unbiased estimate of the biologically relevant probability of palm infestation. We combine maximum-likelihood analysis and information-theoretic model selection to test the relationships between environmental covariates and infestation of 298 Amazonian palm trees over three spatial scales: region within Amazonia, landscape, and individual palm. Palm infestation estimates are high (40–60%) across regions, and well above the observed infestation rate (24%). Detection probability is higher (∼0.55 on average) in the richest-soil region than elsewhere (∼0.08). Infestation estimates are similar in forest and rural areas, but lower in urban landscapes. Finally, individual palm covariates (accumulated organic matter and stem height) explain most of the variation in infestation rates. Conclusions/Significance: Individual palm attributes appear as key drivers of infestation, suggesting that CD surveillance must incorporate local-scale knowledge and that peridomestic palm tree management might help lower transmission risk. Vector populations are probably denser in rich-soil sub-regions, where CD prevalence tends to be higher; this suggests a target for research on broad-scale risk mapping.
Landscape-scale effects indicate that palm triatomine populations can endure deforestation in rural areas, but become rarer in heavily disturbed urban settings. Our methodological approach has wide application in infectious disease research; by improving eco-epidemiological parameter estimation, it can also significantly strengthen vector surveillance-control strategies. PMID:20209149
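The detection-corrected infestation estimate described above can be sketched as a simple occupancy-style likelihood fit by grid search; the infestation and detection probabilities and palm counts below are illustrative assumptions, not the paper's estimates:

```python
import math
import random
from collections import Counter
from itertools import product

random.seed(1)

# Simulated repeat sampling: a palm is truly infested with probability psi,
# and each of k visits to an infested palm detects triatomines with
# probability p. All values are illustrative.
psi_true, p_true, k, n_palms = 0.5, 0.4, 4, 2000
counts = []
for _ in range(n_palms):
    infested = random.random() < psi_true
    counts.append(sum(random.random() < p_true for _ in range(k)) if infested else 0)
tally = Counter(counts)

def log_lik(psi, p):
    # Zero detections: the palm is either uninfested, or infested but missed k times.
    ll = tally[0] * math.log(psi * (1 - p) ** k + (1 - psi))
    for y in range(1, k + 1):       # y > 0 detections: the palm must be infested
        if tally[y]:
            ll += tally[y] * (math.log(psi) + math.log(math.comb(k, y))
                              + y * math.log(p) + (k - y) * math.log(1 - p))
    return ll

# Crude grid-search MLE; a real occupancy analysis would use an optimizer.
grid = [i / 100 for i in range(1, 100)]
psi_hat, p_hat = max(product(grid, grid), key=lambda ab: log_lik(*ab))

naive = sum(y > 0 for y in counts) / n_palms   # observed infestation rate
```

As in the abstract, the naive observed rate sits well below the detection-corrected estimate `psi_hat`.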
A Bayesian hierarchical model for accident and injury surveillance.
MacNab, Ying C
2003-01-01
This article presents a recent study which applies Bayesian hierarchical methodology to model and analyse accident and injury surveillance data. A hierarchical Poisson random effects spatio-temporal model is introduced, and an analysis of inter-regional variations and regional trends in hospitalisations due to motor vehicle accident injuries to boys aged 0-24 in the province of British Columbia, Canada, is presented. The objective of this article is to illustrate how the modelling technique can be implemented as part of an accident and injury surveillance and prevention system where transportation and/or health authorities may routinely examine accidents, injuries, and hospitalisations to target high-risk regions for prevention programs, to evaluate prevention strategies, and to assist in health planning and resource allocation. The innovation of the methodology is its ability to uncover and highlight important underlying structure of the data. Between 1987 and 1996, the British Columbia hospital separation registry recorded 10,599 motor vehicle traffic injury related hospitalisations among boys aged 0-24 who resided in British Columbia, of which the majority (89%) occurred to boys aged 15-24. The injuries were aggregated by three age groups (0-4, 5-14, and 15-24), 20 health regions (based on place of residence), and 10 calendar years (1987 to 1996), and the corresponding mid-year population estimates were used as the 'at risk' population. An empirical Bayes inference technique using penalised quasi-likelihood estimation was implemented to model both rates and counts, with spline smoothing accommodating non-linear temporal effects.
The results show that (a) crude rates and ratios at the health region level are unstable, (b) the models with spline smoothing enable us to explore possible shapes of injury trends at both the provincial and regional levels, and (c) the fitted models provide a wealth of information about the patterns (both over space and time) of the injury counts, rates, and ratios. During the 10-year period, high injury risk ratios evolved from the northwest to the central-interior and the southeast.
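The instability of crude small-area rates, and how Bayesian-style pooling stabilises them, can be shown with a much simpler conjugate Poisson-Gamma shrinkage sketch; the paper itself uses penalised quasi-likelihood with spline smoothing, not this shortcut, and all counts below are invented:

```python
# Small regional counts give unstable crude rates; shrinkage toward the
# provincial mean stabilises them. Posterior mean under the conjugate model
# rate_i ~ Gamma(a, b), y_i ~ Poisson(n_i * rate_i) is (a + y_i) / (b + n_i).
counts = [2, 40, 0, 15]              # injuries per region (illustrative)
pops = [1000, 20000, 500, 8000]      # person-years at risk (illustrative)

overall = sum(counts) / sum(pops)    # pooled "provincial" rate
a = 2.0                              # prior shape (assumed)
b = a / overall                      # prior rate, so prior mean = overall

crude = [y / n for y, n in zip(counts, pops)]
smooth = [(a + y) / (b + n) for y, n in zip(counts, pops)]
```

Note how the zero-count region, whose crude rate is an implausible 0, is pulled toward the provincial mean.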
Lee, Chung-Hao; Amini, Rouzbeh; Gorman, Robert C.; Gorman, Joseph H.; Sacks, Michael S.
2013-01-01
Estimation of regional tissue stresses in the functioning heart valve remains an important goal in our understanding of normal valve function and in developing novel engineered tissue strategies for valvular repair and replacement. Methods to accurately estimate regional tissue stresses are thus needed for this purpose, and in particular to develop accurate, statistically informed means to validate computational models of valve function. Moreover, there exists no currently accepted method to evaluate engineered heart valve tissues and replacement heart valve biomaterials undergoing valvular stresses in blood contact. While we have utilized mitral valve anterior leaflet valvuloplasty as an experimental approach to address this limitation, robust computational techniques to estimate implant stresses are required. In the present study, we developed a novel numerical analysis approach for estimation of the in-vivo stresses of the central region of the mitral valve anterior leaflet (MVAL) delimited by a sonocrystal transducer array. The in-vivo material properties of the MVAL were simulated using an inverse FE modeling approach based on three pseudo-hyperelastic constitutive models: the neo-Hookean, exponential-type isotropic, and full collagen-fiber mapped transversely isotropic models. A series of numerical replications with varying structural configurations were developed by incorporating measured statistical variations in MVAL local preferred fiber directions and fiber splay. These model replications were then used to investigate how known variations in the valve tissue microstructure influence the estimated region-of-interest (ROI) stresses and their variation at each time point during a cardiac cycle. Simulations were also able to include estimates of the variation in tissue stresses for an individual specimen dataset over the cardiac cycle.
Of the three material models, the transversely isotropic model produced the most accurate results, with ROI-averaged stresses at the fully loaded state of 432.6±46.5 kPa and 241.4±40.5 kPa in the radial and circumferential directions, respectively. We conclude that the present approach can provide robust instantaneous mean and variation estimates of tissue stresses in the central region of the MVAL. PMID:24275434
Deb, Dibyendu; Singh, J P; Deb, Shovik; Datta, Debajit; Ghosh, Arunava; Chaurasia, R S
2017-10-20
Determination of the above ground biomass (AGB) of any forest is a longstanding scientific endeavor that helps to estimate net primary productivity, carbon stock, and other biophysical parameters of that forest. With the advancement of geospatial technology over the last few decades, AGB estimation can now be done using space-borne and airborne remotely sensed data. It is a well-established, time-saving, and cost-effective technique with high precision and is frequently applied by the scientific community. It involves development of allometric equations based on correlations of ground-based forest biomass measurements with vegetation indices derived from remotely sensed data. However, selecting the best-fit and most explanatory model of biomass estimation often becomes difficult with respect to the image data resolution (spatial and spectral) as well as the sensor platform position in space. Using Resourcesat-2 satellite data and the Normalized Difference Vegetation Index (NDVI), this pilot-scale study compared traditional linear and nonlinear models with an artificial intelligence-based non-parametric technique, the artificial neural network (ANN), to formulate the best-fit model for determining the AGB of forest in the Bundelkhand region of India. The results confirmed the superiority of the ANN over the other models in terms of several statistical significance and reliability measures. Accordingly, this study proposed the use of the ANN instead of traditional models for determination of AGB and other biophysical parameters of any dry deciduous forest of a tropical sub-humid or semi-arid area. In addition, large numbers of sampling sites with different quadrant sizes for trees, shrubs, and herbs, as well as application of LiDAR data as a predictor variable, were recommended for very high precision ANN modelling in a large-scale study.
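The linear-versus-ANN comparison can be sketched on synthetic data; the saturating NDVI-biomass relationship, noise level, units, and network size below are assumptions for illustration, not the study's field data or fitted model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic NDVI/AGB pairs with a saturating (nonlinear) relationship,
# a stand-in for the study's field plots and Resourcesat-2 NDVI.
ndvi = rng.uniform(0.1, 0.8, 200)
agb = 150 * ndvi / (0.25 + ndvi) + rng.normal(0, 3, 200)   # t/ha, assumed

# Baseline: ordinary linear regression NDVI -> AGB.
slope, intercept = np.polyfit(ndvi, agb, 1)
rmse_lin = np.sqrt(np.mean((slope * ndvi + intercept - agb) ** 2))

# One-hidden-layer ANN (tanh), full-batch gradient descent on standardized
# inputs/targets: the same family of non-parametric model the study used.
x_mu, x_sd = ndvi.mean(), ndvi.std()
y_mu, y_sd = agb.mean(), agb.std()
X = ((ndvi - x_mu) / x_sd)[:, None]
Y = ((agb - y_mu) / y_sd)[:, None]
W1 = rng.normal(0, 1, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(10000):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    err = h @ W2 + b2 - Y               # residuals (standardized scale)
    dh = (err @ W2.T) * (1 - h ** 2)    # backprop through tanh
    W2 -= lr * h.T @ err / len(X); b2 -= lr * err.mean(0)
    W1 -= lr * X.T @ dh / len(X); b1 -= lr * dh.mean(0)

pred = (np.tanh(X @ W1 + b1) @ W2 + b2).ravel() * y_sd + y_mu
rmse_ann = np.sqrt(np.mean((pred - agb) ** 2))
```

Because the NDVI-AGB relation saturates, the network captures the curvature that the straight line cannot, mirroring the study's finding.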
Genome-wide heterogeneity of nucleotide substitution model fit.
Arbiza, Leonardo; Patricio, Mateus; Dopazo, Hernán; Posada, David
2011-01-01
At a genomic scale, the patterns that have shaped molecular evolution are believed to be largely heterogeneous. Consequently, comparative analyses should use appropriate probabilistic substitution models that capture the main features under which different genomic regions have evolved. While efforts have concentrated on the development and understanding of model selection techniques, no descriptions of overall relative substitution model fit at the genome level have been reported. Here, we provide a characterization of best-fit substitution models across three genomic data sets including coding regions from mammals, vertebrates, and Drosophila (24,000 alignments). According to the Akaike Information Criterion (AIC), 82 of the 88 models considered were selected as best-fit models on at least one occasion, although with very different frequencies. Most parameter estimates also varied broadly among genes. Patterns found for vertebrates and Drosophila were quite similar and often more complex than those found in mammals. Phylogenetic trees derived from models in the 95% confidence interval set showed much less variance and were significantly closer to the tree estimated under the best-fit model than trees derived from models outside this interval. Although alternative criteria selected simpler models than the AIC, they suggested similar patterns. Altogether, our results show that at a genomic scale, different gene alignments for the same set of taxa are best explained by a large variety of substitution models, and that model choice has implications for different parameter estimates, including the inferred phylogenetic trees. After taking into account the differences related to sample size, our results suggest a noticeable diversity in the underlying evolutionary processes. We conclude that the use of model selection techniques is important to obtain consistent phylogenetic estimates from real data at a genomic scale.
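The AIC-based choice among candidate models, AIC = 2k - 2 ln L with the lowest value preferred, can be illustrated generically; the two simple distributions below stand in for the 88 substitution models of the study:

```python
import math
import random

random.seed(2)

# Pick between a one-parameter exponential and a two-parameter lognormal
# fit to the same data by AIC = 2k - 2 ln L. Data are synthetic.
data = [random.lognormvariate(0.0, 0.5) for _ in range(500)]

# Exponential MLE: rate = 1 / sample mean (k = 1 parameter).
mean = sum(data) / len(data)
ll_exp = sum(-math.log(mean) - x / mean for x in data)
aic_exp = 2 * 1 - 2 * ll_exp

# Lognormal MLE: mu, sigma from the log-data (k = 2 parameters).
logs = [math.log(x) for x in data]
mu = sum(logs) / len(logs)
sigma = math.sqrt(sum((l - mu) ** 2 for l in logs) / len(logs))
ll_ln = sum(-math.log(x * sigma * math.sqrt(2 * math.pi))
            - (math.log(x) - mu) ** 2 / (2 * sigma ** 2) for x in data)
aic_ln = 2 * 2 - 2 * ll_ln

best = "lognormal" if aic_ln < aic_exp else "exponential"
```

The extra parameter is only favoured when the gain in log-likelihood exceeds its AIC penalty, which is the trade-off behind the genome-wide model-selection results above.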
Madu, C N; Quint, D J; Normolle, D P; Marsh, R B; Wang, E Y; Pierce, L J
2001-11-01
To delineate with computed tomography (CT) the anatomic regions containing the supraclavicular (SCV) and infraclavicular (IFV) nodal groups, to define the course of the brachial plexus, to estimate the actual radiation dose received by these regions in a series of patients treated in the traditional manner, and to compare these doses to those received with an optimized dosimetric technique. Twenty patients underwent contrast material-enhanced CT for the purpose of radiation therapy planning. CT scans were used to study the location of the SCV and IFV nodal regions by using outlining of readily identifiable anatomic structures that define the nodal groups. The brachial plexus was also outlined by using similar methods. Radiation therapy doses to the SCV and IFV were then estimated by using traditional dose calculations and optimized planning. A repeated measures analysis of covariance was used to compare the SCV and IFV depths and to compare the doses achieved with the traditional and optimized methods. Coverage by the 90% isodose surface was significantly decreased with traditional planning versus conformal planning as the depth to the SCV nodes increased (P < .001). Significantly decreased coverage by using the 90% isodose surface was demonstrated for traditional planning versus conformal planning with increasing IFV depth (P = .015). A linear correlation was found between brachial plexus depth and SCV depth up to 7 cm. Conformal optimized planning provided improved dosimetric coverage compared with standard techniques.
NASA Astrophysics Data System (ADS)
Abe, O. E.; Otero Villamide, X.; Paparini, C.; Radicella, S. M.; Nava, B.; Rodríguez-Bouza, M.
2017-04-01
Global Navigation Satellite Systems (GNSS) have become a powerful tool used in surveying and mapping, air and maritime navigation, ionospheric/space weather research, and other applications. However, in some cases their maximum efficiency cannot be attained because of uncorrelated errors associated with the system measurements, caused mainly by the dispersive nature of the ionosphere. The ionosphere is commonly represented by the total number of electrons along the signal path, known as the Total Electron Content (TEC). There are many methods to estimate TEC, but their outputs are not uniform, which could be due to differences in how the biases inside the observables (measurements) are characterized, and sometimes to the influence of the mapping function. Errors in TEC estimation can lead to wrong conclusions, which could be critical in safety-of-life applications. This work investigated the performance of Ciraolo's and Gopi's GNSS-TEC calibration techniques during 5 geomagnetically quiet and disturbed days in October 2013, at grid points located in low and middle latitudes. The data used were obtained from GNSS ground-based receivers located at Borriana, Spain (40°N, 0°E; middle latitude) and Accra, Ghana (5.50°N, -0.20°E; low latitude). The calibrated TEC is compared with TEC obtained from the European Geostationary Navigation Overlay Service Processing Set (EGNOS PS) TEC algorithm, which is considered the reference data set. TEC derived from Global Ionospheric Maps (GIM) through the International GNSS Service (IGS) was also examined at the same grid points. The results show that Ciraolo's calibration technique (based on carrier-phase measurements only) estimates TEC better at middle latitude than Gopi's technique (based on code and carrier-phase measurements). At the same time, Gopi's calibration was found to be more reliable at low latitude than Ciraolo's technique. In addition, the TEC derived from IGS GIM appears much more reliable in the middle-latitude than in the low-latitude region.
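For reference, the raw slant TEC that such calibration techniques start from comes from the standard geometry-free combination of dual-frequency observables; the pseudorange values below are made up for illustration, and real calibration must still remove satellite/receiver biases:

```python
# Geometry-free dual-frequency combination for slant TEC:
# STEC = f1^2 * f2^2 / (40.3 * (f1^2 - f2^2)) * (P2 - P1), in electrons/m^2.
# This is the uncalibrated quantity; Ciraolo's and Gopi's techniques then
# estimate and remove the interfrequency biases in different ways.
f1, f2 = 1575.42e6, 1227.60e6          # GPS L1 and L2 frequencies (Hz)
p1, p2 = 21_000_000.0, 21_000_003.2    # pseudoranges (m); L2 is delayed more
k = f1 ** 2 * f2 ** 2 / (40.3 * (f1 ** 2 - f2 ** 2))
stec = k * (p2 - p1)                   # slant TEC, electrons per m^2
stec_tecu = stec / 1e16                # 1 TECU = 1e16 electrons/m^2
```

A 3.2 m differential delay corresponds to roughly 30 TECU, a plausible daytime slant value; carrier-phase combinations give a much less noisy but ambiguous version of the same quantity.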
Comparison of monthly rain rates derived from GPI and SSM/I using probability distribution functions
NASA Technical Reports Server (NTRS)
Chiu, Long S.; Chang, Alfred T. C.; Janowiak, John
1993-01-01
Three years of monthly rain rates over 5° × 5° latitude-longitude boxes have been calculated for oceanic regions 50°N-50°S from measurements taken by the Special Sensor Microwave/Imager on board the Defense Meteorological Satellite Program satellites using the technique developed by Wilheit et al. (1987, 1991). The annual and seasonal zonal-mean rain rates are larger than Jaeger's (1983) climatological estimates but are smaller than those estimated from the GOES precipitation index (GPI) for the same period. Regional comparison with the GPI showed that these rain rates are smaller in the north Indian Ocean and in the southern extratropics, where the GPI is known to overestimate. The differences are also dominated by a jump at 170°W in the GPI rain rates across the mid-Pacific Ocean. This jump is attributed to the fusion of different satellite measurements in producing the GPI.
NASA Astrophysics Data System (ADS)
Chen, Z.; Chen, J.; Zheng, X.; Jiang, F.; Zhang, S.; Ju, W.; Yuan, W.; Mo, G.
2014-12-01
In this study, we explore the feasibility of optimizing ecosystem photosynthetic and respiratory parameters from the seasonal variation pattern of the net carbon flux. An optimization scheme is proposed to estimate two key parameters (Vcmax and Q10) by exploiting the seasonal variation in the net ecosystem carbon flux retrieved by an atmospheric inversion system. This scheme is implemented to estimate Vcmax and Q10 of the Boreal Ecosystem Productivity Simulator (BEPS) to improve its NEP simulation in the Boreal North America (BNA) region. Simultaneously, in-situ NEE observations at six eddy covariance sites are used to evaluate the NEE simulations. The results show that the performance of the optimized BEPS is superior to that of the BEPS with the default parameter values. These results have implications for using atmospheric CO2 data to optimize ecosystem parameters through atmospheric inversion or data assimilation techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghanashyam Neupane; Earl D. Mattson; Travis L. McLing
2014-02-01
The U.S. Geological Survey has estimated that there are up to 4,900 MWe of undiscovered geothermal resources and 92,000 MWe of enhanced geothermal potential within the state of Idaho. Of particular interest are the resources of the Eastern Snake River Plain (ESRP), which was formed by volcanic activity associated with the relative movement of the Yellowstone Hot Spot across the state of Idaho. This region is characterized by a high geothermal gradient and thermal springs occurring along the margins of the ESRP. Masking much of the deep thermal potential of the ESRP is a regionally extensive and productive cold-water aquifer. We have undertaken a study to infer the temperature of the geothermal system hidden beneath the cold-water aquifer of the ESRP. Our approach is to estimate reservoir temperatures from measured water compositions using an inverse modeling technique (RTEst) that calculates the temperature at which multiple minerals are simultaneously at equilibrium, while explicitly accounting for the possible loss of volatile constituents (e.g., CO2), boiling, and/or water mixing. In the initial stages of this study, we apply the RTEst model to water compositions measured from a limited number of wells and thermal springs to estimate the temperature of the regionally extensive geothermal system in the ESRP.
Global and regional cause-of-death patterns in 1990.
Murray, C. J.; Lopez, A. D.
1994-01-01
Demographic estimation techniques suggest that worldwide about 50 million deaths occur each year, of which about 39 million are in the developing countries. In countries with adequate registration of vital statistics, the age at death and the cause can be reliably determined. Only about 30-35% of all deaths are captured by vital registration (excluding sample registration schemes); for the remainder, cause-of-death estimation procedures are required. Indirect methods which model the cause-of-death structure as a function of the level of mortality can provide reasonable estimates for broad cause-of-death groups. Such methods are generally unreliable for more specific causes. In this case, estimates can be constructed from community-level mortality surveillance systems or from epidemiological evidence on specific diseases. Some check on the plausibility of the estimates is possible in view of the hierarchical structure of cause-of-death lists and the well-known age-specific patterns of diseases and injuries. The results of applying these methods to estimate the cause of death for over 120 diseases or injuries, by age, sex and region, are described. The estimates have been derived in order to calculate the years of life lost due to premature death, one of the two components of overall disability-adjusted life years (DALYs) calculated for the 1993 World development report. Previous attempts at cause-of-death estimation have been limited to a few diseases only, with little age-specific detail. The estimates reported in detail here should serve as a useful reference for further public health research to support the determination of health sector priorities. PMID:8062402
A random-censoring Poisson model for underreported data.
de Oliveira, Guilherme Lopes; Loschi, Rosangela Helena; Assunção, Renato Martins
2017-12-30
A major challenge when monitoring risks in socially deprived areas of underdeveloped countries is that economic, epidemiological, and social data are typically underreported. Thus, statistical models that do not take data quality into account will produce biased estimates. To deal with this problem, counts in suspected regions are usually approached as censored information. The censored Poisson model can be considered, but all censored regions must be precisely known a priori, which is not a reasonable assumption in most practical situations. We introduce the random-censoring Poisson model (RCPM), which accounts for the uncertainty in both the count and the data-reporting processes. Consequently, for each region we are able to estimate both the relative risk for the event of interest and the censoring probability. To facilitate the posterior sampling process, we propose a Markov chain Monte Carlo scheme based on the data augmentation technique. We run a simulation study comparing the proposed RCPM with two competing models under different scenarios. The RCPM and the censored Poisson model are then applied to account for potential underreporting of early neonatal mortality counts in regions of Minas Gerais State, Brazil, where data quality is known to be poor. Copyright © 2017 John Wiley & Sons, Ltd.
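The underreporting mechanism that motivates such models can be sketched as a thinned Poisson process; this toy version assumes the reporting probability is known, unlike the RCPM, which infers it along with the relative risks:

```python
import math
import random

random.seed(3)

# Underreporting as thinning: true counts ~ Poisson(lam), each event is
# reported with probability eps, so reported counts ~ Poisson(lam * eps).
# Ignoring underreporting therefore biases the rate estimate downward.
lam_true, eps, n = 20.0, 0.7, 1000

def poisson(lam):
    # Knuth's multiplication method for sampling a Poisson variate.
    limit, k, prod = math.exp(-lam), 0, random.random()
    while prod > limit:
        k += 1
        prod *= random.random()
    return k

reported = [poisson(lam_true * eps) for _ in range(n)]
naive = sum(reported) / n      # biased: estimates lam * eps, not lam
corrected = naive / eps        # unbiased once the reporting probability is known
```

The hard part, and the contribution of the paper, is that in practice `eps` and which regions are censored are unknown and must be inferred.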
NASA Astrophysics Data System (ADS)
Cannon, Alex
2017-04-01
Estimating historical trends in short-duration rainfall extremes at regional and local scales is challenging due to low signal-to-noise ratios and the limited availability of homogenized observational data. In addition to being of scientific interest, trends in rainfall extremes are of practical importance, as their presence calls into question the stationarity assumptions that underpin traditional engineering and infrastructure design practice. Even with these fundamental challenges, increasingly complex questions are being asked about time series of extremes. For instance, users may not only want to know whether or not rainfall extremes have changed over time, they may also want information on the modulation of trends by large-scale climate modes or on the nonstationarity of trends (e.g., identifying hiatus periods or periods of accelerating positive trends). Efforts have thus been devoted to the development and application of more robust and powerful statistical estimators for regional and local scale trends. While a standard nonparametric method like the regional Mann-Kendall test, which tests for the presence of monotonic trends (i.e., strictly non-decreasing or non-increasing changes), makes fewer assumptions than parametric methods and pools information from stations within a region, it is not designed to visualize detected trends, include information from covariates, or answer questions about the rate of change in trends. As a remedy, monotone quantile regression (MQR) has been developed as a nonparametric alternative that can be used to estimate a common monotonic trend in extremes at multiple stations. Quantile regression makes efficient use of data by directly estimating conditional quantiles based on information from all rainfall data in a region, i.e., without having to precompute the sample quantiles. The MQR method is also flexible and can be used to visualize and analyze the nonlinearity of the detected trend. 
However, it is fundamentally a univariate technique, and cannot incorporate information from additional covariates, for example ENSO state or physiographic controls on extreme rainfall within a region. Here, the univariate MQR model is extended to allow the use of multiple covariates. Multivariate monotone quantile regression (MMQR) is based on a single hidden-layer feedforward network with the quantile regression error function and partial monotonicity constraints. The MMQR model is demonstrated via Monte Carlo simulations and the estimation and visualization of regional trends in moderate rainfall extremes based on homogenized sub-daily precipitation data at stations in Canada.
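The pinball (check) loss that underlies all the quantile-regression variants discussed above can be sketched with a plain linear-in-time model fit by subgradient descent; the synthetic "rainfall" series is an illustrative simplification, without the monotonicity constraints or neural network of the MQR/MMQR methods:

```python
import random

random.seed(4)

# Pinball loss for quantile tau: loss(e) = tau*e if e >= 0 else (tau-1)*e,
# with e = y - q(t). Minimising it over q(t) = a + b*t estimates a linear
# trend in the tau-th conditional quantile directly from the data.
tau = 0.9
years = [i / 50 for i in range(100)]                  # scaled time covariate
rain = [10 + 5 * t + random.expovariate(1 / (2 + 2 * t)) for t in years]

a, b = 0.0, 0.0
lr = 0.05
for _ in range(20000):
    ga = gb = 0.0
    for t, y in zip(years, rain):
        # Subgradient of the pinball loss with respect to the fitted quantile.
        g = -tau if y - (a + b * t) >= 0 else 1 - tau
        ga += g
        gb += g * t
    a -= lr * ga / len(years)
    b -= lr * gb / len(years)

frac_below = sum(y <= a + b * t for t, y in zip(years, rain)) / len(years)
```

At convergence roughly a fraction tau of the observations lies below the fitted line, and the positive slope recovers the imposed upward trend in the upper quantile.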
Regional estimation of base recharge to ground water using water balance and a base-flow index.
Szilagyi, Jozsef; Harvey, F Edwin; Ayers, Jerry F
2003-01-01
Naturally occurring long-term mean annual base recharge to ground water in Nebraska was estimated with the help of a water-balance approach and an objective automated technique for base-flow separation involving minimal parameter-optimization requirements. Base recharge is equal to total recharge minus the amount of evapotranspiration coming directly from ground water. The estimation of evapotranspiration in the water-balance equation avoids the need to specify a contributing drainage area for ground water, which in certain cases may be considerably different from the drainage area for surface runoff. Evapotranspiration was calculated with the WREVAP model at the Solar and Meteorological Surface Observation Network (SAMSON) sites. Long-term mean annual base recharge was derived as the product of estimated long-term mean annual runoff (the difference between precipitation and evapotranspiration) and the base-flow index (BFI). The BFI was calculated from discharge data obtained from the U.S. Geological Survey's gauging stations in Nebraska. Mapping was achieved using geographic information systems (GIS) and geostatistics. This approach is best suited for regional-scale applications: it requires neither complex hydrogeologic modeling nor detailed knowledge of soil characteristics, vegetation cover, or land-use practices. Long-term mean annual base recharge rates in excess of 110 mm/year were found in the extreme eastern part of Nebraska. The western portion of the state showed rates of only 15 to 20 mm annually, while the Sandhills region of north-central Nebraska was estimated to receive twice as much base recharge (40 to 50 mm/year) as areas south of it.
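The central estimate reduces to a product of water-balance terms and can be written out directly; the numbers below are illustrative, not Nebraska values:

```python
# Base recharge via the water-balance approach of the abstract:
# base_recharge = (P - ET) * BFI, all as long-term mean annual values in mm.
precip_mm = 700.0          # long-term mean annual precipitation (illustrative)
et_mm = 560.0              # long-term mean annual evapotranspiration (illustrative)
bfi = 0.55                 # base-flow index from gauged discharge (illustrative)

runoff_mm = precip_mm - et_mm          # long-term mean annual runoff
base_recharge_mm = runoff_mm * bfi     # -> 77.0 mm/year
```

Mapping the result then only requires interpolating `runoff_mm` and `bfi` across the region, which is why the method needs no hydrogeologic modeling.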
Radiation doses in volume-of-interest breast computed tomography—A Monte Carlo simulation study
Lai, Chao-Jen; Zhong, Yuncheng; Yi, Ying; Wang, Tianpeng; Shaw, Chris C.
2015-01-01
Purpose: Cone beam breast computed tomography (breast CT) with true three-dimensional, nearly isotropic spatial resolution has been developed and investigated over the past decade to overcome the problem of lesions overlapping with breast anatomical structures on two-dimensional mammographic images. However, the ability of breast CT to detect small objects, such as tissue structure edges and small calcifications, is limited. To resolve this problem, the authors proposed and developed a volume-of-interest (VOI) breast CT technique to image a small VOI using a higher radiation dose to improve that region's visibility. In this study, the authors performed Monte Carlo simulations to estimate average breast dose and average glandular dose (AGD) for the VOI breast CT technique. Methods: Electron Gamma Shower (EGS) system code-based Monte Carlo codes were used to simulate breast CT. The Monte Carlo codes were validated using physical measurements of air kerma ratios and point doses in phantoms with an ion chamber and optically stimulated luminescence dosimeters. The validated full cone x-ray source was then collimated to simulate half cone beam x-rays to image digital pendant-geometry, hemi-ellipsoidal, homogeneous breast phantoms and to estimate breast doses with full field scans. Hemi-ellipsoidal homogeneous phantoms 13 cm in diameter and 10 cm long were used to simulate median breasts. Breast compositions of 25% and 50% volumetric glandular fraction (VGF) were used to investigate the influence on breast dose. The simulated half cone beam x-rays were then collimated to a narrow x-ray beam with a 2.5 × 2.5 cm² field of view at the isocenter plane to perform VOI field scans. The Monte Carlo results for the full field scans and the VOI field scans were then used to estimate the AGD for the VOI breast CT technique.
Results: The ratios of air kerma ratios and dose measurement results from the Monte Carlo simulation to those from the physical measurements were 0.97 ± 0.03 and 1.10 ± 0.13, respectively, indicating that the accuracy of the Monte Carlo simulation was adequate. The normalized AGD with VOI field scans was substantially reduced, by a factor of about 2 over the VOI region and by a factor of 18 over the entire breast, for both 25% and 50% VGF simulated breasts compared with the normalized AGD with full field scans. The normalized AGD for the VOI breast CT technique can be kept the same as or lower than that for a full field scan with the exposure level for the VOI field scan increased by a factor of as much as 12. Conclusions: The authors' Monte Carlo estimates of normalized AGDs show that the VOI breast CT technique can markedly increase the dose to the VOI, and thus the visibility of small objects in that region, without increasing the dose to the breast as a whole. These results should be helpful for those interested in using the VOI breast CT technique to image small calcifications with dose concerns. PMID:26127058
Regional W-Phase Source Inversion for Moderate to Large Earthquakes in China and Neighboring Areas
NASA Astrophysics Data System (ADS)
Zhao, Xu; Duputel, Zacharie; Yao, Zhenxing
2017-12-01
Earthquake source characterization has been significantly sped up in the last decade with the development of rapid inversion techniques in seismology. Among these techniques, the W-phase source inversion method quickly provides point source parameters of large earthquakes using very long period seismic waves recorded at teleseismic distances. Although the W-phase method was initially developed to work at the global scale (within 20 to 30 min after the origin time), faster results can be obtained when seismological data are available at regional distances (i.e., Δ ≤ 12°). In this study, we assess the use and reliability of regional W-phase source estimates in China and neighboring areas. Our implementation uses broadband records from the Chinese network supplemented by global seismological stations installed in the region. Using this data set and minor modifications to the W-phase algorithm, we show that reliable solutions can be retrieved automatically within 4 to 7 min after the earthquake origin time. Moreover, the method yields stable results down to Mw = 5.0 events, which is well below the size of earthquakes that are rapidly characterized using W-phase inversions at teleseismic distances.
Evaluation of macrozone dimensions by ultrasound and EBSD techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreau, Andre, E-mail: Andre.Moreau@cnrc-nrc.gc.ca; Toubal, Lotfi; Ecole de technologie superieure, 1100, rue Notre-Dame Ouest, Montreal, QC, Canada H3C 1K3
2013-01-15
Titanium alloys are known to have texture heterogeneities, i.e. regions much larger than the grain dimensions where the local orientation distribution of the grains differs from one region to the next. The electron backscattering diffraction (EBSD) technique is the method of choice to characterize these macro regions, which are called macrozones. Qualitatively, the images obtained by EBSD show that these macrozones may be larger or smaller, elongated or equiaxed. However, often no well-defined boundaries are observed between the macrozones, and it is very hard to obtain objective and quantitative estimates of the macrozone dimensions from these data. In the present work, we present a novel, non-destructive ultrasonic technique that provides objective and quantitative characteristic dimensions of the macrozones. The obtained dimensions are based on the spatial autocorrelation function of fluctuations in the sound velocity. Thus, a pragmatic definition of macrozone dimensions naturally arises from the ultrasonic measurement. This paper has three objectives: 1) to disclose the novel, non-destructive ultrasonic technique to measure macrozone dimensions, 2) to propose a quantitative and objective definition of macrozone dimensions adapted to and arising from the ultrasonic measurement, which is also applicable to the orientation data obtained by EBSD, and 3) to compare the macrozone dimensions obtained using the two techniques on two samples of the near-alpha titanium alloy IMI834. In addition, it was observed that macrozones may present a semi-periodical arrangement.
- Highlights: Discloses a novel ultrasonic NDT technique to measure macrozone dimensions; proposes a quantitative and objective definition of macrozone dimensions; compares macrozone dimensions obtained using EBSD and ultrasonics on two Ti samples; observes that macrozones may have a semi-periodical arrangement.
NASA Astrophysics Data System (ADS)
Ruggeri, Paolo; Irving, James; Gloaguen, Erwan; Holliger, Klaus
2013-04-01
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the regional scale still represents a major challenge, yet it is critically important for the development of groundwater flow and contaminant transport models. To address this issue, we have developed a regional-scale hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure. The objective is to simulate the regional-scale distribution of a hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, our approach first involves linking the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. We present the application of this methodology to a pertinent field scenario, where we consider collocated high-resolution measurements of the electrical conductivity, measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, estimated from EM flowmeter and slug test measurements, in combination with low-resolution exhaustive electrical conductivity estimates obtained from dipole-dipole ERT measurements.
NASA Astrophysics Data System (ADS)
Garcia-Pintado, J.; Barberá, G. G.; Erena Arrabal, M.; Castillo, V. M.
2010-12-01
Objective analysis schemes (OAS), also called ``successive correction methods'' or ``observation nudging'', have been proposed for multisensor precipitation estimation combining remote sensing data (meteorological radar or satellite) with data from ground-based raingauge networks. However, in contrast to the more complex geostatistical approaches, the OAS techniques for this use are not optimized. On the other hand, geostatistical techniques ideally require, at a minimum, modelling the covariance from the rain gauge data at every time step evaluated, which commonly cannot be done soundly. Here, we propose a new procedure (concurrent multiplicative-additive objective analysis scheme [CMA-OAS]) for operational rainfall estimation using rain gauges and meteorological radar, which does not require explicit modelling of spatial covariances. On the basis of a concurrent multiplicative-additive (CMA) decomposition of the spatially nonuniform radar bias, within-storm variability of rainfall and fractional coverage of rainfall are taken into account. Thus both the spatially nonuniform radar bias, given that rainfall is detected, and the bias in radar detection of rainfall are handled. The interpolation procedure of the CMA-OAS is built on the OAS, whose purpose is to estimate a filtered spatial field of the variable of interest through successive correction of the residuals resulting from a Gaussian kernel smoother applied to spatial samples. The CMA-OAS first poses an optimization problem at each gauge-radar support point to obtain both a local multiplicative-additive radar bias decomposition and a regionalization parameter. Second, local biases and regionalization parameters are integrated into an OAS to estimate the multisensor rainfall at ground level.
The approach considers radar estimates as background a priori information (first guess), so that nudging toward the observations (gauges) may be relaxed smoothly to the first guess, with the relaxation shape obtained from the sequential optimization. The procedure is suited to relatively sparse rain gauge networks. To demonstrate the procedure, six storms were analyzed at hourly steps over 10,663 km2. Results generally indicated improved quality with respect to the other methods evaluated: a standard mean-field bias adjustment, an OAS spatially variable adjustment with multiplicative factors, ordinary cokriging, and kriging with external drift. In theory, the approach is equally applicable to gauge-satellite estimates and other hydrometeorological variables.
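The core of an OAS, a Gaussian kernel smoother followed by successive correction of the residuals at the observation sites, can be sketched in a few lines. This is a generic Barnes-type scheme, not the CMA-OAS itself; the function name, kernel parameters, and number of passes are illustrative.

```python
import numpy as np

def barnes_analysis(obs_xy, obs_val, grid_xy, kappa=1.0, gamma=0.3, n_passes=3):
    """Successive-correction (Barnes-type) objective analysis: smooth the
    scattered observations onto the grid with a normalized Gaussian kernel,
    then iteratively add back smoothed residuals with a tighter kernel."""
    def smooth(targets, sources, values, kap):
        # Squared distances between every target and every source point.
        d2 = ((targets[:, None, :] - sources[None, :, :]) ** 2).sum(axis=-1)
        w = np.exp(-d2 / kap)
        return (w * values[None, :]).sum(axis=1) / w.sum(axis=1)

    grid_est = smooth(grid_xy, obs_xy, obs_val, kappa)   # first-guess pass
    obs_est = smooth(obs_xy, obs_xy, obs_val, kappa)
    kap = kappa
    for _ in range(n_passes - 1):
        kap *= gamma                                     # tighten the kernel
        resid = obs_val - obs_est                        # residuals at gauges
        grid_est = grid_est + smooth(grid_xy, obs_xy, resid, kap)
        obs_est = obs_est + smooth(obs_xy, obs_xy, resid, kap)
    return grid_est
```

Each pass reduces the residuals at the gauge locations, so the analysis relaxes from the smooth first guess toward the observations, which is the nudging behavior described above.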
Estimation of Cloud Fraction Profile in Shallow Convection Using a Scanning Cloud Radar
Oue, Mariko; Kollias, Pavlos; North, Kirk W.; ...
2016-10-18
Large spatial heterogeneities in shallow convection result in uncertainties in estimations of domain-averaged cloud fraction profiles (CFP). This issue is addressed using large eddy simulations of shallow convection over land coupled with a radar simulator. Results indicate that zenith profiling observations are inadequate to provide reliable CFP estimates. Use of a Scanning Cloud Radar (SCR), performing a sequence of cross-wind horizon-to-horizon scans, is not straightforward due to the strong dependence of radar sensitivity on target distance. An objective method for estimating domain-averaged CFP is proposed that uses observed statistics of SCR hydrometeor detection with height to estimate optimum sampling regions. This method shows good agreement with the model CFP. Results indicate that CFP estimates require more than 35 min of SCR scans to converge on the model domain average. Lastly, the proposed technique is expected to improve our ability to compare model output with cloud radar observations in shallow cumulus cloud conditions.
NASA Astrophysics Data System (ADS)
Magga, Zoi; Tzovolou, Dimitra N.; Theodoropoulou, Maria A.; Tsakiroglou, Christos D.
2012-03-01
The risk assessment of groundwater pollution by pesticides may be based on pesticide sorption and biodegradation kinetic parameters estimated through inverse modeling of datasets from either batch or continuous flow soil column experiments. In the present work, a chemical non-equilibrium and non-linear 2-site sorption model is incorporated into solute transport models to invert the datasets of batch and soil column experiments and estimate the kinetic sorption parameters for two pesticides: N-phosphonomethyl glycine (glyphosate) and 2,4-dichlorophenoxy-acetic acid (2,4-D). When coupling the 2-site sorption model with the 2-region transport model, the soil column datasets enable us to estimate, in addition to the kinetic sorption parameters, the mass-transfer coefficients associated with solute diffusion between mobile and immobile regions. In order to improve the reliability of the models and kinetic parameter values, a stepwise strategy is required that combines batch and continuous flow tests with adequate true-to-the-mechanism analytical or numerical models, and that decouples the kinetics of purely reactive sorption steps from physical mass-transfer processes.
Kodera, Sachiko; Gomez-Tames, Jose; Hirata, Akimasa; Masuda, Hiroshi; Arima, Takuji; Watanabe, Soichi
2017-01-01
The rapid development of wireless technology has led to widespread concerns regarding adverse human health effects caused by exposure to electromagnetic fields. Temperature elevation in biological bodies is an important factor that can adversely affect health. A thermophysiological model is desired to quantify microwave (MW) induced temperature elevations. In this study, parameters related to thermophysiological responses for MW exposures were estimated using an electromagnetic-thermodynamics simulation technique. To the authors' knowledge, this is the first study in which parameters related to regional cerebral blood flow in a rat model were extracted with a high degree of accuracy through experimental measurements for localized MW exposure at frequencies exceeding 6 GHz. The findings indicate that the improved modeling parameters yield computed results that match well with the measured quantities during and after exposure in rats. It is expected that the computational model will be helpful in estimating the temperature elevation in the rat brain at multiple observation points (which are difficult to measure simultaneously) and in explaining the physiological changes in the local cortex region. PMID:28358345
Determination of the coronal magnetic field from vector magnetograph data
NASA Technical Reports Server (NTRS)
Mikic, Zoran
1991-01-01
A new algorithm was developed, tested, and applied to determine coronal magnetic fields above solar active regions. The coronal field above NOAA active region AR5747 was successfully estimated on 20 Oct. 1989 from data taken at the Mees Solar Observatory of the Univ. of Hawaii. It was shown that observational data can be used to obtain realistic estimates of coronal magnetic fields. The model has significantly extended the realism with which the coronal magnetic field can be inferred from observations. The understanding of coronal phenomena will be greatly advanced by a reliable technique, such as the one presented, for deducing the detailed spatial structure of the coronal field. The payoff from major current and proposed NASA observational efforts is heavily dependent on the success with which the coronal field can be inferred from vector magnetograms. In particular, the present inability to reliably obtain the coronal field has been a major obstacle to the theoretical advancement of solar flare theory and prediction. The results have shown that the evolutionary algorithm can be used to estimate coronal magnetic fields.
Challenges of DNA-based mark-recapture studies of American black bears
Settlage, K.E.; Van Manen, F.T.; Clark, J.D.; King, T.L.
2008-01-01
We explored whether genetic sampling would be feasible to provide a region-wide population estimate for American black bears (Ursus americanus) in the southern Appalachians, USA. Specifically, we determined whether adequate capture probabilities (p > 0.20) and population estimates with a low coefficient of variation (CV < 20%) could be achieved given typical agency budget and personnel constraints. We extracted DNA from hair collected from baited barbed-wire enclosures sampled over a 10-week period on 2 study areas: a high-density black bear population in a portion of Great Smoky Mountains National Park and a lower density population on National Forest lands in North Carolina, South Carolina, and Georgia. We identified individual bears by their unique genotypes obtained from 9 microsatellite loci. We sampled 129 and 60 different bears in the National Park and National Forest study areas, respectively, and applied closed mark–recapture models to estimate population abundance. Capture probabilities and precision of the population estimates were acceptable only for sampling scenarios for which we pooled weekly sampling periods. We detected capture heterogeneity biases, probably because of inadequate spatial coverage by the hair-trapping grid. The logistical challenges of establishing and checking a sufficiently high density of hair traps make DNA-based estimates of black bears impractical for the southern Appalachian region. Alternatives are to estimate population size for smaller areas, estimate population growth rates or survival using mark–recapture methods, or use independent marking and recapturing techniques to reduce capture heterogeneity.
Estimating the global incidence of traumatic spinal cord injury.
Fitzharris, M; Cripps, R A; Lee, B B
2014-02-01
Study design: Population modelling (forecasting). Objective: To estimate the global incidence of traumatic spinal cord injury (TSCI). Setting: An initiative of the International Spinal Cord Society (ISCoS) Prevention Committee. Methods: Regression techniques were used to derive regional and global estimates of TSCI incidence. Using the findings of 31 published studies, a regression model was fitted using the known number of TSCI cases as the dependent variable and the population at risk as the single independent variable. In the process of deriving TSCI incidence, an alternative TSCI model was specified in an attempt to arrive at an optimal way of estimating the global incidence of TSCI. Results: The global incidence of TSCI was estimated to be 23 cases per 1,000,000 persons in 2007 (179,312 cases per annum); results by World Health Organization region are provided. Conclusions: Understanding the incidence of TSCI is important for health service planning and for determining injury prevention priorities. In the absence of high-quality epidemiological studies of TSCI in each country, estimates obtained through population modelling can be used to overcome known deficits in global spinal cord injury (SCI) data. The incidence of TSCI is context specific, and an alternative regression model demonstrated how TSCI incidence estimates could be improved with additional data. The results highlight the need for data standardisation and comprehensive reporting of national-level TSCI data. A stepwise approach from the collation of conventional epidemiological data through to population modelling is suggested.
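The kind of regression described, observed case counts regressed on population at risk, with the slope rescaled to an incidence per 1,000,000 persons, can be sketched as follows. The through-the-origin specification and the synthetic numbers are illustrative assumptions, not the paper's fitted model or data.

```python
import numpy as np

def incidence_per_million(cases, population):
    """Fit cases = beta * population (least squares through the origin)
    and report beta rescaled to cases per 1,000,000 persons."""
    cases = np.asarray(cases, dtype=float)
    population = np.asarray(population, dtype=float)
    beta = (population @ cases) / (population @ population)  # LS slope
    return beta * 1_000_000

# Synthetic study-level data: (population at risk, observed TSCI cases).
pops = [1e6, 2e6, 4e6]
counts = [23.0, 46.0, 92.0]
rate = incidence_per_million(counts, pops)
```

Fitting the slope on pooled study data and then multiplying by a region's population is what allows incidence to be projected for countries with no primary epidemiological studies.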
Dubsky, Stephen; Hooper, Stuart B.; Siu, Karen K. W.; Fouras, Andreas
2012-01-01
During breathing, lung inflation is a dynamic process involving a balance of mechanical factors, including trans-pulmonary pressure gradients, tissue compliance and airway resistance. Current techniques lack the capacity for dynamic measurement of ventilation in vivo at sufficient spatial and temporal resolution to allow the spatio-temporal patterns of ventilation to be precisely defined. As a result, little is known of the regional dynamics of lung inflation, in either health or disease. Using fast synchrotron-based imaging (up to 60 frames s−1), we have combined dynamic computed tomography (CT) with cross-correlation velocimetry to measure regional time constants and expansion within the mammalian lung in vivo. Additionally, our new technique provides estimation of the airflow distribution throughout the bronchial tree during the ventilation cycle. Measurements of lung expansion and airflow in mice and rabbit pups are shown to agree with independent measures. The ability to measure lung function at a regional level will provide invaluable information for studies into normal and pathological lung dynamics, and may provide new pathways for diagnosis of regional lung diseases. Although proof-of-concept data were acquired on a synchrotron, the methodology developed potentially lends itself to clinical CT scanning and therefore offers translational research opportunities. PMID:22491972
Ground Motion Prediction Model Using Artificial Neural Network
NASA Astrophysics Data System (ADS)
Dhanya, J.; Raghukanth, S. T. G.
2018-03-01
This article focuses on developing a ground motion prediction equation based on an artificial neural network (ANN) technique for shallow crustal earthquakes. A hybrid technique combining a genetic algorithm and the Levenberg-Marquardt technique is used for training the model. The present model is developed to predict peak ground velocity and 5% damped spectral acceleration. The input parameters for the prediction are moment magnitude (Mw), closest distance to the rupture plane (Rrup), shear wave velocity in the region (Vs30), and focal mechanism (F). A total of 13,552 ground motion records from 288 earthquakes provided by the updated NGA-West2 database released by the Pacific Earthquake Engineering Research Center are utilized to develop the model. The ANN architecture considered for the model consists of 192 unknowns, including the weights and biases of all the interconnected nodes. The performance of the model is observed to be within the prescribed error limits. In addition, the results from the study are found to be comparable with existing relations in the global database. The developed model is further demonstrated by estimating site-specific response spectra for Shimla city, located in the Himalayan region.
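The forward pass of such a model can be sketched as a one-hidden-layer network mapping the scaled inputs (Mw, Rrup, Vs30, F) to a ground-motion measure. The layer sizes, activation, and weights below are placeholders for illustration, not the paper's 192-parameter architecture or its genetic-algorithm/Levenberg-Marquardt training.

```python
import numpy as np

def gmpe_forward(x, W1, b1, W2, b2):
    """One-hidden-layer feedforward ANN for ground motion prediction:
    x = [Mw, Rrup, Vs30, F] (assumed already scaled) -> tanh hidden
    layer -> predicted ground-motion measure (e.g., ln PGV)."""
    h = np.tanh(W1 @ x + b1)   # hidden-layer activations
    return float(W2 @ h + b2)  # linear output node

# Illustrative (untrained) weights: 4 inputs -> 3 hidden units -> 1 output.
W1 = np.zeros((3, 4)); b1 = np.zeros(3)
W2 = np.ones(3); b2 = 2.0
x = np.array([6.5, 20.0, 760.0, 0.0])
y = gmpe_forward(x, W1, b1, W2, b2)
```

In training, the total count of entries in W1, b1, W2, and b2 is what the abstract calls the model's "unknowns"; here that is 3*4 + 3 + 3 + 1 = 19, versus 192 in the paper's larger architecture.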
Remote Monitoring of Groundwater Overdraft Using GRACE and InSAR
NASA Astrophysics Data System (ADS)
Scher, C.; Saah, D.
2017-12-01
Gravity Recovery and Climate Experiment (GRACE) data paired with radar-derived analyses of volumetric changes in aquifer storage capacity present a viable technique for remote monitoring of aquifer depletion. Interferometric Synthetic Aperture Radar (InSAR) analyses of ground level subsidence can account for a significant portion of mass loss observed in GRACE data and provide information on point-sources of overdraft. This study summed one water-year of GRACE monthly mass change grids and delineated regions with negative water storage anomalies for further InSAR analyses. Magnitude of water-storage anomalies observed by GRACE were compared to InSAR-derived minimum volumetric changes in aquifer storage capacity as a result of measurable compaction at the surface. Four major aquifers were selected within regions where GRACE observed a net decrease in water storage (Central Valley, California; Mekong Delta, Vietnam; West Bank, occupied Palestinian Territory; and the Indus Basin, South Asia). Interferogram imagery of the extent and magnitude of subsidence within study regions provided estimates for net minimum volume of groundwater extracted between image acquisitions. These volumetric estimates were compared to GRACE mass change grids to resolve a percent contribution of mass change observed by GRACE likely due to groundwater overdraft. Interferograms revealed characteristic cones of depression within regions of net mass loss observed by GRACE, suggesting point-source locations of groundwater overdraft and demonstrating forensic potential for the use of InSAR and GRACE data in remote monitoring of aquifer depletion. Paired GRACE and InSAR analyses offer a technique to increase the spatial and temporal resolution of remote applications for monitoring groundwater overdraft in addition to providing a novel parameter - measurable vertical deformation at the surface - to global groundwater models.
NASA Technical Reports Server (NTRS)
Matney, M.; Barker, E.; Seitzer, P.; Abercromby, K. J.; Rodriquez, H. M.
2006-01-01
NASA's Orbital Debris measurements program has a goal to characterize the small debris environment in the geosynchronous Earth-orbit (GEO) region using optical telescopes ("small" refers to objects too small to catalog and track with current systems). Traditionally, observations of GEO and near-GEO objects involve following the object with the telescope long enough to obtain an orbit suitable for tracking purposes. Telescopes operating in survey mode, however, randomly observe objects that pass through their field of view. Typically, these short-arc observations are inadequate for obtaining detailed orbits, but they can be used to estimate approximate circular orbit elements (semimajor axis, inclination, and ascending node). From this information, it should be possible to make statistical inferences about the orbital distributions of the GEO population bright enough to be observed by the system. The Michigan Orbital Debris Survey Telescope (MODEST) has been making such statistical surveys of the GEO region for four years. During that time, the telescope has made enough observations in enough areas of the GEO belt to have had nearly complete coverage. That means that almost all objects in all possible orbits in the GEO and near-GEO region had a non-zero chance of being observed. Some regions (such as those near zero inclination) have had good coverage, while others are poorly covered. Nevertheless, it is possible to remove these statistical biases and reconstruct the orbit populations within the limits of sampling error. In this paper, these statistical techniques and assumptions are described, and the techniques are applied to the current MODEST data set to arrive at our best estimate of the GEO orbit population distribution.
NASA Astrophysics Data System (ADS)
Wong, Man Sing; Nichol, Janet E.; Lee, Kwon Ho
2011-03-01
Aerosol retrieval algorithms for the MODerate Resolution Imaging Spectroradiometer (MODIS) have been developed to estimate aerosol and microphysical properties of the atmosphere, which help to address aerosol climatic issues at global scale. However, higher spatial resolution aerosol products for urban areas have not been well-researched mainly due to the difficulty of differentiating aerosols from bright surfaces in urban areas. Here, an aerosol retrieval algorithm using the MODIS 500-m resolution bands is described, to retrieve aerosol properties over Hong Kong and the Pearl River Delta region. The rationale of our technique is to first estimate the aerosol reflectances by decomposing the top-of-atmosphere reflectances from surface reflectances and Rayleigh path reflectances. For the determination of surface reflectances, a Minimum Reflectance Technique (MRT) is used, and MRT images are computed for different seasons. For conversion of aerosol reflectance to aerosol optical thickness (AOT), comprehensive Look Up Tables specific to the local region are constructed, which consider aerosol properties and sun-viewing geometry in the radiative transfer calculations. Four local aerosol types, namely coastal urban, polluted urban, dust, and heavy pollution, were derived using cluster analysis on 3 years of AERONET measurements in Hong Kong. The resulting 500 m AOT images were found to be highly correlated with ground measurements from the AERONET (r2 = 0.767) and Microtops II sunphotometers (r2 = 0.760) in Hong Kong. This study further demonstrates the application of the fine resolution AOT images for monitoring inter-urban and intra-urban aerosol distributions and the influence of trans-boundary flows. These applications include characterization of spatial patterns of AOT within the city, and detection of regional biomass burning sources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lauvaux, Thomas
Natural gas (NG) production activities in the northeastern Marcellus shale have increased significantly in the last decade, possibly releasing large amounts of methane (CH4) into the atmosphere from operations at the production sites and during the processing and transmission steps of the natural gas chain. Based on an intensive aircraft survey, leakage rates from NG production were quantified in May 2015 and found to be on the order of 0.5% of the total production, higher than reported by the Environmental Protection Agency (EPA) but below the leakage rates usually observed over shale gas plays in the US. Thanks to the high average production rates at each well, leakage rates normalized by production appeared to be low in the northeastern Marcellus shale. This result confirms that natural gas production using unconventional techniques in this region emits relatively less CH4 into the atmosphere than other shale reservoirs. The low emissions rate can be explained in part by the high productivity of wells drilled across the northeastern Marcellus region. We demonstrated here that atmospheric monitoring techniques can provide an independent quantification of NG leakage rates using aircraft measurements. The CH4 analyzers were successfully calibrated at four sites across the region, continuously measuring atmospheric CH4 mixing ratios and isotopic 13CH4. Our preliminary findings from tower data collected from September 2015 to November 2016 confirm the low leakage rates compared to the aircraft mass-balance estimates from May 2015. However, several episodes revealing large releases of natural gas over several weeks showed that temporal variations in CH4 emissions may increase the actual leakage rate over longer time periods.
ERIC Educational Resources Information Center
Cruz, Luiz M.; Moreira, Marcelo J.
2005-01-01
The authors evaluate Angrist and Krueger (1991) and Bound, Jaeger, and Baker (1995) by constructing reliable confidence regions around the 2SLS and LIML estimators for returns-to-schooling regardless of the quality of the instruments. The results indicate that the returns-to-schooling were between 8 and 25 percent in 1970 and between 4 and 14…
Wayne D. Shepperd
2007-01-01
One of the difficulties of apportioning growing stock across diameter classes in multi- or uneven-aged forests is estimating how closely the target stocking value compares to the maximum stocking that could occur in a particular forest type and eco-region. Although the BDQ method had been used to develop uneven-aged prescriptions, it is not inherently related to any...
Hans-Erik Andersen; Strunk Jacob; Hailemariam Temesgen; Donald Atwood; Ken Winterberger
2012-01-01
The emergence of a new generation of remote sensing and geopositioning technologies, as well as increased capabilities in image processing, computing, and inferential techniques, have enabled the development and implementation of increasingly efficient and cost-effective multilevel sampling designs for forest inventory. In this paper, we (i) describe the conceptual...
NASA Astrophysics Data System (ADS)
Bostater, Charles R.; Oney, Taylor S.
2017-10-01
Hyperspectral images of coastal waters in urbanized regions were collected from fixed platform locations. Surf zone imagery and images of shallow bays, lagoons, and coastal waters are processed to produce bidirectional reflectance factor (BRF) signatures corrected for changing viewing angles. Angular changes as a function of pixel location within a scene are used to estimate changes in pixel size and ground sampling areas. Diffuse calibration targets imaged simultaneously within the scene provide the information necessary for calculating BRF signatures of the water surface and shorelines. Automated scanning using a pushbroom hyperspectral sensor allows imagery to be collected in one minute or less for different regions of interest. Imagery is then rectified and georeferenced using ground control points within nadir-viewing multispectral imagery via image-to-image registration techniques. This paper demonstrates the above and presents how spectra can be extracted along different directions in the imagery. The extraction of BRF spectra along track lines allows the application of derivative reflectance spectroscopy for estimating chlorophyll-a, dissolved organic matter, and suspended matter concentrations at or near the water surface. Imagery is presented demonstrating the techniques used to identify subsurface features and targets within the littoral and surf zones.
A Multistage Approach for Image Registration.
Bowen, Francis; Hu, Jianghai; Du, Eliza Yingzi
2016-09-01
Successful image registration is an important step for object recognition, target detection, remote sensing, multimodal content fusion, scene blending, and disaster assessment and management. The geometric and photometric variations between images adversely affect an algorithm's ability to estimate the transformation parameters that relate the two images. Local deformations, lighting conditions, object obstructions, and perspective differences all contribute to the challenges faced by traditional registration techniques. In this paper, a novel multistage registration approach is proposed that is resilient to viewpoint differences, image content variations, and lighting conditions. Robust registration is realized through a novel region descriptor that couples the spatial and texture characteristics of invariant feature points. The proposed region descriptor is exploited in a multistage approach, which allows the graph-based descriptor to be used in many scenarios and thus lets the algorithm be applied to a broader set of images. Each successive stage of the registration technique is evaluated through an effective similarity metric, which determines the subsequent action. The registration of aerial and street-view images from before and after disasters provides strong evidence that the proposed method estimates more accurate global transformation parameters than traditional feature-based methods. Experimental results show the robustness and accuracy of the proposed multistage image registration methodology.
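As an illustration of the stage-gating idea, one common similarity metric for deciding whether a registration stage has succeeded is normalized cross-correlation; the paper's actual metric is not specified here, so this is an assumed stand-in.

```python
import numpy as np

def normalized_cross_correlation(a, b):
    """NCC between two equally sized images (or patches): z-score both
    arrays and average their elementwise product. Ranges over [-1, 1];
    1 means a perfect linear match, so a stage can be accepted when
    NCC exceeds a threshold and retried (or escalated) otherwise."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())
```

A multistage pipeline would warp the moving image with each stage's estimated transform, score it against the fixed image with a metric like this, and only proceed to finer-grained stages while the score keeps improving.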
Segmentation of cortical bone using fast level sets
NASA Astrophysics Data System (ADS)
Chowdhury, Manish; Jörgens, Daniel; Wang, Chunliang; Smedby, Örjan; Moreno, Rodrigo
2017-02-01
Cortical bone plays a major role in the mechanical competence of bone, and its analysis requires accurate segmentation methods. Level set methods are among the state of the art for segmenting medical images. However, traditional implementations of this method are computationally expensive. This drawback was recently tackled through the so-called coherent propagation extension of the classical algorithm, which has decreased computation times dramatically. In this study, we assess the potential of this technique for segmenting cortical bone in interactive time in 3D images acquired through High Resolution peripheral Quantitative Computed Tomography (HR-pQCT). The obtained segmentations are used to estimate the cortical thickness and cortical porosity of the investigated images; cortical thickness and cortical porosity are computed using sphere fitting and mathematical morphological operations, respectively. Qualitative comparison between the segmentations of our proposed algorithm and a previously published approach on six image volumes reveals superior smoothness properties of the level set approach. While the proposed method yields results similar to previous approaches in regions where the boundary between trabecular and cortical bone is well defined, it yields more stable segmentations in challenging regions, which results in more stable estimation of cortical bone parameters. The proposed technique takes a few seconds to compute, which makes it suitable for clinical settings.
Drought in the Horn of Africa: attribution of a damaging and repeating extreme event
NASA Astrophysics Data System (ADS)
Marthews, Toby; Otto, Friederike; Mitchell, Daniel; Dadson, Simon; Jones, Richard
2015-04-01
We have applied detection and attribution techniques to the severe drought that hit the Horn of Africa in 2014. The short rains failed in late 2013 in Kenya, South Sudan, Somalia and southern Ethiopia, leading to a very dry growing season from January to March 2014, and subsequently to the current drought in many agricultural areas of the sub-region. We have made use of the weather@home project, which uses publicly volunteered distributed computing to provide a large ensemble of simulations sufficient to sample regional climate uncertainty. Based on this, we have estimated the occurrence rates of the kinds of rare and extreme events implicated in this large-scale drought. From land surface model runs based on these ensemble simulations, we have estimated the impacts of climate anomalies during this period, and therefore we can reliably identify some factors of the ongoing drought as attributable to human-induced climate change. The UNFCCC's Adaptation Fund is attempting to support projects that bring about an adaptation to "the adverse effects of climate change", but in order to formulate such projects we need a much clearer way to assess how much of an event is human-induced and how much is a consequence of natural climate anomalies and large-scale teleconnections, which can only be provided by robust attribution techniques.
Use of (137)Cs technique for soil erosion study in the agricultural region of Casablanca in Morocco.
Nouira, A; Sayouty, E H; Benmansour, M
2003-01-01
Accelerated erosion and soil degradation currently cause serious problems in the Oued El Maleh basin (Morocco). Furthermore, there is still only limited information on rates of soil loss for optimising soil conservation strategies. In the present study we used the (137)Cs technique to assess soil erosion rates on agricultural land in the Oued El Maleh basin near Casablanca (Morocco). A small representative agricultural field was selected to investigate the soil degradation information required by soil managers in this region. The transect approach was applied for sampling to identify the spatial redistribution of (137)Cs. The spatial variability of the (137)Cs inventory has provided evidence of the importance of the tillage process and of human effects on the redistribution of (137)Cs. The mean (137)Cs inventory was found to be about 842 Bq m(-2); in a preliminary estimation with a simplified mass balance model, this value corresponds to an erosion rate of 82 t ha(-1) yr(-1). When data on site characteristics were available, the refined mass balance model was applied to highlight the contribution of the tillage effect to soil redistribution, and the erosion rate was estimated at about 50 t ha(-1) yr(-1). Aspects related to the sampling procedures and to the models used for calculating erosion rates are discussed.
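The abstract's preliminary estimate pairs a measured (137)Cs inventory with a simplified (proportional) mass balance model. As a rough illustration of that conversion, the sketch below implements the standard proportional model; the reference inventory, bulk density, plough depth, and elapsed time are hypothetical values chosen only to show the arithmetic, not the study's actual site parameters.

```python
import numpy as np

def erosion_rate_proportional(inv_site, inv_ref, bulk_density_kg_m3,
                              plough_depth_m, years_since_fallout):
    """Proportional model: erosion rate (t ha^-1 yr^-1) from 137Cs inventory loss.

    Assumes the 137Cs lost from a sampling point is proportional to the
    soil lost from the plough layer since fallout deposition.
    """
    x_percent = 100.0 * (inv_ref - inv_site) / inv_ref      # inventory reduction (%)
    mass_depth = bulk_density_kg_m3 * plough_depth_m        # plough-layer mass depth (kg m^-2)
    # 1 kg m^-2 spread over a year = 10 t ha^-1 yr^-1 when divided by T years
    return 10.0 * mass_depth * x_percent / (100.0 * years_since_fallout)

# hypothetical transect point: reference inventory 2400 Bq m^-2, measured
# 842 Bq m^-2, bulk density 1300 kg m^-3, 25 cm tillage depth, 40 years
rate = erosion_rate_proportional(842.0, 2400.0, 1300.0, 0.25, 40.0)
```

With these invented inputs the model returns roughly 53 t ha(-1) yr(-1), the same order of magnitude as the study's refined estimate; the actual result depends entirely on the site's reference inventory and tillage parameters.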
Model Evaluation To Measuring Efficiencies of ICT Development In Indonesia Region Using DEA
NASA Astrophysics Data System (ADS)
Efendi, Syahril; Fadly Syahputra, M.; Anggia Muchtar, M.
2018-01-01
ICT Pura, or digital city, is a program designed by the Indonesian government whose main objective is to determine the level of readiness of each district and city in each province in the era of the digital economy. It is necessary to evaluate whether a city or region has managed ICT better than other cities and contributes significantly to its communities and living systems. Data envelopment analysis (DEA) is a well-known technique to estimate efficiency and returns to scale through the construction of a best-practice frontier, based on a non-parametric mathematical programming approach. This paper applies the DEA BCC method to obtain efficiency indices for all regions in Indonesia covered by ICT Pura. Numerical results are given.
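As a sketch of the kind of calculation involved, the following implements the input-oriented BCC (variable returns to scale) DEA model as a linear program with SciPy; the three decision-making units and their single input/output are made-up numbers, not ICT Pura data.

```python
import numpy as np
from scipy.optimize import linprog

def bcc_input_efficiency(X, Y, o):
    """Input-oriented BCC efficiency of DMU o.

    X: (n, m) inputs, Y: (n, s) outputs for n decision-making units.
    Solves: min theta  s.t.  sum_j lam_j X_j <= theta X_o,
            sum_j lam_j Y_j >= Y_o,  sum_j lam_j = 1,  lam >= 0.
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                       # minimise theta
    A_in = np.c_[-X[o].reshape(m, 1), X.T]            # -theta*X_o + X^T lam <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y.T]             # -Y^T lam <= -Y_o
    A_ub = np.r_[A_in, A_out]
    b_ub = np.r_[np.zeros(m), -Y[o]]
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)      # convexity: sum lam = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

# hypothetical DMUs: one input, one output each
X = np.array([[2.0], [4.0], [4.0]])
Y = np.array([[2.0], [4.0], [2.0]])
effs = [bcc_input_efficiency(X, Y, o) for o in range(3)]
```

Under variable returns to scale the first two units lie on the best-practice frontier (efficiency 1), while the third could radially contract its input to 50% and still produce its output.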
Computation of entropy and Lyapunov exponent by a shift transform.
Matsuoka, Chihiro; Hiraide, Koichi
2015-10-01
We present a novel computational method to estimate the topological entropy and Lyapunov exponent of nonlinear maps using a shift transform. Unlike the computation of periodic orbits or the symbolic dynamical approach by the Markov partition, the method presented here does not require any special techniques in computational and mathematical fields to calculate these quantities. In spite of its simplicity, our method can accurately capture not only the chaotic region but also the non-chaotic region (window region), which is physically important but has (Lebesgue) measure zero and is usually hard to calculate or observe. Furthermore, it is shown that the Kolmogorov-Sinai entropy of the Sinai-Ruelle-Bowen measure (the physical measure) coincides with the topological entropy.
Computation of entropy and Lyapunov exponent by a shift transform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsuoka, Chihiro, E-mail: matsuoka.chihiro.mm@ehime-u.ac.jp; Hiraide, Koichi
2015-10-15
We present a novel computational method to estimate the topological entropy and Lyapunov exponent of nonlinear maps using a shift transform. Unlike the computation of periodic orbits or the symbolic dynamical approach by the Markov partition, the method presented here does not require any special techniques in computational and mathematical fields to calculate these quantities. In spite of its simplicity, our method can accurately capture not only the chaotic region but also the non-chaotic region (window region), which is physically important but has (Lebesgue) measure zero and is usually hard to calculate or observe. Furthermore, it is shown that the Kolmogorov-Sinai entropy of the Sinai-Ruelle-Bowen measure (the physical measure) coincides with the topological entropy.
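The shift-transform method itself is specific to the paper, but the quantity it targets can be illustrated with the textbook estimator: for a one-dimensional map, the Lyapunov exponent is the orbit average of log|f'(x)|. The sketch below applies this standard derivative-based estimator (not the paper's shift transform) to the logistic map at r = 4, where the exact value is ln 2.

```python
import numpy as np

def lyapunov_logistic(r=4.0, x0=0.2, n_transient=1000, n=200_000):
    """Largest Lyapunov exponent of x -> r x (1 - x) via the time
    average of log|f'(x)| along the orbit, with f'(x) = r - 2 r x."""
    x = x0
    for _ in range(n_transient):          # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += np.log(abs(r - 2.0 * r * x))
        x = r * x * (1.0 - x)
    return acc / n

lam = lyapunov_logistic()                 # analytic value at r = 4 is ln 2
```

A positive exponent indicates the chaotic region; in the periodic windows the same average turns negative, though, as the abstract notes, those measure-zero parameter sets are hard to probe numerically.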
NASA Astrophysics Data System (ADS)
Knoblauch, Kenneth; McMahon, Matthew J.
1995-10-01
We tested the Maxwell-Cornsweet conjecture that differential spectral filtering of the two eyes can increase the dimensionality of a dichromat's color vision. Sex-linked dichromats wore filters that differentially passed long- and middle-wavelength regions of the spectrum to each eye. Monocularly, temporal modulation thresholds (1.5 Hz) for color mixtures from the Rayleigh region of the spectrum were accounted for by a single, univariant mechanism. Binocularly, univariance was rejected because, as in monocular viewing by trichromats, in no color direction could silent substitution of the color mixtures be obtained. Despite the filter-aided increase in dimension, estimated wavelength discrimination was quite poor in this spectral region, suggesting a limit to the effectiveness of this technique.
NASA Astrophysics Data System (ADS)
Ben-Zikri, Yehuda Kfir; Linte, Cristian A.
2016-03-01
Region of interest detection is a precursor to many medical image processing and analysis applications, including segmentation, registration and other image manipulation techniques. The optimal region of interest is often selected manually, based on empirical knowledge and features of the image dataset. However, if inconsistently identified, the selected region of interest may greatly affect the subsequent image analysis or interpretation steps, in turn leading to incomplete assessment during computer-aided diagnosis or incomplete visualization or identification of the surgical targets, if employed in the context of pre-procedural planning or image-guided interventions. Therefore, the need for robust, accurate and computationally efficient region of interest localization techniques is prevalent in many modern computer-assisted diagnosis and therapy applications. Here we propose a fully automated, robust, a priori learning-based approach that provides reliable estimates of the left and right ventricle features from cine cardiac MR images. The proposed approach leverages the temporal frame-to-frame motion extracted across a range of short-axis left ventricle slice images, with a small training set generated from less than 10% of the population. The approach first uses histogram of oriented gradients features, weighted by local intensities, to identify an initial region of interest depicting the left and right ventricles that exhibits the greatest extent of cardiac motion. This region is then correlated, using feature vector correlation techniques, with the homologous region of the training dataset that best matches the test image. Lastly, the optimal left ventricle region of interest of the test image is identified based on the correlation of known ground truth segmentations associated with the training dataset deemed closest to the test image. 
The proposed approach was tested on a population of 100 patient datasets and was validated against the ground truth regions of interest of the test images manually annotated by experts. The tool successfully identified a mask around the LV and RV and, furthermore, the minimal region of interest around the LV that fully enclosed the left ventricle in all testing datasets, yielding a 98% overlap with the corresponding ground truth. The mean absolute distance error between the two contours, normalized by the radius of the ground truth, was 0.20 +/- 0.09.
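The authors' pipeline is more involved, but its core matching step, comparing intensity-weighted histogram-of-oriented-gradients descriptors by feature-vector correlation, can be sketched in a few lines. The descriptor and images below are simplified stand-ins: a single global orientation histogram rather than a full block-wise HOG, and synthetic intensity ramps rather than cine MR slices.

```python
import numpy as np

def orientation_histogram(img, nbins=9):
    """Histogram of gradient orientations, weighted by gradient magnitude
    and local intensity (a simplified HOG-style descriptor)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)       # unsigned orientation
    hist, _ = np.histogram(ang, bins=nbins, range=(0.0, np.pi),
                           weights=mag * img)
    return hist / (hist.sum() + 1e-12)

def best_match(query, training_images):
    """Index of the training image whose descriptor correlates best
    with the query descriptor."""
    q = orientation_histogram(query)
    scores = [np.corrcoef(q, orientation_histogram(t))[0, 1]
              for t in training_images]
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
xs = np.linspace(0.1, 1.0, 64)
horiz = np.tile(xs, (64, 1))                      # horizontal intensity ramp
vert = horiz.T                                    # vertical intensity ramp
query = horiz + 0.005 * rng.standard_normal((64, 64))
idx = best_match(query, [vert, horiz])            # expect the horizontal ramp
```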
Developing an objective evaluation method to estimate diabetes risk in community-based settings.
Kenya, Sonjia; He, Qing; Fullilove, Robert; Kotler, Donald P
2011-05-01
Exercise interventions often aim to affect abdominal obesity and glucose tolerance, two significant risk factors for type 2 diabetes. Because of limited financial and clinical resources in community and university-based environments, intervention effects are often measured with interviews or questionnaires and correlated with weight loss or body fat indicated by bioimpedance analysis (BIA). However, self-reported assessments are subject to high levels of bias and low levels of reliability. Because obesity and body fat are correlated with diabetes at different levels in various ethnic groups, data reflecting changes in weight or fat do not necessarily indicate changes in diabetes risk. To determine how exercise interventions affect diabetes risk in community and university-based settings, improved evaluation methods are warranted. We compared a noninvasive, objective measurement technique--regional BIA--with whole-body BIA for its ability to assess abdominal obesity and predict glucose tolerance in 39 women. To determine regional BIA's utility in predicting glucose, we tested the association between the regional BIA method and blood glucose levels. Regional BIA estimates of abdominal fat area were significantly correlated (r = 0.554, P < 0.003) with fasting glucose. When waist circumference and family history of diabetes were added to abdominal fat in multiple regression models, the association with glucose increased further (r = 0.701, P < 0.001). Regional BIA estimates of abdominal fat may predict fasting glucose better than whole-body BIA as well as provide an objective assessment of changes in diabetes risk achieved through physical activity interventions in community settings.
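The reported analysis, abdominal fat from regional BIA plus waist circumference and family history entered into a multiple regression against fasting glucose, can be sketched with ordinary least squares. The data below are synthetic (n = 39 to mirror the study size) with made-up coefficients; only the modeling recipe follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 39
# hypothetical predictors: abdominal fat area (cm^2), waist (cm),
# family history of diabetes (0/1)
fat = rng.normal(120, 30, n)
waist = 0.3 * fat + rng.normal(55, 4, n)
family = rng.integers(0, 2, n).astype(float)
glucose = 70 + 0.15 * fat + 0.10 * waist + 6 * family + rng.normal(0, 3, n)

# multiple linear regression via least squares
X = np.column_stack([np.ones(n), fat, waist, family])
beta, *_ = np.linalg.lstsq(X, glucose, rcond=None)
pred = X @ beta
r = np.corrcoef(pred, glucose)[0, 1]    # multiple correlation coefficient
```

The multiple correlation r between fitted and observed glucose is the quantity the abstract reports rising from 0.554 (fat alone) to 0.701 (full model); with synthetic data the exact value depends on the invented noise level.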
Subpixel urban impervious surface mapping: the impact of input Landsat images
NASA Astrophysics Data System (ADS)
Deng, Chengbin; Li, Chaojun; Zhu, Zhe; Lin, Weiying; Xi, Li
2017-11-01
Due to the heterogeneity of urban environments, subpixel urban impervious surface mapping is a challenging task in urban environmental studies. Factors such as atmospheric correction, climate conditions, seasonal effects, and urban settings substantially affect fractional impervious surface estimation. Their impacts, however, have not been well studied and documented. In this research, we performed direct and comprehensive examinations to explore the impacts of these factors on subpixel estimation when using an effective machine learning technique (Random Forest) and provided solutions to alleviate these influences. Four conclusions can be drawn based on repeatable experiments in three study areas under different climate conditions (humid continental, tropical monsoon, and Mediterranean). First, the performance of subpixel urban impervious surface mapping using top-of-atmosphere (TOA) reflectance imagery is comparable to, and even slightly better than, that using the surface reflectance imagery provided by the U.S. Geological Survey, in all seasons and in all testing regions. Second, the effect of leaf-on/leaf-off season imagery varies and is contingent upon the climate region: humid continental areas may prefer leaf-on imagery (e.g., summer), while the tropical monsoon and Mediterranean regions seem to favor fall and winter imagery. Third, the overall estimation performance in the humid continental area is somewhat better than in the other regions. Finally, improvements can be achieved by using multi-season imagery, but the increments become less obvious when including more than two seasons. The strategy and results of this research could improve and accommodate regional/national subpixel land cover mapping using Landsat images for large-scale environmental studies.
NASA Astrophysics Data System (ADS)
Mathur, R.; Kang, D.; Napelenok, S. L.; Xing, J.; Hogrefe, C.
2017-12-01
Air pollution reduction strategies for a region are complicated not only by the interplay of local emission sources and several complex physical, chemical, and dynamical processes in the atmosphere, but also by hemispheric background levels of pollutants. Contrasting changes in emission patterns across the globe (e.g., declining emissions in North America and Western Europe in response to implementation of control measures and increasing emissions across Asia due to economic and population growth) are resulting in heterogeneous changes in tropospheric chemical composition and are likely altering long-range transport impacts and consequently background pollution levels at receptor regions. To quantify these impacts, the WRF-CMAQ model is expanded to hemispheric scales and multi-decadal model simulations are performed for the period spanning 1990-2010 to examine changes in hemispheric air pollution resulting from changes in emissions over this period. Simulated trends in ozone and precursor species concentrations across the U.S. and the Northern Hemisphere over the past two decades are compared with those inferred from available measurements during this period. Additionally, the decoupled direct method (DDM) in CMAQ, a first- and higher-order sensitivity calculation technique, is used to estimate the sensitivity of O3 to emissions from different source regions across the Northern Hemisphere. The seasonal variations in source region contributions to background O3 are then estimated from these sensitivity calculations and will be discussed. These source region sensitivities estimated from DDM are then combined with the multi-decadal simulations of O3 distributions and emissions trends to characterize the changing contributions of different source regions to background O3 levels across North America. This characterization of changing long-range transport contributions is critical for the design and implementation of tighter national air quality standards.
Respiratory rate estimation from the built-in cameras of smartphones and tablets.
Nam, Yunyoung; Lee, Jinseok; Chon, Ki H
2014-04-01
This paper presents a method for respiratory rate estimation using the camera of a smartphone, an MP3 player or a tablet. The iPhone 4S, iPad 2, iPod 5, and Galaxy S3 were used to estimate respiratory rates from the pulse signal derived from a finger placed on the camera lens of these devices. Prior to estimation of respiratory rates, we systematically investigated the optimal signal quality of these 4 devices by dividing the video camera's resolution into 12 different pixel regions. We also investigated the optimal signal quality among the red, green and blue color bands for each of these 12 pixel regions for all four devices. It was found that the green color band provided the best signal quality for all 4 devices and that the left-half VGA pixel region was the best choice only for the iPhone 4S. For the other three devices, smaller 50 × 50 pixel regions were found to provide better or equally good signal quality compared to the larger pixel regions. Using the green signal and the optimal pixel regions derived from the four devices, we then investigated the suitability of the smartphones, the iPod 5 and the tablet for respiratory rate estimation using three different computational methods: the autoregressive (AR) model, variable-frequency complex demodulation (VFCDM), and continuous wavelet transform (CWT) approaches. Specifically, these time-varying spectral techniques were used to identify the frequency and amplitude modulations, as they contain respiratory rate information. To evaluate the performance of the three computational methods and the pixel regions for the optimal signal quality, data were collected from 10 healthy subjects. It was found that the VFCDM method provided good estimates of breathing rates that were in the normal range (12-24 breaths/min). Both CWT and VFCDM methods provided reasonably good estimates for breathing rates that were higher than 26 breaths/min, but their accuracy degraded concomitantly with increased respiratory rates. 
Overall, the VFCDM method provided the best results for accuracy (smaller median error), consistency (smaller interquartile range of the median value), and computational efficiency (less than 0.5 s on 1 min of data using a MATLAB implementation) to extract breathing rates that varied from 12 to 36 breaths/min. The AR method provided the least accurate respiratory rate estimation among the three methods. This work illustrates that both heart rates and normal breathing rates can be accurately derived from a video signal obtained from smartphones, an MP3 player and tablets with or without a flashlight.
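The spectral idea behind these methods, that breathing amplitude-modulates the pulse signal so the respiratory rate appears as a low-frequency peak after demodulation, can be illustrated with a much cruder estimator than AR, VFCDM, or CWT: rectify, low-pass with a moving average, and pick the dominant FFT peak in a plausible respiratory band. The signal below is simulated (72 bpm cardiac carrier, 18 breaths/min modulation), not camera data.

```python
import numpy as np

fs = 30.0                       # camera frame rate (frames per second)
t = np.arange(0, 60, 1 / fs)    # one minute of data
# simulated pulse signal: 1.2 Hz carrier amplitude-modulated at 0.3 Hz
pulse = (1.0 + 0.5 * np.sin(2 * np.pi * 0.3 * t)) * np.sin(2 * np.pi * 1.2 * t)

# crude amplitude demodulation: rectify, then moving-average away the carrier
env = np.convolve(np.abs(pulse), np.ones(25) / 25, mode="same")
env -= env.mean()

# dominant frequency in a plausible respiratory band (0.1-0.7 Hz)
freqs = np.fft.rfftfreq(len(env), 1 / fs)
spec = np.abs(np.fft.rfft(env))
band = (freqs >= 0.1) & (freqs <= 0.7)
resp_hz = freqs[band][np.argmax(spec[band])]
breaths_per_min = 60.0 * resp_hz
```

This recovers the 18 breaths/min modulation; the paper's time-varying spectral methods exist precisely because real breathing rates drift, which a single whole-record FFT cannot track.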
NASA Technical Reports Server (NTRS)
Scott, Elaine P.
1994-01-01
Thermal stress analyses are an important aspect in the development of aerospace vehicles at NASA-LaRC. These analyses require knowledge of the temperature distributions within the vehicle structures, which consequently necessitates accurate thermal property data. The overall goal of this ongoing research effort is to develop methodologies for the estimation of the thermal property data needed to describe the temperature responses of these complex structures. The research strategy undertaken utilizes a building block approach: first focus on the development of property estimation methodologies for relatively simple conditions, such as isotropic materials at constant temperatures, and then systematically modify the technique for the analysis of more and more complex systems, such as anisotropic multi-component systems. The estimation methodology utilized is a statistically based method which incorporates experimental data and a mathematical model of the system. Several aspects of this overall research effort were investigated during the time of the ASEE summer program. One important aspect involved the calibration of the estimation procedure for the estimation of the thermal properties through the thickness of a standard material. Transient experiments were conducted using a Pyrex standard at various temperatures, and then the thermal properties (thermal conductivity and volumetric heat capacity) were estimated at each temperature. Confidence regions for the estimated values were also determined. These results were then compared to documented values. Another set of experimental tests was conducted on carbon composite samples at different temperatures. Again, the thermal properties were estimated for each temperature, and the results were compared with values obtained using another technique. In both sets of experiments, a 10-15 percent offset between the estimated values and the previously determined values was found. 
Another effort was related to the development of the experimental techniques. Initial experiments required a resistance heater placed between two samples. The design was modified such that the heater was placed on the surface of only one sample, as would be necessary in the analysis of built up structures. Experiments using the modified technique were conducted on the composite sample used previously at different temperatures. The results were within 5 percent of those found using two samples. Finally, an initial heat transfer analysis, including conduction, convection and radiation components, was completed on a titanium sandwich structural sample. Experiments utilizing this sample are currently being designed and will be used to first estimate the material's effective thermal conductivity and later to determine the properties associated with each individual heat transfer component.
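The "statistically based method which incorporates experimental data and a mathematical model" can be illustrated, far more simply than the actual transient conduction analysis, with a lumped-capacitance cooling model whose time constant is recovered by least squares from noisy synthetic temperatures; all numbers here are invented for the illustration.

```python
import numpy as np

# synthetic cooling experiment: lumped model T(t) = T_inf + (T0 - T_inf) exp(-t/tau)
tau_true, T0, T_inf = 12.0, 90.0, 20.0
t = np.linspace(0.0, 60.0, 121)
rng = np.random.default_rng(2)
T = T_inf + (T0 - T_inf) * np.exp(-t / tau_true) + rng.normal(0.0, 0.1, t.size)

# log-linear least squares on the early, well-resolved part of the decay:
#   ln(T - T_inf) = ln(T0 - T_inf) - t / tau
n_fit = 81                                     # t <= 40 s, where T - T_inf >> noise
y = np.log(T[:n_fit] - T_inf)
slope, intercept = np.polyfit(t[:n_fit], y, 1)
tau_hat = -1.0 / slope                         # estimated time constant (s)
```

The same pattern, a parametric model fit to measured transients by least squares, underlies the conductivity and heat-capacity estimates described above, where the model is the transient heat conduction equation rather than a single exponential.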
NASA Technical Reports Server (NTRS)
Holden, Peter N.; Gaffey, Michael J.; Sundararaman, P.
1991-01-01
An interpretive model for estimating porphyrin concentration in bitumen and kerogen from spectral reflectance data in the visible and near-ultraviolet region of the spectrum is derived and calibrated. Preliminary results obtained using the model are consistent with concentrations determined from the bitumen extract and suggest that 40 to 60 percent of the total porphyrin concentration remains in the kerogen after extraction of bitumen from thermally immature samples. The reflectance technique will contribute to porphyrin and kerogen studies and can be applied at its present level of development to several areas of geologic and paleo-oceanographic research.
Modeling air concentration over macro roughness conditions by Artificial Intelligence techniques
NASA Astrophysics Data System (ADS)
Roshni, T.; Pagliara, S.
2018-05-01
Aeration in rivers is improved by the turbulence created in flow over macro and intermediate roughness conditions, which are generated by flows over block ramps or rock chutes. The measurements are taken in the uniform flow region. Applications of soft computing methods to modeling such hydraulic parameters are not common so far. In this study, the modeling efficiencies of an MPMR model and an FFNN model are evaluated for estimating the air concentration over block ramps under macro roughness conditions. The experimental data are used for the training and testing phases. The potential of the MPMR and FFNN models for estimating air concentration is demonstrated through this study.
Scientific issues and potential remote-sensing requirements for plant biochemical content
NASA Technical Reports Server (NTRS)
Peterson, David L.; Hubbard, G. S.
1992-01-01
Application of developments in imaging spectrometry to the study of terrestrial ecosystems, which began in 1983, demonstrate the potential to estimate lignin and nitrogen concentrations of plant canopies by remote-sensing techniques. Estimation of these parameters from the first principles of radiative transfer and the interactions of light with plant materials is not presently possible, principally because of lack of knowledge about internal leaf scattering and specific absorption involving biochemical compounds. From the perspective of remote-sensing instrumentation, sensors are needed to support derivative imaging spectroscopy. Biochemical absorption features tend to occur in functional groupings throughout the 1100- to 2500-nm region. Derivative spectroscopy improves the information associated with the weaker, narrower absorption features of biochemical absorption that are superimposed on the strong absolute variations due to foliar biomass, pigments, and leaf water content of plant canopies. Preliminary sensor specifications call for 8-nm bandwidths at 2-nm centers in four spectral regions (about 400 bands total) and a signal-to-noise performance of at least 1000:1 for 20 percent albedo targets in the 2000-nm region.
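The value of derivative spectroscopy claimed here, suppressing broad background variation so that weak, narrow biochemical features stand out, is easy to demonstrate numerically. The spectrum below is synthetic (a broad baseline plus a weak 15-nm-wide feature near 2100 nm, all amplitudes hypothetical), sampled at the 2-nm spacing the preliminary specifications call for.

```python
import numpy as np

wl = np.arange(1100.0, 2500.0, 2.0)            # wavelength grid (nm), 2-nm sampling
# broad baseline (foliar biomass / water) plus a weak, narrow
# biochemical absorption feature near 2100 nm
baseline = 0.4 - 0.3 * np.exp(-0.5 * ((wl - 1900.0) / 300.0) ** 2)
feature = -0.03 * np.exp(-0.5 * ((wl - 2100.0) / 15.0) ** 2)
reflectance = baseline + feature

# second derivative suppresses the smooth background (curvature ~ 1/width^2)
# and sharpens the narrow feature
d1 = np.gradient(reflectance, wl)
d2 = np.gradient(d1, wl)
peak_wl = wl[np.argmax(d2)]                    # absorption dip -> positive d2 at centre
```

Because curvature scales as amplitude over width squared, the 15-nm feature dominates the second derivative even though its amplitude is a tenth of the baseline's, which is exactly why narrow bandwidths and high signal-to-noise are specified: differentiation amplifies sensor noise as well.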
Canopy reflectance modeling in a tropical wooded grassland
NASA Technical Reports Server (NTRS)
Simonett, David; Franklin, Janet
1986-01-01
Geometric/optical canopy reflectance modeling and spatial/spectral pattern recognition are used to study the form and structure of savanna in West Africa. An invertible plant canopy reflectance model is tested for its ability to estimate the amount of woody vegetation from remotely sensed data in areas of sparsely wooded grassland. Dry woodlands and wooded grasslands, commonly referred to as savannas, are important ecologically and economically in Africa, and cover approximately forty percent of the continent by some estimates. The Sahel and Sudan savannas make up the important and sensitive transition zone between the tropical forests and the arid Sahara region. The depletion of woody cover, used for fodder and fuel in these regions, has become a very severe problem for the people living there. LANDSAT Thematic Mapper (TM) data are used to stratify woodland and wooded grassland into areas of relatively homogeneous canopy cover, and then an invertible forest canopy reflectance model is applied to estimate directly the height and spacing of the trees in the stands. Because height and spacing are proportional to biomass in some cases, a successful application of the segmentation/modeling techniques will allow direct estimation of tree biomass, as well as cover density, over significant areas of these valuable and sensitive ecosystems. The model is being tested at sites in two different bioclimatic zones in Mali, West Africa; Sudanian zone crop/woodland test sites were located in the Region of Segou, Mali.
Solid Fuel Use for Household Cooking: Country and Regional Estimates for 1980–2010
Bonjour, Sophie; Adair-Rohani, Heather; Wolf, Jennyfer; Bruce, Nigel G.; Mehta, Sumi; Lahiff, Maureen; Rehfuess, Eva A.; Mishra, Vinod; Smith, Kirk R.
2013-01-01
Background: Exposure to household air pollution from cooking with solid fuels in simple stoves is a major health risk. Modeling reliable estimates of solid fuel use is needed for monitoring trends and informing policy. Objectives: In order to revise the disease burden attributed to household air pollution for the Global Burden of Disease 2010 project and for international reporting purposes, we estimated annual trends in the world population using solid fuels. Methods: We developed a multilevel model based on national survey data on primary cooking fuel. Results: The proportion of households relying mainly on solid fuels for cooking has decreased from 62% (95% CI: 58, 66%) to 41% (95% CI: 37, 44%) between 1980 and 2010. Yet because of population growth, the actual number of persons exposed has remained stable at around 2.8 billion during three decades. Solid fuel use is most prevalent in Africa and Southeast Asia where > 60% of households cook with solid fuels. In other regions, primary solid fuel use ranges from 46% in the Western Pacific, to 35% in the Eastern Mediterranean and < 20% in the Americas and Europe. Conclusion: Multilevel modeling is a suitable technique for deriving reliable solid-fuel use estimates. Worldwide, the proportion of households cooking mainly with solid fuels is decreasing. The absolute number of persons using solid fuels, however, has remained steady globally and is increasing in some regions. Surveys require enhancement to better capture the health implications of new technologies and multiple fuel use. PMID:23674502
Fission Fragment Studies by Gamma-Ray Spectrometry with the Mass Separator Lohengrin
NASA Astrophysics Data System (ADS)
Materna, T.; Amouroux, C.; Bail, A.; Bideau, A.; Chabod, S.; Faust, H.; Capellan, N.; Kessedjian, G.; Köster, U.; Letourneau, A.; Litaize, O.; Martin, F.; Mathieu, L.; Méplan, O.; Panebianco, S.; Régis, J.-M.; Rudigier, M.; Sage, C.; Serot, O.; Urban, W.
2014-09-01
A gamma spectrometric technique was implemented at the exit of the fission fragment separator of the ILL. It allows precise measurement of the isotopic yields of the most important actinides in the heavy fragment region through unambiguous identification of the nuclear charge of the fragments selected by the mass spectrometer. The status of the project and the latest results are reviewed. A spin-off of this activity is the identification of unknown nanosecond isomers in exotic nuclei through the observation of a disturbed ionic charge distribution. This technique has been improved to provide an estimation of the lifetime of the isomeric state.
Oil pollution signatures by remote sensing.
NASA Technical Reports Server (NTRS)
Catoe, C. E.; Mclean, J. T.
1972-01-01
Study of the possibility of developing an effective remote sensing system for oil pollution monitoring which would be capable of detecting oil films on water, mapping the areal extent of oil slicks, measuring slick thickness, and identifying the oil types. In the spectral regions considered (ultraviolet, visible, infrared, microwave, and radar), the signatures were sufficiently unique when compared to the background so that it was possible to detect and map oil slicks. Both microwave and radar techniques are capable of operating in adverse weather. Fluorescence techniques show promise in identifying oil types. A multispectral system will be required to detect oil, map its distribution, estimate film thickness, and characterize the oil pollutant.
Two biased estimation techniques in linear regression: Application to aircraft
NASA Technical Reports Server (NTRS)
Klein, Vladislav
1988-01-01
Several ways for detection and assessment of collinearity in measured data are discussed. Because data collinearity usually results in poor least squares estimates, two estimation techniques which can limit a damaging effect of collinearity are presented. These two techniques, the principal components regression and mixed estimation, belong to a class of biased estimation techniques. Detection and assessment of data collinearity and the two biased estimation techniques are demonstrated in two examples using flight test data from longitudinal maneuvers of an experimental aircraft. The eigensystem analysis and parameter variance decomposition appeared to be a promising tool for collinearity evaluation. The biased estimators had far better accuracy than the results from the ordinary least squares technique.
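Of the two biased estimators discussed, principal components regression is easy to sketch: project the centred predictors onto their leading principal components, regress on those scores, and map the coefficients back. The flight-data context is replaced here by two deliberately collinear synthetic predictors, which is exactly the situation where ordinary least squares becomes unstable.

```python
import numpy as np

def pcr_fit(X, y, k):
    """Principal components regression: regress y on the first k
    principal components of the centred X, then map the coefficients
    back to the original predictors.  Returns (intercept, coefficients)."""
    xm, ym = X.mean(axis=0), y.mean()
    Xc = X - xm
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :k] * s[:k]                     # component scores
    gamma, *_ = np.linalg.lstsq(scores, y - ym, rcond=None)
    beta = Vt[:k].T @ gamma                       # back to predictor space
    return ym - xm @ beta, beta

rng = np.random.default_rng(3)
n = 40
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)               # nearly collinear predictor
X = np.column_stack([x1, x2])
y = x1 + x2 + 0.05 * rng.normal(size=n)

b0, beta = pcr_fit(X, y, k=1)                     # drop the unstable direction
yhat = b0 + X @ beta
```

Discarding the minor component, whose tiny eigenvalue is what inflates the ordinary least squares variance, yields nearly equal, stable coefficients on the two collinear predictors, at the cost of a small bias: the trade the abstract describes.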
NASA Astrophysics Data System (ADS)
Haddad, Khaled; Rahman, Ataur; A Zaman, Mohammad; Shrestha, Surendra
2013-03-01
In regional hydrologic regression analysis, model selection and validation are regarded as important steps. Here, model selection is usually based on some measure of goodness-of-fit between the model prediction and the observed data. In Regional Flood Frequency Analysis (RFFA), leave-one-out (LOO) validation or a fixed-percentage leave-out validation (e.g., 10%) is commonly adopted to assess the predictive ability of regression-based prediction equations. This paper develops a Monte Carlo Cross Validation (MCCV) technique (which has been widely adopted in chemometrics and econometrics) for RFFA using Generalised Least Squares Regression (GLSR) and compares it with the most commonly adopted LOO validation approach. The study uses simulated and regional flood data from the state of New South Wales in Australia. It is found that, when developing hydrologic regression models, application of the MCCV is likely to result in a more parsimonious model than the LOO. It has also been found that the MCCV can provide a more realistic estimate of a model's predictive ability when compared with the LOO.
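The MCCV idea, repeated random train/test splits rather than a single leave-one-out pass, can be sketched with ordinary least squares standing in for the paper's GLSR. The regional flood data are replaced by a synthetic linear relation; with noise of standard deviation 0.5, the averaged hold-out RMSE should settle slightly above 0.5.

```python
import numpy as np

def mccv_rmse(X, y, n_splits=200, test_frac=0.3, seed=0):
    """Monte Carlo cross validation for an OLS model: repeatedly hold
    out a random test_frac of sites, refit on the rest, and average
    the prediction RMSE over the held-out sites."""
    rng = np.random.default_rng(seed)
    n = len(y)
    n_test = max(1, int(round(test_frac * n)))
    rmses = []
    for _ in range(n_splits):
        idx = rng.permutation(n)
        test, train = idx[:n_test], idx[n_test:]
        beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        resid = y[test] - X[test] @ beta
        rmses.append(np.sqrt(np.mean(resid ** 2)))
    return float(np.mean(rmses))

rng = np.random.default_rng(4)
x = rng.uniform(0, 10, 50)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, 50)
X = np.column_stack([np.ones_like(x), x])
rmse = mccv_rmse(X, y)
```

Unlike LOO, which fits each model on nearly the full sample, MCCV's larger hold-out fraction penalizes over-fitted models more heavily, which is why it tends to select the more parsimonious equation.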
Liang, Zhiting; Guan, Yong; Liu, Gang; Chen, Xiangyu; Li, Fahu; Guo, Pengfei; Tian, Yangchao
2016-03-01
The 'missing wedge', which is due to a restricted rotation range, is a major challenge for quantitative analysis of an object using tomography. With prior knowledge of the grey levels, the discrete algebraic reconstruction technique (DART) is able to reconstruct objects accurately from projections over a limited angle range. However, the quality of the reconstructions declines as the number of grey levels increases. In this paper, a modified DART (MDART) is proposed, in which each independent region of homogeneous material, rather than the grey values, is taken as the object of study. The grey values of each discrete region are estimated from the solution of the linear projection equations, and the steps of updating boundary pixels and correcting the grey values of each region are executed alternately in the iterative process. Simulation experiments with binary phantoms as well as phantoms with multiple grey levels show that MDART is capable of achieving high-quality reconstructions from projections in a limited angle range. The interesting advance of MDART is that neither prior knowledge of the grey values nor the number of grey levels is necessary.
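DART builds on the classical algebraic reconstruction technique (ART), which solves the linear projection equations by cyclically projecting the estimate onto one ray-sum constraint at a time (Kaczmarz's method). A minimal sketch on a toy 2 × 2 'image' with five ray sums, a consistent, full-column-rank system, so ART recovers the exact pixel values, is:

```python
import numpy as np

def art_kaczmarz(A, b, n_sweeps=500, relax=1.0):
    """Classical ART (Kaczmarz) for A x = b: cyclically project the
    current estimate onto the hyperplane of each projection equation."""
    x = np.zeros(A.shape[1])
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# tiny 2x2 'image' observed through row, column and one diagonal ray sum
x_true = np.array([1.0, 0.0, 0.0, 1.0])       # flattened pixel values
A = np.array([[1, 1, 0, 0],                   # row sums
              [0, 0, 1, 1],
              [1, 0, 1, 0],                   # column sums
              [0, 1, 0, 1],
              [1, 0, 0, 1]], dtype=float)     # main diagonal
b = A @ x_true                                # simulated projections
x_rec = art_kaczmarz(A, b)
```

DART and the paper's MDART wrap a discretization step around iterations like these: pixels (or, in MDART, whole homogeneous regions) are snapped to discrete values between passes, which is what compensates for the missing wedge.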
NASA Astrophysics Data System (ADS)
Eckert, R.; Neyhart, J. T.; Burd, L.; Polikar, R.; Mandayam, S. A.; Tseng, M.
2003-03-01
Mammography is the best method available as a non-invasive technique for the early detection of breast cancer. The radiographic appearance of the female breast consists of radiolucent (dark) regions due to fat and radiodense (light) regions due to connective and epithelial tissue. The amount of radiodense tissue can be used as a marker for predicting breast cancer risk. Previously, we have shown that the use of statistical models is a reliable technique for segmenting radiodense tissue. This paper presents improvements in the model that allow for further development of an automated system for segmentation of radiodense tissue. The segmentation algorithm employs a two-step process. In the first step, tissue and non-tissue regions of a digitized X-ray mammogram image are identified using a radial basis function neural network. The second step uses a constrained Neyman-Pearson algorithm, developed especially for this research work, to determine the amount of radiodense tissue. Results obtained using the algorithm have been validated by comparison with estimates provided by a radiologist employing previously established methods.
A vector scanning processing technique for pulsed laser velocimetry
NASA Technical Reports Server (NTRS)
Wernet, Mark P.; Edwards, Robert V.
1989-01-01
Pulsed laser sheet velocimetry yields nonintrusive measurements of two-dimensional velocity vectors across an extended planar region of a flow. Current processing techniques offer high precision (1 pct) velocity estimates, but can require several hours of processing time on specialized array processors. Under some circumstances, a simple, fast, less accurate (approx. 5 pct), data reduction technique which also gives unambiguous velocity vector information is acceptable. A direct space domain processing technique was examined. The direct space domain processing technique was found to be far superior to any other techniques known, in achieving the objectives listed above. It employs a new data coding and reduction technique, where the particle time history information is used directly. Further, it has no 180 deg directional ambiguity. A complex convection vortex flow was recorded and completely processed in under 2 minutes on an 80386 based PC, producing a 2-D velocity vector map of the flow field. Hence, using this new space domain vector scanning (VS) technique, pulsed laser velocimetry data can be reduced quickly and reasonably accurately, without specialized array processing hardware.
NASA Astrophysics Data System (ADS)
Koskelo, Elise Anne C.; Flynn, Eric B.
2017-02-01
Inspection of and around joints, beams, and other three-dimensional structures is integral to practical nondestructive evaluation of large structures. Non-contact, scanning laser ultrasound techniques offer an automated means of physically accessing these regions. However, to realize the benefits of laser-scanning techniques, simultaneous inspection of multiple surfaces at different orientations to the scanner must neither significantly degrade the signal level nor diminish the ability to distinguish defects from healthy geometric features. In this study, we evaluated the implementation of acoustic wavenumber spectroscopy for inspecting metal joints and crossbeams from interior angles. With this technique, we used a single-tone, steady-state, ultrasonic excitation to excite the joints via a single transducer attached to one surface. We then measured the full-field velocity responses using a scanning Laser Doppler vibrometer and produced maps of local wavenumber estimates. With the high signal level associated with steady-state excitation, scans could be performed at surface orientations of up to 45 degrees. We applied camera perspective projection transformations to remove the distortion in the scans due to a known projection angle, leading to a significant improvement in the local estimates of wavenumber. Projection leads to asymmetrical distortion in the wavenumber in one direction, making it possible to estimate view angle even when neither it nor the nominal wavenumber is known. Since plate thinning produces a purely symmetric increase in wavenumber, it is also possible to independently estimate the degree of hidden corrosion. With a two-surface joint, using the wavenumber estimate maps, we were able to automatically calculate the orthographic projection component of each angled surface in the scan area.
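The asymmetric distortion argument can be sketched in a few lines. The plane-wave foreshortening model and the numbers below are assumptions for illustration (tilt axis aligned with one scan direction, orthographic view), not the paper's full perspective transformation.

```python
import math

def apparent_wavenumbers(k_nominal, view_deg):
    """Wavenumber components seen by a scanner viewing the plate at angle
    view_deg: foreshortening stretches the component along the tilt
    direction by 1/cos(theta) and leaves the perpendicular one unchanged."""
    c = math.cos(math.radians(view_deg))
    return k_nominal / c, k_nominal

# Because the distortion is asymmetric, a single measured pair recovers
# both the view angle and the nominal wavenumber; uniform thinning, by
# contrast, would scale both components symmetrically.
k_par, k_perp = apparent_wavenumbers(200.0, 45.0)   # rad/m, hypothetical
theta = math.degrees(math.acos(k_perp / k_par))     # recovered view angle
k_nom = k_perp                                      # recovered nominal k
```

This separability of a symmetric (thinning) and an asymmetric (view-angle) effect is what lets the technique estimate hidden corrosion and projection angle independently.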
An image registration-based technique for noninvasive vascular elastography
NASA Astrophysics Data System (ADS)
Valizadeh, Sina; Makkiabadi, Bahador; Mirbagheri, Alireza; Soozande, Mehdi; Manwar, Rayyan; Mozaffarzadeh, Moein; Nasiriavanaki, Mohammadreza
2018-02-01
Non-invasive vascular elastography is an emerging technique in vascular tissue imaging. During the past decades, several techniques have been suggested to estimate tissue elasticity by measuring the displacement of the carotid vessel wall. Cross correlation-based methods are the most prevalent approaches to measuring the strain exerted on the vessel wall by the blood pressure. In the case of a low pressure, the displacement is too small to be apparent in ultrasound imaging, especially in the regions far from the center of the vessel, causing a high error of displacement measurement. On the other hand, increasing the compression leads to a relatively large displacement in the regions near the center, which reduces the performance of the cross correlation-based methods. In this study, a non-rigid image registration-based technique is proposed to measure the tissue displacement for a relatively large compression. The results show that the error of the displacement measurement obtained by the proposed method is reduced by increasing the amount of compression, while the error of the cross correlation-based method rises for a relatively large compression. We also used the synthetic aperture imaging method, benefiting from the directivity diagram, to improve the image quality, especially in the superficial regions. The best relative root-mean-square errors (RMSEs) of the proposed method and the adaptive cross-correlation method were 4.5% and 6%, respectively. Consequently, the proposed algorithm outperforms the conventional method and reduces the relative RMSE by 25%.
Longo, Maurizio; Andreis, Maria Elena; Pettinato, Cinzia; Ravasio, Giuliano; Rabbogliatti, Vanessa; De Zani, Donatella; Di Giancamillo, Mauro; Zani, Davide Danilo
2016-03-29
The aim of this work is the application of a bolus tracking technique for tomographic evaluation of the uretero-vesicular junction in dogs. Ten adult dogs (8-14 years) with variable body weight (2.8-32 kg) were enrolled in the prospective study. The patients were placed in sternal recumbency with the pelvis elevated 10°, and visualization of the uretero-vesicular junction was obtained with the bolus tracking technique after intravenous administration of non-ionic contrast medium. In the post-contrast late phase, a region of interest was placed within the lumen of the distal ureters and the density values were monitored before starting the helical scan. The uretero-vesicular junction was clearly visible in 100% of patients, with visualization of the endoluminal ureteral contrast enhancement and bladder washout. At the end of the tomographic study, the dose records were evaluated and compared to human exposures reported in the literature for the pelvic region. The effective dose estimated for each patient (37.5-138 mSv) proved to be elevated when compared to those reported for human patients. The bolus tracking technique could be applied for visualization of the uretero-vesicular junction in non-pathological patients, placing the region of interest in the distal ureters. The high effective doses recorded in our study support the need for specific thresholds for veterinary patients, drawing attention to paediatric patients' exposure in veterinary imaging as well.
NASA Astrophysics Data System (ADS)
Carrano, C. S.; Groves, K. M.; Valladares, C. E.; Delay, S. H.
2014-12-01
A complete characterization of field-aligned ionospheric irregularities responsible for the scintillation of satellite signals includes not only their spectral properties (power spectral strength, spectral index, anisotropy ratio, and outer-scale) but also their horizontal drift velocity. From a system impacts perspective, the horizontal drift velocity is important in that it dictates the rate of signal fading and also, to an extent, the level of phase fluctuations encountered by the receiver. From a physics perspective, studying the longitudinal morphology of zonal irregularity drift may lead to an improved understanding of the F region dynamo and regional electrodynamics at low latitudes. The irregularity drift at low latitudes is predominantly zonal and is most commonly measured by cross-correlating observations of satellite signals made by a pair of closely-spaced antennas. The AFRL-SCINDA network operates a small number of VHF spaced-antenna systems at low latitude stations for this purpose. A far greater number of GPS scintillation monitors are operated by AFRL-SCINDA (25-30) and the Low Latitude Ionospheric Sensor Network (35-50), but the receivers are situated too far apart to monitor the drift using cross-correlation techniques. In this paper, we present an alternative approach that leverages the weak scatter scintillation theory (Rino, Radio Sci., 1979) to infer the zonal irregularity drift from single-station GPS measurements of S4, sigma-phi, and the propagation geometry alone. Unlike the spaced-receiver technique, this technique requires assumptions for the height of the scattering layer (which introduces a bias in the drift estimates) and the spectral index of the irregularities (which affects the spread of the drift estimates about the mean). Nevertheless, theory and experiment show that the ratio of sigma-phi to S4 is less sensitive to these parameters than it is to the zonal drift, and hence the zonal drift can be estimated with reasonable accuracy. 
In this talk, we first validate the technique using spaced VHF-antenna measurements of zonal irregularity drift from the AFRL-SCINDA network. Next, we discuss preliminary results from our investigation into the longitudinal morphology of zonal irregularity drift using the AFRL-SCINDA and LISN networks of GPS scintillation monitors.
Interactions of tectonic, igneous, and hydraulic processes in the North Tharsis Region of Mars
NASA Technical Reports Server (NTRS)
Davis, P. A.; Tanaka, Kenneth L.; Golombek, M. P.; Plescia, J. B.
1991-01-01
Recent work on the north Tharsis of Mars has revealed a complex geologic history involving volcanism, tectonism, flooding, and mass wasting. Our detailed photogeologic analysis of this region found many previously unreported volcanic vents, volcaniclastic flows, irregular cracks, and minor pit chains; additional evidence that volcanic tectonic processes dominated this region throughout Martian geologic time; and the local involvement of these processes with surface and near surface water. Also, photoclinometric profiles were obtained within the region of troughs, simple grabens, and pit chains, as well as average spacings of pits along pit chains. These data were used together with techniques to estimate depths of crustal mechanical discontinuities that may have controlled the development of these features. In turn, such discontinuities may be controlled by stratigraphy, presence of water or ice, or chemical cementation.
Fractal dimension analysis of weight-bearing bones of rats during skeletal unloading
NASA Technical Reports Server (NTRS)
Pornprasertsuk, S.; Ludlow, J. B.; Webber, R. L.; Tyndall, D. A.; Sanhueza, A. I.; Yamauchi, M.
2001-01-01
Fractal analysis was used to quantify changes in trabecular bone induced through the use of a rat tail-suspension model to simulate microgravity-induced osteopenia. Fractal dimensions were estimated from digitized radiographs obtained from tail-suspended and ambulatory rats. Fifty 4-month-old male Sprague-Dawley rats were divided into groups of 24 ambulatory (control) and 26 suspended (test) animals. Rats of both groups were killed after periods of 1, 4, and 8 weeks. Femurs and tibiae were removed and radiographed with standard intraoral films and digitized using a flatbed scanner. Square regions of interest were cropped at proximal, middle, and distal areas of each bone. Fractal dimensions were estimated from slopes of regression lines fitted to circularly averaged plots of log power vs. log spatial frequency. The results showed that the computed fractal dimensions were significantly greater for images of trabecular bones from tail-suspended groups than for ambulatory groups (p < 0.01) at 1 week. Periods between 1 and 4 weeks likewise yielded significantly different estimates (p < 0.05), consistent with an increase in bone loss. In the tibiae, the proximal regions of the suspended group produced significantly greater fractal dimensions than other regions (p < 0.05), which suggests they were more susceptible to unloading. The data are consistent with other studies demonstrating osteopenia in microgravity environments and the regional response to skeletal unloading. Thus, fractal analysis could be a useful technique to evaluate the structural changes of bone.
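The abstract's estimator (slope of a regression line fitted to circularly averaged log power vs. log spatial frequency) can be sketched as follows. The synthetic test surface and the calibration D = (8 + slope) / 2 (the fractional-Brownian-surface convention) are assumptions for illustration; the paper's exact calibration against bone radiographs may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def fractal_dimension(img):
    """Slope of circularly averaged log power vs. log spatial frequency,
    converted to a fractal dimension via D = (8 + slope) / 2."""
    n = img.shape[0]
    power = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean()))) ** 2
    yy, xx = np.indices(img.shape)
    r = np.hypot(yy - n // 2, xx - n // 2).astype(int)
    # Circular average of power in integer-radius frequency bins.
    radial = np.bincount(r.ravel(), power.ravel()) / np.bincount(r.ravel())
    f = np.arange(1, n // 2)
    slope = np.polyfit(np.log(f), np.log(radial[1:n // 2]), 1)[0]
    return (8.0 + slope) / 2.0

# Synthetic fractal surface with known spectral exponent beta = 3,
# i.e. an expected dimension of 2.5, built by spectral synthesis.
n = 128
k = np.hypot(np.fft.fftfreq(n)[:, None], np.fft.fftfreq(n)[None, :])
k[0, 0] = 1.0
amp = k ** -1.5
amp[0, 0] = 0.0
surface = np.real(np.fft.ifft2(amp * np.exp(2j * np.pi * rng.random((n, n)))))
d_est = fractal_dimension(surface)
```

A rougher (more osteopenic-looking) texture has a shallower spectral slope and hence a larger estimated dimension, which is the direction of change the study reports for the tail-suspended animals.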
Jang, Cheng-Shin; Huang, Han-Chen
2017-07-01
The Jiaosi Hot Spring Region is one of the most famous tourism destinations in Taiwan. The spring water is processed for various uses, including irrigation, aquaculture, swimming, bathing, foot spas, and recreational tourism. Moreover, the multipurpose uses of spring water can be dictated by the temperature of the water. To evaluate the suitability of spring water for these various uses, this study spatially characterized the spring water temperatures of the Jiaosi Hot Spring Region by integrating ordinary kriging (OK), sequential Gaussian simulation (SGS), and a geographic information system (GIS). First, variogram analyses were used to determine the spatial variability of spring water temperatures. Next, OK and SGS were adopted to model the spatial uncertainty and distributions of the spring water temperatures. Finally, the land use (i.e., agriculture, dwelling, public land, and recreation) was determined using GIS and combined with the estimated distributions of the spring water temperatures. A suitable development strategy for the multipurpose uses of spring water is proposed according to the integration of the land use and spring water temperatures. The study results indicate that the integration of OK, SGS, and GIS is capable of characterizing spring water temperatures and the suitability of multipurpose uses of spring water. Compared with the observed data, SGS realizations are more robust than OK estimates for characterizing spring water temperatures. Furthermore, current land use is almost ideal in the Jiaosi Hot Spring Region according to the estimated spatial pattern of spring water temperatures.
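A minimal ordinary-kriging sketch follows. The well locations, temperatures, and the exponential variogram parameters are all hypothetical (in practice the sill and range come from the variogram analysis the abstract describes); SGS is not shown.

```python
import numpy as np

def ordinary_krige(xy, z, x0, sill=1.0, rang=150.0):
    """Ordinary kriging of one target point with an exponential variogram
    (sill and range here are illustrative, not fitted)."""
    gamma = lambda h: sill * (1.0 - np.exp(-3.0 * h / rang))
    n = len(z)
    d = np.hypot(xy[:, None, 0] - xy[None, :, 0],
                 xy[:, None, 1] - xy[None, :, 1])
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)                 # semivariances between wells
    A[n, n] = 0.0                        # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = gamma(np.hypot(xy[:, 0] - x0[0], xy[:, 1] - x0[1]))
    w = np.linalg.solve(A, b)
    return float(w[:n] @ z), float(w @ b)   # estimate, kriging variance

# Hypothetical spring-water temperatures (deg C) at four wells, estimated
# at the midpoint; symmetry makes the weights equal here.
wells = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
temps = np.array([52.0, 48.0, 55.0, 50.0])
est, kvar = ordinary_krige(wells, temps, (50.0, 50.0))
```

The kriging variance `kvar` is the quantity SGS improves on: simulation reproduces the full spatial variability rather than the smoothed OK surface.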
NASA Astrophysics Data System (ADS)
Costa, F. A. F.; Keir, G.; McIntyre, N.; Bulovic, N.
2015-12-01
Most groundwater supply bores in Australia do not have flow metering equipment and so regional groundwater abstraction rates are not well known. Past estimates of unmetered abstraction for regional numerical groundwater modelling typically have not attempted to quantify the uncertainty inherent in the estimation process in detail. In particular, the spatial properties of errors in the estimates are almost always neglected. Here, we apply Bayesian spatial models to estimate these abstractions at a regional scale, using the state-of-the-art computationally inexpensive approaches of integrated nested Laplace approximation (INLA) and stochastic partial differential equations (SPDE). We examine a case study in the Condamine Alluvium aquifer in southern Queensland, Australia; even in this comparatively data-rich area with extensive groundwater abstraction for agricultural irrigation, approximately 80% of bores do not have reliable metered flow records. Additionally, the metering data in this area are characterised by complicated statistical features, such as zero-valued observations, non-normality, and non-stationarity. While this precludes the use of many classical spatial estimation techniques, such as kriging, our model (using the R-INLA package) is able to accommodate these features. We use a joint model to predict both probability and magnitude of abstraction from bores in space and time, and examine the effect of a range of high-resolution gridded meteorological covariates upon the predictive ability of the model. Deviance Information Criterion (DIC) scores are used to assess a range of potential models, which reward good model fit while penalising excessive model complexity. 
We conclude that maximum air temperature (as a reasonably effective surrogate for evapotranspiration) is the most significant single predictor of abstraction rate; and that a significant spatial effect exists (represented by the SPDE approximation of a Gaussian random field with a Matérn covariance function). Our final model adopts air temperature, solar exposure, and normalized difference vegetation index (NDVI) as covariates, shows good agreement with previous estimates at a regional scale, and additionally offers rigorous quantification of uncertainty in the estimate.
NASA Astrophysics Data System (ADS)
Mogollón, José M.; Dale, Andrew W.; Jensen, Jørn B.; Schlüter, Michael; Regnier, Pierre
2013-08-01
Estimating the amount of methane in the seafloor globally, as well as the flux of methane from sediments toward the ocean-atmosphere system, is an important consideration in both the geological and climate sciences. Nevertheless, global estimates of methane inventories and of rates of methane production and consumption through anaerobic oxidation in marine sediments are very poorly constrained. Tools for regionally assessing methane formation and consumption rates would greatly increase our understanding of the spatial heterogeneity of the methane cycle and help constrain the global methane budget. In this article, an algorithm for calculating methane consumption rates in the inner shelf is applied to the gas-rich sediments of the Belt Seas and The Sound (North Sea-Baltic Sea transition). It is based on the depth of free gas determined by hydroacoustic techniques and the local methane solubility concentration. Owing to the continuous nature of shipboard hydroacoustic measurements, this algorithm captures spatial heterogeneities in methane fluxes better than geochemical analyses at point sources such as observational/sampling stations. The sensitivity of the algorithm to the resolution of the free-gas depth measurements (2 m vs. 50 cm) is shown to be of minor importance (a discrepancy of <10%) for a small part of the study area. The algorithm-derived anaerobic methane oxidation rates compare well with previous measurement and modeling studies. Finally, regional results reveal that contemporary anaerobic methane oxidation in worldwide inner-shelf sediments may be an order of magnitude lower (ca. 0.24 Tmol year-1) than previous estimates (4.6 Tmol year-1). Such algorithms ultimately help improve regional estimates of rates of anaerobic oxidation of methane.
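The core idea (free-gas depth plus local solubility constrain the upward methane flux that anaerobic oxidation must consume) can be sketched as a one-line Fick's-law estimate. This is a strong simplification of the paper's algorithm, and the porosity and diffusivity values are illustrative assumptions.

```python
def aom_rate(z_gas_m, c_sol_mol_m3, porosity=0.8, d_s=1.0e-9):
    """Depth-integrated anaerobic methane oxidation (mol CH4 m^-2 yr^-1)
    from the free-gas depth and the local methane solubility, assuming a
    linear dissolved-CH4 profile from solubility at the gas front to ~0
    near the seafloor. Porosity and sediment diffusivity (m^2/s) are
    illustrative, not site-specific."""
    gradient = c_sol_mol_m3 / z_gas_m          # mol m^-3 per m
    flux = porosity * d_s * gradient           # Fick's law, mol m^-2 s^-1
    return flux * 365.25 * 24 * 3600           # per year

# Hypothetical site: free gas 2 m below the seafloor, 10 mM solubility.
rate = aom_rate(2.0, 10.0)
```

The shallower the acoustically detected gas front, the steeper the implied gradient and the larger the inferred oxidation rate, which is why a continuous free-gas-depth survey maps the flux field so efficiently.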
High dynamic range hyperspectral imaging for camouflage performance test and evaluation
NASA Astrophysics Data System (ADS)
Pearce, D.; Feenan, J.
2016-10-01
This paper demonstrates the use of high dynamic range processing applied to the specific technique of hyperspectral imaging with linescan spectrometers. The technique provides an improvement in signal to noise for reflectance estimation. This is demonstrated for field measurements of rural scenes collected with a ground-based linescan spectrometer. Once fully developed, the specific application is expected to improve colour estimation approaches and consequently the test and evaluation accuracy of camouflage performance tests. Data are presented on both field and laboratory experiments that have been used to evaluate the improvements granted by the adoption of high dynamic range data acquisition in the field of hyperspectral imaging. High dynamic range imaging is well suited to the hyperspectral domain due to the large variation in solar irradiance across the visible and short wave infra-red (SWIR) spectrum, coupled with the wavelength dependence of the nominal silicon detector response. Under field measurement conditions it is generally impractical to provide artificial illumination; consequently, an adaptation of the hyperspectral imaging and reflectance estimation process has been developed to accommodate the solar spectrum. This is shown to improve the signal to noise ratio of the reflectance estimation process for scene materials in the 400-500 nm and 700-900 nm regions.
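A minimal version of the HDR merge can be sketched as follows. The counts, exposure times, and saturation threshold are hypothetical, and the instrument's real pipeline also handles dark frames, detector nonlinearity, and the solar-spectrum adaptation described above.

```python
def hdr_fuse(counts, exposure_s, full_scale=4095):
    """Merge bracketed line-scan samples of one spectral band into a
    relative-radiance value: scale each sample by its integration time
    and average, discarding near-saturated samples."""
    vals = [c / t for c, t in zip(counts, exposure_s)
            if c < 0.95 * full_scale]
    return sum(vals) / len(vals)

# A dim blue band (illustrative numbers): the long exposures carry the
# SNR, while the 4000-count sample is treated as saturated and dropped.
radiance = hdr_fuse([100, 1000, 4000], [0.001, 0.010, 0.040])
```

Because solar irradiance and silicon response vary strongly with wavelength, different bands saturate at different exposures; fusing the bracket lets every band use its longest unsaturated exposure, which is where the 400-500 nm and 700-900 nm gains come from.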
Wood, Molly S.; Fosness, Ryan L.; Skinner, Kenneth D.; Veilleux, Andrea G.
2016-06-27
The U.S. Geological Survey, in cooperation with the Idaho Transportation Department, updated regional regression equations to estimate peak-flow statistics at ungaged sites on Idaho streams using recent streamflow (flow) data and new statistical techniques. Peak-flow statistics with 80-, 67-, 50-, 43-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities (1.25-, 1.50-, 2.00-, 2.33-, 5.00-, 10.0-, 25.0-, 50.0-, 100-, 200-, and 500-year recurrence intervals, respectively) were estimated for 192 streamgages in Idaho and bordering States with at least 10 years of annual peak-flow record through water year 2013. The streamgages were selected from drainage basins with little or no flow diversion or regulation. The peak-flow statistics were estimated by fitting a log-Pearson type III distribution to records of annual peak flows and applying two additional statistical methods: (1) the Expected Moments Algorithm to help describe uncertainty in annual peak flows and to better represent missing and historical record; and (2) the generalized Multiple Grubbs Beck Test to screen out potentially influential low outliers and to better fit the upper end of the peak-flow distribution. Additionally, a new regional skew was estimated for the Pacific Northwest and used to weight at-station skew at most streamgages. The streamgages were grouped into six regions (numbered 1_2, 3, 4, 5, 6_8, and 7, to maintain consistency in region numbering with a previous study), and the estimated peak-flow statistics were related to basin and climatic characteristics to develop regional regression equations using a generalized least squares procedure. Four out of 24 evaluated basin and climatic characteristics were selected for use in the final regional peak-flow regression equations. Overall, the standard error of prediction for the regional peak-flow regression equations ranged from 22 to 132 percent. 
Among all regions, regression model fit was best for region 4 in west-central Idaho (average standard error of prediction=46.4 percent; pseudo-R2>92 percent) and region 5 in central Idaho (average standard error of prediction=30.3 percent; pseudo-R2>95 percent). Regression model fit was poor for region 7 in southern Idaho (average standard error of prediction=103 percent; pseudo-R2<78 percent) compared to other regions because few streamgages in region 7 met the criteria for inclusion in the study, and the region's semi-arid climate and associated variability in precipitation patterns cause substantial variability in peak flows. A drainage area ratio-adjustment method, using ratio exponents estimated using generalized least-squares regression, was presented as an alternative to the regional regression equations if peak-flow estimates are desired at an ungaged site that is close to a streamgage selected for inclusion in this study. The alternative drainage area ratio-adjustment method is appropriate for use when the drainage area ratio between the ungaged and gaged sites is between 0.5 and 1.5. The updated regional peak-flow regression equations had lower total error (standard error of prediction) than all regression equations presented in a 1982 study and in four of six regions presented in 2002 and 2003 studies in Idaho. A more extensive streamgage screening process resulted in fewer streamgages being used in the current study than in the 1982, 2002, and 2003 studies. Fewer streamgages used and the selection of different explanatory variables were likely causes of increased error in some regions compared to previous studies, but overall, regional peak-flow regression model fit was generally improved for Idaho. 
The revised statistical procedures and increased streamgage screening applied in the current study most likely resulted in a more accurate representation of natural peak-flow conditions. The updated regional peak-flow regression equations will be integrated in the U.S. Geological Survey StreamStats program to allow users to estimate basin and climatic characteristics and peak-flow statistics at ungaged locations of interest. StreamStats estimates peak-flow statistics with quantifiable certainty only when used at sites with basin and climatic characteristics within the range of input variables used to develop the regional regression equations. Both the regional regression equations and StreamStats should be used to estimate peak-flow statistics only in naturally flowing, relatively unregulated streams without substantial local influences to flow, such as large seeps, springs, or other groundwater-surface water interactions that are not widespread or characteristic of the respective region.
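The log-Pearson type III fitting step can be sketched with a textbook method-of-moments estimator. The annual peaks below are synthetic, and the sketch deliberately omits the study's refinements: the Expected Moments Algorithm, the Multiple Grubbs-Beck low-outlier screen, and regional-skew weighting.

```python
import math
import random
from statistics import NormalDist, mean, stdev

def lp3_quantile(peaks, aep):
    """Peak flow with annual exceedance probability `aep` from a
    log-Pearson Type III fit by the method of moments on log10 flows,
    using the Wilson-Hilferty frequency factor."""
    logq = [math.log10(q) for q in peaks]
    m, s, n = mean(logq), stdev(logq), len(logq)
    # Bias-corrected sample skew of the log flows.
    g = n * sum((x - m) ** 3 for x in logq) / ((n - 1) * (n - 2) * s ** 3)
    z = NormalDist().inv_cdf(1.0 - aep)
    if abs(g) < 1e-9:
        k = z
    else:
        k = (2 / g) * ((1 + g * z / 6 - g * g / 36) ** 3 - 1)
    return 10 ** (m + k * s)

random.seed(7)
peaks = [10 ** random.gauss(3.0, 0.25) for _ in range(40)]  # hypothetical record
q100 = lp3_quantile(peaks, 0.01)   # 1-percent AEP (100-year) flow
q2 = lp3_quantile(peaks, 0.50)     # 50-percent AEP (2-year) flow
```

These station quantiles are the left-hand side of the regional regression: once estimated at every streamgage, they are regressed on basin and climatic characteristics.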
NASA Technical Reports Server (NTRS)
Ray, Richard D.; Byrne, Deidre A.
2010-01-01
Seafloor pressure records, collected at 11 stations aligned along a single ground track of the Topex/Poseidon and Jason satellites, are analyzed for their tidal content. With very low background noise levels and approximately 27 months of high-quality records, tidal constituents can be estimated with unusually high precision. This includes many high-frequency lines up through the seventh-diurnal band. The station deployment provides a unique opportunity to compare with tides estimated from satellite altimetry, point by point along the satellite track, in a region of moderately high mesoscale variability. That variability can significantly corrupt altimeter-based tide estimates, even with 17 years of data. A method to improve the along-track altimeter estimates by correcting the data for nontidal variability is found to yield much better agreement with the bottom-pressure data. The technique should prove useful in certain demanding applications, such as altimetric studies of internal tides.
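The constituent-estimation step is ordinary least-squares harmonic analysis, sketched below on a synthetic record. The 27-month hourly "bottom pressure" series, its amplitudes, and the noise level are all hypothetical; only four major constituents are fitted, whereas the paper resolves lines up through the seventh-diurnal band.

```python
import numpy as np

# Darwin periods (hours) for four major constituents.
PERIODS = {"M2": 12.4206012, "S2": 12.0, "K1": 23.9344697, "O1": 25.8193417}

def fit_constituents(t_hr, h):
    """Regress the record onto a cosine/sine pair per constituent plus a
    mean; return the amplitude of each constituent."""
    cols = [np.ones_like(t_hr)]
    for p in PERIODS.values():
        w = 2.0 * np.pi / p
        cols += [np.cos(w * t_hr), np.sin(w * t_hr)]
    x, *_ = np.linalg.lstsq(np.column_stack(cols), h, rcond=None)
    return {name: float(np.hypot(x[1 + 2 * i], x[2 + 2 * i]))
            for i, name in enumerate(PERIODS)}

rng = np.random.default_rng(0)
t = np.arange(27 * 30 * 24, dtype=float)            # ~27 months, hourly
h = (0.50 * np.cos(2 * np.pi / PERIODS["M2"] * t + 0.3)
     + 0.20 * np.cos(2 * np.pi / PERIODS["K1"] * t - 1.1)
     + rng.normal(0.0, 0.05, t.size))               # background noise
amps = fit_constituents(t, h)
```

With 27 months of low-noise data the M2/S2 beat (about 14.8 days) is sampled many times over, which is why even closely spaced constituents separate cleanly; it is nontidal mesoscale variability, not record length, that limits the altimeter estimates the abstract compares against.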
On the influence of latency estimation on dynamic group communication using overlays
NASA Astrophysics Data System (ADS)
Vik, Knut-Helge; Griwodz, Carsten; Halvorsen, Pål
2009-01-01
Distributed interactive applications tend to have stringent latency requirements, and some may have high bandwidth demands. Many of them also have very dynamic user groups for which all-to-all communication is needed. In online multiplayer games, for example, such groups are determined through region-of-interest management in the application. We have investigated a variety of group management approaches for overlay networks in earlier work and shown that several useful tree heuristics exist. However, these heuristics require full knowledge of all overlay link latencies. Since this is not scalable, we investigate the effects that latency estimation techniques have on the quality of overlay tree constructions. We do this by evaluating one example of our group management approaches on PlanetLab and examining how latency estimation techniques influence its quality. Specifically, we investigate how two well-known latency estimation techniques, Vivaldi and Netvigator, affect the quality of tree building.
NASA Technical Reports Server (NTRS)
Lane, John E.; Kasparis, Takis; Jones, W. Linwood; Metzger, Philip T.
2009-01-01
Methodologies to improve disdrometer processing, loosely based on mathematical techniques common to the field of particle flow and fluid mechanics, are examined and tested. The inclusion of advection and vertical wind field estimates appear to produce significantly improved results in a Lagrangian hydrometeor trajectory model, in spite of very strict assumptions of noninteracting hydrometeors, constant vertical air velocity, and time independent advection during the scan time interval. Wind field data can be extracted from each radar elevation scan by plotting and analyzing reflectivity contours over the disdrometer site and by collecting the radar radial velocity data to obtain estimates of advection. Specific regions of disdrometer spectra (drop size versus time) often exhibit strong gravitational sorting signatures, from which estimates of vertical velocity can be extracted. These independent wind field estimates become inputs and initial conditions to the Lagrangian trajectory simulation of falling hydrometeors.
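A minimal version of the trajectory model can be sketched under exactly the abstract's strict assumptions (non-interacting hydrometeors, constant vertical air velocity, time-independent advection). The terminal-speed relation (Atlas et al., 1973) and all numbers are assumed choices for illustration, not the paper's configuration.

```python
import math

def fall_trajectory(z0_m, d_mm, u_adv=5.0, w_air=0.0, dt=1.0):
    """Time of fall (s) and downwind displacement (m) of a raindrop of
    diameter d_mm released at height z0_m, with constant horizontal
    advection u_adv and constant vertical air velocity w_air (m/s)."""
    v_t = 9.65 - 10.3 * math.exp(-0.6 * d_mm)   # terminal speed, m/s
    if v_t <= w_air:
        raise ValueError("updraft exceeds terminal speed; drop never falls")
    t = x = 0.0
    z = z0_m
    while z > 0.0:
        z -= (v_t - w_air) * dt                 # fall against the updraft
        x += u_adv * dt                         # horizontal advection
        t += dt
    return t, x

# Gravitational sorting: drops released together arrive size-ordered,
# the disdrometer signature used to infer the vertical wind estimate.
t_big, _ = fall_trajectory(3000.0, 4.0)
t_small, _ = fall_trajectory(3000.0, 1.0)
```

Inverting this picture is the abstract's workflow: the observed arrival-time spread across drop sizes constrains `w_air`, while the radar radial velocities supply `u_adv`.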
Methane emissions estimate from airborne measurements over a western United States natural gas field
NASA Astrophysics Data System (ADS)
Karion, Anna; Sweeney, Colm; PéTron, Gabrielle; Frost, Gregory; Michael Hardesty, R.; Kofler, Jonathan; Miller, Ben R.; Newberger, Tim; Wolter, Sonja; Banta, Robert; Brewer, Alan; Dlugokencky, Ed; Lang, Patricia; Montzka, Stephen A.; Schnell, Russell; Tans, Pieter; Trainer, Michael; Zamora, Robert; Conley, Stephen
2013-08-01
Methane (CH4) emissions from natural gas production are not well quantified and have the potential to offset the climate benefits of natural gas over other fossil fuels. We use atmospheric measurements in a mass balance approach to estimate CH4 emissions of 55 ± 15 × 103 kg h-1 from a natural gas and oil production field in Uintah County, Utah, on 1 day: 3 February 2012. This emission rate corresponds to 6.2%-11.7% (1σ) of average hourly natural gas production in Uintah County in the month of February. This study demonstrates the mass balance technique as a valuable tool for estimating emissions from oil and gas production regions and illustrates the need for further atmospheric measurements to determine the representativeness of our single-day estimate and to better assess inventories of CH4 emissions.
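The arithmetic behind a mass-balance estimate can be sketched in one function. All inputs below are illustrative, not the paper's: the real calculation integrates the enhancement across the downwind transect rather than using a single mean, and the air density depends on altitude.

```python
def mass_balance_emission(mean_enh_ppb, wind_ms, width_m, depth_m,
                          n_air_mol_m3=40.0):
    """Single-transect mass-balance CH4 emission estimate (kg/h):
    emission = wind speed x plume cross-section x molar air density x
    mean downwind enhancement. n_air ~ 40 mol/m^3 is an assumed value
    for a high-elevation boundary layer."""
    mol_per_s = (wind_ms * width_m * depth_m
                 * n_air_mol_m3 * mean_enh_ppb * 1e-9)
    return mol_per_s * 0.016042 * 3600.0     # kg CH4 per mol, per hour

# Hypothetical flight: 50 ppb mean enhancement across a 50 km transect
# through a 1.5 km mixed layer with a 5 m/s perpendicular wind.
e_kg_h = mass_balance_emission(50.0, 5.0, 50e3, 1.5e3)
```

With these made-up but plausible inputs the estimate lands in the tens of thousands of kg/h, the same order as the paper's 55 ± 15 × 103 kg h-1, which illustrates why the result is so sensitive to the measured wind speed and mixed-layer depth.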
Scintillation-based Search for Off-pulse Radio Emission from Pulsars
NASA Astrophysics Data System (ADS)
Ravi, Kumar; Deshpande, Avinash A.
2018-05-01
We propose a new method to detect off-pulse (unpulsed and/or continuous) emission from pulsars using the intensity modulations associated with interstellar scintillation. Our technique involves obtaining the dynamic spectra, separately for on-pulse window and off-pulse region, with time and frequency resolutions to properly sample the intensity variations due to diffractive scintillation and then estimating their mutual correlation as a measure of off-pulse emission, if any. We describe and illustrate the essential details of this technique with the help of simulations, as well as real data. We also discuss the advantages of this method over earlier approaches to detect off-pulse emission. In particular, we point out how certain nonidealities inherent to measurement setups could potentially affect estimations in earlier approaches and argue that the present technique is immune to such nonidealities. We verify both of the above situations with relevant simulations. We apply this method to the observation of PSR B0329+54 at frequencies of 730 and 810 MHz made with the Green Bank Telescope and present upper limits for the off-pulse intensity at the two frequencies. We expect this technique to pave the way for extensive investigations of off-pulse emission with the help of existing dynamic spectral data on pulsars and, of course, with more sensitive long-duration data from new observations.
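The correlation measure at the heart of the method can be sketched on synthetic dynamic spectra. The gamma-distributed gain screen, noise levels, and the 0.3 off-pulse fraction below are all hypothetical; real dynamic spectra would need the time and frequency resolutions matched to the diffractive scintillation scales, as the abstract notes.

```python
import numpy as np

rng = np.random.default_rng(3)

def scint_correlation(on_spec, off_spec):
    """Zero-lag correlation coefficient between on-pulse and off-pulse
    dynamic spectra; a significant value indicates off-pulse emission
    sharing the pulsar's diffractive scintillation pattern."""
    a = on_spec - on_spec.mean()
    b = off_spec - off_spec.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# Toy dynamic spectra (time x frequency): a common gain screen g
# modulates the on-pulse flux; the off-pulse spectrum is either pure
# receiver noise, or noise plus a weak component seeing the same screen.
g = rng.gamma(2.0, 0.5, size=(64, 64))
on = 10.0 * g + rng.normal(0.0, 0.5, g.shape)
off_null = rng.normal(0.0, 0.5, g.shape)
off_emit = 0.3 * g + rng.normal(0.0, 0.5, g.shape)
c_null = scint_correlation(on, off_null)
c_emit = scint_correlation(on, off_emit)
```

Because the off-pulse component must share the on-pulse scintillation pattern to correlate, additive measurement nonidealities that are common to both windows cancel differently than in direct off-pulse imaging, which is the robustness argument the abstract makes.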
NASA Astrophysics Data System (ADS)
Butlitsky, M. A.; Zelener, B. B.; Zelener, B. V.
2015-11-01
Earlier, a two-component pseudopotential plasma model, which we call the “shelf Coulomb” model, was developed. A Monte Carlo study of the canonical NVT ensemble with periodic boundary conditions was undertaken to calculate equations of state, pair distribution functions, internal energies, and other thermodynamic properties of the model. In the present work, an attempt is made to apply the so-called hybrid Gibbs statistical ensemble Monte Carlo technique to this model. First simulation results show qualitatively similar behavior in the critical point region for both methods. The Gibbs ensemble technique lets us estimate the position of the melting curve and the triple point of the model (in reduced temperature and specific volume coordinates): T* ≈ 0.0476, v* ≈ 6 × 10⁻⁴.
Accuracy assessment of fluoroscopy-transesophageal echocardiography registration
NASA Astrophysics Data System (ADS)
Lang, Pencilla; Seslija, Petar; Bainbridge, Daniel; Guiraudon, Gerard M.; Jones, Doug L.; Chu, Michael W.; Holdsworth, David W.; Peters, Terry M.
2011-03-01
This study assesses the accuracy of a new transesophageal (TEE) ultrasound (US) fluoroscopy registration technique designed to guide percutaneous aortic valve replacement. In this minimally invasive procedure, a valve is inserted into the aortic annulus via a catheter. Navigation and positioning of the valve is guided primarily by intra-operative fluoroscopy. Poor anatomical visualization of the aortic root region can result in incorrect positioning, leading to heart valve embolization, obstruction of the coronary ostia, and acute kidney injury. The use of TEE US images to augment intra-operative fluoroscopy provides significant improvements to image guidance. Registration is achieved using an image-based TEE probe tracking technique and US calibration. TEE probe tracking is accomplished using a single-perspective pose estimation algorithm. Pose estimation from a single image allows registration to be achieved using only images collected in standard OR workflow. Accuracy of this registration technique is assessed using three models: a point target phantom, a cadaveric porcine heart with implanted fiducials, and in-vivo porcine images. Results demonstrate that registration can be achieved with an RMS error of less than 1.5 mm, which is within the clinical accuracy requirement of 5 mm. US-fluoroscopy registration based on single-perspective pose estimation demonstrates promise as a method for providing guidance to percutaneous aortic valve replacement procedures. Future work will focus on real-time implementation and a visualization system that can be used in the operating room.
Wakefield, Ewan D; Owen, Ellie; Baer, Julia; Carroll, Matthew J; Daunt, Francis; Dodd, Stephen G; Green, Jonathan A; Guilford, Tim; Mavor, Roddy A; Miller, Peter I; Newell, Mark A; Newton, Stephen F; Robertson, Gail S; Shoji, Akiko; Soanes, Louise M; Votier, Stephen C; Wanless, Sarah; Bolton, Mark
2017-10-01
Population-level estimates of species' distributions can reveal fundamental ecological processes and facilitate conservation. However, these may be difficult to obtain for mobile species, especially colonial central-place foragers (CCPFs; e.g., bats, corvids, social insects), because it is often impractical to determine the provenance of individuals observed beyond breeding sites. Moreover, some CCPFs, especially in the marine realm (e.g., pinnipeds, turtles, and seabirds), are difficult to observe because they range tens to tens of thousands of kilometers from their colonies. It is hypothesized that the distribution of CCPFs depends largely on habitat availability and intraspecific competition. Modeling these effects may therefore allow distributions to be estimated from samples of individual spatial usage. Such data can be obtained for an increasing number of species using tracking technology. However, techniques for estimating population-level distributions from telemetry data are poorly developed. This is of concern because many marine CCPFs, such as seabirds, are threatened by anthropogenic activities. Here, we aim to estimate the distribution at sea of four seabird species foraging from approximately 5,500 breeding sites in Britain and Ireland. To do so, we GPS-tracked a sample of 230 European Shags Phalacrocorax aristotelis, 464 Black-legged Kittiwakes Rissa tridactyla, 178 Common Murres Uria aalge, and 281 Razorbills Alca torda from 13, 20, 12, and 14 colonies, respectively. Using Poisson point process habitat use models, we show that distribution at sea is dependent on (1) density-dependent competition among sympatric conspecifics (all species) and parapatric conspecifics (Kittiwakes and Murres); (2) habitat accessibility and coastal geometry, such that birds travel further from colonies with limited access to the sea; and (3) regional habitat availability.
Using these models, we predict space use by birds from unobserved colonies and thereby map the distribution at sea of each species at both the colony and regional level. Space use by all four species' British breeding populations is concentrated in the coastal waters of Scotland, highlighting the need for robust conservation measures in this area. The techniques we present are applicable to any CCPF. © 2017 by the Ecological Society of America.
NASA Astrophysics Data System (ADS)
O'Connor, M.; Eads, R.
2007-12-01
Watersheds in the northern California Coast Range have been designated as "impaired" with respect to water quality because of excessive sediment loads and/or high water temperature. Sediment budget techniques have typically been used by regulatory authorities to estimate current erosion rates and to develop targets for future desired erosion rates. This study examines erosion rates estimated by various methods for portions of the Gualala River watershed, designated as having water quality impaired by sediment under provisions of the Clean Water Act Section 303(d), located in northwest Sonoma County (~90 miles north of San Francisco). The watershed is underlain by Jurassic-age sedimentary and meta-sedimentary rocks of the Franciscan formation. The San Andreas Fault passes through the western edge of the watershed, and other active faults are present. A substantial portion of the watershed is mantled by rock slides and earth flows, many of which are considered dormant. The Coast Range is geologically young, and rapid rates of uplift are believed to have contributed to high erosion rates. This study compares quantitative erosion rate estimates developed at different spatial and temporal scales. It is motivated by a proposed vineyard development project in the watershed, and the need to document conditions in the project area, assess project environmental impacts, and meet regulatory requirements pertaining to water quality. Erosion rate estimates were previously developed using sediment budget techniques for relatively large drainage areas (~100 to 1,000 km²) by the North Coast Regional Water Quality Control Board and US EPA and by the California Geological Survey. In this study, similar sediment budget techniques were used for smaller watersheds (~3 to 8 km²), and were supplemented by a suspended sediment monitoring program utilizing Turbidity Threshold Sampling techniques (as described in a companion study in this session).
The duration of the monitoring program to date spanned the winter runoff seasons of Water Years 2006 and 2007. These were unusually wet and dry years, respectively, providing perspective on the range of measured sediment yield in relation to sediment budget estimates. The measured suspended sediment yields were substantially lower than predicted by sediment budget methods. Variation in geomorphic processes over time and space and methodological problems of sediment budgets may be responsible for these apparent discrepancies. The implications for water quality policy are discussed.
Guidelines for determining flood flow frequency—Bulletin 17C
England, John F.; Cohn, Timothy A.; Faber, Beth A.; Stedinger, Jery R.; Thomas, Wilbert O.; Veilleux, Andrea G.; Kiang, Julie E.; Mason, Robert R.
2018-03-29
Accurate estimates of flood frequency and magnitude are a key component of any effective nationwide flood risk management and flood damage abatement program. In addition to accuracy, methods for estimating flood risk must be uniformly and consistently applied because management of the Nation’s water and related land resources is a collaborative effort involving multiple actors, including most levels of government and the private sector. Flood frequency guidelines have been published in the United States since 1967 and have undergone periodic revisions. In 1967, the U.S. Water Resources Council presented a coherent approach to flood frequency with Bulletin 15, “A Uniform Technique for Determining Flood Flow Frequencies.” The method it recommended involved fitting the log-Pearson Type III distribution to annual peak flow data by the method of moments. The first extension and update of Bulletin 15 was published in 1976 as Bulletin 17, “Guidelines for Determining Flood Flow Frequency” (Guidelines). It extended the Bulletin 15 procedures by introducing methods for dealing with outliers, historical flood information, and regional skew. Bulletin 17A was published the following year to clarify the computation of weighted skew. The next revision, Bulletin 17B, provided a host of improvements and new techniques designed to address situations that often arise in practice, including better methods for estimating and using regional skew, weighting station and regional skew, detecting outliers, and using the conditional probability adjustment. The current version of these Guidelines is presented in this document, denoted Bulletin 17C. It incorporates changes motivated by four of the items listed as “Future Work” in Bulletin 17B and by 30 years of post-17B research on flood processes and statistical methods.
The updates include: adoption of a generalized representation of flood data that allows for interval and censored data types; a new method, called the Expected Moments Algorithm, which extends the method of moments so that it can accommodate interval data; a generalized approach to identification of low outliers in flood data; and an improved method for computing confidence intervals. Federal agencies are requested to use these Guidelines in all planning activities involving water and related land resources. State, local, and private organizations are encouraged to use these Guidelines to assure uniformity in the flood frequency estimates that all agencies concerned with flood risk should use for Federal planning decisions. This revision is adopted with the knowledge and understanding that review of these procedures will be ongoing. Updated methods will be adopted when warranted by experience and by examination and testing of new techniques.
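The method-of-moments core inherited from Bulletin 15 can be sketched as follows. This is a minimal illustration only, assuming a simple complete record: Bulletin 17C's Expected Moments Algorithm, low-outlier screening, and regional-skew weighting are all omitted, and the Wilson-Hilferty approximation stands in for exact Pearson Type III frequency factors.

```python
# Minimal log-Pearson Type III fit by the method of moments on log peaks.
import math
from statistics import NormalDist

def lp3_quantile(peaks, aep):
    """Flood magnitude with annual exceedance probability `aep` (0.01 = 100-yr)."""
    logs = [math.log10(q) for q in peaks]
    n = len(logs)
    mean = sum(logs) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in logs) / (n - 1))
    # sample skew of the log peaks
    g = n * sum((x - mean) ** 3 for x in logs) / ((n - 1) * (n - 2) * std ** 3)
    z = NormalDist().inv_cdf(1.0 - aep)
    if abs(g) < 1e-9:                 # zero skew: normal frequency factor
        k = z
    else:                             # Wilson-Hilferty approximation to K(g, aep)
        k = (2.0 / g) * ((1.0 + g * z / 6.0 - g * g / 36.0) ** 3 - 1.0)
    return 10.0 ** (mean + k * std)

# e.g. a 100-year peak from a short synthetic record of annual peaks (cfs)
q100 = lp3_quantile([1200, 980, 1500, 2100, 870, 1320, 1750, 1100], 0.01)
```

Working in log space and carrying the skew is exactly what distinguishes this family of methods from a plain lognormal fit; the Guidelines' refinements all concern how robustly the three moments are estimated.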
NASA Astrophysics Data System (ADS)
Hara, Takeshi; Matoba, Naoto; Zhou, Xiangrong; Yokoi, Shinya; Aizawa, Hiroaki; Fujita, Hiroshi; Sakashita, Keiji; Matsuoka, Tetsuya
2007-03-01
We have been developing a CAD scheme for head and abdominal injuries for emergency medical care. In this work, we developed an automated method to detect typical head injuries, such as ruptures or strokes of the brain. Extradural and subdural hematoma regions were detected by a comparison technique after the brain areas were registered using warping. We employed 5 normal and 15 stroke cases to estimate the performance after creating a brain model from 50 normal cases. Some of the hematoma regions were detected correctly in all of the stroke cases, with no false positive findings in the normal cases.
Lithospheric structure in the Pacific geoid
NASA Technical Reports Server (NTRS)
Marsh, B. D.
1984-01-01
In order that sub-lithospheric density variations be revealed with the geoid, the regional geoid anomalies associated with bathymetric variations must first be removed. Spectral techniques were used to generate a synthetic geoid by filtering the residual bathymetry, assuming an Airy-type isostatic compensation model. An unbiased estimate of the admittance shows that, for the region under study, no single compensation mechanism will explain all of the power in the geoid. Nevertheless, because topographic features are mainly coherent with the geoid, to first order an isostatically compensated lithosphere cut by major E-W fracture zones accounts for most of the power in the high degree and order SEASAT geoid in the Pacific.
NASA Astrophysics Data System (ADS)
Ibraheem, Ismael M.; Elawadi, Eslam A.; El-Qady, Gad M.
2018-03-01
The Wadi El Natrun area in Egypt is located west of the Nile Delta on both sides of the Cairo-Alexandria desert road, between 30°00‧ and 30°40‧N latitude, and 29°40‧ and 30°40‧E longitude. The name refers to the NW-SE trending depression located in the area and containing lakes that produce natron salt. Although the area is promising for oil and gas exploration as well as agricultural projects, geophysical studies carried out in the area are limited to regional seismic surveys accomplished by oil companies. This study presents an interpretation of airborne magnetic data to map the structural architecture and depth to basement of the study area. The interpretation was facilitated by applying different data enhancement and processing techniques, including filters (regional-residual separation), derivatives, and depth estimation using spectral analysis and Euler deconvolution. The results were refined using 2-D forward modeling along three profiles. Based on the depth estimation techniques, the estimated depth to the basement surface ranges from 2.25 km to 5.43 km, while results of the two-dimensional forward modeling show that it ranges from 2.2 km to 4.8 km. The dominant tectonic trends in the study area at deep levels are NW (Suez trend), NNW, NE, and ENE (Syrian Arc System trend). The older ENE trend, which dominates the northwestern desert, is overprinted in the study area by relatively recent NW and NE trends, whereas the tectonic trends at shallow levels are NW, ENE, NNE (Aqaba trend), and NE. The predominant structural trend for both deep and shallow structures is the NW trend. The results of this study can be used to better understand deep-seated basement structures and to support decisions regarding the development of agriculture and industrial areas, as well as oil and gas exploration, in northern Egypt.
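One of the depth-estimation techniques named above, spectral analysis, can be sketched under a common simplifying assumption (an ensemble of magnetic sources at a single depth, for which the radially averaged log power spectrum decays linearly with wavenumber; the wavenumber convention here, cycles/km, is an assumption of this sketch, not taken from the study):

```python
# Spectral depth estimation sketch: for sources at depth h, the radially
# averaged spectrum follows ln P(k) ~ const - 4*pi*h*k (k in cycles/km),
# so h is recovered from the least-squares slope of ln P versus k.
import numpy as np

def depth_from_spectrum(k_cycles_per_km, ln_power):
    slope, _intercept = np.polyfit(k_cycles_per_km, ln_power, 1)
    return -slope / (4.0 * np.pi)

# synthetic spectrum for an ensemble of sources at 3 km depth
k = np.linspace(0.05, 0.5, 40)
ln_p = 10.0 - 4.0 * np.pi * 3.0 * k
h_km = depth_from_spectrum(k, ln_p)
```

In practice different straight-line segments of the spectrum are fitted separately, with low wavenumbers attributed to deep (basement) sources and higher wavenumbers to shallower ones.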
Herculano-Houzel, Suzana; von Bartheld, Christopher S; Miller, Daniel J; Kaas, Jon H
2015-04-01
The number of cells comprising biological structures represents fundamental information in basic anatomy, development, aging, drug tests, pathology and genetic manipulations. Obtaining unbiased estimates of cell numbers, however, was until recently possible only through stereological techniques, which require specific training, equipment, histological processing and appropriate sampling strategies applied to structures with a homogeneous distribution of cell bodies. An alternative, the isotropic fractionator (IF), became available in 2005 as a fast and inexpensive method that requires little training, no specific software and only a few materials before it can be used to quantify total numbers of neuronal and non-neuronal cells in a whole organ such as the brain or any dissectible regions thereof. This method entails transforming a highly anisotropic tissue into a homogeneous suspension of free-floating nuclei that can then be counted under the microscope or by flow cytometry and identified morphologically and immunocytochemically as neuronal or non-neuronal. We compare the advantages and disadvantages of each method and provide researchers with guidelines for choosing the best method for their particular needs. IF is as accurate as unbiased stereology and faster than stereological techniques, as it requires no elaborate histological processing or sampling paradigms, providing reliable estimates in a few days rather than many weeks. Tissue shrinkage is also not an issue, since the estimates provided are independent of tissue volume. The main disadvantage of IF, however, is that it necessarily destroys the tissue analyzed and thus provides no spatial information on the cellular composition of biological regions of interest.
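The scaling arithmetic at the heart of the IF method can be shown in a toy example (all counts and volumes below are invented): nuclei counted in small aliquots give a density that is scaled to the full suspension volume, and the neuron total applies the fraction identified immunocytochemically as neuronal.

```python
# Toy isotropic fractionator arithmetic: aliquot counts -> nuclei density
# -> total cells in the suspension -> neurons via the NeuN-positive fraction.
def isotropic_fractionator(counts_per_aliquot, aliquot_volume_ul,
                           suspension_volume_ml, neuron_fraction):
    density_per_ul = sum(counts_per_aliquot) / (len(counts_per_aliquot) * aliquot_volume_ul)
    total_cells = density_per_ul * suspension_volume_ml * 1000.0  # ml -> ul
    return total_cells, total_cells * neuron_fraction

# four 10-ul aliquots from a 30-ml suspension; 40% of nuclei NeuN-positive
cells, neurons = isotropic_fractionator([120, 132, 126, 122], 10.0, 30.0, 0.4)
```

Because everything reduces to a density times a volume of the same homogenized suspension, the estimate is independent of the original tissue volume, which is why shrinkage during processing does not bias it.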
NASA Astrophysics Data System (ADS)
Saikia, C. K.; Roman-nieves, J. I.; Woods, M. T.
2013-12-01
Source parameters of nuclear and chemical explosions are often estimated by matching either the corner frequency and spectral level of a single event or the spectral ratio when spectra from two events are available, with known source parameters for one. In this study, we propose an alternative method in which waveforms from two or more events can be simultaneously equalized by setting the differential of the processed seismograms at one station from any two individual events to zero. The method involves convolving the equivalent Mueller-Murphy displacement source time function (MMDSTF) of one event with the seismogram of the second event, and vice versa, and then computing their difference seismogram. The MMDSTF is computed at the elastic radius, including both near-field and far-field terms. For this method to yield accurate source parameters, an inherent assumption is that the Green's functions for any paired events from the source to a receiver are the same. In the frequency limit of the seismic data, this is a reasonable assumption, a conclusion based on the comparison of Green's functions computed for flat-earth models at source depths ranging from 100 m to 1 km. Frequency domain analysis of the initial P wave is, however, sensitive to the depth phase interaction, and if tracked meticulously can help estimate the event depth. We applied this method to the local waveforms recorded from the three SPE shots and precisely determined their yields. These high-frequency seismograms exhibit significant lateral path effects in spectrogram analysis and 3D numerical computations, but the source equalization technique is independent of any such variation as long as the instrument characteristics are well preserved. We are currently estimating the uncertainty in the derived source parameters, treating the yields of the SPE shots as unknown. We also collected regional waveforms from 95 NTS explosions at regional stations ALQ, ANMO, CMB, COR, JAS, LON, PAS, PFO, and RSSD.
We are currently employing a station-based analysis using the equalization technique to estimate the depths and yields of many events relative to those of the announced explosions, and to develop their relationship with Mw and Mo for the NTS explosions.
Sepehrband, Farshid; Clark, Kristi A.; Ullmann, Jeremy F.P.; Kurniawan, Nyoman D.; Leanage, Gayeshika; Reutens, David C.; Yang, Zhengyi
2015-01-01
We examined whether quantitative density measures of cerebral tissue consistent with histology can be obtained from diffusion magnetic resonance imaging (MRI). By incorporating prior knowledge of myelin and cell membrane densities, absolute tissue density values were estimated from relative intra-cellular and intra-neurite density values obtained from diffusion MRI. The NODDI (neurite orientation dispersion and density imaging) technique, which can be applied clinically, was used. Myelin density estimates were compared with the results of electron and light microscopy in ex vivo mouse brain and with published density estimates in a healthy human brain. In ex vivo mouse brain, estimated myelin densities in different sub-regions of the mouse corpus callosum were almost identical to values obtained from electron microscopy (diffusion MRI: 42±6%, 36±4% and 43±5%; electron microscopy: 41±10%, 36±8% and 44±12% in genu, body and splenium, respectively). In the human brain, good agreement was observed between estimated fiber density measurements and previously reported values based on electron microscopy. Estimated density values were unaffected by crossing fibers. PMID:26096639
NASA Technical Reports Server (NTRS)
Dickinson, Robert E.
1995-01-01
Work under this grant has used information on precipitation and water vapor fluxes in the area of the Mexican Monsoon to analyze the regional precipitation climatology, to understand the nature of water vapor transport during the monsoon using model and observational data, and to analyze the ability of the TRMM remote sensing algorithm to characterize precipitation. An algorithm for estimating daily surface rain volumes from hourly GOES infrared images was developed and compared to radar data. Estimates were usually within a factor of two, but different linear relations between satellite reflectances and rainfall rate were obtained for each day, storm type, and storm development stage. This result suggests that using TRMM sensors to calibrate other satellite IR will need to be a complex process taking into account all three of the above factors. Another study, this one of the space-time variability of the Mexican Monsoon, indicates that TRMM will have a difficult time, over the course of its expected three-year lifetime, identifying the diurnal cycle of precipitation over the monsoon region. Even when considering monthly rainfalls, projected satellite estimates of August rainfall show a root mean square error of 38 percent. A related examination of the spatial variability of mean monthly rainfall, using a novel method for removing the effects of elevation from gridded gauge data, shows wide variation from satellite-based rainfall estimates at the same time and space resolution. One issue addressed by our research, relating to the basic character of the monsoon circulation, is the determination of the source region for moisture. The monthly maps produced from our study of monsoon variability show the presence of two rainfall maxima in the analysis normalized to sea level, one in south-central Arizona associated with the Mexican monsoon maximum and one in southeastern New Mexico associated with the Gulf of Mexico.
From the point of view of vertically-integrated fluxes and flux divergence of water vapor from ECMWF data, most moisture at upper levels arrives from the Gulf of Mexico, while low-level moisture comes from the northern Gulf of California. Composites of ECMWF analyses for wet and dry periods (classified by rain gauge data) show that both regimes have low-level moisture arriving from the northern and central Gulf of California. Above 700 mb, moisture comes from both source regions and the Sierra Madre Occidental. During wet periods a longer fetch through the moist air mass above western Mexico results in a greater moisture flux into the Sonoran Desert region, while there is less moisture from the Gulf of Mexico both above and below 700 mb. Work on the grant subcontract at the University of Colorado concentrated on the development of a technique useful to TRMM combining visible, infrared, and passive microwave data for measuring precipitation. Two established techniques using either visible or infrared data applied over the US Southwest correlated with gauges at the 0.58 to 0.70 level. The application of some established passive microwave techniques was less successful for a variety of reasons, including problems in both the gauge and satellite data quality, sampling problems, and weaknesses inherent in the algorithms themselves. A more promising solution for accurate rainfall estimation was explored using visible and infrared data to perform a cloud classification, which, when combined with information about the background (e.g. land/ocean), was used to select the most appropriate microwave algorithm from a suite of possibilities.
Characterizing and estimating noise in InSAR and InSAR time series with MODIS
Barnhart, William D.; Lohman, Rowena B.
2013-01-01
InSAR time series analysis is increasingly used to image subcentimeter displacement rates of the ground surface. The precision of InSAR observations is often affected by several noise sources, including spatially correlated noise from the turbulent atmosphere. Under ideal scenarios, InSAR time series techniques can substantially mitigate these effects; however, in practice the temporal distribution of InSAR acquisitions over much of the world exhibits seasonal biases, long temporal gaps, and insufficient acquisitions to confidently obtain the precision desired for tectonic research. Here, we introduce a technique for constraining the magnitude of errors expected from atmospheric phase delays on the ground displacement rates inferred from an InSAR time series, using independent observations of precipitable water vapor from MODIS. We implement a Monte Carlo error estimation technique based on multiple (100+) MODIS-based time series that sample date ranges close to the acquisition times of the available SAR imagery. This stochastic approach allows evaluation of the significance of signals present in the final time series product, in particular their correlation with topography and seasonality. We find that topographically correlated noise in individual interferograms is not spatially stationary, even over short spatial scales (<10 km). Overall, MODIS-inferred displacements and velocities exhibit errors of similar magnitude to the variability within an InSAR time series. We examine the MODIS-based confidence bounds in regions with a range of inferred displacement rates, and find we are capable of resolving velocities as low as 1.5 mm/yr, with uncertainties increasing to ∼6 mm/yr in regions with higher topographic relief.
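The stochastic approach can be sketched as follows, with pure random noise standing in for the MODIS-derived delay maps (an illustrative assumption; the study uses real precipitable-water-vapor observations): fit a velocity to many surrogate time series of atmospheric delay alone, and take the spread of the fitted rates as the atmospheric contribution to the velocity uncertainty.

```python
# Monte Carlo sketch: how much apparent "velocity" can atmosphere alone
# produce, given the actual (irregular) acquisition epochs?
import numpy as np

rng = np.random.default_rng(1)
t_years = np.sort(rng.uniform(0.0, 5.0, size=25))  # irregular SAR acquisition epochs

def atmospheric_rate_spread(n_trials=200, delay_sigma_mm=5.0):
    """Std of linear rates fitted to atmosphere-only surrogate series (mm/yr)."""
    rates = []
    for _ in range(n_trials):
        delay_mm = rng.normal(0.0, delay_sigma_mm, size=t_years.size)
        slope, _ = np.polyfit(t_years, delay_mm, 1)  # rate fitted to pure noise
        rates.append(slope)
    return float(np.std(rates))

sigma_v = atmospheric_rate_spread()  # velocity uncertainty floor, mm/yr
```

A tectonic rate inferred from the real InSAR series is then judged significant only if it exceeds this noise-driven spread; seasonally biased or sparse epochs inflate `sigma_v`, which is the effect the paper quantifies with MODIS.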
Bailiff, I K; Stepanenko, V F; Göksu, H Y; Jungner, H; Balmukhanov, S B; Balmukhanov, T S; Khamidova, L G; Kisilev, V I; Kolyado, I B; Kolizshenkov, T V; Shoikhet, Y N; Tsyb, A F
2004-12-01
Luminescence retrospective dosimetry techniques have been applied with ceramic bricks to determine the cumulative external gamma dose due to fallout, primarily from the 1949 test, in populated regions lying NE of the Semipalatinsk Nuclear Test Site in Altai, Russia, and the Semipalatinsk region, Kazakhstan. As part of a pilot study, nine settlements were examined: three within the regions of highest predicted dose (Dolon in Kazakhstan; Laptev Log and Leshoz Topolinskiy in Russia) and the remainder of lower predicted dose (Akkol, Bolshaya Vladimrovka, Kanonerka, and Izvestka in Kazakhstan; Rubtsovsk and Kuria in Russia) within the lateral regions of the fallout trace due to the 1949 test. The settlement of Kainar, mainly affected by the 24 September 1951 nuclear test, was also examined. The bricks from this region were found to be generally suitable for use with the luminescence method. Estimates of cumulative absorbed dose in air due to fallout for Dolon and Kanonerka in Kazakhstan and Leshoz Topolinskiy were 475 +/- 110 mGy, 240 +/- 60 mGy, and 230 +/- 70 mGy, respectively. The result obtained in Dolon village is in agreement with published calculated estimates of dose normalized to Cs concentration in soil. At all the other locations (except Kainar) the experimental values of cumulative absorbed dose indicated no significant dose due to fallout that could be detected within a margin of about 25 mGy. The results demonstrate the potential suitability of the luminescence method to map variations in cumulative dose within the relatively narrow corridor of fallout distribution from the 1949 test. Such work is needed to provide the basis for accurate dose reconstruction in settlements, since the predominance of short-lived radionuclides in the fallout and a high degree of heterogeneity in the distribution of fallout are problematic for the application of conventional dosimetry techniques.
NASA Astrophysics Data System (ADS)
Khan, Firdos; Pilz, Jürgen
2016-04-01
South Asia is under severe impacts of changing climate and global warming. The last two decades show that climate change is underway, and the first decade of the 21st century was the warmest on record over Pakistan, with temperature reaching 53 °C in 2010. Consequently, the spatio-temporal distribution and intensity of precipitation are badly affected, causing floods, cyclones, and hurricanes in the region, which in turn have impacts on agriculture, water, health, etc. To cope with the situation, it is important to conduct impact assessment studies and take adaptation and mitigation measures. Impact assessment studies need climate variables at higher resolution, and downscaling techniques are used to produce them; these techniques are broadly divided into two types, statistical downscaling and dynamical downscaling. The target location of this study is the monsoon-dominated region of Pakistan, chosen in part because monsoon rains contribute more than 80% of the total rainfall in this area. This study evaluates a statistical downscaling technique which can then be used for downscaling climatic variables. Two statistical techniques, quantile regression and copula modeling, are combined in order to produce realistic results for climate variables in the area under study. To reduce the dimension of the input data and deal with multicollinearity problems, empirical orthogonal functions will be used. Advantages of this new method are: (1) it is more robust to outliers than ordinary least squares and other estimation methods based on central tendency and dispersion measures; (2) it preserves the dependence among variables and among sites; and (3) it can be used to combine different types of distributions.
This is important in our case because we are dealing with climatic variables having different distributions over different meteorological stations. The proposed model will be calibrated using the NCEP/NCAR (National Centers for Environmental Prediction / National Center for Atmospheric Research) predictors for the period 1960-1990 and validated for 1990-2000. To investigate the efficiency of the proposed model, it will be compared with the multivariate multiple regression model and with dynamical downscaling climate models by using different climate indices that describe the frequency, intensity, and duration of the variables of interest. KEY WORDS: Climate change, Copula, Monsoon, Quantile regression, Spatio-temporal distribution.
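The quantile-regression building block of the proposed scheme can be sketched in isolation (a toy on synthetic data; the copula coupling and the EOF-based predictor reduction are omitted): fit the conditional tau-quantile of a response by minimizing the pinball loss.

```python
# Quantile regression sketch: subgradient descent on the pinball loss
# for a straight-line model a + b*x.
import numpy as np

def fit_quantile_line(x, y, tau, lr=0.01, steps=20000):
    """Fit a + b*x for the tau-quantile of y given x."""
    a, b = 0.0, 0.0
    for _ in range(steps):
        resid = y - (a + b * x)
        # subgradient of the pinball loss with respect to the prediction
        g = np.where(resid > 0, -tau, 1.0 - tau)
        a -= lr * g.mean()
        b -= lr * (g * x).mean()
    return a, b

rng = np.random.default_rng(7)
x = np.linspace(-5.0, 5.0, 200)              # centered predictor (e.g. an EOF score)
y = 2.0 + 3.0 * x + rng.normal(0.0, 1.0, size=x.size)
a50, b50 = fit_quantile_line(x, y, tau=0.5)  # tau = 0.5 gives median regression
```

Fitting several values of tau (0.1, 0.5, 0.9, ...) at once gives the conditional distribution of precipitation rather than just its mean, which is the robustness-to-outliers advantage claimed in the abstract.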
Timothy Callahan; Austin E. Morrison
2016-01-01
Interpreting storm-event runoff in coastal plain watersheds is challenging because of the space- and time-variable nature of the different sources that contribute to stream flow. These flow vectors and the magnitude of water flux are dependent on the pre-storm soil moisture (as estimated from depth to water table) in the lower coastal plain (LCP) region.
Manpower planning for nurse personnel.
Keaveny, T J; Hayden, R L
1978-01-01
A technique is described which can be applied to manpower planning for nurse personnel at a state or regional level. An iterative process explores the implications of alternative planning policy decision strategies intended to balance manpower supply and requirements. Impacts of the following policy alternatives are estimated: scale of operations of education institutions; interstate migration patterns; labor force participation rates; and job design of licensed practical nurse (LPN) and registered nurse (RN) positions. PMID:665883
A Technical Assessment of Seismic Yield Estimation. Appendix. Part 1
1981-01-01
Heamberger, California Institute of Technology, Pasadena "Near Source Effects on P-Waves" 14. John G. Trulio, Applied Theory, Inc., Los Angeles...to be reported - hence users may apply their own techniques. We expect the NEIS to report several additional magnitudes, where applicable, in the...and there is still uncertainty concerning the effectiveness of such corrections if applied to various aseismic regions. (I don't know if anyone has even
NASA Astrophysics Data System (ADS)
Mériaux, Sébastien; Conti, Allegra; Larrat, Benoît
2018-05-01
The characterization of extracellular space (ECS) architecture represents valuable information for understanding transport mechanisms in brain parenchyma. ECS tortuosity reflects the hindrance imposed by cell membranes on molecular diffusion. Numerous strategies have been proposed to measure diffusion through the ECS and to estimate its tortuosity. The earliest method involves perfusing a radiotracer for several hours; its effective diffusion coefficient D* is determined after post mortem processing. The most well-established techniques are real-time iontophoresis, which measures the concentration of a specific ion at a known distance from its release point, and integrative optical imaging, which relies on acquiring microscopy images of macromolecules labelled with a fluorophore. After presenting these methods, we focus on a recent Magnetic Resonance Imaging (MRI)-based technique that consists of acquiring concentration maps of a contrast agent diffusing within the ECS. Thanks to MRI properties, molecular diffusion and tortuosity can be estimated in 3D for deep brain regions. To further discuss the reliability of this technique, we point out the influence of the delivery method on the estimation of D*. We compare the value of D* for an intracerebrally injected contrast agent with its value when the agent is delivered to the brain after ultrasound-induced blood-brain barrier (BBB) permeabilization. Several studies have already shown that tortuosity may be modified in pathological conditions. Therefore, we believe that MRI-based techniques could be useful in a clinical context for characterizing the diffusion properties of pathological ECS and thus predicting drug biodistribution into the targeted area.
Intrathoracic airway wall detection using graph search and scanner PSF information
NASA Astrophysics Data System (ADS)
Reinhardt, Joseph M.; Park, Wonkyu; Hoffman, Eric A.; Sonka, Milan
1997-05-01
Measurements of the in vivo bronchial tree can be used to assess regional airway physiology. High-resolution CT (HRCT) provides detailed images of the lungs and has been used to evaluate bronchial airway geometry. Such measurements have been used to assess diseases affecting the airways, such as asthma and cystic fibrosis, to measure airway response to external stimuli, and to evaluate the mechanics of airway collapse in sleep apnea. To routinely use CT imaging in a clinical setting to evaluate the in vivo airway tree, there is a need for an objective, automatic technique for identifying the airway tree in the CT images and measuring airway geometry parameters. Manual or semi-automatic segmentation and measurement of the airway tree from a 3D data set may require several man-hours of work, and the manual approaches suffer from inter-observer and intra-observer variabilities. This paper describes a method for automatic airway tree analysis that combines accurate airway wall location estimation with a technique for optimal airway border smoothing. A fuzzy logic, rule-based system is used to identify the branches of the 3D airway tree in thin-slice HRCT images. Raycasting is combined with a model-based parameter estimation technique to identify the approximate inner and outer airway wall borders in 2D cross-sections through the image data set. Finally, a 2D graph search is used to optimize the estimated airway wall locations and obtain accurate airway borders. We demonstrate this technique using CT images of a plexiglass tube phantom.
NASA Astrophysics Data System (ADS)
Eaton, Adam; Vincely, Vinoin; Lloyd, Paige; Hugenberg, Kurt; Vishwanath, Karthik
2017-03-01
Video photoplethysmography (VPPG) is a numerical technique that processes standard RGB video of exposed human skin and extracts the heart rate (HR) from the skin areas. Being a non-contact, sensor-free technique, VPPG has the potential to provide estimates of a subject's heart rate, respiratory rate, and even heart rate variability, with applications ranging from infant monitors to remote healthcare and psychological experiments. Though several previous studies have reported successful correlations between HR obtained using VPPG algorithms and HR measured using the gold-standard electrocardiograph, others have reported that these correlations depend on controlling for the duration of the video data analyzed, subject motion, and ambient lighting. Here, we investigate the ability of two commonly used VPPG algorithms to extract human heart rates under three different laboratory conditions. We compare the VPPG HR values extracted across these three sets of experiments to the gold-standard values acquired by using an electrocardiogram or a commercially available pulse oximeter. The two VPPG algorithms were applied with and without KLT facial-feature tracking and detection algorithms from the MATLAB® Computer Vision toolbox. Results indicate that VPPG-based numerical approaches have the ability to provide robust estimates of subject HR values and are relatively insensitive to the devices used to record the video data. However, they are highly sensitive to the conditions of video acquisition, including subject motion, the location, size, and averaging techniques applied to regions of interest, as well as the number of video frames used for data processing.
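As a minimal sketch of the core numerical idea behind VPPG (not the specific algorithms evaluated above): average the green channel over a skin region in each frame, remove the mean, and take the dominant frequency in the physiological band as the heart rate. The signal, frame rate, and band limits below are illustrative assumptions.

```python
import math

def dominant_frequency(signal, fs, f_lo=0.7, f_hi=4.0, df=0.01):
    """Scan a frequency band with a naive DFT and return the frequency
    (Hz) whose projection onto the signal has maximum power."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]  # remove the DC component
    best_f, best_p = f_lo, -1.0
    f = f_lo
    while f <= f_hi:
        re = sum(x[k] * math.cos(2 * math.pi * f * k / fs) for k in range(n))
        im = sum(x[k] * math.sin(2 * math.pi * f * k / fs) for k in range(n))
        p = re * re + im * im
        if p > best_p:
            best_f, best_p = f, p
        f += df
    return best_f

def heart_rate_bpm(green_means, fs):
    """Heart rate in beats per minute from a per-frame green-channel series."""
    return 60.0 * dominant_frequency(green_means, fs)

# Synthetic test: a 1.2 Hz (72 bpm) pulse sampled at 30 fps for 10 s
fs = 30.0
sig = [0.5 + 0.05 * math.sin(2 * math.pi * 1.2 * k / fs) for k in range(300)]
print(round(heart_rate_bpm(sig, fs)))  # ≈ 72
```

A real pipeline would add detrending, band-pass filtering, and face tracking; this sketch only shows the frequency-domain HR readout.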
NASA Astrophysics Data System (ADS)
Nilsen, K.; van Soesbergen, A.; Matthews, Z.
2016-12-01
Socioeconomic development depends on local environments. However, the scientific evidence quantifying the impact of environmental factors on health, nutrition and poverty at subnational levels is limited. This is because socioeconomic indicators are derived from sample surveys representative only at aggregate levels, whereas environmental variables are mostly available in high-resolution grids. Cambodia was selected because of its commitment to development in the context of a rapidly deteriorating environment. Although the country has made considerable progress since 2005, access to health services is limited, a quarter of the population is still poor, and 40% of rural children are malnourished. Cambodia is also facing considerable environmental challenges, including high deforestation rates, land degradation and natural hazards. Addressing existing gaps in the knowledge of environmental impacts on health and livelihoods, this study applies small area estimation (SAE) to quantify health, nutritional and poverty outcomes in the context of local environments. SAE produces reliable subnational estimates of socioeconomic outcomes available only from sample surveys by combining them with information from auxiliary sources (census). A model is used to explain common trends across areas, and a random effect structure is applied to explain the observed extra heterogeneity. SAE models predicting health, nutrition and poverty outcomes excluding and including contextual environmental variables on natural hazards vulnerability, forest cover, climate, and agricultural production are compared. Results are mapped at regional and district levels to spatially assess the impacts of environmental variation on the outcomes. Inter- and intra-regional inequalities are also estimated to examine the efficacy of health/socioeconomic policy targeting based on geographic location.
Preliminary results suggest that localised environmental factors have considerable impacts on the indicators estimated and should therefore not be ignored. While there are large regional differences, pockets of malnutrition, poverty and inequitable health outcomes within regions are identified. The inequality decomposition shows under- and over-coverage of geographical targeting when environmental factors are taken into account.
Low-head hydropower assessment of the Brazilian State of São Paulo
Artan, Guleid A.; Cushing, W. Matthew; Mathis, Melissa L.; Tieszen, Larry L.
2014-01-01
This study produced a comprehensive estimate of the magnitude of hydropower potential available in the streams that drain watersheds entirely within the State of São Paulo, Brazil. Because a large part of the contributing area is outside of São Paulo, the main stem of the Paraná River was excluded from the assessment. Potential head drops were calculated from the Digital Terrain Elevation Data, which has a 1-arc-second resolution (approximately 30-meter resolution at the equator). For the conditioning and validation of synthetic stream channels derived from the Digital Elevation Model datasets, hydrography data (in digital format) supplied by the São Paulo State Department of Energy and the Agência Nacional de Águas were used. Within the study area there were 1,424 rain gages and 123 streamgages with long-term data records. To estimate average yearly streamflow, a hydrologic regionalization system that divides the State into 21 homogeneous basins was used. Stream segments, upstream areas, and mean annual rainfall were estimated using geographic information systems techniques. The accuracy of the flows estimated with the regionalization models was validated. Overall, simulated streamflows were significantly correlated with the observed flows but with a consistent underestimation bias. When the annual mean flows from the regionalization models were adjusted upward by 10 percent, average streamflow estimation bias was reduced from -13 percent to -4 percent. The sum of all the validated stream reach mean annual hydropower potentials in the 21 basins is 7,000 megawatts (MW). Hydropower potential is mainly concentrated near the Serra do Mar mountain range and along the Tietê River. The power potential along the Tietê River is mainly at sites with medium and high potentials, sites where hydropower has already been harnessed.
In addition to the annual mean hydropower estimates, potential hydropower estimates with flow rates with exceedance probabilities of 40 percent, 60 percent, and 90 percent were made.
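The reach-scale potential figures above follow from the standard hydropower relation P = ρgQH; a minimal sketch (the flow, head, and efficiency values are illustrative, not taken from the study; the exceedance-probability flows could be substituted for Q):

```python
RHO_WATER = 1000.0   # density of water, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2

def hydro_potential_mw(flow_m3s, head_m, efficiency=1.0):
    """Hydropower potential P = rho * g * Q * H * eta, returned in megawatts.
    efficiency=1.0 gives the theoretical (gross) potential."""
    return RHO_WATER * G * flow_m3s * head_m * efficiency / 1e6

# e.g. a reach with 50 m^3/s mean annual flow over a 15 m head drop
print(round(hydro_potential_mw(50.0, 15.0), 2))  # ≈ 7.36 MW
```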
NASA Astrophysics Data System (ADS)
Miller, D. L.; Roberts, D. A.; Clarke, K. C.; Peters, E. B.; Menzer, O.; Lin, Y.; McFadden, J. P.
2017-12-01
Gross primary productivity (GPP) is commonly estimated with remote sensing techniques over large regions of Earth; however, urban areas are typically excluded due to a lack of light use efficiency (LUE) parameters specific to urban vegetation and challenges stemming from the spatial heterogeneity of urban land cover. In this study, we estimated GPP during the middle of the growing season, both within and among vegetation and land use types, in the Minneapolis-Saint Paul, Minnesota metropolitan region (52.1% vegetation cover). We derived LUE parameters for specific urban vegetation types using estimates of GPP from eddy covariance and tree sap flow-based CO2 flux observations and the fraction of absorbed photosynthetically active radiation derived from 2-m resolution WorldView-2 satellite imagery. We produced a pixel-based hierarchical land cover classification of built-up and vegetated urban land cover classes, distinguishing deciduous broadleaf trees, evergreen needleleaf trees, turf grass, and golf course grass from impervious and soil surfaces. The overall classification accuracy was 80% (kappa = 0.73). The mapped GPP estimates were within 12% of estimates from independent tall tower eddy covariance measurements. Mean GPP estimates (± standard deviation; g C m^-2 day^-1) for the entire study area, from highest to lowest, were: golf course grass (11.77 ± 1.20), turf grass (6.05 ± 1.07), evergreen needleleaf trees (5.81 ± 0.52), and deciduous broadleaf trees (2.52 ± 0.25). Turf grass GPP had a larger coefficient of variation (0.18) than the other vegetation classes ( 0.10). Mean land use GPP for the full study area varied as a function of percent vegetation cover. Urban GPP in general, both including and excluding non-vegetated areas, was less than half that of literature estimates for nearby natural forests and grasslands.
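As a hedged illustration of the light-use-efficiency model underlying such GPP maps (the LUE, fAPAR, and PAR values below are hypothetical, not the parameters derived in the study):

```python
def gpp_g_c_m2_day(lue, fapar, par_mj_m2_day):
    """Light-use-efficiency GPP model: GPP = LUE * fAPAR * PAR,
    with LUE in g C per MJ of absorbed PAR, fAPAR dimensionless,
    and PAR in MJ m^-2 day^-1."""
    return lue * fapar * par_mj_m2_day

# hypothetical turf-grass-like parameters: LUE = 0.60 g C/MJ,
# fAPAR = 0.70, midsummer PAR = 12 MJ m^-2 day^-1
print(round(gpp_g_c_m2_day(0.60, 0.70, 12.0), 2))  # 5.04 g C m^-2 day^-1
```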
Using Meteosat-10 and GPS ZWD measurements for creating regional water vapor maps.
NASA Astrophysics Data System (ADS)
Leontiev, Anton; Reuveni, Yuval
2017-04-01
Water vapor (WV) is one of the greenhouse gases and plays a crucial role in global warming; its investigation is of great importance for climate and global warming studies. One of the main difficulties of such studies is that WV varies constantly across the lower part of the atmosphere. Currently, most studies provide WV estimates using only one technique, such as tropospheric GPS path delays [Duan et al.] or multi-spectral reflected measurements from meteorological satellites such as the Meteosat series [Schroedter et al.]. Constructing WV maps using only interpolated GPS zenith wet delay (ZWD) estimates has a main disadvantage: it does not take into account clouds located outside the integrated GPS paths. In our previous work [Leontiev, Reuveni, in review] we were able to estimate Meteosat-10 7.3 μm WV pixel values by extracting the mathematical dependency between the WV amount calculated using GPS ZWD and the Meteosat-10 data. Here, we present a new strategy that combines these two approaches for WV estimation, using the mathematical dependency between GPS ZWD and Meteosat-10 to evaluate the WV amount in cloudy conditions when performing the interpolation between adjacent GPS stations inside our network. This approach increases the accuracy of the estimated regional water vapor maps. References: Duan, J. et al. (1996), GPS Meteorology: Direct Estimation of the Absolute Value of Precipitable Water, J. Appl. Meteorol., 35(6), 830-838, doi:10.1175/15200450(1996)035<0830:GMDEOT>2.0.CO;2. Leontiev, A., and Reuveni, Y.: Combining METEOSAT-10 satellite image data with GPS tropospheric path delays to estimate regional Integrated Water Vapor (IWV) distribution, Atmos. Meas. Tech. Discuss., doi:10.5194/amt-2016-217, in review, 2016. Schroedter-Homscheidt, M., A. Drews, and S. Heise (2008), Total water vapor column retrieval from MSG-SEVIRI split window measurements exploiting the daily cycle of land surface temperatures, Remote Sens.
Environ., 112(1), 249-258, doi:10.1016/j.rse.2007.05.006
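The ZWD-to-precipitable-water conversion behind such maps is commonly done with the dimensionless factor Π of Bevis et al. (1992, 1994); a sketch using their standard published constants (the example ZWD and weighted mean temperature Tm are assumed values):

```python
def pw_from_zwd(zwd_mm, tm_kelvin):
    """Convert GPS zenith wet delay (mm) to precipitable water (mm) using
    the dimensionless conversion factor Pi of Bevis et al."""
    rho_w = 1000.0   # density of liquid water, kg/m^3
    r_v = 461.5      # specific gas constant of water vapor, J/(kg K)
    k3 = 3.739e3     # refractivity constant, K^2/Pa (= 3.739e5 K^2/hPa)
    k2p = 0.221      # refractivity constant, K/Pa   (= 22.1 K/hPa)
    pi = 1.0e6 / (rho_w * r_v * (k3 / tm_kelvin + k2p))  # ≈ 0.15-0.16
    return pi * zwd_mm

# e.g. ZWD = 150 mm with a mean tropospheric temperature Tm = 275 K
print(round(pw_from_zwd(150.0, 275.0), 1))  # ≈ 23.5 mm
```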
Jha, Abhinav K.; Kupinski, Matthew A.; Rodríguez, Jeffrey J.; Stephen, Renu M.; Stopeck, Alison T.
2012-01-01
In many studies, the estimation of the apparent diffusion coefficient (ADC) of lesions in visceral organs in diffusion-weighted (DW) magnetic resonance images requires an accurate lesion-segmentation algorithm. To evaluate these lesion-segmentation algorithms, region-overlap measures are used currently. However, the end task from the DW images is accurate ADC estimation, and the region-overlap measures do not evaluate the segmentation algorithms on this task. Moreover, these measures rely on the existence of gold-standard segmentation of the lesion, which is typically unavailable. In this paper, we study the problem of task-based evaluation of segmentation algorithms in DW imaging in the absence of a gold standard. We first show that using manual segmentations instead of gold-standard segmentations for this task-based evaluation is unreliable. We then propose a method to compare the segmentation algorithms that does not require gold-standard or manual segmentation results. The no-gold-standard method estimates the bias and the variance of the error between the true ADC values and the ADC values estimated using the automated segmentation algorithm. The method can be used to rank the segmentation algorithms on the basis of both accuracy and precision. We also propose consistency checks for this evaluation technique. PMID:22713231
ZWD time series analysis derived from NRT data processing. A regional study of PW in Greece.
NASA Astrophysics Data System (ADS)
Pikridas, Christos; Balidakis, Kyriakos; Katsougiannopoulos, Symeon
2015-04-01
ZWD (zenith wet/non-hydrostatic delay) estimates have been routinely derived in near real time by the newly established Analysis Center in the Department of Geodesy and Surveying of Aristotle University of Thessaloniki (DGS/AUT-AC), in the framework of E-GVAP (the EUMETNET GNSS water vapour project), since October 2014. This process takes place on an hourly basis and yields, among other products, station coordinates and tropospheric parameter estimates for a network of 90+ permanent GNSS (Global Navigation Satellite System) stations distributed across the wider Hellenic region. In this study, the temporal and spatial variability of ZWD estimates was examined, as well as their relation to coordinate series extracted from both float and fixed solutions of the initial phase ambiguities. For this investigation, Bernese GNSS Software v5.2 was used to process the 6-month dataset from the aforementioned network. For time series analysis we employed techniques such as the Generalized Lomb-Scargle periodogram and Burg's maximum entropy method, owing to inefficiencies of the Discrete Fourier Transform when applied to the test dataset. The analysis yielded interesting results for further geophysical interpretation. In addition, the spatial and temporal distributions of Precipitable Water vapour (PW) obtained from both ZWD estimates and ERA-Interim reanalysis grids were investigated.
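For unevenly sampled, gappy series like hourly ZWD estimates, the Lomb-Scargle periodogram avoids the DFT's even-sampling requirement. A minimal pure-Python sketch of the classical form (the study used the Generalized variant; this sketch omits its weighting and floating-mean terms), with a synthetic stand-in series:

```python
import math

def lomb_scargle(t, x, freqs):
    """Classical Lomb-Scargle periodogram for unevenly sampled data."""
    n = len(t)
    mean = sum(x) / n
    xc = [v - mean for v in x]  # remove the sample mean
    power = []
    for f in freqs:
        w = 2.0 * math.pi * f
        # phase offset tau makes the sine and cosine terms orthogonal
        tau = math.atan2(sum(math.sin(2.0 * w * ti) for ti in t),
                         sum(math.cos(2.0 * w * ti) for ti in t)) / (2.0 * w)
        c = [math.cos(w * (ti - tau)) for ti in t]
        s = [math.sin(w * (ti - tau)) for ti in t]
        xc_dot_c = sum(v * ci for v, ci in zip(xc, c))
        xc_dot_s = sum(v * si for v, si in zip(xc, s))
        power.append(0.5 * (xc_dot_c ** 2 / sum(ci * ci for ci in c) +
                            xc_dot_s ** 2 / sum(si * si for si in s)))
    return power

# irregularly sampled, noiseless 0.2 cycle/hour oscillation (a synthetic
# stand-in for an hourly-but-gappy ZWD series)
t = [0.0, 1.1, 2.3, 3.1, 4.7, 5.2, 6.9, 7.4, 8.8, 9.5, 11.0, 12.6, 13.1, 14.9]
x = [math.sin(2.0 * math.pi * 0.2 * ti) for ti in t]
freqs = [0.05 * k for k in range(1, 10)]  # 0.05 ... 0.45 cycles/hour
p = lomb_scargle(t, x, freqs)
best = freqs[p.index(max(p))]
print(round(best, 2))
```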
Shear Wave Velocity Imaging Using Transient Electrode Perturbation: Phantom and ex vivo Validation
Varghese, Tomy; Madsen, Ernest L.
2011-01-01
This paper presents a new shear wave velocity imaging technique to monitor radio-frequency and microwave ablation procedures, termed electrode vibration elastography. A piezoelectric actuator attached to an ablation needle is transiently vibrated to generate shear waves that are tracked at high frame rates. The time-to-peak algorithm is used to reconstruct the shear wave velocity and thereby the shear modulus variations. The feasibility of electrode vibration elastography is demonstrated using finite element models and ultrasound simulations, tissue-mimicking phantoms simulating fully ablated (phantom 1) and partially ablated (phantom 2) regions, and an ex vivo bovine liver ablation experiment. In phantom experiments, good boundary delineation was observed. Shear wave velocity estimates were within 7% of mechanical measurements in phantom 1 and within 17% in phantom 2. Good boundary delineation was also demonstrated in the ex vivo experiment. The shear wave velocity estimates inside the ablated region were higher than mechanical testing estimates, but estimates in the untreated tissue were within 20% of mechanical measurements. A comparison of electrode vibration elastography and electrode displacement elastography showed the complementary information that they can provide. Electrode vibration elastography shows promise as an imaging modality that provides ablation boundary delineation and quantitative information during ablation procedures. PMID:21075719
Association of earthquakes and faults in the San Francisco Bay area using Bayesian inference
Wesson, R.L.; Bakun, W.H.; Perkins, D.M.
2003-01-01
Bayesian inference provides a method to use seismic intensity data or instrumental locations, together with geologic and seismologic data, to make quantitative estimates of the probabilities that specific past earthquakes are associated with specific faults. Probability density functions are constructed for the location of each earthquake, and these are combined with prior probabilities through Bayes' theorem to estimate the probability that an earthquake is associated with a specific fault. Results using this method are presented here for large, preinstrumental, historical earthquakes and for recent earthquakes with instrumental locations in the San Francisco Bay region. The probabilities for individual earthquakes can be summed to construct a probabilistic frequency-magnitude relationship for a fault segment. Other applications of the technique include the estimation of the probability of background earthquakes, that is, earthquakes not associated with known or considered faults, and the estimation of the fraction of the total seismic moment associated with earthquakes less than the characteristic magnitude. Results for the San Francisco Bay region suggest that potentially damaging earthquakes with magnitudes less than the characteristic magnitudes should be expected. Comparisons of earthquake locations and the surface traces of active faults as determined from geologic data show significant disparities, indicating that a complete understanding of the relationship between earthquakes and faults remains elusive.
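The core Bayesian update described above reduces to renormalizing likelihood-weighted priors; a minimal sketch with invented numbers (real applications evaluate a location probability density over each fault's geometry):

```python
def fault_posteriors(priors, likelihoods):
    """Bayes' theorem: P(fault_i | evidence) is proportional to
    P(evidence | fault_i) * P(fault_i), renormalized over all candidates."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# hypothetical example: three candidate faults for one historical earthquake
priors = [0.5, 0.3, 0.2]          # e.g. from geologic slip-rate budgets
likelihoods = [0.02, 0.10, 0.01]  # location PDF evaluated at each fault
post = fault_posteriors(priors, likelihoods)
print([round(v, 3) for v in post])  # [0.238, 0.714, 0.048]
```

Summing such posteriors over many events gives the probabilistic frequency-magnitude relationship for a fault segment mentioned above.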
Validation of Airborne FMCW Radar Measurements of Snow Thickness Over Sea Ice in Antarctica
NASA Technical Reports Server (NTRS)
Galin, Natalia; Worby, Anthony; Markus, Thorsten; Leuschen, Carl; Gogineni, Prasad
2012-01-01
Antarctic sea ice and its snow cover are integral components of the global climate system, yet many aspects of their vertical dimensions are poorly understood, making their representation in global climate models poor. Remote sensing is the key to monitoring the dynamic nature of sea ice and its snow cover. Reliable and accurate snow thickness data are currently a highly sought after data product. Remotely sensed snow thickness measurements can provide an indication of precipitation levels, predicted to increase with effects of climate change in the polar regions. Airborne techniques provide a means for regional-scale estimation of snow depth and distribution. Accurate regional-scale snow thickness data will also facilitate an increase in the accuracy of sea ice thickness retrieval from satellite altimeter freeboard estimates. The airborne data sets are easier to validate with in situ measurements and are better suited to validating satellite algorithms when compared with in situ techniques. This is primarily due to two factors: a better chance of getting coincident in situ and airborne data sets, and the tractability of comparison between an in situ data set and the airborne data set averaged over the footprint of the antennas. A 28-GHz frequency modulated continuous wave (FMCW) radar loaned by the Center for Remote Sensing of Ice Sheets to the Australian Antarctic Division is used to measure snow thickness over sea ice in East Antarctica. Given the radar design parameters, the expected performance parameters of the radar are summarized. The necessary conditions for unambiguous identification of the air-snow and snow-ice layers for the radar are presented. Roughnesses of the snow and ice surfaces are found to be dominant determinants in the effectiveness of layer identification for this radar. Finally, this paper presents the first in situ validated snow thickness estimates over sea ice in Antarctica derived from an FMCW radar on a helicopter-borne platform.
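Once the air-snow and snow-ice returns are resolved, snow thickness follows from the two-way travel-time difference between them and the wave speed in snow; a minimal sketch (the 2.5 ns delay and the dry-snow relative permittivity of 1.53 are assumed illustrative values, not parameters from the study):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def snow_thickness_m(delta_t_ns, eps_snow=1.53):
    """Snow thickness from the two-way travel-time difference between the
    air-snow and snow-ice returns; the wave speed in snow is c/sqrt(eps)."""
    v_snow = C / math.sqrt(eps_snow)       # propagation speed in snow, m/s
    return v_snow * (delta_t_ns * 1e-9) / 2.0  # divide by 2: two-way path

# e.g. a 2.5 ns separation between the two interface returns
print(round(snow_thickness_m(2.5), 2))  # ≈ 0.3 m
```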
NASA Astrophysics Data System (ADS)
Hong, Yang
Precipitation estimation from satellite information (visible, IR, or microwave) is becoming increasingly imperative because of its high spatial/temporal resolution and broad coverage, unparalleled by ground-based data. After decades of effort on rainfall estimation using IR imagery, the limitations/uncertainties of the existing techniques have been identified as: (1) pixel-based, local-scale feature extraction; (2) an IR temperature threshold to define rain/no-rain clouds; (3) an indirect relationship between rain rate and cloud-top temperature; (4) lumped techniques to model the high variability of cloud-precipitation processes; (5) coarse scales of rainfall products. As a continuation of these studies, a new version of Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN), called the Cloud Classification System (CCS), has been developed in this dissertation to cope with these limitations. CCS includes three consecutive components: (1) a hybrid segmentation algorithm, namely Hierarchically Topographical Thresholding and Stepwise Seeded Region Growing (HTH-SSRG), to segment satellite IR images into separated cloud patches; (2) a 3D feature extraction procedure to retrieve both pixel-based local-scale and patch-based large-scale features of each cloud patch at various heights; (3) an ANN model, the Self-Organizing Nonlinear Output (SONO) network, to classify cloud patches into similarity-based clusters using the Self-Organizing Feature Map (SOFM) and then calibrate hundreds of multi-parameter nonlinear functions to identify the relationship between each cloud type and its underlying precipitation characteristics, using the Probability Matching Method and Multi-Start Downhill Simplex optimization techniques.
The model was calibrated over the southwestern United States (100°-130°W and 25°-45°N) first and then adaptively adjusted to the study region of the North American Monsoon Experiment (65°-135°W and 10°-50°N) using observations from Geostationary Operational Environmental Satellite (GOES) IR imagery, the Next Generation Radar (NEXRAD) rainfall network, and Tropical Rainfall Measuring Mission (TRMM) microwave rain rate estimates. CCS functions as a distributed model that first identifies cloud patches and then dispatches a different, best-matching cloud-precipitation function to each cloud patch to estimate instantaneous rain rate at high spatial resolution (4 km) and the full temporal resolution of GOES IR images (every 30 minutes). Evaluated over a range of spatial and temporal scales, the performance of CCS consistently compared favorably with the GOES Precipitation Index (GPI), Universal Adjusted GPI (UAGPI), PERSIANN, and Auto-Estimator (AE) algorithms. In particular, the large number of nonlinear functions and optimum IR-rain rate thresholds of the CCS model are highly variable, reflecting the complexity of the dominant cloud-precipitation processes from cloud patch to cloud patch over various regions. As a result, CCS can capture variability in rain rate at small scales more successfully than existing algorithms and can potentially provide a rainfall product from GOES IR-NEXRAD-TRMM TMI (SSM/I) at 0.12° × 0.12° and 3-hour resolution with relatively low standard error (≈3.0 mm/hr) and a high correlation coefficient (≈0.65).
Progress in Turbulence Detection via GNSS Occultation Data
NASA Technical Reports Server (NTRS)
Cornman, L. B.; Goodrich, R. K.; Axelrad, P.; Barlow, E.
2012-01-01
The increased availability of radio occultation (RO) data offers the ability to detect and study turbulence in the Earth's atmosphere. An analysis of how RO data can be used to determine the strength and location of turbulent regions is presented. This includes the derivation of a model for the power spectrum of the log-amplitude and phase fluctuations of the permittivity (or index of refraction) field. The bulk of the paper is then concerned with the estimation of the model parameters. Parameter estimators are introduced and some of their statistical properties are studied. These estimators are then applied to simulated log-amplitude RO signals. This includes the analysis of global statistics derived from a large number of realizations, as well as case studies that illustrate various specific aspects of the problem. Improvements to the basic estimation methods are discussed, and their beneficial properties are illustrated. The estimation techniques are then applied to real occultation data. Only two cases are presented, but they illustrate some of the salient features inherent in real data.
On non-parametric maximum likelihood estimation of the bivariate survivor function.
Prentice, R L
The likelihood function for the bivariate survivor function F, under independent censorship, is maximized to obtain a non-parametric maximum likelihood estimator F̂. F̂ may or may not be unique depending on the configuration of singly- and doubly-censored pairs. The likelihood function can be maximized by placing all mass on the grid formed by the uncensored failure times, or on half lines beyond the failure time grid, or in the upper right quadrant beyond the grid. By accumulating the mass along lines (or regions) where the likelihood is flat, one obtains a partially maximized likelihood as a function of parameters that can be uniquely estimated. The score equations corresponding to these point mass parameters are derived, using a Lagrange multiplier technique to ensure unit total mass, and a modified Newton procedure is used to calculate the parameter estimates in some limited simulation studies. Some considerations for the further development of non-parametric bivariate survivor function estimators are briefly described.
Sampling design optimization for spatial functions
Olea, R.A.
1984-01-01
A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. © 1984 Plenum Publishing Corporation.
Boundary methods for mode estimation
NASA Astrophysics Data System (ADS)
Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.
1999-08-01
This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable, in terms of both accuracy and computation, to other popular mode estimation techniques currently found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to them. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion for the MOG and k-means techniques is the Akaike Information Criterion (AIC).
Non-destructive evaluation of laboratory scale hydraulic fracturing using acoustic emission
NASA Astrophysics Data System (ADS)
Hampton, Jesse Clay
The primary objective of this research is to develop techniques to characterize hydraulic fractures and fracturing processes using acoustic emission (AE) monitoring, based on laboratory scale hydraulic fracturing experiments. Individual microcrack AE source characterization is performed to understand the failure mechanisms associated with small failures along pre-existing discontinuities and grain boundaries. Individual microcrack analysis methods include moment tensor inversion techniques to elucidate the mode of failure, crack slip and crack normal direction vectors, and the relative volumetric deformation of an individual microcrack. Differentiation between individual microcrack analysis and AE cloud based techniques is studied in efforts to refine discrete fracture network (DFN) creation and regional damage quantification of densely fractured media. Regional damage estimations from combinations of individual microcrack analyses and AE cloud density plotting are used to investigate the usefulness of weighting cloud based AE analysis techniques with microcrack source data. Two granite types were used in several sample configurations, including multi-block systems. Laboratory hydraulic fracturing was performed with sample sizes ranging from 15 × 15 × 25 cm to 30 × 30 × 25 cm in both unconfined and true-triaxially confined stress states using different types of materials. Hydraulic fracture testing in rock block systems containing a large natural fracture was investigated in terms of AE response throughout fracture interactions. Investigations at differing scales showed the usefulness of individual microcrack characterization as well as DFN and cloud based techniques. Individual microcrack characterization weighting cloud based techniques correlated well with post-test damage evaluations.
NASA Astrophysics Data System (ADS)
Maurya, S. P.; Singh, K. H.; Singh, N. P.
2018-05-01
In the present study, three recently developed geostatistical methods, single-attribute analysis, multi-attribute analysis, and a probabilistic neural network algorithm, have been used to predict porosity in the inter-well region for the Blackfoot field, an oil field in Alberta, Canada. These techniques make use of seismic attributes generated by model-based inversion and colored inversion techniques. The principal objective of the study is to find the suitable combination of seismic inversion and geostatistical techniques to predict porosity and to identify prospective zones in the 3D seismic volume. The porosity estimated from these geostatistical approaches is corroborated with the well-log porosity. The results suggest that all three implemented geostatistical methods are efficient and reliable for predicting porosity, but the multi-attribute and probabilistic neural network analyses provide more accurate and higher-resolution porosity sections. A low-impedance (6000-8000 (m/s)(g/cc)) and high-porosity (> 15%) zone is interpreted from the inverted impedance and porosity sections, respectively, in the 1060-1075 ms time interval and is characterized as a reservoir. The qualitative and quantitative results demonstrate that, of all the employed geostatistical methods, the probabilistic neural network along with model-based inversion is the most efficient method for predicting porosity in the inter-well region.
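A minimal sketch of the multi-attribute step, assuming a plain linear regression of well-log porosity on synthetic attributes (the attributes, coefficients, and noise level are invented for illustration; the paper's probabilistic neural network is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
# Hypothetical seismic attributes at well locations
impedance = rng.uniform(6000, 11000, n)   # acoustic impedance
envelope = rng.uniform(0.0, 1.0, n)       # amplitude envelope
freq = rng.uniform(10.0, 60.0, n)         # instantaneous frequency
# Assumed "true" relation: porosity (%) falls with impedance
porosity = 45.0 - 0.003 * impedance + 2.0 * envelope + rng.normal(0, 0.5, n)

# Multi-attribute linear regression (least squares with an intercept)
A = np.column_stack([np.ones(n), impedance, envelope, freq])
coef, *_ = np.linalg.lstsq(A, porosity, rcond=None)
predicted = A @ coef
rmse = np.sqrt(np.mean((predicted - porosity) ** 2))
print(rmse)  # close to the 0.5 noise level
```

The fitted coefficients would then be applied to the attribute volumes to produce a porosity section between the wells; nonlinear relations are what motivate the neural network step.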
Moho Modeling Using FFT Technique
NASA Astrophysics Data System (ADS)
Chen, Wenjin; Tenzer, Robert
2017-04-01
To improve numerical efficiency, the Fast Fourier Transform (FFT) technique was incorporated into Parker-Oldenburg's method for regional gravimetric Moho recovery, which assumes a planar approximation of the Earth. In this study, we extend this method to global applications by assuming a spherical approximation of the Earth. In particular, we utilize the FFT technique for a global Moho recovery, which is practically realized in two numerical steps. Gravimetric forward modeling is first applied, based on methods for a spherical harmonic analysis and synthesis of the global gravity and lithospheric structure models, to compute the refined gravity field, which comprises mainly the gravitational signature of the Moho geometry. The gravimetric inverse problem is then solved iteratively to determine the Moho depth. The application of the FFT technique to both numerical steps reduces the computation time to a fraction of that required without this fast algorithm. The developed numerical procedures are used to estimate the Moho depth globally, and the gravimetric result is validated against the global (CRUST1.0) and regional (ESC) seismic Moho models. The comparison reveals a relatively good agreement between the gravimetric and seismic models, with an RMS of differences (4-5 km) at the level of the expected uncertainties of the input datasets, and without significant systematic bias.
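The planar Parker-Oldenburg forward step can be sketched in one dimension as follows (the density contrast, mean depth, and undulation are assumed values, and only the first-order term of Parker's expansion is kept; the paper's spherical extension is not reproduced):

```python
import numpy as np

G = 6.674e-11        # gravitational constant
drho = 480.0         # crust-mantle density contrast, kg/m^3 (assumed)
z0 = 30e3            # mean Moho depth, m (assumed)
L, N = 1000e3, 256   # profile length and grid size

x = np.linspace(0.0, L, N, endpoint=False)
h = 2e3 * np.sin(2 * np.pi * 3 * x / L)   # Moho undulation about z0, m

# First-order Parker term: FFT of the undulation, scaled by 2*pi*G*drho
# and upward-continued by exp(-|k| z0) to the surface
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
g = np.fft.ifft(2 * np.pi * G * drho * np.exp(-np.abs(k) * z0) * np.fft.fft(h)).real
peak_mgal = g.max() * 1e5
print(peak_mgal)  # a few tens of mGal for these parameters
```

The inverse step iterates this relation: divide the observed anomaly spectrum by the same kernel, truncate the amplified high wavenumbers, and update the undulation until convergence, which is where the FFT's speed pays off.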
Regional climate models downscaling in the Alpine area with multimodel superensemble
NASA Astrophysics Data System (ADS)
Cane, D.; Barbarino, S.; Renier, L. A.; Ronchi, C.
2013-05-01
The climatic scenarios show a strong warming signal in the Alpine area already by the mid-21st century. The climate simulations, however, even when obtained with regional climate models (RCMs), are affected by strong errors when compared with observations, due both to their difficulties in representing the complex orography of the Alps and to limitations in their physical parametrizations. Therefore, the aim of this work is to reduce these model biases by using a specific post-processing statistical technique, in order to obtain a more suitable projection of climate change scenarios in the Alpine area. For our purposes we used a selection of RCM runs developed in the framework of the ENSEMBLES project. They were carefully chosen with the aim of maximising the variety of driving global climate models and of the RCMs themselves, calculated on the SRES A1B scenario. The reference observations for the greater Alpine area were extracted from the European dataset E-OBS (produced by the ENSEMBLES project), which has an available resolution of 25 km. For the study area of Piedmont, daily temperature and precipitation observations (covering the period from 1957 to the present) were carefully gridded on a 14 km grid over the Piedmont region through the use of an optimal interpolation technique. Hence, we applied the multimodel superensemble technique to temperature fields, reducing the high biases of the RCM temperature fields compared to observations in the control period. We also proposed the application to RCMs of a brand-new probabilistic multimodel superensemble dressing technique, already applied successfully to weather forecast models, in order to estimate precipitation fields, with a careful description of precipitation probability density functions conditioned on the model outputs.
This technique reduced the strong precipitation overestimation by the RCMs over the Alpine chain and reproduced the monthly behaviour of precipitation in the control period well.
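A minimal sketch of the superensemble idea, assuming a Krishnamurti-style least-squares weighting of model anomalies in a training (control) period; all series here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(365)
obs = 10.0 + 8.0 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 1.0, t.size)
# Three hypothetical RCM series: the observed signal plus bias and noise
models = np.stack([obs + b + rng.normal(0, s, t.size)
                   for b, s in [(3.0, 1.5), (-2.0, 2.0), (5.0, 1.0)]])

train = slice(0, 250)    # "control period" used to fit the weights
# Anomalies with respect to each series' training-period mean
m_anom = models - models[:, train].mean(axis=1, keepdims=True)
o_anom = obs - obs[train].mean()
# Superensemble weights from least-squares regression in the control period
w, *_ = np.linalg.lstsq(m_anom[:, train].T, o_anom[train], rcond=None)

forecast = obs[train].mean() + w @ m_anom
rmse_se = np.sqrt(np.mean((forecast[250:] - obs[250:]) ** 2))
rmse_best = min(np.sqrt(np.mean((m - obs)[250:] ** 2)) for m in models)
print(rmse_se, rmse_best)   # the superensemble beats the best single model
```

The anomaly formulation removes each model's bias automatically, and the regression downweights the noisier models; the probabilistic dressing version replaces the single weighted value with a conditional density.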
NASA Technical Reports Server (NTRS)
Ziemke, J. R.; Chandra, S.; Bhartia, P. K.; Einaudi, Franco (Technical Monitor)
2000-01-01
A new technique, denoted cloud slicing, has been developed for estimating tropospheric ozone profile information. All previous methods using satellite data were capable of estimating only the total column of ozone in the troposphere. Cloud slicing takes advantage of the opacity of water clouds to ultraviolet-wavelength radiation. Measurements of above-cloud column ozone from the Nimbus 7 Total Ozone Mapping Spectrometer (TOMS) instrument are combined with Nimbus 7 Temperature-Humidity Infrared Radiometer (THIR) cloud-top pressure data to derive ozone column amounts in the upper troposphere. In this study, tropical TOMS and THIR data for the period 1979-1984 are analyzed. By combining total tropospheric column ozone (denoted TCO) measurements from the convective cloud differential (CCD) method with 100-400 hPa upper-tropospheric column ozone amounts from cloud slicing, it is possible to estimate 400-1000 hPa lower-tropospheric column ozone and evaluate its spatial and temporal variability. Results for both the upper and lower tropical troposphere show a year-round zonal wavenumber 1 pattern in column ozone, with the largest amounts in the Atlantic region (up to approximately 15 DU in the 100-400 hPa pressure band and approximately 25-30 DU in the 400-1000 hPa pressure band). Upper-tropospheric ozone derived from cloud slicing shows maximum column amounts in the Atlantic region in the June-August and September-November seasons, similar to the seasonal variability of CCD-derived TCO in the region. For the lower troposphere, the largest column amounts occur in the September-November season over Brazil in South America and also over southern Africa. Localized increases in lower-tropospheric ozone in the tropics are found over the northern region of South America around August and off the west coast of equatorial Africa in the March-May season.
Time series analyses for several regions in South America and Africa show an anomalous increase in lower-tropospheric ozone around the month of March that is not observed in the upper troposphere. The eastern Pacific shows weak seasonal variability of upper-, lower-, and total tropospheric ozone compared to the western Pacific, which shows the largest TCO amounts in both hemispheres around the spring months. Ozone in the western Pacific is expected to show greater variability, caused by strong convection, pollution and biomass burning, land/sea contrast, and monsoon developments.
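A minimal sketch of the cloud-slicing idea: if the mixing ratio is constant in a pressure band, above-cloud column ozone grows linearly with cloud-top pressure, so the fitted slope times the band width gives the band's column. The conversion constant, stratospheric column, and mixing ratio below are assumptions for the round trip, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(4)
C = 0.789            # DU per (ppmv * hPa) for a constant mixing ratio (assumed)
strat_col = 250.0    # ozone column above 100 hPa, DU (assumed)
x_true = 0.05        # upper-tropospheric mixing ratio, ppmv (assumed)

# Simulated above-cloud columns for cloud tops between 100 and 400 hPa
p_cloud = rng.uniform(100.0, 400.0, 150)
col = strat_col + C * x_true * (p_cloud - 100.0) + rng.normal(0, 1.0, 150)

# Slope of column vs. cloud-top pressure gives the mean mixing ratio,
# and slope times the band width gives the 100-400 hPa column
slope, intercept = np.polyfit(p_cloud, col, 1)
band_column = slope * (400.0 - 100.0)
print(band_column)   # roughly 12 DU for these assumed values
```

Subtracting this band column from CCD-derived TCO is what isolates the 400-1000 hPa lower-tropospheric column in the study.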
Regional climate models downscaling in the Alpine area with Multimodel SuperEnsemble
NASA Astrophysics Data System (ADS)
Cane, D.; Barbarino, S.; Renier, L. A.; Ronchi, C.
2012-08-01
The climatic scenarios show a strong warming signal in the Alpine area already by the mid-21st century. The climate simulations, however, even when obtained with Regional Climate Models (RCMs), are affected by strong errors when compared with observations, due to their difficulties in representing the complex orography of the Alps and to limitations in their physical parametrization. Therefore, the aim of this work is to reduce these model biases by using a specific post-processing statistical technique, in order to obtain a more suitable projection of climate change scenarios in the Alpine area. For our purposes we use a selection of RCM runs from the ENSEMBLES project, carefully chosen in order to maximise the variety of driving Global Climate Models and of the RCMs themselves, calculated on the SRES A1B scenario. The reference observations for the Greater Alpine Area are extracted from the European dataset E-OBS, produced by the ENSEMBLES project, with an available resolution of 25 km. For the study area of Piedmont, daily temperature and precipitation observations (1957-present) were carefully gridded on a 14-km grid over the Piedmont Region with an Optimal Interpolation technique. Hence, we applied the Multimodel SuperEnsemble technique to temperature fields, reducing the high biases of the RCM temperature fields compared to observations in the control period. We also propose the first application to RCMs of a brand-new probabilistic Multimodel SuperEnsemble Dressing technique to estimate precipitation fields, already applied successfully to weather forecast models, with a careful description of precipitation Probability Density Functions conditioned on the model outputs. This technique reduces the strong precipitation overestimation by the RCMs over the Alpine chain and reproduces the monthly behaviour of precipitation in the control period well.
The estimation of probable maximum precipitation: the case of Catalonia.
Casas, M Carmen; Rodríguez, Raül; Nieto, Raquel; Redaño, Angel
2008-12-01
A brief overview of the different techniques used to estimate the probable maximum precipitation (PMP) is presented. As a particular case, the 1-day PMP over Catalonia has been calculated and mapped with a high spatial resolution. For this purpose, the annual maximum daily rainfall series from 145 pluviometric stations of the Instituto Nacional de Meteorología (Spanish Weather Service) in Catalonia have been analyzed. In order to obtain values of PMP, an enveloping frequency factor curve based on the actual rainfall data of stations in the region has been developed. This enveloping curve has been used to estimate 1-day PMP values of all the 145 stations. Applying the Cressman method, the spatial analysis of these values has been achieved. Monthly precipitation climatological data, obtained from the application of Geographic Information Systems techniques, have been used as the initial field for the analysis. The 1-day PMP at 1 km² spatial resolution over Catalonia has been objectively determined, varying from 200 to 550 mm. Structures with wavelength longer than approximately 35 km can be identified and, despite their general concordance, the obtained 1-day PMP spatial distribution shows remarkable differences compared to the annual mean precipitation arrangement over Catalonia.
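A minimal sketch of the statistical (frequency-factor) approach to PMP, assuming a Hershfield-type estimate with an illustrative enveloping factor; the actual enveloping curve in the study is derived from the regional station data:

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic annual-maximum daily rainfall series for one station, mm
annual_max = rng.gumbel(loc=60.0, scale=20.0, size=50)

# Hershfield-type estimate: PMP = mean + Km * standard deviation, where Km
# is an enveloping frequency factor (the value 15 here is illustrative only)
Km = 15.0
pmp = annual_max.mean() + Km * annual_max.std(ddof=1)
print(pmp)
```

By construction the estimate envelops every observed maximum; mapping it at high resolution then becomes a spatial-analysis problem, handled in the study with the Cressman method.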
Interferometric estimation of ice sheet motion and topography
NASA Technical Reports Server (NTRS)
Joughin, Ian; Kwok, Ron; Fahnestock, Mark; Winebrenner, Dale; Tulaczyk, Slawek; Gogineni, Prasad
1997-01-01
With ERS-1/2 satellite radar interferometry, it is possible to make measurements of glacier motion with high accuracy and fine spatial resolution. Interferometric techniques were applied to map velocity and topography for several outlet glaciers in Greenland. For the Humboldt and Petermann glaciers, data from several adjacent tracks were combined to make a wide-area map that includes the enhanced flow regions of both glaciers. The discharge flux of the Petermann glacier upstream of the grounding line was estimated, thereby establishing the potential use of ERS-1/2 interferometric data for monitoring ice-sheet discharge. Interferograms collected along a single track are sensitive to only one component of motion. By utilizing data from ascending and descending passes and by making a surface-parallel flow assumption, it is possible to measure the full three-dimensional vector flow field. The application of this technique for an area on the Ryder glacier is demonstrated. Finally, ERS-1/2 interferograms were used to observe a mini-surge on the Ryder glacier that occurred in autumn of 1995.
Geostatistical assessment of Pb in soil around Paris, France.
Saby, N; Arrouays, D; Boulonne, L; Jolivet, C; Pochot, A
2006-08-15
This paper presents a survey of soil Pb contamination around Paris (France) using the French soil monitoring network. The first aim of this study is to estimate the total amount of anthropogenic Pb inputs in soils and to distinguish Pb due to diffuse pollution from geochemical background Pb. Secondly, this study tries to find the main controlling factors of the spatial distribution of anthropogenic Pb. We used the technique of relative topsoil enhancement to evaluate the anthropogenic stock of Pb, and we performed lognormal kriging to map the regional Pb distribution. The results show a strong gradient of the anthropogenic stock of Pb around the urban Paris area. We estimate a total anthropogenic stock of Pb close to 143,000 metric tons, which corresponds to an average accumulation of 5.9 t km⁻². Our study suggests that a grid-based survey can help to quantify diffuse Pb contamination by using robust techniques of calculation, and that it might also be used to validate predictions of deposition models.
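The relative topsoil enhancement idea reduces to simple arithmetic: the excess of topsoil over background concentration, times the soil mass per unit area, gives the anthropogenic stock. A sketch with invented concentrations and soil properties (not the survey's data) gives a stock of the same order as the reported 5.9 t km⁻²:

```python
# Hypothetical concentrations and soil properties (not from the survey)
top_pb = 45.0           # topsoil Pb, mg/kg
background_pb = 25.0    # geochemical background Pb, mg/kg
bulk_density = 1300.0   # topsoil bulk density, kg/m^3
depth = 0.30            # topsoil thickness, m

# Excess (anthropogenic) Pb mass per unit area
excess_kg_m2 = (top_pb - background_pb) * 1e-6 * bulk_density * depth
per_km2_t = excess_kg_m2 * 1e6 / 1e3   # convert kg/m^2 to t/km^2
print(round(per_km2_t, 1))  # → 7.8
```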
Statistical Inference of a RANS closure for a Jet-in-Crossflow simulation
NASA Astrophysics Data System (ADS)
Heyse, Jan; Edeling, Wouter; Iaccarino, Gianluca
2016-11-01
The jet-in-crossflow is found in several engineering applications, such as discrete film cooling for turbine blades, where a coolant injected through holes in the blade's surface protects the component from the hot gases leaving the combustion chamber. Experimental measurements using MRI techniques have been completed for a single-hole injection into a turbulent crossflow, providing a fully 3D averaged velocity field. For such flows of engineering interest, Reynolds-Averaged Navier-Stokes (RANS) turbulence closure models are often the only viable computational option. However, RANS models are known to provide poor predictions in the region close to the injection point. Since these models are calibrated on simple canonical flow problems, the obtained closure-coefficient estimates are unlikely to extrapolate well to more complex flows. We will therefore calibrate the parameters of a RANS model using statistical inference techniques informed by the experimental jet-in-crossflow data. The obtained probabilistic parameter estimates can in turn be used to compute flow fields with quantified uncertainty. This work was supported by a Stanford Graduate Fellowship in Science and Engineering.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chung, S.K.; Kim, H.S.; Kim, C.G.
1998-05-01
A new instantaneous torque-control strategy is presented for high-performance control of a permanent magnet (PM) synchronous motor. In order to deal with the torque-pulsation problem of a PM synchronous motor in the low-speed region, new torque estimation and control techniques are proposed. The linkage flux of a PM synchronous motor is estimated using a model reference adaptive system technique, and the developed torque is instantaneously controlled by the proposed torque controller, which combines variable structure control (VSC) with space-vector pulse-width modulation (PWM). The proposed control provides the advantage of reducing the torque pulsation caused by the nonsinusoidal flux distribution. This control strategy is applied to a high-torque PM synchronous motor drive system for direct-drive applications and implemented in software on the TMS320C30 digital signal processor (DSP). Simulations and experiments are carried out for this system, and the results demonstrate the effectiveness of the proposed control.
Yoo, Do Guen; Lee, Ho Min; Sadollah, Ali; Kim, Joong Hoon
2015-01-01
Water supply systems are mainly classified into branched and looped network systems. The main difference between these two systems is that, in a branched network system, the flow within each pipe is a known value, whereas in a looped network system, the flow in each pipe is considered an unknown value. Therefore, an analysis of a looped network system is a more complex task. This study aims to develop a technique for estimating the optimal pipe diameter for a looped agricultural irrigation water supply system using a harmony search algorithm, which is an optimization technique. This study mainly serves two purposes. The first is to develop an algorithm and a program for estimating a cost-effective pipe diameter for agricultural irrigation water supply systems using optimization techniques. The second is to validate the developed program by applying the proposed optimized cost-effective pipe diameter to an actual study region (Saemangeum project area, zone 6). The results suggest that the optimal design program, which applies an optimization theory and enhances user convenience, can be effectively applied for the real systems of a looped agricultural irrigation water supply.
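A toy sketch of harmony search applied to discrete pipe sizing (the cost function, capacity constraint, and all parameter values are invented; a real design would evaluate hydraulics with a looped-network solver rather than this penalty shortcut):

```python
import numpy as np

rng = np.random.default_rng(6)
diameters = np.array([0.1, 0.15, 0.2, 0.25, 0.3, 0.4])  # available sizes, m
lengths = np.array([500.0, 300.0, 400.0])               # pipe lengths, m
demand = 0.09   # required total capacity, m^3/s (toy constraint)

def cost(d):
    # Toy objective: material cost grows with d^1.5; a large penalty is
    # added when a simplistic capacity estimate (v = 1 m/s) misses demand
    capacity = np.sum(np.pi * (d / 2.0) ** 2)
    penalty = 1e8 * max(0.0, demand - capacity)
    return np.sum(1000.0 * d ** 1.5 * lengths) + penalty

hms, hmcr, par, iters = 10, 0.9, 0.3, 2000   # harmony search parameters
memory = [rng.choice(diameters, size=3) for _ in range(hms)]

for _ in range(iters):
    new = np.empty(3)
    for j in range(3):
        if rng.random() < hmcr:                 # draw from harmony memory
            new[j] = memory[rng.integers(hms)][j]
            if rng.random() < par:              # pitch adjust: neighbor size
                idx = int(np.searchsorted(diameters, new[j]))
                idx = int(np.clip(idx + rng.choice([-1, 1]), 0, len(diameters) - 1))
                new[j] = diameters[idx]
        else:                                   # random selection
            new[j] = rng.choice(diameters)
    worst = max(range(hms), key=lambda i: cost(memory[i]))
    if cost(new) < cost(memory[worst]):
        memory[worst] = new

best = min(memory, key=cost)
print(best, cost(best))
```

The penalty weight must be large enough that no infeasible combination can undercut a feasible one; otherwise the search converges to an undersized design.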
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wahid, Ali, E-mail: ali.wahid@live.com; Salim, Ahmed Mohamed Ahmed, E-mail: mohamed.salim@petronas.com.my; Yusoff, Wan Ismail Wan, E-mail: wanismail-wanyusoff@petronas.com.my
2016-02-01
Geostatistics, or the statistical approach, is based on the study of temporal and spatial trends, which depend upon spatial relationships to model known information about a variable at unsampled locations. The statistical technique known as kriging was used for petrophysical and facies analysis; it assumes a spatial relationship to model the geological continuity between the known data and the unknown, producing a single best guess of the unknown. Kriging is also known as an optimal interpolation technique, which generates a best linear unbiased estimate of each horizon. The idea is to construct a numerical model of the lithofacies and rock properties that honors the available data, and further to integrate it with interpreted seismic sections, a tectonostratigraphic chart with a short-term sea-level curve, and the regional tectonics of the study area to establish the structural and stratigraphic growth history of the NW Bonaparte Basin. Using the kriging technique, models were built to estimate different parameters, such as horizons, facies, and porosities, in the study area. Variograms were used to identify the spatial relationships in the data, which help to determine the depositional history of the North West (NW) Bonaparte Basin.
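A minimal ordinary kriging sketch with an exponential variogram (all variogram parameters and data are assumed; keeping the nugget on the matrix diagonal, as done here, effectively smooths noisy observations):

```python
import numpy as np

rng = np.random.default_rng(7)

def variogram(h, sill=1.0, vrange=500.0, nugget=0.05):
    # Exponential variogram model (all parameters assumed for illustration)
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / vrange))

def ordinary_krige(xy, z, target):
    # Solve the ordinary kriging system; the extra Lagrange row forces the
    # weights to sum to 1, making the estimate unbiased
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(xy - target, axis=1))
    w = np.linalg.solve(A, b)
    return w[:n] @ z   # best linear unbiased estimate at the target

# Synthetic porosity-like values at 30 scattered "wells" on a 1 km square
xy = rng.uniform(0.0, 1000.0, size=(30, 2))
z = 15.0 + 0.005 * xy[:, 0] + rng.normal(0, 0.2, 30)
est = ordinary_krige(xy, z, np.array([500.0, 500.0]))
print(est)  # near the local trend value at the center of the domain
```

In practice the variogram is first fitted to the experimental semivariances of the well data, which is the step that encodes the spatial relationship the abstract describes.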
Temperature of the plasmasphere from Van Allen Probes HOPE
NASA Astrophysics Data System (ADS)
Genestreti, K. J.; Goldstein, J.; Corley, G. D.; Farner, W.; Kistler, L. M.; Larsen, B. A.; Mouikis, C. G.; Ramnarace, C.; Skoug, R. M.; Turner, N. E.
2017-01-01
We introduce two novel techniques for estimating temperatures of very low energy space plasmas using, primarily, in situ data from an electrostatic analyzer mounted on a charged and moving spacecraft. The techniques are used to estimate proton temperatures during intervals where the bulk of the ion plasma is well below the energy bandpass of the analyzer. Both techniques assume that the plasma may be described by a one-dimensional E×B-drifting Maxwellian and that the potential field and motion of the spacecraft may be accounted for in the simplest possible manner, i.e., by a linear shift of coordinates. The first technique involves the application of a constrained theoretical fit to a measured distribution function. The second technique involves the comparison of total and partial-energy number densities. Both techniques are applied to Van Allen Probes Helium, Oxygen, Proton, and Electron (HOPE) observations of the proton component of the plasmasphere during two orbits on 15 January 2013. We find that the temperatures calculated from these two order-of-magnitude-type techniques are in good agreement with typical ranges of the plasmaspheric temperature calculated using retarding potential analyzer-based measurements, generally between 0.2 and 2 eV (2000-20,000 K). We also find that the temperature is correlated with L shell and hot plasma density and is negatively correlated with the cold plasma density. We posit that the latter of these three relationships may be indicative of collisional or wave-driven heating of the plasmasphere in the ring current overlap region. We note that these techniques may be easily applied to similar data sets or used for a variety of purposes.
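A sketch of the second (partial-density) technique, assuming for simplicity an isotropic stationary Maxwellian rather than the paper's drifting one-dimensional form; the temperature is recovered by bisecting on the modeled partial-to-total density ratio above the instrument's low-energy cutoff:

```python
import math

def frac_above(x):
    # Fraction of an isotropic Maxwellian population with energy above x*kT
    return math.erfc(math.sqrt(x)) + 2.0 * math.sqrt(x / math.pi) * math.exp(-x)

def temperature_from_ratio(E0_eV, ratio):
    # Bisect for kT (in eV) so that the modeled partial/total density ratio
    # above the cutoff E0 matches the measured ratio
    lo, hi = 1e-3, 100.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if frac_above(E0_eV / mid) > ratio:
            hi = mid      # modeled fraction too large: temperature too high
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Round trip with an assumed 1 eV plasma and a 2 eV instrument cutoff
kT_true, cutoff = 1.0, 2.0
measured_ratio = frac_above(cutoff / kT_true)
kT_est = temperature_from_ratio(cutoff, measured_ratio)
print(kT_est)  # recovers ~1.0 eV
```

The real measurement supplies the partial density (from the analyzer) and the total density (e.g., from plasma wave data), after shifting energies for the spacecraft potential and motion.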
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frey, K.A.; Hichwa, R.D.; Ehrenkaufer, R.L.
1985-10-01
A tracer kinetic method is developed for the in vivo estimation of high-affinity radioligand binding to central nervous system receptors. Ligand is considered to exist in three brain pools corresponding to free, nonspecifically bound, and specifically bound tracer. These environments, in addition to that of intravascular tracer, are interrelated by a compartmental model of in vivo ligand distribution. A mathematical description of the model is derived, which allows determination of regional blood-brain barrier permeability, nonspecific binding, the rate of receptor-ligand association, and the rate of dissociation of bound ligand from the time courses of arterial blood and tissue tracer concentrations. The term "free receptor density" is introduced to describe the receptor population measured by this method. The technique is applied to the in vivo determination of regional muscarinic acetylcholine receptors in the rat, with the use of [3H]scopolamine. Kinetic estimates of free muscarinic receptor density are in general agreement with binding capacities obtained from previous in vivo and in vitro equilibrium binding studies. In the striatum, however, kinetic estimates of free receptor density are less than those in the neocortex, a reversal of the rank ordering of these regions derived from equilibrium determinations. A simplified model is presented that is applicable to tracers that do not readily dissociate from specific binding sites during the experimental period.
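A minimal sketch of the compartmental idea, assuming a two-tissue reduction with invented rate constants and input function (the full model described above also carries an intravascular term and separates free from nonspecifically bound tracer):

```python
import numpy as np

# Assumed rate constants (1/min) for a simplified two-tissue sketch of the
# free/nonspecific and specifically bound pools described above
K1, k2 = 0.5, 0.3    # blood-brain barrier influx and efflux
k3, k4 = 0.2, 0.02   # receptor association and dissociation

dt, T = 0.01, 60.0
t = np.arange(0.0, T, dt)
Cp = np.exp(-0.1 * t)          # arterial input function (assumed)

Cf = np.zeros_like(t)          # free + nonspecifically bound tracer
Cb = np.zeros_like(t)          # specifically (receptor) bound tracer
for i in range(1, t.size):
    # Forward-Euler step of the coupled compartment ODEs
    dCf = K1 * Cp[i - 1] - (k2 + k3) * Cf[i - 1] + k4 * Cb[i - 1]
    dCb = k3 * Cf[i - 1] - k4 * Cb[i - 1]
    Cf[i] = Cf[i - 1] + dt * dCf
    Cb[i] = Cb[i - 1] + dt * dCb

tissue = Cf + Cb               # what the tissue measurement would see
print(tissue[-1])
```

Fitting K1 through k4 to measured blood and tissue time courses is the inverse problem the paper solves; the slow dissociation (small k4) is what makes bound tracer dominate at late times.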