Langdon, Jonathan H; Elegbe, Etana; McAleavey, Stephen A
2015-01-01
Single Tracking Location (STL) Shear wave Elasticity Imaging (SWEI) is a method for detecting elastic differences between tissues. It has the advantage of intrinsic speckle bias suppression compared to Multiple Tracking Location (MTL) variants of SWEI. However, the assumption of a linear model leads to an overestimation of the shear modulus in viscoelastic media. A new reconstruction technique denoted Single Tracking Location Viscosity Estimation (STL-VE) is introduced to correct for this overestimation. This technique utilizes the same raw data generated in STL-SWEI imaging. Here, the STL-VE technique is developed by way of a Maximum Likelihood Estimation (MLE) for general viscoelastic materials. The method is then implemented for the particular case of the Kelvin-Voigt Model. Using simulation data, the STL-VE technique is demonstrated and the performance of the estimator is characterized. Finally, the STL-VE method is used to estimate the viscoelastic parameters of ex-vivo bovine liver. We find good agreement between the STL-VE results and the simulation parameters as well as between the liver shear wave data and the modeled data fit. PMID:26168170
Optimal Use of TDOA Geo-Location Techniques Within the Mountainous Terrain of Turkey
2012-09-01
[Abstract text not recovered; the extraction yields only table-of-contents and figure-list fragments, referencing a cross-correlation TDOA estimation technique, standard deviation, and the effect of noise on accuracy, plus a passage contrasting passive location finding with active techniques such as radar.]
Epistemic uncertainty in the location and magnitude of earthquakes in Italy from Macroseismic data
Bakun, W.H.; Gomez, Capera A.; Stucchi, M.
2011-01-01
Three independent techniques (Bakun and Wentworth, 1997; Boxer from Gasperini et al., 1999; and Macroseismic Estimation of Earthquake Parameters [MEEP; see Data and Resources section, deliverable D3] from R.M.W. Musson and M.J. Jimenez) have been proposed for estimating an earthquake location and magnitude from intensity data alone. The locations and magnitudes obtained for a given set of intensity data are almost always different, and no one technique is consistently best at matching instrumental locations and magnitudes of recent well-recorded earthquakes in Italy. Rather than attempting to select one of the three solutions as best, we use all three techniques to estimate the location and the magnitude and the epistemic uncertainties among them. The estimates are calculated using bootstrap resampled data sets with Monte Carlo sampling of a decision tree. The decision-tree branch weights are based on goodness-of-fit measures of location and magnitude for recent earthquakes. The location estimates are based on the spatial distribution of locations calculated from the bootstrap resampled data. The preferred source location is the locus of the maximum bootstrap location spatial density. The location uncertainty is obtained from contours of the bootstrap spatial density: 68% of the bootstrap locations are within the 68% confidence region, and so on. For large earthquakes, our preferred location is not associated with the epicenter but with a location on the extended rupture surface. For small earthquakes, the epicenters are generally consistent with the location uncertainties inferred from the intensity data if an epicenter inaccuracy of 2-3 km is allowed. The preferred magnitude is the median of the distribution of bootstrap magnitudes. As with location uncertainties, the uncertainties in magnitude are obtained from the distribution of bootstrap magnitudes: the bounds of the 68% uncertainty range enclose 68% of the bootstrap magnitudes, and so on. The instrumental magnitudes for large and small earthquakes are generally consistent with the confidence intervals inferred from the distribution of bootstrap resampled magnitudes.
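The bootstrap-percentile machinery described above can be sketched compactly. The following is a minimal illustration, not the authors' decision-tree implementation: `magnitude_fn` stands in for any intensity-based magnitude estimator, and the toy data are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_magnitude(intensities, magnitude_fn, n_boot=2000):
    """Resample the intensity data with replacement and return the median
    bootstrap magnitude plus the percentile bounds enclosing 68% of them."""
    n = len(intensities)
    mags = np.array([magnitude_fn(rng.choice(intensities, size=n, replace=True))
                     for _ in range(n_boot)])
    return np.median(mags), np.percentile(mags, [16, 84])

# toy intensity data and a stand-in magnitude estimator, for illustration only
intensities = rng.normal(6.0, 1.0, size=50)
m_pref, (m_lo, m_hi) = bootstrap_magnitude(intensities, np.mean)
print(f"preferred M = {m_pref:.2f}, 68% range [{m_lo:.2f}, {m_hi:.2f}]")
```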
Line-Constrained Camera Location Estimation in Multi-Image Stereomatching.
Donné, Simon; Goossens, Bart; Philips, Wilfried
2017-08-23
Stereomatching is an effective way of acquiring dense depth information from a scene when active measurements are not possible. So-called lightfield methods take a snapshot from many camera locations along a defined trajectory (usually uniformly linear or on a regular grid; we will assume a linear trajectory) and use this information to compute accurate depth estimates. However, they require the locations for each of the snapshots to be known: the disparity of an object between images is related to both the distance of the camera to the object and the distance between the camera positions for both images. Existing solutions use sparse feature matching for camera location estimation. In this paper, we propose a novel method that uses dense correspondences to do the same, leveraging an existing depth estimation framework to also yield the camera locations along the line. We illustrate the effectiveness of the proposed technique for camera location estimation both visually, for the rectification of epipolar plane images, and quantitatively, through its effect on the resulting depth estimation. Our proposed approach is a valid alternative to sparse techniques, while still executing in a reasonable time on a graphics card due to its highly parallelizable nature.
Probabilistic location estimation of acoustic emission sources in isotropic plates with one sensor
NASA Astrophysics Data System (ADS)
Ebrahimkhanlou, Arvin; Salamone, Salvatore
2017-04-01
This paper presents a probabilistic acoustic emission (AE) source localization algorithm for isotropic plate structures. The proposed algorithm requires only one sensor and uniformly monitors the entire area of such plates without any blind zones. In addition, it takes a probabilistic approach and quantifies localization uncertainties. The algorithm combines a modal acoustic emission (MAE) and a reflection-based technique to obtain information pertaining to the location of AE sources. To estimate confidence contours for the location of sources, uncertainties are quantified and propagated through the two techniques. The approach was validated using standard pencil lead break (PLB) tests on an aluminum plate. The results demonstrate that the proposed source localization algorithm successfully estimates confidence contours for the location of AE sources.
Choosing a DIVA: a comparison of emerging digital imagery vegetation analysis techniques
Jorgensen, Christopher F.; Stutzman, Ryan J.; Anderson, Lars C.; Decker, Suzanne E.; Powell, Larkin A.; Schacht, Walter H.; Fontaine, Joseph J.
2013-01-01
Question: What is the precision of five methods of measuring vegetation structure using ground-based digital imagery and processing techniques? Location: Lincoln, Nebraska, USA Methods: Vertical herbaceous cover was recorded using digital imagery techniques at two distinct locations in a mixed-grass prairie. The precision of five ground-based digital imagery vegetation analysis (DIVA) methods for measuring vegetation structure was tested using a split-split plot analysis of covariance. Variability within each DIVA technique was estimated using coefficient of variation of mean percentage cover. Results: Vertical herbaceous cover estimates differed among DIVA techniques. Additionally, environmental conditions affected the vertical vegetation obstruction estimates for certain digital imagery methods, while other techniques were more adept at handling various conditions. Overall, percentage vegetation cover values differed among techniques, but the precision of four of the five techniques was consistently high. Conclusions: DIVA procedures are sufficient for measuring various heights and densities of standing herbaceous cover. Moreover, digital imagery techniques can reduce measurement error associated with multiple observers' standing herbaceous cover estimates, allowing greater opportunity to detect patterns associated with vegetation structure.
Two ground-based canopy closure estimation techniques, the Spherical Densitometer (SD) and the Vertical Tube (VT), were compared for the effect of deciduous understory on dominant/co-dominant crown closure estimates in even-aged loblolly (Pinus taeda) pine stands located in the N...
NASA Technical Reports Server (NTRS)
Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.
2010-01-01
A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station.
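As a rough illustration of the integration described above, the probability that a stroke falls within a given radius of an arbitrary point can be estimated by Monte Carlo sampling from the error-ellipse distribution. This sketch assumes the ellipse axes are 1-sigma values and all numbers are invented; the operational technique uses an analytic integration adapted from the debris-collision method, not sampling.

```python
import numpy as np

rng = np.random.default_rng(1)

def ellipse_to_cov(smaj, smin, theta_deg):
    """Covariance matrix from error-ellipse semi-axes (1-sigma) and orientation."""
    t = np.radians(theta_deg)
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return R @ np.diag([smaj**2, smin**2]) @ R.T

def prob_within_radius(stroke_xy, cov, point_xy, radius, n=200_000):
    """Monte Carlo estimate of P(stroke within `radius` of `point_xy`).
    The point need not lie inside (or even near) the error ellipse."""
    samples = rng.multivariate_normal(stroke_xy, cov, size=n)
    d = np.linalg.norm(samples - np.asarray(point_xy), axis=1)
    return float(np.mean(d <= radius))

cov = ellipse_to_cov(smaj=1.2, smin=0.5, theta_deg=30.0)   # km, illustrative
p = prob_within_radius([0.0, 0.0], cov, [1.5, 0.8], radius=1.0)
print(f"P(stroke within 1 km) ~ {p:.3f}")
```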
Location estimation in wireless sensor networks using spring-relaxation technique.
Zhang, Qing; Foh, Chuan Heng; Seet, Boon-Chong; Fong, A C M
2010-01-01
Accurate and low-cost autonomous self-localization is a critical requirement of various applications of large-scale distributed wireless sensor networks (WSNs). Due to the massive deployment of sensors, explicit measurements based on specialized localization hardware such as the Global Positioning System (GPS) are not practical. In this paper, we propose a low-cost WSN localization solution. Our design uses received signal strength indicators for ranging, lightweight distributed algorithms based on the spring-relaxation technique for location computation, and a cooperative approach to achieve a given location estimation accuracy with a low number of nodes with known locations. We provide analysis to show the suitability of the spring-relaxation technique for cooperative WSN localization, and perform simulation experiments to illustrate its localization accuracy.
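The spring-relaxation idea lends itself to a compact sketch: each measured range acts as a spring whose force nudges an unknown node toward consistency with its neighbors, while anchor nodes stay fixed. The step size, iteration count, and toy geometry below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def spring_relax(pos, anchors_mask, dist, n_iter=500, step=0.1):
    """Spring-relaxation localization sketch.
    pos          : (N,2) initial position guesses (anchors at true positions)
    anchors_mask : (N,) bool, True for nodes with known locations
    dist         : (N,N) measured inter-node distances (np.nan if unmeasured)"""
    pos = pos.copy()
    N = len(pos)
    for _ in range(n_iter):
        force = np.zeros_like(pos)
        for i in range(N):
            if anchors_mask[i]:
                continue                     # anchors do not move
            for j in range(N):
                if i == j or np.isnan(dist[i, j]):
                    continue
                vec = pos[j] - pos[i]
                d = np.linalg.norm(vec) + 1e-9
                # spring force proportional to (current - measured) length
                force[i] += (d - dist[i, j]) * vec / d
        pos += step * force
    return pos

# toy demo: 3 anchors and 1 unknown node whose true position is (1, 1)
true = np.array([[0, 0], [2, 0], [0, 2], [1, 1]], float)
D = np.linalg.norm(true[:, None, :] - true[None, :, :], axis=-1)
guess = true.copy(); guess[3] = [0.2, 0.3]
est = spring_relax(guess, np.array([True, True, True, False]), D)
print(est[3])   # -> approximately [1, 1]
```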
A Bayesian framework for infrasound location
NASA Astrophysics Data System (ADS)
Modrak, Ryan T.; Arrowsmith, Stephen J.; Anderson, Dale N.
2010-04-01
We develop a framework for location of infrasound events using backazimuth and infrasonic arrival times from multiple arrays. Bayesian infrasonic source location (BISL) developed here estimates event location and associated credibility regions. BISL accounts for unknown source-to-array path or phase by formulating infrasonic group velocity as random. Differences between observed and predicted source-to-array traveltimes are partitioned into two additive Gaussian sources, measurement error and model error, the second of which accounts for the unknown influence of wind and temperature on path. By applying the technique to both synthetic tests and ground-truth events, we highlight the complementary nature of back azimuths and arrival times for estimating well-constrained event locations. BISL is an extension to methods developed earlier by Arrowsmith et al. that provided simple bounds on location using a grid-search technique.
NASA Technical Reports Server (NTRS)
Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.
2011-01-01
A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.
A solar energy estimation procedure using remote sensing techniques. [watershed hydrologic models
NASA Technical Reports Server (NTRS)
Khorram, S.
1977-01-01
The objective of this investigation is to design a remote sensing-aided procedure for daily location-specific estimation of solar radiation components over the watershed(s) of interest. This technique has been tested on the Spanish Creek Watershed, Northern California, with successful results.
An Impact-Location Estimation Algorithm for Subsonic Uninhabited Aircraft
NASA Technical Reports Server (NTRS)
Bauer, Jeffrey E.; Teets, Edward
1997-01-01
An impact-location estimation algorithm is being used at the NASA Dryden Flight Research Center to support range safety for uninhabited aerial vehicle flight tests. The algorithm computes an impact location based on the descent rate, mass, and altitude of the vehicle and current wind information. The predicted impact location is continuously displayed on the range safety officer's moving map display so that the flightpath of the vehicle can be routed to avoid ground assets if the flight must be terminated. The algorithm easily adapts to different vehicle termination techniques and has been shown to be accurate to the extent required to support range safety for subsonic uninhabited aerial vehicles. This paper describes how the algorithm functions, how the algorithm is used at NASA Dryden, and how various termination techniques are handled by the algorithm. Other approaches to predicting the impact location and the reasons why they were not selected for real-time implementation are also discussed.
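A first-order version of such a predictor reduces to drifting the vehicle with the wind for the duration of its ballistic descent. The sketch below ignores the vehicle mass and any variation of wind with altitude, both of which the Dryden algorithm accounts for; the numbers in the example are made up.

```python
import numpy as np

def impact_point(pos_ne, altitude_agl, descent_rate, wind_ne):
    """First-order impact estimate for a terminated vehicle.
    pos_ne       : current (north, east) position, m
    altitude_agl : height above ground, m
    descent_rate : steady-state sink rate, m/s (positive down)
    wind_ne      : mean (north, east) wind over the descent, m/s
    The vehicle falls at the descent rate while drifting with the wind."""
    t_fall = altitude_agl / descent_rate          # seconds until ground contact
    return np.asarray(pos_ne) + np.asarray(wind_ne) * t_fall

print(impact_point([0.0, 0.0], altitude_agl=1500.0, descent_rate=7.5, wind_ne=[4.0, -2.0]))
```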
Defect inspection of periodic patterns with low-order distortions
NASA Astrophysics Data System (ADS)
Khalaj, Babak H.; Aghajan, Hamid K.; Paulraj, Arogyaswami; Kailath, Thomas
1994-03-01
A self-reliant technique is developed for detecting defects in repeated-pattern wafers and masks with low-order distortions. If the patterns are located on a perfect rectangular grid, it is possible to estimate the period of the repeated patterns in both directions and then produce a defect-free reference image for comparison with the actual image. But in some applications the repeated patterns are shifted somewhat from their desired positions on a rectangular grid, and the aforementioned algorithm cannot be directly applied. In these situations, to produce a defect-free reference image and locate the defective cells, it is necessary to first estimate the amount of misalignment of each cell. The proposed technique first estimates the misalignment of the repeated patterns in each row and column. After estimating the locations of all cells in the image, a defect-free reference image is generated by averaging over all the cells and is compared with the input image to localize possible defects.
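In one dimension, and assuming the cells sit on a perfect grid (i.e., before the misalignment estimation the paper adds), the reference-image idea can be sketched as: estimate the period from the autocorrelation, average the cells into a defect-free reference, and flag cells that deviate. The lag search bounds and threshold here are hypothetical choices.

```python
import numpy as np

def locate_defects_1d(signal, min_lag, max_lag, thresh=3.0):
    """1-D sketch of reference-free defect detection in a periodic pattern.
    1. Estimate the repeat period from the autocorrelation peak.
    2. Average all full cells to synthesize a defect-free reference.
    3. Flag cells whose deviation from the reference is anomalous."""
    x = signal - signal.mean()
    ac = np.correlate(x, x, mode='full')[len(x) - 1:]   # lags 0 .. n-1
    period = int(np.argmax(ac[min_lag:max_lag])) + min_lag
    n_cells = len(signal) // period
    cells = signal[:n_cells * period].reshape(n_cells, period)
    reference = cells.mean(axis=0)                      # defect-free estimate
    resid = np.abs(cells - reference).max(axis=1)       # worst deviation per cell
    flags = resid > thresh * resid.std()
    return period, np.nonzero(flags)[0]
```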
Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter.
Choi, Jihoon; Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il
2017-09-13
This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected.
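The structure of such an estimator can be sketched as follows: Welch-averaged spectral densities feed a coherence-based weighting of the cross-spectrum (a Hannan-Thomson-style ML prefilter), with a small term `eps` standing in for the paper's regularisation factor, and the resulting delay feeds the standard two-sensor leak-position formula. The exact prefilter and its optimal factor are derived in the paper; this is only an assumed generic form.

```python
import numpy as np

def welch_spectra(x1, x2, nseg=8):
    """Segment-averaged auto- and cross-spectral densities (Welch-style)."""
    L = len(x1) // nseg
    win = np.hanning(L)
    S11 = S22 = S12 = 0.0
    for k in range(nseg):
        a = np.fft.rfft(win * x1[k * L:(k + 1) * L])
        b = np.fft.rfft(win * x2[k * L:(k + 1) * L])
        S11, S22, S12 = S11 + abs(a)**2, S22 + abs(b)**2, S12 + a * np.conj(b)
    return S11 / nseg, S22 / nseg, S12 / nseg, L

def ml_gcc_delay(x1, x2, fs, eps=1e-2):
    """Time-difference estimate tau = t1 - t2 from generalised cross-correlation
    with a regularised coherence (ML-style) prefilter."""
    S11, S22, S12, L = welch_spectra(x1, x2)
    coh2 = np.abs(S12)**2 / (S11 * S22 + 1e-20)        # magnitude-squared coherence
    W = coh2 / (np.abs(S12) * (1.0 - coh2 + eps) + 1e-20)
    cc = np.fft.irfft(W * S12, n=L)
    k = int(np.argmax(cc))
    return (k if k < L // 2 else k - L) / fs           # unwrap circular lags

def leak_distance_from_sensor1(tau, spacing_m, wave_speed_mps):
    """With tau = t1 - t2, the leak sits at (D + v*tau)/2 from sensor 1."""
    return 0.5 * (spacing_m + wave_speed_mps * tau)
```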
Estimation of precipitable water at different locations using surface dew-point
NASA Astrophysics Data System (ADS)
Abdel Wahab, M.; Sharif, T. A.
1995-09-01
The Reitan (1963) regression equation of the form ln w = a + b T_d has been examined and tested to estimate precipitable water vapor content from the surface dew-point temperature T_d at different locations. The results of this study indicate that the slope b of the above equation has a constant value of 0.0681, while the intercept a changes rapidly with latitude. The use of the variable-intercept technique can improve the estimated result by about 2%.
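In code, the estimator is a one-liner; the slope 0.0681 is the constant reported above, while the intercept must be supplied per site because it varies rapidly with latitude (the intercept value in the example is made up).

```python
import numpy as np

def precipitable_water(dewpoint_c, intercept_a):
    """Reitan-form estimate of precipitable water: ln w = a + b*T_d.
    Returns w in whatever units the fitted intercept implies; the slope
    b = 0.0681 is the constant reported in the study."""
    B_SLOPE = 0.0681                     # per degree C, from the study
    return np.exp(intercept_a + B_SLOPE * dewpoint_c)

# illustrative call with an invented intercept; real values are latitude-specific
print(precipitable_water(dewpoint_c=15.0, intercept_a=0.11))
```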
Traffic volume estimation using network interpolation techniques.
DOT National Transportation Integrated Search
2013-12-01
The kriging method is a frequently used interpolation methodology in geography, which enables estimation of unknown values at certain places with consideration of the distances among locations. When it is used in the transportation field, network distanc...
A state space based approach to localizing single molecules from multi-emitter images.
Vahid, Milad R; Chao, Jerry; Ward, E Sally; Ober, Raimund J
2017-01-28
Single molecule super-resolution microscopy is a powerful tool that enables imaging at sub-diffraction-limit resolution. In this technique, subsets of stochastically photoactivated fluorophores are imaged over a sequence of frames and accurately localized, and the estimated locations are used to construct a high-resolution image of the cellular structures labeled by the fluorophores. Available localization methods typically first determine the regions of the image that contain emitting fluorophores through a process referred to as detection. Then, the locations of the fluorophores are estimated accurately in an estimation step. We propose a novel localization method which combines the detection and estimation steps. The method models the given image as the frequency response of a multi-order system obtained with a balanced state space realization algorithm based on the singular value decomposition of a Hankel matrix, and determines the locations of intensity peaks in the image as the pole locations of the resulting system. The locations of the most significant peaks correspond to the locations of single molecules in the original image. Although the accuracy of the location estimates is reasonably good, we demonstrate that, by using the estimates as the initial conditions for a maximum likelihood estimator, refined estimates can be obtained that have a standard deviation close to the Cramér-Rao lower bound-based limit of accuracy. We validate our method using both simulated and experimental multi-emitter images.
Development of the One-Sided Nonlinear Adaptive Doppler Shift Estimation
NASA Technical Reports Server (NTRS)
Beyon, Jeffrey Y.; Koch, Grady J.; Singh, Upendra N.; Kavaya, Michael J.; Serror, Judith A.
2009-01-01
The new development of a one-sided nonlinear adaptive Doppler shift estimation technique (NADSET) is introduced. The background of the algorithm and a brief overview of NADSET are presented. The new technique is applied to wind parameter estimates from a 2-micron wavelength coherent Doppler lidar system called VALIDAR, located at NASA Langley Research Center in Virginia. The new technique enhances wind parameters such as Doppler shift and power estimates in low signal-to-noise-ratio (SNR) regimes using the estimates in high SNR regimes as the algorithm scans the range bins from low to high altitude. The original NADSET utilizes the statistics in both the lower and higher range bins to refine the wind parameter estimates in between. The results of the two different approaches of NADSET are compared.
Determining Titan surface topography from Cassini SAR data
Stiles, Bryan W.; Hensley, Scott; Gim, Yonggyu; Bates, David M.; Kirk, Randolph L.; Hayes, Alex; Radebaugh, Jani; Lorenz, Ralph D.; Mitchell, Karl L.; Callahan, Philip S.; Zebker, Howard; Johnson, William T.K.; Wall, Stephen D.; Lunine, Jonathan I.; Wood, Charles A.; Janssen, Michael; Pelletier, Frederic; West, Richard D.; Veeramacheneni, Chandini
2009-01-01
A technique, referred to as SARTopo, has been developed for obtaining surface height estimates with 10 km horizontal resolution and 75 m vertical resolution of the surface of Titan along each Cassini Synthetic Aperture Radar (SAR) swath. We describe the technique and present maps of the co-located data sets. A global map and regional maps of Xanadu and the northern hemisphere hydrocarbon lakes district are included in the results. A strength of the technique is that it provides topographic information co-located with SAR imagery. Having a topographic context vastly improves the interpretability of the SAR imagery and is essential for understanding Titan. SARTopo is capable of estimating surface heights for most of the SAR-imaged surface of Titan. Currently nearly 30% of the surface is within 100 km of a SARTopo height profile. Other competing techniques provide orders of magnitude less coverage. We validate the SARTopo technique through comparison with known geomorphological features such as mountain ranges and craters, and by comparison with co-located nadir altimetry, including a 3000 km strip that had been observed by SAR a month earlier. In this area, the SARTopo and nadir altimetry data sets are co-located tightly (within 5-10 km for one 500 km section), have similar resolution, and as expected agree closely in surface height. Furthermore the region contains prominent high spatial resolution topography, so it provides an excellent test of the resolution and precision of both techniques.
Comparing capacity value estimation techniques for photovoltaic solar power
Madaeni, Seyed Hossein; Sioshansi, Ramteen; Denholm, Paul
2012-09-28
In this paper, we estimate the capacity value of photovoltaic (PV) solar plants in the western U.S. Our results show that PV plants have capacity values that range between 52% and 93%, depending on location and sun-tracking capability. We further compare more robust but data- and computationally-intense reliability-based estimation techniques with simpler approximation methods. We show that if implemented properly, these techniques provide accurate approximations of reliability-based methods. Overall, methods that are based on the weighted capacity factor of the plant provide the most accurate estimate. As a result, we also examine the sensitivity of PV capacity value to the inclusion of sun-tracking systems.
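A sketch of a weighted-capacity-factor approximation of the kind the paper finds most accurate: average the plant's capacity factor over the highest-load hours, weighting by load. The number of top hours and the data shapes are assumptions here, not the paper's exact specification.

```python
import numpy as np

def capacity_value_wcf(pv_mw, load_mw, nameplate_mw, top_hours=100):
    """Weighted-capacity-factor approximation of PV capacity value:
    the plant's capacity factor over the highest-load hours, load-weighted.
    pv_mw, load_mw : hourly PV output and system load over a year (arrays)."""
    idx = np.argsort(load_mw)[-top_hours:]      # hours with the highest system load
    w = load_mw[idx] / load_mw[idx].sum()       # load-proportional weights
    return float(np.sum(w * pv_mw[idx]) / nameplate_mw)
```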
Practical Methods for Estimating Software Systems Fault Content and Location
NASA Technical Reports Server (NTRS)
Nikora, A.; Schneidewind, N.; Munson, J.
1999-01-01
Over the past several years, we have developed techniques to discriminate between fault-prone software modules and those that are not, to estimate a software system's residual fault content, to identify those portions of a software system having the highest estimated number of faults, and to estimate the effects of requirements changes on software quality.
Estimation of Traffic Variables Using Point Processing Techniques
DOT National Transportation Integrated Search
1978-05-01
An alternative approach to estimating aggregate traffic variables on freeways--spatial mean velocity and density--is presented. Vehicle arrival times at a given location on a roadway, typically a presence detector, are regarded as a point or counting...
Estimation of submarine mass failure probability from a sequence of deposits with age dates
Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.
2013-01-01
The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
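For the exponential (Poisson) end-member model, the likelihood machinery is simple enough to sketch; the full method additionally propagates age-dating uncertainty and the open intervals before the first and after the last deposit, which this toy version ignores.

```python
import numpy as np

def fit_exponential_return_time(n_events, record_length_kyr):
    """MLE fit of an exponential renewal (Poisson) model to a deposit record.
    Returns the mean return time and the AIC for model comparison."""
    lam = n_events / record_length_kyr              # events per kyr
    # log-likelihood of the inter-event times under an exponential model
    loglik = n_events * np.log(lam) - lam * record_length_kyr
    aic = 2 * 1 - 2 * loglik                        # one free parameter
    return 1.0 / lam, aic

print(fit_exponential_return_time(n_events=12, record_length_kyr=60.0))
```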
Recchia, Gabriel L; Louwerse, Max M
2016-11-01
Computational techniques comparing co-occurrences of city names in texts allow the relative longitudes and latitudes of cities to be estimated algorithmically. However, these techniques have not been applied to estimate the provenance of artifacts with unknown origins. Here, we estimate the geographic origin of artifacts from the Indus Valley Civilization, applying methods commonly used in cognitive science to the Indus script. We show that these methods can accurately predict the relative locations of archeological sites on the basis of artifacts of known provenance, and we further apply these techniques to determine the most probable excavation sites of four sealings of unknown provenance. These findings suggest that inscription statistics reflect historical interactions among locations in the Indus Valley region, and they illustrate how computational methods can help localize inscribed archeological artifacts of unknown origin. The success of this method offers opportunities for the cognitive sciences in general and for computational anthropology specifically. Copyright © 2015 Cognitive Science Society, Inc.
Spatio-temporal distribution of energy radiation from low frequency tremor
NASA Astrophysics Data System (ADS)
Maeda, T.; Obara, K.
2007-12-01
Recent fine-scale hypocenter locations of low frequency tremors (LFTs) estimated by cross-correlation techniques (Shelly et al. 2006; Maeda et al. 2006) and the new finding of very low frequency earthquakes (Ito et al. 2007) suggest that these slow events occur at the plate boundary in association with slow slip events (Obara and Hirose, 2006). However, the number of tremors detected by the above techniques is limited because continuous tremor waveforms are too complicated. An envelope correlation method (ECM) (Obara, 2002) enables us to locate LFT epicenters without arrival-time picks, but ECM fails to locate LFTs precisely, especially during the most active stage of tremor activity, because of the low correlation of envelope amplitudes. To reveal the total energy release of LFTs, we propose a new method for estimating the location of LFTs together with the energy radiated from the tremor source by using envelope amplitude. The tremor amplitude observed at NIED Hi-net stations in western Shikoku decays in simple proportion to the reciprocal of the source-receiver distance after correction for site-amplification factors, even though the phases of the tremor are very complicated. We therefore model the observed mean-square envelope amplitude as time-dependent energy radiation with a geometrical spreading factor. The model has no tremor origin time, since we assume that the source radiates energy continuously. Travel-time differences between stations estimated by the ECM technique are also incorporated into our locating algorithm together with the amplitude information. Three-component, 1-hour continuous Hi-net velocity waveforms with a pass-band of 2-10 Hz are used for the inversion, after correction for site-amplification factors at each station estimated by the coda normalization method (Takahashi et al. 2005) applied to ordinary earthquakes in the region. The source location and energy are estimated by iteratively applying a least-squares inversion to each 1-min window. As a first application of our method, we estimated the spatio-temporal distribution of energy radiation for the May 2006 episodic tremor and slip event in the western Shikoku region of Japan. Tremor locations and their radiated energy are estimated every minute. We counted the number of located LFTs and summed their total energy on a grid with 0.05-degree spacing for each day to map the spatio-temporal distribution of tremor energy release. The resulting spatial distribution of radiated energy is concentrated in a specific region. Additionally, we see daily changes in the released energy, in both location and amount, which correspond to the migration of tremor activity. The spatio-temporal distribution of tremor energy radiation is in good agreement with the spatio-temporal slip distribution of the slow slip event estimated from Hi-net tiltmeter records (Hirose et al. 2007). This suggests that small continuous tremors occur in association with the rupture process of slow slip.
A Novel Technique Applying Spectral Estimation to Johnson Noise Thermometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ezell, N. Dianne Bull; Britton, Chuck; Ericson, Nance
Johnson noise thermometry is one of many important measurement techniques used to monitor the safety levels and stability in a nuclear reactor. However, this measurement is very dependent on a minimal electromagnetic environment. Properly removing unwanted electromagnetic interference (EMI) is critical for accurate drift-free temperature measurements. The two techniques developed by Oak Ridge National Laboratory (ORNL) to remove transient and periodic EMI are briefly discussed here. Spectral estimation is a key component in the signal processing algorithm used for EMI removal and temperature calculation. The cross-power spectral density is a key component in the Johnson noise temperature computation. Applying either technique requires the simple addition of electronics and signal processing to existing resistive thermometers. With minimal installation changes, the system discussed here can be installed on existing nuclear power plants. The Johnson noise system developed was tested at three locations: ORNL, Sandia National Laboratory, and the Tennessee Valley Authority's Kingston Fossil Plant. Each of these locations enabled improvement of the EMI removal algorithm. Finally, the conclusions drawn from the results at each of these locations are discussed, as well as possible future work.
Estimating procedure times for surgeries by determining location parameters for the lognormal model.
Spangler, William E; Strum, David P; Vargas, Luis G; May, Jerrold H
2004-05-01
We present an empirical study of methods for estimating the location parameter of the lognormal distribution. Our results identify the best order statistic to use, and indicate that using the best order statistic instead of the median may lead to less frequent incorrect rejection of the lognormal model, more accurate critical value estimates, and higher goodness-of-fit. Using simulation data, we constructed and compared two models for identifying the best order statistic, one based on conventional nonlinear regression and the other using a data mining/machine learning technique. Better surgical procedure time estimates may lead to improved surgical operations.
Improving Focal Depth Estimates: Studies of Depth Phase Detection at Regional Distances
NASA Astrophysics Data System (ADS)
Stroujkova, A.; Reiter, D. T.; Shumway, R. H.
2006-12-01
The accurate estimation of the depth of small, regionally recorded events continues to be an important and difficult explosion monitoring research problem. Depth phases (free-surface reflections) are the primary tool that seismologists use to constrain the depth of a seismic event. When depth phases from an event are detected, an accurate source depth is easily found by using the delay times of the depth phases relative to the P wave and a velocity profile near the source. Cepstral techniques, including cepstral F-statistics, represent a class of methods designed for depth-phase detection and identification; however, they offer only a moderate level of success at epicentral distances less than 15°. This is due to complexities in the Pn coda, which can lead to numerous false detections in addition to the true phase detection. Therefore, cepstral methods cannot be used independently to reliably identify depth phases. Other evidence, such as apparent velocities, amplitudes, and frequency content, must be used to confirm whether the phase is truly a depth phase. In this study we used a variety of array methods to estimate apparent phase velocities and arrival azimuths, including beam-forming, semblance analysis, MUltiple SIgnal Classification (MUSIC) (e.g., Schmidt, 1979), and cross-correlation (e.g., Cansi, 1995; Tibuleac and Herrin, 1997). To facilitate the processing and comparison of results, we developed a MATLAB-based processing tool, which allows application of all of these techniques (i.e., augmented cepstral processing) in a single environment. The main objective of this research was to combine the results of three focal-depth estimation techniques and their associated standard errors into a statistically valid unified depth estimate. The three techniques include: 1. Direct focal depth estimation from the depth-phase arrival times picked via augmented cepstral processing. 2. Hypocenter location from direct and surface-reflected arrivals observed on sparse networks of regional stations using a Grid-search, Multiple-Event Location method (GMEL; Rodi and Toksöz, 2000; 2001). 3. Surface-wave dispersion inversion for event depth and focal mechanism (Herrmann and Ammon, 2002). To validate our approach and provide quality control for our solutions, we applied the techniques to moderate-sized events (mb between 4.5 and 6.0) with known focal mechanisms. We illustrate the techniques using events observed at regional distances from the KSAR (Wonju, South Korea) teleseismic array and other nearby broadband three-component stations. Our results indicate that the techniques can produce excellent agreement between the various depth estimates. In addition, combining the techniques into a "unified" estimate greatly reduced location errors and improved the robustness of the solution, even when results from the individual methods yielded large standard errors.
Autocorrelation of location estimates and the analysis of radiotracking data
Otis, D.L.; White, Gary C.
1999-01-01
The wildlife literature has been contradictory about the importance of autocorrelation in radiotracking data used for home range estimation and hypothesis tests of habitat selection. By definition, the concept of a home range involves autocorrelated movements, but estimates or hypothesis tests based on sampling designs that predefine a time frame of interest, and that generate representative samples of an animal's movement during this time frame, should not be affected by length of the sampling interval and autocorrelation. Intensive sampling of the individual's home range and habitat use during the time frame of the study leads to improved estimates for the individual, but use of location estimates as the sample unit to compare across animals is pseudoreplication. We therefore recommend against use of habitat selection analysis techniques that use locations instead of individuals as the sample unit. We offer a general outline for sampling designs for radiotracking studies.
Multi-technique combination of space geodesy observations
NASA Astrophysics Data System (ADS)
Zoulida, Myriam; Pollet, Arnaud; Coulot, David; Biancale, Richard; Rebischung, Paul; Collilieux, Xavier
2014-05-01
Over the last few years, combination at the observation level (COL) of the different space geodesy techniques has been thoroughly studied. Various studies have shown that this type of combination can take advantage of common parameters. Some of these parameters, such as Zenithal Tropospheric Delays (ZTD), are available at co-location sites, where more than one technique is present. Local ties (LT) are provided for these sites; they act as inter-technique links and allow the resulting terrestrial reference frames (TRF) to be homogeneous. However, the use of LTs can be problematic in weekly calculations, where their geographical distribution can be poor, and differences are often observed between the available LTs and space geodesy results. Similar co-locations can be found on multi-technique satellites, which carry receivers for more than one technique. A great advantage of these space ties (STs) is the densification of co-locations, as the orbiting satellite acts as a moving station. The challenge of using space ties lies in the accurate knowledge or estimation of their values, as officially provided values sometimes do not reach the required level of precision, due to mismodeling of the receivers or of the acting forces, among other factors. This introduces the need for an estimation and/or weighting strategy for the STs. To date, on subsets of available data, using STs has shown promising results for TRF determination through the estimation of station positions, for the orbit determination of the GPS constellation, and for the GPS antenna Phase Center Offsets and Variations (PCO and PCV). In this study, results from a multi-technique combination including the Jason-2 satellite and its effect on GNSS orbit determination during the CONT2011 period are presented, as well as some preliminary results on station position determination. Comparing the resulting orbits with official solutions assesses the effect of introducing the orbiting station's observations on the orbit calculation. Moreover, simulated solutions are presented, showing the effect of adding multi-technique observations on the estimation of ST parameter errors, such as Laser Retroreflector Offsets (LROs) or GNSS antenna Phase Center Offsets (PCOs).
Array processing for RFID tag localization exploiting multi-frequency signals
NASA Astrophysics Data System (ADS)
Zhang, Yimin; Li, Xin; Amin, Moeness G.
2009-05-01
RFID is an increasingly valuable business and technology tool for electronically identifying, locating, and tracking products, assets, and personnel. As a result, precise positioning and tracking of RFID tags and readers have received considerable attention from both academic and industrial communities. Finding the position of RFID tags is considered an important task in various real-time locating systems (RTLS). As such, numerous RFID localization products have been developed for various applications. The majority of RFID positioning systems are based on the fusion of pieces of relevant information, such as the range and the direction-of-arrival (DOA). For example, trilateration can determine the tag position by using the range information of the tag estimated from three or more spatially separated reader antennas. Triangulation is another method to locate RFID tags, using DOA information estimated at multiple spatially separated locations. The RFID tag positions can also be determined through hybrid techniques that combine the range and DOA information. The focus of this paper is to study the design and performance of the localization of passive RFID tags using array processing techniques in a multipath environment, exploiting multi-frequency CW signals. The latter are used to decorrelate the coherent multipath signals for effective DOA estimation and for the purpose of accurate range estimation. Accordingly, the spatial and frequency dimensionalities are fully utilized for robust and accurate positioning of RFID tags.
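Trilateration, the first of the fusion approaches mentioned, reduces to a small linear least-squares problem once the squared-range equations are differenced. This is a generic sketch with an invented geometry, not the paper's array-processing estimator.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Linear least-squares trilateration from three or more reader antennas.
    Subtracting the first range equation from the rest linearises
    ||x - a_i||^2 = r_i^2 into A x = b."""
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# invented geometry: three antennas and a tag at (1, 2)
anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
tag = np.array([1.0, 2.0])
ranges = np.linalg.norm(anchors - tag, axis=1)
print(trilaterate(anchors, ranges))   # -> [1. 2.]
```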
NASA Astrophysics Data System (ADS)
Abd-el-Malek, Mina; Abdelsalam, Ahmed K.; Hassan, Ola E.
2017-09-01
Robustness, low running cost, and reduced maintenance have led induction motors (IMs) to penetrate the industrial drive system field. Broken rotor bars (BRBs) are an important fault that needs to be assessed early to minimize maintenance cost and labor time. The majority of recent BRB fault-diagnostic techniques focus on differentiating between a healthy and a faulty rotor cage. In this paper, a new technique is proposed for detecting the location of the broken bar in the rotor. The proposed technique relies on monitoring certain statistical parameters estimated from the analysis of the start-up stator current envelope. The envelope of the signal is obtained using the Hilbert Transformation (HT). The proposed technique offers a non-invasive, computationally fast, and accurate location-diagnostic process. Various simulation scenarios are presented that validate the effectiveness of the proposed technique.
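The envelope-extraction step can be sketched with SciPy's Hilbert transform; the specific statistical parameters the technique monitors are defined in the paper, so the statistics below are generic stand-ins.

```python
import numpy as np
from scipy.signal import hilbert

def startup_envelope(stator_current):
    """Instantaneous-amplitude envelope of the start-up stator current,
    via the analytic signal from the Hilbert transform."""
    return np.abs(hilbert(stator_current))

def envelope_statistics(env):
    """Generic summary statistics of the envelope (stand-ins for the
    paper's monitored parameters)."""
    mu, sigma = env.mean(), env.std()
    skew = np.mean((env - mu) ** 3) / sigma ** 3
    return {"mean": mu, "std": sigma, "skewness": skew}
```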
NASA Astrophysics Data System (ADS)
Giorli, Giacomo; Drazen, Jeffrey C.; Neuheimer, Anna B.; Copeland, Adrienne; Au, Whitlow W. L.
2018-01-01
Pelagic animals that form deep sea scattering layers (DSLs) represent an important link in the food web between zooplankton and top predators. While estimating the composition, density, and location of the DSL is important for understanding mesopelagic ecosystem dynamics and for predicting top predators' distribution, DSL composition and density are often estimated from trawls, which may be biased in terms of extrusion, avoidance, and gear-associated effects. Instead, the location and biomass of DSLs can be estimated with active acoustic techniques, though estimates are often aggregate, without regard to size or taxon-specific information. For the first time in the open ocean, we used a DIDSON sonar to characterize the fauna in DSLs. Estimates of the numerical density and length of animals at different depths and locations along the Kona coast of the Island of Hawaii were determined. Data were collected below and inside the DSLs with the sonar mounted on a profiler. A total of 7068 animals were counted and sized. We estimated numerical densities ranging from 1 to 7 animals/m3, and individuals as long as 3 m were detected. These numerical densities were orders of magnitude higher than those estimated from trawls, and the average sizes of animals were much larger as well. A mixed model was used to characterize the numerical density and length of animals as a function of deep sea layer sampled, location, time of day, and day of the year. Numerical density and length of animals varied by month, with numerical density also a function of depth. The DIDSON proved to be a good tool for open-ocean/deep-sea estimation of the numerical density and size of marine animals, especially larger ones. Further work is needed to understand how this methodology relates to estimates of volume backscatter obtained with standard echosounding techniques and to density measures obtained with other sampling methodologies, and to precisely evaluate sampling biases.
Accelerometer-based on-body sensor localization for health and medical monitoring applications
Vahdatpour, Alireza; Amini, Navid; Xu, Wenyao; Sarrafzadeh, Majid
2011-01-01
In this paper, we present a technique to recognize the position of sensors on the human body. Automatic on-body device localization ensures the correctness and accuracy of measurements in health and medical monitoring systems. In addition, it provides opportunities to improve the performance and usability of ubiquitous devices. Our technique uses accelerometers to capture motion data and estimate the location of the device on the user's body, using mixed supervised and unsupervised time series analysis methods. We have evaluated our technique with extensive experiments on 25 subjects. On average, our technique achieves 89% accuracy in estimating the location of devices on the body. To study the feasibility of distinguishing left limbs from right limbs (e.g., left arm vs. right arm), we performed an analysis, based on which no meaningful classification was observed. Personalized ultraviolet monitoring and wireless transmission power control comprise two immediate applications of our on-body device localization approach. Such applications, along with their corresponding feasibility studies, are discussed. PMID:22347840
A comparison of five sampling techniques to estimate surface fuel loading in montane forests
Pamela G. Sikkink; Robert E. Keane
2008-01-01
Designing a fuel-sampling program that accurately and efficiently assesses fuel load at relevant spatial scales requires knowledge of each sample method's strengths and weaknesses.We obtained loading values for six fuel components using five fuel load sampling techniques at five locations in western Montana, USA. The techniques included fixed-area plots, planar...
Mann, Michael P.; Rizzardo, Jule; Satkowski, Richard
2004-01-01
Accurate streamflow statistics are essential to water resource agencies involved in both science and decision-making. When long-term streamflow data are lacking at a site, estimation techniques are often employed to generate streamflow statistics. However, procedures for accurately estimating streamflow statistics often are lacking. When estimation procedures are developed, they often are not evaluated properly before being applied. Use of unevaluated or underevaluated flow-statistic estimation techniques can result in improper water-resources decision-making. The California State Water Resources Control Board (SWRCB) uses two key techniques, a modified rational equation and drainage basin area-ratio transfer, to estimate streamflow statistics at ungaged locations. These techniques have been implemented to varying degrees, but have not been formally evaluated. For estimating peak flows at the 2-, 5-, 10-, 25-, 50-, and 100-year recurrence intervals, the SWRCB uses the U.S. Geological Survey's (USGS) regional peak-flow equations. In this study, done cooperatively by the USGS and SWRCB, the SWRCB estimated several flow statistics at 40 USGS streamflow gaging stations in the north coast region of California. The SWRCB estimates were made without reference to USGS flow data. The USGS used the streamflow data provided by the 40 stations to generate flow statistics that could be compared with SWRCB estimates for accuracy. While some SWRCB estimates compared favorably with USGS statistics, results were subject to varying degrees of error over the region. Flow-based estimation techniques generally performed better than rain-based methods, especially for estimation of December 15 to March 31 mean daily flows. The USGS peak-flow equations also performed well, but tended to underestimate peak flows. The USGS equations performed within reported error bounds, but will require updating in the future as peak-flow data sets grow larger. Little correlation was discovered between estimation errors and geographic locations or various basin characteristics. However, for 25-percentile year mean-daily-flow estimates for December 15 to March 31, the greatest estimation errors were at east San Francisco Bay area stations with mean annual precipitation less than or equal to 30 inches, and estimated 2-year/24-hour rainfall intensity less than 3 inches.
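The drainage-basin area-ratio transfer mentioned above is a one-line scaling. The unit exponent below is the simplest convention and an assumption here, since agencies often calibrate it regionally; the numbers in the example are invented.

```python
def area_ratio_transfer(q_gaged, area_gaged, area_ungaged, exponent=1.0):
    """Transfer a flow statistic from a gaged to an ungaged basin:
    Q_ungaged = Q_gaged * (A_ungaged / A_gaged) ** b."""
    return q_gaged * (area_ungaged / area_gaged) ** exponent

# e.g., scale a gaged 250 cfs statistic from a 120 mi^2 basin to an 80 mi^2 basin
print(area_ratio_transfer(250.0, 120.0, 80.0))
```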
Kalman filter data assimilation: targeting observations and parameter estimation.
Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex
2014-06-01
This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
Lee, Minhyun; Koo, Choongwan; Hong, Taehoon; Park, Hyo Seon
2014-04-15
For an effective photovoltaic (PV) system, it is necessary to accurately determine the monthly average daily solar radiation (MADSR) and to develop an accurate MADSR map, which can simplify the decision-making process for selecting a suitable location for PV system installation. Therefore, this study aimed to develop a framework for the mapping of the MADSR using advanced case-based reasoning (CBR) and a geostatistical technique. The proposed framework consists of the following procedures: (i) the geographic scope for the mapping of the MADSR is set, and the measured MADSR and meteorological data in the geographic scope are collected; (ii) using the collected data, the advanced CBR model is developed; (iii) using the advanced CBR model, the MADSR at unmeasured locations is estimated; and (iv) by applying the measured and estimated MADSR data to the geographic information system, the MADSR map is developed. A practical validation was conducted by applying the proposed framework to South Korea. It was determined that the MADSR map developed through the proposed framework offers improved accuracy. The developed MADSR map can be used for estimating the MADSR at unmeasured locations and for determining the optimal location for PV system installation.
NASA Astrophysics Data System (ADS)
Lertwiram, Namzilp; Tran, Gia Khanh; Mizutani, Keiichi; Sakaguchi, Kei; Araki, Kiyomichi
Deploying relays can address the shadowing problem between a transmitter (Tx) and a receiver (Rx). Moreover, the Multiple-Input Multiple-Output (MIMO) technique has been introduced to improve wireless link capacity, and it can be applied in relay networks to enhance system performance. However, the efficiency of relaying schemes and relay placement has not been well investigated in experiment-based studies. This paper provides a propagation measurement campaign of a MIMO two-hop relay network in the 5 GHz band in an L-shaped corridor environment with various relay locations. Furthermore, this paper proposes a Relay Placement Estimation (RPE) scheme to identify the optimum relay location, i.e., the point at which the network performance is highest. Channel-capacity analysis shows that relaying is beneficial over direct transmission in a strong shadowing environment, while it is ineffective in a non-shadowing environment. In addition, the optimum relay location estimated with the RPE scheme agrees with the location where the network achieves the highest performance as identified by network capacity. Finally, the capacity analysis shows that two-way MIMO relaying employing network coding performs best, while the cooperative relaying scheme is not effective because shadowing weakens the signal strength of the direct link.
Structural Health Monitoring for Impact Damage in Composite Structures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roach, Dennis P.; Raymond Bond; Doug Adams
Composite structures are increasing in prevalence throughout the aerospace, wind, defense, and transportation industries, but the many advantages of these materials come with unique challenges, particularly in inspecting and repairing these structures. Because composites often undergo sub-surface damage mechanisms which compromise the structure without a clear visual indication, inspection of these components is critical to safely deploying composite replacements to traditionally metallic structures. Impact damage to composites presents one of the most significant challenges because the area which is vulnerable to impact damage is generally large and sometimes very difficult to access. This work seeks to further evolve identification technology by developing a system which can detect the impact load location and magnitude in real time, while giving an assessment of the confidence in that estimate. Furthermore, we identify ways by which impact damage could be more effectively identified by leveraging impact load identification information to better characterize damage. The impact load identification algorithm was applied to a commercial-scale wind turbine blade, and results show the capability to detect impact magnitude and location using a single accelerometer, regardless of sensor location. A technique for better evaluating the uncertainty of the impact estimates was developed by quantifying how well the impact force estimate meets the assumptions underlying the force estimation technique. This uncertainty quantification technique was found to reduce the 95% confidence interval by more than a factor of two for impact force estimates showing the least uncertainty, and to widen the 95% confidence interval by a factor of two for the most uncertain force estimates, avoiding the possibility of understating the uncertainty associated with these estimates. Linear vibration-based damage detection techniques were investigated in the context of structural stiffness reductions and impact damage. A method by which the sensitivity to damage could be increased for simple structures was presented, and the challenges of applying that technique to a more complex structure were identified. The structural dynamic changes in a weak adhesive bond were investigated, and the results showed promise for identifying weak bonds that show little or no static reduction in stiffness. To address these challenges in identifying highly localized impact damage, the possibility of detecting damage through nonlinear dynamic characteristics was also identified, with a proposed technique which would leverage impact location estimates to enable the detection of impact damage. This nonlinear damage identification concept was evaluated on a composite panel with a substructure disbond, and the results showed that the nonlinear dynamics at the damage site could be observed without a baseline healthy reference. By further developing impact load identification technology and combining load and damage estimation techniques into an integrated solution, the challenges associated with impact detection in composite structures can be effectively solved, thereby reducing costs, improving safety, and enhancing the operational readiness and availability of high-value assets.
Estimating population size of Pygoscelid Penguins from TM data
NASA Technical Reports Server (NTRS)
Olson, Charles E., Jr.; Schwaller, Mathew R.; Dahmer, Paul A.
1987-01-01
A step was taken toward a continent-wide population estimate for penguins. The results indicate that Thematic Mapper data can be used to identify penguin rookeries due to the unique reflectance properties of guano. Strong correlations exist between nesting populations and the rookery area occupied by the birds. These correlations allow estimation of the number of nesting pairs in colonies. The success of remote sensing and biometric analyses leads one to believe that a continent-wide estimate of penguin populations is possible based on a timely sample employing ground-based and remote sensing techniques. Satellite remote sensing along the coastline may well locate previously undiscovered penguin nesting sites, or locate rookeries which have been assumed to exist for over half a century but never located. Observations which found that penguins are one of the most sensitive elements in the complex of Southern Ocean ecosystems motivated this study.
Optimization methods for locating lightning flashes using magnetic direction finding networks
NASA Technical Reports Server (NTRS)
Goodman, Steven J.
1989-01-01
Techniques for producing best point estimates of target position using direction finder bearing information are reviewed. The use of an algorithm that calculates the cloud-to-ground flash location given multiple bearings is illustrated and the position errors are described. This algorithm can be used to analyze direction finder network performance.
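As a concrete illustration of the kind of best-point estimate such reviews cover (a minimal sketch, not necessarily the specific algorithm described above; the station positions and bearings below are invented), a least-squares fix minimizes the summed squared perpendicular distances from the candidate location to the measured bearing lines:

```python
import numpy as np

def df_fix(stations, bearings_deg):
    """Least-squares position fix from direction-finder bearings.

    stations    : (N, 2) array of (x, y) station positions
    bearings_deg: N compass bearings (degrees clockwise from north/+y)
    Returns the point minimizing the sum of squared perpendicular
    distances to the N bearing lines.
    """
    th = np.radians(bearings_deg)
    # Each bearing line has direction (sin th, cos th); its unit normal
    # is (cos th, -sin th), so n . (p - s) is the perpendicular miss.
    n = np.column_stack([np.cos(th), -np.sin(th)])
    b = np.einsum('ij,ij->i', n, stations)
    p, *_ = np.linalg.lstsq(n, b, rcond=None)
    return p

# Three direction finders observing a flash near (10, 20):
stations = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 40.0]])
truth = np.array([10.0, 20.0])
d = truth - stations
bearings = np.degrees(np.arctan2(d[:, 0], d[:, 1]))  # clockwise from north
print(df_fix(stations, bearings))  # ~[10. 20.]
```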
Using x-ray mammograms to assist in microwave breast image interpretation.
Curtis, Charlotte; Frayne, Richard; Fear, Elise
2012-01-01
Current clinical breast imaging modalities include ultrasound, magnetic resonance (MR) imaging, and the ubiquitous X-ray mammography. Microwave imaging, which takes advantage of differing electromagnetic properties to obtain image contrast, shows potential as a complementary imaging technique. As an emerging modality, interpretation of 3D microwave images poses a significant challenge. MR images are often used to assist in this task, and X-ray mammograms are readily available. However, X-ray mammograms provide 2D images of a breast under compression, resulting in significant geometric distortion. This paper presents a method to estimate the 3D shape of the breast and locations of regions of interest from standard clinical mammograms. The technique was developed using MR images as the reference 3D shape with the future intention of using microwave images. Twelve breast shapes were estimated and compared to ground truth MR images, resulting in a skin surface estimation accurate to within an average Euclidean distance of 10 mm. The 3D locations of regions of interest were estimated to be within the same clinical area of the breast as corresponding regions seen on MR imaging. These results encourage investigation into the use of mammography as a source of information to assist with microwave image interpretation as well as validation of microwave imaging techniques.
NASA Technical Reports Server (NTRS)
Reed, D. L.; Wallace, R. G.
1981-01-01
The results of system analyses and implementation studies of an advanced location and data collection system (ALDCS), proposed for inclusion on the National Oceanic Satellite System (NOSS) spacecraft, are reported. The system applies Doppler processing and radiofrequency interferometer position location techniques both alone and in combination. Aspects analyzed include: the constraints imposed by random access to the system by platforms, the RF link parameters, geometric concepts of position and velocity estimation by the two techniques considered, and the effects of electrical measurement errors, spacecraft attitude errors, and geometric parameters on estimation accuracy. Hardware techniques and trade-offs for interferometric phase measurement, ambiguity resolution and calibration are considered. A combined Doppler-interferometer ALDCS intended to fulfill the NOSS data validation and oceanic research support mission is also described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Erin A.; Robinson, Sean M.; Anderson, Kevin K.
2015-01-19
Here we present a novel technique for the localization of radiological sources in urban or rural environments from an aerial platform. The technique is based on a Bayesian approach to localization, in which measured count rates in a time series are compared with predicted count rates from a series of pre-calculated test sources to define likelihood. Furthermore, this technique is expanded by using a localized treatment with a limited field of view (FOV), coupled with a likelihood ratio reevaluation, allowing for real-time computation on commodity hardware for arbitrarily complex detector models and terrain. In particular, detectors with inherent asymmetry of response (such as those employing internal collimation or self-shielding for enhanced directional awareness) are leveraged by this approach to provide improved localization. Our results from the localization technique are shown for simulated flight data using monolithic as well as directionally-aware detector models, and the capability of the methodology to locate radioisotopes is estimated for several test cases. This localization technique is shown to facilitate urban search by allowing quick and adaptive estimates of source location, in many cases from a single flyover near a source. In particular, this method represents a significant advancement over earlier methods like full-field Bayesian likelihood, which is not generally fast enough to allow for broad-field search in real time, and highest-net-counts estimation, which has a localization error that depends strongly on flight path and cannot generally operate without exhaustive search.
Underwater passive acoustic localization of Pacific walruses in the northeastern Chukchi Sea.
Rideout, Brendan P; Dosso, Stan E; Hannay, David E
2013-09-01
This paper develops and applies a linearized Bayesian localization algorithm based on acoustic arrival times of marine mammal vocalizations at spatially-separated receivers which provides three-dimensional (3D) location estimates with rigorous uncertainty analysis. To properly account for uncertainty in receiver parameters (3D hydrophone locations and synchronization times) and environmental parameters (water depth and sound-speed correction), these quantities are treated as unknowns constrained by prior estimates and prior uncertainties. Unknown scaling factors on both the prior and arrival-time uncertainties are estimated by minimizing Akaike's Bayesian information criterion (a maximum entropy condition). Maximum a posteriori estimates for sound source locations and times, receiver parameters, and environmental parameters are calculated simultaneously using measurements of arrival times for direct and interface-reflected acoustic paths. Posterior uncertainties for all unknowns incorporate both arrival time and prior uncertainties. Monte Carlo simulation results demonstrate that, for the cases considered here, linearization errors are small and the lack of an accurate sound-speed profile does not cause significant biases in the estimated locations. A sequence of Pacific walrus vocalizations, recorded in the Chukchi Sea northwest of Alaska, is localized using this technique, yielding a track estimate and uncertainties with an estimated speed comparable to normal walrus swim speeds.
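A stripped-down version of the arrival-time localization step can be sketched as a nonlinear least-squares solve for source position and emission time (this omits the paper's priors on receiver and environmental parameters, assumes a constant sound speed and direct paths only, and all geometry and noise values below are invented):

```python
import numpy as np
from scipy.optimize import least_squares

C = 1480.0  # assumed constant sound speed (m/s)

def residuals(x, rec, t_obs):
    """x = (x, y, z, t0): source position (m) and emission time (s)."""
    src, t0 = x[:3], x[3]
    return t0 + np.linalg.norm(rec - src, axis=1) / C - t_obs

# Five spatially separated receivers and a synthetic vocalization:
rec = np.array([[0., 0., 50.], [500., 0., 50.], [0., 500., 50.],
                [500., 500., 10.], [250., 250., 100.]])
src_true, t0_true = np.array([200., 300., 30.]), 0.2
t_obs = t0_true + np.linalg.norm(rec - src_true, axis=1) / C
t_obs += np.random.default_rng(0).normal(0, 1e-4, t_obs.size)  # timing noise

sol = least_squares(residuals, x0=[250., 250., 40., 0.], args=(rec, t_obs))
print(sol.x)  # ~(200, 300, 30, 0.2)

# Linearized 1-sigma uncertainties from the Jacobian at the solution,
# the quantity the full Bayesian treatment generalizes with priors:
cov = np.linalg.inv(sol.jac.T @ sol.jac) * 1e-4 ** 2
print(np.sqrt(np.diag(cov)))
```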
The propagation of wind errors through ocean wave hindcasts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holthuijsen, L.H.; Booij, N.; Bertotti, L.
1996-08-01
To estimate uncertainties in wave forecasts and hindcasts, computations have been carried out for a location in the Mediterranean Sea using three different analyses of one historic wind field. These computations involve a systematic sensitivity analysis and estimated wind field errors. This technique enables a wave modeler to estimate such uncertainties in other forecasts and hindcasts when only one wind analysis is available.
Remote Sensing in Agriculture: An Introductory Review.
ERIC Educational Resources Information Center
Curran, Paul J.
1987-01-01
Discusses the use of remote sensing techniques to obtain locational, estimated, and mapped information at scales varying from individual fields and farms to entire continents and the world. (AEM)
NASA Technical Reports Server (NTRS)
Hoisington, C. M.
1984-01-01
A position estimation algorithm was developed to track a humpback whale tagged with an ARGOS platform after a transmitter deployment failure and the whale's diving behavior precluded standard methods. The algorithm is especially useful where a transmitter location program exists; it determines the classical Keplerian elements from the ARGOS spacecraft position vectors included with the probationary file messages. A minimum of three distinct messages is required. Once the spacecraft orbit is determined, the whale is located using standard least squares regression techniques. Experience suggests that in instances where circumstances inherent in the experiment yield message data unsuitable for the standard ARGOS reduction (message data may be too sparse, span an insufficient period, or include variable-length messages), System ARGOS can still provide much valuable location information if the user is willing to accept the increased location uncertainties.
Optimising the location of antenatal classes.
Tomintz, Melanie N; Clarke, Graham P; Rigby, Janette E; Green, Josephine M
2013-01-01
To combine microsimulation and location-allocation techniques to determine antenatal class locations which minimise the distance travelled from home by potential users. Microsimulation modeling and location-allocation modeling. City of Leeds, UK. Potential users of antenatal classes. An individual-level microsimulation model was built to estimate the number of births for small areas by combining data from the UK Census 2001 and the Health Survey for England 2006. Using this model as a proxy for service demand, we then used a location-allocation model to optimise locations. Different scenarios show the advantage of combining these methods to optimise the (re)location of antenatal classes and therefore reduce inequalities in accessing services for pregnant women. Use of these techniques should lead to better use of resources by allowing planners to identify optimal locations of antenatal classes which minimise women's travel. These results are especially important for health-care planners tasked with the difficult issue of targeting scarce resources in a cost-efficient, but also effective and accessible, manner. Copyright © 2011 Elsevier Ltd. All rights reserved.
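To make the location-allocation step concrete, here is a toy greedy p-median heuristic on invented demand data (not the authors' model of Leeds; all coordinates, counts, and the number of venues are made up):

```python
import numpy as np

# Greedy p-median heuristic: open class venues one at a time so that the
# birth-weighted travel distance from small-area demand points keeps
# falling. All inputs below are synthetic.
rng = np.random.default_rng(0)
demand_xy = rng.uniform(0, 10, (200, 2))      # small-area centroids (km)
births = rng.poisson(5, 200).astype(float)    # microsimulated birth counts
sites_xy = rng.uniform(0, 10, (15, 2))        # candidate venues

d = np.linalg.norm(demand_xy[:, None] - sites_xy[None, :], axis=2)

chosen, nearest = [], np.full(200, np.inf)
for _ in range(4):                            # locate p = 4 classes
    cost = [(births * np.minimum(nearest, d[:, j])).sum() for j in range(15)]
    j = int(np.argmin(cost))
    chosen.append(j)
    nearest = np.minimum(nearest, d[:, j])
print(chosen, round((births * nearest).sum(), 1))
```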
NASA Astrophysics Data System (ADS)
Sidery, T.; Aylott, B.; Christensen, N.; Farr, B.; Farr, W.; Feroz, F.; Gair, J.; Grover, K.; Graff, P.; Hanna, C.; Kalogera, V.; Mandel, I.; O'Shaughnessy, R.; Pitkin, M.; Price, L.; Raymond, V.; Röver, C.; Singer, L.; van der Sluys, M.; Smith, R. J. E.; Vecchio, A.; Veitch, J.; Vitale, S.
2014-04-01
The problem of reconstructing the sky position of compact binary coalescences detected via gravitational waves is a central one for future observations with the ground-based network of gravitational-wave laser interferometers, such as Advanced LIGO and Advanced Virgo. Different techniques for sky localization have been independently developed. They can be divided into two broad categories: fully coherent Bayesian techniques, which are high latency and aimed at in-depth studies of all the parameters of a source, including sky position, and "triangulation-based" techniques, which exploit the data products from the search stage of the analysis to provide an almost real-time approximation of the posterior probability density function of the sky location of a detection candidate. These techniques have previously been applied to data collected during the last science runs of gravitational-wave detectors operating in the so-called initial configuration. Here, we develop and analyze methods for assessing the self consistency of parameter estimation methods and carrying out fair comparisons between different algorithms, addressing issues of efficiency and optimality. These methods are general, and can be applied to parameter estimation problems other than sky localization. We apply these methods to two existing sky localization techniques representing the two above-mentioned categories, using a set of simulated inspiral-only signals from compact binary systems with a total mass of ≤20M⊙ and nonspinning components. We compare the relative advantages and costs of the two techniques and show that sky location uncertainties are on average a factor ≈20 smaller for fully coherent techniques than for the specific variant of the triangulation-based technique used during the last science runs, at the expense of a factor ≈1000 longer processing time.
NASA Astrophysics Data System (ADS)
Tam, Kai-Chung; Lau, Siu-Kit; Tang, Shiu-Keung
2016-07-01
A microphone array signal processing method for locating a stationary point source over a locally reactive ground and for estimating ground impedance is examined in detail in the present study. A non-linear least squares approach using the Levenberg-Marquardt method is proposed to overcome the problem of unknown ground impedance. The multiple signal classification method (MUSIC) is used to give the initial estimate of the source location, while the technique of forward-backward spatial smoothing is adopted as a pre-processor of the source localization to minimize the effects of source coherence. The accuracy and robustness of the proposed signal processing method are examined. Results show that source localization in the horizontal direction by MUSIC is satisfactory. However, source coherence drastically reduces the accuracy in estimating the source height. The further application of the Levenberg-Marquardt method, with the results from MUSIC as the initial inputs, significantly improves the accuracy of source height estimation. The proposed method provides effective and robust estimation of the ground surface impedance.
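The refinement stage can be sketched as follows: a two-ray (direct plus image-source) model of the pressure field is fitted to complex microphone data by Levenberg-Marquardt, starting from a coarse guess standing in for the MUSIC stage. The frequency, array geometry, and reflection factor are invented, and the actual method parameterizes the ground by its impedance rather than a single reflection coefficient:

```python
import numpy as np
from scipy.optimize import least_squares

K = 2 * np.pi * 100 / 343.0  # wavenumber at an assumed 100 Hz

def field(src, q, mics):
    """Point source above a reflecting plane at z = 0: direct ray plus
    an image source weighted by a complex reflection factor q."""
    img = src * np.array([1.0, 1.0, -1.0])
    r1 = np.linalg.norm(mics - src, axis=1)
    r2 = np.linalg.norm(mics - img, axis=1)
    return np.exp(1j * K * r1) / r1 + q * np.exp(1j * K * r2) / r2

def resid(theta, mics, p_obs):
    src, q = theta[:3], theta[3] + 1j * theta[4]
    d = field(src, q, mics) - p_obs
    return np.concatenate([d.real, d.imag])   # LM needs real residuals

rng = np.random.default_rng(1)
mics = np.column_stack([rng.uniform(-2, 2, 12), rng.uniform(-2, 2, 12),
                        np.full(12, 1.5)])    # planar array at 1.5 m height
p_obs = field(np.array([4.0, 3.0, 1.0]), 0.6 - 0.2j, mics)

# Coarse initial guess (standing in for MUSIC), then LM refinement:
sol = least_squares(resid, [3.8, 2.8, 1.3, 0.5, 0.0], method='lm',
                    args=(mics, p_obs))
print(sol.x)  # ~[4, 3, 1, 0.6, -0.2] from a close enough starting point
```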
Using Monte Carlo/Gaussian Based Small Area Estimates to Predict Where Medicaid Patients Reside.
Behrens, Jess J; Wen, Xuejin; Goel, Satyender; Zhou, Jing; Fu, Lina; Kho, Abel N
2016-01-01
Electronic Health Records (EHR) are rapidly becoming accepted as tools for planning and population health [1,2]. With the national dialogue around Medicaid expansion [12], the role of EHR data has become even more important. For their potential to be fully realized and contribute to these discussions, techniques for creating accurate small area estimates are vital. As such, we examined the efficacy of developing small area estimates for Medicaid patients in two locations, Albuquerque and Chicago, by using a Monte Carlo/Gaussian technique that has worked in accurately locating registered voters in North Carolina [11]. The Albuquerque data, which include patient address, will first be used to assess the accuracy of the methodology. Subsequently, they will be combined with the EHR data from Chicago to develop a regression that predicts Medicaid patients by US Block Group. We seek to create a tool that is effective in translating EHR data's potential for population health studies.
Evaluation of a technique for satellite-derived area estimation of forest fires
NASA Technical Reports Server (NTRS)
Cahoon, Donald R., Jr.; Stocks, Brian J.; Levine, Joel S.; Cofer, Wesley R., III; Chung, Charles C.
1992-01-01
The Advanced Very High Resolution Radiometer (AVHRR) has been found useful for the location and monitoring of both smoke and fires because of the daily observations, the large geographical coverage of the imagery, the spectral characteristics of the instrument, and the spatial resolution of the instrument. This paper discusses the application of AVHRR data to assess the geographical extent of burning. Methods have been developed to estimate the area burned by analyzing the surface area affected by fire with AVHRR imagery. Characteristics of the AVHRR instrument, its orbit, field of view, and archived data sets are discussed relative to the unique surface area of each pixel. The errors associated with this surface area estimation technique are determined using AVHRR-derived area estimates of target regions with known sizes. This technique is used to evaluate the area burned during the Yellowstone fires of 1988.
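The per-pixel surface area mentioned above grows with scan angle; a flat-Earth sketch (it ignores Earth curvature, which matters near the AVHRR swath edge, and the altitude and IFOV values are nominal assumptions) illustrates why burned-area sums must use the unique area of each pixel:

```python
import numpy as np

H = 833e3      # nominal AVHRR orbit altitude (m)
IFOV = 1.4e-3  # instantaneous field of view (rad), ~1.2 km at nadir

def pixel_area_km2(scan_angle_deg):
    """Flat-Earth footprint of one AVHRR pixel at a given scan angle."""
    th = np.radians(scan_angle_deg)
    across = H * IFOV / np.cos(th) ** 2   # along-scan dimension grows fastest
    along = H * IFOV / np.cos(th)         # along-track dimension
    return across * along / 1e6

for ang in (0, 20, 40, 55):
    print(f"{ang:2d} deg: {pixel_area_km2(ang):5.2f} km^2")
# Burned area is then the sum of pixel_area_km2 over fire-flagged pixels;
# an edge pixel covers several times the area of a nadir pixel.
```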
Weighted least squares techniques for improved received signal strength based localization.
Tarrío, Paula; Bernardos, Ana M; Casar, José R
2011-01-01
The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is to be perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency of having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling.
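A minimal sketch of the circular (range-based) weighted variant follows; the path-loss parameters and noise level are assumptions, and the paper's hyperbolic variant and full weighting derivation are omitted. Ranges are inverted from RSS through a log-distance model, and each range residual is weighted by the standard deviation propagated from the RSS uncertainty:

```python
import numpy as np
from scipy.optimize import least_squares

P0, N_EXP = -40.0, 3.0  # assumed model: RSS(d) = P0 - 10 n log10(d), d in m

def rss_to_range(rss):
    return 10 ** ((P0 - rss) / (10 * N_EXP))

def wls_position(anchors, rss, sigma_rss):
    """Circular weighted least squares: each range residual is scaled by
    the inverse of its std dev, propagated from the RSS uncertainty."""
    d_hat = rss_to_range(rss)
    sigma_d = d_hat * np.log(10) / (10 * N_EXP) * sigma_rss  # |dd/dRSS|*sigma
    def resid(p):
        return (np.linalg.norm(anchors - p, axis=1) - d_hat) / sigma_d
    return least_squares(resid, x0=anchors.mean(axis=0)).x

anchors = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
truth = np.array([3., 7.])
d = np.linalg.norm(anchors - truth, axis=1)
rng = np.random.default_rng(0)
rss = P0 - 10 * N_EXP * np.log10(d) + rng.normal(0, 2.0, 4)  # noisy readings
print(wls_position(anchors, rss, sigma_rss=2.0))  # near (3, 7)
```

Weighting the residuals this way simply down-weights the long, noisy ranges, which is why the approach stays robust when the channel model is only roughly calibrated.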
Techniques for estimating magnitude and frequency of floods on streams in Indiana
Glatfelter, D.R.
1984-01-01
A rainfall-runoff model was used to synthesize long-term peak data at 11 gaged locations on small streams. Flood-frequency curves developed from the long-term synthetic data were combined with curves based on short-term observed data to provide weighted estimates of flood magnitude and frequency at the rainfall-runoff stations.
NASA Astrophysics Data System (ADS)
Sharan, Maithili; Singh, Amit Kumar; Singh, Sarvesh Kumar
2017-11-01
Estimation of an unknown atmospheric release from a finite set of concentration measurements is considered an ill-posed inverse problem. Besides ill-posedness, the estimation process is influenced by instrumental errors in the measured concentrations and by model representativity errors. The study highlights the effect of minimizing model representativity errors on the source estimation. This is described in an adjoint modelling framework and followed in three steps. First, an estimation of point source parameters (location and intensity) is carried out using an inversion technique. Second, a linear regression relationship is established between the measured concentrations and those predicted using the retrieved source parameters. Third, this relationship is utilized to modify the adjoint functions. Further, source estimation is carried out using these modified adjoint functions to analyse the effect of such modifications. The process is tested for two well-known inversion techniques, renormalization and least squares. The proposed methodology and inversion techniques are evaluated for a real scenario by using concentration measurements from the Idaho diffusion experiment in low-wind stable conditions. With both inversion techniques, a significant improvement is observed in the retrieved source estimates after minimizing the representativity errors.
Bakun, W.H.; Scotti, O.
2006-01-01
Intensity assignments for 33 calibration earthquakes were used to develop intensity attenuation models for the Alps, Armorican, Provence, Pyrenees and Rhine regions of France. Intensity decreases with distance most rapidly in the French Alps, Provence and Pyrenees regions, and least rapidly in the Armorican and Rhine regions. The comparable Armorican and Rhine region attenuation models are aggregated into a French stable continental region model, and the comparable Provence and Pyrenees region models are aggregated into a Southern France model. We analyse MSK intensity assignments using the technique of Bakun & Wentworth, which provides an objective method for estimating epicentral location and intensity magnitude MI. MI for the 1356 October 18 earthquake in the French stable continental region is 6.6 for a location near Basle, Switzerland, and moment magnitude M is 5.9-7.2 at the 95 per cent (±2σ) confidence level. MI for the 1909 June 11 Trevaresse (Lambesc) earthquake near Marseilles in the Southern France region is 5.5, and M is 4.9-6.0 at the 95 per cent confidence level. Bootstrap resampling techniques are used to calculate objective, reproducible 67 per cent and 95 per cent confidence regions for the locations of historical earthquakes. These confidence regions for location provide an attractive alternative to the macroseismic epicentre and qualitative location uncertainties used heretofore. © 2006 The Authors. Journal compilation © 2006 RAS.
A statistical evaluation of non-ergodic variogram estimators
Curriero, F.C.; Hohn, M.E.; Liebhold, A.M.; Lele, S.R.
2002-01-01
Geostatistics is a set of statistical techniques that is increasingly used to characterize spatial dependence in spatially referenced ecological data. A common feature of geostatistics is predicting values at unsampled locations from nearby samples using the kriging algorithm. Modeling spatial dependence in sampled data is necessary before kriging and is usually accomplished with the variogram and its traditional estimator. Other types of estimators, known as non-ergodic estimators, have been used in ecological applications. Non-ergodic estimators were originally suggested as a method of choice when sampled data are preferentially located and exhibit a skewed frequency distribution. Preferentially located samples can occur, for example, when areas with high values are sampled more intensely than other areas. In earlier studies the visual appearance of variograms from traditional and non-ergodic estimators was compared. Here we evaluate the estimators' relative performance in prediction. We also show algebraically that a non-ergodic version of the variogram is equivalent to the traditional variogram estimator. Simulations, designed to investigate the effects of data skewness and preferential sampling on variogram estimation and kriging, showed the traditional variogram estimator outperforms the non-ergodic estimators under these conditions. We also analyzed data on carabid beetle abundance, which exhibited large-scale spatial variability (trend) and a skewed frequency distribution. Detrending data followed by robust estimation of the residual variogram is demonstrated to be a successful alternative to the non-ergodic approach.
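For reference, the traditional (Matheron) estimator that the non-ergodic variants are compared against is straightforward to compute; a self-contained sketch on synthetic data (the field and the lag bins are invented):

```python
import numpy as np

def empirical_variogram(coords, values, bin_edges):
    """Traditional (Matheron) estimator:
    gamma(h) = (1 / 2N(h)) * sum over pairs separated by ~h
               of (z_i - z_j)^2
    """
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)      # count each pair once
    d, sq = d[iu], sq[iu]
    centers, gamma = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        m = (d >= lo) & (d < hi)
        if m.any():
            centers.append(d[m].mean())
            gamma.append(sq[m].mean() / 2.0)
    return np.array(centers), np.array(gamma)

# Synthetic spatially-correlated field at random sample locations:
rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, (300, 2))
vals = np.sin(xy[:, 0] / 20) + 0.3 * rng.normal(size=300)
h, g = empirical_variogram(xy, vals, np.linspace(0, 50, 11))
print(np.round(g, 3))   # semivariance rising with lag distance
```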
Estimation of bladder wall location in ultrasound images.
Topper, A K; Jernigan, M E
1991-05-01
A method of automatically estimating the location of the bladder wall in ultrasound images is proposed. Obtaining this estimate is intended to be the first stage in the development of an automatic bladder volume calculation system. The first step in the bladder wall estimation scheme involves globally processing the images using standard image processing techniques to highlight the bladder wall. Separate processing sequences are required to highlight the anterior bladder wall and the posterior bladder wall. The sequence to highlight the anterior bladder wall involves Gaussian smoothing and second differencing followed by zero-crossing detection. Median filtering followed by thresholding and gradient detection is used to highlight as much of the rest of the bladder wall as was visible in the original images. Then a 'bladder wall follower'--a line follower with rules based on the characteristics of ultrasound imaging and the anatomy involved--is applied to the processed images to estimate the bladder wall location by following the portions of the bladder wall which are highlighted and filling in the missing segments. The results achieved using this scheme are presented.
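The anterior-wall sequence (Gaussian smoothing, second differencing, zero-crossing detection) maps directly onto standard array operations; a toy sketch on a synthetic frame, with the smoothing scale, gating threshold, and lumen geometry all invented:

```python
import numpy as np
from scipy import ndimage

def highlight_anterior_wall(img, sigma=3.0, tau=1.0):
    """Gaussian smoothing + axial second differencing + zero-crossing
    detection, the sequence described for the anterior bladder wall.
    img: 2-D frame, rows = increasing depth. tau gates out weak
    zero crossings caused by noise. Returns a boolean edge map."""
    sm = ndimage.gaussian_filter(img.astype(float), sigma)
    d2 = np.diff(sm, n=2, axis=0)              # axial second difference
    cross = np.signbit(d2[:-1]) != np.signbit(d2[1:])
    strong = np.abs(d2[:-1]) + np.abs(d2[1:]) > tau
    zc = np.zeros(img.shape, dtype=bool)
    zc[2:-1, :] = cross & strong               # offset for the two diffs
    return zc

# Toy frame: a dark (anechoic) band standing in for the bladder lumen.
rng = np.random.default_rng(0)
frame = np.full((128, 64), 180.0)
frame[40:80, :] = 20.0                         # lumen
frame += rng.normal(0, 5, frame.shape)         # crude stand-in for speckle
edges = highlight_anterior_wall(frame)
print(np.nonzero(edges[:, 32])[0])             # rows near 40 and 80
```

A line follower with anatomy-based rules, as the abstract describes, would then trace these highlighted segments and bridge the gaps.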
Plotz, Roan D.; Grecian, W. James; Kerley, Graham I.H.; Linklater, Wayne L.
2016-01-01
Comparisons of recent estimations of home range sizes for the critically endangered black rhinoceros in Hluhluwe-iMfolozi Park (HiP), South Africa, with historical estimates led reports of a substantial (54%) increase, attributed to over-stocking and habitat deterioration that has far-reaching implications for rhino conservation. Other reports, however, suggest the increase is more likely an artefact caused by applying various home range estimators to non-standardised datasets. We collected 1939 locations of 25 black rhino over six years (2004–2009) to estimate annual home ranges and evaluate the hypothesis that they have increased in size. A minimum of 30 and 25 locations were required for accurate 95% MCP estimation of home range of adult rhinos, during the dry and wet seasons respectively. Forty and 55 locations were required for adult female and male annual MCP home ranges, respectively, and 30 locations were necessary for estimating 90% bivariate kernel home ranges accurately. Average annual 95% bivariate kernel home ranges were 20.4 ± 1.2 km2, 53 ±1.9% larger than 95% MCP ranges (9.8 km2 ± 0.9). When home range techniques used during the late-1960s in HiP were applied to our dataset, estimates were similar, indicating that ranges have not changed substantially in 50 years. Inaccurate, non-standardised, home range estimates and their comparison have the potential to mislead black rhino population management. We recommend that more care be taken to collect adequate numbers of rhino locations within standardized time periods (i.e., season or year) and that the comparison of home ranges estimated using dissimilar procedures be avoided. Home range studies of black rhino have been data deficient and procedurally inconsistent. Standardisation of methods is required. PMID:27028728
Use of LANDSAT 2 data technique to estimate silverleaf sunflower infestation
NASA Technical Reports Server (NTRS)
Richardson, A. J.; Escobar, D. E.; Gausman, H. W.; Everitt, J. H. (Principal Investigator)
1982-01-01
The feasibility of using Earth Resources Technology Satellite (LANDSAT-2) multispectral scanner (MSS) data was tested to distinguish silverleaf sunflowers (Helianthus argophyllus Torr. and Gray) from other plant species and to estimate the percentage of hectares infested. Sunflowers gave high mean digital counts in all four LANDSAT MSS bands, manifested as a pinkish image response on the LANDSAT color composite imagery. Photo- and LANDSAT-estimated hectare percentages for silverleaf sunflower within a 23,467 ha study area were 9.1 and 9.5%, respectively. The geographic occurrence of sunflower areas on the line-printer recognition map was in good agreement with their known aerial photographic locations.
Estimating bridge stiffness using a forced-vibration technique for timber bridge health monitoring
James P. Wacker; Xiping Wang; Brian Brashaw; Robert J. Ross
2006-01-01
This paper describes an effort to refine a global dynamic testing technique for evaluating the overall stiffness of timber bridge superstructures. A forced vibration method was used to measure the frequency response of several simple-span, sawn timber beam (with plank deck) bridges located in St. Louis County, Minnesota. Static load deflections were also measured to...
Sen, Novonil; Kundu, Tribikram
2018-07-01
Estimating the location of an acoustic source in a structure is an important step towards passive structural health monitoring. Techniques for localizing an acoustic source in isotropic structures are well developed in the literature. Development of similar techniques for anisotropic structures, however, has gained attention only in recent years and leaves scope for further improvement. Most of the existing techniques for anisotropic structures either assume a straight-line wave propagation path between the source and an ultrasonic sensor or require the material properties to be known. This study considers different shapes of the wave front generated during an acoustic event and develops a methodology to localize the acoustic source in an anisotropic plate from those wave front shapes. An elliptical wave front shape-based technique was developed first, followed by the development of a parametric curve-based technique for non-elliptical wave front shapes. The source coordinates are obtained by minimizing an objective function. The proposed methodology does not assume a straight-line wave propagation path and can predict the source location without any knowledge of the elastic properties of the material. A numerical study presented here illustrates how the proposed methodology can accurately estimate the source coordinates. Copyright © 2018 Elsevier B.V. All rights reserved.
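For the elliptical case, the idea can be sketched as follows (a toy version under assumed geometry: the wave front is an ellipse with unknown principal group speeds, and the objective is built from arrival-time differences so the unknown trigger time cancels; the paper's parametric-curve treatment of non-elliptical fronts is not shown):

```python
import numpy as np
from scipy.optimize import least_squares

def travel_time(src, sensors, v1, v2):
    """Arrival times for an elliptical wave front with principal group
    speeds v1 (x axis) and v2 (y axis): t^2 = (dx/v1)^2 + (dy/v2)^2."""
    dx = sensors - src
    return np.sqrt((dx[:, 0] / v1) ** 2 + (dx[:, 1] / v2) ** 2)

def resid(theta, sensors, dt_obs):
    src, v1, v2 = theta[:2], theta[2], theta[3]
    t = travel_time(src, sensors, v1, v2)
    return (t[1:] - t[0]) - dt_obs        # time differences: trigger cancels

sensors = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.],
                    [0.5, 0.], [1., 0.5]])          # plate corners + edges (m)
src_true, v1_true, v2_true = np.array([0.3, 0.6]), 6000., 4000.
t = travel_time(src_true, sensors, v1_true, v2_true)
dt_obs = t[1:] - t[0]

sol = least_squares(resid, x0=[0.5, 0.5, 5000., 5000.],
                    args=(sensors, dt_obs),
                    bounds=([0, 0, 1000, 1000], [1, 1, 10000, 10000]))
print(sol.x)   # ~[0.3, 0.6, 6000, 4000]: source found without knowing v1, v2
```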
NASA Astrophysics Data System (ADS)
Karimi, Kurosh; Shirzaditabar, Farzad
2017-08-01
The analytic signal magnitude of the magnetic field's components and their first derivatives has been employed for locating magnetic structures that can be treated as point dipoles or lines of dipoles. Although similar methods have been used for locating such magnetic anomalies, they cannot estimate the positions of anomalies in noisy states with acceptable accuracy. The methods are also inexact in determining the depth of deep anomalies. In noisy cases and at places other than the poles, the maximum points of the magnitude of the magnetic vector components and Az are not located exactly above 3D bodies. Consequently, the horizontal location estimates of bodies are accompanied by errors. Here, the previous methods are altered and generalized to locate deeper models in the presence of noise, even at lower magnetic latitudes. In addition, a statistical technique is presented for working in noisy areas, and a new method that gains resistance to noise through a 'depths mean' approach is developed. Reduction-to-the-pole transformation is also used to find the most probable actual horizontal body location. Deep models are also well estimated. The method is tested on real magnetic data over an urban gas pipeline in the vicinity of Kermanshah province, Iran. The estimated location of the pipeline agrees well with the result of the half-width method.
Optimizing focal plane electric field estimation for detecting exoplanets
NASA Astrophysics Data System (ADS)
Groff, T.; Kasdin, N. J.; Riggs, A. J. E.
Detecting extrasolar planets with angular separations and contrast levels similar to Earth requires a large space-based observatory and advanced starlight suppression techniques. This paper focuses on techniques employing an internal coronagraph, which is highly sensitive to optical errors and must rely on focal plane wavefront control techniques to achieve the necessary contrast levels. To maximize the available science time for a coronagraphic mission, we demonstrate an estimation scheme using a discrete-time Kalman filter. The state estimate feedback inherent to the filter allows us to minimize the number of exposures required to estimate the electric field. We also show progress in including a bias estimate in the Kalman filter to eliminate incoherent light from the estimate. Since the exoplanets themselves are incoherent with the starlight, this has the added benefit of using the control history to gain certainty in the location of exoplanet candidates as the signal-to-noise ratio between the planets and speckles improves. Having established a purely focal-plane-based wavefront estimation technique, we discuss a sensor fusion concept where alternate wavefront sensors feed forward a time update to the focal plane estimate to improve robustness to time-varying speckle. The overall goal of this work is to reduce the time required for wavefront control on a target, thereby improving the observatory's planet detection performance by increasing the number of targets reachable during the lifespan of the mission.
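A scalar toy version of the bias-augmented filter idea (not the authors' formulation: real focal-plane estimation works on complex fields with deformable-mirror probe pairs, whereas here a known probe sign u_k simply modulates a coherent term sitting on top of a constant incoherent bias):

```python
import numpy as np

# Scalar toy: each exposure measures u_k * x_coh + x_bias + noise, where
# u_k is a known probe modulation. Augmenting the state with the bias
# lets the filter reject the unmodulated (incoherent) light.
Q = np.diag([1e-6, 1e-8])          # process noise (coherent, bias)
R = 1e-2                           # exposure noise variance

def kf_step(x, P, z, u):
    P = P + Q                      # time update (identity dynamics)
    H = np.array([[u, 1.0]])       # probe-dependent measurement row
    S = float(H @ P @ H.T) + R
    K = (P @ H.T / S).ravel()
    x = x + K * (z - float(H @ x))
    P = (np.eye(2) - np.outer(K, H)) @ P
    return x, P

rng = np.random.default_rng(2)
coh, bias = 0.5, 0.2               # "truth" to be separated
x, P = np.zeros(2), np.eye(2)
for k in range(200):
    u = 1.0 if k % 2 == 0 else -1.0        # alternating probe sign
    z = u * coh + bias + rng.normal(0, R ** 0.5)
    x, P = kf_step(x, P, z, u)
print(x)   # -> approximately [0.5, 0.2]
```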
Fusion-based multi-target tracking and localization for intelligent surveillance systems
NASA Astrophysics Data System (ADS)
Rababaah, Haroun; Shirkhodaie, Amir
2008-04-01
In this paper, we present two approaches addressing visual target tracking and localization in complex urban environments: fusion-based multi-target visual tracking, and multi-target localization via camera calibration. For multi-target tracking, the data fusion concepts of hypothesis generation/evaluation/selection, target-to-target registration, and association are employed. An association matrix is implemented using RGB histograms for associated tracking of multiple targets of interest. Motion segmentation of targets of interest (TOI) from the background was achieved by a Gaussian Mixture Model. Foreground segmentation, on the other hand, was achieved by the Connected Components Analysis (CCA) technique. The tracking of individual targets was estimated by fusing two sources of information: the centroid with spatial gating, and the RGB histogram association matrix. The localization problem is addressed through an effective camera calibration technique using edge modeling for grid mapping (EMGM). A two-stage image-pixel-to-world-coordinates mapping technique is introduced that performs coarse and fine location estimation of moving TOIs. In coarse estimation, an approximate neighborhood of the target position is estimated based on a nearest 4-neighbor method; in fine estimation, Euclidean interpolation is used to localize the position within the estimated four neighbors. Both techniques were tested and showed reliable results for tracking and localization of targets of interest in complex urban environments.
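The motion-segmentation front end described above (Gaussian mixture background model followed by connected-components analysis) can be sketched with OpenCV's stock operators; the video file name and the thresholds below are placeholders, not the authors' settings:

```python
import cv2
import numpy as np

# GMM motion segmentation + connected components: the two front-end
# steps of the tracking pipeline, on a hypothetical input clip.
cap = cv2.VideoCapture("surveillance.mp4")      # placeholder file name
mog = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = mog.apply(frame)                       # GMM foreground mask
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(fg)
    for i in range(1, n):                       # label 0 is background
        if stats[i, cv2.CC_STAT_AREA] > 200:    # drop small blobs
            x, y, w, h = map(int, stats[i, :4])
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    # centroids[i] would feed the association/gating stage of the tracker
cap.release()
```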
NASA Astrophysics Data System (ADS)
Sehad, Mounir; Lazri, Mourad; Ameur, Soltane
2017-03-01
In this work, a new rainfall estimation technique based on the high spatial and temporal resolution of the Spinning Enhanced Visible and Infra-Red Imager (SEVIRI) aboard the Meteosat Second Generation (MSG) satellite is presented. This work proposes an efficient rainfall estimation scheme based on two multiclass support vector machine (SVM) algorithms: SVM_D for daytime and SVM_N for nighttime rainfall estimation. Both SVM models are trained using relevant rainfall parameters based on optical, microphysical and textural cloud properties. The cloud parameters are derived from the spectral channels of the SEVIRI MSG radiometer. The 3-hourly and daily accumulated rainfall are derived from the 15-min rainfall estimates given by the SVM classifiers for each MSG observation image pixel. The SVMs were trained with ground meteorological radar precipitation scenes recorded from November 2006 to March 2007 over the north of Algeria, located in the Mediterranean region. Further, the SVM_D and SVM_N models were used to estimate 3-hourly and daily rainfall using a data set gathered from November 2010 to March 2011 over northern Algeria. The results were validated against collocated rainfall observed by a rain gauge network. The statistical scores given by the correlation coefficient, bias, root mean square error and mean absolute error showed good accuracy of rainfall estimates by the present technique. Moreover, rainfall estimates of our technique were compared with two high-accuracy rainfall estimation methods based on MSG SEVIRI imagery: a random forests (RF) based approach and an artificial neural network (ANN) based technique. The present technique yields a higher correlation coefficient (3-hourly: 0.78; daily: 0.94) and lower mean absolute error and root mean square error values. The results show that the new technique estimates 3-hourly and daily rainfall with better accuracy than the ANN technique and the RF model.
Michael J. Falkowski; Alistair M.S. Smith; Andrew T. Hudak; Paul E. Gessler; Lee A. Vierling; Nicholas L. Crookston
2006-01-01
We describe and evaluate a new analysis technique, spatial wavelet analysis (SWA), to automatically estimate the location, height, and crown diameter of individual trees within mixed conifer open canopy stands from light detection and ranging (lidar) data. Two-dimensional Mexican hat wavelets, over a range of likely tree crown diameters, were convolved with lidar...
Spatial and spectral interpolation of ground-motion intensity measure observations
Worden, Charles; Thompson, Eric M.; Baker, Jack W.; Bradley, Brendon A.; Luco, Nicolas; Wilson, David
2018-01-01
Following a significant earthquake, ground‐motion observations are available for a limited set of locations and intensity measures (IMs). Typically, however, it is desirable to know the ground motions for additional IMs and at locations where observations are unavailable. Various interpolation methods are available, but because IMs or their logarithms are normally distributed, spatially correlated, and correlated with each other at a given location, it is possible to apply the conditional multivariate normal (MVN) distribution to the problem of estimating unobserved IMs. In this article, we review the MVN and its application to general estimation problems, and then apply the MVN to the specific problem of ground‐motion IM interpolation. In particular, we present (1) a formulation of the MVN for the simultaneous interpolation of IMs across space and IM type (most commonly, spectral response at different oscillator periods) and (2) the inclusion of uncertain observation data in the MVN formulation. These techniques, in combination with modern empirical ground‐motion models and correlation functions, provide a flexible framework for estimating a variety of IMs at arbitrary locations.
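The core computation is the standard conditional-MVN update; a compact sketch with an assumed exponential spatial correlation model and invented sites and observations (the full framework additionally uses empirical cross-IM correlation functions and handles uncertain observations):

```python
import numpy as np

def mvn_condition(mu_u, mu_o, C_uu, C_uo, C_oo, z_o):
    """Conditional MVN: distribution of unobserved log-IMs given
    observed ones. mu_* are prior (ground-motion-model) means,
    C_* are prior covariance blocks, z_o the observed log-IMs."""
    A = C_uo @ np.linalg.inv(C_oo)
    mu_cond = mu_u + A @ (z_o - mu_o)
    C_cond = C_uu - A @ C_uo.T
    return mu_cond, C_cond

# Sites on a line; exponential spatial correlation of residuals.
x = np.array([0.0, 5.0, 10.0, 20.0])      # site positions (km)
sigma, corr_len = 0.6, 10.0               # log-IM std dev, corr. length
h = np.abs(x[:, None] - x[None, :])
C = sigma ** 2 * np.exp(-h / corr_len)
mu = np.full(4, np.log(0.1))              # model median of 0.1 g everywhere

obs, uno = [0, 1], [2, 3]                 # first two sites observed
z_o = np.log([0.18, 0.12])                # observed PGA (g)
m, Cc = mvn_condition(mu[uno], mu[obs], C[np.ix_(uno, uno)],
                      C[np.ix_(uno, obs)], C[np.ix_(obs, obs)], z_o)
print(np.exp(m), np.sqrt(np.diag(Cc)))    # conditioned medians, std devs
```

Note how the conditioned standard deviations shrink below sigma near the observations and relax back toward the prior with distance, which is exactly the behavior wanted for interpolated shaking maps.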
Modifying high-order aeroelastic math model of a jet transport using maximum likelihood estimation
NASA Technical Reports Server (NTRS)
Anissipour, Amir A.; Benson, Russell A.
1989-01-01
The design of control laws to damp flexible structural modes requires accurate math models. Unlike the design of control laws for rigid body motion (e.g., where robust control is used to compensate for modeling inaccuracies), structural mode damping usually employs narrow-band notch filters. In order to obtain the required accuracy in the math model, the maximum likelihood estimation technique is employed to improve the accuracy of the math model using flight data. Presented here are all phases of this methodology: (1) pre-flight analysis (i.e., optimal input signal design for flight test, sensor location determination, model reduction technique, etc.), (2) data collection and preprocessing, and (3) post-flight analysis (i.e., estimation technique and model verification). In addition, a discussion is presented of the software tools used and the need for future study in this field.
Shackelford, S D; Wheeler, T L; Koohmaraie, M
2004-03-01
Experiments were conducted to compare the effects of two cookery methods, two shear force procedures, and sampling location within non-callipyge and callipyge lamb LM on the magnitude, variance, and repeatability of LM shear force data. In Exp. 1, 15 non-callipyge and 15 callipyge carcasses were sampled, and Warner-Bratzler shear force (WBSF) was determined for both sides of each carcass at three locations along the length (anterior to posterior) of the LM, whereas slice shear force (SSF) was determined for both sides of each carcass at only one location. For approximately half the carcasses within each genotype, LM chops were cooked for a constant amount of time using a belt grill, and chops of the remaining carcasses were cooked to a constant endpoint temperature using open-hearth electric broilers. Regardless of cooking method and sampling location, repeatability estimates were at least 0.8 for LM WBSF and SSF. For WBSF, repeatability estimates were slightly higher at the anterior location (0.93 to 0.98) than the posterior location (0.88 to 0.90). The difference in repeatability between locations was probably a function of a greater level of variation in shear force at the anterior location. For callipyge LM, WBSF was higher (P < 0.001) at the anterior location than at the middle or posterior locations. For non-callipyge LM, WBSF was lower (P < 0.001) at the anterior location than at the middle or posterior locations. Consequently, the difference in WBSF between callipyge and non-callipyge LM was largest at the anterior location. Experiment 2 was conducted to obtain an estimate of the repeatability of SSF for lamb LM chops cooked with the belt grill using a larger number of animals (n = 87). In Exp. 2, LM chops were obtained from matching locations of both sides of 44 non-callipyge and 43 callipyge carcasses. Chops were cooked with a belt grill and SSF was measured, and repeatability was estimated to be 0.95. Repeatable estimates of lamb LM tenderness can be achieved either by cooking to a constant endpoint temperature with electric broilers or cooking for a constant amount of time with a belt grill. Likewise, repeatable estimates of lamb LM tenderness can be achieved with WBSF or SSF. However, use of belt grill cookery and the SSF technique could decrease time requirements which would decrease research costs.
An Automated Technique for Estimating Daily Precipitation over the State of Virginia
NASA Technical Reports Server (NTRS)
Follansbee, W. A.; Chamberlain, L. W., III
1981-01-01
Digital IR and visible imagery obtained from a geostationary satellite located over the equator at 75 deg west longitude were provided by NASA and used to obtain a linear relationship between cloud top temperature and hourly precipitation. Two computer programs written in FORTRAN were used. The first program computes the satellite estimate field from the hourly digital IR imagery. The second program computes the final estimate for the entire state area by comparing five preliminary estimates of 24 hour precipitation with control raingage readings and determining which of the five methods gives the best estimate for the day. The final estimate is then produced by incorporating control gage readings into the winning method. In producing reliable precipitation estimates for every cell in Virginia in near real time on a daily ongoing basis, the techniques require on the order of 125 to 150 daily gage readings by dependable, highly motivated observers distributed as uniformly as feasible across the state.
Microseismic Image-domain Velocity Inversion: Case Study From The Marcellus Shale
NASA Astrophysics Data System (ADS)
Shragge, J.; Witten, B.
2017-12-01
Seismic monitoring at injection wells relies on generating accurate location estimates of detected (micro-)seismicity. Event location estimates assist in optimizing well and stage spacings, assessing potential hazards, and establishing causation of larger events. The largest impediment to generating accurate location estimates is an accurate velocity model. For surface-based monitoring the model should capture 3D velocity variation, yet, rarely is the laterally heterogeneous nature of the velocity field captured. Another complication for surface monitoring is that the data often suffer from low signal-to-noise levels, making velocity updating with established techniques difficult due to uncertainties in the arrival picks. We use surface-monitored field data to demonstrate that a new method requiring no arrival picking can improve microseismic locations by jointly locating events and updating 3D P- and S-wave velocity models through image-domain adjoint-state tomography. This approach creates a complementary set of images for each chosen event through wave-equation propagation and correlating combinations of P- and S-wavefield energy. The method updates the velocity models to optimize the focal consistency of the images through adjoint-state inversions. We demonstrate the functionality of the method using a surface array of 192 three-component geophones over a hydraulic stimulation in the Marcellus Shale. Applying the proposed joint location and velocity-inversion approach significantly improves the estimated locations. To assess event location accuracy, we propose a new measure of inconsistency derived from the complementary images. By this measure the location inconsistency decreases by 75%. The method has implications for improving the reliability of microseismic interpretation with low signal-to-noise data, which may increase hydrocarbon extraction efficiency and improve risk assessment from injection related seismicity.
Radar Remote Sensing of Waves and Currents in the Nearshore Zone
2006-01-01
and application of novel microwave, acoustic, and optical remote sensing techniques. The objectives of this effort are to determine the extent to which...Doppler radar techniques are useful for nearshore remote sensing applications. Of particular interest are estimates of surf zone location and extent...surface currents, waves, and bathymetry. To date, optical (video) techniques have been the primary remote sensing technology used for these applications. A key advantage of the radar is its all-weather, day-night operability.
Head-mounted active noise control system with virtual sensing technique
NASA Astrophysics Data System (ADS)
Miyazaki, Nobuhiro; Kajikawa, Yoshinobu
2015-03-01
In this paper, we apply a virtual sensing technique to a head-mounted active noise control (ANC) system we have already proposed. The proposed ANC system can reduce narrowband noise while improving the noise reduction ability at the desired locations. A head-mounted ANC system based on an adaptive feedback structure can reduce noise with periodicity or narrowband components. However, since quiet zones are formed only at the locations of error microphones, an adequate noise reduction cannot be achieved at the locations where error microphones cannot be placed such as near the eardrums. A solution to this problem is to apply a virtual sensing technique. A virtual sensing ANC system can achieve higher noise reduction at the desired locations by measuring the system models from physical sensors to virtual sensors, which will be used in the online operation of the virtual sensing ANC algorithm. Hence, we attempt to achieve the maximum noise reduction near the eardrums by applying the virtual sensing technique to the head-mounted ANC system. However, it is impossible to place the microphone near the eardrums. Therefore, the system models from physical sensors to virtual sensors are estimated using the Head And Torso Simulator (HATS) instead of human ears. Some simulation, experimental, and subjective assessment results demonstrate that the head-mounted ANC system with virtual sensing is superior to that without virtual sensing in terms of the noise reduction ability at the desired locations.
NASA Technical Reports Server (NTRS)
Turso, James; Lawrence, Charles; Litt, Jonathan
2004-01-01
The development of a wavelet-based feature extraction technique specifically targeting FOD-event induced vibration signal changes in gas turbine engines is described. The technique performs wavelet analysis of accelerometer signals from specified locations on the engine and is shown to be robust in the presence of significant process and sensor noise. It is envisioned that the technique will be combined with Kalman filter thermal/health parameter estimation for FOD-event detection via information fusion from these (and perhaps other) sources. Due to the lack of high-frequency FOD-event test data in the open literature, a reduced-order turbofan structural model (ROM) was synthesized from a finite element model modal analysis to support the investigation. In addition to providing test data for algorithm development, the ROM is used to determine the optimal sensor location for FOD-event detection. In the presence of significant noise, precise location of the FOD event in time was obtained using the developed wavelet-based feature.
NASA Technical Reports Server (NTRS)
Turso, James A.; Lawrence, Charles; Litt, Jonathan S.
2007-01-01
The development of a wavelet-based feature extraction technique specifically targeting FOD-event induced vibration signal changes in gas turbine engines is described. The technique performs wavelet analysis of accelerometer signals from specified locations on the engine and is shown to be robust in the presence of significant process and sensor noise. It is envisioned that the technique will be combined with Kalman filter thermal/health parameter estimation for FOD-event detection via information fusion from these (and perhaps other) sources. Due to the lack of high-frequency FOD-event test data in the open literature, a reduced-order turbofan structural model (ROM) was synthesized from a finite-element model modal analysis to support the investigation. In addition to providing test data for algorithm development, the ROM is used to determine the optimal sensor location for FOD-event detection. In the presence of significant noise, precise location of the FOD event in time was obtained using the developed wavelet-based feature.
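A minimal sketch of the kind of wavelet feature involved (per-scale detail-band energy via PyWavelets on a synthetic accelerometer trace; the wavelet family, sampling rate, and injected transient are assumptions, not the reports' actual feature):

```python
import numpy as np
import pywt

def wavelet_feature(signal, wavelet="db4", level=4):
    """Per-scale energy of the wavelet detail coefficients: a simple
    feature vector for flagging short transient (FOD-like) events
    buried in a noisy accelerometer trace."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs[1:]])  # detail bands

fs = 10_000                                   # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)
x = 0.5 * np.sin(2 * np.pi * 120 * t) + rng.normal(0, 0.2, t.size)
x_event = x.copy()
x_event[5000:5050] += np.hanning(50) * 3.0    # 5 ms transient "impact"

print(wavelet_feature(x))
print(wavelet_feature(x_event))   # energy jump in the fine-scale bands
```

Because the detail coefficients are localized in time as well as scale, thresholding them (rather than summing their energy) is what allows the event to be pinpointed in time despite the noise.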
Towards Unmanned Systems for Dismounted Operations in the Canadian Forces
2011-01-01
LIDAR, and RADAR) and lower power/mass, passive imaging techniques such as structure from motion and simultaneous localisation and mapping (SLAM)...sensors and learning algorithms. Simultaneous localisation and mapping: SLAM algorithms concurrently estimate a robot pose and a map of unique...locations and vehicle pose are part of the SLAM state vector and are estimated in each update step. AISS developed a monocular camera-based SLAM
NASA Astrophysics Data System (ADS)
Abe, O. E.; Otero Villamide, X.; Paparini, C.; Radicella, S. M.; Nava, B.; Rodríguez-Bouza, M.
2017-04-01
Global Navigation Satellite Systems (GNSS) have become a powerful tool used in surveying and mapping, air and maritime navigation, ionospheric/space weather research and other applications. However, in some cases, their maximum efficiency cannot be attained due to uncorrelated errors associated with the system measurements, caused mainly by the dispersive nature of the ionosphere. The ionosphere has been represented using the total number of electrons along the signal path at a particular height, known as Total Electron Content (TEC). There are many methods to estimate TEC, but the outputs are not uniform, which could be due to the peculiarity in characterizing the biases inside the observables (measurements) and sometimes to the influence of the mapping function. Errors in TEC estimation could lead to wrong conclusions, which could be critical in safety-of-life applications. This work investigated the performance of Ciraolo's and Gopi's GNSS-TEC calibration techniques during five geomagnetically quiet and disturbed conditions in October 2013, at grid points located in low and middle latitudes. The data used are obtained from GNSS ground-based receivers located at Borriana in Spain (40°N, 0°E; mid latitude) and Accra in Ghana (5.50°N, -0.20°E; low latitude). The results of the calibrated TEC are compared with the TEC obtained from the European Geostationary Navigation Overlay System Processing Set (EGNOS PS) TEC algorithm, which is considered as reference data. The TEC derived from Global Ionospheric Maps (GIM) through the International GNSS Service (IGS) was also examined at the same grid points. The results obtained in this work showed that Ciraolo's calibration technique (based on carrier-phase measurements only) estimates TEC better at middle latitudes than Gopi's technique (based on code and carrier-phase measurements). At the same time, Gopi's calibration was found more reliable at low latitudes than Ciraolo's technique.
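Both calibration families start from the standard geometry-free observable; a minimal sketch of how slant TEC is formed from dual-frequency GPS measurements before any bias or ambiguity handling (the numeric example is synthetic):

```python
# Geometry-free (ionospheric) combination for GPS, the raw quantity that
# bias calibration then cleans up.
F1, F2 = 1575.42e6, 1227.60e6     # GPS L1/L2 carrier frequencies (Hz)
K = 40.3                          # first-order ionospheric term (m^3 s^-2)

def stec_from_code(p1_m, p2_m):
    """Slant TEC (TECU) from code pseudoranges in meters. Unambiguous
    but noisy, and still containing satellite/receiver differential
    code biases that calibration must remove."""
    stec = (p2_m - p1_m) * (F1**2 * F2**2) / (K * (F1**2 - F2**2))
    return stec / 1e16            # electrons/m^2 -> TECU

def rel_stec_from_phase(l1_m, l2_m):
    """Relative slant TEC (TECU) from carrier phases in meters: precise
    but offset by the unknown ambiguities, so it must be levelled."""
    stec = (l1_m - l2_m) * (F1**2 * F2**2) / (K * (F1**2 - F2**2))
    return stec / 1e16

# 30 TECU of ionosphere delays the L2 code ~3.15 m more than L1:
print(stec_from_code(20000000.00, 20000003.15))   # ~30 TECU
```

Carrier-phase-only calibration (Ciraolo's approach) levels rel_stec_from_phase over continuous arcs, while code-plus-phase calibration (Gopi's approach) uses the code combination to anchor the level, which is where the two techniques' noise and bias trade-offs originate.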
Precise visual navigation using multi-stereo vision and landmark matching
NASA Astrophysics Data System (ADS)
Zhu, Zhiwei; Oskiper, Taragay; Samarasekera, Supun; Kumar, Rakesh
2007-04-01
Traditional vision-based navigation systems often drift over time. In this paper, we propose a set of techniques that greatly reduce long-term drift and improve robustness to many failure conditions. In our approach, two pairs of stereo cameras are integrated to form a forward/backward multi-stereo camera system. As a result, the field of view of the system is extended significantly to capture more natural landmarks from the scene, which increases pose estimation accuracy and reduces failure situations. Secondly, a global landmark matching technique is used to recognize previously visited locations during navigation; using the matched landmarks, a pose correction technique eliminates the accumulated navigation drift. Finally, to further improve the robustness of the system, measurements from low-cost Inertial Measurement Unit (IMU) and Global Positioning System (GPS) sensors are integrated with the visual odometry in an extended Kalman filtering framework. Our system is significantly more accurate and robust than previously published techniques (1~5% localization error) over long-distance navigation both indoors and outdoors. Real-world experiments on a human-worn system show that the location can be estimated to within 1 meter over 500 meters (around 0.1% localization error on average) without the use of GPS information.
System health monitoring using multiple-model adaptive estimation techniques
NASA Astrophysics Data System (ADS)
Sifford, Stanley Ryan
Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods with new techniques for sampling the parameter space, based on the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time-invariant and time-varying systems, as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow: adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples, and resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary. Customizable rules define the specific resample behavior when the GRAPE parameter estimates converge. Convergence itself is determined from the derivatives of the parameter estimates using a simple moving average window to filter out noise. The system can be tuned to match the desired performance goals by making adjustments to parameters such as the sample size, convergence criteria, resample criteria, initial sampling method, resampling method, confidence in prior sample covariances, sample delay, and others.
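The following is a minimal sketch of the bank-of-Kalman-filters MMAE idea that GRAPE builds on: each filter assumes a different parameter value, and model weights are updated from innovation likelihoods. The scalar system, parameter grid, and noise levels are illustrative assumptions, not GRAPE itself.

```python
# Minimal MMAE sketch: a bank of scalar Kalman filters over a grid of
# candidate values of the unknown parameter 'a', weighted by innovation
# likelihoods. Illustrative assumptions throughout.
import numpy as np

rng = np.random.default_rng(1)
a_true, q, r = 0.85, 0.05, 0.1          # true parameter, process/measurement noise
n_steps = 200

# Simulate the true system x_{k+1} = a x_k + u + w, y = x + v (u = 1).
x, ys = 0.0, []
for _ in range(n_steps):
    x = a_true * x + 1.0 + np.sqrt(q) * rng.standard_normal()
    ys.append(x + np.sqrt(r) * rng.standard_normal())

a_grid = np.linspace(0.5, 1.0, 11)           # candidate parameter values
xhat = np.zeros_like(a_grid)                 # state estimate per model
P = np.ones_like(a_grid)                     # state covariance per model
w = np.full(a_grid.size, 1.0 / a_grid.size)  # model weights

for y in ys:
    xpred = a_grid * xhat + 1.0
    Ppred = a_grid ** 2 * P + q
    S = Ppred + r                            # innovation covariance
    nu = y - xpred                           # innovations
    K = Ppred / S
    xhat = xpred + K * nu
    P = (1.0 - K) * Ppred
    like = np.exp(-0.5 * nu ** 2 / S) / np.sqrt(2 * np.pi * S)
    w *= like
    w /= w.sum()

print("estimated parameter:", np.sum(w * a_grid))   # should be near 0.85
```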
Association of earthquakes and faults in the San Francisco Bay area using Bayesian inference
Wesson, R.L.; Bakun, W.H.; Perkins, D.M.
2003-01-01
Bayesian inference provides a method to use seismic intensity data or instrumental locations, together with geologic and seismologic data, to make quantitative estimates of the probabilities that specific past earthquakes are associated with specific faults. Probability density functions are constructed for the location of each earthquake, and these are combined with prior probabilities through Bayes' theorem to estimate the probability that an earthquake is associated with a specific fault. Results using this method are presented here for large, preinstrumental, historical earthquakes and for recent earthquakes with instrumental locations in the San Francisco Bay region. The probabilities for individual earthquakes can be summed to construct a probabilistic frequency-magnitude relationship for a fault segment. Other applications of the technique include the estimation of the probability of background earthquakes, that is, earthquakes not associated with known or considered faults, and the estimation of the fraction of the total seismic moment associated with earthquakes less than the characteristic magnitude. Results for the San Francisco Bay region suggest that potentially damaging earthquakes with magnitudes less than the characteristic magnitudes should be expected. Comparisons of earthquake locations and the surface traces of active faults as determined from geologic data show significant disparities, indicating that a complete understanding of the relationship between earthquakes and faults remains elusive.
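A toy numerical illustration of the association step (all numbers are made up, and this is not the paper's implementation): prior fault probabilities are combined with the likelihood of the intensity-based location under each candidate source via Bayes' theorem.

```python
# Toy Bayesian association: posterior probability that an earthquake is
# associated with each candidate source. Priors and likelihoods are
# hypothetical placeholders.
import numpy as np

faults = ["fault A", "fault B", "background"]
prior = np.array([0.45, 0.35, 0.20])          # assumed prior weights

# Likelihood of the observed location given each source, e.g. from
# integrating the location probability density over each fault's vicinity.
likelihood = np.array([0.02, 0.10, 0.01])

posterior = prior * likelihood
posterior /= posterior.sum()
for f, p in zip(faults, posterior):
    print(f"P({f} | data) = {p:.2f}")
```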
Archfield, Stacey A.; Pugliese, Alessio; Castellarin, Attilio; Skøien, Jon O.; Kiang, Julie E.
2013-01-01
In the United States, estimation of flood frequency quantiles at ungauged locations has been largely based on regional regression techniques that relate measurable catchment descriptors to flood quantiles. More recently, spatial interpolation techniques of point data have been shown to be effective for predicting streamflow statistics (i.e., flood flows and low-flow indices) in ungauged catchments. Literature reports successful applications of two techniques, canonical kriging, CK (or physiographical-space-based interpolation, PSBI), and topological kriging, TK (or top-kriging). CK performs the spatial interpolation of the streamflow statistic of interest in the two-dimensional space of catchment descriptors. TK predicts the streamflow statistic along river networks taking both the catchment area and nested nature of catchments into account. It is of interest to understand how these spatial interpolation methods compare with generalized least squares (GLS) regression, one of the most common approaches to estimate flood quantiles at ungauged locations. By means of a leave-one-out cross-validation procedure, the performance of CK and TK was compared to GLS regression equations developed for the prediction of 10, 50, 100 and 500 yr floods for 61 streamgauges in the southeast United States. TK substantially outperforms GLS and CK for the study area, particularly for large catchments. The performance of TK over GLS highlights an important distinction between the treatments of spatial correlation when using regression-based or spatial interpolation methods to estimate flood quantiles at ungauged locations. The analysis also shows that coupling TK with CK slightly improves the performance of TK; however, the improvement is marginal when compared to the improvement in performance over GLS.
Locating Local Earthquakes Using Single 3-Component Broadband Seismological Data
NASA Astrophysics Data System (ADS)
Das, S. B.; Mitra, S.
2015-12-01
We devised a technique to locate local earthquakes using a single 3-component broadband seismograph and analyzed the factors governing the accuracy of the result. The need for such a technique arises in regions with a sparse seismic network. State-of-the-art location algorithms require recordings from a minimum of three stations to obtain well-resolved locations, and the problem arises when an event is recorded by fewer than three. This may happen for the following reasons: (a) down time of stations in a sparse network; (b) geographically isolated regions with limited logistic support to set up a large network; (c) regions with insufficient funds for a multi-station network; and (d) poor signal-to-noise ratio for smaller events at all stations except the one in closest vicinity. Our technique provides a workable solution to these scenarios, although it depends strongly on the velocity model of the region. The method uses three processing steps: (a) ascertain the back-azimuth of the event from the P-wave particle motion recorded on the horizontal components; (b) estimate the hypocentral distance from the S-P time; and (c) ascertain the emergent angle from the vertical and radial components. Once these are obtained, one can ray-trace through the 1-D velocity model to estimate the hypocentral location. We test the method on synthetic data, which produces results with 99% precision. With observed data, the accuracy of our results is very encouraging. The precision depends on the signal-to-noise ratio (SNR) and the choice of band-pass filter used to isolate the P-wave signal. We applied the method to minor aftershocks (3 < mb < 4) of the 2011 Sikkim earthquake using data from the Sikkim Himalayan network. The locations of these events highlight the transverse strike-slip structure within the Indian plate that was inferred from source mechanism studies of the mainshock and larger aftershocks.
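Steps (a) and (b) can be sketched numerically, as below, under assumed half-space velocities: back-azimuth from the principal direction of P-wave particle motion (with its inherent 180-degree ambiguity, normally resolved using the P polarity on the vertical component) and hypocentral distance from the S-P time. This is an illustration, not the authors' code.

```python
# Sketch of steps (a) and (b): back-azimuth from horizontal-component
# particle motion, distance from the S-P time. Synthetic values throughout.
import numpy as np

rng = np.random.default_rng(2)

# (a) Back-azimuth: principal direction of N-E motion in the P window.
t = np.linspace(0, 1, 200)
p_pulse = np.exp(-((t - 0.5) / 0.05) ** 2)
baz_true = np.radians(60.0)
north = np.cos(baz_true) * p_pulse + 0.02 * rng.standard_normal(t.size)
east = np.sin(baz_true) * p_pulse + 0.02 * rng.standard_normal(t.size)
cov = np.cov(np.vstack([north, east]))
vec = np.linalg.eigh(cov)[1][:, -1]        # eigenvector of largest eigenvalue
baz = np.degrees(np.arctan2(vec[1], vec[0])) % 180.0   # 180-deg ambiguity remains

# (b) Hypocentral distance from S-P time (homogeneous half-space assumption).
vp, vs, ts_minus_tp = 6.0, 3.5, 4.2        # km/s, km/s, s
dist = ts_minus_tp * vp * vs / (vp - vs)   # ~8.4 km per second of S-P time

print(f"back-azimuth ~ {baz:.1f} deg (or +180), distance ~ {dist:.1f} km")
```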
A hybrid neural networks-fuzzy logic-genetic algorithm for grade estimation
NASA Astrophysics Data System (ADS)
Tahmasebi, Pejman; Hezarkhani, Ardeshir
2012-05-01
Grade estimation is an important but time- and money-consuming stage in a mine project, and it is considered a challenge for geologists and mining engineers owing to the structural complexities of mineral ore deposits. To overcome this problem, several artificial intelligence techniques such as Artificial Neural Networks (ANN) and Fuzzy Logic (FL) have recently been employed with various architectures and properties. However, due to the constraints of both methods, they yield the desired results only under specific circumstances. For example, one major problem in FL is the difficulty of constructing the membership functions (MFs); other problems, such as the choice of architecture and local minima, arise in ANN design. Therefore, a new methodology for grade estimation is presented in this paper. This method, called the "Coactive Neuro-Fuzzy Inference System" (CANFIS), combines the ANN and FL approaches via the verbal and numerical power of intelligent systems. To improve the performance of this system, a Genetic Algorithm (GA), a well-known technique for solving complex optimization problems, is also employed to optimize the network parameters, including the learning rate, the momentum of the network, and the number of MFs for each input. A comparison of these techniques (ANN, Adaptive Neuro-Fuzzy Inference System or ANFIS) with the new method (CANFIS-GA) is also carried out through a case study of the Sungun copper deposit, located in East Azerbaijan, Iran. The results show that CANFIS-GA can be a faster and more accurate alternative to the existing time-consuming methodologies for ore grade estimation, and it is therefore suggested for grade estimation in similar problems.
A comparison of techniques for assessing farmland bumblebee populations.
Wood, T J; Holland, J M; Goulson, D
2015-04-01
Agri-environment schemes have been implemented across the European Union in order to reverse declines in farmland biodiversity. To assess the impact of these schemes for bumblebees, accurate measures of their populations are required. Here, we compared bumblebee population estimates on 16 farms using three commonly used techniques: standardised line transects, coloured pan traps and molecular estimates of nest abundance. There was no significant correlation between the estimates obtained by the three techniques, suggesting that each technique captured a different aspect of local bumblebee population size and distribution in the landscape. Bumblebee abundance as observed on the transects was positively influenced by the number of flowers present on the transect. The number of bumblebees caught in pan traps was positively influenced by the density of flowers surrounding the trapping location and negatively influenced by wider landscape heterogeneity. Molecular estimates of the number of nests of Bombus terrestris and B. hortorum were positively associated with the proportion of the landscape covered in oilseed rape and field beans. Both direct survey techniques are strongly affected by floral abundance immediately around the survey site, potentially leading to misleading results if attempting to infer overall abundance in an area or on a farm. In contrast, whilst the molecular method suffers from an inability to detect sister pairs at low sample sizes, it appears to be unaffected by the abundance of forage and thus is the preferred survey technique.
NASA Astrophysics Data System (ADS)
Coopersmith, Evan J.; Cosh, Michael H.; Bell, Jesse E.; Boyles, Ryan
2016-12-01
Surface soil moisture is a critical parameter for understanding the energy flux at the land-atmosphere boundary. Weather modeling, climate prediction, and remote sensing validation are some of the applications for surface soil moisture information. The most common in situ measurements for these purposes come from sensors installed at depths of approximately 5 cm. There are, however, sensor technologies and network designs that do not provide an estimate at this depth. If soil moisture estimates at deeper depths could be extrapolated to the near surface, in situ networks providing estimates at other depths would see their value enhanced. Soil moisture sensors from the U.S. Climate Reference Network (USCRN) were used to generate models of 5 cm soil moisture, with 10 cm soil moisture measurements and antecedent precipitation as inputs, via machine learning techniques. Validation was conducted with the available in situ 5 cm resources. It was shown that a 5 cm estimate extrapolated from a 10 cm sensor and antecedent local precipitation produced a root-mean-square error (RMSE) of 0.0215 m^3/m^3. Next, these machine-learning-generated 5 cm estimates were compared to AMSR-E estimates at the same locations, and the results were compared with the performance of the actual in situ readings against the AMSR-E data. The machine learning estimates at 5 cm produced an RMSE of approximately 0.03 m^3/m^3 when an optimized gain and offset were applied; this adjustment is necessary considering the performance of AMSR-E in locations characterized by the high vegetation water contents present across North Carolina. Lastly, the extrapolation technique was applied to the ECONet in North Carolina, which provides a 10 cm depth measurement as its shallowest soil moisture estimate. A raw RMSE of 0.028 m^3/m^3 was achieved, and with a linear gain and offset applied at each ECONet site, an RMSE of 0.013 m^3/m^3 was possible.
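As a simplified illustration of such an extrapolation (the study itself used machine learning models with antecedent precipitation as an additional input), the sketch below fits a linear gain and offset from synthetic 10 cm readings to 5 cm values.

```python
# Minimal sketch: linear gain/offset extrapolation from 10 cm to 5 cm
# soil moisture. All numbers are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
sm10 = 0.15 + 0.10 * rng.random(500)                  # 10 cm readings, m^3/m^3
sm5_true = 0.9 * sm10 + 0.03                          # assumed true relation
sm5_obs = sm5_true + 0.01 * rng.standard_normal(500)  # noisy 5 cm validation data

gain, offset = np.polyfit(sm10, sm5_obs, 1)           # least-squares linear fit
sm5_hat = gain * sm10 + offset
rmse = np.sqrt(np.mean((sm5_hat - sm5_obs) ** 2))
print(f"gain={gain:.2f}, offset={offset:.3f}, RMSE={rmse:.4f} m^3/m^3")
```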
Modeling of environmentally induced transients within satellites
NASA Technical Reports Server (NTRS)
Stevens, N. John; Barbay, Gordon J.; Jones, Michael R.; Viswanathan, R.
1987-01-01
A technique is described that allows an estimation of possible spacecraft charging hazards. This technique, called SCREENS (spacecraft response to environments of space), utilizes the NASA charging analyzer program (NASCAP) to estimate the electrical stress locations and the charge stored in the dielectric coatings due to spacecraft encounter with a geomagnetic substorm environment. This information can then be used to determine the response of the spacecraft electrical system to a surface discharge by means of lumped element models. The coupling into the electronics is assumed to be due to magnetic linkage from the transient currents flowing as a result of the discharge transient. The behavior of a spinning spacecraft encountering a severe substorm is predicted using this technique. It is found that systems are potentially vulnerable to upset if transient signals enter through the ground lines.
Measuring and modeling near surface reflected and emitted radiation fluxes at the FIFE site
NASA Technical Reports Server (NTRS)
Blad, Blaine L.; Norman, John M.; Walter-Shea, Elizabeth; Starks, Patrick; Vining, Roel; Hays, Cynthia
1988-01-01
Research was conducted during the four Intensive Field Campaigns (IFC) of the FIFE project in 1987. The research was done on a tall grass prairie with specific measurement sites on and near the Konza Prairie in Kansas. Measurements were made to help meet the following objectives: determination of the variability in reflected and emitted radiation fluxes in selected spectral wavebands as a function of topography and vegetative community; development of techniques to account for slope and sun angle effects on the radiation fluxes; estimation of shortwave albedo and net radiation fluxes using the reflected and emitted spectral measurements described; estimation of leaf and canopy spectral properties from calculated normalized differences coupled with off-nadir measurements using inversion techniques; estimation of plant water status at several locations with indices utilizing plant temperature and other environmental parameters; and determination of relationships between estimated plant water status and measured soil water content. Results are discussed.
NASA Technical Reports Server (NTRS)
Stuart, J. R.
1984-01-01
The evolution of NASA's planetary navigation techniques is traced, and radiometric and optical data types are described. Doppler navigation, the Deep Space Network, differenced two-way range techniques, differential very long baseline interferometry, and optical navigation are treated. The Doppler system enables a spacecraft in cruise at high absolute declination to be located within a total angular uncertainty of 1/4 microrad. The two-station range measurement provides a 1 microrad backup at low declinations. Optical data locate the spacecraft relative to the target to an angular accuracy of 5 microrad. Earth-based radio navigation and its less accurate but target-relative counterpart, optical navigation, thus form complementary measurement sources, which together provide a powerful sensory system for producing high-precision orbit estimates.
NASA Technical Reports Server (NTRS)
Ostroff, A. J.
1973-01-01
Some of the major difficulties associated with large orbiting astronomical telescopes are the cost of manufacturing the primary mirror to precise tolerances and the maintaining of diffraction-limited tolerances while in orbit. One successfully demonstrated approach for minimizing these problem areas is the technique of actively deforming the primary mirror by applying discrete forces to the rear of the mirror. A modal control technique, as applied to active optics, has previously been developed and analyzed. The modal control technique represents the plant to be controlled in terms of its eigenvalues and eigenfunctions which are estimated via numerical approximation techniques. The report includes an extension of previous work using the modal control technique and also describes an optimal feedback controller. The equations for both control laws are developed in state-space differential form and include such considerations as stability, controllability, and observability. These equations are general and allow the incorporation of various mode-analyzer designs; two design approaches are presented. The report also includes a technique for placing actuator and sensor locations at points on the mirror based upon the flexibility matrix of the uncontrolled or unobserved modes of the structure. The locations selected by this technique are used in the computer runs which are described. The results are based upon three different initial error distributions, two mode-analyzer designs, and both the modal and optimal control laws.
Motor unit size estimation: confrontation of surface EMG with macro EMG.
Roeleveld, K; Stegeman, D F; Falck, B; Stålberg, E V
1997-06-01
Surface EMG (SEMG) is little used for diagnostic purposes in clinical neurophysiology, mainly because it provides little direct information on individual motor units (MUs). One of the techniques to estimate the MU size is intra-muscular Macro EMG. The present study compares SEMG with Macro EMG. Fifty-eight channel SEMG was recorded simultaneously with Macro EMG. Individual MUPs were obtained by single fiber triggered averaging. All recordings were made from the biceps brachii of healthy subjects during voluntary contraction at low force. High positive correlations were found between all Macro and Surface motor unit potential (MUP) parameters: area, peak-to-peak amplitude, negative peak amplitude and positive peak amplitude. The MUPs recorded with SEMG were dependent on the distance between the MU and the skin surface. Normalizing the SEMG parameters for MU location did not improve the correlation coefficient between the parameters of both techniques. The two measurement techniques had almost the same relative range in MUP parameters in any individual subject compared to the others, especially after normalizing the surface MUP parameters for MU location. MUPs recorded with this type of SEMG provide useful information about the MU size.
Anti-forensics of chromatic aberration
NASA Astrophysics Data System (ADS)
Mayer, Owen; Stamm, Matthew C.
2015-03-01
Over the past decade, a number of information forensic techniques have been developed to identify digital image manipulation and falsification. Recent research has shown, however, that an intelligent forger can use anti-forensic countermeasures to disguise their forgeries. In this paper, an anti-forensic technique is proposed to falsify the lateral chromatic aberration present in a digital image. Lateral chromatic aberration corresponds to the relative contraction or expansion between an image's color channels that occurs due to a lens's inability to focus all wavelengths of light on the same point. Previous work has used localized inconsistencies in an image's chromatic aberration to expose cut-and-paste image forgeries. The anti-forensic technique presented in this paper operates by estimating the expected lateral chromatic aberration at an image location, then removing deviations from this estimate caused by tampering or falsification. Experimental results are presented that demonstrate that our anti-forensic technique can be used to effectively disguise evidence of an image forgery.
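A minimal sketch of the underlying aberration model, with synthetic data: lateral chromatic aberration is approximated as a small radial rescaling of one color channel about an assumed optical center, so resampling with the reciprocal scale removes it. This illustrates the model only, not the paper's estimation procedure.

```python
# Simulate and undo lateral chromatic aberration as a radial rescaling
# of a colour channel about the optical centre. Synthetic image data.
import numpy as np
from scipy.ndimage import map_coordinates

def radial_resample(channel, alpha, center):
    """Resample a channel so that r -> alpha * r about the optical centre."""
    rows, cols = np.indices(channel.shape).astype(float)
    cy, cx = center
    src_r = cy + alpha * (rows - cy)       # sample from radially scaled coordinates
    src_c = cx + alpha * (cols - cx)
    return map_coordinates(channel, [src_r, src_c], order=1, mode="nearest")

rng = np.random.default_rng(4)
green = rng.random((64, 64))
red_aberrated = radial_resample(green, 1.004, (32.0, 32.0))           # simulate CA
red_corrected = radial_resample(red_aberrated, 1 / 1.004, (32.0, 32.0))  # undo it
print("mean residual:", np.abs(red_corrected - green).mean())  # small, not zero
```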
Intrathoracic airway wall detection using graph search and scanner PSF information
NASA Astrophysics Data System (ADS)
Reinhardt, Joseph M.; Park, Wonkyu; Hoffman, Eric A.; Sonka, Milan
1997-05-01
Measurements of the in vivo bronchial tree can be used to assess regional airway physiology. High-resolution CT (HRCT) provides detailed images of the lungs and has been used to evaluate bronchial airway geometry. Such measurements have been used to assess diseases affecting the airways, such as asthma and cystic fibrosis, to measure airway response to external stimuli, and to evaluate the mechanics of airway collapse in sleep apnea. To routinely use CT imaging in a clinical setting to evaluate the in vivo airway tree, there is a need for an objective, automatic technique for identifying the airway tree in the CT images and measuring airway geometry parameters. Manual or semi-automatic segmentation and measurement of the airway tree from a 3D data set may require several man-hours of work, and the manual approaches suffer from inter-observer and intra-observer variability. This paper describes a method for automatic airway tree analysis that combines accurate airway wall location estimation with a technique for optimal airway border smoothing. A fuzzy logic, rule-based system is used to identify the branches of the 3D airway tree in thin-slice HRCT images. Ray casting is combined with a model-based parameter estimation technique to identify the approximate inner and outer airway wall borders in 2D cross-sections through the image data set. Finally, a 2D graph search is used to optimize the estimated airway wall locations and obtain accurate airway borders. We demonstrate this technique using CT images of a plexiglass tube phantom.
An evaluation of the accuracy of some radar wind profiling techniques
NASA Technical Reports Server (NTRS)
Koscielny, A. J.; Doviak, R. J.
1983-01-01
Major advances in Doppler radar measurement in optically clear air have made it feasible to monitor radial velocities in the troposphere and lower stratosphere. For most applications, the three-dimensional wind vector is needed rather than the radial velocity alone. Measurement of the wind vector with a single radar can be made by assuming a spatially linear, time-invariant wind field. The components and derivatives of the wind are estimated by the parameters of a linear regression of the radial velocities on functions of their spatial locations; the accuracy of the wind measurement thus depends on the locations of the radial velocities. The suitability of some of the common retrieval techniques for simultaneous measurement of both the vertical and horizontal wind components is evaluated. The techniques considered are fixed beam, azimuthal scanning (VAD), and elevation scanning (VED).
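The linear-regression retrieval can be illustrated as follows for an azimuthal (VAD-style) scan, under an assumed locally uniform wind: radial velocities are regressed on the beam direction cosines. Geometry and noise values are illustrative.

```python
# Estimate (u, v, w) from Doppler radial velocities by least squares on
# beam geometry: vr = u sin(az)cos(el) + v cos(az)cos(el) + w sin(el).
import numpy as np

rng = np.random.default_rng(5)
u, v, w = 8.0, -3.0, 0.2                   # true wind components, m/s

az = np.radians(np.arange(0, 360, 10.0))   # azimuth scan
el = np.radians(np.full(az.size, 15.0))    # fixed elevation angle

A = np.column_stack([np.sin(az) * np.cos(el),
                     np.cos(az) * np.cos(el),
                     np.sin(el)])
vr = A @ np.array([u, v, w]) + 0.5 * rng.standard_normal(az.size)

est, *_ = np.linalg.lstsq(A, vr, rcond=None)
print("estimated (u, v, w):", np.round(est, 2))  # w is weakly constrained here
```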
Nonlinear optimization-based device-free localization with outlier link rejection.
Xiao, Wendong; Song, Biao; Yu, Xiting; Chen, Peiyuan
2015-04-07
Device-free localization (DFL) is an emerging wireless technique for estimating the location of target that does not have any attached electronic device. It has found extensive use in Smart City applications such as healthcare at home and hospitals, location-based services at smart spaces, city emergency response and infrastructure security. In DFL, wireless devices are used as sensors that can sense the target by transmitting and receiving wireless signals collaboratively. Many DFL systems are implemented based on received signal strength (RSS) measurements and the location of the target is estimated by detecting the changes of the RSS measurements of the wireless links. Due to the uncertainty of the wireless channel, certain links may be seriously polluted and result in erroneous detection. In this paper, we propose a novel nonlinear optimization approach with outlier link rejection (NOOLR) for RSS-based DFL. It consists of three key strategies, including: (1) affected link identification by differential RSS detection; (2) outlier link rejection via geometrical positional relationship among links; (3) target location estimation by formulating and solving a nonlinear optimization problem. Experimental results demonstrate that NOOLR is robust to the fluctuation of the wireless signals with superior localization accuracy compared with the existing Radio Tomographic Imaging (RTI) approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Min-Joo; Park, So-Hyun; Research Institute of Biomedical Engineering, The Catholic University of Korea, Seoul
2013-10-01
The partial-breast irradiation (PBI) technique, an alternative to whole-breast irradiation, is a beam delivery method that uses a limited range of treatment volume. The present study was designed to determine the optimal PBI treatment modalities for 8 different tumor locations. Treatment planning was performed on computed tomography (CT) data sets of 6 patients who had received lumpectomy treatments. Tumor locations were classified into 8 subsections according to breast quadrant and depth. Three-dimensional conformal radiation therapy (3D-CRT), electron beam therapy (ET), and helical tomotherapy (H-TOMO) were utilized to evaluate the dosimetric effect for each tumor location. Conformation number (CN), radical dose homogeneity index (rDHI), and dose delivered to healthy tissue were estimated. The Kruskal-Wallis, Mann-Whitney U, and Bonferroni tests were used for statistical analysis. The ET approach showed good sparing effects and acceptable target coverage for the lower inner quadrant-superficial (LIQ-S) and lower inner quadrant-deep (LIQ-D) locations. The H-TOMO method was the least effective technique, as no evaluation index except CN achieved superiority for all tumor locations. The ET method is advisable for treating LIQ-S and LIQ-D tumors, as opposed to 3D-CRT or H-TOMO, because of its acceptable target coverage and much lower dose applied to surrounding tissue.
NASA Astrophysics Data System (ADS)
Yoshidome, Satoshi; Arimura, Hidetaka; Terashima, Koutarou; Hirakawa, Masakazu; Hirose, Taka-aki; Fukunaga, Junichi; Nakamura, Yasuhiko
2017-03-01
Recently, image-guided radiotherapy (IGRT) systems using kilovolt cone-beam computed tomography (kV-CBCT) images have become more common for highly accurate patient positioning in stereotactic lung body radiotherapy (SLBRT). However, current IGRT procedures are based on bone structures and subjective correction. Therefore, the aim of this study was to evaluate the proposed framework for automated estimation of lung tumor locations in kV-CBCT images for tumor-based patient positioning in SLBRT. Twenty clinical cases are considered, involving solid, pure ground-glass opacity (GGO), mixed GGO, solitary, and non-solitary tumor types. The proposed framework consists of four steps: (1) determination of a search region for tumor location detection in a kV-CBCT image; (2) extraction of a tumor template from a planning CT image; (3) preprocessing for tumor region enhancement (edge and tumor enhancement using a Sobel filter and a blob structure enhancement (BSE) filter, respectively); and (4) tumor location estimation based on a template-matching technique. The location errors in the original, edge-, and tumor-enhanced images were found to be 1.2 ± 0.7 mm, 4.2 ± 8.0 mm, and 2.7 ± 4.6 mm, respectively. The location errors in the original images of solid, pure GGO, mixed GGO, solitary, and non-solitary types of tumors were 1.2 ± 0.7 mm, 1.3 ± 0.9 mm, 0.4 ± 0.6 mm, 1.1 ± 0.8 mm and 1.0 ± 0.7 mm, respectively. These results suggest that the proposed framework is robust as regards automatic estimation of several types of tumor locations in kV-CBCT images for tumor-based patient positioning in SLBRT.
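Step (4) can be illustrated with a bare-bones normalized cross-correlation template match on synthetic arrays; the real framework operates on enhanced kV-CBCT images with templates extracted from the planning CT.

```python
# Minimal normalized cross-correlation (NCC) template matching sketch.
# Synthetic 2-D arrays stand in for CT/CBCT images.
import numpy as np

def ncc_match(search, template):
    """Return the (row, col) of the best normalized cross-correlation."""
    th, tw = template.shape
    tz = (template - template.mean()) / template.std()
    best, best_rc = -np.inf, (0, 0)
    for r in range(search.shape[0] - th + 1):
        for c in range(search.shape[1] - tw + 1):
            patch = search[r:r + th, c:c + tw]
            pz = (patch - patch.mean()) / (patch.std() + 1e-12)
            score = np.mean(tz * pz)
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc, best

rng = np.random.default_rng(6)
image = rng.random((80, 80))
image[30:40, 50:60] += 2.0                        # bright "tumour" region
template = image[30:40, 50:60].copy()             # template from the clean image
image_noisy = image + 0.3 * rng.standard_normal(image.shape)

loc, score = ncc_match(image_noisy, template)
print("estimated location:", loc, "score:", round(score, 2))  # ~ (30, 50)
```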
Modeling particle number concentrations along Interstate 10 in El Paso, Texas
Olvera, Hector A.; Jimenez, Omar; Provencio-Vasquez, Elias
2014-01-01
Annual average daily particle number concentrations around a highway were estimated with an atmospheric dispersion model and a land use regression model. The dispersion model was used to estimate particle concentrations along Interstate 10 at 98 locations within El Paso, Texas. This model employed annual averaged wind speed and annual average daily traffic counts as inputs. A land use regression model with vehicle kilometers traveled as the predictor variable was used to estimate local background concentrations away from the highway to adjust the near-highway concentration estimates. Estimated particle number concentrations ranged between 9.8 × 10^3 particles/cc and 1.3 × 10^5 particles/cc, and averaged 2.5 × 10^4 particles/cc (SE 421.0). Estimates were compared against values measured at seven sites located along I-10 throughout the region. The average fractional error was 6% and ranged between -1% and -13% across sites. The largest bias of -13% was observed at a semi-rural site where traffic was lowest. The average bias amongst urban sites was 5%. The accuracy of the estimates depended primarily on the emission factor and the adjustment to local background conditions. An emission factor of 1.63 × 10^14 particles/veh-km was based on a value proposed in the literature and adjusted with local measurements. The integration of the two modeling techniques ensured that the particle number concentration estimates captured the impact of traffic along both the highway and arterial roadways. The performance and economical aspects of the two modeling techniques used in this study show that producing particle concentration surfaces along major roadways would be feasible in urban regions where traffic and meteorological data are readily available. PMID:25313294
Electromagnetic Methods of Lightning Detection
NASA Astrophysics Data System (ADS)
Rakov, V. A.
2013-11-01
Both cloud-to-ground and cloud lightning discharges involve a number of processes that produce electromagnetic field signatures in different regions of the spectrum. Salient characteristics of measured wideband electric and magnetic fields generated by various lightning processes at distances ranging from tens to a few hundreds of kilometers (when at least the initial part of the signal is essentially radiation while being not influenced by ionospheric reflections) are reviewed. An overview of the various lightning locating techniques, including magnetic direction finding, time-of-arrival technique, and interferometry, is given. Lightning location on global scale, when radio-frequency electromagnetic signals are dominated by ionospheric reflections, is also considered. Lightning locating system performance characteristics, including flash and stroke detection efficiencies, percentage of misclassified events, location accuracy, and peak current estimation errors, are discussed. Both cloud and cloud-to-ground flashes are considered. Representative examples of modern lightning locating systems are reviewed. Besides general characterization of each system, the available information on its performance characteristics is given with emphasis on those based on formal ground-truth studies published in the peer-reviewed literature.
Fidan, Barış; Umay, Ilknur
2015-01-01
Accurate signal-source and signal-reflector target localization via mobile sensory units and wireless sensor networks (WSNs), including environmental monitoring via sensory UAVs, requires precise knowledge of specific signal propagation properties of the environment, which for the electromagnetic signal case are the permittivity and path loss coefficients. Accurate estimation of these coefficients is therefore of significant importance for the accuracy of location estimates. In this paper, we propose a geometric cooperative technique to instantaneously estimate such coefficients, with details provided for received signal strength (RSS) and time-of-flight (TOF)-based range sensors. The proposed technique is integrated with a recursive least squares (RLS)-based adaptive localization scheme and an adaptive motion control law to construct adaptive target localization and adaptive target tracking algorithms, respectively, that are robust to uncertainties in the aforementioned environmental signal propagation coefficients. The efficiency of the proposed adaptive localization and tracking techniques is both mathematically analysed and verified via simulation experiments. PMID:26690441
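As an illustration of estimating propagation coefficients with recursive least squares (a simplified, standalone version of the coefficient-estimation idea, using an assumed log-distance RSS model rather than the paper's geometric technique):

```python
# RLS estimation of log-distance path-loss parameters (reference power
# P0 and exponent n) from streaming RSS samples. Values are synthetic.
import numpy as np

rng = np.random.default_rng(7)
p0_true, n_true = -40.0, 2.7

theta = np.zeros(2)                 # estimated [P0, n]
P = 1e3 * np.eye(2)                 # RLS covariance
lam = 0.99                          # forgetting factor

for _ in range(300):
    d = 1.0 + 49.0 * rng.random()                       # distance sample, m
    rss = p0_true - 10 * n_true * np.log10(d) + rng.standard_normal()
    phi = np.array([1.0, -10 * np.log10(d)])            # regressor
    k = P @ phi / (lam + phi @ P @ phi)                 # RLS gain
    theta = theta + k * (rss - phi @ theta)             # parameter update
    P = (P - np.outer(k, phi @ P)) / lam                # covariance update

print(f"P0 ~ {theta[0]:.1f} dBm, n ~ {theta[1]:.2f}")   # near -40, 2.7
```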
Hybrid Feedforward-Feedback Noise Control Using Virtual Sensors
NASA Technical Reports Server (NTRS)
Bean, Jacob; Fuller, Chris; Schiller, Noah
2016-01-01
Several approaches to active noise control using virtual sensors are evaluated for eventual use in an active headrest. Specifically, adaptive feedforward, feedback, and hybrid control structures are compared. Each controller incorporates the traditional filtered-x least mean squares algorithm. The feedback controller is arranged in an internal model configuration to draw comparisons with standard feedforward control theory results. Simulation and experimental results are presented that illustrate each controller's ability to minimize the pressure at both physical and virtual microphone locations. The remote microphone technique is used to obtain pressure estimates at the virtual locations. It is shown that a hybrid controller offers performance benefits over the traditional feedforward and feedback controllers. Stability issues associated with feedback and hybrid controllers are also addressed. Experimental results show that 15-20 dB reduction in broadband disturbances can be achieved by minimizing the measured pressure, whereas 10-15 dB reduction is obtained when minimizing the estimated pressure at a virtual location.
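For reference, the filtered-x LMS core that each of these controllers shares can be sketched in a few lines for a single channel; the primary and secondary paths below are assumed FIR filters, and a perfect secondary-path estimate is used for simplicity.

```python
# Single-channel FxLMS sketch with assumed FIR primary/secondary paths.
import numpy as np

rng = np.random.default_rng(8)
n = 5000
x = rng.standard_normal(n)                  # reference (disturbance) signal
Ppath = np.array([0.0, 0.8, 0.4])           # primary path (assumed FIR)
S = np.array([0.6, 0.3])                    # secondary path (assumed FIR)
d = np.convolve(x, Ppath)[:n]               # disturbance at the error mic

L, mu = 8, 0.01
w = np.zeros(L)                             # adaptive control filter
xbuf = np.zeros(L)                          # reference history (newest first)
ybuf = np.zeros(S.size)                     # control-output history
xsbuf = np.zeros(S.size)                    # reference history for S-hat filtering
fxbuf = np.zeros(L)                         # filtered-reference history

err = np.zeros(n)
for k in range(n):
    xbuf = np.roll(xbuf, 1); xbuf[0] = x[k]
    y = w @ xbuf                            # anti-noise sample
    ybuf = np.roll(ybuf, 1); ybuf[0] = y
    err[k] = d[k] + S @ ybuf                # residual at the (possibly virtual) mic
    xsbuf = np.roll(xsbuf, 1); xsbuf[0] = x[k]
    fx = S @ xsbuf                          # reference filtered by S-hat (= S here)
    fxbuf = np.roll(fxbuf, 1); fxbuf[0] = fx
    w -= mu * err[k] * fxbuf                # FxLMS weight update

print("residual power, first vs last 500 samples:",
      round(np.mean(err[:500] ** 2), 3), round(np.mean(err[-500:] ** 2), 3))
```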
Ghost Images in Helioseismic Holography? Toy Models in a Uniform Medium
NASA Astrophysics Data System (ADS)
Yang, Dan
2018-02-01
Helioseismic holography is a powerful technique used to probe the solar interior based on estimations of the 3D wavefield. Porter-Bojarski holography, a well-established method used in acoustics to recover sources and scatterers in 3D, is also an estimation of the wavefield, and hence has the potential to be applied to helioseismology. Here we present a proof-of-concept study in which we compare helioseismic holography and Porter-Bojarski holography under the assumption that the waves propagate in a homogeneous medium. We consider the problem of locating a point source of wave excitation inside a sphere. Under these assumptions, we find that the two imaging methods have the same capability of locating the source, with the exception that helioseismic holography suffers from "ghost images" (i.e., artificial peaks away from the source location). We conclude that Porter-Bojarski holography may improve the method currently used in helioseismology.
Husak, G.J.; Marshall, M. T.; Michaelsen, J.; Pedreros, Diego; Funk, Christopher C.; Galu, G.
2008-01-01
Reliable estimates of cropped area (CA) in developing countries with chronic food shortages are essential for emergency relief and the design of appropriate market-based food security programs. Satellite interpretation of CA is an effective alternative to extensive and costly field surveys, which fail to represent the spatial heterogeneity at the country-level. Bias-corrected, texture based classifications show little deviation from actual crop inventories, when estimates derived from aerial photographs or field measurements are used to remove systematic errors in medium resolution estimates. In this paper, we demonstrate a hybrid high-medium resolution technique for Central Ethiopia that combines spatially limited unbiased estimates from IKONOS images, with spatially extensive Landsat ETM+ interpretations, land-cover, and SRTM-based topography. Logistic regression is used to derive the probability of a location being crop. These individual points are then aggregated to produce regional estimates of CA. District-level analysis of Landsat based estimates showed CA totals which supported the estimates of the Bureau of Agriculture and Rural Development. Continued work will evaluate the technique in other parts of Africa, while segmentation algorithms will be evaluated, in order to automate classification of medium resolution imagery for routine CA estimation in the future.
Experience gained in operation of the VLF ATD lightning location system
NASA Technical Reports Server (NTRS)
Lee, Anthony C. L.
1991-01-01
The United Kingdom (UK) Meteorological Office's Very Low Frequency (VLF) Arrival Time Difference (ATD) System for long-range location of lightning flashes started automatic international issue of lightning-location products on 17 Jun. 1988. Data from before and after this formal start-date were carefully scrutinized to judge performance. Techniques for estimating location accuracy include internal consistency and comparisons against other systems. Other areas studied were range (up to several thousand km); detection efficiency, saturation effects in active situations, and communication difficulties (for this redundant system); and spurious fix rate. Care was taken to assess the potential of the system, in addition to identifying the operational difficulties of the present implementation.
NASA Astrophysics Data System (ADS)
Lee, T. R.; Wood, W. T.; Dale, J.
2017-12-01
Empirical and theoretical models of sub-seafloor organic matter transformation, degradation and methanogenesis require estimates of initial seafloor total organic carbon (TOC). This subsurface methane, under the appropriate geophysical and geochemical conditions, may manifest as methane hydrate deposits. Despite the importance of seafloor TOC, actual observations of TOC in the world's oceans are sparse, and large regions of the seafloor remain unmeasured. To provide an estimate in areas where observations are limited or non-existent, we have implemented interpolation techniques that rely on existing data sets. Recent geospatial analyses have provided accurate accounts of global geophysical and geochemical properties (e.g., crustal heat flow, seafloor biomass, porosity) through machine learning interpolation techniques. These techniques find correlations between the desired quantity (in this case TOC) and other, more widely known quantities (predictors, e.g., bathymetry, distance from coast, etc.), and predictions (with uncertainties) of seafloor TOC in regions lacking direct observations are made based on the correlations. The global distribution of seafloor TOC at 1 x 1 arc-degree resolution was estimated from a dataset of seafloor TOC compiled by Seiter et al. [2004] and a non-parametric (i.e., data-driven) machine learning algorithm, specifically k-nearest neighbors (KNN). Built-in predictor selection and a ten-fold validation technique generated statistically optimal estimates of seafloor TOC and uncertainties. In addition, inexperience was estimated. Inexperience is effectively the distance in parameter space to the single nearest neighbor, and it indicates geographic locations where future data collection would most benefit prediction accuracy. These improved geospatial estimates of TOC in data-deficient areas will provide new constraints on methane production and subsequent methane hydrate accumulation.
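A minimal sketch of the KNN interpolation and the nearest-neighbor "inexperience" measure, on synthetic predictors (the actual analysis also performs predictor selection and ten-fold validation):

```python
# KNN interpolation in predictor space with a nearest-neighbour
# "inexperience" measure. Predictors and TOC values are synthetic.
import numpy as np

rng = np.random.default_rng(9)

# Training "observations": predictors (e.g. bathymetry, distance to
# coast, rescaled) -> TOC.
X = rng.random((500, 2))
toc = 2.0 * X[:, 0] + np.sin(3 * X[:, 1]) + 0.1 * rng.standard_normal(500)

def knn_predict(xq, X, y, k=5):
    d = np.linalg.norm(X - xq, axis=1)
    idx = np.argsort(d)[:k]
    # "Inexperience": distance to the single nearest neighbour.
    return y[idx].mean(), d[idx[0]]

pred, inexperience = knn_predict(np.array([0.3, 0.7]), X, toc)
print(f"predicted TOC: {pred:.2f}, inexperience: {inexperience:.3f}")
```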
Haroldson, Mark A.; Schwartz, Charles C.; Thompson, Daniel J.; Bjornlie, Daniel D.; Gunther, Kerry A.; Cain, Steven L.; Tyers, Daniel B.; Frey, Kevin L.; Aber, Bryan C.
2014-01-01
The distribution of the Greater Yellowstone Ecosystem grizzly bear (Ursus arctos) population has expanded into areas unoccupied since the early 20th century. Up-to-date information on the area and extent of this distribution is crucial for federal, state, and tribal wildlife and land managers to make informed decisions regarding grizzly bear management. The most recent estimate of grizzly bear distribution (2004) utilized fixed-kernel density estimators to describe distribution. This method was complex and computationally time consuming and excluded observations of unmarked bears. Our objective was to develop a technique to estimate grizzly bear distribution that would allow for the use of all verified grizzly bear location data, as well as provide the simplicity to be updated more frequently. We placed all verified grizzly bear locations from all sources from 1990 to 2004 and 1990 to 2010 onto a 3-km × 3-km grid and used zonal analysis and ordinary kriging to develop a predicted surface of grizzly bear distribution. We compared the area and extent of the 2004 kriging surface with the previous 2004 effort and evaluated changes in grizzly bear distribution from 2004 to 2010. The 2004 kriging surface was 2.4% smaller than the previous fixed-kernel estimate, but more closely represented the data. Grizzly bear distribution increased 38.3% from 2004 to 2010, with most expansion in the northern and southern regions of the range. This technique can be used to provide a current estimate of grizzly bear distribution for management and conservation applications.
Estimation and filtering techniques for high-accuracy GPS applications
NASA Technical Reports Server (NTRS)
Lichten, S. M.
1989-01-01
Techniques for determination of very precise orbits for satellites of the Global Positioning System (GPS) are currently being studied and demonstrated. These techniques can be used to make cm-accurate measurements of station locations relative to the geocenter, monitor earth orientation over timescales of hours, and provide tropospheric and clock delay calibrations during observations made with deep space radio antennas at sites where the GPS receivers have been collocated. For high-earth orbiters, meter-level knowledge of position will be available from GPS, while at low altitudes, sub-decimeter accuracy will be possible. Estimation of satellite orbits and other parameters such as ground station positions is carried out with a multi-satellite batch sequential pseudo-epoch state process noise filter. Both square-root information filtering (SRIF) and UD-factorized covariance filtering formulations are implemented in the software.
Research on Inversion Models for Forest Height Estimation Using Polarimetric SAR Interferometry
NASA Astrophysics Data System (ADS)
Zhang, L.; Duan, B.; Zou, B.
2017-09-01
Forest height is an important forest resource parameter and is commonly used in biomass estimation. Forest height extraction with polarimetric SAR interferometry (PolInSAR) is an active research field in imaging SAR remote sensing. SAR interferometry is a well-established technique that estimates the vertical location of the effective scattering center in each resolution cell through the phase difference between images acquired from spatially separated antennas. PolInSAR has applications ranging from climate monitoring to disaster detection, and it is of particular interest in forested areas because it is quite sensitive to the location and vertical distribution of vegetation structure components. However, some existing methods cannot estimate forest height accurately. Here we introduce several available inversion models and compare the precision of some classical inversion approaches using simulated data. By weighing the advantages and disadvantages of these inversion methods, researchers can more easily select appropriate solutions.
Estimation of anomaly location and size using electrical impedance tomography.
Kwon, Ohin; Yoon, Jeong Rock; Seo, Jin Keun; Woo, Eung Je; Cho, Young Gu
2003-01-01
We developed a new algorithm that estimates the locations and sizes of anomalies in an electrically conducting medium based on the electrical impedance tomography (EIT) technique. When only boundary current and voltage measurements are available, it is not practically feasible to reconstruct accurate high-resolution cross-sectional conductivity or resistivity images of a subject. In this paper, we focus our attention on the estimation of the locations and sizes of anomalies with conductivity values different from those of the background tissues. We demonstrate the performance of the algorithm with experimental results using a 32-channel EIT system and a saline phantom. With about 1.73% measurement error in boundary current-voltage data, we found that the minimal size (area) of the detectable anomaly is about 0.72% of the size (area) of the phantom. Potential applications include the monitoring of impedance-related physiological events and bubble detection in two-phase flow. Since this new algorithm requires neither a forward solver nor a time-consuming minimization process, it is fast enough for various real-time applications in medicine and nondestructive testing.
Learning to select useful landmarks.
Greiner, R; Isukapalli, R
1996-01-01
To navigate effectively, an autonomous agent must be able to quickly and accurately determine its current location. Given an initial estimate of its position (perhaps based on dead reckoning) and an image taken of a known environment, our agent first attempts to locate a set of landmarks (real-world objects at known locations), then uses their angular separation to obtain an improved estimate of its current position. Unfortunately, some landmarks may not be visible, or worse, may be confused with other landmarks, resulting both in time wasted searching for undetected landmarks and in further errors in the agent's estimate of its position. To address these problems, we propose a method that uses previous experiences to learn a selection function that, given the set of landmarks that might be visible, returns the subset that can be used to reliably provide an accurate registration of the agent's position. We use statistical techniques to prove that the learned selection function is, with high probability, effectively at a local optimum in the space of such functions. This paper also presents empirical evidence, using real-world data, that demonstrates the effectiveness of our approach.
Full load estimation of an offshore wind turbine based on SCADA and accelerometer data
NASA Astrophysics Data System (ADS)
Noppe, N.; Iliopoulos, A.; Weijtjens, W.; Devriendt, C.
2016-09-01
As offshore wind farms (OWFs) grow older, optimal use of the actual fatigue lifetime of an offshore wind turbine (OWT), and predominantly of its foundation, becomes more important. For OWTs, both quasi-static wind/thrust loads and dynamic loads, induced by turbulence, waves and the turbine's dynamics, contribute to fatigue life progression. To estimate the remaining useful life of an OWT, the stresses acting on the fatigue-critical locations within the structure should be monitored continuously. Unfortunately, for the most common monopile foundations these locations are often situated below sea level and near the mud line, and are thus difficult or even impossible to access on existing OWTs. Strain measurements taken at accessible locations above sea level show a correlation between thrust load and several SCADA parameters. Therefore, a model is created to estimate the thrust load using SCADA data and strain measurements; afterwards, the thrust load acting on the OWT is estimated using the created model and SCADA data only. From this model, the quasi-static loads on the foundation can be estimated over the lifetime of the OWT. To estimate the contribution of the dynamic loads, a virtual sensing technique based on modal decomposition and expansion is applied, using only acceleration measurements recorded at accessible locations on the tower. Superimposing both contributions leads to so-called multi-band virtual sensing. The result is a method that allows estimation of the strain history at any location on the foundation, and thus of the full load, a combination of both quasi-static and dynamic loads, acting on the entire structure. This approach is validated using data from an operating Belgian OWF. An initial good match between measured and predicted strains over a short period of time proves the concept.
Locating and defining underground goaf caused by coal mining from space-borne SAR interferometry
NASA Astrophysics Data System (ADS)
Yang, Zefa; Li, Zhiwei; Zhu, Jianjun; Yi, Huiwei; Feng, Guangcai; Hu, Jun; Wu, Lixin; Preusse, Alex; Wang, Yunjia; Papst, Markus
2018-01-01
It is crucial to locate underground goafs (i.e., mined-out areas) resulting from coal mining and define their spatial dimensions for effectively controlling the induced damages and geohazards. Traditional geophysical techniques for locating and defining underground goafs, however, are ground-based, labour-consuming and costly. This paper presents a novel space-based method for locating and defining the underground goaf caused by coal extraction using Interferometric Synthetic Aperture Radar (InSAR) techniques. As the coal mining-induced goaf is often a cuboid-shaped void and eight critical geometric parameters (i.e., length, width, height, inclined angle, azimuth angle, mining depth, and two central geodetic coordinates) are capable of locating and defining this underground space, the proposed method reduces to determine the eight geometric parameters from InSAR observations. Therefore, it first applies the Probability Integral Method (PIM), a widely used model for mining-induced deformation prediction, to construct a functional relationship between the eight geometric parameters and the InSAR-derived surface deformation. Next, the method estimates these geometric parameters from the InSAR-derived deformation observations using a hybrid simulated annealing and genetic algorithm. Finally, the proposed method was tested with both simulated and two real data sets. The results demonstrate that the estimated geometric parameters of the goafs are accurate and compatible overall, with averaged relative errors of approximately 2.1% and 8.1% being observed for the simulated and the real data experiments, respectively. Owing to the advantages of the InSAR observations, the proposed method provides a non-contact, convenient and practical method for economically locating and defining underground goafs in a large spatial area from space.
NASA Technical Reports Server (NTRS)
Baumgardner, M. F. (Principal Investigator)
1973-01-01
The author has identified the following significant results. The most significant result was the use of the temporal overlay technique, in which the computer was used to overlay ERTS-1 data from three different dates (9 Oct., 14 Nov., 2 Dec.). The registration of MSS digital data from different dates was estimated to be accurate to within one-half of a resolution element. The temporal overlay capability provides a significant advance in machine processing of MSS data. It is no longer essential to go through the tedious exercise of locating ground observation sites on the digital data from each ERTS-1 overpass. Once the address of a ground observation site has been located on a digital tape from any ERTS-1 overpass, the overlay technique can be used to locate the same address on a digital tape of MSS data from any other ERTS-1 pass over the same area. The temporal overlay technique also adds a valuable dimension for identifying and mapping changes in vegetation, water, and other dynamic surface features.
Computing Fault Displacements from Surface Deformations
NASA Technical Reports Server (NTRS)
Lyzenga, Gregory; Parker, Jay; Donnellan, Andrea; Panero, Wendy
2006-01-01
Simplex is a computer program that calculates the locations and displacements of subterranean faults from data on Earth-surface deformations. The calculation involves inversion of a forward model that, given a point source representing a fault in an isotropic, elastic half-space, calculates the resulting surface deformations, displacements, and strains. The inversion uses nonlinear, multiparameter estimation techniques. The input surface-deformation data can be in multiple formats, with absolute or differential positioning. The input data can be derived from multiple sources, including interferometric synthetic-aperture radar, the Global Positioning System, and strain meters. Parameters can be constrained or free. Estimates can be calculated for single or multiple faults. Estimates of parameters are accompanied by reports of their covariances and uncertainties. Simplex has been tested extensively against forward models and against other means of inverting geodetic data and seismic observations.
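The essence of such an inversion can be sketched with a toy Mogi-type point source standing in for Simplex's fault forward model and SciPy's Nelder-Mead (downhill simplex) minimizer; the parameterization and forward model below are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import minimize

def forward(p, xy):
    """Toy point-source forward model in an elastic half-space
    (Mogi-type), NOT the program's actual fault model."""
    x0, y0, depth, strength = p
    r2 = (xy[:, 0] - x0) ** 2 + (xy[:, 1] - y0) ** 2
    return strength * depth / (r2 + depth ** 2) ** 1.5  # vertical uplift

def invert(uz_obs, xy, p0):
    """Nonlinear inversion of surface deformation via Nelder-Mead."""
    cost = lambda p: np.sum((forward(p, xy) - uz_obs) ** 2)
    return minimize(cost, p0, method="Nelder-Mead").x

# Example: approximately recover a source at (2, -1), 3 units deep
rng = np.random.default_rng(1)
xy = rng.uniform(-10, 10, (50, 2))
uz = forward([2.0, -1.0, 3.0, 5.0], xy) + rng.normal(0, 1e-3, 50)
print(invert(uz, xy, p0=[0, 0, 5, 1]))
```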
On the precision of automated activation time estimation
NASA Technical Reports Server (NTRS)
Kaplan, D. T.; Smith, J. M.; Rosenbaum, D. S.; Cohen, R. J.
1988-01-01
We examined how the assignment of local activation times in epicardial and endocardial electrograms is affected by sampling rate, ambient signal-to-noise ratio, and sin(x)/x waveform interpolation. Algorithms used for the estimation of fiducial point locations included dV/dt(max) and a matched-filter detection algorithm. Test signals included epicardial and endocardial electrograms overlying both normal and infarcted regions of dog myocardium. Signal-to-noise levels were adjusted by combining known data sets with white noise "colored" to match the spectral characteristics of experimentally recorded noise. For typical signal-to-noise ratios and sampling rates, the template-matching algorithm provided the greatest precision in reproducibly estimating fiducial point location, and sin(x)/x interpolation allowed for an additional significant improvement. With few restrictions, combining these two techniques may allow the use of digitization rates below the Nyquist rate without significant loss of precision.
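A minimal sketch of the combination evaluated here, template matching followed by sin(x)/x refinement of the correlation peak; the function name, refinement window and upsampling factor are hypothetical choices.

```python
import numpy as np

def fiducial_time(electrogram, template, fs, upsample=8):
    """Matched-filter estimate of activation time, refined with
    sin(x)/x (sinc) interpolation of the correlation peak."""
    # matched filtering: cross-correlate the record with the template
    xc = np.correlate(electrogram, template, mode="valid")
    k = int(np.argmax(xc))
    # sinc-interpolate within +/-2 samples of the coarse peak
    lags = np.arange(len(xc))
    fine = np.linspace(max(k - 2, 0), min(k + 2, len(xc) - 1),
                       4 * upsample + 1)
    interp = [np.sum(xc * np.sinc(t - lags)) for t in fine]
    return fine[int(np.argmax(interp))] / fs  # seconds
```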
NASA Astrophysics Data System (ADS)
Zvietcovich, Fernando; Yao, Jianing; Chu, Ying-Ju; Meemon, Panomsak; Rolland, Jannick P.; Parker, Kevin J.
2016-03-01
Optical Coherence Elastography (OCE) is a widely investigated noninvasive technique for estimating the mechanical properties of tissue. In particular, vibrational OCE methods aim to estimate the shear wave velocity generated by an external stimulus in order to calculate the elastic modulus of tissue. In this study, we compare the performance of five acquisition and processing techniques for estimating the shear wave speed in simulations and experiments using tissue-mimicking phantoms. Accuracy, contrast-to-noise ratio, and resolution are measured for all cases. The first two techniques make use of one piezoelectric actuator to generate a continuous shear wave propagation (SWP) or a tone-burst propagation (TBP) at 400 Hz over the gelatin phantom. The other techniques make use of one additional actuator located on the opposite side of the region of interest in order to create an interference pattern. When both actuators have the same frequency, a standing wave (SW) pattern is generated. Otherwise, when there is a frequency difference df between the actuators, a crawling wave (CrW) pattern is generated that propagates more slowly than a shear wave, which makes it suitable for detection by 2D cross-sectional OCE imaging. If df is not small compared to the operational frequency, the CrW travels faster and a sampled version of it (SCrW) is acquired by the system. Preliminary results suggest that the TBP (error < 4.1%) and SWP (error < 6%) techniques are more accurate when compared to mechanical measurement test results.
A straightforward frequency-estimation technique for GPS carrier-phase time transfer.
Hackman, Christine; Levine, Judah; Parker, Thomas E; Piester, Dirk; Becker, Jürgen
2006-09-01
Although Global Positioning System (GPS) carrier-phase time transfer (GPSCPTT) offers frequency stability approaching 10^-15 at averaging times of 1 d, a discontinuity occurs in the time-transfer estimates between the end of one processing batch (1-3 d in length) and the beginning of the next. The average frequency over a multiday analysis period often has been computed by first estimating and removing these discontinuities, i.e., through concatenation. We present a new frequency-estimation technique in which frequencies are computed from the individual batches and then averaged to obtain the mean frequency for a multiday period. This allows the frequency to be computed without the uncertainty associated with the removal of the discontinuities and requires fewer computational resources. The new technique was tested by comparing the fractional frequency-difference values it yields to those obtained using a GPSCPTT concatenation method and those obtained using two-way satellite time-and-frequency transfer (TWSTFT). The clocks studied were located in Braunschweig, Germany, and in Boulder, CO. The frequencies obtained from the GPSCPTT measurements using either method agreed with those obtained from TWSTFT at several parts in 10^16. The frequency values obtained from the GPSCPTT data by use of the new method agreed with those obtained using the concatenation technique at 1-4 x 10^-16.
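The batch-averaging idea can be made concrete with a short sketch; a least-squares slope per batch and a length-weighted mean are assumptions for illustration, not the authors' exact estimator.

```python
import numpy as np

def mean_fractional_frequency(batches):
    """Average of per-batch frequency estimates, weighted by batch
    length, avoiding the estimation and removal of day-boundary
    discontinuities required by concatenation. Each batch is a pair
    (t, x): measurement epochs and clock time offsets, in seconds."""
    freqs = [np.polyfit(t, x, 1)[0] for t, x in batches]  # y = dx/dt
    weights = [len(t) for t, _ in batches]
    return np.average(freqs, weights=weights)
```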
NASA Astrophysics Data System (ADS)
Verkade, J. S.; Brown, J. D.; Davids, F.; Reggiani, P.; Weerts, A. H.
2017-12-01
Two statistical post-processing approaches for estimation of predictive hydrological uncertainty are compared: (i) 'dressing' of a deterministic forecast by adding a single, combined estimate of both hydrological and meteorological uncertainty and (ii) 'dressing' of an ensemble streamflow forecast by adding an estimate of hydrological uncertainty to each individual streamflow ensemble member. Both approaches aim to produce an estimate of the 'total uncertainty' that captures both the meteorological and hydrological uncertainties. They differ in the degree to which they make use of statistical post-processing techniques. In the 'lumped' approach, both sources of uncertainty are lumped by post-processing deterministic forecasts using their verifying observations. In the 'source-specific' approach, the meteorological uncertainties are estimated by an ensemble of weather forecasts. These ensemble members are routed through a hydrological model and a realization of the probability distribution of hydrological uncertainties (only) is then added to each ensemble member to arrive at an estimate of the total uncertainty. The techniques are applied to one location in the Meuse basin and three locations in the Rhine basin. Resulting forecasts are assessed for their reliability and sharpness, as well as compared in terms of multiple verification scores including the relative mean error, Brier Skill Score, Mean Continuous Ranked Probability Skill Score, Relative Operating Characteristic Score and Relative Economic Value. The dressed deterministic forecasts are generally more reliable than the dressed ensemble forecasts, but the latter are sharper. On balance, however, they show similar quality across a range of verification metrics, with the dressed ensembles coming out slightly better. Some additional analyses are suggested. Notably, these include statistical post-processing of the meteorological forecasts in order to increase their reliability, thus increasing the reliability of the streamflow forecasts produced with ensemble meteorological forcings.
Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems
Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao
2016-01-01
In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple-input multiple-output (VMIMO) system. In real applications, the signal that the BS receives is a distributed source because of the scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimation of signal parameters via rotational invariance techniques (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariance relationships are constructed. Then, we extend ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used to estimate the eigenvalues of the two rotational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that spectrum peak searching is avoided. Therefore, compared to traditional 2D DOA estimation algorithms, the proposed algorithm has significantly lower computational complexity. The intersecting point of two rays, which come from two different directions measured by two uniform rectangular arrays (URAs), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm. PMID:26985896
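For reference, a compact numpy sketch of the classical 1D ESPRIT step that the paper generalizes; it assumes point sources on a uniform linear array, so it is a baseline illustration rather than the proposed 2D distributed-source algorithm.

```python
import numpy as np

def esprit_doa(X, n_sources, d=0.5):
    """1D ESPRIT for a uniform linear array (element spacing d in
    wavelengths); X is the (sensors x snapshots) data matrix."""
    R = X @ X.conj().T / X.shape[1]          # sample covariance
    eigval, eigvec = np.linalg.eigh(R)
    Es = eigvec[:, -n_sources:]              # signal subspace
    # rotational invariance between the two staggered subarrays
    phi = np.linalg.pinv(Es[:-1]) @ Es[1:]
    w = np.angle(np.linalg.eigvals(phi))     # spatial frequencies
    return np.degrees(np.arcsin(w / (2 * np.pi * d)))
```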
Eigenvector of gravity gradient tensor for estimating fault dips considering fault type
NASA Astrophysics Data System (ADS)
Kusumoto, Shigekazu
2017-12-01
The dips of boundaries in faults and caldera walls play an important role in understanding their formation mechanisms. The fault dip is a particularly important parameter in numerical simulations for hazard map creation, as the fault dip affects estimates of the area of disaster occurrence. In this study, I introduce a technique for estimating the fault dip using the eigenvectors of the observed or calculated gravity gradient tensor on a profile, and I investigate its properties through numerical simulations. From the numerical simulations, it was found that the maximum eigenvector of the tensor points to the high-density causative body and that the dip of the maximum eigenvector closely follows the dip of a normal fault. It was also found that the minimum eigenvector of the tensor points to the low-density causative body and that the dip of the minimum eigenvector closely follows the dip of a reverse fault. Thus, which eigenvector of the gravity gradient tensor is appropriate for estimating the fault dip is determined by the fault type. As an application of this technique, I estimated the dip of the Kurehayama Fault located in Toyama, Japan, and obtained a result consistent with conventional fault dip estimates from geology and geomorphology. Because the gravity gradient tensor is required for this analysis, I also present a technique that estimates the gravity gradient tensor from the gravity anomaly on a profile.
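The eigenvector step is straightforward to sketch on a 2D profile; the component names and the sign/angle conventions below are assumptions for illustration.

```python
import numpy as np

def eigenvector_dips(v_xx, v_xz, v_zz):
    """Dips (degrees from horizontal) of the maximum and minimum
    eigenvectors of the 2D gravity gradient tensor along a profile.
    Per the abstract, the maximum-eigenvector dip tracks normal
    faults and the minimum-eigenvector dip tracks reverse faults.
    Eigenvector sign is arbitrary; fold angles into [0, 180) as needed."""
    dips_max, dips_min = [], []
    for xx, xz, zz in zip(v_xx, v_xz, v_zz):
        tensor = np.array([[xx, xz], [xz, zz]])
        val, vec = np.linalg.eigh(tensor)      # ascending eigenvalues
        v_min, v_max = vec[:, 0], vec[:, -1]
        dips_min.append(np.degrees(np.arctan2(v_min[1], v_min[0])))
        dips_max.append(np.degrees(np.arctan2(v_max[1], v_max[0])))
    return np.array(dips_max), np.array(dips_min)
```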
Detection, Source Location, and Analysis of Volcano Infrasound
NASA Astrophysics Data System (ADS)
McKee, Kathleen F.
The study of volcano infrasound focuses on low frequency sound from volcanoes, how volcanic processes produce it, and the path it travels from the source to our receivers. In this dissertation we focus on detecting, locating, and analyzing infrasound from a number of different volcanoes using a variety of analysis techniques. These works will help inform future volcano monitoring using infrasound with respect to infrasonic source location, signal characterization, volatile flux estimation, and back-azimuth to source determination. Source location is an important component of the study of volcano infrasound and of its application to volcano monitoring. Semblance is a forward grid search technique and a common source location method in infrasound studies as well as in seismology. We evaluated the effectiveness of semblance in the presence of significant topographic features for explosions of Sakurajima Volcano, Japan, while taking into account temperature and wind variations. We show that topographic obstacles at Sakurajima cause a semblance source location offset of 360-420 m to the northeast of the actual source location. In addition, we found that, despite the consistent offset in source location, semblance can still be a useful tool for determining periods of volcanic activity. Infrasonic signal characterization follows signal detection and source location in volcano monitoring, in that it informs us of the type of volcanic activity detected. In large volcanic eruptions the lowermost portion of the eruption column is momentum-driven and termed the volcanic jet or gas-thrust zone. This turbulent fluid flow perturbs the atmosphere and produces a sound similar to that of jet and rocket engines, known as jet noise. We deployed an array of infrasound sensors near an accessible, less hazardous, fumarolic jet at Aso Volcano, Japan, as an analogue to large, violent volcanic eruption jets. We recorded volcanic jet noise at 57.6° from vertical, a recording angle not normally feasible in volcanic environments. The fumarolic jet noise was found to have a sustained, low-amplitude signal with a spectral peak between 7-10 Hz. From thermal imagery we measured the jet temperature (~260 °C) and estimated the jet diameter (~2.5 m). From the estimated jet diameter, an assumed Strouhal number of 0.19, and the jet noise peak frequency, we estimated the jet velocity to be 79-132 m/s. We then used published gas data to estimate the volatile flux at 160-270 kg/s (14,000-23,000 t/d). These estimates are typically difficult to obtain in volcanic environments, but provide valuable information on the eruption. At regional and global length scales we use infrasound arrays to detect signals and determine their source back-azimuths. A ground-coupled airwave (GCA) occurs when an incident acoustic pressure wave encounters the Earth's surface and part of the energy of the wave is transferred to the ground. GCAs are commonly observed from sources such as volcanic eruptions, bolides, meteors, and explosions. They have been observed to have retrograde particle motion. When recorded on collocated seismo-acoustic sensors, the phase between the infrasound and seismic signals is 90°. If the sensors are separated, wind noise is usually incoherent and an additional phase is added due to the sensor separation. We utilized this additional phase and the characteristic particle motion to determine a unique back-azimuth solution to an acoustic source. The additional phase will differ depending on the direction from which a wave arrives.
Our technique was tested using synthetic seismo-acoustic data from a coupled Earth-atmosphere 3D finite difference code and then applied to two well-constrained datasets: Mount St. Helens, USA, and Mount Pagan, Commonwealth of the Northern Mariana Islands. The results from our method are within 1°-5° of the actual back-azimuths and of those determined by traditional infrasound array processing. Ours is a new method to detect and determine the back-azimuth to infrasonic signals, which will be useful when financial and spatial resources are limited.
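As a quick sanity check on the jet-velocity figures quoted above, the Strouhal relation St = f D / U can be inverted directly (values from the abstract; the lower published bound of 79 m/s presumably reflects additional uncertainty not modeled here):

```python
# Back-of-envelope reproduction of the jet-velocity estimate:
# U = f * D / St, with peak frequency f, jet diameter D, and an
# assumed Strouhal number St = 0.19.
D, St = 2.5, 0.19                    # m, dimensionless
for f in (7.0, 10.0):                # Hz, spectral peak bounds
    print(f"f = {f:4.1f} Hz -> U = {f * D / St:5.1f} m/s")
# -> roughly 92-132 m/s, consistent with the quoted 79-132 m/s range
```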
Perry, Charles A.; Wolock, David M.; Artman, Joshua C.
2004-01-01
Streamflow statistics of flow duration and peak-discharge frequency were estimated for 4,771 individual locations on streams listed on the 1999 Kansas Surface Water Register. These statistics included the flow-duration values of 90, 75, 50, 25, and 10 percent, as well as the mean flow value. Peak-discharge frequency values were estimated for the 2-, 5-, 10-, 25-, 50-, and 100-year floods. Least-squares multiple regression techniques were used, along with Tobit analyses, to develop equations for estimating flow-duration values of 90, 75, 50, 25, and 10 percent and the mean flow for uncontrolled flow stream locations. The contributing-drainage areas of 149 U.S. Geological Survey streamflow-gaging stations in Kansas and parts of surrounding States that had flow uncontrolled by Federal reservoirs and used in the regression analyses ranged from 2.06 to 12,004 square miles. Logarithmic transformations of climatic and basin data were performed to yield the best linear relation for developing equations to compute flow durations and mean flow. In the regression analyses, the significant climatic and basin characteristics, in order of importance, were contributing-drainage area, mean annual precipitation, mean basin permeability, and mean basin slope. The analyses yielded a model standard error of prediction range of 0.43 logarithmic units for the 90-percent duration analysis to 0.15 logarithmic units for the 10-percent duration analysis. The model standard error of prediction was 0.14 logarithmic units for the mean flow. Regression equations used to estimate peak-discharge frequency values were obtained from a previous report, and estimates for the 2-, 5-, 10-, 25-, 50-, and 100-year floods were determined for this report. The regression equations and an interpolation procedure were used to compute flow durations, mean flow, and estimates of peak-discharge frequency for locations along uncontrolled flow streams on the 1999 Kansas Surface Water Register. Flow durations, mean flow, and peak-discharge frequency values determined at available gaging stations were used to interpolate the regression-estimated flows for the stream locations where available. Streamflow statistics for locations that had uncontrolled flow were interpolated using data from gaging stations weighted according to the drainage area and the bias between the regression-estimated and gaged flow information. On controlled reaches of Kansas streams, the streamflow statistics were interpolated between gaging stations using only gaged data weighted by drainage area.
Geopressure modeling from petrophysical data: An example from East Kalimantan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herkommer, M.A.
1994-07-01
Localized models of abnormal formation pressure (geopressure) are important economic and safety tools frequently used for well planning and drilling operations. Simplified computer-based procedures have been developed that permit these models to be developed more rapidly and with greater accuracy. These techniques are broadly applicable to basins throughout the world where abnormal formation pressures occur. An example from the Attaka field of East Kalimantan, southeast Asia, shows how geopressure models are developed. Using petrophysical and engineering data, empirical correlations between observed pressure and petrophysical logs can be created by computer-assisted data-fitting techniques. These correlations serve as the basis for models of the geopressure. By performing repeated analyses on wells at various locations, contour maps on the top of abnormal geopressure can be created. Methods that are simple in their development and application make the task of geopressure estimation less formidable to the geologist and petroleum engineer. Further, more accurate estimates can significantly improve drilling speeds while reducing the incidence of stuck pipe, kicks, and blowouts. In general, geopressure estimates are used in all phases of drilling operations: to develop mud plans and specify equipment ratings, to assist in the recognition of geopressured formations and determination of mud weights, and to improve predictions at offset locations and in geologically comparable areas.
Investigating the use of multi-point coupling for single-sensor bearing estimation in one direction
NASA Astrophysics Data System (ADS)
Woolard, Americo G.; Phoenix, Austin A.; Tarazaga, Pablo A.
2018-04-01
Bearing estimation of radially propagating symmetric waves in solid structures typically requires a minimum of two sensors. This research investigates the use of multi-point coupling to provide directional inference from a single sensor, by which the number of sensors required for localization can be reduced. As a test specimen, a finite-element model of a beam is constructed with a symmetrically placed bipod that has asymmetric joint-stiffness properties. Impulse loading is applied at different points along the beam, and measurements are taken from the apex of the bipod. A technique is developed to determine the direction-of-arrival of the propagating wave. The accuracy when using the bipod with the developed technique is compared against results gathered without the bipod, measuring from an asymmetric location along the beam. The results show 92% accuracy when the bipod is used, compared to 75% when measuring without the bipod from an asymmetric location. A geometry investigation finds the best accuracy when one leg of the bipod has a low stiffness and a large diameter relative to the other leg.
Real-time vehicle noise cancellation techniques for gunshot acoustics
NASA Astrophysics Data System (ADS)
Ramos, Antonio L. L.; Holm, Sverre; Gudvangen, Sigmund; Otterlei, Ragnvald
2012-06-01
Acoustical sniper positioning systems rely on the detection and direction-of-arrival (DOA) estimation of the shockwave and the muzzle blast in order to provide an estimate of a potential sniper's location. Field tests have shown that detecting and estimating the DOA of the muzzle blast is a rather difficult task in the presence of background noise sources, e.g., vehicle noise, especially in long-range detection and absorbing terrains. In our previous work, presented in the 2011 edition of this conference, we highlighted the importance of improving the SNR of the gunshot signals prior to the detection and recognition stages, aiming at lowering the false alarm and miss-detection rates and thereby increasing the reliability of the system. This paper reports on real-time noise cancellation techniques, such as spectral subtraction and adaptive filtering, applied to gunshot signals. Our model assumes the background noise to be short-time stationary and uncorrelated with the impulsive gunshot signals. In practice, relatively long periods without signal occur and can be used to estimate the noise spectrum and its first- and second-order statistics, as required by the spectral subtraction and adaptive filtering techniques, respectively. The results presented in this work are supported by extensive simulations based on real data.
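A minimal sketch of the spectral-subtraction branch, assuming a signal-free segment is available for the noise estimate; the window length and spectral floor are illustrative choices, not the authors' settings.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(x, noise, fs, floor=0.02):
    """Classic magnitude spectral subtraction: estimate the noise
    magnitude spectrum from a signal-free segment (short-time
    stationarity assumed) and subtract it from the noisy record."""
    _, _, X = stft(x, fs=fs, nperseg=512)
    _, _, N = stft(noise, fs=fs, nperseg=512)
    noise_mag = np.abs(N).mean(axis=1, keepdims=True)
    # subtract, with a spectral floor to limit musical noise
    mag = np.maximum(np.abs(X) - noise_mag, floor * np.abs(X))
    _, y = istft(mag * np.exp(1j * np.angle(X)), fs=fs, nperseg=512)
    return y
```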
NASA Astrophysics Data System (ADS)
Mitilineos, Stelios A.; Argyreas, Nick D.; Thomopoulos, Stelios C. A.
2009-05-01
A fusion-based localization technique for location-based services in indoor environments is introduced herein, based on ultrasound time-of-arrival measurements from multiple off-the-shelf range-estimating sensors that are used in a market-available localization system. In-situ field measurement results indicated that the respective off-the-shelf system was unable to estimate position in most cases, while the underlying sensors are of low quality and yield highly inaccurate range and position estimates. An extensive analysis is performed and a model of the sensor-performance characteristics is established. A low-complexity but accurate sensor fusion and localization technique is then developed, which consists of evaluating multiple sensor measurements and selecting the one that is considered most accurate based on the underlying sensor model. Optimality, in the sense of a genie selecting the optimum sensor, is subsequently evaluated and compared to the proposed technique. The experimental results indicate that the proposed fusion method exhibits near-optimal performance and, albeit theoretically suboptimal, largely overcomes most flaws of the underlying single-sensor system, resulting in a localization system of increased accuracy, robustness and availability.
Comparison of ionospheric plasma drifts obtained by different techniques
NASA Astrophysics Data System (ADS)
Kouba, Daniel; Arikan, Feza; Arikan, Orhan; Toker, Cenk; Mosna, Zbysek; Gok, Gokhan; Rejfek, Lubos; Ari, Gizem
2016-07-01
The ionospheric observatory in Pruhonice (Czech Republic, 50N, 14.9E) provides regular ionospheric sounding using a Digisonde DPS-4D. The paper is focused on F-region vertical drift data. The vertical component of the drift velocity vector can be estimated by several methods. The Digisonde DPS-4D allows sounding in drift mode, with the drift velocity vector as direct output. The Digisonde located in Pruhonice provides a direct drift measurement routinely once per 15 minutes. However, other techniques can be found in the literature; for example, indirect estimation based on the temporal evolution of measured ionospheric characteristics is often used to calculate the vertical drift component. The vertical velocity is then estimated from the change of characteristics scaled from the classical quarter-hour ionograms. In the present paper, the direct drift measurement is compared with a technique based on measuring the virtual height at a fixed frequency from the F-layer trace on the ionogram, and with techniques based on the variation of h'F and hmF. This comparison shows the possibility of using different methods to calculate the vertical drift velocity and their relationship to the direct measurement used by the Digisonde. This study is supported by the joint TUBITAK 114E092 and AS CR 14/001 projects.
Improved Battery State Estimation Using Novel Sensing Techniques
NASA Astrophysics Data System (ADS)
Abdul Samad, Nassim
Lithium-ion batteries have been considered a great complement or substitute for gasoline engines due to their high energy and power density, among other advantages. However, these types of energy storage devices are still not widespread, mainly because of their relatively high cost and safety issues, especially at elevated temperatures. This thesis extends existing methods of estimating critical battery states using model-based techniques augmented by real-time measurements from novel temperature and force sensors. Typically, temperature sensors are located near the edge of the battery and away from the hottest core cell regions, which leads to slower response times and increased errors in the prediction of core temperatures. New sensor technology allows for flexible sensor placement at the cell surface between cells in a pack. This raises questions about the optimal locations of these sensors for best observability and temperature estimation. Using a validated model, which is developed and verified using experiments in laboratory fixtures that replicate vehicle pack conditions, it is shown that optimal sensor placement can lead to better and faster temperature estimation. Another equally important state is the state of health, or the capacity fading, of the cell. This thesis introduces a novel method of using force measurements for capacity fade estimation. Monitoring capacity is important for defining the range of electric vehicles (EVs) and plug-in hybrid electric vehicles (PHEVs). Current capacity estimation techniques require a full discharge to monitor capacity. The proposed method can complement or replace current methods because it only requires a shallow discharge, which is especially useful in EVs and PHEVs. Using the accurate state estimation accomplished earlier, a method for downsizing a battery pack is shown to effectively reduce the number of cells in a pack without compromising safety. The influence of downsizing and of shifting the nominal operating SOC on battery performance (e.g., temperature, utilization, capacity fade, and cost) is demonstrated via simulations. The contributions in this thesis aim to make EVs, HEVs and PHEVs less costly while maintaining safety and reliability as more people transition towards more environmentally friendly means of transportation.
NASA Astrophysics Data System (ADS)
Ajith Kumar, P.; Kumar, Shashi
2016-04-01
Surface maturity estimation of the lunar regolith reveals the selenological processes behind the formation of the lunar surface, which may provide vital information regarding the geological evolution of the Earth, because the lunar surface is considered to be 8-9 times older than that of the Earth. Spectral reflectance data from the Moon Mineralogy Mapper (M3), the hyperspectral sensor of Chandrayaan-1, were coupled with the standard weight percentages of FeO from lunar samples returned from the Apollo and Luna landing sites, through data interpolation techniques, to generate weight-percentage FeO maps of the target lunar locations. From the interpolated data, mineral maps were prepared and the results analyzed.
Fiber Bragg grating based arterial localization device
NASA Astrophysics Data System (ADS)
Ho, Siu Chun Michael; Li, Weijie; Razavi, Mehdi; Song, Gangbing
2017-06-01
A critical first step in many surgical procedures is locating and gaining access to a patient's vascular system. Vascular access allows the deployment of other surgical instruments and also the monitoring of many physiological parameters. Current methods to locate blood vessels are predominantly based on the landmark technique coupled with ultrasound, fluoroscopy, or Doppler. However, even with experience and technological assistance, locating the required blood vessel is not always an easy task, especially with patients who present atypical anatomy or suffer from conditions, such as weak pulsation or obesity, that make vascular localization difficult. With recent advances in fiber optic sensors, there is an opportunity to develop a new tool that can make vascular localization safer and easier. In this work, the authors present a new fiber Bragg grating (FBG) based vascular access device that specializes in arterial localization. The device estimates the direction towards a local artery based on the bending of a needle inserted near the tissue surrounding the artery. Experimental results obtained from an artificial circulatory loop and a mock artery show that the device works best for lower angles of needle insertion and can provide an approximately 40° range of estimation towards the location of a pulsating source (e.g., an artery).
NASA Astrophysics Data System (ADS)
Lagos, Soledad R.; Velis, Danilo R.
2018-02-01
We locate microseismic events generated in hydraulic fracturing monitoring scenarios using two global optimization techniques, Very Fast Simulated Annealing (VFSA) and Particle Swarm Optimization (PSO), and compare them against the classical grid search (GS). To this end, we present an integrated and optimized workflow that concatenates into an automated bash script the different steps that lead from raw 3C data to the microseismic event locations. First, we carry out the automatic detection, denoising and identification of the P- and S-waves. Secondly, we estimate their corresponding backazimuths using polarization information, and propose a simple energy-based criterion to automatically decide which is the most reliable estimate. Finally, after properly restricting the size of the search space using the backazimuth information, we perform the location using the aforementioned algorithms for usual 2D and 3D scenarios of hydraulic fracturing processes. We assess the impact of restricting the search space and show the advantages of using either VFSA or PSO over GS to attain significant speed-ups.
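The baseline grid search can be sketched compactly; the misfit below (pick-time variance after removing the unknown origin time) and the homogeneous velocity are simplifying assumptions, and VFSA or PSO would replace the exhaustive loop with a stochastic search over the same misfit.

```python
import numpy as np

def grid_search_location(t_obs, stations, v, grid):
    """Classical grid search: pick the grid node whose P traveltimes
    best match the arrival picks. stations: (n_sta, 3) coordinates;
    grid: (n_nodes, 3) candidate locations; v: velocity."""
    best, best_err = None, np.inf
    for node in np.asarray(grid):
        tt = np.linalg.norm(stations - node, axis=1) / v
        err = np.var(t_obs - tt)        # origin time drops out
        if err < best_err:
            best, best_err = node, err
    return best
```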
Estimating the number of double-strand breaks formed during meiosis from partial observation.
Toyoizumi, Hiroshi; Tsubouchi, Hideo
2012-12-01
Analyzing the basic mechanism of DNA double-strand break (DSB) formation during meiosis is important for understanding sexual reproduction and genetic diversity. The location and amount of meiotic DSBs can be examined using a common molecular biological technique called Southern blotting, but only a subset of the total DSBs can be observed: only DSB fragments still carrying the region recognized by a Southern blot probe are detected. Under the assumption that DSB formation follows a nonhomogeneous Poisson process, we propose two estimators of the total number of DSBs on a chromosome: (1) an estimator based on the Nelson-Aalen estimator, and (2) an estimator based on a record value process. Further, we compare their asymptotic accuracy.
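For context, this is the generic Nelson-Aalen estimator on which estimator (1) builds; the paper's mapping of chromosome position to the "time" axis and its censoring scheme are not reproduced here.

```python
import numpy as np

def nelson_aalen(event_times, censor_times):
    """Nelson-Aalen estimate of the cumulative hazard H(t):
    H(t) = sum over event times t_i <= t of d_i / n_i, where d_i is
    the number of events at t_i and n_i the number still at risk."""
    event_times = np.asarray(event_times, dtype=float)
    all_t = np.concatenate([event_times, np.asarray(censor_times)])
    H, cum = {}, 0.0
    for t in np.unique(event_times):
        d = np.sum(event_times == t)    # events at t
        n = np.sum(all_t >= t)          # at risk just before t
        cum += d / n
        H[t] = cum
    return H
```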
Cox, Kieran D; Black, Morgan J; Filip, Natalia; Miller, Matthew R; Mohns, Kayla; Mortimor, James; Freitas, Thaise R; Greiter Loerzer, Raquel; Gerwing, Travis G; Juanes, Francis; Dudas, Sarah E
2017-12-01
Diversity estimates play a key role in ecological assessments. Species richness and abundance are commonly used to generate complex diversity indices that are dependent on the quality of these estimates. As such, there is a long-standing interest in the development of monitoring techniques, their ability to adequately assess species diversity, and the implications for generated indices. To determine the ability of substratum community assessment methods to capture species diversity, we evaluated four methods: photo quadrat, point intercept, random subsampling, and full quadrat assessments. Species density, abundance, richness, Shannon diversity, and Simpson diversity were then calculated for each method. We then conducted a method validation at a subset of locations to serve as an indication of how well each method captured the totality of the diversity present. Density, richness, Shannon diversity, and Simpson diversity estimates varied between methods, despite assessments occurring at the same locations, with photo quadrats detecting the lowest estimates and full quadrat assessments the highest. Abundance estimates were consistent among methods. Sample-based rarefaction and extrapolation curves indicated that differences between Hill numbers (richness, Shannon diversity, and Simpson diversity) were significant in the majority of cases, and coverage-based rarefaction and extrapolation curves confirmed that these dissimilarities were due to differences between the methods, not the sample completeness. Method validation highlighted the inability of the tested methods to capture the totality of the diversity present, while further supporting the notion of extrapolating abundances. Our results highlight the need for consistency across research methods, the advantages of utilizing multiple diversity indices, and potential concerns and considerations when comparing data from multiple sources.
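The indices compared in the study are quick to compute from per-quadrat counts; a small sketch (the counts are invented):

```python
import numpy as np

def diversity_indices(counts):
    """Species richness, Shannon diversity, and inverse Simpson
    diversity from per-species abundance counts in one quadrat.
    These correspond to Hill numbers of order 0, 1 (via exp(H')),
    and 2, respectively."""
    counts = np.asarray([c for c in counts if c > 0], dtype=float)
    p = counts / counts.sum()
    richness = len(p)
    shannon = -np.sum(p * np.log(p))        # H'
    simpson = 1.0 / np.sum(p ** 2)          # inverse Simpson
    return richness, shannon, simpson

print(diversity_indices([12, 5, 3, 1]))     # one hypothetical quadrat
```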
Kalman filter data assimilation: Targeting observations and parameter estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bellsky, Thomas, E-mail: bellskyt@asu.edu; Kostelich, Eric J.; Mahalov, Alex
2014-06-15
This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
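A toy sketch of the targeting criterion in an ensemble setting; this is a generic stochastic-EnKF update observing the highest-variance component, not the LETKF used in the paper.

```python
import numpy as np

def targeted_analysis(ensemble, truth, obs_err, rng):
    """One stochastic-EnKF analysis step that observes the state
    component with the largest ensemble variance (the targeting
    criterion). ensemble: (n_members, n_state)."""
    P = np.cov(ensemble.T)                   # ensemble covariance
    j = int(np.argmax(np.diag(P)))           # targeted location
    y = truth[j] + rng.normal(0.0, obs_err)  # simulated observation
    gain = P[:, j] / (P[j, j] + obs_err**2)  # Kalman gain, H = e_j
    perturbed = y + rng.normal(0.0, obs_err, len(ensemble))
    innov = (perturbed - ensemble[:, j])[:, None]
    return ensemble + innov * gain           # updated ensemble
```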
Christin, Sylvain; St-Laurent, Martin-Hugues; Berteaux, Dominique
2015-01-01
Animal tracking through Argos satellite telemetry has enormous potential to test hypotheses in animal behavior, evolutionary ecology, or conservation biology. Yet the applicability of this technique cannot be fully assessed because no clear picture exists as to the conditions influencing the accuracy of Argos locations. Latitude, type of environment, and transmitter movement are among the main candidate factors affecting accuracy. A posteriori data filtering can remove "bad" locations, but testing is still needed to refine filters. First, we evaluate experimentally the accuracy of Argos locations in a polar terrestrial environment (Nunavut, Canada), with both static and mobile transmitters transported by humans and coupled to GPS transmitters. We report static errors among the lowest published. However, the 68th error percentiles of mobile transmitters were 1.7 to 3.8 times greater than those of static transmitters. Second, we test how different filtering methods influence the quality of Argos location datasets. Accuracy of location datasets was best improved by keeping only locations of the best classes (LC3 and 2), whereas the Douglas Argos filter and a homemade speed filter yielded similar performance while retaining more locations. All filters effectively reduced the 68th error percentiles. Finally, we assess how location error impacted, at six spatial scales, two common estimators of home-range size (a proxy of animal space-use behavior synthesizing movements): the minimum convex polygon and the fixed kernel estimator. Location error led to a sometimes dramatic overestimation of home-range size, especially at very local scales. We conclude that Argos telemetry is appropriate for studying medium-size terrestrial animals in polar environments, but recommend that location errors always be measured and evaluated against research hypotheses, and that data always be filtered before analysis. How the movement speed of transmitters affects location error needs additional research. PMID:26545245
NASA Technical Reports Server (NTRS)
Huddleston, Lisa L.; Roeder, William; Merceret, Francis J.
2010-01-01
A technique has been developed to calculate the probability that any nearby lightning stroke was within any radius of any point of interest. In practice, this provides the probability that a nearby lightning stroke was within a key distance of a facility, rather than reporting only the error ellipse centered on the stroke. The process takes the bivariate Gaussian probability density provided by the lightning location error ellipse for the most likely location of a stroke and integrates it to obtain the probability that the stroke occurred inside any specified radius. This new facility-centric technique will be much more useful to space launch customers and may supersede the lightning error ellipse approach discussed in [5], [6].
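The integral has no simple closed form for an arbitrary ellipse and offset, but it is easy to approximate numerically; a Monte Carlo sketch with invented numbers (the 926 m radius is 0.5 n.mi., a plausible key distance, not one stated in the abstract):

```python
import numpy as np

def prob_within_radius(mu, cov, center, radius, n=200_000, seed=0):
    """Probability that a stroke with bivariate-Gaussian location
    uncertainty (mean mu, covariance cov from the error ellipse)
    fell within `radius` of a facility at `center`."""
    rng = np.random.default_rng(seed)
    pts = rng.multivariate_normal(mu, cov, size=n)
    return np.mean(np.linalg.norm(pts - center, axis=1) <= radius)

# Stroke located 1 km east of the facility, 500 m x 300 m ellipse axes
cov = np.diag([500.0**2, 300.0**2])
print(prob_within_radius([1000.0, 0.0], cov, [0.0, 0.0], 926.0))
```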
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenberg, Jim; Penuelas, J.; Guenther, Alex B.
To survey landscape-scale fluxes of biogenic gases, a 100-meter Teflon tube was attached to a tethered balloon as a sampling inlet for a fast-response Proton Transfer Reaction Mass Spectrometer (PTRMS). Along with meteorological instruments deployed on the tethered balloon and at 3 m, and outputs from a regional weather model, these observations were used to estimate landscape-scale biogenic volatile organic compound (BVOC) fluxes with two micrometeorological techniques: mixed layer variance and surface layer gradients. This highly mobile sampling system was deployed at four field sites near Barcelona to estimate landscape-scale BVOC emission factors in a relatively short period (3 weeks). The two micrometeorological techniques agreed within the uncertainty of the flux measurements at all four sites, even though the locations had considerable heterogeneity in species distribution and complex terrain. The observed fluxes were significantly different from emissions predicted with an emission model using site-specific emission factors and land-cover characteristics. Considering the wide range in reported BVOC emission factors for individual vegetation species (more than an order of magnitude), this flux estimation technique is useful for constraining BVOC emission factors used as model inputs.
NASA Astrophysics Data System (ADS)
Behtani, A.; Bouazzouni, A.; Khatir, S.; Tiachacht, S.; Zhou, Y.-L.; Abdel Wahab, M.
2017-05-01
In this paper, the problem of using measured modal parameters to detect and locate damage in laminated composite beam structures with four layers of graphite/epoxy [0°/90₂°/0°] is investigated. A technique based on the residual force method is applied to the laminated composite structure with different boundary conditions. The results of damage detection for several damage cases demonstrate that, using the residual force method as a damage index, the damage location can be identified correctly and the damage extent can be estimated as well.
NASA Astrophysics Data System (ADS)
Meng, Xiaobo; Chen, Haichao; Niu, Fenglin; Tang, Youcai; Yin, Chen; Wu, Furong
2018-02-01
We introduce an improved matching and locating technique to detect and locate microseismic events (-4 < ML < 0) associated with hydraulic fracturing treatments. We employ a set of representative master events to act as template waveforms and detect slave events that strongly resemble the master events through stacking cross-correlograms of both P and S waves between the template waveforms and the continuous records of the monitoring array. Moreover, the residual moveout in the cross-correlograms across the array is used to locate slave events relative to the corresponding master event. In addition, a P wave polarization constraint is applied to resolve the lateral extent of slave events in cases of unfavorable array configuration. We first demonstrate the detectability and location accuracy of the proposed approach with a pseudo-synthetic data set. Compared to matched filter analysis, the proposed approach can significantly enhance detectability at a low false alarm rate and yield robust location estimates of very low SNR events, particularly along the vertical direction. We then apply the method to a real microseismic data set acquired in the Weiyuan shale reservoir of China in November 2014. The expanded microseismic catalog provides a more easily interpretable spatiotemporal evolution of microseismicity, which is investigated in detail in a companion paper.
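The detection step can be sketched as a stacked, normalized matched filter; the channel pairing, the MAD threshold of 8, and the energy normalization below are illustrative choices rather than the authors' exact processing.

```python
import numpy as np

def detect_slaves(cont, templates, threshold=8.0):
    """Matched-filter detection: stack normalized cross-correlations
    of master-event templates over the array channels and flag peaks
    exceeding `threshold` times the MAD of the stack."""
    stacks = []
    for trace, tmpl in zip(cont, templates):   # one pair per channel
        tn = (tmpl - tmpl.mean()) / tmpl.std()
        xc = np.correlate(trace, tn, mode="valid")
        # running normalization by local trace energy
        win = len(tmpl)
        energy = np.sqrt(np.convolve(trace**2, np.ones(win), "valid"))
        stacks.append(xc / np.maximum(energy, 1e-12))
    stack = np.mean(stacks, axis=0)
    mad = np.median(np.abs(stack - np.median(stack)))
    return np.where(stack > threshold * mad)[0], stack
```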
NASA Astrophysics Data System (ADS)
Heinkelmann, Robert; Dick, Galina; Nilsson, Tobias; Soja, Benedikt; Wickert, Jens; Zus, Florian; Schuh, Harald
2015-04-01
Observations from space-geodetic techniques are nowadays increasingly used to derive atmospheric information for various commercial and scientific applications. A prominent example is the operational use of GNSS data to improve global and regional weather forecasts, which started in 2006. Atmosphere gradients describe the azimuthal asymmetry of zenith delays. Estimates of geodetic and other parameters improve significantly when atmosphere gradients are determined in addition. Here we assess the capability of several space-geodetic techniques (GNSS, VLBI, DORIS) to determine atmosphere gradients of refractivity. For this purpose we implement and compare various strategies for gradient estimation, such as different values for the temporal resolution and the corresponding parameter constraints. In least-squares estimation the gradients are usually modelled deterministically as constants or piece-wise linear functions. In our study we compare this approach with a stochastic approach that models atmosphere gradients as random walk processes and applies a Kalman filter for parameter estimation. The gradients derived from space-geodetic techniques are verified by comparison with those derived from Numerical Weather Models (NWM). These model data were generated using raytracing calculations based on European Centre for Medium-Range Weather Forecasts (ECMWF) and National Centers for Environmental Prediction (NCEP) analyses with different spatial resolutions. The investigation of the differences between the ECMWF and NCEP gradients in addition allows for an empirical assessment of the quality of model gradients and of how suitable the NWM data are for verification. CONT14 (2014-05-06 until 2014-05-20) is the most recent two-week-long continuous VLBI campaign carried out by the IVS (International VLBI Service for Geodesy and Astrometry). It represents the state of the art of VLBI performance in terms of the number of stations and observations, and thus presents an excellent test period for comparisons with other space-geodetic techniques. During CONT14, the co-located HOBART12 and HOBART26 VLBI antennas (Hobart, Tasmania, Australia) were involved. The investigation of the gradient estimate differences from these co-located antennas allows for a valuable empirical quality assessment. Another quality criterion for gradient estimates is the difference of parameters at the borders of adjacent 24-h sessions. Both are investigated in our study.
Stress-wave grading techniques on veneer sheets
Joseph Jung
1979-01-01
A study was conducted to compare stress wave devices and determine the information available from stress waves in veneer sheets. The distortion of the stress wave as it passed a defect indicated that an estimate of the location and size of the defect can be obtained, but information regarding wood quality is lost in the areas immediately behind a knot.
Ito, Takahiro; Anzai, Daisuke; Wang, Jianqing
2014-01-01
This paper proposes a novel joint time-of-arrival (TOA)/received-signal-strength-indicator (RSSI)-based wireless capsule endoscope (WCE) location tracking method that requires no prior knowledge of biological human tissues. Generally, TOA-based localization can achieve much higher accuracy than other radio-frequency-based localization techniques; however, wireless signals transmitted from a WCE pass through various kinds of human body tissues, and as a result, the propagation velocity inside a human body differs from that in free space. Because the variation of propagation velocity is mainly determined by the relative permittivity of human body tissues, instead of measuring the relative permittivity in advance, we simultaneously estimate not only the WCE location but also the relative permittivity. For this purpose, this paper first derives a relative permittivity estimation model based on measured RSSI information. Then, we employ a particle filter algorithm combining the TOA-based localization with the RSSI-based relative permittivity estimation. Our computer simulation results demonstrate that the proposed tracking method with the particle filter can accomplish an excellent localization accuracy of around 2 mm without prior information on the relative permittivity of the human body tissues.
Hoard, C.J.; Holtschlag, D.J.; Duris, J.W.; James, D.A.; Obenauer, D.J.
2012-01-01
In 2009, the Michigan Department of Environmental Quality and the U.S. Geological Survey developed a plan to compare the effect of various streamgaging and water-quality collection techniques on streamflow and stream water-quality data for the Saginaw River, Michigan. The Saginaw River is the primary contributor of surface runoff to Saginaw Bay, Lake Huron, draining approximately 70 percent of the Saginaw Bay watershed. The U.S. Environmental Protection Agency has listed the Saginaw Bay system as an "Area of Concern" due to many factors, including excessive sediment and nutrient concentrations in the water. Current efforts to estimate loading of sediment and nutrients to Saginaw Bay utilize water-quality samples collected using a surface-grab technique and flow data that are uncertain during specific conditions. Comparisons of current flow and water-quality sampling techniques to alternative techniques were assessed between April 2009 and September 2009 at two locations in the Saginaw River. Streamflow estimated using acoustic Doppler current profiling technology was compared to a traditional stage-discharge technique. Complex conditions resulting from the influence of Saginaw Bay on the Saginaw River were captured using the acoustic technology, while the traditional stage-discharge technique failed to quantify these effects. Water-quality samples were collected at two locations and on eight different dates, utilizing both surface-grab and depth-integrating multiple-vertical techniques. Sixteen paired samples were collected and analyzed for suspended sediment, turbidity, total phosphorus, total nitrogen, orthophosphate, nitrite, nitrate, and ammonia. Results indicate that concentrations of constituents associated with suspended material, such as suspended sediment, turbidity, and total phosphorus, are underestimated when samples are collected using the surface-grab technique. The median magnitude of the relative percent difference in concentration between sampling techniques was 37 percent for suspended sediment, 26 percent for turbidity, and 9.7 percent for total phosphorus at the two locations. Acoustic techniques were also used to help determine the effectiveness of using acoustic-backscatter information for estimating the suspended-sediment concentration of the river water. Backscatter data were collected by use of an acoustic Doppler current profiler, and a Van Dorn manual sampler was simultaneously used to collect discrete water samples at 10 depths (3.5, 7.5, 11, 14, 15.5, 17.5, 19.5, 20.5, 22, and 24.5 ft below the water surface) along two vertical profiles near the center of the Saginaw River near Bay City. The Van Dorn samples were analyzed for suspended-sediment concentrations, and these data were then used to develop a relationship between acoustic backscatter and suspended-sediment concentration. Acoustic-backscatter data were strongly correlated with sediment concentrations; a linear regression was able to explain 89 percent of the variability. Although this regression technique showed promise for using acoustic backscatter to estimate suspended-sediment concentration, attempts to compare suspended-sediment concentrations to the acoustic signal-to-noise ratio estimates recorded at the fixed acoustic streamflow-gaging station near Bay City (04157061) resulted in a poor correlation.
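The backscatter-to-concentration relationship is a simple least-squares fit; a sketch with hypothetical variable names (the 89-percent figure corresponds to R² = 0.89):

```python
import numpy as np

def backscatter_regression(backscatter_db, ssc_mg_l):
    """Fit suspended-sediment concentration to acoustic backscatter
    and report the fraction of variability explained (R^2)."""
    slope, intercept = np.polyfit(backscatter_db, ssc_mg_l, 1)
    pred = slope * np.asarray(backscatter_db) + intercept
    resid = np.asarray(ssc_mg_l) - pred
    r2 = 1 - resid.var() / np.asarray(ssc_mg_l).var()
    return slope, intercept, r2
```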
Experimental design and efficient parameter estimation in preclinical pharmacokinetic studies.
Ette, E I; Howie, C A; Kelman, A W; Whiting, B
1995-05-01
A Monte Carlo simulation technique used to evaluate the effect of the arrangement of concentrations on the efficiency of estimation of population pharmacokinetic parameters in the preclinical setting is described. Although the simulations were restricted to the one-compartment model with intravenous bolus input, they provide the basis for discussing some structural aspects involved in designing a destructive ("quantic") preclinical population pharmacokinetic study with a fixed sample size, as is usually the case in such studies. The efficiency of parameter estimation obtained with sampling strategies based on the three and four time point designs was evaluated in terms of percent prediction error, design number, individual and joint confidence-interval coverage for parameter estimates, and correlation analysis. The data sets contained random terms for both inter-animal and residual intra-animal variability. The results showed that the typical population parameter estimates for clearance and volume were efficiently (accurately and precisely) estimated for both designs, while interanimal variability (the only random-effect parameter that could be estimated) was inefficiently (inaccurately and imprecisely) estimated with most sampling schedules of the two designs. The exact location of the third and fourth time points for the three and four time point designs, respectively, was not critical to the efficiency of the overall estimation of all population parameters of the model. However, some individual population pharmacokinetic parameters were sensitive to the location of these times.
On Location Estimation Technique Based on the Time of Flight in Low-power Wireless Systems
NASA Astrophysics Data System (ADS)
Botta, Miroslav; Simek, Milan; Krajsa, Ondrej; Cervenka, Vladimir; Pal, Tamas
2015-04-01
This study deals with the distance estimation issue in low-power wireless systems, which are commonly used for sensor networking and for interconnecting the Internet of Things. Because there is an effort to locate or track these sensor entities for different needs, the radio-signal time-of-flight principle is evaluated from both the theoretical and the practical side of application research. Since these sensor devices are mainly targeted at low-power-consumption appliances, there is always a need to optimize any aspect of regular sensor operation. For the distance estimation we benefit from IEEE 802.15.4a technology, which offers precise ranging capabilities. No additional hardware is needed for the ranging task, and all fundamental measurements are acquired with 802.15.4a standard-compliant hardware in a real environment. The proposed work examines the problems of, and solutions for, implementing distance estimation algorithms on WSN devices. The main contribution of the article is its real-testbed evaluation of the ranging technology.
Lee, Seung-Jae; Serre, Marc L; van Donkelaar, Aaron; Martin, Randall V; Burnett, Richard T; Jerrett, Michael
2012-12-01
A better understanding of the adverse health effects of chronic exposure to fine particulate matter (PM2.5) requires accurate estimates of PM2.5 variation at fine spatial scales. Remote sensing has emerged as an important means of estimating PM2.5 exposures, but relatively few studies have compared remote-sensing estimates to those derived from monitor-based data. We evaluated and compared the predictive capabilities of remote sensing and geostatistical interpolation. We developed a space-time geostatistical kriging model to predict PM2.5 over the continental United States and compared the resulting predictions to estimates derived from satellite retrievals. The kriging estimate was more accurate for locations within about 100 km of a monitoring station, whereas the remote-sensing estimate was more accurate for locations more than 100 km from a monitoring station. Based on this finding, we developed a hybrid map that combines the kriging and satellite-based PM2.5 estimates. We found that for most of the populated areas of the continental United States, geostatistical interpolation produced more accurate estimates than remote sensing. The differences between the estimates resulting from the two methods, however, were relatively small. In areas with extensive monitoring networks, the interpolation may provide more accurate estimates, but in the many areas of the world without such monitoring, remote sensing can provide useful exposure estimates that perform nearly as well.
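A minimal sketch of the hybrid rule described above: take the kriging estimate near monitors and the satellite estimate far from them, using the 100 km threshold from the abstract. The input arrays are hypothetical.

```python
# A minimal sketch of the distance-based hybrid PM2.5 map; inputs are hypothetical.
import numpy as np

def hybrid_pm25(kriging_est, satellite_est, dist_to_monitor_km, threshold_km=100.0):
    near = np.asarray(dist_to_monitor_km) <= threshold_km
    return np.where(near, np.asarray(kriging_est), np.asarray(satellite_est))

print(hybrid_pm25([8.1, 9.4], [7.5, 10.2], [40.0, 250.0]))  # -> [8.1, 10.2]
```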
Test-retest reliability of the multifocal photopic negative response.
Van Alstine, Anthony W; Viswanathan, Suresh
2017-02-01
To assess the test-retest reliability of the multifocal photopic negative response (mfPhNR) of normal human subjects. Multifocal electroretinograms were recorded from one eye of 61 healthy adult subjects on two separate days using the Visual Evoked Response Imaging System software version 4.3 (EDI, San Mateo, California). The visual stimulus, delivered on a 75-Hz monitor, consisted of seven equal-sized hexagons, each subtending 12° of visual angle. The m-step exponent was 9, and the m-sequence was slowed to include at least 30 blank frames after each flash. Only the first slice of the first-order kernel was analyzed. The mfPhNR amplitude was measured at a fixed time in the trough from baseline (BT) as well as at the same fixed time in the trough from the preceding b-wave peak (PT). Additionally, we analyzed BT normalized either to PT (BT/PT) or to the b-wave amplitude (BT/b-wave). The relative reliability of test-retest differences for each test location was estimated by the Wilcoxon matched-pair signed-rank test and intraclass correlation coefficients (ICC). Absolute test-retest reliability was estimated by Bland-Altman analysis. The test-retest amplitude differences were not statistically significant for either of the two measurement techniques, as determined by the Wilcoxon matched-pair signed-rank test. PT measurements showed greater ICC values than BT amplitude measurements for all test locations. For each measurement technique, the ICC value of the macular response was greater than that of the surrounding locations. The mean test-retest difference was close to zero for both techniques at each of the test locations, and while the coefficient of reliability (COR; 1.96 times the standard deviation of the test-retest difference) was comparable for the two techniques at each test location when expressed in nanovolts, the %COR (COR normalized to the mean test and retest amplitudes) was superior for PT compared to BT measurements. The ICC and COR for the BT/PT and BT/b-wave ratios were comparable to each other, better than those for BT, but worse than those for PT. mfPhNR amplitude measured at a fixed time in the trough from the preceding b-wave peak (PT) shows greater test-retest reliability than amplitude measured from baseline (BT) or BT amplitude normalized to either the PT or b-wave amplitudes.
A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components
NASA Astrophysics Data System (ADS)
Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa
2016-10-01
Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power-system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to address the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique is more efficient than conventional searching methods for achieving the coarse frequency estimate (locating the peak of the FFT amplitude spectrum). The proposed estimation algorithm therefore requires fewer hardware and software resources and achieves even higher efficiency as the amount of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
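The sketch below shows a generic coarse-to-fine estimator in the same spirit: a zero-crossing count provides a cheap coarse frequency, and a local quadratic interpolation of the FFT magnitude refines it. This is a stand-in for, not a reproduction of, the authors' modified zero-crossing algorithm.

```python
# A generic coarse-to-fine frequency estimation sketch; the signal is synthetic.
import numpy as np

fs, n = 1000.0, 4096
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 123.4 * t) + 0.05 * np.random.default_rng(0).normal(size=n)

# Coarse step: each pair of zero crossings spans half a period on average.
crossings = np.count_nonzero(np.diff(np.signbit(x)))
f_coarse = crossings / (2.0 * (n / fs))

# Fine step: quadratic interpolation of |FFT| around the bin nearest f_coarse.
spec = np.abs(np.fft.rfft(x * np.hanning(n)))
k = int(round(f_coarse * n / fs))
a, b, c = np.log(spec[k - 1:k + 2])
delta = 0.5 * (a - c) / (a - 2 * b + c)
f_fine = (k + delta) * fs / n
print(f"coarse {f_coarse:.2f} Hz, fine {f_fine:.3f} Hz")  # ~123.4 Hz
```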
NASA Astrophysics Data System (ADS)
Lacava, T.; Faruolo, M.; Coviello, I.; Filizzola, C.; Pergola, N.; Tramutoli, V.
2014-12-01
Gas flaring is one of the most controversial energy and environmental issues the Earth is facing, and it contributes to global warming and climate change. According to the World Bank, about 150 billion cubic meters of gas are flared globally each year, equivalent to the annual gas use of Italy and France combined. In addition, about 400 million tons of CO2 (about 1.2% of global CO2 emissions) are added to the atmosphere annually. Efforts to evaluate the impact of flaring on the surrounding environment are hampered by the lack of official information on flare locations and volumes. Suitable satellite-based techniques could offer a solution to this problem through the detection and mapping of flare locations as well as the estimation of gas emissions. In this paper a new methodological approach, based on the Robust Satellite Techniques (RST), a multi-temporal scheme of satellite data analysis, was developed to analyze and characterize the flaring activity of the largest Italian gas and oil pre-treatment plant (ENI-COVA), located in Val d'Agri (Basilicata). For this site, located in an anthropized area characterized by large environmental complexity, flaring emissions are mainly related to emergency conditions (i.e., waste flaring), the industrial process being regulated by strict regional laws. With reference to the peculiar characteristics of COVA flaring, the RST approach was implemented on 13 years of EOS-MODIS (Earth Observing System - Moderate Resolution Imaging Spectroradiometer) infrared data to detect COVA-related thermal anomalies and to develop a regression model for estimating the flared gas volume. The methodological approach, the whole processing chain, and the preliminary results are shown and discussed in this paper. In addition, the possible implementation of the proposed approach on data acquired by SUOMI NPP - VIIRS (National Polar-orbiting Partnership - Visible Infrared Imaging Radiometer Suite) and the expected improvements are also discussed.
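A minimal sketch of an RST-style standardized anomaly index: each pixel of a new scene is compared with its own multi-temporal mean and standard deviation, and large positive deviations are flagged as thermal anomalies. The radiance cube below is a hypothetical placeholder, not MODIS data.

```python
# A minimal RST-style anomaly index sketch on a hypothetical radiance cube.
import numpy as np

rng = np.random.default_rng(2)
years, ny, nx = 13, 50, 50
radiance = rng.normal(300.0, 2.0, (years, ny, nx))  # brightness temps, K
radiance[-1, 25, 25] += 15.0                         # a flaring-like anomaly

mu = radiance[:-1].mean(axis=0)       # per-pixel reference mean (prior years)
sigma = radiance[:-1].std(axis=0)     # per-pixel reference variability
alice = (radiance[-1] - mu) / sigma   # standardized index for the new scene
print(np.argwhere(alice > 4))          # should flag pixel (25, 25)
```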
Multi-technique comparison of troposphere zenith delays and gradients during CONT08
NASA Astrophysics Data System (ADS)
Teke, Kamil; Böhm, Johannes; Nilsson, Tobias; Schuh, Harald; Steigenberger, Peter; Dach, Rolf; Heinkelmann, Robert; Willis, Pascal; Haas, Rüdiger; García-Espada, Susana; Hobiger, Thomas; Ichikawa, Ryuichi; Shimizu, Shingo
2011-07-01
CONT08 was a 15-day campaign of continuous Very Long Baseline Interferometry (VLBI) sessions during the second half of August 2008 carried out by the International VLBI Service for Geodesy and Astrometry (IVS). In this study, VLBI estimates of troposphere zenith total delays (ZTD) and gradients during CONT08 were compared with those derived from observations with the Global Positioning System (GPS), Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS), and water vapor radiometers (WVR) co-located with the VLBI radio telescopes. Similar geophysical models were used for the analysis of the space-geodetic data, whereas the parameterization for the least-squares adjustment was optimized for each technique. In addition to the space-geodetic techniques and WVR, ZTD and gradients from numerical weather models (NWM) were used from the European Centre for Medium-Range Weather Forecasts (ECMWF) (all sites), the Japan Meteorological Agency (JMA) and Cloud Resolving Storm Simulator (CReSS) (Tsukuba), and the High Resolution Limited Area Model (HIRLAM) (European sites). Biases, standard deviations, and correlation coefficients were computed between the troposphere estimates of the various techniques for all eleven CONT08 co-located sites. ZTD from space-geodetic techniques generally agree at the sub-centimetre level during CONT08, and, as expected, the best agreement is found for intra-technique comparisons: between the Vienna VLBI Software and the combined IVS solutions, as well as between the Center for Orbit Determination in Europe (CODE) solution and an IGS PPP time series; both intra-technique comparisons have standard deviations of about 3-6 mm. The best inter-technique agreement of ZTD during CONT08 is found between the combined IVS and IGS solutions, with a mean standard deviation of about 6 mm over all sites, whereas the agreement with numerical weather models is between 6 and 20 mm. The standard deviations are generally larger at low-latitude sites because of higher humidity, which is also the reason why the standard deviations are larger at northern-hemisphere stations during CONT08 than during CONT02, observed in October 2002. The assessment of the troposphere gradients from the different techniques is less clear-cut because of different time intervals, different estimation properties, and different observables. However, the best inter-technique agreement is found between the IVS combined gradients and the GPS solutions, with standard deviations between 0.2 and 0.7 mm.
NASA Astrophysics Data System (ADS)
Singh, Sarvesh Kumar; Rani, Raj
2015-10-01
The study addresses the identification of multiple point sources, emitting the same tracer, from a limited set of merged concentration measurements. Identification here refers to the estimation of the locations and strengths of a known number of simultaneous point releases. The source-receptor relationship is described in the framework of adjoint modelling by using an analytical Gaussian dispersion model. A least-squares minimization framework, free from any initialization of the release parameters (locations and strengths), is presented to estimate those parameters. It utilizes the distributed source information observable from the given monitoring design and number of measurements. The technique leads to an exact retrieval of the true release parameters when measurements are noise-free and exactly described by the dispersion model. The inversion algorithm is evaluated using real data from multiple (two, three and four) releases conducted during the Fusion Field Trials in September 2007 at Dugway Proving Ground, Utah. The release locations are retrieved, on average, within 25-45 m of the true sources, with the distance from retrieved to true source ranging from 0 to 130 m. The release strengths are estimated within a factor of three of the true release rates. The average deviations in the retrieval of source locations are relatively large in the two-release trials in comparison to the three- and four-release trials.
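A single-source toy version of the initialization-free least-squares idea: scan a grid of candidate locations, compute the closed-form optimal strength at each, and keep the best fit. The dispersion kernel below is a generic hypothetical stand-in for the analytical Gaussian plume model used in the study.

```python
# A minimal grid-search source retrieval sketch with a hypothetical kernel.
import numpy as np

rng = np.random.default_rng(3)
receptors = rng.uniform(0, 1000, (20, 2))                 # receptor x, y (m)
true_src, true_q, L = np.array([400.0, 620.0]), 5.0, 150.0

def kernel(src):  # hypothetical source-receptor coupling, not a real plume model
    d = np.linalg.norm(receptors - src, axis=1)
    return np.exp(-d**2 / (2 * L**2))

c_obs = true_q * kernel(true_src)                         # noise-free observations

best = None
for x in np.linspace(0, 1000, 101):
    for y in np.linspace(0, 1000, 101):
        a = kernel(np.array([x, y]))
        q = a @ c_obs / (a @ a)                           # closed-form strength
        resid = np.sum((c_obs - q * a) ** 2)
        if best is None or resid < best[0]:
            best = (resid, x, y, q)
print(f"retrieved source at ({best[1]:.0f}, {best[2]:.0f}) m, q = {best[3]:.2f}")
```

With noise-free synthetic data the scan recovers the true location and strength exactly, mirroring the exact-retrieval property stated in the abstract.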
NASA Astrophysics Data System (ADS)
Houde, Jean-Francois
In the first essay of this dissertation, I study an empirical model of spatial competition. The main feature of my approach is to formally specify commuting paths as the "locations" of consumers in a Hotelling-type model of spatial competition. The main consequence of this location assumption is that the substitution patterns between stations depend in an intuitive way on the structure of the road network and the direction of traffic flows. The demand-side of the model is estimated by combining a model of traffic allocation with econometric techniques used to estimate models of demand for differentiated products (Berry, Levinsohn and Pakes (1995)). The estimated parameters are then used to evaluate the importance of commuting patterns in explaining the distribution of gasoline sales, and compare the economic predictions of the model with the standard home-location model. In the second and third essays, I examine empirically the effect of a price floor regulation on the dynamic and static equilibrium outcomes of the gasoline retail industry. In particular, in the second essay I study empirically the dynamic entry and exit decisions of gasoline stations, and measure the impact of a price floor on the continuation values of staying in the industry. In the third essay, I develop and estimate a static model of quantity competition subject to a price floor regulation. Both models are estimated using a rich panel dataset on the Quebec gasoline retail market before and after the implementation of a price floor regulation.
Robust infrared target tracking using discriminative and generative approaches
NASA Astrophysics Data System (ADS)
Asha, C. S.; Narasimhadhan, A. V.
2017-09-01
The process of designing an efficient tracker for thermal infrared imagery is one of the most challenging tasks in computer vision. Although much advancement has been achieved for RGB videos over the decades, the textureless and colorless properties of objects in thermal imagery pose hard constraints on the design of an efficient tracker. Tracking an object using a single feature or technique often fails to achieve high accuracy. Here, we propose an effective method to track an object in infrared imagery based on a combination of discriminative and generative approaches. The discriminative technique makes use of two complementary methods, a kernelized correlation filter with spatial features and an AdaBoost classifier with pixel-intensity features, operating in parallel. After obtaining optimized locations through the discriminative approaches, the generative technique is applied to determine the best target location using a linear search method. Unlike the baseline algorithms, the proposed method estimates the scale of the target by Lucas-Kanade homography estimation. To evaluate the proposed method, extensive experiments are conducted on 17 challenging infrared image sequences from the LTIR dataset, and a significant improvement in mean distance precision and mean overlap precision is accomplished compared with existing trackers. Further, a quantitative and qualitative assessment of the proposed approach against state-of-the-art trackers clearly demonstrates an overall increase in performance.
Estimating urban flood risk - uncertainty in design criteria
NASA Astrophysics Data System (ADS)
Newby, M.; Franks, S. W.; White, C. J.
2015-06-01
The design of urban stormwater infrastructure is generally performed assuming that climate is static. For engineering practitioners, stormwater infrastructure is designed using a peak flow method, such as the Rational Method outlined in the Australian Rainfall and Runoff (AR&R) guidelines, together with estimates of design rainfall intensities. Changes to Australian rainfall intensity design criteria have been made through updated releases of the AR&R77, AR&R87 and the recent 2013 AR&R Intensity Frequency Distributions (IFDs). The primary focus of this study is to compare the three IFD sets at 51 locations Australia-wide. Since the release of the AR&R77 IFDs, the duration and number of locations of rainfall data have increased and techniques for data analysis have changed. Updated terminology coinciding with the 2013 IFD release has also resulted in a practical change to the design rainfall. For example, infrastructure designed for a 1:5-year ARI corresponds to an 18.13% AEP; however, for practical purposes, hydraulic guidelines have been updated with the more intuitive 20% AEP. The evaluation of design rainfall variation across Australia indicates that the changes depend upon location, recurrence interval and rainfall duration. The changes to design rainfall IFDs are due to the application of differing data analysis techniques, the length and number of data sets, and the change in terminology from ARI to AEP. Such changes mean that developed infrastructure has been designed to a range of different design criteria, indicating the likely inadequacy of earlier developments relative to current estimates of flood risk. In many cases, the under-design of infrastructure is greater than the expected impact of increased rainfall intensity under climate change scenarios.
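The ARI-to-AEP conversion quoted above follows from the Poisson assumption, AEP = 1 - exp(-1/ARI); a one-line check reproduces the 18.13% figure.

```python
# ARI-to-AEP conversion under the Poisson assumption.
import math

def aep_from_ari(ari_years: float) -> float:
    return 1.0 - math.exp(-1.0 / ari_years)

print(f"{100 * aep_from_ari(5):.2f}% AEP")  # 1:5-year ARI -> 18.13% AEP
```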
Rover Slip Validation and Prediction Algorithm
NASA Technical Reports Server (NTRS)
Yen, Jeng
2009-01-01
A physics-based simulation has been developed for the Mars Exploration Rover (MER) mission that applies slope-induced wheel slippage to the rover location estimator. Using the digital elevation map from the stereo images, the computational method resolves the quasi-dynamic equations of motion that incorporate the actual wheel-terrain speed to estimate the gross velocity of the vehicle. Based on the empirical slippage measured by the Visual Odometry software of the rover, this algorithm computes two factors for the slip model by minimizing the distance between the predicted and actual vehicle locations, and then uses the model to predict the next drives. This technique, which has been deployed to operate the MER rovers in the extended mission periods, can accurately predict the rover position and attitude, mitigating the risk and uncertainties in path planning on high-slope areas.
Big data integration for regional hydrostratigraphic mapping
NASA Astrophysics Data System (ADS)
Friedel, M. J.
2013-12-01
Numerical models provide a way to evaluate groundwater systems, but determining the hydrostratigraphic units (HSUs) used in devising these models remains subjective, nonunique, and uncertain. A novel geophysical-hydrogeologic data integration scheme is proposed to constrain the estimation of continuous HSUs. First, machine-learning and multivariate statistical techniques are used to simultaneously integrate borehole hydrogeologic (lithology, hydraulic conductivity, aqueous field parameters, dissolved constituents) and geophysical (gamma, spontaneous potential, and resistivity) measurements. Second, airborne electromagnetic measurements are numerically inverted to obtain subsurface resistivity structure at randomly selected locations. Third, the machine-learning algorithm is trained using the borehole hydrostratigraphic units and inverted airborne resistivity profiles. The trained machine-learning algorithm is then used to estimate HSUs at independent resistivity profile locations. We demonstrate efficacy of the proposed approach to map the hydrostratigraphy of a heterogeneous surficial aquifer in northwestern Nebraska.
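A minimal sketch of the third step described above: a classifier is trained on co-located borehole HSU labels and resistivity profiles, then applied at independent profile locations. A random forest stands in here for the unspecified machine-learning algorithm, and all data shapes and values are hypothetical.

```python
# A minimal sketch of training on borehole HSU labels vs. resistivity profiles
# and predicting HSUs at new airborne EM sites; data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
n_train, n_depths = 200, 30
resistivity = rng.lognormal(3.0, 0.5, (n_train, n_depths))  # ohm-m profiles
hsu_labels = (resistivity.mean(axis=1) > 20).astype(int)    # toy HSU classes

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(resistivity, hsu_labels)

new_profiles = rng.lognormal(3.0, 0.5, (5, n_depths))       # independent sites
print(clf.predict(new_profiles))                            # HSU estimates
```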
Wave-equation migration velocity inversion using passive seismic sources
NASA Astrophysics Data System (ADS)
Witten, B.; Shragge, J. C.
2015-12-01
Seismic monitoring at injection sites (e.g., CO2 sequestration, wastewater disposal, hydraulic fracturing) has become an increasingly important tool for hazard identification and avoidance. The information obtained from these data is often limited to seismic event properties (e.g., location, approximate time, moment tensor), the accuracy of which greatly depends on the estimated elastic velocity models. However, creating accurate velocity models from passive array data remains a challenging problem. Common techniques rely on picking arrivals or matching waveforms, requiring high signal-to-noise data that are often not available for the small-magnitude earthquakes observed over injection sites. We present a new method for obtaining elastic velocity information from earthquakes through full-wavefield wave-equation imaging and adjoint-state tomography. The technique exploits the fact that the P- and S-wave arrivals originate at the same time and location in the subsurface. We generate image volumes by back-propagating P- and S-wave data through initial Earth models and then applying a correlation-based extended-imaging condition. Energy focusing away from zero lag in the extended image volume is used as a (penalized) residual in an adjoint-state tomography scheme to update the P- and S-wave velocity models. We use an acousto-elastic approximation to greatly reduce the computational cost. Because the method requires neither an initial source location or origin-time estimate nor picking of arrivals, it is suitable for low signal-to-noise datasets, such as microseismic data. Synthetic results show that with a realistic distribution of microseismic sources, P- and S-velocity perturbations can be recovered. Although demonstrated at an oil and gas reservoir scale, the technique can be applied to problems at all scales, from geologic core samples to global seismology.
Bladed disc crack diagnostics using blade passage signals
NASA Astrophysics Data System (ADS)
Hanachi, Houman; Liu, Jie; Banerjee, Avisekh; Koul, Ashok; Liang, Ming; Alavi, Elham
2012-12-01
One of the major potential faults in a turbofan engine is crack initiation and propagation in bladed discs under cyclic loads, which could result in engine breakdown if not detected at an early stage. Reliable fault detection techniques are therefore in demand to reduce maintenance cost and prevent catastrophic failures. Although a number of approaches have been reported in the literature, it remains very challenging to develop a reliable technique to accurately estimate the health condition of a rotating bladed disc. Correspondingly, this paper presents a novel technique for bladed disc crack detection through two sequential signal processing stages: (1) signal preprocessing that aims to eliminate the noises in the blade passage signals; (2) signal postprocessing that intends to identify the crack location. In the first stage, physics-based modeling and interpretation are established to help characterize the noises. Crack initiation can be determined based on a health monitoring index derived from the sinusoidal effects. In the second stage, the crack is located through advanced detrended fluctuation analysis of the preprocessed data. The proposed technique is validated using a set of spin rig test data (i.e., tip clearance and time of arrival) acquired during a test conducted on a bladed military engine fan disc. The test results demonstrate that the developed technique is an effective approach for identifying and locating an incipient crack at the root of a bladed disc.
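The second-stage tool named above, detrended fluctuation analysis, can be sketched compactly: integrate the series, detrend it piecewise in windows of increasing size, and read the scaling exponent off a log-log fit. The input here is white noise (expected exponent near 0.5), a stand-in for the preprocessed blade-passage signal.

```python
# A minimal detrended fluctuation analysis (DFA) sketch on a synthetic series.
import numpy as np

def dfa(x, scales):
    y = np.cumsum(x - np.mean(x))           # integrated profile
    f = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        ms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)    # local linear detrend
            ms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        f.append(np.sqrt(np.mean(ms)))      # RMS fluctuation at this scale
    return np.array(f)

rng = np.random.default_rng(5)
signal = rng.normal(size=4096)              # white noise -> alpha ~ 0.5
scales = np.array([16, 32, 64, 128, 256])
alpha = np.polyfit(np.log(scales), np.log(dfa(signal, scales)), 1)[0]
print(f"scaling exponent alpha ~ {alpha:.2f}")
```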
An iterative matching and locating technique for borehole microseismic monitoring
NASA Astrophysics Data System (ADS)
Chen, H.; Meng, X.; Niu, F.; Tang, Y.
2016-12-01
Microseismic monitoring has been proven to be an effective and valuable technology for imaging hydraulic fracture geometry. The success of hydraulic fracturing monitoring relies on the detection and characterization (i.e., location and focal mechanism estimation) of a maximum number of induced microseismic events. All the events are important for quantifying the stimulated reservoir volume (SRV) and characterizing the newly created fracture network. Detecting and locating low-magnitude events, however, are notoriously difficult, particularly in a noisy production environment. Here we propose an iterative matching and locating technique (iMLT) to obtain a maximum detection of small events and the best determination of their locations from continuous data recorded by a single-azimuth downhole geophone array. Because the downhole array is located at a single azimuth, regular M&L using only P-wave cross-correlation cannot resolve the location of a matched event relative to the template event. We therefore introduce the polarization direction into the matching, which significantly improves the lateral resolution of the M&L method, based on numerical simulations with synthetic data. Our synthetic tests further indicate that the inclusion of S-wave cross-correlation data can help better constrain the focal depth of the matched events. We apply this method to a dataset recorded during hydraulic fracturing treatment of a pilot horizontal well within the shale play in southwest China. Our approach yields more than a fourfold increase in the number of located events, compared with the original event catalog from traditional downhole processing.
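The matched-filter core of any M&L-type method can be sketched as a normalized cross-correlation scan of continuous data against a template waveform, with correlation peaks marking candidate matched events. The data below are synthetic, and the polarization and S-wave extensions of iMLT are not reproduced here.

```python
# A minimal template-matching sketch: slide a template along synthetic data and
# flag normalized cross-correlation peaks as candidate matched events.
import numpy as np

rng = np.random.default_rng(6)
template = np.sin(2 * np.pi * np.arange(100) / 20) * np.hanning(100)
data = rng.normal(0, 0.3, 5000)
data[1200:1300] += template                  # a buried repeat of the template

def ncc(data, tpl):
    out = np.empty(len(data) - len(tpl) + 1)
    tpl = (tpl - tpl.mean()) / tpl.std()     # standardized template
    for i in range(len(out)):
        w = data[i:i + len(tpl)]
        out[i] = np.mean((w - w.mean()) / w.std() * tpl)  # Pearson r
    return out

cc = ncc(data, template)
print(int(np.argmax(cc)), float(cc.max()))   # peak near sample 1200
```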
Estimation of carbon emissions from wildfires in Alaskan boreal forests using AVHRR data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kasischke, E.S.; French, N.H.F.; Bourgeau-Chavez, L.L
1993-06-01
The objectives of this research study were to evaluate the utility of AVHRR data for locating and measuring the areal extent of wildfires in the boreal forests of Alaska and to estimate the amount of carbon released during these fires. Techniques were developed to use the normalized-difference vegetation signature derived from AVHRR data to detect and measure the area of fires in Alaska. A model was developed to estimate the amount of biomass/carbon stored in Alaskan boreal forests and the amount of carbon released during fires. The AVHRR analysis resulted in detection of more than 83% of all forest fires greater than 2,000 ha in size in the years 1990 and 1991. The areal estimates derived from AVHRR data were 75% of the area mapped by the Alaska Fire Service for these years. Using fire areas and locations for 1954 through 1992, it was determined that, on average, 13.0 g of carbon per square meter of boreal forest area is released during fires every year. This estimate is two to six times greater than previously reported estimates. Our conclusion is that the analysis of AVHRR data represents a viable means for detecting and mapping fires in boreal regions on a global basis.
Chakra B. Budhathoki; Thomas B. Lynch; James M. Guldin
2010-01-01
Nonlinear mixed-modeling methods were used to estimate parameters in an individual-tree basal area growth model for shortleaf pine (Pinus echinata Mill.). Shortleaf pine individual-tree growth data were available from over 200 permanently established 0.2-acre fixed-radius plots located in naturally-occurring even-aged shortleaf pine forests on the...
Reconstruction of an acoustic pressure field in a resonance tube by particle image velocimetry.
Kuzuu, K; Hasegawa, S
2015-11-01
A technique for estimating an acoustic field in a resonance tube is suggested. The estimation of the acoustic field in a resonance tube is important for the development of thermoacoustic engines and can be conducted employing two sensors to measure pressure. While this measurement technique is known as the two-sensor method, care needs to be taken with the locations of the pressure sensors when conducting the measurements. In the present study, particle image velocimetry (PIV) is employed instead of a pressure measurement by a sensor, and two-dimensional velocity vector images are extracted as sequential data from only a one-time recording made by the PIV video camera. The spatial velocity amplitude is obtained from those images, and a pressure distribution is calculated from the velocity amplitudes at two points by extending the equations derived for the two-sensor method. By means of this method, problems relating to the locations and calibrations of multiple pressure sensors are avoided. Furthermore, to verify the accuracy of the present method, experiments are conducted employing the conventional two-sensor method and laser Doppler velocimetry (LDV). Results obtained by the proposed method are then compared with those from the two-sensor method and LDV.
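A minimal sketch of the two-sensor idea the paper extends: in a lossless tube the complex pressure amplitude is p(x) = A cos(kx) + B sin(kx), so amplitudes at two axial positions determine A and B and hence the pressure anywhere along the tube. All values below are hypothetical and viscous losses are ignored.

```python
# A minimal two-sensor field reconstruction sketch for a lossless tube.
import numpy as np

f, c = 100.0, 343.0                  # drive frequency (Hz), sound speed (m/s)
k = 2 * np.pi * f / c
x1, x2 = 0.10, 0.35                  # sensor positions (m)
p1, p2 = 120.0 + 5j, 80.0 - 10j      # measured complex pressures (Pa), hypothetical

# Solve p(x) = A cos(kx) + B sin(kx) for A, B from the two measurements.
M = np.array([[np.cos(k * x1), np.sin(k * x1)],
              [np.cos(k * x2), np.sin(k * x2)]], dtype=complex)
A, B = np.linalg.solve(M, np.array([p1, p2]))

x = 0.60                             # reconstruct the amplitude anywhere
print(abs(A * np.cos(k * x) + B * np.sin(k * x)))
```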
NASA Technical Reports Server (NTRS)
Cameron, J. R.
1972-01-01
The bone mineral content (BMC), determined by a monoenergetic photon absorption technique, of 29 different locations on the long bones and vertebral columns of 24 skeletons was measured. Compressive tests were made on bone from these locations, in which the maximum load and maximum stress were measured. The ultimate strain, modulus of elasticity, and energy absorbed to failure were also determined for compact bone from the femoral diaphysis and cancellous bone from the eighth through eleventh thoracic vertebrae. Correlations and predictive relationships between these parameters were examined to investigate the applicability of using the BMC at sites normally measured in vivo (i.e., the radius and ulna) in estimating the BMC and/or strength of the spine or femoral neck. It was found that BMC values at sites on the same bone were highly correlated (r = 0.95 or better); BMC values at sites on different bones were also highly interrelated (r = 0.85). The BMC at various sites on the long bones could be estimated to within 10 to 15 percent from the BMC of sites on the radius or ulna.
Determination of mixed mode (I/II) SIFs of cracked orthotropic materials
NASA Astrophysics Data System (ADS)
Chakraborty, D.; Chakraborty, Debaleena; Murthy, K. S. R. K.
2018-05-01
Strain gage techniques have been successfully but sparsely used for the determination of stress intensity factors (SIFs) of orthotropic materials. For mode I cases, a few works have been reported on the strain-gage-based determination of the mode I SIF of orthotropic materials. However, for mixed-mode (I/II) cases, neither a theoretical development of a strain gage based technique nor any recommended guidelines for the minimum number of strain gages and their locations had been reported in the literature for the determination of mixed-mode SIFs. The authors for the first time proposed a theoretical formulation for using strain gages to determine mixed-mode SIFs of orthotropic materials [1]. Based on these formulations, the present paper discusses a finite element (FE) based numerical simulation of the proposed strain gage technique employing [902/0]10S carbon-epoxy laminates with a slant edge crack. An FE based procedure is also presented for determining the optimal radial locations of the strain gages a priori to actual experiments. To substantiate the efficacy of the proposed technique, numerical simulations of the strain-gage-based determination of mixed-mode SIFs have been conducted. Results show that it is possible to accurately determine the mixed-mode SIFs of orthotropic laminates when the strain gages are placed within the optimal radial locations estimated using the present formulation.
NASA Technical Reports Server (NTRS)
Smith, Phillip N.
1990-01-01
The automation of low-altitude rotorcraft flight depends on the ability to detect, locate, and navigate around obstacles lying in the rotorcraft's intended flightpath. Computer vision techniques provide a passive method of obstacle detection and range estimation, for obstacle avoidance. Several algorithms based on computer vision methods have been developed for this purpose using laboratory data; however, further development and validation of candidate algorithms require data collected from rotorcraft flight. A data base containing low-altitude imagery augmented with the rotorcraft and sensor parameters required for passive range estimation is not readily available. Here, the emphasis is on the methodology used to develop such a data base from flight-test data consisting of imagery, rotorcraft and sensor parameters, and ground-truth range measurements. As part of the data preparation, a technique for obtaining the sensor calibration parameters is described. The data base will enable the further development of algorithms for computer vision-based obstacle detection and passive range estimation, as well as provide a benchmark for verification of range estimates against ground-truth measurements.
Permeability Estimation Directly From Logging-While-Drilling Induced Polarization Data
NASA Astrophysics Data System (ADS)
Fiandaca, G.; Maurya, P. K.; Balbarini, N.; Hördt, A.; Christiansen, A. V.; Foged, N.; Bjerg, P. L.; Auken, E.
2018-04-01
In this study, we present the prediction of permeability from time domain spectral induced polarization (IP) data, measured in boreholes on undisturbed formations using the El-log logging-while-drilling technique. We collected El-log data and hydraulic properties on unconsolidated Quaternary and Miocene deposits in boreholes at three locations at a field site in Denmark, characterized by different electrical water conductivity and chemistry. The high vertical resolution of the El-log technique matches the lithological variability at the site, minimizing ambiguity in the interpretation originating from resolution issues. The permeability values were computed from IP data using a laboratory-derived empirical relationship presented in a recent study for saturated unconsolidated sediments, without any further calibration. A very good correlation, within 1 order of magnitude, was found between the IP-derived permeability estimates and those derived using grain size analyses and slug tests, with similar depth trends and permeability contrasts. Furthermore, the effect of water conductivity on the IP-derived permeability estimations was found negligible in comparison to the permeability uncertainties estimated from the inversion and the laboratory-derived empirical relationship.
Varekar, Vikas; Karmakar, Subhankar; Jha, Ramakar
2016-02-01
The design of surface water quality sampling locations is a crucial decision-making process for the rationalization of a monitoring network. The quantity, quality, and types of available data (watershed characteristics and water quality data) may affect the selection of an appropriate design methodology. The modified Sanders approach and multivariate statistical techniques [particularly factor analysis (FA)/principal component analysis (PCA)] are well-accepted and widely used techniques for the design of sampling locations. However, their performance may vary significantly with the quantity, quality, and types of available data. In this paper, an attempt has been made to evaluate the performance of these techniques by accounting for the effect of seasonal variation under a situation of limited water quality data but extensive watershed characteristics information, as continuous and consistent river water quality data are usually difficult to obtain, whereas watershed information may be made available through the application of geospatial techniques. A case study of the Kali River, Western Uttar Pradesh, India, is selected for the analysis. The monitoring was carried out at 16 sampling locations. The discrete and diffuse pollution loads at different sampling sites were estimated and accounted for using the modified Sanders approach, whereas the monitored physical and chemical water quality parameters were utilized as inputs for FA/PCA. The optimum numbers of sampling locations designed by the modified Sanders approach are eight for the monsoon and seven for the non-monsoon season, while those for FA/PCA are eleven and nine, respectively. Little variation in the number and locations of the designed sampling sites was obtained by the two techniques, which shows the stability of the results. A geospatial analysis has also been carried out to check the significance of the designed sampling locations with respect to river basin characteristics and land use of the study area. Both methods are equally efficient; however, the modified Sanders approach outperforms FA/PCA when limited water quality but extensive watershed information is available. The available water quality dataset is limited, and the FA/PCA-based approach fails to identify monitoring locations with higher variation, as these multivariate statistical approaches are data-driven. The priority/hierarchy and number of sampling sites designed by the modified Sanders approach are well justified by the land use practices and observed river basin characteristics of the study area.
Automatic Brain Tumor Detection in T2-weighted Magnetic Resonance Images
NASA Astrophysics Data System (ADS)
Dvořák, P.; Kropatsch, W. G.; Bartušek, K.
2013-10-01
This work focuses on fully automatic detection of brain tumors. The first aim is to determine whether the image contains a brain with a tumor and, if it does, to localize it. The goal of this work is not the exact segmentation of tumors but the localization of their approximate position. The test database contains 203 T2-weighted images, of which 131 are images of healthy brains and the remaining 72 contain brains with a pathological area. The estimation of whether the image shows an afflicted brain, and where the pathological area is, is done by multi-resolution symmetry analysis. The first goal was tested by a five-fold cross-validation technique with 100 repetitions to avoid dependency of the results on sample order. This part of the proposed method reaches a true positive rate of 87.52% and a true negative rate of 93.14% for afflicted-brain detection. The second part of the algorithm was evaluated by comparing the estimated location to the true tumor location. The detection of the tumor location reaches a rate of 95.83% correct anomaly detection and 87.5% correct tumor location.
Erdmann, Włodzimierz S; Kowalczyk, Radosław
2015-01-02
There are several methods for obtaining the location of the centre of mass of the whole body. They are based on cadaver data, on the volume and density of body parts, or on radiation and imaging techniques. Some researchers treated the trunk as one part only, while others divided the trunk into a few parts. In addition, some researchers divided the trunk with planes perpendicular to the longitudinal trunk axis, although the best approach is to obtain trunk parts as anatomical and functional elements. This procedure was used by Dempster and Erdmann. The latter elaborated personalized estimation of inertial quantities of the trunk, while Clauser et al. gave a similar approach for the extremities. The aim of the investigation was to merge both indirect methods in order to obtain an accurate location of the centre of mass of the whole body. As a reference, a direct method based on the reaction-board procedure, i.e., with the body lying on a board supported on a scale, was used. The location of the centre of mass obtained using Clauser's and Erdmann's methods appeared almost identical to the location obtained with the direct method. This approach can be used in many situations, especially for people of different morphology, for a bent trunk, and for asymmetrical movements. Copyright © 2014 Elsevier Ltd. All rights reserved.
Energy Harvesting Hybrid Acoustic-Optical Underwater Wireless Sensor Networks Localization.
Saeed, Nasir; Celik, Abdulkadir; Al-Naffouri, Tareq Y; Alouini, Mohamed-Slim
2017-12-26
Underwater wireless technologies demand higher data rates for ocean exploration. Currently, large coverage is achieved by acoustic sensor networks, with low data rate, high cost, high latency, high power consumption, and a negative impact on marine mammals. Meanwhile, optical communication for underwater networks has the advantage of a higher data rate, albeit over limited communication distances. Moreover, energy consumption is another major problem for underwater sensor networks, due to limited battery power and the difficulty of replacing or recharging the battery of a sensor node. The ultimate solution to this problem is to add energy harvesting capability to the acoustic-optical sensor nodes. Localization of underwater sensor networks is of utmost importance because the data collected from underwater sensor nodes are useful only if the locations of the nodes are known. Therefore, a novel localization technique for energy harvesting hybrid acoustic-optical underwater wireless sensor networks (AO-UWSNs) is proposed. An AO-UWSN employs optical communication for a higher data rate over short transmission distances and acoustic communication for a lower data rate over long transmission distances. A hybrid received signal strength (RSS) based localization technique is proposed to localize the nodes in AO-UWSNs. The proposed technique combines the noisy RSS-based measurements from acoustic and optical communication and estimates the final locations of the acoustic-optical sensor nodes. A weighted multiple-observations paradigm is proposed for the hybrid estimated distances to suppress noisy observations and give more importance to accurate observations. Furthermore, a closed-form solution for the Cramer-Rao lower bound (CRLB) is derived for the localization accuracy of the proposed technique.
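A minimal sketch of RSS-based localization in the spirit of the proposed technique: convert noisy RSS to distance estimates with a log-distance path-loss model, then solve a weighted least-squares position fit. Anchor positions, path-loss constants, and the weights are hypothetical, and the acoustic/optical channel distinction is collapsed into the weights.

```python
# A minimal weighted RSS localization sketch; all constants are hypothetical.
import numpy as np
from scipy.optimize import least_squares

anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
p0, n_pl = -40.0, 2.0                         # dBm at 1 m, path-loss exponent

def rss_to_dist(rss_dbm):
    return 10 ** ((p0 - rss_dbm) / (10 * n_pl))

rss = np.array([-77.5, -74.8, -78.9, -77.0])  # hypothetical node near (60, 40)
d_est = rss_to_dist(rss)
w = np.array([0.5, 1.0, 1.0, 2.0])            # down-weight noisier observations

def residuals(pos):
    return np.sqrt(w) * (np.linalg.norm(anchors - pos, axis=1) - d_est)

sol = least_squares(residuals, x0=np.array([50.0, 50.0]))
print(sol.x)                                   # estimated node position
```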
Fišer, Jaromír; Zítek, Pavel; Skopec, Pavel; Knobloch, Jan; Vyhlídal, Tomáš
2017-05-01
The purpose of the paper is to achieve a constrained estimation of process state variables using an anisochronic state observer tuned by the dominant root locus technique. The anisochronic state observer is based on a state-space time-delay model of the process. Moreover, the process model is identified as not only delayed but also non-linear. This model is developed to describe a material flow process. The root locus technique, combined with the magnitude optimum method, is utilized to investigate the estimation process. The resulting dominant-root locations serve as a measure of estimation performance: the higher the dominant (natural) frequency, with the dominant roots in the leftmost position of the complex plane, the better the performance and robustness achieved. A model-based observer control methodology for material flow processes is also provided by means of the separation principle. For demonstration purposes, the computer-based anisochronic state observer is applied to strip temperature estimation in a hot strip finishing mill composed of seven stands. This application was the original motivation for the presented research. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Adaptive near-field beamforming techniques for sound source imaging.
Cho, Yong Thung; Roan, Michael J
2009-02-01
Phased-array signal processing techniques such as beamforming have a long history in applications such as sonar for the detection and localization of far-field sound sources. Two sometimes competing challenges arise in any type of spatial processing: to minimize contributions from directions other than the look direction and to minimize the width of the main lobe. To tackle this problem, a large body of work has been devoted to the development of adaptive procedures that attempt to minimize side-lobe contributions to the spatial processor output. In this paper, two adaptive beamforming procedures, minimum variance distortionless response and weight optimization to minimize maximum side lobes, are modified for use in source visualization applications to estimate beamforming pressure and intensity using near-field pressure measurements. These adaptive techniques are compared to a fixed near-field focusing technique (both use near-field beamforming weightings focused at source locations estimated from spherical-wave array manifold vectors with spatial windows). The sound source resolution accuracies of the near-field imaging procedures with different weighting strategies are compared using numerical simulations, in both anechoic and reverberant environments, with random measurement noise.
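A minimal sketch of the first adaptive weighting named above, the MVDR beamformer: w = R^{-1}d / (d^H R^{-1} d) passes the look direction undistorted while minimizing output variance. A far-field linear array stands in here for the near-field spherical-wave manifold used in the paper, and the sample covariance is synthetic.

```python
# A minimal MVDR weight computation for a far-field linear array (illustrative).
import numpy as np

n_mics, d, f, c = 8, 0.05, 2000.0, 343.0
k = 2 * np.pi * f / c
positions = np.arange(n_mics) * d

def steering(theta_rad):                       # far-field plane-wave manifold
    return np.exp(-1j * k * positions * np.sin(theta_rad))

rng = np.random.default_rng(7)
snap = rng.normal(size=(n_mics, 200)) + 1j * rng.normal(size=(n_mics, 200))
R = snap @ snap.conj().T / 200 + 1e-3 * np.eye(n_mics)   # covariance + loading

d_vec = steering(np.deg2rad(10.0))                        # look direction
Rinv_d = np.linalg.solve(R, d_vec)
w = Rinv_d / (d_vec.conj() @ Rinv_d)                      # MVDR weights
print(abs(w.conj() @ d_vec))                              # = 1 (distortionless)
```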
Telis, Pamela A.
1992-01-01
Mississippi State water laws require that the 7-day, 10-year low-flow characteristic (7Q10) of streams be used as a criterion for issuing waste-discharge permits to dischargers to streams and for limiting withdrawals of water from streams. This report presents techniques for estimating the 7Q10 for ungaged sites on streams in Mississippi based on the availability of base-flow discharge measurements at the site, the location of nearby gaged sites on the same stream, and the drainage area of the ungaged site. These techniques may be used to estimate the 7Q10 at sites on natural, unregulated or partially regulated, and non-tidal streams. Low-flow characteristics for streams in the Mississippi River alluvial plain were not estimated because the annual low-flow data exhibit decreasing trends with time. Also presented are estimates of the 7Q10 for 493 gaged sites on Mississippi streams. Techniques for estimating the 7Q10 have been developed for ungaged sites with base-flow discharge measurements, for ungaged sites on gaged streams, and for ungaged sites on ungaged streams. For an ungaged site with one or more base-flow discharge measurements, base-flow discharge data at the ungaged site are related to concurrent discharge data at a nearby gaged site. For ungaged sites on gaged streams, several methods of transferring the 7Q10 from a gaged site to an ungaged site were developed; the resulting 7Q10 values are based on drainage-area prorations for the sites. For ungaged sites on ungaged streams, the 7Q10 is estimated from a map developed for this study that shows the unit 7Q10 (7Q10 per square mile of drainage area) for ungaged basins in the State. The mapped values were estimated from the unit 7Q10 determined for nearby gaged basins, adjusted on the basis of the geology and topography of the ungaged basins.
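The drainage-area proration mentioned above can be sketched in one line; the unit exponent is the simplest convention, and the numbers are hypothetical.

```python
# A minimal drainage-area proration sketch; values are hypothetical.
def prorate_7q10(q7q10_gaged_cfs, area_gaged_mi2, area_ungaged_mi2, b=1.0):
    """Transfer a low-flow statistic to an ungaged site on the same stream."""
    return q7q10_gaged_cfs * (area_ungaged_mi2 / area_gaged_mi2) ** b

print(prorate_7q10(12.0, 250.0, 180.0))  # ~8.6 cfs at the ungaged site
```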
A linear least squares approach for evaluation of crack tip stress field parameters using DIC
NASA Astrophysics Data System (ADS)
Harilal, R.; Vyasarayani, C. P.; Ramji, M.
2015-12-01
In the present work, an experimental study is carried out to estimate the mixed-mode stress intensity factors (SIFs) for different cracked specimen configurations using the digital image correlation (DIC) technique. For the estimation of mixed-mode SIFs using DIC, a new algorithm is proposed for the extraction of the crack-tip location and the coefficients in the multi-parameter displacement field equations; from those estimated coefficients, the SIFs can be extracted. The required displacement data surrounding the crack tip are obtained using the 2D-DIC technique, with the open-source 2D DIC software Ncorr used for the displacement field extraction. The presented methodology has been used to extract mixed-mode SIFs for specimen configurations such as single edge notch (SEN) and centre slant crack (CSC) specimens made of Al 2014-T6 alloy. The experimental results have been compared with the analytical values and are found to be in good agreement, confirming the accuracy of the proposed algorithm.
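A minimal sketch of the linear least-squares step: evaluate the multi-parameter field basis at the DIC measurement points, stack it into a design matrix, and solve for all coefficients in one shot. The two-term square-root basis below is a hypothetical stand-in for the full crack-tip displacement series.

```python
# A minimal linear least-squares coefficient fit on synthetic DIC-like data.
import numpy as np

rng = np.random.default_rng(8)
r = rng.uniform(1e-3, 5e-3, 400)              # radii of measurement points (m)
th = rng.uniform(-np.pi, np.pi, 400)          # angles about the crack tip

def basis(r, th):                             # hypothetical two-term field basis
    return np.column_stack([np.sqrt(r) * np.cos(th / 2),
                            np.sqrt(r) * np.sin(th / 2)])

c_true = np.array([3.0e-2, -1.0e-2])          # "true" field coefficients
u = basis(r, th) @ c_true + rng.normal(0, 1e-6, 400)  # noisy displacements

c_hat, *_ = np.linalg.lstsq(basis(r, th), u, rcond=None)
print(c_hat)                                  # recovers c_true closely
```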
CosmoSIS: A system for MC parameter estimation
Bridle, S.; Dodelson, S.; Jennings, E.; ...
2015-12-23
CosmoSIS is a modular system for cosmological parameter estimation, based on Markov Chain Monte Carlo and related techniques. It provides a series of samplers, which drive the exploration of the parameter space, and a series of modules, which calculate the likelihood of the observed data for a given physical model, determined by the location of a sample in the parameter space. While CosmoSIS ships with a set of modules that calculate quantities of interest to cosmologists, there is nothing about the framework itself, nor about the Markov Chain Monte Carlo technique, that is specific to cosmology. Thus CosmoSIS could be used for parameter estimation problems in other fields, including HEP. This paper describes the features of CosmoSIS and shows an example of its use outside of cosmology. It also discusses how collaborative development strategies differ between two communities: that of HEP physicists, accustomed to working in large collaborations, and that of cosmologists, who have traditionally not worked in large groups.
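A minimal Metropolis-Hastings loop of the kind such a framework drives: the sampler needs only a log-likelihood callback, which is exactly why nothing in it is specific to cosmology. The Gaussian target below is illustrative, not a CosmoSIS module.

```python
# A minimal Metropolis-Hastings sampler with a pluggable log-likelihood.
import numpy as np

def log_like(theta):                          # illustrative Gaussian target
    return -0.5 * np.sum((theta - np.array([1.0, -2.0])) ** 2)

rng = np.random.default_rng(9)
theta = np.zeros(2)
chain, ll = [], log_like(theta)
for _ in range(20000):
    prop = theta + rng.normal(0, 0.5, 2)      # symmetric random-walk proposal
    ll_prop = log_like(prop)
    if np.log(rng.uniform()) < ll_prop - ll:  # accept/reject step
        theta, ll = prop, ll_prop
    chain.append(theta)

print(np.mean(chain, axis=0))                 # ~ [1.0, -2.0]
```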
NASA Astrophysics Data System (ADS)
Jenkins, Colleen; Jordan, Jay; Carlson, Jeff
2007-02-01
This paper presents parameter estimation techniques useful for detecting background changes in a video sequence with extreme foreground activity. A specific application of interest is the automated detection of the covert placement of threats (e.g., a briefcase bomb) inside crowded public facilities. We propose that a histogram of pixel intensity acquired from a fixed-mounted camera over time, for a series of images, will be a mixture of two Gaussian functions: the foreground probability distribution function and the background probability distribution function. We use Pearson's Method of Moments to separate the two probability distribution functions. The background function can then be "remembered", and changes in the background can be detected. Subsequent comparisons of background estimates are used to detect changes, which are flagged to alert security forces to the presence and location of potential threats. Results are presented that indicate the significant potential of robust parameter estimation techniques as applied to video surveillance.
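A minimal sketch of separating a pixel-intensity distribution into background and foreground components. An EM-fitted Gaussian mixture is used here as a stand-in for the Pearson method-of-moments separation the paper applies, and the samples are synthetic.

```python
# A minimal two-component separation of a pixel-intensity distribution.
# EM (GaussianMixture) stands in for Pearson's method of moments.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(10)
background = rng.normal(60.0, 4.0, 1500)      # stable background intensities
foreground = rng.normal(140.0, 25.0, 500)     # transient foreground activity
samples = np.concatenate([background, foreground]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(samples)
bg = np.argmax(gmm.weights_)                   # background = dominant mode
print(f"background mean ~ {gmm.means_[bg, 0]:.1f}, "
      f"sigma ~ {np.sqrt(gmm.covariances_[bg, 0, 0]):.1f}")
```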
NASA Astrophysics Data System (ADS)
Heimsch, Florian; Kreilein, Heiner; Rauf, Abdul; Knohl, Alexander
2016-04-01
Rainforests in general, and montane rainforests in particular, have rarely been studied over longer time periods. We aim to provide baseline information on a montane tropical forest's carbon uptake over time in order to quantify possible losses through land-use change. We therefore conducted a re-inventory of 22 ten-year-old forest inventory plots, giving us a rare opportunity to quantify carbon uptake over such a long time period by traditional methods. We discuss the shortfalls of such techniques and why our estimate of 1.5 Mg/ha/a should be considered a lower bound rather than the mean annual carbon uptake. At the same location as the inventory, CO2 fluxes were measured with the eddy-covariance technique. Measurements were conducted at 48 m height with an LI-7500 open-path infrared gas analyser. We will compare carbon uptake estimates from these measurements to those of the more conventional inventory method and discuss which factors are probably responsible for the differences.
NASA Astrophysics Data System (ADS)
Kröhnert, M.; Anderson, R.; Bumberger, J.; Dietrich, P.; Harpole, W. S.; Maas, H.-G.
2018-05-01
Grassland ecology experiments in remote locations requiring quantitative analysis of the biomass in defined plots are becoming increasingly widespread but are still limited by manual sampling methodologies. To provide a cost-effective automated solution for biomass determination, several photogrammetric techniques are examined for generating 3D point cloud representations of plots as a basis for estimating aboveground biomass on grassland plots, a key ecosystem variable used in many experiments. The methods investigated include Structure from Motion (SfM) techniques for camera pose estimation with posterior dense matching, as well as the use of a Time of Flight (TOF) 3D camera, a laser light-sheet triangulation system, and a coded light projection system. In this context, plants at small scales (herbage) and medium scales are observed. In the first pilot study presented here, the best results are obtained by applying dense matching after SfM, which is ideal for integration into distributed experiment networks.
An experimental result of estimating an application volume by machine learning techniques.
Hasegawa, Tatsuhito; Koshino, Makoto; Kimura, Haruhiko
2015-01-01
In this study, we improved the usability of smartphones by automating a user's operations. We developed an intelligent system that uses machine learning techniques to periodically detect a user's context on a smartphone. We selected the Android operating system because it has the largest market share and the highest flexibility in its development environment. In this paper, we describe an application that automatically adjusts the application volume. Adjusting the volume is easily forgotten because users need to push the volume buttons to alter the volume depending on the given situation. Therefore, we developed an application that automatically adjusts the volume based on learned user settings. Application volume can be set differently from ringtone volume on Android devices, and these volume settings are associated with each specific application, including games. Our application records a user's location, the volume setting, the foreground application name, and other such attributes as learning data, and estimates whether the volume should be adjusted using machine learning techniques via Weka.
A Comparison of seismic instrument noise coherence analysis techniques
Ringler, A.T.; Hutt, C.R.; Evans, J.R.; Sandoval, L.D.
2011-01-01
The self-noise of a seismic instrument is a fundamental characteristic used to evaluate the quality of the instrument. It is important to be able to measure this self-noise robustly, to understand how differences among test configurations affect the tests, and to understand how different processing techniques and isolation methods (from nonseismic sources) can contribute to differences in results. We compare two popular coherence methods used for calculating incoherent noise, which is widely used as an estimate of instrument self-noise (incoherent noise and self-noise are not strictly identical but in observatory practice are approximately equivalent; Holcomb, 1989; Sleeman et al., 2006). Beyond directly comparing these two coherence methods on similar models of seismometers, we compare how small changes in test conditions can contribute to incoherent-noise estimates. These conditions include timing errors, signal-to-noise ratio changes (ratios between background noise and instrument incoherent noise), relative sensor locations, misalignment errors, processing techniques, and different configurations of sensor types.
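A minimal two-sensor sketch of the coherence idea: the fraction of one sensor's power spectrum that is incoherent with a co-located reference is taken as its self-noise estimate, n11 = P11 (1 - gamma^2). The signals below are synthetic stand-ins for co-located seismometer records.

```python
# A minimal two-sensor incoherent-noise estimate on synthetic signals.
import numpy as np
from scipy import signal

rng = np.random.default_rng(11)
fs, n = 100.0, 2 ** 16
ground = np.cumsum(rng.normal(size=n))        # common (coherent) input
x1 = ground + 0.5 * rng.normal(size=n)        # sensor 1 = input + self-noise
x2 = ground + 0.5 * rng.normal(size=n)        # sensor 2, independent noise

f, coh = signal.coherence(x1, x2, fs=fs, nperseg=4096)  # magnitude-squared coherence
f, p11 = signal.welch(x1, fs=fs, nperseg=4096)
self_noise = p11 * (1.0 - coh)                # incoherent-noise PSD estimate
print(self_noise[(f > 1) & (f < 10)].mean())  # ~ sensor 1 noise level in band
```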
C-band station coordinate determination for the GEOS-C altimeter calibration area
NASA Technical Reports Server (NTRS)
Krabill, W. B.; Klosko, S. M.
1974-01-01
Dynamical orbital techniques were employed to estimate the center-of-mass station coordinates of six C-band radars located in the designated primary GEOS-C radar altimeter calibration area. This work was performed in support of the planned GEOS-C mission (December 1974 launch). The sites included Bermuda, Grand Turk, Antigua, Wallops Island (Virginia), and Merritt Island (Florida). Two sites were estimated independently at Wallops Island, yielding better than 40 cm relative height recovery, with better than 10 cm and 1 m (relative) recovery for latitude and longitude, respectively. Error analysis and comparisons with other investigators indicate that better than 2 m relative recovery was achieved at all sites. The data used were exclusively from the estimated sites and included 18 orbital arcs, each less than two orbital revolutions in length, having successive tracks over the area. The techniques employed here, given their independence of global tracking support, can be effectively employed to improve various geodetic datums by providing very long and accurate baselines. The C-band data taken on GEOS-C should be employed to improve such geodetic datums as the European-1950 using similar techniques.
Ultrasound-guided thermocouple placement for cryosurgery.
Abramovits, W; Pruiksma, R; Bose, S
1996-09-01
Although cryosurgical methods have high cure rates, imprecise estimates of both skin lesion depth and destructive temperature front location result in a subjective technique for treating skin malignancies. We evaluated the ability of newer ultrasound equipment to assist in the precise placement of thermocouples in human skin. DermaScan C ver. 3 ultrasonographic equipment fitted with a sharp-focus probe with a frequency of 20 MHz and a scan length of 12.1 mm was used to locate thermocouples with 27- and 30-gauge needles. We successfully and reproducibly located thermocouples and thin needles, and accurately measured their distance from the skin surface. Ultrasound is a useful method for the accurate placement of thermocouples, and of needles as thin as 30 gauge, for monitoring in cryosurgery.
Improved microseismic event locations through large-N arrays and wave-equation imaging and inversion
NASA Astrophysics Data System (ADS)
Witten, B.; Shragge, J. C.
2016-12-01
The recent increased focus on small-scale seismicity (Mw < 4) has come about primarily for two reasons. First, there is an increase in induced seismicity related to injection operations, primarily for wastewater disposal, hydraulic fracturing for oil and gas recovery, and geothermal energy production. While the seismicity associated with injection is sometimes felt, it is more often weak. Some weak events are detected on current sparse arrays; however, accurate location of the events often requires a larger number of (multi-component) sensors. This leads to the second reason for an increased focus on small-magnitude seismicity: a greater number of seismometers are being deployed in large-N arrays. The greater number of sensors decreases the detection threshold and therefore significantly increases the number of weak events found. Overall, these two factors bring new challenges and opportunities. Many standard seismological location and inversion techniques are geared toward large, easily identifiable events recorded on a sparse number of stations. However, with large-N arrays we can detect small events by utilizing multi-trace processing techniques, and increased processing power equips us with tools that employ more complete physics for simultaneously locating events and inverting for P- and S-wave velocity structure. We present a method that uses large-N arrays and wave-equation-based imaging and inversion to jointly locate earthquakes and estimate the elastic velocities of the earth. The technique requires no picking and is thus suitable for weak events. We validate the methodology through synthetic and field data examples.
Satellite estimation of incident photosynthetically active radiation using ultraviolet reflectance
NASA Technical Reports Server (NTRS)
Eck, Thomas F.; Dye, Dennis G.
1991-01-01
A new satellite remote sensing method for estimating the amount of photosynthetically active radiation (PAR, 400-700 nm) incident at the earth's surface is described and tested. Potential incident PAR for clear sky conditions is computed from an existing spectral model. A major advantage of the UV approach over existing visible band approaches to estimating insolation is the improved ability to discriminate clouds from high-albedo background surfaces. UV spectral reflectance data from the Total Ozone Mapping Spectrometer (TOMS) were used to test the approach for three climatically distinct, midlatitude locations. Estimates of monthly total incident PAR from the satellite technique differed from values computed from ground-based pyranometer measurements by less than 6 percent. This UV remote sensing method can be applied to estimate PAR insolation over ocean and land surfaces which are free of ice and snow.
A hybrid localization technique for patient tracking.
Rodionov, Denis; Kolev, George; Bushminkin, Kirill
2013-01-01
Nowadays numerous technologies are employed for tracking patients and assets in hospitals or nursing homes. Each of them has advantages and drawbacks. For example, WiFi localization has relatively good accuracy but cannot be used in case of power outage or in areas with poor WiFi coverage. Magnetometer positioning or cellular networks do not have such problems, but they are not as accurate as localization with WiFi. This paper describes a technique that simultaneously employs different localization technologies to enhance the stability and average accuracy of localization. The proposed algorithm is based on a fingerprinting method paired with data fusion and prediction algorithms for estimating the object location. The core idea of the algorithm is technology fusion using error estimation methods. To test the accuracy and performance of the algorithm, a simulation environment was implemented. Significant accuracy improvement was shown in practical scenarios.
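One simple way to realize the "technology fusion using error estimation methods" idea is inverse-variance weighting of the individual position fixes. The sketch below is a minimal illustration with made-up fixes and error variances; it is not the authors' full algorithm, which also includes fingerprinting and prediction:

```python
# Minimal sketch: fuse position fixes from several technologies by
# inverse-variance weighting. All fixes and variances are illustrative.
import numpy as np

# (x, y) fixes in metres and estimated error variances per technology.
fixes = {"wifi":   (np.array([12.1, 5.3]), 4.0),
         "magnet": (np.array([13.0, 4.1]), 9.0),
         "cell":   (np.array([10.5, 6.0]), 25.0)}

weights = np.array([1.0/var for _, var in fixes.values()])
points = np.array([p for p, _ in fixes.values()])
fused = (weights[:, None]*points).sum(axis=0)/weights.sum()
print(fused)   # fused location estimate, dominated by the most reliable fix
```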
GNSS-SLR satellite co-location for the estimate of local ties
NASA Astrophysics Data System (ADS)
Bruni, Sara; Zerbini, Susanna; Errico, Maddalena; Santi, Efisio
2013-04-01
The current realization of the International Terrestrial Reference Frame (ITRF) is based on four different space-geodetic techniques, so that the benefits brought by each observing system to the definition of the frame can compensate for the drawbacks of the others and technique-specific systematic errors might be identified. The strategy used to combine the observations from the different techniques is then of prominent importance for the realization of a precise and stable reference frame. This study concentrates, in particular, on the combination of Satellite Laser Ranging (SLR) and Global Navigation Satellite System (GNSS) observations by exploiting satellite co-locations. This innovative approach is based on the fact that laser tracking of GNSS satellites carrying laser reflector arrays on board allows for the combination of optical and microwave signals in the determination of the spacecraft orbit. Moreover, the use of satellite co-locations differs quite significantly from the traditional combination method, in which each single-technique solution is carried out autonomously and is interrelated in a second step. One of the benefits of the approach adopted in this study is that it allows for an independent validation of the local tie, i.e. of the vector connecting the SLR and GNSS reference points in a multi-technique station. Typically, local ties are expressed by a single value, measured with ground-based geodetic techniques and taken as constant. In principle, however, local ties might show time variations, likely caused by the different monumentation characteristics of the GNSS antennas with respect to those of a SLR system. This study evaluates the possibility of using the satellite co-location approach to generate local-tie time series by means of observations available for a selected network of ILRS stations. The data analyzed in this study were acquired as part of NASA's Earth Science Data Systems and are archived and distributed by the Crustal Dynamics Data Information System (CDDIS).
Considerations in Phase Estimation and Event Location Using Small-aperture Regional Seismic Arrays
NASA Astrophysics Data System (ADS)
Gibbons, Steven J.; Kværna, Tormod; Ringdal, Frode
2010-05-01
The global monitoring of earthquakes and explosions at decreasing magnitudes necessitates the fully automatic detection, location and classification of an ever increasing number of seismic events. Many seismic stations of the International Monitoring System are small-aperture arrays designed to optimize the detection and measurement of regional phases. Collaboration with operators of mines within regional distances of the ARCES array, together with waveform correlation techniques, has provided an unparalleled opportunity to assess the ability of a small-aperture array to provide robust and accurate direction and slowness estimates for phase arrivals resulting from well-constrained events at sites of repeating seismicity. A significant reason for the inaccuracy of current fully-automatic event location estimates is the use of f-k slowness estimates measured in variable frequency bands. The variability of slowness and azimuth measurements for a given phase from a given source region is reduced by the application of almost any constant frequency band. However, the frequency band resulting in the most stable estimates varies greatly from site to site. Situations are observed in which regional P arrivals from two sites, far closer than the theoretical resolution of the array, result in highly distinct populations in slowness space. This means that the f-k estimates, even at relatively low frequencies, can be sensitive to source and path-specific characteristics of the wavefield and should be treated with caution when inferring a geographical backazimuth under the assumption of a planar wavefront arriving along the great-circle path. Moreover, different frequency bands are associated with different biases, meaning that slowness and azimuth station corrections (commonly denoted SASCs) cannot be calibrated, and should not be used, without reference to the frequency band employed. We demonstrate an example where fully-automatic locations based on a source-region specific fixed-parameter template are more stable than the corresponding analyst-reviewed estimates. The reason is that the analyst selects a frequency band and analysis window which appears optimal for each event. In this case, the frequency band which produces the most consistent direction estimates has neither the best SNR nor the greatest beam gain, and is therefore unlikely to be chosen by an analyst without calibration data.
Using pairs of physiological models to estimate temporal variation in amphibian body temperature.
Roznik, Elizabeth A; Alford, Ross A
2014-10-01
Physical models are often used to estimate ectotherm body temperatures, but designing accurate models for amphibians is difficult because they can vary in cutaneous resistance to evaporative water loss. To account for this variability, a recently published technique requires a pair of agar models that mimic amphibians with 0% and 100% resistance to evaporative water loss; the temperatures of these models define the lower and upper boundaries of possible amphibian body temperatures for the location in which they are placed. The goal of our study was to develop a method for using these pairs of models to estimate parameters describing the distributions of body temperatures of frogs under field conditions. We radiotracked green-eyed treefrogs (Litoria serrata) and collected semi-continuous thermal data using both temperature-sensitive radiotransmitters with an automated datalogging receiver, and pairs of agar models placed in frog locations, and we collected discrete thermal data using a non-contact infrared thermometer when frogs were located. We first examined the accuracy of temperature-sensitive transmitters in estimating frog body temperatures by comparing transmitter data with direct temperature measurements taken simultaneously for the same individuals. We then compared parameters (mean, minimum, maximum, standard deviation) characterizing the distributions of temperatures of individual frogs estimated from data collected using each of the three methods. We found strong relationships between thermal parameters estimated from data collected using automated radiotelemetry and both types of thermal models. These relationships were stronger for data collected using automated radiotelemetry and impermeable thermal models, suggesting that in the field, L. serrata has a relatively high resistance to evaporative water loss. Our results demonstrate that placing pairs of thermal models in frog locations can provide accurate estimates of the distributions of temperatures experienced by individual frogs, and that comparing temperatures from model pairs to direct measurements collected simultaneously on frogs can be used to broadly characterize the skin resistance of a species, and to select which model type is most appropriate for estimating temperature distributions for that species. Copyright © 2014 Elsevier Ltd. All rights reserved.
Loss Factor Estimation Using the Impulse Response Decay Method on a Stiffened Structure
NASA Technical Reports Server (NTRS)
Cabell, Randolph; Schiller, Noah; Allen, Albert; Moeller, Mark
2009-01-01
High-frequency vibroacoustic modeling is typically performed using energy-based techniques such as Statistical Energy Analysis (SEA). Energy models require an estimate of the internal damping loss factor. Unfortunately, the loss factor is difficult to estimate analytically, and experimental methods such as the power injection method can require extensive measurements over the structure of interest. This paper discusses the implications of estimating damping loss factors using the impulse response decay method (IRDM) from a limited set of response measurements. An automated procedure for implementing IRDM is described and then evaluated using data from a finite element model of a stiffened, curved panel. Estimated loss factors are compared with loss factors computed using a power injection method and a manual curve fit. The paper discusses the sensitivity of the IRDM loss factor estimates to damping of connected subsystems and the number and location of points in the measurement ensemble.
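A minimal sketch of IRDM in a single one-third-octave band is given below: band-pass the impulse response, fit a line to the decibel envelope over the initial decay, and convert the decay rate DR (dB/s) to a loss factor via eta = DR/(27.3 f). The synthetic response and the 30 dB fitting window are illustrative assumptions, not the paper's procedure:

```python
# Sketch of IRDM in one one-third-octave band: band-pass the impulse
# response, fit a line to the dB envelope over the first 30 dB of decay,
# and convert the decay rate DR (dB/s) to a loss factor, eta = DR/(27.3*f).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def irdm_loss_factor(h, fs, fc):
    band = [fc/2**(1/6)/(fs/2), fc*2**(1/6)/(fs/2)]   # third-octave edges
    b, a = butter(4, band, btype="band")
    env_db = 20*np.log10(np.abs(hilbert(filtfilt(b, a, h))) + 1e-12)
    t = np.arange(h.size)/fs
    i0 = env_db.argmax()
    sel = (t >= t[i0]) & (env_db >= env_db[i0] - 30)
    decay_rate = -np.polyfit(t[sel], env_db[sel], 1)[0]   # dB/s
    return decay_rate/(27.3*fc)

# synthetic decaying response at fc = 1 kHz with a true eta of 0.01
fs, fc, eta = 48000, 1000.0, 0.01
t = np.arange(0, 1.0, 1/fs)
h = np.exp(-np.pi*fc*eta*t)*np.sin(2*np.pi*fc*t)
print(irdm_loss_factor(h, fs, fc))   # ~0.01
```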
Maturation of Structural Health Management Systems for Solid Rocket Motors
NASA Technical Reports Server (NTRS)
Quing, Xinlin; Beard, Shawn; Zhang, Chang
2011-01-01
Concepts of an autonomous and automated space-compliant diagnostic system were developed for condition-based maintenance (CBM) of rocket motors for space exploration vehicles. The diagnostic system will provide real-time information on the integrity of critical structures on launch vehicles, improve their performance, and greatly increase crew safety while decreasing inspection costs. Using the SMART Layer technology as a basis, detailed procedures and calibration techniques for implementation of the diagnostic system were developed. The diagnostic system is a distributed system, which consists of a sensor network, local data loggers, and a host central processor. The system detects external impact to the structure. The major functions of the system include an estimate of impact location, an estimate of impact force at the impacted location, and an estimate of the structure damage at the impacted location. This system consists of a large-area sensor network, dedicated multiple local data loggers with signal processing and data analysis software to allow for real-time, in situ monitoring, and long-term tracking of structural integrity of solid rocket motors. Specifically, the system could provide easy installation of large sensor networks, onboard operation under harsh environments and loading, inspection of inaccessible areas without disassembly, detection of impact events and impact damage in real-time, and monitoring of a large area with local data processing to reduce wiring.
A combined joint diagonalization-MUSIC algorithm for subsurface targets localization
NASA Astrophysics Data System (ADS)
Wang, Yinlin; Sigman, John B.; Barrowes, Benjamin E.; O'Neill, Kevin; Shubitidze, Fridon
2014-06-01
This paper presents a combined joint diagonalization (JD) and multiple signal classification (MUSIC) algorithm for estimating subsurface objects' locations from electromagnetic induction (EMI) sensor data, without solving ill-posed inverse-scattering problems. JD is a numerical technique that finds the common eigenvectors that diagonalize a set of multistatic response (MSR) matrices measured by a time-domain EMI sensor. Eigenvalues from targets of interest (TOI) can then be distinguished automatically from noise-related eigenvalues. Filtering is also carried out in JD to improve the signal-to-noise ratio (SNR) of the data. The MUSIC algorithm utilizes the orthogonality between the signal and noise subspaces in the MSR matrix, which can be separated with information provided by JD. An array of theoretically calculated Green's functions is then projected onto the noise subspace, and the location of the target is estimated by the minimum of the projection owing to the orthogonality. This combined method is applied to data from the Time-Domain Electromagnetic Multisensor Towed Array Detection System (TEMTADS). Examples of TEMTADS test stand data and field data collected at Spencer Range, Tennessee are analyzed and presented. Results indicate that due to its noniterative mechanism, the method can be executed fast enough to provide real-time estimation of objects' locations in the field.
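The MUSIC step can be sketched compactly: eigendecompose an MSR-like matrix, treat the eigenvectors below the JD-determined signal split as the noise subspace, and scan candidate Green's-function vectors for minima of their noise-subspace projection. Everything in the toy example below (receiver count, grid, noise floor) is hypothetical:

```python
# Compact sketch of the MUSIC step: split the eigenvectors of an MSR-like
# matrix into signal and noise subspaces (the split would come from JD) and
# scan candidate Green's-function vectors; projection minima mark targets.
import numpy as np

def music_spectrum(R, green, n_sources):
    """R: (m, m) symmetric data matrix; green: (n_grid, m) candidates."""
    w, v = np.linalg.eigh(R)                  # eigenvalues in ascending order
    En = v[:, : R.shape[0] - n_sources]       # noise subspace
    proj = np.linalg.norm(green.conj() @ En, axis=1)**2
    return 1.0/(proj + 1e-12)                 # pseudospectrum, peaks at targets

# toy example: 8 receivers, one buried target with response vector g_true
rng = np.random.default_rng(1)
m = 8
g_true = rng.standard_normal(m)
R = np.outer(g_true, g_true) + 0.01*np.eye(m)   # signal plus noise floor
grid = rng.standard_normal((200, m))            # candidate-location responses
grid[42] = g_true                               # place the truth on the grid
print(np.argmax(music_spectrum(R, grid, n_sources=1)))  # -> 42
```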
Relative azimuth inversion by way of damped maximum correlation estimates
Ringler, A.T.; Edwards, J.D.; Hutt, C.R.; Shelly, F.
2012-01-01
Horizontal seismic data are utilized in a large number of Earth studies. Such work depends on the published orientations of the sensitive axes of seismic sensors relative to true North. These orientations can be estimated using a number of different techniques: SensOrLoc (Sensitivity, Orientation and Location), comparison to synthetics (Ekstrom and Busby, 2008), or by way of magnetic compass. Current methods for finding relative station azimuths are unable to do so with arbitrary precision quickly because of limitations in the algorithms (e.g. grid search methods). Furthermore, in order to determine instrument orientations during station visits, it is critical that any analysis software be easily run on a large number of different computer platforms and the results be obtained quickly while on site. We developed a new technique for estimating relative sensor azimuths by inverting for the orientation with the maximum correlation to a reference instrument, using a non-linear parameter estimation routine. By making use of overlapping windows, we are able to make multiple azimuth estimates, which helps to identify the confidence of our azimuth estimate, even when the signal-to-noise ratio (SNR) is low. Finally, our algorithm has been written as a stand-alone, platform independent, Java software package with a graphical user interface for reading and selecting data segments to be analyzed.
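A minimal sketch of the correlation-maximization idea follows (in Python rather than the authors' Java package): rotate the test sensor's horizontal components through an angle and maximize correlation with the reference north component, using a bounded 1-D optimizer so precision is not limited by grid spacing. In practice this would be repeated over overlapping windows to gauge confidence; the synthetic signals and 30-degree misorientation are assumptions:

```python
# Sketch: estimate a sensor's relative azimuth by maximizing correlation
# between its rotated horizontals and a reference sensor's north component.
import numpy as np
from scipy.optimize import minimize_scalar

def relative_azimuth(ref_n, test_n, test_e):
    """Angle (degrees) rotating the test horizontals onto the reference north."""
    def neg_corr(theta):
        rot = np.cos(theta)*test_n - np.sin(theta)*test_e
        return -np.corrcoef(ref_n, rot)[0, 1]
    res = minimize_scalar(neg_corr, bounds=(0.0, 2*np.pi), method="bounded")
    return np.degrees(res.x)

# synthetic check: a sensor whose axes are rotated 30 degrees from the reference
t = np.linspace(0.0, 100.0, 4000)
n, e = np.sin(0.3*t), 0.5*np.cos(0.21*t)          # "true" ground motion
a = np.radians(30.0)
test_n = np.cos(a)*n + np.sin(a)*e                # sensor-frame components
test_e = -np.sin(a)*n + np.cos(a)*e
print(relative_azimuth(n, test_n, test_e))        # ~30
```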
Yang, Yaowen; Divsholi, Bahador Sabet
2010-01-01
The electromechanical (EM) impedance technique using piezoelectric lead zirconate titanate (PZT) transducers for structural health monitoring (SHM) has attracted considerable attention in various engineering fields. In the conventional EM impedance technique, the EM admittance of a PZT transducer is used as a damage indicator. Statistical analysis methods such as root mean square deviation (RMSD) have been employed to associate the damage level with the changes in the EM admittance signatures, but it is difficult to determine the location of damage using such methods. This paper proposes a new approach by dividing the large frequency range (30–400 kHz) into sub-frequency intervals and calculating their respective RMSD values. The RMSD of the sub-frequency intervals (RMSD-S) is used to study the severity and location of damage. An experiment is carried out on a real-size concrete structure subjected to artificial damage. It is observed that damage close to the PZT changes the high-frequency-range RMSD-S significantly, while damage far away from the PZT changes the RMSD-S in the low frequency range significantly. The relationship between the frequency range and the PZT sensing region is also presented. Finally, a damage identification scheme is proposed to estimate the location and severity of damage in concrete structures. PMID:22163548
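The RMSD-S computation itself is straightforward; a minimal sketch under assumed (synthetic) admittance signatures:

```python
# RMSD-S: the RMSD of admittance signatures computed per sub-frequency
# interval. The baseline/damaged signatures below are synthetic.
import numpy as np

def rmsd_subbands(freq, g_base, g_dam, n_bands=10):
    edges = np.linspace(freq.min(), freq.max(), n_bands + 1)
    vals = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (freq >= lo) & (freq <= hi)
        vals.append(100.0*np.sqrt(np.sum((g_dam[m] - g_base[m])**2)
                                  / np.sum(g_base[m]**2)))
    return edges, np.array(vals)

f = np.linspace(30e3, 400e3, 2000)                  # 30-400 kHz sweep
base = 2.0 + np.sin(f/1e4)                          # baseline signature
dam = base + 0.05*np.exp(-((f - 350e3)/3e4)**2)     # change near the PZT
edges, rmsd_s = rmsd_subbands(f, base, dam)
print(rmsd_s.argmax())   # the largest RMSD-S falls in an upper sub-band
```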
Mid-Piacenzian sea surface temperature record from ODP Site 1115 in the western equatorial Pacific
Stoll, Danielle
2010-01-01
Planktic foraminifer assemblages and alkenone unsaturation ratios have been analyzed for the mid-Piacenzian (3.3 to 2.9 Ma) section of Ocean Drilling Program (ODP) Site 1115B, located in the western equatorial Pacific off the coast of New Guinea. Cold and warm season sea surface temperature (SST) estimates were determined using a modern analog technique. ODP Site 1115 is located just south of the transition between the planktic foraminifer tropical and subtropical faunal provinces and approximates the southern boundary of the western equatorial Pacific (WEP) warm pool. Comparison of the faunal and alkenone SST estimates (presented here) with an existing nannofossil climate proxy shows similar trends. Results of this analysis show increased seasonal variability during the middle of the sampled section (3.22 to 3.10 Ma), suggesting a possible northward migration of both the subtropical faunal province and the southern boundary of the WEP warm pool.
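For reference, a modern analog technique reduces to a nearest-neighbor average: compute a dissimilarity (commonly the squared chord distance) between the fossil assemblage and a modern calibration set, then average the SSTs of the k closest analogs. The sketch below uses random stand-in assemblages, not the Site 1115 data:

```python
# Minimal modern-analog-technique sketch with squared chord distance.
import numpy as np

def mat_sst(fossil, modern, modern_sst, k=5):
    """fossil: (s,) proportions; modern: (n, s) proportions; modern_sst: (n,)."""
    d = ((np.sqrt(modern) - np.sqrt(fossil))**2).sum(axis=1)  # squared chord
    best = np.argsort(d)[:k]                                  # k closest analogs
    return modern_sst[best].mean()

# toy calibration set: 40 modern core-top samples, 6 species
rng = np.random.default_rng(3)
modern = rng.dirichlet(np.ones(6), size=40)
modern_sst = rng.uniform(18.0, 30.0, size=40)
fossil = modern[7]                       # a fossil sample resembling sample 7
print(mat_sst(fossil, modern, modern_sst))   # near modern_sst[7] and analogs
```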
Looking inside the microseismic cloud using seismic interferometry
NASA Astrophysics Data System (ADS)
Matzel, E.; Rhode, A.; Morency, C.; Templeton, D. C.; Pyle, M. L.
2015-12-01
Microseismicity provides a direct means of measuring the physical characteristics of active tectonic features such as fault zones. Thousands of microquakes are often associated with an active site. This cloud of microseismicity helps define the tectonically active region. By processing these data with novel geophysical techniques, we can isolate the energy sensitive to the faulting region itself. The virtual seismometer method (VSM) is a technique of seismic interferometry that provides precise estimates of the Green's function (GF) between earthquakes. In many ways the converse of ambient noise correlation, it is very sensitive to the source parameters (location, mechanism and magnitude) and to the Earth structure in the source region. In a region with 1000 microseisms, we can calculate roughly 500,000 waveforms sampling the active zone. At the same time, VSM collapses the computation domain down to the size of the cloud of microseismicity, often by 2-3 orders of magnitude. In simple terms VSM involves correlating the waveforms from a pair of events recorded at an individual station and then stacking the results over all stations to obtain the final result. In the far-field, when most of the stations in a network fall along a line between the two events, the result is an estimate of the GF between the two, modified by the source terms. In this geometry each earthquake is effectively a virtual seismometer recording all the others. When applied to microquakes, this alignment is often not met, and we also need to address the effects of the geometry between the two microquakes relative to each seismometer. Nonetheless, the technique is quite robust, and highly sensitive to the microseismic cloud. Using data from the Salton Sea geothermal region, we demonstrate the power of the technique, illustrating our ability to scale the technique from the far-field, where sources are well separated, to the near field where their locations fall within each other's uncertainty ellipse. VSM provides better illumination of the complex subsurface by generating precise, high frequency estimates of the GF and resolution of seismic properties between every pair of events. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
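In its simplest form, VSM is a correlate-and-stack operation over stations. The sketch below omits the windowing, normalization and geometry corrections discussed above and uses synthetic records, but it shows the core computation:

```python
# Core of VSM: correlate the records of two events at each station and
# stack over stations. Windowing, normalization and geometry corrections
# are omitted; the records are synthetic.
import numpy as np
from scipy.signal import fftconvolve

def virtual_seismometer(wave_a, wave_b):
    """wave_a, wave_b: (n_stations, n_samples) records of events A and B."""
    stack = sum(fftconvolve(a, b[::-1], mode="full")
                for a, b in zip(wave_a, wave_b))
    return stack/len(wave_a)

# toy data: event B arrives 0.5 s after event A at every station
fs, n = 100, 1024
rng = np.random.default_rng(2)
wa = rng.standard_normal((5, n))
wb = np.roll(wa, 50, axis=1)               # crude 50-sample delay
stack = virtual_seismometer(wa, wb)
delay = ((n - 1) - stack.argmax())/fs
print(delay)                               # -> 0.5 s
```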
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucas, Donald D.; Gowardhan, Akshay; Cameron-Smith, Philip
2015-08-08
Here, a computational Bayesian inverse technique is used to quantify the effects of meteorological inflow uncertainty on tracer transport and source estimation in a complex urban environment. We estimate a probability distribution of meteorological inflow by comparing wind observations to Monte Carlo simulations from the Aeolus model. Aeolus is a computational fluid dynamics model that simulates atmospheric and tracer flow around buildings and structures at meter-scale resolution. Uncertainty in the inflow is propagated through forward and backward Lagrangian dispersion calculations to determine the impact on tracer transport and the ability to estimate the release location of an unknown source. Our uncertainty methods are compared against measurements from an intensive observation period during the Joint Urban 2003 tracer release experiment conducted in Oklahoma City.
Breast surface estimation for radar-based breast imaging systems.
Williams, Trevor C; Sill, Jeff M; Fear, Elise C
2008-06-01
Radar-based microwave breast-imaging techniques typically require the antennas to be placed at a certain distance from or on the breast surface. This requires prior knowledge of the breast location, shape, and size. The method proposed in this paper for obtaining this information is based on a modified tissue sensing adaptive radar algorithm. First, a breast surface detection scan is performed. Data from this scan are used to localize the breast by creating an estimate of the breast surface. If required, the antennas may then be placed at specified distances from the breast surface for a second tumor-sensing scan. This paper introduces the breast surface estimation and antenna placement algorithms. Surface estimation and antenna placement results are demonstrated on three-dimensional breast models derived from magnetic resonance images.
Structure identification within a transitioning swept-wing boundary layer
NASA Astrophysics Data System (ADS)
Chapman, Keith Lance
1997-08-01
Extensive measurements are made in a transitioning swept-wing boundary layer using hot-film, hot-wire and cross-wire anemometry. The crossflow-dominated flow contains stationary vortices that breakdown near mid-chord. The most amplified vortex wavelength is forced by the use of artificial roughness elements near the leading edge. Two-component velocity and spanwise surface shear-stress correlation measurements are made at two constant chord locations, before and after transition. Streamwise surface shear stresses are also measured through the entire transition region. Correlation techniques are used to identify stationary structures in the laminar regime and coherent structures in the turbulent regime. Basic techniques include observation of the spatial correlations and the spatially distributed auto-spectra. The primary and secondary instability mechanisms are identified in the spectra in all measured fields. The primary mechanism is seen to grow, cause transition and produce large-scale turbulence. The secondary mechanism grows through the entire transition region and produces the small-scale turbulence. Advanced techniques use linear stochastic estimation (LSE) and proper orthogonal decomposition (POD) to identify the spatio-temporal evolutions of structures in the boundary layer. LSE is used to estimate the instantaneous velocity fields using temporal data from just two spatial locations and the spatial correlations. Reference locations are selected using maximum RMS values to provide the best available estimates. POD is used to objectively determine modes characteristic of the measured flow based on energy. The stationary vortices are identified in the first laminar modes of each velocity component and shear component. Experimental evidence suggests that neighboring vortices interact and produce large coherent structures with spanwise periodicity at double the stationary vortex wavelength. An objective transition region detection method is developed using streamwise spatial POD solutions which isolate the growth of the primary and secondary instability mechanisms in the first and second modes, respectively. Temporal evolutions of dominant POD modes in all measured fields are calculated. These scalar POD coefficients contain the integrated characteristics of the entire field, greatly reducing the amount of data to characterize the instantaneous field. These modes may then be used to train future flow control algorithms based on neural networks.
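Of the two advanced techniques, POD is the more compact to illustrate: with time snapshots as matrix rows, a singular value decomposition yields energy-ranked spatial modes and their temporal coefficients. The two-structure synthetic field below is purely illustrative and not the swept-wing data:

```python
# Minimal snapshot-POD sketch via the SVD: rows are time snapshots of a
# fluctuating field; singular vectors give temporal coefficients and
# energy-ranked spatial modes.
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 64)                    # spanwise positions
t = np.linspace(0.0, 10.0, 500)[:, None]         # snapshot times
# two "structures": a dominant mode and a weaker one at half the wavelength
field = (np.sin(2*np.pi*x)*np.cos(3.0*t)
         + 0.3*np.sin(4*np.pi*x)*np.cos(7.0*t)
         + 0.05*rng.standard_normal((500, 64)))

u, s, vt = np.linalg.svd(field - field.mean(axis=0), full_matrices=False)
energy = s**2/np.sum(s**2)
print(energy[:3])          # the first mode carries most of the energy
coeffs = u[:, :2]*s[:2]    # temporal evolution of the two dominant POD modes
```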
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lappin, A.R.; VanBuskirk, R.G.; Enniss, D.O.
1982-03-01
Thermal-conductivity and bulk-property measurements were made on welded and nonwelded silicic tuffs from the upper portion of Hole USW-G1, located near the southwestern margin of the Nevada Test Site. Bulk-property measurements were made by standard techniques. Thermal conductivities were measured at temperatures as high as 280°C, confining pressures to 10 MPa, and pore pressures to 1.5 MPa. Extrapolation of measured saturated conductivities to zero porosity suggests that the matrix conductivity of both zeolitized and devitrified tuffs is independent of stratigraphic position, depth, and probably location. This fact allows development of a thermal-conductivity stratigraphy for the upper portion of Hole G1. Estimates of saturated conductivities of zeolitized nonwelded tuffs and devitrified tuffs below the water table appear most reliable. Estimated conductivities of saturated densely welded devitrified tuffs above the water table are less reliable, due to both internal complexity and the limited data presently available. Estimation of the conductivity of dewatered tuffs requires use of different air thermal conductivities in devitrified and zeolitized samples. Estimated effects of in-situ fracturing generally appear negligible.
Multiple-camera/motion stereoscopy for range estimation in helicopter flight
NASA Technical Reports Server (NTRS)
Smith, Phillip N.; Sridhar, Banavar; Suorsa, Raymond E.
1993-01-01
Detecting obstacles and planning obstacle-free flight paths during low-altitude helicopter flight is desirable to aid the pilot, improve safety, and reduce pilot workload. Computer vision techniques provide an attractive method of obstacle detection and range estimation for objects within a large field of view ahead of the helicopter. Previous research has had considerable success using an image sequence from a single moving camera to solve this problem. The major limitations of single-camera approaches are that no range information can be obtained near the instantaneous direction of motion or in the absence of motion. These limitations can be overcome through the use of multiple cameras. This paper presents a hybrid motion/stereo algorithm which allows range refinement through recursive range estimation while avoiding loss of range information in the direction of travel. A feature-based approach is used to track objects between image frames. An extended Kalman filter combines knowledge of the camera motion and measurements of a feature's image location to recursively estimate the feature's range and to predict its location in future images. Performance of the algorithm is illustrated using an image sequence, motion information, and independent range measurements from a low-altitude helicopter flight experiment.
Ara, Perzila; Cheng, Shaokoon; Heimlich, Michael; Dutkiewicz, Eryk
2015-01-01
Recent developments in capsule endoscopy have highlighted the need for accurate techniques to estimate the location of a capsule endoscope. Highly accurate location estimation of a capsule endoscope in the gastrointestinal (GI) tract, in the range of several millimeters, is a challenging task. This is mainly because the radio-frequency signals encounter high loss and a highly dynamic channel propagation environment. Therefore, an accurate path-loss model is required for the development of accurate localization algorithms. This paper presents an in-body path-loss model for the human abdomen region at 2.4 GHz. To develop the path-loss model, electromagnetic simulations using the Finite-Difference Time-Domain (FDTD) method were carried out on two different anatomical human models. A mathematical expression for the path-loss model was proposed based on analysis of the measured loss at different capsule locations inside the small intestine. The proposed path-loss model is a good approximation for modeling in-body RF propagation, since real measurements are largely infeasible in capsule endoscopy subjects.
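A common functional form for such models is the log-distance law PL(d) = PL0 + 10 n log10(d/d0); whether this matches the authors' exact expression is not stated here, so the sketch below, with invented sample losses, should be read as a generic illustration of fitting that form by least squares:

```python
# Fit a log-distance path-loss model PL(d) = PL0 + 10*n*log10(d/d0) by
# least squares. Distances and losses are invented sample values.
import numpy as np

def fit_path_loss(d, pl, d0=0.05):
    """d: distances (m); pl: losses (dB); returns (PL0 at d0, exponent n)."""
    A = np.column_stack([np.ones_like(d), 10.0*np.log10(d/d0)])
    coef, *_ = np.linalg.lstsq(A, pl, rcond=None)
    return coef[0], coef[1]

# illustrative in-body samples: capsule-to-receiver distances and losses
d = np.array([0.05, 0.08, 0.10, 0.14, 0.18])
pl = np.array([52.0, 63.1, 68.0, 75.9, 81.8])
pl0, n = fit_path_loss(d, pl)
print(pl0, n)   # intercept at the reference distance and path-loss exponent
```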
NASA Astrophysics Data System (ADS)
Rajaona, Harizo; Septier, François; Armand, Patrick; Delignon, Yves; Olry, Christophe; Albergel, Armand; Moussafir, Jacques
2015-12-01
In the eventuality of an accidental or intentional atmospheric release, the reconstruction of the source term using measurements from a set of sensors is an important and challenging inverse problem. A rapid and accurate estimation of the source allows faster and more efficient action for first-response teams, in addition to providing better damage assessment. This paper presents a Bayesian probabilistic approach to estimate the location and the temporal emission profile of a pointwise source. The release rate is evaluated analytically by using a Gaussian assumption on its prior distribution, and is enhanced with a positivity constraint to improve the estimation. The source location is obtained by means of an advanced iterative Monte-Carlo technique called Adaptive Multiple Importance Sampling (AMIS), which uses a recycling process at each iteration to accelerate its convergence. The proposed methodology is tested using synthetic and real concentration data in the framework of the Fusion Field Trials 2007 (FFT-07) experiment. The quality of the obtained results is comparable to that of the Markov Chain Monte Carlo (MCMC) algorithm, a popular Bayesian method used for source estimation. Moreover, the adaptive processing of the AMIS provides better sampling efficiency by reusing all the generated samples.
Bingham, Adrian; Arjunan, Sridhar P; Kumar, Dinesh K
2017-07-01
In this study we investigated a technique for estimating the progression of localized muscle fatigue. This technique measures the dependence between motor units using high-density surface electromyogram (HD-sEMG) recordings and is based on the Normalized Mutual Information (NMI) measure. The NMI between every pair combination of the electrode array is computed to measure the interactions between electrodes. Participants in the experiment had an array of 64 electrodes (16 by 4) placed over the tibialis anterior (TA) of their dominant leg such that the columns of the array ran parallel with the muscle fibers. The HD-sEMG was recorded whilst the participants maintained an isometric dorsiflexion with their dominant foot until task failure at 40% and 80% of their maximum voluntary contraction (MVC). The interactions between different locations over the muscle were computed using the recorded HD-sEMG signals. The results show that the average interactions between various locations over the TA significantly increased during fatigue at both levels of contraction. This can be attributed to the dependence in the motor units.
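A histogram-based NMI estimate, applied to every electrode pair, can be sketched as follows; the array size and the injected dependence between two channels are synthetic assumptions:

```python
# Histogram-based normalized mutual information, NMI = 2*I(X;Y)/(H(X)+H(Y)),
# computed over every electrode pair of a synthetic recording.
import numpy as np

def nmi(x, y, bins=32):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    hxy = -np.sum(pxy[pxy > 0]*np.log(pxy[pxy > 0]))
    hx = -np.sum(px[px > 0]*np.log(px[px > 0]))
    hy = -np.sum(py[py > 0]*np.log(py[py > 0]))
    return 2.0*(hx + hy - hxy)/(hx + hy)     # I = Hx + Hy - Hxy

# pairwise NMI over a (channels x samples) electrode-array recording
rng = np.random.default_rng(5)
emg = rng.standard_normal((8, 5000))
emg[3] = 0.7*emg[2] + 0.3*emg[3]             # make two channels dependent
pairs = [(i, j) for i in range(8) for j in range(i + 1, 8)]
scores = {(i, j): nmi(emg[i], emg[j]) for (i, j) in pairs}
print(max(scores, key=scores.get))           # -> (2, 3)
```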
Radar QPE for hydrological design: Intensity-Duration-Frequency curves
NASA Astrophysics Data System (ADS)
Marra, Francesco; Morin, Efrat
2015-04-01
Intensity-duration-frequency (IDF) curves are widely used in flood risk management since they provide an easy link between the characteristics of a rainfall event and the probability of its occurrence. They are estimated by analyzing the extreme values of rainfall records, usually based on rain gauge data. This point-based approach raises two issues: first, hydrological design applications generally need IDF information for the entire catchment rather than a point; second, the representativeness of point measurements decreases with distance from the measurement location, especially in regions characterized by steep climatological gradients. Weather radar, providing high-resolution distributed rainfall estimates over wide areas, has the potential to overcome these issues. Two objections usually restrain this approach: (i) the short length of data records and (ii) the reliability of quantitative precipitation estimation (QPE) of the extremes. This work explores the potential use of weather radar estimates for the identification of IDF curves by means of a long radar archive and a combined physical and quantitative adjustment of radar estimates. The Shacham weather radar, located in the eastern Mediterranean area (Tel Aviv, Israel), has archived data since 1990, providing rainfall estimates for 23 years over a region characterized by strong climatological gradients. Radar QPE is obtained by correcting the effects of pointing errors, ground echoes, beam blockage, attenuation and vertical variations of reflectivity. Quantitative accuracy is then ensured with a range-dependent bias adjustment technique, and the reliability of radar QPE is assessed by comparison with gauge measurements. IDF curves are derived from the radar data using the annual extremes method and compared with gauge-based curves. Results from 14 study cases will be presented, focusing on the effects of record length and QPE accuracy, exploring the potential application of radar IDF curves for ungauged locations and providing insights on the use of radar QPE for hydrological design studies.
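A minimal gauge-style IDF computation, which radar QPE extends to every pixel, fits an extreme-value distribution to the annual maxima for each duration and inverts it at the desired return periods. The sketch below uses the Gumbel distribution, one common choice, and synthetic data spanning 23 years to echo the archive length above; none of it is the Shacham data:

```python
# Minimal IDF computation: fit a Gumbel distribution to annual-maximum
# intensities per duration and invert at the desired return periods.
import numpy as np
from scipy.stats import gumbel_r

def idf_point(annual_max, durations, return_periods):
    """annual_max: (n_years, n_durations) intensities (mm/h).
    Returns intensities with shape (n_return_periods, n_durations)."""
    out = np.empty((len(return_periods), len(durations)))
    for j in range(len(durations)):
        loc, scale = gumbel_r.fit(annual_max[:, j])
        for i, T in enumerate(return_periods):
            out[i, j] = gumbel_r.ppf(1.0 - 1.0/T, loc, scale)
    return out

# 23 synthetic years of annual-maximum intensity for 3 durations (hours)
rng = np.random.default_rng(6)
durations = [0.5, 1.0, 6.0]
am = np.column_stack([gumbel_r.rvs(40.0/d**0.6, 8.0/d**0.6, size=23,
                                   random_state=rng) for d in durations])
print(idf_point(am, durations, return_periods=[2, 10, 50]))
```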
Red-shouldered hawk occupancy surveys in central Minnesota, USA
Henneman, C.; McLeod, M.A.; Andersen, D.E.
2007-01-01
Forest-dwelling raptors are often difficult to detect because many species occur at low density or are secretive. Broadcasting conspecific vocalizations can increase the probability of detecting forest-dwelling raptors and has been shown to be an effective method for locating raptors and assessing their relative abundance. Recent advances in statistical techniques based on presence-absence data use probabilistic arguments to derive probability of detection when it is <1 and to provide a model and likelihood-based method for estimating proportion of sites occupied. We used these maximum-likelihood models with data from red-shouldered hawk (Buteo lineatus) call-broadcast surveys conducted in central Minnesota, USA, in 1994-1995 and 2004-2005. Our objectives were to obtain estimates of occupancy and detection probability 1) over multiple sampling seasons (yr), 2) incorporating within-season time-specific detection probabilities, 3) with call type and breeding stage included as covariates in models of probability of detection, and 4) with different sampling strategies. We visited individual survey locations 2-9 times per year, and estimates of both probability of detection (range = 0.28-0.54) and site occupancy (range = 0.81-0.97) varied among years. Detection probability was affected by inclusion of a within-season time-specific covariate, call type, and breeding stage. In 2004 and 2005 we used survey results to assess the effect that number of sample locations, double sampling, and discontinued sampling had on parameter estimates. We found that estimates of probability of detection and proportion of sites occupied were similar across different sampling strategies, and we suggest ways to reduce sampling effort in a monitoring program.
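The likelihood machinery behind such presence-absence estimates can be sketched for the basic single-season occupancy model (constant psi and p, no covariates), a simplification of the time- and covariate-dependent models used above:

```python
# Maximum-likelihood fit of the basic single-season occupancy model:
# psi = probability a site is occupied, p = per-visit detection probability.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def occupancy_mle(hist):
    """hist: (sites, visits) 0/1 detection histories. Returns (psi, p)."""
    det = hist.sum(axis=1)          # detections per site
    K = hist.shape[1]               # visits per site
    def nll(theta):
        psi, p = expit(theta)       # logit-scale parameters -> (0, 1)
        L = np.where(det > 0,
                     psi * p**det * (1 - p)**(K - det),       # detected sites
                     psi * (1 - p)**K + (1 - psi))            # all-zero sites
        return -np.log(L).sum()
    res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
    return expit(res.x)

# simulate 60 sites, 4 broadcast visits, true psi = 0.85, p = 0.4
rng = np.random.default_rng(7)
z = rng.random(60) < 0.85                       # latent occupancy
hist = (rng.random((60, 4)) < 0.4) * z[:, None]
print(occupancy_mle(hist))                      # -> roughly [0.85, 0.4]
```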
Damage Diagnosis in Semiconductive Materials Using Electrical Impedance Measurements
NASA Technical Reports Server (NTRS)
Ross, Richard W.; Hinton, Yolanda L.
2008-01-01
Recent aerospace industry trends have resulted in an increased demand for real-time, effective techniques for in-flight structural health monitoring. A promising technique for damage diagnosis uses electrical impedance measurements of semiconductive materials. By applying a small electrical current into a material specimen and measuring the corresponding voltages at various locations on the specimen, changes in the electrical characteristics due to the presence of damage can be assessed. An artificial neural network uses these changes in electrical properties to provide an inverse solution that estimates the location and magnitude of the damage. The advantage of the electrical impedance method over other damage diagnosis techniques is that it uses the material as the sensor. Simple voltage measurements can be used instead of discrete sensors, resulting in a reduction in weight and system complexity. This research effort extends previous work by employing finite element method models to improve accuracy of complex models with anisotropic conductivities and by enhancing the computational efficiency of the inverse techniques. The paper demonstrates a proof of concept of a damage diagnosis approach using electrical impedance methods and a neural network as an effective tool for in-flight diagnosis of structural damage to aircraft components.
Salinet, João L; Masca, Nicholas; Stafford, Peter J; Ng, G André; Schlindwein, Fernando S
2016-03-08
Areas with high-frequency activity within the atrium are thought to be 'drivers' of the rhythm in patients with atrial fibrillation (AF), and ablation of these areas seems to be an effective therapy in eliminating DF gradient and restoring sinus rhythm. Clinical groups have applied the traditional FFT-based approach to generate three-dimensional dominant frequency (3D DF) maps during electrophysiology (EP) procedures, but the literature is limited on the use of alternative spectral estimation techniques that can have better frequency resolution than FFT-based spectral estimation. Autoregressive (AR) model-based spectral estimation techniques, with emphasis on selection of appropriate sampling rate and AR model order, were implemented to generate high-density 3D DF maps of atrial electrograms (AEGs) in persistent atrial fibrillation (persAF). For each patient, 2048 simultaneous AEGs were recorded for 20.478-s-long segments in the left atrium (LA) and exported for analysis, together with their anatomical locations. After the DFs were identified using AR-based spectral estimation, they were colour coded to produce sequential 3D DF maps. These maps were systematically compared with maps found using the Fourier-based approach. 3D DF maps can be obtained using AR-based spectral estimation after AEG downsampling (DS), and the resulting maps are very similar to those obtained using FFT-based spectral estimation (mean agreement 90.23%). There were no significant differences between AR techniques (p = 0.62). The processing time for the AR-based approach was considerably shorter (from 5.44 to 5.05 s) when lower sampling frequencies and model order values were used. Higher levels of DS presented higher rates of DF agreement (sampling frequency of 37.5 Hz). We have demonstrated the feasibility of using AR spectral estimation methods for producing 3D DF maps and characterised their differences from the maps produced using the FFT technique, offering an alternative approach for 3D DF computation in human persAF studies.
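A minimal Yule-Walker AR dominant-frequency estimator for one downsampled AEG-like channel might look as follows; the model order, FFT length and toy signal are assumptions, not the paper's settings (only the 37.5 Hz rate and segment length echo the abstract):

```python
# Yule-Walker AR spectral estimate and dominant-frequency pick for one channel.
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_dominant_frequency(x, fs, order=12, nfft=2048):
    x = x - x.mean()
    # biased autocovariance at lags 0..order
    r = np.correlate(x, x, mode="full")[x.size - 1 : x.size + order] / x.size
    a = solve_toeplitz((r[:-1], r[:-1]), r[1:])       # AR coefficients
    freqs = np.fft.rfftfreq(nfft, 1.0/fs)
    denom = np.abs(np.fft.rfft(np.r_[1.0, -a], nfft))**2
    psd = (r[0] - np.dot(a, r[1:])) / denom           # sigma^2 / |A(f)|^2
    return freqs[np.argmax(psd)]

# toy AEG-like signal: 6 Hz activity plus noise, sampled at 37.5 Hz
rng = np.random.default_rng(8)
fs = 37.5
t = np.arange(0, 20.478, 1.0/fs)
x = np.sin(2*np.pi*6.0*t) + 0.5*rng.standard_normal(t.size)
print(ar_dominant_frequency(x, fs))   # ~6 Hz
```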
Uncertainty estimates of a GRACE inversion modelling technique over Greenland using a simulation
NASA Astrophysics Data System (ADS)
Bonin, Jennifer; Chambers, Don
2013-07-01
The low spatial resolution of GRACE causes leakage, where signals in one location spread out into nearby regions. Because of this leakage, using simple techniques such as basin averages may result in an incorrect estimate of the true mass change in a region. A fairly simple least squares inversion technique can be used to more specifically localize mass changes into a pre-determined set of basins of uniform internal mass distribution. However, the accuracy of these higher resolution basin mass amplitudes has not been determined, nor is it known how the distribution of the chosen basins affects the results. We use a simple 'truth' model over Greenland as an example case, to estimate the uncertainties of this inversion method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We determine that an appropriate level of smoothing (300-400 km) and process noise (0.30 cm² of water) gets the best results. The trends of the Greenland internal basins and Iceland can be reasonably estimated with this method, with average systematic errors of 3.5 cm yr⁻¹ per basin. The largest mass losses found from GRACE RL04 occur in the coastal northwest (-19.9 and -33.0 cm yr⁻¹) and southeast (-24.2 and -27.9 cm yr⁻¹), with small mass gains (+1.4 to +7.7 cm yr⁻¹) found across the northern interior. Acceleration of mass change is measurable at the 95 per cent confidence level in four northwestern basins, but not elsewhere in Greenland. Due to an insufficiently detailed distribution of basins across internal Canada, the trend estimates of Baffin and Ellesmere Islands are expected to be incorrect due to systematic errors caused by the inversion technique.
Estimation of wave phase speed and nearshore bathymetry from video imagery
Stockdon, H.F.; Holman, R.A.
2000-01-01
A new remote sensing technique based on video image processing has been developed for the estimation of nearshore bathymetry. The shoreward propagation of waves is measured using pixel intensity time series collected at a cross-shore array of locations using remotely operated video cameras. The incident band is identified, and the cross-spectral matrix is calculated for this band. The cross-shore component of wavenumber is found as the gradient in phase of the first complex empirical orthogonal function of this matrix. Water depth is then inferred from linear wave theory's dispersion relationship. Full bathymetry maps may be measured by collecting data in a large array composed of both cross-shore and longshore lines. Data are collected hourly throughout the day, and a stable, daily estimate of bathymetry is calculated from the median of the hourly estimates. The technique was tested using 30 days of hourly data collected at the SandyDuck experiment in Duck, North Carolina, in October 1997. Errors calculated as the difference between estimated depth and ground truth data show a mean bias of -35 cm (rms error = 91 cm). Expressed as a fraction of the true water depth, the mean percent error was 13% (rms error = 34%). Excluding the region of known wave nonlinearities over the bar crest, the accuracy of the technique improved, and the mean (rms) error was -20 cm (75 cm). Additionally, under low-amplitude swells (wave height H of about 1 m), the performance of the technique across the entire profile improved to 6% (29%) of the true water depth with a mean (rms) error of -12 cm (71 cm). Copyright 2000 by the American Geophysical Union.
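The depth-inversion step is a direct rearrangement of the linear dispersion relation omega^2 = g k tanh(k h), which has the closed form h = atanh(omega^2/(g k))/k whenever omega^2 < g k. A sketch with illustrative numbers:

```python
# Invert the linear dispersion relation omega^2 = g*k*tanh(k*h) for depth h.
import numpy as np

def depth_from_dispersion(omega, k, g=9.81):
    """omega: angular frequency (rad/s); k: cross-shore wavenumber (rad/m)."""
    ratio = omega**2/(g*k)
    if not 0 < ratio < 1:
        raise ValueError("no finite depth satisfies the dispersion relation")
    return np.arctanh(ratio)/k

# example: an 8 s incident wave with an estimated wavenumber of 0.1 rad/m
omega = 2*np.pi/8.0
print(depth_from_dispersion(omega, 0.1))   # inferred water depth, ~7.4 m
```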
Zhang, Kui; Wiener, Howard; Beasley, Mark; George, Varghese; Amos, Christopher I; Allison, David B
2006-08-01
Individual genome scans for quantitative trait loci (QTL) mapping often suffer from low statistical power and imprecise estimates of QTL location and effect. This lack of precision yields large confidence intervals for QTL location, which are problematic for subsequent fine mapping and positional cloning. In prioritizing areas for follow-up after an initial genome scan and in evaluating the credibility of apparent linkage signals, investigators typically examine the results of other genome scans of the same phenotype and informally update their beliefs about which linkage signals in their scan most merit confidence and follow-up via a subjective-intuitive integration approach. A method that acknowledges the wisdom of this general paradigm but formally borrows information from other scans to increase confidence in objectivity would be a benefit. We developed an empirical Bayes analytic method to integrate information from multiple genome scans. The linkage statistic obtained from a single genome scan study is updated by incorporating statistics from other genome scans as prior information. This technique does not require that all studies have an identical marker map or a common estimated QTL effect. The updated linkage statistic can then be used for the estimation of QTL location and effect. We evaluate the performance of our method by using extensive simulations based on actual marker spacing and allele frequencies from available data. Results indicate that the empirical Bayes method can account for between-study heterogeneity, estimate the QTL location and effect more precisely, and provide narrower confidence intervals than results from any single individual study. We also compared the empirical Bayes method with a method originally developed for meta-analysis (a closely related but distinct purpose). In the face of marked heterogeneity among studies, the empirical Bayes method outperforms the comparator.
Alexander, Terry W.; Wilson, Gary L.
1995-01-01
A generalized least-squares regression technique was used to relate the 2- to 500-year flood discharges from 278 selected streamflow-gaging stations to statistically significant basin characteristics. The regression relations (estimating equations) were defined for three hydrologic regions (I, II, and III) in rural Missouri. Ordinary least-squares regression analyses indicate that drainage area (Regions I, II, and III) and main-channel slope (Regions I and II) are the only basin characteristics needed for computing the 2- to 500-year design-flood discharges at gaged or ungaged stream locations. The resulting generalized least-squares regression equations provide a technique for estimating the 2-, 5-, 10-, 25-, 50-, 100-, and 500-year flood discharges on unregulated streams in rural Missouri. The regression equations for Regions I and II were developed from stream-flow-gaging stations with drainage areas ranging from 0.13 to 11,500 square miles and 0.13 to 14,000 square miles, and main-channel slopes ranging from 1.35 to 150 feet per mile and 1.20 to 279 feet per mile. The regression equations for Region III were developed from streamflow-gaging stations with drainage areas ranging from 0.48 to 1,040 square miles. Standard errors of estimate for the generalized least-squares regression equations in Regions I, II, and III ranged from 30 to 49 percent.
Lee, Karl K.; Risley, John C.
2002-03-19
Precipitation-runoff models, base-flow-separation techniques, and stream gain-loss measurements were used to study recharge and ground-water surface-water interaction as part of a study of the ground-water resources of the Willamette River Basin. The study was a cooperative effort between the U.S. Geological Survey and the State of Oregon Water Resources Department. Precipitation-runoff models were used to estimate the water budget of 216 subbasins in the Willamette River Basin. The models were also used to compute long-term average recharge and base flow. Recharge and base-flow estimates will be used as input to a regional ground-water flow model, within the same study. Recharge and base-flow estimates were made using daily streamflow records. Recharge estimates were made at 16 streamflow-gaging-station locations and were compared to recharge estimates from the precipitation-runoff models. Base-flow separation methods were used to identify the base-flow component of streamflow at 52 currently operated and discontinued streamflow-gaging-station locations. Stream gain-loss measurements were made on the Middle Fork Willamette, Willamette, South Yamhill, Pudding, and South Santiam Rivers, and were used to identify and quantify gaining and losing stream reaches both spatially and temporally. These measurements provide further understanding of ground-water/surface-water interactions.
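Base-flow separation from daily streamflow records is often done with a recursive digital filter; the Lyne-Hollick filter below is one widely used example, shown generically since the report's exact separation method is not specified in this abstract. The parameters and hydrograph are illustrative:

```python
# One widely used recursive digital base-flow filter (Lyne-Hollick):
# quickflow f[t] = alpha*f[t-1] + (1+alpha)/2*(q[t]-q[t-1]); base flow is
# q - f, applied in alternating forward/backward passes.
import numpy as np

def baseflow_lyne_hollick(q, alpha=0.925, passes=3):
    def one_pass(x):
        f = np.zeros_like(x)
        for t in range(1, x.size):
            f[t] = max(alpha*f[t-1] + 0.5*(1 + alpha)*(x[t] - x[t-1]), 0.0)
        return np.clip(x - f, 0.0, None)      # base flow bounded below by 0
    b = q.astype(float)
    for i in range(passes):
        b = one_pass(b) if i % 2 == 0 else one_pass(b[::-1])[::-1]
    return b

# synthetic daily hydrograph: slow recession plus two storm peaks
t = np.arange(365.0)
q = (5.0 + 3.0*np.exp(-t/120.0)
     + 20.0*np.exp(-((t - 60.0)/4.0)**2)
     + 15.0*np.exp(-((t - 200.0)/5.0)**2))
bf = baseflow_lyne_hollick(q)
print(bf.sum()/q.sum())   # base-flow index (fraction of total flow)
```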
NASA Astrophysics Data System (ADS)
Horstmann, T.; Harrington, R. M.; Cochran, E.; Shelly, D. R.
2013-12-01
Observations of non-volcanic tremor have become ubiquitous in recent years. In spite of the abundance of observations, locating tremor remains a difficult task because of the lack of distinctive phase arrivals. Here we use time-reverse-imaging techniques that do not require identifying phase arrivals to locate individual low-frequency earthquakes (LFEs) within tremor episodes on the San Andreas fault near Cholame, California. Time windows of 1.5-second duration containing LFEs are selected from continuously recorded waveforms of the local seismic network filtered between 1-5 Hz. We propagate the time-reversed seismic signal back through the subsurface using a staggered-grid finite-difference code. Assuming all rebroadcasted waveforms result from similar wave fields at the source origin, we search for wave field coherence in time and space to obtain the source location and origin time where the constructive interference is a maximum. We use an interpolated velocity model with a grid spacing of 100 m and a 5 ms time step to calculate the relative curl field energy amplitudes for each rebroadcasted seismogram every 50 ms for each grid point in the model. Finally, we perform a grid search for coherency in the curl field using a sliding time window, taking the absolute value of the correlation coefficient to account for differences in radiation pattern. The highest median cross-correlation coefficient value at a given grid point indicates the source location for the rebroadcasted event. Horizontal location errors based on the spatial extent of the highest 10% of cross-correlation coefficients are on the order of 4 km, and vertical errors on the order of 3 km. Furthermore, a test of the method using earthquake data shows that the method produces an identical hypocentral location (within errors) as that obtained by standard ray-tracing methods. We also compare the event locations to a LFE catalog that locates the LFEs from stacked waveforms of repeated LFEs identified by cross-correlation techniques [Shelly and Hardebeck, 2010]. The LFE catalog uses stacks of at least several hundred templates to identify phase arrivals used to estimate the location. We find epicentral locations for individual LFEs based on the time-reverse-imaging technique are within ~4 km relative to the LFE catalog [Shelly and Hardebeck, 2010]. LFEs locate between 15-25 km depth, similar to the focal depths found in previous studies of the region. Overall, the method can provide robust locations of individual LFEs without identifying and stacking hundreds of LFE templates; the locations are also more accurate than envelope location methods, which have errors on the order of tens of km [Horstmann et al., 2013].
Power strain imaging based on vibro-elastography techniques
NASA Astrophysics Data System (ADS)
Wen, Xu; Salcudean, S. E.
2007-03-01
This paper describes a new ultrasound elastography technique, power strain imaging, based on vibro-elastography (VE) techniques. With this method, tissue is compressed by a vibrating actuator driven by low-pass or band-pass filtered white noise, typically in the 0-20 Hz range. Tissue displacements at different spatial locations are estimated by correlation-based approaches on the raw ultrasound radio frequency signals and recorded in time sequences. The power spectra of these time sequences are computed by Fourier spectral analysis techniques. As the average of the power spectrum is proportional to the squared amplitude of the tissue motion, the square root of the average power over the range of excitation frequencies is used as a measure of the tissue displacement. Tissue strain is then determined by least-squares estimation of the gradient of the displacement field. The computation of the power spectra of the time sequences can be implemented efficiently by using Welch's periodogram method with moving windows or with accumulative windows with a forgetting factor. Compared to the transfer function estimation originally used in VE, the computation of cross spectral densities is not needed, which saves both memory and computational time. Phantom experiments demonstrate that the proposed method produces stable and operator-independent strain images with high signal-to-noise ratio in real time. This approach has also been tested on patient data from the prostate region, and the results are encouraging.
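The displacement-power measure described above can be sketched with Welch's method directly; strain then follows from a least-squares gradient over a sliding window of locations. The sampling rate, window sizes and synthetic data below are assumptions, not the paper's settings:

```python
# Sketch: per-location displacement measure as the square root of the mean
# Welch PSD over the excitation band, then strain as a least-squares
# (polyfit) gradient over a sliding window of depths.
import numpy as np
from scipy.signal import welch

def power_displacement(disp_ts, fs, band=(0.0, 20.0)):
    """disp_ts: (locations, samples) displacement time sequences."""
    f, pxx = welch(disp_ts, fs=fs, nperseg=256, axis=-1)
    sel = (f >= band[0]) & (f <= band[1])
    return np.sqrt(pxx[:, sel].mean(axis=-1))

def strain_from_displacement(u, dz, win=5):
    half = win//2
    strain = np.zeros_like(u)
    z = np.arange(u.size)*dz
    for i in range(half, u.size - half):
        s = slice(i - half, i + half + 1)
        strain[i] = np.polyfit(z[s], u[s], 1)[0]   # local LS gradient
    return strain

# synthetic: softer (deeper) tissue moves more under the same excitation
rng = np.random.default_rng(9)
fs, n_loc = 1000, 40
amp = np.linspace(1.0, 2.0, n_loc)[:, None]
disp = amp*rng.standard_normal((n_loc, 4096))
u = power_displacement(disp, fs)
print(strain_from_displacement(u, dz=0.5e-3)[5])   # positive strain estimate
```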
North Alabama Lightning Mapping Array (LMA): VHF Source Retrieval Algorithm and Error Analyses
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Solakiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J.; Bailey, J.; Krider, E. P.; Bateman, M. G.; Boccippio, D.
2003-01-01
Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA Marshall Space Flight Center (MSFC) and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix Theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50 ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results. However, for many source locations, the Curvature Matrix Theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.
NASA Astrophysics Data System (ADS)
Sato, Mitsuteru; Mihara, Masahiro; Ushio, Tomoo; Morimoto, Takeshi; Kikuchi, Hiroshi; Adachi, Toru; Suzuki, Makoto; Yamazaki, Atsushi; Takahashi, Yukihiro
2015-04-01
JEM-GLIMS has been conducting comprehensive nadir observations of lightning and TLEs using optical instruments and electromagnetic wave receivers since November 2012. For the period between November 20, 2012 and November 30, 2014, JEM-GLIMS succeeded in detecting 5,048 lightning events. Of these, 567 were TLEs, mostly elves. To identify sprite occurrences in the transient optical flash data, it is necessary to perform the following analysis: (1) a subtraction of the appropriately scaled wideband camera data from the narrowband camera data; (2) a calculation of the intensity ratio between different spectrophotometer channels; and (3) an estimation of the polarization and CMC of the parent CG discharges using ground-based ELF measurement data. From a synthetic comparison of these results, it is confirmed that JEM-GLIMS succeeded in detecting sprite events. The VHF receiver (VITF) onboard JEM-GLIMS uses two patch-type antennas separated by a 1.6-m interval and can detect VHF pulses emitted by lightning discharges in the 70-100 MHz frequency range. Using both an interferometric technique and a group delay technique, we can estimate the source locations of VHF pulses excited by lightning discharges. In the event detected at 06:41:15.68565 UT on June 12, 2014 over central North America, the sprite was displaced horizontally by 20 km from the peak location of the parent lightning emission. In this event, a total of 180 VHF pulses were simultaneously detected by VITF. From detailed analysis of these VHF pulse data, it is found that the majority of the source locations lay near the area of the dim lightning emission, which may imply that the VHF pulses were associated with the in-cloud lightning current. At the presentation, we will show a detailed comparison between the spatiotemporal characteristics of the sprite emission and the source locations of VHF pulses excited by the parent lightning discharges of sprites.
Comparison of Sequential and Variational Data Assimilation
NASA Astrophysics Data System (ADS)
Alvarado Montero, Rodolfo; Schwanenberg, Dirk; Weerts, Albrecht
2017-04-01
Data assimilation is a valuable tool to improve model state estimates by combining measured observations with model simulations. It has recently gained significant attention due to its potential in using remote sensing products to improve operational hydrological forecasts and for reanalysis purposes. This attention has been supported by the application of sequential techniques such as the Ensemble Kalman Filter, which require no additional features within the modeling process, i.e. they can use arbitrary black-box models. Alternatively, variational techniques rely on optimization algorithms to minimize a pre-defined objective function. This function describes the trade-off between the amount of noise introduced into the system and the mismatch between simulated and observed variables. While sequential techniques have been commonly applied to hydrological processes, variational techniques are seldom used. We believe this is mainly attributable to the required computation of first-order sensitivities by algorithmic differentiation techniques and related model enhancements, but also to a lack of comparisons between the two techniques. We contribute to filling this gap and present the results from the assimilation of streamflow data in two basins located in Germany and Canada. The assimilation introduces noise to precipitation and temperature to produce better initial estimates of an HBV model. The results are computed for a hindcast period and assessed using lead-time performance metrics. The study concludes with a discussion of the main features of each technique and their advantages/disadvantages in hydrological applications.
IDENTIFICATION OF MEMBERS IN THE CENTRAL AND OUTER REGIONS OF GALAXY CLUSTERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Serra, Ana Laura; Diaferio, Antonaldo, E-mail: serra@ph.unito.it
2013-05-10
The caustic technique measures the mass of galaxy clusters in both their virial and infall regions and, as a byproduct, yields the list of cluster galaxy members. Here we use 100 galaxy clusters with mass M_200 ≥ 10^14 h^-1 M_Sun extracted from a cosmological N-body simulation of a ΛCDM universe to test the ability of the caustic technique to identify the cluster galaxy members. We identify the true three-dimensional members as the gravitationally bound galaxies. The caustic technique uses the caustic location in the redshift diagram to separate the cluster members from the interlopers. We apply the technique to mock catalogs containing 1000 galaxies in a field of view 12 h^-1 Mpc on a side at the cluster location. On average, this sample size roughly corresponds to 180 real galaxy members within 3r_200, similar to recent redshift surveys of cluster regions. The caustic technique yields a completeness, the fraction of identified true members, of f_c = 0.95 ± 0.03 within 3r_200. The contamination, the fraction of interlopers in the observed catalog of members, increases from f_i = 0.020 (+0.046/-0.015) at r_200 to f_i = 0.08 (+0.11/-0.05) at 3r_200. No other technique for the identification of the members of a galaxy cluster provides such large completeness and small contamination at these large radii. The caustic technique assumes spherical symmetry, and the asphericity of the cluster is responsible for most of the spread of the completeness and the contamination. By applying the technique to an approximately spherical system obtained by stacking the individual clusters, the spreads decrease by at least a factor of two. We finally estimate the cluster mass within 3r_200 after removing the interlopers: for individual clusters, the mass estimated with the virial theorem is unbiased and within 30% of the actual mass; this spread decreases to less than 10% for the spherically symmetric stacked cluster.
Challenges of model transferability to data-scarce regions (Invited)
NASA Astrophysics Data System (ADS)
Samaniego, L. E.
2013-12-01
Developing the ability to globally predict the movement of water on the land surface at spatial scales from 1 to 5 km constitutes one of the grand challenges in land surface modelling. Coping with this grand challenge implies that land surface models (LSM) should be able to make reliable predictions across locations and/or scales other than those used for parameter estimation. In addition, data scarcity and quality impose further difficulties in attaining reliable predictions of water and energy fluxes at the scales of interest. Current computational limitations also make it impossible to exhaustively investigate the parameter space of LSM over large domains (e.g. greater than half a million square kilometers). Addressing these challenges requires holistic approaches that integrate the best techniques available for parameter estimation, field measurements and remotely sensed data at their native resolutions. An attempt to systematically address these issues is the multiscale parameterisation technique (MPR), which links high-resolution land surface characteristics with effective model parameters. This technique requires a number of pedo-transfer functions and a much smaller set of global parameters (i.e. coefficients) to be inferred by calibration in gauged basins. The key advantage of this technique is the quasi-scale independence of the global parameters, which makes it possible to estimate global parameters at coarser spatial resolutions and then transfer them to (ungauged) areas and scales of interest. In this study we show the ability of this technique to reproduce the observed water fluxes and states over a wide range of climate and land surface conditions, ranging from humid to semiarid and from sparse to dense forested regions. Results of the transferability of global model parameters in space (from humid to semi-arid basins) and across scales (from coarser to finer) clearly indicate the robustness of this technique. Simulations with coarse data sets (e.g. EOBS forcing at 25x25 km2, FAO soil map 1:5000000) using parameters obtained with high-resolution information (REGNIE forcing at 1x1 km2, BUEK soil map 1:1000000) in different climatic regions indicate the potential of MPR for prediction in data-scarce regions. In this presentation, we will also discuss how the transferability of global model parameters across scales and locations helps to identify deficiencies in model structure and regionalization functions.
A progress report on the ARRA-funded geotechnical site characterization project
NASA Astrophysics Data System (ADS)
Martin, A. J.; Yong, A.; Stokoe, K.; Di Matteo, A.; Diehl, J.; Jack, S.
2011-12-01
For the past 18 months, the 2009 American Recovery and Reinvestment Act (ARRA) has funded geotechnical site characterizations at 189 seismographic station sites in California and the central U.S. This ongoing effort applies methods involving surface-wave techniques, which include the horizontal-to-vertical spectral ratio (HVSR) technique and one or more of the following: spectral analysis of surface waves (SASW), active and passive multi-channel analysis of surface waves (MASW), and passive array microtremor techniques. From this multi-method approach, shear-wave velocity profiles (VS) and the time-averaged shear-wave velocity of the upper 30 meters (VS30) are estimated for each site. To accommodate the variability in local conditions (e.g., rural and urban soil locales, as well as weathered and competent rock sites), conventional field procedures are often modified ad hoc to fit the unanticipated complexity at each location. For the majority of sites (>80%), fundamental-mode Rayleigh wave dispersion-based techniques are deployed, and where complex geology is encountered, multiple test locations are used. Due to the presence of high-velocity layers, about five percent of the locations require multi-mode inversion of Rayleigh wave (MASW-based) data or 3-D array-based inversion of SASW dispersion data, in combination with shallow P-wave seismic refraction and/or HVSR results. Where a strong impedance contrast (i.e., soil over rock) exists at shallow depth (about 10% of sites), dominant higher modes limit the use of Rayleigh wave dispersion techniques. Here, use of the Love wave dispersion technique, along with seismic refraction and/or HVSR data, is required to model the presence of shallow bedrock. At a small percentage of the sites, surface-wave techniques are found unsuitable for stand-alone deployment, and site characterization is limited to the use of the seismic refraction technique. A USGS Open-File Report, describing the surface geology, VS profile, and calculated VS30 for each site, will be prepared after the completion of the project in November 2011.
Estimation of shoreline position and change using airborne topographic lidar data
Stockdon, H.F.; Sallenger, A.H.; List, J.H.; Holman, R.A.
2002-01-01
A method has been developed for estimating shoreline position from airborne scanning laser data. This technique allows rapid estimation of objective, GPS-based shoreline positions over hundreds of kilometers of coast, essential for the assessment of large-scale coastal behavior. Shoreline position, defined as the cross-shore position of a vertical shoreline datum, is found by fitting a function to cross-shore profiles of laser altimetry data located in a vertical range around the datum and then evaluating the function at the specified datum. Error bars on horizontal position are directly calculated as the 95% confidence interval on the mean value based on the Student's t distribution of the errors of the regression. The technique was tested using lidar data collected with NASA's Airborne Topographic Mapper (ATM) in September 1997 on the Outer Banks of North Carolina. Estimated lidar-based shoreline position was compared to shoreline position as measured by a ground-based GPS vehicle survey system. The two methods agreed closely, with a root mean square difference of 2.9 m. The mean 95% confidence interval for shoreline position was ±1.4 m. The technique has been applied to a study of shoreline change on Assateague Island, Maryland/Virginia, where three ATM data sets were used to assess the statistics of large-scale shoreline change caused by a major 'northeaster' winter storm. The accuracy of both the lidar system and the technique described provides measures of shoreline position and change that are ideal for studying storm-scale variability over large spatial scales.
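A simplified sketch of the datum-crossing estimate, assuming a locally linear beach profile; the confidence-interval expression here is a standard regression-based approximation, not necessarily the exact formula used in the study:

```python
import numpy as np
from scipy import stats

def shoreline_position(x, z, datum, dz=0.5):
    """Cross-shore shoreline position at a vertical datum from lidar
    points (x: cross-shore distance, z: elevation) within datum +/- dz."""
    m = np.abs(z - datum) < dz
    b, a = np.polyfit(x[m], z[m], 1)           # linear profile z = a + b*x
    x0 = (datum - a) / b                       # evaluate the fit at the datum
    resid = z[m] - (a + b * x[m])
    n = m.sum()
    s = np.sqrt(np.sum(resid ** 2) / (n - 2))  # vertical regression error
    # map vertical error through the foreshore slope; simplified 95% CI
    ci95 = stats.t.ppf(0.975, n - 2) * (s / abs(b)) / np.sqrt(n)
    return x0, ci95
```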
Method and system for determining radiation shielding thickness and gamma-ray energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klann, Raymond T.; Vilim, Richard B.; de la Barrera, Sergio
2015-12-15
A system and method for determining the shielding thickness of a detected radiation source. The gamma ray spectrum of a radiation detector is utilized to estimate the shielding between the detector and the radiation source. The determination of the shielding may be used to adjust the information from known source-localization techniques to provide improved performance and accuracy of locating the source of radiation.
Graphic analysis of resources by numerical evaluation techniques (Garnet)
Olson, A.C.
1977-01-01
An interactive computer program for graphical analysis has been developed by the U.S. Geological Survey. The program embodies five goals: (1) economical use of computer resources, (2) simplicity for user applications, (3) interactive on-line use, (4) minimal core requirements, and (5) portability. It is designed to aid (1) the rapid analysis of point-located data, (2) structural mapping, and (3) estimation of area resources. © 1977.
NASA Astrophysics Data System (ADS)
Bostater, Charles R.; Oney, Taylor S.
2017-10-01
Hyperspectral images of coastal waters in urbanized regions were collected from fixed platform locations. Surf zone imagery and images of shallow bays, lagoons, and coastal waters are processed to produce bidirectional reflectance factor (BRF) signatures corrected for changing viewing angles. Angular changes as a function of pixel location within a scene are used to estimate changes in pixel size and ground sampling areas. Diffuse calibration targets imaged simultaneously within the scene provide the information necessary for calculating BRF signatures of the water surface and shorelines. Automated scanning using a pushbroom hyperspectral sensor allows imagery to be collected on the order of one minute or less for different regions of interest. Imagery is then rectified and georeferenced using ground control points within nadir-viewing multispectral imagery via image-to-image registration techniques. This paper demonstrates the above techniques and also presents how spectra can be extracted along different directions in the imagery. The extraction of BRF spectra along track lines allows the application of derivative reflectance spectroscopy for estimating chlorophyll-a, dissolved organic matter, and suspended matter concentrations at or near the water surface. Imagery is presented demonstrating the techniques to identify subsurface features and targets within the littoral and surf zones.
Evidence of Long Range Dependence and Self-similarity in Urban Traffic Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thakur, Gautam S; Helmy, Ahmed; Hui, Pan
2015-01-01
Transportation simulation technologies should accurately model traffic demand, distribution, and assignment parameters for urban environment simulation. These three parameters significantly impact the transportation engineering benchmark process and are also critical in realizing realistic traffic modeling situations. In this paper, we model and characterize the traffic density distribution of thousands of locations around the world. The traffic densities are generated from millions of images collected over several years and processed using computer vision techniques. The resulting traffic density distribution time series are then analyzed. Using goodness-of-fit tests, we find that the traffic density distributions follow heavy-tail models such as Log-gamma, Log-logistic, and Weibull in over 90% of the analyzed locations. Moreover, a heavy tail gives rise to long-range dependence and self-similarity, which we studied by estimating the Hurst exponent (H). Our analysis, based on seven different Hurst estimators, strongly indicates that the traffic distribution patterns are stochastically self-similar (0.5 < H < 1.0). We believe this is an important finding that will influence the design and development of the next generation of traffic simulation techniques and also aid in accurately modeling the traffic engineering of urban systems. In addition, it shall provide a much needed input for the development of smart cities.
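For reference, a minimal rescaled-range (R/S) implementation of Hurst exponent estimation, one of several estimator families; the seven estimators actually used in the study are not specified here:

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Hurst exponent by rescaled-range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs = [], []
    size = min_chunk
    while size <= n // 2:
        ratios = []
        for start in range(0, n - size + 1, size):
            seg = x[start:start + size]
            dev = np.cumsum(seg - seg.mean())   # cumulative deviation
            s = seg.std(ddof=1)
            if s > 0:
                ratios.append((dev.max() - dev.min()) / s)
        sizes.append(size)
        rs.append(np.mean(ratios))
        size *= 2
    # slope of log(R/S) against log(window size) estimates H
    h, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return h   # 0.5 < H < 1.0 indicates long-range dependence
```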
NASA Astrophysics Data System (ADS)
Finneran, James J.
2003-04-01
An acoustic backscatter technique was used to estimate in vivo whole-lung resonant frequencies in a bottlenose dolphin (Tursiops truncatus) and a white whale (Delphinapterus leucas). Subjects were trained to submerge and position themselves near an underwater sound projector and a receiving hydrophone. Acoustic pressure measurements were made near the subjects' lungs while insonified with pure tones at frequencies from 16 to 100 Hz. Whole-lung resonant frequencies were estimated by comparing pressures measured near the subjects' lungs to those measured from the same location without the subject present. Experimentally measured resonant frequencies and damping ratios were much higher than those predicted using equivalent volume spherical air bubble models. The experimental technique, data analysis method, and discrepancy between the observed and predicted values will be discussed. The potential effects of depth on the resonance frequencies will also be discussed.
Hoenner, Xavier; Whiting, Scott D; Hindell, Mark A; McMahon, Clive R
2012-01-01
Accurately quantifying animals' spatial utilisation is critical for conservation, but has long remained an elusive goal due to technological impediments. The Argos telemetry system has been extensively used to remotely track marine animals; however, location estimates are characterised by substantial spatial error. State-space models (SSM) constitute a robust statistical approach to refine Argos tracking data by accounting for observation errors and stochasticity in animal movement. Despite their wide use in ecology, few studies have thoroughly quantified the error associated with SSM-predicted locations, and no research has assessed their validity for describing animal movement behaviour. We compared home ranges and migratory pathways of seven hawksbill sea turtles (Eretmochelys imbricata) estimated from (a) highly accurate Fastloc GPS data and (b) locations computed using common Argos data analytical approaches. The Argos 68th percentile error was <1 km for LC 1, 2, and 3, while markedly less accurate (>4 km) for LC ≤ 0. The Argos error structure was highly longitudinally skewed and was, for all LC, adequately modelled by a Student's t distribution. Both habitat use and migration routes were best recreated using SSM locations post-processed by re-adding good Argos positions (LC 1, 2 and 3) and filtering terrestrial points (mean distance to migratory tracks ± SD = 2.2 ± 2.4 km; mean home range overlap and error ratio = 92.2% and 285.6, respectively). This parsimonious and objective statistical procedure, however, still markedly overestimated true home range sizes, especially for animals exhibiting restricted movements. Post-processing SSM locations nonetheless constitutes the best analytical technique for remotely sensed Argos tracking data, and we therefore recommend using this approach to rework historical Argos datasets for better estimation of animal spatial utilisation for research and evidence-based conservation purposes.
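A toy sketch of one step described above, fitting a Student's t distribution to longitudinal Argos error; the synthetic errors below are a stand-in for real Argos-minus-GPS position differences:

```python
import numpy as np
from scipy import stats

# Hypothetical longitudinal errors (km) for one Argos location class,
# i.e. Argos positions minus concurrent Fastloc GPS positions.
rng = np.random.default_rng(0)
lon_err_km = rng.standard_t(df=3, size=500) * 2.0

df, loc, scale = stats.t.fit(lon_err_km)           # heavy-tailed error model
p68 = np.percentile(np.abs(lon_err_km - loc), 68)  # 68th percentile error
print(f"t fit: df={df:.1f}, scale={scale:.2f} km; 68% error = {p68:.2f} km")
```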
NASA Astrophysics Data System (ADS)
Kilcommons, Liam M.; Redmon, Robert J.; Knipp, Delores J.
2017-08-01
We have developed a method for reprocessing the multidecadal, multispacecraft Defense Meteorological Satellite Program Special Sensor Magnetometer (DMSP SSM) data set and have applied it to 15 spacecraft years of data (DMSP Flight 16-18, 2010-2014). This Level-2 data set improves on other available SSM data sets with recalculated spacecraft locations and magnetic perturbations, artifact signal removal, representations of the observations in geomagnetic coordinates, and in situ auroral boundaries. Spacecraft locations have been recalculated using ground-tracking information. Magnetic perturbations (measured field minus modeled main field) are recomputed. The updated locations ensure the appropriate model field is used. We characterize and remove a slow-varying signal in the magnetic field measurements. This signal is a combination of ring current and measurement artifacts. A final artifact remains after processing: step discontinuities in the baseline caused by activation/deactivation of spacecraft electronics. Using coincident data from the DMSP precipitating electrons and ions instrument (SSJ4/5), we detect the in situ auroral boundaries with an improvement to the Redmon et al. (2010) algorithm. We embed the location of the aurora and an accompanying figure of merit in the Level-2 SSM data product. Finally, we demonstrate the potential of this new data set by estimating field-aligned current (FAC) density using the Minimum Variance Analysis technique. The FAC estimates are then expressed in dynamic auroral boundary coordinates using the SSJ-derived boundaries, demonstrating a dawn-dusk asymmetry in average FAC location relative to the equatorward edge of the aurora. The new SSM data set is now available in several public repositories.
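A hedged sketch of a single-spacecraft FAC estimate via Minimum Variance Analysis under an infinite-current-sheet assumption; the variable names and coordinate handling are illustrative only, not the paper's implementation:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def fac_density_mva(db_xyz, v_sc, dt):
    """Current-sheet density from along-track magnetic perturbations.

    db_xyz : (n, 3) magnetic perturbations (tesla), e.g. SSM measured
             field minus model main field (hypothetical input)
    v_sc   : spacecraft speed (m/s); dt : sample interval (s)
    """
    # Minimum Variance Analysis: eigenvectors of the perturbation covariance
    w, v = np.linalg.eigh(np.cov(db_xyz.T))   # eigenvalues ascending
    db_max = db_xyz @ v[:, -1]                # maximum-variance component
    # infinite-sheet approximation: J = (1/mu0) * dB/dx with x = v_sc * t
    return np.gradient(db_max, v_sc * dt) / MU0   # A/m^2
```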
Robustness of survival estimates for radio-marked animals
Bunck, C.M.; Chen, C.-L.
1992-01-01
Telemetry techniques are often used to study the survival of birds and mammals, particularly when mark-recapture approaches are unsuitable. Both parametric and nonparametric methods to estimate survival have been developed or modified from other applications. An implicit assumption in these approaches is that the probability of re-locating an animal with a functioning transmitter is one. A Monte Carlo study was conducted to determine the bias and variance of the Kaplan-Meier estimator and of an estimator based on the assumption of constant hazard, and to evaluate the performance of the two-sample tests associated with each. Modifications of each estimator that allow a re-location probability of less than one are described and evaluated. Generally, the unmodified estimators were biased but had lower variance. At low sample sizes all estimators performed poorly. Under the null hypothesis, the distribution of all test statistics reasonably approximated the null distribution when survival was low but not when it was high. The power of the two-sample tests was similar.
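For concreteness, a minimal Kaplan-Meier product-limit estimator for telemetry data; censoring here means a lost or failed transmitter, and the re-location-probability modification studied in the paper is not included:

```python
import numpy as np

def kaplan_meier(time, event):
    """Product-limit survival curve.
    time  : days from marking to death or last relocation
    event : 1 if death observed, 0 if censored (e.g. transmitter failure)
    Ties are processed sequentially for simplicity."""
    order = np.argsort(time)
    time, event = np.asarray(time)[order], np.asarray(event)[order]
    at_risk, s = len(time), 1.0
    surv = []
    for d in event:
        if d:
            s *= 1.0 - 1.0 / at_risk   # survival drops at each death
        at_risk -= 1                   # censored animals leave the risk set
        surv.append(s)
    return time, np.array(surv)
```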
Lo Presti, Rossella; Barca, Emanuele; Passarella, Giuseppe
2010-01-01
Environmental time series are often affected by the presence of missing data, but when analyzing the data statistically, the need to fill in the gaps by estimating the missing values must be considered. At present, a large number of statistical techniques are available to achieve this objective; they range from very simple methods, such as using the sample mean, to very sophisticated ones, such as multiple imputation. A new methodology for missing data estimation is proposed, which tries to merge the obvious advantages of the simplest techniques (e.g. their ease of implementation) with the strength of the newest techniques. The proposed method consists of two consecutive stages: once it has been ascertained that a specific monitoring station is affected by missing data, the "most similar" monitoring stations are identified among neighbouring stations on the basis of a suitable similarity coefficient; in the second stage, a regressive method is applied in order to estimate the missing data. In this paper, four different regressive methods are applied and compared, in order to determine which is the most reliable for filling in the gaps, using rainfall data series measured in the Candelaro River Basin located in southern Italy.
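A compact sketch of the two-stage procedure, using the Pearson correlation as a stand-in similarity coefficient and simple linear regression as the regressive method (the paper compares four such methods):

```python
import numpy as np

def fill_gaps(target, donors):
    """target : 1-D series with np.nan gaps; donors : dict of neighbouring
    station series of the same length."""
    # stage 1: pick the most similar neighbouring station
    best, best_r = None, -np.inf
    for series in donors.values():
        m = ~np.isnan(target) & ~np.isnan(series)
        r = np.corrcoef(target[m], series[m])[0, 1]
        if r > best_r:
            best, best_r = series, r
    # stage 2: regress target on the donor and predict the gaps
    m = ~np.isnan(target) & ~np.isnan(best)
    b, a = np.polyfit(best[m], target[m], 1)   # target ~ a + b * donor
    out = target.copy()
    gaps = np.isnan(target) & ~np.isnan(best)
    out[gaps] = a + b * best[gaps]
    return out
```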
Garner, Alan A; van den Berg, Pieter L
2017-10-16
New South Wales (NSW), Australia has a network of multirole retrieval physician-staffed helicopter emergency medical services (HEMS) with seven bases servicing a jurisdiction with its population concentrated along the eastern seaboard. The aim of this study was to estimate optimal HEMS base locations within NSW using advanced mathematical modelling techniques. We used high-resolution census population data for NSW from 2011, which divides the state into areas containing 200-800 people. Optimal HEMS base locations were estimated using the maximal covering location problem facility location optimization model and the average response time model, exploring the number of bases needed to cover various fractions of the population for a 45-min response time threshold or to minimize the overall average response time to all persons, both in greenfield scenarios and conditioning on the current base structure. We also developed a hybrid mathematical model in which average response time was optimised subject to minimum population coverage thresholds. Seven bases could cover 98% of the population within 45 min when optimised for coverage, or reach the entire population of the state within an average of 21 min when optimised for response time. Given the existing bases, adding two bases could either increase the 45-min coverage from 91% to 97% or decrease the average response time from 21 min to 19 min. Adding a single specialist prehospital rapid response HEMS to the area of greatest population concentration decreased the average statewide response time by 4 min. The optimal seven-base hybrid model, which included the rapid response HEMS, covered 97.75% of the population within 45 min and reached the entire population with an average response time of 18 min. HEMS base locations can be optimised based on either the percentage of the population covered or the average response time to the entire population. We have also demonstrated a hybrid technique that optimizes response time for a given number of bases and a minimum defined threshold of population coverage. The addition of specialized rapid response HEMS services to a system of multirole retrieval HEMS may reduce overall average response times by improving access in large urban areas.
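A greedy heuristic sketch of the maximal covering location problem; the study solved the problem with exact optimization, so this is illustrative only:

```python
import numpy as np

def greedy_mclp(cover, pop, n_bases):
    """Greedy heuristic for the maximal covering location problem.
    cover : (n_sites, n_areas) bool, True if a base at site i reaches
            area j within the threshold (e.g. 45 min)
    pop   : (n_areas,) population of each census area."""
    chosen = []
    covered = np.zeros(cover.shape[1], dtype=bool)
    for _ in range(n_bases):
        # population gained by each candidate site over current coverage
        gain = (cover & ~covered).astype(float) @ pop
        gain[chosen] = -1.0                      # do not reselect a site
        best = int(np.argmax(gain))
        chosen.append(best)
        covered |= cover[best]
    return chosen, pop[covered].sum() / pop.sum()   # sites, coverage fraction
```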
MUSIC algorithm DoA estimation for cooperative node location in mobile ad hoc networks
NASA Astrophysics Data System (ADS)
Warty, Chirag; Yu, Richard Wai; ElMahgoub, Khaled; Spinsante, Susanna
In recent years, technological development has encouraged several applications based on distributed communications networks without any fixed infrastructure. This paper addresses the problem of providing a collaborative early warning system for multiple mobile nodes against a fast-moving object. The solution is provided subject to system-level constraints: motion of nodes, antenna sensitivity, and Doppler effect at 2.4 GHz and 5.8 GHz. The approach consists of three stages. The first phase consists of detecting the incoming object using a highly directive two-element antenna in the 5.0 GHz band. The second phase consists of broadcasting the warning message with a low-directivity broad antenna beam using a 2×2 antenna array, which in the third phase is detected by receiving nodes using a direction of arrival (DOA) estimation technique. The DOA estimation technique is used to estimate the range and bearing of the incoming nodes. The position of the fast-arriving object can be estimated using the MUSIC algorithm for warning beam DOA estimation. This paper is mainly intended to demonstrate the feasibility of an early detection and warning system using collaborative node-to-node communication links. A simulation is performed to show the behavior of the detecting and broadcasting antennas as well as the performance of the detection algorithm. The idea can be further expanded to implement a commercial-grade detection and warning system.
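A minimal MUSIC pseudo-spectrum for a uniform linear array, as a sketch of the DOA estimation stage; the array geometry and parameters here are assumptions, not the paper's configuration:

```python
import numpy as np

def music_spectrum(X, n_src, d=0.5, grid=None):
    """MUSIC pseudo-spectrum for a uniform linear array.
    X : (n_ant, n_snapshots) complex baseband snapshots
    d : element spacing in wavelengths."""
    if grid is None:
        grid = np.linspace(-90.0, 90.0, 361)
    R = X @ X.conj().T / X.shape[1]        # sample covariance matrix
    _, v = np.linalg.eigh(R)               # eigenvalues ascending
    En = v[:, :X.shape[0] - n_src]         # noise subspace eigenvectors
    idx = np.arange(X.shape[0])
    p = np.empty(len(grid))
    for k, th in enumerate(np.deg2rad(grid)):
        a = np.exp(2j * np.pi * d * idx * np.sin(th))   # steering vector
        p[k] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return grid, p   # peaks of p give the directions of arrival
```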
The spatial distribution of underage tobacco sales in Los Angeles.
Lipton, Robert; Banerjee, Aniruddha; Levy, David; Manzanilla, Nora; Cochrane, Michelle
2008-01-01
Underage tobacco sales are considered a serious public health problem in Los Angeles; anecdotally, rates have been thought to be quite high. In this paper, using spatial statistical techniques, we describe underage tobacco sales, identifying areas with high levels of sales and hot spots while controlling for sociodemographic measures. Six hundred eighty-nine tobacco outlets were investigated throughout the city of Los Angeles in 2001. We consider the factors that explain the location of vendors making illegal sales of tobacco to underage youth and focus on those areas with especially high rates of illegal sales when controlling for other independent measures. Using data from the census, the LA City Attorney's Office, and public records on school locations in Los Angeles, we employ general least-squares (GLS) estimators in order to avoid biased estimates, with the locations of vendors subjected to underage tobacco compliance checks, both violators and nonviolators, as the outcome measure. Underage tobacco sales in Los Angeles were very high (33.5%) for the entire city in 2001; in many zip codes this rate was considerably higher (60%-100%). In the spatial models, lower income and ethnicity were strongly associated with increases in underage tobacco sales. Hotspot areas of underage tobacco sales also had much lower mean family income, a much higher percentage of foreign born, and greater population density. Spatial techniques were used to better identify areas where vendors sell tobacco to underage youth. Lower-income areas were much more likely both to have higher rates of underage tobacco sales and to be a hot spot for such sales. Population density is also significantly associated with underage tobacco sales. The study's limitations are noted.
Magnitude and Frequency of Floods on Nontidal Streams in Delaware
Ries, Kernell G.; Dillow, Jonathan J.A.
2006-01-01
Reliable estimates of the magnitude and frequency of annual peak flows are required for the economical and safe design of transportation and water-conveyance structures. This report, done in cooperation with the Delaware Department of Transportation (DelDOT) and the Delaware Geological Survey (DGS), presents methods for estimating the magnitude and frequency of floods on nontidal streams in Delaware at locations where streamgaging stations monitor streamflow continuously and at ungaged sites. Methods are presented for estimating the magnitude of floods for return frequencies ranging from 2 through 500 years. These methods are applicable to watersheds exhibiting a full range of urban development conditions. The report also describes StreamStats, a web application that makes it easy to obtain flood-frequency estimates for user-selected locations on Delaware streams. Flood-frequency estimates for ungaged sites are obtained through a process known as regionalization, using statistical regression analysis, where information determined for a group of streamgaging stations within a region forms the basis for estimates for ungaged sites within the region. One hundred and sixteen streamgaging stations in and near Delaware with at least 10 years of non-regulated annual peak-flow data available were used in the regional analysis. Estimates for gaged sites are obtained by combining the station peak-flow statistics (mean, standard deviation, and skew) and peak-flow estimates with regional estimates of skew and flood-frequency magnitudes. Example flood-frequency estimate calculations using the methods presented in the report are given for: (1) ungaged sites, (2) gaged locations, (3) sites upstream or downstream from a gaged location, and (4) sites between gaged locations. Regional regression equations applicable to ungaged sites in the Piedmont and Coastal Plain Physiographic Provinces of Delaware are presented. The equations incorporate drainage area, forest cover, impervious area, basin storage, housing density, soil type A, and mean basin slope as explanatory variables, and have average standard errors of prediction ranging from 28 to 72 percent. Additional regression equations that incorporate drainage area and housing density as explanatory variables are presented for use in defining the effects of urbanization on peak-flow estimates throughout Delaware for the 2-year through 500-year recurrence intervals, along with suggestions for their appropriate use in predicting development-affected peak flows. Additional topics associated with the analyses performed during the study are also discussed, including: (1) the availability and description of more than 30 basin and climatic characteristics considered during the development of the regional regression equations; (2) the treatment of increasing trends in the annual peak-flow series identified at 18 gaged sites, with respect to their relations with maximum 24-hour precipitation and housing density, and their use in the regional analysis; (3) calculation of the 90-percent confidence interval associated with peak-flow estimates from the regional regression equations; and (4) a comparison of flood-frequency estimates at gages used in a previous study, highlighting the effects of various improved analytical techniques.
Cheng, Xuemin; Hao, Qun; Xie, Mengdi
2016-04-07
Video stabilization is an important technology for removing undesired motion in videos. This paper presents a comprehensive motion estimation method for electronic image stabilization techniques, integrating the speeded up robust features (SURF) algorithm, modified random sample consensus (RANSAC), and the Kalman filter, while taking camera scaling and conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched. Falsely matched points were removed by the modified RANSAC. Global motion was estimated by using the feature points and modified cascading parameters, which reduced the accumulated errors over a series of frames and improved the peak signal-to-noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with the filtered motion parameters using modified adjacent-frame compensation. The experimental results show that the target images were stabilized even when the vibration amplitudes of the video became increasingly large.
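A sketch of the inter-frame motion estimation stage using OpenCV; ORB is used here as a freely available stand-in for the patented SURF detector, and the Kalman smoothing stage is only indicated in a comment:

```python
import cv2
import numpy as np

def interframe_motion(prev_gray, curr_gray):
    """Translation, rotation, and scale between two grayscale frames.
    ORB replaces SURF here; RANSAC rejects falsely matched points."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(prev_gray, None)
    k2, d2 = orb.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches])
    dst = np.float32([k2[m.trainIdx].pt for m in matches])
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    dx, dy = M[0, 2], M[1, 2]
    angle = np.arctan2(M[1, 0], M[0, 0])
    scale = np.hypot(M[0, 0], M[1, 0])
    return dx, dy, angle, scale

# Accumulating these parameters over frames and smoothing the trajectory
# (e.g. with cv2.KalmanFilter) yields the stabilising warp per frame.
```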
Program Analyzes Radar Altimeter Data
NASA Technical Reports Server (NTRS)
Vandemark, Doug; Hancock, David; Tran, Ngan
2004-01-01
A computer program has been written to perform several analyses of radar altimeter data. The program was designed to improve on previous methods of analysis of altimeter engineering data by (1) facilitating and accelerating the analysis of large amounts of data in a more direct manner and (2) improving the ability to estimate performance of radar-altimeter instrumentation and provide data corrections. The data in question are openly available to the international scientific community and can be downloaded from anonymous file-transfer-protocol (FTP) locations that are accessible via links from altimetry Web sites. The software estimates noise in range measurements, estimates corrections for electromagnetic bias, and performs statistical analyses on various parameters for comparison of different altimeters. Whereas prior techniques used to perform similar analyses of altimeter range noise require comparison of data from repetitions of satellite ground tracks, the present software uses a high-pass filtering technique to obtain similar results from single satellite passes. Elimination of the requirement for repeat-track analysis facilitates the analysis of large amounts of satellite data to assess subtle variations in range noise.
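A minimal sketch of single-pass range-noise estimation by high-pass filtering; the sampling rate and cutoff below are assumptions, as the program's actual filter design is not documented here:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def range_noise(ranges, fs=1.0, cutoff=0.1):
    """Single-pass altimeter range-noise estimate: remove the long-
    wavelength geophysical signal with a high-pass filter and take the
    standard deviation of the residual (fs and cutoff in Hz)."""
    b, a = butter(4, cutoff / (fs / 2.0), btype="highpass")
    return float(np.std(filtfilt(b, a, ranges)))
```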
Error assessment of local tie vectors in space geodesy
NASA Astrophysics Data System (ADS)
Falkenberg, Jana; Heinkelmann, Robert; Schuh, Harald
2014-05-01
For the computation of the ITRF, the data of the geometric space-geodetic techniques at co-location sites are combined. The combination increases the redundancy and offers the possibility to utilize the strengths of each technique while mitigating their weaknesses. To enable the combination of co-located techniques, each technique needs to have a well-defined geometric reference point. Linking the geometric reference points enables the combination of the technique-specific coordinates into a multi-technique site coordinate. The vectors between these reference points are called "local ties". Local ties are usually realized by local surveys of the distances and/or angles between the reference points. Identified temporal variations of the reference points are considered in the local tie determination only indirectly, by assuming a mean position. Finally, the local ties measured in the local surveying network have to be transformed into the ITRF, the global geocentric equatorial coordinate system of the space-geodetic techniques. The current IERS procedure for the combination of the space-geodetic techniques includes the local tie vectors with an error floor of three millimeters plus a distance-dependent component. This error floor, however, significantly underestimates the real uncertainty of local tie determination. To fulfill the GGOS goals of 1 mm position and 0.1 mm/yr velocity accuracy, local ties will have to be accurate at the sub-mm level, which is currently not achievable. To assess the effects of local ties on ITRF computations, the error sources must be investigated so that they can be realistically quantified and taken into account. Hence, a reasonable estimate of all the errors included in the various local ties is needed. An appropriate estimate could also improve the separation of local-tie errors from technique-specific error contributions to uncertainties, and thus allow an assessment of the accuracy of the space-geodetic techniques. Our investigations concern the simulation of the error contribution of each component of the local tie definition and determination. A closer look into the models of reference point definition, accessibility, measurement, and transformation is necessary to properly model the error of the local tie. The effect of temporal variations on the local ties will be studied as well. The transformation of the local survey into the ITRF can be assumed to be the largest error contributor, in particular the orientation of the local surveying network with respect to the ITRF.
Diagnostics of Loss of Coolant Accidents Using SVC and GMDH Models
NASA Astrophysics Data System (ADS)
Lee, Sung Han; No, Young Gyu; Na, Man Gyun; Ahn, Kwang-Il; Park, Soo-Yong
2011-02-01
As a means of effectively managing severe accidents at nuclear power plants, it is important to identify and diagnose accident initiating events within a short time interval after the accidents by observing the major measured signals. The main objective of this study was to diagnose loss of coolant accidents (LOCAs) using artificial intelligence techniques, such as SVC (support vector classification) and GMDH (group method of data handling). In this study, the methodologies of SVC and GMDH models were utilized to discover the break location and estimate the break size of the LOCA, respectively. The 300 accident simulation data (based on MAAP4) were used to develop the SVC and GMDH models, and the 33 test data sets were used to independently confirm whether or not the SVC and GMDH models work well. The measured signals from the reactor coolant system, steam generators, and containment at a nuclear power plant were used as inputs to the models, and the 60 sec time-integrated values of the input signals were used as inputs into the SVC and GMDH models. The simulation results confirmed that the proposed SVC model can identify the break location and the proposed GMDH models can estimate the break size accurately. In addition, even if the measurement errors exist and safety systems actuate, the proposed SVC and GMDH models can discover the break locations without a misclassification and accurately estimate the break size.
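A sketch of the break-location classification stage using a support vector classifier; the feature matrix below is random stand-in data for the 300 MAAP4 simulations and 33 test cases, and the label encoding is hypothetical:

```python
import numpy as np
from sklearn.svm import SVC

# Random stand-in data: rows are 60-s time-integrated plant signals per
# simulated accident; labels encode the break location (0/1/2 for three
# hypothetical locations such as hot leg / cold leg / SG tube).
rng = np.random.default_rng(1)
X_train, y_train = rng.normal(size=(300, 12)), rng.integers(0, 3, size=300)
X_test = rng.normal(size=(33, 12))

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, y_train)
print(clf.predict(X_test))   # predicted break locations for the test set
```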
Estimation of plasma ion saturation current and reduced tip arcing using Langmuir probe harmonics.
Boedo, J A; Rudakov, D L
2017-03-01
We present a method to calculate the ion saturation current, I_sat, for Langmuir probes at high frequency (>100 kHz) using the harmonics technique and we compare that to a direct measurement of I_sat. It is noted that the I_sat estimation can be made directly by the ratio of harmonic amplitudes, without explicitly calculating T_e. We also demonstrate that since the probe tips using the harmonic method are oscillating near the floating potential, drawing little power, this method reduces tip heating and arcing and allows plasma density measurements at a plasma power flux that would cause continuously biased tips to arc. A multi-probe array is used, with two spatially separated tips employing the harmonics technique and measuring the amplitude of at least two harmonics per tip. A third tip, located between the other two, measures the ion saturation current directly. We compare the measured and calculated ion saturation currents for a variety of plasma conditions and demonstrate the validity of the technique and its use in reducing arcs.
NASA Astrophysics Data System (ADS)
Mercado, Karla Patricia E.
Tissue engineering holds great promise for the repair or replacement of native tissues and organs. Further advancements in the fabrication of functional engineered tissues are partly dependent on developing new and improved technologies to monitor the properties of engineered tissues volumetrically, quantitatively, noninvasively, and nondestructively over time. Currently, engineered tissues are evaluated during fabrication using histology, biochemical assays, and direct mechanical tests. However, these techniques destroy tissue samples and, therefore, lack the capability for real-time, longitudinal monitoring. The research reported in this thesis developed nondestructive, noninvasive approaches to characterize the structural, biological, and mechanical properties of 3-D engineered tissues using high-frequency quantitative ultrasound and elastography technologies. A quantitative ultrasound technique, using a system-independent parameter known as the integrated backscatter coefficient (IBC), was employed to visualize and quantify structural properties of engineered tissues. Specifically, the IBC was demonstrated to estimate cell concentration and quantitatively detect differences in the microstructure of 3-D collagen hydrogels. Additionally, the feasibility of an ultrasound elastography technique called Single Tracking Location Acoustic Radiation Force Impulse (STL-ARFI) imaging was demonstrated for estimating the shear moduli of 3-D engineered tissues. High-frequency ultrasound techniques can be easily integrated into sterile environments necessary for tissue engineering. Furthermore, these high-frequency quantitative ultrasound techniques can enable noninvasive, volumetric characterization of the structural, biological, and mechanical properties of engineered tissues during fabrication and post-implantation.
NASA Astrophysics Data System (ADS)
Ars, Sébastien; Broquet, Grégoire; Yver Kwok, Camille; Roustan, Yelva; Wu, Lin; Arzoumanian, Emmanuel; Bousquet, Philippe
2017-12-01
This study presents a new concept for estimating the pollutant emission rates of a site and its main facilities using a series of atmospheric measurements across the pollutant plumes. This concept combines the tracer release method, local-scale atmospheric transport modelling and a statistical atmospheric inversion approach. The conversion between the controlled emission and the measured atmospheric concentrations of the released tracer across the plume places valuable constraints on the atmospheric transport. This is used to optimise the configuration of the transport model parameters and the model uncertainty statistics in the inversion system. The emission rates of all sources are then inverted to optimise the match between the concentrations simulated with the transport model and the pollutants' measured atmospheric concentrations, accounting for the transport model uncertainty. In principle, by using atmospheric transport modelling, this concept does not strongly rely on the good colocation between the tracer and pollutant sources and can be used to monitor multiple sources within a single site, unlike the classical tracer release technique. The statistical inversion framework and the use of the tracer data for the configuration of the transport and inversion modelling systems should ensure that the transport modelling errors are correctly handled in the source estimation. The potential of this new concept is evaluated with a relatively simple practical implementation based on a Gaussian plume model and a series of inversions of controlled methane point sources using acetylene as a tracer gas. The experimental conditions are chosen so that they are suitable for the use of a Gaussian plume model to simulate the atmospheric transport. In these experiments, different configurations of methane and acetylene point source locations are tested to assess the efficiency of the method in comparison to the classic tracer release technique in coping with the distances between the different methane and acetylene sources. The results from these controlled experiments demonstrate that, when the targeted and tracer gases are not well collocated, this new approach provides a better estimate of the emission rates than the tracer release technique. As an example, the relative error between the estimated and actual emission rates is reduced from 32 % with the tracer release technique to 16 % with the combined approach in the case of a tracer located 60 m upwind of a single methane source. Further studies and more complex implementations with more advanced transport models and more advanced optimisations of their configuration will be required to generalise the applicability of the approach and strengthen its robustness.
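For orientation, the standard Gaussian plume formula that such a transport model evaluates, with ground reflection; the dispersion lengths sigma_y and sigma_z would come from a stability parameterisation, which is not reproduced here:

```python
import numpy as np

def gaussian_plume(q, u, y, z, sigma_y, sigma_z, h=0.0):
    """Concentration (g/m^3) from a point source of rate q (g/s) at
    effective height h, wind speed u (m/s) along the plume axis,
    crosswind offset y and height z (m); sigma_y, sigma_z are the
    crosswind and vertical dispersion lengths (m) at this distance."""
    return (q / (2.0 * np.pi * u * sigma_y * sigma_z)
            * np.exp(-0.5 * (y / sigma_y) ** 2)
            * (np.exp(-0.5 * ((z - h) / sigma_z) ** 2)
               + np.exp(-0.5 * ((z + h) / sigma_z) ** 2)))  # ground image
```

In the inversion, the emission rates scaling such simulated plumes are adjusted until the modelled concentrations statistically match the measured ones, with the controlled tracer release constraining the transport parameters.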
Estimation of Shear Wave Speed in the Rhesus Macaques Uterine Cervix
Huang, Bin; Drehfal, Lindsey C.; Rosado-Mendez, Ivan M.; Guerrero, Quinton W.; Palmeri, Mark L.; Simmons, Heather A.; Feltovich, Helen; Hall, Timothy J.
2016-01-01
Cervical softness is a critical parameter in pregnancy. Clinically, preterm birth is associated with premature cervical softening and post-dates birth is associated with delayed cervical softening. In practice, the assessment of softness is subjective, based on digital examination. Fortunately, objective, quantitative techniques to assess softness and other parameters associated with microstructural cervical change are emerging. One of these is shear wave speed (SWS) estimation. In principle, this allows objective characterization of stiffness because waves travel more slowly in softer tissue. We are studying SWS in humans and rhesus macaques, the latter in order to accelerate translation from bench to bedside. For the current study, we estimated SWS in ex vivo cervices of rhesus macaques, n=24 nulliparous (never given birth) and n=9 multiparous (delivered at least 1 baby). Misoprostol (a prostaglandin used to soften human cervices prior to gynecological procedures) was administered to 13 macaques prior to necropsy (nulliparous: 7, multiparous: 6). SWS measurements were made at predetermined locations from the distal to proximal end of the cervix on both the anterior and posterior cervix, with 5 repeat measures at each location. The intent was to explore macaque cervical microstructure, including biological and spatial variability, to elucidate the similarities and differences between the macaque and the human cervix in order to facilitate future in vivo studies. We found that SWS is dependent on location in the normal nonpregnant macaque cervix, as in the human cervix. Unlike the human cervix, we detected no difference between ripened and unripened rhesus macaque cervix samples, nor nulliparous versus multiparous samples, although we observed a trend toward stiffer tissue in nulliparous samples. We found rhesus macaque cervix to be much stiffer than human, which is important for technique refinement. These findings are useful for guiding study of cervical microstructure in both humans and macaques. PMID:26886979
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maraldo, Maja V., E-mail: dra.maraldo@gmail.com; Dabaja, Bouthaina S.; Filippi, Andrea R.
Purpose: Early-stage Hodgkin lymphoma (HL) is a rare disease, and the location of lymphoma varies considerably between patients. Here, we evaluate the variability of radiation therapy (RT) plans among 5 International Lymphoma Radiation Oncology Group (ILROG) centers with regard to beam arrangements, planning parameters, and estimated doses to the critical organs at risk (OARs). Methods: Ten patients with stage I-II classic HL with masses of different sizes and locations were selected. On the basis of the clinical information, 5 ILROG centers were asked to create RT plans to a prescribed dose of 30.6 Gy. A postchemotherapy computed tomography scan with precontoured clinical target volume (CTV) and OARs was provided for each patient. The treatment technique and planning methods were chosen according to each center's best practice in 2013. Results: Seven patients had mediastinal disease, 2 had axillary disease, and 1 had disease in the neck only. The median age at diagnosis was 34 years (range, 21-74 years), and 5 patients were male. Of the resulting 50 treatment plans, 15 were planned with volumetric modulated arc therapy (1-4 arcs), 16 with intensity modulated RT (3-9 fields), and 19 with 3-dimensional conformal RT (2-4 fields). The variations in CTV-to-planning target volume margins (5-15 mm), maximum tolerated dose (31.4-40 Gy), and plan conformity (conformity index 0-3.6) were significant. However, estimated doses to OARs were comparable between centers for each patient. Conclusions: RT planning for HL is challenging because of the heterogeneity in the size and location of disease and, additionally, because of the variation in choice of treatment techniques and field arrangements. Adopting ILROG guidelines and implementing universal dose objectives could further standardize treatment techniques and contribute to lowering the dose to the surrounding OARs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kulisek, Jonathan A.; Schweppe, John E.; Stave, Sean C.
2015-06-01
Helicopter-mounted gamma-ray detectors can provide law enforcement officials the means to quickly and accurately detect, identify, and locate radiological threats over a wide geographical area. The ability to accurately distinguish radiological threat-generated gamma-ray signatures from background gamma radiation in real time is essential in order to realize this potential. This problem is non-trivial, especially in urban environments, where the background may change very rapidly during flight. This exacerbates the challenge of estimating the background, given the poor counting statistics inherent in real-time airborne gamma-ray spectroscopy measurements. To address this, we have developed a new technique for real-time estimation of background gamma radiation from aerial measurements. This method is built upon the noise-adjusted singular value decomposition (NASVD) technique that was previously developed for estimating the potassium (K), uranium (U), and thorium (T) concentrations in soil post-flight. The method can be calibrated using K, U, and T spectra determined from radiation transport simulations, along with basis functions that may be determined empirically by applying maximum likelihood estimation (MLE) to previously measured airborne gamma-ray spectra. The method was applied to both measured and simulated airborne gamma-ray spectra, with and without man-made radiological source injections. Compared to schemes based on simple averaging, this technique was less sensitive to background contamination from the injected man-made sources and may be particularly useful when the gamma-ray background changes frequently during the course of the flight.
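A minimal sketch of the NASVD decomposition underlying the method; the maximum-likelihood basis-function estimation described above is not reproduced:

```python
import numpy as np

def nasvd_smooth(S, k=4):
    """Noise-adjusted SVD smoothing of airborne gamma-ray spectra.
    S : (n_spectra, n_channels) counts. Dividing each channel by
    sqrt(mean spectrum) approximately equalises the Poisson noise
    before the SVD."""
    mean = S.mean(axis=0)
    mean[mean == 0] = 1.0                  # avoid division by zero
    W = S / np.sqrt(mean)
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    Wk = (U[:, :k] * s[:k]) @ Vt[:k]       # keep the k leading components
    return Wk * np.sqrt(mean)              # back to count space
```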
NASA Technical Reports Server (NTRS)
Ioup, George E.; Ioup, Juliette W.
1988-01-01
This thesis reviews the technique established to clear channels in the Power Spectral Estimate by applying linear combinations of well-known window functions to the autocorrelation function. The need for windowing the autocorrelation function arises because the true autocorrelation is not generally available when forming the Power Spectral Estimate. When applied, the windows reduce the effect that truncating the data, and possibly the autocorrelation, has on the Power Spectral Estimate. Previous work showed that a single channel can be cleared, allowing the detection of a small peak in the presence of a large peak in the Power Spectral Estimate. The utility of this method depends on its robustness across different input situations. We extend the analysis in this paper to include clearing up to three channels. We examine the relative positions of the spikes to each other and also the effect of taking different percentages of lags of the autocorrelation in the Power Spectral Estimate. This method could have application wherever the Power Spectrum is used. An example is beamforming for source location, where a small target can be located next to a large target. Other possibilities extend into seismic data processing. As the method becomes more automated, other applications may present themselves.
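For readers unfamiliar with lag-window spectral estimation, a brief sketch of the underlying computation may help. The single Hann window and the truncation fraction below are illustrative stand-ins for the linear combinations of windows discussed above:

```python
import numpy as np

def lag_window_psd(x, window_fn=np.hanning, max_lag_frac=0.25, nfft=1024):
    """Power spectral estimate from the windowed sample autocorrelation.

    Truncating the autocorrelation at a maximum lag and tapering it with
    a lag window trades resolution for reduced leakage, which is the
    effect the combined windows aim to control.
    """
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    max_lag = int(max_lag_frac * n)
    # Biased sample autocorrelation for lags 0..max_lag
    acf = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])
    taper = window_fn(2 * max_lag + 1)[max_lag:]    # one-sided half window
    acf *= taper
    # Even (circular) extension -> real-valued spectrum estimate
    full = np.concatenate([acf, acf[-2:0:-1]])
    return np.abs(np.fft.rfft(full, n=nfft))

# Two closely spaced tones, one weak: windowing helps reveal the small
# peak next to the large one.
t = np.arange(2048)
sig = np.sin(0.2 * np.pi * t) + 0.05 * np.sin(0.23 * np.pi * t)
psd = lag_window_psd(sig)
```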
Estimating the volatilization of ammonia from synthetic nitrogenous fertilizers used in China.
Zhang, Yisheng; Luan, Shengji; Chen, Liaoliao; Shao, Min
2011-03-01
Although it has long been recognized that significant amounts of nitrogen, typically in the form of ammonia (NH3) applied as fertilizer, are lost to the atmosphere, accurate estimates are lacking for many locations. In this study, a detailed, bottom-up method for estimating NH3 emissions from synthetic fertilizers in China was used. The total amount emitted in 2005 in China was estimated to be 3.55 Tg NH3-N, with an uncertainty of ± 50%. This estimate was considerably lower than previously published values. Emissions from urea and ammonium bicarbonate accounted for 64.3% and 26.5%, respectively, of the 2005 total. The NH3 emission inventory incorporated 2448 county-level data points, categorized on a monthly basis, and was developed with more accurate activity levels and emission factors than had been used in previous assessments. There was considerable variability in the emissions within a province. The NH3 emissions generally peaked in the spring and summer, accounting for 30.1% and 48.8%, respectively, of total emissions in 2005. The peaks correlated with crop planting and fertilization schedules. The NH3 regional distribution pattern showed strong correspondence with planting techniques and local arable land areas. The regions with the highest atmospheric losses are located in eastern China, especially the North China Plain and the Taihu region. Copyright © 2010 Elsevier Ltd. All rights reserved.
Volcanic tremor and local earthquakes at Copahue volcanic complex, Southern Andes, Argentina
NASA Astrophysics Data System (ADS)
Ibáñez, J. M.; Del Pezzo, E.; Bengoa, C.; Caselli, A.; Badi, G.; Almendros, J.
2008-07-01
In the present paper we describe the results of a seismic field survey carried out at Copahue Volcano, Southern Andes, Argentina, using a small-aperture, dense seismic antenna. Copahue Volcano is an active volcano that exhibited a few phreatic eruptions in the last 20 years. The aim of this experiment was to record and classify the background seismic activity of this volcanic area, and to locate the sources of local earthquakes and volcanic tremor. Data consist of several volcano-tectonic (VT) earthquakes and many samples of background seismic noise. We use both ordinary spectral and multi-spectral techniques to measure the spectral content, and an array technique (the Zero Lag Cross Correlation technique) to measure the back-azimuth and apparent slowness of the signals propagating across the array. We locate VT earthquakes using a procedure based on the estimate of slowness vector components and S-P time. VT events are located mainly along the border of the Caviahue caldera lake, to the southeast of Copahue volcano, in a depth interval of 1-3 km below the surface. The background noise shows the presence of many transients with high correlation among the array stations in the frequency band centered at 2.5 Hz. These transients are superimposed on an uncorrelated background seismic signal. Array solutions for these transients show a predominant slowness vector pointing to the exploited geothermal field of "Las Maquinitas" and "Copahue Village", located about 6 km north of the array site. We interpret this coherent signal as tremor generated by the activity of the geothermal field.
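A zero-lag cross-correlation slowness search of the kind used here can be sketched in a few lines. This is a generic plane-wave grid search, not the authors' exact implementation, and the integer-sample shifting is a simplification:

```python
import numpy as np
from itertools import combinations

def zlcc_slowness(traces, coords, dt, s_max=0.5, n_grid=41):
    """Grid search for the plane-wave slowness vector that maximizes the
    mean zero-lag cross-correlation between array traces.

    traces : (n_sta, n_samp) time-aligned seismograms
    coords : (n_sta, 2) station easting/northing in km
    dt     : sample interval in s; slowness grid is in s/km
    """
    s_axis = np.linspace(-s_max, s_max, n_grid)
    best = (-np.inf, 0.0, 0.0)
    for sx in s_axis:
        for sy in s_axis:
            # Integer-sample delays predicted by the plane wave (simplified)
            k = np.round((coords[:, 0] * sx + coords[:, 1] * sy) / dt).astype(int)
            shifted = [np.roll(tr, -ki) for tr, ki in zip(traces, k)]
            cc = np.mean([np.corrcoef(a, b)[0, 1]
                          for a, b in combinations(shifted, 2)])
            if cc > best[0]:
                best = (cc, sx, sy)
    cc, sx, sy = best
    prop_az = np.degrees(np.arctan2(sx, sy))        # propagation azimuth
    return cc, (prop_az + 180.0) % 360.0, np.hypot(sx, sy)  # back-az, |s|
```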
Tran, Duong D; Huang, Wei; Bohn, Alexander C; Wang, Delin; Gong, Zheng; Makris, Nicholas C; Ratilal, Purnima
2014-06-01
Sperm whales in the New England continental shelf and slope were passively localized, in both range and bearing, and classified using a single low-frequency (<2500 Hz), densely sampled, towed horizontal coherent hydrophone array system. Whale bearings were estimated using time-domain beamforming that provided high coherent array gain in sperm whale click signal-to-noise ratio. Whale ranges from the receiver array center were estimated using the moving array triangulation technique from a sequence of whale bearing measurements. Multiple concurrently vocalizing sperm whales, in the far-field of the horizontal receiver array, were distinguished and classified based on their horizontal spatial locations and the inter-pulse intervals of their vocalized click signals. The dive profile was estimated for a sperm whale in the shallow waters of the Gulf of Maine with 160 m water-column depth located close to the array's near-field where depth estimation was feasible by employing time difference of arrival of the direct and multiply reflected click signals received on the horizontal array. By accounting for transmission loss modeled using an ocean waveguide-acoustic propagation model, the sperm whale detection range was found to exceed 60 km in low to moderate sea state conditions after coherent array processing.
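The range estimation step can be illustrated with a simplified, static version of moving array triangulation: bearings measured at successive positions along the tow track each define a line, and the least-squares intersection of those lines estimates the whale's horizontal position. A sketch under those assumptions (flat geometry, bearings in degrees clockwise from north):

```python
import numpy as np

def triangulate_bearings(positions, bearings_deg):
    """Least-squares source position from bearings taken at known
    receiver positions (a simplified moving-array triangulation).

    Each bearing defines a line; the solution minimizes the summed
    squared perpendicular distances to all bearing lines.
    """
    A, b = [], []
    for (x0, y0), theta in zip(positions, np.radians(bearings_deg)):
        # Bearing clockwise from north -> direction (sin t, cos t);
        # (cos t, -sin t) is the corresponding line normal.
        nx, ny = np.cos(theta), -np.sin(theta)
        A.append([nx, ny])
        b.append(nx * x0 + ny * y0)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol  # estimated (x, y) of the source

# Receiver at three points along a tow line, synthetic source at (5, 10) km
pos = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
brg = [np.degrees(np.arctan2(5 - x, 10 - y)) for x, y in pos]
print(triangulate_bearings(pos, brg))   # ~ [5. 10.]
```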
NASA Astrophysics Data System (ADS)
Rui, Zhenhua
This study analyzes historical cost data of 412 pipelines and 220 compressor stations. On the basis of this analysis, the study also evaluates the feasibility of an Alaska in-state gas pipeline using Monte Carlo simulation techniques. Analysis of pipeline construction costs shows that component costs, shares of cost components, and learning rates for material and labor costs vary by diameter, length, volume, year, and location. Overall average learning rates for pipeline material and labor costs are 6.1% and 12.4%, respectively. Overall average cost shares for pipeline material, labor, miscellaneous, and right of way (ROW) are 31%, 40%, 23%, and 7%, respectively. Regression models are developed to estimate pipeline component costs for different lengths, cross-sectional areas, and locations. An analysis of inaccuracy in pipeline cost estimation demonstrates that the cost estimation of pipeline cost components is biased except in the case of total costs. Overall overrun rates for pipeline material, labor, miscellaneous, ROW, and total costs are 4.9%, 22.4%, -0.9%, 9.1%, and 6.5%, respectively, and project size, capacity, diameter, location, and year of completion have different degrees of impact on cost overruns of pipeline cost components. Analysis of compressor station costs shows that component costs, shares of cost components, and learning rates for material and labor costs vary in terms of capacity, year, and location. Average learning rates for compressor station material and labor costs are 12.1% and 7.48%, respectively. Overall average cost shares of material, labor, miscellaneous, and ROW are 50.6%, 27.2%, 21.5%, and 0.8%, respectively. Regression models are developed to estimate compressor station component costs for different capacities and locations. An investigation into inaccuracies in compressor station cost estimation demonstrates that the cost estimation for compressor stations is biased except in the case of material costs. Overall average overrun rates for compressor station material, labor, miscellaneous, land, and total costs are 3%, 60%, 2%, -14%, and 11%, respectively, and cost overruns for cost components are influenced by location and year of completion to different degrees. Monte Carlo models are developed and simulated to evaluate the feasibility of an Alaska in-state gas pipeline by assigning triangular distributions to the values of economic parameters. Simulated results show that the construction of an Alaska in-state natural gas pipeline is feasible under three scenarios: 500 million cubic feet per day (mmcfd), 750 mmcfd, and 1000 mmcfd.
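The feasibility analysis rests on Monte Carlo sampling of triangular distributions for the economic inputs. A schematic example is shown below; every number (costs, tariff, discount rate, throughput) is an illustrative placeholder rather than a value from the study:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Triangular(min, mode, max) draws for uncertain economic parameters.
capex  = rng.triangular(6e9, 8e9, 11e9, N)         # construction cost, $
tariff = rng.triangular(1.5, 2.0, 2.8, N)          # transport tariff, $/Mcf
opex   = rng.triangular(0.10e9, 0.15e9, 0.25e9, N) # annual operating cost, $
volume = 500e3 * 365                               # Mcf/yr at 500 mmcfd
r, life = 0.10, 30                                 # discount rate, years

annuity = (1 - (1 + r) ** -life) / r               # present-value factor
npv = (tariff * volume - opex) * annuity - capex   # simple NPV per draw

print(f"P(NPV > 0) = {np.mean(npv > 0):.2f}")
print(f"median NPV = ${np.median(npv) / 1e9:.1f} B")
```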
Rupture Propagation Imaging of Fluid Induced Events at the Basel EGS Project
NASA Astrophysics Data System (ADS)
Folesky, Jonas; Kummerow, Jörn; Shapiro, Serge A.
2014-05-01
The analysis of rupture properties using rupture propagation imaging techniques is a fast-developing field of research in global seismology. Usually, rupture fronts of large to megathrust earthquakes are the subject of recent studies, e.g., the 2004 Sumatra-Andaman earthquake or the 2011 Tohoku, Japan earthquake. The back projection technique is the most prominent technique in this field. Here the seismograms recorded at an array or at a seismic network are back-shifted to a grid of possible source locations via a special stacking procedure. This can provide information on the energy release and energy distribution of the rupture, which can then be used to find estimates of event properties like location, rupture direction, rupture speed, or length. The procedure is fast and direct, and it relies only on a reasonable velocity model. Thus it is a good way to rapidly estimate the rupture properties, and it can be used to confirm independently achieved event information. We adopted the back projection technique and put it in a microseismic context. We demonstrated its usage for multiple synthetic ruptures within a reservoir model of microseismic scale in earlier works. Our motivation hereby is the occurrence of relatively large induced seismic events at a number of stimulated geothermal reservoirs or waste disposal sites, having magnitudes ML ≥ 3.4 and yielding rupture lengths of several hundred meters. We use the configuration of the seismic network and reservoir properties of the Basel Geothermal Site to build a synthetic model of a rupture by modeling the wave field of multiple spatio-temporally separated single sources using finite-difference modeling. The focus of this work is the application of the back projection technique and the demonstration of its feasibility for retrieving the rupture properties of real fluid-induced events. We take four microseismic events with magnitudes from ML 3.1 to 3.4 and reconstruct source parameters like location, orientation, and length. By comparison with our synthetic results as well as independent localization and source mechanism studies in this area, we can show that the obtained results are reasonable and that the application of back projection imaging is not only possible for microseismic datasets of respective quality, but that it provides important additional insights into the rupture process.
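A bare-bones version of the back projection stack described above might look as follows. It assumes straight rays at a constant velocity and a windowed stack-energy measure, which is one common choice among several; the array shapes and names are illustrative:

```python
import numpy as np

def back_project(seis, sta_xy, grid_xy, dt, v, win_s=0.5):
    """Back-project array seismograms onto a grid of candidate sources.

    For each node, traces are shifted by straight-ray travel times at
    constant speed v and stacked; the peak short-window stack energy
    maps where coherent rupture energy was radiated.

    seis    : (n_sta, n_samp) seismograms sampled at interval dt (s)
    sta_xy  : (n_sta, 2) station positions (km); grid_xy: (n_node, 2)
    """
    n_samp = seis.shape[1]
    w = max(int(win_s / dt), 1)
    energy = np.zeros(len(grid_xy))
    for g, (gx, gy) in enumerate(grid_xy):
        tt = np.hypot(sta_xy[:, 0] - gx, sta_xy[:, 1] - gy) / v
        shifts = np.round(tt / dt).astype(int)
        stack = np.zeros(n_samp)
        for trace, k in zip(seis, shifts):
            stack[: n_samp - k] += trace[k:]        # align on origin time
        energy[g] = np.max(np.convolve(stack ** 2, np.ones(w), "same"))
    return energy  # argmax over nodes estimates the radiating location
```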
Si, Liang; Wang, Qian
2016-01-01
Through the use of the wave reflection from any damage in a structure, a Hilbert spectral analysis-based rapid multi-damage identification (HSA-RMDI) technique with piezoelectric wafer sensor arrays (PWSA) is developed to monitor and identify the presence, location and severity of damage in carbon fiber composite structures. The capability of the rapid multi-damage identification technique to extract and estimate hidden significant information from the collected data and to provide a high-resolution energy-time spectrum can be employed to successfully interpret the interactions of Lamb waves with single/multiple damage. Nevertheless, to accomplish the precise positioning and effective quantification of multiple damage in a composite structure, two functional metrics from the RMDI technique are proposed and used in damage identification: the energy density metric and the energy time-phase shift metric. In the designed damage experimental tests, damage invisible to the naked eye, especially delaminations, was detected in the leftward propagating waves as well as in the selected sensor responses, where the time-phase shift spectra could locate the multiple damage whereas the energy density spectra were used to quantify it. The increasing damage was shown to follow a linear trend calculated by the RMDI technique. All damage cases considered fully demonstrated the potential of the developed RMDI technique as an effective online damage inspection and assessment tool. PMID:27153070
NASA Astrophysics Data System (ADS)
Matsumoto, H.; Haralabus, G.; Zampolli, M.; Özel, N. M.
2016-12-01
Underwater acoustic signal waveforms recorded during the 2015 Chile earthquake (Mw 8.3) by the hydrophones of hydroacoustic station HA03, located at the Juan Fernandez Islands, are analyzed. HA03 is part of the Comprehensive Nuclear-Test-Ban Treaty International Monitoring System. The interest in the particular data set stems from the fact that HA03 is located only approximately 700 km SW from the epicenter of the earthquake. This makes it possible to study aspects of the signal associated with the tsunamigenic earthquake, which would be more difficult to detect had the hydrophones been located far from the source. The analysis shows that the direction of arrival of the T phase can be estimated by means of a three-step preprocessing technique which circumvents spatial aliasing caused by the hydrophone spacing, the latter being large compared to the wavelength. Following this preprocessing step, standard frequency-wave number analysis (F-K analysis) can accurately estimate back azimuth and slowness of T-phase signals. The data analysis also shows that the dispersive tsunami signals can be identified by the water-column hydrophones at the time when the tsunami surface gravity wave reaches the station.
Development of a method for personal, spatiotemporal exposure assessment.
Adams, Colby; Riggs, Philip; Volckens, John
2009-07-01
This work describes the development and evaluation of a high-resolution, space- and time-referenced sampling method for personal exposure assessment to airborne particulate matter (PM). This method integrates continuous measures of personal PM levels with the corresponding location-activity (i.e., work/school, home, transit) of the subject. Monitoring equipment includes a small, portable global positioning system (GPS) receiver, a miniature aerosol nephelometer, and an ambient temperature monitor to estimate the location, time, and magnitude of personal exposure to particulate matter air pollution. Precision and accuracy of each component, as well as the integrated method performance, were tested in a combination of laboratory and field tests. Spatial data were apportioned into pre-determined location-activity categories (i.e., work/school, home, transit) with a simple, temporospatially based algorithm. The apportioning algorithm was extremely effective, with an overall accuracy of 99.6%. This method allows examination of an individual's estimated exposure through space and time, which may provide new insights into exposure-activity relationships not possible with traditional exposure assessment techniques (i.e., time-integrated, filter-based measurements). Furthermore, the method is applicable to any contaminant or stressor that can be measured on an individual with a direct-reading sensor.
Collaborative Indoor Access Point Localization Using Autonomous Mobile Robot Swarm.
Awad, Fahed; Naserllah, Muhammad; Omar, Ammar; Abu-Hantash, Alaa; Al-Taj, Abrar
2018-01-31
Localization of access points has become an important research problem due to the wide range of applications it addresses, such as dismantling critical security threats caused by rogue access points or optimizing the wireless coverage of access points within a service area. Existing proposed solutions have mostly relied on theoretical hypotheses or computer simulation to demonstrate the efficiency of their methods. The techniques that rely on estimating the distance using samples of the received signal strength usually assume prior knowledge of the signal propagation characteristics of the indoor environment at hand and tend to take a relatively large number of uniformly distributed random samples. This paper presents an efficient and practical collaborative approach to detect the location of an access point in an indoor environment without any prior knowledge of the environment. The proposed approach comprises a swarm of wirelessly connected mobile robots that collaboratively and autonomously collect a relatively small number of non-uniformly distributed random samples of the access point's received signal strength. These samples are used to efficiently and accurately estimate the location of the access point. The experimental testing verified that the proposed approach can identify the location of the access point in an accurate and efficient manner.
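For context, a common way to turn received-signal-strength samples into an access-point fix is nonlinear least squares under a log-distance path-loss model. The sketch below assumes known path-loss parameters (p0, exponent), which is exactly the prior knowledge the paper's approach avoids, so it illustrates the conventional baseline rather than the proposed method:

```python
import numpy as np
from scipy.optimize import least_squares

def locate_ap(robot_xy, rss_dbm, p0=-40.0, n_exp=2.5):
    """Estimate the access-point position from RSS samples.

    Assumes RSS = p0 - 10 * n_exp * log10(d), with p0 (RSS at 1 m)
    and the path-loss exponent n_exp treated as known.
    """
    def residuals(xy):
        d = np.hypot(robot_xy[:, 0] - xy[0], robot_xy[:, 1] - xy[1])
        return rss_dbm - (p0 - 10.0 * n_exp * np.log10(np.maximum(d, 0.1)))
    return least_squares(residuals, x0=robot_xy.mean(axis=0)).x

# Synthetic check: AP at (3, 7) m, noisy samples at robot waypoints
rng = np.random.default_rng(1)
pts = rng.uniform(0, 10, size=(25, 2))
d = np.hypot(pts[:, 0] - 3, pts[:, 1] - 7)
rss = -40 - 25 * np.log10(d) + rng.normal(0, 1, 25)
print(locate_ap(pts, rss))   # ~ (3, 7)
```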
Investigation of the tip clearance flow inside and at the exit of a compressor rotor passage
NASA Technical Reports Server (NTRS)
Pandya, A.; Lakshminarayana, B.
1982-01-01
The nature of the tip clearance flow in a moderately loaded compressor rotor is studied. The measurements were taken inside the clearance between the annulus-wall casing and the rotor blade tip. These measurements were obtained using a stationary two-sensor hot-wire probe in combination with an ensemble averaging technique. The flowfield was surveyed at various radial locations and at ten axial locations, four of which were inside the blade passage in the clearance region and the remaining six outside the passage. Variations of the mean flow properties in the tangential and the radial directions at various axial locations were derived from the data. Variation of the leakage velocity at different axial stations and the annulus-wall boundary layer profiles from passage-averaged mean velocities were also estimated.
Sousa, Marcelo R; Jones, Jon P; Frind, Emil O; Rudolph, David L
2013-01-01
In contaminant travel from ground surface to groundwater receptors, the time taken in travelling through the unsaturated zone is known as the unsaturated zone time lag. Depending on the situation, this time lag may or may not be significant within the context of the overall problem. A method is presented for assessing the importance of the unsaturated zone in the travel time from source to receptor in terms of estimates of both the absolute and the relative advective times. A choice of different techniques for both unsaturated and saturated travel time estimation is provided. This method may be useful for practitioners to decide whether to incorporate unsaturated processes in conceptual and numerical models and can also be used to roughly estimate the total travel time between points near ground surface and a groundwater receptor. This method was applied to a field site located in a glacial aquifer system in Ontario, Canada. Advective travel times were estimated using techniques with different levels of sophistication. The application of the proposed method indicates that the time lag in the unsaturated zone is significant at this field site and should be taken into account. For this case, sophisticated and simplified techniques lead to similar assessments when the same knowledge of the hydraulic conductivity field is assumed. When there is significant uncertainty regarding the hydraulic conductivity, simplified calculations did not lead to a conclusive decision. Copyright © 2012 Elsevier B.V. All rights reserved.
Fuzzy mobile-robot positioning in intelligent spaces using wireless sensor networks.
Herrero, David; Martínez, Humberto
2011-01-01
This work presents the development and experimental evaluation of a method based on fuzzy logic to locate mobile robots in an Intelligent Space using wireless sensor networks (WSNs). The problem consists of locating a mobile node using only inter-node range measurements, which are estimated from radio frequency signal strength attenuation. The sensor model of these measurements is very noisy and unreliable. The proposed method makes use of fuzzy logic for modeling and dealing with such uncertain information. In addition, the proposed approach is compared with a probabilistic technique, showing that the fuzzy approach is able to handle highly uncertain situations that are difficult to manage by well-known localization methods.
Multiple-Event Seismic Location Using the Markov-Chain Monte Carlo Technique
NASA Astrophysics Data System (ADS)
Myers, S. C.; Johannesson, G.; Hanley, W.
2005-12-01
We develop a new multiple-event location algorithm (MCMCloc) that utilizes the Markov-Chain Monte Carlo (MCMC) method. Unlike most inverse methods, the MCMC approach produces a suite of solutions, each of which is consistent with observations and prior estimates of data and model uncertainties. Model parameters in MCMCloc consist of event hypocenters and travel-time predictions. Data are arrival time measurements and phase assignments. Posterior estimates of event locations, path corrections, pick errors, and phase assignments are made through analysis of the posterior suite of acceptable solutions. Prior uncertainty estimates include correlations between travel-time predictions, correlations between measurement errors, the probability of misidentifying one phase for another, and the probability of spurious data. Inclusion of prior constraints on location accuracy allows direct utilization of ground-truth locations or well-constrained location parameters (e.g., from InSAR) that aid in the accuracy of the solution. Implementation of a correlation structure for travel-time predictions allows MCMCloc to operate over arbitrarily large geographic areas. The transition in behavior between a multiple-event locator for tightly clustered events and a single-event locator for solitary events is controlled by the spatial correlation of travel-time predictions. We test the MCMC locator on a regional data set of Nevada Test Site nuclear explosions. Event locations and origin times are known for these events, allowing us to test the features of MCMCloc using a high-quality ground truth data set. Preliminary tests suggest that MCMCloc provides excellent relative locations, often outperforming traditional multiple-event location algorithms, and excellent absolute locations are attained when constraints from one or more ground truth events are included. When phase assignments are switched, we find that MCMCloc properly corrects the error when predicted arrival times are separated by several seconds. In cases where the predicted arrival times are within the combined uncertainty of prediction and measurement errors, MCMCloc determines the probability of one or the other phase assignment and propagates this uncertainty into all model parameters. We find that MCMCloc is a promising method for simultaneously locating large, geographically distributed data sets. Because we incorporate prior knowledge on many parameters, MCMCloc is ideal for combining trusted data with data of unknown reliability. This work was performed under the auspices of the U.S. Department of Energy by the University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48, Contribution UCRL-ABS-215048
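The flavor of the MCMC approach can be conveyed with a toy Metropolis sampler for a single event. Unlike MCMCloc it omits correlated travel-time predictions, phase reassignment, and multiple events; all names and values here are illustrative:

```python
import numpy as np

def mcmc_locate(sta_xy, t_obs, v=6.0, sigma=0.1, n_iter=20000, step=0.5):
    """Metropolis sampler for a single-event location (x, y, t0).

    Straight-ray travel times at constant speed v; Gaussian pick errors.
    The posterior sample cloud plays the role of the suite of acceptable
    solutions: its spread is the location uncertainty.
    """
    rng = np.random.default_rng(0)

    def loglike(m):
        tt = m[2] + np.hypot(sta_xy[:, 0] - m[0], sta_xy[:, 1] - m[1]) / v
        return -0.5 * np.sum(((t_obs - tt) / sigma) ** 2)

    m = np.array([sta_xy[:, 0].mean(), sta_xy[:, 1].mean(), 0.0])
    ll, samples = loglike(m), []
    for _ in range(n_iter):
        cand = m + rng.normal(0, step, 3)
        ll_c = loglike(cand)
        if np.log(rng.random()) < ll_c - ll:    # Metropolis acceptance
            m, ll = cand, ll_c
        samples.append(m.copy())
    return np.array(samples)   # posterior draws of (x, y, origin time)
```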
Wild turkey poult survival in southcentral Iowa
Hubbard, M.W.; Garner, D.L.; Klaas, E.E.
1999-01-01
Poult survival is key to understanding annual change in wild turkey (Meleagris gallopavo) populations. Survival of eastern wild turkey poults (M. g. silvestris) 0-4 weeks posthatch was studied in southcentral Iowa during 1994-97. Survival estimates of poults were calculated based on biweekly flush counts and daily locations acquired via radiotelemetry. Poult survival averaged 0.52 ± 0.14 (x̄ ± SE) for telemetry counts and 0.40 ± 0.15 for flush counts. No within-year or across-year differences were detected between estimation techniques. More than 72% (n = 32) of documented poult mortality occurred ≤14 days posthatch, and mammalian predation accounted for 92.9% of documented mortality. If mortality agents are not of concern, we suggest biologists conduct 4-week flush counts to obtain poult survival estimates for use in population models and development of harvest recommendations.
NASA Astrophysics Data System (ADS)
Zhang, Dongbo; Peng, Yinghui; Yi, Yao; Shang, Xingyu
2013-10-01
Detection of red lesions [hemorrhages (HRs) and microaneurysms (MAs)] is crucial for the diagnosis of early diabetic retinopathy. A method based on background estimation and adapted to the specific characteristics of HRs and MAs is proposed. Candidate red lesions are located by background estimation and a Mahalanobis distance measure, and then some adaptive postprocessing techniques, which include vessel detection, nonvessel exclusion based on shape analysis, and noise point exclusion by a double-ring filter (used only for MA detection), are applied to remove nonlesion pixels. The method is evaluated on our collected image dataset, and experimental results show that it is better than or comparable to previous approaches. It is effective in reducing the false-positive and false-negative results that arise from incomplete and inaccurate vessel structure.
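The screening step, flagging candidate lesions by Mahalanobis distance from an estimated background, can be sketched as follows. The background array would come from a smoothing step (e.g., a large median filter), the multi-channel image layout and the threshold are assumptions for illustration:

```python
import numpy as np

def mahalanobis_candidates(image, background, threshold=3.0):
    """Flag candidate red-lesion pixels by Mahalanobis distance from the
    estimated background (the screening step only; vessel removal and
    the double-ring filter are separate stages).

    image, background : (H, W, C) arrays with C color channels.
    """
    residual = image.astype(float) - background       # background-subtracted
    flat = residual.reshape(-1, residual.shape[-1])   # pixels x channels
    mu = flat.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(flat, rowvar=False))
    diff = flat - mu
    # Squared Mahalanobis distance of each pixel from the residual cloud
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    return (np.sqrt(d2) > threshold).reshape(residual.shape[:-1])
```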
NASA Astrophysics Data System (ADS)
Ibraheem, Ismael M.; Elawadi, Eslam A.; El-Qady, Gad M.
2018-03-01
The Wadi El Natrun area in Egypt is located west of the Nile Delta on both sides of the Cairo-Alexandria desert road, between 30°00′ and 30°40′N latitude, and 29°40′ and 30°40′E longitude. The name refers to the NW-SE trending depression located in the area and containing lakes that produce natron salt. Although the area is promising for oil and gas exploration as well as agricultural projects, geophysical studies carried out in the area are limited to the regional seismic surveys accomplished by oil companies. This study presents the interpretation of airborne magnetic data to map the structural architecture and depth to basement of the study area. This interpretation was facilitated by applying different data enhancement and processing techniques. These techniques included filters (regional-residual separation), derivatives, and depth estimation using spectral analysis and Euler deconvolution. The results were refined using 2-D forward modeling along three profiles. Based on the depth estimation techniques, the estimated depth to the basement surface ranges from 2.25 km to 5.43 km, while results of the two-dimensional forward modeling show that the depth of the basement surface ranges from 2.2 km to 4.8 km. The dominant tectonic trends in the study area at deep levels are NW (Suez Trend), NNW, NE, and ENE (Syrian Arc System trend). The older ENE trend, which dominates the northwestern desert, is overprinted in the study area by relatively recent NW and NE trends, whereas the tectonic trends at shallow levels are NW, ENE, NNE (Aqaba Trend), and NE. The predominant structural trend for both deep and shallow structures is the NW trend. The results of this study can be used to better understand deep-seated basement structures and to support decisions with regard to the development of agriculture, industrial areas, and oil and gas exploration in northern Egypt.
NASA Astrophysics Data System (ADS)
Khatir, Samir; Dekemele, Kevin; Loccufier, Mia; Khatir, Tawfiq; Abdel Wahab, Magd
2018-02-01
In this paper, a technique is presented for the detection and localization of an open crack in beam-like structures using experimentally measured natural frequencies and the Particle Swarm Optimization (PSO) method. The technique considers the variation in local flexibility near the crack. The natural frequencies of a cracked beam are determined experimentally and numerically using the Finite Element Method (FEM). The optimization algorithm is programmed in MATLAB. The algorithm is used to estimate the location and severity of a crack by minimizing the differences between measured and calculated frequencies. The method is verified using experimentally measured data on a cantilever steel beam. The Fourier transform is adopted to improve the frequency resolution. The results demonstrate the good accuracy of the proposed technique.
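A minimal particle swarm optimizer of the kind used for this inverse problem fits in a few lines. The sketch below is generic PSO with standard inertia and acceleration constants, not the authors' tuned MATLAB implementation; the objective would wrap the finite element model, and `model_freqs` is a hypothetical placeholder:

```python
import numpy as np

def pso_minimize(objective, bounds, n_particles=30, n_iter=200, seed=0):
    """Minimal particle swarm optimizer over a box-constrained space,
    used here to fit crack location/severity to measured frequencies."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))   # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest, pval = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pval)]                        # global best
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pval
        pbest[better], pval[better] = x[better], f[better]
        g = pbest[np.argmin(pval)]
    return g, pval.min()

# Illustrative use: `measured` holds experimental frequencies and
# `model_freqs` (hypothetical) would wrap the FE model of the beam.
# best, err = pso_minimize(
#     lambda p: np.sum((measured - model_freqs(p[0], p[1])) ** 2),
#     bounds=[(0.0, 1.0), (0.0, 0.5)])  # crack position (x/L), depth ratio
```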
NASA Technical Reports Server (NTRS)
Talley, Tom
2003-01-01
Johnson Space Center (JSC) is designing a small, remotely controlled vehicle that will carry two color and one black-and-white video cameras in space. The device will be launched from and retrieved by the Space Vehicle and be used for remote viewing. Off-the-shelf cellular technology is being used as the basis for the communication system design. Existing plans include using multiple antennas to make simultaneous estimates of the azimuth of the MiniAERCam from several sites on the Space Station and using triangulation to find the location of the device. Adding range detection capability to each of the nodes on the Space Vehicle would allow an estimate of the location of the MiniAERCam to be made at each Communication And Telemetry Box (CATBox) independent of all the other communication nodes. This project will investigate the techniques used by the Global Positioning System (GPS) to achieve accurate positioning information and adapt those strategies that are appropriate to the design of the CATBox range determination system.
Multi Seasonal and Diurnal Characterization of Sensible Heat Flux in an Arid Land Environment
NASA Astrophysics Data System (ADS)
Al-Mashharawi, S.; Aragon, B.; McCabe, M.
2017-12-01
In sparsely vegetated arid and semi-arid regions, the available energy is transformed primarily into sensible heat, with little to no energy partitioned into latent heat. The characterization of bare-soil arid environments is rather poorly understood in the context of local, regional, and global energy budgets. Using data from a long-term surface layer scintillometer and a co-located meteorological installation, we examine the diurnal and seasonal patterns of sensible heat flux and the net radiation to soil heat flux ratio. We do this over a bare desert soil located adjacent to an irrigated agricultural field in the central region of Saudi Arabia. The results of this exploratory analysis can be used to inform remote sensing techniques for surface flux estimation, to derive and monitor soil heat flux dynamics, to estimate the heat transfer resistance and the thermal roughness length over bare soils, and to better inform efforts that model the advective effects that complicate the accurate representation of agricultural energy budgets in the arid zone.
Methods to estimate historical daily streamflow for ungaged stream locations in Minnesota
Lorenz, David L.; Ziegeweid, Jeffrey R.
2016-03-14
Effective and responsible management of water resources relies on a thorough understanding of the quantity and quality of available water; however, streamgages cannot be installed at every location where streamflow information is needed. Therefore, methods for estimating streamflow at ungaged stream locations need to be developed. This report presents a statewide study to develop methods to estimate the structure of historical daily streamflow at ungaged stream locations in Minnesota. Historical daily mean streamflow at ungaged locations in Minnesota can be estimated by transferring streamflow data at streamgages to the ungaged location using the QPPQ method. The QPPQ method uses flow-duration curves at an index streamgage, relying on the assumption that exceedance probabilities are equivalent between the index streamgage and the ungaged location, and estimates the flow at the ungaged location using the estimated flow-duration curve. Flow-duration curves at ungaged locations can be estimated using recently developed regression equations that have been incorporated into StreamStats (http://streamstats.usgs.gov/), which is a U.S. Geological Survey Web-based interactive mapping tool that can be used to obtain streamflow statistics, drainage-basin characteristics, and other information for user-selected locations on streams.
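The QPPQ transfer itself reduces to two interpolations: daily flow to exceedance probability through the index gage's flow-duration curve, then that probability back to flow through the ungaged site's curve. A sketch, with synthetic flow-duration curves standing in for the StreamStats regression output:

```python
import numpy as np

def qppq(q_index, fdc_index, fdc_ungaged):
    """QPPQ transfer: flow at the index gage -> exceedance probability ->
    flow at the ungaged site, assuming equivalent exceedance probabilities.

    Each FDC is a (probabilities, flows) pair with probabilities ascending
    (so flows descending); the ungaged FDC would come from regression
    equations such as those in StreamStats.
    """
    p_idx, q_idx = fdc_index
    p_ung, q_ung = fdc_ungaged
    # Q -> P at the index gage (np.interp needs ascending x, so reverse)
    p = np.interp(q_index, q_idx[::-1], p_idx[::-1])
    # P -> Q at the ungaged location
    return np.interp(p, p_ung, q_ung)

# Synthetic flow-duration curves for demonstration only
p = np.linspace(0.01, 0.99, 99)
fdc_a = (p, 100.0 * np.exp(-3.0 * p))   # index streamgage
fdc_b = (p, 40.0 * np.exp(-3.0 * p))    # ungaged location (e.g., regression)
print(qppq(np.array([80.0, 20.0, 5.0]), fdc_a, fdc_b))
```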
Visible light communication and indoor positioning using a-SiCH device as receiver
NASA Astrophysics Data System (ADS)
Vieira, M. A.; Vieira, M.; Louro, P.; Vieira, P.; Fantoni, A.
2017-08-01
An indoor positioning system where trichromatic white LEDs are used both for illumination purposes and as transmitters, with an optical processor based on a-SiC:H technology as the mobile receiver, is presented. An OOK modulation scheme is used, providing a good trade-off between system performance and implementation complexity. The relationship between the transmitted data and the received digital output levels is decoded. The system topology for positioning is a self-positioning system in which the measuring unit is mobile. This unit receives the signals of several transmitters in known locations and has the capability to compute its location based on the measured signals. LED bulbs work as transmitters, sending information together with different IDs related to their physical locations. A triangular topology for the unit cell is analysed. A 2D localization design, demonstrated by a prototype implementation, is presented. Fine-grained indoor localization is tested. The received signal is used in coded multiplexing techniques for supporting communications and navigation concomitantly on the same channel. The position is estimated through the visible multilateration method using several non-collinear transmitters. The location and motion information is found by mapping positions and estimating the location areas. Data analysis showed that by using a pin-pin double photodiode based on an a-SiC:H heterostructure as receiver, and RGB LEDs as transmitters, it is possible not only to determine the mobile target's position but also to infer the motion direction over time, along with the received information in each position.
NASA Technical Reports Server (NTRS)
Olds, John R.; Marcus, Leland
2002-01-01
This paper is written in support of the on-going research into conceptual space vehicle design conducted at the Space Systems Design Laboratory (SSDL) at the Georgia Institute of Technology. Research at the SSDL follows a sequence of a number of the traditional aerospace disciplines. The sequence of disciplines and the interrelationships among them are shown in the Design Structure Matrix (DSM). The discipline of Weights and Sizing occupies a central location in the design of a new space vehicle. Weights and Sizing interacts, either in a feed-forward or feedback manner, with every other discipline in the DSM. Because of this principal location, accuracy in Weights and Sizing is integral to producing an accurate model of a space vehicle concept. Instead of using conceptual-level techniques, a simplified Finite Element Analysis (FEA) technique is described as applied to the problem of the Liquid Oxygen (LOX) tank bending loads applied to the forward Liquid Hydrogen (LH2) tank of the Georgia Tech Air Breathing Launch Vehicle (ABLV).
Valtierra, Robert D; Glynn Holt, R; Cholewiak, Danielle; Van Parijs, Sofie M
2013-09-01
Multipath localization techniques have not previously been applied to baleen whale vocalizations due to difficulties in application to tonal vocalizations. Here it is shown that an autocorrelation method coupled with the direct reflected time difference of arrival localization technique can successfully resolve location information. A derivation was made to model the autocorrelation of a direct signal and its overlapping reflections to illustrate that an autocorrelation may be used to extract reflection information from longer duration signals containing a frequency sweep, such as some calls produced by baleen whales. An analysis was performed to characterize the difference in behavior of the autocorrelation when applied to call types with varying parameters (sweep rate, call duration). The method's feasibility was tested using data from playback transmissions to localize an acoustic transducer at a known depth and location. The method was then used to estimate the depth and range of a single North Atlantic right whale (Eubalaena glacialis) and humpback whale (Megaptera novaeangliae) from two separate experiments.
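The autocorrelation step can be illustrated on a synthetic swept call overlapped by a delayed, attenuated, phase-inverted surface echo; the secondary autocorrelation peak recovers the direct-reflected time difference that feeds the localization. The sampling rate, sweep, and delay below are invented for the demonstration:

```python
import numpy as np

def reflection_delay(signal, fs, min_lag_s=0.005):
    """Estimate the direct-to-reflected delay from the autocorrelation.

    For a frequency-swept call overlapped by its surface echo, the
    autocorrelation shows a secondary peak at the direct-reflected time
    difference; the absolute value handles the echo's sign flip.
    """
    x = signal - np.mean(signal)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags >= 0
    ac /= ac[0]
    k0 = int(min_lag_s * fs)             # skip the zero-lag main lobe
    return (k0 + np.argmax(np.abs(ac[k0:]))) / fs

# Synthetic upsweep plus a delayed, attenuated, phase-inverted echo
fs = 8000
t = np.arange(0, 0.5, 1.0 / fs)
sweep = np.sin(2 * np.pi * (100.0 * t + 400.0 * t ** 2))
k = int(0.012 * fs)                      # true delay: 12 ms
echo = np.zeros_like(sweep)
echo[k:] = -0.6 * sweep[:-k]
print(reflection_delay(sweep + echo, fs))   # ~0.012 s
```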
NASA Technical Reports Server (NTRS)
Kuntz, Todd A.; Wadley, Haydn N. G.; Black, David R.
1993-01-01
An X-ray technique for the measurement of internal residual strain gradients near the continuous reinforcements of metal matrix composites has been investigated. The technique utilizes high intensity white X-ray radiation from a synchrotron radiation source to obtain energy spectra from small (0.001 cu mm) volumes deep within composite samples. The viability of the technique was tested using a model system with 800 micron Al203 fibers and a commercial purity titanium matrix. Good agreement was observed between the measured residual radial and hoop strain gradients and those estimated from a simple elastic concentric cylinders model. The technique was then used to assess the strains near (SCS-6) silicon carbide fibers in a Ti-14Al-21Nb matrix after consolidation processing. Reasonable agreement between measured and calculated strains was seen provided the probe volume was located 50 microns or more from the fiber/matrix interface.
The Role of Experience in Location Estimation: Target Distributions Shift Location Memory Biases
ERIC Educational Resources Information Center
Lipinski, John; Simmering, Vanessa R.; Johnson, Jeffrey S.; Spencer, John P.
2010-01-01
Research based on the Category Adjustment model concluded that the spatial distribution of target locations does not influence location estimation responses [Huttenlocher, J., Hedges, L., Corrigan, B., & Crawford, L. E. (2004). Spatial categories and the estimation of location. "Cognition, 93", 75-97]. This conflicts with earlier results showing…
Diffusion MRI: literature review in salivary gland tumors.
Attyé, A; Troprès, I; Rouchy, R-C; Righini, C; Espinoza, S; Kastler, A; Krainik, A
2017-07-01
Surgical resection is currently the best treatment for salivary gland tumors. A reliable magnetic resonance imaging mapping, encompassing tumor grade, location, and extension, may assist safe and effective tumor resection and provide better information for patients regarding potential risks and morbidity after surgical intervention. However, direct examination of the tumor grade and extension using conventional morphological MRI remains difficult, often requiring contrast media injection and complex algorithms on perfusion imaging to estimate the degree of malignancy. In addition, the contrast-enhanced MRI technique may be problematic due to the recently demonstrated gadolinium accumulation in the dentate nucleus of the cerebellum. Significant developments in magnetic resonance diffusion imaging, involving voxel-based quantitative analysis through the measurement of the apparent diffusion coefficient, have enhanced our knowledge of the different histopathological salivary tumor grades. Other diffusion imaging-derived techniques, including high-order tractography models, have recently demonstrated their usefulness in assessing the facial nerve location in the parotid tumor context. None of these imaging techniques requires contrast media injection. Our review starts by outlining the physical basis of diffusion imaging, before discussing findings from diagnostic studies testing its usefulness in assessing salivary gland tumors with diffusion MRI. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Richardson, Claire; Rutherford, Shannon; Agranovski, Igor
2018-06-01
Given the significance of mining as a source of particulates, accurate characterization of emissions is important for the development of appropriate emission estimation techniques for use in modeling predictions and to inform regulatory decisions. The currently available emission estimation methods for Australian open-cut coal mines relate primarily to total suspended particulates and PM10 (particulate matter with an aerodynamic diameter <10 μm), and limited data are available relating to the PM2.5 (<2.5 μm) size fraction. To provide an initial analysis of the appropriateness of the currently available emission estimation techniques, this paper presents results of sampling completed at three open-cut coal mines in Australia. The monitoring data demonstrate that the particulate size fraction varies for different mining activities, and that the region in which the mine is located influences the characteristics of the particulates emitted to the atmosphere. The proportion of fine particulates in the sample increased with distance from the source, with the coarse fraction being a more significant proportion of total suspended particulates close to the source of emissions. In terms of particulate composition, the results demonstrate that the particulate emissions are predominantly sourced from naturally occurring geological material, and coal comprises less than 13% of the overall emissions. The size fractionation exhibited by the sampling data sets is similar to that adopted in current Australian emission estimation methods but differs from the size fractionation presented in the U.S. Environmental Protection Agency methodology. Development of region-specific emission estimation techniques for PM10 and PM2.5 from open-cut coal mines is necessary to allow accurate prediction of particulate emissions to inform regulatory decisions and for use in modeling predictions. Comprehensive air quality monitoring was undertaken, and corresponding recommendations were provided.
Bombing Target Identification from Limited Transect Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, Barry L.; Hathaway, John E.; Pulsipher, Brent A.
2006-08-07
A series of sensor data combined with geostatistical techniques were used to determine likely target areas for a historic military aerial bombing range. Primary data consisted of magnetic anomaly information from limited magnetometer transects across the site. Secondary data included airborne LIDAR, orthophotography, and other general site characterization information. Identification of likely target areas relied primarily upon kriging estimates of magnetic anomaly densities across the site. Secondary information, such as impact crater locations, was used to refine the boundary delineations.
Semantic Image Based Geolocation Given a Map (Author’s Initial Manuscript)
2016-09-01
novel technique for detection and identification of building facades from geo-tagged reference view using the map and geometry of the building facades. We...2D map of the environment, and geometry of building facades. We evaluate our approach for building identification and geo-localization on a new...location recognition and building identification is done by matching the query view to a reference set, followed by estimation of 3D building facades
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zolotov, D. A., E-mail: zolotovden@crys.ras.ru; Buzmakov, A. V.; Elfimov, D. A.
2017-01-15
The spatial arrangement of single linear defects in a Si single crystal (input surface (111)) has been investigated by X-ray topo-tomography using laboratory X-ray sources. The experimental technique and the procedure of reconstructing a 3D image of dislocation half-loops near the Si crystal surface are described. The sizes of observed linear defects with a spatial resolution of about 10 μm are estimated.
Yoshino, Masanori; Nakatomi, Hirofumi; Kin, Taichi; Saito, Toki; Shono, Naoyuki; Nomura, Seiji; Nakagawa, Daichi; Takayanagi, Shunsaku; Imai, Hideaki; Oyama, Hiroshi; Saito, Nobuhito
2017-07-01
Successful resection of hemangioblastoma depends on preoperative assessment of the precise locations of feeding arteries and draining veins. Simultaneous 3D visualization of feeding arteries, draining veins, and surrounding structures is needed. The present study evaluated the usefulness of high-resolution 3D multifusion medical imaging (hr-3DMMI) for preoperative planning of hemangioblastoma. The hr-3DMMI combined MRI, MR angiography, thin-slice CT, and 3D rotated angiography. Surface rendering was mainly used for the creation of hr-3DMMI using multiple thresholds to create 3D models, and processing took approximately 3-5 hours. This hr-3DMMI technique was used in 5 patients for preoperative planning and the imaging findings were compared with the operative findings. Hr-3DMMI could simulate the whole 3D tumor as a unique sphere and show the precise penetration points of both feeding arteries and draining veins with the same spatial relationships as the original tumor. All feeding arteries and draining veins were found intraoperatively at the same position as estimated preoperatively, and were occluded as planned preoperatively. This hr-3DMMI technique could demonstrate the precise locations of feeding arteries and draining veins preoperatively and estimate the appropriate route for resection of the tumor. Hr-3DMMI is expected to be a very useful support tool for surgery of hemangioblastoma.
Variability of 137Cs inventory at a reference site in west-central Iran.
Bazshoushtari, Nasim; Ayoubi, Shamsollah; Abdi, Mohammad Reza; Mohammadi, Mohammad
2016-12-01
The 137Cs technique has been widely used for the evaluation of rates and patterns of soil erosion and deposition. This technique requires an accurate estimate of the values of the 137Cs inventory at the reference site. This study was conducted to evaluate the variability of the 137Cs inventory with regard to the sampling program, including sample size, distance, and sampling method, at a reference site located in the vicinity of the Fereydan district in Isfahan province, west-central Iran. Two 3 × 8 grids were established, comprising a large grid (35 m length and 8 m width) and a small grid (24 m length and 6 m width). At each grid intersection two soil samples were collected, from 0-15 cm and 15-30 cm depths, for a total of 96 soil samples from 48 sampling points. The coefficient of variation for the 137Cs inventory in the soil samples was relatively low (CV = 15%), and the sampling distances and methods used did not significantly affect the 137Cs inventories across the studied reference site. To obtain a satisfactory estimate of the mean 137Cs activity at reference sites, particularly those located in semiarid regions, it is recommended to collect at least four samples in a grid pattern 3 m apart. Copyright © 2016 Elsevier Ltd. All rights reserved.
Estimating acreage by double sampling using LANDSAT data
NASA Technical Reports Server (NTRS)
Pont, F.; Horwitz, H.; Kauth, R. (Principal Investigator)
1982-01-01
Double sampling techniques employing LANDSAT data for estimating the acreage of corn and soybeans were investigated and evaluated. The evaluation was based on estimated costs and correlations between two existing procedures having differing cost/variance characteristics, and included consideration of their individual merits when coupled with a fictional 'perfect' procedure of zero bias and variance. Two features of the analysis are: (1) the simultaneous estimation of two or more crops; and (2) the imposition of linear cost constraints among two or more types of resource. A reasonably realistic operational scenario was postulated. The costs were estimated from current experience with the measurement procedures involved, and the correlations were estimated from a set of 39 LACIE-type sample segments located in the U.S. Corn Belt. For a fixed variance of the estimate, double sampling with the two existing LANDSAT measurement procedures can result in a 25% or 50% cost reduction. Double sampling which included the fictional perfect procedure resulted in a more cost-effective combination when it was used with the lower cost/higher variance representative of the existing procedures.
Progress in Turbulence Detection via GNSS Occultation Data
NASA Technical Reports Server (NTRS)
Cornman, L. B.; Goodrich, R. K.; Axelrad, P.; Barlow, E.
2012-01-01
The increased availability of radio occultation (RO) data offers the ability to detect and study turbulence in the Earth's atmosphere. An analysis of how RO data can be used to determine the strength and location of turbulent regions is presented. This includes the derivation of a model for the power spectrum of the log-amplitude and phase fluctuations of the permittivity (or index of refraction) field. The bulk of the paper is then concerned with the estimation of the model parameters. Parameter estimators are introduced and some of their statistical properties are studied. These estimators are then applied to simulated log-amplitude RO signals. This includes the analysis of global statistics derived from a large number of realizations, as well as case studies that illustrate various specific aspects of the problem. Improvements to the basic estimation methods are discussed, and their beneficial properties are illustrated. The estimation techniques are then applied to real occultation data. Only two cases are presented, but they illustrate some of the salient features inherent in real data.
3D kinematics of mobile-bearing total knee arthroplasty using X-ray fluoroscopy.
Yamazaki, Takaharu; Futai, Kazuma; Tomita, Tetsuya; Sato, Yoshinobu; Yoshikawa, Hideki; Tamura, Shinichi; Sugamoto, Kazuomi
2015-04-01
Total knee arthroplasty (TKA) 3D kinematic analysis requires 2D/3D image registration of X-ray fluoroscopic images and a computer-aided design (CAD) model of the knee implant. However, these techniques cannot provide information on the radiolucent polyethylene insert, since the insert silhouette does not appear clearly in X-ray images. Therefore, it is difficult to obtain the 3D kinematics of the polyethylene insert, particularly the mobile-bearing insert. A technique for 3D kinematic analysis of a mobile-bearing insert used in TKA was developed using X-ray fluoroscopy. The method was tested and a clinical application was evaluated. Tantalum beads and a CAD model of the mobile-bearing TKA insert are used for 3D pose estimation of the mobile-bearing insert used in TKA using X-ray fluoroscopy. The insert model was created using four identical tantalum beads precisely located at known positions in a polyethylene insert using a specially designed insertion device. Finally, the 3D pose of the insert model was estimated using a feature-based 2D/3D registration technique, using the silhouette of beads in fluoroscopic images and the corresponding CAD insert model. In vitro testing for the repeatability of the positioning of the tantalum beads and computer simulations for 3D pose estimation of the mobile-bearing insert were performed. The pose estimation accuracy achieved was sufficient for analyzing mobile-bearing TKA kinematics (RMS error: within 1.0 mm and 1.0°, except for medial-lateral translation). In a clinical application, nine patients with mobile-bearing TKA were investigated and analyzed with respect to a deep knee bending motion. A 3D kinematic analysis technique was developed that enables accurate quantitative evaluation of mobile-bearing TKA kinematics. This method may be useful for improving implant design and optimizing TKA surgical techniques.
Guedes, R.M.C.; Calliari, L.J.; Holland, K.T.; Plant, N.G.; Pereira, P.S.; Alves, F.N.A.
2011-01-01
Time-exposure intensity (averaged) images are commonly used to locate the nearshore sandbar position (xb), based on the cross-shore locations of maximum pixel intensity (xi) of the bright bands in the images. It is not known, however, how the breaking patterns seen in Variance images (i.e., those created through the standard deviation of pixel intensity over time) are related to the sandbar locations. We investigated the suitability of both Time-exposure and Variance images for sandbar detection within a multiple bar system on the southern coast of Brazil, and verified the relation between wave breaking patterns, observed as bands of high intensity in these images, and cross-shore profiles of modeled wave energy dissipation (xD). Not only is the Time-exposure maximum pixel intensity location (xi-Ti) well related to xb, but also to the maximum pixel intensity location of Variance images (xi-Va), although the latter was typically located 15 m offshore of the former. In addition, xi-Va was observed to be better associated with xD even though xi-Ti is commonly assumed as maximum wave energy dissipation. Significant wave height (Hs) and water level (η) were observed to affect the two types of images in a similar way, with an increase in both Hs and η resulting in xi shifting offshore. This η-induced xi variability has an opposite behavior to what is described in the literature, and is likely an indirect effect of higher waves breaking farther offshore during periods of storm surges. Multiple regression models performed on xi, Hs and η allowed the reduction of the residual errors between xb and xi, yielding accurate estimates with most residuals less than 10 m. Additionally, it was found that the sandbar position was best estimated using xi-Ti (xi-Va) when xb was located shoreward (seaward) of its mean position, for both the first and the second bar. Although it is unknown whether this is an indirect hydrodynamic effect or is indeed related to the morphology, we found that this behavior can be explored to optimize sandbar estimation using video imagery, even in the absence of hydrodynamic data. © 2011 Elsevier B.V.
Tele-Autonomous control involving contact. Final Report Thesis; [object localization]
NASA Technical Reports Server (NTRS)
Shao, Lejun; Volz, Richard A.; Conway, Lynn; Walker, Michael W.
1990-01-01
Object localization and its application in tele-autonomous systems are studied. Two object localization algorithms are presented, together with methods for extracting several important types of object features. The first algorithm is based on line-segment to line-segment matching. Line range sensors are used to extract line-segment features from an object. The extracted features are matched to corresponding model features to compute the location of the object. The inputs of the second algorithm are not limited to line features. Featured points (point to point matching) and featured unit direction vectors (vector to vector matching) can also be used as inputs of the algorithm, and there is no upper limit on the number of features input. The algorithm allows the use of redundant features to find a better solution. The algorithm uses dual number quaternions to represent the position and orientation of an object and uses the least squares optimization method to find an optimal solution for the object's location. The advantage of using this representation is that the method solves for the location estimate by minimizing a single cost function associated with the sum of the orientation and position errors, and thus achieves better performance, in both accuracy and speed, than other similar algorithms. The difficulties that arise when the operator is controlling a remote robot to perform manipulation tasks are also discussed. The main problems facing the operator are time delays in signal transmission and the uncertainties of the remote environment. How object localization techniques can be used together with other techniques, such as predictor display and time desynchronization, to help overcome these difficulties is then discussed.
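The least-squares pose problem the second algorithm solves can be illustrated with the standard SVD (Kabsch) solution for point-to-point matching. The paper's dual-quaternion formulation additionally handles direction-vector features and a combined cost, which this sketch does not:

```python
import numpy as np

def fit_pose(model_pts, sensed_pts):
    """Least-squares rigid transform (rotation R, translation t) mapping
    model point features onto sensed features, via the SVD of the
    cross-covariance matrix; redundant features reduce noise effects.
    """
    mc, sc = model_pts.mean(axis=0), sensed_pts.mean(axis=0)
    H = (model_pts - mc).T @ (sensed_pts - sc)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard reflections
    R = Vt.T @ D @ U.T
    return R, sc - R @ mc    # sensed ~= R @ model + t

# Synthetic check with 8 redundant point features and slight noise
rng = np.random.default_rng(3)
model = rng.normal(size=(8, 3))
th = np.radians(30)
R_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1]])
sensed = model @ R_true.T + np.array([0.5, -0.2, 1.0]) \
         + rng.normal(0, 0.01, (8, 3))
R, t = fit_pose(model, sensed)   # recovers R_true and the translation
```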
Array observations and analyses of Cascadia deep tremor
NASA Astrophysics Data System (ADS)
McCausland, W. A.; Malone, S.; Creager, K.; Crosson, R.; La Rocca, M.; Saccoretti, G.
2004-12-01
The July 8-24, 2004 Cascadia Episodic Tremor and Slip (ETS) event was observed using three small-aperture seismic arrays located near Sooke, BC, Sequim, WA, and on Lopez Island, WA. Initial tremor burst epicenters were located in the Strait of Juan de Fuca and were calculated using the relative arrivals of band-passed, rectified regional network signals. Most subsequent epicenters migrated to the northwest along Vancouver Island, and a few occurred in the central to southern Puget Sound. Tremor bursts lasting on the order of a few seconds can be identified across the stations of any of the three arrays. Individual bursts from distinct back-azimuths often occur within five seconds of each other, indicating the presence of spatially distributed but near-simultaneous tremor. None of this was visible at such a fine scale using the Pacific Northwest Seismograph Network (PNSN). Several array processing techniques, including beam-forming, zero-lag cross-correlation and multiple signal classification (MUSIC), are being investigated to determine the optimal technique for exploring the temporal and spatial evolution of the tremor signals during the whole ETS event. The back-azimuth and slowness of consecutive time windows for a one half-hour period of strong tremor were calculated using beam-forming with a linear stack, with an nth-root stack, and using zero-lag cross-correlation. Results for each array and each method yield consistent estimates of back-azimuth and slowness. Beam-forming with a nonlinear stack produces results similar to the linear case but with larger uncertainty. Among the arrays, the back-azimuths give a reasonable estimate of the tremor epicenter that is consistent with the network-determined epicentral locations.
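A minimal delay-and-sum beamforming sketch for back-azimuth/slowness estimation at a small-aperture array, one of the techniques listed above. Station coordinates, the sampling rate, and the plane-wave sign convention are illustrative assumptions; the nth-root stack and MUSIC variants are omitted.

```python
import numpy as np

def beam_power(traces, coords, fs, sx, sy):
    """Align traces under a plane wave with horizontal slowness (sx, sy)
    in s/km (coords in km), stack linearly, and return mean beam power."""
    beam = np.zeros(traces.shape[1])
    for trace, (x, y) in zip(traces, coords):
        delay_samples = int(round((sx * x + sy * y) * fs))
        beam += np.roll(trace, -delay_samples)   # wrap-around edge effects ignored
    beam /= len(traces)
    return float(np.mean(beam**2))

def slowness_grid_search(traces, coords, fs, s_max=0.5, n=51):
    s = np.linspace(-s_max, s_max, n)
    P = np.array([[beam_power(traces, coords, fs, sx, sy) for sx in s] for sy in s])
    iy, ix = np.unravel_index(np.argmax(P), P.shape)
    # Azimuth of the best-fitting slowness vector, clockwise from north;
    # the source back-azimuth convention depends on the assumed sign.
    return np.degrees(np.arctan2(s[ix], s[iy])) % 360.0, float(np.hypot(s[ix], s[iy]))
```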
NASA Astrophysics Data System (ADS)
McBride, William R.; McBride, Daniel R.
2016-08-01
The Daniel K Inouye Solar Telescope (DKIST) will be the largest solar telescope in the world, providing a significant increase in the resolution of solar data available to the scientific community. Vibration mitigation is critical in long focal-length telescopes such as the Inouye Solar Telescope, especially when adaptive optics are employed to correct for atmospheric seeing. For this reason, a vibration error budget has been implemented. Initially, the FRFs for the various mounting points of ancillary equipment were estimated using the finite element analysis (FEA) of the telescope structures. FEA analysis is well documented and understood; the focus of this paper is on the methods involved in estimating a set of experimental (measured) transfer functions of the as-built telescope structure for the purpose of vibration management. Techniques to measure low-frequency single-input-single-output (SISO) frequency response functions (FRF) between vibration source locations and image motion on the focal plane are described. The measurement equipment includes an instrumented inertial-mass shaker capable of operation down to 4 Hz along with seismic accelerometers. The measurement of vibration at frequencies below 10 Hz with good signal-to-noise ratio (SNR) requires several noise reduction techniques including high-performance windows, noise-averaging, tracking filters, and spectral estimation. These signal-processing techniques are described in detail.
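A minimal sketch of an averaged H1 FRF estimate between a shaker drive signal x and a response y (accelerometer or image motion), using Welch cross/auto spectra with a Hann window, in the spirit of the noise-averaging and windowing described above. The sample rate, segment length and the synthetic signals are illustrative stand-ins for measured data.

```python
import numpy as np
from scipy.signal import csd, welch, coherence

fs = 1000.0                       # sample rate, Hz (placeholder)
rng = np.random.default_rng(0)
x = rng.standard_normal(2**16)    # broadband shaker force (placeholder)
y = np.convolve(x, np.exp(-np.arange(200) / 40.0), mode="same")  # fake SISO path

f, Pxy = csd(x, y, fs=fs, window="hann", nperseg=4096)   # averaged cross spectrum
_, Pxx = welch(x, fs=fs, window="hann", nperseg=4096)    # input auto spectrum
H1 = Pxy / Pxx                                           # H1 FRF estimate
_, gamma2 = coherence(x, y, fs=fs, nperseg=4096)         # per-bin SNR sanity check
```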
NASA Astrophysics Data System (ADS)
Gu, Wenjun; Zhang, Weizhi; Wang, Jin; Amini Kashani, M. R.; Kavehrad, Mohsen
2015-01-01
Over the past decade, location based services (LBS) have found wide applications in indoor environments, such as large shopping malls, hospitals, warehouses, airports, etc. Current technologies provide a wide choice of available solutions, which include radio-frequency identification (RFID), ultra wideband (UWB), wireless local area network (WLAN) and Bluetooth. With the rapid development of light-emitting-diode (LED) technology, visible light communications (VLC) also bring a practical approach to LBS. As visible light has a better immunity against multipath effects than radio waves, higher positioning accuracy is achieved. LEDs are utilized both for illumination and positioning purposes to realize relatively lower infrastructure cost. In this paper, an indoor positioning system using VLC is proposed, with LEDs as transmitters and photo diodes as receivers. The estimation algorithm is based on received-signal-strength (RSS) information collected from photo diodes and the trilateration technique. By appropriately making use of the characteristics of receiver movements and the property of trilateration, estimation of three-dimensional (3-D) coordinates is attained. A filtering technique is applied to enable tracking capability of the algorithm, and a higher accuracy is reached compared to raw estimates. A Gaussian mixture Sigma-point particle filter (GM-SPPF) is proposed for this 3-D system, which introduces the notion of a Gaussian Mixture Model (GMM). The number of particles in the filter is reduced by approximating the probability distribution with Gaussian components.
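A minimal trilateration sketch: ranges inferred from RSS (e.g., via an LED channel model) are combined by linearized least squares. The anchor positions and ranges are illustrative, and the paper's GM-SPPF tracking stage is not shown.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Solve ||p - a_i|| = r_i for p by subtracting the first equation,
    which yields the linear system A p = b."""
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Non-coplanar LED anchors (coplanar anchors leave the height ambiguous).
anchors = np.array([[0., 0., 3.0], [4., 0., 3.2], [0., 4., 2.8], [4., 4., 3.1]])
p_true = np.array([1.5, 2.0, 1.0])
ranges = np.linalg.norm(anchors - p_true, axis=1)
print(trilaterate(anchors, ranges))   # ~ [1.5, 2.0, 1.0]
```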
Real-Time Impact Visualization Inspection of Aerospace Composite Structures with Distributed Sensors
Si, Liang; Baier, Horst
2015-01-01
For the future design of smart aerospace structures, the development and application of a reliable, real-time and automatic monitoring and diagnostic technique is essential. Thus, with distributed sensor networks, a real-time automatic structural health monitoring (SHM) technique is designed and investigated to monitor and predict the locations and force magnitudes of unforeseen foreign impacts on composite structures and to estimate in real time mode the structural state when impacts occur. The proposed smart impact visualization inspection (IVI) technique mainly consists of five functional modules, which are the signal data preprocessing (SDP), the forward model generator (FMG), the impact positioning calculator (IPC), the inverse model operator (IMO) and structural state estimator (SSE). With regard to the verification of the practicality of the proposed IVI technique, various structure configurations are considered, which are a normal CFRP panel and another CFRP panel with “orange peel” surfaces and a cutout hole. Additionally, since robustness against several background disturbances is also an essential criterion for practical engineering demands, investigations and experimental tests are carried out under random vibration interfering noise (RVIN) conditions. The accuracy of the predictions for unknown impact events on composite structures using the IVI technique is validated under various structure configurations and under changing environmental conditions. The evaluated errors all fall well within a satisfactory limit range. Furthermore, it is concluded that the IVI technique is applicable for impact monitoring, diagnosis and assessment of aerospace composite structures in complex practical engineering environments. PMID:26184196
Su, Jason G; Jerrett, Michael; Meng, Ying-Ying; Pickett, Melissa; Ritz, Beate
2015-02-15
Epidemiological studies investigating relationships between environmental exposures from air pollution and health typically use residential addresses as a single point for exposure, while environmental exposures in transit, at work, school or other locations are largely ignored. Personal exposure monitors measure individuals' exposures over time; however, current personal monitors are intrusive, cannot be operated at a large scale over an extended period of time (e.g., for a continuous three months), and can be very costly. In addition, spatial locations typically cannot be identified when only personal monitors are used. In this paper, we piloted a study that applied momentary location tracking services supplied by smart phones to identify an individual's location in space-time for three consecutive months (April 28 to July 28, 2013) using available Wi-Fi networks. Individual exposures in space-time to the traffic-related pollutant nitrogen oxides (NOX) were estimated by superimposing an annual mean NOX concentration surface modeled using the Land Use Regression (LUR) modeling technique. The individual's exposures were assigned to stationary (including home, work and other stationary locations) and in-transit (including commute and other travel) locations. For the individual, whose home/work addresses were known and whose commute route was fixed, it was found that 95.3% of the time the individual could be accurately identified in space-time. The ambient concentration estimated at the home location was 21.01 ppb. When indoor/outdoor infiltration, indoor sources of air pollution and time spent outdoors were taken into consideration, the individual's cumulative exposures were 28.59 ppb and 96.49 ppb, assuming a respective indoor/outdoor ratio of 1.33 and 5.00. Integrating momentary location tracking services with fixed-site field monitoring, plus indoor-outdoor air exchange calibration, makes exposure assessment of a very large population over an extended time period feasible. Copyright © 2014 Elsevier B.V. All rights reserved.
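A minimal sketch of the time-weighted exposure bookkeeping described above: ambient concentrations at tracked locations are weighted by time spent there, with an indoor/outdoor (I/O) factor applied to indoor micro-environments. The segment durations, concentrations and I/O ratio below are illustrative, not the study's values.

```python
def cumulative_exposure(segments, io_ratio):
    """segments: (hours, ambient_ppb, indoors?) tuples covering one day."""
    total_h = sum(h for h, _, _ in segments)
    weighted = sum(h * c * (io_ratio if indoors else 1.0)
                   for h, c, indoors in segments)
    return weighted / total_h   # time-weighted NOX exposure, ppb

day = [(14.0, 21.0, True),   # home (ambient near 21 ppb, as above)
       (8.0, 30.0, True),    # work
       (2.0, 45.0, False)]   # commute in traffic
print(cumulative_exposure(day, io_ratio=1.33))
```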
The contribution of China's Grain to Green Program to carbon and water cycles
NASA Astrophysics Data System (ADS)
Yuan, W.
2017-12-01
The Chinese government started implementation of the Grain for Green Project (GGP) in 1999, aiming to convert cropland to forestland to mitigate soil erosion problems in areas across the country. Although the project has generated substantial environmental benefits, such as erosion reduction, carbon sequestration and water quality improvements, the magnitude of these benefits has not yet been well quantified due to the lack of location-specific data describing the afforestation efforts. Remote sensing is well suited to detect afforestation locations, a prerequisite for estimating the impacts of the project on carbon and water cycles. In this study, we first examined the practicability of using the Moderate Resolution Imaging Spectroradiometer (MODIS) land cover product to detect afforestation locations; however, the results showed that the MODIS product failed to distinguish the afforestation areas of the GGP. We then used a normalized difference vegetation index (NDVI) time series analysis approach for detecting afforestation locations, applying statistical data to determine the NDVI threshold of converted croplands. The technique provided the necessary information on the location of afforestation implemented under the GGP, explaining 85% of the conversion from cropland to forestland across all provinces. Second, we estimated the changes in carbon fluxes and stocks caused by forests converted from croplands under the GGP using a process-based ecosystem model (i.e., IBIS). Our results showed that the areas converted from croplands to forests under the GGP program could sequester 110.45 Tg C by 2020, and 524.36 Tg C by the end of this century. The sequestration capacity showed substantial spatial variations, with large sequestration in southern China. The economic benefits of carbon sequestration from the GGP were also estimated according to the current carbon price. The estimated economic benefits ranged from $8.84 to $44.20 billion from 2000 through 2100, which may exceed the current total investment ($38.99 billion) in the program. As the GGP program continues and forests grow, the impact of this program will be even larger in the future, making a more considerable contribution to China's carbon sink over the upcoming decades.
A study of various methods for calculating locations of lightning events
NASA Technical Reports Server (NTRS)
Cannon, John R.
1995-01-01
This article reports on the results of numerical experiments on finding the location of lightning events using different numerical methods. The methods include linear least squares, nonlinear least squares, statistical estimations, cluster analysis and angular filters and combinations of such techniques. The experiments involved investigations of methods for excluding fake solutions which are solutions that appear to be reasonable but are in fact several kilometers distant from the actual location. Some of the conclusions derived from the study are that bad data produces fakes, that no fool-proof method of excluding fakes was found, that a short base-line interferometer under development at Kennedy Space Center to measure the direction cosines of an event shows promise as a filter for excluding fakes. The experiments generated a number of open questions, some of which are discussed at the end of the report.
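A minimal sketch of one of the methods listed above: nonlinear least-squares location of a lightning event from arrival times at several sensors. The sensor layout, propagation speed units and noise level are illustrative; as the report notes, restarting from several initial guesses helps expose "fake" solutions.

```python
import numpy as np
from scipy.optimize import least_squares

c = 299.792458  # propagation speed, km/ms

def residuals(params, sensors, t_obs):
    """params = (x, y, t0): event position (km) and emission time (ms)."""
    x, y, t0 = params
    t_pred = t0 + np.hypot(sensors[:, 0] - x, sensors[:, 1] - y) / c
    return t_pred - t_obs

sensors = np.array([[0., 0.], [30., 0.], [0., 30.], [30., 30.], [15., 45.]])
src, t0 = np.array([12., 22.]), 5.0
t_obs = t0 + np.linalg.norm(sensors - src, axis=1) / c
t_obs += np.random.default_rng(1).normal(0.0, 1e-4, t_obs.size)  # timing noise

fit = least_squares(residuals, x0=[15., 15., 0.], args=(sensors, t_obs))
print(fit.x)  # ~ [12, 22, 5]; multi-start from other x0 screens for fakes
```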
NASA Technical Reports Server (NTRS)
Liu, G.
1985-01-01
One of the major concerns in the design of an active control system is obtaining the information needed for effective feedback. This involves the combination of sensing and estimation. A sensor location index is defined as the weighted sum of the mean square estimation errors in which the sensor locations can be regarded as estimator design parameters. The design goal is to choose these locations to minimize the sensor location index. The choice of the number of sensors is a tradeoff between the estimation quality based upon the same performance index and the total costs of installing and maintaining extra sensors. An experimental study for choosing the sensor location was conducted on an aeroelastic system. The system modeling which includes the unsteady aerodynamics model developed by Stephen Rock was improved. Experimental results verify the trend of the theoretical predictions of the sensor location index for different sensor locations at various wind speeds.
Zhang, Yawei; Yin, Fang-Fang; Zhang, You; Ren, Lei
2017-05-07
The purpose of this study is to develop an adaptive prior knowledge guided image estimation technique to reduce the scan angle needed in the limited-angle intrafraction verification (LIVE) system for 4D-CBCT reconstruction. The LIVE system was previously developed to reconstruct 4D volumetric images on-the-fly during arc treatment for intrafraction target verification and dose calculation. In this study, we developed an adaptive constrained free-form deformation reconstruction technique in LIVE to further reduce the scanning angle needed to reconstruct the 4D-CBCT images for faster intrafraction verification. This technique uses free-form deformation with energy minimization to deform prior images to estimate 4D-CBCT based on kV-MV projections acquired over an extremely limited angle (orthogonal 3°) during the treatment. Note that the prior images are adaptively updated using the latest CBCT images reconstructed by LIVE during treatment to utilize the continuity of the respiratory motion. The 4D digital extended-cardiac-torso (XCAT) phantom and a CIRS 008A dynamic thoracic phantom were used to evaluate the effectiveness of this technique. The reconstruction accuracy of the technique was evaluated by calculating both the center-of-mass-shift (COMS) and 3D volume-percentage-difference (VPD) of the tumor between reconstructed images and the true on-board images. The performance of the technique was also assessed with varied breathing signals against scanning angle, lesion size, lesion location, projection sampling interval, and scanning direction. In the XCAT study, using orthogonal-view 3° kV and portal MV projections, this technique achieved an average tumor COMS/VPD of 0.4 ± 0.1 mm/5.5 ± 2.2%, 0.6 ± 0.3 mm/7.2 ± 2.8%, 0.5 ± 0.2 mm/7.1 ± 2.6%, and 0.6 ± 0.2 mm/8.3 ± 2.4% for baseline drift, amplitude variation, phase shift, and patient breathing signal variation, respectively. In the CIRS phantom study, this technique achieved an average tumor COMS/VPD of 0.7 ± 0.1 mm/7.5 ± 1.3% for a 3 cm lesion and 0.6 ± 0.2 mm/11.4 ± 1.5% for a 2 cm lesion in the baseline drift case. The average tumor COMS/VPD were 0.5 ± 0.2 mm/10.8 ± 1.4%, 0.4 ± 0.3 mm/7.3 ± 2.9%, 0.4 ± 0.2 mm/7.4 ± 2.5%, and 0.4 ± 0.2 mm/7.3 ± 2.8% for the four real patient breathing signals, respectively. Results demonstrated that the adaptive prior knowledge guided image estimation technique with the LIVE system is robust against scanning angle, lesion size, location and scanning direction. It can estimate on-board images accurately with as few as 6 projections over an orthogonal-view 3° angle. In conclusion, the adaptive prior knowledge guided image reconstruction technique accurately estimates 4D-CBCT images using an extremely limited angle and few projections. This technique greatly improves the efficiency and accuracy of the LIVE system for ultrafast 4D intrafraction verification of lung SBRT treatments.
Local regression type methods applied to the study of geophysics and high frequency financial data
NASA Astrophysics Data System (ADS)
Mariani, M. C.; Basu, K.
2014-09-01
In this work we applied locally weighted scatterplot smoothing techniques (Lowess/Loess) to geophysical and high-frequency financial data. We first analyze and apply this technique to the California earthquake geological data. A spatial analysis was performed to show that the estimation of the earthquake magnitude at a fixed location is accurate to within a relative error of 0.01%. We also applied the same method to a high-frequency data set arising in the financial sector and obtained similarly satisfactory results. The application of this approach to the two different data sets demonstrates that the overall method is accurate and efficient, and that the Lowess approach is much more desirable than the Loess method. Previous works studied time series analysis; in this paper our local regression models perform a spatial analysis for the geophysics data, providing different information. For the high-frequency data, our models estimate the curve of best fit, where data are dependent on time.
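A minimal Lowess sketch using the statsmodels implementation of the locally weighted technique referred to above; the synthetic data and the frac value are illustrative.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 10.0, 300))          # e.g., locations or times
y = np.sin(x) + rng.normal(0.0, 0.3, x.size)      # noisy response (placeholder)

# frac controls the local window: smaller values follow the data more closely.
smoothed = lowess(y, x, frac=0.2, return_sorted=True)   # columns: x, fitted y
x_fit, y_fit = smoothed[:, 0], smoothed[:, 1]
```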
NASA Astrophysics Data System (ADS)
Jana, S.; Chakraborty, R.; Maitra, A.
2017-12-01
Nowcasting of lightning activities during intense convective events using a single electric field monitor (EFM) has been carried out at a tropical location, Kolkata (22.65°N, 88.45°E). Before and at the onset of heavy lightning, certain changes in the electric field (EF) can be related to high liquid water content (LWC) and low cloud base height (CBH). The present study discusses the utility of EF observations for characterizing several aspects of convective events. Large convective clouds, indicated by high LWC and low CBH, can be detected from EF variations, which could be a precursor of upcoming convective events. Suitable values of the EF gradient can be used as an indicator of impending lightning events. An EF variation of 0.195 kV/m/min can predict lightning within a 17.5 km radius with a probability of detection (POD) of 91% and a false alarm rate (FAR) of 8%, with a lead time of 45 min. The total number of predicted lightning strikes is nearly 9 times less than that measured by the lightning detector. This prediction technique can, therefore, give an estimate of cloud-to-ground (CG) and intra-cloud (IC) lightning occurrences within the surrounding area. This prediction technique, involving POD, FAR and lead time information, shows a better prediction capability compared to techniques reported earlier. Thus an EFM can be effectively used for prediction of lightning events at a tropical location.
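A minimal sketch of the thresholding logic described above: raise an alarm when the per-minute EF gradient exceeds 0.195 kV/m/min, then score alarms against detector records with POD/FAR. The synthetic series, scoring window and alarm-matching rule are illustrative simplifications.

```python
import numpy as np

def ef_alarms(ef_kv_per_m, dt_min=1.0, thresh=0.195):
    """Return sample indices where the EF gradient crosses the threshold."""
    grad = np.gradient(ef_kv_per_m, dt_min)
    return np.where(grad > thresh)[0]

def pod_far(alarm_times, strike_times, lead_min=45.0):
    """An alarm verifies if a strike follows within the lead time."""
    hits = sum(any(0.0 <= s - a <= lead_min for s in strike_times) for a in alarm_times)
    detected = sum(any(0.0 <= s - a <= lead_min for a in alarm_times) for s in strike_times)
    pod = detected / len(strike_times) if strike_times else np.nan
    far = 1.0 - hits / len(alarm_times) if alarm_times.size else np.nan
    return pod, far

ef = np.cumsum(np.r_[np.zeros(30), np.full(10, 0.3), np.zeros(20)])  # kV/m, 1-min steps
alarms = ef_alarms(ef)
print(pod_far(alarms.astype(float), strike_times=[45.0]))   # (1.0, 0.0)
```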
Minimum Detectable Activity for Tomographic Gamma Scanning System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venkataraman, Ram; Smith, Susan; Kirkpatrick, J. M.
2015-01-01
For any radiation measurement system, it is useful to explore and establish the detection limits and a minimum detectable activity (MDA) for the radionuclides of interest, even if the system is to be used at far higher values. The MDA serves as an important figure of merit, and often a system is optimized and configured so that it can meet the MDA requirements of a measurement campaign. The non-destructive assay (NDA) systems based on gamma ray analysis are no exception, and well-established conventions, such as the Currie method, exist for estimating the detection limits and the MDA. However, the Tomographic Gamma Scanning (TGS) technique poses some challenges for the estimation of detection limits and MDAs. The TGS combines high resolution gamma ray spectrometry (HRGS) with low spatial resolution image reconstruction techniques. In non-imaging gamma ray based NDA techniques, measured counts in a full energy peak can be used to estimate the activity of a radionuclide independently of other counting trials. However, in the case of the TGS each “view” is a full spectral grab (each a counting trial), and each scan consists of 150 spectral grabs in the transmission and emission scans per vertical layer of the item. The set of views in a complete scan are then used to solve for the radionuclide activities on a voxel by voxel basis, over 16 layers of a 10x10 voxel grid. Thus, the raw count data are no longer independent trials, but rather constitute input to a matrix solution for the emission image values at the various locations inside the item volume used in the reconstruction. So the validity of the methods used to estimate the MDA for an imaging technique such as TGS warrants close scrutiny, because the pair-counting concept of Currie is not directly applicable. One can also raise questions as to whether the TGS, along with other image reconstruction techniques that heavily intertwine data, is a suitable method if one expects to measure samples whose activities are at or just above MDA levels. The paper examines methods used to estimate MDAs for a TGS system, and explores possible solutions that can be rigorously defended.
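A minimal sketch of the conventional Currie MDA for a gamma peak, the convention the discussion above takes as its starting point. The efficiency, gamma yield, count time and background counts are illustrative values; as the paper argues, the voxel-coupled TGS case needs more care than this pair-counting formula.

```python
import math

def currie_mda(bkg_counts, live_time_s, efficiency, gamma_yield):
    """MDA (Bq) from the Currie detection limit L_D = 2.71 + 4.65*sqrt(B)."""
    l_d = 2.71 + 4.65 * math.sqrt(bkg_counts)   # counts, 5% false +/- risks
    return l_d / (efficiency * gamma_yield * live_time_s)

print(currie_mda(bkg_counts=400.0, live_time_s=600.0,
                 efficiency=1e-3, gamma_yield=0.85))   # Bq
```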
Lopez-Iturri, Peio; de Miguel-Bilbao, Silvia; Aguirre, Erik; Azpilicueta, Leire; Falcone, Francisco; Ramos, Victoria
2015-01-01
The electromagnetic field leakage levels of nonionizing radiation from a microwave oven have been estimated within a complex indoor scenario. By employing a hybrid simulation technique, based on coupling full wave simulation with an in-house developed deterministic 3D ray launching code, estimates of the observed electric field values can be obtained for the complete indoor scenario. The microwave oven can be modeled as a time- and frequency-dependent radiating source, in which leakage, basically from the microwave oven door, is propagated along the complete indoor scenario, interacting with all of the elements present in it. This method can aid in assessing the impact of such devices on expected exposure levels, allowing adequate minimization strategies, such as optimal location, to be applied. PMID:25705676
Tigers and their prey: Predicting carnivore densities from prey abundance
Karanth, K.U.; Nichols, J.D.; Kumar, N.S.; Link, W.A.; Hines, J.E.
2004-01-01
The goal of ecology is to understand interactions that determine the distribution and abundance of organisms. In principle, ecologists should be able to identify a small number of limiting resources for a species of interest, estimate densities of these resources at different locations across the landscape, and then use these estimates to predict the density of the focal species at these locations. In practice, however, development of functional relationships between abundances of species and their resources has proven extremely difficult, and examples of such predictive ability are very rare. Ecological studies of prey requirements of tigers Panthera tigris led us to develop a simple mechanistic model for predicting tiger density as a function of prey density. We tested our model using data from a landscape-scale long-term (1995-2003) field study that estimated tiger and prey densities in 11 ecologically diverse sites across India. We used field techniques and analytical methods that specifically addressed sampling and detectability, two issues that frequently present problems in macroecological studies of animal populations. Estimated densities of ungulate prey ranged between 5.3 and 63.8 animals per km2. Estimated tiger densities (3.2-16.8 tigers per 100 km2) were reasonably consistent with model predictions. The results provide evidence of a functional relationship between abundances of large carnivores and their prey under a wide range of ecological conditions. In addition to generating important insights into carnivore ecology and conservation, the study provides a potentially useful model for the rigorous conduct of macroecological science.
NASA Astrophysics Data System (ADS)
Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.; Carlson, Thomas J.
2016-04-01
Locating the position of fixed or mobile sources (i.e., transmitters) based on measurements obtained from sensors (i.e., receivers) is an important research area that is attracting much interest. In this paper, we review several representative localization algorithms that use time of arrivals (TOAs) and time difference of arrivals (TDOAs) to achieve high signal source position estimation accuracy when a transmitter is in the line-of-sight of a receiver. Circular (TOA) and hyperbolic (TDOA) position estimation approaches both use nonlinear equations that relate the known locations of receivers and unknown locations of transmitters. Estimation of the location of transmitters using the standard nonlinear equations may not be very accurate because of receiver location errors, receiver measurement errors, and computational efficiency challenges that result in high computational burdens. Least squares and maximum likelihood based algorithms have become the most popular computational approaches to transmitter location estimation. In this paper, we summarize the computational characteristics and position estimation accuracies of various positioning algorithms. By improving methods for estimating the time-of-arrival of transmissions at receivers and transmitter location estimation algorithms, transmitter location estimation may be applied across a range of applications and technologies such as radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.
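A minimal sketch of one classical closed-form approach among those reviewed above: hyperbolic (TDOA) positioning made linear by treating the reference range r0 as an extra unknown. The receiver geometry and the acoustic propagation speed are illustrative assumptions.

```python
import numpy as np

def tdoa_locate(receivers, tdoas, c):
    """receivers[0] is the reference; tdoas[i] = t_i - t_0 for i >= 1.
    Solves 2*(a_i - a0).p + 2*d_i*r0 = |a_i|^2 - |a0|^2 - d_i^2."""
    a0 = receivers[0]
    d = c * np.asarray(tdoas)                       # range differences r_i - r0
    A = np.hstack([2.0 * (receivers[1:] - a0), 2.0 * d[:, None]])
    b = np.sum(receivers[1:]**2, axis=1) - np.sum(a0**2) - d**2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)     # unknowns: (x, y, z, r0)
    return sol[:-1]

rx = np.array([[0., 0., 0.], [100., 0., 5.], [0., 100., -5.],
               [100., 100., 10.], [50., 150., 0.]])
p = np.array([40., 60., 2.])
r = np.linalg.norm(rx - p, axis=1)
print(tdoa_locate(rx, (r[1:] - r[0]) / 343.0, c=343.0))  # ~ [40, 60, 2]
```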
NASA Astrophysics Data System (ADS)
Bhattacharjee, Sudipta; Deb, Debasis
2016-07-01
Digital image correlation (DIC) is a technique developed for monitoring surface deformation/displacement of an object under loading conditions. This method is further refined to make it capable of handling discontinuities on the surface of the sample. A damage zone refers to a surface area that has fractured and opened in the course of loading. In this study, an algorithm is presented to automatically detect multiple damage zones in the deformed image. The algorithm identifies the pixels located inside these zones and eliminates them from the FEM-DIC processes. The proposed algorithm is successfully implemented on several damaged samples to estimate the displacement fields of an object under loading conditions. This study shows that the displacement fields represent the damage conditions reasonably well as compared to the regular FEM-DIC technique without considering the damage zones.
Wang, Dengjiang; Zhang, Weifang; Wang, Xiangyu; Sun, Bo
2016-01-01
This study presents a novel monitoring method for hole-edge corrosion damage in plate structures based on Lamb wave tomographic imaging techniques. An experimental procedure with a cross-hole layout using 16 piezoelectric transducers (PZTs) was designed. The A0 mode of the Lamb wave was selected, which is sensitive to thickness-loss damage. The iterative algebraic reconstruction technique (ART) method was used to locate and quantify the corrosion damage at the edge of the hole. Hydrofluoric acid with a concentration of 20% was used to corrode the specimen artificially. To estimate the effectiveness of the proposed method, the real corrosion damage was compared with the predicted corrosion damage based on the tomographic method. The results show that the Lamb-wave-based tomographic method can be used to monitor the hole-edge corrosion damage accurately. PMID:28774041
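A minimal sketch of the iterative ART (Kaczmarz) update used in travel-time tomography of the kind described above: each ray equation A[i]·x = b[i] relaxes the current image in turn. The tiny ray matrix, relaxation factor and sweep count are illustrative.

```python
import numpy as np

def art(A, b, n_sweeps=200, relax=0.5):
    x = np.zeros(A.shape[1])              # per-pixel slowness/thickness-loss image
    row_norm2 = np.sum(A**2, axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):       # one ray (transducer pair) at a time
            if row_norm2[i] > 0.0:
                x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
    return x

# Toy example: 5 rays crossing a 2x2 pixel grid (entries are path lengths).
s2 = np.sqrt(2.0)
A = np.array([[1., 1., 0., 0.],    # row 1
              [0., 0., 1., 1.],    # row 2
              [1., 0., 1., 0.],    # column 1
              [0., 1., 0., 1.],    # column 2
              [s2, 0., 0., s2]])   # diagonal, resolves the remaining ambiguity
x_true = np.array([1.0, 1.0, 1.0, 2.0])       # "damage" in one pixel
print(art(A, A @ x_true).round(2).reshape(2, 2))
```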
Eickenberg, Michael; Rowekamp, Ryan J.; Kouh, Minjoon; Sharpee, Tatyana O.
2012-01-01
Our visual system is capable of recognizing complex objects even when their appearances change drastically under various viewing conditions. Especially in the higher cortical areas, the sensory neurons reflect such functional capacity in their selectivity for complex visual features and invariance to certain object transformations, such as image translation. Due to the strong nonlinearities necessary to achieve both the selectivity and invariance, characterizing and predicting the response patterns of these neurons represents a formidable computational challenge. A related problem is that such neurons are poorly driven by randomized inputs, such as white noise, and respond strongly only to stimuli with complex high-order correlations, such as natural stimuli. Here we describe a novel two-step optimization technique that can characterize both the shape selectivity and the range and coarseness of position invariance from neural responses to natural stimuli. One step in the optimization involves finding the template as the maximally informative dimension given the estimated spatial location where the response could have been triggered within each image. The estimates of the locations that triggered the response are subsequently updated in the next step. Under the assumption of a monotonic relationship between the firing rate and stimulus projections on the template at a given position, the most likely location is the one that has the largest projection on the estimate of the template. The algorithm shows quick convergence during optimization, and the estimation results are reliable even in the regime of small signal-to-noise ratios. When we apply the algorithm to responses of complex cells in the primary visual cortex (V1) to natural movies, we find that responses of the majority of cells were significantly better described by translation invariant models based on one template compared with position-specific models with several relevant features. PMID:22734487
Algorithm based on the short-term Rényi entropy and IF estimation for noisy EEG signals analysis.
Lerga, Jonatan; Saulig, Nicoletta; Mozetič, Vladimir
2017-01-01
Stochastic electroencephalogram (EEG) signals are known to be nonstationary and often multicomponential. Detecting and extracting their components may help clinicians to localize brain neurological dysfunctionalities for patients with motor control disorders due to the fact that movement-related cortical activities are reflected in spectral EEG changes. A new algorithm for EEG signal components detection from its time-frequency distribution (TFD) has been proposed in this paper. The algorithm utilizes the modification of the Rényi entropy-based technique for number of components estimation, called short-term Rényi entropy (STRE), and upgraded by an iterative algorithm which was shown to enhance existing approaches. Combined with instantaneous frequency (IF) estimation, the proposed method was applied to EEG signal analysis both in noise-free and noisy environments for limb movements EEG signals, and was shown to be an efficient technique providing spectral description of brain activities at each electrode location up to moderate additive noise levels. Furthermore, the obtained information concerning the number of EEG signal components and their IFs show potentials to enhance diagnostics and treatment of neurological disorders for patients with motor control illnesses. Copyright © 2016 Elsevier Ltd. All rights reserved.
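A minimal sketch of a short-term Rényi entropy (STRE) profile computed from a spectrogram TFD, with alpha = 3 as is usual. The test signal, window sizes, the reference block and the simple 2^(H - H_ref) component count are illustrative simplifications of the iterative method described above.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 256.0
t = np.arange(0.0, 8.0, 1.0 / fs)
sig = np.sin(2*np.pi*10*t) + (t > 4.0) * np.sin(2*np.pi*35*t)  # 1 then 2 components

f, tt, S = spectrogram(sig, fs=fs, nperseg=128, noverlap=96)

def renyi_entropy(P, alpha=3.0):
    P = P / P.sum()                                # normalize the TFD block
    return np.log2(np.sum(P**alpha)) / (1.0 - alpha)

win = 8                                            # spectrogram columns per block
H_ref = renyi_entropy(S[:, :win])                  # block known to hold one component
for k in range(0, S.shape[1] - win, win):
    H = renyi_entropy(S[:, k:k + win])
    print(f"t~{tt[k]:5.2f}s  components~{2**(H - H_ref):4.1f}")
```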
Three main paradigms of simultaneous localization and mapping (SLAM) problem
NASA Astrophysics Data System (ADS)
Imani, Vandad; Haataja, Keijo; Toivanen, Pekka
2018-04-01
Simultaneous Localization and Mapping (SLAM) is one of the most challenging research areas within computer and machine vision for automated scene commentary and explanation. The SLAM technique has been a developing research area in the robotics context during recent years. By utilizing the SLAM method, a robot can estimate its positions at distinct points in time, which indicate the trajectory of the robot, as well as generate a map of the environment. SLAM has two unique traits: estimating the location of the robot and building a map in various types of environment. SLAM is effective in different types of environment, such as indoor, outdoor, air, underwater, underground and space. Several approaches have been investigated to use the SLAM technique in distinct environments. The purpose of this paper is to provide an accurate perceptive review of the case history of SLAM relying on laser/ultrasonic sensors and cameras as perception input data. In addition, we mainly focus on three paradigms of the SLAM problem with all their pros and cons. In the future, intelligent methods and new ideas will be applied to visual SLAM to estimate the motion of an intelligent underwater robot and to build a feature map of the marine environment.
Rainfall estimation from microwave links in São Paulo, Brazil.
NASA Astrophysics Data System (ADS)
Rios Gaona, Manuel Felipe; Overeem, Aart; Leijnse, Hidde; Uijlenhoet, Remko
2017-04-01
Rainfall estimation from microwave link networks has been successfully demonstrated in countries such as the Netherlands, Israel and Germany. The path-averaged rainfall intensity can be computed from the signal attenuation between cell phone towers. Although this technique is still in development, it offers great opportunities to retrieve rainfall rates at high spatiotemporal resolutions very close to the ground surface. High spatiotemporal resolutions and closer-to-ground measurements are highly appreciated, especially in urban catchments where high-impact events such as flash-floods develop in short time scales. We evaluate here this rainfall measurement technique for a tropical climate, something that has hardly been done previously. This is highly relevant since many countries with few surface rainfall observations are located in the tropics. The test-bed is the Brazilian city of São Paulo. The performance of 16 microwave links was evaluated, from a network of 200 links, for the last 3 months of 2014. The open software package RAINLINK was employed to obtain link rainfall estimates. The evaluation was done through a dense automatic gauge network. Results are promising and encouraging, especially for short links for which a high correlation (> 0.9) and a low bias (< 5%) were obtained.
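A minimal sketch of the core RAINLINK-style retrieval: path-averaged rain rate from rain-induced link attenuation via the power law R = a * k^b, with k the specific attenuation in dB/km. The baseline (dry-weather) level, link length and the a, b coefficients (which depend on frequency and polarization) are illustrative placeholders.

```python
import numpy as np

def link_rain_rate(rsl_dbm, baseline_dbm, length_km, a=0.33, b=1.1):
    """rsl_dbm: received signal levels; returns rain rate in mm/h."""
    attenuation_db = np.maximum(baseline_dbm - rsl_dbm, 0.0)  # rain-induced loss
    k = attenuation_db / length_km                            # specific attenuation
    return a * k**b

rsl = np.array([-46.0, -46.5, -49.0, -52.0, -47.0])            # e.g., 15-min samples
print(link_rain_rate(rsl, baseline_dbm=-46.0, length_km=3.2))  # mm/h along the path
```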
Vital Sign Monitoring Through the Back Using an UWB Impulse Radar With Body Coupled Antennas.
Schires, Elliott; Georgiou, Pantelis; Lande, Tor Sverre
2018-04-01
Radar devices can be used in nonintrusive situations to monitor vital signs, through clothes or behind walls. By detecting and extracting body motion linked to physiological activity, accurate simultaneous estimation of both heart rate (HR) and respiration rate (RR) is possible. However, most research to date has focused on front monitoring of superficial motion of the chest. In this paper, body penetration of electromagnetic (EM) waves is investigated to perform back monitoring of human subjects. Using body-coupled antennas and an ultra-wideband (UWB) pulsed radar, in-body monitoring of lung and heart motion was achieved. An optimised location of measurement in the back of a subject is presented, to enhance the signal-to-noise ratio and limit attenuation of reflected radar signals. Phase-based detection techniques are then investigated for back measurements of vital signs, in conjunction with frequency estimation methods that reduce the impact of parasite signals. Finally, an algorithm combining these techniques is presented to allow robust and real-time estimation of both HR and RR. Static and dynamic tests were conducted, and demonstrated the possibility of using this sensor in future health monitoring systems, especially in the form of a smart car seat for driver monitoring.
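A minimal sketch of frequency-domain HR/RR estimation from a radar phase (displacement) signal: band-limited periodogram peaks give the respiration and heart rates. The synthetic signal and band edges are illustrative, and the paper's parasite-signal rejection is not reproduced.

```python
import numpy as np
from scipy.signal import periodogram

fs = 100.0
t = np.arange(0.0, 30.0, 1.0 / fs)
phase = 4.0*np.sin(2*np.pi*0.25*t) + 0.3*np.sin(2*np.pi*1.2*t)  # RR 15/min, HR 72/min
phase += np.random.default_rng(0).normal(0.0, 0.2, t.size)

f, P = periodogram(phase, fs=fs)

def band_peak_bpm(f, P, lo_hz, hi_hz):
    band = (f >= lo_hz) & (f <= hi_hz)
    return 60.0 * f[band][np.argmax(P[band])]

print("RR:", band_peak_bpm(f, P, 0.1, 0.5), "breaths/min")   # ~15
print("HR:", band_peak_bpm(f, P, 0.8, 3.0), "beats/min")     # ~72
```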
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wahid, Ali, E-mail: ali.wahid@live.com; Salim, Ahmed Mohamed Ahmed, E-mail: mohamed.salim@petronas.com.my; Yusoff, Wan Ismail Wan, E-mail: wanismail-wanyusoff@petronas.com.my
2016-02-01
Geostatistics, or the statistical approach, is based on studies of temporal and spatial trends, which depend upon spatial relationships to model known information of variable(s) at unsampled locations. The statistical technique known as kriging was used for petrophysical and facies analysis, which helps to assume a spatial relationship to model the geological continuity between the known data and the unknown, to produce a single best guess of the unknown. Kriging is also known as an optimal interpolation technique, which facilitates generating the best linear unbiased estimation of each horizon. The idea is to construct a numerical model of the lithofacies and rock properties that honors the available data, and to further integrate it with the interpreted seismic sections, the tectonostratigraphy chart with the (short-term) sea level curve, and the regional tectonics of the study area to find the structural and stratigraphic growth history of the NW Bonaparte Basin. By using the kriging technique, models were built which help to estimate different parameters like horizons, facies, and porosities in the study area. The variograms were used to identify the spatial relationships between the data, which help to find the depositional history of the North West (NW) Bonaparte Basin.
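A minimal ordinary-kriging sketch with the PyKrige package, standing in for the workflow described above; the well coordinates, porosity values and the spherical variogram choice are illustrative.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Scattered "well" observations of a property such as porosity (%).
x = np.array([0.5, 1.8, 3.2, 4.9, 2.7, 4.1])
y = np.array([0.7, 3.9, 1.2, 4.4, 2.5, 0.6])
poro = np.array([12.1, 15.4, 11.0, 16.2, 13.8, 10.5])

ok = OrdinaryKriging(x, y, poro, variogram_model="spherical")
gridx = np.linspace(0.0, 5.0, 50)
gridy = np.linspace(0.0, 5.0, 50)
z_hat, ss = ok.execute("grid", gridx, gridy)   # BLUE estimate and kriging variance
```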
Flight of the bumble bee: Buzzes predict pollination services.
Miller-Struttmann, Nicole E; Heise, David; Schul, Johannes; Geib, Jennifer C; Galen, Candace
2017-01-01
Multiple interacting factors drive recent declines in wild and managed bees, threatening their pollination services. Widespread and intensive monitoring could lead to more effective management of wild and managed bees. However, tracking their dynamic populations is costly. We tested the effectiveness of an inexpensive, noninvasive and passive acoustic survey technique for monitoring bumble bee behavior and pollination services. First, we assessed the relationship between the first harmonic of the flight buzz (characteristic frequency) and pollinator functional traits that influence pollination success using flight cage experiments and a literature search. We analyzed passive acoustic survey data from three locations on Pennsylvania Mountain, Colorado to estimate bumble bee activity. We developed an algorithm based on Computational Auditory Scene Analysis that identified and quantified the number of buzzes recorded in each location. We then compared visual and acoustic estimates of bumble bee activity. Using pollinator exclusion experiments, we tested the power of buzz density to predict pollination services at the landscape scale for two bumble bee pollinated alpine forbs (Trifolium dasyphyllum and T. parryi). We found that the characteristic frequency was correlated with traits known to affect pollination efficacy, explaining 30-52% of variation in body size and tongue length. Buzz density was highly correlated with visual estimates of bumble bee density (r = 0.97), indicating that acoustic signals are predictive of bumble bee activity. Buzz density predicted seed set in two alpine forbs when bumble bees were permitted access to the flowers, but not when they were excluded from visiting. Our results indicate that acoustic signatures of flight can be deciphered to monitor bee activity and pollination services to bumble bee pollinated plants. We propose that applications of this technique could assist scientists and farmers in rapidly detecting and responding to bee population declines.
Study of the cell activity in three-dimensional cell culture by using Raman spectroscopy
NASA Astrophysics Data System (ADS)
Arunngam, Pakajiraporn; Mahardika, Anggara; Hiroko, Matsuyoshi; Andriana, Bibin Bintang; Tabata, Yasuhiko; Sato, Hidetoshi
2018-02-01
The purpose of this study is to develop an estimation technique for local cell activity in a cultured 3D cell aggregate with gelatin hydrogel microspheres by using Raman spectroscopy. It is an invaluable technique allowing real-time, nondestructive, and noninvasive measurement. Cells in the body generally exist in a 3D structure, in which physiological cell-cell interactions enhance cell survival and biological functions. Although a 3D cell aggregate is a good model of the cells in living tissues, it was difficult to estimate their physiological conditions because there is no effective technique for observing intact cells in the 3D structure. In this study, cell aggregates were formed from MC3T3-E1 (pre-osteoblast) cells and gelatin hydrogel microspheres. Under appropriate conditions, MC3T3-E1 cells can differentiate into osteoblasts. We assume that the activity of the cells would differ according to their location in the aggregate, because the cells near the surface of the aggregate have more access to oxygen and nutrients. The Raman imaging technique was applied to measure a 3D image of the aggregate. The concentration of hydroxyapatite (HA) generated by osteoblasts was estimated with a strong band at 950-970 cm-1 assigned to PO43- in HA. It reflects the activity of a specific site in the cell aggregate. The cell density at this specific site was analyzed by multivariate analysis of the 3D Raman image. Hence, the ratio between intensity and cell density at the site represents the cell activity.
NASA Astrophysics Data System (ADS)
Eaton, Adam; Vincely, Vinoin; Lloyd, Paige; Hugenberg, Kurt; Vishwanath, Karthik
2017-03-01
Video Photoplethysmography (VPPG) is a numerical technique to process standard RGB video data of exposed human skin and extract the heart-rate (HR) from the skin areas. Being a non-contact technique, VPPG has the potential to provide estimates of a subject's heart-rate, respiratory rate, and even heart-rate variability, with potential applications ranging from infant monitors to remote healthcare and psychological experiments, particularly given the non-contact and sensor-free nature of the technique. Though several previous studies have reported successful correlations between HR obtained using VPPG algorithms and HR measured using the gold-standard electrocardiograph, others have reported that these correlations are dependent on controlling for the duration of the video data analyzed, subject motion, and ambient lighting. Here, we investigate the ability of two commonly used VPPG algorithms to extract human heart-rates under three different laboratory conditions. We compare the VPPG HR values extracted across these three sets of experiments to the gold-standard values acquired by using an electrocardiogram or a commercially available pulse oximeter. The two VPPG algorithms were applied with and without KLT facial feature tracking and detection algorithms from the Computer Vision MATLAB® toolbox. Results indicate that VPPG-based numerical approaches have the ability to provide robust estimates of subject HR values and are relatively insensitive to the devices used to record the video data. However, they are highly sensitive to the conditions of video acquisition, including subject motion, the location, size and averaging techniques applied to regions-of-interest, as well as the number of video frames used for data processing.
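A minimal VPPG sketch: average the green channel over a face region of interest in each frame, detrend the resulting trace, and take the spectral peak in the HR band. The synthetic "video" array, frame rate and fixed ROI are illustrative, and face tracking (e.g., KLT) is omitted.

```python
import numpy as np
from scipy.signal import detrend, periodogram

fps = 30.0
n_frames = 600                                       # 20 s of video
rng = np.random.default_rng(0)
t = np.arange(n_frames) / fps
pulse = 0.5 * np.sin(2 * np.pi * 1.15 * t)           # ~69 bpm blood-volume signal
video = rng.uniform(80, 120, (n_frames, 64, 64, 3))  # frames, H, W, RGB
video[..., 1] += pulse[:, None, None]                # pulse modulates green channel

roi = video[:, 16:48, 16:48, 1]                      # green channel, face ROI
trace = detrend(roi.mean(axis=(1, 2)))               # spatial mean per frame

f, P = periodogram(trace, fs=fps)
band = (f >= 0.7) & (f <= 3.0)                       # 42-180 bpm search band
print("HR ~", 60.0 * f[band][np.argmax(P[band])], "bpm")
```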
Model-based damage evaluation of layered CFRP structures
NASA Astrophysics Data System (ADS)
Munoz, Rafael; Bochud, Nicolas; Rus, Guillermo; Peralta, Laura; Melchor, Juan; Chiachío, Juan; Chiachío, Manuel; Bond, Leonard J.
2015-03-01
An ultrasonic evaluation technique for damage identification of layered CFRP structures is presented. This approach relies on a model-based estimation procedure that combines experimental data and simulation of ultrasonic damage-propagation interactions. The CFRP structure, a [0/90]4s lay-up, has been tested in an immersion through-transmission experiment, where a scan has been performed on a damaged specimen. Most ultrasonic techniques in industrial practice consider only a few features of the received signals, namely time of flight, amplitude, attenuation, frequency contents, and so forth. In this case, once signals are captured, an algorithm is used to reconstruct the complete signal waveform and extract the unknown damage parameters by means of modeling procedures. A linear version of the data processing has been performed, where only the Young modulus has been monitored, and in a second, nonlinear version, the first-order nonlinear coefficient β was incorporated to test the possibility of detecting early damage. The aforementioned physical simulation models are solved by the Transfer Matrix formalism, which has been extended from the linear to the nonlinear harmonic generation technique. The damage parameter search strategy is based on minimizing the mismatch between the captured and simulated signals in the time domain in an automated way using Genetic Algorithms. Processing all scanned locations, a C-scan of the parameter of each layer can be reconstructed, obtaining the information describing the state of each layer and each interface. Damage can be located and quantified in terms of changes in the selected parameter with a measurable extension. In the case of the nonlinear coefficient of first order, evidence of higher sensitivity to damage than imaging the linearly estimated Young modulus is provided.
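A minimal sketch of the search strategy described above: a global optimizer minimizes the time-domain mismatch between a captured signal and a parametric forward model. Here scipy's differential_evolution stands in for the Genetic Algorithm, and a toy damped-cosine "forward model" replaces the Transfer Matrix simulation; the bounds and data are illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution

t = np.linspace(0.0, 1e-5, 500)

def forward_model(params, t):
    """Toy stand-in for the layered-medium simulation:
    (modulus-like frequency scale f, damping d) -> synthetic waveform."""
    f, d = params
    return np.exp(-d * t) * np.cos(2 * np.pi * f * t)

true_params = (2.0e6, 3.0e5)
captured = forward_model(true_params, t)
captured += np.random.default_rng(0).normal(0.0, 0.02, t.size)

def mismatch(params):
    return np.sum((forward_model(params, t) - captured)**2)

result = differential_evolution(mismatch, bounds=[(1e6, 5e6), (1e4, 1e6)], seed=1)
print(result.x)   # ~ [2.0e6, 3.0e5]; repeat per scan location to build a C-scan
```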
NASA Technical Reports Server (NTRS)
Ahn, C.; Ziemke, J. R.; Chandra, S.; Bhartia, P. K.
2002-01-01
A recently developed technique called cloud slicing, used for deriving upper tropospheric ozone from the Nimbus 7 Total Ozone Mapping Spectrometer (TOMS) instrument combined with the Temperature-Humidity Infrared Radiometer (THIR), is no longer applicable to the Earth Probe TOMS (EPTOMS) because EPTOMS does not have an instrument to measure cloud top temperatures. To continue monitoring tropospheric ozone between 200-500 hPa and to test the feasibility of this technique across spacecraft, EPTOMS data are co-located in time and space with Geostationary Operational Environmental Satellite (GOES)-8 infrared data for 2001 and early 2002, covering most of North and South America (45S-45N and 120W-30W). The maximum column amounts for the mid-latitude sites of the northern hemisphere are found in the March-May season. For the mid-latitude sites of the southern hemisphere, the highest column amounts are found in the September-November season, although the overall seasonal variability is smaller than that of the northern hemisphere. The tropical sites show the weakest seasonal variability compared to higher latitudes. The derived results for selected sites are cross-validated qualitatively with the seasonality of ozonesonde observations and the results from THIR analyses over the 1979-1984 time period, due to the lack of ozonesonde measurements available at the study sites for 2001. These comparisons show reasonably good agreement among THIR, ozonesonde observations, and cloud slicing-derived column ozone. With very limited co-located EPTOMS/GOES data sets, the cloud slicing technique is still viable for deriving upper tropospheric column ozone. Two new variant approaches, High-Low (HL) cloud slicing and ozone profile derivation from cloud slicing, are introduced to estimate column ozone amounts using the entire cloud information in the troposphere.
De Rosario, Helios; Page, Álvaro; Besa, Antonio
2017-09-06
The accurate location of the main axes of rotation (AoR) is a crucial step in many applications of human movement analysis. There are different formal methods to determine the direction and position of the AoR, whose performance varies across studies, depending on the pose and the source of errors. Most methods are based on minimizing squared differences between observed and modelled marker positions or rigid motion parameters, implicitly assuming independent and uncorrelated errors, but the largest error usually results from soft tissue artefacts (STA), which do not have such statistical properties and are not effectively cancelled out by such methods. However, with adequate methods it is possible to assume that STA only account for a small fraction of the observed motion and to obtain explicit formulas through differential analysis that relate STA components to the resulting errors in AoR parameters. In this paper such formulas are derived for three different functional calibration techniques (Geometric Fitting, mean Finite Helical Axis, and SARA), to explain why each technique behaves differently from the others, and to propose strategies to compensate for those errors. These techniques were tested with published data from a sit-to-stand activity, where the true axis was defined using bi-planar fluoroscopy. All the methods were able to estimate the direction of the AoR with an error of less than 5°, whereas there were errors in the location of the axis of 30-40mm. Such location errors could be reduced to less than 17mm by the methods based on equations that use rigid motion parameters (mean Finite Helical Axis, SARA) when the translation component was calculated using the three markers nearest to the axis. Copyright © 2017 Elsevier Ltd. All rights reserved.
Hoenner, Xavier; Whiting, Scott D.; Hindell, Mark A.; McMahon, Clive R.
2012-01-01
Accurately quantifying animals’ spatial utilisation is critical for conservation, but has long remained an elusive goal due to technological impediments. The Argos telemetry system has been extensively used to remotely track marine animals, however location estimates are characterised by substantial spatial error. State-space models (SSM) constitute a robust statistical approach to refine Argos tracking data by accounting for observation errors and stochasticity in animal movement. Despite their wide use in ecology, few studies have thoroughly quantified the error associated with SSM predicted locations and no research has assessed their validity for describing animal movement behaviour. We compared home ranges and migratory pathways of seven hawksbill sea turtles (Eretmochelys imbricata) estimated from (a) highly accurate Fastloc GPS data and (b) locations computed using common Argos data analytical approaches. Argos 68th percentile error was <1 km for LC 1, 2, and 3 while markedly less accurate (>4 km) for LC ≤0. Argos error structure was highly longitudinally skewed and was, for all LC, adequately modelled by a Student’s t distribution. Both habitat use and migration routes were best recreated using SSM locations post-processed by re-adding good Argos positions (LC 1, 2 and 3) and filtering terrestrial points (mean distance to migratory tracks ± SD = 2.2±2.4 km; mean home range overlap and error ratio = 92.2% and 285.6 respectively). This parsimonious and objective statistical procedure however still markedly overestimated true home range sizes, especially for animals exhibiting restricted movements. Post-processing SSM locations nonetheless constitutes the best analytical technique for remotely sensed Argos tracking data and we therefore recommend using this approach to rework historical Argos datasets for better estimation of animal spatial utilisation for research and evidence-based conservation purposes. PMID:22808241
The Determination of Oil Slick Thickness By Means of Multifrequency Passive Microwave Techniques
1974-06-30
on an all-weather, day or night, and real-time basis. As such it should prove a useful tool in the confinement, control, and clean up of marine oil ... of thickness. This approach has the attraction that it readily lends itself to a real-time, on-board estimation of oil slick volume using an ... surface oil slicks, locate the thick regions, and measure their thickness and volume on an all-weather, day or night, and real-time basis. As such it
Neutron dosimetry of the Little Boy device
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pederson, R.A.; Plassmann, E.A.
1984-01-01
Neutron dose rates at several angular locations and at distances out to 0.5 mile have been measured during critical operation of the Little Boy replica. We used modified remmeters and thermoluminescent dosimetry techniques for the measurements. The present status of our analysis is presented, including estimates of the neutron-dose-relaxation length in air and the variation of the neutron-to-gamma-ray dose ratio with distance from the replica. These results are preliminary and are subject to detector calibration measurements.
Subsurface structures of buried features in the lunar Procellarum region
NASA Astrophysics Data System (ADS)
Wang, Wenrui; Heki, Kosuke
2017-07-01
The Gravity Recovery and Interior Laboratory (GRAIL) mission revealed a number of features showing strong gravity anomalies without prominent topographic signatures in the lunar Procellarum region. These features, located in different geologic units, are considered to have complex subsurface structures reflecting different evolution processes. By using the GRAIL level-1 data, we estimated the free-air and Bouguer gravity anomalies in several selected regions including such intriguing features. With a three-dimensional inversion technique, we recovered subsurface density structures in these regions.
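A minimal sketch of the slab Bouguer correction that relates the free-air anomaly to subsurface density structure: g_bouguer = g_freeair - 2*pi*G*rho*h. The crustal density and topographic height are illustrative; GRAIL analyses use spherical-harmonic gravity fields rather than this flat-slab approximation.

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def bouguer_anomaly(free_air_mgal, rho_kg_m3, topo_m):
    """Subtract the infinite-slab attraction of topography (1 mGal = 1e-5 m/s^2)."""
    slab_mgal = 2.0 * math.pi * G * rho_kg_m3 * topo_m * 1e5
    return free_air_mgal - slab_mgal

print(bouguer_anomaly(free_air_mgal=120.0, rho_kg_m3=2560.0, topo_m=800.0))  # ~34 mGal
```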
Artes, Paul H; Hutchison, Donna M; Nicolela, Marcelo T; LeBlanc, Raymond P; Chauhan, Balwantray C
2005-07-01
To compare test results from second-generation Frequency-Doubling Technology perimetry (FDT2, Humphrey Matrix; Carl-Zeiss Meditec, Dublin, CA) and standard automated perimetry (SAP) in patients with glaucoma. Specifically, to examine the relationship between visual field sensitivity and test-retest variability and to compare total and pattern deviation probability maps between both techniques. Fifteen patients with glaucoma who had early to moderately advanced visual field loss with SAP (mean MD, -4.0 dB; range, +0.2 to -16.1) were enrolled in the study. Patients attended three sessions. During each session, one eye was examined twice with FDT2 (24-2 threshold test) and twice with SAP (Swedish Interactive Threshold Algorithm [SITA] Standard 24-2 test), in random order. We compared threshold values between FDT2 and SAP at test locations with similar visual field coordinates. Test-retest variability, established in terms of test-retest intervals and standard deviations (SDs), was investigated as a function of visual field sensitivity (estimated by baseline threshold and mean threshold, respectively). The magnitude of visual field defects apparent in total and pattern deviation probability maps was compared between both techniques by ordinal scoring. The global visual field indices mean deviation (MD) and pattern standard deviation (PSD) of FDT2 and SAP correlated highly (r > 0.8; P < 0.001). At test locations with high sensitivity (>25 dB with SAP), threshold estimates from FDT2 and SAP exhibited a close, linear relationship, with a slope of approximately 2.0. However, at test locations with lower sensitivity, the relationship was much weaker and ceased to be linear. In comparison with FDT2, SAP showed a slightly larger proportion of test locations with absolute defects (3.0% vs. 2.2% with SAP and FDT2, respectively, P < 0.001). Whereas SAP showed a significant increase in test-retest variability at test locations with lower sensitivity (P < 0.001), there was no relationship between variability and sensitivity with FDT2 (P = 0.46). In comparison with SAP, FDT2 exhibited narrower test-retest intervals at test locations with lower sensitivity (SAP thresholds <25 dB). A comparison of the total and pattern deviation maps between both techniques showed that the total deviation analyses of FDT2 may slightly underestimate the visual field loss apparent with SAP. However, the pattern deviation maps of both instruments agreed well with each other. The test-retest variability of FDT2 is uniform over the measurement range of the instrument. These properties may provide advantages for the monitoring of patients with glaucoma that should be investigated in longitudinal studies.
Garonne River monitoring from Signal-to-Noise Ratio data collected by a single geodetic receiver
NASA Astrophysics Data System (ADS)
Roussel, Nicolas; Frappart, Frédéric; Darrozes, José; Ramillien, Guillaume; Bonneton, Philippe; Bonneton, Natalie; Detandt, Guillaume; Roques, Manon; Orseau, Thomas
2016-04-01
GNSS-Reflectometry (GNSS-R) altimetry has demonstrated a strong potential for water level monitoring over the last decades. The Interference Pattern Technique (IPT), based on the analysis of the Signal-to-Noise Ratio (SNR) estimated by a GNSS receiver, presents the main advantage of being applicable everywhere using a single geodetic antenna and a classical GNSS receiver. This technique has already been tested in various configurations of acquisition of surface-reflected GNSS signals, with an accuracy of a few centimeters. Nevertheless, the classical SNR analysis method used to estimate the variations of the reflecting surface height h(t) has a limited domain of validity because the variation rate dh/dt(t) is assumed to be negligible. In [1], the authors solved this problem with a "dynamic SNR method" that takes the dynamics of the surface into account to jointly estimate h(t) and dh/dt(t) over areas characterized by high tidal amplitudes. While the performance of this dynamic SNR method is well established for ocean monitoring [1], it had not been validated in continental settings (i.e., river monitoring). We carried out a field study over 3 days in August and September 2015, using a GNSS antenna to measure water level variations in the Garonne River (France) at Podensac, located 140 km upstream of the estuary mouth. At this site, the semi-diurnal tide amplitude reaches ~5 m. The antenna was located ~10 m above the water surface, and reflections of the GNSS electromagnetic waves on the Garonne River occur up to 140 m from the antenna. Both the classical SNR method and the dynamic SNR method are tested and their results compared. [1] N. Roussel, G. Ramillien, F. Frappart, J. Darrozes, A. Gay, R. Biancale, N. Striebig, V. Hanquiez, X. Bertin, D. Allain: "Sea level monitoring and sea state estimate using a single geodetic receiver", Remote Sensing of Environment 171 (2015) 261-277.
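A minimal sketch of the classical (static) SNR analysis, assuming dh/dt ≈ 0: the detrended SNR oscillates as cos(4πh/λ·sin(e)), so a Lomb-Scargle periodogram over candidate heights recovers the antenna height h above the reflecting surface. The function name, height search range and GPS L1 wavelength default are illustrative choices, not from the paper; the dynamic method of [1] additionally estimates dh/dt(t).

```python
import numpy as np
from scipy.signal import lombscargle

def reflector_height(sin_elev, snr_detrended, wavelength=0.1903, h_range=(1.0, 20.0)):
    """Static IPT height retrieval: the detrended SNR behaves as
    cos(4*pi*h/lambda * sin(e)), so each candidate height h maps to an
    angular frequency omega = 4*pi*h/lambda with respect to sin(e)."""
    heights = np.linspace(h_range[0], h_range[1], 2000)   # candidate heights (m)
    omegas = 4.0 * np.pi * heights / wavelength           # angular frequencies
    power = lombscargle(sin_elev, snr_detrended, omegas)  # periodogram power
    return heights[np.argmax(power)]                      # dominant height
```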
Estimating the costs of tsetse control options: an example for Uganda.
Shaw, A P M; Torr, S J; Waiswa, C; Cecchi, G; Wint, G R W; Mattioli, R C; Robinson, T P
2013-07-01
Decision-making and financial planning for tsetse control is complex, with a particularly wide range of choices to be made on location, timing, strategy and methods. This paper presents full cost estimates for eliminating or continuously controlling tsetse in a hypothetical area of 10,000 km2 located in south-eastern Uganda. Four tsetse control techniques were analysed: (i) artificial baits (insecticide-treated traps/targets), (ii) insecticide-treated cattle (ITC), (iii) aerial spraying using the sequential aerosol technique (SAT) and (iv) the addition of the sterile insect technique (SIT) to the insecticide-based methods (i-iii). For the creation of fly-free zones and using a 10% discount rate, the field costs per km2 came to US$283 for traps (4 traps per km2), US$30 for ITC (5 treated cattle per km2 using restricted application), US$380 for SAT and US$758 for adding SIT. The inclusion of entomological and other preliminary studies plus administrative overheads adds substantially to the overall cost, so that the total costs become US$482 for traps, US$220 for ITC, US$552 for SAT and US$993-1365 if SIT is added following suppression using another method. These basic costs would apply to trouble-free operations dealing with isolated tsetse populations. Estimates were also made for non-isolated populations, allowing for a barrier covering 10% of the intervention area, maintained for 3 years. Where traps were used as a barrier, the total cost of elimination increased by between 29% and 57%, and for ITC barriers the increase was between 12% and 30%. In the case of continuous tsetse control operations, costs were estimated over a 20-year period and discounted at 10%. Total costs per km2 came to US$368 for ITC, US$2114 for traps, all deployed continuously, and US$2442 for SAT applied at 3-year intervals. The lower costs compared favourably with the regular treatment of cattle with prophylactic trypanocides (US$3862 per km2 assuming four doses per annum at 45 cattle per km2). Throughout the study, sensitivity analyses were conducted to explore the impact on cost estimates of different densities of ITC and traps, costs of baseline studies and discount rates. The present analysis highlights the cost differentials between the different intervention techniques, whilst attesting to the significant progress made over the years in reducing field costs. Results indicate that continuous control activities can be cost-effective in reducing tsetse populations, especially where the creation of fly-free zones is challenging and reinvasion pressure high. Copyright © 2013 Food and Agriculture Organization of the United Nations. Published by Elsevier B.V. All rights reserved.
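The discounting behind the 20-year continuous-control figures is the standard present-value sum. The sketch below shows the computation; the US$40/km2/yr input is a hypothetical illustration, not a figure from the paper.

```python
def present_value(annual_costs, rate=0.10):
    """Discounted total of a stream of annual costs: the cost incurred in
    year t is divided by (1 + rate)**t, with year 0 undiscounted."""
    return sum(c / (1.0 + rate) ** t for t, c in enumerate(annual_costs))

# Hypothetical example: 20 years of control at US$40 per km2 per year,
# discounted at 10%, comes to roughly US$375 per km2 in present-value terms.
itc_like_total = present_value([40.0] * 20, rate=0.10)
```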
NASA Astrophysics Data System (ADS)
England, John F.; Julien, Pierre Y.; Velleux, Mark L.
2014-03-01
Traditionally, deterministic flood procedures such as the Probable Maximum Flood have been used for critical infrastructure design. Some Federal agencies now use hydrologic risk analysis to assess potential impacts of extreme events on existing structures such as large dams. Extreme flood hazard estimates and distributions are needed for these efforts, with very low annual exceedance probabilities (≤10^-4; return periods >10,000 years). An integrated data-modeling hydrologic hazard framework for physically-based extreme flood hazard estimation is presented. Key elements include: (1) a physically-based runoff model (TREX) coupled with a stochastic storm transposition technique; (2) hydrometeorological information from radar and an extreme storm catalog; and (3) streamflow and paleoflood data for independently testing and refining runoff model predictions at internal locations. This new approach requires full integration of collaborative work in hydrometeorology, flood hydrology and paleoflood hydrology. An application on the 12,000 km2 Arkansas River watershed in Colorado demonstrates that the size and location of extreme storms are critical factors in the analysis of basin-average rainfall frequency and flood peak distributions. Runoff model results are substantially improved by the availability and use of paleoflood nonexceedance data spanning the past 1000 years at critical watershed locations.
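A stochastic storm transposition step of the kind coupled to the runoff model can be sketched as a Monte Carlo loop: draw a storm from the catalog, shift it to a random admissible position, and record the basin-average depth. All names here are hypothetical, the grid wrap-around via np.roll is a simplification, and real implementations weight transposition positions by storm-occurrence probability.

```python
import numpy as np

rng = np.random.default_rng(42)

def transposed_basin_depths(storm_grids, basin_mask, n_samples=10_000, max_shift=50):
    """Monte Carlo storm transposition: sample a catalog storm, displace it
    randomly within the transposition domain, and record the resulting
    basin-average rainfall depth for frequency analysis."""
    depths = np.empty(n_samples)
    for i in range(n_samples):
        storm = storm_grids[rng.integers(len(storm_grids))]       # pick a storm
        dx, dy = rng.integers(-max_shift, max_shift + 1, size=2)  # random shift
        shifted = np.roll(np.roll(storm, dy, axis=0), dx, axis=1)
        depths[i] = shifted[basin_mask].mean()                    # basin average
    return np.sort(depths)[::-1]  # descending, ready for exceedance plotting
```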
NASA Astrophysics Data System (ADS)
O'Connor, M.; Eads, R.
2007-12-01
Watersheds in the northern California Coast Range have been designated as "impaired" with respect to water quality because of excessive sediment loads and/or high water temperature. Sediment budget techniques have typically been used by regulatory authorities to estimate current erosion rates and to develop targets for future desired erosion rates. This study examines erosion rates estimated by various methods for portions of the Gualala River watershed, designated as having water quality impaired by sediment under provisions of the Clean Water Act Section 303(d), located in northwest Sonoma County (~90 miles north of San Francisco). The watershed is underlain by Jurassic-age sedimentary and meta-sedimentary rocks of the Franciscan formation. The San Andreas Fault passes through the western edge of the watershed, and other active faults are present. A substantial portion of the watershed is mantled by rock slides and earth flows, many of which are considered dormant. The Coast Range is geologically young, and rapid rates of uplift are believed to have contributed to high erosion rates. This study compares quantitative erosion rate estimates developed at different spatial and temporal scales. It is motivated by a proposed vineyard development project in the watershed, and the need to document conditions in the project area, assess project environmental impacts and meet regulatory requirements pertaining to water quality. Erosion rate estimates were previously developed using sediment budget techniques for relatively large drainage areas (~100 to 1,000 km2) by the North Coast Regional Water Quality Control Board and US EPA and by the California Geological Survey. In this study, similar sediment budget techniques were used for smaller watersheds (~3 to 8 km2), supplemented by a suspended sediment monitoring program utilizing Turbidity Threshold Sampling techniques (as described in a companion study in this session). The monitoring program to date has spanned the winter runoff seasons of Water Years 2006 and 2007. These were unusually wet and dry years, respectively, providing perspective on the range of measured sediment yield in relation to sediment budget estimates. The measured suspended sediment yields were substantially lower than predicted by sediment budget methods. Variation in geomorphic processes over time and space and methodological problems of sediment budgets may be responsible for these apparent discrepancies. The implications for water quality policy are discussed.
NASA Astrophysics Data System (ADS)
Herman, Matthew R.; Nejadhashemi, A. Pouyan; Abouali, Mohammad; Hernandez-Suarez, Juan Sebastian; Daneshvar, Fariborz; Zhang, Zhen; Anderson, Martha C.; Sadeghi, Ali M.; Hain, Christopher R.; Sharifi, Amirreza
2018-01-01
As the global demand for freshwater resources continues to rise, it has become increasingly important to ensure the sustainability of this resource. This is accomplished through management strategies that often rely on monitoring and hydrological models. However, monitoring at large scales is not feasible, and model applications are therefore becoming challenging, especially when spatially distributed datasets, such as evapotranspiration, are needed to understand model performance. Due to these limitations, most hydrological models are calibrated only against data obtained from site/point observations, such as streamflow. Therefore, the main focus of this paper is to examine whether the incorporation of remotely sensed, spatially distributed datasets can improve the overall performance of the model. In this study, actual evapotranspiration (ETa) data was obtained from two different satellite-based remote sensing datasets. One dataset estimates ETa based on the Simplified Surface Energy Balance (SSEBop) model while the other estimates ETa based on the Atmosphere-Land Exchange Inverse (ALEXI) model. The hydrological model used in this study is the Soil and Water Assessment Tool (SWAT), which was calibrated against spatially distributed ETa and single-point streamflow records for the Honeyoey Creek-Pine Creek Watershed, located in Michigan, USA. Two different techniques, multi-variable and genetic algorithm, were used to calibrate the SWAT model. Using the aforementioned datasets, the performance of the hydrological model in estimating ETa was improved using both calibration techniques by achieving Nash-Sutcliffe efficiency (NSE) values >0.5 (0.73-0.85), percent bias (PBIAS) values within ±25% (±21.73%), and root mean squared error - observations standard deviation ratio (RSR) values <0.7 (0.39-0.52). However, the genetic algorithm technique was more effective with the ETa calibration while significantly reducing the model performance for estimating the streamflow (NSE: 0.32-0.52, PBIAS: ±32.73%, and RSR: 0.63-0.82). Meanwhile, using the multi-variable technique, the model performance for estimating the streamflow was maintained with a high level of accuracy (NSE: 0.59-0.61, PBIAS: ±13.70%, and RSR: 0.63-0.64) while the evapotranspiration estimations were improved. Results from this assessment show that incorporating remotely sensed, spatially distributed data can improve hydrological model performance if coupled with the right calibration technique.
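The three performance statistics quoted above have standard definitions, which the following sketch computes; the function names are ours, not from the paper.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias; positive values indicate model underestimation."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def rsr(obs, sim):
    """RMSE normalized by the standard deviation of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((obs - sim) ** 2)) / obs.std()
```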
NASA Astrophysics Data System (ADS)
Chahal, M. K.; Brown, D. J.; Brooks, E. S.; Campbell, C.; Cobos, D. R.; Vierling, L. A.
2012-12-01
Estimating soil moisture content continuously over space and time using geo-statistical techniques supports the refinement of process-based watershed hydrology models and the application of soil process models (e.g. biogeochemical models predicting greenhouse gas fluxes) to complex landscapes. In this study, we model soil profile volumetric moisture content for five agricultural fields with loess soils in the Palouse region of Eastern Washington and Northern Idaho. Using a combination of stratification and space-filling techniques, we selected 42 representative and distributed measurement locations in the Cook Agronomy Farm (Pullman, WA) and 12 locations each in four additional grower fields that span the precipitation gradient across the Palouse. At each measurement location, soil moisture was measured on an hourly basis at five different depths (30, 60, 90, 120, and 150 cm) using Decagon 5-TE/5-TM soil moisture sensors (Decagon Devices, Pullman, WA, USA). These data were collected over three years for the Cook Agronomy Farm and one year for each of the grower fields. In addition to ordinary kriging, we explored the correlation of volumetric water content with external, spatially exhaustive indices derived from terrain models, optical remote sensing imagery, and proximal soil sensing data (electromagnetic induction and VisNIR penetrometer).
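For reference, ordinary kriging of point observations reduces to solving a small linear system per prediction location. A minimal sketch with an exponential variogram follows; the variogram parameters are placeholders that would be fitted to the sensor data, not values from the study.

```python
import numpy as np

def ordinary_kriging(xy_obs, z_obs, xy_pred, sill=1.0, range_m=500.0, nugget=0.0):
    """Ordinary kriging with an exponential variogram
    gamma(h) = nugget + sill*(1 - exp(-3h/range)). Solves the OK system
    [Gamma 1; 1' 0][w; mu] = [gamma0; 1] at each prediction point."""
    def gamma(h):
        g = nugget + sill * (1.0 - np.exp(-3.0 * h / range_m))
        return np.where(h == 0.0, 0.0, g)  # gamma(0) = 0 by definition
    n = len(z_obs)
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0
    out = np.empty(len(xy_pred))
    for i, p in enumerate(xy_pred):
        b = np.append(gamma(np.linalg.norm(xy_obs - p, axis=1)), 1.0)
        w = np.linalg.solve(A, b)          # kriging weights plus Lagrange mult.
        out[i] = w[:n] @ z_obs
    return out
```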
Estimated areal extent of colonies of black-tailed prairie dogs in the northern Great Plains
Sidle, John G.; Johnson, Douglas H.; Euliss, Betty R.
2001-01-01
During 1997–1998, we undertook an aerial survey, with an aerial line-intercept technique, to estimate the extent of colonies of black-tailed prairie dogs (Cynomys ludovicianus) in the northern Great Plains states of Nebraska, North Dakota, South Dakota, and Wyoming. We stratified the survey based on knowledge of colony locations, computed 2 types of estimates for each stratum, and combined ratio estimates for high-density strata with average density estimates for low-density strata. Estimates of colony areas for black-tailed prairie dogs were derived from the average percentages of lines intercepting prairie dog colonies and ratio estimators. We selected the best estimator based on the correlation between length of transect line and length of intercepted colonies. Active colonies of black-tailed prairie dogs occupied 2,377.8 km2 ± 186.4 SE, whereas inactive colonies occupied 560.4 ± 89.2 km2. These data represent the 1st quantitative assessment of black-tailed prairie dog colonies in the northern Great Plains. The survey dispels popular notions that millions of hectares of colonies of black-tailed prairie dogs exist in the northern Great Plains and can form the basis for future survey efforts.
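The ratio estimator underlying the survey is simple: the fraction of total transect length that intercepts colonies estimates the fraction of the stratum occupied. A sketch under that assumption (names ours; no survey-design corrections such as stratum weighting or variance estimation):

```python
import numpy as np

def ratio_area_estimate(transect_lengths, intercept_lengths, stratum_area):
    """Line-intercept ratio estimator: total intercepted colony length over
    total transect length approximates the proportion of the stratum covered
    by colonies; scaling by stratum area gives total colony area."""
    L = np.asarray(transect_lengths, float)   # flown transect lengths
    y = np.asarray(intercept_lengths, float)  # colony length intercepted per line
    return (y.sum() / L.sum()) * stratum_area
```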
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, Daniel S.; Linte, Cristian; Chen, Elvis C. S.
Purpose: Although robot-assisted coronary artery bypass grafting (RA-CABG) has gained more acceptance worldwide, its success still depends on the surgeon's experience and expertise, and the conversion rate to full sternotomy is in the order of 15%-25%. One of the reasons for conversion is poor pre-operative planning, which is based solely on pre-operative computed tomography (CT) images. In this paper, the authors propose a technique to estimate the global peri-operative displacement of the heart and to predict the intra-operative target vessel location, validated via both an in vitro and a clinical study. Methods: As peri-operative heart migration during RA-CABG has never been reported in the literature, a simple in vitro validation study was conducted using a heart phantom. To mimic the clinical workflow, a pre-operative CT as well as peri-operative ultrasound images at three different stages in the procedure (Stage 0, following intubation; Stage 1, following lung deflation; and Stage 2, following thoracic insufflation) were acquired during the experiment. Following image acquisition, a rigid-body registration using the iterative closest point algorithm with a robust estimator was employed to map the pre-operative stage to each of the peri-operative ones, to estimate the heart migration and predict the peri-operative target vessel location. Moreover, a clinical validation of this technique was conducted using offline patient data, where a Monte Carlo simulation was used to overcome the limitations arising from the invisibility of the target vessel in the peri-operative ultrasound images. Results: For the in vitro study, the computed target registration error (TRE) at Stage 0, Stage 1, and Stage 2 was 2.1, 3.3, and 2.6 mm, respectively. According to the offline clinical validation study, the maximum TRE at the left anterior descending (LAD) coronary artery was 4.1 mm at Stage 0, 5.1 mm at Stage 1, and 3.4 mm at Stage 2. Conclusions: The authors proposed a method to measure and validate peri-operative shifts of the heart during RA-CABG. In vitro and clinical validation studies were conducted and yielded a TRE in the order of 5 mm for all cases. As the desired clinical accuracy imposed by this procedure is on the order of one intercostal space (10-15 mm), the technique suits the clinical requirements. The authors therefore believe this technique has the potential to improve pre-operative planning by updating peri-operative migration patterns of the heart and, consequently, will lead to reduced conversion to conventional open thoracic procedures.
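The registration step is standard point-to-point ICP. A compact sketch follows (nearest-neighbour matching plus a closed-form SVD/Kabsch rigid fit); it omits the robust estimator the authors mention, which would down-weight outlying matches.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=50):
    """Point-to-point ICP: repeatedly match each source point to its nearest
    target point, then solve the optimal rigid transform in closed form.
    Returns R, t such that target ~ source @ R.T + t."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(src)                   # nearest-neighbour matches
        matched = target[idx]
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)      # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                         # proper rotation (no reflection)
        t = mu_m - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```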
Prych, Edmund A.
1995-01-01
Long-term average deep-percolation rates of water from precipitation on the U.S. Department of Energy Hanford Site in semiarid south-central Washington, as estimated by a chloride mass-balance method, range from 0.008 to 0.30 mm/yr (millimeters per year) at nine locations covered by a variety of fine-grained soils and vegetated with sagebrush and other deep-rooted plants plus sparse shallow-rooted grasses. Deep-percolation rates estimated using a chlorine-36 bomb-pulse method at three of the nine locations range from 2.1 to 3.4 mm/yr. Because the mass-balance method may underestimate percolation rates and the bomb-pulse method probably overestimates percolation rates, estimates by the two methods probably bracket actual rates. These estimates, as well as estimates by previous investigators who used different methods, are a small fraction of mean annual precipitation, which ranges from about 160 to 210 mm/yr at the different test locations. Estimates by the mass-balance method at four locations in an area that is vegetated only with sparse shallow-rooted grasses range from 0.39 to 2.0 mm/yr. Chlorine-36 data at one location in this area were sufficient only to determine that the upper limit of deep percolation is more than 5.1 mm/yr. Although estimates for locations in this area are larger than the estimates for locations with deep-rooted plants, they are at the lower end of the range of estimates for this area made by previous investigators.
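The chloride mass-balance arithmetic is a one-liner: at steady state, the chloride flux delivered by precipitation equals the flux carried downward by percolating water. The numbers in the example are hypothetical, chosen only to show the order of magnitude, not values from the report.

```python
def deep_percolation(precip_mm_yr, cl_precip_mg_L, cl_porewater_mg_L):
    """Chloride mass balance: P * Cl_p = q * Cl_pw at steady state, so the
    deep-percolation rate is q = P * Cl_p / Cl_pw (same units as P)."""
    return precip_mm_yr * cl_precip_mg_L / cl_porewater_mg_L

# Hypothetical inputs: 185 mm/yr precipitation, 0.35 mg/L chloride in
# precipitation plus dry fallout, 650 mg/L chloride in soil pore water.
q = deep_percolation(185.0, 0.35, 650.0)   # ~0.1 mm/yr
```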
Silva, Mónica A.; Jonsen, Ian; Russell, Deborah J. F.; Prieto, Rui; Thompson, Dave; Baumgartner, Mark F.
2014-01-01
Argos recently implemented a new algorithm to calculate locations of satellite-tracked animals that uses a Kalman filter (KF). The KF algorithm is reported to increase the number and accuracy of estimated positions over the traditional Least Squares (LS) algorithm, with potential advantages to the application of state-space methods to model animal movement data. We tested the performance of two Bayesian state-space models (SSMs) fitted to satellite tracking data processed with KF algorithm. Tracks from 7 harbour seals (Phoca vitulina) tagged with ARGOS satellite transmitters equipped with Fastloc GPS loggers were used to calculate the error of locations estimated from SSMs fitted to KF and LS data, by comparing those to “true” GPS locations. Data on 6 fin whales (Balaenoptera physalus) were used to investigate consistency in movement parameters, location and behavioural states estimated by switching state-space models (SSSM) fitted to data derived from KF and LS methods. The model fit to KF locations improved the accuracy of seal trips by 27% over the LS model. 82% of locations predicted from the KF model and 73% of locations from the LS model were <5 km from the corresponding interpolated GPS position. Uncertainty in KF model estimates (5.6±5.6 km) was nearly half that of LS estimates (11.6±8.4 km). Accuracy of KF and LS modelled locations was sensitive to precision but not to observation frequency or temporal resolution of raw Argos data. On average, 88% of whale locations estimated by KF models fell within the 95% probability ellipse of paired locations from LS models. Precision of KF locations for whales was generally higher. Whales’ behavioural mode inferred by KF models matched the classification from LS models in 94% of the cases. State-space models fit to KF data can improve spatial accuracy of location estimates over LS models and produce equally reliable behavioural estimates. PMID:24651252
Surface Location In Scene Content Analysis
NASA Astrophysics Data System (ADS)
Hall, E. L.; Tio, J. B. K.; McPherson, C. A.; Hwang, J. J.
1981-12-01
The purpose of this paper is to describe techniques and algorithms for the location in three dimensions of planar and curved object surfaces using a computer vision approach. Stereo imaging techniques are demonstrated for planar object surface location using automatic segmentation, vertex location, and relational table matching. For curved surfaces, the location of corresponding points is very difficult. However, an example using a grid projection technique for locating the surface of a curved cup is presented to illustrate a solution. This method consists of first obtaining the perspective transformation matrices from the images, then using these matrices to compute the three-dimensional locations of the grid points on the surface. These techniques may be used in object location for such applications as missile guidance, robotics, and medical diagnosis and treatment.
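The second step described above, recovering 3-D points from the perspective matrices, is the classical linear (DLT) triangulation. A sketch for two views, assuming 3x4 projection matrices and pixel coordinates as inputs:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: an image point (u, v) under a 3x4
    perspective matrix P gives two homogeneous equations,
    u*(p3 . X) - p1 . X = 0 and v*(p3 . X) - p2 . X = 0; stacking both
    views and taking the SVD null vector yields the 3-D point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]   # right singular vector of smallest value
    return X[:3] / X[3]           # dehomogenize
```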
Convolution-based estimation of organ dose in tube current modulated CT
NASA Astrophysics Data System (ADS)
Tian, Xiaoyu; Segars, W. Paul; Dixon, Robert L.; Samei, Ehsan
2016-05-01
Estimating organ dose for clinical patients requires accurate modeling of the patient anatomy and the dose field of the CT exam. The modeling of patient anatomy can be achieved using a library of representative computational phantoms (Samei et al 2014 Pediatr. Radiol. 44 460-7). The modeling of the dose field can be challenging for CT exams performed with a tube current modulation (TCM) technique. The purpose of this work was to effectively model the dose field for TCM exams using a convolution-based method. A framework was further proposed for prospective and retrospective organ dose estimation in clinical practice. The study included 60 adult patients (age range: 18-70 years, weight range: 60-180 kg). Patient-specific computational phantoms were generated based on patient CT image datasets. A previously validated Monte Carlo simulation program was used to model a clinical CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). A practical strategy was developed to achieve real-time organ dose estimation for a given clinical patient. CTDIvol-normalized organ dose coefficients, h_organ, under constant tube current were estimated and modeled as a function of patient size. Each clinical patient in the library was optimally matched to another computational phantom to obtain a representation of organ location/distribution. The patient organ distribution was convolved with a dose distribution profile to generate (CTDIvol)_organ,convolution values that quantified the regional dose field for each organ. The organ dose was estimated by multiplying (CTDIvol)_organ,convolution by the organ dose coefficients h_organ. To validate the accuracy of this dose estimation technique, the organ dose of the original clinical patient was estimated using the Monte Carlo program with TCM profiles explicitly modeled. The discrepancy between the estimated organ dose and the dose simulated using the TCM Monte Carlo program was quantified. We further compared the convolution-based organ dose estimation method with two other strategies with different approaches of quantifying the irradiation field. The proposed convolution-based estimation method showed good accuracy with respect to the organ dose simulated using the TCM Monte Carlo simulation. The average percentage error (normalized by CTDIvol) was generally within 10% across all organs and modulation profiles, except for organs located in the pelvic and shoulder regions. This study developed an improved method that accurately quantifies the irradiation field under TCM scans. The results suggested that organ dose could be estimated in real-time both prospectively (with the localizer information only) and retrospectively (with acquired CT data).
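For one organ at its matched position, the final multiplication reduces to weighting the z-resolved CTDIvol profile of the TCM scan by the organ's distribution along z. The sketch below is a deliberately simplified illustration of that idea with hypothetical names; the paper's full method matches patients to library phantoms and convolves distributions rather than taking a single weighted sum.

```python
import numpy as np

def organ_dose(ctdi_profile_z, organ_density_z, h_organ):
    """Weight the z-resolved CTDIvol profile by the organ's normalized
    distribution along z to get a regional (CTDIvol)_organ value, then
    scale by the CTDIvol-normalized organ dose coefficient h_organ."""
    w = organ_density_z / organ_density_z.sum()   # normalized organ distribution
    ctdi_organ = np.sum(ctdi_profile_z * w)       # regional dose-field measure
    return h_organ * ctdi_organ
```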
Linear estimation of coherent structures in wall-bounded turbulence at Re τ = 2000
NASA Astrophysics Data System (ADS)
Oehler, S.; Garcia–Gutiérrez, A.; Illingworth, S.
2018-04-01
The estimation problem for a fully-developed turbulent channel flow at Re τ = 2000 is considered. Specifically, a Kalman filter is designed using a Navier–Stokes-based linear model. The estimator uses time-resolved velocity measurements at a single wall-normal location (provided by DNS) to estimate the time-resolved velocity field at other wall-normal locations. The estimator is able to reproduce the largest scales with reasonable accuracy for a range of wavenumber pairs, measurement locations and estimation locations. Importantly, the linear model is also able to predict with reasonable accuracy the performance that will be achieved by the estimator when applied to the DNS. A more practical estimation scheme using the shear stress at the wall as measurement is also considered. The estimator is still able to estimate the largest scales with reasonable accuracy, although the estimator’s performance is reduced.
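The estimator structure is the standard linear observer: propagate the state estimate through the model and correct it with the measurement innovation. One update step, in predictor form (variable names ours):

```python
import numpy as np

def estimator_step(xhat, y, A, C, L):
    """One step of a discrete-time linear estimator,
    xhat_next = A xhat + L (y - C xhat); with L the steady-state Kalman
    gain this is exactly the predictor-form Kalman filter update."""
    return A @ xhat + L @ (y - C @ xhat)
```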
Methods of detecting and counting raptors: A review
Fuller, M.R.; Mosher, J.A.; Ralph, C. John; Scott, J. Michael
1981-01-01
Most raptors are wide-ranging, secretive, and occur at relatively low densities. These factors, in conjunction with the nocturnal activity of owls, cause the counting of raptors by most standard census and survey efforts to be very time consuming and expensive. This paper reviews the most common methods of detecting and counting raptors. It is hoped that it will be of use to the ever-increasing number of biologists, land-use planners, and managers who must determine the occurrence, density, or population dynamics of raptors. Road counts of fixed-station or continuous-transect design are often used to sample large areas. Detection of spontaneous or elicited vocalizations, especially those of owls, provides a means of detecting and estimating raptor numbers. Searches for nests are accomplished from foot surveys, observations from automobiles and boats, or from aircraft when nest structures are conspicuous (e.g., Osprey). Knowledge of nest habitat, historic records, and inquiries of local residents are useful for locating nests. Often several of these techniques are combined to help find nest sites. Aerial searches have also been used to locate or count large raptors (e.g., eagles), or those that may be conspicuous in open habitats (e.g., tundra). Counts of birds entering or leaving nest colonies or colonial roosts have been attempted on a limited basis. Results from Christmas Bird Counts have provided an index of the abundance of some species. Trapping and banding have generally proven to be inefficient methods of detecting raptors or estimating their populations. Concentrations of migrants at strategically located points around the world afford the best opportunity to count many raptors in a relatively short period of time, but the influence of many unquantified variables has inhibited extensive interpretation of these counts. Few data exist to demonstrate the effectiveness of these methods. We believe more research on sampling techniques, rather than complete counts or intensive searches, will provide adequate yet affordable estimates of raptor numbers in addition to providing methods for detecting the presence of raptors on areas of interest to researchers and managers.
Automatic streak endpoint localization from the cornerness metric
NASA Astrophysics Data System (ADS)
Sease, Brad; Flewelling, Brien; Black, Jonathan
2017-05-01
Streaked point sources are a common occurrence when imaging unresolved space objects from both ground- and space-based platforms. Effective localization of streak endpoints is a key component of traditional techniques in space situational awareness related to orbit estimation and attitude determination. To further that goal, this paper derives a general detection and localization method for streak endpoints based on the cornerness metric. Corner detection involves searching an image for strong bi-directional gradients. These locations typically correspond to robust structural features in an image. In the case of unresolved imagery, regions with a high cornerness score correspond directly to the endpoints of streaks. This paper explores three approaches for global extraction of streak endpoints and applies them to an attitude and rate estimation routine.
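A cornerness metric of the Harris type builds a response from the local gradient structure tensor that is large only where gradients are strong in two directions, i.e. at streak endpoints in unresolved imagery. A standard sketch follows; the smoothing scale and k are the usual defaults, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def harris_cornerness(img, sigma=1.5, k=0.04):
    """Harris response R = det(M) - k*trace(M)^2, where M is the Gaussian-
    smoothed structure tensor of the image gradients; R peaks where the
    gradient is strong in two directions (corners, streak endpoints)."""
    img = img.astype(float)
    Ix, Iy = sobel(img, axis=1), sobel(img, axis=0)   # image gradients
    Ixx = gaussian_filter(Ix * Ix, sigma)
    Iyy = gaussian_filter(Iy * Iy, sigma)
    Ixy = gaussian_filter(Ix * Iy, sigma)
    det = Ixx * Iyy - Ixy ** 2
    trace = Ixx + Iyy
    return det - k * trace ** 2
```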
NASA Technical Reports Server (NTRS)
Khorram, S.
1977-01-01
Results are presented of a study intended to develop a general location-specific remote-sensing procedure for watershed-wide estimation of water loss to the atmosphere by evaporation and transpiration. The general approach involves a stepwise sequence of required information definition (input data), appropriate sample design, mathematical modeling, and evaluation of results. More specifically, the remote sensing-aided system developed to evaluate evapotranspiration employs a basic two-stage two-phase sample of three information resolution levels. Based on the discussed design, documentation, and feasibility analysis to yield timely, relatively accurate, and cost-effective evapotranspiration estimates on a watershed or subwatershed basis, work is now proceeding to implement this remote sensing-aided system.
Cluster mislocation in kinematic Sunyaev-Zel'dovich (kSZ) effect extraction
NASA Astrophysics Data System (ADS)
Calafut, Victoria Rose; Bean, Rachel; Yu, Byeonghee
2018-01-01
We investigate the impact of a variety of analysis assumptions that influence cluster identification and location on the kSZ pairwise momentum signal and covariance estimation. Photometric and spectroscopic galaxy tracers from SDSS, WISE, and DECaLs, spanning redshifts 0.05
NASA Astrophysics Data System (ADS)
Williams, Westin B.; Michaels, Thomas E.; Michaels, Jennifer E.
2018-04-01
Composite materials used for aerospace applications are highly susceptible to impacts, which can result in barely visible delaminations. Reliable and fast detection of such damage is needed before structural failures occur. One approach is to use ultrasonic guided waves generated from a sparse array consisting of permanently mounted or embedded transducers for performing structural health monitoring. This array can detect introduction of damage after baseline subtraction, and also provide localization and characterization information via the minimum variance imaging algorithm. Imaging performance can vary considerably depending upon where damage is located with respect to the array; however, prior work has shown that knowledge of expected scattering can improve imaging consistency for artificial damage at various locations. In this study, anisotropic material attenuation and wave speed are estimated as a function of propagation angle using wavefield data recorded along radial lines at multiple angles with respect to an omnidirectional guided wave source. Additionally, full wavefield data are recorded before and after the introduction of artificial and impact damage so that wavefield baseline subtraction may be applied. 3-D filtering techniques are then used to reduce noise and isolate scattered waves. A model for estimating scattering of a circular defect is developed and scattering estimates for both artificial and impact damage are presented and compared.
NASA Astrophysics Data System (ADS)
Sadeghi-Goughari, M.; Mojra, A.; Sadeghi, S.
2016-02-01
Intraoperative Thermal Imaging (ITI) is a new minimally invasive diagnostic technique that can potentially locate the margins of a brain tumor in order to achieve maximum tumor resection with least morbidity. This study introduces a new approach to ITI based on artificial tactile sensing (ATS) technology in conjunction with artificial neural networks (ANN), and the feasibility and applicability of this method for the diagnosis and localization of brain tumors are investigated. To analyze the validity and reliability of the proposed method, two simulations were performed: (i) an in vitro experimental setup was designed and fabricated using a resistance heater embedded in an agar tissue phantom to simulate heat generation by a tumor in brain tissue; and (ii) a case report of a patient with parafalcine meningioma was used to simulate ITI in the neurosurgical procedure. In the case report, both brain and tumor geometries were constructed from MRI data, and tumor temperature and depth of location were estimated. For the experimental tests, a novel assisted-surgery robot was developed to palpate the tissue phantom surface to measure temperature variations, and an ANN was trained to estimate the simulated tumor's power and depth. Results affirm that ITI based on ATS is a non-invasive method that can be useful for detecting, localizing and characterizing brain tumors.
A Noninvasive Body Setup Method for Radiotherapy by Using a Multimodal Image Fusion Technique
Zhang, Jie; Chen, Yunxia; Wang, Chenchen; Chu, Kaiyue; Jin, Jianhua; Huang, Xiaolin; Guan, Yue; Li, Weifeng
2017-01-01
Purpose: To minimize the mismatch error between the patient surface and the immobilization system for tumor location by a noninvasive patient setup method. Materials and Methods: The method, based on point set registration, proposes a shift for patient positioning by integrating information from the computed tomography scans with that from optical surface landmarks. Evaluation of the method covered 3 areas: (1) validation on a phantom by estimating 100 known mismatch errors between the patient surface and the immobilization system; (2) five patients with pelvic tumors were considered, and the tumor location errors of the method were measured as the difference between the proposed shift from cone-beam computed tomography and that of our method; (3) the setup data collected from the patient evaluation were compared with the published performance data of two other similar systems. Results: The phantom verification results showed that the method was capable of estimating the mismatch error between the patient surface and the immobilization system with a precision of <0.22 mm. For the pelvic tumors, the method had an average tumor location error of 1.303, 2.602, and 1.684 mm in the left–right, anterior–posterior, and superior–inferior directions, respectively. The performance comparison with the two other similar systems suggested that the method had better positioning accuracy for pelvic tumor location. Conclusion: By effectively decreasing an interfraction uncertainty source (the mismatch error between patient surface and immobilization system) in radiotherapy, the method can improve patient positioning precision for pelvic tumors. PMID:29333959
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Chan, Heang-Ping; Sahiner, Berkman; Hadjiiski, Lubomir M.; Paramagul, Chintana
2004-05-01
Automated registration of multiple mammograms for CAD depends on accurate nipple identification. We developed two new image analysis techniques based on geometric and texture convergence analyses to improve the performance of our previously developed nipple identification method. A gradient-based algorithm is used to automatically track the breast boundary. The nipple search region along the boundary is then defined by geometric convergence analysis of the breast shape. Three nipple candidates are identified by detecting the changes along the gray level profiles inside and outside the boundary and the changes in the boundary direction. A texture orientation-field analysis method is developed to estimate the fourth nipple candidate based on the convergence of the tissue texture pattern towards the nipple. The final nipple location is determined from the four nipple candidates by a confidence analysis. Our training and test data sets consisted of 419 and 368 randomly selected mammograms, respectively. The nipple location identified on each image by an experienced radiologist was used as the ground truth. For 118 of the training and 70 of the test images, the radiologist could not positively identify the nipple, but provided an estimate of its location. These were referred to as invisible nipple images. In the training data set, 89.37% (269/301) of the visible nipples and 81.36% (96/118) of the invisible nipples could be detected within 1 cm of the truth. In the test data set, 92.28% (275/298) of the visible nipples and 67.14% (47/70) of the invisible nipples were identified within 1 cm of the truth. In comparison, our previous nipple identification method without using the two convergence analysis techniques detected 82.39% (248/301), 77.12% (91/118), 89.93% (268/298) and 54.29% (38/70) of the nipples within 1 cm of the truth for the visible and invisible nipples in the training and test sets, respectively. The results indicate that the nipple on mammograms can be detected accurately. This will be an important step towards automatic multiple image analysis for CAD techniques.
Determination of rain rate from a spaceborne radar using measurements of total attenuation
NASA Technical Reports Server (NTRS)
Meneghini, R.; Eckerman, J.; Atlas, D.
1981-01-01
Studies show that path-integrated rain rates can be determined by means of a direct measurement of attenuation. For ground-based radars this is done by measuring the backscattering cross section of a fixed target in the presence and absence of rain along the radar beam. The ratio of the two measurements yields a factor proportional to the attenuation, from which the average rain rate is deduced. The technique is extended to spaceborne radars by choosing the ground as the reference target. The technique is also generalized so that both the average and range-profiled rain rates are determined. The accuracies of the resulting estimates are evaluated for a narrow-beam radar located on a low earth orbiting satellite.
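With the ground as reference target, the measured two-way path-integrated attenuation (PIA) converts to a path-averaged rain rate through a power law k = aR^b between specific attenuation and rain rate. A sketch of that inversion; the coefficients are frequency-dependent, and the defaults here are merely illustrative Ku-band-like values, not taken from the paper.

```python
def path_avg_rain_rate(pia_db, path_km, a=0.024, b=1.15):
    """Surface-reference inversion: the mean one-way specific attenuation is
    PIA / (2 * path length), and the power law k = a * R**b (dB/km) is
    inverted for the path-averaged rain rate R (mm/h)."""
    k_mean = pia_db / (2.0 * path_km)
    return (k_mean / a) ** (1.0 / b)
```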
Geophysical mapping of palsa peatland permafrost
NASA Astrophysics Data System (ADS)
Sjöberg, Y.; Marklund, P.; Pettersson, R.; Lyon, S. W.
2014-10-01
Permafrost peatlands are hydrological and biogeochemical hotspots in the discontinuous permafrost zone. Non-intrusive geophysical methods offer the possibility to map current permafrost spatial distributions in these environments. In this study, we estimate the depths to the permafrost table and base across a peatland in northern Sweden, using ground penetrating radar and electrical resistivity tomography. Seasonal thaw frost tables (at ~0.5 m depth), taliks (2.1-6.7 m deep), and the permafrost base (at ~16 m depth) could be detected. Higher occurrences of taliks were discovered at locations with a lower relative height of permafrost landforms, indicative of lower ground ice content at these locations. These results highlight the added value of combining geophysical techniques for assessing the spatial distribution of permafrost within the rapidly changing sporadic permafrost zone. For example, based on a simple thought experiment for the site considered here, we estimated that the thickest permafrost could thaw out completely within the next two centuries. There is thus a clear need to benchmark current permafrost distributions and characteristics, particularly in understudied regions of the pan-Arctic.
Geophysical mapping of palsa peatland permafrost
NASA Astrophysics Data System (ADS)
Sjöberg, Y.; Marklund, P.; Pettersson, R.; Lyon, S. W.
2015-03-01
Permafrost peatlands are hydrological and biogeochemical hotspots in the discontinuous permafrost zone. Non-intrusive geophysical methods offer a possibility to map current permafrost spatial distributions in these environments. In this study, we estimate the depths to the permafrost table and base across a peatland in northern Sweden, using ground penetrating radar and electrical resistivity tomography. Seasonal thaw frost tables (at ~0.5 m depth), taliks (2.1-6.7 m deep), and the permafrost base (at ~16 m depth) could be detected. Higher occurrences of taliks were discovered at locations with a lower relative height of permafrost landforms, which is indicative of lower ground ice content at these locations. These results highlight the added value of combining geophysical techniques for assessing spatial distributions of permafrost within the rapidly changing sporadic permafrost zone. For example, based on a back-of-the-envelope calculation for the site considered here, we estimated that the permafrost could thaw completely within the next 3 centuries. Thus there is a clear need to benchmark current permafrost distributions and characteristics, particularly in understudied regions of the pan-Arctic.
NASA Astrophysics Data System (ADS)
Cao, Y.; Cervone, G.; Barkley, Z.; Lauvaux, T.; Deng, A.; Miles, N.; Richardson, S.
2016-12-01
Fugitive methane emission rates for the Marcellus shale area are estimated using a genetic algorithm that finds optimal weights to minimize the error between simulated and observed concentrations. The overall goal is to understand the relative contribution of methane due to shale gas extraction. Methane sensors were installed on four towers located in northeastern Pennsylvania to measure atmospheric concentrations from May 2015 onwards. Inverse Lagrangian dispersion model runs are performed from each of these tower locations for each hour of 2015. Simulated methane concentrations at each of the four towers are computed by multiplying the resulting footprints from the atmospheric simulations by thousands of emission sources grouped into 11 classes. The emission sources were identified using GIS techniques, and include conventional and unconventional wells, different types of compressor stations, pipelines, landfills, farming and wetlands. Initial estimates for each source are calculated based on emission factors from the EPA and a few regional studies. A genetic algorithm is then used to identify optimal emission rates for the 11 classes of methane emissions and to explore extreme events and spatial and temporal structures in the emissions associated with natural gas activities.
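The optimization can be pictured as follows: with hourly footprints stacked into a matrix, simulated concentrations are footprints @ (weights * priors), and the genetic algorithm searches the per-class scale factors. This toy sketch uses truncation selection, uniform crossover and lognormal mutation; all names and GA settings are illustrative, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rmse(w, footprints, priors, observed):
    """Mismatch between observed concentrations and those simulated by
    scaling the prior class emissions with candidate weights w."""
    return np.sqrt(np.mean((footprints @ (w * priors) - observed) ** 2))

def optimize_weights(footprints, priors, observed, pop=60, gens=300):
    n = len(priors)
    P = rng.uniform(0.1, 10.0, size=(pop, n))            # initial population
    for _ in range(gens):
        scores = np.array([rmse(w, footprints, priors, observed) for w in P])
        parents = P[np.argsort(scores)][: pop // 2]      # truncation selection
        a = parents[rng.integers(len(parents), size=pop - len(parents))]
        b = parents[rng.integers(len(parents), size=pop - len(parents))]
        children = np.where(rng.random(a.shape) < 0.5, a, b)   # uniform crossover
        children *= rng.lognormal(0.0, 0.1, children.shape)    # mutation
        P = np.vstack([parents, children])
    scores = np.array([rmse(w, footprints, priors, observed) for w in P])
    return P[np.argmin(scores)]
```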
A signal strength priority based position estimation for mobile platforms
NASA Astrophysics Data System (ADS)
Kalgikar, Bhargav; Akopian, David; Chen, Philip
2010-01-01
Global Positioning System (GPS) products help to navigate while driving, hiking, boating, and flying. GPS uses a combination of orbiting satellites to determine position coordinates. This works well in most outdoor areas, but the satellite signals are not strong enough to penetrate inside most indoor environments. As a result, a new class of indoor positioning technologies that make use of 802.11 wireless LANs (WLAN) is beginning to appear on the market. In WLAN positioning, the system either monitors propagation delays between wireless access points and wireless device users to apply trilateration techniques, or it maintains a database of location-specific signal fingerprints, which is used to identify the most likely match between incoming signal data and those previously surveyed and saved in the database. In this paper we investigate the issue of deploying WLAN positioning software on mobile platforms with typically limited computational resources. We suggest a novel received-signal-strength rank order based location estimation system that reduces computational load while maintaining robust performance. The proposed system's performance is compared to conventional approaches.
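A rank-order fingerprint matcher can be as simple as scoring each surveyed location by the Spearman correlation between its stored RSS vector and the observed one; ranks are cheap to compute and are insensitive to monotone, device-dependent RSS offsets. This sketch is our illustration of the idea, not the paper's exact estimator.

```python
import numpy as np
from scipy.stats import spearmanr

def rank_order_locate(rss_observed, fingerprint_db):
    """Return the surveyed location whose fingerprint has the highest
    Spearman rank correlation with the observed access-point RSS vector.
    fingerprint_db maps location -> RSS vector over the same ordered APs."""
    best_loc, best_rho = None, -np.inf
    for loc, rss_ref in fingerprint_db.items():
        rho, _ = spearmanr(rss_observed, rss_ref)
        if rho > best_rho:
            best_loc, best_rho = loc, rho
    return best_loc
```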
Damage assessment in composite laminates via broadband Lamb wave.
Gao, Fei; Zeng, Liang; Lin, Jing; Shao, Yongsheng
2018-05-01
Time-of-flight (ToF) based methods for damage detection using Lamb waves are widely used. However, due to the energy dissipation of Lamb waves and the non-negligible size of damage in composite structures, the performance of damage detection is restricted. The objective of this research is to establish an improved method to locate and assess damage in composite structures. To choose appropriate excitation parameters, the propagation characteristics of Lamb waves in quasi-isotropic composite laminates are first studied and a broadband excitation is designed. Subsequently, the pulse compression technique is adopted for energy concentration and high-accuracy distance estimation. On this basis, the gravity center of the intersections of path loci is employed for damage localization, and the convex envelope of identified damage edge points is taken as the damage contour estimate. As a result, both damage location and size can be evaluated, thereby providing the information for quantitative damage detection. An experiment involving five different damage sizes was carried out for method verification, and the identified results show the efficiency of the proposed method. Copyright © 2018 Elsevier B.V. All rights reserved.
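Pulse compression for ToF amounts to cross-correlating the received waveform with the broadband excitation and locating the envelope peak, which concentrates the dispersed energy into a sharp arrival. A minimal sketch (names ours; dispersion compensation, which matters for Lamb waves, is not included):

```python
import numpy as np
from scipy.signal import correlate, hilbert

def tof_by_pulse_compression(received, excitation, fs):
    """Matched-filter ToF: cross-correlate the received signal with the
    excitation, take the analytic-signal envelope, and convert the peak
    lag (in samples) to seconds via the sampling rate fs."""
    xc = correlate(received, excitation, mode="full")
    env = np.abs(hilbert(xc))                     # correlation envelope
    lag = np.argmax(env) - (len(excitation) - 1)  # remove zero-lag offset
    return lag / fs
```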
NASA Astrophysics Data System (ADS)
Low, Kerwin; Elhadidi, Basman; Glauser, Mark
2009-11-01
Understanding the different noise production mechanisms caused by free shear flows in a turbulent jet provides insight to improve "intelligent" feedback mechanisms to control the noise. Towards this effort, a control scheme is based on feedback of azimuthal pressure measurements in the near field of the jet at two streamwise locations. Previous studies suggested that noise reduction can be achieved by azimuthal actuators perturbing the shear layer at the jet lip. The closed-loop actuation will be based on a low-dimensional Fourier representation of the hydrodynamic pressure measurements. Preliminary results show that control authority and a reduction in the overall sound pressure level were possible. These results provide motivation to move forward with the overall vision of developing innovative multi-mode sensing methods to improve state estimation and derive dynamical systems. It is envisioned that, by estimating velocity-field and dynamic pressure information at various locations in both the local and far-field regions, sensor fusion techniques can be utilized to attain greater overall control authority.
An Experimental Investigation of Flow past a Wing at high Angles of Attack
NASA Astrophysics Data System (ADS)
Dalela, Vipul; Mukherjee, Rinku
2017-11-01
The aerodynamic characteristics at post-stall angles of attack of single and multiple 3D wings have been studied using a novel 'decambering' technique, assuming the flow to be steady. It is expected that the location of separation as well as the strength of the separated flow is unsteady. The objective of this work, therefore, is to investigate flow at high angles of attack considering unsteady behavior. The numerical technique used for this purpose, which accounts for loss in camber due to flow separation, is termed 'decambering'. Two linear functions are used to define the decambering for the steady case, located at the leading edge and anywhere between 50%-80% chord. Wind tunnel experiments are to be conducted to study the unsteady nature of the separated flow using flow visualization techniques. An estimation of the unsteady wake will be of paramount importance, and experimental corroboration of the numerical decambering is expected. A NACA 4415 wing section is being tested for a range of Reynolds numbers. Preliminary results indicate that drag becomes more dominant when the Reynolds number is increased from Re = 0.093×10^6 to Re = 0.128×10^6, resulting in a gentle decrease in the lift coefficient, Cl.
Lattice sites of ion-implanted Mn, Fe and Ni in 6H-SiC
NASA Astrophysics Data System (ADS)
Costa, A. R. G.; Wahl, U.; Correia, J. G.; David-Bosne, E.; Amorim, L. M.; Augustyns, V.; Silva, D. J.; da Silva, M. R.; Pereira, L. M. C.
2018-01-01
Using radioactive isotopes produced at the CERN-ISOLDE facility, the lattice location of the implanted transition metal (TM) ions 56Mn, 59Fe and 65Ni in n-type single-crystalline hexagonal 6H-SiC was studied by means of the emission channeling technique. TM probes on carbon-coordinated tetrahedral interstitial sites (T_C) and on substitutional silicon sites (S_Si,h+k) were identified. We tested for but found no indication that the TM distribution on S_Si sites deviates from the statistical mixture of 1/3 hexagonal and 2/3 cubic sites present in the 6H crystal. The TM atoms partially disappear from T_C positions during annealing at temperatures between 500 °C and 700 °C, which is accompanied by an increase in S_Si and random sites. From the temperatures associated with these site changes, interstitial migration energies of 1.7-2.7 eV for Mn and Ni, and 2.3-3.2 eV for Fe were estimated. The TM lattice locations are compared to previous results obtained in 3C-SiC using the same technique.
Projected electric power demands for the Potomac Electric Power Company. Volume 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estomin, S.; Kahal, M.
1984-03-01
This three-volume report presents the results of an econometric forecast of peak and electric power demands for the Potomac Electric Power Company (PEPCO) through the year 2002. Volume I describes the methodology, the results of the econometric estimations, the forecast assumptions and the calculated forecasts of peak demand and energy usage. Separate sets of models were developed for the Maryland Suburbs (Montgomery and Prince George's counties), the District of Columbia and Southern Maryland (served by a wholesale customer of PEPCO). For each of the three jurisdictions, energy equations were estimated for residential and commercial/industrial customers for both summer and winter seasons. For the District of Columbia, summer and winter equations for energy sales to the federal government were also estimated. Equations were also estimated for street lighting and energy losses. Noneconometric techniques were employed to forecast energy sales to the Northern Virginia suburbs, Metrorail and federal government facilities located in Maryland.
Porosity estimation by semi-supervised learning with sparsely available labeled samples
NASA Astrophysics Data System (ADS)
Lima, Luiz Alberto; Görnitz, Nico; Varella, Luiz Eduardo; Vellasco, Marley; Müller, Klaus-Robert; Nakajima, Shinichi
2017-09-01
This paper addresses the porosity estimation problem from seismic impedance volumes and porosity samples located in a small group of exploratory wells. Regression methods, trained on the impedance as inputs and the porosity as output labels, generally suffer from extremely expensive (and hence sparsely available) porosity samples. To make optimal use of the valuable porosity data, a semi-supervised machine learning method was proposed, Transductive Conditional Random Field Regression (TCRFR), showing good performance (Görnitz et al., 2017). TCRFR, however, still requires more labeled data than are usually available, which creates a gap when applying the method to the porosity estimation problem in realistic situations. In this paper, we aim to fill this gap by introducing two graph-based preprocessing techniques, which adapt the original TCRFR for extremely weakly supervised scenarios. Our new method outperforms the previous automatic estimation methods on synthetic data and provides a result comparable to the labor-intensive, time-consuming geostatistics approach on real data, proving its potential as a practical industrial tool.
Estimating natural background groundwater chemistry, Questa molybdenum mine, New Mexico
Verplanck, Phillip L.; Nordstrom, D. Kirk; Plumlee, Geoffrey S.; Walker, Bruce M.; Morgan, Lisa A.; Quane, Steven L.
2010-01-01
This 2 1/2 day field trip will present an overview of a U.S. Geological Survey (USGS) project whose objective was to estimate pre-mining groundwater chemistry at the Questa molybdenum mine, New Mexico. Because of intense debate among stakeholders regarding pre-mining groundwater chemistry standards, the New Mexico Environment Department and Chevron Mining Inc. (formerly Molycorp) agreed that the USGS should determine pre-mining groundwater quality at the site. In 2001, the USGS began a 5-year, multidisciplinary investigation to estimate pre-mining groundwater chemistry utilizing a detailed assessment of a proximal natural analog site and applied an interdisciplinary approach to infer pre-mining conditions. The trip will include a surface tour of the Questa mine and key locations in the erosion scar areas and along the Red River. The trip will provide participants with a detailed understanding of geochemical processes that influence pre-mining environmental baselines in mineralized areas and estimation techniques for determining pre-mining baseline conditions.
Spatial interpolation schemes of daily precipitation for hydrologic modeling
Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.
2012-01-01
Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). To improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before the amount of precipitation is estimated separately for wet days. This process generated the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time-step simulations. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error than the directly interpolated inputs. © 2011 Springer-Verlag.
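A hedged sketch of the two-step idea described above (occurrence via logistic regression, then amounts fitted on wet days only), using generic scikit-learn models rather than the paper's CMLR/LWP variants; the predictors and coefficients are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)

# synthetic predictors for one grid point (illustrative names only):
# e.g. elevation, distance-weighted neighbour precipitation, ...
X = rng.normal(size=(500, 3))
wet = (X[:, 0] + rng.normal(scale=0.5, size=500)) > 0        # occurrence
amount = np.where(wet,
                  np.exp(1.0 + 0.8 * X[:, 1] + rng.normal(scale=0.3, size=500)),
                  0.0)

# step 1: wet/dry occurrence model
occ = LogisticRegression().fit(X, wet)

# step 2: amount model fitted on wet days only, in log space
wet_days = amount > 0
amt = LinearRegression().fit(X[wet_days], np.log(amount[wet_days]))

# combined estimate: positive amounts only where occurrence is predicted
est = np.where(occ.predict(X), np.exp(amt.predict(X)), 0.0)
```

Separating occurrence from amount is what preserves the intermittency (dry days) that a single regression on all days smears out.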
NASA Astrophysics Data System (ADS)
Gorpas, Dimitris; Ma, Dinglong; Bec, Julien; Yankelevich, Diego R.; Marcu, Laura
2016-03-01
Fluorescence lifetime imaging has been shown to be a robust technique for biochemical and functional characterization of tissues and to present great potential for intraoperative tissue diagnosis and guidance of surgical procedures. We report a technique for real-time mapping of fluorescence parameters (i.e., lifetime values) onto the locations from which the fluorescence measurements were taken. This is achieved by merging a 450 nm aiming beam generated by a diode laser with the excitation light in a single delivery/collection fiber and by continuously imaging the region of interest with a color CMOS camera. The interrogated locations are then extracted from the acquired frames via color-based segmentation of the aiming beam. Assuming a Gaussian profile of the imaged aiming beam, the segmentation results are fitted to ellipses that are dynamically scaled at the full width of three automatically estimated thresholds (50%, 75%, 90%) of the Gaussian distribution's maximum value. This enables the dynamic augmentation of the white-light video frames with the corresponding fluorescence decay parameters. A fluorescence phantom and fresh tissue samples were used to evaluate this method with motorized and hand-held scanning measurements. At 640×512-pixel resolution, the area of interest augmented with fluorescence decay parameters can be imaged at an average of 34 frames per second. The developed method has the potential to become a valuable tool for real-time display of optical spectroscopy data during continuous scanning applications that subsequently can be used for tissue characterization and diagnosis.
Regionalization of harmonic-mean streamflows in Kentucky
Martin, Gary R.; Ruhl, Kevin J.
1993-01-01
Harmonic-mean streamflow (Qh), defined as the reciprocal of the arithmetic mean of the reciprocal daily streamflow values, was determined for selected stream sites in Kentucky. Daily mean discharges for the available period of record through the 1989 water year at 230 continuous-record streamflow-gaging stations located in and adjacent to Kentucky were used in the analysis. Periods of record affected by regulation were identified and analyzed separately from periods of record unaffected by regulation. Record-extension procedures were applied to short-term stations to reduce time-sampling error and, thus, improve estimates of the long-term Qh. Techniques to estimate the Qh at ungaged stream sites in Kentucky were developed. A regression model relating Qh to total drainage area and streamflow-variability index was presented with example applications. The regression model has a standard error of estimate of 76 percent and a standard error of prediction of 78 percent.
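Since Qh is defined explicitly above, a short worked example helps; this is a minimal sketch with illustrative flow values, and the zero-flow convention shown is one common choice, not necessarily the report's.

```python
import numpy as np

def harmonic_mean_flow(q):
    """Qh = reciprocal of the arithmetic mean of reciprocal daily flows.
    Any zero-flow day drives Qh to zero, so zeros are handled explicitly."""
    q = np.asarray(q, dtype=float)
    if np.any(q <= 0):
        return 0.0      # one common convention; site-specific rules vary
    return len(q) / np.sum(1.0 / q)

daily_cfs = [120.0, 95.0, 300.0, 80.0, 150.0]
print(round(harmonic_mean_flow(daily_cfs), 1))   # ~120.9, vs arithmetic mean 149
```

The harmonic mean is dominated by low flows, which is why it is the statistic of choice for dilution-based water-quality applications.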
NASA Technical Reports Server (NTRS)
Joiner, T. J.; Copeland, C. W., Jr.; Russell, D. D.; Evans, F. E., Jr.; Sapp, C. D.; Boone, P. A.
1978-01-01
Methods by which estimates of the remaining reserves of strippable coal in Alabama could be made were developed. Information acquired from NASA's Earth Resources Office was used to analyze and map existing surface mines in a four-quadrangle area in west central Alabama. Using this information and traditional methods for mapping coal reserves, an estimate of remaining strippable reserves was derived. Techniques for the computer analysis of remotely sensed data and other types of available coal data were developed to produce an estimate of strippable coal reserves for a second four-quadrangle area. Both areas lie in the Warrior coal field, the most prolific and active of Alabama's coal fields. They were chosen because of the amount and type of coal mining in the area, their location relative to urban areas, and the amount and availability of base data necessary for this type of study.
Padole, Atul; Deedar Ali Khawaja, Ranish; Otrakji, Alexi; Zhang, Da; Liu, Bob; Xu, X George; Kalra, Mannudeep K
2016-05-01
The aim of this study was to compare directly measured computed tomography (CT) organ doses with those estimated by commercial radiation dose-tracking (RDT) software for CT performed with modulated tube current or automatic exposure control (AEC) technique and with fixed tube current (mAs). With institutional review board (IRB) approval, ionization chambers were surgically implanted in a human cadaver (88 years old, male, 68 kg) in six locations: liver, stomach, colon, left kidney, small intestine, and urinary bladder. The cadaver was scanned with a routine abdomen-pelvis protocol on a 128-slice, dual-source multidetector computed tomography (MDCT) scanner using both AEC and fixed mAs. Effective and quality reference mAs of 100, 200, and 300 were used for AEC and fixed mAs, respectively. Scanning was repeated three times for each setting, and measured and estimated organ doses (from RDT software) were recorded (N = 3*3*2 = 18). Mean CTDIvol for AEC and fixed mAs were 4, 8, 13 mGy and 7, 14, 21 mGy, respectively. Most estimated organ doses were significantly greater (P < 0.01) than the measured organ doses for both AEC and fixed mAs. With AEC, the mean estimated organ dose (over the six organs) was 14.7 mGy compared to a mean measured organ dose of 12.3 mGy. Similarly, at fixed mAs, the mean estimated organ dose was 24 mGy compared to a measured organ dose of 22.3 mGy. The differences between the measured and estimated organ doses were higher for the AEC technique than for fixed mAs for most organs (P < 0.01). Most CT organ doses estimated from RDT software are greater than directly measured organ doses, particularly when the AEC technique is used for CT scanning. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
Comparison of ITRF2014 station coordinate input time series of DORIS, VLBI and GNSS
NASA Astrophysics Data System (ADS)
Tornatore, Vincenza; Tanır Kayıkçı, Emine; Roggero, Marco
2016-12-01
In this paper, station coordinate time series from three space geodesy techniques that have contributed to the realization of the International Terrestrial Reference Frame 2014 (ITRF2014) are compared. In particular, the height-component time series extracted from official combined intra-technique solutions submitted for ITRF2014 by the DORIS, VLBI and GNSS Combination Centers have been investigated. The main goal of this study is to assess the level of agreement among these three space geodetic techniques. A novel analytic method, modeling time series as discrete-time Markov processes, is presented and applied to the compared time series. The analysis method has proven particularly suited to obtaining quasi-cyclostationary residuals, an important property for carrying out a reliable harmonic analysis. We looked for common signatures among the three techniques. Frequencies and amplitudes of the detected signals are reported along with their percentage of incidence. Our comparison shows that two of the estimated signals, with periods of one year and 14 days, are common to all the techniques. Different hypotheses on the nature of the 14-day signal are presented. As a final check, we compared the estimated velocities and their standard deviations (STD) for sites with co-located VLBI, GNSS and DORIS stations, obtaining good agreement among the three techniques in both the horizontal (1.0 mm/yr mean STD) and the vertical (0.7 mm/yr mean STD) components, although some sites show larger STDs, mainly due to lack of data, different data spans or noisy observations.
Precise orbit determination for NASA's earth observing system using GPS (Global Positioning System)
NASA Technical Reports Server (NTRS)
Williams, B. G.
1988-01-01
An application of a precision orbit determination technique for NASA's Earth Observing System (EOS) using the Global Positioning System (GPS) is described. This technique allows the geometric information from measurements of GPS carrier phase and P-code pseudo-range to be exploited while minimizing requirements for precision dynamical modeling. The method combines geometric and dynamic information to determine the spacecraft trajectory; the weight on the dynamic information is controlled by adjusting fictitious spacecraft accelerations in three dimensions which are treated as first order exponentially time correlated stochastic processes. By varying the time correlation and uncertainty of the stochastic accelerations, the technique can range from purely geometric to purely dynamic. Performance estimates for this technique as applied to the orbit geometry planned for the EOS platforms indicate that decimeter accuracies for EOS orbit position may be obtainable. The sensitivity of the predicted orbit uncertainties to model errors for station locations, nongravitational platform accelerations, and Earth gravity is also presented.
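A minimal sketch of the exponentially time-correlated (first-order Gauss-Markov) acceleration process described above; the discretization is standard, while the step size, time constant, and steady-state sigma are illustrative only.

```python
import numpy as np

def gauss_markov_accel(n_steps, dt, tau, sigma, rng=None):
    """One axis of a first-order Gauss-Markov stochastic acceleration:
    a_{k+1} = phi * a_k + w_k,  phi = exp(-dt/tau),
    with Var(w) chosen so the steady-state variance equals sigma^2."""
    if rng is None:
        rng = np.random.default_rng()
    phi = np.exp(-dt / tau)
    w_std = sigma * np.sqrt(1.0 - phi**2)
    a = np.zeros(n_steps)
    for k in range(n_steps - 1):
        a[k + 1] = phi * a[k] + rng.normal(scale=w_std)
    return a

# tau -> 0 approaches the purely geometric (white-noise) end of the spectrum,
# tau -> infinity the purely dynamic (constant-bias) end, as in the abstract
accel = gauss_markov_accel(n_steps=1000, dt=10.0, tau=900.0, sigma=1e-8)
```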
NASA Technical Reports Server (NTRS)
Sutton, S. R.; Walker, R. M.
1986-01-01
Thermoluminescence (TL) is a promising technique for rapid screening of the large numbers of Antarctic meteorites, permitting identification of interesting specimens that can then be studied in detail by other, more definitive techniques. Specifically, TL permits determination of rough terrestrial age, identification of potential paired groups, and location of specimens with unusual pre-fall histories. Meteorites with long terrestrial ages are particularly valuable for studying transport and weathering mechanisms. Pairing studies are possible because TL variations among meteorites are large compared to variations within individual objects, especially for natural TL. Available TL data for several L3 fragments, three of which were paired by other techniques, are presented as an example of the use of TL parameters in pairing studies. Additional TL measurements, specifically a blind test, are recommended to satisfactorily establish the reliability of this pairing property. The TL measurements also identify fragments with unusual pre-fall histories, such as near-Sun orbits.
Absolute reactivity calibration of accelerator-driven systems after RACE-T experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jammes, C. C.; Imel, G. R.; Geslot, B.
2006-07-01
The RACE-T experiments held in November 2005 at the ENEA-Casaccia research center near Rome allowed us to improve our knowledge of experimental techniques for absolute reactivity calibration at either the startup or shutdown phases of accelerator-driven systems. Various experimental techniques for assessing a subcritical level were inter-compared across three different subcritical configurations, SC0, SC2 and SC3, of about -0.5, -3 and -6 dollars, respectively. The area-ratio method, based on the use of a pulsed neutron source, proved the most effective. When the reactivity estimate is expressed in dollar units, the uncertainties obtained with the area-ratio method were less than 1% for every subcritical configuration. The sensitivity to measurement location was slightly more than 1% and always less than 4%. Finally, it is noteworthy that the source jerk technique, using a transient caused by the pulsed neutron source shutdown, provides results in good agreement with those obtained from the area-ratio technique. (authors)
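For readers unfamiliar with the area-ratio method, a rough sketch of the pulsed-neutron-source (Sjöstrand) version follows: reactivity in dollars is approximately minus the ratio of the prompt to the delayed area of the pulse response. Extracting the delayed level from the flat tail is a simplifying assumption here, not the RACE-T analysis procedure.

```python
import numpy as np

def reactivity_dollars(t, counts, prompt_end):
    """Sjostrand area-ratio method: rho($) ~= -A_prompt / A_delayed.
    'counts' is the detector response following one pulse; the delayed
    level is approximated by the flat tail after the prompt decay."""
    t = np.asarray(t, dtype=float)
    counts = np.asarray(counts, dtype=float)
    delayed_level = counts[t > prompt_end].mean()
    # trapezoidal total area under the response
    total_area = np.sum(0.5 * (counts[1:] + counts[:-1]) * np.diff(t))
    delayed_area = delayed_level * (t[-1] - t[0])
    return -(total_area - delayed_area) / delayed_area
```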
Precise tracking of remote sensing satellites with the Global Positioning System
NASA Technical Reports Server (NTRS)
Yunck, Thomas P.; Wu, Sien-Chong; Wu, Jiun-Tsong; Thornton, Catherine L.
1990-01-01
The Global Positioning System (GPS) can be applied in a number of ways to track remote sensing satellites at altitudes below 3000 km with accuracies of better than 10 cm. All techniques use a precise global network of GPS ground receivers operating in concert with a receiver aboard the user satellite, and all estimate the user orbit, GPS orbits, and selected ground locations simultaneously. The GPS orbit solutions are always dynamic, relying on the laws of motion, while the user orbit solution can range from purely dynamic to purely kinematic (geometric). Two variations show considerable promise. The first one features an optimal synthesis of dynamics and kinematics in the user solution, while the second introduces a novel gravity model adjustment technique to exploit data from repeat ground tracks. These techniques, to be demonstrated on the Topex/Poseidon mission in 1992, will offer subdecimeter tracking accuracy for dynamically unpredictable satellites down to the lowest orbital altitudes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, W; Yin, F; Wang, C
Purpose: To develop a technique to estimate on-board VC-MRI using multi-slice sparsely-sampled cine images, patient prior 4D-MRI, motion modeling and free-form deformation for real-time 3D target verification of lung radiotherapy. Methods: A previous method was developed to generate on-board VC-MRI by deforming prior MRI images based on a motion model (MM) extracted from prior 4D-MRI and a single-slice on-board 2D-cine image. In this study, free-form deformation (FD) was introduced to correct for errors in the MM when large anatomical changes exist. Multiple sparsely-sampled on-board 2D-cine slices located within the target are used to improve both the estimation accuracy and the temporal resolution of VC-MRI. The on-board 2D-cine MRIs are acquired at 20-30 frames/s by sampling only 10% of the k-space on a Cartesian grid, with 85% of that taken at the central k-space. The method was evaluated using XCAT (computerized patient model) simulation of lung cancer patients with various anatomical and respirational changes from prior 4D-MRI to on-board volume. The accuracy was evaluated using Volume-Percent-Difference (VPD) and Center-of-Mass-Shift (COMS) of the estimated tumor volume. Effects of region-of-interest (ROI) selection, 2D-cine slice orientation, slice number and slice location on the estimation accuracy were evaluated. Results: VC-MRI estimated using 10 sparsely-sampled sagittal 2D-cine MRIs achieved VPD/COMS of 9.07±3.54%/0.45±0.53mm among all scenarios based on estimation with ROI-MM-ROI-FD. The FD optimization improved estimation significantly for scenarios with anatomical changes. Using ROI-FD achieved better estimation than global FD. Changing the multi-slice orientation to axial, coronal, and axial/sagittal orthogonal reduced the accuracy of VC-MRI to VPD/COMS of 19.47±15.74%/1.57±2.54mm, 20.70±9.97%/2.34±0.92mm, and 16.02±13.79%/0.60±0.82mm, respectively. Reducing the number of cines to 8 enhanced the temporal resolution of VC-MRI by 25% while maintaining the estimation accuracy. Estimation using slices sampled uniformly through the tumor achieved better accuracy than slices sampled non-uniformly. Conclusions: Preliminary studies showed that it is feasible to generate VC-MRI from multi-slice sparsely-sampled 2D-cine images for real-time 3D target verification. This work was supported by the National Institutes of Health under Grant No. R01-CA184173 and a research grant from Varian Medical Systems.
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess
2011-01-01
More efficient versions of an interpolation method called kriging have been introduced in order to reduce its traditionally high computational cost. Written in C++, these approaches were tested on both synthetic and real data. Kriging is a best linear unbiased estimator and is suitable for interpolation of scattered data points. Kriging has long been used in the geostatistics and mining communities, but is now being researched for use in the image fusion of remotely sensed data. This allows a combination of data from various locations to be used to fill in any missing data from any single location. To arrive at the faster algorithms, a sparse SYMMLQ iterative solver, covariance tapering, Fast Multipole Methods (FMM), and nearest-neighbor searching techniques were used. These implementations were used when the coefficient matrix in the linear system is symmetric, but not necessarily positive-definite.
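As context for the solver work above, the dense linear system at the heart of ordinary kriging is small enough to show directly. This is a generic textbook sketch with a Gaussian covariance model, not the C++ implementations described; the sill and range values are illustrative.

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, sill=1.0, rng_len=500.0):
    """Minimal ordinary kriging: solve [[K, 1], [1', 0]] [w; mu] = [k0; 1]
    and return the best linear unbiased estimate w'z at xy0."""
    def cov(d):
        return sill * np.exp(-(d / rng_len) ** 2)   # Gaussian covariance
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[-1, -1] = 0.0                                 # Lagrange multiplier row
    A[:n, :n] = cov(d)
    b = np.ones(n + 1)
    b[:n] = cov(np.linalg.norm(xy - xy0, axis=1))
    w = np.linalg.solve(A, b)[:n]
    return w @ z

pts = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
vals = np.array([1.0, 2.0, 3.0])
print(ordinary_kriging(pts, vals, np.array([50.0, 50.0])))   # 2.0 by symmetry
```

The fast variants in the abstract replace this dense solve with SYMMLQ iterations on a tapered (sparsified) covariance matrix, which is why symmetry without positive-definiteness matters.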
Hybrid optimization and Bayesian inference techniques for a non-smooth radiation detection problem
Stefanescu, Razvan; Schmidt, Kathleen; Hite, Jason; ...
2016-12-12
In this paper, we propose several algorithms to recover the location and intensity of a radiation source located in a simulated 250 × 180 m block of an urban center, based on synthetic measurements. Radioactive decay and detection are Poisson random processes, so we employ likelihood functions based on this distribution. Owing to the domain geometry and the proposed response model, the negative logarithm of the likelihood is only piecewise continuously differentiable, and it has multiple local minima. To address these difficulties, we investigate three hybrid algorithms composed of mixed optimization techniques. For global optimization, we consider simulated annealing, particle swarm, and genetic algorithms, which rely solely on objective function evaluations; that is, they do not evaluate the gradient of the objective function. By employing early stopping criteria for the global optimization methods, a pseudo-optimum point is obtained. This is subsequently utilized as the initial value by the deterministic implicit filtering method, which is able to find local extrema of non-smooth functions, to finish the search in a narrow domain. These new hybrid techniques, combining global optimization and implicit filtering, address difficulties associated with the non-smooth response, and their performance is shown to significantly decrease computational time relative to the global optimization methods alone. To quantify uncertainties associated with the source location and intensity, we employ the delayed rejection adaptive Metropolis and DiffeRential Evolution Adaptive Metropolis algorithms. Finally, marginal densities of the source properties are obtained, and the means of the chains compare accurately with the estimates produced by the hybrid algorithms.
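A compressed sketch of the estimation setup: a Poisson negative log-likelihood over an assumed 1/r² detector response, globally minimized and then locally polished. The detector positions, counts, and response model are invented, and scipy's Nelder-Mead stands in for implicit filtering (which scipy does not provide) purely as a local refiner.

```python
import numpy as np
from scipy.optimize import dual_annealing, minimize

# hypothetical detector layout and counts for a 250 x 180 m domain
det_xy = np.array([[50.0, 40.0], [200.0, 60.0], [120.0, 150.0], [30.0, 160.0]])
counts = np.array([38, 12, 20, 9])            # synthetic measurements
dwell, bkg = 10.0, 0.5                        # s, and counts/s of background

def expected_rate(theta):
    x, y, s = theta                           # source location and intensity
    r2 = np.sum((det_xy - [x, y]) ** 2, axis=1) + 1.0
    return dwell * (s / r2 + bkg)             # simple 1/r^2 response model

def neg_log_like(theta):                      # Poisson NLL (constants dropped)
    lam = expected_rate(theta)
    return np.sum(lam - counts * np.log(lam))

bounds = [(0, 250), (0, 180), (1, 1e5)]
rough = dual_annealing(neg_log_like, bounds, maxiter=200, seed=1)  # global stage
polish = minimize(neg_log_like, rough.x, method="Nelder-Mead")     # local stage
```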
Acoustic Network Localization and Interpretation of Infrasonic Pulses from Lightning
NASA Astrophysics Data System (ADS)
Arechiga, R. O.; Johnson, J. B.; Badillo, E.; Michnovicz, J. C.; Thomas, R. J.; Edens, H. E.; Rison, W.
2011-12-01
We improve on the localization accuracy of thunder sources and identify infrasonic pulses that are correlated across a network of acoustic arrays. We attribute these pulses to electrostatic charge relaxation (collapse of the electric field) and attempt to model their spatial extent and acoustic source strength. Toward this objective we have developed a single audio range (20-15,000 Hz) acoustic array and a 4-station network of broadband (0.01-500 Hz) microphone arrays with aperture of ~45 m. The network has an aperture of 1700 m and was installed during the summers of 2009-2011 in the Magdalena mountains of New Mexico, an area that is subject to frequent lightning activity. We are exploring a new technique based on inverse theory that integrates information from the audio range and the network of broadband acoustic arrays to locate thunder sources more accurately than can be achieved with a single array. We evaluate the performance of the technique by comparing the location of thunder sources with RF sources located by the lightning mapping array (LMA) of Langmuir Laboratory at New Mexico Tech. We will show results of this technique for lightning flashes that occurred in the vicinity of our network of acoustic arrays and over the LMA. We will use acoustic network detection of infrasonic pulses together with LMA data and electric field measurements to estimate the spatial distribution of the charge (within the cloud) that is used to produce a lightning flash, and will try to quantify volumetric charges (charge magnitude) within clouds.
Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.; ...
2016-04-01
Locating the position of fixed or mobile sources (i.e., transmitters) based on received measurements from sensors is an important research area that is attracting much research interest. In this paper, we present localization algorithms using time of arrivals (TOA) and time difference of arrivals (TDOA) to achieve high accuracy under line-of-sight conditions. The circular (TOA) and hyperbolic (TDOA) location systems both use nonlinear equations that relate the locations of the sensors and tracked objects. These nonlinear equations can develop accuracy challenges because of the existence of measurement errors and efficiency challenges that lead to high computational burdens. Least squares-based and maximum likelihood-based algorithms have become the most popular categories of location estimators. We also summarize the advantages and disadvantages of various positioning algorithms. By improving measurement techniques and localization algorithms, localization applications can be extended into the signal-processing-related domains of radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.
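A minimal example of the hyperbolic (TDOA) formulation named above, solved by nonlinear least squares; the sensor layout, propagation speed, and noiseless delays are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

C = 1500.0                                    # m/s, e.g. sound in water

def tdoa_residuals(p, sensors, tdoas):
    """Residuals of the hyperbolic equations: range differences to the
    reference sensor (index 0) minus c times the measured delay differences."""
    r = np.linalg.norm(sensors - p, axis=1)
    return (r[1:] - r[0]) - C * tdoas

sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_src = np.array([30.0, 60.0])
r = np.linalg.norm(sensors - true_src, axis=1)
tdoas = (r[1:] - r[0]) / C                    # noiseless synthetic delays

fit = least_squares(tdoa_residuals, x0=[50.0, 50.0], args=(sensors, tdoas))
print(fit.x)                                  # ~ [30, 60]
```

With measurement noise the same residual vector feeds directly into the least-squares and maximum-likelihood estimator families the abstract surveys.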
Use of spatial capture-recapture modeling and DNA data to estimate densities of elusive animals
Kery, Marc; Gardner, Beth; Stoeckle, Tabea; Weber, Darius; Royle, J. Andrew
2011-01-01
Assessment of abundance, survival, recruitment rates, and density (i.e., population assessment) is especially challenging for the elusive species most in need of protection (e.g., rare carnivores). Individual identification methods, such as DNA sampling, provide ways of studying such species efficiently and noninvasively. Additionally, statistical methods that correct for undetected animals and account for the locations where animals are captured are available to efficiently estimate density and other demographic parameters. We collected hair samples of European wildcat (Felis silvestris) from cheek-rub lure sticks, extracted DNA from the samples, and identified each animal's genotype. To estimate the density of wildcats, we used Bayesian inference in a spatial capture-recapture model. We used WinBUGS to fit a model that accounted for differences in detection probability among individuals and seasons and between two lure arrays. We detected 21 individual wildcats (including possible hybrids) 47 times. Wildcat density was estimated at 0.29/km² (SE 0.06), and 95% of the activity of wildcats was estimated to occur within 1.83 km of their home-range centers. Lures located systematically were associated with a greater number of detections than lures placed in a cell on the basis of expert opinion. Detection probability of individual cats was greatest in late March. Our model is a generalized linear mixed model; hence, it can be easily extended, for instance, to incorporate trap- and individual-level covariates. We believe that the combined use of noninvasive sampling techniques and spatial capture-recapture models will improve population assessments, especially for rare and elusive animals.
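Two numbers in the abstract are connected by the standard spatial capture-recapture detection model. This sketch shows the usual half-normal form and recovers the "95% of activity within 1.83 km" radius from a movement-scale sigma; the sigma value shown is back-calculated for illustration, not taken from the paper.

```python
import numpy as np
from scipy.stats import chi2

def detection_prob(d, p0, sigma):
    """Half-normal detection model common in SCR analyses:
    p(d) = p0 * exp(-d^2 / (2 sigma^2)), d = trap-to-activity-center distance."""
    return p0 * np.exp(-d**2 / (2.0 * sigma**2))

sigma = 0.748                                  # km, illustrative movement scale
# radius containing 95% of activity for a bivariate-normal home range
r95 = sigma * np.sqrt(chi2.ppf(0.95, df=2))
print(round(r95, 2))                           # ~1.83 km, as in the abstract
print(round(detection_prob(1.0, 0.2, sigma), 3))  # detection 1 km from center
```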
NASA Astrophysics Data System (ADS)
Herrera, L.; Hoyos Ortiz, C. D.
2017-12-01
The spatio-temporal evolution of the Atmospheric Boundary Layer (ABL) in the Aburrá Valley, a narrow and highly complex mountainous terrain located in the Colombian Andes, is studied using different datasets, including radiosonde and remote sensors from the meteorological network of the Aburrá Valley Early Warning System. Different techniques are developed to estimate Mixed Layer Height (MLH) based on the variance of the ceilometer backscattering profiles. The Medellín metropolitan area, home to 4.5 million people, is located on the floor and the hills of the valley. The generally large aerosol load within the valley from anthropogenic emissions allows the use of ceilometer retrievals of the MLH, especially under stable atmospheric conditions (late at night and early in the morning). Convective atmospheres, however, favor aerosol dispersion, which in turn increases the uncertainty associated with estimating the Convective Boundary Layer from ceilometer retrievals. A multi-sensor technique is also developed based on Richardson number estimations using a radar wind profiler combined with a microwave radiometer. Results of this technique appear to be more accurate throughout the diurnal cycle. ABL retrievals are available from October 2014 to April 2017. The diurnal cycle of the ABL exhibits monomodal behavior, highly influenced by the evolution of the potential temperature profile and the turbulent fluxes near the surface. The backscattering diurnal cycle, on the other hand, presents a bimodal structure, showing that the amount of aerosol particles in the lower troposphere is strongly influenced by anthropogenic emissions, by dispersion conditioned by topography and ABL dynamics, and by the vertical extent available for pollutants to interact and disperse. Nevertheless, the amount, distribution, or type of atmospheric aerosols does not appear to have a first-order influence on MLH variations or evolution. Results also show that intra-annual and interannual variations of cloudiness and incident surface radiation strongly condition the ABL expansion rate, leading to oscillatory patterns. March (July) is the month with the lowest (highest) mean ABL. In March, the ABL at the floor of the valley is lower than the height of the surrounding mountains, leading to particulate matter accumulation.
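The multi-sensor Richardson-number approach can be sketched generically: a bulk Richardson number is computed from profiles of virtual potential temperature and wind, and the MLH is taken as the lowest level where it crosses a critical value. The formula and the 0.25 threshold are textbook conventions, not necessarily the authors' exact implementation, and the profile below is synthetic.

```python
import numpy as np

G = 9.81  # m/s^2

def mlh_bulk_richardson(z, theta_v, u, v, ri_crit=0.25):
    """Mixed-layer height as the lowest level where the bulk Richardson
    number, computed from the surface upward, exceeds ri_crit."""
    ri = np.zeros_like(z)
    ri[1:] = (G / theta_v[0]) * (theta_v[1:] - theta_v[0]) * (z[1:] - z[0]) \
             / np.maximum(u[1:]**2 + v[1:]**2, 1e-6)
    above = np.nonzero(ri > ri_crit)[0]
    return z[above[0]] if above.size else z[-1]

z = np.linspace(0.0, 3000.0, 61)
theta_v = 300.0 + 0.004 * np.maximum(z - 800.0, 0.0)   # well mixed to 800 m
u = np.full_like(z, 3.0)
v = np.zeros_like(z)
print(mlh_bulk_richardson(z, theta_v, u, v))           # ~850 m on this grid
```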
Seismic Methods of Identifying Explosions and Estimating Their Yield
NASA Astrophysics Data System (ADS)
Walter, W. R.; Ford, S. R.; Pasyanos, M.; Pyle, M. L.; Myers, S. C.; Mellors, R. J.; Pitarka, A.; Rodgers, A. J.; Hauk, T. F.
2014-12-01
Seismology plays a key national security role in detecting, locating, identifying and determining the yield of explosions from a variety of causes, including accidents, terrorist attacks and nuclear testing treaty violations (e.g. Koper et al., 2003, 1999; Walter et al., 1995). A collection of mainly empirical forensic techniques has been successfully developed over many years to obtain source information on explosions from their seismic signatures (e.g. Bowers and Selby, 2009). However, a lesson from the three declared DPRK nuclear explosions since 2006 is that our historic collection of data may not be representative of future nuclear test signatures (e.g. Selby et al., 2012). To have confidence in identifying future explosions amongst the background of other seismic signals, and in accurately estimating their yield, we need to put our empirical methods on a firmer physical footing. Goals of current research are to improve our physical understanding of the mechanisms of explosion generation of S- and surface waves, and to advance our ability to numerically model and predict them. As part of that process we are re-examining regional seismic data from a variety of nuclear test sites, including the DPRK and the former Nevada Test Site (now the Nevada National Security Site, NNSS). Newer relative location and amplitude techniques can be employed to better quantify differences between explosions and to understand those differences in terms of depth, media and other properties. We are also making use of the Source Physics Experiments (SPE) at the NNSS. The SPE chemical explosions are explicitly designed to improve our understanding of emplacement and source-material effects on the generation of shear and surface waves (e.g. Snelson et al., 2013). Finally, we are also exploring the value of combining seismic information with other technologies, including acoustic and InSAR techniques, to better understand source characteristics. Our goal is to improve our explosion models and our ability to understand and predict where methods of identifying explosions and estimating their yield work well, and any circumstances where they may not.
Fundamentals of functional imaging I: current clinical techniques.
Luna, A; Martín Noguerol, T; Mata, L Alcalá
2018-05-01
Imaging techniques can establish a structural, physiological, and molecular phenotype for cancer, which helps enable accurate diagnosis and personalized treatment. In recent years, various imaging techniques that make it possible to study the functional characteristics of tumors quantitatively and reproducibly have been introduced and have become established in routine clinical practice. Perfusion studies enable us to estimate the microcirculation as well as tumor angiogenesis and permeability using ultrafast dynamic acquisitions with ultrasound, computed tomography, or magnetic resonance (MR) imaging. Diffusion-weighted sequences now form part of state-of-the-art MR imaging protocols to evaluate oncologic lesions in any anatomic location. Diffusion-weighted imaging provides information about the occupation of the extracellular and extravascular space and indirectly estimates the cellularity and apoptosis of tumors, having demonstrated its relation with biologic aggressiveness in various tumor lines and its usefulness in the evaluation of the early response to systemic and local targeted therapies. Another tool is hydrogen proton MR spectroscopy, which is used mainly in the study of the metabolic characteristics of brain tumors. However, the complexity of the technique and its lack of reproducibility have limited its clinical use in other anatomic areas, although much experience with the use of this technique in the assessment of prostate and breast cancers as well as liver lesions has also accumulated. This review analyzes the imaging techniques that make it possible to evaluate the physiological and molecular characteristics of cancer that have already been introduced into clinical practice, such as techniques that evaluate angiogenesis through dynamic acquisitions after the administration of contrast material, diffusion-weighted imaging, or hydrogen proton MR spectroscopy, as well as their principal applications in oncology. Copyright © 2018 SERAM. Publicado por Elsevier España, S.L.U. All rights reserved.
Extracting Maximum Total Water Levels from Video "Brightest" Images
NASA Astrophysics Data System (ADS)
Brown, J. A.; Holman, R. A.; Stockdon, H. F.; Plant, N. G.; Long, J.; Brodie, K.
2016-02-01
An important parameter for predicting storm-induced coastal change is the maximum total water level (TWL). Most studies estimate the TWL as the sum of slowly varying water levels, including tides and storm surge, and the extreme runup parameter R2%, which includes wave setup and swash motions over minutes to seconds. Typically, R2% is measured using video remote sensing data, where cross-shore timestacks of pixel intensity are digitized to extract the horizontal runup time series. However, this technique must be repeated at multiple alongshore locations to resolve alongshore variability, and can be tedious and time consuming. We seek an efficient, video-based approach that yields a synoptic estimate of TWL, accounts for alongshore variability, and can be applied during storms. In this work, the use of a video product termed the "brightest" image is tested; this represents the highest intensity of each pixel captured during a 10-minute collection period. Image filtering and edge detection techniques are applied to automatically determine the shoreward edge of the brightest region (i.e., the swash zone) at each alongshore pixel. The edge represents the horizontal position of the maximum TWL along the beach during the collection period, and is converted to vertical elevations using measured beach topography. This technique is evaluated using video and topographic data collected every half hour at Duck, NC, under differing hydrodynamic conditions. Relationships between the maximum TWL estimates from the brightest images and various runup statistics computed using concurrent runup timestacks are examined, and errors associated with mapping the horizontal results to elevations are discussed. This technique can be used to routinely estimate maximum TWLs along a coastline from a single brightest-image product, and provides a means for examining alongshore variability of TWLs at high alongshore resolution. These advantages will be useful in validating numerical hydrodynamic models and improving coastal change predictions.
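A hedged sketch of the per-column edge extraction idea: smooth each cross-shore intensity profile of the brightest image, then take the shoreward limit of the bright swash band. The smoothing scale, threshold, and array orientation are illustrative simplifications, not the study's calibrated procedure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def twl_edge(brightest, threshold=0.5):
    """Per alongshore column, return the cross-shore pixel index of the
    shoreward edge of the bright swash band. Rows are cross-shore
    (offshore -> beach), columns are alongshore; -1 marks no detection."""
    edges = np.empty(brightest.shape[1], dtype=int)
    for j in range(brightest.shape[1]):
        col = gaussian_filter1d(brightest[:, j].astype(float), sigma=3)
        col = (col - col.min()) / (np.ptp(col) + 1e-9)   # normalize to [0, 1]
        bright = np.nonzero(col > threshold)[0]
        edges[j] = bright[-1] if bright.size else -1
    return edges   # map to elevations with the surveyed beach topography
```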
Purcell, M; Magette, W L
2009-04-01
Both planning and design of integrated municipal solid waste management systems require accurate prediction of waste generation. This research predicted the quantity and distribution of biodegradable municipal waste (BMW) generation within a diverse 'landscape' of residential areas, as well as from a variety of commercial establishments (restaurants, hotels, hospitals, etc.) in the Dublin (Ireland) region. Socio-economic variables, housing types, and the sizes and main activities of commercial establishments were hypothesized as the key determinants contributing to the spatial variability of BMW generation. A geographical information system (GIS) 'model' of BMW generation was created using ArcMap, a component of ArcGIS 9. Statistical data including socio-economic status and household size were mapped on an electoral district basis. Historical research and data from scientific literature were used to assign BMW generation rates to residential and commercial establishments. These predictions were combined to give overall BMW estimates for the region, which can aid waste planning and policy decisions. This technique will also aid the design of future waste management strategies, leading to policy and practice alterations as a function of demographic changes and development. The household prediction technique gave a more accurate overall estimate of household waste generation than did the social class technique. Both techniques produced estimates that differed from the reported local authority data; however, given that local authority reported figures for the region are below the national average, with some of the waste generated from apartment complexes being reported as commercial waste, predictions arising from this research are believed to be closer to actual waste generation than a comparison to reported data would suggest. By changing the input data, this estimation tool can be adapted for use in other locations. Although focusing on waste in the Dublin region, this method of waste prediction can have significant potential benefits if a universal method can be found to apply it effectively.
Bogani, Giorgio; Ditto, Antonino; Martinelli, Fabio; Lorusso, Domenica; Chiappa, Valentina; Donfrancesco, Cristina; Di Donato, Violante; Indini, Alice; Aletti, Giovanni; Raspagliesi, Francesco
2016-02-01
Optimal cytoreduction is one of the main factors improving survival outcomes in patients affected by ovarian cancer (OC). It is estimated that approximately 40% of OC patients have gross disease located on the diaphragm. However, no mature data evaluating outcomes of surgical techniques for the management of diaphragmatic carcinosis exist. In the present study, we aimed to estimate the surgery-related morbidity of different surgical techniques for diaphragmatic cytoreduction in advanced or recurrent OC. PubMed (MEDLINE), Web of Science, and ClinicalTrials.gov databases were searched for records estimating outcomes of diaphragmatic peritoneal stripping (DPS) or diaphragmatic full-thickness resection (DFTR) for OC. The meta-analysis was performed using the Cochrane Review software. Five articles, including 272 patients, were available for the final analysis. Diaphragmatic peritoneal stripping and DFTR were performed in 197 patients (72%) and 75 patients (28%), respectively. Pooled analysis suggested that the estimated pleural effusion rate was 43% and 51% after DPS and DFTR, respectively. The need for pleural punctures or chest tube placement was 4% and 9% after DPS and DFTR, respectively. The rates of postoperative pneumothorax (4% vs 9%; odds ratio, 0.31; 95% confidence interval, 0.05-2.08) and subdiaphragmatic abscess (3% vs 3%; odds ratio, 0.45; 95% confidence interval, 0.09-2.31) were similar after DPS and DFTR. Diaphragmatic surgery is a crucial step during cytoreduction for advanced or recurrent OC. The choice between DPS and DFTR depends on whether the diaphragmatic muscle is infiltrated. Both procedures are associated with low rates of pulmonary complications and chest tube placement.
Robust Eye Center Localization through Face Alignment and Invariant Isocentric Patterns
Teng, Dongdong; Chen, Dihu; Tan, Hongzhou
2015-01-01
The localization of eye centers is a very useful cue for numerous applications like face recognition, facial expression recognition, and the early screening of neurological pathologies. Several methods relying on available light for accurate eye-center localization have been exploited. However, despite the considerable improvements that eye-center localization systems have undergone in recent years, only a few of these developments deal with the challenges posed by the profile (non-frontal face). In this paper, we first use the explicit shape regression method to obtain the rough location of the eye centers. Because this method extracts global information from the human face, it is robust against any changes in the eye region. We exploit this robustness and utilize it as a constraint. To locate the eye centers accurately, we employ isophote curvature features, the accuracy of which has been demonstrated in a previous study. By applying these features, we obtain a series of eye-center locations which are candidates for the actual position of the eye center. Among these candidates, the estimated locations which minimize the reconstruction error between the two methods mentioned above are taken as the closest approximation of the eye-center locations. Therefore, we combine explicit shape regression and isophote curvature feature analysis to achieve robustness and accuracy, respectively. In practical experiments, we use the BioID and FERET datasets to test our approach to obtaining an accurate eye-center location while retaining robustness against changes in scale and pose. In addition, we apply our method to non-frontal faces to test its robustness and accuracy, which are essential in gaze estimation but have seldom been mentioned in previous works. Through extensive experimentation, we show that the proposed method can achieve a significant improvement in accuracy and robustness over state-of-the-art techniques, with our method ranking second in terms of accuracy. According to our implementation on a PC with a 2.5 GHz Xeon CPU, the frame rate of the eye-tracking process can reach 38 Hz. PMID:26426929
Charged-particle emission tomography
Ding, Yijun; Caucci, Luca; Barrett, Harrison H.
2018-01-01
Purpose Conventional charged-particle imaging techniques, such as autoradiography, provide only two-dimensional (2D) black ex vivo images of thin tissue slices. In order to get volumetric information, images of multiple thin slices are stacked. This process is time consuming and prone to distortions, as registration of 2D images is required. We propose a direct three-dimensional (3D) autoradiography technique, which we call charged-particle emission tomography (CPET). This 3D imaging technique enables imaging of thick tissue sections, thus increasing laboratory throughput and eliminating distortions due to registration. CPET also has the potential to enable in vivo charged-particle imaging with a window chamber or an endoscope. Methods Our approach to charged-particle emission tomography uses particle-processing detectors (PPDs) to estimate attributes of each detected particle. The attributes we estimate include location, direction of propagation, and/or the energy deposited in the detector. Estimated attributes are then fed into a reconstruction algorithm to reconstruct the 3D distribution of charged-particle-emitting radionuclides. Several setups to realize PPDs are designed. Reconstruction algorithms for CPET are developed. Results Reconstruction results from simulated data showed that a PPD enables CPET if the PPD measures more attributes than just the position of each detected particle. Experiments showed that a two-foil charged-particle detector is able to measure the position and direction of incident alpha particles. Conclusions We proposed a new volumetric imaging technique for charged-particle-emitting radionuclides, which we have called charged-particle emission tomography (CPET). We also proposed a new class of charged-particle detectors, which we have called particle-processing detectors (PPDs). When a PPD is used to measure the direction and/or energy attributes along with the position attributes, CPET is feasible. PMID:28370094
Charged-particle emission tomography.
Ding, Yijun; Caucci, Luca; Barrett, Harrison H
2017-06-01
Conventional charged-particle imaging techniques - such as autoradiography - provide only two-dimensional (2D) black ex vivo images of thin tissue slices. In order to get volumetric information, images of multiple thin slices are stacked. This process is time consuming and prone to distortions, as registration of 2D images is required. We propose a direct three-dimensional (3D) autoradiography technique, which we call charged-particle emission tomography (CPET). This 3D imaging technique enables imaging of thick tissue sections, thus increasing laboratory throughput and eliminating distortions due to registration. CPET also has the potential to enable in vivo charged-particle imaging with a window chamber or an endoscope. Our approach to charged-particle emission tomography uses particle-processing detectors (PPDs) to estimate attributes of each detected particle. The attributes we estimate include location, direction of propagation, and/or the energy deposited in the detector. Estimated attributes are then fed into a reconstruction algorithm to reconstruct the 3D distribution of charged-particle-emitting radionuclides. Several setups to realize PPDs are designed. Reconstruction algorithms for CPET are developed. Reconstruction results from simulated data showed that a PPD enables CPET if the PPD measures more attributes than just the position from each detected particle. Experiments showed that a two-foil charged-particle detector is able to measure the position and direction of incident alpha particles. We proposed a new volumetric imaging technique for charged-particle-emitting radionuclides, which we have called charged-particle emission tomography (CPET). We also proposed a new class of charged-particle detectors, which we have called particle-processing detectors (PPDs). When a PPD is used to measure the direction and/or energy attributes along with the position attributes, CPET is feasible. © 2017 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
Flight control synthesis for flexible aircraft using Eigenspace assignment
NASA Technical Reports Server (NTRS)
Davidson, J. B.; Schmidt, D. K.
1986-01-01
The use of eigenspace assignment techniques to synthesize flight control systems for flexible aircraft is explored. Eigenspace assignment techniques are used to achieve a specified desired eigenspace, chosen to yield desirable system impulse residue magnitudes for selected system responses. Two such techniques are investigated. The first directly determines constant measurement feedback gains that yield a closed-loop system eigenspace close to a desired eigenspace. The second selects quadratic weighting matrices in a linear quadratic control synthesis that asymptotically yields the closed-loop achievable eigenspace. Finally, the possibility of using either of these techniques with state estimation is explored. Application of the methods to synthesize integrated flight-control and structural-mode-control laws for a large flexible aircraft is demonstrated and the results discussed. Eigenspace selection criteria based on design goals are discussed, and for the study case it appears that a desirable eigenspace can be obtained. In addition, the importance of state-space selection is noted, along with problems with reduced-order measurement feedback. Since the full-state control laws may be implemented with dynamic compensation (state estimation), the use of reduced-order measurement feedback is less desirable. This is especially true since no change in the transient response from the pilot's input results if state estimation is used appropriately. The potential for high actuator bandwidth requirements is also noted if the linear quadratic synthesis approach is utilized. Even with the actuator pole location selected, a problem with unmodeled modes is noted due to high bandwidth. Suggestions for future research include investigating how to choose an eigenspace that will achieve certain desired dynamics and stability robustness, determining how the choice of measurements affects synthesis results, and exploring how the phase relationships between desired eigenvector elements affect the synthesis results.
Cover estimation and payload location using Markov random fields
NASA Astrophysics Data System (ADS)
Quach, Tu-Thach
2014-02-01
Payload location is an approach to find the message bits hidden in steganographic images, but not necessarily their logical order. Its success relies primarily on the accuracy of the underlying cover estimators and can be improved if more estimators are used. This paper presents an approach based on Markov random field to estimate the cover image given a stego image. It uses pairwise constraints to capture the natural two-dimensional statistics of cover images and forms a basis for more sophisticated models. Experimental results show that it is competitive against current state-of-the-art estimators and can locate payload embedded by simple LSB steganography and group-parity steganography. Furthermore, when combined with existing estimators, payload location accuracy improves significantly.
MARS Science Laboratory Post-Landing Location Estimation Using Post2 Trajectory Simulation
NASA Technical Reports Server (NTRS)
Davis, J. L.; Shidner, Jeremy D.; Way, David W.
2013-01-01
The Mars Science Laboratory (MSL) Curiosity rover landed safely on Mars on August 5th, 2012 at 10:32 PDT, Earth Received Time. Immediately following touchdown confirmation, best estimates of position were calculated to assist in determining official MSL locations during entry, descent and landing (EDL). Additionally, estimated balance-mass impact locations were provided and used to assess how predicted locations compared to actual locations. For MSL, the Program to Optimize Simulated Trajectories II (POST2) was the primary trajectory simulation tool used to predict and assess EDL performance from cruise stage separation through rover touchdown and descent stage impact. This POST2 simulation was used during MSL operations for EDL trajectory analyses in support of maneuver decisions and of imaging MSL during EDL. This paper presents the simulation methodology used and the results of pre- and post-landing MSL location estimates and associated imagery from the Mars Reconnaissance Orbiter's (MRO) High Resolution Imaging Science Experiment (HiRISE) camera. To generate these estimates, the MSL POST2 simulation nominal and Monte Carlo data, flight telemetry from onboard navigation, relay orbiter positions from MRO and Mars Odyssey, and HiRISE-generated digital elevation models (DEM) were utilized. A comparison of predicted rover and balance-mass location estimates against actual locations is also presented.
Use of environmental isotope tracer and GIS techniques to estimate basin recharge
NASA Astrophysics Data System (ADS)
Odunmbaku, Abdulganiu A. A.
The extensive use of groundwater only began with advances in pumping technology in the early portion of the 20th century. Groundwater provides the majority of the fresh water supply for municipal, agricultural and industrial uses, primarily because it requires little to no treatment. Estimating the volume of groundwater available in a basin is a daunting task, and no accurate measurements can be made; water budgets and simulation models are primarily used to estimate the volume of water in a basin. Precipitation, land surface cover and subsurface geology are factors that affect recharge; these factors affect percolation, which invariably affects groundwater recharge. Depending on precipitation, soil chemistry, groundwater chemical composition, gradient and depth, the age and rate of recharge can be estimated. This research estimates the recharge in the Mimbres, Tularosa and Diablo Basins using the chloride environmental isotope, the chloride mass-balance (CMB) approach and GIS, and also determines the effect of elevation on recharge rate. The Mimbres and Tularosa Basins are located in southern New Mexico and extend southward into Mexico; the Diablo Basin is located in Texas and extends southward. This research utilizes the chloride mass-balance approach to estimate the recharge rate through collection of groundwater data from wells and precipitation. The data were analyzed statistically to eliminate duplication, outliers, and incomplete data. Cluster analysis, Piper diagrams and statistical significance tests were applied to the groundwater parameters, and the infiltration rate was determined using the chloride mass-balance technique. The data were then analyzed spatially using ArcGIS 10. Regions of active recharge were identified in the Mimbres and Diablo Basins, but could not be clearly identified in the Tularosa Basin. CMB recharge was 0.04037 mm/yr (0.0016 in/yr) for the Tularosa Basin, 0.047 mm/yr (0.0016 in/yr) for the Diablo Basin, and 0.2153 mm/yr (0.00848 in/yr) for the Mimbres Basin. The elevation where active recharge occurs was determined to be 1,500 m for the Mimbres and Tularosa Basins and 1,200 m for the Diablo Basin. The results obtained in this study were consistent with results obtained by other researchers working in basins with similar semiarid mountainous conditions, thereby validating the applicability of CMB in the three basins. Keywords: Recharge, chloride mass balance, elevation, Mimbres, Tularosa, Diablo, Basin, GIS, chloride, elevation.
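The chloride mass-balance estimate itself is a one-line computation: recharge equals precipitation scaled by the ratio of chloride concentration in precipitation to that in groundwater, assuming chloride enters only through evapoconcentrated rainfall. The numbers below are illustrative semiarid values chosen to land near the Mimbres figure, not data from the thesis.

```python
def cmb_recharge(precip_mm_yr, cl_precip_mg_l, cl_gw_mg_l):
    """Chloride mass balance: R = P * Cl_precip / Cl_gw."""
    return precip_mm_yr * cl_precip_mg_l / cl_gw_mg_l

# illustrative values, not measured data
print(cmb_recharge(precip_mm_yr=250.0, cl_precip_mg_l=0.4, cl_gw_mg_l=450.0))
# ~0.22 mm/yr, the order of magnitude reported above for the Mimbres Basin
```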
COMDYN: Software to study the dynamics of animal communities using a capture-recapture approach
Hines, J.E.; Boulinier, T.; Nichols, J.D.; Sauer, J.R.; Pollock, K.H.
1999-01-01
COMDYN is a set of programs developed for estimation of parameters associated with community dynamics using count data from two locations or time periods. It is Internet-based, allowing remote users either to input their own data, or to use data from the North American Breeding Bird Survey for analysis. COMDYN allows probability of detection to vary among species and among locations and time periods. The basic estimator for species richness underlying all estimators is the jackknife estimator proposed by Burnham and Overton. Estimators are presented for quantities associated with temporal change in species richness, including rate of change in species richness over time, local extinction probability, local species turnover and number of local colonizing species. Estimators are also presented for quantities associated with spatial variation in species richness, including relative richness at two locations and proportion of species present in one location that are also present at a second location. Application of the estimators to species richness estimation has been previously described and justified. The potential applications of these programs are discussed.
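The Burnham-Overton first-order jackknife named above has a simple closed form; a minimal sketch (our own illustration, not COMDYN's implementation, which also handles detection probabilities and temporal comparisons):

```python
def jackknife_richness(detection_freqs, n_occasions):
    """First-order Burnham-Overton jackknife estimate of species richness.

    detection_freqs: for each observed species, the number of sampling
    occasions (out of n_occasions) on which it was detected.
    """
    s_obs = len(detection_freqs)                    # species actually observed
    f1 = sum(1 for f in detection_freqs if f == 1)  # species seen exactly once
    return s_obs + f1 * (n_occasions - 1) / n_occasions

# e.g. 30 species observed over 5 occasions, 8 of them detected only once
print(jackknife_richness([1] * 8 + [3] * 22, 5))    # 30 + 8*(4/5) = 36.4
```

The estimate rises with the count of singleton species, reflecting that many rarely detected species imply more species missed entirely.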
NASA Astrophysics Data System (ADS)
Hansen, Scott K.; Vesselinov, Velimir V.
2016-10-01
We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
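To make the inversion step concrete, the sketch below fits the closed-form 1-D advection-dispersion solution to synthetic breakthrough curves; it is a heavily simplified, homogeneous-aquifer stand-in for the study's heterogeneous realizations and machine-optimization workflow, and all names and values are ours:

```python
import numpy as np
from scipy.optimize import least_squares

def ade_conc(t, x, xs, ts, v, D, m):
    """1-D ADE solution for an instantaneous point release of strength m
    at location xs and time ts (v: velocity, D: dispersion coefficient)."""
    tau = np.maximum(t - ts, 1e-9)  # elapsed time since release
    return m / np.sqrt(4 * np.pi * D * tau) * np.exp(-((x - xs - v * tau) ** 2) / (4 * D * tau))

def residuals(p, t, x_wells, c_obs):
    return (ade_conc(t[:, None], x_wells[None, :], *p) - c_obs).ravel()

# synthetic breakthrough data at three hypothetical wells
t = np.linspace(1.0, 200.0, 100)
x_wells = np.array([50.0, 80.0, 120.0])
c_obs = ade_conc(t[:, None], x_wells[None, :], 0.0, 10.0, 1.0, 2.0, 100.0)

fit = least_squares(residuals, x0=[10.0, 0.0, 0.8, 1.0, 50.0], args=(t, x_wells, c_obs))
print("estimated source location: %.1f, release time: %.1f" % (fit.x[0], fit.x[1]))
```

Repeating such fits over many random aquifer realizations, as the study does, is what yields empirical confidence envelopes for the estimated source position.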
DOE Office of Scientific and Technical Information (OSTI.GOV)
T. R. Saffle; R. G. Mitchell; R. B. Evans
The results of the various monitoring programs for 1998 indicated that radioactivity from the DOE's Idaho National Engineering and Environmental Laboratory (INEEL) operations could generally not be distinguished from worldwide fallout and natural radioactivity in the region surrounding the INEEL. Although some radioactive materials were discharged during INEEL operations, concentrations in the offsite environment and doses to the surrounding population were far less than state of Idaho and federal health protection guidelines. Gross alpha and gross beta measurements, used as a screening technique for air filters, were investigated by making statistical comparisons between onsite or boundary location concentrations and the distant community group concentrations. Gross alpha activities were generally higher at distant locations than at boundary and onsite locations. Air samples were also analyzed for specific radionuclides. Some human-made radionuclides were detected at offsite locations, but most were near the minimum detectable concentration and their presence was attributable to natural sources, worldwide fallout, and statistical variations in the analytical results rather than to INEEL operations. Low concentrations of 137Cs were found in muscle tissue and liver of some game animals and sheep. These levels were mostly consistent with background concentrations measured in animals sampled onsite and offsite in recent years. Ionizing radiation measured simultaneously at the INEEL boundary and distant locations using environmental dosimeters was similar and showed only background levels. The maximum potential population dose from submersion, ingestion, inhalation, and deposition to the approximately 121,500 people residing within an 80-km (50-mi) radius of the geographical center of the INEEL was estimated to be 0.08 person-rem (8 x 10^-4 person-Sv) using the MDIFF air dispersion model. This population dose is less than 0.0002 percent of the estimated 43,700 person-rem (437 person-Sv) population dose from background radioactivity.
Ultrasonography in Acupuncture-Uses in Education and Research.
Leow, Mabel Qi He; Cui, Shu Li; Mohamed Shah, Mohammad Taufik Bin; Cao, Taige; Tay, Shian Chao; Tay, Peter Kay Chai; Ooi, Chin Chin
2017-06-01
This study aims to explore the potential use of ultrasound in locating the second posterior sacral foramen acupuncture point, quantifying the depth of insertion and describing the surrounding anatomical structures. We performed acupuncture needle insertion on a study team member. There were four steps in our experiment. First, the acupuncturist located the acupuncture point by palpation. Second, we used an ultrasound machine to visualize the structures surrounding the location of the acupuncture point and measure the depth required for needle insertion. Third, the acupuncturist inserted the acupuncture needle into the acupuncture point at an angle of 30°. Fourth, we performed another ultrasound scan to ensure that the needle was in the desired location. Results suggested that ultrasound could be used to locate the acupuncture point and estimate the depth of needle insertion. The needle was inserted to a depth of 4.0 cm to reach the surface of the sacral foramen. Based on Pythagoras' theorem, taking a needle insertion angle of 30° and a needle insertion depth of 4.0 cm, the estimated perpendicular depth is 1.8 cm. An ultrasound scan corroborated the depth of 1.85 cm. The use of an ultrasound-guided technique for needle insertion in acupuncture practice could help standardize the treatment. Clinicians and students would be able to visualize and measure the depth of the sacral foramen acupuncture point, to guide the depth of needle insertion. This methodological guide could also be used to create a standard treatment protocol for research. A similar mathematical guide could also be created for other acupuncture points in future. Copyright © 2017. Published by Elsevier B.V.
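For the depth arithmetic above, the underlying relation is trigonometric (a worked check, assuming the 30° angle is measured from the skin surface):

```latex
d_{\perp} = L\sin\theta = 4.0\,\mathrm{cm} \times \sin 30^{\circ} = 2.0\,\mathrm{cm},
\qquad
\theta_{\mathrm{eff}} = \arcsin\!\left(\tfrac{1.8}{4.0}\right) \approx 27^{\circ}
```

The reported 1.8 cm (ultrasound: 1.85 cm) thus corresponds to an effective insertion angle slightly shallower than the nominal 30°.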
Jirapatnakul, Artit C; Fotin, Sergei V; Reeves, Anthony P; Biancardi, Alberto M; Yankelevitz, David F; Henschke, Claudia I
2009-01-01
Estimation of nodule location and size is an important pre-processing step in some nodule segmentation algorithms to determine the size and location of the region of interest. Ideally, such estimation methods will consistently find the same nodule location regardless of where the seed point (provided either manually or by a nodule detection algorithm) is placed relative to the "true" center of the nodule, and the size should be a reasonable estimate of the true nodule size. We developed a method that estimates nodule location and size using multi-scale Laplacian of Gaussian (LoG) filtering. Nodule candidates near a given seed point are found by searching for blob-like regions with high filter response. The candidates are then pruned according to filter response and location, and the remaining candidates are sorted by size and the largest candidate selected. This method was compared to a previously published template-based method. The methods were evaluated on the basis of the stability of the estimated nodule location to changes in the initial seed point and how well the size estimates agreed with volumes determined by a semi-automated nodule segmentation method. The LoG method exhibited better stability to changes in the seed point, with 93% of nodules having the same estimated location even when the seed point was altered, compared to only 52% of nodules for the template-based method. Both methods also showed good agreement with sizes determined by a nodule segmentation method, with an average relative size difference of 5% and -5% for the LoG and template-based methods, respectively.
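A minimal sketch of the multi-scale LoG search (simplified: it keeps only the strongest scale-normalized response near the seed, rather than the paper's pruning and largest-candidate rule; all names are ours):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_location_size(volume, seed_vox, spacing_mm, scales_mm, search_mm=10.0):
    """Estimate a bright blob's centre and diameter near a seed point.

    Bright nodules give a negative LoG response, so the negated,
    sigma^2-normalized filter output is maximised within search_mm of the seed.
    """
    zz, yy, xx = np.indices(volume.shape)
    dist = np.sqrt(((zz - seed_vox[0]) * spacing_mm[0]) ** 2 +
                   ((yy - seed_vox[1]) * spacing_mm[1]) ** 2 +
                   ((xx - seed_vox[2]) * spacing_mm[2]) ** 2)
    near = dist <= search_mm
    best_score, best_centre, best_diam = -np.inf, tuple(seed_vox), 2.0 * scales_mm[0]
    for s in scales_mm:
        sigma_vox = [s / sp for sp in spacing_mm]          # handle anisotropic voxels
        resp = -(s ** 2) * gaussian_laplace(volume.astype(float), sigma_vox)
        resp[~near] = -np.inf
        idx = np.unravel_index(np.argmax(resp), resp.shape)
        if resp[idx] > best_score:
            # a 3-D Gaussian blob of scale s peaks at radius ~ sqrt(3)*s
            best_score, best_centre, best_diam = resp[idx], idx, 2.0 * np.sqrt(3) * s
    return best_centre, best_diam
```

Because the search window is defined around the seed rather than anchored to it, small shifts of the seed point tend to return the same maximum, which is the stability property the evaluation measures.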
NASA Technical Reports Server (NTRS)
Hoffer, R. M. (Principal Investigator)
1974-01-01
The author has identified the following significant results. Good ecological classification accuracy (90-95%) can be achieved in areas of rugged relief on a regional basis for Level 1 cover types (coniferous forest, deciduous forest, grassland, cropland, bare rock and soil, and water) using computer-aided analysis techniques on ERTS/MSS data. Cost comparisons showed that a Level 1 cover type map and a table of areal estimates could be obtained for the 443,000-hectare San Juan Mt. test site for less than 0.1 cent per acre, whereas photointerpretation techniques would cost more than 0.4 cent per acre. Results of snow cover mapping have conclusively proven that the areal extent of snow in mountainous terrain can be rapidly and economically mapped by using ERTS/MSS data and computer-aided analysis techniques. A distinct relationship between elevation and time of freeze or thaw was observed during mountain lake mapping. Basic lithologic units such as igneous, sedimentary, and unconsolidated rock materials were successfully identified. Geomorphic form, which is exhibited through spatial and textural data, can only be inferred from ERTS data. Data collection platform systems can be utilized to produce satisfactory data from extremely inaccessible locations subject to very adverse weather conditions, as indicated by results from a DCP located at 3,536 meters elevation that encountered minimum temperatures of -25.5°C and wind speeds of up to 40.9 m/sec (91 mph), yet still performed very reliably.
Inferring the most probable maps of underground utilities using Bayesian mapping model
NASA Astrophysics Data System (ADS)
Bilal, Muhammad; Khan, Wasiq; Muggleton, Jennifer; Rustighi, Emiliano; Jenks, Hugo; Pennock, Steve R.; Atkins, Phil R.; Cohn, Anthony
2018-03-01
Mapping the Underworld (MTU), a major initiative in the UK, is focused on addressing the social, environmental and economic consequences arising from the inability to locate buried underground utilities (such as pipes and cables) by developing a multi-sensor mobile device. The aim of the MTU device is to locate different types of buried assets in real time with the use of automated data processing techniques and statutory records. The statutory records, even though typically inaccurate and incomplete, provide useful prior information on what is buried under the ground and where. However, the integration of information from multiple sensors (raw data) with these qualitative maps and their visualization is challenging and requires the implementation of robust machine learning/data fusion approaches. In this paper, an approach for the automated creation of revised maps was developed as a Bayesian mapping model by integrating the knowledge extracted from the sensors' raw data and the available statutory records. The statutory records were combined with the hypotheses from the sensors to form an initial estimate of what might be found underground and roughly where. The maps were (re)constructed using automated image segmentation techniques for hypothesis extraction and Bayesian classification techniques for segment-manhole connections. The model, consisting of an image segmentation algorithm and various Bayesian classification techniques (segment recognition and the expectation maximization (EM) algorithm), provided robust performance on various simulated as well as real sites in terms of predicting linear/non-linear segments and constructing refined 2D/3D maps.
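The segment-manhole connection step is, at heart, a prior-times-likelihood update; a minimal Bayes-rule sketch (the function and numbers are illustrative only, not the paper's full segmentation-plus-EM model):

```python
def posterior_connection(prior, lik_if_connected, lik_if_not):
    """P(segment joins manhole | sensor evidence) by Bayes' rule."""
    num = prior * lik_if_connected
    return num / (num + (1.0 - prior) * lik_if_not)

# statutory records suggest a pipe here (prior 0.7); the sensor hypothesis
# is markedly more probable if a connection exists than if it does not
print(posterior_connection(0.7, 0.8, 0.3))  # ~0.86
```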
NASA Astrophysics Data System (ADS)
Guo, Qiang; Galushko, Volodymyr G.; Zalizovski, Andriy V.; Kashcheyev, Sergiy B.; Zheng, Yu
2018-05-01
A modification of the Doppler Interferometry Technique is suggested to enable estimating the angles of arrival of comparatively broadband HF signals scattered by random irregularities of the ionospheric plasma with the use of small-size weakly directional antennas. The technique is based on measurements of the cross-spectra phases of the probe radiation recorded in at least three spatially separated points. The developed algorithm has been used to investigate the angular and frequency-time characteristics of HF signals propagating at frequencies above the maximum usable frequency (MUF) for the direct radio path Moscow-Kharkiv. The received signal spectra show the presence of three families of spatial components attributed, respectively, to scattering by plasma irregularities near the middle point of the radio path, ground backscatter signals and scattering of the sounding signals by the intense plasma turbulence associated with auroral activations. It has been shown that the regions responsible for the formation of the third family of components are located well inside the auroral oval. The drift velocity and direction of the auroral ionospheric plasma have been determined. The obtained estimates are consistent with the classical conception of ionospheric plasma convection at high latitudes and do not contradict the results of investigations of auroral ionosphere dynamics using the SuperDARN network.
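A sketch of the core cross-spectrum phase measurement for one antenna pair (assumes a narrowband plane wave and coherently sampled records; at least three antennas, i.e. two baselines, are needed for the full 2-D direction, as the abstract notes):

```python
import numpy as np
from scipy.signal import csd

def arrival_angle_deg(sig_a, sig_b, fs, f_probe_hz, baseline_m, c=3.0e8):
    """Angle of arrival (from broadside) via cross-spectrum phase.

    Baselines shorter than half a wavelength avoid phase ambiguity.
    """
    f, p_ab = csd(sig_a, sig_b, fs=fs, nperseg=1024)
    phi = np.angle(p_ab[np.argmin(np.abs(f - f_probe_hz))])  # phase at probe frequency
    sin_theta = c * phi / (2.0 * np.pi * f_probe_hz * baseline_m)
    return np.degrees(np.arcsin(np.clip(sin_theta, -1.0, 1.0)))
```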
Albers, J.L.; Wildhaber, M.L.; DeLonay, A.J.
2013-01-01
Minimally invasive, non-lethal ultrasonography was used to assess sex, egg diameter, fecundity, gonad volume, and gonadosomatic index, and endoscopy was used to visually assess the reproductive stage of Scaphirhynchus albus. Estimated mean egg diameters of 2.202 ± 0.187 mm and a mean fecundity of 44,531 ± 23,940 eggs were similar to previous studies using invasive techniques. Mean S. albus gonadosomatic indices (GSI) for reproductive and non-reproductive females were 16.16% and 1.26%, respectively, while reproductive and non-reproductive male GSI were 2.00% and 0.43%, respectively. There was no relationship between hybrid status or capture location and GSI. Mean fecundity was 48.5% higher than hatchery spawn estimates. Fecundity increased as fork length increased, but did so more dramatically in the upper river kilometers of the Missouri River. By examining multiple fish over multiple years, the reproductive cycle periodicity was found to be 2–4 years for hatchery female S. albus and 1–4 years for river-dwelling males. The use of ultrasonic and endoscopic methods in combination was shown to be helpful in tracking individual gonad characteristics over multi-year reproductive cycles.
NASA Astrophysics Data System (ADS)
Mitchell, G. A.; Gharib, J. J.; Doolittle, D. F.
2015-12-01
Methane gas flux from the seafloor to the atmosphere is an important variable for global carbon cycle and climate models, yet it is poorly constrained. Methodologies used to estimate seafloor gas flux commonly employ a combination of acoustic and optical techniques. These techniques often use hull-mounted multibeam echosounders (MBES) to quickly ensonify large volumes of the water column for acoustic backscatter anomalies indicative of gas bubble plumes. Detection of these water column anomalies with a MBES provides information on the lateral distribution of the plumes, the midwater dimensions of the plumes, and their positions on the seafloor. Seafloor plume locations are targeted for visual investigations using a remotely operated vehicle (ROV) to determine bubble emission rates, venting behaviors, bubble sizes, and ascent velocities. Once these variables are measured in situ, an extrapolation of gas flux is made over the survey area using the number of remotely mapped flares. This methodology was applied to a geophysical survey conducted in 2013 over a large seafloor crater that developed in response to an oil well blowout in 1983 offshore Papua New Guinea. The site was investigated by multibeam and sidescan mapping, sub-bottom profiling, 2-D high-resolution multi-channel seismic reflection, and ROV video and coring operations. Numerous water column plumes were detected in the data, suggesting vigorously active vents within and near the seafloor crater (Figure 1). This study uses dual-frequency MBES datasets (Reson 7125, 200/400 kHz) and ROV video imagery of the active hydrocarbon seeps to estimate the total gas flux from the crater. Plumes of bubbles were extracted from the water column data using threshold filtering techniques. Analysis of video images of the seep emission sites within the crater provided estimates of bubble size, expulsion frequency, and ascent velocity. The average per-vent gas flux derived from the ROV video observations is then extrapolated over the number of individual flares detected acoustically to estimate the gas flux from the survey area. The gas flux estimate from the water column filtering and ROV observations yields a range of 2.2-6.6 mol CH4/min.
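The extrapolation arithmetic in the final step can be sketched as follows (ideal-gas conversion at depth; every value below is illustrative, not from the survey):

```python
import numpy as np

def vent_flux_mol_per_min(bubble_radius_m, bubbles_per_s, depth_m, temp_k=290.0):
    """Molar CH4 flux of one vent from ROV-derived bubble statistics."""
    R = 8.314                                        # J/(mol K)
    pressure = 101325.0 + 1025.0 * 9.81 * depth_m    # atmospheric + hydrostatic, Pa
    vol = 4.0 / 3.0 * np.pi * bubble_radius_m ** 3   # volume per bubble, m^3
    mol_per_bubble = pressure * vol / (R * temp_k)   # ideal gas law
    return mol_per_bubble * bubbles_per_s * 60.0

# per-vent flux scaled by the number of acoustically detected flares
total = vent_flux_mol_per_min(2.5e-3, 10.0, 120.0) * 40
print(total)
```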
NASA Astrophysics Data System (ADS)
de Souza, V.; Apel, W. D.; Arteaga, J. C.; Badea, F.; Bekk, K.; Bertaina, M.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Brüggemann, M.; Buchholz, P.; Cantoni, E.; Chiavassa, A.; Cossavella, F.; Daumiller, K.; di Pierro, F.; Doll, P.; Engel, R.; Engler, J.; Finger, M.; Fuhrmann, D.; Ghia, P. L.; Gils, H. J.; Glasstetter, R.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Huege, T.; Isar, P. G.; Kampert, K.-H.; Kang, D.; Kickelbick, D.; Klages, H. O.; Kolotaev, Y.; Łuczak, P.; Mathes, H. J.; Mayer, H. J.; Milke, J.; Mitrica, B.; Morello, C.; Navarra, G.; Nehls, S.; Oehlschläger, J.; Ostapchenko, S.; Over, S.; Petcu, M.; Pierog, T.; Rebel, H.; Roth, M.; Schieler, H.; Schröder, F.; Sima, O.; Stümpert, M.; Toma, G.; Trinchero, G. C.; Ulrich, H.; van Buren, J.; Walkowiak, W.; Weindl, A.; Wochele, J.; Wommer, M.; Zabierowski, J.
2009-04-01
KASCADE-Grande is a multi-component detector located at Karlsruhe, Germany. It was optimized to measure cosmic ray air showers with energies between 5×10^16 and 10^18 eV. Its capabilities are based on the use of several techniques to measure the electromagnetic and muon components of the shower in an independent way, which allows a direct comparison to hadronic interaction models and a good estimation of the primary cosmic ray composition. In this paper, we present the status of the experiment, an update of the data analysis and the latest results.
Stuckey, Marla H.; Koerkle, Edward H.; Ulrich, James E.
2012-01-01
BaSE uses the map correlation method and flow-duration exceedance probability regression equations to estimate baseline daily mean streamflow for an ungaged location. The output from BaSE is a Microsoft Excel® report file that summarizes the reference streamgage and ungaged location information, including basin characteristics, percent difference in basin characteristics between the two locations, any warning associated with the basin characteristics, mean and median streamflow for the ungaged location, and a daily hydrograph of streamflow for water years 1960–2008 for the ungaged location. The daily mean streamflow for the ungaged location can be exported as a text file to be used as input into other statistical software packages. BaSE estimates daily mean streamflow for baseline conditions only, and any alterations to streamflow from regulation, large water use, or substantial mining are not reflected in the estimated streamflow.
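The daily transfer underlying this kind of estimate can be sketched as a QPPQ-style mapping: each day's flow at the reference streamgage is converted to an exceedance probability and read back through the regression-estimated flow-duration curve of the ungaged site (a sketch of the transfer step only; BaSE's regression equations and map-correlation gage selection are not reproduced here):

```python
import numpy as np

def qppq_transfer(q_ref, fdc_probs, fdc_flows_ungaged):
    """Transfer a reference-gage daily record to an ungaged site.

    q_ref: daily mean flows at the reference streamgage
    fdc_probs, fdc_flows_ungaged: exceedance probabilities and flows of the
    regression-estimated flow-duration curve for the ungaged location
    """
    q = np.asarray(q_ref, dtype=float)
    ranks = q.argsort().argsort()                        # 0 = smallest flow
    exceed_p = 1.0 - (ranks + 1.0) / (len(q) + 1.0)      # Weibull plotting position
    order = np.argsort(fdc_probs)                        # np.interp needs ascending xp
    return np.interp(exceed_p, np.asarray(fdc_probs)[order],
                     np.asarray(fdc_flows_ungaged)[order])
```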
ERIC Educational Resources Information Center
Ho, Chung-Cheng
2016-01-01
For decades, direction finding has been an important research topic in many applications such as radar, location services, and medical diagnosis and treatment. In those kinds of applications, the precision of the location estimate plays an important role; consequently, a higher-precision location estimation method is always desirable. Although…
NASA Astrophysics Data System (ADS)
Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.
2017-01-01
Declared North Korean nuclear tests in 2006, 2009, 2013 and 2016 were observed seismically at regional and teleseismic distances. Waveform similarity allows the events to be located relatively with far greater accuracy than the absolute locations can be determined from seismic data alone. There is now significant redundancy in the data given the large number of regional and teleseismic stations that have recorded multiple events, and relative location estimates can be confirmed independently by performing calculations on many mutually exclusive sets of measurements. Using a 1-D global velocity model, the distances between the events estimated using teleseismic P phases are found to be approximately 25 per cent shorter than the distances between events estimated using regional Pn phases. The 2009, 2013 and 2016 events all take place within 1 km of each other and the discrepancy between the regional and teleseismic relative location estimates is no more than about 150 m. The discrepancy is much more significant when estimating the location of the more distant 2006 event relative to the later explosions with regional and teleseismic estimates varying by many hundreds of metres. The relative location of the 2006 event is challenging given the smaller number of observing stations, the lower signal-to-noise ratio and significant waveform dissimilarity at some regional stations. The 2006 event is however highly significant in constraining the absolute locations in the terrain at the Punggye-ri test-site in relation to observed surface infrastructure. For each seismic arrival used to estimate the relative locations, we define a slowness scaling factor which multiplies the gradient of seismic traveltime versus distance, evaluated at the source, relative to the applied 1-D velocity model. A procedure for estimating correction terms which reduce the double-difference time residual vector norms is presented together with a discussion of the associated uncertainty. The modified velocity gradients reduce the residuals, the relative location uncertainties and the sensitivity to the combination of stations used. The traveltime gradients appear to be overestimated for the regional phases, and teleseismic relative location estimates are likely to be more accurate despite an apparent lower precision. Calibrations for regional phases are essential given that smaller magnitude events are likely not to be recorded teleseismically. We discuss the implications for the absolute event locations. Placing the 2006 event under a local maximum of overburden at 41.293°N, 129.105°E would imply a location of 41.299°N, 129.075°E for the January 2016 event, providing almost optimal overburden for the later four events.
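The relative-location computation reduces, for two nearly co-located events, to a small least-squares problem in the epicentre offset and origin-time difference; a sketch (the slowness scaling factor follows the description above; everything else, including names, is ours):

```python
import numpy as np

def relative_location(dt_obs, azimuths_deg, slowness_s_per_km, scale=1.0):
    """Least-squares epicentre offset of event 2 relative to event 1.

    dt_obs: differential arrival times (event 2 minus event 1) per station
    azimuths_deg: source-to-station azimuths
    slowness_s_per_km: traveltime gradient at the source for each phase
    scale: slowness scaling factor applied to the 1-D model gradients
    """
    az = np.radians(np.asarray(azimuths_deg))
    p = scale * np.asarray(slowness_s_per_km)
    # moving the source toward a station shortens its traveltime, hence the minus;
    # the third column absorbs the origin-time difference
    G = np.column_stack([-p * np.sin(az), -p * np.cos(az), np.ones_like(p)])
    m, *_ = np.linalg.lstsq(G, np.asarray(dt_obs), rcond=None)
    return m  # (east offset km, north offset km, origin-time shift s)
```

Adjusting the scale factor changes the inferred inter-event distances in inverse proportion to the applied gradients, which is why calibrating the traveltime gradients matters for reconciling regional and teleseismic estimates.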
Assessing the Impact of Observations on Numerical Weather Forecasts Using the Adjoint Method
NASA Technical Reports Server (NTRS)
Gelaro, Ronald
2012-01-01
The adjoint of a data assimilation system provides a flexible and efficient tool for estimating observation impacts on short-range weather forecasts. The impacts of any or all observations can be estimated simultaneously based on a single execution of the adjoint system. The results can be easily aggregated according to data type, location, channel, etc., making this technique especially attractive for examining the impacts of new hyper-spectral satellite instruments and for conducting regular, even near-real-time, monitoring of the entire observing system. This talk provides a general overview of the adjoint method, including the theoretical basis and practical implementation of the technique. Results are presented from the adjoint-based observation impact monitoring tool in NASA's GEOS-5 global atmospheric data assimilation and forecast system. When performed in conjunction with standard observing system experiments (OSEs), the adjoint results reveal both redundancies and dependencies between observing system impacts as observations are added or removed from the assimilation system. Understanding these dependencies may be important for optimizing the use of the current observational network and defining requirements for future observing systems.
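In its widely used adjoint-based (Langland-Baker-type) form (a general statement of the approach, not necessarily the exact error measure used in GEOS-5), the impact of the observations on a chosen forecast-error measure e is estimated as

```latex
\delta e \;\approx\; \left(\mathbf{y} - H(\mathbf{x}_b)\right)^{\mathsf{T}}
\mathbf{K}^{\mathsf{T}}\,\frac{\partial e}{\partial \mathbf{x}_a}
```

where y − H(x_b) is the innovation vector, K the assimilation gain, and ∂e/∂x_a the forecast-error gradient carried back to analysis time by the adjoint model; because the result is a sum over observations, the per-observation terms can be aggregated by data type, location, or channel as described above.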
A Technique of Fuzzy C-Mean in Multiple Linear Regression Model toward Paddy Yield
NASA Astrophysics Data System (ADS)
Syazwan Wahab, Nur; Saifullah Rusiman, Mohd; Mohamad, Mahathir; Amira Azmi, Nur; Che Him, Norziha; Ghazali Kamardan, M.; Ali, Maselan
2018-04-01
In this paper, we propose a hybrid model that combines a multiple linear regression model with the fuzzy c-means method. This research examined the relationship between paddy yield and 20 topsoil variates analyzed prior to planting at standard fertilizer rates. The data used were from the multi-location trials for rice carried out by MARDI at major paddy granaries in Peninsular Malaysia during the period from 2009 to 2012. Missing observations were estimated using mean estimation techniques. The data were analyzed using a multiple linear regression model alone and in combination with the fuzzy c-means method. Analysis of normality and multicollinearity indicated that the data are normally distributed without multicollinearity among the independent variables. Fuzzy c-means analysis clustered the paddy yields into two clusters before the multiple linear regression model was applied. The comparison between the two methods indicates that the hybrid of the multiple linear regression model and the fuzzy c-means method outperforms the multiple linear regression model alone, with a lower mean square error.
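A compact sketch of the hybrid scheme (a minimal fuzzy c-means in plain NumPy followed by one ordinary-least-squares fit per cluster; note the study clusters the yields themselves, while this sketch clusters on the covariates for simplicity):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: returns memberships U (n x c) and cluster centres."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))
    for _ in range(iters):
        W = U ** m                                        # fuzzified weights
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))                     # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centres

def hybrid_fcm_mlr(X, y, c=2):
    """Cluster the data, then fit one linear regression per (hard) cluster."""
    U, _ = fuzzy_c_means(X, c=c)
    labels = U.argmax(axis=1)
    betas = []
    for k in range(c):
        mask = labels == k
        Xk = np.column_stack([np.ones(mask.sum()), X[mask]])  # intercept + covariates
        beta, *_ = np.linalg.lstsq(Xk, y[mask], rcond=None)
        betas.append(beta)
    return labels, betas
```

Fitting separate regressions within clusters lets each soil regime carry its own coefficients, which is the mechanism behind the reported drop in mean square error.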
Visible light communication technology for fine-grained indoor localization
NASA Astrophysics Data System (ADS)
Vieira, M.; Vieira, M. A.; Louro, P.; Fantoni, A.; Vieira, P.
2018-02-01
This paper focuses on designing and analysing a visible light based communication and positioning system. The indoor positioning system uses trichromatic white Light Emitting Diodes (LEDs), both for illumination purposes and as transmitters, and an optical processor, based on a-SiC:H technology, as the mobile receiver. An On-Off Keying (OOK) modulation scheme is used, providing a good trade-off between system performance and implementation complexity. The relationship between the transmitted data and the received output levels is then decoded. LED bulbs work as transmitters, sending information together with different identifiers (IDs) related to their physical locations. Square and diamond topologies for the unit cell are analyzed, and a 2D localization design, demonstrated by a prototype implementation, is presented. Fine-grained indoor localization is tested. The received signal is used in coded multiplexing techniques for supporting communications and navigation concomitantly on the same channel. The location and motion information is found by mapping the position and estimating the location areas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aziz, Mohd Khairul Bazli Mohd, E-mail: mkbazli@yahoo.com; Yusof, Fadhilah, E-mail: fadhilahy@utm.my; Daud, Zalina Mohd, E-mail: zalina@ic.utm.my
Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without clear scientific justification. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned to new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon season (November-February) of 1975 until 2008. This study used a combination of a geostatistical method (the variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The result shows that the new rain gauge locations provide the minimum estimated variance, indicating that the combination of the geostatistical variance-reduction method and simulated annealing is successful in developing the new optimum rain gauge network.
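A sketch of the annealing loop (the acceptance rule is standard; the objective here is a crude distance-based stand-in for the study's kriging-variance criterion, and all names are ours):

```python
import numpy as np

def anneal_network(candidates, k, iters=5000, t0=1.0, seed=0):
    """Pick k gauge sites from candidate coordinates (n x 2 array) by annealing."""
    rng = np.random.default_rng(seed)

    def cost(sel):  # mean squared distance to the nearest selected gauge
        d = np.linalg.norm(candidates[:, None, :] - candidates[sel][None, :, :], axis=2)
        return float((d.min(axis=1) ** 2).mean())

    sel = rng.choice(len(candidates), size=k, replace=False)
    best, best_c = sel.copy(), cost(sel)
    cur_c = best_c
    for i in range(iters):
        temp = t0 * (1.0 - i / iters) + 1e-6             # linear cooling schedule
        new = sel.copy()
        new[rng.integers(k)] = rng.choice(np.setdiff1d(np.arange(len(candidates)), new))
        c = cost(new)
        if c < cur_c or rng.random() < np.exp((cur_c - c) / temp):
            sel, cur_c = new, c                          # accept (sometimes uphill) move
            if c < best_c:
                best, best_c = new.copy(), c
    return best
```

In the study itself the objective is the kriging (estimation) variance of the rainfall field; swapping that in only changes the cost function.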
NASA Astrophysics Data System (ADS)
Haslauer, C. P.; Allmendinger, M.; Gnann, S.; Heisserer, T.; Bárdossy, A.
2017-12-01
The basic problem of geostatistics is to estimate a primary variable (e.g. groundwater quality, nitrate) at an unsampled location based on point measurements at locations in the vicinity. Typically, models are used that describe the spatial dependence based on the geometry of the observation network. This presentation demonstrates methods that additionally take the following properties into account: the statistical distribution of the measurements, a different degree of dependence in different quantiles, censored measurements, the composition of categorical additional information in the neighbourhood (exhaustive secondary information), and the spatial dependence of a dependent secondary variable, possibly measured with a different observation network (non-exhaustive secondary data). Two modelling approaches are demonstrated individually and in combination: The non-stationarity in the marginal distribution is accounted for by locally mixed distribution functions that depend on the composition of the categorical variable in the neighbourhood of each interpolation location. This methodology is currently being implemented for operational use at the environmental state agency of Baden-Württemberg. An alternative to co-kriging in copula space with an arbitrary number of secondary parameters is presented: the method performs better than traditional techniques if the primary variable is undersampled and does not produce erroneous negative estimates. Moreover, the quality of the uncertainty estimates is much improved. The worth of the secondary information is thoroughly evaluated. The improved geostatistical hydrogeological models are analyzed using measurements from a large observation network (~2,500 measurement locations) in the state of Baden-Württemberg (~36,000 km2). Typical groundwater quality parameters such as nitrate, chloride, barium, atrazine, and desethylatrazine are assessed, cross-validated, and compared with traditional geostatistical methods. The secondary information of land use is available on a 30 m × 30 m raster. We show that the presented methods are not only better estimators (e.g. in the sense of an average quadratic error) but also exhibit a much more realistic structure of the uncertainty, and hence are improvements over existing methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, L; Ding, G
Purpose: Dose calculation accuracy for out-of-field dose is important for predicting the dose to organs-at-risk located outside the primary beams. Published investigations evaluating the out-of-field dose calculation accuracy of treatment planning systems (TPS) have focused on low-energy (6 MV) photon beams. This study evaluates the out-of-field dose calculation accuracy of the AAA algorithm for 15 MV high-energy photon beams. Methods: We used the EGSnrc Monte Carlo (MC) codes to evaluate the AAA algorithm in the Varian Eclipse TPS (v.11). The incident beams start with validated Varian phase-space sources for a TrueBeam linac equipped with a Millennium 120 MLC. Dose comparisons between AAA and MC for CT-based realistic patient treatment plans using VMAT techniques for prostate and lung were performed, and the uncertainties of organ doses predicted by AAA at out-of-field locations were evaluated. Results: The results show that AAA calculations under-estimate doses at the level of 1% (or less) of the prescribed dose for CT-based patient treatment plans using VMAT techniques. In regions where the dose is only 1% of the prescribed dose, although AAA under-estimates the out-of-field dose by 30% relative to the local dose, this amounts to only about 0.3% of the prescribed dose. For example, the uncertainty of the calculated organ dose to the liver or kidney when located out-of-field is <0.3% of the prescribed dose. Conclusion: For 15 MV high-energy photon beams, very good agreement (<1%) in calculated dose distributions was obtained between AAA and MC. The uncertainty of out-of-field dose calculations predicted by the AAA algorithm for realistic patient VMAT plans is <0.3% of the prescribed dose in regions where the dose relative to the prescribed dose is <1%, although the uncertainties can be much larger relative to local doses. For organs-at-risk located out-of-field, the error of the dose predicted by Eclipse using AAA is negligible. This work was conducted in part using the resources of Varian research grant VUMC40590-R.
Overlapped Fourier coding for optical aberration removal
Horstmeyer, Roarke; Ou, Xiaoze; Chung, Jaebum; Zheng, Guoan; Yang, Changhuei
2014-01-01
We present an imaging procedure that simultaneously optimizes a camera’s resolution and retrieves a sample’s phase over a sequence of snapshots. The technique, termed overlapped Fourier coding (OFC), first digitally pans a small aperture across a camera’s pupil plane with a spatial light modulator. At each aperture location, a unique image is acquired. The OFC algorithm then fuses these low-resolution images into a full-resolution estimate of the complex optical field incident upon the detector. Simultaneously, the algorithm utilizes redundancies within the acquired dataset to computationally estimate and remove unknown optical aberrations and system misalignments via simulated annealing. The result is an imaging system that can computationally overcome its optical imperfections to offer enhanced resolution, at the expense of taking multiple snapshots over time. PMID:25321982
Determination of a Limited Scope Network's Lightning Detection Efficiency
NASA Technical Reports Server (NTRS)
Rompala, John T.; Blakeslee, R.
2008-01-01
This paper outlines a modeling technique to map lightning detection efficiency variations over a region surveyed by a sparse array of ground-based detectors. A reliable flash peak current distribution (PCD) for the region serves as the technique's base. This distribution is recast as an event probability distribution function. The technique then uses the PCD, together with information regarding site signal detection thresholds, the type of solution algorithm used, and range attenuation, to formulate the probability that a flash at a specified location will yield a solution. Applying this technique to the full region produces detection efficiency contour maps specific to the parameters employed. These contours facilitate a comparative analysis of each parameter's effect on the network's detection efficiency. In an alternate application, this modeling technique gives an estimate of the number, strength, and distribution of events going undetected. This approach leads to a variety of event density contour maps. This application is also illustrated. The technique's base PCD can be empirical or analytical. A process for formulating an empirical PCD specific to the region and network being studied is presented. A new method for producing an analytical representation of the empirical PCD is also introduced.
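A Monte Carlo sketch of the detection-efficiency mapping (a lognormal PCD, 1/r range attenuation, and a minimum-site solution rule are stated here as assumptions; the paper's actual PCD, attenuation law, and solution algorithm may differ):

```python
import numpy as np

def detection_efficiency(site_xy, grid_xy, thresholds, n_draws=10000,
                         mu=np.log(15.0), sigma=1.0, min_sites=4, seed=0):
    """Fraction of simulated flashes detectable at each grid point."""
    rng = np.random.default_rng(seed)
    peaks = rng.lognormal(mu, sigma, n_draws)               # peak currents, kA
    de = np.empty(len(grid_xy))
    for g, p0 in enumerate(grid_xy):
        r = np.linalg.norm(site_xy - p0, axis=1) + 1.0      # km; avoid divide-by-zero
        sig = peaks[:, None] / r[None, :]                   # assumed 1/r attenuation
        detected = (sig > thresholds[None, :]).sum(axis=1) >= min_sites
        de[g] = detected.mean()
    return de
```

Contouring the returned values over the grid gives the kind of detection-efficiency map described above, and (1 − de) times the flash density gives the corresponding undetected-event density.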
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, K.M.
1992-10-01
Sequential indicator simulation (SIS) is a geostatistical technique designed to aid in the characterization of uncertainty about the structure or behavior of natural systems. This report discusses a simulation experiment designed to study the quality of uncertainty bounds generated using SIS. The results indicate that, while SIS may produce reasonable uncertainty bounds in many situations, factors like the number and location of available sample data, the quality of variogram models produced by the user, and the characteristics of the geologic region to be modeled, can all have substantial effects on the accuracy and precision of estimated confidence limits. It is recommended that users of SIS conduct validation studies for the technique on their particular regions of interest before accepting the output uncertainty bounds.
Ramírez-Miquet, Evelio E.; Perchoux, Julien; Loubière, Karine; Tronche, Clément; Prat, Laurent; Sotolongo-Costa, Oscar
2016-01-01
Optical feedback interferometry (OFI) is a compact sensing technique recently applied to flow measurements in microchannels. We propose implementing OFI for the analysis of multiphase flows at the microscale, starting with the case of parallel flows of two immiscible fluids. The velocity profiles in each phase were measured and the interface location estimated for several operating conditions. To the authors' knowledge, this sensing technique is applied here for the first time to multiphase flows. Theoretical profiles derived from a model based on the Couette viscous flow approximation reproduce the experimental results fairly well. The sensing system and the analysis presented here provide a new tool for studying more complex interactions between immiscible fluids (such as liquid droplets flowing in a microchannel). PMID:27527178
The rush to drill for natural gas: a public health cautionary tale.
Finkel, Madelon L; Law, Adam
2011-05-01
Efforts to identify alternative sources of energy have focused on extracting natural gas from vast shale deposits. The Marcellus Shale, located in western New York, Pennsylvania, and Ohio, is estimated to contain enough natural gas to supply the United States for the next 45 years. New drilling technology-horizontal drilling and high-volume hydraulic fracturing of shale (fracking)-has made gas extraction much more economically feasible. However, this technique poses a threat to the environment and to the public's health. There is evidence that many of the chemicals used in fracking can damage the lungs, liver, kidneys, blood, and brain. We discuss the controversial technique of fracking and raise the issue of how to balance the need for energy with the protection of the public's health. PMID:21421959
Air quality mapping using GIS and economic evaluation of health impact for Mumbai City, India.
Kumar, Awkash; Gupta, Indrani; Brandt, Jørgen; Kumar, Rakesh; Dikshit, Anil Kumar; Patil, Rashmi S
2016-05-01
Mumbai, a highly populated city in India, has been selected for air quality mapping and assessment of health impact using monitored air quality data. Air quality monitoring networks in Mumbai are operated by the National Environmental Engineering Research Institute (NEERI), the Maharashtra Pollution Control Board (MPCB), and the Brihanmumbai Municipal Corporation (BMC). A monitoring station represents air quality at a particular location, whereas spatial variation is needed for air quality management. Here, air quality data monitored by NEERI and BMC were spatially interpolated using various built-in interpolation techniques of ArcGIS. Inverse distance weighting (IDW), kriging (spherical and Gaussian), and spline techniques were applied for spatial interpolation in this study. The interpolated results for the air pollutants sulfur dioxide (SO2), nitrogen dioxide (NO2) and suspended particulate matter (SPM) were compared with air quality data of MPCB in the same region. The comparison showed good agreement between values predicted using IDW and kriging and the observed data. Subsequently, a health impact assessment of a ward was carried out based on the total population of the ward and the air quality monitored within it. Finally, the health cost within a ward was estimated on the basis of the exposed population. This study helps to estimate the economic cost of health damage due to air pollution. Operating more air quality monitoring stations for measurement of air quality is highly resource intensive in terms of time and cost. Appropriate spatial interpolation techniques can be used to estimate concentrations where air quality monitoring stations are not available. Further, health impact assessment for the population of the city and estimation of the economic cost of health damage due to ambient air quality can help in making rational control strategies for environmental management. The total health cost for Mumbai city for the year 2012, with a population of 12.4 million, was estimated at USD 8000 million.
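Of the interpolators compared, IDW is the simplest to state; a minimal sketch (station coordinates and values stand in for whatever monitoring network is at hand):

```python
import numpy as np

def idw(xy_obs, z_obs, xy_grid, power=2.0):
    """Inverse distance weighting of station measurements onto grid points."""
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)          # a grid point on a station keeps that station's value
    w = 1.0 / d ** power
    return (w * z_obs[None, :]).sum(axis=1) / w.sum(axis=1)
```

Kriging replaces these fixed 1/d^p weights with weights derived from a fitted variogram, which is why the two methods can disagree where stations are sparse.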
Disaster debris estimation using high-resolution polarimetric stereo-SAR
NASA Astrophysics Data System (ADS)
Koyama, Christian N.; Gokon, Hideomi; Jimbo, Masaru; Koshimura, Shunichi; Sato, Motoyuki
2016-10-01
This paper addresses the problem of debris estimation, one of the most important initial challenges in the wake of a disaster like the Great East Japan Earthquake and Tsunami. Reasonable estimates of the debris have to be made available to decision makers as quickly as possible. Current approaches to obtaining this information are far from optimal, as they usually rely on manual interpretation of optical imagery. We have developed a novel approach for the estimation of tsunami debris pile heights and volumes for improved emergency response. The method is based on a stereo-synthetic aperture radar (stereo-SAR) approach for very high-resolution polarimetric SAR. An advanced gradient-based optical-flow estimation technique is applied for optimal image coregistration of the low-coherence non-interferometric data resulting from illumination from opposite directions and in different polarizations. By applying model-based decomposition of the coherency matrix, only the odd-bounce scattering contributions are used to optimize echo time computation. The method exclusively considers the relative height differences from the top of the piles to their base to achieve a very fine resolution in height estimation. To define the base, a reference point on non-debris-covered ground surface is located adjacent to the debris pile targets by exploiting the polarimetric scattering information. The proposed technique is validated using in situ data of real tsunami debris taken at a temporary debris management site in the tsunami-affected area near Sendai city, Japan. The estimated height error is smaller than 0.6 m RMSE. The good quality of the derived pile heights allows for a voxel-based estimation of debris volumes with an RMSE of 1099 m3. Advantages of the proposed method are fast computation time, and robust height and volume estimation of debris piles without the need for pre-event data or auxiliary information like DEMs, topographic maps or GCPs.
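Once the relative pile heights are on a raster, the voxel-based volume is a direct sum; a minimal sketch (names are ours; the height raster is assumed to be the output of the stereo-SAR step):

```python
import numpy as np

def debris_volume_m3(pile_height_m, cell_area_m2):
    """Sum per-cell pile heights (above the local base surface) times cell area."""
    return float(np.nansum(np.clip(pile_height_m, 0.0, None)) * cell_area_m2)
```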
NASA Astrophysics Data System (ADS)
Venäläinen, Ari; Laapas, Mikko; Pirinen, Pentti; Horttanainen, Matti; Hyvönen, Reijo; Lehtonen, Ilari; Junila, Päivi; Hou, Meiting; Peltola, Heli M.
2017-07-01
The bioeconomy has an increasing role to play in climate change mitigation and the sustainable development of national economies. In Finland, a forested country, over 50% of the current bioeconomy relies on the sustainable management and utilization of forest resources. Wind storms are a major risk to which forests are exposed, and high-spatial-resolution analysis of the most vulnerable locations can support risk assessment in forest management planning. In this paper, we examine the feasibility of the wind multiplier approach for downscaling of maximum wind speed, using the 20 m spatial resolution CORINE land-use dataset and high-resolution digital elevation data. A coarse-spatial-resolution estimate of the 10-year return level of maximum wind speed was obtained from the ERA-Interim reanalyzed data. Using a geospatial re-mapping technique, the data were downscaled to 26 meteorological station locations representing very diverse environments. Comparison shows that the downscaled 10-year return levels represent 66% of the observed variation among the stations examined. In addition, the spatial variation in the wind-multiplier-downscaled 10-year return level wind was compared with WAsP model-simulated wind. The heterogeneous test area was situated in northern Finland, and it was found that the major features of the spatial variation were similar, but in some locations there were relatively large differences. The results indicate that the wind multiplier method offers a pragmatic and computationally feasible tool for identifying, at high spatial resolution, those locations with the highest forest wind damage risks. It can also be used to provide the necessary wind climate information for wind damage risk model calculations, thus making it possible to estimate the probability of exceeding threshold wind speeds for wind damage and consequently the probability (and amount) of wind damage for certain forest stand configurations.
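The downscaling step itself is multiplicative; a minimal sketch (the multiplier values below are purely illustrative placeholders, not the study's calibrated ones):

```python
# illustrative roughness multipliers keyed by CORINE-style land-cover class
ROUGHNESS_MULT = {"open_water": 1.10, "agricultural": 1.00,
                  "mixed_forest": 0.85, "urban": 0.80}

def downscaled_return_wind(regional_ws, land_cover, topo_mult=1.0):
    """Local 10-yr return wind = regional value x roughness x topography multipliers."""
    return regional_ws * ROUGHNESS_MULT[land_cover] * topo_mult

print(downscaled_return_wind(18.0, "mixed_forest", topo_mult=1.15))  # m/s
```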
Satellite Based Soil Moisture Product Validation Using NOAA-CREST Ground and L-Band Observations
NASA Astrophysics Data System (ADS)
Norouzi, H.; Campo, C.; Temimi, M.; Lakhankar, T.; Khanbilvardi, R.
2015-12-01
Soil moisture content is among the most important physical parameters in hydrology, climate, and environmental studies. Many microwave-based satellite observations have been utilized to estimate this parameter. The Advanced Microwave Scanning Radiometer 2 (AMSR2) is one of several remote sensing instruments that collect daily information on land surface soil moisture. However, many factors such as ancillary data and vegetation scattering can affect the signal and the estimation. Therefore, this information needs to be validated against "ground-truth" observations. The NOAA Cooperative Remote Sensing Science and Technology (CREST) Center at the City University of New York has a site located at Millbrook, NY with several in situ soil moisture probes and an L-band radiometer similar to the Soil Moisture Active Passive (SMAP) instrument. This site is among the SMAP Cal/Val sites. Soil moisture was measured at seven different locations from 2012 to 2015, with Hydra probes used at six of these locations. This study utilizes the observations from the in situ data and the near-ground L-band radiometer (at 3 meters height) to validate and compare soil moisture estimates from AMSR2. Analysis of the measurements and AMSR2 indicated a weak correlation with the Hydra probes and a moderate correlation with the Cosmic-ray Soil Moisture Observing System (COSMOS) probes. Several differences, including that between the satellite pixel size and point measurements, can cause these discrepancies. Interpolation techniques are used to expand the point measurements from the six locations to the AMSR2 footprint. Finally, the effect of penetration depth on the microwave signal and inconsistencies with other ancillary data such as skin temperature are investigated to provide a better understanding of the analysis. The results show that the retrieval algorithm of AMSR2 is appropriate under certain circumstances. This validation algorithm and similar studies will be conducted for the SMAP mission. Keywords: Remote Sensing, Soil Moisture, AMSR2, SMAP, L-Band.
Direct and indirect genetic and fine-scale location effects on breeding date in song sparrows.
Germain, Ryan R; Wolak, Matthew E; Arcese, Peter; Losdat, Sylvain; Reid, Jane M
2016-11-01
Quantifying direct and indirect genetic effects of interacting females and males on variation in jointly expressed life-history traits is central to predicting microevolutionary dynamics. However, accurately estimating sex-specific additive genetic variances in such traits remains difficult in wild populations, especially if related individuals inhabit similar fine-scale environments. Breeding date is a key life-history trait that responds to environmental phenology and mediates individual and population responses to environmental change. However, no studies have estimated female (direct) and male (indirect) additive genetic and inbreeding effects on breeding date, and estimated the cross-sex genetic correlation, while simultaneously accounting for fine-scale environmental effects of breeding locations, impeding prediction of microevolutionary dynamics. We fitted animal models to 38 years of song sparrow (Melospiza melodia) phenology and pedigree data to estimate sex-specific additive genetic variances in breeding date, and the cross-sex genetic correlation, thereby estimating the total additive genetic variance while simultaneously estimating sex-specific inbreeding depression. We further fitted three forms of spatial animal model to explicitly estimate variance in breeding date attributable to breeding location, overlap among breeding locations and spatial autocorrelation. We thereby quantified fine-scale location variances in breeding date and quantified the degree to which estimating such variances affected the estimated additive genetic variances. The non-spatial animal model estimated nonzero female and male additive genetic variances in breeding date (sex-specific heritabilities: 0·07 and 0·02, respectively) and a strong, positive cross-sex genetic correlation (0·99), creating substantial total additive genetic variance (0·18). Breeding date varied with female, but not male inbreeding coefficient, revealing direct, but not indirect, inbreeding depression. All three spatial animal models estimated small location variance in breeding date, but because relatedness and breeding location were virtually uncorrelated, modelling location variance did not alter the estimated additive genetic variances. Our results show that sex-specific additive genetic effects on breeding date can be strongly positively correlated, which would affect any predicted rates of microevolutionary change in response to sexually antagonistic or congruent selection. Further, we show that inbreeding effects on breeding date can also be sex specific and that genetic effects can exceed phenotypic variation stemming from fine-scale location-based variation within a wild population. © 2016 The Authors. Journal of Animal Ecology © 2016 British Ecological Society.
NASA Astrophysics Data System (ADS)
Wellen, Christopher; Arhonditsis, George B.; Labencki, Tanya; Boyd, Duncan
2012-10-01
Regression-type, hybrid empirical/process-based models (e.g., SPARROW, PolFlow) have assumed a prominent role in efforts to estimate the sources and transport of nutrient pollution at river basin scales. However, almost no attempts have been made to explicitly accommodate interannual nutrient loading variability in their structure, despite empirical and theoretical evidence indicating that the associated source/sink processes are quite variable at annual timescales. In this study, we present two methodological approaches to accommodate interannual variability with the Spatially Referenced Regressions on Watershed attributes (SPARROW) nonlinear regression model. The first strategy uses the SPARROW model to estimate a static baseline load and climatic variables (e.g., precipitation) to drive the interannual variability. The second approach allows the source/sink processes within the SPARROW model to vary at annual timescales using dynamic parameter estimation techniques akin to those used in dynamic linear models. Model parameterization is founded upon Bayesian inference techniques that explicitly consider calibration data and model uncertainty. Our case study is the Hamilton Harbor watershed, a mixed agricultural and urban residential area located at the western end of Lake Ontario, Canada. Our analysis suggests that dynamic parameter estimation is the more parsimonious of the two strategies tested and can offer insights into the temporal structural changes associated with watershed functioning. Consistent with empirical and theoretical work, model estimated annual in-stream attenuation rates varied inversely with annual discharge. Estimated phosphorus source areas were concentrated near the receiving water body during years of high in-stream attenuation and dispersed along the main stems of the streams during years of low attenuation, suggesting that nutrient source areas are subject to interannual variability.
Estimating snow leopard population abundance using photography and capture-recapture techniques
Jackson, R.M.; Roe, J.D.; Wangchuk, R.; Hunter, D.O.
2006-01-01
Conservation and management of snow leopards (Uncia uncia) has largely relied on anecdotal evidence and presence-absence data due to their cryptic nature and the difficult terrain they inhabit. These methods generally lack the scientific rigor necessary to accurately estimate population size and monitor trends. We evaluated the use of photography in capture-mark-recapture (CMR) techniques for estimating snow leopard population abundance and density within Hemis National Park, Ladakh, India. We placed infrared camera traps along actively used travel paths, scent-sprayed rocks, and scrape sites within 16- to 30-km2 sampling grids in successive winters during January and March of 2003 and 2004. We used head-on, oblique, and side-view camera configurations to obtain snow leopard photographs at varying body orientations. We calculated snow leopard abundance estimates using the program CAPTURE. We obtained a total of 66 and 49 snow leopard captures resulting in 8.91 and 5.63 individuals per 100 trap-nights during 2003 and 2004, respectively. We identified snow leopards based on the distinct pelage patterns located primarily on the forelimbs, flanks, and dorsal surface of the tail. Capture probabilities ranged from 0.33 to 0.67. Density estimates ranged from 8.49 (SE = 0.22) individuals per 100 km2 in 2003 to 4.45 (SE = 0.16) in 2004. We believe the density disparity between years is attributable to different trap density and placement rather than to an actual decline in population size. Our results suggest that photographic capture-mark-recapture sampling may be a useful tool for monitoring demographic patterns. However, we believe a larger sample size would be necessary for generating a statistically robust estimate of population density and abundance based on CMR models.
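Program CAPTURE fits a family of closed-population models; the underlying CMR idea can be illustrated with the two-sample Chapman estimator (a sketch of the principle only, not the models actually used in the study):

```python
def chapman_estimate(n1, n2, m2):
    """Bias-corrected Lincoln-Petersen abundance estimate and its SE.

    n1, n2: individuals photo-identified on occasions 1 and 2; m2: recaptures.
    """
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)
           / ((m2 + 1) ** 2 * (m2 + 2)))
    return n_hat, var ** 0.5
```

Individual identification from pelage patterns plays the role of the "mark", so no animal needs to be physically captured.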
Into the Past: A Step Towards a Robust Kimberley Rock Art Chronology
Ross, June; Westaway, Kira; Travers, Meg; Morwood, Michael J; Hayward, John
2016-01-01
The recent establishment of a minimum age estimate of 39.9 ka for the origin of rock art in Sulawesi has challenged claims that Western Europe was the locus for the production of the world’s earliest art assemblages. Tantalising excavated evidence found across northern Australian suggests that Australia too contains a wealth of ancient art. However, the dating of rock art itself remains the greatest obstacle to be addressed if the significance of Australian assemblages are to be recognised on the world stage. A recent archaeological project in the northwest Kimberley trialled three dating techniques in order to establish chronological markers for the proposed, regional, relative stylistic sequence. Applications using optically-stimulated luminescence (OSL) provided nine minimum age estimates for fossilised mudwasp nests overlying a range of rock art styles, while Accelerator Mass Spectrometry radiocarbon (AMS 14C) results provided an additional four. Results confirm that at least one phase of the northwest Kimberley rock art assemblage is Pleistocene in origin. A complete motif located on the ceiling of a rockshelter returned a minimum age estimate of 16 ± 1 ka. Further, our results demonstrate the inherent problems in relying solely on stylistic classifications to order rock art assemblages into temporal sequences. An earlier than expected minimum age estimate for one style and a maximum age estimate for another together illustrate that the Holocene Kimberley rock art sequence is likely to be far more complex than generally accepted with different styles produced contemporaneously well into the last few millennia. It is evident that reliance on techniques that produce minimum age estimates means that many more dating programs will need to be undertaken before the stylistic sequence can be securely dated. PMID:27579865
Arsenic in North Carolina: Public Health Implications
Sanders, Alison P.; Messier, Kyle P.; Shehee, Mina; Rudo, Kenneth; Serre, Marc L.; Fry, Rebecca C.
2012-01-01
Arsenic is a known human carcinogen and relevant environmental contaminant in drinking water systems. We set out to comprehensively examine statewide arsenic trends and identify areas of public health concern. Specifically, arsenic trends in North Carolina private wells were evaluated over an eleven-year period using the North Carolina Department of Health and Human Services (NCDHHS) database for private domestic well waters. We geocoded over 63,000 domestic well measurements by applying a novel geocoding algorithm and error validation scheme. Arsenic measurements and geographical coordinates for database entries were mapped using Geographic Information System (GIS) techniques. Furthermore, we employed a Bayesian Maximum Entropy (BME) geostatistical framework, which accounts for geocoding error to better estimate arsenic values across the state and identify trends for unmonitored locations. Of the approximately 63,000 monitored wells, 7,712 showed detectable arsenic concentrations that ranged between 1 and 806 μg/L. Additionally, 1,436 well samples exceeded the EPA drinking water standard. We reveal counties of concern and demonstrate a historical pattern of elevated arsenic in some counties, particularly those located along the Carolina terrane (Carolina slate belt). We analyzed these data in the context of populations using private well water and identify counties for targeted monitoring, such as Stanly and Union Counties. By spatiotemporally mapping these data, our BME estimate revealed arsenic trends at unmonitored locations within counties and better predicted well concentrations when compared to the classical kriging method. This study reveals relevant information on the location of arsenic-contaminated private domestic wells in North Carolina and indicates potential areas at increased risk for adverse health outcomes. PMID:21982028
Estimating watershed level nonagricultural pesticide use from golf courses using geospatial methods
Fox, G.A.; Thelin, G.P.; Sabbagh, G.J.; Fuchs, J.W.; Kelly, I.D.
2008-01-01
Limited information exists on pesticide use for nonagricultural purposes, making it difficult to estimate pesticide loadings from nonagricultural sources to surface water and to conduct environmental risk assessments. A method was developed to estimate the amount of pesticide use on recreational turf grasses, specifically golf course turf grasses, for watersheds located throughout the conterminous United States (U.S.). The approach estimates pesticide use: (1) based on the area of recreational turf grasses (used as a surrogate for turf associated with golf courses) within the watershed, which was derived from maps of land cover, and (2) from data on the location and average treatable area of golf courses. The area of golf course turf grasses determined from these two methods was used to calculate the percentage of each watershed planted in golf course turf grass (percent crop area, or PCA). Turf-grass PCAs derived from the two methods were used with recommended application rates provided on pesticide labels to estimate total pesticide use on recreational turf within 1,606 watersheds associated with surface-water sources of drinking water. These pesticide use estimates made from label rates and PCAs were compared to use estimates from industry sales data on the amount of each pesticide sold for use within the watershed. The PCAs derived from the land-cover data had an average value of 0.4% of a watershed, with a minimum of 0.01% and a maximum of 9.8%, whereas the PCA values based on the number of golf courses in a watershed had an average of 0.3% of a watershed, with a minimum of <0.01% and a maximum of 14.2%. Both the land-cover method and the number-of-golf-courses method produced similar PCA distributions, suggesting that either technique may be used to provide a PCA estimate for recreational turf. The average and maximum PCAs generally correlated to watershed size, with the highest PCAs estimated for small watersheds. Using watershed-specific PCAs combined with label rates resulted in overestimation of pesticide use by more than two orders of magnitude compared to estimates from sales data. © 2008 American Water Resources Association.
Langdon, Jonathan H; Elegbe, Etana; Gonzalez, Raul S; Osapoetra, Laurentius; Ford, Tristan; McAleavey, Stephen A
2017-11-01
The clinical use of elastography for monitoring fibrosis progression is challenged by the subtle changes in liver stiffness associated with early-stage fibrosis and the comparatively large variance in stiffness estimates provided by elastography. Single-tracking-location (STL) shear wave elasticity imaging (SWEI) is an ultrasound elastography technique previously found to provide improved estimate precision compared with multiple-tracking-location (MTL) SWEI. Because of the improved precision, it is reasonable to expect that STL-SWEI would provide improved ability to differentiate liver fibrosis stage compared with MTL-SWEI. However, this expectation has not been previously challenged rigorously. In this work, the performance of STL- and MTL-SWEI in the setting of a rat model of liver fibrosis is characterized, and the advantages of STL-SWEI in staging fibrosis are explored. The purpose of this study was to determine what advantages, if any, arise from using STL-SWEI instead of MTL-SWEI in the characterization of fibrotic liver. Thus, the ability of STL-SWEI to differentiate livers at various METAVIR fibrosis scores, for ex vivo postmortem measurements, is explored. In addition, we examined the effect of the common confounding factor of fluid versus solid boundary conditions in SWEI experiments. Sprague-Dawley rats were treated with carbon tetrachloride over several weeks to produce liver disease of varying severity. STL and MTL stiffness measurements were performed ex vivo and compared with the METAVIR scores from histological analysis and the duration of treatment. A strong association was observed between liver stiffness and weeks of treatment with the liver toxin carbon tetrachloride. Direct comparison of STL- and MTL-SWEI measurements revealed no significant difference in ability to differentiate fibrosis stages based on SWEI mean values. However, image interquartile range was greatly improved in the case of STL-SWEI, compared with MTL-SWEI, at small beam spacing. Copyright © 2017 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Estimation of distributed Fermat-point location for wireless sensor networking.
Huang, Po-Hsian; Chen, Jiann-Liang; Larosa, Yanuarius Teofilus; Chiang, Tsui-Lien
2011-01-01
This work presents a localization scheme for use in wireless sensor networks (WSNs) that is based on a proposed connectivity-based RF localization strategy called the distributed Fermat-point location estimation algorithm (DFPLE). DFPLE estimates a sensor's location from the triangular area formed by the intersections of the ranges of three neighboring beacon nodes. The Fermat point, the point that minimizes the total distance to the triangle's three vertices, is then computed, and the estimated location area is refined around it to minimize the error in estimating sensor node locations. DFPLE solves the problems of large errors and poor performance encountered by localization schemes that are based on a bounding box algorithm. Performance analysis of a 200-node development environment reveals that, when the number of sensor nodes is below 150, the mean error decreases rapidly as the node density increases, and when the number of sensor nodes exceeds 170, the mean error remains below 1% as the node density increases. Second, when the number of beacon nodes is less than 60, normal nodes lack sufficient beacon nodes to enable their locations to be estimated. However, the mean error changes only slightly as the number of beacon nodes increases above 60. Simulation results revealed that the proposed algorithm for estimating sensor positions is more accurate than existing algorithms, and improves upon conventional bounding box strategies.
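The abstract does not give DFPLE's internal equations, but the Fermat point it relies on, the point minimizing the summed distance to a triangle's three vertices, can be computed with the standard Weiszfeld iteration. The sketch below is a generic implementation under that assumption; the triangle coordinates are hypothetical.

```python
import numpy as np

def fermat_point(vertices, iters=200, eps=1e-12):
    """Geometric median (Fermat point) of a triangle via Weiszfeld iteration:
    the point minimizing the summed distance to the three vertices."""
    pts = np.asarray(vertices, dtype=float)
    x = pts.mean(axis=0)                      # start from the centroid
    for _ in range(iters):
        d = np.linalg.norm(pts - x, axis=1)
        if np.any(d < eps):                   # iterate landed on a vertex
            return x
        w = 1.0 / d
        x_new = (w[:, None] * pts).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < eps:
            return x_new
        x = x_new
    return x

# Hypothetical triangle formed by intersections of three beacon ranges.
print(fermat_point([(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)]))
```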
Trommer, J.T.; Loper, J.E.; Hammett, K.M.; Bowman, Georgia
1996-01-01
Hydrologists use several traditional techniques for estimating peak discharges and runoff volumes from ungaged watersheds. However, applying these techniques to watersheds in west-central Florida requires that empirical relationships be extrapolated beyond tested ranges. As a result there is some uncertainty as to their accuracy. Sixty-six storms in 15 west-central Florida watersheds were modeled using (1) the rational method, (2) the U.S. Geological Survey regional regression equations, (3) the Natural Resources Conservation Service (formerly the Soil Conservation Service) TR-20 model, (4) the Army Corps of Engineers HEC-1 model, and (5) the Environmental Protection Agency SWMM model. The watersheds ranged between fully developed urban and undeveloped natural watersheds. Peak discharges and runoff volumes were estimated using standard or recommended methods for determining input parameters. All model runs were uncalibrated and the selection of input parameters was not influenced by observed data. The rational method, used only to calculate peak discharges, overestimated 45 storms, underestimated 20 storms, and estimated the same discharge for 1 storm. The mean estimation error for all storms indicates the method overestimates peak discharges. Estimation errors were generally smaller in the urban watersheds and larger in the natural watersheds. The U.S. Geological Survey regression equations provide peak discharges for storms of specific recurrence intervals; therefore, direct comparison with observed data was limited to sixteen observed storms that had precipitation equivalent to specific recurrence intervals. The mean estimation error for all storms indicates the method overestimates both peak discharges and runoff volumes. Estimation errors were smallest for the larger natural watersheds in Sarasota County, and largest for the small watersheds located in the eastern part of the study area. The Natural Resources Conservation Service TR-20 model overestimated peak discharges for 45 storms and underestimated 21 storms, and overestimated runoff volumes for 44 storms and underestimated 22 storms. The mean estimation error for all storms modeled indicates that the model overestimates peak discharges and runoff volumes. The smaller estimation errors in both peak discharges and runoff volumes were for storms occurring in the urban watersheds, and the larger errors were for storms occurring in the natural watersheds. The HEC-1 model overestimated peak discharge rates for 55 storms and underestimated 11 storms. Runoff volumes were overestimated for 44 storms and underestimated for 22 storms using the Army Corps of Engineers HEC-1 model. The mean estimation error for all the storms modeled indicates that the model overestimates peak discharge rates and runoff volumes. Generally, the smaller estimation errors in peak discharges were for storms occurring in the urban watersheds, and the larger errors were for storms occurring in the natural watersheds. Estimation errors in runoff volumes, however, were smallest for the 3 natural watersheds located in the southernmost part of Sarasota County. The Environmental Protection Agency Storm Water Management model produced similar peak discharges and runoff volumes when using both the Green-Ampt and Horton infiltration methods. Estimated peak discharge and runoff volume data calculated with the Horton method were only slightly higher than those calculated with the Green-Ampt method.
The mean estimation error for all the storms modeled indicates the model using the Green-Ampt infiltration method overestimates peak discharges and slightly underestimates runoff volumes. Using the Horton infiltration method, the model overestimates both peak discharges and runoff volumes. The smaller estimation errors in both peak discharges and runoff volumes were for storms occurring in the five natural watersheds in Sarasota County with the least amount of impervious cover and the lowest slopes. The largest er
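Of the five techniques compared above, the rational method has the simplest closed form, Q = C·i·A. A minimal sketch in conventional US units (i in inches per hour, A in acres, Q in cubic feet per second) follows; the runoff coefficient and watershed values are hypothetical.

```python
def rational_method_peak_discharge(c, i_in_per_hr, area_acres):
    """Rational method: Q = C * i * A.
    With i in inches/hour and A in acres, Q is in cubic feet per second
    (the unit conversion factor is ~1.008 and is customarily dropped)."""
    return c * i_in_per_hr * area_acres

# Hypothetical urban watershed: C = 0.7, 2 in/hr design storm, 150 acres.
print(f"Qp = {rational_method_peak_discharge(0.7, 2.0, 150.0):.0f} cfs")
```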
NASA Astrophysics Data System (ADS)
Saccorotti, G.; Nisii, V.; Del Pezzo, E.
2008-07-01
Long-Period (LP) and Very-Long-Period (VLP) signals are the most characteristic seismic signature of volcano dynamics, and provide important information about the physical processes occurring in magmatic and hydrothermal systems. These events are usually characterized by sharp spectral peaks, which may span several frequency decades, by emergent onsets, and by a lack of clear S-wave arrivals. These two latter features make both signal detection and location a challenging task. In this paper, we propose a processing procedure based on the Continuous Wavelet Transform of multichannel, broad-band data to simultaneously solve the signal detection and location problems. Our method consists of two steps. First, we apply a frequency-dependent threshold to the estimates of the array-averaged wavelet coherence (WCO) in order to locate the time-frequency regions spanned by coherent arrivals. For these data, we then use the time-series of the complex wavelet coefficients for deriving the elements of the spatial Cross-Spectral Matrix. From the eigenstructure of this matrix, we eventually estimate the signals' kinematic parameters using the MUltiple SIgnal Characterization (MUSIC) algorithm. The whole procedure greatly facilitates the detection and location of weak, broad-band signals, in turn avoiding the time-frequency resolution trade-off and frequency leakage effects which affect conventional covariance estimates based upon the Windowed Fourier Transform. The method is applied to explosion signals recorded at Stromboli volcano by either a short-period, small-aperture antenna or a large-aperture, broad-band network. The LP (0.2 < T < 2 s) components of the explosive signals are analysed using data from the small-aperture array and under the plane-wave assumption. In this manner, we obtain a precise time- and frequency-localization of the directional properties for waves impinging at the array. We then extend the wavefield decomposition method using a spherical wave front model, and analyse the VLP components (T > 2 s) of the explosion recordings from the broad-band network. Source locations obtained this way are fully compatible with those retrieved from application of more traditional (and computationally expensive) time-domain techniques, such as the Radial Semblance method.
Use of USLE/GIS methodology for predicting soil loss in a semiarid agricultural watershed.
Erdogan, Emrah H; Erpul, Günay; Bayramin, Ilhami
2007-08-01
The Universal Soil Loss Equation (USLE) is an erosion model to estimate the average soil loss that would generally result from splash, sheet, and rill erosion on agricultural plots. Recently, use of the USLE has been extended as a useful tool for predicting soil losses and planning control practices in agricultural watersheds through the effective integration of GIS-based procedures that estimate the factor values on a grid-cell basis. This study was performed in the Kazan Watershed, located in central Anatolia, Turkey, to predict soil erosion risk by the USLE/GIS methodology for planning conservation measures at the site. Rainfall erosivity (R), soil erodibility (K), and cover management factor (C) values of the model were calculated from the erosivity map, soil map, and land use map of Turkey, respectively. R values were site-specifically corrected using the DEM and climatic data. The topographical and hydrological effects on soil loss were characterized by the LS factor, evaluated with the flow accumulation tool using the DEM and watershed delineation techniques. From the resulting soil loss map of the watershed, the magnitude of soil erosion was estimated for the different soil units and land uses, and the most erosion-prone areas, where irreversible soil losses occur, were reliably located in the Kazan watershed. This could be very useful for deciding on restoration practices to control soil erosion at the most severely affected sites.
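The USLE itself is the per-cell product A = R·K·LS·C·P. The sketch below applies it on a small hypothetical raster standing in for the erosivity, erodibility, topographic, cover, and practice layers that the study derived by GIS; all grid values are illustrative.

```python
import numpy as np

# Grid-cell USLE: A = R * K * LS * C * P (soil loss per cell, t/ha/yr).
# The factor rasters below are small hypothetical arrays, not study data.
R  = np.full((3, 3), 600.0)                     # rainfall erosivity
K  = np.array([[0.2, 0.3, 0.3],
               [0.2, 0.4, 0.3],
               [0.1, 0.2, 0.2]])                # soil erodibility
LS = np.array([[0.5, 1.2, 2.0],
               [0.4, 1.0, 1.8],
               [0.3, 0.8, 1.5]])                # slope length-steepness
C  = np.full((3, 3), 0.25)                      # cover management
P  = np.ones((3, 3))                            # support practice (none)

A = R * K * LS * C * P                          # per-cell soil loss
print(A.round(1))
print(f"watershed mean = {A.mean():.1f} t/ha/yr")
```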
Accurate characterisation of hole size and location by projected fringe profilometry
NASA Astrophysics Data System (ADS)
Wu, Yuxiang; Dantanarayana, Harshana G.; Yue, Huimin; Huntley, Jonathan M.
2018-06-01
The ability to accurately estimate the location and geometry of holes is often required in the field of quality control and automated assembly. Projected fringe profilometry is a potentially attractive technique on account of being non-contacting, of lower cost, and orders of magnitude faster than the traditional coordinate measuring machine. However, we demonstrate in this paper that fringe projection is susceptible to significant (hundreds of µm) measurement artefacts in the neighbourhood of hole edges, which give rise to errors of a similar magnitude in the estimated hole geometry. A mechanism for the phenomenon is identified based on the finite size of the imaging system’s point spread function and the resulting bias produced near to sample discontinuities in geometry and reflectivity. A mathematical model is proposed, from which a post-processing compensation algorithm is developed to suppress such errors around the holes. The algorithm includes a robust and accurate sub-pixel edge detection method based on a Fourier descriptor of the hole contour. The proposed algorithm was found to reduce significantly the measurement artefacts near the hole edges. As a result, the errors in estimated hole radius were reduced by up to one order of magnitude, to a few tens of µm for hole radii in the range 2–15 mm, compared to those from the uncompensated measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carletta, Nicholas D.; Mullendore, Gretchen L.; Starzec, Mariusz
Convective mass transport is the transport of mass from near the surface up to the upper troposphere and lower stratosphere (UTLS) by a deep convective updraft. This transport can alter the chemical makeup and water vapor balance of the UTLS, which affects cloud formation and the radiative properties of the atmosphere. It is therefore important to understand the exact altitudes at which mass is detrained from convection. The purpose of this study was to improve upon previously published methodologies for estimating the level of maximum detrainment (LMD) within convection using data from a single ground-based radar. Four methods were used to identify the LMD and validated against dual-Doppler-derived vertical mass divergence fields for six cases with a variety of storm types. The best method for locating the LMD was determined to be the method that used a reflectivity texture technique to determine convective cores and a multi-layer echo identification to determine anvil locations. Although an improvement over previously published methods, the new methodology still produced unreliable results in certain regimes. The methodology worked best when applied to mature updrafts, as the anvil needs time to grow to a detectable size. Thus, radar reflectivity is found to be valuable in estimating the LMD, but storm maturity must also be considered for best results.
Bailey, E A; Dutton, A W; Mattingly, M; Devasia, S; Roemer, R B
1998-01-01
Reduced-order modelling techniques can make important contributions in the control and state estimation of large systems. In hyperthermia, reduced-order modelling can provide a useful tool by which a large thermal model can be reduced to the most significant subset of its full-order modes, making real-time control and estimation possible. Two such reduction methods, one based on modal decomposition and the other on balanced realization, are compared in the context of simulated hyperthermia heat transfer problems. The results show that the modal decomposition reduction method has three significant advantages over that of balanced realization. First, modal decomposition reduced models result in less error, when compared to the full-order model, than balanced realization reduced models of similar order in problems with low or moderate advective heat transfer. Second, because the balanced realization based methods require a priori knowledge of the sensor and actuator placements, the reduced-order model is not robust to changes in sensor or actuator locations, a limitation not present in modal decomposition. Third, the modal decomposition transformation is less demanding computationally. On the other hand, in thermal problems dominated by advective heat transfer, numerical instabilities make modal decomposition based reduction problematic. Modal decomposition methods are therefore recommended for reduction of models in which advection is not dominant and research continues into methods to render balanced realization based reduction more suitable for real-time clinical hyperthermia control and estimation.
Automatic location of L/H transition times for physical studies with a large statistical basis
NASA Astrophysics Data System (ADS)
González, S.; Vega, J.; Murari, A.; Pereira, A.; Dormido-Canto, S.; Ramírez, J. M.; contributors, JET-EFDA
2012-06-01
Completely automatic techniques to estimate and validate L/H transition times can be essential in L/H transition analyses. The generation of databases with hundreds of transition times and without human intervention is an important step to accomplish (a) L/H transition physics analysis, (b) validation of L/H theoretical models and (c) creation of L/H scaling laws. An entirely unattended methodology is presented in this paper to build large databases of transition times in JET using time series. The proposed technique has been applied to a dataset of 551 JET discharges between campaigns C21 and C26. For discharges that show a clear signature in the time series, a prediction is made through the localizing properties of the wavelet transform; this prediction is accurate, with an uncertainty interval of ±3.2 ms. Discharges without a clear pattern in the time series are handled by an L/H mode classifier built from discharges with a clear signature. In this case, the estimation error shows a distribution with mean and standard deviation of 27.9 ms and 37.62 ms, respectively. Two different regression methods have been applied to the measurements acquired at the transition times identified by the automatic system. The obtained scaling laws for the threshold power are not significantly different from those obtained using the data at the transition times determined manually by the experts. The automatic methods allow performing physical studies with a large number of discharges, showing, for example, that there are statistically different types of transitions characterized by different scaling laws.
LaRue, Michelle A.; Stapleton, Seth P.; Porter, Claire; Atkinson, Stephen N.; Atwood, Todd C.; Dyck, Markus; Lecomte, Nicolas
2015-01-01
High-resolution satellite imagery is a promising tool for providing coarse information about polar species abundance and distribution, but current applications are limited. With polar bears (Ursus maritimus), the technique has only proven effective on landscapes with little topographic relief that are devoid of snow and ice, and time-consuming manual review of imagery is required to identify bears. Here, we evaluated mechanisms to further develop methods for satellite imagery by examining data from Rowley Island, Canada. We attempted to automate and expedite detection via a supervised spectral classification and image differencing to expedite image review. We also assessed what proportion of a region should be sampled to obtain reliable estimates of density and abundance. Although the spectral signature of polar bears differed from nontarget objects, these differences were insufficient to yield useful results via a supervised classification process. Conversely, automated image differencing—or subtracting one image from another—correctly identified nearly 90% of polar bear locations. This technique, however, also yielded false positives, suggesting that manual review will still be required to confirm polar bear locations. On Rowley Island, bear distribution approximated a Poisson distribution across a range of plot sizes, and resampling suggests that sampling >50% of the site facilitates reliable estimation of density (CV <15%). Satellite imagery may be an effective monitoring tool in certain areas, but large-scale applications remain limited because of the challenges in automation and the limited environments in which the method can be effectively applied. Improvements in resolution may expand opportunities for its future uses.
A reassessment of ground water flow conditions and specific yield at Borden and Cape Cod
Grimestad, Garry
2002-01-01
Recent widely accepted findings respecting the origin and nature of specific yield in unconfined aquifers rely heavily on water level changes observed during two pumping tests, one conducted at Borden, Ontario, Canada, and the other at Cape Cod, Massachusetts. The drawdown patterns observed during those tests have been taken as proof that unconfined specific yield estimates obtained from long-duration pumping tests should approach the laboratory-estimated effective porosity of representative aquifer formation samples. However, both of the original test reports included direct or referential descriptions of potential supplemental sources of pumped water that would have introduced intractable complications and errors into straightforward interpretations of the drawdown observations if actually present. Searches for evidence of previously neglected sources were performed by screening the original drawdown observations from both locations for signs of diagnostic skewing that should be present only if some of the extracted water was derived from sources other than main aquifer storage. The data screening was performed using error-guided computer assisted fitting techniques, capable of accurately sensing and simulating the effects of a wide range of non-traditional and external sources. The drawdown curves from both tests proved to be inconsistent with traditional single-source pumped aquifer models but consistent with site-specific alternatives that included significant contributions of water from external sources. The corrected pumping responses shared several important features. Unsaturated drainage appears to have ceased effectively at both locations within the first day of pumping, and estimates of specific yield stabilized at levels considerably smaller than the corresponding laboratory-measured or probable effective porosity. Separate sequential analyses of progressively later field observations gave stable and nearly constant specific yield estimates for each location, with no evidence from either test that more prolonged pumping would have induced substantially greater levels of unconfined specific yield.
Site Transfer Functions of Three-Component Ground Motion in Western Turkey
NASA Astrophysics Data System (ADS)
Ozgur Kurtulmus, Tevfik; Akyol, Nihal; Camyildiz, Murat; Gungor, Talip
2015-04-01
Because of the high seismicity accommodating crustal deformation and the deep graben structures on which the large urbanized and industrialized cities of western Turkey are built, site-specific seismic hazard assessment is especially important in the region. Characterizing source, site and path effects is important both for assessing the seismic hazard in a specific region and for generating or renewing building codes. In this study, we evaluated three-component recordings of micro- and moderate-size earthquakes with local magnitudes ranging between 2.0 and 5.6. This dataset was used for site transfer function estimation, utilizing two spectral ratio approaches, the Standard Spectral Ratio (SSR) and the Horizontal-to-Vertical Spectral Ratio (HVSR), together with a Generalized Inversion Technique (GIT), to highlight the site-specific seismic hazard potential of the deep basin structures of the region. The obtained transfer functions revealed that sites located near the basin edges are characterized by broader HVSR curves. Broad HVSR peaks could be attributed to the complexity of wave propagation related to significant 2D/3D velocity variations at the sediment-bedrock interface near the basin edges. Comparison of HVSR and SSR estimates for the sites located on the grabens showed that SSR estimates give larger values at lower frequencies, which could be attributed to lateral variations in regional velocity and attenuation values caused by basin geometry and edge effects. However, large amplitude values of vertical-component GIT site transfer functions were observed at varying frequency ranges for some of the stations. These results imply that the vertical component of ground motion is not amplification-free. Contamination of HVSR site transfer function estimates at different frequency bands could be related to complexities in the wave field caused by deep or shallow heterogeneities in the region, such as differences in basin geometries, fracturing, and fluid saturation along different propagation paths. The results also show that, even if a site is located on a horst, the presence of weathered zones near the surface can cause moderate frequency-dependent site effects.
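As a minimal sketch of the HVSR computation named above, the function below divides the geometric mean of the two smoothed horizontal amplitude spectra by the vertical spectrum. The records are synthetic noise, and details such as windowing and smoothing follow common practice rather than this study's exact processing.

```python
import numpy as np

def hvsr(north, east, vert, fs, smooth=9):
    """Horizontal-to-Vertical Spectral Ratio for one three-component record:
    geometric mean of the two horizontal amplitude spectra divided by the
    vertical amplitude spectrum, with simple moving-average smoothing."""
    def amp(x):
        spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
        kernel = np.ones(smooth) / smooth
        return np.convolve(spec, kernel, mode="same")
    h = np.sqrt(amp(north) * amp(east))
    v = amp(vert)
    freqs = np.fft.rfftfreq(len(north), d=1.0 / fs)
    return freqs, h / v

# Hypothetical three-component noise records, 100 Hz sampling.
rng = np.random.default_rng(0)
n, e, z = (rng.standard_normal(4096) for _ in range(3))
f, ratio = hvsr(n, e, z, fs=100.0)
print(f"peak HVSR {ratio[1:].max():.2f} near {f[1:][ratio[1:].argmax()]:.2f} Hz")
```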
Thompson, Nicola D; Edwards, Jonathan R; Bamberg, Wendy; Beldavs, Zintars G; Dumyati, Ghinwa; Godine, Deborah; Maloney, Meghan; Kainer, Marion; Ray, Susan; Thompson, Deborah; Wilson, Lucy; Magill, Shelley S
2013-03-01
To evaluate the accuracy of weekly sampling of central line-associated bloodstream infection (CLABSI) denominator data for estimating central line-days (CLDs), we obtained CLABSI denominator logs showing daily counts of patient-days and CLD for 6-12 consecutive months from participants, along with CLABSI numerators and facility and location characteristics, from the National Healthcare Safety Network (NHSN). The study used a convenience sample of 119 inpatient locations in 63 acute care facilities within 9 states participating in the Emerging Infections Program. Actual CLD and estimated CLD obtained from sampling denominator data on all single-day and 2-day (day-pair) samples were compared by assessing the distributions of the CLD percentage error. Facility and location characteristics associated with increased precision of estimated CLD were assessed. The impact of using estimated CLD to calculate CLABSI rates was evaluated by measuring the change in CLABSI decile ranking. The distribution of CLD percentage error varied by the day and number of days sampled. On average, day-pair samples provided more accurate estimates than did single-day samples. For several day-pair samples, approximately 90% of locations had a CLD percentage error of less than or equal to ±5%. A lower number of CLD per month was most significantly associated with poor precision in estimated CLD. Most locations experienced no change in CLABSI decile ranking, and no location's CLABSI ranking changed by more than 2 deciles. Sampling to obtain estimated CLD is a valid alternative to daily data collection for a large proportion of locations. Development of a sampling guideline for NHSN users is underway.
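The estimation step itself is simple scaling: the mean daily CLD over the sampled days multiplied by the number of days in the month. A sketch with hypothetical daily counts and one plausible day-pair sample (two fixed weekdays each week) follows; the study's exact sampling design may differ.

```python
# Sketch of estimating monthly central line-days (CLD) from sampled days:
# mean daily CLD over the sampled days, scaled to the full month.
# Daily counts below are hypothetical.

daily_cld = [14, 15, 15, 16, 14, 13, 15, 16, 17, 15, 14, 14, 15, 16,
             15, 15, 14, 13, 14, 15, 16, 16, 15, 14, 15, 15, 16, 15,
             14, 15]                     # 30 days of actual counts

# Hypothetical day-pair sample: every Tuesday and Friday of the month.
sample_days = [1, 4, 8, 11, 15, 18, 22, 25, 29]
sample_mean = sum(daily_cld[d] for d in sample_days) / len(sample_days)
estimated_cld = sample_mean * len(daily_cld)
actual_cld = sum(daily_cld)
pct_error = 100 * (estimated_cld - actual_cld) / actual_cld
print(f"estimated {estimated_cld:.0f} vs actual {actual_cld} ({pct_error:+.1f}%)")
```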
NASA Astrophysics Data System (ADS)
Czarnogorska, M.; Samsonov, S.; White, D.
2014-11-01
The research objectives of the Aquistore CO2 storage project are to design, adapt, and test non-seismic monitoring methods for measurement and verification of CO2 storage, and to integrate data to determine subsurface fluid distributions, pressure changes and associated surface deformation. The Aquistore site is located near Estevan in southeastern Saskatchewan, Canada, on the south flank of the Souris River, west of the Boundary Dam Power Station and the historical part of the Estevan coal mine. The targeted CO2 injection zones are within the Winnipeg and Deadwood formations located at >3000 m depth. An array of monitoring techniques was employed in the study area, including the advanced satellite Differential Interferometric Synthetic Aperture Radar (DInSAR) technique with established corner reflectors, GPS, tiltmeters and piezometer stations. We used airborne LIDAR data for topographic phase estimation and DInSAR product geocoding. Ground deformation maps were calculated using the Multidimensional Small Baseline Subset (MSBAS) methodology from 134 RADARSAT-2 images, from five different beams, acquired between 2012-06-12 and 2014-07-06. We computed and interpreted nine time series for selected locations. MSBAS results indicate slow ground deformation of up to 1 cm/year, not related to CO2 injection but caused by various natural and anthropogenic sources.
Imaging Large Cohorts of Single Ion Channels and Their Activity
Hiersemenzel, Katia; Brown, Euan R.; Duncan, Rory R.
2013-01-01
As calcium is the most important signaling molecule in neurons and secretory cells, amongst many other cell types, it follows that an understanding of calcium channels and their regulation of exocytosis is of vital importance. Calcium imaging using calcium dyes such as Fluo3, or FRET-based dyes, has been used widely and has provided invaluable information which, combined with modeling, has allowed estimation of the subtypes of channels responsible for triggering the exocytotic machinery, as well as inferences about the relative distances of these molecules from vesicle fusion sites. Importantly, new super-resolution microscopy techniques, combined with novel Ca2+ indicators and imaginative imaging approaches, can now define directly the nano-scale locations of very large cohorts of single channel molecules in relation to single vesicles. With combinations of these techniques the activity of individual channels can be visualized and quantified using novel Ca2+ indicators. Fluorescently labeled specific channel toxins can also be used to localize endogenous assembled channel tetramers. Fluorescence lifetime imaging microscopy and other single-photon-resolution spectroscopic approaches offer the possibility to quantify protein–protein interactions between populations of channels and the SNARE protein machinery for the first time. Together with simultaneous electrophysiology, this battery of quantitative imaging techniques has the potential to provide unprecedented detail describing the locations, dynamic behaviors, interactions, and conductance activities of many thousands of channel molecules and vesicles in living cells. PMID:24027557
Source counting in MEG neuroimaging
NASA Astrophysics Data System (ADS)
Lei, Tianhu; Dell, John; Magee, Ralphy; Roberts, Timothy P. L.
2009-02-01
Magnetoencephalography (MEG) is a multi-channel, functional imaging technique. It measures the magnetic field produced by the primary electric currents inside the brain via a sensor array composed of a large number of superconducting quantum interference devices. The measurements are then used to estimate the locations, strengths, and orientations of these electric currents. This magnetic source imaging technique encompasses a great variety of signal processing and modeling techniques, including inverse-problem methods, MUltiple SIgnal Classification (MUSIC), beamforming (BF), and Independent Component Analysis (ICA). A key problem with inverse-problem, MUSIC, and ICA methods is that the number of sources must be known a priori. Although the BF method scans the source space on a point-to-point basis, the selection of peaks as sources is finally made by subjective thresholding; in practice, expert data analysts often select results based on physiological plausibility. This paper presents an eigenstructure approach for source number detection in MEG neuroimaging. By sorting the eigenvalues of the estimated covariance matrix of the acquired MEG data, the measured data space is partitioned into signal and noise subspaces. The partition is implemented by utilizing information-theoretic criteria. The order of the signal subspace gives an estimate of the number of sources. The approach does not refer to any model or hypothesis and is hence an entirely data-led operation. It possesses a clear physical interpretation and an efficient computation procedure. The theoretical derivation of this method and results obtained using real MEG data are included to demonstrate their agreement and the promise of the proposed approach.
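One standard instance of such an information-theoretic partition is the Wax-Kailath minimum description length (MDL) criterion applied to the sorted covariance eigenvalues. Whether the paper uses MDL, AIC, or another criterion is not stated, so the sketch below should be read as a representative example on synthetic data, not the paper's exact procedure.

```python
import numpy as np

def mdl_source_count(eigvals, n_snapshots):
    """Wax-Kailath MDL criterion: choose the model order k minimizing
    MDL(k) = -N(p-k)*log(gm/am of the p-k smallest eigenvalues)
             + 0.5*k*(2p-k)*log(N)."""
    lam = np.sort(np.asarray(eigvals))[::-1]
    p = len(lam)
    mdl = np.empty(p)
    for k in range(p):
        tail = lam[k:]
        gm = np.exp(np.mean(np.log(tail)))    # geometric mean
        am = np.mean(tail)                    # arithmetic mean
        mdl[k] = -n_snapshots * (p - k) * np.log(gm / am) \
                 + 0.5 * k * (2 * p - k) * np.log(n_snapshots)
    return int(np.argmin(mdl))

# Synthetic 8-sensor data with 2 strong sources over unit-level noise.
rng = np.random.default_rng(1)
N, p = 500, 8
a1, a2 = rng.standard_normal(p), rng.standard_normal(p)
X = (np.outer(rng.standard_normal(N), a1) * 3
     + np.outer(rng.standard_normal(N), a2) * 2
     + rng.standard_normal((N, p)))
eig = np.linalg.eigvalsh(X.T @ X / N)
print("estimated number of sources:", mdl_source_count(eig, N))
```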
Two biased estimation techniques in linear regression: Application to aircraft
NASA Technical Reports Server (NTRS)
Klein, Vladislav
1988-01-01
Several ways of detecting and assessing collinearity in measured data are discussed. Because data collinearity usually results in poor least-squares estimates, two estimation techniques which can limit the damaging effect of collinearity are presented. These two techniques, principal components regression and mixed estimation, belong to a class of biased estimation techniques. Detection and assessment of data collinearity and the two biased estimation techniques are demonstrated in two examples using flight test data from longitudinal maneuvers of an experimental aircraft. The eigensystem analysis and parameter variance decomposition appeared to be a promising tool for collinearity evaluation. The biased estimators had far better accuracy than the results from the ordinary least-squares technique.
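A minimal sketch of the first of the two biased estimators, principal components regression, is given below: the regression is solved in the space of the leading principal components and the coefficients mapped back, which stabilizes estimates when two regressors are nearly collinear. The data are synthetic stand-ins, not the flight-test data.

```python
import numpy as np

def pcr_fit(X, y, n_components):
    """Principal components regression: regress y on the leading principal
    components of X, then map the coefficients back to the original
    variables. Dropping small-variance components trades a little bias
    for a large variance reduction when regressors are collinear."""
    Xc, yc = X - X.mean(0), y - y.mean()
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    V, s = Vt[:n_components].T, s[:n_components]
    scores = Xc @ V                           # principal component scores
    gamma = (scores.T @ yc) / (s ** 2)        # LS fit in PC space
    beta = V @ gamma                          # back to original variables
    return beta, y.mean() - X.mean(0) @ beta

# Hypothetical, nearly collinear regressors (e.g., two redundant signals).
rng = np.random.default_rng(2)
x1 = rng.standard_normal(200)
x2 = x1 + 0.01 * rng.standard_normal(200)     # almost a copy of x1
X = np.column_stack([x1, x2])
y = 1.0 * x1 + 1.0 * x2 + 0.1 * rng.standard_normal(200)
beta, b0 = pcr_fit(X, y, n_components=1)
print("PCR coefficients:", beta.round(3))     # near [1, 1], and stable
```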
An Efficient Location Verification Scheme for Static Wireless Sensor Networks
Kim, In-hwan; Kim, Bo-sung; Song, JooSeok
2017-01-01
In wireless sensor networks (WSNs), the accuracy of location information is vital to support many interesting applications. Unfortunately, sensors have difficulty in estimating their location when malicious sensors attack the location estimation process. Even though secure localization schemes have been proposed to protect the location estimation process from attacks, they are not enough to eliminate wrong location estimates in some situations. Location verification can be the solution to these situations or serve as a second line of defense. The problem with most location verification schemes is the explicit involvement of many sensors in the verification process and requirements such as special hardware, a dedicated verifier, or a trusted third party, which cause more communication and computation overhead. In this paper, we propose an efficient location verification scheme for static WSNs called mutually-shared region-based location verification (MSRLV), which reduces those overheads by utilizing the implicit involvement of sensors and eliminating several requirements. In order to achieve this, we use the mutually-shared region between the location claimant and verifier for the location verification. The analysis shows that MSRLV reduces communication overhead by 77% and computation overhead by 92% on average, when compared with other location verification schemes, in single-sensor verification. In addition, simulation results for the verification of the whole network show that MSRLV can detect over 90% of malicious sensors when sensors in the network have five or more neighbors. PMID:28125007
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lebron, S; Yan, G; Li, J
2016-06-15
Purpose: To develop an accurate and quick multileaf collimator (MLC) calibration and quality assurance technique using an electronic portal imaging device (EPID). Methods: The MLC models used include the MLCi and Agility (Elekta Ltd). The technique consists of two 22 (L) x 10 (W) cm2 fields with 0° and 180° collimator angles centered to an offset EPID. The MLC opening is estimated by calculating the profile at the image's center in the image's horizontal direction. Scans in the image's vertical direction were calculated every 20 pixels in the inner 70% of the estimated MLC opening. The profiles' edges were fitted with linear equations to determine the image's rotation angle. Then, crossline profiles were scanned at the center of each leaf, taking into account the leaf's width at isocenter and the rotation angle. The profiles' edges determine the locations of the leaves' edges, and these were subtracted from the reference leaf's position in order to determine the relative leaf offsets. The edge location of all profiles was determined by using the parameterized gradient of the penumbra region. The technique was tested against an established diode array-based method, and for different MLC systems, patterns, gantry angles, days, energies, beam modalities and MLC openings. Results: The differences between the proposed and established methods were 0.26±0.19 mm. The leaf offsets' deviation was <0.3 mm over a 5-month period. For pattern fields, the differences between predetermined and calculated offsets were 0.18±0.18 mm. The leaf offset deviations of measurements with different energies and MLC openings were <0.1 mm and <0.3 mm, respectively. The differences between offsets of FF and FFF beams were 0.01±0.02 mm (<0.07 mm). The differences between the offsets at different gantry angles were 0.08±0.15 mm. Conclusion: The proposed method proved to be accurate and efficient in calculating the relative leaf offsets. A parameterized field edge is essential to obtaining accurate results by eliminating noise from the EPID.
Travel time to maternity care and its effect on utilization in rural Ghana: a multilevel analysis.
Masters, Samuel H; Burstein, Roy; Amofah, George; Abaogye, Patrick; Kumar, Santosh; Hanlon, Michael
2013-09-01
Rates of neonatal and maternal mortality are high in Ghana. In-facility delivery and other maternal services could reduce this burden, yet utilization rates of key maternal services are relatively low, especially in rural areas. We tested a theoretical implication that travel time negatively affects the use of in-facility delivery and other maternal services. Empirically, we used geospatial techniques to estimate travel times between populations and health facilities. To account for uncertainty in Ghana Demographic and Health Survey cluster locations, we adopted a novel approach of treating the location selection as an imputation problem. We estimated a multilevel random-intercept logistic regression model. For rural households, we found that travel time had a significant effect on the likelihood of in-facility delivery and antenatal care visits, holding constant education, wealth, maternal age, facility capacity, female autonomy, and the season of birth. In contrast, a facility's capacity to provide sophisticated maternity care had no detectable effect on utilization. As the Ghanaian health network expands, our results suggest that increasing the availability of basic obstetric services and improving transport infrastructure may be important interventions. Copyright © 2013 Elsevier Ltd. All rights reserved.
Advancing US GHG Inventory by Incorporating Survey Data using Machine-Learning Techniques
NASA Astrophysics Data System (ADS)
Alsaker, C.; Ogle, S. M.; Breidt, J.
2017-12-01
Crop management data are used in the National Greenhouse Gas Inventory that is compiled annually and reported to the United Nations Framework Convention on Climate Change. Carbon stock changes and N2O emissions for US agricultural soils are estimated using the USDA National Resources Inventory (NRI). The NRI provides basic information on land use and cropping histories, but it does not provide much detail on other management practices. In contrast, the Conservation Effects Assessment Project (CEAP) survey collects detailed crop management data that could be used in the GHG Inventory. The CEAP data were collected every 10 years from survey locations that are a subset of the NRI. Therefore, imputation of the CEAP data is needed to represent the management practices across all NRI survey locations, both spatially and temporally. Predictive mean matching and artificial neural network methods have been applied to develop imputation models under a multiple imputation framework. Temporal imputation involves adjusting the imputation model using state-level USDA Agricultural Resource Management Survey data. Distributional and predictive accuracy is assessed for the imputed data, providing not only the management data needed for the inventory but also rigorous estimates of uncertainty.
Royston, Thomas J.; Dai, Zoujun; Chaunsali, Rajesh; Liu, Yifei; Peng, Ying; Magin, Richard L.
2011-01-01
Previous studies of the first author and others have focused on low audible frequency (<1 kHz) shear and surface wave motion in and on a viscoelastic material comprised of or representative of soft biological tissue. A specific case considered has been surface (Rayleigh) wave motion caused by a circular disk located on the surface and oscillating normal to it. Different approaches to identifying the type and coefficients of a viscoelastic model of the material based on these measurements have been proposed. One approach has been to optimize coefficients in an assumed viscoelastic model type to match measurements of the frequency-dependent Rayleigh wave speed. Another approach has been to optimize coefficients in an assumed viscoelastic model type to match the complex-valued frequency response function (FRF) between the excitation location and points at known radial distances from it. In the present article, the relative merits of these approaches are explored theoretically, computationally, and experimentally. It is concluded that matching the complex-valued FRF may provide a better estimate of the viscoelastic model type and parameter values; though, as the studies herein show, there are inherent limitations to identifying viscoelastic properties based on surface wave measurements. PMID:22225067
NASA Astrophysics Data System (ADS)
Mahmoudabadi, H.; Briggs, G.
2016-12-01
Gridded data sets, such as geoid models or datum shift grids, are commonly used in coordinate transformation algorithms. Grid files typically contain known or measured values at regular fixed intervals. The process of computing a value at an unknown location from the values in the grid data set is called interpolation. Generally, interpolation methods predict a value at a given point by computing a weighted average of the known values in the neighborhood of the point. Geostatistical Kriging is a widely used interpolation method for irregular networks. Kriging interpolation first analyzes the spatial structure of the input data, then generates a general model to describe the spatial dependencies. This model is used to calculate values at unsampled locations by finding the direction, shape, size, and weight of neighborhood points. Because it is based on a linear formulation for the best estimation, Kriging is the optimal interpolation method in statistical terms. The Kriging algorithm produces an unbiased prediction as well as the spatial distribution of uncertainty, allowing the interpolation error to be estimated at any particular point. Kriging is not widely used in geospatial applications today, especially applications that run on low-power devices or deal with large data files, because of the computational power and memory requirements of standard Kriging techniques. In this paper, improvements are introduced in the directional kriging implementation by taking advantage of the structure of the grid files. The regular spacing of points simplifies finding the neighborhood points and computing their pairwise distances, reducing the complexity and improving the execution time of the Kriging algorithm. Also, the proposed method iteratively loads small portions of the area of interest in different directions to reduce the amount of required memory, which makes the technique feasible on almost any processor. Comparison between kriging and other standard interpolation methods demonstrated more accurate estimates in less dense data files.
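For reference, the core linear system that any ordinary kriging implementation solves per query point is sketched below with an assumed exponential variogram; the grid-aware neighbor search and directional, block-wise loading that the paper proposes are not reproduced, and the node values are hypothetical.

```python
import numpy as np

def ordinary_krige(pts, vals, x0, sill=1.0, rng_a=5.0, nugget=0.0):
    """Ordinary kriging of one location from known points, assuming an
    exponential variogram gamma(h) = nugget + sill*(1 - exp(-3h/a)).
    Solves the standard kriging system with a Lagrange multiplier."""
    def gamma(h):
        return nugget + sill * (1.0 - np.exp(-3.0 * h / rng_a))
    n = len(pts)
    H = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(H)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(pts - x0, axis=1))
    w = np.linalg.solve(A, b)                 # weights plus multiplier
    est = w[:n] @ vals
    var = w @ b                               # kriging variance at x0
    return est, var

# Hypothetical grid nodes surrounding the query point (unit grid spacing).
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([10.0, 12.0, 11.0, 13.0])
print(ordinary_krige(pts, vals, np.array([0.4, 0.6])))
```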
MR-guided adaptive focusing of therapeutic ultrasound beams in the human head
Marsac, Laurent; Chauvet, Dorian; Larrat, Benoît; Pernot, Mathieu; Robert, B.; Fink, Mathias; Boch, Anne-Laure; Aubry, Jean-François; Tanter, Mickaël
2012-01-01
Purpose This study aims to demonstrate, using human cadavers, the feasibility of energy-based adaptive focusing of ultrasonic waves using Magnetic Resonance Acoustic Radiation Force Imaging (MR-ARFI) in the framework of non-invasive transcranial High Intensity Focused Ultrasound (HIFU) therapy. Methods Energy-based adaptive focusing techniques were recently proposed in order to achieve aberration correction. We evaluate this method on a clinical brain HIFU system composed of 512 ultrasonic elements positioned inside a full-body 1.5 T clinical Magnetic Resonance (MR) imaging system. Cadaver heads were mounted onto a clinical Leksell stereotactic frame. The ultrasonic wave intensity at the chosen location was indirectly estimated by the MR system measuring the local tissue displacement induced by the acoustic radiation force of the ultrasound (US) beams. For aberration correction, a set of spatially encoded ultrasonic waves was transmitted from the ultrasonic array and the resulting local displacements were estimated with the MR-ARFI sequence for each emitted beam. A non-iterative inversion process was then performed in order to estimate the spatial phase aberrations induced by the cadaver skull. The procedure was first evaluated and optimized in a calf brain using a numerical aberrator mimicking human skull aberrations. The full method was then demonstrated using a fresh human cadaver head. Results The corrected beam resulting from the direct inversion process was found to focus at the targeted location with an acoustic intensity 2.2 times higher than the conventional non-corrected beam. In addition, this corrected beam was found to give an acoustic intensity 1.5 times higher than the focusing pattern obtained with an aberration correction using transcranial acoustic simulation based on X-ray computed tomography (CT) scans. Conclusion The proposed technique achieved near-optimal focusing in an intact human head for the first time. These findings confirm the strong potential of energy-based adaptive focusing of transcranial ultrasonic beams for clinical applications. PMID:22320825
Space Shuttle Transportation (Roll-Out) Loads Diagnostics
NASA Technical Reports Server (NTRS)
Elliott, Kenny B.; Buehrle, Ralph D.; James, George H.; Richart, Jene A.
2005-01-01
The Space Transportation System (STS) consists of three primary components: an Orbiter Vehicle, an External Fuel Tank, and two Solid Rocket Boosters. The Orbiter Vehicle and Solid Rocket Boosters are reusable components, and as such, they are susceptible to durability issues. Recently, the fatigue load spectra for these components have been updated to include load histories acquired during the rollout phase of STS processing for flight. Using traditional program life assessment techniques, the incorporation of these "rollout" loads produced unacceptable life estimates for certain Orbiter structural members. As a result, the Space Shuttle System Engineering and Integration Office has initiated a program to re-assess the method used for developing the "rollout" loads and performing the life assessments. In the fall of 2003 a set of tests was performed to provide information to either validate existing load spectra estimation techniques or generate new load spectra estimation methods. Acceleration and strain data were collected from two rollouts of a partial-stack configuration of the Space Shuttle. The partial-stack configuration consists of two Solid Rocket Boosters tied together at the upper External Tank attachment locations, mounted on the Mobile Launch Platform carried by a Crawler Transporter (CT). In the current analysis, the data collected from this test are examined for consistency in speed, surface condition effects, and the characterization of the forcing function. It is observed that the speed of the CT is relatively stable. The dynamic response acceleration of the partial stack is slightly sensitive to the surface condition of the road used for transport, and the dynamic response acceleration of the partial stack generally increases as the transport speed increases. However, the speed sensitivity is dependent on the measurement location. Finally, the character of the forcing function is narrow-banded, with the primary drivers being harmonics of two CT speed-dependent excitations. One source is an excitation due to the CT treads striking the road surface, and the second is unknown.
A technique for locating function roots and for satisfying equality constraints in optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1991-01-01
A new technique for locating simultaneous roots of a set of functions is described. The technique is based on the property of the Kreisselmeier-Steinhauser function which descends to a minimum at each root location. It is shown that the ensuing algorithm may be merged into any nonlinear programming method for solving optimization problems with equality constraints.
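To make the cited property concrete: the KS envelope KS_rho(g) = max(g) + ln(sum_i exp(rho*(g_i - max(g))))/rho smoothly over-approximates max_i g_i, so applying it to the set {f_i, -f_i} gives a smooth surrogate for max_i |f_i|, which descends to a minimum at each simultaneous root. The sketch below is an assumption-level illustration with invented test functions, not the paper's merged optimization algorithm.

```python
# Root location by minimizing the KS envelope of {f_i, -f_i}; a sketch,
# assuming simple smooth test functions with a common root.
import numpy as np
from scipy.optimize import minimize

def ks(values, rho=50.0):
    """Smooth envelope of max(values); larger rho tightens the envelope."""
    v = np.asarray(values)
    m = v.max()
    return m + np.log(np.sum(np.exp(rho * (v - m)))) / rho

def funcs(x):
    return np.array([x[0] ** 2 + x[1] ** 2 - 4.0,   # circle of radius 2
                     x[0] - x[1]])                   # line y = x

def objective(x):
    f = funcs(x)
    return ks(np.concatenate([f, -f]))   # surrogate for max|f_i|

res = minimize(objective, x0=[3.0, 0.5], method="Nelder-Mead")
print("simultaneous root estimate:", res.x)  # ~ (sqrt(2), sqrt(2))
```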
VLF long-range lightning location using the arrival time difference technique (ATD)
NASA Technical Reports Server (NTRS)
Ierkic, H. Mario
1996-01-01
A new network of VLF receiving systems is currently being developed in the USA to support NASA's Tropical Rain Measuring Mission (TRMM). The new network will be deployed on the east coast of the US, including Puerto Rico, and will be operational in late 1995. The system should give affordable, near real-time, accurate lightning-locating capabilities at long ranges and with extended coverage. It is based on the Arrival Time Difference (ATD) method of Lee (1986; 1990). The ATD technique is based on the estimation of the time of arrival of sferics detected over an 18 kHz bandwidth. The ground system results will be compared and complemented with satellite optical measurements gathered with the already operational Optical Transient Detector (OTD) instrument and, in due course, with its successor, the Lightning Imaging Sensor (LIS). Lightning observations are important for understanding atmospheric electrification phenomena, discharge processes, and associated phenomena on Earth (e.g., whistlers, explosive spread-F) and other planets. In addition, lightning is a conspicuous indicator of atmospheric activity whose potential is just beginning to be recognized and utilized. On more prosaic grounds, lightning observations are important for the protection of life, property, and services.
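The core of an ATD solver can be written compactly: given arrival-time picks at several stations, the source location minimizes the misfit between observed and modeled time differences. The sketch below is a flat-plane, fixed-speed toy (real VLF work needs great-circle geometry and a sub-luminal group speed); the coordinates, speed, and noise levels are invented.

```python
# Flat-plane arrival-time-difference location via nonlinear least squares;
# a sketch, not the operational network's algorithm.
import numpy as np
from scipy.optimize import least_squares

C = 3.0e5  # propagation speed, km/s (free-space value, for illustration)
receivers = np.array([[0.0, 0.0], [800.0, 0.0],
                      [0.0, 900.0], [700.0, 850.0]])  # station positions, km
source_true = np.array([320.0, 540.0])

dist = np.linalg.norm(receivers - source_true, axis=1)
t_arrival = dist / C + np.random.default_rng(1).normal(0, 1e-6, len(receivers))

def atd_residuals(xy):
    d = np.linalg.norm(receivers - xy, axis=1)
    atd_model = (d[1:] - d[0]) / C            # modeled time differences
    atd_obs = t_arrival[1:] - t_arrival[0]    # observed time differences
    return atd_model - atd_obs

fit = least_squares(atd_residuals, x0=np.array([100.0, 100.0]))
print("estimated source (km):", fit.x)
```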
Geologic Carbon Sequestration Leakage Detection: A Physics-Guided Machine Learning Approach
NASA Astrophysics Data System (ADS)
Lin, Y.; Harp, D. R.; Chen, B.; Pawar, R.
2017-12-01
One of the risks of large-scale geologic carbon sequestration is the potential migration of fluids out of the storage formations. Accurate and fast detection of this fluid migration is both important and challenging, due to the large subsurface uncertainty and complex governing physics. Traditional leakage detection and monitoring techniques rely on geophysical observations, including pressure. However, the accuracy of these methods is limited because the information they provide is indirect and requires expert interpretation, yielding inaccurate estimates of leakage rates and locations. In this work, we develop a novel machine-learning technique based on support vector regression to effectively and efficiently predict leakage locations and leakage rates from a limited number of pressure observations. In contrast to conventional data-driven approaches, which can usually be seen as a "black box" procedure, we develop a physics-guided machine learning method that incorporates the governing physics into the learning procedure. To validate the performance of our proposed leakage detection method, we apply it to both 2D and 3D synthetic subsurface models. Our novel CO2 leakage detection method has shown high detection accuracy in the example problems.
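A minimal sketch of the data-driven core follows: support vector regression mapping well pressure anomalies to a leak rate, with the physics guidance reduced to a single physics-derived feature (the projection of the observed anomalies onto a simplified radial forward-model response). The forward model, well geometry, and noise are invented stand-ins, not the authors' implementation.

```python
# SVR on synthetic pressure anomalies, augmented with one physics-based
# feature; a hedged sketch of a physics-guided regression setup.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n_samples, n_wells = 300, 8
well_dist = rng.uniform(0.2, 5.0, n_wells)      # km from candidate leak point
leak_rate = rng.uniform(0.0, 10.0, n_samples)   # kg/s, training labels

# Synthetic pressure anomalies: log-like radial falloff plus noise.
dP = leak_rate[:, None] * np.log1p(1.0 / well_dist)[None, :]
dP += rng.normal(0, 0.05, dP.shape)

# Physics-guided feature: least-squares projection of each observation
# vector onto the simplified unit-rate forward-model response.
unit_response = np.log1p(1.0 / well_dist)
physics_feat = dP @ unit_response / (unit_response @ unit_response)
X = np.column_stack([dP, physics_feat])

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:250], leak_rate[:250])
print("held-out R^2:", model.score(X[250:], leak_rate[250:]))
```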
Reingold, Eyal M.; Reichle, Erik D.; Glaholt, Mackenzie G.; Sheridan, Heather
2013-01-01
Participants’ eye movements were monitored in an experiment that manipulated the frequency of target words (high vs. low) as well as their availability for parafoveal processing during fixations on the pre-target word (valid vs. invalid preview). The influence of the word-frequency by preview validity manipulation on the distributions of first fixation duration was examined by using ex-Gaussian fitting as well as a novel survival analysis technique which provided precise estimates of the timing of the first discernible influence of word frequency on first fixation duration. Using this technique, we found a significant influence of word frequency on fixation duration in normal reading (valid preview) as early as 145 ms from the start of fixation. We also demonstrated an equally rapid non-lexical influence on first fixation duration as a function of initial landing position (location) on target words. The time-course of frequency effects, but not location effects was strongly influenced by preview validity, demonstrating the crucial role of parafoveal processing in enabling direct lexical control of reading fixation times. Implications for models of eye-movement control are discussed. PMID:22542804
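The ex-Gaussian fitting step has a direct counterpart in scipy's exponentially modified normal distribution; the sketch below fits synthetic fixation durations and reports the conventional (mu, sigma, tau) parameters. The survival-analysis divergence-point procedure from the paper is not reproduced here.

```python
# Ex-Gaussian fit of fixation durations via scipy's exponnorm; the data
# below are synthetic, not the experiment's.
import numpy as np
from scipy.stats import exponnorm

rng = np.random.default_rng(3)
# Synthetic first-fixation durations (ms): Gaussian core + exponential tail.
durations = rng.normal(180, 25, 2000) + rng.exponential(60, 2000)

K, loc, scale = exponnorm.fit(durations)
mu, sigma, tau = loc, scale, K * scale   # usual ex-Gaussian parameters
print(f"mu={mu:.1f} ms  sigma={sigma:.1f} ms  tau={tau:.1f} ms")
```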
Parallel Estimation and Control Architectures for Deep-Space Formation Flying Spacecraft
NASA Technical Reports Server (NTRS)
Hadaegh, Fred Y.; Smith, Roy S.
2006-01-01
The formation flying of precisely controlled spacecraft in deep space can be used to implement optical instruments capable of imaging planets in other solar systems. The distance of the formation from Earth necessitates a significant level of autonomy and each spacecraft must base its actions on its estimates of the location and velocity of the other spacecraft. Precise coordination and control is the key requirement in such missions and the flow of information between spacecraft must be carefully designed. Doing this in an efficient and optimal manner requires novel techniques for the design of the on-board estimators. The use of standard Kalman filter-based designs can lead to unanticipated dynamics--which we refer to as disagreement dynamics--in the estimators' errors. We show how communication amongst the spacecraft can be designed in order to control all of the dynamics within the formation. We present several results relating the topology of the communication network to the resulting closed-loop control dynamics of the formation. The consequences for the design of the control, communication and coordination are discussed.
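The estimator building block the abstract refers to is the standard Kalman filter; the disagreement dynamics arise when several such filters exchange estimates over a communication topology, a layer not sketched here. Below is only the local building block, a constant-velocity filter tracking a neighbor's along-track position from noisy position-like measurements, with all values invented.

```python
# Minimal constant-velocity Kalman filter; a sketch of the per-spacecraft
# estimator, not the paper's parallel architecture.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
H = np.array([[1.0, 0.0]])              # measure position only
Q = 1e-4 * np.eye(2)                    # process noise covariance
R = np.array([[0.04]])                  # measurement noise covariance

x = np.array([[0.0], [0.0]])            # state estimate
P = np.eye(2)                           # estimate covariance

rng = np.random.default_rng(5)
true_pos, true_vel = 0.0, 0.1
for _ in range(50):
    true_pos += true_vel * dt
    z = np.array([[true_pos + rng.normal(0, 0.2)]])
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print("estimated pos/vel:", x.ravel(), " true:", (true_pos, true_vel))
```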
Statistical analysis of the calibration procedure for personnel radiation measurement instruments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bush, W.J.; Bengston, S.J.; Kalbeitzer, F.L.
1980-11-01
Thermoluminescent analyzer (TLA) calibration procedures were used to estimate personnel radiation exposure levels at the Idaho National Engineering Laboratory (INEL). A statistical analysis is presented herein, based on data collected over a six-month period in 1979 on four TLAs located in the Department of Energy (DOE) Radiological and Environmental Sciences Laboratory at the INEL. The data were collected according to the day-to-day procedure in effect at that time. Both gamma and beta radiation models are developed. Observed TLA readings of thermoluminescent dosimeters are correlated with known radiation levels. This correlation is then used to predict unknown radiation doses from future analyzer readings of personnel thermoluminescent dosimeters. The statistical techniques applied in this analysis include weighted linear regression, estimation of systematic and random error variances, prediction interval estimation using Scheffe's theory of calibration, estimation of the ratio of the means of two normal bivariate distributed random variables and their corresponding confidence limits according to Kendall and Stuart, tests of normality, experimental design, a comparison between instruments, and quality control.
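The calibration-then-inverse-prediction step looks roughly like the sketch below: a weighted straight-line fit of readings on known doses, inverted to estimate an unknown dose from a new reading. The dose/reading values and weights are invented, and the Scheffe-style simultaneous prediction intervals used in the report are beyond this sketch.

```python
# Weighted linear calibration and inverse prediction; a sketch with
# invented dose/reading pairs.
import numpy as np

known_dose = np.array([10., 25., 50., 100., 200., 400.])      # mR
readings = np.array([11.2, 26.0, 53.1, 104.0, 207.5, 401.0])  # TLA output
weights = 1.0 / known_dose   # assumed ~1/variance weighting

# np.polyfit multiplies residuals by w, so pass sqrt of the 1/var weights.
slope, intercept = np.polyfit(known_dose, readings, 1, w=np.sqrt(weights))

def dose_from_reading(y):
    """Invert the calibration line to estimate dose from a new reading."""
    return (y - intercept) / slope

print("estimated dose for reading 150:", dose_from_reading(150.0), "mR")
```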
Analyzing animal movements using Brownian bridges.
Horne, Jon S; Garton, Edward O; Krone, Stephen M; Lewis, Jesse S
2007-09-01
By studying animal movements, researchers can gain insight into many of the ecological characteristics and processes important for understanding population-level dynamics. We developed a Brownian bridge movement model (BBMM) for estimating the expected movement path of an animal, using discrete location data obtained at relatively short time intervals. The BBMM is based on the properties of a conditional random walk between successive pairs of locations, dependent on the time between locations, the distance between locations, and the Brownian motion variance that is related to the animal's mobility. We describe two critical developments that enable widespread use of the BBMM, including a derivation of the model when location data are measured with error and a maximum likelihood approach for estimating the Brownian motion variance. After the BBMM is fitted to location data, an estimate of the animal's probability of occurrence can be generated for an area during the time of observation. To illustrate potential applications, we provide three examples: estimating animal home ranges, estimating animal migration routes, and evaluating the influence of fine-scale resource selection on animal movement patterns.
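The BBMM occurrence surface can be computed directly from the paper's variance form: at fraction a = t/T of the way between two fixes, the position is Gaussian with mean on the straight line between the fixes and variance T*a*(1-a)*s2m plus the location-error contribution. The sketch below integrates that density over the interval on a grid; the fix coordinates and variances are invented.

```python
# Brownian bridge occurrence surface between two telemetry fixes; a sketch
# of the BBMM density, with invented parameters.
import numpy as np

start, end = np.array([0.0, 0.0]), np.array([10.0, 4.0])  # fixes, km
T, s2m, d2 = 60.0, 0.05, 0.1   # minutes, motion variance, location-error var

xs = np.linspace(-2, 12, 140)
ys = np.linspace(-2, 8, 100)
X, Y = np.meshgrid(xs, ys)
occupancy = np.zeros_like(X)

for a in np.linspace(0.01, 0.99, 99):   # integrate over the time interval
    mean = (1 - a) * start + a * end
    var = T * a * (1 - a) * s2m + ((1 - a) ** 2 + a ** 2) * d2
    occupancy += (np.exp(-((X - mean[0]) ** 2 + (Y - mean[1]) ** 2)
                         / (2 * var)) / (2 * np.pi * var))

occupancy /= occupancy.sum()   # normalize to a probability map
print("modal cell:", np.unravel_index(occupancy.argmax(), occupancy.shape))
```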
Hansen, Scott K.; Vesselinov, Velimir Valentinov
2016-10-01
We develop empirically grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of the ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
Automatic tree parameter extraction by a Mobile LiDAR System in an urban context.
Herrero-Huerta, Mónica; Lindenbergh, Roderik; Rodríguez-Gonzálvez, Pablo
2018-01-01
In an urban context, tree data are used in city planning, in locating hazardous trees and in environmental monitoring. This study focuses on developing an innovative methodology to automatically estimate the most relevant individual structural parameters of urban trees sampled by a Mobile LiDAR System at city level. These parameters include the Diameter at Breast Height (DBH), which was estimated by circle fitting of the points belonging to different height bins using RANSAC. In the case of non-circular trees, DBH is calculated by the maximum distance between extreme points. Tree sizes were extracted through a connectivity analysis. Crown Base Height, defined as the length until the bottom of the live crown, was calculated by voxelization techniques. For estimating Canopy Volume, procedures of mesh generation and α-shape methods were implemented. Also, tree location coordinates were obtained by means of Principal Component Analysis. The workflow has been validated on 29 trees of different species sampling a stretch of road 750 m long in Delft (The Netherlands) and tested on a larger dataset containing 58 individual trees. The validation was done against field measurements. DBH parameter had a correlation R2 value of 0.92 for the height bin of 20 cm which provided the best results. Moreover, the influence of the number of points used for DBH estimation, considering different height bins, was investigated. The assessment of the other inventory parameters yield correlation coefficients higher than 0.91. The quality of the results confirms the feasibility of the proposed methodology, providing scalability to a comprehensive analysis of urban trees.
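The DBH step, RANSAC circle fitting on the (x, y) points of one height bin, is easy to sketch. The implementation below is a generic three-point RANSAC with invented thresholds and synthetic stem points, not the authors' code.

```python
# RANSAC circle fit for a stem slice; DBH = 2 * fitted radius. A sketch
# with synthetic data and illustrative thresholds.
import numpy as np

def circle_through(p1, p2, p3):
    """Circle x^2 + y^2 + D*x + E*y + F = 0 through three points."""
    A = np.array([[p[0], p[1], 1.0] for p in (p1, p2, p3)])
    b = -np.array([p[0] ** 2 + p[1] ** 2 for p in (p1, p2, p3)])
    D, E, F = np.linalg.solve(A, b)
    cx, cy = -D / 2.0, -E / 2.0
    return np.array([cx, cy]), np.sqrt(cx ** 2 + cy ** 2 - F)

def ransac_circle(pts, n_iter=500, tol=0.01, seed=0):
    rng = np.random.default_rng(seed)
    best_inliers, best = 0, None
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        try:
            center, r = circle_through(*sample)
        except np.linalg.LinAlgError:
            continue   # collinear sample, no circumcircle
        resid = np.abs(np.linalg.norm(pts - center, axis=1) - r)
        inliers = np.count_nonzero(resid < tol)
        if inliers > best_inliers:
            best_inliers, best = inliers, (center, r)
    return best

# Synthetic slice: 0.15 m stem radius, small noise, plus foliage outliers.
rng = np.random.default_rng(1)
th = rng.uniform(0, 2 * np.pi, 200)
stem = 0.15 * np.c_[np.cos(th), np.sin(th)] + rng.normal(0, 0.003, (200, 2))
outliers = rng.uniform(-0.4, 0.4, (40, 2))
center, r = ransac_circle(np.vstack([stem, outliers]))
print(f"DBH estimate: {2 * r:.3f} m")
```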
Hayes, Laura; Horn, Marilee A.
2009-01-01
The U.S. Geological Survey, in cooperation with the New Hampshire Department of Environmental Services, estimated the amount of water demand, consumptive use, withdrawal, and return flow for each U.S. Census block in New Hampshire for the years 2005 (current) and 2020. Estimates of domestic, commercial, industrial, irrigation, and other nondomestic water use were derived through the use and innovative integration of several State and Federal databases, and by use of previously developed techniques. The New Hampshire Water Demand database was created as part of this study to store and integrate State of New Hampshire data central to the project. Within the New Hampshire Water Demand database, a lookup table was created to link the State databases and identify water users common to more than one database. The lookup table also allowed identification of withdrawal and return-flow locations of registered and unregistered commercial, industrial, agricultural, and other nondomestic users. Geographic information system data from the State were used in combination with U.S. Census Bureau spatial data to locate and quantify withdrawals and return flow for domestic users in each census block. Analyzing and processing the most recently available data resulted in census-block estimations of 2005 water use. Applying population projections developed by the State to the data sets enabled projection of water use for the year 2020. The results for each census block are stored in the New Hampshire Water Demand database and may be aggregated to larger political areas or watersheds to assess relative hydrologic stress on the basis of current and potential water availability.
Error Analyses of the North Alabama Lightning Mapping Array (LMA)
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Solokiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J. M.; Bailey, J. C.; Krider, E. P.; Bateman, M. G.; Boccippio, D. J.
2003-01-01
Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA-MSFC and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50 ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results, except that the chi-squared theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.
NASA Astrophysics Data System (ADS)
Cardona, Javier Fernando; García Bonilla, Alba Carolina; Tomás García, Rogelio
2017-11-01
This article shows that the effect of all quadrupole errors present in an interaction region with low β* can be modeled by an equivalent magnetic kick, which can be estimated from action and phase jumps found in beam position data. This equivalent kick is used to find the strengths that certain normal and skew quadrupoles located in the IR must have to make an effective correction in that region. Additionally, averaging techniques to reduce noise in beam position data, which allow precise estimates of equivalent kicks, are presented and mathematically justified. The complete procedure is tested with simulated data obtained from madx and with 2015 LHC experimental data. The analyses performed on the experimental data indicate that the strengths of the IR skew quadrupole correctors and normal quadrupole correctors can be estimated within a 10% uncertainty. Finally, the effect of IR corrections on the β* is studied, and a correction scheme that returns this parameter to its design value is proposed.
Downward Atmospheric Longwave Radiation in the City of Sao Paulo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbaro, Eduardo W.; Oliveira, Amauri P.; Soares, Jacyra
2009-03-11
This work objectively evaluates the consistency and quality of a 9-year dataset based on 5-minute average values of downward longwave atmospheric (LW) emission, shortwave radiation, temperature, and relative humidity. All these parameters were observed simultaneously and continuously from 1997 to 2006 on the IAG micrometeorological platform, located at the top of the IAG-USP building. The pyrgeometer dome emission effect was removed using a neural network technique, reducing the downward longwave atmospheric emission error to 3.5%. The comparison between the monthly average values of LW emission observed in Sao Paulo and satellite estimates from the SRB-NASA project indicated very good agreement. Furthermore, this work investigates the performance of 10 empirical expressions for estimating the LW emission at the surface. The comparison between the models indicates that Brunt's expression presents the best results, with the smallest MBE and RMSE and the largest index of agreement d; Brunt's is therefore the most suitable model for estimating LW emission under clear-sky conditions in the city of Sao Paulo.
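Brunt-type schemes estimate clear-sky downward longwave radiation from screen-level temperature and humidity alone. A minimal sketch follows, using Brunt's classical coefficients (the paper fits site-specific values for Sao Paulo) and the Magnus formula for vapor pressure.

```python
# Brunt-type clear-sky downward longwave: LW = (a + b*sqrt(e)) * sigma * T^4,
# with vapor pressure e in hPa; coefficients are Brunt's classical values,
# not the paper's fitted ones.
import numpy as np

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def brunt_lw(temp_c, rh_percent, a=0.52, b=0.065):
    e_sat = 6.112 * np.exp(17.62 * temp_c / (243.12 + temp_c))  # Magnus, hPa
    e = rh_percent / 100.0 * e_sat
    T = temp_c + 273.15
    return (a + b * np.sqrt(e)) * SIGMA * T ** 4

print("LW at 25 C, 70% RH:", brunt_lw(25.0, 70.0), "W/m^2")
```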
Practical Considerations for Optic Nerve Estimation in Telemedicine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karnowski, Thomas Paul; Aykac, Deniz; Chaum, Edward
The projected increase in diabetes in the United States and worldwide has created a need for broad-based, inexpensive screening for diabetic retinopathy (DR), an eye disease which can lead to vision impairment. A telemedicine network with retina cameras and automated quality control, physiological feature location, and lesion/anomaly detection is a low-cost way of achieving broad-based screening. In this work we report on the effect of quality estimation on an optic nerve (ON) detection method with a confidence metric. We report on an improvement of the fusion technique using a data set from an ophthalmologist's practice, then show the results of the method as a function of image quality on a set of images from an online telemedicine network collected in spring 2009 and another broad-based screening program. We show that the fusion method, combined with quality estimation processing, can improve detection performance and also provide a method for utilizing a physician-in-the-loop for images that may exceed the capabilities of automated processing.
MoisturEC: a new R program for moisture content estimation from electrical conductivity data
Terry, Neil; Day-Lewis, Frederick D.; Werkema, Dale D.; Lane, John W.
2018-01-01
Noninvasive geophysical estimation of soil moisture has potential to improve understanding of flow in the unsaturated zone for problems involving agricultural management, aquifer recharge, and optimization of landfill design and operations. In principle, several geophysical techniques (e.g., electrical resistivity, electromagnetic induction, and nuclear magnetic resonance) offer insight into soil moisture, but data‐analysis tools are needed to “translate” geophysical results into estimates of soil moisture, consistent with (1) the uncertainty of this translation and (2) direct measurements of moisture. Although geostatistical frameworks exist for this purpose, straightforward and user‐friendly tools are required to fully capitalize on the potential of geophysical information for soil‐moisture estimation. Here, we present MoisturEC, a simple R program with a graphical user interface to convert measurements or images of electrical conductivity (EC) to soil moisture. Input includes EC values, point moisture estimates, and definition of either Archie parameters (based on experimental or literature values) or empirical data of moisture vs. EC. The program produces two‐ and three‐dimensional images of moisture based on available EC and direct measurements of moisture, interpolating between measurement locations using a Tikhonov regularization approach.
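In the Archie option, the core EC-to-moisture translation reduces to inverting Archie's law for saturation. The sketch below is a Python rendering of that step (MoisturEC itself is an R program); the porosity and Archie parameters are illustrative values, not the program's defaults.

```python
# Archie-law inversion from bulk EC to volumetric moisture; a sketch with
# illustrative parameter values.
import numpy as np

def moisture_from_ec(sigma_bulk, sigma_water, porosity, a=1.0, m=1.5, n=2.0):
    """Invert Archie's law sigma_bulk = sigma_water * phi^m * S^n / a
    for saturation S, then return theta = phi * S (clipped to [0, phi])."""
    S = (a * sigma_bulk / (sigma_water * porosity ** m)) ** (1.0 / n)
    return porosity * np.clip(S, 0.0, 1.0)

ec = np.array([0.002, 0.005, 0.010])   # bulk EC, S/m
print(moisture_from_ec(ec, sigma_water=0.05, porosity=0.35))
```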
Estimating the number of male sex workers with the capture-recapture technique in Nigeria.
Adebajo, Sylvia B; Eluwa, George I; Tocco, Jack U; Ahonsi, Babatunde A; Abiodun, Lolade Y; Anene, Oliver A; Akpona, Dennis O; Karlyn, Andrew S; Kellerman, Scott
2013-12-01
Estimating the size of populations most affected by HIV, such as men who have sex with men (MSM), though crucial for structuring responses to the epidemic, presents significant challenges, especially in a developing society. Using capture-recapture methodology, the size of the MSM sex worker (MSM-SW) population in Nigeria was estimated in three major cities (Lagos, Kano and Port Harcourt) between July and December 2009. Following interviews with key informants, locations and times when MSM-SW were available to male clients were mapped and designated as "hotspots". Counts were conducted on two consecutive weekends. Population estimates were computed using a standardized Lincoln formula. Fifty-six hotspots were identified in Kano, 38 in Lagos and 42 in Port Harcourt. On a given weekend night, Port Harcourt had the largest estimated population of MSM sex workers, 723 (95% CI: 594-892), followed by Lagos state with 620 (95% CI: 517-724) and Kano state with 353 (95% CI: 332-373). This study documents a large population of MSM-SW in 3 Nigerian cities where higher HIV prevalence among MSM compared to the general population has been documented. Research and programming are needed to better understand and address the health vulnerabilities that MSM-SW and their clients face.
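The Lincoln (Lincoln-Petersen) computation itself is a one-liner: treat the first weekend's count as the "marked" sample and individuals seen on both weekends as recaptures. The sketch below uses invented counts and a normal-approximation interval; the study's exact standardization may differ.

```python
# Two-sample Lincoln-Petersen abundance estimate with an approximate 95% CI;
# the counts are invented for illustration.
import numpy as np

n1 = 410   # counted on the first weekend ("marked")
n2 = 395   # counted on the second weekend
m = 240    # counted on both weekends ("recaptures")

N_hat = n1 * n2 / m
var_N = (n1 ** 2) * n2 * (n2 - m) / m ** 3   # common LP variance form
ci = 1.96 * np.sqrt(var_N)
print(f"N = {N_hat:.0f} (95% CI {N_hat - ci:.0f}-{N_hat + ci:.0f})")
```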
Mesospheric gravity wave momentum flux estimation using hybrid Doppler interferometry
NASA Astrophysics Data System (ADS)
Spargo, Andrew J.; Reid, Iain M.; MacKinnon, Andrew D.; Holdsworth, David A.
2017-06-01
Mesospheric gravity wave (GW) momentum flux estimates using data from multibeam Buckland Park MF radar (34.6° S, 138.5° E) experiments (conducted from July 1997 to June 1998) are presented. On transmission, five Doppler beams were symmetrically steered about the zenith (one zenith beam and four off-zenith beams in the cardinal directions). The received beams were analysed with hybrid Doppler interferometry (HDI) (Holdsworth and Reid, 1998), principally to determine the radial velocities of the effective scattering centres illuminated by the radar. The methodology of Thorsen et al. (1997), later re-introduced by Hocking (2005) and since extensively applied to meteor radar returns, was used to estimate components of Reynolds stress due to propagating GWs and/or turbulence in the radar resolution volume. Physically reasonable momentum flux estimates are derived from the Reynolds stress components, which are also verified using a simple radar model incorporating GW-induced wind perturbations. On the basis of these results, we recommend the intercomparison of momentum flux estimates between co-located meteor radars and vertical-beam interferometric MF radars. It is envisaged that such intercomparisons will assist with the clarification of recent concerns (e.g. Vincent et al., 2010) of the accuracy of the meteor radar technique.
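The Reynolds-stress step behind such momentum flux estimates can be illustrated with the classic dual-coplanar-beam relation of Vincent and Reid (1983), which the Thorsen/Hocking formulation generalizes: for opposing beams at off-zenith angle theta, <u'w'> = (<v'^2_east> - <v'^2_west>) / (2 sin 2*theta). The sketch below verifies the relation on synthetic perturbation winds; it is not the HDI processing chain.

```python
# Dual-beam Reynolds-stress estimate from radial-velocity variances; a
# sketch on synthetic winds with a prescribed <u'w'>.
import numpy as np

theta = np.deg2rad(13.0)   # off-zenith beam angle (illustrative)
rng = np.random.default_rng(2)

n = 5000
w = rng.normal(0, 1.0, n)
u = -0.3 * w + rng.normal(0, 3.0, n)   # correlated so that <u'w'> < 0

vr_east = u * np.sin(theta) + w * np.cos(theta)   # radial velocities
vr_west = -u * np.sin(theta) + w * np.cos(theta)

uw = (np.var(vr_east) - np.var(vr_west)) / (2.0 * np.sin(2.0 * theta))
print("estimated <u'w'>:", uw, " true:", np.mean(u * w))
```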
Experimental Methods to Estimate Accumulated Solids in Nuclear Waste Tanks - 13313
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duignan, Mark R.; Steeper, Timothy J.; Steimke, John L.
2013-07-01
The Department of Energy has a large number of nuclear waste tanks. It is important to know if fissionable materials can concentrate when waste is transferred from staging tanks prior to feeding waste treatment plants. Specifically, there is a concern that large, dense particles, e.g., plutonium containing, could accumulate in poorly mixed regions of a blend tank heel for tanks that employ mixing jet pumps. At the request of the DOE Hanford Tank Operations Contractor, Washington River Protection Solutions, the Engineering Development Laboratory of the Savannah River National Laboratory performed a scouting study in a 1/22-scale model of a waste tank to investigate this concern and to develop measurement techniques that could be applied in a more extensive study at a larger scale. Simulated waste tank solids and supernatant were charged to the test tank and rotating liquid jets were used to remove most of the solids. Then the volume and shape of the residual solids and the spatial concentration profiles for the surrogate for plutonium were measured. This paper discusses the overall test results, which indicated heavy solids only accumulate during the first few transfer cycles, along with the techniques and equipment designed and employed in the test. Those techniques include: - Magnetic particle separator to remove stainless steel solids, the plutonium surrogate, from a flowing stream. - Magnetic wand used to manually remove stainless steel solids from samples and the tank heel. - Photographs used to determine the volume and shape of the solids mounds by developing a composite of topographical areas. - Laser range finders to determine the volume and shape of the solids mounds. - Core sampler to determine the stainless steel solids distribution within the solids mounds. - Computer-driven positioner that placed the laser range finders and the core sampler over solids mounds that accumulated on the bottom of a scaled staging tank in locations where jet velocities were low. These devices and techniques were very effective for estimating the movement, location, and concentrations of the solids representing plutonium and are expected to perform well at a larger scale. The operation of the techniques and their measurement accuracies are discussed, as well as the overall results of the accumulated solids test. (authors)
Toward a probabilistic acoustic emission source location algorithm: A Bayesian approach
NASA Astrophysics Data System (ADS)
Schumacher, Thomas; Straub, Daniel; Higgins, Christopher
2012-09-01
Acoustic emissions (AE) are stress waves initiated by sudden strain releases within a solid body. These can be caused by internal mechanisms such as crack opening or propagation, crushing, or rubbing of crack surfaces. One application for the AE technique in the field of Structural Engineering is Structural Health Monitoring (SHM). With piezo-electric sensors mounted to the surface of the structure, stress waves can be detected, recorded, and stored for later analysis. An important step in quantitative AE analysis is the estimation of the stress wave source locations. Commonly, source location results are presented in a rather deterministic manner as spatial and temporal points, excluding information about uncertainties and errors. Due to variability in the material properties and uncertainty in the mathematical model, measures of uncertainty are needed beyond best-fit point solutions for source locations. This paper introduces a novel holistic framework for the development of a probabilistic source location algorithm. Bayesian analysis methods with Markov Chain Monte Carlo (MCMC) simulation are employed where all source location parameters are described with posterior probability density functions (PDFs). The proposed methodology is applied to an example employing data collected from a realistic section of a reinforced concrete bridge column. The selected approach is general and has the advantage that it can be extended and refined efficiently. Results are discussed and future steps to improve the algorithm are suggested.
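A stripped-down version of such a probabilistic source-location step is sketched below: arrival-time picks at four sensors, a homogeneous assumed wave speed, a Gaussian likelihood on the picks, and a random-walk Metropolis sampler over (x, y, t0). The geometry, speed, and noise level are invented, and the paper's framework (which also handles material-property uncertainty) is far richer.

```python
# Random-walk Metropolis sampling of an AE source location from arrival
# times; a minimal sketch, not the paper's full Bayesian framework.
import numpy as np

rng = np.random.default_rng(4)
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 0.6], [0.0, 0.6]])  # m
c = 4000.0      # assumed wave speed, m/s
sigma_t = 2e-6  # arrival-time pick uncertainty, s

src_true, t0_true = np.array([0.63, 0.22]), 1e-4
t_obs = t0_true + np.linalg.norm(sensors - src_true, axis=1) / c
t_obs += rng.normal(0, sigma_t, len(sensors))

def log_like(params):
    x, y, t0 = params
    t_pred = t0 + np.linalg.norm(sensors - np.array([x, y]), axis=1) / c
    return -0.5 * np.sum(((t_obs - t_pred) / sigma_t) ** 2)

chain = np.zeros((20000, 3))
chain[0] = [0.5, 0.3, t_obs.min() - 0.3 / c]   # crude starting point
ll = log_like(chain[0])
step = np.array([0.02, 0.02, 2e-6])            # proposal scales
for i in range(1, len(chain)):
    prop = chain[i - 1] + rng.normal(0, step)
    ll_prop = log_like(prop)
    if np.log(rng.uniform()) < ll_prop - ll:   # Metropolis accept/reject
        chain[i], ll = prop, ll_prop
    else:
        chain[i] = chain[i - 1]

post = chain[5000:]   # discard burn-in
print("posterior mean (x, y, t0):", post.mean(axis=0))
print("posterior std  (x, y, t0):", post.std(axis=0))
```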
Testing the PV-Theta Mapping Technique in a 3-D CTM Model Simulation
NASA Technical Reports Server (NTRS)
Frith, Stacey M.
2004-01-01
Mapping lower stratospheric ozone into potential vorticity (PV)- potential temperature (Theta) coordinates is a common technique employed to analyze sparse data sets. Ozone transformed into a flow-following dynamical coordinate system is insensitive to meteorological variations. Therefore data from a wide range of times/locations can be compared, so long as the measurements were made in the same airmass (as defined by PV). Moreover, once a relationship between ozone and PV/Theta is established, a full 3D ozone field can be estimated from this relationship and the 3D analyzed PV field. However, ozone data mapped in this fashion can be hampered by noisy PV fields, or "mis-matches" in the resolution and/or exact location of the ozone and PV measurements. In this study, we investigate the PV-ozone relationship using output from a recent 50-year run of the Goddard 3D chemical transport model (CTM). Model constituents are transported using off-line dynamics from the finite volume general circulation model (FVGCM). By using the internally consistent model PV and ozone fields, we minimize noise due to mis-matching and resolution issues. We calculate correlations between model ozone and PV throughout the stratosphere, and test the sensitivity of the technique to initial data resolution. To do this we degrade the model data to that of various satellite instruments, then compare the mapped fields derived from the sub-sampled data to the full resolution model data. With these studies we can determine appropriate limits for the PV-theta mapping technique in latitude, altitude, and as a function of original data resolution.
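The mapping itself is a binning exercise: tabulate mean ozone as a function of PV on a theta surface from sparse samples, then push the full analyzed PV field through the lookup table. The sketch below does exactly that on synthetic fields in which ozone is constructed to track PV, so the only error is sampling noise; all values are invented.

```python
# PV-binned ozone lookup and remapping on one theta surface; a sketch on
# synthetic stand-in fields.
import numpy as np

rng = np.random.default_rng(6)
pv_field = rng.normal(0, 2.0, (90, 180))   # analyzed PV on the surface
ozone_truth = 3.0 + 0.4 * pv_field         # assume ozone tracks PV

# Sparse "observations": a few hundred random grid points.
idx = rng.choice(pv_field.size, 400, replace=False)
pv_obs = pv_field.ravel()[idx]
o3_obs = ozone_truth.ravel()[idx] + rng.normal(0, 0.1, idx.size)

# Bin ozone by PV and tabulate the mean relationship.
bins = np.linspace(pv_obs.min(), pv_obs.max(), 21)
which = np.clip(np.digitize(pv_obs, bins) - 1, 0, 19)
o3_of_pv = np.array([o3_obs[which == k].mean() if np.any(which == k)
                     else np.nan for k in range(20)])

# Map the tabulated relationship back through the full PV field.
which_full = np.clip(np.digitize(pv_field.ravel(), bins) - 1, 0, 19)
o3_mapped = o3_of_pv[which_full].reshape(pv_field.shape)
print("mapped-field RMS error:",
      np.sqrt(np.nanmean((o3_mapped - ozone_truth) ** 2)))
```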
Ivorra, Eugenio; Verdu, Samuel; Sánchez, Antonio J; Grau, Raúl; Barat, José M
2016-10-19
A technique that combines the spatial resolution of a 3D structured-light (SL) imaging system with the spectral analysis of a hyperspectral short-wave near-infrared system was developed for freshness predictions of gilthead sea bream on the first storage days (Days 0-6). This novel approach allows the hyperspectral analysis of very specific fish areas, which provides more information for freshness estimations. The SL system obtains a 3D reconstruction of fish, and an automatic method locates the gilthead's pupils and irises. Once these regions are positioned, the hyperspectral camera acquires spectral information and a multivariate statistical study is done. The best region is the pupil, with an R² of 0.92 and an RMSE of 0.651 for predictions. We conclude that the combination of 3D technology with hyperspectral analysis offers plenty of potential and is a very promising technique to non-destructively predict gilthead freshness.
Study of Profile Changes during Mechanical Polishing using Relocation Profilometry
NASA Astrophysics Data System (ADS)
Kumaran, S. Chidambara; Shunmugam, M. S.
2017-10-01
Mechanical polishing is a finishing process conventionally practiced to enhance surface quality. Surface finish is improved by the mechanical cutting action of abrasive particles on the work surface. Polishing is complex in nature, and research efforts have focused on understanding the polishing mechanism. Studying changes in the profile is a useful method for understanding the behavior of the polishing process. Such a study requires tracing the same profile at regular process intervals, which is a tedious job. An innovative relocation technique is followed in the present work to study profile changes during mechanical polishing of an austenitic stainless steel specimen. Using a special locating fixture, a micro-indentation mark, and a cross-correlation technique, the same profile is traced at certain process intervals. Comparison of different profile parameters shows the manner in which metal removal takes place in the polishing process. Mass removal during the process, estimated by the same relocation technique, is checked against that obtained using weight measurement. The proposed approach can be extended to other micro/nano finishing processes, and favorable process conditions can be identified.
NASA Astrophysics Data System (ADS)
Angel, Erin
Advances in Computed Tomography (CT) technology have led to an increase in the modality's diagnostic capabilities and therefore its utilization, which has in turn led to an increase in radiation exposure to the patient population. As a result, CT imaging currently constitutes approximately half of the collective exposure to ionizing radiation from medical procedures. In order to understand the radiation risk, it is necessary to estimate the radiation doses absorbed by patients undergoing CT imaging. The most widely accepted risk models are based on radiosensitive organ dose as opposed to whole body dose. In this research, radiosensitive organ dose was estimated using Monte Carlo based simulations incorporating detailed multidetector CT (MDCT) scanner models, specific scan protocols, and patient models based on accurate patient anatomy and representing a range of patient sizes. Organ doses were estimated for clinical MDCT exam protocols which pose a specific concern for radiosensitive organs or regions. These dose estimates include estimation of fetal dose for pregnant patients undergoing abdomen/pelvis CT exams or undergoing exams to diagnose pulmonary embolism and venous thromboembolism. Breast and lung dose were estimated for patients undergoing coronary CTA imaging, conventional fixed tube current chest CT, and conventional tube current modulated (TCM) chest CT exams. The correlation of organ dose with patient size was quantified for pregnant patients undergoing abdomen/pelvis exams and for all breast and lung dose estimates presented. Novel dose reduction techniques were developed that incorporate organ location and are specifically designed to reduce dose to radiosensitive organs during CT acquisition. A generalizable model was created for simulating conventional and novel attenuation-based TCM algorithms which can be used in simulations estimating organ dose for any patient model. The generalizable model is a significant contribution of this work, as it lays the foundation for the future of simulating TCM using Monte Carlo methods. As a result of this research, organ dose can be estimated for individual patients undergoing specific conventional MDCT exams. This research also brings understanding to conventional and novel dose reduction techniques in CT and their effect on organ dose.
Optimal regionalization of extreme value distributions for flood estimation
NASA Astrophysics Data System (ADS)
Asadi, Peiman; Engelke, Sebastian; Davison, Anthony C.
2018-01-01
Regionalization methods have long been used to estimate high return levels of river discharges at ungauged locations on a river network. In these methods, discharge measurements from a homogeneous group of similar, gauged, stations are used to estimate high quantiles at a target location that has no observations. The similarity of this group to the ungauged location is measured in terms of a hydrological distance measuring differences in physical and meteorological catchment attributes. We develop a statistical method for estimation of high return levels based on regionalizing the parameters of a generalized extreme value distribution. The group of stations is chosen by optimizing over the attribute weights of the hydrological distance, ensuring similarity and in-group homogeneity. Our method is applied to discharge data from the Rhine basin in Switzerland, and its performance at ungauged locations is compared to that of other regionalization methods. For gauged locations we show how our approach improves the estimation uncertainty for long return periods by combining local measurements with those from the chosen group.
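The at-site ingredient being regionalized is a GEV fit and its return levels. A minimal sketch with scipy follows, on synthetic annual maxima rather than Rhine discharges; the paper's contribution, pooling stations by an optimized hydrological distance, is not reproduced.

```python
# At-site GEV fit and return-level estimation; a sketch on synthetic
# annual-maximum discharges.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(8)
annual_max = genextreme.rvs(c=-0.1, loc=300.0, scale=80.0,
                            size=60, random_state=rng)   # m^3/s

c, loc, scale = genextreme.fit(annual_max)
for T in (10, 50, 100):
    z = genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
    print(f"{T:>4}-year return level: {z:.0f} m^3/s")
```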
Comparative study of shear wave-based elastography techniques in optical coherence tomography
NASA Astrophysics Data System (ADS)
Zvietcovich, Fernando; Rolland, Jannick P.; Yao, Jianing; Meemon, Panomsak; Parker, Kevin J.
2017-03-01
We compare five optical coherence elastography techniques able to estimate the shear speed of waves generated by one and two sources of excitation. The first two techniques make use of one piezoelectric actuator in order to produce a continuous shear wave propagation or a tone-burst propagation (TBP) of 400 Hz over a gelatin tissue-mimicking phantom. The remaining techniques utilize a second actuator located on the opposite side of the region of interest in order to create three types of interference patterns: crawling waves, swept crawling waves, and standing waves, depending on the selection of the frequency difference between the two actuators. We evaluated accuracy, contrast-to-noise ratio, resolution, and acquisition time for each technique during experiments. Numerical simulations were also performed in order to support the experimental findings. Results suggest that in the presence of strong internal reflections, single-source methods are more accurate and less variable when compared to the two-actuator methods. In particular, TBP reports the best performance with an accuracy error <4.1%. Finally, the TBP was tested in a fresh chicken tibialis anterior muscle with a localized thermally ablated lesion in order to evaluate its performance in biological tissue.
Impact of HMO penetration and other environmental factors on hospital X-inefficiency.
Rosko, M D
2001-12-01
This study examined the impact of health maintenance organization (HMO) market penetration and other internal and external environmental factors on hospital X-inefficiency in a national sample (N = 1,966) of urban U.S. hospitals in 1997. Stochastic frontier analysis, a frontier regression technique, was used to measure X-inefficiency and estimate parameters of the correlates of X-inefficiency. Log-likelihood restriction tests were used to test a variety of assumptions about the empirical model that guided its selection. Average estimated X-inefficiency in study hospitals was 12.96 percent. Increases in managed care penetration, dependence on Medicare and Medicaid, membership in a multihospital system, and location in areas where competitive pressures and the pool of uncompensated care are greater were associated with less X-inefficiency. Not-for-profit ownership was associated with increased X-inefficiency.
Porosity estimation of aged mortar using a micromechanical model.
Hernández, M G; Anaya, J J; Sanchez, T; Segura, I
2006-12-22
Degradation of concrete structures located in high-humidity atmospheres or under flowing water is a very important problem. In this study, a method for ultrasonic non-destructive characterization of aged mortar is presented. The proposed method predicts the behaviour of aged mortar with a three-phase micromechanical model using ultrasonic measurements. Aging of the mortar was accelerated by immersing the specimens in an ammonium nitrate solution. Both destructive and non-destructive characterization of the mortar was performed. Destructive tests of porosity were performed using a vacuum saturation method, and non-destructive characterization was carried out using ultrasonic velocities. Aging experiments show that mortar degradation involves not only a porosity increase but also microstructural changes in the cement matrix. Experimental results show that the porosity estimated using the proposed non-destructive methodology had performance comparable to classical destructive techniques.
Feasibility of a ring FEL at low emittance storage rings
NASA Astrophysics Data System (ADS)
Agapov, I.
2015-09-01
A scheme for generating coherent radiation at latest generation low emittance storage rings such as PETRA III at DESY (Balewski et al., 2004 [1]) is proposed. The scheme is based on focusing and subsequent defocusing of the electron beam in the longitudinal phase space at the undulator location. The expected performance characteristics are estimated for radiation in the wavelength range of 500-1500 eV. It is shown that the average brightness is increased by several orders of magnitude compared to spontaneous undulator radiation, which can open new perspectives for photon-hungry soft X-ray spectroscopy techniques.
1981-07-01
process is observed over all of (0,1], the reproducing kernel Hilbert space (RKHS) techniques developed by Parzen (1961a, 1961b) may be used to construct... covariance kernel, R, for the process (1.1) is the reproducing kernel for a reproducing kernel Hilbert space (RKHS) which will be denoted as H(R) (c.f. ...2.6), it is known that (c.f. Eubank, Smith and Smith (1981a, 1981b)), i) H(R) is a Hilbert function space consisting of functions which satisfy, for f ∈ H(R)
Guide to luminescence dating techniques and their application for paleoseismic research
Gray, Harrison J.; Mahan, Shannon; Rittenour, Tammy M.; Nelson, Michelle Summa; Lund, William R.
2015-01-01
Over the past 25 years, luminescence dating has become a key tool for dating sediments of interest in paleoseismic research. The data obtained from luminescence dating has been used to determine timing of fault displacement, calculate slip rates, and estimate earthquake recurrence intervals. The flexibility of luminescence is a key complement to other chronometers such as radiocarbon or cosmogenic nuclides. Careful sampling and correct selection of sample sites exert two of the strongest controls on obtaining an accurate luminescence age. Factors such as partial bleaching and post-depositional mixing should be avoided during sampling and special measures may be needed to help correct for associated problems. Like all geochronologic techniques, context is necessary for interpreting and calculating luminescence results and this can be achieved by supplying participating labs with associated trench logs, photos, and stratigraphic locations of sample sites.
Sim, Jae-Ang; Kim, Jong-Min; Lee, Sahnghoon; Bae, Ji-Yong; Seon, Jong-Keun
2017-04-01
Although trans-portal and outside-in techniques are commonly used for anatomical ACL reconstruction, there is very little information on the variability in tunnel placement between the two techniques. A total of 103 patients who received ACL reconstruction using the trans-portal (50 patients) or outside-in technique (53 patients) were included in the study. The ACL tunnel location, tunnel length, and graft-femoral tunnel angle were analyzed using 3D CT knee models, and we compared the location and length of the femoral and tibial tunnels, and the graft bending angle, between the two techniques. The variability of each technique regarding tunnel location, length, and graft tunnel angle was also compared using range values. There were no differences in average femoral tunnel depth and height between the two groups. The ranges of femoral tunnel depth and height showed no difference between the two groups (36 and 41% in the trans-portal technique vs. 32 and 41% in the outside-in technique). The average values and ranges of tibial tunnel location also showed similar results in the two groups. The outside-in technique produced a longer femoral tunnel than the trans-portal technique (36.8 vs. 34.0 mm, p = 0.001). The range of femoral tunnel length was also wider in the trans-portal technique than in the outside-in technique. Although the outside-in technique showed a significantly more acute graft bending angle than the trans-portal technique in average values, the trans-portal technique showed wider ranges of graft bending angle than the outside-in technique (ranges 73° (SD 13.6) vs. 53° (SD 10.7), respectively). Although both trans-portal and outside-in techniques in ACL reconstruction can provide relatively consistent femoral and tibial tunnel locations, the trans-portal technique showed higher variability in femoral tunnel length and graft bending angle than the outside-in technique. Therefore, the outside-in technique in ACL reconstruction is considered the more effective method for surgeons to make a consistent femoral tunnel. Level of evidence: III.