USDA-ARS's Scientific Manuscript database
A change detection experiment for an invasive species, saltcedar, near Lovelock, Nevada, was conducted with multi-date Compact Airborne Spectrographic Imager (CASI) hyperspectral datasets. Classification and NDVI differencing change detection methods were tested. In the classification strategy, a p...
Ice Sheet Change Detection by Satellite Image Differencing
NASA Technical Reports Server (NTRS)
Bindschadler, Robert A.; Scambos, Ted A.; Choi, Hyeungu; Haran, Terry M.
2010-01-01
Differencing of digital satellite image pairs highlights subtle changes in near-identical scenes of Earth surfaces. Using the mathematical relationships relevant to photoclinometry, we examine the effectiveness of this method for the study of localized ice sheet surface topography changes using numerical experiments. We then test these results by differencing images of several regions in West Antarctica, including some where changes have previously been identified in altimeter profiles. The technique works well with coregistered images having low noise, high radiometric sensitivity, and near-identical solar illumination geometry. Clouds and frosts detract from resolving surface features. The ETM+ sensor on Landsat-7, the ALI sensor on EO-1, and the MODIS sensor on the Aqua and Terra satellite platforms all have potential for detecting localized topographic changes such as shifting dunes, surface inflation and deflation features associated with sub-glacial lake fill-drain events, or grounding line changes. Availability and frequency of MODIS images favor this sensor for wide application, and using it, we demonstrate both qualitative identification of changes in topography and quantitative mapping of slope and elevation changes.
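The core operation described here is pixel-wise differencing of coregistered scenes followed by a noise-aware threshold. Below is a minimal numpy sketch of that step only (it does not implement the photoclinometric slope-to-elevation conversion); the function name, the noise model, and the k-sigma threshold are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np

def difference_image(img_ref, img_new, noise_sigma, k=3.0):
    """Difference two coregistered images and flag pixels whose change
    exceeds k times the combined per-pixel noise level."""
    img_ref = img_ref.astype(np.float64)
    img_new = img_new.astype(np.float64)
    diff = img_new - img_ref                      # signed brightness change
    # Noise of a difference of two equally noisy images adds in quadrature.
    threshold = k * noise_sigma * np.sqrt(2.0)
    change_mask = np.abs(diff) > threshold
    return diff, change_mask

# Example with synthetic data standing in for two near-identical scenes.
rng = np.random.default_rng(0)
before = 100 + rng.normal(0, 1.0, (512, 512))
after = before.copy()
after[200:230, 300:340] += 5.0                    # a localized surface change
d, mask = difference_image(before, after, noise_sigma=1.0)
print(mask.sum(), "pixels flagged as changed")
```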
Lu, Dengsheng; Batistella, Mateus; Moran, Emilio
2009-01-01
Traditional change detection approaches have proven difficult for detecting vegetation changes in moist tropical regions from multitemporal images. This paper explores the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data for vegetation change detection in the Brazilian Amazon. A principal component analysis was used to integrate TM and HRG panchromatic data. Vegetation change/non-change was detected with the image differencing approach based on the TM and HRG fused image and the corresponding TM image. A rule-based approach was used to classify the TM and HRG multispectral images into thematic maps with three coarse land-cover classes: forest, non-forest vegetation, and non-vegetation lands. A hybrid approach combining image differencing and post-classification comparison was used to detect vegetation change trajectories. This research indicates promising vegetation change detection techniques, especially for vegetation gain and loss, even when very limited reference data are available. PMID:19789721
Change analysis in the United Arab Emirates: An investigation of techniques
Sohl, Terry L.
1999-01-01
Much of the landscape of the United Arab Emirates has been transformed over the past 15 years by massive afforestation, beautification, and agricultural programs. The "greening" of the United Arab Emirates has had environmental consequences, however, including degraded groundwater quality and possible damage to natural regional ecosystems. Personnel from the Ground-Water Research project, a joint effort between the National Drilling Company of the Abu Dhabi Emirate and the U.S. Geological Survey, were interested in studying landscape change in the Abu Dhabi Emirate using Landsat thematic mapper (TM) data. The EROS Data Center in Sioux Falls, South Dakota was asked to investigate land-cover change techniques that (1) provided locational, quantitative, and qualitative information on land-cover change within the Abu Dhabi Emirate; and (2) could be easily implemented by project personnel who were relatively inexperienced in remote sensing. A number of products were created with 1987 and 1996 Landsat TM data using change-detection techniques, including univariate image differencing, an "enhanced" image differencing, vegetation index differencing, post-classification differencing, and change-vector analysis. The different techniques provided products that varied in levels of adequacy according to the specific application and the ease of implementation and interpretation. Specific quantitative values of change were most accurately and easily provided by the enhanced image-differencing technique, while the change-vector analysis excelled at providing rich qualitative detail about the nature of a change.
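Two of the techniques named above, vegetation index differencing and change-vector analysis, reduce to short formulas. The sketch below shows generic textbook versions in numpy, assuming coregistered band arrays; it is not the "enhanced" image differencing developed for this project.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + 1e-9)

def ndvi_difference(nir_t1, red_t1, nir_t2, red_t2):
    """Vegetation index differencing: positive values indicate greening."""
    return ndvi(nir_t2, red_t2) - ndvi(nir_t1, red_t1)

def change_vector(stack_t1, stack_t2):
    """Change vector analysis on band stacks shaped (bands, rows, cols):
    returns the per-pixel change magnitude and the direction vectors."""
    delta = stack_t2.astype(np.float64) - stack_t1.astype(np.float64)
    magnitude = np.sqrt((delta ** 2).sum(axis=0))
    return magnitude, delta
```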
NASA Astrophysics Data System (ADS)
Saur, Günter; Krüger, Wolfgang
2016-06-01
Change detection is an important task when using unmanned aerial vehicles (UAV) for video surveillance. We address changes of short time scale using observations in time distances of a few hours. Each observation (previous and current) is a short video sequence acquired by UAV in near-nadir view. Relevant changes are, e.g., recently parked or moved vehicles. Examples for non-relevant changes are parallaxes caused by 3D structures of the scene, shadow and illumination changes, and compression or transmission artifacts. In this paper we present (1) a new feature-based approach to change detection, (2) a combination with extended image differencing (Saur et al., 2014), and (3) the application to video sequences using temporal filtering. In the feature-based approach, information about local image features, e.g., corners, is extracted in both images. The label "new object" is generated at image points where features occur in the current image and no or weaker features are present in the previous image. The label "vanished object" corresponds to missing or weaker features in the current image and present features in the previous image. This leads to two "directed" change masks and differs from image differencing, where only one "undirected" change mask is extracted, combining both label types into the single label "changed object". The combination of both algorithms is performed by merging the change masks of both approaches. A color mask showing the different contributions is used for visual inspection by a human image interpreter.
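A hedged sketch of the directed-labelling idea, assuming per-pixel feature-strength maps (e.g. corner responses) have already been computed for both observations; the two thresholds are hypothetical parameters, not values from the paper.

```python
import numpy as np

def directed_change_masks(feat_prev, feat_curr, strong=0.6, weak=0.2):
    """Label 'new object' where a strong feature response exists in the
    current image but only a weak (or no) response in the previous image,
    and 'vanished object' for the opposite case.  feat_* are per-pixel
    feature-strength maps normalized to [0, 1]."""
    new_object = (feat_curr >= strong) & (feat_prev <= weak)
    vanished_object = (feat_prev >= strong) & (feat_curr <= weak)
    return new_object, vanished_object
```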
Short-term change detection for UAV video
NASA Astrophysics Data System (ADS)
Saur, Günter; Krüger, Wolfgang
2012-11-01
In recent years, there has been increased use of unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. An important application in this context is change detection in UAV video data. Here we address short-term change detection, in which the time between observations ranges from several minutes to a few hours. We distinguish this task from video motion detection (shorter time scale) and from long-term change detection, based on time series of still images taken several days, weeks, or even years apart. Examples for relevant changes we are looking for are recently parked or moved vehicles. As a prerequisite, a precise image-to-image registration is needed. Images are selected on the basis of the geo-coordinates of the sensor's footprint and with respect to a certain minimal overlap. The automatic image-based fine-registration adjusts the image pair to a common geometry by using a robust matching approach to handle outliers. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples for non-relevant changes are stereo disparity at 3D structures of the scene, changed length of shadows, and compression or transmission artifacts. To detect changes in image pairs we analyzed image differencing, local image correlation, and a transformation-based approach (multivariate alteration detection). As input we used color and gradient magnitude images. To cope with local misalignment of image structures we extended the approaches by a local neighborhood search. The algorithms are applied to several examples covering both urban and rural scenes. The local neighborhood search in combination with intensity and gradient magnitude differencing clearly improved the results. Extended image differencing performed better than both the correlation-based approach and the multivariate alteration detection. The algorithms are adapted to be used in semi-automatic workflows for the ABUL video exploitation system of Fraunhofer IOSB (see Heinze et al., 2010). In a further step we plan to incorporate more information from the video sequences into the change detection input images, e.g., by image enhancement or by along-track stereo, which are available in the ABUL system.
Alphan, Hakan
2013-03-01
The aim of this study is (1) to quantify landscape changes in the easternmost Mediterranean deltas using a bi-temporal binary change detection approach and (2) to analyze relationships between conservation/management designations and various categories of change that indicate type, degree and severity of human impact. For this purpose, image differencing and ratioing were applied to Landsat TM images of 1984 and 2006. A total of 136 candidate change images, including normalized difference vegetation index (NDVI) and principal component analysis (PCA) difference images, were tested to understand the performance of bi-temporal pre-classification analysis procedures in the Mediterranean delta ecosystems. Results showed that visible image algebra provided higher accuracies than did NDVI and PCA differencing. On the other hand, Band 5 differencing had one of the lowest change detection performances. Seven superclasses of change were identified using from/to change categories between the earlier and later dates. These classes were used to understand the spatial character of anthropogenic impacts in the study area and to derive qualitative and quantitative change information within and outside of the conservation/management areas. Change analysis indicated that natural site and wildlife reserve designations fell short of protecting sand dunes from agricultural expansion in the west. The east of the study area, however, was exposed to the least human impact because its nature conservation status kept human interference at a minimum. Implications of these changes were discussed and solutions were proposed to deal with management problems leading to environmental change.
Extended image differencing for change detection in UAV video mosaics
NASA Astrophysics Data System (ADS)
Saur, Günter; Krüger, Wolfgang; Schumann, Arne
2014-03-01
Change detection is one of the most important tasks when using unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. We address changes of short time scale, i.e. the observations are taken in time distances from several minutes up to a few hours. Each observation is a short video sequence acquired by the UAV in near-nadir view and the relevant changes are, e.g., recently parked or moved vehicles. In this paper we extend our previous approach of image differencing for single video frames to video mosaics. A precise image-to-image registration combined with a robust matching approach is needed to stitch the video frames to a mosaic. Additionally, this matching algorithm is applied to mosaic pairs in order to align them to a common geometry. The resulting registered video mosaic pairs are the input of the change detection procedure based on extended image differencing. A change mask is generated by an adaptive threshold applied to a linear combination of difference images of intensity and gradient magnitude. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples for non-relevant changes are stereo disparity at 3D structures of the scene, changed size of shadows, and compression or transmission artifacts. The special effects of video mosaicking such as geometric distortions and artifacts at moving objects have to be considered, too. In our experiments we analyze the influence of these effects on the change detection results by considering several scenes. The results show that for video mosaics this task is more difficult than for single video frames. Therefore, we extended the image registration by estimating an elastic transformation using a thin plate spline approach. The results for mosaics are comparable to those of single video frames and are useful for interactive image exploitation due to a larger scene coverage.
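The change-mask construction described here, a linear combination of intensity and gradient-magnitude difference images followed by an adaptive threshold, can be sketched as follows. The weights and the mean-plus-k-sigma threshold rule are assumptions standing in for the paper's tuned parameters.

```python
import numpy as np

def extended_difference(img_prev, img_curr, w_int=1.0, w_grad=1.0, k=2.5):
    """Combine intensity and gradient-magnitude differences and apply an
    adaptive (statistics-based) threshold to obtain a change mask."""
    a = img_prev.astype(np.float64)
    b = img_curr.astype(np.float64)

    d_int = np.abs(b - a)

    gy_a, gx_a = np.gradient(a)
    gy_b, gx_b = np.gradient(b)
    d_grad = np.abs(np.hypot(gx_b, gy_b) - np.hypot(gx_a, gy_a))

    combined = w_int * d_int + w_grad * d_grad
    # Adaptive threshold from the global statistics of the combined image.
    thr = combined.mean() + k * combined.std()
    return combined, combined > thr
```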
Detection of urban expansion in an urban-rural landscape with multitemporal QuickBird images
Lu, Dengsheng; Hetrick, Scott; Moran, Emilio; Li, Guiying
2011-01-01
Accurately detecting urban expansion with remote sensing techniques is a challenge due to the complexity of urban landscapes. This paper explored methods for detecting urban expansion with multitemporal QuickBird images in Lucas do Rio Verde, Mato Grosso, Brazil. Different techniques, including image differencing, principal component analysis (PCA), and comparison of classified impervious surface images with the matched filtering method, were used to detect urbanization. An impervious surface image classified with the hybrid method was used to modify the urbanization detection results. As a comparison, the original multispectral image and segmentation-based mean-spectral images were used during the detection of urbanization. This research indicates that the comparison of classified impervious surface images with the matched filtering method provides the best change detection performance, followed by the image differencing method based on segmentation-based mean spectral images. PCA is not a good method for urban change detection in this study. Shadows and high spectral variation within the impervious surfaces represent major challenges to the detection of urban expansion when high spatial resolution images are used. PMID:21799706
Alphan, Hakan
2011-11-01
The aim of this study is to compare various image algebra procedures for their efficiency in locating and identifying different types of landscape changes on the margin of a Mediterranean coastal plain, Cukurova, Turkey. Image differencing and ratioing were applied to the reflective bands of Landsat TM datasets acquired in 1984 and 2006. Normalized Difference Vegetation Index (NDVI) and Principal Component Analysis (PCA) differencing were also applied. The resulting images were tested for their capacity to detect nine change phenomena, which were a priori defined in a three-level classification scheme. These change phenomena included agricultural encroachment, sand dune afforestation, coastline changes and removal/expansion of reed beds. The percentage overall accuracies of different algebra products for each phenomenon were calculated and compared. The results showed that some of the changes, such as sand dune afforestation and reed bed expansion, were detected with accuracies varying between 85 and 97% by the majority of the algebra operations, while some other changes, such as logging, could only be detected by mid-infrared (MIR) ratioing. For optimizing change detection in similar coastal landscapes, underlying causes of these changes were discussed and guidelines for selecting bands and algebra operations were provided. Copyright © 2011 Elsevier Ltd. All rights reserved.
Detection of Deforestation and Land Conversion in Rondonia, Brazil Using Change Detection Techniques
NASA Technical Reports Server (NTRS)
Guild, Liane S.; Cohen, Warren B.; Kauffman, J. Boone; Peterson, David L. (Technical Monitor)
2001-01-01
Fires associated with tropical deforestation, land conversion, and land use greatly contribute to emissions as well as the depletion of carbon and nutrient pools. The objective of this research was to compare change detection techniques for identifying deforestation and cattle pasture formation during a period of early colonization and agricultural expansion in the vicinity of Jamari, Rondônia. Multi-date Landsat Thematic Mapper (TM) data between 1984 and 1992 were examined in a 94,370-ha area of active deforestation to map land cover change. The Tasseled Cap (TC) transformation was used to enhance the contrast between forest, cleared areas, and regrowth. TC images were stacked into a composite multi-date TC and used in a principal components (PC) transformation to identify change components. In addition, consecutive TC image pairs were differenced and stacked into a composite multi-date differenced image. A maximum likelihood classification of each image composite was compared for identification of land cover change. The multi-date TC composite classification had the best accuracy of 78.1% (kappa). By 1984, only 5% of the study area had been cleared, but by 1992, 11% of the area had been deforested, primarily for pasture, and 7% had been lost to hydroelectric dam flooding. Finally, discrimination of pasture versus cultivation was improved due to the ability to distinguish land under sustained clearing from land exhibiting regrowth with infrequent clearing.
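A minimal sketch of the multi-date composite plus principal components step, assuming the per-date (already transformed, e.g. Tasseled Cap) band stacks are available as (bands, rows, cols) arrays; the maximum likelihood classification stage is not shown, and the implementation is generic rather than the authors' exact processing chain.

```python
import numpy as np

def multidate_pca(stack_t1, stack_t2):
    """Stack two dates of transformed bands into one composite and run a
    principal components transform; later components tend to carry
    between-date change information.  Inputs are (bands, rows, cols)."""
    composite = np.concatenate([stack_t1, stack_t2], axis=0).astype(np.float64)
    n_bands, rows, cols = composite.shape
    X = composite.reshape(n_bands, -1)
    X -= X.mean(axis=1, keepdims=True)
    cov = np.cov(X)                              # band-by-band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]            # largest variance first
    pcs = eigvecs[:, order].T @ X
    return pcs.reshape(n_bands, rows, cols), eigvals[order]
```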
NASA Astrophysics Data System (ADS)
Zhu, Zhe
2017-08-01
The opening of free access to all archived Landsat images in 2008 has completely changed the way Landsat data are used. Many novel change detection algorithms based on Landsat time series have been developed. We present a comprehensive review of four important aspects of change detection studies based on Landsat time series: frequencies, preprocessing, algorithms, and applications. We observed the trend that the more recent the study, the higher the frequency of the Landsat time series used. We reviewed a series of image preprocessing steps, including atmospheric correction, cloud and cloud shadow detection, and composite/fusion/metrics techniques. We divided all change detection algorithms into six categories: thresholding, differencing, segmentation, trajectory classification, statistical boundary, and regression. Within each category, six major characteristics of the different algorithms, such as frequency, change index, univariate/multivariate, online/offline, abrupt/gradual change, and sub-pixel/pixel/spatial, were analyzed. Moreover, some of the widely used change detection algorithms were also discussed. Finally, we reviewed different change detection applications by dividing them into two categories: change target and change agent detection.
NASA Astrophysics Data System (ADS)
Mangano, Joseph F.
A debris flow associated with the 2003 breach of Grand Ditch in Rocky Mountain National Park, Colorado provided an opportunity to determine controls on channel geomorphic responses following a large sedimentation event. Due to the remote site location and high spatial and temporal variability of processes controlling channel response, repeat airborne lidar surveys in 2004 and 2012 were used to capture conditions along the upper Colorado River and tributary Lulu Creek i) one year following the initial debris flow, and ii) following two bankfull flows (2009 and 2010) and a record-breaking long duration, high intensity snowmelt runoff season (2011). Locations and volumes of aggradation and degradation were determined using lidar differencing. Channel and valley metrics measured from the lidar surveys included water surface slope, valley slope, changes in bankfull width, sinuosity, braiding index, channel migration, valley confinement, height above the water surface along the floodplain, and longitudinal profiles. Reaches of aggradation and degradation along the upper Colorado River are influenced by valley confinement and local controls. Aggradational reaches occurred predominantly in locations where the valley was unconfined and valley slope remained constant through the length of the reach. Channel avulsions, migration, and changes in sinuosity were common in all unconfined reaches, whether aggradational or degradational. Bankfull width in both aggradational and degradational reaches showed greater changes closer to the sediment source, with the magnitude of change decreasing downstream. Local variations in channel morphology, site specific channel conditions, and the distance from the sediment source influence the balance of transport supply and capacity and, therefore, locations of aggradation, degradation, and associated morphologic changes. Additionally, a complex response initially seen in repeat cross-sections is broadly supported by lidar differencing, although the differencing captures only the net change over eight years and not annual changes. Lidar differencing shows great promise because it reveals vertical and horizontal trends in morphologic changes at a high resolution over a large area. Repeat lidar surveys were also used to create a sediment budget along the upper Colorado River by means of the morphologic inverse method. In addition to the geomorphic changes detected by lidar, several levels of attrition of the weak clasts within debris flow sediment were applied to the sediment budget to reduce gaps in expected inputs and outputs. Bed-material estimates using the morphologic inverse method were greater than field-measured transport estimates, but the two were within an order of magnitude. Field measurements and observations are critical for robust interpretation of the lidar-based analyses because applying lidar differencing without field control may not identify local controls on valley and channel geometry and sediment characteristics. The final sediment budget helps define variability in bed-material transport and constrain transport rates through the site, which will be beneficial for restoration planning. The morphologic inverse method approach using repeat lidar surveys appears promising, especially if lidar resolution is similar between sequential surveys.
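The lidar differencing and volume bookkeeping underlying this kind of sediment budget can be sketched in a few lines, assuming coregistered gridded DEMs; the minimum detectable change value used here is an illustrative placeholder, not the uncertainty derived for these particular surveys.

```python
import numpy as np

def dem_difference(dem_t1, dem_t2, cell_size, min_detect=0.15):
    """Difference two coregistered DEMs, mask changes below the detection
    limit, and report aggradation/degradation volumes.  cell_size is the
    grid spacing in metres; min_detect is the vertical change (m) considered
    distinguishable from survey noise."""
    dz = dem_t2.astype(np.float64) - dem_t1.astype(np.float64)
    dz = np.where(np.abs(dz) < min_detect, 0.0, dz)
    cell_area = cell_size ** 2
    aggradation = dz[dz > 0].sum() * cell_area    # m^3 of deposition
    degradation = -dz[dz < 0].sum() * cell_area   # m^3 of erosion
    return dz, aggradation, degradation, aggradation - degradation
```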
NASA Astrophysics Data System (ADS)
Candela, S. G.; Howat, I.; Noh, M. J.; Porter, C. C.; Morin, P. J.
2016-12-01
In the last decade, high resolution satellite imagery has become an increasingly accessible tool for geoscientists to quantify changes in the Arctic land surface due to geophysical, ecological and anthropogenic processes. However, the trade-off between spatial coverage and spatial-temporal resolution has limited detailed, process-level change detection over large (i.e. continental) scales. The ArcticDEM project utilized over 300,000 Worldview image pairs to produce a nearly 100% coverage elevation model (above 60°N), offering the first polar, high-spatial-coverage, high-resolution (2-8 m by region) dataset, often with multiple repeats in areas of particular interest to geoscientists. A dataset of this size (nearly 250 TB) offers endless new avenues of scientific inquiry, but quickly becomes unmanageable computationally and logistically for the computing resources available to the average scientist. Here we present TopoDiff, a framework for a generalized, automated workflow that requires minimal input from the end user about a study site, and utilizes cloud computing resources to provide a temporally sorted and differenced dataset, ready for geostatistical analysis. This hands-off approach allows the end user to focus on the science, without having to manage thousands of files, or petabytes of data. At the same time, TopoDiff provides a consistent and accurate workflow for image sorting, selection, and co-registration, enabling cross-comparisons between research projects.
Results from differencing KINEROS model output through AGWA for Sierra Vista subwatershed. Percent change between 1973 and 1997 is presented for all KINEROS output values (and some derived from the KINEROS output by AGWA) for the stream channels.
Coban, Huseyin Oguz; Koc, Ayhan; Eker, Mehmet
2010-01-01
Previous studies have successfully detected changes in gently sloping forested areas with low-diversity, homogeneous vegetation cover using medium-resolution satellite data such as Landsat. The aim of the present study is to examine the capacity of multi-temporal Landsat data to identify changes in forested areas with mixed vegetation that are generally located on steep slopes or non-uniform topography. Landsat Thematic Mapper (TM) and Landsat Enhanced Thematic Mapper Plus (ETM+) data for the years 1987-2000 were used to detect changes within a 19,500 ha forested area in the Western Black Sea region of Turkey. The data comply with the forest cover type maps previously created for forest management plans of the research area. The methods used to detect changes were: post-classification comparison, image differencing, image ratioing and NDVI (Normalized Difference Vegetation Index) differencing. Following the supervised classification process, error matrices were used to evaluate the accuracy of the classified images obtained. The overall accuracy was calculated as 87.59% for the 1987 image and 91.81% for the 2000 image. General kappa statistics were calculated as 0.8543 and 0.9038 for 1987 and 2000, respectively. The changes identified via the post-classification comparison method were compared with the other change detection methods. Maximum coherence was found to be 74.95% for the 4/3 band ratio. The NDVI difference and band 3 difference methods achieved the same coherence with slight variations. The results suggest that Landsat satellite data accurately convey the temporal changes which occur in steeply sloping forested areas with a mixed structure, providing a limited amount of detail but with a high level of accuracy. Moreover, it was concluded that the post-classification comparison method can meet the needs of forestry activities better than the other methods, as it provides information about the direction of these changes.
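The accuracy figures quoted here come from standard error-matrix statistics; a compact sketch of overall accuracy and Cohen's kappa is given below (generic formulas, not code from the study).

```python
import numpy as np

def overall_accuracy_and_kappa(error_matrix):
    """Overall accuracy and Cohen's kappa from a confusion (error) matrix
    whose rows are classified labels and columns are reference labels."""
    cm = np.asarray(error_matrix, dtype=np.float64)
    n = cm.sum()
    p_observed = np.trace(cm) / n
    p_expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2
    kappa = (p_observed - p_expected) / (1.0 - p_expected)
    return p_observed, kappa

# e.g. a hypothetical 2x2 matrix for change / no-change
print(overall_accuracy_and_kappa([[80, 10], [5, 105]]))
```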
NASA Astrophysics Data System (ADS)
Rojali, Siahaan, Ida Sri Rejeki; Soewito, Benfano
2017-08-01
Steganography is the art and science of hiding secret messages so that the existence of the message cannot be detected by human senses. Data concealment uses the Multi Pixel Value Differencing (MPVD) algorithm, which exploits the differences between neighboring pixels. The development was carried out using six interval tables. The objective of this algorithm is to enhance the message capacity while maintaining data security.
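The MPVD scheme with six interval tables is not spelled out in the abstract. As a rough illustration of the underlying pixel-value-differencing idea only, here is a simplified single-table embedding step in the style of Wu and Tsai; the range table, the bit packing, and the omission of overflow handling are all simplifications, not the MPVD algorithm itself.

```python
import math

# A single range table; widths are powers of two so each range's capacity
# in bits is log2(width).
RANGES = ((0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255))

def pvd_embed_pair(p1, p2, bitstream):
    """Embed leading bits of `bitstream` into one pixel pair using a
    simplified pixel-value-differencing step; returns the new pair and the
    unused bits.  Overflow handling (pairs pushed outside 0..255) is omitted."""
    d = p2 - p1
    lo, hi = next((l, h) for l, h in RANGES if l <= abs(d) <= h)
    t = (hi - lo + 1).bit_length() - 1             # bits this pair can carry
    bits, rest = bitstream[:t], bitstream[t:]
    value = int(bits, 2) if bits else 0
    d_new = (lo + value) if d >= 0 else -(lo + value)
    m = d_new - d
    p1_new = p1 - math.ceil(m / 2)                 # split the adjustment
    p2_new = p2 + math.floor(m / 2)
    return p1_new, p2_new, rest

print(pvd_embed_pair(120, 129, "1011"))            # |d|=9 falls in (8,15): 3 bits
```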
Detecting forest canopy change due to insect activity using Landsat MSS
NASA Technical Reports Server (NTRS)
Nelson, R. F.
1983-01-01
Multitemporal Landsat multispectral scanner data were analyzed to test various computer-aided analysis techniques for detecting significant forest canopy alteration. Three data transformations - differencing, ratioing, and a vegetative index difference - were tested to determine which best delineated gypsy moth defoliation. Response surface analyses were conducted to determine optimal threshold levels for the individual transformed bands and band combinations. Results indicate that, of the three transformations investigated, a vegetative index difference (VID) transformation most accurately delineates forest canopy change. Band 5 (0.6 to 0.7 micron) ratioed data did nearly as well. However, other single bands and band combinations did not improve upon the band 5 ratio and VID results.
Efficient entanglement distribution over 200 kilometers.
Dynes, J F; Takesue, H; Yuan, Z L; Sharpe, A W; Harada, K; Honjo, T; Kamada, H; Tadanaga, O; Nishida, Y; Asobe, M; Shields, A J
2009-07-06
Here we report the first demonstration of entanglement distribution over a record distance of 200 km which is of sufficient fidelity to realize secure communication. In contrast to previous entanglement distribution schemes, we use detection elements based on practical avalanche photodiodes (APDs) operating in a self-differencing mode. These APDs are low-cost, compact and easy to operate requiring only electrical cooling to achieve high single photon detection efficiency. The self-differencing APDs in combination with a reliable parametric down-conversion source demonstrate that entanglement distribution over ultra-long distances has become both possible and practical. Consequently the outlook is extremely promising for real world entanglement-based communication between distantly separated parties.
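Self-differencing amounts to subtracting the APD output from a copy of itself delayed by exactly one gating period, so the repetitive capacitive gate transient cancels while a weak avalanche survives. Below is a synthetic numpy illustration of that cancellation, not a model of the actual detector electronics; all signal shapes and amplitudes are invented for the example.

```python
import numpy as np

def self_difference(signal, samples_per_gate):
    """Subtract the detector output from a copy delayed by one gating period;
    the periodic gate transient cancels, leaving weak avalanche signals."""
    delayed = np.roll(signal, samples_per_gate)
    return signal - delayed

# Synthetic illustration: a strong periodic transient plus one small avalanche.
n_gates, spg = 200, 50
t = np.arange(n_gates * spg)
gate_transient = 1.0 * np.sin(2 * np.pi * t / spg)           # identical every gate
avalanche = np.zeros_like(t, dtype=float)
avalanche[123 * spg + 10] = 0.2                               # tiny real detection
out = self_difference(gate_transient + avalanche, spg)
print(np.argmax(np.abs(out)) // spg)                          # gate containing the avalanche
```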
Jay D. Miller; Eric E. Knapp; Carl H. Key; Carl N. Skinner; Clint J. Isbell; R. Max Creasy; Joseph W. Sherlock
2009-01-01
Multispectral satellite data have become a common tool used in the mapping of wildland fire effects. Fire severity, defined as the degree to which a site has been altered, is often the variable mapped. The Normalized Burn Ratio (NBR), used in an absolute difference change detection protocol (dNBR), has become the remote sensing method of choice for US Federal land...
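The dNBR protocol itself is a short formula: the Normalized Burn Ratio computed from near- and shortwave-infrared bands before and after the fire, then differenced. A generic sketch, assuming coregistered and radiometrically comparable band arrays:

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from near-infrared and shortwave-infrared bands."""
    nir = nir.astype(np.float64)
    swir = swir.astype(np.float64)
    return (nir - swir) / (nir + swir + 1e-9)

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Differenced NBR: pre-fire minus post-fire, so larger values indicate
    higher burn severity."""
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
```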
Spatially explicit rangeland erosion monitoring using high-resolution digital aerial imagery
Gillan, Jeffrey K.; Karl, Jason W.; Barger, Nichole N.; Elaksher, Ahmed; Duniway, Michael C.
2016-01-01
Nearly all of the ecosystem services supported by rangelands, including production of livestock forage, carbon sequestration, and provisioning of clean water, are negatively impacted by soil erosion. Accordingly, monitoring the severity, spatial extent, and rate of soil erosion is essential for long-term sustainable management. Traditional field-based methods of monitoring erosion (sediment traps, erosion pins, and bridges) can be labor intensive and therefore are generally limited in spatial intensity and/or extent. There is a growing effort to monitor natural resources at broad scales, which is driving the need for new soil erosion monitoring tools. One remote-sensing technique that can be used to monitor soil movement is a time series of digital elevation models (DEMs) created using aerial photogrammetry methods. By geographically coregistering the DEMs and subtracting one surface from the other, an estimate of soil elevation change can be created. Such analysis enables spatially explicit quantification and visualization of net soil movement including erosion, deposition, and redistribution. We constructed DEMs (12-cm ground sampling distance) on the basis of aerial photography immediately before and 1 year after a vegetation removal treatment on a 31-ha Piñon-Juniper woodland in southeastern Utah to evaluate the use of aerial photography in detecting soil surface change. On average, we were able to detect surface elevation change of ±8-9 cm and greater, which was sufficient for the large amount of soil movement exhibited on the study area. Detecting more subtle soil erosion could be achieved using the same technique with higher-resolution imagery from lower-flying aircraft such as unmanned aerial vehicles. DEM differencing and process-focused field methods provided complementary information and a more complete assessment of soil loss and movement than any single technique alone. Photogrammetric DEM differencing could be used as a technique to quantitatively monitor surface change over time relative to management activities.
Image Change Detection via Ensemble Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Benjamin W; Vatsavai, Raju
2013-01-01
The concept of geographic change detection is relevant in many areas. Changes in geography can reveal much information about a particular location. For example, analysis of changes in geography can identify regions of population growth, change in land use, and potential environmental disturbance. A common way to perform change detection is to use a simple method such as differencing to detect regions of change. Though these techniques are simple, often the application of these techniques is very limited. Recently, use of machine learning methods such as neural networks for change detection has been explored with great success. In this work, we explore the use of ensemble learning methodologies for detecting changes in bitemporal synthetic aperture radar (SAR) images. Ensemble learning uses a collection of weak machine learning classifiers to create a stronger classifier which has higher accuracy than the individual classifiers in the ensemble. The strength of the ensemble lies in the fact that the individual classifiers in the ensemble create a mixture of experts in which the final classification made by the ensemble classifier is calculated from the outputs of the individual classifiers. Our methodology leverages this aspect of ensemble learning by training collections of weak decision tree based classifiers to identify regions of change in SAR images collected of a region in the Staten Island, New York area during Hurricane Sandy. Preliminary studies show that the ensemble method has approximately 11.5% higher change detection accuracy than an individual classifier.
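As a rough stand-in for the tree-ensemble workflow described (the exact features and ensemble used in the study are not given in the abstract), the sketch below trains a scikit-learn random forest on simple per-pixel difference features from a bitemporal SAR pair; feature choices and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def change_features(sar_t1, sar_t2):
    """Per-pixel features for bitemporal SAR change detection: the two
    intensities, their absolute difference, and the log-ratio."""
    a = sar_t1.astype(np.float64).ravel()
    b = sar_t2.astype(np.float64).ravel()
    log_ratio = np.log((b + 1e-6) / (a + 1e-6))
    return np.column_stack([a, b, np.abs(b - a), log_ratio])

def train_and_map(sar_t1, sar_t2, train_idx, train_labels):
    """Fit an ensemble of decision trees on labelled pixels (1 = changed,
    0 = unchanged), then predict a change map for the full scene."""
    X_all = change_features(sar_t1, sar_t2)
    clf = RandomForestClassifier(n_estimators=100, max_depth=8, random_state=0)
    clf.fit(X_all[train_idx], train_labels)
    return clf.predict(X_all).reshape(sar_t1.shape)
```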
A Landsat Time-Series Stacks Model for Detection of Cropland Change
NASA Astrophysics Data System (ADS)
Chen, J.; Chen, J.; Zhang, J.
2017-09-01
Global, timely, accurate and cost-effective cropland monitoring with a fine spatial resolution will dramatically improve our understanding of the effects of agriculture on greenhouse gas emissions, food safety, and human health. Time-series remote sensing imagery has shown particular potential to describe land cover dynamics. Traditional change detection techniques are often not capable of detecting land cover changes within time series that are severely influenced by seasonal differences, and are therefore more likely to generate pseudo changes. Here we introduce and test LTSM (Landsat time-series stacks model), an improvement on the previously proposed Continuous Change Detection and Classification (CCDC) approach, to extract spectral trajectories of land surface change using dense Landsat time-series stacks (LTS). The method is expected to eliminate pseudo changes caused by phenology driven by seasonal patterns. The main idea of the method is that, using all available Landsat 8 images within a year, an LTSM consisting of a two-term harmonic function is estimated iteratively for each pixel in each spectral band. The LTSM then defines change areas by differencing the predicted and observed Landsat images. The LTSM approach was compared with the change vector analysis (CVA) method. The results indicated that the LTSM method correctly detected the "true changes" without overestimating the "false" ones, while CVA identified "true change" pixels along with a large number of "false changes". The detection of change areas achieved an overall accuracy of 92.37%, with a kappa coefficient of 0.676.
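The core of the LTSM idea, a per-pixel two-term harmonic fit whose predictions are differenced against observations, can be sketched as follows; the model form and the residual-based change rule are generic assumptions, not the paper's exact estimator.

```python
import numpy as np

def fit_harmonic(doy, reflectance, period=365.25):
    """Fit a two-term harmonic model (annual + semi-annual) to one pixel's
    time series by ordinary least squares and return predictions."""
    t = np.asarray(doy, dtype=np.float64)
    w = 2.0 * np.pi / period
    A = np.column_stack([np.ones_like(t),
                         np.cos(w * t), np.sin(w * t),
                         np.cos(2 * w * t), np.sin(2 * w * t)])
    coeffs, *_ = np.linalg.lstsq(A, reflectance, rcond=None)
    return A @ coeffs, coeffs

def flag_change(doy, observed, k=3.0):
    """Flag observations that depart from the fitted seasonal model by more
    than k times the residual RMSE -- the differencing step of the approach."""
    predicted, _ = fit_harmonic(doy, observed)
    residual = observed - predicted
    rmse = np.sqrt(np.mean(residual ** 2))
    return np.abs(residual) > k * rmse
```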
Madan, Jason; Khan, Kamran A; Petrou, Stavros; Lamb, Sarah E
2017-05-01
Mapping algorithms are increasingly being used to predict health-utility values based on responses or scores from non-preference-based measures, thereby informing economic evaluations. We explored whether predicted EuroQol 5-dimension 3-level instrument (EQ-5D-3L) health-utility gains from mapping algorithms might differ if estimated using differenced versus raw scores, using the Roland-Morris Disability Questionnaire (RMQ), a widely used health status measure for low back pain, as an example. We estimated algorithms mapping within-person changes in RMQ scores to changes in EQ-5D-3L health utilities using data from two clinical trials with repeated observations. We also used logistic regression models to estimate response mapping algorithms from these data to predict within-person changes in responses to each EQ-5D-3L dimension from changes in RMQ scores. Predicted health-utility gains from these mappings were compared with predictions based on raw RMQ data. Using differenced scores reduced the predicted health-utility gain from a unit decrease in RMQ score from 0.037 (standard error [SE] 0.001) to 0.020 (SE 0.002). Analysis of response mapping data suggests that the use of differenced data reduces the predicted impact of reducing RMQ scores across EQ-5D-3L dimensions and that patients can experience health-utility gains on the EQ-5D-3L 'usual activity' dimension independent from improvements captured by the RMQ. Mappings based on raw RMQ data overestimate the EQ-5D-3L health-utility gains from interventions that reduce RMQ scores. Where possible, mapping algorithms should reflect within-person changes in health outcome and be estimated from datasets containing repeated observations if they are to be used to estimate incremental health-utility gains.
NASA Technical Reports Server (NTRS)
Thomas, S. D.; Holst, T. L.
1985-01-01
A full-potential steady transonic wing flow solver has been modified so that freestream density and residual are captured in regions of constant velocity. This numerically precise freestream consistency is obtained by slightly altering the differencing scheme without affecting the implicit solution algorithm. The changes chiefly affect the fifteen metrics per grid point, which are computed once and stored. With this new method, the outer boundary condition is captured accurately, and the smoothness of the solution is especially improved near regions of grid discontinuity.
Shermeyer, Jacob S.; Haack, Barry N.
2015-01-01
Two forestry-change detection methods are described, compared, and contrasted for estimating deforestation and growth in threatened forests in southern Peru from 2000 to 2010. The methods used in this study rely on freely available data, including atmospherically corrected Landsat 5 Thematic Mapper and Moderate Resolution Imaging Spectroradiometer (MODIS) vegetation continuous fields (VCF). The two methods include a conventional supervised signature extraction method and a unique self-calibrating method called MODIS VCF guided forest/nonforest (FNF) masking. The process chain for each of these methods includes a threshold classification of MODIS VCF, training data or signature extraction, signature evaluation, k-nearest neighbor classification, analyst-guided reclassification, and postclassification image differencing to generate forest change maps. Comparisons of all methods were based on an accuracy assessment using 500 validation pixels. Results of this accuracy assessment indicate that FNF masking had a 5% higher overall accuracy and was superior to conventional supervised classification when estimating forest change. Both methods succeeded in classifying persistently forested and nonforested areas, and both had limitations when classifying forest change.
Fast Optical Hazard Detection for Planetary Rovers Using Multiple Spot Laser Triangulation
NASA Technical Reports Server (NTRS)
Matthies, L.; Balch, T.; Wilcox, B.
1997-01-01
A new laser-based optical sensor system that provides hazard detection for planetary rovers is presented. It is anticipated that the sensor can support safe travel at speeds up to 6 cm/second for large (1 m) rovers in full sunlight on Earth or Mars. The system overcomes limitations of an older design that required image differencing to detect a laser stripe in full sun.
Gigahertz-gated InGaAs/InP single-photon detector with detection efficiency exceeding 55% at 1550 nm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Comandar, L. C.; Engineering Department, Cambridge University, 9 J J Thomson Ave, Cambridge CB3 0FA; Fröhlich, B.
We report on a gated single-photon detector based on InGaAs/InP avalanche photodiodes (APDs) with a single-photon detection efficiency exceeding 55% at 1550 nm. Our detector is gated at 1 GHz and employs the self-differencing technique for gate transient suppression. It can operate nearly dead time free, except for the one clock cycle dead time intrinsic to self-differencing, and we demonstrate a count rate of 500 Mcps. We present a careful analysis of the optimal driving conditions of the APD measured with a dead time free detector characterization setup. It is found that a shortened gate width of 360 ps together with an increased driving signal amplitude and operation at higher temperatures leads to improved performance of the detector. We achieve an afterpulse probability of 7% at 50% detection efficiency with dead time free measurement and a record efficiency for InGaAs/InP APDs of 55% at an afterpulse probability of only 10.2% with a moderate dead time of 10 ns.
Yang, Limin; Xian, George Z.; Klaver, Jacqueline M.; Deal, Brian
2003-01-01
We developed a Sub-pixel Imperviousness Change Detection (SICD) approach to detect urban land-cover changes using Landsat and high-resolution imagery. The sub-pixel percent imperviousness was mapped for two dates (09 March 1993 and 11 March 2001) over western Georgia using a regression tree algorithm. The accuracy of the predicted imperviousness was reasonable based on a comparison using independent reference data. The average absolute error between predicted and reference data was 16.4 percent for 1993 and 15.3 percent for 2001. The correlation coefficient (r) was 0.73 for 1993 and 0.78 for 2001, respectively. Areas with a significant increase (greater than 20 percent) in impervious surface from 1993 to 2001 were mostly related to known land-cover/land-use changes that occurred in this area, suggesting that the spatial change of an impervious surface is a useful indicator for identifying the spatial extent, intensity, and, potentially, type of urban land-cover/land-use changes. Compared to other pixel-based change-detection methods (band differencing, ratioing, change vector, post-classification), information on changes in sub-pixel percent imperviousness allows users to quantify and interpret urban land-cover/land-use changes based on their own definition. Such information is considered complementary to products generated using other change-detection methods. In addition, the procedure for mapping imperviousness is objective and repeatable and, hence, can be used for monitoring urban land-cover/land-use change over a large geographic area. Potential applications and limitations of the products developed through this study in urban environmental studies are also discussed.
NASA Technical Reports Server (NTRS)
Melbourne, William G.
1986-01-01
In double differencing a regression system obtained from concurrent Global Positioning System (GPS) observation sequences, one either undersamples the system to avoid introducing colored measurement statistics, or one fully samples the system incurring the resulting non-diagonal covariance matrix for the differenced measurement errors. A suboptimal estimation result will be obtained in the undersampling case and will also be obtained in the fully sampled case unless the color noise statistics are taken into account. The latter approach requires a least squares weighting matrix derived from inversion of a non-diagonal covariance matrix for the differenced measurement errors instead of inversion of the customary diagonal one associated with white noise processes. Presented is the so-called fully redundant double differencing algorithm for generating a weighted double differenced regression system that yields equivalent estimation results, but features for certain cases a diagonal weighting matrix even though the differenced measurement error statistics are highly colored.
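The weighted estimation referred to here is generalized least squares with a non-diagonal covariance for the differenced measurement errors. A minimal numpy sketch of that step (dense inversion, suitable only for small systems; the matrices are assumed inputs, not GPS-specific structures):

```python
import numpy as np

def gls_estimate(A, y, cov):
    """Generalized least squares: find x minimizing
    (y - A x)^T C^{-1} (y - A x), where C is the (possibly non-diagonal)
    covariance of the differenced measurement errors."""
    W = np.linalg.inv(cov)                       # weighting matrix
    N = A.T @ W @ A                              # normal matrix
    x_hat = np.linalg.solve(N, A.T @ W @ y)
    cov_x = np.linalg.inv(N)                     # formal covariance of estimate
    return x_hat, cov_x
```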
Changing volatility of U.S. annual tornado reports
NASA Astrophysics Data System (ADS)
Tippett, Michael K.
2014-10-01
United States (U.S.) tornado activity results in substantial loss of life and property damage each year. A simple measure of the U.S. tornado climatology is the average number of tornadoes per year. However, even this statistic is elusive because of nonstationary behavior due in large part to changes in reporting practices. Differencing of the annual report data results in a quantity without mean trends and whose standard deviation we denote as volatility, since it is an indication of the likely year-to-year variation in the number of tornadoes reported. While volatility changes detected prior to 2000 can be associated with known reporting practice changes, an increase in volatility in the 2000s across intensity levels cannot. A volatility increase is also seen in a tornado environment index which measures the favorability of atmospheric conditions to tornado activity, providing evidence that the recent increase in tornado report volatility is related to the physical environment.
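The volatility statistic described here is simply the standard deviation of the first-differenced annual report series, which removes slow trends in the mean. A one-function sketch with hypothetical counts:

```python
import numpy as np

def report_volatility(annual_counts):
    """Volatility of a tornado-report series: the standard deviation of the
    year-to-year differences, which removes slow trends in the mean."""
    diffs = np.diff(np.asarray(annual_counts, dtype=np.float64))
    return diffs.std(ddof=1)

# Hypothetical annual report counts, for illustration only.
print(report_volatility([1200, 1150, 1340, 1280, 1700, 1100, 1450]))
```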
Effective image differencing with convolutional neural networks for real-time transient hunting
NASA Astrophysics Data System (ADS)
Sedaghat, Nima; Mahabal, Ashish
2018-06-01
Large sky surveys are increasingly relying on image subtraction pipelines for real-time (and archival) transient detection. In this process one has to contend with varying point-spread function (PSF) and small brightness variations in many sources, as well as artefacts resulting from saturated stars and, in general, matching errors. Very often the differencing is done with a reference image that is deeper than individual images and the attendant difference in noise characteristics can also lead to artefacts. We present here a deep-learning approach to transient detection that encapsulates all the steps of a traditional image-subtraction pipeline - image registration, background subtraction, noise removal, PSF matching and subtraction - in a single real-time convolutional network. Once trained, the method works lightning-fast and, given that it performs multiple steps in one go, the time saved and false positives eliminated for multi-CCD surveys like the Zwicky Transient Facility and the Large Synoptic Survey Telescope will be immense, as millions of subtractions will be needed per night.
Reducing numerical diffusion for incompressible flow calculations
NASA Technical Reports Server (NTRS)
Claus, R. W.; Neely, G. M.; Syed, S. A.
1984-01-01
A number of approaches for improving the accuracy of incompressible, steady-state flow calculations are examined. Two improved differencing schemes, Quadratic Upstream Interpolation for Convective Kinematics (QUICK) and Skew-Upwind Differencing (SUD), are applied to the convective terms in the Navier-Stokes equations and compared with results obtained using hybrid differencing. In a number of test calculations, it is illustrated that no single scheme exhibits superior performance for all flow situations. However, both SUD and QUICK are shown to be generally more accurate than hybrid differencing.
NASA Astrophysics Data System (ADS)
Kranz, Olaf; Lang, Stefan; Schoepfer, Elisabeth
2017-09-01
Mining natural resources serves fundamental societal needs or commercial interests, but it may well turn into a driver of violence and regional instability. In this study, very high resolution (VHR) optical stereo satellite data are analysed to monitor processes and changes in one of the largest artisanal and small-scale mining sites in the Democratic Republic of the Congo, which is among the world's wealthiest countries in exploitable minerals. To identify the subtle structural changes, the applied methodological framework employs object-based change detection (OBCD) based on optical VHR data and generated digital surface models (DSM). Results show that the DSM-based change detection approach enhances the assessment gained from sole 2D analyses by providing valuable information about changes in surface structure or volume. Land cover changes as analysed by OBCD reveal an increase in bare soil area by a rate of 47% between April 2010 and September 2010, followed by a significant decrease of 47.5% until March 2015. Beyond that, DSM differencing enabled the characterisation of small-scale features such as pits and excavations. The presented Earth observation (EO)-based monitoring of mineral exploitation aims at a better understanding of the relations between resource extraction and conflict, thus providing relevant information for potential mitigation strategies and peace building.
NASA Technical Reports Server (NTRS)
Rodden, John James (Inventor); Price, Xenophon (Inventor); Carrou, Stephane (Inventor); Stevens, Homer Darling (Inventor)
2002-01-01
A control system for providing attitude control in spacecraft. The control system comprises a primary attitude reference system, a secondary attitude reference system, and a hyper-complex number differencing system. The hyper-complex number differencing system is connectable to the primary attitude reference system and the secondary attitude reference system.
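The patent abstract does not specify which hyper-complex algebra is used, but attitude references are commonly differenced with quaternions; the sketch below computes the relative rotation between two attitude quaternions as one plausible reading of the "differencing" step, not the patented implementation.

```python
import numpy as np

def quat_conj(q):
    """Conjugate of a unit quaternion in [w, x, y, z] order."""
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_mult(q1, q2):
    """Hamilton product of two quaternions in [w, x, y, z] order."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def attitude_difference(q_primary, q_secondary):
    """Relative rotation between two attitude estimates: the quaternion that
    rotates the secondary reference's attitude into the primary's."""
    return quat_mult(quat_conj(q_secondary), q_primary)
```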
Development Context Driven Change Awareness and Analysis Framework
NASA Technical Reports Server (NTRS)
Sarma, Anita; Branchaud, Josh; Dwyer, Matthew B.; Person, Suzette; Rungta, Neha; Wang, Yurong; Elbaum, Sebastian
2014-01-01
Recent work on workspace monitoring allows conflict prediction early in the development process, however, these approaches mostly use syntactic differencing techniques to compare different program versions. In contrast, traditional change-impact analysis techniques analyze related versions of the program only after the code has been checked into the master repository. We propose a novel approach, DeCAF (Development Context Analysis Framework), that leverages the development context to scope a change impact analysis technique. The goal is to characterize the impact of each developer on other developers in the team. There are various client applications such as task prioritization, early conflict detection, and providing advice on testing that can benefit from such a characterization. The DeCAF framework leverages information from the development context to bound the iDiSE change impact analysis technique to analyze only the parts of the code base that are of interest. Bounding the analysis can enable DeCAF to efficiently compute the impact of changes using a combination of program dependence and symbolic execution based approaches.
Nordberg, Maj-Liz; Evertson, Joakim
2003-12-01
Vegetation cover-change analysis requires selection of an appropriate set of variables for measuring and characterizing change. Satellite sensors like Landsat TM offer the advantages of wide spatial coverage while providing land-cover information. This facilitates the monitoring of surface processes. This study discusses change detection in mountainous dry-heath communities in Jämtland County, Sweden, using satellite data. Landsat-5 TM and Landsat-7 ETM+ data from 1984, 1994 and 2000, respectively, were used. Different change detection methods were compared after the images had been radiometrically normalized, georeferenced and corrected for topographic effects. For detection of the change/no-change classes, the NDVI image differencing method was the most accurate, with an overall accuracy of 94% (K = 0.87). Additional change information was extracted from an alternative method, NDVI regression analysis, and vegetation change in three categories within mountainous dry-heath communities was detected. By applying a fuzzy set thresholding technique the overall accuracy was improved from 65% (K = 0.45) to 74% (K = 0.59). The methods used generate a change product showing the location of changed areas in sensitive mountainous heath communities, and also indicate the extent of the change (high, moderate and unchanged vegetation cover decrease). A total of 17% of the dry and extremely dry heath vegetation within the study area changed between 1984 and 2000. On average 4% of the studied heath communities were classified as high change, i.e. experienced "high vegetation cover decrease" during the period. The results show that the low alpine zone of the southern part of the study area shows the highest amount of "high vegetation cover decrease". The results also show that the main change occurred between 1994 and 2000.
Performance of differenced range data types in Voyager navigation
NASA Technical Reports Server (NTRS)
Taylor, T. H.; Campbell, J. K.; Jacobson, R. A.; Moultrie, B.; Nichols, R. A., Jr.; Riedel, J. E.
1982-01-01
Voyager radio navigation made use of a differenced range data type for both Saturn encounters because of the low declination singularity of Doppler data. Nearly simultaneous two-way range from two-station baselines was explicitly differenced to produce this data type. Concurrently, a differential VLBI data type (DDOR), utilizing doubly differenced quasar-spacecraft delays, with potentially higher precision was demonstrated. Performance of these data types is investigated on the Jupiter-to-Saturn leg of Voyager 2. The statistics of performance are presented in terms of actual data noise comparisons and sample orbit estimates. Use of DDOR as a primary data type for navigation to Uranus is discussed.
NASA Astrophysics Data System (ADS)
Prokešová, Roberta; Kardoš, Miroslav; Tábořík, Petr; Medveďová, Alžbeta; Stacke, Václav; Chudý, František
2014-11-01
Large earthflow-type landslides are destructive mass movement phenomena with highly unpredictable behaviour. Knowledge of earthflow kinematics is essential for understanding the mechanisms that control its movements. The present paper characterises the kinematic behaviour of a large earthflow near the village of Ľubietová in Central Slovakia over a period of 35 years following its most recent reactivation in 1977. For this purpose, multi-temporal spatial data acquired by point-based in-situ monitoring and optical remote sensing methods have been used. Quantitative data analyses including strain modelling and DEM differencing techniques have enabled us to: (i) calculate the annual landslide movement rates; (ii) detect the trend of surface displacements; (iii) characterise spatial variability of movement rates; (iv) measure changes in the surface topography on a decadal scale; and (v) define areas with distinct kinematic behaviour. The results also integrate the qualitative characteristics of surface topography, in particular the distribution of surface structures as defined by a high-resolution DEM, and the landslide subsurface structure, as revealed by 2D resistivity imaging. Then, the ground surface kinematics of the landslide is evaluated with respect to the specific conditions encountered in the study area including slope morphology, landslide subsurface structure, and local geological and hydrometeorological conditions. Finally, the broader implications of the presented research are discussed with particular focus on the role that strain-related structures play in landslide kinematic behaviour.
A Dual Frequency Carrier Phase Error Difference Checking Algorithm for the GNSS Compass.
Liu, Shuo; Zhang, Lei; Li, Jian
2016-11-24
The performance of the Global Navigation Satellite System (GNSS) compass is related to the quality of carrier phase measurement. How to process the carrier phase error properly is important to improve the GNSS compass accuracy. In this work, we propose a dual frequency carrier phase error difference checking algorithm for the GNSS compass. The algorithm aims at eliminating large carrier phase error in dual frequency double differenced carrier phase measurement according to the error difference between two frequencies. The advantage of the proposed algorithm is that it does not need additional environment information and has a good performance on multiple large errors compared with previous research. The core of the proposed algorithm is removing the geographical distance from the dual frequency carrier phase measurement, then the carrier phase error is separated and detectable. We generate the Double Differenced Geometry-Free (DDGF) measurement according to the characteristic that the different frequency carrier phase measurements contain the same geometrical distance. Then, we propose the DDGF detection to detect the large carrier phase error difference between two frequencies. The theoretical performance of the proposed DDGF detection is analyzed. An open sky test, a manmade multipath test and an urban vehicle test were carried out to evaluate the performance of the proposed algorithm. The result shows that the proposed DDGF detection is able to detect large error in dual frequency carrier phase measurement by checking the error difference between two frequencies. After the DDGF detection, the accuracy of the baseline vector is improved in the GNSS compass.
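The DDGF measurement removes the common geometric distance by combining the two frequencies' double-differenced carrier phases expressed in metres; a minimal sketch of the combination and a simple threshold check follows. The fixed tolerance is a hypothetical stand-in for the paper's detection statistic, and the full algorithm is more elaborate than shown here.

```python
import numpy as np

# GPS L1/L2 wavelengths in metres (c / f).
C = 299_792_458.0
LAMBDA_L1 = C / 1_575.42e6
LAMBDA_L2 = C / 1_227.60e6

def ddgf(dd_phase_l1_cycles, dd_phase_l2_cycles):
    """Double-Differenced Geometry-Free combination: converting both
    double-differenced carrier phases to metres and subtracting them removes
    the common geometric distance, leaving ambiguities, ionospheric delay,
    and carrier phase errors."""
    return (LAMBDA_L1 * np.asarray(dd_phase_l1_cycles)
            - LAMBDA_L2 * np.asarray(dd_phase_l2_cycles))

def detect_large_error(dd_l1, dd_l2, baseline_ddgf, threshold=0.05):
    """Flag epochs whose DDGF departs from a reference value by more than
    `threshold` metres, indicating a large carrier phase error on one of
    the two frequencies."""
    return np.abs(ddgf(dd_l1, dd_l2) - baseline_ddgf) > threshold
```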
Myint, S.W.; Yuan, M.; Cerveny, R.S.; Giri, C.P.
2008-01-01
Remote sensing techniques have been shown effective for large-scale damage surveys after a hazardous event in both near real-time and post-event analyses. The paper aims to compare the accuracy of common image processing techniques to detect tornado damage tracks from Landsat TM data. We employed the direct change detection approach using two sets of images acquired before and after the tornado event to produce a principal component composite image and a set of image difference bands. Techniques in the comparison include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest neighbor classifier. Accuracy assessment is based on the Kappa coefficient calculated from error matrices which cross-tabulate correctly identified cells on the TM image and commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest degree of accuracy in tornado damage detection. PCA and image differencing methods show comparable outcomes. While selected PCs can improve detection accuracy by 5 to 10%, the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. © 2008 by MDPI.
Myint, Soe W.; Yuan, May; Cerveny, Randall S.; Giri, Chandra P.
2008-01-01
Remote sensing techniques have been shown effective for large-scale damage surveys after a hazardous event in both near real-time and post-event analyses. The paper aims to compare the accuracy of common image processing techniques to detect tornado damage tracks from Landsat TM data. We employed the direct change detection approach using two sets of images acquired before and after the tornado event to produce a principal component composite image and a set of image difference bands. Techniques in the comparison include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest neighbor classifier. Accuracy assessment is based on the Kappa coefficient calculated from error matrices which cross-tabulate correctly identified cells on the TM image and commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest degree of accuracy in tornado damage detection. PCA and image differencing methods show comparable outcomes. While selected PCs can improve detection accuracy by 5 to 10%, the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. PMID:27879757
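Both versions of this abstract describe the same two direct change-detection inputs: per-band differences of the before/after images, and a principal-component composite of the stacked bands. A minimal sketch of producing those inputs with NumPy and scikit-learn is shown below; the band arrays, shapes, and number of components are placeholders, and no claim is made about the authors' exact preprocessing.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical co-registered 6-band scenes, shape (bands, rows, cols).
before = np.random.rand(6, 400, 400).astype(np.float32)
after = np.random.rand(6, 400, 400).astype(np.float32)

# Input 1: per-band image differences.
diff_bands = after - before

# Input 2: principal-component composite of the stacked 12-band image.
stack = np.concatenate([before, after], axis=0)        # (12, rows, cols)
pixels = stack.reshape(stack.shape[0], -1).T            # (n_pixels, 12)
pcs = PCA(n_components=3).fit_transform(pixels)         # first PCs of the stacked data
pc_composite = pcs.T.reshape(3, *stack.shape[1:])

print(diff_bands.shape, pc_composite.shape)
```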
USDA-ARS's Scientific Manuscript database
Brown rot is a severe disease affecting stone and pome fruits. This disease was recently confirmed to be caused by the following six closely related species: Monilinia fructicola, Monilinia laxa, Monilinia fructigena, Monilia polystroma, Monilia mumecola and Monilia yunnanensis. Because of differenc...
Multiple pedestrian detection using IR LED stereo camera
NASA Astrophysics Data System (ADS)
Ling, Bo; Zeifman, Michael I.; Gibson, David R. P.
2007-09-01
As part of the U.S. Department of Transportation's Intelligent Vehicle Initiative (IVI) program, the Federal Highway Administration (FHWA) is conducting R&D in vehicle safety and driver information systems. There is an increasing number of applications where pedestrian monitoring is of high importance. Vision-based pedestrian detection in outdoor scenes is still an open challenge. People dress in very different colors that sometimes blend with the background, wear hats or carry bags, and stand, walk and change directions unpredictably. The background varies as well, containing buildings, moving or parked cars, bicycles, street signs, signals, etc. Furthermore, existing pedestrian detection systems perform only during daytime, making it impossible to detect pedestrians at night. Under FHWA funding, we are developing a multi-pedestrian detection system using an IR LED stereo camera. This system, without using any templates, detects pedestrians through statistical pattern recognition utilizing 3D features extracted from the disparity map. A new IR LED stereo camera is being developed, which can help detect pedestrians during both daytime and nighttime. Using image differencing and denoising, we have also developed new methods to estimate the disparity map of pedestrians in near real time. Our system will have a hardware interface with the traffic controller through wireless communication. Once pedestrians are detected, traffic signals at the street intersections will change phases to alert the drivers of approaching vehicles. Initial test results using images collected at a street intersection show that our system can detect pedestrians in near real time.
CEST Analysis: Automated Change Detection from Very-High-Resolution Remote Sensing Images
NASA Astrophysics Data System (ADS)
Ehlers, M.; Klonus, S.; Jarmer, T.; Sofina, N.; Michel, U.; Reinartz, P.; Sirmacek, B.
2012-08-01
Fast detection, visualization and assessment of change in areas of crisis or catastrophe are important requirements for the coordination and planning of help. Through the availability of new satellite and/or airborne sensors with very high spatial resolutions (e.g., WorldView, GeoEye), new remote sensing data are available for better detection, delineation and visualization of change. For automated change detection, a large number of algorithms have been proposed and developed. From previous studies, however, it is evident that to date no single algorithm has the potential of being a reliable change detector for all possible scenarios. This paper introduces the Combined Edge Segment Texture (CEST) analysis, a decision-tree based cooperative suite of algorithms for automated change detection that is especially designed for the new generation of satellites with very high spatial resolution. The method incorporates frequency-based filtering, texture analysis, and image segmentation techniques. For the frequency analysis, different band-pass filters can be applied to identify the relevant frequency information for change detection. After transforming the multitemporal images via a fast Fourier transform (FFT) and applying the most suitable band-pass filter, different methods are available to extract changed structures: differencing and correlation in the frequency domain, and correlation and edge detection in the spatial domain. Best results are obtained using edge extraction. For the texture analysis, different 'Haralick' parameters can be calculated (e.g., energy, correlation, contrast, inverse distance moment), with 'energy' so far providing the most accurate results. These algorithms are combined with a prior segmentation of the image data as well as with morphological operations for a final binary change result. A rule-based combination (CEST) of the change algorithms is applied to calculate the probability of change for a particular location. CEST was tested with high-resolution satellite images of the crisis areas of Darfur (Sudan). CEST results are compared with a number of standard algorithms for automated change detection such as image differencing, image ratioing, principal component analysis, the delta cue technique and post-classification change detection. The new combined method shows superior results, with improvements in accuracy averaging between 15% and 45%.
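The frequency-domain step of CEST — transform both dates, band-pass filter, then compare — can be sketched as follows. This is an illustrative implementation of the general idea only; the simple radial mask and the cut-off radii are assumptions, not the CEST parameters.

```python
import numpy as np

def bandpass_fft(image, r_low, r_high):
    """Keep spatial frequencies with radius in [r_low, r_high] (cycles per image)."""
    rows, cols = image.shape
    fy = np.fft.fftfreq(rows)[:, None] * rows
    fx = np.fft.fftfreq(cols)[None, :] * cols
    mask = (np.hypot(fy, fx) >= r_low) & (np.hypot(fy, fx) <= r_high)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * mask))

rng = np.random.default_rng(1)
date1 = rng.random((256, 256))
date2 = date1.copy()
date2[100:130, 100:130] += 0.5          # synthetic "change"

# Differencing the band-pass filtered images highlights structures in the chosen band.
change = np.abs(bandpass_fft(date2, 5, 60) - bandpass_fft(date1, 5, 60))
print(change.max())
```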
NASA Astrophysics Data System (ADS)
Barry, Richard K.; Bennett, D. P.; Klaasen, K.; Becker, A. C.; Christiansen, J.; Albrow, M.
2014-01-01
We have worked to characterize two exoplanets newly detected from the ground: OGLE-2012-BLG-0406 and OGLE-2012-BLG-0838, using microlensing observations of the Galactic Bulge recently obtained by NASA’s Deep Impact (DI) spacecraft, in combination with ground data. These observations of the crowded Bulge fields from Earth and from an observatory at a distance of ~1 AU have permitted the extraction of a microlensing parallax signature - critical for breaking exoplanet model degeneracies. For this effort, we used DI’s High Resolution Instrument, launched with a permanent defocus aberration due to an error in cryogenic testing. We show how the effects of a very large, chromatic PSF can be reduced in differencing photometry. We also compare two approaches to differencing photometry - one of which employs the Bramich algorithm and another using the Fruchter & Hook drizzle algorithm.
NASA Technical Reports Server (NTRS)
Estefan, J. A.; Thurman, S. W.
1992-01-01
An approximate six-parameter analytic model for Earth-based differenced range measurements is presented and is used to derive a representative analytic approximation for differenced Doppler measurements. The analytical models are used to investigate the ability of these data types to estimate spacecraft geocentric angular motion, Deep Space Network station oscillator (clock/frequency) offsets, and signal-path calibration errors over a period of a few days, in the presence of systematic station location and transmission media calibration errors. Quantitative results indicate that a few differenced Doppler plus ranging passes yield angular position estimates with a precision on the order of 0.1 to 0.4 microrad, and angular rate precision on the order of 10 to 25 x 10^-12 rad/sec, assuming no a priori information on the coordinate parameters. Sensitivity analyses suggest that troposphere zenith delay calibration error is the dominant systematic error source in most of the tracking scenarios investigated; as expected, the differenced Doppler data were found to be much more sensitive to troposphere calibration errors than differenced range. By comparison, results computed using wide-band and narrow-band delta-VLBI under similar circumstances yielded angular precisions of 0.07 to 0.4 microrad, and angular rate precisions of 0.5 to 1.0 x 10^-12 rad/sec.
Fast Image Subtraction Using Multi-cores and GPUs
NASA Astrophysics Data System (ADS)
Hartung, Steven; Shukla, H.
2013-01-01
Many important image processing techniques in astronomy require a massive number of computations per pixel. Among them is an image differencing technique known as Optimal Image Subtraction (OIS), which is very useful for detecting and characterizing transient phenomena. Like many image processing routines, OIS computations increase proportionally with the number of pixels being processed, and the number of pixels in need of processing is increasing rapidly. Utilizing many-core graphical processing unit (GPU) technology in a hybrid conjunction with multi-core CPU and computer clustering technologies, this work presents a new astronomy image processing pipeline architecture. The chosen OIS implementation focuses on the 2nd order spatially-varying kernel with the Dirac delta function basis, a powerful image differencing method that has seen limited deployment in part because of the heavy computational burden. This tool can process standard image calibration and OIS differencing in a fashion that is scalable with the increasing data volume. It employs several parallel processing technologies in a hierarchical fashion in order to best utilize each of their strengths. The Linux/Unix based application can operate on a single computer, or on an MPI configured cluster, with or without GPU hardware. With GPU hardware available, even low-cost commercial video cards, the OIS convolution and subtraction times for large images can be accelerated by up to three orders of magnitude.
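At its core, OIS convolves a reference frame with a PSF-matching kernel before subtraction; the kernel fit itself (the expensive, spatially varying part that this work accelerates on GPUs) is omitted here. The sketch below shows only the convolve-and-subtract step with a stand-in Gaussian kernel and synthetic frames; it is not the paper's pipeline.

```python
import numpy as np
from scipy.signal import fftconvolve

def matched_subtraction(science, reference, kernel):
    """Difference image D = science - (reference convolved with kernel)."""
    return science - fftconvolve(reference, kernel, mode="same")

def gaussian_kernel(size=21, sigma=1.5):
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax[:, None]**2 + ax[None, :]**2) / (2 * sigma**2))
    return k / k.sum()

rng = np.random.default_rng(2)
reference = rng.normal(100.0, 1.0, (512, 512))
science = fftconvolve(reference, gaussian_kernel(), mode="same")
science[250, 250] += 500.0              # a synthetic transient source

diff = matched_subtraction(science, reference, gaussian_kernel())
print(float(diff.max()))                # the transient stands out in the difference
```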
Performance Analysis of Several GPS/Galileo Precise Point Positioning Models
Afifi, Akram; El-Rabbany, Ahmed
2015-01-01
This paper examines the performance of several precise point positioning (PPP) models, which combine dual-frequency GPS/Galileo observations in the un-differenced and between-satellite single-difference (BSSD) modes. These include the traditional un-differenced model, the decoupled clock model, the semi-decoupled clock model, and the between-satellite single-difference model. We take advantage of the IGS-MGEX network products to correct for the satellite differential code biases and the orbital and satellite clock errors. Natural Resources Canada’s GPSPace PPP software is modified to handle the various GPS/Galileo PPP models. A total of six data sets of GPS and Galileo observations at six IGS stations are processed to examine the performance of the various PPP models. It is shown that the traditional un-differenced GPS/Galileo PPP model, the GPS decoupled clock model, and the semi-decoupled clock GPS/Galileo PPP model improve the convergence time by about 25% in comparison with the un-differenced GPS-only model. In addition, the semi-decoupled GPS/Galileo PPP model improves the solution precision by about 25% compared to the traditional un-differenced GPS/Galileo PPP model. Moreover, the BSSD GPS/Galileo PPP model improves the solution convergence time by about 50%, in comparison with the un-differenced GPS PPP model, regardless of the type of BSSD combination used. As well, the BSSD model improves the precision of the estimated parameters by about 50% and 25% when the loose and the tight combinations are used, respectively, in comparison with the un-differenced GPS-only model. Comparable results are obtained through the tight combination when either a GPS or a Galileo satellite is selected as a reference. PMID:26102495
Performance Analysis of Several GPS/Galileo Precise Point Positioning Models.
Afifi, Akram; El-Rabbany, Ahmed
2015-06-19
This paper examines the performance of several precise point positioning (PPP) models, which combine dual-frequency GPS/Galileo observations in the un-differenced and between-satellite single-difference (BSSD) modes. These include the traditional un-differenced model, the decoupled clock model, the semi-decoupled clock model, and the between-satellite single-difference model. We take advantage of the IGS-MGEX network products to correct for the satellite differential code biases and the orbital and satellite clock errors. Natural Resources Canada's GPSPace PPP software is modified to handle the various GPS/Galileo PPP models. A total of six data sets of GPS and Galileo observations at six IGS stations are processed to examine the performance of the various PPP models. It is shown that the traditional un-differenced GPS/Galileo PPP model, the GPS decoupled clock model, and the semi-decoupled clock GPS/Galileo PPP model improve the convergence time by about 25% in comparison with the un-differenced GPS-only model. In addition, the semi-decoupled GPS/Galileo PPP model improves the solution precision by about 25% compared to the traditional un-differenced GPS/Galileo PPP model. Moreover, the BSSD GPS/Galileo PPP model improves the solution convergence time by about 50%, in comparison with the un-differenced GPS PPP model, regardless of the type of BSSD combination used. As well, the BSSD model improves the precision of the estimated parameters by about 50% and 25% when the loose and the tight combinations are used, respectively, in comparison with the un-differenced GPS-only model. Comparable results are obtained through the tight combination when either a GPS or a Galileo satellite is selected as a reference.
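The between-satellite single difference used in the BSSD model removes the receiver clock by differencing simultaneous observations of two satellites at one receiver. As a hedged illustration in standard PPP notation (not the paper's exact model), for an observation from receiver r to satellites j and k:

$$P_r^{j} = \rho_r^{j} + c\,dt_r - c\,dt^{j} + T_r^{j} + I_r^{j} + \varepsilon_r^{j},$$

$$P_r^{jk} \equiv P_r^{j} - P_r^{k} = \rho_r^{jk} - c\,dt^{jk} + T_r^{jk} + I_r^{jk} + \varepsilon_r^{jk},$$

so the receiver clock term c dt_r cancels and only differenced geometry, satellite clock, and atmospheric terms remain, which is one reason the BSSD solutions converge faster than the un-differenced ones.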
Orbit determination performances using single- and double-differenced methods: SAC-C and KOMPSAT-2
NASA Astrophysics Data System (ADS)
Hwang, Yoola; Lee, Byoung-Sun; Kim, Haedong; Kim, Jaehoon
2011-01-01
In this paper, Global Positioning System-based (GPS) Orbit Determination (OD) for the KOrea-Multi-Purpose-SATellite (KOMPSAT)-2 using single- and double-differenced methods is studied. The KOMPSAT-2 orbit accuracy requirement is a positioning error of no more than 1 m, needed to generate 1-m panchromatic images. KOMPSAT-2 OD is computed using real on-board GPS data. However, the local time of the KOMPSAT-2 GPS receiver is not synchronized internally with the zero fractional seconds of GPS time, and it continuously drifts according to the pseudorange epochs. In order to resolve this problem, an OD based on single-differenced GPS data from KOMPSAT-2 uses the tagged time of the GPS receiver, and the accuracy of the OD result is assessed using the overlapping orbit solution between two adjacent days. The clock error of the GPS satellites in the KOMPSAT-2 single-differenced method is corrected using International GNSS Service (IGS) clock information at 5-min intervals. KOMPSAT-2 OD using both double- and single-differenced methods satisfies the requirement of 1-m accuracy in overlapping three-dimensional orbit solutions. The results of the SAC-C OD compared with JPL's POE (Precise Orbit Ephemeris) are also presented to demonstrate the implementation of the single- and double-differenced methods using a satellite that has independent orbit information available for validation.
Digital data registration and differencing compression system
NASA Technical Reports Server (NTRS)
Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)
1990-01-01
A process is disclosed for x-ray registration and differencing which results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x-ray digital images.
Digital Data Registration and Differencing Compression System
NASA Technical Reports Server (NTRS)
Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)
1996-01-01
A process for X-ray registration and differencing results in more efficient compression. Differencing of registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic X-ray digital images.
NASA Astrophysics Data System (ADS)
Chavis, Christopher
Using commercial digital cameras in conjunction with Unmanned Aerial Systems (UAS) to generate 3-D Digital Surface Models (DSMs) and orthomosaics is emerging as a cost-effective alternative to Light Detection and Ranging (LiDAR). Powerful software applications such as Pix4D and APS can automate the generation of DSM and orthomosaic products from a handful of inputs. However, the accuracy of these models is relatively untested. The objectives of this study were to generate multiple DSM and orthomosaic pairs of the same area using Pix4D and APS from flights of imagery collected with a lightweight UAS. The accuracy of each individual DSM was assessed in addition to the consistency of the method in modeling one location over a period of time. Finally, this study determined whether the DSMs automatically generated using lightweight UAS and commercial digital cameras could be used for detecting changes in elevation, and at what scale. Accuracy was determined by comparing DSMs to a series of reference points collected with survey-grade GPS. Other GPS points were also used as control points to georeference the products within Pix4D and APS. The effectiveness of the products for change detection was assessed through image differencing and observation of artificially induced, known elevation changes. The vertical accuracy with the optimal data and model is ≈ 25 cm and the highest consistency over repeat flights is a standard deviation of ≈ 5 cm. Elevation change detection based on such UAS imagery and DSM models should be viable for detecting infrastructure change in urban or suburban environments with little dense canopy vegetation.
NASA Astrophysics Data System (ADS)
Martinez-Gutierrez, Genaro
Baja California Sur (Mexico), like mainland Mexico, is affected by tropical cyclones that originate in the eastern North Pacific. Historical records show that Baja has been damaged by intense summer storms. An arid to semiarid climate characterizes the study area, where precipitation mainly occurs during the summer and winter seasons. Natural and anthropogenic changes have impacted the landscape of southern Baja. The present research documents the effects of tropical storms over the southern region of Baja California for a period of approximately twenty-six years. The goal of the research is to demonstrate how remote sensing can be used to detect the important effects of tropical storms, including: (a) evaluation of change detection algorithms, and (b) delineation of changes to the landscape, including coastal modification, fluvial erosion and deposition, vegetation change, and river avulsion. Digital image processing methods with temporal Landsat remotely sensed data from the North America Landscape Characterization archive (NALC), Thematic Mapper (TM), and Enhanced Thematic Mapper (ETM) images were used to document the landscape change. Two image processing methods were tested: image differencing (ID) and principal component analysis (PCA). Landscape changes identified with the NALC archive and TM images showed that the major changes included a rapid change of land use in the towns of San Jose del Cabo and Cabo San Lucas between 1973 and 1986. The features detected using the algorithms included flood deposits within the channels of active streams, erosion banks, and new channels caused by channel avulsion. Despite the 19-year period covered by the NALC data and the approximately 10-year intervals between acquisition dates, changed features could still be identified in the images. The TM images showed that flooding from Hurricane Isis (1998) produced large new deposits within the stream channels. This research has shown that remote sensing based change detection can delineate the effects of flooding on the landscape at scales down to the nominal resolution of the sensor. These findings indicate that many other applications for change detection are both viable and important, including disaster response, flood hazard planning, geomorphic studies, and water supply management in deserts.
Comparing fire severity models from post-fire and pre/post-fire differenced imagery
USDA-ARS's Scientific Manuscript database
Wildland fires are common in rangelands worldwide. The potential for high severity fires to affect long-term changes in rangelands is considerable, and for this reason assessing fire severity shortly after the fire is critical. Such assessments are typically carried out following Burned Area Emergen...
Interferometric observations of an artificial satellite.
Preston, R A; Ergas, R; Hinteregger, H F; Knight, C A; Robertson, D S; Shapiro, I I; Whitney, A R; Rogers, A E; Clark, T A
1972-10-27
Very-long-baseline interferometric observations of radio signals from the TACSAT synchronous satellite, even though extending over only 7 hours, have enabled an excellent orbit to be deduced. Precision in differenced delay and delay-rate measurements reached 0.15 nanosecond (approximately 5 centimeters in equivalent differenced distance) and 0.05 picosecond per second (approximately 0.002 centimeter per second in equivalent differenced velocity), respectively. The results from this initial three-station experiment demonstrate the feasibility of using the method for accurate satellite tracking and for geodesy. Comparisons are made with other techniques.
Study of structural change in volcanic and geothermal areas using seismic tomography
NASA Astrophysics Data System (ADS)
Mhana, Najwa; Foulger, Gillian; Julian, Bruce; peirce, Christine
2014-05-01
Long Valley caldera is a large silicic volcano. It has been in a state of volcanic and seismic unrest since 1978. Further escalation of this unrest could pose a threat to the 5,000 residents and the tens of thousands of tourists who visit the area. We have studied the crustal structure beneath a 28 km x 16 km area using seismic tomography. We performed tomographic inversions for the years 2009 and 2010 with a view to differencing them with the 1997 result, both to look for structural changes with time and to assess whether repeat tomography is capable of resolving changes in structure in volcanic and geothermal reservoirs. If so, it might provide a useful tool for monitoring physical changes in volcanoes and exploited geothermal reservoirs. Up to 600 earthquakes, selected from the best-quality events, were used for the inversion. The inversions were performed using the program simulps12 [Thurber, 1983]. Our initial results show that changes in both Vp and Vs are consistent with the migration of CO2 into the upper 2 km or so. Our ongoing work will also invert pairs of years simultaneously using a new program, tomo4d [Julian and Foulger, 2010]. This program inverts for the differences in structure between two epochs, so it can provide a more reliable measure of structural change than simply differencing the results of individual years.
Space-based observations of megacity carbon dioxide
NASA Astrophysics Data System (ADS)
Kort, Eric A.; Frankenberg, Christian; Miller, Charles E.; Oda, Tom
2012-09-01
Urban areas now house more than half the world's population, and are estimated to contribute over 70% of global energy-related CO2 emissions. Many cities have emission reduction policies in place, but lack objective, observation-based methods for verifying their outcomes. Here we demonstrate the potential of satellite-borne instruments to provide accurate global monitoring of megacity CO2 emissions using GOSAT observations of column averaged CO2 dry air mole fraction (XCO2) collected over Los Angeles and Mumbai. By differencing observations over the megacity with those in nearby background, we observe robust, statistically significant XCO2 enhancements of 3.2 ± 1.5 ppm for Los Angeles and 2.4 ± 1.2 ppm for Mumbai, and find these enhancements can be exploited to track anthropogenic emission trends over time. We estimate that XCO2 changes as small as 0.7 ppm in Los Angeles, corresponding to a 22% change in emissions, could be detected with GOSAT at the 95% confidence level.
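The enhancement statistic described here is essentially the mean of urban soundings minus the mean of nearby background soundings, with an uncertainty derived from the spread of both samples. A minimal sketch with synthetic numbers (not GOSAT data) follows.

```python
import numpy as np

rng = np.random.default_rng(3)
city_xco2 = rng.normal(398.2, 1.5, 60)          # ppm, hypothetical urban soundings
background_xco2 = rng.normal(395.0, 1.2, 80)    # ppm, hypothetical background soundings

enhancement = city_xco2.mean() - background_xco2.mean()
# Simple standard error of the difference of two independent sample means.
se = np.sqrt(city_xco2.var(ddof=1) / city_xco2.size +
             background_xco2.var(ddof=1) / background_xco2.size)

print(f"XCO2 enhancement: {enhancement:.2f} +/- {1.96 * se:.2f} ppm (95% CI)")
```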
de Vine, Glenn; McClelland, David E; Gray, Malcolm B; Close, John D
2005-05-15
We present an experimental technique that permits mechanical-noise-free, cavity-enhanced frequency measurements of an atomic transition and its hyperfine structure. We employ the 532-nm frequency-doubled output from a Nd:YAG laser and an iodine vapor cell. The cell is placed in a folded ring cavity (FRC) with counterpropagating pump and probe beams. The FRC is locked with the Pound-Drever-Hall technique. Mechanical noise is rejected by differencing the pump and probe signals. In addition, this differenced error signal provides a sensitive measure of differential nonlinearity within the FRC.
NASA Technical Reports Server (NTRS)
Estefan, J. A.; Thurman, S. W.
1992-01-01
An approximate six-parameter analytic model for Earth-based differenced range measurements is presented and is used to derive a representative analytic approximation for differenced Doppler measurements. The analytical models are used to investigate the ability of these data types to estimate spacecraft geocentric angular motion, Deep Space Network station oscillator (clock/frequency) offsets, and signal-path calibration errors over a period of a few days, in the presence of systematic station location and transmission media calibration errors. Quantitative results indicate that a few differenced Doppler plus ranging passes yield angular position estimates with a precision on the order of 0.1 to 0.4 micro-rad, and angular rate precision on the order of 10 to 25 x 10^-12 rad/sec, assuming no a priori information on the coordinate parameters. Sensitivity analyses suggest that troposphere zenith delay calibration error is the dominant systematic error source in most of the tracking scenarios investigated; as expected, the differenced Doppler data were found to be much more sensitive to troposphere calibration errors than differenced range. By comparison, results computed using wideband and narrowband delta-VLBI under similar circumstances yielded angular precisions of 0.07 to 0.4 micro-rad, and angular rate precisions of 0.5 to 1.0 x 10^-12 rad/sec.
NASA Astrophysics Data System (ADS)
Jacquet, J.; McCoy, S. W.; McGrath, D.; Nimick, D.; Friesen, B.; Fahey, M. J.; Leidich, J.; Okuinghttons, J.
2015-12-01
The Colonia river system, draining the eastern edge of the Northern Patagonia Icefield, Chile, has experienced a dramatic shift in flow regime from one characterized by seasonal discharge variability to one dominated by episodic glacial lake outburst floods (GLOFs). We use multi-temporal visible satellite images, high-resolution digital elevation models (DEMs) derived from stereo image pairs, and in situ observations to quantify sediment and water fluxes out of the dammed glacial lake, Lago Cachet Dos (LC2), as well as the concomitant downstream environmental change. GLOFs initiated in April 2008 and have since occurred, on average, two to three times a year. Differencing concurrent gage measurements made on the Baker River upstream and downstream of the confluence with the Colonia river yields peak GLOF discharges of ~3,000 m3 s-1, which is ~4 times the median discharge of the Baker River and over 20 times the median discharge of the Colonia river. During each GLOF, ~200,000,000 m3 of water evacuates from LC2, resulting in erosion of valley-fill sediments and the delta on the upstream end of LC2. Differencing DEMs between April 2008 and February 2014 revealed that ~2.5 x 10^7 m3 of sediment was eroded. Multi-temporal DEM differencing shows that erosion rates were highest initially, with >20 vertical m of sediment removed between 2008 and 2012, and generally less than 5 m between 2012 and 2014. The downstream Colonia River Sandur also experienced geomorphic changes due to GLOFs. Using Landsat imagery to calculate the normalized difference water index (NDWI), we demonstrate that the Colonia River was in a stable configuration between 1984 and 2008. At the onset of GLOFs in April 2008, a change in channel location began and continued with each subsequent GLOF. Quantification of sediment and water fluxes due to GLOFs in the Colonia river valley provides insight into the geomorphic and environmental changes in river systems experiencing dramatic shifts in flow regime.
Mass Loss of Larsen B Tributary Glaciers (Antarctic Peninsula) Unabated Since 2002
NASA Technical Reports Server (NTRS)
Berthier, Etienne; Scambos, Ted; Shuman, Christopher A.
2012-01-01
Ice mass loss continues at a high rate among the large glacier tributaries of the Larsen B Ice Shelf following its disintegration in 2002. We evaluate recent mass loss by mapping elevation changes between 2006 and 2010/11 using differencing of digital elevation models (DEMs). The measurement accuracy of these elevation changes is confirmed by a null test, subtracting DEMs acquired within a few weeks. The overall 2006-2010/11 mass loss rate (9.0 ± 2.1 Gt a-1) is similar to the 2001/02-2006 rate (8.8 ± 1.6 Gt a-1), derived using DEM differencing and laser altimetry. This unchanged overall loss masks a varying pattern of thinning and ice loss for individual glacier basins. On Crane Glacier, the thinning pulse, initially greatest near the calving front, is now broadening and migrating upstream. The largest losses are now observed for the Hektoria-Green glacier basin, having increased by 33% since 2006. Our method has enabled us to resolve large residual uncertainties in the Larsen B sector and confirm its state of ongoing rapid mass loss.
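Converting DEM-differenced elevation change to a mass-loss rate is, in outline, a sum of thickness change times pixel area times an assumed density, divided by the time span. The sketch below illustrates only that arithmetic; the grid, the 40 m cell size, the 900 kg m-3 density, and the time span are made-up assumptions, not the study's values.

```python
import numpy as np

dh = np.full((1000, 1000), -0.9)     # hypothetical elevation change over the period (m)
pixel_area = 40.0 * 40.0             # assumed DEM cell area (m^2)
rho = 900.0                          # assumed density of the lost material (kg m^-3)
dt_years = 4.5                       # assumed time span between DEMs (a)

mass_change = (dh * pixel_area * rho).sum()       # kg over the whole grid
rate_gt_per_yr = mass_change / dt_years / 1e12    # Gt a^-1 (1 Gt = 1e12 kg)

print(f"mass balance: {rate_gt_per_yr:.2f} Gt/a")
```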
Space Monitoring of urban sprawl
NASA Astrophysics Data System (ADS)
Nole, G.; Lanorte, A.; Murgante, B.; Lasaponara, R.
2012-04-01
During the last few decades, in many regions throughout the world the abandonment of agricultural land has induced a high concentration of people in densely populated urban areas. The deep social, economic and environmental changes have caused strong and extensive land cover changes. This is regarded as a pressing issue that calls for a clear understanding of the ongoing trends and future urban expansion. The main issues of importance in modelling urban growth include spatial and temporal dynamics, scale dynamics, and man-induced land use changes. Although urban growth is perceived as necessary for a sustainable economy, uncontrolled or sprawling urban growth can cause various problems, such as the loss of open space, landscape alteration, environmental pollution, traffic congestion, infrastructure pressure, and other social and economic issues. To face these drawbacks, continuous monitoring of urban growth evolution, in terms of the type and extent of changes over time, is essential for supporting planners and decision makers in future urban planning. A critical point for understanding and monitoring urban expansion processes is the availability of both (i) time-series data sets and (ii) updated information on the current urban spatial structure, in order to define and locate the evolution trends. In such a context, an effective contribution can be offered by satellite remote sensing technologies, which are able to provide both a historical data archive and up-to-date imagery. Satellite technologies represent a cost-effective means of obtaining useful data that can be easily and systematically updated for the whole globe. Nowadays medium resolution satellite images, such as Landsat TM or ASTER, can be downloaded free of charge from the NASA web site. Satellite imagery, along with robust data analysis techniques, can be used for monitoring and planning purposes, as it enables the reporting of ongoing trends of urban growth at a detailed level. Nevertheless, the exploitation of satellite Earth Observation for urban growth monitoring is relatively new, although during the last three decades great efforts have been devoted to the application of remote sensing in detecting land use and land cover changes using a number of data analyses, such as: (i) spectral enhancement based on vegetation index differencing, principal component analysis, image differencing and visual interpretation and/or classification; (ii) post-classification change differencing and a combination of image enhancement and post-classification comparison; (iii) mixture analysis; (iv) artificial neural networks; (v) landscape metrics (patchiness and map density); and (vi) the integration of geographical information systems and remote sensing data. In this paper, a comparison of the methods listed above is carried out using a satellite time series made up of Landsat MSS, TM, ETM+ and ASTER data for test areas selected in the South of Italy and in Cairo, in order to extract and quantify urban sprawl and its spatial and temporal feature patterns.
NASA Astrophysics Data System (ADS)
Hanagan, C.; La Femina, P.
2017-12-01
Understanding the processes that lead to volcanic eruptions is paramount for predicting future volcanic activity. Telica volcano, Nicaragua is a persistently active volcano with hundreds of daily, low magnitude and low frequency seismic events, high-temperature degassing, and sub-decadal VEI 1-3 eruptions. The phreatic vulcanian eruptions of 1999, 2011, and 2013, and the phreatic to phreatomagmatic vulcanian eruption of 2015, are thought to have resulted from sealing of the hydrothermal system prior to the eruptions. Two mechanisms have been proposed for sealing of the volcanic system: hydrothermal mineralization and landslides covering the vent. These eruptions affect the crater morphology of Telica volcano, and therefore the exact mechanisms of change to the crater's form are of interest to provide data that may support or refute the proposed sealing mechanisms, improving our understanding of eruption mechanisms. We use a collection of photographs between February 1994 and May 2016 and a combination of qualitative and quantitative photogrammetry to detect the extent and type of changes in crater morphology associated with the 2011, 2013, and 2015 eruptive activity. We produced dense point cloud models using Agisoft PhotoScan Professional for times with sufficient photographic coverage, including August 2011, March 2013, December 2015, March 2016, and May 2016. Our May 2016 model is georeferenced, and each other point cloud was differenced using the C2C tool in CloudCompare and the M3C2 method (CloudCompare plugin; Lague et al., 2013). Results of the qualitative observations and quantitative differencing reveal a general trend of material subtraction from the inner crater walls associated with eruptive activity and accumulation of material on the crater floor, often visibly sourced from the walls of the crater. Both daily activity and VEI 1-3 explosive events changed the crater morphology, and a correlation exists between a landslide-covered vent and the 2011 and 2015 eruptive sequences. Though further study and integration with other data sets is required, a positive feedback mechanism between accumulation of material blocking the vent, eruption, and subsequent accumulation of material to re-block the vent remains possible.
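The cloud-to-cloud (C2C) comparison mentioned here reduces, in its simplest form, to finding for each point of the later cloud the nearest neighbour in the earlier cloud and reporting that distance. The sketch below shows only that nearest-neighbour step with SciPy on synthetic clouds; it is not the CloudCompare implementation, and the M3C2 method adds normal-oriented, locally averaged distances that are not reproduced here.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
cloud_before = rng.random((50_000, 3)) * 100.0               # hypothetical crater cloud (m)
cloud_after = cloud_before + np.array([0.0, 0.0, -0.5])       # pretend the floor dropped 0.5 m

tree = cKDTree(cloud_before)
c2c_distance, _ = tree.query(cloud_after, k=1)                # distance to nearest earlier point

print(f"median C2C distance: {np.median(c2c_distance):.2f} m")
```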
NASA Astrophysics Data System (ADS)
Moeeni, Hamid; Bonakdari, Hossein; Fatemi, Seyed Ehsan
2017-04-01
Because time series stationarization has a key role in stochastic modeling results, three methods are analyzed in this study. The methods are seasonal differencing, seasonal standardization and spectral analysis, used to eliminate the periodic effect on time series stationarity. First, six time series, including 4 streamflow series and 2 water temperature series, are stationarized. The stochastic term for these series obtained with ARIMA is subsequently modeled. For the analysis, 9228 models are introduced. It is observed that seasonal standardization and spectral analysis eliminate the periodic term completely, while seasonal differencing maintains seasonal correlation structures. The obtained results indicate that all three methods present acceptable performance overall. However, model accuracy in monthly streamflow prediction is higher with seasonal differencing than with the other two methods. Another advantage of seasonal differencing over the other methods is that the monthly streamflow is never estimated as negative. Standardization is the best method for predicting monthly water temperature, although it is quite similar to seasonal differencing, while spectral analysis performed the weakest in all cases. It is concluded that for each monthly seasonal series, seasonal differencing is the best stationarization method in terms of periodic effect elimination. Moreover, the monthly water temperature is predicted with more accuracy than the monthly streamflow. The values of the criterion defined as the average stochastic term divided by the amplitude of the periodic term, obtained for monthly streamflow and monthly water temperature, were 0.19 and 0.30, 0.21 and 0.13, and 0.07 and 0.04, respectively. As a result, the periodic term is more dominant relative to the stochastic term for the monthly water temperature series than for the streamflow series.
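The two best-performing pre-processing steps compared here, seasonal (lag-12) differencing and seasonal standardization, are simple to state; the sketch below applies both to a synthetic monthly series with pandas. The synthetic series and the lag of 12 are the only assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
months = pd.date_range("1990-01", periods=240, freq="MS")
seasonal = 10 + 5 * np.sin(2 * np.pi * months.month / 12)
flow = pd.Series(seasonal + rng.normal(0, 1, months.size), index=months)

# Seasonal differencing: y_t - y_{t-12} removes the periodic component
# but keeps the seasonal correlation structure.
diffed = flow.diff(12).dropna()

# Seasonal standardization: subtract each calendar month's mean, divide by its std.
monthly_mean = flow.groupby(flow.index.month).transform("mean")
monthly_std = flow.groupby(flow.index.month).transform("std")
standardized = (flow - monthly_mean) / monthly_std

print(diffed.std(), standardized.std())
```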
Near Real-Time Event Detection & Prediction Using Intelligent Software Agents
2006-03-01
value was 0.06743. Multiple autoregressive integrated moving average (ARIMA) models were then built to see if the raw data, differenced data, or...slight improvement. The best adjusted r^2 value was found to be 0.1814. Successful results were not expected from linear or ARIMA-based modelling...appear, 2005. [63] Mora-Lopez, L., Mora, J., Morales-Bueno, R., et al. Modelling time series of climatic parameters with probabilistic finite
A method of real-time detection for distant moving obstacles by monocular vision
NASA Astrophysics Data System (ADS)
Jia, Bao-zhi; Zhu, Ming
2013-12-01
In this paper, we propose an approach for the detection of distant moving obstacles such as cars and bicycles by a monocular camera, to cooperate with ultrasonic sensors under low-cost conditions. We aim at detecting distant obstacles that move toward our autonomous navigation car in order to raise an alarm and keep away from them. A frame differencing method is applied to find obstacles after compensation of the camera's ego-motion. Meanwhile, each obstacle is separated from the others in an independent area and given a confidence level to indicate whether it is coming closer. The results on an open dataset and on our own autonomous navigation car have proved that the method is effective for the detection of distant moving obstacles in real time.
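After ego-motion compensation, the detection step described here amounts to thresholding the absolute difference of consecutive frames and grouping the remaining pixels into candidate obstacle regions. The sketch below shows that core step with NumPy/SciPy on synthetic frames; it omits the compensation, confidence scoring, and distance estimation described in the paper, and the intensity threshold is an assumption.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(6)
frame_prev = rng.integers(0, 50, (240, 320)).astype(np.int16)
frame_curr = frame_prev.copy()
frame_curr[100:140, 200:230] += 80       # a synthetic moving object

diff = np.abs(frame_curr - frame_prev)   # frame differencing
mask = diff > 30                          # assumed intensity threshold

labels, n_regions = ndimage.label(mask)   # connected components = obstacle candidates
boxes = ndimage.find_objects(labels)
print(f"{n_regions} candidate region(s):", boxes)
```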
Generalized Abstract Symbolic Summaries
NASA Technical Reports Server (NTRS)
Person, Suzette; Dwyer, Matthew B.
2009-01-01
Current techniques for validating and verifying program changes often consider the entire program, even for small changes, leading to enormous V&V costs over a program's lifetime. This is due, in large part, to the use of syntactic program techniques which are necessarily imprecise. Building on recent advances in symbolic execution of heap-manipulating programs, in this paper, we develop techniques for performing abstract semantic differencing of program behaviors that offer the potential for improved precision.
Basic research for the Earth dynamics program
NASA Technical Reports Server (NTRS)
1981-01-01
The technique of range differencing with Lageos ranges to obtain more accurate estimates of baseline lengths and polar motion variation was studied. Differencing quasi-simultaneous range observations eliminates a great deal of the orbital biases. Progress is reported on the definition and maintenance of a conventional terrestrial reference system.
ERIC Educational Resources Information Center
Liberto, Giuliana
2016-01-01
Research into the impact of non-consultative home education regulatory change in New South Wales (NSW), Australia, identified clear benefits of a child-led, interest-inspired approach to learning and a negative impact on student learning and well-being outcomes, particularly for learning-differenced children, of restricted practice freedom.…
CINDA-3G: Improved Numerical Differencing Analyzer Program for Third-Generation Computers
NASA Technical Reports Server (NTRS)
Gaski, J. D.; Lewis, D. R.; Thompson, L. R.
1970-01-01
The goal of this work was to develop a new and versatile program to supplement or replace the original Chrysler Improved Numerical Differencing Analyzer (CINDA) thermal analyzer program in order to take advantage of the improved systems software and machine speeds of the third-generation computers.
Post-fire Thermokarst Development Along a Planned Road Corridor in Arctic Alaska
NASA Astrophysics Data System (ADS)
Jones, B. M.; Grosse, G.; Larsen, C. F.; Hayes, D. J.; Arp, C. D.; Liu, L.; Miller, E.
2015-12-01
Wildfire disturbance in northern high latitude regions is an important factor contributing to ecosystem and landscape change. In permafrost influenced terrain, fire may initiate thermokarst development which impacts hydrology, vegetation, wildlife, carbon storage and infrastructure. In this study we differenced two airborne LiDAR datasets that were acquired in the aftermath of the large and severe Anaktuvuk River tundra fire, which in 2007 burned across a proposed road corridor in Arctic Alaska. The 2009 LiDAR dataset was acquired by the Alaska Department of Transportation in preparation for construction of a gravel road that would connect the Dalton Highway with the logistical camp of Umiat. The 2014 LiDAR dataset was acquired by the USGS to quantify potential post-fire thermokarst development over the first seven years following the tundra fire event. By differencing the two 1 m resolution digital terrain models, we measured permafrost thaw subsidence across 34% of the burned tundra area studied, and observed less than 1% in similar, undisturbed tundra terrain units. Ice-rich, yedoma upland terrain was most susceptible to thermokarst development following the disturbance, accounting for 50% of the areal and volumetric change detected, with some locations subsiding more than six meters over the study period. Calculation of rugosity, or surface roughness, in the two datasets showed a doubling in microtopography on average across the burned portion of the study area, with a 340% increase in yedoma upland terrain. An additional LiDAR dataset was acquired in April 2015 to document the role of thermokarst development on enhanced snow accumulation and subsequent snowmelt runoff within the burn area. Our findings will enable future vulnerability assessments of ice-rich permafrost terrain as a result of shifting disturbance regimes. Such assessments are needed to address questions focused on the impact of permafrost degradation on physical, ecological, and socio-economic processes.
Dórea, Fernanda C.; McEwen, Beverly J.; McNab, W. Bruce; Revie, Crawford W.; Sanchez, Javier
2013-01-01
Diagnostic test orders to an animal laboratory were explored as a data source for monitoring trends in the incidence of clinical syndromes in cattle. Four years of real data and over 200 simulated outbreak signals were used to compare pre-processing methods that could remove temporal effects in the data, as well as temporal aberration detection algorithms that provided high sensitivity and specificity. Weekly differencing demonstrated solid performance in removing day-of-week effects, even in series with low daily counts. For aberration detection, the results indicated that no single algorithm showed performance superior to all others across the range of outbreak scenarios simulated. Exponentially weighted moving average charts and Holt–Winters exponential smoothing demonstrated complementary performance, with the latter offering an automated method to adjust to changes in the time series that will likely occur in the future. Shewhart charts provided lower sensitivity but earlier detection in some scenarios. Cumulative sum charts did not appear to add value to the system; however, the poor performance of this algorithm was attributed to characteristics of the data monitored. These findings indicate that automated monitoring aimed at early detection of temporal aberrations will likely be most effective when a range of algorithms are implemented in parallel. PMID:23576782
Dórea, Fernanda C; McEwen, Beverly J; McNab, W Bruce; Revie, Crawford W; Sanchez, Javier
2013-06-06
Diagnostic test orders to an animal laboratory were explored as a data source for monitoring trends in the incidence of clinical syndromes in cattle. Four years of real data and over 200 simulated outbreak signals were used to compare pre-processing methods that could remove temporal effects in the data, as well as temporal aberration detection algorithms that provided high sensitivity and specificity. Weekly differencing demonstrated solid performance in removing day-of-week effects, even in series with low daily counts. For aberration detection, the results indicated that no single algorithm showed performance superior to all others across the range of outbreak scenarios simulated. Exponentially weighted moving average charts and Holt-Winters exponential smoothing demonstrated complementary performance, with the latter offering an automated method to adjust to changes in the time series that will likely occur in the future. Shewhart charts provided lower sensitivity but earlier detection in some scenarios. Cumulative sum charts did not appear to add value to the system; however, the poor performance of this algorithm was attributed to characteristics of the data monitored. These findings indicate that automated monitoring aimed at early detection of temporal aberrations will likely be most effective when a range of algorithms are implemented in parallel.
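Two of the better-performing components reported in this study, weekly (lag-7) differencing to remove day-of-week effects and an exponentially weighted moving average (EWMA) chart for aberration detection, are easy to combine in a few lines. The sketch below is a generic illustration with synthetic counts; the smoothing constant and the 3-sigma limit are assumptions, and it is not the authors' implementation.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
days = pd.date_range("2012-01-01", periods=365, freq="D")
baseline = 20 + 5 * (days.dayofweek < 5)               # weekday effect
counts = pd.Series(rng.poisson(baseline), index=days)
counts.iloc[300:305] += 25                              # simulated outbreak signal

deseasoned = counts.diff(7).dropna()                    # weekly differencing

lam, k = 0.3, 3.0                                       # assumed EWMA smoothing and limit
ewma = deseasoned.ewm(alpha=lam).mean()
sigma = deseasoned.std() * np.sqrt(lam / (2 - lam))     # steady-state EWMA std
alarms = ewma[ewma > k * sigma]
print(alarms.index.min())                               # first alarm date, if any
```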
Property Differencing for Incremental Checking
NASA Technical Reports Server (NTRS)
Yang, Guowei; Khurshid, Sarfraz; Person, Suzette; Rungta, Neha
2014-01-01
This paper introduces iProperty, a novel approach that facilitates incremental checking of programs based on a property differencing technique. Specifically, iProperty aims to reduce the cost of checking properties as they are initially developed and as they co-evolve with the program. The key novelty of iProperty is to compute the differences between the new and old versions of expected properties to reduce the number and size of the properties that need to be checked during the initial development of the properties. Furthermore, property differencing is used in synergy with program behavior differencing techniques to optimize common regression scenarios, such as detecting regression errors or checking feature additions for conformance to new expected properties. Experimental results in the context of symbolic execution of Java programs annotated with properties written as assertions show the effectiveness of iProperty in utilizing change information to enable more efficient checking.
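The core idea, checking only the properties that changed between versions rather than the whole specification, can be illustrated with a toy set-difference over named assertions. This is a deliberately simplified conceptual sketch, not the iProperty tool; the property names and expressions are made up.

```python
# Toy illustration: properties are (name, expression) pairs attached to a method.
old_properties = {
    "non_null": "result != null",
    "sorted": "isSorted(result)",
}
new_properties = {
    "non_null": "result != null",                  # unchanged -> no re-check needed alone
    "sorted": "isStrictlySorted(result)",          # modified  -> must be re-checked
    "bounded": "result.length <= input.length",    # added     -> must be checked
}

# Only properties that are new or whose expression changed need checking.
changed_or_added = {
    name: expr for name, expr in new_properties.items()
    if old_properties.get(name) != expr
}
print(changed_or_added)
```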
Kiage, L.M.; Walker, N.D.; Balasubramanian, S.; Babin, A.; Barras, J.
2005-01-01
The Louisiana coast is subjected to hurricane impacts including flooding of human settlements, river channels and coastal marshes, and salt water intrusion. Information on the extent of flooding is often required quickly for emergency relief, repairs of infrastructure, and production of flood risk maps. This study investigates the feasibility of using Radarsat-1 SAR imagery to detect flooded areas in coastal Louisiana after Hurricane Lili, October 2002. Arithmetic differencing and multi-temporal enhancement techniques were employed to detect flooding and to investigate relationships between backscatter and water level changes. Strong positive correlations (R2 = 0.7-0.94) were observed between water level and SAR backscatter within marsh areas proximate to Atchafalaya Bay. Although variations in elevation and vegetation type did influence and complicate the radar signature at individual sites, multi-date differences in backscatter largely reflected the patterns of flooding within large marsh areas. Preliminary analyses show that SAR imagery was not useful in mapping urban flooding in New Orleans after Hurricane Katrina's landfall on 29 August 2005. © 2005 Taylor & Francis.
Improved method for detecting local discontinuities in CMB data by finite differencing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowyer, Jude; Jaffe, Andrew H.
2011-01-15
An unexpected distribution of temperatures in the CMB could be a sign of new physics. In particular, the existence of cosmic defects could be indicated by temperature discontinuities via the Kaiser-Stebbins effect. In this paper, we show how performing finite differences on a CMB map, with the noise regularized in harmonic space, may expose such discontinuities, and we report the results of this process on the 7-year Wilkinson Microwave Anisotropy Probe data.
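The detection statistic here is built from finite differences of the (noise-regularized) temperature map; a line-like discontinuity shows up as a ridge in the gradient magnitude. The sketch below applies plain first differences with NumPy to a synthetic map; the harmonic-space noise regularization used in the paper is not reproduced, and the step amplitude is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(8)
cmb_map = rng.normal(0.0, 1.0, (256, 256))
cmb_map[:, 128:] += 4.0                   # synthetic step discontinuity in the map

gy, gx = np.gradient(cmb_map)             # finite differences along each axis
gradient_magnitude = np.hypot(gy, gx)

# The column containing the step stands out against the noise background.
print(gradient_magnitude[:, 127].mean(), gradient_magnitude[:, 60].mean())
```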
Enhancement of snow cover change detection with sparse representation and dictionary learning
NASA Astrophysics Data System (ADS)
Varade, D.; Dikshit, O.
2014-11-01
Sparse representation and decoding are often used for denoising images and compressing images with respect to their inherent features. In this paper, we adopt a methodology incorporating sparse representation of a snow cover change map using a K-SVD trained dictionary and sparse decoding to enhance the change map. Pixels often falsely characterized as "changes" are eliminated using this approach. The preliminary change map was generated using differenced NDSI or S3 maps in the case of Resourcesat-2 and Landsat 8 OLI imagery, respectively. These maps are extracted into patches for compressed sensing using the Discrete Cosine Transform (DCT) to generate an initial dictionary, which is trained by the K-SVD approach. The trained dictionary is used for sparse coding of the change map using the Orthogonal Matching Pursuit (OMP) algorithm. The reconstructed change map incorporates a greater degree of smoothing and represents the features (snow cover changes) with better accuracy. The enhanced change map is segmented using k-means to discriminate between changed and non-changed pixels. The segmented enhanced change map is compared, firstly, with the difference of Support Vector Machine (SVM) classified NDSI maps and, secondly, with reference data generated as a mask by visual interpretation of the two input images. The methodology is evaluated using multi-spectral datasets from Resourcesat-2 and Landsat-8. The k-hat statistic is computed to determine the accuracy of the proposed approach.
Digital data registration and differencing compression system
NASA Technical Reports Server (NTRS)
Ransford, Gary A. (Inventor); Cambridge, Vivien J. (Inventor)
1992-01-01
A process for x-ray registration and differencing which results in more efficient compression is discussed. Differencing of a registered modeled subject image with a modeled reference image forms a differenced image for compression with conventional compression algorithms. Obtention of a modeled reference image includes modeling a relatively unrelated standard reference image upon a three-dimensional model, which three-dimensional model is also used to model the subject image for obtaining the modeled subject image. The registration process of the modeled subject image and modeled reference image translationally correlates such modeled images for resulting correlation thereof in spatial and spectral dimensions. Prior to compression, a portion of the image falling outside a designated area of interest may be eliminated, for subsequent replenishment with a standard reference image. The compressed differenced image may be subsequently transmitted and/or stored, for subsequent decompression and addition to a standard reference image so as to form a reconstituted or approximated subject image at either a remote location and/or at a later moment in time. Overall effective compression ratios of 100:1 are possible for thoracic x-ray digital images.
NASA Astrophysics Data System (ADS)
Biswas, T.; Maus, P.; Megown, K.
2011-12-01
The U.S. Forest Service (USFS) provided technical support to the Resource Information Management System (RIMS) unit of the Forest Department (FD) of Bangladesh in developing a method to monitor changes within the Sundarbans Reserve Forest using remote sensing and GIS technology, in support of the Reducing Emissions from Deforestation and Forest Degradation (REDD+) initiatives within Bangladesh. This included comparing the simple image differencing method with the Z-score outlier change detection method to examine changes within the mangroves of Bangladesh. Landsat data from three time periods (1989, 1999, 2009) were used to quantify change within four canopy cover classes (High, Medium, Low, and Very Low) within the Sundarbans. The Z-score change analysis and image differencing were done for all six reflective Landsat bands and for two spectral indices, NDVI and NDMI, derived from these bands for each year. Our results indicated very subtle changes in the mangrove forest within the past twenty years, and the Z-score analysis was found to be more useful in capturing these subtle changes than the simple image difference method. Percent change in the Z-score of NDVI provided the most meaningful index of vegetation change. It was used to summarize change for the entire study area by pixel, by canopy cover class, and by management compartment during this analysis. Our analysis showed less than 5% overall change in area within the mangroves for the entire study period. Percent change in forest canopy cover decreased from 4% in 1989-99 to 2% in 1999-2009, indicating an increase in forest canopy cover. Percent change in the NDVI Z-score of each pixel was used to compute the overall percent change in Z-score within the entire study area, and the mean percent change within each canopy cover class and management compartment, from 1989 to 1999 and from 1999 to 2009. The above analysis provided insight into the spatial distribution of percent change in NDVI between the study periods and helped in identifying potential areas for management intervention. The mean distribution of change from both study periods was observed within ± 20% SD. Our results were in agreement with an independent field study conducted by the US Forest Service earlier the same year for biomass and carbon stock estimation. The 10 m field plots that showed a decline in carbon stock between 1995 and 2010 overall coincided with the compartments or regions that showed a decline in forest canopy cover between 1999 and 2009 in the present analysis. These results led us to believe that the Z-score analysis can be a quantitatively rigorous tool to quantify change in ecosystems that are mostly stable and do not undergo drastic land use or land cover change. The field and remote sensing studies together provided important scientific information and direction for future management of the forest resources, baseline information for long-term monitoring of the forest, and a basis for identifying potential REDD+ carbon financing projects in the Sundarbans, as well as at other potential REDD+ sites within the forested area of Bangladesh. Given the rising concern and interest in the REDD+ initiative, we consider the Z-score analysis to be a useful tool for monitoring and providing a quick spatial assessment of change using remote sensing technology.
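The Z-score approach described here standardizes each date's NDVI before differencing, so that subtle departures from the scene-wide distribution are emphasized over absolute changes. A minimal sketch follows; the arrays are synthetic, and the percent-change normalization shown is one plausible reading of the text, not the authors' exact formula.

```python
import numpy as np

rng = np.random.default_rng(9)
ndvi_1999 = np.clip(rng.normal(0.6, 0.1, (300, 300)), -1, 1)
ndvi_2009 = ndvi_1999.copy()
ndvi_2009[50:80, 50:80] -= 0.15           # synthetic canopy loss

z99 = (ndvi_1999 - ndvi_1999.mean()) / ndvi_1999.std()
z09 = (ndvi_2009 - ndvi_2009.mean()) / ndvi_2009.std()

dz = z09 - z99                             # change in standardized NDVI
pct_change_z = 100 * dz / np.maximum(np.abs(z99), 1e-6)

changed = np.abs(dz) > 2                   # assumed 2-sigma change threshold
print(f"{100 * changed.mean():.2f}% of pixels flagged; "
      f"mean percent change in Z over flagged pixels: {pct_change_z[changed].mean():.1f}%")
```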
NASA Technical Reports Server (NTRS)
Berman, A. L.
1977-01-01
Observations of Viking differenced S-band/X-band (S-X) range are shown to correlate strongly with Viking Doppler noise. A ratio of proportionality between downlink S-band plasma-induced range error and two-way Doppler noise is calculated. A new parameter (similar to the parameter epsilon, which defines the ratio of local electron density fluctuations to mean electron density) is defined as a function of the observed data sample interval (Tau), where the time-scale of the observations is 15 Tau. This parameter is interpreted as the ratio of net observed phase (or electron density) fluctuations to integrated electron density (in RMS meters/meter). Using this parameter and the thin phase-changing screen approximation, a value for the scale size L is calculated. To be consistent with the Doppler noise observations, L must be proportional to the closest approach distance a and a strong function of the observed data sample interval, and hence of the time-scale of the observations.
Continuous non-invasive blood glucose monitoring by spectral image differencing method
NASA Astrophysics Data System (ADS)
Huang, Hao; Liao, Ningfang; Cheng, Haobo; Liang, Jing
2018-01-01
Currently, implantable enzyme electrode sensors are the main method for continuous blood glucose monitoring. However, electrochemical reactions and the significant drift caused by bioelectricity in the body reduce the accuracy of the glucose measurements, so enzyme-based glucose sensors must be calibrated several times each day with finger-prick blood corrections, which increases the patient's pain. In this paper, we propose a method for continuous non-invasive blood glucose monitoring by spectral image differencing in the near-infrared band. The method uses a high-precision CCD detector and rapid filter switching to obtain the spectral images; a morphological method is then used to obtain the spectral image differences, in which the dynamic change of blood glucose is reflected. Experiments showed that this method can be used to monitor blood glucose dynamically to a certain extent.
Determination of mangrove change in Matang Mangrove Forest using multi temporal satellite imageries
NASA Astrophysics Data System (ADS)
Ibrahim, N. A.; Mustapha, M. A.; Lihan, T.; Ghaffar, M. A.
2013-11-01
Mangroves protect shorelines from damaging storm and hurricane winds, waves, and floods. Mangroves also help prevent erosion by stabilizing sediments with their tangled root systems, and they maintain water quality and clarity by filtering pollutants and trapping sediments originating from land. However, mangroves have been reported to be threatened by land conversion for other activities. In this study, land use and land cover changes in the Matang Mangrove Forest during the past 18 years (1993 to 2011) were determined using multi-temporal satellite imagery from Landsat TM and RapidEye. Land use and land cover classification was performed using the maximum likelihood classifier (MLC) along with an NDVI differencing technique. Accuracy was evaluated using the Kappa coefficient; the classification accuracy was 81.25% with a Kappa statistic of 0.78. The results indicated conversion of mangrove forest area to water bodies (2,490.6 ha), aquaculture (890.7 ha), horticulture (1,646.1 ha), oil palm (1,959.2 ha), dry land forest (2,906.7 ha), and urban settlement (224.1 ha). Combining these approaches was useful for change detection and for indicating the nature of these changes.
Unsupervised change detection in a particular vegetation land cover type using spectral angle mapper
NASA Astrophysics Data System (ADS)
Renza, Diego; Martinez, Estibaliz; Molina, Iñigo; Ballesteros L., Dora M.
2017-04-01
This paper presents a new unsupervised change detection methodology for multispectral images applied to specific land covers. The proposed method compares each image against a reference spectrum, where the reference spectrum is obtained from the spectral signature of the land cover type to be detected. The method was tested using multispectral images (SPOT5) of the community of Madrid (Spain) and multispectral images (Quickbird) of an area of Indonesia impacted by the December 26, 2004 tsunami; in both cases the tests focused on the detection of changes in vegetation. The image comparison is obtained by applying the Spectral Angle Mapper between the reference spectrum and each multitemporal image. A threshold is then applied to produce a single change image corresponding to the vegetation zones. The results for the multitemporal images are combined through an exclusive-or (XOR) operation that selects vegetation zones that have changed over time. The derived results were compared against a supervised method based on classification with a Support Vector Machine; the NDVI-differencing and Spectral Angle Mapper techniques were also selected as unsupervised methods for comparison. The main novelty of the method is the detection of changes in a specific land cover type (vegetation), so the most appropriate comparison is against methods that aim to detect changes in that same cover type; this is the reason for selecting the NDVI-based method and the post-classification method (SVM implemented in a standard software tool). To evaluate the improvement gained by using a reference spectrum vector, the results are also compared with the basic SAM method. For the SPOT5 image, the overall accuracy was 99.36% and the κ index was 90.11%; for the Quickbird image, the overall accuracy was 97.5% and the κ index was 82.16%. The precision of the method is comparable to that of a supervised method, supported by low rates of false positives and false negatives along with high overall accuracy and a high kappa index, while the execution times are comparable to those of unsupervised methods of low computational load.
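A sketch of the per-pixel Spectral Angle Mapper comparison and the XOR combination described above, assuming image cubes of shape (rows, cols, bands) and a reference vegetation spectrum of shape (bands,); the threshold value is illustrative only.

```python
import numpy as np

def spectral_angle(cube: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Per-pixel spectral angle (radians) between an image cube and a reference spectrum."""
    dots = np.einsum('ijk,k->ij', cube, reference)
    norms = np.linalg.norm(cube, axis=2) * np.linalg.norm(reference)
    return np.arccos(np.clip(dots / norms, -1.0, 1.0))

def vegetation_change(cube_t1, cube_t2, reference, angle_threshold=0.15):
    """Threshold each date's angle map to a vegetation mask, then XOR the masks
    so that only pixels whose vegetation status changed are flagged."""
    veg_t1 = spectral_angle(cube_t1, reference) < angle_threshold
    veg_t2 = spectral_angle(cube_t2, reference) < angle_threshold
    return np.logical_xor(veg_t1, veg_t2)
```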
Application of non-coherent Doppler data types for deep space navigation
NASA Technical Reports Server (NTRS)
Bhaskaran, Shyam
1995-01-01
Recent improvements in computational capability and Deep Space Network technology have renewed interest in examining the possibility of using one-way Doppler data alone to navigate interplanetary spacecraft. The one-way data can be formulated as the standard differenced-count Doppler or as phase measurements, and the data can be received at a single station or differenced if obtained simultaneously at two stations. A covariance analysis is performed which analyzes the accuracy obtainable by combinations of one-way Doppler data, and the results are compared with similar results using standard two-way Doppler and range. The sample interplanetary trajectory used was that of the Mars Pathfinder mission to Mars. It is shown that differenced one-way data are capable of determining the angular position of the spacecraft to fairly high accuracy, but have relatively poor sensitivity to the range. When combined with single-station data, the position dispersions are roughly an order of magnitude larger in range and comparable in angular position as compared to dispersions obtained with standard two-way data types. It was also found that the phase formulation is less sensitive to data weight variations and data coverage than the differenced-count Doppler formulation.
The application of noncoherent Doppler data types for Deep Space Navigation
NASA Technical Reports Server (NTRS)
Bhaskaran, S.
1995-01-01
Recent improvements in computational capability and DSN technology have renewed interest in examining the possibility of using one-way Doppler data alone to navigate interplanetary spacecraft. The one-way data can be formulated as the standard differenced-count Doppler or as phase measurements, and the data can be received at a single station or differenced if obtained simultaneously at two stations. A covariance analysis, which analyzes the accuracy obtainable by combinations of one-way Doppler data, is performed and compared with similar results using standard two-way Doppler and range. The sample interplanetary trajectory used was that of the Mars Pathfinder mission to Mars. It is shown that differenced one-way data are capable of determining the angular position of the spacecraft to fairly high accuracy, but have relatively poor sensitivity to the range. When combined with single-station data, the position dispersions are roughly an order of magnitude larger in range and comparable in angular position as compared to dispersions obtained with standard two-way data types. It was also found that the phase formulation is less sensitive to data weight variations and data coverage than the differenced-count Doppler formulation.
Relating fire-caused change in forest structure to remotely sensed estimates of fire severity
Jamie M. Lydersen; Brandon M. Collins; Jay D. Miller; Danny L. Fry; Scott L. Stephens
2016-01-01
Fire severity maps are an important tool for understanding fire effects on a landscape. The relative differenced normalized burn ratio (RdNBR) is a commonly used severity index in California forests, and is typically divided into four categories: unchanged, low, moderate, and high. RdNBR is often calculated twice--from images collected the year of the fire (initial...
NASA Technical Reports Server (NTRS)
Syed, S. A.; Chiappetta, L. M.
1985-01-01
A methodological evaluation of two finite-differencing schemes for computer-aided gas turbine design is presented. The two computational schemes are a Bounded Skewed Upwind Differencing Scheme (BSUDS) and a Quadratic Upwind Differencing Scheme (QUDS). In the evaluation, the schemes were incorporated into two-dimensional and three-dimensional versions of the Teaching Axisymmetric Characteristics Heuristically (TEACH) computer code. Assessments were made according to performance criteria for the solution of problems of turbulent, laminar, and coannular turbulent flow. The specific performance criteria used in the evaluation were simplicity, accuracy, and computational economy. It is found that the BSUDS scheme performed better with respect to these criteria than QUDS. Some of the reasons for the more successful performance of BSUDS are discussed.
Path length differencing and energy conservation of the S[sub N] Boltzmann/Spencer-Lewis equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Filippone, W.L.; Monahan, S.P.
It is shown that the S[sub N] Boltzmann/Spencer-Lewis equations conserve energy locally if and only if they satisfy particle balance and diamond differencing is used in path length. In contrast, the spatial differencing schemes have no bearing on the energy balance. Energy is conserved globally if it is conserved locally and the multigroup cross sections are energy conserving. Although the coupled electron-photon cross sections generated by CEPXS conserve particles and charge, they do not precisely conserve energy. It is demonstrated that these cross sections can be adjusted such that particles, charge, and energy are conserved. Finally, since a conventional negative-flux fixup destroys energy balance when applied to path length, a modified fixup scheme that does not is presented.
Non-oscillatory central differencing for hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Nessyahu, Haim; Tadmor, Eitan
1988-01-01
Many of the recently developed high resolution schemes for hyperbolic conservation laws are based on upwind differencing. The building block for these schemes is the averaging of an appropriate Godunov solver; its time consuming part involves the field-by-field decomposition which is required in order to identify the direction of the wind. Instead, the use of the more robust Lax-Friedrichs (LxF) solver is proposed. The main advantage is simplicity: no Riemann problems are solved and hence field-by-field decompositions are avoided. The main disadvantage is the excessive numerical viscosity typical to the LxF solver. This is compensated for by using high-resolution MUSCL-type interpolants. Numerical experiments show that the quality of results obtained by such convenient central differencing is comparable with those of the upwind schemes.
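For illustration, a minimal Lax-Friedrichs step for a scalar conservation law u_t + f(u)_x = 0 with periodic boundaries; this is only the first-order building block named above, without the MUSCL-type reconstruction that gives the Nessyahu-Tadmor scheme its high resolution.

```python
import numpy as np

def lax_friedrichs_step(u: np.ndarray, dx: float, dt: float,
                        flux=lambda u: 0.5 * u**2):
    """One Lax-Friedrichs update for u_t + f(u)_x = 0 (default: Burgers' flux)."""
    u_plus = np.roll(u, -1)   # u_{j+1}
    u_minus = np.roll(u, 1)   # u_{j-1}
    return 0.5 * (u_plus + u_minus) - 0.5 * (dt / dx) * (flux(u_plus) - flux(u_minus))
```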
Perignon, M. C.; Tucker, G.E.; Griffin, Eleanor R.; Friedman, Jonathan M.
2013-01-01
The spatial distribution of riparian vegetation can strongly influence the geomorphic evolution of dryland rivers during large floods. We present the results of an airborne lidar differencing study that quantifies the topographic change that occurred along a 12 km reach of the Lower Rio Puerco, New Mexico, during an extreme event in 2006. Extensive erosion of the channel banks took place immediately upstream of the study area, where tamarisk and sandbar willow had been removed. Within the densely vegetated study reach, we measure a net volumetric change of 578,050 ± ∼490,000 m3, with 88.3% of the total aggradation occurring along the floodplain and channel and 76.7% of the erosion focused on the vertical valley walls. The sediment derived from the devegetated reach was deposited within the first 3.6 km of the study area, with depth decaying exponentially with distance downstream. Elsewhere, floodplain sediments were primarily sourced from the erosion of valley walls. Superimposed on this pattern are the effects of vegetation and valley morphology on sediment transport. Sediment thickness is uniform among sandbar willows and highly variable within tamarisk groves. These reach-scale patterns of sedimentation observed in the lidar differencing likely reflect complex interactions of vegetation, flow, and sediment at the scale of patches to individual plants.
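A sketch of the net-volume calculation behind DEM/lidar differencing, assuming two co-registered elevation grids with the same cell size; the uncertainty handling here is only a placeholder, not the error propagation used in the study.

```python
import numpy as np

def net_volume_change(dem_before: np.ndarray, dem_after: np.ndarray,
                      cell_size: float, min_detectable: float = 0.0) -> float:
    """Net volumetric change (elevation units x cell_size^2).

    Differences smaller than min_detectable are treated as noise and zeroed."""
    dz = dem_after - dem_before
    dz = np.where(np.abs(dz) >= min_detectable, dz, 0.0)
    return float(np.nansum(dz) * cell_size ** 2)
```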
NASA Astrophysics Data System (ADS)
Cheong, Chin Wen
2008-02-01
This article investigates the influence of structural breaks on fractionally integrated time-varying volatility models of the Malaysian stock markets, covering the Kuala Lumpur composite index and four major sectoral indices. A fractionally integrated time-varying volatility model combined with sudden changes is developed to study the possibility of structural change in the empirical data sets. Our empirical results show a substantial reduction in the fractional differencing parameters after the inclusion of structural changes during the Asian financial and currency crises. Moreover, the fractionally integrated model with sudden changes in volatility performed better in the estimation and specification evaluations.
Compressive sampling by artificial neural networks for video
NASA Astrophysics Data System (ADS)
Szu, Harold; Hsu, Charles; Jenkins, Jeffrey; Reinhardt, Kitt
2011-06-01
We describe a smart surveillance strategy for handling novelty changes. Current sensors tend to keep everything, redundant or not. The Human Visual System's Hubel-Wiesel (wavelet) edge detection mechanism pays attention to changes in movement, which naturally produce organized sparseness because a stagnant edge is not reported to the brain's visual cortex by retinal neurons. Sparseness is defined as an ordered set of ones (movement or not) relative to zeros; such patterns can be pseudo-orthogonal among themselves and are thus suited for fault-tolerant storage and retrieval by means of Associative Memory (AM). The firing is sparse at the change locations. Unlike the purely random sparse masks adopted in medical Compressive Sensing, these organized ones have the additional benefit of using the image changes to make retrievable graphical indexes. We term this organized sparseness Compressive Sampling: sensing while skipping over redundancy, without altering the original image. We illustrate with video the survival tactics that animals roaming the Earth use daily: they acquire nothing but the space-time changes that are important to satisfy specific prey-predator relationships. We have noticed a similarity between mathematical Compressive Sensing and this biological mechanism used for survival, and we have designed a hardware implementation of the Human Visual System's Compressive Sampling scheme. To speed up further, our mixed-signal circuit design for frame differencing is built into on-chip processing hardware. A CMOS transconductance amplifier is designed to generate a linear current output from a pair of differential input voltages taken from two photon detectors for change detection, one for the previous value and the other for the subsequent value ("write" synaptic weights by Hebbian outer products; "read" by inner product and point nonlinear threshold), in order to localize and track threat targets.
NASA Astrophysics Data System (ADS)
Tzanos, Constantine P.
1992-10-01
A higher-order differencing scheme (Tzanos, 1990) is used in conjunction with a multigrid approach to obtain accurate solutions of the Navier-Stokes convection-diffusion equations at high Reynolds numbers. Flow in a square cavity with a moving lid is used as a test problem. A multigrid approach based on the additive correction method (Settari and Aziz) and an iterative incomplete lower-upper solver demonstrated good performance over the whole range of Reynolds numbers under consideration (from 1000 to 10,000) and for both uniform and nonuniform grids. It is concluded that the combination of the higher-order differencing scheme with a multigrid approach is an effective technique for obtaining accurate solutions of the Navier-Stokes equations at high Reynolds numbers.
Assessment of land cover changes in Lampedusa Island (Italy) using Landsat TM and OLI data
NASA Astrophysics Data System (ADS)
Mei, Alessandro; Manzo, Ciro; Fontinovo, Giuliano; Bassani, Cristiana; Allegrini, Alessia; Petracchini, Francesco
2016-10-01
Lampedusa Island faces important socio-economic pressures related to intensive tourist activity, which implies an increase in electricity consumption and waste production. An adequate conversion of the island to a more environmentally sustainable community needs to be addressed through the establishment of local Management Plans. For this purpose, several thematic datasets have to be produced and evaluated. Socio-economic and bio-ecological components, as well as land cover/use assessment, are some of the main topics to be managed within Decision Support Systems. Considering the lack of Land Cover (LC) and vegetation change detection maps for Lampedusa Island (Italy), this paper focuses on producing them with remote sensing techniques. The analysis was carried out with Landsat 5 TM and Landsat 8 OLI multispectral images from 1984 to 2014 in order to obtain spatial and temporal information on the changes that occurred on the island. The imagery was first co-registered and atmospherically corrected; it was then classified for land cover and vegetation distribution analysis using the QGIS and SAGA GIS open-source software packages. The Maximum Likelihood Classifier (MLC) was used to produce the LC maps, while the Normalized Difference Vegetation Index (NDVI) was used to examine vegetation distribution. Topographic maps, historical aerial photos, orthophotos and field data were merged in the GIS for accuracy assessment. Finally, change detection for the MLC and NDVI products was performed by Post-Classification Comparison (PCC) and Image Differencing (ID), respectively. The resulting information, combined with local socio-economic parameters, is essential for improving the environmental sustainability of anthropogenic activities on Lampedusa.
Njemanze, Philip C
2010-11-30
The present study was designed to examine the effects of color stimulation on cerebral blood mean flow velocity (MFV) in men and women. The study included 16 right-handed healthy subjects (8 men and 8 women). The MFV was recorded simultaneously in the right and left middle cerebral arteries in dark and white-light conditions and during color (blue, yellow and red) stimulation, and was analyzed using the functional transcranial Doppler spectroscopy (fTCDS) technique. Color processing occurred within cortico-subcortical circuits. In men, wavelength-differencing of yellow/blue pairs occurred within the right hemisphere through processes of cortical long-term depression (CLTD) and subcortical long-term potentiation (SLTP). Conversely, in women, frequency-differencing of blue/yellow pairs occurred within the left hemisphere through processes of cortical long-term potentiation (CLTP) and subcortical long-term depression (SLTD). In both genders there was a luminance effect in the left hemisphere; in men it lay along an axis opposite (orthogonal) to that of the chromatic effect, while in women it was parallel. Gender-related differences in color processing demonstrated a right-hemisphere cognitive style for wavelength-differencing in men and a left-hemisphere cognitive style for frequency-differencing in women. There are potential applications of the fTCDS technique for stroke rehabilitation and for monitoring drug effects.
High order filtering methods for approximating hyperbolic systems of conservation laws
NASA Technical Reports Server (NTRS)
Lafon, F.; Osher, S.
1991-01-01
The essentially nonoscillatory (ENO) schemes, while potentially useful in the computation of discontinuous solutions of hyperbolic conservation-law systems, are computationally costly relative to simple central-difference methods. A filtering technique is presented which employs central differencing of arbitrarily high-order accuracy except where a local test detects the presence of spurious oscillations and calls upon the full ENO apparatus to remove them. A factor-of-three speedup is thus obtained over the full-ENO method for a wide range of problems, with high-order accuracy in regions of smooth flow.
1987-09-01
Documentation of the MACH2 magnetohydrodynamics code: Eulerian or Lagrangian flow problems; use of real equations of state and transport properties from the Los Alamos National Laboratory SESAME package; permissible problem geometries; time differencing; and the spatial discretization, centering, and differencing of MACH2.
TLE uncertainty estimation using robust weighted differencing
NASA Astrophysics Data System (ADS)
Geul, Jacco; Mooij, Erwin; Noomen, Ron
2017-05-01
Accurate knowledge of satellite orbit errors is essential for many types of analyses. Unfortunately, for two-line elements (TLEs) this is not available. This paper presents a weighted differencing method using robust least-squares regression for estimating many important error characteristics. The method is applied to both classic and enhanced TLEs, compared to previous implementations, and validated using Global Positioning System (GPS) solutions for the GOCE satellite in Low-Earth Orbit (LEO), prior to its re-entry. The method is found to be more accurate than previous TLE differencing efforts in estimating initial uncertainty, as well as error growth. The method also proves more reliable and requires no data filtering (such as outlier removal). Sensitivity analysis shows a strong relationship between argument of latitude and covariance (standard deviations and correlations), which the method is able to approximate. Overall, the method proves accurate, computationally fast, and robust, and is applicable to any object in the satellite catalogue (SATCAT).
Miller, J.D.; Knapp, E.E.; Key, C.H.; Skinner, C.N.; Isbell, C.J.; Creasy, R.M.; Sherlock, J.W.
2009-01-01
Multispectral satellite data have become a common tool used in the mapping of wildland fire effects. Fire severity, defined as the degree to which a site has been altered, is often the variable mapped. The Normalized Burn Ratio (NBR), used in an absolute-difference change detection protocol (dNBR), has become the remote sensing method of choice for US Federal land management agencies to map fire severity due to wildland fire. However, absolute differenced vegetation indices are correlated with the pre-fire chlorophyll content of the vegetation occurring within the fire perimeter. Normalizing dNBR to produce a relativized dNBR (RdNBR) removes the biasing effect of the pre-fire condition. Employing RdNBR hypothetically allows categorical classifications to be created using the same thresholds for fires occurring in similar vegetation types without acquiring additional calibration field data for each fire. In this paper we tested this hypothesis by developing thresholds on random training datasets and then comparing accuracies for (1) fires that occurred within the same geographic region as the training dataset and in similar vegetation, and (2) fires from a different geographic region that is climatically and floristically similar to the training dataset region but supports more complex vegetation structure. We additionally compared map accuracies for three measures of fire severity: the composite burn index (CBI), percent change in tree canopy cover, and percent change in tree basal area. User's and producer's accuracies were highest for the most severe categories, ranging from 70.7% to 89.1%. Accuracies of the moderate fire severity category for measures describing effects only to trees (percent change in canopy cover and basal area) indicated that the classifications were generally not much better than random. Accuracies of the moderate category for the CBI classifications were somewhat better, averaging in the 50%-60% range. These results underscore the difficulty in isolating fire effects to individual vegetation strata when fire effects are mixed. We conclude that the models presented here and in Miller and Thode (2007; Quantifying burn severity in a heterogeneous landscape with a relative version of the delta Normalized Burn Ratio (dNBR), Remote Sensing of Environment, 109, 66-80) can produce fire severity classifications (using either CBI, or percent change in canopy cover or basal area) that are of similar accuracy for fires not used in the original calibration process, at least in conifer-dominated vegetation types in Mediterranean-climate California.
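A sketch of the NBR, dNBR, and RdNBR calculations referred to above, following the general form given by Miller and Thode (2007); the band inputs are assumed to be NIR and SWIR surface reflectance, and the small-value clamp on the pre-fire NBR is illustrative.

```python
import numpy as np

def nbr(nir: np.ndarray, swir: np.ndarray) -> np.ndarray:
    """Normalized Burn Ratio, scaled by 1000 as is customary in dNBR work."""
    return 1000.0 * (nir - swir) / (nir + swir)

def rdnbr(nbr_pre: np.ndarray, nbr_post: np.ndarray) -> np.ndarray:
    """Relativized dNBR: dNBR divided by the square root of |pre-fire NBR|/1000,
    which removes the bias from pre-fire chlorophyll content."""
    dnbr = nbr_pre - nbr_post
    denom = np.sqrt(np.maximum(np.abs(nbr_pre) / 1000.0, 1e-3))  # clamp avoids division by ~0
    return dnbr / denom
```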
NASA Astrophysics Data System (ADS)
Mertes, J. R.; Zant, C. N.; Gulley, J. D.; Thomsen, T. L.
2017-08-01
Monitoring, managing and preserving submerged cultural resources (SCR) such as shipwrecks can involve time consuming detailed physical surveys, expensive side-scan sonar surveys, the study of photomosaics and even photogrammetric analysis. In some cases, surveys of SCR have produced 3D models, though these models have not typically been used to document patterns of site degradation over time. In this study, we report a novel approach for quantifying degradation and changes to SCR that relies on diver-acquired video surveys, generation of 3D models from data acquired at different points in time using structure from motion, and differencing of these models. We focus our study on the shipwreck S.S. Wisconsin, which is located roughly 10.2 km southeast of Kenosha, Wisconsin, in Lake Michigan. We created two digital elevation models of the shipwreck using surveys performed during the summers of 2006 and 2015 and differenced these models to map spatial changes within the wreck. Using orthomosaics and difference map data, we identified a change in degradation patterns. Degradation was anecdotally believed to be caused by inward collapse, but maps indicated a pattern of outward collapse of the hull structure, which has resulted in large scale shifting of material in the central upper deck. In addition, comparison of the orthomosaics with the difference map clearly shows movement of objects, degradation of smaller pieces and in some locations, an increase in colonization of mussels.
1984–2010 trends in fire burn severity and area for the conterminous US
Picotte, Joshua J.; Peterson, Birgit E.; Meier, Gretchen; Howard, Stephen M.
2016-01-01
Burn severity products created by the Monitoring Trends in Burn Severity (MTBS) project were used to analyse historical trends in burn severity. Using a severity metric calculated by modelling the cumulative distribution of differenced Normalized Burn Ratio (dNBR) and Relativized dNBR (RdNBR) data, we examined burn area and burn severity of 4893 historical fires (1984–2010) distributed across the conterminous US (CONUS) and mapped by MTBS. Yearly mean burn severity values (weighted by area), maximum burn severity metric values, mean area of burn, maximum burn area and total burn area were evaluated within 27 US National Vegetation Classification macrogroups. Time series assessments of burned area and severity were performed using Mann–Kendall tests. Burned area and severity varied by vegetation classification, but most vegetation groups showed no detectable change during the 1984–2010 period. Of the 27 analysed vegetation groups, trend analysis revealed burned area increased in eight, and burn severity has increased in seven. This study suggests that burned area and severity, as measured by the severity metric based on dNBR or RdNBR, have not changed substantially for most vegetation groups evaluated within CONUS.
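A minimal Mann-Kendall trend test of the kind used for the time-series assessments above, without tie correction; illustrative only.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(series):
    """Return the Mann-Kendall S statistic, normal-approximation Z, and two-sided p."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0           # variance without tie correction
    z = 0.0 if s == 0 else (s - np.sign(s)) / np.sqrt(var_s)
    p = 2.0 * (1.0 - norm.cdf(abs(z)))
    return s, z, p
```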
NASA Astrophysics Data System (ADS)
Chang, Guobin; Xu, Tianhe; Yao, Yifei; Wang, Qianxin
2018-01-01
In order to incorporate the temporal smoothness of the ionospheric delay to aid cycle slip detection, an adaptive Kalman filter is developed based on variance component estimation. The correlations between measurements at neighboring epochs are fully considered in developing a filtering algorithm for colored measurement noise. Within this filtering framework, epoch-differenced ionospheric delays are predicted. Using this prediction, potential cycle slips are repaired for triple-frequency signals of global navigation satellite systems. Cycle slips are repaired in a stepwise manner, i.e., first for two extra-wide-lane combinations and then for the third frequency. In the estimation for the third frequency, a stochastic model is adopted in which the correlations between the ionospheric delay prediction errors and the errors in the epoch-differenced phase measurements are considered. The implementation details of the proposed method are tabulated. A real BeiDou Navigation Satellite System data set is used to check the performance of the proposed method. Most cycle slips, whether trivial or nontrivial, can be estimated as float values with satisfactorily high accuracy, and their integer values can hence be correctly obtained by simple rounding. More specifically, all manually introduced nontrivial cycle slips are correctly repaired.
Human detection in sensitive security areas through recognition of omega shapes using MACH filters
NASA Astrophysics Data System (ADS)
Rehman, Saad; Riaz, Farhan; Hassan, Ali; Liaquat, Muwahida; Young, Rupert
2015-03-01
Human detection has gained considerable importance in aggravated security scenarios over recent times. An effective security application relies strongly on detailed information regarding the scene under consideration. An accumulation of more people than the number of personnel authorized to visit a security-controlled area must be effectively detected, alarmed, and immediately monitored. A framework involving a novel combination of some existing techniques allows immediate detection of an undesirable crowd in a region under observation. Frame differencing provides clear visibility of moving objects by highlighting those objects in each frame acquired by a real-time camera. Training a correlation pattern recognition based filter on desired shapes, such as elliptical representations of human faces (variants of an Omega shape), yields correct detections. The inherent ability of correlation pattern recognition filters to accommodate angular rotations of the target object supports the decision on whether the number of persons in the monitored area exceeds the allowed figure.
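A sketch of the frame-differencing step mentioned above, assuming consecutive grayscale frames as NumPy arrays; the threshold is illustrative.

```python
import numpy as np

def moving_object_mask(prev_frame: np.ndarray, curr_frame: np.ndarray,
                       threshold: int = 25) -> np.ndarray:
    """Boolean mask of pixels whose intensity changed by more than `threshold`
    between consecutive frames; highlights moving objects for later recognition."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold
```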
NASA Astrophysics Data System (ADS)
Vincent, C.; Ramanathan, A.; Wagnon, P.; Dobhal, D. P.; Linda, A.; Berthier, E.; Sharma, P.; Arnaud, Y.; Azam, M. F.; Jose, P. G.; Gardelle, J.
2012-09-01
The volume change of Chhota Shigri Glacier (India, 32° N) between 1988 and 2010 has been determined using in-situ geodetic measurements. This glacier has experienced only a slight mass loss over the last 22 yr (-3.8 ± 1.8 m w.e.). Using satellite digital elevation model (DEM) differencing and field measurements, we measure a negative mass balance (MB) between 1999 and 2011 (-4.7 ± 1.8 m w.e.). Thus, we deduce a positive MB between 1988 and 1999 (+1.0 ± 2.5 m w.e.). Furthermore, satellite DEM differencing reveals a good correspondence between the MB of Chhota Shigri Glacier and the MB of an over 2000 km2 glacierized area in the Lahaul and Spiti region during 1999-2011. We conclude that there has been no large ice wastage in this region over the last 22 yr, ice mass loss being limited to the last decade. This contrasts with the most recent compilation of MB data in the Himalayan range, which indicates ice wastage since 1975, accelerating after 1990. For the rest of the western Himalaya, available observations of glacier MB are too sparse and discontinuous to provide a clear and relevant regional pattern of glacier volume change over the last two decades.
Finite Element Simulations of Kaikoura, NZ Earthquake using DInSAR and High-Resolution DSMs
NASA Astrophysics Data System (ADS)
Barba, M.; Willis, M. J.; Tiampo, K. F.; Glasscoe, M. T.; Clark, M. K.; Zekkos, D.; Stahl, T. A.; Massey, C. I.
2017-12-01
Three-dimensional displacements from the Kaikoura, NZ, earthquake in November 2016 are imaged here using Differential Interferometric Synthetic Aperture Radar (DInSAR) and high-resolution Digital Surface Model (DSM) differencing and optical pixel tracking. Full-resolution co- and post-seismic interferograms of Sentinel-1A/B images are constructed using the JPL ISCE software. The OSU SETSM software is used to produce repeat 0.5 m posting DSMs from commercial satellite imagery, which are supplemented with UAV derived DSMs over the Kaikoura fault rupture on the eastern South Island, NZ. DInSAR provides long-wavelength motions while DSM differencing and optical pixel tracking provides both horizontal and vertical near fault motions, improving the modeling of shallow rupture dynamics. JPL GeoFEST software is used to perform finite element modeling of the fault segments and slip distributions and, in turn, the associated asperity distribution. The asperity profile is then used to simulate event rupture, the spatial distribution of stress drop, and the associated stress changes. Finite element modeling of slope stability is accomplished using the ultra high-resolution UAV derived DSMs to examine the evolution of post-earthquake topography, landslide dynamics and volumes. Results include new insights into shallow dynamics of fault slip and partitioning, estimates of stress change, and improved understanding of its relationship with the associated seismicity, deformation, and triggered cascading hazards.
A study of pressure-based methodology for resonant flows in non-linear combustion instabilities
NASA Technical Reports Server (NTRS)
Yang, H. Q.; Pindera, M. Z.; Przekwas, A. J.; Tucker, K.
1992-01-01
This paper presents a systematic assessment of a large variety of spatial and temporal differencing schemes on nonstaggered grids with pressure-based methods for fast transient flow problems. The observation from the present study is that for steady-state flow problems, pressure-based methods can be very competitive with density-based methods. For transient flow problems, pressure-based methods utilizing the same differencing scheme are less accurate, even though the wave speeds are correctly predicted.
Prediction and control of chaotic processes using nonlinear adaptive networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, R.D.; Barnes, C.W.; Flake, G.W.
1990-01-01
We present the theory of nonlinear adaptive networks and discuss a few applications. In particular, we review the theory of feedforward backpropagation networks. We then present the theory of the Connectionist Normalized Linear Spline network in both its feedforward and iterated modes. Also, we briefly discuss the theory of stochastic cellular automata. We then discuss applications to chaotic time series, tidal prediction in Venice lagoon, finite differencing, sonar transient detection, control of nonlinear processes, control of a negative ion source, balancing a double inverted pendulum and design advice for free electron lasers and laser fusion targets.
Assessment of post forest fire reclamation in Algarve, Portugal
NASA Astrophysics Data System (ADS)
Andrade, Rita; Panagopoulos, Thomas; Guerrero, Carlos; Martins, Fernando; Zdruli, Pandi; Ladisa, Gaetano
2014-05-01
Fire is a common phenomenon in Mediterranean landscapes and plays a crucial role in their transformation, making the determination of its impact on the ecosystem essential for land management. During the summer of 2012, a wildfire took place in Algarve, Portugal, in an area mainly covered by sclerophyllous vegetation (39.44%, 10,080 ha), broad-leaved forest (20.80%, 5,300 ha), agricultural land with significant areas of natural vegetation (17.40%, 4,400 ha) and transitional woodland-shrub (16.17%, 4,100 ha). The objective of the study was to determine fire severity in order to plan post-fire treatments and to aid vegetation recovery and land reclamation. Satellite imagery was used to estimate burn severity by detecting the physical and ecological changes in the landscape caused by fire. The Differenced Normalized Burn Ratio (dNBR) was used to measure burn severity with pre- and post-fire data from four Landsat images acquired in October 2011, February and August 2012, and April 2013. The initial and extended differenced normalized burn ratios (DiNBR and DeNBR) were calculated. The calculated burned area of 24,291 ha was 552 ha lower than the map data determined from field reports. Of that area, 19.5% burned with high severity, 45% with moderate severity and 28.3% with low severity. Comparing fire severity and regrowth with land use, the DiNBR showed that the most severely burned areas were predominantly sclerophyllous vegetation (37.6%) and broad-leaved forests (31.1%). From the DeNBR it was found that the re-establishment of vegetation was slower in mixed forests and faster in sclerophyllous vegetation and in land with significant areas of natural vegetation. Faster recovery was calculated for sclerophyllous vegetation (46.7%), with significant regrowth in areas of natural vegetation and land occupied by agriculture (25.4%). The next steps of the study are field validation and cross-analysis with erosion risk maps before taking land reclamation decisions.
Superconducting gravity gradiometer and a test of inverse square law
NASA Technical Reports Server (NTRS)
Moody, M. V.; Paik, Ho Jung
1989-01-01
The equivalence principle prohibits the distinction of gravity from acceleration by a local measurement. However, by making a differential measurement of acceleration over a baseline, platform accelerations can be cancelled and gravity gradients detected. In an in-line superconducting gravity gradiometer, this differencing is accomplished with two spring-mass accelerometers in which the proof masses are confined to motion in a single degree of freedom and are coupled together by superconducting circuits. Platform motions appear as common mode accelerations and are cancelled by adjusting the ratio of two persistent currents in the sensing circuit. The sensing circuit is connected to a commercial SQUID amplifier to sense changes in the persistent currents generated by differential accelerations, i.e., gravity gradients. A three-axis gravity gradiometer is formed by mounting six accelerometers on the faces of a precision cube, with the accelerometers on opposite faces of the cube forming one of three in-line gradiometers. A dedicated satellite mission for mapping the earth's gravity field is an important one. Additional scientific goals are a test of the inverse square law to a part in 10(exp 10) at 100 km, and a test of the Lense-Thirring effect by detecting the relativistic gravity magnetic terms in the gravity gradient tensor for the earth.
Upwind differencing and LU factorization for chemical non-equilibrium Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Shuen, Jian-Shun
1992-01-01
By means of either the Roe or the Van Leer flux-splittings for inviscid terms, in conjunction with central differencing for viscous terms in the explicit operator and the Steger-Warming splitting and lower-upper approximate factorization for the implicit operator, the present, robust upwind method for solving the chemical nonequilibrium Navier-Stokes equations yields formulas for finite-volume discretization in general coordinates. Numerical tests in the illustrative cases of a hypersonic blunt body, a ramped duct, divergent nozzle flows, and shock wave/boundary layer interactions, establish the method's efficiency.
NASA Astrophysics Data System (ADS)
Rahman, Syazila; Yusoff, Mohd. Zamri; Hasini, Hasril
2012-06-01
This paper describes a comparison between the cell-centered scheme and the cell-vertex scheme in the calculation of high-speed compressible flow properties. The calculation is carried out using Computational Fluid Dynamics (CFD), in which the mass, momentum and energy equations are solved simultaneously over the flow domain. The geometry under investigation consists of a Binnie and Green convergent-divergent nozzle, and a structured mesh is implemented throughout the flow domain. The finite volume CFD solver employs a second-order accurate central differencing scheme for spatial discretization. In addition, a second-order accurate cell-vertex finite volume spatial discretization is introduced for comparison. Multi-stage Runge-Kutta time integration is implemented for solving the set of non-linear governing equations with variables stored at the vertices. Artificial dissipation uses second- and fourth-order terms with a pressure switch to detect changes in the pressure gradient; this is important for controlling solution stability and capturing shock discontinuities. The results are compared with experimental measurements, and good agreement is obtained for both cases.
Rickbeil, Gregory J M; Hermosilla, Txomin; Coops, Nicholas C; White, Joanne C; Wulder, Michael A
2017-03-01
Fire regimes are changing throughout the North American boreal forest in complex ways. Fire is also a major factor governing access to high-quality forage such as terricolous lichens for barren-ground caribou (Rangifer tarandus groenlandicus). Additionally, fire alters forest structure, which can affect barren-ground caribou's ability to navigate in a landscape. Here, we characterize how the size and severity of fires are changing across five barren-ground caribou herd ranges in the Northwest Territories and Nunavut, Canada. Additionally, we demonstrate how time since fire, fire severity, and season result in complex changes in caribou behavioural metrics estimated using telemetry data. Fire disturbances were identified using novel gap-free Landsat surface reflectance composites from 1985 to 2011 across all herd ranges. Burn severity was estimated using the differenced normalized burn ratio. Annual area burned and burn severity were assessed through time for each herd and related to two behavioural metrics: velocity and relative turning angle. Neither annual area burned nor burn severity displayed any temporal trend within the study period. However, certain herds, such as the Ahiak/Beverly, have more exposure to fire than other herds (i.e. Cape Bathurst had a maximum forested area burned of less than 4 km2). Time since fire and burn severity both significantly affected velocity and relative turning angles. During fall, winter, and spring, fire virtually eliminated foraging-focused behaviour for all 26 years of analysis, while more severe fires resulted in a marked increase in movement-focused behaviour compared to unburnt patches. Between seasons, caribou used burned areas as early as 1 year postfire, demonstrating complex, nonlinear reactions to time since fire, fire severity, and season. In all cases, increases in movement-focused behaviour were detected postfire. We conclude that changes in caribou behaviour immediately postfire are primarily driven by changes in forest structure rather than changes in terricolous lichen availability.
Analysis and control of supersonic vortex breakdown flows
NASA Technical Reports Server (NTRS)
Kandil, Osama A.
1990-01-01
Analysis and computation of steady, compressible, quasi-axisymmetric flow of an isolated, slender vortex are considered. The compressible Navier-Stokes equations are reduced to a simpler set by using the slenderness and quasi-axisymmetry assumptions. The resulting set, along with a compatibility equation, is transformed from the diverging physical domain to a rectangular computational domain. After solving for a compatible set of initial profiles and specifying a compatible set of boundary conditions, the equations are solved using a type-differencing scheme. Vortex breakdown locations are detected by the failure of the scheme to converge. Computational examples include isolated vortex flows at different Mach numbers, external axial pressure gradients and swirl ratios.
Automatic differentiation evaluated as a tool for rotorcraft design and optimization
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.; Young, Katherine C.
1995-01-01
This paper investigates the use of automatic differentiation (AD) as a means of generating sensitivity analyses in rotorcraft design and optimization. This technique transforms an existing computer program into a new program that performs sensitivity analysis in addition to the original analysis. Whereas the original FORTRAN program calculates a set of dependent (output) variables from a set of independent (input) variables, the new FORTRAN program also calculates the partial derivatives of the dependent variables with respect to the independent variables. The AD technique is a systematic implementation of the chain rule of differentiation; it produces derivatives to machine accuracy at a cost comparable with that of finite-differencing methods. For this study, an analysis code consisting of the Langley-developed hover analysis HOVT, the comprehensive rotor analysis CAMRAD/JA, and associated preprocessors is processed through the AD preprocessor ADIFOR 2.0. The resulting derivatives are compared with derivatives obtained from finite-differencing techniques. The derivatives obtained with ADIFOR 2.0 are exact to within machine accuracy and, unlike derivatives obtained with finite-differencing techniques, do not depend on the selection of step size.
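A toy comparison of forward-mode automatic differentiation (via dual numbers) with one-sided finite differencing, to make the machine-accuracy versus step-size point concrete; this is a generic illustration, not ADIFOR itself.

```python
from dataclasses import dataclass

@dataclass
class Dual:
    """Forward-mode AD value: f is the function value, d the carried derivative."""
    f: float
    d: float
    def __add__(self, other):
        return Dual(self.f + other.f, self.d + other.d)
    def __mul__(self, other):
        return Dual(self.f * other.f, self.f * other.d + self.d * other.f)

def g(x):
    return x * x + x          # works for floats and Dual numbers alike

x0 = 1.5
ad_derivative = g(Dual(x0, 1.0)).d            # chain rule: exactly 2*x0 + 1 = 4.0
h = 1e-6
fd_derivative = (g(x0 + h) - g(x0)) / h       # finite difference: step-size dependent
print(ad_derivative, fd_derivative)
```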
Prediction of the Thrust Performance and the Flowfield of Liquid Rocket Engines
NASA Technical Reports Server (NTRS)
Wang, T.-S.
1990-01-01
In an effort to improve the current solutions used in the design and analysis of liquid propulsive engines, a computational fluid dynamics (CFD) model capable of calculating the reacting flows from the combustion chamber, through the nozzle, to the external plume was developed. The Space Shuttle Main Engine (SSME), fired at sea level, was investigated as a sample case. The CFD model, FDNS, is a pressure-based, non-staggered grid, viscous/inviscid, ideal gas/real gas, reactive code. An adaptive upwind differencing scheme is employed for the spatial discretization. The upwind scheme is based on fourth-order central differencing with fourth-order damping for smooth regions, and second-order central differencing with second-order damping for shock capturing. It is equipped with a CHMQGM equilibrium chemistry algorithm and a PARASOL finite-rate chemistry algorithm using the point implicit method. The computed flow results and performance compared well with those of other standard codes and with engine hot-fire test data. In addition, a transient nozzle flowfield calculation was also performed to demonstrate the ability of FDNS to capture the flow separation during the startup process.
Satellite mapping of Nile Delta coastal changes
NASA Technical Reports Server (NTRS)
Blodget, H. W.; Taylor, P. T.; Roark, J. H.
1989-01-01
Multitemporal, multispectral scanner (MSS) Landsat data have been used to monitor erosion and sedimentation along the Rosetta Promontory of the Nile Delta. These processes have accelerated significantly since the completion of the Aswan High Dam in 1964. Digital differencing of four MSS data sets, using standard algorithms, shows that changes observed over a single-year period generally occur as strings of single mixed pixels along the coast; these can therefore only be used qualitatively to indicate areas where changes occur. Areas of change recorded over a multi-year period are generally larger and thus identified by clusters of pixels, which reduces errors introduced by mixed pixels. Satellites provide a synoptic perspective utilizing data acquired at frequent time intervals, permitting multiple-year monitoring of delta evolution on a regional scale.
NASA Astrophysics Data System (ADS)
Crosby, B. T.; Rodgers, D. W.; Lauer, I. H.
2017-12-01
The 1983 Borah Peak, Idaho, earthquake (M 7.0) produced both local ground surface rupture and notable far-field geodetic elevation changes that inspired a suite of investigations into coseismic flexural response. Shortly after the earthquake, Stein and Barrientos revisited a 50 km leveling line that runs roughly perpendicular to, and spans, the Lost River normal fault. They found 1 meter of surface subsidence adjacent to the fault on the hanging wall that decays to no detectable change over 25 km distance from the fault. On the footwall, 20 cm of surface uplift was observed adjacent to the fault, decaying to zero change over 17 km. Though the changes in elevation were calculated as a difference between the first leveling in 1933 and the post-event leveling in 1984, they treated this change as coseismic, assuming little change between 1933 and 1983. A subsequent survey in 1985 revealed no significant change, suggesting that postseismic relaxation was complete. We evaluate the assumption that no detectable interseismic slip occurred between 1933 and the Borah Peak event by resurveying the line and differencing elevations between 2017 and 1985. If interseismic slip is insignificant, then there should be no detectable change over these 32 years. Using RTK GNSS with a 3D error ellipse of 0.9 cm, we resurveyed all leveling monuments in June 2017. Significant deformation was observed. Between 1985 and 2017, 28 cm of displacement occurred across the fault. The hanging wall, adjacent to the fault, subsided 8 cm while the footwall rose 20 cm. Subsidence on the hanging wall increases slightly with distance away from the fault, reaching a maximum of 10 cm at a distance of 4 km from the fault and decaying to zero by 17 km. On the footwall, surface uplift increases from 20 cm at the fault to 42 cm by 6.5 km before decaying. Clearly, interseismic deformation has occurred over the last 32 years, including both discrete slip at the fault and distributed subsidence or surface uplift with distance away from the fault. Differencing the 2017 and 1933 data reveals that the opposing pre- and post-event patterns of deformation on the footwall largely balance each other out, creating block-like surface uplift. These vertical changes are complemented by observations from continuous geodetic GNSS that corroborate the interseismic extension.
NASA Technical Reports Server (NTRS)
Ross, Kenton; Graham, William; Prados, Don; Spruce, Joseph
2007-01-01
MVDI, which effectively involves the differencing of NDMI and NDVI, appears to display increased noise that is consistent with a differencing technique. This effect masks finer variations in vegetation moisture, preventing MVDI from fulfilling the requirement of giving decision makers insight into spatial variation of fire risk. MVDI shows dependencies on land cover and phenology which also argue against its use as a fire risk proxy in an area of diverse and fragmented land covers. The conclusion of the rapid prototyping effort is that MVDI should not be implemented for SSC decision support.
Relative motion using analytical differential gravity
NASA Technical Reports Server (NTRS)
Gottlieb, Robert G.
1988-01-01
This paper presents a new approach to the computation of the motion of one satellite relative to another. The trajectory of the reference satellite is computed accurately subject to geopotential perturbations. This precise trajectory is used as a reference in computing the position of a nearby body, or bodies. The problem that arises in this approach is differencing nearly equal terms in the geopotential model, especially as the separation of the reference and nearby bodies approaches zero. By developing closed form expressions for differences in higher order and degree geopotential terms, the numerical problem inherent in the differencing approach is eliminated.
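A tiny numerical illustration of the cancellation problem named above: subtracting two nearly equal double-precision terms leaves only a few significant digits, whereas a closed-form expression for the difference avoids the subtraction altogether. The numbers below are invented for illustration only.

```python
a = 1.0000000012345678      # two nearly equal geopotential-like terms
b = 1.0000000012345600
diff = a - b                # roughly 7.8e-15: most of the operands' digits cancel,
print(f"{diff:.17e}")       # so only a handful of significant digits survive
```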
SCISEAL: A CFD code for analysis of fluid dynamic forces in seals
NASA Technical Reports Server (NTRS)
Athavale, Mahesh; Przekwas, Andrzej
1994-01-01
A viewgraph presentation is made of the objectives, capabilities, and test results of the computer code SCISEAL. Currently, the seal code has: a finite volume, pressure-based integration scheme; colocated variables with strong conservation approach; high-order spatial differencing, up to third-order; up to second-order temporal differencing; a comprehensive set of boundary conditions; a variety of turbulence models and surface roughness treatment; moving grid formulation for arbitrary rotor whirl; rotor dynamic coefficients calculated by the circular whirl and numerical shaker methods; and small perturbation capabilities to handle centered and eccentric seals.
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa; Cannizzaro, Frank; Melson, N. D.
1991-01-01
A general multiblock method for the solution of the three-dimensional, unsteady, compressible, thin-layer Navier-Stokes equations has been developed. The convective and pressure terms are spatially discretized using Roe's flux differencing technique while the viscous terms are centrally differenced. An explicit Runge-Kutta method is used to advance the solution in time. Local time stepping, adaptive implicit residual smoothing, and the Full Approximation Storage (FAS) multigrid scheme are added to the explicit time stepping scheme to accelerate convergence to steady state. Results for three-dimensional test cases are presented and discussed.
High-precision coseismic displacement estimation with a single-frequency GPS receiver
NASA Astrophysics Data System (ADS)
Guo, Bofeng; Zhang, Xiaohong; Ren, Xiaodong; Li, Xingxing
2015-07-01
To improve the performance of the Global Positioning System (GPS) in earthquake/tsunami early warning and rapid response applications, minimizing the blind zone and increasing the stability and accuracy of both rapid source and rupture inversion, the density of existing GPS networks must be increased in the areas at risk. For economic reasons, low-cost single-frequency receivers would be preferable for densifying the sparse dual-frequency GPS networks. When using single-frequency GPS receivers, the main problem that must be solved is the ionospheric delay, which is a critical factor in determining accurate coseismic displacements. In this study, we introduce a modified Satellite-specific Epoch-differenced Ionospheric Delay (MSEID) model to compensate for the effect of ionospheric error on single-frequency GPS receivers. In the MSEID model, the time-differenced ionospheric delays observed from a regional dual-frequency GPS network to a common satellite are fitted to a plane rather than part of a sphere, and the parameters of this plane are determined using the coordinates of the stations. Once the parameters are known, time-differenced ionospheric delays for a single-frequency GPS receiver can be derived from the observations of those dual-frequency receivers. Using these ionospheric delay corrections, coseismic displacements of a single-frequency GPS receiver can be accurately calculated from time-differenced carrier-phase measurements in real time. The performance of the proposed approach is validated using 5 Hz GPS data collected during the 2012 Nicoya Peninsula Earthquake (Mw 7.6, 2012 September 5) in Costa Rica. The results show that the proposed approach improves the accuracy of the displacements of a single-frequency GPS station, and coseismic displacements with an accuracy of a few centimetres are achieved over a 10-min interval.
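A sketch of the planar fit behind the epoch-differenced ionospheric delay interpolation described above, assuming local east/north station offsets (as arrays) and one common satellite; this illustrates the idea only and is not the paper's MSEID formulation.

```python
import numpy as np

def fit_delay_plane(east, north, epoch_diff_delay):
    """Least-squares fit of delay = a + b*east + c*north over the dual-frequency stations."""
    A = np.column_stack([np.ones_like(east), east, north])
    coeffs, *_ = np.linalg.lstsq(A, epoch_diff_delay, rcond=None)
    return coeffs

def predict_delay(coeffs, east_sf, north_sf):
    """Evaluate the fitted plane at a single-frequency receiver's location."""
    return float(coeffs[0] + coeffs[1] * east_sf + coeffs[2] * north_sf)
```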
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.
1990-01-01
The current work is initiated in an effort to obtain an efficient, accurate, and robust algorithm for the numerical solution of the incompressible Navier-Stokes equations in two- and three-dimensional generalized curvilinear coordinates for both steady-state and time-dependent flow problems. This is accomplished with the use of the method of artificial compressibility and a high-order flux-difference splitting technique for the differencing of the convective terms. Time accuracy is obtained in the numerical solutions by subiterating the equations in pseudo-time for each physical time step. The system of equations is solved with a line-relaxation scheme which allows the use of very large pseudo-time steps leading to fast convergence for steady-state problems as well as for the subiterations of time-dependent problems. Numerous laminar test flow problems are computed and presented with a comparison against analytically known solutions or experimental results. These include the flow in a driven cavity, the flow over a backward-facing step, the steady and unsteady flow over a circular cylinder, flow over an oscillating plate, flow through a one-dimensional inviscid channel with oscillating back pressure, the steady-state flow through a square duct with a 90 degree bend, and the flow through an artificial heart configuration with moving boundaries. An adequate comparison with the analytical or experimental results is obtained in all cases. Numerical comparisons of the upwind differencing with central differencing plus artificial dissipation indicate that the upwind differencing provides a much more robust algorithm, which requires significantly less computing time. The time-dependent problems require on the order of 10 to 20 subiterations, indicating that the elliptical nature of the problem does require a substantial amount of computing effort.
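The comparison drawn in this abstract between upwind differencing and central differencing (which needs added artificial dissipation) can be illustrated on the 1-D linear advection equation. The sketch below, with an illustrative grid and time step, is not the paper's artificial-compressibility algorithm; it only contrasts the two spatial discretizations.

```python
import numpy as np

def advect(u0, c, dx, dt, nsteps, scheme="upwind"):
    """March du/dt + c du/dx = 0 with first-order upwind or central differencing
    on a periodic domain (c > 0). Central differencing alone is unstable for pure
    advection, which is why artificial dissipation is usually added to it."""
    u = u0.copy()
    for _ in range(nsteps):
        if scheme == "upwind":
            dudx = (u - np.roll(u, 1)) / dx                      # backward difference for c > 0
        else:
            dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)   # central difference
        u = u - c * dt * dudx
    return u

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.exp(-200.0 * (x - 0.3) ** 2)          # smooth initial pulse
u_up = advect(u0, c=1.0, dx=x[1] - x[0], dt=0.002, nsteps=100)
```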
Five-Year Wilkinson Microwave Anisotropy Probe (WMAP)Observations: Beam Maps and Window Functions
NASA Technical Reports Server (NTRS)
Hill, R.S.; Weiland, J.L.; Odegard, N.; Wollack, E.; Hinshaw, G.; Larson, D.; Bennett, C.L.; Halpern, M.; Kogut, A.; Page, L.;
2008-01-01
Cosmology and other scientific results from the WMAP mission require an accurate knowledge of the beam patterns in flight. While the degree of beam knowledge for the WMAP one-year and three-year results was unprecedented for a CMB experiment, we have significantly improved the beam determination as part of the five-year data release. Physical optics fits are done on both the A and the B sides for the first time. The cutoff scale of the fitted distortions on the primary mirror is reduced by a factor of approximately 2 from previous analyses. These changes enable an improvement in the hybridization of Jupiter data with beam models, which is optimized with respect to error in the main beam solid angle. An increase in main-beam solid angle of approximately 1% is found for the V2 and W1-W4 differencing assemblies. Although the five-year results are statistically consistent with previous ones, the errors in the five-year beam transfer functions are reduced by a factor of approximately 2 as compared to the three-year analysis. We present radiometry of the planet Jupiter as a test of the beam consistency and as a calibration standard; for an individual differencing assembly, errors in the measured disk temperature are approximately 0.5%.
Hsieh, Hui-Min; Bazzoli, Gloria J.
2012-01-01
This study examines the association between hospital uncompensated care (UC) and reductions in Medicaid Disproportionate Share Hospital (DSH) payments resulting from the 1997 Balanced Budget Act. Data on California hospitals from 1996 to 2003 were examined using two-stage least squares with a first-differencing model to control for potential feedback effects. Our findings suggest that not-for-profit hospitals did reduce UC provision in response to reductions in Medicaid DSH, but the response was inelastic in value. Policy makers need to continue to monitor how UC changes as sources of support for indigent care change with the Patient Protection and Affordable Care Act (PPACA). PMID:23230705
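The first-differencing step used here to sweep out time-invariant hospital effects before two-stage least squares can be sketched as follows. The panel values are made up, and the sketch shows only the differencing and an ordinary least-squares fit, not the instrumented 2SLS estimation used in the study.

```python
import numpy as np

def first_difference(panel):
    """Difference each unit's time series to remove time-invariant unit
    effects: x_it - x_i,t-1 for a (units, periods) array."""
    return np.diff(panel, axis=1)

# Toy panel: 3 hospitals observed over 5 years (illustrative numbers only).
uncomp_care = np.array([[5.0, 5.2, 5.1, 4.8, 4.7],
                        [2.0, 2.1, 2.3, 2.2, 2.0],
                        [8.0, 7.8, 7.5, 7.4, 7.1]])
dsh_payment = np.array([[3.0, 3.1, 2.9, 2.5, 2.4],
                        [1.0, 1.1, 1.2, 1.1, 1.0],
                        [6.0, 5.8, 5.5, 5.3, 5.0]])

dy = first_difference(uncomp_care).ravel()
dx = first_difference(dsh_payment).ravel()
X = np.column_stack([np.ones_like(dx), dx])
beta, *_ = np.linalg.lstsq(X, dy, rcond=None)   # OLS on the differenced data
print(beta[1])  # response of UC changes to DSH changes (illustrative, not 2SLS)
```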
Trend time-series modeling and forecasting with neural networks.
Qi, Min; Zhang, G Peter
2008-05-01
Despite its great importance, there has been no general consensus on how to model the trends in time-series data. Compared to traditional approaches, neural networks (NNs) have shown some promise in time-series forecasting. This paper investigates how to best model trend time series using NNs. Four different strategies (raw data, raw data with time index, detrending, and differencing) are used to model various trend patterns (linear, nonlinear, deterministic, stochastic, and breaking trend). We find that with NNs differencing often gives meritorious results regardless of the underlying data generating processes (DGPs). This finding is also confirmed by the real gross national product (GNP) series.
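The differencing strategy studied in this paper amounts to training the network on first differences and integrating its forecasts back to the original level. Below is a minimal sketch of that wrapper with a placeholder forecaster standing in for the neural network; all numbers are illustrative.

```python
import numpy as np

def difference(y):
    """Return the first differences and the last observed level."""
    return np.diff(y), y[-1]

def undifference(d_forecasts, last_obs):
    """Integrate forecast differences back to the original level."""
    return last_obs + np.cumsum(d_forecasts)

# A forecaster (e.g. a neural network) would be trained on the differences;
# here a naive "last difference persists" stand-in keeps the example self-contained.
y = np.array([100.0, 102.0, 105.0, 109.0, 114.0, 120.0])
d, last = difference(y)
d_hat = np.repeat(d[-1], 3)            # placeholder for NN predictions of the differences
forecast = undifference(d_hat, last)   # forecasts back on the original scale
print(forecast)
```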
On the geodetic applications of simultaneous range-differencing to LAGEOS
NASA Technical Reports Server (NTRS)
Pavlis, E. C.
1982-01-01
The possibility of improving the accuracy of geodetic results by use of simultaneously observed ranges to Lageos, in a differencing mode, from pairs of stations was studied. Simulation tests show that model errors can be effectively minimized by simultaneous range differencing (SRD) for a rather broad class of network satellite pass configurations. Least-squares approximation methods using monomials and Chebyshev polynomials are compared with cubic spline interpolation. Analysis of three types of orbital biases (radial, along-track, and across-track) shows that radial biases are the ones most efficiently minimized in the SRD mode. The degree to which the other two can be minimized depends on the type of parameters under estimation and the geometry of the problem. Sensitivity analyses of the SRD observation show that for baseline length estimations the most useful data are those collected in a direction parallel to the baseline and at a low elevation. Estimating individual baseline lengths with respect to an assumed but fixed orbit not only decreases the cost, but it further reduces the effects of model biases on the results as opposed to a network solution. Analogous results and conclusions are obtained for the estimates of the coordinates of the pole.
FPGA-based gating and logic for multichannel single photon counting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pooser, Raphael C; Earl, Dennis Duncan; Evans, Philip G
2012-01-01
We present results characterizing multichannel InGaAs single photon detectors utilizing gated passive quenching circuits (GPQC), self-differencing techniques, and field programmable gate array (FPGA)-based logic for both diode gating and coincidence counting. Utilizing FPGAs for the diode gating frontend and the logic counting backend has the advantage of low cost compared to custom built logic circuits and current off-the-shelf detector technology. Further, FPGA logic counters have been shown to work well in quantum key distribution (QKD) test beds. Our setup combines multiple independent detector channels in a reconfigurable manner via an FPGA backend and post processing in order to perform coincidence measurements between any two or more detector channels simultaneously. Using this method, states from a multi-photon polarization entangled source are detected and characterized via coincidence counting on the FPGA. Photon detection events are also processed by the quantum information toolkit for application testing (QITKAT).
Idbeaa, Tarik; Abdul Samad, Salina; Husain, Hafizah
2016-01-01
This paper presents a novel secure and robust steganographic technique in the compressed video domain, namely embedding-based byte differencing (EBBD). Unlike most of the current video steganographic techniques which take into account only the intra frames for data embedding, the proposed EBBD technique aims to hide information in both intra and inter frames. The information is embedded into a compressed video by simultaneously manipulating the quantized AC coefficients (AC-QTCs) of luminance components of the frames during the MPEG-2 encoding process. Later, during the decoding process, the embedded information can be detected and extracted completely. Furthermore, the EBBD basically deals with two security concepts: data encryption and data concealing. Hence, during the embedding process, secret data is encrypted using the simplified data encryption standard (S-DES) algorithm to provide better security to the implemented system. The security of the method lies in selecting candidate AC-QTCs within each non-overlapping 8 × 8 sub-block using a pseudo random key. The basic performance of this steganographic technique was verified through experiments on various existing MPEG-2 encoded videos over a wide range of embedded payload rates. Overall, the experimental results verify the excellent performance of the proposed EBBD with a better trade-off in terms of imperceptibility and payload, as compared with previous techniques, while at the same time ensuring minimal bitrate increase and negligible degradation of PSNR values. PMID:26963093
NASA Astrophysics Data System (ADS)
Smith, J. Torquil; Morrison, H. Frank; Doolittle, Lawrence R.; Tseng, Hung-Wen
2007-03-01
Equivalent dipole polarizabilities are a succinct way to summarize the inductive response of an isolated conductive body at distances greater than the scale of the body. Their estimation requires measurement of secondary magnetic fields due to currents induced in the body by time varying magnetic fields in at least three linearly independent (e.g., orthogonal) directions. Secondary fields due to an object are typically orders of magnitude smaller than the primary inducing fields near the primary field sources (transmitters). Receiver coils may be oriented orthogonal to primary fields from one or two transmitters, nulling their response to those fields, but simultaneously nulling to fields of additional transmitters is problematic. If transmitter coils are constructed symmetrically with respect to inversion in a point, their magnetic fields are symmetric with respect to that point. If receiver coils are operated in pairs symmetric with respect to inversion in the same point, then their differenced output is insensitive to the primary fields of any symmetrically constructed transmitters, allowing nulling to three (or more) transmitters. With a sufficient number of receiver pairs, object equivalent dipole polarizabilities can be estimated in situ from measurements at a single instrument sitting, eliminating effects of inaccurate instrument location on polarizability estimates. The method is illustrated with data from a multi-transmitter multi-receiver system with primary field nulling through differenced receiver pairs, interpreted in terms of principal equivalent dipole polarizabilities as a function of time.
Multitask assessment of roads and vehicles network (MARVN)
NASA Astrophysics Data System (ADS)
Yang, Fang; Yi, Meng; Cai, Yiran; Blasch, Erik; Sullivan, Nichole; Sheaff, Carolyn; Chen, Genshe; Ling, Haibin
2018-05-01
Vehicle detection in wide area motion imagery (WAMI) has drawn increasing attention from the computer vision research community in recent decades. In this paper, we present a new architecture for vehicle detection on roads using a multi-task network, which is able to detect and segment vehicles, estimate their pose, and meanwhile yield road isolation for a given region. The multi-task network consists of three components: 1) vehicle detection, 2) vehicle and road segmentation, and 3) detection screening. Segmentation and detection components share the same backbone network and are trained jointly in an end-to-end way. Unlike background subtraction or frame-differencing-based methods, the proposed Multitask Assessment of Roads and Vehicles Network (MARVN) method can detect vehicles which are slowing down, stopped, and/or partially occluded in a single image. In addition, the method can eliminate detections located outside the road using the yielded road segmentation, so as to decrease the false-positive rate. As few WAMI datasets have road mask and vehicle bounding box annotations, we extract 512 frames from the WPAFB 2009 dataset and carefully refine the original annotations. The resulting dataset is thus named WAMI512. We extensively compare the proposed method with state-of-the-art methods on the WAMI512 dataset, and demonstrate superior performance in terms of efficiency and accuracy.
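For contrast with the frame-differencing baselines mentioned in this abstract, a minimal frame-differencing change mask looks like the sketch below (the threshold and synthetic frames are illustrative). Stationary or slow-moving vehicles vanish from such a mask, which is the limitation the multi-task network is designed to avoid.

```python
import numpy as np

def frame_difference_mask(frame_prev, frame_curr, threshold=25):
    """Binary change mask from absolute differencing of two grayscale frames.
    Pixels whose intensity changed by more than the threshold are flagged."""
    diff = np.abs(frame_curr.astype(np.int16) - frame_prev.astype(np.int16))
    return diff > threshold

# Two synthetic 8-bit frames with a small moving blob.
prev = np.zeros((64, 64), dtype=np.uint8)
curr = np.zeros((64, 64), dtype=np.uint8)
prev[10:14, 10:14] = 200
curr[10:14, 13:17] = 200
mask = frame_difference_mask(prev, curr)
print(mask.sum(), "changed pixels")
```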
NASA Astrophysics Data System (ADS)
Baloloy, A. B.; Blanco, A. C.; Gana, B. S.; Sta. Ana, R. C.; Olalia, L. C.
2016-09-01
The Philippines has a booming sugarcane industry contributing about PHP 70 billion annually to the local economy through raw sugar, molasses and bioethanol production (SRA, 2012). Sugarcane planters adopt different farm practices in cultivating sugarcane, one of which is cane burning to eliminate unwanted plant material and facilitate easier harvest. Information on burned sugarcane extent is significant in yield estimation models to calculate total sugar lost during harvest. Pre-harvest burning can lessen sucrose by 2.7%-5% of the potential yield (Gomez et al., 2006; Hiranyavasit, 2016). This study employs a method for detecting burned sugarcane areas and determining burn severity through the Differenced Normalized Burn Ratio (dNBR) using Landsat 8 images acquired during the late milling season in Tarlac, Philippines. Total burned area was computed per burn severity class based on pre-fire and post-fire images. Results show that 75.38% of the total sugarcane fields in Tarlac were burned with post-fire regrowth; 16.61% were recently burned; and only 8.01% were unburned. The monthly dNBR for February to March generated the largest area with low severity burn (1,436 ha) and high severity burn (31.14 ha) due to pre-harvest burning. Post-fire regrowth is highest in April to May when previously burned areas were already replanted with sugarcane. The maximum dNBR of the entire late milling season (February to May) recorded a larger extent of areas with high and low post-fire regrowth compared to areas with low, moderate and high burn severity. The Normalized Difference Vegetation Index (NDVI) was used to analyse vegetation dynamics between the burn severity classes. A significant positive correlation, rho = 0.99, was observed between dNBR and dNDVI at the 5% level (p = 0.004). An accuracy of 89.03% was calculated for the Landsat-derived NBR validated using actual mill data for crop year 2015-2016.
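The dNBR computation used in the study reduces to band arithmetic on co-registered pre- and post-fire scenes. The sketch below assumes Landsat 8 NIR and SWIR reflectance arrays and uses an illustrative severity threshold, not the study's class breaks.

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from near-infrared and shortwave-infrared bands."""
    return (nir - swir) / (nir + swir + 1e-10)

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Differenced NBR: positive values indicate burning, negative values regrowth."""
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)

# Toy reflectance arrays standing in for pre- and post-fire Landsat 8 scenes.
nir_pre, swir_pre = np.array([[0.45, 0.40]]), np.array([[0.20, 0.22]])
nir_post, swir_post = np.array([[0.25, 0.38]]), np.array([[0.35, 0.23]])
severity = dnbr(nir_pre, swir_pre, nir_post, swir_post)
burned = severity > 0.1          # illustrative low-severity threshold
print(severity, burned)
```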
ASDA - Advanced Suit Design Analyzer computer program
NASA Technical Reports Server (NTRS)
Bue, Grant C.; Conger, Bruce C.; Iovine, John V.; Chang, Chi-Min
1992-01-01
An ASDA model developed to evaluate the heat and mass transfer characteristics of advanced pressurized suit design concepts for low pressure or vacuum planetary applications is presented. The model is based on a generalized 3-layer suit that uses the Systems Integrated Numerical Differencing Analyzer '85 in conjunction with a 41-node FORTRAN routine. The latter simulates the transient heat transfer and respiratory processes of a human body in a suited environment. The user options for the suit encompass a liquid cooled garment, a removable jacket, a CO2/H2O permeable layer, and a phase change layer.
Sisson, T.W.; Robinson, J.E.; Swinney, D.D.
2011-01-01
Net changes in thickness and volume of glacial ice and perennial snow at Mount Rainier, Washington State, have been mapped over the entire edifice by differencing between a high-resolution LiDAR (light detection and ranging) topographic survey of September-October 2007/2008 and the 10 m lateral resolution U.S. Geological Survey digital elevation model derived from September 1970 aerial photography. Excepting the large Emmons and Winthrop Glaciers, all of Mount Rainier's glaciers thinned and retreated in their terminal regions, with substantial thinning mainly at elevations <2000 m and the greatest thinning on south-facing glaciers. Mount Rainier's glaciers and snowfields also lost volume over the interval, excepting the east-flank Fryingpan and Emmons Glaciers and minor near-summit snowfields; maximum volume losses were centered from ~1750 m (north flank) to ~2250 m (south flank) elevation. The greatest single volume loss was from the Carbon Glacier, despite its northward aspect, due to its sizeable area at <2000 m elevation. Overall, Mount Rainier lost ~14 vol% glacial ice and perennial snow over the 37 to 38 yr interval between surveys. Enhanced thinning of south-flank glaciers may be meltback from the high snowfall period of the mid-1940s to mid-1970s associated with the cool phase of the Pacific Decadal Oscillation.
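The thickness and volume changes reported here come from differencing co-registered elevation grids and summing over cell area. Below is a minimal sketch with toy 3 x 3 grids and an assumed common cell size, not the actual Mount Rainier surfaces.

```python
import numpy as np

def volume_change(dem_new, dem_old, cell_size, mask=None):
    """Net volume change from two co-registered DEMs on the same grid (metres).
    cell_size is the grid spacing in metres; mask optionally selects glacier cells."""
    dh = dem_new - dem_old
    if mask is not None:
        dh = np.where(mask, dh, 0.0)
    return dh.sum() * cell_size ** 2   # cubic metres

# Toy grids: a 2007/08 LiDAR surface minus a 1970 photogrammetric surface.
dem_1970 = np.array([[2000.0, 2005.0, 2010.0],
                     [2020.0, 2025.0, 2030.0],
                     [2040.0, 2045.0, 2050.0]])
dem_2008 = dem_1970 - np.array([[30.0, 28.0, 25.0],
                                [20.0, 18.0, 15.0],
                                [ 5.0,  2.0,  0.0]])   # imposed thinning pattern
print(volume_change(dem_2008, dem_1970, cell_size=10.0))  # net change in m^3 (negative = loss)
```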
Liang, Xiao; Khaliq, Abdul Q. M.; Xing, Yulong
2015-01-23
In this paper, we study a local discontinuous Galerkin method combined with fourth order exponential time differencing Runge-Kutta time discretization and a fourth order conservative method for solving the nonlinear Schrödinger equations. Based on different choices of numerical fluxes, we propose both energy-conserving and energy-dissipative local discontinuous Galerkin methods, and have proven the error estimates for the semi-discrete methods applied to linear Schrödinger equation. The numerical methods are proven to be highly efficient and stable for long-range soliton computations. Finally, extensive numerical examples are provided to illustrate the accuracy, efficiency and reliability of the proposed methods.
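The exponential-time-differencing idea behind the paper's fourth-order scheme can be seen in its simplest first-order form, where the stiff linear part is integrated exactly and the nonlinearity is frozen over the step. The scalar test problem below is illustrative only and is not the paper's fourth-order ETD Runge-Kutta method.

```python
import numpy as np

def etd1_step(u, dt, L, nonlinear):
    """One first-order exponential time differencing step for du/dt = L*u + N(u):
    the linear part L is integrated exactly, N is held fixed over the step."""
    eL = np.exp(L * dt)
    return eL * u + (eL - 1.0) / L * nonlinear(u)

# Illustrative stiff scalar test problem: du/dt = -50 u + u^2, u(0) = 0.5.
L = -50.0
N = lambda u: u ** 2
u, dt = 0.5, 0.01
for _ in range(200):
    u = etd1_step(u, dt, L, N)
print(u)   # decays toward the stable equilibrium u = 0
```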
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kupferman, R.
The author presents a numerical study of the axisymmetric Couette-Taylor problem using a finite difference scheme. The scheme is based on a staggered version of a second-order central-differencing method combined with a discrete Hodge projection. The use of central-differencing operators obviates the need to trace the characteristic flow associated with the hyperbolic terms. The result is a simple and efficient scheme which is readily adaptable to other geometries and to more complicated flows. The scheme exhibits competitive performance in terms of accuracy, resolution, and robustness. The numerical results agree accurately with linear stability theory and with previous numerical studies.
Response functions of free mass gravitational wave antennas
NASA Technical Reports Server (NTRS)
Estabrook, F. B.
1985-01-01
The work of Gursel, Linsay, Spero, Saulson, Whitcomb and Weiss (1984) on the response of a free-mass interferometric antenna is extended. Starting from first principles, the earlier work derived the response of a 2-arm gravitational wave antenna to plane polarized gravitational waves. Equivalent formulas (generalized slightly to allow for arbitrary elliptical polarization) are obtained by a simple differencing of the '3-pulse' Doppler response functions of two 1-arm antennas. A '4-pulse' response function is found, with quite complicated angular dependences for arbitrary incident polarization. The differencing method can as readily be used to write exact response functions ('3n+1 pulse') for antennas having multiple passes or more arms.
NASA Technical Reports Server (NTRS)
Jackson, James A.; Marr, Greg C.; Maher, Michael J.
1995-01-01
NASA GSFC VNS TSG personnel have proposed the use of TDRSS to obtain telemetry and/or S-band one-way return Doppler tracking data for spacecraft which do not have TDRSS-compatible transponders and therefore were never considered candidates for TDRSS support. For spacecraft with less stable local oscillators (LO), one-way return Doppler tracking data is typically of poor quality. It has been demonstrated using UARS, WIND, and NOAA-J tracking data that the simultaneous use of two TDRSS spacecraft can yield differenced one-way return Doppler data of high quality which is usable for orbit determination by differencing away the effects of oscillator instability.
Flux splitting algorithms for two-dimensional viscous flows with finite-rate chemistry
NASA Technical Reports Server (NTRS)
Shuen, Jian-Shun; Liou, Meng-Sing
1989-01-01
The Roe flux-difference splitting method has been extended to treat two-dimensional viscous flows with nonequilibrium chemistry. The derivations have avoided unnecessary assumptions or approximations. For spatial discretization, the second-order Roe upwind differencing is used for the convective terms and central differencing for the viscous terms. An upwind-based TVD scheme is applied to eliminate oscillations and obtain a sharp representation of discontinuities. A two-stage Runge-Kutta method is used to time integrate the discretized Navier-Stokes and species transport equations for the asymptotic steady solutions. The present method is then applied to two types of flows: the shock wave/boundary layer interaction problems and the jet in cross flows.
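A first-order scalar analogue of the Roe flux-difference splitting used for the convective terms is sketched below for the 1-D Burgers equation on a periodic grid with an illustrative time step; the paper's second-order, chemistry-coupled scheme is considerably more involved.

```python
import numpy as np

def roe_flux_burgers(uL, uR):
    """First-order Roe flux for the 1-D Burgers equation f(u) = u^2/2:
    average flux minus upwind dissipation scaled by the Roe-averaged speed."""
    a = 0.5 * (uL + uR)                       # Roe-averaged wave speed
    return 0.5 * (0.5 * uL**2 + 0.5 * uR**2) - 0.5 * np.abs(a) * (uR - uL)

def step(u, dx, dt):
    """One conservative update u_i -= dt/dx * (F_{i+1/2} - F_{i-1/2})."""
    uL, uR = u, np.roll(u, -1)                # states on either side of face i+1/2
    F = roe_flux_burgers(uL, uR)
    return u - dt / dx * (F - np.roll(F, 1))

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)
for _ in range(100):
    u = step(u, dx=x[1] - x[0], dt=0.002)     # CFL well below 1 for this setup
```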
DOE Office of Scientific and Technical Information (OSTI.GOV)
McHugh, P.R.; Ramshaw, J.D.
MAGMA is a FORTRAN computer code designed to model viscous flow in in situ vitrification melt pools. It models three-dimensional, incompressible, viscous flow and heat transfer. The momentum equation is coupled to the temperature field through the buoyancy force terms arising from the Boussinesq approximation. All fluid properties, except density, are assumed variable. Density is assumed constant except in the buoyancy force terms in the momentum equation. A simple melting model based on the enthalpy method allows the study of the melt front progression and latent heat effects. An indirect addressing scheme used in the numerical solution of the momentum equation avoids unnecessary calculations in cells devoid of liquid. Two-dimensional calculations can be performed using either rectangular or cylindrical coordinates, while three-dimensional calculations use rectangular coordinates. All derivatives are approximated by finite differences. The incompressible Navier-Stokes equations are solved using a new fully implicit iterative technique, while the energy equation is differenced explicitly in time. Spatial derivatives are written in conservative form using a uniform, rectangular, staggered mesh based on the marker and cell placement of variables. Convective terms are differenced using a weighted average of centered and donor cell differencing to ensure numerical stability. Complete descriptions of MAGMA governing equations, numerics, code structure, and code verification are provided. 14 refs.
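The weighted average of centered and donor-cell differencing used for the convective terms can be sketched for a single convected scalar as below; the blending weight alpha and the sample values are illustrative, not MAGMA's.

```python
import numpy as np

def convective_flux(u, vel, alpha):
    """Face flux for a convected scalar u with face velocity vel, blending
    central and donor-cell (upwind) differencing:
      F = alpha * F_donor + (1 - alpha) * F_central,  0 <= alpha <= 1.
    alpha = 1 is fully upwind (stable but diffusive); alpha = 0 is purely central."""
    uL, uR = u, np.roll(u, -1)                     # cells on either side of face i+1/2
    f_central = vel * 0.5 * (uL + uR)
    f_donor = vel * np.where(vel >= 0.0, uL, uR)   # take the upstream cell value
    return alpha * f_donor + (1.0 - alpha) * f_central

u = np.array([1.0, 0.8, 0.6, 0.4, 0.2])
print(convective_flux(u, vel=1.0, alpha=0.5))      # blended face fluxes (periodic wrap)
```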
Progress in multi-dimensional upwind differencing
NASA Technical Reports Server (NTRS)
Van Leer, Bram
1992-01-01
Multi-dimensional upwind-differencing schemes for the Euler equations are reviewed. On the basis of the first-order upwind scheme for a one-dimensional convection equation, the two approaches to upwind differencing are discussed: the fluctuation approach and the finite-volume approach. The usual extension of the finite-volume method to the multi-dimensional Euler equations is not entirely satisfactory, because the direction of wave propagation is always assumed to be normal to the cell faces. This leads to smearing of shock and shear waves when these are not grid-aligned. Multi-directional methods, in which upwind-biased fluxes are computed in a frame aligned with a dominant wave, overcome this problem, but at the expense of robustness. The same is true for the schemes incorporating a multi-dimensional wave model not based on multi-dimensional data but on an 'educated guess' of what they could be. The fluctuation approach offers the best possibilities for the development of genuinely multi-dimensional upwind schemes. Three building blocks are needed for such schemes: a wave model, a way to achieve conservation, and a compact convection scheme. Recent advances in each of these components are discussed; putting them all together is the present focus of a worldwide research effort. Some numerical results are presented, illustrating the potential of the new multi-dimensional schemes.
An Explicit Upwind Algorithm for Solving the Parabolized Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Korte, John J.
1991-01-01
An explicit, upwind algorithm was developed for the direct (noniterative) integration of the 3-D Parabolized Navier-Stokes (PNS) equations in a generalized coordinate system. The new algorithm uses upwind approximations of the numerical fluxes for the pressure and convection terms obtained by combining flux difference splittings (FDS) formed from the solution of an approximate Riemann problem (RP). The approximate RP is solved using an extension of the method developed by Roe for steady supersonic flow of an ideal gas. Roe's method is extended for use with the 3-D PNS equations expressed in generalized coordinates and to include Vigneron's technique of splitting the streamwise pressure gradient. The difficulty associated with applying Roe's scheme in the subsonic region is overcome. The second-order upwind differencing of the flux derivatives is obtained by adding FDS to either an original forward or backward differencing of the flux derivative. This approach is used to modify an explicit MacCormack differencing scheme into an upwind differencing scheme. The second-order upwind flux approximations, applied with flux limiters, provide a method for numerically capturing shocks without the need for additional artificial damping terms which require adjustment by the user. In addition, a cubic equation is derived for determining Vigneron's pressure splitting coefficient using the updated streamwise flux vector. Decoding the streamwise flux vector with the updated value of Vigneron's pressure splitting improves the stability of the scheme. The new algorithm is applied to 2-D and 3-D supersonic and hypersonic laminar flow test cases. Results are presented for the experimental studies of Holden and of Tracy. In addition, a flow field solution is presented for a generic hypersonic aircraft at a Mach number of 24.5 and angle of attack of 1 degree. The computed results compare well to both experimental data and numerical results from other algorithms. Computational times required for the upwind PNS code are approximately equal to an explicit PNS MacCormack's code and existing implicit PNS solvers.
NASA Astrophysics Data System (ADS)
Chen, Liang; Zhao, Qile; Hu, Zhigang; Jiang, Xinyuan; Geng, Changjiang; Ge, Maorong; Shi, Chuang
2018-01-01
The large number of ambiguities in the un-differenced (UD) model lowers computational efficiency, which is not suitable for high-frequency real-time GNSS clock estimation, such as at 1 Hz. A mixed differenced model fusing UD pseudo-range and epoch-differenced (ED) phase observations has been introduced into real-time clock estimation. In this contribution, we extend the mixed differenced model to realize high-frequency multi-GNSS real-time clock updating, and a rigorous comparison and analysis under the same conditions are performed to achieve the best real-time clock estimation performance, taking efficiency, accuracy, consistency and reliability into consideration. Based on the multi-GNSS real-time data streams provided by the multi-GNSS Experiment (MGEX) and Wuhan University, a GPS + BeiDou + Galileo global real-time augmentation positioning prototype system is designed and constructed, including real-time precise orbit determination, real-time precise clock estimation, real-time Precise Point Positioning (RT-PPP) and real-time Standard Point Positioning (RT-SPP). The statistical analysis of the 6 h-predicted real-time orbits shows that the root mean square (RMS) error in the radial direction is about 1-5 cm for GPS, BeiDou MEO and Galileo satellites and about 10 cm for BeiDou GEO and IGSO satellites. Using the mixed differenced estimation model, the prototype system can realize highly efficient real-time satellite absolute clock estimation with no constant clock bias and can be used for high-frequency augmentation message updating (such as 1 Hz). The real-time augmentation message signal-in-space ranging error (SISRE), a comprehensive measure of orbit and clock accuracy that affects the users' actual positioning performance, is introduced to evaluate and analyze the performance of the GPS + BeiDou + Galileo global real-time augmentation positioning system. The real-time augmentation message SISRE is about 4-7 cm for GPS, about 10 cm for BeiDou IGSO/MEO and Galileo, and about 30 cm for BeiDou GEO satellites. The real-time positioning results show that, compared to GPS-only, the GPS + BeiDou + Galileo RT-PPP can shorten convergence time by about 60%, improve positioning accuracy by about 30%, and obtain an averaged RMS of 4 cm in horizontal and 6 cm in vertical; additionally, RT-SPP in the prototype system achieves a positioning accuracy of about 1 m RMS in horizontal and 1.5-2 m in vertical, improvements of 60% and 70%, respectively, over SPP based on the broadcast ephemeris.
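The appeal of the epoch-differenced (ED) phase observations used in the mixed model is that the constant carrier-phase ambiguity cancels between epochs, so no ambiguity parameters enter the clock filter. Below is a toy numerical sketch of that cancellation; the ranges, ambiguity, and noise level are made up.

```python
import numpy as np

# Carrier phase (in metres) = geometric range + clock terms + constant ambiguity + noise.
rng = np.random.default_rng(0)
true_range = np.linspace(20_000_000.0, 20_000_050.0, 10)   # metres, 10 epochs
ambiguity = 1234.567                                        # constant across epochs
phase = true_range + ambiguity + rng.normal(0.0, 0.003, 10)

# Epoch differencing removes the ambiguity, leaving only range/clock changes,
# so no ambiguity parameters need to be estimated from these observations.
delta_phase = np.diff(phase)
delta_range = np.diff(true_range)
print(np.max(np.abs(delta_phase - delta_range)))   # residual at the phase-noise level only
```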
Recovery of biological soil crust richness and cover 12-16 years after wildfires in Idaho, USA
NASA Astrophysics Data System (ADS)
Root, Heather T.; Brinda, John C.; Dodson, E. Kyle
2017-09-01
Changing fire regimes in western North America may impact biological soil crust (BSC) communities that influence many ecosystem functions, such as soil stability and C and N cycling. However, longer-term effects of wildfire on BSC abundance, species richness, functional groups, and ecosystem functions after wildfire (i.e., BSC resilience) are still poorly understood. We sampled BSC lichen and bryophyte communities at four sites in Idaho, USA, within foothill steppe communities that included wildfires from 12 to 16 years old. We established six plots outside each burn perimeter and compared them with six plots of varying severity within each fire perimeter at each site. BSC cover was most strongly negatively impacted by wildfire at sites that had well-developed BSC communities in adjacent unburned plots. BSC species richness was estimated to be 65 % greater in unburned plots compared with burned plots, and fire effects did not vary among sites. In contrast, there was no evidence that vascular plant functional groups or fire severity (as measured by satellite metrics differenced normalized burn ratio (dNBR) or relativized differenced normalized burn ratio (RdNBR)) significantly affected longer-term BSC responses. Three large-statured BSC functional groups that may be important in controlling wind and water erosion (squamulose lichens, vagrant lichens, and tall turf mosses) exhibited a significant decrease in abundance in burned areas relative to adjacent unburned areas. The decreases in BSC cover and richness along with decreased abundance of several functional groups suggest that wildfire can negatively impact ecosystem function in these semiarid ecosystems for at least 1 to 2 decades. This is a concern given that increased fire frequency is predicted for the region due to exotic grass invasion and climate change.
Forecasting conditional climate-change using a hybrid approach
Esfahani, Akbar Akbari; Friedel, Michael J.
2014-01-01
A novel approach is proposed to forecast the likelihood of climate-change across spatial landscape gradients. This hybrid approach involves reconstructing past precipitation and temperature using the self-organizing map technique; determining quantile trends in the climate-change variables by quantile regression modeling; and computing conditional forecasts of climate-change variables based on self-similarity in quantile trends using the fractionally differenced auto-regressive integrated moving average technique. The proposed modeling approach is applied to states (Arizona, California, Colorado, Nevada, New Mexico, and Utah) in the southwestern U.S., where conditional forecasts of climate-change variables are evaluated against recent (2012) observations, evaluated at a future time period (2030), and evaluated as future trends (2009–2059). These results have broad economic, political, and social implications because they quantify uncertainty in climate-change forecasts affecting various sectors of society. Another benefit of the proposed hybrid approach is that it can be extended to any spatiotemporal scale providing self-similarity exists.
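The fractional differencing underlying the fractionally differenced ARIMA step expands (1 - B)^d into a slowly decaying set of lag weights. Below is a minimal sketch of those weights and their application, with an illustrative truncation length and memory parameter d; it is not the authors' forecasting pipeline.

```python
import numpy as np

def frac_diff_weights(d, n):
    """Weights of the fractional differencing operator (1 - B)^d, truncated
    after n lags: w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_difference(y, d, n_lags=100):
    """Apply (1 - B)^d to a series using the truncated weight sequence."""
    w = frac_diff_weights(d, n_lags)
    out = np.zeros(len(y))
    for t in range(len(y)):
        k = min(t + 1, n_lags)
        out[t] = np.dot(w[:k], y[t::-1][:k])   # weighted sum over the most recent k lags
    return out

y = np.cumsum(np.random.default_rng(1).normal(size=500))   # a wandering toy series
print(frac_difference(y, d=0.4)[:5])
```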
Berkeley UXO Discriminator (BUD)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gasperikova, Erika; Smith, J. Torquil; Morrison, H. Frank
2007-01-01
The Berkeley UXO Discriminator (BUD) is an optimally designed active electromagnetic system that not only detects but also characterizes UXO. The system incorporates three orthogonal transmitters and eight pairs of differenced receivers. It has two modes of operation: (1) search mode, in which BUD moves along a profile and exclusively detects targets in its vicinity, providing target depth and horizontal location, and (2) discrimination mode, in which BUD, stationary above a target, from a single position, determines three discriminating polarizability responses together with the object location and orientation. The performance of the system is governed by a target size-depth curve. Maximum detection depth is 1.5 m. While UXO objects have a single major polarizability coincident with the long axis of the object and two equal transverse polarizabilities, scrap metal has three different principal polarizabilities. The results clearly show that there are very clear distinctions between symmetric intact UXO and irregular scrap metal, and that BUD can resolve the intrinsic polarizabilities of the target. The field survey at the Yuma Proving Ground in Arizona showed excellent results within the predicted size-depth range.
Thermal modeling of a cryogenic turbopump for space shuttle applications.
NASA Technical Reports Server (NTRS)
Knowles, P. J.
1971-01-01
Thermal modeling of a cryogenic pump and a hot-gas turbine in a turbopump assembly proposed for the Space Shuttle is described in this paper. A model, developed by identifying the heat-transfer regimes and incorporating their dependencies into a turbopump system model, included heat transfer for two-phase cryogen, hot-gas (200 R) impingement on turbine blades, gas impingement on rotating disks and parallel plate fluid flow. The 'thermal analyzer' program employed to develop this model was the TRW Systems Improved Numerical Differencing Analyzer (SINDA). This program uses finite differencing with lumped parameter representation for each node. Also discussed are model development, simulations of turbopump startup/shutdown operations, and the effects of varying turbopump parameters on the thermal performance.
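SINDA-style lumped-parameter finite differencing advances node temperatures with an explicit nodal energy balance. The two-node sketch below, with made-up capacitances and conductances, only illustrates that update; it is not the SINDA solver itself.

```python
import numpy as np

def explicit_thermal_step(T, C, G, Q, dt):
    """Explicit lumped-parameter update: C_i dT_i/dt = Q_i + sum_j G_ij (T_j - T_i).
    T: node temperatures (K), C: capacitances (J/K), G: conductance matrix with
    zero diagonal (W/K), Q: applied heat loads (W)."""
    heat_in = Q + G @ T - T * G.sum(axis=1)
    return T + dt * heat_in / C

# Two nodes: a warm turbine-side mass conducting into a cryogenic pump-side mass.
T = np.array([300.0, 90.0])          # K
C = np.array([500.0, 800.0])         # J/K
G = np.array([[0.0, 2.0],
              [2.0, 0.0]])           # W/K conductance between the two nodes
Q = np.array([0.0, 0.0])             # no applied loads: nodes relax toward a common temperature
for _ in range(1000):
    T = explicit_thermal_step(T, C, G, Q, dt=1.0)
print(T)
```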
Analysis of airfoil transitional separation bubbles
NASA Technical Reports Server (NTRS)
Davis, R. L.; Carter, J. E.
1984-01-01
A previously developed local inviscid-viscous interaction technique for the analysis of airfoil transitional separation bubbles, ALESEP (Airfoil Leading Edge Separation) has been modified to utilize a more accurate windward finite difference procedure in the reversed flow region, and a natural transition/turbulence model has been incorporated for the prediction of transition within the separation bubble. Numerous calculations and experimental comparisons are presented to demonstrate the effects of the windward differencing scheme and the natural transition/turbulence model. Grid sensitivity and convergence capabilities of this inviscid-viscous interaction technique are briefly addressed. A major contribution of this report is that with the use of windward differencing, a second, counter-rotating eddy has been found to exist in the wall layer of the primary separation bubble.
SENSITIVITY OF BLIND PULSAR SEARCHES WITH THE FERMI LARGE AREA TELESCOPE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dormody, M.; Johnson, R. P.; Atwood, W. B.
2011-12-01
We quantitatively establish the sensitivity to the detection of young to middle-aged, isolated, gamma-ray pulsars through blind searches of Fermi Large Area Telescope (LAT) data using a Monte Carlo simulation. We detail a sensitivity study of the time-differencing blind search code used to discover gamma-ray pulsars in the first year of observations. We simulate 10,000 pulsars across a broad parameter space and distribute them across the sky. We replicate the analysis in the Fermi LAT First Source Catalog to localize the sources, and the blind search analysis to find the pulsars. We analyze the results and discuss the effect of positional error and spin frequency on gamma-ray pulsar detections. Finally, we construct a formula to determine the sensitivity of the blind search and present a sensitivity map assuming a standard set of pulsar parameters. The results of this study can be applied to population studies and are useful in characterizing unidentified LAT sources.
Multi-decadal elevation changes on Bagley Ice Valley and Malaspina Glacier, Alaska
NASA Astrophysics Data System (ADS)
Muskett, Reginald R.; Lingle, Craig S.; Tangborn, Wendell V.; Rabus, Bernhard T.
2003-08-01
Digital elevation models (DEMs) of Bagley Ice Valley and Malaspina Glacier produced by (i) Intermap Technologies, Inc. (ITI) from airborne interferometric synthetic aperture radar (InSAR) data acquired 4-13 September 2000, (ii) the German Aerospace Center (DLR) from spaceborne InSAR data acquired by the Shuttle Radar Topography Mission (SRTM) 11-22 February 2000, and (iii) the US Geological Survey (USGS) from aerial photographs acquired in 1972/73, were differenced to estimate glacier surface elevation changes from 1972 to 2000. Spatially non-uniform thickening, 10 +/- 7 m on average, is observed on Bagley Ice Valley (accumulation area) while non-uniform thinning, 47 +/- 5 m on average, is observed on the glaciers of the Malaspina complex (mostly ablation area). Even larger thinning is observed on the retreating tidewater Tyndall Glacier. These changes have resulted from increased temperature and precipitation associated with climate warming, and rapid tidewater retreat.
On the effect of using the Shapiro filter to smooth winds on a sphere
NASA Technical Reports Server (NTRS)
Takacs, L. L.; Balgovind, R. C.
1984-01-01
Spatial differencing schemes which are neither enstrophy conserving nor implicitly damping require global filtering of short waves to eliminate the build-up of energy in the shortest wavelengths due to aliasing. Takacs and Balgovind (1983) have shown that filtering on a sphere with a latitude dependent damping function will cause spurious vorticity and divergence source terms to occur if care is not taken to ensure the irrotationality of the gradients of the stream function and velocity potential. Using a shallow water model with fourth-order energy-conserving spatial differencing, it is found that using a 16th-order Shapiro (1979) filter on the winds and heights to control nonlinear instability also creates spurious source terms when the winds are filtered in the meridional direction.
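A Shapiro filter of order 2n damps a field by subtracting the n-fold iterated second difference, removing two-grid-interval waves completely while leaving long waves nearly untouched. The periodic 1-D sketch below (order 16, illustrative field) shows the basic operator, not the latitude-dependent spherical filtering discussed in the abstract.

```python
import numpy as np

def shapiro_filter(u, order=16):
    """Shapiro filter of the given even order on a periodic 1-D field:
    u_filtered = [1 - (-delta^2/4)^n] u, with n = order/2 and delta^2 the
    centered second difference."""
    n = order // 2
    t = u.copy()
    for _ in range(n):
        second_diff = np.roll(t, -1) - 2.0 * t + np.roll(t, 1)
        t = -0.25 * second_diff
    return u - t

x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
u = np.sin(3 * x) + 0.3 * np.cos(64 * x)          # smooth wave + 2-grid-interval noise
u_smooth = shapiro_filter(u, order=16)
print(np.max(np.abs(u_smooth - np.sin(3 * x))))   # the 2-grid wave is essentially gone
```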
Black hole evolution by spectral methods
NASA Astrophysics Data System (ADS)
Kidder, Lawrence E.; Scheel, Mark A.; Teukolsky, Saul A.; Carlson, Eric D.; Cook, Gregory B.
2000-10-01
Current methods of evolving a spacetime containing one or more black holes are plagued by instabilities that prohibit long-term evolution. Some of these instabilities may be due to the numerical method used, traditionally finite differencing. In this paper, we explore the use of a pseudospectral collocation (PSC) method for the evolution of a spherically symmetric black hole spacetime in one dimension using a hyperbolic formulation of Einstein's equations. We demonstrate that our PSC method is able to evolve a spherically symmetric black hole spacetime forever without enforcing constraints, even if we add dynamics via a Klein-Gordon scalar field. We find that, in contrast with finite-differencing methods, black hole excision is a trivial operation using PSC applied to a hyperbolic formulation of Einstein's equations. We discuss the extension of this method to three spatial dimensions.
NASA Astrophysics Data System (ADS)
Koehler-Sidki, A.; Dynes, J. F.; Lucamarini, M.; Roberts, G. L.; Sharpe, A. W.; Yuan, Z. L.; Shields, A. J.
2018-04-01
Fast-gated avalanche photodiodes (APDs) are the most commonly used single photon detectors for high-bit-rate quantum key distribution (QKD). Their robustness against external attacks is crucial to the overall security of a QKD system, or even an entire QKD network. We investigate the behavior of a gigahertz-gated, self-differencing (In,Ga)As APD under strong illumination, a tactic Eve often uses to bring detectors under her control. Our experiment and modeling reveal that the negative feedback by the photocurrent safeguards the detector from being blinded through reducing its avalanche probability and/or strengthening the capacitive response. Based on this finding, we propose a set of best-practice criteria for designing and operating fast-gated APD detectors to ensure their practical security in QKD.
Investigation on Glacier Thinning in Baspa, Western Himalaya.
NASA Astrophysics Data System (ADS)
S, P.; Kulkarni, A. V.; Bhushan, S.
2017-12-01
Mass balance studies are important to assess the state of glaciers. Previously, numerous field investigations have been carried out in Baspa basin to measure mass balance. However, mass balance data from field are limited to a small number of glaciers and for short durations. Therefore, this study uses the geodetic mass balance technique to evaluate the mass loss at decadal scale. The geodetic method involves differencing Digital Elevation Models (DEMs) from different years to obtain the change in glacier elevation, which is subsequently used to evaluate mass balance. This study derives mass balance from 2000 to 2014 for 16 glaciers covering a total area of 70 sq. km. The study uses the Shuttle Radar Topography Mission (SRTM) DEM for the year 2000, and the DEM for the year 2014 was derived from a Cartosat-1 stereo pair using photogrammetric principles. A Differential Global Positioning System (DGPS) survey was conducted in Baspa basin at different elevation zones to collect Ground Control Points (GCP) with millimeter-level accuracy. These GCP were used to derive the Cartosat DEM. Various corrections were applied before differencing the two DEMs. They were co-registered using an analytical approach to account for horizontal shift. Corrections were also applied to remove the bias due to satellite acquisition geometry. The SRTM DEM was acquired in February when the study area was covered by seasonal snow, whereas the Cartosat data was acquired during the ablation season. As the season of data acquisition varies for the two DEMs, we have corrected for the bias that could be caused by seasonal snow. Snowfall data from a meteorological station in the Baspa valley and a local precipitation gradient were used to determine the seasonal snow depth. Further, corrections were applied to account for the bias due to radar penetration in the SRTM DEM. Then, the elevation changes were determined by subtracting the two DEMs to estimate mass balance. The figure below shows the change in glacier elevation. These results will be validated with field estimates. This investigation, after validation, will be an important addition in understanding changes in Himalayan glaciers.
NASA Technical Reports Server (NTRS)
Holt, James M.; Clanton, Stephen E.
1999-01-01
Results of the International Space Station (ISS) Node 2 Internal Active Thermal Control System (IATCS) gross leakage analysis are presented for evaluating total leakage flowrates and volume discharge caused by a gross leakage event (i.e. open boundary condition). A Systems Improved Numerical Differencing Analyzer and Fluid Integrator (SINDA/FLUINT) thermal hydraulic mathematical model (THMM) representing the Node 2 IATCS was developed to simulate system performance under steady-state nominal conditions as well as the transient flow effects resulting from an open line exposed to ambient. The objective of the analysis was to determine the adequacy of the leak detection software in limiting the quantity of fluid lost during a gross leakage event to within an acceptable level.
NASA Technical Reports Server (NTRS)
Holt, James M.; Clanton, Stephen E.
2001-01-01
Results of the International Space Station (ISS) Node 2 Internal Active Thermal Control System (IATCS) gross leakage analysis are presented for evaluating total leakage flow rates and volume discharge caused by a gross leakage event (i.e. open boundary condition). A Systems Improved Numerical Differencing Analyzer and Fluid Integrator (SINDA85/FLUINT) thermal hydraulic mathematical model (THMM) representing the Node 2 IATCS was developed to simulate system performance under steady-state nominal conditions as well as the transient flow effect resulting from an open line exposed to ambient. The objective of the analysis was to determine the adequacy of the leak detection software in limiting the quantity of fluid lost during a gross leakage event to within an acceptable level.
NASA Astrophysics Data System (ADS)
Lee, Min Soo; Park, Byung Kwon; Woo, Min Ki; Park, Chang Hoon; Kim, Yong-Su; Han, Sang-Wook; Moon, Sung
2016-12-01
We developed a countermeasure against blinding attacks on low-noise detectors with a background-noise-cancellation scheme in quantum key distribution (QKD) systems. Background-noise cancellation includes self-differencing and balanced avalanche photodiode (APD) schemes and is considered a promising solution for low-noise APDs, which are critical components in high-performance QKD systems. However, its vulnerability to blinding attacks has been recently reported. In this work, we propose a countermeasure that prevents this potential security loophole from being used in detector blinding attacks. An experimental QKD setup is implemented and various tests are conducted to verify the feasibility and performance of the proposed method. The obtained measurement results show that the proposed scheme successfully detects occurring blinding-attack-based hacking attempts.
Megathrust splay faults at the focus of the Prince William Sound asperity, Alaska
Liberty, Lee M.; Finn, Shaun P.; Haeussler, Peter J.; Pratt, Thomas L.; Peterson, Andrew
2013-01-01
High-resolution sparker and crustal-scale air gun seismic reflection data, coupled with repeat bathymetric surveys, document a region of repeated coseismic uplift on the portion of the Alaska subduction zone that ruptured in 1964. This area defines the western limit of Prince William Sound. Differencing of vintage and modern bathymetric surveys shows that the region of greatest uplift related to the 1964 Great Alaska earthquake was focused along a series of subparallel faults beneath Prince William Sound and the adjacent Gulf of Alaska shelf. Bathymetric differencing indicates that 12 m of coseismic uplift occurred along two faults that reached the seafloor as submarine terraces on the Cape Cleare bank southwest of Montague Island. Sparker seismic reflection data provide cumulative Holocene slip estimates as high as 9 mm/yr along a series of splay thrust faults within both the inner wedge and transition zone of the accretionary prism. Crustal seismic data show that these megathrust splay faults root separately into the subduction zone décollement. Splay fault divergence from this megathrust correlates with changes in midcrustal seismic velocity and magnetic susceptibility values, best explained by duplexing of the subducted Yakutat terrane rocks above Pacific plate rocks along the trailing edge of the Yakutat terrane. Although each splay fault is capable of independent motion, we conclude that the identified splay faults rupture in a similar pattern during successive megathrust earthquakes and that the region of greatest seismic coupling has remained consistent throughout the Holocene.
Five-Year Wilkinson Microwave Anisotropy Probe Observations: Beam Maps and Window Functions
NASA Astrophysics Data System (ADS)
Hill, R. S.; Weiland, J. L.; Odegard, N.; Wollack, E.; Hinshaw, G.; Larson, D.; Bennett, C. L.; Halpern, M.; Page, L.; Dunkley, J.; Gold, B.; Jarosik, N.; Kogut, A.; Limon, M.; Nolta, M. R.; Spergel, D. N.; Tucker, G. S.; Wright, E. L.
2009-02-01
Cosmology and other scientific results from the Wilkinson Microwave Anisotropy Probe (WMAP) mission require an accurate knowledge of the beam patterns in flight. While the degree of beam knowledge for the WMAP one-year and three-year results was unprecedented for a CMB experiment, we have significantly improved the beam determination as part of the five-year data release. Physical optics fits are done on both the A and the B sides for the first time. The cutoff scale of the fitted distortions on the primary mirror is reduced by a factor of ~2 from previous analyses. These changes enable an improvement in the hybridization of Jupiter data with beam models, which is optimized with respect to error in the main beam solid angle. An increase in main-beam solid angle of ~1% is found for the V2 and W1-W4 differencing assemblies. Although the five-year results are statistically consistent with previous ones, the errors in the five-year beam transfer functions are reduced by a factor of ~2 as compared to the three-year analysis. We present radiometry of the planet Jupiter as a test of the beam consistency and as a calibration standard; for an individual differencing assembly, errors in the measured disk temperature are ~0.5%. WMAP is the result of a partnership between Princeton University and NASA's Goddard Space Flight Center. Scientific guidance is provided by the WMAP Science Team.
Reproducibility of UAV-based earth surface topography based on structure-from-motion algorithms.
NASA Astrophysics Data System (ADS)
Clapuyt, François; Vanacker, Veerle; Van Oost, Kristof
2014-05-01
A representation of the earth surface at very high spatial resolution is crucial to accurately map small geomorphic landforms with high precision. Very high resolution digital surface models (DSM) can then be used to quantify changes in earth surface topography over time, based on differencing of DSMs taken at various moments in time. However, it is compulsory to have both high accuracy for each topographic representation and consistency between measurements over time, as DSM differencing automatically leads to error propagation. This study investigates the reproducibility of reconstructions of earth surface topography based on structure-from-motion (SFM) algorithms. To this end, we equipped an eight-propeller drone with a standard reflex camera. This equipment can easily be deployed in the field, as it is a lightweight, low-cost system in comparison with classic aerial photo surveys and terrestrial or airborne LiDAR scanning. Four sets of aerial photographs were created for one test field. The sets of airphotos differ in focal length and viewing angle, i.e., nadir view versus ground-level view. In addition, the importance of the accuracy of ground control points for the construction of a georeferenced point cloud was assessed using two different GPS devices with horizontal accuracies at the sub-meter and sub-decimeter level, respectively. Airphoto datasets were processed with the SFM algorithm and the resulting point clouds were georeferenced. Then, the surface representations were compared with each other to assess the reproducibility of the earth surface topography. Finally, consistency between independent datasets is discussed.
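The error propagation noted here is commonly handled by thresholding the DSM of difference at a minimum level of detection derived from the two surveys' vertical uncertainties. The sketch below uses made-up grids and uncertainties and a conventional 95% threshold; it is not the study's actual error model.

```python
import numpy as np

def dod_with_threshold(dsm_t2, dsm_t1, sigma1, sigma2, k=1.96):
    """DSM of Difference with a minimum level of detection: elevation changes
    smaller than k * sqrt(sigma1^2 + sigma2^2) are treated as noise, because
    the errors of the two surveys propagate into their difference."""
    dod = dsm_t2 - dsm_t1
    lod = k * np.sqrt(sigma1 ** 2 + sigma2 ** 2)
    return np.where(np.abs(dod) > lod, dod, 0.0), lod

# Illustrative per-survey vertical uncertainties (m) for two SfM-derived surfaces.
dsm_t1 = np.array([[10.00, 10.05], [10.10, 10.00]])
dsm_t2 = np.array([[10.02, 10.40], [ 9.60, 10.01]])
changes, lod = dod_with_threshold(dsm_t2, dsm_t1, sigma1=0.05, sigma2=0.05)
print(lod, changes)   # small differences are zeroed; only detectable change remains
```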
Efficient high-rate satellite clock estimation for PPP ambiguity resolution using carrier-ranges.
Chen, Hua; Jiang, Weiping; Ge, Maorong; Wickert, Jens; Schuh, Harald
2014-11-25
In order to capture the short-term clock variation of GNSS satellites, clock corrections must be estimated and updated at a high rate for Precise Point Positioning (PPP). This estimation is already very time-consuming for the GPS constellation alone, as a great number of ambiguities need to be estimated simultaneously. However, on the one hand better estimates are expected by including more stations, and on the other hand satellites from different GNSS systems must be processed in an integrated way for a reliable multi-GNSS positioning service. To alleviate the heavy computational burden, epoch-differenced observations are commonly employed because ambiguities are eliminated. As the epoch-differenced method can only derive temporal clock changes, which then have to be aligned to absolute clocks in a rather complicated way, in this paper an efficient method for high-rate clock estimation is proposed using the concept of "carrier-range" realized by means of PPP with integer ambiguity resolution. Processing procedures for both post- and real-time processing are developed, respectively. The experimental validation shows that the computation time could be reduced to about one sixth of that of the existing methods for post-processing and to less than 1 s for processing a single epoch of a network with about 200 stations in real-time mode after all ambiguities are fixed. This confirms that the proposed processing strategy will enable high-rate clock estimation for future multi-GNSS networks in post-processing and possibly also in real-time mode.
Megathrust splay faults at the focus of the Prince William Sound asperity, Alaska
NASA Astrophysics Data System (ADS)
Liberty, Lee M.; Finn, Shaun P.; Haeussler, Peter J.; Pratt, Thomas L.; Peterson, Andrew
2013-10-01
High-resolution sparker and crustal-scale air gun seismic reflection data, coupled with repeat bathymetric surveys, document a region of repeated coseismic uplift on the portion of the Alaska subduction zone that ruptured in 1964. This area defines the western limit of Prince William Sound. Differencing of vintage and modern bathymetric surveys shows that the region of greatest uplift related to the 1964 Great Alaska earthquake was focused along a series of subparallel faults beneath Prince William Sound and the adjacent Gulf of Alaska shelf. Bathymetric differencing indicates that 12 m of coseismic uplift occurred along two faults that reached the seafloor as submarine terraces on the Cape Cleare bank southwest of Montague Island. Sparker seismic reflection data provide cumulative Holocene slip estimates as high as 9 mm/yr along a series of splay thrust faults within both the inner wedge and transition zone of the accretionary prism. Crustal seismic data show that these megathrust splay faults root separately into the subduction zone décollement. Splay fault divergence from this megathrust correlates with changes in midcrustal seismic velocity and magnetic susceptibility values, best explained by duplexing of the subducted Yakutat terrane rocks above Pacific plate rocks along the trailing edge of the Yakutat terrane. Although each splay fault is capable of independent motion, we conclude that the identified splay faults rupture in a similar pattern during successive megathrust earthquakes and that the region of greatest seismic coupling has remained consistent throughout the Holocene.
Analyzing Hydro-Geomorphic Responses in Post-Fire Stream Channels with Terrestrial LiDAR
NASA Astrophysics Data System (ADS)
Nourbakhshbeidokhti, S.; Kinoshita, A. M.; Chin, A.
2015-12-01
Wildfires have the potential to significantly alter soil properties and vegetation within watersheds. These alterations often contribute to accelerated erosion, runoff, and sediment transport in stream channels and hillslopes. This research applies repeated Terrestrial Laser Scanning (TLS) Light Detection and Ranging (LiDAR) to stream reaches within the Pike National Forest in Colorado following the 2012 Waldo Canyon Fire. These scans allow investigation of the relationship between sediment delivery and environmental characteristics such as precipitation, soil burn severity, and vegetation. Post-fire LiDAR images provide high-resolution information on stream channel changes in eight reaches over three years (2012-2014). All images are processed with RiSCAN PRO to remove vegetation, then triangulated and smoothed to create a Digital Elevation Model (DEM) with 0.1 m resolution. Study reaches with two or more successive DEM images are compared using a differencing method to estimate the volume of sediment erosion and deposition. Preliminary analysis of four channel reaches within Williams Canyon and Camp Creek yielded erosion estimates between 0.035 and 0.618 m3 per unit area, and deposition estimates between 0.365 and 1.67 m3 per unit area. Reaches that experienced higher soil burn severity or larger rainfall events produced the greatest geomorphic changes. Results from the LiDAR analyses can be incorporated into post-fire hydrologic models to improve estimates of runoff and sediment yield. These models will, in turn, provide guidance for water resources management and downstream hazard mitigation.
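As a rough illustration of the DEM differencing step described above, the sketch below (hypothetical arrays and a hypothetical detection threshold; not the authors' code) computes erosion and deposition volumes from two co-registered 0.1 m DEM grids.

```python
# Minimal sketch: DEM of Difference on two co-registered grids of the same reach.
import numpy as np

def dod_volumes(dem_before, dem_after, cell_size=0.1, min_change=0.02):
    """Return (erosion_m3, deposition_m3); changes below min_change [m] are ignored."""
    dod = dem_after - dem_before                      # negative = erosion, positive = deposition
    dod = np.where(np.abs(dod) < min_change, 0.0, dod)
    cell_area = cell_size ** 2
    erosion = float(-dod[dod < 0].sum() * cell_area)
    deposition = float(dod[dod > 0].sum() * cell_area)
    return erosion, deposition

# usage with stand-in 200 x 200 grids
before = np.zeros((200, 200))
after = before + np.random.normal(0.0, 0.01, before.shape)   # mostly noise
after[50:60, 20:120] -= 0.15                                  # a scoured strip
print(dod_volumes(before, after))
```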
Geomorphological change detection of fluvial processes of lower Siret channel using LIDAR data
NASA Astrophysics Data System (ADS)
Niculita, Mihai; Obreja, Florin; Boca, Bogdan
2015-04-01
Geomorphological change detection is a relatively new method arising from the availability of high-resolution multitemporal DEMs (James et al., 2012; Brodu & Lague, 2012; Barnhart & Crosby, 2013). The main issue with this method is the identification of real change, produced by geomorphologic processes, as opposed to noise, method artefacts, vegetation, or various other errors (Wheaton et al., 2009). We present the results of geomorphological change detection applied to a part of the lower Siret river channel (from 60 to 140 km above the Siret-Dunăre confluence, between Adjud and Namoloasa). The data sources were LIDAR DEMs provided by the Siret and Prut-Barlad Water Administrations, one for 2008 at 2 m resolution and the other for 2012 at 0.5 m resolution. The geomorphological change detection was performed at a resolution of 2 m using the methodology of Wheaton et al. (2009), on 4 sites with a cumulative length of 47 km, of which 41.6 km cover meandering channels and 5.4 km the Movileni anthropic lake shore. In the studied period (2008-2012), two major flood events were registered, one in 2008 and the other in 2010 (Olariu et al., 2009; Serbu et al., 2009; Nedelcu et al., 2011). The geomorphological change detection approach managed to outline the presence and the rate of processes (expressed as volumetric change) for: channel erosion, channel aggradation, lateral migration of river banks, meander migration, lake bank erosion, alluvial fan deposition, and anthropic excavation of channel and river bank.
References:
Barnhart T.B., Crosby B.T., 2013. Comparing Two Methods of Surface Change Detection on an Evolving Thermokarst Using High-Temporal-Frequency Terrestrial Laser Scanning, Selawik River, Alaska. Remote Sensing, 5:2813-2837.
Brodu N., Lague D., 2012. 3D Terrestrial LiDAR data classification of complex natural scenes using a multi-scale dimensionality criterion: applications in geomorphology. ISPRS Journal of Photogrammetry and Remote Sensing, 68:121-134.
Lague D., Brodu N., Leroux J., 2013. Accurate 3D comparison of complex topography with terrestrial laser scanner: application to the Rangitikei canyon (N-Z). ISPRS Journal of Photogrammetry and Remote Sensing, 80:10-26.
James L.A., Hodgson M.E., Ghoshal S., Latiolais M.M., 2012. Geomorphic change detection using historic maps and DEM differencing: the temporal dimension of geospatial analysis. Geomorphology, 137:181-198.
Nedelcu G., Borcan M., Branescu E., Petre C., Teleanu B., Preda A., Murafa R., 2011. Exceptional floods from the years 2008 and 2010 in the Siret river basin. Proceedings of the Annual Scientific Conference of the National Romanian Institute of Hydrology and Water Administration, 1-3 November 2011 (in Romanian).
Olariu P., Obreja F., Obreja I., 2009. Some aspects regarding the sediment transit from the Trotus catchment and the lower sector of the Siret river during the exceptional floods of 1991 and 2005. Annals of Stefan cel Mare University of Suceava, XVIII:93-104 (in Romanian).
Serbu M., Obreja F., Olariu P., 2009. The 2008 floods from the upper Siret catchment: causes, effects, evaluation. Hidrotechnics, 54(12):1-38 (in Romanian).
Wheaton J.M., Brasington J., Darby S., Sear D., 2009. Accounting for uncertainty in DEMs from repeat topographic surveys: improved sediment budgets. Earth Surface Processes and Landforms, 35(2):136-156.
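A minimal sketch of the uncertainty handling that motivates the Wheaton et al. (2009) methodology cited above, under the simplifying assumption of spatially uniform DEM errors (the function, sigmas, and grids are illustrative, not the authors' implementation):

```python
# Sketch: keep only DoD cells whose change exceeds the propagated DEM error.
import numpy as np

def thresholded_dod(dem_old, dem_new, sigma_old, sigma_new, t_crit=1.96):
    """Mask a DEM of Difference where change is not significant at roughly the 95% level."""
    dod = dem_new - dem_old
    sigma_dod = np.hypot(sigma_old, sigma_new)        # error of a difference of two surfaces
    significant = np.abs(dod) >= t_crit * sigma_dod
    return np.where(significant, dod, np.nan)         # insignificant cells excluded from budgets

# usage with stand-in 2008 (assumed sigma 0.15 m) and 2012 (assumed sigma 0.10 m) grids
dem_2008 = np.random.normal(10.0, 0.15, (100, 100))
dem_2012 = dem_2008 + np.random.normal(0.0, 0.10, (100, 100))
dem_2012[40:45, :] += 0.8                             # an aggraded bar, well above the threshold
dod = thresholded_dod(dem_2008, dem_2012, 0.15, 0.10)
```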
Al-Nawashi, Malek; Al-Hazaimeh, Obaida M; Saraee, Mohamad
2017-01-01
Abnormal activity detection plays a crucial role in surveillance applications, and a surveillance system that can perform robustly in an academic environment has become an urgent need. In this paper, we propose a novel framework for an automatic real-time video-based surveillance system which can simultaneously perform tracking, semantic scene learning, and abnormality detection in an academic environment. To develop our system, we divided the work into three phases: a preprocessing phase, an abnormal human activity detection phase, and a content-based image retrieval phase. For motion object detection, we used the temporal-differencing algorithm and then located the motion regions using a Gaussian function. Furthermore, a shape model based on the OMEGA equation was used as a filter for the detected objects (i.e., human and non-human). For object activity analysis, we evaluated and analyzed the activities of the detected objects, classifying them into two groups, normal and abnormal activities, using a support vector machine. The system then provides an automatic warning in case of abnormal human activities. It also embeds a method to retrieve the detected object from the database for object recognition and identification using content-based image retrieval. Finally, a software-based simulation using MATLAB was performed, and the results of the conducted experiments showed an excellent surveillance system that can simultaneously perform tracking, semantic scene learning, and abnormality detection in an academic environment with no human intervention.
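A minimal sketch of the temporal-differencing step described above (OpenCV in Python; the video filename, threshold, and blob-size values are hypothetical, and the Gaussian-based localization is reduced to simple smoothing plus contour extraction):

```python
# Sketch: consecutive-frame differencing, Gaussian smoothing, and blob localization.
import cv2

cap = cv2.VideoCapture("corridor.avi")            # hypothetical input sequence
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)                # temporal difference
    diff = cv2.GaussianBlur(diff, (5, 5), 0)      # suppress pixel-level noise
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    for c in contours:
        if cv2.contourArea(c) > 200:              # keep only plausibly human-sized blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    prev = gray
cap.release()
```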
Security Applications Of Computer Motion Detection
NASA Astrophysics Data System (ADS)
Bernat, Andrew P.; Nelan, Joseph; Riter, Stephen; Frankel, Harry
1987-05-01
An important area of application of computer vision is the detection of human motion in security systems. This paper describes the development of a computer vision system which can detect and track human movement across the international border between the United States and Mexico. Because of the wide range of environmental conditions, this application represents a stringent test of computer vision algorithms for motion detection and object identification. The desired output of this vision system is accurate, real-time locations for individual aliens and accurate statistical data as to the frequency of illegal border crossings. Because most detection and tracking routines assume rigid body motion, which is not characteristic of humans, new algorithms capable of reliable operation in our application are required. Furthermore, most current detection and tracking algorithms assume a uniform background against which motion is viewed; the urban environment along the US-Mexican border is anything but uniform. The system works in three stages: motion detection, object tracking, and object identification. We have implemented motion detection using simple frame differencing, maximum likelihood estimation, and mean and median tests, and are evaluating them for accuracy and computational efficiency. Due to the complex nature of the urban environment (background and foreground objects consisting of buildings, vegetation, vehicles, wind-blown debris, animals, etc.), motion detection alone is not sufficiently accurate. Object tracking and identification are handled by an expert system which takes shape, location, and trajectory information as input and determines if the moving object is indeed representative of an illegal border crossing.
NASA Technical Reports Server (NTRS)
Goad, Clyde C.; Chadwell, C. David
1993-01-01
GEODYNII is a conventional batch least-squares differential corrector computer program with deterministic models of the physical environment. Conventional algorithms were used to process differenced phase and pseudorange data to determine eight-day Global Positioning System (GPS) orbits with several-meter accuracy. However, random physical processes drive the errors whose magnitudes prevent improving the GPS orbit accuracy. To improve the orbit accuracy, these random processes should be modeled stochastically. The conventional batch least-squares algorithm cannot accommodate stochastic models; only a stochastic estimation algorithm, such as a sequential filter/smoother, is suitable. Also, GEODYNII cannot currently model the correlation among data values. Differenced pseudorange, and especially differenced phase, are precise data types that can be used to improve the GPS orbit precision. To overcome these limitations and improve the accuracy of GPS orbits computed using GEODYNII, we proposed to develop a sequential stochastic filter/smoother processor by using GEODYNII as a type of trajectory preprocessor. Our proposed processor is now completed. It contains a correlated double-difference range processing capability, first-order Gauss-Markov models for the solar radiation pressure scale coefficient and y-bias acceleration, and a random walk model for the tropospheric refraction correction. The development approach was to interface the standard GEODYNII output files (measurement partials and variationals) with software modules containing the stochastic estimator, the stochastic models, and a double-differenced phase range processing routine. Thus, no modifications to the original GEODYNII software were required. A schematic of the development is shown. The observational data are edited in the preprocessor and passed to GEODYNII as one of its standard data types. A reference orbit is determined using GEODYNII as a batch least-squares processor, and the GEODYNII measurement partial (FTN90) and variational (FTN80, V-matrix) files are generated. These two files, along with a control statement file and a satellite identification and mass file, are passed to the filter/smoother to estimate time-varying parameter states at each epoch, improved satellite initial elements, and improved estimates of constant parameters.
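As a brief illustration of the first-order Gauss-Markov models mentioned above, the sketch below (assumed correlation time and steady-state sigma, not values from the report) shows how such a parameter is propagated in a sequential filter; a random walk is the limiting case of an infinite correlation time.

```python
# Sketch: time update of a first-order Gauss-Markov state x with variance P.
import numpy as np

def gauss_markov_step(x, P, dt, tau, sigma):
    """x_{k+1} = exp(-dt/tau) x_k + w_k, with Var(w_k) = sigma^2 (1 - exp(-2 dt/tau))."""
    phi = np.exp(-dt / tau)
    q = sigma**2 * (1.0 - np.exp(-2.0 * dt / tau))
    return phi * x, phi**2 * P + q

# e.g. a solar-pressure scale factor with an assumed 1-day correlation time, 30 s steps
x, P = 1.0, 0.1**2
for _ in range(2880):                      # one day of 30 s updates
    x, P = gauss_markov_step(x, P, dt=30.0, tau=86400.0, sigma=0.05)
```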
A multisensor system for detection and characterization of UXO(MM-0437) - Demonstration Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gasperikova, Erika; Smith, J.T.; Morrison, H.F.
2006-06-01
The Berkeley UXO discriminator (BUD) (Figure 1) is a portable Active Electromagnetic (AEM) system for UXO detection and characterization that quickly determines the location, size, and symmetry properties of a suspected UXO. The BUD comprises three orthogonal transmitters that 'illuminate' a target with fields in three independent directions in order to stimulate the three polarization modes that, in general, characterize the target EM response. In addition, the BUD uses eight pairs of differenced receivers for response recording. Eight receiver coils are placed horizontally along the two diagonals of the upper and lower planes of the two horizontal transmitter loops. These receiver coil pairs are located on symmetry lines through the center of the system, and each pair sees identical fields during the on-time of the pulse in all of the transmitter coils. They are wired in opposition to produce zero output during the on-time of the pulses in the three orthogonal transmitters. Moreover, this configuration dramatically reduces noise in the measurements by canceling the background electromagnetic fields (these fields are uniform over the scale of the receiver array and are consequently nulled by the differencing operation) and the noise contributed by the tilt of the receivers in the Earth's magnetic field, and it greatly enhances receiver sensitivity to the gradients of the target response. The BUD performs target characterization from a single position of the sensor platform above a target. BUD was designed to detect and characterize UXO in the 20 mm to 155 mm size range at depths between 0 and 1 m. The relationship between the object size and the depth at which it can be detected is illustrated in Figure 2. This curve was calculated for BUD assuming that the receiver plane is 20 cm above the ground. Figure 2 shows that, for example, BUD can detect and characterize an object with a 10 cm diameter down to a depth of 90 cm with a depth uncertainty of 10%. Objects buried at depths greater than 1 m have a low probability of detection. With the existing algorithms in the system computer it is not possible to recover the principal polarizabilities of large objects close to the system. Detection of large shallow objects is assured, but at present real-time discrimination for shallow objects is not; post-processing of the field data is required for shape discrimination of large shallow targets. The next generation of BUD software will not have this limitation. Successful application of the inversion algorithm that solves for the target parameters is contingent upon resolution of this limitation. At the moment, interpretation software is developed for a single object only; in the case of multiple objects, the software indicates the presence of a cluster of objects but is unable to provide the characteristics of each individual object.
NASA Astrophysics Data System (ADS)
Hwang, C.; Cheng, Y. S.
2015-12-01
In most cases, mountain glaciers are narrow and situated on steep slopes. A laser altimeter such as ICESat has a small illuminated footprint of about 70 m, allowing it to measure precise elevations over narrow mountain glaciers. However, unlike a typical radar altimeter mission, ICESat does not have repeat ground tracks (except in its early phase) with which to measure heights of a specific point at different times. Within a time span, a reference digital elevation model is usually used to compute height anomalies at ICESat's measurement sites over a designated area, which are then averaged to produce a representative height change (anomaly) for this area. In contrast, a radar altimeter such as TOPEX/Poseidon (TP; its follow-on missions are Jason-1 and -2) repeats its ground tracks at an even time interval (10 days for TP), but has a larger illuminated footprint than ICESat's (about 1 km or larger), making it difficult to measure precise elevations over narrow mountain glaciers. Here we demonstrate the potential of the TP and Jason-2 radar altimeters for detecting elevation changes over mountain glaciers that are sufficiently wide and smooth. We select several glacier-covered sites in Mt. Tanggula (Tibet) and the Himalayas to experiment with methods that can generate precise height measurements from the two altimeters. Over the same spot, ranging errors due to slope, volume scattering, and radar penetration can be common between repeat cycles, and may be reduced by differencing successive heights. We retracked radar waveforms and classified the surfaces using SRTM-derived elevations. The effects of terrain and slope are reduced by fitting a surface to the height measurements from repeat cycles. We remove outlier heights and apply a smoothing filter to form the final time series of glacier elevation change at the selected sites, which are compared with the results from ICESat (note the different mission times). Because TP and Jason-2 measure height changes every 10 days, clear annual and inter-annual oscillations of glacier heights are present in the resulting time series, in contrast to the unevenly sampled height changes from ICESat, which do not show such oscillations. The rates of glacier elevation change from TP and Jason-2 are mostly negative, but vary with location and height.
NASA Technical Reports Server (NTRS)
Shuman, Christopher A.; Sigurdsson, Oddur; Williams, Richard, Jr.; Hall, Dorothy K.
2009-01-01
Located on the Vestfirdir (Northwest Fjords), Drangajokull is the northernmost ice cap in Iceland. Currently, the ice cap exceeds 900 m in elevation and covered an area of approx. 146 sq km in August 2004. It was about 204 sq km in area during 1913-1914 and so lost mass during the 20th century. Drangajokull's size and accessibility for GPS surveys, as well as the availability of repeat satellite altimetry profiles since late 2003, make it a good subject for change-detection analysis. The ice cap was surveyed by four GPS-equipped snowmobiles on 19-20 April 2005 and has been profiled in two places by Ice, Cloud, and land Elevation Satellite (ICESat) repeat tracks fifteen times from late 2003 to early 2009. In addition, traditional mass-balance measurements have been taken seasonally at a number of locations across the ice cap, and they show positive net mass balances in 2004/2005 through 2006/2007. Mean elevation differences between the temporally closest ICESat profiles and the GPS-derived digital elevation model (DEM) (ICESat - DEM) are about 1.1 m but have standard deviations of 3 to 4 m. Differencing all ICESat repeats from the DEM shows that the overall elevation difference trend since 2003 is negative, with losses of as much as 1.5 m/a from same-season-to-same-season (and similar elevation) data subsets. However, the mass balance assessments by traditional stake re-measurement methods suggest that the elevation changes where ICESat tracks 0046 and 0307 cross Drangajokull are not representative of the whole ice cap. Specifically, the area has experienced positive mass balance years during the time frame in which ICESat data indicate substantial losses. This analysis suggests that ICESat-derived elevations may be used for multi-year change detection relative to other data, but that large uncertainties remain. These uncertainties may be due to geolocation uncertainty on steep slopes and persistent cloud cover that limits temporal and spatial coverage across the area.
NASA Astrophysics Data System (ADS)
Liu, Yun; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Hui, Mei; Liu, Xiaohua; Wu, Yijian
2015-09-01
As an important branch of infrared imaging technology, infrared target tracking and detection has very important scientific value and a wide range of applications in both military and civilian areas. For infrared imagery, which is characterized by low SNR and strong background noise, an effective target detection algorithm based on OpenCV is proposed in this paper, exploiting the frame-to-frame correlation of a moving target and the lack of correlation of noise in sequential images. Firstly, since temporal differencing and background subtraction are highly complementary, we use a combined detection method of frame differencing and background subtraction based on adaptive background updating. Results indicate that it is simple and can stably extract the foreground moving target from the video sequence. Because the background updating mechanism continuously updates each pixel, the infrared moving target can be detected more accurately. This paves the way for real-time infrared target detection and tracking once the OpenCV algorithms are ported to a DSP platform. Afterwards, we use optimal thresholding to segment the image, transforming the gray images into binary images in order to provide a better basis for detection in the image sequences. Finally, using the correspondence of moving objects between frames and mathematical morphology processing, we eliminate noise, reduce spurious areas, and smooth region boundaries. Experimental results show that the algorithm achieves rapid detection of small infrared targets.
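A minimal sketch of the combined scheme described above (OpenCV in Python; the file name, learning rate, and structuring element are hypothetical choices, and Otsu's method stands in for the unspecified optimal-thresholding step):

```python
# Sketch: running-average background, fused frame/background differences, Otsu threshold, morphology.
import cv2
import numpy as np

cap = cv2.VideoCapture("ir_sequence.avi")                 # hypothetical infrared sequence
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
background = prev.astype(np.float32)
kernel = np.ones((3, 3), np.uint8)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    frame_diff = cv2.absdiff(gray, prev)                            # temporal differencing
    bg_diff = cv2.absdiff(gray, cv2.convertScaleAbs(background))    # background subtraction
    fused = cv2.bitwise_or(frame_diff, bg_diff)                     # the two cues are complementary
    _, mask = cv2.threshold(fused, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)           # remove isolated noise pixels
    cv2.accumulateWeighted(gray, background, 0.05)                  # adaptive per-pixel background update
    prev = gray
cap.release()
```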
Error reduction program: A progress report
NASA Technical Reports Server (NTRS)
Syed, S. A.
1984-01-01
Five finite-difference schemes were evaluated for minimum numerical diffusion in an effort to identify and incorporate the best error reduction scheme into a 3D combustor performance code. Based on this evaluation, two finite-volume schemes were selected for further study. Both the quadratic upstream differencing scheme (QUDS) and the bounded skew upstream differencing scheme two (BSUDS2) were coded into a two-dimensional computer code, and their accuracy and stability were determined by running several test cases. It was found that BSUDS2 was more stable than QUDS. It was also found that the accuracy of both schemes depends on the angle that the streamlines make with the mesh, with QUDS being more accurate at smaller angles and BSUDS2 more accurate at larger angles. The BSUDS2 scheme was selected for extension into three dimensions.
The study and realization of BDS un-differenced network-RTK based on raw observations
NASA Astrophysics Data System (ADS)
Tu, Rui; Zhang, Pengfei; Zhang, Rui; Lu, Cuixian; Liu, Jinhai; Lu, Xiaochun
2017-06-01
A BeiDou Navigation Satellite System (BDS) Un-Differenced (UD) Network Real-Time Kinematic (URTK) positioning algorithm, based on raw observations, is developed in this study. Given an integer ambiguity datum, the UD integer ambiguities can be recovered from Double-Differenced (DD) integer ambiguities; thus, the UD observation corrections can be calculated and interpolated for the rover station to achieve fast positioning. As this URTK model uses raw observations instead of ionospheric-free combinations, it is applicable for both dual- and single-frequency users of the URTK service. The algorithm was validated with experimental BDS data collected at four regional stations from day of year 080 to 083 in 2016. The results confirm the high efficiency of the proposed URTK in providing rover users with a rapid and precise positioning service compared to standard NRTK. In our test, the BDS URTK provides a positioning service with cm-level accuracy, i.e., 1 cm in the horizontal components and 2-3 cm in the vertical component. Within the regional network, the mean convergence time for users to fix the UD ambiguities is 2.7 s for dual-frequency observations and 6.3 s for single-frequency observations after the DD ambiguity resolution. Furthermore, because the URTK technology is realized under the UD processing mode, it is possible to integrate global Precise Point Positioning (PPP) and local NRTK into a seamless positioning service.
NASA Technical Reports Server (NTRS)
Folkner, W. M.; Border, J. S.; Nandi, S.; Zukor, K. S.
1993-01-01
A new radio metric positioning technique has demonstrated improved orbit determination accuracy for the Magellan and Pioneer Venus orbiters. The new technique, known as Same-Beam Interferometry (SBI), is applicable to the positioning of multiple planetary rovers, landers, and orbiters that may be observed simultaneously in the same beamwidth of Earth-based radio antennas. Measurements of carrier phase are differenced between spacecraft and between receiving stations to determine the plane-of-sky components of the separation vector(s) between the spacecraft. The SBI measurements complement the information contained in line-of-sight Doppler measurements, leading to improved orbit determination accuracy. Orbit determination solutions have been obtained for a number of 48-hour data arcs using combinations of Doppler, differenced-Doppler, and SBI data acquired in the spring of 1991. Orbit determination accuracy is assessed by comparing orbit solutions from adjacent data arcs. The orbit solution differences are shown to agree with expected orbit determination uncertainties. The results from this demonstration show that the orbit determination accuracy for Magellan obtained using Doppler plus SBI data is better than the accuracy achieved using Doppler plus differenced-Doppler by a factor of four, and better than the accuracy achieved using only Doppler by a factor of eighteen. The orbit determination accuracy for Pioneer Venus Orbiter using Doppler plus SBI data is better than the accuracy using only Doppler data by 30 percent.
Artificial dissipation and central difference schemes for the Euler and Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, Eli
1987-01-01
An artificial dissipation model, including boundary treatment, that is employed in many central difference schemes for solving the Euler and Navier-Stokes equations is discussed. Modifications of this model such as the eigenvalue scaling suggested by upwind differencing are examined. Multistage time stepping schemes with and without a multigrid method are used to investigate the effects of changes in the dissipation model on accuracy and convergence. Improved accuracy for inviscid and viscous airfoil flow is obtained with the modified eigenvalue scaling. Slower convergence rates are experienced with the multigrid method using such scaling. The rate of convergence is improved by applying a dissipation scaling function that depends on mesh cell aspect ratio.
Real-time optimizations for integrated smart network camera
NASA Astrophysics Data System (ADS)
Desurmont, Xavier; Lienard, Bruno; Meessen, Jerome; Delaigle, Jean-Francois
2005-02-01
We present an integrated real-time smart network camera. This system is composed of an image sensor, an embedded PC-based electronic card for image processing, and network capabilities. The application detects events of interest in visual scenes, highlights alarms, and computes statistics. The system also produces meta-data information that can be shared among other cameras in a network. We describe the requirements of such a system and then show how its design is optimized to process and compress video in real time. Indeed, typical video-surveillance algorithms such as background differencing, tracking, and event detection must be highly optimized and simplified to run on this hardware. To achieve a good match between hardware and software in this light embedded system, the software management is written on top of the Java-based middleware specification established by the OSGi Alliance. We can easily integrate software and hardware in complex environments thanks to the Java Real-Time specification for the virtual machine and several network- and service-oriented Java specifications (such as RMI and Jini). Finally, we report some outcomes and typical case studies of such a camera, such as counter-flow detection.
The Deep Lens Survey : Real--time Optical Transient and Moving Object Detection
NASA Astrophysics Data System (ADS)
Becker, Andy; Wittman, David; Stubbs, Chris; Dell'Antonio, Ian; Loomba, Dinesh; Schommer, Robert; Tyson, J. Anthony; Margoniner, Vera; DLS Collaboration
2001-12-01
We report on the real-time optical transient program of the Deep Lens Survey (DLS). Meeting the DLS core science weak-lensing objective requires repeated visits to the same part of the sky, 20 visits for 63 sub-fields in 4 filters, on a 4-m telescope. These data are reduced in real-time, and differenced against each other on all available timescales. Our observing strategy is optimized to allow sensitivity to transients on several minute, one day, one month, and one year timescales. The depth of the survey allows us to detect and classify both moving and stationary transients down to ~ 25th magnitude, a relatively unconstrained region of astronomical variability space. All transients and moving objects, including asteroids, Kuiper belt (or trans-Neptunian) objects, variable stars, supernovae, 'unknown' bursts with no apparent host, orphan gamma-ray burst afterglows, as well as airplanes, are posted on the web in real-time for use by the community. We emphasize our sensitivity to detect and respond in real-time to orphan afterglows of gamma-ray bursts, and present one candidate orphan in the field of Abell 1836. See http://dls.bell-labs.com/transients.html.
Enhanced ASTER DEMs for Decadal Measurements of Glacier Elevation Changes
NASA Astrophysics Data System (ADS)
Girod, L.; Nuth, C.; Kääb, A.
2016-12-01
Elevation change data are critical to understanding a number of geophysical processes, including glaciers through the measurement of their volume change. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) system on board the Terra (EOS AM-1) satellite has been a unique source of systematic stereoscopic images covering the whole globe at 15 m resolution and at a consistent quality for over 15 years. While satellite stereo sensors with significantly improved radiometric and spatial resolution are available today, the potential of ASTER data lies in its long, consistent time series, which is unrivaled, though not fully exploited for change analysis due to limited data accuracy and precision. ASTER data are strongly affected by attitude jitter, mainly of approximately 4 and 30 km wavelength, and improving the generation of ASTER DEMs requires removal of this effect. We developed MMASTER, an improved method for ASTER DEM generation, and implemented it in the open-source photogrammetric library and software suite MicMac. The method relies on the computation of a rational polynomial coefficient (RPC) model and the detection and correction of cross-track sensor jitter in order to compute DEMs. Our sensor modeling does not require ground control points and thus potentially allows for automatic processing of large data volumes. When compared to ground truth data, we assessed a ±5 m accuracy in DEM differencing when using our processing method, improved from ±30 m when using the AST14DMO DEM product. We demonstrate and discuss this improved ASTER DEM quality for a number of glaciers in Greenland (see attached figure), Alaska, and Svalbard. The quality of our measurements promises to further unlock the underused potential of ASTER DEMs for glacier volume change time series on a global scale. The data produced by our method will thus help to better understand the response of glaciers to climate change and their influence on runoff and sea level.
Earth orientation from lunar laser range-differencing. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Leick, A.
1978-01-01
For the optimal use of high precision lunar laser ranging (LLR), an investigation regarding a clear definition of the underlying coordinate systems, identification of estimable quantities, favorable station geometry and optimal observation schedule is given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGhee, J.M.; Roberts, R.M.; Morel, J.E.
1997-06-01
A spherical harmonics research code (DANTE) has been developed which is compatible with parallel computer architectures. DANTE provides 3-D, multi-material, deterministic transport capabilities using an arbitrary finite element mesh. The linearized Boltzmann transport equation is solved in a second-order self-adjoint form utilizing a Galerkin finite element spatial differencing scheme. The core solver utilizes a preconditioned conjugate gradient algorithm. Other distinguishing features of the code include options for discrete-ordinates and simplified spherical harmonics angular differencing, an exact Marshak boundary treatment for arbitrarily oriented boundary faces, in-line matrix construction techniques to minimize memory consumption, and an effective diffusion-based preconditioner for scattering-dominated problems. Algorithm efficiency is demonstrated for a massively parallel SIMD architecture (CM-5), and compatibility with MPP multiprocessor platforms or workstation clusters is anticipated.
NASA Astrophysics Data System (ADS)
Jiang, Mu-Sheng; Sun, Shi-Hai; Tang, Guang-Zhao; Ma, Xiang-Chun; Li, Chun-Yan; Liang, Lin-Mei
2013-12-01
Thanks to the high-speed self-differencing single-photon detector (SD-SPD), the secret key rate of quantum key distribution (QKD), which can, in principle, offer unconditionally secure private communication between two users (Alice and Bob), can exceed 1 Mbit/s. However, the SD-SPD may contain loopholes that can be exploited by an eavesdropper (Eve) to break the unconditional security of high-speed QKD systems. In this paper, we analyze how the SD-SPD can be remotely controlled by Eve in order to obtain full information without being discovered, and proof-of-principle experiments are demonstrated. We point out that this loophole is introduced directly by the operating principle of the SD-SPD and thus cannot be removed unless active countermeasures are applied by the legitimate parties.
NASA Technical Reports Server (NTRS)
Shih, T. I.-P.; Roelke, R. J.; Steinthorsson, E.
1991-01-01
A numerical code is developed for computing three-dimensional, turbulent, compressible flow within the coolant passages of turbine blades. The code is based on a formulation of the compressible Navier-Stokes equations in a rotating frame of reference, in which the velocity dependent variable is specified with respect to the rotating frame instead of the inertial frame. The algorithm employed to obtain solutions to the governing equations is a finite-volume LU algorithm that allows convection, source, and diffusion terms to be treated implicitly. In this study, all convection terms are upwind differenced using flux-vector splitting, and all diffusion terms are centrally differenced. This paper describes the formulation and algorithm employed in the code. Some computed solutions for the flow within a coolant passage of a radial turbine are also presented.
Shi, Junpeng; Hu, Guoping; Sun, Fenggang; Zong, Binfeng; Wang, Xin
2017-08-24
This paper proposes an improved spatial differencing (ISD) scheme for two-dimensional direction of arrival (2-D DOA) estimation of coherent signals with uniform rectangular arrays (URAs). We first divide the URA into a number of row rectangular subarrays. Then, by extracting all the data information of each subarray, we perform the difference operation only on the auto-correlations, while the cross-correlations are kept unchanged. Using the reconstructed submatrices, both the forward-only ISD (FO-ISD) and forward-backward ISD (FB-ISD) methods are developed under the proposed scheme. Compared with existing spatial smoothing techniques, the proposed scheme can use more data information from the sample covariance matrix and also suppress the effect of additive noise more effectively. Simulation results show that both FO-ISD and FB-ISD improve the estimation performance considerably compared with the other methods, in both white and colored noise conditions.
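For orientation, the sketch below shows the classical spatial-differencing step that schemes such as ISD build on: the array covariance is differenced with its exchanged conjugate, which cancels the persymmetric part contributed by uncorrelated sources and white noise and leaves structure due to coherent signals. This is an illustrative baseline only; the subarray partitioning and the separate treatment of auto- and cross-correlations in FO-/FB-ISD are not reproduced here, and the array parameters are hypothetical.

```python
# Sketch: classical spatial differencing D = R - J conj(R) J for a uniform linear array.
import numpy as np

def spatial_difference(R):
    """Difference an M x M covariance with its exchanged conjugate."""
    M = R.shape[0]
    J = np.fliplr(np.eye(M))                 # exchange (anti-identity) matrix
    return R - J @ np.conj(R) @ J

# toy covariance: two fully coherent plane waves on an 8-element half-wavelength ULA, plus noise
M, d = 8, 0.5
theta = np.deg2rad([10.0, 25.0])
A = np.exp(1j * 2.0 * np.pi * d * np.outer(np.arange(M), np.sin(theta)))
s = np.array([[1.0], [1.0]])                 # identical waveforms -> a rank-1 coherent pair
R = (A @ s) @ (A @ s).conj().T + 0.01 * np.eye(M)
D = spatial_difference(R)                    # coherent structure survives the differencing
```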
Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity
NASA Technical Reports Server (NTRS)
Baker, John G.; Van Meter, James R.
2005-01-01
A leading approach to improving the accuracy of numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh-refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite differencing error generated across low and high resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem, presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.
The terminal area simulation system. Volume 1: Theoretical formulation
NASA Technical Reports Server (NTRS)
Proctor, F. H.
1987-01-01
A three-dimensional numerical cloud model was developed for the general purpose of studying convective phenomena. The model utilizes a time-splitting integration procedure in the numerical solution of the compressible nonhydrostatic primitive equations. Turbulence closure is achieved by a conventional first-order diagnostic approximation. Open lateral boundaries are incorporated which minimize wave reflection and which do not induce domain-wide mass trends. Microphysical processes are governed by prognostic equations for potential temperature, water vapor, cloud droplets, ice crystals, rain, snow, and hail. Microphysical interactions are computed by numerous Orville-type parameterizations. A diagnostic surface boundary layer is parameterized assuming Monin-Obukhov similarity theory. The governing equation set is approximated on a staggered three-dimensional grid with quadratic-conservative central space differencing. Time differencing is approximated by the second-order Adams-Bashforth method. The vertical grid spacing may be either linear or stretched. The model domain may translate along with a convective cell, even at variable speeds.
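To illustrate the time and space differencing choices named above in a reduced setting, the sketch below applies second-order Adams-Bashforth time stepping with centered space differencing to a 1-D linear advection equation on a periodic grid (all parameters are arbitrary; this is not the model's 3-D equation set).

```python
# Sketch: u_t + c u_x = 0 with centered differences in space and Adams-Bashforth 2 in time.
import numpy as np

def rhs(u, c, dx):
    return -c * (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)   # centered space differencing

nx = 200
c, dx, dt = 1.0, 1.0 / nx, 0.002                                # CFL = c*dt/dx = 0.4
x = np.arange(nx) * dx
u = np.exp(-200.0 * (x - 0.5) ** 2)                             # smooth initial bump

f_old = rhs(u, c, dx)
u = u + dt * f_old                                              # startup step: forward Euler
for _ in range(500):
    f_new = rhs(u, c, dx)
    u, f_old = u + dt * (1.5 * f_new - 0.5 * f_old), f_new      # second-order Adams-Bashforth step
```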
Assessment of trend and seasonality in road accident data: an Iranian case study.
Razzaghi, Alireza; Bahrampour, Abbas; Baneshi, Mohammad Reza; Zolala, Farzaneh
2013-06-01
Road traffic accidents and their related deaths have become a major concern, particularly in developing countries. Iran has adopted a series of policies and interventions to control the high number of accidents occurring over the past few years. In this study we used a time series model to understand the trend of accidents and to ascertain the viability of applying ARIMA models to data from Taybad city. This is a cross-sectional study using data on accidents occurring in Taybad between 2007 and 2011. We obtained the data from the Ministry of Health (MOH) and used the time series method with a time lag of one month. After plotting the trend, non-stationarity in variance and mean was removed using a Box-Cox transformation and differencing, respectively. ACF and PACF plots were used to check stationarity. The traffic accidents in our study showed an increasing trend over the five years of study. Based on the ACF and PACF plots obtained after applying the Box-Cox transformation and differencing, the data did not fit a time series model; therefore, neither an ARIMA model nor seasonality was identified. Traffic accidents in Taybad have an upward trend. In addition, we expected either an AR, MA, or ARIMA model with a seasonal component, yet this was not observed in this analysis. Several reasons may have contributed to this situation, such as uncertainty about the quality of the data, weather changes, and behavioural factors that are not taken into account by time series analysis.
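A minimal sketch of that stationarity workflow in Python (scipy/statsmodels), using a stand-in monthly count series rather than the MOH data; the series, the +1 offset, and the lag count are illustrative assumptions.

```python
# Sketch: Box-Cox to stabilize variance, first differencing to remove trend, then ACF/PACF.
import numpy as np
from scipy import stats
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
counts = rng.poisson(lam=20, size=60) + np.arange(60) // 6   # stand-in monthly accident counts with a trend

transformed, lam = stats.boxcox(counts + 1)    # +1 keeps the series strictly positive
differenced = np.diff(transformed)             # removes the non-stationary mean (trend)

fig, axes = plt.subplots(2, 1, figsize=(6, 6))
plot_acf(differenced, ax=axes[0], lags=24)     # significant spikes at lag 12 would suggest seasonality
plot_pacf(differenced, ax=axes[1], lags=24)
plt.tight_layout()
plt.show()
```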
NASA Astrophysics Data System (ADS)
Otto, M.; Scherer, D.; Richters, J.
2011-05-01
High Altitude Wetlands of the Andes (HAWA) are a unique type of wetland within the semi-arid high Andean region. Knowledge about HAWA has been derived mainly from studies at single sites within different parts of the Andes over only short time scales. On the one hand, HAWA depend on water provided by glacier streams, snow melt, or precipitation. On the other hand, they are suspected to influence hydrology through water retention and vegetation growth altering stream flow velocity. We derived HAWA land cover from satellite data at regional scale and analysed changes in connection with precipitation over the last decade. Perennial and temporal HAWA subtypes can be distinguished by seasonal changes in photosynthetically active vegetation (PAV), indicating the perennial or temporal availability of water during the year. HAWA were delineated within a region of 12 800 km2 situated to the northwest of Lake Titicaca. The multi-temporal classification method used Normalized Difference Vegetation Index (NDVI) and Normalized Difference Infrared Index (NDII) data derived from two Landsat ETM+ scenes at the end of austral winter (September 2000) and at the end of austral summer (May 2001). The mapping result indicates an unexpectedly high abundance of HAWA, covering about 800 km2 of the study region (6%). Annual HAWA mapping was computed using NDVI 16-day composites of the Moderate Resolution Imaging Spectroradiometer (MODIS). The analysis of the relation between HAWA and precipitation was based on monthly precipitation data of the Tropical Rainfall Measuring Mission (TRMM 3B43) and MODIS Eight Day Maximum Snow Extent data (MOD10A2) from 2000 to 2010. We found HAWA subtype-specific dependencies on precipitation conditions. A strong relation exists between perennial HAWA and snowfall (r2: 0.82) in dry austral winter months (June to August) and between temporal HAWA and precipitation (r2: 0.75) during austral summer (March to May). Annual changes in the spatial extent of perennial HAWA indicate alterations in annual water supply generated from snow melt.
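A small sketch of the two indices used above, computed from hypothetical Landsat ETM+ reflectance arrays; the band-to-index mapping follows the common convention NDVI = (NIR - Red)/(NIR + Red) and NDII = (NIR - SWIR)/(NIR + SWIR), and the threshold is an illustrative assumption, not the study's classification rule.

```python
# Sketch: normalized difference indices from red, near-infrared and shortwave-infrared reflectance.
import numpy as np

def normalized_difference(a, b):
    denom = a + b
    out = np.full_like(denom, np.nan, dtype=float)
    np.divide(a - b, denom, out=out, where=denom != 0)   # avoid division by zero
    return out

# stand-in reflectance grids (ETM+ band 3 = red, band 4 = NIR, band 5 = SWIR)
red, nir, swir = (np.random.rand(100, 100) for _ in range(3))
ndvi = normalized_difference(nir, red)
ndii = normalized_difference(nir, swir)
pav_mask = ndvi > 0.3        # hypothetical threshold for photosynthetically active vegetation
```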
Estimation of bladder wall location in ultrasound images.
Topper, A K; Jernigan, M E
1991-05-01
A method of automatically estimating the location of the bladder wall in ultrasound images is proposed. Obtaining this estimate is intended to be the first stage in the development of an automatic bladder volume calculation system. The first step in the bladder wall estimation scheme involves globally processing the images using standard image processing techniques to highlight the bladder wall. Separate processing sequences are required to highlight the anterior bladder wall and the posterior bladder wall. The sequence to highlight the anterior bladder wall involves Gaussian smoothing and second differencing followed by zero-crossing detection. Median filtering followed by thresholding and gradient detection is used to highlight as much of the rest of the bladder wall as was visible in the original images. Then a 'bladder wall follower'--a line follower with rules based on the characteristics of ultrasound imaging and the anatomy involved--is applied to the processed images to estimate the bladder wall location by following the portions of the bladder wall which are highlighted and filling in the missing segments. The results achieved using this scheme are presented.
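A minimal sketch of the anterior-wall processing sequence named above (SciPy, hypothetical image array and smoothing width; the function name is illustrative): Gaussian smoothing, second differencing along each scan line, and zero-crossing detection.

```python
# Sketch: highlight candidate anterior-wall edges via second-difference zero crossings.
import numpy as np
from scipy import ndimage

def anterior_wall_candidates(image, sigma=3.0):
    """Return a boolean map of zero crossings of the axial second difference."""
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
    second_diff = np.diff(smoothed, n=2, axis=0)                # second differencing down each column
    zero_cross = second_diff[:-1, :] * second_diff[1:, :] < 0   # sign change between adjacent rows
    return zero_cross

# usage on a stand-in ultrasound frame
frame = np.random.rand(256, 256)
candidates = anterior_wall_candidates(frame)
```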
Tracking flow of leukocytes in blood for drug analysis
NASA Astrophysics Data System (ADS)
Basharat, Arslan; Turner, Wesley; Stephens, Gillian; Badillo, Benjamin; Lumpkin, Rick; Andre, Patrick; Perera, Amitha
2011-03-01
Modern microscopy techniques allow imaging of circulating blood components under vascular flow conditions. The resulting video sequences provide unique insights into the behavior of blood cells within the vasculature and can be used to monitor and quantitate the recruitment of inflammatory cells at sites of vascular injury/inflammation, potentially serving as a pharmacodynamic biomarker to help screen new therapies and individualize doses and combinations of drugs. However, manual analysis of these video sequences is intractable, requiring hours per 400-second video clip. In this paper, we present an automated technique to analyze the behavior and recruitment of human leukocytes in whole blood under physiological conditions of shear, in real time, through a simple multi-channel fluorescence microscope. This technique detects and tracks the recruitment of leukocytes to a bioactive surface coated on a flow chamber. Rolling cells (cells which partially bind to the bioactive matrix) are detected and counted, and their velocities are measured and graphed. The challenges here include high cell density, appearance similarity, and a low (1 Hz) frame rate. Our approach performs frame-differencing-based motion segmentation, track initialization, and online tracking of individual leukocytes.
Influence of flaps and engines on aircraft wake vortices
DOT National Transportation Integrated Search
1974-09-01
Although previous investigations have shown that the nature of aircraft wake vortices depends on the aircraft type and flap configuration, the causes for these differences have not been clearly identified. In this Note we show that observed differenc...
Major, Jon J.; Mosbrucker, Adam; Spicer, Kurt R.; Crisafulli, Charles; Dale, V.
2018-01-01
Exceptional sediment yields persist in Toutle River valley more than 30 years after the major 1980 eruption of Mount St. Helens. Differencing of decadal-scale digital elevation models shows the elevated load comes largely from persistent lateral channel erosion across the debris-avalanche deposit. Since the mid-1980s, rates of channel-bed-elevation change have diminished, and magnitudes of lateral erosion have outpaced those of channel incision. A digital elevation model of difference from 1999 to 2009 shows erosion across the debris-avalanche deposit is more spatially distributed compared to a model from 1987 to 1999, in which erosion was strongly focused along specific reaches of the channel.
Foulger, G.R.; Julian, B.R.; Pitt, A.M.; Hill, D.P.; Malin, P.E.; Shalev, E.
2003-01-01
A temporary network of 69 three-component seismic stations captured a major seismic sequence in Long Valley caldera in 1997. We performed a tomographic inversion for crustal structure beneath a 28 km × 16 km area encompassing part of the resurgent dome, the south moat, and Mammoth Mountain. Resolution of crustal structure beneath the center of the study volume was good down to ~3 km below sea level (~5 km below the surface). Relatively high wave speeds are associated with the Bishop Tuff and lower wave speeds characterize debris in the surrounding moat. A low-Vp/Vs anomaly extending from near the surface to ~1 km below sea level beneath Mammoth Mountain may represent a CO2 reservoir that is supplying CO2-rich springs, venting at the surface, and killing trees. We investigated temporal variations in structure beneath Mammoth Mountain by differencing our results with tomographic images obtained using data from 1989/1990. Significant changes in both Vp and Vs were consistent with the migration of CO2 into the upper 2 km or so beneath Mammoth Mountain and its depletion in peripheral volumes that correlate with surface venting areas. Repeat tomography is capable of detecting the migration of gas beneath active silicic volcanoes and may thus provide a useful volcano monitoring tool.
AN IMMERSED BOUNDARY METHOD FOR COMPLEX INCOMPRESSIBLE FLOWS
An immersed boundary method for time-dependent, three-dimensional, incompressible flows is presented in this paper. The incompressible Navier-Stokes equations are discretized using a low-diffusion flux splitting method for the inviscid fluxes and a second order central differenc...
NASA Technical Reports Server (NTRS)
Cess, R. D.; Zhang, M. H.; Potter, G. L.; Alekseev, V.; Barker, H. W.; Bony, S.; Colman, R. A.; Dazlich, D. A.; DelGenio, A. D.; Deque, M.;
1997-01-01
We compare seasonal changes in cloud-radiative forcing (CRF) at the top of the atmosphere from 18 atmospheric general circulation models, and observations from the Earth Radiation Budget Experiment (ERBE). To enhance the CRF signal and suppress interannual variability, we consider only zonal mean quantities for which the extreme months (January and July), as well as the northern and southern hemispheres, have been differenced. Since seasonal variations of the shortwave component of CRF are caused by seasonal changes in both cloudiness and solar irradiance, the latter was removed. In the ERBE data, seasonal changes in CRF are driven primarily by changes in cloud amount. The same conclusion applies to the models. The shortwave component of seasonal CRF is a measure of changes in cloud amount at all altitudes, while the longwave component is more a measure of upper level clouds. Thus important insights into seasonal cloud amount variations of the models have been obtained by comparing both components, as generated by the models, with the satellite data. For example, in 10 of the 18 models the seasonal oscillations of zonal cloud patterns extend too far poleward by one latitudinal grid.
An efficient iteration strategy for the solution of the Euler equations
NASA Technical Reports Server (NTRS)
Walters, R. W.; Dwoyer, D. L.
1985-01-01
A line Gauss-Seidel (LGS) relaxation algorithm in conjunction with a one-parameter family of upwind discretizations of the Euler equations in two-dimensions is described. The basic algorithm has the property that convergence to the steady-state is quadratic for fully supersonic flows and linear otherwise. This is in contrast to the block ADI methods (either central or upwind differenced) and the upwind biased relaxation schemes, all of which converge linearly, independent of the flow regime. Moreover, the algorithm presented here is easily enhanced to detect regions of subsonic flow embedded in supersonic flow. This allows marching by lines in the supersonic regions, converging each line quadratically, and iterating in the subsonic regions, thus yielding a very efficient iteration strategy. Numerical results are presented for two-dimensional supersonic and transonic flows containing both oblique and normal shock waves which confirm the efficiency of the iteration strategy.
Efficient solutions to the Euler equations for supersonic flow with embedded subsonic regions
NASA Technical Reports Server (NTRS)
Walters, Robert W.; Dwoyer, Douglas L.
1987-01-01
A line Gauss-Seidel (LGS) relaxation algorithm in conjunction with a one-parameter family of upwind discretizations of the Euler equations in two dimensions is described. Convergence of the basic algorithm to the steady state is quadratic for fully supersonic flows and is linear for other flows. This is in contrast to the block alternating direction implicit methods (either central or upwind differenced) and the upwind biased relaxation schemes, all of which converge linearly, independent of the flow regime. Moreover, the algorithm presented herein is easily coupled with methods to detect regions of subsonic flow embedded in supersonic flow. This allows marching by lines in the supersonic regions, converging each line quadratically, and iterating in the subsonic regions, and yields a very efficient iteration strategy. Numerical results are presented for two-dimensional supersonic and transonic flows containing oblique and normal shock waves which confirm the efficiency of the iteration strategy.
Second- and third-order upwind difference schemes for hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Yang, J. Y.
1984-01-01
Second- and third-order, two-time-level, five-point explicit upwind-difference schemes are described for the numerical solution of hyperbolic systems of conservation laws and applied to the Euler equations of inviscid gas dynamics. Nonlinear smoothing techniques are used to make the schemes total variation diminishing. In the method, both the hyperbolicity and conservation properties of the hyperbolic conservation laws are combined in a very natural way by introducing a normalized Jacobian matrix of the hyperbolic system. Entropy-satisfying shock transition operators which are consistent with the upwind differencing are locally introduced when a transonic shock transition is detected. Schemes thus constructed are suitable for shock-capturing calculations. The stability and the global order of accuracy of the proposed schemes are examined. Numerical experiments for the inviscid Burgers equation and the compressible Euler equations in one and two space dimensions, involving various situations of aerodynamic interest, are included and compared.
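As a point of reference for the upwind differencing idea, the sketch below advances the inviscid Burgers equation with a first-order conservative upwind (Godunov-type) flux on a periodic grid; the higher-order TVD constructions described in the abstract are not reproduced, and all grid parameters are arbitrary.

```python
# Sketch: first-order upwind/Godunov scheme for u_t + (u^2/2)_x = 0 on a periodic domain.
import numpy as np

def burgers_flux(ul, ur):
    """Godunov flux for the convex flux f(u) = u^2/2 (reduces to upwinding for smooth data)."""
    fl, fr = 0.5 * ul**2, 0.5 * ur**2
    shock = np.maximum(fl, fr)                                   # ul > ur: take the larger flux
    rarefaction = np.where(ul > 0, fl, np.where(ur < 0, fr, 0.0))
    return np.where(ul > ur, shock, rarefaction)

nx = 200
dx, dt = 1.0 / nx, 0.002                                         # CFL = max|u|*dt/dx = 0.4
x = (np.arange(nx) + 0.5) * dx
u = np.where(x < 0.5, 1.0, 0.0)                                  # step data: a right-moving shock forms
for _ in range(200):
    f = burgers_flux(u, np.roll(u, -1))                          # flux at each right cell face
    u = u - dt / dx * (f - np.roll(f, 1))                        # conservative update
```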
Monitoring of the permeable pavement demonstration site at Edison Environmental Center
The EPA’s Urban Watershed Management Branch has installed an instrumented, working full-scale 110-space pervious pavement parking lot and has been monitoring several environmental stressors and runoff. This parking lot demonstration site has allowed the investigation of differenc...
Completion of the National Land Cover Database (NLCD) 1992–2001 Land Cover Change Retrofit product
Fry, J.A.; Coan, Michael; Homer, Collin G.; Meyer, Debra K.; Wickham, J.D.
2009-01-01
The Multi-Resolution Land Characteristics Consortium has supported the development of two national digital land cover products: the National Land Cover Dataset (NLCD) 1992 and National Land Cover Database (NLCD) 2001. Substantial differences in imagery, legends, and methods between these two land cover products must be overcome in order to support direct comparison. The NLCD 1992-2001 Land Cover Change Retrofit product was developed to provide more accurate and useful land cover change data than would be possible by direct comparison of NLCD 1992 and NLCD 2001. For the change analysis method to be both national in scale and timely, implementation required production across many Landsat Thematic Mapper (TM) and Enhanced Thematic Mapper Plus (ETM+) path/rows simultaneously. To meet these requirements, a hybrid change analysis process was developed to incorporate both post-classification comparison and specialized ratio differencing change analysis techniques. At a resolution of 30 meters, the completed NLCD 1992-2001 Land Cover Change Retrofit product contains unchanged pixels from the NLCD 2001 land cover dataset that have been cross-walked to a modified Anderson Level I class code, and changed pixels labeled with a 'from-to' class code. Analysis of the results for the conterminous United States indicated that about 3 percent of the land cover dataset changed between 1992 and 2001.
NASA Astrophysics Data System (ADS)
Kwon, J.; Yang, H.
2006-12-01
Although GPS provides continuous and accurate position information, there is still room for improvement in its positional accuracy, especially in medium- and long-range baseline determination. In general, for baselines longer than 50 km, the ionospheric delay causes the largest degradation in positional accuracy. For example, the double-differenced ionospheric delay easily reaches 10 cm for a baseline length of 101 km. Therefore, many researchers have tried to mitigate or reduce this effect using various modeling methods. In this paper, the optimal stochastic modeling of the ionospheric delay as a function of baseline length is presented. The data processing has been performed by constructing a Kalman filter with states for positions, ambiguities, and the ionospheric delays in double-differenced mode. Considering the long baseline lengths, both double-differenced GPS phase and code observations are used as observables, and LAMBDA is applied to fix the ambiguities. Here, the ionospheric delay is stochastically modeled by the well-known Gaussian, first-order, and third-order Gauss-Markov processes. The parameters required in those models, such as correlation distance and time, are determined by least-squares adjustment using ionosphere-only observables. The results and analysis from this study show the effect of stochastic models of the ionospheric delay as a function of the baseline length, the models, and the parameters used. In the above example with a 101 km baseline length, it was found that the positional accuracy with appropriate ionospheric modeling (Gaussian) was about ±2 cm, whereas it reaches about ±15 cm with no stochastic modeling. It is expected that the approach in this study will contribute to improved positional accuracy, especially in medium- and long-range baseline determination.
Single-Receiver GPS Phase Bias Resolution
NASA Technical Reports Server (NTRS)
Bertiger, William I.; Haines, Bruce J.; Weiss, Jan P.; Harvey, Nathaniel E.
2010-01-01
Existing software has been modified to yield the benefits of integer-fixed double-differenced GPS phase ambiguities when processing data from a single GPS receiver with no access to any other GPS receiver data. When the double-differenced combination of phase biases can be fixed reliably, a significant improvement in solution accuracy is obtained. This innovation uses a large global set of GPS receivers (40 to 80 receivers) to solve for the GPS satellite orbits and clocks (along with any other parameters). In this process, integer ambiguities are fixed and information on the ambiguity constraints is saved. For each GPS transmitter/receiver pair, the process saves the arc start and stop times, the wide-lane average value for the arc, the standard deviation of the wide lane, and the dual-frequency phase bias after bias fixing for the arc. The second step of the process uses the orbit and clock information, the bias information from the global solution, and only data from the single receiver to resolve double-differenced phase combinations. It is called "resolved" instead of "fixed" because constraints are introduced into the problem with a finite data weight to better account for possible errors. A receiver in orbit has much shorter continuous passes of data than a receiver fixed to the Earth, and the method has parameters to account for this; in particular, differences in drifting wide-lane values must be handled differently. The first step of the process is automated, using two JPL software sets, Longarc and Gipsy-Oasis. The resulting orbit/clock and bias information files are posted on anonymous ftp for use by any licensed Gipsy-Oasis user. The second step is implemented in the Gipsy-Oasis executable, gd2p.pl, which automates the entire process, including fetching the information from anonymous ftp.
NASA Astrophysics Data System (ADS)
Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.
2010-10-01
Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
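To make the numerical point concrete, here is a minimal sketch comparing a first-order, explicit, fixed-step Euler integration of a hypothetical single linear reservoir against an adaptive-step explicit Runge-Kutta reference solution (SciPy's solve_ivp); the reservoir equation, recession constant, and rainfall forcing are invented for illustration and are not the model used in the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical single linear reservoir: dS/dt = p(t) - k*S (all values invented).
k = 0.2                                                   # recession constant [1/day]
p = lambda t: 5.0 * (np.sin(2 * np.pi * t / 30.0) > 0.9)  # pulsed "rainfall" [mm/day]

def dSdt(t, S):
    return [p(t) - k * S[0]]

t_end, S0 = 365.0, 10.0

# (a) first-order, explicit, fixed-step Euler with a daily step, the cheap
#     scheme whose artifacts the abstract describes
dt = 1.0
t_fix = np.arange(0.0, t_end + dt, dt)
S_fix = np.empty_like(t_fix)
S_fix[0] = S0
for i in range(1, len(t_fix)):
    S_fix[i] = S_fix[i - 1] + dt * dSdt(t_fix[i - 1], [S_fix[i - 1]])[0]

# (b) adaptive-step explicit Runge-Kutta reference solution
sol = solve_ivp(dSdt, (0.0, t_end), [S0], t_eval=t_fix, rtol=1e-8, atol=1e-10)

print("max |fixed-step Euler - adaptive RK| storage error:",
      np.max(np.abs(S_fix - sol.y[0])))
```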
Weak associations between the daily number of suicide cases and amount of daily sunlight.
Seregi, Bernadett; Kapitány, Balázs; Maróti-Agóts, Ákos; Rihmer, Zoltán; Gonda, Xénia; Döme, Péter
2017-02-06
Several environmental factors with periodic changes in intensity during the calendar year have been put forward to explain the increase in suicide frequency during spring and summer. In the current study we investigated the effect of averaged daily sunshine duration over periods of different lengths and 'lags' (i.e. the number of days between the last day of the period for which the averaged sunshine duration was calculated and the day of suicide) on suicide risk. We obtained data on daily numbers of suicide cases and daily sunshine duration in Hungary from 1979 to 2013. In order to remove the seasonal components from the two time series (i.e. numbers of suicides and sunshine hours) we used the differencing method. Pearson correlations (n=22,950) were calculated to reveal associations between sunshine duration and suicide risk. The final sample consisted of 122,116 suicide cases. Regarding the entire investigated period, after differencing, sunshine duration and the number of suicides on the same day showed a distinctly weak, but highly significant, positive correlation in the total sample (r=0.067; p=1.17x10^-13). Positive significant correlations (p<0.0001) between suicide risk on the index day and averaged sunshine duration in the previous days (up to 11 days) were also found in the total sample. Our results from a large sample strongly support the hypothesis that sunshine has a prompt but very weak increasing effect on the risk of suicide (especially violent cases among males). The main limitation is that possible confounding factors were not controlled for. Copyright © 2016 Elsevier Inc. All rights reserved.
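A minimal sketch of the differencing step described above, using synthetic daily series with a shared annual cycle in place of the Hungarian data: first-differencing both series removes the common seasonal component before the Pearson correlation is computed.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
days = np.arange(365 * 3)

# Hypothetical daily series sharing an annual cycle plus independent noise.
sunshine = 8 + 4 * np.sin(2 * np.pi * days / 365.25) + rng.normal(0, 1, days.size)
suicides = 10 + 2 * np.sin(2 * np.pi * days / 365.25) + rng.normal(0, 3, days.size)

# The raw correlation is inflated by the common seasonal component ...
r_raw, _ = pearsonr(sunshine, suicides)

# ... so remove seasonality by first-differencing both series before correlating.
r_diff, p_diff = pearsonr(np.diff(sunshine), np.diff(suicides))
print(f"raw r = {r_raw:.3f}, differenced r = {r_diff:.3f} (p = {p_diff:.3g})")
```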
Solving the Sea-Level Equation in an Explicit Time Differencing Scheme
NASA Astrophysics Data System (ADS)
Klemann, V.; Hagedoorn, J. M.; Thomas, M.
2016-12-01
In preparation for coupling the solid Earth to an ice-sheet compartment in an earth-system model, the dependency of initial topography on the ice-sheet history and viscosity structure has to be analysed. In this study, we discuss this dependency and how it influences the reconstruction of former sea level during a glacial cycle. The modelling is based on the VILMA code, in which the field equations are solved in the time domain applying an explicit time-differencing scheme. The sea-level equation is solved simultaneously in the same explicit scheme as the viscoelastic field equations (Hagedoorn et al., 2007). With the assumption of only small changes, we neglect the iterative solution at each time step as suggested by e.g. Kendall et al. (2005). Nevertheless, the prediction of the initial paleo topography in the case of moving coastlines remains to be iterated by repeated integration of the whole load history. The sensitivity study sketched at the beginning is accordingly motivated by the question of whether the iteration of the paleo topography can be replaced by a predefined one. This study is part of the German paleoclimate modelling initiative PalMod. References: Hagedoorn JM, Wolf D, Martinec Z, 2007. An estimate of global mean sea-level rise inferred from tide-gauge measurements using glacial-isostatic models consistent with the relative sea-level record. Pure Appl. Geophys. 164: 791-818, doi:10.1007/s00024-007-0186-7. Kendall RA, Mitrovica JX, Milne GA, 2005. On post-glacial sea level - II. Numerical formulation and comparative results on spherically symmetric models. Geophys. J. Int. 161: 679-706, doi:10.1111/j.1365-246X.2005.02553.x.
Forecast of Frost Days Based on Monthly Temperatures
NASA Astrophysics Data System (ADS)
Castellanos, M. T.; Tarquis, A. M.; Morató, M. C.; Saa-Requejo, A.
2009-04-01
Although frost can cause considerable crop damage and mitigation practices against forecasted frost exist, frost forecasting technologies have not changed for many years. The paper reports a new method to forecast the monthly number of frost days (FD) for several meteorological stations in the Community of Madrid (Spain), based on the successive application of two models. The first is a stochastic model, the autoregressive integrated moving average (ARIMA), which forecasts the monthly minimum absolute temperature (tmin) and the monthly average of minimum temperature (tminav) following the Box-Jenkins methodology. The second model relates these monthly temperatures to the distribution of minimum daily temperature during a month. Three ARIMA models were identified for the time series analyzed, with a seasonal period corresponding to one year. They share the same seasonal behavior (a differenced moving average model) and differ in the non-seasonal part: an autoregressive model (Model 1), a differenced moving average model (Model 2), and an autoregressive moving average model (Model 3). The results also indicate that the minimum daily temperature (tdmin) at the meteorological stations studied followed a normal distribution each month, with a very similar standard deviation across years. The standard deviation obtained for each station and each month could be used as a risk index for cold months. Applying Model 1 to predict minimum monthly temperatures gave the best FD forecast. This procedure provides a tool for crop managers and crop insurance companies to assess the risk of frost frequency and intensity, so that they can take steps to mitigate frost damage and estimate the losses that frost would cause. This research was supported by Comunidad de Madrid Research Project 076/92. The cooperation of the Spanish National Meteorological Institute and the Spanish Ministerio de Agricultura, Pesca y Alimentación (MAPA) is gratefully acknowledged.
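The two-step procedure can be sketched as follows, under loudly stated assumptions: the temperature series is synthetic, the seasonal ARIMA orders are illustrative rather than those identified in the paper, and the monthly standard deviation of daily minima is simply assumed.

```python
import numpy as np
import pandas as pd
from scipy.stats import norm
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical monthly series of average minimum temperature (deg C); in the
# study this would be an observed station record.
rng = np.random.default_rng(1)
months = pd.date_range("1980-01", periods=240, freq="MS")
tminav = pd.Series(5 - 8 * np.cos(2 * np.pi * months.month / 12)
                   + rng.normal(0, 1.5, months.size), index=months)

# Step 1: seasonal ARIMA forecast of next month's average minimum temperature
# (orders are illustrative, not those identified in the paper).
model = SARIMAX(tminav, order=(1, 0, 0), seasonal_order=(0, 1, 1, 12))
fit = model.fit(disp=False)
t_next = fit.get_forecast(steps=1).predicted_mean.iloc[0]

# Step 2: assume daily minima that month are Normal(t_next, sigma_month), with
# sigma_month estimated from historical daily data for that calendar month.
sigma_month = 3.0                       # assumed station/month-specific spread
p_frost = norm.cdf(0.0, loc=t_next, scale=sigma_month)
expected_frost_days = 30 * p_frost
print(f"forecast tminav = {t_next:.1f} C, expected frost days = {expected_frost_days:.1f}")
```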
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, S.; Gezari, S.; Heinis, S.
2015-03-20
We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time series in four Pan-STARRS1 photometric bands: g_P1, r_P1, i_P1, and z_P1. We use three deterministic light-curve models to fit BL transients (a Gaussian, a Gamma distribution, and an analytic supernova (SN) model) and one stochastic light-curve model, the Ornstein-Uhlenbeck process, in order to fit variability that is characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise, using their estimated leave-one-out cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm to these statistics to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, resulting in two measures of classification quality, from the averages across the photometric filters of (1) the classifications determined from the closest K-means cluster centers, and (2) the square distances from the clustering centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SV and BL occupy distinct regions in the plane constituted by these measures. We use our clustering method to characterize 4361 extragalactic image-difference detected sources, in the first 2.5 yr of the PS1 MDS, into 1529 BL and 2262 SV, with a purity of 95.00% for AGNs and 90.97% for SNe based on our verification sets. We combine our light-curve classifications with their nuclear or off-nuclear host galaxy offsets to define a robust photometric sample of 1233 AGNs and 812 SNe. With these two samples, we characterize their variability and host galaxy properties, and identify simple photometric priors that would enable their real-time identification in future wide-field synoptic surveys.
SINDA, Systems Improved Numerical Differencing Analyzer
NASA Technical Reports Server (NTRS)
Fink, L. C.; Pan, H. M. Y.; Ishimoto, T.
1972-01-01
A computer program has been written to analyze groups of 100-node areas and then provide for the summation of any number of 100-node areas to obtain a temperature profile. SINDA program options offer the user a variety of methods for the solution of thermal analog models presented in network format.
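As a rough illustration of the lumped-parameter network differencing that SINDA-style codes automate, the following sketch marches a tiny three-node conductor-capacitor network forward in time with an explicit update; the capacitances, conductances, and loads are arbitrary illustrative values, not a SINDA model.

```python
import numpy as np

# Minimal lumped thermal network: explicit forward-Euler update of node
# temperatures in a small conductor-capacitor network (illustrative values).
C = np.array([50.0, 80.0, 120.0])          # nodal capacitances [J/K]
G = np.array([[0.0, 2.0, 0.0],             # conductances between nodes [W/K]
              [2.0, 0.0, 1.5],
              [0.0, 1.5, 0.0]])
Q = np.array([10.0, 0.0, -5.0])            # applied heat loads [W]
T = np.array([300.0, 300.0, 300.0])        # initial temperatures [K]

dt = 1.0                                    # time step [s]
for _ in range(3600):                       # march one hour
    # net heat flow into each node from its neighbours plus sources
    dQ = (G * (T[None, :] - T[:, None])).sum(axis=1) + Q
    T = T + dt * dQ / C

print(T)
```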
NASA Astrophysics Data System (ADS)
He, Haizhen; Luo, Rongming; Hu, Zhenhua; Wen, Lei
2017-07-01
A current-mode field programmable analog array (FPAA) is presented in this paper. The proposed FPAA consists of 9 configurable analog blocks (CABs), which are based on current differencing transconductance amplifiers (CDTAs) and trans-impedance amplifiers (TIAs). The proposed CABs interconnect through global lines. These global lines contain bridge switches, which are used to reduce the parasitic capacitance effectively. High-order current-mode low-pass and band-pass filters with transmission zeros, based on the simulation of general passive RLC ladder prototypes, are proposed and mapped onto the FPAA structure in order to demonstrate its versatility. These filters exhibit good bandwidth performance; the cutoff frequency can be tuned from 1.2 MHz to 40 MHz. The proposed FPAA is simulated in a standard Chartered 0.18 μm CMOS process with a ±1.2 V power supply to confirm the presented theory, and the results agree well with the theoretical analysis.
NASA Technical Reports Server (NTRS)
Rudy, D. H.; Morris, D. J.
1976-01-01
An uncoupled, time-asymptotic, alternating direction implicit method for solving the Navier-Stokes equations was tested on two laminar parallel mixing flows. A constant total temperature was assumed in order to eliminate the need to solve the full energy equation; consequently, static temperature was evaluated using an algebraic relationship. For the mixing of two supersonic streams at a Reynolds number of 1,000, convergent solutions were obtained for a time step 5 times the maximum allowable size for an explicit method. The solution diverged for a time step 10 times the explicit limit. Improved convergence was obtained when upwind differencing was used for the convective terms. Larger time steps were not possible with either upwind differencing or the diagonally dominant scheme. Artificial viscosity was added to the continuity equation in order to eliminate divergence for the mixing of a subsonic stream with a supersonic stream at a Reynolds number of 1,000.
Method of resolving radio phase ambiguity in satellite orbit determination
NASA Technical Reports Server (NTRS)
Councelman, Charles C., III; Abbot, Richard I.
1989-01-01
For satellite orbit determination, the most accurate observable available today is microwave radio phase, which can be differenced between observing stations and between satellites to cancel both transmitter- and receiver-related errors. For maximum accuracy, the integer cycle ambiguities of the doubly differenced observations must be resolved. To perform this ambiguity resolution, a bootstrapping strategy is proposed. This strategy requires the tracking stations to have a wide ranging progression of spacings. By conventional 'integrated Doppler' processing of the observations from the most widely spaced stations, the orbits are determined well enough to permit resolution of the ambiguities for the most closely spaced stations. The resolution of these ambiguities reduces the uncertainty of the orbit determination enough to enable ambiguity resolution for more widely spaced stations, which further reduces the orbital uncertainty. In a test of this strategy with six tracking stations, both the formal and the true errors of determining Global Positioning System satellite orbits were reduced by a factor of 2.
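For reference, the doubly differenced observable exploited here is simply the between-station difference of between-satellite differences, as in the sketch below; the phase values and station/satellite indices are made up for illustration.

```python
import numpy as np

def double_difference(phase, sta_a, sta_b, sat_j, sat_k):
    """Form the doubly differenced carrier-phase observable between two
    stations (a, b) and two satellites (j, k). Transmitter- and receiver-clock
    errors cancel in this combination, and the integer ambiguity of the result
    is the corresponding combination of the four one-way integer ambiguities."""
    single_diff_j = phase[sta_a, sat_j] - phase[sta_b, sat_j]
    single_diff_k = phase[sta_a, sat_k] - phase[sta_b, sat_k]
    return single_diff_j - single_diff_k

# Illustrative 2-station x 2-satellite set of one-way phase measurements [cycles]
phase = np.array([[1204321.25, 987654.75],
                  [1204110.50, 987432.25]])
print(double_difference(phase, 0, 1, 0, 1))
```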
NASA Astrophysics Data System (ADS)
Wang, Xiaoqiang; Ju, Lili; Du, Qiang
2016-07-01
The Willmore flow formulated by phase field dynamics based on the elastic bending energy model has been widely used to describe the shape transformation of biological lipid vesicles. In this paper, we develop and investigate some efficient and stable numerical methods for simulating the unconstrained phase field Willmore dynamics and the phase field Willmore dynamics with fixed volume and surface area constraints. The proposed methods can be high-order accurate and are completely explicit in nature, by combining exponential time differencing Runge-Kutta approximations for time integration with spectral discretizations for spatial operators on regular meshes. We also incorporate novel linear operator splitting techniques into the numerical schemes to improve the discrete energy stability. In order to avoid extra numerical instability brought by use of large penalty parameters in solving the constrained phase field Willmore dynamics problem, a modified augmented Lagrange multiplier approach is proposed and adopted. Various numerical experiments are performed to demonstrate accuracy and stability of the proposed methods.
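The sketch below shows a first-order exponential time differencing (ETD1) step for a scalar stiff ODE split into a linear and a nonlinear part; it is a simplified stand-in for the higher-order exponential time differencing Runge-Kutta schemes used in the paper, and the test problem is invented.

```python
import numpy as np

# First-order exponential time differencing (ETD1) for du/dt = L*u + N(u),
# with the stiff linear part L treated exactly:
#   u_{n+1} = exp(L*h)*u_n + (exp(L*h) - 1)/L * N(u_n)
def etd1_step(u, L, N, h):
    eLh = np.exp(L * h)
    return eLh * u + (eLh - 1.0) / L * N(u)

# Illustrative test problem: L = -50 (stiff decay), N(u) = sin(u).
L = -50.0
N = np.sin
h = 0.05
u = 1.0
for _ in range(200):
    u = etd1_step(u, L, N, h)
print(u)   # approaches the fixed point where 50*u = sin(u), i.e. u -> 0
```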
Initial Evaluation of Signal-Based Bayesian Monitoring
NASA Astrophysics Data System (ADS)
Moore, D.; Russell, S.
2016-12-01
We present SIGVISA (Signal-based Vertically Integrated Seismic Analysis), a next-generation system for global seismic monitoring through Bayesian inference on seismic signals. Traditional seismic monitoring systems rely on discrete detections produced by station processing software, discarding significant information present in the original recorded signal. By modeling signals directly, our forward model is able to incorporate a rich representation of the physics underlying the signal generation process, including source mechanisms, wave propagation, and station response. This allows inference in the model to recover the qualitative behavior of geophysical methods including waveform matching and double-differencing, all as part of a unified Bayesian monitoring system that simultaneously detects and locates events from a network of stations. We report results from an evaluation of SIGVISA monitoring the western United States for a two-week period following the magnitude 6.0 event in Wells, NV in February 2008. During this period, SIGVISA detects more than twice as many events as NETVISA, and three times as many as SEL3, while operating at the same precision; at lower precisions it detects up to five times as many events as SEL3. At the same time, signal-based monitoring reduces mean location errors by a factor of four relative to detection-based systems. We provide evidence that, given only IMS data, SIGVISA detects events that are missed by regional monitoring networks, indicating that our evaluations may even underestimate its performance. Finally, SIGVISA matches or exceeds the detection rates of existing systems for de novo events - events with no nearby historical seismicity - and detects through automated processing a number of such events missed even by the human analysts generating the LEB.
An efficient method for solving the steady Euler equations
NASA Technical Reports Server (NTRS)
Liou, M.-S.
1986-01-01
An efficient numerical procedure for solving a set of nonlinear partial differential equations, the steady Euler equations, using Newton's linearization procedure is presented. A theorem indicating quadratic convergence for the case of differential equations is demonstrated. A condition for the domain of quadratic convergence Omega(2) is obtained which indicates that whether an approximation lies in Omega(2) depends on the rate of change and the smoothness of the flow vectors, and hence is problem-dependent. The choice of spatial differencing, of particular importance for the present method, is discussed. The treatment of boundary conditions is addressed, and the system of equations resulting from the foregoing analysis is summarized and solution strategies are discussed. The convergence of calculated solutions is demonstrated by comparing them with exact solutions to one and two-dimensional problems.
Interfacing a General Purpose Fluid Network Flow Program with the SINDA/G Thermal Analysis Program
NASA Technical Reports Server (NTRS)
Schallhorn, Paul; Popok, Daniel
1999-01-01
A general purpose, one dimensional fluid flow code is currently being interfaced with the thermal analysis program Systems Improved Numerical Differencing Analyzer/Gaski (SINDA/G). The flow code, Generalized Fluid System Simulation Program (GFSSP), is capable of analyzing steady state and transient flow in a complex network. The flow code is capable of modeling several physical phenomena including compressibility effects, phase changes, body forces (such as gravity and centrifugal) and mixture thermodynamics for multiple species. The addition of GFSSP to SINDA/G provides a significant improvement in convective heat transfer modeling for SINDA/G. The interface development is conducted in multiple phases. This paper describes the first phase of the interface which allows for steady and quasi-steady (unsteady solid, steady fluid) conjugate heat transfer modeling.
NASA Technical Reports Server (NTRS)
Pollack, James B.; Rind, David; Lacis, Andrew; Hansen, James E.; Sato, Makiko; Ruedy, Reto
1993-01-01
The response of the climate system to a temporally and spatially constant amount of volcanic particles is simulated using a general circulation model (GCM). The optical depth of the aerosols is chosen so as to produce approximately the same amount of forcing as results from doubling the present CO2 content of the atmosphere and from the boundary conditions associated with the peak of the last ice age. The climate changes produced by long-term volcanic aerosol forcing are obtained by differencing this simulation and one made for the present climate with no volcanic aerosol forcing. The simulations indicate that a significant cooling of the troposphere and surface can occur at times of closely spaced multiple sulfur-rich volcanic explosions that span time scales of decades to centuries. The steady-state climate response to volcanic forcing includes a large expansion of sea ice, especially in the Southern Hemisphere; a resultant large increase in surface and planetary albedo at high latitudes; and sizable changes in the annually and zonally averaged air temperature.
NASA Astrophysics Data System (ADS)
Murray, J. E.; Brindley, H. E.; Bryant, R. G.; Russell, J. E.; Jenkins, K. F.; Washington, R.
2016-09-01
A method is described to significantly enhance the signature of dust events using observations from the Spinning Enhanced Visible and InfraRed Imager (SEVIRI). The approach involves the derivation of a composite clear-sky signal for selected channels on an individual time-step and pixel basis. These composite signals are subtracted from each observation in the relevant channels to enhance weak transient signals associated with either (a) low levels of dust emission or (b) dust emissions with high salt or low quartz content. Different channel combinations of the differenced data from the steps above are then rendered in false-color imagery to improve identification of dust source locations and activity. We have applied this clear-sky difference (CSD) algorithm over three (globally significant) source regions in southern Africa: the Makgadikgadi Basin, Etosha Pan, and the Namibian and western South African coast. Case study analyses indicate three notable advantages of the CSD approach over established image rendering methods: (i) an improved ability to detect dust plumes, (ii) the observation of source activation earlier in the diurnal cycle, and (iii) an improved ability to resolve and pinpoint dust plume source locations.
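A highly simplified sketch of the differencing idea follows: build a per-pixel, per-time-step clear-sky composite and subtract it from each observation. Using the median over days as the composite is an assumption of this sketch, not the composite derivation used in the published algorithm.

```python
import numpy as np

def clear_sky_difference(cube):
    """Given a stack of brightness temperatures shaped (n_days, n_times, ny, nx)
    for one channel, build a per-pixel, per-time-step clear-sky composite and
    subtract it from each observation, enhancing weak transient (dust) signals.
    The median over days is used as the composite here for simplicity."""
    composite = np.nanmedian(cube, axis=0)            # (n_times, ny, nx)
    return cube - composite[None, ...]

# Tiny synthetic example: 10 days x 96 time slots x 4 x 4 pixels
rng = np.random.default_rng(2)
cube = 290 + rng.normal(0, 0.5, size=(10, 96, 4, 4))
cube[3, 40:48, 1, 2] -= 5.0                            # a transient "dust" dip
csd = clear_sky_difference(cube)
print(csd[3, 44, 1, 2])                                # strongly negative anomaly
```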
Howle, James F.; Alpers, Charles N.; Bawden, Gerald W.; Bond, Sandra
2016-07-28
High-resolution ground-based light detection and ranging (lidar), also known as terrestrial laser scanning, was used to quantify the volume of mercury-contaminated sediment eroded from a stream cutbank at Stocking Flat along Deer Creek in the Sierra Nevada foothills, about 3 kilometers west of Nevada City, California. Terrestrial laser scanning was used to collect sub-centimeter, three-dimensional images of the complex cutbank surface, which could not be mapped non-destructively or in sufficient detail with traditional surveying techniques. The stream cutbank, which is approximately 50 meters long and 8 meters high, was surveyed on four occasions: December 1, 2010; January 20, 2011; May 12, 2011; and February 4, 2013. Volumetric changes were determined between the sequential, three-dimensional lidar surveys. Volume was calculated by two methods, and the average value is reported. Between the first and second surveys (December 1, 2010, to January 20, 2011), a volume of 143 plus or minus 15 cubic meters of sediment was eroded from the cutbank and mobilized by Deer Creek. Between the second and third surveys (January 20, 2011, to May 12, 2011), a volume of 207 plus or minus 24 cubic meters of sediment was eroded from the cutbank and mobilized by the stream. Total volumetric change during the winter and spring of 2010–11 was 350 plus or minus 28 cubic meters. Between the third and fourth surveys (May 12, 2011, to February 4, 2013), the differencing of the three-dimensional lidar data indicated that a volume of 18 plus or minus 10 cubic meters of sediment was eroded from the cutbank. The total volume of sediment eroded from the cutbank between the first and fourth surveys was 368 plus or minus 30 cubic meters.
NASA Astrophysics Data System (ADS)
Goerlich, Franz; Paul, Frank; Bolch, Tobias
2017-04-01
The variable and often complex dynamics of the glaciers in High Mountain Asia have recently been studied intensively from satellite imagery. Time series of optical and SAR imagery revealed rapid changes and strong trends in glacier extent and surface flow velocities, as well as elevation changes from differencing of DEMs and altimetry sensors, over the 1990 to 2015 period. In contrast to nearly all other regions in the world, glaciers in the Karakoram in particular had balanced budgets and often advanced rapidly during surge events and retreated thereafter. This complicates the interpretation of climate change impacts on the glaciers in the region and leaves high uncertainties for the calculation of future glacier and run-off development. A key to an improved understanding of glacier dynamics in this region is an extension of the observation period. This can be achieved using Corona and Hexagon reconnaissance satellite imagery from the 1960s and 1970s, which provides a comparably high spatial resolution of between 2.7 and 7.6 m. The keyhole satellites thereby allow both determination of glacier extents and calculation of DTMs from stereo image pairs that can be used to determine geodetic volume/mass changes. The latter has already been performed on a regional scale for glaciers in the Himalaya and Tien Shan using Hexagon and Corona imagery with high accuracies. However, due to a particular camera model and complex distortion effects, which is especially the case for Corona images, the analysis is a challenging task. Therefore, we have developed a workflow to generate DTMs and orthophotos from Corona imagery that considers the complex camera model. This study will present the workflow with its limitations, challenges and the obtained accuracy over stable ground. With our generated DTMs and orthophotos, we have already calculated mass balances and length changes for the Ak-Shirak range in the Tien Shan and are currently adapting the workflow to the Karakoram and Pamir mountains. Furthermore, the DTMs help us to detect glaciers of surge-type behaviour and to reconstruct full surge cycles back to the early and mid 1960s.
Gangl, Markus; Ziefle, Andrea
2015-09-01
The authors investigate the relationship between family policy and women's attachment to the labor market, focusing specifically on policy feedback on women's subjective work commitment. They utilize a quasi-experimental design to identify normative policy effects from changes in mothers' work commitment in conjunction with two policy changes that significantly extended the length of statutory parental leave entitlements in Germany. Using unique survey data from the German Socio-Economic Panel and difference-in-differences, triple-differenced, and instrumental variables estimators for panel data, they obtain consistent empirical evidence that increasing generosity of leave entitlements led to a decline in mothers' work commitment in both East and West Germany. They also probe potential mediating mechanisms and find strong evidence for role exposure and norm-setting effects. Finally, they demonstrate that policy-induced shifts in mothers' preferences have contributed to retarding women's labor force participation after childbirth in Germany, especially as far as mothers' return to full-time employment is concerned.
NASA Astrophysics Data System (ADS)
Cavalli, Marco; Goldin, Beatrice; Comiti, Francesco; Brardinoni, Francesco; Marchi, Lorenzo
2017-08-01
Digital elevation models (DEMs) built from repeated topographic surveys permit producing DEM of Difference (DoD) that enables assessment of elevation variations and estimation of volumetric changes through time. In the framework of sediment transport studies, DEM differencing enables quantitative and spatially-distributed representation of erosion and deposition within the analyzed time window, at both the channel reach and the catchment scale. In this study, two high-resolution Digital Terrain Models (DTMs) derived from airborne LiDAR data (2 m resolution) acquired in 2005 and 2011 were used to characterize the topographic variations caused by sediment erosion, transport and deposition in two adjacent mountain basins (Gadria and Strimm, Vinschgau - Venosta valley, Eastern Alps, Italy). These catchments were chosen for their contrasting morphology and because they feature different types and intensity of sediment transfer processes. A method based on fuzzy logic, which takes into account spatially variable DTMs uncertainty, was used to derive the DoD of the study area. Volumes of erosion and deposition calculated from the DoD were then compared with post-event field surveys to test the consistency of two independent estimates. Results show an overall agreement between the estimates, with differences due to the intrinsic approximations of the two approaches. The consistency of DoD with post-event estimates encourages the integration of these two methods, whose combined application may permit to overcome the intrinsic limitations of the two estimations. The comparison between 2005 and 2011 DTMs allowed to investigate the relationships between topographic changes and geomorphometric parameters expressing the role of topography on sediment erosion and deposition (i.e., slope and contributing area) and describing the morphology influenced by debris flows and fluvial processes (i.e., curvature). Erosion and deposition relations in the slope-area space display substantial differences between the Gadria and the Strimm basins. While in the former erosion and deposition clusters are reasonably well discriminated, in the latter, characterized by a complex stepped structure, we observe substantial overlapping. Erosion mostly occurred in areas that show persistency of concavity or transformation from convex and flat to concave surfaces, whereas deposition prevailingly took place on convex morphologies. Less expected correspondences between curvature and topographic changes can be explained by the variable sediment transport processes, which are often characterized by alternation of erosion and deposition between different events and even during the same event.
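A simplified sketch of DEM differencing with an uncertainty threshold is shown below; it uses a uniform, propagated minimum level of detection at roughly 95% confidence rather than the spatially variable fuzzy-logic approach applied in the study, and all grid values are synthetic.

```python
import numpy as np

def dem_of_difference(dem_new, dem_old, sigma_new, sigma_old, k=1.96):
    """Difference two co-registered DEMs and mask cells whose change is below
    the propagated elevation uncertainty at ~95% confidence (a simple
    min-level-of-detection threshold)."""
    dod = dem_new - dem_old
    lod = k * np.sqrt(sigma_new**2 + sigma_old**2)     # propagated uncertainty
    return np.where(np.abs(dod) >= lod, dod, np.nan)

# Illustrative 2 m grid: volumes follow from cell area times thresholded change
cell_area = 2.0 * 2.0
rng = np.random.default_rng(3)
dem_2005 = rng.normal(1500, 100, size=(200, 200))
dem_2011 = dem_2005 + rng.normal(0, 0.05, size=(200, 200))
dem_2011[50:60, 50:60] -= 1.5                          # a patch of erosion
dod = dem_of_difference(dem_2011, dem_2005, 0.15, 0.15)
erosion_volume = -np.nansum(dod[dod < 0]) * cell_area
print(f"detected erosion volume: {erosion_volume:.1f} m^3")
```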
Lichen ecology and diversity of a sagebrush steppe in Oregon: 1977 to the present
USDA-ARS?s Scientific Manuscript database
A lichen checklist is presented of 141 species from the Lawrence Memorial Grassland Preserve and nearby lands in Wasco County, Oregon, based on collections made in the 1970s and 1990s. Collections include epiphytic, lignicolous, saxicolous, muscicolous and terricolous species. To evaluate differenc...
Interdependence of PRECIS Role Operators: A Quantitative Analysis of Their Associations.
ERIC Educational Resources Information Center
Mahapatra, Manoranjan; Biswas, Subal Chandra
1986-01-01
Analyzes associations among different role operators quantitatively by taking input strings from 200 abstracts, each related to subject fields of taxation, genetic psychology, and Shakespearean drama, and subjecting them to the Chi-square test. Significant associations by other differencing operators and connectives are discussed. A schema of role…
Electrocardiography (ECG) is one of the standard technologies used to monitor and assess cardiac function, and provide insight into the mechanisms driving myocardial pathology. Increased understanding of the effects of cardiovascular disease on rat ECG may help make ECG assessmen...
due to the dangers of utilizing convoy operations. However, enemy actions, austere conditions, and inclement weather pose a significant risk to a...squares temporal differencing for policy evaluation. We construct a representative problem instance based on an austere combat environment in order to
NASA Astrophysics Data System (ADS)
Otto, M.; Scherer, D.; Richters, J.
2011-01-01
High Altitude Wetlands of the Andes (HAWA) are a unique type of wetland within the semi-arid high Andean region. Knowledge about HAWA has been derived mainly from studies at single sites within different parts of the Andes and at only small time scales. On the one hand, HAWA depend on water provided by glacier streams, snow melt or precipitation. On the other hand, they are suspected to influence hydrology through water retention and vegetation growth altering stream flow velocity. We derived HAWA land cover from satellite data at regional scale and analysed changes in connection with precipitation over the last decade. Perennial and temporal HAWA subtypes can be distinguished by seasonal changes of photosynthetically active vegetation (PAV), indicating the perennial or temporal availability of water during the year. HAWA have been delineated within a region of 11 000 km2 situated northwest of Lake Titicaca. The multi-temporal classification method used Normalized Differenced Vegetation Index (NDVI) and Normalized Differenced Infrared Index (NDII) data derived from two Landsat ETM+ scenes at the end of austral winter (September 2000) and at the end of austral summer (May 2001). The mapping result indicates an unexpectedly high abundance of HAWA, covering about 800 km2 of the study region (6%). Annual HAWA mapping was computed using NDVI 16-day composites of the Moderate Resolution Imaging Spectroradiometer (MODIS). Analyses of the relation between HAWA and precipitation were based on monthly precipitation data of the Tropical Rainfall Measuring Mission (TRMM 3B43) and MODIS Eight Day Maximum Snow Extent data (MOD10A2) from 2000 to 2010. We found HAWA-subtype-specific dependencies on precipitation conditions. A strong relation exists between perennial HAWA and snowfall (r2: 0.82) in the dry austral winter months (June to August) and between temporal HAWA and precipitation (r2: 0.75) during austral summer (March to May). Annual spatial patterns of perennial HAWA indicated spatial alteration of the water supply for PAV of up to several hundred metres at a single HAWA site.
Larsen, C.F.; Motyka, R.J.; Arendt, A.A.; Echelmeyer, K.A.; Geissler, P.E.
2007-01-01
The digital elevation model (DEM) from the 2000 Shuttle Radar Topography Mission (SRTM) was differenced from a composite DEM based on air photos dating from 1948 to 1987 to determine glacier volume changes in southeast Alaska and adjoining Canada. SRTM accuracy was assessed at ±5 m through comparison with airborne laser altimetry and control locations measured with GPS. Glacier surface elevations lowered over 95% of the 14,580 km2 glacier-covered area analyzed, with some glaciers thinning as much as 640 m. A combination of factors have contributed to this wastage, including calving retreats of tidewater and lacustrine glaciers and climate change. Many glaciers in this region are particularly sensitive to climate change, as they have large areas at low elevations. However, several tidewater glaciers that had historically undergone calving retreats are now expanding and appear to be in the advancing stage of the tidewater glacier cycle. The net average rate of ice loss is estimated at 16.7 ± 4.4 km3/yr, equivalent to a global sea level rise contribution of 0.04 ± 0.01 mm/yr. Copyright 2007 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Croke, Jacky; Todd, Peter; Thompson, Chris; Watson, Fiona; Denham, Robert; Khanal, Giri
2013-02-01
Advances in remote sensing and digital terrain processing now allow for a sophisticated analysis of spatial and temporal changes in erosion and deposition. Digital elevation models (DEMs) can now be constructed and differenced to produce DEMs of Difference (DoD), which are used to assess net landscape change for morphological budgeting. To date this has been most effectively achieved in gravel-bed rivers over relatively small spatial scales. If the full potential of the technology is to be realised, additional studies are required at larger scales and across a wider range of geomorphic features. This study presents an assessment of the basin-scale spatial patterns of erosion, deposition, and net morphological change that resulted from a catastrophic flood event in the Lockyer Creek catchment of SE Queensland (SEQ) in January 2011. Multitemporal Light Detection and Ranging (LiDAR) DEMs were used to construct a DoD that was then combined with a one-dimensional flow hydraulic model HEC-RAS to delineate five major geomorphic landforms, including inner-channel area, within-channel benches, macrochannel banks, and floodplain. The LiDAR uncertainties were quantified and applied together with a probabilistic representation of uncertainty thresholded at a conservative 95% confidence interval. The elevation change distribution (ECD) for the 100-km2 study area indicates a magnitude of elevation change spanning almost 10 m but the mean elevation change of 0.04 m confirms that a large part of the landscape was characterised by relatively low magnitude changes over a large spatial area. Mean elevation changes varied by geomorphic feature and only two, the within-channel benches and macrochannel banks, were net erosional with an estimated combined loss of 1,815,149 m3 of sediment. The floodplain was the zone of major net deposition but mean elevation changes approached the defined critical limit of uncertainty. Areal and volumetric ECDs for this extreme event provide a representative expression of the balance between erosion and deposition, and importantly sediment redistribution, which is extremely difficult to quantify using more traditional channel planform or cross-sectional surveys. The ability of LiDAR to make a rapid and accurate assessment of key geomorphic processes over large spatial scales contributes to our understanding of key processes and, as demonstrated here, to the assessment of major geomorphological hazards such as extreme flood events.
Sensible heat receiver for solar dynamic space power system
NASA Astrophysics Data System (ADS)
Perez-Davis, Marla E.; Gaier, James R.; Petrefski, Chris
A sensible heat receiver is considered which uses a vapor grown carbon fiber-carbon (VGCF/C) composite as the thermal storage medium and which was designed for a 7-kW Brayton engine. This heat receiver stores the required energy to power the system during eclipse in the VGCF/C composite. The heat receiver thermal analysis was conducted through the Systems Improved Numerical Differencing Analyzer and Fluid Integrator (SINDA) software package. The sensible heat receiver compares well with other latent and advanced sensible heat receivers analyzed in other studies, while avoiding the problems associated with latent heat storage salts and liquid metal heat pipes. The concept also satisfies the design requirements for a 7-kW Brayton engine system. The weight and size of the system can be optimized by changes in geometry and technology advances for this new material.
Sensible heat receiver for solar dynamic space power system
NASA Technical Reports Server (NTRS)
Perez-Davis, Marla E.; Gaier, James R.; Petrefski, Chris
1991-01-01
A sensible heat receiver considered in this study uses a vapor grown carbon fiber-carbon (VGCF/C) composite as the thermal storage media and was designed for a 7 kW Brayton engine. The proposed heat receiver stores the required energy to power the system during eclipse in the VGCF/C composite. The heat receiver thermal analysis was conducted through the Systems Improved Numerical Differencing Analyzer and Fluid Integrator (SINDA) software package. The sensible heat receiver compares well with other latent and advanced sensible heat receivers analyzed in other studies while avoiding the problems associated with latent heat storage salts and liquid metal heat pipes. The concept also satisfies the design requirements for a 7 kW Brayton engine system. The weight and size of the system can be optimized by changes in geometry and technology advances for this new material.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bacon, D.P.
This review talk describes the OMEGA code, used for weather simulation and the modeling of aerosol transport through the atmosphere. OMEGA employs a 3D mesh of wedge-shaped elements (triangles when viewed from above) that adapt with time. Because the wedges are laid out in layers of triangular elements, the scheme can utilize structured storage and differencing techniques along the elevation coordinate, and is thus a hybrid of structured and unstructured methods. The utility of adaptive gridding in this model near geographic features such as coastlines, where material properties change discontinuously, is illustrated. Temporal adaptivity was used additionally to track moving internal fronts, such as clouds of aerosol contaminants. The author also discusses limitations specific to this problem, including manipulation of huge databases and fixed turn-around times. In practice, the latter requires a carefully tuned optimization between accuracy and computation speed.
Development of an efficient procedure for calculating the aerodynamic effects of planform variation
NASA Technical Reports Server (NTRS)
Mercer, J. E.; Geller, E. W.
1981-01-01
Numerical procedures to compute gradients in aerodynamic loading due to planform shape changes using panel method codes were studied. Two procedures were investigated: one computed the aerodynamic perturbation directly; the other computed the aerodynamic loading on the perturbed planform and on the base planform and then differenced these values to obtain the perturbation in loading. It is indicated that computing the perturbed values directly cannot be done satisfactorily without proper aerodynamic representation of the pressure singularity at the leading edge of a thin wing. For the alternative procedure, a technique was developed that saves most of the time-consuming computations from a panel method calculation for the base planform. Using this procedure, the perturbed loading can be calculated in about one-tenth the time of that for the base solution.
Fatty acid composition of intramuscular fat from pastoral yak and Tibetan sheep
USDA-ARS?s Scientific Manuscript database
Fatty acid (FA) composition of intramuscular fat from mature male yak (n=6) and mature Tibetan sheep (n=6) grazed on the same pasture in the Qinghai-Tibetan Plateau was analyzed by gas chromatograph/mass spectrometer to characterize fat composition of these species and to evaluate possible differenc...
NASA Astrophysics Data System (ADS)
Rojali, Salman, Afan Galih; George
2017-08-01
Along with the development of information technology to meet growing needs, adverse actions that are difficult to avoid are emerging. One such action is data theft. Therefore, this study discusses cryptography and steganography, which aim to overcome these problems. The study uses the Modified Vigenere Cipher, Least Significant Bit (LSB), and Dictionary Based Compression methods. To determine performance, the Peak Signal to Noise Ratio (PSNR) method is used for objective measurement and the Mean Opinion Score (MOS) method for subjective measurement; performance is also compared to other methods such as Spread Spectrum and Pixel Value Differencing. After this comparison, it can be concluded that the proposed approach provides better performance than the other methods (Spread Spectrum and Pixel Value Differencing), with MSE values in the range 0.0191622-0.05275 and PSNR in the range 60.909-65.306 for a hidden file size of 18 kB, and MOS values in the range 4.214-4.722, i.e., image quality approaching very good.
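A bare-bones sketch of the LSB embedding and the PSNR evaluation referred to above is given below; it omits the Vigenere encryption and dictionary-based compression stages of the study and uses a synthetic cover image.

```python
import numpy as np

def lsb_embed(cover, message_bits):
    """Embed a bit sequence into the least significant bits of a flattened
    8-bit cover image (plain LSB substitution)."""
    flat = cover.flatten().astype(np.uint8)
    if len(message_bits) > flat.size:
        raise ValueError("message too long for cover image")
    flat[:len(message_bits)] = (flat[:len(message_bits)] & 0xFE) | message_bits
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    return stego.flatten()[:n_bits] & 0x01

rng = np.random.default_rng(4)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
bits = np.unpackbits(np.frombuffer(b"secret", dtype=np.uint8))   # 48 message bits
stego = lsb_embed(cover, bits)
assert bytes(np.packbits(lsb_extract(stego, bits.size))) == b"secret"

# PSNR of the stego image relative to the cover, as used in the evaluation
mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
print("PSNR [dB]:", 10 * np.log10(255**2 / mse))
```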
NASA Technical Reports Server (NTRS)
Rawson, R. F.; Hamilton, R. E.; Liskow, C. L.; Dias, A. R.; Jackson, P. L.
1981-01-01
An analysis of synthetic aperture radar data of SP Mountain was undertaken to demonstrate the use of digital image processing techniques to aid in geologic interpretation of SAR data. These data were collected with the ERIM X- and L-band airborne SAR using like- and cross-polarizations. The resulting signal films were used to produce computer compatible tapes, from which four-channel imagery was generated. Slant range-to-ground range and range-azimuth-scale corrections were made in order to facilitate image registration; intensity corrections were also made. Manual interpretation of the imagery showed that L-band represented the geology of the area better than X-band. Several differences between the various images were also noted. Further digital analysis of the corrected data was done for enhancement purposes. This analysis included application of an MSS differencing routine and development of a routine for removal of relief displacement. It was found that accurate registration of the SAR channels is critical to the effectiveness of the differencing routine. Use of the relief displacement algorithm on the SP Mountain data demonstrated the feasibility of the technique.
NASA Technical Reports Server (NTRS)
DeBonis, James R.
2013-01-01
A computational fluid dynamics code that solves the compressible Navier-Stokes equations was applied to the Taylor-Green vortex problem to examine the code's ability to accurately simulate the vortex decay and subsequent turbulence. The code, WRLES (Wave Resolving Large-Eddy Simulation), uses explicit central differencing to compute the spatial derivatives and explicit Low Dispersion Runge-Kutta methods for the temporal discretization. The flow was first studied and characterized using Bogey & Bailly's 13-point dispersion relation preserving (DRP) scheme. The kinetic energy dissipation rate, computed both directly and from the enstrophy field, vorticity contours, and the energy spectra are examined. Results are in excellent agreement with a reference solution obtained using a spectral method and provide insight into computations of turbulent flows. In addition, the following studies were performed: a comparison of 4th-, 8th-, 12th-order and DRP spatial differencing schemes, the effect of solution filtering on the results, the effect of large-eddy simulation sub-grid scale models, and the effect of high-order discretization of the viscous terms.
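For context, an explicit central-differencing stencil of the kind compared in the study looks like the fourth-order example below; this is a generic textbook stencil on a periodic 1-D grid, not code from WRLES.

```python
import numpy as np

def central_diff_4th(u, dx):
    """Fourth-order central difference of a periodic 1-D field:
    u'(x_i) ~ (-u_{i+2} + 8 u_{i+1} - 8 u_{i-1} + u_{i-2}) / (12 dx)."""
    return (-np.roll(u, -2) + 8 * np.roll(u, -1)
            - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12 * dx)

# Check the order of accuracy on u = sin(x) over a periodic domain.
for n in (32, 64):
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    dx = x[1] - x[0]
    err = np.max(np.abs(central_diff_4th(np.sin(x), dx) - np.cos(x)))
    print(n, err)   # the error drops by ~16x when the grid is refined by 2x
```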
NASA Technical Reports Server (NTRS)
Yung, Chain Nan
1988-01-01
A method for predicting turbulent flow in combustors and diffusers is developed. The Navier-Stokes equations, incorporating a turbulence kappa-epsilon model equation, were solved in a nonorthogonal curvilinear coordinate system. The solution applied the finite volume method to discretize the differential equations and utilized the SIMPLE algorithm iteratively to solve the differenced equations. A zonal grid method, wherein the flow field was divided into several subsections, was developed. This approach permitted different computational schemes to be used in the various zones. In addition, grid generation was made a more simple task. However, treatment of the zonal boundaries required special handling. Boundary overlap and interpolating techniques were used and an adjustment of the flow variables was required to assure conservation of mass, momentum and energy fluxes. The numerical accuracy was assessed using different finite differencing methods, i.e., hybrid, quadratic upwind and skew upwind, to represent the convection terms. Flows in different geometries of combustors and diffusers were simulated and results compared with experimental data and good agreement was obtained.
High Performance Radiation Transport Simulations on TITAN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Christopher G; Davidson, Gregory G; Evans, Thomas M
2012-01-01
In this paper we describe the Denovo code system. Denovo solves the six-dimensional, steady-state, linear Boltzmann transport equation, of central importance to nuclear technology applications such as reactor core analysis (neutronics), radiation shielding, nuclear forensics and radiation detection. The code features multiple spatial differencing schemes, state-of-the-art linear solvers, the Koch-Baker-Alcouffe (KBA) parallel-wavefront sweep algorithm for inverting the transport operator, a new multilevel energy decomposition method scaling to hundreds of thousands of processing cores, and a modern, novel code architecture that supports straightforward integration of new features. In this paper we discuss the performance of Denovo on the 10-20 petaflop ORNL GPU-based system, Titan. We describe algorithms and techniques used to exploit the capabilities of Titan's heterogeneous compute node architecture and the challenges of obtaining good parallel performance for this sparse hyperbolic PDE solver containing inherently sequential computations. Numerical results demonstrating Denovo performance on early Titan hardware are presented.
GALEX Study of the UV Variability of Nearby Galaxies and a Deep Probe of the UV Luminosity Function
NASA Technical Reports Server (NTRS)
Schlegel, Eric
2005-01-01
The proposal has two aims: a deep exposure of NGC 300, about a factor of 10 deeper than the GALEX all-sky survey, and an examination of its UV variability. The data were received just prior to a series of proposal deadlines in early spring. A subsequent analysis delay resulted from a move from SAO to the University of Texas at San Antonio. Nevertheless, we have merged the data into a single deep exposure and undertaken a preliminary examination of the variability. No UV halo is present, in contrast to the one detected in the GALEX observation of M83. No UV bursts are visible; however, a more stringent limit will only be obtained through differencing of the sub-images. Papers: we expect two papers of about 12 pages each to flow from this project. The first paper will report on the time variability, while the second will focus on the deep UV image obtained from stacking the individual observations.
Development of a Near-Real Time Hail Damage Swath Identification Algorithm for Vegetation
NASA Technical Reports Server (NTRS)
Bell, Jordan R.; Molthan, Andrew L.; Schultz, Lori A.; McGrath, Kevin M.; Burks, Jason E.
2015-01-01
The Midwest is home to one of the world's largest agricultural growing regions. Between late May and early September, with irrigation and seasonal rainfall, these crops are able to reach full maturity. Using moderate- to high-resolution remote sensors, vegetation can be monitored in the red and near-infrared wavelengths. These wavelengths allow for the calculation of vegetation indices, such as the Normalized Difference Vegetation Index (NDVI). Vegetation growth and greenness in this region evolve fairly uniformly as the growing season progresses. One of the biggest threats to Midwest vegetation during this period, however, is thunderstorms that bring large hail and damaging winds. Hail and wind damage to crops can be very expensive to growers, and damage can be spread over long swaths along the tracks of the damaging storms. Damage to the vegetation is apparent in remotely sensed imagery and visible from space: changes can appear slowly over time as slightly damaged crops wilt, or more readily if the storms strip material from the crops or destroy them completely. Previous work identifying these hail damage swaths relied on manual interpretation of moderate- and higher-resolution satellite imagery. An automated, near-real-time hail damage swath identification algorithm allows detection to be improved and more damage indicators to be created faster and more efficiently. The automated detection of hail damage swaths examines short-term, large changes in vegetation by differencing near-real-time eight-day NDVI composites against post-storm imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard Terra and Aqua and the Visible Infrared Imaging Radiometer Suite (VIIRS) aboard Suomi NPP. Land surface temperatures from these instruments are also examined for hail damage swath identification. Initial validation of the automated algorithm is based on Storm Prediction Center storm reports as well as the National Severe Storms Laboratory (NSSL) Maximum Estimated Size of Hail (MESH) product. Opportunities for future work are also shown, focusing on expanding this algorithm with pixel-based image classification techniques for tracking surface changes resulting from severe weather.
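A toy sketch of the NDVI differencing step is shown below; the band reflectances, the 0.15 NDVI drop threshold, and the damaged swath are all invented for illustration and do not reflect the operational algorithm's thresholds.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index from red and near-infrared bands."""
    return (nir - red) / (nir + red + 1e-9)

def hail_damage_candidates(ndvi_before, ndvi_after, drop_threshold=0.15):
    """Flag pixels whose NDVI dropped sharply between a pre-storm composite and
    a post-storm image (the 0.15 drop threshold is illustrative)."""
    return (ndvi_after - ndvi_before) < -drop_threshold

# Tiny synthetic scene standing in for 8-day composites: healthy crops have
# low red and high NIR reflectance; a hail-stripped swath loses that contrast.
red_pre, nir_pre = np.full((100, 100), 0.05), np.full((100, 100), 0.45)
red_post, nir_post = red_pre.copy(), nir_pre.copy()
red_post[40:45, :], nir_post[40:45, :] = 0.20, 0.30     # damaged swath
mask = hail_damage_candidates(ndvi(red_pre, nir_pre), ndvi(red_post, nir_post))
print("flagged pixels:", int(mask.sum()))               # the 5 x 100 swath
```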
4D monitoring of actively failing rockslopes
NASA Astrophysics Data System (ADS)
Rosser, Nick; Williams, Jack; Hardy, Richard; Brain, Matthew
2017-04-01
Assessing the conditions which promote rockfall relies upon detailed monitoring, ideally before, during and immediately after failure. With standard repeat surveys it is common that surveys do not coincide with or capture precursors, or that surveys are widely spaced relative to the timing and duration of driving forces such as storms. As a result, gaining insight into the controls on failure and the timescales over which precursors operate remains difficult to establish with certainty, and direct links between environmental conditions and rockfalls, or sequences of events prior to rockfall, remain difficult to define. To address this, we present analysis of a high-frequency 3D laser scan dataset captured using a new permanently installed system developed to constantly monitor actively failing rock slopes. The system is based around a time-of-flight laser scanner, integrated with and remotely controlled by dedicated control and analysis software. The system is configured to capture data at 0.1 m spacing across >22,000 m3 at up to 30-minute intervals. Here we present results captured with this system over a period of 9 months, spanning spring to winter 2015. Our analysis is focussed upon improving the understanding of the nature of small (<1 m3) rockfalls from near-vertical rock cliffs. We focus here on the development of a set of algorithms for differencing that trade off the temporal resolution of frequent (hourly) surveys against high spatial resolution point clouds (<0.05 m) to enhance the precision of change detection, allowing both deformation and detachments to be monitored through time. From this dataset we derive rockfall volume-frequency distributions based upon short-interval surveys, and identify the presence and/or absence of precursors, in what we believe to be the first constant volumetric measurement of rock face erosion. The results hold implications for understanding of rockfall mechanics, but also for how actively eroding surfaces can be monitored at high temporal frequency. Whilst high-frequency data are ideal for describing processes that evolve rapidly through time, the cumulative errors that accumulate when monitored changes are dominated by inverse power-law distributed volumes are significant. To conclude, we consider the benefits of defining survey frequency on the basis of the changes being detected relative to the accumulation of errors that inevitably arises when comparing high numbers of sequential surveys.
On the effects of signal processing on sample entropy for postural control.
Lubetzky, Anat V; Harel, Daphna; Lubetzky, Eyal
2018-01-01
Sample entropy, a measure of time series regularity, has become increasingly popular in postural control research. We are developing a virtual reality assessment of sensory integration for postural control in people with vestibular dysfunction and wished to apply sample entropy as an outcome measure. However, despite the common use of sample entropy to quantify postural sway, we found a lack of consistency in the literature regarding center-of-pressure signal manipulations prior to the computation of sample entropy. We therefore wished to investigate the effect of parameter choice and signal processing on participants' sample entropy outcome. For that purpose, we compared center-of-pressure sample entropy data between patients with vestibular dysfunction and age-matched controls. Within our assessment, participants observed virtual reality scenes while standing on the floor or on a compliant surface. We then analyzed the effects of: modifying the radius of similarity (r) and the embedding dimension (m); down-sampling or filtering; and differencing or detrending. When analyzing the raw center-of-pressure data, we found a significant main effect of surface in medio-lateral and anterior-posterior directions across r's and m's. We also found a significant interaction group × surface in the medio-lateral direction when r was 0.05 or 0.1, with a monotonic increase in p value with increasing r in both m's. These effects were maintained with down-sampling by 2, 3, and 4 and with detrending, but not with filtering and differencing. Based on these findings, we suggest that for sample entropy to be compared across postural control studies, there needs to be increased consistency, particularly in signal handling prior to the calculation of sample entropy. Procedures such as filtering, differencing or detrending affect sample entropy values and could artificially alter the time series pattern. Therefore, if such procedures are performed, they should be well justified.
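For reference, a minimal NumPy implementation of the standard sample entropy definition (embedding dimension m, tolerance r expressed as a fraction of the series standard deviation) is sketched below; it is not the authors' processing pipeline.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1-D series x with embedding dimension m and tolerance r*std(x)."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    n = len(x)

    def count_matches(dim):
        # Build n - m overlapping templates of length `dim` (the same number of
        # templates for both lengths, as in the standard definition) and count
        # pairs whose Chebyshev distance is within tolerance (no self-matches).
        templates = np.array([x[i:i + dim] for i in range(n - m)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= tol)
        return count

    b = count_matches(m)        # template matches of length m
    a = count_matches(m + 1)    # template matches of length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# Example: white noise is more irregular than a sine wave, so its sample entropy is higher
t = np.linspace(0, 10, 1000)
print(sample_entropy(np.sin(2 * np.pi * t)))                               # low (regular)
print(sample_entropy(np.random.default_rng(0).standard_normal(1000)))      # higher (irregular)
```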
Performance Assessment of Two GPS Receivers on Space Shuttle
NASA Technical Reports Server (NTRS)
Schroeder, Christine A.; Schutz, Bob E.
1996-01-01
Space Shuttle STS-69 was launched on September 7, 1995, carrying the Wake Shield Facility (WSF-02) among its payloads. The mission included two GPS receivers: a Collins 3M receiver onboard the Endeavour and an Osborne flight TurboRogue, known as the TurboStar, onboard the WSF-02. Two of the WSF-02 GPS Experiment objectives were to: (1) assess the ability to use GPS in a relative satellite positioning mode using the receivers on Endeavour and WSF-02; and (2) assess the performance of the receivers to support high precision orbit determination at the 400 km altitude. Three ground tests of the receivers were conducted in order to characterize the respective receivers. The analysis of the tests utilized the Double Differencing technique. A similar test in orbit was conducted during STS-69 while the WSF-02 was held by the Endeavour robot arm for a one hour period. In these tests, biases were observed in the double difference pseudorange measurements, implying that biases up to 140 m exist which do not cancel in double differencing. These biases appear to exist in the Collins receiver, but their effect can be mitigated by including measurement bias parameters to accommodate them in an estimation process. An additional test was conducted in which the orbit of the combined Endeavour/WSF-02 was determined independently with each receiver. These one hour arcs were based on forming double differences with 13 TurboRogue receivers in the global IGS network and estimating pseudorange biases for the Collins. Various analyses suggest the TurboStar overall orbit accuracy is about one to two meters for this period, based on double differenced phase residuals of 34 cm. These residuals indicate the level of unmodeled forces on Endeavour produced by gravitational and nongravitational effects. The rms differences between the two independently determined orbits are better than 10 meters, thereby demonstrating the accuracy of the Collins-determined orbit at this level as well as the accuracy of the relative positioning using these two receivers.
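The double-differencing technique used in these ground and on-orbit tests combines measurements from two receivers and two GPS satellites so that receiver and satellite clock errors cancel; a generic form of the observable (not specific to this experiment's receivers) is:

```latex
% Single difference of an observable rho (pseudorange or carrier phase) between
% receivers A and B tracking satellite i:
\Delta \rho^{i}_{AB} = \rho^{i}_{A} - \rho^{i}_{B}
% Double difference between satellites i and j removes both receiver and satellite
% clock terms, leaving geometry plus unmodeled biases and noise:
\nabla\Delta \rho^{ij}_{AB} = \left(\rho^{i}_{A} - \rho^{i}_{B}\right) - \left(\rho^{j}_{A} - \rho^{j}_{B}\right)
```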
NASA Astrophysics Data System (ADS)
Sisson, James B.; van Genuchten, Martinus Th.
1991-04-01
The unsaturated hydraulic properties are important parameters in any quantitative description of water and solute transport in partially saturated soils. Currently, most in situ methods for estimating the unsaturated hydraulic conductivity (K) are based on analyses that require estimates of the soil water flux and the pressure head gradient. These analyses typically involve differencing of field-measured pressure head (h) and volumetric water content (θ) data, a process that can significantly amplify instrumental and measurement errors. More reliable methods result when differencing of field data can be avoided. One such method is based on estimates of the gravity drainage curve K'(θ) = dK/dθ which may be computed from observations of θ and/or h during the drainage phase of infiltration drainage experiments assuming unit gradient hydraulic conditions. The purpose of this study was to compare estimates of the unsaturated soil hydraulic functions on the basis of different combinations of field data θ, h, K, and K'. Five different data sets were used for the analysis: (1) θ-h, (2) K-θ, (3) K'-θ (4) K-θ-h, and (5) K'-θ-h. The analysis was applied to previously published data for the Norfolk, Troup, and Bethany soils. The K-θ-h and K'-θ-h data sets consistently produced nearly identical estimates of the hydraulic functions. The K-θ and K'-θ data also resulted in similar curves, although results in this case were less consistent than those produced by the K-θ-h and K'-θ-h data sets. We conclude from this study that differencing of field data can be avoided and hence that there is no need to calculate soil water fluxes and pressure head gradients from inherently noisy field-measured θ and h data. The gravity drainage analysis also provides results over a much broader range of hydraulic conductivity values than is possible with the more standard instantaneous profile analysis, especially when augmented with independently measured soil water retention data.
Grain size is a physical measurement commonly made in the analysis of many benthic systems. Grain size influences benthic community composition, can influence contaminant loading and can indicate the energy regime of a system. We have recently investigated the relationship betw...
DOT National Transportation Integrated Search
2009-01-01
As part of the Innovative Bridge Research and Construction Program (IBRCP), this study was conducted to use the full-scale construction project of the Route 123 Bridge over the Occoquan River in Northern Virginia to identify and compare any differenc...
USDA-ARS?s Scientific Manuscript database
Differences in wing size in geographical races of Heliconius erato distributed over the western and eastern sides of the Andes are reported on here. Individuals from the eastern side of the Andes are statistically larger in size than the ones on the western side of the Andes. A statistical differenc...
Joint production and substitution in timber supply: a panel data analysis
Torjus F Bolkesjo; Joseph Buongiorno; Birger Solberg
2010-01-01
Supply equations for sawlog and pulpwood were developed with a panel of data from 102 Norwegian municipalities, observed from 1980 to 2000. Static and dynamic models were estimated by cross-section, time-series and panel data methods. A static model estimated by first differencing gave the best overall results in terms of theoretical expectations, pattern of residuals,...
Chrysler improved numerical differencing analyzer for third generation computers CINDA-3G
NASA Technical Reports Server (NTRS)
Gaski, J. D.; Lewis, D. R.; Thompson, L. R.
1972-01-01
New and versatile method has been developed to supplement or replace use of original CINDA thermal analyzer program in order to take advantage of improved systems software and machine speeds of third generation computers. CINDA-3G program options offer variety of methods for solution of thermal analog models presented in network format.
NASA Astrophysics Data System (ADS)
Leeper, R. J.; Barth, N. C.; Gray, A. B.
2017-12-01
Hydro-geomorphic response in recently burned watersheds is highly dependent on the timing and magnitude of subsequent rainstorms. Recent advancements in surveying and monitoring techniques using Unmanned Aerial Vehicles (UAV) and Structure-from-Motion (SfM) photogrammetry can support the rapid estimation of near cm-scale topographic response of headwater catchments (ha to km2). However, surface change due to shallow erosional processes such as sheetwash and rilling remains challenging to measure at this spatial extent and the storm event scale. To address this issue, we combined repeat UAV-SfM surveys with hydrologic monitoring techniques and field investigations to characterize post-wildfire erosional processes and topographic change on a storm-by-storm basis. The Las Lomas watershed (~15 ha) burned in the 2016 San Gabriel Complex Fire along the front range of the San Gabriel Mountains, southern California. Surveys were conducted with a consumer-grade UAV; twenty-six SfM control markers, two rain gages, and two pressure transducers were installed in the watershed. The initial SfM-derived point cloud generated from 422 photos contains 258 million points; the DEM has a resolution of 2.42 cm/pixel and a point density of 17.1 pts/cm2. Rills began forming on hillslopes and minor erosion occurred within the channel network during the first low-intensity storms of the rainy season. Later, more intense storms resulted in substantial geomorphic change. Hydrologic data indicate that during one of the intense storms, total cumulative rainfall was 58.20 mm and peak 5-min intensity was 38.4 mm/hr. Post-storm field surveys revealed evidence of debris flows, flash flooding, erosion, and fluvial aggradation in the channel network, and rill growth and gully formation on hillslopes. Analyses of the SfM models indicate erosion dominated topographic change in steep channels and on hillslopes; aggradation dominated change in low gradient channels. A contrast of 5 cm exists between field measurements and change detected by differencing the SfM models. The quantitative and qualitative data sets obtained indicate that low-cost hydrologic monitoring techniques can be combined with SfM-derived high-resolution models to rapidly characterize post-wildfire hydrologic response and erosional processes on a storm event basis.
NASA Astrophysics Data System (ADS)
Beyeler, J. D.; Montgomery, D.; Kennard, P. M.
2016-12-01
Downwasting of all glaciers on the flanks of Mount Rainier, WA, in recent decades has debuttressed Little Ice Age glaciogenic sediments driving proglacial responses to regionally warming climate. Rivers draining the deglaciating edifice are responding to paraglacial sedimentation processes through transient storage of retreat-liberated sediments in aggrading (e.g., >5m) fluvial networks with widening channel corridors (i.e., 50-150%) post-LIA (ca., 1880-1910 locally). We hypothesize that the downstream transmission of proglacial fluxes (i.e., sediment and water) through deglaciating alpine terrain is a two-step geomorphic process. The ice-proximal portion of the proglacial system is dominated by the delivery of high sediment-to-water ratio flows (i.e., hyperconcentrated and debris slurries) and sediment retention by in-channel accumulation (e.g., confined debris fans within channel margins of valley segments) exacerbated by recruitment and accumulation of large wood (e.g., late seral stage conifers), whereas ice-distal fluvial reworking of transient sediment accumulations generates downstream aggradation. Historical Carbon River observations show restricted ice-proximal proglacial aggradation until a mainstem avulsion in 2009 initiated incision into sediment accumulations formed in recent decades, which is translating into aggradation farther down the network. Surficial morphology mapped with GPS, exposed subsurface sedimentology, and preliminary dating of buried trees suggest a transitional geomorphic process zone has persisted along the proglacial Carbon River through recent centuries and prior to the ultimate LIA glaciation. Structure-from-motion DEM differencing through the 2016 water year shows discrete zones of proglacial evolution through channel-spanning bed aggradation forced by interactions between large wood and sediment-rich flows that transition to fluvial process dominance as sediment is transported downstream. Long-term DEM differencing suggests these are persistent geomorphic processes as rivers respond to alpine deglaciation. This process-based study implies downstream river flooding in deglaciating alpine terrain globally is driven by glaciogenic sediment release and downstream channel aggradation irrespective of changes in discharge.
Shuttle Centaur engine cooldown evaluation and effects of expanded inlets on start transient
NASA Technical Reports Server (NTRS)
1987-01-01
As part of the integration of the RL10 engine into the Shuttle Centaur vehicle, a satisfactory method of conditioning the engine to operating temperatures had to be established. This procedure, known as cooldown, is different from that of the existing Atlas Centaur due to vehicle configuration and mission profile differences. The program is described, and the results of a Shuttle Centaur cooldown program are reported. Mission peculiarities cause substantial variation in propellant inlet conditions between the substantiated Atlas Centaur and Shuttle Centaur, with the Shuttle Centaur having much larger variation in conditions. A test program was conducted to demonstrate operation of the RL10 engine over the expanded inlet conditions. As a result of this program, the Shuttle Centaur requirements were proven satisfactory. Minor configuration changes incorporated as a result of this program provide a substantial reduction in cooldown propellant consumption.
Dynamic modeling of environmental risk associated with drilling discharges to marine sediments.
Durgut, İsmail; Rye, Henrik; Reed, Mark; Smit, Mathijs G D; Ditlevsen, May Kristin
2015-10-15
Drilling discharges are complex mixtures of base-fluids, chemicals and particulates, and may, after discharge to the marine environment, result in adverse effects on benthic communities. A numerical model was developed to estimate the fate of drilling discharges in the marine environment, and associated environmental risks. Environmental risk from deposited drilling waste in marine sediments is generally caused by four types of stressors: oxygen depletion, toxicity, burial and change of grain size. In order to properly model these stressors, natural burial, biodegradation and bioturbation processes were also included. Diagenetic equations provide the basis for quantifying environmental risk. These equations are solved numerically by an implicit-central differencing scheme. The sediment model described here is, together with a fate and risk model focusing on the water column, implemented in the DREAM and OSCAR models, both available within the Marine Environmental Modeling Workbench (MEMW) at SINTEF in Trondheim, Norway.
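An implicit central-differencing step of the kind mentioned for the diagenetic equations can be sketched for a generic one-dimensional diffusion term (constant diffusivity and no-flux boundaries assumed; this is a minimal illustration, not the DREAM/MEMW implementation):

```python
import numpy as np
from scipy.linalg import solve_banded

def implicit_diffusion_step(c, D, dz, dt):
    """One backward-Euler step of dc/dt = D d2c/dz2 using central differencing in z.

    The implicit system (I - dt*D*L) c_new = c_old is tridiagonal and solved with a
    banded solver. Zero-gradient (no-flux) boundaries are assumed.
    """
    n = len(c)
    lam = D * dt / dz**2
    ab = np.zeros((3, n))               # banded storage: super-, main, sub-diagonal
    ab[0, 1:] = -lam                    # super-diagonal
    ab[1, :] = 1 + 2 * lam              # main diagonal
    ab[2, :-1] = -lam                   # sub-diagonal
    ab[1, 0] = ab[1, -1] = 1 + lam      # no-flux boundary rows
    return solve_banded((1, 1), ab, c)

# Example: a buried pulse of a dissolved species spreading through a 20 cm sediment column
z = np.linspace(0, 0.2, 101)                     # dz = 2 mm
c = np.exp(-((z - 0.05) / 0.01) ** 2)            # initial concentration pulse at 5 cm depth
for _ in range(100):                             # 100 daily time steps
    c = implicit_diffusion_step(c, D=1e-9 * 86400, dz=z[1] - z[0], dt=1.0)
print("peak concentration after 100 days:", c.max())
```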
The Large Synoptic Survey Telescope as a Near-Earth Object discovery machine
NASA Astrophysics Data System (ADS)
Jones, R. Lynne; Slater, Colin T.; Moeyens, Joachim; Allen, Lori; Axelrod, Tim; Cook, Kem; Ivezić, Željko; Jurić, Mario; Myers, Jonathan; Petry, Catherine E.
2018-03-01
Using the most recent prototypes, design, and as-built system information, we test and quantify the capability of the Large Synoptic Survey Telescope (LSST) to discover Potentially Hazardous Asteroids (PHAs) and Near-Earth Objects (NEOs). We empirically estimate an expected upper limit to the false detection rate in LSST image differencing, using measurements on DECam data and prototype LSST software, and find it to be about 450 deg^-2. We show that this rate is already tractable with the current prototype of the LSST Moving Object Processing System (MOPS) by processing a 30-day simulation consistent with measured false detection rates. We proceed to evaluate the performance of the LSST baseline survey strategy for PHAs and NEOs using a high-fidelity simulated survey pointing history. We find that LSST alone, using its baseline survey strategy, will detect 66% of the PHA and 61% of the NEO population objects brighter than H = 22, with an uncertainty in the estimate of ±5 percentage points. By generating and examining variations on the baseline survey strategy, we show it is possible to further improve the discovery yields. In particular, we find that extending the LSST survey by two additional years and doubling the MOPS search window increases the completeness for PHAs to 86% (including those discovered by contemporaneous surveys) without jeopardizing other LSST science goals (77% for NEOs). This equates to reducing the undiscovered population of PHAs by an additional 26% (15% for NEOs), relative to the baseline survey.
Elizabeth E. Hoy; Nancy H.F. French; Merritt R. Turetsky; Simon N. Trigg; Eric S. Kasischke
2008-01-01
Satellite remotely sensed data of fire disturbance offers important information; however, current methods to study fire severity may need modifications for boreal regions. We assessed the potential of the differenced Normalized Burn Ratio (dNBR) and other spectroscopic indices and image transforms derived from Landsat TM/ETM+ data for mapping fire severity in Alaskan...
Viking S-band Doppler RMS phase fluctuations used to calibrate the mean 1976 equatorial corona
NASA Technical Reports Server (NTRS)
Berman, A. L.; Wackley, J. A.
1977-01-01
Viking S-band Doppler RMS phase fluctuations (noise) and comparisons of Viking Doppler noise to Viking differenced S-X range measurements are used to construct a mean equatorial electron density model for 1976. Using Pioneer Doppler noise results (at high heliographic latitudes, also from 1976), an equivalent nonequatorial electron density model is approximated.
Finite difference methods for the solution of unsteady potential flows
NASA Technical Reports Server (NTRS)
Caradonna, F. X.
1982-01-01
Various problems which are confronted in the development of an unsteady finite difference potential code are reviewed mainly in the context of what is done for a typical small disturbance and full potential method. The issues discussed include choice of equations, linearization and conservation, differencing schemes, and algorithm development. A number of applications, including unsteady three dimensional rotor calculations, are demonstrated.
Modeling of multi-strata forest fire severity using Landsat TM data
Q. Meng; R.K. Meentemeyer
2011-01-01
Most fire severity studies use field measures of the composite burn index (CBI) to represent forest fire severity and fit relationships between CBI and the Landsat-derived differenced normalized burn ratio (dNBR) to predict and map fire severity at unsampled locations. However, less attention has been paid to multi-strata forest fire severity, which...
Donovan S. Birch; Penelope Morgan; Crystal A. Kolden; John T. Abatzoglou; Gregory K. Dillon; Andrew T. Hudak; Alistair M. S. Smith
2015-01-01
Burn severity as inferred from satellite-derived differenced Normalized Burn Ratio (dNBR) is useful for evaluating fire impacts on ecosystems but the environmental controls on burn severity across large forest fires are both poorly understood and likely to be different than those influencing fire extent. We related dNBR to environmental variables including vegetation,...
ERIC Educational Resources Information Center
Belfield, Clive; Bailey, Thomas
2017-01-01
Recently, studies have adopted fixed effects modeling to identify the returns to college. This method has the advantage over ordinary least squares estimates in that unobservable, individual-level characteristics that may bias the estimated returns are differenced out. But the method requires extensive longitudinal data and involves complex…
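The "differenced out" unobservable characteristics referred to here follow the standard panel-data logic; a generic sketch (not the authors' exact specification) is:

```latex
% Earnings model with an unobserved, time-invariant individual effect alpha_i:
y_{it} = \beta \, x_{it} + \alpha_i + \varepsilon_{it}
% First-differencing (or the within-transformation) removes alpha_i, so beta is
% identified from changes within the same individual over time:
\Delta y_{it} = y_{it} - y_{i,t-1} = \beta \, \Delta x_{it} + \Delta \varepsilon_{it}
```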
PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 1: Analysis description
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady
1990-01-01
A new computer code was developed to solve the two-dimensional or axisymmetric, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 1 is the Analysis Description, and describes in detail the governing equations, the turbulence model, the linearization of the equations and boundary conditions, the time and space differencing formulas, the ADI solution procedure, and the artificial viscosity models.
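One common two-parameter family of implicit time-differencing formulas that contains first- and second-order members is shown below for illustration (the report itself defines the exact form used in PROTEUS); here Δu^n = u^{n+1} − u^n.

```latex
\Delta u^{n} \;=\; \frac{\theta_1 \,\Delta t}{1+\theta_2}\,\frac{\partial}{\partial t}\bigl(\Delta u^{n}\bigr)
 \;+\; \frac{\Delta t}{1+\theta_2}\,\frac{\partial u^{n}}{\partial t}
 \;+\; \frac{\theta_2}{1+\theta_2}\,\Delta u^{n-1}
% theta_1 = 1,   theta_2 = 0   : first-order implicit Euler
% theta_1 = 1/2, theta_2 = 0   : second-order trapezoidal
% theta_1 = 1,   theta_2 = 1/2 : second-order three-point backward
```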
NASA Technical Reports Server (NTRS)
Yang, Cheng I.; Guo, Yan-Hu; Liu, C.- H.
1996-01-01
The analysis and design of a submarine propulsor requires the ability to predict the characteristics of both laminar and turbulent flows to a higher degree of accuracy. This report presents results of certain benchmark computations based on an upwind, high-resolution, finite-differencing Navier-Stokes solver. The purpose of the computations is to evaluate the ability, the accuracy and the performance of the solver in the simulation of detailed features of viscous flows. Features of interest include flow separation and reattachment, surface pressure and skin friction distributions. Those features are particularly relevant to the propulsor analysis. Test cases with a wide range of Reynolds numbers are selected; therefore, the effects of the convective and the diffusive terms of the solver can be evaluated separately. Test cases include flows over bluff bodies, such as circular cylinders and spheres, at various low Reynolds numbers, flows over a flat plate with and without turbulence effects, and turbulent flows over axisymmetric bodies with and without propulsor effects. Finally, to enhance the iterative solution procedure, a full approximation scheme V-cycle multigrid method is implemented. Preliminary results indicate that the method significantly reduces the computational effort.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gasperikova, E.; Smith, J.T.; Kappler, K.N.
2010-04-01
With prior funding (UX-1225, MM-0437, and MM-0838), we have successfully designed and built a cart-mounted Berkeley UXO Discriminator (BUD) and demonstrated its performance at various test sites (e.g., Gasperikova et al., 2007, 2009). It is a multi-transmitter, multi-receiver active electromagnetic system that is able to discriminate UXO from scrap at a single measurement position, hence eliminating the requirement for a very accurate sensor location. The cart-mounted system comprises three orthogonal transmitters and eight pairs of differenced receivers (Smith et al., 2007). Receiver coils are located on symmetry lines through the center of the system and see identical fields during the on-time of the pulse in all of the transmitter coils. They can then be wired in opposition to produce zero output during the on-time of the pulses in the three orthogonal transmitters. Moreover, this configuration dramatically reduces noise in the measurements by canceling the background electromagnetic fields (these fields are uniform over the scale of the receiver array and are consequently nulled by the differencing operation), and by canceling the noise contributed by the tilt of the receivers in the Earth's magnetic field, and therefore greatly enhances receiver sensitivity to the gradients of the target.
An RGB colour image steganography scheme using overlapping block-based pixel-value differencing
Pal, Arup Kumar
2017-01-01
This paper presents a steganographic scheme based on the RGB colour cover image. The secret message bits are embedded into each colour pixel sequentially by the pixel-value differencing (PVD) technique. PVD basically works on two consecutive non-overlapping components; as a result, the straightforward conventional PVD technique is not applicable to embed the secret message bits into a colour pixel, since a colour pixel consists of three colour components, i.e. red, green and blue. Hence, in the proposed scheme, initially the three colour components are represented into two overlapping blocks like the combination of red and green colour components, while another one is the combination of green and blue colour components, respectively. Later, the PVD technique is employed on each block independently to embed the secret data. The two overlapping blocks are readjusted to attain the modified three colour components. The notion of overlapping blocks has improved the embedding capacity of the cover image. The scheme has been tested on a set of colour images and satisfactory results have been achieved in terms of embedding capacity and upholding the acceptable visual quality of the stego-image. PMID:28484623
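As an illustration of the basic PVD operation described above, the following is a minimal Python sketch of embedding into a single pixel pair using one commonly cited set of quantization ranges; it omits the overlapping-block readjustment and the overflow handling of the full scheme, and all function names are illustrative.

```python
import numpy as np

# Quantization ranges of the classic PVD scheme; a range of width w hides floor(log2(w)) bits.
RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

def embed_pair(p1, p2, bitstream):
    """Embed bits into one pair of 8-bit pixel values and return the new pair plus leftover bits."""
    d = abs(int(p2) - int(p1))
    lo, hi = next((l, h) for l, h in RANGES if l <= d <= h)
    t = int(np.log2(hi - lo + 1))                    # capacity of this pair in bits
    bits = bitstream[:t]
    new_d = lo + int("".join(map(str, bits)), 2)     # new difference value encodes the bits
    m = new_d - d
    # Split the required change between the two pixels, preserving the sign of (p2 - p1)
    if int(p2) >= int(p1):
        p1n, p2n = int(p1) - m // 2, int(p2) + (m - m // 2)
    else:
        p1n, p2n = int(p1) + (m - m // 2), int(p2) - m // 2
    return int(np.clip(p1n, 0, 255)), int(np.clip(p2n, 0, 255)), bitstream[t:]

# Example: hide three bits in a red/green component pair of one colour pixel
bits = [1, 0, 1, 1, 0, 1, 0, 0]
r, g, remaining = embed_pair(120, 135, bits)
print(r, g, "difference now", abs(g - r), "bits left:", remaining)
```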
NASA Technical Reports Server (NTRS)
Kim, Hyoungin; Liou, Meng-Sing
2011-01-01
In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids by judiciously choosing interpolation polynomials in regions of different grid levels and (2) enhanced reinitialization by an interface sharpening procedure. The level set equation is solved using a fifth order WENO scheme or a second order central differencing scheme depending on availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth order WENO scheme. This selective usage between the fifth order WENO and second order central differencing schemes is confirmed to give more accurate results compared to those in literature for standard test problems. In order to further improve accuracy especially near thin filaments, we suggest an artificial sharpening method, which is in a similar form with the conventional re-initialization method but utilizes sign of curvature instead of sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is remarkably reduced for the test problems
NASA Astrophysics Data System (ADS)
Hartung, Christine; Spraul, Raphael; Schuchert, Tobias
2017-10-01
Wide area motion imagery (WAMI) acquired by an airborne multicamera sensor enables continuous monitoring of large urban areas. Each image can cover regions of several square kilometers and contain thousands of vehicles. Reliable vehicle tracking in this imagery is an important prerequisite for surveillance tasks, but remains challenging due to low frame rate and small object size. Most WAMI tracking approaches rely on moving object detections generated by frame differencing or background subtraction. These detection methods fail when objects slow down or stop. Recent approaches for persistent tracking compensate for missing motion detections by combining a detection-based tracker with a second tracker based on appearance or local context. In order to avoid the additional complexity introduced by combining two trackers, we employ an alternative single tracker framework that is based on multiple hypothesis tracking and recovers missing motion detections with a classifier-based detector. We integrate an appearance-based similarity measure, merge handling, vehicle-collision tests, and clutter handling to adapt the approach to the specific context of WAMI tracking. We apply the tracking framework to a region of interest of the publicly available WPAFB 2009 dataset for quantitative evaluation; a comparison to other persistent WAMI trackers demonstrates state-of-the-art performance of the proposed approach. Furthermore, we analyze in detail the impact of different object detection methods and detector settings on the quality of the output tracking results. For this purpose, we choose four different motion-based detection methods that vary in detection performance and computation time to generate the input detections. As detector parameters can be adjusted to achieve different precision and recall performance, we combine each detection method with different detector settings that yield (1) high precision and low recall, (2) high recall and low precision, and (3) best f-score. Comparing the tracking performance achieved with all generated sets of input detections allows us to quantify the sensitivity of the tracker to different types of detector errors and to derive recommendations for detector and parameter choice.
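A minimal OpenCV sketch of the frame-differencing detector family referred to above (not the detectors evaluated in the paper; the threshold and morphology settings are illustrative assumptions) is:

```python
import cv2
import numpy as np

def frame_differencing_detections(prev_frame, curr_frame, threshold=25, min_area=9):
    """Detect moving objects by differencing two consecutive, registered frames.

    Returns bounding boxes of connected regions whose absolute grey-level change
    exceeds `threshold`; regions smaller than `min_area` pixels are discarded.
    """
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

# Usage on two registered aerial frames read from disk (hypothetical file names):
# prev = cv2.imread("frame_000.png"); curr = cv2.imread("frame_001.png")
# boxes = frame_differencing_detections(prev, curr)
```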
Mass Balance of the Northern Antarctic Peninsula and its Ongoing Response to Ice Shelf Loss
NASA Astrophysics Data System (ADS)
Scambos, T. A.; Berthier, E.; Haran, T. M.; Shuman, C. A.; Cook, A. J.; Bohlander, J. A.
2012-12-01
An assessment of the most rapidly changing areas of the Antarctic Peninsula (north of 66°S) shows that ice mass loss for the region is dominated by areas affected by eastern-Peninsula ice shelf losses in the past 20 years. Little if any of the mass loss is compensated by increased snowfall in the northwestern or far northern areas. We combined satellite stereo-image DEM differencing and ICESat-derived along-track elevation changes to measure ice mass loss for the Antarctic Peninsula north of 66°S between 2001-2010, focusing on the ICESat-1 period of operation (2003-2009). This mapping includes all ice drainages affected by recent ice shelf loss in the northeastern Peninsula (Prince Gustav, Larsen Inlet, Larsen A, and Larsen B) as well as James Ross Island, Vega Island, Anvers Island, Brabant Island and the adjacent west-flowing glaciers. Polaris Glacier (feeding the Larsen Inlet, which collapsed in 1986) is an exception, and may have stabilized. Our method uses ASTER and SPOT-5 stereo-image DEMs to determine dh/dt for elevations below 800 m; at higher elevations ICESat along-track elevation differencing is used. To adjust along-track path offsets between its 2003-2009 campaigns, we use a recent DEM of the Peninsula to establish and correct for cross-track slope (Cook et al., 2012, doi:10.5194/essdd-5-365-2012; http://nsidc.org/data/nsidc-0516.html) . We reduce the effect of possible seasonal variations in elevation by using only integer-year repeats of the ICESat tracks for comparison. Mass losses are dominated by the major glaciers that had flowed into the Prince Gustav (Boydell, Sjorgren, Röhss), Larsen A (Edgeworth, Bombardier, Dinsmoor, Drygalski), and Larsen B (Hektoria, Jorum, and Crane) embayments. The pattern of mass loss emphasizes the significant and multi-decadal response to ice shelf loss. Areas with shelf losses occurring 30 to 100s of years ago seem to be relatively stable or losing mass only slowly (western glaciers, northernmost areas). The remnant of the Larsen B, Scar Inlet Ice Shelf, shows signs of imminent break-up, and its feeder glaciers (Flask and Leppard) are already increasing in speed as the ice shelf remnant decreases in area.
Surface elevation and mass changes of all Swiss glaciers 1980-2010
NASA Astrophysics Data System (ADS)
Fischer, M.; Huss, M.; Hoelzle, M.
2015-03-01
Since the mid-1980s, glaciers in the European Alps have shown widespread and accelerating mass losses. This article presents glacier-specific changes in surface elevation, volume and mass balance for all glaciers in the Swiss Alps from 1980 to 2010. Together with glacier outlines from the 1973 inventory, the DHM25 Level 1 digital elevation models (DEMs) for which the source data over glacierized areas were acquired from 1961 to 1991 are compared to the swissALTI3D DEMs from 2008 to 2011 combined with the new Swiss Glacier Inventory SGI2010. Due to the significant differences in acquisition dates of the source data used, mass changes are temporally homogenized to directly compare individual glaciers or glacierized catchments. Along with an in-depth accuracy assessment, results are validated against volume changes from independent photogrammetrically derived DEMs of single glaciers. Observed volume changes are largest between 2700 and 2800 m a.s.l. and remarkable even above 3500 m a.s.l. The mean geodetic mass balance is -0.62 ± 0.07 m w.e. yr-1 for the entire Swiss Alps over the reference period 1980-2010. For the main hydrological catchments, it ranges from -0.52 to -1.07 m w.e. yr-1. The overall volume loss calculated from the DEM differencing is -22.51 ± 1.76 km3.
Surface elevation and mass changes of all Swiss glaciers 1980-2010
NASA Astrophysics Data System (ADS)
Fischer, M.; Huss, M.; Hoelzle, M.
2014-08-01
Since the mid-1980s, glaciers in the European Alps have shown widespread and accelerating mass losses. This article presents glacier-specific changes in surface elevation, volume and mass balance for all glaciers in the Swiss Alps from 1980 to 2010. Together with glacier outlines from the 1973 inventory, the DHM25 Level 1 Digital Elevation Models (DEMs) for which the source data over glacierized areas was acquired from 1961 to 1991 are compared to the swissALTI3D DEMs from 2008-2011 combined with the new Swiss Glacier Inventory SGI2010. Due to the significant differences in acquisition date of the source data used, resulting mass changes are temporally homogenized to directly compare individual glaciers or glacierized catchments. Along with an in-depth accuracy assessment, results are validated against volume changes from independent photogrammetrically derived DEMs of single glaciers. Observed volume changes are largest between 2700-2800 m a.s.l. and remarkable even above 3500 m a.s.l. The mean geodetic mass balance is -0.62 ± 0.03 m w.e. yr-1 for the entire Swiss Alps over the reference period 1980-2010. For the main hydrological catchments, it ranges from -0.52 to -1.07 m w.e. yr-1. The overall volume loss calculated from the DEM differencing is -22.51 ± 0.97 km3.
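Both versions of this study convert a DEM-differenced volume change into a specific (geodetic) mass balance in metres of water equivalent per year; a generic form of that conversion, with an illustrative density assumption rather than the authors' exact factor, is:

```latex
% Geodetic mass balance rate from DEM differencing over the period t_1 -> t_2:
\dot{B} \;=\; \frac{\Delta V \,\bar{\rho}}{\bar{A}\,\rho_{\mathrm{water}}\,(t_2 - t_1)}
% Delta V : volume change from DEM differencing,  A-bar : mean glacier area,
% rho-bar : assumed density of the volume change (e.g. 850 kg m^-3),
% giving a specific balance in m w.e. yr^-1.
```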
A numerical study of the steady scalar convective diffusion equation for small viscosity
NASA Technical Reports Server (NTRS)
Giles, M. B.; Rose, M. E.
1983-01-01
A time-independent convection diffusion equation is studied by means of a compact finite difference scheme and numerical solutions are compared to the analytic inviscid solutions. The correct internal and external boundary layer behavior is observed, due to an inherent feature of the scheme which automatically produces upwind differencing in inviscid regions and the correct viscous behavior in viscous regions.
Atmospheric cloud physics thermal systems analysis
NASA Technical Reports Server (NTRS)
1977-01-01
Engineering analyses performed on the Atmospheric Cloud Physics (ACPL) Science Simulator expansion chamber and associated thermal control/conditioning system are reported. Analyses were made to develop a verified thermal model and to perform parametric thermal investigations to evaluate systems performance characteristics. Thermal network representations of solid components and the complete fluid conditioning system were solved simultaneously using the Systems Improved Numerical Differencing Analyzer (SINDA) computer program.
Interior Fluid Dynamics of Liquid-Filled Projectiles
1989-12-01
the Sandia code. The previous codes are primarily based on finite-difference approximations with relatively coarse grids and were designed without...exploits Chorin's method of artificial compressibility. The steady solution at 11 X 24 X 21 grid points in the r, θ, z-directions is obtained by integrating...differences in the radial and axial directions and pseudospectral differencing in the azimuthal direction. Nonuniform grids are introduced for increased
Domain Derivatives in Dielectric Rough Surface Scattering
2015-01-01
and require the gradient of the objective function in the unknown model parameter vector at each stage of iteration. For large N, finite...differencing becomes numerically intensive, and an efficient alternative is domain differentiation in which the full gradient is obtained by solving a single...derivative calculation of the gradient for a locally perturbed dielectric interface. The method is non-variational, and algebraic in nature in that it
Wave Current Interactions and Wave-blocking Predictions Using NHWAVE Model
2013-03-01
Navier-Stokes equation. In this approach, as with previous modeling techniques, there is difficulty in simulating the free surface that inhibits accurate...hydrostatic, free-surface, rotational flows in multiple dimensions. It is useful in predicting transformations of surface waves and rapidly varied...Stelling, G., and M. Zijlema, 2003: An accurate and efficient finite-differencing algorithm for non-hydrostatic free surface flow with application to
A. M. S. Smith; L. B. Lenilte; A. T. Hudak; P. Morgan
2007-01-01
The Differenced Normalized Burn Ratio (deltaNBR) is widely used to map post-fire effects in North America from multispectral satellite imagery, but has not been rigorously validated across the great diversity in vegetation types. The importance of these maps to fire rehabilitation crews highlights the need for continued assessment of alternative remote sensing...
Progress in Multi-Dimensional Upwind Differencing
1992-09-01
In Figure 4a a shockless transonic solution is reached from initial values containing shocks and sonic points; again, the residual...
NASA Astrophysics Data System (ADS)
Liu, Qingsheng; Liang, Li; Liu, Gaohuan; Huang, Chong
2017-09-01
Vegetation often exists as patches in arid and semi-arid regions throughout the world. Vegetation patches can be effectively monitored with remote sensing images. However, not all satellite platforms are suitable for studying quasi-circular vegetation patches. This study compares fine (GF-1) and coarse (CBERS-04) resolution platforms, specifically focusing on the quasi-circular vegetation patches in the Yellow River Delta (YRD), China. Vegetation patch features (area, shape) were extracted from GF-1 and CBERS-04 imagery using an unsupervised classifier (K-Means) and an object-oriented approach (example-based feature extraction with an SVM classifier) in order to analyze vegetation patterns. These features were then compared using vector overlay and differencing, and the Root Mean Squared Error (RMSE) was used to determine whether the mapped vegetation patches were significantly different. For both K-Means and example-based feature extraction with SVM classification, the area of quasi-circular vegetation patches from visual interpretation of a QuickBird image (ground truth data) was greater than that from both GF-1 and CBERS-04, and the number of patches detected from GF-1 data was greater than that from the CBERS-04 image. Without expert experience and professional training in the object-oriented approach, K-Means was better than example-based feature extraction with SVM for detecting the patches. The results indicate that CBERS-04 could be used to detect patches with areas of more than 300 m2, whereas GF-1 data were a sufficient source for patch detection in the YRD. However, in the future, finer resolution platforms such as WorldView are needed to gain more detailed insight into patch structures, components, and formation mechanisms.
NASA Astrophysics Data System (ADS)
Curran, M. L.; Hales, G.; Michalak, M.
2016-12-01
Digital Terrain Models (DTMs) generated in Agisoft Photoscan from photogrammetry provide a basis for a high resolution, quantitative analysis of geomorphic features that are difficult to describe using conventional, commonly used techniques. Photogrammetric analysis can be particularly useful in investigating the spatial and temporal dispersal of gravel in high gradient mountainous streams. The Oak Grove Fork (OGF), located in northwestern Oregon, is one of the largest tributaries to the Clackamas River. Lake Harriet Dam and diversion was built on the OGF in 1924 as part of a hydroelectric development by Portland General Electric. Decreased flow and sediment supply downstream of Lake Harriet Dam has resulted in geomorphic and biological changes, including reduced salmonid habitat. As part of a program to help restore a portion of the natural sediment supply and improve salmonid habitat, gravel augmentation is scheduled to begin September 2016. Tracking the downstream movement of augmented gravels is crucial to establishing program success. The OGF provides a unique setting for this study; flow is regulated at the dam, except for spillover during high flow events, and a streamflow gaging station downstream of the study area reports discharge. As such, the controlled environment of the OGF provides a natural laboratory to study how a sediment-depleted channel responds geomorphically to a known volume of added gravel. This study uses SfM to evaluate deposition of the augmented gravel following its introduction. The existing channel is characterized by coarse, angular gravel, cobble, and boulder; the augmented gravel is finer, rounded, and 5% of the volume is an exotic lithology to provide a visual tracer. Baseline, pre-gravel introduction DTMs are constructed and will be differenced with post-gravel introduction DTMs to calculate change at four study sites. Our preliminary pilot testing on another river shows that centimeter-scale accretion and aggradation within the wetted channel and on exposed gravel bars can be detected using this methodology. The resolution of the baseline DTMs on the Oak Grove Fork support these initial results. Continued monitoring and quantifying of vertical change within the study reach will inform future rehabilitation efforts and gravel augmentation practices.
NASA Astrophysics Data System (ADS)
Whorton, E.; Headman, A.; Shean, D. E.; McCann, E.
2017-12-01
Understanding the implications of glacier recession on water resources in the western U.S. requires quantifying glacier mass change across large regions over several decades. Very few glaciers in North America have long-term continuous field measurements of glacier mass balance. However, systematic aerial photography campaigns began in 1957 on many glaciers in the western U.S. and Alaska. These historical, vertical aerial stereo-photographs documenting glacier evolution have recently become publically available. Digital elevation models (DEM) of the transient glacier surface preserved in each imagery timestamp can be derived, then differenced to calculate glacier volume and mass change to improve regional geodetic solutions of glacier mass balance. In order to batch process these data, we use Python-based algorithms and Agisoft Photoscan structure from motion (SfM) photogrammetry software to semi-automate DEM creation, and orthorectify and co-register historical aerial imagery in a high-performance computing environment. Scanned photographs are rotated to reduce scaling issues, cropped to the same size to remove fiducials, and batch histogram equalization is applied to improve image quality and aid pixel-matching algorithms using the Python library OpenCV. Processed photographs are then passed to Photoscan through the Photoscan Python library to create DEMs and orthoimagery. To extend the period of record, the elevation products are co-registered to each other, airborne LiDAR data, and DEMs derived from sub-meter commercial satellite imagery. With the exception of the placement of ground control points, the process is entirely automated with Python. Current research is focused on: one, applying these algorithms to create geodetic mass balance time series for the 90 photographed glaciers in Washington State and two, evaluating the minimal amount of positional information required in Photoscan to prevent distortion effects that cannot be addressed during co-registration. Feature tracking and identification utilities in OpenCV have the potential to automate the georeferencing process. We aim to develop an algorithm suite that is flexible enough to enable its use for many landscape change detection and analysis problems.
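A minimal sketch of the scan pre-processing step described above (cropping the fiducial margin and applying OpenCV histogram equalization); the paths, crop width, and file naming are hypothetical, and the rotation correction and Photoscan API calls of the full pipeline are omitted:

```python
import cv2
import glob

def preprocess_scanned_photo(path, out_path, crop_px=200):
    """Crop a fixed fiducial margin and apply histogram equalization to one scanned frame.

    crop_px is a hypothetical margin width; the actual crop depends on the scan layout.
    """
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = img[crop_px:-crop_px, crop_px:-crop_px]   # remove fiducial border
    img = cv2.equalizeHist(img)                     # improve contrast for pixel matching
    cv2.imwrite(out_path, img)

# Batch-process a directory of scans (output directory assumed to exist)
for i, path in enumerate(sorted(glob.glob("scans/*.tif"))):
    preprocess_scanned_photo(path, f"processed/frame_{i:04d}.tif")
```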
Finite difference methods for the solution of unsteady potential flows
NASA Technical Reports Server (NTRS)
Caradonna, F. X.
1985-01-01
A brief review is presented of various problems which are confronted in the development of an unsteady finite difference potential code. This review is conducted mainly in the context of what is done for typical small disturbance and full potential methods. The issues discussed include choice of equation, linearization and conservation, differencing schemes, and algorithm development. A number of applications, including unsteady three-dimensional rotor calculations, are demonstrated.
Finite Difference Methods for the Solution of Unsteady Potential Flows.
1982-06-01
prediction of loads on helicopter rotors in forward flight. Although aeroelastic effects are important, in this case the main source of unsteadiness is in the...and conservation, differencing schemes, and algorithm development. A number of applications, including unsteady three-dimensional rotor calculations...concerning tunnel turbulence, wall and scaling effects, and separation. We now know that many of these problems are magnified by the inherent susceptibility
SCISEAL: A CFD Code for Analysis of Fluid Dynamic Forces in Seals
NASA Technical Reports Server (NTRS)
Althavale, Mahesh M.; Ho, Yin-Hsing; Przekwas, Andre J.
1996-01-01
A 3D CFD code, SCISEAL, has been developed and validated. Its capabilities include cylindrical seals, and it is employed on labyrinth seals, rim seals, and disc cavities. State-of-the-art numerical methods include colocated grids, high-order differencing, and turbulence models which account for wall roughness. SCISEAL computes efficient solutions for complicated flow geometries and seal-specific capabilities (rotor loads, torques, etc.).
2010-04-01
structure design showed that we could achieve both of these goals with a 14-in (0.35 m) sensor cube. To avoid the reliance on accurate multiple...differenced pair receiver. 4. Conclusions We have designed and built a sensor package of a 14-in (0.35 m) cube based on the...funding (UX-1225, MM-0437, and MM-0838), we have successfully designed and built a cart-mounted Berkeley UXO Discriminator (BUD) and demonstrated its
Two-dimensional CFD modeling of wave rotor flow dynamics
NASA Technical Reports Server (NTRS)
Welch, Gerard E.; Chima, Rodrick V.
1994-01-01
A two-dimensional Navier-Stokes solver developed for detailed study of wave rotor flow dynamics is described. The CFD model is helping characterize important loss mechanisms within the wave rotor. The wave rotor stationary ports and the moving rotor passages are resolved on multiple computational grid blocks. The finite-volume form of the thin-layer Navier-Stokes equations with laminar viscosity are integrated in time using a four-stage Runge-Kutta scheme. Roe's approximate Riemann solution scheme or the computationally less expensive advection upstream splitting method (AUSM) flux-splitting scheme is used to effect upwind-differencing of the inviscid flux terms, using cell interface primitive variables set by MUSCL-type interpolation. The diffusion terms are central-differenced. The solver is validated using a steady shock/laminar boundary layer interaction problem and an unsteady, inviscid wave rotor passage gradual opening problem. A model inlet port/passage charging problem is simulated and key features of the unsteady wave rotor flow field are identified. Lastly, the medium pressure inlet port and high pressure outlet port portion of the NASA Lewis Research Center experimental divider cycle is simulated and computed results are compared with experimental measurements. The model accurately predicts the wave timing within the rotor passages and the distribution of flow variables in the stationary inlet port region.
Two-dimensional CFD modeling of wave rotor flow dynamics
NASA Technical Reports Server (NTRS)
Welch, Gerard E.; Chima, Rodrick V.
1993-01-01
A two-dimensional Navier-Stokes solver developed for detailed study of wave rotor flow dynamics is described. The CFD model is helping characterize important loss mechanisms within the wave rotor. The wave rotor stationary ports and the moving rotor passages are resolved on multiple computational grid blocks. The finite-volume form of the thin-layer Navier-Stokes equations with laminar viscosity are integrated in time using a four-stage Runge-Kutta scheme. The Roe approximate Riemann solution scheme or the computationally less expensive Advection Upstream Splitting Method (AUSM) flux-splitting scheme are used to effect upwind-differencing of the inviscid flux terms, using cell interface primitive variables set by MUSCL-type interpolation. The diffusion terms are central-differenced. The solver is validated using a steady shock/laminar boundary layer interaction problem and an unsteady, inviscid wave rotor passage gradual opening problem. A model inlet port/passage charging problem is simulated and key features of the unsteady wave rotor flow field are identified. Lastly, the medium pressure inlet port and high pressure outlet port portion of the NASA Lewis Research Center experimental divider cycle is simulated and computed results are compared with experimental measurements. The model accurately predicts the wave timing within the rotor passage and the distribution of flow variables in the stationary inlet port region.
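The upwind/central split described in both entries can be written schematically for a scalar model equation (an illustration only, not the Roe or AUSM flux formulas actually used in the solver):

```latex
% First-order upwind differencing of a convective term with local wave speed a at cell i:
\frac{\partial u}{\partial x}\bigg|_i \approx
\begin{cases}
\dfrac{u_i - u_{i-1}}{\Delta x}, & a > 0,\\[1.2ex]
\dfrac{u_{i+1} - u_i}{\Delta x}, & a < 0,
\end{cases}
% while the diffusion term is central-differenced:
\frac{\partial^2 u}{\partial x^2}\bigg|_i \approx \frac{u_{i+1} - 2u_i + u_{i-1}}{\Delta x^2}
```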
NASA Technical Reports Server (NTRS)
Radomski, M. S.; Doll, C. E.
1995-01-01
The Differenced Range (DR) Versus Integrated Doppler (ID) (DRVID) method exploits the opposition of high-frequency signal versus phase retardation by plasma media to obtain information about the plasma's corruption of simultaneous range and Doppler spacecraft tracking measurements. Thus, DR Plus ID (DRPID) is an observable independent of plasma refraction, while actual DRVID (DR minus ID) measures the time variation of the path electron content independently of spacecraft motion. The DRVID principle has been known since 1961. It has been used to observe interplanetary plasmas, is implemented in Deep Space Network tracking hardware, and has recently been applied to single-frequency Global Positioning System user navigation. This paper discusses exploration at the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) of DRVID synthesized from simultaneous two-way range and Doppler tracking for low Earth-orbiting missions supported by the Tracking and Data Relay Satellite System (TDRSS). The paper presents comparisons of actual DR and ID residuals and relates those comparisons to predictions of the Bent model. The complications due to the pilot tone influence on relayed Doppler measurements are considered. Further use of DRVID to evaluate ionospheric models is discussed, as is use of DRPID in reducing dependence on ionospheric modeling in orbit determination.
Zhao, Qile; Wang, Guangxing; Liu, Zhizhao; Hu, Zhigang; Dai, Zhiqiang; Liu, Jingnan
2016-01-01
Using GNSS observables from some stations in the Asia-Pacific area, the carrier-to-noise ratio (CNR) and multipath combinations of BeiDou Navigation Satellite System (BDS), as well as their variations with time and/or elevation were investigated and compared with those of GPS and Galileo. At the same elevation, the CNR of B1 observables is the lowest among the three BDS frequencies, while B3 is the highest. The code multipath combinations of BDS inclined geosynchronous orbit (IGSO) and medium Earth orbit (MEO) satellites are remarkably correlated with elevation, and the systematic “V” shape trends could be eliminated through between-station-differencing or modeling correction. Daily periodicity was found in the geometry-free ionosphere-free (GFIF) combinations of both BDS geostationary Earth orbit (GEO) and IGSO satellites. The variation range of carrier phase GFIF combinations of GEO satellites is −2.0 to 2.0 cm. The periodicity of the carrier phase GFIF combinations could be significantly mitigated through between-station differencing. Carrier phase GFIF combinations of BDS GEO and IGSO satellites might also contain delays related to satellites. Cross-correlation suggests that the GFIF combinations’ time series of some GEO satellites might vary according to their relative geometries with the sun. PMID:26805831
Zhao, Qile; Wang, Guangxing; Liu, Zhizhao; Hu, Zhigang; Dai, Zhiqiang; Liu, Jingnan
2016-01-20
Using GNSS observables from some stations in the Asia-Pacific area, the carrier-to-noise ratio (CNR) and multipath combinations of BeiDou Navigation Satellite System (BDS), as well as their variations with time and/or elevation were investigated and compared with those of GPS and Galileo. At the same elevation, the CNR of B1 observables is the lowest among the three BDS frequencies, while B3 is the highest. The code multipath combinations of BDS inclined geosynchronous orbit (IGSO) and medium Earth orbit (MEO) satellites are remarkably correlated with elevation, and the systematic "V" shape trends could be eliminated through between-station-differencing or modeling correction. Daily periodicity was found in the geometry-free ionosphere-free (GFIF) combinations of both BDS geostationary Earth orbit (GEO) and IGSO satellites. The variation range of carrier phase GFIF combinations of GEO satellites is -2.0 to 2.0 cm. The periodicity of the carrier phase GFIF combinations could be significantly mitigated through between-station differencing. Carrier phase GFIF combinations of BDS GEO and IGSO satellites might also contain delays related to satellites. Cross-correlation suggests that the GFIF combinations' time series of some GEO satellites might vary according to their relative geometries with the sun.
CFD Sensitivity Analysis of a Modern Civil Transport Near Buffet-Onset Conditions
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Allison, Dennis O.; Biedron, Robert T.; Buning, Pieter G.; Gainer, Thomas G.; Morrison, Joseph H.; Rivers, S. Melissa; Mysko, Stephen J.; Witkowski, David P.
2001-01-01
A computational fluid dynamics (CFD) sensitivity analysis is conducted for a modern civil transport at several conditions ranging from mostly attached flow to flow with substantial separation. Two different Navier-Stokes computer codes and four different turbulence models are utilized, and results are compared both to wind tunnel data at flight Reynolds number and to flight data. In-depth CFD sensitivities to grid, code, spatial differencing method, aeroelastic shape, and turbulence model are described for conditions near buffet onset (a condition at which significant separation exists). In summary, given a grid of sufficient density for a given aeroelastic wing shape, the combined approximate error band in CFD at conditions near buffet onset due to code, spatial differencing method, and turbulence model is: 6% in lift, 7% in drag, and 16% in moment. The two biggest contributors to this uncertainty are turbulence model and code. Computed results agree well with wind tunnel surface pressure measurements for both an overspeed 'cruise' case and a case with small trailing edge separation. At and beyond buffet onset, computed results agree well over the inner half of the wing, but shock location is predicted too far aft at some of the outboard stations. Lift, drag, and moment curves are predicted in good agreement with experimental results from the wind tunnel.
Submarine melting from repeat UAV surveys of icebergs
NASA Astrophysics Data System (ADS)
Hubbard, A., II; Ryan, J.; Smith, L. C.; Hamilton, G. S.
2017-12-01
Greenland's tidewater glaciers are a primary contributor to global sea-level rise, yet their future trajectory remains uncertain due to their non-linear response to oceanic forcing: particularly with respect to rapid submarine melting and under-cutting of their calving fronts. To improve understanding of ice-ocean interactions, we conducted repeat unmanned aerial vehicle (UAV) surveys across the terminus of Store Glacier and its adjacent fjord between May and June 2014. The derived imagery provides insight into frontal plume dynamics and the changing freeboard volume of icebergs in the fjord as they ablate. Following the methodology of Enderlin and Hamilton (2014), by differencing iceberg freeboard volume, we constrain submarine melt rates adjacent to the calving front. We find that plume and submarine melt rates are critical to mass loss variability across the calving front. Although the frontal ablation of Store Glacier is dominated by large mechanical calving events, the undercutting induced by the meltwater plume increases the frequency of calving and initiates frontal retreat. We conclude that even small increases in submarine melting due to changes in the meltwater plume duration and/or circulation patterns can have important consequences for frontal mass loss from large outlet glaciers draining the Greenland ice sheet.
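Following the cited freeboard-differencing approach in outline, iceberg freeboard volumes can be converted to total volumes and to an area-averaged melt rate roughly as follows (the hydrostatic assumption, densities, and submerged-area normalization are illustrative, not the paper's exact values):

```latex
% Iceberg volume from the measured freeboard volume V_f, assuming hydrostatic equilibrium:
V = V_f \,\frac{\rho_{\mathrm{water}}}{\rho_{\mathrm{water}} - \rho_{\mathrm{ice}}}
% Area-averaged submarine melt rate from the change in total volume between surveys,
% normalized by the submerged surface area A_sub:
\dot{m} = \frac{V(t_1) - V(t_2)}{A_{\mathrm{sub}}\,(t_2 - t_1)}
```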
Impact of India's watershed development programs on biomass productivity
NASA Astrophysics Data System (ADS)
Bhalla, R. S.; Devi Prasad, K. V.; Pelkey, Neil W.
2013-03-01
Watershed development (WSD) is an important and expensive rural development initiative in India. Proponents of the approach contend that treating watersheds will increase agricultural and overall biomass productivity, which in turn will reduce rural poverty. We used the satellite-measured normalized difference vegetation index (NDVI) as a proxy for land productivity to test this crucial contention. We compared microwatersheds that had received funding and completed watershed restoration with adjacent untreated microwatersheds in the same region. As the grouping criteria used can influence results, we analyzed microwatersheds grouped by catchment, state, ecological region, and biogeographical zone. We also analyzed pretreatment and posttreatment changes for the same watersheds in those schemes. Our findings show that WSD has not resulted in a significant increase in productivity in treated microwatersheds for any grouping, when compared either to adjacent untreated microwatersheds or to the same microwatersheds prior to treatment. We conclude that these well-intentioned, people-centric WSD efforts may be inhibited by failing to adequately address the basic geomorphology and hydraulic condition of the catchment areas at all scales.
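To make the comparison concrete, here is a minimal sketch of the kind of paired contrast the study describes: the NDVI change in each treated microwatershed is differenced against the change in its adjacent untreated neighbour, and the contrasts are tested for a treatment effect. The NDVI values below are made up for illustration, and the Wilcoxon signed-rank test is only one reasonable choice of paired test; the paper's actual groupings and statistics are more involved.

```python
import numpy as np
from scipy import stats

# Illustrative (made-up) mean NDVI values for paired microwatersheds.
treated_pre    = np.array([0.41, 0.38, 0.45, 0.50, 0.36, 0.44])
treated_post   = np.array([0.44, 0.40, 0.46, 0.53, 0.38, 0.47])
untreated_pre  = np.array([0.40, 0.37, 0.46, 0.49, 0.35, 0.43])
untreated_post = np.array([0.41, 0.38, 0.48, 0.50, 0.36, 0.45])

# Change in each group, then the treated-minus-untreated contrast per pair.
delta_treated = treated_post - treated_pre
delta_untreated = untreated_post - untreated_pre
contrast = delta_treated - delta_untreated

# Paired nonparametric test of whether treatment changed NDVI beyond the
# background trend seen in the adjacent untreated microwatersheds.
stat, p_value = stats.wilcoxon(contrast)
print(f"median contrast = {np.median(contrast):+.3f}, p = {p_value:.3f}")
```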
Discrete models for the numerical analysis of time-dependent multidimensional gas dynamics
NASA Technical Reports Server (NTRS)
Roe, P. L.
1984-01-01
A possible technique is explored for extending to multidimensional flows some of the upwind-differencing methods that are highly successful in the one-dimensional case. Emphasis is on the two-dimensional case, and the flow domain is assumed to be divided into polygonal computational elements. Inside each element, the flow is represented by a local superposition of elementary solutions consisting of plane waves not necessarily aligned with the element boundaries.
NASA Technical Reports Server (NTRS)
Warming, R. F.; Beam, R. M.
1978-01-01
Efficient, noniterative, implicit finite-difference algorithms are systematically developed for nonlinear conservation laws, including purely hyperbolic systems and mixed hyperbolic-parabolic systems. Utilization of rational-fraction or Padé time-differencing formulas yields a direct and natural derivation of an implicit scheme in a delta form. Attention is given to advantages of the delta formulation and to various properties of one- and two-dimensional algorithms.
NASA Astrophysics Data System (ADS)
Vincent, C.; Ramanathan, Al.; Wagnon, P.; Dobhal, D. P.; Linda, A.; Berthier, E.; Sharma, P.; Arnaud, Y.; Azam, M. F.; Jose, P. G.; Gardelle, J.
2013-04-01
The volume change of the Chhota Shigri Glacier (India, 32°20' N, 77°30' E) between 1988 and 2010 has been determined using in situ geodetic measurements. This glacier has experienced only a slight mass loss between 1988 and 2010 (-3.8 ± 2.0 m w.e. (water equivalent), corresponding to -0.17 ± 0.09 m w.e. yr-1). Using satellite digital elevation model (DEM) differencing and field measurements, we measure a negative mass balance (MB) between 1999 and 2010 (-4.8 ± 1.8 m w.e., corresponding to -0.44 ± 0.16 m w.e. yr-1). Thus, we deduce a slightly positive or near-zero MB between 1988 and 1999 (+1.0 ± 2.7 m w.e., corresponding to +0.09 ± 0.24 m w.e. yr-1). Furthermore, satellite DEM differencing reveals that the MB of the Chhota Shigri Glacier (-0.39 ± 0.15 m w.e. yr-1) has been only slightly less negative than the MB of a 2110 km2 glacierized area in the Lahaul and Spiti region (-0.44 ± 0.09 m w.e. yr-1) during 1999-2011. Hence, we conclude that the ice wastage is probably moderate in this region over the last 22 yr, with near-equilibrium conditions during the 1990s and ice mass loss afterwards. The turning point from a balanced to a negative mass budget is not known but probably lies in the late 1990s, and at the latest in 1999. This positive or near-zero MB for Chhota Shigri Glacier (and probably for the surrounding glaciers of the Lahaul and Spiti region) during at least part of the 1990s contrasts with a recent compilation of MB data in the Himalayan range that indicated ice wastage since 1975. However, in agreement with this compilation, we confirm more negative balances since the beginning of the 21st century.
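The geodetic method referred to here reduces, in essence, to averaging the elevation change over the glacier and converting it to water equivalent with an assumed density. A minimal sketch of that conversion is given below; the 900 kg m-3 ice density is a common conversion assumption, not a value taken from the paper, and real processing would also handle co-registration, data gaps, and uncertainty propagation.

```python
import numpy as np

def geodetic_mass_balance(dem_old, dem_new, glacier_mask, years,
                          rho_ice=900.0, rho_water=1000.0):
    """Glacier-wide mass balance rate (m w.e. per year) from two co-registered DEMs.

    dem_old, dem_new : 2-D elevation arrays on the same grid (m)
    glacier_mask     : boolean array selecting on-glacier pixels
    years            : time separation between the DEMs
    rho_ice assumes the volume change is ice at 900 kg m^-3 (a common,
    but not universal, density-conversion assumption).
    """
    dh = dem_new - dem_old
    mean_dh = np.nanmean(dh[glacier_mask])   # mean elevation change (m)
    return mean_dh * (rho_ice / rho_water) / years
```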
NASA Astrophysics Data System (ADS)
Brasington, J.; Hicks, M.; Wheaton, J. M.; Williams, R. D.; Vericat, D.
2013-12-01
Repeat surveys of channel morphology provide a means to quantify fluvial sediment storage and enable inferences about changes in long-term sediment supply, watershed delivery and bed level adjustment; information vital to support effective river and land management. Over shorter time-scales, direct differencing of fluvial terrain models may also offer a route to predict reach-averaged sediment transport rates and quantify the patterns of channel morphodynamics and the processes that force them. Recent and rapid advances in geomatics have facilitated these goals by enabling the acquisition of topographic data at spatial resolutions and precisions suitable for characterising river morphology at the scale of individual grains over multi-kilometre reaches. Despite improvements in topographic surveying, inverting the terms of the sediment budget to derive estimates of sediment transport and link these to morphodynamic processes is, nonetheless, often confounded by limited knowledge of either the sediment supply or efflux across a boundary of the control volume, or unobserved cut-and-fill taking place between surveys. This latter problem is particularly poorly constrained, as field logistics frequently preclude surveys at a temporal frequency sufficient to capture changes in sediment storage associated with each competent event, let alone changes during individual floods. In this paper, we attempt to quantify the principal sources of uncertainty in morphologically-derived bedload transport rates for the large, labile, gravel-bed braided Rees River which drains the Southern Alps of NZ. During the austral summer of 2009-10, a unique timeseries of 10 high quality DEMs was derived for a 3 x 0.7 km reach of the Rees, using a combination of mobile terrestrial laser scanning, aDcp soundings and aerial image analysis. Complementary measurements of the forcing flood discharges and estimates of event-based particle step lengths were also acquired during the field campaign. Together, the resulting dataset quantifies the evolution of the study reach over an annual flood season and provides an unprecedented insight into the patterns and processes of braiding. Uncertainties in the inferred rates of bedload transport are associated with the temporal and spatial frequency of measurements used to estimate the storage term of the sediment budget, and methods used to derive the boundary sediment flux. Results obtained reveal that over the annual flood season, over 80% of the braidplain was mobilised and that more than 50% of the bed experienced multiple cycles of cut and fill. Integration of cut and fill volumes event-by-event were found to be approximately 300% of the net change between October and May. While significant uncertainties reside in estimates of the boundary flux, rates of bedload transport derived for individual events are shown to correlate well with total energy expenditure and suggest that a relatively simple relationship may exist between the driving hydraulic forces at the reach scale and the geomorphic work performed.
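The storage term of such a morphological budget is usually computed from a DEM of Difference (DoD), thresholded by a level of detection so that survey noise is not counted as erosion or deposition. The sketch below shows that calculation; the 0.1 m level of detection and the grid inputs are placeholders, not values from the Rees survey.

```python
import numpy as np

def dod_budget(dem_t0, dem_t1, cell_area, lod=0.1):
    """Morphological sediment budget from a DEM of Difference (DoD).

    dem_t0, dem_t1 : co-registered elevation grids (m)
    cell_area      : grid-cell area (m^2)
    lod            : level of detection; changes smaller than this are
                     treated as survey noise (assumed, survey-specific value)
    Returns fill volume, cut volume, and net storage change (m^3).
    """
    dod = dem_t1 - dem_t0
    dod = np.where(np.abs(dod) < lod, 0.0, dod)   # threshold by the LoD
    fill = np.nansum(dod[dod > 0]) * cell_area    # deposition
    cut = -np.nansum(dod[dod < 0]) * cell_area    # erosion
    return fill, cut, fill - cut
```

Summing the thresholded cut and fill volumes survey by survey is what allows the seasonal turnover (reported here as roughly 300% of the net October-May change) to exceed a single start-to-end difference.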
An investigation of new methods for estimating parameter sensitivities
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1988-01-01
Parameter sensitivity is defined as the estimation of changes in the modeling functions and the design variables due to small changes in the fixed parameters of the formulation. Current methods for estimating parameter sensitivities either require difficult-to-obtain second-order information or do not return reliable estimates for the derivatives. Additionally, all the methods assume that the set of active constraints does not change in a neighborhood of the estimation point. If the active set does in fact change, then any extrapolations based on these derivatives may be in error. The objective here is to investigate more efficient new methods for estimating parameter sensitivities when the active set changes. The new method is based on the recursive quadratic programming (RQP) method, used in conjunction with a differencing formula to produce estimates of the sensitivities. This is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity. To handle changes in the active set, a deflection algorithm is proposed for those cases where the new set of active constraints remains linearly independent. For those cases where dependencies occur, a directional derivative is proposed. A few simple examples are included for the algorithm, but extensive testing has not yet been performed.
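For contrast with the RQP-based estimator described above, the baseline it competes against is brute-force differencing: re-solve the optimization problem at perturbed parameter values and difference the optima. The sketch below illustrates that baseline on a made-up two-variable problem; the objective, the step size, and the use of scipy.optimize.minimize are illustrative assumptions, not the paper's test set or method.

```python
import numpy as np
from scipy.optimize import minimize

def solve(p):
    """Solve a small parametric program: minimize f(x; p) over x.
    (Illustrative convex problem, not the RQP method of the paper.)"""
    f = lambda x: (x[0] - p) ** 2 + (x[1] - 2.0 * p) ** 2 + x[0] * x[1]
    res = minimize(f, x0=np.zeros(2))
    return res.x, res.fun

def parameter_sensitivity(p, dp=1e-4):
    """Central-difference estimates of dx*/dp and df*/dp.

    This is the simple differencing baseline that cheaper sensitivity
    estimators aim to beat in function-evaluation cost."""
    x_plus, f_plus = solve(p + dp)
    x_minus, f_minus = solve(p - dp)
    return (x_plus - x_minus) / (2 * dp), (f_plus - f_minus) / (2 * dp)

dxdp, dfdp = parameter_sensitivity(1.0)
print("dx*/dp ~", dxdp, " df*/dp ~", dfdp)
```

Each central-difference estimate costs two full optimization solves per parameter, which is exactly the expense that sensitivity methods built on the optimizer's own Hessian information try to avoid.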
Joint Cross Well and Single Well Seismic Studies at Lost Hills, California
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gritto, Roland; Daley, Thomas M.; Myer, Larry R.
2002-06-25
A series of time-lapse seismic cross well and single well experiments were conducted in a diatomite reservoir to monitor the injection of CO{sub 2} into a hydrofracture zone, based on P- and S-wave data. A high-frequency piezo-electric P-wave source and an orbital-vibrator S-wave source were used to generate waves that were recorded by hydrophones as well as three-component geophones. The injection well was located about 12 m from the source well. During the pre-injection phase water was injected into the hydrofrac-zone. The set of seismic experiments was repeated after a time interval of 7 months during which CO{sub 2} was injected into the hydrofractured zone. The questions to be answered ranged from the detectability of the geologic structure in the diatomite reservoir to the detectability of CO{sub 2} within the hydrofracture. Furthermore, it was intended to determine which experiment (cross well or single well) is best suited to resolve these features. During the pre-injection experiment, the P-wave velocities exhibited relatively low values between 1700-1900 m/s, which decreased to 1600-1800 m/s during the post-injection phase (-5%). The analysis of the pre-injection S-wave data revealed slow S-wave velocities between 600-800 m/s, while the post-injection data revealed velocities between 500-700 m/s (-6%). These velocity estimates produced high Poisson ratios between 0.36 and 0.46 for this highly porous ({approx} 50%) material. Differencing post- and pre-injection data revealed an increase in Poisson ratio of up to 5%. Both velocity and Poisson estimates indicate the dissolution of CO{sub 2} in the liquid phase of the reservoir accompanied by a pore-pressure increase. The single well data supported the findings of the cross well experiments. P- and S-wave velocities as well as Poisson ratios were comparable to the estimates of the cross well data. The cross well experiment did not detect the presence of the hydrofracture but appeared to be sensitive to overall changes in the reservoir and possibly the presence of a fault. In contrast, the single well reflection data revealed an arrival that could indicate the presence of the hydrofracture between the source and receiver wells, while it did not detect the presence of the fault, possibly due to out of plane reflections.
NASA Astrophysics Data System (ADS)
Muskett, R. R.; Sauber, J. M.; Lingle, C. S.; Rabus, B. T.; Tangborn, W. V.; Echelmeyer, K. A.
2005-12-01
Three- to 5-year surface elevation changes on Bagley Ice Valley, Guyot and Yahtse Glaciers, in the eastern Chugach and St. Elias Mtns of south-central Alaska, are estimated using ICESat-derived data and digital elevation models (DEMs) derived from interferometric synthetic aperture radar (InSAR) data. The surface elevations of these glaciers are influenced by climatic warming superimposed on surge dynamics (in the case of Bagley Ice Valley) and tidewater glacier dynamics (in the cases of Guyot and Yahtse Glaciers) in this coastal high-precipitation regime. Bagley Ice Valley / Bering Glacier last surged in 1993-95. Guyot and Yahtse Glaciers, as well as the nearby Tyndell Glacier, have experienced massive tidewater retreat during the past century, as well as during recent decades. The ICESat-derived elevation data we employ were acquired in early autumn in both 2003 and 2004. The NASA/NIMA Shuttle Radar Topography Mission (SRTM) DEM that we employ was derived from X-band InSAR data acquired during this 11-22 Feb. 2000 mission and processed by the German Aerospace Center. This DEM was corrected for estimated systematic error, and a mass balance model was employed to account for seasonal snow accumulation. The Star-3i airborne, X-band, InSAR-derived DEM that we employ was acquired 4-13 Sept. 2000 by Intermap Technologies, Inc., and was also processed by them. The ICESat-derived profiles crossing Bagley Ice Valley, differenced with Star-3i DEM elevations, indicate preliminary mean along-profile elevation increases of 5.6 ± 3.4 m at 1315 m altitude, 7.4 ± 2.7 m at 1448 m altitude, 4.7 ± 1.9 m at 1557 m altitude, 1.3 ± 1.4 m at 1774 m altitude, and 2.5 ± 1.5 m at 1781 m altitude. This is qualitatively consistent with the rising surface on Bagley Ice Valley observed by Muskett et al. [2003]. The ICESat-derived profiles crossing Yahtse Glacier, differenced with the SRTM DEM elevations, indicate preliminary mean elevation changes (negative implies decrease) of -0.9 ± 3.5 m at 1562 m altitude, -2.6 ± 2.8 m at 1378 m altitude, 6.1 ± 3.5 m at 1142 m altitude, 1.4 ± 12.1 m at 1232 m altitude, -4.0 ± 4.2 m at 250 m to 1217 m altitude, -1.8 ± 3.3 m at 1200 m altitude, and 8.0 ± 6.4 m at 940 m altitude. One ICESat-derived track-to-DEM comparison on Guyot Glacier indicates a preliminary mean elevation change in the 478 m to 1150 m altitude range of -2.8 ± 14.1 m. Results, including additional comparisons to small-aircraft laser altimeter data, with more fully-corrected for estimated snow and ice accumulation / ablation between acquisitions times, will be presented. [Muskett, R.R., C.S. Lingle, W.V. Tangborn, and B.T. Rabus, Multi-decadal elevation changes on Bagley Ice Valley and Malaspina Glacier, Alaska, GRL, 30 (16), 1857, doi:10.1029/2003GL017707, 2003.
Three-dimensional simulation of vortex breakdown
NASA Technical Reports Server (NTRS)
Kuruvila, G.; Salas, M. D.
1990-01-01
The integral form of the complete, unsteady, compressible, three-dimensional Navier-Stokes equations in conservation form, cast in a generalized coordinate system, is solved numerically to simulate the vortex breakdown phenomenon. The inviscid fluxes are discretized using Roe's upwind-biased flux-difference splitting scheme and the viscous fluxes are discretized using central differencing. Time integration is performed using a backward Euler ADI (alternating direction implicit) scheme. A full approximation multigrid is used to accelerate the convergence to steady state.
Computer program documentation: Raw-to-processed SINDA program (RTOPHS) user's guide
NASA Technical Reports Server (NTRS)
Damico, S. J.
1980-01-01
Use of the Raw-to-Processed SINDA (Systems Improved Numerical Differencing Analyzer) program, RTOPHS, which provides a means of making the temperature prediction data on binary HSTFLO and HISTRY units generated by SINDA available to engineers in an easy-to-use format, is discussed. The program accomplishes this by reading the HISTRY unit; according to user input instructions, the desired times and temperature prediction data are extracted and written to a word-addressable drum file.
Impacts of Ocean Waves on the Atmospheric Surface Layer: Simulations and Observations
2008-06-06
energy and pressure described in § 4 are solved using a mixed finite-difference pseudospectral scheme with a third-order Runge-Kutta time stepping with a ... to that in our DNS code (Sullivan and McWilliams 2002; Sullivan et al. 2000). For our mixed finite-difference pseudospectral differencing scheme a ... Poisson equation. The spatial discretization is pseudospectral along lines of constant ... and second-order finite difference in the vertical
Application of a Chimera Full Potential Algorithm for Solving Aerodynamic Problems
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Kwak, Dochan (Technical Monitor)
1997-01-01
A numerical scheme utilizing a chimera zonal grid approach for solving the three dimensional full potential equation is described. Special emphasis is placed on describing the spatial differencing algorithm around the chimera interface. Results from two spatial discretization variations are presented; one using a hybrid first-order/second-order-accurate scheme and the second using a fully second-order-accurate scheme. The presentation is highlighted with a number of transonic wing flow field computations.
Lee, Eun Sook; Kim, Sung Hyo; Kim, Sun Mi; Sun, Jeong Ju
2005-12-01
The purpose of this study was to determine the effect of an educational program of manual lymph massage (EPMLM) on arm functioning and quality of life (QOL) in breast cancer patients with lymphedema. Subjects in the experimental group (n=20) participated in the EPMLM for 6 weeks from June to July 2005. The EPMLM consisted of 2 weeks of training in lymph massage followed by 4 weeks of encouragement and support of self-care using lymph massage. Arm functioning was assessed at pre-treatment, 2 weeks, and 6 weeks using the Arm Functioning Questionnaire. QOL was assessed at pre-treatment and 6 weeks using the SF-36. Outcomes for the experimental group were compared with those of a control group (n=20). The collected data were analyzed using the SPSS 10.0 statistical program. Arm functioning in the experimental group increased from 2 weeks onward (W=.224, p=.011) and differed significantly from the control group at 2 weeks (Z=-2.241, p=.024) and 6 weeks (Z=-2.453, p=.013). The physical function domain of QOL increased in the experimental group (Z=-1.162, p=.050) and also differed significantly from the control group (Z=-2.182, p=.030) at 6 weeks. The results suggest that an educational program of manual lymph massage can improve arm functioning and the physical function domain of QOL in breast cancer patients with lymphedema.
Arid land monitoring using Landsat albedo difference images
Robinove, Charles J.; Chavez, Pat S.; Gehring, Dale G.; Holmgren, Ralph
1981-01-01
The Landsat albedo, or percentage of incoming radiation reflected from the ground in the wavelength range of 0.5 μm to 1.1 μm, is calculated from an equation using the Landsat digital brightness values and solar irradiance values, and correcting for atmospheric scattering, multispectral scanner calibration, and sun angle. The albedo calculated for each pixel is used to create an albedo image, whose grey scale is proportional to the albedo. Differencing sequential registered images and mapping selected values of the difference is used to create quantitative maps of increased or decreased albedo values of the terrain. All maps and other output products are in black and white rather than color, thus making the method quite economical. Decreases of albedo in arid regions may indicate improvement of land quality; increases may indicate degradation. Tests of the albedo difference mapping method in the Desert Experimental Range in southwestern Utah (a cold desert with little long-term terrain change) for a four-year period show that mapped changes can be correlated with erosion from flash floods, increased or decreased soil moisture, and increases or decreases in the density of desert vegetation, both perennial shrubs and annual plants. All terrain changes identified in this test were related to variations in precipitation. Although further tests of this method in hot deserts showing severe "desertification" are needed, the method is nevertheless recommended for experimental use in monitoring terrain change in other arid and semiarid regions of the world.
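A minimal sketch of the differencing step described above is given below: two co-registered albedo images are differenced and each pixel is classed as increase, decrease, or no change. The 0.03 albedo threshold is an assumed illustrative value, not one from the study, which mapped selected difference values rather than a single threshold.

```python
import numpy as np

def albedo_change_map(albedo_t0, albedo_t1, threshold=0.03):
    """Classify per-pixel albedo change between two co-registered albedo images.

    threshold is an assumed minimum change considered meaningful; smaller
    differences are mapped to 'no change'. Returns an int8 map:
    -1 = albedo decrease, 0 = no change, +1 = albedo increase.
    """
    diff = albedo_t1 - albedo_t0
    change = np.zeros(diff.shape, dtype=np.int8)
    change[diff > threshold] = 1
    change[diff < -threshold] = -1
    return change
```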
The Vast Population of Wolf-Rayet and Red Supergiant Stars in M101. I. Motivation and First Results
NASA Astrophysics Data System (ADS)
Shara, Michael M.; Bibby, Joanne L.; Zurek, David; Crowther, Paul A.; Moffat, Anthony F. J.; Drissen, Laurent
2013-12-01
Assembling a catalog of at least 10,000 Wolf-Rayet (W-R) stars is an essential step in proving (or disproving) that these stars are the progenitors of Type Ib and Type Ic supernovae. To this end, we have used the Hubble Space Telescope (HST) to carry out a deep, He II optical narrowband imaging survey of the ScI spiral galaxy M101. Almost the entire galaxy was imaged with the unprecedented depth and resolution that only the HST affords. Differenced with archival broadband images, the narrowband images allow us to detect much of the W-R star population of M101. We describe the extent of the survey and our images, as well as our data reduction procedures. A detailed broadband-narrowband imaging study of a field east of the center of M101, containing the giant star-forming region NGC 5462, demonstrates our completeness limits, how we find W-R candidates, their properties and spatial distribution, and how we rule out most contaminants. We use the broadband images to locate luminous red supergiant (RSG) candidates. The spatial distributions of the W-R and RSG stars near NGC 5462 are strikingly different. W-R stars dominate the complex core, while RSGs dominate the complex halo. Future papers in this series will describe and catalog more than a thousand W-R and RSG candidates that are detectable in our images, as well as spectra of many of those candidates.
NASA Astrophysics Data System (ADS)
Prates, G.; Berrocoso, M.; Fernández-Ros, A.; García, A.; Ortiz, R.
2012-04-01
El Hierro Island (Canary Islands, Spain) has undergone a submarine eruption a few kilometers to its southeast, detected on October 10, on the rift alignment that cuts across the island. However, the seismicity level had suddenly increased around July 17, and ground deformation was detected by the only continuously observed GNSS-GPS (Global Navigation Satellite Systems - Global Positioning System) benchmark, FRON, in the El Golfo area. Based on that information several other GNSS-GPS benchmarks were installed, some of which were also observed continuously. A normal vector analysis was applied to the collected data. The variation of the normal vector magnitude identified local extension-compression regimes, while the normal vector inclination showed the relative height variation between the three benchmarks that define the plane for which the normal vector is computed. To accomplish this analysis the data were first processed to obtain positioning solutions every 30 minutes using the Bernese GPS Software 5.0, further enhanced by a discrete Kalman filter, giving an overall millimeter-level precision. These solutions were computed using the IGS (International GNSS Service) ultra-rapid orbits and the double-differenced ionosphere-free combination. With this strategy the positioning solutions were attained in near real-time. Later, with the IGS rapid orbits, the data were reprocessed to provide added confidence in the solutions. Two triangles were then considered: a smaller one located in the El Golfo area within the historically collapsed caldera, and a larger one defined by benchmarks placed in Valverde, El Golfo and La Restinga, the town closest to the eruption's location, covering almost the entire island's surface above sea level. With these two triangles the pre-eruption and post-eruption deformation of El Hierro's surface will be further analyzed.
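The normal vector analysis mentioned here can be pictured with a small sketch: the cross product of two triangle edges gives the plane normal, whose magnitude (twice the triangle area) tracks areal extension or compression and whose tilt from the vertical tracks relative height changes among the benchmarks. The sketch below uses a local east-north-up frame and made-up coordinates; it illustrates the geometry only, not the authors' processing chain.

```python
import numpy as np

def triangle_normal(p_a, p_b, p_c):
    """Normal vector of the plane through three benchmark positions.

    p_a, p_b, p_c : 3-vectors in a local topocentric frame (east, north, up), in m.
    Returns (magnitude, inclination_deg). The magnitude equals twice the
    triangle area, so its variation tracks areal extension/compression; the
    inclination from the vertical reflects relative height changes between
    the benchmarks.
    """
    n = np.cross(p_b - p_a, p_c - p_a)
    magnitude = np.linalg.norm(n)
    inclination = np.degrees(np.arccos(abs(n[2]) / magnitude))
    return magnitude, inclination

# Illustrative epoch: three benchmarks in a local ENU frame (meters).
a = np.array([0.0, 0.0, 10.0])
b = np.array([5000.0, 0.0, 350.0])
c = np.array([0.0, 7000.0, 120.0])
print(triangle_normal(a, b, c))
```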
Bayesian Inference for Signal-Based Seismic Monitoring
NASA Astrophysics Data System (ADS)
Moore, D.
2015-12-01
Traditional seismic monitoring systems rely on discrete detections produced by station processing software, discarding significant information present in the original recorded signal. SIG-VISA (Signal-based Vertically Integrated Seismic Analysis) is a system for global seismic monitoring through Bayesian inference on seismic signals. By modeling signals directly, our forward model is able to incorporate a rich representation of the physics underlying the signal generation process, including source mechanisms, wave propagation, and station response. This allows inference in the model to recover the qualitative behavior of recent geophysical methods including waveform matching and double-differencing, all as part of a unified Bayesian monitoring system that simultaneously detects and locates events from a global network of stations. We demonstrate recent progress in scaling up SIG-VISA to efficiently process the data stream of global signals recorded by the International Monitoring System (IMS), including comparisons against existing processing methods that show increased sensitivity from our signal-based model and in particular the ability to locate events (including aftershock sequences that can tax analyst processing) precisely from waveform correlation effects. We also provide a Bayesian analysis of an alleged low-magnitude event near the DPRK test site in May 2010 [1] [2], investigating whether such an event could plausibly be detected through automated processing in a signal-based monitoring system. [1] Zhang, Miao and Wen, Lianxing. "Seismological Evidence for a Low-Yield Nuclear Test on 12 May 2010 in North Korea". Seismological Research Letters, January/February 2015. [2] Richards, Paul. "A Seismic Event in North Korea on 12 May 2010". CTBTO SnT 2015 oral presentation, video at https://video-archive.ctbto.org/index.php/kmc/preview/partner_id/103/uiconf_id/4421629/entry_id/0_ymmtpps0/delivery/http
How can we Optimize Global Satellite Observations of Glacier Velocity and Elevation Changes?
NASA Astrophysics Data System (ADS)
Willis, M. J.; Pritchard, M. E.; Zheng, W.
2015-12-01
We have started a global compilation of glacier surface elevation change rates measured by altimeters and differencing of Digital Elevation Models and glacier velocities measured by Synthetic Aperture Radar (SAR) and optical feature tracking as well as from Interferometric SAR (InSAR). Our goal is to compile statistics on recent ice flow velocities and surface elevation change rates near the fronts of all available glaciers using literature and our own data sets of the Russian Arctic, Patagonia, Alaska, Greenland and Antarctica, the Himalayas, and other locations. We quantify the percentage of the glaciers on the planet that can be regarded as fast flowing glaciers, with surface velocities of more than 50 meters per year, while also recording glaciers that have elevation change rates of more than 2 meters per year. We examine whether glaciers have significant interannual variations in velocities, or have accelerated or stagnated where time series of ice motions are available. We use glacier boundaries and identifiers from the Randolph Glacier Inventory. Our survey highlights glaciers that are likely to react quickly to changes in their mass accumulation rates. The study also identifies geographical areas where our knowledge of glacier dynamics remains poor. Our survey helps guide how frequently observations must be made in order to provide quality satellite-derived velocity and ice elevation observations at a variety of glacier thermal regimes, speeds and widths. Our objectives are to determine to what extent the joint NASA and Indian Space Research Organization Synthetic Aperture Radar mission (NISAR) will be able to provide global precision coverage of ice speed changes and to determine how to optimize observations from the global constellation of satellite missions to record important changes to glacier elevations and velocities worldwide.
1989-08-09
quantitative and can be ascribed to differences in experimental methodology, recovery methods and computational procedure. One important difference in ... when the oil was pyrolyzed in sealed glass tubes. Aircraft turbo oil lubricants with the designation MIL-L-23699 are in common usage throughout the ... which is not explosive, not an oxidizing agent and is relatively inflammable and non-corrosive. It has the following structure: [structural formula not recoverable from the extracted text]
1987-06-01
number of series among the 63 which were identified as a particular ARIMA form and were "best" modeled by a particular technique. Figure 1 illustrates a ... The integrated autoregressive-moving average model, denoted ARIMA(p,d,q), results from combining a d-th order differencing process ... Experiments, (4) Data Analysis and Modeling, (5) Theory and Probabilistic Inference, (6) Fuzzy Statistics, (7) Forecasting and Prediction, (8) Small Sample
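The d in ARIMA(p,d,q) is the number of times the raw series is differenced before an ARMA(p,q) model is fitted. A minimal illustration of that differencing step is sketched below; the random-walk input is made up purely to show that one pass of differencing turns a nonstationary series into (approximately) white noise.

```python
import numpy as np

def difference(series, d=1):
    """Apply d-th order (non-seasonal) differencing, the 'I' in ARIMA(p,d,q)."""
    x = np.asarray(series, dtype=float)
    for _ in range(d):
        x = np.diff(x)
    return x

# A random walk is nonstationary; one differencing pass recovers white noise.
rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=500))
print(difference(walk, d=1).std())
```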
2014-09-15
solver, OpenFOAM version 2.1. In particular, the incompressible laminar flow equations (Eq. 6-8) were solved in conjunction with the pressure-implicit ... central differencing and upwinding schemes, respectively. Since the OpenFOAM code is inherently transient, steady-state conditions were obtained ... collaborative effort between Kitware and Los Alamos National Laboratory. OpenFOAM is a free, open-source computational fluid dynamics software
An application of fractional integration to a long temperature series
NASA Astrophysics Data System (ADS)
Gil-Alana, L. A.
2003-11-01
Some recently proposed techniques of fractional integration are applied to a long UK temperature series. The tests are valid under general forms of serial correlation and do not require estimation of the fractional differencing parameter. The results show that central England temperatures have increased about 0.23 °C per 100 years in recent history. Attempting to summarize the conclusions for each of the months, we are left with the impression that the highest increase has occurred during the months from October to March.
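Fractional integration generalizes the integer d of ARIMA models to non-integer values, and the corresponding fractional differencing filter (1 - B)^d can be written as a binomial-weight convolution. The sketch below applies a truncated version of that filter; the truncation length and the test on synthetic noise are illustrative assumptions and do not reproduce the testing procedures used in the paper, which deliberately avoid estimating d.

```python
import numpy as np

def frac_diff(series, d, n_weights=250):
    """Apply the fractional differencing filter (1 - B)^d to a series.

    Weights follow the binomial expansion pi_0 = 1,
    pi_k = pi_{k-1} * (k - 1 - d) / k, truncated at n_weights lags.
    Values 0 < d < 0.5 correspond to a stationary long-memory process.
    """
    w = np.ones(n_weights)
    for k in range(1, n_weights):
        w[k] = w[k - 1] * (k - 1 - d) / k
    x = np.asarray(series, dtype=float)
    # Convolve with the truncated weight sequence; drop the warm-up period.
    return np.convolve(x, w, mode="full")[n_weights - 1:len(x)]

rng = np.random.default_rng(1)
y = rng.normal(size=2000)
print(frac_diff(y, d=0.3).shape)
```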
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larsen, E.W.
A class of Projected Discrete-Ordinates (PDO) methods is described for obtaining iterative solutions of discrete-ordinates problems with convergence rates comparable to those observed using Diffusion Synthetic Acceleration (DSA). The spatially discretized PDO solutions are generally not equal to the DSA solutions, but unlike DSA, which requires great care in the use of spatial discretizations to preserve stability, the PDO solutions remain stable and rapidly convergent with essentially arbitrary spatial discretizations. Numerical results are presented which illustrate the rapid convergence and the accuracy of solutions obtained using PDO methods with commonplace differencing methods.
Computational fluid mechanics utilizing the variational principle of modeling damping seals
NASA Technical Reports Server (NTRS)
Abernathy, J. M.
1986-01-01
A computational fluid dynamics code for application to traditional incompressible flow problems has been developed. The method is actually a slight compressibility approach which takes advantage of the bulk modulus and finite sound speed of all real fluids. The finite element numerical analog uses a dynamic differencing scheme based, in part, on a variational principle for computational fluid dynamics. The code was developed in order to study the feasibility of damping seals for high speed turbomachinery. Preliminary seal analyses have been performed.
Further results on the stagnation point boundary layer with hydrogen injection.
NASA Technical Reports Server (NTRS)
Wu, P.; Libby, P. A.
1972-01-01
The results of an earlier paper on the behavior of the boundary layer at an axisymmetric stagnation point with hydrogen injection into a hot external airstream are extended to span the entire range from essentially frozen to essentially equilibrium flow. This extension is made possible by the employment of finite difference methods; the accurate treatment of the boundary conditions at 'infinity,' the differencing technique employed, and the formulation resulting in block tri-diagonal matrices are slight variants in the present work.
Research in computational fluid dynamics
NASA Technical Reports Server (NTRS)
Murman, Earll M.
1987-01-01
The numerical integration of quasi-one-dimensional unsteady flow problems which involve finite-rate chemistry is discussed; the problems are expressed in terms of conservative-form Euler and species conservation equations. Hypersonic viscous calculations for delta wing geometries are also examined. The conical Navier-Stokes equations model was selected in order to investigate the effects of viscous-inviscid interactions; the more complete three-dimensional model is beyond the available computing resources. The flux vector splitting method with van Leer's MUSCL differencing is being used. Preliminary results were computed for several conditions.
CFD in Support of Wind Tunnel Testing for Aircraft/Weapons Integration
2004-06-01
Warming flux vector splitting scheme. Viscous fluxes (computed using spatial central differencing) ... factors to eliminate them from the current computation ... performed. The grid system consisted of 18 x 10^6 points. These newly i-blanked grid ... van Leer, B., "Towards the Ultimate Conservative Difference Scheme V" ... Suhs, N.E., and R.W. Tramel, "PEGSUS 4.0 Users Manual."
2007-11-01
again, with the advent of high-performance spaceborne altimeters (e.g., high-aspect-ratio ... ) of the prevailing T, S, and, hence, D gradients through the ... rectangular domains with linear dimensions of about 60 km in a 4-h flight ... largely, if not completely, eliminated by the differencing operation ... A simple four-quadrant arctangent of the ratio would serve our ... strongest terms in the density in the 0° and 180° directions, whereas compensation is most ...
Numerical Field Model Simulation of Full Scale Fire Tests in a Closed Spherical/Cylindrical Vessel.
1987-12-01
the behavior of an actual fire on board a ship. The computer model will be verified by the experimental data obtained in Fire-1. It is important to ... behavior in simulations where convection is important. The upwind differencing scheme takes into account the unsymmetrical phenomenon of convection by using ...
Computing interface motion in compressible gas dynamics
NASA Technical Reports Server (NTRS)
Mulder, W.; Osher, S.; Sethian, James A.
1992-01-01
An analysis is conducted of the coupling of Osher and Sethian's (1988) 'Hamilton-Jacobi' level set formulation of the equations of motion for propagating interfaces to a system of conservation laws for compressible gas dynamics, giving attention to both the conservative and nonconservative differencing of the level set function. The capabilities of the method are illustrated in view of the results of numerical convergence studies of the compressible Rayleigh-Taylor and Kelvin-Helmholtz instabilities for air-air and air-helium boundaries.
CFD in the 1980's from one point of view
NASA Technical Reports Server (NTRS)
Lomax, Harvard
1991-01-01
The present interpretive treatment of the development history of CFD in the 1980s gives attention to advancements in such algorithmic techniques as flux Jacobian-based upwind differencing, total variation-diminishing and essentially nonoscillatory schemes, multigrid methods, unstructured grids, and nonrectangular structured grids. At the same time, computational turbulence research gave attention to turbulence modeling on the bases of increasingly powerful supercomputers and meticulously constructed databases. The major future developments in CFD will encompass such capabilities as structured and unstructured three-dimensional grids.
Mid-Infrared Spectroscopy of Carbon Stars in the Small Magellanic Cloud
2006-07-10
nod. Before extracting spectra from the images, we used the imclean software package ... determined from neighboring pixels. In addition to the dust features, the IRS wavelength range also ... To extract spectra from the cleaned and differenced ... fit a variety of spectral feature shapes using MgS ... Example of the extraction of the molecular bands and the SiC dust feature from the spectrum ... 24 μm, and they avoid any potential problems at the joint ...
NASA Technical Reports Server (NTRS)
Oman, L. D.; Douglass, A. R.; Ziemke, J. R.; Rodriguez, J. M.; Waugh, D. W.; Nielsen, J. E.
2012-01-01
The El Nino-Southern Oscillation (ENSO) is the dominant mode of tropical variability on interannual time scales. ENSO appears to extend its influence into the chemical composition of the tropical troposphere. Recent work has revealed an ENSO-induced wave-1 anomaly in observed tropical tropospheric column ozone. This results in a dipole over the western and eastern tropical Pacific, whereby differencing the two regions produces an ozone anomaly with an extremely high correlation to the Nino 3.4 Index. We have successfully reproduced this feature using the Goddard Earth Observing System Version 5 (GEOS-5) general circulation model coupled to a comprehensive stratospheric and tropospheric chemical mechanism, forced with observed sea surface temperatures over the past 25 years. An examination of the modeled ozone field reveals the vertical contributions of tropospheric ozone to the column over the western and eastern Pacific region. We will show composition sensitivity in observations from NASA's Aura satellite Microwave Limb Sounder (MLS) and the Tropospheric Emission Spectrometer (TES) and a simulation to provide insight into the vertical structure of these ENSO-induced ozone changes. The ozone changes due to the Quasi-Biennial Oscillation (QBO) in the extra-polar upper troposphere and lower stratosphere in MLS measurements will also be discussed.
Cox, T.J.; Runkel, R.L.
2008-01-01
Past applications of one-dimensional advection, dispersion, and transient storage zone models have almost exclusively relied on a central differencing, Eulerian numerical approximation to the nonconservative form of the fundamental equation. However, there are scenarios where this approach generates unacceptable error. A new numerical scheme for this type of modeling is presented here that is based on tracking Lagrangian control volumes across a fixed (Eulerian) grid. Numerical tests are used to provide a direct comparison of the new scheme versus nonconservative Eulerian numerical methods, in terms of both accuracy and mass conservation. Key characteristics of systems for which the Lagrangian scheme performs better than the Eulerian scheme include: nonuniform flow fields, steep gradient plume fronts, and pulse and steady point source loadings in advection-dominated systems. A new analytical derivation is presented that provides insight into the loss of mass conservation in the nonconservative Eulerian scheme. This derivation shows that loss of mass conservation in the vicinity of spatial flow changes is directly proportional to the lateral inflow rate and the change in stream concentration due to the inflow. While the nonconservative Eulerian scheme has clearly worked well for past published applications, it is important for users to be aware of the scheme's limitations. © 2008 ASCE.
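The conservation issue described here can be reproduced with a toy experiment: advect a pulse through a step change in velocity and compare the discrete mass retained by a flux (conservative) update against an advective (nonconservative) update. The sketch below uses first-order upwinding and made-up grid, velocity, and pulse parameters; it illustrates the qualitative behaviour the derivation explains and is not the transient-storage model or the Lagrangian scheme of the paper.

```python
import numpy as np

# 1-D grid with a step change in velocity, mimicking a reach with lateral inflow.
nx, dx, dt, nsteps = 200, 10.0, 1.0, 2000
x = np.arange(nx) * dx
u = np.where(x < 1000.0, 0.5, 1.0)            # m/s, doubles mid-domain
c_cons = np.exp(-((x - 300.0) / 60.0) ** 2)   # initial concentration pulse
c_noncons = c_cons.copy()

for _ in range(nsteps):
    # Conservative (flux) form: dC/dt = -d(uC)/dx, first-order upwind flux.
    flux = u * c_cons
    c_cons[1:] -= dt / dx * (flux[1:] - flux[:-1])
    # Nonconservative (advective) form: dC/dt = -u dC/dx, upwind gradient.
    c_noncons[1:] -= dt / dx * u[1:] * (c_noncons[1:] - c_noncons[:-1])

print("mass, conservative form   :", c_cons.sum() * dx)
print("mass, nonconservative form:", c_noncons.sum() * dx)
```

In this configuration the nonconservative update changes the total mass where the velocity changes, while the flux-form update preserves it until the pulse reaches the outflow boundary.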
Tidal and tidally averaged circulation characteristics of Suisun Bay, California
Smith, Lawrence H.; Cheng, Ralph T.
1987-01-01
Availability of extensive field data permitted realistic calibration and validation of a hydrodynamic model of tidal circulation and salt transport for Suisun Bay, California. Suisun Bay is a partially mixed embayment of northern San Francisco Bay located just seaward of the Sacramento-San Joaquin Delta. The model employs a variant of an alternating direction implicit finite-difference method to solve the hydrodynamic equations and an Eulerian-Lagrangian method to solve the salt transport equation. An upwind formulation of the advective acceleration terms of the momentum equations was employed to avoid oscillations in the tidally averaged velocity field produced by central spatial differencing of these terms. Simulation results of tidal circulation and salt transport demonstrate that tides and the complex bathymetry determine the patterns of tidal velocities and that net changes in the salinity distribution over a few tidal cycles are small despite large changes during each tidal cycle. Computations of tidally averaged circulation suggest that baroclinic and wind effects are important influences on tidally averaged circulation during low freshwater-inflow conditions. Exclusion of baroclinic effects would lead to overestimation of freshwater inflow by several hundred m3/s for a fixed set of model boundary conditions. Likewise, exclusion of wind would cause an underestimation of flux rates between shoals and channels by 70–100%.
Recent Improvements in AMSR2 Ground-Based RFI Filtering
NASA Astrophysics Data System (ADS)
Scott, J. P.; Gentemann, C. L.; Wentz, F. J.
2015-12-01
Passive satellite radiometer measurements in the microwave frequencies (6-89 GHz) are useful in providing geophysical retrievals of sea surface temperature (SST), atmospheric water vapor, wind speed, rain rate, and more. However, radio frequency interference (RFI) is one of the fastest growing sources of error in these retrievals. RFI can originate from broadcasting satellites, as well as from ground-based instrumentation that makes use of the microwave range. The microwave channel bandwidths used by passive satellite radiometers are often wider than the protected bands allocated for this type of remote sensing, a common practice in microwave radiometer design used to reduce the effect of instrument noise in the observed signal. However, broad channel bandwidths allow greater opportunity for RFI to affect these observations and retrievals. For ground-based RFI, a signal is broadcast directly into the atmosphere which may interfere with the radiometer - its antenna, cold mirror, hot load or the internal workings of the radiometer itself. It is relatively easy to identify and flag RFI from large sources, but more difficult to do so from small, sporadic sources. Ground-based RFI has high spatial and temporal variability, requiring constant, automated detection and removal to avoid spurious trends leaching into the geophysical retrievals. Ascension Island in the South Atlantic Ocean has been one of these notorious ground-based RFI sources, affecting many microwave radiometers, including the AMSR2 radiometer onboard JAXA's GCOM-W1 satellite. Ascension Island RFI mainly affects AMSR2's lower frequency channels (6.9, 7.3, and 10.65 GHz) over a broad spatial region in the South Atlantic Ocean, which makes it challenging to detect and flag this RFI using conventional channel and geophysical retrieval differencing techniques. The authors have developed a new method of using the radiometer's earth counts and hot counts, for the affected channels, to detect an Ascension Island RFI event and flag the data efficiently and accurately, thereby reducing false detections and optimizing retrieval quality and data preservation.
Bayesian Monitoring Systems for the CTBT: Historical Development and New Results
NASA Astrophysics Data System (ADS)
Russell, S.; Arora, N. S.; Moore, D.
2016-12-01
A project at Berkeley, begun in 2009 in collaboration with CTBTO and more recently with LLNL, has reformulated the global seismic monitoring problem in a Bayesian framework. A first-generation system, NETVISA, has been built comprising a spatial event prior and generative models of event transmission and detection, as well as a Monte Carlo inference algorithm. The probabilistic model allows for seamless integration of various disparate sources of information, including negative information (the absence of detections). Working from arrivals extracted by traditional station processing from International Monitoring System (IMS) data, NETVISA achieves a reduction of around 60% in the number of missed events compared with the currently deployed network processing system. It also finds many events that are missed by the human analysts who postprocess the IMS output. Recent improvements include the integration of models for infrasound and hydroacoustic detections and a global depth model for natural seismicity trained from ISC data. NETVISA is now fully compatible with the CTBTO operating environment. A second-generation model called SIGVISA extends NETVISA's generative model all the way from events to raw signal data, avoiding the error-prone bottom-up detection phase of station processing. SIGVISA's model automatically captures the phenomena underlying existing detection and location techniques such as multilateration, waveform correlation matching, and double-differencing, and integrates them into a global inference process that also (like NETVISA) handles de novo events. Initial results for the Western US in early 2008 (when the transportable US Array was operating) show that SIGVISA finds, from IMS data only, more than twice the number of events recorded in the CTBTO Late Event Bulletin (LEB). For mb 1.0-2.5, the ratio is more than 10; put another way, for this data set, SIGVISA lowers the detection threshold by roughly one magnitude compared to LEB. The broader message of this work is that probabilistic inference based on a vertically integrated generative model that directly expresses geophysical knowledge can be a much more effective approach for interpreting scientific data than the traditional bottom-up processing pipeline.
Depth and Distribution of CO2 Snow on Mars
NASA Technical Reports Server (NTRS)
Aharonson, Oded; Zuber, Maria T.; Smith, David E.; Neumann, Gregory A.
2003-01-01
The dynamic role of volatiles on the surface of Mars has been a subject of longstanding interest. In the pre-Viking era, much of the debate was necessarily addressed by theoretical considerations. A particularly influential treatment by Leighton and Murray put forth a simple model relying on solar energy balance, and led to the conclusion that the most prominent volatile exchanging with the atmosphere over seasonal cycles is carbon dioxide. Their model suggested that due to this exchange, atmospheric CO2 partial pressure is regulated by polar ice. While current thinking attributes a larger role to H2O ice than did the occasional thin polar coating this model predicted, the CO2 cycle appears to be essentially correct. There are a number of observational constraints on the seasonal exchange of surface volatiles with the atmosphere. The growth and retreat of polar CO2 frost is visible from Earth-based telescopes and from spacecraft in Mars orbit, both at visible wavelengths and in thermal IR properties of the surface. Recently, variations in Gamma ray and neutron fluxes have also been used to infer integrated changes in CO2 mass on the surface. Measurements made by Viking's Mars Atmospheric Water Detector experiment were sensitive to atmospheric H2O vapor abundance. Surface condensates and their transient nature were detected by the Viking landers. The study here is motivated by recent data collected by the Mars Global Surveyor, affording the opportunity to not only detect the lateral distribution of volatiles, but also to constrain the variable volumes of the reservoirs. We elaborate on a technique first employed by Smith et al. By examining averages of a large number of topographic measurements collected by the Mars Orbiter Laser Altimeter (MOLA), that study showed that the zonal pattern of deposition and sublimation of CO2 can be determined. In their first approach, reference surfaces were fit to all measurements in narrow latitude annuli, and the time dependent variations about those mean surfaces were examined. In their second approach, height measurements from pairs of tracks that cross on the surface were interpolated and differenced, forming a set of crossover residuals. These residuals were then examined as a function of time and latitude. The initial studies averaged over longitude to maximize signal and minimize noise in order to isolate the expected small signal. In this follow-up study we now attempt to extract the elevation change pattern also as a function of longitude, and we focus on the crossover approach.
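The crossover approach mentioned above differences heights measured by two ground tracks where they intersect, so that repeated residuals binned by season and location reveal surface height change. A highly simplified sketch is given below: it approximates the crossover by the closest footprint pair rather than interpolating along-track, and it ignores the orbit and pointing corrections used in the MOLA analysis.

```python
import numpy as np

def crossover_residual(track_a, track_b):
    """Approximate height residual where two altimeter ground tracks cross.

    Each track is an (n, 3) array with columns (x, y, h) in meters. The
    crossover is approximated by the closest pair of footprints between the
    two tracks (a simplification; operational processing interpolates along
    track and applies orbit/pointing corrections).
    Returns h_b - h_a at the approximate crossover point.
    """
    d = np.hypot(track_a[:, None, 0] - track_b[None, :, 0],
                 track_a[:, None, 1] - track_b[None, :, 1])
    i, j = np.unravel_index(np.argmin(d), d.shape)
    return track_b[j, 2] - track_a[i, 2]

# Two made-up crossing tracks: (x, y, h) triplets in meters.
track_1 = np.array([[0.0, y, 1000.0 + 0.01 * y] for y in np.arange(0, 2000, 100)])
track_2 = np.array([[x - 1000.0, 1000.0, 1009.7] for x in np.arange(0, 2000, 100)])
print(crossover_residual(track_1, track_2))
```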
Lunar Radio_phase Ranging in Chinese Lunar Lander Mission for Astrometry
NASA Astrophysics Data System (ADS)
Ping, Jinsong; Meng, Qiao; Li, Wenxiao; Wang, Mingyuan; Wang, Zhen; Zhang, Tianyi; Han, Songtao
2015-08-01
The radio tracking data in lunar and planetary missions can be directly applied for scientific investigation. The variations of phase and of amplitude of the radio carrier wave signal linked between the spacecraft and the ground tracking antenna are used to deduce the planetary atmospheric and ionospheric structure, planetary gravity field, mass, rings, and ephemeris, and even to test general relativity. In the Chinese lunar missions, we developed the lunar and planetary radio science receiver to measure the distance variation between the tracking station and the lander by means of open-loop radio phase tracking. Using this method in the Chang'E-3 landing mission, a lunar radio_phase ranging (LRR) technique was realized at Chinese deep space tracking stations and astronomical VLBI stations with H-maser clocks installed. A radio transponder and transmitter have been installed on Chang'E-3/4. The transponder receives the uplink S/X band radio wave transmitted from the two newly constructed Chinese deep space stations, where high-quality hydrogen maser atomic clocks are used as the local time and frequency standard. The clocks between VLBI stations and deep space stations can be synchronized to the UTC standard within 20 nanoseconds using satellite common-view methods. In the near future there is a plan to improve this accuracy to 5 nanoseconds or better, at the level of other deep space networks around the world. In the preliminary LRR experiments of Chang'E-3, the obtained 1 sps phase ranging observables have a resolution of 0.2 millimeter or better, with a fitting RMS of about 2-3 millimeters after the atmospheric and ionospheric errors are removed. This method can serve as a new astrometric technique to measure the Earth tide and rotation and the lunar orbit, tides, and libration, either by stand-alone observation or by working together with Lunar Laser Ranging. After differencing the ranging, we even obtained 1 sps two-way Doppler series with a resolution of 0.07 mm/second, which can be used to check the upper limit for low-frequency (0.001-1 Hz) gravitational wave detection between the Earth and the Moon.
Molecular constituents of colorectal cancer metastatic to the liver by imaging infrared spectroscopy
NASA Astrophysics Data System (ADS)
Coe, James V.; Chen, Zhaomin; Li, Ran; Nystrom, Steven V.; Butke, Ryan; Miller, Barrie; Hitchcock, Charles L.; Allen, Heather C.; Povoski, Stephen P.; Martin, Edward W.
2015-03-01
Infrared (IR) imaging spectroscopy of human liver tissue slices has been used to identify and characterize liver metastasis of colorectal origin which was surgically removed from a consenting patient and frozen without formalin fixation or dehydration procedures, so that lipids and water remain in the tissues. First, a k-means clustering analysis, using metrics from the IR spectra, identified groups within the image. The groups were identified as tumor or nontumor regions by comparing to an H and E stain of the same sample after IR imaging. Then, calibrant IR spectra of protein, several fats, glycogen, and polyvinyl alcohol were isolated by differencing spectra from different regions or groups in the image space. Finally, inner products (or scores) of the IR spectra at each pixel in the image with each of the various calibrants were calculated showing how the calibrant molecules vary in tumor and nontumor regions. In this particular case, glycogen and protein changes enable separation of tumor and nontumor regions as shown with a contour plot of the glycogen scores versus the protein scores.
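The last two steps described above amount to simple linear algebra: a calibrant spectrum is isolated by differencing mean spectra from two regions, and per-pixel scores are inner products of each pixel spectrum with a normalized calibrant. The sketch below shows both operations on a generic hyperspectral cube; the array shapes and normalization choice are assumptions for illustration, not the authors' exact processing.

```python
import numpy as np

def difference_spectrum(mean_region_a, mean_region_b):
    """Isolate a component spectrum by differencing mean spectra of two regions
    (e.g. tumor minus non-tumor) as a rough estimate of a calibrant spectrum."""
    return mean_region_a - mean_region_b

def calibrant_scores(image_cube, calibrant):
    """Inner-product score of every pixel spectrum with a calibrant spectrum.

    image_cube : (rows, cols, n_bands) array of absorbance spectra
    calibrant  : (n_bands,) reference spectrum (glycogen, protein, fat, ...)
    Returns a (rows, cols) score map; plotting one score against another
    (e.g. glycogen vs. protein) separates pixel populations.
    """
    cal = calibrant / np.linalg.norm(calibrant)
    return np.einsum("ijk,k->ij", image_cube, cal)
```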
NASA Astrophysics Data System (ADS)
Muniandy, Sithi V.; Uning, Rosemary
2006-11-01
Foreign currency exchange rate policies of ASEAN member countries have undergone tremendous changes following the 1997 Asian financial crisis. In this paper, we study the fractal and long-memory characteristics in the volatility of five ASEAN founding members’ exchange rates with respect to US dollar. The impact of exchange rate policies implemented by the ASEAN-5 countries on the currency fluctuations during pre-, mid- and post-crisis are briefly discussed. The time series considered are daily price returns, absolute returns and aggregated absolute returns, each partitioned into three segments based on the crisis regimes. These time series are then modeled using fractional Gaussian noise, fractionally integrated ARFIMA (0,d,0) and generalized Cauchy process. The first two stationary models provide the description of long-range dependence through Hurst and fractional differencing parameter, respectively. Meanwhile, the generalized Cauchy process offers independent estimation of fractal dimension and long memory exponent. In comparison, among the three models we found that the generalized Cauchy process showed greater sensitivity to transition of exchange rate regimes that were implemented by ASEAN-5 countries.
An Integrated Solution for Performing Thermo-fluid Conjugate Analysis
NASA Technical Reports Server (NTRS)
Kornberg, Oren
2009-01-01
A method has been developed which integrates a fluid flow analyzer and a thermal analyzer to produce both steady-state and transient results for 1-D, 2-D, and 3-D analysis models. The Generalized Fluid System Simulation Program (GFSSP) is a one-dimensional, general-purpose fluid analysis code which computes pressures and flow distributions in complex fluid networks. The MSC Systems Improved Numerical Differencing Analyzer (MSC.SINDA) is a one-dimensional, general-purpose thermal analyzer that solves network representations of thermal systems. Both GFSSP and MSC.SINDA have graphical user interfaces which are used to build the respective model and prepare it for analysis. The SINDA/GFSSP Conjugate Integrator (SGCI) is a form-based graphical integration program used to set input parameters for the conjugate analyses and run the models. This paper describes SGCI and its thermo-fluid conjugate analysis techniques and capabilities by presenting results from example models, including the cryogenic chilldown of a copper pipe, a bar between two walls in a fluid stream, and a solid plate creating a phase change in a flowing fluid.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pruess, K.; Oldenburg, C.; Moridis, G.
1997-12-31
This paper summarizes recent advances in methods for simulating water and tracer injection, and presents illustrative applications to liquid- and vapor-dominated geothermal reservoirs. High-resolution simulations of water injection into heterogeneous, vertical fractures in superheated vapor zones were performed. Injected water was found to move in dendritic patterns, and to experience stronger lateral flow effects than predicted from homogeneous medium models. Higher-order differencing methods were applied to modeling water and tracer injection into liquid-dominated systems. Conventional upstream weighting techniques were shown to be adequate for predicting the migration of thermal fronts, while higher-order methods give far better accuracy for tracer transport. A new fluid property module for the TOUGH2 simulator is described which allows a more accurate description of geofluids, and includes mineral dissolution and precipitation effects with associated porosity and permeability change. Comparisons between numerical simulation predictions and data for laboratory and field injection experiments are summarized. Enhanced simulation capabilities include a new linear solver package for TOUGH2, and inverse modeling techniques for automatic history matching and optimization.
NASA Astrophysics Data System (ADS)
Greaves, Heather E.
Climate change is disproportionately affecting high northern latitudes, and the extreme temperatures, remoteness, and sheer size of the Arctic tundra biome have always posed challenges that make application of remote sensing technology especially appropriate. Advances in high-resolution remote sensing continually improve our ability to measure characteristics of tundra vegetation communities, which have been difficult to characterize previously due to their low stature and their distribution in complex, heterogeneous patches across large landscapes. In this work, I apply terrestrial lidar, airborne lidar, and high-resolution airborne multispectral imagery to estimate tundra vegetation characteristics for a research area near Toolik Lake, Alaska. Initially, I explored methods for estimating shrub biomass from terrestrial lidar point clouds, finding that a canopy-volume-based algorithm performed best. Although shrub biomass estimates derived from airborne lidar data were less accurate than those from terrestrial lidar data, algorithm parameters used to derive biomass estimates were similar for both datasets. Additionally, I found that airborne lidar-based shrub biomass estimates were just as accurate whether calibrated against terrestrial lidar data or harvested shrub biomass, suggesting that terrestrial lidar could potentially replace destructive biomass harvest. Along with the smoothed Normalized Difference Vegetation Index (NDVI) derived from airborne imagery, airborne lidar-derived canopy volume was an important predictor in a Random Forest model trained to estimate shrub biomass across the 12.5 km2 covered by our lidar and imagery data. The resulting 0.80 m resolution shrub biomass maps should provide important benchmarks for change detection in the Toolik area, especially as deciduous shrubs continue to expand in tundra regions. Finally, I applied 33 lidar- and imagery-derived predictor layers in a validated Random Forest modeling approach to map vegetation community distribution at 20 cm resolution across the data collection area, creating maps that will enable validation of coarser maps, as well as study of fine-scale ecological processes in the area. These projects have pushed the limits of what can be accomplished for vegetation mapping using airborne remote sensing in a challenging but important region; it is my hope that the methods explored here will illuminate potential paths forward as landscapes and technologies inevitably continue to change.
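A minimal sketch of the kind of Random Forest biomass model described above follows, using synthetic stand-ins for the canopy-volume and NDVI predictors; the variable names and values are illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for the lidar canopy-volume and NDVI predictors
# (the real model used 0.80 m grid cells and field-calibrated biomass).
rng = np.random.default_rng(42)
n = 500
canopy_volume = rng.gamma(shape=2.0, scale=0.5, size=n)                  # m^3 per cell (synthetic)
ndvi = np.clip(0.3 + 0.15 * canopy_volume + rng.normal(0, 0.05, n), 0, 1)
biomass = 120 * canopy_volume + 200 * ndvi + rng.normal(0, 20, n)        # g per cell (synthetic)

X = np.column_stack([canopy_volume, ndvi])
rf = RandomForestRegressor(n_estimators=500, random_state=0)
r2 = cross_val_score(rf, X, biomass, cv=5, scoring="r2")
rf.fit(X, biomass)

print("cross-validated R^2:", np.round(r2.mean(), 2))
print("predictor importances (canopy volume, NDVI):", np.round(rf.feature_importances_, 2))
```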
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arakawa, Akio; Konor, C.S.
Two types of vertical grids are used for atmospheric models: the Lorenz grid (L grid) and the Charney-Phillips grid (CP grid). In this paper, problems with the L grid are pointed out that are due to the existence of an extra degree of freedom in the vertical distribution of the temperature (and the potential temperature). Then a vertical differencing of the primitive equations based on the CP grid is presented, while most of the advantages of the L grid in a hybrid σ-p vertical coordinate are maintained. The discrete hydrostatic equation is constructed in such a way that it is free from the vertical computational mode in the thermal field. Also, the vertical advection of the potential temperature in the discrete thermodynamic equation is constructed in such a way that it reduces to the standard (and most straightforward) vertical differencing of the quasigeostrophic equations based on the CP grid. Simulations of standing oscillations superposed on a resting atmosphere are presented using two vertically discrete models, one based on the L grid and the other on the CP grid. The comparison of the simulations shows that with the L grid a stationary vertically zigzag pattern dominates in the thermal field, while with the CP grid no such pattern is evident. Simulations of the growth of an extratropical cyclone in a cyclic channel on a β plane are also presented using two different σ-coordinate models, again one with the L grid and the other with the CP grid, starting from random disturbances. 17 refs., 8 figs.
Benchmark measurements and calculations of a 3-dimensional neutron streaming experiment
NASA Astrophysics Data System (ADS)
Barnett, D. A., Jr.
1991-02-01
An experimental assembly known as the Dog-Legged Void assembly was constructed to measure the effect of neutron streaming in iron and void regions. The primary purpose of the measurements was to provide benchmark data against which various neutron transport calculation tools could be compared. The measurements included neutron flux spectra at four places and integral measurements at two places in the iron streaming path as well as integral measurements along several axial traverses. These data have been used in the verification of Oak Ridge National Laboratory's three-dimensional discrete ordinates code, TORT. For a base case calculation using one-half inch mesh spacing, finite difference spatial differencing, an S16 quadrature and P1 cross sections in the MUFT multigroup structure, the calculated solution agreed to within 18 percent of the spectral measurements and to within 24 percent of the integral measurements. Variations on the base case using a few-group energy structure and P1 and P3 cross sections showed similar agreement. Calculations using a linear nodal spatial differencing scheme and few-group cross sections also showed similar agreement. For the same mesh size, the nodal method was seen to require 2.2 times as much CPU time as the finite difference method. A nodal calculation using a typical mesh spacing of 2 inches, which had approximately 32 times fewer mesh cells than the base case, agreed with the measurements to within 34 percent and yet required only 8 percent of the CPU time.
Artificial Vector Calibration Method for Differencing Magnetic Gradient Tensor Systems
Li, Zhining; Zhang, Yingtang; Yin, Gang
2018-01-01
The measurement error of the differencing (i.e., using two homogeneous field sensors at a known baseline distance) magnetic gradient tensor system includes the biases, scale factors, and nonorthogonality of the single magnetic sensor, and the misalignment error between the sensor arrays, all of which can severely affect the measurement accuracy. In this paper, we propose a low-cost artificial vector calibration method for the tensor system. Firstly, linear equations for the error parameters are constructed from the single sensor's system error model to obtain an artificial ideal vector output for the platform, using the total magnetic intensity (TMI) scalar as a reference through two nonlinear conversions, without any mathematical simplification. Secondly, the Levenberg–Marquardt algorithm is used to compute the integrated model of the 12 error parameters by a nonlinear least-squares fitting method with the artificial vector output as a reference, and a total of 48 system parameters are estimated simultaneously. The calibrated system output is expressed in the reference platform-orthogonal coordinate system. The analysis results show that the artificial vector calibrated output can track the orientation fluctuations of TMI accurately, effectively avoiding the “overcalibration” problem. The accuracy of the error parameters’ estimation in the simulation is close to 100%. The experimental root-mean-square errors (RMSE) of the TMI and tensor components are less than 3 nT and 20 nT/m, respectively, and the estimation of the parameters is highly robust. PMID:29373544
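A much-simplified version of this kind of scalar-referenced calibration is sketched below: per-axis scale factors and biases of a single three-axis magnetometer are estimated with Levenberg-Marquardt least squares so that the corrected vector magnitude matches a reference TMI. The full method additionally estimates nonorthogonality and inter-sensor misalignment (12 parameters per sensor, 48 for the array); all numbers here are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
n = 400
tmi = 50_000.0                                   # reference field magnitude, nT (assumed)
u = rng.normal(size=(n, 3))
field = tmi * u / np.linalg.norm(u, axis=1, keepdims=True)   # varied orientations

scale_true = np.array([1.02, 0.98, 1.01])        # synthetic sensor errors
bias_true = np.array([120.0, -60.0, 35.0])       # nT
measured = field * scale_true + bias_true + rng.normal(0, 2.0, (n, 3))

def residuals(p):
    scale, bias = p[:3], p[3:]
    corrected = (measured - bias) / scale
    return np.linalg.norm(corrected, axis=1) - tmi   # scalar (TMI) reference

p0 = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])    # initial guess
sol = least_squares(residuals, p0, method="lm")  # Levenberg-Marquardt
print("estimated scales:", np.round(sol.x[:3], 4))
print("estimated biases:", np.round(sol.x[3:], 1))
```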
Trajectory control sensor engineering model detailed test objective
NASA Technical Reports Server (NTRS)
Dekome, Kent; Barr, Joseph Martin
1991-01-01
The concept employed in an existing Trajectory Control Sensor (TCS) breadboard is being developed into an engineering model to be considered for flight on the Shuttle as a Detailed Test Objective (DTO). The sensor design addresses the needs of Shuttle/SSF docking/berthing by providing relative range and range rate to 1500 meters as well as the perceived needs of AR&C by relative attitude measurement over the last 100 meters. Range measurement is determined using a four-tone ranging technique. The Doppler shift on the highest frequency tone will be used to provide direct measurement of range rate. Bearing rate and attitude rates will be determined through back differencing of bearing and attitude, respectively. The target consists of an isosceles triangle configuration of three optical retroreflectors, roughly one meter and one-half meter in size. After target acquisition, the sensor continually updates the positions of the three retros at a rate of about one hertz. The engineering model is expected to weigh about 25 pounds, consume 25-30 watts, and have an envelope of about 1.25 cubic feet. The following concerns were addressed during the presentation: are there any concerns with differentiating attitude and bearing to get attitude and bearing rates? Since the docking scenario has low data bandwidth, back differencing is a sufficient approximation of a perfect differentiator for this application. Could range data be obtained if there were no retroreflectors on the target vehicle? Possibly, but only at close range. It would be dependent on target characteristics.
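Back differencing of the sampled bearing or attitude is just a first difference divided by the sample interval; a minimal sketch with made-up values:

```python
import numpy as np

# Back-differencing of sampled bearing (or attitude) to approximate its rate,
# for a ~1 Hz measurement stream as described above (values illustrative).
dt = 1.0                                                     # sample interval, s
bearing = np.array([10.00, 10.05, 10.11, 10.18, 10.26])      # degrees (synthetic)
bearing_rate = np.diff(bearing) / dt                         # first backward difference, deg/s
print(bearing_rate)                                          # [0.05 0.06 0.07 0.08]
```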
NASA Astrophysics Data System (ADS)
Wu, Kunpeng; Liu, Shiyin; Jiang, Zongli; Xu, Junli; Wei, Junfeng; Guo, Wanqin
2018-01-01
Due to the influence of the Indian monsoon, the Kangri Karpo Mountains in the south-east of the Tibetan Plateau are among the most humid regions of the plateau and contain one of its most important and concentrated areas of maritime (temperate) glaciers. Glacier mass loss in the Kangri Karpo is an important contributor to global mean sea level rise, and changes run-off distribution, increasing the risk of glacial-lake outburst floods (GLOFs). Because of its inaccessibility and high labour costs, information about the Kangri Karpo glaciers is still limited. Using geodetic methods based on digital elevation models (DEMs) derived from 1980 topographic maps, from the Shuttle Radar Topography Mission (SRTM, 2000), and from TerraSAR-X/TanDEM-X (2014), this study has determined glacier elevation changes. Glacier area and length changes between 1980 and 2015 were derived from topographical maps and Landsat TM/ETM+/OLI images. Results show that the Kangri Karpo contained 1166 glaciers with an area of 2048.50 ± 48.65 km2 in 2015. Ice cover diminished by 679.51 ± 59.49 km2 (24.9 ± 2.2 %) or 0.71 ± 0.06 % a-1 from 1980 to 2015, although nine glaciers advanced. A glacierized area of 788.28 km2, derived from DEM differencing, experienced a mean mass loss of 0.46 ± 0.08 m w.e. a-1 from 1980 to 2014. Shrinkage and mass loss accelerated significantly from 2000 to 2015 compared to 1980-2000, consistent with a warming climate.
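The geodetic calculation behind such DEM-differencing results reduces to averaging the elevation change over the glacier area, dividing by the time span, and applying a density conversion. A sketch with assumed, synthetic numbers (including the commonly used 850 kg m-3 conversion density):

```python
import numpy as np

# Geodetic mass balance from DEM differencing (synthetic illustration).
pixel_area = 30.0 * 30.0                        # m^2, assumed DEM cell size
dh = np.random.default_rng(3).normal(-14.0, 5.0, size=(200, 200))  # 1980-2014 elevation change, m (synthetic)
glacier_mask = np.ones_like(dh, dtype=bool)     # stand-in glacier outline

years = 2014 - 1980
rho_ratio = 850.0 / 1000.0                      # ice-to-water-equivalent density conversion
mean_dh = dh[glacier_mask].mean()
mass_balance = mean_dh / years * rho_ratio      # m w.e. a^-1
glacier_area_km2 = glacier_mask.sum() * pixel_area / 1e6

print(f"glacier area: {glacier_area_km2:.1f} km2")
print(f"geodetic mass balance: {mass_balance:.2f} m w.e. a-1")
```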
NASA Astrophysics Data System (ADS)
Dolan, K. A.
2015-12-01
Disturbance plays a critical role in shaping the structure and function of forested ecosystems as well as the ecosystem services they provide, including but not limited to: carbon storage, biodiversity habitat, water quality and flow, and land-atmosphere exchanges of energy and water. In addition, recent studies suggest that disturbance rates may increase in the future under altered climate and land use scenarios. Thus, understanding how vulnerable forested ecosystems are to potential changes in disturbance rates is of high importance. This study calculated the theoretical threshold rate of disturbance for which forest ecosystems could no longer be sustained (λ*) across the conterminous U.S. using an advanced process-based ecosystem model (ED). Published rates of disturbance (λ) in 50 study sites were obtained from the North American Forest Disturbance (NAFD) program. Disturbance distance (λ* - λ) was calculated for each site by differencing the model-based threshold under current climate conditions and average observed rates of disturbance over the last quarter century. Preliminary results confirm that all sampled forest sites have current average rates of disturbance below λ*, but there were interesting patterns in the recorded disturbance distances. In general, western sites had much smaller disturbance distances, suggesting higher vulnerability to change, while eastern sites showed larger buffers. Ongoing work is being conducted to assess the vulnerability of these sites in the context of potential future changes by propagating scenarios of future climate and land-use change through the analysis.
NASA Astrophysics Data System (ADS)
Bai, Weihua; Liu, Congliang; Meng, Xiangguang; Sun, Yueqiang; Kirchengast, Gottfried; Du, Qifei; Wang, Xianyi; Yang, Guanglin; Liao, Mi; Yang, Zhongdong; Zhao, Danyang; Xia, Junming; Cai, Yuerong; Liu, Lijun; Wang, Dongwei
2018-02-01
The Global Navigation Satellite System (GNSS) Occultation Sounder (GNOS) is one of the new-generation payloads onboard the Chinese FengYun 3 (FY-3) series of operational meteorological satellites for sounding the Earth's neutral atmosphere and ionosphere. The GNOS was designed for acquiring setting and rising radio occultation (RO) data by using GNSS signals from both the Chinese BeiDou System (BDS) and the US Global Positioning System (GPS). An ultra-stable oscillator with 1 s stability (Allan deviation) at the level of 10^-12 was installed on the FY-3C GNOS, and thus both zero-difference and single-difference excess phase processing methods should be feasible for FY-3C GNOS observations. In this study we focus on evaluating zero-difference processing of BDS RO data vs. single-difference processing, in order to investigate the zero-difference feasibility for this new instrument, which after its launch in September 2013 started to use BDS signals from five geostationary orbit (GEO) satellites, five inclined geosynchronous orbit (IGSO) satellites and four medium Earth orbit (MEO) satellites. We used a 3-month set of GNOS BDS RO data (October to December 2013) for the evaluation and compared atmospheric bending angle and refractivity profiles, derived from single- and zero-difference excess phase data, against co-located profiles from European Centre for Medium-Range Weather Forecasts (ECMWF) analyses. We also compared against co-located refractivity profiles from radiosondes. The statistical evaluation against these reference data shows that the results from single- and zero-difference processing are reasonably consistent in both bias and standard deviation, clearly demonstrating the feasibility of zero differencing for GNOS BDS RO observations. The average bias (and standard deviation) of the bending angle and refractivity profiles were found to be about 0.05 to 0.2 % (and 0.7 to 1.6 %) over the upper troposphere and lower stratosphere. Zero differencing was found to perform slightly better, as may be expected from its lower vulnerability to noise. The validation results indicate that GNOS can provide, on top of GPS RO profiles, accurate and precise BDS RO profiles both from single- and zero-difference processing. The GNOS observations by the series of FY-3 satellites are thus expected to provide important contributions to numerical weather prediction and global climate change analysis.
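The difference between the two processing options can be caricatured with a toy excess-phase series: single differencing subtracts a simultaneously tracked reference link so the common receiver clock error cancels, whereas zero differencing uses the occultation link directly and therefore relies on a stable oscillator. The sketch below uses synthetic numbers and deliberately assumes a drifting clock to show why differencing matters when the oscillator is not ultra-stable.

```python
import numpy as np

# Toy single- vs zero-difference excess-phase comparison (synthetic values).
rng = np.random.default_rng(7)
n = 1000
clock_error = np.cumsum(rng.normal(0, 0.002, n))       # common receiver clock drift, m
excess_phase_true = np.linspace(0, 150, n)             # true occultation excess phase, m

occ_link = excess_phase_true + clock_error + rng.normal(0, 0.001, n)
ref_link = clock_error + rng.normal(0, 0.001, n)       # reference link: no atmospheric excess

zero_diff = occ_link                                   # no correction; relies on a stable clock
single_diff = occ_link - ref_link                      # common clock error cancels

print("zero-diff   RMS error:", np.sqrt(np.mean((zero_diff - excess_phase_true) ** 2)))
print("single-diff RMS error:", np.sqrt(np.mean((single_diff - excess_phase_true) ** 2)))
```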
NASA Astrophysics Data System (ADS)
Kubanek, J.; Raible, B.; Westerhaus, M.; Heck, B.
2017-12-01
High-resolution and up-to-date topographic data are of high value in volcanology and can be used in a variety of applications such as volcanic flow modeling or hazard assessment. Furthermore, time series of topographic data can provide valuable insights into the dynamics of an ongoing eruption. Differencing topographic data acquired at different times enables derivation of the areal coverage of lava, flow volumes, and lava extrusion rates, which are the most important parameters during ongoing eruptions for estimating hazard potential, yet the most difficult to determine. However, topographic data acquisition and provision remain a challenge. Very often, high-resolution data only exist within a small spatial extent, or the available data are already outdated when the final product is provided. This is especially true for very dynamic landscapes, such as volcanoes. The bistatic TanDEM-X radar satellite mission enables for the first time the repeated generation of up-to-date, high-resolution digital elevation models (DEMs) using the interferometric phase. The repeated acquisition of TanDEM-X data facilitates the generation of a time series of DEMs. Differencing DEMs generated from bistatic TanDEM-X data over time can contribute to monitoring topographic changes at active volcanoes and can help to estimate magmatic ascent rates. Here, we use bistatic TanDEM-X data to investigate the activity of Etna volcano in Sicily, Italy. Etna's activity is characterized by lava fountains and lava flows with ash plumes from four major summit crater areas. The newest crater in particular, the New South East Crater (NSEC), which formed in 2011, has been highly active in recent years. Over one hundred bistatic TanDEM-X data pairs were acquired between January 2011 and March 2017 in StripMap mode, covering episodes of lava fountaining and lava flow emplacement at Etna's NSEC and its surrounding area. Generating a DEM from every bistatic data pair enables us to assess the areal extent of the lava flows and to calculate lava flow volumes and extrusion rates. TanDEM-X data have been acquired at Etna during almost every overflight of the TanDEM-X satellite mission, resulting in a DEM time series with high temporal resolution that gives highly valuable insights into Etna's volcanic activity over the last six years.
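Deriving a lava-flow volume and a time-averaged extrusion rate from two DEMs is essentially a masked sum of elevation differences times the pixel area, divided by the time separation. A sketch with synthetic DEMs and assumed pixel size and repeat interval:

```python
import numpy as np

# Lava-flow volume and mean extrusion rate from differencing two DEMs
# (synthetic numbers; real processing must also handle interferometric
# noise, voids, and co-registration).
pixel = 5.0                                     # DEM posting, m (assumed)
rng = np.random.default_rng(11)
dem_t0 = rng.normal(3000.0, 1.0, (400, 400))    # pre-eruption surface, m
dem_t1 = dem_t0 + rng.normal(0.0, 0.5, dem_t0.shape)   # post-eruption survey noise
dem_t1[150:220, 100:260] += 8.0                 # emplaced flow, ~8 m thick

dh = dem_t1 - dem_t0
flow_mask = dh > 2.0                            # threshold above DEM noise (assumed)
volume = dh[flow_mask].sum() * pixel**2         # m^3
days = 16.0                                     # repeat interval, days (assumed)
rate = volume / (days * 86400.0)                # mean extrusion rate, m^3/s

print(f"flow area: {flow_mask.sum() * pixel**2 / 1e6:.2f} km2")
print(f"flow volume: {volume / 1e6:.2f} x 10^6 m3, rate: {rate:.2f} m3/s")
```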
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, Tyler D., E-mail: robinson@astro.washington.edu
2011-11-01
The Moon maintains large surface temperatures on its illuminated hemisphere and can contribute significant amounts of flux to spatially unresolved thermal infrared (IR) observations of the Earth-Moon system, especially at wavelengths where Earth's atmosphere is absorbing. In this paper we investigate the effects of an unresolved companion on IR observations of Earthlike exoplanets. For an extrasolar twin Earth-Moon system observed at full phase at IR wavelengths, the Moon consistently comprises about 20% of the total signal, approaches 30% of the signal in the 9.6 μm ozone band and the 15 μm carbon dioxide band, makes up as much as 80% of the signal in the 6.3 μm water band, and more than 90% of the signal in the 4.3 μm carbon dioxide band. These excesses translate to inferred brightness temperatures for Earth that are too large by 20-40 K and demonstrate that the presence of undetected satellites can have significant impacts on the spectroscopic characterization of exoplanets. The thermal flux contribution from an airless companion depends strongly on phase, implying that observations of exoplanets should be taken when the star-planet-observer angle (i.e., phase angle) is as large as feasibly possible if contributions from companions are to be minimized. We show that, by differencing IR observations of an Earth twin with a companion taken at both gibbous and crescent phases, Moonlike satellites may be detectable by future exoplanet characterization missions for a wide range of system inclinations.
Imaging of CO2 injection during an enhanced-oil-recovery experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gritto, Roland; Daley, Thomas M.; Myer, Larry R.
2003-04-29
A series of time-lapse seismic cross well and single well experiments were conducted in a diatomite reservoir to monitor the injection of CO2 into a hydrofracture zone, using P- and S-wave data. During the first phase, the seismic experiments were conducted after the injection of water into the hydrofracture zone. The set of seismic experiments was repeated after a time period of 7 months during which CO2 was injected into the hydrofracture zone. The issues to be addressed ranged from the detectability of the geologic structure in the diatomite reservoir to the detectability of CO2 within the hydrofracture. During the pre-injection experiment, the P-wave velocities exhibited relatively low values between 1700-1900 m/s, which decreased to 1600-1800 m/s during the post-injection phase (-5 percent). The analysis of the pre-injection S-wave data revealed slow S-wave velocities between 600-800 m/s, while the post-injection data revealed velocities between 500-700 m/s (-6 percent). These velocity estimates produced high Poisson ratios between 0.36 and 0.46 for this highly porous (~50 percent) material. Differencing post- and pre-injection data revealed an increase in Poisson ratio of up to 5 percent. Both velocity and Poisson ratio estimates indicate the dissolution of CO2 in the liquid phase of the reservoir accompanied by a pore-pressure increase. The results of the cross well experiments were corroborated by single well data and laboratory measurements on core data.
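The quoted Poisson ratios follow directly from the P- and S-wave velocities via nu = (Vp^2 - 2Vs^2) / (2(Vp^2 - Vs^2)); the short sketch below evaluates this for mid-range pre- and post-injection velocities from the abstract.

```python
# Poisson's ratio from P- and S-wave velocities (mid-range values from the abstract).
def poisson_ratio(vp, vs):
    r2 = (vp / vs) ** 2
    return (r2 - 2.0) / (2.0 * (r2 - 1.0))

vp_pre, vs_pre = 1800.0, 700.0      # m/s, mid-range pre-injection values
vp_post, vs_post = 1700.0, 600.0    # m/s, mid-range post-injection values
print("pre-injection  nu:", round(poisson_ratio(vp_pre, vs_pre), 3))    # ~0.41
print("post-injection nu:", round(poisson_ratio(vp_post, vs_post), 3))  # ~0.43
```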
NASA Astrophysics Data System (ADS)
Utsumi, Yousuke; Tominaga, Nozomu; Tanaka, Masaomi; Morokuma, Tomoki; Yoshida, Michitoshi; Asakura, Yuichiro; Finet, François; Furusawa, Hisanori; Kawabata, Koji S.; Liu, Wei; Matsubayashi, Kazuya; Moritani, Yuki; Motohara, Kentaro; Nakata, Fumiaki; Ohta, Kouji; Terai, Tsuyoshi; Uemura, Makoto; Yasuda, Naoki
2018-01-01
We present the results of detailed analysis of an optical imaging survey conducted using the Subaru/Hyper Suprime-Cam (HSC) that aimed to identify an optical counterpart to the gravitational wave event GW151226. In half a night, the i- and z-band imaging survey by HSC covered 63.5 deg2 of the error region, which contains about 7% of the LIGO localization probability, and the same field was observed in three different epochs. The detectable magnitude of the candidates in a differenced image is evaluated as i ˜ 23.2 mag for the requirement of at least two 5 σ detections, and 1744 candidates are discovered. Assuming a kilonova as an optical counterpart, we compare the optical properties of the candidates with model predictions. A red and rapidly declining light curve condition enables the discrimination of a kilonova from other transients, and a small number of candidates satisfy this condition. The presence of stellar-like counterparts in the reference frame suggests that the surviving candidates are likely to be flare stars. The fact that most of those candidates are in the galactic plane, |b| < 5°, supports this interpretation. We also check whether the candidates are associated with the nearby GLADE galaxies, which reduces the number of contaminants even with a looser color cut. When a better probability map (with localization accuracy of ˜50 deg2) is available, kilonova searches of up to approximately 200 Mpc will become feasible by conducting immediate follow-up observations with an interval of 3-6 d.
Multi-Decadal Comparison between Clean-Ice and Debris-Covered Glaciers in the Eastern Himalaya
NASA Astrophysics Data System (ADS)
Maurer, J. M.; Rupper, S.
2014-12-01
Himalayan glaciers are important natural resources and climatic indicators. Many of these glaciers have debris-covered ablation zones, while others are mostly clean ice. Regarding glacier dynamics, it is expected that debris-covered glaciers will respond differently to atmospheric warming compared to clean ice glaciers. In the Bhutanese Himalaya, there are (1) north flowing clean-ice glaciers with high velocities, likely with large amounts of basal sliding, and (2) south flowing debris-covered glaciers with slow velocities, thermokarst features, and influenced more by the Indian Summer Monsoon. This region, therefore, is ideal for comparing the dynamical response of clean-ice versus debris-covered glaciers to climatic change. In particular, previous studies have suggested the north flowing glaciers are likely adjusting more dynamically (i.e. retreating) in response to climate variations, while the south flowing glaciers are likely experiencing downwasting, with stagnant termini locations. We test this hypothesis by assessing glacier changes over three decades in the Bhutan region using a newly-developed workflow to extract DEMs and orthorectified imagery from both 1976 historical spy satellite images and 2006 ASTER images. DEM differencing for both debris-covered and clean glaciers allows for quantification of glacier surface elevation changes, while orthorectified imagery allows for measuring changes in glacier termini. The same stereo-matching, denoising, and georeferencing methodology is used on both datasets to ensure consistency, while the three decade timespan allows for a better signal to noise ratio compared to studies performed on shorter timescales. The results of these analyses highlight the similarities and differences in the decadal response of clean-ice and debris-covered glaciers to climatic change, and provide insights into the complex dynamics of debris-covered glaciers in the monsoonal Himalayas.
Mapping disturbances in a mangrove forest using multi-date landsat TM imagery.
Kovacs, J M; Wang, J; Blanco-Correa, M
2001-05-01
To evaluate the accounts of local fishermen, Landsat TM images (1986, 1993, 1999) were examined to assess potential losses in the mangrove forests of the Teacapán-Agua Brava lagoon system, Mexico. A binary change mask derived from image differencing of a band 4/3 ratio was employed to calculate any changes within this forested wetland. The results indicate that by 1986 approximately 18% (or 86 km2) of the mangrove area under study was either dead or in poor condition. The majority of this damage had occurred in the eastern section of the Agua Brava basin, which coincides with the reports of the elderly fishermen. Examination of aerial photographs from 1970 revealed no adverse impacts in this area and would suggest, as postulated by the fishermen and other scientists, that modifications in environmental conditions following the opening of a canal, the Cuautlá canal, in 1972 may have initiated the large-scale mortality. Although these areas of impact are still developing, the results from the satellite data indicate that the majority of the more recent changes are occurring elsewhere in the system. Obvious in the 1999 satellite data, but not in the 1993 data, are large areas of mangrove degradation in the northern section of the Teacapán region. In the Agua Brava basin, the more recent transformations are appearing on the western side of the basin. Since long-term records of environmental conditions are absent, it is difficult to determine why these latest changes are occurring or even if the earlier losses were the result of the canal. Potential agents of change that have recently been observed include a hurricane, a second canal, and the uncontrolled expansion of the Cuautlá canal since 1994.
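The band-ratio differencing workflow described above can be sketched as follows, with synthetic reflectance arrays and an assumed threshold standing in for the empirically chosen one.

```python
import numpy as np

# Change mask from differencing a near-infrared/red band ratio (TM band 4 / band 3)
# between two dates, then thresholding. All arrays and the threshold are synthetic.
rng = np.random.default_rng(5)

def band_ratio(nir, red):
    return nir / np.clip(red, 1e-6, None)     # avoid division by zero

nir_86 = rng.uniform(0.2, 0.5, (100, 100))
red_86 = rng.uniform(0.05, 0.15, (100, 100))
nir_99, red_99 = nir_86.copy(), red_86.copy()
nir_99[:30, :30] *= 0.4                       # simulated mangrove die-back patch

diff = band_ratio(nir_99, red_99) - band_ratio(nir_86, red_86)
threshold = -1.0                              # assumed; a real study would derive this empirically
change_mask = diff < threshold                # loss of vegetation vigour

print("flagged change pixels:", int(change_mask.sum()))
```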
Efficient High-Order Accurate Methods using Unstructured Grids for Hydrodynamics and Acoustics
2007-08-31
[Cleaned fragment of report text and references:] A. Harten, P. D. Lax, and B. van Leer, "On upstream differencing and Godunov-type schemes for hyperbolic conservation laws," SIAM Review, 25(1):35-61, 1983. [46] Eleuterio F. Toro ... The basic idea can be surmised from simple approximation theory: if a continuous function f is to be approximated over a set of points, its Taylor expansion f(x + εh) = f(x) + εh ∂f/∂x + (ε²h²/2!) ∂²f/∂x² + ... + (ε⁴h⁴/4!) ∂⁴f/∂x⁴ + O(h⁵) (1), where 0 < ε < 1 for approximations inside the interval of width h, applies. For a second-order approximation ...
1988-10-01
... meteorologists' rule-of-thumb that climatic drift manifests itself in periods greater than 30 years. For a fractionally-differenced model with our ... estimates in a univariate ARIMA(p, d, q) with |d| < 0.5 has been derived by Li and McLeod (1986). The model used by Haslett and Raftery can be viewed as ... Reply to the Discussion of "Space-time Modelling with Long-memory Dependence: Assessing Ireland's Wind Resource", John Haslett, Department of ...
CFD propels NASP propulsion progress
NASA Technical Reports Server (NTRS)
Povinelli, Louis A.; Dwoyer, Douglas L.; Green, Michael J.
1990-01-01
The most complex aerothermodynamics encountered in the National Aerospace Plane (NASP) propulsion system are associated with the fuel-mixing and combustion-reaction flows of its combustor section; adequate CFD tools must be developed to model shock-wave systems, turbulent hydrogen/air mixing, flow separation, and combustion. Improvements to existing CFD codes have involved extension from two dimensions to three, as well as the addition of finite-rate hydrogen-air chemistry. A novel CFD code for the treatment of reacting flows throughout the NASP, designated GASP, uses the most advanced upwind-differencing technology.
CFD propels NASP propulsion progress
NASA Astrophysics Data System (ADS)
Povinelli, Louis A.; Dwoyer, Douglas L.; Green, Michael J.
1990-07-01
The most complex aerothermodynamics encountered in the National Aerospace Plane (NASP) propulsion system are associated with the fuel-mixing and combustion-reaction flows of its combustor section; adequate CFD tools must be developed to model shock-wave systems, turbulent hydrogen/air mixing, flow separation, and combustion. Improvements to existing CFD codes have involved extension from two dimensions to three, as well as the addition of finite-rate hydrogen-air chemistry. A novel CFD code for the treatment of reacting flows throughout the NASP, designated GASP, uses the most advanced upwind-differencing technology.
An implicit flux-split algorithm to calculate hypersonic flowfields in chemical equilibrium
NASA Technical Reports Server (NTRS)
Palmer, Grant
1987-01-01
An implicit, finite-difference, shock-capturing algorithm that calculates inviscid, hypersonic flows in chemical equilibrium is presented. The flux vectors and flux Jacobians are differenced using a first-order, flux-split technique. The equilibrium composition of the gas is determined by minimizing the Gibbs free energy at every node point. The code is validated by comparing results over an axisymmetric hemisphere against previously published results. The algorithm is also applied to more practical configurations. The algorithm has shown promising accuracy, stability, and versatility.
A Time Domain Analysis of Gust-Cascade Interaction Noise
NASA Technical Reports Server (NTRS)
Nallasamy, M.; Hixon, R.; Sawyer, S. D.; Dyson, R. W.
2003-01-01
The gust response of a 2-D cascade is studied by solving the full nonlinear Euler equations, employing higher-order-accurate spatial differencing and time-stepping techniques. The solutions exhibit the exponential decay of the two circumferential mode orders of the cutoff blade passing frequency (BPF) tone and propagation of one circumferential mode order at 2BPF, as would be expected for the flow configuration considered. Two-frequency excitations indicate that the interaction between the frequencies and the self-interaction contribute to the amplitude of the propagating mode.
Large Eddy Simulation of Flow in Turbine Cascades Using LESTool and UNCLE Codes
NASA Technical Reports Server (NTRS)
Huang, P. G.
2004-01-01
Between December 23, 1997 and August 31, 2004, we developed two CFD codes for DNS/LES/RANS simulation of turbine cascade flows, namely LESTool and UNCLE. LESTool is a structured code making use of a 5th-order upwind differencing scheme, and UNCLE is a second-order-accuracy unstructured code. LESTool has both dynamic SGS and Spalart's DES models, and UNCLE makes use of URANS and DES models. The current report provides a description of the methodologies used in the codes.
Trapped-Ion Quantum Simulation of an Ising Model with Transverse and Longitudinal Fields
2013-03-29
resonant λ = 355 nm laser beams which drive stimulated Raman transitions [33, 34]. The beams intersect at right angles so that their wavevector difference ...ated by a pair of Raman laser beams with a beatnote frequency of ωS, with the field amplitude determined by the beam intensities. The field directions ... cooling, followed by optical pumping to the state |↓↓↓...⟩z and 100 µs of Raman sideband cooling that prepares the motion of all modes along Δk in ...
Large Eddy Simulation of Flow in Turbine Cascades Using LEST and UNCLE Codes
NASA Technical Reports Server (NTRS)
Ashpis, David (Technical Monitor); Huang, P. G.
2004-01-01
Between December 23, 1997 and August 31, 2004, we developed two CFD codes for DNS/LES/RANS simulation of turbine cascade flows, namely LESTool and UNCLE. LESTool is a structured code making use of a 5th-order upwind differencing scheme, and UNCLE is a second-order-accuracy unstructured code. LESTool has both dynamic SGS and Spalart's DES models, and UNCLE makes use of URANS and DES models. The current report provides a description of the methodologies used in the codes.
Improving the Accuracy of Cloud Detection Using Machine Learning
NASA Astrophysics Data System (ADS)
Craddock, M. E.; Alliss, R. J.; Mason, M.
2017-12-01
Cloud detection from geostationary satellite imagery has long been accomplished through multi-spectral channel differencing in comparison to the Earth's surface. The distinction of clear/cloud is then determined by comparing these differences to empirical thresholds. Using this methodology, the probability of detecting clouds exceeds 90%, but performance varies seasonally, regionally, and temporally. The Cloud Mask Generator (CMG) database developed under this effort consists of 20 years of 4 km, 15-minute clear/cloud images based on GOES data over CONUS and Hawaii. The algorithms to determine cloudy pixels in the imagery are based on well-known multi-spectral techniques and defined thresholds. These thresholds were produced by manually studying thousands of images, over thousands of man-hours, to determine where the algorithms succeed and fail and to fine-tune the thresholds. This study aims to investigate the potential of improving cloud detection by using Random Forest (RF) ensemble classification. RF is well suited to cloud detection because it runs efficiently on large datasets, is robust to outliers and noise, and can deal with highly correlated predictors such as multi-spectral satellite imagery. The RF code was developed using Python in about 4 weeks. The region of focus was Hawaii, and the predictors include visible and infrared imagery, topography, and multi-spectral image products. The development of the cloud detection technique is realized in three steps. First, tuning of the RF models is completed to identify the optimal values of the number of trees and number of predictors to employ for both day and night scenes. Second, the RF models are trained using the optimal number of trees and a select number of random predictors identified during the tuning phase. Lastly, the model is used to predict clouds for a time period independent of the training period, and the predictions are compared to truth, the CMG cloud mask. Initial results show 97% accuracy during the daytime, 94% accuracy at night, and 95% accuracy for all times. The total time to train, tune and test was approximately one week. The improved performance and reduced time to produce results are a testament to improved computer technology and to machine learning as a more efficient and accurate methodology for cloud detection.
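A compact sketch of the tune/train/test sequence described above, using scikit-learn's Random Forest with synthetic stand-ins for the satellite, topographic, and image-product predictors (the real work used the CMG database and GOES imagery):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic predictors and labels standing in for the CMG training data.
rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 6))                   # e.g., VIS, IR channels, elevation, products
truth_cloud = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n)) > 0

X_train, X_test, y_train, y_test = train_test_split(X, truth_cloud, test_size=0.3, random_state=0)

# Step 1: tune the number of trees and predictors tried at each split
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    {"n_estimators": [100, 300], "max_features": [2, 3, "sqrt"]},
    cv=3,
)
grid.fit(X_train, y_train)

# Steps 2-3: train with the selected settings, then score on independent data
accuracy = grid.best_estimator_.score(X_test, y_test)
print("best settings:", grid.best_params_)
print("independent-sample accuracy:", round(accuracy, 3))
```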
NASA Astrophysics Data System (ADS)
Caress, D. W.; Paull, C. K.; Dallimore, S.; Lundsten, E. M.; Anderson, K.; Gwiazda, R.; Melling, H.; Lundsten, L.; Graves, D.; Thomas, H. J.; Cote, M.
2017-12-01
Two active submarine mud volcano sites located at 420 and 740 m depths on the margin of the Canadian Beaufort Sea were mapped in 2013 and again in 2016 using the same survey line pattern, allowing detection of change over three years. The surveys were conducted using MBARI's mapping AUVs, which carry a 200 kHz or 400 kHz multibeam sonar, a 1-6 kHz chirp sub-bottom profiler, and a 110 kHz chirp sidescan, flown at 50 m altitude. The resulting bathymetry has 1 m lateral resolution and 0.1 m vertical precision, and the sidescan mosaics have 1 m lateral resolution. Vertical changes of ≥0.2 m are observable by differencing repeat surveys. These features were also visited with MBARI's miniROV, which was outfitted for these dives with a manipulator-mounted temperature probe. The 420 m mud volcano is nearly circular, 1100 m across, flat-topped, and superimposed on the pre-existing smooth slope. The central plateau has low relief (<3 m), consisting of concentric rings and ovoid mounds that appear to reflect distinct eruptions at shifting locations. The 740 m site contains 3 mud volcanoes, most prominently a 630 m wide, 30 m high flat-topped plateau with about 4 m of relief, similar to the 420 m feature, plus a 5 m high cone on the southern rim. North of this plateau is a smooth-textured, conically shaped feature also standing about 30 m above the floor of the subsidence structure. Sidescan mosaics reveal significant changes in backscatter patterns at both mud volcano sites between surveys. Comparison of bathymetry also reveals new flows of up to 1.8 m thickness at both sites, as well as subtle spreading of the flat plateaus' rims. An active mudflow was encountered during a miniROV dive on a high backscatter target at the 740 m site. This tongue of mud was observed to be slowly flowing downslope. The ROV temperature probe inserted 2 cm into the flow measured 23°C, compared to ambient water (-0.4°C), indicating the rapid ascent of the mud from considerable subsurface depths. Bubbles (presumably methane) were escaping from the active mudflow. Combining seafloor mapping with ROV observations indicates that new sediment flows with entrained methane bubbles exhibit very high backscatter, which rapidly changes to very low backscatter following degassing of the smooth, bare mud. To our knowledge this is the first time an eruption on a submarine mud volcano has been observed.
NASA Astrophysics Data System (ADS)
Bailey, T. L.; Sutherland-Montoya, D.
2015-12-01
High resolution topographic analysis methods have become important tools in geomorphology. Structure from Motion photogrammetry offers a compelling vehicle for geomorphic change detection in fluvial environments. This process can produce arbitrarily high resolution, geographically registered spectral and topographic coverages from a collection of overlapping digital imagery from consumer cameras. Cuneo Creek has had three historically observed episodes of rapid aggradation (1955, 1964, and 1997). The debris flow deposits continue to be major sources of sediment sixty years after the initial slope failure. Previous studies have monitored the sediment storage volume and particle size since 1976 (in 1976, 1982, 1983, 1985, 1986, 1987, 1998, 2003). We reoccupied 3 previously surveyed stream cross sections on Sept 30, 2014 and March 30, 2015, and produced photogrammetric point clouds using a pole mounted camera with a remote view finder to take nadir view images from 4.3 meters above the channel bed. Ground control points were registered using survey grade GPS and typical cross sections used over 100 images to build the structure model. This process simultaneously collects channel geometry and we used it to also generate surface texture metrics, and produced DEMs with point cloud densities above 5000 points / m2. In the period between the surveys, a five year recurrence interval discharge of 20 m3/s scoured the channel. Surface particle size distribution has been determined for each observation period using image segmentation algorithms based on spectral distance and compactness. Topographic differencing between the point clouds shows substantial channel bed mobilization and reorganization. The net decline in sediment storage is in excess of 4 x 10^5 cubic meters since the 1964 aggradation peak, with associated coarsening of surface particle sizes. These new methods provide a promising rapid assessment tool for measurement of channel responses to sediment inputs.
NASA Astrophysics Data System (ADS)
Bourgeau-Chavez, L. L.; Miller, M. E.; Battaglia, M.; Banda, E.; Endres, S.; Currie, W. S.; Elgersma, K. J.; French, N. H. F.; Goldberg, D. E.; Hyndman, D. W.
2014-12-01
Spread of invasive plant species in the coastal wetlands of the Great Lakes is degrading wetland habitat, decreasing biodiversity, and diminishing ecosystem services. An understanding of the mechanisms of invasion is crucial to gaining control of this growing threat. To better understand the effects of land use and climatic drivers on the vulnerability of coastal zones to invasion, as well as to develop an understanding of the mechanisms of invasion, research is being conducted that integrates field studies, process-based ecosystem and hydrological models, and remote sensing. Spatial data from remote sensing are needed to parameterize the hydrological model and to test the outputs of the linked models. We will present several new remote sensing products that are providing important physiological, biochemical, and landscape information to parameterize and verify models. This includes a novel hybrid radar-optical technique to delineate stands of invasives, as well as natural wetland cover types; using radar to map seasonally inundated areas not hydrologically connected; and developing new algorithms to estimate leaf area index (LAI) using Landsat. A coastal map delineating wetland types, including monocultures of the invaders (Typha spp. and Phragmites australis), was created using satellite radar (ALOS PALSAR, 20 m resolution) and optical data (Landsat 5, 30 m resolution) fusion from multiple dates in a Random Forests classifier. These maps provide verification of the integrated model, showing areas at high risk of invasion. For parameterizing the hydrological model, maps of seasonal wetness are being developed by differencing spring (wet) imagery with summer (dry) imagery to detect seasonally wet areas. Finally, development of high-resolution LAI remote sensing algorithms for uplands and wetlands is underway. LAI algorithms for wetlands have not been previously developed due to the difficulty of a water background. These products are being used to improve the hydrological model through higher resolution products and parameterization of variables that have previously been largely unknown.
Analyzing millet price regimes and market performance in Niger with remote sensing data
NASA Astrophysics Data System (ADS)
Essam, Timothy Michael
This dissertation concerns the analysis of staple food prices and market performance in Niger using remotely sensed vegetation indices in the form of the normalized difference vegetation index (NDVI). By exploiting the link between weather-related vegetation production conditions, which serve as a proxy for spatially explicit millet yields and thus millet availability, this study analyzes the potential causal links between NDVI outcomes and millet market performance and presents an empirical approach for predicting changes in market performance based on NDVI outcomes. Overall, the thesis finds that inter-market price spreads and levels of market integration can be reasonably explained by growing-season deviations in vegetation index outcomes. Negative (positive) NDVI shocks are associated with better (worse) than expected market performance as measured by converging inter-market price spreads. As the number of markets affected by negatively abnormal vegetation production conditions in the same month of the growing season increases, inter-market price dispersion declines. Positive NDVI shocks, however, do not mirror this pattern in terms of the magnitude of inter-market price divergence. Market integration is also found to be linked to vegetation index outcomes, as below (above) average NDVI outcomes result in more integrated (segmented) markets. Climate change and food security policies and interventions should be guided by these findings and account for dynamic relationships among market structures and vegetation production outcomes.
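An NDVI "shock" of the kind analyzed here is typically a standardized anomaly of growing-season NDVI; a minimal sketch with synthetic reflectances and an invented NDVI history:

```python
import numpy as np

# NDVI and a standardized NDVI anomaly (z-score); all values are synthetic.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

# Growing-season NDVI for one market catchment over several years (invented)
history = np.array([0.42, 0.47, 0.39, 0.45, 0.44, 0.40, 0.46])
current = ndvi(nir=0.30, red=0.14)                 # an unusually dry season
z_shock = (current - history.mean()) / history.std(ddof=1)
print(f"current NDVI = {current:.2f}, shock (z-score) = {z_shock:.2f}")
```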
Low Dissipative High Order Shock-Capturing Methods Using Characteristic-Based Filters
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sandham, N. D.; Djomehri, M. J.
1998-01-01
An approach which closely maintains the non-dissipative nature of classical fourth or higher-order spatial differencing away from shock waves and steep gradient regions while being capable of accurately capturing discontinuities, steep gradient and fine scale turbulent structures in a stable and efficient manner is described. The approach is a generalization of the method of Gustafsson and Olsson and the artificial compression method (ACM) of Harten. Spatially non-dissipative fourth or higher-order compact and non-compact spatial differencings are used as the base schemes. Instead of applying a scalar filter as in Gustafsson and Olsson, an ACM like term is used to signal the appropriate amount of second or third-order TVD or ENO types of characteristic based numerical dissipation. This term acts as a characteristic filter to minimize numerical dissipation for the overall scheme. For time-accurate computations, time discretizations with low dissipation are used. Numerical experiments on 2-D vortical flows, vortex-shock interactions and compressible spatially and temporally evolving mixing layers showed that the proposed schemes have the desired property with only a 10% increase in operations count over standard second-order TVD schemes. Aside from the ability to accurately capture shock-turbulence interaction flows, this approach is also capable of accurately preserving vortex convection. Higher accuracy is achieved with fewer grid points when compared to that of standard second-order TVD or ENO schemes. To demonstrate the applicability of these schemes in sustaining turbulence where shock waves are absent, a simulation of 3-D compressible turbulent channel flow in a small domain is conducted.
Low Dissipative High Order Shock-Capturing Methods using Characteristic-Based Filters
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sandham, N. D.; Djomehri, M. J.
1998-01-01
An approach which closely maintains the non-dissipative nature of classical fourth or higher- order spatial differencing away from shock waves and steep gradient regions while being capable of accurately capturing discontinuities, steep gradient and fine scale turbulent structures in a stable and efficient manner is described. The approach is a generalization of the method of Gustafsson and Olsson and the artificial compression method (ACM) of Harten. Spatially non-dissipative fourth or higher-order compact and non-compact spatial differencings are used as the base schemes. Instead of applying a scalar filter as in Gustafsson and Olsson, an ACM like term is used to signal the appropriate amount of second or third-order TVD or ENO types of characteristic based numerical dissipation. This term acts as a characteristic filter to minimize numerical dissipation for the overall scheme. For time-accurate computations, time discretizations with low dissipation are used. Numerical experiments on 2-D vortical flows, vortex-shock interactions and compressible spatially and temporally evolving mixing layers showed that the proposed schemes have the desired property with only a 10% increase in operations count over standard second-order TVD schemes. Aside from the ability to accurately capture shock-turbulence interaction flows, this approach is also capable of accurately preserving vortex convection. Higher accuracy is achieved with fewer grid points when compared to that of standard second-order TVD or ENO schemes. To demonstrate the applicability of these schemes in sustaining turbulence where shock waves are absent, a simulation of 3-D compressible turbulent channel flow in a small domain is conducted.
Dabundo, Richard; Lehmann, Moritz F.; Treibergs, Lija; Tobias, Craig R.; Altabet, Mark A.; Moisander, Pia H.; Granger, Julie
2014-01-01
We report on the contamination of commercial 15-nitrogen (15N) N2 gas stocks with 15N-enriched ammonium, nitrate and/or nitrite, and nitrous oxide. 15N2 gas is used to estimate N2 fixation rates from incubations of environmental samples by monitoring the incorporation of isotopically labeled 15N2 into organic matter. However, the microbial assimilation of bioavailable 15N-labeled N2 gas contaminants, nitrate, nitrite, and ammonium, is liable to lead to the inflation or false detection of N2 fixation rates. 15N2 gas procured from three major suppliers was analyzed for the presence of these 15N-contaminants. Substantial concentrations of 15N-contaminants were detected in four Sigma-Aldrich 15N2 lecture bottles from two discrete batch syntheses. Per mole of 15N2 gas, 34 to 1900 µmoles of 15N-ammonium, 1.8 to 420 µmoles of 15N-nitrate/nitrite, and ≥21 µmoles of 15N-nitrous oxide were detected. One 15N2 lecture bottle from Campro Scientific contained ≥11 µmoles of 15N-nitrous oxide per mole of 15N2 gas, and no detected 15N-nitrate/nitrite at the given experimental 15N2 tracer dilutions. Two Cambridge Isotopes lecture bottles from discrete batch syntheses contained ≥0.81 µmoles 15N-nitrous oxide per mole 15N2, and trace concentrations of 15N-ammonium and 15N-nitrate/nitrite. 15N2 gas equilibrated cultures of the green algae Dunaliella tertiolecta confirmed that the 15N-contaminants are assimilable. A finite-differencing model parameterized using oceanic field conditions typical of N2 fixation assays suggests that the degree of detected 15N-ammonium contamination could yield inferred N2 fixation rates ranging from undetectable, <0.01 nmoles N L−1 d−1, to 530 nmoles N L−1 d−1, contingent on experimental conditions. These rates are comparable to, or greater than, N2 fixation rates commonly detected in field assays. These results indicate that past reports of N2 fixation should be interpreted with caution, and demonstrate that the purity of commercial 15N2 gas must be ensured prior to use in future N2 fixation rate determinations. PMID:25329300
Dabundo, Richard; Lehmann, Moritz F; Treibergs, Lija; Tobias, Craig R; Altabet, Mark A; Moisander, Pia H; Granger, Julie
2014-01-01
We report on the contamination of commercial 15-nitrogen (15N) N2 gas stocks with 15N-enriched ammonium, nitrate and/or nitrite, and nitrous oxide. 15N2 gas is used to estimate N2 fixation rates from incubations of environmental samples by monitoring the incorporation of isotopically labeled 15N2 into organic matter. However, the microbial assimilation of bioavailable 15N-labeled N2 gas contaminants, nitrate, nitrite, and ammonium, is liable to lead to the inflation or false detection of N2 fixation rates. 15N2 gas procured from three major suppliers was analyzed for the presence of these 15N-contaminants. Substantial concentrations of 15N-contaminants were detected in four Sigma-Aldrich 15N2 lecture bottles from two discrete batch syntheses. Per mole of 15N2 gas, 34 to 1900 µmoles of 15N-ammonium, 1.8 to 420 µmoles of 15N-nitrate/nitrite, and ≥21 µmoles of 15N-nitrous oxide were detected. One 15N2 lecture bottle from Campro Scientific contained ≥11 µmoles of 15N-nitrous oxide per mole of 15N2 gas, and no detected 15N-nitrate/nitrite at the given experimental 15N2 tracer dilutions. Two Cambridge Isotopes lecture bottles from discrete batch syntheses contained ≥0.81 µmoles 15N-nitrous oxide per mole 15N2, and trace concentrations of 15N-ammonium and 15N-nitrate/nitrite. 15N2 gas equilibrated cultures of the green algae Dunaliella tertiolecta confirmed that the 15N-contaminants are assimilable. A finite-differencing model parameterized using oceanic field conditions typical of N2 fixation assays suggests that the degree of detected 15N-ammonium contamination could yield inferred N2 fixation rates ranging from undetectable, <0.01 nmoles N L(-1) d(-1), to 530 nmoles N L(-1) d(-1), contingent on experimental conditions. These rates are comparable to, or greater than, N2 fixation rates commonly detected in field assays. These results indicate that past reports of N2 fixation should be interpreted with caution, and demonstrate that the purity of commercial 15N2 gas must be ensured prior to use in future N2 fixation rate determinations.
Bathymetric survey of Carroll Creek Tributary to Lake Tuscaloosa, Tuscaloosa County, Alabama, 2010
Lee, K.G.; Kimbrow, D.R.
2011-01-01
The U.S. Geological Survey, in cooperation with the City of Tuscaloosa, conducted a bathymetric survey of Carroll Creek on May 12-13, 2010. Carroll Creek is one of the major tributaries to Lake Tuscaloosa and contributes about 6 percent of the surface drainage area. A 3.5-mile reach of Carroll Creek was surveyed to prepare a current bathymetric map, determine storage capacities at specified water-surface elevations, and compare current conditions to historical cross sections. Bathymetric data were collected using a high-resolution interferometric mapping system consisting of a phase-differencing bathymetric sonar, a navigation and motion-sensing system, and a data acquisition computer. To assess the accuracy of the interferometric mapping system and document depths in shallow areas of the study reach, an electronic total station was used to survey 22 cross sections spaced 50 feet apart. The data were combined and processed, and a triangulated irregular network (TIN) and contour map were generated. Cross sections were extracted from the TIN and compared with historical cross sections. Between 2004 and 2010, the area at the confluence of Carroll Creek and the main run of Lake Tuscaloosa (cross section 1) showed little to no change in capacity or area. Another area (cross section 2) showed a maximum change in elevation of 4 feet and an average change of 3 feet. At the water-surface elevation of 224 feet (National Geodetic Vertical Datum of 1929), the cross-sectional area has changed by 260 square feet, a loss of 28 percent of cross-sectional storage area. The loss of area may be attributed to sedimentation in Carroll Creek and (or) the difference in accuracy between the two surveys.
Hydro-geomorphology of the middle Elwha River, Washington, following dam removal
NASA Astrophysics Data System (ADS)
Morgan, J. A.; Nelson, P. A.; Brogan, D. J.
2017-12-01
Dam removal is an increasingly common river restoration practice, which can produce dramatic increases in sediment supply to downstream reaches. There remains, however, considerable uncertainty in how mesoscale morphological units (e.g., riffles and pools) respond to the flow and sediment supply changes associated with dam removal. The recent removal of Glines Canyon Dam on the Elwha River in Washington State provides a natural setting to explore how increased sediment supply due to dam removal may affect downstream reaches. Here, we present observations and surveys documenting how a 1 km reach, located approximately 5 km downstream of the former dam site, has evolved following dam removal. Annual topographic/bathymetric surveys were conducted in 2014-2016 using RTK-GNSS methods, and these surveys were coupled with airborne lidar to create continuous surface maps of the valley bottom. Differencing the elevation models reveals channel widening and migration due to lateral bank retreat and bar aggradation. Analysis of aerial imagery dating back to 1939 suggests that rates of both widening and meander migration have increased following dam removal. We also used results from depth-averaged hydrodynamic modeling with a fuzzy c-means clustering approach to delineate riffle and pool units; this analysis suggests that riffle and pool areas stayed relatively consistent from 2014 to 2015, while both decreased from 2015 to 2016. In the absence of any considerable change in the hydrologic regime, these higher rates of change are inferred to be the result of the increased sediment supply. Our results, which indicate an increased dynamism due directly to the amplified sediment supply, have the potential to further inform river managers and restoration specialists who oversee projects related to changing sediment regimes.
1980-07-01
[Figure and table residue from the scanned report: Figure 3.4 shows a rectangular-form function g(t) centered at 0 with period n; Table 4.2 lists the average bi-monthly expenses of a typical family in Kabiria (a city in northern Algeria) over the period Jan.-Feb. 1975 through Nov.-Dec. 1977 and their Fourier representation (Fourier coefficients).]
Viscous flow computations using a second-order upwind differencing scheme
NASA Technical Reports Server (NTRS)
Chen, Y. S.
1988-01-01
A wide range of fluid flow problems is computed using the Navier-Stokes equations in primitive-variable form; a mixed second-order upwind scheme approximates the convective terms of the transport equations, and its accuracy is verified for convection-dominated, high-Reynolds-number flows. An adaptive dissipation scheme is used to capture supersonic shocks monotonically. Many benchmark fluid flow problems, compressible and incompressible, laminar and turbulent, spanning a wide range of Mach and Reynolds numbers, are studied to verify the accuracy and robustness of this numerical method.
A general algorithm using finite element method for aerodynamic configurations at low speeds
NASA Technical Reports Server (NTRS)
Balasubramanian, R.
1975-01-01
A finite element algorithm for numerical simulation of two-dimensional, incompressible, viscous flows was developed. The Navier-Stokes equations are suitably modelled to facilitate direct solution for the essential flow parameters. A leap-frog time differencing and Galerkin minimization of these model equations yields the finite element algorithm. The finite elements are triangular with bicubic shape functions approximating the solution space. The finite element matrices are unsymmetrically banded to facilitate savings in storage. An unsymmetric L-U decomposition is performed on the finite element matrices to obtain the solution for the boundary value problem.
Room temperature single-photon detectors for high bit rate quantum key distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Comandar, L. C.; Patel, K. A. (Engineering Department, Cambridge University, 9 J J Thomson Ave., Cambridge CB3 0FA)
We report room temperature operation of telecom wavelength single-photon detectors for high bit rate quantum key distribution (QKD). Room temperature operation is achieved using InGaAs avalanche photodiodes integrated with electronics based on the self-differencing technique that increases avalanche discrimination sensitivity. Despite using room temperature detectors, we demonstrate QKD with record secure bit rates over a range of fiber lengths (e.g., 1.26 Mbit/s over 50 km). Furthermore, our results indicate that operating the detectors at room temperature increases the secure bit rate for short distances.
On the sensitivity of complex, internally coupled systems
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
A method is presented for computing sensitivity derivatives with respect to independent (input) variables for complex, internally coupled systems, while avoiding the cost and inaccuracy of finite differencing performed on the entire system analysis. The method entails two alternative algorithms: the first is based on the classical implicit function theorem formulated on residuals of governing equations, and the second develops the system sensitivity equations in a new form using the partial (local) sensitivity derivatives of the output with respect to the input of each part of the system. A few application examples are presented to illustrate the discussion.
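The second algorithm described above, assembling system sensitivity equations from the local partial derivatives of each subsystem, can be illustrated with a toy two-discipline coupled system. The functions and partial derivatives below are invented for illustration only; the final check against finite differencing of the converged coupled analysis is included to show what the assembled approach avoids recomputing.

```python
# Sketch of system sensitivity equations for a hypothetical coupled system
# Y1 = f1(x, Y2), Y2 = f2(x, Y1).  All functions and partials are made up.
import numpy as np

f1 = lambda x, y2: x + 0.5 * y2
f2 = lambda x, y1: 2.0 * x + 0.3 * y1

# Local (partial) sensitivities of each discipline's output
df1_dx, df1_dy2 = 1.0, 0.5
df2_dx, df2_dy1 = 2.0, 0.3

# System sensitivity equations: (I - J) * dY/dx = dF/dx
J = np.array([[0.0, df1_dy2],
              [df2_dy1, 0.0]])
rhs = np.array([df1_dx, df2_dx])
dY_dx = np.linalg.solve(np.eye(2) - J, rhs)

# Check against finite differencing of the converged coupled analysis
def solve_coupled(x, iters=50):
    y1 = y2 = 0.0
    for _ in range(iters):
        y1, y2 = f1(x, y2), f2(x, y1)
    return np.array([y1, y2])

h = 1e-6
fd = (solve_coupled(1.0 + h) - solve_coupled(1.0 - h)) / (2 * h)
print(dY_dx, fd)   # both approximately [2.353, 2.706]
```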
Computational design of the basic dynamical processes of the UCLA general circulation model
NASA Technical Reports Server (NTRS)
Arakawa, A.; Lamb, V. R.
1977-01-01
The 12-layer UCLA general circulation model encompassing troposphere and stratosphere (and superjacent 'sponge layer') is described. Prognostic variables are: surface pressure, horizontal velocity, temperature, water vapor and ozone in each layer, planetary boundary layer (PBL) depth, temperature, moisture and momentum discontinuities at PBL top, ground temperature and water storage, and mass of snow on ground. Selection of space finite-difference schemes for homogeneous incompressible flow, with/without a free surface, nonlinear two-dimensional nondivergent flow, enstrophy conserving schemes, momentum advection schemes, vertical and horizontal difference schemes, and time differencing schemes are discussed.
Solidification of a binary mixture
NASA Technical Reports Server (NTRS)
Antar, B. N.
1982-01-01
The time dependent concentration and temperature profiles of a finite layer of a binary mixture are investigated during solidification. The coupled time dependent Stefan problem is solved numerically using an implicit finite differencing algorithm with the method of lines. Specifically, the temporal operator is approximated by an implicit finite difference operator, resulting in a coupled set of ordinary differential equations for the spatial distribution of the temperature and concentration at each time step. Since the resulting set of ordinary differential equations forms a boundary value problem with matching conditions at an unknown spatial point, the method of invariant imbedding is used for its solution.
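A stripped-down illustration of the method-of-lines idea with an implicit temporal operator is given below for the ordinary 1-D heat equation on a fixed domain. The moving-boundary (Stefan) coupling and the invariant-imbedding solution are deliberately omitted, and the grid and parameters are arbitrary.

```python
# Simplified illustration (not the paper's solver): method of lines for
# u_t = alpha * u_xx on a fixed domain with a backward-Euler (implicit) step.
import numpy as np

n, alpha, nsteps = 50, 1.0, 200
dx, dt = 1.0 / n, 1e-3
u = np.zeros(n + 1)
u[0] = 1.0                              # fixed temperature at the left boundary

# Spatial discretization -> ODE system du/dt = A u + b (interior nodes only)
r = alpha / dx ** 2
A = r * (np.diag(-2.0 * np.ones(n - 1))
         + np.diag(np.ones(n - 2), 1)
         + np.diag(np.ones(n - 2), -1))
b = np.zeros(n - 1)
b[0] = r * u[0]                         # contribution of the left boundary value

# Backward Euler: (I - dt*A) u^{n+1} = u^n + dt*b
M = np.eye(n - 1) - dt * A
for _ in range(nsteps):
    u[1:-1] = np.linalg.solve(M, u[1:-1] + dt * b)

print("temperature at mid-domain:", u[n // 2])
```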
Autonomous Relative Navigation for Formation-Flying Satellites Using GPS
NASA Technical Reports Server (NTRS)
Gramling, Cheryl; Carpenter, J. Russell; Long, Anne; Kelbel, David; Lee, Taesul
2000-01-01
The Goddard Space Flight Center is currently developing advanced spacecraft systems to provide autonomous navigation and control of formation flyers. This paper discusses autonomous relative navigation performance for a formation of four eccentric, medium-altitude Earth-orbiting satellites using Global Positioning System (GPS) Standard Positioning Service (SPS) and "GPS-like" intersatellite measurements. The performance of several candidate relative navigation approaches is evaluated. These analyses indicate that an autonomous relative navigation position accuracy of 1 meter root-mean-square can be achieved by differencing high-accuracy filtered solutions if only measurements from common GPS space vehicles are used in the independently estimated solutions.
NASA Astrophysics Data System (ADS)
Brogan, D. J.; Nelson, P. A.; MacDonald, L. H.
2016-12-01
Considerable advances have been made in understanding post-wildfire runoff, erosion, and mass wasting at the hillslope and small watershed scale, but the larger-scale effects on flooding, water quality, and sedimentation are often the most significant impacts. The problem is that we have virtually no watershed-specific tools to quantify the proportion of eroded sediment that is stored or delivered from watersheds larger than about 2-5 km2. In this study we are quantifying how channel and valley bottom characteristics affect post-wildfire sediment storage and delivery. Our research is based on intensive monitoring of sediment storage over time in two 15 km2 watersheds (Skin Gulch and Hill Gulch) burned in the 2012 High Park Fire using repeated cross section and longitudinal surveys from fall 2012 through summer 2016, five airborne laser scanning (ALS) datasets from fall 2012 through summer 2015, and both radar and ground-based precipitation measurements. We have computed changes in sediment storage by differencing successive cross sections, and computed spatially explicit changes in successive ALS point clouds using the multiscale model to model cloud comparison (M3C2) algorithm. These channel changes are being related to potential morphometric controls, including valley width, valley slope, confinement, contributing area, valley expansion or contraction, topographic curvature (planform and profile), and estimated sediment inputs. We hypothesize that maximum rainfall intensity and lateral confinement will be the primary independent variables that describe observed patterns of erosion and deposition, and that the results can help predict post-wildfire sediment delivery and identify high priority areas for restoration.
A numerical differentiation library exploiting parallel architectures
NASA Astrophysics Data System (ADS)
Voglis, C.; Hadjidoukas, P. E.; Lagaris, I. E.; Papageorgiou, D. G.
2009-08-01
We present a software library for numerically estimating first and second order partial derivatives of a function by finite differencing. Various truncation schemes are offered, resulting in corresponding formulas that are accurate to order O(h), O(h^2), and O(h^4), h being the differencing step. The derivatives are calculated via forward, backward and central differences. Care has been taken that only feasible points are used in the case where bound constraints are imposed on the variables. The Hessian may be approximated either from function or from gradient values. There are three versions of the software: a sequential version, an OpenMP version for shared memory architectures and an MPI version for distributed systems (clusters). The parallel versions exploit the multiprocessing capability offered by computer clusters, as well as modern multi-core systems, and, due to the independent character of the derivative computation, the speedup scales almost linearly with the number of available processors/cores. Program summary: Program title: NDL (Numerical Differentiation Library). Catalogue identifier: AEDG_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDG_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 73 030. No. of bytes in distributed program, including test data, etc.: 630 876. Distribution format: tar.gz. Programming language: ANSI FORTRAN-77, ANSI C, MPI, OPENMP. Computer: Distributed systems (clusters), shared memory systems. Operating system: Linux, Solaris. Has the code been vectorised or parallelized?: Yes. RAM: The library uses O(N) internal storage, N being the dimension of the problem. Classification: 4.9, 4.14, 6.5. Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such as optimization, solution of nonlinear systems, etc. The parallel implementation that exploits systems with multiple CPUs is very important for large scale and computationally expensive problems. Solution method: Finite differencing is used with a carefully chosen step that minimizes the sum of the truncation and round-off errors. The parallel versions employ both OpenMP and MPI libraries. Restrictions: The library uses only double precision arithmetic. Unusual features: The software takes into account bound constraints, in the sense that only feasible points are used to evaluate the derivatives, and, given the level of the desired accuracy, the proper formula is automatically employed. Running time: Running time depends on the function's complexity. The test run took 15 ms for the serial distribution, 0.6 s for the OpenMP and 4.2 s for the MPI parallel distribution on 2 processors.
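The forward and central difference formulas of the kind mentioned in the abstract (accurate to O(h) and O(h^2), respectively) can be sketched in a few lines. The code below is an independent illustration, not part of the NDL distribution, and the step-size choice is a common heuristic rather than the library's optimized rule.

```python
# Independent sketch of forward and central finite-difference gradients.
import numpy as np

def grad_forward(f, x, h):                # accurate to O(h)
    fx = f(x)
    return np.array([(f(x + h * e) - fx) / h for e in np.eye(len(x))])

def grad_central(f, x, h):                # accurate to O(h^2)
    return np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                     for e in np.eye(len(x))])

f = lambda x: np.sin(x[0]) * np.exp(x[1])            # test function
x0 = np.array([0.5, 0.2])
h_central = np.cbrt(np.finfo(float).eps)             # ~6e-6, a common heuristic step
exact = np.array([np.cos(0.5) * np.exp(0.2), np.sin(0.5) * np.exp(0.2)])

print("forward error:", np.abs(grad_forward(f, x0, 1e-7) - exact))
print("central error:", np.abs(grad_central(f, x0, h_central) - exact))
```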
Zhao, Xu; Zhang, Baofeng; Liu, Huijuan; Chen, Fayuan; Li, Angzhen; Qu, Jiuhui
2012-05-01
The treatment of plugboard wastewater was performed by an optimized combination of electrocoagulation and electro-Fenton. The organic components with suspended fractions, accounting for 30% of the COD, were preferentially removed via electrocoagulation within the initial 5 min. In contrast, the removal efficiency increased to 76% with the addition of H₂O₂. The electrogenerated Fe²⁺ reacts with H₂O₂ to generate ·OH, which is responsible for the higher COD removal. However, overdosing H₂O₂ consumes the ·OH generated in the electro-Fenton process and leads to low COD removal. The COD removal efficiency decreased with increasing pH. The concentration of Fe²⁺ ions was dependent on the solution pH, H₂O₂ dosage and current density. The changes in organic characteristics during the coagulation and oxidation processes were differenced and evaluated using gel permeation chromatography, fluorescence excitation-emission scans and Fourier transform infrared spectroscopy. The fraction of the wastewater with aromatic structure and large molecular weight was decomposed into fractions with aliphatic structure and small molecular weight in the electro-Fenton process. Copyright © 2012. Published by Elsevier Ltd.
Macroeconomic Fluctuations and Mortality in Postwar Japan
TAPIA GRANADOS, JOSÉ A.
2008-01-01
Recent research has shown that after long-term declining trends are excluded, mortality rates in industrial countries tend to rise in economic expansions and fall in economic recessions. In the present work, co-movements between economic fluctuations and mortality changes in postwar Japan are investigated by analyzing time series of mortality rates and eight economic indicators. To eliminate spurious associations attributable to trends, series are detrended either via Hodrick-Prescott filtering or through differencing. As previously found in other industrial economies, general mortality and age-specific death rates in Japan tend to increase in expansions and drop in recessions, for both males and females. The effect, which is slightly stronger for males, is particularly noticeable in those aged 45–64. Deaths attributed to heart disease, pneumonia, accidents, liver disease, and senility—making up about 41% of total mortality—tend to fluctuate procyclically, increasing in expansions. Suicides, as well as deaths attributable to diabetes and hypertensive disease, make up about 4% of total mortality and fluctuate countercyclically, increasing in recessions. Deaths attributed to other causes, making up about half of total deaths, don’t show a clearly defined relationship with the fluctuations of the economy. PMID:18613484
The Response of Tropospheric Ozone to ENSO in Observations and a Chemistry-Climate Simulation
NASA Technical Reports Server (NTRS)
Oman, L. D.; Douglass, A. R.; Ziemke, J. R.; Waugh, D. W.; Rodriguez, J. M.; Nielsen, J. E.
2012-01-01
The El Nino-Southern Oscillation (ENSO) is the dominant mode of tropical variability on interannual time scales. ENSO appears to extend its influence into the chemical composition of the tropical troposphere. Recent results have revealed an ENSO-induced wave-1 anomaly in observed tropical tropospheric column ozone. This results in a dipole over the western and eastern tropical Pacific, whereby differencing the two regions produces an ozone anomaly with an extremely high correlation to the Nino 3.4 Index. We have successfully reproduced this result using the Goddard Earth Observing System Version 5 (GEOS-5) general circulation model coupled to a comprehensive stratospheric and tropospheric chemical mechanism forced with observed sea surface temperatures over the past 25 years. An examination of the modeled ozone field reveals the vertical contributions of tropospheric ozone to the column over the western and eastern Pacific regions. We will show targeted comparisons with observations from NASA's Aura satellite Microwave Limb Sounder (MLS) and the Tropospheric Emission Spectrometer (TES) to provide insight into the vertical structure of ozone changes. The tropospheric ozone response to ENSO could be a useful chemistry-climate model evaluation tool and should be considered in future modeling assessments.
New multigrid approach for three-dimensional unstructured, adaptive grids
NASA Technical Reports Server (NTRS)
Parthasarathy, Vijayan; Kallinderis, Y.
1994-01-01
A new multigrid method with adaptive unstructured grids is presented. The three-dimensional Euler equations are solved on tetrahedral grids that are adaptively refined or coarsened locally. The multigrid method is employed to propagate the fine grid corrections more rapidly by redistributing the changes-in-time of the solution from the fine grid to the coarser grids to accelerate convergence. A new approach is employed that uses the parent cells of the fine grid cells in an adapted mesh to generate successively coarser levels of multigrid. This obviates the need for the generation of a sequence of independent, nonoverlapping grids as well as the relatively complicated operations that need to be performed to interpolate the solution and the residuals between the independent grids. The solver is an explicit, vertex-based, finite volume scheme that employs edge-based data structures and operations. Spatial discretization is of central-differencing type combined with special upwind-like smoothing operators. Application cases include adaptive solutions obtained with multigrid acceleration for supersonic and subsonic flow over a bump in a channel, as well as transonic flow around the ONERA M6 wing. Two levels of multigrid resulted in a reduction in the number of iterations by a factor of 5.
Macroeconomic fluctuations and mortality in postwar Japan.
Granados, José A Tapia
2008-05-01
Recent research has shown that after long-term declining trends are excluded, mortality rates in industrial countries tend to rise in economic expansions and fall in economic recessions. In the present work, co-movements between economic fluctuations and mortality changes in postwar Japan are investigated by analyzing time series of mortality rates and eight economic indicators. To eliminate spurious associations attributable to trends, series are detrended either via Hodrick-Prescott filtering or through differencing. As previously found in other industrial economies, general mortality and age-specific death rates in Japan tend to increase in expansions and drop in recessions, for both males and females. The effect, which is slightly stronger for males, is particularly noticeable in those aged 45-64. Deaths attributed to heart disease, pneumonia, accidents, liver disease, and senility--making up about 41% of total mortality--tend to fluctuate procyclically, increasing in expansions. Suicides, as well as deaths attributable to diabetes and hypertensive disease, make up about 4% of total mortality and fluctuate countercyclically, increasing in recessions. Deaths attributed to other causes, making up about half of total deaths, don't show a clearly defined relationship with the fluctuations of the economy.
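The detrending-by-differencing step described in both versions of this abstract can be illustrated with synthetic series. The numbers below are invented and only show how differencing removes opposing long-term trends so that the cyclical co-movement of two series becomes visible; the study's actual data and Hodrick-Prescott filtering are not reproduced.

```python
# Synthetic illustration of detrending by first differencing.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(50)
cycle = rng.normal(0, 1, t.size)                   # shared business-cycle shocks
log_gdp = 0.03 * t + 0.02 * cycle                  # upward economic trend plus cycle
log_mort = -0.01 * t + 0.005 * cycle + rng.normal(0, 0.002, t.size)  # declining mortality trend

level_corr = np.corrcoef(log_gdp, log_mort)[0, 1]                    # dominated by opposing trends
diff_corr = np.corrcoef(np.diff(log_gdp), np.diff(log_mort))[0, 1]   # procyclical co-movement
print(f"levels: r = {level_corr:+.2f}   first differences: r = {diff_corr:+.2f}")
```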
Reflector surface distortion analysis techniques (thermal distortion analysis of antennas in space)
NASA Technical Reports Server (NTRS)
Sharp, R.; Liao, M.; Giriunas, J.; Heighway, J.; Lagin, A.; Steinbach, R.
1989-01-01
A group of large computer programs are used to predict the farfield antenna pattern of reflector antennas in the thermal environment of space. Thermal Radiation Analysis Systems (TRASYS) is a thermal radiation analyzer that interfaces with Systems Improved Numerical Differencing Analyzer (SINDA), a finite difference thermal analysis program. The programs linked together for this analysis can now be used to predict antenna performance in the constantly changing space environment. They can be used for very complex spacecraft and antenna geometries. Performance degradation caused by methods of antenna reflector construction and materials selection are also taken into consideration. However, the principal advantage of using this program linkage is to account for distortions caused by the thermal environment of space and the hygroscopic effects of the dry-out of graphite/epoxy materials after the antenna is placed into orbit. The results of this type of analysis could ultimately be used to predict antenna reflector shape versus orbital position. A phased array antenna distortion compensation system could then use this data to make RF phase front corrections. That is, the phase front could be adjusted to account for the distortions in the antenna feed and reflector geometry for a particular orbital position.
A CMOS In-Pixel CTIA High Sensitivity Fluorescence Imager.
Murari, Kartikeya; Etienne-Cummings, Ralph; Thakor, Nitish; Cauwenberghs, Gert
2011-10-01
Traditionally, charge coupled device (CCD) based image sensors have held sway over the field of biomedical imaging. Complementary metal oxide semiconductor (CMOS) based imagers so far lack sensitivity leading to poor low-light imaging. Certain applications including our work on animal-mountable systems for imaging in awake and unrestrained rodents require the high sensitivity and image quality of CCDs and the low power consumption, flexibility and compactness of CMOS imagers. We present a 132×124 high sensitivity imager array with a 20.1 μm pixel pitch fabricated in a standard 0.5 μ CMOS process. The chip incorporates n-well/p-sub photodiodes, capacitive transimpedance amplifier (CTIA) based in-pixel amplification, pixel scanners and delta differencing circuits. The 5-transistor all-nMOS pixel interfaces with peripheral pMOS transistors for column-parallel CTIA. At 70 fps, the array has a minimum detectable signal of 4 nW/cm(2) at a wavelength of 450 nm while consuming 718 μA from a 3.3 V supply. Peak signal to noise ratio (SNR) was 44 dB at an incident intensity of 1 μW/cm(2). Implementing 4×4 binning allowed the frame rate to be increased to 675 fps. Alternately, sensitivity could be increased to detect about 0.8 nW/cm(2) while maintaining 70 fps. The chip was used to image single cell fluorescence at 28 fps with an average SNR of 32 dB. For comparison, a cooled CCD camera imaged the same cell at 20 fps with an average SNR of 33.2 dB under the same illumination while consuming over a watt.
A CMOS In-Pixel CTIA High Sensitivity Fluorescence Imager
Murari, Kartikeya; Etienne-Cummings, Ralph; Thakor, Nitish; Cauwenberghs, Gert
2012-01-01
Traditionally, charge coupled device (CCD) based image sensors have held sway over the field of biomedical imaging. Complementary metal oxide semiconductor (CMOS) based imagers so far lack sensitivity leading to poor low-light imaging. Certain applications including our work on animal-mountable systems for imaging in awake and unrestrained rodents require the high sensitivity and image quality of CCDs and the low power consumption, flexibility and compactness of CMOS imagers. We present a 132×124 high sensitivity imager array with a 20.1 μm pixel pitch fabricated in a standard 0.5 μ CMOS process. The chip incorporates n-well/p-sub photodiodes, capacitive transimpedance amplifier (CTIA) based in-pixel amplification, pixel scanners and delta differencing circuits. The 5-transistor all-nMOS pixel interfaces with peripheral pMOS transistors for column-parallel CTIA. At 70 fps, the array has a minimum detectable signal of 4 nW/cm2 at a wavelength of 450 nm while consuming 718 μA from a 3.3 V supply. Peak signal to noise ratio (SNR) was 44 dB at an incident intensity of 1 μW/cm2. Implementing 4×4 binning allowed the frame rate to be increased to 675 fps. Alternately, sensitivity could be increased to detect about 0.8 nW/cm2 while maintaining 70 fps. The chip was used to image single cell fluorescence at 28 fps with an average SNR of 32 dB. For comparison, a cooled CCD camera imaged the same cell at 20 fps with an average SNR of 33.2 dB under the same illumination while consuming over a watt. PMID:23136624
Gravity Wave Variances and Propagation Derived from AIRS Radiances
NASA Technical Reports Server (NTRS)
Gong, Jie; Wu, Dong L.; Eckermann, S. D.
2012-01-01
As the first gravity wave (GW) climatology study using nadir-viewing infrared sounders, 50 Atmospheric Infrared Sounder (AIRS) radiance channels are selected to estimate GW variances at pressure levels between 2-100 hPa. The GW variance for each scan in the cross-track direction is derived from radiance perturbations in the scan, independently of adjacent scans along the orbit. Since the scanning swaths are perpendicular to the satellite orbits, which are inclined meridionally at most latitudes, the zonal component of GW propagation can be inferred by differencing the variances derived between the westmost and the eastmost viewing angles. Consistent with previous GW studies using various satellite instruments, monthly mean AIRS variance shows large enhancements over meridionally oriented mountain ranges as well as some islands at winter hemisphere high latitudes. Enhanced wave activities are also found above tropical deep convective regions. GWs prefer to propagate westward above mountain ranges, and eastward above deep convection. AIRS 90 field-of-views (FOVs), ranging from +48 deg. to -48 deg. off nadir, can detect large-amplitude GWs with a phase velocity propagating preferentially at steep angles (e.g., those from orographic and convective sources). The annual cycle dominates the GW variances and the preferred propagation directions for all latitudes. Indication of a weak two-year variation in the tropics is found, which is presumably related to the Quasi-biennial oscillation (QBO). AIRS geometry makes its out-tracks capable of detecting GWs with vertical wavelengths substantially shorter than the thickness of instrument weighting functions. The novel discovery of AIRS capability of observing shallow inertia GWs will expand the potential of satellite GW remote sensing and provide further constraints on the GW drag parameterization schemes in the general circulation models (GCMs).
Garrity, Steven R.; Allen, Craig D.; Brumby, Steven P.; Gangodagamage, Chandana; McDowell, Nate G.; Cai, D. Michael
2013-01-01
Widespread tree mortality events have recently been observed in several biomes. To effectively quantify the severity and extent of these events, tools that allow for rapid assessment at the landscape scale are required. Past studies using high spatial resolution satellite imagery have primarily focused on detecting green, red, and gray tree canopies during and shortly after tree damage or mortality has occurred. However, detecting trees in various stages of death is not always possible due to limited availability of archived satellite imagery. Here we assess the capability of high spatial resolution satellite imagery for tree mortality detection in a southwestern U.S. mixed species woodland using archived satellite images acquired prior to mortality and well after dead trees had dropped their leaves. We developed a multistep classification approach that uses: supervised masking of non-tree image elements; bi-temporal (pre- and post-mortality) differencing of normalized difference vegetation index (NDVI) and red:green ratio (RGI); and unsupervised multivariate clustering of pixels into live and dead tree classes using a Gaussian mixture model. Classification accuracies were improved in a final step by tuning the rules of pixel classification using the posterior probabilities of class membership obtained from the Gaussian mixture model. Classifications were produced for two images acquired post-mortality with overall accuracies of 97.9% and 98.5%, respectively. Classified images were combined with land cover data to characterize the spatiotemporal characteristics of tree mortality across areas with differences in tree species composition. We found that 38% of tree crown area was lost during the drought period between 2002 and 2006. The majority of tree mortality during this period was concentrated in piñon-juniper (Pinus edulis-Juniperus monosperma) woodlands. An additional 20% of the tree canopy died or was removed between 2006 and 2011, primarily in areas experiencing wildfire and management activity. Our results demonstrate that unsupervised clustering of bi-temporal NDVI and RGI differences can be used to detect tree mortality resulting from numerous causes and in several forest cover types.
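As an illustration of the bi-temporal differencing and Gaussian-mixture clustering steps described above (not the authors' implementation), the sketch below clusters synthetic per-pixel NDVI and red:green ratio differences into live and dead classes and keeps only high-posterior assignments. The feature distributions and the 0.9 posterior threshold are invented.

```python
# Hedged sketch: cluster synthetic bi-temporal index differences with a GMM.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_live, n_dead = 8000, 2000
# Per-pixel differences (post minus pre): dead canopies lose NDVI and gain RGI
d_ndvi = np.concatenate([rng.normal(0.00, 0.03, n_live), rng.normal(-0.30, 0.05, n_dead)])
d_rgi = np.concatenate([rng.normal(0.00, 0.05, n_live), rng.normal(0.25, 0.07, n_dead)])
X = np.column_stack([d_ndvi, d_rgi])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)
post = gmm.predict_proba(X)

# Take the component with the lower mean dNDVI as "dead", then keep only
# confident assignments, loosely mimicking the rule-tuning step above.
dead_comp = int(np.argmin(gmm.means_[:, 0]))
confident_dead = (labels == dead_comp) & (post[:, dead_comp] > 0.9)
print("pixels flagged as dead canopy:", int(confident_dead.sum()))
```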
NASA Astrophysics Data System (ADS)
Rupper, S.; Maurer, J. M.; Schaefer, J. M.; Tsering, K.; Rinzin, T.; Dorji, C.; Johnson, E. S.; Cook, E. R.
2014-12-01
The rapid retreat of many glaciers in the monsoonal Himalaya is of potential societal concern. However, the retreat pattern in the region has been very heterogeneous, likely due in part to the inherent heterogeneity of climate and glaciers within the region. Assessing the impacts of glacier change on water resources, hydroelectric power, and hazard potential requires a detailed understanding of this potentially complex spatial pattern of glacier sensitivity to climate change. Here we quantify glacier surface-mass balance and meltwater flux across the entire glacierized region of the Bhutanese watershed using a full surface-energy and -mass balance model validated with field data. We then test the sensitivity of the glaciers to climatic change and compare the results to a thirty-year record of glacier volume changes. Bhutan is chosen because it (1) sits in the bulls-eye of the monsoon, (2) has >600 glaciers that exhibit the extreme glacier heterogeneity typical of the Himalayas, and (3) faces many of the economic and hazard challenges associated with glacier changes in the Himalaya. Therefore, the methods and results from this study should be broadly applicable to other regions of the monsoonal Himalaya. Our modeling results show a complex spatial pattern of glacier sensitivity to changes in climate across the Bhutanese Himalaya. However, our results also show that <15% of the glaciers in Bhutan account for >90% of the total meltwater flux, and that these glaciers are uniformly the glaciers most sensitive to changes in temperature (and less sensitive to other climate variables). We compare these results to a thirty-year record of glacier volume changes over the same region. In particular, we extract DEMs and orthorectified imagery from 1976 historical spy satellite images and 2006 ASTER images. DEM differencing shows that the glaciers that have changed most over the past thirty years also have the highest modeled temperature sensitivity. These results suggest that, despite the complex glacier heterogeneity in the region, the regional meltwater resources are controlled by a very small percentage of the glaciers, and that these glaciers are particularly vulnerable to changes in temperature.
NASA Astrophysics Data System (ADS)
Khare, S.; Latifi, H.; Ghosh, K.
2016-06-01
To assess the phenological changes in Moist Deciduous Forest (MDF) of the western Himalayan region of India, we carried out NDVI time series analysis from 2013 to 2015 using Landsat 8 OLI data. We used the vegetation index differencing method to calculate the change in NDVI (NDVIchange) during the pre- and post-monsoon seasons, and these changes were used to assess the phenological behaviour of MDF by taking the effect of a set of environmental variables into account. To understand the effect of environmental variables on the change in phenology, we designed a linear regression analysis with sample-based NDVIchange values as the response variable and elevation, aspect, and Land Surface Temperature (LST) as explanatory variables. The Landsat-8 derived phenology transition stages were validated by calculating the phenology variation from Nov 2008 to April 2009 using Landsat-7, which has the same spatial resolution as Landsat-8. The Landsat-7 derived NDVI trajectories were plotted in accordance with the MODIS derived phenology stages (from Nov 2008 to April 2009) of MDF. Results indicate that the Landsat-8 derived NDVI trajectories describing the phenology variation of MDF during the spring, monsoon, autumn and winter seasons agreed closely with the Landsat-7 and MODIS derived phenology transitions from Nov 2008 to April 2009. Furthermore, statistical analysis showed statistically significant correlations (p < 0.05) between the environmental variables and the NDVIchange between the full greenness and maximum frequency stages of Onset of Greenness (OG) activity. The major change in NDVI was observed in the medium (600 to 650 m) and maximum (650 to 750 m) elevation areas. The change in LST also proved to be highly influential. The results of this study can be used for large scale monitoring of difficult-to-reach mountainous forests, with additional implications in biodiversity assessment. Given a sufficient amount of available cloud-free imagery, detailed phenological trends across mountainous forests could be explained.
Post-fire suspended sediment dynamics in a Mediterranean terraced catchment using a nested approach
NASA Astrophysics Data System (ADS)
Garcia-Comendador, Julián; Fortesa, Josep; Calsamiglia, Aleix; Calvo-Cases, Adolfo; Estrany, Joan
2017-04-01
Wildfires cause serious disturbances in the hydrological and sediment dynamics at the catchment scale, modifying the runoff generation response and the sediment delivery. Analyses of hysteretic loops can help to clarify some landscape changes induced by fire. These spatio-temporal relationships between discharge and sediment transport at the event scale allow the location of sediment sources, the availability or depletion of sediment, and the precipitation threshold necessary to generate functional hillslope-channel connectivity to be determined. In addition, a nested catchment approach allows the characterization of the hydro-sedimentological dynamics in different landscape compartments, observing the incidence of the changes generated in the landscape and its evolution, in order to control soil erosion and to implement useful mitigation practices after fire. In July 2013 a large wildfire (2,450 ha) severely affected the western part of Mallorca (Balearic Islands, Spain). The hydrological and sediment delivery processes were assessed in a representative catchment during the first three post-fire hydrological years, when the window of disturbance is typically most open. A nested approach was applied in which two gauging stations (i.e., US 1.2 km² and DS 4.8 km²) were established in September 2013 with continuous measurement of rainfall, water and sediment yield. At DS, minimal runoff (i.e., 11 mm, with a 2% runoff coefficient) and low sediment yield (i.e., 6.3 t km⁻² yr⁻¹) were generated on average over the study period, in which the average rainfall amount (i.e., 468 ± 141 mm) and intensities were representative of long-term records. The hysteretic analysis allowed a better understanding of the effects of wildfires and terraces on sediment yields. For the whole study period, the percentage distribution was 43% (US; two monitored years) and 40% (DS; three monitored years) for clockwise loops, versus 57% (US) and 60% (DS) for counter-clockwise loops. This percentage of counter-clockwise loops is high compared with other studies of non-burned Mediterranean catchments, probably related to the increased sensitivity of the landscape after wildfire perturbation. During the following years, this percentage, as well as the sediment yield, showed a significant decrease related to vegetation recovery. The findings also illustrated contrasting behaviour between the nested catchments. Of the floods recorded at both US and DS, only 40% showed the same hysteresis behaviour. Counter-clockwise loops were predominant at US because of the higher hillslope-channel connectivity of the upstream parts of the catchment, whilst the predominance of clockwise loops at DS was indicative of the mobilization of sediment deposited along the river channel and its adjacent areas. These contrasting patterns can be attributed to sediment conveyance losses and storage along the stream channel between the stations, as well as to the size characteristics and buffering effect of the nested catchments.
EXPONENTIAL TIME DIFFERENCING FOR HODGKIN–HUXLEY-LIKE ODES
Börgers, Christoph; Nectow, Alexander R.
2013-01-01
Several authors have proposed the use of exponential time differencing (ETD) for Hodgkin–Huxley-like partial and ordinary differential equations (PDEs and ODEs). For Hodgkin–Huxley-like PDEs, ETD is attractive because it can deal effectively with the stiffness issues that diffusion gives rise to. However, large neuronal networks are often simulated assuming “space-clamped” neurons, i.e., using the Hodgkin–Huxley ODEs, in which there are no diffusion terms. Our goal is to clarify whether ETD is a good idea even in that case. We present a numerical comparison of first- and second-order ETD with standard explicit time-stepping schemes (Euler’s method, the midpoint method, and the classical fourth-order Runge–Kutta method). We find that in the standard schemes, the stable computation of the very rapid rising phase of the action potential often forces time steps of a small fraction of a millisecond. This can result in an expensive calculation yielding greater overall accuracy than needed. Although it is tempting at first to try to address this issue with adaptive or fully implicit time-stepping, we argue that neither is effective here. The main advantage of ETD for Hodgkin–Huxley-like systems of ODEs is that it allows underresolution of the rising phase of the action potential without causing instability, using time steps on the order of one millisecond. When high quantitative accuracy is not necessary and perhaps, because of modeling inaccuracies, not even useful, ETD allows much faster simulations than standard explicit time-stepping schemes. The second-order ETD scheme is found to be substantially more accurate than the first-order one even for large values of Δt. PMID:24058276
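A minimal sketch of first-order exponential time differencing (ETD1), one of the schemes compared in the paper, is given below for a stiff scalar test equation that stands in for the fast linear relaxation of a gating variable. The coefficients are invented and the full Hodgkin-Huxley system is not reproduced; the comparison with forward Euler only illustrates the stability advantage at a step size that the explicit scheme cannot tolerate.

```python
# ETD1 for du/dt = L*u + N(t) versus forward Euler, on a stiff test equation.
import numpy as np

L = -50.0                              # fast linear relaxation rate
N = lambda t: 50.0 * np.cos(t)         # slowly varying forcing ("nonlinear" part)
h, T = 0.05, 10.0                      # forward Euler is unstable here (h > 2/|L| = 0.04)
steps = int(T / h)

u_etd, u_fe = 0.0, 0.0
for n in range(steps):
    t = n * h
    # ETD1: integrate the linear part exactly, hold N constant over the step
    u_etd = u_etd * np.exp(L * h) + (np.exp(L * h) - 1.0) / L * N(t)
    # Forward Euler for comparison
    u_fe = u_fe + h * (L * u_fe + N(t))

A, B = 2500.0 / 2501.0, 50.0 / 2501.0                      # exact particular solution coefficients
exact = A * np.cos(T) + B * np.sin(T)
print(f"ETD1 error: {abs(u_etd - exact):.2e}; forward Euler value: {u_fe:.2e} (unstable)")
```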
NASA Technical Reports Server (NTRS)
Olszewski, A. D., Jr.; Wilcox, T. P.; Beckman, Mark
1996-01-01
Many spacecraft are launched today with only an omni-directional (omni) antenna and do not have an onboard Tracking and Data Relay Satellite (TDRS) transponder that is capable of coherently returning a carrier signal through TDRS. Therefore, other means of tracking need to be explored and used to adequately acquire the spacecraft. Differenced One-Way Doppler (DOWD) tracking data are very useful in eliminating the problems associated with the instability of the onboard oscillators when using strictly one-way Doppler data. This paper investigates the TDRS DOWD tracking data received by the Goddard Space Flight Center (GSFC) Flight Dynamics Facility (FDF) during the launch and early orbit phases of the Interplanetary Physics Laboratory (WIND) and the National Oceanic and Atmospheric Administration (NOAA)-J missions. In particular, FDF personnel performed an investigation of the data residuals and made an assessment of the acquisition capabilities of DOWD-based solutions. Comparisons of DOWD solutions with existing data types were performed and analyzed in this study. The evaluation also includes atmospheric editing of the DOWD data and a study of the feasibility of solving for Doppler biases in an attempt to minimize error. Furthermore, by comparing the results from WIND and NOAA-J, an attempt is made to show the limitations involved in using DOWD data for the two different mission profiles. The techniques discussed in this paper benefit the launches of spacecraft that do not have TDRS transponders on board, particularly those launched into a low Earth orbit. The use of DOWD data is a valuable asset to missions that do not have a stable local oscillator to enable high-quality solutions from one-way/return-link Doppler tracking data.
3D unstructured-mesh radiation transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morel, J.
1997-12-31
Three unstructured-mesh radiation transport codes are currently being developed at Los Alamos National Laboratory. The first code is ATTILA, which uses an unstructured tetrahedral mesh in conjunction with standard Sn (discrete-ordinates) angular discretization, standard multigroup energy discretization, and linear-discontinuous spatial differencing. ATTILA solves the standard first-order form of the transport equation using source iteration in conjunction with diffusion-synthetic acceleration of the within-group source iterations. ATTILA is designed to run primarily on workstations. The second code is DANTE, which uses a hybrid finite-element mesh consisting of arbitrary combinations of hexahedra, wedges, pyramids, and tetrahedra. DANTE solves several second-order self-adjoint forms of the transport equation, including the even-parity equation, the odd-parity equation, and a new equation called the self-adjoint angular flux equation. DANTE also offers three angular discretization options: Sn (discrete-ordinates), Pn (spherical harmonics), and SPn (simplified spherical harmonics). DANTE is designed to run primarily on massively parallel message-passing machines, such as the ASCI-Blue machines at LANL and LLNL. The third code is PERICLES, which uses the same hybrid finite-element mesh as DANTE, but solves the standard first-order form of the transport equation rather than a second-order self-adjoint form. PERICLES uses a standard Sn discretization in angle in conjunction with trilinear-discontinuous spatial differencing, and diffusion-synthetic acceleration of the within-group source iterations. PERICLES was initially designed to run on workstations, but a version for massively parallel message-passing machines will be built. The three codes will be described in detail and computational results will be presented.
Array-based satellite phase bias sensing: theory and GPS/BeiDou/QZSS results
NASA Astrophysics Data System (ADS)
Khodabandeh, A.; Teunissen, P. J. G.
2014-09-01
Single-receiver integer ambiguity resolution (IAR) is a measurement concept that makes use of network-derived non-integer satellite phase biases (SPBs), among other corrections, to recover and resolve the integer ambiguities of the carrier-phase data of a single GNSS receiver. If it is realized, the very precise integer ambiguity-resolved carrier-phase data would then contribute to the estimation of the receiver’s position, thus making (near) real-time precise point positioning feasible. Proper definition and determination of the SPBs take a leading part in developing the idea of single-receiver IAR. In this contribution, the concept of array-based between-satellite single-differenced (SD) SPB determination is introduced, which is aimed to reduce the code-dominated precision of the SD-SPB corrections. The underlying model is realized by giving the role of the local reference network to an array of antennas, mounted on rigid platforms, that are separated by short distances so that the same ionospheric delay is assumed to be experienced by all the antennas. To that end, a closed-form expression of the array-aided SD-SPB corrections is presented, thereby proposing a simple strategy to compute the SD-SPBs. After resolving double-differenced ambiguities of the array’s data, the variance of the SD-SPB corrections is shown to be reduced by a factor equal to the number of antennas. This improvement in precision is also affirmed by numerical results of the three GNSSs GPS, BeiDou and QZSS. Experimental results demonstrate that the integer-recovered ambiguities converge to integers faster, upon increasing the number of antennas aiding the SD-SPB corrections.
Tidewater dynamics at Store Glacier, West Greenland from daily repeat UAV surveys
NASA Astrophysics Data System (ADS)
Ryan, Jonathan; Hubbard, Alun; Toberg, Nick; Box, Jason; Todd, Joe; Christoffersen, Poul; Neal, Snooke
2017-04-01
A significant component of the Greenland ice sheet's mass wastage, and of its contribution to sea level rise, is attributed to acceleration and dynamic thinning at its tidewater margins. To improve understanding of the rapid mass loss processes occurring at large tidewater glaciers, we conducted a suite of daily repeat aerial surveys across the terminus of Store Glacier, a large outlet glacier draining the western Greenland Ice Sheet, from May to July 2014 (https://www.youtube.com/watch?v=-y8kauAVAfE). The unmanned aerial vehicles (UAVs) were equipped with digital cameras, which, in combination with onboard GPS, enabled production of high spatial resolution orthophotos and digital elevation models (DEMs) using standard structure-from-motion techniques. These data provide insight into the short-term dynamics of Store Glacier surrounding the break-up of the sea-ice mélange that occurred between 4 and 7 June. Feature tracking of the orthophotos reveals that the mean speed of the terminus is 16-18 m per day, which was independently verified against a high temporal resolution time series derived from an expendable/telemetric GPS deployed at the terminus. Differencing the surface area of successive orthophotos enables quantification of daily calving rates, which increased significantly just after mélange break-up. Likewise, by differencing the bulk freeboard volume of icebergs through time we could also constrain the magnitude and variation of submarine melt. We calculate a mean submarine melt rate of 0.18 m per day throughout the spring period, with relatively little supraglacial runoff and no active meltwater plumes to stimulate fjord circulation and upwelling of deeper, warmer water masses. Finally, we relate calving rates to the zonation and depth of water-filled crevasses, which were prominent across parts of the terminus from June onwards.
NASA Astrophysics Data System (ADS)
Reitman, N. G.; Briggs, R.; Gold, R. D.; DuRoss, C. B.
2015-12-01
Post-earthquake, field-based assessments of surface displacement commonly underestimate offsets observed with remote sensing techniques (e.g., InSAR, image cross-correlation) because they fail to capture the total deformation field. Modern earthquakes are readily characterized by comparing pre- and post-event remote sensing data, but historical earthquakes often lack pre-event data. To overcome this challenge, we use historical aerial photographs to derive pre-event digital surface models (DSMs), which we compare to modern, post-event DSMs. Our case study focuses on resolving on- and off-fault deformation along the Lost River fault that accompanied the 1983 M6.9 Borah Peak, Idaho, normal-faulting earthquake. We use 343 aerial images from 1952-1966 and vertical control points selected from National Geodetic Survey benchmarks measured prior to 1983 to construct a pre-event point cloud (average ~ 0.25 pts/m2) and corresponding DSM. The post-event point cloud (average ~ 1 pt/m2) and corresponding DSM are derived from WorldView 1 and 2 scenes processed with NASA's Ames Stereo Pipeline. The point clouds and DSMs are coregistered using vertical control points, an iterative closest point algorithm, and a DSM coregistration algorithm. Preliminary results of differencing the coregistered DSMs reveal a signal spanning the surface rupture that is consistent with tectonic displacement. Ongoing work is focused on quantifying the significance of this signal and error analysis. We expect this technique to yield a more complete understanding of on- and off-fault deformation patterns associated with the Borah Peak earthquake along the Lost River fault and to help improve assessments of surface deformation for other historical ruptures.
High-resolution AUV mapping of the 2015 flows at Axial Seamount, Juan de Fuca Ridge
NASA Astrophysics Data System (ADS)
Paduan, J. B.; Chadwick, W. W., Jr.; Clague, D. A.; Le Saout, M.; Caress, D. W.; Thomas, H. J.; Yoerger, D.
2016-12-01
Lava flows erupted in April 2015 at Axial Seamount were mapped at 1-m resolution with the AUV Sentry in August 2015 and the MBARI Mapping AUVs in July 2016 and observed and sampled with ROVs on those same expeditions. Thirty percent of terrain covered by new flows had been mapped by the MBARI AUVs prior to the eruption. Differencing of before and after maps (using ship-collected bathymetry where the AUV had not mapped before) allows calculation of extents and volumes of flows and shows new fissures. The maps reveal unexpected fissure patterns and shifts in the style of flow emplacement through a single eruption. There were 11 separate flows totaling 1.48 × 10⁸ m³ of lava erupted from numerous en echelon fissures over 19 km on the NE caldera floor, on the NE flank, and down the N rift zone. Flows in and around the caldera have maximum thicknesses of 5-19 m. Most erupted as sheet flows and spread along intricate channels that terminated in thin margins. Some utilized pre-existing fissures. Some flows erupted from short fissures, while at least two longer new fissures produced little or no lava. A flow on the upper N rift has a spectacular lava channel flanked by narrow lava pillars supporting a thin roof left after the flow drained. A shatter ring still emanating warm fluid is visible in the map as a 15-m wide low cone. Hundreds of exploded pillows were observed but are not discernable in the bathymetry. The northern-most three flows deep on the N rift are similar in area to the others but comprise the bulk of the eruption volume. Differencing of ship-based bathymetry shows only these flows. Near the eruptive fissures they are sheet flows, but as they flowed downslope they built complexes of coalesced pillow mounds up to 67-128 m thick. Changes in flow morphology occurred through the course of the eruption. Large pillow mounds had molten cores that deformed as the eruption progressed. One flow began as a thin, effusive sheet flow but as the eruption rate decreased, a pillow mound built over the fissure. As the eruption waned on the caldera floor, near the fissure a small inflated margin developed on top of channels from an earlier phase of the flow. Several landslides occurred at the caldera wall. One is near where a 2015 fissure on the caldera floor cut through the caldera-bounding fault into the flank of the volcano.
Temporal and long-term trend analysis of class C notifiable diseases in China from 2009 to 2014
Zhang, Xingyu; Hou, Fengsu; Qiao, Zhijiao; Li, Xiaosong; Zhou, Lijun; Liu, Yuanyuan; Zhang, Tao
2016-01-01
Objectives: Time series models are effective tools for disease forecasting. This study aims to explore the time series behaviour of 11 notifiable diseases in China and to predict their incidence through effective models. Settings and participants: The Chinese Ministry of Health started to publish class C notifiable diseases in 2009. The monthly reported case time series of 11 infectious diseases from the surveillance system between 2009 and 2014 was collected. Methods: We performed a descriptive and a time series study using the surveillance data. Decomposition methods were used to explore (1) their seasonality, expressed in the form of seasonal indices, and (2) their long-term trend, in the form of a linear regression model. Autoregressive integrated moving average (ARIMA) models were established for each disease. Results: The number of cases and deaths caused by hand, foot and mouth disease ranks first among the detected diseases. It occurred most often in May and July and increased, on average, by 0.14126/100 000 per month. The remaining incidence models show good fit except for the influenza and hydatid disease models. Both the hydatid disease and influenza series become white noise after differencing, so no suitable ARIMA model could be fitted for these two diseases. Conclusion: Time series analysis of effective surveillance time series is useful for better understanding the occurrence of the 11 types of infectious disease. PMID:27797981
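The role of differencing in preparing such surveillance series for ARIMA-type modeling can be shown with a synthetic monthly series. The sketch below applies seasonal (lag-12) and then first differencing and reports how the variability due to trend and seasonality is removed; the data are invented, not the notifiable-disease counts analysed in the study.

```python
# Synthetic monthly incidence with trend and seasonality, made approximately
# stationary by seasonal (lag-12) plus first differencing.
import numpy as np

rng = np.random.default_rng(2)
months = np.arange(72)                                    # six years of monthly data
seasonal = 1.0 + 0.6 * np.sin(2 * np.pi * months / 12)    # annual cycle
trend = 0.14 * months                                     # long-term monthly increase
incidence = trend + 5.0 * seasonal + rng.normal(0, 0.3, months.size)

d_seasonal = incidence[12:] - incidence[:-12]             # removes the seasonal cycle
d_both = np.diff(d_seasonal)                              # removes the remaining trend

for name, x in [("raw", incidence), ("seasonal diff", d_seasonal), ("both diffs", d_both)]:
    print(f"{name:14s} std = {x.std():6.2f}")
```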
Comparison of AVIRIS and Landsat ETM+ detection capabilities for burn severity
Van Wagtendonk, Jan W.; Root, Ralph R.; Key, Carl H.
2004-01-01
Our study compares data on burn severity collected from multi-temporal Airborne Visible and Infrared Imaging Spectrometer (AVIRIS) with similar data from the Enhanced Thematic Mapper Plus (ETM+) using the differenced Normalized Burn Ratio (dNBR). Two AVIRIS and ETM+ data acquisitions recorded surface conditions immediately before the Hoover Fire began to spread rapidly and again the following year. Data were validated with 63 field plots using the Composite Burn Index (CBI). The relationship between spectral channels and burn severity was examined by comparing pre- and post-fire datasets. Based on the high burn severity comparison, AVIRIS channels 47 and 60 at wavelengths of 788 and 913 nm showed the greatest negative response to fire. Post-fire reflectance values decreased the most on average at those wavelengths, while channel 210 at 2370 nm showed the greatest positive response on average. Fire increased reflectance the most at that wavelength over the entire measured spectral range. Furthermore, channel 210 at 2370 nm exhibited the greatest variation in spectral response, suggesting potentially high information content for fire severity. Based on general remote sensing principles and the logic of variable spectral responses to fire, dNBR from both sensors should produce useful results in quantifying burn severity. The results verify the band–response relationships to burn severity as seen with ETM+ data and confirm the relationships by way of a distinctly different sensor system.
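For reference, the dNBR computation underlying the comparison above is simple to express in code. The sketch below uses placeholder reflectance values, and the severity breakpoints are one commonly cited example set rather than thresholds calibrated against the CBI plots of this study.

```python
# Sketch of the differenced Normalized Burn Ratio (dNBR) with example severity breaks.
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from NIR and SWIR reflectance (e.g., ETM+ bands 4 and 7)."""
    return (nir - swir) / (nir + swir + 1e-9)

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)

def classify_severity(dnbr_img, breaks=(0.1, 0.27, 0.44, 0.66)):
    """Map dNBR to ordinal classes 0 (unburned/low) ... 4 (high); breaks are illustrative."""
    return np.digitize(dnbr_img, breaks)

# Tiny synthetic example: a burned pixel loses NIR and gains SWIR reflectance
nir_pre, swir_pre = np.array([0.45]), np.array([0.15])
nir_post, swir_post = np.array([0.20]), np.array([0.30])
d = dnbr(nir_pre, swir_pre, nir_post, swir_post)
print(d, classify_severity(d))    # dNBR = 0.7 -> highest severity class
```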
UXO Detection and Characterization using new Berkeley UXO Discriminator (BUD)
NASA Astrophysics Data System (ADS)
Gasperikova, E.; Morrison, H. F.; Smith, J. T.; Becker, A.
2006-05-01
An optimally designed active electromagnetic system (AEM), Berkeley UXO Discriminator, BUD, has been developed for detection and characterization of UXO in the 20 mm to 150 mm size range. The system incorporates three orthogonal transmitters, and eight pairs of differenced receivers. The transmitter-receiver assembly together with the acquisition box, as well as the battery power and GPS receiver, is mounted on a small cart to assure system mobility. BUD not only detects the object itself but also quantitatively determines its size, shape, orientation, and metal content (ferrous or non-ferrous, mixed metals). Moreover, the principal polarizabilities and size of a metallic target can be determined from a single position of the BUD platform. The search for UXO is a two-step process. The object must first be detected and its location determined then the parameters of the object must be defined. A satisfactory classification scheme is one that determines the principal dipole polarizabilities of a target. While UXO objects have a single major polarizability (principal moment) coincident with the long axis of the object and two equal transverse polarizabilities, the scrap metal has all three principal moments entirely different. This description of the inherent polarizabilities of a target is a major advance in discriminating UXO from irregular scrap metal. Our results clearly show that BUD can resolve the intrinsic polarizabilities of a target and that there are very clear distinctions between symmetric intact UXO and irregular scrap metal. Target properties are determined by an inversion algorithm, which at any given time inverts the response to yield the location (x, y, z) of the target, its attitude and its principal polarizabilities (yielding an apparent aspect ratio). Signal-to-noise estimates (or measurements) are interpreted in this inversion to yield error estimates on the location, attitude and polarizabilities. This inversion at a succession of times provides the polarizabilities as a function of time, which can in turn yield the size, true aspect ratio and estimates of the conductivity and permeability of the target. The accuracy of these property estimates depends on the time window over which the polarizability measurements, and their accuracies, are known. Initial tests at a local site over a variety of test objects and inert UXOs showed excellent detection and characterization results within the predicted size-depth range. This research was funded by the U.S. Department of Defense under ESTCP Project # UX-0437.
Chronic Health Outcomes and Prescription Drug Copayments in Medicaid.
Kostova, Deliana; Fox, Jared
2017-05-01
Prescription drug copayments and cost-sharing have been linked to reductions in prescription drug use and expenditures. However, little is known about their effect on specific health outcomes. To evaluate the association between prescription drug copayments and uncontrolled hypertension, uncontrolled hypercholesterolemia, and prescription drug utilization among Medicaid beneficiaries with these conditions. Select adults aged 20-64 from NHANES 1999-2012 in 18 states. Uncontrolled hypertension, uncontrolled hypercholesterolemia, and taking medication for each of these conditions. A differencing regression model was used to evaluate health outcomes among Medicaid beneficiaries in 4 states that introduced copayments during the study period, relative to 2 comparison groups-Medicaid beneficiaries in 14 states unaffected by shifts in copayment policy, and a within-state counterfactual group of low-income adults not on Medicaid, while controlling for individual demographic factors and unobserved state-level characteristics. Although uncontrolled hypertension and hypercholesterolemia declined among all low-income persons during the study period, the trend was less pronounced in Medicaid beneficiaries affected by copayments. After netting out concurrent trends in health outcomes of low-income persons unaffected by Medicaid copayment changes, we estimated that introduction of drug copayments in Medicaid was associated with an average rise in uncontrolled hypertension and uncontrolled hypercholesterolemia of 7.7 and 13.2 percentage points, respectively, and with reduced drug utilization for hypercholesterolemia. As Medicaid programs change in the years following the Affordable Care Act, prescription drug copayments may play a role as a lever for controlling hypertension and hypercholesterolemia at the population level.
Tapping the full potential of geodetic glacier change assessment with air and space borne sensors
NASA Astrophysics Data System (ADS)
Zemp, M.; Paul, F.; Machguth, H.; Fischer, M.
2016-12-01
Glacier changes are recognized as independent and high-confidence natural indicators of climate change. Past, current, and future glacier changes impact on global sea level, the regional water cycle, and local hazard situations. In the 5th Assessment Report of the IPCC, glacier mass budgets were reconciled by combining traditional observations (i.e. results from glaciological and geodetic measurements) with satellite altimetry and gravimetry to fill regional gaps and obtain global coverage. However, this approach is challenged by the relatively small number and inhomogeneous distribution of in-situ measurement series and their often unknown representativeness for the respective mountain range as well as by scale issues of current satellite altimetry (only point data) and gravimetry (coarse resolution) missions. In this presentation, we highlight the potential of air and space borne sensors for (i) validation and calibration of direct measurements using the glaciological method, (ii) assessing glacier volume changes over entire mountain ranges, and for (iii) determination of the representativeness of the field measurements for respective mountain ranges. Whereas long-term in-situ measurements provide the temporal variability of glacier mass changes with annual or seasonal resolution, differencing of high-resolution digital elevation models, such as from airborne (national) surveys or TanDEM-X, bear the potential to assess thickness and volume changes for thousands of individual glaciers over entire mountain ranges on a decadal time scale. In combination, the calibrated field measurements can be used to determine volume and mass changes over entire mountain ranges at high confidence. The spatial-temporal extrapolation can be supported using dense temporal series of snow cover evolution derived from optical satellite data such as Sentinel 2. Finally, these results can be used to reconcile satellite altimetry and gravimetry products. Provided that resources for corresponding glacier monitoring activities are made available (within or outside the scientific funding system), the combination of in-situ with air and space borne measurements will boost the scientific capacity to address the grand challenges from climate-induced glacier changes and related societal impacts.
Multidimensional computer simulation of Stirling cycle engines
NASA Technical Reports Server (NTRS)
Hall, C. A.; Porsching, T. A.; Medley, J.; Tew, R. C.
1990-01-01
The computer code ALGAE (algorithms for the gas equations) treats incompressible, thermally expandable, or locally compressible flows in complicated two-dimensional flow regions. The solution method, finite differencing schemes, and basic modeling of the field equations in ALGAE are applicable to engineering design settings of the type found in Stirling cycle engines. The use of ALGAE to model multiple components of the space power research engine (SPRE) is reported. Videotape computer simulations of the transient behavior of the working gas (helium) in the heater-regenerator-cooler complex of the SPRE demonstrate the usefulness of such a program in providing information on thermal and hydraulic phenomena in multiple component sections of the SPRE.
Exponential integrators in time-dependent density-functional calculations
NASA Astrophysics Data System (ADS)
Kidd, Daniel; Covington, Cody; Varga, Kálmán
2017-12-01
The integrating factor and exponential time differencing methods are implemented and tested for solving the time-dependent Kohn-Sham equations. Popular time propagation methods used in physics, as well as other robust numerical approaches, are compared to these exponential integrator methods in order to judge the relative merit of the computational schemes. We determine an improvement in accuracy of multiple orders of magnitude when describing dynamics driven primarily by a nonlinear potential. For cases of dynamics driven by a time-dependent external potential, the accuracy gain of the exponential integrator methods is smaller, but they still match or outperform the best of the conventional methods tested.
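To make the idea of exponential time differencing concrete, the sketch below applies the simplest such scheme (ETD1, or exponential Euler) to a stiff scalar test problem u' = L*u + N(u). The test problem, step size, and the comparison with forward Euler are illustrative choices, not the Kohn-Sham propagation studied in the paper.

# Minimal sketch of first-order exponential time differencing (ETD1) for a
# stiff scalar problem u' = L*u + N(u); purely illustrative.
import numpy as np

L = -50.0                      # stiff linear part
N = lambda u: np.sin(u)        # mild nonlinearity (illustrative choice)

def etd1(u0, h, steps):
    """u_{n+1} = e^{Lh} u_n + (e^{Lh} - 1)/L * N(u_n)  (exact when N is constant)."""
    eLh = np.exp(L * h)
    phi1 = (eLh - 1.0) / L
    u = u0
    for _ in range(steps):
        u = eLh * u + phi1 * N(u)
    return u

def forward_euler(u0, h, steps):
    u = u0
    for _ in range(steps):
        u = u + h * (L * u + N(u))
    return u

h, T, u0 = 0.05, 1.0, 1.0
print("ETD1         :", etd1(u0, h, int(T / h)))
print("forward Euler:", forward_euler(u0, h, int(T / h)))
# With h = 0.05 the explicit Euler step violates its stability bound
# (h <= 2/|L| = 0.04) and blows up, while ETD1 treats the stiff linear part
# exactly and decays smoothly toward the fixed point near zero.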
NASA Astrophysics Data System (ADS)
Melgaard, Seth D.; Seletskiy, Denis V.; Di Lieto, Alberto; Tonelli, Mauro; Sheik-Bahae, Mansoor
2012-03-01
Since the recent demonstration of cryogenic optical refrigeration, reliable tools for characterizing the cooling performance of different materials have been in high demand. We present an experimental apparatus that allows for temperature- and wavelength-dependent characterization of a material's cooling efficiency and is based on a highly sensitive spectral differencing technique, or two-band differential spectral metrology (2B-DSM). A first characterization of a 5% wt. ytterbium-doped YLF crystal showed quantitative agreement with the current laser cooling model and yielded a minimum achievable temperature (MAT) of 110 K. Other materials and ion concentrations are also investigated and reported here.
An investigation of new methods for estimating parameter sensitivities
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1989-01-01
The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivities.
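The basic idea of a differencing formula for parameter sensitivities can be illustrated without the RQP machinery: re-solve the optimization at perturbed parameter values and centrally difference the optimum. The sketch below does exactly that on a toy problem with a known analytic sensitivity; the objective, the use of scipy's BFGS solver, and the step size are assumptions of this illustration, not the paper's method.

# Illustrative central-differencing estimate of optimum sensitivities
# (not the RQP-based method of the paper).
import numpy as np
from scipy.optimize import minimize

def objective(x, p):
    # Simple parametric test problem with known optimum x* = [p, p**2]
    return (x[0] - p) ** 2 + (x[1] - p ** 2) ** 2

def solve(p):
    res = minimize(objective, x0=np.zeros(2), args=(p,), method="BFGS")
    return res.x

def optimum_sensitivity(p, h=1e-4):
    """Central-difference estimate of d(x*)/dp at parameter value p."""
    x_plus = solve(p + h)
    x_minus = solve(p - h)
    return (x_plus - x_minus) / (2.0 * h)

p0 = 1.5
print(optimum_sensitivity(p0))   # analytic sensitivity is [1, 2*p0] = [1, 3]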
Three-dimensional time dependent computation of turbulent flow
NASA Technical Reports Server (NTRS)
Kwak, D.; Reynolds, W. C.; Ferziger, J. H.
1975-01-01
The three-dimensional, primitive equations of motion are solved numerically for the case of isotropic box turbulence and the distortion of homogeneous turbulence by irrotational plane strain at large Reynolds numbers. A Gaussian filter is applied to the governing equations to define the large-scale field. This gives rise to additional second-order computed scale stresses (Leonard stresses). The residual stresses are simulated through an eddy viscosity. Uniform grids are used, with a fourth-order differencing scheme in space and a second-order Adams-Bashforth predictor for explicit time stepping. The results are compared to experiments, and statistical information is extracted from the computer-generated data.
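The spatial discretization mentioned above is a standard fourth-order central differencing stencil. The short check below, on a periodic grid with a smooth test function, is a generic illustration of such a stencil and its convergence rate, not the authors' code.

# Fourth-order central differencing stencil on a uniform periodic grid,
# with a simple convergence check on a smooth test function.
import numpy as np

def ddx_4th(u, dx):
    """Fourth-order central difference of a periodic 1-D array."""
    return (-np.roll(u, -2) + 8 * np.roll(u, -1)
            - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12.0 * dx)

for n in (32, 64, 128):
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    dx = x[1] - x[0]
    err = np.max(np.abs(ddx_4th(np.sin(x), dx) - np.cos(x)))
    print(n, err)   # error should drop roughly 16x per grid doubling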
The upwind control volume scheme for unstructured triangular grids
NASA Technical Reports Server (NTRS)
Giles, Michael; Anderson, W. Kyle; Roberts, Thomas W.
1989-01-01
A new algorithm for the numerical solution of the Euler equations is presented. This algorithm is particularly suited to the use of unstructured triangular meshes, allowing geometric flexibility. Solutions are second-order accurate in the steady state. Implementation of the algorithm requires minimal grid connectivity information, resulting in modest storage requirements, and should enhance the implementation of the scheme on massively parallel computers. A novel form of upwind differencing is developed, and is shown to yield sharp resolution of shocks. Two new artificial viscosity models are introduced that enhance the performance of the new scheme. Numerical results for transonic airfoil flows are presented, which demonstrate the performance of the algorithm.
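For contrast with central differencing, the snippet below shows the simplest possible upwind differencing scheme, applied to 1-D linear advection of a square pulse. It illustrates the monotone, oscillation-free capturing of discontinuities that motivates upwind schemes; the paper's actual control-volume scheme on unstructured triangular grids is considerably more elaborate.

# First-order upwind differencing for u_t + a u_x = 0 (a > 0) on a periodic grid.
import numpy as np

a, nx, cfl = 1.0, 200, 0.8
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
dt = cfl * dx / a

u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)   # square pulse initial data
for _ in range(int(0.5 / dt)):
    # upwind: take the value from the direction the wind blows (the left, a > 0)
    u = u - a * dt / dx * (u - np.roll(u, 1))

print("min/max after advection:", u.min(), u.max())
# The profile stays bounded and free of oscillations (monotone), at the cost
# of some numerical smearing of the discontinuities.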
Combination of GPS and GLONASS in PPP algorithms and its effect on site coordinate determination
NASA Astrophysics Data System (ADS)
Hefty, J.; Gerhatova, L.; Burgan, J.
2011-10-01
The Precise Point Positioning (PPP) approach, which uses un-differenced code and phase GPS observations together with precise orbits and satellite clocks, is an important alternative to analyses based on double differences. We examine the extension of the PPP method by introducing the GLONASS satellites into the processing algorithms. The procedures are demonstrated with the software package ABSOLUTE developed at the Slovak University of Technology. Partial results, such as ambiguities and receiver clocks obtained from separate solutions of the two GNSS, are mutually compared. Finally, the coordinate time series from the combination of GPS and GLONASS observations are compared with GPS-only solutions.
The computation of dynamic fractional difference parameter for S&P500 index
NASA Astrophysics Data System (ADS)
Pei, Tan Pei; Cheong, Chin Wen; Galagedera, Don U. A.
2015-10-01
This study evaluates the time-varying long memory behavior of the S&P500 volatility index using dynamic fractional difference parameters. The time-varying fractional difference parameter shows the dynamics of long memory in the volatility series before and after the U.S. subprime mortgage crisis. The results show an increasing trend in the S&P500 long memory volatility for the pre-crisis period. However, the onset of the Lehman Brothers event reduces the predictability of the volatility series, followed by a slight fluctuation of the fractional differencing parameters. After that, the U.S. financial market becomes more informationally efficient and follows a non-stationary random process.
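The fractional difference operator (1 - B)^d behind such a parameter can be applied with a short truncated-weight filter. The sketch below shows that operation on a synthetic series; the value of d, the truncation length, and the random-walk stand-in series are illustrative assumptions only.

# Truncated fractional differencing (1 - B)^d of a 1-D series.
import numpy as np

def frac_diff(x, d, n_weights=100):
    """Apply a truncated fractional difference of order d to series x."""
    w = np.empty(n_weights)
    w[0] = 1.0
    for k in range(1, n_weights):            # binomial-expansion weights of (1-B)^d
        w[k] = w[k - 1] * (k - 1 - d) / k
    out = np.full_like(x, np.nan, dtype=float)
    for t in range(n_weights - 1, len(x)):
        out[t] = np.dot(w, x[t - n_weights + 1:t + 1][::-1])   # w[k] * x[t-k]
    return out

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(500))      # a random-walk stand-in for log-volatility
y = frac_diff(x, d=0.4)                      # 0 < d < 0.5: long memory
print(np.nanstd(x), np.nanstd(y))            # the differenced series has far smaller variance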
Combining Thermal And Structural Analyses
NASA Technical Reports Server (NTRS)
Winegar, Steven R.
1990-01-01
Computer code makes programs compatible so that stresses and deformations can be calculated. Paper describes computer code combining thermal analysis with structural analysis. Called SNIP (for SINDA-NASTRAN Interfacing Program), code provides interface between finite-difference thermal model of system and finite-element structural model when there is no node-to-element correlation between models. Eliminates much manual work in converting temperature results of SINDA (Systems Improved Numerical Differencing Analyzer) program into thermal loads for NASTRAN (NASA Structural Analysis) program. Used to analyze concentrating reflectors for solar generation of electric power. Large thermal and structural models needed to predict distortion of surface shapes, and SNIP saves considerable time and effort in combining models.
Category 3: Sound Generation by Interacting with a Gust
NASA Technical Reports Server (NTRS)
Scott, James R.
2004-01-01
The cascade-gust interaction problem is solved employing a time-domain approach. The purpose of this problem is to test the ability of a CFD/CAA code to accurately predict the unsteady aerodynamic and aeroacoustic response of a single airfoil to a two-dimensional, periodic vortical gust. Nonlinear time dependent Euler equations are solved using higher order spatial differencing and time marching techniques. The solutions indicate the generation and propagation of expected mode orders for the given configuration and flow conditions. The blade passing frequency (BPF) is cut off for this cascade while higher harmonic, 2BPF and 3BPF, modes are cut on.
Quadratic Finite Element Method for 1D Deterministic Transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tolar, Jr., D R; Ferguson, J M
2004-01-06
In the discrete ordinates, or S_N, numerical solution of the transport equation, both the spatial (r) and angular (Ω) dependences of the angular flux ψ(r, Ω) are modeled discretely. While significant effort has been devoted toward improving the spatial discretization of the angular flux, we focus on improving the angular discretization of ψ(r, Ω). Specifically, we employ a Petrov-Galerkin quadratic finite element approximation for the differencing of the angular variable (μ) in developing the one-dimensional (1D) spherical geometry S_N equations. We develop an algorithm that shows faster convergence with angular resolution than conventional S_N algorithms.
Automatic Differentiation as a tool in engineering design
NASA Technical Reports Server (NTRS)
Barthelemy, Jean-Francois M.; Hall, Laura E.
1992-01-01
Automatic Differentiation (AD) is a tool that systematically implements the chain rule of differentiation to obtain the derivatives of functions calculated by computer programs. In this paper, it is assessed as a tool for engineering design. The paper discusses the forward and reverse modes of AD, their computing requirements, and approaches to implementing AD. It continues with the application of two different AD tools to two medium-size structural analysis problems to generate sensitivity information typically necessary in an optimization or design situation. The paper concludes with the observation that AD is to be preferred to finite differencing in most cases, as long as sufficient computer storage is available.
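The contrast between forward-mode AD and finite differencing can be seen in a few lines using dual numbers. The minimal sketch below is a generic illustration of that contrast (exact derivatives versus truncation and cancellation error), not the AD tools evaluated in the paper.

# Tiny forward-mode AD sketch via dual numbers, compared with finite differencing.
import math

class Dual:
    """Number a + b*eps with eps**2 = 0; b carries the derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def dsin(x):   # chain rule for sin, promoted to dual numbers
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

def f(x):      # example function built from +, * and sin
    return 3.0 * x * x + dsin(x) if isinstance(x, Dual) else 3.0 * x * x + math.sin(x)

x0, h = 1.2, 1e-6
ad = f(Dual(x0, 1.0)).der                # exact to machine precision
fd = (f(x0 + h) - f(x0)) / h             # carries truncation + cancellation error
print(ad, fd, 6.0 * x0 + math.cos(x0))   # analytic value for comparison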
Zeng, Guang-Ming; Jiang, Yi-Min; Qin, Xiao-Sheng; Huang, Guo-He; Li, Jian-Bing
2003-01-01
Taking the calculation of the velocity and concentration distributions as an example, the paper established a series of governing equations by the vorticity-stream function method and discretized the equations by the finite differencing method. After obtaining the velocity distribution field, the paper also calculated the concentration distribution in the sedimentation tank by using the two-dimensional concentration transport equation. The validity and feasibility of the numerical method were verified through comparison with experimental data. Furthermore, the paper carried out a tentative exploration into the application of numerical simulation to sedimentation tanks.
Wave Overtopping of a Barrier Beach
NASA Astrophysics Data System (ADS)
Thornton, E. B.; Laudier, N.; Macmahan, J. H.
2009-12-01
The rate of wave overtopping of a barrier beach is measured and modeled as a first step in modeling the breaching of a beach impounding an ephemeral river. Unique wave overtopping rate data are obtained from measurements of the Carmel River, California, lagoon filling during a time when the lagoon is closed off and there is no river inflow. Volume changes are calculated from measured lagoon height changes owing to wave overtopping using a stage-volume curve, then center differenced and averaged to provide volume rates of change in the lagoon. Wave height and period are obtained from CDIP MOPS directional wave spectra data in 15 m of water fronting the beach. Beach morphology was measured by GPS walking surveys and interpolated for beach slopes and berm heights. Three empirical overtopping models, by van der Meer and Janssen (1995), Hedges and Reis (1998) and Pullen et al. (2007), with differing parameterizations of wave height, period and beach slope, and calibrated using extensive laboratory data obtained over plane, impermeable beaches, are compared with the data. In addition, the run-up model by Stockdon et al. (2006), based on field data, is examined. Three wave overtopping storm events are considered for which morphology data were available less than 2 weeks prior to the event. The models are tuned to fit the data using a reduction factor to account for beach permeability, berm characteristics, non-normal wave incidence and surface roughness influence. It is concluded that the Stockdon et al. (2006) model underestimates run-up, as no overtopping is predicted with this model. The three empirical overtopping models behaved similarly well, with regression coefficients ranging from 0.72 to 0.86 using a reasonable range of reduction factors of 0.66 - 0.81 with an average of 0.74.
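The center-differencing step in this workflow, turning a lagoon stage record into overtopping volume rates via a stage-volume curve, is easy to sketch. The stage record and the stage-volume curve below are invented for illustration; only the structure of the calculation follows the description above.

# Sketch: lagoon stage record -> volume (via stage-volume curve) -> central
# differences in time -> overtopping volume rate (no river inflow assumed).
import numpy as np

t_hr = np.arange(0.0, 12.0, 0.5)                       # time (hours)
stage = 1.0 + 0.02 * t_hr                              # measured lagoon level (m), rising
stage_pts = np.array([0.5, 1.0, 1.5, 2.0])             # hypothetical stage-volume curve
volume_pts = np.array([2.0e4, 5.0e4, 9.0e4, 1.4e5])    # lagoon volume (m^3)

volume = np.interp(stage, stage_pts, volume_pts)       # stage -> volume
# np.gradient applies central differences in the interior (one-sided at the ends)
q_overtop = np.gradient(volume, t_hr * 3600.0)         # m^3/s
print(q_overtop.mean())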
Bessette-Kirton, Erin; Coe, Jeffrey A.; Zhou, Wendy
2018-01-01
The use of preevent and postevent digital elevation models (DEMs) to estimate the volume of rock avalanches on glaciers is complicated by ablation of ice before and after the rock avalanche, scour of material during rock avalanche emplacement, and postevent ablation and compaction of the rock avalanche deposit. We present a model to account for these processes in volume estimates of rock avalanches on glaciers. We applied our model by calculating the volume of the 28 June 2016 Lamplugh rock avalanche in Glacier Bay National Park, Alaska. We derived preevent and postevent 2‐m resolution DEMs from WorldView satellite stereo imagery. Using data from DEM differencing, we reconstructed the rock avalanche and adjacent surfaces at the time of occurrence by accounting for elevation changes due to ablation and scour of the ice surface, and postevent deposit changes. We accounted for uncertainties in our DEMs through precise coregistration and an assessment of relative elevation accuracy in bedrock control areas. The rock avalanche initially displaced 51.7 ± 1.5 Mm³ of intact rock and then scoured and entrained 13.2 ± 2.2 Mm³ of snow and ice during emplacement. We calculated the total deposit volume to be 69.9 ± 7.9 Mm³. Volume estimates that did not account for topographic changes due to ablation, scour, and compaction underestimated the deposit volume by 31.0–46.8 Mm³. Our model provides an improved framework for estimating uncertainties affecting rock avalanche volume measurements in glacial environments. These improvements can contribute to advances in the understanding of rock avalanche hazards and dynamics.
Seafloor Characteristics and Bathymetric Change at Hunga Tonga-Hunga Ha'apai
NASA Astrophysics Data System (ADS)
Ferrini, V. L.; Spierer, H.; Peters, C.; Garvin, J. B.
2016-12-01
In April 2016, bathymetric mapping was conducted around the new island that formed in 2015 during a surtseyan style eruption at Hunga Tonga-Hunga Ha'apai in the Kingdom of Tonga. The new ship-based bathymetry and acoustic backscatter intensity data can be used to quantify morphologic details of the seafloor surrounding the new land. The new island, which stands 150 m above sea level, is nestled between two pre-existing islands located on the northern rim of the caldera of a large submarine volcano. The new bathymetry data reveal several cratered domes along the western and southern rims of the caldera, as well as what appear to be large consolidated blocks along the northwest rim of the caldera. In addition, an incised channel extends seaward from very close to the northern coast of the new island and suggests a primary pathway for downslope movement during the formation of the island. The floor of the caldera is extremely flat, at a water depth of about 150 m. Pre-eruption bathymetric data were acquired along two survey lines during transits of a cruise in 2008. The spatial extent of these data is unfortunately limited, but they allow quantitative bathymetric differencing over portions of the area mapped in 2016. Bathymetric change of as much as +35 m since 2008 is associated with volcanic domes along the western rim of the caldera. Smaller bathymetric changes are associated with the apparent downslope movement of consolidated blocks on the northwestern rim of the caldera. These data provide important clues about the submarine processes that took place during the eruption and complement ongoing studies of the subaerial portion of the island.
NASA Astrophysics Data System (ADS)
Comes, E.; Jaeger, K. L.
2016-12-01
Lowhead dams have had a profound cumulative impact on rivers and streams. Their removal is an increasingly popular restoration method; however, the geomorphic response remains poorly resolved. This study quantified geomorphic change following two lowhead dam removals in the Olentangy River and the downstream Scioto River, which flows through Columbus, Ohio. A paired control-treatment design compared change above and below a removed dam (treatment) to an existing dam (control) in each river system over two- and three-year periods. Upstream treatment reaches included passive and active restoration via in-channel engineering. Channel change was quantified through repeat bathymetric surveys using an acoustic Doppler current profiler and near-surface riverbed substrate sampling at several time periods (2 surveys/year). Differencing of digital elevation models from each bathymetric survey quantified changes in erosion and deposition patterns and bathymetric heterogeneity. Results indicate upstream treatment reaches were net erosional with overall substrate coarsening that included D84 sand to gravel clast size shifts. The Olentangy River's downstream treatment reach experienced concurrent erosion and deposition within a given survey, although net erosion dominated the first year of the three-year study period. The downstream treatment reach also experienced substantial grain size fluctuation between surveys with little overall change. Unanticipated engineering activities in the downstream treatment reach of the Scioto River confounded geomorphic change in this reach. Non-metric multidimensional scaling analysis indicates a moderate but abrupt change towards overall increased heterogeneity in the first year following dam removal in the downstream reach, with little overall change in the following two years. Active restoration activities in the upstream treatment reach resulted in abrupt but slight shifts towards decreased bathymetric heterogeneity despite substantial riverbed regrading to create pool-riffle features. Repeat intra-annual surveys revealed that the river system experiences clear seasonal patterns of erosion and deposition with associated substrate coarsening and fining that would not be evident in typical dam removal studies, which generally are limited to annual surveys.
Sedimentation and bathymetry changes in Suisun Bay: 1867-1990
Cappiella, Karen; Malzone, Chris; Smith, Richard; Jaffe, Bruce
1999-01-01
Understanding patterns of historical erosion and deposition in San Francisco Bay is crucial in managing such issues as locating deposits of sediment-associated contaminants, and the restoration of wetland areas. These problems were addressed by quantitatively examining historical hydrographic surveys. The data from five hydrographic surveys, made from 1867 to 1990, were analyzed using surface modeling software to determine long-term changes in the sediment system of Suisun Bay and surrounding areas. A surface grid displaying the bathymetry was created for each survey period, and the bathymetric change between survey periods was computed by differencing these grids. Patterns and volumes of erosion and deposition, sedimentation rates, and shoreline changes were derived from the resulting change grids. Approximately 115 million cubic meters of sediment were deposited in the Suisun Bay area from 1867 to 1887, the majority of which was debris from hydraulic gold mining in the Sierra Nevada. Just under two-thirds of the area of the study site was depositional during this time period, while less than one-third of it was erosional. However, over the entire study period, the Suisun Bay area lost sediment, indicating that a large amount of erosion occurred from 1887 to 1990. In fact, this area lost sediment during each of the change periods between 1887 and 1990. Because erosion and deposition are processes that may vary over space and time, further analyses of more specific areas were done to examine spatial and temporal patterns. The change in the Suisun Bay area from being a largely depositional environment to an erosional one is the result of a combination of several factors. These factors include the regulation and subsequent cessation of hydraulic mining practices, and the increase in flood control and water distribution projects that have decreased sediment supply to the bay by reducing the frequency and duration of peak flow conditions. Another pattern shown by the changing bathymetry is the substantial decrease in the area of tidal flat (defined in this study as the area between mean lower low water and the shoreline), particularly in Grizzly Bay and Honker Bay. These tidal flats are important to the bay ecosystem, providing stability and biologic diversity.
NASA Astrophysics Data System (ADS)
Liang, Zhang; Yanqing, Hou; Jie, Wu
2016-12-01
The multi-antenna synchronized receiver (using a common clock) is widely applied in GNSS-based attitude determination (AD), terrain deformation monitoring, and many other applications, since the high-accuracy single-differenced carrier phase can be used to improve the positioning or AD accuracy. Thus, the line bias (LB) parameter (fractional bias isolating) should be calibrated in the single-differenced phase equations. In the past decades, researchers estimated the LB as a constant parameter in advance and compensated for it in real time. However, the constant LB assumption is inappropriate in practical applications because of physical length and permittivity changes of the cables caused by environmental temperature variation, and because of the instability of the receiver's internal circuit transmission delay. Considering the LB drift (or colored LB) in practical circumstances, this paper introduces a real-time estimator using an autoregressive moving average (ARMA)-based prediction/whitening filter model or a moving average (MA)-based constant calibration model. In the ARMA-based filter model, four cases, namely AR(1), ARMA(1, 1), AR(2) and ARMA(2, 1), are applied for the LB prediction. The real-time relative positioning model using the ARMA-predicted LB is derived, and it is theoretically proved that its positioning accuracy is better than that of the traditional double-difference carrier phase (DDCP) model. The drifting LB is defined through an integral of the phase temperature changing rate, which is a random walk process if the phase temperature changing rate is white noise; this is validated by the analysis of the AR model coefficient. The autocovariance function shows that the LB indeed varies in time and that estimating it as a constant is not safe, which is also demonstrated by the analysis of the LB variation of each visible satellite during zero- and short-baseline BDS/GPS experiments. Compared to the DDCP approach, in the zero-baseline experiment, the LB constant calibration (LBCC) and MA approaches improved the positioning accuracy of the vertical component, while slightly degrading the accuracy of the horizontal components. The ARMA(1, 0) model, however, improved the positioning accuracy of all three components, with 40% and 50% improvement of the vertical component for BDS and GPS, respectively. In the short-baseline experiment, compared to the DDCP approach, the LBCC approach yielded poor positioning solutions and degraded the AD accuracy; both the MA and ARMA-based filter approaches improved the AD accuracy. Moreover, the ARMA(1, 0) and ARMA(1, 1) models have relatively better performance, with elevation-angle improvements of 55% and 48% for the ARMA(1, 1) and MA models for GPS, respectively. Furthermore, the drifting LB variation is found to be continuous and slowly cumulative; the variation magnitudes in units of length are almost identical on carrier phases of different frequencies, so the LB variation does not show an obvious dependence on frequency. Consequently, the wide-lane LB in units of cycles is very stable, while the narrow-lane LB varies considerably in time. This reasoning probably also explains the phenomenon that the wide-lane LB originating in the satellites is stable, while the narrow-lane LB varies. The results of the ARMA-based filters are better than those of the MA model, which probably implies that modeling the drifting LB can further improve the precise point positioning accuracy.
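As a concrete illustration of the ARMA-based prediction step, the sketch below fits an ARMA(1,1) model to a simulated drifting line bias and produces rolling one-step-ahead predictions with statsmodels. The simulated series, window lengths, and noise levels are arbitrary stand-ins, not the paper's data or estimator implementation.

# One-step-ahead ARMA(1,1) prediction of a simulated drifting line bias.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
n = 2000
drift = np.cumsum(0.002 * rng.standard_normal(n))   # random-walk-like temperature drift
lb = drift + 0.01 * rng.standard_normal(n)          # drifting line bias + white noise (cycles)

history, preds = list(lb[:500]), []
for t in range(500, 520):                            # rolling one-step predictions
    fit = ARIMA(history, order=(1, 0, 1)).fit()      # ARMA(1,1) = ARIMA(1,0,1)
    preds.append(fit.forecast(1)[0])
    history.append(lb[t])

rmse = np.sqrt(np.mean((np.array(preds) - lb[500:520]) ** 2))
print("one-step prediction RMSE:", rmse)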
Kerr, William C.; Karriker-Jaffe, Katherine J.; Ye, Yu
2013-01-01
Aims: The aim of this study was to estimate the overall impact of alcohol on US race- and sex-specific age-adjusted cirrhosis mortality rates and to consider beverage-specific effects that represent changes in drinking patterns over time, comparing states with large and small African-American/White cirrhosis mortality differentials. Methods: Using time-series data from 1950 to 2002, the effects of per capita alcohol consumption on cirrhosis mortality for African American and White men and women were estimated using generalized least squares panel models on first-differenced data. Granger causality tests explored geographic patterning of racial differences in cirrhosis mortality. Results: Cirrhosis mortality was significantly positively related to apparent consumption of alcohol, with an overall impact of 8–14%/l of ethanol. This effect was driven by spirits, which were more strongly associated with mortality for African-American women and for African-American men in states with larger mortality differentials. This disparity first emerged in New York and spread through the Northeast and into Midwestern states. Conclusion: Differences in the contribution of alcohol to cirrhosis mortality rates suggest variation by race and gender in life-course patterns of heavy consumption, illicit liquor and spirits use, as well as birth cohort effects. PMID:23558110
Inland thinning on the Greenland ice sheet controlled by outlet glacier geometry
NASA Astrophysics Data System (ADS)
Felikson, Denis; Bartholomaus, Timothy C.; Catania, Ginny A.; Korsgaard, Niels J.; Kjær, Kurt H.; Morlighem, Mathieu; Noël, Brice; van den Broeke, Michiel; Stearns, Leigh A.; Shroyer, Emily L.; Sutherland, David A.; Nash, Jonathan D.
2017-04-01
Greenland’s contribution to future sea-level rise remains uncertain and a wide range of upper and lower bounds has been proposed. These predictions depend strongly on how mass loss--which is focused at the termini of marine-terminating outlet glaciers--can penetrate inland to the ice-sheet interior. Previous studies have shown that, at regional scales, Greenland ice sheet mass loss is correlated with atmospheric and oceanic warming. However, mass loss within individual outlet glacier catchments exhibits unexplained heterogeneity, hindering our ability to project ice-sheet response to future environmental forcing. Using digital elevation model differencing, we spatially resolve the dynamic portion of surface elevation change from 1985 to present within 16 outlet glacier catchments in West Greenland, where significant heterogeneity in ice loss exists. We show that the up-glacier extent of thinning and, thus, mass loss, is limited by glacier geometry. We find that 94% of the total dynamic loss occurs between the terminus and the location where the down-glacier advective speed of a kinematic wave of thinning is at least three times larger than its diffusive speed. This empirical threshold enables the identification of glaciers that are not currently thinning but are most susceptible to future thinning in the coming decades.
The effects of prospective mate quality on investments in healthy body weight among single women.
Harris, Matthew C; Cronin, Christopher J
2017-02-01
This paper examines how a single female's investment in healthy body weight is affected by the quality of single males in her marriage market. A principal concern in estimation is the presence of market-level unobserved heterogeneity that may be correlated with changes in single male quality, measured as earning potential. To address this concern, we employ a differencing strategy that normalizes the exercise behaviors of single women to those of their married counterparts. Our main results suggest that when potential mate quality in a marriage market decreases, single black women invest less in healthy body weight. For example, we find that a 10 percentage point increase in the proportion of low-quality single black males leads to a 5-10% decrease in vigorous exercise taken by single black females. Results for single white women are qualitatively similar, but not consistent across specifications. These results highlight the relationship between male and female human capital acquisition that is driven by participation in the marriage market. Our results suggest that programs designed to improve the economic prospects of single males may yield positive externalities in the form of improved health behaviors, such as more exercise, particularly for single black females.
Salomon, M; Conklin, J W; Kozaczuk, J; Berberian, J E; Keiser, G M; Silbergleit, A S; Worden, P; Santiago, D I
2011-12-01
In this paper, we present a method to measure the frequency and the frequency change rate of a digital signal. This method consists of three consecutive algorithms: frequency interpolation, phase differencing, and a third algorithm specifically designed and tested by the authors. The succession of these three algorithms allowed a 5 parts in 10^10 resolution in frequency determination. The algorithm developed by the authors can be applied to a sampled scalar signal such that a model linking the harmonics of its main frequency to the underlying physical phenomenon is available. This method was developed in the framework of the Gravity Probe B (GP-B) mission. It was applied to the high frequency (HF) component of GP-B's superconducting quantum interference device signal, whose main frequency f_z is close to the spin frequency of the gyroscopes used in the experiment. A 30 nHz resolution in signal frequency and a 0.1 pHz/s resolution in its decay rate were achieved out of a succession of 1.86 s-long stretches of signal sampled at 2200 Hz. This paper describes the underlying theory of the frequency measurement method as well as its application to GP-B's HF science signal.
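The sketch below illustrates the phase-differencing idea in its simplest form: recover the frequency (and a slow frequency drift) of a sampled sinusoid from its unwrapped analytic-signal phase. The signal parameters below are invented and nowhere near GP-B's actual resolution figures; only the structure of the estimate is illustrated.

# Frequency and drift from the unwrapped phase of a sampled sinusoid.
import numpy as np
from scipy.signal import hilbert

fs, T = 2200.0, 1.86                         # sample rate (Hz) and stretch length (s)
t = np.arange(0.0, T, 1.0 / fs)
f0, fdot = 80.0, -0.5e-3                     # assumed true frequency (Hz) and decay rate (Hz/s)
phase_true = 2.0 * np.pi * (f0 * t + 0.5 * fdot * t ** 2)
x = np.cos(phase_true) + 0.01 * np.random.default_rng(2).standard_normal(t.size)

phase = np.unwrap(np.angle(hilbert(x)))      # unwrapped instantaneous phase
# Fit phase(t) = 2*pi*(f0*t + 0.5*fdot*t^2) + const; differencing the phase in
# time amounts to reading off these polynomial coefficients.
c = np.polyfit(t, phase, 2)
print("f0 estimate  :", c[1] / (2.0 * np.pi))
print("fdot estimate:", 2.0 * c[0] / (2.0 * np.pi))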
Comparing different methods to model scenarios of future glacier change for the entire Swiss Alps
NASA Astrophysics Data System (ADS)
Linsbauer, A.; Paul, F.; Haeberli, W.
2012-04-01
There is general agreement that observed climate change already has strong impacts on the cryosphere. The rapid shrinkage of glaciers during the past two decades, as observed in many mountain ranges globally and in particular in the Alps, is an impressive confirmation of a changed climate. With the expected future temperature increase, glacier shrinkage will likely accelerate further and the role of glaciers as an important water resource will increasingly diminish. To determine the future contribution of glaciers to run-off with hydrological models, the change in glacier area and/or volume must be considered. As these models operate at regional scales, simplified approaches to model the future development of all glaciers in a mountain range need to be applied. In this study we have compared different simplified approaches to model the area and volume evolution of all glaciers in the Swiss Alps over the 21st century according to given climate change scenarios. One approach is based on an upward shift of the ELA (by 150 m per degree of temperature increase) and the assumption that the glacier extent will shrink until the reduced accumulation area again covers 60% of the total glacier area. A second approach is based on observed elevation changes between 1985 and 2000 as derived from DEM differencing for all glaciers in Switzerland. With a related elevation-dependent parameterization of glacier thickness change and a modelled glacier thickness distribution, the 15-year trends in observed thickness loss are extrapolated into the future, with glacier area loss taking place when the thickness becomes zero. The models show an overall glacier area reduction of 60-80% until 2100, with some ice remaining at the highest elevations. However, compared to the ongoing temperature increase and considering that several reinforcing feedbacks (albedo lowering, lake formation) are not accounted for, the real area loss might be even stronger. Uncertainties in the modelled glacier thickness have only a small influence on the final area loss, but influence the temporal evolution of the loss. In particular, the largest valley glaciers will suffer a strong volume loss, as large parts of their beds have a small inclination and are thus located at low elevations.
Long memory in patterns of mobile phone usage
NASA Astrophysics Data System (ADS)
Owczarczuk, Marcin
2012-02-01
In this article we show that usage of a mobile phone, i.e. the daily series of the number of calls made by a customer, exhibits long memory. We use a sample of 4502 postpaid users from a Polish mobile operator and study their two-year billing history. We estimate the Hurst exponent with nine estimators: the aggregated variance method, differencing the variance, absolute values of the aggregated series, Higuchi's method, residuals of regression, the R/S method, the periodogram method, the modified periodogram method and the Whittle estimator. We also empirically analyze relations between the estimators. Long memory implies an inertial effect in clients' behavior, which may be used by mobile operators to accelerate usage and gain additional profit.
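Of the estimators listed, the classical rescaled-range (R/S) method is the easiest to sketch: compute the rescaled range in windows of increasing size and read the Hurst exponent off the log-log slope. The snippet below applies it to synthetic white noise; the window sizes and the test series are illustrative choices, not the billing data used in the article.

# Classical rescaled-range (R/S) estimate of the Hurst exponent.
import numpy as np

def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128, 256)):
    x = np.asarray(x, dtype=float)
    rs_means = []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):      # non-overlapping windows
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())                 # cumulative deviations
            r = z.max() - z.min()                       # range
            s = w.std()                                 # standard deviation
            if s > 0:
                rs_vals.append(r / s)
        rs_means.append(np.mean(rs_vals))
    # E[R/S] ~ c * n**H, so H is the slope in log-log coordinates
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_means), 1)
    return slope

rng = np.random.default_rng(3)
white = rng.standard_normal(4096)
print("white noise H ~", hurst_rs(white))   # expected roughly near 0.5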
An entropy correction method for unsteady full potential flows with strong shocks
NASA Technical Reports Server (NTRS)
Whitlow, W., Jr.; Hafez, M. M.; Osher, S. J.
1986-01-01
An entropy correction method for the unsteady full potential equation is presented. The unsteady potential equation is modified to account for entropy jumps across shock waves. The conservative form of the modified equation is solved in generalized coordinates using an implicit, approximate factorization method. A flux-biasing differencing method, which generates the proper amounts of artificial viscosity in supersonic regions, is used to discretize the flow equations in space. Comparisons between the present method and solutions of the Euler equations and between the present method and experimental data are presented. The comparisons show that the present method more accurately models solutions of the Euler equations and experiment than does the isentropic potential formulation.
NASA Technical Reports Server (NTRS)
Lie-Svendsen, O.; Leer, E.
1995-01-01
We have studied the evolution of the velocity distribution function of a test population of electrons in the solar corona and inner solar wind region, using a recently developed kinetic model. The model solves the time dependent, linear transport equation, with a Fokker-Planck collision operator to describe Coulomb collisions between the 'test population' and a thermal background of charged particles, using a finite differencing scheme. The model provides information on how non-Maxwellian features develop in the distribution function in the transition region from collision dominated to collisionless flow. By taking moments of the distribution the evolution of higher order moments, such as the heat flow, can be studied.
Three-dimensional control of crystal growth using magnetic fields
NASA Astrophysics Data System (ADS)
Dulikravich, George S.; Ahuja, Vineet; Lee, Seungsoo
1993-07-01
Two coupled systems of partial differential equations governing three-dimensional laminar viscous flow undergoing solidification or melting under the influence of arbitrarily oriented externally applied magnetic fields have been formulated. The model accounts for arbitrary temperature dependence of physical properties including latent heat release, effects of Joule heating, magnetic field forces, and mushy region existence. On the basis of this model a numerical algorithm has been developed and implemented using central differencing on a curvilinear boundary-conforming grid and Runge-Kutta explicit time-stepping. The numerical results clearly demonstrate possibilities for active and practically instantaneous control of melt/solid interface shape, the solidification/melting front propagation speed, and the amount and location of solid accrued.
NASA Technical Reports Server (NTRS)
Stuart, J. R.
1984-01-01
The evolution of NASA's planetary navigation techniques is traced, and radiometric and optical data types are described. Doppler navigation; the Deep Space Network; differenced two-way range techniques; differential very long baseline interferometry; and optical navigation are treated. The Doppler system enables a spacecraft in cruise at high absolute declination to be located within a total angular uncertainty of 1/4 microrad. The two-station range measurement provides a 1 microrad backup at low declinations. Optical data locate the spacecraft relative to the target to an angular accuracy of 5 microrad. Earth-based radio navigation and its less accurate but target-relative counterpart, optical navigation, thus form complementary measurement sources, which provide a powerful sensory system to produce high-precision orbit estimates.
Multigrid for hypersonic viscous two- and three-dimensional flows
NASA Technical Reports Server (NTRS)
Turkel, E.; Swanson, R. C.; Vatsa, V. N.; White, J. A.
1991-01-01
The use of a multigrid method with central differencing to solve the Navier-Stokes equations for hypersonic flows is considered. The time dependent form of the equations is integrated with an explicit Runge-Kutta scheme accelerated by local time stepping and implicit residual smoothing. Variable coefficients are developed for the implicit process that removes the diffusion limit on the time step, producing significant improvement in convergence. A numerical dissipation formulation that provides good shock capturing capability for hypersonic flows is presented. This formulation is shown to be a crucial aspect of the multigrid method. Solutions are given for two-dimensional viscous flow over a NACA 0012 airfoil and three-dimensional flow over a blunt biconic.
Analytic regularization of uniform cubic B-spline deformation fields.
Shackleford, James A; Yang, Qi; Lourenço, Ana M; Shusharina, Nadya; Kandasamy, Nagarajan; Sharp, Gregory C
2012-01-01
Image registration is inherently ill-posed, and lacks a unique solution. In the context of medical applications, it is desirable to avoid solutions that describe physically unsound deformations within the patient anatomy. Among the accepted methods of regularizing non-rigid image registration to provide solutions applicable to medical practice is the penalty of thin-plate bending energy. In this paper, we develop an exact, analytic method for computing the bending energy of a three-dimensional B-spline deformation field as a quadratic matrix operation on the spline coefficient values. Results presented on ten thoracic case studies indicate the analytic solution is between 61 and 1371 times faster than a numerical central differencing solution.
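For reference, a numerical central-differencing baseline of the kind the analytic method is benchmarked against can be sketched as follows: approximate the second derivatives of each displacement component on the voxel grid and sum their squares. The field, spacing, and summation details below are assumptions of this illustration, not the paper's implementation.

# Numerical (central differencing) bending energy of a 3-D vector deformation field.
import numpy as np

def bending_energy(deformation, spacing):
    """deformation: array (3, nz, ny, nx) of displacement components."""
    energy = 0.0
    for comp in deformation:                     # each displacement component
        firsts = np.gradient(comp, *spacing)     # first derivatives (central in interior)
        for g in firsts:
            for h in np.gradient(g, *spacing):   # second derivatives d2/dxi dxj
                # the ordered double loop counts mixed derivatives twice,
                # matching the usual 2*(f_xy^2 + ...) weighting
                energy += np.sum(h ** 2)
    return energy * np.prod(spacing)             # approximate the volume integral

rng = np.random.default_rng(4)
field = rng.standard_normal((3, 20, 20, 20))     # synthetic displacement field
print(bending_energy(field, spacing=(2.0, 2.0, 2.0)))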
Incompressible viscous flow computations for the pump components and the artificial heart
NASA Technical Reports Server (NTRS)
Kiris, Cetin
1992-01-01
A finite difference, three dimensional incompressible Navier-Stokes formulation to calculate the flow through turbopump components is utilized. The solution method is based on the pseudo compressibility approach and uses an implicit upwind differencing scheme together with the Gauss-Seidel line relaxation method. Both steady and unsteady flow calculations can be performed using the current algorithm. Here, equations are solved in steadily rotating reference frames by using the steady state formulation in order to simulate the flow through a turbopump inducer. Eddy viscosity is computed by using an algebraic mixing-length turbulence model. Numerical results are compared with experimental measurements and a good agreement is found between the two.
Thermal instability in post-flare plasmas
NASA Technical Reports Server (NTRS)
Antiochos, S. K.
1976-01-01
The cooling of post-flare plasmas is discussed and the formation of loop prominences is explained as due to a thermal instability. A one-dimensional model was developed for active loop prominences. Only the motion and heat fluxes parallel to the existing magnetic fields are considered. The relevant size scales and time scales are such that single-fluid MHD equations are valid. The effects of gravity, the geometry of the field and conduction losses to the chromosphere are included. A computer code was constructed to solve the model equations. Basically, the system is treated as an initial value problem (with certain boundary conditions at the chromosphere-corona transition region), and a two-step time differencing scheme is used.
NASA Technical Reports Server (NTRS)
Oaks, J.; Frank, A.; Falvey, S.; Lister, M.; Buisson, J.; Wardrip, C.; Warren, H.
1982-01-01
Time transfer equipment and techniques used with the Navigation Technology Satellites were modified and extended for use with the Global Positioning System (GPS) satellites. A prototype receiver was built and field tested. The receiver uses the GPS L1 link at 1575 MHz with C/A code only to resolve a measured range to the satellite. A theoretical range is computed from the satellite ephemeris transmitted in the data message and the user's coordinates. Results of user offset from GPS time are obtained by differencing the measured and theoretical ranges and applying calibration corrections. Results of the first field test evaluation of the receiver are presented.
Multigrid solutions to quasi-elliptic schemes
NASA Technical Reports Server (NTRS)
Brandt, A.; Taasan, S.
1985-01-01
Quasi-elliptic schemes arise from central differencing or finite element discretization of elliptic systems with odd order derivatives on non-staggered grids. They are somewhat unstable and less accurate than corresponding staggered-grid schemes. When usual multigrid solvers are applied to them, the asymptotic algebraic convergence is necessarily slow. Nevertheless, it is shown by mode analyses and numerical experiments that the usual FMG algorithm is very efficient in solving quasi-elliptic equations to the level of truncation errors. Also, a new type of multigrid algorithm is presented, mode analyzed and tested, for which even the asymptotic algebraic convergence is fast. The essence of that algorithm is applicable to other kinds of problems, including highly indefinite ones.
A numerical method for the solution of internal pipe/channel flows in laminar or turbulent motion
NASA Astrophysics Data System (ADS)
Lourenco, L.; Essers, J. A.
1981-11-01
A computer program useful for the solution of problems of internal turbulent or laminar flow without recirculation is described. The flow is treated in terms of parabolic boundary layer differential equations. The eddy diffusivity concept is used to model turbulent stresses. Two turbulence models are available: the Prandtl mixing length model and the Nee-Kovasznay model for the effective viscosity. The fluid is considered incompressible, but little program modification is needed to treat compressible flows. Initial conditions are prescribed, as are the boundary conditions. The differencing scheme employed is fully implicit for the dependent variables. This allows the use of relatively large forward steps without stability problems.
Glacier topography and elevation changes derived from Pléiades sub-meter stereo images
NASA Astrophysics Data System (ADS)
Berthier, E.; Vincent, C.; Magnússon, E.; Gunnlaugsson, Á. Þ.; Pitte, P.; Le Meur, E.; Masiokas, M.; Ruiz, L.; Pálsson, F.; Belart, J. M. C.; Wagnon, P.
2014-12-01
In response to climate change, most glaciers are losing mass and hence contribute to sea-level rise. Repeated and accurate mapping of their surface topography is required to estimate their mass balance and to extrapolate/calibrate sparse field glaciological measurements. In this study we evaluate the potential of sub-meter stereo imagery from the recently launched Pléiades satellites to derive digital elevation models (DEMs) of glaciers and their elevation changes. Our five evaluation sites, where nearly simultaneous field measurements were collected, are located in Iceland, the European Alps, the central Andes, Nepal and Antarctica. For Iceland, the Pléiades DEM is also compared to a lidar DEM. The vertical biases of the Pléiades DEMs are less than 1 m if ground control points (GCPs) are used, but reach up to 7 m without GCPs. Even without GCPs, vertical biases can be reduced to a few decimetres by horizontal and vertical co-registration of the DEMs to reference altimetric data on ice-free terrain. Around these biases, the vertical precision of the Pléiades DEMs is ±1 m and even ±0.5 m on the flat glacier tongues (1σ confidence level). Similar precision levels are obtained in the accumulation areas of glaciers and in Antarctica. We also demonstrate the high potential of Pléiades DEMs for measuring seasonal, annual and multi-annual elevation changes with an accuracy of 1 m or better if cloud-free images are available. The negative region-wide mass balances of glaciers in the Mont-Blanc area (-1.04 ± 0.23 m a⁻¹ water equivalent, w.e.) are revealed by differencing Satellite pour l'Observation de la Terre 5 (SPOT 5) and Pléiades DEMs acquired in August 2003 and 2012, confirming the accelerated glacial wastage in the European Alps.
The Temperature and Distribution of Organic Molecules in the Inner Regions of T Tauri Disks
NASA Technical Reports Server (NTRS)
Mandell, Avi
2012-01-01
"High-resolution NIR spectroscopic observations of warm molecular gas emission from young circumstellar disks allow us to constrain the temperature and composition of material in the inner planet-forming region. By combining advanced data reduction algorithms with accurate modeling of the terrestrial atmospheric spectrum and a novel double-differencing data analysis technique, we have achieved very high-contrast measurements (S/N approx. 500-1000) of molecular emission at 3 microns. In disks around low-mass stars, we have achieved the first detections of emission from HCN and C2H2 at near-infrared wavelengths from several bright T Tauri stars using the CRIRES spectrograph on the Very Large Telescope and NIRSPEC spectrograph on the Keck Telescope. We spectrally resolve the line shape, showing that the emission has both a Keplerian and non-Keplerian component as observed previously for CO emission. We used a simplified single-temperature local thermal equilibrium (LTE) slab model with a Gaussian line profile to make line identifications and determine a best-fit temperature and initial abundance ratios, and we then compared these values with constraints derived from a detailed disk radiative transfer model assuming LTE excitation but utilizing a realistic temperature and density structure. Abundance ratios from both sets of models are consistent with each other and consistent with expected values from theoretical chemical models, and analysis of the line shapes suggests that the molecular emission originates from within a narrow region in the inner disk (R < 1 AU)."
Sedimentation and bathymetric change in San Pablo Bay, 1856-1983
Jaffe, Bruce E.; Smith, Richard E.; Torresan, Laura Zink
1998-01-01
A long-term perspective of erosion and deposition in San Francisco Bay is vital to understanding and managing wetland change, harbor and channel siltation, and other sediment-related phenomena such as particle and particle-associated substance (pollutants, trace metals, etc.) transport and deposition. A quantitative comparison of historical hydrographic surveys provides this perspective. This report presents results of such a comparison for San Pablo Bay, California. Six hydrographic surveys from 1856 to 1983 were analyzed to determine long-term changes in the sediment system of San Pablo Bay. Each survey was gridded using surface modeling software. Changes between survey periods were computed by differencing grids. Patterns and volumes of erosion and deposition in the Bay are derived from difference grids. More than 350 million cubic meters of sediment was deposited in San Pablo Bay from 1856 to 1983. This is equivalent to a Baywide accumulation rate of approximately 1 cm/yr. However, sediment deposition was not constant over time or throughout the Bay. Over two-thirds of that sediment was debris from hydraulic mining that accumulated from 1856 to 1887. During this period, deposition occurred in nearly the entire Bay. In contrast, from 1951 to 1983 much of the Bay changed from being depositional to erosional as sediment supply diminished and currents and waves continued to remove sediment from the Bay. The decrease in sediment supply is likely the result of upstream flood-control and water-distribution projects that have reduced peak flows, which are responsible for the greatest sediment transport. One consequence of the change in sedimentation was a loss of about half of the tidal flat areas from the late 1800's to the 1980's. Change in sedimentation must also have affected flow in the Bay, areas where polluted sediments were deposited, exchange of sediment between the nearshore and wetlands, and wave energy reaching the shoreline that was available to erode wetlands. Further work is needed. Studies of historical wetland change and the relationship between change and man-made and natural influences would be valuable for developing sound wetland management plans. Additionally, extending the historical hydrographic and wetland change analyses eastward into Suisun Bay will improve the understanding of the North Bay sediment system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gasperikova, Erika; Smith, J. Torquil; Morrison, H.Frank
2008-01-14
The Berkeley UXO Discriminator (BUD) is an optimally designed active electromagnetic system that not only detects but also characterizes UXO. The performance of the system is governed by a target size-depth curve. BUD was designed to detect UXO in the 20 mm to 155 mm size range for depths between 0 and 1.5 m, and to characterize them in a depth range from 0 to 1.1 m. The system incorporates three orthogonal transmitters and eight pairs of differenced receivers. Eight receiver coils are placed horizontally along the two diagonals of the upper and lower planes of the two horizontal transmitter loops. These receiver coil pairs are located on symmetry lines through the center of the system and each pair sees identical fields during the on-time of the pulse in all of the transmitter coils. They are wired in opposition to produce zero output during the on-time of the pulses in the three orthogonal transmitters. Moreover, this configuration dramatically reduces noise in the measurements by canceling the background electromagnetic fields (these fields are uniform over the scale of the receiver array and are consequently nulled by the differencing operation), and by canceling the noise contributed by the tilt motion of the receivers in the Earth's magnetic field, and greatly enhances receiver sensitivity to the gradients of the target response. BUD is mounted on a small cart to assure system mobility. System positioning is provided by a Real Time Kinematic (RTK) GPS receiver. The system has two modes of operation: (1) the search mode, in which BUD moves along a profile and exclusively detects targets in its vicinity, providing target depth and horizontal location, and (2) the discrimination mode, in which BUD is stationary above a target and determines three discriminating polarizability responses together with the object location and orientation from a single position of the system. The detection performance of the system is governed by a size-depth curve shown in Figure 2. This curve was calculated for BUD assuming that the receiver plane is 0.2 m above the ground. Figure 2 shows that, for example, BUD can detect an object with a 0.1 m diameter down to a depth of 0.9 m with a depth uncertainty of 10%. Any objects buried at a depth of more than 1.3 m will have a low probability of detection. The discrimination performance of the system is governed by a size-depth curve shown in Figure 3. Again, this curve was calculated for BUD assuming that the receiver plane is 0.2 m above the ground. Figure 3 shows that, for example, BUD can determine the polarizability of an object with a 0.1 m diameter down to a depth of 0.63 m with a polarizability uncertainty of 10%. Any objects buried at a depth of more than 0.9 m will have a low discrimination probability. Object orientation estimates and equivalent dipole polarizability estimates used for large and shallow UXO/scrap discrimination are more problematic, as they are affected by higher order (non-dipole) terms induced in objects due to source field gradients along the length of the objects. For example, a vertical 0.4 m object directly below the system needs to be about 0.90 m deep for perturbations due to gradients along the length of the object to be of the order of 20% of the uniform field object response. Similarly, vertical objects 0.5 m and 0.6 m long need to be 1.15 m and 1.42 m, respectively, below the system. For horizontal objects the effect of gradients across the object diameter is much smaller.
For example, 155 mm and 105 mm projectiles need to be only 0.30 m and 0.19 m, respectively, below the system. A polarizability index (in cm³), which is an average value of the product of time (in seconds) and polarizability rate (in m³/s) over the 34 sample times logarithmically spaced from 143 to 1300 µs, and three polarizabilities, can be calculated for any object. We used this polarizability index to decide when the object is in a uniform source field. Objects with a polarizability index smaller than 600 cm³ and deeper than 1.8 m below BUD, or smaller than 200 cm³ and deeper than 1.35 m, or smaller than 80 cm³ and deeper than 0.90 m, or smaller than 9 cm³ and deeper than 0.20 m below BUD are sufficiently deep that the effects of vertical source field gradients should be less than 15%. All other objects are considered large and shallow objects. At the moment, interpretation software is available for a single object only. In case of multiple objects the software indicates the possible presence of metallic objects but is unable to provide characteristics of each individual object.
NASA Astrophysics Data System (ADS)
Elliott, A. J.; Oskin, M. E.; Banesh, D.; Gold, P. O.; Hinojosa-Corona, A.; Styron, R. H.; Taylor, M. H.
2012-12-01
Differencing repeat terrestrial lidar scans of the 2010 M7.2 El Mayor-Cucapah (EMC) earthquake rupture reveals the rapid onset of surface processes that simultaneously degrade and preserve evidence of coseismic fault rupture in the landscape and paleoseismic record. We surveyed fresh fault rupture two weeks after the 4 April 2010 earthquake, then repeated these surveys one year later. We imaged fault rupture through four substrates varying in degree of consolidation and scarp facing-direction, recording modification due to a range of aeolian, fluvial, and hillslope processes. Using lidar-derived DEM rasters to calculate the topographic differences between years results in aliasing errors because GPS uncertainty between years (~1.5cm) exceeds lidar point-spacing (<1.0cm) shifting the raster sampling of the point cloud. Instead, we coregister each year's scans by iteratively minimizing the horizontal and vertical misfit between neighborhoods of points in each raw point cloud. With the misfit between datasets minimized, we compute the vertical difference between points in each scan within a specified neighborhood. Differencing results reveal two variables controlling the type and extent of erosion: cohesion of the substrate controls the degree to which hillslope processes affect the scarp, while scarp facing direction controls whether more effective fluvial erosion can act on the scarp. In poorly consolidated materials, large portions (>50% along strike distance) of the scarp crest are eroded up to 5cm by a combination of aeolian abrasion and diffusive hillslope processes, such as rainsplash and mass-wasting, while in firmer substrate (i.e., bedrock mantled by fault gouge) there is no detectable hillslope erosion. On the other hand, where small gullies cross downhill-facing scarps (<5% along strike distance), fluvial erosion has caused 5-50cm of headward scarp retreat in bedrock. Thus, although aeolian and hillslope processes operate over a greater along-strike distance, fluvial processes concentrated in pre-existing bedrock gullies transport a far greater volume of material across the scarp. Substrate cohesiveness dictates the degree to which erosive processes act to relax the scarp (e.g., gravels erode more easily than bedrock). However, scarp locations that favor fluvial processes suffer rapid, localized erosion of vertical scarp faces, regardless of substrate. Differential lidar also reveals debris cones formed at the base of the scarp below locations of scarp crest erosion. These indicate the rapid growth of a colluvial wedge. Where a fissure occupies the base of the scarp we observe nearly complete in-filling by silt and sand moved by both mass wasting and fluvial deposition, indicating that fissure fills observed in paleoseismic trenches likely bracket the age of an earthquake to within one year. We find no evidence of differential postseismic tectonic deformation across the fault within the ~100m aperture of our surveys.
The Continued Demise of Columbia Glacier: Insights On Dynamic Change
NASA Astrophysics Data System (ADS)
Enderlin, E. M.; Hamilton, G. S.; O'Neel, S.; Bartholomaus, T. C.
2016-12-01
Columbia Glacier, Alaska, has served as the archetype for the retreat phase of the tidewater glacier cycle for the past three decades. Since the mid-1980s, the terminus has retreated 16 kilometers and the two major tributaries have thinned by > 400 m. This retreat and thinning led to separation of the tributaries in the late 2000s. Since their separation, the tributaries have exhibited strikingly different dynamic behaviors over seasonal to inter-annual time scales as they continue to adjust to the long-term changes in glacier geometry. Here we use a combination of ground, airborne, and satellite remote sensing datasets to characterize the dynamic behavior of the Columbia Glacier system. We focus on the time period following tributary separation, when the observational record is most abundant, but also investigate longer-term changes in dynamics such as the reorganization of ice flow in the eastern tributary (Figure 1). From the mid 2000s through 2012, the tributaries thinned at comparable rates ( 25 m/yr) based on repeat DEM differencing. Their behavior diverged in 2012, when the eastern tributary appeared to stabilize but the western tributary continued its sustained thinning trend. Thinning resumed along the eastern tributary in late 2013, and was accompanied by modest terminus retreat and acceleration. In contrast, the rate of thinning dramatically increased along the western tributary as it began to rapidly retreat in late 2013. These changes coincided with the three-fold increase in flow speed and pronounced increase in iceberg discharge from the western tributary. Although variations in the timing and magnitude of the recent dynamic changes can be at least partially explained by differences in the geometries of the tributaries, the dynamic behavior of Columbia Glacier's major tributaries is unlikely to be totally independent of environmental perturbations (i.e., entirely driven by the long-term dynamic adjustment). To assess the influence of environmental perturbations on the dynamic behavior of the glacier, we compare weekly to multi-year changes in glacier dynamics constructed from our airborne and satellite remotely-sensed datasets to time series of frontal ablation (i.e., submarine melting and iceberg calving) and surface mass balance compiled from ground-based observations.
Navier-Stokes Aerodynamic Simulation of the V-22 Osprey on the Intel Paragon MPP
NASA Technical Reports Server (NTRS)
Vadyak, Joseph; Shrewsbury, George E.; Narramore, Jim C.; Montry, Gary; Holst, Terry; Kwak, Dochan (Technical Monitor)
1995-01-01
The paper will describe the development of a general three-dimensional, multiple-grid-zone Navier-Stokes flowfield simulation program (ENS3D-MPP) designed for efficient execution on the Intel Paragon Massively Parallel Processor (MPP) supercomputer, and the subsequent application of this method to the prediction of the viscous flowfield about the V-22 Osprey tiltrotor vehicle. The flowfield simulation code solves the thin-layer or full Navier-Stokes equations for viscous flow modeling, or the Euler equations for inviscid flow modeling, on a structured multi-zone mesh. In the present paper only viscous simulations will be shown. The governing difference equations are solved using a time-marching implicit approximate factorization method with either TVD upwind or central differencing used for the convective terms and central differencing used for the viscous diffusion terms. Steady-state or time-accurate solutions can be calculated. The present paper will focus on steady-state applications, although time-accurate solution analysis is the ultimate goal of this effort. Laminar viscosity is calculated using Sutherland's law, and the Baldwin-Lomax two-layer algebraic turbulence model is used to compute the eddy viscosity. The simulation method uses an arbitrary-block, curvilinear grid topology. An automatic grid adaption scheme is incorporated which concentrates grid points in regions of high density gradients. A variety of user-specified boundary conditions are available. This paper will present the application of the scalable and superscalable versions to the steady-state viscous flow analysis of the V-22 Osprey using a multiple-zone global mesh. The mesh consists of a series of sheared Cartesian grid blocks with polar grids embedded within to better simulate the wing-tip-mounted nacelle. MPP solutions will be shown in comparison to equivalent Cray C-90 results and also in comparison to experimental data. Discussions on meshing considerations, wall clock execution time, load balancing, and scalability will be provided.
Geomorphic Response to Significant Sediment Loading Along Tahoma Creek on Mount Rainier, WA
NASA Astrophysics Data System (ADS)
Anderson, S.; Kennard, P.; Pitlick, J.
2012-12-01
Increased sediment loading in streams draining the flanks of Mt. Rainier has caused significant damage to National Park Service infrastructure and has prompted concern in downstream communities. The processes driving sedimentation and the controls on downstream response are explored in the 37 km2 Tahoma Creek basin, using repeat LiDAR surveys supplemented with additional topographic datasets. DEM differencing between 2003 and 2008 LiDAR datasets shows that over 2.2 million cubic meters of material was evacuated from the upper reaches of the basin, predominantly in the form of debris flows. These debris flows were sourced in recently exposed lateral moraines, bulking through the broad collapse of unstable hillslopes. 40% of this material was deposited in the historic debris fan 4-6 km downstream of the terminus, while 55% completely exited the system at the downstream point of the surveys. Distinct zones of aggradation and incision of up to one meter are present along the lower channel and appear to be controlled by valley constrictions and inflections. However, the lower channel has shown remarkable long-term stability in the face of significant sediment loads. Alder ages suggest fluvial high stands in the late 1970s and early 1990s, immediately following periods of significant debris flow activity, yet the river quickly returned to pre-disturbance elevations. On longer time scales, the presence of old-growth forest on adjacent floodplain/terrace surfaces indicates broad stability on both vertical and horizontal planes. More than a passive indicator, these forested surfaces play a significant role in maintaining channel stability through increased overbank roughness and the formation of bank-armoring log jams. Sediment transport mechanics along this lower reach are explored using the TomSED sediment transport model, driven by data from an extensive sediment sampling and stream gaging effort. In its current state, the model is able to replicate the stability of the channel but significantly underpredicts total loads when compared to the LiDAR differencing.
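The core volume calculation is straightforward once the two DEMs are co-registered on a common grid. A minimal sketch follows, assuming hypothetical NumPy elevation rasters in metres and a detection threshold chosen by the analyst; it is not the study's processing chain.

    import numpy as np

    def volume_change(dem_early, dem_late, cell_size, min_detectable=0.2):
        """Sum elevation change into erosion and deposition volumes (m^3).
        dem_early, dem_late: co-registered 2-D arrays of elevations (m).
        cell_size: raster resolution (m); min_detectable: change threshold (m)."""
        dz = dem_late - dem_early
        dz[np.abs(dz) < min_detectable] = 0.0       # suppress changes below the detection limit
        cell_area = cell_size ** 2
        erosion = dz[dz < 0].sum() * cell_area      # negative total: material evacuated
        deposition = dz[dz > 0].sum() * cell_area   # positive total: material deposited
        return erosion, deposition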
Cheng, R.T.; Casulli, V.; Gartner, J.W.
1993-01-01
A numerical model using a semi-implicit finite-difference method for solving the two-dimensional shallow-water equations is presented. The gradient of the water surface elevation in the momentum equations and the velocity divergence in the continuity equation are finite-differenced implicitly; the remaining terms are finite-differenced explicitly. The convective terms are treated using an Eulerian-Lagrangian method. The combination of the semi-implicit finite-difference solution for the gravity wave propagation, and the Eulerian-Lagrangian treatment of the convective terms renders the numerical model unconditionally stable. When the baroclinic forcing is included, a salt transport equation is coupled to the momentum equations, and the numerical method is subject to a weak stability condition. The method of solution and the properties of the numerical model are given. This numerical model is particularly suitable for applications to coastal plain estuaries and tidal embayments in which tidal currents are dominant, and tidally generated residual currents are important. The model is applied to San Francisco Bay, California, where extensive historical tides and current-meter data are available. The model calibration is considered by comparing time-series of the field data and of the model results. Alternatively, and perhaps more meaningfully, the model is calibrated by comparing the harmonic constants of tides and tidal currents derived from field data with those derived from the model. The model is further verified by comparing the model results with an independent data set representing the wet season. The strengths and the weaknesses of the model are assessed based on the results of model calibration and verification. Using the model results, the properties of tides and tidal currents in San Francisco Bay are characterized and discussed. Furthermore, using the numerical model, estimates of San Francisco Bay's volume, surface area, mean water depth, tidal prisms, and tidal excursions at spring and neap tides are computed. Additional applications of the model reveal, qualitatively, the spatial distribution of residual variables. © 1993 Academic Press. All rights reserved.
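Schematically, in one dimension the semi-implicit treatment takes the surface-elevation gradient and the velocity divergence at the new time level while convection is handled along Lagrangian trajectories; the notation below is an illustrative sketch, not the paper's own discretization:

\[
\frac{\eta_i^{n+1}-\eta_i^{n}}{\Delta t} + \frac{H}{\Delta x}\left(u_{i+1/2}^{\,n+1}-u_{i-1/2}^{\,n+1}\right)=0,
\qquad
\frac{u_{i+1/2}^{\,n+1}-u_{i+1/2}^{*}}{\Delta t} = -\,g\,\frac{\eta_{i+1}^{n+1}-\eta_i^{n+1}}{\Delta x},
\]

where u* is the velocity interpolated at the foot of the Lagrangian trajectory (the Eulerian-Lagrangian treatment of convection) and H is the local depth. Eliminating the implicit velocities yields a single symmetric system for the free surface, which is what makes the scheme unconditionally stable with respect to the gravity wave speed.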
Flow interaction experiment. Volume 2: Aerothermal modeling, phase 2
NASA Technical Reports Server (NTRS)
Nikjooy, M.; Mongia, H. C.; Sullivan, J. P.; Murthy, S. N. B.
1993-01-01
An experimental and computational study is reported for the flow of a turbulent jet discharging into a rectangular enclosure. The experimental configurations, consisting of primary jets only, annular jets only, and a combination of annular and primary jets, are investigated to provide a better understanding of the flow field in an annular combustor. A laser Doppler velocimeter is used to measure mean velocity and Reynolds stress components. Major features of the flow field include recirculation, primary and annular jet interaction, and high turbulence. A significant result from this study is the effect the primary jets have on the flow field. The primary jets are seen to create statistically larger recirculation zones and higher turbulence levels. In addition, a technique called marker nephelometry is used to provide mean concentration values in the model combustor. Computations are performed using three levels of turbulence closure, namely the k-epsilon model, algebraic second moment (ASM) closure, and differential second moment (DSM) closure. Two different numerical schemes are applied. One is the lower-order power-law differencing scheme (PLDS) and the other is the higher-order flux-spline differencing scheme (FSDS). A comparison is made of the performance of these schemes. The numerical results are compared with experimental data. For the cases considered in this study, the FSDS is more accurate than the PLDS. For a prescribed accuracy, the flux-spline scheme requires far fewer grid points. Thus, it has the potential for providing a numerical error-free solution, especially for three-dimensional flows, without requiring an excessively fine grid. Although qualitatively good comparison with data was obtained, the deficiencies regarding the modeled dissipation rate (epsilon) equation, the pressure-strain correlation model, the inlet epsilon profile, and other critical closure issues need to be resolved before one can achieve the degree of accuracy required to analytically design combustion systems.
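For reference, the lower-order scheme named here is commonly the Patankar power-law scheme, in which the diffusive contribution at a cell face is scaled by a function of the face Peclet number P; assuming that standard variant (the report's exact formulation is not reproduced), the weighting is

\[
A(|P|) = \max\!\left[\,0,\;(1-0.1\,|P|)^{5}\,\right],
\]

so diffusion is switched off smoothly as |P| approaches 10 and the scheme reverts to pure upwinding of the convective flux at strongly convection-dominated faces.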
Flow interaction experiment. Volume 1: Aerothermal modeling, phase 2
NASA Technical Reports Server (NTRS)
Nikjooy, M.; Mongia, H. C.; Sullivan, J. P.; Murthy, S. N. B.
1993-01-01
An experimental and computational study is reported for the flow of a turbulent jet discharging into a rectangular enclosure. The experimental configurations, consisting of primary jets only, annular jets only, and a combination of annular and primary jets, are investigated to provide a better understanding of the flow field in an annular combustor. A laser Doppler velocimeter is used to measure mean velocity and Reynolds stress components. Major features of the flow field include recirculation, primary and annular jet interaction, and high turbulence. A significant result from this study is the effect the primary jets have on the flow field. The primary jets are seen to create statistically larger recirculation zones and higher turbulence levels. In addition, a technique called marker nephelometry is used to provide mean concentration values in the model combustor. Computations are performed using three levels of turbulence closure, namely the k-epsilon model, algebraic second moment (ASM) closure, and differential second moment (DSM) closure. Two different numerical schemes are applied. One is the lower-order power-law differencing scheme (PLDS) and the other is the higher-order flux-spline differencing scheme (FSDS). A comparison is made of the performance of these schemes. The numerical results are compared with experimental data. For the cases considered in this study, the FSDS is more accurate than the PLDS. For a prescribed accuracy, the flux-spline scheme requires far fewer grid points. Thus, it has the potential for providing a numerical error-free solution, especially for three-dimensional flows, without requiring an excessively fine grid. Although qualitatively good comparison with data was obtained, the deficiencies regarding the modeled dissipation rate (epsilon) equation, the pressure-strain correlation model, the inlet epsilon profile, and other critical closure issues need to be resolved before one can achieve the degree of accuracy required to analytically design combustion systems.
Volume 2: Explicit, multistage upwind schemes for Euler and Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa; Ash, Robert L.
1992-01-01
The objective of this study was to develop a high-resolution, explicit, multi-block numerical algorithm suitable for efficient computation of the three-dimensional, time-dependent Euler and Navier-Stokes equations. The resulting algorithm employs a finite volume approach, using monotonic upstream schemes for conservation laws (MUSCL)-type differencing to obtain state variables at cell interfaces. Variable interpolations were written in the kappa-scheme formulation. Inviscid fluxes were calculated via Roe's flux-difference splitting and van Leer's flux-vector splitting techniques, which are considered state of the art. The viscous terms were discretized using a second-order, central-difference operator. Two classes of explicit time integration have been investigated for solving the compressible inviscid/viscous flow problems--two-stage predictor-corrector schemes and multistage time-stepping schemes. The coefficients of the multistage time-stepping schemes have been modified successfully to achieve better performance with upwind differencing. A technique was developed to optimize the coefficients for good high-frequency damping at relatively high CFL numbers. Local time-stepping, implicit residual smoothing, and a multigrid procedure were added to the explicit time-stepping scheme to accelerate convergence to steady state. The developed algorithm was implemented successfully in a multi-block code, which provides complete topological and geometric flexibility. The only requirement is C0 continuity of the grid across the block interface. The algorithm has been validated on a diverse set of three-dimensional test cases of increasing complexity. The cases studied were: (1) supersonic corner flow; (2) supersonic plume flow; (3) laminar and turbulent flow over a flat plate; (4) transonic flow over an ONERA M6 wing; and (5) unsteady flow of a compressible jet impinging on a ground plane (with and without cross flow). The emphasis of the test cases was validation of the code and assessment of performance, as well as demonstration of flexibility.
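A generic statement of the MUSCL-type, kappa-scheme interpolation for the left state at a cell face (the particular limiter used in the code is not reproduced here) is

\[
u_{i+1/2}^{L} = u_i + \frac{1}{4}\left[(1-\kappa)\,(u_i - u_{i-1}) + (1+\kappa)\,(u_{i+1}-u_i)\right],
\]

where kappa = 1/3 gives third-order-accurate interface values and kappa = -1 reduces to a fully one-sided, second-order upwind extrapolation; the interface states so obtained feed the Roe or van Leer flux functions.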
Glacial lakes amplify glacier recession in the central Himalaya
NASA Astrophysics Data System (ADS)
King, Owen; Quincey, Duncan; Carrivick, Jonathan; Rowan, Ann
2016-04-01
The high altitude and high latitude regions of the world are amongst those which react most intensely to climatic change. Across the Himalaya, glacier mass balance is predominantly negative. The spatial and temporal complexity associated with this ice loss across different glacier clusters is poorly documented, however, and our understanding of the processes driving change is limited. Here, we look at the spatial variability of glacier hypsometry and glacial mass loss from three catchments in the central Himalaya: the Dudh Koshi basin, the Tama Koshi basin, and an adjoining section of the Tibetan Plateau. ASTER and SETSM digital elevation models (2014/15), corrected for elevation-dependent biases, co-registration errors, and along- or cross-track tilts, are differenced from Shuttle Radar Topography Mission (SRTM) data (2000) to yield surface lowering estimates. Landsat data and a hypsometric index (HI), a classification scheme used to group glaciers of similar hypsometry, are used to examine the distribution of glacier area with altitude in each catchment. Surface lowering rates of >3 m/yr can be detected on some glaciers, generally around the clean-ice/debris-cover boundary, where dark but thin surface deposits are likely to enhance ablation. More generally, surface lowering rates of around 1 m/yr are more pervasive, except around the terminus areas of most glaciers, emphasising the influence of a thick debris cover on ice melt. Surface lowering is only concentrated at glacier termini where glacial lakes have developed, where surface lowering rates are commonly greater than 2.5 m/yr. The three catchments show contrasting hypsometric distributions, which is likely to impact their future response to climatic changes. Glaciers of the Dudh Koshi basin store large volumes of ice at low elevation (HI > 1.5) in long, debris-covered tongues, although their altitudinal range is greatest given the height of mountain peaks in the catchment. In contrast, glaciers of the Tama Koshi store large amounts of ice in broad accumulation zones and are more equidimensional (HI -1.2 to 1.2). Glaciers flowing onto the Tibetan Plateau have a similar hypsometric distribution to glaciers of the Dudh Koshi, but terminate at a higher altitude overall, approximately 500 m higher than glaciers of the Dudh Koshi or Tama Koshi. We estimate the approximate Equilibrium Line Altitudes (ELAs) of the last 15 years to be above a substantial portion (66%, Dudh Koshi; 87%, Tama Koshi; 83%, Tibetan Plateau) of the glacierised area for all three catchments. Future ice recession may therefore be governed primarily by glacier hypsometry, but is likely to be amplified by the continued development of new, or growth of current, glacial lakes.
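One widely used definition of the hypsometric index compares the elevation range above and below the median glacier elevation; assuming this is the formulation adopted here (an assumption, since HI variants differ), a minimal sketch is:

    def hypsometric_index(z_max, z_median, z_min):
        """Hypsometric index after the common (Jiskoot-style) definition:
        HI = (z_max - z_median) / (z_median - z_min); values with 0 < HI < 1
        are mapped to -1/HI so that bottom-heavy glaciers plot as large positive
        HI and top-heavy glaciers as large negative HI."""
        hi = (z_max - z_median) / (z_median - z_min)
        if 0 < hi < 1:
            hi = -1.0 / hi
        return hi

Under this convention, HI > 1.5 corresponds to glaciers storing most of their area, and hence ice, at low elevation, consistent with the description of the Dudh Koshi tongues above, while -1.2 < HI < 1.2 marks the more equidimensional Tama Koshi glaciers.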
Seismic Borehole Monitoring of CO2 Injection in an Oil Reservoir
NASA Astrophysics Data System (ADS)
Gritto, R.; Daley, T. M.; Myer, L. R.
2002-12-01
A series of time-lapse seismic cross well and single well experiments were conducted in a diatomite reservoir to monitor the injection of CO2 into a hydrofracture zone, based on P- and S-wave data. A high-frequency piezo-electric P-wave source and an orbital-vibrator S-wave source were used to generate waves that were recorded by hydrophones as well as three-component geophones. The injection well was located about 12 m from the source well. During the pre-injection phase water was injected into the hydrofracture zone. The set of seismic experiments was repeated after a time interval of 7 months during which CO2 was injected into the hydrofractured zone. The questions to be answered ranged from the detectability of the geologic structure in the diatomite reservoir to the detectability of CO2 within the hydrofracture. Furthermore, it was intended to determine which experiment (cross well or single well) is best suited to resolve these features. During the pre-injection experiment, the P-wave velocities exhibited relatively low values between 1700-1900 m/s, which decreased to 1600-1800 m/s during the post-injection phase (-5%). The analysis of the pre-injection S-wave data revealed slow S-wave velocities between 600-800 m/s, while the post-injection data revealed velocities between 500-700 m/s (-6%). These velocity estimates produced high Poisson ratios between 0.36 and 0.46 for this highly porous (~ 50%) material. Differencing post- and pre-injection data revealed an increase in Poisson ratio of up to 5%. Both velocity and Poisson ratio estimates indicate the dissolution of CO2 in the liquid phase of the reservoir accompanied by a pore-pressure increase. The single well data supported the findings of the cross well experiments. P- and S-wave velocities as well as Poisson ratios were comparable to the estimates of the cross well data.
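For reference, the Poisson ratio follows directly from the P- and S-wave velocities. Taking illustrative mid-range pre-injection values of Vp = 1800 m/s and Vs = 700 m/s (example numbers within the reported ranges, not the paper's specific estimates):

\[
\nu = \frac{V_p^{2} - 2V_s^{2}}{2\,(V_p^{2} - V_s^{2})}
    = \frac{1800^{2} - 2\cdot 700^{2}}{2\,(1800^{2} - 700^{2})}
    \approx 0.41,
\]

which falls within the 0.36-0.46 range reported above; the post-injection velocity drops shift this value upward, consistent with the stated increase in Poisson ratio.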
Large-scale sea surface temperature variability from satellite and shipboard measurements
NASA Technical Reports Server (NTRS)
Bernstein, R. L.; Chelton, D. B.
1985-01-01
A series of satellite sea surface temperature intercomparison workshops were conducted under NASA sponsorship at the Jet Propulsion Laboratory. Three different satellite data sets were compared with each other, with routinely collected ship data, and with climatology, for the months of November 1979, December 1981, March 1982, and July 1982. The satellite and ship data were differenced against an accepted climatology to produce anomalies, which in turn were spatially and temporally averaged into two-degree latitude-longitude, one-month bins. Monthly statistics on the satellite and ship bin average temperatures yielded rms differences ranging from 0.58 to 1.37 C, and mean differences ranging from -0.48 to 0.72 C, varying substantially from month to month, and sensor to sensor.
A fourth order accurate finite difference scheme for the computation of elastic waves
NASA Technical Reports Server (NTRS)
Bayliss, A.; Jordan, K. E.; Lemesurier, B. J.; Turkel, E.
1986-01-01
A finite difference scheme for elastic waves is introduced. The model is based on the first-order system of equations for the velocities and stresses. The differencing is fourth-order accurate on the spatial derivatives and second-order accurate in time. The model is tested on a series of examples including the Lamb problem, scattering from plane interfaces, and scattering from a fluid-elastic interface. The scheme is shown to be effective for these problems. The accuracy and stability are insensitive to the Poisson ratio. For the class of problems considered here it is found that the fourth-order scheme requires two-thirds to one-half the resolution of a typical second-order scheme to give comparable accuracy.
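A standard fourth-order central approximation of a first spatial derivative, of the kind such schemes use (shown generically, not as the paper's exact stencil), is

\[
\left.\frac{\partial f}{\partial x}\right|_{i} \approx \frac{-f_{i+2} + 8f_{i+1} - 8f_{i-1} + f_{i-2}}{12\,\Delta x} + \mathcal{O}(\Delta x^{4}),
\]

which, applied to the velocity-stress system, delivers the quoted savings in grid resolution relative to the usual three-point second-order stencil.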
Filtering of non-linear instabilities
NASA Technical Reports Server (NTRS)
Khosla, P. K.; Rubin, S. G.
1978-01-01
For Courant numbers larger than one and cell Reynolds numbers larger than two, oscillations and in some cases instabilities are typically found with implicit numerical solutions of the fluid dynamics equations. This behavior has sometimes been associated with the loss of diagonal dominance of the coefficient matrix. It is shown that these problems can be related to the choice of the spatial differences, with the resulting instability related to aliasing or nonlinear interaction. Appropriate filtering can reduce the intensity of these oscillations and possibly eliminate the instability. These filtering procedures are equivalent to a weighted average of conservation and nonconservation differencing. The entire spectrum of filtered equations retains a three point character as well as second order spatial accuracy. Burgers equation was considered as a model.
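One way to write such a weighted average for Burgers' equation (the paper's specific filter weights are not reproduced; sigma below is an illustrative blending parameter) is

\[
\left(u\,\frac{\partial u}{\partial x}\right)_i \;\approx\; \sigma\,\frac{u_{i+1}^{2}-u_{i-1}^{2}}{4\,\Delta x} \;+\; (1-\sigma)\,u_i\,\frac{u_{i+1}-u_{i-1}}{2\,\Delta x},
\]

with sigma = 1 recovering pure conservation differencing and sigma = 0 pure nonconservation differencing; intermediate values act as the filter while retaining the three-point, second-order character noted above.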
NASA Technical Reports Server (NTRS)
Schumer, R.
1980-01-01
Variables in a study of noise perception near the Munich-Riem airport are explained. The interactive effect of the stimulus (aircraft noise) and moderator (noise sensitivity) on the aircraft noise reaction (disturbance or annoyance) is considered. Methods employed to demonstrate that the moderator has a differencing effect on various stimulus levels are described. Results of the social-scientific portion of the aircraft noise project are compared with those of other survey studies on the problem of aircraft noise. Procedures for contrast group analysis and multiple classification analysis are examined with focus on some difficulties in their application.
A Navier-Stokes Solution of Hull-Ring Wing-Thruster Interaction
NASA Technical Reports Server (NTRS)
Yang, C.-I.; Hartwich, P.; Sundaram, P.
1991-01-01
Navier-Stokes simulations of high Reynolds number flow around an axisymmetric body supported in a water tunnel were made. The numerical method is based on a finite-differencing high resolution second-order accurate implicit upwind scheme. Four different configurations were investigated, these are: (1) barebody; (2) body with an operating propeller; (3) body with a ring wing; and (4) body with a ring wing and an operating propeller. Pressure and velocity components near the stern region were obtained computationally and are shown to compare favorably with the experimental data. The method correctly predicts the existence and extent of stern flow separation for the barebody and the absence of flow separation for the three other configurations with ring wing and/or propeller.
Automatic differentiation as a tool in engineering design
NASA Technical Reports Server (NTRS)
Barthelemy, Jean-Francois; Hall, Laura E.
1992-01-01
Automatic Differentiation (AD) is a tool that systematically implements the chain rule of differentiation to obtain the derivatives of functions calculated by computer programs. AD is assessed as a tool for engineering design. The forward and reverse modes of AD, their computing requirements, as well as approaches to implementing AD are discussed. The application of two different tools to two medium-size structural analysis problems to generate sensitivity information typically necessary in an optimization or design situation is also discussed. The observation is made that AD is to be preferred to finite differencing in most cases, as long as sufficient computer storage is available; in some instances, AD may be the alternative to consider in lieu of analytical sensitivity analysis.
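A minimal illustration of the contrast (not the tools assessed in the paper): forward-mode AD propagated through dual numbers applies the chain rule exactly, whereas one-sided finite differencing trades truncation error against subtractive cancellation. The class and example function below are purely illustrative.

    class Dual:
        """Minimal forward-mode AD: carries a value and its derivative through arithmetic."""
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.der + o.der)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
        __rmul__ = __mul__

    def f(x):                        # example function: f(x) = x*x + 3*x, so f'(x) = 2*x + 3
        return x * x + 3 * x

    x0 = 2.0
    exact = f(Dual(x0, 1.0)).der     # AD result: 7.0, exact up to round-off
    h = 1e-6
    approx = (f(x0 + h) - f(x0)) / h # finite differencing: close to 7.0, but error depends on h
    print(exact, approx)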
Exploration of Mars by Mariner 9 - Television sensors and image processing.
NASA Technical Reports Server (NTRS)
Cutts, J. A.
1973-01-01
Two cameras equipped with selenium-sulfur slow-scan vidicons were used in the orbital reconnaissance of Mars by the U.S. spacecraft Mariner 9, and the performance characteristics of these devices are presented. Digital image processing techniques have been widely applied in the analysis of images of Mars and its satellites. Photometric and geometric distortion corrections, image detail enhancement, and transformation to standard map projections have been routinely employed. More specialized applications included picture differencing, limb profiling, solar lighting corrections, noise removal, line plots, and computer mosaics. Information on enhancements as well as important picture geometric information was stored in a master library. Display of the library data in graphic or numerical form was accomplished by a data management computer program.
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur; Liou, Meng-Sing; Povinelli, Louis A.; Arnone, Andrea
1993-01-01
This paper reports the results of numerical simulations of steady, laminar flow over a backward-facing step. The governing equations used in the simulations are the full 'compressible' Navier-Stokes equations, solutions to which were computed by using a cell-centered, finite volume discretization. The convection terms of the governing equations were discretized by using the Advection Upwind Splitting Method (AUSM), whereas the diffusion terms were discretized using central differencing formulas. The validity and accuracy of the numerical solutions were verified by comparing the results to existing experimental data for flow at identical Reynolds numbers in the same back step geometry. The paper focuses attention on the details of the flow field near the side wall of the geometry.
Time-marching transonic flutter solutions including angle-of-attack effects
NASA Technical Reports Server (NTRS)
Edwards, J. W.; Bennett, R. M.; Whitlow, W., Jr.; Seidel, D. A.
1982-01-01
Transonic aeroelastic solutions based upon the transonic small perturbation potential equation were studied. Time-marching transient solutions of plunging and pitching airfoils were analyzed using a complex exponential modal identification technique, and seven alternative integration techniques for the structural equations were evaluated. The HYTRAN2 code was used to determine transonic flutter boundaries versus Mach number and angle-of-attack for NACA 64A010 and MBB A-3 airfoils. In the code, a monotone differencing method, which eliminates leading edge expansion shocks, is used to solve the potential equation. When the effect of static pitching moment upon the angle-of-attack is included, the MBB A-3 airfoil can have multiple flutter speeds at a given Mach number.
Generalized three-dimensional experimental lightning code (G3DXL) user's manual
NASA Technical Reports Server (NTRS)
Kunz, Karl S.
1986-01-01
Information concerning the programming, maintenance and operation of the G3DXL computer program is presented and the theoretical basis for the code is described. The program computes time domain scattering fields and surface currents and charges induced by a driving function on and within a complex scattering object which may be perfectly conducting or a lossy dielectric. This is accomplished by modeling the object with cells within a three-dimensional, rectangular problem space, enforcing the appropriate boundary conditions and differencing Maxwell's equations in time. In the present version of the program, the driving function can be either the field radiated by a lightning strike or a direct lightning strike. The F-106 B aircraft is used as an example scattering object.
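The time-differencing of Maxwell's equations on a cell grid can be illustrated with a schematic one-dimensional free-space update; the actual G3DXL code is three-dimensional and handles lossy dielectrics, so the grid size, time step, and Gaussian "driving function" below are illustrative only.

    import numpy as np

    # 1-D free-space leapfrog update: Ez and Hy staggered in space and time
    nx, nsteps = 200, 400
    c, dx = 3.0e8, 1.0e-2
    dt = dx / (2 * c)                    # within the Courant stability limit
    imp0 = 376.7                         # impedance of free space (ohms)
    ez = np.zeros(nx)
    hy = np.zeros(nx - 1)

    for n in range(nsteps):
        hy += (ez[1:] - ez[:-1]) * dt * c / (dx * imp0)        # update H from the curl of E
        ez[1:-1] += (hy[1:] - hy[:-1]) * dt * c * imp0 / dx    # update E from the curl of H
        ez[nx // 2] += np.exp(-((n - 60) / 15.0) ** 2)         # soft Gaussian source (the driving function)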
Superconducting tensor gravity gradiometer for satellite geodesy and inertial navigation
NASA Technical Reports Server (NTRS)
Paik, H. J.
1981-01-01
A sensitive gravity gradiometer can provide much needed gravity data of the earth and improve the accuracy of inertial navigation. Superconductivity and other properties of materials at low temperatures can be used to obtain a sensitive, low-drift gravity gradiometer; by differencing the outputs of accelerometer pairs using superconducting circuits, it is possible to construct a tensor gravity gradiometer which measures all the in-line and cross components of the tensor simultaneously. Additional superconducting circuits can be provided to determine the linear and angular acceleration vectors. A tensor gravity gradiometer with these features is being developed for satellite geodesy. The device constitutes a complete package of inertial navigation instruments with angular and linear acceleration readouts as well as gravity signals.
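The differencing principle, stated generically (baseline b and accelerometer output a_z are illustrative symbols), is

\[
\Gamma_{zz} \approx \frac{a_z\!\left(\mathbf{r}+\tfrac{b}{2}\hat{\mathbf{z}}\right) - a_z\!\left(\mathbf{r}-\tfrac{b}{2}\hat{\mathbf{z}}\right)}{b},
\]

so common-mode linear accelerations cancel in the difference while the gravity gradient survives, which is the quantity the superconducting circuits are designed to extract; summing rather than differencing the pair outputs recovers the linear acceleration readout mentioned above.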
Numerical Methods for Nonlinear Fokker-Planck Collision Operator in TEMPEST
NASA Astrophysics Data System (ADS)
Kerbel, G.; Xiong, Z.
2006-10-01
Early implementations of Fokker-Planck collision operator and moment computations in TEMPEST used low order polynomial interpolation schemes to reuse conservative operators developed for speed/pitch-angle (v, θ) coordinates. When this approach proved to be too inaccurate we developed an alternative higher order interpolation scheme for the Rosenbluth potentials and a high order finite volume method in TEMPEST (,) coordinates. The collision operator is thus generated by using the expansion technique in (v, θ) coordinates for the diffusion coefficients only, and then the fluxes for the conservative differencing are computed directly in the TEMPEST (,) coordinates. Combined with a cut-cell treatment at the turning-point boundary, this new approach is shown to have much better accuracy and conservation properties.
NASA Technical Reports Server (NTRS)
Desideri, J. A.; Steger, J. L.; Tannehill, J. C.
1978-01-01
The iterative convergence properties of an approximate-factorization implicit finite-difference algorithm are analyzed both theoretically and numerically. Modifications to the base algorithm were made to remove the inconsistency in the original implementation of artificial dissipation. In this way, the steady-state solution became independent of the time-step, and much larger time-steps can be used stably. To accelerate the iterative convergence, large time-steps and a cyclic sequence of time-steps were used. For a model transonic flow problem governed by the Euler equations, convergence was achieved with 10 times fewer time-steps using the modified differencing scheme. A particular form of instability due to variable coefficients is also analyzed.
Extension of transonic flow computational concepts in the analysis of cavitated bearings
NASA Technical Reports Server (NTRS)
Vijayaraghavan, D.; Keith, T. G., Jr.; Brewe, D. E.
1990-01-01
An analogy between the mathematical modeling of transonic potential flow and the flow in a cavitating bearing is described. Based on the similarities, characteristics of the cavitated region and jump conditions across the film reformation and rupture fronts are developed using the method of weak solutions. The mathematical analogy is extended by utilizing a few computational concepts of transonic flow to numerically model the cavitating bearing. Methods of shock fitting and shock capturing are discussed. Various procedures used in transonic flow computations are adapted to bearing cavitation applications, for example, type differencing, grid transformation, an approximate factorization technique, and Newton's iteration method. These concepts have proved to be successful and have vastly improved the efficiency of numerical modeling of cavitated bearings.
Poland, Michael P.
2014-01-01
Differencing digital elevation models (DEMs) derived from TerraSAR add-on for Digital Elevation Measurements (TanDEM-X) synthetic aperture radar imagery provides a measurement of elevation change over time. On the East Rift Zone (ERZ) of Kīlauea Volcano, Hawai‘i, the effusion of lava causes changes in topography. When these elevation changes are summed over the area of an active lava flow, it is possible to quantify the volume of lava emplaced at the surface during the time spanned by the TanDEM-X data—a parameter that can be difficult to measure across the entirety of an ~100 km2 lava flow field using ground-based techniques or optical remote sensing data. Based on the differences between multiple TanDEM-X-derived DEMs collected days to weeks apart, the mean dense-rock equivalent time-averaged discharge rate of lava at Kīlauea between mid-2011 and mid-2013 was approximately 2 m3/s, which is about half the long-term average rate over the course of Kīlauea's 1983–present ERZ eruption. This result implies that there was an increase in the proportion of lava stored versus erupted, a decrease in the rate of magma supply to the volcano, or some combination of both during this time period. In addition to constraining the time-averaged discharge rate of lava and the rates of magma supply and storage, topographic change maps derived from space-based TanDEM-X data provide insights into the four-dimensional evolution of Kīlauea's ERZ lava flow field. TanDEM-X data are a valuable complement to other space-, air-, and ground-based observations of eruptive activity at Kīlauea and offer great promise at locations around the world for aiding with monitoring not just volcanic eruptions but any hazardous activity that results in surface change, including landslides, floods, earthquakes, and other natural and anthropogenic processes.
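In essence, the time-averaged discharge rate (TADR) follows from summing the positive elevation change over the active flow field and dividing by the DEM acquisition interval; the dense-rock-equivalent correction factor c_DRE below is a generic placeholder, not a value from the study:

\[
\mathrm{TADR} \;=\; \frac{c_{\mathrm{DRE}}\,\sum_{\text{flow field}} \Delta h_i \, A_{\text{cell}}}{\Delta t},
\]

where the Delta h_i are positive elevation changes between successive TanDEM-X DEMs, A_cell is the pixel area, and Delta t is the time between acquisitions.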
Surface forcing of non-stand-replacing fires in Siberian larch forests
NASA Astrophysics Data System (ADS)
Chen, Dong; Loboda, Tatiana V.
2018-04-01
Wildfires are the dominant disturbance agent in the Siberian larch forests. Extensive low- to moderate-intensity non-stand-replacing fires are a notable property of the fire regime in these forests. Recent large-scale studies of these fires have focused mostly on their impacts on the carbon budget; however, their potential impacts on the energy budget through post-fire albedo changes have not been considered. This study quantifies the post-fire surface forcing for Siberian larch forests that experienced non-stand-replacing fires between 2001 and 2012 using the full record of the MODIS MCD43A3 albedo product and a burned area product developed specifically for the Russian forests. Despite a large variability, the mean effect of non-stand-replacing fires imposed through albedo is a negative forcing which lasts for at least 14 years. However, the magnitude of the forcing is much smaller than that imposed by stand-replacing fires, highlighting the importance of differentiating between the two fire types in studies involving fire impacts in the region. The results of this study also show that the MODIS-based summer differenced normalized burn ratio (dNBR) provides a reliable metric for differentiating non-stand-replacing from stand-replacing fires, with an overall accuracy of 88%, which is of considerable importance for future work on modeling post-fire energy and carbon budgets in the region.
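For reference, the differenced normalized burn ratio is computed from pre- and post-fire near-infrared (NIR) and shortwave-infrared (SWIR) reflectance as

\[
\mathrm{NBR}=\frac{\rho_{\mathrm{NIR}}-\rho_{\mathrm{SWIR}}}{\rho_{\mathrm{NIR}}+\rho_{\mathrm{SWIR}}},
\qquad
\mathrm{dNBR}=\mathrm{NBR}_{\mathrm{pre}}-\mathrm{NBR}_{\mathrm{post}},
\]

so larger dNBR values indicate greater fire-induced loss of vegetation and surface moisture, which is what allows the two fire severity classes to be separated by a threshold.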
Battaglia, Maurizio; Segall, P.; Murray, J.; Cervelli, Peter; Langbein, J.
2003-01-01
We surveyed 44 existing leveling monuments in Long Valley caldera in July 1999, using dual-frequency global positioning system (GPS) receivers. We have been able to tie GPS and leveling to a common reference frame in the Long Valley area and computed the vertical deformation by differencing GPS-based and leveled orthometric heights. The resurgent dome uplifted 74 ± 7 cm from 1975 to 1999. To define the inflation source, we invert two-color EDM and uplift data from the 1985-1999 unrest period using spherical or ellipsoidal sources. We find that the ellipsoidal source satisfies both the vertical and horizontal deformation data, whereas the spherical point source cannot. According to our analysis of the 1985-1999 data, the main source of deformation is a prolate ellipsoid located beneath the resurgent dome at a depth of 5.9 km (95% bounds of 4.9-7.5 km). This body is vertically elongated, has an aspect ratio of 0.475 (95% bounds are 0.25-0.65) and a volume change of 0.086 km3 (95% bounds are 0.06-0.13 km3). Failure to account for the ellipsoidal nature of the source biases the estimated source depth by 2.1 km (35%), and the source volume by 0.038 km3 (44%). © 2003 Elsevier B.V. All rights reserved.
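The height differencing implied above can be written, with h the GPS ellipsoidal height, N the geoid undulation, and H the leveled orthometric height (symbols illustrative, not the paper's notation):

\[
\Delta H \;=\; \bigl(h_{1999} - N\bigr) \;-\; H_{1975},
\]

i.e., GPS-derived orthometric heights at the monuments are differenced against the earlier leveling in the common reference frame to obtain the 74 ± 7 cm of resurgent-dome uplift.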
Alexander, David M; Trengove, Chris; van Leeuwen, Cees
2015-11-01
An assumption nearly all researchers in cognitive neuroscience tacitly adhere to is that of space-time separability. Historically, it forms the basis of Donders' difference method, and to date, it underwrites all difference imaging and trial-averaging of cortical activity, including the customary techniques for analyzing fMRI and EEG/MEG data. We describe the assumption and how it licenses common methods in cognitive neuroscience; in particular, we show how it plays out in signal differencing and averaging, and how it misleads us into seeing the brain as a set of static activity sources. In fact, rather than being static, the domains of cortical activity change from moment to moment: Recent research has suggested the importance of traveling waves of activation in the cortex. Traveling waves have been described at a range of different spatial scales in the cortex; they explain a large proportion of the variance in phase measurements of EEG, MEG and ECoG, and are important for understanding cortical function. Critically, traveling waves are not space-time separable. Their prominence suggests that the correct frame of reference for analyzing cortical activity is the dynamical trajectory of the system, rather than the time and space coordinates of measurements. We illustrate what the failure of space-time separability implies for cortical activation, and what consequences this should have for cognitive neuroscience.
NASA Astrophysics Data System (ADS)
Moyer, Alexis N.; Nienow, Peter W.; Gourmelen, Noel; Sole, Andrew J.; Slater, Donald A.
2017-12-01
Oceanic forcing of the Greenland Ice Sheet is believed to promote widespread thinning at tidewater glaciers, with submarine melting proposed as a potential trigger of increased glacier calving, retreat, and subsequent acceleration. The precise mechanism(s) driving glacier instability, however, remain poorly understood, and while increasing evidence points to the importance of submarine melting, estimates of melt rates are uncertain. Here we estimate submarine melt rate by examining freeboard changes in the seasonal ice tongue of Kangiata Nunaata Sermia at the head of Kangersuneq Fjord, southwest Greenland. We calculate melt rates for March and May 2013 by differencing along-fjord surface elevation, derived from high-resolution TanDEM-X digital elevation models, in combination with ice velocities derived from offset tracking applied to TerraSAR-X imagery. Estimated steady state melt rates reach up to 1.4 ± 0.5 m d^-1 near the glacier grounding line, with mean values of up to 0.8 ± 0.3 and 0.7 ± 0.3 m d^-1 for the eastern and western parts of the ice tongue, respectively. Melt rates decrease with distance from the ice front and vary across the fjord. This methodology reveals spatio-temporal variations in submarine melt rates at tidewater glaciers which develop floating termini, and can be used to improve our understanding of ice-ocean interactions and submarine melting in glacial fjords.
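In simplified one-dimensional form (a sketch of the approach, not the paper's exact formulation), the tongue thickness H follows from the freeboard f assuming hydrostatic floatation, and the steady-state melt rate from the divergence of the thickness flux:

\[
H=\frac{\rho_w}{\rho_w-\rho_i}\,f,
\qquad
\dot{m}_{\mathrm{steady}} \approx -\,\frac{\partial (H u)}{\partial x},
\]

so along-flow thinning of the floating tongue that is not explained by ice dynamics is attributed to submarine melting, with u taken from the offset-tracking velocities.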
Imaging, object detection, and change detection with a polarized multistatic GPR array
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, N. Reginald; Paglieroni, David W.
A polarized detection system performs imaging, object detection, and change detection factoring in the orientation of an object relative to the orientation of transceivers. The polarized detection system may operate in one of several modes of operation based on whether the imaging, object detection, or change detection is performed separately for each transceiver orientation. In combined change mode, the polarized detection system performs imaging, object detection, and change detection separately for each transceiver orientation, and then combines changes across polarizations. In combined object mode, the polarized detection system performs imaging and object detection separately for each transceiver orientation, and then combines objects across polarizations and performs change detection on the result. In combined image mode, the polarized detection system performs imaging separately for each transceiver orientation, and then combines images across polarizations and performs object detection followed by change detection on the result.
Herman, Peter; Sanganahalli, Basavaraju G.; Coman, Daniel; Blumenfeld, Hal; Rothman, Douglas L.
2011-01-01
A primary objective in neuroscience is to determine how neuronal populations process information within networks. In humans and animal models, functional magnetic resonance imaging (fMRI) is gaining increasing popularity for network mapping. Although neuroimaging with fMRI—conducted with or without tasks—is actively discovering new brain networks, current fMRI data analysis schemes disregard the importance of the total neuronal activity in a region. In task fMRI experiments, the baseline is differenced away to disclose areas of small evoked changes in the blood oxygenation level-dependent (BOLD) signal. In resting-state fMRI experiments, the spotlight is on regions revealed by correlations of tiny fluctuations in the baseline (or spontaneous) BOLD signal. Interpretation of fMRI-based networks is obscured further, because the BOLD signal indirectly reflects neuronal activity, and difference/correlation maps are thresholded. Since the small changes of BOLD signal typically observed in cognitive fMRI experiments represent a minimal fraction of the total energy/activity in a given area, the relevance of fMRI-based networks is uncertain, because the majority of neuronal energy/activity is ignored. Thus, another alternative for quantitative neuroimaging of fMRI-based networks is a perspective in which the activity of a neuronal population is accounted for by the demanded oxidative energy (CMRO2). In this article, we argue that network mapping can be improved by including information about both the baseline neuronal energy/activity and the small differences/fluctuations of the BOLD signal. Thus, total energy/activity information can be obtained through use of calibrated fMRI to quantify differences in CMRO2 (ΔCMRO2) and through resting-state positron emission tomography/magnetic resonance spectroscopy measurements for average CMRO2. PMID:22433047
NASA Astrophysics Data System (ADS)
Loye, Alexandre; Jaboyedoff, Michel; Theule, Joshua Isaac; Liébault, Frédéric
2016-06-01
Debris flows have been recognized to be linked to the amounts of material temporarily stored in torrent channels. Hence, sediment supply and storage changes from low-order channels of the Manival catchment, a small tributary valley with an active torrent system located exclusively in sedimentary rocks of the Chartreuse Massif (French Alps), were surveyed periodically for 16 months using terrestrial laser scanning (TLS) to study the coupling between sediment dynamics and torrent responses in terms of debris flow events, which occurred twice during the monitoring period. Sediment transfer in the main torrent was monitored with cross-section surveys. Sediment budgets were generated seasonally using sequential TLS data differencing and morphological extrapolations. Debris production depends strongly on rockfall occurring during the winter-early spring season, following a power-law distribution for volumes of rockfall events above 0.1 m3, while hillslope sediment reworking dominates debris recharge in spring and autumn, which shows effective hillslope-channel coupling. Both debris flow events that occurred during the monitoring period were linked to recharge from previous debris pulses coming from the hillside and from bedload transfer. Headwater debris sources display an ambiguous behaviour in sediment transfer: low geomorphic activity occurred in the production zone, despite rainstorms inducing debris flows in the torrent; still, a general reactivation of sediment transport in headwater channels was observed in autumn without new debris supply, suggesting that the stored debris was not exhausted. The seasonal cycle of sediment yield seems to depend not only on debris supply and runoff (flow capacity) but also on geomorphic conditions that destabilize remnant debris stocks. This study shows that monitoring the changes within a torrent's in-channel storage and its debris supply can improve knowledge on recharge thresholds leading to debris flows.
Accuracy of snow depth estimation in mountain and prairie environments by an unmanned aerial vehicle
NASA Astrophysics Data System (ADS)
Harder, Phillip; Schirmer, Michael; Pomeroy, John; Helgason, Warren
2016-11-01
Quantifying the spatial distribution of snow is crucial to predict and assess its water resource potential and understand land-atmosphere interactions. High-resolution remote sensing of snow depth has been limited to terrestrial and airborne laser scanning and more recently with application of structure from motion (SfM) techniques to airborne (manned and unmanned) imagery. In this study, photography from a small unmanned aerial vehicle (UAV) was used to generate digital surface models (DSMs) and orthomosaics for snow cover at a cultivated agricultural Canadian prairie and a sparsely vegetated Rocky Mountain alpine ridgetop site using SfM. The accuracy and repeatability of this method to quantify snow depth, changes in depth and its spatial variability was assessed for different terrain types over time. Root mean square errors in snow depth estimation from differencing snow-covered and non-snow-covered DSMs were 8.8 cm for a short prairie grain stubble surface, 13.7 cm for a tall prairie grain stubble surface and 8.5 cm for an alpine mountain surface. This technique provided useful information on maximum snow accumulation and snow-covered area depletion at all sites, while temporal changes in snow depth could also be quantified at the alpine site due to the deeper snowpack and consequent higher signal-to-noise ratio. The application of SfM to UAV photographs returns meaningful information in areas with mean snow depth > 30 cm, but the direct observation of snow depth depletion of shallow snowpacks with this method is not feasible. Accuracy varied with surface characteristics, sunlight and wind speed during the flight, with the most consistent performance found for wind speeds < 10 m s-1, clear skies, high sun angles and surfaces with negligible vegetation cover.
Heat receivers for solar dynamic space power systems
NASA Astrophysics Data System (ADS)
Perez-Davis, Marla Esther
A review of state-of-the-art technology is presented and discussed for phase change materials. Some of the advanced solar dynamic designs developed as part of the Advanced Heat Receiver Conceptual Design Study performed for LeRC are discussed. The heat receivers are analyzed and several recommendations are proposed, including two new concepts. The first concept evaluated the effect of tube geometries inside the heat receiver. It was found that a triangular configuration would provide better heat transfer to the working fluid, although not necessarily with a reduction in receiver size. A sensible heat receiver considered in this study uses vapor grown graphite fiber-carbon (VGCF/C) composite as the thermal storage medium and was designed for a 7 kW Brayton engine. The proposed heat receiver stores the required energy to power the system during eclipse in the VGCF/C composite. The heat receiver analysis was conducted through the Systems Improved Numerical Differencing Analyzer and Fluid Integrator (SINDA) software package. The proposed heat receiver compares well with other latent and advanced sensible heat receivers while avoiding the problems associated with latent heat storage salts and liquid metal heat pipes. The weight and size of the system can be optimized by changes in geometry and technology advances for this new material. In addition to the new concepts, the effect of atomic oxygen on several materials is reviewed. A test was conducted for atomic oxygen attack on boron nitride, which experienced a negligible mass loss when exposed to an atomic oxygen fluence of 5 x 10^21 atoms/cm^2. This material could be used to substitute for the graphite aperture plate of the heat receiver.
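As a rough sizing check on a sensible-heat store of this kind, the storage mass scales with the eclipse energy divided by the usable temperature swing; the eclipse duration, specific heat, and temperature swing below are illustrative assumptions, not values from the study:

\[
m \;=\; \frac{P\,t_{\mathrm{eclipse}}}{c_p\,\Delta T}
\;\approx\; \frac{7\,\mathrm{kW}\times 2100\,\mathrm{s}}{0.9\,\mathrm{kJ\,kg^{-1}K^{-1}}\times 300\,\mathrm{K}} \;\approx\; 54\,\mathrm{kg},
\]

where P is taken here as the thermal power the receiver must deliver through eclipse; a thermal-network tool such as SINDA is then used, as in the study, to resolve how that stored energy actually moves through the composite and into the working fluid.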
NASA Astrophysics Data System (ADS)
Wigmore, O.; Mark, B. G.; Lagos, P.; Somers, L. D.; McKenzie, J. M.; Huh, K. I.; Hopkinson, C.; Baraer, M.; Crumley, R. L.
2016-12-01
Terrestrial photogrammetry has a long and successful history of application to glaciological research. However, traditional methods rely upon large and expensive metric cameras and detailed triangulation of in-scene points for derivation of terrain models and analysis of glacier change. Recent developments in computer vision, including the advent of Structure from Motion (SfM) algorithms and associated software packages, have made it possible to use consumer-grade digital cameras to produce highly precise digital elevation models. This has facilitated the rapid expansion of unmanned aerial vehicles (UAVs) for mapping purposes. However, without onboard RTK GNSS positions of the UAV, within-scene survey-grade ground targets are required for accurate georectification. Gaining access to mountain glaciers for the installation and survey of ground targets is often labour intensive, hazardous, and sometimes impossible. Compounding this are limitations of UAV flight within these confined and high elevation locations and reduced flight times that limit the total survey area. Luckily, these environments also present a highly suitable setting for the application of terrestrial SfM photogrammetry, because high moraines, cliffs, and ridgelines provide excellent 'semi-nadir' viewing of the glacier surface, while steep mountain walls present a close-to-nadir view from an oblique angle. In this study we present a workflow and results from an integrated UAV and terrestrial SfM photogrammetry campaign at Huaytapallana glacier, Huancayo, Peru. We combined terrestrial images taken from GNSS-surveyed positions with oblique UAV imagery of the mountain face. From these data a centimetre-resolution orthomosaic and a decimetre-resolution DEM of the snow- and ice-covered mountain face and proglacial lake were generated, covering over 6 km2. Accuracy of the surface was determined from comparison over ice-free areas to 1 m aerial LiDAR data collected in 2009. Changes in glacier volume were then determined through DEM differencing with the LiDAR data.
NASA Astrophysics Data System (ADS)
Chan, Y. C.; Hsieh, Y. C.
2017-12-01
Recent advances in airborne laser scanning (ALS) technology have provided a great opportunity for characterizing surface erosion through developing improved methods in multi-period DEM differencing and geomorphometry. This study uses three periods of ALS digital elevation model (DEM) data to analyze the short-term erosional features of the Tsaoling landslide triggered by the 1999 Chi-Chi earthquake in Taiwan. Two methods for calculating the bedrock incision rate, the equal-interval cross section selection method and the continuous swath profiles selection method, were used in the study after nearly ten years of gully incision following the earthquake-triggered dip-slope landslide. Multi-temporal gully incision rates were obtained using the continuous swath profiles selection method, which is considered a practical and convenient approach in terrain change studies. After error estimation and comparison of the multi-period ALS DEMs, the terrain change in different periods can be directly calculated, reducing time-consuming fieldwork such as installation of erosion pins and measurement of topographic cross sections on site. In this study, the gully bedrock incision rates ranged between 0.23 and 3.98 m/year, remarkably higher than the typical results from the previous studies. By comparing the DEM data, aerial photos, and precipitation records of this area, the effects of erosion could be observed from the retreat of the Chunqiu Cliff outline during August 2011 to September 2012. It was inferred that the change in the topographic elevation during 2011-2012 was mainly due to the torrential rain brought by Typhoon Soula, which occurred on 30 July 2012. The local gully incision rate in the lower part of the landslide surface was remarkably faster than that of the other regions, suggesting that the fast incision of the toe area possibly contributes to the occurrence of repeated landslides in the Tsaoling area.
Leonard, Christina M.; Legleiter, Carl; Overstreet, Brandon T.
2017-01-01
This study examined the effects of natural and anthropogenic changes in confining margin width by applying remote sensing techniques – fusing LiDAR topography with image-derived bathymetry – over a large spatial extent: 58 km of the Snake River, Wyoming, USA. Fused digital elevation models from 2007 and 2012 were differenced to quantify changes in the volume of stored sediment, develop morphological sediment budgets, and infer spatial gradients in bed material transport. Our study spanned two similar reaches that were subject to different controls on confining margin width: natural terraces versus artificial levees. Channel planform in reaches with similar slope and confining margin width differed depending on whether the margins were natural or anthropogenic. The effects of tributaries also differed between the two reaches. Generally, the natural reach featured greater confining margin widths and was depositional, whereas artificial lateral constriction in the leveed reach produced a sediment budget that was closer to balanced. Although our remote sensing methods provided topographic data over a large area, net volumetric changes were not statistically significant due to the uncertainty associated with bed elevation estimates. We therefore focused on along-channel spatial differences in bed material transport rather than absolute volumes of sediment. To complement indirect estimates of sediment transport derived by morphological sediment budgeting, we collected field data on bed mobility through a tracer study. Surface and subsurface grain size measurements were combined with bed mobility observations to calculate armoring and dimensionless sediment transport ratios, which indicated that sediment supply exceeded transport capacity in the natural reach and vice versa in the leveed reach. We hypothesize that constriction by levees induced an initial phase of incision and bed armoring. Because levees prevented bank erosion, the channel excavated sediment by migrating rapidly across the restricted braidplain and eroding bars and islands.
Observation of wave celerity evolution in the nearshore using digital video imagery
NASA Astrophysics Data System (ADS)
Yoo, J.; Fritz, H. M.; Haas, K. A.; Work, P. A.; Barnes, C. F.; Cho, Y.
2008-12-01
Celerity of incident waves in the nearshore is observed from oblique video imagery collected at Myrtle Beach, S.C. The video camera covers a field of view with length scales of O(100) m. Celerity of waves propagating in shallow water, including the surf zone, is estimated by applying advanced image processing and analysis methods to individual video images sampled at 3 Hz. Original image sequences are processed through video image frame differencing and directional low-pass image filtering to reduce the noise arising from foam in the surf zone. The breaking wave celerity is computed along a cross-shore transect from the wave crest tracks extracted by a Radon transform-based line detection method. The observed celerity from the nearshore video imagery is larger than the linear wave celerity computed from the measured water depths over the entire surf zone. Compared to the nonlinear shallow water wave equation (NSWE)-based celerity computed using the measured depths and wave heights, the video-based celerity generally shows good agreement over the surf zone except in the regions near the incipient wave breaking locations. Near the breaker points, the observed wave celerity is even larger than the NSWE-based celerity due to the transition of wave crest shapes. The observed celerity from the video imagery can be used to monitor the nearshore geometry through depth inversion based on nonlinear wave celerity theories. For this purpose, the excess celerity near the breaker points needs to be corrected relative to the nonlinear wave celerity theory applied.
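The study extracts crest tracks with a Radon-transform line detector; as a simpler illustration of the underlying idea, the sketch below estimates celerity from the cross-correlation lag between frame-differenced pixel time series at two cross-shore locations (the pixel spacing, wave signal, and noise levels are all hypothetical):

    import numpy as np

    rng = np.random.default_rng(0)
    fs = 3.0               # Hz, video sampling rate quoted in the abstract
    dx = 10.0              # m, assumed separation between the two cross-shore pixels
    true_celerity = 5.0    # m/s, hypothetical

    t = np.arange(0, 120, 1 / fs)
    # Hypothetical broadband "wave" intensity signal; the shoreward pixel sees the
    # same signal arriving dx / true_celerity seconds later.
    base = np.convolve(rng.normal(size=t.size), np.ones(9) / 9, mode="same")
    lag_true = int(round(dx / true_celerity * fs))
    offshore = base + 0.05 * rng.normal(size=t.size)
    onshore = np.roll(base, lag_true) + 0.05 * rng.normal(size=t.size)

    # Temporal frame differencing suppresses slowly varying brightness (e.g. residual foam).
    d_off, d_on = np.diff(offshore), np.diff(onshore)

    # The lag of the cross-correlation peak is the crest travel time between the pixels.
    corr = np.correlate(d_on - d_on.mean(), d_off - d_off.mean(), mode="full")
    lags = np.arange(-d_off.size + 1, d_off.size)
    travel_time = lags[np.argmax(corr)] / fs
    print(f"estimated celerity ~ {dx / travel_time:.1f} m/s")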
LaRue, Michelle A.; Stapleton, Seth P.; Porter, Claire; Atkinson, Stephen N.; Atwood, Todd C.; Dyck, Markus; Lecomte, Nicolas
2015-01-01
High-resolution satellite imagery is a promising tool for providing coarse information about polar species abundance and distribution, but current applications are limited. With polar bears (Ursus maritimus), the technique has only proven effective on landscapes with little topographic relief that are devoid of snow and ice, and time-consuming manual review of imagery is required to identify bears. Here, we evaluated mechanisms to further develop methods for satellite imagery by examining data from Rowley Island, Canada. We attempted to automate and expedite detection via a supervised spectral classification and image differencing to expedite image review. We also assessed what proportion of a region should be sampled to obtain reliable estimates of density and abundance. Although the spectral signature of polar bears differed from nontarget objects, these differences were insufficient to yield useful results via a supervised classification process. Conversely, automated image differencing—or subtracting one image from another—correctly identified nearly 90% of polar bear locations. This technique, however, also yielded false positives, suggesting that manual review will still be required to confirm polar bear locations. On Rowley Island, bear distribution approximated a Poisson distribution across a range of plot sizes, and resampling suggests that sampling >50% of the site facilitates reliable estimation of density (CV <15%). Satellite imagery may be an effective monitoring tool in certain areas, but large-scale applications remain limited because of the challenges in automation and the limited environments in which the method can be effectively applied. Improvements in resolution may expand opportunities for its future uses.
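The image-differencing step that flagged candidate bear locations can be sketched as follows (toy images, a made-up change threshold, and SciPy connected-component labelling; not the authors' processing chain):

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(7)
    # Hypothetical co-registered panchromatic chips from two acquisition dates.
    img_before = rng.normal(0.30, 0.02, size=(300, 300))
    img_after = img_before.copy()
    img_after[120:123, 200:203] += 0.25    # a bright, bear-sized object only in the second image

    # Image differencing: static terrain cancels, objects that appear or move stand out.
    diff = img_after - img_before
    candidates = np.abs(diff) > 0.15       # assumed change threshold

    # Group contiguous changed pixels into candidate detections for manual review.
    labels, n_candidates = ndimage.label(candidates)
    centroids = ndimage.center_of_mass(candidates, labels, range(1, n_candidates + 1))
    print(n_candidates, "candidate location(s):", centroids)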
NASA Technical Reports Server (NTRS)
Pearson, T. J.; Mason, B. S.; Readhead, A. C. S.; Shepherd, M. C.; Sievers, J. L.; Udomprasert, P. S.; Cartwright, J. K.; Farmer, A. J.; Padin, S.; Myers, S. T.;
2002-01-01
Using the Cosmic Background Imager, a 13-element interferometer array operating in the 26-36 GHz frequency band, we have observed 40 deg^2 of sky in three pairs of fields, each approximately 145 x 165 arcmin, using overlapping pointings (mosaicing). We present images and power spectra of the cosmic microwave background radiation in these mosaic fields. We remove ground radiation and other low-level contaminating signals by differencing matched observations of the fields in each pair. The primary foreground contamination is due to point sources (radio galaxies and quasars). We have subtracted the strongest sources from the data using higher-resolution measurements, and we have projected out the response to other sources of known position in the power-spectrum analysis. The images show features on scales of approximately 6-15 arcmin, corresponding to masses of approximately 5-80 x 10^14 solar masses at the surface of last scattering, which are likely to be the seeds of clusters of galaxies. The power spectrum estimates have a resolution delta l ≈ 200 and are consistent with earlier results in the multipole range l ≲ 1000. The power spectrum is detected with high signal-to-noise ratio in the range 300 ≲ l ≲ 1700. For 1700 ≲ l ≲ 3000 the observations are consistent with the results from more sensitive CBI deep-field observations. The results agree with the extrapolation of cosmological models fitted to observations at lower l, and show the predicted drop at high l (the "damping tail").
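The field-differencing idea, removing ground pickup common to a matched pair of pointings, can be shown with a toy model (the visibility amplitudes, ground term, and noise levels below are invented; this is not the CBI pipeline):

    import numpy as np

    rng = np.random.default_rng(3)
    n_vis = 10_000

    # Each field of a matched pair contains an independent sky signal plus the SAME
    # low-level ground contamination (observed over the same azimuth track) and
    # independent receiver noise.
    ground = 0.5 * np.sin(np.linspace(0, 20 * np.pi, n_vis))
    sky_a, sky_b = rng.normal(0, 1, n_vis), rng.normal(0, 1, n_vis)
    field_a = sky_a + ground + rng.normal(0, 0.2, n_vis)
    field_b = sky_b + ground + rng.normal(0, 0.2, n_vis)

    # Differencing matched observations cancels the common ground term exactly; the
    # differenced data retain twice the sky variance, which the power-spectrum
    # analysis must account for.
    differenced = field_a - field_b
    print("variance with ground:", round(float(field_a.var()), 2),
          "after differencing:", round(float(differenced.var()), 2))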
Change Detection: Training and Transfer
Gaspar, John G.; Neider, Mark B.; Simons, Daniel J.; McCarley, Jason S.; Kramer, Arthur F.
2013-01-01
Observers often fail to notice even dramatic changes to their environment, a phenomenon known as change blindness. If training could enhance change detection performance in general, then it might help to remedy some real-world consequences of change blindness (e.g. failing to detect hazards while driving). We examined whether adaptive training on a simple change detection task could improve the ability to detect changes in untrained tasks for young and older adults. Consistent with an effective training procedure, both young and older adults were better able to detect changes to trained objects following training. However, neither group showed differential improvement on untrained change detection tasks when compared to active control groups. Change detection training led to improvements on the trained task but did not generalize to other change detection tasks. PMID:23840775
Object memory and change detection: dissociation as a function of visual and conceptual similarity.
Yeh, Yei-Yu; Yang, Cheng-Ta
2008-01-01
People often fail to detect a change between two visual scenes, a phenomenon referred to as change blindness. This study investigates how a post-change object's similarity to the pre-change object influences memory of the pre-change object and affects change detection. The results of Experiment 1 showed that similarity lowered detection sensitivity but did not affect the speed of identifying the pre-change object, suggesting that similarity between the pre- and post-change objects does not degrade the pre-change representation. Identification speed for the pre-change object was faster than naming the new object regardless of detection accuracy. Similarity also decreased detection sensitivity in Experiment 2 but improved the recognition of the pre-change object under both correct detection and detection failure. The similarity effect on recognition was greatly reduced when 20% of each pre-change stimulus was masked by random dots in Experiment 3. Together the results suggest that the level of pre-change representation under detection failure is equivalent to the level under correct detection and that the pre-change representation is almost complete. Similarity lowers detection sensitivity but improves explicit access in recognition. Dissociation arises between recognition and change detection as the two judgments rely on the match-to-mismatch signal and mismatch-to-match signal, respectively.
Data-Intensive Discovery Methods for Seismic Monitoring
NASA Astrophysics Data System (ADS)
Richards, P. G.; Schaff, D. P.; Young, C. J.; Slinkard, M.; Heck, S.; Ammon, C. J.; Cleveland, M.
2011-12-01
For most regions of our planet, earthquakes and explosions are still located one at a time using seismic phase picks, a procedure that has not fundamentally changed for more than a century. But methods that recognize and use seismogram archives as a major resource, enabling comparisons of waveforms recorded from neighboring events and relocating numerous events relative to each other, have been successfully demonstrated, especially for California, where they have enabled new insights into earthquake physics and Earth structure, and have raised seismic monitoring to new levels. We are beginning a series of projects to evaluate such data-intensive methods on ever-larger scales, using cross correlation (CC) to analyze seismicity in three different ways: (1) to find repeating earthquakes (whose waveforms are very similar, so the CC value measured over long windows must be high); (2) to measure time differences and amplitude differences to enable precise relocations and relative amplitude studies of seismic events with respect to their neighboring events (the CC can then be much lower, yet still give a better estimate of arrival time differences and relative amplitudes than differencing phase picks and magnitudes); and, perhaps most importantly, (3) as a detector, to find new events in current data streams that are similar to events already in the archive, or to add to the number of detections of an already known event. Experience documented by Schaff and Waldhauser (2005) for California and Schaff (2009) for China indicates that the great majority of events in seismically active regions generate waveforms sufficiently similar to those of neighboring events to allow CC methods to be used for relative locations. Schaff (2008, 2010) has demonstrated the capability of CC methods to achieve detections, with minimal false alarms, down to more than a magnitude unit below conventional STA/LTA detectors, though CC methods are far more computationally intensive. Elsewhere at this meeting, Cleveland, Ammon, and Van DeMark report in more detail on greatly improved event locations along oceanic fracture zones using CC methods applied to 40-80 s Rayleigh waves; and Slinkard, Carr, Heck, and Young at Sandia have reported greatly improved computational approaches that reduce CPU demands from hours on a fast workstation to minutes on a GPU when a continuous data stream lasting several days is searched (using CC methods) for seismic signals similar to those of hundreds of previously documented events. From diverse results such as these, it seems appropriate to consider the future possibility of radical improvement in monitoring virtually all seismically active areas, using archives of prior events as the major resource, though we recognize that such an approach does not directly help to characterize seismic events in inactive regions, or events in active regions that are dissimilar to previously recorded events.
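A minimal sketch of the cross-correlation detector idea in (3): slide a normalized template over a continuous stream and flag windows whose correlation coefficient exceeds a threshold (the sampling rate, template, threshold, and synthetic data are all assumptions):

    import numpy as np

    rng = np.random.default_rng(11)
    fs = 20.0                                   # Hz, assumed sampling rate
    template = np.sin(2 * np.pi * 1.5 * np.arange(0, 4, 1 / fs)) * np.hanning(int(4 * fs))

    # Hypothetical continuous stream: noise plus a buried copy of the template (a repeat event).
    stream = rng.normal(0.0, 1.0, int(600 * fs))
    onset = 5_000
    stream[onset:onset + template.size] += 3.0 * template

    # Sliding-window normalized cross-correlation (Pearson correlation per window).
    n = template.size
    t_norm = (template - template.mean()) / (template.std() * n)
    cc = np.empty(stream.size - n + 1)
    for i in range(cc.size):
        window = stream[i:i + n]
        cc[i] = np.dot(t_norm, (window - window.mean()) / (window.std() + 1e-12))

    detections = np.flatnonzero(cc > 0.6)       # assumed CC detection threshold
    print("peak CC", round(float(cc.max()), 2),
          "at sample", int(np.argmax(cc)), "- true onset", onset)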
NASA Technical Reports Server (NTRS)
Imlay, S. T.
1986-01-01
An implicit finite volume method is investigated for the solution of the compressible Navier-Stokes equations for flows within thrust reversing and thrust vectoring nozzles. Thrust reversing nozzles typically have sharp corners, and the rapid expansion and large turning angles near these corners are shown to cause unacceptable time step restrictions when conventional approximate factorization methods are used. In this investigation these limitations are overcome by using second-order upwind differencing and line Gauss-Seidel relaxation. This method is implemented with a zonal mesh so that flows through complex nozzle geometries may be efficiently calculated. Results are presented for five nozzle configurations, including two with time-varying geometries. Three cases are compared with available experimental data, and the results are generally acceptable.
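As a hedged, much-simplified illustration of what upwind differencing means (not the paper's implicit Navier-Stokes scheme), the sketch below advances the 1-D linear advection equation with a first-order upwind stencil:

    import numpy as np

    a = 1.0                       # advection speed; positive, so the upwind neighbor is to the left
    nx = 200
    dx = 1.0 / nx
    dt = 0.4 * dx / a             # CFL number 0.4 keeps the explicit scheme stable

    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    u = np.exp(-200.0 * (x - 0.3) ** 2)          # initial pulse centred at x = 0.3

    for _ in range(250):
        # First-order upwind: difference toward the side the information comes from.
        u = u - a * dt / dx * (u - np.roll(u, 1))    # periodic boundary via roll

    print("pulse peak is now near x =", round(float(x[np.argmax(u)]), 2))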
NASA Technical Reports Server (NTRS)
Rudy, David H.; Kumar, Ajay; Thomas, James L.; Gnoffo, Peter A.; Chakravarthy, Sukumar R.
1988-01-01
A comparative study was made of four different computer codes for solving the compressible Navier-Stokes equations. Three different test problems were used, each of which has features typical of high-speed internal flow problems of practical importance in the design and analysis of propulsion systems for advanced hypersonic vehicles. These problems are the supersonic flow between two walls, one of which contains a 10 deg compression ramp; the flow through a hypersonic inlet; and the flow in a 3-D corner formed by the intersection of two symmetric wedges. Three of the computer codes use similar, recently developed implicit upwind differencing technology, while the fourth uses a well-established explicit method. The computed results were compared with experimental data where available.
Statistical analysis of low level atmospheric turbulence
NASA Technical Reports Server (NTRS)
Tieleman, H. W.; Chen, W. W. L.
1974-01-01
The statistical properties of low-level wind-turbulence data were obtained with the model 1080 total vector anemometer and the model 1296 dual split-film anemometer, both manufactured by Thermo Systems Incorporated. The data obtained from these fast-response probes were compared with the results obtained from a pair of Gill propeller anemometers. The digitized time series representing the three velocity components and the temperature were each divided into a number of blocks, the length of which depended on the lowest frequency of interest and on the storage capacity of the available computer. A moving-average and differencing high-pass filter was used to remove the trend and the low-frequency components in the time series. The calculated results for each of the anemometers used are presented in graphical or tabulated form.
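A simplified sketch of the trend-removal step, subtracting a centred moving average so that only fluctuations shorter than the chosen window remain (the sampling rate, window length, and synthetic velocity record are assumptions, not the experimental data):

    import numpy as np

    def moving_average_highpass(series, window):
        # Subtract a centred moving average: a simple high-pass that removes the trend
        # and low-frequency components longer than 'window' samples.
        trend = np.convolve(series, np.ones(window) / window, mode="same")
        return series - trend

    rng = np.random.default_rng(5)
    fs = 10.0                                 # Hz, assumed anemometer sampling rate
    t = np.arange(0, 600, 1 / fs)
    # Hypothetical longitudinal velocity: mean wind, a slow trend, and turbulence-like noise.
    u = 5.0 + 0.002 * t + 0.5 * rng.normal(size=t.size)

    u_fluct = moving_average_highpass(u, window=int(60 * fs))    # remove scales longer than ~60 s
    interior = slice(int(30 * fs), -int(30 * fs))                # discard filter edge effects
    print("mean before:", round(float(u[interior].mean()), 3),
          "after high-pass:", round(float(u_fluct[interior].mean()), 3))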
Extreme Rock Distributions on Mars and Implications for Landing Safety
NASA Technical Reports Server (NTRS)
Golombek, M. P.
2001-01-01
Prior to the landing of Mars Pathfinder, the size-frequency distribution of rocks from the two Viking landing sites and Earth analog surfaces was used to derive a size-frequency model for nominal rock distributions on Mars. This work, coupled with extensive testing of the Pathfinder airbag landing system, allowed an estimate of the total rock abundances, derived from thermal differencing techniques, that could be considered safe for landing. Predictions based on this model proved largely correct in predicting the size-frequency distribution of rocks at the Mars Pathfinder site and the fraction of potentially hazardous rocks. In this abstract, extreme rock distributions observed in Mars Orbiter Camera (MOC) images are compared with those observed at the three landing sites and with model distributions as an additional constraint on potentially hazardous surfaces on Mars.
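The kind of size-frequency model referred to above can be sketched as an exponential cumulative-area curve parameterized by the total rock abundance from thermal differencing; the functional form and the decay coefficient below are placeholders for illustration, not the published calibration:

    import numpy as np

    def cumulative_rock_area(diameter_m, total_abundance, q=2.0):
        # Fraction of the surface covered by rocks with diameter >= diameter_m, given a
        # total rock abundance (area fraction) from thermal differencing. The decay
        # coefficient q is a placeholder value, not a calibrated constant.
        return total_abundance * np.exp(-q * diameter_m)

    k = 0.19                 # hypothetical total rock abundance (19% of the surface)
    hazard_diameter = 0.5    # m, assumed size that threatens an airbag landing system
    print("area fraction covered by rocks >= 0.5 m:",
          round(float(cumulative_rock_area(hazard_diameter, k)), 3))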
NASA Technical Reports Server (NTRS)
Kaushik, Dinesh K.; Baysal, Oktay
1997-01-01
Accurate computation of acoustic wave propagation may be more efficiently performed when their dispersion relations are considered. Consequently, computational algorithms which attempt to preserve these relations have been gaining popularity in recent years. In the present paper, the extensions to one such scheme are discussed. By solving the linearized, 2-D Euler and Navier-Stokes equations with such a method for the acoustic wave propagation, several issues were investigated. Among them were higher-order accuracy, choice of boundary conditions and differencing stencils, effects of viscosity, low-storage time integration, generalized curvilinear coordinates, periodic series, their reflections and interference patterns from a flat wall and scattering from a circular cylinder. The results were found to be promising en route to the aeroacoustic simulations of realistic engineering problems.
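The role of the differencing stencil can be seen in a small test that applies a standard fourth-order central stencil (not the optimized dispersion-preserving coefficients studied in the paper) to sinusoids of increasing wavenumber; the derivative error grows sharply once the wave is poorly resolved, which is the behavior dispersion-relation-preserving schemes are designed to mitigate:

    import numpy as np

    def dudx_central4(u, dx):
        # Standard fourth-order central differencing stencil on a periodic grid.
        return (-np.roll(u, -2) + 8 * np.roll(u, -1)
                - 8 * np.roll(u, 1) + np.roll(u, 2)) / (12.0 * dx)

    nx = 64
    dx = 2 * np.pi / nx
    x = np.arange(nx) * dx

    for k in (2, 8, 20):                 # integer wavenumbers on the 2*pi periodic domain
        u = np.sin(k * x)
        err = np.max(np.abs(dudx_central4(u, dx) - k * np.cos(k * x)))
        print(f"k = {k:2d}: max derivative error = {err:.3e}")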
The performance of differential VLBI delay during interplanetary cruise
NASA Technical Reports Server (NTRS)
Moultrie, B.; Wolff, P. J.; Taylor, T. H.
1984-01-01
Project Voyager radio metric data are used to evaluate the orbit determination capabilities of several data strategies during spacecraft interplanetary cruise. Benchmark performance is established with an operational data strategy of conventional coherent Doppler, coherent range, and explicitly differenced range data from two intercontinental baselines to ameliorate the low-declination singularity of the Doppler data. Employing a Voyager operations trajectory as a reference, the performance of the operational data strategy is compared with the performances of data strategies using differential VLBI delay data (spacecraft delay minus quasar delay) in combination with the aforementioned conventional data types. The comparison of strategy performances indicates that high-accuracy cruise orbit determination can be achieved with a data strategy employing differential VLBI delay data in which the quantity of coherent radio metric data has been greatly reduced.
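An idealized, small-angle sketch of why the differenced observable is powerful: delay errors common to the spacecraft and the angularly nearby quasar cancel, leaving a quantity proportional to their angular offset along the baseline (the baseline length, delays, and error magnitudes below are invented, not Voyager values):

    import numpy as np

    C = 299_792_458.0          # m/s
    baseline = 8.0e6           # m, assumed intercontinental baseline length

    rng = np.random.default_rng(2)
    true_offset = 2.0e-8                       # rad, assumed spacecraft-quasar separation
    geometric_quasar = 1.0e-2                  # s, hypothetical geometric delay
    geometric_spacecraft = geometric_quasar + baseline * true_offset / C
    common_error = 5.0e-9 * rng.standard_normal()   # clock/media error common to both

    delay_spacecraft = geometric_spacecraft + common_error
    delay_quasar = geometric_quasar + common_error

    # Differential VLBI delay: the common error cancels in the difference.
    diff_delay = delay_spacecraft - delay_quasar
    recovered_offset = diff_delay * C / baseline
    print(f"recovered angular offset ~ {recovered_offset:.2e} rad (true {true_offset:.1e})")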