Sparse distributed memory overview
NASA Technical Reports Server (NTRS)
Raugh, Mike
1990-01-01
The Sparse Distributed Memory (SDM) project is investigating the theory and applications of a massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered on studies of the memory itself and on the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large, complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.
NASA Technical Reports Server (NTRS)
Courchaine, Brian; Venable, Jessica C.
1995-01-01
Methane is an important trace gas because it is a greenhouse gas that affects the oxidative capacity of the atmosphere. It is produced from biological and anthropogenic sources, and is increasing globally at a rate of approximately 0.6% per year [Climate Change 1992, IPCC]. By using National Oceanic and Atmospheric Administration/Climate Monitoring and Diagnostics Laboratory (NOAA/CMDL) ground station data, a global climatology of methane values was produced. Unfortunately, because the NOAA/CMDL ground stations are so sparse, the global climatology is low resolution. To compensate for this low resolution, the climatology was compared to in-situ flight data obtained from the NASA Global Tropospheric Experiment (GTE). The smoothed ground station data correlated well with the flight data. Thus, for the first time it is shown that the smoothing process used to make global contours of methane from the ground stations is a plausible way to approximate global atmospheric concentrations of the gas. These verified climatologies can be used for testing large-scale models of chemical production, destruction, and transport. This project develops the groundwork for further research in building global climatologies from sparse ground station data and studying the transport and distribution of trace gases.
Dual-band beacon experiment over Southeast Asia for ionospheric irregularity analysis
NASA Astrophysics Data System (ADS)
Watthanasangmechai, K.; Yamamoto, M.; Saito, A.; Saito, S.; Maruyama, T.; Tsugawa, T.; Nishioka, M.
2013-12-01
A dual-band beacon experiment over Southeast Asia was started in March 2012 in order to capture and analyze ionospheric irregularities in the equatorial region. Five GNU Radio Beacon Receivers (GRBRs) were aligned along the 100-degree geographic longitude. The distances between the stations exceed 500 km. The field of view of this observational network covers +/- 20 degrees geomagnetic latitude, including the geomagnetic equator. To capture ionospheric irregularities, an absolute TEC estimation technique was developed. The two-station method (Leitinger et al., 1975) is generally accepted as a suitable method for estimating TEC offsets in dual-band beacon experiments. However, the distances between the stations directly affect the robustness of the technique. In Southeast Asia, the observational network is too sparse to benefit from the classic two-station method. Moreover, the least-squares approach used in the two-station method overfits small-scale features of the TEC distribution, settling into local minima. We thus propose a new technique to estimate the TEC offsets with supporting data from absolute GPS-TEC from local GPS receivers and the ionospheric height from local ionosondes. The key of the proposed technique is to use a brute-force search with a weighting function to find the set of TEC offsets that yields the global minimum of RMSE over the whole parameter space. The weight is not necessary when the TEC distribution is smooth, while it significantly improves the TEC estimation during Equatorial Spread F (ESF) events. As a result, the latitudinal TEC shows a double-hump distribution because of the Equatorial Ionization Anomaly (EIA). In addition, 100 km-scale fluctuations from ESF are captured at night in the equinox seasons. The plausible linkage of the meridional wind with the triggering of ESF is under investigation and will be presented.
The proposed method successfully estimates the latitudinal TEC distribution from dual-band beacon data for the sparse observational network in Southeast Asia, and may be useful for other equatorial sectors, such as the African region, as well.
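The brute-force offset search described above can be sketched in a few lines. This is a toy illustration with synthetic numbers, not the authors' implementation: a beacon pass yields latitudinal TEC only up to an unknown constant offset, and sparse absolute GPS-TEC values anchor it by minimizing a (possibly weighted) RMSE over the whole candidate-offset space.

```python
import numpy as np

# Synthetic beacon pass: relative TEC with a double-hump (EIA-like) shape,
# offset by an unknown constant. All numbers are hypothetical.
lat = np.linspace(-20.0, 20.0, 81)
true_tec = 30 + 10 * np.exp(-((lat - 8) / 6) ** 2) \
              + 10 * np.exp(-((lat + 8) / 6) ** 2)
rel_tec = true_tec - 12.5                    # beacon gives TEC only up to an offset
ref_lat = np.array([-15.0, 0.0, 15.0])       # local GPS receiver latitudes
ref_tec = np.interp(ref_lat, lat, true_tec)  # absolute GPS-TEC at those sites

# Brute-force search: evaluate the weighted RMSE against the reference values
# over the full parameter space and keep the global minimum, rather than
# trusting a local least-squares fit.
weights = np.ones_like(ref_lat)   # could down-weight disturbed sites during ESF
def rmse(offset):
    est = np.interp(ref_lat, lat, rel_tec + offset)
    return np.sqrt(np.sum(weights * (est - ref_tec) ** 2) / np.sum(weights))

offsets = np.arange(0.0, 25.0, 0.1)
best = offsets[np.argmin([rmse(o) for o in offsets])]
```

Because every candidate offset is evaluated, the search cannot be trapped in a local minimum the way an iterative least-squares adjustment can.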
Use of satellites to determine optimum locations for solar power stations
NASA Technical Reports Server (NTRS)
Hiser, H. W.; Senn, H. V.
1976-01-01
Ground measurements of solar radiation are too sparse to determine important mesoscale differences that can be of major significance in solar power station locations. Cloud images in the visual spectrum from the SMS/GOES geostationary satellites are used to determine the hourly distribution of sunshine on a mesoscale in the continental United States excluding Alaska. Cloud coverage and density as a function of time of day and season are considered through the use of digital data processing techniques. Low density cirrus clouds are less detrimental to solar energy collection than other types; and clouds in the morning and evening are less detrimental than those during midday hours of maximum insolation. The seasonal geographic distributions of sunshine are converted to Langleys of solar radiation received at the earth's surface through the use of transform equations developed from long-term measurements of these two parameters at 18 widely distributed stations. The high correlation between measurements of sunshine and radiation makes this possible. The output product will be maps showing the geographic distribution of total solar radiation on the mesoscale which is received at the earth's surface during each season.
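The transform-equation step above amounts to a linear regression of measured radiation on satellite-derived sunshine at calibration stations, then applying the fit elsewhere. The six data pairs below are synthetic placeholders, not the 18-station measurements:

```python
import numpy as np

# Hypothetical long-term pairs: sunshine fraction vs. daily radiation (Langleys).
sunshine = np.array([0.20, 0.35, 0.50, 0.65, 0.80, 0.95])
langleys = np.array([180., 260., 340., 420., 500., 580.])

# Fit the transform equation Q = intercept + slope * S by least squares.
slope, intercept = np.polyfit(sunshine, langleys, 1)

# Apply it to a satellite-derived sunshine value for a grid cell.
q_est = intercept + slope * 0.70
```

The high correlation reported in the abstract is what justifies carrying a fit of this form from the 18 calibration stations to the whole mesoscale grid.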
1980-10-01
infestation or extent of open water was measured following the same procedures described for determination of transect percent cover. This value was...procedure where the last vegetation type ended along the transect (i.e., hydrilla, eelgrass, open water), vegetation coverage was determined for the entire...ated open water, no measurements were made. Approximately 150 to 200 prediction stations were used per monthly sample. 61. For sparse and thick
NASA Technical Reports Server (NTRS)
Rogers, David
1988-01-01
The advent of the Connection Machine profoundly changes the world of supercomputers. The highly nontraditional architecture makes possible the exploration of algorithms that were impractical for standard Von Neumann architectures. Sparse distributed memory (SDM) is an example of such an algorithm. Sparse distributed memory is a particularly simple and elegant formulation for an associative memory. The foundations for sparse distributed memory are described, and some simple examples of using the memory are presented. The relationship of sparse distributed memory to three important computational systems is shown: random-access memory, neural networks, and the cerebellum of the brain. Finally, the implementation of the algorithm for sparse distributed memory on the Connection Machine is discussed.
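The formulation's simplicity can be shown directly. Below is a minimal sparse distributed memory in the spirit of Kanerva's scheme: fixed random hard-location addresses, Hamming-radius activation, and one up/down counter per location per bit. Sizes and the radius are illustrative choices, not values from the report:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, R = 256, 2000, 115                 # word length, hard locations, radius

addresses = rng.integers(0, 2, (M, N))   # the sparse set of physical locations
counters = np.zeros((M, N), dtype=int)   # up/down counters (the "distributed" store)

def active(addr):
    # A hard location fires when its address is within Hamming distance R of the cue.
    return (addresses != addr).sum(axis=1) <= R

def write(addr, data):
    counters[active(addr)] += np.where(data == 1, 1, -1)

def read(addr):
    # Majority vote over the counters of all activated locations.
    return (counters[active(addr)].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, N)
write(pattern, pattern)                  # autoassociative store
noisy = pattern.copy()
noisy[:20] ^= 1                          # corrupt the cue in 20 of 256 bits
recalled = read(noisy)                   # retrieval from a partial match
```

The read from a corrupted cue illustrates the associative property the abstract describes: activation regions of the original and the noisy address overlap, so the majority vote recovers the stored pattern. On the Connection Machine, the Hamming-distance test across all M locations is the naturally data-parallel step.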
Joseph, John; Sharif, Hatim O; Sunil, Thankam; Alamgir, Hasanat
2013-07-01
The adverse health effects of high concentrations of ground-level ozone are well-known, but estimating exposure is difficult due to the sparseness of urban monitoring networks. This sparseness discourages the reservation of a portion of the monitoring stations for validation of interpolation techniques precisely when the risk of overfitting is greatest. In this study, we test a variety of simple spatial interpolation techniques for 8-h ozone with thousands of randomly selected subsets of data from two urban areas with monitoring stations sufficiently numerous to allow for true validation. Results indicate that ordinary kriging with only the range parameter calibrated in an exponential variogram is the generally superior method, and yields reliable confidence intervals. Sparse data sets may contain sufficient information for calibration of the range parameter even if the Moran I p-value is close to unity. R script is made available to apply the methodology to other sparsely monitored constituents.
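The winning method, ordinary kriging with an exponential variogram in which only the range is calibrated, can be sketched compactly. The station coordinates and ozone values below are hypothetical, and the paper's own R script remains the authoritative implementation; here the sill is simply fixed at the sample variance:

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, vrange):
    """Ordinary kriging with an exponential variogram; only the range is
    calibrated, the sill being fixed at the sample variance."""
    sill = z.var()
    gamma = lambda h: sill * (1.0 - np.exp(-h / vrange))
    n = len(z)
    h = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(h)
    A[n, n] = 0.0                         # Lagrange-multiplier row/column
    b = np.append(gamma(np.linalg.norm(xy - xy0, axis=1)), 1.0)
    w = np.linalg.solve(A, b)
    return w[:n] @ z, w @ b               # estimate and kriging variance

# Hypothetical 8-h ozone (ppb) at five monitors (coordinates in km).
xy = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.], [5., 5.]])
z = np.array([42., 55., 48., 60., 50.])
est, kvar = ordinary_kriging(xy, z, np.array([5., 5.]), vrange=15.0)
```

At a monitored location the estimator reproduces the observation with zero kriging variance; at unmonitored locations the variance is what feeds the confidence intervals the abstract mentions.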
NASA Astrophysics Data System (ADS)
Kucera, P. A.; Steinson, M.
2016-12-01
Accurate and reliable real-time monitoring and dissemination of observations of precipitation and surface weather conditions in general is critical for a variety of research studies and applications. Surface precipitation observations provide important reference information for evaluating satellite (e.g., GPM) precipitation estimates. High quality surface observations of precipitation, temperature, moisture, and winds are important for applications such as agriculture, water resource monitoring, health, and hazardous weather early warning systems. In many regions of the World, surface weather station and precipitation gauge networks are sparsely located and/or of poor quality. Existing stations have often been sited incorrectly, not well-maintained, and have limited communications established at the site for real-time monitoring. The University Corporation for Atmospheric Research (UCAR)/National Center for Atmospheric Research (NCAR), with support from USAID, has started an initiative to develop and deploy low-cost weather instrumentation including tipping bucket and weighing-type precipitation gauges in sparsely observed regions of the world. The goal is to improve the number of observations (temporally and spatially) for the evaluation of satellite precipitation estimates in data-sparse regions and to improve the quality of applications for environmental monitoring and early warning alert systems on a regional to global scale. One important aspect of this initiative is to make the data open to the community. The weather station instrumentation has been developed using innovative new technologies such as 3D printers, Raspberry Pi computing systems, and wireless communications. An initial pilot project has been implemented in Zambia. This effort could be expanded to other data-sparse regions around the globe.
The presentation will provide an overview and demonstration of 3D printed weather station development and initial evaluation of observed precipitation datasets.
Reliable positioning in a sparse GPS network, eastern Ontario
NASA Astrophysics Data System (ADS)
Samadi Alinia, H.; Tiampo, K.; Atkinson, G. M.
2013-12-01
Canada hosts two regions that are prone to large earthquakes: western British Columbia, and the St. Lawrence River region in eastern Canada. Although eastern Ontario is not as seismically active as other areas of eastern Canada, such as the Charlevoix/Ottawa Valley seismic zone, it experiences ongoing moderate seismicity. In historic times, potentially damaging events have occurred in New York State (Attica, 1929, M=5.7; Plattsburg, 2002, M=5.0), north-central Ontario (Temiskaming, 1935, M=6.2; North Bay, 2000, M=5.0), eastern Ontario (Cornwall, 1944, M=5.8), Georgian Bay (2005, MN=4.3), and western Quebec (Val-des-Bois, 2010, M=5.0, MN=5.8). In eastern Canada, the analysis of detailed, high-precision measurements of surface deformation is a key component in our efforts to better characterize the associated seismic hazard. Data from precise, continuous GPS stations are necessary to adequately characterize surface velocities, from which patterns and rates of stress accumulation on faults can be estimated (Mazzotti and Adams, 2005; Mazzotti et al., 2005). Monitoring of these displacements requires employing high accuracy GPS positioning techniques. Detailed strain measurements can determine whether the regional strain everywhere is commensurate with a large event occurring every few hundred years anywhere within this general area or whether large earthquakes are limited to specific areas (Adams and Halchuck, 2003; Mazzotti and Adams, 2005). In many parts of southeastern Ontario and western Québec, GPS stations are distributed quite sparsely, with spacings of approximately 100 km or more. The challenge is to provide accurate solutions for these sparse networks with an approach that is capable of achieving high-accuracy positioning. Here, various reduction techniques are applied to a sparse network installed with the Southern Ontario Seismic Network in eastern Ontario.
Recent developments include the implementation of precise point positioning processing on acquired GPS raw data. These are based on precise GPS orbit and clock data products with centimeter accuracy computed beforehand. Here, the analysis of 1 Hz GPS data is conducted in order to find the most reliable regional network from eight stations (STCO, TYNO, ACTO, INUQ, IVKQ, KLBO, MATQ and ALGO) that cover the study area in eastern Ontario. The estimated parameters are the total number of ambiguities and resolved ambiguities, the a posteriori rms of each baseline, and the coordinates of each station and their differences from the known coordinates. The positioning accuracy, the corrections and the accuracy of interpolated corrections, and the initialization time required for precise positioning are presented for the various applications.
NASA Astrophysics Data System (ADS)
Li, Zishen; Wang, Ningbo; Li, Min; Zhou, Kai; Yuan, Yunbin; Yuan, Hong
2017-04-01
The Earth's ionosphere is the part of the atmosphere stretching from an altitude of about 50 km to more than 1000 km. When a Global Navigation Satellite System (GNSS) signal emitted from a satellite travels through the ionosphere before reaching a receiver on or near the Earth's surface, the signal is significantly delayed, and this delay has been considered one of the major errors in GNSS measurements. A real-time global ionospheric map calculated from real-time data obtained by global stations is an essential method for mitigating the ionospheric delay in real-time positioning. The generation of an accurate global ionospheric map generally depends on a dense distribution of global stations; however, the number of global stations that can produce real-time data is very limited at present, which makes it very difficult to generate a highly accurate global ionospheric map using only the current real-time stations. In view of this, a new approach is proposed for calculating the real-time global ionospheric map based only on the current stations with real-time data. This new approach is developed on the basis of the post-processed and the one-day predicted global ionospheric maps from our research group. The performance of the proposed approach is tested with the current global stations providing real-time data, and the test results are compared with the IGS-released final global ionospheric map products.
Scenario generation for stochastic optimization problems via the sparse grid method
Chen, Michael; Mehrotra, Sanjay; Papp, David
2015-04-19
We study the use of sparse grids in the scenario generation (or discretization) problem in stochastic programming problems where the uncertainty is modeled using a continuous multivariate distribution. We show that, under a regularity assumption on the random function involved, the sequence of optimal objective function values of the sparse grid approximations converges to the true optimal objective function values as the number of scenarios increases. The rate of convergence is also established. We treat separately the special case when the underlying distribution is an affine transform of a product of univariate distributions, and show how the sparse grid method can be adapted to the distribution by the use of quadrature formulas tailored to the distribution. We numerically compare the performance of the sparse grid method using different quadrature rules with classic quasi-Monte Carlo (QMC) methods, optimal rank-one lattice rules, and Monte Carlo (MC) scenario generation, using a series of utility maximization problems with up to 160 random variables. The results show that the sparse grid method is very efficient, especially if the integrand is sufficiently smooth. In such problems the sparse grid scenario generation method is found to need several orders of magnitude fewer scenarios than MC and QMC scenario generation to achieve the same accuracy. As a result, it is indicated that the method scales well with the dimension of the distribution, especially when the underlying distribution is an affine transform of a product of univariate distributions, in which case the method appears scalable to thousands of random variables.
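In one dimension a sparse grid collapses to an ordinary Gauss rule, so the efficiency claim can be illustrated without the full Smolyak construction. The toy below is not the paper's experiment: it takes a smooth "utility" u(x) = e^x with X standard normal, whose expectation e^{1/2} is known in closed form, and compares an 8-node Gauss quadrature against 8 Monte Carlo scenarios at the same budget:

```python
import numpy as np

# Smooth integrand with a known expectation: E[exp(X)] = exp(1/2) for X ~ N(0,1).
exact = np.exp(0.5)

# 8-node Gauss rule for the probabilists' weight exp(-x^2/2): the 1-D
# degenerate case of a sparse grid scenario set.
nodes, weights = np.polynomial.hermite_e.hermegauss(8)
quad = weights @ np.exp(nodes) / np.sqrt(2.0 * np.pi)

# 8 Monte Carlo "scenarios" with the same budget.
rng = np.random.default_rng(0)
mc = np.exp(rng.standard_normal(8)).mean()
```

For this smooth integrand the quadrature error is far below the MC sampling error at equal scenario counts, which is the effect the high-dimensional sparse grid generalizes.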
NASA Astrophysics Data System (ADS)
Kucera, Paul; Steinson, Martin
2017-04-01
Accurate and reliable real-time monitoring and dissemination of observations of surface weather conditions is critical for a variety of societal applications. Applications that provide local and regional information about temperature, precipitation, moisture, and winds, for example, are important for agriculture, water resource monitoring, health, and monitoring of hazardous weather conditions. In many regions of the World, surface weather stations are sparsely located and/or of poor quality. Existing stations have often been sited incorrectly, not well-maintained, and have limited communications established at the site for real-time monitoring. The University Corporation for Atmospheric Research (UCAR)/National Center for Atmospheric Research (NCAR), with support from USAID, has started an initiative to develop and deploy low-cost weather instrumentation in sparsely observed regions of the world. The project is focused on improving weather observations for environmental monitoring and early warning alert systems on a regional to global scale. The instrumentation that has been developed uses innovative new technologies such as 3D printers, Raspberry Pi computing systems, and wireless communications. The goal of the project is to make the weather station designs, software, and processing tools an open community resource. The weather stations can be built locally by agencies, through educational institutions, and by residential communities as a citizen effort to augment existing networks and improve detection of natural hazards for disaster risk reduction. The presentation will provide an overview of the open source weather station technology and evaluation of sensor observations for the initial networks that have been deployed in Africa.
NASA Astrophysics Data System (ADS)
Casson, David; Werner, Micha; Weerts, Albrecht; Schellekens, Jaap; Solomatine, Dimitri
2017-04-01
Hydrological modelling in the Canadian Sub-Arctic is hindered by the limited spatial and temporal coverage of local meteorological data. Local watershed modelling often relies on data from a sparse network of meteorological stations with a rough density of 3 active stations per 100,000 km2. Global datasets hold great promise for application due to more comprehensive spatial and extended temporal coverage. A key objective of this study is to demonstrate the application of global datasets and data assimilation techniques for hydrological modelling of a data-sparse, Sub-Arctic watershed. Application of available datasets and modelling techniques is currently limited in practice due to a lack of local capacity and understanding of available tools. Due to the importance of snow processes in the region, this study also aims to evaluate the performance of global SWE products for snowpack modelling. The Snare Watershed is a 13,300 km2 snowmelt-driven sub-basin of the Mackenzie River Basin, Northwest Territories, Canada. The Snare watershed is data sparse in terms of meteorological data, but is well gauged, with consistent discharge records since the late 1970s. End-of-winter snowpack surveys have been conducted every year from 1978 to the present. The application of global re-analysis datasets from the EU FP7 eartH2Observe project is investigated in this study. Precipitation data are taken from Multi-Source Weighted-Ensemble Precipitation (MSWEP) and temperature data from Watch Forcing Data applied to European Reanalysis (ERA)-Interim data (WFDEI). GlobSnow-2 is a global Snow Water Equivalent (SWE) measurement product funded by the European Space Agency (ESA) and is also evaluated over the local watershed. Downscaled precipitation, temperature and potential evaporation datasets are used as forcing data in a distributed version of the HBV model implemented in the WFLOW framework.
Results demonstrate the successful application of global datasets in local watershed modelling, but also show that validation of actual frozen precipitation and snowpack conditions is very difficult. The distributed hydrological model shows good streamflow simulation performance based on statistical model evaluation techniques. Results are also promising for inter-annual variability, spring snowmelt onset and time to peak flows. It is expected that data assimilation of stream flow using an Ensemble Kalman Filter will further improve model performance. This study shows that global re-analysis datasets hold great potential for understanding the hydrology and snowpack dynamics of the expansive and data-sparse sub-Arctic. However, global SWE products will require further validation and algorithm improvements, particularly over boreal forest and lake-rich regions.
Sparse Matrix Motivated Reconstruction of Far-Field Radiation Patterns
2015-03-01
method for base-station antenna radiation patterns. IEEE Antennas and Propagation Magazine. 2001;43(2):132. 4. Vasiliadis TG, Dimitriou D, Sergiadis JD...algorithm based on sparse representations of radiation patterns using the inverse Discrete Fourier Transform (DFT) and the inverse Discrete Cosine...patterns using a Model-Based Parameter Estimation (MBPE) technique that reduces the computational time required to model radiation patterns. Another
NASA Astrophysics Data System (ADS)
Kyselý, Jan; Plavcová, Eva
2010-12-01
The study compares daily maximum (Tmax) and minimum (Tmin) temperatures in two data sets interpolated from irregularly spaced meteorological stations to a regular grid: the European gridded data set (E-OBS), produced from a relatively sparse network of stations available in the European Climate Assessment and Dataset (ECA&D) project, and a data set gridded onto the same grid from a high-density network of stations in the Czech Republic (GriSt). We show that large differences exist between the two gridded data sets, particularly for Tmin. The errors tend to be larger in tails of the distributions. In winter, temperatures below the 10% quantile of Tmin, which is still far from the very tail of the distribution, are too warm by almost 2°C in E-OBS on average. A large bias is found also for the diurnal temperature range. Comparison with simple average series from stations in two regions reveals that differences between GriSt and the station averages are minor relative to differences between E-OBS and either of the two data sets. The large deviations between the two gridded data sets affect conclusions concerning validation of temperature characteristics in regional climate model (RCM) simulations. The bias of the E-OBS data set and limitations with respect to its applicability for evaluating RCMs stem primarily from (1) insufficient density of information from station observations used for the interpolation, including the fact that the stations available may not be representative for a wider area, and (2) inconsistency between the radii of the areal average values in high-resolution RCMs and E-OBS. Further increases in the amount and quality of station data available within ECA&D and used in the E-OBS data set are essentially needed for more reliable validation of climate models against recent climate on a continental scale.
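The cold-tail comparison described above reduces to quantile arithmetic on paired grid-cell series. The two series below are synthetic stand-ins (a well-sampled series and a copy given a warm bias below its 10% quantile), not GriSt or E-OBS data; they only show the mechanics of quantifying such a tail bias:

```python
import numpy as np

# Synthetic winter Tmin series for one grid cell: "grist" from a dense
# network, "eobs" identical except for a +1.8 degC warm bias below the
# 10% quantile, mimicking the cold-tail error discussed above.
rng = np.random.default_rng(2)
grist = rng.normal(-5.0, 4.0, 5000)
cold_tail = grist < np.quantile(grist, 0.10)
eobs = grist + 1.8 * cold_tail

q10_grist = np.quantile(grist, 0.10)
q10_eobs = np.quantile(eobs, 0.10)                      # shifted warm
tail_bias = eobs[cold_tail].mean() - grist[cold_tail].mean()
```

A check of this kind, applied per season and per quantile, is how a bias that sits "still far from the very tail" is separated from ordinary interpolation noise.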
Benefit of Complete State Monitoring For GPS Realtime Applications With Geo++ Gnsmart
NASA Astrophysics Data System (ADS)
Wübbena, G.; Schmitz, M.; Bagge, A.
Today, the demand for precise positioning at the cm-level in realtime is growing worldwide. An indication of this is the number of operational RTK network installations, which use permanent reference station networks to derive corrections for distance-dependent GPS errors and to supply corrections to RTK users in realtime. Generally, the inter-station distances in RTK networks are selected at several tens of km, and operational installations cover areas of up to 50,000 km². However, the separation of the permanent reference stations can be increased to several hundred km, provided that a correct modeling of all error components is applied. Such networks can be termed sparse RTK networks, which cover larger areas with a reduced number of stations. The undifferenced GPS observable is best suited for this task, estimating the complete state of a permanent GPS network in a dynamic recursive Kalman filter. A rigorous adjustment of all simultaneous reference station data is required. The sparse network design essentially supports the state estimation through its large spatial extension. The benefit of the approach and its state modeling of all GPS error components is successful ambiguity resolution in realtime over long distances. The above concepts are implemented in the operational GNSMART (GNSS State Monitoring and Representation Technique) software of Geo++. It performs state monitoring of all error components at the mm-level, because for RTK networks this accuracy is required to sufficiently represent the distance-dependent errors for kinematic applications. One key issue of the modeling is the estimation of clocks and hardware delays in the undifferenced approach. This prerequisite subsequently allows for the precise separation and modeling of all other error components. Generally, most of the estimated parameters are considered nuisance parameters with respect to pure positioning tasks.
As the complete state vector of GPS errors is available in a GPS realtime network, additional information besides position can be derived e.g. regional precise satellite clocks, orbits, total ionospheric electron content, tropospheric water vapor distribution, and also dynamic reference station movements. The models of GNSMART are designed to work with regional, continental or even global data. Results from GNSMART realtime networks with inter-station distances of several hundred km are presented to demonstrate the benefits of the operational implemented concepts.
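The recursive state estimation at the heart of this approach can be sketched with a one-state Kalman filter: a slowly varying error component (e.g., a hardware delay) observed through noisy measurements. All noise levels below are hypothetical; the operational filter carries a large multi-component state vector, not a scalar:

```python
import numpy as np

# Minimal linear Kalman filter: random-walk state observed directly.
F, Q = 1.0, 1e-4        # state transition and process noise (slow drift)
H, R = 1.0, 0.01        # observation model and measurement noise variance

x, P = 0.0, 1.0         # initial state estimate and covariance
rng = np.random.default_rng(1)
truth = 0.5             # the constant "error component" being tracked
for _ in range(200):
    z = truth + rng.normal(0.0, R ** 0.5)   # simulated noisy measurement
    # predict
    x, P = F * x, F * P * F + Q
    # update
    K = P * H / (H * P * H + R)
    x, P = x + K * (z - H * x), (1 - K * H) * P
```

The filter's covariance P shrinks toward a steady state set by Q and R, which is what lets a sparse network report error components at the mm-level from individually noisier observations.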
Deploying temporary networks for upscaling of sparse network stations
NASA Astrophysics Data System (ADS)
Coopersmith, Evan J.; Cosh, Michael H.; Bell, Jesse E.; Kelly, Victoria; Hall, Mark; Palecki, Michael A.; Temimi, Marouane
2016-10-01
Soil observations networks at the national scale play an integral role in hydrologic modeling, drought assessment, agricultural decision support, and our ability to understand climate change. Understanding soil moisture variability is necessary to apply these measurements to model calibration, business and consumer applications, or even human health issues. The installation of soil moisture sensors as sparse, national networks is necessitated by limited financial resources. However, this results in the incomplete sampling of the local heterogeneity of soil type, vegetation cover, topography, and the fine spatial distribution of precipitation events. To this end, temporary networks can be installed in the areas surrounding a permanent installation within a sparse network. The temporary networks deployed in this study provide a more representative average at the 3 km and 9 km scales, localized about the permanent gauge. The value of such temporary networks is demonstrated at test sites in Millbrook, New York and Crossville, Tennessee. The capacity of a single U.S. Climate Reference Network (USCRN) sensor set to approximate the average of a temporary network at the 3 km and 9 km scales using a simple linear scaling function is tested. The capacity of a temporary network to provide reliable estimates with diminishing numbers of sensors, the temporal stability of those networks, and ultimately, the relationship of the variability of those networks to soil moisture conditions at the permanent sensor are investigated. In this manner, this work demonstrates the single-season installation of a temporary network as a mechanism to characterize the soil moisture variability at a permanent gauge within a sparse network.
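The "simple linear scaling function" test reduces to calibrating a two-parameter fit between the permanent sensor and the temporary-network average during the deployment, then applying it afterwards. The paired series below are synthetic placeholders, not USCRN data:

```python
import numpy as np

# Hypothetical paired series: permanent-sensor soil moisture and the
# temporary-network 3 km average it is asked to approximate.
point = np.array([0.12, 0.18, 0.25, 0.31, 0.22, 0.15, 0.28, 0.35])
net_3km = 0.85 * point + 0.03     # synthetic "network truth" for the sketch

# Calibrate theta_3km = a + b * theta_point while the temporary network
# is deployed; apply the fit after the network is removed.
b, a = np.polyfit(point, net_3km, 1)
upscaled = a + b * point
```

The residual variance of such a fit, tracked as sensors are removed one by one, is the kind of diagnostic used to judge how few temporary sensors still yield a reliable 3 km or 9 km estimate.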
Comparison of dew point temperature estimation methods in Southwestern Georgia
Marcus D. Williams; Scott L. Goodrick; Andrew Grundstein; Marshall Shepherd
2015-01-01
Recent upward trends in acres irrigated have been linked to increasing near-surface moisture. Unfortunately, stations with dew point data for monitoring near-surface moisture are sparse. Thus, models that estimate dew points from more readily observed data sources are useful. Daily average dew point temperatures were estimated and evaluated at 14 stations in...
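A common estimator of this kind is the Magnus approximation, which recovers dew point from temperature and relative humidity. The sketch below uses one standard constant set; it illustrates the general approach, not necessarily any of the specific models evaluated in the study:

```python
import math

def dew_point_c(t_c, rh_pct, a=17.625, b=243.04):
    """Magnus-formula dew point (deg C) from air temperature (deg C) and RH (%)."""
    g = math.log(rh_pct / 100.0) + a * t_c / (b + t_c)
    return b * g / (a - g)

td = dew_point_c(30.0, 70.0)   # a humid summer afternoon in southwestern Georgia
```

At 100% relative humidity the formula returns the air temperature exactly, a useful sanity check for any dew point estimator.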
A performance study of sparse Cholesky factorization on INTEL iPSC/860
NASA Technical Reports Server (NTRS)
Zubair, M.; Ghose, M.
1992-01-01
The problem of Cholesky factorization of a sparse matrix has been very well investigated on sequential machines. A number of efficient codes exist for factorizing large unstructured sparse matrices. However, there is a lack of such efficient codes on parallel machines in general, and distributed machines in particular. Some of the issues that are critical to the implementation of sparse Cholesky factorization on a distributed memory parallel machine are ordering, partitioning and mapping, load balancing, and ordering of various tasks within a processor. Here, we focus on the effect of various partitioning schemes on the performance of sparse Cholesky factorization on the Intel iPSC/860. Also, a new partitioning heuristic for structured as well as unstructured sparse matrices is proposed, and its performance is compared with other schemes.
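The numeric kernel every such code shares is the column Cholesky recurrence. A dense numpy version is shown below for orientation; sparse codes apply the same recurrence but visit only the nonzeros identified by the symbolic ordering and partitioning steps the abstract lists, and it is that nonzero structure which gets mapped across processors:

```python
import numpy as np

def cholesky(A):
    """Textbook column-by-column Cholesky factorization A = L L^T."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        s = A[j, j] - L[j, :j] @ L[j, :j]         # diagonal update
        L[j, j] = np.sqrt(s)
        L[j+1:, j] = (A[j+1:, j] - L[j+1:, :j] @ L[j, :j]) / L[j, j]
    return L

# A small sparse SPD matrix; with this ordering the factor has no fill-in.
A = np.array([[4., 2., 0.],
              [2., 5., 2.],
              [0., 2., 5.]])
L = cholesky(A)
```

In the sparse setting, column j's update touches only the rows where column j has nonzeros, so a good ordering (minimizing fill-in) and a good column-to-processor mapping together determine load balance, which is exactly the trade-off the partitioning study measures.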
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Huiqiao; Yang, Yi; Tang, Xiangyang
2015-06-15
Purpose: Optimization-based reconstruction has been proposed and investigated for reconstructing CT images from sparse views, such that the radiation dose can be substantially reduced while maintaining acceptable image quality. The investigation has so far focused on reconstruction from evenly distributed sparse views. Recognizing the clinical situations wherein only unevenly sparse views are available, e.g., image guided radiation therapy, CT perfusion and multi-cycle cardiovascular imaging, we investigate the performance of optimization-based image reconstruction from unevenly sparse projection views in this work. Methods: The investigation is carried out using the FORBILD and an anthropomorphic head phantom. In the study, 82 views, which are evenly sorted out from a full (360°) axial CT scan consisting of 984 views, form sub-scan I. Another 82 views are sorted out in a similar manner to form sub-scan II. As such, a CT scan with sparse (164) views at a 1:6 ratio is formed. By shifting the two sub-scans relative to each other in view angulation, a CT scan with unevenly distributed sparse (164) views at a 1:6 ratio is formed. An optimization-based method is implemented to reconstruct images from the unevenly distributed views. Taking the FBP reconstruction from the full scan (984 views) as the reference, the root mean square (RMS) error between the reference and the optimization-based reconstruction is used to evaluate the performance quantitatively. Results: In visual inspection, the optimization-based method outperforms the FBP substantially in the reconstruction from unevenly distributed views, which is quantitatively verified by the RMS gauged globally and in ROIs in both the FORBILD and anthropomorphic head phantoms. The RMS increases with increasing severity of the uneven angular distribution, especially in the case of the anthropomorphic head phantom.
Conclusion: The optimization-based image reconstruction can save radiation dose up to 12-fold while providing acceptable image quality for advanced clinical applications wherein only unevenly distributed sparse views are available. Research Grants: W81XWH-12-1-0138 (DoD), Sinovision Technologies.« less
On the feasibility of real-time mapping of the geoelectric field across North America
Love, Jeffrey J.; Rigler, E. Joshua; Kelbert, Anna; Finn, Carol A.; Bedrosian, Paul A.; Balch, Christopher C.
2018-06-08
A review is given of the present feasibility of accurately mapping geoelectric fields across North America in near-real time by modeling geomagnetic monitoring and magnetotelluric survey data. Should this capability be successfully developed, it could inform utility companies of magnetic-storm interference on electric-power-grid systems. The challenge of real-time mapping of geoelectric fields reflects (1) the spatiotemporal complexity of geomagnetic variation, especially during magnetic storms, (2) the sparse distribution of ground-based geomagnetic monitoring stations that report data in real time, (3) the spatial complexity of three-dimensional solid-Earth impedance, and (4) the geographically incomplete state of continental-scale magnetotelluric surveys.
NASA Astrophysics Data System (ADS)
Xie, J.; Ni, S.; Chu, R.; Xia, Y.
2017-12-01
Accurate seismometer clocks play an important role in seismological studies, including earthquake location and tomography. However, some seismic stations may have clock drifts larger than 1 second, especially in the early days of the global seismic network. The 26 s Persistent Localized (PL) microseism event in the Gulf of Guinea sometimes excites strong and coherent signals and can be used as a repeating source for assessing the stability of seismometer clocks. Taking station GSC/TS in southern California, USA as an example, the 26 s PL signal can be easily observed in the ambient Noise Cross-correlation Function (NCF) between GSC/TS and a remote station. The travel-time variation of this 26 s signal in the NCF is used to infer clock error. A drastic clock error is detected during June 1992. This short-term clock error is confirmed by both teleseismic and local earthquake records with a magnitude of ±25 s. Using the 26 s PL source, the clock can be validated for historical records of sparsely distributed stations, where the usual NCF of short-period microseisms (<20 s) might be less effective due to attenuation over long interstation distances. However, this method suffers from a cycling problem and should be verified by teleseismic/local P waves. The location change of the 26 s PL source may influence the measured clock drift; using regional stations with stable clocks, we estimate the possible location change of the source.
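The core measurement, inferring a clock error from the travel-time shift of the 26 s signal in a cross-correlation, can be illustrated with synthetic data. In this sketch (the signal, envelope, and 7-sample shift are all invented), the Gaussian envelope is what makes the correlation peak unique; without it, the peak would repeat every 26 s cycle, which is the cycling problem the abstract warns about:

```python
import numpy as np

dt = 1.0                                        # sample interval (s)
t = np.arange(0, 600, dt)
period = 26.0                                   # the 26 s PL microseism period
# narrowband 26 s signal with a Gaussian envelope (stands in for the NCF)
signal = np.sin(2 * np.pi * t / period) * np.exp(-((t - 300) / 120) ** 2)

true_shift = 7                                  # hypothetical clock error, in samples
shifted = np.roll(signal, true_shift)           # the "drifted-clock" recording

# the lag of the cross-correlation peak estimates the clock error
cc = np.correlate(shifted, signal, mode="full")
lag = np.argmax(cc) - (len(signal) - 1)
assert lag == true_shift
```

Replacing the envelope with a pure sinusoid makes `np.argmax` ambiguous up to multiples of the 26-sample period, which is why the abstract recommends verifying against teleseismic/local P waves.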
Geographic patterns and dynamics of Alaskan climate interpolated from a sparse station record
Fleming, Michael D.; Chapin, F. Stuart; Cramer, W.; Hufford, Gary L.; Serreze, Mark C.
2000-01-01
Data from a sparse network of climate stations in Alaska were interpolated to provide 1-km resolution maps of mean monthly temperature and precipitation, variables that are required at high spatial resolution for input into regional models of ecological processes and resource management. The interpolation model is based on thin-plate smoothing splines, which use the spatial data along with a digital elevation model to incorporate local topography. The model provides maps that are consistent with regional climatology and with patterns recognized by experienced weather forecasters. The broad patterns of Alaskan climate are well represented and include latitudinal and altitudinal trends in temperature and precipitation and gradients in continentality. Variations within these broad patterns reflect both the weakening and reduction in frequency of low-pressure centres in their eastward movement across southern Alaska during the summer, and the shift of the storm tracks into central and northern Alaska in late summer. Not surprisingly, apparent artifacts of the interpolated climate occur primarily in regions with few or no stations. The interpolation model did not accurately represent low-level winter temperature inversions that occur within large valleys and basins. Along with well-recognized climate patterns, the model captures local topographic effects that would not be depicted using standard interpolation techniques. This suggests that similar procedures could be used to generate high-resolution maps for other high-latitude regions with a sparse density of data.
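A minimal thin-plate spline interpolation of sparse station data can be sketched with SciPy. Note that this toy omits the digital-elevation-model covariate the study uses, and the station coordinates and temperatures are synthetic:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
stations = rng.uniform(0, 100, size=(30, 2))        # sparse station coordinates (km)
# synthetic mean temperatures with a north-south gradient plus noise
temps = 15.0 - 0.1 * stations[:, 1] + rng.normal(0, 0.5, 30)

# thin-plate spline interpolant (smoothing=0 -> exact at the stations)
interp = RBFInterpolator(stations, temps, kernel="thin_plate_spline")

# evaluate on a 50 x 50 grid covering the domain
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid_temps = interp(np.column_stack([gx.ravel(), gy.ravel()]))

assert np.allclose(interp(stations), temps, atol=1e-5)   # honors the observations
```

The study's actual model additionally takes elevation as a spline covariate, which is what lets it capture topographic effects that purely horizontal interpolation misses.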
NASA Astrophysics Data System (ADS)
Xie, Jun; Ni, Sidao; Chu, Risheng; Xia, Yingjie
2018-01-01
Accurate seismometer clocks play an important role in seismological studies, including earthquake location and tomography. However, some seismic stations may have clock drifts larger than 1 s (e.g. GSC in 1992), especially in the early days of global seismic networks. The 26 s Persistent Localized (PL) microseism event in the Gulf of Guinea sometimes excites strong and coherent signals and can be used as a repeating source for assessing the stability of seismometer clocks. Taking stations GSC, PAS and PFO in the TERRAscope network as examples, the 26 s PL signal can be easily observed in the ambient noise cross-correlation function between these stations and a remote station, OBN, at an interstation distance of about 9700 km. The travel-time variation of this 26 s signal in the ambient noise cross-correlation function is used to infer clock error. A drastic clock error is detected during June 1992 for station GSC, but not for stations PAS and PFO. This short-term clock error is confirmed by both teleseismic and local earthquake records with a magnitude of 25 s. Averaged over the three stations, the accuracy of the ambient noise cross-correlation function method with the 26 s source is about 0.3-0.5 s. Using this PL source, the clock can be validated for historical records of sparsely distributed stations, where the usual ambient noise cross-correlation function of short-period (<20 s) ambient noise might be less effective due to attenuation over long interstation distances. However, this method suffers from a cycling problem and should be verified by teleseismic/local P waves. Further studies are also needed to investigate whether the 26 s source moves spatially and its effect on clock drift detection.
Lu, Z.; Kwoun, Oh-Ig
2008-01-01
Detailed analysis of C-band European Remote Sensing 1 and 2 (ERS-1/ERS-2) and Radarsat-1 interferometric synthetic aperture radar (InSAR) imagery was conducted to study water-level changes of coastal wetlands of southeastern Louisiana. Radar backscattering and InSAR coherence suggest that the dominant radar backscattering mechanism for swamp forest and saline marsh is double-bounce backscattering, implying that InSAR images can be used to estimate water-level changes with unprecedented spatial details. On the one hand, InSAR images suggest that water-level changes over the study site can be dynamic and spatially heterogeneous and cannot be represented by readings from sparsely distributed gauge stations. On the other hand, InSAR phase measurements are disconnected by structures and other barriers and require absolute water-level measurements from gauge stations or other sources to convert InSAR phase values to absolute water-level changes. ?? 2006 IEEE.
DOT National Transportation Integrated Search
2018-02-02
Traffic congestion at arterial intersections and freeway bottlenecks degrades air quality and threatens public health. Conventionally, air pollutants are monitored by sparsely distributed Quality Assurance Air Monitoring Sites. Sparse mobile ...
2014-09-30
Advancing Underwater Acoustic Communication for Autonomous Distributed Networks via Sparse Channel Sensing, Coding, and ... (report fragment): underwater acoustic communication technologies for autonomous distributed underwater networks, through innovative signal processing, coding, and ...; OFDM modulated dynamic coded cooperation in underwater acoustic channels; localization, networking, and testbed: on-demand ...
Tectonic Implications of Intermediate-depth Earthquakes Beneath the Northeast Caribbean
NASA Astrophysics Data System (ADS)
Mejia, H.; Pulliam, J.; Huerfano, V.; Polanco Rivera, E.
2016-12-01
The Caribbean-North American plate boundary transitions from normal subduction beneath the Lesser Antilles to oblique subduction at Hispaniola before becoming exclusively transform at Cuba. In the Greater Antilles, large earthquakes occur all along the plate boundary at shallow depths, but intermediate-depth earthquakes (50-200 km focal depth) occur almost uniquely beneath eastern Hispaniola. Previous studies have suggested that regional tectonics may be dominated by, for example, opposing subducting slabs, tearing of the subducting North American slab, or "slab push" by the NA slab. In addition, the Bahamas Platform, located north of Hispaniola, is likely causing compressive stresses and clockwise rotation of the island. A careful examination of focal mechanisms of intermediate-depth earthquakes could clarify regional tectonics, but seismic stations in the region have historically been sparse, so constraints on earthquake depths and focal mechanisms have been poor. In response, fifteen broadband sensors were deployed in the Dominican Republic in 2014, increasing the number of stations to twenty-two. To determine the roles earthquakes play in regional tectonics, an event catalog was created by joining data from our stations and other regional stations, selecting events with depths greater than 50 km and magnitudes greater than 3.5. All events have been relocated and focal mechanisms are presented for as many events as possible. Multiple probable fault planes are computed for each event. Compressive (P) and tensional (T) axes, from fault planes, are plotted in 3-dimensions with density distribution contours determined for each axis. Examining relationships between axis distributions and events helps constrain tectonic stresses at intermediate depths beneath eastern Hispaniola. A majority of events show primary compressive axes oriented in a north-south direction, likely produced by collision with the Bahamas Platform.
BIRD: A general interface for sparse distributed memory simulators
NASA Technical Reports Server (NTRS)
Rogers, David
1990-01-01
Kanerva's sparse distributed memory (SDM) has now been implemented for at least six different computers, including SUN3 workstations, the Apple Macintosh, and the Connection Machine. A common interface for input of commands would both aid testing of programs on a broad range of computer architectures and assist users in transferring results from research environments to applications. A common interface also allows secondary programs to generate command sequences for a sparse distributed memory, which may then be executed on the appropriate hardware. The BIRD program is an attempt to create such an interface. Simplifying access to different simulators should assist developers in finding appropriate uses for SDM.
Detecting Earthquakes over a Seismic Network using Single-Station Similarity Measures
NASA Astrophysics Data System (ADS)
Bergen, Karianne J.; Beroza, Gregory C.
2018-03-01
New blind waveform-similarity-based detection methods, such as Fingerprint and Similarity Thresholding (FAST), have shown promise for detecting weak signals in long-duration, continuous waveform data. While blind detectors are capable of identifying similar or repeating waveforms without templates, they can also be susceptible to false detections due to local correlated noise. In this work, we present a set of three new methods that allow us to extend single-station similarity-based detection over a seismic network: event-pair extraction, pairwise pseudo-association, and event resolution complete a post-processing pipeline that combines single-station similarity measures (e.g. FAST sparse similarity matrix) from each station in a network into a list of candidate events. The core technique, pairwise pseudo-association, leverages the pairwise structure of event detections in its network detection model, which allows it to identify events observed at multiple stations in the network without modeling the expected move-out. Though our approach is general, we apply it to extend FAST over a sparse seismic network. We demonstrate that our network-based extension of FAST is sensitive and maintains a low false detection rate. As a test case, we apply our approach to two weeks of continuous waveform data from five stations during the foreshock sequence prior to the 2014 Mw 8.2 Iquique earthquake. Our method identifies nearly five times as many events as the local seismicity catalog (including 95% of the catalog events), and less than 1% of these candidate events are false detections.
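As a rough illustration of the association idea, detections need only coincide in time across stations, with no moveout model. The sketch below is a drastic simplification of pairwise pseudo-association (the actual method operates on the pairwise structure of FAST detections, not raw pick times); the station names, times, tolerance, and helper name are all invented for illustration:

```python
def pseudo_associate(station_detections, tol=30.0, min_stations=2):
    """Group per-station detection times (seconds) into candidate events.

    A candidate event is emitted when detections from at least
    `min_stations` distinct stations fall within `tol` seconds of each
    other -- temporal proximity only, no moveout model.
    """
    # flatten to (time, station) pairs sorted by time
    picks = sorted((t, s) for s, times in station_detections.items() for t in times)
    events, window = [], []
    for t, s in picks:
        # keep only picks within `tol` seconds of the current pick
        window = [(wt, ws) for wt, ws in window if t - wt <= tol]
        window.append((t, s))
        stations = {ws for _, ws in window}
        if len(stations) >= min_stations:
            events.append((window[0][0], sorted(stations)))
            window = []                      # consume the picks of this event
    return events

dets = {"STA1": [10.0, 500.0], "STA2": [12.5, 800.0], "STA3": [11.0]}
print(pseudo_associate(dets))
```

Only the picks near t = 10 s are seen by two stations within the tolerance, so a single candidate event survives; the isolated picks at 500 s and 800 s are discarded as probable single-station false detections.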
Hierarchical Bayesian sparse image reconstruction with application to MRFM.
Dobigeon, Nicolas; Hero, Alfred O; Tourneret, Jean-Yves
2009-09-01
This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g., by maximizing the estimated posterior distribution. In our fully Bayesian approach, the posteriors of all the parameters are available. Thus, our algorithm provides more information than other previously proposed sparse reconstruction methods that only give a point estimate. The performance of the proposed hierarchical Bayesian sparse reconstruction method is illustrated on synthetic data and real data collected from a tobacco virus sample using a prototype MRFM instrument.
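The prior described above, a weighted mixture of a mass at zero and a positive exponential distribution, is straightforward to sample from directly. In this sketch the hyperparameters `w` and `scale` are fixed by hand, whereas the paper tunes them automatically by marginalization over the hierarchical model:

```python
import numpy as np

def sample_prior(n, w=0.9, scale=1.0, rng=None):
    """Draw n pixels from the mixture prior: probability mass w at zero,
    weight (1 - w) on a positive exponential distribution."""
    rng = rng or np.random.default_rng()
    zero = rng.random(n) < w                 # which pixels come from the spike
    x = rng.exponential(scale, size=n)       # positive exponential component
    x[zero] = 0.0
    return x

rng = np.random.default_rng(0)
img = sample_prior(10_000, w=0.9, scale=1.0, rng=rng)
sparsity = float(np.mean(img == 0))

assert 0.88 < sparsity < 0.92                # empirical sparsity tracks w
assert np.all(img >= 0)                      # positivity is built into the prior
```

This shows why the prior "seamlessly accounts for" sparsity and positivity: both properties hold for every draw by construction, so the Gibbs sampler never has to enforce them separately.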
Sparse modeling of spatial environmental variables associated with asthma
Chang, Timothy S.; Gangnon, Ronald E.; Page, C. David; Buckingham, William R.; Tandias, Aman; Cowan, Kelly J.; Tomasallo, Carrie D.; Arndt, Brian G.; Hanrahan, Lawrence P.; Guilbert, Theresa W.
2014-01-01
Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin’s Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5–50 years over a three-year period. Each patient’s home address was geocoded to one of 3,456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin’s geography. Logistic thin plate regression spline modeling captured spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to Logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors. PMID:25533437
Sparse modeling of spatial environmental variables associated with asthma.
Chang, Timothy S; Gangnon, Ronald E; David Page, C; Buckingham, William R; Tandias, Aman; Cowan, Kelly J; Tomasallo, Carrie D; Arndt, Brian G; Hanrahan, Lawrence P; Guilbert, Theresa W
2015-02-01
Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin's Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5-50 years over a three-year period. Each patient's home address was geocoded to one of 3456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin's geography. Logistic thin plate regression spline modeling captured spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to Logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors.
Quresh S. Latif; Martha M. Ellis; Victoria A. Saab; Kim Mellen-McLean
2017-01-01
Sparsely distributed species attract conservation concern, but insufficient information on population trends challenges conservation and funding prioritization. Occupancy-based monitoring is attractive for these species, but appropriate sampling design and inference depend on particulars of the study system. We employed spatially explicit simulations to identify...
Sparsely sampling the sky: Regular vs. random sampling
NASA Astrophysics Data System (ADS)
Paykari, P.; Pires, S.; Starck, J.-L.; Jaffe, A. H.
2015-09-01
Aims: The next generation of galaxy surveys, aiming to observe millions of galaxies, are expensive both in time and money. This raises questions regarding the optimal investment of this time and money for future surveys. In a previous work, we have shown that a sparse sampling strategy could be a powerful substitute for the - usually favoured - contiguous observation of the sky. In our previous paper, regular sparse sampling was investigated, where the sparse observed patches were regularly distributed on the sky. The regularity of the mask introduces a periodic pattern in the window function, which induces periodic correlations at specific scales. Methods: In this paper, we use a Bayesian experimental design to investigate a "random" sparse sampling approach, where the observed patches are randomly distributed over the total sparsely sampled area. Results: We find that in this setting, the induced correlation is evenly distributed amongst all scales as there is no preferred scale in the window function. Conclusions: This is desirable when we are interested in any specific scale in the galaxy power spectrum, such as the matter-radiation equality scale. As the figure of merit shows, however, there is no preference between regular and random sampling to constrain the overall galaxy power spectrum and the cosmological parameters.
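The contrast between regular and random masks can be seen directly in their window functions. This one-dimensional toy (invented sizes, not the survey's actual geometry) shows that a regular mask concentrates window-function power in a few harmonics, inducing correlations at those specific scales, while a random mask with the same observed fraction spreads the power evenly across all scales:

```python
import numpy as np

n, step = 1024, 8                       # sky pixels; observe 1/8 of them
rng = np.random.default_rng(0)

# regular mask: every 8th patch observed -> strictly periodic window
regular = np.zeros(n)
regular[::step] = 1.0

# random mask: same observed fraction, patches placed at random
random_mask = np.zeros(n)
random_mask[rng.choice(n, size=n // step, replace=False)] = 1.0

def window_power(mask):
    """Power spectrum of the mask (window function), DC term dropped."""
    return np.abs(np.fft.rfft(mask))[1:] ** 2

reg_p, rnd_p = window_power(regular), window_power(random_mask)

# the regular mask piles power onto a few harmonics of the sampling period;
# the random mask has no preferred scale, so its spectrum is nearly flat
assert reg_p.max() / reg_p.mean() > rnd_p.max() / rnd_p.mean()
```

The peaked spectrum of the regular mask is exactly the "periodic pattern in the window function" the abstract describes, and the flat spectrum of the random mask is why the induced correlation is evenly distributed among scales.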
Statistical prediction with Kanerva's sparse distributed memory
NASA Technical Reports Server (NTRS)
Rogers, David
1989-01-01
A new viewpoint of the processing performed by Kanerva's sparse distributed memory (SDM) is presented. In conditions of near- or over-capacity, where the associative-memory behavior of the model breaks down, the processing performed by the model can be interpreted as that of a statistical predictor. Mathematical results are presented which serve as the framework for a new statistical viewpoint of sparse distributed memory and for which the standard formulation of SDM is a special case. This viewpoint suggests possible enhancements to the SDM model, including a procedure for improving the predictiveness of the system based on Holland's work with genetic algorithms, and a method for improving the capacity of SDM even when used as an associative memory.
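For readers unfamiliar with the model, the core SDM write/read cycle can be sketched in a few lines. The address width, number of hard locations, and activation radius below are illustrative choices, not Kanerva's canonical parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits, n_locs, radius = 256, 2000, 111   # address width, hard locations, Hamming radius

addresses = rng.integers(0, 2, size=(n_locs, n_bits))   # fixed random hard locations
counters = np.zeros((n_locs, n_bits))                   # one counter row per location

def activated(addr):
    """Hard locations within the Hamming radius of the address."""
    return np.sum(addresses != addr, axis=1) <= radius

def write(addr, data):
    # increment counters for 1-bits, decrement for 0-bits, at all active locations
    counters[activated(addr)] += 2 * data - 1

def read(addr):
    # sum counters over active locations and threshold at zero
    return (counters[activated(addr)].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, n_bits)
write(pattern, pattern)                   # autoassociative store
noisy = pattern.copy()
flip = rng.choice(n_bits, 20, replace=False)
noisy[flip] ^= 1                          # cue that only partially matches

assert np.array_equal(read(noisy), pattern)   # exact recall from the partial cue
```

Because the activation sets of nearby addresses overlap heavily, the noisy cue still reaches enough of the locations written to, which is the mechanism behind the associative, partial-match retrieval described throughout these abstracts.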
Immunological memory is associative
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, D.J.; Forrest, S.; Perelson, A.S.
1996-12-31
The purpose of this paper is to show that immunological memory is an associative and robust memory that belongs to the class of sparse distributed memories. This class of memories derives its associative and robust nature by sparsely sampling the input space and distributing the data among many independent agents. Other members of this class include a model of the cerebellar cortex and Sparse Distributed Memory (SDM). First we present a simplified account of the immune response and immunological memory. Next we present SDM, and then we show the correlations between immunological memory and SDM. Finally, we show how associative recall in the immune response can be both beneficial and detrimental to the fitness of an individual.
Learning to read aloud: A neural network approach using sparse distributed memory
NASA Technical Reports Server (NTRS)
Joglekar, Umesh Dwarkanath
1989-01-01
An attempt to solve a problem of text-to-phoneme mapping is described which does not appear amenable to solution by use of standard algorithmic procedures. Experiments based on a model of distributed processing are also described. This model (sparse distributed memory (SDM)) can be used in an iterative supervised learning mode to solve the problem. Additional improvements aimed at obtaining better performance are suggested.
NASA Astrophysics Data System (ADS)
Napoli, V.; Yoo, S. H.; Russell, D. R.
2017-12-01
To improve discrimination of small explosions and earthquakes, we developed a new magnitude scale based on the standard Ms:mb discrimination method. In place of 20 second Ms measurements we developed a unified Rayleigh and Love wave magnitude scale (MsU) that is designed to maximize available information from single stations and then combine magnitude estimates into network averages. Additionally, in place of mb(P) measurements we developed an mb(P-coda) magnitude scale, as the properties of the coda make sparse-network mb(P-coda) estimates more robust and less variable than network mb(P) estimates. A previous mb:MsU study conducted in 2013 in the Korean Peninsula shows that the use of MsU in place of standard 20 second Ms leads to increased population separation and reduced scattering. The goals of a combined mb(P-coda):MsU scale are reducing scatter, ensuring applicability at small magnitudes with sparse networks, and improving the overall distribution for mb:Ms earthquake and explosion populations. To test this method we are calculating mb(P-coda) and MsU for a catalog of earthquakes located in and near the Korean Peninsula, for the six North Korean nuclear tests (4.1 < mb < 6.3), and for the 3 aftershocks to date that occurred after the sixth test (2.6 < ML < 4.0). Compared to the previous 2013 study, we expect to see greater separation in the populations and less scattering with the inclusion of mb(P-coda) and with the implementation of additional filters for MsU to improve signal-to-noise levels; this includes S-transform filtering for polarization and off-azimuth signal reduction at regional distances. As we are expanding our database of mb(P-coda):MsU measurements in the Korean Peninsula to determine the earthquake and explosion distribution, this research will address the limitations and potential for discriminating small magnitude events using sparse networks.
Ding, Weifu; Zhang, Jiangshe; Leung, Yee
2016-10-01
In this paper, we predict air pollutant concentrations using a feedforward artificial neural network inspired by the mechanism of the human brain as a useful alternative to traditional statistical modeling techniques. The neural network is trained based on sparse response back-propagation, in which only a small number of neurons respond to the specified stimulus simultaneously, providing a high convergence rate for the trained network in addition to low energy consumption and greater generalization. Our method is evaluated on Hong Kong air monitoring station data and corresponding meteorological variables, for which five air quality parameters were gathered at four monitoring stations in Hong Kong over 4 years (2012-2015). Our results show that our training method offers advantages in prediction precision, effectiveness, and generalization over traditional linear regression algorithms and over a feedforward artificial neural network trained using traditional back-propagation.
Pandey, G.R.; Cayan, D.R.; Dettinger, M.D.; Georgakakos, K.P.
2000-01-01
A hybrid (physical-statistical) scheme is developed to resolve the finescale distribution of daily precipitation over complex terrain. The scheme generates precipitation by combining information from the upper-air conditions and from sparsely distributed station measurements; thus, it proceeds in two steps. First, an initial estimate of the precipitation is made using a simplified orographic precipitation model. It is a steady-state, multilayer, and two-dimensional model following the concepts of Rhea. The model is driven by the 2.5° × 2.5° gridded National Oceanic and Atmospheric Administration-National Centers for Environmental Prediction upper-air profiles, and its parameters are tuned using the observed precipitation structure of the region. Precipitation is generated assuming a forced lifting of the air parcels as they cross the mountain barrier following a straight trajectory. Second, the precipitation is adjusted using errors between derived precipitation and observations from nearby sites. The study area covers the northern half of California, including coastal mountains, central valley, and the Sierra Nevada. The model is run for a 5-km rendition of terrain for days of January-March over the period of 1988-95. A jackknife analysis demonstrates the validity of the approach. The spatial and temporal distributions of the simulated precipitation field agree well with the observed precipitation. Further, a mapping of model performance indices (correlation coefficients, model bias, root-mean-square error, and threat scores) from an array of stations from the region indicates that the model performs satisfactorily in resolving daily precipitation at 5-km resolution.
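The scheme's second step, adjusting the gridded model estimate with station errors, can be caricatured with inverse-distance weighting. This is a simplified stand-in for the paper's actual adjustment procedure; the function name, coordinates, and values are all invented:

```python
import numpy as np

def idw_correction(grid_xy, model_grid, stn_xy, stn_obs, stn_model, power=2.0):
    """Adjust a gridded model estimate by spreading station
    (observed - modeled) errors with inverse-distance weights."""
    errors = stn_obs - stn_model                       # residuals at station sites
    # pairwise distances between grid cells and stations
    d = np.linalg.norm(grid_xy[:, None, :] - stn_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6) ** power             # clamp to avoid divide-by-zero
    w /= w.sum(axis=1, keepdims=True)                  # normalize weights per cell
    return model_grid + w @ errors

# toy example: a grid cell exactly at a station inherits its full residual
grid = np.array([[0.0, 0.0], [50.0, 50.0]])            # two grid cells (km)
stations = np.array([[0.0, 0.0], [100.0, 100.0]])      # two stations (km)
corrected = idw_correction(grid, np.array([5.0, 5.0]), stations,
                           np.array([8.0, 4.0]),       # observed precipitation
                           np.array([5.0, 5.0]))       # modeled precipitation

assert abs(corrected[0] - 8.0) < 1e-3                  # co-located cell matches the gauge
```

The design choice mirrors the paper's logic: the physical model supplies the finescale orographic pattern, while the statistical step pins the field to the sparse observations where they exist.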
1982-10-27
are buried within a much larger, special-purpose package. We regret such omissions, but to have reached the practitioners in each of the diverse ... sparse matrix (form PAQ); 4. Method of solution: Distribution count sort; 5. Programming language: FORTRAN; 6. Precision: Single and double precision; 7 ...
Detecting earthquakes over a seismic network using single-station similarity measures
NASA Astrophysics Data System (ADS)
Bergen, Karianne J.; Beroza, Gregory C.
2018-06-01
New blind waveform-similarity-based detection methods, such as Fingerprint and Similarity Thresholding (FAST), have shown promise for detecting weak signals in long-duration, continuous waveform data. While blind detectors are capable of identifying similar or repeating waveforms without templates, they can also be susceptible to false detections due to local correlated noise. In this work, we present a set of three new methods that allow us to extend single-station similarity-based detection over a seismic network: event-pair extraction, pairwise pseudo-association, and event resolution complete a post-processing pipeline that combines single-station similarity measures (e.g. FAST sparse similarity matrix) from each station in a network into a list of candidate events. The core technique, pairwise pseudo-association, leverages the pairwise structure of event detections in its network detection model, which allows it to identify events observed at multiple stations in the network without modeling the expected moveout. Though our approach is general, we apply it to extend FAST over a sparse seismic network. We demonstrate that our network-based extension of FAST is sensitive and maintains a low false detection rate. As a test case, we apply our approach to 2 weeks of continuous waveform data from five stations during the foreshock sequence prior to the 2014 Mw 8.2 Iquique earthquake. Our method identifies nearly five times as many events as the local seismicity catalogue (including 95 per cent of the catalogue events), and less than 1 per cent of these candidate events are false detections.
Thermal infrared remote sensing of water temperature in riverine landscapes
Handcock, Rebecca N.; Torgersen, Christian E.; Cherkauer, Keith A.; Gillespie, Alan R.; Klement, Tockner; Faux, Russell N.; Tan, Jing; Carbonneau, Patrice E.; Piégay, Hervé
2012-01-01
Water temperature in riverine landscapes is an important regional indicator of water quality that is influenced by both ground- and surface-water inputs, and indirectly by land use in the surrounding watershed (Brown and Krygier, 1970; Beschta et al., 1987; Chen et al., 1998; Poole and Berman, 2001). Coldwater fishes such as salmon and trout are sensitive to elevated water temperature; therefore, water temperature must meet management guidelines and quality standards, which aim to create a healthy environment for endangered populations (McCullough et al., 2009). For example, in the USA, the Environmental Protection Agency (EPA) has established water quality standards to identify specific temperature criteria to protect coldwater fishes (Environmental Protection Agency, 2003). Trout and salmon can survive in cool-water refugia even when temperatures at other measurement locations are at or above the recommended maximums (Ebersole et al., 2001; Baird and Krueger, 2003; High et al., 2006). Spatially extensive measurements of water temperature are necessary to locate these refugia, to identify the location of ground- and surface-water inputs to the river channel, and to identify thermal pollution sources. Regional assessment of water temperature in streams and rivers has been limited by sparse sampling in both space and time. Water temperature has typically been measured using a network of widely distributed instream gages, which record the temporal change of the bulk, or kinetic, temperature of the water (Tk) at specific locations. For example, the State of Washington (USA) recorded water quality conditions at 76 stations within the Puget Lowlands ecoregion, which contains 12,721 km of streams and rivers (Washington Department of Ecology, 1998). Such gages are sparsely distributed, are typically located only in larger streams and rivers, and give limited information about the spatial distribution of water temperature.
Thermal infrared remote sensing of water temperature in riverine landscapes: Chapter 5
Handcock, R. N.; Torgersen, Christian E.; Cherkauer, K. A.; Gillespie, A. R.; Tockner, K.; Faux, R. N.; Tan, Jing; Carbonneau, Patrice E.; Piégay, Hervé
2012-01-01
Water temperature in riverine landscapes is an important regional indicator of water quality that is influenced by both ground- and surface-water inputs, and indirectly by land use in the surrounding watershed (Brown and Krygier, 1970; Beschta et al., 1987; Chen et al., 1998; Poole and Berman, 2001). Coldwater fishes such as salmon and trout are sensitive to elevated water temperature; therefore, water temperature must meet management guidelines and quality standards, which aim to create a healthy environment for endangered populations (McCullough et al., 2009). For example, in the USA, the Environmental Protection Agency (EPA) has established water quality standards to identify specific temperature criteria to protect coldwater fishes (Environmental Protection Agency, 2003). Trout and salmon can survive in cool-water refugia even when temperatures at other measurement locations are at or above the recommended maximums (Ebersole et al., 2001; Baird and Krueger, 2003; High et al., 2006). Spatially extensive measurements of water temperature are necessary to locate these refugia, to identify the location of ground- and surface-water inputs to the river channel, and to identify thermal pollution sources. Regional assessment of water temperature in streams and rivers has been limited by sparse sampling in both space and time. Water temperature has typically been measured using a network of widely distributed instream gages, which record the temporal change of the bulk, or kinetic, temperature of the water (Tk) at specific locations. For example, the State of Washington (USA) recorded water quality conditions at 76 stations within the Puget Lowlands ecoregion, which contains 12,721 km of streams and rivers (Washington Department of Ecology, 1998). Such gages are sparsely distributed, are typically located only in larger streams and rivers, and give limited information about the spatial distribution of water temperature (Cherkauer et al., 2005).
Investigation of wall-bounded turbulence over sparsely distributed roughness
NASA Astrophysics Data System (ADS)
Placidi, Marco; Ganapathisubramani, Bharath
2011-11-01
The effects of sparsely distributed roughness elements on the structure of a turbulent boundary layer are examined by performing a series of Particle Image Velocimetry (PIV) experiments in a wind tunnel. From the literature, the best way to characterise a rough wall, especially one where the density of roughness elements is sparse, is unclear. In this study, rough surfaces consisting of sparsely and uniformly distributed LEGO® blocks are used. Five different patterns are adopted in order to examine the effects of frontal solidity (λf, frontal area of the roughness elements per unit wall-parallel area), plan solidity (λp, plan area of roughness elements per unit wall-parallel area) and the geometry of the roughness element (square and cylindrical elements) on the turbulence structure. The Karman number, Reτ, has been matched at approximately 2300 in order to compare across the different cases. In the talk, we will present detailed analysis of mean and rms velocity profiles, Reynolds stresses and quadrant decomposition.
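The two solidity parameters above are simple area ratios; a minimal sketch of their computation for a uniform array of cuboid elements follows (the dimensions and element count are illustrative, not values from the study):

```python
# Sketch: frontal and plan solidity for a uniform array of cuboid roughness
# elements. All dimensions below are illustrative, not taken from the paper.

def solidities(n_elements, width, depth, height, wall_area):
    """Return (lambda_f, lambda_p) for n cuboids on a wall patch.

    lambda_f: total frontal area (width x height) per unit wall-parallel area.
    lambda_p: total plan area (width x depth) per unit wall-parallel area.
    """
    lam_f = n_elements * width * height / wall_area
    lam_p = n_elements * width * depth / wall_area
    return lam_f, lam_p

# Example: 100 LEGO-like blocks (16 mm x 16 mm plan, 10 mm tall) on a 1 m^2 patch.
lam_f, lam_p = solidities(100, 0.016, 0.016, 0.010, 1.0)
```

Varying the spacing (and hence `n_elements` per unit area) changes both solidities together; varying element height alone changes only λf, which is how the two parameters can be decoupled experimentally.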
Two demonstrators and a simulator for a sparse, distributed memory
NASA Technical Reports Server (NTRS)
Brown, Robert L.
1987-01-01
Described are two programs demonstrating different aspects of Kanerva's Sparse, Distributed Memory (SDM). These programs run on Sun 3 workstations, one using color, and have straightforward graphically oriented user interfaces and graphical output. Presented are descriptions of the programs, how to use them, and what they show. Additionally, this paper describes the software simulator behind each program.
An empirical investigation of sparse distributed memory using discrete speech recognition
NASA Technical Reports Server (NTRS)
Danforth, Douglas G.
1990-01-01
Presented here is a step by step analysis of how the basic Sparse Distributed Memory (SDM) model can be modified to enhance its generalization capabilities for classification tasks. Data is taken from speech generated by a single talker. Experiments are used to investigate the theory of associative memories and the question of generalization from specific instances.
Communication requirements of sparse Cholesky factorization with nested dissection ordering
NASA Technical Reports Server (NTRS)
Naik, Vijay K.; Patrick, Merrell L.
1989-01-01
Load distribution schemes for minimizing the communication requirements of the Cholesky factorization of dense and sparse, symmetric, positive definite matrices on multiprocessor systems are presented. The total data traffic in factoring an n x n sparse symmetric positive definite matrix representing an n-vertex regular two-dimensional grid graph using n^alpha (alpha <= 1) processors is shown to be O(n^(1 + alpha/2)). It is O(n) when n^alpha (alpha >= 1) processors are used. Under the conditions of uniform load distribution, these results are shown to be asymptotically optimal.
Distributed memory compiler design for sparse problems
NASA Technical Reports Server (NTRS)
Wu, Janet; Saltz, Joel; Berryman, Harry; Hiranandani, Seema
1991-01-01
A compiler and runtime support mechanism is described and demonstrated. The methods presented are capable of solving a wide range of sparse and unstructured problems in scientific computing. The compiler takes as input a FORTRAN 77 program enhanced with specifications for distributing data, and the compiler outputs a message passing program that runs on a distributed memory computer. The runtime support for this compiler is a library of primitives designed to efficiently support irregular patterns of distributed array accesses and irregular distributed array partitions. A variety of Intel iPSC/860 performance results obtained through the use of this compiler are presented.
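Runtime support of this kind is commonly built around an inspector/executor pattern: the inspector analyzes the irregular index set before the loop runs, and the executor performs the gathers using that analysis. A minimal sketch of the idea follows; the function names and the single-owner layout are illustrative, not the actual library API:

```python
# Sketch of the inspector/executor pattern behind such runtime libraries.
# Names and data layout are illustrative, not the paper's actual primitives.

def inspector(global_indices, my_lo, my_hi):
    """Split the indices an irregular loop will touch into local vs. off-processor.
    A real runtime would also build a communication schedule from `remote`."""
    local = [i for i in global_indices if my_lo <= i < my_hi]
    remote = [i for i in global_indices if not (my_lo <= i < my_hi)]
    return local, remote

def executor(global_indices, my_lo, my_hi, local_store, fetch_remote):
    """Gather values for the loop body, using the split built by the inspector."""
    out = []
    for i in global_indices:
        if my_lo <= i < my_hi:
            out.append(local_store[i - my_lo])
        else:
            out.append(fetch_remote(i))  # stands in for a message-passing receive
    return out

# This "processor" owns global indices [0, 4) with values 10..13; index 5 is remote.
local, remote_idx = inspector([2, 5, 0], 0, 4)
gathered = executor([2, 5, 0], 0, 4, [10, 11, 12, 13], lambda i: {5: 50}[i])
```

The payoff is amortization: the inspector's analysis is done once, and the executor's communication schedule is reused on every iteration of the enclosing loop.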
NASA Technical Reports Server (NTRS)
Kanerva, P.
1986-01-01
To determine the relation of the sparse, distributed memory to other architectures, a broad review of the literature was made. The memory is called a pattern memory because it works with large patterns of features (high-dimensional vectors). A pattern is stored in a pattern memory by distributing it over a large number of storage elements and by superimposing it over other stored patterns. A pattern is retrieved by mathematical or statistical reconstruction from the distributed elements. Three pattern memories are discussed.
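The distribute-and-superimpose storage scheme can be sketched numerically. The following is a minimal model of Kanerva's SDM, assuming random binary hard locations, Hamming-ball activation, and bipolar counters; all sizes and the activation radius are illustrative:

```python
import numpy as np

# Minimal sketch of a sparse distributed memory (all parameters illustrative).
rng = np.random.default_rng(0)
n_bits, n_locations, radius = 256, 2000, 116   # activation radius in Hamming distance

hard_addrs = rng.integers(0, 2, (n_locations, n_bits))  # fixed random hard locations
counters = np.zeros((n_locations, n_bits), dtype=int)

def active(addr):
    """Hard locations whose address lies within `radius` Hamming bits of addr."""
    return np.sum(hard_addrs != addr, axis=1) <= radius

def write(addr, data):
    counters[active(addr)] += 2 * data - 1          # bipolar increment/decrement

def read(addr):
    return (counters[active(addr)].sum(axis=0) > 0).astype(int)  # majority vote

pattern = rng.integers(0, 2, n_bits)
write(pattern, pattern)                             # autoassociative store
cue = pattern.copy()
cue[:10] ^= 1                                       # corrupt 10 bits of the cue
recalled = read(cue)
```

Because the pattern is superimposed on every activated location, a cue that only partially matches still activates mostly the same locations, and the majority vote reconstructs the stored pattern, which is the associative-retrieval property described above.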
Deploying temporary networks for upscaling of sparse network stations
USDA-ARS?s Scientific Manuscript database
Soil observations networks at the national scale play an integral role in hydrologic modeling, drought assessment, agricultural decision support, and our ability to understand climate change. Understanding soil moisture variability is necessary to apply these measurements to model calibration, busin...
Multivariate quadrature for representing cloud condensation nuclei activity of aerosol populations
Fierce, Laura; McGraw, Robert L.
2017-07-26
Sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the quadrature method of moments. Given a set of moment constraints, we show how linear programming, combined with an entropy-inspired cost function, can be used to construct optimized quadrature representations of aerosol distributions. The sparse representations derived from this approach accurately reproduce cloud condensation nuclei (CCN) activity for realistically complex distributions simulated by a particle-resolved model. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike the commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy approach described here is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a particle-based aerosol scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.
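The core idea of a quadrature representation is that a handful of weighted nodes can reproduce a distribution's moments exactly. The paper's linear-programming construction is more general; the sketch below shows only the simplest instance, a two-node quadrature built from the first three moments (the moment values are illustrative):

```python
import numpy as np

# Simplified two-node quadrature representation of an aerosol size distribution,
# built from its first three moments. This is a basic instance of the quadrature
# method of moments, not the paper's linear-programming construction.

def two_node_quadrature(m0, m1, m2):
    mean = m1 / m0
    var = m2 / m0 - mean**2
    sigma = np.sqrt(var)
    nodes = np.array([mean - sigma, mean + sigma])
    weights = np.array([m0 / 2, m0 / 2])
    return nodes, weights

# Illustrative moments of a particle-diameter distribution (number, sum of
# diameters, sum of squared diameters).
m0, m1, m2 = 1000.0, 120.0, 20.0
nodes, weights = two_node_quadrature(m0, m1, m2)
```

Any integral of a smooth property over the distribution (CCN activity, for instance) can then be approximated as a weighted sum over the two nodes, which is what makes the representation cheap enough for large-scale transport models.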
Multivariate quadrature for representing cloud condensation nuclei activity of aerosol populations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fierce, Laura; McGraw, Robert L.
Here, sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the quadrature method of moments. Given a set of moment constraints, we show how linear programming, combined with an entropy-inspired cost function, can be used to construct optimized quadrature representations of aerosol distributions. The sparse representations derived from this approach accurately reproduce cloud condensation nuclei (CCN) activity for realistically complex distributions simulated by a particle-resolved model. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike the commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy approach described here is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a particle-based aerosol scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.
New shape models of asteroids reconstructed from sparse-in-time photometry
NASA Astrophysics Data System (ADS)
Durech, Josef; Hanus, Josef; Vanco, Radim; Oszkiewicz, Dagmara Anna
2015-08-01
Asteroid physical parameters - the shape, the sidereal rotation period, and the spin axis orientation - can be reconstructed by the lightcurve inversion method from disk-integrated photometry that is either dense (classical lightcurves) or sparse in time. We will review our recent progress in asteroid shape reconstruction from sparse photometry. Finding a unique solution of the inverse problem is time consuming because the sidereal rotation period has to be found by scanning a wide interval of possible periods. This can be solved efficiently by splitting the period parameter space into small parts that are sent to the computers of volunteers and processed in parallel. We will show how this approach of distributed computing works with currently available sparse photometry processed in the framework of the project Asteroids@home. In particular, we will show results based on the Lowell Photometric Database. The method produces reliable asteroid models with a very low rate of false solutions, and the pipelines and codes can be applied directly to other sources of sparse photometry - Gaia data, for example. We will present the distribution of spin axes of hundreds of asteroids, discuss the dependence of the spin obliquity on the size of an asteroid, and show examples of the spin-axis distribution in asteroid families that confirm the Yarkovsky/YORP evolution scenario.
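The period scan that dominates the cost can be sketched in miniature: at each trial period, fit a periodic model to the sparse photometry and keep the period with the smallest residual. Real shape inversion fits a full shape/spin model rather than a sinusoid, and the data below are synthetic and noiseless; this only illustrates why the scan parallelizes so naturally (each chunk of trial periods is independent):

```python
import numpy as np

# Toy period scan over sparse-in-time photometry. The sinusoid stands in for
# the full shape/spin model; times, period, and amplitude are synthetic.

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 30.0, 150))           # sparse, irregular epochs (days)
true_period = 1.7
flux = 1.0 + 0.2 * np.sin(2 * np.pi * t / true_period)

def residual(period):
    """Least-squares misfit of a sinusoid-plus-constant model at a trial period."""
    w = 2 * np.pi / period
    A = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, flux, rcond=None)
    return np.sum((A @ coef - flux) ** 2)

trial_periods = np.arange(1.0, 3.0, 0.001)          # each chunk can go to one volunteer
best = trial_periods[np.argmin([residual(p) for p in trial_periods])]
```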
Kanerva's sparse distributed memory with multiple hamming thresholds
NASA Technical Reports Server (NTRS)
Pohja, Seppo; Kaski, Kimmo
1992-01-01
If the stored input patterns of Kanerva's Sparse Distributed Memory (SDM) are highly correlated, utilization of the storage capacity is very low compared to the case of uniformly distributed random input patterns. We consider a variation of SDM that has better storage capacity utilization for correlated input patterns. This approach uses a separate selection threshold for each physical storage address, or hard location. The selection of the hard locations for reading or writing can be done in parallel, which SDM implementations can exploit.
Simulating the Permafrost Distribution on the Seward Peninsula, Alaska
NASA Astrophysics Data System (ADS)
Busey, R.; Hinzman, L. D.; Yoshikawa, K.; Liston, G. E.
2005-12-01
Permafrost extent has been estimated using an equivalent latitude / elevation model based upon high-quality climate, terrain, and soil property data. This research extends a previously developed model to a relatively data-sparse region. We are applying the general equivalent latitude model developed for the Caribou-Poker Creeks Research Watershed over the much larger area of the Seward Peninsula, Alaska. This region of sub-Arctic Alaska is a proxy for a warmer Arctic due to the broad expanses of tussock tundra, invading shrubs and fragile permafrost with average temperatures just below freezing. The equivalent latitude model combines elevation, slope, and aspect with snow cover, where the snow cover distribution was defined using MicroMet and SnowModel. Source data for the distributed snow model came from meteorological stations across the Seward Peninsula from the National Weather Service, SNOTEL, RAWS, and our own stations. Simulations of permafrost extent will enable us to compare the current distribution to that existing during past climates and estimate the future state of permafrost on the Seward Peninsula. The broadest impacts to the terrestrial arctic regions will result through consequent effects of changing permafrost structure and extent. As the climate differentially warms in summer and winter, the permafrost will become warmer, the active layer (the layer of soil above the permafrost that annually experiences freeze and thaw) will become thicker, the lower boundary of permafrost will become shallower and permafrost extent will decrease in area. These simple structural changes will affect every aspect of the surface water and energy balances. As permafrost extent decreases, there is more infiltration to groundwater. This has significant impacts on large and small scales.
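The slope-and-aspect part of an equivalent latitude model can be sketched with one standard textbook relation (after Lee): a slope of inclination k facing azimuth A (measured from north) receives the direct solar beam like a horizontal surface at latitude sin(lat_eq) = cos(k)·sin(lat) + sin(k)·cos(lat)·cos(A). This is a generic relation, not necessarily the exact form used in the study, which also folds in elevation and snow-cover distribution:

```python
import math

# One standard form of the equivalent-latitude relation (after Lee). This is
# a generic textbook formula, not necessarily the study's exact model.

def equivalent_latitude(lat_deg, slope_deg, aspect_deg):
    """Aspect is measured in degrees from north (180 = south-facing)."""
    lat, k, a = map(math.radians, (lat_deg, slope_deg, aspect_deg))
    return math.degrees(math.asin(math.cos(k) * math.sin(lat)
                                  + math.sin(k) * math.cos(lat) * math.cos(a)))

# A 10-degree south-facing slope at 65 N behaves like flat ground at 55 N;
# the same slope facing north behaves like flat ground at 75 N.
south = equivalent_latitude(65.0, 10.0, 180.0)
north = equivalent_latitude(65.0, 10.0, 0.0)
```

Mapping each grid cell of a DEM to its equivalent latitude is what lets sparse point measurements of permafrost presence be extrapolated across complex terrain.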
Representation-Independent Iteration of Sparse Data Arrays
NASA Technical Reports Server (NTRS)
James, Mark
2007-01-01
An approach is described for iterating over massively large arrays containing sparse data in a way that is independent of how the contents of the sparse arrays are laid out in memory. What is unique and important here is the decoupling of the iteration over the sparse set of array elements from how they are internally represented in memory, enabling efficient iteration that is backward compatible with existing schemes for representing sparse arrays as well as with new ones. A functional interface is defined for implementing sparse arrays in any modern programming language, with a particular focus on the Chapel programming language. Examples are provided that show the translation of a loop that computes a matrix-vector product into this representation for both the distributed and non-distributed cases. This work is directly applicable to NASA and its High Productivity Computing Systems (HPCS) program, in which JPL and our current program are engaged. The goal of this program is to create powerful, scalable, and economically viable high-powered computer systems suitable for use in national security and industry by 2010. This is important to NASA because of its computationally intensive requirements for analyzing and understanding the volumes of science data from our returned missions.
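The decoupling described above can be sketched in a few lines: any storage backend that yields (row, column, value) triples supports the same generic kernels, so the kernel never sees the memory layout. The interface and class names below are illustrative (the paper targets Chapel):

```python
# Sketch of representation-independent sparse iteration: two storage layouts
# expose the same nonzeros() iterator, and a generic kernel uses only that.
# Names are illustrative; the paper defines a functional interface for Chapel.

class DictOfKeys:
    def __init__(self, entries):                  # entries: {(i, j): value}
        self.entries = dict(entries)
    def nonzeros(self):
        yield from ((i, j, v) for (i, j), v in self.entries.items())

class CSR:
    def __init__(self, indptr, indices, data):    # compressed sparse row layout
        self.indptr, self.indices, self.data = indptr, indices, data
    def nonzeros(self):
        for i in range(len(self.indptr) - 1):
            for k in range(self.indptr[i], self.indptr[i + 1]):
                yield i, self.indices[k], self.data[k]

def matvec(sparse, x, nrows):
    """Generic matrix-vector product written only against the iterator."""
    y = [0.0] * nrows
    for i, j, v in sparse.nonzeros():
        y[i] += v * x[j]
    return y

# The same matrix [[1, 0], [2, 3]] in both layouts gives the same result.
dok = DictOfKeys({(0, 0): 1.0, (1, 0): 2.0, (1, 1): 3.0})
csr = CSR([0, 1, 3], [0, 0, 1], [1.0, 2.0, 3.0])
```

Because `matvec` depends only on the iterator contract, a new layout (distributed blocks, hash-partitioned, compressed) can be swapped in without touching any kernel code.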
Development of GIS-based Wind Potential Map of Makkah Province, Saudi Arabia
NASA Astrophysics Data System (ADS)
Nayyar, Z. A.; Zaigham, N. A.; Aburizaiza, O. S.; Mahar, G. A.; Eusufi, S. N.
2011-12-01
The global energy scenario is changing drastically toward decline, as major new discoveries of fossil fuels are not emerging significantly on a regional basis. In the case of Saudi Arabia, one of the largest fossil fuel producers, the major oil fields have started to deplete significantly, as revealed by the literature study. Considering the future energy crisis, other renewable options have now become imperative to consider for national development. Wind energy is one of them. The development of wind energy technology requires baseline data on wind trends and their potential. Under the present study, an attempt has been made to develop a wind power density map of the Makkah Province of Saudi Arabia based on the meteorological data collected at sparsely located weather stations. GIS application has provided a good option to interpolate the gap areas between the sparsely located weather recording stations. This paper describes the methodology and results of the present study.
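One common GIS technique for filling the gaps between sparse stations is inverse-distance-weighted (IDW) interpolation. The abstract does not specify which interpolator was used, so the sketch below is purely illustrative:

```python
import math

# Sketch of inverse-distance-weighted (IDW) interpolation, one common GIS
# method for filling gaps between sparse stations. The paper does not state
# its interpolator; station coordinates and values below are illustrative.

def idw(x, y, stations, power=2.0):
    """stations: list of (sx, sy, value). Returns the interpolated value at (x, y)."""
    num = den = 0.0
    for sx, sy, v in stations:
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:
            return v                       # exact hit on a station
        w = 1.0 / d**power
        num += w * v
        den += w
    return num / den

# Two stations 10 km apart with wind power densities 4 and 8 (arbitrary units).
stations = [(0.0, 0.0, 4.0), (10.0, 0.0, 8.0)]
mid = idw(5.0, 0.0, stations)              # midpoint: equal weights, so the mean
```

The `power` parameter controls how quickly a station's influence decays with distance; larger values make the interpolated surface hug the nearest station more tightly.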
Trace Element Cycling in Lithogenic Particles at Station ALOHA
NASA Astrophysics Data System (ADS)
Morton, P. L.; Weisend, R.; Landing, W. M.; Fitzsimmons, J. N.; Hayes, C. T.; Boyle, E. A.
2014-12-01
Trace element cycling in marine particles is influenced by atmospheric deposition, vertical export, biological uptake and remineralization, scavenging, and lateral transport processes. To investigate the cycling of lithogenic particles in the central North Pacific Ocean, surface and vertical profile samples of marine suspended particulate matter (SPM) were collected July-August 2012 during the HOE-DYLAN cruises at Station ALOHA. In the late summer, atmospheric dust inputs from the Gobi desert (which peak during the spring, April-May) were sparse, as indicated by low surface particulate Ti (pTi) concentrations. In contrast, surface pAl concentrations did not follow pTi trends as expected, but appear to be dominated by scavenging/uptake of dissolved Al during diatom blooms. Surface pMn concentrations were low, but vertical profiles of pMn and pMn/pTi reveal a strong sedimentary source at 200 m, originating from the Hawaiian continental shelf through a combination of redox mobilization and resuspension processes. The redox active elements Ce and Co can have chemistries similar to that of Mn, but in these samples the pCe and pCo distributions were distinct from Mn and each other in both surface trends and vertical profiles. Surface pREE (e.g., La, Ce, Pr) were highest during the earliest sampling events and quickly decreased to consistently low concentrations, while vertical distributions were characterized by scavenging onto biotic particles and mid-depth inputs. The surface particulate Co trend is similar to those of pAl and pP, while the pCo vertical profiles reflect surface enrichment but low concentrations and little variability at depth. A second, complementary poster is also being presented which examines the biological influence over particulate trace element cycling (Weisend et al., "Particulate Trace Element Cycling in a Diatom Bloom at Station ALOHA").
Return probabilities and hitting times of random walks on sparse Erdős–Rényi graphs.
Martin, O C; Sulc, P
2010-03-01
We consider random walks on random graphs, focusing on return probabilities and hitting times for sparse Erdős–Rényi graphs. Using the tree approach, which is expected to be exact in the large graph limit, we show how to solve for the distribution of these quantities and we find that these distributions exhibit a form of self-similarity.
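The quantity being solved for can be illustrated by direct simulation. The paper obtains these distributions analytically via the tree approximation; the Monte Carlo sketch below (parameters illustrative, mean degree 3 in the sparse regime) only shows what a return probability is:

```python
import random

# Monte Carlo illustration of a return probability on a sparse random graph:
# estimate P(a walk started at node 0 returns to 0 within t_max steps).
# The paper computes such distributions analytically; this is only a sketch.

def erdos_renyi(n, p, rng):
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def return_probability(adj, t_max, trials, rng):
    hits = 0
    for _ in range(trials):
        node = 0
        for _ in range(t_max):
            if not adj[node]:
                break                      # walker stuck on an isolated node
            node = rng.choice(adj[node])
            if node == 0:
                hits += 1
                break
    return hits / trials

rng = random.Random(42)
adj = erdos_renyi(200, 3 / 200, rng)       # mean degree c = 3: the sparse regime
p_ret = return_probability(adj, t_max=20, trials=5000, rng=rng)
```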
Distributed Compressive Sensing
2009-01-01
…for example, smooth signals are sparse in the Fourier basis, and piecewise smooth signals are sparse in a wavelet basis [8]; the commercial coding standards MP3… Bases including wavelets [8], Gabor bases [8], curvelets [35], etc., are widely used for representation and compression of natural signals, images, and… spikes and the sine waves of a Fourier basis, or the Fourier basis and wavelets. Signals that are sparsely represented in frames or unions of bases can…
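The sparsity premise in these excerpts is easy to demonstrate: a signal built from a few sinusoids is dense in time but has only a few significant coefficients in the Fourier basis. A minimal illustration (frequencies and amplitudes arbitrary):

```python
import numpy as np

# A time-domain signal made of 3 sinusoids is sparse in the Fourier basis:
# only 3 of the 129 rfft coefficients are significant. Values are arbitrary.

n = 256
t = np.arange(n)
signal = (np.sin(2 * np.pi * 5 * t / n)
          + 0.5 * np.sin(2 * np.pi * 12 * t / n)
          + 0.25 * np.sin(2 * np.pi * 40 * t / n))

coeffs = np.fft.rfft(signal)
significant = np.sum(np.abs(coeffs) > 1.0)   # bins 5, 12 and 40 stand out
```

Compressive sensing exploits exactly this: a signal with k significant coefficients in some basis can be recovered from far fewer than n measurements, provided the measurement basis is incoherent with the sparsity basis.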
Regional model-based computerized ionospheric tomography using GPS measurements: IONOLAB-CIT
NASA Astrophysics Data System (ADS)
Tuna, Hakan; Arikan, Orhan; Arikan, Feza
2015-10-01
Three-dimensional imaging of the electron density distribution in the ionosphere is a crucial task for investigating ionospheric effects. Dual-frequency Global Positioning System (GPS) satellite signals can be used to estimate the slant total electron content (STEC) along the propagation path between a GPS satellite and a ground-based receiver station. However, the estimated GPS-STEC is too sparse and too nonuniformly distributed to obtain reliable 3-D electron density distributions from the measurements alone. Standard tomographic reconstruction techniques are not accurate or reliable enough to represent the full complexity of the variable ionosphere. On the other hand, model-based electron density distributions are produced according to the general trends of the ionosphere, and these distributions do not agree with measurements, especially for geomagnetically active hours. In this study, a regional 3-D electron density distribution reconstruction method, namely IONOLAB-CIT, is proposed to assimilate GPS-STEC into physical ionospheric models. The proposed method is based on an iterative optimization framework that tracks deviations from the ionospheric model in terms of F2 layer critical frequency and maximum ionization height, obtained by comparing STEC generated with the International Reference Ionosphere extended to Plasmasphere (IRI-Plas) model against GPS-STEC. The suggested tomography algorithm is applied successfully to the reconstruction of electron density profiles over Turkey during quiet and disturbed hours of the ionosphere, using the Turkish National Permanent GPS Network.
Chiang, Andrea; Dreger, Douglas S.; Ford, Sean R.; ...
2014-07-08
In this study, we investigate the 14 September 1988 U.S.–Soviet Joint Verification Experiment nuclear test at the Semipalatinsk test site in eastern Kazakhstan and two nuclear explosions conducted less than 10 years later at the Chinese Lop Nor test site. These events were very sparsely recorded by stations located within 1600 km, and in each case only three or four stations were available in the regional distance range. We have utilized a regional distance seismic waveform method fitting long-period, complete, three-component waveforms jointly with first-motion observations from regional stations and teleseismic arrays. The combination of long-period waveforms and first-motion observations provides a unique discrimination of these sparsely recorded events in the context of the Hudson et al. (1989) source-type diagram. We demonstrate through a series of jackknife tests and sensitivity analyses that the source type of the explosions is well constrained. One event, a 1996 Lop Nor shaft explosion, displays large Love waves and possibly reversed Rayleigh waves at one station, indicative of a large F-factor. We show the combination of long-period waveforms and P-wave first motions are able to discriminate this event as explosion-like and distinct from earthquakes and collapses. We further demonstrate the behavior of network sensitivity solutions for models of tectonic release and spall-based tensile damage over a range of F-factors and K-factors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, Andrea; Dreger, Douglas S.; Ford, Sean R.
Here in this study, we investigate the 14 September 1988 U.S.–Soviet Joint Verification Experiment nuclear test at the Semipalatinsk test site in eastern Kazakhstan and two nuclear explosions conducted less than 10 years later at the Chinese Lop Nor test site. These events were very sparsely recorded by stations located within 1600 km, and in each case only three or four stations were available in the regional distance range. We have utilized a regional distance seismic waveform method fitting long-period, complete, three-component waveforms jointly with first-motion observations from regional stations and teleseismic arrays. The combination of long-period waveforms and first-motion observations provides a unique discrimination of these sparsely recorded events in the context of the Hudson et al. (1989) source-type diagram. We demonstrate through a series of jackknife tests and sensitivity analyses that the source type of the explosions is well constrained. One event, a 1996 Lop Nor shaft explosion, displays large Love waves and possibly reversed Rayleigh waves at one station, indicative of a large F-factor. We show the combination of long-period waveforms and P-wave first motions are able to discriminate this event as explosion-like and distinct from earthquakes and collapses. We further demonstrate the behavior of network sensitivity solutions for models of tectonic release and spall-based tensile damage over a range of F-factors and K-factors.
NASA Technical Reports Server (NTRS)
Jaeckel, Louis A.
1988-01-01
In Kanerva's Sparse Distributed Memory, writing to and reading from the memory are done in relation to spheres in an n-dimensional binary vector space. Thus it is important to know how many points are in the intersection of two spheres in this space. Two proofs are given of Wang's formula for spheres of unequal radii, and an integral approximation for the intersection in this case.
Regional influences on reconstructed global mean sea level
NASA Astrophysics Data System (ADS)
Natarov, Svetlana I.; Merrifield, Mark A.; Becker, Janet M.; Thompson, Phillip R.
2017-04-01
Reconstructions of global mean sea level (GMSL) based on tide gauge measurements tend to exhibit common multidecadal rate fluctuations over the twentieth century. GMSL rate changes may result from physical drivers, such as changes in radiative forcing or land water storage. Alternatively, these fluctuations may represent artifacts due to sampling limitations inherent in the historical tide gauge network. In particular, a high percentage of tide gauges used in reconstructions, especially prior to the 1950s, are from Europe and North America in the North Atlantic region. Here a GMSL reconstruction based on the reduced space optimal interpolation algorithm is deconstructed, with the contributions of individual tide gauge stations quantified and assessed regionally. It is demonstrated that the North Atlantic region has a disproportionate influence on reconstructed GMSL rate fluctuations prior to the 1950s, notably accounting for a rate minimum in the 1920s and contributing to a rate maximum in the 1950s. North Atlantic coastal sea level fluctuations related to wind-driven ocean volume redistribution likely contribute to these estimated GMSL rate inflections. The findings support previous claims that multidecadal rate changes in GMSL reconstructions are likely related to the geographic distribution of tide gauge stations within a sparse global network.
Approximate method of variational Bayesian matrix factorization/completion with sparse prior
NASA Astrophysics Data System (ADS)
Kawasumi, Ryota; Takeda, Koujin
2018-05-01
We derive the analytical expression of a matrix factorization/completion solution by the variational Bayes method, under the assumption that the observed matrix is originally the product of low-rank, dense and sparse matrices with additive noise. We assume the prior of a sparse matrix is a Laplace distribution by taking matrix sparsity into consideration. Then we use several approximations for the derivation of a matrix factorization/completion solution. Using our solution, we also numerically evaluate the performance of sparse matrix reconstruction in matrix factorization and of missing-element recovery in matrix completion.
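A Laplace prior on the entries corresponds, in the MAP view, to an L1 penalty, and the simplest estimator in that spirit is iterative soft-thresholding. The sketch below shows generic L1-regularized sparse recovery (ISTA), not the paper's variational Bayes derivation; the problem dimensions are illustrative:

```python
import numpy as np

# A Laplace prior corresponds to an L1 penalty; the simplest sparse estimator
# in that spirit is iterative soft-thresholding (ISTA). This is a generic
# sketch of L1-regularized recovery, not the paper's variational Bayes solution.

def soft(v, thresh):
    """Soft-thresholding: the proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

def ista(A, y, lam, n_iter=2000):
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x + step * A.T @ (y - A @ x), step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))                # underdetermined: 50 obs, 100 unknowns
x_true = np.zeros(100)
x_true[[7, 31, 64]] = 5.0                         # 3-sparse ground truth
y = A @ x_true
x_hat = ista(A, y, lam=0.1)
```

The soft-threshold zeroes small coefficients at every step, which is what drives the estimate toward a sparse solution even though the system is underdetermined.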
Advancing Cost-Effective Readiness by Improving the Supply Chain Management of Sparsely Demanded Parts
Gehret, Gregory H.
2015-03-26
Dissertation AFIT-ENS-DS-15-M-256, Air Force Institute of Technology, Air University, Department of the Air Force. Approved for public release; distribution unlimited.
Zheng, Yuanjie; Grossman, Murray; Awate, Suyash P; Gee, James C
2009-01-01
We propose to use the sparseness property of the gradient probability distribution to estimate the intensity nonuniformity in medical images, resulting in two novel automatic methods: a non-parametric method and a parametric method. Our methods are easy to implement because they both solve an iteratively re-weighted least squares problem. They are remarkably accurate as shown by our experiments on images of different imaged objects and from different imaging modalities.
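The "iteratively re-weighted least squares" machinery mentioned above can be shown on the simplest possible case: a robust (L1-like) straight-line fit, where each pass reweights points by the inverse of their residuals so outliers lose influence. This is illustrative only; the paper applies IRLS to estimate a smooth intensity bias field, not a line:

```python
import numpy as np

# Minimal IRLS sketch: robust slope fit through the origin. Each iteration
# solves a weighted least-squares problem, then reweights by 1/|residual|.
# Illustrative only; the paper's IRLS estimates an image bias field.

def irls_slope(x, y, n_iter=50, eps=1e-6):
    w = np.ones_like(x)
    b = 0.0
    for _ in range(n_iter):
        b = np.sum(w * x * y) / np.sum(w * x * x)       # weighted least squares
        w = 1.0 / np.maximum(np.abs(y - b * x), eps)    # re-weight by residuals
    return b

x = np.arange(1.0, 11.0)
y = 2.0 * x
y[-1] += 50.0                                # one gross outlier
b_ols = np.sum(x * y) / np.sum(x * x)        # ordinary least squares: pulled off
b_irls = irls_slope(x, y)                    # IRLS: stays near the true slope 2
```

Downweighting by 1/|residual| makes the weighted least-squares objective approximate an L1 objective, which is why the fit is far less sensitive to the outlier than plain least squares.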
Zheng, Yuanjie; Grossman, Murray; Awate, Suyash P.; Gee, James C.
2013-01-01
We propose to use the sparseness property of the gradient probability distribution to estimate the intensity nonuniformity in medical images, resulting in two novel automatic methods: a non-parametric method and a parametric method. Our methods are easy to implement because they both solve an iteratively re-weighted least squares problem. They are remarkably accurate as shown by our experiments on images of different imaged objects and from different imaging modalities. PMID:20426191
Generative models for discovering sparse distributed representations.
Hinton, G E; Ghahramani, Z
1997-01-01
We describe a hierarchical, generative model that can be viewed as a nonlinear generalization of factor analysis and can be implemented in a neural network. The model uses bottom-up, top-down and lateral connections to perform Bayesian perceptual inference correctly. Once perceptual inference has been performed the connection strengths can be updated using a very simple learning rule that only requires locally available information. We demonstrate that the network learns to extract sparse, distributed, hierarchical representations. PMID:9304685
2016-09-01
…is to fit empirical Beta distributions to observed data, and then to use a randomization approach to make inferences on the difference between… a Ridit analysis on the often sparse data sets in many Flying Qualities applications. The method of this paper is to fit empirical Beta… One such measure is the discrete-probability-distribution version of the (squared) Hellinger distance (Yang & Le Cam, 2000), H^2(P, Q) = 1 - sum_i sqrt(p_i q_i).
A Sparse Bayesian Approach for Forward-Looking Superresolution Radar Imaging
Zhang, Yin; Zhang, Yongchao; Huang, Yulin; Yang, Jianyu
2017-01-01
This paper presents a sparse superresolution approach for high cross-range resolution imaging of forward-looking scanning radar based on the Bayesian criterion. First, a novel forward-looking signal model is established as the product of the measurement matrix and the cross-range target distribution, which is more accurate than the conventional convolution model. Then, based on the Bayesian criterion, the widely-used sparse regularization is adopted as the penalty term to recover the target distribution. The derivation of the cost function is described, and finally, an iterative expression for minimizing this function is presented. In addition, this paper discusses how to estimate the single parameter of the Gaussian noise. With the advantage of a more accurate model, the proposed sparse Bayesian approach enjoys a lower model error. Meanwhile, when compared with conventional superresolution methods, the proposed approach shows high cross-range resolution and small location error. Superresolution results for a simulated point target, scene data, and real measured data are presented to demonstrate the superior performance of the proposed approach. PMID:28604583
Dimension-Factorized Range Migration Algorithm for Regularly Distributed Array Imaging
Guo, Qijia; Wang, Jie; Chang, Tianying
2017-01-01
The two-dimensional planar MIMO array is a popular approach for millimeter wave imaging applications. As a promising practical alternative, sparse MIMO arrays have been devised to reduce the number of antenna elements and transmitting/receiving channels with predictable and acceptable loss in image quality. In this paper, a high precision three-dimensional imaging algorithm is proposed for MIMO arrays of the regularly distributed type, especially the sparse varieties. Termed the Dimension-Factorized Range Migration Algorithm, the new imaging approach factorizes the conventional MIMO Range Migration Algorithm into multiple operations across the sparse dimensions. The thinner the sparse dimensions of the array, the more efficient the new algorithm will be. Advantages of the proposed approach are demonstrated by comparison with the conventional MIMO Range Migration Algorithm and its non-uniform fast Fourier transform based variant in terms of all the important characteristics of the approaches, especially the anti-noise capability. The computation cost is analyzed as well to evaluate the efficiency quantitatively. PMID:29113083
Cross-domain expression recognition based on sparse coding and transfer learning
NASA Astrophysics Data System (ADS)
Yang, Yong; Zhang, Weiyi; Huang, Yong
2017-05-01
Traditional facial expression recognition methods usually assume that the training set and the test set are independent and identically distributed. However, in actual expression recognition applications, this condition is hardly satisfied because of differences in lighting, shade, race, and so on. To solve this problem and improve the performance of expression recognition in practical applications, a novel method based on transfer learning and sparse coding is applied to facial expression recognition. First, a common primitive model, that is, a dictionary, is learned. Then, based on the idea of transfer learning, the learned primitive pattern is transferred to facial expressions and the corresponding feature representation is obtained by sparse coding. Experimental results on the CK+, JAFFE, and NVIE databases show that the transfer learning method based on sparse coding can effectively improve the recognition rate in the cross-domain expression recognition task and is suitable for practical facial expression recognition applications.
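The sparse-coding step can be sketched with a toy greedy matching-pursuit coder; the random dictionary below merely stands in for the learned primitive model, and the signal and parameters are illustrative assumptions rather than anything from the paper.

```python
import numpy as np

def matching_pursuit(D, x, k=10):
    """Greedy sparse code of signal x over dictionary D (atoms in columns).
    A stand-in for the sparse-coding step that represents a new sample
    in terms of a previously learned dictionary."""
    r = x.copy()
    code = np.zeros(D.shape[1])
    for _ in range(k):
        corr = D.T @ r
        j = np.argmax(np.abs(corr))    # best-matching atom
        code[j] += corr[j]
        r = r - corr[j] * D[:, j]      # remove its contribution
    return code

rng = np.random.default_rng(1)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)         # unit-norm atoms
x = 2.0 * D[:, 3] - 1.5 * D[:, 70]     # signal built from two atoms
c = matching_pursuit(D, x)
```

The resulting code `c` is dominated by the two atoms that actually generated the signal, which is the kind of compact feature representation the paper feeds to its classifier.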
Greedy Sparse Approaches for Homological Coverage in Location Unaware Sensor Networks
2017-12-08
ARL-TR-8235, December 2017, US Army Research Laboratory, by Terrence J Moore. Cited reference: Farah C, Schwaner F, Abedi A, Worboys M. Distributed homology algorithm to detect topological events. GlobalSIP; 2013 Dec; Austin, TX. p. 595-598.
4D computerized ionospheric tomography by using GPS measurements and IRI-Plas model
NASA Astrophysics Data System (ADS)
Tuna, Hakan; Arikan, Feza; Arikan, Orhan
2016-07-01
Ionospheric imaging is an important subject in ionospheric studies. GPS-based TEC measurements provide very accurate information about electron density values in the ionosphere. However, since the measurements are generally very sparse and non-uniformly distributed, computation of a 3D electron density estimate from measurements alone is an ill-defined problem. Model-based 3D electron density estimations provide physically feasible distributions; however, they are generally not compliant with the TEC measurements obtained from GPS receivers. In this study, GPS-based TEC measurements and an ionosphere model known as the International Reference Ionosphere Extended to Plasmasphere (IRI-Plas) are employed together to obtain a physically accurate 3D electron density distribution that is compliant with the real measurements obtained from a GPS satellite-receiver network. Ionospheric parameters input to the IRI-Plas model are perturbed in the region of interest using parametric perturbation models, such that the synthetic TEC measurements calculated from the resultant 3D electron density distribution fit the real TEC measurements. The problem is posed as an optimization problem whose optimization variables are the parameters of the parametric perturbation models. The proposed technique is applied over Turkey, on both calm and storm days of the ionosphere. Results show that the proposed technique produces 3D electron density distributions that are compliant with the IRI-Plas model, GPS TEC measurements, and ionosonde measurements. The effect of the number of GPS receiver stations on the performance of the proposed technique is investigated; results show that 7 GPS receiver stations are sufficient for a region as large as Turkey, on both calm and storm days of the ionosphere.
Since the ionization levels in the ionosphere are highly correlated in time, the proposed technique is extended to the time domain by applying Kalman-based tracking and smoothing approaches to the obtained results. Combining Kalman methods with the proposed 3D CIT technique creates a robust 4D ionospheric electron density estimation model and has the advantage of decreasing the computational cost of the proposed method. Results on both calm and storm days of the ionosphere show that the new technique produces more robust solutions, especially when the number of GPS receiver stations in the region is small. This study is supported by TUBITAK 114E541, 115E915 and Joint TUBITAK 114E092 and AS CR 14/001 projects.
A view of Kanerva's sparse distributed memory
NASA Technical Reports Server (NTRS)
Denning, P. J.
1986-01-01
Pentti Kanerva is working on a new class of computers called pattern computers. Pattern computers may close the gap between the capabilities of biological organisms to recognize and act on patterns (visual, auditory, tactile, or olfactory) and the capabilities of modern computers. Combinations of numeric, symbolic, and pattern computers may one day be capable of sustaining robots. An overview of the requirements for a pattern computer, a summary of Kanerva's Sparse Distributed Memory (SDM), and examples of tasks this computer can be expected to perform well are given.
Augmented l1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm. Revision 1
2012-10-17
Fragments from the report describe numerical tests on sparse signals x0 whose nonzero entries were sampled from the standard Gaussian distribution (Figure 2) or the Bernoulli distribution (Figure 3); both tests had the same sensing setup. The figures track the convergence of the primal and dual variables of three algorithms, one of which was clearly the slowest; comparing the two tests, convergence was faster on the Bernoulli sparse signal than on the Gaussian one.
Notes on implementation of sparsely distributed memory
NASA Technical Reports Server (NTRS)
Keeler, J. D.; Denning, P. J.
1986-01-01
The Sparsely Distributed Memory (SDM) developed by Kanerva is an unconventional memory design with very interesting and desirable properties. The memory works in a manner that is closely related to modern theories of human memory. The SDM model is discussed in terms of its implementation in hardware. Two appendices discuss the unconventional approaches of the SDM: Appendix A treats a resistive circuit for fast, parallel address decoding; and Appendix B treats a systolic array for high throughput read and write operations.
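As a rough software counterpart to the hardware discussion, here is a minimal sketch of an SDM with counter-based storage and Hamming-radius address decoding; the dimensions, number of hard locations, and activation radius below are illustrative choices, not Kanerva's recommended parameters.

```python
import numpy as np

class SDM:
    """Minimal sparse distributed memory: N-bit addresses, M random hard
    locations; a write or read activates every location within Hamming
    radius r of the cue address."""
    def __init__(self, n_bits=256, n_locs=2000, radius=112, seed=0):
        rng = np.random.default_rng(seed)
        self.addrs = rng.integers(0, 2, (n_locs, n_bits))     # hard locations
        self.counters = np.zeros((n_locs, n_bits), dtype=int)
        self.radius = radius

    def _active(self, addr):
        dist = np.sum(self.addrs != addr, axis=1)             # Hamming distances
        return dist <= self.radius

    def write(self, addr, data):
        self.counters[self._active(addr)] += 2 * data - 1     # +1 for 1, -1 for 0

    def read(self, addr):
        sums = self.counters[self._active(addr)].sum(axis=0)
        return (sums > 0).astype(int)                         # majority vote

rng = np.random.default_rng(42)
mem = SDM()
pattern = rng.integers(0, 2, 256)
mem.write(pattern, pattern)                 # autoassociative store
noisy = pattern.copy()
flip = rng.choice(256, 20, replace=False)
noisy[flip] ^= 1                            # corrupt 20 of 256 bits
recalled = mem.read(noisy)
```

Reading with the corrupted cue still recovers the stored pattern, illustrating the associative, partial-match retrieval described in the abstract.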
Low-Rank Correction Methods for Algebraic Domain Decomposition Preconditioners
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Ruipeng; Saad, Yousef
2017-08-01
This study presents a parallel preconditioning method for distributed sparse linear systems, based on an approximate inverse of the original matrix, that adopts a general framework of distributed sparse matrices and exploits domain decomposition (DD) and low-rank corrections. The DD approach decouples the matrix and, once inverted, a low-rank approximation is applied by exploiting the Sherman--Morrison--Woodbury formula, which yields two variants of the preconditioning method. The low-rank expansion is computed by the Lanczos procedure with reorthogonalization. Numerical experiments indicate that, when combined with Krylov subspace accelerators, this preconditioner can be efficient and robust for solving symmetric sparse linear systems. Comparisons with pARMS, a DD-based parallel incomplete LU (ILU) preconditioning method, are presented for solving Poisson's equation and linear elasticity problems.
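The Sherman--Morrison--Woodbury step at the heart of such a low-rank correction can be sketched as follows; the diagonal matrix B standing in for the decoupled DD blocks and the rank-3 correction are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def smw_inverse_apply(Binv, U, V, x):
    """Apply (B - U V^T)^{-1} to x via Sherman-Morrison-Woodbury, using only
    the (cheap) inverse of B plus a small k x k solve, where k is the rank
    of the correction."""
    k = U.shape[1]
    S = np.eye(k) - V.T @ (Binv @ U)    # small "capacitance" matrix
    y = Binv @ x
    return y + Binv @ (U @ np.linalg.solve(S, V.T @ y))

rng = np.random.default_rng(0)
n, k = 50, 3
B = np.diag(rng.uniform(2.0, 4.0, n))   # stand-in for the decoupled DD part
U = 0.1 * rng.standard_normal((n, k))
V = 0.1 * rng.standard_normal((n, k))
A = B - U @ V.T                         # matrix = decoupled part - low-rank term
x = rng.standard_normal(n)
z = smw_inverse_apply(np.linalg.inv(B), U, V, x)
```

The point of the identity is that `A` itself is never inverted: only the easy inverse of `B` and a k-by-k system are needed, which is what makes the low-rank correction attractive as a preconditioner.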
NASA Astrophysics Data System (ADS)
Kim, D.; Lee, H.; Jung, H. C.; Beighley, E.; Laraque, A.; Tshimanga, R.; Alsdorf, D. E.
2016-12-01
Rivers and wetlands are important ecological habitats and play a key role as a source of greenhouse gases (CO2 and CH4). Floodplain ecosystems depend on the interplay between vegetation and flood characteristics, and water level is a prerequisite for understanding terrestrial water storage and discharge. The Congo Basin is the world's third largest in size (approximately 3.7 million km2) and second only to the Amazon River in discharge (approximately 40,500 m3 s-1 annual average between 1902 and 2015 at the main Brazzaville-Kinshasa gauging station). Despite the lack of in situ data over the basin, surface water level dynamics in its wetlands have been successfully estimated using satellite altimetry, backscattering coefficients (σ0) from Synthetic Aperture Radar (SAR) images, and interferometric SAR techniques. However, water levels of the Congo River itself remain poorly quantified because of the sparse orbital spacing of radar altimeters; we essentially have information only over sparsely distributed so-called "virtual stations". Backscattering coefficients from SAR images have been successfully used to distinguish vegetation types, monitor flood conditions, and assess soil moisture over the wetlands. However, σ0 has not been used to measure water level changes over the open river, because specular scattering leaves a very weak return signal. In this study, we have discovered that changes in σ0 over the Congo River occur mainly due to water level changes in the river in the presence of water plants (macrophytes, emergent plants, and submersed plants), depending on the rising and falling stage inside the depression of the "Cuvette Centrale". We expand on this finding by generating multi-temporal water level maps over the Congo River using PALSAR σ0, Envisat altimetry, and Landsat Normalized Difference Vegetation Index (NDVI) data.
We also present preliminary estimates of the river discharge using the water level maps.
A Bandwidth-Efficient Service for Local Information Dissemination in Sparse to Dense Roadways
Garcia-Lozano, Estrella; Campo, Celeste; Garcia-Rubio, Carlos; Cortes-Martin, Alberto; Rodriguez-Carrion, Alicia; Noriega-Vivas, Patricia
2013-07-05
Thanks to the research on Vehicular Ad Hoc Networks (VANETs), we will be able to deploy applications on roadways that contribute to energy efficiency through better planning of long trips. With this goal in mind, we have designed a gas/charging station advertising system that takes advantage of the broadcast nature of the network. We have found that reducing the total number of packets sent is important, as it allows better use of the available bandwidth. We have designed improvements to a distance-based flooding scheme so that it can support the advertising application with good results in sparse to dense roadway scenarios. PMID:23881130
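A distance-based flooding scheme of the general kind the authors improve can be sketched with two small decision rules: farther receivers wait less before rebroadcasting, and nodes too close to the previous sender stay silent. The thresholds below are illustrative, not the paper's tuned values.

```python
def rebroadcast_delay(distance_m, d_max=300.0, max_delay_s=0.01):
    """Farther receivers wait less, so the most distant node rebroadcasts
    first; its transmission then suppresses closer nodes still waiting."""
    d = min(max(distance_m, 0.0), d_max)
    return max_delay_s * (1.0 - d / d_max)

def should_rebroadcast(distance_m, d_threshold=50.0):
    """Drop rebroadcasts from nodes too close to the last sender: they add
    little new coverage but consume shared bandwidth."""
    return distance_m > d_threshold
```

Together these rules cut the total number of packets sent per advertisement, which is the bandwidth saving the abstract emphasizes.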
Objective sea level pressure analysis for sparse data areas
NASA Technical Reports Server (NTRS)
Druyan, L. M.
1972-01-01
A computer procedure was used to analyze the pressure distribution over the North Pacific Ocean for eleven synoptic times in February, 1967. Independent knowledge of the central pressures of lows is shown to reduce the analysis errors for very sparse data coverage. The application of planned remote sensing of sea-level wind speeds is shown to make a significant contribution to the quality of the analysis especially in the high gradient mid-latitudes and for sparse coverage of conventional observations (such as over Southern Hemisphere oceans). Uniform distribution of the available observations of sea-level pressure and wind velocity yields results far superior to those derived from a random distribution. A generalization of the results indicates that the average lower limit for analysis errors is between 2 and 2.5 mb based on the perfect specification of the magnitude of the sea-level pressure gradient from a known verification analysis. A less than perfect specification will derive from wind-pressure relationships applied to satellite observed wind speeds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kansa, E.J.; Axelrod, M.C.; Kercher, J.R.
1994-05-01
Our current research into the response of natural ecosystems to a hypothesized climatic change requires estimates of various meteorological variables on a regularly spaced grid of points on the surface of the earth. Unfortunately, the bulk of the world's meteorological measurement stations are located at airports, which tend to be concentrated on the world's coastlines or near populated areas. The spatial density of station locations is also extremely non-uniform, with the greatest density in the USA, followed by Western Europe. Furthermore, the density of airports is rather sparse in desert regions such as the Sahara, Arabian, Gobi, and Australian deserts; likewise, the density is quite sparse in cold regions such as Antarctica, northern Canada, and interior northern Russia. The Amazon Basin in Brazil has few airports. The frequency of airports is obviously related to population centers and the degree of industrial development of the country. We address the following problem here: given values of meteorological variables, such as maximum monthly temperature, measured at more than 5,500 airport stations, interpolate these values onto a regular grid of terrestrial points spaced by one degree in both latitude and longitude. This is known as the scattered data problem.
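One simple (if crude) baseline for the scattered-data problem is inverse-distance weighting, sketched below with made-up stations and values; the report itself studies more sophisticated interpolants, so this is only a reference point.

```python
import numpy as np

def idw_grid(stations, values, grid_lon, grid_lat, power=2.0):
    """Inverse-distance-weighted interpolation of scattered station values
    onto a regular lon/lat grid: each grid point is a convex combination of
    the station values, weighted by 1/distance**power."""
    glon, glat = np.meshgrid(grid_lon, grid_lat)
    pts = np.stack([glon.ravel(), glat.ravel()], axis=1)
    d = np.linalg.norm(pts[:, None, :] - stations[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)                  # avoid division by zero at stations
    w = 1.0 / d ** power
    est = (w @ values) / w.sum(axis=1)
    return est.reshape(glat.shape)

# Three hypothetical stations with monthly-maximum temperatures.
stations = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
temps = np.array([30.0, 20.0, 10.0])
grid = idw_grid(stations, temps, np.linspace(0, 10, 11), np.linspace(0, 8, 9))
```

Because the weights form a convex combination, IDW reproduces station values at station locations and never extrapolates outside the observed range, a useful sanity property for gridded climatologies.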
NASA Astrophysics Data System (ADS)
Thio, Hong Kie; Song, Xi; Saikia, Chandan K.; Helmberger, Donald V.; Woods, Bradley B.
1999-01-01
We present a study of regional earthquakes in the western Mediterranean geared toward the development of methodologies and path calibrations for source characterization using regional broadband stations. The results of this study are useful for the monitoring and discrimination of seismic events under a comprehensive test ban treaty, as well as the routine analysis of seismicity and seismic hazard using a sparse array of stations. The area consists of several contrasting geological provinces with distinct seismic properties, which complicates the modeling of seismic wave propagation. We started by analyzing surface wave group velocities throughout the region and developed a preliminary model for each of the major geological provinces. We found variations of crustal thickness ranging from 45 km under the Atlas and Betic mountains and 37 km under the Saharan shield, to 20 km for the oceanic crust of the western Mediterranean Sea, which is consistent with earlier works. Throughout most of the region, the upper mantle velocities are low, which is typical for tectonically active regions. The most complex areas in terms of wave propagation are the Betic Cordillera in southern Spain and its north African counterparts, the Rif and Tell Atlas mountains, as well as the Alboran Sea, between Spain and Morocco. The complexity of the wave propagation in these regions is probably due to the sharp velocity contrasts between the oceanic and continental regions, as well as the existence of deep sedimentary basins that have a very strong influence on the surface wave dispersion. We used this preliminary regionalized velocity model to correct the surface wave source spectra for propagation effects, which we then inverted for source mechanism. We found that this method, which is in use in many parts of the world, works very well, provided that data from several stations are available.
In order to study the events in the region using very few broadband stations or even a single station, we developed a hybrid inversion method which combines Pnl waveforms synthesized with the traditional body wave methods, with surface waves that are computed using normal modes. This procedure facilitates the inclusion of laterally varying structure in the Green's functions for the surface waves and allows us to determine source mechanisms for many of the larger earthquakes (M > 4) throughout the region with just one station. We compared our results with those available from other methods and found that they agree quite well. The epicentral depths that we have obtained from regional waveforms are consistent with observed teleseismic depth phases, as far as they are available. We also show that the particular upper mantle structure under the region causes the various Pn and Sn phases to be impulsive, which makes them a useful tool for depth determination as well. Thus we conclude that with proper calibration of the seismic structure in the region and high-quality broadband data, it is now possible to characterize and study events in this region, both with respect to mechanism and depth, with a limited distribution of regional broadband stations.
Bi-sparsity pursuit for robust subspace recovery
Bian, Xiao; Krim, Hamid
2015-09-01
Here, the success of sparse models in computer vision and machine learning in many real-world applications may be attributed, in large part, to the fact that many high-dimensional data are distributed in a union of low-dimensional subspaces. The underlying structure may, however, be adversely affected by sparse errors, thus inducing additional complexity in recovering it. In this paper, we propose a bi-sparse model as a framework to investigate and analyze this problem and, as a result, provide a novel algorithm to recover the union of subspaces in the presence of sparse corruptions. We additionally demonstrate the effectiveness of our method by experiments on real-world vision data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bixler, Nathan E.; Osborn, Douglas M.; Sallaberry, Cedric Jean-Marie
2014-02-01
This paper describes the convergence of MELCOR Accident Consequence Code System, Version 2 (MACCS2) probabilistic results of offsite consequences for the uncertainty analysis of the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout scenario at the Peach Bottom Atomic Power Station. The consequence metrics evaluated are individual latent-cancer fatality (LCF) risk and individual early fatality risk. Consequence results are presented as conditional risk (i.e., assuming the accident occurs, risk per event) to individuals of the public as a result of the accident. In order to verify convergence for this uncertainty analysis, as recommended by the Nuclear Regulatory Commission's Advisory Committee on Reactor Safeguards, a 'high' source term from the original population of Monte Carlo runs has been selected to be used for: (1) a study of the distribution of consequence results stemming solely from epistemic uncertainty in the MACCS2 parameters (i.e., separating the effect from the source term uncertainty), and (2) a comparison between Simple Random Sampling (SRS) and Latin Hypercube Sampling (LHS) in order to validate the original results obtained with LHS. Three replicates (each using a different random seed) of size 1,000 each using LHS and another set of three replicates of size 1,000 using SRS are analyzed. The results show that the LCF risk results are well converged with either LHS or SRS sampling. The early fatality risk results are less well converged at radial distances beyond 2 miles, and this is expected due to the sparse data (predominance of "zero" results).
Application of a sparseness constraint in multivariate curve resolution - Alternating least squares.
Hugelier, Siewert; Piqueras, Sara; Bedia, Carmen; de Juan, Anna; Ruckebusch, Cyril
2018-02-13
The use of sparseness in chemometrics has increased in popularity; its main advantage is better interpretability of the results obtained. In this work, sparseness is implemented as a constraint in multivariate curve resolution - alternating least squares (MCR-ALS), which aims at reproducing raw (mixed) data by a bilinear model of chemically meaningful profiles. In many cases, the mixed raw data analyzed are not sparse by nature, but their decomposition profiles can be, as is the case for some instrumental responses, such as mass spectra, or for concentration profiles linked to scattered distribution maps of powdered samples in hyperspectral images. To induce sparseness in the constrained profiles, one-dimensional and/or two-dimensional numerical arrays can be fitted using a basis of Gaussian functions with a penalty on the coefficients. In this work, a least squares regression framework with an L0-norm penalty is applied. This penalty constrains the number of non-null coefficients in the fit of the constrained array, without an a priori assumption on their number or positions. It is shown that the sparseness constraint suppresses values linked to uninformative channels and noise in MS spectra and improves the location of scattered compounds in distribution maps, resulting in better interpretability of the constrained profiles. An additional benefit of the sparseness constraint is lower ambiguity in the bilinear model, since the predominance of null coefficients in the constrained profiles also helps to limit the solutions for the profiles in the counterpart matrix of the MCR bilinear model.
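An L0-budgeted fit of a profile with a Gaussian basis can be sketched with iterative hard thresholding; this is a simple proxy for the L0-penalized regression used as the MCR-ALS sparseness constraint, and the signal, basis spacing, and budget below are illustrative assumptions.

```python
import numpy as np

def iht_gaussian_fit(y, centers, width, k, n_iter=100):
    """Fit a 1-D profile with a Gaussian basis under an L0 budget of k
    non-null coefficients, via iterative hard thresholding: gradient step,
    then keep only the k largest coefficients."""
    t = np.arange(len(y))
    G = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / width) ** 2)
    L = np.linalg.norm(G, 2) ** 2                # step-size scale
    c = np.zeros(len(centers))
    for _ in range(n_iter):
        c = c - G.T @ (G @ c - y) / L            # gradient step on the fit
        keep = np.argsort(np.abs(c))[-k:]        # hard threshold: best k only
        mask = np.zeros_like(c)
        mask[keep] = 1.0
        c *= mask
    return c, G @ c

# Profile made of two Gaussian peaks; basis centers every 10 channels.
t = np.arange(100)
y = np.exp(-0.5 * ((t - 30) / 4.0) ** 2) + 0.5 * np.exp(-0.5 * ((t - 70) / 4.0) ** 2)
c, fit = iht_gaussian_fit(y, centers=np.arange(0, 100, 10), width=4.0, k=2)
```

The fit ends up with exactly two non-null coefficients at the basis functions under the true peaks, mirroring how the constraint suppresses uninformative channels while preserving the signal.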
Zhang, L; Liu, X J
2016-06-03
With the rapid development of next-generation high-throughput sequencing technology, RNA-seq has become a standard and important technique for transcriptome analysis. For multi-sample RNA-seq data, existing expression estimation methods usually deal with each sample separately, ignoring that read distributions are consistent across multiple samples. In the current study, we propose a structured sparse regression method, SSRSeq, to estimate isoform expression using multi-sample RNA-seq data. SSRSeq uses a non-parametric model to capture the general tendency of non-uniform read distribution for all genes across multiple samples. Additionally, our method adds a structured sparse regularization, which not only incorporates the sparse relationship between a gene and its corresponding isoform expression levels, but also reduces the effects of noisy reads, especially for lowly expressed genes and isoforms. Four real datasets were used to evaluate our method on isoform expression estimation. Compared with other popular methods, SSRSeq reduced the variance between multiple samples and produced more accurate isoform expression estimates, and thus more meaningful biological interpretations.
Perceptually controlled doping for audio source separation
NASA Astrophysics Data System (ADS)
Mahé, Gaël; Nadalin, Everton Z.; Suyama, Ricardo; Romano, João MT
2014-12-01
The separation of an underdetermined audio mixture can be performed through sparse component analysis (SCA), which relies, however, on the strong hypothesis that source signals are sparse in some domain. To overcome this difficulty in the case where the original sources are available before the mixing process, informed source separation (ISS) embeds a watermark in the mixture, whose information can help a subsequent separation. Though powerful, this technique is generally specific to a particular mixing setup and may be compromised by an additional bitrate compression stage. Thus, instead of watermarking, we propose a 'doping' method that makes the time-frequency representation of each source sparser while preserving its audio quality. This method is based on an iterative decrease of the distance between the distribution of the signal and a target sparse distribution, under a perceptual constraint. We aim to show that the proposed approach is robust to audio coding and that the use of the sparsified signals improves source separation in comparison with the original sources. In this work, the analysis is restricted to instantaneous mixtures and focused on voice sources.
EIT Imaging Regularization Based on Spectral Graph Wavelets.
Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Vauhkonen, Marko; Wolf, Gerhard; Mueller-Lisse, Ullrich; Moeller, Knut
2017-09-01
The objective of electrical impedance tomographic reconstruction is to identify the distribution of tissue conductivity from electrical boundary conditions. This is an ill-posed inverse problem usually solved under the finite-element method framework. In previous studies, standard sparse regularization was used for difference electrical impedance tomography to achieve a sparse solution. However, regarding elementwise sparsity, standard sparse regularization interferes with the smoothness of conductivity distribution between neighboring elements and is sensitive to noise. As an effect, the reconstructed images are spiky and depict a lack of smoothness. Such unexpected artifacts are not realistic and may lead to misinterpretation in clinical applications. To eliminate such artifacts, we present a novel sparse regularization method that uses spectral graph wavelet transforms. Single-scale or multiscale graph wavelet transforms are employed to introduce local smoothness on different scales into the reconstructed images. The proposed approach relies on viewing finite-element meshes as undirected graphs and applying wavelet transforms derived from spectral graph theory. Reconstruction results from simulations, a phantom experiment, and patient data suggest that our algorithm is more robust to noise and produces more reliable images.
The HTM Spatial Pooler-A Neocortical Algorithm for Online Sparse Distributed Coding.
Cui, Yuwei; Ahmad, Subutai; Hawkins, Jeff
2017-01-01
Hierarchical temporal memory (HTM) provides a theoretical framework that models several key computational principles of the neocortex. In this paper, we analyze an important component of HTM, the HTM spatial pooler (SP). The SP models how neurons learn feedforward connections and form efficient representations of the input. It converts arbitrary binary input patterns into sparse distributed representations (SDRs) using a combination of competitive Hebbian learning rules and homeostatic excitability control. We describe a number of key properties of the SP, including fast adaptation to changing input statistics, improved noise robustness through learning, efficient use of cells, and robustness to cell death. In order to quantify these properties we develop a set of metrics that can be directly computed from the SP outputs. We show how the properties are met using these metrics and targeted artificial simulations. We then demonstrate the value of the SP in a complete end-to-end real-world HTM system. We discuss the relationship with neuroscience and previous studies of sparse coding. The HTM spatial pooler represents a neurally inspired algorithm for learning sparse representations from noisy data streams in an online fashion.
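The SP's conversion of a binary input into a sparse distributed representation can be sketched as a single feedforward step with k-winners-take-all inhibition; the connection density, input statistics, and sparsity level below are illustrative, and the Hebbian learning and homeostatic boosting described in the paper are omitted.

```python
import numpy as np

def spatial_pooler_sdr(inputs, W, sparsity=0.02):
    """One feedforward step of an HTM-style spatial pooler: compute each
    column's overlap with the binary input, then keep only the top
    `sparsity` fraction of columns active -- a sparse distributed
    representation (SDR)."""
    overlap = W @ inputs                                 # per-column overlap scores
    k = max(1, int(sparsity * len(overlap)))
    winners = np.argsort(overlap)[-k:]                   # k-winners-take-all
    sdr = np.zeros(len(overlap), dtype=int)
    sdr[winners] = 1
    return sdr

rng = np.random.default_rng(0)
n_in, n_cols = 200, 1024
W = (rng.random((n_cols, n_in)) < 0.1).astype(int)       # sparse binary connections
x = (rng.random(n_in) < 0.3).astype(int)                 # binary input pattern
sdr = spatial_pooler_sdr(x, W)
```

The fixed number of winners is what keeps the output sparsity constant regardless of input statistics, one of the properties the paper quantifies with its metrics.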
EPR oximetry in three spatial dimensions using sparse spin distribution
NASA Astrophysics Data System (ADS)
Som, Subhojit; Potter, Lee C.; Ahmad, Rizwan; Vikram, Deepti S.; Kuppusamy, Periannan
2008-08-01
A method is presented to use continuous wave electron paramagnetic resonance imaging for rapid measurement of oxygen partial pressure in three spatial dimensions. A particulate paramagnetic probe is employed to create a sparse distribution of spins in a volume of interest. Information encoding location and spectral linewidth is collected by varying the spatial orientation and strength of an applied magnetic gradient field. Data processing exploits the spatial sparseness of spins to detect voxels with nonzero spin and to estimate the spectral linewidth for those voxels. The parsimonious representation of spin locations and linewidths permits an order of magnitude reduction in data acquisition time, compared to four-dimensional tomographic reconstruction using traditional spectral-spatial imaging. The proposed oximetry method is experimentally demonstrated for a lithium octa-n-butoxynaphthalocyanine (LiNc-BuO) probe using an L-band EPR spectrometer.
NASA Astrophysics Data System (ADS)
Mitterer-Hoinkes, Susanna; Lehning, Michael; Phillips, Marcia; Sailer, Rudolf
2013-04-01
The area-wide distribution of permafrost is only sparsely known in mountainous terrain (e.g., the Alps). Permafrost monitoring can only be based on point or small-scale measurements such as boreholes, active rock glaciers, BTS measurements, or geophysical measurements. To gain a better understanding of permafrost distribution, it is necessary to focus on modeling permafrost temperatures and permafrost distribution patterns. Considerable effort has already been expended on these topics using different kinds of models. In this study, the evolution of subsurface temperatures over successive years has been modeled at the Ritigraben borehole (Mattertal, Switzerland) using the one-dimensional snow cover model SNOWPACK. The model needs meteorological input and, in our case, information on subsurface properties. We used meteorological input variables from the automatic weather station Ritigraben (2630 m) in combination with the automatic weather station Saas Seetal (2480 m); hourly meteorological data between 2006 and 2011 were used to drive the model. As previous studies have shown, snow amount and snow cover duration have a great influence on the thermal regime: low snow heights allow deeper penetration of low winter temperatures into the ground, while strong winters with a high amount of snow attenuate this effect. In addition, variations in subsurface conditions strongly influence the temperature regime. We therefore conducted sensitivity runs by defining a series of different subsurface properties. The modeled subsurface temperature profiles were then compared to the measured temperatures in the Ritigraben borehole, allowing a validation of the influence of subsurface properties on the temperature regime. As expected, the influence of the snow cover is stronger than the influence of subsurface material properties, which are nevertheless significant.
The validation presented here serves to prepare a larger spatial simulation with the complex hydro-meteorological 3-dimensional model Alpine 3D, which is based on a distributed application of SNOWPACK.
2015-09-30
DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Large Scale Density Estimation of Blue and Fin Whales ... Utilizing Sparse Array Data to Develop and Implement a New Method for Estimating Blue and Fin Whale Density. Len Thomas & Danielle Harris, Centre ... to develop and implement a new method for estimating blue and fin whale density that is effective over large spatial scales and is designed to cope
A new scheduling algorithm for parallel sparse LU factorization with static pivoting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grigori, Laura; Li, Xiaoye S.
2002-08-20
In this paper we present a static scheduling algorithm for parallel sparse LU factorization with static pivoting. The algorithm is divided into mapping and scheduling phases, using the symmetric pruned graphs of L' and U to represent dependencies. The scheduling algorithm is designed to drive the parallel execution of the factorization on a distributed-memory architecture. Experimental results and comparisons with SuperLU_DIST are reported after applying this algorithm to real-world application matrices on an IBM SP RS/6000 distributed-memory machine.
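A scheduling phase of this kind can be illustrated with a generic level-scheduling sketch (the task identifiers and dependency graph below are invented for illustration, not the paper's algorithm): each task is placed at the earliest level consistent with its dependencies, and tasks within a level are independent and can run concurrently.

```python
from collections import defaultdict

def level_schedule(deps):
    """Assign each task to the earliest level consistent with its
    dependencies (level = length of longest dependency chain).
    `deps` maps task -> set of tasks that must finish first."""
    level = {}
    def resolve(t):
        if t not in level:
            level[t] = 1 + max((resolve(d) for d in deps[t]), default=-1)
        return level[t]
    for t in deps:
        resolve(t)
    # Tasks in the same level have no mutual dependencies and can
    # execute concurrently on different processors.
    buckets = defaultdict(list)
    for t, l in level.items():
        buckets[l].append(t)
    return [sorted(buckets[l]) for l in sorted(buckets)]

# Diamond-shaped dependency graph: 0 -> {1, 2} -> 3
deps = {0: set(), 1: {0}, 2: {0}, 3: {1, 2}}
print(level_schedule(deps))  # [[0], [1, 2], [3]]
```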
The dark matter of galaxy voids
NASA Astrophysics Data System (ADS)
Sutter, P. M.; Lavaux, Guilhem; Wandelt, Benjamin D.; Weinberg, David H.; Warren, Michael S.
2014-03-01
How do observed voids relate to the underlying dark matter distribution? To examine the spatial distribution of dark matter contained within voids identified in galaxy surveys, we apply Halo Occupation Distribution models representing sparsely and densely sampled galaxy surveys to a high-resolution N-body simulation. We compare these galaxy voids to voids found in the halo distribution, low-resolution dark matter and high-resolution dark matter. We find that voids at all scales in densely sampled surveys - and medium- to large-scale voids in sparse surveys - trace the same underdensities as dark matter, but they are larger in radius by ~20 per cent, they have somewhat shallower density profiles and they have centres offset by ~0.4 R_v rms. However, in a void-to-void comparison we find that shape estimators are less robust to sampling, and the largest voids in sparsely sampled surveys suffer fragmentation at their edges. We find that voids in galaxy surveys always correspond to underdensities in the dark matter, though the centres may be offset. When this offset is taken into account, we recover almost identical radial density profiles between galaxies and dark matter. All mock catalogues used in this work are available at http://www.cosmicvoids.net.
NASA Astrophysics Data System (ADS)
Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.
2011-12-01
Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of parameters. Recently, the sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation or integration problems. Thus, in order to accelerate the forward model evaluation and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using the sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. Then, the desired probability density function of each prediction is approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions but avoids the aforementioned disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling.
The results show that our method attains the same accuracy much more efficiently than traditional MCMC.
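The surrogate-plus-QMC idea can be sketched in miniature (a 1-D toy: Chebyshev polynomial interpolation stands in for a true sparse-grid interpolant, a van der Corput sequence for a full quasi-Monte Carlo generator; the forward model, observation and noise level are invented):

```python
import numpy as np

def van_der_corput(n, base=2):
    # Simple 1-D low-discrepancy sequence, a stand-in for a full
    # quasi-Monte Carlo generator.
    seq = np.zeros(n)
    for i in range(n):
        f, k, x = 1.0, i + 1, 0.0
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

def forward(theta):            # "expensive" forward model (hypothetical)
    return np.sin(3 * theta) + theta**2

# 1) Polynomial surrogate of the forward model on interpolation nodes
#    (a 1-D stand-in for a sparse-grid interpolant).
nodes = np.cos(np.pi * np.arange(9) / 8) * 0.5 + 0.5   # Chebyshev nodes on [0, 1]
coeffs = np.polyfit(nodes, forward(nodes), 8)
surrogate = lambda t: np.polyval(coeffs, t)

# 2) Quasi-Monte Carlo samples and the surrogate posterior density
#    for an observation d with Gaussian noise (sigma assumed).
d, sigma = 0.8, 0.1
theta = van_der_corput(4096)
post = np.exp(-(surrogate(theta) - d) ** 2 / (2 * sigma**2))
post /= post.sum()

# Posterior mean of the prediction, weighted by the surrogate density
pred_mean = np.sum(post * surrogate(theta))
print(pred_mean)
```

The surrogate replaces every forward-model call with a cheap polynomial evaluation, which is what makes the dense QMC sampling affordable.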
Hydrologic characteristics of freshwater mussel habitat: novel insights from modeled flows
Drew, C. Ashton; Eddy, Michele; Kwak, Thomas J.; Cope, W. Gregory; Augspurger, Tom
2018-01-01
The ability to model freshwater stream habitat and species distributions is limited by the spatially sparse flow data available from long-term gauging stations. Flow data beyond the immediate vicinity of gauging stations would enhance our ability to explore and characterize hydrologic habitat suitability. The southeastern USA supports high aquatic biodiversity, but threats, such as land-use alteration, climate change, conflicting water-resource demands, and pollution, have led to the imperilment and legal protection of many species. The ability to distinguish suitable from unsuitable habitat conditions, including hydrologic suitability, is a key criterion for successful conservation and restoration of aquatic species. We used the example of the critically endangered Tar River Spinymussel (Parvaspina steinstansana) and associated species to demonstrate the value of modeled flow data (WaterFALL™) to generate novel insights into population structure and testable hypotheses regarding hydrologic suitability. With ordination models, we: 1) identified all catchments with potentially suitable hydrology, 2) identified 2 distinct hydrologic environments occupied by the Tar River Spinymussel, and 3) estimated greater hydrological habitat niche breadth of assumed surrogate species associates at the catchment scale. Our findings provide the first demonstrated application of complete, continuous, regional modeled hydrologic data to freshwater mussel distribution and management. This research highlights the utility of modeling and data-mining methods to facilitate further exploration and application of such modeled environmental conditions to inform aquatic species management. We conclude that such an approach can support landscape-scale management decisions that require spatial information at fine resolution (e.g., enhanced National Hydrology Dataset catchments) and broad extent (e.g., multiple river basins).
Kim, Steve M; Ganguli, Surya; Frank, Loren M
2012-08-22
Hippocampal place cells convey spatial information through a combination of spatially selective firing and theta phase precession. The way in which this information influences regions like the subiculum that receive input from the hippocampus remains unclear. The subiculum receives direct inputs from area CA1 of the hippocampus and sends divergent output projections to many other parts of the brain, so we examined the firing patterns of rat subicular neurons. We found a substantial transformation in the subicular code for space from sparse to dense firing rate representations along a proximal-distal anatomical gradient: neurons in the proximal subiculum are more similar to canonical, sparsely firing hippocampal place cells, whereas neurons in the distal subiculum have higher firing rates and more distributed spatial firing patterns. Using information theory, we found that the more distributed spatial representation in the subiculum carries, on average, more information about spatial location and context than the sparse spatial representation in CA1. Remarkably, despite the disparate firing rate properties of subicular neurons, we found that neurons at all proximal-distal locations exhibit robust theta phase precession, with spiking oscillation frequencies similar to those of neurons in area CA1. Our findings suggest that the subiculum is specialized to compress sparse hippocampal spatial codes into highly informative distributed codes suitable for efficient communication to other brain regions. Moreover, despite this substantial compression, the subiculum maintains finer scale temporal properties that may allow it to participate in oscillatory phase coding and spike timing-dependent plasticity in coordination with other regions of the hippocampal circuit.
Analysis, tuning and comparison of two general sparse solvers for distributed memory computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amestoy, P.R.; Duff, I.S.; L'Excellent, J.-Y.
2000-06-30
We describe the work performed in the context of a Franco-Berkeley funded project between NERSC-LBNL located in Berkeley (USA) and CERFACS-ENSEEIHT located in Toulouse (France). We discuss both the tuning and performance analysis of two distributed memory sparse solvers (SuperLU from Berkeley and MUMPS from Toulouse) on the 512-processor Cray T3E from NERSC (Lawrence Berkeley National Laboratory). This project gave us the opportunity to improve the algorithms and add new features to the codes. We then quite extensively analyze and compare the two approaches on a set of large problems from real applications. We further explain the main differences in the behavior of the approaches on artificial regular grid problems. As a conclusion to this activity report, we mention a set of parallel sparse solvers to which this type of study should be extended.
Szyda, Joanna; Liu, Zengting; Zatoń-Dobrowolska, Magdalena; Wierzbicki, Heliodor; Rzasa, Anna
2008-01-01
We analysed data from a selective DNA pooling experiment with 130 individuals of the arctic fox (Alopex lagopus), originating from 2 types differing in body size. The association between alleles of 6 selected unlinked molecular markers and body size was tested using univariate and multinomial logistic regression models, applying odds ratios and test statistics from the power divergence family. Due to the small sample size and the resulting sparseness of the data table, we could not rely on the asymptotic distributions of the tests in hypothesis testing. Instead, we tried to account for data sparseness by (i) modifying the confidence intervals of the odds ratio; (ii) using a normal approximation of the asymptotic distribution of the power divergence tests with different approaches for calculating the moments of the statistics; and (iii) assessing P values empirically, based on bootstrap samples. As a result, a significant association was observed for 3 markers. Furthermore, we used simulations to assess the validity of the normal approximation of the asymptotic distribution of the test statistics under conditions of small and sparse samples.
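The empirical bootstrap p-value of step (iii) can be sketched as follows (the Cressie-Read power-divergence statistic is standard; the allele-count tables, the 0.5 offset for sparse cells, and the parametric resampling scheme are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

def power_divergence(obs, exp, lam=2/3):
    """Cressie-Read power-divergence statistic; lam=2/3 is the
    commonly recommended member, lam=1 gives Pearson's chi-square."""
    obs, exp = np.asarray(obs, float), np.asarray(exp, float)
    return 2.0 / (lam * (lam + 1)) * np.sum(obs * ((obs / exp) ** lam - 1))

def bootstrap_pvalue(table, n_boot=2000):
    """Empirical p-value for independence in a 2 x k allele-count
    table, avoiding the asymptotic chi-square reference that is
    unreliable for small, sparse tables."""
    table = np.asarray(table, float)
    n = table.sum()
    exp = np.outer(table.sum(1), table.sum(0)) / n
    stat = power_divergence(table + 0.5, exp + 0.5)   # 0.5 offset for sparse cells
    null = np.empty(n_boot)
    for b in range(n_boot):
        sim = rng.multinomial(int(n), (exp / n).ravel()).reshape(table.shape)
        exp_b = np.outer(sim.sum(1), sim.sum(0)) / n
        null[b] = power_divergence(sim + 0.5, exp_b + 0.5)
    return np.mean(null >= stat)

# Strong association -> small p; near-balanced table -> large p (counts invented)
print(bootstrap_pvalue([[30, 2, 1], [3, 25, 4]]))
print(bootstrap_pvalue([[10, 9, 11], [11, 10, 9]]))
```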
Discriminative Bayesian Dictionary Learning for Classification.
Akhtar, Naveed; Shafait, Faisal; Mian, Ajmal
2016-12-01
We propose a Bayesian approach to learn discriminative dictionaries for sparse representation of data. The proposed approach infers probability distributions over the atoms of a discriminative dictionary using a finite approximation of the Beta Process. It also computes sets of Bernoulli distributions that associate class labels to the learned dictionary atoms. This association signifies the selection probabilities of the dictionary atoms in the expansion of class-specific data. Furthermore, the non-parametric character of the proposed approach allows it to infer the correct size of the dictionary. We exploit the aforementioned Bernoulli distributions in separately learning a linear classifier. The classifier uses the same hierarchical Bayesian model as the dictionary, which we present along with the analytical inference solution for Gibbs sampling. For classification, a test instance is first sparsely encoded over the learned dictionary and the codes are fed to the classifier. We performed experiments for face and action recognition, and object and scene-category classification, using five public datasets and compared the results with state-of-the-art discriminative sparse representation approaches. Experiments show that the proposed Bayesian approach consistently outperforms the existing approaches.
Locating Local Earthquakes Using Single 3-Component Broadband Seismological Data
NASA Astrophysics Data System (ADS)
Das, S. B.; Mitra, S.
2015-12-01
We devised a technique to locate local earthquakes using a single 3-component broadband seismograph and analyzed the factors governing the accuracy of our results. The need for devising such a technique arises in regions of sparse seismic network coverage. In state-of-the-art location algorithms, a minimum of three station recordings is required for obtaining well-resolved locations. However, the problem arises when an event is recorded by fewer than three stations. This may happen for the following reasons: (a) down time of stations in a sparse network; (b) geographically isolated regions with limited logistic support to set up a large network; (c) regions with insufficient funding for a multi-station network; and (d) poor signal-to-noise ratio for smaller events at most stations, except the one in their closest vicinity. Our technique provides a workable solution to the above problematic scenarios. However, our methodology is strongly dependent on the velocity model of the region. Our method uses a three-step procedure: (a) ascertain the back-azimuth of the event from the P-wave particle motion recorded on the horizontal components; (b) estimate the hypocentral distance using the S-P time; and (c) ascertain the emergent angle from the vertical and radial components. Once this is obtained, one can ray-trace through the 1-D velocity model to estimate the hypocentral location. We tested our method on synthetic data, which produces results with 99% precision. With observed data, the accuracy of our results is very encouraging. The precision of our results depends on the signal-to-noise ratio (SNR) and the choice of the right band-pass filter to isolate the P-wave signal. We used our method on minor aftershocks (3 < mb < 4) of the 2011 Sikkim earthquake using data from the Sikkim Himalayan network. The locations of these events highlight the transverse strike-slip structure within the Indian plate, which was observed from source mechanism studies of the mainshock and larger aftershocks.
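The first two processing steps can be sketched roughly as follows (the covariance-eigenvector estimate of back-azimuth, uniform velocities, and the simple S-P distance formula are simplifying assumptions; the actual method ray-traces through a regional 1-D velocity model):

```python
import numpy as np

def back_azimuth(north, east):
    """Back-azimuth (degrees from north) from the principal axis of
    P-wave particle motion on the horizontal components."""
    cov = np.cov(np.vstack([north, east]))
    w, v = np.linalg.eigh(cov)
    n, e = v[:, np.argmax(w)]            # eigenvector of largest eigenvalue
    return np.degrees(np.arctan2(e, n)) % 360.0

def hypocentral_distance(ts_minus_tp, vp=6.0, vs=3.5):
    """Distance (km) from the S-P time assuming uniform velocities
    (vp, vs in km/s are illustrative, not the regional model)."""
    return ts_minus_tp * vp * vs / (vp - vs)

# Synthetic P wave polarized along a 60-degree back-azimuth
rng = np.random.default_rng(1)
amp = rng.normal(size=200)
north = amp * np.cos(np.radians(60))
east = amp * np.sin(np.radians(60))
print(back_azimuth(north, east))   # ~60 (or ~240: the 180-degree ambiguity is
                                   # resolved with the vertical component)
print(round(hypocentral_distance(8.0), 1))  # 67.2 km for an 8 s S-P time
```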
NASA Astrophysics Data System (ADS)
Mulumba, J.-L.; Delvaux, D.
2012-04-01
Seismic hazard assessment and mitigation of catastrophes are primarily based on the identification and characterization of seismically active zones. These tasks still rely heavily on the existing knowledge of the seismic activity over the longest possible time period. The first seismic network in Equatorial Africa (IRSAC network) was operated from the Lwiro scientific base on the western shores of Lake Kivu between 1953 and 1963. Before this installation, the historical record of seismic activity in Central Africa is sparse. Even for the relatively short period concerned, spanning only 50-60 years, the historical record is far from complete. A first attempt was made by Herrinckx (1959), who compiled a list of 960 felt seisms recorded at the meteorological stations between 1915 and 1954 in Congo, Rwanda and Burundi. They were used to draw a density map of felt seisms per square degree. We completed this database by exploiting the meteorological archives and all available historical reports, enlarging the database to 1513 entries between 1900 and 1959. These entries have been examined in order to identify possible historical seismic events. These are defined by 3 or more quasi-simultaneous records observed over a relatively short distance (a few degrees of latitude/longitude) within a short time difference (a few hours). A preliminary list of 115 possible historical seisms has been obtained, identified by 3 to 15 different stations. The proposed location is taken as the average latitude and longitude of the stations where the felt seisms were recorded. Some of the most important ones are associated with aftershocks that were felt at some stations after the main shocks. The most recent felt seisms have also been recorded instrumentally, which helps to validate the procedure followed.
The main difficulties are the magnitude estimation and the possible spatial incompleteness of the recording of felt-seism evidence at the margin of the observation network. The distribution of these historical felt seisms matches the distribution of the instrumental epicenters. The results obtained may be used to complete the existing catalogues of historical seismicity. Herrinckx, P. (1959). Séismicité du Congo belge. Compilation des seismes observés aux stations climatologiques entre 1909 et 1954. Académie royale des Sciences coloniales. Classe des Sciences naturelles et médicales. Mémoire in-8°. Nouvelle série, 11(5), 1-55.
Computer Sciences and Data Systems, volume 1
NASA Technical Reports Server (NTRS)
1987-01-01
Topics addressed include: software engineering; university grants; institutes; concurrent processing; sparse distributed memory; distributed operating systems; intelligent data management processes; expert system for image analysis; fault tolerant software; and architecture research.
Effects of partitioning and scheduling sparse matrix factorization on communication and load balance
NASA Technical Reports Server (NTRS)
Venugopal, Sesh; Naik, Vijay K.
1991-01-01
A block-based, automatic partitioning and scheduling methodology is presented for sparse matrix factorization on distributed-memory systems. Using experimental results, this technique is analyzed for communication and load-imbalance overhead. To study the performance effects, these overheads were compared with those obtained from a straightforward 'wrap-mapped' column assignment scheme. All experimental results were obtained using test sparse matrices from the Harwell-Boeing data set. The results show that there is a communication and load-balance tradeoff. The block-based method results in lower communication cost, whereas the wrap-mapped scheme gives better load balance.
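The two assignment schemes being compared can be sketched generically (column and processor counts are arbitrary examples):

```python
def wrap_mapping(n_cols, n_procs):
    """Wrap (cyclic) mapping: column j goes to processor j mod p.
    Balances load well, but scatters dependent columns across
    processors, raising communication cost."""
    return [j % n_procs for j in range(n_cols)]

def block_mapping(n_cols, n_procs):
    """Block mapping: contiguous chunks of columns per processor.
    Keeps dependent columns together (less communication) at the
    price of a possibly uneven load."""
    size = -(-n_cols // n_procs)          # ceiling division
    return [min(j // size, n_procs - 1) for j in range(n_cols)]

print(wrap_mapping(8, 3))   # [0, 1, 2, 0, 1, 2, 0, 1]
print(block_mapping(8, 3))  # [0, 0, 0, 1, 1, 1, 2, 2]
```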
Power Enhancement in High Dimensional Cross-Sectional Tests
Fan, Jianqing; Liao, Yuan; Yao, Jiawei
2016-01-01
We propose a novel technique to boost the power of testing a high-dimensional vector H0: θ = 0 against sparse alternatives where the null hypothesis is violated only by a couple of components. Existing tests based on quadratic forms such as the Wald statistic often suffer from low powers due to the accumulation of errors in estimating high-dimensional parameters. More powerful tests for sparse alternatives such as thresholding and extreme-value tests, on the other hand, require either stringent conditions or bootstrap to derive the null distribution and often suffer from size distortions due to the slow convergence. Based on a screening technique, we introduce a “power enhancement component”, which is zero under the null hypothesis with high probability, but diverges quickly under sparse alternatives. The proposed test statistic combines the power enhancement component with an asymptotically pivotal statistic, and strengthens the power under sparse alternatives. The null distribution does not require stringent regularity conditions, and is completely determined by that of the pivotal statistic. As specific applications, the proposed methods are applied to testing the factor pricing models and validating the cross-sectional independence in panel data models. PMID:26778846
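The screening idea can be sketched as follows (the threshold, the form of the components, and the simulated data are illustrative assumptions, not the paper's exact construction): the screened component is zero with high probability under the null but grows quickly when a few components are large.

```python
import numpy as np

rng = np.random.default_rng(0)

def power_enhanced_stat(theta_hat, se, n):
    """Screened 'power enhancement component' added to a pivotal
    Wald-type statistic (a simplified sketch: the component is ~0
    under the null and diverges under sparse alternatives)."""
    p = len(theta_hat)
    delta = np.sqrt(np.log(np.log(n)) * np.log(p))   # hypothetical threshold
    screened = np.abs(theta_hat / se) > delta
    j0 = np.sqrt(p) * np.sum(np.abs(theta_hat[screened] / se[screened]))
    wald = (np.sum((theta_hat / se) ** 2) - p) / np.sqrt(2 * p)
    return j0 + wald

n, p = 500, 200
se = np.full(p, 1 / np.sqrt(n))
null_theta = rng.normal(0, 1 / np.sqrt(n), p)   # H0 true
sparse_theta = null_theta.copy()
sparse_theta[:3] += 0.5                          # only 3 components violated
print(power_enhanced_stat(null_theta, se, n) <
      power_enhanced_stat(sparse_theta, se, n))  # True
```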
Sensory-evoked perturbations of locomotor activity by sparse sensory input: a computational study
Brownstone, Robert M.
2015-01-01
Sensory inputs from muscle, cutaneous, and joint afferents project to the spinal cord, where they are able to affect ongoing locomotor activity. Activation of sensory input can initiate or prolong bouts of locomotor activity depending on the identity of the sensory afferent activated and the timing of the activation within the locomotor cycle. However, the mechanisms by which afferent activity modifies locomotor rhythm and the distribution of sensory afferents to the spinal locomotor networks have not been determined. Considering the many sources of sensory inputs to the spinal cord, determining this distribution would provide insights into how sensory inputs are integrated to adjust ongoing locomotor activity. We asked whether a sparsely distributed set of sensory inputs could modify ongoing locomotor activity. To address this question, several computational models of locomotor central pattern generators (CPGs) that were mechanistically diverse and generated locomotor-like rhythmic activity were developed. We show that sensory inputs restricted to a small subset of the network neurons can perturb locomotor activity in the same manner as seen experimentally. Furthermore, we show that an architecture with sparse sensory input improves the capacity to gate sensory information by selectively modulating sensory channels. These data demonstrate that sensory input to rhythm-generating networks need not be extensively distributed. PMID:25673740
Nonlinear spike-and-slab sparse coding for interpretable image encoding.
Shelton, Jacquelyn A; Sheikh, Abdul-Saboor; Bornschein, Jörg; Sterne, Philip; Lücke, Jörg
2015-01-01
Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and a nonlinear combination of components. With the prior, our model can easily represent exact zeros, e.g. for the absence of an image component such as an edge, and a distribution over non-zero pixel intensities. With the nonlinearity (the nonlinear max combination rule), the idea is to target occlusions; dictionary elements correspond to image components that can occlude each other. There are major consequences of the model assumptions made by both (non)linear approaches, thus the main goal of this paper is to isolate and highlight differences between them. Parameter optimization is analytically and computationally intractable in our model, thus as a main contribution we design an exact Gibbs sampler for efficient inference which we can apply to higher dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components with any level of sparsity. This suggests that our model can adaptively approximate and characterize the meaningful generation process well.
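The contrast between the two priors can be sketched by sampling from each (the parameters are arbitrary): the spike-and-slab prior puts positive mass on exactly zero, while a Laplace prior only concentrates mass near zero.

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_and_slab(size, pi=0.1, slab_std=1.0):
    """Draw coefficients that are exactly zero with probability 1-pi
    (the 'spike') and Gaussian otherwise (the 'slab')."""
    active = rng.random(size) < pi
    return active * rng.normal(0, slab_std, size)

def laplace(size, scale=1.0):
    return rng.laplace(0, scale, size)

ss = spike_and_slab(10000)
lp = laplace(10000)
# Spike-and-slab yields exact zeros (true sparsity); Laplace yields
# small but nonzero coefficients.
print(np.mean(ss == 0.0))   # ~0.9
print(np.mean(lp == 0.0))   # 0.0
```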
NASA Technical Reports Server (NTRS)
Lichten, S. M.
1991-01-01
Data from the Global Positioning System (GPS) were used to determine precise polar motion estimates. Conservatively calculated formal errors of the GPS least squares solution are approx. 10 cm. The GPS estimates agree with independently determined polar motion values from very long baseline interferometry (VLBI) at the 5 cm level. The data were obtained from a partial constellation of GPS satellites and from a sparse worldwide distribution of ground stations. The accuracy of the GPS estimates should continue to improve as more satellites and ground receivers become operational, and eventually a near real time GPS capability should be available. Because the GPS data are obtained and processed independently from the large radio antennas at the Deep Space Network (DSN), GPS estimation could provide very precise measurements of Earth orientation for calibration of deep space tracking data and could significantly relieve the ever growing burden on the DSN radio telescopes to provide Earth platform calibrations.
DISTRIBUTIONAL CHANGES AND POPULATION STATUS FOR AMPHIBIANS IN THE EASTERN MOJAVE DESERT
A number of amphibian species historically inhabited sparsely distributed wetlands in the Mojave Desert of western North America, habitats that have been dramatically altered or eliminated as a result of human activities. The population status and distributional changes for amphi...
NASA Astrophysics Data System (ADS)
Su, Wei; Zhou, Ti; Zhang, Peng; Zhou, Hong; Li, Hui
2018-01-01
Some biological surfaces have been shown to have excellent anti-wear performance. Inspired by this, a pulsed Nd:YAG laser was used to create striated biomimetic laser-hardening tracks on medium carbon steel samples. Dry sliding wear tests of the biomimetic samples were performed to investigate the specific influence of the distribution of laser-hardening tracks on sliding wear resistance. After comparing the wear weight loss of the biomimetic, quenched and untreated samples, it can be suggested that the sample covered with dense laser tracks (3.5 mm spacing) has lower wear weight loss than the one covered with sparse laser tracks (4.5 mm spacing); samples with only dense or only sparse laser tracks (even distribution) proved to have better wear resistance than samples with both dense and sparse tracks (uneven distribution). The wear mechanisms indicate that the laser track and the exposed substrate of a biomimetic sample can be regarded as a hard zone and a soft zone, respectively. Inconsecutive striated hard regions, on the one hand, disperse the load into small branches and, on the other hand, hinder sliding abrasives during wear. Soft regions of small extent are beneficial for consuming mechanical energy and storing lubricative oxides; however, a soft zone of large width (>0.5 mm) is harmful to the abrasion resistance of a biomimetic sample because damage and material loss are more pronounced on the surface of the soft phase. The better wear resistance of samples with evenly distributed bionic laser tracks can be explained by the fact that evenly distributed laser-hardening tracks inhibit severe wear of local regions, so that sliding is more stable and the extent of wear is alleviated.
On the feasibility of measuring urban air pollution by wireless distributed sensor networks.
Moltchanov, Sharon; Levy, Ilan; Etzion, Yael; Lerner, Uri; Broday, David M; Fishbain, Barak
2015-01-01
Accurate evaluation of the effects of air pollution on human wellbeing requires high-resolution measurements. Standard air quality monitoring stations provide accurate pollution levels, but due to their sparse distribution they cannot capture the highly resolved spatial variations within cities. Similarly, dedicated field campaigns can use tens of measurement devices and obtain highly dense spatial coverage, but deployment has normally been limited to short periods of no more than a few weeks. Nowadays, advances in communication and sensory technologies enable the deployment of dense grids of wireless distributed air monitoring nodes, yet the ability of their sensors to capture the spatiotemporal pollutant variability at the sub-neighborhood scale has never been thoroughly tested. This study reports ambient measurements of gaseous air pollutants by a network of six wireless multi-sensor miniature nodes that were deployed at three urban sites, about 150 m apart. We demonstrate the network's capability to capture spatiotemporal concentration variations at an exceptionally fine resolution but highlight the need for frequent in-situ calibration to maintain the consistency of some sensors. Accordingly, a procedure for field calibration is proposed and shown to improve the system's performance. Overall, our results support the compatibility of wireless distributed sensor networks for measuring urban air pollution at a sub-neighborhood spatial resolution, which suits the requirement for highly spatiotemporally resolved measurements at breathing height when assessing exposure to urban air pollution.
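Such an in-situ calibration can be sketched as a gain/offset regression of each low-cost sensor against a co-located reference instrument (the drift model, units and numbers below are invented, not the study's procedure):

```python
import numpy as np

rng = np.random.default_rng(2)

def field_calibrate(raw, reference):
    """Fit gain and offset of a low-cost sensor against a co-located
    reference instrument by ordinary least squares; a sketch of a
    periodic in-situ calibration step."""
    A = np.vstack([raw, np.ones_like(raw)]).T
    (gain, offset), *_ = np.linalg.lstsq(A, reference, rcond=None)
    return gain, offset

# Synthetic drifted sensor: true gain 1.8, offset 12 ppb, plus noise
true_conc = rng.uniform(10, 80, 200)                  # reference readings (ppb)
raw = (true_conc - 12) / 1.8 + rng.normal(0, 0.5, 200)
gain, offset = field_calibrate(raw, true_conc)
corrected = gain * raw + offset
print(round(gain, 2), round(offset, 2))  # close to the true 1.8 and 12
```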
AZTEC. Parallel Iterative method Software for Solving Linear Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, S.; Shadid, J.; Tuminaro, R.
1995-07-01
AZTEC is an iterative-solver library that greatly simplifies the parallelization process when solving the linear system of equations Ax = b, where A is a user-supplied n × n sparse matrix, b is a user-supplied vector of length n, and x is a vector of length n to be computed. AZTEC is intended as a software tool for users who want to avoid cumbersome parallel programming details but who have large sparse linear systems that require an efficiently utilized parallel processing system. A collection of data transformation tools is provided that allows for easy creation of distributed sparse unstructured matrices for parallel solution.
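The core kernel such a library parallelizes can be illustrated with a minimal serial compressed-sparse-row (CSR) matrix-vector product (a sketch only; AZTEC's actual distributed data structures and interface are not shown). In the distributed setting, each processor owns a block of rows and computes its slice of y = Ax, communicating only the x entries its columns reference.

```python
import numpy as np

def to_csr(dense):
    """Minimal CSR storage (row pointers, column indices, values)."""
    indptr, indices, data = [0], [], []
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                indices.append(j)
                data.append(v)
        indptr.append(len(indices))
    return np.array(indptr), np.array(indices), np.array(data, float)

def csr_matvec(indptr, indices, data, x):
    """y = A @ x, one row at a time: each row's slice of the product
    is independent, which is what a row-wise distribution exploits."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = data[lo:hi] @ x[indices[lo:hi]]
    return y

A = np.array([[4., 0, 1], [0, 3, 0], [1, 0, 2]])
x = np.array([1., 2., 3.])
indptr, indices, data = to_csr(A)
print(csr_matvec(indptr, indices, data, x))  # [7. 6. 7.]
```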
On Edge Exchangeable Random Graphs
NASA Astrophysics Data System (ADS)
Janson, Svante
2017-06-01
We study a recent model for edge exchangeable random graphs introduced by Crane and Dempsey; in particular we study asymptotic properties of the random simple graph obtained by merging multiple edges. We study a number of examples, and show that the model can produce dense, sparse and extremely sparse random graphs. One example yields a power-law degree distribution. We give some examples where the random graph is dense and converges a.s. in the sense of graph limit theory, but also an example where a.s. every graph limit is the limit of some subsequence. Another example is sparse and yields convergence to a non-integrable generalized graphon defined on (0,∞).
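The merge-multi-edges construction can be sketched crudely (the heavy-tailed vertex weights and all parameters below are illustrative stand-ins, not Crane and Dempsey's exact model): edges are sampled i.i.d. from a fixed distribution over vertex pairs, and repeated edges are then collapsed into a simple graph.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)

def edge_exchangeable_graph(n_edges, alpha=1.5, n_vertices=1000):
    """Sample edges i.i.d. from a heavy-tailed distribution over
    vertex pairs, then merge multiplicities into a simple graph."""
    w = np.arange(1, n_vertices + 1, dtype=float) ** (-alpha)
    p = w / w.sum()
    ends = rng.choice(n_vertices, size=(n_edges, 2), p=p)
    # Drop self-loops, merge multi-edges into one simple edge each
    multi = Counter(tuple(sorted(e)) for e in ends if e[0] != e[1])
    return set(multi)

simple = edge_exchangeable_graph(5000)
# Merging collapses repeats: heavy-tailed weights concentrate many
# samples on a few popular pairs, so the simple graph has far fewer
# edges than were sampled.
print(len(simple) < 5000)  # True
```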
NASA Astrophysics Data System (ADS)
Galiatsatos, P. G.; Tennyson, J.
2012-11-01
The most time-consuming step within the framework of the UK R-matrix molecular codes is the diagonalization of the inner-region Hamiltonian matrix (IRHM). Here we present the method that we follow to speed up this step. We use shared-memory machines (SMM), distributed-memory machines (DMM), the OpenMP directive-based parallel language, the MPI function-based parallel language, the sparse-matrix diagonalizers ARPACK and PARPACK, a variation for real symmetric matrices of the official coordinate sparse-matrix format, and finally a parallel sparse matrix-vector product (PSMV). The efficient application of these techniques relies on two important facts: the sparsity of the matrix is large enough (more than 98%), and to obtain converged results we need only a small part of the matrix spectrum.
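ARPACK's role described above, extracting only a few extreme eigenvalues of a large sparse symmetric matrix, can be sketched through SciPy's `eigsh` wrapper around ARPACK; the random test matrix and its 1% density are assumptions for illustration, not the actual IRHM.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import eigsh

# Illustrative only: ARPACK (via SciPy's eigsh) recovers a small part of
# the spectrum of a sparse symmetric matrix -- the regime the abstract
# describes (>98% sparsity, only a few eigenpairs needed).
rng = np.random.default_rng(0)
n = 500
M = sparse_random(n, n, density=0.01, random_state=rng, format="csr")
H = (M + M.T) / 2.0                       # symmetrize

# Six eigenvalues of largest magnitude, without forming a dense matrix.
vals = eigsh(H, k=6, which="LM", return_eigenvectors=False)

# Cross-check against a dense eigensolver (feasible only at this toy size).
dense_vals = np.linalg.eigvalsh(H.toarray())
top6 = dense_vals[np.argsort(np.abs(dense_vals))[-6:]]
```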
Multi-Source Cooperative Data Collection with a Mobile Sink for the Wireless Sensor Network.
Han, Changcai; Yang, Jinsheng
2017-10-30
The multi-source cooperation integrating distributed low-density parity-check codes is investigated to jointly collect data from multiple sensor nodes to the mobile sink in the wireless sensor network. One-round and two-round cooperative data collection schemes are proposed according to the moving trajectories of the sink node. Specifically, two sparse cooperation models are first formed based on the geographical locations of the sensor source nodes, the impairment of inter-node wireless channels, and the moving trajectories of the mobile sink. Then, distributed low-density parity-check codes are devised to match the directed graphs and cooperation matrices associated with the cooperation models. In the proposed schemes, each source node has quite low complexity, attributable to the sparse cooperation and the distributed processing. Simulation results reveal that the proposed cooperative data collection schemes achieve good bit error rate performance and that the two-round cooperation exhibits better performance than the one-round scheme. The performance can be further improved when more source nodes participate in the sparse cooperation. For the two-round data collection schemes, the performance is evaluated for wireless sensor networks with different moving trajectories and varying data sizes.
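The parity-check idea underlying LDPC codes can be illustrated with a toy systematic code; the matrix below is invented for the example and is not the paper's distributed construction.

```python
import numpy as np

# Toy systematic parity-check example (not the paper's distributed LDPC
# construction): H = [A | I] over GF(2); a codeword c = [m, p] with
# p = A.m (mod 2) always satisfies H.c = 0 (mod 2).
A = np.array([[1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 1]])             # sparse-ish parity part
H = np.hstack([A, np.eye(3, dtype=int)])

m = np.array([1, 0, 1, 1])               # message bits
p = A.dot(m) % 2                         # parity bits
c = np.concatenate([m, p])               # transmitted codeword

syndrome = H.dot(c) % 2                  # all-zero for a valid codeword

c_err = c.copy()
c_err[0] ^= 1                            # a single bit error ...
bad = H.dot(c_err) % 2                   # ... yields a nonzero syndrome
```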
NASA Astrophysics Data System (ADS)
de Wachter, E.; Haefele, A.; Kaempfer, N.; Ka, S.; Oh, J.
2009-04-01
The University of Bern operates two ground-based microwave radiometers to measure the water vapour content in the stratosphere and mesosphere. One instrument is located near Bern [47°N, 7°E], Switzerland, and has been providing data since 2002 to the Network for the Detection of Atmospheric Composition Change (NDACC), as well as to the European project GEOmon. The second radiometer has been operational in Seoul [37°N, 126°E], South Korea, since November 2006. Both instruments provide water vapour profiles in the altitude range 25 to 70 km. Long-term measurements of middle-atmospheric water vapour by ground-based microwave instruments are sparse. These instruments provide long-term stability and high time resolution, and are in this sense ideal for short-time-scale variability studies, monitoring long-term trends, and validation of satellites. An analysis of these two years of overlapping datasets from the European and Asian continents can provide valuable input on the distribution of wave patterns. In this study, we present the measurement characteristics of the instruments and validate our data with water vapour profiles from the Aura/MLS instrument. In addition, we investigate correlations between these two midlatitude stations, gathering information on the spatial distribution of water vapour, particularly for pressures from 1 to 0.03 hPa.
On evaluating the robustness of spatial-proximity-based regionalization methods
NASA Astrophysics Data System (ADS)
Lebecherel, Laure; Andréassian, Vazken; Perrin, Charles
2016-08-01
In the absence of streamflow data to calibrate a hydrological model, its parameters must be inferred by a regionalization method. In this technical note, we discuss a specific class of regionalization methods, those based on spatial proximity, which transfer hydrological information (typically calibrated parameter sets) from neighboring gauged stations to the target ungauged station. The efficiency of any spatial-proximity-based regionalization method will depend on the density of the available streamgauging network, and the purpose of this note is to discuss how to assess the robustness of the regionalization method (i.e., its resilience to an increasingly sparse hydrometric network). We compare two options: (i) the random hydrometrical reduction (HRand) method, which consists in sub-sampling the existing gauging network around the target ungauged station, and (ii) the hydrometrical desert method (HDes), which consists in ignoring the closest gauged stations. Our tests suggest that the HDes method should be preferred, because it provides a more realistic view of regionalization performance.
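The two robustness tests can be sketched as donor-selection rules; the station coordinates, target location, and thresholds below are invented for illustration.

```python
import numpy as np

# Hypothetical sketch of the two network-degradation tests: HDes ignores
# all donor stations within a growing radius of the target, while HRand
# sub-samples the donor network at random.
rng = np.random.default_rng(42)
stations = rng.uniform(0, 100, size=(30, 2))   # gauged donor stations
target = np.array([50.0, 50.0])                # ungauged target site

dist = np.linalg.norm(stations - target, axis=1)

def hdes_donors(radius):
    """Hydrometrical desert: keep only donors farther than `radius`."""
    return stations[dist > radius]

def hrand_donors(n_keep):
    """Random hydrometrical reduction: keep a random subset of donors."""
    idx = rng.choice(len(stations), size=n_keep, replace=False)
    return stations[idx]

far = hdes_donors(radius=20.0)
sub = hrand_donors(n_keep=10)
```

A regionalization scheme would then transfer calibrated parameters from the retained donors and compare the resulting streamflow simulations against the withheld gauge.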
The effects of missing data on global ozone estimates
NASA Technical Reports Server (NTRS)
Drewry, J. W.; Robbins, J. L.
1981-01-01
The effects of missing data and model truncation on estimates of the global mean, zonal distribution, and global distribution of ozone are considered. It is shown that missing data can introduce biased estimates with errors that are not accounted for in the accuracy calculations of empirical modeling techniques. Data-fill techniques are introduced and used for evaluating error bounds and constraining the estimate in areas of sparse and missing data. It is found that the accuracy of the global mean estimate is more dependent on data distribution than model size. Zonal features can be accurately described by 7th order models over regions of adequate data distribution. Data variance accounted for by higher order models appears to represent climatological features of columnar ozone rather than pure error. Data-fill techniques can prevent artificial feature generation in regions of sparse or missing data without degrading high order estimates over dense data regions.
Pole-Like Road Furniture Detection in Sparse and Unevenly Distributed Mobile Laser Scanning Data
NASA Astrophysics Data System (ADS)
Li, F.; Lehtomäki, M.; Oude Elberink, S.; Vosselman, G.; Puttonen, E.; Kukko, A.; Hyyppä, J.
2018-05-01
Pole-like road furniture detection has received much attention in recent years due to its traffic functionality. In this paper, we develop a framework to detect pole-like road furniture from sparse mobile laser scanning data. The framework is carried out in four steps. The unorganised point cloud is first partitioned. Then, after removing ground points, the above-ground points are clustered and roughly classified. A slicing check in combination with cylinder masking is proposed to extract pole-like road furniture candidates. Pole-like road furniture is obtained after occlusion analysis in the last stage. The average completeness and correctness of pole-like road furniture detection in sparse and unevenly distributed mobile laser scanning data were above 0.83. This is comparable to the state of the art in the field of pole-like road furniture detection in mobile laser scanning data of good quality, and the framework is potentially of practical use in the processing of point clouds collected by autonomous driving platforms.
Data traffic reduction schemes for sparse Cholesky factorizations
NASA Technical Reports Server (NTRS)
Naik, Vijay K.; Patrick, Merrell L.
1988-01-01
Load distribution schemes are presented which minimize the total data traffic in the Cholesky factorization of dense and sparse, symmetric, positive definite matrices on multiprocessor systems with local and shared memory. The total data traffic in factoring an n x n sparse, symmetric, positive definite matrix representing an n-vertex regular 2-D grid graph using n^alpha, alpha <= 1, processors is shown to be O(n^(1+alpha/2)). It is O(n^(3/2)) when n^alpha, alpha >= 1, processors are used. Under the conditions of uniform load distribution, these results are shown to be asymptotically optimal. The schemes allow efficient use of up to O(n) processors before the total data traffic reaches the maximum value of O(n^(3/2)). The partitioning employed within the scheme allows a better utilization of the data accessed from shared memory than previously published methods.
Reconstructing cortical current density by exploring sparseness in the transform domain
NASA Astrophysics Data System (ADS)
Ding, Lei
2009-05-01
In the present study, we have developed a novel electromagnetic source imaging approach to reconstruct extended cortical sources by means of cortical current density (CCD) modeling and a novel EEG imaging algorithm which explores sparseness in cortical source representations through the use of L1-norm in objective functions. The new sparse cortical current density (SCCD) imaging algorithm is unique since it reconstructs cortical sources by attaining sparseness in a transform domain (the variation map of cortical source distributions). While large variations are expected to occur along boundaries (sparseness) between active and inactive cortical regions, cortical sources can be reconstructed and their spatial extents can be estimated by locating these boundaries. We studied the SCCD algorithm using numerous simulations to investigate its capability in reconstructing cortical sources with different extents and in reconstructing multiple cortical sources with different extent contrasts. The SCCD algorithm was compared with two L2-norm solutions, i.e. weighted minimum norm estimate (wMNE) and cortical LORETA. Our simulation data from the comparison study show that the proposed sparse source imaging algorithm is able to accurately and efficiently recover extended cortical sources and is promising to provide high-accuracy estimation of cortical source extents.
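The role of the L1-norm in producing sparse source estimates can be illustrated with a generic iterative soft-thresholding (ISTA) sketch. This is not the SCCD algorithm itself, and the lead-field matrix and source pattern below are synthetic assumptions.

```python
import numpy as np

# Generic L1-norm sketch (ISTA) showing how an L1 penalty recovers a
# sparse source vector from underdetermined linear measurements:
# minimize 0.5*||Ax - b||^2 + lam*||x||_1.
rng = np.random.default_rng(1)
m, n = 60, 120
A = rng.standard_normal((m, n)) / np.sqrt(m)   # made-up lead field
x_true = np.zeros(n)
x_true[[5, 40, 77]] = [2.0, -1.5, 3.0]         # sparse "source" pattern
b = A @ x_true

lam = 0.02                                     # L1 weight
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz const. of grad
x = np.zeros(n)
for _ in range(2000):
    g = A.T @ (A @ x - b)                      # gradient of smooth term
    z = x - g / L                              # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
```

The soft-thresholding step zeroes out small coefficients, which is the mechanism by which L1 objectives such as SCCD's yield sparse solutions.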
NASA Astrophysics Data System (ADS)
Schlömer, Antje; Geissler, Wolfram H.; Jokat, Wilfried; Jegen, Marion
2017-12-01
Earthquake locations along the southern Mid-Atlantic Ridge have large uncertainties due to the sparse distribution of permanent seismological stations in and around the South Atlantic Ocean. Most of the earthquakes are associated with plate tectonic processes related to the formation of new oceanic lithosphere, as they are located close to the ridge axis or in the immediate vicinity of transform faults. A local seismological network of ocean-bottom seismometers and land stations on and around the archipelago of Tristan da Cunha allowed, for the first time, a local earthquake survey over 1 year. We relate intraplate seismicity within the African oceanic plate segment north of the island partly to extensional stresses induced by a bordering large transform fault and to the existence of the Tristan mantle plume. The temporal propagation of earthquakes within the segment reflects the prevailing stress field. The strong extensional stresses, together with the plume, weaken the lithosphere and might hint at an incipient ridge jump. An apparently aseismic zone coincides with the proposed location of the Tristan conduit in the upper mantle southwest of the islands. The margins of this zone describe the transition between the ductile and the surrounding brittle regime. Moreover, we observe seismicity close to the islands of Tristan da Cunha and nearby seamounts, which we relate to ongoing tectono-magmatic activity.
Utilizing Multiple Datasets for Snow Cover Mapping
NASA Technical Reports Server (NTRS)
Tait, Andrew B.; Hall, Dorothy K.; Foster, James L.; Armstrong, Richard L.
1999-01-01
Snow-cover maps generated from surface data are based on direct measurements; however, they are prone to interpolation errors where climate stations are sparsely distributed. Snow cover is clearly discernible in satellite-acquired optical data because of the high albedo of snow, yet the surface is often obscured by cloud cover. Passive microwave (PM) data are unaffected by clouds; however, the snow-cover signature is significantly affected by melting snow, and the microwaves may be transparent to thin snow (less than 3 cm). Both optical and microwave sensors have problems discerning snow beneath forest canopies. This paper describes a method that combines ground and satellite data to produce a Multiple-Dataset Snow-Cover Product (MDSCP). Comparisons with current snow-cover products show that the MDSCP draws together the advantages of each of its component products while minimizing their potential errors. Improved estimates of the snow-covered area are derived through the addition of two snow-cover classes ("thin or patchy" and "high elevation" snow cover) and from the analysis of the climate station data within each class. The compatibility of this method with Moderate Resolution Imaging Spectroradiometer (MODIS) data, which will be available in 2000, is also discussed. With the assimilation of these data, the resolution of the MDSCP would be improved both spatially and temporally and the analysis would become completely automated.
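The dataset-combination logic can be sketched as a priority merge that prefers each sensor where it is reliable; the toy grids below are assumptions for illustration, not the actual MDSCP classes.

```python
import numpy as np

# Hypothetical merge rule in the spirit of the MDSCP: prefer optical
# retrievals where cloud-free, fall back to passive microwave, and fall
# back again to station-based estimates (1 = snow, 0 = no snow).
optical = np.array([1.0, np.nan, np.nan, 0.0])   # NaN where cloudy
microwave = np.array([1.0, 1.0, np.nan, 0.0])    # NaN where snow too thin
station = np.array([1.0, 1.0, 1.0, 0.0])         # interpolated surface data

merged = np.where(~np.isnan(optical), optical,
                  np.where(~np.isnan(microwave), microwave, station))
```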
The Budget Guide to Seismic Network Management
NASA Astrophysics Data System (ADS)
Hagerty, M. T.; Ebel, J. E.
2007-05-01
Regardless of their size, there are certain tasks that all seismic networks must perform, including data collection and processing, earthquake location, information dissemination, and quality control. Small seismic networks are unlikely to possess the resources -- manpower and money -- required to do much in-house development. Fortunately, there are a lot of free or inexpensive software solutions available that are able to perform many of the required tasks. Often the available solutions are all-in-one turnkey packages designed and developed for much larger seismic networks, and the cost of adapting them to a smaller network must be weighed against the ease with which other, non-seismic software can be adapted to the same task. We describe here the software and hardware choices we have made for the New England Seismic Network (NESN), a sparse regional seismic network responsible for monitoring and reporting all seismicity within the New England region in the northeastern U.S. We have chosen to use a cost-effective approach to monitoring using free, off-the-shelf solutions where available (e.g., Earthworm, HYP2000) and modifying freeware solutions when it is easier than trying to adapt a large, complicated package. We have selected for use software that is: free, likely to receive continued support from the seismic or, preferably, larger internet community, and modular. Modularity is key to our design because it ensures that if one component of our processing system becomes obsolete, we can insert a suitable replacement with few modifications to the other modules. Our automated event detection, identification and location system is based on a wavelet transform analysis of station data that arrive continuously via TCP/IP transmission over the internet. 
Our system for interactive analyst review of seismic events and remote system monitoring utilizes a combination of Earthworm modules, Perl cgi-bin scripts, Java, and native Unix commands and can now be carried out via internet browser from anywhere in the world. With our current communication and processing system we are able to achieve a monitoring threshold of about M2.0 for most of New England, in spite of high cultural noise and sparse station distribution, and to maintain an extremely high rate of data recovery, at minimal cost.
Schönbrodt-Stitt, Sarah; Bosch, Anna; Behrens, Thorsten; Hartmann, Heike; Shi, Xuezheng; Scholten, Thomas
2013-10-01
In densely populated countries like China, clean water is one of the most challenging issues of prospective politics and environmental planning. Water pollution and eutrophication by excessive input of nitrogen and phosphorus from nonpoint sources is mostly linked to soil erosion from agricultural land. In order to prevent such water pollution by diffuse matter fluxes, knowledge about the extent of soil loss and the spatial distribution of hot spots of soil erosion is essential. In remote areas such as the mountainous regions of the upper and middle reaches of the Yangtze River, rainfall data are scarce. Since rainfall erosivity is one of the key factors in soil erosion modeling, e.g., expressed as the R factor in the Revised Universal Soil Loss Equation model, a methodology is needed to spatially determine rainfall erosivity. Our study aims at the approximation and spatial regionalization of rainfall erosivity from sparse data in the large (3,200 km²) and strongly mountainous catchment of the Xiangxi River, a first-order tributary of the Yangtze River close to the Three Gorges Dam. As rainfall data were only obtainable as daily records for one climate station in the central part of the catchment and five stations in its surrounding area, we approximated rainfall erosivity as R factors using regression analysis combined with elevation bands derived from a digital elevation model. The mean annual R factor (Ra) amounts to approximately 5,222 MJ mm ha⁻¹ h⁻¹ a⁻¹. With increasing altitude, Ra rises to a maximum of 7,547 MJ mm ha⁻¹ h⁻¹ a⁻¹ at an altitude of 3,078 m a.s.l. At the outlet of the Xiangxi catchment, erosivity is at its minimum, with Ra of approximately 1,986 MJ mm ha⁻¹ h⁻¹ a⁻¹.
The comparison of our results with R factors from high-resolution measurements at comparable study sites close to the Xiangxi catchment shows good consistency and allows us to calculate grid-based Ra as input for a spatially high-resolution and area-specific assessment of soil erosion risk.
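The regression-with-elevation-bands approach can be sketched as follows; the station elevations and Ra values below are invented for illustration and are not the study's data.

```python
import numpy as np

# Hypothetical sketch of regionalizing rainfall erosivity from sparse
# stations: fit mean annual R factor (Ra) against station elevation,
# then predict Ra on elevation bands derived from a DEM.
station_elev = np.array([150.0, 400.0, 800.0, 1200.0, 1900.0, 2600.0])
station_Ra = np.array([2100.0, 2600.0, 3500.0, 4300.0, 5800.0, 7000.0])

slope, intercept = np.polyfit(station_elev, station_Ra, 1)  # linear fit

dem_bands = np.arange(200.0, 3100.0, 500.0)      # elevation band centers
Ra_grid = slope * dem_bands + intercept          # regionalized Ra per band
```

Each DEM grid cell is then assigned the Ra of its elevation band, producing a spatially continuous erosivity input for the soil loss model.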
Lg-Wave Cross Correlation and Epicentral Double-Difference Location in and near China
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaff, David P.; Richards, Paul G.; Slinkard, Megan
In this paper, we perform epicentral relocations for a broad area using cross-correlation measurements made on Lg waves recorded at regional distances on a sparse station network. Using a two-step procedure (pairwise locations and cluster locations), we obtain final locations for 5623 events—3689 for all of China from 1985 to 2005 and 1934 for the Wenchuan area from May to August 2008. These high-quality locations comprise 20% of a starting catalog for all of China and 25% of a catalog for Wenchuan. Of the 1934 events located for Wenchuan, 1662 (86%) were newly detected. The final locations explain the residuals 89 times better than the catalog locations for all of China (3.7302–0.0417 s) and 32 times better than the catalog locations for Wenchuan (0.8413–0.0267 s). The average semimajor axes of the 95% confidence ellipses are 420 m for all of China and 370 m for Wenchuan. The average azimuthal gaps are 205° for all of China and 266° for Wenchuan. 98% of the station distances for all of China are over 200 km. The mean and maximum station distances are 898 and 2174 km. The robustness of our location estimates and various trade-offs and sensitivities are explored with different inversion parameters for the location, such as starting locations for iterative solutions and which singular values to include. Finally, our results provide order-of-magnitude improvements in locations for event clusters, using waveforms from a very sparse far-regional network for which data are openly available.
2016-05-01
This report addresses the classification of acoustic transients in the presence of large but correlated noise and signal interference (i.e., low-rank interference). Contributions include a joint sparse representation with low-rank interference, a simultaneous group-and-joint sparse representation, and an implementation of deep learning methods. Keywords: sparse representation, low rank, deep learning. Tung-Duong Tran-Luu. Approved for public release; distribution unlimited.
Application distribution model and related security attacks in VANET
NASA Astrophysics Data System (ADS)
Nikaein, Navid; Kanti Datta, Soumya; Marecar, Irshad; Bonnet, Christian
2013-03-01
In this paper, we present a model for application distribution and related security attacks in dense vehicular ad hoc networks (VANET) and sparse VANET which forms a delay tolerant network (DTN). We study the vulnerabilities of VANET to evaluate the attack scenarios and introduce a new attacker's model as an extension to the work done in [6]. Then a VANET model has been proposed that supports the application distribution through proxy app stores on top of mobile platforms installed in vehicles. The steps of application distribution have been studied in detail. We have identified key attacks (e.g. malware, spamming and phishing, software attack and threat to location privacy) for dense VANET and two attack scenarios for sparse VANET. It has been shown that attacks can be launched by distributing malicious applications and injecting malicious codes to On Board Unit (OBU) by exploiting OBU software security holes. Consequences of such security attacks have been described. Finally, countermeasures including the concepts of sandbox have also been presented in depth.
NASA Astrophysics Data System (ADS)
Hyman, J. D.; Aldrich, G.; Viswanathan, H.; Makedonska, N.; Karra, S.
2016-08-01
We characterize how different fracture size-transmissivity relationships influence flow and transport simulations through sparse three-dimensional discrete fracture networks. Although it is generally accepted that there is a positive correlation between a fracture's size and its transmissivity/aperture, the functional form of that relationship remains a matter of debate. Relationships that assume perfect correlation, semicorrelation, and noncorrelation between the two have been proposed. To study the impact that adopting one of these relationships has on transport properties, we generate multiple sparse fracture networks composed of circular fractures whose radii follow a truncated power law distribution. The distribution of transmissivities are selected so that the mean transmissivity of the fracture networks are the same and the distributions of aperture and transmissivity in models that include a stochastic term are also the same. We observe that adopting a correlation between a fracture size and its transmissivity leads to earlier breakthrough times and higher effective permeability when compared to networks where no correlation is used. While fracture network geometry plays the principal role in determining where transport occurs within the network, the relationship between size and transmissivity controls the flow speed. These observations indicate DFN modelers should be aware that breakthrough times and effective permeabilities can be strongly influenced by such a relationship in addition to fracture and network statistics.
NASA Astrophysics Data System (ADS)
Hyman, J.; Aldrich, G. A.; Viswanathan, H. S.; Makedonska, N.; Karra, S.
2016-12-01
We characterize how different fracture size-transmissivity relationships influence flow and transport simulations through sparse three-dimensional discrete fracture networks. Although it is generally accepted that there is a positive correlation between a fracture's size and its transmissivity/aperture, the functional form of that relationship remains a matter of debate. Relationships that assume perfect correlation, semi-correlation, and non-correlation between the two have been proposed. To study the impact that adopting one of these relationships has on transport properties, we generate multiple sparse fracture networks composed of circular fractures whose radii follow a truncated power law distribution. The distribution of transmissivities are selected so that the mean transmissivity of the fracture networks are the same and the distributions of aperture and transmissivity in models that include a stochastic term are also the same. We observe that adopting a correlation between a fracture size and its transmissivity leads to earlier breakthrough times and higher effective permeability when compared to networks where no correlation is used. While fracture network geometry plays the principal role in determining where transport occurs within the network, the relationship between size and transmissivity controls the flow speed. These observations indicate DFN modelers should be aware that breakthrough times and effective permeabilities can be strongly influenced by such a relationship in addition to fracture and network statistics.
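The three size-transmissivity relationships can be sketched as follows; all parameter values are invented for illustration and are not the study's.

```python
import numpy as np

# Sketch of the three fracture size-transmissivity models compared above.
# Radii follow a truncated power law; transmissivity T is either perfectly
# correlated with radius r, semi-correlated (power law plus lognormal
# noise), or uncorrelated (pure noise).
rng = np.random.default_rng(7)
n, r_min, r_max, gamma = 5000, 1.0, 10.0, 2.5

# Inverse-CDF sampling of a truncated power law p(r) ~ r^(-gamma).
u = rng.uniform(size=n)
a = 1.0 - gamma
r = (r_min**a + u * (r_max**a - r_min**a)) ** (1.0 / a)

alpha, beta, sigma = 1e-9, 2.0, 0.5            # invented scaling params
noise = rng.normal(0.0, sigma, size=n)

T_perfect = alpha * r**beta                    # perfect correlation
T_semi = alpha * r**beta * 10.0**noise         # semi-correlated
T_none = alpha * 10.0**noise                   # no correlation with r
```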
Massively parallel sparse matrix function calculations with NTPoly
NASA Astrophysics Data System (ADS)
Dawson, William; Nakajima, Takahito
2018-04-01
We present NTPoly, a massively parallel library for computing the functions of sparse, symmetric matrices. The theory of matrix functions is a well-developed framework with a wide range of applications including differential equations, graph theory, and electronic structure calculations. One particularly important application area is diagonalization-free methods in quantum chemistry. When the input and output of the matrix function are sparse, methods based on polynomial expansions can be used to compute matrix functions in linear time. We present a library based on these methods that can compute a variety of matrix functions. Distributed memory parallelization is based on a communication-avoiding sparse matrix multiplication algorithm. OpenMP task parallelization is utilized to implement hybrid parallelization. We describe NTPoly's interface and show how it can be integrated with programs written in many different programming languages. We demonstrate the merits of NTPoly by performing large scale calculations on the K computer.
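The polynomial-expansion idea can be sketched serially with a truncated Taylor series for exp(A). This is a dense-checked toy, not NTPoly's distributed implementation, and the tridiagonal test matrix is an assumption for the example.

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse import diags

# Sketch of a polynomial-expansion matrix function: approximate exp(A)
# for a sparse symmetric A by a truncated Taylor series built from
# repeated sparse matrix products, then check against a dense reference.
n = 50
A = diags([0.1, -0.2, 0.1], [-1, 0, 1], shape=(n, n), format="csr")

term = np.eye(n)                 # running term A^k / k!, starting at k = 0
approx = np.eye(n)
for k in range(1, 20):           # truncated series sum_k A^k / k!
    term = (A @ term) / k        # one sparse product per series term
    approx = approx + term

exact = expm(A.toarray())
err = np.abs(approx - exact).max()
```

When A and exp(A) are both sparse, thresholding small entries after each product keeps every term sparse, which is what makes the linear-scaling claim possible.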
Xuan, Junyu; Lu, Jie; Zhang, Guangquan; Xu, Richard Yi Da; Luo, Xiangfeng
2018-05-01
Sparse nonnegative matrix factorization (SNMF) aims to factorize a data matrix into two optimized nonnegative sparse factor matrices, which could benefit many tasks, such as document-word co-clustering. However, traditional SNMF typically assumes the number of latent factors (i.e., the dimensionality of the factor matrices) to be fixed. This assumption makes it inflexible in practice. In this paper, we propose a doubly sparse nonparametric NMF framework to mitigate this issue by using dependent Indian buffet processes (dIBP). We apply a correlation function for the generation of two stick weights associated with each column pair of factor matrices while still maintaining their respective marginal distributions specified by IBP. As a consequence, the generation of the two factor matrices will be columnwise correlated. Under this framework, two classes of correlation function are proposed: 1) using the bivariate Beta distribution and 2) using a Copula function. Compared with single IBP-based NMF, our framework jointly makes the two factor matrices nonparametric and sparse, and so can be applied to broader scenarios, such as co-clustering. The proposed model is seen to be much more flexible than Gaussian process-based and hierarchical Beta process-based dIBPs in terms of allowing the two corresponding binary matrix columns to have greater variations in their nonzero entries. Our experiments on synthetic data show the merits of the proposed model compared with the state-of-the-art models in respect of factorization efficiency, sparsity, and flexibility. Experiments on real-world data sets demonstrate its efficiency in document-word co-clustering tasks.
A distributed planning concept for Space Station payload operations
NASA Technical Reports Server (NTRS)
Hagopian, Jeff; Maxwell, Theresa; Reed, Tracey
1994-01-01
The complex and diverse nature of the payload operations to be performed on the Space Station requires a robust and flexible planning approach. The planning approach for Space Station payload operations must support the phased development of the Space Station, as well as the geographically distributed users of the Space Station. To date, the planning approach for manned operations in space has been one of centralized planning to the n-th degree of detail. This approach, while valid for short duration flights, incurs high operations costs and is not conducive to long duration Space Station operations. The Space Station payload operations planning concept must reduce operations costs, accommodate phased station development, support distributed users, and provide flexibility. One way to meet these objectives is to distribute the planning functions across a hierarchy of payload planning organizations based on their particular needs and expertise. This paper presents a planning concept which satisfies all phases of the development of the Space Station (manned Shuttle flights, unmanned Station operations, and permanent manned operations), and the migration from centralized to distributed planning functions. Identified in this paper are the payload planning functions which can be distributed and the process by which these functions are performed.
NASA Astrophysics Data System (ADS)
Ghotbi, Saba; Sotoudeheian, Saeed; Arhami, Mohammad
2016-09-01
Satellite remote sensing AOD products from MODIS, along with appropriate meteorological parameters, were used to develop statistical models and estimate ground-level PM10. Most previous studies obtained meteorological data from synoptic weather stations, which have a rather sparse spatial distribution, and used them together with the 10 km AOD product to develop statistical models applicable to PM variations at regional scale (resolution ≥10 km). In the current study, meteorological parameters were simulated at 3 km resolution using the WRF model and used together with the rather new 3 km AOD product (released in 2014). The resulting PM statistical models were assessed for a polluted and highly variable urban area, Tehran, Iran. Despite the area's critical particulate pollution problem, very few PM studies have been conducted there. Direct PM-AOD associations were rather poor, owing to factors such as variations in particle optical properties and the bright-background problem for satellite retrievals, since the study area lies in the semi-arid Middle East. The statistical approach of linear mixed effects (LME) was used, and three statistical models were examined: a single-variable LME model (using AOD as the independent variable) and two multivariable LME models using meteorological data from the two sources, the WRF model and the synoptic stations. Meteorological simulations were performed using a multiscale approach with a physics configuration appropriate for the studied region, and the results showed rather good agreement with recordings of the synoptic stations. The single-variable LME model was able to explain about 61%-73% of daily PM10 variations, reflecting a rather acceptable performance. Model performance improved when multivariable LME was used and meteorological data were incorporated as auxiliary variables, particularly with fine-resolution outputs from WRF (R2 = 0.73-0.81).
In addition, PM estimates were mapped at rather fine resolution for the studied city, and the resulting concentration maps were consistent with PM recordings at the existing stations.
Automatic classification of seismic events within a regional seismograph network
NASA Astrophysics Data System (ADS)
Tiira, Timo; Kortström, Jari; Uski, Marja
2015-04-01
A fully automatic method for seismic event classification within a sparse regional seismograph network is presented. The tool is based on a supervised pattern recognition technique, the Support Vector Machine (SVM), trained here to distinguish weak local earthquakes from a bulk of human-made or spurious seismic events. The classification rules rely on differences in signal energy distribution between natural and artificial seismic sources. Seismic records are divided into four windows: P, P coda, S, and S coda. For each signal window, the short-term average (STA) is computed in 20 narrow frequency bands between 1 and 41 Hz. The resulting 80 discrimination parameters are used as training data for the SVM. The SVM models are calculated for 19 on-line seismic stations in Finland. The event data are compiled mainly from fully automatic event solutions that are manually classified after the automatic location process. The station-specific SVM training events include 11-302 positive (earthquake) and 227-1048 negative (non-earthquake) examples. The best voting rules for combining results from different stations are determined during an independent testing period. Finally, the network processing rules are applied to an independent evaluation period comprising 4681 fully automatic event determinations, of which 98% have been manually identified as explosions or noise and 2% as earthquakes. The SVM method correctly identifies 94% of the non-earthquakes and all of the earthquakes. The results imply that the SVM tool can identify and filter out blasts and spurious events from fully automatic event solutions with a high level of confidence. The tool helps reduce the workload of manual seismic analysis by leaving only ~5% of the automatic event determinations, i.e. the probable earthquakes, for more detailed seismological analysis. The approach presented is easy to adjust to the requirements of a denser or wider high-frequency network, once enough training examples are available for building a station-specific data set.
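The feature extraction stage described above can be sketched as follows: for one signal window, compute an average of spectral energy within narrow frequency bands. This is a simplified illustration under stated assumptions (naive DFT, illustrative band edges); the paper applies it to four windows times 20 bands for 80 features per event.

```python
import cmath

def band_sta(window, fs, bands):
    """Average spectral energy of one signal window in each (lo, hi) Hz band.

    Sketch only: uses a naive O(n^2) DFT for clarity; a real pipeline
    would use an FFT and proper tapering.
    """
    n = len(window)
    spec = [abs(sum(window[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) ** 2 for k in range(n // 2)]
    df = fs / n  # frequency resolution per DFT bin (Hz)
    feats = []
    for lo, hi in bands:
        bins = [spec[k] for k in range(len(spec)) if lo <= k * df < hi]
        feats.append(sum(bins) / len(bins) if bins else 0.0)
    return feats
```

With 20 narrow bands between 1 and 41 Hz, e.g. `bands = [(1 + 2*i, 3 + 2*i) for i in range(20)]`, one such vector per window yields the 80-parameter input to the SVM.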
NASA Astrophysics Data System (ADS)
Salamalikis, V.; Argiriou, A. A.; Dotsika, E.
2016-03-01
In this paper the periodic patterns of the isotopic composition of precipitation (δ18O) at 22 stations located around Central Europe are investigated through sinusoidal models and wavelet analysis over a 23-year period (1980/01-2002/12). The seasonal distribution of δ18O follows the temporal variability of air temperature, with seasonal amplitudes ranging from 0.94‰ to 4.47‰; the monthly isotopic maximum is observed in July. The isotopic amplitude reflects the geographical dependencies of the isotopic composition of precipitation, with higher values when moving inland. In order to describe the dominant oscillation modes in the δ18O time series, the Morlet Continuous Wavelet Transform is evaluated. The main periodicity is found at 12 months (annual periodicity), where the wavelet power is mainly concentrated. Stations with a limited seasonal isotopic effect (e.g. Cuxhaven and Trier) show sparse wavelet power areas at the annual periodicity mode, indicating that precipitation has a complex isotopic fingerprint that cannot be explained solely by the seasonal effect. Since temperature is the main contributor to isotopic variability at mid-latitudes, the isotope-temperature effect is also investigated. The isotope-temperature slope ranges from 0.11‰/°C to 0.47‰/°C, with steeper values observed at the southernmost stations of the study area. Bivariate wavelet analysis is applied in order to determine the correlation and the slope of the δ18O-temperature relationship over the time-frequency plane. High coherencies are detected at the annual periodicity mode. The time-frequency slope at the annual periodicity mode ranges from 0.45‰/°C to 0.83‰/°C, with higher values at stations that show a more distinguishable seasonal isotopic behavior. Generally the slope fluctuates around a mean value, but in certain cases (sites with a weak seasonal effect) abrupt slope changes occur and the slope becomes strongly unstable.
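The sinusoidal-model step above (seasonal amplitude of a monthly series) has a closed-form least-squares solution over whole years, since the annual sine and cosine are orthogonal over full periods. A minimal sketch, with illustrative names:

```python
import math

def seasonal_amplitude(monthly):
    """Fit y_t = a + b*cos(2*pi*t/12) + c*sin(2*pi*t/12) by least squares.

    For a series spanning whole years, the least-squares coefficients
    reduce to the discrete Fourier projections below; the seasonal
    amplitude is sqrt(b^2 + c^2).
    """
    n = len(monthly)  # should be a multiple of 12
    b = 2 / n * sum(y * math.cos(2 * math.pi * t / 12) for t, y in enumerate(monthly))
    c = 2 / n * sum(y * math.sin(2 * math.pi * t / 12) for t, y in enumerate(monthly))
    return math.hypot(b, c)
```

Applied to a station's monthly δ18O series, this returns the amplitude compared across stations in the abstract (0.94‰ to 4.47‰).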
Margin based ontology sparse vector learning algorithm and applied in biology science.
Gao, Wei; Qudair Baig, Abdul; Ali, Haidar; Sajjad, Wasim; Reza Farahani, Mohammad
2017-01-01
In biology, ontology applications involve large amounts of genetic information and chemical information about molecular structure, so ontology concepts convey a great deal of information. Mathematically, the vector corresponding to an ontology concept is therefore often of very high dimension, which places higher demands on ontology algorithms. Against this background, we consider the design of an ontology sparse vector algorithm and its application in biology. In this paper, using marginal likelihood and marginal distributions, an optimized margin-based ontology sparse vector learning algorithm is presented. Finally, the new algorithm is applied to the Gene Ontology and the Plant Ontology to verify its efficiency.
Bi Sparsity Pursuit: A Paradigm for Robust Subspace Recovery
2016-09-27
The success of sparse models in computer vision and machine learning is due to the fact that high-dimensional data is distributed in a union of low-dimensional subspaces in many real-world settings. Keywords: signal recovery, sparse learning, subspace modeling.
Tsunami Size Distributions at Far-Field Locations from Aggregated Earthquake Sources
NASA Astrophysics Data System (ADS)
Geist, E. L.; Parsons, T.
2015-12-01
The distribution of tsunami amplitudes at far-field tide gauge stations is explained by aggregating the probability of tsunamis derived from individual subduction zones and scaled by their seismic moment. The observed tsunami amplitude distributions of both continental (e.g., San Francisco) and island (e.g., Hilo) stations distant from subduction zones are examined. Although the observed probability distributions nominally follow a Pareto (power-law) distribution, there are significant deviations. Some stations exhibit varying degrees of tapering of the distribution at high amplitudes and, in the case of the Hilo station, there is a prominent break in slope on log-log probability plots. There are also differences in the slopes of the observed distributions among stations that can be significant. To explain these differences we first estimate seismic moment distributions of observed earthquakes for major subduction zones. Second, regression models are developed that relate the tsunami amplitude at a station to seismic moment at a subduction zone, correcting for epicentral distance. The seismic moment distribution is then transformed to a site-specific tsunami amplitude distribution using the regression model. Finally, a mixture distribution is developed, aggregating the transformed tsunami distributions from all relevant subduction zones. This mixture distribution is compared to the observed distribution to assess the performance of the method described above. This method allows us to estimate the largest tsunami that can be expected in a given time period at a station.
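The aggregation step described above can be sketched with a simple stand-in: give each subduction zone a site-specific amplitude distribution (here a tapered Pareto, a common form for seismic-moment-derived sizes) and form the station's distribution as a rate-weighted mixture. All parameter values are illustrative, not fitted values from the study.

```python
import math

def tapered_pareto_sf(x, beta, x0, xc):
    """Survival function P(X > x) of a tapered Pareto (x >= x0).

    Power-law decay with exponent beta, exponentially tapered at corner xc,
    which produces the high-amplitude roll-off mentioned in the abstract.
    """
    return (x0 / x) ** beta * math.exp((x0 - x) / xc)

def mixture_sf(x, zones):
    """Station-level survival function as a rate-weighted mixture.

    zones: list of (annual_rate, beta, x0, xc), one tuple per subduction
    zone after the moment-to-amplitude regression has been applied.
    """
    total = sum(rate for rate, *_ in zones)
    return sum(rate / total * tapered_pareto_sf(x, b, x0, xc)
               for rate, b, x0, xc in zones)
```

Differences in zone corner parameters are one way such a mixture can produce the breaks in slope seen on log-log probability plots.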
The challenge of precise orbit determination for STSAT-2C using extremely sparse SLR data
NASA Astrophysics Data System (ADS)
Kim, Young-Rok; Park, Eunseo; Kucharski, Daniel; Lim, Hyung-Chul; Kim, Byoungsoo
2016-03-01
The Science and Technology Satellite (STSAT)-2C is the first Korean satellite equipped with a laser retro-reflector array for satellite laser ranging (SLR). SLR is the only on-board tracking source for precise orbit determination (POD) of STSAT-2C. However, POD for the STSAT-2C is challenging, as the laser measurements of the satellite are extremely sparse, largely due to the inaccurate two-line element (TLE)-based orbit predictions used by the SLR tracking stations. In this study, POD for the STSAT-2C using extremely sparse SLR data is successfully implemented, and new laser-based orbit predictions are obtained. The NASA/GSFC GEODYN II software and seven-day arcs are used for the SLR data processing of two years of normal points from March 2013 to May 2015. To compensate for the extremely sparse laser tracking, the number of estimation parameters is minimized, and only the atmospheric drag coefficients are estimated, with various estimation intervals. The POD results show that the weighted root mean square (RMS) post-fit residuals are less than 10 m, and the 3D day-boundary orbit differences vary from 30 m to 3 km. The average four-day orbit overlaps are less than 20/330/20 m for the radial/along-track/cross-track components. The quality of the new laser-based prediction is verified by SLR observations, and the SLR residuals show better results than those of the previous TLE-based predictions. This study demonstrates that POD for the STSAT-2C can be successfully achieved despite the extreme sparseness of the SLR data, and that the results can deliver more accurate predictions.
Tipton, John; Hooten, Mevin B.; Goring, Simon
2017-01-01
Scientific records of temperature and precipitation have been kept for several hundred years, but for many areas, only a shorter record exists. To understand climate change, there is a need for rigorous statistical reconstructions of the paleoclimate using proxy data. Paleoclimate proxy data are often sparse, noisy, indirect measurements of the climate process of interest, making each proxy uniquely challenging to model statistically. We reconstruct spatially explicit temperature surfaces from sparse and noisy measurements recorded at historical United States military forts and other observer stations from 1820 to 1894. One common method for reconstructing the paleoclimate from proxy data is principal component regression (PCR). With PCR, one learns a statistical relationship between the paleoclimate proxy data and a set of climate observations that are used as patterns for potential reconstruction scenarios. We explore PCR in a Bayesian hierarchical framework, extending classical PCR in a variety of ways. First, we model the latent principal components probabilistically, accounting for measurement error in the observational data. Next, we extend our method to better accommodate outliers that occur in the proxy data. Finally, we explore alternatives to the truncation of lower-order principal components using different regularization techniques. One fundamental challenge in paleoclimate reconstruction efforts is the lack of out-of-sample data for predictive validation. Cross-validation is of potential value, but is computationally expensive and potentially sensitive to outliers in sparse data scenarios. To overcome the limitations that a lack of out-of-sample records presents, we test our methods using a simulation study, applying proper scoring rules including a computationally efficient approximation to leave-one-out cross-validation using the log score to validate model performance. 
The result of our analysis is a spatially explicit reconstruction of spatio-temporal temperature from a very sparse historical record.
NASA Astrophysics Data System (ADS)
Orović, Irena; Stanković, Srdjan; Amin, Moeness
2013-05-01
A modified robust two-dimensional compressive sensing algorithm for reconstruction of sparse time-frequency representation (TFR) is proposed. The ambiguity function domain is assumed to be the domain of observations. The two-dimensional Fourier bases are used to linearly relate the observations to the sparse TFR, in lieu of the Wigner distribution. We assume that a set of available samples in the ambiguity domain is heavily corrupted by an impulsive type of noise. Consequently, the problem of sparse TFR reconstruction cannot be tackled using standard compressive sensing optimization algorithms. We introduce a two-dimensional L-statistics based modification into the transform domain representation. It provides suitable initial conditions that produce efficient convergence of the reconstruction algorithm. This approach applies sorting and weighting operations to discard an expected amount of samples corrupted by noise. The remaining samples serve as observations used in sparse reconstruction of the time-frequency signal representation. The efficiency of the proposed approach is demonstrated on numerical examples comprising both monocomponent and multicomponent signals.
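The sorting-and-discarding operation at the heart of the L-statistics modification can be sketched in one dimension: order observations by magnitude and drop the largest fraction, which impulsive noise is most likely to have corrupted. The 2-D weighting of the paper is not reproduced; the function name and discard fraction are illustrative.

```python
def l_statistics_trim(samples, discard_frac=0.25):
    """Return indices of observations retained after L-statistics trimming.

    Samples are sorted by magnitude and the largest discard_frac fraction is
    discarded; the survivors serve as the observation set handed to the
    sparse reconstruction step.
    """
    order = sorted(range(len(samples)), key=lambda i: abs(samples[i]))
    keep = order[: int(len(samples) * (1 - discard_frac))]
    return sorted(keep)
```

In the paper's setting the samples live in the ambiguity domain and the retained set becomes the compressive-sensing measurement vector.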
The Cortex Transform as an image preprocessor for sparse distributed memory: An initial study
NASA Technical Reports Server (NTRS)
Olshausen, Bruno; Watson, Andrew
1990-01-01
An experiment is described which was designed to evaluate the use of the Cortex Transform as an image preprocessor for Sparse Distributed Memory (SDM). In the experiment, a set of images was injected with Gaussian noise, preprocessed with the Cortex Transform, and then encoded into bit patterns. The various spatial frequency bands of the Cortex Transform were encoded separately so that they could be evaluated on their ability to properly cluster patterns belonging to the same class. The results of this study indicate that simply encoding the low-pass band of the Cortex Transform yields a very suitable input representation for the SDM.
ROPE: Recoverable Order-Preserving Embedding of Natural Language
DOE Office of Scientific and Technical Information (OSTI.GOV)
Widemann, David P.; Wang, Eric X.; Thiagarajan, Jayaraman J.
We present a novel Recoverable Order-Preserving Embedding (ROPE) of natural language. ROPE maps natural language passages from sparse concatenated one-hot representations to distributed vector representations of predetermined fixed length. We use Euclidean distance to return search results that are both grammatically and semantically similar. ROPE is based on a series of random projections of distributed word embeddings. We show that our technique typically forms a dictionary with sufficient incoherence such that sparse recovery of the original text is possible. We then show how our embedding allows for efficient and meaningful natural search and retrieval on Microsoft's COCO dataset and the IMDB Movie Review dataset.
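The core idea, random projection to a fixed-length vector with an order-sensitive weighting, can be sketched under strong simplifications. The position decay, dictionary construction, and the sparse-recovery machinery of ROPE itself are not reproduced; every name below is illustrative.

```python
import random

def make_projection(vocab, dim, seed=0):
    """Assign each vocabulary word a random Gaussian projection vector."""
    rng = random.Random(seed)
    return {w: [rng.gauss(0, 1 / dim ** 0.5) for _ in range(dim)] for w in vocab}

def embed(passage, proj, decay=0.9):
    """Fixed-length embedding: position-weighted sum of word vectors.

    The decay factor makes the embedding order-sensitive, so "a b" and
    "b a" map to different points (an assumption standing in for ROPE's
    order-preserving construction).
    """
    dim = len(next(iter(proj.values())))
    v = [0.0] * dim
    for pos, w in enumerate(passage.split()):
        if w in proj:
            for i in range(dim):
                v[i] += (decay ** pos) * proj[w][i]
    return v

def euclid(a, b):
    """Euclidean distance used for search over embedded passages."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
```

Search then reduces to nearest-neighbour lookup under `euclid` among embedded passages.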
NASA Astrophysics Data System (ADS)
Passarelli, Luigi; Cesca, Simone; Heryandoko, Nova; Lopez Comino, Jose Angel; Strollo, Angelo; Rivalta, Eleonora; Rohadi, Supryianto; Dahm, Torsten; Milkereit, Claus
2017-04-01
Magmatic unrest is challenging to detect when monitoring is sparse and there is little knowledge about the volcano. This is especially true for long-dormant volcanoes. Geophysical observables such as seismicity, deformation, temperature and gas emission are reliable indicators of ongoing volcanic unrest caused by magma movements. Jailolo volcano is a Holocene volcano belonging to the Halmahera volcanic arc in the northern Moluccas Islands, Indonesia. Global databases of volcanic eruptions have no records of its eruptive activity, and no geological investigation has been carried out to better assess the past eruptive activity at Jailolo. It probably sits on the northern rim of an older caldera, which now forms Jailolo Bay. Hydrothermal activity is intense, with several hot springs and steaming ground spots around the Jailolo volcano. In November 2015 an energetic seismic swarm started and lasted until late February 2016, with four earthquakes of M>5 recorded by global seismic networks. At the time of the swarm no close geophysical monitoring network was available around Jailolo volcano except for a broadband station about 30 km away. Last summer we installed a dense local multi-parametric monitoring network of 36 seismic stations, 6 GPS stations and 2 gas monitoring stations around Jailolo volcano. We revised the focal mechanisms of the larger events and used single-station location methods in order to exploit the little information available at the time of the swarm activity. We also combined the old sparse data with our local dense network. Migration of hypocenters and inversion of the local stress field derived from focal mechanism analysis indicate that the November-February seismicity swarm may be related to a magmatic intrusion at shallow depth. Data from our dense network confirm ongoing micro-seismic activity underneath Jailolo volcano, but there are no indications of new magma intrusion.
Our findings indicate that magmatic unrest occurred at Jailolo volcano and call for a revision of the volcanic hazard.
Automatic Management of Parallel and Distributed System Resources
NASA Technical Reports Server (NTRS)
Yan, Jerry; Ngai, Tin Fook; Lundstrom, Stephen F.
1990-01-01
Viewgraphs on automatic management of parallel and distributed system resources are presented. Topics covered include: parallel applications; intelligent management of multiprocessing systems; performance evaluation of parallel architecture; dynamic concurrent programs; compiler-directed system approach; lattice gaseous cellular automata; and sparse matrix Cholesky factorization.
AMPHIBIAN DECLINES AND ENVIRONMENTAL CHANGE IN THE EASTERN "MOJAVE DESERT"
A number of amphibian species historically inhabited sparsely distributed wetlands in the Mojave Desert, USA, habitats that have been dramatically altered or eliminated as a result of human activities. The population status and distribution of amphibians were investigated in a 20...
A Distributed Learning Method for ℓ1-Regularized Kernel Machine over Wireless Sensor Networks
Ji, Xinrong; Hou, Cuiqin; Hou, Yibin; Gao, Fang; Wang, Shulong
2016-01-01
In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption. These are caused by the need to transmit scattered training examples from various sensor nodes to the central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ1 norm regularization (ℓ1-regularized) is investigated, and a novel distributed learning algorithm for the ℓ1-regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to prediction accuracy, model sparsity rate, communication cost and number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm can obtain approximately the same prediction accuracy as the batch learning method. Moreover, it is significantly superior in terms of model sparsity rate and communication cost, and it converges with fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further shows the advantages of the proposed algorithm with respect to communication cost. PMID:27376298
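The mechanism that keeps the transmitted model sparse in ℓ1-regularized learning is soft-thresholding, the proximal operator of the ℓ1 norm. The sketch below pairs it with a neighbour-averaging consensus step; this is an illustrative stand-in for the paper's KMSE update, not its exact algorithm, and the function names are assumptions.

```python
def soft_threshold(z, lam):
    """Proximal operator of lam*|z|: shrink toward zero, exact zero below lam."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def consensus_step(neighbour_models, lam):
    """One sketch of an in-network update at a node.

    Average the coefficient vectors received from single-hop neighbours,
    then soft-threshold so the model re-broadcast to neighbours stays
    sparse, and hence cheap to transmit.
    """
    n, d = len(neighbour_models), len(neighbour_models[0])
    return [soft_threshold(sum(m[i] for m in neighbour_models) / n, lam)
            for i in range(d)]
```

Zeroed coefficients need not be transmitted at all, which is where the communication savings reported in the abstract come from.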
Direct determination of geocenter motion by combining SLR, VLBI, GNSS, and DORIS time series
NASA Astrophysics Data System (ADS)
Wu, X.; Abbondanza, C.; Altamimi, Z.; Chin, T. M.; Collilieux, X.; Gross, R. S.; Heflin, M. B.; Jiang, Y.; Parker, J. W.
2013-12-01
The longest-wavelength surface mass transport includes three degree-one spherical harmonic components involving hemispherical mass exchanges. The mass load causes geocenter motion between the center-of-mass of the total Earth system (CM) and the center-of-figure of the solid Earth surface (CF), and deforms the solid Earth. Estimation of the degree-1 surface mass changes through CM-CF and degree-1 deformation signatures from space geodetic techniques can thus complement GRACE's time-variable gravity data to form a complete change spectrum up to a high resolution. Currently, SLR is considered the most accurate technique for direct geocenter motion determination. By tracking satellite motion from ground stations, SLR determines the motion between CM and the geometric center of its ground network (CN). This motion is then used to approximate CM-CF and subsequently for deriving degree-1 mass changes. However, the SLR network is very sparse and uneven in global distribution. The average number of operational tracking stations has been about 20 in recent years. The poor network geometry can introduce a large CN-CF motion and is not ideal for the determination of CM-CF motion and degree-1 mass changes. We recently realized an experimental Terrestrial Reference Frame (TRF) through station time series using the Kalman filter and the RTS smoother. The TRF has its origin defined at nearly instantaneous CM using weekly SLR measurement time series. VLBI, GNSS and DORIS time series are combined weekly with those of SLR and tied to the geocentric (CM) reference frame through local tie measurements and co-motion constraints on co-located geodetic stations. The unified geocentric time series of the four geodetic techniques provide a much better network geometry for direct geodetic determination of geocenter motion. Results from this direct approach using a 90-station network compare favorably with those obtained from joint inversions of GPS/GRACE data and ocean-bottom pressure models.
We will also show that a previously identified discrepancy in X-component between direct SLR orbit-tracking and inverse determined geocenter motions is largely reconciled with the new unified network.
NASA Astrophysics Data System (ADS)
Li, J.; Guo, G.; WANG, X.; Chen, Q.
2017-12-01
The northwest Pacific subduction region is an ideal location to study the interaction between the subducting slab and upper mantle discontinuities. The varied and complex geometry of the Pacific subducting slab can be well traced downward from the Kuril, Japan and Izu-Bonin trenches using seismicity and tomography images (Fukao and Obayashi, 2013). Due to the sparse distribution of seismic stations at sea, investigation of the deep mantle structure beneath broad oceanic regions is very limited. In this study, we applied the well-developed multiple-ScS reverberation method (Wang et al., 2017) to analyze waveforms recorded by the Chinese Regional Seismic Network and by the densely distributed temporary seismic array stations installed in East Asia. A map of the topography of the upper mantle discontinuities beneath the broad oceanic regions of the northwest Pacific subduction zone is imaged. We also applied receiver function analysis to waveforms recorded by stations in northeast China and obtained a detailed topography map beneath the East Asian continental regions. We then combined the two topography maps of the upper mantle discontinuities, beneath oceanic and continental regions respectively, obtained from entirely different methods. A careful image matching and spatial correlation was made in the overlapping study regions to calibrate results with different resolutions. This is the first systematic and complete view of the topography of the 410-km and 660-km discontinuities beneath the East Asia "big mantle wedge" (Zhao and Ohtani, 2009), covering the broad oceanic and continental regions of the northwestern Pacific subduction zone. The topography patterns of the 660- and 410-km discontinuities are obtained and discussed. In particular, we discovered a broad depression of the 410-km discontinuity extending laterally more than 1000 km, which seems anomalous in the cold subducting tectonic environment.
Based on plate tectonic reconstruction studies and HTHP mineral experiments, we argue that the eastward retreat of the trench during subduction of the Pacific slab might play an important role in the observed broad depression of the 410-km discontinuity.
2-D or not 2-D, that is the question: A Northern California test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayeda, K; Malagnini, L; Phillips, W S
2005-06-06
Reliable estimates of the seismic source spectrum are necessary for accurate magnitude, yield, and energy estimation. In particular, how seismic radiated energy scales with increasing earthquake size has been the focus of recent debate within the community and has direct implications for earthquake source physics studies as well as hazard mitigation. The 1-D coda methodology of Mayeda et al. has provided the lowest-variance estimate of the source spectrum when compared against traditional approaches that use direct S-waves, thus making it ideal for networks that have sparse station distribution. The 1-D coda methodology has been mostly confined to regions of approximately uniform complexity. For larger, more geophysically complicated regions, 2-D path corrections may be required. The complicated tectonics of the northern California region, coupled with high-quality broadband seismic data, provide for an ideal "apples-to-apples" test of 1-D and 2-D path assumptions on direct waves and their coda. Using the same station and event distribution, we compared 1-D and 2-D path corrections and observed the following results: (1) 1-D coda results reduced the amplitude variance relative to direct S-waves by roughly a factor of 8 (800%); (2) applying a 2-D correction to the coda resulted in up to 40% variance reduction from the 1-D coda results; (3) 2-D direct S-wave results, though better than 1-D direct waves, were significantly worse than the 1-D coda. We found that coda-based moment-rate source spectra derived from the 2-D approach were essentially identical to those from the 1-D approach for frequencies less than ~0.7 Hz; however, for high frequencies (0.7 ≤ f ≤ 8.0 Hz), the 2-D approach resulted in inter-station scatter that was generally 10-30% smaller. For complex regions where data are plentiful, a 2-D approach can significantly improve upon the simple 1-D assumption.
In regions where only a 1-D coda correction is available, it is still preferable to 2-D direct-wave-based measures.
NASA Astrophysics Data System (ADS)
Tamamitsu, Miu; Zhang, Yibo; Wang, Hongda; Wu, Yichen; Ozcan, Aydogan
2018-02-01
The Sparsity of the Gradient (SoG) is a robust autofocusing criterion for holography, in which the gradient modulus of the refocused complex hologram is calculated and a sparsity metric is applied to it. Here, we compare two different choices of sparsity metric used in SoG, the Gini index (GI) and the Tamura coefficient (TC), for holographic autofocusing on dense/connected or sparse samples. We provide a theoretical analysis predicting that for uniformly distributed image data, TC and GI exhibit similar behavior, while for naturally sparse images containing few high-valued signal entries and many low-valued noisy background pixels, TC is more sensitive to distribution changes in the signal and more resistant to background noise. These predictions are confirmed by experimental results using SoG-based holographic autofocusing on dense and connected samples (such as stained breast tissue sections) as well as highly sparse samples (such as isolated Giardia lamblia cysts). Through these experiments, we found that the TC- and GI-based criteria (ToG and GoG, respectively) offer almost identical autofocusing performance on dense and connected samples, whereas for naturally sparse samples, GoG should be calculated on a relatively small region of interest (ROI) closely surrounding the object, while ToG offers more flexibility in choosing a larger ROI containing more background pixels.
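Both metrics have standard closed forms and can be sketched directly, here on a flattened list of magnitudes (in SoG they would be applied to the gradient modulus of the refocused image). This is a minimal sketch of the commonly used definitions, not code from the paper.

```python
import math

def gini_index(x):
    """Gini index of |x|: 0 for a uniform vector, approaching 1 for a
    vector whose energy is concentrated in very few entries."""
    c = sorted(abs(v) for v in x)
    n, s = len(c), sum(c)
    if s == 0:
        return 0.0
    return 1.0 - 2.0 * sum((c[k] / s) * ((n - (k + 1) + 0.5) / n)
                           for k in range(n))

def tamura_coefficient(x):
    """Tamura coefficient of |x|: sqrt(std/mean); grows as the
    distribution becomes heavier-tailed (sparser)."""
    a = [abs(v) for v in x]
    mean = sum(a) / len(a)
    if mean == 0:
        return 0.0
    var = sum((v - mean) ** 2 for v in a) / len(a)
    return math.sqrt(math.sqrt(var) / mean)
```

Autofocusing then amounts to scanning the refocus distance and picking the distance that maximizes the chosen metric.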
NASA Astrophysics Data System (ADS)
Nyblade, A.; Lloyd, A. J.; Anandakrishnan, S.; Wiens, D. A.; Aster, R. C.; Huerta, A. D.; Wilson, T. J.; Shore, P.; Zhao, D.
2011-12-01
As part of the International Polar Year in Antarctica, 37 seismic stations have been installed across West Antarctica as part of the Polar Earth Observing Network (POLENET). 23 stations form a sparse backbone network of which 21 are co-located on rock sites with a network of continuously recording GPS stations. The remaining 14 stations, in conjunction with 2 backbone stations, form a seismic transect extending from the Ellsworth Mountains across the West Antarctic Rift System (WARS) and into Marie Byrd Land. Here we present preliminary P and S wave velocity models of the upper mantle from regional body wave tomography using P and S travel times from teleseismic events recorded by the seismic transect during the first year (2009-2010) of deployment. Preliminary P wave velocity models consisting of ~3,000 ray paths from 266 events indicate that the upper mantle beneath the Whitmore Mountains is seismically faster than the upper mantle beneath Marie Byrd Land and the WARS. Furthermore, we observe two substantial upper mantle low velocity zones located beneath Marie Byrd Land and near the southern boundary of the WARS.
Application of wavefield compressive sensing in surface wave tomography
NASA Astrophysics Data System (ADS)
Zhan, Zhongwen; Li, Qingyang; Huang, Jianping
2018-06-01
Dense arrays allow sampling of seismic wavefield without significant aliasing, and surface wave tomography has benefitted from exploiting wavefield coherence among neighbouring stations. However, explicit or implicit assumptions about wavefield, irregular station spacing and noise still limit the applicability and resolution of current surface wave methods. Here, we propose to apply the theory of compressive sensing (CS) to seek a sparse representation of the surface wavefield using a plane-wave basis. Then we reconstruct the continuous surface wavefield on a dense regular grid before applying any tomographic methods. Synthetic tests demonstrate that wavefield CS improves robustness and resolution of Helmholtz tomography and wavefield gradiometry, especially when traditional approaches have difficulties due to sub-Nyquist sampling or complexities in wavefield.
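The CS step described above can be illustrated in one dimension under strong simplifications: decompose a wavefield sampled at irregular station positions greedily (matching pursuit, a simple stand-in for the sparse solvers used in practice) over a plane-wave dictionary, then re-evaluate the recovered representation on a regular grid. All names and the 1-D setting are assumptions for illustration.

```python
import cmath

def matching_pursuit(xs, ys, wavenumbers, n_terms=2):
    """Greedy sparse fit of complex samples ys at positions xs with
    plane-wave atoms exp(i*k*x); returns {wavenumber: coefficient}."""
    resid = list(ys)
    coeffs = {}
    for _ in range(n_terms):
        best_k, best_c, best_mag = None, 0.0, -1.0
        for k in wavenumbers:
            atoms = [cmath.exp(1j * k * x) for x in xs]
            c = sum(r * a.conjugate() for r, a in zip(resid, atoms)) / len(xs)
            if abs(c) > best_mag:
                best_k, best_c, best_mag = k, c, abs(c)
        atoms = [cmath.exp(1j * best_k * x) for x in xs]
        resid = [r - best_c * a for r, a in zip(resid, atoms)]
        coeffs[best_k] = coeffs.get(best_k, 0.0) + best_c
    return coeffs

def reconstruct(grid, coeffs):
    """Evaluate the sparse plane-wave representation on a regular grid."""
    return [sum(c * cmath.exp(1j * k * x) for k, c in coeffs.items())
            for x in grid]
```

Once the wavefield is reconstructed on a regular grid, gradient-based methods such as Helmholtz tomography or wavefield gradiometry can be applied without aliasing artifacts from the irregular station geometry.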
TMPA Products 3B42RT & 3B42V6: Evaluation and Application in Qinghai-Tibet Plateau
NASA Astrophysics Data System (ADS)
Hao, Z.; Sun, L.; Wang, J.
2012-04-01
Hydrological researchers in the Qinghai-Tibet Plateau are often hampered by a shortage of station-gauged precipitation data, owing to the sparse and uneven distribution of local meteorological stations. Fortunately, alternative data can be obtained from the TRMM (Tropical Rainfall Measuring Mission) satellite. Preliminary evaluation and correction of TRMM satellite rainfall products is required to ensure reliability and suitability, given that TRMM precipitation is unconventional and that natural conditions in the Qinghai-Tibet Plateau are unusually complicated. The 3B42RT and 3B42V6 products from the TRMM Multisatellite Precipitation Analysis (TMPA) are evaluated in the northeast Qinghai-Tibet Plateau against quality-controlled daily precipitation gauged at 50 stations as the benchmark set. It is found that the RT data greatly overestimates the actual precipitation, while V6 overestimates it only slightly. The accuracy of the RT data varies seasonally and inter-annually: summer and autumn show better accuracy than winter and spring, and wet years show higher accuracy than dry years. Latitude is believed to be an important factor influencing the accuracy of satellite precipitation. Both RT and V6 can reflect the general spatial pattern of precipitation, even though RT greatly overestimates the quantity. A new parameter, the accumulated precipitation weight point (APWP), is introduced to describe the evolving temporal-spatial pattern of precipitation. The APWPs of both RT and V6 have been moving from south to north over the past decade, but both lie west of the station-gauged APWP. The V6 APWP track fits the gauged precipitation well, while the RT APWP track has over-exaggerated legs, indicating that the spatial distribution of RT precipitation underwent unreasonably sharp changes. A practical and operational procedure to correct satellite precipitation data is developed. For RT, there are two steps.
In Step 1 (downscaling), the original daily precipitation is multiplied by the ratio of the monthly station-gauged precipitation to the monthly satellite precipitation. In Step 2 (objective analysis), Barnes/Cressman successive correction and Optimal Interpolation are applied to refine the daily results. Step 1 is unnecessary for the V6 correction. The accuracy of RT can be improved significantly and much of the spatial detail of the satellite precipitation retained, whereas the V6 correction yields little improvement. In addition, the successive correction should not be iterated more than twice, and the ideal influence radius for Optimal Interpolation is R=5. The original and corrected RT and V6 data sets were used as precipitation inputs to drive a newly developed hydrological model, DHM-SP, in the headwater region of the Yellow River to assess their applicability for simulating daily runoff. The V6 simulation result is acceptable even without correction. The bias in RT is too large for it to be used directly as model input, but quite satisfactory results are obtained from the corrected RT input; the simulation results of corrected RT are even better than those of the station-gauged and V6 inputs.
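Step 1 of the RT correction (monthly-ratio downscaling) is simple enough to sketch; the function name and the toy numbers below are illustrative, and Step 2 (Barnes/Cressman successive correction and Optimal Interpolation) is omitted.

```python
import numpy as np

def downscale_daily(sat_daily, gauge_monthly):
    """Step-1 correction: rescale a month of daily satellite precipitation
    so its monthly total matches the station-gauged monthly total."""
    sat_monthly = sat_daily.sum()
    if sat_monthly <= 0:
        return np.zeros_like(sat_daily)
    return sat_daily * (gauge_monthly / sat_monthly)

# An RT-like daily series that overestimates a 60 mm gauged month by 50%
rt = np.array([4.5, 0.0, 9.0, 13.5, 18.0, 27.0, 18.0])   # totals 90 mm
corrected = downscale_daily(rt, gauge_monthly=60.0)        # totals 60 mm
```

Note that only the magnitude is rescaled; the temporal pattern within the month is preserved, which is why the subsequent objective-analysis step is still needed to refine spatial detail.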
NASA Astrophysics Data System (ADS)
Magyar, Andrew
The recent discovery of cells in the human medial temporal lobe (MTL) that respond to purely conceptual features of the environment (particular people, landmarks, objects, etc.) has raised many questions about the nature of the neural code in humans. The goal of this dissertation is to develop a novel statistical method based upon maximum likelihood regression, which is then applied to these experiments to produce a quantitative description of the coding properties of the human MTL. In general, the method is applicable to any experiment in which a sequence of stimuli is presented to an organism while the binary responses of a large number of cells are recorded in parallel. The central concept underlying the approach is the total probability that a neuron responds to a random stimulus, called the neuronal sparsity. The model then estimates the distribution of response probabilities across the population of cells. Applying the method to single-unit recordings from the human medial temporal lobe, estimates of the sparsity distributions are acquired in four regions: the hippocampus, the entorhinal cortex, the amygdala, and the parahippocampal cortex. The resulting distributions are found to be sparse (a large fraction of cells with a low response probability) and highly non-uniform, with a large proportion of ultra-sparse neurons that possess a very low response probability and a smaller population of cells that respond much more frequently. Ramifications of the results are discussed in relation to the sparse coding hypothesis, and comparisons are made between the statistics of the human medial temporal lobe cells and place cells observed in the rodent hippocampus.
Multi-linear sparse reconstruction for SAR imaging based on higher-order SVD
NASA Astrophysics Data System (ADS)
Gao, Yu-Fei; Gui, Guan; Cong, Xun-Chao; Yang, Yue; Zou, Yan-Bin; Wan, Qun
2017-12-01
This paper focuses on spotlight synthetic aperture radar (SAR) imaging of point-scattering targets based on tensor modeling. In real-world scenarios, scatterers usually distribute in a block-sparse pattern, a structural feature that previous studies of SAR imaging have scarcely exploited. Our work takes advantage of this property of the target scene and constructs a multi-linear sparse reconstruction algorithm for SAR imaging. Multi-linear block sparsity is introduced into higher-order singular value decomposition (SVD) together with a dictionary-construction procedure. Simulation experiments on ideal point targets show that the proposed algorithm is robust to the noise and sidelobe disturbance that often degrade the imaging quality of conventional methods. The computational resource requirements are also investigated: the complexity analysis shows that the present method consumes fewer resources than the classic matching pursuit method. Imaging of practical measured data likewise demonstrates the effectiveness of the proposed algorithm.
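The higher-order SVD at the core of such tensor approaches can be sketched with plain NumPy: factor each mode unfolding, then form the core tensor. This is generic HOSVD, not the paper's block-sparse dictionary construction.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: bring axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Mode-n product T x_n M (contract M against axis `mode` of T)."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T):
    """Higher-order SVD: one orthogonal factor per mode plus a core tensor."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0] for n in range(T.ndim)]
    core = T
    for n, Un in enumerate(U):
        core = mode_dot(core, Un.T, n)
    return core, U

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 5, 6))
core, U = hosvd(T)

# Exact reconstruction: T = core x_0 U[0] x_1 U[1] x_2 U[2]
R = core
for n, Un in enumerate(U):
    R = mode_dot(R, Un, n)
```

Truncating the factor matrices (and correspondingly the core) yields the low-multilinear-rank approximations that sparse tensor dictionaries build upon.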
Structured networks support sparse traveling waves in rodent somatosensory cortex.
Moldakarimov, Samat; Bazhenov, Maxim; Feldman, Daniel E; Sejnowski, Terrence J
2018-05-15
Neurons responding to different whiskers are spatially intermixed in the superficial layer 2/3 (L2/3) of the rodent barrel cortex, where a single whisker deflection activates a sparse, distributed neuronal population that spans multiple cortical columns. How the superficial layer of the rodent barrel cortex is organized to support such distributed sensory representations is not clear. In a computer model, we tested the hypothesis that sensory representations in L2/3 of the rodent barrel cortex are formed by activity propagation horizontally within L2/3 from a site of initial activation. The model explained the observed properties of L2/3 neurons, including the low average response probability in the majority of responding L2/3 neurons, and the existence of a small subset of reliably responding L2/3 neurons. Sparsely propagating traveling waves similar to those observed in L2/3 of the rodent barrel cortex occurred in the model only when a subnetwork of strongly connected neurons was immersed in a much larger network of weakly connected neurons.
Neural networks and MIMD-multiprocessors
NASA Technical Reports Server (NTRS)
Vanhala, Jukka; Kaski, Kimmo
1990-01-01
Two artificial neural network models are compared. They are the Hopfield Neural Network Model and the Sparse Distributed Memory model. Distributed algorithms for both of them are designed and implemented. The run time characteristics of the algorithms are analyzed theoretically and tested in practice. The storage capacities of the networks are compared. Implementations are done using a distributed multiprocessor system.
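Of the two models compared, the Hopfield network is compact enough to sketch in a few lines: Hebbian (outer-product) storage and synchronous sign-threshold recall. This is a minimal serial version, not the paper's distributed MIMD implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hebbian (outer-product) storage of 3 random +/-1 patterns in 100 units
n = 100
patterns = rng.choice([-1, 1], size=(3, n))
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0.0)                    # no self-connections

def recall(W, x, n_sweeps=20):
    """Synchronous sign updates until a fixed point (or the sweep limit)."""
    for _ in range(n_sweeps):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1               # break ties deterministically
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

# Corrupt 10% of a stored pattern and let the network clean it up
probe = patterns[0].copy()
flip = rng.choice(n, size=10, replace=False)
probe[flip] *= -1
recalled = recall(W, probe)
```

With only 3 patterns in 100 units the network is far below its ~0.14n capacity, so the corrupted probe settles back onto the stored pattern.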
Hyman, Jeffrey De'Haven; Aldrich, Garrett Allen; Viswanathan, Hari S.; ...
2016-08-01
We characterize how different fracture size-transmissivity relationships influence flow and transport simulations through sparse three-dimensional discrete fracture networks. Although it is generally accepted that there is a positive correlation between a fracture's size and its transmissivity/aperture, the functional form of that relationship remains a matter of debate. Relationships that assume perfect correlation, semicorrelation, and noncorrelation between the two have been proposed. To study the impact that adopting one of these relationships has on transport properties, we generate multiple sparse fracture networks composed of circular fractures whose radii follow a truncated power law distribution. The distribution of transmissivities is selected so that the mean transmissivity of the fracture networks is the same, and the distributions of aperture and transmissivity in models that include a stochastic term are also the same. We observe that adopting a correlation between a fracture's size and its transmissivity leads to earlier breakthrough times and higher effective permeability when compared to networks where no correlation is used. While fracture network geometry plays the principal role in determining where transport occurs within the network, the relationship between size and transmissivity controls the flow speed. Lastly, these observations indicate that discrete fracture network (DFN) modelers should be aware that breakthrough times and effective permeabilities can be strongly influenced by such a relationship, in addition to fracture and network statistics.
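The three families of size-transmissivity relationships, and the truncated power law radii they are applied to, can be sketched as follows; the prefactor and the half-decade lognormal scatter are illustrative values, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(7)

def truncated_power_law(n, r_min, r_max, alpha):
    """Sample fracture radii with pdf ~ r**(-alpha) truncated to [r_min, r_max],
    via the inverse CDF (valid for alpha != 1)."""
    a = 1.0 - alpha
    u = rng.uniform(size=n)
    return (r_min**a + u * (r_max**a - r_min**a)) ** (1.0 / a)

r = truncated_power_law(10_000, r_min=1.0, r_max=50.0, alpha=2.5)

# Perfectly correlated, semicorrelated, and uncorrelated transmissivity models
# (1e-8 prefactor and 0.5-decade lognormal scatter are illustrative choices)
T_perfect = 1e-8 * r**2                                   # size alone
T_semi = 1e-8 * r**2 * 10 ** (0.5 * rng.standard_normal(r.size))  # size + scatter
T_uncorr = 1e-8 * 10 ** (0.5 * rng.standard_normal(r.size))       # scatter alone
```

The three arrays share the same radii, so differences in simulated breakthrough under each model isolate the effect of the assumed correlation.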
Cost comparison of competing local distribution systems for communication satellite traffic
NASA Technical Reports Server (NTRS)
Dopfel, F. E.
1979-01-01
The boundaries of market areas which favor various means for distributing communications satellite traffic are considered. The distribution methods considered are: central Earth station with cable access, rooftop Earth stations, Earth station with radio access, and various combinations of these methods. The least-cost system for a hypothetical region described by number of users and the average cable access mileage is discussed. The region is characterized by a function which expresses the distribution of users. The results indicate that the least-cost distribution is a central Earth station with cable access for medium- to high-density areas of a region, combined with rooftop Earth stations or (for higher volumes) radio access for remote users.
NASA Technical Reports Server (NTRS)
Keeler, James D.
1988-01-01
The information capacity of Kanerva's Sparse Distributed Memory (SDM) and Hopfield-type neural networks is investigated. Under the approximations used here, it is shown that the total information stored in these systems is proportional to the number of connections in the network. The proportionality constant is the same for the SDM and Hopfield-type models, independent of the particular model or the order of the model. The approximations are checked numerically. This same analysis can be used to show that the SDM can store sequences of spatiotemporal patterns, and the addition of time-delayed connections allows the retrieval of context-dependent temporal patterns. A minor modification of the SDM can be used to store correlated patterns.
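A minimal autoassociative SDM along Kanerva's lines helps make the connection-counting argument concrete; the hard-location count, dimension, and activation radius below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

class SDM:
    """Minimal Kanerva-style sparse distributed memory over binary vectors."""
    def __init__(self, n_locations=2000, dim=256, radius=111):
        self.addresses = rng.integers(0, 2, size=(n_locations, dim))
        self.counters = np.zeros((n_locations, dim), dtype=int)
        self.radius = radius

    def _active(self, addr):
        # Hard locations whose address is within Hamming radius of the cue
        return np.count_nonzero(self.addresses != addr, axis=1) <= self.radius

    def write(self, addr, data):
        sel = self._active(addr)
        self.counters[sel] += 2 * data - 1        # +1 for bit 1, -1 for bit 0

    def read(self, addr):
        sel = self._active(addr)
        return (self.counters[sel].sum(axis=0) > 0).astype(int)

mem = SDM()
pattern = rng.integers(0, 2, size=256)
mem.write(pattern, pattern)                       # autoassociative store

# Retrieve from a noisy cue with 15 flipped bits
cue = pattern.copy()
flip = rng.choice(256, size=15, replace=False)
cue[flip] ^= 1
out = mem.read(cue)
```

Because the write and read activation sets overlap heavily for nearby addresses, the pooled counters recover the stored pattern despite the corrupted cue.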
The DACCIWA 2016 radiosonde campaign in southern West Africa
NASA Astrophysics Data System (ADS)
Fink, Andreas H.; Maranan, Marlon; Knippertz, Peter; Ngamini, Jean-Blaise; Francis, Sabastine
2017-04-01
Operational upper-air stations are very sparsely distributed over West Africa, making it necessary to enhance radiosonde observations for the DACCIWA (Dynamics-Aerosol-Chemistry-Cloud Interactions in West Africa) experimental period during June-July 2016. Building on the AMMA (African Monsoon Multidisciplinary Analyses) experience, existing infrastructure, and human networks, the upper-air network was successfully augmented to a spatial density that is unprecedented for southern West Africa. Altogether, more than 750 experimental radiosondes were launched at seven stations in three countries along the Guinea Coast. From its outset, the DACCIWA radiosonde campaign had three pillars: (a) enhancing soundings at operational or quiescent AMMA radiosonde stations; (b) launching sondes at DACCIWA supersites and two additional DACCIWA field sites; and (c) collecting standard and, where possible, high-resolution data from other operational radiosonde stations. In terms of (a), preparatory reconnaissance visits to West Africa revealed that the AMMA-activated stations of Cotonou (Benin) and Abuja (Nigeria) were operational, though almost "invisible" on the World Meteorological Organization's Global Telecommunication System (GTS). These and other AMMA legacies facilitated the implementation of enhanced, four-times-daily soundings at Abidjan (Ivory Coast), Cotonou, and Parakou (both Benin). Two well-instrumented DACCIWA ground sites at Kumasi (Ghana) and Savé (Benin) performed 06 UTC soundings, enhanced to four-times-daily ascents during fifteen Intensive Observing Periods (IOPs). In addition, research staff and students from the Karlsruhe Institute of Technology (KIT) and African partners conducted up to five-times-daily soundings at Lamto (Ivory Coast) and Accra (Ghana).
Almost all of the experimental DACCIWA ascents were submitted to the GTS in real time and assimilated by at least three European numerical weather prediction centres, helping to improve their operational analyses over southern West Africa during June-July 2016. In addition, upper-air data from the Nigerian stations Lagos, Abuja, and Kano, not available in international archives, were collected and fed into the DACCIWA database. Instrumental to the success of the DACCIWA radiosonde campaign under challenging logistical constraints was the excellent collaboration and commitment of the various African partners. In addition, European and African students worked together in the field to ensure an uninterrupted sounding frequency. The present contribution describes the network, aspects of the African-European teamwork, the available data and their accessibility for research, as well as some first applications and highlights.
2011-09-01
The work concerns strain data provided by in-situ strain sensors, with an application focus on strain data obtained from FBG (Fiber Bragg Grating) sensor arrays laid out along sparsely distributed lines, providing either single-core (axial) or rosette (tri-axial) measurements. The measured strain data are sparse, as is often the case when FBG sensors are used, so inverse elements without strain-sensor data must be accommodated.
Estimating the Uncertainty and Predictive Capabilities of Three-Dimensional Earth Models (Postprint)
2012-03-22
The global database (www.isc.ac.uk) includes more than 7,000 events whose epicentral locations are accurate to at least 5 km (GT events). Validating a model with travel times alone is difficult; the IASPEI REL ground-truth database is currently the highest-quality resource of this kind, yet its P and S path coverage remains sparse. [Figure: P (left) and S (right) paths in the IASPEI REL ground-truth database; stations shown as purple triangles, events as gray circles.]
DEM generation from contours and a low-resolution DEM
NASA Astrophysics Data System (ADS)
Li, Xinghua; Shen, Huanfeng; Feng, Ruitao; Li, Jie; Zhang, Liangpei
2017-12-01
A digital elevation model (DEM) is a virtual representation of topography, where the terrain is established by the three-dimensional co-ordinates. In the framework of sparse representation, this paper investigates DEM generation from contours. Since contours are usually sparsely distributed and closely related in space, sparse spatial regularization (SSR) is enforced on them. In order to make up for the lack of spatial information, another lower spatial resolution DEM from the same geographical area is introduced. In this way, the sparse representation implements the spatial constraints in the contours and extracts the complementary information from the auxiliary DEM. Furthermore, the proposed method integrates the advantage of the unbiased estimation of kriging. For brevity, the proposed method is called the kriging and sparse spatial regularization (KSSR) method. The performance of the proposed KSSR method is demonstrated by experiments in Shuttle Radar Topography Mission (SRTM) 30 m DEM and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) 30 m global digital elevation model (GDEM) generation from the corresponding contours and a 90 m DEM. The experiments confirm that the proposed KSSR method outperforms the traditional kriging and SSR methods, and it can be successfully used for DEM generation from contours.
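The kriging component of KSSR can be sketched with ordinary kriging under an assumed exponential covariance; the variogram parameters and the synthetic "contour" samples below are illustrative, and the sparse spatial regularization term is not included.

```python
import numpy as np

def ordinary_kriging(xy_obs, z_obs, xy_new, range_=0.3, sill=1.0, nugget=1e-6):
    """Ordinary kriging with an exponential covariance model (a sketch;
    the variogram parameters are illustrative, not fitted)."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-d / range_)

    n = len(xy_obs)
    # Kriging system with a Lagrange multiplier for the unbiasedness constraint
    K = np.empty((n + 1, n + 1))
    K[:n, :n] = cov(xy_obs, xy_obs) + nugget * np.eye(n)
    K[n, :n] = K[:n, n] = 1.0
    K[n, n] = 0.0
    rhs = np.vstack([cov(xy_obs, xy_new), np.ones((1, len(xy_new)))])
    w = np.linalg.solve(K, rhs)
    return w[:n].T @ z_obs

rng = np.random.default_rng(4)
xy = rng.uniform(0, 1, size=(40, 2))                   # sparse "contour" samples
z = np.sin(3 * xy[:, 0]) + 0.5 * xy[:, 1]              # smooth synthetic terrain
grid = np.column_stack([g.ravel() for g in np.meshgrid(
    np.linspace(0, 1, 20), np.linspace(0, 1, 20))])
z_hat = ordinary_kriging(xy, z, grid)
```

Kriging interpolates the observations exactly (up to the small nugget), which is the unbiased-estimation property KSSR combines with sparse spatial regularization.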
Rangelov, Dragan; Müller, Hermann J; Zehetleitner, Michael
2017-05-01
Pop-out search implies that the target is always the first item selected, no matter how many distractors are presented. However, increasing evidence indicates that search is not entirely independent of display density even for pop-out targets: search is slower with sparse (few distractors) than with dense displays (many distractors). Despite its significance, the cause of this anomaly remains unclear. We investigated several mechanisms that could slow down search for pop-out targets. Consistent with the assumption that pop-out targets frequently fail to pop out in sparse displays, we observed greater variability of search duration for sparse displays relative to dense. Computational modeling of the response time distributions also supported the view that pop-out targets fail to pop out in sparse displays. Our findings strongly question the classical assumption that early processing of pop-out targets is independent of the distractors. Rather, the density of distractors critically influences whether or not a stimulus pops out. These results call for new, more reliable measures of pop-out search and potentially a reinterpretation of studies that used relatively sparse displays.
Elastic-Waveform Inversion with Compressive Sensing for Sparse Seismic Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Youzuo; Huang, Lianjie
2015-01-28
Accurate velocity models of compressional- and shear-waves are essential for geothermal reservoir characterization and microseismic imaging. Elastic-waveform inversion of multi-component seismic data can provide high-resolution inversion results of subsurface geophysical properties. However, the method requires seismic data acquired using dense source and receiver arrays. In practice, seismic sources and/or geophones are often sparsely distributed on the surface and/or in a borehole, such as 3D vertical seismic profiling (VSP) surveys. We develop a novel elastic-waveform inversion method with compressive sensing for inversion of sparse seismic data. We employ an alternating-minimization algorithm to solve the optimization problem of our new waveform inversion method. We validate our new method using synthetic VSP data for a geophysical model built using geologic features found at the Raft River enhanced-geothermal-system (EGS) field. We apply our method to synthetic VSP data with a sparse source array and compare the results with those obtained with a dense source array. Our numerical results demonstrate that the velocity models produced with our new method using a sparse source array are almost as accurate as those obtained using a dense source array.
NASA Astrophysics Data System (ADS)
Lin, H.; Zhang, X.; Wu, X.; Tarnas, J. D.; Mustard, J. F.
2018-04-01
Quantitative analysis of hydrated minerals from hyperspectral remote sensing data is fundamental for understanding Martian geologic processes. Because of the difficulty of selecting endmembers from hyperspectral images, sparse unmixing has been proposed for CRISM data on Mars; however, it becomes challenging when the endmember library grows large. Here, we propose a new methodology, termed Target Transformation Constrained Sparse Unmixing (TTCSU), to accurately detect hydrous minerals on Mars. A new version of the target transformation technique proposed in our recent work is used to obtain potential detections from CRISM data. Sparse unmixing constrained with these detections as prior information is then applied to CRISM single-scattering albedo images, which are calculated using a Hapke radiative transfer model. This methodology increases the success rate of the automatic endmember selection of sparse unmixing and yields more accurate abundances. Well-analyzed CRISM images of Southwest Melas Chasma were used to validate the methodology in this study. The sulfate jarosite was detected in Southwest Melas Chasma; its distribution is consistent with previous work and its abundance is comparable. More validations will be done in our future work.
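The role of prior detections in constraining sparse unmixing can be illustrated with nonnegative least squares over a synthetic spectral library; the library, noise level, and "prior" index set below are fabricated for illustration and merely stand in for the target-transformation step of the TTCSU pipeline.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)

# Hypothetical spectral library: 20 endmember spectra over 50 bands
n_bands, n_lib = 50, 20
library = np.abs(rng.standard_normal((n_bands, n_lib)))
library /= np.linalg.norm(library, axis=0)

# Mixed pixel: two endmembers (indices 3 and 11) plus a little noise
truth = np.zeros(n_lib)
truth[[3, 11]] = [0.7, 0.3]
pixel = library @ truth + 0.005 * rng.standard_normal(n_bands)

# Full-library unmixing via nonnegative least squares
abund_full, _ = nnls(library, pixel)

# Constraining the library to prior "detections" (as TTCSU does via target
# transformation) stabilizes the inversion; index 7 is a false detection
prior = [3, 11, 7]
abund_prior, _ = nnls(library[:, prior], pixel)
```

With the library restricted to the three candidate endmembers, the recovered abundances land near the true 0.7/0.3 mixture and the false detection receives a near-zero abundance.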
Medical Image Fusion Based on Feature Extraction and Sparse Representation
Wei, Gao; Zongxi, Song
2017-01-01
As a novel multiscale geometric analysis tool, sparse representation has shown many advantages over conventional image representation methods. However, standard sparse representation does not take intrinsic structure or time complexity into consideration. In this paper, a new fusion mechanism for multimodal medical images based on sparse representation and a decision map is proposed to deal with these problems simultaneously. Three decision maps are designed, including a structure information map (SM) and an energy information map (EM), as well as a combined structure and energy map (SEM), to make the results preserve more energy and edge information. The SM contains the local structure feature captured by the Laplacian of a Gaussian (LOG), and the EM contains the energy and energy-distribution feature detected by the mean square deviation. The decision map is added to the normal sparse-representation-based method to improve the speed of the algorithm. The proposed approach also improves the quality of the fused results by enhancing the contrast and preserving more structure and energy information from the source images. Experimental results on 36 groups of CT/MR, MR-T1/MR-T2, and CT/PET images demonstrate that the method based on SR and SEM outperforms five state-of-the-art methods. PMID:28321246
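The decision-map idea (LoG for structure, local variance for energy, keep the stronger source per pixel) can be sketched with SciPy; this is a toy single-scale version, not the paper's SR-based pipeline.

```python
import numpy as np
from scipy import ndimage

def decision_map_fuse(a, b, sigma=1.0, win=5):
    """Toy SEM-style fusion: per pixel, keep the source with the larger
    combined structure (|LoG|) and energy (local variance) score."""
    def score(img):
        structure = np.abs(ndimage.gaussian_laplace(img, sigma=sigma))
        mean = ndimage.uniform_filter(img, size=win)
        energy = ndimage.uniform_filter(img ** 2, size=win) - mean ** 2
        return structure + energy
    return np.where(score(a) >= score(b), a, b)

# Two synthetic "modalities" of one scene, each blurred on a different half
x = np.linspace(0, 12 * np.pi, 64)
img = np.outer(np.sin(x), np.cos(x))
blurred = ndimage.gaussian_filter(img, 3)
blur_left = img.copy()
blur_left[:, :32] = blurred[:, :32]
blur_right = img.copy()
blur_right[:, 32:] = blurred[:, 32:]
fused = decision_map_fuse(blur_left, blur_right)
```

Because the sharp half of each input scores higher on both structure and energy, the fused image is closer to the underlying scene than either degraded input.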
Highly parallel sparse Cholesky factorization
NASA Technical Reports Server (NTRS)
Gilbert, John R.; Schreiber, Robert
1990-01-01
Several fine-grained parallel algorithms were developed and compared for computing the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed-memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special-purpose algorithms in which the matrix structure conforms to the connection structure of the machine, the focus is on matrices with arbitrary sparsity structure. The most promising algorithm is one whose inner loop performs several dense factorizations simultaneously on a 2-D grid of processors. Virtually any massively parallel dense factorization algorithm can be used as the key subroutine. The sparse code attains execution rates comparable to those of the dense subroutine. Although at present architectural limitations prevent the dense factorization from realizing its potential efficiency, it is concluded that a regular data-parallel architecture can be used efficiently to solve arbitrarily structured sparse problems. A performance model is also presented and used to analyze the algorithms.
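The central difficulty in sparse Cholesky, fill-in and its dependence on elimination order, can be shown with a dense toy factorization of an arrowhead matrix: one ordering destroys all sparsity of the factor while the reverse ordering preserves it. Real sparse codes (like the Connection Machine implementation described here) operate on compressed structures, but the phenomenon is the same.

```python
import numpy as np

def cholesky(A):
    """Right-looking Cholesky, A = L @ L.T (dense, used here to expose fill-in)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    L = np.zeros_like(A)
    for k in range(n):
        L[k, k] = np.sqrt(A[k, k])
        L[k + 1:, k] = A[k + 1:, k] / L[k, k]
        A[k + 1:, k + 1:] -= np.outer(L[k + 1:, k], L[k + 1:, k])
    return L

n = 50
A = np.eye(n) * n
A[0, 1:] = A[1:, 0] = 1.0              # dense first row/column ("arrow up")
L_bad = cholesky(A)                    # eliminating the arrow first fills L densely

perm = np.r_[1:n, 0]                   # push the dense row/column to the end
A_good = A[np.ix_(perm, perm)]         # "arrow down": the same system, reordered
L_good = cholesky(A_good)              # the factor stays as sparse as A itself

nnz_bad = np.count_nonzero(np.abs(L_bad) > 1e-12)
nnz_good = np.count_nonzero(np.abs(L_good) > 1e-12)
```

Here the bad ordering produces a completely dense triangular factor (~n²/2 nonzeros), while the reordered system factors with no fill at all (2n-1 nonzeros), which is why sparse Cholesky codes spend so much effort on ordering heuristics.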
NASA Astrophysics Data System (ADS)
Eakin, Caroline M.; Rychert, Catherine A.; Harmon, Nicholas
2018-02-01
Mantle anisotropy beneath mid-ocean ridges and oceanic transforms is key to our understanding of seafloor spreading and underlying dynamics of divergent plate boundaries. Observations are sparse, however, given the remoteness of the oceans and the difficulties of seismic instrumentation. To overcome this, we utilize the global distribution of seismicity along transform faults to measure shear wave splitting of over 550 direct S phases recorded at 56 carefully selected seismic stations worldwide. Applying this source-side splitting technique allows for characterization of the upper mantle seismic anisotropy, and therefore the pattern of mantle flow, directly beneath seismically active transform faults. The majority of the results (60%) return nulls (no splitting), while the non-null measurements display clear azimuthal dependency. This is best simply explained by anisotropy with a near vertical symmetry axis, consistent with mantle upwelling beneath oceanic transforms as suggested by numerical models. It appears therefore that the long-term stability of seafloor spreading may be associated with widespread mantle upwelling beneath the transforms creating warm and weak faults that localize strain to the plate boundary.
The Canadian High Arctic Ionospheric Network (CHAIN)
NASA Astrophysics Data System (ADS)
Jayachandran, P. T.; Langley, R. B.; MacDougall, J. W.; Mushini, S. C.; Pokhotelov, D.; Chadwick, R.; Kelly, T.
2009-05-01
Polar cap ionospheric measurements are important for the complete understanding of the various processes in the solar wind - magnetosphere - ionosphere (SW-M-I) system as well as for space weather applications. Currently the polar cap region lacks high temporal and spatial resolution ionospheric measurements because of the orbit limitations of space-based measurements and the sparse network providing ground-based measurements. Canada has a unique advantage in remedying this shortcoming because it has the most accessible landmass in the high Arctic regions, and the Canadian High Arctic Ionospheric Network (CHAIN) is designed to take advantage of Canadian geographic vantage points for a better understanding of the Sun-Earth system. CHAIN is a distributed array of ground-based radio instruments in the Canadian high Arctic. The instrument components of CHAIN are ten high data-rate Global Positioning System ionospheric scintillation and total electron content monitors and six Canadian Advanced Digital Ionosondes. Most of these instruments have been sited within the polar cap region, except for two GPS reference stations at lower latitudes. This paper briefly overviews the scientific capabilities, instrument components, and deployment status of CHAIN.
Weather conditions and political party vote share in Dutch national parliament elections, 1971-2010.
Eisinga, Rob; Te Grotenhuis, Manfred; Pelzer, Ben
2012-11-01
Inclement weather on election day is widely seen to benefit certain political parties at the expense of others. Empirical evidence for this weather-vote share hypothesis is sparse however. We examine the effects of rainfall and temperature on share of the votes of eight political parties that participated in 13 national parliament elections, held in the Netherlands from 1971 to 2010. This paper merges the election results for all Dutch municipalities with election-day weather observations drawn from all official weather stations well distributed over the country. We find that the weather parameters affect the election results in a statistically and politically significant way. Whereas the Christian Democratic party benefits from substantial rain (10 mm) on voting day by gaining one extra seat in the 150-seat Dutch national parliament, the left-wing Social Democratic (Labor) and the Socialist parties are found to suffer from cold and wet conditions. Cold (5°C) and rainy (10 mm) election day weather causes the latter parties to lose one or two parliamentary seats.
2012-09-30
Topics include: estimation methods for underwater OFDM; two iterative receivers for distributed MIMO-OFDM with large Doppler deviations; and asynchronous multiuser … Multi-input multi-output (MIMO) OFDM is also pursued, where it is shown that the proposed hybrid initialization enables drastically improved receiver … The work on iterative receivers for distributed MIMO-OFDM with large Doppler deviations studies a distributed system with …
Mapping visual stimuli to perceptual decisions via sparse decoding of mesoscopic neural activity.
Sajda, Paul
2010-01-01
In this talk I will describe our work investigating sparse decoding of neural activity, given a realistic mapping of the visual scene to neuronal spike trains generated by a model of primary visual cortex (V1). We use a linear decoder which imposes sparsity via an L1 norm. The decoder can be viewed as a decoding neuron (linear summation followed by a sigmoidal nonlinearity) in which there are relatively few non-zero synaptic weights. We find: (1) the best decoding performance is for a representation that is sparse in both space and time, (2) decoding of a temporal code results in better performance than a rate code and is also a better fit to the psychophysical data, (3) the number of neurons required for decoding increases monotonically as signal-to-noise in the stimulus decreases, with as little as 1% of the neurons required for decoding at the highest signal-to-noise levels, and (4) sparse decoding results in a more accurate decoding of the stimulus and is a better fit to psychophysical performance than a distributed decoding, for example one imposed by an L2 norm. We conclude that sparse coding is well-justified from a decoding perspective in that it results in a minimum number of neurons and maximum accuracy when sparse representations can be decoded from the neural dynamics.
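The sparse "decoding neuron" described here (linear summation, sigmoidal nonlinearity, few nonzero synaptic weights via an L1 norm) can be sketched with proximal gradient descent on simulated spike counts; the population size, firing rates, and hyperparameters below are invented for illustration and do not reproduce the V1 model in the talk.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated spike counts from 200 neurons on 300 trials; only 6 neurons
# actually carry the binary stimulus label
n_trials, n_neur = 300, 200
y = rng.integers(0, 2, size=n_trials)
rates = np.full((n_trials, n_neur), 2.0)
informative = [5, 20, 40, 80, 120, 160]
rates[:, informative] += 3.0 * y[:, None]            # stimulus-driven neurons
X = rng.poisson(rates).astype(float)
X -= X.mean(axis=0)                                  # center so threshold 0 is meaningful

def l1_logistic(X, y, lam=15.0, lr=2e-4, n_iter=5000):
    """Sparse decoding neuron: logistic regression trained by proximal
    gradient descent with an L1 penalty on the synaptic weights."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * (X.T @ (p - y))                    # gradient step on the loss
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w

w = l1_logistic(X, y)
used = np.flatnonzero(np.abs(w) > 1e-3)              # the few non-zero "synapses"
acc = ((X @ w > 0) == y).mean()
```

Consistent with the talk's point, the L1 penalty drives most weights to exactly zero, so only a small subset of the population is needed to decode the stimulus accurately.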
Measuring Sparseness in the Brain: Comment on Bowers (2009)
ERIC Educational Resources Information Center
Quian Quiroga, Rodrigo; Kreiman, Gabriel
2010-01-01
Bowers challenged the common view in favor of distributed representations in psychological modeling and the main arguments given against localist and grandmother cell coding schemes. He revisited the results of several single-cell studies, arguing that they do not support distributed representations. We praise the contribution of Bowers (2009) for…
Sparsely-distributed organization of face and limb activations in human ventral temporal cortex
Weiner, Kevin S.; Grill-Spector, Kalanit
2011-01-01
Functional magnetic resonance imaging (fMRI) has identified face- and body part-selective regions, as well as distributed activation patterns for object categories across human ventral temporal cortex (VTC), eliciting a debate regarding functional organization in VTC and neural coding of object categories. Using high-resolution fMRI, we illustrate that face- and limb-selective activations alternate in a series of largely nonoverlapping clusters in lateral VTC along the inferior occipital gyrus (IOG), fusiform gyrus (FG), and occipitotemporal sulcus (OTS). Both general linear model (GLM) and multivoxel pattern (MVP) analyses show that face- and limb-selective activations minimally overlap and that this organization is consistent across experiments and days. We provide a reliable method to separate two face-selective clusters on the middle and posterior FG (mFus and pFus), and another on the IOG using their spatial relation to limb-selective activations and retinotopic areas hV4, VO-1/2, and hMT+. Furthermore, these activations show a gradient of increasing face selectivity and decreasing limb selectivity from the IOG to the mFus. Finally, MVP analyses indicate that there is differential information for faces in lateral VTC (containing weakly- and highly-selective voxels) relative to non-selective voxels in medial VTC. These findings suggest a sparsely-distributed organization where sparseness refers to the presence of several face- and limb-selective clusters in VTC, and distributed refers to the presence of different amounts of information in highly-, weakly-, and non-selective voxels. Consequently, theories of object recognition should consider the functional and spatial constraints of neural coding across a series of nonoverlapping category-selective clusters that are themselves distributed. PMID:20457261
Harada, Ryuhei; Nakamura, Tomotake; Shigeta, Yasuteru
2016-03-30
As an extension of the Outlier FLOODing (OFLOOD) method [Harada et al., J. Comput. Chem. 2015, 36, 763], the sparsity of the outliers defined by a hierarchical clustering algorithm, FlexDice, was considered to achieve an efficient conformational search as sparsity-weighted "OFLOOD." In OFLOOD, FlexDice detects areas of sparse distribution as outliers. The outliers are regarded as candidates with high potential to promote conformational transitions and are employed as initial structures for conformational resampling by restarting molecular dynamics simulations. When detecting outliers, FlexDice assigns each outlier a rank in the hierarchy, which relates to the sparsity of its distribution. In this study, we distinguish first-ranked, second-ranked, and third-ranked outliers: first-ranked outliers are located in regions of conformational space far from the clusters (highly sparse distribution), whereas third-ranked outliers lie near the clusters (moderately sparse distribution). To make the conformational search efficient, resampling is performed from outliers of a given rank. As demonstrations, the method was applied to several model systems: alanine dipeptide, Met-enkephalin, Trp-cage, T4 lysozyme, and glutamine binding protein. In each demonstration, the present method successfully reproduced transitions among metastable states. In particular, the first-ranked OFLOOD strongly accelerated the exploration of conformational space by expanding its edges, whereas the third-ranked OFLOOD intensively reproduced local transitions among neighboring metastable states. For quantitative evaluation of the sampled snapshots, free energy calculations were performed with a combination of umbrella samplings, providing rigorous landscapes of the biomolecules. © 2015 Wiley Periodicals, Inc.
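The rank-by-sparsity idea can be caricatured in a few lines. This is a schematic stand-in, not FlexDice (which is a hierarchical clustering algorithm); the k-nearest-neighbour isolation score and the three-way split below are assumptions made for illustration:

```python
import numpy as np

def rank_outliers(points, n_outliers=6):
    """Score each snapshot by its mean distance to its 5 nearest
    neighbours, take the highest-scoring points as outliers, and split
    them into rank 1 (most isolated) .. rank 3 (least isolated)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    score = np.sort(d, axis=1)[:, :5].mean(axis=1)   # sparsity score
    idx = np.argsort(score)[::-1][:n_outliers]       # most isolated first
    ranks = np.repeat([1, 2, 3], n_outliers // 3)    # [1,1,2,2,3,3]
    return dict(zip(idx.tolist(), ranks.tolist()))

# Toy data: a tight cluster plus six progressively less isolated points.
cluster = np.array([[i * 0.01, j * 0.01] for i in range(6) for j in range(5)])
extras = np.array([[100.0, 100.0], [50.0, 50.0], [40.0, 40.0],
                   [5.0, 5.0], [4.0, 4.0], [3.0, 3.0]])
ranks = rank_outliers(np.vstack([cluster, extras]))
```

Resampling from rank-1 points then corresponds to restarting trajectories far from the explored clusters, while rank-3 points restart near them.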
NASA Astrophysics Data System (ADS)
Lehnert, Michal; Geletič, Jan; Husák, Jan; Vysoudil, Miroslav
2015-11-01
The stations of the Metropolitan Station Network in Olomouc (Czech Republic) were assigned to local climate zones, and the temperature characteristics of the stations were compared. The classification of local climate zones represents an up-to-date concept for unifying the characterization of the neighborhoods of climate research sites. This study is one of the first to classify existing stations within local climate zones. Using a combination of GIS-based analyses and field research, the values of geometric and surface cover properties were calculated, and the stations were subsequently classified into local climate zones. It turned out that the classification of local climate zones can be used efficiently for representative documentation of the neighborhood of a climate station. To achieve full standardization of the description of a station's neighborhood, the classification procedures, including the methods used for processing spatial data and for indicating specific local characteristics, must also be standardized. Although the main patterns of temperature differences between stations in compactly built-up zones, stations in openly built-up zones, and stations in sparsely built or open areas were evident, air temperature also showed considerable differences within particular zones. These differences were largely caused by the varying geometric layout of development and by the unstandardized placement of the stations. For direct comparison of temperatures between zones, further research should preferentially use those stations that have been placed so as to be as representative as possible of the zone in question.
NASA Astrophysics Data System (ADS)
Zhang, Y.; Chen, W.; Li, J.
2013-12-01
Climate change may alter the spatial distribution, composition, structure, and functions of plant communities. Transitional zones between biomes, or ecotones, are particularly sensitive to climate change. Ecotones are usually heterogeneous, with sparse trees, and their dynamics are mainly determined by the growth and competition of individual plants in the communities. It is therefore necessary to calculate the solar radiation absorbed by individual plants in order to understand and predict their responses to climate change. In this study, we developed an individual plant radiation model, IPR (version 1.0), to calculate the solar radiation absorbed by individual plants in sparse heterogeneous woody plant communities. The model is based on geometrical optical relationships, assuming that the crowns of woody plants are rectangular boxes with uniform leaf area density. It calculates the fractions of sunlit and shaded leaf classes and the solar radiation absorbed by each class, including direct radiation from the sun, diffuse radiation from the sky, and scattered radiation from the plant community; the solar radiation received at the ground is also calculated. We tested the model by comparing its results with analytical solutions for random distributions of plants, and the tests show that the model results are very close to the averages of the random distributions. The model is computationally efficient and is suitable for use in ecological models to simulate long-term transient responses of plant communities to climate change.
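For intuition, the sunlit/shaded partition can be illustrated with the classic one-dimensional Beer-Lambert split of leaf area index. This is a standard textbook simplification, not IPR itself, which uses box-shaped crowns and geometrical optics; the extinction coefficient below is an assumed value:

```python
import math

def sunlit_shaded_lai(lai, solar_zenith_deg, k=0.5):
    """Beer-Lambert split of a canopy's leaf area index (LAI) into
    sunlit and shaded classes. k is an assumed extinction coefficient;
    kb scales it to the direct-beam path length."""
    kb = k / math.cos(math.radians(solar_zenith_deg))  # beam extinction
    lai_sun = (1.0 - math.exp(-kb * lai)) / kb         # sunlit leaf area
    return lai_sun, lai - lai_sun

sun, shade = sunlit_shaded_lai(lai=3.0, solar_zenith_deg=30.0)
```

The two classes matter because sunlit leaves absorb direct plus diffuse radiation while shaded leaves absorb diffuse and scattered radiation only, which is exactly the accounting IPR performs per plant.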
Oryspayev, Dossay; Aktulga, Hasan Metin; Sosonkina, Masha; ...
2015-07-14
Sparse matrix-vector multiply (SpMVM) is an important kernel that frequently arises in high performance computing applications. Due to its low arithmetic intensity, several approaches have been proposed in the literature to improve its scalability and efficiency in large-scale computations. In this paper, our target systems are high-end multi-core architectures, and we use a message passing interface + open multiprocessing (MPI+OpenMP) hybrid programming model for parallelism. We analyze the performance of a recently proposed implementation of distributed symmetric SpMVM, originally developed for large sparse symmetric matrices arising in ab initio nuclear structure calculations. We also study important features of this implementation and compare with previously reported implementations that do not exploit the underlying symmetry. Our SpMVM implementations leverage the hybrid paradigm to efficiently overlap expensive communications with computations. Our main comparison criterion is the "CPU core hours" metric, which is the main measure of resource usage on supercomputers. We analyze the effects of a topology-aware mapping heuristic using a simplified network load model. Furthermore, we have tested the different SpMVM implementations on two large clusters with 3D Torus and Dragonfly topologies. Our results show that the distributed SpMVM implementation that exploits matrix symmetry and hides communication yields the best value for the "CPU core hours" metric and significantly reduces data movement overheads.
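The symmetry trick at the heart of such implementations is to store only the upper triangle and mirror each off-diagonal entry on the fly. A serial CSR sketch (plain arrays; the paper's MPI+OpenMP distribution and communication overlap are omitted):

```python
import numpy as np

def spmv_symmetric(indptr, indices, data, x):
    """y = A x for a symmetric A stored as its upper triangle only,
    in CSR form (indptr/indices/data). Each stored off-diagonal entry
    contributes to both y[i] and y[j], halving storage and traffic."""
    y = np.zeros_like(x, dtype=float)
    n = len(indptr) - 1
    for i in range(n):
        for k in range(indptr[i], indptr[i + 1]):
            j, a = indices[k], data[k]
            y[i] += a * x[j]
            if i != j:                  # mirror the off-diagonal entry
                y[j] += a * x[i]
    return y

# Upper triangle of [[2,1,0],[1,3,4],[0,4,5]] in CSR form:
y = spmv_symmetric([0, 2, 4, 5], [0, 1, 1, 2, 2],
                   [2.0, 1.0, 3.0, 4.0, 5.0], np.array([1.0, 2.0, 3.0]))
print(y)   # -> [ 4. 19. 23.]
```

In the distributed setting the mirrored contributions `y[j] += a * x[i]` are what force the extra reduction step that the paper's implementation overlaps with computation.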
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bergen, Benjamin Karl
2016-08-03
These are slides which are part of the ASC L2 Milestone Review. The following topics are covered: Legion Backend, Distributed-Memory Partitioning, Sparse Data Representations, and MPI-Legion Interoperability.
Pisharady, Pramod Kumar; Sotiropoulos, Stamatios N; Sapiro, Guillermo; Lenglet, Christophe
2017-09-01
We propose a sparse Bayesian learning algorithm for improved estimation of white matter fiber parameters from compressed (under-sampled q-space) multi-shell diffusion MRI data. The multi-shell data are represented in dictionary form using a non-monoexponential decay model of diffusion based on a continuous gamma distribution of diffusivities. The fiber volume fractions with predefined orientations, which are the unknown parameters, form the dictionary weights. These unknown parameters are estimated within a linear un-mixing framework, using a sparse Bayesian learning algorithm. Localized learning of the hyperparameters at each voxel and for each possible fiber orientation improves the parameter estimation. Our experiments using synthetic data from the ISBI 2012 HARDI reconstruction challenge and in-vivo data from the Human Connectome Project demonstrate the improvements.
Benefits of rotational ground motions for planetary seismology
NASA Astrophysics Data System (ADS)
Donner, S.; Joshi, R.; Hadziioannou, C.; Nunn, C.; van Driel, M.; Schmelzbach, C.; Wassermann, J. M.; Igel, H.
2017-12-01
Exploring the internal structure of planetary objects is fundamental to understanding the evolution of our solar system. In contrast to the Earth, planetary seismology is hampered by the limited number of stations available, often just a single one. Classical seismology is based on the measurement of three components of translational ground motion, and its methods were mainly developed for larger numbers of stations; the application of classical seismological methods to other planets is therefore very limited. Here, we show that the additional measurement of three components of rotational ground motion could substantially improve the situation. From sparse or single-station networks measuring translational and rotational ground motions, it is possible to obtain additional information on structure and source. This includes direct information on local subsurface seismic velocities, separation of seismic phases, the propagation direction of seismic energy, crustal scattering properties, and moment tensor source parameters for regional sources. The potential of this methodology is highlighted through synthetic forward and inverse modeling experiments.
Improved source inversion from joint measurements of translational and rotational ground motions
NASA Astrophysics Data System (ADS)
Donner, S.; Bernauer, M.; Reinwald, M.; Hadziioannou, C.; Igel, H.
2017-12-01
Waveform inversion for seismic point (moment tensor) and kinematic sources is a standard procedure. However, especially at local and regional distances, a lack of appropriate velocity models, the sparsity of station networks, or a low signal-to-noise ratio combined with more complex waveforms hampers the successful retrieval of reliable source solutions. We assess the potential of rotational ground motion recordings to increase the resolution power and reduce non-uniqueness for point and kinematic source solutions. Based on synthetic waveform data, we perform a Bayesian (i.e., probabilistic) inversion. Thus, we avoid the subjective selection of the most reliable solution according to the lowest misfit or some other constructed criterion; in addition, we obtain unbiased measures of resolution and possible trade-offs. Testing different earthquake mechanisms and scenarios, we show that the resolution of the source solutions can be improved significantly; depth-dependent components in particular improve substantially. In addition to synthetic data from station networks, we also tested sparse-network and single-station cases.
2010-09-20
ISS024-E-015121 (20 Sept. 2010) --- Twitchell Canyon Fire in central Utah is featured in this image photographed by an Expedition 24 crew member on the International Space Station (ISS). The Twitchell Canyon Fire near central Utah's Fishlake National Forest is reported to have an area of approximately 13,383 hectares (approximately 134 square kilometers, or 33,071 acres). This detailed image shows smoke plumes generated by several fire spots close to the southwestern edge of the burned area. The fire was started by a lightning strike on July 20, 2010. Whereas many of the space station images of Earth are looking straight down (nadir), this photograph was exposed at an angle. The space station was located over a point approximately 509 kilometers (316 miles) to the northeast, near the Colorado/Wyoming border, at the time the image was taken on Sept. 20. Southwesterly winds were continuing to extend smoke plumes from the fire to the northeast. While the Twitchell Canyon region is sparsely populated, Interstate Highway 15 is visible at upper left.
Open hardware, low cost, air quality stations for monitoring ozone in coastal area
NASA Astrophysics Data System (ADS)
Lima, Marco; Donzella, Davide; Pintus, Fabio; Fedi, Adriano; Ferrari, Daniele; Massabò, Marco
2014-05-01
Ozone concentrations in urban and coastal areas are a great concern for citizens and, consequently, for regulators. Over the last 20 years the ozone concentration has almost doubled, attracting public attention because of its well-known harmful impacts on human health and the biosphere in general. Official monitoring networks usually comprise high-precision, high-accuracy observation stations, typically managed by public administrations and environmental agencies; unfortunately, due to their high costs of installation and maintenance, the monitoring stations are relatively sparse. Such monitoring networks have been recognized to be unsuitable for effectively characterizing the high variability of air quality, especially in areas where pollution sources are various and often not static. We present a prototype of a low-cost station for air quality monitoring, specifically developed to complement the official monitoring stations by improving the representation of the spatial distribution of air quality. We focused on a semi-professional product that could guarantee the highest reliability at the lowest possible cost, supported by a consistent infrastructure for data management. We tested two types of ozone sensor: electrochemical and metal oxide. This work is integrated in the ACRONET Paradigm® project: an open-hardware platform strongly oriented toward environmental monitoring. All software and hardware sources will be available on the web, so a computer and a small set of work tools will be sufficient to create new monitoring networks, with the only constraint being to share all the data obtained. It will thus be possible to create a real "sensing community". The prototype is currently able to measure ozone level, temperature, and relative humidity, but soon, with upcoming changes, it will also be able to monitor dust, carbon monoxide, and nitrogen dioxide, again through the use of commercial sensors.
The sensors are grouped on a compact board that interfaces with a data logger able to transmit data to a dedicated server through a GPRS module (no ad hoc radio infrastructure needed). Due to the low-latency GPRS transmission, the data are transmitted in near-real time. The prototype has an independent power supply. The sensor outputs are directly compared with the measurements of the official fixed monitoring stations. We present preliminary tests of ozone level assessment obtained without laboratory calibration during a first field campaign in Savona (Italy); the preliminary verification and tests show reasonable agreement between the low-cost sensors and the fixed monitoring station ozone level trends (the low-cost sensors detect gas concentrations at the ppb level). The preliminary results are promising for complementing the fixed official monitoring networks with low-cost sensors.
Using data tagging to improve the performance of Kanerva's sparse distributed memory
NASA Technical Reports Server (NTRS)
Rogers, David
1988-01-01
The standard formulation of Kanerva's sparse distributed memory (SDM) involves the selection of a large number of data storage locations, followed by averaging the data contained in those locations to reconstruct the stored data. A variant of this model is discussed, in which the predominant pattern is the focus of reconstruction. First, one architecture is proposed which returns the predominant pattern rather than the average pattern. However, this model will require too much storage for most uses. Next, a hybrid model is proposed, called tagged SDM, which approximates the results of the predominant pattern machine, but is nearly as efficient as Kanerva's original formulation. Finally, some experimental results are shown which confirm that significant improvements in the recall capability of SDM can be achieved using the tagged architecture.
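Kanerva's original write/read cycle, which the tagged variant refines, can be sketched in a few lines. The parameter values below are illustrative choices, not Kanerva's, and the counter-sign read is the "averaging" step the abstract refers to:

```python
import numpy as np

# Minimal autoassociative SDM: M random hard locations, a Hamming-ball
# activation rule, and one counter per bit per location.
rng = np.random.default_rng(1)
N, M, R = 256, 1000, 116            # word length, hard locations, radius

locations = rng.integers(0, 2, (M, N))   # fixed random hard addresses
counters = np.zeros((M, N), dtype=int)

def active(addr):
    """Indices of hard locations within Hamming distance R of addr."""
    return np.nonzero((locations != addr).sum(axis=1) <= R)[0]

def write(addr, word):
    """Increment counters for 1-bits, decrement for 0-bits."""
    counters[active(addr)] += np.where(word == 1, 1, -1)

def read(addr):
    """Sum counters over active locations and threshold the sign."""
    s = counters[active(addr)].sum(axis=0)
    return (s > 0).astype(int)

word = rng.integers(0, 2, N)
write(word, word)                    # store autoassociatively
cue = word.copy()
cue[:20] ^= 1                        # corrupt 20 of 256 bits as a partial cue
recalled = read(cue)
```

With a single stored pattern the summed counters reproduce the word exactly from the noisy cue; the predominant-pattern and tagged variants change how this sum is interpreted when many overlapping patterns are stored.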
NASA Astrophysics Data System (ADS)
Santarius, John; Navarro, Marcos; Michalak, Matthew; Fancher, Aaron; Kulcinski, Gerald; Bonomo, Richard
2016-10-01
A newly initiated research project will be described that investigates methods for detecting shielded special nuclear materials by combining multi-dimensional neutron sources, forward/adjoint calculations modeling neutron and gamma transport, and sparse data analysis of detector signals. The key tasks for this project are: (1) developing a radiation transport capability for use in optimizing adaptive-geometry, inertial-electrostatic confinement (IEC) neutron source/detector configurations for neutron pulses distributed in space and/or phased in time; (2) creating distributed-geometry, gas-target, IEC fusion neutron sources; (3) applying sparse data and noise reduction algorithms, such as principal component analysis (PCA) and wavelet transform analysis, to enhance detection fidelity; and (4) educating graduate and undergraduate students. Funded by DHS DNDO Project 2015-DN-077-ARI095.
Electric power scheduling - A distributed problem-solving approach
NASA Technical Reports Server (NTRS)
Mellor, Pamela A.; Dolce, James L.; Krupp, Joseph C.
1990-01-01
Space Station Freedom's power system, along with the spacecraft's other subsystems, needs to carefully conserve its resources and yet strive to maximize overall Station productivity. Due to Freedom's distributed design, each subsystem must work cooperatively within the Station community. There is a need for a scheduling tool which will preserve this distributed structure, allow each subsystem the latitude to satisfy its own constraints, and preserve individual value systems while maintaining Station-wide integrity.
Dynamic Tsunami Data Assimilation (DTDA) Based on Green's Function: Theory and Application
NASA Astrophysics Data System (ADS)
Wang, Y.; Satake, K.; Gusman, A. R.; Maeda, T.
2017-12-01
Tsunami data assimilation estimates tsunami arrival times and heights at Points of Interest (PoIs) by assimilating tsunami data observed offshore into a numerical simulation, without the need to calculate the initial sea surface height at the source (Maeda et al., 2015). The previous tsunami data assimilation approach has two main problems: first, it requires considerable computation time, because the tsunami wavefield of the whole region of interest is computed continuously; second, it relies on a dense observation network such as the Dense Oceanfloor Network system for Earthquakes and Tsunamis (DONET) in Japan or the Cascadia Initiative (CI) in North America (Gusman et al., 2016), which is not practical in some areas. Here we propose a new approach based on Green's functions to speed up the tsunami data assimilation process and to address the problem of sparse observations: Dynamic Tsunami Data Assimilation (DTDA). If the residual between the observed and calculated tsunami heights is not zero, there will be an assimilation response around the station, usually a Gaussian-distributed sea surface displacement. The Green's function G(i,j) is defined as the tsunami waveform at the j-th grid point caused by the propagation of the assimilation response at the i-th station. Hence, the forecasted waveforms at the PoIs are calculated as the superposition of the Green's functions. In the case of sparse observations, we could use aircraft and satellite observations; the previous assimilation approach is not practical here because it is too slow to assimilate moving observations and to compute the tsunami wavefield of the region of interest. In contrast, DTDA synthesizes the waveforms quickly, as long as the Green's functions are calculated in advance. We apply our method to a hypothetical earthquake off the west coast of Sumatra Island similar to the 2004 Indian Ocean earthquake. Currently there is no dense observation network in that area, making the previous assimilation approach difficult.
We used DTDA with aircraft and satellite observations above the Indian Ocean to forecast the tsunami in Sri Lanka, India, and Thailand. The results show that DTDA provides reliable tsunami forecasts for these countries, and that a tsunami early warning can be issued half an hour before the tsunami arrives, reducing the damage along the coast.
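The DTDA forecast step reduces to a superposition of precomputed Green's functions scaled by the observed-minus-computed residuals. Schematically (shapes and numbers below are invented for illustration):

```python
import numpy as np

# G[i, j, t]: waveform at forecast point j, time step t, caused by a
# unit assimilation response at station i (precomputed in advance).
n_sta, n_poi, n_t = 3, 2, 50
rng = np.random.default_rng(0)
G = rng.standard_normal((n_sta, n_poi, n_t))   # placeholder Green's functions

def forecast(residuals):
    """Forecasted waveforms at the points of interest: superposition of
    station Green's functions weighted by the height residuals."""
    return np.tensordot(residuals, G, axes=(0, 0))   # shape (n_poi, n_t)

wave = forecast(np.array([0.2, -0.1, 0.05]))
```

Because the expensive propagation is baked into G in advance, each new observation (even from a moving aircraft or satellite platform) only costs a weighted sum at forecast time.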
Total recall in distributive associative memories
NASA Technical Reports Server (NTRS)
Danforth, Douglas G.
1991-01-01
Iterative error correction of asymptotically large associative memories is equivalent to a one-step learning rule. This rule is the inverse of the activation function of the memory. Spectral representations of nonlinear activation functions are used to obtain the inverse in closed form for Sparse Distributed Memory, Selected-Coordinate Design, and Radial Basis Functions.
NASA Technical Reports Server (NTRS)
Comiso, Joey C.
1995-01-01
Surface temperature is one of the key variables associated with weather and climate. Accurate measurements of surface air temperature are routinely made at meteorological stations around the world, and satellite data have been used to produce synoptic global temperature distributions. However, not much attention has been paid to temperature distributions in the polar regions, where the network of stations is very sparse. Because of adverse weather conditions and general inaccessibility, surface field measurements are also limited. Furthermore, accurate retrievals from satellite data in the region have been difficult to make because of persistent cloudiness and ambiguities in discriminating clouds from snow or ice. Surface temperature observations are required in the polar regions for air-sea-ice interaction studies, especially in the calculation of heat, salinity, and humidity fluxes. They are also useful in identifying areas of melt or meltponding within the sea ice pack and on the ice sheets, and in calculating the emissivities of these surfaces. Moreover, the polar regions are unique in that they are the sites of temperature extremes, the locations of which are difficult to identify without a global monitoring system. The regions may also provide an early signal of potential climate change, because such a signal is expected to be amplified there due to feedback effects. In cloud-free areas, the thermal channels of infrared systems provide surface temperatures at relatively good accuracy. Previous capabilities include the Temperature Humidity Infrared Radiometer (THIR) onboard the Nimbus-7 satellite, launched in 1978; current capabilities include the Advanced Very High Resolution Radiometer (AVHRR) aboard NOAA satellites. Together, these two systems cover a span of 16 years of thermal infrared data.
Techniques for retrieving surface temperatures with these sensors in the polar regions have been developed. Errors have been estimated to range from 1 K to 5 K, mainly due to cloud-masking problems. With many additional channels available, it is expected that the EOS Moderate Resolution Imaging Spectroradiometer (MODIS) will provide an improved characterization of clouds and good discrimination of clouds from snow or ice surfaces.
A simplified Suomi NPP VIIRS dust detection algorithm
NASA Astrophysics Data System (ADS)
Yang, Yikun; Sun, Lin; Zhu, Jinshan; Wei, Jing; Su, Qinghua; Sun, Wenxiao; Liu, Fangwei; Shu, Meiyan
2017-11-01
Due to the complex characteristics of dust and the sparseness of ground-based monitoring stations, dust monitoring faces severe challenges, especially in dust storm-prone areas. Aiming to construct a high-precision dust storm detection model, we built a pixel database of dust over a variety of typical surface and scene types, such as cloud, vegetation, Gobi desert, and ice/snow, and analysed their distributions of reflectance and brightness temperature (BT). On this basis, a new Simplified Dust Detection Algorithm (SDDA) for the Suomi National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite (NPP VIIRS) is proposed. NPP VIIRS images covering northern China and Mongolia, regions that experience serious dust storms, were selected for the dust detection experiments. The monitoring results were compared with true colour composite images, and most dust areas were accurately detected, except for fragmented thin dust over bright surfaces. Dust ground-based measurements obtained from the Meteorological Information Comprehensive Analysis and Process System (MICAPS) and the Ozone Monitoring Instrument Aerosol Index (OMI AI) products were selected for comparison purposes. The dust monitoring results agreed well in spatial distribution with the OMI AI dust products and the MICAPS ground-measured data, with an average accuracy of 83.10%. The SDDA is relatively robust and enables automatic monitoring of dust storms.
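One classic ingredient of infrared dust tests is the split-window brightness temperature difference, since airborne dust typically makes BT(11 µm) − BT(12 µm) negative, unlike most clouds and clear surfaces. A toy version (the threshold and inputs are invented; the actual SDDA combines several reflectance and BT criteria, not this single test):

```python
import numpy as np

def dust_flag(bt11, bt12, btd_thresh=0.0):
    """Flag pixels whose split-window brightness temperature difference
    BT(11um) - BT(12um) falls below an (illustrative) threshold."""
    return (bt11 - bt12) < btd_thresh

bt11 = np.array([285.0, 290.0, 270.0])   # Kelvin, made-up pixels
bt12 = np.array([287.0, 288.0, 269.0])
print(dust_flag(bt11, bt12))             # -> [ True False False]
```

Real algorithms add reflectance tests over bright surfaces precisely because this single BT difference fails there, which matches the fragmented-thin-dust limitation noted in the abstract.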
Parallel pivoting combined with parallel reduction
NASA Technical Reports Server (NTRS)
Alaghband, Gita
1987-01-01
Parallel algorithms for the triangularization of large, sparse, and unsymmetric matrices are presented. The method combines parallel reduction with a new parallel pivoting technique, control over the generation of fill-in, and a check for numerical stability, all done in parallel with the work distributed over the active processes. The technique uses the compatibility relation between pivots to identify parallel pivot candidates, and uses the Markowitz numbers of pivots to minimize fill-in. This technique is not a preordering of the sparse matrix; it is applied dynamically as the decomposition proceeds.
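The Markowitz criterion scores each candidate pivot (i, j) by (r_i - 1)(c_j - 1), where r_i and c_j count nonzeros in the pivot's row and column; the product bounds the fill-in that eliminating on that pivot can create. A dense toy computation (not the paper's parallel implementation):

```python
import numpy as np

def markowitz_numbers(A):
    """Markowitz cost (r_i - 1) * (c_j - 1) for every nonzero candidate
    pivot; positions holding a zero are masked with a huge cost so they
    can never be selected."""
    nz = A != 0
    r = nz.sum(axis=1)              # nonzeros per row
    c = nz.sum(axis=0)              # nonzeros per column
    return np.where(nz, (r[:, None] - 1) * (c[None, :] - 1),
                    np.iinfo(np.int64).max)

A = np.array([[4, 0, 1],
              [0, 2, 0],
              [3, 0, 5]])
cost = markowitz_numbers(A)
i, j = np.unravel_index(np.argmin(cost), cost.shape)
print(i, j)   # -> 1 1  (the singleton row/column pivot has cost 0)
```

In practice the cost must also be balanced against numerical stability, which is the check the parallel algorithm performs alongside pivot selection.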
Mechanical Network in Titin Immunoglobulin from Force Distribution Analysis
Wilmanns, Matthias; Gräter, Frauke
2009-01-01
The role of mechanical force in cellular processes is increasingly revealed by single molecule experiments and simulations of force-induced transitions in proteins. How the applied force propagates within proteins determines their mechanical behavior yet remains largely unknown. We present a new method based on molecular dynamics simulations to disclose the distribution of strain in protein structures, here for the newly determined high-resolution crystal structure of I27, a titin immunoglobulin (IG) domain. We obtain a sparse, spatially connected, and highly anisotropic mechanical network. This allows us to detect load-bearing motifs composed of interstrand hydrogen bonds and hydrophobic core interactions, including parts distal to the site to which force was applied. The role of the force distribution pattern for mechanical stability is tested by in silico unfolding of I27 mutants. We then compare the observed force pattern to the sparse network of coevolved residues found in this family. We find a remarkable overlap, suggesting the force distribution to reflect constraints for the evolutionary design of mechanical resistance in the IG family. The force distribution analysis provides a molecular interpretation of coevolution and opens the road to the study of the mechanism of signal propagation in proteins in general. PMID:19282960
Optimal Couple Projections for Domain Adaptive Sparse Representation-based Classification.
Zhang, Guoqing; Sun, Huaijiang; Porikli, Fatih; Liu, Yazhou; Sun, Quansen
2017-08-29
In recent years, sparse representation-based classification (SRC) has been one of the most successful methods, showing impressive performance in various classification tasks. However, when the training data have a different distribution than the testing data, the learned sparse representation may not be optimal, and the performance of SRC degrades significantly. To address this problem, we propose an optimal couple projections for domain-adaptive sparse representation-based classification (OCPD-SRC) method, in which discriminative features of data in the two domains are learned simultaneously with a dictionary that can succinctly represent the training and testing data in the projected space. OCPD-SRC is designed based on the decision rule of SRC, with the objective of learning coupled projection matrices and a common discriminative dictionary such that the between-class sparse reconstruction residuals of data from both domains are maximized and the within-class sparse reconstruction residuals are minimized in the projected low-dimensional space. Thus, the resulting representations fit SRC well and simultaneously have better discriminant ability. In addition, our method can be easily extended to multiple domains and can be kernelized to deal with the nonlinear structure of data. The optimal solution for the proposed method can be efficiently obtained via alternating optimization. Extensive experimental results on a series of benchmark databases show that our method is better than or comparable to many state-of-the-art methods.
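The SRC decision rule underlying this work assigns a sample to the class whose dictionary atoms give the smallest reconstruction residual. A sketch of that rule (the ridge coding step here is a cheap stand-in for the usual L1 minimization, and the dictionary and data are invented):

```python
import numpy as np

def src_classify(D, labels, y, lam=0.01):
    """Code y over dictionary D (columns = atoms, one class label per
    atom), then assign the class whose atoms alone reconstruct y with
    the smallest residual - the SRC decision rule."""
    labels = np.asarray(labels)
    # Ridge code standing in for the L1 solver of real SRC:
    x = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    residuals = {}
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)   # keep only class-c coefficients
        residuals[c] = np.linalg.norm(y - D @ xc)
    return min(residuals, key=residuals.get)

# Two one-atom classes along different axes; y lies near class 0's atom.
D = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
pred = src_classify(D, [0, 1], np.array([1.0, 0.1, 0.0]))
print(pred)   # -> 0
```

OCPD-SRC's contribution is to learn projections and the dictionary so that these class-wise residuals stay discriminative even when training and testing data come from different domains.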
A range-based predictive localization algorithm for WSID networks
NASA Astrophysics Data System (ADS)
Liu, Yuan; Chen, Junjie; Li, Gang
2017-11-01
Most studies on localization algorithms are conducted on sensor networks with densely distributed nodes. However, non-localizable problems are prone to occur in networks with sparsely distributed sensor nodes. To solve this problem, a range-based predictive localization algorithm (RPLA) is proposed in this paper for wireless sensor networks integrating RFID (WSID networks). A Gaussian mixture model is established to predict the trajectory of a mobile target. Then, the received signal strength indication is used to reduce the residence area of the target location, based on the approximate point-in-triangulation test (APIT) algorithm. In addition, collaborative localization schemes are introduced to locate the target in non-localizable situations. Simulation results verify that the RPLA achieves accurate localization for networks with sparsely distributed sensor nodes. The localization accuracy of the RPLA is 48.7% higher than that of the APIT algorithm, 16.8% higher than that of the single Gaussian model-based algorithm, and 10.5% higher than that of the Kalman filtering-based algorithm.
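The range-based core of such schemes is multilateration from distances to known anchor nodes. A least-squares sketch (RPLA's Gaussian mixture prediction and APIT refinement are omitted; the anchor layout is invented):

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares position estimate from ranges to anchor nodes.
    Subtracting the first anchor's circle equation from the others
    linearizes the problem into A p = b."""
    a0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (d0 ** 2 - dists[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
target = np.array([3.0, 4.0])
d = np.linalg.norm(anchors - target, axis=1)   # noise-free ranges
print(trilaterate(anchors, d))                 # approx [3., 4.]
```

With noisy RSSI-derived ranges the same linear system is solved in the least-squares sense, and sparse networks fail exactly when too few anchors are in range, which is the non-localizable case RPLA's predictive step addresses.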
NASA Astrophysics Data System (ADS)
Turner, D. P.; Jacobson, A. R.; Nemani, R. R.
2013-12-01
The recent development of large spatially-explicit datasets for multiple variables relevant to monitoring terrestrial carbon flux offers the opportunity to estimate the terrestrial land flux using several alternative, potentially complementary, approaches. Here we developed and compared regional estimates of net ecosystem exchange (NEE) over the Pacific Northwest region of the U.S. using three approaches. In the prognostic modeling approach, the process-based Biome-BGC model was driven by distributed meteorological station data and was informed by Landsat-based coverages of forest stand age and disturbance regime. In the diagnostic modeling approach, the quasi-mechanistic CFLUX model estimated net ecosystem production (NEP) by upscaling eddy covariance flux tower observations. The model was driven by distributed climate data and MODIS FPAR (the fraction of incident PAR that is absorbed by the vegetation canopy), and was informed by coarse-resolution (1 km) data on forest stand age. In both the prognostic and diagnostic modeling approaches, emissions estimates for biomass burning, harvested products, and river/stream evasion were added to model-based NEP to obtain NEE. The inversion model (CarbonTracker) relied on observations of atmospheric CO2 concentration to optimize prior surface carbon flux estimates. The Pacific Northwest is heterogeneous with respect to land cover and forest management, and repeated surveys of forest inventory plots support the presence of a strong regional carbon sink. The diagnostic model suggested a stronger carbon sink than the prognostic model, and a much larger sink than the inversion model. The introduction of Landsat data on disturbance history served to reduce uncertainty with respect to regional NEE in the diagnostic and prognostic modeling approaches. The FPAR data were particularly helpful in capturing the seasonality of the carbon flux in the diagnostic modeling approach.
The inversion approach took advantage of a global network of CO2 observation stations, but had difficulty resolving regional fluxes such as that in the PNW given the still sparse nature of the CO2 measurement network.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melaina, M.; Bremson, J.; Solo, K.
2013-01-01
The availability of retail stations can be a significant barrier to the adoption of alternative fuel light-duty vehicles in household markets. This is especially the case during early market growth, when retail stations are likely to be sparse and when vehicles are dedicated in the sense that they can only be fuelled with a new alternative fuel. For some bi-fuel vehicles, which can also fuel with conventional gasoline or diesel, limited availability will not necessarily limit vehicle sales but can limit fuel use. The impact of limited availability on vehicle purchase decisions is largely a function of geographic coverage and consumer perception. In this paper we review previous attempts to quantify the value of availability and present results from two studies that rely upon distinct methodologies. The first study relies upon stated preference data from a discrete choice survey, and the second relies upon a station clustering algorithm and a rational-actor value-of-time framework. Results from the two studies provide an estimate of the discrepancy between stated preference cost penalties and a lower bound on potential revealed cost penalties.
Non-convex Statistical Optimization for Sparse Tensor Graphical Model
Sun, Wei; Wang, Zhaoran; Liu, Han; Cheng, Guang
2016-01-01
We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation of the precision matrix corresponding to each way of the tensor, we assume the data follow a tensor normal distribution whose covariance has a Kronecker product structure. The penalized maximum likelihood estimation of this model involves minimizing a non-convex objective function. In spite of the non-convexity of this estimation problem, we prove that an alternating minimization algorithm, which iteratively estimates each sparse precision matrix while fixing the others, attains an estimator with the optimal statistical rate of convergence as well as consistent graph recovery. Notably, such an estimator achieves estimation consistency with only one tensor sample, a guarantee not attained in previous work. Our theoretical results are backed by thorough numerical studies. PMID:28316459
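The Kronecker structure is what makes the way-by-way alternating estimation possible: the joint precision matrix factors as a Kronecker product of the per-way precision matrices, because the inverse of a Kronecker product is the Kronecker product of the inverses. A small pure-Python check of that identity (2x2 factors; helper names are illustrative):

```python
def kron(A, B):
    """Kronecker product of two small dense matrices."""
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def inv2(M):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(A, B):
    """Plain dense matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

# (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}: their product should be the 4x4 identity.
A, B = [[2, 1], [1, 2]], [[3, 0], [0, 1]]
check = matmul(kron(inv2(A), inv2(B)), kron(A, B))
```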
Particle Size Distributions in Atmospheric Clouds
NASA Technical Reports Server (NTRS)
Paoli, Roberto; Shariff, Karim
2003-01-01
In this note, we derive a transport equation for a spatially integrated distribution function of particle size that is suitable for sparse particle systems, such as atmospheric clouds. This is done by integrating a Boltzmann equation for a (local) distribution function over an arbitrary but finite volume. A methodology for evolving the moments of the integrated distribution is presented. These moments can be either tracked for a finite number of discrete populations ('clusters') or treated as continuum variables.
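For the discrete-population ('cluster') option, the tracked moments are simply weighted power sums over the clusters. A hypothetical sketch, with clusters given as (number, diameter) pairs:

```python
def moments(clusters, kmax=3):
    """Moments M_k = sum_i n_i * d_i**k of a discrete set of (count, diameter)
    clusters. M_0 is the total particle number; M_1 / M_0 is the mean diameter."""
    return [sum(n * d ** k for n, d in clusters) for k in range(kmax + 1)]

def mean_diameter(clusters):
    m = moments(clusters, 1)
    return m[1] / m[0]
```

Evolving the integrated distribution then amounts to updating these few scalars per volume rather than the full size spectrum.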
Reconstruction of far-field tsunami amplitude distributions from earthquake sources
Geist, Eric L.; Parsons, Thomas E.
2016-01-01
The probability distribution of far-field tsunami amplitudes is explained in relation to the distribution of seismic moment at subduction zones. Tsunami amplitude distributions at tide gauge stations follow a similar functional form, well described by a tapered Pareto distribution that is parameterized by a power-law exponent and a corner amplitude. Distribution parameters are first established for eight tide gauge stations in the Pacific, using maximum likelihood estimation. A procedure is then developed to reconstruct the tsunami amplitude distribution that consists of four steps: (1) define the distribution of seismic moment at subduction zones; (2) establish a source-station scaling relation from regression analysis; (3) transform the seismic moment distribution to a tsunami amplitude distribution for each subduction zone; and (4) mix the transformed distribution for all subduction zones to an aggregate tsunami amplitude distribution specific to the tide gauge station. The tsunami amplitude distribution is adequately reconstructed for four tide gauge stations using globally constant seismic moment distribution parameters established in previous studies. In comparisons to empirical tsunami amplitude distributions from maximum likelihood estimation, the reconstructed distributions consistently exhibit higher corner amplitude values, implying that in most cases, the empirical catalogs are too short to include the largest amplitudes. Because the reconstructed distribution is based on a catalog of earthquakes that is much larger than the tsunami catalog, it is less susceptible to the effects of record-breaking events and more indicative of the actual distribution of tsunami amplitudes.
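Step (1) can be made concrete with the tapered Pareto survival function. In a common parameterization (symbols here are illustrative, not necessarily the authors'), with threshold a0, power-law exponent beta, and corner amplitude a_c, the survival function is S(a) = (a0/a)^beta * exp((a0 - a)/a_c); step (4) then mixes such per-source distributions into an aggregate:

```python
import math

def tapered_pareto_sf(a, a0, beta, a_corner):
    """Survival function P(A > a) of a tapered Pareto distribution:
    power-law decay with exponent beta, tapered by an exponential
    roll-off at the corner amplitude a_corner."""
    if a < a0:
        return 1.0
    return (a0 / a) ** beta * math.exp((a0 - a) / a_corner)

def mixed_sf(a, sources, weights):
    """Step (4): mix per-source survival functions (each a (a0, beta, a_corner)
    tuple) into an aggregate station-specific distribution."""
    return sum(w * tapered_pareto_sf(a, *s) for s, w in zip(sources, weights)) / sum(weights)
```

The corner amplitude controls where the distribution departs from a pure power law, which is why the reconstructed distributions can exhibit larger corner amplitudes than short empirical catalogs reveal.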
Summer Proceedings 2016: The Center for Computing Research at Sandia National Laboratories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carleton, James Brian; Parks, Michael L.
Solving sparse linear systems arising from the discretization of elliptic partial differential equations (PDEs) is an important building block in many engineering applications. Sparse direct solvers can solve general linear systems but are usually slower and use much more memory than effective iterative solvers. To overcome these two disadvantages, a hierarchical solver (LoRaSp) based on H2-matrices was introduced in [22]. Here, we have developed a parallel version of the LoRaSp algorithm to solve large sparse matrices on distributed memory machines. On a single processor, the factorization time of our parallel solver scales almost linearly with the problem size for three-dimensional problems, as opposed to the quadratic scaling of many existing sparse direct solvers. Moreover, our solver leads to an almost constant number of iterations when used as a preconditioner for Poisson problems. On more than one processor, our algorithm shows significant speedups over sequential runs. With this parallel algorithm, we are able to solve large problems much faster than many existing packages, as demonstrated by the numerical experiments.
Optical fringe-reflection deflectometry with sparse representation
NASA Astrophysics Data System (ADS)
Xiao, Yong-Liang; Li, Sikun; Zhang, Qican; Zhong, Jianxin; Su, Xianyu; You, Zhisheng
2018-05-01
Optical fringe-reflection deflectometry is an attractive scratch-detection technique for specular surfaces owing to its unparalleled local sensitivity. Full-field surface topography is obtained from a measured normal field using gradient integration. In practice, however, the measured gradient field may not be ideal for deflectometry reconstruction. Both the non-integrability condition and the various image-noise distributions present in the indirectly measured gradient field may lead to ambiguity about scratches on specular surfaces. In order to reduce misjudgment of scratches, sparse representation is introduced into the Southwell curl equation for deflectometry. The curl can be represented as a linear combination over a given redundant dictionary, and the sparsest solution is used for gradient refinement. The non-integrability condition and noise perturbation can be overcome with this sparse-representation-based gradient refinement. Numerical simulations demonstrate that the accuracy of scratch judgment can be enhanced with sparse representation compared to standard least-squares integration. Preliminary experiments with practical measured deflectometric data verify the validity of the algorithm.
Electric power scheduling: A distributed problem-solving approach
NASA Technical Reports Server (NTRS)
Mellor, Pamela A.; Dolce, James L.; Krupp, Joseph C.
1990-01-01
Space Station Freedom's power system, along with the spacecraft's other subsystems, needs to carefully conserve its resources and yet strive to maximize overall Station productivity. Due to Freedom's distributed design, each subsystem must work cooperatively within the Station community. There is a need for a scheduling tool which will preserve this distributed structure, allow each subsystem the latitude to satisfy its own constraints, and preserve individual value systems while maintaining Station-wide integrity. The value-driven free-market economic model is such a tool.
Location of Road Emergency Stations in Fars Province, Using Spatial Multi-Criteria Decision Making.
Goli, Ali; Ansarizade, Najmeh; Barati, Omid; Kavosi, Zahra
2015-01-01
To locate road emergency stations in Fars province using spatial multi-criteria decision making (Delphi method). In this study, the criteria affecting the location of road emergency stations were identified through the Delphi method and their importance was determined using the Analytic Hierarchy Process (AHP). Given the importance of the criteria, and by using a Geographical Information System (GIS), the conformity of the existing stations with the criteria and their distribution pattern were explored, and appropriate areas for creating new emergency stations were determined. In order to investigate the spatial distribution pattern of the stations, Moran's Index was used. Accidents (0.318), placement position (0.235), time (0.198), roads (0.160), and population (0.079) were identified as the main criteria for locating road emergency stations. The findings showed that the distribution of the existing stations was clustered (Moran's I=0.3). Three priorities were introduced for establishing new stations. Areas including Abade, the north of Eghlid and Khoram bid, and small parts of Shiraz, Farashband, Bavanat, and Kazeroon were suggested as the first priority. GIS is a useful and applicable tool for investigating spatial distribution and geographical accessibility of settings that provide health care, including emergency stations.
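The Moran's Index used for the spatial pattern analysis can be sketched directly from its definition, I = (n/W) * sum_ij w_ij (x_i - xbar)(x_j - xbar) / sum_i (x_i - xbar)^2, with binary adjacency weights (a simplified sketch, not the GIS implementation used in the study):

```python
def morans_i(values, weights):
    """Moran's I for values x_i with spatial weights w_ij (w_ii = 0).
    Positive I indicates spatial clustering; values near 0 indicate
    spatial randomness; negative I indicates dispersion."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    W = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * dev[i] * dev[j] for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / W) * (num / den)
```

On a simple chain of stations where like values sit next to each other, I comes out clearly positive, matching the clustered pattern (I=0.3) the study reports.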
NASA Astrophysics Data System (ADS)
Bogiatzis, P.; Ishii, M.; Davis, T. A.
2016-12-01
Seismic tomography inverse problems are among the largest high-dimensional parameter estimation tasks in Earth science. We show how combinatorics and graph theory can be used to analyze the structure of such problems, and to effectively decompose them into smaller ones that can be solved efficiently by means of the least squares method. In combination with recent high performance direct sparse algorithms, this reduction in dimensionality allows for an efficient computation of the model resolution and covariance matrices using limited resources. Furthermore, we show that a new sparse singular value decomposition method can be used to obtain the complete spectrum of the singular values. This procedure provides the means for more objective regularization and further dimensionality reduction of the problem. We apply this methodology to a moderate size, non-linear seismic tomography problem to image the structure of the crust and the upper mantle beneath Japan using local deep earthquakes recorded by the High Sensitivity Seismograph Network stations.
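The decomposition idea can be illustrated in its simplest form: parameters that never co-occur in any observation row of the sparse design matrix belong to different connected components, and each component yields an independent least-squares subproblem. A hypothetical union-find sketch over the nonzero pattern:

```python
def find(parent, x):
    """Find the component root of x with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def components(n_params, rows):
    """Group parameters into independent subproblems: parameters appearing
    in the same observation (row of the sparse design matrix) are merged."""
    parent = list(range(n_params))
    for row in rows:
        for j in row[1:]:
            ra, rb = find(parent, row[0]), find(parent, j)
            parent[ra] = rb
    groups = {}
    for p in range(n_params):
        groups.setdefault(find(parent, p), []).append(p)
    return sorted(groups.values())
```

Real tomography matrices are rarely fully separable, which is why the paper combines this graph-based structural analysis with sparse direct and sparse SVD methods rather than relying on it alone.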
Comparison between sparsely distributed memory and Hopfield-type neural network models
NASA Technical Reports Server (NTRS)
Keeler, James D.
1986-01-01
The Sparsely Distributed Memory (SDM) model (Kanerva, 1984) is compared to Hopfield-type neural-network models. A mathematical framework for comparing the two is developed, and the capacity of each model is investigated. The capacity of the SDM can be increased independently of the dimension of the stored vectors, whereas the Hopfield capacity is limited to a fraction of this dimension. However, the total number of stored bits per matrix element is the same in the two models, as well as for extended models with higher-order interactions. The models are also compared in their ability to store sequences of patterns. The SDM is extended to include time delays so that contextual information can be used to recover sequences. Finally, it is shown how a generalization of the SDM allows storage of correlated input pattern vectors.
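A toy version of the SDM read/write cycle makes the comparison concrete: hard locations with binary addresses, activation of every location within a Hamming radius of the cue, counter accumulation on write, and a majority rule on read. The parameters below are deliberately tiny and illustrative, not the capacity-optimal choices the analysis concerns.

```python
from itertools import product

N = 4                                    # address/data dimension (tiny, for illustration)
RADIUS = 1                               # Hamming activation radius
locations = [list(bits) for bits in product([0, 1], repeat=N)]   # hard-location addresses
counters = [[0] * N for _ in locations]                          # one counter vector per location

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def write(addr, data):
    """Add +1/-1 per bit to the counters of every location within RADIUS of addr."""
    for loc, ctr in zip(locations, counters):
        if hamming(loc, addr) <= RADIUS:
            for i, bit in enumerate(data):
                ctr[i] += 1 if bit else -1

def read(addr):
    """Sum counters of activated locations and threshold at zero (majority rule)."""
    sums = [0] * N
    for loc, ctr in zip(locations, counters):
        if hamming(loc, addr) <= RADIUS:
            for i in range(N):
                sums[i] += ctr[i]
    return [1 if s > 0 else 0 for s in sums]
```

Because the activated sets of the original address and a nearby noisy cue overlap, the stored pattern is recovered even from a cue one bit away, which is the partial-match behavior the comparison with Hopfield networks turns on.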
Sparse distributed memory: understanding the speed and robustness of expert memory
Brogliato, Marcelo S.; Chada, Daniel M.; Linhares, Alexandre
2014-01-01
How can experts, sometimes in exacting detail, almost immediately and very precisely recall memory items from a vast repertoire? The problem in which we will be interested concerns models of theoretical neuroscience that could explain the speed and robustness of an expert's recollection. The approach is based on Sparse Distributed Memory, which has been shown to be plausible, both in a neuroscientific and in a psychological manner, in a number of ways. A crucial characteristic concerns the limits of human recollection, the “tip-of-the-tongue” memory event—which is found at a non-linearity in the model. We expand the theoretical framework, deriving an optimization formula to solve this non-linearity. Numerical results demonstrate how the higher frequency of rehearsal, through work or study, immediately increases the robustness and speed associated with expert memory. PMID:24808842
Lindsey, Delwin T.; Brainard, David H.; Apicella, Coren L.
2016-01-01
In our empirical and theoretical study of color naming among the Hadza, a Tanzanian hunter-gatherer group, we show that Hadza color naming is sparse (the color appearance of many stimulus tiles was not named), diverse (there was little consensus in the terms for the color appearance of most tiles), and distributed (the universal color categories of world languages are revealed in nascent form within the Hadza language community, when we analyze the patterns of how individual Hadza deploy color terms). Using our Hadza data set, Witzel shows an association between two measures of color naming performance and the chroma of the stimuli. His prediction of which colored tiles will be named with what level of consensus, while interesting, does not alter the validity of our conclusions. PMID:28781734
NASA Technical Reports Server (NTRS)
Denning, Peter J.
1989-01-01
Sparse distributed memory was proposed by Pentti Kanerva as a realizable architecture that could store large patterns and retrieve them based on partial matches with patterns representing current sensory inputs. This memory exhibits behaviors, both in theory and in experiment, that resemble those previously unapproached by machines - e.g., rapid recognition of faces or odors, discovery of new connections between seemingly unrelated ideas, continuation of a sequence of events when given a cue from the middle, knowing that one doesn't know, or getting stuck with an answer on the tip of one's tongue. These behaviors are now within reach of machines that can be incorporated into the computing systems of robots capable of seeing, talking, and manipulating. Kanerva's theory is a break with the Western rationalistic tradition, allowing a new interpretation of learning and cognition that respects biology and the mysteries of individual human beings.
NASA Astrophysics Data System (ADS)
Zhang, Y.; Chen, W.; Li, J.
2014-07-01
Climate change may alter the spatial distribution, composition, structure and functions of plant communities. Transitional zones between biomes, or ecotones, are particularly sensitive to climate change. Ecotones are usually heterogeneous, with sparse trees. The dynamics of ecotones are mainly determined by the growth and competition of individual plants in the communities. It is therefore necessary to calculate the solar radiation absorbed by individual plants in order to understand and predict their responses to climate change. In this study, we developed an individual plant radiation model, IPR (version 1.0), to calculate the solar radiation absorbed by individual plants in sparse heterogeneous woody plant communities. The model is based on geometrical optical relationships, assuming that the crowns of woody plants are rectangular boxes with uniform leaf area density. The model calculates the fractions of sunlit and shaded leaf classes and the solar radiation absorbed by each class, including direct radiation from the sun, diffuse radiation from the sky, and scattered radiation from the plant community. The solar radiation received on the ground is also calculated. We tested the model by comparing its results with those obtained for random distributions of plants; the tests show that the model results are very close to the averages over the random distributions. The model is computationally efficient and can be included in vegetation models to simulate long-term transient responses of plant communities to climate change. The code and a user's manual are provided as a Supplement to the paper.
NASA Astrophysics Data System (ADS)
Sodoudi, Sahar; Schäfer, Kerstin; Grawe, David; Petrik, Ronny; Heinke Schlünzen, K.
2014-05-01
The world's population is projected to increase in the next decades especially in urban areas. Additionally, the living conditions are affected largely by the local urban climate. The urban climate is a complex local system which might change differently than the regional climate. Studying the spatial distribution of air temperature and urban heat island intensity is one of the major concerns in the climate change scenarios. Due to the expected higher frequency of heat waves in the future and the related heat stress, high resolution distribution of air temperature is an important key for urban planning and development. In this study the non-hydrostatic Mesoscale Transport and Fluid Model (METRAS) developed at the University of Hamburg is used to simulate the air temperature for the urban area of Berlin. The forcing data have been derived from the ECMWF reanalysis data. We have used three nested domains (resolution of 4 km, 1 km, 200 m) to simulate the temperature in Berlin. Evaluation of these mesoscale model results is challenging for urban areas, due to the sparse and heterogeneous distribution of meteorological stations and the heterogeneous land cover in urban areas. The Meteorological Institute of the Free University of Berlin organized six measurement campaigns in 2012. Measurements were taken at 31 different routes through Berlin using mobile measurement systems. In comparison with data from permanent weather stations the mobile measurements show a general overestimation of temperature and underestimation of relative humidity values. This may be the result of the different land cover types and places, where the mobile measurements and the stationary measurements were taken. The highly resolved (200 m) simulated air temperature from METRAS has been verified for three different selected summer days in 2012 with different pressure patterns over Berlin. For the model evaluation, the data from the measuring campaign and 34 permanent stations have been used. 
The results show that METRAS overestimated the cloud water and rain water content on the first two selected days. The air temperature on the first two days was underestimated by the model due to the reduced incoming radiation, and the strength of the urban heat island was not reproduced. The mean absolute error is higher during the daytime, especially in the city center. The last selected day is a sunny day with light wind from the northwest. On this day the diurnal temperature variation is well reproduced by the model, although METRAS predicts short showers over several small areas during the afternoon. The showers do not lead to a temperature decrease over the whole city. The mean absolute error is much smaller than on the other days. The temperature peak and the urban heat island agree well with observations. The mean absolute error is smaller in the city center and larger over the green areas. The spatial distribution of simulated temperature is in good agreement with the measurements.
Signal-Preserving Erratic Noise Attenuation via Iterative Robust Sparsity-Promoting Filter
Zhao, Qiang; Du, Qizhen; Gong, Xufei; ...
2018-04-06
Thresholding filters operating in a sparse domain are highly effective in removing Gaussian random noise under the Gaussian distribution assumption. Erratic noise, which designates non-Gaussian noise consisting of large isolated events with known or unknown distribution, also needs to be taken into account explicitly. However, conventional sparse-domain thresholding filters based on the least-squares (LS) criterion are severely sensitive to data contaminated with high-amplitude, non-Gaussian noise, i.e., the erratic noise, which makes suppression of this type of noise extremely challenging. In this paper, we present a robust sparsity-promoting denoising model in which the LS criterion is replaced by the Huber criterion to weaken the effects of erratic noise. Random and erratic noise are distinguished by a data-adaptive parameter: random noise is described by its mean square, while erratic noise is downweighted through a damped weight. Unlike conventional sparse-domain thresholding filters, defining the misfit between noisy data and recovered signal via the Huber criterion results in a nonlinear optimization problem. With the help of synthetic seismic data, an iterative robust sparsity-promoting filter is proposed to transform the nonlinear optimization problem into a linear LS problem through an iterative procedure. The main advantage of this transformation is that the nonlinear denoising filter can be solved by conventional LS solvers. Tests with several data sets demonstrate that the proposed denoising filter successfully attenuates erratic noise without damaging the useful signal, compared with conventional denoising approaches based on the LS criterion.
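The Huber idea can be sketched in its simplest setting, a robust location estimate computed by iteratively reweighted least squares: residuals within a threshold delta keep full (least-squares) weight, while larger residuals receive the damped weight delta/|r|. This is an illustrative sketch of the criterion only, not the authors' seismic denoising filter.

```python
def huber_irls(data, delta=0.5, iters=100):
    """Robust location estimate: minimize the sum of Huber losses by IRLS.
    Residuals within delta get weight 1 (least-squares behaviour);
    larger residuals get the damped weight delta/|r| (erratic-noise behaviour)."""
    m = sum(data) / len(data)            # start from the plain LS estimate (the mean)
    for _ in range(iters):
        w = [1.0 if abs(x - m) <= delta else delta / abs(x - m) for x in data]
        m = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)
    return m

noisy = [1.0, 1.1, 0.9, 100.0]           # three consistent samples plus one erratic event
```

The plain mean of `noisy` is 25.75, dragged far off by the single erratic sample; the Huber estimate stays near the consistent samples, which is exactly the robustness the filter exploits.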
Broday, David M
2017-10-02
The evaluation of the effects of air pollution on public health and human well-being requires reliable data. Standard air quality monitoring stations provide accurate measurements of airborne pollutant levels but, due to their sparse distribution, cannot accurately capture the spatial variability of air pollutant concentrations within cities. Dedicated in-depth field campaigns have dense spatial coverage but are held for relatively short time periods, so their representativeness is limited; moreover, the oftentimes integrated measurements represent time-averaged records. Recent advances in communication and sensor technologies enable the deployment of dense grids of Wireless Distributed Environmental Sensor Networks for air quality monitoring, yet their capability to capture urban-scale spatiotemporal pollutant patterns has not been thoroughly examined to date. Here, we summarize our studies on the practicalities of using data streams from sensor nodes for air quality measurement and the methods required to tune the results to different stakeholders and applications. We summarize results from eight cities across Europe, five sensor technologies (three stationary, one of which was also tested while moving, and two personal sensor platforms), and eight ambient pollutants. Overall, few sensors showed exceptional and consistent performance of the kind that can shed light on the fine spatiotemporal urban variability of pollutant concentrations. Stationary sensor nodes were more reliable than personal nodes. In general, the sensor measurements tend to suffer from interference by various environmental factors and require frequent calibration. This calls for the development of suitable field calibration procedures, and several such in situ field calibrations are presented.
PMID:28974042
2015-06-01
[Fragmentary record: compressive-sensing reconstruction of a radiation-frequency pattern from uniform versus nonuniform randomly distributed samples, including the minimum number of randomly distributed measurements needed; Fig. 3 showed the nonuniform compressive-sensing reconstruction of the radiation pattern.]
A manual for PARTI runtime primitives
NASA Technical Reports Server (NTRS)
Berryman, Harry; Saltz, Joel
1990-01-01
Primitives are presented that are designed to help users efficiently program irregular problems (e.g., unstructured mesh sweeps, sparse matrix codes, adaptive mesh partial differential equation solvers) on distributed memory machines. These primitives are also designed for use in compilers for distributed memory multiprocessors. Communication patterns are captured at runtime, and the appropriate send and receive messages are automatically generated.
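The runtime capture of communication patterns follows the inspector/executor idiom: an inspector pass scans the irregular index list once to build a communication schedule (which off-process elements must be fetched from whom), which the executor then reuses on every sweep. A hypothetical pure-Python sketch, with processes and messages simulated by dictionaries:

```python
def inspector(my_rank, indices, owner):
    """Inspector: scan the irregular access pattern once and record,
    per remote process, which global indices must be fetched."""
    schedule = {}
    for g in indices:
        r = owner(g)
        if r != my_rank:
            schedule.setdefault(r, set()).add(g)
    return {r: sorted(s) for r, s in schedule.items()}

def executor(my_rank, indices, owner, local, schedule, remote_data):
    """Executor: gather values, taking locally owned entries directly and
    remote entries from the data 'received' according to the schedule."""
    fetched = {g: remote_data[r][g] for r, gs in schedule.items() for g in gs}
    return [local[g] if owner(g) == my_rank else fetched[g] for g in indices]
```

Building the schedule once and reusing it across sweeps is what makes the pattern pay off for repeated irregular accesses such as mesh sweeps.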
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero, Vicente; Bonney, Matthew; Schroeder, Benjamin
When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: the central 95% of response, and the 10^-4 probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depend on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large database and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.
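The under-estimation risk with sparse samples is easy to demonstrate: for any continuous distribution, the min-max interval of n samples contains a fresh draw with probability (n-1)/(n+1), far below a 95% target for small n. A quick Monte Carlo check (fixed seed; function name illustrative):

```python
import random

def minmax_coverage(n, trials=20000, seed=1):
    """Fraction of trials in which a fresh draw falls inside the min-max
    interval of n prior samples; theory gives (n - 1)/(n + 1) for any
    continuous distribution, so small n badly under-covers a 95% target."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = [rng.gauss(0.0, 1.0) for _ in range(n)]
        x = rng.gauss(0.0, 1.0)
        hits += min(s) <= x <= max(s)
    return hits / trials
```

With n = 5 the empirical coverage sits near 4/6, illustrating why naive intervals from sparse samples under-estimate response spread and motivating the deliberately conservative bounding strategies the report compares.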
A Distributed Hydrological Model Forced by DMIP2 Data and the WRF Mesoscale Model
NASA Astrophysics Data System (ADS)
Wayand, N. E.
2010-12-01
Forecasted warming over the next century will drastically reduce the seasonal snowpack that provides 40% of the world’s drinking water. With increased climate warming, droughts may occur more frequently, which will increase society’s reliance on this same summer snowpack as a water supply. This study aims to reduce driving-data errors that lead to poor simulations of snow accumulation and ablation, and of streamflow. Results from the Distributed Model Intercomparison Project Phase 2 (DMIP2) using the Distributed Hydrology Soil and Vegetation Model (DHSVM) highlighted the critical need for the accurate driving data that distributed models require. Currently, the meteorological driving data for distributed hydrological models commonly rely on interpolation between a network of observational stations, as well as on historical monthly means. This method is limited by two significant issues: snowpack is stored at high elevations, where interpolation techniques perform poorly due to sparse observations, and historical climatological means may be unsuitable in a changing climate. Mesoscale models may provide a physically based approach to supplement surface observations over high-elevation terrain. Initial results have shown that while temperature lapse rates are well represented by multiple mesoscale models, significant precipitation biases are dependent on the particular model microphysics. We evaluate multiple methods of downscaling surface variables from the Weather Research and Forecasting (WRF) model that are then used to drive DHSVM over the North Fork American River basin in California. A comparison of each downscaled driving data set and paired DHSVM results against observations will determine how much improvement in simulated streamflow and snowpack is gained at the expense of each additional degree of downscaling. Our results from DMIP2 will be used as a benchmark for the best available DHSVM run using all available observational data.
The findings presented here will help inform watershed managers of the requirements, advantages, and limitations of using a distributed hydrological model coupled with various forms of forcing data over mountainous terrain.
NASA Astrophysics Data System (ADS)
Chen, Ming; Guo, Jiming; Li, Zhicai; Zhang, Peng; Wu, Junli; Song, Weiwei
2017-04-01
BDS precise orbit determination is a key element of BDS applications, but the inadequate number of ground stations and the poor distribution of the network are the main reasons for the low accuracy of BDS precise orbit determination. In this paper, BDS precise orbit determination results are obtained by using the IGS MGEX stations and the Chinese national reference stations. The accuracy of orbit determination for GEO, IGSO and MEO satellites is 10.3 cm, 2.8 cm and 3.2 cm, and the radial accuracy is 1.6 cm, 1.9 cm and 1.5 cm. The influence of the ground reference station distribution on BDS precise orbit determination is studied. The results show that the Chinese national reference stations contribute significantly to BDS orbit determination; the overlap precision of the GEO/IGSO/MEO satellites was improved by 15.5%, 57.5% and 5.3% respectively after adding the Chinese stations. Finally, the results are verified by ODOP (orbit distribution of precision) and SLR. Key words: BDS precise orbit determination; accuracy assessment; Chinese national reference stations; reference station distribution; orbit distribution of precision
2-D Path Corrections for Local and Regional Coda Waves: A Test of Transportability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayeda, K M; Malagnini, L; Phillips, W S
2005-07-13
Reliable estimates of the seismic source spectrum are necessary for accurate magnitude, yield, and energy estimation. In particular, how seismic radiated energy scales with increasing earthquake size has been the focus of recent debate within the community and has direct implications on earthquake source physics studies as well as hazard mitigation. The 1-D coda methodology of Mayeda et al. [2003] has provided the lowest variance estimate of the source spectrum when compared against traditional approaches that use direct S-waves, thus making it ideal for networks that have sparse station distribution. The 1-D coda methodology has been mostly confined to regions of approximately uniform complexity. For larger, more geophysically complicated regions, 2-D path corrections may be required. We will compare performance of 1-D versus 2-D path corrections in a variety of regions. First, the complicated tectonics of the northern California region coupled with high quality broadband seismic data provides for an ideal "apples-to-apples" test of 1-D and 2-D path assumptions on direct waves and their coda. Next, we will compare results for the Italian Alps using high frequency data from the University of Genoa. For Northern California, we used the same station and event distribution and compared 1-D and 2-D path corrections and observed the following results: (1) 1-D coda results reduced the amplitude variance relative to direct S-waves by roughly a factor of 8 (800%); (2) Applying a 2-D correction to the coda resulted in up to 40% variance reduction from the 1-D coda results; (3) 2-D direct S-wave results, though better than 1-D direct waves, were significantly worse than the 1-D coda.
We found that coda-based moment-rate source spectra derived from the 2-D approach were essentially identical to those from the 1-D approach for frequencies less than ~0.7 Hz; however, for the high frequencies (0.7 ≤ f ≤ 8.0 Hz), the 2-D approach resulted in inter-station scatter that was generally 10-30% smaller. For complex regions where data are plentiful, a 2-D approach can significantly improve upon the simple 1-D assumption. In regions where only a 1-D coda correction is available, it is still preferable over 2-D direct wave-based measures.
A global satellite assisted precipitation climatology
Funk, Christopher C.; Verdin, Andrew P.; Michaelsen, Joel C.; Pedreros, Diego; Husak, Gregory J.; Peterson, P.
2015-01-01
Accurate representations of mean climate conditions, especially in areas of complex terrain, are an important part of environmental monitoring systems. As high-resolution satellite monitoring information accumulates with the passage of time, it can be increasingly useful in efforts to better characterize the earth's mean climatology. Current state-of-the-science products rely on complex and sometimes unreliable relationships between elevation and station-based precipitation records, which can result in poor performance in food and water insecure regions with sparse observation networks. These vulnerable areas (like Ethiopia, Afghanistan, or Haiti) are often the critical regions for humanitarian drought monitoring. Here, we show that long period of record geo-synchronous and polar-orbiting satellite observations provide a unique new resource for producing high resolution (0.05°) global precipitation climatologies that perform reasonably well in data sparse regions. Traditionally, global climatologies have been produced by combining station observations and physiographic predictors like latitude, longitude, elevation, and slope. While such approaches can work well, especially in areas with reasonably dense observation networks, the fundamental relationship between physiographic variables and the target climate variables can often be indirect and spatially complex. Infrared and microwave satellite observations, on the other hand, directly monitor the earth's energy emissions. These emissions often correspond physically with the location and intensity of precipitation. We show that these relationships provide a good basis for building global climatologies. We also introduce a new geospatial modeling approach based on moving window regressions and inverse distance weighting interpolation. This approach combines satellite fields, gridded physiographic indicators, and in situ climate normals. 
The resulting global 0.05° monthly precipitation climatology, the Climate Hazards Group's Precipitation Climatology version 1 (CHPclim v.1.0, http://dx.doi.org/10.15780/G2159X), is shown to compare favorably with similar global climatology products, especially in areas with complex terrain and low station densities.
A global satellite-assisted precipitation climatology
NASA Astrophysics Data System (ADS)
Funk, C.; Verdin, A.; Michaelsen, J.; Peterson, P.; Pedreros, D.; Husak, G.
2015-10-01
Accurate representations of mean climate conditions, especially in areas of complex terrain, are an important part of environmental monitoring systems. As high-resolution satellite monitoring information accumulates with the passage of time, it can be increasingly useful in efforts to better characterize the earth's mean climatology. Current state-of-the-science products rely on complex and sometimes unreliable relationships between elevation and station-based precipitation records, which can result in poor performance in food and water insecure regions with sparse observation networks. These vulnerable areas (like Ethiopia, Afghanistan, or Haiti) are often the critical regions for humanitarian drought monitoring. Here, we show that long period of record geo-synchronous and polar-orbiting satellite observations provide a unique new resource for producing high-resolution (0.05°) global precipitation climatologies that perform reasonably well in data-sparse regions. Traditionally, global climatologies have been produced by combining station observations and physiographic predictors like latitude, longitude, elevation, and slope. While such approaches can work well, especially in areas with reasonably dense observation networks, the fundamental relationship between physiographic variables and the target climate variables can often be indirect and spatially complex. Infrared and microwave satellite observations, on the other hand, directly monitor the earth's energy emissions. These emissions often correspond physically with the location and intensity of precipitation. We show that these relationships provide a good basis for building global climatologies. We also introduce a new geospatial modeling approach based on moving window regressions and inverse distance weighting interpolation. This approach combines satellite fields, gridded physiographic indicators, and in situ climate normals. 
The resulting global 0.05° monthly precipitation climatology, the Climate Hazards Group's Precipitation Climatology version 1 (CHPclim v.1.0, doi:10.15780/G2159X), is shown to compare favorably with similar global climatology products, especially in areas with complex terrain and low station densities.
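The geospatial modeling approach described above — a local (moving-window) regression of station climate normals on a satellite field, followed by inverse-distance-weighted interpolation of the station residuals — can be sketched in miniature. The code below is a hypothetical 1-D toy with a single satellite predictor, not the CHPclim implementation; the function name, window size, and power parameter are illustrative assumptions.

```python
import numpy as np

def mwr_idw(grid_x, sat_grid, st_x, st_norm, sat_st, window=5.0, power=2.0):
    """At each grid cell: local regression of station normals on the
    satellite field within a window, then an inverse-distance-weighted
    correction using the station residuals (a simplified 1-D sketch)."""
    est = np.empty_like(grid_x, dtype=float)
    for g, (x, s) in enumerate(zip(grid_x, sat_grid)):
        w = np.abs(st_x - x) <= window                 # stations in the window
        a, b = np.polyfit(sat_st[w], st_norm[w], 1)    # local linear fit
        resid = st_norm[w] - (a * sat_st[w] + b)       # station residuals
        d = np.abs(st_x[w] - x) + 1e-6                 # avoid division by zero
        est[g] = a * s + b + np.sum(resid / d**power) / np.sum(1 / d**power)
    return est

# Toy data: the "true" climatology is an exact linear function of the
# satellite field, so the estimate should reproduce it.
grid_x = np.linspace(0, 10, 21)
sat_grid = np.sin(grid_x) + 1.5
st_x = np.linspace(0, 10, 15)
sat_st = np.sin(st_x) + 1.5
st_norm = 2.0 * sat_st + 0.5
est = mwr_idw(grid_x, sat_grid, st_x, st_norm, sat_st)
```

In a real product the regression would use several predictors (satellite fields plus gridded physiographic indicators) and two spatial dimensions, but the blend of local fit plus residual interpolation is the same.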
Sparse matrix methods research using the CSM testbed software system
NASA Technical Reports Server (NTRS)
Chu, Eleanor; George, J. Alan
1989-01-01
Research is described on sparse matrix techniques for the Computational Structural Mechanics (CSM) Testbed. The primary objective was to compare the performance of state-of-the-art techniques for solving sparse systems with those that are currently available in the CSM Testbed. Thus, one of the first tasks was to become familiar with the structure of the testbed, and to install some or all of the SPARSPAK package in the testbed. A suite of subroutines to extract from the data base the relevant structural and numerical information about the matrix equations was written, and all the demonstration problems distributed with the testbed were successfully solved. These codes were documented, and performance studies comparing the SPARSPAK technology to the methods currently in the testbed were completed. In addition, some preliminary studies were done comparing some recently developed out-of-core techniques with the performance of the testbed processor INV.
Distribution of model uncertainty across multiple data streams
NASA Astrophysics Data System (ADS)
Wutzler, Thomas
2014-05-01
When confronting biogeochemical models with a diversity of observational data streams, we are faced with the problem of weighting the data streams. Without weighting, or without multiple blocked cost functions, model uncertainty is allocated to the sparse data streams, and possible bias in processes that are strongly constrained is exported to processes that are constrained only by sparse data streams. In this study we propose an approach that aims at making model uncertainty a factor of observation uncertainty that is constant over all data streams. Further, we propose an implementation based on Markov chain Monte Carlo sampling combined with simulated annealing that is able to determine this variance factor. The method is exemplified with very simple models and artificial data, and with an inversion of the DALEC ecosystem carbon model against multiple observations of Howland forest. We argue that the presented approach can help mitigate, and perhaps resolve, the problem of bias export to sparse data streams.
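The variance-factor idea — inflating every stream's stated observation variance by one common factor so that model-structural error is shared across streams — can be sketched for Gaussian errors. This is an illustrative simplification: for a Gaussian likelihood the maximum-likelihood factor has a closed form, which stands in here for the MCMC/simulated-annealing machinery the abstract describes; all names and numbers are assumptions.

```python
import numpy as np

def neg_log_lik(f, residuals, sigmas):
    """Gaussian negative log-likelihood with each stream's observation
    variance sigma_s^2 inflated by a single model-uncertainty factor f."""
    nll = 0.0
    for r, s in zip(residuals, sigmas):
        var = f * s**2
        nll += 0.5 * np.sum(r**2 / var + np.log(2 * np.pi * var))
    return nll

# Two data streams of very different size and stated observation error;
# in both, the actual misfit is twice as large as the stated error.
rng = np.random.default_rng(5)
residuals = [rng.normal(0, 2.0, 1000), rng.normal(0, 0.6, 50)]
sigmas = [1.0, 0.3]

# For Gaussian errors, the ML variance factor is the mean standardized
# squared residual pooled over all streams (should be near 4 here).
n_total = sum(len(r) for r in residuals)
f_hat = sum(np.sum((r / s)**2) for r, s in zip(residuals, sigmas)) / n_total
```

Because the factor is shared, the large stream cannot absorb all the model error and leave the sparse stream's constraint unrealistically tight.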
47 CFR 73.626 - DTV distributed transmission systems.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 4 2010-10-01 2010-10-01 false DTV distributed transmission systems. 73.626... RADIO BROADCAST SERVICES Television Broadcast Stations § 73.626 DTV distributed transmission systems. (a... distributed transmission system (DTS). Except as expressly provided in this section, DTV stations operating a...
Inference of the sparse kinetic Ising model using the decimation method
NASA Astrophysics Data System (ADS)
Decelle, Aurélien; Zhang, Pan
2015-05-01
In this paper we study the inference of the kinetic Ising model on sparse graphs by the decimation method. The decimation method, which was first proposed in Decelle and Ricci-Tersenghi [Phys. Rev. Lett. 112, 070603 (2014), 10.1103/PhysRevLett.112.070603] for the static inverse Ising problem, tries to recover the topology of the inferred system by setting the weakest couplings to zero iteratively. During the decimation process the likelihood function is maximized over the remaining couplings. Unlike the ℓ1-optimization-based methods, the decimation method does not use the Laplace distribution as a heuristic choice of prior to select a sparse solution. In our case, the whole process can be done automatically without fixing any parameters by hand. We show that in the dynamical inference problem, where the task is to reconstruct the couplings of an Ising model given the data, the decimation process can be incorporated naturally into a maximum-likelihood optimization algorithm, as opposed to the static case where the pseudolikelihood method needs to be adopted. We also use extensive numerical studies to validate the accuracy of our methods in dynamical inference problems. Our results illustrate that, on various topologies and with different distributions of couplings, the decimation method outperforms the widely used ℓ1-optimization-based methods.
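Under Glauber dynamics the per-spin maximum-likelihood problem is a logistic-regression-like fit, so the decimation loop — fit, zero the weakest coupling, refit — can be sketched compactly. The code below is a simplified illustration on synthetic data, not the authors' implementation; the learning rate, iteration counts, and stopping criterion are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_row(S_prev, s_next, mask, iters=300, lr=0.1):
    """Maximum-likelihood couplings for one spin under Glauber dynamics,
    by gradient ascent, restricted to couplings still allowed by `mask`."""
    J = np.zeros(S_prev.shape[1])
    for _ in range(iters):
        p = np.tanh(S_prev @ J)                    # model <s_i(t+1)>
        J += lr * mask * (S_prev.T @ (s_next - p)) / len(s_next)
    return J

def decimate_row(S_prev, s_next, n_keep):
    """Iteratively zero the weakest remaining coupling and refit."""
    mask = np.ones(S_prev.shape[1], dtype=bool)
    J = fit_row(S_prev, s_next, mask)
    while mask.sum() > n_keep:
        weakest = np.argmin(np.where(mask, np.abs(J), np.inf))
        mask[weakest] = False
        J = fit_row(S_prev, s_next, mask)
    return J

# Synthetic data: spin 0 is driven by spins 1 and 2 only.
T, N = 4000, 5
S = rng.choice([-1, 1], size=(T, N)).astype(float)
h = 1.2 * S[:-1, 1] - 0.8 * S[:-1, 2]
s0_next = np.where(rng.random(T - 1) < 0.5 * (1 + np.tanh(h)), 1.0, -1.0)
J_hat = decimate_row(S[:-1], s0_next, n_keep=2)
```

In the paper the number of retained couplings is chosen automatically from the likelihood curve rather than fixed in advance as `n_keep` is here.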
Shope, William G.
1987-01-01
The US Geological Survey is utilizing a national network of more than 1000 satellite data-collection stations, four satellite-relay direct-readout ground stations, and more than 50 computers linked together in a private telecommunications network to acquire, process, and distribute hydrological data in near real-time. The four Survey offices operating a satellite direct-readout ground station provide near real-time hydrological data to computers located in other Survey offices through the Survey's Distributed Information System. The computerized distribution system permits automated data processing and distribution to be carried out in a timely manner under the control and operation of the Survey office responsible for the data-collection stations and for the dissemination of hydrological information to the water-data users.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eken, T; Mayeda, K; Hofstetter, A
A recently developed coda magnitude methodology was applied to selected broadband stations in Turkey for the purpose of testing the coda method in a large, laterally complex region. As found in other, albeit smaller regions, coda envelope amplitude measurements are significantly less variable than distance-corrected direct wave measurements (i.e., Lg and surface waves) by roughly a factor of 3 to 4. Despite strong lateral crustal heterogeneity in Turkey, we found that the region could be adequately modeled assuming a simple 1-D, radially symmetric path correction for 10 narrow frequency bands ranging between 0.02 and 2.0 Hz. For higher frequencies, however, 2-D path corrections will be necessary and will be the subject of a future study. After calibrating the stations ISP, ISKB, and MALT for local and regional distances, single-station moment-magnitude estimates (Mw) derived from the coda spectra were in excellent agreement with those determined from multi-station waveform modeling inversions of long-period data, exhibiting a data standard deviation of 0.17. Though the calibration was validated using large events, the results of the calibration will extend Mw estimates to significantly smaller events which could not otherwise be waveform modeled due to poor signal-to-noise ratio at long periods and sparse station coverage. The successful application of the method is remarkable considering the significant lateral complexity in Turkey and the simple assumptions used in the coda method.
A manual for PARTI runtime primitives, revision 1
NASA Technical Reports Server (NTRS)
Das, Raja; Saltz, Joel; Berryman, Harry
1991-01-01
Primitives are presented that are designed to help users efficiently program irregular problems (e.g., unstructured mesh sweeps, sparse matrix codes, adaptive mesh partial differential equations solvers) on distributed memory machines. These primitives are also designed for use in compilers for distributed memory multiprocessors. Communications patterns are captured at runtime, and the appropriate send and receive messages are automatically generated.
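The runtime capture of communication patterns described above follows the inspector/executor idea: an "inspector" pass examines the irregular index array and builds a per-processor communication schedule, and an "executor" pass then performs the exchange. The toy below simulates this in a single process; the function names and block distribution are illustrative, not the PARTI API, and a real implementation would exchange messages between processors.

```python
def inspector(indices, owner):
    """Group the global indices a processor must fetch by owning processor,
    fetching each distinct index only once (message aggregation)."""
    schedule = {}
    for i in dict.fromkeys(indices):          # deduplicate, preserving order
        schedule.setdefault(owner(i), []).append(i)
    return schedule

def executor(schedule, local_arrays, block):
    """'Receive' the scheduled off-processor values into a local buffer.
    In a real runtime this is the generated send/receive traffic."""
    fetched = {}
    for proc, idxs in schedule.items():
        for i in idxs:
            fetched[i] = local_arrays[proc][i - proc * block]
    return fetched

block = 4                                     # block size of the distribution
owner = lambda i: i // block                  # block distribution, 12 elements
local_arrays = [list(range(p * block, (p + 1) * block)) for p in range(3)]

# Irregular accesses as might arise from an unstructured mesh sweep:
sched = inspector([9, 2, 5, 9], owner)
vals = executor(sched, local_arrays, block)
```

The payoff is that the schedule, once built, can be reused for every sweep over the same mesh, amortizing the inspection cost.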
Soil carbon distribution in Alaska in relation to soil-forming factors
Kristofer D. Johnson; Jennifer Harden; A. David McGuire; Norman B. Bliss; James G. Bockheim; Mark Clark; Teresa Nettleton-Hollingsworth; M. Torre Jorgenson; Evan S. Kane; Michelle Mack; Johathan ODonnell; Chien-Lu Ping; Edward A.G. Schuur; Merritt R. Turetsky; David W. Valentine
2011-01-01
The direction and magnitude of soil organic carbon (SOC) changes in response to climate change remain unclear and depend on the spatial distribution of SOC across landscapes. Uncertainties regarding the fate of SOC are greater in high-latitude systems where data are sparse and the soils are affected by sub-zero temperatures. To address these issues in Alaska, a first-...
Monitoring Mountain Meteorology without Much Money (Invited)
NASA Astrophysics Data System (ADS)
Lundquist, J. D.
2009-12-01
Mountains are the water towers of the world, storing winter precipitation in the form of snow until summer, when it can be used for agriculture and cities. However, mountain weather is highly variable, and measurements are sparsely distributed. In order to adequately sample snow and climate variables in complex terrain, we need as many measurements as possible. This means that instruments must be inexpensive and relatively simple to deploy. Here, we demonstrate how dime-sized temperature sensors developed for the refrigeration industry can be used to monitor air temperature (using evergreen trees as radiation shields) and snow cover duration (using the diurnal cycle in near-surface soil temperature). Together, these measurements can be used to recreate accumulated snow water equivalent over the prior year. We also demonstrate how buckets of water may be placed under networked acoustic snow depth sensors to provide an index of daily evaporation rates at SNOTEL stations. [Figure captions: (a) Temperature sensor sealed for deployment in the soil. (b) Launching a temperature sensor into a tree. (c) Pulley system to keep the sensor above the snow. (a) Photo of bucket underneath an acoustic snow depth sensor. (b) Water depth in the bucket as calculated by the snow depth sensor and by a pressure sensor inside the bucket.]
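The snow-cover-duration trick mentioned above relies on snow insulating the soil: under snow, the diurnal temperature cycle is damped and the soil sits near 0 °C. A minimal detector can be sketched as follows; the thresholds and array shapes are illustrative assumptions, not values from the study.

```python
import numpy as np

def snow_cover_days(soil_temp_hourly, range_thresh=1.0, mean_thresh=1.0):
    """Flag snow-covered days from near-surface soil temperature.

    Snow damps the diurnal cycle and pins the soil near 0 degC, so a day is
    flagged when the daily range and |daily mean| are both below threshold.
    `soil_temp_hourly` has shape (days, 24), in degC.
    """
    daily_range = soil_temp_hourly.max(axis=1) - soil_temp_hourly.min(axis=1)
    daily_mean = np.abs(soil_temp_hourly.mean(axis=1))
    return (daily_range < range_thresh) & (daily_mean < mean_thresh)

hours = np.arange(24)
bare = 5.0 + 8.0 * np.sin(2 * np.pi * (hours - 8) / 24)  # strong diurnal cycle
snow = 0.1 + 0.05 * np.sin(2 * np.pi * hours / 24)       # damped, near 0 degC
flags = snow_cover_days(np.vstack([bare, snow]))
```

Summing the flags over a season gives snow cover duration, which together with a melt model can be used to back out accumulated snow water equivalent.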
An alternative design for a sparse distributed memory
NASA Technical Reports Server (NTRS)
Jaeckel, Louis A.
1989-01-01
A new design for a Sparse Distributed Memory, called the selected-coordinate design, is described. As in the original design, there are a large number of memory locations, each of which may be activated by many different addresses (binary vectors) in a very large address space. Each memory location is defined by specifying ten selected coordinates (bit positions in the address vectors) and a set of corresponding assigned values, consisting of one bit for each selected coordinate. A memory location is activated by an address if, for all ten of the location's selected coordinates, the corresponding bits in the address vector match the respective assigned value bits, regardless of the other bits in the address vector. Some comparative memory capacity and signal-to-noise ratio estimates for both the new and original designs are given. A few possible hardware embodiments of the new design are described.
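The activation rule just described is simple to state in code. The sketch below implements only that rule — match all ten assigned bits at the selected coordinates, ignore everything else; the dimensions and location count are illustrative assumptions, and storage/retrieval on top of the activated set is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000   # address-space dimension (assumed for illustration)
M = 200    # number of memory locations

# Each location: 10 distinct selected coordinates plus one assigned bit each.
selected = np.array([rng.choice(N, size=10, replace=False) for _ in range(M)])
assigned = rng.integers(0, 2, size=(M, 10))

def active_locations(address):
    """Indices of locations activated by a binary address vector.

    A location fires iff the address matches all 10 assigned bits at the
    location's selected coordinates; the other N-10 bits are ignored.
    """
    matches = address[selected] == assigned    # shape (M, 10)
    return np.flatnonzero(matches.all(axis=1))

address = rng.integers(0, 2, size=N)
acts = active_locations(address)
```

For a random address each location fires with probability 2^-10, so the expected number of active locations is M/1024 — the "sparse" activation the design is built around.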
Nefedieva, Julia S.; Nefediev, Pavel S.; Sakhnevich, Miroslava B.; Dyachkov, Yuri V.
2015-01-01
The distribution of millipedes along an altitudinal gradient in the south of Lake Teletskoye, Altai, Russia, based on new samples from the Kyga Profile sites, as well as on partly published and freshly revised material (Mikhaljova et al. 2007, 2008, 2014, Nefedieva and Nefediev 2008, Nefediev and Nefedieva 2013, Nefedieva et al. 2014), is established. The millipede diversity is estimated to be at least 15 species and subspecies from 10 genera, 6 families and three orders. The bulk of species diversity is confined both to low- and mid-mountain chern taiga forests and high-mountain shrub tundras, whereas the highest numbers, reaching up to 130 ind./m², are found in subalpine Pinus sibirica sparse growths. Based on clustering of the studied localities by species diversity similarity, two groups of sites are defined: low-mountain sites and subalpine sparse growths of Pinus sibirica. PMID:26257540
Sparse Bayesian Inference and the Temperature Structure of the Solar Corona
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warren, Harry P.; Byers, Jeff M.; Crump, Nicholas A.
Measuring the temperature structure of the solar atmosphere is critical to understanding how it is heated to high temperatures. Unfortunately, the temperature of the upper atmosphere cannot be observed directly, but must be inferred from spectrally resolved observations of individual emission lines that span a wide range of temperatures. Such observations are “inverted” to determine the distribution of plasma temperatures along the line of sight. This inversion is ill posed and, in the absence of regularization, tends to produce wildly oscillatory solutions. We introduce the application of sparse Bayesian inference to the problem of inferring the temperature structure of the solar corona. Within a Bayesian framework a preference for solutions that utilize a minimum number of basis functions can be encoded into the prior and many ad hoc assumptions can be avoided. We demonstrate the efficacy of the Bayesian approach by considering a test library of 40 assumed temperature distributions.
Sparse distributed memory and related models
NASA Technical Reports Server (NTRS)
Kanerva, Pentti
1992-01-01
Described here is sparse distributed memory (SDM) as a neural-net associative memory. It is characterized by two weight matrices and by a large internal dimension - the number of hidden units is much larger than the number of input or output units. The first matrix, A, is fixed and possibly random, and the second matrix, C, is modifiable. The SDM is compared and contrasted to (1) computer memory, (2) correlation-matrix memory, (3) feed-forward artificial neural network, (4) cortex of the cerebellum, (5) Marr and Albus models of the cerebellum, and (6) Albus' cerebellar model arithmetic computer (CMAC). Several variations of the basic SDM design are discussed: the selected-coordinate and hyperplane designs of Jaeckel, the pseudorandom associative neural memory of Hassoun, and SDM with real-valued input variables by Prager and Fallside. SDM research conducted mainly at the Research Institute for Advanced Computer Science (RIACS) in 1986-1991 is highlighted.
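Kanerva's two-matrix description maps onto a few lines of NumPy: the fixed random matrix A defines hard locations activated within a Hamming radius of the cue, and the modifiable counter matrix C accumulates bipolar data at the activated locations. The dimensions, radius, and autoassociative usage below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256      # word length (assumed)
m = 2000     # hidden locations; the internal dimension m >> n

A = rng.integers(0, 2, size=(m, n))   # fixed, random address matrix
C = np.zeros((m, n), dtype=int)       # modifiable counter matrix
radius = 112                          # Hamming activation radius (assumed)

def activate(x):
    """Locations whose fixed address lies within `radius` of the cue."""
    return np.flatnonzero((A != x).sum(axis=1) <= radius)

def write(addr, data):
    C[activate(addr)] += 2 * data - 1          # accumulate data in bipolar form

def read(addr):
    return (C[activate(addr)].sum(axis=0) >= 0).astype(int)

pattern = rng.integers(0, 2, size=n)
write(pattern, pattern)                        # autoassociative storage
noisy = pattern.copy()
noisy[rng.choice(n, size=20, replace=False)] ^= 1
recalled = read(noisy)                         # recall from a corrupted cue
```

Because many locations fire for each cue and their activation sets overlap for nearby addresses, a noisy cue still reaches mostly the same counters that were written, which is what gives the memory its partial-match retrieval.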
Label-free optical imaging of membrane patches for atomic force microscopy
Churnside, Allison B.; King, Gavin M.; Perkins, Thomas T.
2010-01-01
In atomic force microscopy (AFM), finding sparsely distributed regions of interest can be difficult and time-consuming. Typically, the tip is scanned until the desired object is located. This process can mechanically or chemically degrade the tip, as well as damage fragile biological samples. Protein assemblies can be detected using the back-scattered light from a focused laser beam. We previously used back-scattered light from a pair of laser foci to stabilize an AFM. In the present work, we integrate these techniques to optically image patches of purple membranes prior to AFM investigation. These rapidly acquired optical images were aligned to the subsequent AFM images to ~40 nm, since the tip position was aligned to the optical axis of the imaging laser. Thus, this label-free imaging efficiently locates sparsely distributed protein assemblies for subsequent AFM study while simultaneously minimizing degradation of the tip and the sample. PMID:21164738
Sparse orthogonal population representation of spatial context in the retrosplenial cortex.
Mao, Dun; Kandler, Steffen; McNaughton, Bruce L; Bonin, Vincent
2017-08-15
Sparse orthogonal coding is a key feature of hippocampal neural activity, which is believed to increase episodic memory capacity and to assist in navigation. Some retrosplenial cortex (RSC) neurons convey distributed spatial and navigational signals, but place-field representations such as observed in the hippocampus have not been reported. Combining cellular Ca2+ imaging in RSC of mice with a head-fixed locomotion assay, we identified a population of RSC neurons, located predominantly in superficial layers, whose ensemble activity closely resembles that of hippocampal CA1 place cells during the same task. Like CA1 place cells, these RSC neurons fire in sequences during movement, and show narrowly tuned firing fields that form a sparse, orthogonal code correlated with location. RSC 'place' cell activity is robust to environmental manipulations, showing partial remapping similar to that observed in CA1. This population code for spatial context may assist the RSC in its role in memory and/or navigation. Neurons in the retrosplenial cortex (RSC) encode spatial and navigational signals. Here the authors use calcium imaging to show that, similar to the hippocampus, RSC neurons also encode place cell-like activity in a sparse orthogonal representation, partially anchored to the allocentric cues on the linear track.
NASA Astrophysics Data System (ADS)
Doss, Derek J.; Heiselman, Jon S.; Collins, Jarrod A.; Weis, Jared A.; Clements, Logan W.; Geevarghese, Sunil K.; Miga, Michael I.
2017-03-01
Sparse surface digitization with an optically tracked stylus for use in an organ surface-based image-to-physical registration is an established approach for image-guided open liver surgery procedures. However, variability in sparse data collections during open hepatic procedures can produce disparity in registration alignments. In part, this variability arises from inconsistencies with the patterns and fidelity of collected intraoperative data. The liver lacks distinct landmarks and experiences considerable soft tissue deformation. Furthermore, data coverage of the organ is often incomplete or unevenly distributed. While more robust feature-based registration methodologies have been developed for image-guided liver surgery, it is still unclear how variation in sparse intraoperative data affects registration. In this work, we have developed an application to allow surgeons to study the performance of surface digitization patterns on registration. Given the intrinsic nature of soft-tissue, we incorporate realistic organ deformation when assessing fidelity of a rigid registration methodology. We report the construction of our application and preliminary registration results using four participants. Our preliminary results indicate that registration quality improves as users acquire more experience selecting patterns of sparse intraoperative surface data.
A coarse-to-fine approach for medical hyperspectral image classification with sparse representation
NASA Astrophysics Data System (ADS)
Chang, Lan; Zhang, Mengmeng; Li, Wei
2017-10-01
A coarse-to-fine approach with sparse representation is proposed for medical hyperspectral image classification in this work. A segmentation technique with different scales is employed to exploit edges of the input image, where coarse super-pixel patches provide global classification information while fine ones further provide detail information. Unlike a common RGB image, a hyperspectral image has many bands, allowing the cluster centers to be adjusted with higher precision. After segmentation, each super-pixel is classified by the recently-developed sparse representation-based classification (SRC), which assigns a label to testing samples in one local patch by means of a sparse linear combination of all the training samples. Furthermore, segmentation with multiple scales is employed because a single scale is not suitable for the complicated distribution of medical hyperspectral imagery. Finally, classification results for different sizes of super-pixel are fused by some fusion strategy, offering at least two benefits: (1) the final result is obviously superior to that of segmentation with a single scale, and (2) the fusion process significantly simplifies the choice of scales. Experimental results using real medical hyperspectral images demonstrate that the proposed method outperforms the state-of-the-art SRC.
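The SRC step — code a test sample as a sparse linear combination of all training samples, then assign the class whose atoms best reconstruct it — can be sketched with a small orthogonal matching pursuit. The toy "spectral" data, the pursuit solver, and the sparsity level below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: k-sparse code of y over dictionary D
    (columns are unit-norm training samples)."""
    residual, support = y.astype(float).copy(), []
    for _ in range(k):
        corr = np.abs(D.T @ residual)
        corr[support] = 0.0                    # do not reselect an atom
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def src_classify(D, labels, y, k=3):
    """Assign y to the class whose training atoms best reconstruct it."""
    x = omp(D, y, k)
    residuals = {c: np.linalg.norm(y - D @ np.where(labels == c, x, 0.0))
                 for c in set(labels)}
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(3)
n_bands, n_train = 20, 10
mu = {0: np.sin(np.linspace(0, 3, n_bands)),   # two toy class signatures
      1: np.cos(np.linspace(0, 3, n_bands))}
D = np.column_stack([mu[c] + 0.05 * rng.standard_normal(n_bands)
                     for c in (0, 1) for _ in range(n_train)])
D /= np.linalg.norm(D, axis=0)
labels = np.repeat([0, 1], n_train)
test = mu[1] + 0.05 * rng.standard_normal(n_bands)
pred = src_classify(D, labels, test)
```

In the paper this classification is applied per super-pixel at several segmentation scales, and the per-scale results are then fused.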
Cloud-In-Cell modeling of shocked particle-laden flows at a "SPARSE" cost
NASA Astrophysics Data System (ADS)
Taverniers, Soren; Jacobs, Gustaaf; Sen, Oishik; Udaykumar, H. S.
2017-11-01
A common tool for enabling process-scale simulations of shocked particle-laden flows is Eulerian-Lagrangian Particle-Source-In-Cell (PSIC) modeling, where each particle is traced in its Lagrangian frame and treated as a mathematical point. Its dynamics are governed by Stokes drag corrected for high Reynolds and Mach numbers. The computational burden is often reduced further through a "Cloud-In-Cell" (CIC) approach which amalgamates groups of physical particles into computational "macro-particles". CIC does not account for subgrid particle fluctuations, leading to erroneous predictions of cloud dynamics. A Subgrid Particle-Averaged Reynolds-Stress Equivalent (SPARSE) model is proposed that incorporates subgrid interphase velocity and temperature perturbations. A bivariate Gaussian source distribution, whose covariance captures the cloud's deformation to first order, accounts for the particles' momentum and energy influence on the carrier gas. SPARSE is validated by conducting tests on the interaction of a particle cloud with the accelerated flow behind a shock. The cloud's average dynamics and its deformation over time predicted with SPARSE converge to their counterparts computed with reference PSIC models as the number of Gaussians is increased from 1 to 16. This work was supported by AFOSR Grant No. FA9550-16-1-0008.
NASA Astrophysics Data System (ADS)
Rana, Parvez; Vauhkonen, Jari; Junttila, Virpi; Hou, Zhengyang; Gautam, Basanta; Cawkwell, Fiona; Tokola, Timo
2017-12-01
Large-diameter trees (taking DBH > 30 cm to define large trees) dominate the dynamics, function and structure of a forest ecosystem. The aim here was to employ sparse airborne laser scanning (ALS) data with a mean point density of 0.8 m^-2 and the non-parametric k-most similar neighbour (k-MSN) method to predict tree diameter at breast height (DBH) distributions in a subtropical forest in southern Nepal. The specific objectives were: (1) to evaluate the accuracy of the large-tree fraction of the diameter distribution; and (2) to assess the effect of the number of training areas (sample size, n) on the accuracy of the predicted tree diameter distribution. Comparison of the predicted distributions with empirical ones indicated that the large-tree diameter distribution can be derived in a mixed-species forest with an RMSE% of 66% and a bias% of -1.33%. It was also feasible to reduce the sample size without losing the interpretive capacity of the model: for large-diameter trees, even a reduction to half of the training plots (n = 250) produced only a marginal increase in RMSE% (1.12-1.97%) compared with the original training plots (n = 500). To be consistent with these outcomes, the sample areas should capture the entire range of spatial and feature variability in order to reduce the occurrence of error.
Galaxy redshift surveys with sparse sampling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, Chi-Ting; Wullstein, Philipp; Komatsu, Eiichiro
2013-12-01
Survey observations of the three-dimensional locations of galaxies are a powerful approach to measure the distribution of matter in the universe, which can be used to learn about the nature of dark energy, physics of inflation, neutrino masses, etc. A competitive survey, however, requires a large volume (e.g., V{sub survey} ∼ 10Gpc{sup 3}) to be covered, and thus tends to be expensive. A "sparse sampling" method offers a more affordable solution to this problem: within a survey footprint covering a given survey volume, V{sub survey}, we observe only a fraction of the volume. The distribution of observed regions should be chosen such that their separation is smaller than the length scale corresponding to the wavenumber of interest. Then one can recover the power spectrum of galaxies with precision expected for a survey covering a volume of V{sub survey} (rather than the volume of the sum of observed regions) with the number density of galaxies given by the total number of observed galaxies divided by V{sub survey} (rather than the number density of galaxies within an observed region). We find that regularly-spaced sampling yields an unbiased power spectrum with no window function effect, and deviations from regularly-spaced sampling, which are unavoidable in realistic surveys, introduce calculable window function effects and increase the uncertainties of the recovered power spectrum. On the other hand, we show that the two-point correlation function (pair counting) is not affected by sparse sampling. While we discuss the sparse sampling method within the context of the forthcoming Hobby-Eberly Telescope Dark Energy Experiment, the method is general and can be applied to other galaxy surveys.
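The pair-counting estimate mentioned above can be sketched with a toy 1-D natural estimator, xi(r) = DD(r)/RR(r) - 1 (box size and catalogue sizes below are invented; real analyses use 3-D separations and the Landy-Szalay estimator):

```python
import numpy as np

rng = np.random.default_rng(0)

def pair_counts(points, bins):
    """Histogram of all pairwise separations (brute force, 1-D)."""
    d = np.abs(points[:, None] - points[None, :])
    iu = np.triu_indices(len(points), k=1)  # each pair once
    return np.histogram(d[iu], bins=bins)[0]

# data catalogue and a random catalogue of equal size in a 1-D box
data = rng.uniform(0.0, 100.0, 500)
rand = rng.uniform(0.0, 100.0, 500)
bins = np.linspace(0.0, 10.0, 11)
dd = pair_counts(data, bins)
rr = pair_counts(rand, bins)
xi = dd / np.maximum(rr, 1) - 1  # natural estimator DD/RR - 1
```

For an unclustered (uniform) catalogue, xi fluctuates around zero; sparse masking removes pairs from DD and RR alike, which is why the estimator is unaffected by sparse sampling.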
Loxley, P N
2017-10-01
The two-dimensional Gabor function is adapted to natural image statistics, leading to a tractable probabilistic generative model that can be used to model simple cell receptive field profiles, or generate basis functions for sparse coding applications. Learning is found to be most pronounced in three Gabor function parameters representing the size and spatial frequency of the two-dimensional Gabor function and characterized by a nonuniform probability distribution with heavy tails. All three parameters are found to be strongly correlated, resulting in a basis of multiscale Gabor functions with similar aspect ratios and size-dependent spatial frequencies. A key finding is that the distribution of receptive-field sizes is scale invariant over a wide range of values, so there is no characteristic receptive field size selected by natural image statistics. The Gabor function aspect ratio is found to be approximately conserved by the learning rules and is therefore not well determined by natural image statistics. This allows for three distinct solutions: a basis of Gabor functions with sharp orientation resolution at the expense of spatial-frequency resolution, a basis of Gabor functions with sharp spatial-frequency resolution at the expense of orientation resolution, or a basis with unit aspect ratio. Arbitrary mixtures of all three cases are also possible. Two parameters controlling the shape of the marginal distributions in a probabilistic generative model fully account for all three solutions. The best-performing probabilistic generative model for sparse coding applications is found to be a gaussian copula with Pareto marginal probability density functions.
Shape models of asteroids reconstructed from WISE data and sparse photometry
NASA Astrophysics Data System (ADS)
Durech, Josef; Hanus, Josef; Ali-Lagoa, Victor
2017-10-01
By combining sparse-in-time photometry from the Lowell Observatory photometry database with WISE observations, we reconstructed convex shape models for about 700 new asteroids, and for ~850 others we derived 'partial' models with unconstrained ecliptic longitude of the spin axis direction. In our approach, the WISE data were treated as reflected light, which enabled us to directly join them with sparse photometry into one dataset that was processed by the lightcurve inversion method. This simplified treatment of thermal infrared data turned out to provide correct results, because in most cases the phase offset between optical and thermal lightcurves was small and the correct sidereal rotation period was determined. The spin and shape parameters derived from only optical data and from a combination of optical and WISE data were very similar. The new models together with those already available in the Database of Asteroid Models from Inversion Techniques (DAMIT) represent a sample of ~1650 asteroids. When including also partial models, the total sample is about 2500 asteroids, which significantly increases the number of models with respect to those that have been available so far. We will show the distribution of spin axes for different size groups and also for several collisional families. These observed distributions in general agree with theoretical expectations, supporting the picture that smaller asteroids are more affected by YORP/Yarkovsky evolution. In asteroid families, we see a clear bimodal distribution of prograde/retrograde rotation that correlates with the position right/left of the family center as measured by the semimajor axis.
Vertical distribution of the soil microbiota along a successional gradient in a glacier forefield.
Rime, Thomas; Hartmann, Martin; Brunner, Ivano; Widmer, Franco; Zeyer, Josef; Frey, Beat
2015-03-01
Spatial patterns of microbial communities have been extensively surveyed in well-developed soils, but few studies investigated the vertical distribution of micro-organisms in newly developed soils after glacier retreat. We used 454-pyrosequencing to assess whether bacterial and fungal community structures differed between stages of soil development (SSD) characterized by an increasing vegetation cover from barren (vegetation cover: 0%/age: 10 years), sparsely vegetated (13%/60 years), transient (60%/80 years) to vegetated (95%/110 years) and depths (surface, 5 and 20 cm) along the Damma glacier forefield (Switzerland). The SSD significantly influenced the bacterial and fungal communities. Based on indicator species analyses, metabolically versatile bacteria (e.g. Geobacter) and psychrophilic yeasts (e.g. Mrakia) characterized the barren soils. Vegetated soils with higher C, N and root biomass consisted of bacteria able to degrade complex organic compounds (e.g. Candidatus Solibacter), lignocellulolytic Ascomycota (e.g. Geoglossum) and ectomycorrhizal Basidiomycota (e.g. Laccaria). Soil depth only influenced bacterial and fungal communities in barren and sparsely vegetated soils. These changes were partly due to more silt and higher soil moisture at the surface. In both soil ages, the surface was characterized by OTUs affiliated to Phormidium and Sphingobacteriales. In lower depths, however, bacterial and fungal communities differed between SSD. Lower depths of sparsely vegetated soils consisted of OTUs affiliated to Acidobacteria and Geoglossum, whereas depths of barren soils were characterized by OTUs related to Gemmatimonadetes. Overall, plant establishment drives the soil microbiota along the successional gradient but does not influence the vertical distribution of microbiota in recently deglaciated soils. © 2014 John Wiley & Sons Ltd.
Effects of Ordering Strategies and Programming Paradigms on Sparse Matrix Computations
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Li, Xiaoye; Husbands, Parry; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2002-01-01
The Conjugate Gradient (CG) algorithm is perhaps the best-known iterative technique to solve sparse linear systems that are symmetric and positive definite. For systems that are ill-conditioned, it is often necessary to use a preconditioning technique. In this paper, we investigate the effects of various ordering and partitioning strategies on the performance of parallel CG and ILU(0)-preconditioned CG (PCG) using different programming paradigms and architectures. Results show that, for this class of applications, ordering significantly improves overall performance on both distributed and distributed shared-memory systems, that cache reuse may be more important than reducing communication, that it is possible to achieve message-passing performance using shared-memory constructs through careful data ordering and distribution, and that a hybrid MPI+OpenMP paradigm increases programming complexity with little performance gain. An implementation of CG on the Cray MTA does not require special ordering or partitioning to obtain high efficiency and scalability, giving it a distinct advantage for adaptive applications; however, it shows limited scalability for PCG due to a lack of thread-level parallelism.
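For reference, the unpreconditioned CG iteration at the core of the study can be sketched in a few lines (a minimal dense-matrix illustration; the paper's parallel, ILU(0)-preconditioned sparse version involves far more machinery):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve Ax = b for a symmetric positive-definite matrix A by
    minimizing the quadratic form along A-conjugate search directions."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # optimal step length
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # next conjugate direction
        rs_old = rs_new
    return x

# small SPD test system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

The orderings studied in the paper change only the sparsity pattern of A (and hence memory locality), not this iteration itself.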
Bazzo, João Paulo; Pipa, Daniel Rodrigues; da Silva, Erlon Vagner; Martelli, Cicero; Cardozo da Silva, Jean Carlos
2016-09-07
This paper presents an image reconstruction method to monitor the temperature distribution of electric generator stators. The main objective is to identify insulation failures that may arise as hotspots in the structure. The method is based on temperature readings of fiber optic distributed temperature sensors (DTS) and a sparse reconstruction algorithm. Thermal images of the structure are formed by appropriately combining atoms of a dictionary of hotspots, which was constructed by finite element simulation with a multi-physical model. Due to the difficulty of reproducing insulation faults in a real stator structure, experimental tests were performed using a prototype similar to the real structure. The results demonstrate the ability of the proposed method to reconstruct images of hotspots with dimensions down to 15 cm, representing a resolution gain of up to six times when compared to the DTS spatial resolution. In addition, satisfactory results were also obtained in detecting hotspots of only 5 cm. The application of the proposed algorithm for thermal imaging of generator stators can contribute to the identification of insulation faults in early stages, thereby avoiding catastrophic damage to the structure.
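The abstract does not specify the sparse reconstruction algorithm, so as a hedged stand-in the dictionary-combination idea can be sketched with non-negative least squares: fit the sparse sensor readings as a combination of hotspot atoms, then render the full image (all sizes, atoms, and sensor positions below are invented):

```python
import numpy as np
from scipy.optimize import nnls

def reconstruct(dictionary, sensor_rows, readings):
    """Fit sensor readings as a non-negative combination of hotspot
    atoms, then render the full thermal image on the whole grid."""
    A = dictionary[sensor_rows, :]  # each atom sampled at the sensor positions
    coef, _ = nnls(A, readings)     # non-negativity: temperatures add, not cancel
    return dictionary @ coef

rng = np.random.default_rng(2)
dictionary = rng.random((100, 4))    # 4 simulated hotspot atoms on a 100-pixel grid
sensor_rows = np.arange(0, 100, 10)  # 10 sparse DTS measurement positions
truth = dictionary @ np.array([1.5, 0.0, 0.7, 0.0])
image = reconstruct(dictionary, sensor_rows, truth[sensor_rows])
```

Because the atoms carry the fine spatial structure from the finite-element model, the rendered image can resolve features smaller than the sensor spacing, which is the resolution gain the paper reports.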
Cost-effectiveness of the stream-gaging program in North Carolina
Mason, R.R.; Jackson, N.M.
1985-01-01
This report documents the results of a study of the cost-effectiveness of the stream-gaging program in North Carolina. Data uses and funding sources are identified for the 146 gaging stations currently operated in North Carolina with a budget of $777,600 (1984). As a result of the study, eleven stations are nominated for discontinuance and five for conversion from recording to partial-record status. Large parts of North Carolina's Coastal Plain are identified as having sparse streamflow data. This sparsity should be remedied as funds become available. Efforts should also be directed toward defining the effects of drainage improvements on local hydrology and streamflow characteristics. The average standard error of streamflow records in North Carolina is 18.6 percent. This level of accuracy could be improved without increasing cost by increasing the frequency of field visits and streamflow measurements at stations with high standard errors and reducing the frequency at stations with low standard errors. A minimum budget of $762,000 is required to operate the 146-gage program. A budget less than this does not permit proper service and maintenance of the gages and recorders. At the minimum budget, and with the optimum allocation of field visits, the average standard error is 17.6 percent.
NASA Astrophysics Data System (ADS)
Agurto-Detzel, H.; Font, Y.; Charvis, P.; Ambrois, D.; Cheze, J.; Courboulex, F.; De Barros, L.; Deschamps, A.; Galve, A.; Godano, M.; Laigle, M.; Maron, C.; Martin, X.; Monfret, T.; Oregioni, D.; Peix, F., Sr.; Regnier, M. M.; Yates, B.; Mercerat, D.; Leon Rios, S.; Rietbrock, A.; Acero, W.; Alvarado, A. P.; Gabriela, P.; Ramos, C.; Ruiz, M. C.; Singaucho, J. C.; Vasconez, F.; Viracucha, C.; Beck, S. L.; Lynner, C.; Hoskins, M.; Meltzer, A.; Soto-Cordero, L.; Stachnik, J.
2017-12-01
In April 2016, a Mw 7.8 megathrust earthquake struck the coast of Ecuador causing vast human and material losses. The earthquake ruptured a 100 km-long segment of the subduction interface between Nazca and South America, spatially coinciding with the 1942 M 7.8 earthquake rupture area. Shortly after the mainshock, an international effort by institutions from Ecuador, France, the UK and the USA deployed a temporary network of more than 60 land and ocean-bottom seismometers to capture the aftershock sequence over the subsequent year. These stations complemented the permanent Ecuadorian national network already in place. Here we benefit from this dataset to produce a suite of automatic locations and a subset of regional moment tensors for high-quality events. Over 2900 events were detected for the first month of postseismic activity alone, and a subset of 600 events were manually re-picked and located. Similarly, thousands of aftershocks were detected using the temporary deployment over the following months, with magnitudes ranging from 1 to 7. As expected, moment tensors show mostly thrust faulting at the interface, but we also observe sparse normal and strike-slip faulting at shallow depths in the forearc. The spatial distribution of seismicity delineates the coseismic rupture area, but extends well beyond it over a 300 km-long segment. Main features include three seismicity alignments perpendicular to the trench, at the north, center and south of the mainshock rupture. Preliminary results comparing quantitatively the distribution of aftershocks to the distribution of the coseismic rupture show that the bulk of the aftershock seismicity occurs at intermediate levels of coseismic slip, while areas of maximum coseismic slip are mostly devoid of events with M > 3. Our results shed light on the interface processes occurring mainly during the early post-seismic period of large megathrust earthquakes, and on their implications for the earthquake cycle.
NASA Astrophysics Data System (ADS)
Roostaee, M.; Deng, Z.
2017-12-01
State environmental agencies are required by the Clean Water Act to assess all waterbodies and evaluate potential sources of impairments. Spatial and temporal distributions of water quality parameters are critical in identifying Critical Source Areas (CSAs). However, due to limitations in monetary resources and the large number of waterbodies, available monitoring stations are typically sparse, with intermittent periods of data collection. Hence, scarcity of water quality data is a major obstacle in addressing sources of pollution through management strategies. In this study, the spatiotemporal Bayesian Maximum Entropy (BME) method is employed to model the inherent temporal and spatial variability of measured water quality indicators such as Dissolved Oxygen (DO) concentration for Turkey Creek Watershed. Turkey Creek is located in northern Louisiana and has been on the 303(d) list for DO impairment since 2014 in Louisiana Water Quality Inventory Reports due to agricultural practices. The BME method has been shown to provide more accurate estimates than purely spatial analysis methods by incorporating the space/time distribution and uncertainty in available measured soft and hard data. This model would be used to estimate DO concentration at unmonitored locations and times and subsequently to identify CSAs. The USDA's crop-specific land cover data layers of the watershed were then used to determine those practices/changes that led to low DO concentration in identified CSAs. Preliminary results revealed that cultivation of corn and soybean as well as urban runoff are the main contributors to low dissolved oxygen in Turkey Creek Watershed.
When El Nino Rages: How Satellite Data Can Help Water-Stressed Islands
NASA Astrophysics Data System (ADS)
Kruk, M. C.; Sutton, J. R. P.; Luchetti, N.; Wright, E.; Marra, J. J.
2016-02-01
The United States Affiliated Pacific Islands (USAPI) are highly susceptible to extreme precipitation events such as drought and flooding, which directly affect their freshwater availability. Precipitation distribution differs by sub-region, and is predominantly influenced by phases of the El Niño Southern Oscillation (ENSO). Forecasters currently rely on ENSO climatologies from sparse in situ station data to inform their precipitation outlooks. To address this spatial gap, a unique NOAA/NASA collaborative project updated the ENSO-based rainfall climatology for the Exclusive Economic Zones (EEZs) encompassing Hawaii and the USAPI using NOAA's 15 km PERSIANN Climate Data Record. This dataset provided a 30-year record (1984-2015) of daily precipitation at 0.25° resolution, which was used to calculate monthly, seasonal, and yearly precipitation averages. The 478-page satellite-derived reference atlas not only illustrates the long-term average rainfall distribution by month, but also shows the percent departure from average for each three-month season based on the Oceanic Niño Index (ONI) for weak, moderate, and strong ENSO phases. Local weather service offices are already using the atlas to better understand precipitation patterns across their regions, and as such are able to produce more accurate forecasts during different ENSO phases to inform adaptation, conservation, and mitigation options for drought and flooding events. The presentation will showcase the development of the atlas, highlight some of the challenges encountered, and demonstrate how CDRs can be used to inform decision-making.
Development of Innovative Technology to Provide Low-Cost Surface Atmospheric Observations
NASA Astrophysics Data System (ADS)
Kucera, Paul; Steinson, Martin
2016-04-01
Accurate and reliable real-time monitoring and dissemination of observations of surface weather conditions is critical for a variety of societal applications. Applications that provide local and regional information about temperature, precipitation, moisture, and winds, for example, are important for agriculture, water resource monitoring, health, and monitoring of hazardous weather conditions. In many regions of Africa (and other global locations), surface weather stations are sparsely located and/or of poor quality. Existing stations have often been sited incorrectly, not well-maintained, and have limited communications established at the site for real-time monitoring. The US National Weather Service (NWS) International Activities Office (IAO), in partnership with the University Corporation for Atmospheric Research (UCAR)/National Center for Atmospheric Research (NCAR) and funded by the United States Agency for International Development (USAID) Office of Foreign Disaster Assistance (OFDA), has started an initiative to develop and deploy low-cost weather instrumentation in sparsely observed regions of the world. The goal is to provide observations for environmental monitoring and early warning alert systems that can be deployed at weather services in developing countries. Instrumentation is being designed using innovative new technologies such as 3D printers, Raspberry Pi computing systems, and wireless communications. The initial effort is focused on designing a surface network using GIS-based tools, deploying an initial network in Zambia, and providing training to Zambia Meteorological Department (ZMD) staff. The presentation will provide an overview of the project concepts, the design of the low-cost instrumentation, and initial experience deploying the surface network in Zambia.
Exarchakis, Georgios; Lücke, Jörg
2017-11-01
Sparse coding algorithms with continuous latent variables have been the subject of a large number of studies. However, discrete latent spaces for sparse coding have been largely ignored. In this work, we study sparse coding with latents described by discrete instead of continuous prior distributions. We consider the general case in which the latents (while being sparse) can take on any value of a finite set of possible values and in which we learn the prior probability of any value from data. This approach can be applied to any data generated by discrete causes, and it can be applied as an approximation of continuous causes. As the prior probabilities are learned, the approach then allows for estimating the prior shape without assuming specific functional forms. To efficiently train the parameters of our probabilistic generative model, we apply a truncated expectation-maximization approach (expectation truncation) that we modify to work with a general discrete prior. We evaluate the performance of the algorithm by applying it to a variety of tasks: (1) we use artificial data to verify that the algorithm can recover the generating parameters from a random initialization, (2) use image patches of natural images and discuss the role of the prior for the extraction of image components, (3) use extracellular recordings of neurons to present a novel method of analysis for spiking neurons that includes an intuitive discretization strategy, and (4) apply the algorithm on the task of encoding audio waveforms of human speech. The diverse set of numerical experiments presented in this letter suggests that discrete sparse coding algorithms can scale efficiently to work with realistic data sets and provide novel statistical quantities to describe the structure of the data.
Reevaluation of air surveillance station siting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abbott, K.; Jannik, T.
2016-07-06
DOE Technical Standard HDBK-1216-2015 (DOE 2015) recommends evaluating air-monitoring station placement using the analytical method developed by Waite. The technique utilizes wind rose and population distribution data in order to determine a weighting factor for each directional sector surrounding a nuclear facility. Based on the available resources (number of stations) and a scaling factor, this weighting factor is used to determine the number of stations recommended to be placed in each sector considered. An assessment utilizing this method was performed in 2003 to evaluate the effectiveness of the existing SRS air-monitoring program. The resulting recommended distribution of air-monitoring stations was then compared to that of the existing site perimeter surveillance program. The assessment demonstrated that the distribution of air-monitoring stations at the time generally agreed with the results obtained using the Waite method; however, at the time new stations were established in Barnwell and in Williston in order to meet requirements of DOE guidance document EH-0173T.
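The sector-weighting idea attributed to Waite can be sketched as follows (a hedged sketch only: the actual method includes a scaling factor and siting details not given in the abstract, and the sector frequencies and populations below are invented):

```python
import numpy as np

def allocate_stations(wind_freq, population, n_stations):
    """Weight each directional sector by wind-rose frequency times
    population, then allocate stations proportionally to the
    normalized weights (largest-remainder rounding)."""
    w = np.asarray(wind_freq, float) * np.asarray(population, float)
    w = w / w.sum()
    ideal = w * n_stations
    counts = np.floor(ideal).astype(int)
    # hand out the leftover stations to the largest fractional remainders
    for j in np.argsort(ideal - counts)[::-1][: n_stations - counts.sum()]:
        counts[j] += 1
    return counts

# 8 compass sectors with hypothetical wind frequencies and downwind populations
sectors = allocate_stations([0.2, 0.1, 0.05, 0.05, 0.2, 0.1, 0.2, 0.1],
                            [5000, 200, 100, 50, 8000, 1000, 3000, 400],
                            n_stations=12)
```

The product of frequency and population captures the intuition that monitoring effort should concentrate where releases are most likely to travel and where the most people would be affected.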
Automation of the space station core module power management and distribution system
NASA Technical Reports Server (NTRS)
Weeks, David J.
1988-01-01
Under the Advanced Development Program for Space Station, Marshall Space Flight Center has been developing advanced automation applications for the Power Management and Distribution (PMAD) system inside the Space Station modules for the past three years. The Space Station Module Power Management and Distribution System (SSM/PMAD) test bed features three artificial intelligence (AI) systems coupled with conventional automation software functioning in an autonomous or closed-loop fashion. The AI systems in the test bed include a baseline scheduler/dynamic rescheduler (LES), a load shedding management system (LPLMS), and a fault recovery and management expert system (FRAMES). This test bed will be part of the NASA Systems Autonomy Demonstration for 1990 featuring cooperating expert systems in various Space Station subsystem test beds. It is concluded that advanced automation technology involving AI approaches is sufficiently mature to begin applying the technology to current and planned spacecraft applications including the Space Station.
2011-01-01
and G. Armitage. Defining and evaluating greynets (sparse darknets). In LCN: Proceedings of the 30th IEEE Conference on Local Computer Networks...analysis of distributed darknet traffic. In IMC: Proceedings of the USENIX/ACM Internet Measurement Conference, 2005. Indexing Full Packet Capture Data
Particle Filter Based Tracking in a Detection Sparse Discrete Event Simulation Environment
2007-03-01
obtained by disqualifying a large number of particles. [Figure 31: Particle Disqualification via Sanitization]
Removal of nuisance signals from limited and sparse 1H MRSI data using a union-of-subspaces model.
Ma, Chao; Lam, Fan; Johnson, Curtis L; Liang, Zhi-Pei
2016-02-01
To remove nuisance signals (e.g., water and lipid signals) for 1H MRSI data collected from the brain with limited and/or sparse (k, t)-space coverage. A union-of-subspaces model is proposed for removing nuisance signals. The model exploits the partial separability of both the nuisance signals and the metabolite signal, and decomposes an MRSI dataset into several sets of generalized voxels that share the same spectral distributions. This model enables the estimation of the nuisance signals from an MRSI dataset that has limited and/or sparse (k, t)-space coverage. The proposed method has been evaluated using in vivo MRSI data. For conventional chemical shift imaging data with limited k-space coverage, the proposed method produced "lipid-free" spectra without lipid suppression during data acquisition at 130 ms echo time. For sparse (k, t)-space data acquired with conventional pulses for water and lipid suppression, the proposed method was also able to remove the remaining water and lipid signals with negligible residuals. Nuisance signals in 1H MRSI data reside in low-dimensional subspaces. This property can be utilized for estimation and removal of nuisance signals from 1H MRSI data even when they have limited and/or sparse coverage of (k, t)-space. The proposed method should prove useful especially for accelerated high-resolution 1H MRSI of the brain. © 2015 Wiley Periodicals, Inc.
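A deliberately simplified stand-in for the subspace idea: if the water/lipid signals dominate and span a low-dimensional subspace, a truncated SVD of the Casorati matrix (voxels x time samples) can project them out. This is generic low-rank removal, not the paper's union-of-subspaces estimation, and the sizes below are invented:

```python
import numpy as np

def remove_nuisance(casorati, rank):
    """Subtract the rank-dominant component of a Casorati matrix
    (rows: voxels, columns: time samples), leaving the residual signal."""
    U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
    nuisance = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return casorati - nuisance

rng = np.random.default_rng(1)
water = np.outer(np.ones(12), rng.normal(size=24)) * 100.0  # rank-1 "nuisance"
metab = rng.normal(size=(12, 24))                           # weak residual signal
cleaned = remove_nuisance(water + metab, rank=1)
```

The key property exploited here, as in the paper, is that a spatially smooth, spectrally consistent nuisance signal occupies far fewer degrees of freedom than the data matrix itself.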
Space station communications and tracking equipment management/control system
NASA Technical Reports Server (NTRS)
Kapell, M. H.; Seyl, J. W.
1982-01-01
Design details of a communications and tracking (C and T) local area network and the distribution system requirements for the prospective space station are described. The hardware will be constructed of LRUs, including those for baseband, RF, and antenna subsystems. It is noted that the C and T equipment must be routed throughout the station to accommodate growth of the station. Configurations of the C and T modules will therefore be dependent on the function of the space station module where they are located. A block diagram is provided of a sample C and T hardware distribution configuration. A topology and protocol will be needed to accommodate new terminals, wide bandwidths, bidirectional message transmission, and distributed functioning. Consideration will be given to collisions occurring in the data transmission channels.
NASA Astrophysics Data System (ADS)
Wang, Xin; Li, Juan; Chen, Qi-Fu
2017-02-01
The northwest Pacific subduction region is an ideal location to study the interaction between the subducting slab and upper mantle discontinuities. Due to the sparse distribution of seismic stations in the sea, previous studies mostly focus on mantle transition zone (MTZ) structures beneath continents or island arcs, leaving the vast area of the Japan Sea and Okhotsk Sea untouched. In this study, we analyzed multiple-ScS reverberation waves, and a common-reflection-point stacking technique was applied to enhance consistent signals beneath reflection points. A topographic image of the 410 km and 660 km discontinuities is obtained beneath the Japan Sea and adjacent regions. One-dimensional and 3-D velocity models are adapted to obtain the "apparent" and "true" depth. We observe a systematic pattern of depression (~10-20 km) and elevation (~5-10 km) of the 660, with the topography being roughly consistent with the shift of the olivine-phase transition boundary caused by the subducting Pacific plate. The behavior of the 410 is more complex. It is generally ~5-15 km shallower at the location where the slab penetrates and deepened by ~5-10 km oceanward of the slab where a low-velocity anomaly is observed in tomography images. Moreover, we observe a wide distribution of depressed 410 beneath the southern Okhotsk Sea and western Japan Sea. The hydrous wadsleyite boundary caused by the high water content at the top of the MTZ could explain the depression. The long-history trench rollback motion of the Pacific slab might be responsible for the widely distributed depression of the 410 ranging upward and landward from the slab.
NASA Astrophysics Data System (ADS)
Vahidi, Vahid; Saberinia, Ebrahim; Regentova, Emma E.
2017-10-01
A channel estimation (CE) method based on compressed sensing (CS) is proposed to estimate the sparse and doubly selective (DS) channel for hyperspectral image transmission from unmanned aerial vehicles to ground stations. The proposed method contains three steps: (1) a priori estimate of the channel by orthogonal matching pursuit (OMP), (2) calculation of the linear minimum mean square error (LMMSE) estimate of the received pilots given the estimated channel, and (3) estimation of the complex amplitudes and Doppler shifts of the channel using the enhanced received pilot data, applying a second round of a CS algorithm. The proposed method is named DS-LMMSE-OMP, and its performance is evaluated by simulating transmission of AVIRIS hyperspectral data via the communication channel and assessing their fidelity for automated analysis after demodulation. The performance of the DS-LMMSE-OMP approach is compared with that of two other state-of-the-art CE methods. The simulation results show up to an 8-dB improvement in bit error rate and a 50% improvement in hyperspectral image classification accuracy.
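Step (1), the OMP prior estimate, follows the generic greedy pattern below (sketched with a trivial orthonormal dictionary for clarity; the actual method operates on pilot observations with a delay-Doppler dictionary):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select k dictionary
    atoms by correlation with the residual, re-fitting all selected
    coefficients by least squares at each step."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # best-matching atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

D = np.eye(10)                 # trivial orthonormal dictionary for illustration
y = np.zeros(10)
y[3], y[7] = 2.0, 1.0          # a 2-sparse signal
x_hat = omp(D, y, k=2)
```

The least-squares re-fit over the whole support is what distinguishes OMP from plain matching pursuit and makes each residual orthogonal to all selected atoms.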
Sentürk, Damla; Dalrymple, Lorien S; Nguyen, Danh V
2014-11-30
We propose functional linear models for zero-inflated count data with a focus on the functional hurdle and functional zero-inflated Poisson (ZIP) models. While the hurdle model assumes the counts come from a mixture of a degenerate distribution at zero and a zero-truncated Poisson distribution, the ZIP model considers a mixture of a degenerate distribution at zero and a standard Poisson distribution. We extend the generalized functional linear model framework with a functional predictor and multiple cross-sectional predictors to model counts generated by a mixture distribution. We propose an estimation procedure for functional hurdle and ZIP models, called penalized reconstruction, geared towards error-prone and sparsely observed longitudinal functional predictors. The approach relies on dimension reduction and pooling of information across subjects involving basis expansions and penalized maximum likelihood techniques. The developed functional hurdle model is applied to modeling hospitalizations within the first 2 years from initiation of dialysis, with a high percentage of zeros, in the Comprehensive Dialysis Study participants. Hospitalization counts are modeled as a function of sparse longitudinal measurements of serum albumin concentrations, patient demographics, and comorbidities. Simulation studies are used to study finite sample properties of the proposed method and include comparisons with an adaptation of standard principal components regression. Copyright © 2014 John Wiley & Sons, Ltd.
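The ZIP mixture described above has a simple pmf: a point mass at zero with weight pi plus a Poisson(lambda) component with weight 1 - pi. A minimal sketch of just this distribution (not the functional regression machinery):

```python
import numpy as np
from scipy.stats import poisson

def zip_pmf(y, pi, lam):
    """Zero-inflated Poisson pmf: mixture of a degenerate distribution
    at zero (weight pi) and a Poisson(lam) distribution (weight 1 - pi)."""
    y = np.asarray(y)
    pmf = (1 - pi) * poisson.pmf(y, lam)
    # zeros can come from either component, so they get extra mass pi
    return np.where(y == 0, pi + pmf, pmf)

p = zip_pmf(np.arange(100), pi=0.3, lam=2.0)
```

The hurdle variant differs only in that its count component is zero-truncated, so every zero is attributed to the degenerate part.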
N-mixture models for estimating population size from spatially replicated counts
Royle, J. Andrew
2004-01-01
Spatial replication is a common theme in count surveys of animals. Such surveys often generate sparse count data from which it is difficult to estimate population size while formally accounting for detection probability. In this article, I describe a class of models (N-mixture models) which allow for estimation of population size from such data. The key idea is to view site-specific population sizes, N, as independent random variables distributed according to some mixing distribution (e.g., Poisson). Prior parameters are estimated from the marginal likelihood of the data, having integrated over the prior distribution for N. Carroll and Lombard (1985, Journal of the American Statistical Association 80, 423-426) proposed a class of estimators based on mixing over a prior distribution for detection probability. Their estimator can be applied in limited settings, but is sensitive to prior parameter values that are fixed a priori. Spatial replication provides additional information regarding the parameters of the prior distribution on N that is exploited by the N-mixture models and which leads to reasonable estimates of abundance from sparse data. A simulation study demonstrates superior operating characteristics (bias, confidence interval coverage) of the N-mixture estimator compared to the Carroll and Lombard estimator. Both estimators are applied to point count data on six species of birds, illustrating the sensitivity to choice of prior on p and substantially different estimates of abundance as a consequence.
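The marginal likelihood described above (a binomial detection model integrated over a Poisson prior on the latent abundance N) can be sketched for a single site, truncating the infinite sum at n_max (the counts and parameter values below are invented):

```python
import numpy as np
from scipy.stats import poisson, binom

def site_likelihood(counts, lam, p, n_max=200):
    """Marginal likelihood of replicated counts at one site: sum over
    latent abundance N of Poisson(N; lam) times the product of
    Binomial(count_t; N, p) over replicate visits t."""
    counts = np.asarray(counts)
    total = 0.0
    for N in range(int(counts.max()), n_max + 1):  # N cannot be below max count
        total += poisson.pmf(N, lam) * np.prod(binom.pmf(counts, N, p))
    return total

# three replicate visits to one site
L = site_likelihood([2, 3, 2], lam=5.0, p=0.5)
```

In practice (lam, p) are estimated by maximizing the product of such site likelihoods over all spatially replicated sites, which is exactly the information that spatial replication contributes.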
Lessons Learned in over Two Decades of GPS/GNSS Data Center Support
NASA Astrophysics Data System (ADS)
Boler, F. M.; Estey, L. H.; Meertens, C. M.; Maggert, D.
2014-12-01
The UNAVCO Data Center in Boulder, Colorado, curates, archives, and distributes geodesy data and products, mainly GPS/GNSS data from 3,000 permanent stations and 10,000 campaign sites around the globe. Although it now has core support from NSF and NASA, the archive began around 1992 as a grass-roots effort of a few UNAVCO staff and community members to preserve data going back to 1986. Open access to these data is generally desired, but the Data Center in fact operates under an evolving suite of data access policies ranging from open access to nondisclosure for special cases. Key to processing these data is having the correct equipment metadata; reliably obtaining this metadata continues to be a challenge, in spite of modern cyberinfrastructure and tools, mostly due to human errors or lack of consistent operator training. New metadata problems surface when trying to design and publish modern Digital Object Identifiers for data sets where PIs, funding sources, and historical project names now need to be corrected and verified for data sets going back almost three decades. Originally, the data were GPS-only, based on three signals on two carrier frequencies. Modern GNSS covers GPS modernization (three more signals and one additional carrier) as well as open signals and carriers of additional systems such as GLONASS, Galileo, BeiDou, and QZSS, requiring ongoing adaptive strategies to assess the quality of modern datasets. Also, new scientific uses of these data benefit from higher data rates than were needed for early tectonic applications. In addition, there has been a migration from episodic campaign sites (hence sparse data) to continuously operating stations (hence dense data) over the last two decades. All of these factors make it difficult to realistically plan even simple data center functions such as on-line storage capacity.
Body wave tomography of Iranian Plateau
NASA Astrophysics Data System (ADS)
Alinaghi, A.; Koulakov, I.; Thybo, H.
2004-12-01
The inverse teleseismic tomography approach has been adopted to study the P and S velocity structure of the crust and upper mantle across the Iranian Plateau. The method uses phase readings from earthquakes in a study area, as reported by stations at teleseismic and regional distances, to compute the velocity anomalies in the area. This use of source-receiver reciprocity allows tomographic studies of regions with a sparse distribution of seismic stations, provided the region has sufficient seismicity. The input data for the algorithm are the arrival times of events located in Iran, taken from the ISC catalogue (1964-1996). All the sources were located anew using a 1D spherical Earth model taking into account variable Moho depth and topography. The inversion relocates events simultaneously with the calculation of velocity perturbations. With a series of synthetic tests we demonstrate the ability of the algorithm to resolve both idealized and realistic anomalies using the available earthquake sources and introducing measurement errors and outliers. The velocity anomalies show that the crust and upper mantle below the Iranian Plateau comprise a low velocity domain between the Arabian Plate and the Caspian Block, in agreement with models of the active Iranian plate trapped between the stable Turan plate in the north and the Arabian shield in the south. Our results show clear evidence of subduction at Makran in the southeastern corner of Iran, where the oceanic crust of the Oman Sea subducts underneath the Iranian Plateau, a movement which is mainly aseismic. On the other hand, the subduction and collision of the two plates along the Zagros suture zone is highly seismic and appears less consistent in our images than in the Makran region.
Earthquake Activity in the North Greenland Region
NASA Astrophysics Data System (ADS)
Larsen, Tine B.; Dahl-Jensen, Trine; Voss, Peter H.
2017-04-01
Many local and regional earthquakes are recorded on a daily basis in northern Greenland. The majority of the earthquakes originate at the Arctic plate boundary between the Eurasian and the North American plates. Particularly active regions away from the plate boundary are found in NE Greenland and in northern Baffin Bay. The seismograph coverage in the region is sparse, with the main seismograph stations located at the military outpost Station Nord (NOR), the weather station outpost Danmarkshavn (DAG), Thule Airbase (TULEG), and the former ice core drilling camp (NEEM) in the middle of the Greenland ice sheet. Furthermore, data are available from Alert (ALE), Resolute (RES), and other seismographs in northern Canada, as well as from a temporary deployment of broadband seismographs along the north coast of Greenland from 2004 to 2007. The recorded earthquakes range in magnitude from less than 2 to a 4.8 event, the largest in NE Greenland, and a 5.7 event, the largest recorded in northern Baffin Bay. The larger events are recorded widely in the region, allowing focal mechanisms to be calculated. Only a few existing focal mechanisms for the region can be found in the ISC bulletin: two in NE Greenland, representing primarily normal faulting, and one in Baffin Bay, resulting from reverse faulting. New calculations of focal mechanisms for the region will be presented, as well as improved hypocenters resulting from analysis involving temporary stations and regional stations that are not included in routine processing.
A general parallel sparse-blocked matrix multiply for linear scaling SCF theory
NASA Astrophysics Data System (ADS)
Challacombe, Matt
2000-06-01
A general approach to the parallel sparse-blocked matrix-matrix multiply is developed in the context of linear scaling self-consistent-field (SCF) theory. The data-parallel message passing method uses non-blocking communication to overlap computation and communication. The space filling curve heuristic is used to achieve data locality for sparse matrix elements that decay with “separation”. Load balance is achieved by solving the bin packing problem for blocks with variable size. With this new method as the kernel, parallel performance of the simplified density matrix minimization (SDMM) for solution of the SCF equations is investigated for RHF/6-31G** water clusters and RHF/3-21G estane globules. Sustained rates above 5.7 GFLOPS for the SDMM have been achieved for (H2O)200 with 95 Origin 2000 processors. Scalability is found to be limited by load imbalance, which increases with decreasing granularity, due primarily to the inhomogeneous distribution of variable block sizes.
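The kernel idea, multiplying only the block pairs that survive the sparsity pattern, can be illustrated with a serial sketch (the paper's version is data-parallel with non-blocking MPI communication and space-filling-curve data placement; the dictionary-of-blocks layout here is an assumption for illustration):

```python
import numpy as np

def sparse_block_matmul(A, B):
    """Multiply two block-sparse matrices stored as {(row, col): dense block}.

    Only block pairs sharing an inner index contribute, so matrix elements
    that have decayed below the drop tolerance (absent blocks) are never
    touched -- the source of the method's linear scaling.
    """
    # Index B's blocks by their block-row for fast inner-index lookup.
    b_by_row = {}
    for (k, j), blk in B.items():
        b_by_row.setdefault(k, []).append((j, blk))
    C = {}
    for (i, k), a_blk in A.items():
        for j, b_blk in b_by_row.get(k, []):
            if (i, j) in C:
                C[(i, j)] += a_blk @ b_blk
            else:
                C[(i, j)] = a_blk @ b_blk
    return C
```

In the parallel version described in the abstract, the outer loops would be distributed over processors, with non-blocking sends of remote blocks overlapping the local block products.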
Large Scale Density Estimation of Blue and Fin Whales (LSD)
2015-09-30
DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Large Scale Density Estimation of Blue and Fin Whales ...sensors, or both. The goal of this research is to develop and implement a new method for estimating blue and fin whale density that is effective over...develop and implement a density estimation methodology for quantifying blue and fin whale abundance from passive acoustic data recorded on sparse
2013-09-30
underwater acoustic communication technologies for autonomous distributed underwater networks, through innovative signal processing, coding, and navigation...in real environments, an offshore testbed has been developed to conduct field experiments. The testbed consists of four nodes and has been deployed...Leadership by the Connecticut Technology Council. Dr. Zhaohui Wang joined the faculty of the Department of Electrical and Computer Engineering at
Liang, Yujie; Ying, Rendong; Lu, Zhenqi; Liu, Peilin
2014-01-01
In the design phase of sensor arrays during array signal processing, the estimation performance and system cost are largely determined by array aperture size. In this article, we address the problem of joint direction-of-arrival (DOA) estimation with distributed sparse linear arrays (SLAs) and propose an off-grid synchronous approach based on distributed compressed sensing to obtain a larger array aperture. We focus on the complex source distributions found in practical applications and classify the sources into common and innovation parts according to whether a source's signal impinges on all the SLAs or only a specific one. For each SLA, we construct a corresponding virtual uniform linear array (ULA) to create a random linear mapping between the signals respectively observed by these two arrays. The signal ensembles including the common/innovation sources for different SLAs are abstracted as a joint spatial sparsity model. We use minimization of the concatenated atomic norm via semidefinite programming to solve the problem of joint DOA estimation. Joint calculation of the signals observed by all the SLAs exploits their redundancy caused by the common sources and decreases the required array size. The numerical results illustrate the advantages of the proposed approach. PMID:25420150
NASA Astrophysics Data System (ADS)
Donnellan, A.; Green, J. J.; Bills, B. G.; Goguen, J.; Ansar, A.; Knight, R. L.; Hallet, B.; Scambos, T. A.; Thompson, L. G.; Morin, P. J.
2013-12-01
Mountain glaciers around the world are retreating rapidly, contributing about 20% to present-day sea level rise. Numerous studies have shown that mountain glaciers are sensitive to global environmental change. Temperate-latitude glaciers and snowpack provide water for over 1 billion people. Glaciers are a resource for irrigation and hydroelectric power, but also pose flood and avalanche hazards. Accurate mass balance assessments have been made for only 280 glaciers, yet there are over 130,000 in the World Glacier Inventory. The rate of glacier retreat or advance can be highly variable, is poorly sampled, and inadequately understood. Liquid water from ice front lakes, rain, melt, or sea water and debris from rocks, dust, or pollution interact with glacier ice, often leading to an amplification of warming and further melting. Many mountain glaciers undergo rapid and episodic events that greatly change their mass balance or extent but are sparsely documented. Events include calving, outburst floods, opening of crevasses, or iceberg motion. Spaceborne high-resolution spotlight optical imaging provides a means of clarifying the relationship between the health of mountain glaciers and global environmental change. Digital elevation models (DEMs) can be constructed from a series of images from a range of perspectives collected by staring at a target during a satellite overpass. It is possible to collect imagery for 1800 targets per month in the ±56° latitude range, construct high-resolution DEMs, and monitor changes in high detail over time with a high-resolution optical telescope mounted on the International Space Station (ISS). Snow and ice type, age, and maturity can be inferred from different color bands, as well as the distribution of liquid water. Texture, roughness, albedo, and debris distribution can be estimated by measuring bidirectional reflectance distribution functions (BRDF) and reflectance intensity as a function of viewing angle.
The non-sun-synchronous orbit of the ISS results in varying illumination angles and fix-point spotlight imaging results in varying viewing angles, ideal for viewing steep slopes on glaciers and adjacent areas. Rapid events may be observed in progress by correlating changes in images over a single pass or between passes. We present a working design, data acquisition parameters, science objectives, and data processing strategy for a conceptual instrument, MUIR (Mission to Understand Ice Retreat).
NASA Astrophysics Data System (ADS)
Ruthven, R. C.; Ketcham, R. A.; Kelly, E. D.
2015-12-01
Three-dimensional textural analysis of garnet porphyroblasts and electron microprobe analyses can, in concert, be used to pose novel tests that challenge and ultimately increase our understanding of metamorphic crystallization mechanisms. Statistical analysis of high-resolution X-ray computed tomography (CT) data of garnet porphyroblasts tells us the degree of ordering or randomness of garnets, which can be used to distinguish the rate-limiting factors behind their nucleation and growth. Electron microprobe data for cores, rims, and core-to-rim traverses are used as proxies to ascertain porphyroblast nucleation and growth rates, and the evolution of sample composition during crystallization. MnO concentrations in garnet cores serve as a proxy for the relative timing of nucleation, and rim concentrations test the hypothesis that MnO is in equilibrium sample-wide during the final stages of crystallization, and that concentrations have not been greatly altered by intracrystalline diffusion. Crystal size distributions combined with compositional data can be used to quantify the evolution of nucleation rates and sample composition during crystallization. This study focuses on quartzite schists from the Picuris Mountains with heterogeneous garnet distributions consisting of dense and sparse layers. 3D data shows that the sparse layers have smaller, less euhedral garnets, and petrographic observations show that sparse layers have more quartz and less mica than dense layers. Previous studies on rocks with homogeneously distributed garnet have shown that crystallization rates are diffusion-controlled, meaning that they are limited by diffusion of nutrients to growth and nucleation sites. This research extends this analysis to heterogeneous rocks to determine nucleation and growth rates, and test the assumption of rock-wide equilibrium for some major elements, among a set of compositionally distinct domains evolving in mm- to cm-scale proximity under identical P-T conditions.
Quantum key distribution using card, base station and trusted authority
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nordholt, Jane E.; Hughes, Richard John; Newell, Raymond Thorson
Techniques and tools for quantum key distribution ("QKD") between a quantum communication ("QC") card, base station and trusted authority are described herein. In example implementations, a QC card contains a miniaturized QC transmitter and couples with a base station. The base station provides a network connection with the trusted authority and can also provide electric power to the QC card. When coupled to the base station, after authentication by the trusted authority, the QC card acquires keys through QKD with the trusted authority. The keys can be used to set up secure communication, for authentication, for access control, or for other purposes. The QC card can be implemented as part of a smart phone or other mobile computing device, or the QC card can be used as a fillgun for distribution of the keys.
Quantum key distribution using card, base station and trusted authority
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nordholt, Jane Elizabeth; Hughes, Richard John; Newell, Raymond Thorson
Techniques and tools for quantum key distribution ("QKD") between a quantum communication ("QC") card, base station and trusted authority are described herein. In example implementations, a QC card contains a miniaturized QC transmitter and couples with a base station. The base station provides a network connection with the trusted authority and can also provide electric power to the QC card. When coupled to the base station, after authentication by the trusted authority, the QC card acquires keys through QKD with a trusted authority. The keys can be used to set up secure communication, for authentication, for access control, or for other purposes. The QC card can be implemented as part of a smart phone or other mobile computing device, or the QC card can be used as a fillgun for distribution of the keys.
Sparse reconstruction localization of multiple acoustic emissions in large diameter pipelines
NASA Astrophysics Data System (ADS)
Dubuc, Brennan; Ebrahimkhanlou, Arvin; Salamone, Salvatore
2017-04-01
A sparse reconstruction localization method is proposed, which is capable of localizing multiple acoustic emission events occurring closely in time. The events may be due to a number of sources, such as the growth of corrosion patches or cracks. Such acoustic emissions may yield localization failure if a triangulation method is used. The proposed method is implemented both theoretically and experimentally on large diameter thin-walled pipes. Experimental examples are presented, which demonstrate the failure of a triangulation method when multiple sources are present in this structure, while highlighting the capabilities of the proposed method. The examples are generated from experimental data of simulated acoustic emission events. The data correspond to helical guided ultrasonic waves generated in a 3 m long large diameter pipe by pencil lead breaks on its outer surface. Acoustic emission waveforms are recorded by six sparsely distributed low-profile piezoelectric transducers instrumented on the outer surface of the pipe. The same array of transducers is used for both the proposed and the triangulation method. It is demonstrated that the proposed method is able to localize multiple events occurring closely in time. Furthermore, the matching pursuit algorithm and the basis pursuit denoising approach are each evaluated as potential numerical tools in the proposed sparse reconstruction method.
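Of the two numerical tools mentioned, matching pursuit is the simpler and can be sketched generically (a greedy residual-projection loop over a unit-norm dictionary `D`, not the authors' localization code; in their setting the columns would correspond to modeled AE arrivals for candidate source positions):

```python
import numpy as np

def matching_pursuit(D, y, n_iter):
    """Greedy matching pursuit: approximate y as a sparse combination of
    the (unit-norm) columns of dictionary D.

    Each iteration picks the column most correlated with the residual and
    subtracts its projection, so a few iterations recover a few dominant
    sources -- the multiple-event case where triangulation fails.
    """
    r = y.astype(float).copy()       # residual
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ r               # correlation with every atom
        k = int(np.argmax(np.abs(corr)))
        coeffs[k] += corr[k]         # accumulate the projection coefficient
        r -= corr[k] * D[:, k]       # peel that atom off the residual
    return coeffs
```

Basis pursuit denoising replaces this greedy loop with a convex l1-regularized least-squares problem, trading speed for robustness to noise.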
Jiang, Geng-Ming; Li, Zhao-Liang
2008-11-10
This work intercompared two Bi-directional Reflectance Distribution Function (BRDF) models, the modified Minnaert's model and the RossThick-LiSparse-R model, in the estimation of the directional emissivity in the Middle Infra-Red (MIR) channel from data acquired by the Spinning Enhanced Visible and Infra-Red Imager (SEVIRI) onboard the first Meteosat Second Generation (MSG1). The bi-directional reflectances in SEVIRI channel 4 (3.9 µm) were estimated from the combined MIR and Thermal Infra-Red (TIR) data and then used to estimate the directional emissivity in this channel with the aid of the BRDF models. The results show that: (1) both models can describe relatively well the non-Lambertian reflective behavior of land surfaces in SEVIRI channel 4; (2) the RossThick-LiSparse-R model is better than the modified Minnaert's model in modeling the bi-directional reflectances, and the directional emissivities modeled by the modified Minnaert's model are always lower than those obtained by the RossThick-LiSparse-R model, with averaged emissivity differences of approximately 0.01 and approximately 0.04 over the vegetated and bare areas, respectively. The use of the RossThick-LiSparse-R model in the estimation of the directional emissivity in the MIR channel is recommended.
47 CFR 74.482 - Station identification.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 4 2010-10-01 2010-10-01 false Station identification. 74.482 Section 74.482..., AUXILIARY, SPECIAL BROADCAST AND OTHER PROGRAM DISTRIBUTIONAL SERVICES Remote Pickup Broadcast Stations § 74.482 Station identification. (a) Each remote pickup broadcast station shall be identified by the...
47 CFR 74.582 - Station identification.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 4 2010-10-01 2010-10-01 false Station identification. 74.582 Section 74.582..., AUXILIARY, SPECIAL BROADCAST AND OTHER PROGRAM DISTRIBUTIONAL SERVICES Aural Broadcast Auxiliary Stations § 74.582 Station identification. (a) Each aural broadcast STL or intercity relay station, when...
47 CFR 74.433 - Temporary authorizations.
Code of Federal Regulations, 2014 CFR
2014-10-01
... identification number of the associated broadcast station or stations, call letters of remote pickup station (if..., AUXILIARY, SPECIAL BROADCAST AND OTHER PROGRAM DISTRIBUTIONAL SERVICES Remote Pickup Broadcast Stations § 74.433 Temporary authorizations. (a) Special temporary authority may be granted for remote pickup station...
47 CFR 74.433 - Temporary authorizations.
Code of Federal Regulations, 2012 CFR
2012-10-01
... identification number of the associated broadcast station or stations, call letters of remote pickup station (if..., AUXILIARY, SPECIAL BROADCAST AND OTHER PROGRAM DISTRIBUTIONAL SERVICES Remote Pickup Broadcast Stations § 74.433 Temporary authorizations. (a) Special temporary authority may be granted for remote pickup station...
47 CFR 74.433 - Temporary authorizations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... identification number of the associated broadcast station or stations, call letters of remote pickup station (if..., AUXILIARY, SPECIAL BROADCAST AND OTHER PROGRAM DISTRIBUTIONAL SERVICES Remote Pickup Broadcast Stations § 74.433 Temporary authorizations. (a) Special temporary authority may be granted for remote pickup station...
47 CFR 74.433 - Temporary authorizations.
Code of Federal Regulations, 2013 CFR
2013-10-01
... identification number of the associated broadcast station or stations, call letters of remote pickup station (if..., AUXILIARY, SPECIAL BROADCAST AND OTHER PROGRAM DISTRIBUTIONAL SERVICES Remote Pickup Broadcast Stations § 74.433 Temporary authorizations. (a) Special temporary authority may be granted for remote pickup station...
Murakami, Y; Hashimoto, S; Taniguchi, K; Nagai, M
1999-12-01
To describe the characteristics of monitoring stations for the infectious disease surveillance system in Japan, we compared the distributions of the number of monitoring stations in terms of population, region, size of medical institution, and medical specialty. The distributions of annual number of reported cases in terms of the type of diseases, the size of medical institution, and medical specialty were also compared. We conducted a nationwide survey of the pediatrics stations (16 diseases), ophthalmology stations (3 diseases) and the stations of sexually transmitted diseases (STD) (5 diseases) in Japan. In the survey, we collected the data of monitoring stations and the annual reported cases of diseases. We also collected the data on the population, served by the health center where the monitoring stations existed, from the census. First, we compared the difference between the present number of monitoring stations and the current standard established by the Ministry of Health and Welfare (MHW). Second, we compared the distribution of all medical institutions in Japan and the monitoring stations in terms of the size of the medical institution. Third, we compared the average number of annual reported cases of diseases in terms of the size of medical institution and the medical specialty. In most health centers, the number of monitoring stations achieved the current standard of MHW, while a few health centers had no monitoring station, although they had a large population. Most prefectures also achieved the current standard of MHW, but some prefectures were well below the standard. Among pediatric stations, the sampling proportion of large hospitals was higher than other categories. Among the ophthalmology stations, the sampling proportion of hospitals was higher than other categories. Among the STD stations, the sampling proportion of clinics of obstetrics and gynecology was lower than other categories. 
Except for some diseases, the average number of annually reported cases differed little by type of medical institution. Among STDs, however, there was a great difference in the average number of annually reported cases by medical specialty.
Veira, Andreas; Jackson, Peter L; Ainslie, Bruce; Fudge, Dennis
2013-07-01
This study investigates the development and application of a simple method to calculate annual and seasonal PM2.5 and PM10 background concentrations in small cities and rural areas. The Low Pollution Sectors and Conditions (LPSC) method is based on existing measured long-term data sets and is designed for locations where particulate matter (PM) monitors are only influenced by local anthropogenic emission sources from particular wind sectors. The LPSC method combines the analysis of measured hourly meteorological data, PM concentrations, and geographical emission source distributions. PM background levels emerge from measured data for specific wind conditions, where air parcel trajectories measured at a monitoring station are assumed to have passed over geographic sectors with negligible local emissions. Seasonal and annual background levels were estimated for two monitoring stations in Prince George, Canada, and the method was also applied to four other small cities (Burns Lake, Houston, Quesnel, Smithers) in northern British Columbia. The analysis showed reasonable background concentrations for both monitoring stations in Prince George, whereas annual PM10 background concentrations at two of the other locations and PM2.5 background concentrations at one other location were implausibly high. For those locations where the LPSC method was successful, annual background levels ranged between 1.8 ± 0.1 µg/m3 and 2.5 ± 0.1 µg/m3 for PM2.5, and between 6.3 ± 0.3 µg/m3 and 8.5 ± 0.3 µg/m3 for PM10. Precipitation effects and patterns of seasonal variability in the estimated background concentrations were detectable for all locations where the method was successful. Overall, the method was dependent on the configuration of local geography and sources with respect to the monitoring location, and may fail at some locations and under some conditions.
Where applicable, the LPSC method can provide a fast and cost-efficient way to estimate background PM concentrations for small cities in sparsely populated regions like northern British Columbia. In rural areas like northern British Columbia, particulate matter (PM) monitoring stations are usually located close to emission sources and residential areas in order to assess the PM impact on human health. Thus there is a lack of accurate PM background concentration data that represent PM ambient concentrations in the absence of local emissions. The background calculation method developed in this study uses observed meteorological data as well as local source emission locations and provides annual, seasonal and precipitation-related PM background concentrations that are comparable to literature values for four out of six monitoring stations.
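The core LPSC selection step can be sketched as follows (a simplified illustration of the wind-sector screening only, assuming hourly (wind direction, PM) pairs; the published method additionally conditions on season and precipitation):

```python
def lpsc_background(records, clean_sectors):
    """Estimate a PM background concentration as the mean over hours when
    the wind arrives from sectors with negligible local emissions.

    records: iterable of (wind_dir_deg, pm_concentration) tuples
    clean_sectors: list of (lo_deg, hi_deg) wind sectors, lo < hi
    """
    vals = [pm for wd, pm in records
            if any(lo <= wd % 360.0 < hi for lo, hi in clean_sectors)]
    if not vals:
        raise ValueError("no hours fall within the clean sectors")
    return sum(vals) / len(vals)
```

Hours with wind from sectors containing local sources are simply discarded, so the retained mean approximates the concentration the monitor would see in the absence of local emissions.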
Space Station power distribution and control
NASA Technical Reports Server (NTRS)
Willis, A. H.
1986-01-01
A general description of the Space Station is given, and the basic requirements of the power distribution and control system are presented. The dual bus and branch circuit concepts are discussed, and a computer control method is presented.
Space Station Freedom power management and distribution system design
NASA Technical Reports Server (NTRS)
Teren, Fred
1989-01-01
The design of the Space Station Freedom Power Management and Distribution (PMAD) System is described, along with the significant trade studies that led to the current PMAD system configuration.
The importance of littoral elevation to the distribution of intertidal species has long been a cornerstone of estuarine ecology, and its historical importance to navigation cannot be overstated. However, intertidal elevation measurements have historically been sparse, likely due ...
ERIC Educational Resources Information Center
Lum, Lydia
2007-01-01
Around the country, disabled sports are often treated like second-class siblings to their able-bodied counterparts, largely because the latter bring in prestigious tournaments and bowl games, lucrative TV contracts and national exposure for top athletes and coaches. Because disabled people are so sparsely distributed in the general population, it…
47 CFR 74.783 - Station identification.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., AUXILIARY, SPECIAL BROADCAST AND OTHER PROGRAM DISTRIBUTIONAL SERVICES Low Power TV, TV Translator, and TV Booster Stations § 74.783 Station identification. (a) Each low power TV and TV translator station not..., whose signal is being rebroadcast, to identify the translator station by transmitting an easily readable...
The Spatial Coherence of Interannual Temperature Variations in the Antarctic Peninsula
NASA Technical Reports Server (NTRS)
King, John C.; Comiso, Josefino C.; Koblinsky, Chester J. (Technical Monitor)
2002-01-01
Over 50 years of observations from climate stations on the west coast of the Antarctic Peninsula show that this is a region of extreme interannual variability in near-surface temperatures. The region has also experienced more rapid warming than any other part of the Southern Hemisphere. In this paper we use a new dataset of satellite-derived surface temperatures to define the extent of the region of extreme variability more clearly than was possible using the sparse station data. The region in which satellite surface temperatures correlate strongly with west Peninsula station temperatures is found to be quite small and is largely confined to the seas just west of the Peninsula, with a northward and eastward extension into the Scotia Sea and a southward extension onto the western slopes of Palmer Land. Correlation of Peninsula surface temperatures with surface temperatures over the rest of continental Antarctica is poor, confirming that the west Peninsula is in a different climate regime. The analysis has been used to identify sites where ice core proxy records might be representative of variations on the west coast of the Peninsula. Of the five existing core sites examined, only one is likely to provide a representative record for the west coast.
NASA Astrophysics Data System (ADS)
Chen, Y.; Xu, X.
2017-12-01
The broadband Lg 1/Q tomographic models in eastern Eurasia are inverted from source- and site-corrected path 1/Q data. The path 1/Q are measured between stations (or events) by the two-station (TS), reverse two-station (RTS) and reverse two-event (RTE) methods, respectively. Because path 1/Q are computed using the logarithm of the product of observed spectral ratios and a simplified 1D geometrical spreading correction, they are subject to "modeling errors" dominated by uncompensated 3D structural effects. We found in Chen and Xie [2017] that these errors closely follow a normal distribution after the long-tailed outliers are screened out (similar to teleseismic travel time residuals). We thus rigorously analyze the statistics of these errors collected from repeated samplings of station (and event) pairs from 1.0 to 10.0 Hz and reject about 15% of the data as outliers at each frequency band. The resultant variance of the path 1/Q data decreases with frequency as 1/f². The 1/Q tomography using screened data is now a stochastic inverse problem whose solutions approximate the means of Gaussian random variables, and the model covariance matrix is that of Gaussian variables with well-known statistical behavior. We adopt a new SVD-based tomographic method to solve for the 2D Q image together with its resolution and covariance matrices. The RTS and RTE methods yield the most reliable 1/Q data, free of source and site effects, but the path coverage is rather sparse due to the very strict recording geometry. The TS method absorbs the effects of non-unit site response ratios into the 1/Q data. The RTS method also yields site responses, which can then be corrected from the path 1/Q of the TS method to make them also free of site effects. The site-corrected TS data substantially improve path coverage, allowing us to solve for 1/Q tomography up to 6.0 Hz. The model resolution and uncertainty are first quantitatively assessed by spread functions (derived from the resolution matrix) and the covariance matrix.
The reliably retrieved Q models correlate well with distinct tectonic blocks shaped by the most recent major deformations and vary with frequency. With the 1/Q tomographic model and its covariance matrix, we can formally estimate the uncertainty of any path-specific Lg 1/Q prediction. This new capability significantly benefits source estimation, for which a reliable uncertainty estimate is especially important.
Joint distribution of temperature and precipitation in the Mediterranean, using the Copula method
NASA Astrophysics Data System (ADS)
Lazoglou, Georgia; Anagnostopoulou, Christina
2018-03-01
This study analyses the temperature and precipitation dependence among stations in the Mediterranean. The first station group is located in the eastern Mediterranean (EM) and includes two stations, Athens and Thessaloniki, while the western (WM) group includes Malaga and Barcelona. The data were organized into two five-month periods: a hot-dry period and a cold-wet one. The analysis is based on a statistical technique new to climatology: the Copula method. Firstly, the calculation of the Kendall tau correlation index showed that temperatures among stations are dependent during both periods, whereas precipitation shows dependence only between stations within the EM or within the WM, and only during the cold-wet period. Accordingly, the marginal distributions were calculated for each station, as they are required by the copula method. Finally, several copula families, both Archimedean and Elliptical, were tested in order to choose the most appropriate one for modeling the relation of the studied data sets. This study thus succeeds in modeling the dependence of the main climate parameters (temperature and precipitation) with the Copula method. The Frank copula was identified as the best family to describe the joint distribution of temperature for the majority of station groups. For precipitation, the best copula families are BB1 and Survival Gumbel. Using the probability distribution diagrams, the probability of a combination of temperature and precipitation values between stations is estimated.
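The tau-inversion step that underlies fitting an Archimedean family such as the Frank copula can be sketched as follows; the paired data, sample size and bracketing interval are illustrative assumptions:

```python
import numpy as np
from scipy import integrate, optimize, stats

rng = np.random.default_rng(1)

# Toy paired data standing in for, e.g., temperatures at two stations.
x = rng.normal(size=500)
y = 0.7 * x + rng.normal(scale=0.7, size=500)

tau, _ = stats.kendalltau(x, y)

def frank_tau(theta):
    """Kendall's tau implied by a Frank copula with parameter theta > 0."""
    # Debye-type integral: D1(theta) = (1/theta) * int_0^theta t/(e^t - 1) dt
    d1, _ = integrate.quad(lambda t: t / np.expm1(t), 0.0, theta)
    return 1.0 + 4.0 * (d1 / theta - 1.0) / theta

# Invert tau(theta) = tau_hat by root finding (positive dependence assumed).
theta_hat = optimize.brentq(lambda th: frank_tau(th) - tau, 1e-6, 50.0)
```

With the parameter fitted, joint probabilities of temperature-precipitation combinations follow by plugging the stations' marginal CDF values into the copula.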
Cardone, A.; Bornstein, A.; Pant, H. C.; Brady, M.; Sriram, R.; Hassan, S. A.
2015-01-01
A method is proposed to study protein-ligand binding in a system governed by specific and non-specific interactions. Strong associations lead to narrow distributions in the protein's configuration space; weak and ultra-weak associations lead instead to broader distributions, a manifestation of non-specific, sparsely populated binding modes with multiple interfaces. The method is based on the notion that a discrete set of preferential first-encounter modes are metastable states from which stable (pre-relaxation) complexes at equilibrium evolve. The method can be used to explore alternative pathways of complexation with statistical significance and can be integrated into a general algorithm to study protein interaction networks. The method is applied to a peptide-protein complex. The peptide adopts several low-population conformers and binds in a variety of modes with a broad range of affinities. The system is thus well suited to analyzing general features of binding, including conformational selection, multiplicity of binding modes, and nonspecific interactions, and to illustrating how the method can be applied to study these problems systematically. The equilibrium distributions can be used to generate biasing functions for simulations of multiprotein systems from which bulk thermodynamic quantities can be calculated. PMID:25782918
Bazzo, João Paulo; Pipa, Daniel Rodrigues; da Silva, Erlon Vagner; Martelli, Cicero; Cardozo da Silva, Jean Carlos
2016-01-01
This paper presents an image reconstruction method to monitor the temperature distribution of electric generator stators. The main objective is to identify insulation failures that may arise as hotspots in the structure. The method is based on temperature readings from fiber-optic distributed temperature sensors (DTS) and a sparse reconstruction algorithm. Thermal images of the structure are formed by appropriately combining atoms of a dictionary of hotspots, which was constructed by finite element simulation with a multiphysics model. Due to the difficulty of reproducing insulation faults in a real stator structure, experimental tests were performed using a prototype similar to the real structure. The results demonstrate the ability of the proposed method to reconstruct images of hotspots with dimensions down to 15 cm, representing a resolution gain of up to six times compared to the DTS spatial resolution. In addition, satisfactory results were also obtained in detecting hotspots of only 5 cm. The application of the proposed algorithm to thermal imaging of generator stators can contribute to the identification of insulation faults at early stages, thereby avoiding catastrophic damage to the structure. PMID:27618040
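Sparse reconstruction over a dictionary of hotspot atoms can be sketched with orthogonal matching pursuit, used here as a stand-in since the paper does not specify its algorithm; the Gaussian atom shapes, positions and noise level are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 200, 40                          # DTS samples along the fiber, dictionary atoms

# Hypothetical dictionary: Gaussian "hotspot" atoms at k candidate positions.
x = np.linspace(0.0, 1.0, n)
centers = np.linspace(0.0, 1.0, k)
D = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * 0.02**2))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms

# Measurement: two hotspots plus sensor noise.
truth = np.zeros(k)
truth[[8, 25]] = [1.0, 0.6]
y = D @ truth + rng.normal(0.0, 0.01, n)

# Orthogonal matching pursuit: greedily pick atoms, refit by least squares.
support, r = [], y.copy()
for _ in range(2):
    support.append(int(np.argmax(np.abs(D.T @ r))))
    coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
    r = y - D[:, support] @ coef
```

The recovered support pinpoints the hotspot positions at a resolution finer than the raw sensor spacing, which is the mechanism behind the reported resolution gain.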
ESTIMATION OF FUNCTIONALS OF SPARSE COVARIANCE MATRICES.
Fan, Jianqing; Rigollet, Philippe; Wang, Weichen
High-dimensional statistical tests often ignore correlations to gain simplicity and stability, leading to null distributions that depend on functionals of correlation matrices such as their Frobenius norm and other ℓr norms. Motivated by the computation of critical values of such tests, we investigate the difficulty of estimating functionals of sparse correlation matrices. Specifically, we show that simple plug-in procedures based on thresholded estimators of correlation matrices are sparsity-adaptive and minimax optimal over a large class of correlation matrices. Akin to previous results on functional estimation, the minimax rates exhibit an elbow phenomenon. Our results are further illustrated on simulated data as well as in an empirical study of data arising in financial econometrics.
ESTIMATION OF FUNCTIONALS OF SPARSE COVARIANCE MATRICES
Fan, Jianqing; Rigollet, Philippe; Wang, Weichen
2016-01-01
High-dimensional statistical tests often ignore correlations to gain simplicity and stability, leading to null distributions that depend on functionals of correlation matrices such as their Frobenius norm and other ℓr norms. Motivated by the computation of critical values of such tests, we investigate the difficulty of estimating functionals of sparse correlation matrices. Specifically, we show that simple plug-in procedures based on thresholded estimators of correlation matrices are sparsity-adaptive and minimax optimal over a large class of correlation matrices. Akin to previous results on functional estimation, the minimax rates exhibit an elbow phenomenon. Our results are further illustrated on simulated data as well as in an empirical study of data arising in financial econometrics. PMID:26806986
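A sketch of the plug-in procedure, assuming entrywise hard thresholding at the usual sqrt(log p / n) scale (the constant 2.0, dimensions and true correlation structure are illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)
p, n = 50, 200

# Sparse true correlation matrix: identity plus a few off-diagonal entries.
R = np.eye(p)
R[0, 1] = R[1, 0] = 0.5
R[2, 3] = R[3, 2] = -0.4

X = rng.multivariate_normal(np.zeros(p), R, size=n)
R_hat = np.corrcoef(X, rowvar=False)

# Entrywise hard threshold at the sqrt(log p / n) level (constant assumed).
t = 2.0 * np.sqrt(np.log(p) / n)
R_thr = np.where(np.abs(R_hat) >= t, R_hat, 0.0)
np.fill_diagonal(R_thr, 1.0)

# Plug-in estimate of the squared off-diagonal Frobenius norm functional.
off = R_thr - np.diag(np.diag(R_thr))
frob2_hat = np.sum(off**2)
frob2_true = 2 * (0.5**2 + 0.4**2)      # = 0.82 for this toy matrix
```

Thresholding kills the many near-zero sample correlations whose squared values would otherwise accumulate into a large bias of the plug-in functional.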
Bayesian X-ray computed tomography using a three-level hierarchical prior model
NASA Astrophysics Data System (ADS)
Wang, Li; Mohammad-Djafari, Ali; Gac, Nicolas
2017-06-01
In recent decades, X-ray Computed Tomography (CT) image reconstruction has been extensively developed in both the medical and industrial domains. In this paper, we propose using the Bayesian inference approach with a new hierarchical prior model. In the proposed model, a generalised Student-t distribution is used to enforce sparsity of the Haar transform of the image. Comparisons with some state-of-the-art methods are presented. It is shown that by using the proposed model, sparsity of the image's sparse representation is enforced, so that image edges are preserved. Simulation results are also provided to demonstrate the effectiveness of the new hierarchical model for reconstruction with fewer projections.
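The role of a generalised Student-t prior can be illustrated through its usual hierarchical construction as a Gaussian scale mixture, which is what makes such three-level models tractable; the hyperparameter values below are assumptions, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hierarchical sparsity prior: each Haar coefficient z_i is Gaussian with its own
# variance v_i, and v_i follows an Inverse-Gamma hyperprior.  Marginally z_i is a
# generalised Student-t: sharply peaked at zero with heavy tails.
alpha, beta = 1.5, 0.5                                  # hyperparameters (assumed)
v = 1.0 / rng.gamma(alpha, 1.0 / beta, size=100_000)    # v ~ InvGamma(alpha, beta)
z = rng.normal(0.0, np.sqrt(v))                         # z | v ~ N(0, v)

# Heavy tails relative to a Gaussian with the same interquartile range.
iqr_sigma = (np.quantile(z, 0.75) - np.quantile(z, 0.25)) / 1.349
g = rng.normal(0.0, iqr_sigma, 100_000)
tail_t = np.mean(np.abs(z) > 5.0)
tail_g = np.mean(np.abs(g) > 5.0)
```

The heavy tails let a few transform coefficients stay large (preserving edges) while the sharp peak pushes the rest toward zero, which is the sparsity-enforcing behaviour the abstract describes.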
Weiss, Christian; Zoubir, Abdelhak M
2017-05-01
We propose a compressed sampling and dictionary learning framework for fiber-optic sensing using wavelength-tunable lasers. A redundant dictionary is generated from a model for the reflected sensor signal. Imperfect prior knowledge is considered in terms of uncertain local and global parameters. To estimate a sparse representation and the dictionary parameters, we present an alternating minimization algorithm that is equipped with a preprocessing routine to handle dictionary coherence. The support of the obtained sparse signal indicates the reflection delays, which can be used to measure impairments along the sensing fiber. The performance is evaluated by simulations and experimental data for a fiber sensor system with common core architecture.
Code of Federal Regulations, 2012 CFR
2012-10-01
...) Communication-satellite earth station complex. The term communication-satellite earth station complex includes transmitters, receivers, and communications antennas at the earth station site together with the... communication to terrestrial distribution system(s). (e) Communication-satellite earth station complex functions...
Code of Federal Regulations, 2011 CFR
2011-10-01
...) Communication-satellite earth station complex. The term communication-satellite earth station complex includes transmitters, receivers, and communications antennas at the earth station site together with the... communication to terrestrial distribution system(s). (e) Communication-satellite earth station complex functions...
A framework for building real-time expert systems
NASA Technical Reports Server (NTRS)
Lee, S. Daniel
1991-01-01
The Space Station Freedom is an example of a complex system that requires both traditional and artificial intelligence (AI) real-time methodologies. It was mandated that Ada be used for all new software development projects. The station also requires distributed processing. Catastrophic failures on the station can cause the transmission system to malfunction for a long period of time, during which ground-based expert systems cannot provide any assistance in a crisis situation on the station. This is even more critical for other NASA projects that would have longer transmission delays (e.g., a lunar base, Mars missions, etc.). To address these issues, a distributed agent architecture (DAA) is proposed that can support a variety of paradigms based on both traditional real-time computing and AI. The proposed testbed for DAA is an autonomous power expert (APEX), a real-time monitoring and diagnosis expert system for the electrical power distribution system of the space station.
Time-Frequency Signal Representations Using Interpolations in Joint-Variable Domains
2016-06-14
For comparison, we include sparse reconstruction as well as the Wigner-Ville distribution (WVD) and the Choi-Williams distribution (CWD), which are directly applied to the interpolated data.
Liao, Ke; Zhu, Min; Ding, Lei
2013-08-01
The present study investigated the use of transform sparseness of cortical current density on the human brain surface to improve electroencephalography/magnetoencephalography (EEG/MEG) inverse solutions. Transform sparseness was assessed by evaluating the compressibility of cortical current densities in transform domains. To do that, a structure compression method from computer graphics was first adopted to compress cortical surface structure, either regular or irregular, into hierarchical multi-resolution meshes. Then, a new face-based wavelet method based on the generated multi-resolution meshes was proposed to compress current density functions defined on cortical surfaces. Twelve cortical surface models were built with three EEG/MEG software packages, and their structural compressibility was evaluated and compared using the proposed method. Monte Carlo simulations were implemented to evaluate the performance of the proposed wavelet method in compressing various cortical current density distributions, as compared to two available vertex-based wavelet methods. The present results indicate that the face-based wavelet method achieves higher transform sparseness than vertex-based wavelet methods. Furthermore, basis functions from the face-based wavelet method have lower coherence against typical EEG and MEG measurement systems than those from vertex-based methods. Both high transform sparseness and low-coherence measurements suggest that the proposed face-based wavelet method can improve the performance of L1-norm regularized EEG/MEG inverse solutions, which was further demonstrated in simulations and experimental setups using MEG data. This new transform on complicated cortical structures is thus promising for significantly advancing EEG/MEG inverse source imaging technologies.
Zhang, Yong; Wang, Qing; Jiang, Xinyuan
2017-01-01
The wide-lane and narrow-lane Uncalibrated Phase Delays (UPDs) of satellites are estimated in real time from data received from regional reference station networks. The properties of the real-time UPD product and its influence on real-time precise point positioning ambiguity resolution (RTPPP-AR) are experimentally analyzed using real-time data from the regional Continuously Operating Reference Stations (CORS) networks located in Tianjin, Shanghai, Hong Kong, etc. The results show that the real-time wide-lane and narrow-lane UPD products differ significantly in their time-domain characteristics: the wide-lane UPDs are stable from day to day, with a change rate of less than 0.1 cycle/day, while the narrow-lane UPDs are stable only in the short term, changing significantly within one day. The UPD products generated by different regional networks have clear spatial characteristics and thus significantly influence RTPPP-AR: adopting real-time UPD products estimated from the sparse stations of the regional network is favorable for improving the regional RTPPP-AR, up to 99%, while real-time UPD products from different regional networks have only a slight influence on PPP-AR positioning accuracy. After ambiguities are successfully fixed, the real-time dynamic RTPPP-AR positioning accuracy is better than 3 cm in the horizontal plane and 8 cm in the vertical direction. PMID:28534844
Zhang, Yong; Wang, Qing; Jiang, Xinyuan
2017-05-19
The wide-lane and narrow-lane Uncalibrated Phase Delays (UPDs) of satellites are estimated in real time from data received from regional reference station networks. The properties of the real-time UPD product and its influence on real-time precise point positioning ambiguity resolution (RTPPP-AR) are experimentally analyzed using real-time data from the regional Continuously Operating Reference Stations (CORS) networks located in Tianjin, Shanghai, Hong Kong, etc. The results show that the real-time wide-lane and narrow-lane UPD products differ significantly in their time-domain characteristics: the wide-lane UPDs are stable from day to day, with a change rate of less than 0.1 cycle/day, while the narrow-lane UPDs are stable only in the short term, changing significantly within one day. The UPD products generated by different regional networks have clear spatial characteristics and thus significantly influence RTPPP-AR: adopting real-time UPD products estimated from the sparse stations of the regional network is favorable for improving the regional RTPPP-AR, up to 99%, while real-time UPD products from different regional networks have only a slight influence on PPP-AR positioning accuracy. After ambiguities are successfully fixed, the real-time dynamic RTPPP-AR positioning accuracy is better than 3 cm in the horizontal plane and 8 cm in the vertical direction.
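Wide-lane UPD estimation conventionally builds on the Melbourne-Wubbena combination of dual-frequency phase and code observables; the sketch below uses the standard GPS L1/L2 frequencies and is a generic textbook combination, not the authors' implementation:

```python
# GPS L1/L2 frequencies and the wide-lane wavelength (~86 cm).
C = 299_792_458.0
f1, f2 = 1575.42e6, 1227.60e6
lam_wl = C / (f1 - f2)

def mw_widelane_cycles(L1, L2, P1, P2):
    """Melbourne-Wubbena combination -> float wide-lane ambiguity in cycles.

    L1, L2: carrier-phase ranges in metres; P1, P2: code ranges in metres.
    Geometry, clocks and first-order ionosphere cancel; what remains is the
    wide-lane ambiguity plus noise and the uncalibrated phase delay (UPD).
    """
    l_wl = (f1 * L1 - f2 * L2) / (f1 - f2)        # wide-lane phase (m)
    p_nl = (f1 * P1 + f2 * P2) / (f1 + f2)        # narrow-lane code (m)
    return (l_wl - p_nl) / lam_wl
```

Averaging this float value over an epoch and removing the network-derived wide-lane UPD is what allows the ambiguity to be fixed to an integer in RTPPP-AR.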
NASA Astrophysics Data System (ADS)
van Osnabrugge, B.; Weerts, A. H.; Uijlenhoet, R.
2017-11-01
To enable operational flood forecasting and drought monitoring, reliable and consistent methods for precipitation interpolation are needed. Such methods need to deal with the deficiencies of sparse operational real-time data compared to quality-controlled offline data sources used in historical analyses. In particular, often only a fraction of the measurement network reports in near real-time. For this purpose, we present an interpolation method, generalized REGNIE (genRE), which makes use of climatological monthly background grids derived from existing gridded precipitation climatology data sets. We show how genRE can be used to mimic and extend climatological precipitation data sets in near real-time using (sparse) real-time measurement networks in the Rhine basin upstream of the Netherlands (approximately 160,000 km2). In the process, we create a 1.2 × 1.2 km transnational gridded hourly precipitation data set for the Rhine basin. Precipitation gauge data are collected, spatially interpolated for the period 1996-2015 with genRE and inverse-distance squared weighting (IDW), and then evaluated on the yearly and daily time scale against the HYRAS and EOBS climatological data sets. Hourly fields are compared qualitatively with RADOLAN radar-based precipitation estimates. Two sources of uncertainty are evaluated: station density and the impact of different background grids (HYRAS versus EOBS). The results show that the genRE method successfully mimics climatological precipitation data sets (HYRAS/EOBS) over daily, monthly, and yearly time frames. We conclude that genRE is a good interpolation method of choice for real-time operational use. genRE has the largest added value over IDW for cases with a low real-time station density and a high-resolution background grid.
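The genRE idea of anchoring sparse real-time gauges to a climatological background can be sketched as follows; this simplification (inverse-distance weighting of observation-to-background ratios, then rescaling by the background grid) is an assumed reading of the method's core, not its full definition:

```python
import numpy as np

def genre_interpolate(xy_obs, p_obs, xy_grid, bg_obs, bg_grid, power=2.0):
    """genRE-style interpolation sketch.

    Ratios of observed precipitation to the climatological background at the
    gauges are interpolated with inverse-distance weighting, then multiplied
    by the background grid, so the climatology fills in between sparse gauges.
    """
    ratio = p_obs / np.where(bg_obs > 0, bg_obs, 1.0)
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    w /= w.sum(axis=1, keepdims=True)
    return bg_grid * (w @ ratio)
```

Because the background grid carries the climatological spatial pattern, the scheme degrades gracefully when only a fraction of the network reports, which is exactly the sparse real-time situation the study targets.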
Mapping of the Land Cover Spatiotemporal Characteristics in Northern Russia Caused by Climate Change
NASA Astrophysics Data System (ADS)
Panidi, E.; Tsepelev, V.; Torlopova, N.; Bobkov, A.
2016-06-01
The study is devoted to the investigation of regional climate change in Northern Russia. Owing to the sparseness of the meteorological observation network in northern regions, we investigate the capability of remotely sensed vegetation cover to serve as an indicator of climate change at the regional scale. In previous studies, we identified a statistically significant relationship between the increase of surface air temperature and the increase of shrub vegetation productivity. We verified this relationship using ground observation data collected at the meteorological stations and Normalised Difference Vegetation Index (NDVI) data produced from Terra/MODIS satellite imagery. Additionally, we designed a technique for separating growing seasons for detailed investigation of the land cover (shrub cover) dynamics. Growing seasons are the periods when the temperature exceeds +5°C and +10°C; these periods determine the vegetation productivity conditions (i.e., conditions that allow growth of the phytomass). We have found that the trend signs for the surface air temperature and NDVI coincide on plains and river floodplains. At the current stage of the study, we are working on an automated mapping technique for estimating the direction and magnitude of climate change in Northern Russia. This technique will make it possible to extrapolate the identified relationship between land cover and climate onto territories with a sparse network of meteorological stations. We have produced gridded maps of NDVI and NDWI for a test area in the European part of Northern Russia covered with shrub vegetation. Based on these maps, we can determine the frames of the growing seasons for each grid cell, which will let us obtain gridded maps of the NDVI linear trend for growing seasons on a cell-by-cell basis. The trend maps can be used as indicative maps for estimating climate change in the studied areas.
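The growing-season separation described above reduces to a simple threshold rule on a daily temperature series; a minimal sketch (bounding the season by the first and last day above threshold is an assumed simplification):

```python
import numpy as np

def growing_season(daily_temp, threshold=5.0):
    """Frames (first/last day index) of the period with temperature above threshold.

    The season is bounded by the first and last day on which the daily mean
    exceeds the threshold (+5 C or +10 C in the study); None if never exceeded.
    """
    above = np.flatnonzero(np.asarray(daily_temp) > threshold)
    if above.size == 0:
        return None
    return int(above[0]), int(above[-1])
```

Applying the same rule per grid cell gives the per-cell season frames over which the NDVI linear trend is then computed.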
Li, Tianxin; Zhou, Xing Chen; Ikhumhen, Harrison Odion; Difei, An
2018-05-01
In recent years, with the significant increase in urban development, it has become necessary to optimize the current air monitoring stations so that they reflect the quality of air in the environment. To highlight the spatial representativeness of the air monitoring stations, Beijing's regional air monitoring station data from 2012 to 2014 were used to calculate the monthly mean particulate matter (PM10) concentration in the region; through the IDW interpolation method and a spatial grid statistical method in GIS, the spatial distribution of PM10 concentration over the whole region was derived. The spatial distribution variation across the districts of Beijing was analyzed with a gridding model (1.5 km × 1.5 km cell resolution), and the three-year spatial analysis of PM10 concentration data, including its variation and spatial overlay, showed how frequently the PM10 concentration exceeded the standard across the region. It is therefore important to optimize the layout of the existing air monitoring stations by combining the concentration distribution of air pollutants with the spatial region using GIS.
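The IDW-plus-grid-statistics workflow can be sketched as below; the exceedance limit of 150 µg/m³ is an assumed illustrative value, and the station layout is hypothetical:

```python
import numpy as np

def idw(xy_obs, z_obs, xy_grid, power=2.0):
    """Inverse-distance-weighted interpolation of station values onto grid cells."""
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    w /= w.sum(axis=1, keepdims=True)
    return w @ z_obs

def exceedance_frequency(monthly_grids, limit=150.0):
    """Per-cell fraction of months exceeding a limit value (limit assumed)."""
    g = np.asarray(monthly_grids)
    return (g > limit).mean(axis=0)
```

Stacking one interpolated grid per month and taking the per-cell exceedance fraction reproduces the kind of spatial-overlay frequency map the study uses to judge station layout.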
NASA Astrophysics Data System (ADS)
Snyder, A.; Dietterich, T.; Selker, J. S.
2017-12-01
Many regions of the world lack ground-based weather data due to inadequate or unreliable weather station networks. For example, most countries in Sub-Saharan Africa have unreliable, sparse networks of weather stations. The absence of these data has consequences for weather forecasting, prediction of severe weather events, agricultural planning, and climate change monitoring. The Trans-African Hydro-Meteorological Observatory (TAHMO.org) project seeks to address these problems by deploying and operating a large network of weather stations throughout Sub-Saharan Africa. To design the TAHMO network, we must determine where to place weather stations within each country. We should consider how to create accurate spatio-temporal maps of weather data and how to balance the desired accuracy of each weather variable of interest (precipitation, temperature, relative humidity, etc.). We can express this problem as a joint optimization over multiple weather variables, given a fixed number of weather stations. We use reanalysis data as the best available representation of the "true" weather patterns in the region of interest. For each possible combination of sites, we interpolate the reanalysis data between the selected locations and calculate the mean absolute error between the reanalysis ("true") data and the interpolated data. In order to formulate our multi-variate optimization problem, we explore different methods of weighting each weather variable in the objective function. These methods include systematic variation of the weights to determine which weather variables have the strongest influence on the network design, as well as combinations targeted at specific purposes. For example, we can use computed evapotranspiration as a metric that combines many weather variables in a way that is meaningful for agricultural and hydrological applications. We compare the errors of the weather station networks produced by each formulation of the optimization problem.
We also compare these errors to those of manually designed weather station networks in West Africa, planned by the respective host countries' meteorological agencies.
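One concrete instance of the site-selection search, assuming IDW interpolation and a single pre-combined error measure, is a greedy forward selection; the exhaustive "each possible combination" search in the abstract is more general, so this is only a sketch:

```python
import numpy as np

def greedy_network(xy_candidates, truth_fields, n_stations, power=2.0):
    """Greedily choose station sites to minimize interpolation error.

    truth_fields: (n_fields, n_candidates) reanalysis values at candidate sites,
    standing in for the "true" weather patterns.  At each step, add the candidate
    that most reduces the mean absolute error between the true fields and fields
    IDW-interpolated from the chosen sites.
    """
    chosen = []
    for _ in range(n_stations):
        best, best_err = None, np.inf
        for c in range(len(xy_candidates)):
            if c in chosen:
                continue
            trial = chosen + [c]
            d = np.linalg.norm(
                xy_candidates[:, None, :] - xy_candidates[trial][None, :, :], axis=2)
            w = 1.0 / np.maximum(d, 1e-9) ** power
            w /= w.sum(axis=1, keepdims=True)
            interp = truth_fields[:, trial] @ w.T
            err = np.mean(np.abs(interp - truth_fields))
            if err < best_err:
                best, best_err = c, err
        chosen.append(best)
    return chosen
```

Different weightings of the variables simply change how `truth_fields` rows are scaled before the error is averaged, which is where the multi-variable trade-off enters.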
Digital Correlation In Laser-Speckle Velocimetry
NASA Technical Reports Server (NTRS)
Gilbert, John A.; Mathys, Donald R.
1992-01-01
Periodic recording helps to eliminate spurious results. An improved digital-correlation process extracts the velocity field of a two-dimensional flow from laser-speckle images of seed particles distributed sparsely in the flow. The method, which involves digital correlation of images recorded at unequal intervals, is completely automated and has the potential to be the fastest yet.
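The core of digital speckle correlation is locating the cross-correlation peak between successive images; a minimal FFT-based sketch for integer-pixel shifts (subpixel refinement and interrogation-window tiling are omitted):

```python
import numpy as np

def speckle_displacement(img0, img1):
    """Integer-pixel displacement between two speckle subimages, from the peak
    of their FFT-based circular cross-correlation."""
    f0 = np.fft.fft2(img0 - img0.mean())
    f1 = np.fft.fft2(img1 - img1.mean())
    xcorr = np.fft.ifft2(f0.conj() * f1).real
    dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Wrap shifts beyond half the image size to negative displacements.
    if dy > img0.shape[0] // 2:
        dy -= img0.shape[0]
    if dx > img0.shape[1] // 2:
        dx -= img0.shape[1]
    return dy, dx
```

Dividing the recovered displacement by the (possibly unequal) recording interval yields the local velocity, which is why unequal intervals pose no problem for the correlation step itself.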
Bayesian Semiparametric Structural Equation Models with Latent Variables
ERIC Educational Resources Information Center
Yang, Mingan; Dunson, David B.
2010-01-01
Structural equation models (SEMs) with latent variables are widely useful for sparse covariance structure modeling and for inferring relationships among latent variables. Bayesian SEMs are appealing in allowing for the incorporation of prior information and in providing exact posterior distributions of unknowns, including the latent variables. In…
Physically Based Mountain Hydrological Modelling using Reanalysis Data in Patagonia
NASA Astrophysics Data System (ADS)
Krogh, S.; Pomeroy, J. W.; McPhee, J. P.
2013-05-01
Remote regions in South America are often characterized by insufficient meteorological observations for robust hydrological model operation. Yet water resources must be quantified, understood and predicted in order to develop effective water management policies. Here, we developed a physically based hydrological model for a major river in Patagonia using the modular Cold Regions Hydrological Modelling Platform (CRHM) in order to better understand the hydrological processes leading to streamflow generation in this remote region. The Baker River, which has the largest mean annual streamflow in Chile, drains snowy mountains, glaciers, wet forests, peat and semi-arid pampas into a large lake. Meteorology over the basin is poorly monitored: there are no high-elevation weather stations, and stations at low elevations are sparsely distributed, measure only temperature and rainfall, and are poorly maintained. Streamflow in the basin is gauged at several points with high-quality hydrometric stations. In order to quantify the impact of meteorological data scarcity on prediction, two additional data sources were used: the ERA-Interim (ECMWF Re-analysis) and CFSR (Climate Forecast System Reanalysis) atmospheric reanalyses. The temporal distribution and magnitude of precipitation from the models and observations were compared, and the reanalysis data were found to have about three times as many days with precipitation as the observations. Better synchronization was found between measured peak streamflows and modeled precipitation than with observed precipitation. These differences are attributed to: (i) the lack of any snowfall observations (so the precipitation records do not include snowfall events) and (ii) the fact that the available rainfall observations are all located at low altitude (<500 m a.s.l.) and miss high-altitude precipitation events.
CRHM parameterization was undertaken by using local physiographic and vegetation characteristics where available and transferring locally unknown hydrological process parameters from cold-regions mountain environments in Canada. Some soil moisture parameters were calibrated from streamflow observations. Model performance was estimated through comparison with observed streamflow records. Simulations using observed precipitation had negligible representativeness of streamflow (Nash-Sutcliffe coefficient, NS ≈ 0.2), while those using either of the two reanalyses as forcing data had reasonable model performance (NS ≈ 0.7). In spite of the better spatial resolution of the CFSR, the ability to simulate streamflow was not significantly different between CFSR and ERA-Interim. The modeled water balance shows that snowfall is about 30% of the total precipitation input, but surface runoff from snowmelt comprises only about 10% of total runoff. About 75% of all precipitation infiltrates, and approximately 15% of the losses are attributed to evapotranspiration from soil and lake evaporation.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., AUXILIARY, SPECIAL BROADCAST AND OTHER PROGRAM DISTRIBUTIONAL SERVICES FM Broadcast Translator Stations and FM Broadcast Booster Stations § 74.1201 Definitions. (a) FM translator. A station in the broadcasting... another FM broadcast translator station without significantly altering any characteristics of the incoming...
47 CFR 74.1290 - FM translator and booster station information available on the Internet.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 4 2012-10-01 2012-10-01 false FM translator and booster station information... DISTRIBUTIONAL SERVICES FM Broadcast Translator Stations and FM Broadcast Booster Stations § 74.1290 FM translator and booster station information available on the Internet. The Media Bureau's Audio Division...
47 CFR 74.1290 - FM translator and booster station information available on the Internet.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 4 2014-10-01 2014-10-01 false FM translator and booster station information... DISTRIBUTIONAL SERVICES FM Broadcast Translator Stations and FM Broadcast Booster Stations § 74.1290 FM translator and booster station information available on the Internet. The Media Bureau's Audio Division...
47 CFR 74.1290 - FM translator and booster station information available on the Internet.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 4 2013-10-01 2013-10-01 false FM translator and booster station information... DISTRIBUTIONAL SERVICES FM Broadcast Translator Stations and FM Broadcast Booster Stations § 74.1290 FM translator and booster station information available on the Internet. The Media Bureau's Audio Division...
47 CFR 74.1234 - Unattended operation.
Code of Federal Regulations, 2011 CFR
2011-10-01
..., AUXILIARY, SPECIAL BROADCAST AND OTHER PROGRAM DISTRIBUTIONAL SERVICES FM Broadcast Translator Stations and FM Broadcast Booster Stations § 74.1234 Unattended operation. (a) A station authorized under this...
Canadian High Arctic Ionospheric Network (CHAIN)
NASA Astrophysics Data System (ADS)
Jayachandran, P. T.; Langley, R. B.; MacDougall, J. W.; Mushini, S. C.; Pokhotelov, D.; Hamza, A. M.; Mann, I. R.; Milling, D. K.; Kale, Z. C.; Chadwick, R.; Kelly, T.; Danskin, D. W.; Carrano, C. S.
2009-02-01
Polar cap ionospheric measurements are important for the complete understanding of the various processes in the solar wind-magnetosphere-ionosphere system as well as for space weather applications. Currently, the polar cap region is lacking high temporal and spatial resolution ionospheric measurements because of the orbit limitations of space-based measurements and the sparse network providing ground-based measurements. Canada has a unique advantage in remedying this shortcoming because it has the most accessible landmass in the high Arctic regions, and the Canadian High Arctic Ionospheric Network (CHAIN) is designed to take advantage of Canadian geographic vantage points for a better understanding of the Sun-Earth system. CHAIN is a distributed array of ground-based radio instruments in the Canadian high Arctic. The instrument components of CHAIN are 10 high data rate Global Positioning System ionospheric scintillation and total electron content monitors and six Canadian Advanced Digital Ionosondes. Most of these instruments have been sited within the polar cap region except for two GPS reference stations at lower latitudes. This paper briefly overviews the scientific capabilities, instrument components, and deployment status of CHAIN. This paper also reports a GPS signal scintillation episode associated with a magnetospheric impulse event. More details of the CHAIN project and data can be found at http://chain.physics.unb.ca/chain.
Tomography of the upper mantle beneath the African/Iberian collision zone
NASA Astrophysics Data System (ADS)
Bonnin, Mickael; Nolet, Guust; Thomas, Christine; Villaseñor, Antonio; Gallart, Josep; Levander, Alan
2013-04-01
In this study we take advantage of the dense broadband station networks available in the western Mediterranean region (the IberArray, PICASSO and MOROCCO-MUENSTER networks) to develop a high-resolution 3D tomographic P velocity model of the upper mantle beneath the African/Iberian collision zone. This model is based on teleseismic arrival times recorded between 2008 and 2012, for which cross-correlation delays are measured with a new technique in different frequency bands centered between 0.03 and 1.0 Hz and interpreted using multiple-frequency tomography. Such tomography is required to scrutinize the nature and extent of the thermal anomalies inferred beneath Northern Africa, especially in the Atlas ranges region, which are associated with sparse volcanic activity. Tomography is notably needed to help determine the hypothetical connection between those hot anomalies and the Canary Island hotspot proposed by geochemical studies. It also provides new insights into the geometry of the subducting slab previously inferred from tomography, GPS measurements and shear-wave splitting patterns beneath the Alboran Sea and the Betic ranges, and is indispensable for deciphering the complex geodynamic history of the western Mediterranean region. We present the overall statistics of the delays and their geographical distribution, as well as the first inversion results.
Mapping ENSO: Precipitation for the U.S. Affiliated Pacific Islands
NASA Astrophysics Data System (ADS)
Wright, E.; Price, J.; Kruk, M. C.; Luchetti, N.; Marra, J. J.
2015-12-01
The United States Affiliated Pacific Islands (USAPI) are highly susceptible to extreme precipitation events such as drought and flooding, which directly affect their freshwater availability. Precipitation distribution differs by sub-region, and is predominantly influenced by phases of the El Niño Southern Oscillation (ENSO). Forecasters currently rely on ENSO climatologies from sparse in situ station data to inform their precipitation outlooks. This project provided an updated ENSO-based climatology of long-term precipitation patterns for each USAPI Exclusive Economic Zone (EEZ) using the NOAA PERSIANN Climate Data Record (CDR). This data provided a 30-year record (1984-2015) of daily precipitation at 0.25° resolution, which was used to calculate monthly, seasonal, and yearly precipitation. Results indicated that while the PERSIANN precipitation accurately described the monthly, seasonal, and annual trends, it under-predicted the precipitation on the islands. Additionally, maps showing percent departure from normal (30 year average) were made for each three month season based on the Oceanic Niño Index (ONI) for five ENSO phases (moderate-strong El Niño and La Niña, weak El Niño and La Niña, and neutral). Local weather service offices plan on using these results and maps to better understand how the different ENSO phases influence precipitation patterns.
Distribution and identification of airborne fungi in railway stations in Tokyo, Japan.
Kawasaki, Tamami; Kyotani, Takashi; Ushiogi, Tomoyoshi; Izumi, Yasuhiko; Lee, Hunjun; Hayakawa, Toshio
2010-01-01
The current study was performed to (1) understand the distribution of airborne fungi culturable on dichloran-glycerol agar (DG18) media over a one-year monitoring period, (2) identify the types of airborne fungi collected, and (3) compare and contrast under- and above-ground spaces, in two railway stations in Tokyo, Japan. Measurements of airborne fungi were taken at stations A and B located in Tokyo. Station A had under- and above-ground concourses and platforms whereas station B had spaces only above-ground. Airborne fungi at each measurement position were collected with an air sampler on DG18 media. After cultivation of the sample plates, the number of fungi colonies was counted on each agar plate. In station A, the underground platform was characterized as (1) having the highest humidity and (2) a high concentration of airborne fungi, with (3) a high proportion of non-sporulating fungi (NSF) and Aspergillus versicolor. There was a strong positive correlation between the concentrations of airborne particles and fungi in station A. Common aspects of the two stations were (1) that fungi were mostly detected in autumn, and (2) there was no correlation between the humidity and concentration of fungi throughout the year. The results of this study indicate that the distribution and composition of fungi differ depending on the structure of the station.
Ponzi, Adam; Wickens, Jeff
2010-04-28
The striatum is composed of GABAergic medium spiny neurons with inhibitory collaterals forming a sparse random asymmetric network and receiving an excitatory glutamatergic cortical projection. Because the inhibitory collaterals are sparse and weak, their role in striatal network dynamics is puzzling. However, here we show by simulation of a striatal inhibitory network model composed of spiking neurons that cells form assemblies that fire in sequential coherent episodes and display complex identity-temporal spiking patterns even when cortical excitation is simply constant or fluctuating noisily. Strongly correlated large-scale firing rate fluctuations on slow behaviorally relevant timescales of hundreds of milliseconds are shown by members of the same assembly whereas members of different assemblies show strong negative correlation, and we show how randomly connected spiking networks can generate this activity. Cells display highly irregular spiking with high coefficients of variation, broadly distributed low firing rates, and interspike interval distributions that are consistent with exponentially tailed power laws. Although firing rates vary coherently on slow timescales, precise spiking synchronization is absent in general. Our model only requires the minimal but striatally realistic assumptions of sparse to intermediate random connectivity, weak inhibitory synapses, and sufficient cortical excitation so that some cells are depolarized above the firing threshold during up states. Our results are in good qualitative agreement with experimental studies, consistent with recently determined striatal anatomy and physiology, and support a new view of endogenously generated metastable state switching dynamics of the striatal network underlying its information processing operations.
Sparse Representation with Spatio-Temporal Online Dictionary Learning for Efficient Video Coding.
Dai, Wenrui; Shen, Yangmei; Tang, Xin; Zou, Junni; Xiong, Hongkai; Chen, Chang Wen
2016-07-27
Classical dictionary learning methods for video coding suffer from high computational complexity and degraded coding efficiency because they disregard the underlying signal distribution. This paper proposes a spatio-temporal online dictionary learning (STOL) algorithm to speed up the convergence rate of dictionary learning with a guarantee of approximation error. The proposed algorithm incorporates stochastic gradient descent to form a dictionary of pairs of 3-D low-frequency and high-frequency spatio-temporal volumes. In each iteration of the learning process, it randomly selects one sample volume and updates the atoms of the dictionary by minimizing the expected cost, rather than optimizing the empirical cost over the complete training data as batch learning methods, e.g. K-SVD, do. Since the selected volumes are supposed to be i.i.d. samples from the underlying distribution, the decomposition coefficients obtained from the trained dictionary are desirable for sparse representation. Theoretically, it is proved that the proposed STOL achieves better approximation for sparse representation than K-SVD while maintaining both structured sparsity and hierarchical sparsity. It is shown to outperform batch gradient descent methods (K-SVD) in convergence speed and computational complexity, and its upper bound for prediction error is asymptotically equal to the training error. With lower computational complexity, extensive experiments validate that the STOL-based coding scheme achieves performance improvements over H.264/AVC, HEVC, and existing super-resolution based methods in rate-distortion performance and visual quality.
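The one-sample-per-iteration idea (draw a random sample, sparse-code it, nudge the dictionary with a stochastic gradient step, renormalize the atoms) can be sketched as follows. This is a generic online dictionary learning sketch on plain vectors with an ISTA sparse coder, all assumed settings; it illustrates the update style STOL builds on, not the paper's 3-D spatio-temporal algorithm or its convergence guarantees:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_code(D, x, lam=0.1, n_iter=50):
    """ISTA: iterative soft-thresholding to get a sparse coefficient
    vector for sample x under dictionary D (lasso-style penalty lam)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = a - (D.T @ (D @ a - x)) / L    # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # shrink
    return a

def online_dictionary_learning(X, n_atoms, n_epochs=20, step=0.1):
    """Each iteration uses one randomly selected sample: a stochastic
    gradient step on 0.5*||x - D a||^2, then atom renormalization."""
    dim = X.shape[1]
    D = rng.standard_normal((dim, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_epochs * len(X)):
        x = X[rng.integers(len(X))]
        a = sparse_code(D, x)
        r = x - D @ a                       # reconstruction residual
        D += step * np.outer(r, a)          # SGD step on the expected cost
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D

# toy data: sparse combinations of a few ground-truth directions plus noise
true_atoms = rng.standard_normal((8, 3))
true_atoms /= np.linalg.norm(true_atoms, axis=0)
codes = rng.random((200, 3)) * (rng.random((200, 3)) > 0.5)
X = codes @ true_atoms.T + 0.01 * rng.standard_normal((200, 8))
D = online_dictionary_learning(X, n_atoms=6)
err = np.mean([np.linalg.norm(x - D @ sparse_code(D, x)) for x in X])
```

The contrast with batch methods such as K-SVD is visible in the loop: only one sample's residual is touched per iteration, so the per-iteration cost is independent of the training-set size.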
Validation of NH3 satellite observations by ground-based FTIR measurements
NASA Astrophysics Data System (ADS)
Dammers, Enrico; Palm, Mathias; Van Damme, Martin; Shephard, Mark; Cady-Pereira, Karen; Capps, Shannon; Clarisse, Lieven; Coheur, Pierre; Erisman, Jan Willem
2016-04-01
Global emissions of reactive nitrogen have been increasing to an unprecedented level due to human activities and are estimated to be a factor of four larger than pre-industrial levels. Concentration levels of NOx are declining, but ammonia (NH3) levels are increasing around the globe. While NH3 at its current concentrations poses significant threats to the environment and human health, relatively little is known about its total budget and global distribution. Surface observations are sparse and mainly available for north-western Europe, the United States and China, and are limited by high costs and poor temporal and spatial resolution. Since the lifetime of atmospheric NH3 is short, on the order of hours to a few days, due to efficient deposition and fast conversion to particulate matter, the existing surface measurements are not sufficient to estimate global concentrations. Advanced space-based IR-sounders such as the Tropospheric Emission Spectrometer (TES), the Infrared Atmospheric Sounding Interferometer (IASI), and the Cross-track Infrared Sounder (CrIS) enable global observations of atmospheric NH3 that help overcome some of the limitations of surface observations. However, the satellite NH3 retrievals are complex and require extensive validation. Presently there have only been a few dedicated satellite NH3 validation campaigns, performed with limited spatial, vertical or temporal coverage. Recently a retrieval methodology was developed for ground-based Fourier Transform Infrared Spectroscopy (FTIR) instruments to obtain vertical concentration profiles of NH3. Here we show the applicability of retrieved columns from nine globally distributed stations with a range of NH3 pollution levels to validate satellite NH3 products.
47 CFR 74.793 - Digital low power TV and TV translator station protection of broadcast stations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 4 2010-10-01 2010-10-01 false Digital low power TV and TV translator station... DISTRIBUTIONAL SERVICES Low Power TV, TV Translator, and TV Booster Stations § 74.793 Digital low power TV and TV translator station protection of broadcast stations. (a) An application to construct a new digital low power...
NASA Astrophysics Data System (ADS)
Thuillier, G.; Harder, J. W.; Shapiro, A.; Woods, T. N.; Perrin, J.-M.; Snow, M.; Sukhodolov, T.; Schmutz, W.
2015-06-01
A solar spectrum extending from the extreme ultraviolet to the near-infrared is an important input for solar physics, climate research, and atmospheric physics. Ultraviolet measurements have been conducted since the beginning of the space age, but measurements throughout the contiguous visible and infrared (IR) regions are much sparser. Ageing is a key problem throughout the entire spectral domain, but most of the effort expended to understand degradation has been concentrated on the ultraviolet spectral region, and those mechanisms may not be appropriate in the IR. This problem is further complicated by the scarcity of long-term data sets. Onboard the International Space Station, the SOLSPEC spectrometer measured an IR solar spectral irradiance lower than that given by ATLAS 3, e.g. by about 7 % at 1 700 nm. Here we evaluate the consequences of the lower solar spectral irradiance measurements and present a re-analysis of the on-orbit calibration lamp and solar data trends, which leads to a revised spectrum.
Solar Cycle Variation of Upper Thermospheric Temperature Over King Sejong Station, Antarctica
NASA Astrophysics Data System (ADS)
Chung, Jong-Kyun; Won, Young-In; Kim, Yong-Ha; Lee, Bang-Yong; Kim, Jhoon
2000-12-01
A ground-based Fabry-Perot interferometer has been used to measure atomic oxygen nightglow (OI 630.0 nm) from the thermosphere (about 250 km) at King Sejong Station (KSS; geographic: 62.22°S, 301.25°E; geomagnetic: 50.65°S, 7.51°E), Antarctica. While numerous studies of the thermosphere have been performed at high latitudes using ground-based Fabry-Perot interferometers, thermospheric measurements in the Southern Hemisphere are relatively new and sparse. The nightglow measurements at KSS therefore play an important role in extending thermospheric studies to the Southern Hemisphere. In this study, we investigated the effects of geomagnetic and solar activity on the thermospheric neutral temperatures observed at KSS in 1989 and 1997. The measured average temperatures are 1400 K in 1989 and 800 K in 1997, reflecting the influence of solar activity. The measurements were compared with the empirical model MSIS-86 and the semi-empirical model VSH.
Orbit Determination for the Lunar Reconnaissance Orbiter Using an Extended Kalman Filter
NASA Technical Reports Server (NTRS)
Slojkowski, Steven; Lowe, Jonathan; Woodburn, James
2015-01-01
Since launch, the FDF has performed daily OD for LRO using the Goddard Trajectory Determination System (GTDS), a batch least-squares (BLS) estimator. The tracking data arc for OD is 36 hours. Current operational OD uses 200 x 200 lunar gravity, solid lunar tides, solar radiation pressure (SRP) with a spherical spacecraft area model, and point-mass gravity for the Earth, Sun, and Jupiter. LRO tracking data consist of range and range-rate measurements from Universal Space Network (USN) stations in Sweden, Germany, Australia, and Hawaii; a NASA antenna at White Sands, New Mexico (WS1S); and NASA Deep Space Network (DSN) stations. DSN data were sparse and not included in this study. Tracking is predominantly (50%) from WS1S. The OD accuracy requirements are a definitive ephemeris accuracy of 500 meters total position root-mean-squared (RMS) and 18 meters radial RMS, and a predicted orbit accuracy of less than 800 meters root-sum-squared (RSS) over an 84-hour prediction span.
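The core of a batch least-squares estimator like GTDS is the weighted normal-equations correction applied to a linearized measurement model. A minimal sketch on a synthetic linear problem; the matrices and noise level are illustrative assumptions, not LRO or GTDS values:

```python
import numpy as np

def bls_correction(H, residuals, weights):
    """One batch least-squares state correction:
    dx = (H^T W H)^{-1} H^T W r, the normal-equations solution that a
    batch estimator iterates to convergence over a tracking arc."""
    W = np.diag(weights)
    N = H.T @ W @ H                    # normal matrix
    return np.linalg.solve(N, H.T @ W @ residuals)

# synthetic linear problem: recover a 2-element state from noisy measurements
rng = np.random.default_rng(1)
x_true = np.array([3.0, -1.5])
H = rng.standard_normal((40, 2))      # measurement partials
r = H @ x_true + 0.001 * rng.standard_normal(40)   # residuals about x = 0
dx = bls_correction(H, r, np.ones(40))
```

In a real OD iteration the partials H come from numerically integrating the orbit and variational equations; here the linear model stands in for that step.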
Waterman, Stephen H; Escobedo, Miguel; Wilson, Todd; Edelson, Paul J; Bethel, Jeffrey W; Fishbein, Daniel B
2009-01-01
The Institute of Medicine (IOM) report Quarantine Stations at Ports of Entry: Protecting the Public's Health focused almost exclusively on U.S. airports and seaports, which served 106 million entries in 2005. IOM concluded that the primary function of these quarantine stations (QSs) should shift from providing inspection to providing strategic national public health leadership. The large expanse of our national borders, large number of crossings, sparse federal resources, and decreased regulation regarding conveyances crossing these borders make land borders more permeable to a variety of threats. To address the health challenges related to land borders, the QSs serving such borders must assume unique roles and partnerships to achieve the strategic leadership and public health research roles envisioned by the IOM. In this article, we examine how the IOM recommendations apply to the QSs that serve the land borders through which more than 319 million travelers, immigrants, and refugees entered the U.S. in 2005.
Point-source inversion techniques
NASA Astrophysics Data System (ADS)
Langston, Charles A.; Barker, Jeffrey S.; Pavlin, Gregory B.
1982-11-01
A variety of approaches for obtaining source parameters from waveform data using moment-tensor or dislocation point source models have been investigated and applied to long-period body and surface waves from several earthquakes. Generalized inversion techniques have been applied to data for long-period teleseismic body waves to obtain the orientation, time function and depth of the 1978 Thessaloniki, Greece, event, of the 1971 San Fernando event, and of several events associated with the 1963 induced seismicity sequence at Kariba, Africa. The generalized inversion technique and a systematic grid testing technique have also been used to place meaningful constraints on mechanisms determined from very sparse data sets; a single station with high-quality three-component waveform data is often sufficient to discriminate faulting type (e.g., strike-slip, etc.). Sparse data sets for several recent California earthquakes, for a small regional event associated with the Koyna, India, reservoir, and for several events at the Kariba reservoir have been investigated in this way. Although linearized inversion techniques using the moment-tensor model are often robust, even for sparse data sets, there are instances where the simplifying assumption of a single point source is inadequate to model the data successfully. Numerical experiments utilizing synthetic data and actual data for the 1971 San Fernando earthquake graphically demonstrate that severe problems may be encountered if source finiteness effects are ignored. These techniques are generally applicable to on-line processing of high-quality digital data, but source complexity and inadequacy of the assumed Green's functions are major problems which are yet to be fully addressed.
Populations of Bactrocera oleae (Diptera: Tephritidae) and Its Parasitoids in Himalayan Asia
USDA-ARS's Scientific Manuscript database
For a biological control program against olive fruit fly, Bactrocera oleae Rossi, olives were collected in the Himalayan foothills (China, Nepal, India, and Pakistan) to discover new natural enemies. Wild olives, Olea europaea ssp. cuspidata (Wall ex. G. Don), were sparsely distributed and fly-infes...
Feminism, Neoliberalism, and Social Studies
ERIC Educational Resources Information Center
Schmeichel, Mardi
2011-01-01
The purpose of this article is to analyze the sparse presence of women in social studies education and to consider the possibility of a confluence of feminism and neoliberalism within the most widely distributed National Council for the Social Studies (NCSS) publication, "Social Education." Using poststructural conceptions of discourse, the author…
Sparse Distributed Representation and Hierarchy: Keys to Scalable Machine Intelligence
2016-04-01
Rinkus, Gerard (Rod); Lesher, Greg; Leveille, Jasmin; Layton, Oliver (Neurithmic Systems, LLC)
Geographic Mobility of Manpower in the USSR.
ERIC Educational Resources Information Center
Kossov, V. V.; Tatevosoc, R. V.
1984-01-01
The Soviet Union is experiencing substantial reduction in the growth of the working-age population, accompanied by a shift in the distribution of population growth. The government is using various means to encourage workers to move to the sparsely populated developing regions and away from the large cities. (SK)
45 CFR 303.20 - Minimum organizational and staffing requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... payments or social services functions under title IV-A or XX of the Act. In the case of a sparsely... social worker. (2) The assistance payments function means activities related to determination of... financial and medical assistance and commodities distribution or food stamps. (3) The social services...
Monitoring NEON terrestrial sites phenology with daily MODIS BRDF/albedo product and landsat data
USDA-ARS's Scientific Manuscript database
The MODerate resolution Imaging Spectroradiometer (MODIS) Bidirectional Reflectance Distribution Function (BRDF) and albedo products (MCD43) have already been in production for more than a decade. The standard product makes use of a linear “kernel-driven” RossThick-LiSparse Reciprocal (RTLSR) BRDF m...
A shared-world conceptual model for integrating space station life sciences telescience operations
NASA Technical Reports Server (NTRS)
Johnson, Vicki; Bosley, John
1988-01-01
Mental models of the Space Station and its ancillary facilities will be employed by users of the Space Station as they draw upon past experiences, perform tasks, and collectively plan for future activities. The operational environment of the Space Station will incorporate telescience, a new set of operational modes. To investigate properties of the operational environment, distributed users, and the mental models they employ to manipulate resources while conducting telescience, an integrating shared-world conceptual model of Space Station telescience is proposed. The model comprises distributed users and resources (active elements); agents who mediate interactions among these elements on the basis of intelligent processing of shared information; and telescience protocols which structure the interactions of agents as they engage in cooperative, responsive interactions on behalf of users and resources distributed in space and time. Examples from the life sciences are used to instantiate and refine the model's principles. Implications for transaction management and autonomy are discussed. Experiments employing the model are described which the authors intend to conduct using the Space Station Life Sciences Telescience Testbed currently under development at Ames Research Center.
49 CFR 192.201 - Required capacity of pressure relieving and limiting stations.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Design of Pipeline Components § 192.201 Required capacity of pressure relieving and limiting stations. (a) Each pressure relief station or pressure limiting station or group of those stations installed to... part of the pipeline or distribution system in excess of those for which it was designed, or against...
Meyer, Frans J C; Davidson, David B; Jakobus, Ulrich; Stuchly, Maria A
2003-02-01
A hybrid finite-element method (FEM)/method of moments (MoM) technique is employed for specific absorption rate (SAR) calculations in a human phantom in the near field of a typical group special mobile (GSM) base-station antenna. The MoM is used to model the metallic surfaces and wires of the base-station antenna, and the FEM is used to model the heterogeneous human phantom. The advantages of each of these frequency domain techniques are thus exploited, leading to a highly efficient and robust numerical method for addressing this type of bioelectromagnetic problem. The basic mathematical formulation of the hybrid technique is presented. This is followed by a discussion of important implementation details, in particular the linear algebra routines for sparse, complex FEM matrices combined with dense MoM matrices. The implementation is validated by comparing results to MoM (surface equivalence principle implementation) and finite-difference time-domain (FDTD) solutions of human exposure problems. A comparison of the computational efficiency of the different techniques is presented. The FEM/MoM implementation is then used for whole-body and critical-organ SAR calculations in a phantom at different positions in the near field of a base-station antenna. This problem cannot, in general, be solved using the MoM or FDTD due to computational limitations. This paper shows that the specific hybrid FEM/MoM implementation is an efficient numerical tool for accurate assessment of human exposure in the near field of base-station antennas.
Cost-effectiveness of the stream-gaging program in Maine; a prototype for nationwide implementation
Fontaine, Richard A.; Moss, M.E.; Smath, J.A.; Thomas, W.O.
1984-01-01
This report documents the results of a cost-effectiveness study of the stream-gaging program in Maine. Data uses and funding sources were identified for the 51 continuous stream gages currently being operated in Maine with a budget of $211,000. Three stream gages were identified as producing data no longer sufficiently needed to warrant continuing their operation. Operation of these stations should be discontinued. Data collected at three other stations were identified as having uses specific only to short-term studies; it is recommended that these stations be discontinued at the end of the data-collection phases of the studies. The remaining 45 stations should be maintained in the program for the foreseeable future. The current policy for operation of the 45-station program would require a budget of $180,300 per year. The average standard error of estimation of streamflow records is 17.7 percent. It was shown that this overall level of accuracy at the 45 sites could be maintained with a budget of approximately $170,000 if resources were redistributed among the gages. A minimum budget of $155,000 is required to operate the 45-gage program; a smaller budget would not permit proper service and maintenance of the gages and recorders. At the minimum budget, the average standard error is 25.1 percent. The maximum budget analyzed was $350,000, which resulted in an average standard error of 8.7 percent. Large parts of Maine's interior were identified as having sparse streamflow data. It is recommended that this sparsity be remedied as funds become available.
Irvine, Kathryn M.; Thornton, Jamie; Backus, Vickie M.; Hohmann, Matthew G.; Lehnhoff, Erik A.; Maxwell, Bruce D.; Michels, Kurt; Rew, Lisa
2013-01-01
Commonly in environmental and ecological studies, species distribution data are recorded as presence or absence throughout a spatial domain of interest. Field based studies typically collect observations by sampling a subset of the spatial domain. We consider the effects of six different adaptive and two non-adaptive sampling designs and choice of three binary models on both predictions to unsampled locations and parameter estimation of the regression coefficients (species–environment relationships). Our simulation study is unique compared to others to date in that we virtually sample a true known spatial distribution of a nonindigenous plant species, Bromus inermis. The census of B. inermis provides a good example of a species distribution that is both sparsely (1.9 % prevalence) and patchily distributed. We find that modeling the spatial correlation using a random effect with an intrinsic Gaussian conditionally autoregressive prior distribution was equivalent or superior to Bayesian autologistic regression in terms of predicting to un-sampled areas when strip adaptive cluster sampling was used to survey B. inermis. However, inferences about the relationships between B. inermis presence and environmental predictors differed between the two spatial binary models. The strip adaptive cluster designs we investigate provided a significant advantage in terms of Markov chain Monte Carlo chain convergence when trying to model a sparsely distributed species across a large area. In general, there was little difference in the choice of neighborhood, although the adaptive king was preferred when transects were randomly placed throughout the spatial domain.
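The intrinsic Gaussian conditionally autoregressive (CAR) prior used above as a spatial random effect is defined through a precision matrix built from the neighborhood graph, and the rook vs. king neighborhood choice the study compares changes only the adjacency. A minimal sketch, assuming a regular grid and binary weights:

```python
import numpy as np

def icar_precision(nrow, ncol, king=False):
    """Precision matrix Q = D - W of an intrinsic CAR prior on a grid.
    W is the binary adjacency (rook: 4 neighbors; king: 8 neighbors),
    D the diagonal of neighbor counts. Q is singular with Q @ 1 = 0,
    which is what makes the prior 'intrinsic' (improper)."""
    n = nrow * ncol
    W = np.zeros((n, n))
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if king:
        steps += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    for i in range(nrow):
        for j in range(ncol):
            for di, dj in steps:
                a, b = i + di, j + dj
                if 0 <= a < nrow and 0 <= b < ncol:
                    W[i * ncol + j, a * ncol + b] = 1.0
    return np.diag(W.sum(axis=1)) - W

Q = icar_precision(4, 5, king=True)
```

The random effect then has (improper) density proportional to exp(-0.5 * tau * u' Q u), so each cell is shrunk toward the average of its neighbors; a fitted model would place this prior on the logit-scale spatial effect.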
47 CFR 74.1202 - Frequency assignment.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., AUXILIARY, SPECIAL BROADCAST AND OTHER PROGRAM DISTRIBUTIONAL SERVICES FM Broadcast Translator Stations and... translator station or for changes in the facilities of an authorized translator station shall endeavor to... will be assigned to each translator station. (b) Subject to compliance with all the requirements of...
47 CFR 74.1269 - Copies of rules.
Code of Federal Regulations, 2011 CFR
2011-10-01
..., AUXILIARY, SPECIAL BROADCAST AND OTHER PROGRAM DISTRIBUTIONAL SERVICES FM Broadcast Translator Stations and FM Broadcast Booster Stations § 74.1269 Copies of rules. The licensee or permittee of a station...
A detailed gravimetric geoid from North America to Eurasia
NASA Technical Reports Server (NTRS)
Vincent, S. F.; Strange, W. E.; Marsh, J. G.
1972-01-01
A detailed gravimetric geoid of the United States, North Atlantic, and Eurasia, which was computed from a combination of satellite derived and surface gravity data, is presented. The precision of this detailed geoid is ±2 to ±3 m in the continents but may be in the range of 5 to 7 m in those areas where data is sparse. Comparisons of the detailed gravimetric geoid with results of Rapp, Fischer, and Rice for the United States, Bomford in Europe, and Heiskanen and Fischer in India are presented. Comparisons are also presented with geoid heights from satellite solutions for geocentric station coordinates in North America, the Caribbean, and Europe.
Markov Chain Monte Carlo Inference of Parametric Dictionaries for Sparse Bayesian Approximations
Chaspari, Theodora; Tsiartas, Andreas; Tsilifis, Panagiotis; Narayanan, Shrikanth
2016-01-01
Parametric dictionaries can increase the ability of sparse representations to meaningfully capture and interpret the underlying signal information, such as encountered in biomedical problems. Given a mapping function from the atom parameter space to the actual atoms, we propose a sparse Bayesian framework for learning the atom parameters, because of its ability to provide full posterior estimates, take uncertainty into account and generalize on unseen data. Inference is performed with Markov chain Monte Carlo, which uses block sampling to generate the variables of the Bayesian problem. Since the parameterization of dictionary atoms results in posteriors that cannot be analytically computed, we use a Metropolis-Hastings-within-Gibbs framework, according to which variables with closed-form posteriors are generated with the Gibbs sampler, while the remaining ones are generated with Metropolis-Hastings from appropriate candidate-generating densities. We further show that the corresponding Markov chain is uniformly ergodic, ensuring its convergence to a stationary distribution independently of the initial state. Results on synthetic data and real biomedical signals indicate that our approach offers advantages in terms of signal reconstruction compared to the previously proposed Steepest Descent and Equiangular Tight Frame methods. This paper demonstrates the ability of Bayesian learning to generate parametric dictionaries that can reliably represent the exemplar data and provides the foundation towards inferring the entire variable set of the sparse approximation problem for signal denoising, adaptation and other applications. PMID:28649173
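The Metropolis-Hastings-within-Gibbs scheme can be sketched on a toy bivariate-normal target in which one conditional is drawn exactly (the Gibbs step) and the other is treated as if it lacked a closed form (the random-walk MH step). The target, proposal width, and chain length are all illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(42)
rho = 0.8   # correlation of the toy bivariate standard-normal target

def log_cond_y(y, x):
    """Log of p(y | x) up to a constant - used only by the MH step,
    as if this conditional had no closed form."""
    return -0.5 * (y - rho * x) ** 2 / (1 - rho ** 2)

x, y = 0.0, 0.0
samples = []
for _ in range(20000):
    # Gibbs step: x | y is Normal(rho*y, 1 - rho^2), sampled exactly
    x = rng.normal(rho * y, np.sqrt(1 - rho ** 2))
    # MH step: random-walk proposal for y, accepted with the usual ratio
    y_prop = y + rng.normal(0.0, 1.0)
    if np.log(rng.random()) < log_cond_y(y_prop, x) - log_cond_y(y, x):
        y = y_prop
    samples.append((x, y))
samples = np.array(samples[2000:])   # discard burn-in
mx, my = samples.mean(axis=0)
corr = np.corrcoef(samples.T)[0, 1]
```

Alternating the two kinds of updates keeps the chain targeting the joint distribution, which is the property that lets closed-form and intractable conditionals be mixed in one sampler.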
Kuipers replaces the ESEM-1 with new ESEM in the U.S. Laboratory
2011-12-28
ISS030-E-033367 (28 Dec. 2011) --- In the International Space Station's Destiny laboratory, European Space Agency astronaut Andre Kuipers, Expedition 30 flight engineer, replaces the faulty Exchangeable Standard Electronic Module 1 (ESEM-1) behind the front panel of the Microgravity Science Glovebox Remote Power Distribution Assembly (MSG RPDA) with the new spare. The ESEM is used to distribute station main power to the entire MSG facility.
Connecting Aerosol Size Distributions at Three Arctic Stations
NASA Astrophysics Data System (ADS)
Freud, E.; Krejci, R.; Tunved, P.; Barrie, L. A.
2015-12-01
Aerosols play an important role in Earth's energy balance, mainly through interactions with solar radiation and cloud processes. There is a distinct annual cycle of arctic aerosols, with the greatest mass concentrations in spring and the lowest in summer due to effective wet removal processes, which allows new particle formation events to take place. Little is known about the spatial extent of these events, as no previous studies have directly compared and linked aerosol measurements from different arctic stations during the same times. Although the arctic stations are hardly affected by local pollution, it is normally assumed that their aerosol measurements are indicative of a rather large area. It is, however, not clear if that assumption holds all the time, and how large that area may be. In this study, three different datasets of aerosol size distributions from Mt. Zeppelin in Svalbard, Station Nord in northern Greenland and Alert in the Canadian arctic, are analyzed for the measurement period of 2012-2013. All stations are 500 to 1000 km from each other, and the travel time from one station to the other is typically between 2 to 5 days. The meteorological parameters along the calculated trajectories are analyzed in order to estimate their role in the modification of the aerosol size distribution while the air is traveling from one field station to another. In addition, the exposure of the sampled air to open waters vs. frozen sea is assessed, due to the different fluxes of heat, moisture, gases and particles that are expected to affect the aerosol size distribution. The results show that the general characteristics of the aerosol size distributions and their annual variation are not very different at the three stations, with Alert and Station Nord being more similar. This is more pronounced when looking into the cases for which the trajectory calculations indicated that the air traveled from one of the latter stations to the other.
The probable cause for the measurements at Mt. Zeppelin standing out is the site's greater exposure to ice-free water all year round. In addition, the air sampled at Mt. Zeppelin is sometimes decoupled from the air at sea level, resulting in a greater potential contribution of long-range transport to the aerosols measured there compared with the other, low-altitude stations.
Robustness-Based Design Optimization Under Data Uncertainty
NASA Technical Reports Server (NTRS)
Zaman, Kais; McDonald, Mark; Mahadevan, Sankaran; Green, Lawrence
2010-01-01
This paper proposes formulations and algorithms for design optimization under both aleatory uncertainty (i.e., natural or physical variability) and epistemic uncertainty (i.e., imprecise probabilistic information), from the perspective of system robustness. The proposed formulations deal with epistemic uncertainty arising from both sparse and interval data without any assumption about the probability distributions of the random variables. A decoupled approach is proposed to un-nest the robustness-based design from the analysis of non-design epistemic variables and thereby achieve computational efficiency. The proposed methods are illustrated for the upper stage design problem of a two-stage-to-orbit (TSTO) vehicle, where the information on the random design inputs is available only as sparse point and/or interval data. Because collecting more data reduces uncertainty but increases cost, the effect of sample size on the optimality and robustness of the solution is also studied. A method is developed to determine the optimal sample size for sparse point data, leading to design solutions that are least sensitive to variations in the input random variables.
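The decoupled idea above can be sketched in a few lines: estimate the moments of an uncertain input from sparse point data, then minimize a mean-plus-k-sigma robustness objective over the design variable. The toy performance function, the data values, and the weight k below are illustrative assumptions, not the paper's TSTO problem.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical sketch of robustness-based design under sparse-data
# uncertainty; all names and the model are illustrative.
sparse_data = np.array([0.90, 1.10, 1.05, 0.98, 1.02])  # sparse observations of input X
mu_x = sparse_data.mean()
s_x = sparse_data.std(ddof=1)                 # sample std from the sparse data

def performance(d, x):
    """Toy performance function f(design d, uncertain input x)."""
    return (d - 2.0) ** 2 + d * x

def robust_objective(d, k=3.0):
    # First-order estimates of the mean and std of f under uncertainty in X.
    f_mean = performance(d, mu_x)
    f_std = abs(d) * s_x                      # |df/dx| * std(X); df/dx = d here
    return f_mean + k * f_std                 # robustness: penalize variability

res = minimize(lambda p: robust_objective(p[0]), x0=[0.0])
d_opt = res.x[0]
```

With these numbers the robust optimum is pulled below the deterministic optimum of d = 2 because larger d amplifies the input variability.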
Microbial ecology of extreme environments: Antarctic yeasts and growth in substrate-limited habitats
NASA Technical Reports Server (NTRS)
Vishniac, H. S.
1984-01-01
An extreme environment is by definition one with a depauperate biota. While the Ross Desert is by no means homogeneous, the most exposed and arid habitats, the soils of the unglaciated high valleys, do indeed contain a very sparse biota of low diversity - so sparse that the natives could easily be outnumbered by airborne exogenous microbes. The native biota must be capable of overwintering as well as growing in the high-valley summer. Tourists may undergo a few divisions before contributing their enzymes and, ultimately, their elements to the soil - or may die before landing. The simplest way to demonstrate the indigenicity of a particular microbe is therefore to establish a unique distribution; occurrence only in the habitat in question precludes foreign origin.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Chao; Pouransari, Hadi; Rajamanickam, Sivasankaran
We present a parallel hierarchical solver for general sparse linear systems on distributed-memory machines. For large-scale problems, this fully algebraic algorithm is faster and more memory-efficient than sparse direct solvers because it exploits the low-rank structure of fill-in blocks. Depending on the accuracy of low-rank approximations, the hierarchical solver can be used either as a direct solver or as a preconditioner. The parallel algorithm is based on data decomposition and requires only local communication for updating boundary data on every processor. Moreover, the computation-to-communication ratio of the parallel algorithm is approximately the volume-to-surface-area ratio of the subdomain owned by every processor. We also provide various numerical results to demonstrate the versatility and scalability of the parallel algorithm.
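The low-rank structure such solvers exploit can be illustrated with a single off-diagonal block: for well-separated interactions, the block compresses to a small numerical rank. This is a hedged numpy sketch of the principle, not the paper's algorithm; the interaction kernel and tolerance are made up.

```python
import numpy as np

# Compress an off-diagonal interaction block with a truncated SVD, the
# basic operation a hierarchical solver uses on low-rank fill-in.
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0.0, 1.0, n)
y = rng.uniform(5.0, 6.0, n)                 # well separated from x
A = 1.0 / np.abs(x[:, None] - y[None, :])    # smooth, hence numerically low-rank

U, s, Vt = np.linalg.svd(A, full_matrices=False)
tol = 1e-8 * s[0]
r = int(np.sum(s > tol))                     # numerical rank at this tolerance
A_lr = (U[:, :r] * s[:r]) @ Vt[:r, :]        # rank-r approximation

rel_err = np.linalg.norm(A - A_lr) / np.linalg.norm(A)
compression = (2 * n * r + r) / (n * n)      # storage of factors vs dense block
```

The numerical rank r is far below n, which is exactly why storing and applying the factors is cheaper than the dense block.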
Assimilation of Spatially Sparse In Situ Soil Moisture Networks into a Continuous Model Domain
NASA Astrophysics Data System (ADS)
Gruber, A.; Crow, W. T.; Dorigo, W. A.
2018-02-01
Growth in the availability of near-real-time soil moisture observations from ground-based networks has spurred interest in the assimilation of these observations into land surface models via a two-dimensional data assimilation system. However, the design of such systems is currently hampered by our ignorance concerning the spatial structure of error afflicting ground and model-based soil moisture estimates. Here we apply newly developed triple collocation techniques to provide the spatial error information required to fully parameterize a two-dimensional (2-D) data assimilation system designed to assimilate spatially sparse observations acquired from existing ground-based soil moisture networks into a spatially continuous Antecedent Precipitation Index (API) model for operational agricultural drought monitoring. Over the contiguous United States (CONUS), the posterior uncertainty of surface soil moisture estimates associated with this 2-D system is compared to that obtained from the 1-D assimilation of remote sensing retrievals to assess the value of ground-based observations to constrain a surface soil moisture analysis. Results demonstrate that a fourfold increase in existing CONUS ground station density is needed for ground network observations to provide a level of skill comparable to that provided by existing satellite-based surface soil moisture retrievals.
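The triple collocation step can be sketched with synthetic data: when three collocated estimates have mutually independent errors, the truth variance cancels out of the cross-covariances and each system's error variance follows directly. The variable names and noise levels below are illustrative assumptions.

```python
import numpy as np

# Classical triple collocation: estimate error variances of three
# independent systems (e.g. station, model, satellite) from covariances.
rng = np.random.default_rng(1)
truth = rng.normal(0.25, 0.06, 20000)           # synthetic soil moisture truth
x = truth + rng.normal(0, 0.02, truth.size)     # ground network
y = truth + rng.normal(0, 0.04, truth.size)     # model (API-like)
z = truth + rng.normal(0, 0.03, truth.size)     # satellite retrieval

C = np.cov(np.vstack([x, y, z]))
# With mutually independent errors, the truth variance cancels:
err_var_x = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
err_var_y = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
err_var_z = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
```

The recovered variances approximate the squared noise levels (0.02², 0.04², 0.03²) without ever seeing the truth, which is what makes the technique usable for parameterizing the assimilation system.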
An automated system for the study of ionospheric spatial structures
NASA Astrophysics Data System (ADS)
Belinskaya, I. V.; Boitman, O. N.; Vugmeister, B. O.; Vyborova, V. M.; Zakharov, V. N.; Laptev, V. A.; Mamchenko, M. S.; Potemkin, A. A.; Radionov, V. V.
The system is designed for the study of the vertical distribution of electron density and the parameters of medium-scale ionospheric irregularities over the sounding site, as well as the reconstruction of the spatial distribution of electron density within a range of up to 300 km from the sounding location. The system comprises an active central station and passive companion stations. The central station is equipped with the digital ionosonde ``Basis'', the measuring-and-computing complex IVK-2, and the receiver-recorder PRK-3M. The companion stations are equipped with PRK-3 receiver-recorders. The automated complex software system includes 14 subsystems; data transfer between them is effected using magnetic disk data sets. The system is operated in both an ionogram mode and a Doppler shift and angle-of-arrival mode. Using data obtained in these two modes, the spatial distribution of electron density in the region is reconstructed, and the reconstruction is checked for accuracy against data from the companion stations.
Statistical regularities of art images and natural scenes: spectra, sparseness and nonlinearities.
Graham, Daniel J; Field, David J
2007-01-01
Paintings are the product of a process that begins with ordinary vision in the natural world and ends with manipulation of pigments on canvas. Because artists must produce images that can be seen by a visual system that is thought to take advantage of statistical regularities in natural scenes, artists are likely to replicate many of these regularities in their painted art. We have tested this notion by computing basic statistical properties and modeled cell response properties for a large set of digitized paintings and natural scenes. We find that both representational and non-representational (abstract) paintings from our sample (124 images) show basic similarities to a sample of natural scenes in terms of their spatial frequency amplitude spectra, but the paintings and natural scenes show significantly different mean amplitude spectrum slopes. We also find that the intensity distributions of paintings show lower skewness and sparseness than those of natural scenes. We account for this by considering the range of luminances found in the environment compared with the range available in the medium of paint: a painting's range is limited by the reflective properties of its materials. We argue that artists do not simply scale the intensity range down but use a compressive nonlinearity. In our studies, modeled retinal and cortical filter responses to the images were less sparse for the paintings than for the natural scenes. But when a compressive nonlinearity was applied to the images, both the paintings' intensity distributions and the modeled responses to the paintings showed sparseness equal to or greater than that of the natural scenes. This suggests that artists achieve some degree of nonlinear compression in their paintings. Because paintings have captivated humans for millennia, finding basic statistical regularities in paintings' spatial structure could grant insights into the range of spatial patterns that humans find compelling.
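The skewness and sparseness measurements, and the effect of a compressive nonlinearity, can be sketched as follows. The log-normal luminance model and the cube-root compression are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np
from scipy.stats import skew, kurtosis

# Natural-scene luminances are roughly log-normal (high skew); a
# compressive nonlinearity lowers both skewness and a kurtosis-based
# sparseness index, mimicking the compression available to painters.
rng = np.random.default_rng(2)
luminance = rng.lognormal(mean=0.0, sigma=1.0, size=100000)

skew_raw = skew(luminance)
sparse_raw = kurtosis(luminance)           # excess kurtosis as sparseness index

compressed = luminance ** (1.0 / 3.0)      # assumed compressive nonlinearity
skew_comp = skew(compressed)
sparse_comp = kurtosis(compressed)
```

Both statistics drop sharply after compression, consistent with paintings having lower skewness and sparseness than the scenes they depict.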
NASA Astrophysics Data System (ADS)
Mohamad Noor, Faris; Adipta, Agra
2018-03-01
Coal Bed Methane (CBM), a newly developed resource in Indonesia, is one of the alternatives for reducing Indonesia's dependence on conventional energy. The coal of the Muara Enim Formation is known as one of the prolific reservoirs in the South Sumatra Basin. Seismic inversion and well analysis are performed to determine the coal seam characteristics of the Muara Enim Formation. This research uses three inversion methods: model-based hard-constraint, band-limited, and sparse-spike inversion. Each type of seismic inversion has its own advantages in displaying the coal seam and its characteristics. Interpretation of the analyzed data shows that the Muara Enim coal seam has a gamma-ray value of 20 API, a density of 1-1.4 g/cc, and a low acoustic impedance (AI) cutoff in the range 5000-6400 (m/s)*(g/cc). The coal seam thins laterally from northwest to southeast. The seam appears biased in the model-based hard-constraint inversion and discontinuous in the band-limited inversion, neither of which matches the geological model. The most appropriate AI inversion is sparse-spike inversion, whose cross-plot correlation of 0.884757 is the best among the chosen inversion methods. Sparse-spike inversion also preserves high amplitudes, making it a proper tool for identifying the continuity of coal seams, which commonly appear as thin layers. Cross-sections of the sparse-spike inversion indicate possible new borehole locations at CDP 3662-3722, CDP 3586-3622, and CDP 4004-4148, which appear in the seismic data as thick coal seams.
Zhang, Guoqing; Sun, Huaijiang; Xia, Guiyu; Sun, Quansen
2016-07-07
Sparse representation based classification (SRC) has been developed and has shown great potential for real-world applications. Based on SRC, Yang et al. [10] devised an SRC-steered discriminative projection (SRC-DP) method. However, as a linear algorithm, SRC-DP cannot handle data with a highly nonlinear distribution. The kernel sparse representation-based classifier (KSRC) is a nonlinear extension of SRC that can remedy this drawback. KSRC requires a predetermined kernel function, however, and selecting the kernel function and its parameters is difficult. Recently, multiple kernel learning for SRC (MKL-SRC) [22] has been proposed to learn a kernel from a set of base kernels. However, MKL-SRC considers only the within-class reconstruction residual, ignoring the between-class relationship when learning the kernel weights. In this paper, we propose a novel multiple kernel sparse representation-based classifier (MKSRC) and use it as a criterion to design a multiple kernel sparse representation based orthogonal discriminative projection method (MK-SR-ODP). The proposed algorithm aims to learn a projection matrix and a corresponding kernel from the given base kernels such that, in the low-dimensional subspace, the between-class reconstruction residual is maximized and the within-class reconstruction residual is minimized. Furthermore, to achieve a minimum overall loss when performing recognition in the learned low-dimensional subspace, we introduce cost information into the dimensionality reduction method. Solutions for the proposed method can be found efficiently using the trace ratio optimization method [33]. Extensive experimental results demonstrate the superiority of the proposed algorithm over state-of-the-art methods.
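For readers unfamiliar with the SRC baseline that these kernel variants extend, here is a minimal sketch: sparse-code the test sample over all training atoms and classify by the smallest class-wise reconstruction residual. The ISTA solver, the two-class synthetic data, and all sizes are illustrative assumptions.

```python
import numpy as np

# Minimal SRC sketch: l1 sparse coding over a dictionary of training
# samples, then classification by class-wise reconstruction residual.
rng = np.random.default_rng(3)

def ista_l1(D, y, lam=0.05, n_iter=500):
    """Solve min_x 0.5*||D x - y||^2 + lam*||x||_1 by iterative shrinkage."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = x - D.T @ (D @ x - y) / L        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return x

# Each class spans its own low-dimensional subspace.
basis0, basis1 = rng.normal(size=(40, 3)), rng.normal(size=(40, 3))
D = np.hstack([basis0 @ rng.normal(size=(3, 10)),
               basis1 @ rng.normal(size=(3, 10))])
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms, as in SRC
labels = np.repeat([0, 1], 10)

y = basis1 @ rng.normal(size=3) + 0.01 * rng.normal(size=40)  # class-1 sample
y /= np.linalg.norm(y)

x_hat = ista_l1(D, y)
residuals = [np.linalg.norm(y - D[:, labels == c] @ x_hat[labels == c])
             for c in (0, 1)]
predicted = int(np.argmin(residuals))
```

KSRC and MKL-SRC replace the inner products in this pipeline with (learned combinations of) kernel evaluations; the residual-based decision rule stays the same.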
Solving large tomographic linear systems: size reduction and error estimation
NASA Astrophysics Data System (ADS)
Voronin, Sergey; Mikesell, Dylan; Slezak, Inna; Nolet, Guust
2014-10-01
We present a new approach to reduce a sparse, linear system of equations associated with tomographic inverse problems. We begin by making a modification to the commonly used compressed sparse-row format, whereby our format is tailored to the sparse structure of finite-frequency (volume) sensitivity kernels in seismic tomography. Next, we cluster the sparse matrix rows to divide a large matrix into smaller subsets representing ray paths that are geographically close. Singular value decomposition of each subset allows us to project the data onto a subspace associated with the largest eigenvalues of the subset. After projection we reject those data that have a signal-to-noise ratio (SNR) below a chosen threshold. Clustering in this way assures that the sparse nature of the system is minimally affected by the projection. Moreover, our approach allows for a precise estimation of the noise affecting the data while also giving us the ability to identify outliers. We illustrate the method by reducing large matrices computed for global tomographic systems with cross-correlation body wave delays, as well as with surface wave phase velocity anomalies. For a massive matrix computed for 3.7 million Rayleigh wave phase velocity measurements, imposing a threshold of 1 for the SNR, we condensed the matrix size from 1103 to 63 Gbyte. For a global data set of multiple-frequency P wave delays from 60 well-distributed deep earthquakes we obtain a reduction to 5.9 per cent. This type of reduction allows one to avoid loss of information due to underparametrizing models. Alternatively, if data have to be rejected to fit the system into computer memory, it assures that the most important data are preserved.
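The projection-and-thresholding step can be sketched for a single cluster: rotate the data into the singular basis of the cluster's rows, then keep only the components whose SNR clears the threshold. The SNR threshold of 1 follows the text; the matrix sizes and noise level are illustrative.

```python
import numpy as np

# Toy reduction of one cluster of geographically close rows: project the
# rows and data onto the leading singular vectors and drop low-SNR data.
rng = np.random.default_rng(4)
m, n, rank = 300, 1000, 5
A_cluster = rng.normal(size=(m, rank)) @ rng.normal(size=(rank, n))  # similar ray paths
sigma = 0.1                                    # assumed data noise level
d = A_cluster @ rng.normal(size=n) * 0.01 + rng.normal(0, sigma, m)

U, s, Vt = np.linalg.svd(A_cluster, full_matrices=False)
d_proj = U.T @ d                               # data in the singular basis
# U is orthonormal, so noise in each projected component stays at sigma.
snr = np.abs(d_proj) / sigma
keep = snr >= 1.0                              # SNR threshold of 1, as in the text
A_reduced = s[keep, None] * Vt[keep, :]        # reduced system rows
d_reduced = d_proj[keep]
```

Because the rows of one cluster are similar, most of the projected components are pure noise and are rejected, shrinking the system while keeping the informative directions.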
Incorporating Nonstationarity into IDF Curves across CONUS from Station Records and Implications
NASA Astrophysics Data System (ADS)
Wang, K.; Lettenmaier, D. P.
2017-12-01
Intensity-duration-frequency (IDF) curves are widely used for the engineering design of storm-affected structures. Current practice is to base IDF curves on observed precipitation extremes fit to a stationary probability distribution (e.g., the extreme value family). However, there is increasing evidence of nonstationarity in station records. We apply the Mann-Kendall trend test to over 1000 stations across the CONUS at a 0.05 significance level, and find that about 30% of the stations tested show significant nonstationarity for at least one duration (1-, 2-, 3-, 6-, 12-, 24-, and 48-hours). We fit the station records to a GEV distribution with time-varying location and scale parameters using a Bayesian methodology and compare the fit of stationary versus nonstationary GEV distributions to observed precipitation extremes. Within our fitted nonstationary GEV distributions, we compare distributions with a time-varying location parameter against distributions with both time-varying location and scale parameters. For distributions with two time-varying parameters, we pay particular attention to instances where the location and scale trends have opposing directions. Finally, we use the mathematical framework based on the work of Koutsoyiannis to generate IDF curves from the fitted GEV distributions and discuss the implications that time-varying parameters may have for simple scaling relationships. We apply the above methods to evaluate how frequency statistics based on a stationarity assumption compare with those that incorporate nonstationarity for both short- and long-term projects. Overall, we find that neglecting nonstationarity can lead to under- or over-estimates (depending on the trend for the given duration and region) of important statistics such as the design storm.
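A nonstationary GEV fit with a time-varying location parameter, mu(t) = mu0 + mu1*t, can be sketched as below. Plain maximum likelihood stands in for the study's Bayesian methodology, and the synthetic annual maxima with an imposed trend of 0.3 per year are illustrative.

```python
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

# Fit a GEV with linearly time-varying location by maximum likelihood.
rng = np.random.default_rng(5)
t = np.arange(60)                                   # 60 "years" of record
x = genextreme.rvs(c=-0.1, loc=20.0 + 0.3 * t, scale=3.0, random_state=rng)

def neg_log_lik(p):
    mu0, mu1, log_scale, shape = p
    return -np.sum(genextreme.logpdf(x, c=shape, loc=mu0 + mu1 * t,
                                     scale=np.exp(log_scale)))

fit = minimize(neg_log_lik, x0=[x.mean(), 0.0, np.log(x.std()), 0.0],
               method="Nelder-Mead", options={"maxiter": 8000, "maxfev": 8000})
mu0_hat, mu1_hat = fit.x[:2]
```

The fitted trend mu1_hat recovers the imposed 0.3/year; letting the scale depend on time as well only requires adding one more term to `neg_log_lik`.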
Distributed operating system for NASA ground stations
NASA Technical Reports Server (NTRS)
Doyle, John F.
1987-01-01
NASA ground stations are characterized by ever changing support requirements, so application software is developed and modified on a continuing basis. A distributed operating system was designed to optimize the generation and maintenance of those applications. Unusual features include automatic program generation from detailed design graphs, on-line software modification in the testing phase, and the incorporation of a relational database within a real-time, distributed system.
A modular Space Station/Base electrical power system - Requirements and design study.
NASA Technical Reports Server (NTRS)
Eliason, J. T.; Adkisson, W. B.
1972-01-01
The requirements and procedures necessary for definition and specification of an electrical power system (EPS) for the future space station are discussed herein. The considered space station EPS consists of a replaceable main power module with self-contained auxiliary power, guidance, control, and communication subsystems. This independent power source may 'plug into' a space station module which has its own electrical distribution, control, power conditioning, and auxiliary power subsystems. Integration problems are discussed, and a transmission system selected with local floor-by-floor power conditioning and distribution in the station module. This technique eliminates the need for an immediate long range decision on the ultimate space base power sources by providing capability for almost any currently considered option.
NASA Technical Reports Server (NTRS)
Youngblood, Wallace W.
1990-01-01
Viewgraphs of increased fire and toxic contaminant detection responsivity by use of distributed, aspirating sensors for space station are presented. Objectives of the concept described are (1) to enhance fire and toxic contaminant detection responsivity in habitable regions of space station; (2) to reduce system weight and complexity through centralized detector/monitor systems; (3) to increase fire signature information from selected locations in a space station module; and (4) to reduce false alarms.
NASA Astrophysics Data System (ADS)
Contractor, S.; Donat, M.; Alexander, L. V.
2017-12-01
Reliable observations of precipitation are necessary to determine past changes in precipitation and to validate models, allowing for reliable future projections. Existing gauge-based gridded datasets of daily precipitation and satellite-based observations contain artefacts and have short records, making them unsuitable for analysing precipitation extremes. The largest limiting factor for the gauge-based datasets is a dense and reliable station network. Currently, there are two major archives of global in situ daily rainfall data: the Global Historical Climatology Network (GHCN-Daily), hosted by the National Oceanic and Atmospheric Administration (NOAA), and the archive of the Global Precipitation Climatology Centre (GPCC), part of the Deutscher Wetterdienst (DWD). We combine the two archives and use automated quality control techniques to create a reliable long-term network of raw station data, which we then interpolate using block kriging to create a global gridded dataset of daily precipitation going back to 1950. We compare our interpolated dataset with existing global gridded data of daily precipitation, NOAA Climate Prediction Center (CPC) Global V1.0 and GPCC Full Data Daily Version 1.0, as well as various regional datasets. We find that our raw station density is much higher than that of other datasets. To avoid artefacts due to station network variability, we provide multiple versions of our dataset based on various completeness criteria, and we provide the standard deviation, kriging error and number of stations for each grid cell and timestep to encourage responsible use of the dataset. Despite our efforts to increase the raw data density, the in situ station network remains sparse in India after the 1960s and in Africa throughout the timespan of the dataset. Our dataset allows for more reliable global analyses of rainfall, including its extremes, and paves the way for better global precipitation observations with lower and more transparent uncertainties.
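The kriging step can be sketched at a single grid point with ordinary kriging; block kriging averages the same weight system over a grid cell. The exponential covariance model and its sill and range below are assumptions, and the station data are synthetic.

```python
import numpy as np

# Ordinary kriging of daily precipitation at one grid point from
# irregularly spaced stations.
rng = np.random.default_rng(6)
stations = rng.uniform(0.0, 100.0, size=(15, 2))   # station coordinates (km)
values = rng.gamma(2.0, 3.0, size=15)              # daily precipitation (mm)

def cov(h, sill=9.0, corr_range=30.0):
    """Assumed exponential covariance model."""
    return sill * np.exp(-h / corr_range)

target = np.array([50.0, 50.0])                    # grid-cell centre
d_ss = np.linalg.norm(stations[:, None] - stations[None, :], axis=2)
d_s0 = np.linalg.norm(stations - target, axis=1)

# Ordinary kriging system: station covariances plus a Lagrange
# multiplier enforcing sum(weights) == 1 (unbiasedness).
n = len(stations)
K = np.ones((n + 1, n + 1))
K[:n, :n] = cov(d_ss)
K[n, n] = 0.0
rhs = np.append(cov(d_s0), 1.0)
sol = np.linalg.solve(K, rhs)
w = sol[:n]                                        # kriging weights
estimate = w @ values                              # interpolated value (mm)
krige_var = cov(0.0) - rhs @ sol                   # kriging error variance
```

The kriging variance is exactly the per-cell "kriging error" field the dataset distributes alongside the estimates: it grows where stations are sparse, which is why the sparse networks in India and Africa matter.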
2013-01-01
Abstract Tettigettalna mariae Quartau & Boulard 1995 is recorded for the first time in Spain. Previously thought to be endemic to Portugal (occurring in the southern province of Algarve), the species is here reported from southern Spain as well, making it an Iberian endemism. The acoustic signals of the newly collected specimens were recorded in different localities of Huelva province, Andalusia, during August 2012. According to its presently known distribution, Tettigettalna mariae tends to occur in sparsely distributed, small-range populations in the southern Iberian Peninsula, favouring wooded areas with Pinus pinea. PMID:24723772
Roohi, Shahrokh; Grinnell, Margaret; Sandoval, Michelle; Cohen, Nicole J.; Crocker, Kimberly; Allen, Christopher; Dougherty, Cindy; Jolly, Julian; Pesik, Nicki
2018-01-01
The Centers for Disease Control and Prevention (CDC) Quarantine Stations distribute select lifesaving drug products that are not commercially available or are in limited supply in the United States for emergency treatment of certain health conditions. Following a retrospective analysis of shipment records, the authors estimated an average of 6.66 hours saved per shipment when drug products were distributed from quarantine stations compared to a hypothetical centralized site from CDC headquarters in Atlanta, GA. This evaluation supports the continued use of a decentralized model which leverages CDC's regional presence and maximizes efficiency in the distribution of lifesaving drugs. PMID:25779896
NASA Astrophysics Data System (ADS)
Yin, Qiang; Chen, Tian-jin; Li, Wei-yang; Xiong, Ze-cheng; Ma, Rui
2017-09-01
In order to obtain the deformation map and equivalent stress distribution of a rectifier cabinet for nuclear power generating stations, the mass distributions of the structural and electrical components are described, the tensile strengths of the lifting rings are checked, and a finite element model of the cabinet is set up in ANSYS. The transport conditions of the hoisting state and the fork-loading state are analyzed, and the deformation map and equivalent stress distribution are obtained. Issues requiring attention are identified. The analysis method and the obtained results provide a reference for the transport of rectifier cabinets for nuclear power generating stations.
Space Station environmental control and life support system distribution and loop closure studies
NASA Technical Reports Server (NTRS)
Humphries, William R.; Reuter, James L.; Schunk, Richard G.
1986-01-01
The NASA Space Station's environmental control and life support system (ECLSS) encompasses functional elements concerned with temperature and humidity control, atmosphere control and supply, atmosphere revitalization, fire detection and suppression, water recovery and management, waste management, and EVA support. Attention is presently given to functional and physical module distributions of the ECLSS among these elements, with a view to resource requirements and safety implications. A strategy of physical distribution coupled with functional centralization is proposed for the air revitalization and water reclamation systems. Also discussed is the degree of loop closure desirable in the oxygen and water reclamation loops of the initial-operational-capability Space Station.
Sparse and redundant representations for inverse problems and recognition
NASA Astrophysics Data System (ADS)
Patel, Vishal M.
Sparse and redundant representation of data enables the description of signals as linear combinations of a few atoms from a dictionary. In this dissertation, we study applications of sparse and redundant representations in inverse problems and object recognition. Furthermore, we propose two novel imaging modalities based on the recently introduced theory of Compressed Sensing (CS). This dissertation consists of four major parts. In the first part of the dissertation, we study a new type of deconvolution algorithm that is based on estimating the image from a shearlet decomposition. Shearlets provide a multi-directional and multi-scale decomposition that has been mathematically shown to represent distributed discontinuities such as edges better than traditional wavelets. We develop a deconvolution algorithm that allows the approximate inversion operator to be controlled on a multi-scale and multi-directional basis. Furthermore, we develop a method for the automatic determination of the threshold values for noise shrinkage at each scale and direction, without explicit knowledge of the noise variance, using a generalized cross-validation method. In the second part of the dissertation, we study a reconstruction method that recovers highly undersampled images assumed to have a sparse representation in a gradient domain by using partial measurement samples that are collected in the Fourier domain. Our method makes use of a robust generalized Poisson solver that greatly aids in achieving a significantly improved performance over similar proposed methods. We demonstrate by experiments that this new technique works with either random or restricted sampling scenarios more flexibly than its competitors.
In the third part of the dissertation, we introduce a novel Synthetic Aperture Radar (SAR) imaging modality which can provide a high-resolution map of the spatial distribution of targets and terrain using a significantly reduced number of transmitted and/or received electromagnetic waveforms. We demonstrate that this new imaging scheme requires no new hardware components and allows the aperture to be compressed. It also offers many new applications and advantages, including strong resistance to countermeasures and interception, imaging of much wider swaths, and reduced on-board storage requirements. The last part of the dissertation deals with object recognition based on learning dictionaries for simultaneous sparse signal approximation and feature extraction. A dictionary is learned for each object class from the given training examples by minimizing the representation error under a sparseness constraint. A novel test image is then projected onto the span of the atoms in each learned dictionary, and the residual vectors along with the coefficients are used for recognition. Applications to illumination-robust face recognition and automatic target recognition are presented.
Man-systems distributed system for Space Station Freedom
NASA Technical Reports Server (NTRS)
Lewis, J. L.
1990-01-01
Viewgraphs on man-systems distributed system for Space Station Freedom are presented. Topics addressed include: description of man-systems (definition, requirements, scope, subsystems, and topologies); implementation (approach, tools); man-systems interfaces (system to element and system to system); prime/supporting development relationship; selected accomplishments; and technical challenges.
Random-access scanning microscopy for 3D imaging in awake behaving animals
Nadella, K. M. Naga Srinivas; Roš, Hana; Baragli, Chiara; Griffiths, Victoria A.; Konstantinou, George; Koimtzis, Theo; Evans, Geoffrey J.; Kirkby, Paul A.; Silver, R. Angus
2018-01-01
Understanding how neural circuits process information requires rapid measurements from identified neurons distributed in 3D space. Here we describe an acousto-optic lens two-photon microscope that performs high-speed focussing and line-scanning within a volume spanning hundreds of micrometres. We demonstrate its random access functionality by selectively imaging cerebellar interneurons sparsely distributed in 3D and by simultaneously recording from the soma, proximal and distal dendrites of neocortical pyramidal cells in behaving mice. PMID:27749836
2014-06-17
[Figure residue: panels showing a Wigner distribution, an L-Wigner distribution, and their auto-correlation functions.] ...bilinear or higher-order autocorrelation functions will increase the number of missing samples, the analysis shows that accurate instantaneous frequency estimation can be achieved even if we deal with only a few samples, as long as the auto-correlation function is properly chosen to coincide with
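The idea in the fragment above can be made concrete with a minimal pseudo-Wigner computation: the distribution is the Fourier transform of the local auto-correlation x(n+m)x*(n-m), and the instantaneous frequency is read off at the spectral peak. The chirp signal and window size are made up for the demo.

```python
import numpy as np

# One time-slice of a discrete pseudo-Wigner distribution of a chirp,
# with the instantaneous frequency (IF) estimated from the peak bin.
N = 256
n = np.arange(N)
phase = 2 * np.pi * (0.1 * n + 0.05 * n**2 / N)    # linear-FM (chirp) phase
x = np.exp(1j * phase)

half = 32                                           # local correlation window
centre = N // 2
m = np.arange(-half, half)
acf = x[centre + m] * np.conj(x[centre - m])        # local auto-correlation
W = np.abs(np.fft.fft(acf))                         # pseudo-Wigner slice at `centre`

# The Wigner kernel doubles the phase, and the FFT length is 2*half, so
# each bin corresponds to 1/(2*half) cycles/sample of 2*f.
k_peak = int(np.argmax(W))
if_est = (k_peak / (2 * half)) / 2.0
if_true = 0.1 + 0.1 * centre / N                    # d(phase)/dn / (2*pi) at centre
```

Only 2*half samples of the signal enter the estimate, which is the point of the fragment: a well-chosen auto-correlation concentrates the IF information into very few samples.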
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rouet, François-Henry; Li, Xiaoye S.; Ghysels, Pieter
In this paper, we present a distributed-memory library for computations with dense structured matrices. A matrix is considered structured if its off-diagonal blocks can be approximated by a rank-deficient matrix with low numerical rank. Here, we use Hierarchically Semi-Separable (HSS) representations. Such matrices appear in many applications, for example, finite-element methods, boundary element methods, and so on. Exploiting this structure allows for fast solution of linear systems and/or fast computation of matrix-vector products, which are the two main building blocks of matrix computations. The compression algorithm that we use, which computes the HSS form of an input dense matrix, relies on randomized sampling with a novel adaptive sampling mechanism. We discuss the parallelization of this algorithm and also present the parallelization of structured matrix-vector product, structured factorization, and solution routines. The efficiency of the approach is demonstrated on large problems from different academic and industrial applications, on up to 8,000 cores. Finally, this work is part of a more global effort, the STRUctured Matrices PACKage (STRUMPACK) software package for computations with sparse and dense structured matrices. Hence, although useful in their own right, the routines also represent a step in the direction of a distributed-memory sparse solver.
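The randomized-sampling kernel at the heart of such compression can be sketched in a few lines: multiplying a numerically low-rank block by a small random test matrix captures its range, after which the block is represented through an orthonormal basis. This is the generic building block, not STRUMPACK's adaptive mechanism; sizes and ranks are illustrative.

```python
import numpy as np

# Randomized range finder: A ~= Q @ (Q.T @ A) using only matrix-vector
# products with a small random test matrix (plus oversampling).
rng = np.random.default_rng(7)
n, true_rank, oversample = 500, 12, 10
A = rng.normal(size=(n, true_rank)) @ rng.normal(size=(true_rank, n))

Omega = rng.normal(size=(n, true_rank + oversample))  # random test matrix
Y = A @ Omega                     # sample the range of A (only matvecs needed)
Q, _ = np.linalg.qr(Y)            # orthonormal basis for the sampled range
A_approx = Q @ (Q.T @ A)

rel_err = np.linalg.norm(A - A_approx) / np.linalg.norm(A)
```

Because only products with A are required, the same sampling works when A is available solely through a fast (e.g. sparse or hierarchical) multiply, which is what makes it attractive inside an HSS compression routine.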
47 CFR 74.780 - Broadcast regulations applicable to translators, low power, and booster stations.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., low power, and booster stations. 74.780 Section 74.780 Telecommunication FEDERAL COMMUNICATIONS... PROGRAM DISTRIBUTIONAL SERVICES Low Power TV, TV Translator, and TV Booster Stations § 74.780 Broadcast regulations applicable to translators, low power, and booster stations. The following rules are applicable to...
47 CFR 74.780 - Broadcast regulations applicable to translators, low power, and booster stations.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., low power, and booster stations. 74.780 Section 74.780 Telecommunication FEDERAL COMMUNICATIONS... PROGRAM DISTRIBUTIONAL SERVICES Low Power TV, TV Translator, and TV Booster Stations § 74.780 Broadcast regulations applicable to translators, low power, and booster stations. The following rules are applicable to...
47 CFR 74.780 - Broadcast regulations applicable to translators, low power, and booster stations.
Code of Federal Regulations, 2014 CFR
2014-10-01
..., low power, and booster stations. 74.780 Section 74.780 Telecommunication FEDERAL COMMUNICATIONS... PROGRAM DISTRIBUTIONAL SERVICES Low Power TV, TV Translator, and TV Booster Stations § 74.780 Broadcast regulations applicable to translators, low power, and booster stations. The following rules are applicable to...
49 CFR 192.741 - Pressure limiting and regulating stations: Telemetering or recording gauges.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 3 2010-10-01 2010-10-01 false Pressure limiting and regulating stations... STANDARDS Maintenance § 192.741 Pressure limiting and regulating stations: Telemetering or recording gauges. (a) Each distribution system supplied by more than one district pressure regulating station must be...
Lahr, J.C.; Chouet, B.A.; Stephens, C.D.; Power, J.A.; Page, R.A.
1994-01-01
Determination of the precise locations of seismic events associated with the 1989-1990 eruptions of Redoubt Volcano posed a number of problems, including poorly known crustal velocities, a sparse station distribution, and an abundance of events with emergent phase onsets. In addition, the high relief of the volcano could not be incorporated into the hypoellipse earthquake location algorithm. This algorithm was modified to allow hypocenters to be located above the elevation of the seismic stations. The velocity model was calibrated on the basis of a posteruptive seismic survey, in which four chemical explosions were recorded by eight stations of the permanent network supplemented with 20 temporary seismographs deployed on and around the volcanic edifice. The model consists of a stack of homogeneous horizontal layers; setting the top of the model at the summit allows events to be located anywhere within the volcanic edifice. Detailed analysis of hypocentral errors shows that the long-period (LP) events constituting the vigorous 23-hour swarm that preceded the initial eruption on December 14 could have originated from a point 1.4 km below the crater floor. A similar analysis of LP events in the swarm preceding the major eruption on January 2 shows they also could have originated from a point, the location of which is shifted 0.8 km northwest and 0.7 km deeper than the source of the initial swarm. We suggest this shift in LP activity reflects a northward jump in the pathway for magmatic gases caused by the sealing of the initial pathway by magma extrusion during the last half of December. Volcano-tectonic (VT) earthquakes did not occur until after the initial 23-hour-long swarm. They began slowly just below the LP source and their rate of occurrence increased after the eruption of 01:52 AST on December 15, when they shifted to depths of 6 to 10 km. 
After January 2 the VT activity migrated gradually northward; this migration suggests northward propagating withdrawal of magma from a plexus of dikes and/or sills located in the 6 to 10 km depth range. Precise relocations of selected events prior to January 2 clearly resolve a narrow, steeply dipping, pencil-shaped concentration of activity in the depth range of 1-7 km, which illuminates the conduit along which magma was transported to the surface. A third event type, named hybrid, which blends the characteristics of both VT and LP events, originates just below the LP source, and may reflect brittle failure along a zone intersecting a fluid-filled crack. The distribution of hybrid events is elongated 0.2-0.4 km in an east-west direction. This distribution may offer constraints on the orientation and size of the fluid-filled crack inferred to be the source of the LP events. © 1994.
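The principle behind locating events from arrival times and a velocity model can be sketched with a toy grid search (a uniform half-space rather than the layered model and hypoellipse algorithm used in the study; all coordinates and velocities are made up). Demeaning the residuals removes the unknown origin time, and allowing negative depths permits hypocenters above the station datum, the capability the modified algorithm added.

```python
import numpy as np

V = 3.5  # assumed uniform P velocity, km/s (illustrative)
stations = np.array([[0.0, 0.0, 0.5],
                     [4.0, 0.0, 1.2],
                     [0.0, 4.0, 0.8],
                     [4.0, 4.0, 2.0]])       # x, y, depth (km)
true_src = np.array([2.0, 2.0, 3.0])
t_obs = np.linalg.norm(stations - true_src, axis=1) / V

def locate(t_obs, stations, grid):
    """Grid search minimizing origin-time-free travel-time residuals."""
    best, best_misfit = None, np.inf
    for src in grid:
        t_pred = np.linalg.norm(stations - src, axis=1) / V
        # Demeaning removes the unknown origin-time offset.
        r = (t_obs - t_pred) - np.mean(t_obs - t_pred)
        misfit = np.sum(r ** 2)
        if misfit < best_misfit:
            best, best_misfit = src, misfit
    return best

xs = np.linspace(0.0, 4.0, 21)
zs = np.linspace(-1.0, 5.0, 31)   # negative depth = above the station datum
grid = np.array([[x, y, z] for x in xs for y in xs for z in zs])
est = locate(t_obs, stations, grid)
```

With noise-free synthetic times the search recovers the true hypocenter; real data add picking errors and velocity-model uncertainty, which is why the error analysis in the study matters.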
NASA Astrophysics Data System (ADS)
Tomaskovicova, Sonia; Paamand, Eskild; Ingeman-Nielsen, Thomas; Bauer-Gottwein, Peter
2013-04-01
The sedimentary settings of West Greenlandic towns, with their fine-grained, often ice-rich marine deposits, are of great concern in building and construction projects in Greenland, as they lose volume, strength, and bearing capacity upon thaw. Since extensive permafrost thawing over large areas of the inhabited Greenlandic coast has been predicted as a result of climate change, assessing the extent and thermal properties of such formations is of great technical and economic interest. Availability of methods able to determine the thermal parameters of permafrost and forecast its reaction to climate evolution is therefore crucial for sustainable infrastructure planning and development in the Arctic. We are developing a model of heat transport for permafrost able to assess the thermal properties of the ground based on calibration by surface geoelectrical measurements and ground surface temperature measurements. The advantages of the modeling approach and the use of exclusively surface measurements (in comparison with direct measurements on core samples) are smaller environmental impact, cheaper logistics, assessment of permafrost conditions over larger areas, and the possibility of forecasting the fate of permafrost by applying climate forcing. In our approach, the heat model simulates the temperature distribution in the ground based on ground surface temperature, specified proportions of the ground constituents, and their estimated thermal parameters. The calculated temperatures in the specified model layers govern the phase distribution between unfrozen water and ice. The changing proportion of unfrozen water content as a function of temperature is the main parameter driving the evolution of the electrical properties of the ground. We use a forward modeling scheme to calculate the apparent resistivity distribution of such ground as if collected from a surface geoelectrical array.
The calculated resistivity profile is compared to actual field measurements, and the difference between the synthetic and the measured apparent resistivities is minimized in a least-squares inversion procedure by adjusting the thermal parameters of the heat model. A site-specific calibration is required since the relation between unfrozen water content and temperature is strongly dependent on the grain size of the soil. We present details of an automated permanent field measurement setup that has been established to collect the calibration data in Ilulissat, West Greenland. Considering the station's high-latitude location, this setup is unique of its kind, since installing automated geophysical stations under Arctic conditions is a challenging task. The main issues are the availability of adapted equipment, the high demands on robustness of equipment and method imposed by the harsh environment, the remoteness of the field sites, and the related problem of powering such systems. By showing results from the newly established geoelectrical station over the freezing period in autumn 2012, we show 2D time-lapse resistivity tomography to be an effective method for permafrost monitoring at high latitudes. We demonstrate the effectiveness of the time-lapse geoelectrical signal for calibrating the petrophysical relationship, which is enhanced compared to sparse measurements.
Individual snag detection using neighborhood attribute filtered airborne lidar data
Brian M. Wing; Martin W. Ritchie; Kevin Boston; Warren B. Cohen; Michael J. Olsen
2015-01-01
The ability to estimate and monitor standing dead trees (snags) has been difficult due to their irregular and sparse distribution, often requiring intensive sampling methods to obtain statistically significant estimates. This study presents a new method for estimating and monitoring snags using neighborhood attribute filtered airborne discrete-return lidar data. The...
Matthew S. Lobdell; Patrick G. Thompson
2017-01-01
Quercus oglethorpensis (Oglethorpe oak) is an endangered species native to the southeastern United States. It is threatened by land use changes, competition, and chestnut blight disease caused by Cryphonectria parasitica. The species is distributed sparsely over a linear distance of ca. 950 km. Its range includes several...
A practical modification of horizontal line sampling for snag and cavity tree inventory
M. J. Ducey; G. J. Jordan; J. H. Gove; H. T. Valentine
2002-01-01
Snags and cavity trees are important structural features in forests, but they are often sparsely distributed, making efficient inventories problematic. We present a straightforward modification of horizontal line sampling designed to facilitate inventory of these features while remaining compatible with commonly employed sampling methods for the living overstory. The...
Sparse distributed memory prototype: Principles of operation
NASA Technical Reports Server (NTRS)
Flynn, Michael J.; Kanerva, Pentti; Ahanin, Bahram; Bhadkamkar, Neal; Flaherty, Paul; Hickey, Philip
1988-01-01
Sparse distributed memory is a generalized random access memory (RAM) for long binary words. Such words can be written into and read from the memory, and they can be used to address the memory. The main attribute of the memory is sensitivity to similarity, meaning that a word can be read back not only by giving the original write address but also by giving one close to it as measured by the Hamming distance between addresses. Large memories of this kind are expected to have wide use in speech and scene analysis, in signal detection and verification, and in adaptive control of automated equipment. The memory can be realized as a simple, massively parallel computer. Digital technology has reached a point where building large memories is becoming practical. The research is aimed at resolving major design issues that have to be faced in building the memories. The design of a prototype memory with 256-bit addresses and from 8K to 128K locations for 256-bit words is described. A key aspect of the design is extensive use of dynamic RAM and other standard components.
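The read/write mechanism described above can be sketched in software (a toy model with illustrative sizes, not the prototype hardware design): hard locations are random 256-bit addresses, a location is activated when its Hamming distance to the reference address is within a radius, writes increment or decrement bit counters at activated locations, and reads threshold the counter sums. The radius value below is an assumption chosen so that a few percent of locations activate.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, R = 256, 2000, 115   # word length, hard locations, activation radius

hard_addr = rng.integers(0, 2, size=(M, N), dtype=np.int8)
counters = np.zeros((M, N), dtype=np.int32)

def activated(addr):
    # Locations within Hamming distance R of the address are activated.
    dist = np.sum(hard_addr != addr, axis=1)
    return dist <= R

def write(addr, data):
    sel = activated(addr)
    counters[sel] += np.where(data == 1, 1, -1)  # +1 for a 1-bit, -1 for a 0-bit

def read(addr):
    sel = activated(addr)
    sums = counters[sel].sum(axis=0)
    return (sums > 0).astype(np.int8)            # majority vote per bit

word = rng.integers(0, 2, size=N, dtype=np.int8)
write(word, word)                 # autoassociative store: address = data
noisy = word.copy()
flip = rng.choice(N, size=20, replace=False)
noisy[flip] ^= 1                  # corrupt 20 of 256 address bits
recalled = read(noisy)            # recovers the stored word from the noisy cue
```

Reading from the corrupted address still recovers the stored word, which is the "sensitivity to similarity" property the abstract describes.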
Huh, Yang Hoon; Noh, Minsoo; Burden, Frank R.; Chen, Jennifer C.; Winkler, David A.; Sherley, James L.
2015-01-01
There is a long-standing unmet clinical need for biomarkers with high specificity for distributed stem cells (DSCs) in tissues, or for use in diagnostic and therapeutic cell preparations (e.g., bone marrow). Although DSCs are essential for tissue maintenance and repair, accurate determination of their numbers for medical applications has been problematic. Previous searches for biomarkers expressed specifically in DSCs were hampered by difficulty obtaining pure DSCs and by the challenges in mining complex molecular expression data. To identify such useful and specific DSC biomarkers, we combined a novel sparse feature selection method with combinatorial molecular expression data focused on asymmetric self-renewal, a conspicuous property of DSCs. The analysis identified reduced expression of the histone H2A variant H2A.Z as a superior molecular discriminator for DSC asymmetric self-renewal. Subsequent molecular expression studies showed H2A.Z to be a novel “pattern-specific biomarker” for asymmetrically self-renewing cells with sufficient specificity to count asymmetrically self-renewing DSCs in vitro and potentially in situ. PMID:25636161
Coverage maximization under resource constraints using a nonuniform proliferating random walk.
Saha, Sudipta; Ganguly, Niloy
2013-02-01
Information management services on networks, such as search and dissemination, play a key role in any large-scale distributed system. One of the most desirable features of these services is the maximization of the coverage, i.e., the number of distinctly visited nodes under constraints of network resources as well as time. However, redundant visits of nodes by different message packets (modeled, e.g., as walkers) initiated by the underlying algorithms for these services cause wastage of network resources. In this work, using results from analytical studies done in the past on a K-random-walk-based algorithm, we identify that redundancy quickly increases with an increase in the density of the walkers. Based on this postulate, we design a very simple distributed algorithm which dynamically estimates the density of the walkers and thereby carefully proliferates walkers in sparse regions. We use extensive computer simulations to test our algorithm in various kinds of network topologies whereby we find it to be performing particularly well in networks that are highly clustered as well as sparse.
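The coverage metric discussed above (distinct nodes visited under a fixed message budget) is easy to simulate. The sketch below compares one walker against several walkers spread over a ring graph under the same total step budget; it is an illustration of the metric only and omits the paper's density-estimation and proliferation mechanism.

```python
import random

random.seed(42)

def random_walk_coverage(adj, starts, steps):
    """Number of distinct nodes visited by independent random walkers."""
    visited = set(starts)
    walkers = list(starts)
    for _ in range(steps):
        for i, node in enumerate(walkers):
            walkers[i] = random.choice(adj[node])  # move to a random neighbor
            visited.add(walkers[i])
    return len(visited)

# Ring of 100 nodes: each node's neighbors are its two adjacent nodes.
n = 100
adj = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}

one = random_walk_coverage(adj, [0], 200)                 # 1 walker, 200 steps
many = random_walk_coverage(adj, [0, 25, 50, 75], 50)     # 4 walkers, 50 steps each
```

On sparse topologies, spreading the same step budget over walkers started in different regions tends to reduce redundant revisits, which is the intuition the proliferation strategy exploits.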
Two-dimensional shape recognition using sparse distributed memory
NASA Technical Reports Server (NTRS)
Kanerva, Pentti; Olshausen, Bruno
1990-01-01
Researchers propose a method for recognizing two-dimensional shapes (hand-drawn characters, for example) with an associative memory. The method consists of two stages: first, the image is preprocessed to extract tangents to the contour of the shape; second, the set of tangents is converted to a long bit string for recognition with sparse distributed memory (SDM). SDM provides a simple, massively parallel architecture for an associative memory. Long bit vectors (256 to 1000 bits, for example) serve as both data and addresses to the memory, and patterns are grouped or classified according to similarity in Hamming distance. At the moment, tangents are extracted in a simple manner by progressively blurring the image and then using a Canny-type edge detector (Canny, 1986) to find edges at each stage of blurring. This results in a grid of tangents. While the technique used for obtaining the tangents is at present rather ad hoc, researchers plan to adopt an existing framework for extracting edge orientation information over a variety of resolutions, such as suggested by Watson (1987, 1983), Marr and Hildreth (1980), or Canny (1986).
NASA Technical Reports Server (NTRS)
Jaeckel, Louis A.
1989-01-01
To study the problems of encoding visual images for use with a Sparse Distributed Memory (SDM), I consider a specific class of images: those that consist of several pieces, each of which is a line segment or an arc of a circle. This class includes line drawings of characters such as letters of the alphabet. I give a method of representing a segment or an arc by five numbers in a continuous way; that is, similar arcs have similar representations. I also give methods for encoding these numbers as bit strings in an approximately continuous way. The set of possible segments and arcs may be viewed as a five-dimensional manifold M, whose structure is like a Möbius strip. An image, considered to be an unordered set of segments and arcs, is therefore represented by a set of points in M, one for each piece. I then discuss the problem of constructing a preprocessor to find the segments and arcs in these images, although a preprocessor has not been developed. I also describe a possible extension of the representation.
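One standard way to encode a number as a bit string "in an approximately continuous way" is a thermometer code, sketched below. This illustrates the property the abstract requires (nearby values yield codes that are close in Hamming distance); it is not necessarily the specific encoding used in the report.

```python
def thermometer(x, lo, hi, nbits):
    """Encode x in [lo, hi] as nbits bits: the first `level` bits are 1.
    Nearby values of x differ in only a few bits, so Hamming distance
    in the code approximately tracks distance in the value."""
    level = round((x - lo) / (hi - lo) * nbits)
    level = max(0, min(nbits, level))
    return [1] * level + [0] * (nbits - level)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

c1 = thermometer(0.50, 0.0, 1.0, 16)   # mid-range value
c2 = thermometer(0.55, 0.0, 1.0, 16)   # nearby value -> nearby code
c3 = thermometer(0.95, 0.0, 1.0, 16)   # distant value -> distant code
```

Here `hamming(c1, c2)` is much smaller than `hamming(c1, c3)`, so patterns assembled from such codes can be compared meaningfully by the SDM's Hamming-distance addressing. A circular quantity (like the Möbius-strip coordinate) would need a cyclic variant of this code.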
Spatial-temporal variation of marginal land suitable for energy plants from 1990 to 2010 in China
NASA Astrophysics Data System (ADS)
Jiang, Dong; Hao, Mengmeng; Fu, Jingying; Zhuang, Dafang; Huang, Yaohuan
2014-07-01
Energy plants are the main source of bioenergy, which will play an increasingly important role in future energy supplies. With limited cultivated land resources in China, the development of energy plants may primarily rely on marginal land. In this study, based on land use data from 1990 to 2010 (in five-year periods) and other auxiliary data, the distribution of marginal land suitable for energy plants was determined using a multi-factor integrated assessment method. The variation in land use type and spatial distribution of marginal land suitable for energy plants across decades was analyzed. The results indicate that the total amount of marginal land suitable for energy plants decreased from 136.501 million ha to 114.225 million ha between 1990 and 2010. The land use types that decreased are primarily shrub land, sparse forest land, moderately dense grassland, and sparse grassland, and the largest variation areas are located in Guangxi, Tibet, Heilongjiang, Xinjiang, and Inner Mongolia. The results of this study will provide more effective data reference and decision-making support for the long-term planning of bioenergy resources.
Evaluating the Capability of High-Altitude Infrasound Platforms to Cover Gaps in Existing Networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, Daniel
A variety of Earth surface and atmospheric sources generate low frequency sound waves that can travel great distances. Despite a rich history of ground-based sensor studies, very few experiments have investigated the prospects of free floating microphone arrays at high altitudes. However, recent initiatives have shown that such networks have very low background noise and may sample an acoustic wave field that is fundamentally different than that at the Earth's surface. The experiments have been limited to at most two stations at altitude, limiting their utility in acoustic event detection and localization. We describe the deployment of five drifting microphone stations at altitudes between 21 and 24 km above sea level. The stations detected one of two regional ground-based explosions as well as the ocean microbarom while traveling almost 500 km across the American Southwest. The explosion signal consisted of multiple arrivals; signal amplitudes did not correlate with sensor elevation or source range. A sparse network method that employed curved wave front corrections was able to determine the backazimuth from the free flying network to the acoustic source. Episodic broad band signals similar to those seen on previous flights in the same region were noted as well, but their source remains unclear. Background noise levels were commensurate with those on infrasound stations in the International Monitoring System (IMS) below 2 seconds, but sensor self noise appears to dominate at higher frequencies.
NASA Astrophysics Data System (ADS)
Bowman, Daniel C.; Albert, Sarah A.
2018-06-01
A variety of Earth surface and atmospheric sources generate low-frequency sound waves that can travel great distances. Despite a rich history of ground-based sensor studies, very few experiments have investigated the prospects of free floating microphone arrays at high altitudes. However, recent initiatives have shown that such networks have very low background noise and may sample an acoustic wave field that is fundamentally different than that at Earth's surface. The experiments have been limited to at most two stations at altitude, making acoustic event detection and localization difficult. We describe the deployment of four drifting microphone stations at altitudes between 21 and 24 km above sea level. The stations detected one of two regional ground-based chemical explosions as well as the ocean microbarom while travelling almost 500 km across the American Southwest. The explosion signal consisted of multiple arrivals; signal amplitudes did not correlate with sensor elevation or source range. The waveforms and propagation patterns suggest interactions with gravity waves at 35-45 km altitude. A sparse network method that employed curved wave front corrections was able to determine the backazimuth from the free flying network to the acoustic source. Episodic signals similar to those seen on previous flights in the same region were noted, but their source remains unclear. Background noise levels were commensurate with those on infrasound stations in the International Monitoring System below 2 s.
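The backazimuth estimate from a sparse network can be illustrated with the plane-wave special case (the paper's method adds curved wave front corrections on top of this; station coordinates, sound speed, and backazimuth below are made up): arrival-time differences across the stations are fit in a least-squares sense for a horizontal slowness vector, whose direction gives the backazimuth to the source.

```python
import numpy as np

# Hypothetical station coordinates (km, east/north) and a plane wave
# arriving from backazimuth 60 degrees at 0.30 km/s.
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 12.0], [-8.0, 5.0]])
baz_true = np.deg2rad(60.0)
c = 0.30
# The slowness vector points along the propagation direction,
# i.e. away from the source (which lies at azimuth baz_true).
s = -np.array([np.sin(baz_true), np.cos(baz_true)]) / c
t = stations @ s          # synthetic arrival times relative to the origin

# Least-squares plane-wave fit: solve stations @ s_est = t.
s_est, *_ = np.linalg.lstsq(stations, t, rcond=None)
baz_est = np.degrees(np.arctan2(-s_est[0], -s_est[1])) % 360.0
speed_est = 1.0 / np.linalg.norm(s_est)
```

With four stations and two unknowns the fit is overdetermined, so timing noise is averaged down; a drifting balloon network additionally needs the per-station positions at signal arrival time, which GPS tracking provides.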
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, Daniel C.; Albert, Sarah A.
2018-02-22
A variety of Earth surface and atmospheric sources generate low frequency sound waves that can travel great distances. Despite a rich history of ground-based sensor studies, very few experiments have investigated the prospects of free floating microphone arrays at high altitudes. However, recent initiatives have shown that such networks have very low background noise and may sample an acoustic wave field that is fundamentally different than that at Earth's surface. The experiments have been limited to at most two stations at altitude, making acoustic event detection and localization difficult. We describe the deployment of four drifting microphone stations at altitudes between 21 and 24 km above sea level. The stations detected one of two regional ground-based chemical explosions as well as the ocean microbarom while traveling almost 500 km across the American Southwest. The explosion signal consisted of multiple arrivals; signal amplitudes did not correlate with sensor elevation or source range. The waveforms and propagation patterns suggest interactions with gravity waves at 35-45 km altitude. A sparse network method that employed curved wave front corrections was able to determine the backazimuth from the free flying network to the acoustic source. Episodic signals similar to those seen on previous flights in the same region were noted, but their source remains unclear. Lastly, background noise levels were commensurate with those on infrasound stations in the International Monitoring System below 2 seconds.
NASA Technical Reports Server (NTRS)
Amonlirdviman, Keith; Farley, Todd C.; Hansman, R. John, Jr.; Ladik, John F.; Sherer, Dana Z.
1998-01-01
A distributed real-time simulation of the civil air traffic environment developed to support human factors research in advanced air transportation technology is presented. The distributed environment is based on a custom simulation architecture designed for simplicity and flexibility in human experiments. Standard Internet protocols are used to create the distributed environment, linking an advanced cockpit simulator, an Air Traffic Control simulator, and a pseudo-aircraft control and simulation management station. The pseudo-aircraft control station also functions as a scenario design tool for coordinating human factors experiments. This station incorporates a pseudo-pilot interface designed to reduce the workload of human operators piloting multiple aircraft simultaneously in real time. The application of this distributed simulation facility to support a study of the effect of shared information (via air-ground datalink) on pilot/controller shared situation awareness and re-route negotiation is also presented.
State-of-the-art of dc components for secondary power distribution of Space Station Freedom
NASA Technical Reports Server (NTRS)
Krauthamer, Stanley; Gangal, Mukund; Das, Radhe S. L.
1991-01-01
120-V dc secondary power distribution has been selected for Space Station Freedom. State-of-the-art components and subsystems are examined in terms of performance, size, and topology. One objective of this work is to inform Space Station users of what is available in power supplies and power control devices. The other objective is to stimulate interest in the component industry so that more focused product development can be started. Based on the results of this study, it is estimated that, with some redesign, modification, and space qualification, many of these components may be applied to Space Station needs.
Global distribution of ozone for various seasons
NASA Technical Reports Server (NTRS)
Koprova, L. I.
1979-01-01
A technique which was used to obtain a catalog of the seasonal global distribution of ozone is presented. The technique is based on the simultaneous use of 1964-1975 data on the total ozone content from a worldwide network of ozonometric stations and on the vertical ozone profile from ozone sounding stations.
Exercise of the SSM/PMAD Breadboard [Space Station Module/Power Management And Distribution]
NASA Technical Reports Server (NTRS)
Walls, Bryan
1989-01-01
The Space Station Module Power Management and Distribution (SSM/PMAD) Breadboard is a test facility designed for advanced development of space power automation. Originally designed for 20-kHz power, the system is being converted to work with direct current (dc). Power levels are on a par with those expected for a Space Station module. Some of the strengths and weaknesses of the SSM/PMAD system in design and function are examined, and the future directions foreseen for the system are outlined.
Dose-shaping using targeted sparse optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sayre, George A.; Ruan, Dan
2013-07-15
Purpose: Dose volume histograms (DVHs) are common tools in radiation therapy treatment planning to characterize plan quality. As statistical metrics, DVHs provide a compact summary of the underlying plan at the cost of losing spatial information: the same or similar dose-volume histograms can arise from substantially different spatial dose maps. This is exactly the reason why physicians and physicists scrutinize dose maps even after they satisfy all DVH endpoints numerically. However, up to this point, little has been done to control spatial phenomena, such as the spatial distribution of hot spots, which has significant clinical implications. To this end, the authors propose a novel objective function that enables a more direct tradeoff between target coverage, organ-sparing, and planning target volume (PTV) homogeneity, and present our findings from four prostate cases, a pancreas case, and a head-and-neck case to illustrate the advantages and general applicability of our method. Methods: In designing the energy minimization objective (E_tot^sparse), the authors utilized the following robust cost functions: (1) an asymmetric linear well function to allow differential penalties for underdose, relaxation of prescription dose, and overdose in the PTV; (2) a two-piece linear function to heavily penalize high dose and mildly penalize low and intermediate dose in organs-at-risk (OARs); and (3) a total variation energy, i.e., the L_1 norm applied to the first-order approximation of the dose gradient in the PTV. By minimizing a weighted sum of these robust costs, general conformity to dose prescription and dose-gradient prescription is achieved while encouraging prescription violations to follow a Laplace distribution. In contrast, conventional quadratic objectives are associated with a Gaussian distribution of violations, which is less forgiving of large violations of prescription than the Laplace distribution.
As a result, the proposed objective E_tot^sparse improves the tradeoff between planning goals by 'sacrificing' voxels that have already been violated to improve PTV coverage, PTV homogeneity, and/or OAR-sparing. In doing so, overall plan quality is increased, since these large violations only arise if a net reduction in E_tot^sparse occurs as a result. For example, large violations of the dose prescription in the PTV in E_tot^sparse-optimized plans will naturally localize to voxels in and around PTV-OAR overlaps, where OAR-sparing may be increased without compromising target coverage. The authors compared the results of our method and the corresponding clinical plans using analyses of DVH plots, dose maps, and two quantitative metrics that quantify PTV homogeneity and overdose. These metrics do not penalize underdose, since E_tot^sparse-optimized plans were planned such that their target coverage was similar to or better than that of the clinical plans. Finally, plan deliverability was assessed with the 2D modulation index. Results: The proposed method was implemented using IBM's CPLEX optimization package (ILOG CPLEX, Sunnyvale, CA) and required 1-4 min to solve with a 12-core Intel i7 processor. In the testing procedure, the authors optimized for several points on the Pareto surface of four 7-field 6 MV prostate cases that were optimized for different levels of PTV homogeneity and OAR-sparing. The generated results were compared against each other and the clinical plan by analyzing their DVH plots and dose maps. After developing intuition by planning the four prostate cases, which had relatively few tradeoffs, the authors applied our method to a 7-field 6 MV pancreas case and a 9-field 6 MV head-and-neck case to test the potential impact of our method on more challenging cases.
The authors found that our formulation: (1) provided excellent flexibility for balancing OAR-sparing with PTV homogeneity; and (2) permitted the dose planner more control over the evolution of the PTV's spatial dose distribution than conventional objective functions. In particular, E_tot^sparse-optimized plans for the pancreas case and head-and-neck case exhibited substantially improved sparing of the spinal cord and parotid glands, respectively, while maintaining or improving sparing for other OARs and markedly improving PTV homogeneity. Plan deliverability for E_tot^sparse-optimized plans was shown to be better than that of their associated clinical plans, according to the two-dimensional modulation index. Conclusions: These results suggest that our formulation may be used to improve dose-shaping and OAR-sparing for complicated disease sites, such as the pancreas or head and neck. Furthermore, our objective function and constraints are linear and constitute a linear program, which converges to the global minimum quickly and can be easily implemented in treatment planning software. Thus, the authors expect fast translation of our method to the clinic, where it may have a positive impact on plan quality for challenging disease sites.
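The three robust cost functions named in the abstract can be sketched directly; the following is a simplified 1-D illustration with made-up weights, knee points, and voxel doses, not the clinical implementation or its linear-programming formulation.

```python
import numpy as np

def asymmetric_well(d, lo, hi, w_under, w_over):
    """Linear penalties below lo and above hi; zero inside [lo, hi] (PTV term)."""
    return w_under * np.maximum(lo - d, 0.0) + w_over * np.maximum(d - hi, 0.0)

def two_piece(d, knee, w_low, w_high):
    """Mild slope below the knee dose, steep slope above it (OAR term)."""
    return np.where(d <= knee, w_low * d, w_low * knee + w_high * (d - knee))

def total_variation(d):
    """L1 norm of the first-order dose gradient along a 1-D voxel line."""
    return np.sum(np.abs(np.diff(d)))

# Hypothetical voxel doses (Gy) and weights, for illustration only.
ptv_dose = np.array([58.0, 60.0, 61.0, 66.0])
oar_dose = np.array([10.0, 25.0, 40.0])
E = (asymmetric_well(ptv_dose, lo=59.0, hi=62.0, w_under=2.0, w_over=1.0).sum()
     + two_piece(oar_dose, knee=30.0, w_low=0.1, w_high=1.0).sum()
     + 0.5 * total_variation(ptv_dose))
```

All three terms are piecewise linear in the dose, which is why the full objective can be posed as a linear program, and linear (Laplace-like) penalties grow more slowly than quadratic ones for large violations, producing the "forgiving" behavior described above.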
Charging Guidance of Electric Taxis Based on Adaptive Particle Swarm Optimization
Niu, Liyong; Zhang, Di
2015-01-01
Electric taxis play an important role in the deployment of electric vehicles. Actual operational data from electric taxis in Shenzhen, China, are analyzed, and, to address the unbalanced utilization of charging station equipment over time, an electric taxi charging guidance system is proposed based on charging station and vehicle information. A charging guidance model is established that guides charging according to the positions of taxis and charging stations, using adaptive mutation particle swarm optimization. The simulation is based on actual data from Shenzhen charging stations, and the results show that after charging guidance, electric taxis are evenly distributed to appropriate charging stations according to the number of charging piles at each station. Even distribution among the charging stations in the area is achieved and the utilization of charging equipment is improved, verifying that the proposed charging guidance method is feasible. The improved utilization of charging equipment can greatly conserve public charging infrastructure resources. PMID:26236770
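The abstract names adaptive mutation particle swarm optimization but gives no algorithmic detail. As a rough stand-alone illustration only, the sketch below implements a standard particle swarm with a diversity-triggered mutation step on a toy objective; all parameter values and the mutation rule are assumptions, not taken from the paper:

```python
import random

def pso(f, dim=2, n_particles=30, iters=200, bounds=(-10.0, 10.0), seed=1):
    """Particle swarm with an adaptive mutation step: when the swarm
    collapses around the global best, a fifth of the particles are
    re-scattered to keep exploring."""
    rnd = random.Random(seed)
    lo, hi = bounds
    pos = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    best_i = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[best_i][:], pval[best_i]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:
                pval[i], pbest[i] = v, pos[i][:]
                if v < gval:
                    gval, gbest = v, pos[i][:]
        # Adaptive mutation: if all particles have converged onto the
        # global best, randomly re-scatter 20% of them.
        spread = max(abs(p[d] - gbest[d]) for p in pos for d in range(dim))
        if spread < 1e-3:
            for i in rnd.sample(range(n_particles), n_particles // 5):
                pos[i] = [rnd.uniform(lo, hi) for _ in range(dim)]
    return gbest, gval
```

In the paper's setting the objective would encode taxi-to-station travel distance and per-station charging pile counts rather than the convex toy function used here.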
Dictionary learning and time sparsity in dynamic MRI.
Caballero, Jose; Rueckert, Daniel; Hajnal, Joseph V
2012-01-01
Sparse representation methods have been shown to tackle adequately the inherent speed limits of magnetic resonance imaging (MRI) acquisition. Recently, learning-based techniques have been used to further accelerate the acquisition of 2D MRI. The extension of such algorithms to dynamic MRI (dMRI) requires careful examination of the signal sparsity distribution among the different dimensions of the data. Notably, the potential of temporal gradient (TG) sparsity in dMRI has not yet been explored. In this paper, a novel method for the acceleration of cardiac dMRI is presented which investigates the potential benefits of enforcing sparsity constraints on patch-based learned dictionaries and TG at the same time. We show that an algorithm exploiting sparsity on these two domains can outperform previous sparse reconstruction techniques.
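The paper's reconstruction jointly enforces dictionary-domain and temporal-gradient sparsity; as a minimal stand-alone illustration of the temporal-gradient idea only (a crude proximal step, not the authors' algorithm), the sketch below soft-thresholds the frame-to-frame differences of a single pixel's time series and reintegrates:

```python
import math

def soft_threshold_tg(x, thr):
    """Soft-threshold the temporal gradient (first differences) of a 1D
    time series and reintegrate. Small frame-to-frame fluctuations are
    suppressed; large, genuine transitions survive (shrunk by thr)."""
    d = [x[i + 1] - x[i] for i in range(len(x) - 1)]
    d = [math.copysign(max(abs(v) - thr, 0.0), v) for v in d]
    out = [x[0]]
    for v in d:
        out.append(out[-1] + v)
    return out
```

On a piecewise-constant intensity curve corrupted by small oscillations, only the true transition survives thresholding, which is why the temporal gradient of cardiac dMRI frames is a plausibly sparse domain.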
Fast generation of sparse random kernel graphs
Hagberg, Aric; Lemons, Nathan; Du, Wen-Bo
2015-09-10
The development of kernel-based inhomogeneous random graphs has provided models that are flexible enough to capture many observed characteristics of real networks, and that are also mathematically tractable. We specify a class of inhomogeneous random graph models, called random kernel graphs, that produces sparse graphs with tunable graph properties, and we develop an efficient generation algorithm to sample random instances from this model. As real-world networks are usually large, it is essential that the run-time of generation algorithms scales better than quadratically in the number of vertices n. We show that for many practical kernels our algorithm runs in time at most O(n(log n)²). As an example, we show how to generate samples of power-law degree distribution graphs with tunable assortativity.
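The paper's sub-quadratic sampling algorithm is not reproduced here; for intuition about the model itself, the sketch below draws a random kernel graph the naive O(n²) way, with the kernel normalization chosen so that a constant kernel c yields mean degree about c (the specific normalization is an illustrative assumption):

```python
import random

def random_kernel_graph(n, kernel, seed=0):
    """Naive O(n^2) sampler for an inhomogeneous random graph: each
    vertex gets a type x_i uniform in [0,1], and edge {i, j} appears
    independently with probability min(1, kernel(x_i, x_j) / n).
    The paper's algorithm achieves roughly O(n (log n)^2) instead."""
    rnd = random.Random(seed)
    x = [rnd.random() for _ in range(n)]
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if rnd.random() < min(1.0, kernel(x[i], x[j]) / n):
                edges.append((i, j))
    return x, edges
```

With a constant kernel the result is a sparse Erdős-Rényi-like graph; type-dependent kernels produce heterogeneous degrees, e.g. power-law distributions.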
NASA Astrophysics Data System (ADS)
Gibbons, S. J.; Harris, D. B.; Dahl-Jensen, T.; Kværna, T.; Larsen, T. B.; Paulsen, B.; Voss, P. H.
2017-12-01
The oceanic boundary separating the Eurasian and North American plates between 70° and 84° north hosts large earthquakes which are well recorded teleseismically, and many more seismic events at far lower magnitudes that are well recorded only at regional distances. Existing seismic bulletins have considerable spread and bias resulting from limited station coverage and deficiencies in the velocity models applied. This is particularly acute for the lower magnitude events, which may be constrained only by a small number of Pn and Sn arrivals. Over the past two decades there has been a significant improvement in the seismic network in the Arctic: a difficult region to instrument due to the harsh climate, a sparsity of accessible sites (particularly at significant distances from the sea), and the expense and difficult logistics of deploying and maintaining stations. New deployments and upgrades to stations on Greenland, Svalbard, Jan Mayen, Hopen, and Bjørnøya have produced a sparse but stable regional seismic network with which events down to magnitudes below 3 generate high-quality Pn and Sn signals on multiple stations. A catalogue of several hundred events in the region since 1998 has been generated using many new phase readings from stations on both sides of the spreading ridge in addition to teleseismic P phases. A Bayesian multiple-event relocation has resulted in a significant reduction in the spread of hypocentre estimates for both large and small events. Whereas single-event location algorithms minimize vectors of time residuals on an event-by-event basis, the Bayesloc program finds a joint probability distribution of origins, hypocentres, and corrections to traveltime predictions for large numbers of events. The solutions obtained favour those event hypotheses resulting in time residuals which are most consistent over a given source region.
The relocations have been performed with different 1-D velocity models applicable to the Arctic region and hypocentres obtained using Bayesloc have been shown to be relatively insensitive to the specified velocity structure in the crust and upper mantle, even for events only constrained by regional phases. The patterns of time residuals resulting from the multiple-event location procedure provide well-constrained time correction surfaces for single-event location estimates and are sufficiently stable to identify a number of picking errors and instrumental timing anomalies. This allows for subsequent quality control of the input data and further improvement in the location estimates. We use the relocated events to form narrowband empirical steering vectors for wave fronts arriving at the SPITS array on Svalbard for azimuth and apparent velocity estimation. We demonstrate that empirical matched field parameter estimation determined by source region is a viable supplement to planewave f-k analysis, mitigating bias and obviating the need for Slowness and Azimuth Station Corrections. A database of reference events and phase arrivals is provided to facilitate further refinement of event locations and the construction of empirical signal detectors.
NASA Technical Reports Server (NTRS)
Nessel, James A.; Acosta, Robert J.
2010-01-01
Widely distributed (sparse) ground-based arrays have been utilized for decades in the radio science community for imaging celestial objects, but have only recently become an option for deep space communications applications with the advent of the proposed Next Generation Deep Space Network (DSN) array. But whereas in astronomical imaging, observations (receive-mode only) are made on the order of minutes to hours and atmospheric-induced aberrations can be mostly corrected for in post-processing, communications applications require transmit capabilities and real-time corrections over time scales as short as fractions of a second. This presents an unavoidable problem with the use of sparse arrays for deep space communications at Ka-band which has yet to be successfully resolved, particularly for uplink arraying. In this paper, an analysis of the performance of a sparse antenna array, in terms of its directivity, is performed to derive a closed form solution to the expected array loss in the presence of atmospheric-induced phase fluctuations. The theoretical derivation for array directivity degradation is validated with interferometric measurements for a two-element array taken at Goldstone, California. With the validity of the model established, an arbitrary 27-element array geometry is defined at Goldstone, California, to ascertain its performance in the presence of phase fluctuations. It is concluded that a combination of compact array geometry and atmospheric compensation is necessary to ensure high levels of availability.
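The paper's closed-form derivation is not reproduced here, but the effect it quantifies can be illustrated with the standard expectation for an N-element array with i.i.d. zero-mean Gaussian phase errors, E[G/G0] = (1 + (N-1)e^{-σ²})/N; the sketch below checks that expression against a Monte Carlo estimate (the element count matches the 27-element geometry mentioned above, while the error level is an illustrative assumption):

```python
import cmath
import math
import random

def mc_gain_ratio(n, sigma, trials=20000, seed=7):
    """Monte Carlo estimate of E[|sum_k exp(j*phi_k)|^2] / N^2 for
    phi_k ~ N(0, sigma^2): mean boresight power relative to a
    perfectly phased array."""
    rnd = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        s = sum(cmath.exp(1j * rnd.gauss(0.0, sigma)) for _ in range(n))
        acc += abs(s) ** 2
    return acc / (trials * n * n)

def analytic_gain_ratio(n, sigma):
    # Expectation of the coherent-sum power under i.i.d. Gaussian phase errors.
    return (1.0 + (n - 1) * math.exp(-sigma ** 2)) / n
```

For a 27-element array with σ = 0.5 rad of uncompensated phase error, roughly a fifth of the boresight power is lost on average, which is why atmospheric compensation matters for uplink arraying.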
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cross, E.R.; Hyams, K.C.
1996-07-01
The distribution of Phlebotomus papatasi in Southwest Asia is thought to be highly dependent on temperature and relative humidity. A discriminant analysis model based on weather data and reported vector surveys was developed to predict the seasonal and geographic distribution of P. papatasi in this region. To simulate global warming, temperature values for 115 weather stations were increased by 1°C, 3°C, and 5°C, and the outcome variable was coded as unknown in the model. Probability of occurrence values were then predicted for each location with a weather station. Stations with positive probability of occurrence values for May, June, July, and August were considered locations where two or more life cycles of P. papatasi could occur and which could support endemic transmission of leishmaniasis and sandfly fever. Among 115 weather stations, 71 (62%) would be considered endemic under current temperature conditions; 14 (12%) additional stations could become endemic with an increase of 1°C; 17 (15%) more with a 3°C increase; and 12 (10%) more (all but one station) with a 5°C increase. In addition to increased geographic distribution, seasonality of disease transmission could be extended throughout 12 months of the year in 7 (6%) locations with at least a 3°C rise in temperature and in 29 (25%) locations with a 5°C rise. 15 refs., 4 figs.
Interpreting carnivore scent-station surveys
Sargeant, G.A.; Johnson, D.H.; Berg, W.E.
1998-01-01
The scent-station survey method has been widely used to estimate trends in carnivore abundance. However, statistical properties of scent-station data are poorly understood, and the relation between scent-station indices and carnivore abundance has not been adequately evaluated. We assessed properties of scent-station indices by analyzing data collected in Minnesota during 1986-93. Visits to stations separated by <2 km were correlated for all species because individual carnivores sometimes visited several stations in succession. Thus, visits to stations had an intractable statistical distribution. Dichotomizing results for lines of 10 stations (0 or ≥1 visits) produced binomially distributed data that were robust to multiple visits by individuals. We abandoned 2-way comparisons among years in favor of tests for population trend, which are less susceptible to bias, and analyzed results separately for biogeographic sections of Minnesota because trends differed among sections. Before drawing inferences about carnivore population trends, we reevaluated published validation experiments. Results implicated low statistical power and confounding as possible explanations for equivocal or conflicting results of validation efforts. Long-term trends in visitation rates probably reflect real changes in populations, but poor spatial and temporal resolution, susceptibility to confounding, and low statistical power limit the usefulness of this survey method.
NASA Astrophysics Data System (ADS)
Yenier, E.; Baturan, D.; Karimi, S.
2016-12-01
Monitoring of seismicity related to oil and gas operations is routinely performed using a number of different surface and downhole seismic array configurations and technologies. Here, we provide a hydraulic fracture (HF) monitoring case study that compares the data set generated by a sparse local surface network of broadband seismometers to a data set generated by a single downhole geophone string. Our data were collected during a 5-day single-well HF operation by a temporary surface network consisting of 10 stations deployed within 5 km of the production well. The downhole data were recorded by a 20-geophone string deployed in an observation well located 15 m from the production well. Surface network data processing included standard STA/LTA event triggering enhanced by template-matching subspace detection, grid-search locations improved using the double-difference relocation technique, and Richter (ML) and moment (Mw) magnitude computations for all detected events. In addition, moment tensors were computed from first-motion polarities and amplitudes for the subset of highest-SNR events. The resulting surface event catalog shows a very weak spatio-temporal correlation to HF operations, with only 43% of recorded seismicity occurring during HF stage times. This, along with the source mechanisms, shows that the surface-recorded seismicity delineates the activation of several pre-existing structures striking NNE-SSW, consistent with regional stress conditions as indicated by the orientation of SHmax. Comparison of the sparse-surface and single downhole string datasets allows us to perform a cost-benefit analysis of the two monitoring methods. Our findings show that although the downhole array recorded ten times as many events, the surface network provides a more coherent delineation of the underlying structure and more accurate magnitudes for larger magnitude events.
We attribute this to the enhanced focal coverage provided by the surface network and the use of broadband instrumentation. The results indicate that sparse surface networks of high quality instruments can provide rich and reliable datasets for evaluation of the impact and effectiveness of hydraulic fracture operations in regions with favorable surface noise, local stress and attenuation characteristics.
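Of the surface-network processing steps listed above, the STA/LTA trigger is simple enough to sketch; below is a minimal short-term/long-term average ratio detector run on a synthetic trace (window lengths and the trigger threshold are illustrative, not the study's settings):

```python
def sta_lta(trace, n_sta=10, n_lta=100):
    """Return the STA/LTA ratio at each sample (0 until the long
    window is full). A sudden amplitude increase drives the short-term
    average up before the long-term average catches up, so the ratio
    spikes at event onsets."""
    out = []
    for i in range(len(trace)):
        if i + 1 < n_lta:
            out.append(0.0)
            continue
        sta = sum(abs(v) for v in trace[i - n_sta + 1:i + 1]) / n_sta
        lta = sum(abs(v) for v in trace[i - n_lta + 1:i + 1]) / n_lta
        out.append(sta / lta if lta > 0 else 0.0)
    return out
```

On a flat trace with an embedded burst, the ratio stays near 1 during background noise and jumps well above a typical trigger threshold of about 3 at the burst onset.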
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brook, I.M.
A lightweight, portable suction dredge has been used on five bottom types which usually present problems to benthic investigators. Water depth ranged from 0.25 m to 5 m. By use of a 0.25 m² quadrat, or using the suction end as a probe with the depth of penetration limited by a collar, quantitative samples were taken in coarse sand, fine flocculent mud, dense turtle grass (Thalassia testudinum), sparse turtle grass over coralline rubble (Porites sp.), and carbonate rock with an overlay of shell rubble. The samples consisted of the material retained by a collecting bag attached to the suction dredge. None of the commonly used benthic sampling devices could obtain samples at all stations.
Extension of surface data by use of meteorological satellites
NASA Technical Reports Server (NTRS)
Giddings, L. E.
1976-01-01
Ways of using meteorological satellite data to extend surface data are summarized. Temperature models are prepared from infrared data from ITOS/NOAA, NIMBUS, SMS/GOES, or future LANDSAT satellites. Using temperatures for surface meteorological stations as anchors, an adjustment is made to temperature values for each pixel in the model. The result is an image with an estimated temperature for each pixel. This provides an economical way of producing detailed temperature information for data-sparse areas, such as are found in underdeveloped countries. Related uses of these satellite data are also given, including the use of computer prepared cloud-free composites to extend climatic zones, and their use in discrimination of reflectivity-thermal regime zones.
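The anchoring adjustment described above can be sketched in one dimension: compute the observed-minus-model offset at each station pixel and spread it to the remaining pixels. The inverse-distance weighting used below is an assumption for illustration; the report does not specify the interpolator:

```python
def anchor_adjust(model, stations, power=2.0):
    """model: per-pixel satellite-derived temperatures (1D for brevity).
    stations: list of (pixel_index, observed_temp). Each pixel is
    shifted by an inverse-distance-weighted blend of the station
    offsets, so the field passes exactly through the station values."""
    offsets = [(i, t - model[i]) for i, t in stations]
    out = []
    for p in range(len(model)):
        exact = next((d for i, d in offsets if i == p), None)
        if exact is not None:
            out.append(model[p] + exact)  # anchored exactly at stations
            continue
        wsum = vsum = 0.0
        for i, d in offsets:
            w = 1.0 / abs(p - i) ** power
            wsum += w
            vsum += w * d
        out.append(model[p] + vsum / wsum)
    return out
```

Between stations the adjusted temperatures vary smoothly between the anchored values, which is the behavior the report describes for filling data-sparse areas.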
Modeling extreme PM10 concentration in Malaysia using generalized extreme value distribution
NASA Astrophysics Data System (ADS)
Hasan, Husna; Mansor, Nadiah; Salleh, Nur Hanim Mohd
2015-05-01
Extreme PM10 concentrations from the Air Pollutant Index (API) at thirteen monitoring stations in Malaysia are modeled using the Generalized Extreme Value (GEV) distribution. The data are blocked into monthly periods. The Mann-Kendall (MK) test suggests a non-stationary model, so two models are considered for the stations with trend. The likelihood ratio test is used to determine the best fitted model, and the result shows that only two stations favor the non-stationary model (Model 2) while the other eleven stations favor the stationary model (Model 1). The return level of PM10 concentration, the value expected to be exceeded once within a selected period, is obtained.
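The return level referred to above has a closed form in the GEV parameters (for ξ ≠ 0); the sketch below computes it and checks it against the empirical quantile of simulated GEV block maxima. The parameter values are invented for illustration, not fitted values from the study:

```python
import math
import random

def gev_return_level(mu, sigma, xi, T):
    """Level exceeded on average once per T blocks, i.e. the
    (1 - 1/T) quantile of GEV(mu, sigma, xi), assuming xi != 0."""
    y = -math.log(1.0 - 1.0 / T)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

def gev_sample(rnd, mu, sigma, xi):
    # Inverse-CDF sampling: X = mu + sigma/xi * ((-ln U)^(-xi) - 1).
    u = rnd.random()
    return mu + (sigma / xi) * ((-math.log(u)) ** (-xi) - 1.0)
```

For example, with μ = 50, σ = 10, ξ = 0.1 the 12-block (roughly one-year, for monthly blocks) return level is about 77.7, and the empirical (1 − 1/12) quantile of a large simulated sample agrees.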
Using an index of habitat patch proximity for landscape design
Eric J. Gustafson; George R. Parker
1994-01-01
A proximity index (PX) inspired by island biogeography theory is described which quantifies the spatial context of a habitat patch in relation to its neighbors. The index distinguishes sparse distributions of small habitat patches from clusters of large patches. An evaluation of the relationship between PX and variation in the spatial characteristics of clusters of...
The legacy and continuity of forest disturbance, succession, and species at the MOFEP sites
Richard Guyette; John M. Kabrick
2002-01-01
Information about the scale, frequency, and legacy of disturbance regimes and their relation to the distribution of forest species is sparse in Ozark ecosystems. Knowledge of these relationships is valuable for understanding present-day forest ecosystem species composition and structure and for predicting how Missouri's forests will respond to management. Here, we...
Heather T. Root; Linda H. Geiser; Sarah Jovan; Peter Neitlich
2015-01-01
Biomonitoring can provide cost-effective and practical information about the distribution of nitrogen (N) deposition, particularly in regions with complex topography and sparse instrumented monitoring sites. Because of their unique biology, lichens are very sensitive bioindicators of air quality. Lichens lack a cuticle to control absorption or leaching of nutrients and...
ERIC Educational Resources Information Center
Thompson, Sharon H.; Lougheed, Eric
2012-01-01
Although a majority of young adults are members of at least one social networking site, peer reviewed research examining gender differences in social networking communication is sparse. This study examined gender differences in social networking, particularly for Facebook use, among undergraduates. A survey was distributed to 268 college students…
USDA-ARS?s Scientific Manuscript database
Assimilation of remotely sensed soil moisture data (SM-DA) to correct soil water stores of rainfall-runoff models has shown skill in improving streamflow prediction. In the case of large and sparsely monitored catchments, SM-DA is a particularly attractive tool.Within this context, we assimilate act...
Amanda Parks; Michael Jenkins; Michael Ostry; Peng Zhao; Keith Woeste
2014-01-01
The abundance of butternut (Juglans cinerea L.) trees has severely declined rangewide over the past 50 years. An important factor in the decline is butternut canker, a disease caused by the fungus Ophiognomonia clavigignenti-juglandacearum, which has left the remaining butternuts isolated and sparsely distributed. To manage the...
Imam, S. H.; Gordon, S. H.; Shogren, R. L.; Tosteson, T. R.; Govind, N. S.; Greene, R. V.
1999-01-01
Extruded bioplastic was prepared from cornstarch or poly(β-hydroxybutyrate-co-β-hydroxyvalerate) (PHBV) or blends of cornstarch and PHBV. The blended formulations contained 30 or 50% starch in the presence or absence of polyethylene oxide (PEO), which enhances adherence of starch granules to PHBV. Degradation of these formulations was monitored for 1 year at four stations in coastal water southwest of Puerto Rico. Two stations were within a mangrove stand. The other two were offshore; one of these stations was on a shallow shoulder of a reef, and the other was at a location in deeper water. Microbial enumeration at the four stations revealed considerable flux in the populations over the course of the year. However, in general, the overall population densities were 1 order of magnitude less at the deeper-water station than at the other stations. Starch degraders were 10- to 50-fold more prevalent than PHBV degraders at all of the stations. Accordingly, degradation of the bioplastic, as determined by weight loss and deterioration of tensile properties, correlated with the amount of starch present (100% starch >50% starch > 30% starch > 100% PHBV). Incorporation of PEO into blends slightly retarded the rate of degradation. The rate of loss of starch from the 100% starch samples was about 2%/day, while the rate of loss of PHBV from the 100% PHBV samples was about 0.1%/day. Biphasic weight loss was observed for the starch-PHBV blends at all of the stations. A predictive mathematical model for loss of individual polymers from a 30% starch–70% PHBV formulation was developed and experimentally validated. The model showed that PHBV degradation was delayed 50 days until more than 80% of the starch was consumed and predicted that starch and PHBV in the blend had half-lives of 19 and 158 days, respectively. 
Consistent with the relatively low microbial populations, bioplastic degradation at the deeper-water station exhibited an initial lag period, after which degradation rates comparable to the degradation rates at the other stations were observed. Presumably, significant biodegradation occurred only after colonization of the plastic, a parameter that was dependent on the resident microbial populations. Therefore, it can be reasonably inferred that extended degradation lags would occur in open ocean water where microbes are sparse. PMID:9925564
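The two-phase kinetics reported above (starch half-life 19 days; PHBV half-life 158 days after a roughly 50-day lag) can be written down directly. The sketch below reconstructs that model as simple first-order decay with a lag, which is an assumption about the exact functional form used by the authors:

```python
import math

def blend_mass(t, starch0=30.0, phbv0=70.0,
               t_half_starch=19.0, t_half_phbv=158.0, lag=50.0):
    """Percent of original mass remaining in a 30% starch / 70% PHBV
    blend: starch decays first-order from t = 0 (half-life 19 d);
    PHBV decay (half-life 158 d) starts only after the lag, by which
    time more than 80% of the starch has been consumed."""
    k_s = math.log(2.0) / t_half_starch
    k_p = math.log(2.0) / t_half_phbv
    starch = starch0 * math.exp(-k_s * t)
    phbv = phbv0 if t <= lag else phbv0 * math.exp(-k_p * (t - lag))
    return starch + phbv
```

The lag term reproduces the biphasic weight-loss curve described in the abstract: a fast starch-dominated phase followed by slow PHBV loss.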
Fitting monthly Peninsula Malaysian rainfall using Tweedie distribution
NASA Astrophysics Data System (ADS)
Yunus, R. M.; Hasan, M. M.; Zubairi, Y. Z.
2017-09-01
In this study, the Tweedie distribution was used to fit monthly rainfall data from 24 monitoring stations of Peninsula Malaysia for the period from January 2008 to April 2015. The aim of the study is to determine whether distributions within the Tweedie family fit the monthly Malaysian rainfall data well. Within the Tweedie family, the gamma distribution is generally used for fitting rainfall totals; however, the Poisson-gamma distribution is more useful for describing two important features of the rainfall pattern, the occurrences (dry months) and the amounts (wet months). First, the appropriate distribution of the monthly rainfall was identified within the Tweedie family for each station. Then, the Tweedie Generalised Linear Model (GLM) with no explanatory variable was used to model the monthly rainfall data. Graphical representation was used to assess model appropriateness. The QQ plots of quantile residuals show that the Tweedie models fit the monthly rainfall data better for the majority of stations on the west coast and in the midland than for those on the east coast of the Peninsula. This significant finding suggests that the best fitted distribution depends on the geographical location of the monitoring station. In this paper, a simple model is developed for generating synthetic rainfall data for use in various areas, including agriculture and irrigation. We have shown that data simulated using the Tweedie distribution have a frequency histogram fairly similar to that of the actual data. Both the mean number of rainfall events and the mean amount of rain for a month were estimated simultaneously in the case where the Poisson-gamma distribution fits the data reasonably well. Thus, this work complements previous studies that fit the rainfall amount and the occurrence of rainfall events separately, each to a different distribution.
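The Poisson-gamma (compound Poisson) member of the Tweedie family mentioned above is easy to simulate: a month is dry when the Poisson count of rain events is zero, and a wet-month total is a sum of gamma-distributed amounts. The sketch below is illustrative only; the rate and gamma parameters are invented, not fitted values from the study:

```python
import math
import random

def poisson_draw(rnd, lam):
    # Knuth's algorithm; adequate for small lambda.
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rnd.random()
        if p <= limit:
            return k
        k += 1

def simulate_monthly_rainfall(months, lam=2.0, shape=2.0, scale=5.0, seed=3):
    """Compound Poisson-gamma monthly totals. Key properties:
    P(dry month) = exp(-lam) and E[total] = lam * shape * scale,
    capturing occurrence and amount in one distribution."""
    rnd = random.Random(seed)
    return [sum(rnd.gammavariate(shape, scale)
                for _ in range(poisson_draw(rnd, lam)))
            for _ in range(months)]
```

This one-distribution treatment of the point mass at zero plus a continuous positive part is exactly what makes the Poisson-gamma case attractive for rainfall, as the abstract notes.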
Wang, Li; Shi, Feng; Li, Gang; Lin, Weili; Gilmore, John H.; Shen, Dinggang
2014-01-01
Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination process. During the first year of life, the signal contrast between white matter (WM) and gray matter (GM) in MR images undergoes inverse changes. In particular, the inversion of WM/GM signal contrast appears around 6–8 months of age, where brain tissues appear isointense and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a novel segmentation method to address the above-mentioned challenge based on the sparse representation of the complementary tissue distribution information from T1, T2 and diffusion-weighted images. Specifically, we first derive an initial segmentation from a library of aligned multi-modality images with ground-truth segmentations by using sparse representation in a patch-based fashion. The segmentation is further refined by the integration of the geometrical constraint information. The proposed method was evaluated on 22 6-month-old training subjects using leave-one-out cross-validation, as well as 10 additional infant testing subjects, showing superior results in comparison to other state-of-the-art methods. PMID:24505729
Wang, Li-wen; Wei, Ya-xing; Niu, Zheng
2008-06-01
1 km MODIS NDVI time series data, combined with decision tree, supervised, and unsupervised classification, were used to classify the land cover of Qinghai Province into 14 classes. In our classification system, sparse grassland and sparse shrub were emphasized, and their spatial distributions were mapped. From a digital elevation model (DEM) of Qinghai Province, five elevation belts were derived, and geographic information system (GIS) software was used to analyze vegetation cover variation across the elevation belts. Our results show that vegetation cover in Qinghai Province improved over the five-year study period. Vegetation cover area increased from 370,047 km² in 2001 to 374,576 km² in 2006, and the vegetation cover rate increased by 0.63%. Among the five elevation belts, the vegetation cover ratio of the high-mountain belt is the highest (67.92%). The area of middle-density grassland in the high-mountain belt is the largest, at 94,003 km². The increase in dense grassland area in the high-mountain belt is the greatest (1,280 km²). Over the five years, the largest change was the conversion of sparse grassland to middle-density grassland in the high-mountain belt, covering 15,931 km².
Task-based data-acquisition optimization for sparse image reconstruction systems
NASA Astrophysics Data System (ADS)
Chen, Yujia; Lou, Yang; Kupinski, Matthew A.; Anastasio, Mark A.
2017-03-01
Conventional wisdom dictates that imaging hardware should be optimized by use of an ideal observer (IO) that exploits full statistical knowledge of the class of objects to be imaged, without consideration of the reconstruction method to be employed. However, accurate and tractable models of the complete object statistics are often difficult to determine in practice. Moreover, in imaging systems that employ compressive sensing concepts, imaging hardware and (sparse) image reconstruction are innately coupled technologies. We have previously proposed a sparsity-driven ideal observer (SDIO) that can be employed to optimize hardware by use of a stochastic object model that describes object sparsity. The SDIO and sparse reconstruction method can therefore be "matched" in the sense that they both utilize the same statistical information regarding the class of objects to be imaged. To efficiently compute SDIO performance, the posterior distribution is estimated by use of computational tools developed recently for variational Bayesian inference. Subsequently, the SDIO test statistic can be computed semi-analytically. The advantages of employing the SDIO instead of a Hotelling observer are systematically demonstrated in case studies in which magnetic resonance imaging (MRI) data acquisition schemes are optimized for signal detection tasks.
Segmentation of High Angular Resolution Diffusion MRI using Sparse Riemannian Manifold Clustering
Wright, Margaret J.; Thompson, Paul M.; Vidal, René
2015-01-01
We address the problem of segmenting high angular resolution diffusion imaging (HARDI) data into multiple regions (or fiber tracts) with distinct diffusion properties. We use the orientation distribution function (ODF) to represent HARDI data and cast the problem as a clustering problem in the space of ODFs. Our approach integrates tools from sparse representation theory and Riemannian geometry into a graph theoretic segmentation framework. By exploiting the Riemannian properties of the space of ODFs, we learn a sparse representation for each ODF and infer the segmentation by applying spectral clustering to a similarity matrix built from these representations. In cases where regions with similar (resp. distinct) diffusion properties belong to different (resp. same) fiber tracts, we obtain the segmentation by incorporating spatial and user-specified pairwise relationships into the formulation. Experiments on synthetic data evaluate the sensitivity of our method to image noise and the presence of complex fiber configurations, and show its superior performance compared to alternative segmentation methods. Experiments on phantom and real data demonstrate the accuracy of the proposed method in segmenting simulated fibers, as well as white matter fiber tracts of clinical importance in the human brain. PMID:24108748
NASA Astrophysics Data System (ADS)
Havens, Scott; Marks, Danny; Kormos, Patrick; Hedrick, Andrew
2017-12-01
In the Western US and many mountainous regions of the world, critical water resources and climate conditions are difficult to monitor because the observation network is generally very sparse. The critical resource from the mountain snowpack is water flowing into streams and reservoirs that will provide for irrigation, flood control, power generation, and ecosystem services. Water supply forecasting in a rapidly changing climate has become increasingly difficult because of non-stationary conditions. In response, operational water supply managers have begun to move from statistical techniques towards the use of physically based models. As we begin to transition physically based models from research to operational use, we must address the most difficult and time-consuming aspect of model initiation: the need for robust methods to develop and distribute the input forcing data. In this paper, we present a new open source framework, the Spatial Modeling for Resources Framework (SMRF), which automates and simplifies the common forcing data distribution methods. It is computationally efficient and can be implemented for both research and operational applications. We present an example of how SMRF is able to generate all of the forcing data required to run a physically based snow model at 50-100 m resolution over regions of 1000-7000 km². The approach has been successfully applied in real time and in historical applications for both the Boise River Basin in Idaho, USA and the Tuolumne River Basin in California, USA. These applications use meteorological station measurements and numerical weather prediction model outputs as input. SMRF has significantly streamlined the modeling workflow, decreased model set up time from weeks to days, and made near real-time application of a physically based snow model possible.
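One of the simplest forcing-distribution steps a framework of this kind performs is spreading station air temperature over a DEM using an elevation trend. The sketch below fits a lapse rate to the stations by ordinary least squares and predicts each grid cell from its elevation; it is a generic illustration of the idea, not SMRF's actual implementation:

```python
def distribute_temperature(stations, dem_elevations):
    """stations: list of (elevation_m, temp_C) pairs. Fit T = a + b*z
    by ordinary least squares, then predict temperature at each DEM
    cell elevation. b is the fitted lapse rate in deg C per metre."""
    n = len(stations)
    mz = sum(z for z, _ in stations) / n
    mt = sum(t for _, t in stations) / n
    cov = sum((z - mz) * (t - mt) for z, t in stations)
    var = sum((z - mz) ** 2 for z, _ in stations)
    b = cov / var
    a = mt - b * mz
    return [a + b * z for z in dem_elevations]
```

Real distribution schemes typically also interpolate the station residuals spatially; here the elevation trend alone illustrates why gridded forcings can be built from only a handful of stations.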
NASA Astrophysics Data System (ADS)
Garambois, Pierre-Andre; Biancamaria, Sylvian; Monnier, Jerome; Roux, Helene; Dartus, Denis
2013-09-01
For continental water bodies and river hydraulic studies, water level measurements are fundamental information, yet they are currently mostly provided by punctual gauging stations located on the main river channel. As a result, they are sparsely distributed in space and can have gaps in their time series (e.g. sensor failures). These issues can be compensated by remote sensing data, which have considerably contributed to improving the observation and understanding of physical processes in hydrology and hydraulics in general. Satellites such as SWOT (Surface Water and Ocean Topography) would give spatially distributed information on water elevations at an unprecedented resolution. The purpose of the AirSWOT airborne campaign is to gather pre-mission data over specific and varied science targets in order to implement and test SWOT product retrieval algorithms. A reach of the Garonne River, downstream of Toulouse (France), is a proposed study area for AirSWOT flights. This choice is motivated by previous studies already performed on this 100 km reach of the river. Moreover, many typical free-surface flow modelling issues have been encountered on this highly instrumented and well-studied portion of the river, and this reach represents the limit of SWOT observation capability. The 2D hydrodynamic model DassFlow, especially designed for variational data assimilation, will be used on this portion of the Garonne River with cartographic sensitivity analysis. An identification strategy would allow retrieving the spatial roughness along the main channel, variations of the local topographic slope, or the temporal evolution of the streamflow. Addressing such problems and studying horizontal and vertical river sinuosity would improve fine-scale hydraulics representation and understanding, which could additionally help to improve global discharge algorithms at different scales and complexity levels.
NASA Astrophysics Data System (ADS)
Hasan, Husna; Salam, Norfatin; Kassim, Suraiya
2013-04-01
Extreme temperatures at several stations in Malaysia are modeled by fitting the annual maxima to the Generalized Extreme Value (GEV) distribution. The Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests are used to detect stochastic trends among the stations. The Mann-Kendall (MK) test suggests a non-stationary model. Three models are considered for stations with a trend, and the Likelihood Ratio test is used to determine the best-fitting model. The results show that the Subang and Bayan Lepas stations favour a model that is linear in the location parameter, while the Kota Kinabalu and Sibu stations are better suited to a model with a trend in the logarithm of the scale parameter. The return level, the level of events (maximum temperature) expected to be exceeded once, on average, in a given number of years, is also obtained.
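The return level mentioned above follows directly from the fitted GEV parameters via the standard quantile formula. The helper below is our illustration of that formula (with the usual Gumbel limit as the shape parameter tends to zero), not code from the paper; the example parameter values are made up:

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """T-year return level for a GEV(mu, sigma, xi) fitted to annual maxima:
    the level exceeded once, on average, every T years."""
    y = -math.log(1.0 - 1.0 / T)         # reduced variate
    if abs(xi) < 1e-9:                   # Gumbel limit as xi -> 0
        return mu - sigma * math.log(y)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

# e.g. a Gumbel-type fit (hypothetical parameters): 100-year return level
print(round(gev_return_level(35.0, 1.5, 0.0, 100), 2))
```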
Orion Navigation Sensitivities to Ground Station Infrastructure for Lunar Missions
NASA Technical Reports Server (NTRS)
Getchius, Joel; Kukitschek, Daniel; Crain, Timothy
2008-01-01
The Orion Crew Exploration Vehicle (CEV) will replace the Space Shuttle and serve as the next-generation spaceship to carry humans to the International Space Station and back to the Moon for the first time since the Apollo program. As in the Apollo and Space Shuttle programs, the Mission Control Navigation team will utilize radiometric measurements to determine the position and velocity of the CEV. In the case of lunar missions, the ground station infrastructure, consisting of approximately twelve stations distributed about the Earth and known as the Apollo Manned Spaceflight Network, no longer exists. Therefore, additional tracking resources will have to be allocated or constructed to support mission operations for Orion lunar missions. This paper examines the sensitivity of Orion navigation for lunar missions to the number and distribution of tracking sites that form the ground station infrastructure.
Weighted low-rank sparse model via nuclear norm minimization for bearing fault detection
NASA Astrophysics Data System (ADS)
Du, Zhaohui; Chen, Xuefeng; Zhang, Han; Yang, Boyuan; Zhai, Zhi; Yan, Ruqiang
2017-07-01
It is a fundamental task in the machine fault diagnosis community to detect impulsive signatures generated by the localized faults of bearings. The main goal of this paper is to exploit the low-rank physical structure of periodic impulsive features and further establish a weighted low-rank sparse model for bearing fault detection. The proposed model mainly consists of three basic components: an adaptive partition window, a nuclear norm regularization and a weighted sequence. Firstly, due to the periodic repetition mechanism of impulsive features, an adaptive partition window can be designed to transform the impulsive features into a data matrix. The highlight of the partition window is that it accumulates all local feature information and aligns it. Then, all columns of the data matrix share similar waveforms and a core physical phenomenon arises: the singular values of the data matrix exhibit a sparse distribution pattern. Therefore, a nuclear norm regularization is enforced to capture that sparse prior. However, the nuclear norm regularization treats all singular values equally and thus ignores one basic fact: larger singular values carry more information about the impulsive features and should be preserved as much as possible. Therefore, a weighted sequence with adaptively tuned weights inversely proportional to singular value amplitude is adopted to guarantee the distribution consistency of the large singular values. On the other hand, the proposed model is difficult to solve due to its non-convexity, and thus a new algorithm is developed to search for a satisfying stationary solution by alternately applying a proximal operator and least-squares fitting. Moreover, the sensitivity and selection principles of the algorithmic parameters are comprehensively investigated through a set of numerical experiments, which show that the proposed method is robust and has only a few adjustable parameters.
Lastly, the proposed model is applied to the wind turbine (WT) bearing fault detection and its effectiveness is sufficiently verified. Compared with the current popular bearing fault diagnosis techniques, wavelet analysis and spectral kurtosis, our model achieves a higher diagnostic accuracy.
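The core operation of such a model, shrinking singular values with weights inversely proportional to their magnitude so that large (informative) ones survive, can be sketched as a one-step weighted singular-value thresholding. This is a simplified proximal step under our own assumptions, not the paper's full alternating algorithm:

```python
import numpy as np

def weighted_svt(Y, lam, eps=1e-6):
    """Weighted singular-value thresholding: shrink each singular value by
    a weight inversely proportional to its magnitude, so large singular
    values are preserved (a one-step sketch of the weighted low-rank idea)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    w = 1.0 / (s + eps)                  # larger singular value -> smaller weight
    s_shrunk = np.maximum(s - lam * w, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

# Rank-1 "aligned impulses" matrix plus noise: the shrunken estimate should
# be closer to the clean matrix than the noisy observation is.
rng = np.random.default_rng(1)
impulse = np.exp(-0.3 * np.arange(20)) * np.sin(np.arange(20))
clean = np.outer(impulse, np.ones(8))    # 8 aligned periods as columns
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = weighted_svt(noisy, lam=0.5)
print(np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean))
```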
NASA Astrophysics Data System (ADS)
Sloan, B.; Ebtehaj, A. M.; Guala, M.
2017-12-01
The understanding of heat and water vapor transfer from the land surface to the atmosphere by evapotranspiration (ET) is crucial for predicting the hydrologic water balance and climate forecasts used in water resources decision-making. However, the complex distribution of vegetation, soil and atmospheric conditions makes large-scale prognosis of evaporative fluxes difficult. Current ET models, such as Penman-Monteith and flux-gradient methods, are challenging to apply at the microscale due to ambiguity in determining resistance factors to momentum, heat and vapor transport for realistic landscapes. Recent research has made progress in modifying Monin-Obukhov similarity theory for dense plant canopies as well as providing clearer description of diffusive controls on evaporation at a smooth soil surface, which both aid in calculating more accurate resistance parameters. However, in nature, surfaces typically tend to be aerodynamically rough and vegetation is a mixture of sparse and dense canopies in non-uniform configurations. The goal of our work is to parameterize the resistances to evaporation based on spatial distributions of sparse plant canopies using novel wind tunnel experimentation at the St. Anthony Falls Laboratory (SAFL). The state-of-the-art SAFL wind tunnel was updated with a retractable soil box test section (shown in Figure 1), complete with a high-resolution scale and soil moisture/temperature sensors for recording evaporative fluxes and drying fronts. The existing capabilities of the tunnel were used to create incoming non-neutral stability conditions and measure 2-D velocity fields as well as momentum and heat flux profiles through PIV and hotwire anemometry, respectively. Model trees (h = 5 cm) were placed in structured and random configurations based on a probabilistic spacing that was derived from aerial imagery. 
The novel wind tunnel dataset provides the surface energy budget, turbulence statistics and spatial soil moisture data under varying atmospheric stability for each sparse canopy configuration. We will share initial data results and progress toward the development of new parametrizations that can account for the evolution of a canopy roughness sublayer on the momentum, heat and vapor resistance terms as a function of a stochastic representation of canopy spacing.
Universal inverse power-law distribution for temperature and rainfall in the UK region
NASA Astrophysics Data System (ADS)
Selvam, A. M.
2014-06-01
Meteorological parameters, such as temperature, rainfall, pressure, etc., exhibit selfsimilar space-time fractal fluctuations generic to dynamical systems in nature such as fluid flows, spread of forest fires, earthquakes, etc. The power spectra of fractal fluctuations display inverse power-law form signifying long-range correlations. A general systems theory model predicts universal inverse power-law form incorporating the golden mean for the fractal fluctuations. The model predicted distribution was compared with observed distribution of fractal fluctuations of all size scales (small, large and extreme values) in the historic month-wise temperature (maximum and minimum) and total rainfall for the four stations Oxford, Armagh, Durham and Stornoway in the UK region, for data periods ranging from 92 years to 160 years. For each parameter, the two cumulative probability distributions, namely cmax and cmin starting from respectively maximum and minimum data value were used. The results of the study show that (i) temperature distributions (maximum and minimum) follow model predicted distribution except for Stornowy, minimum temperature cmin. (ii) Rainfall distribution for cmin follow model predicted distribution for all the four stations. (iii) Rainfall distribution for cmax follows model predicted distribution for the two stations Armagh and Stornoway. The present study suggests that fractal fluctuations result from the superimposition of eddy continuum fluctuations.
47 CFR 74.882 - Station identification.
Code of Federal Regulations, 2010 CFR
2010-10-01
... transmission or intermittent transmissions pertaining to a single event. (b) Each wireless video assist device..., AUXILIARY, SPECIAL BROADCAST AND OTHER PROGRAM DISTRIBUTIONAL SERVICES Low Power Auxiliary Stations § 74.882 Station identification. (a) For transmitters used for voice transmissions and having a transmitter output...
Code of Federal Regulations, 2011 CFR
2011-10-01
... AND OTHER PROGRAM DISTRIBUTIONAL SERVICES Low Power TV, TV Translator, and TV Booster Stations § 74... applicable to translators, low power, and booster stations (except § 73.653—Operation of TV aural and visual...
47 CFR 74.1203 - Interference.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., AUXILIARY, SPECIAL BROADCAST AND OTHER PROGRAM DISTRIBUTIONAL SERVICES FM Broadcast Translator Stations and FM Broadcast Booster Stations § 74.1203 Interference. (a) An authorized FM translator or booster... transmission of any authorized broadcast station; or (2) The reception of the input signal of any TV translator...
Influence of air mass origin on aerosol properties at a remote Michigan forest site
NASA Astrophysics Data System (ADS)
VanReken, T. M.; Mwaniki, G. R.; Wallace, H. W.; Pressley, S. N.; Erickson, M. H.; Jobson, B. T.; Lamb, B. K.
2015-04-01
The northern Great Lakes region of North America is a large, relatively pristine area. To date, there has only been limited study of the atmospheric aerosol in this region. During summer 2009, a detailed characterization of the atmospheric aerosol was conducted at the University of Michigan Biological Station (UMBS) as part of the Community Atmosphere-Biosphere Interactions Experiment (CABINEX). Measurements included particle size distribution, water-soluble composition, and CCN activity. Aerosol properties were strongly dependent on the origin of the air masses reaching the site. For ∼60% of the study period, air was transported from sparsely populated regions to the northwest. During these times aerosol loadings were low, with mean number and volume concentrations of 1630 cm-3 and 1.91 μm3 cm-3, respectively. The aerosol during clean periods was dominated by organics, and exhibited low hygroscopicities (mean κ = 0.18 at s = 0.3%). When air was from more populated regions to the east and south (∼29% of the time), aerosol properties reflected a stronger anthropogenic influence, with 85% greater particle number concentrations, 2.5 times greater aerosol volume, six times more sulfate mass, and increased hygroscopicity (mean κ = 0.24 at s = 0.3%). These trends have the potential to influence forest-atmosphere interactions and should be targeted for future study.
Comparison of Land Skin Temperature from a Land Model, Remote Sensing, and In-situ Measurement
NASA Technical Reports Server (NTRS)
Wang, Aihui; Barlage, Michael; Zeng, Xubin; Draper, Clara Sophie
2014-01-01
Land skin temperature (Ts) is an important parameter in the energy exchange between the land surface and atmosphere. Here hourly Ts from the Community Land Model Version 4.0, MODIS satellite observations, and in-situ observations in 2003 were compared. Compared with the in-situ observations over four semi-arid stations, both MODIS and modeled Ts show negative biases, but MODIS shows an overall better performance. Global distribution of differences between MODIS and modeled Ts shows diurnal, seasonal, and spatial variations. Over sparsely vegetated areas, the model Ts is generally lower than the MODIS observed Ts during the daytime, while the situation is opposite at nighttime. The revision of roughness length for heat and the constraint of minimum friction velocity from Zeng et al. [2012] bring the modeled Ts closer to MODIS during the day, and have little effect on Ts at night. Five factors contributing to the Ts differences between the model and MODIS are identified, including the difficulty in properly accounting for cloud cover information at the appropriate temporal and spatial resolutions, and uncertainties in surface energy balance computation, atmospheric forcing data, surface emissivity, and MODIS Ts data. These findings have implications for the cross-evaluation of modeled and remotely sensed Ts, as well as the data assimilation of Ts observations into Earth system models.
Automation of Space Station module power management and distribution system
NASA Technical Reports Server (NTRS)
Bechtel, Robert; Weeks, Dave; Walls, Bryan
1990-01-01
Viewgraphs on automation of space station module (SSM) power management and distribution (PMAD) system are presented. Topics covered include: reasons for power system automation; SSM/PMAD approach to automation; SSM/PMAD test bed; SSM/PMAD topology; functional partitioning; SSM/PMAD control; rack level autonomy; FRAMES AI system; and future technology needs for power system automation.
Semipermanent GPS (SPGPS) as a volcano monitoring tool: Rationale, method, and applications
Dzurisin, Daniel; Lisowski, Michael; Wicks, Charles W.
2017-01-01
Semipermanent GPS (SPGPS) is an alternative to conventional campaign or survey-mode GPS (SGPS) and to continuous GPS (CGPS) that offers several advantages for monitoring ground deformation. Unlike CGPS installations, SPGPS stations can be deployed quickly in response to changing volcanic conditions or earthquake activity such as a swarm or aftershock sequence. SPGPS networks can be more focused or more extensive than CGPS installations, because SPGPS equipment can be moved from station to station quickly to increase the total number of stations observed in a given time period. SPGPS networks are less intrusive on the landscape than CGPS installations, which makes it easier to satisfy land-use restrictions in ecologically sensitive areas. SPGPS observations are preferred over SGPS measurements because they provide better precision with only a modest increase in the amount of time, equipment, and personnel required in the field. We describe three applications of the SPGPS method that demonstrate its utility and flexibility. At the Yellowstone caldera, Wyoming, a 9-station SPGPS network serves to densify larger preexisting networks of CGPS and SGPS stations. At the Three Sisters volcanic center, Oregon, a 14-station SPGPS network complements an SGPS network and extends the geographic coverage provided by 3 CGPS stations permitted under wilderness land-use restrictions. In the Basin and Range province in northwest Nevada, a 6-station SPGPS network has been established in response to a prolonged earthquake swarm in an area with only sparse preexisting geodetic coverage. At Three Sisters, the estimated precision of station velocities based on annual ~ 3 month summertime SPGPS occupations from 2009 to 2015 is approximately half that for nearby CGPS stations. Conversely, SPGPS-derived station velocities are about twice as precise as those based on annual ~ 1 week SGPS measurements. 
After 5 years of SPGPS observations at Three Sisters, the precision of velocity determinations is estimated to be 0.5 mm/yr in longitude, 0.6 mm/yr in latitude, and 0.8 mm/yr in height. We conclude that an optimal approach to monitoring volcano deformation includes complementary CGPS and SPGPS networks, periodic InSAR observations, and measurements from in situ borehole sensors such as tiltmeters or strainmeters. This comprehensive approach provides the spatial and temporal detail necessary to adequately characterize a complex and evolving deformation pattern. Such information is essential to multi-parameter models of magmatic or tectonic processes that can help to guide research efforts, and also to inform hazards assessments and land-use planning decisions.
NASA Astrophysics Data System (ADS)
Ke, Jyh-Bin; Lee, Wen-Chiung; Wang, Kuo-Hsiung
2007-07-01
This paper presents the reliability and sensitivity analysis of a system with M primary units, W warm standby units, and R unreliable service stations, where warm standby units switching to the primary state might fail. Failure times of primary and warm standby units are assumed to have exponential distributions, and service times of the failed units are exponentially distributed. In addition, breakdown times and repair times of the service stations also follow exponential distributions. Expressions for the system reliability, R_Y(t), and the mean time to system failure, MTTF, are derived. Sensitivity and relative-sensitivity analyses of the system reliability and the mean time to failure with respect to the system parameters are also investigated.
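With all lifetimes exponential, models of this kind reduce to an absorbing continuous-time Markov chain, and the MTTF from each working state solves a small linear system over the transient block of the generator. The sketch below shows that generic mechanism on a toy two-state cold-standby system, not the paper's specific M/W/R model:

```python
import numpy as np

def mttf(Q_transient):
    """Mean time to absorption of a CTMC from each transient state:
    solve Q t = -1 over the transient block of the generator matrix.
    (Generic sketch of how closed-form MTTF expressions are obtained.)"""
    n = Q_transient.shape[0]
    return np.linalg.solve(Q_transient, -np.ones(n))

# Toy example: one active unit (failure rate lam) with one cold standby and
# a perfect switch. State 0 = both available, state 1 = standby active,
# absorption = system failure. Expect MTTF from state 0 of 2/lam.
lam = 0.5
Q = np.array([[-lam,  lam],
              [ 0.0, -lam]])
t = mttf(Q)
print(t)                                  # [4.0, 2.0] for lam = 0.5
```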
Statistical distributions of extreme dry spell in Peninsular Malaysia
NASA Astrophysics Data System (ADS)
Zin, Wan Zawiah Wan; Jemain, Abdul Aziz
2010-11-01
Statistical distributions of annual extreme (AE) series and partial duration (PD) series for dry-spell events are analyzed for a database of daily rainfall records from 50 rain-gauge stations in Peninsular Malaysia, with recording periods extending from 1975 to 2004. The three-parameter generalized extreme value (GEV) and generalized Pareto (GP) distributions are considered to model both series. In both cases, the parameters of the two distributions are fitted by means of the L-moments method, which provides a robust estimation of the parameters. The goodness-of-fit (GOF) between the empirical data and the theoretical distributions is then evaluated by means of the L-moment ratio diagram and several goodness-of-fit tests for each of the 50 stations. It is found that for the majority of stations, the AE and PD series are well fitted by the GEV and GP models, respectively. Based on the models that have been identified, we can reasonably predict the risks associated with extreme dry spells for various return periods.
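The L-moments method referred to above is easy to illustrate for the GP case: with location fixed at zero, the first two sample L-moments determine the shape and scale in closed form. The snippet below (Hosking's k convention, F(x) = 1 - (1 - kx/a)^(1/k)) is our illustration of the estimation idea, not the paper's code:

```python
import numpy as np

def gp_lmom_fit(x):
    """Fit a generalized Pareto distribution (location 0) by L-moments.
    For this GP, l1 = a/(1+k) and l2 = a/((1+k)(2+k)), giving the
    closed-form estimators below."""
    x = np.sort(x)
    n = len(x)
    b0 = x.mean()                                # PWM beta_0
    b1 = (np.arange(n) / (n - 1.0) * x).mean()   # PWM beta_1 (unbiased)
    l1, l2 = b0, 2.0 * b1 - b0                   # first two L-moments
    k = l1 / l2 - 2.0                            # shape
    a = (1.0 + k) * l1                           # scale
    return k, a

# Exponential data are GP with shape k = 0 and scale equal to the mean,
# so the fit should recover roughly (0, 3).
rng = np.random.default_rng(2)
x = rng.exponential(3.0, 20000)
k, a = gp_lmom_fit(x)
print(round(k, 2), round(a, 2))
```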
Vegetation dynamics and responses to climate change and human activities in Central Asia.
Jiang, Liangliang; Guli Jiapaer; Bao, Anming; Guo, Hao; Ndayisaba, Felix
2017-12-01
Knowledge of the current changes and dynamics of different types of vegetation in relation to climatic changes and anthropogenic activities is critical for developing adaptation strategies to address the challenges posed by climate change and human activities for ecosystems. Based on a regression analysis and the Hurst exponent index method, this research investigated the spatial and temporal characteristics and relationships between vegetation greenness and climatic factors in Central Asia using the Normalized Difference Vegetation Index (NDVI) and gridded high-resolution station (land) data for the period 1984-2013. Further analysis distinguished between the effects of climatic change and those of human activities on vegetation dynamics by means of a residual analysis trend method. The results show that vegetation pixels significantly decreased for shrubs and sparse vegetation compared with those for the other vegetation types and that the degradation of sparse vegetation was more serious in the Karakum and Kyzylkum Deserts, the Ustyurt Plateau and the wetland delta of the Large Aral Sea than in other regions. The Hurst exponent results indicated that forests are more sustainable than grasslands, shrubs and sparse vegetation. Precipitation is the main factor affecting vegetation growth in the Kazakhskiy Melkosopochnik. Moreover, temperature is a controlling factor that influences the seasonal variation of vegetation greenness in the mountains and the Aral Sea basin. Drought is the main factor affecting vegetation degradation as a result of both increased temperature and decreased precipitation in the Kyzylkum Desert and the northern Ustyurt Plateau. 
The residual analysis highlighted that the degradation of sparse vegetation and of some shrubs in the southern part of the Karakum Desert, the southern Ustyurt Plateau and the wetland delta of the Large Aral Sea was mainly triggered by human activities: the excessive exploitation of water resources in the upstream areas of the Amu Darya basin and oil and natural gas extraction in the southern part of the Karakum Desert and the southern Ustyurt Plateau. The results also indicated that after the collapse of the Soviet Union, abandoned pastures gave rise to increased vegetation in eastern Kazakhstan, Kyrgyzstan and Tajikistan, and abandoned croplands reverted to grasslands in northern Kazakhstan, leading to a decrease in cropland greenness. Shrubs and sparse vegetation were extremely sensitive to short-term climatic variations, and our results demonstrated that these vegetation types were the most seriously degraded by human activities. Therefore, regional governments should strive to restore vegetation to sustain this fragile arid ecological environment.
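The Hurst exponent used above to judge the sustainability (persistence) of vegetation trends is commonly estimated by rescaled-range (R/S) analysis. The snippet below is a simplified R/S estimator of our own, not the authors' implementation; H near 0.5 indicates no persistence, H above 0.5 persistent trends:

```python
import numpy as np

def hurst_rs(x):
    """Rescaled-range (R/S) estimate of the Hurst exponent: the slope of
    log(mean R/S) versus log(segment length)."""
    n = len(x)
    sizes = [n // 2 ** i for i in range(6) if n // 2 ** i >= 16]
    log_size, log_rs = [], []
    for m in sizes:
        rs = []
        for start in range(0, n - m + 1, m):
            seg = x[start:start + m]
            dev = np.cumsum(seg - seg.mean())
            r = dev.max() - dev.min()            # range of cumulative deviations
            s = seg.std()
            if s > 0:
                rs.append(r / s)
        log_size.append(np.log(m))
        log_rs.append(np.log(np.mean(rs)))
    return np.polyfit(log_size, log_rs, 1)[0]    # slope ~ Hurst exponent

rng = np.random.default_rng(3)
h = hurst_rs(rng.standard_normal(4096))
print(round(h, 2))                               # white noise: near 0.5
```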
Main shock and aftershock records of the 1999 Izmit and Duzce, Turkey earthquakes
Celebi, M.; Akkar, Sinan; Gulerce, U.; Sanli, A.; Bundock, H.; Salkin, A.
2001-01-01
The August 17, 1999 Izmit (Turkey) earthquake (Mw=7.4) will be remembered as one of the largest earthquakes of recent times to affect a large urban environment (U.S. Geological Survey, 1999). This significant event was followed by many significant aftershocks and another main event (Mw=7.2) that occurred on November 12, 1999 near Duzce (Turkey). The shaking that caused the widespread damage and destruction was recorded by a handful of accelerographs (~30) in the earthquake area operated by different networks. The characteristics of these records show that the recorded peak accelerations, shown in Figure 1, even those from near-field stations, are smaller than expected (Çelebi, 1999, 2000). Following this main event, several organizations from Turkey, Japan, France and the USA deployed temporary accelerographs and other aftershock recording hardware. Thus, the number of recording stations in the earthquake-affected area was quadrupled (~130). As a result, as seen in Figure 2, smaller-magnitude aftershocks yielded larger peak accelerations, indicating that, because of the sparse networks, recordings of larger motions during the main shock of August 17, 1999 were possibly missed.
Funk, Chris; Peterson, Pete; Landsfeld, Martin; Pedreros, Diego; Verdin, James; Shukla, Shraddhanand; Husak, Gregory; Rowland, James; Harrison, Laura; Hoell, Andrew; Michaelsen, Joel
2015-01-01
The Climate Hazards group Infrared Precipitation with Stations (CHIRPS) dataset builds on previous approaches to ‘smart’ interpolation techniques and high resolution, long period of record precipitation estimates based on infrared Cold Cloud Duration (CCD) observations. The algorithm i) is built around a 0.05° climatology that incorporates satellite information to represent sparsely gauged locations, ii) incorporates daily, pentadal, and monthly 1981-present 0.05° CCD-based precipitation estimates, iii) blends station data to produce a preliminary information product with a latency of about 2 days and a final product with an average latency of about 3 weeks, and iv) uses a novel blending procedure incorporating the spatial correlation structure of CCD-estimates to assign interpolation weights. We present the CHIRPS algorithm, global and regional validation results, and show how CHIRPS can be used to quantify the hydrologic impacts of decreasing precipitation and rising air temperatures in the Greater Horn of Africa. Using the Variable Infiltration Capacity model, we show that CHIRPS can support effective hydrologic forecasts and trend analyses in southeastern Ethiopia.
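The station-blending step described above can be caricatured in one dimension: nudge a satellite-based background field toward gauge values with weights that decay away from each station. The toy below is a heavily simplified analogue of our own design; the real CHIRPS procedure assigns interpolation weights from the spatial correlation structure of the CCD estimates:

```python
import numpy as np

def blend(background, stn_idx, stn_vals, decay=2.0):
    """Blend point station values into a gridded background estimate using
    distance-decaying weights (toy 1-D analogue, not the CHIRPS algorithm)."""
    x = np.arange(len(background), dtype=float)
    out = background.copy()
    w_sum = np.zeros_like(out)
    adj = np.zeros_like(out)
    for i, v in zip(stn_idx, stn_vals):
        w = np.exp(-np.abs(x - i) / decay)       # weight decays with distance
        adj += w * (v - background[i])           # push toward the gauge value
        w_sum += w
    return out + adj / np.maximum(w_sum, 1.0)    # limit influence far from gauges

bg = np.full(10, 5.0)                            # satellite-only estimate
blended = blend(bg, stn_idx=[2], stn_vals=[8.0])
print(blended[2], blended[9])                    # matches gauge; ~background far away
```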
Survey views of the Mir space station taken during rendezvous
1997-01-16
STS081-709-061 (12-22 Jan. 1997) --- As recorded while Space Shuttle Atlantis was docked with Russia's Mir Space Station, this 70mm camera's frame shows South Africa's wine growing country (immediately right of the solar panel) in a southwest-looking perspective. Most of the population in the Western Cape Province, as it is known, is clustered in the wet extreme south of the country, identified here with denser cloud masses. This is the Mediterranean region of the country, experiencing summer drought when the photograph was taken. Cape Town lies immediately right of the solar panel and the Swartland wheat country to the left. The darker green areas are more heavily vegetated regions on the continental escarpment. The large bay in the region is the remote St. Helena Bay (Africa's southernmost point, Cape Agulhas, lies behind the solar panel). The cloud-free parts of the country in the foreground are the sparsely populated semidesert known as the Karroo, a quiet region to which people retire both for its rare dry climate and its beauty.
Funk, Chris; Peterson, Pete; Landsfeld, Martin; Pedreros, Diego; Verdin, James; Shukla, Shraddhanand; Husak, Gregory; Rowland, James; Harrison, Laura; Hoell, Andrew; Michaelsen, Joel
2015-01-01
The Climate Hazards group Infrared Precipitation with Stations (CHIRPS) dataset builds on previous approaches to ‘smart’ interpolation techniques and high resolution, long period of record precipitation estimates based on infrared Cold Cloud Duration (CCD) observations. The algorithm i) is built around a 0.05° climatology that incorporates satellite information to represent sparsely gauged locations, ii) incorporates daily, pentadal, and monthly 1981-present 0.05° CCD-based precipitation estimates, iii) blends station data to produce a preliminary information product with a latency of about 2 days and a final product with an average latency of about 3 weeks, and iv) uses a novel blending procedure incorporating the spatial correlation structure of CCD-estimates to assign interpolation weights. We present the CHIRPS algorithm, global and regional validation results, and show how CHIRPS can be used to quantify the hydrologic impacts of decreasing precipitation and rising air temperatures in the Greater Horn of Africa. Using the Variable Infiltration Capacity model, we show that CHIRPS can support effective hydrologic forecasts and trend analyses in southeastern Ethiopia. PMID:26646728
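The correlation-weighted blending step described above can be illustrated with a minimal sketch. The exponential correlation model, its length scale, and the station values below are hypothetical, not the operational CHIRPS choices:

```python
import numpy as np

def blend_with_stations(background, bg_at_stations, station_obs,
                        dists_km, corr_length_km=150.0):
    """Nudge a background (CCD-based) estimate toward nearby station
    observations, weighting each station by an assumed exponential
    spatial correlation model. corr_length_km is illustrative, not
    the CHIRPS operational value."""
    w = np.exp(-np.asarray(dists_km, float) / corr_length_km)
    innovations = np.asarray(station_obs, float) - np.asarray(bg_at_stations, float)
    return background + np.sum(w * innovations) / np.sum(w)

# Hypothetical pixel: 10 mm background, two stations both 3 mm wetter
est = blend_with_stations(10.0, [9.0, 11.0], [12.0, 14.0], [50.0, 200.0])
```

The nearer station gets the larger weight, so the adjustment decays with distance from gauged locations.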
Code of Federal Regulations, 2013 CFR
2013-10-01
...), and (d) of this section. (b) Local distribution service (LDS) station. A fixed CARS station used... headend of a cable television system. (d) Cable Television Relay Service PICKUP station. A land mobile.... For other definitions, see part 76 (Cable Television Service) of this chapter. (a) Cable television...
Code of Federal Regulations, 2010 CFR
2010-10-01
...), and (d) of this section. (b) Local distribution service (LDS) station. A fixed CARS station used... headend of a cable television system. (d) Cable Television Relay Service PICKUP station. A land mobile.... For other definitions, see part 76 (Cable Television Service) of this chapter. (a) Cable television...
Code of Federal Regulations, 2011 CFR
2011-10-01
...), and (d) of this section. (b) Local distribution service (LDS) station. A fixed CARS station used... headend of a cable television system. (d) Cable Television Relay Service PICKUP station. A land mobile.... For other definitions, see part 76 (Cable Television Service) of this chapter. (a) Cable television...
Code of Federal Regulations, 2012 CFR
2012-10-01
...), and (d) of this section. (b) Local distribution service (LDS) station. A fixed CARS station used... headend of a cable television system. (d) Cable Television Relay Service PICKUP station. A land mobile.... For other definitions, see part 76 (Cable Television Service) of this chapter. (a) Cable television...
Code of Federal Regulations, 2014 CFR
2014-10-01
...), and (d) of this section. (b) Local distribution service (LDS) station. A fixed CARS station used... headend of a cable television system. (d) Cable Television Relay Service PICKUP station. A land mobile.... For other definitions, see part 76 (Cable Television Service) of this chapter. (a) Cable television...
Cucurbit germplasm collections at the North Central Regional Plant Introduction Station
USDA-ARS?s Scientific Manuscript database
The North Central Regional Plant Introduction Station (NCRPIS) in Ames, Iowa, USA is one of four primary Plant Introduction Stations in the National Plant Germplasm System (NPGS), and has responsibility for maintenance, regeneration, characterization, and distribution of the NPGS Cucumis and Cucurbi...
47 CFR 74.1236 - Emission and bandwidth.
Code of Federal Regulations, 2011 CFR
2011-10-01
..., AUXILIARY, SPECIAL BROADCAST AND OTHER PROGRAM DISTRIBUTIONAL SERVICES FM Broadcast Translator Stations and FM Broadcast Booster Stations § 74.1236 Emission and bandwidth. (a) The license of a station...) apply. (b) Standard width FM channels will be assigned and the transmitting apparatus shall be operated...
Mass Balance Modelling of Saskatchewan Glacier, Canada Using Empirically Downscaled Reanalysis Data
NASA Astrophysics Data System (ADS)
Larouche, O.; Kinnard, C.; Demuth, M. N.
2017-12-01
Observations show that glaciers around the world are retreating. As sites with long-term mass balance observations are scarce, models are needed to reconstruct glacier mass balance and assess its sensitivity to climate. In regions with discontinuous and/or sparse meteorological data, high-resolution climate reanalysis data provide a convenient alternative to in situ weather observations, but can also suffer from strong bias due to the spatial and temporal scale mismatch. In this study we used data from the North American Regional Reanalysis (NARR) project with a 30 x 30 km spatial resolution and 3-hour temporal resolution to produce the meteorological forcings needed to drive a physically-based, distributed glacier mass balance model (DEBAM, Hock and Holmgren 2005) for the historical period 1979-2016. A two-year record from an automatic weather station (AWS) operated on Saskatchewan Glacier (2014-2016) was used to downscale air temperature, relative humidity, wind speed and incoming solar radiation from the nearest NARR gridpoint to the glacier AWS site. A homogenized historical precipitation record was produced using data from two nearby, low-elevation weather stations and used to downscale the NARR precipitation data. Three bias correction methods were applied (scaling, delta and empirical quantile mapping - EQM) and evaluated using split-sample cross-validation. The EQM method gave better results for precipitation and for air temperature. Only a slight improvement in the relative humidity was obtained using the scaling method, while none of the methods improved the wind speed. The latter correlates poorly with AWS observations, probably because the local glacier wind is decoupled from the larger scale NARR wind field. The downscaled data was used to drive the DEBAM model in order to reconstruct the mass balance of Saskatchewan Glacier over the past 30 years.
The model was validated using recent snow thickness measurements and previously published geodetic mass balance estimates.
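The empirical quantile mapping (EQM) correction that performed best can be sketched as follows. This is a standard textbook form, not the study's exact implementation, and the synthetic series with a fixed +3 °C bias is an illustrative assumption:

```python
import numpy as np

def eqm_correct(model_hist, obs_hist, model_new, n_quantiles=100):
    """Empirical quantile mapping: map each new model value to the
    observed value at the same empirical quantile."""
    q = np.linspace(0, 1, n_quantiles)
    mq = np.quantile(model_hist, q)   # model quantiles
    oq = np.quantile(obs_hist, q)     # observed quantiles
    # locate new values in the model distribution, read off obs values
    return np.interp(model_new, mq, oq)

rng = np.random.default_rng(0)
obs = rng.normal(5.0, 2.0, 1000)      # e.g. AWS air temperature (degC)
model = obs + 3.0                     # reanalysis with a constant +3 degC bias
corrected = eqm_correct(model, obs, model)
```

With a purely additive bias, the mapping recovers the observed series exactly; real reanalysis biases vary by quantile, which is exactly what EQM is designed to handle.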
NASA Astrophysics Data System (ADS)
Xin, L.; Kawakatsu, H.; Takeuchi, N.
2017-12-01
Differential travel time residuals of PKPbc and PKPdf for the path from South Sandwich Islands (SSI) to Alaska are usually used to constrain anisotropy of the western hemisphere of the Earth's inner-core. For this polar path, it has been found that PKPbc-df differential residuals are generally anomalously larger than data that sample other regions, and also show strong lateral variation. Due to the sparse distribution of seismic stations in Alaska in earlier years, previous studies were unable to propose a good model to explain this particular data set. Using data recorded by the current dense station coverage in Alaska for SSI earthquakes, we reexamine the anomalous behavior of core phase PKPbc-df differential travel times and investigate its origin. The data sample the inner-core for the polar paths, as well as the lowermost mantle beneath Alaska. Our major observations are: (1) fractional travel time residuals of PKPbc-df increase rapidly within 2° (up to 1%). (2) A clear shift of the residual pattern can be seen for earthquakes with different locations. (3) The residuals show systematic lateral variation: in the northern part, no steep increase in residuals is seen. A sharp lateral structural boundary with a P-wave velocity contrast of about 3% at the lowermost mantle beneath East Alaska is invoked to explain the steep increase of the observed residuals. By combining the effects of a uniformly anisotropic inner-core and the heterogeneity, the observed residual patterns can be well reproduced. This high velocity anomaly may be related to an ancient subducted slab. Lateral variation of the PKPbc-df residuals suggests that the heterogeneity layer is not laterally continuous and may terminate beneath Northeastern Alaska. We also conclude that core phases may be strongly affected by heterogeneities at the lowermost mantle, and should be carefully treated if they are used to infer the inner-core structure.
NASA Astrophysics Data System (ADS)
Key, K.; Bedrosian, P.; Egbert, G. D.; Livelybrooks, D.; Parris, B. A.; Schultz, A.
2015-12-01
The Magnetotelluric Observations of Cascadia using a Huge Array (MOCHA) experiment was carried out to study the nature of the seismogenic locked zone and the down-dip transition zone where episodic tremor and slip (ETS) originates. This amphibious magnetotelluric (MT) data set consists of 8 offshore and 15 onshore profiles crossing from just seaward of the trench to the western front of the Cascades, with a north-south extent spanning from central Oregon to central Washington. The 71 offshore stations and the 75 onshore stations (red triangles in the image below) fit into the broader context of the more sparsely sampled EarthScope MT transportable array (black triangles) and other previous and pending MT surveys (other symbols). These data allow us to image variations in electrical conductivity along distinct segments of the Cascadia subduction zone defined by ETS recurrence intervals. Since bulk conductivity in this setting depends primarily on porosity, fluid content and temperature, the conductivity images created from the MOCHA data offer unique insights into fluid processes in the crust and mantle, and how the distribution of fluid along the plate interface relates to observed variations in ETS behavior. This abstract explores the across- and along-strike variations in the incoming plate and the shallow offshore forearc. In particular we examine how conductivity variations, and the inferred fluid content and porosity variations, are related to tectonic segmentation, seismicity and deformation patterns, and arc magma variations along-strike. Porosity inferred in the forearc crust can be interpreted in conjunction with active and passive seismic imaging results and may provide new insights into the origin of recently observed extremely high heat flow values. A companion abstract (Parris et al.)
examines the deeper conductivity structure of the locked and ETS zones along the plate interface in order to identify correlations between ETS occurrence rates and inferred fluid concentrations.
Long-term changes (1980-2003) in total ozone time series over Northern Hemisphere midlatitudes
NASA Astrophysics Data System (ADS)
Białek, Małgorzata
2006-03-01
Long-term changes in total ozone time series for the Arosa, Belsk, Boulder and Sapporo stations are examined. For each station we analyze time series of the following statistical characteristics of the distribution of daily ozone data: seasonal mean, standard deviation, maximum and minimum of total daily ozone values for all seasons. An iterative statistical model is proposed to estimate trends and long-term changes in the statistical distribution of the daily total ozone data. The trends are calculated for the period 1980-2003. We observe a lessening of negative trends in the seasonal means as compared to those calculated by WMO for 1980-2000. We discuss the possibility of a change in the distribution shape of daily ozone data using the Kolmogorov-Smirnov test and comparing trend values in the seasonal mean, standard deviation, maximum and minimum time series for the selected stations and seasons. A distribution shift toward lower values without a change in the distribution shape is suggested, with the following exceptions: the spreading of the distribution toward lower values for Belsk during winter, and no decisive result for Sapporo and Boulder in summer.
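A two-sample Kolmogorov-Smirnov statistic of the kind used above to compare distribution shapes can be computed directly from the empirical CDFs; the ozone-like numbers below are synthetic:

```python
import numpy as np

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical
    distance between the two empirical CDFs."""
    grid = np.sort(np.concatenate([x, y]))
    cdf_x = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    cdf_y = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return np.max(np.abs(cdf_x - cdf_y))

rng = np.random.default_rng(1)
a = rng.normal(300.0, 30.0, 500)   # synthetic daily total ozone, Dobson units
b_shift = a - 15.0                 # pure shift toward lower values, shape unchanged
d_shift = ks_statistic(a, b_shift)
```

A shift of half a standard deviation gives a clearly nonzero statistic even though the distribution shape is identical, which is why the test is paired here with trend comparisons of the moments and extremes.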
Wang, Kang; Zhang, Tingjun; Zhang, Xiangdong; ...
2017-09-13
Historically, in-situ measurements have been notoriously sparse over the Arctic. As a consequence, the existing gridded data of Surface Air Temperature (SAT) may have large biases in estimating the warming trend in this region. Using data from an expanded monitoring network with 31 stations in the Alaskan Arctic, we demonstrate that the SAT has increased by 2.19 °C in this region, or at a rate of 0.23 °C/decade during 1921-2015. Meanwhile, we found that the SAT warmed at 0.71 °C/decade over 1998-2015, which is two to three times faster than the rate established from the gridded datasets. Focusing on the "hiatus" period 1998-2012 as identified by the Intergovernmental Panel on Climate Change (IPCC) report, the SAT has increased at 0.45 °C/decade, which captures more than 90% of the regional trend for 1951-2012. We suggest that sparse in-situ measurements are responsible for underestimation of the SAT change in the gridded datasets. It is likely that enhanced climate warming may also have happened in the other regions of the Arctic since the late 1990s but left undetected because of incomplete observational coverage.
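The decadal rates quoted above come from linear fits to station temperature series. A minimal least-squares sketch, with synthetic data tuned to the reported 0.23 °C/decade rate, looks like:

```python
import numpy as np

def decadal_trend(years, temps):
    """Least-squares linear trend, returned in degC per decade."""
    slope_per_year = np.polyfit(years, temps, 1)[0]
    return 10.0 * slope_per_year

# Synthetic station series, 1921-2015: 0.023 degC/yr trend plus noise
years = np.arange(1921, 2016)
rng = np.random.default_rng(2)
temps = -12.0 + 0.023 * (years - 1921) + rng.normal(0.0, 0.5, years.size)
trend = decadal_trend(years, temps)
```

Over a 95-year record the noise averages out and the fitted trend lands close to the imposed rate; over short windows like 1998-2012 the same estimator is far more sensitive to sampling, which is the paper's point about sparse networks.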
Scaling an in situ network for high resolution modeling during SMAPVEX15
NASA Astrophysics Data System (ADS)
Coopersmith, E. J.; Cosh, M. H.; Jacobs, J. M.; Jackson, T. J.; Crow, W. T.; Holifield Collins, C.; Goodrich, D. C.; Colliander, A.
2015-12-01
Among the greatest challenges within the field of soil moisture estimation is that of scaling sparse point measurements within a network to produce higher resolution map products. Large-scale field experiments present an ideal opportunity to develop methodologies for this scaling, by coupling in situ networks, temporary networks, and aerial mapping of soil moisture. During the Soil Moisture Active Passive Validation Experiments in 2015 (SMAPVEX15) in and around the USDA-ARS Walnut Gulch Experimental Watershed and LTAR site in southeastern Arizona, USA, a high density network of soil moisture stations was deployed across a sparse, permanent in situ network in coordination with intensive soil moisture sampling and an aircraft campaign. This watershed is also densely instrumented with precipitation gauges (one gauge per 0.57 km2) to monitor the North American Monsoon System, which dominates the hydrologic cycle during the summer months in this region. Using the precipitation and soil moisture time series values provided, a physically-based model is calibrated that will provide estimates at the 3 km, 9 km, and 36 km scales. The results from this model will be compared with the point-scale gravimetric samples, aircraft-based sensor, and the satellite-based products retrieved from NASA's Soil Moisture Active Passive mission.
Wang, Kang; Zhang, Tingjun; Zhang, Xiangdong; Clow, Gary D.; Jafarov, Elchin E.; Overeem, Irina; Romanovsky, Vladimir; Peng, Xiaoqing; Cao, Bin
2017-01-01
Historically, in situ measurements have been notoriously sparse over the Arctic. As a consequence, the existing gridded data of surface air temperature (SAT) may have large biases in estimating the warming trend in this region. Using data from an expanded monitoring network with 31 stations in the Alaskan Arctic, we demonstrate that the SAT has increased by 2.19°C in this region, or at a rate of 0.23°C/decade during 1921–2015. Meanwhile, we found that the SAT warmed at 0.71°C/decade over 1998–2015, which is 2 to 3 times faster than the rate established from the gridded data sets. Focusing on the “hiatus” period 1998–2012 as identified by the Intergovernmental Panel on Climate Change (IPCC) report, the SAT has increased at 0.45°C/decade, which captures more than 90% of the regional trend for 1951–2012. We suggest that sparse in situ measurements are responsible for underestimation of the SAT change in the gridded data sets. It is likely that enhanced climate warming may also have happened in the other regions of the Arctic since the late 1990s but left undetected because of incomplete observational coverage.
NASA Technical Reports Server (NTRS)
Mckay, C. W.; Bown, R. L.
1985-01-01
The space station data management system involves networks of computing resources that must work cooperatively and reliably over an indefinite life span. This program requires a long schedule of modular growth and an even longer period of maintenance and operation. The development and operation of space station computing resources will involve a spectrum of systems and software life cycle activities distributed across a variety of hosts, an integration, verification, and validation host with test bed, and distributed targets. The requirement for the early establishment and use of an appropriate Computer Systems and Software Engineering Support Environment is identified. This environment will support the Research and Development Productivity challenges presented by the space station computing system.
Web Information Systems for Monitoring and Control of Indoor Air Quality at Subway Stations
NASA Astrophysics Data System (ADS)
Choi, Gi Heung; Choi, Gi Sang; Jang, Joo Hyoung
In crowded subway stations, indoor air quality (IAQ) is a key factor for ensuring the safety, health and comfort of passengers. In this study, a framework for a web-based information system in a VDN environment for monitoring and control of IAQ in subway stations is suggested. Since the physical variables that describe IAQ need to be closely monitored and controlled at multiple locations in subway stations, the concept of a distributed monitoring and control network using wireless media needs to be implemented. Connecting remote wireless sensor networks and device (LonWorks) networks to the IP network based on the concept of VDN can provide powerful, integrated, distributed monitoring and control performance, making a web-based information system possible.
Broadband seismology and the detection and verification of underground nuclear explosions
NASA Astrophysics Data System (ADS)
Tinker, Mark Andrew
1997-10-01
On September 24, 1996, President Clinton signed the Comprehensive Test Ban Treaty (CTBT), which bans the testing of all nuclear weapons thereby limiting their future development. Seismology is the primary tool used for the detection and identification of underground explosions and thus, will play a key role in monitoring a CTBT. The detection and identification of low yield explosions requires seismic stations at regional distances (<1500 km). However, because the regional wavefield propagates within the extremely heterogeneous crustal waveguide, the seismic waveforms are also very complicated. Therefore, it is necessary to have a solid understanding of how the phases used in regional discriminants develop within different tectonic regimes. Thus, the development of the seismic phases Pn and Lg, which compose the seismic discriminant Pn/Lg, within the western U.S. from the Non-Proliferation Experiment are evaluated. The most fundamental discriminant is event location as 90% of all seismic sources occur too deep within the earth to be unnatural. France resumed its nuclear testing program after a four year moratorium and conducted six tests during a five month period starting in September of 1995. Using teleseismic data, a joint hypocenter determination algorithm was used to determine the hypocenters of these six explosions. One of the most important problems in monitoring a CTBT is the detection and location of small seismic events. Although seismic arrays have become the central tool for event detection, in the context of a global monitoring treaty, there will be some dependence on sparse regional networks of three-component broadband seismic stations to detect low yield explosions. However, the full power of the data has not been utilized, namely using phases other than P and S. Therefore, the information in the surface wavetrain is used to improve the locations of small seismic events recorded on a sparse network in Bolivia. 
Finally, as a discrimination example in a complex region, P to S ratios are used to determine source parameters of the Mw 8.3 deep Bolivia earthquake.
NASA Astrophysics Data System (ADS)
Heim, B.; Beamish, A. L.; Walker, D. A.; Epstein, H. E.; Sachs, T.; Chabrillat, S.; Buchhorn, M.; Prakash, A.
2016-12-01
Ground data for the validation of satellite-derived terrestrial Essential Climate Variables (ECVs) at high latitudes are sparse. Also for regional model evaluation (e.g. climate models, land surface models, permafrost models), we lack accurate ranges of terrestrial ground data and face the problem of a large mismatch in scale. Within the German research programs `Regional Climate Change' (REKLIM) and the Environmental Mapping and Analysis Program (EnMAP), we conducted a study on ground data representativeness for vegetation-related variables within a monitoring grid at the Toolik Lake Long-Term Ecological Research station; the Toolik Lake station lies in the Kuparuk River watershed on the North Slope of the Brooks Mountain Range in Alaska. The Toolik Lake grid covers an area of 1 km2 containing eighty-five grid points spaced 100 meters apart. Moist acidic tussock tundra is the most dominant vegetation type within the grid. Eighty-five permanent 1 m2 plots were also established to be representative of the individual gridpoints. Researchers from the University of Alaska Fairbanks have undertaken assessments at these plots, including Leaf Area Index (LAI) and field spectrometry to derive the Normalized Difference Vegetation Index (NDVI). During summer 2016, we conducted field spectrometry and LAI measurements at selected plots during early, peak and late summer. We experimentally measured LAI on more spatially extensive Elementary Sampling Units (ESUs) to investigate the spatial representativeness of the permanent 1 m2 plots and to map ESUs for various tundra types. LAI measurements are potentially influenced by landscape-inherent microtopography, sparse vascular plant cover, and dead woody matter. From field spectrometer measurements, we derived a clear-sky mid-day Fraction of Absorbed Photosynthetically Active Radiation (FAPAR).
We will present the first data analyses comparing FAPAR and LAI, and maps of biophysically-focused ESUs for evaluation of the use of remote sensing data to estimate these ecosystem properties.
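The NDVI derived from the field spectrometry above is a simple band ratio; the reflectance values in this sketch are hypothetical:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index from red and
    near-infrared (NIR) surface reflectance."""
    red = np.asarray(red, float)
    nir = np.asarray(nir, float)
    return (nir - red) / (nir + red)

# Hypothetical canopy reflectances from a field spectrometer:
# 5% in the red and 45% in the NIR gives NDVI = 0.8
print(ndvi(0.05, 0.45))
```

Dense green vegetation absorbs red and scatters NIR strongly, pushing the index toward 1; the sparse vascular cover and dead woody matter noted above pull it down.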
Space Station Freedom power management and distribution design status
NASA Technical Reports Server (NTRS)
Javidi, S.; Gholdston, E.; Stroh, P.
1989-01-01
The design status of the power management and distribution electric power system for the Space Station Freedom is presented. The current design is a star architecture, which has been found to be the best approach for meeting the requirement to deliver 120 V dc to the user interface. The architecture minimizes mass and power losses while improving element-to-element isolation and system flexibility. The design is partitioned into three elements: energy collection, storage and conversion, system protection and distribution, and management and control.
Status of 20 kHz space station power distribution technology
NASA Technical Reports Server (NTRS)
Hansen, Irving G.
1988-01-01
Power Distribution on the NASA Space Station will be accomplished by a 20 kHz sinusoidal, 440 VRMS, single phase system. In order to minimize both system complexity and the total power conversion steps required, high frequency power will be distributed end-to-end in the system. To support the final design of flight power system hardware, advanced development and demonstrations have been made on key system technologies and components. The current status of this program is discussed.
Distributed fiber sparse-wideband vibration sensing by sub-Nyquist additive random sampling
NASA Astrophysics Data System (ADS)
Zhang, Jingdong; Zheng, Hua; Zhu, Tao; Yin, Guolu; Liu, Min; Bai, Yongzhong; Qu, Dingrong; Qiu, Feng; Huang, Xianbing
2018-05-01
The round trip time of the light pulse limits the maximum detectable vibration frequency response range of phase-sensitive optical time domain reflectometry (φ-OTDR). Unlike the uniform laser pulse interval in conventional φ-OTDR, we randomly modulate the pulse interval, so that an equivalent sub-Nyquist additive random sampling (sNARS) is realized for every sensing point of the long interrogation fiber. For a φ-OTDR system with 10 km sensing length, the sNARS method is optimized by theoretical analysis and Monte Carlo simulation, and the experimental results verify that a wideband sparse signal can be identified and reconstructed. Such a method can broaden the vibration frequency response range of φ-OTDR, which is of great significance in sparse-wideband-frequency vibration signal detection, such as rail track monitoring and metal defect detection.
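The core idea, that randomizing the pulse interval lets a tone above the mean-rate Nyquist limit be identified, can be demonstrated with a nonuniform-time periodogram. All parameters below are illustrative, not the paper's:

```python
import numpy as np

# Additive random sampling: each sample time is the previous one plus a
# mean interval and a random jitter. Mean rate 1 Hz -> uniform-sampling
# Nyquist limit of 0.5 Hz.
rng = np.random.default_rng(3)
mean_dt = 1.0
t = np.cumsum(mean_dt + rng.uniform(-0.4, 0.4, 512))

f0 = 0.7                          # tone ABOVE the 0.5 Hz uniform limit
x = np.cos(2 * np.pi * f0 * t)

# Periodogram on nonuniform times: aliases are smeared out by the
# randomized sampling, so the true tone still dominates.
freqs = np.arange(0.05, 1.2, 0.002)
power = np.abs(np.exp(-2j * np.pi * freqs[:, None] * t[None, :]) @ x)
f_peak = freqs[np.argmax(power)]
```

With uniform 1 Hz sampling the 0.7 Hz tone would alias onto 0.3 Hz; here the cumulative random jitter destroys the alias coherence and the spectral peak stays at the true frequency.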
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Heber, Gerd; Biswas, Rupak
2000-01-01
The Conjugate Gradient (CG) algorithm is perhaps the best-known iterative technique to solve sparse linear systems that are symmetric and positive definite. A sparse matrix-vector multiply (SPMV) usually accounts for most of the floating-point operations within a CG iteration. In this paper, we investigate the effects of various ordering and partitioning strategies on the performance of parallel CG and SPMV using different programming paradigms and architectures. Results show that for this class of applications, ordering significantly improves overall performance, that cache reuse may be more important than reducing communication, and that it is possible to achieve message passing performance using shared memory constructs through careful data ordering and distribution. However, a multi-threaded implementation of CG on the Tera MTA does not require special ordering or partitioning to obtain high efficiency and scalability.
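A CSR-format SPMV kernel inside a textbook CG loop, the combination profiled above, can be sketched as follows (a generic implementation, not the authors' code):

```python
import numpy as np

def spmv(indptr, indices, data, x):
    """Sparse matrix-vector multiply on CSR arrays -- the kernel that
    dominates the floating-point work of each CG iteration."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = np.dot(data[lo:hi], x[indices[lo:hi]])
    return y

def cg(indptr, indices, data, b, tol=1e-10, maxiter=200):
    """Textbook conjugate gradient for a symmetric positive definite
    system stored in CSR format."""
    x = np.zeros_like(b)
    r = b - spmv(indptr, indices, data, x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = spmv(indptr, indices, data, p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# 1-D Laplacian (tridiagonal 2,-1) on 4 points; exact solution is all ones
indptr = np.array([0, 2, 5, 8, 10])
indices = np.array([0, 1, 0, 1, 2, 1, 2, 3, 2, 3])
data = np.array([2.0, -1.0, -1.0, 2.0, -1.0, -1.0, 2.0, -1.0, -1.0, 2.0])
b = np.array([1.0, 0.0, 0.0, 1.0])
x_sol = cg(indptr, indices, data, b)
```

The gather `x[indices[lo:hi]]` is where ordering matters: a row ordering that clusters column indices improves cache reuse of `x`, which is the effect the paper measures.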
Improved parallel data partitioning by nested dissection with applications to information retrieval.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolf, Michael M.; Chevalier, Cedric; Boman, Erik Gunnar
The computational work in many information retrieval and analysis algorithms is based on sparse linear algebra. Sparse matrix-vector multiplication is a common kernel in many of these computations. Thus, an important related combinatorial problem in parallel computing is how to distribute the matrix and the vectors among processors so as to minimize the communication cost. We focus on minimizing the total communication volume while keeping the computation balanced across processes. In [1], the first two authors presented a new 2D partitioning method, the nested dissection partitioning algorithm. In this paper, we improve on that algorithm and show that it is a good option for data partitioning in information retrieval. We also show partitioning time can be substantially reduced by using the SCOTCH software, and quality improves in some cases, too.
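The communication-volume objective can be made concrete for the simplest case, a 1-D row partition; this sketch is illustrative and is not the nested dissection algorithm itself:

```python
def comm_volume_1d(indptr, indices, row_owner):
    """Total SpMV communication volume under a 1-D row partition:
    each vector entry x_j is sent once to every remote process that
    owns a row with a nonzero in column j (the input vector is
    assumed partitioned conformally with the rows)."""
    needed = {}  # column index -> set of processes that read x_j
    for i in range(len(indptr) - 1):
        p = row_owner[i]
        for j in indices[indptr[i]:indptr[i + 1]]:
            needed.setdefault(j, set()).add(p)
    return sum(len(procs - {row_owner[j]}) for j, procs in needed.items())

# 4x4 tridiagonal matrix in CSR, rows split between processes 0 and 1:
indptr = [0, 2, 5, 8, 10]
indices = [0, 1, 0, 1, 2, 1, 2, 3, 2, 3]
vol = comm_volume_1d(indptr, indices, [0, 0, 1, 1])
```

2-D methods like nested dissection partitioning generalize this by assigning individual nonzeros, not whole rows, which can cut the volume further on irregular matrices.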
Joint Inversion of Earthquake Source Parameters with local and teleseismic body waves
NASA Astrophysics Data System (ADS)
Chen, W.; Ni, S.; Wang, Z.
2011-12-01
In the classical source parameter inversion algorithm of CAP (Cut and Paste method, by Zhao and Helmberger), waveform data at near distances (typically less than 500 km) are partitioned into Pnl and surface waves to account for uncertainties in the crustal models and the different amplitude weights of body and surface waves. The classical CAP algorithm has proven effective for resolving source parameters (focal mechanisms, depth and moment) for earthquakes well recorded on a relatively dense seismic network. However, for regions covered with sparse stations, it is challenging to achieve precise source parameters. In this case, a moderate earthquake of ~M6 is usually recorded on only one or two local stations with epicentral distances less than 500 km. Fortunately, an earthquake of ~M6 can be well recorded on global seismic networks. Since the ray paths for teleseismic and local body waves sample different portions of the focal sphere, combining teleseismic and local body wave data helps constrain source parameters better. Here we present a new CAP method (CAPjoint), which exploits both teleseismic body waveforms (P and SH waves) and local waveforms (Pnl, Rayleigh and Love waves) to determine source parameters. For an earthquake in Nevada that is well recorded by a dense local network (USArray stations), we compare the results from CAPjoint with those from the traditional CAP method involving only local waveforms, and use bootstrapping statistics to show that the results derived by CAPjoint are stable and reliable. Even with only one local station included in the joint inversion, the accuracy of source parameters such as moment and strike is substantially improved.
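The bootstrap stability check can be sketched generically; the "strike" samples and their scatter below are synthetic, not values from the Nevada inversion:

```python
import numpy as np

def bootstrap_ci(stat_fn, data, n_boot=2000, seed=0):
    """Percentile bootstrap: resample records with replacement and
    return the 2.5th and 97.5th percentiles of the statistic."""
    rng = np.random.default_rng(seed)
    n = len(data)
    stats = [stat_fn(data[rng.integers(0, n, n)]) for _ in range(n_boot)]
    return np.percentile(stats, [2.5, 97.5])

# Synthetic per-resample strike estimates scattered around 210 degrees
rng = np.random.default_rng(4)
strikes = 210.0 + rng.normal(0.0, 5.0, 40)
lo, hi = bootstrap_ci(np.mean, strikes)
```

A narrow interval across resamples is the kind of evidence used to argue that the jointly inverted parameters are stable rather than artifacts of any single station.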
Nephin, Jessica; Juniper, S. Kim; Archambault, Philippe
2014-01-01
Diversity and community patterns of macro- and megafauna were compared on the Canadian Beaufort shelf and slope. Faunal sampling collected 247 taxa from 48 stations with box core and trawl gear over the summers of 2009–2011 between 50 and 1,000 m in depth. Of the 80 macrofaunal and 167 megafaunal taxa, 23% were uniques, present at only one station. Rare taxa were found to increase proportional to total taxa richness and differ between the shelf (< 100 m) where they tended to be sparse and the slope where they were relatively abundant. The macrofauna principally comprised polychaetes with nephtyid polychaetes dominant on the shelf and maldanid polychaetes (up to 92% in relative abundance/station) dominant on the slope. The megafauna principally comprised echinoderms with Ophiocten sp. (up to 90% in relative abundance/station) dominant on the shelf and Ophiopleura sp. dominant on the slope. Macro- and megafauna had divergent patterns of abundance, taxa richness (α diversity) and β diversity. A greater degree of macrofaunal than megafaunal variation in abundance, richness and β diversity was explained by confounding factors: location (east-west), sampling year and the timing of sampling with respect to sea-ice conditions. Change in megafaunal abundance, richness and β diversity was greatest across the depth gradient, with total abundance and richness elevated on the shelf compared to the slope. We conclude that megafaunal slope taxa were differentiated from shelf taxa, as faunal replacement not nestedness appears to be the main driver of megafaunal β diversity across the depth gradient. PMID:25007347
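The α/β diversity decomposition can be computed from per-station taxa sets. Whittaker's multiplicative form is used here as an assumption (the study may use a different β metric), and the two-station example is hypothetical:

```python
import numpy as np

def alpha_beta_gamma(station_taxa):
    """Whittaker's multiplicative decomposition: alpha is the mean
    per-station richness, gamma is the pooled richness across
    stations, and beta = gamma / alpha."""
    alpha = np.mean([len(s) for s in station_taxa])
    gamma = len(set().union(*station_taxa))
    return alpha, gamma / alpha, gamma

# Hypothetical stations sharing one taxon: taxon replacement between
# stations, not nestedness, is what inflates beta
shelf = [{"Ophiocten sp.", "Nephtys sp."}, {"Ophiocten sp.", "Maldane sp."}]
a, b, g = alpha_beta_gamma(shelf)
```

If one station's taxa were a subset of the other's (nestedness), β would be driven by richness differences instead of replacement, which is the distinction the conclusion draws.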
The Caucasus Seismic Network (CNET): Seismic Structure of the Greater and Lesser Caucasus
NASA Astrophysics Data System (ADS)
Sandvol, E. A.; Mackey, K. G.; Nabelek, J.; Yetermishli, G.; Godoladze, T.; Babayan, H.; Malovichko, A.
2017-12-01
The Greater Caucasus are a portion of the Alpine-Himalayan mountain belt that has undergone rapid uplift in the past 5 million years, making the range a unique natural laboratory for studying the early stages of orogenesis. Existing, relatively low-resolution seismic velocity models of this region show contradictory lateral variability. Furthermore, recent waveform modeling of seismograms has clearly demonstrated the presence of deep earthquakes (with a maximum hypocentral depth of 175 km) below the Greater Caucasus. The detailed uppermost-mantle and crustal seismic structure of the region has remained largely unexplored, due in part to disparate data sets that have not yet been merged and to the sparse instrumentation of key areas. We have established collaborative agreements across the region. Building on these agreements, we recently deployed a major multi-national seismic array across the Greater Caucasus to address fundamental questions about the nature of continental deformation in this poorly understood region. Our seismic array has two components: (1) a grid of stations spanning the entire Caucasus and (2) two seismic transects, with stations spaced less than 10 km apart, that cross the Greater Caucasus. In addition to the temporary stations, we are working to integrate data from the national networks to produce high-resolution images of the seismic structure. Using data from over 106 new seismic stations in Azerbaijan, Armenia, Russia, and Georgia, we hope to gain a better understanding of the recent uplift (< 5 Ma) of the Greater Caucasus and the nature of seismogenic deformation in the region.
NASA Astrophysics Data System (ADS)
Baish, A. S.; Vivoni, E. R.; Payan, J. G.; Robles-Morua, A.; Basile, G. M.
2011-12-01
A distributed hydrologic model can help bring consensus among diverse stakeholders in regional flood planning by producing quantifiable sets of alternative futures. This value is especially acute in areas with high uncertainties in hydrologic conditions and sparse observations. In this study, we apply the Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator (tRIBS) in the Santa Catarina basin of Nuevo Leon, Mexico, where Hurricane Alex in July 2010 led to catastrophic flooding of the capital city of Monterrey. Distributed model simulations utilize best-available information on the regional topography, land cover, and soils obtained from Mexican government agencies or from analysis of remotely-sensed imagery from MODIS and ASTER. Furthermore, we developed meteorological forcing for the flood event based on multiple data sources, including three local gauge networks, satellite-based estimates from TRMM and PERSIANN, and the North American Land Data Assimilation System (NLDAS). Remotely-sensed data allowed us to quantify rainfall distributions in the upland, rural portions of the Santa Catarina that are sparsely populated and ungauged. Rural areas contributed significantly to the flood event and were consequently considered by stakeholders for flood control measures, including new reservoirs and upland vegetation management. Participatory modeling workshops with the stakeholders revealed a disconnect between urban and rural populations in their understanding of the hydrologic conditions of the flood event and the effectiveness of existing and potential flood control measures. Despite these challenges, the use of distributed flood forecasts developed within this participatory framework facilitated building consensus among diverse stakeholders and exploring alternative futures in the basin.
Probable LAGEOS contributions to a worldwide geodynamics control network
NASA Technical Reports Server (NTRS)
Bender, P. L.; Goad, C. C.
1979-01-01
The paper describes simulations performed on the contributions which LAGEOS laser ranging data can make to the establishment of a worldwide geodynamics control network. A distribution of 10 fixed ranging stations was assumed for most of the calculations, and a single 7-day arc was used, with measurements assumed to be made every 10 minutes in order to avoid artificially reducing the uncertainties through oversampling. Computer simulations were carried out in which the coordinates of the stations and improvements in the gravity field coefficients were solved for simultaneously. It is suggested that good accuracy for station coordinates can be expected, even with the present gravity field model uncertainties, if sufficient measurement accuracy is achieved at a reasonable distribution of stations. Further, it is found that even 2-cm range measurement errors would likely be the main source of station coordinate errors in retrospective analyses of LAGEOS ranging results five or six years from now.
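The core of such a covariance study is a least-squares solution for station coordinates from simulated range measurements. The sketch below illustrates the idea in miniature: recover one station's coordinates from noisy ranges to known satellite positions via Gauss-Newton iteration. The geometry, 2-cm noise level, and measurement count are invented for illustration; the actual LAGEOS analyses also solve for gravity field coefficients and orbit parameters, which this toy omits.

```python
import numpy as np

rng = np.random.default_rng(0)
true_station = np.array([1000.0, 2000.0, 500.0])      # unknown station coordinates (m)
sat_positions = rng.uniform(-1e7, 1e7, size=(60, 3))  # assumed-known satellite positions (m)

def ranges(station):
    """Geometric range from the station to each satellite position."""
    return np.linalg.norm(sat_positions - station, axis=1)

# Simulated observations with 2 cm range measurement noise
obs = ranges(true_station) + rng.normal(0.0, 0.02, 60)

# Gauss-Newton: linearize the range equations and iterate
est = np.zeros(3)
for _ in range(10):
    r = ranges(est)
    J = (est - sat_positions) / r[:, None]            # partials d(range)/d(station)
    est += np.linalg.lstsq(J, obs - r, rcond=None)[0]

print(np.abs(est - true_station))                     # coordinate errors (m)
```

With many well-distributed observations the coordinate errors come out well below the per-measurement noise, which is why the paper's conclusion hinges on measurement accuracy and station distribution rather than on the number of observations alone.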
34. Site Plan: Fort Custer Air Force Station, Fort Custer, ...
34. Site Plan: Fort Custer Air Force Station, Fort Custer, Michigan, Modification of Electrical Distribution, General Site Plan, USACOE, no date. - Fort Custer Military Reservation, P-67 Radar Station, .25 mile north of Dickman Road, east of Clark Road, Battle Creek, Calhoun County, MI
47 CFR 74.537 - Temporary authorizations.
Code of Federal Regulations, 2011 CFR
2011-10-01
..., AUXILIARY, SPECIAL BROADCAST AND OTHER PROGRAM DISTRIBUTIONAL SERVICES Aural Broadcast Auxiliary Stations... STL or intercity relay station operation which cannot be conducted in accordance with § 74.24. Such... intercity relay station must be made in accordance with the procedures of § 1.931(b) of this chapter. (c...
47 CFR 74.1263 - Time of operation.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., AUXILIARY, SPECIAL BROADCAST AND OTHER PROGRAM DISTRIBUTIONAL SERVICES FM Broadcast Translator Stations and FM Broadcast Booster Stations § 74.1263 Time of operation. (a) The licensee of an FM translator or... an FM translator or booster station is expected to provide a dependable service to the extent that...